FAQ SAP HANA SQL Optimization v1257
Symptom
SQL statements run for a long time or consume a high amount of resources in terms of memory and CPU.
Environment
SAP HANA
Cause
1. Where do I find information about SQL statement tuning provided by SAP?
2. Which indications exist for critical SQL statements?
3. Which prerequisites are helpful for SQL statement tuning?
4. How can I identify a specific SQL statement?
5. What is an expensive SQL statement?
6. How can time information in the SQL cache (M_SQL_PLAN_CACHE) be interpreted?
7. How can I determine the most critical SQL statements?
8. How can I determine and interpret basic performance information for a particular SQL statement?
9. Which options exist to understand the execution of a SQL statement in detail?
10. What are typical approaches to tune expensive SQL statements?
11. Are secondary indexes required to provide an optimal performance?
12. Which advanced features exist for tuning SQL statements?
13. Are there standard recommendations for specific SQL statements available?
14. Is it required to create optimizer statistics in order to support optimal execution plans?
15. Are all database operations recorded in the SQL cache (M_SQL_PLAN_CACHE)?
16. Can sorting be supported by an appropriate index?
17. Is it possible to capture bind values of prepared SQL statements?
18. How can the performance of data modifications be tuned?
19. Why are there significant differences between SQL statements on ABAP and SAP HANA side?
20. How does the EXPLAIN functionality work?
21. How can I identify the ABAP coding location related to a SQL statement?
22. Are there known issues with particularly large SQL statement texts?
23. How can details for prepared SQL statements be determined?
24. Are there special considerations for analyzing SQLScript procedures?
25. How can compatibility view accesses be tuned?
26. How can SLT specific performance issues be tuned?
27. How can SAP ABAP client copies be tuned?
28. Are there best practices for efficient application development on SAP HANA?
29. What is the root statement hash?
30. Are all SQL cache entries written to the history in HOST_SQL_PLAN_CACHE?
31. Where do I find details about the SAP HANA SQL optimizer?
32. Where can I find details about unfolding?
33. Is client related information like application name or application source always filled properly?
34. What are typical reasons for internal statement executions?
35. How can core data services (CDS) view accesses be optimized?
36. How can important optimizer decisions (e.g. for specific execution engines) be determined?
37. Which SAP HANA execution engines exist?
38. Is it possible to manually calculate the statement hash for a given string?
Resolution
SQL: "HANA_Configuration_MiniChecks" (SAP Notes 1969700, 1999993) returns a potentially critical issue (C = 'X') for one of the following individual checks:
The following identifiers can be used to reference SQL statements:

STATEMENT_HASH (global): The STATEMENT_HASH is derived from the SQL text, so it globally identifies a specific SQL statement.
STATEMENT_ID (per connection): The STATEMENT_ID is derived from the SQL text and the connection, so it can be used to distinguish identical SQL statements being executed in different connections. Nevertheless it can't be used to distinguish different executions of the same SQL statement in the same connection.
STATEMENT_EXECUTION_ID (per execution): The STATEMENT_EXECUTION_ID is unique for every statement execution, so even executions of the same SQL statement in the same connection can be distinguished.
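As an illustration, basic key figures for a single statement hash can be selected from the SQL cache. This is a sketch: the placeholder value must be replaced, the column selection is an example, and times in M_SQL_PLAN_CACHE are reported in microseconds.

```sql
-- Sketch: basic key figures for one statement hash from the SQL cache
-- ('<statement_hash>' is a placeholder to be replaced)
SELECT
  STATEMENT_HASH,
  EXECUTION_COUNT,
  TOTAL_EXECUTION_TIME,      -- microseconds
  TOTAL_LOCK_WAIT_DURATION   -- microseconds
FROM M_SQL_PLAN_CACHE
WHERE STATEMENT_HASH = '<statement_hash>';
```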
CURSOR: Contains the overall cursor time, i.e. from the start of the execution until the last package has been sent to the client; when the client processes the data in multiple fetches, the network and client time during these fetches is included in the cursor time.
- Mainly applies to SELECT and CALL operations, can be 0 for others (e.g. DML, DDL)
- If the client performs other tasks between fetches of data, the cursor time can be much higher than the SAP HANA server time.
- This can result in MVCC issues because old versions of data need to be kept until the execution is finished.
EXECUTION: Contains the execution time (open + fetch + lock wait + close) on SAP HANA server side, does not include the preparation time.
EXECUTION_OPEN: Includes the actual retrieval of data in case of column store accesses with early materialization.
EXECUTION_FETCH: Includes the actual retrieval of data in case of row store accesses or late materialization.
TABLE_LOAD: Contains the table load time during preparation, is part of the preparation time.
LOCK_WAIT: Contains the transaction lock wait time; internal locks are not included.
Usually long EXECUTION_OPEN or EXECUTION_FETCH times are caused by retrieving the data.
From a SQL tuning perspective the most important information is the total elapsed time of the SQL statement which is the sum of preparation time and execution time.
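The relationship above can be queried directly from the SQL cache. A minimal sketch, assuming the standard M_SQL_PLAN_CACHE columns (times in microseconds):

```sql
-- Sketch: total elapsed time = preparation time + execution time
SELECT TOP 20
  STATEMENT_HASH,
  TOTAL_PREPARATION_TIME + TOTAL_EXECUTION_TIME AS TOTAL_ELAPSED_TIME
FROM M_SQL_PLAN_CACHE
ORDER BY TOTAL_ELAPSED_TIME DESC;
```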
SQL trace: SAP HANA Studio, SQL, ABAP (DBACOCKPIT); see SAP Notes 2031647 and 2412519
Expensive statements trace: SAP HANA Studio, SQL, ABAP (DBACOCKPIT); see SAP Note 2180165
If you are interested in the top SQL statements in terms of memory consumption, you can activate both the expensive statements trace and the statement memory tracking (SPS 08 or higher, see SAP Note 1999997 -> "Is it possible to limit the memory that can be allocated by a single SQL statement?") and later on run SQL: "HANA_SQL_ExpensiveStatements" (SAP Note 1969700) with ORDER_BY = 'MEMORY'.
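A possible way to activate both features is sketched below; the parameter names follow SAP Notes 2180165 and 1999997, and the threshold value is only an example.

```sql
-- Expensive statements trace (threshold in microseconds, example: 1 s)
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET
  ('expensive_statement', 'enable') = 'true',
  ('expensive_statement', 'threshold_duration') = '1000000' WITH RECONFIGURE;

-- Statement memory tracking
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET
  ('resource_tracking', 'enable_tracking') = 'on',
  ('resource_tracking', 'memory_tracking') = 'on' WITH RECONFIGURE;
```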
The currently running SQL statements can be determined via SQL: "HANA_SQL_ActiveStatements" (SAP Note 1969700).
In all cases you get a list of SQL statements / STATEMENT_HASH values that can subsequently be analyzed in more detail.
8. How can I determine and interpret basic performance information for a particular SQL statement?
The following SQL statements available via SAP Note 1969700 can be used to collect further details for a given STATEMENT_HASH (to be specified in "Modification section" of the statements):
SQL: "HANA_SQL_ActiveProcedures" Procedure calls and SQL statements executed within procedures
SQL: "HANA_SQL_StatementHash_BindValues" Captured bind values from SQL cache in case of long running prepared SQL statements (SPS 08 and higher)
SQL: "HANA_SQL_StatementHash_DataCollector" Collection of various details related to a specific SQL statement including:
Key figures
Statement text
Bind values
SQL cache
View information
Table information
Index information
Column information
Partition information
Expensive statements trace
Executed statements trace
Transactional locks
Thread samples
OOM events
Active statements
Call stacks
Pinned SQL plans
Statement hints
SQL: "HANA_SQL_StatementHash_KeyFigures" Important key figures from SQL cache, for examples see below
SQL: "HANA_SQL_ExpensiveStatements" Important key figures from expensive statement trace (SAP Note 2180165)
SQL: "HANA_SQL_ExpensiveStatements_BindValues" Captured bind values from expensive statement trace (SAP Note 2180165)
Below you can find several typical output scenarios for SQL: "HANA_SQL_StatementHash_KeyFigures".
Scenario 1: Transactional lock waits
We can see that nearly the whole execution time is caused by lock wait time, so transactional locks (i.e. record or object locks) are responsible for the long runtime. Further transactional lock analysis can now be performed based on SAP Note 1999998.
Scenario 2: High number of executions
An elapsed time of 0.55 ms for 1 record is not particularly bad, and for most SQL statements no further action would be required. In this particular case, however, the number of executions is very high, so that overall the execution time of this SQL statement is significant. Ideally the number of executions is reduced from an application perspective. If this is not possible, further technical analysis should be performed. Really quick single row accesses can take less than 0.10 ms, so there may be options for further performance improvements (e.g. index design or table store changes).
Scenario 3: High elapsed time
An execution time of 284 ms for retrieving one row from a single table is definitely longer than expected and it is very likely that an improvement can be achieved. Further analysis is required to understand the root cause of the increased runtime.
Scenario 4: High elapsed time for DML operations, no lock wait time
An elapsed time of 10 ms for inserting a single record is quite high. If DML operations have an increased elapsed time it can be caused by internal locks that are e.g. linked to the blocking phase of savepoints. Further internal lock analysis can now be performed based on SAP Note 1999998.
Scenario 5: Many records
Reading about 200,000 records in less than 2 seconds is not a bad value. In the first place you should check from an application perspective if it is possible to reduce the result set or the number of executions. Apart from this there are typically also technical optimizations available to
further reduce the elapsed time (e.g. delta storage or table store optimizations).
Scenario 6: High Preparation Time
Preparation time is linked to the initial parsing and compilation. You can reduce this overhead by using bind variables rather than literals, so that SQL statements with the same structure need to be parsed only once. Furthermore, you can perform a more detailed analysis to understand why the preparation step takes longer than expected.
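To illustrate the effect of bind variables on preparation overhead (table and column names below are hypothetical):

```sql
-- With literals every distinct value creates a separate SQL cache entry
-- and triggers a new preparation:
SELECT * FROM "ZORDERS" WHERE "ORDER_ID" = '4711';
SELECT * FROM "ZORDERS" WHERE "ORDER_ID" = '4712';

-- With a bind variable one cached plan is prepared once and reused:
SELECT * FROM "ZORDERS" WHERE "ORDER_ID" = ?;
```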
Tool Details
Explain High-level information for SQL execution (e.g. joins, used row store indexes)
Thread sample analysis High-level thread state and lock type information (e.g. useful in case of waits for internal locks which are not reflected in the "lock wait time")
Performance trace Detailed insight in SQL execution including SQL plan and function profiling.
User-specific trace Granular trace information for configurable components of the SQL statement execution
SAP HANA Studio: Administration -> Trace configuration -> User-Specific Trace
Trace components depend on individual scenario, see SAP Note 2909779 ("User-specific trace") for typical trace components
High number of executions Check from application perspective if the number of executions can be reduced, e.g. by avoiding identical SELECTs or adjusting the application logic.
High number of selected records Check from application perspective if you can restrict the number of selected records by adding further selection conditions or modifying the application logic.
Check if you can reduce the amount of relevant data by archiving or cleanup of technical tables (SAP Note 2388483).
High number of selected columns: Always specify a targeted list of required columns rather than using "SELECT *". The performance penalty of "SELECT *" is particularly high in case of hundreds of columns or in case of partitioned tables (SAP Note 2044468), where a similar fixed overhead for column materialization is required for each relevant partition compared to a non-partitioned table.
The relevant methods in PlanViz (SAP Note 2119087 -> "PlanViz / Execution Trace") are "Materialize Results" or ProjectBufferOp.
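A sketch of this recommendation (table and column names are hypothetical):

```sql
-- Avoid: materializes every column of every relevant partition
SELECT * FROM "ZDOCUMENTS" WHERE "DOC_ID" = ?;

-- Prefer: targeted column list, only required columns are materialized
SELECT "DOC_ID", "DOC_DATE", "STATUS"
FROM "ZDOCUMENTS" WHERE "DOC_ID" = ?;
```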
High lock wait time due to record locks:
- Check from application perspective if you can reduce concurrent changes of the same records.
- Check from application perspective if you can reduce the critical time frame between the change operation and the next COMMIT.
High lock wait time due to object locks:
- Check if you can schedule critical offline DDL operations less often or at times of lower workload.
- Check if you can use online instead of offline DDL operations.
High total execution time, significant amount of thread samples pointing to internal lock waits: Check if you can reduce the internal lock wait time (SAP Note 1999998).
High system CPU consumption See SAP Note 2100040 and check for system CPU optimizations based on the related call stacks and executed operating system calls.
Execution time higher than expected, optimal index doesn't exist (table and column scan related thread methods and details like ClusterScanBvOutJob<BV>, ClusterScanBvOutJob<range>, ClusterScanVecOutJob<range>, HEX job running hex::cs::TableScanScheduleOp, HEX job running hex::operators::TableScanScheduleOp, IndirectPredScanBvOutJob<ScanRangesPredicate>, IndirectPredScanVecOutJob<ScanRangesBinSearchPredicate>, IndirectPredScanVecOutJob<ScanVectorBinSearchPredicate>, IndirectPredScanVecOutJob<ScanVectorPredicate>, IndirectScanBvOutJob<BV>, IndirectScanBvOutJob<range>, IndirectScanVecOutJob<BV>, IndirectScanVecOutJob<range>, JEJobReadIndexChunked, JobParallelMgetSearch, JobParallelPagedMgetSearch, PrefixedScanVecOutJob<range>, RlePredScanJob<ScanVectorBinSearchPredicate>(out=vector), RlePredScanJob<ScanVectorPredicate>(out=vector), RleScanBvOutJob<BV>, RleScanBvOutJob<range>, RleScanVecOutJob<BV>, RleScanVecOutJob<range>, scanWithoutIndex, searchDocumentsIterateDocidsParallel, SparseBvScanBvOutJob, SparseBvScanVecOutJob, SparsePredScanBvOutJob<ScanRangesPredicate>, SparsePredScanVecOutJob<ScanRangesPredicate>, SparsePredScanVecOutJob<ScanVectorBinSearchPredicate>, SparsePredScanVecOutJob<ScanVectorPredicate>, SparseRangeScanBvOutJob, SparseRangeScanVecOutJob, sparseSearch, sse_icc_lib::mgetSearchi_AVX2impl, sse_icc_lib::mgetSearchi_AVX, Worker Job hex::cs::DataVectorScanOrLookupOp):
- Check from an application perspective if you can adjust the database request so that existing indexes can be used efficiently.
- Create a new index or adjust an existing index so that the SQL statement can be processed efficiently:
  - Make sure that selective fields are contained in the index
  - Use indexes that don't contain more fields than specified in the WHERE clause
- See SAP Note 2321573 for more information.
- SAP Note 2516807 describes a problem where a FULL index can't be used in context of a compressed column (SAP HANA 1.00.122.10 - 1.00.122.11, <= 2.00.012.01 and 2.00.020).
- SAP Note 2914233 describes a bug in context of IN lists with decimal values where an existing index can't be used with SAP HANA <= 2.00.037.04 and <= 2.00.045.
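A minimal index sketch for this recommendation (table, column and index names are hypothetical):

```sql
-- Single-column index on the selective field of the WHERE clause
CREATE INDEX "ZDOCUMENTS~SEL" ON "ZDOCUMENTS" ("REFERENCE_NO");
```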
Expensive index joins (hex::cs::IndexJoinOp::run): If long runtime is observed in context of inefficient index joins, you can disable them on statement level using the NO_HEX_INDEX_JOIN hint (SAP Note 2142945). See also check ID C1500 ("HEX index join activity") described in SAP Note 2313619.
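A hedged example of applying this hint to an individual statement; the table and column names are hypothetical, the hint name is taken from SAP Note 2142945:

```sql
-- Disable HEX index joins for this statement only
SELECT H."ID", I."TEXT"
FROM "ZHEAD" H INNER JOIN "ZITEM" I ON H."ID" = I."ID"
WITH HINT (NO_HEX_INDEX_JOIN);
```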
Execution time higher than expected, negative impact by existing partitioning: Consider the following possible optimization approaches:
- Make sure that existing partitioning optimally supports the most important SQL statements (e.g. via partition pruning, load distribution and minimization of inter-host communication). See SAP Note 2044468 for more information.
- Define partitioning on as few columns as possible. A high number of columns used for the partitioning criteria (e.g. HASH partitioning on 10 primary key columns) can result in significant internal overhead during partition pruning evaluation. Call stacks containing QueryMediator::FilterProcessor::getPruningEntries, QueryMediator::PruningOptimization::getPruningEntries, QueryMediator::FilterProcessor::mergePruningEntries, QueryMediator::PruningOptimization::mergePruningEntries, TRexAPI::FilterExpression::insertPruningEntry or TRexAPI::Partitioning::PruningEntry::PruningEntry are good indications for this kind of overhead.
- For technical reasons partitioning can sometimes introduce performance overhead in combination with join engine accesses. In this case you can check if it is possible to eliminate the use of the join engine, e.g. by removing a DISTINCT aggregation.
- Use as few partitions as possible. A high number of partitions can result in significant overhead (e.g. threads with method getColumnStat (SAP Note 2114710) when column statistics need to be retrieved for each individual partition). Also materializing result column values (PlanViz methods like "Materialize Results" or ProjectBufferOp) has a certain overhead per partition containing data. Thus, reading a small amount of records from a higher number of partitions can be significantly more expensive than reading all records from a single partition or non-partitioned table.
- Try to avoid changes of records that require a remote uniqueness check or a partition move (i.e. changes of partitioning key columns or primary key columns), because a significant overhead is imposed. See SAP Note 2312769 for more information.
- UPSERT operations on partitioned tables can require significant time in module TrexStore::UdivListContainerMVCC::checkValidEqualSSN (SAP Note 2373312). Increasing the SAP profile parameter dbs/hdb/cmd_buffersize can reduce the overhead.
- Partitioning in context of data aging (SAP Note 2416490) may result in decreased performance of UPDATE / UPSERT operations (SAP Note 2387064) in module TRexAPI::TableUpdate::execute_update_partitioning_attribute. The related thread method is SearchPartJob.
- If it is technically possible to disable partitioning, you can consider undoing the partitioning.
- Starting with SAP HANA 1.00.122.00 OLTP accesses to partitioned tables are executed single-threaded, so the runtime can be increased compared to a parallelized execution (particularly in cases where a high amount of records is processed). You can set indexserver.ini -> [joins] -> single_thread_execution_for_partitioned_tables to "false" in order to allow parallelized processing (e.g. in context of COUNT DISTINCT performance on a partitioned table). See SAP Note 2620310 for more information. Setting this parameter to "false" is also recommended as a best practice in SAP Note 2600030.
- Optimizer estimations are sometimes performed based on a subset of the table partitions (default: 8). In case too many empty or nearly empty partitions exist, the estimation can be quite imprecise. Therefore it is good to keep the amount of (nearly) empty partitions small compared to the amount of significantly used partitions. If required, you can increase the number of sampled partitions using the parameter indexserver.ini -> [sql] -> compile_time_sampling_partitions = <number_of_sampled_partitions>. Be aware that higher values result in better estimations, but they can increase the parse times, so adjustments need to be done with care.
- Partitioning in tree specification notation (as e.g. used with BW/4HANA 2.0 SPS 04 and higher) can result in performance overhead with the OLAP engine. This issue is fixed with SAP HANA >= 2.00.048.03 and 2.00.054 (SAP Note 2966606).
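The parameter change for single-threaded OLTP accesses can be sketched as follows; evaluate SAP Notes 2620310 and 2600030 before changing it:

```sql
-- Allow parallelized OLTP processing on partitioned tables
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET
  ('joins', 'single_thread_execution_for_partitioned_tables') = 'false'
  WITH RECONFIGURE;
```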
Long runtimes with selective IN lists on row store tables: For row store tables the optimizer decides cost-based if an IN list is evaluated via index (CPBTREE INDEX SEARCH (IN)) or not (CPBTREE INDEX SEARCH). In case of selective IN lists that aren't considered by the optimizer during index processing, you can add the following hint:
OPTIMIZATION_LEVEL(RULE_BASED)
This is e.g. required in context of the statement hint delivery (SAP Note 2700051) for TRFCQIN so that it is guaranteed that the selective QNAME IN list is processed via the index.
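A sketch of applying the hint to the TRFCQIN scenario (column list abbreviated, bind values are placeholders):

```sql
-- Force rule-based optimization so the selective QNAME IN list
-- is evaluated via the index
SELECT "QNAME" FROM "TRFCQIN"
WHERE "MANDT" = ? AND "QNAME" IN (?, ?, ?)
WITH HINT (OPTIMIZATION_LEVEL(RULE_BASED));
```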
Long runtime with OR condition having selection conditions on both tables: If an OR concatenation is used and the terms reference columns of more than one table, a significant overhead can be caused by the fact that a cartesian product of both individual result sets is required in some cases.
If you face performance issues in this context, you can check if the problem disappears with a simplified SQL statement accessing only one table. If yes, you should check if you can avoid joins with OR concatenated selection conditions on both tables.
Long runtime with join conditions concatenated with OR: Join conditions concatenated with OR (e.g. "A.X = B.X1 AND A.Y = B.Y1 OR A.X = B.X2 AND A.Y = B.Y2") can impose a significant performance overhead up to SAP HANA SPS 11. In this situation you should either avoid this scenario from application side or consider an upgrade to SAP HANA >= SPS 12 where the optimized Hashed Disjunctive Join is available.
Long runtime with non-unique multi-column index on join columns: The SAP HANA join engine may disregard columns during a join if they are also specified as selection condition in the WHERE clause ("<column> = ?"). This is a typical constellation for the client column (MANDT, CLIENT, ...). As a consequence it can happen that a secondary index on this and other columns isn't used although it looks perfect. In order to avoid this scenario, you should create non-unique column store indexes as single-column indexes rather than adding columns like the client that provide limited benefit in terms of filtering. This issue doesn't apply to row store and unique / primary key indexes.
See SAP Note 2160391 for more information related to SAP HANA indexes.
Execution time higher than expected, significant portion for accessing delta storage:
- Make sure that the auto merge mechanism is properly configured. See SAP Note 2057046 for more details.
- Consider smart merges controlled by the application scenario to make sure that the delta storage is minimized before critical processing starts.
- The join engine considers only the main storage when deciding for using an MVCC bitmap. In case of a large and volatile delta storage the generation of the delta MVCC bitmap can be quite time consuming and the following call stack modules are visible:
  UnifiedTable::MVCCObject::generateOLAPBitmapMVCC
  JoinEvaluator::JEUtils::createFullSnapshot
  This behavior is improved with SAP HANA >= 1.00.122.21, >= 2.00.024.07 and >= 2.00.034.
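A smart merge can be requested by the application as sketched below; the table name is hypothetical, the syntax follows the SAP HANA SQL reference:

```sql
-- Ask SAP HANA to evaluate a smart merge for the table
-- before critical processing starts
MERGE DELTA OF "ZSALES_DOC" WITH PARAMETERS ('SMART_MERGE' = 'ON');
```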
Execution time slightly higher than expected: In general the number of tables in the row store should be kept on a small level, but under the following circumstances it is an option to check if the performance can be optimized by moving a table to the row store:
- Involved table located in column store and not too large (<= 2 GB)
- Many records with many columns selected or a very high number of quick accesses with small result sets performed
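Such a store change can be sketched as follows; the table name is hypothetical, and the impact should be tested carefully before using it in production:

```sql
-- Move a small, frequently accessed table to the row store
ALTER TABLE "ZSMALL_LOOKUP" ALTER TYPE ROW;
```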
Execution time sporadically increased: Check if peaks correlate to resource bottlenecks (CPU, memory, paging) and eliminate bottleneck situations.
Execution time higher than expected, significant portion for sorting (trex_qo trace: doSort): Sort operations (e.g. related to ORDER BY) are particularly expensive if all of the following conditions are fulfilled:
- Sorting of a high number of records
- Sorting of more than one column
- Leading sort column has rather few (but more than 1) distinct values
In order to optimize the sort performance you can check from an application side if you can reduce the number of records to be sorted (e.g. by adding further selection conditions) or if you can put a column with a high amount of distinct values at the beginning of the ORDER BY.
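A sketch of the column order recommendation (names are hypothetical; only applicable if the application does not depend on the original sort order):

```sql
-- Less favorable: leading sort column STATUS has few distinct values
SELECT "DOC_ID", "STATUS" FROM "ZDOCUMENTS"
ORDER BY "STATUS", "DOC_ID";

-- More favorable: high-cardinality column DOC_ID leads the ORDER BY
SELECT "DOC_ID", "STATUS" FROM "ZDOCUMENTS"
ORDER BY "DOC_ID", "STATUS";
```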
Increased runtime of INSERT operation on table with hybrid LOB field: INSERTs into hybrid LOBs have to perform disk I/O if the configured memory threshold is exceeded and the data is stored in a disk LOB. See SAP Note 2220627 for more information related to SAP HANA LOBs. In order to optimize performance you can proceed as follows:
- Check for disk I/O bottlenecks and eliminate them in order to optimize the I/O performance (SAP Note 1999930).
- Consider increasing the MEMORY THRESHOLD configuration for the hybrid LOB (SAP Note 1994962) so that more data is kept in memory. Be aware that this will increase the memory requirements of SAP HANA, so you have to find an individual trade-off between memory consumption and performance.
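The threshold adjustment can be sketched as follows; table and column names and the threshold value are examples, the syntax follows SAP Note 1994962:

```sql
-- Keep hybrid LOB values up to 10000 bytes in memory
ALTER TABLE "ZLOB_TAB" ALTER ("DATA" BLOB MEMORY THRESHOLD 10000);
```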
Increased runtime of EXPORT TO DATABASE / IMPORT FROM DATABASE: ABAP tables with INDX structure (e.g. INDX, BALDAT, PCL*, SOC3, SSCOOKIE) are accessed via EXPORT TO DATABASE / IMPORT FROM DATABASE commands on ABAP side. If bigger chunks of data are exported, the data is split into pieces based on the length of the CLUSTD field of the table. This can result in significant communication overhead. In this case you can reduce the amount of INSERT operations and the network overhead by increasing the length of the CLUSTD column to a larger value.
Starting with ABAP SAP_BASIS 7.51 it is possible to store the whole EXPORT data in a single table line by defining the CLUSTD column with data element INDX_CLUST_BLOB and dictionary type RAWSTRING. This can reduce the amount of INSERT operations to a minimum. Be aware that simply adjusting the CLUSTD definition of existing tables is not sufficient and can result in syntax errors. Instead the table has to be set up as described in Export / import tables. In case of SAP standard tables this needs to be implemented and delivered by SAP.
IMPORT FROM DATABASE based on major id and minor id may use multiple SELECTs including sorting. In case of large table sizes, this sorting overhead can become significant. SAP Note 3234023 provides a correction so that the number of records transferred as well as the sorting overhead are reduced.
An optimization for UPDATE and DELETE operations of keys (SRTFD) with many record counters (SRTF2) on INDX-like tables is available in SAP Note 3320379.
Long BW query runtime
Long execution time of TREXviaDBSL calls
Long execution time of TREXviaDBSLWithParameter calls
TREXviaDBSL calls are used for complex database requests like BW queries or enterprise search. See SAP Note 2800048 for more details and troubleshooting options.
High load caused by queries with WHY_FOUND function: Queries including WHY_FOUND function calls like
SELECT WHY_FOUND() as "_WHERE_FOUND" ...
originate from calls to the Enterprise Search (ESH) procedure ESH_SEARCH (thread method BuiltinProcedure_ESH_SEARCH) and are used to display the part of the text that is responsible for the text to be returned by the request. They can sometimes introduce significant overhead. See SAP Note 3258905 and consider deactivating the "Why found" functionality in case it is not really required.
Long BW DTP and transformation runtime using SAP HANA execution mode: See SAP Notes 2033679 (BW 7.40 SP 05 - 07) and 2067912 (BW 7.40 SP 08 - 10) in order to make sure that all recommended fixes are implemented. For example, SAP Note 2133987 provides a coding correction for SAPKW74010 in order to speed up delta extractions from a DSO.
See SAP Note 2057542 and consider SAP HANA based transformations available as of BW 7.40 SP 05.
Long F4 search help in BW: Queries on SID tables with sub-queries on fact or DSO tables can originate from F4 search helps with the "Only Posted Values for Navigation" setting. In this case you should check if a less expensive F4 query execution mode can be used. See SAP Note 1565809 for more information.
Performance issues in context of fulltext indexes: SAP Note 2800008 -> "What are typical problem scenarios in context of fulltext indexes?" describes, among others, scenarios that can lead to performance problems in context of fuzzy searches and creating, updating or using fulltext indexes.
Slow accesses to FI data sources If accesses to FI data sources (0FI_GL_10, 0FI_GL_11, 0FI_GL_12, 0FI_GL_14, 0FI_GL_20, 0FI_GL_40) are slow, you can proceed according to SAP Note 2302508.
Long InA / MDS accesses: If access to SAP HANA views takes longer than expected in InA / MDS environments (SAP Note 2670064), you can check if the cache timeout for metadata is set too small (SAP Note 2559231) and increase the related parameter if required.
SAP Note 3287726 describes a potential performance decrease due to too many calculation steps with EPMMDS binaries 1.00.202221.06.
Performance and resource consumption may be improved with the explicit option for using the SQL engine for MDS request processing. You can configure the related calculation view property to use the SQL engine for that purpose. See SAP Notes 2670064, 2223597 and 2818549 for more information.
SAP Note 3584983 discusses a performance improvement for deeply nested calculations.
Long runtimes in context of SQLScript: See "Are there special considerations for analyzing SQLScript procedures?" for details how to analyze SQLScript procedure executions.
SAP Notes 2795151 and 2902534 provide suggestions how to proceed in case of SQLScript performance regressions, e.g. by adjusting the used SQLScript version level via the SQLSCRIPT_VERSION hint.
Long runtimes in context of smart data access (SDA): Make sure that SDA is configured optimally for performance. See SAP Note 2180119 and in particular make sure that data statistics are created for the remote tables so that the optimizer is aware of cardinalities.
Long runtimes in context of dynamic tiering: See SAP Note 2733393 in order to optimize table accesses in dynamic tiering environments.
Long runtimes in context of planning engine: SAP Note 1637199 contains settings that can be used to activate / deactivate SAP HANA optimized planning engine processing in BW.
Long runtimes in context of S/4HANA and Fiori: Normal performance tuning approaches apply for the S/4HANA / Fiori scenario. Some Fiori apps may take longer than expected due to design limitations; improvements are available with newer SAP HANA Revisions (SAP Note 2519264). SAP Notes 2689405 and 2916959 provide further details about performance optimizations in S/4HANA and Fiori environments.
Long runtime in context of HEX engine: Starting with SAP HANA 2.0 more and more SQL processing is taken over by the SAP HANA Execution Engine (HEX, SAP Note 2570371). Call stacks like the following indicate that HEX is used:
AttributeEngine::hexLookupInvertedIndex
hex::operators::ConjunctionInitOp::run
hex::operators::FragmentColumnLookupInitOp
hex::operators::FragmentScanInitOp::run
hex::operators::ValidPullInitOp::run
hex::planex::ExecutablePlan::executePipelinesUpTo
hex::planex::ExecutablePlan::open
hex::planex::impl::runNextImpl
hex::planex::NoDataOperator::xf_run
ptime::Hex_search::do_open
While other SAP HANA execution engines perform some parsing and considerations during every execution (resulting in slightly increased runtimes), the HEX engine defines an execution plan during initial parsing and keeps on using it (unless a fallback condition is met, SAP Note 3326981). This can sometimes result in unfavorable execution plans when the set of bind values during parsing is not representative. SAP Note 2700051 delivers some stabilizing statement hints (mainly based on the CS_FILTER_FIRST hint, SAP Note 2142945) for usual suspects like STXH, TOA0* and ADR* tables.
See SAP Note 2570371 for an overview of known HEX bugs, including performance bugs.
If you aren't able to optimize the query in a different way, you can - as a last resort - consider disabling HEX for the SQL statement in question using the NO_USE_HEX_PLAN hint (SAP Note 2142945).
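A hedged example of this last-resort hint; the query on STXH is illustrative, the hint name is taken from SAP Note 2142945:

```sql
-- Disable HEX processing for this statement only
SELECT "TDNAME", "TDSPRAS" FROM "STXH"
WHERE "TDOBJECT" = ? AND "TDNAME" = ?
WITH HINT (NO_USE_HEX_PLAN);
```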
Long runtime of FOR ALL ENTRIES query: If a FOR ALL ENTRIES selection in SAP ABAP environments takes long and consumes a lot of resources, you can consider the following optimizations:
- Make sure that FOR ALL ENTRIES statements aren't executed when the driving table is empty, because in this case all records of the current client are returned and the remaining selection conditions aren't evaluated. You can capture these scenarios using transaction SRTCM -> "Empty table in FOR ALL ENTRIES clause" -> "Activate Globally".
- Make sure that fast data access (FDA) recommendations are considered (SAP Note 2399993).
- See "High runtime when using fast data access" for additional recommendations.
Frequent queries with '/* Buffer Loading */' comment
Frequent accesses to small SAP ABAP tables that are supposed to be buffered on ABAP side
Check and optimize the SAP ABAP table buffer configuration (transactions ST02, ST10, AL12):
- Sufficient buffer size and directory entries
- Hit ratio of at least 99 %
- Limited number of swaps and invalidations
- No unnecessary buffering of large tables
- Analysis and resolution of tables with buffer state "error" (SAP Note 703035)
See SAP Note 2103827 for configuring the SAP HANA table buffer.
Long runtime in context of LOCALE: Database requests using the LOCALE clause can suffer from restricted HEX engine support. For language LOCALEs, see SAP Note 2570371 and check if it is an option to allow HEX engine processing.
In context of the Cloud Application Programming model (CAP) the Java locale JA was used, preventing the use of the HEX engine. See SAP Note 3568636 for possible optimizations.
Long runtime of query on SAP HANA dictionary objects and monitoring views
If the CATALOG READ privilege is not assigned to a user, queries on SAP HANA data dictionary objects and monitoring views like SCHEMAS, TABLES, TABLE_COLUMNS, M_TEMPORARY_TABLE_COLUMNS or M_TEMPORARY_TABLES can take much longer, because SAP HANA needs to filter the relevant (own) data and suppress the display of information from other schemas. This is done via built-in functions like ISAUTHORIZED or HASSYSTEMPRIVILEGE.
In the explain plan you will see additional accesses to security related objects like:
M_EFFECTIVE_PRIVILEGE_GRANTEES_
M_EFFECTIVE_PRIVILEGES_
M_EFFECTIVE_ROLES_
P_GRANTEDPRIVS_
P_PRINCIPALS_
You can use view GRANTED_PRIVILEGES or SQL: "HANA_Security_GrantedRolesAndPrivileges" (SAP Note 1969700) to check if this privilege is already granted or not. Consider granting CATALOG READ to optimize the dictionary object accesses:
Mainly for SAP HANA <= 2.00.022: see SAP Note 2100040 ("How can CPU intensive operations in SAP HANA be identified and optimized?" -> "__hasanyprivileges__String_BigInt_String_String") and apply a sufficiently new SAP HANA Revision level or make sure that CATALOG READ is assigned (either directly or indirectly via roles) to users having to access these views. With newer SAP HANA Revision levels the call stacks can still happen, but with lower probability.
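Granting the privilege is a single statement; the user name below is a placeholder:

```sql
-- Grant CATALOG READ directly to a user (or to a role granted to the user)
GRANT CATALOG READ TO MY_MONITORING_USER;
```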
Long runtime of queries on monitoring views
For technical reasons accesses to monitoring views like M_TABLE_LOCATIONS or M_TABLE_PERSISTENCE_LOCATIONS often scan the complete underlying structures regardless of the WHERE clause. Thus, you should avoid frequent selections of small amounts of data (e.g. one access per table) and use fewer selections reading larger amounts of data (e.g. for all tables of a schema at once).
Alternatively check if you can select the required information from other sources, e.g. monitoring view M_CS_TABLES.
Similar overhead is also required for other monitoring views. The column FILTER_PUSHDOWN_TYPE in internal view SYS.M_MONITOR_COLUMNS_ provides information to what extent a column supports the pushdown of filters.
Wrong join order
An obviously wrong join order can be caused by problems with join statistics. Join statistics are created on the fly when two columns are joined the first time. Up to SAP HANA 1.0 SPS 08 the initially created join statistics are kept until SAP HANA is restarted. This can cause trouble if join statistics were created at a time when the involved tables had a different filling level, e.g. when they were empty. In this case you can restart the indexserver in order to make sure that new join statistics are created. Starting with 1.0 SPS 09 SAP HANA will automatically invalidate join statistics (and SQL plans) when the size of an involved table has changed significantly.
If you suspect problems with join statistics, you can create a join_eval trace (SAP Note 2119087) and check for lines like:
Zero values in the second line can indicate that join statistics were created at a time when one table was empty.
Starting with SAP HANA 1.0 SPS 12 you can also use SQL: "HANA_SQL_Statistics_JoinStatistics" (SAP Note 1969700) to evaluate existing join statistics.
See SAP Note 2800028 for more information related to SAP HANA optimizer statistics in general and join statistics in particular.
High runtime (higher than usual), not reproducible in SAP HANA Studio / DBACOCKPIT
If the runtime of a query is longer than expected and you can't reproduce the long runtime with SAP HANA Studio or DBACOCKPIT (if bind variables are used: using a prepared statement with proper bind values), the issue can be caused by an inadequate execution plan (e.g. generated based on the bind values of the first execution or based on statistical information collected during the first execution). In this case you can check if an invalidation of the related SQL cache entry resolves the issue:
ALTER SYSTEM RECOMPILE SQL PLAN CACHE ENTRY '<plan_id>'
You can identify the plan ID related to a statement hash by executing SQL: "HANA_SQL_SQLCache" (STATEMENT_HASH = '<statement_hash>', AGGREGATE_BY = 'NONE', DATA_SOURCE = 'CURRENT') available via SAP Note 1969700.
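Alternatively, the plan ID can be determined directly from the SQL cache; the two steps can be sketched as follows (the statement hash and plan ID are placeholders):

```sql
-- 1. Determine the plan ID(s) for the statement hash
SELECT PLAN_ID
  FROM M_SQL_PLAN_CACHE
 WHERE STATEMENT_HASH = '<statement_hash>';

-- 2. Invalidate the cached plan so it is parsed again on the next execution
ALTER SYSTEM RECOMPILE SQL PLAN CACHE ENTRY '<plan_id>';
```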
Depending on the factors considered during the next parsing (e.g. set of bind values, dynamic statistics information) a better execution plan may be generated. It can happen that you have to repeat the RECOMPILE command until a good set of bind values is parsed.
Even if the problem is resolved after the RECOMPILE, there is no guarantee that it is permanently fixed, because after every eviction or restart a new parsing happens from scratch. If the problem is supposed to be linked to bind values, you can consider adjusting the application so that different classes of bind values are executed with slightly different SQL statements, so that SAP HANA can parse each statement individually.
Due to issue number 245677 it can happen with SAP HANA <= 2.00.048.02 and <= 2.00.052 that in case of parallel compilations of the same statement a limited pre-compiled plan is used for execution. Performing a RECOMPILE can solve an existing issue. As a workaround to prevent this issue from happening you can set the following parameter (attention: it can have adverse effects on other statements, so it should be implemented with care):
If only certain SQL statements suffer, you can use the NO_RECOMPILE_WITH_SQL_PARAMETERS hint (SAP Note 2142945) on statement level.
As of SAP HANA 1.0 SPS 09 you can also think about the IGNORE_PLAN_CACHE hint as a last resort (see SAP Note 2142945). Be aware that this will make the performance more predictable, but due to the permanent parsing requirements the quick executions can significantly slow down. So it should only be used in exceptional cases.
Long preparation times
See SAP Note 2124112 and check if the amount and duration of preparations can be reduced (e.g. by increasing the SQL cache or using bind variables) or if there are other ways to improve the parsing behavior of a specific query.
High runtime with range condition on multi column index
Range conditions like BETWEEN, "<", ">", ">=", "<=" or LIKE can't be used to restrict the search in a multi column index. As a consequence the performance can be significantly worse than expected. Possible solutions are:
Use a single column index rather than a multi column index if possible.
Use "=" or "IN" instead of a range condition.
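As a sketch with a hypothetical table and a multi column index on (COL1, COL2): an IN list over a small, discrete value set allows both index columns to restrict the search, while a range condition stops the index search at the leading column:

```sql
-- Range condition: only the leading column COL1 restricts the index search
SELECT * FROM "MYTAB" WHERE "COL1" = ? AND "COL2" BETWEEN 1 AND 3;

-- Equivalent IN list (for a small value set): both columns restrict the search
SELECT * FROM "MYTAB" WHERE "COL1" = ? AND "COL2" IN (1, 2, 3);
```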
High runtime scanning indexed column with SPARSE or PREFIXED compression
As described in SAP Note 2112604 ("What do I have to take into account in order to make sure that the tables are compressed optimally?"), there can be different reasons why scanning indexed columns with SPARSE or PREFIXED compression doesn't happen efficiently, e.g. no index support or overhead due to bugs. Follow the instructions in SAP Note 2112604 (e.g. new optimize compression run or switch to DEFAULT compression) in order to optimize the behavior.
High runtime scanning index column with advanced compression and FULL index type
Starting with SAP HANA 1.00.122.10 and 2.00.002.01 SAP HANA may use indexes with type FULL on compressed columns. With SAP HANA 1.00.122.10 to 1.00.122.11, 2.00.002.01 to 2.00.012.01 and 2.00.020 this can cause a performance overhead in context of joins. See SAP Note 2516807 for more information.
High runtime with multiple OR concatenated ranges on indexed column
Due to a design limitation with SPS <= 100 SAP HANA doesn't use an index on an advanced compressed column if multiple OR concatenated range conditions exist on the same column. As a consequence statements like
SELECT ... FROM COSP
WHERE ... AND ( OBJNR BETWEEN ? AND ? OR OBJNR BETWEEN ? AND ? )
can have a long runtime and high resource consumption. The only workaround is to use DEFAULT compression for the table. See SAP Note 2112604 for more information.
High runtime with long OR concatenations
If search terms with particularly many OR concatenations suffer from long runtimes, it may be possible to improve the cardinality estimation and the execution plan by optimizing the predicate term sampling with SAP HANA >= 2.00.046. See SAP Note 2124112 -> "What kind of advanced parsing features exist?" -> "Predicate term sampling" for details.
High runtime with EXISTS
Although EXISTS is a semi-join that can be finished as soon as the first record is found in the subquery, there are situations where SAP HANA reads the complete result set, resulting in increased runtimes (see e.g. statement hash 6805026a381879e9e5469be3f09cc654 below). In this case you can either consider rewriting the application coding to avoid EXISTS or you can upgrade to SAP HANA >= 1.00.122.13, >= 2.00.012.02, >= 2.00.021 or >= 2.00.030 where the behavior is optimized.
High runtime with multiple EXISTS in combination with OR
Up to SAP HANA Rev. 1.00.110 multiple EXISTS semi-joins are not evaluated optimally if combined with OR. As a workaround you can check if OR can be transformed into UNION. As a permanent solution you can use SAP HANA Rev. >= 1.00.111.
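The OR-to-UNION transformation can be sketched with hypothetical tables T1, T2 and T3:

```sql
-- Original: OR-combined EXISTS, evaluated suboptimally on older Revisions
SELECT K FROM T1
 WHERE EXISTS (SELECT 1 FROM T2 WHERE T2.K = T1.K)
    OR EXISTS (SELECT 1 FROM T3 WHERE T3.K = T1.K);

-- Rewritten: each EXISTS evaluated separately, results combined via UNION
SELECT K FROM T1 WHERE EXISTS (SELECT 1 FROM T2 WHERE T2.K = T1.K)
UNION
SELECT K FROM T1 WHERE EXISTS (SELECT 1 FROM T3 WHERE T3.K = T1.K);
```

Be aware that UNION removes duplicate rows, so the results can differ from the original query if T1 itself contains duplicate key values.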
High runtime, expected single column index doesn't exist
Whenever a table column is part of a primary key or unique index, an implicit single column index structure is created. Exceptions and solutions are described in SAP Note 2160391 ("What are BLOCK and FULL indexes?" -> Index type = 'NONE' -> Column indexed = 'X'). You can use SQL: "HANA_Indexes_ColumnStore_MissingSingleColumnIndexes" (SAP Note 1969700) to display columns without the expected single column index.
High runtime with TOP / LIMIT
SAP HANA often performs a significant amount of operations on the overall data set before finally returning the first records. Therefore, you should consider the following options:
Upgrade to SAP HANA >= 2.00.072, where the HEX engine (SAP Note 2570371) is able to evaluate TOP / LIMIT conditions already after the first 255 result records. A USE_HEX_PLAN hint (SAP Note 2142945) may be required to force the usage of the HEX engine.
With SAP HANA 2.00.070 - 2.00.071 an issue with the HEX engine results in unexpected overhead evaluating TOP / LIMIT conditions (SAP Note 3369534). A NO_USE_HEX_PLAN hint (SAP Note 2142945) may be required as a workaround in this case.
Provide selective conditions in the WHERE clause so that the amount of data is limited.
Avoid TOP / LIMIT restrictions in unselective joins.
Optimize the join processing, e.g., by defining optimal indexes on the join columns.
Inner joins suffer most from the late limit evaluation. Check if an inner join can be replaced by OUTER JOIN [MANY TO ONE].
High runtime with ORDER BY and TOP / LIMIT
As described in Can sorting be supported by an appropriate index?, SAP HANA indexes don't support sorting. Therefore, an ORDER BY requires explicit sorting, even if an index on the related column(s) exists. In cases where a high amount of records needs to be sorted before the first few records are returned, the runtime can be rather high. In general you can consider the following adjustments:
Check from application perspective if the ORDER BY is really required and avoid it if possible.
Check if you can specify more selective conditions so that the amount of sorted records is reduced.
Issues when bind variables are present
In general it is recommended and useful to use bind variables for literals in order to keep the amount of database requests in the SQL cache and the parsing activity at a reasonable level (SAP Note 2124112). In some cases bind variables can have a negative impact:
SAP HANA 1.0 has some internal restrictions handling bind variables in context of TOP and LIMIT, therefore this combination should be avoided. SAP ABAP kernel 7.49 uses "LIMIT ?", so when upgrading to this kernel level, the SAP HANA performance and resource consumption should be carefully tested beforehand. In order to avoid "LIMIT ?" you can remove UP TO <n> ROWS on ABAP side if possible. SAP Note 2522456 provides a related correction for method CL_CRM_REPORT_ACC_DYNAMIC in CRM environments. Consider hints like LIMIT_THRU_JOIN and PRELIMIT_BEFORE_JOIN (SAP Note 2142945) in case the result of a large join is limited significantly. See SAP Notes 2793263 and 2900345 for using these hints with ABAP transactions SE16 and SE16N that also usually use a restrictive limitation.
SAP Note 2795151 describes SQLScript performance regressions with SAP HANA 2.0 because bind variables may introduce additional complexity. You can use the BIND_AS_VALUE function to make sure that a literal is not replaced with a bind variable.
SAP Note 2891894 describes a problem where SDA (SAP Note 2180119) can't use bind variables properly in context of remote table accesses with SAP HANA 2.00.037.02 - 2.00.037.05. As a workaround you can use the BIND_AS_VALUES function (SQLScript) or explicitly replace the bind variables with literals (SQL).
High runtime with TOP or LIMIT in combination with unselective conditions
SAP HANA typically processes all records fitting to the available conditions before applying a TOP or LIMIT restriction and returning only the specified number of records. In order to avoid unnecessary overhead you should avoid frequent TOP or LIMIT selections on larger tables with unselective conditions.
High runtime of MIN and MAX searches
Indexes in SAP HANA can be used to support MIN / MAX searches if all of the following conditions are met:
SAP HANA >= 2.0 SPS 04
Column store table
Number column or character column without mixture of upper / lower case letters
Be aware that currently other predicates can't be evaluated as part of the index scan before applying MIN / MAX.
In all other scenarios an index can't be used to identify the maximum or minimum value of a column directly. Instead the whole column / table has to be scanned. Therefore you should avoid frequent MAX or MIN searches on a large data volume. Possible alternatives are:
In most cases the column store provides better performance of MIN / MAX searches compared to row store, so you can consider a move to column store (SAP Note 2222277) in cases when the performance is linked to a row store table.
High runtime when LIKE condition with leading place holder is evaluated
The evaluation of a LIKE condition with a leading place holder (e.g. '%1234') can consume significant time on large tables. The related call stack typically contains modules like AttributeEngine::PatternMatching::ScanJob, AttributeEngine::RoDict::_getNextPattern, AttributeEngine::RoDictDefaultPages::getNext or TRexUtils::WildcardPattern::match. The thread method is often PatternMatching::ScanJob or DictScanJob.
The runtime can be high even if the result set was already reduced to a small amount of records (> 0) before evaluating the LIKE condition. An evaluation of the LIKE predicate based on the reduced result set depends on the following factors:
It only works for a single LIKE condition, so e.g. "<column> LIKE '%abc%' OR <column> LIKE '%def%'" will not take advantage.
It only works if the number of distinct values of the column exceeds the limit defined in parameter indexserver.ini -> [evaluator_redirect] -> dict_size_main (default: 2000000). Consider reducing this parameter to smaller values if also columns with a smaller amount of distinct values suffer.
It only works if the number of result records from other conditions is below the limit defined by parameter indexserver.ini -> [evaluator_redirect] -> match_rows_main (default: 2500). If the result records are above 2500 and you still want the LIKE to be evaluated based on this larger result, you can increase the match_rows_main parameter sufficiently. Be aware that evaluating the LIKE based on a rather large number of result records can become more and more inefficient.
Starting with SAP HANA 2.0 SPS 06 the ABAPVARCHARMODE (SAP Note 2262114) is considered in context of LIKE (issue number 267338). This can result in regressions with SAP HANA 2.00.060 - 2.00.063 due to evaluation overhead, even for non-restrictive "LIKE '%'" conditions (issue number 291148). As a workaround you can set the following parameter:
indexserver.ini -> [search] -> sql_like_pushdown = false
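Such an indexserver.ini parameter can be set online, e.g. as follows (a sketch; the appropriate layer depends on your setup):

```sql
-- Set the parameter on SYSTEM layer and activate it immediately
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('search', 'sql_like_pushdown') = 'false' WITH RECONFIGURE;
```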
High runtime with LIKE condition, bind variables and '*' or '?' as part of the bind value
'*' and '?' are no wild cards for LIKE conditions, but due to some internal transformations they can have a negative impact on the evaluation of LIKE conditions with SAP HANA <= 2.00.055 (issue number 241426). Therefore you should avoid these characters in LIKE values whenever possible.
If the problem happens on table STXH, the cause is the coding delivered with SAP Note 2208025, where a '*' is explicitly added to the TDNAME value. A correction is available via SAP Note 2302627.
With SAP HANA >= 2.00.056 the special characters '*' and '?' are handled properly.
High runtime of range condition evaluation, even if result set is already restricted
SAP HANA tends to evaluate range conditions globally, even if the already analyzed predicates have reduced the result set significantly. Starting with SAP HANA Rev. 1.00.112.02 this behavior is optimized.
In certain cases you can use LIKE_REGEXPR (with the appropriate search pattern) instead of LIKE as a workaround. This should be tested thoroughly because LIKE_REGEXPR can also impose overhead in other scenarios.
High runtime of LIKE conditions with ABAP accessing remote sources
Currently, the push down of LIKE predicates to SDA remote sources (SAP Note 2180119) is not supported if the session variable ABAPVARCHARMODE is set to TRUE (issue number 276074). This is the default setting in ABAP contexts. Check whether the LIKE condition can be avoided or, alternatively, execute the statement via a secondary database connection for which abapVarcharMode=false was set via DBCO.
High runtime of COUNT DISTINCT
If SAP HANA 1.0 <= SPS 08 is used and a COUNT DISTINCT is executed on a column with a high amount of distinct values, a rather large internal data structure is created regardless of the actual number of records that have to be processed. This can significantly increase the processing time for COUNT DISTINCT. As a workaround you can check if the SQL statement is processed more efficiently using the OLAP engine by using the USE_OLAP_PLAN hint (SAP Note 2142945). As a permanent solution you have to upgrade to SPS 09 or higher, so that the size of the internal data structure takes the amount of processed records into account.
Also with later SAP HANA 1.0 Revisions the COUNT DISTINCT can take a long time and consume significant space in allocator Pool/JoinEvaluator/DictsAndDocs (SAP Note 1999997) if it is used on a large partitioned table and the join engine is implicitly used. In this case you can check if the USE_OLAP_PLAN hint (SAP Note 2142945) can improve the situation. This problem is fixed with SAP HANA 2.0.
High runtime of DISTINCT
In case of a larger data volume it is normal that DISTINCT takes some time. The fact that the column dictionary of column store tables already contains the distinct values of a column can't be used as a shortcut, because visibility information needs to be evaluated for every record. Thus, a DISTINCT on a column with only one distinct value can take a significant time in case the table has many records.
High load of COUNT
Counting records based on a large table or a complex join can be quite expensive, so you should preferably avoid the COUNT operation in these contexts. For an existence check, for example, a SELECT TOP 1 is already sufficient and it is not required to count everything.
If the COUNT is executed in context of OData calls using the OData V2 Data Model, the OData Count Mode may be responsible. In this case you can switch to sap.ui.model.odata.CountMode.None in order to suppress the execution of the counting.
A COUNT in combination with FOR ALL ENTRIES is executed as SELECT DISTINCT. See SELECT COUNT and avoid SELECT COUNT in combination with FOR ALL ENTRIES and a potentially high number of matching records.
Long runtime of NOT IN join
The evaluation of NOT IN join conditions can be much more expensive than NOT EXISTS or EXCEPT. Check if it is possible to use NOT EXISTS or EXCEPT instead (SAP Note 3125731).
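A sketch of the rewrite with hypothetical tables. Note that NOT IN and NOT EXISTS differ when the subquery can return NULL values: NOT IN then returns no rows at all, so the rewrite is only equivalent if the subquery column cannot be NULL:

```sql
-- Expensive: NOT IN join condition
SELECT K FROM T1 WHERE K NOT IN (SELECT K FROM T2);

-- Usually cheaper: NOT EXISTS (equivalent if T2.K cannot be NULL)
SELECT K FROM T1
 WHERE NOT EXISTS (SELECT 1 FROM T2 WHERE T2.K = T1.K);
```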
High mass UPDATE runtime
Updating a high amount of records in a single command (e.g. "UPDATE ... FROM TABLE" in ABAP systems) is more efficient than performing individual UPDATEs for each record, but it can still consume significant time. A special UPDATE performance optimization is available when the UPDATE is based on a primary key. So if you suffer from long mass UPDATE runtimes you can check if you can implement an appropriate primary key. A unique index is not sufficient for this purpose, a real primary key constraint is needed.
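If the table already has a suitable unique key, a real primary key constraint can be added with a single DDL statement. The table and columns below are hypothetical; the columns must be NOT NULL and uniquely identify each row:

```sql
-- Add a primary key constraint so the key-based mass UPDATE optimization applies
ALTER TABLE "MYTAB" ADD CONSTRAINT "MYTAB_PK" PRIMARY KEY ("MANDT", "DOCNR");
```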
Increased UPDATE / DELETE runtimes
SAP Notes 2351294 and 2823243 describe problems with the new update engine implementation that can result in performance issues on SAP HANA 2.00.030 - 2.00.037.02 and 2.00.040 - 2.00.041. As a workaround the multistore_feature_toggle parameter can be adjusted.
High runtime of anti joins
Anti joins (EXCEPT, subquery with NOT) are often more performance critical than normal joins. The following special situations exist where particularly bad anti join performance can be observed:
Long runtimes of database requests with anti joins (e.g. EXCEPT) and call stacks in JoinEvaluator::LoopJob::findJoinPairsTL_native can be caused by a SAP HANA bug that is fixed with Rev. 1.00.122.12 and 2.00.010. With SAP HANA >= 2.0 SPS 02 the fix is enabled per default. With earlier versions the fix is disabled per default and can be activated with hint CONSERVATIVE_CS_ANTI_JOIN_ESTIMATION or globally with the following parameter:
indexserver.ini -> [sql] -> conservative_cs_anti_join_estimation_enabled = true
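The parameter can be activated online, e.g. as follows (a sketch; the appropriate layer depends on your setup):

```sql
-- Enable conservative anti join estimation globally and activate it immediately
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('sql', 'conservative_cs_anti_join_estimation_enabled') = 'true' WITH RECONFIGURE;
```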
As a workaround the NO_GROUPING_SIMPLIFICATION hint (SAP Note 2142945) can be used. If triggered by BW / MDX, you can also disable the RSADMIN parameter MDX_F4_USE_SQL (SAP Note 1865554).
High runtime of cyclic joins
If a statement with a cyclic join ("[CYCLIC] JOIN CONDITION" in explain plan) can't be optimized differently, you can check if disabling cyclic joins improves the situation (as a temporary workaround) by using the NO_CYCLIC_JOIN hint on statement level (SAP Note 2142945).
Thread method ParallelLoopWithEqJob (SAP Note 2114710) and call stack module JoinEvaluator::LoopWithEqJob::recurseWithEq point toward cyclic joins.
In some cases semi-join reductions are the actual bottleneck. In this case you won't find ParallelLoopWithEqJob but other more common thread methods like ParallelRadixSort (that can also happen in different contexts).
High runtime of certain queries with Rev. 90 to 97.01 in UnifiedTable::MVCCObject coding
Similar to the TOP 1 issue above, also other queries can suffer from long runtimes in modules like UnifiedTable::MVCCObject::isTSBlockGarbageCollected or UnifiedTable::MVCCObject::generateOLAPBitmapMVCC. One main root cause is fixed as of Rev. 97.02, so an upgrade can be considered in case the performance seriously impacts production operation.
Sporadically increased runtimes of calculation scenario accesses
If accesses to calculation scenarios are sometimes slow, cache displacements can be responsible. You can use SQL: "HANA_Configuration_MiniChecks" (SAP Note 1999993, check ID 460) and SQL: "HANA_CalculationEngine_CalculationScenarios" (SAP Note 1969700) to find out more. If the cache is undersized, you can increase it using the following parameter (default: 1048576):
Increased runtime of calculation view accesses
In case of slow and resource-demanding accesses to a calculation view or analytic view you can check the following aspects:
See the SAP HANA Modeling Guide and make sure that modeling best practices are used. Adjust the view definition if required.
See SAP Notes 2291812 and 2223597 and check if adjusting the execution engine helps to improve the performance.
For SQL statements on these views the following best practices should be considered:
Avoid data type conversions (implicit or explicit, e.g. CAST) in join definitions and WHERE clauses
Avoid joins on calculated columns and calculations in WHERE clauses
Avoid non equi join definitions
Avoid joining big analytic views or calculation views, instead use UNION
Use UNION (with constants) to combine large data sets
Minimize the use of expensive calculations, row based expressions and data manipulation including calculated attributes
Make sure, e.g. using PlanViz, that push-down works for selective predicates. If not, you can adjust the view or open a SAP case on component HAN-DB for clarification.
If you use a composite provider / stacked calculation view on top of a scripted calculation view, predicate push down may be impacted in case of different users. So you should either make sure that the same user is used for both or that the scripted calculation view is defined with "Invoker" mode.
If you observe an increased number of accesses to table RS2HANA_AUTH_FIL in BW environments, it is linked to a SELECT based analytic privilege check, see BW2HANA Authorization Generation and SAP Note 2604161 for details. You can check if using an alternative approach provides better performance.
See SAP Note 2500573 for more details about column pruning limitations that can negatively impact performance and memory consumption.
Be aware that data preview functionalities in tools like SAP HANA Studio or SAP HANA Cockpit can be expensive in terms of runtime, CPU and memory consumption. See SAP Note 1894854 for more information and avoid generic data preview operations whenever possible.
Calculation view accesses may also run longer and with a higher resource consumption when they are not unfolded, i.e. not processed via SQL engines. See Where can I find details about unfolding? for more information about calculation view unfolding and analysis details.
High runtime when calling procedure / user defined function (UDF) or table user defined function (TUDF)
Check from an application perspective if the implementation of the procedure / UDF is reasonable and optimal. If the implementation is already optimal, you need to perform a more detailed technical performance analysis. The following scenarios can be responsible for increased runtimes:
When a procedure call (e.g. AMDP) takes a long time and / or consumes a lot of resources you can check if the deactivation of inlining with the NO_INLINE hint (SAP Note 2142945) helps to improve performance by reducing complexity.
Starting with SAP HANA 2.00.037.00 unfolding may no longer work in context of SQL SECURITY INVOKER. As a workaround you can consider removing this setting until a better solution is found. See SAP Note 2847558 for more details.
With SAP HANA <= 2.0 SPS 03 WITH clauses are not optimally evaluated inside a TUDF (SAP Note 2909860) - consider upgrading to SAP HANA >= 2.0 SPS 04 or avoid using WITH inside a TUDF.
High runtime of joins in scale-out environments
Make sure that tables involved in critical joins are located on the same host whenever possible, so that unnecessary communication overhead between SAP HANA nodes can be avoided.
High runtime of joins when different data types are involved
Joining columns with different data types (see e.g. the "BUT000, CRMD_PARTNER" section in the SQL statement overview below) can impose significant overhead (e.g. parallel CalculationJob activities). In general you should avoid joining columns with different data types. If this is not possible, you can consider including an explicit data type conversion, e.g. via the HEXTOBIN function.
High runtime of multi-column joins on three or more tables
Due to a bug SAP HANA doesn't evaluate filter conditions efficiently if three or more tables are joined on more than one column. See SAP Note 2311087 and consider upgrading to SAP HANA >= 1.00.102.06 or >= 1.00.112.03.
High runtime in context of self-join
If the value of two columns of the same table is compared (e.g. "MENGE" > "WAMNG"), older Revisions of SAP HANA aren't able to consider a previously reduced result set and always work on the complete data. Starting with SAP HANA Rev. 1.00.112.04 and 1.00.122.00 this behavior is optimized, resulting in better performance.
High runtime accessing row store tables
An unexpectedly high runtime on row store tables can be caused by the following reasons:
High number of versions, e.g. due to a blocked garbage collection. See SAP Note 2169283 for more information about SAP HANA garbage collection.
Full table scans (call stack module ptime::Table_scan::do_fetch) can slow down due to the row store memory leak (SAP Note 2362759) with SAP HANA Rev. 111 to 112.05 and 120 to 122.01. A SAP HANA restart (even without row store reorganization) optimizes the performance again.
High runtime when using fast data access
Starting with SAP ABAP kernel 7.45 fast data access (FDA, SAP Note 2399993) is activated per default. As a consequence you can see database statements originating from FOR ALL ENTRIES queries or explicit itab joins on ABAP side that look like the following pattern:
SELECT /* FDA WRITE */ DISTINCT ... FROM ... ? AS "t_00" ...
See SAP Note 2399993 -> "Which problems exist in relation to fast data access?" and check if a known issue can be responsible.
Perform classic SQL analysis and optimization like for normal database requests.
In case of SAP standard coding: Check for SAP Notes with application optimizations.
Check if deactivating FDA WRITE with a &prefer_join_with_fda 0& DBI hint (SAP Note 2142945) improves the execution time as a workaround.
If columns with data type DECIMAL are joined to the itab, implicit data type conversions are done when the length of the DECIMAL column is even. This is caused by different representations of DECIMAL columns on ABAP and SAP HANA side. In this case, you can adjust the length of the DECIMAL column on SAP HANA side to an uneven value, e.g. by increasing the previous even value by one.
Bad performance on specific row store table, unexpected UNION ALL in execution plan
For some reasons (e.g. when a column is added) a row store table can consist of more than one underlying container. As a consequence, SAP HANA needs to combine the results from several containers using UNION ALL. Existing indexes may only work for a subset of the containers and so they aren't used in the most efficient way. In order to check for the number of containers and generate cleanup commands, you can use SQL: "HANA_Tables_RowStore_TablesWithMultipleContainers" (SAP Note 1969700). A table can be merged into a single container by reorganizing it with the following command:
Be aware that this activity requires a table lock, so concurrent accesses to the same table may be blocked. Therefore it is recommended to perform it during a time of reduced workload. Furthermore you can set a low lock wait timeout on transaction level (e.g. 10000 ms) in order to reduce the risk of long-term lock escalations:
Bad performance in context of UNION ALL
If a database access including a UNION ALL (e.g. a compatibility view access) takes a long time, it is worth checking if disabling the column store UNION ALL operation with the NO_CS_UNION_ALL hint has a positive effect. See SAP Note 2142945 for more information related to SAP HANA hints. This can at least be a workaround before a final fix is found.
Long runtime of compatibility view accesses
See question "How can compatibility view accesses be tuned?" below for details.
Long runtime of CDS view accesses
See question "How can core data services (CDS) view accesses be optimized?" below for details.
Increased runtime due to unnecessary distributed execution
In the following cases it is possible to transform a distributed query (i.e. involving more than one SAP HANA node in scale-out scenarios) into a typically more efficient local query:
Make sure that statement routing is used (SAP Note 2200772). Otherwise a query may be executed on a node that is different from the table location. In extreme cases this can result in performance regressions of factor 10.
Check if you can locate all tables accessed in a join on the same SAP HANA node.
Be aware that in BW, tables are deliberately partitioned and distributed across different SAP HANA nodes, so local executions are often neither desired nor possible.
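To check on which nodes the tables of a join reside, a query against the monitoring view M_TABLE_LOCATIONS can help (a sketch; schema and table names are placeholders):

```sql
-- Show host and port for each table involved in the join;
-- if more than one host appears, the join may run cross-node
SELECT SCHEMA_NAME, TABLE_NAME, HOST, PORT
  FROM M_TABLE_LOCATIONS
 WHERE SCHEMA_NAME = 'MYSCHEMA'
   AND TABLE_NAME IN ('TAB1', 'TAB2');
```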
High runtime of CREATE VIEW commands: A long runtime of CREATE VIEW commands (e.g. in method drRecreateSecondarySchemas for view M_CONTEXT_MEMORY in schema _SYS_SR_SITE_<site_name>) can be caused by unnecessary statistics collection. The call stack typically contains:
Diagnose::TypedStatisticsWrapper
Diagnose::StatisticsWrapper::traverseNodesRecursive
TypedStatisticsWrapper__M_CONTEXT_MEMORY::traverseNodesImpl
ptime::StatisticsMonitorHandle::getRowCountEstimation
ptime::qo_size_estimation::getMonitorViewRowCount
ptime::qo_size_estimation::fetch_all_histogram
High runtime of COMMIT operations: If commits take on average more than a few ms, this is an indication of a bottleneck. See SAP Note 2000000 ("How can the performance of commit operations be optimized?") for more information.
Many FDA queries in thread method CloseStatement and status Running or "Network Poll": If you see a high number of fast data access (FDA) requests (SAP Note 2399993) in method CloseStatement, the problem is usually linked to the cleanup of no longer required temporary tables (call stack module: ptime::TrexMD::deleteIndex). You can consider the following optimizations:
Make sure that there is no overload situation on the master node handling the metadata information.
Make sure that repository activations (e.g. via procedure REPOSITORY_REST) are performed during non-critical time frames, because they can be responsible for significant bottlenecks on the master node.
Upgrade to SAP HANA >= 1.00.122.14 where the metadata request during CloseStatement of FDA queries is no longer required.
If all other options aren't sufficient, consider disabling FDA READ and / or FDA WRITE as described in SAP Note 2399993.
Long database access time during SUM upgrades: The following reasons can be responsible for unnecessarily long database request times during SUM updates:
Processing of many small packages of table rows in phase RUN_FDCT_TRANSFER due to inadequate row length calculation: Upgrade to SUM SP 19 or higher where an improved row length calculation is implemented.
Long runtime due to cross-node joins: In scale-out scenarios it is important to distribute tables optimally to the different nodes (SAP Note 2081591) and take advantage of table replication (SAP Note 2340450) if required. In BW environments the standard distribution mechanisms should already take care of a good distribution of tables. In other environments you can use the Table Group Advisor of SAP HANA Cockpit (SAP Note 2800006) or SQL: "HANA_SQL_SQLCache_CrossNodeJoins" (SAP Note 1969700) in order to check for expensive cross-node joins. Be aware that these evaluations are based on the SQL cache, which may show tables located on several nodes while the final plan may only use tables on one node, so in some cases cross-node joins may be reported erroneously.
Long runtime due to outer joins: While outer joins are not generally an issue, legacy engines like the join engine have some restrictions in processing outer joins, for example cyclic join graphs containing outer joins. If you face performance issues related to outer joins, check the following options:
Replace outer join with inner join if possible from a business perspective (like e.g. done in SAP Note 3335213)
Replace outer join with sub-query
Check if problem can be fixed using the HEX engine (SAP Note 2570371) that has no specific issue with outer join processing
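The outer-join-to-subquery rewrite mentioned above can look like this (a generic sketch with hypothetical tables, not taken from the referenced SAP Notes):

```sql
-- Original: left outer join only used to fetch an optional attribute
SELECT o."ORDER_ID", c."NAME"
  FROM "ORDERS" o LEFT OUTER JOIN "CUSTOMERS" c
    ON o."CUSTOMER_ID" = c."CUSTOMER_ID";

-- Rewritten: a scalar subquery avoids the outer join in the join engine
SELECT o."ORDER_ID",
       (SELECT c."NAME" FROM "CUSTOMERS" c
         WHERE c."CUSTOMER_ID" = o."CUSTOMER_ID") AS "NAME"
  FROM "ORDERS" o;
```

This rewrite is only equivalent if the subquery returns at most one row per outer row (here: CUSTOMER_ID is unique in CUSTOMERS).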
Unexplainable long runtime: If you experience a runtime of a SQL statement that is much higher than expected, but none of the above scenarios apply, a few more general options are left.
See SAP Note 2142945 and test if the performance of SQL statements improves with certain hints (e.g. USE_OLAP_PLAN, NO_USE_OLAP_PLAN), because this can provide you with more ideas for workarounds and underlying root causes.
Check if SAP Notes exist that describe a performance bug for similar scenarios.
Check if the problem remains after having implemented a recent SAP HANA patch level.
Open an SAP case on component HAN-DB in order to get assistance from SAP support.
Hints (SAP Note 2142945): If the long runtime is caused by inadequate execution plans, you can influence them by using SAP HANA hints.
SQL plan pinning (SAP Note 2222321, SAP HANA >= 1.00.110): Starting with SAP HANA 1.0 SPS 11 it is possible to permanently pin SQL plans with hints (based on PLAN_ID).
Statement hints (SAP Note 2400006, SAP HANA >= 1.00.122.03): Starting with SAP HANA 1.00.122.03 it is possible to globally assign hints to specific SQL statements (based on statement text).
Abstract SQL plans (SAP Note 2799998, SAP HANA >= 2.00.024.01): Abstract SQL plans can be used to capture and apply / freeze good existing execution plans for database requests, avoiding regressions due to changed optimizer decisions.
The following features exist that can be used to cache results of frequently executed queries in order to avoid repeated expensive joins, searches and aggregations:
Static result cache (SAP Note 2336344, SAP HANA >= 1.00.110): Aggregation, no transactional consistency required.
Dynamic result cache (SAP Note 2506811, SAP HANA >= 2.00.020): Aggregation, transactional consistency required, single table query.
Furthermore it is possible to influence the optimizer decision by making adjustments to the optimizer statistics collection (SAP Note 2800028).
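A statement hint as described above is assigned globally with ALTER SYSTEM ADD STATEMENT HINT (a sketch; the statement text and hint are placeholders, the exact syntax should be verified against SAP Note 2400006):

```sql
-- Globally attach the USE_OLAP_PLAN hint to one specific statement text,
-- without touching the application coding
ALTER SYSTEM ADD STATEMENT HINT (USE_OLAP_PLAN)
  FOR SELECT "MATNR", SUM("MENGE") FROM "MSEG" GROUP BY "MATNR";
```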
13. Are there standard recommendations for specific SQL statements available?
In general it is hard to provide standard recommendations because optimizations often depend on the situation on the individual system. Nevertheless there are some exceptions where the underlying root cause is of a general nature. These SQL statements are collected in the following table:
various (SELECT on A<nnn>): Condition tables with names starting with "A" followed by three digits (e.g. A580) can show up with expensive accesses for different reasons:
By default they are buffered on ABAP side. In case of ABAP table buffer size limitations or many changes, expensive reloads may happen. You can check for details in ABAP transaction ST10 and consider adjusting the table buffer configuration or unbuffering critical, large condition tables. As an advanced option you can consider switching from full buffering to an appropriate generic key buffering.
Expensive queries with "TOP 1" or "LIMIT 1" are usually linked to a prestep access that is used to avoid unnecessary subsequent accesses to the table. Depending on the table and application background it is an individual decision if the prestep is activated or not. See SAP Note 1738398 for further details and consider a deactivation of the prestep to see if it overall improves the performance.
4e498f19bb992a89791e1e96d9c2e1c8 (GRANT ABAP_SYS_REPO TO _SYS_REPO): Long runtimes of this GRANT are linked to an SAP HANA bug that is fixed with SAP HANA Rev. 1.00.112.06 and 1.00.120. See SAP Note 2386290 for more information.
000f40538b11e751f072794c5d86dfa7 (SELECT on ACDOCA): A SELECT COUNT on ACDOCA in context of FOR ALL ENTRIES originating from report CL_SUS_IMP_CUP_B_FIN_ACDOCA001CP and batch job SAP_COLLECTOR_FOR_PERFMONITOR suffers from the fact that a COUNT in context of FOR ALL ENTRIES is executed on ABAP side after all data has been retrieved from SAP HANA, see SELECT COUNT for more details.
SAP Note 2942858 provides a correction that among others fixes this problem.
6d0bbcc90f4fddebe3cd16b07b6e3519 (SELECT on ACDOCA): This query can be expensive due to the USE_OLAP_PLAN hint. SAP Note 2465294 is available to remove the hints on ABAP side.
d4e254ec9866d451822a78b791c36e16 (SELECT on ACDOCA): This query can be expensive because of a bug in the HEX engine that is fixed with SAP HANA >= 2.00.024.00 (SAP Note 2568333). Be aware that the HEX engine should generally be disabled (hex_enabled = false) with SAP HANA 2.00.020 - 2.00.024.03 and 2.00.030 - 2.00.031 (SAP Note 2600030).
(SELECT on ACTIVITY_OPTION): Check SAP Note 2535647, which provides optimizations for this database request from Manufacturing Execution application side.
00364c15bee4eda3b700114e6eac5b64, 0536f3e797bf197ef74742b38213f7cd, 0734bb8360ee430834fd79e47c6c8ded, 1597073bbcf6406954b6a9928d9c353d, 16aeba53544d60ed5830381d18d01c72, 1b0c5dd7d91a5b09dd77dea7ea661b65, 1b7ab75efce9b7faa1cfe76055cdf27a, 2277027b8cfeb569dd927370923742c1, 3d12378e0bb47cad454a0a992533afa6, 3e14b5fed3f9fbeffddb469170e07ca1, 426e348da3b70020882d3881b7adc081, 46657f53f01423f6764a3064cc43a0cf, 46c2d08f694e349382de1f56c27a7793, 4c0e6df6273996ccdd9783c348a3da89, 54a156ae830b695e5b9d3df5093948ba, 69c9dd739e14da09645891b9d241fb5f, 8077c04a1ea6703b0c26c05f29bf3482, 8468aec357c3ddda63c0c852c0736e23, 84d768b33354f0bcf82d25b8ceb9abe0, 90da0c0c94022c89a288aa895287d36b, 94c404d4671dc6a2a1a3edb43ebf3408, b9222b17dda947f29917acf4743f6f91, bdb6de442e6e0f19385ee67e226fb831, be7fc33b798435d8d7e14007fc7125af, bf877ef00892a87b4e6cdd72f13c710b, c07285806fb3cbbc785c54a78deb44f3, c78c8ae851ff89d63816309dfed3f02f, c9418a6333cb3ee55259cc30e9305dbc, cba6a447f8e13058c0a2007c2da631e4, cf7670e9527c4fc1ea412f843e79ec1b, d0fa7c2e180e845e806068f4dca2a926, dae1ca689083de83b6e2bf3d10e9d90b, def6f6daced09645fa35497dc348ffe9, e094153a06fbbe726d8ea7ba55e87d0c, e7e2b348b9a2b59422d481f753320baa, f07de1b462bdff69190fc9667d279dde, f08c07ad0f068ff850e1d734b6a1cc94, f587ec125bf2c2d90d9066afe1de6d2f, fe3c0dc3965e58c839f99efd034178eb, feb81cb341207491e87484809ca84c32 (SELECT on ADRT, ADRU, ADR2, ADR3, ADR6): These selections are done with conditions on columns ADDRNUMBER and PERSNUMBER. While ADDRNUMBER is usually a good starting point, PERSNUMBER can sometimes be quite selective and sometimes very unselective (e.g. for PERSNUMBER = ''). Due to this data skew, SAP HANA may sometimes pick the wrong execution plan (SAP Note 3513868). Thus, it is important that SAP HANA enters the evaluation via column ADDRNUMBER, and SAP Note 2700051 delivers the related statement hints for that reason.
6413f16e8a2709f7cd094042db84e2a5 (SELECT on ADRV): This query is used in context of program S3VBRKWRS (S/4HANA archiving). SAP HANA may decide to access the table with column APPL_KEY because it typically has the highest number of distinct values. Usually the primary key column ADDRNUMBER would be more efficient because these accesses are supported by an index.
You can create an index on column APPL_KEY in order to optimize this access.
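The index creation mentioned above could look like the following (a sketch; the index name is a placeholder and the schema has to be adjusted):

```sql
-- Secondary index so that the APPL_KEY access path is supported by an index
CREATE INDEX "ADRV~APP" ON "ADRV" ("APPL_KEY");
```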
2e9d6d1d68ac89c78a3dba5e532ca308, 48482a15fb005f2ef288ef198c0b1275, 9aaf8105bdad5308ab706eea28078d28, a35caae16f2fa43033c8c4f278483d0c, e5332b10f3a1a4215728857efc0f8eda, f9468e4f53d23d0dd90230463b544c3c (CALL ALERT_BACKUP_LONG_LOG_BACKUP): These calls are issued by the SAP HANA statistics server (SAP Note 2147247). In rather idle systems it is normal that these statistics server operations appear at the top of the SQL cache, and runtimes up to 300 ms / execution are normal and acceptable. In this case no action is required.
Usually a long runtime is caused by a large backup catalog. See SAP Note 2147247 ("How can the runtime and CPU requirements of the statistics server actions be analyzed and optimized?") and check if the backup catalog can be reduced. Alternatively you can increase the execution interval.
various, e.g. 4ba302a20772e66ff21abe681f4b0861 (CALL ALERT_MON_COUNT_UNLOADS, SELECT on M_CS_UNLOADS): The monitoring view M_CS_UNLOADS contains information about SAP HANA column unloads and is based on unload trace files (SAP Note 2127458). Per default up to 10 trace files á 10 MB are retained per host and service, so the amount of scanned files can be significant. The following options exist to improve accesses to M_CS_UNLOADS:
Upgrade to SAP HANA >= 1.00.122.12, >= 2.00.001, >= 2.00.012.02, >= 2.00.021 in order to take advantage of trace file pruning, i.e. scanning only the trace files that fit to the selected time frame.
Delete unload trace files, e.g. manually, using ALTER SYSTEM REMOVE TRACES or ALTER SYSTEM CLEAR TRACES or using the related trace cleanup options of SAP HANACleaner (SAP Note 2399996).
Reduce indexserver.ini -> [unload_trace] -> maxfiles from 10 to a smaller value (e.g. 3) so that fewer concurrent unload trace files are retained. The older unload trace files (indexserver_<host>.<port>.unloads.<counter>.trc) aren't automatically purged, so you have to delete them manually so that only the three most current are left.
If the M_CS_UNLOADS query is linked to statisticsserver calls (SAP Note 2147247), an upgrade to SAP HANA >= 2.00.024.03 or >= 2.00.031 can improve the scenario because then the statisticsserver will query only the most recent unloads and not all available historic unloads.
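The maxfiles reduction mentioned above can be applied with the generic SAP HANA configuration syntax (a sketch; verify whether the SYSTEM or HOST layer is appropriate for your landscape):

```sql
-- Retain at most 3 unload trace files per host and service
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('unload_trace', 'maxfiles') = '3' WITH RECONFIGURE;
```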
430c496e0fe15c0353c80de1c72caab1 (CALL ALERT_MON_PART_TABLE_SIZE_HOST_TOTAL_MEM): This procedure is used for statistics server alert 40 ("Total memory usage of column-store tables"). Runtimes up to 10 seconds for large systems are normal and acceptable. In rare cases it can be helpful to reduce the execution frequency. See SAP Note 2147247 for more information about the SAP HANA statistics servers in general and statistics server alerts in particular.
c0f43e5dbdfc438b86964acb0a22c05f (DELETE on HELPER_ALERT_MON_PYTHONTRACE_ACTIVE_AGE), various (CALL ALERT_MON_PYTHONTRACE_ACTIVE): This statistics server check accesses the file-based M_INIFILE_CONTENTS view among others. As a consequence its runtime can suffer from disk I/O problems. Check if there are times of significantly increased execution times. If yes, analyze the disk I/O performance at this time (SAP Note 1999930). In case of correlated I/O problems it is likely that they are responsible for the long runtime of ALERT_MON_PYTHONTRACE_ACTIVE.
3ba27a32c12ccbed76fe54c8c0ad1a4d (CALL ALERT_SHM_USED_SIZE): These calls are issued by the SAP HANA statistics server (SAP Note 2147247). In rather idle systems it is normal that these statistics server operations appear at the top of the SQL cache, and runtimes up to 300 ms / execution are normal and acceptable. In this case no action is required.
In case of higher runtime a performance analysis and / or a reduction of the execution frequency may be useful (ID 12).
7b8d8724cdab39a6dfb1a7f63411af17 (CALL ALERT_TABLE_CONSISTENCY_STATUS): This statement is executed in context of the statistics server alert for consistency check runs (ID 146). See SAP Note 2147247 for more information about the SAP HANA statistics server and SAP Note 2116157 for more information about SAP HANA consistency checks.
Long runtimes are typically a consequence of many records in the underlying tables. See SAP Note 2388483 -> "CONSISTENCY_CHECK_HISTORY_, CONSISTENCY_CHECK_HISTORY_ERRORS_" and remove no longer required records using:
8813754d5ca388b8481e90e877751d43 (SELECT on ALTIDTOOL): This table contains tool assignments to CCMS monitoring tasks. The selection in function module SALU_SAPDEFAULT_VERSION_CREATE can be expensive in case of a high number of records in the table. In this case you should check in transaction RZ20 if an unnecessarily high amount of monitoring is configured, e.g. individual monitoring for tens of thousands of different batch jobs.
(SELECT on ANEA, ANEK, ANEP, ANLC): If these tables are already implemented as compatibility views you can check question "How can compatibility view accesses be tuned?" below for more details related to compatibility view tuning.
6fc02f44c1b21c0a4e288b48d4c5ee61 (SELECT on APB_LPD_OTR_KEYS): This selection with ROLE, INSTANCE and TEXT_ID filters originates from method SELECT_OTR_TEXTS of class CL_APB_LAUNCHPAD. It is used to retrieve keys for ABAP report launchpads. In order to eliminate the database table accesses you can activate shared memory buffering for the roles / instances in question:
Start transaction LPD_CUST
Double-click on role / instance to open the launchpad
Choose "Extras -> General settings" from menu
Set the flag "Use shared memory"
You can identify the frequently accessed roles and instances by activating an SQL trace for table APB_LPD_OTR_KEYS via transaction ST05 and checking the values of the ROLE and INSTANCE conditions in the WHERE clause.
74bfbf32537e5ed95a04dc60817ce3fe (SELECT on APB_LPD_SH_TEXTS): This selection with UNIQUE_LPD_ID and LANGU filters originates from method READ_SHORT_TEXTS of class CL_APB_LPD_UTILITIES. It is used to retrieve texts for ABAP report launchpads. In order to eliminate the database table accesses you can activate shared memory buffering for the roles / instances in question as described in context of table APB_LPD_OTR_KEYS above.
8001561cb4947fe18fd7e967e9a1931e, b59c3c5005309a050d5e035155133b85 (CALL APS_DRP_AMOUNT_GET2): Calling the integrated liveCache procedure APS_DRP_AMOUNT_GET2 (used for aggregating quantities of I/O nodes in a pegging area) is usually quick from a performance perspective, but implicit memory booking leaks have been observed on SAP HANA side, so unjustified out-of-memory terminations may happen. See SAP Note 1999997 -> "Is the SAP HANA memory information always correct?" -> M_CONTEXT_MEMORY and take appropriate actions to mitigate booking leak issues.
You can use transaction /SAPAPO/OM21 and SQL: "HANA_liveCache_LCApps_Executions" (SAP Note 1969700) to identify context and KPIs for the APS_DRP_AMOUNT_GET2 calls.
See SAP Note 2593571 for more information related to the SAP HANA integrated liveCache.
0b1e2b1c549a5c36ae973c6cd3fb85c6, 57dec217b2c81ec47bd0577131c8196b, ae3891f8898465b9c844d36f806d1c78, bc671ed9c3058b43c12aafbab573e891 (CALL APS_ORDER_GET_DATA): This procedure is a central integrated liveCache procedure that can be used for various purposes. The performance massively depends on the way it is called. If you experience high runtimes (that can't be explained with usual reasons like resource bottlenecks or locks) you should check the application that issues the expensive calls and make sure that the call is executed as light-weight as possible. In particular you should make sure that exclude flags are set in order to avoid processing data that isn't required later on.
You can use transaction /SAPAPO/OM21 and SQL: "HANA_liveCache_LCApps_Executions" (SAP Note 1969700) to identify context and KPIs for the APS_ORDER_GET_DATA calls.
See SAP Note 2593571 for more information related to the SAP HANA integrated liveCache.
03adbc8704374d289ecce4bd8f1a3150, 0555162ef70915eb8eb672670bd6c6b3, 099952519dab17d505c8e3cb3d245196, 11b1c4b452b55b7fea973c5c3e7547df, 11ccb7ff8be1cd39d058c4616ebc1e46, 17bd0e4181db35ada2fdb4c682f6eb0b, 1db5c91fffcadec8f6551bdca82adf44, 20b2fd8aeb4db6a26931cc61f6db4cd3, 2b637fd280f55baf313331486ac35723, 2efdc94d660cf4b20c235059ab369898, 2fe987d90027ee74e82001499bae57bf, 31140725b1f4a46369aab247c0b7e097, 3371f7da0b57dbf46f4c056921376b2b, 3554924da9a9ef871625c35f7127da58, 38e915516cf4f9bfef7cab8f2289f328, 40e6646599fe5c5a04f7441ef1e579d0, 439daf39eb6987ab5ef620bd4756ef3b, 481f650b89dbedae171995ea9a645d97, 49406d1f7e2b051b799aa12440a3d43a, 4e100847ee8a76ccfd8630d23dad66ba, 52273b2666beb1a8ee46f069e7759b3b, 540f5a0b90c3dde6badaf758b4f999ca, 565f8588bcbf95fdd964510fd4e8ad27, 57461b5a5d2adb64e80ebbeebe3a329d, 5d110ba33c2e0314ff186019c37be2c7, 5fbd592536c53068deb5b4a67272d7a0, 683b80896e3550eea06ade4a3a855917, 75698fbb304e406dca77e4b87fe3946c, 7b54142eca356166c71929d03c462dc6, 80aed209c3c3691b535d4d9a82c5a993, 81788f9c7ecf4c6253e8099f7ca9eea0, 8637ffebfcc65ab0d613d10c42a86f3f, 8a1bcc8b73f1a2a21b194779ed603b80, 8cd8929305df59e136926a31cf6ba797, 8fa09e6dfa200434333e3728dfac986e, 9af92d27da7def573ec1d519b3c65c97, 9dac2f34cd80ff538bbb4773f8aed514, a34890b38a9b16fb44327f36a618efc9, a464569c7abe55d26c35a07034a99a2f, b473d796296125065b1857e239924b11, b52d2cc6afb4f07f8f9c2064e87abd93, b735b0cdfe3c8c595aa76166901f68c2, bafec15d2b8d4cdb478fb1f50d6d2b16, be96367bc1dbd364a00584477fe2261a, c004e3e7d510a429896b839c9a0b9e0b, c3a6938605f1d3c5a10f58b30be2014d, d7c4094d96b057c5127a4e2c0c54fcc4, d909d52390ed83bb90f01dd467a0a2f6, e17fb11651ec4b410b5f36e86718ea5d, e3868db606001d6780f58b56c85b977a, ec765107843434d8097b6c796e493a61, f12b30f24a33fde01082983778ef35dd, f97424065815221c9b36d27cb671c329, fb0fef08b6e0f1d46cde07d8835545dd, fc00dc4612028ce1ce686bd10ecc1761 (CALL APS_ACT_GET_BY_ORDER, APS_ACT_GET_BY_PEGAREA, APS_ACT_SCHEDULE, APS_ALERTS_GET_DATA, APS_CUS_ORDER_CHANGE, APS_CUS_ORDER_GET, APS_CUS_SCHD_GET_BY_PEGID, APS_DRP_IO_GET, APS_FIX_PEGGING_CHANGE, APS_GET_ATP_DATA, APS_PEG_CAT_GET_ORDERS, APS_PEGID_GET_IO, APS_PEGID_GET_ORDERS, APS_OPT_GET_ACTIVITY_NET, APS_ORDER_CHANGE, APS_ORDER_CREATE, APS_ORDER_GET, APS_ORDER_MODIFY, APS_PEGID_SELECT_ORDERS, APS_REORG_ATP, APS_RESOURCE_CHANGE, APS_RESOURCE_GET_BRUTTO, APS_RESOURCE_GET_DATA, APS_SNP_CAPA_GET, APS_SNP_ORDER_GET, APS_STOCK_GET_BY_KEY, APS_TIMESTREAM_CHANGE, LCK_ENQUEUE02, SAPATP_READ_BUCKET_PARAMS_SIM, SAPTS_CHANGE_TG, SAPTS_GET_ALL_DATA2, SAPTS_GET_DATA, SAPTS_SET_DATA, SIM_SIMSESSION_CONTROL): These database requests are calls of procedures in the integrated liveCache that is used in SCM environments. From an SAP HANA perspective specific tuning is hardly possible, but you can check for general optimizations like reducing the number of calls from application side or optimizing internal lock contention (SAP Note 1999998) if applicable.
You can use transaction /SAPAPO/OM21 and SQL: "HANA_liveCache_LCApps_Executions" (SAP Note 1969700) to identify context and KPIs of liveCache function calls.
45c9b4c77bb6b1ae3f659f02ab8bbc7f, 614e2621fbfd45dc43a6dbc88c829ae5 (INSERT on ARFCRSTATE): INSERTs in ARFCRSTATE can be quite frequent in systems with a significant RFC functionality and so they are one of the first modification statements that suffer from general issues (e.g. Barrier Wait, LoggerBufferSwitch). See the more detailed discussion for table USR02 that is also a typical victim of general issues.
03b1b6d7c3138557fd26c9553fe7baae, 614e2621fbfd45dc43a6dbc88c829ae5, 6414ea1875d5ff8deeb62a9b5f094762, 791faba8d46ced32bc70d4bdde880d1e, c291030bf5bbccfbcb3d7c6b85328cd2, da8163439880168ca7aca60b2ce64d25, e1df1b69227985b845d4f3f8135c6688 (SELECT on ARFCRSTATE): This query is executed in report SAPLARFC and reads a rather high number of records and then filters out most of them on ABAP side because they haven't exceeded a defined retention period.
Optimize the RFC handling as described in SAP Note 1483757 by setting the SAP profile parameter abap/arfcrstate_col_delete to 'X' and scheduling the RSTRFCEU batch job.
11817dbccc82f2828b600e8c841ef1d7, 4790a816f7f326f7c78567171817a8a5 (SELECT on ARFCSSTATE): These queries from report /SDF/SAPLIMA_DATA_COLLECTORS (function module /SDF/IMA_DC_TRFC) and report /SDF/SAPLE2E_EFWKE_COLLECTORS (function module /SDF/E2E_TRFC) select all entries of table ARFCSSTATE with ARFCRETURN = ''. These entries are typically linked to terminated RFC requests and in case of a high number of entries with this value the monitoring queries can significantly slow down. You can check for RFC requests in transaction SM58. See SAP Note 375566 and make sure that old and terminated requests are purged in time.
7b84bc319e14ca9371cf0f500e884582, a2c02d737ce003965d65321ab8900124, 917733c889f0eeec874ca783ea8a3b40, db4ff98c75591e934351e781a474d6e1 (SELECT FOR UPDATE / UPDATE on ARFCSSTATE): ARFCSSTATE contains asynchronous RFC information on sender side. These database requests originating from function module ARFC_RUN / report SAPLARFC or TRFC_QOUT_READ_NEXT can suffer from delays accessing the underlying RFC destination due to transactional lock waits. You can identify the involved RFC destination of long running accesses using SQL: "HANA_SQL_ExpensiveStatements" (STATEMENT_HASH = '<statement_hash>') or via an SQL trace using transaction ST05 by checking the bind value used for column ARFCDEST in the WHERE clause. In transaction SM58 you can check if there are errors reported for the identified RFC destination. In transaction SM59 you can perform a connection test for the identified RFC destination. Reproducible or sporadic issues indicate that there is an issue with the destination, resulting in delays and overhead. You need to make sure that the RFC connection properly works to reduce lock time and contention.
Examples (connection error in SM59):
Error when opening an RFC connection (CPIC-CALL: 'ThSAPOCMINIT' ...)
ERROR: SAP gateway connection failed; is SAP gateway started?
LOCATION: SAP-Server <instance> on host saphana (wp <cpid>)
COMPONENT: CPIC
RETURN CODE: 236
Error when opening an RFC connection (CPIC-CALL: 'ThSAPOCMINIT', communication rc: CM_RESOURCE_FAILURE_RETRY)
ERROR: timeout during allocate of registered program
LOCATION: SAP-Gateway on host <host> / sapgw<id>
DETAIL: TP <rfc_dest> init/busy for more than 60 sec
RETURN CODE: 677
db4ff98c75591e934351e781a474d6e1 (UPDATE on ARFCSSTATE): UPDATEs on ARFCSSTATE can be quite frequent in systems with a significant RFC functionality and so they are one of the first modification statements that suffer from general issues (e.g. Barrier Wait, LoggerBufferSwitch). See the more detailed discussion for table USR02 that is also a typical victim of general issues.
2e90904f475471fba8a383e1eacdefdd (SELECT on ATPC_RESB): Queries on ATPC_RESB originating from method SELECT_RESB of class CL_ATP_PAC_DB_SELECT can suffer from the evaluation of the following view condition:
CASE
WHEN "RESB"."SOBKZ" = N'E' THEN "RESB"."KDAUF"
WHEN "RESB"."SOBKZ" = N'Q' THEN "RESB"."PSPEL"
WHEN "RESB"."SOBKZ" = N'O' THEN "RESB"."LIFNR"
ELSE N''
END AS "SSKEY"
In case of a partitioned RESB table an upgrade to SAP HANA >= 2.00.070 may help (issue number 276161). In general, SAP Note 3386125 can be implemented so that this CASE condition is split into several statements, allowing efficient processing.
various (SELECT on ATP_RESB): SELECTs on ATP_RESB with a "BDMNG > ENMNG" condition from report PPIO_ENTRY can suffer from the SAP HANA limitation described in "High runtime in context of self-join" above. This problem is fixed with SAP HANA Rev. 1.00.112.04 and 1.00.122.00. As a workaround you can implement the coding correction provided in SAP Note 2357019.
e13c4740f87f9ef697b8e2b01c53f388 (SELECT on AUFK): The AUFK selection with "<=" and ">=" conditions on column AUFNR in report RKOSEL00 can suffer from overhead introduced by FDA WRITE in context of FOR ALL ENTRIES (SAP Note 2399993). This problem can be fixed by disabling FDA WRITE for this query using a &prefer_join_with_fda 0& hint (SAP Note 2601166).
160512ee66cee6ff5f4bd0394008ce88 (DELETE on BALDAT): The deletion of BALDAT records can run for a significant time (up to several seconds) when the same primary keys are inserted and deleted again and again. The related call stack is typically:
memcpy_impl
AttributeEngine::Delta::LockedInvertedIndex
AttributeEngine::Delta::BTreeIndex::getValues
While being active in this call stack, it also holds the delta storage lock. Thus, it can block other BALDAT accesses with delta storage related contention like "BTree GuardContainer", "Sleeping" and "Sleep Semaphore" (SAP Note 1999998).
In order to optimize the DELETE, you need to check and correct the application that is responsible for the permanent processing of the same primary BALDAT keys.
b124eba8255584b6f6149b99886224c3 (UPSERT on BALDAT): Increased UPSERT times on table BALDAT may be caused by delta storage contention like "BTree GuardContainer", "Sleeping" and "Sleep Semaphore" (SAP Note 1999998) caused by a repeated insertion and deletion of the same primary keys. See DELETE on BALDAT for more information.
00f5990f5e2b048815271736eb06971e (DELETE on BC_MSG_AUDIT): This deletion with MSG_ID specified in the WHERE clause can be slow due to the absence of an index on column MSG_ID. See SAP Note 3146538 and apply the mentioned SP or manually create an index on column MSG_ID of table BC_MSG_AUDIT (SAP Note 2160391).
e76c89783d652cedeb0183d83e3d82bc, d2480061466bcffabfacc012b96687e2 (SELECT on BC_SLD_INST): These accesses are not expensive, but in context of transactional LOBs they can be responsible for a high number of SQL contexts and an increased size of the Pool/Statistics or Pool/RowEngine/QueryExecution/SearchAlloc allocators. See SAP Notes 2220627 -> "What are transactional LOBs?" and 2711824 for more details.
075632af64e3bb1134f241462d822d71, 4d64f616ce7a5a3c68a1b8514926cb93 (SELECT / INSERT on /BDL/MSGLOG): A high execution rate of these statements can be a consequence of a loop in context of /BDL/TASKPROCESSOR. Implement SAP Note 3328348 in order to fix this problem.
4526e0c42114570e4b45d0f61c083cd1 (UPSERT on BDSER): The MODIFY / UPSERT command on table BDSER from function module IDOC_SERIAL_POST can result in lock contention if IDocs are processed in parallel although the related application uses a serialization functionality. If the same object is processed by several IDocs in parallel this can result in high wait times. To solve this issue you can change the outbound option in the partner profile (transaction WE20) to queue processing.
fb6819ba816dcb984cea118f89d43449 (DELETE on BGRFC_I_RUNNABLE): The deletion from background RFC table BGRFC_I_RUNNABLE in context of application source CL_BGRFC_UNIT_HANDLER_INB_T===CP:149 respectively method IF_BGRFC_UNIT_HANDLER~MARK_UNIT_AS_RUNNING of class CL_BGRFC_UNIT_HANDLER_INB_T mainly suffers from transactional lock contention (SAP Note 1999998) due to concurrent modifications of the same table record(s) by different transactions. SAP Note 3031022 provides an optimization so that the transactional record lock is no longer held when the actual RFC call is done. In case you don't have access to this pilot SAP Note, you can open an SAP case on component BC-MID-RFC-BG to request access.
28e20ad7406347dfd5a0cb6a8cd8f68d, b93124fe6aa7d2d45e6fad54345b9672 (SELECT on BGRFC_I_RUNNABLE): The selection of distinct DEST_NAME values originating from class CL_BGRFC_SCHEDULER_INBOUND in context of the bgRFC (background RFC) watchdog (passport action <BGRFC WATCHDOG>) can become particularly expensive when a high number of records exist in table BGRFC_I_RUNNABLE. Check from bgRFC side if an unexpectedly high backlog has piled up and take actions to process it to reduce the number of records in table BGRFC_I_RUNNABLE.
See SAP Note 2309399 for bgRFC configuration. In case of a high bgRFC generation rate it can be required to increase the open connections per scheduler or the number of schedulers per instance to cope with the load.
8032cffe801f81ca835dddd255c50d68 (SELECT on BGRFC_O_DESTLOCK): This selection in class CL_BGRFC_SCHEDULER_OUTBOUND can be executed very frequently in context of the bgRFC (background RFC) watchdog (passport action <BGRFC WATCHDOG>) in case of bgRFC inconsistencies (e.g. entries in table BGRFC_O_RUNNABLE without related entries in other tables). Check for inconsistencies via report RS_BGRFC_DB_CONSISTENCY and repair them to eliminate permanent watchdog activities.
155f2514ffbd97cd62c013758286ea76, a541d8b887b900b3d2f1073eaadf00c2 (SELECT on BGRFC_O_RUNNABLE): These selections in class CL_BGRFC_SCHEDULER_OUTBOUND can be executed very frequently in context of the bgRFC (background RFC) watchdog (passport action <BGRFC WATCHDOG>) in case of bgRFC inconsistencies (e.g. entries in table BGRFC_O_RUNNABLE without related entries in other tables). Check for inconsistencies via report RS_BGRFC_DB_CONSISTENCY and repair them to eliminate permanent watchdog activities.
various, e.g. 7499c13d54ad95db232f03e8d7f95e78 and 847e7f7f2dbfcbeee615e1ace375963f (SELECT on BIMC_ALL_AUTHORIZED_CUBES, BIMC_ALL_CUBES, BIMC_DIMENSIONS, BIMC_DIMENSION_VIEW, BIMC_MEASURES, BIMC_VARIABLE, BIMC_VARIABLE_VIEW): Accesses to these SAP HANA metadata tables read via MDS (SAP Note 2670064) can sometimes take advantage of a different optimization level. See SAP Note 2967256 for more details.
fe4e46ff323a57316de435ced0e1c941 (SELECT on BIMC_ALL_CUBES, BIMC_DIMENSIONS, BIMC_MEASURES, sap.hana.xs.dt.base.server.DTAccess::dtaa, VIEW_COLUMNS, VIEWS): This query originates from the XSC web workbench. An optimization is available with XSC WebIDE Version >= 1.135.5.
(SELECT on BSAD, BSID, BSAK, BSIK, BSAS, BSIS): If these tables are already implemented as compatibility views you can check question "How can compatibility view accesses be tuned?" below for more details related to compatibility view tuning.
04fc8ad42cfad780a2a301cffea66201, 3beb58c03468bc7cd64ab764a3caf0b2 (SELECT on BSP_DLC_DOBJ, BSP_DLC_SDOBJ, WCFV_DLC_DESIG, WCFV_DLC_SDESIG): SAP Note 2538840 provides a coding correction that can significantly reduce the amount of executed queries against these BSP and WCFV tables.
3dcc171fdeed9f122a8e21a8772f764c (SELECT on BUT000): This statement with a GPART interval selected in the WHERE clause ("PARTNER" > ? AND "PARTNER" <= ?) is used when scheduling mass activities and using the business partner (GPART) object for parallel processing. The related application source is SAPLFKDI, the related function module is FKK_DI_GPART_DETERMINE_CLOSED. Due to changing parameters after the first execution, the query performance can degrade significantly when using an HEX index scan (SAP Note 2570371). You can either add the hint NO_USE_HEX_PLAN (SAP HANA <= 2.0 SPS 05) or NO_HEX_INDEX_SCAN (SAP HANA >= 2.0 SPS 06). See SAP Note 2142945 for more information related to SAP HANA hints.
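As an illustration of how such a hint looks on plain SQL level, the following sketch is possible (the column list and WHERE clause are simplified placeholders, not the exact generated statement; in the ABAP coding the hint is typically attached via the DBSL hint mechanism described in SAP Note 2142945):

```sql
-- Illustrative only: steer the optimizer away from the HEX index scan
-- for a PARTNER interval selection on BUT000 (SAP HANA >= 2.0 SPS 06).
SELECT "PARTNER", "PARTNER_GUID"
  FROM "BUT000"
  WHERE "MANDT" = ? AND "PARTNER" > ? AND "PARTNER" <= ?
  WITH HINT (NO_HEX_INDEX_SCAN)
```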
various (SELECT on BUT000, BUT020, ADRC): Expensive SELECTs on tables BUT000, BUT020 and ADRC from CL_BUPA_IL_SEARCH_SERVICE->SEARCH_BY_ADDRESS2 can be optimized via SAP Note 1792008 by using switch CRM_PERF_BP_SEARCH.
various (SELECT on BUT000, CRMD_PARTNER): Joining BUT000.PARTNER_GUID with CRMD_PARTNER.PARTNER_NO imposes significant overhead (e.g. parallel CalculationJob activities) because the data types of the two columns differ (VARBINARY vs. NVARCHAR). You either have to avoid this join or consider an explicit conversion like:
"PARTNER_GUID" = HEXTOBIN("PARTNER_NO")
This problem doesn't show up in the SAP standard; it is linked to customer specific view definitions or joins in ABAP.
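A minimal sketch of such an explicit conversion in a join condition (table aliases and selected columns are illustrative, not taken from a concrete customer view):

```sql
-- Convert the NVARCHAR partner number to VARBINARY explicitly so the
-- join can be processed without implicit type conversion overhead.
SELECT B."PARTNER", P."PARTNER_NO"
  FROM "BUT000" B
  INNER JOIN "CRMD_PARTNER" P
    ON B."PARTNER_GUID" = HEXTOBIN(P."PARTNER_NO")
```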
eb82038136e28e802bd6913b38a7848c (CALL of BW_CONVERT_CLASSIC_TO_IMO_CUBE): This procedure is executed when a classic infocube is converted to an in-memory optimized infocube using transaction RSMIGRHANADB. Increased load is normal when large infocubes are converted. After the conversion of the existing infocubes is finished, executing this procedure is no longer required, so it is only a temporary activity.
42566e1f2491b6b9820bd20d467af93b, 68f35c58ff746e0fe131a22792ccc1b5 (CALL of BW_F_FACT_TABLE_COMPRESSION): It is normal to see a significant cumulated runtime, because this is a central procedure call for all F fact table compressions.
This procedure performs BW compression on F fact tables (i.e. elimination of requests and transition of data into the E fact table or a dedicated F fact table partition). If F fact tables with a significant amount of records are processed, a significant runtime is expected. Otherwise you can use SQL: "HANA_Threads_ThreadSamples_FilterAndAggregation" (SAP Note 1969700) in order to check for the THREAD_DETAIL information related to this statement hash, which contains the involved table names. Based on this information you can check if you can optimize BW F fact table compression for the top tables. Be aware that the THREAD_DETAIL information is only available when the service_thread_sampling_monitor_thread_detail_enabled parameter is set to true (see SAP Note 2114710).
On BW side the compression activities typically happen as part of BI_COMP* jobs (manual execution) or BI_PROCESS_COMPRESS jobs (scheduled execution), so you can check in transaction SM37 for jobs with these names having a particularly high runtime. In the job logs you can find corresponding entries; based on the job log timestamps you can determine how long an actual compression took. For the infocubes with the longest overall compression times you can check from BW perspective if any optimization is possible (e.g. reduction of data volume, reduction of compression activities).
d39ab69a66a6a9f15ec60253966f0c9f and others (INSERT into BWFI_AEDAT, BWFI_AEDA2, BWFI_AEDA3): Lock contention and deadlocks on these tables can be caused by an inadequate application logic. See SAP Notes 1630808 and 2375171 for optimizations.
(CALL of BW_PRECHECK_ACQUIRE_LOCK_WITH_TYPE): This procedure is called before updating the activation queue of standard DSO tables in BW. It is required to ensure that two identical data packages aren't updated at the same time.
Among others, a COUNT(*) on large DSO tables may be executed, resulting in increased CPU consumption and higher JobWorker utilization. This is fixed with SAP HANA >= 2.00.043 where the COUNT is replaced by a TOP 1.
4891a82d8e0e77ab4de398c35208bc20 and others (SELECT on CATSDB): CATSDB selections with "=" or "IN" conditions on columns PERNR and WORKDATE may not be optimally supported by a single column index because only the combination of both conditions provides significant selectivity. The index CATSDB~1 on columns MANDT, PERNR and WORKDATE is typically not active in SAP HANA environments and ABAP transaction SE11 -> "Indexes" will show it with "E HDB", i.e. excluded on SAP HANA. Reactivate this index or create an additional CATSDB index on columns PERNR and WORKDATE.
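On database level such an additional index could be sketched as follows (the index name is illustrative; in ABAP systems indexes should preferably be created or reactivated via transaction SE11 so that the ABAP dictionary is aware of them):

```sql
-- Multi-column index covering the selective combination of PERNR and WORKDATE
CREATE INDEX "CATSDB~Z01" ON "CATSDB" ("MANDT", "PERNR", "WORKDATE")
```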
2551f72c1b7b5fbc36eb82b31782df26, 6b45f4dc17d1fd081cbc958d8291cfba (SELECT on CDHDR): CDHDR selections originating from SAPLBUSA / BUS_CDOBJECTID_SELECT_WTH_LIKE can be quite expensive due to the generic selection approach that may involve leading place holders in LIKE conditions on column OBJECTID. SAP Note 2126752 provides some further ideas for optimizations from business perspective.
1b1f6509e454a1cdf413b4d9b0746da9, 235200fb753637c5f9d6ebe9e4acca20, 2575a134205b91987fbbd79fc56d46f9, 54ae92f6372acb27af2b6b4de3ed7f41, f8ef7563ffd1b5a48c3de174d6540ae0, ff014ff9f4a75e8aa5ea9b6ef84e47a5 (SELECT on CDHDR): This CDHDR selection in context of archiving (ABAP report CHANGEDOCU_WRI) has the following structure:

SELECT
  *
FROM
  "CDHDR"
WHERE
  "MANDANT" = ? AND
  "OBJECTCLAS" = ? AND
  "OBJECTID" > ? AND
  "UDATE" BETWEEN ? AND ?
ORDER BY
  "CDHDR"."MANDANT", "CDHDR"."OBJECTCLAS",
  "CDHDR"."OBJECTID", "CDHDR"."CHANGENR"
LIMIT ?

Due to the combination of ORDER BY and LIMIT the execution can be particularly inefficient if a high number of matching records exists and a rather low limit is chosen. In this case you should consider increasing the package size of the used CHANGEDOCU_WRI variant via Options -> "Internal packaging" -> "Package Size". As a consequence the expensive selection and sorting is done less often for processing the same number of records.
783ed7080a18c2f0684a1765c95354e6, 889628dbc36e0bcb9c8ea187259ab2f9, a9f17b62f2c85a2bbacdcbc81ec68545, ae3ca933e5445e92dcb4f617a7838c09 (SELECT on CDHDR, CDPOS, CDPOS_UID, CDPOS_STR): This statement executed from SAPLCD_READ includes three OR concatenated EXISTS subqueries that can't be handled optimally with SAP HANA Rev. <= 1.00.110. See "High runtime with multiple EXISTS in combination with OR" for details and consider an upgrade to SAP HANA Rev. >= 1.00.111 in order to eliminate the underlying limitation.
0ac4ae3446af6c7c202036363b9ec67e (INSERT into CDPOS): The INSERT into CDPOS is frequently executed in many ABAP systems when many change documents are created. Increased runtimes were observed in the following scenarios:
When CDPOS uses dynamic range partitioning but there are issues with the automatic split (SAP Note 2380176 -> T3410), table CDPOS is locked again and again unnecessarily, resulting in transactional lock contention (SAP Note 1999998). Make sure that dynamic range partition splits can be executed with success.
Using an inverted individual index (SAP Note 2600076) for CDPOS is usually very useful, but depending on the inverted individual costs the runtime of the INSERTs can increase. Normally, runtimes up to 3 ms are still acceptable. In rare cases with unfortunate value distribution there can be particularly high inverted individual costs (significantly higher than 1000), resulting in particularly long INSERT times. In this case an individual trade-off between INSERT performance and memory saving needs to be made.
43e7c0eac5e582316a7a17334389b0d7, 611684d486918a356f1bbb14b790c17a, 8d4657eeeaa526d9a34aa8a53b63f2a3, a7b7221b320c2cf6d25fd5842bf89ec4, e6c9e30929d7a58fd740be11b9d63204 (SELECT on CDPOS): Implement the dbsl_equi_join hint as described in SAP Note 2162575, so that the SQL statement is processed via IN list rather than OR concatenation. Alternatively avoid blanks at the end of TABKEY, so that the OR concatenation of LIKE conditions on TABKEY is no longer generated.
(SELECT on CE4*): Accesses to CO-PA tables starting with CE4 (e.g. CE4OC00, CE40004 or CE41000) can be significantly improved by adding an appropriate index. The WHERE clause of these queries typically contains "=" conditions on the majority of all table fields, concatenated by AND. The selectivity of the individual conditions changes massively between different statement executions because sometimes a rather unselective initial value may be specified, and other times a very selective individual value. As a consequence you need to create a multi-column index on the most selective columns of the WHERE clause (e.g. columns with a high number of distinct values). Creating a multi-column index with SAP HANA is a rather exceptional situation because usually single-column indexes are sufficient (SAP Note 2160391), but in this scenario it is required.
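A sketch of such a multi-column index (table name, index name and columns are placeholders; the actual columns must be the most selective fields of the observed WHERE clause):

```sql
-- Hypothetical example for a CO-PA segment table; replace the table
-- and the columns with the most selective fields of your query.
CREATE INDEX "CE41000~Z01" ON "CE41000" ("COL_SELECTIVE_1", "COL_SELECTIVE_2")
```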
88e1daba605c7d1d5a54e04f00e12495, 95370f008f8577a7267981df483e1b86, 5a9012b2349c8e356c328bd696bbe9e9 (CALL of _SYS_STATISTICS.Collector_Global_Table_Consistency, SYS.CHECK_TABLE_CONSISTENCY_DEV): These statement hashes are related to the database consistency check performed by the SAP HANA statistics server.
See SAP Note 2116157 for options to adjust CPU and memory consumption of the consistency check.
fa9643cf99d05fea1a6ba64c14e9208f (SELECT on CIC_SOS): These read accesses related to entity CIC_SRCE_OF_SUPPLY from CL_CIC_SOS_READER are related to Industry Cloud Solutions (ICS) extractions. As described in SAP Note 3411602, the corresponding change processing may be active even for customers who do not use any integration with ICS at all. If you are not using integration with ICS (i.e. there are no records in table DRFC_APPL_SERV with SERV_IMPL values starting with 'CIC_'), implement the recommendation of the mentioned SAP Note to prevent the statement from being executed at all.
If you use integration with ICS, SAP Note 3469038 provides an optimized view that may result in less performance overhead for the same data access. Alternatively, you can also implement the newest TCI Note, which is currently SAP Note 3337701.
As a technical workaround, you can check if using a USE_HEX_PLAN statement hint (SAP Note 2400006) can improve the performance.
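As described in SAP Note 2400006, a statement hint can be pinned to a statement without changing the application. A sketch (the statement hash is a placeholder; the exact syntax and its availability depend on the SAP HANA release):

```sql
-- Pin the USE_HEX_PLAN hint to the affected statement hash
ALTER SYSTEM ADD STATEMENT HINT (USE_HEX_PLAN) FOR STATEMENT HASH '<statement_hash>'
```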
f0721df5b7ae4971a8d68a7f5e4959be (UPSERT into CIF_UPDCNT): This UPSERT can run into a transactional deadlock situation (SAP Note 1999998) with a modification operation on table EKET. Pilot SAP Note 3017515 provides a coding correction. Open a SAP case on component MM-PUR-GF-APO if you need access to the SAP Note.
270c660725af264b11bf387002abcd00, bed65ee7ee721e1472fda712236e7445 (INSERT into CKMLPP): Inserts into CKMLPP can suffer from exclusive lock waits (SAP Note 1999998) and unique constraint violation terminations if the same primary key is inserted multiple times. Due to a bug in context of "Late Lock for Goods Movements Active" = 'X' in transaction CKM9 and S4CORE <= 104/OP 2009 it can happen that erroneously several transactions try to insert the same primary key. This is fixed with S4CORE 105/OP 2009 and higher (SAP Note 2833472).
SAP Note 3128614 provides another optimization to reduce inserts and lock contention on table CKMLPP.
various (CALL of CL_PPH_READ_CLASSIC=>GET_MRP_ELEMENTS): Several SAP Notes like 2576155 and 2850985 provide performance optimizations for this procedure execution.
Implicit memory booking leaks have been observed on SAP HANA side, so unjustified out-of-memory terminations may happen. See SAP Note 1999997 -> "Is the SAP HANA memory information always correct?" -> M_CONTEXT_MEMORY and take appropriate actions to mitigate booking leak issues.
bf1f07c495eea3b1ca3ad794e806467c (SELECT on COBK, ACDOCA): This access from method CL_FINS_CFIN_CO_POSTING is optimized with the application coding correction provided in SAP Note 2778352.
b7c8dccd902d983736a4cae8ee8efd47, bb892077dc4654e79b3cacf3ad85b405 and others (SELECT on COEP): COEP accesses with a selective condition on column PAROB1 (e.g. coming from program RKAZCO43) can significantly take advantage of a single column index on column PAROB1. See SAP Note 2160391 for more information related to SAP HANA indexes.
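A sketch of such a single column index (the index name is illustrative; prefer creation via the ABAP dictionary / transaction SE11):

```sql
-- Single column index supporting selective PAROB1 conditions
CREATE INDEX "COEP~Z01" ON "COEP" ("PAROB1")
```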
(SELECT on COEP, COSP, COSS, COVP): If these tables are already implemented as compatibility views you can check question "How can compatibility view accesses be tuned?" below for more details related to compatibility view tuning.
cac55626bf3476dec8f39bcb30322f09 (CALL of COLLECTOR_GLOBAL_TABLE_CONSISTENCY): This statistics server procedure triggers a consistency check with CHECK_TABLE_CONSISTENCY. See CALL -> CHECK_TABLE_CONSISTENCY for more details.
651d11661829c37bf3aa668f83da0950, 42bf4a47fbf7eabb5e5b887dffd53c7c (CALL of COLLECTOR_HOST_CS_UNLOADS, INSERT into HOST_CS_UNLOADS_BASE): Disable the data collection for HOST_CS_UNLOADS as described in SAP Note 2084747. This action is also recommended in SAP Note 2147247.
48ab530741921bb7cf230629a7a965ce, 88f3a62bd06cbae2ec4ac184df7e0a3a, 8a70aa0437cd9bf03d7e1f51e4a722bb, c4dae8464c001072258edb483a95e5cf (CALL of COLLECTOR_HOST_LONG_RUNNING_STATEMENTS, INSERT into HOST_LONG_RUNNING_STATEMENTS_BASE): These calls are issued by the SAP HANA statistics server (SAP Note 2147247) in order to check for particularly long running database requests. In rather idle systems it is normal that these statistics server operations appear at the top of the SQL cache, and runtimes up to 700 ms / execution are normal and acceptable.
The default execution frequency is once per minute. This is higher than necessary (issue number 334265). You can manually adjust it to once every 5 minutes as follows:

UPDATE _SYS_STATISTICS.STATISTICS_SCHEDULE SET INTERVALLENGTH = 300 WHERE ID = 5028

The performance can also suffer if the implicit access to M_ACTIVE_STATEMENTS takes a long time. Check section M_ACTIVE_STATEMENTS for possible reasons and optimizations.
247afa8dd0c4d8009dc454d456b5d392, ac52398f58a752bed7843aac8d829464, e9f2feac54a048984928f3fcbcfacedd, f92fcc35b3c28ae298ebf2820aebe09a (CALL of COLLECTOR_HOST_RS_INDEXES, INSERT into HOST_RS_INDEXES_BASE): These statements are related to the row store index history collection performed by the statistics server (SAP Note 2147247). Consider reducing the amount of data in large row store tables (e.g. based on SAP Note 2388483) or moving tables with large indexes to column store in order to improve the runtime of these queries.
2d40db6145e579aa5c23e42337750ee8, 5157ba1bc92d3166d7eab858566f9ea1 (CALL of COLLECTOR_LIVECACHE_CONTAINER_STATISTICS, COLLECTOR_LIVECACHE_SCHEMA_STATISTICS): These procedures are used by the statistics server (SAP Note 2147247) in order to retrieve statistics for the integrated liveCache (SAP Note 2593571). With some SAP HANA Revisions the related statement hashes can be mentioned in context of trace file entries "SharedLock overflow on context" (SAP Note 2380176); this can be ignored.
58071fe07364fd7c23f7e28880150128 (UPSERT into COOI_CHK): This UPSERT originating from a MODIFY operation in include LKAOIF80 can result in transactional lock contention (SAP Note 1999998) when updating commitment data in parallel tasks. Implement SAP Note 2906851 to switch off the responsible check function provided by the report RKA_COMM_CHECK.
36490273789aeb5c4a5070e6307fe782, cb96aede72803c8b5a872043e025f5de (SELECT on COSS): Queries originating from line 139 of include LKAIVF1B respectively application source SAPLKAIV:17205 are optimized with the correction available in SAP Note 2876066.
8324ebd3d1cf16908d778f7dceb5eee5 (TRUNCATE of COVRES): The TRUNCATE is triggered by job /SDF/UPL_PERIODIC_EXT_JOB respectively program /SDF/UPL_PERIODIC_EXTRACTOR:

CALL METHOD repository->('IF_SCV_LITE_REPOSITORY~TRUNCATE_COVERAGE_DATA')

CALL FUNCTION 'DB_TRUNCATE_TABLE'
  EXPORTING
    tabname          = 'COVRES'
    save_views       = space
    set_init_storage = 'X'
  IMPORTING
    subrc            = return_code.

It is usually executed quickly, but the TRUNCATE results in a metadata change and so it can result in (harmless) terminations of concurrent CHECK_TABLE_CONSISTENCY runs with message "5099: metadata version changed while running checks".
26e1ca7b731a467f2db818753d80118f (SELECT on CRMD_ORDER_INDEX): This access suffers from the COUNT DISTINCT limitation of SAP HANA SPS <= 08 and can be improved with the USE_OLAP_PLAN hint. SAP Note 2255511 provides the coding correction.
0e483b3074906106fd7c321a30fdea85 (SELECT on CRMORDERLOPR): This SELECT is triggered by SAP ABAP table buffer reloads that can happen frequently because of invalidations. See SAP Note 1916476 for more information and disable table buffering for CRMORDERLOPR.
f33b8660de1e3bc467d79366be1d7258 (DELETE from CRMT_RECENT_OBJ): This deletion originating from ABAP function module CRM_RECOBJ_DATA_SET (application source SAPLCRM_UI_RECOBJ_DATA_SET:40) can suffer from deadlocks when multiple tabs are closed in a browser at once. SAP Notes 2232607 and 3553405 provide optimizations.
55c0d811e887fa4a08bba2b4c94a46ff, 9ce7b872af35b7c992a3b5d8f463ccdc (INSERT into CS_AUDIT_LOG_): This INSERT is related to SAP HANA auditing. You can consider the following steps to optimize it:
Check if the amount of audited information can be reduced. For example, SAP Note 2293725 describes a scenario where more objects are audited than originally intended.
Check if there are general issues impacting INSERT performance like slow log write times or system replication issues (SAP Note 2000000).
See SAP Note 2159014 for more information related to SAP HANA security.
86a04561a02aa97dd40f05e2e5f41524, f3612366d53e8fcbf465da8dcc8f2473 (INSERT into D342L, D345T, D346T): Modifications to the CUA runtime object tables can suffer from exclusive lock waits and transactional deadlocks (SAP Note 1999998). See SAP Note 3171132 for more information.
d07382ba4b6dc49c8594fe66b128571b (SELECT on DATA_TYPES, VIEW_COLUMNS and others): This SELECT is executed when an ODBC or SQLDBC application calls the SqlColumns client function (e.g. in context of SAP Data Services / BODS) in order to retrieve column related metadata. Typically only one request is sent per connection and table. A high load can be caused by:
Long runtime of queries on SAP HANA dictionary objects and monitoring views due to missing CATALOG READ privilege
High number of connections requesting SqlColumns information
High number of tables for which SqlColumns information is requested
Overhead accessing metadata (see SAP Note 2222200 -> "What can be reasons for threads in network related states?" -> "Metadata access")
c6e61129d85ccb14ca14b58ca5b188d4, e9c4f7f727f746f54c1e36a87205eae8 (INSERT into DBTABLOG): INSERTs into DBTABLOG are linked to table logging. Check if table logging is unnecessarily activated for tables with a significant change load. You can find the top tables logged in DBTABLOG via SQL: "HANA_Data_ColumnValueCounter_CommandGenerator" (TABLE_NAME = 'DBTABLOG', COLUMN_NAME = 'TABNAME') available via SAP Note 1969700.
You can check via ABAP transaction SE13 -> "Display" -> "Log Data Changes" whether table logging is activated for a specific table.
The following problems are known:
SAP Note 3295259: Slowness due to table logging being active for table WDR_ADP_CONST_MP
ae0be434c10c14870930b787707ff413 (SELECT on DD08L): DD08L is used to determine an ABAP text table for a given table using function module DDUT_TEXTTABLE_GET. While the individual selection on DD08L is typically very quick, there can be scenarios with a particularly high execution rate. The table DD08L can't be buffered on ABAP side. Thus, the only way to reduce the load is to adjust the application so that DDUT_TEXTTABLE_GET is executed less frequently.
b03a016fa9a79e6973cee05f693c88cc (SELECT on DD17S, DD12L): Overhead accessing these tables is reduced starting with SAP ABAP kernel patch level 7.53 (211). See SAP Note 2641772 for more information.
a9c2427d8a5fee49ef90da48ce8ebe04, afb54cf91cd5d51ac7f360f82e67d5dd (INSERT into DDLOG): These statement hashes refer to an INSERT operation into ABAP table DDLOG that is required to synchronize buffered tables between application servers. This table is based on a sequence and uses a LOB column, so runtimes of up to 2 ms per INSERT can be acceptable.
Check on ABAP side if there is a high amount of changes on tables buffered on ABAP side and try to reduce it or unbuffer tables.
INSERTs into DDLOG can be impacted by the underlying SAP HANA sequence DDLOG_SEQ (see e.g. SAP Note 1977214). It is generally recommended to activate caching for this sequence. You can use the following command to activate a cache with 1000 elements for the sequence:
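The command itself is not included in the text; based on the description (a cache of 1000 elements for sequence DDLOG_SEQ) it is presumably the following (the schema name is omitted; adjust to your SAP schema, e.g. SAPSR3):

```sql
-- Activate sequence caching so that not every INSERT has to persist
-- a new sequence value individually
ALTER SEQUENCE DDLOG_SEQ CACHE 1000
```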
SAP HANA may also report a high memory consumption for this INSERT. This is usually not correct and the result of a bug in memory accounting.
INSERTs into DDLOG are quite frequent in ABAP environments and so they are one of the first modification statements that suffer from general issues (e.g. Barrier Wait, LoggerBufferSwitch). See the more detailed discussion for table USR02, which is also a typical victim of general issues.
50876a653c3899d678727d97c1279ca7 (SELECT on DDNTT): Accesses to table DDNTT can suffer from issues with the ABAP catalog cache and ABAP table logging:
See SAP Note 3241223 for a known issue resulting in an inefficient usage of the catalog cache.
Make sure in transaction ST02 that the catalog cache is appropriately sized and no or only a limited amount of swaps (up to 1000 per day) takes place.
With S/4HANA >= 2023 the catalog cache is also used for CDS enums. In case of questionable coding, ASSIGN values may be considered as CDS enums, searched in the catalog cache and not found there, resulting in a lot of DDNTT accesses and a growth of the catalog cache with many "Not found" entries. In this case, the ASSIGN coding needs to be adjusted.
Check if table logging is actually required for the table in question. You can use SQL: "HANA_ABAP_TableLogging" (SAP Note 1969700) to check for existing table logging entries and tables that potentially don't require table logging (ONLY_UNNECESSARY_TABLES = 'X'). You can adjust table logging via ABAP transaction SE13 -> "Table name" -> Change -> "Log data changes".
46780875139519ef1423e34efe1a9582 (UPSERT into DDNTT_HIST): Upserts of table DDNTT_HIST can suffer from transactional lock issues due to the ABAP catalog cache and ABAP table logging:
See SAP Note 3241223 for a known issue resulting in an inefficient usage of the catalog cache.
Make sure in transaction ST02 that the catalog cache is appropriately sized and no or only a limited amount of swaps (up to 1000 per day) takes place.
Check if table logging is actually required for the table in question. You can use SQL: "HANA_ABAP_TableLogging" (SAP Note 1969700) to check for existing table logging entries and tables that potentially don't require table logging (ONLY_UNNECESSARY_TABLES = 'X'). You can adjust table logging via ABAP transaction SE13 -> "Table name" -> Change -> "Log data changes".
49f80acdc0294a25a8732d3d27708e81 (SELECT on DFKKLOCKS): This query can be executed unnecessarily frequently in context of FKK_READ_DOC_INTO_LOGICAL (application source SAPLFKKLOCK_DB:640). See SAP Note 3517252 for an optimization.
(SELECT on DFKKOP): DFKKOP selections with "=" or "IN" conditions on column XBLNR can typically take advantage of an additional single column index on XBLNR because these conditions are typically very selective.
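A sketch of such an index (the index name is illustrative; prefer creation via the ABAP dictionary / transaction SE11):

```sql
CREATE INDEX "DFKKOP~Z01" ON "DFKKOP" ("XBLNR")
```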
(SELECT on DIFT_POS_IDENT): Check if the suggestions of SAP Note 1122623 (including proposed index modifications) can help to improve the performance.
de4f2fe75cf84e95ec75ffbaf9825c8a (SELECT on DIMAIOB): This query scans a potentially large amount of records, sorts it by the primary key and returns the first record based on the sort order. This is expensive on SAP HANA as the primary index can't be used for an efficient sorted access.
An application correction is available via SAP Note 2406424.
dc94aabe7e8ddb62a9f687aef9e41d7d (DELETE from DIRTREE): This statement can suffer from transactional deadlocks in context of tables DIRTREE and DWTREE when ABAP jobs EU_PUT and EU_REORG are executed. See SAP Note 3483940 for a coding correction.
447875bbafe59901254cf1fb7e846fe2 (SELECT on DMC_C_WL_TABL_OP): This SELECT COUNT executed in context of SLT (SAP Note 2014562) can be improved with an optimized buffering delivered via SAP Note 3154154.
07dad4af60cb74158ca4fbae0b37c16f (SELECT on DMC_COBJ, DMC_STREE, DMC_STRUCT): This selection executed in method COMPARE_DDIC_TIMESTAMP of class CL_IUUC_MT_TABLES_ITERATOR is used in SLT contexts (SAP Note 2014562) in order to determine structure changes of replicated tables on source side. It is also executed at times when no actual SLT replication work is pending. Thus, it can be observed with a particularly high execution frequency in systems that have configured SLT but currently have no SLT replication load. In this case, the behavior is expected and no action is required.
1045d8b4e76ac8d4d37303bab4242be0, 9cf5e32514053d9c617baff34d032952 (SELECT on DOKIL): This selection with an equal condition on ID and a LIKE condition on OBJECT originating from application source SAPLSDOC:29399 respectively function module DOCU_FROM_TR_OBJECT_RECEIVE may be executed too frequently. Implement SAP Note 3250296 in order to reduce the number of selections.
f85c4cdbc90cd417ef4f941d949abb7a (CALL of DSO_ACTIVATE): This procedure was used to activate HANA optimized / in-memory optimized / IMO DSOs. This DSO type is deprecated with SAP HANA >= 2.0 SPS 01 (SAP Note 2425002). Migrate these DSOs to supported types (SAP Notes 1849497, 1849498).
4073a6b50a8441fa93f10d2acb68da4d, 6cfb2be313962fe8bde8b1135c96de6d, 91d7aff2b6fb6c513293d9359bf559a6, ae7f94b9a7e4c902dba96571db19dffb, fa34afc37fd96494f3d37a3aab95f17a (CALL of DSO_ACTIVATE_PERSISTED): It is normal to see a significant cumulated runtime, because this is a central procedure call for all DSO activations in SAP HANA environments (SAP Note 1849497).
This procedure is the SAP HANA server side implementation of DSO activation in BW environments. During this activation data is moved between three involved tables (/BIC/A*40, /BIC/A*00 and /BIC/B000* for classic DSOs or /BIC/A*1, /BIC/A*2 and /BIC/A*3 for advanced DSOs). It is much more efficient than the previous, SAP application server based approach, because moving a high amount of data from the database to the application and back is no longer required.
You can use SQL: "HANA_BW_DSOOperations" (SAP Note 1969700) to check for particularly long running DSO operations and their runtime and record details.
Check if you can reduce the activation load by reducing the number of DSO activations or the data volume per activation (e.g. by using delta loads instead of full loads).
If the DSO activation in distributed environments takes longer than expected, you should check if the partitioning of the /BIC/A*40 and /BIC/A*00 tables is consistent, i.e. same partitioning and same hosts for the related partitions. SQL: "HANA_TraceFiles_Content" (SAP Note 1969700) reports DSO misconfigurations via check ID T1100 ("Inadequate BW partitioning").
A DSO activation performs many changes, so it can suffer significantly from bottlenecks in the redo log and system replication area. See SAP Note 1999930 for redo log I/O analysis and SAP Note 1999880 ("Can problems with system replication impact the performance on the primary system?") for more details.
If a very high number of records is modified you may experience contention on the delta storage (BTree GuardContainer lock, SAP Note 1999998). You may be able to reduce the contention by increasing the number of partitions so that the delta load is distributed.
Apart from this you can also apply optimizations on BW side:
Smaller requests
Smaller keys
DSOs that don't require an activation, e.g. info cube-like ADSOs
2b88d45d4e1246c74d27ac1f9689c289 (CALL of DSO_ROLLBACK_PERSISTED): This procedure is called if a request is deleted from a standard DSO in BW. If you repeatedly see this procedure call you should check from BW side why requests are repeatedly deleted from standard DSOs.
In addition you can check the suggestions provided in context of DSO_ACTIVATE_PERSISTED (see above).
33b1df83771b03f119ea13ddaf472069 (DELETE from DWTREE): This statement can suffer from transactional deadlocks in context of tables DIRTREE and DWTREE when ABAP jobs EU_PUT and EU_REORG are executed. See SAP Note 3483940 for a coding correction.
b1e72ff44bc0eea0b572fc89cfa7674b (SELECT on DYNPLOAD): Selections on the ABAP dynpro load table can suffer from delays reading LOB files from disk. In this case you typically see I/O related information in the threads, e.g. "Resource Load Wait", PrefetchIteratorCallback or PageIO::SyncCallbackSemaphore. See SAP Note 1999930 and make sure that I/Os are processed efficiently. For example, bottlenecks in the I/O stack, overloads due to savepoints of huge table optimizations or resource container unloads can result in unnecessarily long I/O read times.
various (SELECT on EABL, EABLG): If joins on tables EABL and EABLG are expensive without apparent reasons (like inadequate index design) you can check for the following optimizations:
If the tables were recently populated, the join statistics may not reflect the current state and SAP HANA may use an inappropriate execution plan. In this case a restart of SAP HANA would result in a collection of current join statistics. Starting with SAP HANA SPS 09 join statistics are also recreated online when the amount of changes on the underlying tables is significant.
If the expensive join happens in context of CRM for Utilities and the IC Webclient / CRM UI you can check SAP Note 2218437 and implement the proposed coding correction or the required IS-UT support package in order to reduce the amount of executed EABL / EABLG joins for displaying historical meter readings from an application perspective.
b8d1861e452621f0c6642ed24a50099e, f246ac9c3b1ce532d4b51caf6fd70691 (SELECT on EDIDS): This EDIDS query is typically linked to the hourly batch job SAP_CCMS_MONI_BATCH_DP respectively report RBDMONI_CCMS_IDOC that is executed in context of CCMS monitoring. You can adjust or disable the underlying monitoring method ALE_IDC_COLLECT here:
Transaction RZ21 -> Methods -> Method definitions -> ALE_IDC_COLLECT -> Control -> uncheck "Execute method immediately after monitoring segment start"
In transaction BDMO you can configure what kind of details should be monitored in context of ALE CCMS monitoring.
Alternatively you have to resolve existing problems in the EDI area and make sure that table EDIDS is regularly cleaned (SAP Note 2388483).
See also SAP Notes 2905493 and 3268073 that provide performance optimizations for ALE CCMS monitoring.
2cafcd6301acc5cf9e5c80de57553052, 30cc2a2936e36a3caa732accea1fa711, 73393b77ed7d09afce44478e34879786, 857c5b8d64f949f1beb9736dec9ef069 (SELECT on EDIDS): This statement originates from function module IDOC_GET_MESSAGE_ATTRIBUTE (function group SAPLBDID) and has a subquery based on COUNTR. It can take advantage of using the HEX engine (SAP Note 2570371). Please implement the delivered SAP HANA statement hints (SAP Note 2700051) that will among others generate a USE_HEX_PLAN hint for this query.
a1652d017b9f415f4411fd8974f3caf4 (SELECT on EDIQO): This query is the selection of the maximum counter for a certain QNAME originating from include LEDI1F09 of the SAPLEDI1 function group:

SELECT MAX( "COUNTER" ) FROM "EDIQO" WHERE "MANDT" = ? AND "QNAME" = ?

In case of a greater number of records in table EDIQO this operation can be expensive due to the limitations described in section High runtime of MIN and MAX searches. For consistency reasons it is not possible to change the EDI implementation in this context. Instead you have to make sure that the amount of concurrent requests in EDIQO remains at a reasonable level.
64cd1a531cdab0abd980f4a515b83c87, 76c77bd3d063cfebb63a03c7e7c4d001 (SELECT on ENHINCINX): These selections from function module RS_GET_ALL_INCLUDES respectively method GET_ALL_INCLUDES of class CL_WB_CROSSREFERENCE only specify PROGRAMNAME in the WHERE clause and no appropriate index exists. The selection with hash 76c77bd3d063cfebb63a03c7e7c4d001 was introduced as part of the correction of SAP Note 3072385.

Create an additional ENHINCINX index on column PROGRAMNAME in order to optimize this request.
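A possible DDL sketch for such an index (the index name is only an illustrative example, not an SAP-prescribed one; create the index in the SAP schema of the system):

CREATE INDEX "ENHINCINX~PRG" ON "ENHINCINX" ("PROGRAMNAME")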
68eee9077938d4c8223ef0149ecb0d51 (SELECT on ESSR): The ESSR selection from function module LMEREPI02 / application source SAPLMEREP:8926 is expensive because all records of the current client are selected in case the FOR ALL ENTRIES itab lt_bet_ses is empty. Implement SAP Note 3028018 to make sure that this selection is only executed when the itab is not empty.
2aebdbfdf69d7dbecbd54e76fe716e5e (CALL ESH_SEARCH): This is a generic procedure for enterprise search calls. Make sure that these calls are implemented as efficiently as possible, e.g. by avoiding unnecessary WHY_FOUND information.
d956e5bcd4f2756c19a28a92778e1177 (CALL SYS.EXECUTE_MDS): This is a generic SQL procedure call for executing MDS requests (SAP Note 2670064) when the following parameter setting is in place (SAP Notes 2180165, 2600030):

indexserver.ini -> [mds] -> use_sql_procedure_for_analytics = true

As a lot of different MDS requests are executed based on this procedure, it is normal and expected that it is part of the most expensive database requests in MDS environments. To drill down, you can check the bind values of expensive executions via the expensive statements trace (SAP Note 2180165) that map to method, schema, package, object, datasource type and request.
See Long InA / MDS accesses for further MDS performance optimization details.
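Assuming the expensive statements trace is active, the bind values of expensive executions of this procedure can be inspected for example like this (availability of the STATEMENT_HASH column depends on the SAP HANA revision):

SELECT START_TIME, DURATION_MICROSEC, PARAMETERS
FROM M_EXPENSIVE_STATEMENTS
WHERE STATEMENT_HASH = 'd956e5bcd4f2756c19a28a92778e1177'
ORDER BY DURATION_MICROSEC DESC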
SELECT on EXTENSION: Check SAP Note 2535647 that provides optimizations for this database request from Manufacturing Execution application side.
7df11e8ddef118e147bcd96dd848a06b (SELECT on FAAT_DOC_IT): This query is executed in method CL_FAA_MDO_ITEM and suffers from the fact that the maximum (MAX) of a rather large amount of records potentially needs to be determined. SAP Note 2410056 provides a coding correction.
Various hashes (SELECT on FAAV_ANLC): FAAV_ANLC is a compatibility view on table ANLC. Check question "How can compatibility view accesses be tuned?" below for more details related to compatibility view tuning.
See SAP Note 2816302 and check if you can directly query the underlying table FAAT_YDDA to avoid unnecessary overhead.
See SAP Notes 2816301 and 2796770 and make sure that the compatibility view is set up optimally for performance.
Various hashes (SELECT on FAGLFLEXT, FMGLFLEXT, GLT0, JVGLFLEXT, PSGLFLEXT): If these tables are already implemented as compatibility views you can check question "How can compatibility view accesses be tuned?" below for more details related to compatibility view tuning.
35c76a107e1aeab12355e3ffb4389ba9, 6db0e37f140e3b529ad82ab0cdbc0c2b, 7b94dc29b0ff584f3a0e1be2955bfc64, c50d944ceb16cb5aacd20be3c9e811c1 (DELETE on FEBIP): DELETE operations of certain KUKEY values from report RFEBKA30 / application source RFEBKA30:4713 can suffer from transactional locks and deadlocks due to an inadequate application design. SAP Note 3208265 provides a coding correction.
0522278d30d2827b20923d9a1afaf71e, 077736806b5fbc0fa1f59c47ca6d5d6d, 2f8e5433683b98b4fe6b8257d9bd4195, 554c18cc7e6018abd21c69429b273f0b, a8a84c81280a0c2bc035cff4729e904e, dba4750eb518f5dbe0e4fcfd2d961a53 (SELECT FOR UPDATE on FKKDIHDTMP): If SELECT FOR UPDATE requests on table FKKDIHDTMP suffer from exclusive lock waits (SAP Note 1999998) you should check if a reduction of mass activity intervals can improve the situation (SAP Note 2177798). Additionally, implement the recommendations of SELECT on FKKDIPOTMP because that query may be executed in the critical lock path of the SELECT FOR UPDATE.
15a57c95913b86e6eeac034addc7a48c, cb3e7321fc94e1ba375d6595622f4026, de6b3906f59796ff3192b51f631905c4, fbb83898141f2f33d86725785915fc25 (SELECT on FKKDIPOTMP): This access to table FKKDIPOTMP with selection conditions on columns PROGID and INTNR can sometimes suffer from an inadequate order of condition evaluation due to data skew (SAP Note 3513868). Typically, PROGID is most selective, but sometimes SAP HANA may decide to enter the evaluation via INTNR.

Please implement SAP Note 2700051 in order to provide a good statement hint for this query, making sure that the table is accessed via the typically most selective PROGID condition.
CALL GET_ACCESSED_OBJECTS_IN_STATEMENT: This procedure is used to determine which objects are accessed in a database request.

Among others it is used when a SQL statement is executed with the SQL editor of ABAP transaction DBACOCKPIT (SAP Note 2222220) in order to determine the involved objects for a proper table-level authorization check (SAP Note 1933254). The call is only executed when table specific permissions are configured, so it can be suppressed by assigning general permissions (e.g. S_TABU_SQL = * or SAP_ALL).
2cf6dfab2c7a730bc56b40fc3d64734c, c38678b42400ea7b7252e3409ad6db3f (UPDATE on GLFUNCT): Long runtimes are typically caused by record lock contention (SAP Note 1999998). Implement SAP Notes 2193726 and 2296436 in order to speed up processing and reduce lock times.
5ec05e8ced6066fbe5a94cb1ef5c130a, 5fa89dc1471bdeddba67e0088dd52b96, 690cf05502836d8ed30d809eadc360ec, 7c541b915a48ae457956cda4e9702780, 83ac4bf74da990133f1c525d05f43714, a99311b79cb2fb8706e979cda20edb44, db2a5d8b668a837677bb6946de2a8d76, e5b0749c2865512bc6a4f70d75c932a4, e7aa79c355895c079e00ef03ffdfcc47, f6283fb30b61f0b8dd66c0255521b881, fc1e3ccb2049891578e6dcb4deaa71e0 (INSERT on ALERT_BLOCKED_TRANSACTIONS, CALL COLLECTOR_HOST_BLOCKED_TRANSACTIONS, HOST_BLOCKED_TRANSACTIONS_BASE): In rather idle systems it is normal that these statistics server operations appear at the top of the SQL cache and runtimes up to 700 ms / execution are normal and acceptable. In this case no action is required.

With SAP HANA >= 2.00.047 the statistics server actions for blocked transactions are redesigned, providing both more reliable results and better performance.

The performance can also suffer if the implicit access to M_ACTIVE_STATEMENTS takes a long time. Check section M_ACTIVE_STATEMENTS for possible reasons and optimizations.
064bc772039ccbfe53c34b0d6bbd0aa9 (INSERT on HOST_JOB_HISTORY_BASE): This INSERT is executed in context of the statistics server (SAP Note 2147247) collector COLLECTOR_HOST_JOB_HISTORY. This collector is scheduled every 60 seconds per default although the underlying SAP HANA monitoring view already provides historic information with a much higher retention time. In order to reduce the overhead, you can increase the interval from 60 to 600 seconds:
UPDATE _SYS_STATISTICS.STATISTICS_SCHEDULE SET INTERVALLENGTH = 600 WHERE ID = 5048
The SAP default is adjusted to 600 seconds with SAP HANA >= 2.00.059.12 and >= 2.00.076 (issue number 316265).
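The currently configured interval can be verified before and after the adjustment, e.g.:

SELECT ID, INTERVALLENGTH FROM _SYS_STATISTICS.STATISTICS_SCHEDULE WHERE ID = 5048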
3824c73a54b350c95c843aed2aa8c70f (DELETE on HOST_OBJECT_LOCK_STATISTICS_BASE): This DELETE purges old records from the statistics server history for object lock statistics. It can be particularly expensive if many unnecessary entries with OBJECT_NAME = '(unknown)' exist in HOST_OBJECT_LOCK_STATISTICS_BASE. This problem is fixed starting with SAP HANA 2.0 SPS 00. You can consider using SAP HANACleaner (SAP Note 2399996) to configure an automatic cleanup. The following command can be used for a manual cleanup:
DELETE FROM _SYS_STATISTICS.HOST_OBJECT_LOCK_STATISTICS_BASE WHERE OBJECT_NAME = '(unknown)'
See SAP Note 2147247 for more information related to the SAP HANA statistics server.
fb8b1663df25927cbd8041043a715541 (INSERT on HOST_SERVICE_THREAD_SAMPLES_BASE): This command is used by the statistics server (SAP Note 2147247) to generate the history of thread samples. Longer runtimes are possible if you increase the thread samples in memory (SAP Note 2114710, parameters service_thread_sampling_monitor_max_samples, service_thread_sampling_monitor_max_sample_lifetime) or if you reduce the history collector interval length (collector Collector_Host_Service_Thread_Samples, ID 5034).
f816711c913d152181a1e2d3d3d1dc43 (INSERT on HOST_SERVICE_THREAD_CALLSTACKS_BASE): This insert populates the call stack history table HOST_SERVICE_THREAD_CALLSTACK_BASE that can e.g. be evaluated via SQL: "HANA_Threads_Callstacks_History" (SAP Note 1969700). Historic call stacks are helpful for troubleshooting purposes and so it is usually acceptable that this statement generates some load. See SAP Note 2313619 for more information related to SAP HANA call stacks.

There can be overhead if the call stacks are captured with the interval of 300 seconds because in this case often a lot of other statistics server activities are captured. See SAP Note 1999993 (check ID M0752, "Historic thread call stacks interval (s)") and make sure that the interval is set to 299 seconds in order to avoid correlation with other activities.
SELECT on HRP1002: Accesses to table HRP1002 can suffer from missing indexes on columns TABNR and OTJID. See SAP Note 2549948 for more information.
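If such an index is missing, a possible DDL sketch is shown below (the index name and single-column layout are only illustrative; check SAP Note 2549948 for the exact recommended definition):

CREATE INDEX "HRP1002~TAB" ON "HRP1002" ("TABNR")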
a6ce4874ce95d9e50e2180f9325c7f38 (UPSERT on HTTP_CORS_LOG): UPSERTs of HTTP_CORS_LOG can suffer from transactional lock wait contention (SAP Note 1999998). In most cases, this contention is the victim of a more severe underlying issue. See a more detailed discussion for table USR02 that is also a typical victim of general issues.
See also SAP Note 3279311 for some other potential reasons.
b2d58abfa55b95b98e4632f0111ce077, ee8c70fd188e0b92eba9b72e484abc90 (UPDATE on IBINST_OBJ): This update can suffer from exclusive lock waits and deadlocks when many variant configuration statistics are written concurrently via ABAP V2 update. SAP Note 1548171 provides more efficient alternatives, e.g. the collective update using report RCU_IBSTAT_UPDATE_STATISTICS.
22835ac37c9fbb1e0ee1dba2819bf2e9, 66342c6548a87008636bb0cda6db8872 (UPDATE on IBINVALST_SYM): This update can suffer from exclusive lock waits and deadlocks when many variant configuration statistics are written concurrently via ABAP V2 update. SAP Note 1548171 provides more efficient alternatives, e.g. the collective update using report RCU_IBSTAT_UPDATE_STATISTICS.
bc89b21e84146d3db77c5536c3c9e736 (SELECT on INDEX_COLUMNS, INDEXES): This query is used by the change data capture (CDC) ABAP report DHCDCR_PUSH_CDS_DELTA to identify indexes and index columns of related tables with the naming convention /1DH/RDB*.
19d5723835e910684f6df4334659e652 (DELETE on INDX): Deletions from INDX can suffer from the deletion scenario described in SAP Note 1999998 ("Why are there locks and deadlocks that can't be explained by the actual modification mechanisms?"). SAP Note 2640640 provides a fix from business perspective.

SAP Note 2666147 provides another application correction to improve lock scenarios from application perspective.
23b51d96b5791b86e3f69cf47dbae4b6, a0b0fa5c32900b69be4dd54c52bbedfa (UPDATE on INDX): Updates on INDX in context of method BUFFER_WRITE_ATTRIBUTES of class CL_HR_GENAT_SCENARIO (application source CL_HR_GENAT_SCENARIO==========CP:1042) can suffer from transactional lock contention due to frequent organizational management buffering when many HR objects are modified. Implement SAP Note 3513096 to minimize the overhead.
SELECT on ITEM: Check SAP Note 2535647 that provides optimizations for this database request from Manufacturing Execution application side.
3766a846a4ad28a7dadeb08a144b51ea, 8574088c0e4e86fee180b6de980ab0b2 (SELECT on IUUC_ARCH_USERS): The selection from table IUUC_ARCH_USERS with a "user_name = SESSION_CONTEXT('APPLICATIONUSER')" condition is part of SLT triggers (SAP Note 2800020) when the following option is activated in transaction LTRS:

Options for Archiving
-> Action Taken if Source Table Record is Archived
-> Do Not Delete Corresponding Target Table Record

This feature makes sure that data isn't deleted in the target system when archiving (with the archiving user configured in transaction LTRS being stored in table IUUC_ARCH_USERS) happens in the source system.

Make sure that you only activate this option when you need to preserve archived data in the SLT target system and there is actually archiving configured for the table in question.

In scale-out systems, you should make sure that IUUC_ARCH_USERS is locally available on all relevant scale-out nodes in order to avoid inter-node communication. If required, configure table replication for IUUC_ARCH_USERS (SAP Note 2340450) so that a table replica is available on all relevant scale-out nodes.
Alternatively, check if using the "SAP HANA - any version" approach described in SAP Note 2617971 can be used. This approach directly selects the user names rather than having to select them from the IUUC_ARCH_USERS table.
0f156f5e07b2865a9f1fed5837a036c3 (SELECT on IUUC_TAB_ALLOWED): This SELECT COUNT executed in context of SLT can be improved with an optimized buffering delivered via SAP Note 3154154.
130dfe8491e99e9960690f9fce983a0a (DELETE on /IWFND/I_MED_CTC): This DELETE originating from method REMOVE_MODELS of class /IWFND/CL_MED_MDL_CACHE_PERSIS (application source /IWFND/CL_MED_MDL_CACHE_PERSISCP:621) can suffer from exclusive lock waits and deadlocks (SAP Note 1999998) due to flaws in the application design. Check SAP Notes 2631265, 2889829 and 3041609 for application corrections reducing the risk of locking and deadlock scenarios.
1c413a671a05e2fb627f278474dcea61 (INSERT on /IWFND/SU_STATS): Increased INSERT times in table /IWFND/SU_STATS can be caused by a significant amount of INSERT operations that can be switched off via SAP Note 2293307 or a programming error that is fixed with SAP Note 2508563.
4307a6026c11673d8137fb240393637c (UPDATE on J_3RKKRO): Concurrent processing can lead to transactional record lock contention. Proceed according to SAP Note 2371528 in order to minimize the risk of contention. Alternatively, check if you can use the Offsetting Account Determination (SAP Note 2621199) that no longer relies on table J_3RKKRO.
ce499972ced797474c71d21bae6b4cf9 (INSERT on JOB_LOG): This INSERT into the JOB_LOG table of schema _SYS_XS includes an implicit NOT EXISTS anti join on the same table and so it can become more expensive when there are many records in the JOB_LOG table. See SAP Note 2388483 -> JOB_LOG and make sure that no longer required entries are cleaned in time and the amount of XS classic jobs is kept at a reasonable level.
SELECT on KNA1: Expensive selections from table KNA1 triggered by SELECT COUNT requests on ABAP side in report RKDFHDB are possible when no KUNNR condition is specified. SAP Note 3246036 provides a correction.
13a2bac734add8e6c33e837b146cef62, c027840449c9bf6bffff7b7ff813feaf (SELECT on KONV): Queries on KONV with a selective condition on column KNUMH can take advantage of an additional KONV index on column KNUMH, see for example SAP Note 2424784 for SD_COND_ARCH_WRITE.
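A possible DDL sketch for such an index (the index name is only an example; check the referenced SAP Note for the recommended definition):

CREATE INDEX "KONV~KNU" ON "KONV" ("KNUMH")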
7d45827e7d5b5ce271baa827730985ba (SELECT on J2EE_CONFIGENTRY): This query can be expensive if no index on column CID exists. See SAP Note 3233072 and create an additional J2EE_CONFIGENTRY index on column CID.
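A possible DDL sketch for this index (the index name is only an example; see SAP Note 3233072 for the exact recommendation):

CREATE INDEX "J2EE_CONFIGENTRY~CID" ON "J2EE_CONFIGENTRY" ("CID")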
969667ed5f022a232f710b5a5af51dff (SELECT on J2EE_CONFIGENTRY): This query is not expensive, but in context of transactional LOBs it can be responsible for a high number of SQL contexts and an increased size of the Pool/Statistics or Pool/RowEngine/QueryExecution/SearchAlloc. See SAP Notes 2220627 -> "What are transactional LOBs?" and 2711824 for more details.
8559ff443a3816983aeac38377e37d8d (INSERT on LICENSE_MEASUREMENTS_): This statement is used for license measurement. It can be quite expensive in system databases in case a particularly high number of tenants exist. See SAP Note 3334173 for more information.
2d1ea9a365efd64d1504ba5dc255b0db and others (SELECT on /LIME/NTREE): This selection with an application source like /LIME/SAPLQUERY:28438 originates from include /LIME/LQUERYTDI. It is executed with the DBI hint &prefer_join_with_fda 0& so that FDA WRITE is not used (SAP Note 2921070) and instead long OR concatenations of

"LFT" > ? AND "LFT" <= ? AND "LVL" > ?

are generated. This scenario is often not processed in an optimal way by the HEX engine (SAP Note 2570371) and a NO_USE_HEX_PLAN statement hint (SAP Note 2400006) can improve the performance.
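Assuming a sufficiently new SAP HANA 2.0 revision with statement hint support, such a hint can be pinned to the statement hash, e.g.:

ALTER SYSTEM ADD STATEMENT HINT (NO_USE_HEX_PLAN) FOR STATEMENT HASH '2d1ea9a365efd64d1504ba5dc255b0db'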
2bae02d679a579d736bf92d660282c2f and others (SELECT on /LIME/NTREE, /SCWM/HU_IW01): SAP Note 2738822 implements NO_CS_JOIN into ABAP code of /LIME/LQUERYBH1 in order to improve performance.
7a00a013b367f505d7840506e237169e (SELECT on LMDB_P_INSTANCE): This selection originating from CL_LMDB_CIM_PERS_DB can suffer from a missing index on column VALUE. In order to optimize the performance you can create a single column index on column VALUE of table LMDB_P_INSTANCE.
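A possible DDL sketch for this single column index (the index name is only an example):

CREATE INDEX "LMDB_P_INSTANCE~VAL" ON "LMDB_P_INSTANCE" ("VALUE")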
76f246be96028db1e4ac3997b6a07373 (SELECT on M_ACTIVE_STATEMENTS): On SAP HANA 1.0 this query is executed by the mvcc_anti_ager module (MVCCGarbageCollector) that regularly checks for problems impacting the row store garbage collection (SAP Note 2169283). It usually doesn't require many resources or show a high runtime, but it may be reported when you have low-level system problems (e.g. communication problems between hosts in SAP HANA scale-out environments). Reason: It is one of the first SQL statements executed against the database, even before any business transaction is started.

In large systems with a lot of active or prepared SQL statements it is possible that this query shows a long runtime and an increased resource consumption. In order to prevent overhead it is recommended to adjust the following SAP HANA parameter from a 10 seconds to a 300 seconds check interval (SAP Note 2600030):

Long runtimes of the query can also be caused by other reasons described below in the general M_ACTIVE_STATEMENTS section.
Various hashes (SELECT on M_ACTIVE_STATEMENTS): Long runtimes accessing M_ACTIVE_STATEMENTS are usually caused by many entries in M_ACTIVE_STATEMENTS and / or M_PREPARED_STATEMENTS (issue number 258995). See SAP Note 2088971 -> M_PREPARED_STATEMENTS for possible reasons of a high number of entries.
c5c2b97695d4cac5f3993ff79002576b (SELECT on MARA): This selection in method /SCWM/IF_AF_MATERIAL_BASE~READ_PRODUCT_GROUPING of class /SCWM/CL_AF_MATERIAL_BASE_S4 may repeatedly read a rather high number of records. It can be optimized with SAP Note 3041119.
Various hashes (SELECT on MARC, MARD, MARDH): If these tables are already implemented as compatibility views you can check question "How can compatibility view accesses be tuned?" below for more details related to compatibility view tuning.
a505eb1288614d619b31f160eb1b0e5b, b8b6f286b1ed1ef2e003a26c3e8e3c73, cf2e9b374514550f2b2e522df9f619ec, e04936562f4402c22dda54cb98807a5d, 16970b5f1021649f4b79dde9017812c2, 2afa9311f17e325d6d1418b3dd3eb388, 2c7032c1db3d02465b5b025642d609e0, 5ec9ba31bee68e09adb2eca981c03d43, 5f42d3d4c911e0b34a7c60e3de8a70d2 (SELECT on M_BACKUP_CATALOG_FILES, M_BACKUP_CATALOG, SOURCE_ALERT_65_BACKUP_CATALOG): These SELECTs are regularly issued by the backup console in SAP HANA Studio. In order to minimize the impact on the system you should open the backup console only when required and not permanently.

If you repeatedly need to open the backup console, you can increase the refresh interval (default: 3 seconds). [Screenshot: refreshInterval.jpg]

Check if the backup catalog is unnecessarily large and delete old catalog entries if possible using:

BACKUP CATALOG DELETE ALL BEFORE BACKUP_ID ...
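The current size and age of the backup catalog can be checked for example like this:

SELECT ENTRY_TYPE_NAME, COUNT(*), MIN(SYS_START_TIME) FROM M_BACKUP_CATALOG GROUP BY ENTRY_TYPE_NAME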
e291174cbc7ab2c68740114ac34bfd3e (SELECT on M_BACKUP_SIZE_ESTIMATIONS): This query is executed starting with SAP_BASIS 7.50 SP19 when ABAP transaction ST04 is called or when "Current Status" -> "Overview" is selected in transaction DBACOCKPIT (SAP Note 2222220). The runtime depends on the maximum converter page number (M_CONVERTER_STATISTICS.MAX_PAGENUMBER) because all these converter pages have to be scanned and it takes roughly 1 second to scan 50 million converter pages. So in a system with 800 million converter pages runtimes of around 16 seconds can be expected.

The related call stack typically looks as follows, indicating a scan of converter pages:

PageAccess::Converter::Stream::getCurrent
PageAccess::Converter::estimateBackupSize
DataAccess::PersistenceManager::estimateBackupSize
ptime::BackupSizeEstimationsMonitor::estimateBackupSize

The maximum page number (MAX_PAGENUMBER) can be considered as a high water mark of the system while the currently allocated pages (ALLOCATED_PAGE_COUNT) can be much smaller. So if for whatever reason (e.g. many file LOBs, massive garbage collection issues) the number of converter pages was much higher in the past, the MAX_PAGENUMBER will remain on that level and apart from recreating the database it is not possible to reduce it.
You can avoid the delays caused by this statement execution when you call transaction DBACOCKPIT instead of transaction ST04, because DBACOCKPIT has another landing page that doesn't require this information.
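The relevant high water mark can be checked per volume via the monitoring view mentioned above, e.g.:

SELECT HOST, PORT, VOLUME_ID, MAX_PAGENUMBER, ALLOCATED_PAGE_COUNT FROM M_CONVERTER_STATISTICS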
Various hashes (SELECT on MBEW, MBEWH, MBVMBEW, MBVMBEWH, MCHB, MCHBH): If tables MBEW, MBEWH, MCHB and MCHBH are already implemented as compatibility views you can check question "How can compatibility view accesses be tuned?" below for more details related to compatibility view tuning.
bb45c65f7cc79e50dbdf6bbb3c1bf67e, b0f95b51bcece71350ba40ce18832179 (SELECT on M_CONNECTIONS): This SELECT is executed when a SAP ABAP work process connects to SAP HANA. Implement a sufficiently new DBSL patch level according to SAP Note 2207349 in order to replace this query with a more efficient approach based on session variables.
8b0c1d926307b660a1797c05d46162b8, fcdf8f8383886aaf02c3a45e27fbd5f2 (SELECT on M_CONNECTIONS, M_ACTIVE_STATEMENTS): This SELECT originates from function module DB_WP_CURRENT_SQL and is used to determine the statement text and the application source for a database request currently executed in a work process (SAP Note 1902446). It is normal that it runs for up to 1 second. A high load is usually caused by a high number of executions that can have different reasons:

High number of calls to transaction SM50 / SM66 (manually or due to a monitoring tool)
High number of calls to the session monitor in transaction DBACOCKPIT (manually or due to a monitoring tool)
Capturing of /SDF/MON data with activated check box "SQL statements"
Make sure that monitoring levels and frequency remain on a reasonable level and don't overload the database with monitoring requests.
Various hashes (SELECT on M_CONTEXT_MEMORY): Long runtimes accessing M_CONTEXT_MEMORY are usually caused by a high number of records in that monitoring view. See SAP Note 2088971 -> M_CONTEXT_MEMORY for possible reasons of a high number of entries.
abf60afa26d3529bb3b2ce9599187a3b (UPSERT on MCSIDIR): If this UPSERT operation from program SAPLMCEX fails due to a deadlock you can consider SAP Note 2492966.
f1b12a80a3eb489df86f5e25097725db (SELECT on M_CS_ALL_COLUMNS): This statement is used in context of the SAP Early Watch Alert (EWA) download and selects table column information by specifying the TABLE_OID in the WHERE clause. The TABLE_OID is not supported by SAP HANA for initial filtering, thus, the M_CS_ALL_COLUMNS information is calculated for all tables before the information for the table in question is filtered. SAP Note 3347789 provides a fix by changing the WHERE clause from TABLE_OID to the efficiently supported SCHEMA_NAME and TABLE_NAME filters.
Various hashes, e.g. 1964c8ab878cde5f306d9f605e3b94ed, 1f989f4acd844d502ccd445f69f5ea02, 3596bfa8e0055ff715ef52570ff66128, 38a8e11286f7309f2715c07c270a473b, 430c496e0fe15c0353c80de1c72caab1, ba81a383d98a296d2e44e04278ccb770, c6edbab2e426f7e83b156ed06ccdf6bb, d3759ce6047b78f61d5fc3be392d0336, ecfca5a26b8a5959c1b5825e800fcfea (SELECT on M_CS_ALL_COLUMNS, M_CS_COLUMNS, M_CS_TABLES): A particularly high thread activity can be observed in systems where CPU workload management (in particular based on parameter default_statement_concurrency_limit) isn't properly set up, a high number of parallel threads is used and contention on IniFileLock (SAP Note 1999998) happens. In this case see SAP Notes 2222250 and 2600030 and make sure that a reasonably small value for default_statement_concurrency_limit is set.

With SAP HANA >= 2.00.041 the calculation of values for memory related columns (*MEMORY_SIZE*) in monitoring views M_CS_ALL_COLUMNS, M_CS_COLUMNS and M_CS_TABLES also considers columns that are currently not loaded (fix for issue number 198542). This can result in overhead in terms of CPU consumption, JobWorker utilization and runtime when memory related columns are accessed in these monitoring views. Among others the following statistics server actions can suffer:

ID 17: Alert_Mon_Column_Tables_Record_Count (statement hash 09ebcf4d5dc0954389551c4b23c05d9a)
ID 20: Alert_Column_Tables_Size_Growth (statement hashes 1f989f4acd844d502ccd445f69f5ea02, 3596bfa8e0055ff715ef52570ff66128, d3759ce6047b78f61d5fc3be392d0336)
ID 27: Alert_Partitioned_Table_Record_Count (statement hash 26daef039f35e5df24eceb94ccdfb462)
ID 29: Alert_Delta_Mem_Merge_Dog (statement hash 38a8e11286f7309f2715c07c270a473b)
ID 40: Alert_Mon_Part_Table_Size_Host_Total_Mem (statement hash 430c496e0fe15c0353c80de1c72caab1)
ID 45: Alert_Mon_Part_Table_Size_Host_Main_Mem (statement hash ba81a383d98a296d2e44e04278ccb770)
ID 5007: Collector_Host_Column_Tables_Part_Size (statement hash c6edbab2e426f7e83b156ed06ccdf6bb)

Also accesses in context of SAPHostAgent (user SAPDBCTRL) like hash 1964c8ab878cde5f306d9f605e3b94ed can be affected.

Avoid reading memory information from these monitoring views on a regular basis.

Increased memory consumption on M_CS_TABLES is possible with SAP HANA 2.00.059.05 - 2.00.059.11 and 2.00.070 - 2.00.076 (SAP Note 3434490).

In order to limit the load spikes / CPU consumption introduced by the statistics server (SAP Note 2147247) you can set up a workload class with a maximum thread limit as described in SAP Note 2970921. Starting with SAP HANA 2.0 SPS 06 this workload class is delivered per default (unless an individual workload class already exists). Make sure that the statistics server related workload class isn't too restrictive in terms of the total thread count (SAP Note 2970921). In general you should configure 25 % of the available CPU threads as total thread count, but stay in the range of 4 to 20.

You can also consider the following workload parameter setting that limits the maximum parallelism of SAP HANA monitoring view accesses (SAP Note 2222250):

indexserver.ini -> [row_engine] -> num_parallel_monitor_thread

Resource overhead in terms of CPU and memory is reduced with SAP HANA >= 2.00.059.02 and >= 2.00.061 (issue number 266068).
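The parameter can be set online, e.g. as follows (the value 4 is only an example and needs to be chosen based on the available CPU resources):

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('row_engine', 'num_parallel_monitor_thread') = '4' WITH RECONFIGURE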
c8f682aa3be41b442f4aa42395c5a099 (SELECT on M_CS_PARTITIONS): These selections with TABLE_NAME = 'MAI_UDM_STORE' may be executed way too frequently in FRUN environments. See SAP Notes 3384080 and 3350069 for more details and resolution steps.
3c4d665564c85a6e1fbaf3d149934e5a, 4a5b6efc4e44ff925434b049d502f878, 6c734feb16ab862202c7221de63131d0 (SELECT on M_CS_TABLES): These queries are linked to the BW sizing report /SDF/HANA_BW_SIZING (SAP Notes 1736976, 2296290). Avoid running it frequently and / or during peak load times. Starting with version 2.5.4 of the sizing report the number of M_CS_TABLES accesses is reduced by factor 3.
53f12e37eda8e9a6f4cade65138ec110 (SELECT on M_CS_TABLES): This query checks for history tables with more than 1 billion records in the history partition:

...
WHERE
T.RAW_RECORD_COUNT_IN_HISTORY_MAIN +
T.RAW_RECORD_COUNT_IN_HISTORY_DELTA > 1000000000

It is related to the "Table histories > 1 billion rows" metric that is executed by Solution Manager / FRun every 30 minutes per default. In the majority of all systems no history tables exist, so the check is not required. You can check via SQL: "HANA_Configuration_Overview" (SAP Note 1969700) and check the "History tables" line for the actual number of existing history tables in the system.

Depending on the result you can either deactivate the metric or increase the collection interval (e.g. to once per day) for the "Table histories > 1 billion rows" metric in Solution Manager / FRun. See SAP Solution Manager System Monitoring for more information.
4268da3d3e23cf17374d2d849e34927c, 7fec4d6bc9b347a461c3edfc739184af (SELECT on M_CS_TABLES, M_RS_TABLES, M_SERVICE_MEMORY, M_HOST_RESOURCE_UTILIZATION): These accesses are standard monitoring queries. If they are executed frequently, they are most likely issued by the SAP HANA monitoring of the SAP Database Migration Option (DMO) - "SAPuptool hdbmonitor". You can reduce or disable the amount of queries with the following SUM parameters (SAPup.par / SAPup_add.par):

/proc/dbmonsleep: Frequency of monitoring checks in seconds (default: 30)
/clone/hdbmonitor: Activation / deactivation of SAP HANA monitoring (default: ON), can be set to OFF in order to disable the accesses completely

fea41d9c262ad1d7413b1dfe33c61ff1, 344262f32bb29df38139b498a712987e, 2986820e47db8abf2a2a8341e9b81af4, fa0d29c997bcd593669b75226fcff4ff, 0ff809b4d4c723a6d1b3d8835d77d89a, 1d612f19e240183a67f0434bba4dbdd7, 8946ba4387f1fe2dbcda9433fe5c42ef, 2fed6e6d601555b4330f6bbe02a985aa, d19f00cc8330e8de6ae4d78498229642 (SELECT on M_DATABASE): This query is executed by JDBC clients (SAP Note 2393013) when establishing a connection to the database. Consider the following optimizations:

Check if it is possible to keep connections open for a longer time rather than to establish and close connections with high frequency.

A high number of parses (SAP Note 2124112) can be caused by the requirement to change the password for the connection user (e.g. due to force_first_password_change = true). In this case the JDBC client parses and executes the query before it demands a password change and terminates in case there is no user interaction. In this scenario the invalidation reason is OBJECT VERSION MISMATCH(OBJ-ID:<user_obj_id>). Make sure that JDBC connection users, particularly in context of automatic monitoring tools, are able to connect to the database successfully without a requirement to adjust the password.

See also the below general M_DATABASE performance discussion.
3124bc8c7b10028d6797d19db8992d58, 76423bbac93cdca6dec17fed34496465, 94927f293e398e2e45d82d96a19a96a6 (SELECT on M_DATABASE): Accesses to M_DATABASE are typically quick. Slowness has been observed in context of operating system calls related to time and timezone. Typical call stacks are:

__lll_lock_wait_private
__tzset
__GI_timelocal
str_as_time
NameServer::TNSClient::getMinStartTime
ptime::DatabaseMonitor::update
ptime::DatabaseMonitorHandle::create_objects
ptime::Monitor_scan::do_open

__lll_lock_wait_private
__tz_convert
ranged_convert
__mktime_internal
str_as_time
NameServer::TNSClient::getMinStartTime
ptime::DatabaseMonitor::update
ptime::DatabaseMonitorHandle::create_objects
ptime::Monitor_scan::do_open

Possible analysis and optimization steps:

Proper timer / timezone configuration on SAP HANA and OS level (SAP Note 2100040 -> __tz_convert)
Collection of additional operating system details at the time of the problem: OS kernel stack back trace (SAP Note 2166414), gstack (SAP Note 2257203)
Checking for potentially responsible kernel taints via sapsysinfo.sh (SAP Note 618104)
9068544c5cbd8307f39cba079be0f964, ae5d89091865b496d3d81c5c1a174019, e52a73036933d1721ed9081ae8849fdf (SELECT on M_BACKUP_SIZE_ESTIMATIONS, M_DATABASE, M_HOST_RESOURCE_UTILIZATION, M_TABLE_PERSISTENCE_STATISTICS, M_TABLES): These queries to M_DATABASE and other monitoring views originate from the external Dynatrace tool. Check in collaboration with your Dynatrace partner how to optimize these requests or reduce the execution frequency.
48227050702b6788417bcdd09e6c6760 (UPSERT on MDG_MDF2035): The table MDG_MDF2035 contains information for the master data governance (MDG) timestamp service and transactional lock wait contention is possible in scenarios with an extremely high amount of concurrent changes. In this case SAP Note 2228078 can be considered to deactivate the MDG timestamp service if possible.
SELECT MDMA (statement hash de32c5c30fab83809bdc8baeec360c63):

This SELECT issued by method ASSIGN_MRP_AREA of class CL_PPH_MRPRECORD suffers from an empty FOR ALL ENTRIES list, so that only MANDT is used in the WHERE clause and basically the complete table content is retrieved. SAP Note 2260410 provides an application correction.
SELECT _SYS_SR_SITE_<site_name>.M_EVENTS (statement hashes 02a1e492e2f681959509afca5b3a20b3, 346fa3f331c1986006ada8918aee1d90, 58b942966bd98138845ccca5e5513ea9, 6e40b2c493f0fffbe39d79ff66bc1af8, bf12292cbd91dfd4d56053f397224271, ea4958c3e5667e0038894be6674ab4aa):

SELECTs to M_EVENTS on remote system replication sites via the system replication proxy schema _SYS_SR_SITE_<site_name> can suffer from network problems or unavailabilities related to the secondary system. In the worst case selections get stuck until the connection timeout (default: 180 s) is reached.

As a consequence the following statistics server (SAP Note 2147247) actions can take a longer time:

ID 20: Alert_Internal_Events
ID 30: Alert_Internal_Disk_Full_Events
ID 78: Alert_Replication_Connection_Closed
ID 89: Alert_Missing_Volume_File

See SAP Note 1999880 and make sure that system replication is established properly. Check according to SAP Note 2222200 if the network configuration of the primary and secondary SAP HANA system replication site works properly.

As a temporary workaround it is possible to reduce the connection timeout (default: 180000, unit: ms, i.e. 180 seconds):

<service>.ini -> [communication] -> default_connect_timeout
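Adjusting this parameter can be sketched in SQL as follows; the target ini file and the 30-second value are example choices, not recommendations:

```sql
-- Sketch: lower the connection timeout to 30 s (30000 ms) on system level.
-- 'indexserver.ini' and '30000' are example values - pick what fits your case.
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('communication', 'default_connect_timeout') = '30000'
  WITH RECONFIGURE;
```

Remember to revert the parameter to its default once the underlying network or secondary-site problem is resolved.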
SELECT M_INIFILE_CONTENTS (statement hash 9dc47c00d2d9e66ec1e06459e1e87676):

This selection is done in context of service data download (/BDL/TASK_PROCESSOR). It is usually harmless and quick, but it can lead to some unexpected database trace entries like:

Uncaught remote exception left during request destruction
active channel with method handleHexDistributionRequest is readable while sending data

As indicated in SAP Note 2380176, these messages are related to fallback considerations during distributed HEX queries (M_INIFILE_CONTENTS is distributed across several services also in single node systems) and can be ignored. If you want to suppress them, you can define a USE_HEX_PLAN statement hint (SAP Note 2400006) for this query so that the HEX engine will no longer consider engine fallbacks.
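Pinning such a hint can be sketched as follows (hint mechanics per SAP Note 2400006; hash-based statement hints are only available with newer SAP HANA 2.0 revisions, so verify availability on your release first):

```sql
-- Sketch (assumption: hash-based statement hints are supported on this revision).
-- The hash is the one quoted for this query; double-check it in your own SQL cache.
ALTER SYSTEM ADD STATEMENT HINT (USE_HEX_PLAN)
  FOR STATEMENT HASH '9dc47c00d2d9e66ec1e06459e1e87676';
```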
SELECT MKPF (various statement hashes):

If this table is already implemented as compatibility view you can check question "How can compatibility view accesses be tuned?" below for more details related to compatibility view tuning.
SELECT M_LANDSCAPE_HOST_CONFIGURATION (statement hash 8e2be385cf6cfba7e80a7c3fc4be81a6):

This access is triggered by ABAP report SAPLSRFC respectively function module RFC_SYSTEM_INFO respectively ABAP kernel call RFCSystemInfo. This expensive ABAP kernel check exists in ABAP kernels 7.85 (<= 220) and 7.89 (<= 62) and is fixed with newer kernel versions. See SAP Note 3265730 for more information.
SELECT M_LICENSE (various statement hashes, e.g. 455a6193f1440e44490f0e8d5ee87d0a, 8cdf6af3445a91a4f3a9125cb1dbb341, cc8716b21968abdd86f26faea08ceaa3, cc8c76c98e842c0db137e2a03895f2bc, ebd30763da6cf827578153bb37f3ed6e):

Accesses to monitoring view M_LICENSE involve communication with the nameserver. The thread waits in state "Network Poll" while the nameserver processes the request and a typical call stack looks like:

__GI___poll
...
NameServer::TNSInfo::sendRequestTo
NameServer::TNSInfo::processRequest
NameServer::TNSClient::processRequest
NameServer::LicenseClient::getLicenseInfo
ptime::LicenseMonitor::create_objects
ptime::Monitor_scan::do_open

Thus, problems of the nameserver can result in slow or hanging M_LICENSE requests. Double-check the health of the nameserver. SAP Note 3343278 describes one possible root cause of a hanging nameserver with SAP HANA <= 2.00.059.09, <= 2.00.067.02 and <= 2.00.072.

Accesses to M_LICENSE can happen in the following contexts:

License check (e.g. based on request lic/trigger_update_license, SAP Note 2114710)
Statistics server (SAP Note 2147247) operations (ALERT_LICENSE_EXPIRING, ALERT_PRODUCT_PERCENTAGE_USAGE, COLLECTOR_TEL_LICENSE, COLLECTOR_TEL_LICENSES)
Monitoring tools
SELECT M_LOAD_HISTORY_HOST (statement hash 8bde35673a45c0b2984b24cf199dc175):

This SELECT is executed by SAP HANA Cockpit (SAP Note 2185556) in XS classic when the CPU tile is called. A high number of executions is typically linked to a configured automatic CPU tile refresh. In order to reduce the load you should either reduce the refresh frequency or deactivate automatic refresh (Settings -> Automatic Tile Refresh).
SELECT _SYS_SR_SITE_<site_name>.M_LOG_SEGMENTS (statement hash 4c93c09e31313c80e1e5c427d174d86a):

SELECTs to M_LOG_SEGMENTS on remote system replication sites via the system replication proxy schema _SYS_SR_SITE_<site_name> can suffer from network problems or unavailabilities related to the secondary system. In the worst case selections get stuck until the connection timeout (default: 180 s) is reached.

As a consequence the following statistics server (SAP Note 2147247) action can take a longer time:

ID 72: Alert_Log_Segment_Count

See SAP Note 1999880 and make sure that system replication is established properly. Check according to SAP Note 2222200 if the network configuration of the primary and secondary SAP HANA system replication site works properly.

As a temporary workaround it is possible to reduce the connection timeout (default: 180000, unit: ms, i.e. 180 seconds):

<service>.ini -> [communication] -> default_connect_timeout
SELECT M_PREPARED_STATEMENTS (various statement hashes):

Queries on M_PREPARED_STATEMENTS can be expensive when many entries exist in M_PREPARED_STATEMENTS. See SAP Note 2088971 -> M_PREPARED_STATEMENTS for possible reasons of a high number of entries.
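A quick way to see where the entries come from is to aggregate per connection; this is a sketch based on the documented M_PREPARED_STATEMENTS columns:

```sql
-- Sketch: connections holding the most prepared statements
SELECT CONNECTION_ID, COUNT(*) AS NUM_PREPARED
  FROM M_PREPARED_STATEMENTS
 GROUP BY CONNECTION_ID
 ORDER BY NUM_PREPARED DESC;
```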
INSERT / SELECT M_RS_INDEXES, M_RS_TABLES (various statement hashes, e.g. 0c2d40de2702bc576de87a9aec2cef30, 3b9fd71de9695ead62b08fa990106e04, 9fb114381d4932bd068275d5a964f2ab, d2245dad2c0de4c5eb013392c5eddc32, f45e2eaba236b6655804f6585d631923):

Accesses to monitoring views M_RS_INDEXES or M_RS_TABLES can be expensive in case of large row stores. The following statistics server operations (SAP Note 2147247) are impacted:

Collector COLLECTOR_GLOBAL_ROWSTORE_TABLES_SIZE (statement hashes 0c2d40de2702bc576de87a9aec2cef30, d2245dad2c0de4c5eb013392c5eddc32, f45e2eaba236b6655804f6585d631923)
Collector GLOBAL_TABLE_PERSISTENCE_STATISTICS (statement hash 3b9fd71de9695ead62b08fa990106e04, indirect M_RS_TABLES access via M_TABLE_PERSISTENCE_LOCATION_STATISTICS)
Collector HOST_RS_INDEXES (statement hash 9fb114381d4932bd068275d5a964f2ab)

Check if you can reduce the row store size, e.g. by data management and archiving or by moving large tables to column store. With SAP HANA >= 2.00.080 the implementation of M_RS_TABLES is improved (Jira HDBDEVSUPPORT-160), resulting in reduced runtimes.
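The largest row store tables can be determined with a query like the following sketch (column names as documented for M_RS_TABLES):

```sql
-- Sketch: top 10 row store tables by allocated size
SELECT TOP 10 SCHEMA_NAME, TABLE_NAME, RECORD_COUNT,
       ALLOCATED_FIXED_PART_SIZE + ALLOCATED_VARIABLE_PART_SIZE AS ALLOCATED_SIZE
  FROM M_RS_TABLES
 ORDER BY ALLOCATED_SIZE DESC;
```

The result is a good starting point for deciding which tables are candidates for archiving or for a move to the column store.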
SELECT _SYS_SR_SITE_<site_name>.M_SAVEPOINTS (statement hash faf34398b15a9f96958d83a267b99ab4):

SELECTs to M_SAVEPOINTS on remote system replication sites via the system replication proxy schema _SYS_SR_SITE_<site_name> can suffer from network problems or unavailabilities related to the secondary system. In the worst case selections get stuck until the connection timeout (default: 180 s) is reached.

As a consequence the following statistics server (SAP Note 2147247) action can take a longer time:

ID 54: Alert_Mon_SavePoint_Duration

See SAP Note 1999880 and make sure that system replication is established properly. Check according to SAP Note 2222200 if the network configuration of the primary and secondary SAP HANA system replication site works properly.

As a temporary workaround it is possible to reduce the connection timeout (default: 180000, unit: ms, i.e. 180 seconds):

<service>.ini -> [communication] -> default_connect_timeout
SELECT MSEG (various statement hashes):

If this table is already implemented as compatibility view you can check question "How can compatibility view accesses be tuned?" below for more details related to compatibility view tuning.
SELECT M_SERVICE_REPLICATION (statement hashes 066d8841facca43e927ef61253ccd2d1, 2a4f9ce41e671c3043ca01ee2f53c482, 6335cad129f59838e2077de33f61e599, 66e2c312f4556658bbb191a63ec4eb32, 8f3a3253cd1fc89d43ea42d7532b1015, b00125455f78c4d794fba903eea620aa, c0aa71d9f784996b626e589b74544c38, c47f784b695a3f686bd010b765419faf, cdd18181642050527b4a1a95d00e3f3e, d952da8472c0d4350559843cbfd48d0e, ef7d4dc44f62c30868abd8bcdaaacd57):

SELECTs to M_SERVICE_REPLICATION read some data from the secondary system replication site. They can suffer from network problems or unavailabilities related to the secondary system.

As a consequence starting ABAP transaction DBACOCKPIT (SAP Note 2222220) can take a long time (statement hash b00125455f78c4d794fba903eea620aa).

In addition, the following statistics server (SAP Note 2147247) actions can take a longer time:

ID 94: Alert_Replication_Logreplay_Backlog
ID 104: Alert_Replication_Logshipping_Backlog
ID 5046: Collector_Host_Service_Replication

See SAP Note 1999880 and make sure that system replication is established properly. Check according to SAP Note 2222200 if the network configuration of the primary and secondary SAP HANA system replication site works properly.

For SAP HANA <= 2.0 SPS 05 see SAP Note 3392865 and make sure that outdated entries in the DNS cache can't impact the query performance. Starting with SAP HANA 2.0 SPS 06 the DNS cache is purged every 5 seconds, so it should no longer have an impact.

As a workaround in case of an unavailable secondary system you can disable the secondary system check temporarily (SAP HANA Rev. >= 1.00.102.03 and >= 1.00.111):

global.ini -> [system_replication] -> check_secondary_active_status = 'false'

Alternatively it is possible to reduce the connection timeout (default: 180000, unit: ms, i.e. 180 seconds):

<service>.ini -> [communication] -> default_connect_timeout

See SAP Note 2736804 for more details.
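Setting the workaround parameter can be sketched as follows; remember to revert it once the secondary site is available again:

```sql
-- Sketch: temporarily skip the secondary availability check
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('system_replication', 'check_secondary_active_status') = 'false'
  WITH RECONFIGURE;
```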
SELECT M_STATISTICS_LASTVALUES / UPDATE STATISTICS_LASTVALUES (statement hashes 09214fa9f6a8b5322d329b156c86313b, 28996bd0243d3b4649fcb713f43a45d7, 7439e3c53e84b9f849c61d672da8cf79, 84e1f3afbcb91b8e02b80546ee57c537, 9ce26ae22be3174ceaffcfeee3fdd9b7, f32de4a673809ad96393ac60dfc41347):

This SELECT is executed by the Solution Manager in order to read history data from the statistics server. It is particularly critical if the standalone statistics server is used. The UPDATE is executed by the standalone statistics server in order to update statistical information.

Implement SAP Note 2147247 ("How can the memory requirements of the statistics server be minimized?"), so that the collection and processing of statistical data is optimized. A switch to the embedded statistics server is typically already sufficient to resolve this issue.
SELECT M_SYSTEM_OVERVIEW (statement hash 3795b564b0c728737e695b1021ac3582):

Accesses to monitoring view M_SYSTEM_OVERVIEW can be slow if the retrieval of disk information (thread method: remotediskinfo) takes longer than expected. Check the threads / thread samples for remotediskinfo (SAP Note 2114710) and check operating system / hardware for bottlenecks (SAP Note 1999930) in case remotediskinfo calls are a major contributor to the runtime.
SELECT M_SYSTEM_REPLICATION (statement hashes 100b6ebd38bbbc84c20e887f4c51339e, 32b53f05a6d99d8e55f349cc1d9580e5, 35c3ed2d193b5930001b02832ead9245, 7d80c6be4848f2f9fe778fbe81427fb4, a0adfca9b52ccf6778f8d5d0e1efb0cb):

SELECTs to M_SYSTEM_REPLICATION read some data from the secondary system replication site. They can suffer from network problems or unavailabilities related to the secondary system.

As a consequence starting ABAP transaction DBACOCKPIT (SAP Note 2222220) can take a long time (statement hash a0adfca9b52ccf6778f8d5d0e1efb0cb). Also some statistics server (SAP Note 2147247) queries can suffer.

See SAP Note 1999880 and make sure that system replication is established properly. Check according to SAP Note 2222200 if the network configuration of the primary and secondary SAP HANA system replication site works properly.

As a workaround in case of an unavailable secondary system you can disable the secondary system check temporarily (SAP HANA Rev. >= 1.00.102.03 and >= 1.00.111):

global.ini -> [system_replication] -> check_secondary_active_status = 'false'

Alternatively it is possible to reduce the connection timeout (default: 180000, unit: ms, i.e. 180 seconds):

<service>.ini -> [communication] -> default_connect_timeout

See SAP Note 2736804 for more details.
SELECT M_TABLE_PERSISTENCE_STATISTICS, M_CS_TABLES_ (statement hash 4db908d3710d6c3639b2a9e48322d10a):

This query is triggered by the SAP HANA license check (lic/license_measurement) on an hourly basis starting with SAP HANA SPS 12. It is usually running with a reasonable performance. In systems with a low overall workload it can nevertheless still be among the top SQL statements.
SELECT M_TABLE_PERSISTENCE_STATISTICS, TABLES (statement hash 9a2ce762f4885e347f4567f9ee27c35d):

This query is triggered by the SAP HANA license check (lic/license_measurement) on an hourly basis. The underlying view M_TABLE_PERSISTENCE_STATISTICS is quite expensive and so runtimes of several minutes in larger systems are acceptable. Starting with SAP HANA 1.0 SPS 12 the license check was adjusted and so this expensive query is no longer executed.
SELECT M_TEMPORARY_TABLE_COLUMNS (statement hashes 9d3480ff01bd8fc51759b4e9c8341e0d, fe769268583b8b8715593ac10f0c1fd8):

This query retrieves columns for temporary tables following the <sid>.<client>.<guid32> naming convention (SAP Note 2800007) in context of preparations for BW query processing. Thus, you have to accept a certain load coming from this query in case many BW queries are executed in the system.

Queries on M_TEMPORARY_TABLE_COLUMNS can suffer from a missing CATALOG READ privilege. Proceed according to Long runtime of query on SAP HANA dictionary objects and monitoring views.

Situations have been observed where a high number of active, but waiting JobWorkers were spawned, impacting the stability of the database (issue number 283156). If a similar scenario happens, you can capture runtime dumps (SAP Note 2400007) and open a SAP case on component HAN-DB-PERF for a more detailed analysis.
SELECT M_TEMPORARY_TABLES (various statement hashes, e.g. 0b7761c6b9ffcda1debae659cfd7a53e):

Queries on M_TEMPORARY_TABLES can suffer from a missing CATALOG READ privilege. Proceed according to Long runtime of query on SAP HANA dictionary objects and monitoring views.
SELECT M_TEMPORARY_TABLES (statement hash 6ea085309583e2436162a48b8cc62cb4):

This selection with a WHERE clause consisting of SCHEMA_NAME and TABLE_NAME originates from report TREX_EXT_INDEX_CELL_TABLE. Among others it can be frequently triggered by the ESH real-time indexing job ESH_EX_FU_DEMON. The related application source is the generic CL_SQL_STATEMENT class. See SAP Note 3225546 for an application optimization.
SELECT M_TRACEFILE_CONTENTS (statement hashes 0ad1a4c1c1a5844d01595f5b3cdc2977, 4592fda53cacd47ac7263ae58c4f585d):

This SELECT is regularly issued by the backup console in SAP HANA Studio. In order to minimize the impact on the system you should open the backup console only when required and not permanently, and avoid large sizes (> 50 MB) for the backup.log file.
SELECT M_TRANSACTIONS_, M_TRANSTOKEN_DIRECTORY_ (statement hash 376071168f97a75c7ada05cda1a612ec):

This query is executed in the context of the kernel sentinel (SAP Note 2800055) and it can be very expensive when many transactions exist with TRANSACTION_ID = -1 (issue number 258410, fixed with SAP HANA >= 2.00.054). You can improve the performance by checking and eliminating reasons for a high amount of transactions with TRANSACTION_ID = -1.
SELECT M_UNDO_CLEANUP_FILES (statement hash 2820eb46f5c1af6dbfc7e1a890f605a2):

Selecting the number of cleanup files from M_UNDO_CLEANUP_FILES via the hdbsql client is related to standard garbage collection monitoring implemented in SAP ECS environments using SAP HANASitter (SAP Note 2399979):

-cf "M_UNDO_CLEANUP_FILES,WHERE,TYPE='CLEANUP',9999999999"

This monitoring is currently reviewed and improvements (e.g. reduction of collection frequency or deactivation) are planned for Q3 / 2023 (Jira MCDGSSDBOPERATIONS-13603).

The selection can be particularly expensive if many cleanup files exist. See SAP Note 2169283 and make sure that garbage collection isn't unnecessarily blocked for a long time.
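Checking the current cleanup file situation yourself can be sketched as follows (columns as documented for M_UNDO_CLEANUP_FILES):

```sql
-- Sketch: number of undo / cleanup files per type and service
SELECT HOST, PORT, TYPE, COUNT(*) AS NUM_FILES
  FROM M_UNDO_CLEANUP_FILES
 GROUP BY HOST, PORT, TYPE
 ORDER BY NUM_FILES DESC;
```

A persistently growing file count is an indicator of blocked garbage collection as described in SAP Note 2169283.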
SELECT NAST (statement hashes 239985e53e6238f61141da5e9e694f32, 28a81c5e4deeaadfac5aebf0a24d26b6, 54c080dd251a046a06e2641ff2cb9da8, b0101e8a31307c6f509cbc726e348f21, baf38fec735db85754081db918a22922, d918985d5f15b886ce0b82e10fbf3243, e1e746510efab6b3c433909529143774, e46b50ff293f9bc624293af592af80b1):

Accesses to table NAST with selection conditions on columns KAPPL and OBJKY can sometimes suffer from an inadequate order of condition evaluation. Typically, OBJKY is most selective, but if during first parsing a highly selective KAPPL value is used, KAPPL may be used as a starting point of the execution plan, leading to bad performance when later on less selective KAPPL values are processed (SAP Note 3513868).

Please implement SAP Note 2700051 in order to provide good statement hints for the most important NAST queries, making sure that the table is accessed via the typically most selective OBJKY condition.
SELECT FOR UPDATE NRIV, NRIV_LOKAL (statement hashes 002a7edbc17efff01430ea0c727b7458, 11df5737a4a42ed26e0121151c778785, 12eeafbd3aae528306a8158ec8606fe9, 165a0f6e267af880a21e667e84dfb311, 4476d72f3fd4f3fba01fd14ff1f92be4, 5e852025fa9822ebada6b23500c87d8e, 62ccf74416521350984e10a991cf71a3, 6b1d10732401fe82992d93fa91f4ae86, 6e8a0d4379e7da3618e69fa4c72d3312, 7e2f1e1dda7eaa7f552f8cce75614838, 891cf32d4e5f766efcac3070b576597f, 89aef8e08979481bdcfb712dd1796e49, 9968071caf7c6402f1b89e85b3d1439d, b3687e541a32eab7c40f1dd2c4af6f4d, baed6a80eae9d5e505c41cd07d125624, c3d7d15bec5e66ec6ab15e86527bcca5, ed36d9e0501d5392cbde83889ed2ff54):

Check if you can reduce the critical time frame between SELECT FOR UPDATE and the next COMMIT. Identify the involved number range objects from application perspective and optimize their number range buffering.

The NRIV_LOKAL access is related to the local number range buffering approach. As described in SAP Note 504875 this approach can result in exclusive lock waits in case of long running batch jobs requiring new numbers.

See SAP Notes 599157 and 840901 and consider using parallel number range buffering in order to reduce the risk of lock situations.

Be aware that bad performance for the SELECT FOR UPDATE on NRIV and NRIV_LOKAL can also be a consequence of some underlying general issues, so if it happens occasionally and in combination with some other issues, it can be a symptom of another problem and not an issue itself.

Depending on the impacted number range object you can proceed with buffering as follows:

BW DIMIDs and SIDs (BIM*, BID*): SAP Note 857998
EDOITPROGR: SAP Note 2696897
FKK_BELEG: SAP Notes 2276534, 3077239
RF_BELEG: SAP Note 1398444; with SAP S/4HANA >= 1610 buffering is activated per default for RF_BELEG (SAP Note 2376829), taking advantage of table NRIVSHADOW for persisting currently used number range values.
RV_BELEG: SAP Note 1524325

Be aware that even with activated buffering there can be exceptions defined for certain sub-objects, e.g. related to specific countries. This can still introduce contention and should be avoided. You can find these exceptions in transaction SM56 -> Goto -> Entries. Lines with "Critical Number" = '' and "Buffer to No." = '00000000000000000000' indicate disabled buffering. These exceptions are configured via transaction SPRO -> "Financial Accounting" -> "Financial Accounting Global Settings" -> "Business Transaction Events" -> Settings -> "Process Modules" -> "of a customer" and should be eliminated whenever possible. The business transaction event (BTE) for RF_BELEG is 00001170.
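Current lock waits on NRIV can be checked with a query like the following sketch against M_BLOCKED_TRANSACTIONS (the exact column set depends on your revision):

```sql
-- Sketch: who is currently blocked on NRIV and by whom
SELECT BLOCKED_TRANSACTION_ID, LOCK_OWNER_TRANSACTION_ID,
       WAITING_SCHEMA_NAME, WAITING_TABLE_NAME, LOCK_TYPE, BLOCKED_TIME
  FROM M_BLOCKED_TRANSACTIONS
 WHERE WAITING_TABLE_NAME = 'NRIV';
```

Long or recurring waits here point to the number range objects whose buffering should be optimized as described above.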
SELECT _SYS_REPO.NRIV FOR UPDATE (statement hash afd1b7b7694d54bca2fc84117792ffbf):

Table NRIV in schema _SYS_REPO is responsible for number range handling in context of SAP HANA tasks like activations (DOMAIN = 'activation_id'). Concurrent activations serialize on transactional NRIV record locks, so to reduce locking you have to avoid concurrent activations and check for reasons resulting in particularly long running activations.

Additionally, unexpected delays during activation can also result in increased contention. For example, slowness accessing an MSSQL remote source during creation of a virtual table because of many dlclose operations massively slowed down an activation while the NRIV lock was held. In this case it is useful to preload the library according to SAP Note 2741672 and check with Microsoft for other reasons contributing to the long runtime.

When the lock timeout is reached, the following message may be recorded in the SAP HANA database trace (SAP Note 2380176):

Repository activation lock timeout. Another activation is still running. Try again later.
DELETE NRIVSHADOW (statement hash 60c366c73e378be600fe0a0621cb36a8):

Increased runtimes including locks and deadlocks (SAP Note 1999998) can be a consequence of the ABAP program error described in SAP Note 2357744.
SELECT NRIVSHADOW (statement hashes 0e35685e8bbd908e025f3eca1f063085, 17629effd42806185aec18c7d090a7db, 7c1c0248cd1601fee27aa2ab8f244d8d):

This query is expensive because the INSTANZ column isn't specified in the WHERE clause and so the existing primary key can't be used efficiently.

SAP Notes 2257975, 2790738, 2924805 and 3055954 provide coding corrections that reduce the number of NRIVSHADOW requests without the INSTANZ condition. If the remaining NRIVSHADOW accesses without INSTANZ are still responsible for significant load and runtime, you can create a secondary index on columns OBJECT, SUBOBJECT, NRRANGENR and TOYEAR to support the query without INSTANZ optimally.

A high number of records (millions) in NRIVSHADOW can also be responsible for increased runtimes. See SAP Note 2530392 and consider a cleanup using report NK_REORGANIZE.
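Creating such an index directly on database level can be sketched as follows; the index name is a placeholder, and in ABAP systems the index should preferably be created via the ABAP dictionary / database utility so that it survives system copies and upgrades:

```sql
-- Sketch: support NRIVSHADOW accesses without the INSTANZ condition.
-- "NRIVSHADOW~Z01" is a placeholder name following the ABAP naming convention.
CREATE INDEX "NRIVSHADOW~Z01" ON NRIVSHADOW
  (OBJECT, SUBOBJECT, NRRANGENR, TOYEAR);
```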
SELECT NSDM_V_MARC, NSDM_V_MARD, NSDM_V_MARDH, NSDM_V_MCHB, NSDM_V_MCHBH, NSDM_V_MKPF, NSDM_V_MSEG (various statement hashes):

These objects are compatibility views (e.g. NSDM_V_MARC -> compatibility view on table MARC). You can check question "How can compatibility view accesses be tuned?" below for more details related to compatibility view tuning.
SELECT NSDM_V_MCHB (statement hash 9c560d67815987d3a20359ecc2251b6b):

This query originating from line 761 of report SAPLCOWB is optimized with the correction available in SAP Note 2737478.
SELECT NSDM_V_MSSA, NSDM_V_MSSQ (statement hash b29bb65fa699a56349dfa9436f523eb6):

The existence check on tables MSSA or MSSQ in transaction MB52 can take some time. SAP Note 2830243 provides an optimization from business perspective.
SELECT NSDM_V_V_CF_MCHB (statement hash 7c99e482e615c0beba986c58e91c0c97):

This query originating from application source SAPLV01F:1479 respectively line 22 of function module VB_READ_BATCH can be expensive when it is processed with FDA WRITE (SAP Note 2399993) because the underlying view NSDM_V_MCHB_DIFF contains some SUM aggregations of CASE expressions and these expressions are calculated first before data is aggregated or filtered. In order to bypass this issue you can implement SAP Note 3115064 that disables FDA WRITE.
SELECT NWPLACE_AD1 FOR UPDATE (statement hash 953dc8254b96ed3f0af9fbcdbc2513ed):

In program SAPLNPA10_WP the system saves the user variant for Clinical Process Builder even when there were no changes. This can result in unnecessarily long record lock waits (SAP Note 1999998). Implementing SAP Note 2737474 ensures that personal settings are saved only when changes were done.
DELETE ODQDATA (statement hashes 022a0656e4b70c0e2124497d9cb56075, 060a6bf7aa804dae3c55a08136dbe38e, 11c7cb3e6ab61c8e9dc01ffe3d673f03, 2c578e3d400023f7f367f0860f8b55a0, 3a8b3213ddae57ed4f037c19a66d067b, 3ba12256f9f2a2b79e75095cd817ad9f, 5a0a91f1b589d2cb7d0f55be856be0b2, 6ac093ce5b7210ce431f81f7fd798a18, 720bb6c662543b066728ee98c6e129ad, 7b0b3c36174db29a794b82eebb0d9fc1, 884514c4f1ae4cd7aebe63d3bc816167, a227ea3fecf3c34a4d3fdff1488ad005, b5bb116f425f16a50d63c3ea9fd8a4ae, c513a46e8923e772c4e5343a0cac82ff, cc5a32805a0924dc52d1e8a236711e8c, cf4b4a85ebe1be6ffe754d6f7fa08e86, e245088da460600132029e0bcfb9775e, ef9d983128f4be754244d5be4aeea34c):

Deletions of ODQDATA objects based on TID IN lists from method DELETE_BUCKET of class CL_ODQ_CORE_SERVICE (application source: CL_ODQ_CORE_SERVICE===========CP) may sometimes use a full table scan rather than using the primary key to filter the TID list. Implement the statement hints available via SAP Note 2700051 in order to force a good primary key access for IN lists with lengths <= 10 elements.

The deletions may also suffer from the fact that SAP HANA can't evaluate IN lists on DECIMAL columns properly with Revisions <= 2.00.037.04 and <= 2.00.045 (SAP Note 2914233). See SAP Note 2842894 for recommendations how to bypass this issue from application side.
SELECT OPERATION:

Check SAP Note 2535647 that provides optimizations for this database request from Manufacturing Execution application side.
DELETE /PLMB/FRW_BUFFER (statement hash 32139366d69f10543bc7abfc605b37a2):

This DELETE can suffer from exclusive record lock waits due to an inappropriate application COMMIT strategy. See SAP Note 2236233 for a coding correction.
DELETE / INSERT /PLMB/SEA_OBJMAP (various statement hashes due to explicit MANDT literal, e.g. 24fc53bf1b175edceab063fd21bee64a, ca5feffa109ed4f58a92e5508232cc13, 38550b31044d5e679d998956cf958cfa):

The deletion and insertion from /PLMB/SEA_OBJMAP based on the result of a join with tables DRAD and STXH is executed in context of batch job /PLMB/SEA_EXTRACT_OBJKEY on a 3 minute basis. Due to the complex join with string manipulations like SUBSTRING and TRIM and due to a high number of involved records it can be quite expensive. SAP Note 3306740 provides a fix by eliminating these statements.
SELECT POC_D_EVTQ (statement hash 5468e2292c2d169ac5d318f9f08f287a):

This SELECT originating from report POCR_PROCESS_EVENTQUEUE_UNI suffers in case of a large number of records in table POC_D_EVTQ because at first all records of the current client are sorted and only then the first records are returned. See SAP Note 2071310 and make sure that the process observer event buffer tables are cleaned in time so that they don't grow to large sizes.
SELECT PPOIX, PPOPX (statement hash 4670a96d6abf4b048d40d28fbf1f466a):

Bad performance of extractor 0HR_PY_PP_1 can be improved by activating the secondary indexes PPOIX~001 and PPOPX~001. See SAP Note 2285753 for more information.
SELECT PROCEDURE_PARAMETER_COLUMNS (statement hash 2dcac2da4115847242aed4c2e1d9cb38):

This query retrieves metadata when processing XS engine requests. In particular it is needed when a stored procedure containing one or more in-place table parameters is being called from within an XS classic application, e.g.:

create procedure proc_inplace_tbl (in a table(id int), out b table(id int))
language sqlscript reads sql data as begin
b = SELECT * FROM :a;
end;

You can avoid these metadata lookups by specifying a table type or table name as parameter type, e.g.:
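The alternative with an explicit table type can be sketched as follows (tt_id and proc_tbl_type are placeholder names):

```sql
-- Sketch: a dedicated table type avoids the in-place table definition
create type tt_id as table (id int);

create procedure proc_tbl_type (in a tt_id, out b tt_id)
language sqlscript reads sql data as begin
  b = SELECT * FROM :a;
end;
```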
SELECT QIWKTAB FOR UPDATE (statement hashes a77c6081dd733bd4641d4f42205f6c84, 3359a412c4e756a428bcafedd78471d9, 5f74d263c5c217551d4afc30848c1791):

This SELECT FOR UPDATE can suffer from row store garbage collection activities if the QRFC scheduler issues a very high amount of updates on a few records of this table. This typically happens if there are STOP entries in transaction SMQ2 that never can be processed, but result in a looping scheduler check. Go to transaction SMQ2 and check if there are STOP entries in any client which can be removed. After cleanup the update frequency should be significantly reduced and the row store garbage collection is able to keep up without problems. See SAP Note 2169283 for more information related to garbage collection.

SAP Note 2125972 provides an application correction to reduce the amount of updates on QIWKTAB.
INSERT QRFC_I_QIN_LOCK (statement hash 5331b867628b64a02706379d271bef5b):

This insert from class CL_BGRFC_UNIT_HANDLER_INB_Q is used to synchronize concurrent accesses to the same qRFC queue. In case of many parallel work processes accessing the same queue, significant numbers of unique constraint violations are possible, resulting in exception handling overhead that is a normal consequence of many unique constraint violations. See check ID C0400 ("Exception unwinding") of SQL: "HANA_Threads_Callstacks_MiniChecks" (SAP Note 1969700) for details (SAP Note 2313619).

Check from application side if it is possible to reduce highly parallel accesses to the same qRFC queue in order to reduce unique constraint violations and exception handling overhead. Make sure that the fix from SAP Note 2124871 is applied in the system.
SELECT R_COLLSPROMISETOPAYINVOICETP (various statement hashes):

Selections from CDS view R_COLLSPROMISETOPAYINVOICETP (and other tables) originating from UDM_SUPERVISOR can be expensive because predicate pushdown may be blocked by outer join processing. SAP Note 3335213 provides an optimization.
SELECT REMOTE_SUBSCRIPTION_DATA_CONTAINERS (statement hash 5b9df4f9bb94ffd6ef445e2a7d8d3ad2):

This selection is triggered by the DPReceiverHouseKeeping thread in context of SDI (SAP Note 2400022). Proceed according to SAP Note 3258907 in order to reduce runtime or execution count.
SELECT REPOLOAD (statement hashes 8394dec3bca547c12e1728a859ab3952, 87f6d1130f139af11559242490feca24):

Selections on the ABAP report load table can suffer from delays reading LOB files from disk. In this case you typically see I/O related information in the threads, e.g. "Resource Load Wait", PrefetchIteratorCallback or PageIO::SyncCallbackSemaphore. See SAP Note 1999930 and make sure that I/Os are processed efficiently. For example, bottlenecks in the I/O stack, overloads due to savepoints of huge table optimizations or resource container unloads can result in unnecessarily long I/O read times.
CALL REPOSITORY_REST (statement hashes 15f8c397054d817883a13955fe18855f, ce0d9cedb9b4a0194dd551c7f06c22ad, d339f40ed4667a80eef50d8d130e5dec, d8fc952923d1e879d7350e61ae04d4ce):

REPOSITORY_REST is the central procedure for activations based on the repository API. It can be quite expensive in case of comprehensive activations and a lot of dependencies. Among others it is implicitly called when using the perspectives for Modeling, Development and Administration in SAP HANA Studio. Starting with SAP HANA 2.0 SPS 03 this API is deprecated and should be replaced with the SAP HANA deployment infrastructure (HDI). See SAP Notes 2425002 and 2465027 for more details.
SELECT FOR UPDATE / UPDATE REPOSRC (statement hashes 33f3b2704fa52cb1b85b047a5def4f01, 3fa9f17cac39444f386b6f8a4638ffff, 80cd171d058d64da0c342ed85fb1c6a6, bbe68180a2e6fcaec3edc6007985f932, b28dca04ced6f8b3495e3970c0ef36a7, cd2266401b89265e8b51686955960c63):

These lock waits are typically linked to SAP ABAP compilations. Typical reasons are:

Imports of transports in an online system so that ABAP recompilation is required; try to schedule imports during non-critical time frames.
ABAP support package upgrades in parallel to business workload; plan them based on SAP Note 1803986 in order to minimize the impact on the system.

If REPOSRC locks happen on a frequent basis in combination with ABAP dumps like DDIC_TYPE_INCONSISTENCY, DDIC_TYPE_REF_ACCESS_ERROR, DDIC_TYPES_INCONSISTENT, LOAD_PROGRAM_CLASS_MISMATCH, LOAD_PROGRAM_INTF_MISMATCH, LOAD_PROGRAM_MISMATCH, LOAD_PROGRAM_TABLE_MISMATCH, LOAD_TYPE_VERSION_MISMATCH, SYNTAX_ERROR or TYPELOAD_NEW_VERSION it can be caused by inconsistent objects. In this case you have to look out for the impacted objects and repair / regenerate them.

These ABAP dumps can also show up in context of the problem described in SAP Notes 2172127 and 2182690 (permanent recompilations in context of ABAP table structure changes like the creation of indexes). In this case it is required to restart the SAP application servers.

SAP Notes 2831890 and 3069620 describe REPOSRC lock scenarios caused by bugs in the ABAP kernel.
SELECT REPOSRC (statement hash 3ac852b3d17c8ce87c7c5be218f22be7):

This selection reads ABAP source code based on PROGNAME and R3STATE conditions. A significant number of accesses is possible in context of transaction SGEN / report RSPARAGENER8M / job SAP_SGEN_REGENERATE_LOADS that are used to regenerate ABAP loads, e.g. after an upgrade. In this case it is normal to see a spike of executions. See SAP Note 1989778 for more information related to SGEN.
UPSERT on RODPS_REPL_TID (statement hash: a1bab370bb198999df9cc84a5c2d7fd7):
This modification in context of application source CL_RODPS_REPLICATION==========CP:3900 respectively passport action BIDTPR_* and application component RODPS_REPL_ODP_PREFETCH or RODPS_REPL_ODP_FETCH_XML is done on a single record basis, resulting in a potentially high number of executions in context of mass extractions. SAP Note 3406265 provides a performance optimization by switching from single record to mass modifications.
UPDATE on ROOSPRMSC (statement hash: ef04ace2ae2ff448ef0b19ca999a6d37):
This update is executed in function module RSA1_DELTA_REQUEST_WRITE and can suffer from lock contention when extractions of the same data source are done simultaneously for SAPI and ODQ targets. As described in SAP Note 2660461 it is intended behavior that simultaneous extractions are blocked. You need to adjust your scheduling of extractions to make sure that extractions for the same data source happen at disjoint times if you want to avoid the lock contention. The related table columns are:
OLTPSOURCE: Data source with overlapping extractions
SLOGSYS: Source system (i.e. the system where the lock contention happens)
RLOGSYS: Identifier for the receiving system (e.g. <sid>_<client> for SAPI and $ODQ_DUMMY for the ODQ target)
SELECT FOR UPDATE on RSAPOADM (statement hash: 806953dd0df65b4f67c0de0c33d28da7):
Check if the correction provided in SAP Note 2371147 can be used to resolve the problem.
INSERT on RSAU_BUF_DATA (statement hash: 47b3bd84eec1cce26c1ba82cde2ee2a0):
A high number of inserts into table RSAU_BUF_DATA can be a consequence of too fine-grained security audit logging. See SAP Note 2191612 for more information related to the security audit log and make sure that it is configured as slim as possible and as required from a security perspective.
Because the primary key contains a timestamp on a seconds basis, a significant amount of unique constraint violations can happen, which can result in system CPU consumption and exception unwinding as described in SAP Note 2313619 -> C0400 ("Exception unwinding").
SELECT on RSBERRORLOG (statement hash: efd74759c6dc35f70ccd3529b62588f3):
The runtime is high because of the ORDER BY clause that can't be supported by the primary index with SAP HANA. See SAP Note 2117500 and implement the coding correction or upgrade to a newer SAP BW support package level to eliminate this ORDER BY.
UPSERT on RSBMREQ_DTP (statement hash: 7551257378ec73e2ba93b5e5117b6c8f):
Increased UPSERT times on a partitioned RSBMREQ_DTP table with SAP HANA <= 112.06 and <= 122.03 can be caused by the problem described in SAP Note 2373312.
INSERT on RSBKDATA (statement hash: dbff6e35c53899c2c47e38e671f47386):
The table RSBKDATA is used as runtime buffer for the BW Data Transfer Process (DTP). In case of a very high number of INSERTs you can determine the responsible requests (column REQUID30) using this table and check if they can be adjusted, e.g. by switching from full to delta load or by reducing the load frequency.
SELECT on RSDDSTATEVDATA (statement hashes: 1d2893314f809f3e491e5628f6683ea5, 3ff7bab8b9778b77b821816c3a147f81, 6662b470dc055adfdd4c70952de9d239, 93db9a151f16022e99d6c1c7694a83b0, c706c6fb087b78f6d07b5ae0848179a3, ccbffaf1607aab010cbd4512626e10fd, d740f122ae4c4ae3664a09482c249947, df4382e56e6c6b193dac9ac7ab4d7897):
Consider reducing the execution frequency of the related Solution Manager extraction "WORKLOAD ANALYSIS (BI DATA)".
Delete / archive old records from RSDDSTATEVDATA (see the related SAP Note). If the minimum STARTTIME in table RSDDSTATINFO is from 30 days ago, the actual retention time is 30 days. You can reduce it via parameter TCT_KEEP_OLAP_DM_DATA_N_DAYS (SAP Note 891740).
UPDATE on RSDD_TMPNM_ADM (statement hash: 07a4169722d9848d8150f53037f3a6fe):
Table RSDD_TMPNM_ADM contains counters for temporary BW table names. Only a few records are updated frequently. In row store this can result in garbage collection related performance issues (see SAP Note 2169283). Therefore it is recommended (and also the default with newer SAP ABAP releases) that RSDD_TMPNM_ADM is located in the column store. You can move the table to column store using the following command:

ALTER TABLE RSDD_TMPNM_ADM COLUMN
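After the move you can verify the store type via the SYS.TABLES system view (a minimal sketch):

```sql
-- IS_COLUMN_TABLE = 'TRUE' indicates the table now resides in the column store
SELECT TABLE_NAME, IS_COLUMN_TABLE
FROM SYS.TABLES
WHERE TABLE_NAME = 'RSDD_TMPNM_ADM';
```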
SELECT FOR UPDATE on RSDVENQUEUE (statement hashes: 3d408762388035d1c8534ad62e157c8b, a70c4031ece0e895dfaaf4c4e9acd690):
Long runtimes of a SELECT FOR UPDATE triggered from an application source like SAPLRSDV:6780 can be caused by transactional lock contention due to inadequate application coding. SAP Note 3136261 provides a coding correction that is also available with support packages SAPK-20011INDW4CORE respectively SAPK-30001INDW4CORE.
SELECT on RSHIEDIR (statement hash: 915d86cf95219e5c3c51a5aad207e19d):
If a high number of selections from table RSHIEDIR happens, you can check if SAP Notes 2078190 and 2742363 can help to reduce the execution frequency.
SELECT / UPDATE on RSICCONT, RSMONICDP, RSMONMESS, RSSELDONE, RSSTATMANPART, RSSTATMANPARTT, RSSTATMANREQMAP, RSSTATMANSTATUS, RSTSODSPART, TESTDATRNRPART0 (statement hashes: 0e30f2cd9cda54594a8c72afbb69d8fd, a115fd6e8a4da78f0813cfe35ffc6d42, 8a8ec2baa7873e835da66583aa0caba2, ba5bc36900c83234ea654c6f538ad0b3, 470c9174c550352f622b5c476a622589, 00218f32b03b44529da647b530bf4e6d, f58ad21a76b2ec4c8e4ba409e2550c7b, 02400fb6f17140aba360cc2753c975d3, 457606afbf7f2292aa1a8e9be58801dd, 4c99241383fa8c3d847bd1a550628f8a, b51046ad3685e1d45034c4ba78293fd8, ceaedf22c0a0de014a6191520d723b81, ec91f8ecc5030996f1e15374f16749a8, f0a4785a6ada81a04e600dc2b06f0e49, c6378cd6747517a52433d978b84d1b80, 14e0f25144b799e6377579185a7cbe58, 422199ddf7bf9b481e1e6b541d7c1c1f, 52063e6acb3492518389d230490492f7, 78cfbe0776760c57376389aa595672ff, 8d87d6a303389d8a3f2c72639910da49, a3df1aa41b220869d74683a8fe9b1f98, c3a9a6620824cd0c931ae3a0e88942c6, 4fe60bbfbfa2979b89e8da75f5b2aac7, 840338617fa7a1b55977fbe6899145bb, aae8d7722c8c6a8195d3954d12a3f2a8, ca8758b296bd2321306f4fee7335cce5, 2aeb4f7ffd47da8917a03f15a57f411a):
If these queries return a significant amount of records (> 100), it might be caused by a high number of existing requests. In this case you can check the following:
- Run SQL: "HANA_BW_DataTargets" (MIN_REQUESTS = 10000) available via SAP Note 1969700 or check table RSMDATASTATE_EXT manually for data targets with REQUESTS_ALL > 10000.
- See SAP Note 2037093 and reduce the request lists of data targets with more than 10,000 requests using programs like RSSM_AUTODEL_REQU_MASTER_TEXT and RSSM_REDUCE_REQUESTLIST.
- See SAP Note 2049519 and make sure that you don't run into follow-up problems caused by reduced requests.
DELETE / UPDATE on RSIX_MANDTINDEP (statement hashes: fb1a593e493a9443415b1b5207054e68, 912e1379e9ebf67109eb00df87dd2d76):
The changes to this table depend on the BW configuration RSD_UNION_DEL_SHRD_BUFF_RFC that can be maintained in table RSADMIN. As described in SAP Note 2686186 setting this parameter to ' ' can lead to locking issues during times of increased system load, and so transactional SAP HANA locks can be observed (SAP Note 1999998).
DELETE / SELECT on RS_LOB_GARBAGE_<id>_ (statement hashes: e345f4b3df20ddfbcd37ee4494fd0430, 94bc362b8f70a9460f583ac71e08b9d0, 9321a96c237894be2e9c80cb6a021f43, 7973b84c289fb68dc3d480eb83aad0ee, 7c71e94c46f7f62b679b83b2d67746c9, ee341ad92f3a9d4b359b3e7fa7a44cec):
These queries are linked to the redesigned row store LOB garbage collection with SAP HANA 1.0 >= SPS 12. The number at the end is the volume ID assigned to the related SAP HANA service.
Long runtimes on these tables are typically caused by a bug in SAP HANA 1.00.120 to 1.00.122.04. See SAP Note 2413261 for more information.
SELECT / DELETE on RS_LOB_GARBAGE_<id>_ (statement hashes: da78d5abc901ba7166721c78d78697db, 1aab02cdb52e14600b7fd7e1b666ad33, 2b8bd3d2501cfe61cd64e2ec30a7f83a):
These queries are related to row store LOB garbage collection.
A very high number of executions with 0 processed records can happen on SAP HANA 1.00.122.16, 2.00.012.04, 2.00.024.00 - 2.00.024.01 and 2.00.030 due to the problem described in SAP Note 2633077.
SELECT on RS_LOB_GARBAGE_<id>_ (statement hash: 5a9fa03bb31bec877a0b0d00c761a252):
Queries of the following type can be expensive in case of blocked garbage collection when more and more row store LOB garbage piles up:

SELECT DISTINCT TRANS_ID, TCB_INDEX FROM SYS.RS_LOB_GARBAGE_3_

See SAP Note 2169283 and make sure that garbage collection isn't blocked for a long time.
SELECT on RSPCLOGCHAIN (statement hashes: 4522b390b39c745346e4b2c20ca0e730, ac1948f2a0ef6b22ad39e762da017d6c):
These selections from class CL_RSPC_LOG===================CP or function module RSPC_GET_DELAY can read a lot of records from RSPCLOGCHAIN. See SAP Note 2388483 and check if you can clean up old entries in the chain run log table RSPCLOGCHAIN.
SELECT on RSPCLOGCHAIN, RSPCPROCESSLOG (statement hashes: 4fe08f000cb95d6d03a8c279ef74c2d1, 5cf54ecb10867fe6c9057d085049d7c4):
These selections originating from function module RSPC_GET_DELAY (application sources SAPLRSPC_BACKEND:1247, SAPLRSPC_BACKEND:12474) are executed frequently in context of batch jobs ODQ_TQ_JOB. See SAP Note 3080823 in order to minimize the load introduced by these batch jobs and perform housekeeping for RSPCLOGCHAIN (SAP Note 2388483).
SELECT on RSPCLOGTIMECACHE (statement hash: 3732125d72c6d25d5e0eeb8026c1b789):
This selection originating from function module RSPC_GET_DEVIATION (application source SAPLRSPC_BACKEND:11364) is executed frequently in context of batch jobs ODQ_TQ_JOB. See SAP Note 3080823 in order to minimize the load introduced by these batch jobs and perform housekeeping for RSPCLOGCHAIN (SAP Note 2388483).
SELECT on RSPMPROCESS (various statement hashes, e.g. 26ce5e2bc2f400c047631aabe411a6f4):
Long runtimes of RSPMPROCESS calls from method IF_RSPM_QUERY_RETRIEVE~QUERY_PROCESS of class CL_RSPM_PERSISTENCY_STANDARD==CP can also be a consequence of insufficient housekeeping. You can use SQL: "HANA_ABAP_MiniChecks" (SAP Note 1969700) and have a look at check ID A1001 ("BW data targets with many RSPMREQUEST entries") to see if there is increased demand for housekeeping. You can implement a cleanup strategy based on the instructions provided in SAP Note 3137171.
SELECT on RSPMREQUEST (various statement hashes):
SAP Note 2727934 improves the performance of activation of an RSPM process in method IF_RSPM_QUERY_RETRIEVE~QUERY_REQUEST of class CL_RSPM_PERSISTENCY_STANDARD by implementing the NO_USE_OLAP_PLAN hint.
Long runtimes can also be a consequence of insufficient housekeeping. You can use SQL: "HANA_ABAP_MiniChecks" (SAP Note 1969700) and have a look at check ID A1001 ("BW data targets with many RSPMREQUEST entries") to see if there is increased demand for housekeeping. You can implement a cleanup strategy based on the instructions provided in SAP Note 3137171.
SELECT on RSR_CACHE_FFB, RSR_CACHE_VARSH (statement hash: 8437943036a2a9fd82b290b2c6eafce7):
See SAP Note 2388483 and check if you can clean up old entries in the RSR_CACHE tables.
UPDATE on RSR_CACHE_QUERY, RSR_CACHE_VARSH (statement hashes: a12aff7f7db3b89f04df28429f4725b4, a7df7509a22f5e64c6f50e2234415f5e):
Lock waits and deadlocks related to method __DELETE_VARSH_BY_CREATESTMP of class CL_RSR_CACHE_STORE are tackled by SAP Note 2905290.
UPDATE on RSRNEWSIDS, RSRNEWSIDS_740 (statement hash: 03d58c448c321ae8e78c00563bea0e95):
Updates on tables RSRNEWSIDS or RSRNEWSIDS_740 can suffer from the hierarchy SID check overhead described in SAP Note 2183482. This check is deactivated per default with current BW releases, but it may be activated on purpose via RSR_HIER_INTVL_CHECK_SID = 'X' in table RSADMIN. In this case you can check if you really need this feature and consider deactivating it by removing the setting from RSADMIN.
UPDATE on RSZCOMPDIR (statement hash: 6422b644e862ef07e4636d32a7254632):
Increased runtimes are often caused by transactional lock contention (SAP Note 1999998). SAP Note 3217436 provides an alternative option to update the LASTUSED field of table RSZCOMPDIR, thus reducing the risk of lock contention.
SELECT on RSZELTXREF (statement hashes: 1c340b120554c65c8352b06b8ec16650, 286595fe6035f3aefbd0b9d0be0f814e, b54bf2afe56ed235b8349db68b3896c3):
If this selection from SAPLRZD1 with conditions on OBJVERS and INFOCUBE takes a long time, you should consider the following:
- Make sure that the amount of data in the table is as small and efficient as possible (SAP Note 823804).
- Make sure that index RSZELTXREF~IC on column INFOCUBE is created on SAP HANA level.
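If the index is missing on SAP HANA level, it can in principle be recreated with a statement like the following sketch (index and schema names are examples; in ABAP systems indexes are normally maintained via the ABAP dictionary so that they survive table conversions):

```sql
-- Secondary index on the INFOCUBE column of RSZELTXREF (name is an example)
CREATE INDEX "RSZELTXREF~IC" ON RSZELTXREF (INFOCUBE);
```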
SELECT FOR UPDATE on S009, S014 (statement hashes: 054ca4ad7d229638928813856d0720b5, 079e5109d4bf0ce8d47f8d84704c071c, 6cf1986b54b8cf7df708083bc07af1a4, b5b6bd4bb2f842133c9a1f07ea1a48b4):
These statements on LIS tables for CAS documents S009 and S014 originating from RMCSS009 and RMCSS014 can suffer from exclusive lock waits and deadlocks (SAP Note 1999998). SAP Note 743100 provides suggestions to reduce the risk of contention and deadlocks.
UPDATE on S061, S062:
Depending on the application configuration transactional locks can increase the runtime, and also deadlocks are possible on the info structure tables. SAP Note 1619751 describes possible steps to reduce lock contention and deadlocks.
UPDATE on S066 (statement hash: 833302af6e1fc2619d9a37be35ec4c16):
The table S066 is used to track open orders in credit management contexts. In case of concurrent modifications of the same records lock contention is possible, resulting in increased runtimes. SAP Note 3031614 describes typical scenarios that can result in contention and possible optimizations. In S/4HANA and FSCM environments this table is no longer used.
UPDATE on S071:
The table S071 is used to track special sales conditions. It can suffer from transactional lock contention in case multiple transactions try to modify the same record concurrently. SAP Note 2200612 provides suggestions how to analyze and minimize the transactional lock contention.
UPDATE on SACF_ALERT (statement hash: 673800ad653f8d62d3769a06181c6f8e):
Long runtimes of these update operations are usually caused by exclusive lock waits due to insufficient application coding. SAP Notes 2248439 and 2801827 provide application corrections in order to reduce the lock wait times.
SELECT on /SAPAPO/MARM (various statement hashes, e.g. 18edc88026d0439404488c7c0daf824b, 759c32107c725e2629f1e9dc08a1c9fa):
Accesses to CDS view /SAPAPO/MARM (respectively view SCMPRDMARM) include a CONVERT_UNIT call for unit conversion that introduces a rather fixed overhead of typically a few milliseconds for building up temporary data structures (often indicated by ceCustomCppPop activities reported in the thread details) and for selecting from ABAP tables T006 and T006D. Thus, you should avoid executing many small requests (e.g. with only a single MATID) on /SAPAPO/MARM and instead select the data in bigger chunks, so that the fixed CONVERT_UNIT overhead is incurred fewer times.
SELECT on /SAPAPO/MATGROUP (statement hash: ba8fc270dc2b851ee2c41e069413bc4):
This selection in method /SCWM/IF_AF_MATERIAL_BASE~READ_PRODUCT_GROUPING of class /SCWM/CL_AF_MATERIAL_BASE_S4 may repeatedly read a rather high number of records. It can be optimized with SAP Note 3041119.
SELECT on /SAPAPO/MATLOC (statement hashes: cc351578e364bf59a48edd537191829c, ee9876a198c6c67a81649487adc9c837):
See SAP Note 2904036 for optimizing FDA WRITE accesses (SAP Note 2399993) for compatibility view /SAPAPO/MATLOC in context of function module /SAPAPO/DM_MATLOC_READ / application sources like /SAPAPO/SAPLDM_MATERIAL:36155.
SELECT on /SAPAPO/ORDKEY (statement hashes: 0fb83778bdf2ed44eefbe48c92aebca4, e48c904c5fbc4e8dd8dd4ce4b4497dca):
Per default only the primary key on columns MANDT, ORDID and SIMID is delivered for table /SAPAPO/ORDKEY. Additional indexes need to be created based on individual needs.
Transaction /SAPAPO/OM13 -> OrdKeyIndex can provide further insight about the necessity of additional /SAPAPO/ORDKEY indexes.
Example:
---------------------------------------------------------------------------------
|Optimal Index |Number of Selections |Max. duration [s] |Avg. duration [s]|
---------------------------------------------------------------------------------
| LOCID ORDTYPE SIMID| 0 | 0,000 | 0,000 |
| LOCID SIMID | 0 | 0,000 | 0,000 |
| LOCID TTYPE SIMID | 0 | 0,000 | 0,000 |
| ORDNO | 110.385 | 4,345 | 2,732 |
| ORDNO SIMID | 1.829.009 | 0,510 | 0,004 |
| ORDTYPE SIMID | 48 | 0,794 | 0,141 |
| SIMID | 0 | 0,000 | 0,000 |
| TROID SIMID | 0 | 0,000 | 0,000 |
| TRPID | 88.240 | 0,079 | 0,002 |
| TRPID SIMID | 2.713 | 0,139 | 0,007 |
| TRPID TTYPE | 43 | 5,108 | 2,171 |
| TRPID TTYPE SIMID | 0 | 0,000 | 0,000 |
---------------------------------------------------------------------------------
Here we can see that a significant amount of selections happens via ORDNO and that the average duration is about 2.7 seconds. An index on MANDT, SIMID and ORDNO already exists, but for requests specifying only ORDNO this index can't be used. For an optimal layout that also supports pure ORDNO selections, you can adjust the MANDT / SIMID / ORDNO index to ORDNO / SIMID.
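On SAP HANA level, the index adjustment described above could look like the following sketch (the index name is hypothetical; verify the actual name of the existing MANDT / SIMID / ORDNO index in your system before dropping it):

```sql
-- Drop the existing index with leading column MANDT / SIMID (name is an example)
DROP INDEX "/SAPAPO/ORDKEY~1";

-- Recreate it with ORDNO as leading column so that pure ORDNO selections are supported
CREATE INDEX "/SAPAPO/ORDKEY~1" ON "/SAPAPO/ORDKEY" (ORDNO, SIMID);
```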
SELECT on /SAPAPO/ORD_LINK (statement hash: b7944c0c3ee72f83c747db881a2eba05):
This query in report /SAPAPO/LSDORDER_DBF08 can suffer from activated FDA WRITE for FOR ALL ENTRIES (SAP Note 2399993). You can disable it globally or via statement hint as described in SAP Note 2486627.
SELECT on /SAPAPO/POSMAPN (statement hash: 4a9b3e1d38e9c962d7a3cfc0aac6e2c2):
This query with application source /SAPAPO/SAPLOO_TR_READ originating from report /SAPAPO/SDORDER_DEL is used to read the next package of records (default: 10000) based on column POSID. It uses a combination of ORDER BY and TOP that can't be handled efficiently by SAP HANA (see High runtime with ORDER BY and TOP). In order to reduce the repeated read and sort overhead, you can increase the package size in report /SAPAPO/SDORDER_DEL ("No. of Items in Del. Block") significantly, e.g. by factor 10 to 100.
SELECT on /SAPAPO/STOCKANC (statement hashes: 2328166fbaa978fd21fad94a9054f362, a54c45a29c5405fda0427f9ca7e3ff1f):
Selections from table /SAPAPO/STOCKANC originating from ABAP source /SAPAPO/SAPLDM_LC_SQL respectively function module /SAPAPO/OM_STOCKANC_SELECT may suffer from the fact that table /SAPAPO/STOCKANC is located in row store and so the combination with other aspects like FDA WRITE (SAP Note 2399993) may not work out fine. Moving the table to column store can sometimes improve performance:

ALTER TABLE "/SAPAPO/STOCKANC" COLUMN

Be aware that this adjustment may impact other database requests, so it should be monitored if there is an overall benefit. Transactions like /SAPAPO/OM11 and /SAPAPO/OM13 may show up with a red traffic light that can be ignored.
SELECT on /SAPAPO/V_TRQTA (statement hash: 4de83928f69e078ebd8b68f65fe3ae30):
This selection originating from ABAP include /SAPAPO/LOO_TR_READF01 is an existence check that reads one record from view /SAPAPO/V_TRQTA with only a MANDT condition specified in the WHERE clause. Due to SAP HANA design limitations, the limitation to one record may be applied at a late stage after the join has been processed, resulting in an unnecessarily long runtime and resource consumption. See High runtime with join and TOP / LIMIT and upgrade to SAP HANA >= 2.0 SPS 07 so that the statement can be processed efficiently by the HEX engine.
CREATE on SAP_NCLOB_TO_NCLOB (statement hash: 4f61ad6ab989332a42ee9e3d76ef58fa):
This CREATE OR REPLACE call for function SAP_NCLOB_TO_NCLOB is triggered by the ABAP database shared library (DBSL) in some cases when a connection to the database is set up. This operation is no issue in itself, but at times of other issues (e.g. commit overhead) it can show up more dominantly. See the more detailed discussion for table USR02 that is also a typical victim of general issues.
Be aware that this call is not only executed by ABAP work processes, but also by other tools using the DBSL like R3trans. Thus, the calls can also be observed at times when the actual ABAP instance is stopped.
CALL /SAPSLL/CL_SPL_HANA_SEARCH=>CHECK_NAME_COUNTRY (statement hash: be788bf5f4482e70db907fbf7c73d857):
This procedure call can suffer from TRexAPI_StopwordAndTermMappingUtil_tokenizerCacheLock contention (SAP Note 1999998) that is optimized with SAP HANA >= 2.00.059.07 and >= 2.00.066 (issue number 296062). A fulltext index (SAP Note 2800008) on column TERM_1 of the term mapping table /SAPSLL/TERMMAP can also have a positive impact on performance.
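Such a fulltext index could be created along the following lines (a sketch; the index name is an example, and SAP Note 2800008 covers the general handling of fulltext indexes):

```sql
-- Fulltext index on the term mapping column (index name is an example)
CREATE FULLTEXT INDEX "/SAPSLL/TERMMAP~FT" ON "/SAPSLL/TERMMAP" (TERM_1);
```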
SELECT on /SAPTRX/EH_TASK (statement hash: ead5ced088741f49bd27a5f860766ecd):
This selection from the event management tasks table from method /SAPTRX/IF_EH_MODEL~LOAD_TASKS of class /SAPTRX/CL_EH_DB_ACCESS can be expensive if many event management tasks exist and a high number of records needs to be retrieved. See SAP Note 2895344 and minimize the task volume:
- Disable "Log Task" via transaction RSPO whenever possible.
- Archive event handlers as early as possible.
- Additionally check if it is possible to set the flag "No loading of task" in the event handler type customizing (SCP -> SPRO -> Event Management -> Event Handlers and Event Handler Data -> Event Handlers -> Define Event Handler Types).
SELECT on /SCDL/DB_ADDMEAS (statement hash: 9f3a4785ad9ff7ee4f1342618a37e8e5):
This access originating from method READ_GEN_TABS_BY_ID of class /SCDL/CL_DL_DB_SERVICE can suffer from FDA WRITE overhead (SAP Note 2399993). Implement SAP Note 3041119 in order to deactivate FDA WRITE.
SELECT on /SCDL/DB_BPLOC (statement hash: 3631d2eb9948ce2ac6f1dac057d99bb0):
This access originating from method READ_GEN_TABS_BY_ID of class /SCDL/CL_DL_DB_SERVICE can suffer from FDA WRITE overhead (SAP Note 2399993). Implement SAP Note 3041119 in order to deactivate FDA WRITE.
SELECT on /SCDL/DB_DATE (statement hash: 85cb05c86e4031d22ea077b66bfe02b9):
This access originating from method READ_GEN_TABS_BY_ID of class /SCDL/CL_DL_DB_SERVICE can suffer from FDA WRITE overhead (SAP Note 2399993). Implement SAP Note 3041119 in order to deactivate FDA WRITE.
SELECT on /SCDL/DB_DF (statement hash: c5d8831b456379da4d9f6432881d47e8):
This SELECT originating from ABAP class /SCDL/CL_DL_DF_DB_SRV can be optimized with an additional secondary index on column DOCIDTO of table /SCDL/DB_DF.
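Assuming the index is created directly on SAP HANA level, the statement could look like this sketch (the index name is an example; in ABAP systems the ABAP dictionary is the preferred place to define it):

```sql
-- Secondary index on the DOCIDTO column (index name is an example)
CREATE INDEX "/SCDL/DB_DF~DTO" ON "/SCDL/DB_DF" (DOCIDTO);
```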
SELECT on /SCDL/DB_REFDOC (statement hash: 12a7794553189ead8027ee9a91c52118):
This access originating from method READ_GEN_TABS_BY_ID of class /SCDL/CL_DL_DB_SERVICE can suffer from FDA WRITE overhead (SAP Note 2399993). Implement SAP Note 3041119 in order to deactivate FDA WRITE.
SELECT on /SCDL/DB_STATUS (statement hash: 0a931335817155c25ddeae3183b7ad53):
This access originating from method READ_GEN_TABS_BY_ID of class /SCDL/CL_DL_DB_SERVICE can suffer from FDA WRITE overhead (SAP Note 2399993). Implement SAP Note 3041119 in order to deactivate FDA WRITE.
SELECT on SCHEMAS (statement hash: 29058a5de1089af9471b1009f7d09855):
This SELECT is executed frequently in BW environments in context of report CL_RSD_DTA====================CP. The execution frequency is significantly reduced with the correction provided in SAP Note 2726420.
SELECT on SCHEMAS (statement hashes: d21cf1dc3fe2111bc1e53d58cacc29b9, effbb7b7b04458296ec46e9858d10451):
This selection is related to SLT (SAP Note 2014562). See Long runtime of query on SAP HANA dictionary objects and monitoring views and make sure that the CATALOG READ privilege is assigned to the database user executing the query. This is anyway a required configuration for SLT as described in SAP Landscape Transformation Replication Server -> "Security Guide" -> "Initial user".
INSERT on /SCMB/TBUSSYS (statement hash: 31de26d87576ab66558ee13c54a3e3ed):
This insertion originating from method GET_INSTANCE of class /SCMB/CL_BUSINESS_SYSTEM can suffer from "unique constraint violation" issues in case the usual 1:1 mapping between business key (column BSKEY, primary index 0) and logical system name (column LOGSYS, unique index LOG) can't be maintained, e.g. because another BSKEY should be inserted for an already existing logical system. As a consequence of the "unique constraint violation" issues also increased transactional lock times are possible (SAP Note 1999998 -> "Why are there locks and deadlocks that can't be explained by the actual modification mechanisms?" -> "Unique constraint violations").
Make sure that a consistent business key is maintained in SLD, table /SCMB/TBUSSYS and /SCMB/TOWNBS. See SAP Notes 3099337 and 3202213 for more information about adjusting the business key and troubleshooting /SCMB/TBUSSYS contention.
As a temporary technical workaround you can move table /SCMB/TBUSSYS to row store, because for row store tables record locks are released directly after the "unique constraint violation" termination and not at the end of the transaction:

ALTER TABLE "/SCMB/TBUSSYS" ROW
INSERT on /SCMB/TSLDPROD (statement hash: 3147c8d956a600ad16ca8ee074444078):
This INSERT can suffer from the same scenario as the INSERT into /SCMB/TBUSSYS. Check the details at /SCMB/TBUSSYS and consider moving table /SCMB/TSLDPROD to row store as a temporary workaround:

ALTER TABLE "/SCMB/TSLDPROD" ROW
SELECT on /SCMTMS/D_TORITE (statement hashes: 7f46508f349433ef9716a782bb5c3d7e, 9f75f289f8b19fd36da01a861555ec85, a5fe878e070d043417c1a5174a792aab):
These SELECTs with NOT conditions on *KEY columns originate from class /SCMTMS/CL_DAC_TABLE_OPT. SAP Note 2654211 provides a fix by replacing the NOT conditions with a ">" condition.
SELECT FOR UPDATE on /SCWM/LAGP (statement hashes: 4580f3059c884ee10de1eeb1fb98ac06, 4d7a6a4047decb99f4893347093ca434, 6cc91e12c9ed812c4ad3b55a2cd10cfa, 78b9774c4797097346cc18f858006c2b, 8d7cfa00098e214b537eb4ed54131cb1, c411c649778b2163fb9b5c7a92926e89, c6923a33de2de74d7cd9753432edee12, ea75eeb3e7c6fa58daabd157360cc5bd, f33871dd9732011b2597ac24f5ab8f4d):
This SELECT FOR UPDATE from application sources like /SCWM/SAPLHU_TO_UPD:57367 or /SCWM/SAPLHU_TO_UPD:59723 respectively include /SCWM/LHU_TO_UPDF57 can suffer from transactional lock contention (SAP Note 1999998) due to a high amount of capacity updates that may not be required. See SAP Note 3190865 and check if you can disable this specific update by flagging "No capacity update" in customizing.
SELECT on /SCWM/WO_PPF (statement hash: b470824aed0867d8e2dfd756e0d703eb):
This selection originating from report /SCWM/R_REORG_HU_WO_PRINT uses ORDER BY in combination with a LIMIT that is derived from the "Commit Counter" that can be configured on the selection screen of the report (default: 1000). Low values can result in significant overhead because a potentially large amount of data needs to be sorted before returning the next few records. Consider increasing the "Commit Counter" significantly, e.g. to 100000, in order to minimize the sort overhead.
CALL SDA_EXECUTION_DEV (statement hash: 2f0edf3918c670f03f0d0a7865be4b11):
This procedure is executed when a remote system triggers a smart data access (SDA) request on the local system in universal itab mode (SAP Note 2180119 -> "Which data transfer modes exist for SDA?"). The usage of universal itabs between SAP HANA databases is controlled by the DEV_NO_UNIVERSAL_ITAB hint (SAP Note 2142945) or by the following Boolean parameter:

indexserver.ini -> [smart_data_access] -> enable_universal_itab_hana_sda

Deactivating the universal itab can sometimes help to make the remote requests more transparent for analysis purposes, rather than having everything summarized under SDA_EXECUTION_DEV. See SAP Notes 2514255 and 2943297 for more information.
Due to the fact that SDA_EXECUTION_DEV executes different queries from different remote systems, it is not necessarily an issue to see increased total runtimes. If you want to optimize it, you need to identify and optimize the most expensive requests coming from remote systems.
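The parameter above can be adjusted with an ALTER SYSTEM ALTER CONFIGURATION statement, for example along the following lines (a sketch; evaluate the implications described in SAP Notes 2514255 and 2943297 before changing it):

```sql
-- Disable universal itab mode for HANA-to-HANA SDA requests
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('smart_data_access', 'enable_universal_itab_hana_sda') = 'false'
  WITH RECONFIGURE;
```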
CALL SDA_SELECT_AS_ITAB_DEV:
This procedure is used in context of the binary transfer mode (SAP Note 2180119 -> "Which data transfer modes exist for SDA?") in the target system of a smart data access (SDA) request. It is used when both the local and the remote SAP HANA have exactly the same SAP HANA Revision level. If the execution shows increased runtime or resource utilization, you have to analyze and optimize the database request that is provided as first argument of this procedure.
SELECT FOR UPDATE on SEC_CONTEXT_BLKD (statement hashes: 5f13497ba22f20ebd629da493e395f8d, c4fd3b8fd975a4cda303590e7b2a707a):
To prevent individual users from consuming all HTTP security sessions, a "Session Limit" per user is defined via transaction SICF_SESSIONS (default: 100). When the limit is hit, additional requests will be rejected and documented in table SEC_CONTEXT_BLKD. See SAP Note 2760552 for more information.
In case of a malicious client with a very high amount of concurrent session requests, the updates of counter and timestamp in table SEC_CONTEXT_BLKD can result in transactional lock wait serialization. In this case an increase of the "Session Limit" in transaction SICF_SESSIONS can be a quick workaround to reduce issues caused by this serialization (e.g. a general lack of available work processes). For a good permanent solution it is important to adjust the behavior of the client flooding the system with security session requests. See SAP Notes 2760552, 2754328 and 3201227 for more details related to analysis steps, configuration options and optimization approaches.
Starting with AS ABAP SAP_BASIS 740 there is a resilience improvement available with SAP Note 3293436 (combined ABAP and kernel patch) where the majority of processing is done via shared memory, so the SEC_CONTEXT_BLKD accesses will be significantly reduced.
SELECT on SEC_CONTEXT_COPY (statement hashes: 281f191a6b9fd65216df0d9945735113, dc2bcf2b5aa0f22aec2ea1fc62a49418):
This selection is used by the SecuritySessionInactivityWatchdog triggered by the ABAP task handler that checks for outdated HTTP security (https) sessions. In busy systems with many HTTP security sessions there can be millions of selections per hour. You can check for existing HTTP security sessions via transaction SM05 and e.g. identify users with a particularly high or unnecessary amount of HTTP security sessions that can be reduced with appropriate adjustments. See SAP Note 2760552 for more information.
Starting with AS ABAP SAP_BASIS 740 there is a resilience improvement available with SAP Note 3293436 (kernel patch) where the high amount of SELECTs is transformed into fewer, more efficient sub-select joins.
SELECT on SECURITY_CONTEXT (statement hashes: 0ec0b76746585dbcdc308e168dd9a353, 266fb9202f35945f4ee016214e2a67a1):
This table contains security session information that is required for state-less protocols like HTTP. Selections on this table are typically very quick, but due to the fact that it is the very first database request to be executed in transactions, it can suffer from admission control queueing (SAP Note 2222250). In this case the execution times on SAP HANA side are fine, but from an ABAP perspective long-running accesses to SECURITY_CONTEXT are visible. You can check via SQL: "HANA_Workload_AdmissionControlEvents" (SAP Note 1969700) to what extent the mentioned statement hashes suffer from admission control queueing. If admission control is responsible for queueing overhead, you can either adjust admission control settings (e.g. disabling it) or make sure that the triggering event (e.g. high CPU consumption) happens less frequently by optimizing the responsible database requests.
SELECT FOR UPDATE on SFC_ROUTER:
Check SAP Note 2535647 that provides optimizations for this database request from Manufacturing Execution application side.
Statement hashes: 113dc3a08e050b79cf423deadc46c16b, 145850f44fb63705f273858dcc3257df, 3d2724778ca7cd61bcd36fa4924a59c0, 46c5fe2884f11257abb74793abf18c9f, 486ac42ad0dd8db9e857e6116e835d93, 6eccd14a84dec8a1e868706655ca1020, 946e87a9511070edf1bbd6177b4b0150, b1e4e649916da9724d0ff114258ac53d, d138cc6eaaf2a9050cd51844dfb187f5, e137a8162443fec5dd94150565acf3c6, efaeaf6c8787b2f8e75f496034323482
Type: CALL
Procedures: SHARED_BUILD_AA_VIEWS, SHARED_BUILD_SR_VIEWS, SHARED_CREATE_UNION_VIEW
These procedures in the _SYS_STATISTICS schema are used by the statistics server (SAP Note 2147247) to build system replication views during initialization.
The performance can suffer from network problems or unavailabilities related to the secondary system. In the worst case accesses to remote objects get stuck until the connection timeout (default: 180 s) is reached.
See SAP Note 1999880 and make sure that system replication is established properly. Check according to SAP Note 2222200 if the network configuration of the primary and secondary SAP HANA system replication site works properly.
As a temporary workaround it is possible to reduce the connection timeout (default: 180000, unit: ms, i.e. 180 seconds):
<service>.ini -> [communication] -> default_connect_timeout
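Assuming the indexserver is the affected service, such a parameter change could be sketched as follows (the 60000 ms value is only an illustrative choice, not a recommendation):

```sql
-- Hedged sketch: reduce the connection timeout for the indexserver to 60 s.
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('communication', 'default_connect_timeout') = '60000'
  WITH RECONFIGURE;
```

Remember to revert the parameter to its default once the underlying network or secondary-site problem is resolved.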
Statement hashes: 753769700d4435edcbc8cd8ecaa2a6fc, 806b77333ee56cb908c119e2cc9ade7c
Type: CALL
Procedure: SHARED_FOLLOW_UP_ACTIONS
This call is executed at the end of every statistics server alert check (independent of whether an alert was actually triggered) in order to schedule follow-up actions like creating a runtime dump or activating traces. It is normal to see a certain load coming from this SQL statement. Runtimes of up to 20 ms are normal. The number of executions can theoretically be influenced by adjusting alert check intervals as described in SAP Note 2147247 ("How can the statistics server check intervals be adjusted?"), but this is normally not required.
Statement hashes: abf87f501874ae38a04bce633f869019, bedaa3a75085247a6e1f0370c380c3c6
Type: CALL
Procedure: /SHCM/CL_WFD_OP_IM_PERSWRKSTAT=>IF_WFD_IM_PERSONWORKSTATUS~GET_PERSONWORKSTATUS
Statement hash abf87f501874ae38a04bce633f869019 is a non-unfolded part of the procedure used for CDS view I_PersonWorkAgrmtStatus for variable ET_PERSONWORKSTATUS. It can impact various root statement hashes implicitly or explicitly using this CDS view, like bedaa3a75085247a6e1f0370c380c3c6. See SAP Note 3277599 and implement the INLINE hint to take advantage of unfolding.
Statement hash: dc6ac3aa4b5dcf34b926da4f7adc9088
Type: DELETE and others
Tables: SMMW_DEVMBOMAPER, SMMW_LOG_HDR, SMMW_MON_NOTIFY
SAP Note 2423891 provides an application fix to reduce the risk of deadlocks on these SMMW tables.
Statement hashes: 9546a29b93480cab55902d36b3aaefc2, c3f9f2134143eb8da4f860c193024caa
Type: SELECT
Table: SOOD
The SOOD selection in method REORG of class CL_REORG_BCS is executed as part of the SAPoffice reorganization (SAP Note 966854). It sorts all matching records and then returns the first records up to the package size defined in report RSBCS_REORG. The sort overhead is fixed, independent of the actual package size, so larger package sizes usually improve the throughput per record. If the configured package size is below 10000, consider an increase, e.g. to 50000.
Statement hashes: 7488ecfdc727226c7bad7e99ebe21b25, e5332b10f3a1a4215728857efc0f8eda
Type: SELECT
Table: SOURCE_ALERT_65_BACKUP_CATALOG
This SELECT is linked to the statistics server action Alert_Backup_Long_Log_Backup. See SAP Note 2147247 ("How can the runtime and CPU requirements of the statistics server actions be analyzed and optimized?" -> "Alert_Backup_Long_Log_Backup") for details about optimizations.
Statement hashes: 9a79d31c5bcd65e129cddf4e67057c49, 2510974b53d11210cfe465e5223a572d
Type: SELECT
Tables: BBP_PDBINREL, SRRELROLES
These queries originate from report SAPLRREL / include LRRELF01 and from function module BBP_PDH_OR_DB_SELECT_BBP_PDBIN.
Activate index SRRELROLES~001 on column ROLEID in order to support the query optimally. Usually an index on ROLEID should be created automatically if the tools available via SAP Note 1794297 are used when migrating to SAP HANA.
Statement hash: a7fc8bc85b7b95dddb6d73a12dd00c0c
Type: UPDATE
Table: SSCOOKIE
Lock contention and deadlocks in context of class CL_BSP_SERVER_SIDE_COOKIE can be caused by the application design. An optimization is available via SAP Note 2564753.
Users with an activated CRM_CENTRAL_SEARCH = REBUILD_MENU parameter can significantly increase the lock contention, so you should avoid this setting whenever possible. You can adjust it via transaction SU01 -> "Parameter".
Lock contention and deadlocks in context of job BSP_CLEAN_UP_SERVER_COOKIES can be caused by an ABAP coding issue that is resolved with SAP Note 2944650.
Also the issues described in SAP Notes 3018745 and 3119644 can contribute to the problem.
Statement hashes: aa01412bfb912d422e4cc07e401ca70f, ee83408659f7c61a023c27ab5d784994
Type: DELETE
Table: /SSF/BTAB
Deletions on the service software framework table /SSF/BTAB can suffer from exclusive lock waits if multiple concurrent monitoring related tasks are running at the same time, e.g.:
DVM Cockpit tasks (e.g. collective analysis)
TAANA operations
Solution Manager tasks (e.g. EFWK resource manager)
So you should check in the first place if long running monitoring activities can be improved or scheduled at different times. In general long runtimes only impact monitoring activities; important business transactions shouldn't suffer.
Statement hashes: 5ef81b3843efbe961b19fdd79c2bd86b, a3bbe3e5447bc42eb3f0ece820585294, 8343a81d8e7025d91e72be8d530bc095
Type: CALL / UPDATE
Objects: STATISTICS_PREPARE_CALL_TIMER, STATISTICS_SCHEDULE
The UPDATE on STATISTICS_SCHEDULE is executed from within the procedure STATISTICS_PREPARE_CALL_TIMER. High runtimes in combination with record locks (ConditionalVariable Wait) on table STATISTICS_SCHEDULE can be caused by:
Missing COMMIT after STATISTICS_PREPARE_CALL_TIMER: This problem is resolved as of SAP HANA 1.00.101.
Missing COMMIT after STATISTICS_PREPARE_CALL_MANUAL: This procedure is linked to Solution Manager requests to the statistics server. Starting with SAP HANA 1.00.122.04 you can eliminate this problem by switching to the new SAP HANA monitoring approach for Solution Manager (SAP Note 2374272).
These calls are executed at the beginning of every statistics server check. It is normal to see a certain load coming from this SQL statement. Runtimes of up to 30 ms are normal. The number of executions can theoretically be influenced by adjusting alert check intervals as described in SAP Note 2147247 ("How can the statistics server check intervals be adjusted?"), but this is normally not required.
Statement hashes: ac08e80aa8512f9669eaf56dedc02202, c6c59150977220ea4fdea412cac7d1ce, 0800928edd9d43764935a4e02abbfa15, 16c226e204b54efc280091df267152fd, 2b7fee2d2b95e792d3890051cbb97ec9, 55bb62c4b9ff3f0b32de3a4df248e18c
Type: SELECT
Tables: STATISTICS_CURRENT_ALERTS, STATISTICS_LAST_CHECKS
See SAP Note 2147247 ("How can the runtime and CPU requirements of the statistics server actions be analyzed and optimized?" -> STATISTICS_ALERTS_BASE) and make sure that the size of the underlying table STATISTICS_ALERTS_BASE remains on a reasonable level.
Statement hash: d6fd6678833f9a2e25e7b53239c50e9a
Type: CALL
Procedure: STATISTICS_SCHEDULABLEWRAPPER (timer)
This procedure is a wrapper for all embedded statistics server actions like history collections and alert checks. This statement hash typically indicates that it is called by the statistics server based on the configured check time intervals. Average execution times of up to 400 ms can usually be considered as normal and no analysis or optimization is required.
See SAP Note 2147247 ("How can the runtime and CPU requirements of the statistics server actions be analyzed and optimized?") for more information on how to analyze and optimize the actions.
Statement hashes: 0f54bb2482e75ab61657973d9195bbb4, 10cec71e46696dbe20a2143a3158dfd4, 1716655062c017719efc84121e3ea753, 1bf11f8fd6c108e2194bc162032b10a9, 383b3cb02f8f692ed9a28c9249a54c66, 3bf0fcaebd43f16dc504b9beb0005d29, 45e3ba47a0368068d71abb143f26d1d0, 673d5512da8404303acbe65a3baaf01a, 6801336f2e5de891edc0b806c6048c03, 74ab46ff2c01649253edf88abed9a870, 869b2445401fa3cb696e631dc39a5f0a, 9d334c558cffb2f66af0648598ceebf8, a2210deae1b210ac98a194a57c5ce4bf, a562281167397c8e00fe95677fa332a2, aa220713a00ada128ae424d62b5a5868, aa4ab1034dff6b5c016c3db0be67d8cb, ba1ee014dd96eb9a21fdae5abbb1f94c, d617c529ab701a44b2c40cbafa9a314b, d8091e1a6717e82567d58a6c9434a9a1, d9232f67ad8896b521b6a22234fee91f, dc571bf5eb7cad9c313c20de904ab709, e02eb8dc390a1db7e1e8779ace4f3ce2, e54cd3f2d069e395bf5bc9414d6369a3, e60fb426874c22c32b92f3b6d1e52372, ec25d42a14422d8e899a7932ab213410, f019d40ff3e156a7fbcd78c5f364b286, f609c887b08bb14c9a218a9756ed64df, f978e725069449cf440f5c6b9185e4d5, fa2bdf3047b4de33242305aef6b593e9, fbf6ff256ba159a53893d7bf4bc6d277
Type: CALL
Procedure: STATISTICS_SCHEDULABLEWRAPPER (manual)
This procedure is a wrapper for all embedded statistics server actions like history collections and alert checks. This class of statement hashes represents manual calls with the first argument being 'Manual', e.g.:
CALL _SYS_STATISTICS.STATISTICS_SCHEDULABLEWRAPPER ('Manual', ?, 40, 0, NULL)
These kinds of statements typically originate from SAP Solution Manager or FRUN in order to extract history and alert information from SAP HANA. See SAP Note 2374272 that describes how to enable the new SAP HANA monitoring mechanism in these contexts. Then Solution Manager and FRUN will rely on information already collected by the statistics server rather than executing statistics server checks on their own. As a consequence this particular STATISTICS_SCHEDULABLEWRAPPER call will no longer be executed.
In cases where it is not possible to disable the manual calls, you can check SAP Note 2147247 ("How can the runtime and CPU requirements of the statistics server actions be analyzed and optimized?") for more information on how to analyze and optimize the most expensive actions.
Statement hashes: 05795bf8c58150579e539ad4c762a411, 6d112675aaec6e4e853052145f0c0d23
Type: DELETE
Table: STATISTICS_USED_VALUES
STATISTICS_USED_VALUES is used by the SAP HANA statistics server (SAP Note 2147247) to store check results for later evaluation by SAP Solution Manager (SAP Note 2529478, internal.check.store_results). When the statistics server executes an alert check, it deletes previous information for the same ALERT_ID with this statement so that it can later on insert new alert results. Due to the high modification frequency the performance of this request is particularly sensitive to garbage collection issues, so in the first place you should check if garbage collection is blocked for an unusually long time (SAP Note 2169283).
Starting with SAP HANA 2.00.037.03 and 2.00.043 an index on column ALERT_ID is delivered per default, which significantly reduces the amount of scanned records and improves performance.
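On older revisions you can verify whether such an index already exists via the INDEX_COLUMNS system view; a minimal sketch (the schema and table names are taken from the description above):

```sql
-- Hedged sketch: check for an index on ALERT_ID of the statistics server
-- table STATISTICS_USED_VALUES. An empty result on revisions below
-- 2.00.037.03 / 2.00.043 indicates that the default index is not yet there.
SELECT INDEX_NAME, COLUMN_NAME
FROM INDEX_COLUMNS
WHERE SCHEMA_NAME = '_SYS_STATISTICS'
  AND TABLE_NAME  = 'STATISTICS_USED_VALUES'
  AND COLUMN_NAME = 'ALERT_ID';
```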
Type: SELECT
Tables: STATUS_DESCRIPTION, STATUS_MEANING
Check SAP Note 2535647 that provides optimizations for this database request from Manufacturing Execution application side.
Statement hash: 1626b0156893dab5438186b6160139b7
Type: SELECT
Table: ST_GEOMETRY_COLUMNS
This query originates from the SAP HANA license measurement (thread method lic/license_measurement). It can be particularly expensive if the underlying table CS_VIEW_ATTRIBUTES_ is large because many millions of custom view attributes are defined.
Statement hash: 6edaa7fd2e3585b77132c650493b58c3
Type: SELECT
Table: STOREDTASK
This access is not expensive, but in context of transactional LOBs it can be responsible for a high number of SQL contexts and an increased size of the Pool/Statistics or Pool/RowEngine/QueryExecution/SearchAlloc allocators. See SAP Note 2220627 -> "What are transactional LOBs?" and SAP Note 2711824 for more details.
Statement hashes: 00497cb80638494864725be90bbf2eb6, 027eda7086e8eab6c0f3e23138a3d1dd, 093f81a4960e31871a1f6e448483e15e, 12b4ef1b19369b76831360dc02d074d2, 2723663c7576a23c860ba55ab91c513a, 28756aa9451659c348e122f5c5c2e2e9, 2ed7d48e4a9af82a0be336e1dd5f0d47, 36723585726e7f185d6a855af6cff433, 3dd71780f165ebfa059ba8b96890aa77, 4dff04027f025c711d7009bd3ce90be2, 729ee1d736aeec45e956fb661df82ecb, 87f18bf0f023ccfc0993cba21c1e26ce, 89797bd5430b37d2a5b0a2618517e8d5, 9807a4101b41a305ae7e32d3604a3062, 98153cbbd47009f99b0a24e0b1121d9f, 9d5690b9382a98abbf80ab837540c75d, a450774b850a7c0e4e0b0af55143e1b1, a475c18cf7a1f2ac01bee618a672cbaf, a97c6b2c11324d68599a578211625c19, ab45c2515763669bd81f1bf0b3e842d6, ae381fd5ba6acc6f61e96be0fef350de, ae7aae037f999685d55908000713b22d, b0fcbee2d05a577dadc5702318a458b4, b3f8f9ee4e70c32358aa9fdfda447ef8, b42f76950ed88307329a78ca3bbf5620, b5535cbcb7f4ae77f92875b556536c7c, bbcf8c3d170a3473855932dd2de67f19, bc1342f1e39a60afbddb0a17c5c83cef, c0a29931b90fec21bc2068a8bce1183d, c1f08adead4051c58289f552b4a6d71a, c2ba78991cf102d23895f37013f414c0, c57ea3ddfd0ce5810b0e4b2a616571f9, c81bfaaa2120b5efd4686b9a1d0d5180, d387622f72348f72bbbe2112e2e73d8c, d3d8d694df5da4b18db8c4f199306d5f, dae85597f14a53bed3af1368e06153f2, deefdc73509bee2c6e78103e6ba5a44c, e472acb667ca8bcc421c1eef917252ec, e878769dccc550af78a11bbb19634b42, e8e01d06eac7f4a9e077c82b1841c00e, f1b7014a4be2d66f7ce29de385f2313c, f84cbe45c88327c3efa7ef99f419ea15, fc10ecb7472575d4d42a3712199e49a8, feb6b7a1c1f054ab4e42a50c2a24feb3
Type: SELECT
Table: STXH
In many cases the performance can suffer from a wrong query optimizer decision due to data skew (SAP Note 3513868). Please implement SAP Note 2700051 in order to provide good statement hints for the most important STXH queries, making sure that the table is accessed via the typically most selective TDNAME condition.
Check (e.g. via SQL: "HANA_SQL_StatementHash_BindValues" available in SAP Note 1969700) if the TDNAME values sometimes end with '*', e.g. '0030646364*'. If yes, you suffer from a problem introduced with SAP Note 2208025 where accidentally the '*' constant rather than a wildcard was added at the end of the value. This bug is fixed with the correction provided in SAP Note 2302627.
Statement hashes: 6805026a381879e9e5469be3f09cc654, 1f19127a9f6efcb0aa7b7da8b6e0f31d
Type: SELECT
Tables: STXH, STXL
Analogous to SAP Note 2107400 for Sybase ASE, the following kind of SELECT can cause trouble with SAP HANA while performing a TDMS import:
SELECT DISTINCT 'HUGO' FROM SVERS WHERE EXISTS ( SELECT * FROM "<table_name>" )
The reason is that SAP HANA sometimes evaluates the sub-query completely rather than finishing after the first retrieved record (which already satisfies the EXISTS semi-join condition). A simplified record existence check will be provided in R3trans that is processed much more efficiently:
SELECT TOP 1 'HUGO' FROM "<table_name>"
On SAP HANA side the processing of these kinds of EXISTS subqueries is optimized starting with 1.00.122.13, 2.00.012.02, 2.00.021 and 2.00.030.
Statement hashes: 0fe0c8111a52cf95def4b55893464ca1, 19354e7e67213d6212f0df81da4b2d9e, 4c55cc3d1e447087df33bb57328e75f7, 513225a2f19eb2bb43d150ff85372f0f, 6f7a50eedb5601e59bf9dc59aaaf2a28, 717514f88a91cadb97ef879f6608e29c, 8aee14a7ffb08fdf88d2759592b1b07a, 9063ecdcca002cb5368be439b5ee8f39, 97a49ffc74874c3c6062738d75156e72, ac02ddd413034029702c97afb3fc19be, b9f6cd78532283a78ef9a8efea5aedc1, c3389e9753cb46da85cd640da8ed3caa
Type: SELECT
Table: STXL
STXL selections related to application sources like SAPLSTXD:11322 may use an unfortunate execution plan due to data skew (SAP Note 3513868). Implement the statement hints provided via SAP Note 2700051 in order to make sure that the evaluation starts with the selective TDNAME condition.
Statement hash: b11500ff88187cd8f06a308fc8e568f0
Type: SELECT
Table: SUAUTHVALTRC
This selection is linked to the ABAP user trace for authorization checks (transaction STUSERTRACE, ABAP profile parameter auth/auth_user_trace). Make sure that this trace is only activated when really required. See SAP Notes 2388483 and 2421733 and make sure that no longer required data is purged in time.
Statement hashes: 769d7b44a80f1756ed4da215f9744a72, efbf8ae66edaab4e26ba700d7f7ffac6, a609f96b4ec877d38c9881fb9b8810b0
Type: INSERT / UPDATE
Table: SUSAGE
The table SUSAGE tracks the usage of different SAP components. In case of significant tracking activity there can be lock contention (SAP Note 1999998) and deadlocks. SAP Note 2182269 describes how the SUSAGE accesses can be optimized.
Statement hash: 0d9248912d550b8fb9f50699904a2309
Type: UPDATE
Table: SWNCMONI
This update from class CL_SWNC_COLLECTOR_DB is related to data collection for the ABAP workload transaction ST03. Among others it adjusts the LOB column CLUSTD. In context of packed LOBs (SAP Note 2220627) this can be quite expensive and it may acquire the VarSizeEntryFreeSpaceHandler lock. You can bypass this issue by switching the data storage to transparent tables (SAP Note 2274315) or by implementing workarounds on SAP HANA side as described in SAP Note 1999998 -> VarSizeEntryFreeSpaceHandler.
Statement hash: e7ad51351b6725e665885fd80adedac1
Type: SELECT
Table: SWWWIHEAD
The selection from table SWWWIHEAD with a specific WI_ID restriction in function module SWW_WI_HEADER_READ (application source SAPLSWW_SRV:187) can show up e.g. in context of the regularly scheduled batch job SWWERRE when many work items in ERROR state exist. You can check for work items in ERROR state via ABAP transaction SWPR or using SQL: "HANA_ABAP_Workflow_WorkItems" (WI_STAT = 'ERROR') available via SAP Note 1969700.
Minimize the number of work items in error state, e.g. by resolving the root cause and restarting the workflow in transaction SWPR or using the cleanup report RSWWWIDE (SAP Note 49545).
SAP Note 3194886 provides a correction that reduces the amount of SWWWIHEAD selections when many work items are in ERROR state.
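A quick plausibility check on database level can be sketched as follows (assuming the standard ABAP schema is in the current search path; otherwise prefix the table with the SAP<SID> schema):

```sql
-- Hedged sketch: count workflow work items currently in ERROR state.
-- A large result indicates that the SWWERRE job repeatedly reprocesses
-- many items and triggers the SWWWIHEAD selections described above.
SELECT COUNT(*) AS ERROR_WORK_ITEMS
FROM "SWWWIHEAD"
WHERE "WI_STAT" = 'ERROR';
```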
Statement hashes: b4a0dfe6daab873b41ce5e0a5f5b8716, b8ad6ffc101923496954f7de03380382
Type: SELECT
Table: SWW_WI2OBJ
This selection with application source CL_SWF_UTL_UPDATE_00013=======CP:84, originating from line 16 of method IF_SWF_UTL_UPDATE~EXECUTE of class CL_SWF_UTL_UPDATE_00013, can suffer from a small default package size in combination with high sorting efforts. The default package size is MV_PACKAGE_SIZE = 3000. Configuring larger package sizes (e.g. 30000) can reduce the number of executions and the sort overhead, but would be a modification of the SAP standard.
Usually the expensive processing is only required once per system in order to eliminate potential inconsistencies in the existing data. Thus, increased runtimes and resource consumption are typically acceptable. Be aware that in case of a termination (e.g. due to an ABAP application server restart) the whole operation has to be started from the beginning. Therefore, you should make sure that you do not terminate the operation unnecessarily.
SAP Note 3494365 provides optimizations for this database request so that the overall runtime should be significantly reduced.
Statement hash: 0a60a633e2e1f038aacfadd17a772003
Type: SELECT
Table: SWW_WIREGISTER
This selection can be expensive due to a high number of retrieved records for APPLICATION = 'WIM'. See SAP Note 2388483 and make sure that the batch job RSWWWIM is scheduled on a regular basis and runs successfully.
Statement hash: bba5b88bebf8051e8ac3dcc70d7ef589
Type: SELECT
Tables: SYNONYMS, TABLES, VIEWS
This SELECT is executed when an ODBC or SQLDBC application requests table related meta data from the client (e.g. in context of SAP Data Services / BODS).
These queries originate from ERP unit conversion being implemented with the CONVERT_UNIT function call of SAP HANA. This call is used in different application scenarios, e.g. as part of accesses to CDS view /SAPAPO/MARM. Contrary to currency conversion, no caching is possible for unit conversion on SAP HANA side. In case there is a significant overhead, it is recommended to use fewer, larger conversion requests rather than a high number of accesses with only a few records each.
With SAP HANA <= 2.0 SPS 05 a LATE_MAT hint is generated that can negatively impact the runtimes in some scenarios because the usage of the HEX engine (SAP Note 2570371) is disabled (issue number 261730). With SAP HANA >= 2.0 SPS 06 the hint is no longer generated.
Statement hash: 513fc810543cf3fc331cb3015257610b
Type: SELECT
Table: TABLE_COLUMNS
This query, executed by CL_SQL_STATEMENT==============CP on ABAP side with filters on schema and table, is related to SLT (SAP Note 2014562) structure comparison. An SLT target system checks the structure of the table in the SLT source system when an SLT job is executed in order to detect the necessity to adjust the replication process. The following optimizations exist on SAP side:
Check if the number of related jobs (/1LT/IUC_LOAD_MT_*, report DMC_MT_STARTER_BATCH) in the SLT system can be reduced, as more jobs can result in more structure comparisons over time.
Switch real-time replication to scheduled replication, e.g. once a minute. This will result in some delays until a source change is replicated to the target system, but it will reduce the number of required structure comparisons.
Statement hash: c7aa02f6ae8919061f5708ad63e56d33
Type: SELECT
Tables: TABLE_COLUMNS, INDEX_COLUMNS
If an access to TABLE_COLUMNS and INDEX_COLUMNS from ABAP module DB_GET_TABLE_FIELDS takes long, SAP Note 2402693 can be implemented to improve the performance.
Statement hashes: 027a69b3097e6c6c93176c6fec540d53, 24869fd1d16d27501ce93874c6e5070b, c335135bcf1af4323675498fea42d5f9
Type: SELECT
Table: TABLE_COLUMNS_ODBC
Complex queries on TABLE_COLUMNS_ODBC and several other SAP HANA dictionary objects are used when an ODBC client (SAP Note 2393013) calls the GetColumns function. It takes catalog, schema, table and column as arguments and returns SAP HANA column information. In case of increased runtimes and CPU consumption consider the following optimization approaches:
See "Long runtime of query on SAP HANA dictionary objects and monitoring views" and make sure that the CATALOG READ privilege is assigned to the database user executing the query.
Get in touch with the related ODBC client partner being responsible for the calls and check if there is a way to reduce the GetColumns calls, e.g. by caching information on client side.
Statement hash: 25a6171ba41bdf171e818c986177f37e
Type: SELECT
Table: TABLE_GROUPS
This selection from SAP HANA view TABLE_GROUPS is executed in certain scenarios by SAP HANA in order to determine table distribution (SAP Note 2081591) details for a specific table. Frequent executions can happen in systems with many tables using dynamic range partitioning (SAP Note 2044468) because for every table a TABLE_GROUPS selection may be executed in each dynamic range check time interval. It can be controlled with the following SAP HANA parameter (default: 900 seconds):
indexserver.ini -> [partitioning] -> dynamic_range_check_time_interval_sec
Besides a high number of executions, also the execution time of each TABLE_GROUPS selection can be critical. In particular in context of a high number of temporary tables in BW environments (GROUP_TYPE = sap.bw.temp) long runtimes were observed. See SAP Note 2800007 and make sure that the number of NO LOGGING tables created in context of BW remains at a reasonable level, e.g. by scheduling SAP_DROP_TMPTABLES on a regular basis. See also SAP Note 2388483 that among others discusses options to keep the size of table TABLE_GROUPS_ at a reasonable level.
With SAP HANA >= 2.00.056 the dynamic range partition check is cut into smaller pieces so that it doesn't block the garbage collection for a long time (issue number 262672).
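If a longer check interval is acceptable for your dynamic range partitions, increasing the parameter could be sketched as follows (the value 3600 is only an illustrative choice, not a recommendation):

```sql
-- Hedged sketch: run the dynamic range check once per hour instead of
-- every 900 seconds (default). Choose the value based on your workload.
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('partitioning', 'dynamic_range_check_time_interval_sec') = '3600'
  WITH RECONFIGURE;
```

Keep in mind that a larger interval also delays the creation of new range partitions, so it is a trade-off rather than a pure optimization.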
Statement hashes: 97171744726a50e248699f758de23444, ba8161b6133633e2984828ce38aa669a, 7f55e72c22694541b531e83d8bab8557, d24d8b8d00dd8b8df3e13dd9fb17b5f9, 2b459e0d42037fe4d6880b238018a6f7, 70efe3b4e470438543e6ecec418e4f02, 905dbaa93a672b087c6f226bc283431d
Type: CALL / SELECT
Objects: ALERT_DEC_EXTRACTOR_STATUS, COLLECTOR_GLOBAL_DEC_EXTRACTOR_STATUS, GLOBAL_DEC_EXTRACTOR_STATUS_BASE, TABLES
These CALLs and SELECTs are executed by the statistics server under the _SYS_STATISTICS user. They are linked to DXC (see M_EXTRACTORS) and look for tables with names like '/BIC/A%AO'. Tables following this naming convention are technical tables to control the DXC DSO activation process. If the CATALOG READ privilege is not granted to _SYS_STATISTICS, the query can take quite long and return limited results. Proceed according to "Long runtime of query on SAP HANA dictionary objects and monitoring views".
Statement hashes: 0132d9eeaf22c1d38d4c5a8f4d6a36af, a058f013fb0ec2f4c1141e5b396588e8
Type: SELECT
Table: TABLES
This selection is related to SLT (SAP Note 2014562). See "Long runtime of query on SAP HANA dictionary objects and monitoring views" and make sure that the CATALOG READ privilege is assigned to the database user executing the query. This is in any case a required configuration for SLT as described in SAP Landscape Transformation Replication Server -> "Security Guide" -> "Initial user".
Statement hashes: 2d57b02cadda185883f5b21f278fc9ec, 4bcb3c3b073c1c0b3ec8c6189452c3ca, 8009724b392f39ef358c8fcd399b9f72
Type: SELECT
Table: TBTCO
A TBTCO selection with STATUS = 'F' or STATUS = 'A' from SAPMSSY2 (start_stuck_eoj_jobs) is related to jobs that are defined with the SAP_END_OF_JOB event, so they wait until another batch job is finished. In case a high number of these jobs exists, there can be a high number of (typically quick) TBTCO selections. You can use SQL: "HANA_ABAP_BatchJobs" (EVENTID = 'SAP_END_OF_JOB') available via SAP Note 1969700 to check for these jobs. Check ID A0860 ("Old batch jobs waiting for END OF JOB event") of SQL: "HANA_ABAP_MiniChecks" (SAP Note 1969700) reports an issue in case a high number of jobs waiting for event SAP_END_OF_JOB exists.
The related jobs often have names following the convention USR_ATCR_IMP<timestamp>. SAP Note 2076491 describes the background of the creation of these jobs.
SAP Note 2640389 describes an ABAP job scheduling problem that can result in more and more jobs in SAP_END_OF_JOB wait state.
SAP Note 2998530 provides an optimization to reduce the number of requests from an SAP ABAP batch perspective.
Check why many jobs have been waiting for event SAP_END_OF_JOB for a long time and delete jobs via report RSBTCDEL2 in case they are no longer required relics from the past.
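A direct database-level check can be sketched as follows (assuming the standard ABAP schema is in the current search path; otherwise prefix the table with the SAP<SID> schema):

```sql
-- Hedged sketch: batch jobs waiting for the SAP_END_OF_JOB event,
-- grouped by job status, as a rough equivalent of the HANA_ABAP_BatchJobs
-- check described above.
SELECT "STATUS", COUNT(*) AS JOBS
FROM "TBTCO"
WHERE "EVENTID" = 'SAP_END_OF_JOB'
GROUP BY "STATUS"
ORDER BY JOBS DESC;
```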
Statement hashes: 05a6382b4675387c44e3008c8a6fb32a, 72175743393a7015e247647acc1bb7b5, b80f8bd3c59c1bab3c692c55e4d5b3f9, f9fe5a1f2bbfe7be197f89f4813dee95
Type: SELECT
Table: TBTCO
TBTCO selections from CL_RSPC_CHAIN=================CP usually specify selective STATUS values like 'S' (released) or 'Z' (put active), so index TBTCO~3 is most efficient. Index TBTCO~5 on EVENTID and EVENTPARM also appears to be selective, but there can be scenarios with a high amount of records for specific EVENTID / EVENTPARM combinations. Implement SAP Note 2700051 that delivers an appropriate hint for index TBTCO~3 for this request.
Statement hash: 4f765870d762c351b57c4217e6093f9f
Type: SELECT
Table: TBTCO
This selection from TBTCO with a LIKE condition on column JOBCOUNT originates from the consistency check report RSTS0024 (SAP Note 666290) that compares table TBTCO with TST01 in order to identify orphan job logs. Consider the following recommendations:
Make sure that the number of records in table TBTCO remains on a reasonable level (typically not more than 1 million) by keeping the number of scheduled jobs as small as possible and by implementing a proper house-keeping strategy (SAP Note 2388483).
Avoid running report RSTS0024 too frequently (i.e. more than once a month).
If required, an additional index on column JOBCOUNT can speed up the selections.
Statement hash: ad5ab291e053d2c9c6c18af69c0b6493
Type: SELECT
Table: TBTCO
This TBTCO selection from SAPLBTCH with the following conditions in the WHERE clause can be unnecessarily expensive in cases when the JOBCOUNT condition is more selective than the JOBNAME condition:
"JOBNAME" LIKE ? ESCAPE ? AND
"JOBCOUNT" = ?
In this case an additional TBTCO index on column JOBCOUNT can help to improve the performance.
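Such an index creation could be sketched as follows on SAP HANA side (the index name Z_TBTCO_JOBCOUNT and the schema SAPC11 are placeholders, not SAP standard names):

```sql
-- Hedged sketch: additional secondary index on TBTCO.JOBCOUNT.
-- In ABAP systems, secondary indexes should preferably be created via
-- transaction SE11 so that the ABAP dictionary stays consistent with
-- the database; a native CREATE INDEX is shown here only for illustration.
CREATE INDEX "Z_TBTCO_JOBCOUNT" ON "SAPC11"."TBTCO" ("JOBCOUNT");
```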
Statement hash: 4871402a099e866795707c31bc5aaea1
Type: SELECT
Table: TBTCO
TBTCO selections with EVENTID and a selective STATUS condition (e.g. from SAPLBTCH) suffer from the absence of an index on both columns. You can create an additional TBTCO index on columns EVENTID and STATUS in order to optimize the access.
Statement hash: fb8b46b90b80619b1b97af549e3a5772
Type: SELECT
Tables: TBTCO, TBTCP
Selections from table TBTCO with a NOT EXISTS condition on table TBTCP executed in function module BP_JOB_SELECT_SM37C (application source SAPLBTCH) are related to an implicit consistency check between tables TBTCO and TBTCP. It is executed when transaction SM37C is called explicitly or implicitly and a step condition is provided.
This scenario can regularly happen in context of the job monitoring functionality of the FRUN Simple Diagnostics Agent. In this case the ENDTIME / ENDDATE conditions are typically most selective and creating an additional TBTCO index on columns ENDTIME and ENDDATE can improve the performance. In a real life scenario the runtime improved from 3500 ms to 10 ms after the index was created.
SAP Note 3108256 provides a coding correction so that this consistency check is only executed when transaction SM37C is called explicitly, but no longer when it is called implicitly, e.g. via monitoring tools.
Statement hash: bfa3071482fd3d80f268bb48cd312095
Type: UPDATE
Table: TBTC_TASK
Updates on TBTC_TASK from class CL_SDL_TASK and method SUBMIT_NEXT_JOB (source CL_SDL_TASK===================CP) may touch an unnecessarily high amount of records, so lock waits and deadlocks become more likely. SAP Note 2843720 provides a correction.
Type: SELECT
Table: TCURF
Implicit accesses to table TCURF are related to currency conversion. Consider setting up the currency conversion cache (SAP Note 2502256) to make sure that conversion rates are cached and repeated accesses to table TCURF are no longer required. The statement hash differs between systems because database name and user are explicitly used in the statement text.
Example (typical implicit TCURF access):
select
  "FCURR","TCURR","KURST","GDATU","FFACT","TFACT","ABWCT"
from
  "C11"."SAPC11"."TCURF"
where
  "MANDT" = ?
order by
  "FCURR" asc,"TCURR" asc,"KURST" asc,"GDATU" asc,"ABWCT" asc
with parameters ( 'HINT' = ( 'LATE_MAT', 'OFF' ) )
Statement hashes: 371d616387247f2101fde31ed537dec0, aaff34b83d10188d08109cb52a0069ae, 3f96e3762412e8d4cf4c670f2993aa8a, 51bffaafaddd0ec825ecb9f5bf1b5615, 5d0b4a21b0c17c08a536bec04f2825cd, a3f19325031503e9e4088a8e57515cd3, 1cbbe08ef2bf075daecb9e383ae74deb, a5539c73611c1d0ba9e4a5df719329b8, a382ffaeafac914c422752ab9c2f9eab, 81d83c08828b792684d4fd4d2580038e, 1ce978ccb4cfed10e1ef12dc554c2273, 54c8e15427f41b9074b4f324fdb07ee9, da7ef0fee69db516f1e048217bca39e7, c197df197e4530b3eb0dcc1367e5ba4b
Type: DELETE / INSERT
Tables: TESTDATRNRPART0 to TESTDATRNRPART6
SAP Note 1964024 provides a correction that reduces the DML operations on the TESTDATRNRPART<id> tables for all types of databases. The optimization is available as of BW support packages SAPKW73012, SAPKW73112 and SAPKW74007.
Also make sure that no data targets exist with a high number of requests. See SAP Note 2037093 for more information.
Statement hashes: various, e.g. 75a374cc7d49afa729f4379608270db4
Type: SELECT
Table: TFACS
Queries like the following are implicit SAP HANA factory calendar lookups:
select "IDENT","JAHR","MON01", ...
from "<schema>"."TFACS"
where "IDENT" = ? order by "JAHR" asc
They are executed in context of SQL functions like WORKDAYSBETWEEN or ADDWORKDAYS. Usually the time per execution is quick (significantly below 1 ms), but there can be a high number of lookups depending on the modelling design. If a very high number of executions consumes significant time, you need to check the design in context of the factory calendar functions.
Starting with SAP HANA 2.0 SPS 04 you can identify the responsible main database requests via the root statement hash, e.g. using SQL: "HANA_Threads_ThreadSamples_FilterAndAggregation" (STATEMENT_HASH = '<statement_hash>', AGGREGATE_BY = 'ROOT_STATEMENT_HASH') available via SAP Note 1969700.
When a more complex procedure or view is involved, you can check for all dependent objects using SQL: "HANA_Objects_ObjectDependencies_Hierarchy" (SAP Note 1969700) and inspect related procedures, functions or views for workday related functions.
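A minimal sketch of a statement that triggers such implicit TFACS lookups (factory calendar ID '01', the date values and the schema SAPC11 are illustrative; the function requires the factory calendar data to exist in the given schema):

```sql
-- Hedged sketch: every evaluation of WORKDAYSBETWEEN performs an implicit
-- factory calendar lookup on TFACS for the given calendar ID.
SELECT WORKDAYSBETWEEN('01', '2024-01-01', '2024-01-31', 'SAPC11') AS WORKDAYS
FROM DUMMY;
```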
Statement hash: 6079a94e9b06f8fb6cde8b4f96fcc3f3
Type: SELECT
Table: TFDIR
TFDIR is usually single-record buffered on SAP ABAP side, so these single record accesses should not be directed to SAP HANA frequently. If you see a high amount of accesses, please check on SAP ABAP side if table buffering is set up properly and with reasonable sizes:
In the technical settings (transaction SE11) table TFDIR should be shown with "Buffering Activated" and "Single records buff."
In transaction ST02 the table buffer hit ratio should be > 99 % and the amount of swaps in the table buffer should be low.
It may be required to increase the table buffer parameters if they aren't sufficiently sized yet. See SAP Note 2103827 for more information related to the SAP table buffer parameters with SAP ABAP kernel 7.40 and higher.
Statement hash: 1e4dcf71cd91a988edcdcabfac5ff3c8
Type: SELECT FOR UPDATE
Table: TMY06
This SELECT FOR UPDATE and a related UPDATE of table TMY06 in context of include MRYF_COMMIT_WORK respectively application source RMNIWE20_01 is usually not required and can be removed with the coding correction available via SAP Note 3084147.
Statement hashes: 07235a83815cd5866e4beda29243da0a, 10773e93773c6572b858d40d02efdaa9, 1129bbd742e83f67d3593926744e7c69, 1db0f6ae9516a07651706615bda15d9b, 29fe87c0c75c1d644db4e3a67b5736de, 36832958d1a5ad76b9835525dd186570, 3b5a222efb5a2016506f9753203ea551, 3d4802a04f79a09957595d31036f8b4b, 43c49425b6f6613c18efa5a050b1dcd8, 4e1bb5a5602c3927cc2c07ac48e2bbe5, 53f9ff6100ff8c14703a536156787fa7, 56bee5bb255280c3927eb691ecd26740, 6bb5f8afea31ef3e1e7bb3344b08009b, 773a580662140171d0d8756c1d133592, 84ce82d311c7c39f4b4275fdabdd8340, 8724996697944d25738636b2382ac5a9, 87eaa701d54dcfa4bc027c08cfd8bee1, a8a39bd2dbd03f6299f851a8d5e4a075, b2924e84d11a5971280e52dff638613d, cab1c3ab82bdd1897bf03edd69e2f4ea, caf46e6637f132a29fccfae159f1b1ae, d1cd7ff86a6fa4b87010b2f00758e94c, e34239b1e8a3868c789aa16036bafd70, f7fc0b6e05007188355fd8a4e55ace37
Type: SELECT
Tables: TOA01, TOA02, TOA03
These queries executed in context of method FIND of class CL_ALINK_CONNECTION can suffer from inadequate optimizer decisions due to data skew (SAP Note 3513868). It can happen that during the first parsing the condition on column SAP_OBJECT is quite selective, so that the optimizer picks this column as an entry point, while starting via column OBJECT_ID usually provides the best performance.
Implement SAP Note 2700051 that delivers these statement hints for the most frequent selections on the TOA0* tables.
Statement hash: 56f3adf67e6975e26436563177651f21
Type: SET TRANSACTION
Details: This "SET TRANSACTION READ WRITE" command is executed while establishing a connection to SAP HANA. Long runtimes are typically not caused by the statement itself, but by general infrastructure or SAP HANA issues. See SAP Note 2000000 and check for general performance issues, e.g.:
High CPU consumption (SAP Note 2100040)
Network issues (SAP Note 2222200)
I/O issues (SAP Note 1999930)
System replication issues (SAP Note 1999880)
Lock issues (SAP Note 1999998)
Savepoint issues (SAP Note 2100009)
Statement hash: various, e.g. 35e994d35949a8e6aad9836a756f5152
Type: SELECT
Table: TRANSACTION_HISTORY
Details: See SAP Note 2388483 -> TRANS_TOKEN_HISTORY (which is the table accessed via view TRANSACTION_HISTORY) and make sure that the transaction history is kept at a reasonable size.
Statement hashes: 23b55b68c5131a580959ec19dc986441, 2d5f7b7a147b7c23dfc28d19591d3b85, 6a822f11564fc5b1779be273e2782077, 9344fd6a680c25d0e28bd66c55b14e32, acdedb945a5b0083ecc662acac88af75, c94fc486be0b78a9b6e7d1f84882631d, db757bc405842a7a278838467ede540b, de4d08b0863b6076896dc1334e2592f4, e1cdd703df87fc61ce8163fa107162a9, ee6a8a43d1165fe8cc2dd5b0c6b43799, f306ecaa6d72a7f56d0620259f8181a1
Type: CALL
Object: TREXviaDBSL
Details: TREXviaDBSL is used to execute queries that are not possible with plain SQL (e.g. BW queries with execution modes > 0). In the SAP HANA SQL cache both the content and the bind values are hidden, so a direct analysis and optimization is not possible.
See SAP Note 2800048 for more information related to TREXviaDBSL calls.
Statement hash: 75ea317896b0f3b001e0bbdd462edc67
Type: SELECT
Table: TRFC_O_UNIT
Details: This selection in class CL_BGRFC_UNIT_HANDLER_OUT_T can be executed very frequently in context of the bgRFC (background RFC) watchdog (passport action <BGRFC WATCHDOG>) in case of bgRFC inconsistencies (e.g. entries in table BGRFC_O_RUNNABLE without related entries in other tables). Check for inconsistencies via report RS_BGRFC_DB_CONSISTENCY and repair them to eliminate permanent watchdog activities.
Statement hashes: 0a67253485f849f94eac4d7acd74f71d, 1716815ae5051c3afbc19c2c5902cc5b, 1cbc1b492e2fcebad473cfc5eec8cba0, 28e4bfc6c342971f9ab54c7abdf6250e, 2f28892b2843a8b159ae89a1571e3e1a, 4543c836acce6ef174c42a61e443134e, 4696934057f8212f8dc5ef82c0c0380b, 59b3a1d01421e944a0c7949f483f0e8c, 6cc8ae0345f0cd530f6f3754fcd55688, 74644c4d6d4426a4aeb5f4f3bb6b49a3, 972175e7136b1569ca4979b2cf8e754c, 973e198ee37f31f2cbc2b7b3252eacc3, a33e7292527107bc7ff6cfa2531b0a21, a5744848b279e912379e1d2501f4ad0b, b2bb2d97fb55b064dcc303d88954ad2e, c3c626adae60fcf3f8f662ac75477d41, c6ce4430af8001676f1860ac129b4ad5, e4460b851e3d9dea92825cf65aec3bf7, e63a0e779cdc70b9053276a455b99c5b
Type: SELECT
Table: TRFCQIN
Details: Different TRFCQIN selections in function module QIWK_RUN (SAPLQIWK) with QNAME and QCOUNT restrictions can be expensive due to a wrong optimizer index decision. Pilot SAP Note 2398412 provides an ABAP coding correction as a workaround. SAP Note 2700051 provides statement hints so that the queries are forced to use index TRFCQIN~1 even if the ABAP coding correction isn't in place.
Independent of optimizer decisions, accesses can also be expensive in case many records exist in table TRFCQIN. See SAP Note 375566 that provides analysis and cleanup guidance.
Statement hash: 0bb645d182f850aa5beee23f9704f48b
Type: SELECT
Table: TRFCQIN
Details: This selection can be expensive in case many failed inbound queue RFCs exist (e.g. states RETRY, SYSFAIL or CPICERR). Check for failed RFCs in transaction SMQ2 and consider SAP Note 375566 for more details about analysis and resolution.
Statement hashes: 49ba27b9fc819c9a4589eeda7e787c7a, f4cf86ae7e2edbf0a32802e0a58c22ad
Type: SELECT
Table: TRFCQOUT
Details: This statement originating from function module QDEST_RUN (application source SAPLQOWK:1627) can suffer from the inability of SAP HANA to evaluate a MIN condition based on an index. It is particularly slow if many records exist for the queried MANDT / QNAME / DEST combination.
You can use SQL: "HANA_ABAP_QueuedRFC_Outbound" (SAP Note 1969700) in order to evaluate the existing qRFC entries.
The following options exist to optimize the scenario:
Avoid flooding TRFCQOUT with too many records for the same queue / destination.
Make sure that qRFC is properly configured and working efficiently to avoid that a significant backlog piles up.
Move table TRFCQOUT to the column store where the MIN condition is typically evaluated with better performance compared to the row store.
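Moving the table to the column store can be sketched as follows (a hedged example; the schema name SAPABAP1 is a placeholder, and you should verify the current storage type and the implications described in SAP Note 2222277 before converting SAP standard tables):

```sql
-- Check the current storage type of the table (schema name is a placeholder)
SELECT TABLE_NAME, IS_COLUMN_TABLE FROM TABLES
  WHERE SCHEMA_NAME = 'SAPABAP1' AND TABLE_NAME = 'TRFCQOUT';

-- Convert the table from row store to column store
ALTER TABLE "SAPABAP1"."TRFCQOUT" COLUMN;
```

The conversion takes an exclusive table lock, so it should be done during a quiet period with an empty or small queue.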
Statement hashes: 04889f506b60827138e56436d3474786, 0c0de3325053aed2e2a1260163974c8b, 0e7c0ef5261f68e200db6db6070e2979, 0f32e3f24f81f94a6812c093511a851f, 1ee164315208046bb05ca070a2bfa3d9, 219c467bfaff3e2ce2f9d66dc2e1e290, 32c31293c1bc36f7d5bc880684000d40, 3539d0edfd130e2a95708c7ef7a4f326, 38f7c968a229dfb7ee2e8688888619b0, 3a0a3b53f8aa937d796c7f45a9f94628, 3d6233854ca57d2bea43e2a8ea593b70, 46f7ed701af8e271a94a592944aeeda5, 4cc734c0928b50441befd3782e52ca71, 6425c3456f550064e9831cbd479abe32, 65541ea9ff3ca589ab3785a2d247270f, 76e9d6bafebb0e8ee65c294eb589b295, 7edde48cace745d72cfaa18f8c38cd23, 80e3134b8b8ca5b6f0f652c5d538f94f, 917a5ce67e2e9ae6049a542756a75374, 92280dc954620c61741b9648e020311d, 96e55a4f0622c3c6646ef03470701962, 9ee1c7a5b0d1e6c4093f54b64c25189a, aaf99cc61499b23eea6881439945544a, ab60f50ee7fb8b2f7303561db045e5df, b19b5145b99c58559e1678a39b4211c4, b5f9056a7ee1022042e20306c0b48e59, b7f626060290b24f84e777d5b56a2527, b987c268ee78c3b25e7390a46433d93e, c387fc1ba5614b32f26f46c8762b4d07, c67e0ff151ad6c6de696a4b15f9d0c64, cdc85b0be8f104d0b6390f84fdc9f918, ddcf8f3f2322f37456fbdfa7d7d431fb, e4ee5310d59f1a949294663ec9901639, e62051ca906a3a694d8b2345d0dcb942, f053abebad183def228db1b506d49307, f792b66c21e64442bac2276b94155a4a, fa4b1ed668d15956ce86d56f3632a4d9, fc938671ce5893de9dbb99b81ca18698, fd11181f9b4580948ce5b658163a1c12, fe7ebefcad7f33418ea1d43558ea6bfe
Type: SELECT
Table: TRFCQOUT
Details: In order to minimize the risk of performance overhead caused by inadequate optimizer decisions you should implement the statement hints delivered via SAP Note 2700051 (SAP HANA >= 1.00.122.03).
Statement hashes: 616d53e49560d76d919176535b4bc185, e314592f5bce71690d6ae05d088a1d41
Type: SELECT FOR UPDATE
Table: TSP01
Details: Long runtimes of this database request are typically a consequence of transactional lock contention (SAP Note 1999998). If you see many spool work processes being active with printing into the file system (showing details like "print 92336781/1"), there may be contention / overload on lower layers. You can consider the following analysis steps and optimizations:
If spooling to the file system is active (SAP Note 10551, rspo/store_location = G) and spool work processes are often busy with printing, you should check in collaboration with your hardware partner if there is any bottleneck on operating system or file system level. As a workaround you can set rspo/store_location = db so that spool information is written to table TST03.
Check if you can separate concurrent spool activities to different spool requests so that contention on spool request level is no longer possible.
Avoid running spool intensive operations like the reorg jobs RSPO0041 and RSPO1041 (SAP Note 2388483) during times of high concurrent workload.
Check if it is possible to reduce the amount of data spooled by the application.
Statement hashes: various, e.g. 44efafa3b4d42376ddd05ae489eb02a2, 6aef2dd2693056867b6aeef7d08d94d4
Types: DELETE, INSERT, SELECT FOR UPDATE, UPDATE
Tables: TSP02, TSPEVJOB
Details: Modifications on spool tables like TSP02 or TSPEVJOB can terminate due to transactional deadlocks (SAP Note 1999998) in case printers can't be reached, e.g.:

S temp disable PRTR for 300 s (until 1564113637) (0 retries): connect problem
*** ERROR => Safeguard: Printer PRTR disabled for 5 minutes [rspowunx.c 489]
Connection to SAPLPD or LPD Broken; Computer <computer>.

See SAP Note 173856 for more information.
In order to eliminate the deadlocks you have to take care of the terminations and make sure that connection issues are reduced. You can find errors e.g. in the SAP ABAP syslog (transaction SM21) or in the work process traces (transaction ST11). It isn't easily possible to tackle the deadlock on its own.
Statement hashes: a4ba84c9f41719decfd7bf1472c79227, f0a0081a03e2d6275d3849853ddb9125
Type: SELECT
Table: TST01
Details: These selections are done in context of the TBTCO / TST01 consistency check, ABAP report RSTS0024 (SAP Note 666290). Consider the following recommendations:
Make sure that the number of records in table TBTCO remains on a reasonable level (typically not more than 1 million) by keeping the number of scheduled jobs as small as possible and by implementing a proper housekeeping strategy (SAP Note 2388483).
Avoid running report RSTS0024 too frequently (i.e. more than once a month).
If required, an additional index on column DNAME can speed up the selections.
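If the additional index is needed, it could be created as sketched below (a hedged example; the index name and schema are placeholders, not SAP standard naming):

```sql
-- Additional secondary index to support the DNAME selections of report RSTS0024
CREATE INDEX "TST01~Z1" ON "SAPABAP1"."TST01" ("DNAME");
```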
Statement hashes: 84cb85a3eba68fe6ae4fbcf29ad26a14, 9b2eec360ac068a98bd0e7f2bd12f8aa, bf5cee7981f70de69deeba55c59c3d1f (and others, dependent on <schema> and <client>)
Type: SELECT
Table: TTZCU
Details: The following selection from timezone table TTZCU happens in context of evaluating the ABAP_SYSTEM_TIMEZONE function that can be part of other database requests and view definitions:

SELECT TOP 1 TZONESYS FROM <schema>.TTZCU WHERE CLIENT = '<client>'

Also CDS functions like DATS_TIMS_TO_TSTMP, TSTMP_TO_DATS, TSTMP_TO_DST and TSTMP_TO_TIMS may implicitly access the TTZCU table.
Starting with SAP HANA 2.0 SPS 04 you can use the root statement hash to track this implicit statement back to its origin, e.g. via SQL: "HANA_Threads_ThreadSamples_FilterAndAggregation" (STATEMENT_HASH = '<statement_hash>', AGGREGATE_BY = 'ROOT') available via SAP Note 1969700. Then you can check if the root statement can be optimized in terms of timezone related procedure calls.
In scale-out systems it can be of advantage to activate table replication for table TTZCU (SAP Note 2340450) in order to make sure that the table information is locally available on all SAP HANA nodes.
Statement hash: 9f0a8508e0543774562406e080a258d2
Type: INSERT
Table: TXMILOGRAW
Details: This table holds data of the ABAP XMI interface. Increased trace levels can result in an unnecessarily high data volume. Check the trace configuration as follows and make sure that tracing is only activated as short and targeted as possible:
Server-side tracing: Function module SXBP_SET_TRACE or transaction RZ15 -> "Goto" -> "Set Trace for XBP"
Tracing triggered by external scheduler: Check log entries in transaction RZ15. Entries like "Setting audit level to 3" indicate that an external job scheduler has increased the trace level. In this case, review the configuration of the external scheduler to minimize tracing.
Statement hash: 9e514d0f1ca916769b36d52aec4ae1e9
Type: SELECT
Table: T001L
Details: This access from /ISDFPS/CL_FDP_STOCK_LIST (transactions /ISDFPS/DISP_EQU_SIT or /ISDFPS/DISP_MAT_SIT) reads all T001L entries of the current client because a check for an empty FOR ALL ENTRIES list is missing on ABAP side. SAP Notes 2398901 and 2415082 provide corrections.
Types: DELETE, INSERT, UPDATE
Tables: UCONCACONFIGRT, UCONRFCCFGRT
Details: These modification requests can take longer due to exclusive lock waits (SAP Note 1999998) and eventually they are terminated due to lock wait timeout or deadlock. See SAP Notes 2904355, 2907832, 2943690 and 2979153 for application changes to minimize lock contention.
Statement hash: 1c4fc4ab7e637405c3383d4140db89ef
Type: UPDATE
Table: UKMBP_CMS_SGM
Details: The update of table UKMBP_CMS_SGM can suffer from transactional lock contention (SAP Note 1999998) and deadlocks in case of a parallelized credit check. SAP Note 3145867 provides an ABAP coding correction.
Type: UPDATE
Table: USERS
Details: Updates related to the SAP HANA dictionary table USERS are often not explicitly visible, but they can be indirectly observed, e.g. when threads in method session_cookie/update_last_connect_time have to wait for a transactional lock (SAP Note 1999998). Contention is possible when SAP HANA user adjustments are done via ALTER USER (e.g. mass updates via an individual SQLScript procedure) and are not committed immediately, e.g. because autocommit for DDL is disabled.
Statement hash: 7343a8b8c43d641e21e9e28339c23a89
Type: SELECT
Table: USOB_AUTHVALTRC
Details: This query is linked to the ABAP authorization trace. See SAP Note 1854561 and make sure that the parameter auth/authorization_trace isn't permanently set to "Y", resulting in a global authorization trace. Either activate it only as short as possible or use reasonable filtering.
In case you need to keep the authorization trace globally active, you can activate ABAP table buffering for table USOB_AUTHVALTRC in transaction SE13 so that the selections are satisfied in the ABAP table buffer and accesses to the database are no longer required.
Statement hashes: 09b3d2f0452e0e63a59bfcc5d7e3a99f, 3e3b8e11cad214e9a7a5cec6193ad270, 8b46525ae5c6cb03474ce50b0665d2ca, edbc92bf95fdb9ed1e0746be18ae4e55
Types: SELECT, SELECT FOR UPDATE, UPDATE
Table: USR02
Details: Accesses to USR02 are frequently executed in SAP ABAP environments in order to update user specific information. Long runtimes are typically not caused by the statement itself, but by general infrastructure or SAP HANA issues. See SAP Note 2000000 and check for general performance issues, e.g.:
High CPU consumption (SAP Note 2100040)
Network issues (SAP Note 2222200)
I/O issues (SAP Note 1999930)
System replication issues (SAP Note 1999880)
Lock issues (SAP Note 1999998)
Savepoint issues (SAP Note 2100009)
Memory reclaims (SAP Note 1999997)
Dead ABAP connections not properly terminated (issue number 324701)
Statement hash: a412091705b83f996acc04b745e2b120
Type: INSERT
Table: USR05
Details: INSERTs into table USR05 in context of application source SAPLFKKA:99 respectively function module FKK_GET_APPLICATION can suffer from transactional lock contention, deadlocks and "301: unique constraint violated" errors when concurrent application transactions try to process the same primary keys. A coding correction from application side is available in SAP Note 3349622.
Statement hash: 070f0383a839f783d1bc3a2ae66c21ae
Type: SELECT
Table: VARI
Details: This selection originating from include LSVARF02 (application source SAPLSVAR:21472) is executed frequently in context of batch jobs ODQ_TQ_JOB. See SAP Note 3080823 in order to minimize the load introduced by these batch jobs and perform housekeeping for RSPCLOGCHAIN (SAP Note 2388483).
Statement hashes: 5fe2acb9cfd7b372f100e8133e502cf2, 939eaa61a3d9dfd4a8d610e607a8c8f6, cbf7228b6601e0485769afbfc5077b5a
Type: UPDATE
Table: VARINUM
Details: This update is usually expensive due to exclusive lock waits (SAP Note 1999998) when variants are created for simultaneously running batch jobs using the same programs. SAP Notes 1791958 and 2928893 provide corrections to minimize the lock contention using a dedicated connection for the VARINUM update to decouple the VARINUM commit from the batch job commit. The related report is RSDBSPJS. In addition you should check if you unintentionally scheduled concurrent batch jobs using the same program.
Statement hash: aa9c7d24619a3fa5c9159c352f4dcb05
Type: SELECT
Table: VBHDR
Details: This selection from application source SAPLC14Z / report C14Z_COMMIT_CHECK checks for related VBHDR entries that still need to be processed for a specific user (VBUSR) and transaction code (VBTCODE). This check is repeated in a loop until no more fitting records are found, so it is usually expensive due to the number of executions and not due to the individual execution time. Consider the following optimization approaches:
Calling C14Z_COMMIT_CHECK with 'i_flg_wait_on_updtsk = lc_true' is a SAP internal setting and must not be used in production mode by customers. When you keep this parameter deactivated, the problem with the repeated VBHDR accesses will no longer happen.
In cases when SAP standard coding suffers, you need to check why the VBHDR entries aren't processed in time. Eliminate related bottlenecks like RFC misconfigurations and clean up previously failed updates (transaction SM13) to make sure that no orphan VBHDR records result in an endless selection loop.
Statement hashes: 6cc478091b517a1329060e2397101774, ba1c044211c8520ba553a12654cf2747, 6066d80ea76a9efa5c6b80d7b3bbe198
Type: SELECT
Tables: LIKPUK, VBAKUK, VBRKUK
Details: In case many quick selections are executed on *UK tables in context of the DVM readiness check, you can disable this unnecessary functionality as described in SAP Note 2820779.
Statement hash: 635ecd1151236f1e994ce172c9920050
Type: SELECT
Tables: VEPO, LIPS, LIKP and VEKP
Details: This selection with application source CL_RTST_RP_POST_DOCUMENT======CP:4225 originating from method GET_PRECEDING_SD_DOC_ITEMS4HU of class CL_RTST_RP_POST_DOCUMENT can suffer from overhead due to the VENUM = '' selection condition. SAP Note 3313650 provides a correction from application side.
Type: SELECT
Table: V_GLPOS_N_GL_CT
Details: See SAP Notes 3382199 and 3440558 and check if it is possible to use CDS view FGL_LIB_N_GL that provides the same results while using a simplified data model. If the performance is still not acceptable, open a SAP incident on component FI-GL-IS.
Statement hashes: 504b2d1e709321595a6d72084f0e4858, 9dd0bb3115913df9208c3388f7f5f461
Type: SELECT
Table: V_ML_ACDOC_EX_UL
Details: This query originating from function module IF_FML_XBEW_AGGREGATION~GET_AGGREGATION_MOTHER_MULTI can result in volatile optimizer decisions, resulting in varying runtimes and resource consumption. The following options exist to stabilize the execution plans:
Add statement hint (SAP Note 2400006) USE_HEX_PLAN for this query.
Consider SAP Notes 3156880, 3216523 and 3547166. Particularly the last one is promising because it will use an efficient FOR ALL ENTRIES-based HEX engine approach for itabs with a single record and an explicit itab join only for itabs with more records.
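Configuring such a statement hint could be sketched as follows (a hedged example; pinning a hint by statement hash requires a sufficiently new SAP HANA 2.0 revision, and with older revisions the full statement text has to be specified instead, see SAP Note 2400006):

```sql
-- Force the HEX engine for the query via a statement hint
ALTER SYSTEM ADD STATEMENT HINT (USE_HEX_PLAN)
  FOR STATEMENT HASH '504b2d1e709321595a6d72084f0e4858';
```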
Statement hashes: 74d3229c25a96f60303335a58bd537de, fe99dc1eebd9634d67d52edbeba15af8
Type: SELECT
Table: V_ML4H_CDS_CAT_H
Details: This query originating from method ENHANCE_BUFFER_WITH_NEW_ACT of class CL_FML_JOIN_CKMLPP_CR_BUFFER accesses table MLDOC via a CDS view using FDA WRITE (SAP Note 2399993). It is usually processed more efficiently using the HEX engine (SAP Note 2570371). You can configure a USE_HEX_PLAN statement hint to make sure that the HEX engine is actually used (SAP Note 2400006).
Statement hash: 878c325b36eaae5309594f17e0f7bc3f
Type: SELECT
Table: V_OP
Details: Selections from view V_OP in function module BP_JOB_SELECT_SM37C (application source SAPLBTCH) are related to explicit or implicit calls to transaction SM37C.
This scenario can regularly happen in context of the job monitoring functionality of the FRUN Simple Diagnostics Agent. In this case the ENDTIME / ENDDATE conditions are typically most selective and creating an additional TBTCO index on columns ENDTIME and ENDDATE can improve the performance. In a real life scenario the runtime improved from 8470 ms to 17 ms after the index was created.
Statement hash: 3ef368dc1b2ce64e1e8ae065f846e84f
Type: SELECT
Table: WLF_P_FACTORINGARITEMFLOW
Details: This selection executed in method _FILL_AR_ITEM_BUFFER of class CL_WLF_FACTORING_SERVICES is removed with the correction provided via SAP Note 3245234.
Statement hashes: b74b5839acb3869a79b889c325e94070 (XMII_CSTMATTRIBMAP), 97edbffda3d971680763d1653a73fe7c (XMII_DB_MEMORY), b10c4869dce505ffda821289f9876d33 (XMII_FILES, XMII_PATHS), 4275f64f1da015109fbb7cb187a17534 (XMII_JOBPROP)
Type: SELECT
Tables: XMII_CSTMATTRIBMAP, XMII_DB_MEMORY, XMII_FILES, XMII_PATHS, XMII_JOBPROP
Details: These accesses are not expensive on their own, but in context of transactional LOBs they can be responsible for a high number of SQL contexts and an increased size of the heap allocators Pool/Statistics or Pool/RowEngine/QueryExecution/SearchAlloc. See SAP Notes 2220627 -> "What are transactional LOBs?" and 2711824 for more details.
14. Is it required to create optimizer statistics in order to support optimal execution plans?
It is in most cases not necessary to create optimizer statistics in the SAP HANA context. SAP HANA determines optimal execution plans by certain heuristics (e.g. based on unique indexes and constraints), by ad-hoc sampling of data or by internally collecting and re-using statistical
information. Nevertheless there can be exceptions, e.g. for remote tables or when the query optimizer isn't able to determine column correlations properly. For details see SAP Note 2800028.
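As a hedged sketch of such an exception case, statistics for a virtual (remote) table could be created as follows (schema, table and column names are placeholders; see SAP Note 2800028 for details):

```sql
-- Create data statistics for a Smart Data Access virtual table so that the
-- optimizer has selectivity information for the remote column
CREATE STATISTICS ON "SAPABAP1"."V_REMOTE_ORDERS" ("ORDER_DATE") TYPE HISTOGRAM;
```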
15. Are all database operations recorded in the SQL cache (M_SQL_PLAN_CACHE)?
Standard operations like SELECT, INSERT, UPDATE or DELETE are recorded in M_SQL_PLAN_CACHE, but there are some exceptions listed below. In general the thread samples can be used to determine details. If more specific monitoring views are available, they are provided in column
"Alternative source".
Scenario: IGNORE_PLAN_CACHE hint
SAP Note: 2142945
Alternative source: M_SERVICE_THREAD_SAMPLES, M_EXPENSIVE_STATEMENTS
Details: When the hint IGNORE_PLAN_CACHE is used, the SQL cache is bypassed and no information is stored there.

Scenario: DDL
SAP Note: 2366291
Alternative source: M_EXECUTED_STATEMENTS
Details: DDL statements like CREATE, ALTER or DROP aren't stored in the SQL cache. The same applies to TRUNCATE operations. Instead, some of these operations can be found in the executed statements trace (SAP HANA >= 1.0 SPS 11).
UPDATE ... WITH PARAMETERS commands are actually DDL commands, but due to the UPDATE key word they were treated as DML commands up to SAP HANA 2.00.056 and stored in the SQL cache. They are no longer part of the SQL cache starting with SAP HANA 2.00.057.

Scenario: Backup
SAP Note: 1642148
Alternative source: M_BACKUP_CATALOG, M_BACKUP_CATALOG_FILES
Details: Backup operations (e.g. BACKUP DATA) are neither considered as DDL nor recorded in the SQL cache.

Scenario: Smart Data Access (SDA)
SAP Note: 2180119
Alternative source: M_REMOTE_STATEMENTS
Details: Accesses to virtual tables in SDA environments aren't tracked in the SQL cache.

Scenario: Temporary objects in procedures
SAP Note: 2003736
Alternative source: M_SERVICE_THREAD_SAMPLES, M_EXPENSIVE_STATEMENTS
Details: Accesses involving temporary objects in procedures aren't cached starting with SAP HANA 1.00.74. See SAP Note 2003736 ("Changed the implementation of SQL queries...") for more details.

Scenario: Calculation view unfolding with specific features
SAP Note: 2441054
Alternative source: M_SERVICE_THREAD_SAMPLES, M_EXPENSIVE_STATEMENTS
Details: Starting with SAP HANA 1.0 SPS 12 unfolded calculation views aren't cached in specific scenarios. As a workaround it is possible to disable unfolding.

Scenario: Multiprovider pruning
SAP Note: 2691117
Alternative source: M_SERVICE_THREAD_SAMPLES, M_EXPENSIVE_STATEMENTS
Details: Queries taking advantage of multiprovider pruning can't be cached in the SQL cache because for technical reasons changes to the pruning table wouldn't result in an invalidation of the cache entry and the existing plan would produce wrong results. SAP Note 2691117 describes how to deactivate multiprovider pruning as a workaround (imposing a risk of performance regressions).
You can set the database trace level for ceinstantiate to info in order to verify if multiprovider pruning is the reason for a missing SQL cache entry:
indexserver.ini -> [trace] -> ceinstantiate = info

Scenario: Ad-hoc queries via TREX_EXT_AGGREGATE
SAP Note: 2749360
Alternative source: M_SERVICE_THREAD_SAMPLES, M_EXPENSIVE_STATEMENTS
Details: With newer SAP basis SP levels ad-hoc queries via TREX_EXT_AGGREGATE are no longer recorded in the SQL cache because they are never reused anyway.
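The IGNORE_PLAN_CACHE scenario can be illustrated as follows (a hedged sketch; schema and table names are placeholders):

```sql
-- This query is compiled and executed without creating an entry in M_SQL_PLAN_CACHE
SELECT COUNT(*) FROM "SAPABAP1"."A004" WITH HINT (IGNORE_PLAN_CACHE);
```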
17. Is it possible to capture bind values of prepared SQL statements?
A prepared SQL statement contains bind variables ("?") rather than literal values, e.g.:

SELECT * FROM "A004" WHERE "MANDT" = ? AND "KNUMH" = ? AND "KAPPL" = ?

In order to understand selectivities and correlation and to be able to reproduce a problem with a SQL statement it is important to know which actual bind values are used. This information can be determined based on specific traces, e.g.:
ABAP SQL trace (transaction ST05)
SAP HANA SQL trace (SAP Note 2119087)
On SAP HANA side the expensive statement trace captures bind values which can e.g. be evaluated via SQL: "HANA_SQL_ExpensiveStatements" or SQL: "HANA_SQL_ExpensiveStatements_BindValues" (SAP Note 1969700).
Additionally SAP HANA is able to capture the bind values of critical SQL statements in the SQL plan cache per default. This capturing is controlled by the following parameters:

indexserver.ini -> [sql] -> plan_cache_parameter_enabled (default: true)
true: Activate capturing of bind values (for non-LOB columns) for long running SQL statements
false: Deactivate capturing of bind values

indexserver.ini -> [sql] -> plan_cache_parameter_threshold (default: 100 ms)
After having captured the first set of bind values for a certain SQL statement, SAP HANA captures further sets of bind values if the single execution time exceeds the parameter value and is higher than the single execution time of the previously captured bind values.

The captured values are stored in view M_SQL_PLAN_CACHE_PARAMETERS and can be evaluated via SQL: "HANA_SQL_StatementHash_BindValues" or SQL: "HANA_SQL_StatementHash_BindValues_CommaList" (SAP Note 1969700).
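The captured bind value sets can also be inspected directly, e.g. (a minimal sketch using the placeholder convention of this note):

```sql
-- Show all captured bind value sets for one statement hash
SELECT * FROM M_SQL_PLAN_CACHE_PARAMETERS
  WHERE STATEMENT_HASH = '<statement_hash>';
```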
18. How can the performance of data modifications be tuned?
Data modifications are operations modifying existing data in the database, e.g.:
INSERTs (not the SELECT part of INSERT ... SELECT)
UPDATEs, DELETEs, UPSERTs (not the evaluation of the WHERE clause)
If you want to improve the performance of data modifications, you can consider the following areas:
Lock waits: See SAP Note 1999998 and optimize internal and transactional lock wait situations if required. Typical situations when an INSERT has to wait for a lock are:
Blocking savepoint phase (ConsistentChangeLock)
Concurrent modifications of the same primary key (transactional lock)
Delta storage contention (Sleeping, SleepSemaphore)
DDL operation on the same table active (transactional lock)
I/O bottleneck (DeltaDataObjectAppendRollover)
Columns: During an INSERT every column has to be maintained individually, so the INSERT time significantly depends on the number of table columns. For UPDATEs the overhead depends on the number of columns in the SET clause, so try to avoid including columns in the SET clause when the original and new value are identical. When columns are unnecessarily specified, not only the modification itself needs to be done; SAP HANA also has to take care of dependent structures like multi-column indexes, primary keys or foreign key constraints.
Indexes: Every existing index slows down modifications. Check if you can reduce the number of indexes during mass modifications and data loads. SAP BW provides possibilities to automatically drop and recreate indexes during data loads. The primary index normally mustn't be dropped.
Batch load: If a high number of records is loaded, you shouldn't perform modifications for every individual record. Instead you should take advantage of batch loading options (i.e. inserting / updating multiple records with a single operation) whenever possible, for example using the "FROM TABLE" clause in ABAP. As a general rule of thumb, batch sizes smaller than 100000 can introduce a measurable performance impact.
The size of individual batches in SAP ABAP environments is controlled by the command buffer size that can be configured with SAP ABAP profile parameter dbs/hdb/cmd_buffersize. Per default it is set to 1048576 byte (1 MB). If the size of a record is 5000 byte, this would result in INSERT batch sizes of around 200 records. An increase of dbs/hdb/cmd_buffersize (e.g. to 10485760 bytes / 10 MB) can increase the batch size and improve the performance. Be aware that the command buffer is allocated by each SAP ABAP work process, so the memory requirements on ABAP side will increase (dbs/hdb/cmd_buffersize * number of work processes).
Parallelism: If a high number of records is loaded, you should consider parallelism on client side, so that multiple connections to SAP HANA are used to load the data.
Partitioning: Modifications can only use a single CPU per column and partition. Furthermore there can be delta storage contention (BTree GuardContainer waits, SAP Note 1999998), so in case of a very high amount of inserted records it can be helpful to configure more partitions (SAP Note 2044468). Be aware that an increased number of partitions can have negative side effects, so you should consider this option with care.
Commits: Make sure that a commit is executed on a regular basis when mass modifications are done (e.g. after each bulk of a bulk load).
Redo logging: Processing of redo logs can be a significant overhead and result in different wait situations like LoggerBufferSwitch or LogBufferFreeWait (SAP Note 1999998). The following options exist to minimize the overhead:
Make sure that disk I/O to the redo logs (SAP Note 1999930) and system replication (SAP Note 1999880) are working fine and without bottlenecks from a performance perspective.
Consider increasing the log buffer size and count (SAP Note 2215131) if modifications have to wait for log buffer related locks.
If it is possible to reconstruct a table completely in case of issues, you can consider deactivating delta logging via ALTER TABLE ... DISABLE DELTA LOG. Attention: When the delta log is disabled, the table content can no longer be recovered in case of crash or restore, so you need to be able to recreate the table content from redundant data.
Delta merge: Usually auto merges should do an acceptable job also during mass modifications. In exceptional cases deactivating auto merges and using an individual manual merge strategy can be useful. See SAP Note 2057046 for more details.
An extremely large delta storage can reduce the load performance and increase the memory footprint, so executing delta merges also during mass modifications can be of advantage.
Avoid repeated merges of small delta storages or merges with a high amount of uncommitted data in order to avoid unnecessary overhead.
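A manual merge strategy could be sketched as follows (a hedged example; table and schema names are placeholders, see SAP Note 2057046 for the full procedure and prerequisites):

```sql
-- Disable automatic delta merges for the load table
ALTER TABLE "SAPABAP1"."ZLOAD_TARGET" DISABLE AUTOMERGE;

-- ... mass load happens here; trigger merges manually at suitable points:
MERGE DELTA OF "SAPABAP1"."ZLOAD_TARGET";

-- Re-enable automatic merges after the load has finished
ALTER TABLE "SAPABAP1"."ZLOAD_TARGET" ENABLE AUTOMERGE;
```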
Table vs. record lock: In cases where only a single, non-parallelized modification is possible and concurrent changes to the underlying table aren't required, it can be useful to use a global table lock instead of a high number of individual record locks. The table lock can be set via:

LOCK TABLE "<table_name>" IN EXCLUSIVE MODE

Afterwards SAP HANA no longer needs to maintain individual record locks. This approach is also valid for INSERT ... SELECT operations which may be parallelized internally.
Savepoints: Savepoints are required to write modified data down to disk (SAP Note 2100009). Normally the main intention is to shorten the blocking savepoint phase as much as possible and accept longer savepoint durations at the same time. During mass imports the opposite can be better: shorter savepoints with the risk of increased blocking phases. Shorter savepoints can reduce the amount of data written to disk and the amount of logs that need to be kept, reducing the risk of file system overflows.
During mass changes the following parameter adjustments can be considered to reduce the overall savepoint duration:
lower values for global.ini -> [persistence] -> savepoint_max_pre_critical_flush_duration (e.g. 300 instead of 900)
higher values for global.ini -> [persistence] -> savepoint_pre_critical_flush_retry_threshold (e.g. 10000 instead of 3000)
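Applied with the example values suggested above, the adjustment could be sketched like this (to be reverted after the mass change):

```sql
-- Shorter pre-critical flush phase -> shorter overall savepoints during the load
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('persistence', 'savepoint_max_pre_critical_flush_duration') = '300',
      ('persistence', 'savepoint_pre_critical_flush_retry_threshold') = '10000'
  WITH RECONFIGURE;

-- Revert to the default behavior after the mass change
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  UNSET ('persistence', 'savepoint_max_pre_critical_flush_duration'),
        ('persistence', 'savepoint_pre_critical_flush_retry_threshold')
  WITH RECONFIGURE;
```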
INSERT ... SELECT: When an INSERT is based on a SELECT, the SELECT part can be the dominating performance factor and it can be analyzed individually like a normal database query.
In the special case when the SELECT is used as a scalar subquery, performance issues are possible with SAP HANA 2.0 SPS 04 (SAP Note 2911162).
Trigger: If triggers (SAP Note 2800020) fire for each modification, this can significantly extend the runtime. In the worst case a highly parallelized batch insert operation has to execute the individual trigger operations sequentially. Thus, you should make sure that only really required triggers are in place when doing high volume data loads.
1.00.120 - 1.00.122.11, 2.00.000 - 2.00.012.00 (SAP Note 1999998): If a lot of spatial data is inserted row-by-row without commit, the performance can be quite bad due to a SAP HANA bug and a lot of time is spent in call stack module AttributeEngine::spatialae::DeltaComponents::reserveDocid. As a secondary effect contention on "GeometryDeltaAttribute Lock" is possible.
2.0 SPS 00 - 2.0 SPS 02 (SAP Note 2220627): INSERTs can suffer from packed LOB processing and spend a lot of time in call stack module DataContainer::VarSizeEntryUserDataHandler::getPageWithFreeSpaceFromFreeList. As a workaround you can disable packed LOBs by setting the following parameter:
In general this setting should be kept in place for as short a time as possible, e.g. only during the critical data load phase of a migration.
<= 2.0 SPS 02 (SAP Note 2646143): Up to SAP HANA 2.0 SPS 02 mass inserts into temporary tables defined with NO LOGGING can slow down due to main storage optimizations that are executed after every insert batch. The related call stack module is typically AttributeEngine::AttributeValueContainer::commitOptimize.
In ABAP environments the number of insert batches and the related optimization overhead can be reduced by increasing the SAP profile parameter dbs/hdb/cmd_buffersize. See SAP Note 2600030 for suggestions to generally use higher values for dbs/hdb/cmd_buffersize.
Problem situations like long critical savepoint phases or other locks: < 500 records / second
19. Why are there significant differences between SQL statements on ABAP and SAP HANA side?
The ABAP Open SQL statements are transferred to the database via the database interface (DBI). In many cases, the statement is modified in the DBI before being sent to the database. Typical adjustments are:
Bind variables ("?"), SAP Note 2124112: Per default literals are replaced with bind variables ("?"), so DOCNO = 1234 will show up as DOCNO = ? on database side. In case of newer ABAP version levels and constants, a replacement no longer happens and the constant value is directly passed to the database.
SELECT *: A SELECT * (i.e. the selection of all columns) is propagated to SAP HANA if all of the following conditions are fulfilled:
Primary connection
ABAP kernel < 7.81
ABAP nametab is aware of the field order (DBTABPOS)
If at least one of the conditions is not fulfilled, a comma separated column list is generated.
SELECT COUNT: Counting the number of records via SELECT COUNT is usually done on database level and the result is returned to the ABAP. There is one important exception, though: a SELECT COUNT in combination with FOR ALL ENTRIES reads all matching records via SELECT DISTINCT and performs the counting on ABAP side. This is required because FOR ALL ENTRIES may be split into several database requests with potentially overlapping result records.
Example:
ABAP coding:
Database request:
SELECT
DISTINCT "MANDT" , "KUNNR"
FROM
"KNA1"
WHERE
"MANDT" = ?
In this case the database request looks completely different from the ABAP coding because:
ABAP itab DD_KUNNR is empty and so only the client condition is used in the WHERE clause
SELECT COUNT is executed as SELECT DISTINCT <primary_key_columns>
The SELECT DISTINCT can be much more expensive than the SELECT COUNT. In this example it reads 90 million records in 40 seconds while a SELECT COUNT finishes in around 0.1 seconds. Thus, SELECT COUNT in combination with FOR ALL ENTRIES and potentially large amounts of matching records should generally be avoided.
National character set, N'<literal>': Constant string literals are prefixed with 'N', indicating that the national character set is used, e.g. FLG_DELIVERED = N'X'.
Empty ABAP variable for column in WHERE clause: If a column with an empty variable is part of the WHERE clause on ABAP side, the DBI omits this condition and doesn't send it to SAP HANA.
FOR ALL ENTRIES (single itab reference, no FDA WRITE), <column> IN ( ? , ... , ? ), SAP Note 1987132: ABAP FOR ALL ENTRIES statements that don't use FDA WRITE and that only have a single reference to the itab in the WHERE clause are transformed into IN list selections with a maximum length of rsdb/max_in_blocking_factor (default: 100).
FOR ALL ENTRIES (multiple itab references, no FDA WRITE), <conditions> OR ... OR <conditions>, SAP Note 1987132: ABAP FOR ALL ENTRIES statements that don't use FDA WRITE and that have multiple references to the itab in the WHERE clause are transformed into OR concatenation selections with a maximum length of rsdb/max_blocking_factor (default: 50).
FOR ALL ENTRIES (empty itab), SELECT ... FROM <table> WHERE MANDT = ?: In case of an empty FOR ALL ENTRIES itab all records of the current client are selected. Neither the FOR ALL ENTRIES columns nor any other restriction is propagated to the database.
FOR ALL ENTRIES DISTINCT, SAP Note 1987132: Starting with ABAP kernel 7.42 a DISTINCT is generated for FOR ALL ENTRIES statements in any of the following situations:
FDA is used and no LOB columns are selected
The whole FOR ALL ENTRIES list can be satisfied with a single database selection and no LOB columns are selected
FDA WRITE, /* FDA WRITE */ ... ? AS "t_00", SAP Note 2399993: FDA WRITE writes an ABAP itab to the database and joins it there, using the alias t_00. The comment "FDA WRITE" indicates this specific context. The most common context of FDA WRITE are FOR ALL ENTRIES statements, but it can also happen in context of mass modifications. This feature can be controlled with the ABAP parameter rsdb/prefer_join_with_fda.
FDA READ, /* FDA READ */, SAP Note 2399993: FDA READ reads a database result into an ABAP itab structure from the database. Many database requests take advantage of this optimization that is indicated via the "FDA READ" comment. This feature can be controlled via the ABAP parameter rsdb/supports_fda_prot.
ABAP data aging, WITH RANGE_RESTRICTION('<date>') / WITH RANGE_RESTRICTION('CURRENT'), SAP Note 2416490: This clause is added at the end of database requests when data aging is activated on ABAP side. This feature can be controlled with the following ABAP parameter:
abap/data_aging = on | off
ABAP table buffering, /* Buffer Loading */: If tables are completely or generically buffered on SAP side, the buffers are reloaded, if necessary, with special statements that may be completely different from the statement in the ABAP source code. The comment "Buffer Loading" indicates this context, e.g.:
SELECT ... FROM <table> WHERE MANDT = ? ORDER BY <primary_index>
ABAP table buffering, /* Table Buffer could not be used: ... */: These kinds of comments are included in a SQL statement in case ABAP table buffering is in place and could normally be used by the statement, but a special scenario exists that prevents the buffer from being used, e.g.:
5 executions after a table buffer invalidation
"displaced" or "error" buffer state, e.g. due to an insufficient ABAP table buffer size or a character sorting issue
Buffer synchronizations (SAP Note 3186413)
The following comment variants exist:
/* Table Buffer could not be used: DisplacementIsRunning */
/* Table Buffer could not be used: GenericKeyMiss */
/* Table Buffer could not be used: ObjectInStateError */
/* Table Buffer could not be used: ObjectUnderPenalty */
/* Table Buffer could not be used: ParallelAccessDoesNotAllowLoading */
/* Table Buffer could not be used: ParallelInvalidation */
/* Table Buffer could not be used: ParallelLoading */
/* Table Buffer could not be used: SynchronisationIsRunning */
You can use ABAP transactions ST02 and ST10 to check for ABAP table buffer details.
ABAP IN conditions, "=", "IN", "LIKE", "BETWEEN", ">", "<", ">=", "<=": IN conditions on ABAP side can be converted into different types of conditions on the database level, depending on the selection criteria configured on ABAP side.
Columns both in selection list and WHERE clause: Columns that appear in both the selection list and the WHERE condition are removed from the selection list if it is clear from the WHERE condition what the column's value must be.
Expressions ending with blank and placeholder, (<column> LIKE ? OR <column> LIKE ?): If an expression ends with a space followed by a placeholder, the system generates an OR concatenation as follows:
SQL statement: ... WHERE <column> LIKE '<string> %'
Statement after DBI transformation: ... WHERE (<column> LIKE '<string> %' OR <column> LIKE '<string>')
Reason: In ABAP trailing blanks are always removed. So if the blank is the last character, it must be removed. If subsequent characters follow, it needs to remain.
Be aware that with certain ABAP 7.77 - 7.96 kernel versions the following conditions are generated in addition:
CDS view accesses /* Entity name: ... */ ABAP core data services (CDS) view accesses are indicated with a comment containing the related entity name.
Compatibility view accesses /* Redirected table: <table> */ ABAP compatibility view accesses are indicated with a comment containing the related original table name.
Native SQL being part of Open SQL, /* Contains Native SQL */, SAP Note 2800008: Using the %_NATIVE syntax it is possible to include native, database specific coding in the ABAP Open SQL statement. On SAP HANA level the comment "Contains Native SQL" is an indicator for this scenario. A common reason for this approach is the use of CONTAINS / FUZZY clauses in context of fulltext indexes.
ABAP conversion exits ABAP conversion exits can be responsible for additional conditions that are not explicitly present in the ABAP source code.
ABAP kernel requests The ABAP kernel may directly execute database requests that are not visible explicitly in the ABAP coding.
>= Rev. 90: Generation of an explain plan for a SQL statement with PLAN_ID <plan_id> in M_SQL_PLAN_CACHE:
EXPLAIN PLAN FOR SQL PLAN CACHE ENTRY <plan_id>
This option is helpful to understand why a previously recorded SQL statement shows a different performance than current executions.
Results of these commands are written to the EXPLAIN_PLAN_TABLE. Among others, you can use SQL: "HANA_SQL_ExplainPlan" (SAP Note 1969700) to evaluate the results.
In order to identify the proper entries in EXPLAIN_PLAN_TABLE, you can use "EXPLAIN PLAN SET STATEMENT_NAME = '<statement_name>' ..." when generating the explain plan. Then you can check EXPLAIN_PLAN_TABLE for entries related to <statement_name>.
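Combined, the workflow could be sketched as follows; the statement name 'my_check' and plan ID 1234567 are hypothetical examples:

```sql
-- Generate an explain plan for a cached statement under a recognizable name
EXPLAIN PLAN SET STATEMENT_NAME = 'my_check'
  FOR SQL PLAN CACHE ENTRY 1234567;

-- Retrieve the generated plan from EXPLAIN_PLAN_TABLE
SELECT operator_name, operator_details, table_name
  FROM explain_plan_table
 WHERE statement_name = 'my_check'
 ORDER BY operator_id;

-- Clean up the entries afterwards
DELETE FROM explain_plan_table WHERE statement_name = 'my_check';
```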
21. How can I identify the client coding responsible for a database request?
In order to understand the business background of a database request it is important to understand where it originates from. The following options can be used for drill-down. Be aware that limitations apply as described in "Is client related information like application name or application source always filled properly?":
Environment / Option / SAP Note / Details:
all, Thread samples, SAP Note 2114710: The thread samples views M_SERVICE_THREAD_SAMPLES and HOST_SERVICE_THREAD_SAMPLES contain application information in columns APPLICATION_SOURCE and APPLICATION_NAME.
all, Traces, SAP Note 2119087: The following traces provide application related information in the output:
Expensive statements trace: APPLICATION_SOURCE and APPLICATION_NAME in M_EXPENSIVE_STATEMENTS
Performance trace: APPLICATION_USER_NAME and APPLICATION_NAME in M_PERFTRACE
all, SQL statement views: The SQL statement views M_PREPARED_STATEMENTS and M_ACTIVE_STATEMENTS contain APPLICATION_SOURCE information.
ABAP, SQL plan cache in DBACOCKPIT, SAP Note 2222220: The "Navigation to Editor" button allows to jump from a SQL statement to the related ABAP coding location. For performance reasons this may only work if the SQL statement is currently executed.
With an ABAP basis SP level of at least 7.02 SP20, 7.30 SP18, 7.31 SP21, 7.40 SP18, 7.50 SP9, 7.51 SP4, 7.52 SP1 or 7.53 SP0 and SAP HANA 2.0 the coding location is also available if the statement isn't executed at the moment.
ABAP, Function module DB_SQL_SOURCE_DISPLAY / Report RSDB6GETSOURCE: If it is not possible to jump from the DBACOCKPIT SQL plan cache section to the ABAP location, you can manually call the ABAP function module DB_SQL_SOURCE_DISPLAY and execute it with the proper PROG_NAME and CONT_OFFS values (in SOURCE_REF) that you can derive from the APPLICATION_SOURCE column available in views like M_SERVICE_THREAD_SAMPLES, HOST_SERVICE_THREAD_SAMPLES, M_PREPARED_STATEMENTS or M_ACTIVE_STATEMENTS.
Example:
Location Name Value
22. Are there known issues with particularly large SQL statement texts?
SQL statements with a large SQL text can result in different problems. These statements are not necessarily complex statements from a processing logic, but may be large for other reasons, e.g. due to a very long IN list.
The following problems can originate from large SQL texts:
Long preparation times (SAP Note 2092196): Very long IN lists can result in long preparation times with processing in module ptime::qo_Comp::is_same_pred.
Blocked garbage collection (SAP Note 2169283): SQL statements typically can't be terminated while they are prepared, so long preparation times can also result in blocked garbage collection. In the worst case only a SAP HANA restart can resolve the problem.
Increased Pool/malloc/libhdbrskernel.so allocator size (SAP Notes 1999997, 2114710): The allocator Pool/malloc/libhdbrskernel.so can grow when thread samples of very large SQL statements are repeatedly taken, including thread details.
Terminations (SAP Note 2124112): If very long SQL statements are parsed, the following termination can happen:
It can be useful to reduce or limit the maximum allowed SQL statement size. The following options exist:
SAP ABAP dbs/hdb/stmtlng: You can use the SAP ABAP profile parameter dbs/hdb/stmtlng to limit the maximum SQL statement size. The default is 104857600 bytes, i.e. 100 MB. This large value basically disables the size threshold, so SQL statements of any size are sent to SAP HANA.
As an example, the following setting would implement a statement length limitation of 2 MB:
dbs/hdb/stmtlng = 2097152
Be aware that there are specific situations (e.g. in the context of TREX_EXT_CREATE_CALC_SCENARIO in BW) where statements of 10 MB and more may be generated.
A SQL statement exceeding the defined limit will be terminated with an error message like:
*** ERROR => max. statement length (2097152) exceeded [dbhdbsql.cpp 1051]
SAP ABAP FOR ALL ENTRIES: If SQL statements with long IN lists are generated, you can cut them into individual pieces using the ABAP FOR ALL ENTRIES feature. The split is done based on the configured blocking factors. See SAP Note 1987132 for more details.
SAP BW TREX modules SAP Notes 2294033 and 2341605 provide optimizations in handling large database requests in the context of TREX modules.
23. How can details for prepared SQL statements be determined?
In order to reduce parsing overhead (SAP Note 2124112), bind variables are used in many environments like SAP ABAP:
Literals: SELECT * FROM DBSTATC WHERE OBJOW = 'SAPR3' AND DBOBJ = 'AFPO'
It can make a significant difference in terms of execution plan, performance and resource consumption if a SQL statement is executed with explicit literals or with bind variables. Therefore it is recommended that you analyze an expensive SQL statement that uses bind variables in the same
way, i.e. also with bind variables. This can be achieved by using a prepared SQL statement.
SAP Note 2410208 describes how to generate an explain plan for a prepared SQL statement. Similarly you can create a PlanViz (SAP Note 2073964) of a prepared statement by choosing "PlanViz" -> "Prepare" in the SQL editor of SAP HANA Studio.
Attention: With SAP HANA >= 2.0 the parameter monitoring_level is used for this purpose.
Attention: With SAP HANA 1.0 the parameter execution_monitoring_level is used for this purpose.
If this parameter is set and retention_period_for_sqlscript_context isn't set, a value of 3600 s will be used for retention_period_for_sqlscript_context.
Values like 1000 retained calls and 3600 seconds retention time can be a reasonable starting point.
Standard SQL and performance optimization: In order to make sure that no standard issue impacts the compatibility view performance (e.g. infrastructure, parameter settings, missing index) you should at first make sure that the recommendations of SAP Note 2000000 are in place and you should perform an initial SQL analysis based on the instructions of this SAP Note here. Particularly pay attention to the SAP HANA and SAP ABAP parameter recommendations provided in SAP Note 2600030.
Avoid individual queries: Individual queries against compatibility views (e.g. coming from ABAP transaction SE16) without reasonable restrictions can be responsible for long runtimes and high resource consumption, so you should restrict the possibility of individual queries as much as possible and train users to avoid unselective conditions.
SAP HANA hints Check if the execution time improves when using a SAP HANA hint (SAP Note 2142945). In particular the following hints can make a difference:
USE_OLAP_PLAN / NO_USE_OLAP_PLAN
CS_JOIN / NO_CS_JOIN
CS_UNION_ALL / NO_CS_UNION_ALL
LIMIT_THRU_JOIN / PRELIMIT_BEFORE_JOIN when limiting the result significantly, see SAP Notes 2793263, 2900345 in context of ABAP transaction SE16 / SE16N
If a hint works out fine you can consider pinning it to the related database requests (SAP Notes 2222321, 2400006), at least as a temporary workaround until a better solution is available.
Application changes: Several best practices exist for how to optimize the use of compatibility views from an application perspective:
MARC, MARD, MARDH, MBEW, MBEWH, MBVMBEW, MCHB, MCHBH, MKPF, MSEG, NSDM_V_MARC, NSDM_V_MARD, NSDM_V_MARDH, NSDM_V_MCHB, NSDM_V_MCHBH, NSDM_V_MKPF, NSDM_V_MSEG 2206980, 2217299, 2337368, 2713495
More frequent precompacting See SAP Notes 2246602 and 2342347 and consider a more frequent precompacting of data in tables ACDOCA_M_EXTRACT and / or MATDOC_EXTRACT.
Switch from compatibility view to direct S/4HANA table access: Compatibility views are an intermediate solution to map the old application coding to the new table structures. In the long term the application coding should directly access the S/4HANA tables, so that the compatibility view accesses are no longer required.
Scenario Details
Sequence performance: Changes in the main tables are tracked in logging tables using SAP HANA triggers and sequences (SAP Note 2600095). In order to optimize the sequence handling, you can consider the following:
Activate sequence caching (e.g. "CACHE 100")
Use DMIS 2011 SP15 or higher where sequence caching (with a cache size of 100) is active per default.
When doing a full replication you should make sure that for the target table no SLT replications to other systems are active. Otherwise the efficient full synchronization from the source to the target system can be massively slowed down by the logging of the changes of the target table using triggers and sequences.
In scale-out: Put source tables for SLT replication on the master node if possible, in order to avoid network communications with the sequence manager on the master node
In scale-out: Avoid distributing partitions of source tables for SLT replication to different SAP HANA nodes in order to avoid frequent remote requests and shipment of the sequence cache between the nodes.
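For an existing sequence, caching as recommended above could be enabled for example like this; the sequence name "ZSLT_SEQ" is a hypothetical example:

```sql
-- Cache 100 sequence values locally to avoid a round trip
-- to the sequence manager for every single NEXTVAL call
ALTER SEQUENCE "ZSLT_SEQ" CACHE 100;
```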
Trigger performance: Changes in the main tables are tracked in logging tables using SAP HANA triggers (SAP Note 2800020) and sequences (SAP Note 2600095). In order to optimize the trigger activities, check SAP Note 2800020 -> "What do I have to consider in terms of database trigger performance?".
Initial load: Initial loads based on "Reading Type 3 - Primary Key Order" (as described in the SLT Performance Optimization Guide) are not efficient on SAP HANA in case of large source tables, because the primary key doesn't provide a sorted result set and so a large number of rows has to be read and sorted before returning the first few thousand rows in the sorted order. Related queries have an ORDER BY clause based on the primary key, "=" conditions on some leading index columns and a ">" condition on the subsequent primary key column (not necessarily the last column in the index).
Example:
SELECT
*
FROM
"ZTABLE"
WHERE
"MANDT" = ? AND
"ZTABLE_SYSTEMID" = ? AND
"MATNR" > ?
ORDER BY
"MANDT" , "ZTABLE_SYSTEMID" , "MATNR" , "STATM" , "ZHLER"
In order to optimize these requests, you can consider the following aspects:
Keep the amount of data in logging tables as small as possible. Logging tables with an increased record count are reported by check ID M2540 ("SLT logging tables with significant record count") of the SAP HANA Mini Checks (SAP Note 1999993). See SAP Note 2882481 about tackling a high number of logged records, e.g. by reloading the table or defining ranges.
If the amount of data that has to be sorted is quite large and the package size is rather small ("TOP 5000" in the case above), you can maintain a larger value (e.g. 99999) in transaction LTRS -> "Replication Options" -> "Portion Size (Records)" or column NUM_RECS_LOGTAB of table IUUC_PERF_OPTION for the involved source table and regenerate the runtime object. Then you can process significantly more records and still sort only once. Be aware that the length of the field is limited to 5 characters, so values beyond 99999 aren't possible. When increasing the package size you also have to make sure that the data volume processed in one package (i.e. package size * length of source table record) doesn't exceed 1 GB.
In transaction LTRS you can use the "Performance Options" -> "Replication Options" to configure a parallelization of the replication. For this purpose, in section "Ranges for Logging Table" you can choose "Use Key Fields to calculate Ranges".
The above example uses an older, no longer recommended option to parallelize the replication by defining a parallelization column (DOCNUM in the above example) in PARALLEL_FIELDNAME of table IUUC_PERF_OPTION. This parallelization column is included as the second column of an index after column IUUC_PROCESSED. As SAP HANA can't evaluate ranges in multi column indexes, you can manually create an additional single column index on the parallelization column (in this case on DOCNUM).
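Such an additional single column index could be created as follows; the index and logging table names are hypothetical examples:

```sql
-- Single column index so that the range condition on the
-- parallelization column (DOCNUM) can be evaluated via the index
CREATE INDEX "ZSLT_LOG~DOC" ON "ZSLT_LOGTAB" ("DOCNUM");
```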
CDC observer job logging table selection: The following kind of SLT logging table selection is executed by the CDC observer job (SAP Note 3061258, application source CL_DHCDC_REC_OBSERVER_CNTRLR==CP:385, batch job /1DH/OBSERVE_LOGTAB):
SELECT
COUNT(*) "CNT" , MIN( "/1DH/SEQUENCE" ) "MIS" , MAX( "/1DH/SEQUENCE" ) "MAS"
FROM
"/1DH/ML00000001S"
WITH HINT(NO_USE_OLAP_PLAN)
General optimization options (SAP Notes 446485, 2163425, 2761821): These SAP Notes provide an overview about general performance optimizations.
Reading source data with a rather small package size can introduce significant scan and sort overhead. Make sure that SMALLBLOCK is not set and set LARGEBLOCK so that more data is processed at once.
Optimization of default behavior 2550545 This SAP Note provides several coding corrections for the default client copy behavior.
Expert settings for SAP HANA contexts 2555451 This SAP Note suggests specific settings to speed up client copies in context of SAP HANA.
Parallelism configuration 541311 This SAP Note describes how to define a reasonable parallelism for client copies.
Remote client copy configurations 2953662 This SAP Note provides specific recommendations for remote client copies.
Deletion of target client If the target client already exists, you should use transaction SCC5 to delete it before starting a new client copy. Otherwise a quite time-consuming record-by-record deletion happens at the beginning of the client copy.
The configured client copy expert options can be displayed via SQL: "HANA_ABAP_Parameters_ClientCopy" (SAP Note 1969700).
28. Are there best practices for efficient application development on SAP HANA?
Consider the following general rules when developing applications on SAP HANA:
Only select the columns that you actually require in the current context. Selecting many columns imposes some overhead in a column oriented database like SAP HANA.
Avoid executing many small requests and instead execute fewer requests with a larger result set. SAP HANA is particularly efficient in processing larger requests and at the same time overhead due to network roundtrips is avoided.
Specify ORDER BY whenever you require a sorted result set. This should be obvious, but due to the fact that other databases sometimes appear to return the data in a sorted manner (e.g. based on primary key) even without ORDER BY, this explicit sort is sometimes forgotten.
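The rules above can be illustrated with a minimal sketch; the table and column names are hypothetical examples:

```sql
-- Good: only the required columns, explicit ORDER BY for a sorted result
SELECT "DOCNO", "ITEMNO", "AMOUNT"
  FROM "ZSALES"
 WHERE "MANDT" = '100'
 ORDER BY "DOCNO", "ITEMNO";

-- Avoid: SELECT * (reads all columns of the column store)
-- and relying on an implicit, non-guaranteed row order
```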
See the ABAP Development Performance Guidelines for more information on how to create ABAP code that is optimized for SAP HANA. SAP Note 1912445 provides best practices and tools for custom code when migrating to SAP HANA.
---------------------------------------------------------------------------
|ROOT_STATEMENT_HASH |STATEMENT_HASH |SAMPLES|
---------------------------------------------------------------------------
|d6fd6678833f9a2e25e7b53239c50e9a|4359b2aac11d212ea7ad9153cd25b7f8| 976 |
|d6fd6678833f9a2e25e7b53239c50e9a|d6fd6678833f9a2e25e7b53239c50e9a| 604 |
|d6fd6678833f9a2e25e7b53239c50e9a|3554da8720da1f2de90f6ff0caa503a2| 574 |
|d6fd6678833f9a2e25e7b53239c50e9a|d41efb11db78d68b7b1e4cdc275216a0| 313 |
|d6fd6678833f9a2e25e7b53239c50e9a|420f78838e9d2b95d948d8401556fba0| 294 |
|d6fd6678833f9a2e25e7b53239c50e9a|9d2c75830edbc1ecacfeb7577637325f| 243 |
|d6fd6678833f9a2e25e7b53239c50e9a|eaa7a4db84281fc26370f913da7df597| 166 |
|d6fd6678833f9a2e25e7b53239c50e9a|28ca313866cc6b3302bfbbcda7dc1084| 158 |
|d6fd6678833f9a2e25e7b53239c50e9a|c15e482937b9628a627e27b537444445| 154 |
|d6fd6678833f9a2e25e7b53239c50e9a|9aaf8105bdad5308ab706eea28078d28| 124 |
...
---------------------------------------------------------------------------
30. Are all SQL cache entries written to the history in HOST_SQL_PLAN_CACHE?
The SQL cache (M_SQL_PLAN_CACHE) can contain hundreds of thousands of SQL statements and it would be very inefficient to write all of them to the statistics server history HOST_SQL_PLAN_CACHE. Therefore some important key figures are evaluated and only the 100 SQL cache entries with the highest delta values (since the last snapshot) are considered, e.g.:
Top 100 execution time
Top 100 executions
Top 100 lock wait time
Top 100 result records
Top 100 execution memory size (SAP HANA >= 2.00.040)
31. Where do I find details about the SAP HANA SQL optimizer?
You can find details about the SAP HANA SQL optimizer in the SAP HANA SQL Optimizer section of the SAP HANA Performance Guide for Developers.
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET
('traceprofile_<profile_name>', 'application_user') = '<app_user>',
('traceprofile_<profile_name>', 'maxfiles') = '20',
('traceprofile_<profile_name>', 'maxfilesize') = '50000000',
('traceprofile_<profile_name>', 'sqlopt_ceqo') = 'debug',
('traceprofile_<profile_name>', 'calcengine') = 'debug',
('traceprofile_<profile_name>', 'ceoptimizer') = 'debug',
('traceprofile_<profile_name>', 'ceinstantiate') = 'debug',
('traceprofile_<profile_name>', 'ceoptimizerinfo') = 'info',
('traceprofile_<profile_name>', 'ceqo') = 'debug' WITH RECONFIGURE
-- query execution
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') UNSET ('traceprofile_<profile_name>') WITH RECONFIGURE
In the generated trace you can then look out for unfolding blockers. For example, a very high number of expressions caused by a very long WHERE clause can result in the following message:
SqlOpt_ceqo Ce2qoUtils.cc: [UNFOLD_CV] Calc view unfold plan size check: maximum num exprs=12860, limit=10000
SqlOpt_ceqo Ce2qoUtils.cc: [UNFOLD_CV] Calc view unfold blocked due to plan complexity, maximum num of exprs: 12860(limit=10000)
In this case, unfolding is refused because more than 10000 expressions exist. As a first step, you should check from application perspective if it is possible to reduce the number of expressions. If this is not possible, you can eliminate this specific blocker by increasing the expression limit
sufficiently, e.g. to 20000, using the following SAP HANA parameter:
Additionally, it may be required to use the CALC_VIEW_UNFOLDING hint (SAP Note 2142945).
TUDF unfolding:
Unfolding in context of table user-defined functions (TUDF) means to combine different SQL statements inside the SQLScript body into a single SQL statement and convert it into query optimizer (QO) operators. This can often improve the quality of the generated execution plan. See the blog
How to investigate if table user-defined function is unfolded or not? for details.
If a TUDF can't be unfolded, you see a TABLE FUNCTION operator in the explain plan. Sometimes, non-unfolded parts of a TUDF execution can be identified by a comment at the beginning of the executed statement that contains context information, e.g.:
Reason Details
NOT UNFOLDED BECAUSE FUNCTION BODY CANNOT BE SIMPLIFIED TO A SINGLE STATEMENT Multiple statements in TUDF body cannot be simplified into a single statement.
NOT UNFOLDED DUE TO ANY TABLE TUDF uses ANY TABLE type.
NOT UNFOLDED DUE TO BINARY TYPE PARAMETER TUDF has a binary type as its parameter.
NOT UNFOLDED DUE TO DEV_NO_SQLSCRIPT_SCENARIO HINT The caller of TUDF disables unfolding with the DEV_NO_PREPARE_SQLSCRIPT_SCENARIO hint.
NOT UNFOLDED DUE TO IMPERATIVE LOGICS TUDF has imperative logic, including SQLScript IF, FOR, WHILE, or LOOP statements.
NOT UNFOLDED DUE TO INTERNAL SQLSCRIPT OPERATOR TUDF unfolding is blocked by an internal SQLScript operator.
NOT UNFOLDED DUE TO INPUT PARAMETER TYPE MISMATCH The type of the input argument does not match the defined type of the TUDF input parameter.
NOT UNFOLDED DUE TO JSON OR SYSTEM FUNCTION TUDF uses JSON or system function.
NOT UNFOLDED DUE TO NATIVE SQLSCRIPT OPERATOR TUDF has a SQLScript native operator, which does not have an appropriate SQL counterpart.
NOT UNFOLDED DUE TO NO CALCULATION VIEW UNFOLDING The caller of TUDF disables Calculation View unfolding.
NOT UNFOLDED DUE TO PRIMARY KEY CHECK TUDF has a primary key check.
NOT UNFOLDED DUE TO RANGE RESTRICTION Table with RANGE RESTRICTION is used within the TUDF.
NOT UNFOLDED DUE TO SEQUENCE OBJECT A SEQUENCE variable is used within the TUDF.
NOT UNFOLDED DUE TO SEQUENTIAL EXECUTION TUDF is executed with SEQUENTIAL EXECUTION clause.
NOT UNFOLDED DUE TO SPATIAL TYPE PARAMETER TUDF has a spatial type as its parameter.
NOT UNFOLDED DUE TO TIME TRAVEL OPTION TUDF uses a history table OR the time travel option is used.
NOT UNFOLDED DUE TO WITH HINT TUDF uses a WITH HINT clause that cannot be unfolded.
NOT UNFOLDED DUE TO WITH PARAMETERS TUDF uses a WITH PARAMETERS clause.
33. Is client related information like application name or application source always filled properly?
Monitoring views like M_SQL_PLAN_CACHE or M_SERVICE_THREAD_SAMPLES contain columns with application related information like APPLICATION_NAME, APPLICATION_USER or APPLICATION_SOURCE. These values are only filled properly when the application explicitly sets them on session level (see SET Statement).
In SAP ABAP contexts this information is per default provided during prepare operations, i.e. during parsing. This means that details like the coding location (APPLICATION_SOURCE) are not necessarily correct on session level in case database requests are executed and the last prepare in the session happened at a different coding location. While these details are correct in M_SQL_PLAN_CACHE (because it is populated during prepare), they can be wrong in other views like M_SERVICE_THREAD_SAMPLES.
If you want to make sure that the application source information is always up to date, you can set the following SAP ABAP profile parameter in case SAP ABAP kernel >= 7.77 (324), >= 7.81 (110) or >= 7.83 (11) is in place (SAP Note 3017584):
dbs/hdb/send_application_source = 2
Be aware that this setting can introduce performance overhead, so it should be tested and monitored.
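A client can also refresh these session variables explicitly via the SET statement; the values below are purely illustrative:

```sql
-- Set application related session variables explicitly so that views like
-- M_SERVICE_THREAD_SAMPLES show current values (values are examples)
SET 'APPLICATION' = 'MY_REPORT';
SET 'APPLICATIONUSER' = 'JSMITH';
SET 'APPLICATIONSOURCE' = 'zreport_sales:42';

-- Check the session variables of the own connection
SELECT KEY, VALUE
  FROM M_SESSION_CONTEXT
 WHERE CONNECTION_ID = CURRENT_CONNECTION;
```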
34. What are typical reasons for internal statement executions?
Internal SAP HANA statement executions are indicated in M_SQL_PLAN_CACHE via IS_INTERNAL = 'TRUE'. Unlike database requests directly triggered by end users, these internal statements are triggered by internal SAP HANA mechanisms. Check ID M1165 ("Internal executions (%)") of
the SAP HANA Mini Checks (SAP Note 1999993) reports a potentially critical issue if the share of internal executions is significant.
Typical scenarios that can result in an increased amount of internal executions are:
Trigger processing (SAP Note 2800020): The implicit processing of triggers in the context of a statement execution happens as internal statement. In case many triggers exist and fire, the amount of internal statements can be significant.
Blocked TUDF unfolding (SAP Note 2000002): If procedures / TUDFs can't be unfolded, parts of them are executed individually as internal statements. Make sure that unfolding is blocked as rarely as possible in order to reduce the amount of internal statements. The amount of internal statements can be even larger if no bind variables are used, because then a different statement is stored for every different bind value.
You can use SQL: "HANA_SQL_SQLCache_SpecialStatements" (STATEMENT_CLASS = 'PROC_NOT_UNFOLDED') available via SAP Note 1969700 in order to display internal statements originating from unfolding limitations.
Activated TREXviaDBSL SQL cache trace (SAP Note 2800048): The SQL cache trace for TREXviaDBSL can be activated with the following setting:
indexserver.ini -> [sql] -> plan_cache_trexviadbsl_enabled = true
Subsequently internal cache entries are stored with a statement string of the following convention:
TrexViaDbsl42584D4C3F0356455203302E3...
Make sure that the trace is only activated in a targeted way if you want to avoid the generation of these internal statements.
You can use SQL: "HANA_SQL_SQLCache_SpecialStatements" (STATEMENT_CLASS = 'TREXVIADBSL_TRACE') available via SAP Note 1969700 in order to display SQL cache entries related to the TREXviaDBSL SQL cache trace.
Statistics server (SAP Note 2147247): Database requests issued by the SAP HANA statistics server (user _SYS_STATISTICS) are marked as internal. In systems with limited production load these requests can be responsible for a significant portion of the SQL cache without being an issue. Thus, check ID M1165 ("Internal executions (%)") of the SAP HANA Mini Checks (SAP Note 1999993) generally ignores statistics server requests.
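As a simplified approximation of check M1165, the share of internal executions can be determined directly from the SQL cache. This sketch omits the additional filters the mini check applies (e.g. ignoring statistics server requests):

```sql
-- Simplified share of internal executions in the SQL cache
-- (check M1165 applies additional filters, e.g. it ignores _SYS_STATISTICS)
SELECT
  SUM(CASE WHEN IS_INTERNAL = 'TRUE' THEN EXECUTION_COUNT ELSE 0 END) AS INTERNAL_EXECUTIONS,
  SUM(EXECUTION_COUNT) AS TOTAL_EXECUTIONS,
  ROUND(100 * SUM(CASE WHEN IS_INTERNAL = 'TRUE' THEN EXECUTION_COUNT ELSE 0 END) /
        GREATEST(SUM(EXECUTION_COUNT), 1), 2) AS INTERNAL_PCT
FROM M_SQL_PLAN_CACHE;
```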
35. How can core data services (CDS) view accesses be optimized?
Core data services views (CDS views) can be quite complex, so it is important to adhere to best practices in order to avoid long runtimes and instabilities. For ABAP CDS accesses (involving ABAP CDS views, ABAP CDS hierarchies and ABAP CDS view entities) you can use SQL:
"HANA_SQL_SQLCache_SpecialStatements" (STATEMENT_CLASS = 'CDS_VIEW') available via SAP Note 1969700 to identify the most expensive CDS view accesses based on the SAP HANA SQL cache. General suggestions are provided in:
ABAP Core Data Services - SAP S/4HANA Best Practice Guide
ABAP Core Data Services - SAP Business Suite Best Practice Guide
ABAP Development Performance Guidelines
SAP Note 3418693
On top of these, further important considerations for achieving good performance are:
Specify join cardinality TO ONE / [0..1] / [1] and TO MANY / [0..*] / [*] / [] whenever possible.
Avoid aggregations on calculated fields, e.g. by deferring on-the-fly calculations to the consumption layer if possible and by using annotations for aggregation instead of aggregation functions, so that the GROUP BY clause can be generated dynamically (as long as most queries do not ask for the
calculated field).
In case hard-coded aggregation is needed (for example for pre-aggregation purposes) aggregate as early as possible (on as few joins as possible) on as few fields as possible and strive for aggregation on persisted database fields.
Access base table instead of redirected views whenever possible.
Avoid re-use of calculated fields or calculated expressions within the data model (anything other than pure projection into the field list).
Avoid using compatibility views as data sources for performance intensive accesses.
Minimize the number of joins by using the shortest possible association path.
Avoid non-NULL-preserving calculations (like an ELSE with a constant or COALESCE with a constant) in views to be used as association targets or on the right side of a left outer join.
If possible, try to re-formulate join conditions that rely on on-the-fly calculations so that no calculations on persisted database table fields are required.
In case of simple CASE expressions consider refactoring into UNION ALL.
Consider persisting the calculated values or, in ABAP code context, moving the join to ABAP (via SELECT FOR ALL ENTRIES) in case the calculation in the condition cannot be avoided. Alternatively use global temporary tables to handle the potentially huge number of line items caused by FOR
ALL ENTRIES.
Avoid non-equal join conditions in the ON clause for large tables in the SAP HANA column store; consider moving interval checks into the WHERE condition.
Avoid the OR operator in the ON condition for large tables in the SAP HANA column store; consider refactoring into UNION ALL.
Consider using minimalistic ABAP CDS views for search helps via annotation @Consumption.valueHelp or @Consumption.valueHelpDefinition, especially avoiding search helps on calculated fields. For simple CASE expressions to be used in search help, consider refactoring into UNION
ALL.
Use the ABAP layer for data pre-calculation, especially with SAP Analytics Cloud (SAC), to avoid unnecessary parameter calculations on CDS level.
In case of long runtime or high CPU consumption caused by CDS views using fast data access (FDA), consider deactivating it (SAP Note 2399993).
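The CASE-to-UNION-ALL refactoring recommended above can be illustrated in plain SQL; view, table and column names are hypothetical:

```sql
-- A view with a calculated CASE field; a filter on ORDER_CLASS cannot be
-- pushed down easily to the base column QTY:
CREATE VIEW v_items_case AS
  SELECT item_id,
         CASE WHEN qty > 100 THEN 'BULK' ELSE 'STD' END AS order_class
    FROM sales_items;

-- UNION ALL re-formulation; a filter ORDER_CLASS = 'BULK' can be reduced to
-- the first branch and evaluated directly on the persisted column QTY:
CREATE VIEW v_items_union AS
  SELECT item_id, 'BULK' AS order_class FROM sales_items WHERE qty > 100
  UNION ALL
  SELECT item_id, 'STD'  AS order_class FROM sales_items WHERE qty <= 100;
```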
36. How can important optimizer decisions (e.g. for specific execution engines) be determined?
In many cases a detailed understanding of optimizer decisions is only possible with advanced trace settings and expert knowledge. See SAP Note 2909779 for details.
SAP Note 3326981 provides background for scenarios when the HEX engine (SAP Note 2570371) isn't used and a fallback to other execution engines is done. You can evaluate the details using SQL: "HANA_SQL_HEX_RecompileReasons" (SAP Note 1969700).
With SAP HANA >= 2.00.079.01 and CE 2023.Q3.28 the SQL plan decision recorder is available. It can provide information like the recompilation state, the used execution engines and the HEX rejection reason.
It can be configured with the following parameter:
indexserver.ini -> [sql_plan_decision_recorder] -> enable_non_parameterized_query (default: false): activation / deactivation of the SQL plan decision recorder for queries without bind variables
The trace will be written to a trace file with the following naming convention:
<service>_<host>.<port>.sqlplan.<counter>.trc
EXTERNAL (abbreviation X): external engine (e.g. when monitoring commands access the secondary system replication site)
HEX (abbreviation H): SAP HANA execution engine (SAP Note 2570371); will replace legacy engines like the column, OLAP and row engine over time
38. Is it possible to manually calculate the statement hash for a given string?
You can use SQL: "HANA_SQL_StatementHash_Generator" (SAP Note 1969700) in order to generate the statement hash for an arbitrary (statement) string. Via the input options INCLUDE_FDA_READ_VARIANT and INCLUDE_RANGE_RESTRICTION_CURRENT_VARIANT the statement
hash can also be determined for the string enhanced with the "/* FDA READ */" comment or the data aging restriction "WITH RANGE_RESTRICTION('CURRENT')".
Example:
------------------------------------------------------------------------------------------------------------------------------------
|VARIANT |STATEMENT_HASH |STATEMENT_STRING |
------------------------------------------------------------------------------------------------------------------------------------
| |8e5a2db15943a6040806b9e937097d72|SELECT * FROM T000 |
|FDA READ |9aebb0c4990551f89b89070e90104fbc|SELECT /* FDA READ */ * FROM T000 |
|RANGE RESTRICTION |703613ed73ffbc158aca9ab987b717cb|SELECT * FROM T000 WITH RANGE_RESTRICTION('CURRENT') |
|RANGE RESTRICTION + FDA READ|5d3cbbf0cb4b49d82ec7ebdaee48c7d7|SELECT /* FDA READ */ * FROM T000 WITH RANGE_RESTRICTION('CURRENT')|
------------------------------------------------------------------------------------------------------------------------------------
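Since the statement hash is an MD5 hash calculated on the statement text, it can in principle be reproduced with SAP HANA's HASH_MD5 function. Whether additional normalization (e.g. whitespace or encoding handling) is applied internally is not guaranteed here, so the result is not asserted against the values above:

```sql
-- Sketch: MD5 over the statement text via HASH_MD5 (available as of
-- SAP HANA 2.0 SPS02); TO_VARCHAR renders the VARBINARY result as hex
SELECT LOWER(TO_VARCHAR(HASH_MD5(TO_BINARY('SELECT * FROM T000')))) AS statement_hash
  FROM DUMMY;
```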
Keywords
SQL optimization SAP HANA performance
Products
3568636 BC-XS-CDX
3418693 HAN-DB Examples for optimization of ABAP CDS views in SAP S/4HANA scenarios
3392865 HAN-DB-HA DBACOCKPIT/ST04/System Overview in HANA studio takes long time after migrating HANA replication secondary site to AWS
3295259 HAN-DB-PERF Unexpected slow performance for DELETE statements against table WDR_ADP_CONST_MP
3279311 BC-MID-ICF All DIA Work Processes are occupied updating table HTTP_CORS_LOG
3137171 BW-WHM-DST-RSPM [BW Central KBA] Housekeeping for Request Administration tables (RSPM* tables)
3031614 SD-BF-CM Performance problem happens due to database locks during S066 update
2953662 BC-CTS-CCO Recommendations for remote client copy performance improvements in S/4HANA
2895344 SCM-EM-MGR Deleting entries from /SAPTRX/EH_TASK table in SAP Event Management
2882481 CA-LT-SLT Records in the logging table are greater than the number of records in the source table - SLT
2818549 HAN-STD-DEV-MOD Performance Problem with an on-premise HANA Database Calculation View using a HANA Live Connection in SAP Analytics Cloud (SAC)
2733393 HAN-DYT How to improve performance when using multistore tables in SAP HANA Dynamic Tiering
2399996 HAN-DB-MON How-To: Configuring automatic SAP HANA Cleanup with SAP HANACleaner
2399979 HAN-DB-PERF How-To: Configuring automatic SAP HANA Data Collection with SAP HANASitter
2371528 FI-LOC-FI-RU SAP RU-FI: Table J_3RKKR0 is locked during parallel processing
2313619 HAN-DB-MON How-To: Generating and Evaluating SAP HANA Call Stacks
2222277 HAN-DB-PER FAQ: SAP HANA Column Store and Row Store
2200772 HAN-DB FAQ: SAP HANA Statement Routing and Client Distribution Mode
2185556 HAN-CPT-ADM FAQ: SAP HANA Cockpit (delivered with SAP HANA 1.0)
2126752 AP-MD-BP Performance Problems when reading Business Partner Change Documents
2107400 BC-DB-SYB performance issue with parallelism and WHERE EXISTS non-correlated sub-query - SAP ASE for Business Suite
2088971 HAN-DB-MON How-To: Controlling the Amount of Records in SAP HANA Monitoring Views
2084747 HAN-DB How-To: Disabling Memory-intensive Data Collections of standalone SAP HANA Statistics Server
2076491 BC-SEC-USR-ADM Jobs with name USR_ATCR_IMP<UTC time stamp> automatically created after role imports
2073964 HAN-STD-ADM-PVZ How-to create & export PlanViz in SAP HANA Studio
2037093 BW-WHM-DST Consulting: Performance / Memory problems during delta/init in the status management with DTPs between infoproviders
1977262 HAN-DB-MON ARCHIVED: How to handle HANA Alert 39: 'Check long-running statements'
1524325 SD-BIL-IV Poor performance due to locks on table NRIV - Buffering of billing documents
3584983 HAN-DB-ENG-MDS Performance improvement query processing for deeply nested calculations
3517252 FI-CA FI-CA: Huge number of database accesses when reading documents with selection of locks
3513868 HAN-DB-ENG Performance Deterioration of Table Scan in HEX When Data Distribution is Skewed
3513507 HAN-DB HANA HEX Engine not Used for Queries Containing "WITH PARAMETERS( 'LOCALE' = <code>) "
3513096 CRM-S4-MD-ORG High workload in the system due to Organizational Management buffering process
3501346 HAN-DB Unexpected Sorted Results Are Returned With LOCALE Parameter
3469038 CA-ICS-INT Source Of Supply: suboptimal performance when reading changed purchase inforecords
3457297 BC-DB-DBI ABAP SQL LIKE OR LIKE creates unnecessary additional conditions
3434490 HAN-DB Excessive Memory Consumption for SELECT Queries Referencing M_CS_TABLES
3411602 CA-ICS-INT Change Pointers are created even though Industry Cloud Solutions are not used
3406265 BC-BW-ODP Modify RODPS_REPL_TID takes unnecessarily long for big data loads with many packages
3386125 CA-ATP-PAC Split PAC Resb Select by SOBKZ for Performance Reasons
3382199 FI-GL-IS FI Line Item Browser: Using CDS views in the Line Item Browser - FAQ
3349622 FI-CA FKK_GET_APPLICATION unnecessary update of table USR05 leads to lock problems
3347789 SV-SMG-SER EWA: Download for HANA SQL Statement Check Shows Bad Performance
3337701 CA-ICS-INT Data replication from SAP S/4HANA to industry cloud solutions - TCI #6
3335213 FIN-FSCM-COL Performance issues while reading P2P data to create customer contact
3328348 SV-SMG-SDD /BDL/TASK_PROCESSOR Job in Endless Loop Inserting Into Table /BDL/MSGLOG
3326981 HAN-DB Why HEX Plans fallback to old engines such as Join, OLAP, ESX
3313650 LO-RFM-STO-FIO Receive Products: Fix performance issue within external postings check for HUs
3300123 SCM-EWM-MD-PM S/4 EWM - Performance issue while selecting product grouping
3293436 BC-SEC-SSM Improve performance and resilience of HTTP security session processing in exceptional high-load situations
3287726 HAN-DB-ENG-MDS Performance decrease in SAP HANA 2.0 with EPMMDS binaries "Internal Version" 1.00.202221.06
3246036 FI-AR-AR-D DDF/KDF: Selection with FOR ALL ENTRIES not efficient if no restriction was done to vendor or customer
3241223 BC-DB-DBI Slow performance because catalog buffer is running out of memory
3234023 BC-DB-DBI IMPORT FROM DATABASE with major ID and minor ID has a long runtime
3225546 HAN-DB-ENG-TRX Bad performance over TREX_EXT_INDEX_CELL_TABLE. Accessing M_TEMPORARY_TABLES takes long.
3202213 SCM-EWM-IF Missing entry in /SCMB/TBUSSYS causes performance and blocking issues
3201227 BC-SEC-SSM Best practices for security session handling for API caller calling an ABAP system
3194886 BC-BMT-WFM Increase performance of RSWWERRE for large numbers of erroneous workflow instances
3171132 BC-ABA-SC Risk of deadlock in generation of CUA runtime objects (GUI Status, Menus)
3154154 CA-LT-SLT SLT (DMIS 2011 / DMIS 2018 / S/4HANA) - Downport - Performance Enhancement for Allowlist Checks
3145867 FIN-FSCM-CR UKMBP_CMS_SGM: Credit Check leads to runtime or deadlock issues in a parallelized job
3128614 CO-PC-ACT Database locks due to superfluous inserts into table CKMLPP or CKMLCR
3125731 HAN-DB Performance Considerations When Filtering Records Based On Table Intersection
3099337 SCM-EWM-IF EWM: Change Business System Key after Client Copy
3077239 FI-CA Buffered document number assignment in Contract Accounts Receivable and Payable as of SAP S/4HANA 1909
3061258 CA-DI-IS-ABA SAP Data Intelligence ABAP Integration (S/4HANA 2020) - performance CDC observer job
3051729 FI-CA FI-CA on SAP HANA: Performance when scheduling mass activities (GPART)
3041609 OPU-FND-CS Deadlock on Gateway cache table /IWFND/I_MED_CTC as wait time to acquire enqueue lock on the table is high
2967256 HAN-DB-ENG-MDS Recommendation of different SQL Optimization Levels for different metadata tables/views read by MDS
2966606 HAN-DB Slower Join Queries After BW/4HANA 2.0 Upgrade due to "Tree Spec" Partitioning
2952296 HAN-DP-SDI Many open internal connections for Data Provisioning (dpserver)
2914233 HAN-DB IN Predicate Filter For DECIMAL Column Can’t Use INDEX SCAN
2911162 HAN-DB Batch Insert-Values Including Scalar-Subquery may Lead to Performance Issues
2909860 HAN-DB SELECT FROM Table UDF Containing View Defined With WITH Clause Does Not Unfold
2909779 HAN-DB How to Collect Frequently Required Debug Info for Analyzing HANA Issues
2904036 HAN-DB Sporadic High Memory Consumption When Accessing Compatibility View /SAPAPO/MATLOC
2902534 HAN-DB Slower Performance of SQLScript Procedure after Upgrade From HANA 1.0 to HANA 2.0
2900345 BC-DWB-UTL-BRD SE16: Add database hints to SQL query before execution
2891894 HAN-DB-SDA HANA Smart Data Access Cannot Bind Variables of a Parameterized Query to a Remote DB
2847558 HAN-DB Enabling Unfolding For Stacked Scenarios With Switched Security Flag
2833472 MM-IM-GF-LOCK Short dump ‘DBSQL_DUPLICATE_KEY_ERROR’ during parallel postings of good movements for the same material without value change
2831890 BC-DB-DBI Runtime error DBIF_REPO_SQL_ERROR for execution of CL_O2_XSLT_API_INTERNAL->INSERT_ACTIVE_TRANSFORMATION
2830243 MM-IM-GF-REP MB52: Optimize performance on project/sales order/transit stock existence check (table MSSQ/MSSA)
2823243 HAN-DB Performance Degradation of Update or Delete Queries Inside a Procedure After Upgrade to HANA 2 SPS03 or above
2820779 SV-SMG-DVM Disable long running DANA analysis during DVM data collection for Readiness Check
2819529 BC-SRV-SUA-CUP SUA: SQL error "SQL code: 2048" occurred while accessing table "ACDOCA".
2816302 HAN-DB SELECT COUNT(*) on FAAV_ANLC Becomes Slower After Implementation of SAP Note 2796770
2795151 HAN-DB Performance Becomes Slower in Function / Procedure Written by Sqlscript After Upgrade From HANA1.0 to HANA2.0
2793263 HAN-DB Executing SE16 or SE16N Transaction Against S/4HANA Compatibility View Runs Long or Leads to Composite Out-of-Memory (OOM)
2761821 BC-CTS-CCO Performance improvement for HANA,Oracle and DB6 systems: Client copy
2751390 HAN-DB Potential Performance Issues in BW Queries in SAP HANA 2 Revision 036.00
2749360 HAN-DB-ENG-TRX On-the-fly query over TREX_EXT_AGGREGATE makes always a new entry in SQL plan cache
2741672 HAN-DB-SDA Very slow access to remote data source created using an SAP HANA SDA setup to MS SQL
2737478 PP-SFC-EXE-GM "Batch" input help in goods movement overview, confirmation: Performance
2736804 HAN-DB-HA High Runtime of systemReplicationStatus.py, M_SERVICE_REPLICATION or M_SYSTEM_REPLICATION When System Replication Target Site Unreachable
2726420 BW-WHM-DBA-ODA 750SP14: SAP HANA ODP: Minor Optimization for accessing ODP in HANA Context(II)
2713495 MM-IM-GF S/4HANA: Performance issues in custom code when using the obsolete stock data model
2711824 HAN-DB High Number of Prepared Statements Causing High Usage of Memory Allocator Pool/Statistics
2686186 BW-WHM-DBA 750SP14: 'DBSQL_INVALID_CURSOR' during extraction from a Query provider or "UNCAUGHT_EXCEPTION" during import of an HCPR with multiple servers
2660461 BC-BW Duplicate data for simultaneous delta from SAPI and ODQ
2641772 BC-DB-DBI Open SQL: Delayed loading of secondary index information to the catalog cache
2633077 HAN-DB Rowstore LOB Garbage is not Collected and the Number of Disk LOBs Keeps Increasing
2620310 HAN-DB Long Running DISTINCT Search in Join Engine Blocks the Delta Merge Operation
2617971 CA-LT-SLT SLT (DMIS 2011 / DMIS 2018) Avoid Recording of Archiving Actions in the SLT Logging Table
2568333 HAN-DB-ENG Suboptimal Execution Plan in HANA Execution Engine (HEX) for a Specific Query Pattern
2559231 HAN-AS-INA-SVC Low performance when first opening a view via InformationAccess (InA) Service / EPM-MDS service
2538840 CA-WUI-UI-RT Performance issue in some screens after upgrade indicated in note 2361752
2522456 CRM-BTX-ANA-RFW High CPU utilization in HANA post CRM kernel upgrade
2517443 HAN-DB Filter push down missing for TREXviaDBSL calls on Hana native calculation view when FEMS are used
2514255 HAN-DB Universal ITAB for SAP HANA Smart Data Access
2500573 HAN-DB-ENG Column Pruning is Limited For Graphical Calculation Views Used in SQL- And SQLScript-Based Views
2465294 FI-GL-IS High CPU usage on HANA database due to hint USE_OLAP_PLAN
2465027 HAN-DB Deprecation of SAP HANA extended application services, classic model and SAP HANA Repository
2441054 HAN-DB High query compilation times and absence of plan cache entries for queries against calculation views
2425002 HAN-DB SAP HANA 2.0: Deprecations reported by the HANA statistics server
2424784 SD-MD-CM-AR SD_COND_ARCH_WRITE: Runtime improvement for document check with transparent KONV
2421733 BC-SEC-AUT-PFC STUSERTRACE: Changing maximum number of filter values for users
2375171 BW-BCT-FI Dump DBSQL_SQL_ERROR: "SQL code: 60" occurred while accessing BWFI_AEDA2 or BWFI_AEDA3
2374272 HAN-DB-MON Enabling new HANA Monitoring mechanism for Solution Manager/Focused Run
2371147 BW-WHM-DST P37; APO: Trace code for hanging lock on RSAPOADM/RSICCONT
2351294 BC-UPG-RDM S/4HANA System Conversion / Upgrade: Measures to reduce technical downtime
2342347 CO-PC-ACT Precompacting run for table ACDOCA_M_EXTRACT in case of reduced system performance during posting period
2337368 MM-IM-GF-VAL Inventory Valuation (part of Materials Management - Inventory Management) : Change of data model in S/4HANA 1610
2296436 FI-GL-GL-A Performance problems on HANA DB during totals record update with automatic balance carryforward
2291812 HAN-DB-ENG SAP HANA DB: Disable/Enable CalculationEngine Feature - CalcView Unfolding
2276534 FI-CA Dealing with problems in table NRIV for number range object FKK_BELEG
2257203 HAN-DB Creating Call Stack Trace (Gstack) to Analyse Hang Situations
2246602 MM-IM-GF Precompacting scheduling in case system performance gets slowed down during a posting period
2232607 CA-WUI-UI Deadlocks occur on table CRMT_RECENT_OBJ when closing browser with mutliple tabs
2222535 CO-OM Compatibility views COSP, COSS in the case of large datasets (in customer programs)
2221298 FI-GL Notes about using views GLT0, FAGLFLEXT, FMGLFLEXT, PSGLFLEXT, and JVGLFLEXT in custom programs in SAP S/4HANA Finance
2219527 FI-GL Notes about using views BSID, BSAD, BSIK, BSAK, BSIS, and BSAS in customer-defined programs in SAP S/4HANA Finance
2217299 MM-IM-GF-VAL Inventory Valuation (part of Materials Management - Inventory Management) : Change of data model in S/4HANA 1511
2193726 FI-SL-SL-A Performance problems during totals record update with automatic balance carryforward
2191612 BC-SEC-SAL FAQ | Use of Security Audit Log as of SAP NetWeaver 7.50
2185026 CO-OM Compatibility views COSP, COSS, COEP, COVP: How do you optimize their use?
2182690 BC-DWB-DIC-AC LOAD_PROGRAM_MISMATCH after insertion or deletion of index for non-buffered tables
2182269 BC-SRV-SUA Database locks on table SUSAGE block all work processes
2124871 BC-MID-RFC-BG Deadlocks on INSERT statement on QRFC_I_QIN_LOCK or QRFC_O_QOUT_LOCK table in HANA Database
2103827 BC-DB-DBI Profile parameters for table buffer as of SAP Kernel Release 7.40
2078190 BW-BEX-OT-F4 Long runtime for input help for node variables
2071310 CA-EPT-POC Deletion report for process observer event buffer tables
1987132 BC-DB-HDB-SYS SAP HANA: Parameter setting for SELECT FOR ALL ENTRIES
1933254 BC-DB-HDB-CCM DBA Cockpit: Authorization check for SQL editor at table level
1912445 BC-DWB-TOO-ATF ABAP custom code migration for SAP HANA - recommendations and Code Inspector variants for SAP HANA migration
1803986 BC-UPG-RDM Rules to use SUM or SPAM/SAINT to apply SPs for ABAP stacks
857998 BW-BEX-OT Number range buffering for DIM IDs and SIDs
375566 BC-MID-RFC Extremely large number of entries in tRFC and qRFC tables
173856 BC-CCM-PRN-SPO Deadlocks on tables TSP02 and TSPEVJOB after connection problems
ABAP Core Data Services - SAP Business Suite Best Practice Guide
SET Statement
SAP HANA Performance Guide for Developers - SAP HANA SQL Optimizer
3523400 HAN-DB Overview of the compatibility view topic in the context of SAP HANA (SAP S/4HANA)
3486194 HAN-DB-PERF SAP HANA out of memory (OOM) issue caused by report CL_SUS_IMP_CUP_B_FIN_COVP_001 and/or CL_SUS_IMP_CUP_B_FIN_ACDOCA001CP
3228858 HAN-DB-ENG High usage of memory (in SAP HANA) by Pool/Statistics because of JDBC client
2534862 HAN-DB-SCR HowTo: Capturing SQL statements and run time information triggered by a stored procedure
3205302 HAN-DB-PERF High runtime when LIKE condition with leading place holder is evaluated
3133914 CA-LT-SLT TIME_OUT dump when replicating tables in SLT. - SAP SLT
3065607 CA-LT-MC Performance tips & tricks for SAP S/4HANA Migration Cockpit: Migrate Data Using Staging Tables
2122650 HAN-DB HANA oom trace file dumps with 'Composite limit violation (OUT OF MEMORY) occurred' in HANA SPS 08 and higher
3001300 HAN-DB-MON How to: Analyze HANA Issues Using Graphical Thread Sample Results
2313619 HAN-DB-MON How-To: Generating and Evaluating SAP HANA Call Stacks
2949761 HAN-DB-PERF Expensive SQL statement of type SELECT FOR UPDATE on the table NRIV with statement hash 11df5737a4a42ed26e0121151c778785
2923584 HAN-DB-MON Hana Consistency Check: metadata version changed while running checks
2916439 HAN-DB-ENG How to Check Thread Samples for a Poor Performance Query
2898013 HAN-DB Update of table VARINUM causes Alert ID: 59 transactions are blocked on HANA DB
2889270 HAN-DB-MON High CPU Usage by Embedded Statistics server with CALL _SYS_STATISTICS.STATISTICS_SCHEDULABLEWRAPPER
2689405 XX-SER-MCC FAQ: SAP S/4HANA Performance Best Practices - Collective Note
2088971 HAN-DB-MON How-To: Controlling the Amount of Records in SAP HANA Monitoring Views
2585986 BC-UPG-DTM-TLA Process on DBTABLOG is very slow in the DMO phase EU_CLONE_MIG_DT_RUN
2399990 HAN-DB How-To: Analyzing ABAP Short Dumps in SAP HANA Environments
2222277 HAN-DB-PER FAQ: SAP HANA Column Store and Row Store
3413143 HAN-DB-MON EWM: Change Business System Key after Client Copy
2884606 HAN-DB Execute Batch DELETE, UPDATE or UPSERT Does not Delete all Expected Rows From a Multi-Container Row-Store Table in SAP HANA DB
2472197 HAN-DB SQL Statement Fails With "Invalid argument type" When Using SQL Hint for Join Engine Optimization
1794297 HAN-DB Secondary Indexes for S/4HANA and the business suite on HANA
2019973 HAN-DB-ENG-BW Handling Very Large Data Volumes in SAP BW on SAP HANA
2385077 CA-RT-CAR-PIP Long Running TLOGF SELECTs when loading transaction details
1601951 SV-SMG-SER Self Service 'SQL Statement Tuning' - Prerequisites and FAQ