SAP HANA SQLScript Reference
4 What is SQLScript?
4.1 SQLScript Security Considerations
4.2 SQLScript Processing Overview
    Orchestration Logic
    Declarative Logic
6 Logic Container
6.1 Procedures
    CREATE PROCEDURE
    DROP PROCEDURE
    ALTER PROCEDURE
    ALTER PROCEDURE RECOMPILE
    Procedure Calls
    Procedure Parameters
    Procedure Metadata
6.2 User Defined Function
    CREATE FUNCTION
    ALTER FUNCTION
    Function Parameters
    Function Metadata
    Default Values for Parameters
    Deterministic Scalar Functions
    DROP FUNCTION
6.3 CREATE OR REPLACE
6.4 Anonymous Block
6.5 SQLScript Encryption
16 Supportability
16.1 M_ACTIVE_PROCEDURES
16.2 Query Export
    SQLScript Query Export
16.3 Type and Length Check for Table Parameters
16.4 SQLScript Debugger
    Conditional Breakpoints
    Watchpoints
    Break on Error
    Save Table
16.5 EXPLAIN PLAN FOR Call
19 Appendix
19.1 Example code snippets
    ins_msg_proc
This reference describes how to use the SQL extension SAP HANA SQLScript to embed data-intensive
application logic into SAP HANA.
SQLScript is a collection of extensions to the Structured Query Language (SQL). The extensions include:
● Data extension, which allows the definition of table types without corresponding tables
● Functional extension, which allows the definition of (side-effect free) functions which can be used to
express and encapsulate complex data flows
● Procedural extension, which provides imperative constructs executed in the context of the database
process.
This document uses BNF (Backus Naur Form), the notation technique used to define programming languages. BNF describes the syntax of a grammar by using a set of production rules and a set of symbols.
Table 1:
Symbol  Description
<>      Angle brackets are used to surround the name of a syntax element (BNF non-terminal) of the SQL language.
::=     The definition operator is used to provide definitions of the element appearing on the left side of the operator in a production rule.
[]      Square brackets are used to indicate optional elements in a formula. Optional elements may be specified or omitted.
{}      Braces group elements in a formula. Repetitive elements (zero or more elements) can be specified within brace symbols.
|       The alternative operator indicates that the portion of the formula following the bar is an alternative to the portion preceding the bar.
...     The ellipsis indicates that the element may be repeated any number of times. If the ellipsis appears after grouped elements, the grouped elements enclosed with braces are repeated. If the ellipsis appears after a single element, only that element is repeated.
!!      Introduces normal English text. This is used when the definition of a syntactic element is not expressed in BNF.
Throughout the BNF used in this document each syntax term is defined to one of the lowest term
representations shown below.
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
<letter> ::= a | b | c | d | e | f | g | h | i | j | k | l | m | n | o | p | q |
r | s | t | u | v | w | x | y | z
| A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q |
R | S | T | U | V | W | X | Y | Z
<comma> ::= ,
<dollar_sign> ::= $
<hash_symbol> ::= #
<left_bracket> ::= [
<period> ::= .
<pipe_sign> ::= |
<right_bracket> ::= ]
<right_curly_bracket> ::= }
<sign> ::= + | -
<underscore> ::= _
The motivation for SQLScript is to embed data-intensive application logic into the database. Today, applications offload only very limited functionality into the database using SQL; most of the application logic is executed on an application server. As a result, the data to be operated upon must be copied from the database to the application server and back. When executing data-intensive logic, this copying of data can be very expensive in terms of processor and data transfer time. Moreover, when using an imperative language like ABAP or Java for processing data, developers tend to write algorithms that follow a one-tuple-at-a-time semantics (for example, looping over the rows of a table). Such algorithms are hard to optimize and parallelize compared to declarative, set-oriented languages like SQL.
The SAP HANA database is optimized for modern technology trends and takes advantage of modern
hardware, for example, by having data residing in the main memory and allowing massive parallelization on
multi-core CPUs. The goal of the SAP HANA database is to support application requirements by making use of
such hardware. The SAP HANA database exposes a very sophisticated interface to the application, consisting
of many languages. The expressiveness of these languages far exceeds that attainable with OpenSQL. The set
of SQL extensions for the SAP HANA database, which allows developers to push data-intensive logic to the
database, is called SQLScript. Conceptually SQLScript is related to stored procedures as defined in the SQL
standard, but SQLScript is designed to provide superior optimization possibilities. SQLScript should be used in cases where other modeling constructs of SAP HANA, for example analytic views or attribute views, are not sufficient. For more information on how to best exploit the different view types, see "Exploit Underlying Engine".
This set of SQL extensions is the key to avoiding massive data copies to the application server and to leveraging the sophisticated parallel execution strategies of the database. SQLScript addresses the following problems:
● Decomposing an SQL query can only be done using views. However, when decomposing complex queries using views, all intermediate results are visible and must be explicitly typed. Moreover, SQL views cannot be parameterized, which limits their reuse. In particular, they can only be used like tables and embedded into other SQL statements.
● SQL queries do not have features to express business logic (for example, a complex currency conversion). As a consequence, such business logic cannot be pushed down into the database (even if it is mainly based on standard aggregations like SUM(Sales)).
● An SQL query can only return one result at a time. As a consequence the computation of related result
sets must be split into separate, usually unrelated, queries.
● While SQLScript encourages developers to implement algorithms using a set-oriented paradigm rather than a one-tuple-at-a-time paradigm, some algorithms, for example iterative approximation algorithms, require imperative logic. SQLScript therefore makes it possible to mix imperative constructs known from stored procedures with declarative ones.
You can develop secure procedures using SQLScript in SAP HANA by observing the following
recommendations.
Using SQLScript, you can read and modify information in the database. In some cases, depending on the
commands and parameters you choose, you can create a situation in which data leakage or data tampering
can occur. To prevent this, SAP recommends using the following practices in all procedures.
● Mark each parameter using the keywords IN or OUT. Avoid using the INOUT keyword.
● Use the INVOKER keyword when you want the user to have the assigned privileges to start a procedure.
The default keyword, DEFINER, allows only the owner of the procedure to start it.
● Mark read-only procedures using READS SQL DATA whenever it is possible. This ensures that the data
and the structure of the database are not altered.
Tip
Another advantage to using READS SQL DATA is that it optimizes performance.
● Ensure that the types of parameters and variables are as specific as possible. Avoid using VARCHAR, for
example. By reducing the length of variables you can reduce the risk of injection attacks.
● Perform validation on input parameters within the procedure.
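Taken together, these recommendations can be sketched in a single procedure definition. The following listing is illustrative only; the procedure, table, and parameter names are hypothetical and do not come from this reference:

```sql
-- Sketch: a procedure following the recommendations above.
CREATE PROCEDURE get_customer_orders (
    IN  iv_customer_id INTEGER,        -- IN keyword, specific narrow type
    OUT ot_orders      TABLE (order_id INTEGER, amount DECIMAL(15,2))
)
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER                   -- run with the invoker's privileges
READS SQL DATA                         -- read-only: no DDL or DML
AS
BEGIN
    -- Static SQL with a typed, bound input parameter; no dynamic SQL.
    ot_orders = SELECT order_id, amount
                FROM orders
                WHERE customer_id = :iv_customer_id;
END;
```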
Dynamic SQL
In SQLScript you can create dynamic SQL using one of the following commands: EXEC and EXECUTE IMMEDIATE. These commands allow the use of variables in SQLScript in places where variables are otherwise not supported, but in these situations you risk injection attacks unless you perform input validation within the procedure. In some cases, injection attacks can occur by way of data from another database table.
To avoid potential vulnerability from injection attacks, consider using the following methods instead of
dynamic SQL:
● Use static SQL statements. For example, use the static statement, SELECT instead of EXECUTE
IMMEDIATE and passing the values in the WHERE clause.
● Use server-side JavaScript to write this procedure instead of using SQLScript.
● Perform validation on input parameters within the procedure using either SQLScript or server-side
JavaScript.
● Use APPLY_FILTER if you need a dynamic WHERE condition
● Use the SQL Injection Prevention Function
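As a sketch of the first and fourth alternatives, a static SELECT with a bound variable, or APPLY_FILTER for a dynamic condition, can usually replace a concatenated EXECUTE IMMEDIATE string. The table and variable names below are hypothetical:

```sql
-- Risky: dynamic SQL built by string concatenation (injection-prone).
-- EXECUTE IMMEDIATE 'SELECT * FROM orders WHERE region = ''' || :iv_region || '''';

-- Safer: static SQL, with the value bound in the WHERE clause.
ot_result = SELECT * FROM orders WHERE region = :iv_region;

-- Safer, when the WHERE condition itself must be dynamic:
ot_result = APPLY_FILTER(orders, :iv_filter_condition);
```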
Escape Code
You might need to use some SQL statements that are not supported in SQLScript, for example, the GRANT statement. In other cases you might want to use Data Definition Language (DDL) statements in which some <name> elements are known only at runtime; in these cases you can use escape code. To avoid potential vulnerability from injection attacks, consider using the methods listed above for dynamic SQL, such as input validation, instead of escape code.
Tip
For more information about security in SAP HANA, see the SAP HANA Security Guide.
To better understand the features of SQLScript, and their impact on execution, it can be helpful to understand
how SQLScript is processed in the SAP HANA database.
When a user defines a new procedure, for example using the CREATE PROCEDURE statement, the SAP HANA
database query compiler processes the statement in a similar way to an SQL statement. A step-by-step
analysis of the process flow follows below:
When the procedure starts, the invoke activity can be divided into two phases:
1. Compilation
○ Code generation: for declarative logic, calculation models are created to represent the dataflow defined by the SQLScript code; these models are optimized further by the calculation engine when they are instantiated. For imperative logic, the code blocks are translated into L-nodes.
○ The calculation models generated in the previous step are combined into a stacked calculation model.
2. Execution - the execution commences with binding actual parameters to the calculation models. When the
calculation models are instantiated they can be optimized based on concrete input provided.
Optimizations include predicate or projection embedding in the database. Finally, the instantiated
calculation model is executed by using any of the available parts of the SAP HANA database.
Orchestration logic is used to implement data-flow and control-flow logic using imperative language
constructs such as loops and conditionals. The orchestration logic can also execute declarative logic, which is
defined in the functional extension by calling the corresponding procedures. In order to achieve an efficient
execution on both levels, the statements are transformed into a dataflow graph to the maximum extent
possible. The compilation step extracts data-flow oriented snippets out of the orchestration logic and maps
them to data-flow constructs. The calculation engine serves as execution engine of the resulting dataflow
graph. Since the language L is used as intermediate language for translating SQLScript into a calculation
model, the range of mappings may span the full spectrum – from a single internal L-node for a complete
SQLScript script in its simplest form, up to a fully resolved data-flow graph without any imperative code left.
Typically, the dataflow graph provides more opportunities for optimization and thus better performance.
To transform the application logic into a complex data-flow graph two prerequisites have to be fulfilled:
● All data flow operations have to be side-effect free, that is they must not change any global state either in
the database or in the application logic.
● All control flows can be transformed into a static dataflow graph.
In SQLScript the optimizer will transform a sequence of assignments of SQL query result sets to table
variables into parallelizable dataflow constructs. The imperative logic is usually represented as a single node in
the dataflow graph, and thus it is executed sequentially.
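As an illustrative sketch of this transformation, consider two table-variable assignments with no data dependency between them; the optimizer can turn these into parallel branches of the dataflow graph, with the final assignment consuming both. All table and column names are hypothetical:

```sql
-- Independent assignments: candidates for parallel dataflow branches.
big_pubs  = SELECT publisher FROM books
            GROUP BY publisher HAVING SUM(price) > 100;
new_books = SELECT title, publisher FROM books WHERE year = 2024;

-- Depends on both branches above; executed once they complete.
ot_result = SELECT n.title
            FROM :new_books AS n
            JOIN :big_pubs  AS b ON n.publisher = b.publisher;
```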
Declarative logic is used for efficient execution of data-intensive computations. This logic is represented
internally as data flows that can be executed in parallel. As a consequence, operations in a data-flow graph have to be free of side effects; that is, they must not change any global state, either in the database or in the application. The first condition is ensured by only allowing changes on the dataset that is passed as input to the operator. The second condition is achieved by only allowing a limited subset of language features to express the logic of the operator. If these prerequisites are fulfilled, the following types of operators
are available:
Logically each operator represents a node in the data-flow graph. Custom operators have to be manually
implemented by SAP.
Besides the built-in scalar SQL datatypes, SQLScript allows you to use user-defined types for tabular values.
The SQLScript type system is based on the SQL-92 type system. It supports the following primitive data types:
Table 2:
Numeric types: TINYINT, SMALLINT, INT, BIGINT, DECIMAL, SMALLDECIMAL, REAL, DOUBLE
Note
This also holds true for SQL statements, apart from the TEXT and SHORTTEXT types.
For more information on scalar types, see SAP HANA SQL and System Views Reference, Data Types.
The SQLScript data type extension allows the definition of table types. These types are used to define
parameters for procedures representing tabular results.
Syntax
Syntax Elements
Identifies the table type to be created and, optionally, in which schema the creation should take place
For more information on data types, see Scalar Data Types [page 16].
Description
Example
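The syntax block and example listing are not reproduced in this extract. As an illustrative sketch, a table type could be created as follows (the schema, type, and column names are hypothetical):

```sql
-- Define a table type; no corresponding table is created.
CREATE TYPE my_schema.tt_publishers AS TABLE (
    publisher_id INTEGER,
    name         VARCHAR(50),
    price        DECIMAL(10,2)
);
```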
Syntax
Syntax Elements
The identifier of the table type to be dropped, with optional schema name
When the <drop_option> is not specified, a non-cascaded drop is performed. This drops only the specified type; dependent objects of the type are invalidated but not dropped.
The invalidated objects can be revalidated when an object with the same schema and object name is created.
Example
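The example listing is missing from this extract; a sketch, reusing the hypothetical type name from the CREATE TYPE sketch, might look like:

```sql
-- Non-cascaded drop: dependent objects are invalidated, not dropped.
DROP TYPE my_schema.tt_publishers;

-- Cascaded drop: dependent objects are dropped as well.
DROP TYPE my_schema.tt_publishers CASCADE;
```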
In SQLScript there are two different logic containers, Procedure and User-Defined Function. The User-Defined
Function container is separated into Scalar User-Defined Function and Table User-Defined Function.
The following sections provide an overview of the syntactical language description for both containers.
6.1 Procedures
Procedures allow you to describe a sequence of data transformations on data passed as input and on database tables.
Data transformations can be implemented as queries that follow the SAP HANA database SQL syntax or by calling other procedures. Read-only procedures can only call other read-only procedures.
● You can parameterize and reuse calculations and transformations described in one procedure in other
procedures.
● You can use and express knowledge about relationships in the data; related computations can share
common sub-expressions, and related results can be returned using multiple output parameters.
● You can define common sub-expressions. The query optimizer decides if a materialization strategy (which
avoids recomputation of expressions) or other optimizing rewrites are best to apply. In any case, it eases
the task of detecting common sub-expressions and improves the readability of the SQLScript code.
● You can use scalar variables or imperative language features if required.
Syntax
Note
The default is IN. Each parameter is marked using the keyword IN, OUT, or INOUT. Input and output parameters must be explicitly assigned a type (that means that tables without a type are not supported).
● The input and output parameters of a procedure can have any of the primitive SQL types or a table type.
INOUT parameters can only be of the scalar type.
Note
For more information on data types see Data Types in the SAP HANA SQL and System Views Reference
on the SAP Help Portal.
● A table type previously defined with the CREATE TYPE command, see CREATE TYPE [page 17].
LANGUAGE <lang>
<lang> ::= SQLSCRIPT | R
● Indicates that the execution of the procedure is performed with the privileges of the definer of the procedure
DEFINER
● Indication that the execution of the procedure is performed with the privileges of the invoker of the
procedure
INVOKER
● Specifies the schema for unqualified objects in the procedure body; if nothing is specified, then the
current_schema of the session is used.
● Marks the procedure as being read-only and side-effect free - the procedure does not make modifications
to the database data or its structure. This means that the procedure does not contain DDL or DML
statements and that it only calls other read-only procedures. The advantage of using this parameter is that
certain optimizations are available for read-only procedures.
● Defines the main body of the procedure according to the programming language selected
● This statement forces sequential execution of the procedure logic. No parallelism takes place.
SEQUENTIAL EXECUTION
For more information on inserting, updating and deleting data records, see Modifying the Content of Table
Variables [page 120].
● You can modify a data record at a specific position. There are two equivalent syntax options:
● You can delete data records from a table variable. With the following syntax you can delete a single record.
● Sections of your procedures can be nested using BEGIN and END terminals
● The ARRAY_AGG function returns the array by aggregating the set of elements in the specified column of
the table variable. Elements can optionally be ordered.
The CARDINALITY function returns the number of the elements in the array, <array_variable_name>.
The TRIM_ARRAY function returns the new array by removing the given number of elements,
<numeric_value_expression>, from the end of the array, <array_value_expression>.
The ARRAY function returns an array whose elements are specified in the list <array_variable_name>. For
more information see the chapter ARRAY [page 112].
● Assignment of values to a list of variables with only one function evaluation. For example,
<function_expression> must be a scalar user-defined function and the number of elements in
<var_name_list> must be equal to the number of output parameters of the scalar user-defined
function.
● The MAP_MERGE operator is used to apply each row of the input table to the mapper function and unite all
intermediate result tables. For more information, see Map Merge [page 75].
● For more information about the CE operators, see Calculation Engine Plan Operators [page 134].
● APPLY_FILTER defines a dynamic WHERE-condition <variable_name> that is applied during runtime. For
more information about that, see the chapter APPLY_FILTER [page 103].
● The UNNEST function returns a table including a row for each element of the specified array.
WITH ORDINALITY
● You use WHILE to repeatedly call a set of trigger statements while a condition is true.
● You use FOR - EACH loops to iterate over all elements in a set of data.
● Terminates a loop
● Skips a current loop iteration and continues with the next value.
● You use the SIGNAL statement to explicitly raise an exception from within your trigger procedures.
● You use the RESIGNAL statement to raise an exception on the action statement in an exception handler. If
an error code is not specified, RESIGNAL will throw the caught exception.
● You use SET MESSAGE_TEXT to deliver an error message to users when a specified error is thrown during procedure execution.
For information on <insert_stmt>, see INSERT in the SAP HANA SQL and System Views Reference.
For information on <delete_stmt>, see DELETE in the SAP HANA SQL and System Views Reference.
For information on <update_stmt>, see UPDATE in the SAP HANA SQL and System Views Reference.
For information on <replace_stmt> and <upsert_stmt>, see REPLACE and UPSERT in the SAP HANA
SQL and System Views Reference.
For information on <truncate_stmt>, see TRUNCATE in the SAP HANA SQL and System Views
Reference.
● <var_name> is a scalar variable. You can assign selected item value to this scalar variable.
● Cursor operations
● Procedure call. For more information, see CALL - Internal Procedure Call [page 33]
Description
The CREATE PROCEDURE statement creates a procedure by using the specified programming language
<lang>.
Example
The procedure features a number of imperative constructs including the use of a cursor (with associated
state) and local scalar variables with assignments.
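The example listing itself is missing from this extract. A sketch containing the described constructs, a cursor with associated state and local scalar variables with assignments, could look like the following; all table, column, and variable names are hypothetical:

```sql
-- Illustrative sketch: imperative constructs in a procedure body.
CREATE PROCEDURE cursor_proc LANGUAGE SQLSCRIPT AS
    v_isbn  VARCHAR(20);
    v_total DECIMAL(15,2) := 0;
    CURSOR c_books (p_year INTEGER) FOR
        SELECT isbn, price FROM books WHERE year = :p_year;
BEGIN
    FOR r_row AS c_books(2024) DO
        v_isbn  := r_row.isbn;              -- scalar assignment
        v_total := :v_total + r_row.price;  -- running total
    END FOR;
END;
```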
Syntax
Syntax Elements
If you do not specify the <drop_option>, the system performs a non-cascaded drop. This will only drop the
specified procedure; dependent objects of the procedure will be invalidated but not dropped.
The invalidated objects can be revalidated when an object that uses the same schema and object name is
created.
CASCADE
Drops the procedure and its dependent objects.
RESTRICT
Drops the procedure only when no dependent objects exist. If this drop option is used and a dependent object exists, an error is returned.
Description
This statement drops a procedure created using CREATE PROCEDURE from the database catalog.
Examples
You drop a procedure called my_proc from the database using a non-cascaded drop.
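The statement itself is not reproduced in this extract; assuming the procedure name from the text, it would be:

```sql
DROP PROCEDURE my_proc;
```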
You can use ALTER PROCEDURE if you want to change the content and properties of a procedure without
dropping the object.
For more information about the parameters, refer to CREATE PROCEDURE [page 20].
For instance, with ALTER PROCEDURE you can change the content of the body itself. Consider the following
GET_PROCEDURES procedure that returns all procedure names on the database.
The procedure GET_PROCEDURES should now be changed to return only valid procedures. In order to do so,
use ALTER PROCEDURE:
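Neither code listing survives in this extract. A hedged reconstruction of both steps follows; the signature is simplified and illustrative, but the PROCEDURES system view and its SCHEMA_NAME, PROCEDURE_NAME, and IS_VALID columns are real:

```sql
-- Original procedure: returns all procedure names on the database.
CREATE PROCEDURE GET_PROCEDURES (
    OUT procedures TABLE (schema_name NVARCHAR(256), procedure_name NVARCHAR(256))
)
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
    procedures = SELECT schema_name, procedure_name FROM PROCEDURES;
END;

-- ALTER PROCEDURE replaces the body to return only valid procedures.
ALTER PROCEDURE GET_PROCEDURES (
    OUT procedures TABLE (schema_name NVARCHAR(256), procedure_name NVARCHAR(256))
)
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
    procedures = SELECT schema_name, procedure_name FROM PROCEDURES
                 WHERE is_valid = 'TRUE';
END;
```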
Besides changing the procedure body, you can also change the language <lang> of the procedure, the default
schema <default_schema_name> as well as change the procedure to read only mode (READS SQL DATA).
Note
The following properties cannot be changed with ALTER PROCEDURE:
● procedure owner
● security mode (INVOKER, DEFINER)
● procedure type (table function, scalar function, procedure)
● parameter signature (parameter name, parameter type, default value)
If you need to change these properties you have to drop and recreate the procedure by using DROP
PROCEDURE and CREATE PROCEDURE.
Note that if the default schema and read-only mode are not explicitly specified in the ALTER PROCEDURE statement, they are removed. The language defaults to SQLSCRIPT.
Note
You must have the ALTER privilege for the object you want to change.
Syntax
Syntax Elements
The identifier of the procedure to be altered, with the optional schema name.
WITH PLAN
Specifies that internal debug information should be created during execution of the procedure.
Example
You trigger the recompilation of the my_proc procedure to produce debugging information.
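The statement is not reproduced in this extract; based on the syntax elements above, it would be:

```sql
ALTER PROCEDURE my_proc RECOMPILE WITH PLAN;
```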
A procedure can be called either by a client on the outer-most level, using any of the supported client
interfaces, or within the body of a procedure.
Recommendation
SAP recommends that you use parameterized CALL statements for better performance. The advantages
follow.
● The parameterized query compiles only once, thereby reducing the compile time.
● A stored query string in the SQL plan cache is more generic and a precompiled query plan can be
reused for the same procedure call with different input parameters.
● By not using query parameters for the CALL statement, the system triggers a new query plan
generation.
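As a sketch of this recommendation, compare a literal call with a parameterized call; the procedure name and values are hypothetical:

```sql
-- Literal arguments: each distinct value yields a different statement
-- string, so the plan cache is less likely to be reused.
CALL get_customer_orders(1001, ?);

-- Parameterized call: the client prepares this statement once and binds
-- different input values, reusing the precompiled plan.
--   CALL get_customer_orders(?, ?);
```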
6.1.5.1 CALL
Syntax
Procedure parameters
For more information on these data types, see Backus Naur Form Notation [page 8] and Scalar Data Types
[page 16].
Parameters passed to a procedure are scalar constants and can be passed as IN, OUT, or INOUT parameters. Scalar parameters are assumed to be NOT NULL. Arguments for IN parameters of table type can be either physical tables or views. The actual value passed for tabular OUT parameters must be `?`.
WITH OVERVIEW
Defines that the result of a procedure call will be stored directly into a physical table.
Calling a procedure WITH OVERVIEW returns one result set that holds the information about which table contains the result of a particular table output variable. Scalar outputs are represented as temporary tables with only one cell. When you pass existing tables to the output parameters, WITH OVERVIEW inserts the result-set tuples of the procedure into the provided tables. When you pass '?' to the output parameters, temporary tables holding the result sets are generated. These tables are dropped automatically once the database session is closed.
Description
CALL conceptually returns a list of result sets with one entry for every tabular result. An iterator can be used to
iterate over these results sets. For each result set you can iterate over the result table in the same manner as
you do for query results. SQL statements that are not assigned to any table variable in the procedure body are
added as result sets at the end of the list of result sets. The type of the result structures will be determined
during compilation time but will not be visible in the signature of the procedure.
When CALL is executed by a client, the syntax behaves in a way that is consistent with the SQL standard semantics; for example, Java clients can call a procedure using a JDBC CallableStatement. Scalar output variables are returned as scalar values that can be retrieved from the callable statement directly.
Note
Unquoted identifiers are implicitly treated as upper case. Quoting identifiers will respect capitalization and
allow for using white spaces which are normally not allowed in SQL identifiers.
It is also possible to use scalar user-defined functions as parameters for a procedure call:
CALL proc(udf(), 'EUR', ?, ?);
CALL proc(udf() * udf() - 55, 'EUR', ?, ?);
In this example, udf() is a scalar user-defined function. For more information about scalar user-defined
functions, see CREATE FUNCTION [page 46]
Syntax:
Syntax Elements:
Note
Use a colon before the identifier name.
Description:
For an internal procedure, in which one procedure calls another procedure, all existing variables of the caller or
literals are passed to the IN parameters of the callee and new variables of the caller are bound to the OUT
parameters of the callee. That is to say, the result is implicitly bound to the variable that is given in the function
call.
Example:
When procedure addDiscount is called, the variable <:lt_expensive_books> is assigned to the function
and the variable <lt_on_sales> is bound by this function call.
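The example listing is missing from this extract. A sketch of such an internal call follows; the table type, column names, and discount logic are hypothetical, while the variable names come from the text:

```sql
-- Hypothetical callee: reduces the price of the input books.
CREATE PROCEDURE addDiscount (
    IN  it_books tt_books,
    OUT ot_books tt_books
)
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
    ot_books = SELECT title, price * 0.9 AS price FROM :it_books;
END;

-- Inside the caller's body: the existing variable lt_expensive_books is
-- passed to the IN parameter; the new variable lt_on_sales is implicitly
-- bound to the OUT parameter.
CALL addDiscount(:lt_expensive_books, lt_on_sales);
```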
You can call a procedure passing named parameters by using the token =>.
For example:
When you use named parameters, you can ignore the order of the parameters in the procedure signature.
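As an illustrative sketch (the procedure and its parameters are hypothetical):

```sql
-- Assume this signature:
--   CREATE PROCEDURE my_proc (IN in_var INTEGER, OUT out_var INTEGER) ...

-- Positional call: arguments must follow the signature order.
CALL my_proc(5, ?);

-- Named-parameter call using =>: signature order no longer matters.
CALL my_proc(out_var => ?, in_var => 5);
```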
Parameter Modes
The following table lists the parameter modes you can use when defining your procedures.
IN – An input parameter.
INOUT – Specifies a parameter that passes data in to, and returns data from, the procedure.
Note
This is only supported for scalar values. The parameter needs to be parameterized when you call the procedure, for example CALL PROC (inout_var=>?). A non-parameterized call of a procedure with an INOUT parameter is not supported.
Both scalar and table parameter types are supported. For more information on data types, see Datatype Extension.
Related Information
Scalar Parameters
Table Parameters
You can pass tables and views to the parameter of this function.
Note
Implicit binding of multiple values is currently not supported.
You should always use SQL special identifiers when binding a value to a table variable.
Note
Do not use the following syntax:
In the signature you can define default values for input parameters by using the DEFAULT keyword:
The usage of the default value is illustrated in the next example, which requires the following tables:
The procedure in the example generates a FULLNAME from the given input table and delimiter, whereby default values are used for both input parameters:
END;
For the tabular input parameter INTAB the default table NAMES is defined, and for the scalar input parameter DELIMITER the ‘,’ is defined as the default. To use the default values, you need to pass in parameters using named parameters. Calling the procedure FULLNAME using the default values would therefore be done as follows:
FULLNAME
--------
DOE,JOHN
To pass a different table, for example MYNAMES, while still using the default delimiter value, the call looks as follows:
And the result shows that now the table MYNAMES was used:
FULLNAME
--------
DOE,ALICE
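A sketch of the procedure and tables described above; the column names and types are assumed from the output shown:

```sql
CREATE TABLE names (firstname NVARCHAR(20), lastname NVARCHAR(20));
INSERT INTO names VALUES ('JOHN', 'DOE');

CREATE PROCEDURE fullname (
    IN  intab     TABLE (firstname NVARCHAR(20), lastname NVARCHAR(20)) DEFAULT names,
    IN  delimiter NVARCHAR(10) DEFAULT ',',
    OUT outtab    TABLE (fullname NVARCHAR(50)))
AS
BEGIN
    outtab = SELECT lastname || :delimiter || firstname AS fullname FROM :intab;
END;

-- Both defaults are used; only the output parameter is bound:
CALL fullname (outtab => ?);
-- Overriding only the table, keeping the default delimiter:
CALL fullname (intab => mynames, outtab => ?);
```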
Note
Default values are not supported for output parameters.
For a tabular IN and OUT parameter the EMPTY keyword can be used to define an empty input table as a
default:
Although the general default value handling is supported for input parameters only, the DEFAULT EMPTY is
supported for both tabular IN and OUT parameters.
The following example uses DEFAULT EMPTY for the tabular output parameter, which makes it possible to declare a procedure with an empty body.
END;
Creating the procedure without DEFAULT EMPTY causes an error indicating that OUTTAB is not assigned. The
PROC_EMPTY procedure can be called as usual and it returns an empty result set:
call CHECKINPUT(result=>?)
OUT(1)
-----------------
'Input is empty'
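A minimal sketch of such a procedure, assuming a single-column output table:

```sql
CREATE PROCEDURE proc_empty (OUT outtab TABLE (i INT) DEFAULT EMPTY)
AS
BEGIN
END;

-- Returns an empty result set:
CALL proc_empty (outtab => ?);
```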
An example of calling the function without passing an input table looks as follows:
When a procedure is created, information about the procedure can be found in the database catalog. You can
use this information for debugging purposes.
The procedures observable in the system views vary according to the privileges that a user has been granted.
The following visibility rules apply:
● CATALOG READ or DATA ADMIN – All procedures in the system can be viewed.
● SCHEMA OWNER, or EXECUTE – Only specific procedures where the user is the owner, or they have
execute privileges, will be shown.
Procedures can be exported and imported in the same way as tables; see the SQL Reference documentation for details. For more information, see Data Import Export Statements in the SAP HANA SQL and System Views Reference.
Related Information
Structure
Table 4:
Structure
Table 5:
6.1.7.3 SYS.OBJECT_DEPENDENCIES
Dependencies between objects, for example, views that refer to a specific table.
Structure
Table 6:
● 0: NORMAL (default)
● 1: EXTERNAL_DIRECT (direct dependency between dependent object and base object)
● 2: EXTERNAL_INDIRECT (indirect dependency between dependent object and base object)
● 5: REFERENTIAL_DIRECT (foreign key dependency between tables)
This section explores the ways in which you can query the OBJECT_DEPENDENCIES system view.
Find all the (direct and indirect) base objects of the DEPS.GET_TABLES procedure using the following
statement.
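The statement might be sketched as follows, using the column names of the SYS.OBJECT_DEPENDENCIES view:

```sql
SELECT * FROM object_dependencies
 WHERE dependent_schema_name = 'DEPS'
   AND dependent_object_name = 'GET_TABLES';
```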
Table 7:
BASE_SCHEMA_NAME | BASE_OBJECT_NAME | BASE_OBJECT_TYPE | DEPENDENT_SCHEMA_NAME | DEPENDENT_OBJECT_NAME | DEPENDENT_OBJECT_TYPE | DEPENDENCY_TYPE
Look at the DEPENDENCY_TYPE column in more detail. You obtained the results in the table above using a
select on all the base objects of the procedure; the objects shown include both persistent and transient
objects. You can distinguish between these object dependency types using the DEPENDENCY_TYPE column,
as follows:
Table 8:
BASE_SCHEMA_NAME | BASE_OBJECT_NAME | BASE_OBJECT_TYPE | DEPENDENT_SCHEMA_NAME | DEPENDENT_OBJECT_NAME | DEPENDENT_OBJECT_TYPE | DEPENDENCY_TYPE
Finally, to find all the dependent objects that are using DEPS.MY_PROC, use the following statement.
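A sketch of the corresponding query on the base-object side:

```sql
SELECT * FROM object_dependencies
 WHERE base_schema_name = 'DEPS'
   AND base_object_name = 'MY_PROC';
```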
Table 9:
BASE_SCHEMA_NAME | BASE_OBJECT_NAME | BASE_OBJECT_TYPE | DEPENDENT_SCHEMA_NAME | DEPENDENT_OBJECT_NAME | DEPENDENT_OBJECT_TYPE | DEPENDENCY_TYPE
6.1.7.4 PROCEDURE_PARAMETER_COLUMNS
PROCEDURE_PARAMETER_COLUMNS provides information about the columns used in table types which
appear as procedure parameters. The information is provided for all table types in use, in-place types and
externally defined types.
Table 10:
There are two different kinds of user-defined functions (UDFs): table user-defined functions and scalar user-defined functions. They are referred to as table UDFs and scalar UDFs in the following table. They differ by input/output parameters, supported functions in the body, and the way they are consumed in SQL statements.
Table 11:
Functions Calling
● Table UDF: can only be called in the FROM clause of an SQL statement, in the same parameter positions as table names. For example: SELECT * FROM myTableUDF(1)
● Scalar UDF: can be called in SQL statements in the same parameter positions as table column names; these occur in the SELECT and WHERE clauses of SQL statements. For example: SELECT myScalarUDF(1) AS myColumn FROM DUMMY
Output
● Table UDF: must return a table whose type is defined in <return_type>.
● Scalar UDF: must return scalar values specified in <return_parameter_list>.
This SQL statement creates read-only user-defined functions that are free of side effects. This means that
neither DDL, nor DML statements (INSERT, UPDATE, and DELETE) are allowed in the function body. All
functions or procedures selected or called from the body of the function must be read-only.
Syntax
Syntax Elements
To look at a table type previously defined with the CREATE TYPE command, see CREATE TYPE [page 17].
Table UDFs must return a table whose type is defined by <return_table_type>, and scalar UDFs must return scalar values specified in <return_parameter_list>.
The following expression defines the structure of the returned table data.
LANGUAGE <lang>
<lang> ::= SQLSCRIPT
Default: SQLSCRIPT
Note
Only SQLScript UDFs can be defined.
DEFINER
Specifies that the execution of the function is performed with the privileges of the definer of the function.
INVOKER
Specifies that the execution of the function is performed with the privileges of the invoker of the function.
Specifies the schema for unqualified objects in the function body. If nothing is specified, then the
current_schema of the session is used.
Defines the main body of the table user-defined functions and scalar user-defined functions. Since the
function is flagged as read-only, neither DDL, nor DML statements (INSERT, UPDATE, and DELETE), are
allowed in the function body. A scalar UDF does not support table operations in the function body and
variables of type TABLE as input.
Note
Scalar functions can be marked as DETERMINISTIC, if they always return the same result any time they are
called with a specific set of input parameters.
Defines one or more local variables with associated scalar type or array type.
An array type has <type> as its element type. An array has a range from 1 to 2,147,483,647, which is the limitation of the underlying structure.
You can assign default values by specifying <expression>s. See Expressions in the SAP HANA SQL and
System Views Reference on the SAP Help Portal.
For further information on the definitions in <func_stmt>, see CREATE PROCEDURE [page 20].
Example
How to call the table function scale is shown in the following example:
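Assuming scale is a table UDF taking one integer argument, the call could be sketched as:

```sql
-- A table UDF is consumed in the FROM clause, like a table:
SELECT * FROM scale(100);
```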
How to create a scalar function named func_add_mul that takes two values of type double and returns two values of type double is shown in the following example:
In a query you can either use the scalar function in the projection list or in the where-clause. In the following
example the func_add_mul is used in the projection list:
Besides using the scalar function in a query you can also use a scalar function in scalar assignment, e.g.:
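The following sketch shows such a function and its uses; the output parameter names result_add and result_mul are assumptions:

```sql
CREATE FUNCTION func_add_mul (x DOUBLE, y DOUBLE)
RETURNS result_add DOUBLE, result_mul DOUBLE
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
    result_add := :x + :y;
    result_mul := :x * :y;
END;

-- In the projection list of a query:
SELECT func_add_mul(1, 2).result_add AS added,
       func_add_mul(1, 2).result_mul AS multiplied
  FROM dummy;

-- In a scalar assignment inside a procedure or function body:
-- added := func_add_mul(1, 2).result_add;
```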
You can use ALTER FUNCTION if you want to change the content and properties of a function without
dropping the object.
For more information about the parameters, refer to CREATE FUNCTION. For instance, with ALTER FUNCTION you can change the content of the body itself. Consider the following function GET_FUNCTIONS
that returns all function names on the database.
CREATE FUNCTION GET_FUNCTIONS()
-- header reconstructed from context; the return column types are assumed
RETURNS TABLE (schema_name NVARCHAR(256), name NVARCHAR(256))
AS
BEGIN
return SELECT schema_name AS schema_name,
              function_name AS name
         FROM FUNCTIONS;
END;
The function GET_FUNCTIONS should now be changed to return only valid functions. In order to do so, we will
use ALTER FUNCTION:
ALTER FUNCTION GET_FUNCTIONS()
-- header reconstructed from context; the return column types are assumed
RETURNS TABLE (schema_name NVARCHAR(256), name NVARCHAR(256))
AS
BEGIN
return SELECT schema_name AS schema_name,
              function_name AS name
         FROM FUNCTIONS
        WHERE IS_VALID = 'TRUE';
END;
Besides changing the function body, you can also change the default schema <default_schema_name>. However, the following properties cannot be changed with ALTER FUNCTION:
● function owner
● security mode (INVOKER, DEFINER)
● function type (table function, scalar function, procedure)
● parameter signature (parameter name, parameter type, default value)
If you need to change these properties, you have to drop and re-create the function by using DROP FUNCTION and CREATE FUNCTION.
Note that if the default schema is not explicitly specified, it will be removed.
Note
You need the ALTER privilege for the object you want to change.
The following tables list the parameters you can use when defining your user-defined functions.
Table 12:
Table user-defined functions:
● Can have a list of input parameters and must return a table whose type is defined in <return type>.
● Input parameters must be explicitly typed and can have any primitive SQL type or a table type.
Scalar user-defined functions:
● Can have a list of input parameters and must return scalar values specified in <return parameter list>.
● Input parameters must be explicitly typed and can have any primitive SQL type.
● Using a table as an input is not allowed.
When a function is created, information about the function can be found in the database catalog. You can use
this information for debugging purposes. The functions observable in the system views vary according to the
privileges that a user has been granted. The following visibility rules apply:
● CATALOG READ or DATA ADMIN – All functions in the system can be viewed.
● SCHEMA OWNER, or EXECUTE – Only specific functions where the user is the owner, or they have
execute privileges, will be shown.
Structure
Table 13:
Structure
Table 14:
FUNCTION_PARAMETER_COLUMNS provides information about the columns used in table types which
appear as function parameters. The information is provided for all table types in use, in-place types and
externally defined types.
Table 15:
In the signature you can define default values for input parameters by using the DEFAULT keyword:
The usage of the default value is illustrated in the next example, which requires the following tables:
The function in the example generates a FULLNAME from the given input table and delimiter, whereby default values are used for both input parameters:
END;
For the tabular input parameter INTAB the default table NAMES is defined, and for the scalar input parameter DELIMITER the ‘,’ is defined as the default.
That means querying the function FULLNAME using the default value would be done as follows:
FULLNAME
--------
DOE,JOHN
To pass a different table, for example MYNAMES, while still using the default delimiter value, you need to use named parameters to pass in the parameters. The query then looks as follows:
And the result shows that now the table MYNAMES was used:
FULLNAME
--------
DOE,ALICE
In a scalar function, default values can also be used, as shown in the next example:
Calling that function by using the default value of the variable delimiter would be the following:
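A sketch of a scalar function with a defaulted delimiter; the function and column names are assumptions:

```sql
CREATE FUNCTION scalar_fullname (
    firstname NVARCHAR(20),
    lastname  NVARCHAR(20),
    delimiter NVARCHAR(10) DEFAULT ',')
RETURNS fullname NVARCHAR(50)
AS
BEGIN
    fullname := :lastname || :delimiter || :firstname;
END;

-- Named parameters allow the default delimiter to be used:
SELECT scalar_fullname (firstname => 'JOHN', lastname => 'DOE') AS fullname
  FROM dummy;
```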
Note
Default values are not supported for output parameters.
Related Information
Deterministic scalar user-defined functions always return the same result any time they are called with a
specific set of input values.
When you use such functions, it is not necessary to recalculate the result every time; you can refer to the
cached result. If you want to make a scalar user-defined function explicitly deterministic, you need to use the
optional keyword DETERMINISTIC when you create your function, as demonstrated in the example below. The
lifetime of the cache entry is bound to the query execution (for example, SELECT/DML). After the execution of
the query, the cache will be destroyed.
Sample Code
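A minimal sketch of a function marked DETERMINISTIC (name and body are hypothetical):

```sql
CREATE FUNCTION det_add (a INT, b INT)
RETURNS res INT DETERMINISTIC
AS
BEGIN
    res := :a + :b;
END;
```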
Note
In the system view SYS.FUNCTIONS, the column IS_DETERMINISTIC provides information about whether a
function is deterministic or not.
The following non-deterministic functions cannot be specified in deterministic scalar user-defined functions. They return an error at function creation time.
● nextval/currval of sequence
● current_time/current_timestamp/current_date
● current_utctime/current_utctimestamp/current_utcdate
● rand/rand_secure
● window functions
Syntax
Syntax Elements
When <drop_option> is not specified, a non-cascaded drop is performed. This only drops the specified function; dependent objects of the function are invalidated but not dropped.
The invalidated objects can be revalidated when an object with the same schema and object name is created.
CASCADE
Drops the function together with its dependent objects.
RESTRICT
Drops the function only when dependent objects do not exist. If this drop option is used and a dependent object exists, an error is thrown.
Drops a function created using CREATE FUNCTION from the database catalog.
Examples
The following example drops the function my_func from the database using a non-cascaded drop.
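The corresponding statement:

```sql
DROP FUNCTION my_func;
```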
When creating a SQLScript procedure or function, you can use the OR REPLACE option to change the defined
procedure or function, if it already exists.
Syntax
Behavior
The behavior of this command depends on the existence of the defined procedure or function. If the procedure
or function already exists, it will be modified according to the new definition. If you do not explicitly specify a property (for example, read only), this property will be set to the default value. Please refer to the example below.
Compared to using DROP PROCEDURE followed by CREATE PROCEDURE, CREATE OR REPLACE has the
following benefits:
● DROP and CREATE incur object revalidation twice, while CREATE OR REPLACE incurs it only once
● If a user drops a procedure, its privileges are lost, while CREATE OR REPLACE preserves them.
Restrictions
Example
Sample Code
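A sketch of redefining a hypothetical function in place:

```sql
-- If my_func already exists, its definition is replaced and
-- granted privileges are preserved.
CREATE OR REPLACE FUNCTION my_func (x INT)
RETURNS y INT AS
BEGIN
    y := :x + 1;
END;
```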
An anonymous block is an executable DML statement which can contain imperative or declarative statements.
All SQLScript statements supported in procedures are also supported in anonymous blocks. Compared to
procedures, anonymous blocks have no corresponding object created in the metadata catalog.
An anonymous block is defined and executed in a single step by using the following syntax:
DO [(<parameter_clause>)]
BEGIN [SEQUENTIAL EXECUTION]
<body>
END
<body> ::= !! supports the same feature set as a procedure
For more information on <body>, see <procedure_body> in CREATE in the SAP HANA SQL and System
Views Reference on the SAP Help Portal.
With the parameter clause you can define a signature, whereby the value of input and output parameters
needs to be bound by using named parameters.
Note
INOUT parameters and DEFAULT EMPTY are not supported.
The following example illustrates how to call an anonymous block with a parameter clause:
For output parameters, only ? is a valid value and it cannot be omitted; otherwise the query parameter cannot be bound. For scalar input parameters, any scalar expression can be used.
You can also parameterize the scalar parameters if needed. For the example given above, it would look as follows:
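A sketch of an anonymous block with a parameter clause; the parameter names are assumptions:

```sql
DO (IN  im_var INT => 5,
    OUT ex_tab TABLE (i INT) => ?)
BEGIN
    ex_tab = SELECT :im_var AS i FROM dummy;
END;

-- The scalar input can also be parameterized:
-- DO (IN im_var INT => ?, OUT ex_tab TABLE (i INT) => ?) BEGIN ... END;
```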
Contrary to a procedure, an anonymous block has no container-specific properties (for example, language,
security mode, and so on.) However, the body of an anonymous block is similar to the procedure body.
In the following example, you find further examples for anonymous blocks:
Example 1
DO
BEGIN
DECLARE I INTEGER;
CREATE TABLE TAB1 (I INTEGER);
FOR I IN 1..10 DO
INSERT INTO TAB1 VALUES (:I);
END FOR;
END;
This example contains an anonymous block that creates a table and inserts values into that table.
Example 2
DO
BEGIN
T1 = SELECT * FROM TAB;
CALL PROC3(:T1, :T2);
SELECT * FROM :T2;
END
Example 3
Procedure and function definitions may contain sensitive or critical information, but a user with system privileges can easily see all definitions in the public system views PROCEDURES and FUNCTIONS, or in traces, even if the procedure or function owner has restricted authorization rights in order to secure these objects. If application developers want to protect their intellectual property from other users, even system users, they can use SQLScript encryption.
Note
Decryption of an encrypted procedure or function is not supported and cannot be performed even by SAP.
Users who want to use encrypted procedures or functions are responsible for saving the original source
code and providing supportability because there is no way to go back and no supportability tools for that
purpose are available in SAP HANA.
Syntax
Code Syntax
Code Syntax
Code Syntax
If a procedure or a function is created by using the WITH ENCRYPTION option, their definition is saved as an
encrypted string that is not human readable. That definition is decrypted only when the procedure or function
is compiled. The body in the CREATE statement is masked in various traces or monitoring views.
Encrypting a procedure or a function with the ALTER PROCEDURE/FUNCTION statement can be achieved in
the following ways. An ALTER PROCEDURE/FUNCTION statement, accompanying a procedure body, can
make use of the WITH ENCRYPTION option, just like the CREATE PROCEDURE/FUNCTION statement.
If you do not want to repeat the procedure or function body in the ALTER PROCEDURE/FUNCTION statement
and want to encrypt the existing procedure or function, you can use ALTER PROCEDURE/FUNCTION
<proc_func_name> ENCRYPTION ON. However, the CREATE statement without the WITH ENCRYPTION
property is not secured.
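Sketches of the two variants; the procedure name and body are hypothetical:

```sql
-- Creating an encrypted procedure:
CREATE PROCEDURE secret_proc (OUT val INT)
WITH ENCRYPTION AS
BEGIN
    val := 1;
END;

-- Encrypting an existing procedure without repeating its body:
ALTER PROCEDURE secret_proc ENCRYPTION ON;
```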
Note
A new encryption key is generated for each procedure or function and is managed internally.
SQLScript Debugger, PlanViz, traces, monitoring views, and others that can reveal procedure definition are
not available for encrypted procedures or functions.
Additional Considerations
Object Dependency
The object dependency of encrypted procedures or functions is not secured. The purpose of encryption is to
secure the logic of procedures or functions and object dependency cannot reveal how a procedure or a
function works.
Limitation in Optimization
Some optimizations, which need analysis of the procedure or function definition, are turned off for encrypted
procedures and functions.
Calculation Views
An encrypted procedure cannot be used as a basis for a calculation view. It is recommended to use table user-defined functions instead.
For every public interface that shows procedure or function definitions, such as PROCEDURES or
FUNCTIONS, the definition column displays only the signature of the procedure, if it is encrypted.
Sample Code
Sample Code
Sample Code
Supportability
For every monitoring view showing internal queries, the internal statements will also be hidden, if its parent is
an encrypted procedure call. Debugging tools or plan analysis tools are also blocked.
● SQLScript Debugger
● EXPLAIN PLAN FOR Call
● PlanViz
● Statement-related views
● Plan Cache-related views
● M_ACTIVE_PROCEDURES
In these monitoring views, the SQL statement string is replaced with the string <statement from
encrypted procedure <proc_schema>.<proc_name> (<sqlscript_context_id>)>.
Default Behavior
Encrypted procedures or functions cannot be exported, if the option ENCRYPTED OBJECT HEADER ONLY is
not applied. When the export target is an encrypted object or if objects, which are referenced by the export
object, include an encrypted object, the export will fail with the error FEATURE_NOT_SUPPORTED. However,
when exporting a schema and an encrypted procedure or function in the schema does not have any dependent
objects, the procedure or function will be skipped during the export.
To enable export of any other objects based on an encrypted procedure, the option ENCRYPTED OBJECT
HEADER ONLY is introduced for the EXPORT statement. This option does not export encrypted objects in an encrypted state, but exports each encrypted object as a header-only procedure or function. After an encrypted procedure or function has been exported with the HEADER ONLY option, objects based on the encrypted objects can also be exported.
Sample Code
Original Procedure
Sample Code
Export Statement
export all as binary into <path> with encrypted object header only;
Sample Code
Exported create.sql
Each table assignment in a procedure or table user-defined function specifies a transformation of some data by means of classical relational operators such as selection and projection. The result of the statement is then bound to a variable which is either used as input by a subsequent statement's data transformation or is one of the output variables of the procedure. To describe the data flow of a procedure, statements bind new variables that are referenced elsewhere in the body of the procedure.
This approach leads to data flows which are free of side effects. The declarative nature to define business logic
might require some deeper thought when specifying an algorithm, but it gives the SAP HANA database
freedom to optimize the data flow which may result in better performance.
The following example shows a simple procedure implemented in SQLScript. To better illustrate the high-level
concept, we have omitted some details.
This SQLScript example defines a read-only procedure that has 2 scalar input parameters and 2 output
parameters of type table. The first line contains an SQL query Q1, that identifies big publishers based on the
number of books they have published (using the input parameter cnt). Next, detailed information about these
publishers along with their corresponding books is determined in query Q2. Finally, this information is
aggregated in 2 different ways in queries Q3 (aggregated per publisher) and Q4 (aggregated per year)
respectively. The resulting tables constitute the output tables of the function.
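Under the assumption of a books table with publisher, title, isbn, and year columns, the procedure described above might be sketched as follows (the text mentions two scalar inputs; only cnt is reconstructed here):

```sql
CREATE PROCEDURE get_publisher_stats (
    IN  cnt INT,
    OUT output_pubs  TABLE (publisher NVARCHAR(50), cnt INT),
    OUT output_years TABLE (year INT, cnt INT))
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
    big_pub_ids   = SELECT publisher                       -- Q1: big publishers
                      FROM books
                     GROUP BY publisher
                    HAVING COUNT(isbn) > :cnt;
    big_pub_books = SELECT b.title, b.publisher, b.year    -- Q2: their books
                      FROM :big_pub_ids p, books b
                     WHERE p.publisher = b.publisher;
    output_pubs   = SELECT publisher, COUNT(title) AS cnt  -- Q3: per publisher
                      FROM :big_pub_books
                     GROUP BY publisher;
    output_years  = SELECT year, COUNT(title) AS cnt       -- Q4: per year
                      FROM :big_pub_books
                     GROUP BY year;
END;
```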
A procedure in SQLScript that only uses declarative constructs can be completely translated into an acyclic
dataflow graph where each node represents a data transformation. The example above could be represented
as the dataflow graph shown in the following image. Similar to SQL queries, the graph is analyzed and
optimized before execution. It is also possible to call a procedure from within another procedure. In terms of
the dataflow graph, this type of nested procedure call can be seen as a sub-graph that consumes intermediate
results and returns its output to the subsequent nodes. For optimization, the sub-graph of the called
procedure is merged with the graph of the calling procedure, and the resulting graph is then optimized. The
optimization applies similar rules as an SQL optimizer uses for its logical optimization (for example filter
pushdown). Then the plan is translated into a physical plan which consists of physical database operations (for
example hash joins). The translation into a physical plan involves further optimizations using a cost model as
well as heuristics.
Description
Table parameters that are defined in the Signature are either input or output. They must be typed explicitly.
This can be done either by using a table type previously defined with the CREATE TYPE command or by writing
it directly in the signature without any previously defined table type.
Example
The advantage of a previously defined table type is that it can be reused by other procedures and functions. The disadvantage is that you must take care of its lifecycle.
The advantage of a table variable structure that you directly define in the signature is that you do not need to
take care of its lifecycle. In this case, the disadvantage is that it cannot be reused.
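The two variants can be sketched as follows; the type, procedure, and column names are assumptions:

```sql
-- Variant 1: a reusable, previously defined table type.
CREATE TYPE tt_names AS TABLE (name NVARCHAR(50));

CREATE PROCEDURE proc_typed (IN intab tt_names, OUT outtab tt_names) AS
BEGIN
    outtab = SELECT name FROM :intab;
END;

-- Variant 2: the table structure written directly in the signature.
CREATE PROCEDURE proc_inline (
    IN  intab  TABLE (name NVARCHAR(50)),
    OUT outtab TABLE (name NVARCHAR(50))) AS
BEGIN
    outtab = SELECT name FROM :intab;
END;
```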
The type of a table variable in the body of a procedure or a table function is either derived from the SQL Query,
or declared explicitly.
If the table variable derives its type from the SQL query, the SQLScript compiler determines the type from the first assignment of the variable, thus providing a lot of flexibility. One disadvantage of this approach is that it can lead to many type conversions in the background, because sometimes the derived table type does not match the typed table parameters in the signature. This can lead to additional, unnecessary conversions. Another disadvantage is the unnecessary internal statement compilation needed to derive the types. To avoid this unnecessary effort, you can declare the type of a table variable explicitly. A declared table variable is always initialized with empty content.
Local table variables are declared by using the DECLARE keyword. For the referenced type, you can either use
a previously declared table type, or the type definition TABLE (<column_list_definition>). The next
example illustrates both variants:
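A sketch of both variants, assuming a table type tt_names and a table names created earlier:

```sql
CREATE PROCEDURE proc_declare (OUT outtab TABLE (name NVARCHAR(50))) AS
BEGIN
    DECLARE t1 tt_names;                   -- previously declared table type
    DECLARE t2 TABLE (name NVARCHAR(50));  -- inline type definition
    t1 = SELECT name FROM names;
    t2 = SELECT name FROM :t1;
    outtab = SELECT name FROM :t2;
END;
```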
You can also directly assign a default value to a table variable by using the DEFAULT keyword or ‘=’. All statements that are supported for a typical table variable assignment are also allowed as default values. The table variable can also be flagged as read-only by using the CONSTANT keyword. The consequence is that you cannot overwrite the variable any more. Note that if the CONSTANT keyword is used, the table variable must have a default value; it cannot be NULL.
Description
Local table variables are declared by using the DECLARE keyword. A table variable temp can be referenced by
using :temp. For more information, see Referencing Variables [page 73]. The <sql_identifier> must be
unique among all other scalar variables and table variables in the same code block. However, you can use
names that are identical to the name of another variable in a different code block. Additionally, you can
reference those identifiers only in their local scope.
In each block there are table variables declared with identical names. However, since the last assignment to
the output parameter <outTab> can only have the reference of variable <temp> declared in the same block,
the result is the following:
In this code example there is no explicit table variable declaration, which means the <temp> variable is visible among all blocks. For this reason, the result is the following:
N
----
2
For every assignment of the explicitly declared table variable, the derived column names and types on the
right-hand side are checked against the explicitly declared type on the left-hand side.
Another difference, compared to derived types, is that a reference to a table variable without an assignment,
returns a warning during the compilation.
CREATE PROCEDURE check_assignment (IN num INT)
-- header reconstructed; the procedure name and parameter are assumed
AS
BEGIN
DECLARE a TABLE (i DECIMAL(2,1), j INTEGER);
IF :num = 4
THEN
a = SELECT i, j FROM tab;
END IF;
END;
The example above returns a warning because the table variable <a> is unassigned if <:num> is not 4. This behavior can be controlled by the configuration parameter UNINITIALIZED_TABLE_VARIABLE_USAGE. Besides issuing a warning, it also offers the following options:
Table 20:
Derived Type:
● Create new variable: first SQL query assignment
● Variable scope: global scope, regardless of the block where it was first declared
● Unassigned variable check: no warning during compilation
Explicitly Declared:
● Create new variable: table variable declaration in a block
● Variable scope: available in the declared block only; variable hiding is applied
● Unassigned variable check: warning during compilation if it is possible to refer to the unassigned table variable; the check is performed only if a table variable is used
You can specify the NOT NULL constraint on columns in table types used in SQLScript. Historically, this was
not allowed by the syntax and existing NOT NULL constraints on tables and table types were ignored when
used as types in SQLScript. Now, NOT NULL constraints are taken into consideration, if specified directly in
the column list of table types. NOT NULL constraints in persistent tables and table types are still ignored by
default for backward compatibility but you can make them valid by changing the configuration, as follows:
If both are set, the session variable takes precedence. Setting it to 'ignore_with_warning' has the same
effect as 'ignore', except that you additionally get a warning whenever the constraint is ignored. With
'respect', the NOT NULL constraints (including primary keys) in tables and table types will be taken into
consideration but that could invalidate existing procedures. Consider the following example:
Sample Code
Table variables are bound using the equality operator. This operator binds the result of a valid SELECT
statement on the right-hand side to an intermediate variable or an output parameter on the left-hand side.
Statements on the right hand side can refer to input parameters or intermediate result variables bound by
other statements. Cyclic dependencies that result from the intermediate result assignments or from calling
other functions are not allowed, that is to say recursion is not possible.
Bound variables are referenced by their name (for example, <var>). In the variable reference, the variable name is prefixed by <:>, such as <:var>. The procedure or table function describes a dataflow graph using its statements and the variables that connect the statements. The order in which statements are written in a
body can be different from the order in which statements are evaluated. In case a table variable is bound
multiple times, the order of these bindings is consistent with the order they appear in the body. Additionally,
statements are only evaluated if the variables that are bound by the statement are consumed by another
subsequent statement. Consequently, statements whose results are not consumed are removed during
optimization.
Example:
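The assignment discussed in the following paragraph might look like this; the column names are assumptions:

```sql
lt_expensive_books = SELECT title, price, publisher
                       FROM :it_books
                      WHERE price > :minPrice
                        AND currency = :currency;
```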
In this assignment, the variable <lt_expensive_books> is bound. The <:it_books> variable in the FROM
clause refers to an IN parameter of a table type. It would also be possible to consume variables of type table in
the FROM clause which were bound by an earlier statement. <:minPrice> and <:currency> refer to IN
parameters of a scalar type.
Syntax
Syntax Elements
The parameter name definition. PLACEHOLDER is used for placeholder parameters and HINT for hint parameters.
Description
Using column view parameter binding, it is possible to pass parameters from a procedure or scripted calculation view to a parameterized column view, for example a hierarchy view, a graphical calculation view, or a scripted calculation view.
Examples:
In the following example, assume you have the calculation view CALC_VIEW with placeholder parameters
"client" and "currency". You want to use this view in a procedure and bind the values of the parameters during
the execution of the procedure.
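A sketch of such a procedure; the output column list is an assumption:

```sql
CREATE PROCEDURE call_calc_view (
    IN  in_client   NVARCHAR(3),
    IN  in_currency NVARCHAR(3),
    OUT outtab      TABLE (sales DECIMAL(15,2)))
AS
BEGIN
    -- Bind the placeholder values of the calculation view at execution time:
    outtab = SELECT sales
               FROM calc_view (PLACEHOLDER."$$client$$"   => :in_client,
                               PLACEHOLDER."$$currency$$" => :in_currency);
END;
```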
The following example assumes that you have a hierarchical column view "H_PROC" and you want to use this
view in a procedure. The procedure should return an extended expression that will be passed via a variable.
CALL "EXTEND_EXPRESSION"('',?);
CALL "EXTEND_EXPRESSION"('subtree("B1")',?);
Description
The MAP_MERGE operator is used to apply each row of the input table to the mapper function and unite all
intermediate result tables. The purpose of the operator is to replace sequential FOR-loops and union patterns,
like in the example below, with a parallel operator.
Sample Code
Note
The mapper procedure is a read-only procedure with only one output that is a tabular output.
Syntax
The first input of the MAP_MERGE operator is the mapper table <table_or_table_variable>. The mapper table is a table or a table variable over which you want to iterate by rows. In the above example, it would be the table variable t.
The second input is the mapper function <mapper_identifier> itself. The mapper function is a function you
want to have evaluated on each row of the mapper table <table_or_table_variable>. Currently, the
MAP_MERGE operator supports only table functions as <mapper_identifier>. This means that in the above
example you need to convert the mapper procedure into a table function.
Example
As an example, let us rewrite the above example to leverage the parallel execution of the MAP_MERGE operator.
We need to transform the procedure into a table function, because MAP_MERGE only supports table functions
as <mapper_identifier>.
Sample Code
After transforming the mapper procedure into a function, we can now replace the whole FOR loop by the
MAP_MERGE operator.
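The parallel version can be sketched as follows; the table variable t, the mapper function name and the mapped column a are taken as assumptions from the description above:

```sql
-- apply the table function "mapper" to each row of table variable t
-- and unite all intermediate result tables
restab = MAP_MERGE(:t, mapper(:t.a));
```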
Table 21:
Sequential FOR-Loop Version | Parallel MAP_MERGE Operator
The SQLScript compiler combines statements to optimize code. Hints enable you to block or enforce the
inlining of table variables.
Note
Using a HINT needs to be considered carefully. In some cases, using a HINT could end up being more
expensive.
Block Statement-Inlining
The overall optimization guideline in SQLScript states that dependent statements are combined if possible.
For example, you have two table variable assignments as follows:
There can be situations, however, when the combined statements lead to a non-optimal plan and as a result, to
less-than-optimal performance of the executed statement. In these situations it can help to block the
combination of specific statements. Therefore SAP has introduced a HINT called NO_INLINE. By placing that
HINT at the end of a SELECT statement, you block the combination (or inlining) of that statement into other
statements. An example of using this follows:
By adding WITH HINT (NO_INLINE) to the table variable tab, you can block the combination of that
statement and ensure that the two statements are executed separately.
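A minimal sketch, assuming a table T with columns A and B:

```sql
tab  = SELECT A, B FROM T WITH HINT (NO_INLINE);  -- executed separately
tab2 = SELECT A FROM :tab WHERE B > 10;           -- not combined with the statement above
```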
Enforce Statement-Inlining
Using the hint called INLINE helps in situations when you want to combine the statement of a nested
procedure into the outer procedure.
Currently, statements that belong to a nested procedure are not combined into the statements of the calling
procedures. In the following example, you have two procedures defined.
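The two procedures could look like the following sketch; the table T and column I are assumptions:

```sql
CREATE PROCEDURE ProcInner (OUT tab2 TABLE (I INT))
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  tab2 = SELECT I FROM T;
END;

CREATE PROCEDURE ProcCaller (OUT tab TABLE (I INT))
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  CALL ProcInner(tab2);
  tab = SELECT I FROM :tab2 WHERE I > 10;
END;
```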
By executing the procedure, ProcCaller, the two table assignments are executed separately. If you want to
have both statements combined, you can do so by using WITH HINT (INLINE) at the statement of the
output table variable. Using this example, it would be written as follows:
Now, if the procedure, ProcCaller, is executed, then the statement of table variable tab2 in ProcInner is
combined into the statement of the variable, tab, in the procedure, ProcCaller:
SELECT I FROM (SELECT I FROM T WITH HINT (INLINE)) where I > 10;
Local table variables are, as the name suggests, variables with a reference to tabular data structure. This data
structure originates from an SQL Query.
In this section we will focus on imperative language constructs such as loops and conditionals. The use of
imperative logic splits the logic among several dataflows. For additional information, see Orchestration Logic
[page 14] and Declarative SQLScript Logic [page 67].
Syntax
Syntax Elements
Description
Local variables are declared using the DECLARE keyword and can optionally be initialized with their
declaration. By default, scalar variables are initialized with NULL. A scalar variable var can be referenced the
same way as described above using :var.
Tip
If you want to access the value of the variable, then use :var in your code. If you want to assign a value to
the variable, then use var in your code.
Recommendation
SAP recommends that you use only the = operator in defining scalar variables. (The := operator is still
available, however.)
Example
CREATE PROCEDURE proc (OUT z INT) LANGUAGE SQLSCRIPT READS SQL DATA
AS
BEGIN
DECLARE a int;
DECLARE b int = 0;
DECLARE c int DEFAULT 0;
In the example you see the various ways of making declarations and assignments.
Note
Before the SAP HANA SPS 08 release, scalar UDF assignment to the scalar variable was not supported. If
you wanted to get the result value from a scalar UDF and consume it in a procedure, the scalar UDF had to
be used in a SELECT statement, even though this was expensive.
Now you can assign a scalar UDF with one or more output parameters to scalar variables, as depicted in the
following code examples.
Assign the scalar UDF with more than 1 output to scalar variables:
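A sketch of both assignment variants; the UDF names and signatures are hypothetical:

```sql
-- scalar UDF with a single output
x = my_udf_single(:a);
-- scalar UDF with more than one output: assign all outputs at once
(v1, v2) = my_udf_multi(:a, :b);
```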
Global session variables can be used in SQLScript to share a scalar value between procedures and functions
that are running in the same session. The value of a global session variable is not visible from another session.
To set the value of a global session variable you use the following syntax:
While <key> can only be a constant string or a scalar variable, <value> can be any expression, scalar
variable or function that returns a value convertible to string. Both have a maximum length of 5000
characters. The session variable cannot be explicitly typed and is of type string. If <value> is not of type string,
the value is implicitly converted to string.
The next examples illustrate how you can set the value of a session variable in a procedure:
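For example (the key MY_VAR is a hypothetical name):

```sql
CREATE PROCEDURE set_my_var (IN new_value NVARCHAR(50)) AS
BEGIN
  -- visible to all procedures and functions running in the same session
  SET 'MY_VAR' = :new_value;
END;
```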
To retrieve the session variable, the function SESSION_CONTEXT (<key>) can be used.
For more information on SESSION_CONTEXT see SESSION_CONTEXT in the SAP HANA SQL and System
Views Reference on the SAP Help Portal.
For example, the following function retrieves the value of the session variable 'MY_VAR':
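A sketch of such a function:

```sql
CREATE FUNCTION get_my_var RETURNS ret NVARCHAR(50) AS
BEGIN
  -- read the session variable set earlier in the same session
  ret = SESSION_CONTEXT('MY_VAR');
END;
```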
Note
SET <key> = <value> cannot be used in functions and procedures flagged as READ ONLY (scalar and
table functions are implicitly READ ONLY).
Note
The maximum number of session variables can be configured with the configuration parameter
max_session_variables under the section session (min=1, max=5000). The default is 1024.
SQLScript supports local variable declaration in a nested block. Local variables are only visible in the scope of
the block in which they are defined. It is also possible to define local variables inside LOOP / WHILE / FOR /
IF-ELSE control structures.
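For example, the procedure nested_block called below could be declared as in the following sketch, where each nested block re-declares the variable a:

```sql
CREATE PROCEDURE nested_block (OUT val INT)
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  DECLARE a INT = 1;
  BEGIN
    DECLARE a INT = 2;
    BEGIN
      DECLARE a INT = 3;  -- only visible inside this innermost block
    END;
    val = :a;             -- reads the a declared in this (second-level) block
  END;
END;
```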
call nested_block(?)
--> OUT:[2]
From this result you can see that the innermost nested block value of 3 has not been passed to the val
variable. Now let's redefine the procedure without the innermost DECLARE statement:
Now when you call this modified procedure the result is:
call nested_block(?)
--> OUT:[3]
From this result you can see that the innermost nested block has used the variable declared in the second level
nested block.
Conditionals
CREATE PROCEDURE nested_block_if(IN inval INT, OUT val INT) LANGUAGE SQLSCRIPT
READS SQL DATA AS
BEGIN
DECLARE a INT = 1;
DECLARE v INT = 0;
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN
val = :a;
END;
v = 1 /(1-:inval);
IF :a = 1 THEN
DECLARE a INT = 2;
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN
val = :a;
END;
v = 1 /(2-:inval);
IF :a = 2 THEN
DECLARE a INT = 3;
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN
val = :a;
END;
v = 1 / (3-:inval);
END IF;
v = 1 / (4-:inval);
END IF;
v = 1 / (5-:inval);
END;
call nested_block_if(1, ?)
-->OUT:[1]
call nested_block_if(2, ?)
-->OUT:[2]
call nested_block_if(3, ?)
-->OUT:[3]
call nested_block_if(4, ?)
--> OUT:[2]
call nested_block_if(5, ?)
--> OUT:[1]
While Loop
For Loop
Loop
Note
The example below uses tables and values created in the For Loop example above.
9.4.1 Conditionals
Syntax:
IF <bool_expr1>
THEN
<then_stmts1>
[{ELSEIF <bool_expr2>
THEN
<then_stmts2>}...]
[ELSE
<else_stmts3>]
END IF
Syntax elements:
Specifies the comparison value. This can be based on either scalar literals or scalar variables.
Description:
The IF statement consists of a Boolean expression <bool_expr1>. If this expression evaluates to true then
the statements <then_stmts1> in the mandatory THEN block are executed. The IF statement ends with END
IF. The remaining parts are optional.
If the Boolean expression <bool_expr1> does not evaluate to true, the ELSE branch is evaluated. The
statements <else_stmts3> are executed without further checks. After an ELSE branch, no further ELSE or
ELSEIF branches are allowed.
Alternatively, when ELSEIF is used instead of ELSE a further Boolean expression <bool_expr2> is evaluated.
If it evaluates to true, the statements <then_stmts2> are executed. In this manner an arbitrary number of
ELSEIF clauses can be added.
This statement can be used to simulate the switch-case statement known from many programming
languages.
Examples:
Example 1
You use the IF statement to implement the functionality of the SAP HANA database's UPSERT statement.
Example 3
It is also possible to use a scalar UDF in the condition, as shown in the following example.
Related Information
Syntax:
WHILE <condition> DO
<proc_stmts>
END WHILE
Syntax elements:
The while loop executes the statements <proc_stmts> in the body of the loop as long as the Boolean
expression at the beginning <condition> of the loop evaluates to true.
Example 1
You use WHILE to increment the :v_index1 and :v_index2 variables using nested loops.
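A sketch of such nested WHILE loops:

```sql
DECLARE v_index1 INT = 0;
DECLARE v_index2 INT = 0;
WHILE :v_index1 < 5 DO
  v_index2 = 0;
  WHILE :v_index2 < 5 DO
    v_index2 = :v_index2 + 1;  -- increment the inner counter
  END WHILE;
  v_index1 = :v_index1 + 1;    -- increment the outer counter
END WHILE;
```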
Example 2
You can also use scalar UDF for the while condition as follows.
Caution
No specific checks are performed to avoid infinite loops.
Syntax:
Syntax elements:
REVERSE
Description:
The FOR loop iterates over a range of numeric values and binds the current value to a variable <loop-var> in
ascending order. Iteration starts with the value of <start_value> and is incremented by one until
<loop-var> is greater than <end_value>.
If <start_value> is larger than <end_value>, <proc_stmts> in the loop will not be evaluated.
Example 1
You use nested FOR loops to call a procedure that traces the current values of the loop variables appending
them to a table.
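A sketch of such nested loops; the trace procedure ins_msg_proc is a hypothetical helper that appends a message to a table:

```sql
FOR i IN 1..3 DO
  FOR j IN 1..3 DO
    -- trace the current values of the loop variables
    CALL ins_msg_proc('i = ' || :i || ', j = ' || :j);
  END FOR;
END FOR;
```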
Example 2
You can also use scalar UDF in the FOR loop, as shown in the following example.
Syntax:
BREAK
CONTINUE
Syntax elements:
BREAK
Specifies that a loop should terminate immediately, without processing any remaining iterations.
CONTINUE
Specifies that a loop should stop processing the current iteration, and should immediately start processing the
next.
Description:
Example:
You defined the following loop sequence. If the loop value :x is less than 3 the iterations will be skipped. If :x is
5 then the loop will terminate.
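A sketch of that loop; the target table temp is an assumption:

```sql
FOR x IN 0..10 DO
  IF :x < 3 THEN
    CONTINUE;                    -- skip the iterations for 0, 1 and 2
  END IF;
  IF :x = 5 THEN
    BREAK;                       -- terminate the loop when x reaches 5
  END IF;
  INSERT INTO temp VALUES (:x);  -- reached for x = 3 and x = 4 only
END FOR;
```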
Related Information
9.5 Cursors
Cursors are used to fetch single rows from the result set returned by a query. When the cursor is declared it is
bound to a query. It is possible to parameterize the cursor query.
Syntax:
Syntax elements:
Description:
Cursors can be defined either after the signature of the procedure and before the procedure’s body or at the
beginning of a block with the DECLARE token. The cursor is defined with a name, optionally a list of parameters,
and an SQL SELECT statement. The cursor provides the functionality to iterate through a query result row-by-
row. Updating cursors is not supported.
Note
Avoid using cursors when it is possible to express the same logic with SQL. You should do this as cursors
cannot be optimized the same way SQL can.
Example:
You create a cursor c_cursor1 to iterate over results from a SELECT on the books table. The cursor passes
one parameter v_isbn to the SELECT statement.
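A sketch of such a declaration; the books table and its columns are assumptions:

```sql
DECLARE CURSOR c_cursor1 (v_isbn VARCHAR(20)) FOR
  SELECT isbn, title, price, crcy FROM books
  WHERE isbn = :v_isbn
  ORDER BY isbn;
```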
Syntax:
OPEN <cursor_name>[(<argument_list>)]
Syntax elements:
Specifies one or more arguments to be passed to the select statement of the cursor.
Description:
Evaluates the query bound to a cursor and opens the cursor so that the result can be retrieved. When the
cursor definition contains parameters then the actual values for each of these parameters must be provided
when the cursor is opened.
This statement prepares the cursor so the results can be fetched for the rows of a query.
Example:
You open the cursor c_cursor1 and pass a string '978-3-86894-012-1' as a parameter.
OPEN c_cursor1('978-3-86894-012-1');
Syntax:
CLOSE <cursor_name>
Syntax elements:
Description:
Closes a previously opened cursor and releases all associated state and resources. It is important to close all
cursors that were previously opened.
CLOSE c_cursor1;
Syntax:
Syntax elements:
Specifies the name of the cursor where the result will be obtained.
Specifies the variables where the row result from the cursor will be stored.
Description:
Fetches a single row in the result set of a query and advances the cursor to the next row. This assumes that
the cursor was declared and opened before. One can use the cursor attributes to check if the cursor points to
a valid row. See Attributes of a Cursor
Example:
You fetch a row from the cursor c_cursor1 and store the results in the variables shown.
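A sketch of such a FETCH; the target variables mirror the cursor's hypothetical column list and must be declared with matching types:

```sql
FETCH c_cursor1 INTO v_isbn, v_title, v_price, v_crcy;
```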
Related Information
A cursor provides a number of methods to examine its current state. For a cursor bound to variable
c_cursor1, the attributes summarized in the table below are available.
c_cursor1::ROWCOUNT Returns the number of rows that the cursor has fetched so far.
This value is available after the first FETCH operation. Before the first fetch operation the number is 0.
Example:
The example below shows a complete procedure using the attributes of the cursor c_cursor1 to check if
fetching a set of results is possible.
Related Information
Syntax:
Specifies one or more arguments to be passed to the select statement of the cursor.
To access the row result attributes in the body of the loop you use the syntax shown.
Description:
Opens a previously declared cursor and iterates over each row in the result set of the query bound to the
cursor. For each row in the result set the statements in the body of the procedure are executed. After the last
row from the cursor has been processed, the loop is exited and the cursor is closed.
Tip
As this loop method takes care of opening and closing cursors, resource leaks can be avoided.
Consequently this loop is preferred to opening and closing a cursor explicitly and using other loop-variants.
Within the loop body, the attributes of the row that the cursor currently iterates over can be accessed like an
attribute of the cursor. Assuming <row_var> is a_row and the iterated data contains a column test, then the
value of this column can be accessed using a_row.test.
Example:
The example below demonstrates using a FOR-loop to loop over the results from c_cursor1 .
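A sketch of such a loop; the parameter value, the column title and the trace procedure ins_msg_proc are assumptions:

```sql
FOR a_row AS c_cursor1('978-3-86894-012-1') DO
  -- the cursor is opened and closed implicitly by the loop
  CALL ins_msg_proc('book title is: ' || a_row.title);
END FOR;
```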
Syntax:
Description:
The autonomous transaction is independent from the main procedure. Changes made and committed by an
autonomous transaction can be stored in persistency regardless of commit/rollback of the main procedure
transaction. The end of the autonomous transaction block has an implicit commit.
The examples show how commit and rollback work inside the autonomous transaction block. The first updates
(1) are committed, whereas the updates made in step (2) are completely rolled back. The last updates (3)
are committed by the implicit commit at the end of the autonomous block.
CREATE PROCEDURE PROC1( IN p INT , OUT outtab TABLE (A INT)) LANGUAGE SQLSCRIPT
AS
BEGIN
DECLARE errCode INT;
DECLARE errMsg VARCHAR(5000);
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN AUTONOMOUS TRANSACTION
errCode= ::SQL_ERROR_CODE;
errMsg= ::SQL_ERROR_MESSAGE ;
INSERT INTO ERR_TABLE (PARAMETER,SQL_ERROR_CODE, SQL_ERROR_MESSAGE)
VALUES ( :p, :errCode, :errMsg);
END;
outtab = SELECT 1/:p as A FROM DUMMY; -- DIVIDE BY ZERO Error if p=0
END
In the example above, an autonomous transaction is used to keep the error code in the ERR_TABLE stored in
persistency.
P |SQL_ERROR_CODE| SQL_ERROR_MESSAGE
--------------------------------------------
0 | 304 | division by zero undefined: at function /()
The LOG_TABLE table contains 'MESSAGE', even though the inner autonomous transaction rolled back.
Note
You have to be cautious if you access a table both before and inside an autonomous transaction started in a
nested procedure (e.g. TRUNCATE, update the same row), because this can lead to a deadlock situation.
One solution to avoid this is to commit the changes before entering the autonomous transaction in the
nested procedure.
The COMMIT command commits the current transaction and all changes before the COMMIT command is
written to persistence.
The ROLLBACK command rolls back the current transaction and undoes all changes since the last COMMIT.
In this example, the B_TAB table has one row before the PROC1 procedure is executed:
Table 23:
V ID
0 1
After you execute the PROC1 procedure, the B_TAB table is updated as follows:
Table 24:
V ID
3 1
This means only the first update in the procedure affected the B_TAB table. The second update does not affect
the B_TAB table because it was rolled back.
The following graphic provides more detail about the transactional behavior. With the first COMMIT command,
transaction tx1 is committed and the update on the B_TAB table is written to persistence. As a result of the
COMMIT, a new transaction starts, tx2.
By triggering ROLLBACK, all changes done in transaction tx2 are reverted. In Example 1, the second update is
reverted. Additionally after the rollback is performed, a new transaction starts, tx3.
Example 2:
In Example 2, the PROC1 procedure calls the PROC2 procedure. The COMMIT in PROC2 commits all changes
done in the tx1 transaction (see the following graphic). This includes the first update statement in the PROC1
procedure as well as the update statement in the PROC2 procedure. With COMMIT a new transaction starts
implicitly, tx2.
Therefore the ROLLBACK command in PROC1 only affects the previous update statement; all other updates
were committed with the tx1 transaction.
Note
● If you used dynamic SQL in the past to execute these commands (for example, EXEC 'COMMIT',
EXEC 'ROLLBACK'), SAP recommends that you replace all occurrences with the native commands
COMMIT/ROLLBACK because they are more secure.
● The COMMIT/ROLLBACK commands are not supported in Scalar UDF or in Table UDF.
Dynamic SQL allows you to construct an SQL statement during the execution time of a procedure. While
dynamic SQL allows you to use variables where they might not be supported in SQLScript and also provides
more flexibility in creating SQL statements, it does have the disadvantage of an additional cost at runtime:
Note
You should avoid dynamic SQL wherever possible as it can have a negative impact on security or
performance.
9.8.1 EXEC
Syntax:
Description:
EXEC executes the SQL statement <sql-statement> passed in a string argument. EXEC does not return any
result set if <sql_statement> is a SELECT statement. You have to use EXECUTE IMMEDIATE for that
purpose.
If the query returns a single row, you can assign the value of each column to a scalar variable by using the INTO
clause.
INTO <var_name_list>
<var_name_list> ::= <var_name>[{, <var_name>}...]
<var_name> ::= <identifier>
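For example, counting rows into a scalar variable; the table T and the variable row_count are assumptions:

```sql
DECLARE row_count INT;
EXEC 'SELECT COUNT(*) FROM T' INTO row_count;
```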
Sample Code
You can also bind scalar values with the USING clause:
USING <expression_list>
<expression_list>::= <expression> [{ , <expression>} …]
<expression> can be either a simple expression, such as a character, a date, or a number, or a scalar
variable.
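A sketch of binding values with USING; the table and variable names are assumptions:

```sql
EXEC 'INSERT INTO LOG_TABLE VALUES (?, ?)' USING :v_key, :v_message;
```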
Sample Code
END;
Syntax:
Description:
EXECUTE IMMEDIATE executes the SQL statement passed in a string argument. The results of queries
executed with EXECUTE IMMEDIATE are appended to the procedures result iterator.
You can also use the INTO and USING clauses to pass scalar values in or out. With the INTO clause, the result
set is not appended to the procedure result iterator. For more information, see the EXEC statement
documentation.
Example:
You use dynamic SQL to delete the contents of the table tab, insert a value and, finally, to retrieve all results in
the table.
9.8.3 APPLY_FILTER
Syntax
<variable_name> = APPLY_FILTER(<table_or_table_variable>,
<filter_variable_name>);
Syntax Elements
The variable where the result of the APPLY_FILTER function will be stored.
You can use APPLY_FILTER with persistent tables and table variables.
<table_name> :: = <identifier>
Note
The following constructs are not supported in the filter string <filter_variable_name>:
The APPLY_FILTER function applies a dynamic filter to a table or table variable. Logically, it can be considered
a partial dynamic SQL statement. Its advantage is that you can assign its result to a table variable without
blocking SQL inlining. Nevertheless, all other disadvantages of full dynamic SQL also apply to
APPLY_FILTER.
Examples
Exception handling is a method for handling exception and completion conditions in an SQLScript procedure.
The DECLARE EXIT HANDLER parameter allows you to define an exit handler to process exception conditions
in your procedure or function.
For example the following exit handler catches all SQLEXCEPTION and returns the information that an
exception was thrown:
DECLARE EXIT HANDLER FOR SQLEXCEPTION SELECT 'EXCEPTION was thrown' AS ERROR
FROM dummy;
To get the error code and the error message, the two system variables ::SQL_ERROR_CODE
and ::SQL_ERROR_MESSAGE can be used, as shown in the next example:
CREATE PROCEDURE MYPROC (IN in_var INTEGER, OUT outtab TABLE(I INTEGER) ) AS
BEGIN
DECLARE EXIT HANDLER FOR SQLEXCEPTION
SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM DUMMY;
outtab = SELECT 1/:in_var as I FROM dummy;
END;
::SQL_ERROR_CODE ::SQL_ERROR_MESSAGE
304 Division by zero undefined: the right-hand value of the division cannot be zero
at function /() (please check lines: 6)
Besides defining an exit handler for an arbitrary SQLEXCEPTION you can also define it for a specific error code
number by using the keyword SQL_ERROR_CODE followed by an SQL error code number.
For example if only the “division-by-zero” error should be handled the exception handler looks as follows:
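Such a handler could be sketched as:

```sql
-- handle only the division-by-zero error (SQL error code 304)
DECLARE EXIT HANDLER FOR SQL_ERROR_CODE 304
  SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM DUMMY;
```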
Please note that only the SQL (code strings starting with ERR_SQL_*) and SQLScript (code strings starting
with ERR_SQLSCRIPT_*) error codes are supported in the exit handler. You can use the system view
M_ERROR_CODES to get more information about the error codes.
Instead of using an error code the exit handler can be also defined for a condition.
How a condition is declared will be explained in section DECLARE CONDITION [page 106].
END;
tab = SELECT 1/:in_var as I FROM dummy;
Note
Note that in the example above, in case of an unhandled exception, the transaction is rolled back; thus the new
row in the LOG_TABLE table is gone as well. To avoid this, you can use an autonomous transaction. You will
find more information in Autonomous Transaction [page 97].
Declaring a CONDITION variable allows you to name SQL error codes or even to define a user-defined
condition.
These variables can be used in EXIT HANDLER declarations as well as in SIGNAL and RESIGNAL statements,
although SIGNAL and RESIGNAL allow only user-defined conditions.
Using condition variables for SQL error codes makes the procedure/function code more readable. For
example instead of using the SQL error code 304, which signals a division by zero error, you can declare a
meaningful condition for it:
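For example:

```sql
-- give the SQL error code 304 a readable name and handle it by that name
DECLARE division_by_zero CONDITION FOR SQL_ERROR_CODE 304;
DECLARE EXIT HANDLER FOR division_by_zero
  SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM DUMMY;
```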
Besides declaring a condition for an already existing SQL error code, you can also declare a user-defined
condition. Either define it with or without a user-defined error code.
Optionally, you can also associate a user-defined error code, e.g. 10000:
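For example, the two variants of a user-defined condition look like this:

```sql
DECLARE invalid_input CONDITION;                           -- without an error code
-- alternatively, with the user-defined error code 10000:
DECLARE invalid_input CONDITION FOR SQL_ERROR_CODE 10000;
```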
Note
Please note the user-defined error codes must be within the range of 10000 to 19999.
How to signal and/or resignal a user-defined condition will be handled in the section SIGNAL and RESIGNAL
[page 107].
The SIGNAL statement is used to explicitly raise a user-defined exception from within your procedure or
function.
The error value returned by the SIGNAL statement is either an SQL_ERROR_CODE or a user_defined_condition
that was previously defined with DECLARE CONDITION [page 106]. The used error code must be within the
user-defined range of 10000 to 19999.
Raising a user-defined condition, e.g. the invalid_input condition declared in the previous section (see
DECLARE CONDITION [page 106]), looks like this:
SIGNAL invalid_input;
However, none of these user-defined exceptions has an error message text. That means the value of the
system variable ::SQL_ERROR_MESSAGE is empty, whereas the value of ::SQL_ERROR_CODE is 10000.
In both cases you are receiving the following information in case the user-defined exception was thrown:
To set a corresponding error message you have to use SET MESSAGE_TEXT, e.g.:
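For example:

```sql
SIGNAL invalid_input SET MESSAGE_TEXT = 'Invalid input arguments';
```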
[10000]: user-defined error: "SYSTEM"."MY": line 4 col 2 (at pos 96): [10000]
(range 3) user-defined error exception: Invalid input arguments
In the next example, the procedure signals an error in case the input argument start_date is greater than the
input argument end_date:
END;
In case of calling the procedures with invalid input arguments you receive the following error message:
How to handle the exception and continue with procedure execution will be explained in section DECLARE
EXIT HANDLER FOR A NESTED BLOCK.
The RESIGNAL statement is used to pass on the exception that is handled in the exit handler.
Besides passing on the original exception by simply using RESIGNAL, you can also change some of its
information before passing it on. Please note that the RESIGNAL statement can only be used in the exit handler.
Using RESIGNAL statement without changing the related information of an exception is done as follows:
CREATE PROCEDURE MYPROC (IN in_var INTEGER, OUT outtab TABLE(I INTEGER) ) AS
BEGIN
DECLARE EXIT HANDLER FOR SQLEXCEPTION
RESIGNAL;
In case of <in_var> = 0 the raised error would be the original SQL error code and message text.
The original SQL error message will be now replaced by the new one:
[304]: division by zero undefined: [304] "SYSTEM"."MY": line 4 col 10 (at pos
131): [304] (range 3) division by zero undefined exception: for the input
parameter in_var = 0 exception was raised
You can get the original error message via the system variable ::SQL_ERROR_MESSAGE. This is useful, for
example, if you want to keep the original message but add additional information:
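A sketch of a handler that prepends information to the original message:

```sql
DECLARE EXIT HANDLER FOR SQLEXCEPTION
  RESIGNAL SET MESSAGE_TEXT = 'Additional info. Original message: ' || ::SQL_ERROR_MESSAGE;
```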
General exceptions can be handled with an exception handler declared before the statements that explicitly or
implicitly signal an exception.
An exception handler can also be declared to catch exceptions with specific error code numbers.
Exceptions can be declared using a CONDITION variable. The CONDITION can optionally be specified with an
error code number.
Signal an exception
The SIGNAL statement can be used to explicitly raise an exception from within your procedures.
Resignal an exception
The RESIGNAL statement raises an exception from the action statement in an exception handler. If no error
code is specified, RESIGNAL throws the caught exception.
An array is an indexed collection of elements of a single data type. In the following section we explore the
varying ways to define and use arrays in SQLScript.
You can declare an array <variable_name> with the element type <sql_type>. The following SQL types are
supported:
<sql_type> ::=
DATE | TIME| TIMESTAMP | SECONDDATE | TINYINT | SMALLINT | INTEGER | BIGINT |
DECIMAL | SMALLDECIMAL | REAL | DOUBLE | VARCHAR | NVARCHAR | ALPHANUM |
VARBINARY | CLOB | NCLOB |BLOB
Note that only unbounded arrays are supported with a maximum cardinality of 2^31. You cannot define a
static size for an array.
You can use the array constructor to directly assign a set of values to the array.
The array constructor returns an array containing elements specified in the list of value expressions. The
following example illustrates an array constructor that contains the numbers 1, 2 and 3:
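For example:

```sql
DECLARE id INTEGER ARRAY = ARRAY(1, 2, 3);
```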
Besides using scalar constants you can also use scalar variables or parameters instead, as shown in the next
example.
The <array_index> indicates the index of the element in the array to be modified, where <array_index>
can have any value from 1 to 2^31. For example, the following statement stores the value 10 in the second
element of the array id:
id[2] = 10;
Please note that all unset elements of the array are NULL. In the given example id[1] is then NULL.
Instead of using a constant scalar value it is also possible to use a scalar variable of type INTEGER as
<array_index>. In the next example, variable I of type INTEGER is used as an index.
DECLARE i INT;
DECLARE arr NVARCHAR(15) ARRAY;
FOR i IN 1..10 DO
  arr[:i] = 'ARRAY_INDEX ' || :i;
END FOR;
SQL expressions and scalar user-defined functions (scalar UDFs) that return a number can also be used as an
index, for example a scalar UDF that adds two values and returns the result.
Note
The array starts with the index 1.
The value of an array element can be accessed with the index <array_index>, where <array_index> can
be any value from 1 to 2^31. The syntax is:
For example, the following copies the value of the second element of array arr to variable var. Since the array
elements are of type NVARCHAR(15) the variable var has to have the same type:
Please note that you have to use ':' before the array variable if you read from the variable.
Instead of assigning the array element to a scalar variable it is possible to directly use the array element in the
SQL expression as well. For example, using the value of an array element as an index for another array.
DO
BEGIN
DECLARE arr TINYINT ARRAY = ARRAY(1,2,3);
DECLARE index_array INTEGER ARRAY = ARRAY(1,2);
DECLARE value TINYINT;
arr[:index_array[1]] = :arr[:index_array[2]];
value = :arr[:index_array[1]];
select :value from dummy;
END;
9.10.4 UNNEST
The UNNEST function converts one or more arrays into a table. The result table includes a row for each
element of the specified array. The result of the UNNEST function needs to be assigned to a table variable. The
syntax is:
For example, the following statements convert the array id of type INTEGER and the array name of type
VARCHAR(10) into a table and assign it to the tabular output parameter rst:
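A sketch of such a conversion; the array names are assumptions, and the shorter array is padded with NULL (shown as ? in the output below):

```sql
DECLARE arr_id   INTEGER ARRAY     = ARRAY(1, 2);
DECLARE arr_name VARCHAR(10) ARRAY = ARRAY('name1', 'name2', 'name3');
rst = UNNEST(:arr_id, :arr_name);
```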
:ARR_ID :ARR_NAME
-------------------
1 name1
2 name2
? name3
Furthermore, the returned columns of the table can also be explicitly named by using the AS clause. In the
following example, the column names for :ARR_ID and :ARR_NAME are changed to ID and NAME.
ID NAME
-------------------
1 name1
2 name2
? name3
As an additional option an ordinal column can be specified by using the WITH ORDINALITY clause.
The ordinal column will then be appended to the returned table. An alias for the ordinal column needs to be
explicitly specified. The next example illustrates the usage. SEQ is used as an alias for the ordinal column:
AMOUNT SEQ
----------------
10 1
20 2
Note
The UNNEST function cannot be referenced directly in a FROM clause of a SELECT statement.
9.10.5 ARRAY_AGG
The element type of the array needs to be the same as the type of the column.
Optionally, the ORDER BY clause can be used to determine the order of the elements in the array. If it is not
specified, the array elements are ordered non-deterministically. In the following example all elements of array id
are sorted descending by column B.
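A sketch of such an aggregation; the table variable tab with columns A and B is an assumption:

```sql
-- collect column A into the array id, sorted descending by column B
id = ARRAY_AGG(:tab.A ORDER BY B DESC);
```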
Additionally it is also possible to define where NULL values should appear in the result set. By default NULL
values are returned first for ascending ordering, and last for descending ordering. You can override this
behavior using NULLS FIRST or NULLS LAST to explicitly specify NULL value ordering. The next example
shows how the default behavior for the descending ordering can be overwritten by using NULLS FIRST:
Note
ARRAY_AGG function does not support using value expressions instead of table variables.
9.10.6 TRIM_ARRAY
The TRIM_ARRAY function removes elements from the end of an array. TRIM_ARRAY returns a new array with
a <trim_quantity> number of elements removed from the end of the array <array_variable>.
TRIM_ARRAY(:<array_variable>, <trim_quantity>)
<array_variable> ::= <identifier>
<trim_quantity> ::= <unsigned_integer>
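For example:

```sql
DECLARE array_id INTEGER ARRAY = ARRAY(1, 2, 3);
array_id = TRIM_ARRAY(:array_id, 1);  -- removes the last element; 1 and 2 remain
```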
ID
---
1
2
9.10.7 CARDINALITY
The CARDINALITY function returns the highest index of a set element in the array <array_variable>. It
returns N (>= 0) if the index of the N-th element is the largest among the indices.
CARDINALITY(:<array_variable>)
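For example, for an empty array:

```sql
DECLARE arr INTEGER ARRAY;  -- no element set
DECLARE n INT;
n = CARDINALITY(:arr);
```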
The result is n=0 because there is no element in the array. In the next example the cardinality is 20, as the
20th element is set. This implicitly sets the elements 1-19 to NULL:
END;
The CARDINALITY function can also be used directly everywhere expressions are supported, for
example in a condition:
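Both usages can be sketched as follows (the array content is illustrative):

```sql
DO
BEGIN
  DECLARE arr INT ARRAY;
  DECLARE n INT;
  arr[20] = 7;              -- elements 1-19 become NULL implicitly
  n = CARDINALITY(:arr);    -- 20: the highest set index
  -- CARDINALITY used directly in a condition
  IF CARDINALITY(:arr) > 10 THEN
    SELECT :n AS card FROM DUMMY;
  END IF;
END;
```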
The CONCAT function concatenates two arrays. It returns the new array that contains a concatenation of
<array_variable_left> and <array_variable_right>. Both || and the CONCAT function can be used
for concatenation:
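A minimal sketch showing both concatenation variants (the array contents are illustrative):

```sql
DO
BEGIN
  DECLARE a INT ARRAY = ARRAY(1, 2);
  DECLARE b INT ARRAY = ARRAY(3, 4);
  DECLARE c INT ARRAY;
  c = CONCAT(:a, :b);   -- yields the elements (1, 2, 3, 4)
  c = :a || :b;         -- the || operator is equivalent
END;
```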
Index-based cell access allows random access (read and write) to each cell of a table variable.
<table_variable>.<column_name>[<index>]
Writing to a certain cell of a table variable is illustrated in the following example. Here we simply
change the value in the second row of column A.
Reading from a certain cell of a table variable is done in a similar way. Note that for read access, the ‘:’ is
needed in front of the table variable.
The same rules apply for <index> as for the array index. That means that <index> can have any value
from 1 to 2^31 and that SQL expressions and scalar user-defined functions (scalar UDFs) that return a number
can also be used as an index. Instead of using a constant scalar value, it is also possible to use a scalar
variable of type INTEGER as <index>.
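Read and write access can be combined as in this sketch (the table variable itab and its column A are illustrative):

```sql
DO
BEGIN
  DECLARE v INT;
  itab = SELECT 1 AS A FROM DUMMY UNION ALL SELECT 2 AS A FROM DUMMY;
  itab.A[2] = 5;      -- write: no ':' prefix on the left-hand side
  v = :itab.A[2];     -- read: the ':' prefix is required
  SELECT :v AS val FROM DUMMY;
END;
```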
Restrictions:
To determine whether a table or table variable is empty you can use the predicate IS_EMPTY:
You can use IS_EMPTY in conditions such as IF statements or WHILE loops. For instance, in the next example
IS_EMPTY is used in an IF statement:
Besides that, you can also use it in scalar variable assignments. Note, however, that since SQLScript does not support
BOOLEAN as a scalar type, you need to assign the result to a variable of type INTEGER, if needed.
That means true will be converted to 1 and false will be converted to 0.
Note
IS_EMPTY cannot be used in SQL queries or in expressions.
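Both usages can be sketched as follows (the table variable content is illustrative):

```sql
DO
BEGIN
  DECLARE flag INT;
  itab = SELECT * FROM DUMMY WHERE 1 = 0;  -- an empty table variable
  IF IS_EMPTY(:itab) THEN
    SELECT 'empty' AS result FROM DUMMY;
  END IF;
  flag = IS_EMPTY(:itab);                  -- true -> 1, false -> 0
END;
```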
To get the number of records of a table or a table variable, you can use the operator RECORD_COUNT:
RECORD_COUNT takes <table_name> or <table_variable> as its argument and returns the number of
records as a value of type BIGINT.
You can use RECORD_COUNT in all places where expressions are supported, such as IF statements, loops, or
scalar assignments. In the following example it is used in a loop:
END FOR;
END
Note
RECORD_COUNT cannot be used in queries.
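A loop driven by RECORD_COUNT might look like this sketch (the table variable itab and its INT column A are illustrative):

```sql
DO
BEGIN
  DECLARE total INT = 0;
  itab = SELECT 1 AS A FROM DUMMY UNION ALL SELECT 2 AS A FROM DUMMY;
  FOR i IN 1..RECORD_COUNT(:itab) DO
    total = :total + :itab.A[:i];   -- index-based read access per row
  END FOR;
  SELECT :total AS total FROM DUMMY;
END;
```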
Besides the index-based table cell assignment, SQLScript offers additional operations that directly modify the
content of a table variable, without having to assign a statement result to a new table variable. This,
together with not involving the SQL layer, leads to a performance improvement. On the other hand, these
operations require data materialization, in contrast to declarative logic.
For all position expressions, the valid values are in the interval from 1 to 2^31-1.
You can insert a new data record at a specific position in a table variable with the following syntax:
Sample Code
IF IS_EMPTY(:IT) THEN
RETURN;
END IF;
If you omit the position, the data record will be appended at the end.
Note
The values for the omitted columns are initialized with NULL values.
You can insert the content of one table variable into another table variable with one single operation without
using SQL.
Code Syntax
:<target_table_var>[.(<column_list>)].INSERT(:<source_table_var>[, <position>])
If no position is specified, the values are appended at the end. Positions start from 1; NULL and all
values smaller than 1 are invalid. If no column list is specified, all columns of the table are insertion targets.
:tab_a.insert(:tab_b);
:tab_a.(col1, COL2).insert(:tab_b);
:tab_a.INSERT(:tab_b, 5);
:tab_a.("a","b").insert(:tab_b, :index_to_insert);
Which column of the source table is inserted into which column of the target table is determined
by column position. The source table must have the same number of columns as the target
table, or as the number of columns in the column list.
If SOURCE_TAB has columns (X, A, B, C) and TARGET_TAB has columns (A, B, C, D),
then :target_tab.insert(:source_tab) will insert X into A, A into B, B into C and C into D.
If another order is desired, the column sequence has to be specified in the column list for the TARGET_TAB. For
example, :TARGET_TAB.(D, A, B, C).insert(:SOURCE_TAB) will insert X into D, A into A, B into B and C
into C.
The types of the columns have to match; otherwise it is not possible to insert data into the column. For
example, a column of type DECIMAL cannot be inserted into an INTEGER column, and vice versa.
Sample Code
Iterative Result Build
CALL P(?)
K V
--------
C 3.890
B 2.045
B 2.067
A 1.123
You can modify a data record at a specific position. There are two equivalent syntax options.
Note
The index must be specified.
Note
The values for the omitted columns remain unchanged.
Sample Code
Note
You can also set values at a position outside the original table size. Just like with INSERT, the records
between the original last record and the newly inserted records are initialized with NULL values.
:<table_variable>.DELETE(<index>)
Sample Code
:<table_variable>.DELETE(<from_index>..<to_index>)
If the starting index is greater than the table size, no operation is performed. If the end index is smaller than the
starting index, an error occurs. If the end index is greater than the table size, all records from the starting index
to the end of the table are deleted.
:<table_variable>.DELETE(<array_of_integers>)
The provided array expression contains indexes pointing to records which shall be deleted from the table
variable. If the array contains an invalid index (for example, zero), an error occurs.
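The three DELETE variants can be sketched together as follows (the base table mytab is assumed to exist):

```sql
DO
BEGIN
  DECLARE idx INT ARRAY = ARRAY(1, 3);
  itab = SELECT * FROM mytab;
  :itab.DELETE(2);       -- single record at position 2
  :itab.DELETE(5..7);    -- range of records; a no-op if 5 exceeds the table size
  :itab.DELETE(:idx);    -- records at the positions contained in the array
END;
```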
Sample Code
If your SQLScript procedure needs to execute dynamic SQL statements in which parts are derived
from untrusted input (for example, a user interface), there is a danger of an SQL injection attack. The following functions
can be used to prevent it:
Example:
The following values of input parameters can manipulate the dynamic SQL statement in an unintended way:
This cannot happen if you validate and/or process the input values:
Syntax IS_SQL_INJECTION_SAFE
IS_SQL_INJECTION_SAFE(<value>[, <max_tokens>])
Syntax Elements
String to be checked.
Description
Checks for possible SQL injection in a parameter which is to be used as a SQL identifier. Returns 1 if no
possible SQL injection is found, otherwise 0.
Example
The following code example shows that the function returns 0 if the number of tokens in the argument
differs from the expected number of tokens (by default, a single token).
safe
-------
0
The following code example shows that the function returns 1 if the number of tokens in the argument matches
the expected number of 3 tokens.
safe
-------
1
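The two checks above could be reproduced with queries along these lines (the argument strings are illustrative):

```sql
-- Two tokens, but a single token is expected by default: returns 0
SELECT IS_SQL_INJECTION_SAFE('tab_name CASCADE') AS safe FROM DUMMY;

-- Three tokens, and three tokens are expected explicitly: returns 1
SELECT IS_SQL_INJECTION_SAFE('AB CD EF', 3) AS safe FROM DUMMY;
```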
ESCAPE_SINGLE_QUOTES(<value>)
Description
Escapes single quotes (apostrophes) in the given string <value>, ensuring a valid SQL string literal is used in
dynamic SQL statements to prevent SQL injections. Returns the input string with escaped single quotes.
Example
The following code example shows how the function escapes a single quote. The one single quote is escaped
with another single quote when passed to the function. The function then escapes the parameter content
Str'ing to Str''ing, which is returned from the SELECT.
string_literal
---------------
Str''ing
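A sketch reproducing the output above; note that the single quote inside the literal must itself be doubled when written in SQLScript source:

```sql
DO
BEGIN
  DECLARE v NVARCHAR(20) = 'Str''ing';   -- the value contains one single quote: Str'ing
  -- ESCAPE_SINGLE_QUOTES doubles it, yielding Str''ing as a safe SQL literal
  SELECT ESCAPE_SINGLE_QUOTES(:v) AS string_literal FROM DUMMY;
END;
```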
Syntax ESCAPE_DOUBLE_QUOTES
ESCAPE_DOUBLE_QUOTES(<value>)
Description
Escapes double quotes in the given string <value>, ensuring a valid SQL identifier is used in dynamic SQL
statements to prevent SQL injections. Returns the input string with escaped double quotes.
Example
The following code example shows that the function escapes the double quotes.
table_name
--------------
So far, implicit parallelization has been applied to table variable assignments as well as read-only procedure
calls that are independent from each other. DML statements and read-write procedure calls had to be
executed sequentially. From now on, it is possible to parallelize the execution of independent DML statements
and read-write procedure calls by using parallel execution blocks:
For example, in the following procedure several UPDATE statements on different tables are parallelized:
Note
Only DML statements on column store tables are supported within the parallel execution block.
In the next example several records from a table variable are inserted into different tables in parallel.
Sample Code
Sample Code
call cproc;
Note
Only the following statements are allowed in read-write procedures, which can be called within a parallel
block:
● DML
● Imperative logic
● Autonomous transaction
● Implicit SELECT and SELECT INTO scalar variable
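Subject to these restrictions, a parallel execution block can be sketched as follows (the table names are illustrative column store tables):

```sql
CREATE PROCEDURE parallel_update AS
BEGIN
  -- The UPDATE statements are independent of each other and are
  -- executed in parallel inside the block.
  BEGIN PARALLEL EXECUTION
    UPDATE tab_a SET num = num + 1;
    UPDATE tab_b SET num = num + 1;
  END;
END;
```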
Recommendation
SAP recommends that you use SQL rather than Calculation Engine Plan Operators with SQLScript.
The execution of Calculation Engine Plan Operators is currently bound to processing within the calculation
engine and does not allow the use of alternative execution engines, such as L native execution. As
most Calculation Engine Plan Operators are converted internally and treated as SQL operations, the
conversion requires multiple layers of optimizations. This can be avoided by using SQL directly. Depending on
your system configuration and the version you use, mixing Calculation Engine Plan Operators and SQL can
lead to significant performance penalties compared to a plain SQL implementation.
Calculation engine plan operators encapsulate data-transformation functions and can be used in the definition
of a procedure or a table user-defined function. They constitute a no longer recommended alternative to using
SQL statements. Their logic is directly implemented in the calculation engine, which is the execution
environment of SQLScript.
● Data Source Access operators that bind a column table or a column view to a table variable.
● Relational operators that allow a user to bypass the SQL processor during evaluation and to directly
interact with the calculation engine.
● Special extensions that implement functions.
The data source access operators bind the column table or column view of a data source to a table variable for
reference by other built-in operators or statements in a SQLScript procedure.
10.1.1 CE_COLUMN_TABLE
Syntax:
CE_COLUMN_TABLE(<table_name> [<attributes>])
Syntax Elements:
Description:
The CE_COLUMN_TABLE operator provides access to an existing column table. It takes the name of the table
and returns its content bound to a variable. Optionally a list of attribute names can be provided to restrict the
output to the given attributes.
Note that many of the calculation engine operators provide a projection list for restricting the attributes
returned in the output. In the case of relational operators, the attributes may be renamed in the projection list.
The functions that provide data source access provide no renaming of attributes but just a simple projection.
Note
Calculation engine plan operators that reference identifiers must be enclosed with double-quotes and
capitalized, ensuring that the identifier's name is consistent with its internal representation.
If the identifiers have been declared without double-quotes in the CREATE TABLE statement (which is the
normal method), they are internally converted to upper-case letters. Identifiers in calculation engine plan
operators must match the internal representation, that is they must be upper case as well.
In contrast, if identifiers have been declared with double-quotes in the CREATE TABLE statement, they are
stored in a case-sensitive manner. Again, the identifiers in operators must match the internal
representation.
10.1.2 CE_JOIN_VIEW
Syntax:
CE_JOIN_VIEW(<column_view_name>[, <attributes>...])
Syntax elements:
Specifies the name of the required columns from the column view.
The CE_JOIN_VIEW operator returns results for an existing join view (also known as Attribute View). It takes
the name of the join view and an optional list of attributes as parameters of such views/models.
10.1.3 CE_OLAP_VIEW
Syntax:
CE_OLAP_VIEW(<olap_view_name>, '['<attributes>']')
Syntax elements:
Note
Note you must have at least one <aggregation_exp> in the attributes.
● count("column")
● sum("column")
● min("column")
● max("column")
● use sum("column") / count("column") to compute the average
The CE_OLAP_VIEW operator returns results for an existing OLAP view (also known as an Analytical View). It
takes the name of the OLAP view and an optional list of key figures and dimensions as parameters. The OLAP
cube that is described by the OLAP view is grouped by the given dimensions and the key figures are
aggregated using the default aggregation of the OLAP view.
10.1.4 CE_CALC_VIEW
Syntax:
CE_CALC_VIEW(<calc_view_name>, [<attributes>])
Syntax elements:
Specifies the name of the required attributes from the calculation view.
Description:
The CE_CALC_VIEW operator returns results for an existing calculation view. It takes the name of the
calculation view and optionally a projection list of attribute names to restrict the output to the given attributes.
The calculation engine plan operators presented in this section provide the functionality of relational operators
that are directly executed in the calculation engine. This allows exploitation of the specific semantics of the
calculation engine and to tune the code of a procedure if required.
10.2.1 CE_JOIN
Syntax:
Syntax elements:
Specifies a list of join attributes. Since CE_JOIN requires equal attribute names, one attribute name per pair of
join attributes is sufficient. The list must at least have one element.
Specifies a projection list for the attributes that should be in the resulting table.
Note
If the optional projection list is present, it must at least contain the join attributes.
Description:
The CE_JOIN operator calculates a natural (inner) join of the given pair of tables on a list of join attributes. For
each pair of join attributes, only one attribute will be in the result. Optionally, a projection list of attribute
names can be given to restrict the output to the given attributes. Finally, the plan operator requires each pair
of join attributes to have identical attribute names. In case of join attributes having different names, one of
them must be renamed prior to the join.
10.2.2 CE_LEFT_OUTER_JOIN
Calculate the left outer join. Besides the function name, the syntax is the same as for CE_JOIN.
10.2.3 CE_RIGHT_OUTER_JOIN
Calculate the right outer join. Besides the function name, the syntax is the same as for CE_JOIN.
Note
CE_FULL_OUTER_JOIN is not supported.
Syntax:
Syntax elements:
Specifies a list of attributes that should be in the resulting table. The list must at least have one element. The
attributes can be renamed using the SQL keyword AS, and expressions can be evaluated using the CE_CALC
function.
Specifies an optional filter where Boolean expressions are allowed. See CE_CALC [page 142] for the filter
expression syntax.
Description:
Restricts the columns of the table variable <var_table> to those mentioned in the projection list. Optionally,
you can also rename columns, compute expressions, or apply a filter.
With this operator, the <projection_list> is applied first, including column renaming and computation of
expressions. As last step, the filter is applied.
Caution
Be aware that <filter> in CE_PROJECTION can be vulnerable to SQL injection because it behaves like
dynamic SQL. Avoid use cases where the value of <filter> is passed as an argument from outside of the
procedure by the user himself or herself, for example:
create procedure proc (in filter nvarchar (20), out output ttype)
begin
  tablevar = CE_COLUMN_TABLE(TABLE);
  output = CE_PROJECTION(:tablevar,
    ["A", "B"], '"B" = :filter');
end;
This enables the user to pass any expression and to query more than was intended, for example: '02 OR B =
01'.
Syntax:
Syntax elements:
Specifies the expression to be evaluated. Expressions are analyzed using the following grammar:
Terminals in the grammar are enclosed in single quotes, for example 'token'. Identifiers (denoted with id in the
grammar) are like SQL identifiers, with the exception that unquoted identifiers are converted into lower case. Numeric
constants are basically written in the same way as in the C programming language, and string constants are
enclosed in single quotes, for example, 'a string'. Inside a string, single quotes are escaped by another single
quote.
An example expression valid in this grammar is: "col1" < ("col2" + "col3"). For a full list of expression
functions, see the following table.
Description:
CE_CALC is used inside other relational operators. It evaluates an expression and is usually then bound to a
new column. An important use case is evaluating expressions in the CE_PROJECTION operator. The CE_CALC
function takes two arguments:
● midstr(string, int, int) — returns a part of the string, starting at arg2 and arg3 bytes long. arg2 is counted from 1 (not 0). (2)
● leftstr(string, int) — returns arg2 bytes from the left of the string arg1. If arg1 is shorter than the value of arg2, the complete string will be returned. (1)
● rightstr(string, int) — returns arg2 bytes from the right of the string arg1. If arg1 is shorter than the value of arg2, the complete string will be returned. (1)
● int instr(string, string) — returns the position of the first occurrence of the second string within the first string (>= 1), or 0 if the second string is not contained in the first. (1)
● trim(s) = ltrim(rtrim(s))
● trim(s1, s2) = ltrim(rtrim(s1, s2),
s2)
Mathematical Functions: The math functions described here generally operate on floating point values;
their inputs are automatically converted to double, and the output is also a double.
These functions have the same functionality as in the C programming language:
● double log(double)
● double exp(double)
● double log10(double)
● double sin(double)
● double cos(double)
● double tan(double)
● double asin(double)
● double acos(double)
● double atan(double)
● double sinh(double)
● double cosh(double)
● double floor(double)
● double ceil(double)
Further Functions
1 Due to calendar variations with dates earlier than 1582, the use of the date data type is deprecated; you
should use the daydate data type instead.
Note
date is based on the proleptic Gregorian calendar. daydate is based on the Gregorian calendar which is
also the calendar used by SAP HANA SQL.
2 These Calculation Engine string functions operate using single byte characters. To use these functions with
multi-byte character strings please see section: Using String Functions with Multi-byte Character Encoding
below. Note, this limitation does not exist for the SQL functions of the SAP HANA database which support
Unicode encoded strings natively.
To allow the use of the string functions of the Calculation Engine with multi-byte character encoding, you can use
the charpos and chars functions (see the table above for the syntax of these commands). An example of this usage
for the single-byte character function midstr follows below:
10.2.6 CE_AGGREGATION
Syntax:
Syntax elements:
Specifies a list of aggregates. For example, [SUM ("A"), MAX("B")] specifies that in the result, column "A"
has to be aggregated using the SQL aggregate SUM and for column B, the maximum value should be given.
● count("column")
● sum("column")
● min("column")
● max("column")
● use sum("column") / count("column") to compute the average
Specifies an optional list of group-by attributes. For instance, ["C"] specifies that the output should be
grouped by column C. Note that the resulting schema has a column named C in which every attribute value
from the input table appears exactly once. If this list is absent the entire input table will be treated as a single
group, and the aggregate function is applied to all tuples of the table.
Specifies the name of the column attribute for the results to be grouped by.
Note
CE_AGGREGATION implicitly defines a projection: All columns that are not in the list of aggregates, or in the
group-by list, are not part of the result.
Description:
● For the aggregates, the default is the name of the attribute that is aggregated.
● For instance, in the example above ([SUM("A"),MAX("B")]), the first column is called A and the second
is B.
● The attributes can be renamed if the default is not appropriate.
● For the group-by attributes, the attribute names are unchanged. They cannot be renamed using
CE_AGGREGATION.
Note
Note that count(*) can be achieved by doing an aggregation on any integer column; if no group-by
attributes are provided, this counts all non-null values.
10.2.7 CE_UNION_ALL
Syntax:
Syntax elements:
Description:
The CE_UNION_ALL function is semantically equivalent to SQL UNION ALL statement. It computes the union
of two tables which need to have identical schemas. The CE_UNION_ALL function preserves duplicates, so the
result is a table which contains all the rows from both input tables.
Syntax:
Syntax elements:
Specifies a list of attributes that should be in the resulting table. The list must at least have one element. The
attributes can be renamed using the SQL keyword AS.
Description:
For each input table variable the specified columns are concatenated. Optionally columns can be renamed. All
input tables must have the same cardinality.
Caution
The vertical union is sensitive to the order of its input. SQL statements and many calculation engine plan
operators may reorder their input or return their result in different orders across starts. This can lead to
unexpected results.
10.3.2 CE_CONVERSION
Syntax:
Syntax elements:
Specifies the parameters for the conversion. The CE_CONVERSION operator is highly configurable via a list of
key-value pairs. For the exact conversion parameters permissible, see the Conversion parameters table.
Description:
Applies a unit conversion to input table <var_table> and returns the converted values. Result columns can
optionally be renamed. The following syntax depicts valid combinations. Supported keys with their allowed
domain of values are:
Table 28: (Key / Values / Type / Mandatory / Default / Description)
● 'source_unit_column' — column in input table; column name; N; None. The name of the column containing the source unit in the input table.
● 'target_unit_column' — column in input table; column name; N; None. The name of the column containing the target unit in the input table.
● 'reference_date_column' — column in input table; column name; N; None. The default reference date for any kind of conversion.
10.3.3 TRACE
Syntax:
TRACE(<var_input>)
Syntax elements:
The TRACE operator is used to debug SQLScript procedures. It traces the tabular data passed as its argument
into a local temporary table and returns its input unmodified. The names of the temporary tables can be
retrieved from the SYS.SQLSCRIPT_TRACE monitoring view. See SQLSCRIPT_TRACE below.
Example:
out = TRACE(:input);
Note
This operator should not be used in production code as it will cause significant runtime overhead.
Additionally, the naming conventions used to store the tracing information may change. Thus, this operator
should only be used during development for debugging purposes.
Related Information
SQLSCRIPT_TRACE
To eliminate the dependency of having a procedure or function that already exists when you want to create a
new procedure consuming it, you can use headers in their place.
When creating a procedure, all nested procedures that belong to that procedure must exist beforehand. If
procedure P1 calls P2 internally, then P2 must have been created earlier than P1. Otherwise, P1 creation fails
with the error message "P2 does not exist". With large application logic and no export or delivery unit available,
it can be difficult to determine the order in which the objects need to be created.
To avoid this kind of dependency problem, SAP introduces HEADERS. HEADERS allow you to create a minimum
set of metadata information that contains only the interface of the procedure or function.
AS HEADER ONLY
You create a header for a procedure by using the HEADER ONLY keyword, as in the following example:
With this statement you are creating a procedure <proc_name> with the given signature
<parameter_clause>. The procedure <proc_name> has no body definition and thus has no dependent base
objects. Container properties (for example, security mode, default_schema, and so on) cannot be defined
with the header definition. These are included in the body definition.
The following statement creates the procedure TEST_PROC with a scalar input INVAR and a tabular output
OUTTAB:
CREATE PROCEDURE TEST_PROC (IN INVAR NVARCHAR(10), OUT OUTTAB TABLE(no INT)) AS
HEADER ONLY
By checking the IS_HEADER_ONLY field in the system view PROCEDURES, you can verify that only the
header of the procedure is defined.
If you want to check for functions, then you need to look into the system view FUNCTIONS.
Once a header of a procedure or function is defined, other procedures or functions can refer to it in their
procedure body. Procedures containing these headers can be compiled as shown in the following example:
CREATE PROCEDURE OUTERPROC (OUT OUTTAB TABLE (NO INT)) LANGUAGE SQLSCRIPT
AS
BEGIN
DECLARE s INT;
s = 1;
CALL TEST_PROC (:s, outtab);
END;
To change this and to make a valid procedure or function from the header definition, you must replace the
header with the full container definition. Use the ALTER statement to replace the header definition of a
procedure, as follows:
For a function header, the task is similar, as shown in the following example:
For example, if you want to replace the header definition of TEST_PROC that was defined already, then the
ALTER statement might look as follows:
ALTER PROCEDURE TEST_PROC (IN INVAR NVARCHAR(10), OUT OUTTAB TABLE(no INT))
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER READS SQL DATA
AS
BEGIN
DECLARE tvar TABLE (no INT, name nvarchar(10));
tvar = SELECT * FROM TAB WHERE name = :invar;
outtab = SELECT no FROM :tvar;
END
You cannot change the signature with the ALTER statement. If the name of the procedure or the function or
the input and output variables do not match, you will receive an error.
Note
The ALTER PROCEDURE and the ALTER FUNCTION statements are supported only for a procedure or a
function respectively, that contain a header definition.
SQLScript supports the spatial data type ST_GEOMETRY and SQL spatial functions to access and manipulate
spatial data. In addition, SQLScript also supports object-style function calls, which are needed for some
SQL spatial functions.
The next example illustrates a small scenario of using spatial data type and function in SQLScript.
The function get_distance calculates the distance between the two given parameters <first> and
<second> of type ST_GEOMETRY by using the spatial function ST_DISTANCE.
Note the ‘:’ in front of the variable <first>. This is needed because you are reading from the variable.
The function get_distance itself is called by the procedure nested_call. The procedure returns the
distance and the text representation of the ST_GEOMETRY variable <first>.
Out(1) Out(2)
----------------------------------------------------------------------
8,602325267042627 POINT(7 48)
Note that the optional SRID (Spatial Reference Identifier) parameter in SQL spatial functions is mandatory if
the function is used within SQLScript. If you do not specify the SRID, you will receive an error, as demonstrated
with the function ST_GEOMFROMTEXT in the following example. Here SRID 0 is used, which specifies the default
spatial reference system.
DO
BEGIN
If you do not use the same SRID for the ST_GEOMETRY variables <line1> and <line2>, at the latest the UNNEST
will throw an error, because the values in one column are not allowed to have different SRIDs.
Besides this there is a consistency check for output table variables to ensure that all elements of a spatial
column have the same SRID.
Note that the following functions are currently not supported in SQLScript:
● ST_CLUSTERID
● ST_CLUSTERCENTEROID
● ST_CLUSTERENVELOPE
● ST_CLUSTERCONVEXHULL
● ST_AsSVG
The construction of objects with the NEW keyword is also not supported in SQLScript. Instead, you can use
ST_GEOMFROMTEXT(‘POINT(1 1)’, srid).
Refer to the "SAP HANA Spatial Reference", available from the SAP HANA platform page, for
detailed information about the SQL spatial functions and their usage.
System variables are built-in variables in SQLScript that provide you with information about the current
context.
13.1 ::CURRENT_OBJECT_NAME and ::CURRENT_OBJECT_SCHEMA
To identify the name of the currently running procedure or function, you can use the following two system
variables:
The result of that function is the name and the schema name of the function:
SCHEMA_NAME NAME
----------------------------------------
MY_SCHEMA RETURN_NAME
The next example shows that you can also pass the two system variables as arguments to a procedure or
function call.
Note
Note that in anonymous blocks the value of both system variables is NULL.
The two system variables will always return the schema name and the name of the procedure or function.
Creating a synonym on top of the procedure or function and calling it through the synonym will still return the
original name, as shown in the next example.
We create a synonym on the RETURN_NAME function from above and will query it with the synonym:
SCHEMA_NAME NAME
------------------------------------------------------
MY_SCHEMA RETURN_NAME
13.2 ::ROWCOUNT
The system variable ::ROWCOUNT stores the number of rows affected by the previously executed DML
statement. It does not accumulate over all previously executed DML statements.
The next example shows how you can use ::ROWCOUNT in a procedure. Consider the following
table T:
Now we want to update table T and want to return the number of updated rows:
UPDATED_ROWS
-------------------------
2
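The procedure producing this result could be sketched as follows (table T and an UPDATE affecting two rows are assumed):

```sql
CREATE PROCEDURE upd (OUT updated_rows INT) AS
BEGIN
  UPDATE t SET b = 'updated' WHERE a <= 2;  -- affects two rows in this scenario
  -- ::ROWCOUNT holds the row count of the previous DML statement only
  updated_rows = ::ROWCOUNT;
END;
```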
In the next example we change the procedure to contain two update statements, and at the end we again retrieve
the row count:
By calling the procedure you will see that the number of updated rows is now 1. That is because the last update
statement only updated one row.
UPDATED_ROWS
-------------------------
1
If you want the number of all updated rows, you have to retrieve the row count information after
each update statement and accumulate it:
Calling this procedure again, the number of updated rows is now 3:
UPDATED_ROWS
-------------------------
3
SQLScript procedures, functions and triggers can return the line number of the current statement
via ::CURRENT_LINE_NUMBER.
Syntax
::CURRENT_LINE_NUMBER
Example
Sample Code
Sample Code
Sample Code
1 do begin
2 declare a int = ::CURRENT_LINE_NUMBER;
3 select :a, ::CURRENT_LINE_NUMBER + 1 from dummy;
4 end;
5 -- Returns [2, 3 + 1]
In some scenarios, you may need to let certain processes wait for a while (for example, when executing
repetitive tasks). Implementing such waiting manually leads to "busy waiting", where the CPU performs
unnecessary work during the waiting time. To avoid this, SQLScript offers the built-in library
SYS.SQLSCRIPT_SYNC, containing the procedures SLEEP_SECONDS and WAKEUP_CONNECTION.
Procedure SLEEP_SECONDS
This procedure puts the current process on hold. It has one input parameter of type DOUBLE, which specifies
the waiting time in seconds. The maximum precision is one millisecond (0.001); however, the real waiting time
may be slightly longer (about 1-2 ms) than the given time.
Note
● If you pass 0 or NULL to SLEEP_SECONDS, SQLScript executor will do nothing (also no log will be
written).
● If you pass a negative number, you get an error.
Procedure WAKEUP_CONNECTION
This procedure resumes a waiting process. It has one input parameter of type INTEGER which specifies the ID
of a waiting connection. If this connection is waiting because the procedure SLEEP_SECONDS has been called,
the sleep is terminated and the process continues. If the given connection does not exist or is not waiting
because of SLEEP_SECONDS, an error is raised.
If the user calling WAKEUP_CONNECTION is not a session admin and is different from the user of the waiting
connection, an error is raised as well.
Note
● The waiting process is also terminated, if the session is canceled (with ALTER SYSTEM CANCEL
SESSION or ALTER SYSTEM DISCONNECT SESSION).
Limitations
The library cannot be used in functions (neither in scalar, nor in tabular ones) and in calculation views.
Examples
Sample Code
Monitor
Sample Code
Resume all sleeping processes
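As a minimal sketch of how the library can be used (the USING alias, the sleep duration, and the connection ID 12345 are illustrative assumptions, not values from this document):

```sql
-- Session A: pause the current process for 10 seconds
DO BEGIN
    USING SYS.SQLSCRIPT_SYNC AS SYNC;
    CALL SYNC:SLEEP_SECONDS(10.0);
END;

-- Session B: resume the waiting connection
-- (12345 is a placeholder; look up the real connection ID, e.g. in M_CONNECTIONS)
DO BEGIN
    USING SYS.SQLSCRIPT_SYNC AS SYNC;
    CALL SYNC:WAKEUP_CONNECTION(12345);
END;
```

As described above, the call in session B succeeds only if the caller is a session admin or the same user as the waiting connection.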
All scalar variables used in queries of procedures, functions, or anonymous blocks are represented either as
query parameters or as constant values during query compilation. The optimizer decides which option is
chosen.
Example
The following procedure uses two scalar variables (var1 and var2) in the WHERE-clause of a nested query.
Sample Code
CREATE PROCEDURE PROC (IN var1 INT, IN var2 INT, OUT tab mytab)
AS
BEGIN
tab = SELECT * FROM MYTAB WHERE MYCOL >:var1
OR MYCOL =:var2;
END;
Sample Code
Calling the procedure will prepare the nested query of the table variable tab by using query parameters for the scalar parameters:
Sample Code
Before the query is executed, the parameter values will be bound to the query parameters.
Calling the procedure without query parameters and using constant values directly
Sample Code
will lead to the following query string, which uses the parameter values directly:
The advantage of using query parameters is that the generated query plan cache entry can be reused even if the
values of the variables var1 and var2 change. A potential disadvantage is that you might not get the optimal
query plan, because optimizations based on the parameter values cannot be performed at compilation time.
Using constant values always leads to preparing a new query plan and therefore to different query plan cache
entries for the different parameter values. This comes along with additional time spent on query preparation
and potential cache-flooding effects in fast-changing parameter value scenarios.
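To make the difference concrete, the two calling styles can be sketched as follows (the CALL statements and the resulting query strings are illustrative):

```sql
-- Query parameters: one reusable plan cache entry for all value combinations
CALL PROC(?, ?, ?);
-- prepared nested query: SELECT * FROM MYTAB WHERE MYCOL > :var1 OR MYCOL = :var2

-- Constant values: a new plan cache entry per value combination
CALL PROC(1, 2, ?);
-- prepared nested query: SELECT * FROM MYTAB WHERE MYCOL > 1 OR MYCOL = 2
```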
To control the parameterization behavior of scalar parameters explicitly, you can use the functions
BIND_AS_PARAMETER and BIND_AS_VALUE. These functions override the decision of the optimizer and the
general configuration.
Syntax
Using BIND_AS_PARAMETER will always use a query parameter to represent a <scalar_variable> during query
preparation.
Using BIND_AS_VALUE will always use a value to represent a <scalar_variable> during query preparation.
The following example represents the same procedure from above but now using the functions
BIND_AS_PARAMETER and BIND_AS_VALUE instead of referring to the scalar parameters directly:
Sample Code
CREATE PROCEDURE PROC (IN var1 INT, IN var2 INT, OUT tab mytab)
AS
BEGIN
tab = SELECT * FROM MYTAB WHERE MYCOL > BIND_AS_PARAMETER(:var1)
OR MYCOL = BIND_AS_VALUE(:var2);
END;
Sample Code
The same query string will be prepared even if you call this procedure with constant values because the
functions override the optimizer's decisions.
16.1 M_ACTIVE_PROCEDURES
The view M_ACTIVE_PROCEDURES monitors all internally executed statements starting from a procedure
call. That also includes remotely executed statements.
Table 29:
Among other things, M_ACTIVE_PROCEDURES is helpful for analyzing long-running procedures and
determining their current status. You can run the following query from another session to find out more about
the status of a procedure, such as MY_SCHEMA.MY_PROC in the example:
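A sketch of such a monitoring query (the filter column names are assumptions; check the view's columns on your system):

```sql
-- Show all statements currently executed on behalf of MY_SCHEMA.MY_PROC
SELECT *
  FROM M_ACTIVE_PROCEDURES
 WHERE procedure_schema_name = 'MY_SCHEMA'
   AND procedure_name = 'MY_PROC';
```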
Table 30:
Level Description
To prevent flooding the memory with irrelevant data, the number of records is limited. If the record count
exceeds the given threshold, the first record is erased, independent of its status. The limit can be adjusted with
the INI parameter execution_monitoring_limit, for example execution_monitoring_limit = 100000.
Limitations:
With NUMBER_OF_CALLS_TO_RETAIN_AFTER_EXECUTION you can specify how many calls are retained after
execution, and RETENTION_PERIOD_FOR_SQLSCRIPT_CONTEXT defines how long (in seconds) the result should
be kept in M_ACTIVE_PROCEDURES. The interaction of these two parameters is as follows:
● Both parameters are set: M_ACTIVE_PROCEDURES keeps the specified number of records for the
specified amount of time.
● Only NUMBER_OF_CALLS_TO_RETAIN_AFTER_EXECUTION is set: M_ACTIVE_PROCEDURES keeps the
specified number of records for the default amount of time (3600 seconds).
● Only RETENTION_PERIOD_FOR_SQLSCRIPT_CONTEXT is set: M_ACTIVE_PROCEDURES keeps the
default number of records (100) for the specified amount of time.
● Neither parameter is set: no records are kept.
Note
All configuration parameters need to be defined under the sqlscript section.
The Query Export is an enhancement of the EXPORT statement. It allows exporting queries, that is, the database
objects used in a query, together with the query string and parameters. The query can either be standalone or
executed as part of a SQLScript procedure.
Prerequisites
To execute the query export as a developer, you need the EXPORT system privilege.
Procedure
Note
Currently the only format supported for the SQLScript query export is CSV. If you choose BINARY, you get a
warning message and the export is performed in CSV.
The server path where the export files are stored is specified as <path>.
For more information about <export_option_list>, see EXPORT in the SAP HANA SQL and System Views
Reference on the SAP Help Portal.
Apart from SELECT statements, you can export the following statement types as well:
With the <sqlscript_location_list> you can define in a comma-separated list several queries that you want to
export. For each query you have to specify the name of the procedure with <procedure_name> to indicate
where the query is located. <procedure_name> can be omitted if it is the same procedure as the procedure in
<procedure_call_statement>.
You also need to specify the line information, <line_number>, and the column information, <column_number>.
The line number must correspond to the first line of the statement. If the column number is omitted, all
statements (usually there is just one) on this line are exported. Otherwise the column must match the first
character of the statement.
The line and column information is usually contained in the comments of the queries generated by SQLScript
and can be taken over from there. For example, the monitoring view M_ACTIVE_PROCEDURES or the
statement statistic in PlanViz shows the executed queries together with the comment.
If you want to export both queries of the table variables tab and temp, then the <sqlscript_location> looks as follows:
and
For the query of table variable temp we also specified the column number because there are two table variable
assignments on one line and we only wanted to have the first query.
To export these queries, the export needs to execute the procedure call that triggers the execution of the
procedure containing the queries. Therefore the procedure call has to be specified as well by using
<procedure_call_statement>:
EXPORT ALL AS CSV INTO '/tmp' ON (proc_one LINE 15), (proc_two LINE 27 COLUMN 4)
FOR CALL PROC_ONE (...);
If you want to export a query that is executed multiple times, you can use <pass_number> to specify which
execution should be exported. If <pass_number> is omitted, only the first execution of the query is exported. If
you need to export multiple passes, but not all of them, you need to specify the same location multiple times
with the corresponding pass numbers.
Given the above example, we want to export the query on line 34 but only the snapshot of the 2nd and 30th
loop iteration. The export statement is then the following, considering that PROC_LOOP is a procedure call:
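A sketch of such an export statement, assuming the procedure arguments are filled in:

```sql
-- Export only the snapshots of the 2nd and 30th execution of the query on line 34
EXPORT ALL AS CSV INTO '/tmp'
    ON (myschema.proc_loop LINE 34 PASS 2),
       (myschema.proc_loop LINE 34 PASS 30)
    FOR CALL PROC_LOOP(...);
```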
If you want to export the snapshots of all iterations you need to use PASS ALL:
EXPORT ALL AS CSV INTO '/tmp' ON (myschema.proc_loop LINE 34 PASS ALL) FOR CALL
PROC_LOOP(...);
Overall, the SQLScript Query Export creates one subdirectory for each exported query under the given path
<path>, with the name pattern <schema_name>-<procedure_name>-<line_number>-<column_number>-<pass_number>:
|_ /tmp
   |_ MYSCHEMA-PROC_LOOP-34-10-2
      |_ Query.sql
      |_ index
      |_ export
   |_ MYSCHEMA-PROC_LOOP-34-10-30
      |_ Query.sql
      |_ index
      |_ export
The exported SQLScript query is stored in a file named Query.sql and all related base objects of that query are
stored in the directories index and export, as it is done for a typical catalog export.
You can import the exported objects, including temporary tables and their data, with the IMPORT statement.
For more information about IMPORT, see IMPORT in the SAP HANA SQL and System Views Reference on the
SAP Help Portal.
Note
Queries within a function are not supported and cannot be exported.
Note
Query export is not supported on distributed systems. Only single-node systems are supported.
The derived table type of a tabular variable should always match the declared type of the corresponding
variable, both in the type code and in the length or precision/scale information. This is particularly important for
signature variables, as they can be considered the contract a caller will follow. The derived type code is
implicitly converted if this conversion is possible without loss of information (see the SQL guide for further
details on which data type conversions are supported).
If the derived type (e.g. BIGINT) is larger than the expected type (e.g. INTEGER), this can lead to errors, as
shown in the following example.
The procedure PROC_TYPE_MISMATCH has a defined tabular output variable RESULT with a single column of
type VARCHAR with a length of 2. The derived type from the table variable assignment has a single column of
type VARCHAR with a length of 10.
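A minimal sketch of such a procedure (the literal and the use of DUMMY are illustrative):

```sql
CREATE PROCEDURE PROC_TYPE_MISMATCH (OUT result TABLE (a VARCHAR(2)))
AS
BEGIN
    -- the derived column type is VARCHAR(10), larger than the declared VARCHAR(2)
    result = SELECT CAST('AAAAAAAAAA' AS VARCHAR(10)) AS a FROM DUMMY;
END;
```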
Declared type "VARCHAR(2)" of attribute "A" not same as assigned type "VARCHAR(10)"
The configuration parameters have three different levels for revealing differences between expected and derived
types if the derived type is larger than the expected type:
Table 31:
warn (default behavior): prints a warning in case of type mismatch, for example: general warning: Declared
type "VARCHAR(2)" of attribute "A" not same as assigned type "VARCHAR(10)"
strict: raises an error in case of a potential type error, for example: return type mismatch: Declared type
"VARCHAR(2)" of attribute "A" not same as assigned type "VARCHAR(10)"
Note
Both configuration parameters need to be defined under the sqlscript section.
With the SQLScript debugger you can investigate functional issues. The debugger is available in the SAP
WebIDE for SAP HANA (WebIDE) and in ABAP in Eclipse (ADT Debugger). The following gives you an overview:
Table 32:
A conditional breakpoint can be used to stop the debugger at the breakpoint line only when certain
conditions are met. This is especially useful when a breakpoint is set within a loop.
Each breakpoint can have only one condition. The condition expression can contain any SQL function. A
condition must either contain an expression that results in true or false, or a single variable or complex
expression without restrictions on the return type.
When setting a conditional breakpoint, the debugger will check all conditions for potential syntax errors. It
checks for:
At execution time the debugger checks and evaluates the conditions of the conditional breakpoints with the
given variables and their values. If the value of a variable in a condition is not accessible and therefore the
condition cannot be evaluated, the debugger sends a warning and breaks at the breakpoint anyway.
Note
Conditional breakpoints are only supported for scalar variables.
For more information on SQL functions, see FUNCTION in the SAP HANA SQL and System Views Reference on
the SAP Help Portal.
16.4.2 Watchpoints
Watchpoints give you the possibility to watch the values of variables or complex expressions and break the
debugger if certain conditions are met.
For each watchpoint, an arbitrary number of conditions can be defined. The conditions can either contain an
expression that results in true or false, or a single variable or complex expression without restrictions on the
return type.
When setting a watchpoint, the debugger will check all conditions for potential syntax errors. It checks for:
At execution time the debugger checks and evaluates the conditions of the watchpoints with the given
variables and their values. A watchpoint is skipped if the value of a variable in a condition is not accessible.
But if the return type of the condition is wrong, the debugger sends a warning to the user and breaks at the
watchpoint anyway.
Note
If a variable value changes to NULL, the debugger will not break since it cannot evaluate the expression
anymore.
You can activate the Exception Mode to allow the debugger to break if an error occurs during the execution of a
procedure or function. User-defined exceptions are also handled.
The debugger stops at the line where the exception was thrown and allows access to the current values of all
local variables, the call stack, and short information about the error. Afterwards the execution can be
continued, and you can step into the exception handler or further exceptions (e.g. on a call statement).
Save Tables allows you to store the result set of a table variable into a persistent table in a predefined schema
during a debugging session.
Syntax
Syntax Elements
Table 33:
Syntax Element Description
<statement_name> ::= <string_literal>
Specifies the name of a specific execution plan in the output table for a given SQL statement.
<explain_plan_entry> ::= <call_statement> | SQL PLAN CACHE ENTRY <plan_id>
Specifies the entry to be explained: either a procedure call or an entry in the SQL plan cache.
<plan_id> ::= <integer_literal>
Specifies the identifier of the entry in the SQL plan cache to be explained. Refer to the
M_SQL_PLAN_CACHE monitoring view to find the <plan_id> for the desired cache entry.
<call_statement>
Specifies the procedure call to explain the plan for. For more information, see the CALL statement.
Note
The EXPLAIN PLAN [SET STATEMENT_NAME = <statement_name>] FOR SQL PLAN CACHE ENTRY
<plan_id> command can only be run by users with the OPTIMIZER_ADMIN privilege.
Description
EXPLAIN PLAN provides information about the compiled plan of a given procedure. It inserts each piece of
information into a system global temporary table named EXPLAIN_CALL_PLANS. The result is visible only
within the session where the EXPLAIN PLAN call is executed.
When a procedure invokes another procedure, EXPLAIN PLAN inserts the results of the invoked procedure
(callee) under the invoke operator (caller), although the actual invoked procedure is a sub-plan that is not
located under the invoke operator.
Another case is the else operator. EXPLAIN PLAN generates a dummy else operator to represent
alternative operators in the condition operator.
Example
You can retrieve the result by selecting from the table EXPLAIN_CALL_PLANS.
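A sketch of the full round trip (the procedure name and its arguments are placeholders):

```sql
-- Explain the compiled plan of a procedure call
EXPLAIN PLAN FOR CALL MY_SCHEMA.MY_PROC(...);

-- Retrieve the plan information within the same session
SELECT * FROM EXPLAIN_CALL_PLANS;
```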
For EXPLAIN PLAN FOR <select_query>, the HDB client deletes the temporary table automatically; for
EXPLAIN PLAN FOR CALL, this is not yet supported. To delete rows in the table, execute a DELETE query on
the EXPLAIN_CALL_PLANS table or close the current session.
Note
Client integration is not available yet. You need to use the SQL statement above to retrieve the plan
information.
So far this document has introduced the syntax and semantics of SQLScript. This knowledge is sufficient for
mapping functional requirements to SQLScript procedures. However, besides functional correctness, non-
functional characteristics of a program play an important role for user acceptance. For instance, one of the
most important non-functional characteristics is performance.
The following optimizations all apply to statements in SQLScript. The optimizations presented here cover how
dataflow exploits parallelism in the SAP HANA database.
● Reduce Complexity of SQL Statements: Break up a complex SQL statement into many simpler ones. This
makes a SQLScript procedure easier to comprehend.
● Identify Common Sub-Expressions: If you split a complex query into logical sub queries it can help the
optimizer to identify common sub expressions and to derive more efficient execution plans.
● Multi-Level-Aggregation: In the special case of multi-level aggregations, SQLScript can exploit results at a
finer grouping for computing coarser aggregations and return the different granularities of groups in
distinct table variables. This could save the client the effort of reexamining the query result.
● Understand the Costs of Statements: Employ the explain plan facility to investigate the performance
impact of different SQL queries.
● Exploit Underlying Engine: SQLScript can exploit the specific capabilities of the OLAP- and JOIN-Engine by
relying on views modeled appropriately.
● Reduce Dependencies: As SQLScript is translated into a dataflow graph, and independent paths in this
graph can be executed in parallel, reducing dependencies enables better parallelism, and thus better
performance.
● Avoid Mixing Calculation Engine Plan Operators and SQL Queries: Mixing calculation engine plan operators
and SQL may lead to missed opportunities to apply optimizations as calculation engine plan operators and
SQL statements are optimized independently.
● Avoid Using Cursors: Check if use of cursors can be replaced by (a flow of) SQL statements for better
opportunities for optimization and exploiting parallel execution.
● Avoid Using Dynamic SQL: Executing dynamic SQL is slow because compile time checks and query
optimization must be done for every invocation of the procedure. Another related problem is security
because constructing SQL statements without proper checks of the variables used may harm security.
Variables in SQLScript enable you to arbitrarily break up a complex SQL statement into many simpler ones.
This makes a SQLScript procedure easier to comprehend. To illustrate this point, consider the following query:
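Such a query might look like the following sketch, which splits the work into two table variables (the books table and its columns are assumptions for illustration):

```sql
-- Publishers with more than 100 titles
big_pub_ids = SELECT publisher AS pid
                FROM books
               GROUP BY publisher
              HAVING COUNT(title) > 100;

-- All books of those publishers, linked via the table variable above
big_pub_books = SELECT b.title, b.price, p.pid
                  FROM :big_pub_ids AS p
                  JOIN books AS b ON b.publisher = p.pid;
```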
Writing this query as a single SQL statement either requires the definition of a temporary view (using WITH) or
repeating a sub query multiple times. The two statements above break the complex query into two simpler
SQL statements that are linked via table variables. This query is much easier to comprehend because the
names of the table variables convey the meaning of the query and they also break the complex query into
smaller logical pieces.
The SQLScript compiler will combine these statements into a single query or identify the common sub-
expression using the table variables as hints. The resulting application program is easier to understand without
sacrificing performance.
The query examined in the previous sub section contained common sub-expressions. Such common sub-
expressions might introduce expensive repeated computation that should be avoided. For query optimizers it
is very complicated to detect common sub-expressions in SQL queries. If you break up a complex query into
logical sub queries it can help the optimizer to identify common sub-expressions and to derive more efficient
execution plans. If in doubt, you should employ the EXPLAIN plan facility for SQL statements to investigate
how the HDB treats a particular statement.
Computing multi-level aggregation can be achieved by using grouping sets. The advantage of this approach is
that multiple levels of grouping can be computed in a single SQL statement.
To retrieve the different levels of aggregation, the client typically has to examine the result repeatedly, for
example by filtering by NULL on the grouping attributes.
In the special case of multi-level aggregations, SQLScript can exploit results at a finer grouping for computing
coarser aggregations and return the different granularities of groups in distinct table variables. This could save
the client the effort of re-examining the query result. Consider the above multi-level aggregation expressed in
SQLScript.
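A sketch of this pattern (a books table with publisher, year, and price columns is assumed): the finer grouping feeds the coarser one, and both granularities are returned as separate table variables.

```sql
-- Finer granularity: sales per publisher and year
sales_pub_year = SELECT publisher, year, SUM(price) AS sales
                   FROM books
                  GROUP BY publisher, year;

-- Coarser granularity, computed from the finer result instead of the base table
sales_pub = SELECT publisher, SUM(sales) AS sales
              FROM :sales_pub_year
             GROUP BY publisher;
```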
It is important to keep in mind that even though the SAP HANA database is an in-memory database engine and
that the operations are fast, each operation has its associated costs and some are much more costly than
others.
As an example, calculating a UNION ALL of two result sets is cheaper than calculating a UNION of the same
result sets because of the duplicate elimination the UNION operation performs. The calculation engine plan
operator CE_UNION_ALL (and also UNION ALL) basically stacks the two input tables over each other by using
references without moving any data within the memory. Duplicate elimination as part of UNION, in contrast,
requires either sorting or hashing the data to realize the duplicate removal, and thus a materialization of data.
Various examples similar to these exist. Therefore it is important to be aware of such issues and, if possible, to
avoid these costly operations.
You can get the query plan from the view SYS.QUERY_PLANS. The view is shared by all users. Here is an
example of reading a query plan from the view.
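A sketch of reading a plan back, assuming a plan was stored under a statement name via EXPLAIN PLAN SET STATEMENT_NAME (the example query and the filter column name are assumptions):

```sql
EXPLAIN PLAN SET STATEMENT_NAME = 'my_stmt' FOR
    SELECT * FROM books WHERE price > 50;

SELECT * FROM SYS.QUERY_PLANS WHERE statement_name = 'my_stmt';
```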
Sometimes alternative formulations of the same query can lead to faster response times. Consequently
reformulating performance critical queries and examining their plan may lead to better performance.
The SAP HANA database provides a library of application-level functions which handle frequent tasks, e.g.
currency conversions. These functions can be expensive to execute, so it makes sense to reduce the input as
much as possible prior to calling the function.
SQLScript can exploit the specific capabilities of the built-in functions or SQL statements. For instance, if your
data model is a star schema, it makes sense to model the data as an Analytic view. This allows the SAP HANA
database to exploit the star schema when computing joins producing much better performance.
Similarly, if the application involves complex joins, it might make sense to model the data either as an Attribute
view or a Graphical Calculation view. Again, this conveys additional information on the structure of the data
which is exploited by the SAP HANA database for computing joins. When deciding to use Graphical Calculation
views involving complex joins refer to SAP note 1857202 for details on how, and under which conditions,
you may benefit from SQL Engine processing with Graphical Calculation views.
Finally, note that not assigning the result of an SQL query to a table variable will return the result of this query
directly to the client as a result set. In some cases the result of the query can be streamed (or pipelined) to the
client. This can be very effective as this result does not need to be materialized on the server before it is
returned to the client.
One of the most important methods for speeding up processing in the SAP HANA database is a massive
parallelization of executing queries. In particular, parallelization is exploited at multiple levels of granularity:
For example, the requests of different users can be processed in parallel, and also single relational operators
within a query are executed on multiple cores in parallel. It is also possible to execute different statements of a
single SQLScript in parallel if these statements are independent of each other. Remember that SQLScript is
translated into a dataflow graph, and independent paths in this graph can be executed in parallel.
From an SQLScript developer perspective, we can support the database engine in its attempt to parallelize
execution by avoiding unnecessary dependencies between separate SQL statements, and also by using
declarative constructs if possible. The former means avoiding variable references, and the latter means
avoiding imperative features, for example cursors.
Best Practices: Avoid Mixing Calculation Engine Plan Operators and SQL Queries
The semantics of relational operations as used in SQL queries and of calculation engine operations are different.
In the calculation engine, operations are instantiated by the query that is executed on top of the generated
data flow graph.
Therefore the query can significantly change the semantics of the data flow graph. For example, consider a
calculation view containing an aggregation node (CE_AGGREGATION) defined on publisher and year. If this
view is queried using only the attribute publisher (but not year), year is removed from the grouping. Evidently
this reduces the granularity of the grouping, and thus changes the semantics of the model. In a nested SQL
query containing a grouping on publisher and year, on the other hand, this aggregation level would not be
changed if an enclosing query selects only publisher.
Because of the different semantics outlined above, the optimization of a mixed data flow using both types of
operations is currently limited. Hence, one should avoid mixing both types of operations in one procedure.
While the use of cursors is sometimes required, they imply row-at-a-time processing. As a consequence,
opportunities for optimization by the SQL engine are missed. You should therefore consider replacing cursor
loops with SQL statements, as follows:
Read-Only Access
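For example, a cursor that accumulates a sum row by row can usually be replaced by a single aggregate query (a sketch; the books table and the variable total are assumptions):

```sql
-- Cursor version (row-at-a-time, hard to parallelize):
--   DECLARE CURSOR c_cur FOR SELECT price FROM books;
--   FOR r AS c_cur DO
--       total = :total + r.price;
--   END FOR;

-- Set-oriented replacement, computed entirely in the SQL engine:
SELECT SUM(price) INTO total FROM books;
```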
Computing this aggregate in the SQL engine may result in parallel execution on multiple CPUs inside the SQL
executor.
Similar to updates and deletes, computing this statement in the SQL engine reduces the calls through the
runtime stack of the SAP HANA database, and potentially benefits from internal optimizations like buffering or
parallel execution.
Dynamic SQL is a powerful way to express application logic. It allows for constructing SQL statements at
execution time of a procedure. However, executing dynamic SQL is slow because compile time checks and
query optimization must be done for every start up of the procedure. So when there is an alternative to
dynamic SQL using variables, this should be used instead.
Another related problem is security because constructing SQL statements without proper checks of the
variables used can create a security vulnerability, for example, SQL injection. Using variables in SQL
statements prevents these problems because type checks are performed at compile time and parameters
cannot inject arbitrary SQL code.
This section contains information about creating applications with SQLScript for SAP HANA.
In this section we briefly summarize the concepts employed by the SAP HANA database for handling
temporary data.
Table Variables are used to conceptually represent tabular data in the data flow of a SQLScript procedure. This
data may or may not be materialized into internal tables during execution. This depends on the optimizations
applied to the SQLScript procedure. Their main use is to structure SQLScript logic.
Temporary Tables are tables that exist within the life time of a session. For one connection one can have
multiple sessions. In most cases disconnecting and reestablishing a connection is used to terminate a session.
The schema of global temporary tables is visible for multiple sessions. However, the data stored in this table is
private to each session. In contrast, for local temporary tables neither the schema nor the data is visible
outside the present session. In most aspects, temporary tables behave like regular column tables.
Persistent data structures, such as sequences, are sometimes required only within a procedure call. However,
sequences are always globally defined and visible (given the required privileges). For temporary usage, even in
the presence of concurrent invocations of a procedure, you can invent a naming scheme that avoids conflicts.
Such a sequence can then be created using dynamic SQL.
Ranking can be performed using a Self-Join that counts the number of items that would get the same or lower
rank. This idea is implemented in the sales statistical example below.
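The self-join idea can be sketched as follows (a sales table with publisher and revenue columns is an assumption):

```sql
-- Rank each row by counting rows with equal or higher revenue
SELECT s1.publisher, s1.revenue,
       (SELECT COUNT(*) FROM sales s2
         WHERE s2.revenue >= s1.revenue) AS sales_rank
  FROM sales s1
 ORDER BY sales_rank;
```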
In this document we have discussed the syntax for creating SQLScript procedures and calling them. Besides
the SQL command console for invoking a procedure, calls to SQLScript will also be embedded into client code.
In this section we present examples how this can be done.
The best way to call SQLScript from ABAP is to create a procedure proxy which can be natively called from
ABAP by using the built in command CALL DATABASE PROCEDURE.
The SQLScript procedure has to be created as usual in the SAP HANA Studio with the HANA Modeler. After
this, a procedure proxy can be created using the ABAP Development Tools for Eclipse. In the procedure proxy
the type mapping between ABAP and HANA data types can be adjusted. The procedure proxy is transported
normally with the ABAP transport system while the HANA procedure may be transported within a delivery unit
as a TLOGO object.
Calling the procedure in ABAP is very simple. The example below shows calling a procedure with two inputs
(one scalar, one table) and one (table) output parameter:
Using the connection clause of the CALL DATABASE PROCEDURE command, it is also possible to call a
database procedure using a secondary database connection. Please consult the ABAP help for detailed
instructions on how to use the CALL DATABASE PROCEDURE command and for the exceptions that may be raised.
It is also possible to create procedure proxies with an ABAP API programmatically. Please consult the
documentation of the class CL_DBPROC_PROXY_FACTORY for more information on this topic.
Using ADBC
*&---------------------------------------------------------------------*
*& Report ZRS_NATIVE_SQLSCRIPT_CALL
*&---------------------------------------------------------------------*
*&
*&---------------------------------------------------------------------*
report zrs_native_sqlscript_call.
parameters:
con_name type dbcon-con_name default 'DEFAULT'.
Output:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.CallableStatement;
import java.sql.ResultSet;
…
import java.sql.SQLException;

CallableStatement cSt = null;
String sql = "call SqlScriptDocumentation.getSalesBooks(?,?,?,?)";
ResultSet rs = null;
Connection conn = getDBConnection(); // establish connection to database using
jdbc
try {
cSt = conn.prepareCall(sql);
if (cSt == null) {
System.out.println("error preparing call: " + sql);
return;
}
cSt.setFloat(1, 1.5f);
cSt.setString(2, "'EUR'");
cSt.setString(3, "books");
int res = cSt.executeUpdate();
System.out.println("result: " + res);
do {
rs = cSt.getResultSet();
while (rs != null && rs.next()) {
System.out.println("row: " + rs.getString(1) + ", " +
rs.getDouble(2) + ", " + rs.getString(3));
}
} while (cSt.getMoreResults());
} catch (Exception se) {
se.printStackTrace();
} finally {
if (rs != null)
rs.close();
if (cSt != null)
cSt.close();
}
Given procedure:
using System;
using System.Collections.Generic;
using System.Text;
using System.Data;
using System.Data.Common;
using ADODB;
using System.Data.SqlClient;
namespace NetODBC
{
class Program
{
static void Main(string[] args)
{
try
{
DbConnection conn;
DbProviderFactory _DbProviderFactoryObject;
String connStr = "DRIVER={HDBODBC32};UID=SYSTEM;PWD=<password>;
SERVERNODE=<host>:<port>;DATABASE=SYSTEM";
String ProviderName = "System.Data.Odbc";
_DbProviderFactoryObject =
DbProviderFactories.GetFactory(ProviderName);
conn = _DbProviderFactoryObject.CreateConnection();
conn.ConnectionString = connStr;
conn.Open();
System.Console.WriteLine("Connect to HANA database
successfully");
DbCommand cmd = conn.CreateCommand();
//call Stored Procedure
cmd = conn.CreateCommand();
cmd.CommandText = "call SqlScriptDocumentation.scalar_proc (?)";
DbParameter inParam = cmd.CreateParameter();
inParam.Direction = ParameterDirection.Input;
inParam.Value = "asc";
cmd.Parameters.Add(inParam);
DbParameter outParam = cmd.CreateParameter();
outParam.Direction = ParameterDirection.Output;
outParam.ParameterName = "a";
outParam.DbType = DbType.Int32;
cmd.Parameters.Add(outParam);
DbDataReader reader = cmd.ExecuteReader();
System.Console.WriteLine("Out put parameters = " +
outParam.Value);
reader.Read();
String row1 = reader.GetString(0);
System.Console.WriteLine("row1=" + row1);
}
catch(Exception e)
{
System.Console.WriteLine("Operation failed");
System.Console.WriteLine(e.Message);
            }
        }
    }
}
The SQLScript Code Analyzer consists of two built-in procedures that scan CREATE FUNCTION and CREATE
PROCEDURE statements and search for patterns indicating problems in code quality, security or
performance.
Interface
The view SQLSCRIPT_ANALYZER_RULES, which lists the available rules, is defined in the following way:
Table 35:
Column Name          Type
RULE_NAMESPACE       VARCHAR(16)
RULE_NAME            VARCHAR(64)
CATEGORY             VARCHAR(16)
SHORT_DESCRIPTION    VARCHAR(256)
LONG_DESCRIPTION     NVARCHAR(5000)
RECOMMENDATION       NVARCHAR(5000)
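Since SQLSCRIPT_ANALYZER_RULES is an ordinary view, the rule catalog can be inspected with a plain query. A minimal sketch, using the column names from Table 35:

SELECT rule_namespace, rule_name, category, short_description
  FROM sqlscript_analyzer_rules
  ORDER BY rule_namespace, rule_name;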
Procedure ANALYZE_SQLSCRIPT_DEFINITION
The procedure ANALYZE_SQLSCRIPT_DEFINITION can be used to analyze the source code of a single
procedure or a function that has not yet been created. The analyzed function or procedure should not refer to
non-existing objects.
Sample Code
) AS BUILTIN
Table 36:
Parameter            Description
RULES                Rules to be used for the analysis. Available rules can be retrieved from the view SQLSCRIPT_ANALYZER_RULES.
Procedure ANALYZE_SQLSCRIPT_OBJECTS
The procedure ANALYZE_SQLSCRIPT_OBJECTS can be used to analyze the source code of multiple already
existing procedures or functions.
Sample Code
Table 37:
Parameter            Description
RULES                Rules that should be used for the analysis. Available rules can be retrieved from the view SQLSCRIPT_ANALYZER_RULES.
OBJECT_DEFINITIONS   Contains the names and definitions of all objects that were analyzed, including those without any findings.
Rules
Examples
Sample Code
DO BEGIN
  tab = SELECT rulenamespace, rulename, category
        FROM SQLSCRIPT_ANALYZER_RULES; -- selects all rules
  CALL ANALYZE_SQLSCRIPT_DEFINITION('
    CREATE PROCEDURE UNCHECKED_DYNAMIC_SQL(IN query NVARCHAR(500)) AS
    BEGIN
      DECLARE query2 NVARCHAR(500) = ''SELECT '' || query || '' from tab'';
      EXEC :query2;
      query2 = :query2; --unused variable value
    END', :tab, res);
  SELECT * FROM :res;
END;
Sample Code
DO BEGIN
  tab = SELECT rulenamespace, rulename, category
        FROM SQLSCRIPT_ANALYZER_RULES;
  to_scan = SELECT schema_name, procedure_name object_name, definition
            FROM sys.procedures
            WHERE procedure_type = 'SQLSCRIPT2'
              AND schema_name IN ('MY_SCHEMA', 'OTHER_SCHEMA')
            ORDER BY procedure_name;
  CALL analyze_sqlscript_objects(:to_scan, :tab, objects, findings);
  SELECT t1.schema_name, t1.object_name, t2.*, t1.object_definition
    FROM :findings t2
    JOIN :objects t1
      ON t1.object_definition_id = t2.object_definition_id;
END;
The examples used throughout this manual make use of various predefined code blocks. These code snippets
are presented below.
19.1.1 ins_msg_proc
This code is used in the examples in this reference manual to store outputs so that the action of the examples
can be seen. It simply stores some text along with a timestamp of the entry.
Before you can use this procedure, you must create the following table.
To view the contents of the message_box table, select the messages in the table.
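The table and procedure definitions themselves do not survive in this extract. A minimal sketch of what they might look like, assuming message_box holds a text column and a timestamp column (the names msg and ts, and the VARCHAR length, are illustrative):

-- table used by ins_msg_proc (column names are assumptions)
CREATE COLUMN TABLE message_box (msg VARCHAR(200), ts TIMESTAMP);

-- stores a message together with the time of the entry
CREATE PROCEDURE ins_msg_proc (IN p_msg VARCHAR(200))
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER AS
BEGIN
  INSERT INTO message_box VALUES (:p_msg, CURRENT_TIMESTAMP);
END;

-- viewing the stored messages
SELECT * FROM message_box;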
SAP HANA server software and tools can be used for several SAP HANA platform and options scenarios as
well as the respective capabilities used in these scenarios. The availability of these is based on the available
SAP HANA licenses and the SAP HANA landscape, including the type and version of the back-end systems the
SAP HANA administration and development tools are connected to. There are several types of licenses
available for SAP HANA. Depending on your SAP HANA installation license type, some of the features and
tools described in the SAP HANA platform documentation may only be available in the SAP HANA options and
capabilities, which may be released independently of an SAP HANA Platform Support Package Stack (SPS).
Although various features included in SAP HANA options and capabilities are cited in the SAP HANA platform
documentation, each SAP HANA edition governs the options and capabilities available. Based on this,
customers do not necessarily have the right to use features included in SAP HANA options and capabilities.
For customers to whom these license restrictions apply, the use of features included in SAP HANA options and
capabilities in a production system requires purchasing the corresponding software license(s) from SAP. The
documentation for the SAP HANA options is available in SAP Help Portal. If you have additional questions
about what your particular license provides, or wish to discuss licensing features available in SAP HANA
options, please contact your SAP account team representative.
Coding Samples
Any software coding and/or code lines / strings ("Code") included in this documentation are only examples and are not intended to be used in a productive system
environment. The Code is only intended to better explain and visualize the syntax and phrasing rules of certain coding. SAP does not warrant the correctness and
completeness of the Code given herein, and SAP shall not be liable for errors or damages caused by the usage of the Code, unless damages were caused by SAP
intentionally or by SAP's gross negligence.
Accessibility
The information contained in the SAP documentation represents SAP's current view of accessibility criteria as of the date of publication; it is in no way intended to be
a binding guideline on how to ensure accessibility of software products. SAP in particular disclaims any liability in relation to this document. This disclaimer, however,
does not apply in cases of willful misconduct or gross negligence of SAP. Furthermore, this document does not result in any direct or indirect contractual obligations
of SAP.
Gender-Neutral Language
As far as possible, SAP documentation is gender neutral. Depending on the context, the reader is addressed directly with "you", or a gender-neutral noun (such as
"sales person" or "working days") is used. If, when referring to members of both sexes, the third-person singular cannot be avoided or a gender-neutral noun
does not exist, SAP reserves the right to use the masculine form of the noun and pronoun. This is to ensure that the documentation remains comprehensible.
Internet Hyperlinks
The SAP documentation may contain hyperlinks to the Internet. These hyperlinks are intended to serve as a hint about where to find related information. SAP does
not warrant the availability and correctness of this related information or the ability of this information to serve a particular purpose. SAP shall not be liable for any
damages caused by the use of related information unless damages have been caused by SAP's gross negligence or willful misconduct. All links are categorized for
transparency (see: http://help.sap.com/disclaimer).