DB2 Version 9.1 for z/OS
Application Programming and SQL Guide
SC18-9841-05
Note
Before using this information and the product it supports, be sure to read the general information under “Notices” at the
end of this information.
SQL statements in REXX programs . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
Delimiters in SQL statements in REXX programs . . . . . . . . . . . . . . . . . . . . . 408
Accessing the DB2 REXX language support application programming interfaces . . . . . . . . . . . 409
Ensuring that DB2 correctly interprets character input data in REXX programs . . . . . . . . . . . 410
Passing the data type of an input data type to DB2 for REXX programs . . . . . . . . . . . . . . 411
Setting the isolation level of SQL statements in a REXX procedure. . . . . . . . . . . . . . . . 411
Retrieving data from DB2 tables in REXX programs . . . . . . . . . . . . . . . . . . . . 412
Cursors and statement names in REXX . . . . . . . . . . . . . . . . . . . . . . . . . 413
Programming examples in REXX . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
Saving storage when manipulating LOBs by using LOB locators . . . . . . . . . . . . . . . . 711
Deferring evaluation of a LOB expression to improve performance . . . . . . . . . . . . . . . 713
| LOB file reference variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715
Referencing a sequence object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 717
Retrieving thousands of rows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
| Determining when a row was changed . . . . . . . . . . . . . . . . . . . . . . . . . . 718
| Checking whether an XML column contains a certain value . . . . . . . . . . . . . . . . . . . 719
Accessing DB2 data that is not in a table . . . . . . . . . . . . . . . . . . . . . . . . . 719
Ensuring that queries perform sufficiently . . . . . . . . . . . . . . . . . . . . . . . . . 720
Items to include in a batch DL/I program . . . . . . . . . . . . . . . . . . . . . . . . . 720
Tailoring DB2-supplied JCL procedures for preparing CICS programs . . . . . . . . . . . . . . 953
DB2I primary option menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955
DB2I panels that are used for program preparation . . . . . . . . . . . . . . . . . . . . . . 956
DB2 Program Preparation panel . . . . . . . . . . . . . . . . . . . . . . . . . . . 957
DB2I Defaults Panel 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 961
DB2I Defaults Panel 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 963
Precompile panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 964
Bind Package panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 967
Bind Plan panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 969
Defaults for Bind Package panel . . . . . . . . . . . . . . . . . . . . . . . . . . . 973
Defaults for Bind Plan panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . 976
System Connection Types panel . . . . . . . . . . . . . . . . . . . . . . . . . . . 980
Panels for entering lists of values. . . . . . . . . . . . . . . . . . . . . . . . . . . 981
Program Preparation: Compile, Link, and Run panel . . . . . . . . . . . . . . . . . . . . 982
DB2I panels that are used to rebind and free plans and packages . . . . . . . . . . . . . . . . . 984
Bind/Rebind/Free Selection panel . . . . . . . . . . . . . . . . . . . . . . . . . . 985
Rebind Package panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 986
Rebind Trigger Package panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . 988
Rebind Plan panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 990
Free Package panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 992
Free Plan panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 993
Chapter 19. Testing and debugging an application program on DB2 for z/OS . . . . 1009
Designing a test data structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1009
Analyzing application data needs . . . . . . . . . . . . . . . . . . . . . . . . . . 1009
Authorization for test tables and applications . . . . . . . . . . . . . . . . . . . . . . 1010
Example SQL statements to create a comprehensive test structure . . . . . . . . . . . . . . . 1011
Populating the test tables with data . . . . . . . . . . . . . . . . . . . . . . . . . . 1012
Methods for testing SQL statements . . . . . . . . . . . . . . . . . . . . . . . . . . 1012
Executing SQL by using SPUFI . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1013
SPUFI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016
Content of a SPUFI input data set . . . . . . . . . . . . . . . . . . . . . . . . . . 1017
The SPUFI panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1017
Changing SPUFI defaults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1019
Setting the SQL terminator character in a SPUFI input data set . . . . . . . . . . . . . . . . 1024
Controlling toleration of warnings in SPUFI . . . . . . . . . . . . . . . . . . . . . . . 1025
Output from SPUFI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1025
Testing an external user-defined function . . . . . . . . . . . . . . . . . . . . . . . . . 1027
Testing a user-defined function by using the Debug Tool for z/OS . . . . . . . . . . . . . . . 1027
Testing a user-defined function by routing the debugging messages to SYSPRINT . . . . . . . . . . 1029
Testing a user-defined function by using driver applications . . . . . . . . . . . . . . . . . 1029
Testing a user-defined function by using SQL INSERT statements . . . . . . . . . . . . . . . 1030
| Debugging stored procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1030
Information resources for DB2 for z/OS and related products . . . . . . . . . . . 1091
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1105
Programming Interface Information . . . . . . . . . . . . . . . . . . . . . . . . . . 1106
General-use Programming Interface and Associated Guidance Information . . . . . . . . . . . . 1107
Product-sensitive Programming Interface and Associated Guidance Information . . . . . . . . . . . 1107
Trademarks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1107
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1109
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1153
About this information
This information discusses how to design and write application programs that
access DB2® for z/OS® (DB2), a highly flexible relational database management
system (DBMS).
Visit the following Web site for information about ordering DB2 books and
obtaining other valuable information about DB2 for z/OS:
http://publib.boulder.ibm.com/infocenter/imzic
This information assumes that your DB2 subsystem is running in Version 9.1
new-function mode. Generally, new functions that are described, including changes
to existing functions, statements, and limits, are available only in new-function
mode. Two exceptions to this general statement are new and changed utilities and
optimization enhancements, which are also available in conversion mode unless
stated otherwise.
The DB2 Utilities Suite is designed to work with the DFSORT™ program, which
you are licensed to use in support of the DB2 utilities even if you do not otherwise
license DFSORT for general use. If your primary sort product is not DFSORT,
consider the following informational APARs mandatory reading:
v II14047/II14213: USE OF DFSORT BY DB2 UTILITIES
v II13495: HOW DFSORT TAKES ADVANTAGE OF 64-BIT REAL
ARCHITECTURE
These informational APARs are periodically updated.
Related information
DB2 utilities packaging (Utility Guide)
Accessibility features
The following list includes the major accessibility features in z/OS products,
including DB2 Version 9.1 for z/OS. These features support:
v Keyboard-only operation.
v Interfaces that are commonly used by screen readers and screen magnifiers.
v Customization of display attributes such as color, contrast, and font size
Keyboard navigation
You can access DB2 Version 9.1 for z/OS ISPF panel functions by using a keyboard
or keyboard shortcut keys.
For information about navigating the DB2 Version 9.1 for z/OS ISPF panels using
TSO/E or ISPF, refer to the z/OS TSO/E Primer, the z/OS TSO/E User’s Guide, and
the z/OS ISPF User’s Guide. These guides describe how to navigate each interface,
including the use of keyboard shortcuts or function keys (PF keys). Each guide
includes the default settings for the PF keys and explains how to modify their
functions.
Online documentation for DB2 Version 9.1 for z/OS is available in the Information
Management Software for z/OS Solutions Information Center, which is available at
the following Web site: http://publib.boulder.ibm.com/infocenter/dzichelp
How to send your comments
v Go to the following Web site, which has an online reader comment form that you
can use to send comments: http://www.ibm.com/support/docview.wss?&uid=swg27011656
v You can also send comments by using the feedback link at the footer of each
page in the Information Management Software for z/OS Solutions Information
Center at http://publib.boulder.ibm.com/infocenter/db2zhelp.
Apply the following rules when reading the syntax diagrams that are used in DB2
for z/OS documentation:
v Read the syntax diagrams from left to right, from top to bottom, following the
path of the line.
The ►►─── symbol indicates the beginning of a statement.
The ───► symbol indicates that the statement syntax is continued on the next
line.
The ►─── symbol indicates that a statement is continued from the previous line.
The ───►◄ symbol indicates the end of a statement.
v Required items appear on the horizontal line (the main path).
required_item
v Optional items appear below the main path.
required_item
optional_item
If an optional item appears above the main path, that item has no effect on the
execution of the statement and is used only for readability.
optional_item
required_item
v If you can choose from two or more items, they appear vertically, in a stack.
If you must choose one of the items, one item of the stack appears on the main
path.
required_item required_choice1
required_choice2
If choosing one of the items is optional, the entire stack appears below the main
path.
required_item
optional_choice1
optional_choice2
If one of the items is the default, it appears above the main path and the
remaining choices are shown below.
default_choice
required_item
optional_choice
optional_choice
v An arrow returning to the left, above the main line, indicates an item that can be
repeated.
required_item repeatable_item
If the repeat arrow contains a comma, you must separate repeated items with a
comma.
required_item repeatable_item
A repeat arrow above a stack indicates that you can repeat the items in the
stack.
| v Sometimes a diagram must be split into fragments. The syntax fragment is
| shown separately from the main syntax diagram, but the contents of the
| fragment should be read as if they are on the main path of the diagram.
| required_item fragment-name
| fragment-name:
| required_item
| optional_name
| v With the exception of XPath keywords, keywords appear in uppercase (for
| example, FROM). Keywords must be spelled exactly as shown. XPath keywords
| are defined as lowercase names, and must be spelled exactly as shown. Variables
appear in all lowercase letters (for example, column-name). They represent
user-supplied names or values.
v If punctuation marks, parentheses, arithmetic operators, or other such symbols
are shown, you must enter them as part of the syntax.
If you are migrating an existing application from a previous release of DB2, read
the application and SQL release incompatibilities and make any necessary changes
in the application.
If you are writing a new DB2 application, first determine the following items:
v the value of some of the SQL processing options
v the binding method
v the value of some of the bind options
Then make sure that your program implements the appropriate recommendations
so that it promotes concurrency, can handle recovery and restart situations, and can
efficiently access distributed data.
| Plan for the following changes in Version 9.1 that might affect your migration.
| The default value for bind option CURRENTDATA is changed from YES to NO.
| This applies to the BIND PLAN and the BIND PACKAGE subcommands, as well
| as the CREATE TRIGGER for trigger packages, and the CREATE PROCEDURE and
| the ALTER PROCEDURE ADD VERSION SQL statements for SQL PL procedure
| packages. Specifying NO for CURRENTDATA is the best option for performance.
| The default value for bind option ISOLATION is changed from RR to CS. This
| applies to the BIND PLAN and the remote BIND PACKAGE subcommands. For
| the BIND PACKAGE subcommand, the current default (plan value) stays. The
| default change does not apply to implicitly-built CTs (for example, DISTSERV CTs).
| All BIND statements for plans and packages that are bound during the installation
| or migration process specify the ISOLATION parameter explicitly, except for
| routines that do not fetch data. The current settings are maintained for
| compatibility.
| Drop any user-defined data types with the name XML to prevent problems with
| the new Version 9.1 built-in XML data type. You can re-create the existing
| user-defined data types with new names.
| Adjust any applications that call one of the following stored procedures and then
| check and process the specific SQLCODE or SQLSTATE that is returned by the
| CALL statement:
| v SQLJ.INSTALL_JAR
| v SQLJ.REMOVE_JAR
| v SQLJ.REPLACE_JAR
| v SQLJ.DB2_INSTALL_JAR
| v SQLJ.DB2_REPLACE_JAR
| v SQLJ.DB2_REMOVE_JAR
| v SQLJ.DB2_UPDATEJARINFO
| In Version 9.1, these stored procedures return more meaningful SQLCODEs and
| SQLSTATEs than they return in previous releases of DB2. The other input and
| output parameters of these stored procedures have not changed.
| For example, the following application needs to change because -20201 is no longer
| the SQLCODE that is returned. Successful execution (SQLCODE 0) is not affected.
| CALL SQLJ.REMOVE_JAR(...)
| IF (SQLCODE = -20201) THEN
| DO;
| ...
| END;
| Before migrating to conversion mode, drop all materialized query tables that are
| based on the SYSIBM.SYSROUTINES catalog table. During migration, if any
| materialized query tables are based on the SYSIBM.SYSROUTINES catalog table,
| SQLCODE -750 is issued.
| Ensure that you do not have any incomplete object definitions in your DB2 Version
| 8 catalog. For example, if a table has a primary or unique key defined but the
| enforcing primary or unique key index does not exist, the table definition is
| considered incomplete. You need to complete or drop all such objects before you
| begin migration because their behavior will be different in Version 9.1. For
| example, if you attempt to create an enforcing primary key index to complete a
| table definition in Version 9.1 and the residing table space is implicitly created, the
| index is treated as a regular index instead of an enforcing index.
| Version 9.1 has several new SQL reserved words. Refer to Reserved words (SQL
| Reference) for the list, and adjust your applications accordingly.
| For PL/I applications with no DECLARE VARIABLE statements, the rules for host
| variables and string constants in the FROM clause of a PREPARE or EXECUTE
| IMMEDIATE statement have changed. A host variable must be a varying-length
| string variable that is preceded by a colon. A PL/I string cannot be preceded by a
| colon.
| If you have plans and packages that were bound before DB2 Version 4 and you
| specified YES or COEXIST in the AUTO BIND field of panel DSNTIPO, DB2
| Version 9.1 autobinds these packages. Thus, you might experience an execution
| delay the first time that such a plan is loaded. Also, DB2 might change the access
| path due to the autobind, potentially resulting in a more efficient access path.
| If you specify NO in the AUTO BIND field of panel DSNTIPO, DB2 Version 9.1
| returns SQLCODE -908, SQLSTATE 23510 for each attempt to use such a package
| or plan until it is rebound.
| When the INSERT statement is specified with the OVERRIDING USER VALUES
| clause, the value for the insert operation is ignored for columns that are defined
| with the GENERATED BY DEFAULT attribute.
| Because DB2 no longer stores LONG type values in the catalog, when you execute
| a DESCRIBE statement against a column with a LONG VARCHAR or LONG
| VARGRAPHIC data type, the DESCRIBE statement returns the values as
| VARCHAR or VARGRAPHIC data type.
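| For example, in the following embedded SQL sketch (the statement name STMT1 and
| the host variable names are illustrative), the SQLTYPE field that the DESCRIBE
| statement returns in the SQLDA now identifies such a column as VARCHAR or
| VARGRAPHIC rather than as a LONG type:
| EXEC SQL PREPARE STMT1 FROM :STMT-TEXT;
| EXEC SQL DESCRIBE STMT1 INTO :SQLDA;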
| The DSNTIAUL sample program was updated through APAR PK46518 to account
| for this change. You need to apply APAR PK46518 and precompile, bind, compile,
| and link-edit DSNTIAUL to make it compatible with the changed DESCRIBE
| behavior.
| For more information about where you can specify the host-variable-array variable,
| see DB2 SQL Reference.
| After you migrate to new-function mode, users that debug external SQL
| procedures need the DEBUGSESSION system privilege. (External SQL procedures
| were previously called SQL procedures in Version 8.) Only users of the new
| Unified Debugger enabled client platforms need this system privilege. Users of the
| Version 8 SQL Debugger-enabled client platforms do not need this system
| privilege.
| The result length of the DECRYPT function is shortened to 8 bytes less than the
| length of the input value. If the result expands because of a difference between
| input and result CCSIDs, you must cast the encrypted data to a larger VARCHAR
| value before the DECRYPT function is run.
| When new tables are created with LONG VARCHAR or LONG VARGRAPHIC
| columns, the COLTYPE values in SYSIBM.SYSCOLUMNS and
| SYSIBM.SYSCOLUMNS_HIST contain VARCHAR or VARG.
| The CREATEDBY column might contain a different value than in previous releases
| of DB2. The column might contain a different value in static CREATE statements
| for distinct types, functions, and procedures or when a dynamic SQL statement
| sets the CURRENT SQLID value to a value other than USER.
| DB2 enforces the restriction that row IDs are not compatible with
| character strings when they are used with a set operator
| In previous releases, DB2 did not always enforce the restriction that row IDs are
| not compatible with character strings. In Version 9.1, DB2 enforces the restriction
| that row IDs are not compatible with string types when they are used with a set
| operator (UNION, INTERSECT, or EXCEPT).
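| For example, a query like the following one fails with an error in Version 9.1.
| The table and column names are illustrative; EMP_ROWID is assumed to be a ROWID
| column, and EMPNO is assumed to be a character column:
| SELECT EMP_ROWID FROM EMP_PHOTO_RESUME
| UNION
| SELECT EMPNO FROM EMP;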
| After you migrate to conversion mode, if you explicitly create a database name
| with eight characters that begins with DSN and is followed by exactly five digits,
| DB2 issues an SQLCODE -20074 (SQLSTATE 42939).
| Because database privileges on the DSNDB04 database now give you those
| privileges on all implicitly created databases, careful consideration is needed before
| you grant database privileges on DSNDB04. For example, in Version 9.1, if you
| have the STOPDB privilege on DSNDB04, you also have the STOPDB privilege on
| all implicitly created databases.
| In previous releases, implicitly created objects that are associated with LOB
| columns did not require CREATETAB and CREATETS privileges on the database of
| the base table. Those implicitly created objects also did not require the USE
| privilege on the buffer pool and storage group that is used by the LOB objects. In
| Version 9.1, these privileges are required.
| The LGDISCLR field in the DSNDQJ00 macro has been removed. Update
| applications that use the LGDISCLR value in the DSNDQJ00 mapping macro to
| determine whether a log record is a compensation log record to use the LRHCLR
| value instead.
| You can no longer create databases with the AS TEMP clause or table spaces that
| specify TEMP as the target database. The TEMP database is no longer used by
| DB2. The WORKFILE database is the only temporary database.
| After you migrate to Version 9.1 new-function mode, if the containing table space
| is implicitly created, you cannot drop any system-required objects, except for the
| LOB table space, even if you explicitly created these objects in a previous release.
| The following statements will not work properly if the system-required objects
| were implicitly created by DB2:
| In Version 9.1, you cannot execute INSERT, UPDATE, or DELETE statements that
| affect an index in the same commit scope as ALTER INDEX statements on that
| index.
| In previous releases, DB2 did not return an error when a LOB value was specified
| for an argument to a stored procedure and the argument value was longer than the
| target parameter and the excess was not trailing blanks. DB2 truncated the data
| and the procedure executed. In Version 9.1, DB2 returns an error.
| In Version 9.1, the formatting of decimal data has changed for the VARCHAR
| function and CAST specification with a VARCHAR result type. When the input
| data is decimal, any leading zeroes in the input value are removed, and leading
| zeroes are not added to an input value that did not already contain leading zeroes.
| In Version 9.1, the length attribute of the result is the length attribute of the format
| string, up to a maximum of 100.
| DB2 Version 9.1 no longer recognizes ’W’ as a valid format element of the
| VARCHAR_FORMAT function format string. Version 8 never recognized ’W’ as a
| valid format element.
| Use WW instead. Drop and re-create existing views and materialized queries that
| are defined with Version 9.1 and that use the ’W’ format element with the
| VARCHAR_FORMAT function. Rebind existing bound statements that are bound
| with Version 9.1 and that use the ’W’ format element with the
| VARCHAR_FORMAT function.
| Leading or trailing blanks from the format string for the VARCHAR_FORMAT
| function are no longer removed. Existing view definitions are recalculated as part
| of Version 9.1, so the new rules take effect. You can continue to use existing
| materialized query statements, but they use the old rules and remove leading and
| trailing blanks.
| DB2 Version 9.1 for z/OS issues a warning when a BEFORE or AFTER trigger is
| created and a trigger transition variable is passed as an argument for a parameter
| on a CALL statement that is within the trigger body.
| In previous releases, if a unique constraint was dropped, DB2 did not drop the
| index that enforced uniqueness. In Version 9.1, if a table is in an implicitly-created
| table space, and a unique constraint on that table is dropped, DB2 drops the index
| that enforces uniqueness.
| Changes to the upper limit to the size of the row that is used by
| sort to evaluate column functions
| The maximum size of a row (data and key columns) that is used by sort to
| evaluate multiple DISTINCT and GROUP BY column functions is decreased to
| 32600 bytes. If you exceed the limit, DB2 issues an error.
| The CAST FROM clause is included only in the syntax diagram for the CREATE
| FUNCTION statement for an external scalar function. The CAST FROM clause is
| not included in the syntax diagrams for the other variations of CREATE
| FUNCTION (external table function, sourced function, or SQL function); the clause
| cannot be used for these other variations. In previous releases, if you specified a
| CAST FROM clause in an unsupported context, you received no errors. In Version
| 9.1 if a CAST FROM clause is specified in an unsupported context, DB2 issues an
| error.
| The AS LOCATOR clause for LOBs is included in the syntax diagram for the
| CREATE FUNCTION statement for an SQL function. This clause is not supported
| in other contexts when identifying an existing SQL function such as in an ALTER,
| COMMENT, DROP, GRANT, or REVOKE statement. In previous releases, if you
| specified an AS LOCATOR clause for LOBs in an unsupported context, you might
| not have received an error. In Version 9.1 if an AS LOCATOR clause for LOBs is
| specified in an unsupported context, DB2 issues an error.
| The TABLE LIKE clause for a trigger transition table is included only in the syntax
| diagram for the CREATE FUNCTION statement for an external scalar function,
| external table function, or sourced function. This clause is not supported for SQL
| functions or in other contexts when identifying an existing function such as in an
| ALTER, COMMENT, DROP, GRANT, or REVOKE statement, or in the SOURCE
| clause of a CREATE FUNCTION statement. In previous releases, if you specified a
| TABLE LIKE clause for a trigger transition table in an unsupported context, you
| might not have received an error. In Version 9.1, if a TABLE LIKE clause for a
| trigger transition table is specified in an unsupported context, DB2 issues an
| error.
| With the introduction of native SQL procedures in Version 9, the semantics of the
| CREATE PROCEDURE statement for an SQL procedure has changed. Starting in
| Version 9, all SQL procedures that are created without the FENCED option or the
| EXTERNAL option in the CREATE PROCEDURE statement are native SQL
| procedures. In previous releases of DB2, if you did not specify either of these
| options, the procedures were created as external SQL procedures.
| As of Version 9, the rules that are used for name resolution within a native SQL
| procedure differ from the rules that were used for SQL procedures in prior
| releases. Because an SQL parameter or SQL variable can have the same name as a
| column name, you should explicitly qualify the names of any SQL parameters,
| SQL variables or columns that have non-unique names. For more information
| about how the names of these items are resolved, see References to SQL
| parameters and SQL variables (SQL Reference). The rules that are used for name
| resolution within external SQL procedures remain unchanged.
| In Version 9.1, the SQLSTATE and SQLCODE SQL variables are not cleared
| following a GET DIAGNOSTICS statement.
| Previous releases of DB2 did not allow for a compound statement within a handler.
| A workaround to include multiple statements within a handler (without support
| for a compound statement in a handler) was to use another control statement, such
| as an IF statement, which in turn contained multiple statements. Version 9.1 now
| supports a compound statement within a handler body. The compound statement
| is recommended for including multiple statements within a handler body.
| Unhandled warnings
| In Version 9.1, DB2 issues different messages for the new native SQL procedures
| than it does for external SQL procedures. (External SQL procedures were
| previously called SQL procedures in Version 8.) For external SQL procedures, DB2
| continues to issue DSNHxxxx messages. For native SQL procedures, DB2 issues
| SQL return codes. The relationship between these messages is shown in the
| following table:
| Table 1. Relationship between DSNHxxxx messages that are issued for external SQL
| procedures and SQLCODEs that are issued for native SQL procedures
| DSNHxxxx message     SQLCODE
| DSNH051I             -051
| DSNH385I             +385
| DSNH590I             -590
| DSNH4408I            -408
| DSNH4777I            n/a
| DSNH4778I            -778
| DSNH4779I            -779
| DSNH4780I            -780
| DSNH4781I            -781
| DSNH4782I            -782
| DSNH4785I            -785
| DSNH4787I            -787
| In Version 9.1, when you specify a CHAR data type with a length of 0 in the
| SQLDA, DB2 issues SQLCODE -804 regardless of the null indicator value.
| In previous releases, adding a column to a table did not generate a new table space
| version. In Version 9.1, adding a column to a table with an ALTER TABLE ADD
| COLUMN statement generates a new table space version.
| The CAST FROM clause of the CREATE FUNCTION statement for SQL functions
| is no longer supported. If you issue a CREATE FUNCTION statement for an SQL
| function with a CAST FROM clause, DB2 issues an error.
| If you specify the SQL processing options GRAPHIC or NOGRAPHIC, DB2 ignores
| them. These options are superseded by the CCSID SQL processing option.
| The INTO clause (as related to queries) is included only in the syntax diagram for
| the SELECT INTO statement. The INTO clause is not included in the syntax
| diagrams for select-clause, subselect, fullselect, or select-statement. In previous
| releases, if you specified an INTO clause in an unsupported context in a query, you
| might not have received an error. In Version 9.1, if an INTO clause is specified in
| an unsupported context, DB2 issues an error.
| For some COBOL and PL/I compilers that are no longer supported, you can use a
| version of the precompiler that allows you to precompile applications that have
| dependencies on these unsupported compilers. You can use this version of the
| precompiler with the following unsupported compilers:
| v OS/VS COBOL V1.2.4
| v OS PL/I 1.5 (PL/I Opt. V1.5.1)
| v VS/COBOL II V1R4
| v OS PL/I 2.3
| The load module for this precompiler is DSNHPC7. This precompiler is meant
| only to ease the transition from unsupported compilers to supported compilers.
| This precompiler has the following restrictions:
| v There is no corresponding DB2 coprocessor function to match this precompiler.
| v The precompiler does not support SQL procedures.
| v Only COBOL and PL/I are supported.
| v The SQL flagger is not supported.
| v The precompiler produces Version 7 DBRMs, and does not support any
| capability that is newer than Version 7.
| If you use a user-defined function that has the same name as a built-in function
| that has been added to Version 9.1, ensure that you fully qualify the function
| name. If the function name is unqualified and “SYSIBM” precedes the schema that
| you used for this function in the SQL path, DB2 invokes one of the built-in
| functions.
| For a list of built-in functions, including those that have been added in Version 9.1,
| see Functions (SQL Reference).
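| For example, assume a user-defined function MYFUNC in schema MYSCHEMA whose name
| collides with a new Version 9.1 built-in function (the function and schema names
| here are hypothetical; DSN8910.EMP is the sample employee table). Qualify the
| reference so that DB2 does not resolve it to the SYSIBM built-in function:
| SELECT MYFUNC(BONUS) FROM DSN8910.EMP;          -- might invoke the built-in function
| SELECT MYSCHEMA.MYFUNC(BONUS) FROM DSN8910.EMP; -- always invokes the user-defined function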
Determining the value of any SQL processing options that affect the
design of your program
When you process SQL statements in an application program, you can specify
options that describe the basic characteristics of the program or indicate how you
want the output listings to look. Although most of these options do not affect how
you design or code the program, a few options do.
SQL processing options specify program characteristics such as the following items:
v The host language in which the program is written
v The maximum precision of decimal numbers in the program
v How many lines are on a page of the precompiler listing
In many cases, you may want to accept the default value provided.
To determine the value of any SQL processing options that affect the design of
your program:
Review the list of SQL processing options and decide the values for any options
that affect the way that you write your program. For example, you need to know if
you are using NOFOR or STDSQL(YES) before you begin coding.
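For example, the NOFOR option affects how you code cursors for positioned
updates. The following sketch assumes the sample employee table DSN8910.EMP:
-- With NOFOR in effect, the FOR UPDATE OF clause can be omitted:
DECLARE C1 CURSOR FOR
  SELECT EMPNO, SALARY FROM DSN8910.EMP;
-- Without NOFOR, the columns that are to be updated must be named:
DECLARE C2 CURSOR FOR
  SELECT EMPNO, SALARY FROM DSN8910.EMP
  FOR UPDATE OF SALARY;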
The use of packages affects your application design. For example, you might
decide to put certain SQL statements together in the same program, precompile
them into the same DBRM, and then bind them into a single package.
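For example, the following DSN subcommands are a minimal sketch of that approach
(the collection, member, and plan names are placeholders): the DBRM is bound into
a package in collection COLLA, and the plan's package list then names that
collection.
BIND PACKAGE(COLLA) MEMBER(PGM1) ACTION(REPLACE)
BIND PLAN(PLANA) PKLIST(COLLA.*) ACTION(REPLACE)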
Consider the advantages and disadvantages of each binding method, which are
described in the following table.
Table 2. Advantages and disadvantages of each binding method

Binding method: Bind all of your DBRMs into a single application plan.
Advantages: This method has fewer steps and is appropriate in some cases. This
method is suitable for small applications that are unlikely to change or that
require all resources to be acquired when the plan is allocated, rather than when
your program first uses them.
Disadvantages: Maintenance is difficult. This method has the disadvantage that a
change to one DBRM requires rebinding the entire plan, even though most DBRMs
are unchanged.
Related concepts
“DB2 program preparation overview” on page 947
Related tasks
“Binding an application” on page 916
A change to your program probably invalidates one or more of your packages and
perhaps your entire plan. For some changes, you must bind a new object; for
others, rebinding is sufficient.
A plan or package can also become invalid for reasons that do not depend on
operations in your program. For example, when an index is dropped that is used
in an access path by one of your queries, a plan or package can become invalid. In
those cases, DB2 might rebind the plan or package automatically the next time that
the plan or package is used.
The following table lists the actions that you must take when changes are made to
your program or database objects.
Table 3. Changes that require plans or packages to be rebound.

Change made: Run RUNSTATS to update catalog statistics.
Required action: Rebind the package or plan by using the REBIND command.
Rebinding might improve the access path that DB2 uses.

Change made: Add an index to a table.
Required action: Rebind the package or plan by using the REBIND command.
Rebinding causes DB2 to consider using the index when accessing this table.

Change made: Change the bind options. (See note 1.)
Required action: Rebind the package or plan by using the REBIND command and
specifying the new value for the bind option. If the option that you want to
change is not available for the REBIND command, issue the BIND command with
ACTION(REPLACE) instead.

Change made: Change both statements in the host language and SQL statements.
Required action: Precompile, compile, and link the application program. Issue the
BIND command with ACTION(REPLACE) for the package or plan.

Note:
1. In the case of changing the bind options, the change is not actually made until
you perform the required action.
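For example, after you run RUNSTATS or add an index, rebinding as in the
following sketch (the collection, package, and plan names are placeholders) lets
DB2 reconsider the access path:
REBIND PACKAGE(COLLA.PGM1)
REBIND PLAN(PLANA)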
Determining the value of any bind options that affect the design of
your program
Several options of the BIND PACKAGE and BIND PLAN commands can affect
your program design. For example, you can use a bind option to ensure that a
package or plan can run only from a particular CICS connection or IMS region;
your code does not need to enforce this situation.
To determine the value of any bind options that affect the design of your program:
Review the list of bind options and decide the values for any options that affect
the way that you write your program. For example, you should decide the values
of the ACQUIRE and RELEASE options before you write your program. These
options determine when your application acquires and releases locks on the objects
it uses.
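For example, a plan that is bound as in the following sketch (the plan name is a
placeholder) acquires resources only when the program first uses them and
releases its locks at each commit point:
BIND PLAN(PLANA) ACQUIRE(USE) RELEASE(COMMIT)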
Related reference
BIND and REBIND options (DB2 Command Reference)
Concurrency is the ability of more than one application process to access the same
data at essentially the same time. Concurrency must be controlled to prevent lost
updates and such possibly undesirable effects as unrepeatable reads and access to
uncommitted data.
In TSO applications, a unit of work starts when the first updates of a DB2 object
occur. A unit of work ends when one of the following conditions occurs:
v The program issues a subsequent COMMIT statement. At this point in the
processing, your program has determined that the data is consistent; all data
changes that were made since the previous commit point were made correctly.
v The program issues a subsequent ROLLBACK statement. At this point in the
processing, your program has determined that the data changes were not made
correctly and, therefore, should not be permanent. A ROLLBACK statement
causes any data changes that were made since the last commit point to be
backed out.
v The program terminates and returns to the DSN command processor, which
returns to the TSO Terminal Monitor Program (TMP).
The first and third conditions in the preceding list are called a commit point. A
commit point occurs when you issue a COMMIT statement or your program
terminates normally.
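For example, in a TSO batch program, a sequence like the following sketch
(assuming the sample employee table DSN8910.EMP) establishes a commit point after
a set of related changes:
EXEC SQL UPDATE DSN8910.EMP
           SET SALARY = SALARY * 1.05
           WHERE WORKDEPT = 'D11';
EXEC SQL COMMIT;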
Related reference
COMMIT (SQL Reference)
ROLLBACK (SQL Reference)
All the processing that occurs in your program between two commit points is
known as a logical unit of work (LUW) or unit of work. In CICS applications, a
unit of work is marked as complete by a commit or synchronization (sync) point,
which is defined in one of following ways:
v Implicitly at the end of a transaction, which is signaled by a CICS RETURN
command at the highest logical level.
v Explicitly by CICS SYNCPOINT commands that the program issues at logically
appropriate points in the transaction.
v Implicitly through a DL/I PSB termination (TERM) call or command.
For example, consider a program that subtracts the quantity of items sold from an
inventory file and then adds that quantity to a reorder file. When both transactions
complete (and not before) and the data in the two files is consistent, the program
can then issue a DL/I TERM call or a SYNCPOINT command. If one of the steps
fails, you want the data to return to the value it had before the unit of work began.
That is, you want it rolled back to a previous point of consistency. You can achieve
this state by using the SYNCPOINT command with the ROLLBACK option.
By using a SYNCPOINT command with the ROLLBACK option, you can back out
uncommitted data changes. For example, a program that updates a set of related
rows sometimes encounters an error after updating several of them. The program
can use the SYNCPOINT command with the ROLLBACK option to undo all of the
updates without giving up control.
The SQL COMMIT and ROLLBACK statements are not valid in a CICS
environment. You can coordinate DB2 with CICS functions that are used in
programs, so that DB2 and non-DB2 data are consistent.
Both IMS and DB2 handle recovery in an IMS application program that accesses
DB2 data. IMS coordinates the process, and DB2 handles recovery for DB2 data.
Restriction: You cannot use SQL COMMIT and ROLLBACK statements in the
DB2 DL/I batch support environment, because IMS coordinates the unit of
work.
3. Issue CLOSE CURSOR statements before any checkpoint calls or GU calls to
the message queue, not after.
4. After any checkpoint calls, set the value of any special registers that were reset
if their values are needed after the checkpoint:
A CHKP call causes IMS to sign on to DB2 again, which resets the special
registers that are shown in the following table.
Table 4. Special registers that are reset by a checkpoint call.

Special register       Value to which it is reset after a checkpoint call
CURRENT PACKAGESET     blanks
CURRENT SERVER         blanks
CURRENT SQLID          blanks
CURRENT DEGREE         1
5. After any commit points, reopen the cursors that you want and re-establish
positioning.
6. Decide whether to specify the WITH HOLD option for any cursors. This option
determines whether the program retains the position of the cursor in the DB2
database after you issue IMS CHKP calls. You always lose the program
database positioning in DL/I after an IMS CHKP call.
The program database positioning in DB2 is affected according to the following
criteria:
v If you do not specify the WITH HOLD option for a cursor, you lose the
position of that cursor.
v If you specify the WITH HOLD option for a cursor and the application is
message-driven, you lose the position of that cursor.
v If you specify the WITH HOLD option for a cursor and the application is
operating in DL/I batch or DL/I BMP, you retain the position of that cursor.
7. Use IMS rollback calls, ROLL and ROLB, to back out DB2 and DL/I changes to
the last commit point. These options have the following differences:
Related concepts
“Checkpoints in IMS programs” on page 26
In IMS, a unit of work starts when one of the following events occurs:
v When the program starts
v After a CHKP, SYNC, ROLL, or ROLB call has completed
v For single-mode transactions, when a GU call is issued to the I/O PCB
Restriction: The SQL COMMIT and ROLLBACK statements are not valid in an
IMS environment.
A commit point occurs in a program as the result of any one of the following
events:
v The program terminates normally. Normal program termination is always a
commit point.
v The program issues a checkpoint call. Checkpoint calls are a program’s means of
explicitly indicating to IMS that it has reached a commit point in its processing.
v The program issues a SYNC call. A SYNC call is a Fast Path system service call
to request commit-point processing. You can use a SYNC call only in a
non-message-driven Fast Path program.
v For a program that processes messages as its input, a commit point can occur
when the program retrieves a new message. This behavior depends on the mode
that you specify in the APPLCTN macro for the program:
– If you specify single-mode transactions, a commit point in DB2 occurs each
time the program issues a call to retrieve a new message.
– If you specify multiple-mode transactions or you do not specify a mode, a
commit point occurs when the program issues a checkpoint call or when it
terminates normally.
If the program abends before reaching the commit point, the following actions
occur:
v Both IMS and DB2 back out all the changes the program has made to the
database since the last commit point.
v IMS deletes any output messages that the program has produced since the last
commit point (for nonexpress PCBs).
v If the program processes messages, people at terminals and other application
programs receive information from the terminating application program.
Issuing checkpoint calls releases locked resources and establishes a place in the
program from which you can restart the program. The decision about whether
your program should issue checkpoints (and if so, how often) depends on your
program.
Programs that issue symbolic checkpoint calls can specify as many as seven data
areas in the program that are to be restored at restart. DB2 always recovers to the
last checkpoint. You must restart the program from that point.
If you use symbolic checkpoint calls, you can use a restart call (XRST) to restart a
program after an abend. This call restores the program’s data areas to the way they
were when the program terminated abnormally, and it restarts the program from
the last checkpoint call that the program issued before terminating abnormally.
Restriction: For BMP programs that process DB2 databases, you can restart the
program only from the latest checkpoint and not from any checkpoint, as in IMS.
In multiple-mode BMPs and MPPs, the only commit points are the checkpoint calls
that the program issues and normal program termination. If the program abends
and it has not issued checkpoint calls, IMS backs out the program’s database
updates and cancels the messages that it has created since the beginning of the
program. If the program has issued checkpoint calls, IMS backs out the program’s
changes and cancels the output messages it has created since the most recent
checkpoint call.
If a batch-oriented BMP does not issue checkpoints frequently enough, IMS can
abend that BMP or another application program for one of the following reasons:
v Other programs cannot get to the data that they need within a specified amount
of time.
If a BMP retrieves and updates many database records between checkpoint calls,
it can monopolize large portions of the databases and cause long waits for other
programs that need those segments. (The exception to this situation is a BMP
with a processing option of GO; IMS does not enqueue segments for programs
with this processing option.) Issuing checkpoint calls releases the segments that
the BMP has enqueued and makes them available to other programs.
v Not enough storage is available for the segments that the program has read and
updated.
If IMS is using program isolation enqueuing, the space that is needed to
enqueue information about the segments that the program has read and updated
must not exceed the amount of storage that is defined for the IMS system. (The
amount of storage available is specified during IMS system definition.) If a BMP
enqueues too many segments, the amount of storage that is needed for the
enqueued segments can exceed the amount of available storage. In that case, IMS
terminates the program abnormally.
When you issue a DL/I CHKP call from an application program that uses DB2
databases, IMS processes the CHKP call for all DL/I databases, and DB2 commits
all the DB2 database resources. No checkpoint information is recorded for DB2
databases in the IMS log or the DB2 log. The application program must record
relevant information about DB2 databases for a checkpoint, if necessary. One way
to record such information is to put it in a data area that is included in the DL/I
CHKP call.
Performance might be slowed by the commit processing that DB2 does during a
DL/I CHKP call, because the program needs to re-establish position within a DB2
database. The fastest way to re-establish a position in a DB2 database is to use an
index on the target table, with a key that matches one-to-one with every column in
the SQL predicate.
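For example, if a hypothetical ORDERS table has a unique index on ORDERNO, the
program can save the key of the last committed row in its checkpoint data and
reposition after the CHKP call with a cursor like the following sketch:
EXEC SQL DECLARE C1 CURSOR FOR
  SELECT ORDERNO, STATUS
    FROM ORDERS
    WHERE ORDERNO > :LAST-ORDERNO
    ORDER BY ORDERNO;
EXEC SQL OPEN C1;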
Take one or more of the following actions depending on the type of program:
Examples
Rolling back to the most recently created savepoint: When the ROLLBACK TO
SAVEPOINT statement is executed in the following code, DB2 rolls back work to
savepoint B.
EXEC SQL SAVEPOINT A ON ROLLBACK RETAIN CURSORS;
...
EXEC SQL SAVEPOINT B ON ROLLBACK RETAIN CURSORS;
...
EXEC SQL ROLLBACK TO SAVEPOINT;
Setting multiple savepoints with the same name: Suppose that the following
actions occur within a unit of work:
1. Application A sets savepoint S.
2. Application A calls stored procedure P.
3. Stored procedure P sets savepoint S.
4. Stored procedure P executes the following statement: ROLLBACK TO SAVEPOINT S
When DB2 executes the ROLLBACK statement, DB2 rolls back work to the
savepoint that was set in the stored procedure, because that value is the most
recent value of savepoint S.
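If the application must prevent a savepoint name from being reused in this way,
it can set the savepoint with the UNIQUE keyword, as in the following sketch; an
attempt to reuse the name while the savepoint is active then fails with an error:
EXEC SQL SAVEPOINT S UNIQUE ON ROLLBACK RETAIN CURSORS;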
Related reference
RELEASE SAVEPOINT (SQL Reference)
ROLLBACK (SQL Reference)
SAVEPOINT (SQL Reference)
| Although you can plan for recovery, you still need to take some corrective actions
| after any system failures to recover the data and fix any affected table spaces. For
| example, if a table space that is not logged was open for update at the time that
| DB2 terminates, the subsequent restart places that table space in LPL and marks it
| with RECOVER-pending status. You need to take corrective action to clear the
| RECOVER-pending status.
If your system is not already set up to use DRDA access, you must first prepare
your system to use DRDA access. One of the tools that can help you during this
process is the private to DRDA protocol REXX™ tool (DSNTP2DP).
If you are requesting services from a remote DBMS, that DBMS is a server, and
your local system is a requester or client.
Your application can be connected to many DBMSs at one time; the one that is
currently performing work is the current server. When the local system is
performing work, it also is called the current server.
A DBMS, whether local or remote, is known to your DB2 system by its location
name. The location name of a remote DBMS is recorded in the communications
database.
Related tasks
Choosing names for the local subsystem (DB2 Installation and Migration)
DRDA access has the following advantages over DB2 private protocol access:
v Integration: DRDA access is available to all DBMSs that implement Distributed
Relational Database Architecture™ (DRDA). Those DBMSs include supported
releases of DB2 for z/OS, other members of the DB2 family of IBM products,
and many products of other companies.
Ensure that all systems that your program accesses implement two-phase commit
processing. This processing ensures that updates to two or more DBMSs are
coordinated automatically.
For example, DB2 and IMS, and DB2 and CICS, jointly implement a two-phase
commit process. You can update an IMS database and a DB2 table in the same unit
of work. If a system or communication failure occurs between committing the
work on IMS and on DB2, the two programs restore the two systems to a
consistent point when activity resumes.
You cannot do true coordinated updates within a DBMS that does not implement
two-phase commit processing, because DB2 prevents you from updating such a
DBMS and any other system within the same unit of work. In this context, update
includes the statements INSERT, UPDATE, MERGE, DELETE, CREATE, ALTER,
When you prepare your program, specify the SQL processing option
CONNECT(1). This option applies type 1 CONNECT statement rules.
Restriction: Do not use packages that are precompiled with the CONNECT(1)
option and packages that are precompiled with the CONNECT(2) option in the
same package list. The first CONNECT statement that is executed by your
program determines which rules are in effect for the entire execution: type 1 or
type 2. If your program attempts to execute a later CONNECT statement that is
precompiled with the other type, DB2 returns an error.
Related concepts
“Options for SQL statement processing” on page 904
In this case, the FETCH FIRST 1 ROW ONLY clause prevents 15 unnecessary
prefetches.
4. If your program accesses LOB columns in a remote table, use the following
techniques to minimize the number of bytes that are transferred between the
client and the server:
v Use LOB locators instead of LOB host variables.
If you need to store only a portion of a LOB value at the client, or if your
client program manipulates the LOB data but does not need a copy of it, LOB
locators are a good choice.
Restriction: You must use DRDA access to access LOB columns in a remote
table.
5. Minimize the use of parameter markers.
When using DRDA access, DB2 can streamline the processing of dynamic
queries that do not have parameter markers. When a DB2 requester encounters
a PREPARE statement for such a query, it anticipates that the application is
going to open a cursor. DB2 therefore sends a single message to the server that
contains a combined request for the PREPARE, DESCRIBE, and OPEN operations.
When you specify the OPTIMIZE FOR n ROWS clause in your query, the number
of rows that DB2 transmits on each network transmission depends on the
following factors:
v If n rows of the SQL result set fit within a single DRDA query block, a DB2
server can send n rows to any DRDA client. In this case, DB2 sends n rows in
each network transmission until the entire query result set is returned.
v If n rows of the SQL result set exceed a single DRDA query block, the number of
rows that are contained in each network transmission depends on the client’s
DRDA software level and configuration. The following conditions apply:
– If the client does not support extra query blocks, the DB2 server automatically
reduces the value of n to match the number of rows that fit within a DRDA
query block.
– If the client supports extra query blocks, the DRDA client can choose to
accept multiple DRDA query blocks in a single data transmission. DRDA
allows the client to establish an upper limit on the number of DRDA query
blocks in each network transmission.
The number of rows that a DB2 server sends is the smaller of the following
values:
- n rows
- the number of rows that fit within the maximum number of extra DRDA
query blocks that the DB2 server returns to a client in a single network
transmission. (This value is specified in the EXTRA BLOCKS SRV field on
installation panel DSNTIP5 at the DB2 server.)
- the number of rows that fit within the client’s extra query block limit,
which is obtained from the DDM MAXBLKEXT parameter that is received
from the client. (When DB2 acts as a DRDA client, the DDM MAXBLKEXT
parameter is set to the value of EXTRA BLOCKS REQ on installation panel
DSNTIP5.)
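For example, the following query (assuming the sample employee table DSN8910.EMP)
tells DB2 that the application intends to retrieve only about 20 rows, which
influences both the access path and the number of rows that are sent in each
network transmission:
SELECT LASTNAME, FIRSTNME, SALARY
  FROM DSN8910.EMP
  ORDER BY SALARY DESC
  OPTIMIZE FOR 20 ROWS;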
Depending on the value that you specify for n, the OPTIMIZE FOR n ROWS clause
can improve performance in the following ways:
Although the OPTIMIZE FOR n ROWS clause can improve performance, this same
function can degrade performance if you do not use it properly. The following
examples demonstrate the performance problems that can occur when you do not
use this clause judiciously.
In the following figure, the DRDA client opens a cursor and fetches rows from the
cursor. At some point before all rows in the query result set are returned, the
application issues an SQL INSERT statement.
DECLARE C1 CURSOR
FOR SELECT * FROM T1
FOR FETCH ONLY;
OPEN C1;
Figure 1. Message flows without the OPTIMIZE FOR n ROWS clause
In this case, DB2 uses normal DRDA message blocking, which has the following
advantages over the message blocking that is used for the OPTIMIZE FOR n
ROWS clause:
v If the application issues an SQL statement other than FETCH (for example, an
INSERT statement in this case), the DRDA client can transmit the SQL statement
immediately, because the DRDA connection is not in use after the SQL OPEN.
v The DRDA query block size places an upper limit on the number of rows that
are fetched unnecessarily. If the SQL application closes the cursor before fetching
all the rows in the query result set, the server fetches only the number of rows
that fit in one query block, which is 100 rows of the result set.
In the following figure, the DRDA client opens a cursor and fetches rows from the
cursor by using the OPTIMIZE FOR n ROWS clause. Both the DRDA client and the
DB2 server are configured to support multiple DRDA query blocks. At some time
before the end of the query result set, the application issues an SQL INSERT.
DECLARE C1 CURSOR
FOR SELECT * FROM T1
OPTIMIZE FOR
1000 ROWS;
Figure 2. Message flows with the OPTIMIZE FOR 1000 ROWS clause
Because the query uses the OPTIMIZE FOR n ROWS clause, the DRDA connection
is not available when the SQL INSERT is issued. The connection is still being used
to receive the DRDA query blocks for 1000 rows of data. This situation causes the
following performance problems:
v Application elapsed time can increase if the DRDA client waits for a large query
result set to be transmitted before the DRDA connection can be used for other
SQL statements. In this example, the SQL INSERT statement is delayed because
of a large query result set.
v If the application closes the cursor before fetching all the rows in the SQL result
set, the server might fetch a large number of rows unnecessarily.
Related concepts
Minimizing overhead for retrieving few rows: OPTIMIZE FOR n ROWS (DB2
Performance)
Related reference
optimize-for-clause (SQL Reference)
DB2 uses fast implicit close when all of the following conditions are true:
v The query uses limited block fetch.
v The query does not retrieve any LOBs.
v The cursor is not a scrollable cursor.
v Either of the following conditions is true:
– The cursor is defined with the WITH HOLD option, and the package or plan
that contains the cursor is bound with the KEEPDYNAMIC(YES) option.
– The cursor is not defined with the WITH HOLD option.
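For example, a remote read-only cursor like the following sketch (the sample
department table DSN8910.DEPT is assumed) satisfies these conditions: it is
eligible for limited block fetch, retrieves no LOBs, is not scrollable, and is
not defined WITH HOLD:
DECLARE C1 CURSOR FOR
  SELECT DEPTNO, DEPTNAME FROM DSN8910.DEPT
  FOR FETCH ONLY;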
Related concepts
Block fetch (DB2 Performance)
Requirement: Ensure that any application that requests DB2 services satisfies the
following environment characteristics, regardless of the attachment facility that you
use:
v The application must be running in TCB mode. SRB mode is not supported.
v An application task cannot have any Enabled Unlocked Task (EUT) functional
recovery routines (FRRs) active when requesting DB2 services. If an EUT FRR is
active, the DB2 functional recovery can fail, and your application can receive
some unpredictable abends.
v Different attachment facilities cannot be active concurrently within the same
address space. Specifically, the following requirements exist:
– An application must not use CAF or RRSAF in a CICS or IMS address
space.
– An application that runs in an address space that has a CAF connection to
DB2 cannot connect to DB2 by using RRSAF.
– An application that runs in an address space that has an RRSAF connection to
DB2 cannot connect to DB2 by using CAF.
– An application cannot invoke the z/OS AXSET macro after executing the CAF
CONNECT call and before executing the CAF DISCONNECT call.
v One attachment facility cannot start another. For example, your CAF or RRSAF
application cannot use DSN, and a DSN RUN subcommand cannot call your
CAF or RRSAF application.
Requirement: For C and PL/I applications, you must also include in your
program the compiler directives that are listed in the following table, because
DSNALI is an assembler language program.
Table 7. Compiler directives to include in C and PL/I applications that contain
CALL DSNALI statements

Language   Compiler directive to include
C          #pragma linkage(dsnali, OS)
C++        extern "OS" {
              int DSNALI(
              char * functn,
              ...); }
PL/I       DCL DSNALI ENTRY OPTIONS(ASM,INTER,RETCODE);
v Implicitly invoke CAF by including SQL statements or IFI calls in your program
just as you would in any program. The CAF facility establishes the connections
to DB2 with the default values for the subsystem name and plan name.
Restriction: If your program can make its first SQL call from different modules
with different DBRMs, you cannot use a default plan name and thus, you cannot
implicitly invoke CAF. Instead, you must explicitly invoke CAF by using the
OPEN function.
Requirement: If your application includes both SQL and IFI calls, you must
issue at least one SQL call before you issue any IFI calls. This action ensures that
your application uses the correct plan.
Although doing so is not recommended, you can run existing DSN applications
with CAF by allowing them to make implicit connections to DB2. For DB2 to
make an implicit connection successfully, the plan name for the application must
be the same as the member name of the database request module (DBRM) that
DB2 produced when you precompiled the source program that contains the first
SQL call. You must also substitute the DSNALI language interface module for
the TSO language interface module, DSNELI.
If you do not specify the return code and reason code parameters in your CAF
calls or you invoked CAF implicitly, CAF puts a return code in register 15 and a
reason code in register 0.
To determine whether an implicit connection was successful, the application program
should examine the return and reason codes immediately after the first executable
SQL statement in the application program by performing one of the following
actions (a C sketch follows this list):
v Examine registers 0 and 15 directly.
v Examine the SQLCA. If the SQLCODE is -991, obtain the return and reason
codes from the message text. The return code is the first token, and the reason
code is the second token.
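The following C fragment is a minimal sketch of the second approach for an
embedded SQL C program. It assumes that the SQLCA has been included with
EXEC SQL INCLUDE SQLCA and that the tokens in the SQLERRMC field are
separated by X'FF', which is the usual separator for DB2 message tokens; the
function name and the error handling are illustrative only.

#include <stdio.h>
#include <string.h>

EXEC SQL INCLUDE SQLCA;

/* Call this immediately after the first executable SQL statement. */
void check_implicit_caf_connection(void)
{
    if (sqlca.sqlcode == -991)
    {
        char msg[sizeof(sqlca.sqlerrmc) + 1];
        memcpy(msg, sqlca.sqlerrmc, sqlca.sqlerrml);
        msg[sqlca.sqlerrml] = '\0';

        /* Assumption: the message tokens are separated by X'FF'. */
        char *caf_retcode  = strtok(msg, "\xFF");
        char *caf_reascode = caf_retcode ? strtok(NULL, "\xFF") : NULL;

        printf("Implicit CAF connection failed: RC=%s REASON=%s\n",
               caf_retcode  ? caf_retcode  : "?",
               caf_reascode ? caf_reascode : "?");
    }
}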
Examples
(Figure: structure of a sample CAF application, showing CALL DSNHLI for SQL
calls, CALL DSNWLI for IFI calls, the processing of connection requests, and
DSNWLI as a dummy application entry point.)
Sample programs that use CAF: You can find a sample assembler program
(DSN8CA) and a sample COBOL program (DSN8CC) that use the CAF in library
prefix.SDSNSAMP. A PL/I application (DSN8SPM) calls DSN8CA, and a COBOL
application (DSN8SCM) calls DSN8CC.
Any task in an address space can establish a connection to DB2 through CAF. Only
one connection can exist for each task control block (TCB). A DB2 service request
that is issued by a program that is running under a given task is associated with
that task’s connection to DB2. The service request operates independently of any
DB2 activity under any other task.
Each connected task can run a plan. Multiple tasks in a single address space can
specify the same plan, but each instance of a plan runs independently from the
others. A task can terminate its plan and run a different plan without fully
breaking its connection to DB2.
When you design your application, consider that using multiple simultaneous
connections can increase the possibility of deadlocks and DB2 resource contention.
The connection that CAF makes with DB2 has the basic properties that are listed in
the following table.
Table 8. Properties of CAF connections

Connection name
    Value: DB2CALL
    Comments: You can use the DISPLAY THREAD command to list CAF
    applications that have the connection name DB2CALL.

Connection type
    Value: BATCH
    Comments: BATCH connections use a single phase commit process that is
    coordinated by DB2. Application programs can also control when statements
    are committed by using the SQL COMMIT and ROLLBACK statements.

Authorization IDs
    Value: Authorization IDs that are associated with the address space
    Comments: DB2 establishes authorization IDs for each task's connection when
    it processes that connection. For the BATCH connection type, DB2 creates a
    list of authorization IDs based on the authorization ID that is associated with
    the address space. This list is the same for every task. A location can provide
    a DB2 connection authorization exit routine to change the list of IDs.

Scope
    Value: CAF processes connections as if each task is entirely isolated. When a
    task requests a function, the CAF passes the function to DB2 and is unaware
    of the connection status of other tasks in the address space. However, the
    application program and the DB2 subsystem are aware of the connection
    status of multiple tasks in an address space.
    Comments: None
If a connected task terminates normally before the CLOSE function deallocates the
plan, DB2 commits any database changes that the thread made since the last
commit point.
An attention exit routine works by detaching the TCB that is currently waiting on
an SQL or IFI request to complete. After the TCB is detached, DB2 detects the
resulting abend and performs termination processing for that task. The termination
processing includes any necessary rollback of transactions.
You can provide your own attention exit routines. However, your routine might
not get control if you request attention while DB2 code is running, because DB2
uses enabled unlocked task (EUT) functional recovery routines (FRRs).
The CAF has no abend recovery routines, but you can provide your own. Any
abend recovery routines that you provide must use tracking indicators to
determine if an abend occurred during DB2 processing. If an abend occurs while
DB2 has control, the recovery routine can take one of the following actions:
v Allow task termination to complete. Do not retry the program. DB2 detects task
termination and terminates the thread with the ABRT parameter. You lose all
database changes back to the last sync point or commit point.
This action is the only action that you can take for abends that are caused by the
CANCEL command or by DETACH. You cannot use additional SQL statements.
If you attempt to execute another SQL statement from the application program
or its recovery routine, you receive a return code of +256 and a reason code of
X’00F30083’.
v In an ESTAE routine, issue a CLOSE function call with the ABRT parameter
followed by a DISCONNECT function call. The ESTAE exit routine can retry so
that you do not need to reinstate the application task.
Part of CAF is a DB2 load module, DSNALI, which is also known as the CAF
language interface. DSNALI has the alias names DSNHLI2 and DSNWLI2. The
module has five entry points: DSNALI, DSNHLI, DSNHLI2, DSNWLI, and
DSNWLI2. These entry points serve the following functions:
v Entry point DSNALI handles explicit DB2 connection service requests.
v DSNHLI and DSNHLI2 handle SQL calls. Use DSNHLI if your application
program link-edits DSNALI. Use DSNHLI2 if your application program loads
DSNALI.
v DSNWLI and DSNWLI2 handle IFI calls. Use DSNWLI if your application
program link-edits DSNALI. Use DSNWLI2 if your application program loads
DSNALI.
When you write programs that use CAF, ensure that they meet the following
requirements:
v The program accounts for the size of the CAF code. The CAF code requires
about 16 KB of virtual storage per address space and an additional 10 KB for
each TCB that uses CAF.
v If your local environment intercepts and replaces the z/OS LOAD SVC that CAF
uses, you must ensure that your version of LOAD manages the load list element
(LLE) and contents directory entry (CDE) chains like the standard z/OS LOAD
macro. CAF uses z/OS SVC LOAD to load two modules as part of the
initialization after your first service request. Both modules are loaded into
fetch-protected storage that has the job-step protection key.
v If you use CAF from IMS batch, you must write data to only one system in any
one unit of work. If you write to both systems within the same unit, a system
failure can leave the two databases inconsistent with no possibility of automatic
recovery. To end a unit of work in DB2, execute the SQL COMMIT statement. To
end a unit of work in IMS, issue the SYNCPOINT command.
The following table lists the standard calling conventions for registers R1, R13, R14,
and R15.
Table 9. Standard usage of registers R1, R13, R14, and R15
Register Usage
R1 CALL DSNALI parameter list pointer
R13 Address of caller’s save area
R14 Caller’s return address
R15 CAF entry point address
CAF also supports high-level languages that cannot examine the contents of
individual registers.
Related concepts
“CALL DSNALI statement parameter list” on page 55
For CALL DSNALI statements, use a standard z/OS CALL parameter list. Register
1 points to a list of fullword addresses that point to the actual parameters. The last
address must contain a 1 in the high-order bit.
In CALL DSNALI statements, you cannot omit any of the parameters that come before
the return code parameter by coding zeros or blanks. No defaults exist for those
parameters for explicit connection requests. Defaults are provided for only implicit
connections. All parameters starting with the return code parameter are optional.
When you want to use the default value for a parameter but specify subsequent
parameters, code the CALL DSNALI statement as follows:
| v For C-language applications, when you code CALL DSNALI statements, specify
|   the address of every required parameter by using the "address of" operator
|   (&), not the parameter itself. For example, to pass the startecb parameter on
|   CONNECT, specify the address of the 4-byte integer (&secb).
|   char functn[13] = "CONNECT ";
|   char ssid[5] = "DB2A";
|   int tecb = 0;
|   int secb = 0;
|   void *ribptr;
|   int retcode;
|   int reascode;
|   void *eibptr;
|   int fnret;
|
|   fnret = dsnali(&functn[0], &ssid[0], &tecb, &secb, &ribptr, &retcode, &reascode,
|                  NULL, &eibptr);
v For other languages except assembler language, code zero for that parameter in
the CALL DSNALI statement. For example, suppose that you are coding a
CONNECT call in a COBOL program, and you want to specify all parameters
except the return code parameter. You can write a statement similar to the
following statement:
CALL 'DSNALI' USING FUNCTN SSID TECB SECB RIBPTR
BY CONTENT ZERO BY REFERENCE REASCODE SRDURA EIBPTR.
v For assembler language, code a comma for that parameter in the CALL DSNALI
statement. For example, to specify all optional parameters except the return code
parameter, write a statement similar to the following statement:
CALL DSNALI,(FUNCTN,SSID,TERMECB,STARTECB,RIBPTR,,REASCODE,SRDURA,EIBPTR,
GROUPOVERRIDE)
The following figure shows a sample parameter list structure for the CONNECT
function.
The preceding figure illustrates how you can omit parameters for the CALL
DSNALI statement to control the return code and reason code fields after a
CONNECT call. You can terminate the parameter list at any of the following
points. These termination points apply to all CALL DSNALI statement parameter
lists.
1. Terminates the parameter list without specifying the parameters retcode,
reascode, and srdura and places the return code in register 15 and the reason
code in register 0.
Terminating the parameter list at this point ensures compatibility with CAF
programs that require a return code in register 15 and a reason code in register
0.
2. Terminates the parameter list after the parameter retcode and places the return
code in the parameter list and the reason code in register 0.
Terminating the parameter list at this point enables the application program to
take action, based on the return code, without further examination of the
associated reason code.
3. Terminates the parameter list after the parameter reascode and places the return
code and the reason code in the parameter list.
Terminating the parameter list at this point provides support to high-level
languages that are unable to examine the contents of individual registers.
Even if you specify that the return code be placed in the parameter list, it is also
placed in register 15 to accommodate high-level languages that support special
return code processing.
Related concepts
“How CAF modifies the content of registers” on page 54
The following table summarizes CAF behavior after various inputs from
application programs. The top row lists the possible CAF functions that programs
can call. The first column lists the task’s most recent history of connection requests.
For example, the value “CONNECT followed by OPEN” in the first column means
that the task issued CONNECT and then OPEN with no other CAF calls in
between. The intersection of a row and column shows the effect of the next call if
it follows the corresponding connection history. For example, if the call is OPEN
and the connection history is CONNECT, the effect is OPEN; the OPEN function is
performed. If the call is SQL and the connection history is empty (meaning that the
SQL call is the first CAF function that the program issues), the effect is that implicit
CONNECT and OPEN functions are performed, followed by the SQL function.
Table 10. Effects of CAF calls, as dependent on connection history

Previous function: Empty (first call)
    Next CONNECT: CONNECT
    Next OPEN: OPEN
    Next SQL: CONNECT, OPEN, followed by the SQL or IFI call
    Next CLOSE: Error 203 (note 1)
    Next DISCONNECT: Error 204 (note 1)
    Next TRANSLATE: Error 205 (note 1)

Previous function: CONNECT
    Next CONNECT: Error 201 (note 1)
    Next OPEN: OPEN
    Next SQL: OPEN, followed by the SQL or IFI call
    Next CLOSE: Error 203 (note 1)
    Next DISCONNECT: DISCONNECT
    Next TRANSLATE: TRANSLATE

Previous function: CONNECT followed by OPEN
    Next CONNECT: Error 201 (note 1)
    Next OPEN: Error 202 (note 1)
    Next SQL: The SQL or IFI call
    Next CLOSE: CLOSE (note 2)
    Next DISCONNECT: DISCONNECT
    Next TRANSLATE: TRANSLATE

Previous function: CONNECT followed by SQL or IFI call
    Next CONNECT: Error 201 (note 1)
    Next OPEN: Error 202 (note 1)
    Next SQL: The SQL or IFI call
    Next CLOSE: CLOSE (note 2)
    Next DISCONNECT: DISCONNECT
    Next TRANSLATE: TRANSLATE
Notes:
1. An error is shown in this table as Error nnn. The corresponding reason code is
X’00C10nnn’. The message number is DSNAnnnI or DSNAnnnE.
2. The task and address space connections remain active. If the CLOSE call fails
because DB2 was down, the CAF control blocks are reset, the function produces
return code 4 and reason code X’00C10824’, and CAF is ready for more
connection requests when DB2 is up.
3. A TRANSLATE request is accepted, but in this case it is redundant. CAF
automatically issues a TRANSLATE request when an SQL or IFI request fails.
Related reference
“CAF return codes and reason codes” on page 69
You can specify the following CAF functions in a CALL DSNALI statement (a C
sketch of a typical call sequence follows the list):
CONNECT
Establishes the task (TCB) as a user of the named DB2 subsystem. When
the first task within an address space issues a connection request, the
address space is also initialized as a user of DB2.
OPEN Allocates a DB2 plan. You must allocate a plan before DB2 can process SQL
statements. If you did not request the CONNECT function, the OPEN
function implicitly establishes the task, and optionally the address space, as
a user of DB2.
CLOSE
Commits or abnormally terminates any database changes and deallocates
the plan. If the OPEN function implicitly requests the CONNECT function,
the CLOSE function removes the task, and possibly the address space, as a
user of DB2.
DISCONNECT
Removes the task as a user of DB2 and, if this task is the last or only task
in the address space with a DB2 connection, terminates the address space
connection to DB2.
TRANSLATE
Returns an SQL code and printable text that describe a DB2 hexadecimal
error reason code. This information is returned to the SQLCA.
Restriction: You cannot call the TRANSLATE function from the Fortran
language.
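To show how these functions fit together, the following C sketch strings the calls
into a typical order: CONNECT, OPEN, SQL or IFI work, CLOSE, and
DISCONNECT. It is illustrative only: the prototype is modeled on the C++ form
given earlier, the subsystem and plan names are invented, the shorter parameter
lists rely on the optional parameters being omitted from the right, and error
handling is reduced to simple return-code checks.

#pragma linkage(dsnali, OS)
#include <string.h>

int dsnali(char *functn, ...);   /* CAF language interface (assumed prototype) */

int run_with_caf(void)
{
    /* 12-byte, blank-padded CAF function names. */
    static char connect_fn[13]    = "CONNECT     ";
    static char open_fn[13]       = "OPEN        ";
    static char close_fn[13]      = "CLOSE       ";
    static char disconnect_fn[13] = "DISCONNECT  ";

    char ssid[5]   = "DB2A";      /* subsystem name (illustrative)    */
    char plan[9]   = "PLANA   ";  /* plan name (illustrative)         */
    char termop[5] = "SYNC";      /* CLOSE termination option         */
    int  tecb = 0, secb = 0;      /* termination and startup ECBs     */
    void *ribptr;                 /* CAF sets this to the RIB address */
    int  retcode = 0, reascode = 0;
    int  rc;

    rc = dsnali(&connect_fn[0], &ssid[0], &tecb, &secb, &ribptr,
                &retcode, &reascode);
    if (rc != 0) return rc;

    rc = dsnali(&open_fn[0], &ssid[0], &plan[0], &retcode, &reascode);
    if (rc != 0) return rc;

    /* ... issue SQL statements or IFI calls here ... */

    dsnali(&close_fn[0], &termop[0], &retcode, &reascode);
    dsnali(&disconnect_fn[0], &retcode, &reascode);
    return retcode;
}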
The CONNECT function establishes the caller’s task as a user of DB2 services. If
no other task in the address space currently holds a connection with the specified
subsystem, the CONNECT function also initializes the address space for
communication to the DB2 address spaces. The CONNECT function establishes the
address space’s cross memory authorization to DB2 and builds address space
control blocks. You can issue a CONNECT request from any or all tasks in the
address space, but the address space level is initialized only once when the first
task connects.
Using the CONNECT function is optional. If you do not call the CONNECT
function, the first request from a task, either an OPEN request or an SQL or IFI
call, causes CAF to issue an implicit CONNECT request. If a task is connected
implicitly, the connection to DB2 is terminated either when you call the CLOSE
function or when the task terminates.
The CONNECT function also enables the caller to learn the following items:
v That the operator has issued a STOP DB2 command. When this event occurs,
DB2 posts the termination ECB, termecb. Your application can either wait on or
just look at the ECB.
v That DB2 is abnormally terminating. When this event occurs, DB2 posts the
termination ECB, termecb.
v That DB2 is available again after a connection attempt that failed because DB2
was down. Your application can either wait or look at the startup ECB, startecb.
DB2 ignores this ECB if it was active at the time of the CONNECT request.
Restriction: Do not issue CONNECT requests from a TCB that already has an
active DB2 connection.
The following diagram shows the syntax for the CONNECT function.
CALL DSNALI ( function, ssnm, termecb, startecb, ribptr
            [, retcode [, reascode [, srdura [, eibptr [, groupoverride ] ] ] ] ] )
Before you check termecb in your CAF application program, first check the
return code and reason code from the CONNECT call to ensure that the call
completed successfully.
startecb
A 4-byte integer representing the application's startup ECB. If DB2 has not yet
started when the application issues the CONNECT call, DB2 posts the ECB when
DB2 startup has completed.
Note:
v For C and PL/I applications, you must include the appropriate compiler
directives, because DSNALI is an assembler language program. These compiler
directives are described in the instructions for invoking CAF.
Related concepts
“Examples of invoking CAF” on page 71
Related tasks
“Invoking the call attachment facility” on page 46
Related information
Synchronizing Tasks (WAIT, POST, and EVENTS Macros) (MVS Programming:
Assembler Services Guide)
Using the OPEN function is optional. If you do not call the OPEN function, the
actions that the OPEN function perform occur implicitly on the first SQL or IFI call
from the task.
Restriction: Do not use the OPEN function if the task already has a plan allocated.
The following diagram shows the syntax for the OPEN function.
CALL DSNALI ( function, ssnm, plan [, retcode [, reascode [, groupoverride ] ] ] )
Note:
v For C and PL/I applications, you must include the appropriate compiler
directives, because DSNALI is an assembler language program. These compiler
directives are described in the instructions for invoking CAF.
Related concepts
“Implicit connections to CAF” on page 54
Related tasks
“Invoking the call attachment facility” on page 46
Using the CLOSE function is optional. Consider the following rules and
recommendations about when to use and not use the CLOSE function:
v Do not use the CLOSE function when your current task does not have a plan
allocated.
v If you want to use a new plan, you must issue an explicit CLOSE call, followed
by an OPEN call with the new plan name.
v When you shut down your application, you can improve the performance of the
shutdown by explicitly calling the CLOSE function before the task terminates. If
you omit the CLOSE call, DB2 performs an implicit CLOSE. In this case, DB2
performs the same actions when your task terminates, by using the SYNC
parameter if termination is normal and the ABRT parameter if termination is
abnormal.
v If DB2 terminates, issue an explicit CLOSE call for any task that did not issue a
CONNECT call. This action enables CAF to reset its control blocks to allow for
future connections. This CLOSE call returns the reset accomplished return code
(+004) and reason code X’00C10824’. If you omit the CLOSE call in this case,
when DB2 is back on line, the task’s next connection request fails. You get either
the message YOUR TCB DOES NOT HAVE A CONNECTION, with X’00F30018’
in register 0, or the CAF error message DSNA201I or DSNA202I, depending on
what your application tried to do. The task must then issue a CLOSE call before
it can reconnect to DB2.
v A task that issued an explicit CONNECT call should issue a DISCONNECT call
instead of a CLOSE call. This action causes CAF to reset its control blocks when
DB2 terminates.
The following diagram shows the syntax for the CLOSE function.
Note:
v For C and PL/I applications, you must include the appropriate compiler
directives, because DSNALI is an assembler language program. These compiler
directives are described in the instructions for invoking CAF.
Related tasks
“Invoking the call attachment facility” on page 46
DISCONNECT removes the calling task’s connection to DB2. If no other task in the
address space has an active connection to DB2, DB2 also deletes the control block
structures that were created for the address space and removes the cross memory
authorization.
If an OPEN call is in effect, which means that a plan is allocated, when the
DISCONNECT call is issued, CAF issues an implicit CLOSE with the SYNC
parameter.
Using the DISCONNECT function is optional. Consider the following rules and
recommendations about when to use and not use the DISCONNECT function:
v Only those tasks that explicitly issued a CONNECT call can issue a
DISCONNECT call. If a CONNECT call was not used, a DISCONNECT call
causes an error.
v When you shut down your application, you can improve the performance of the
shutdown by explicitly calling the DISCONNECT function before the task
terminates. If you omit the DISCONNECT call, DB2 performs an implicit
DISCONNECT. In this case, DB2 performs the same actions when your task
terminates.
v If DB2 terminates, any task that issued a CONNECT call must issue a
DISCONNECT call to reset the CAF control blocks. The DISCONNECT function
The following diagram shows the syntax for the DISCONNECT function.
Note:
v For C and PL/I applications, you must include the appropriate compiler
directives, because DSNALI is an assembler language program. These compiler
directives are described in the instructions for invoking CAF.
The DB2 error reason code that is converted is read from register 0. The
TRANSLATE function does not change the contents of registers 0 and 15, unless
the TRANSLATE request fails; in that case, register 0 is set to X'00C10205' and
register 15 is set to 200.
Consider the following rules and recommendations about when to use and not use
the TRANSLATE function:
v You cannot call the TRANSLATE function from the Fortran language.
v The TRANSLATE function is useful only if you used an explicit CONNECT call
before an OPEN request that fails. For errors that occur during SQL or IFI
requests, the TRANSLATE function performs automatically.
v The TRANSLATE function can translate those codes that begin with X’00F3’, but
it does not translate CAF reason codes that begin with X’00C1’.
If you receive error reason code X’00F30040’ (resource unavailable) after an OPEN
request, the TRANSLATE function returns the name of the unavailable database
object in the last 44 characters of the SQLERRM field.
If the TRANSLATE function does not recognize the error reason code, it returns
SQLCODE -924 (SQLSTATE ’58006’) and places a printable copy of the original
DB2 function code and the return and error reason codes in the SQLERRM field.
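As an illustration of an explicit TRANSLATE request, the following C fragment
calls TRANSLATE after an OPEN request fails. The parameter list shown here (the
function name followed by the SQLCA and the optional return and reason code
parameters) and the variable names are assumptions for this sketch.

#pragma linkage(dsnali, OS)

EXEC SQL INCLUDE SQLCA;

int dsnali(char *functn, ...);   /* CAF language interface (assumed prototype) */

/* Call after a failed OPEN to obtain SQL message information in the SQLCA. */
void translate_open_failure(void)
{
    static char translate_fn[13] = "TRANSLATE   ";  /* 12 bytes, blank padded */
    int retcode = 0, reascode = 0;

    dsnali(&translate_fn[0], &sqlca, &retcode, &reascode);
    /* The SQLCODE and SQLERRM fields now describe the DB2 error reason code. */
}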
The following diagram shows the syntax for the TRANSLATE function.
Note:
v For C and PL/I applications, you must include the appropriate compiler
directives, because DSNALI is an assembler language program. These compiler
directives are described in the instructions for invoking CAF.
Related tasks
“Invoking the call attachment facility” on page 46
When the reason code begins with X'00F3', except for X'00F30006', you can use the
CAF TRANSLATE function to obtain error message text that can be printed and
displayed. These reason codes are issued by the subsystem support for allied
memories, a part of the DB2 subsystem support subcomponent that services all
DB2 connection and work requests.
For SQL calls, CAF returns standard SQL codes in the SQLCA. CAF returns IFI
return codes and reason codes in the instrumentation facility communication area
(IFCA).
The following table lists the CAF return codes and reason codes.
The simplest connection scenario is a single task that makes calls to DB2 without
using explicit CALL DSNALI statements. The task implicitly connects to the
default subsystem name and uses the default plan name.
A task can have a connection to only one DB2 subsystem at any point in time. A
CAF error occurs if the subsystem name in the OPEN call does not match the
subsystem name in the CONNECT call. To switch to a different subsystem, the
application must first disconnect from the current subsystem and then issue a
connect request with a new subsystem name.
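As a small illustration of that rule, the following C fragment disconnects from the
current subsystem and connects to another one. The prototype, the subsystem
name DB2B, and the omission of error handling are assumptions for this sketch.

#pragma linkage(dsnali, OS)
#include <string.h>

int dsnali(char *functn, ...);   /* CAF language interface (assumed prototype) */

/* Illustrative only: move a task's connection from its current subsystem to DB2B. */
void switch_subsystem(void)
{
    static char disconnect_fn[13] = "DISCONNECT  ";  /* 12 bytes, blank padded */
    static char connect_fn[13]    = "CONNECT     ";
    char ssid[5] = "DB2B";                           /* new subsystem name (assumed) */
    int  tecb = 0, secb = 0, retcode = 0, reascode = 0;
    void *ribptr;

    dsnali(&disconnect_fn[0], &retcode, &reascode);  /* end the current connection */
    dsnali(&connect_fn[0], &ssid[0], &tecb, &secb, &ribptr, &retcode, &reascode);
}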
Multiple tasks
In the following scenario, multiple tasks within the address space use DB2 services.
Each task must explicitly specify the same subsystem name on either the
CONNECT function request or the OPEN function request. Task 1 makes no SQL
or IFI calls. Its purpose is to monitor the DB2 termination and startup ECBs and to
check the DB2 release level.
TASK 1 TASK 2 TASK 3 TASK n
CONNECT
OPEN OPEN OPEN
SQL SQL SQL
... ... ...
CLOSE CLOSE CLOSE
OPEN OPEN OPEN
SQL SQL SQL
... ... ...
CLOSE CLOSE CLOSE
DISCONNECT
The following sample JCL shows how to use CAF in a batch (non-TSO)
environment. The DSNTRACE statement in this example is optional.
//jobname JOB z/OS_jobcard_information
//CAFJCL EXEC PGM=CAF_application_program
//STEPLIB DD DSN=application_load_library
// DD DSN=DB2_load_library
.
.
.
//SYSPRINT DD SYSOUT=*
//DSNTRACE DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
The following examples show parts of a sample assembler program that uses CAF.
They demonstrate the basic techniques for making CAF calls, but do not show the
code and z/OS macros needed to support those calls. For example, many
applications need a two-task structure so that attention-handling routines can
detach connected subtasks to regain control from DB2. This structure is not shown
in the following code examples. Also, these code examples assume the existence of
a WRITE macro. Wherever this macro is included in the example, substitute code
of your own. You must decide what you want your application to do in those
situations; you probably do not want to write the error messages shown.
Example of loading and deleting the CAF language interface: The following code
segment shows how an application can load entry points DSNALI and DSNHLI2
for the CAF language interface. Storing the entry points in variables LIALI and
LISQL ensures that the application has to load the entry points only once. When
the module is done with DB2, you should delete the entries.
****************************** GET LANGUAGE INTERFACE ENTRY ADDRESSES
LOAD EP=DSNALI Load the CAF service request EP
ST R0,LIALI Save this for CAF service requests
LOAD EP=DSNHLI2 Load the CAF SQL call Entry Point
ST R0,LISQL Save this for SQL calls
* .
* . Insert connection service requests and SQL calls here
* .
DELETE EP=DSNALI Correctly maintain use count
DELETE EP=DSNHLI2 Correctly maintain use count
Example of connecting to DB2 with CAF: The following example code shows
how to issue explicit requests for certain actions, such as CONNECT, OPEN,
CLOSE, DISCONNECT, and TRANSLATE, and uses the CHEKCODE subroutine to
check the return and reason codes from CAF.
****************************** CONNECT ********************************
L R15,LIALI Get the Language Interface address
MVC FUNCTN,CONNECT Get the function to call
CALL (15),(FUNCTN,SSID,TECB,SECB,RIBPTR),VL,MF=(E,CAFCALL)
BAL R14,CHEKCODE Check the return and reason codes
CLC CONTROL,CONTINUE Is everything still OK
BNE EXIT If CONTROL not 'CONTINUE', stop loop
USING RIB,R8 Prepare to access the RIB
L R8,RIBPTR Access RIB to get DB2 release level
WRITE 'The current DB2 release level is' RIBREL
This example code does not show a task that waits on the DB2 termination ECB. If
you want such a task, you can code it by using the z/OS WAIT macro to monitor
the ECB. You probably want this task to detach the sample code if the termination
ECB is posted. That task can also wait on the DB2 startup ECB. This sample waits
on the startup ECB at its own task level.
This example code assumes that the variables in the following table are already set:
Table 18. Variables that the preceding example assembler code assumes are set
Variable Usage
LIALI The entry point that handles DB2 connection
service requests.
LISQL The entry point that handles SQL calls.
SSID The DB2 subsystem identifier.
TECB The address of the DB2 termination ECB.
SECB The address of the DB2 startup ECB.
RIBPTR A fullword that CAF sets to contain the RIB
address.
PLAN The plan name to use in the OPEN call.
CONTROL This variable is used to shut down
processing because of unsatisfactory return
or reason codes. The CHEKCODE
subroutine sets this value.
CAFCALL List-form parameter area for the CALL
macro.
Example of checking return codes and reason codes when using CAF: The
following example code illustrates a way to check the return codes and the DB2
termination ECB after each connection service request and SQL call. The routine
sets the variable CONTROL to control further processing within the module.
***********************************************************************
* CHEKCODE PSEUDOCODE *
***********************************************************************
*IF TECB is POSTed with the ABTERM or FORCE codes
* THEN
* CONTROL = 'SHUTDOWN'
* WRITE 'DB2 found FORCE or ABTERM, shutting down'
* ELSE /* Termination ECB was not POSTed */
* SELECT (RETCODE) /* Look at the return code */
* WHEN (0) ; /* Do nothing; everything is OK */
Example of invoking CAF when you do not specify the precompiler option
ATTACH(CAF): Each of the four DB2 attachment facilities contains an entry point
named DSNHLI. When you use CAF but do not specify the precompiler option
ATTACH(CAF), SQL statements result in BALR instructions to DSNHLI in your
program. To find the correct DSNHLI entry point without including DSNALI in
your load module, code a subroutine with entry point DSNHLI that passes control
to entry point DSNHLI2 in the DSNALI module. DSNHLI2 is unique to DSNALI
and is at the same location in DSNALI as DSNHLI. DSNALI uses 31-bit
addressing. If the application that calls this intermediate subroutine uses 24-bit
addressing, this subroutine should account for the difference.
In the following example, LISQL is addressable because the calling CSECT used
the same register 12 as CSECT DSNHLI. Your application must also establish
addressability to LISQL.
***********************************************************************
* Subroutine DSNHLI intercepts calls to LI EP=DSNHLI
***********************************************************************
DS 0D
DSNHLI CSECT Begin CSECT
STM R14,R12,12(R13) Prologue
LA R15,SAVEHLI Get save area address
ST R13,4(,R15) Chain the save areas
ST R15,8(,R13) Chain the save areas
LR R13,R15 Put save area address in R13
L R15,LISQL Get the address of real DSNHLI
BASSM R14,R15 Branch to DSNALI to do an SQL call
* DSNALI is in 31-bit mode, so use
* BASSM to assure that the addressing
* mode is preserved.
L R13,4(,R13) Restore R13 (caller's save area addr)
L R14,12(,R13) Restore R14 (return address)
RETURN (1,12) Restore R1-12, NOT R0 and R15 (codes)
Example of variable declarations when using CAF: The following example code
shows declarations for some of the variables that were used in the previous
subroutines.
****************************** VARIABLES ******************************
SECB DS F DB2 Startup ECB
TECB DS F DB2 Termination ECB
LIALI DS F DSNALI Entry Point address
LISQL DS F DSNHLI2 Entry Point address
SSID DS CL4 DB2 Subsystem ID. CONNECT parameter
PLAN DS CL8 DB2 Plan name. OPEN parameter
TRMOP DS CL4 CLOSE termination option (SYNC|ABRT)
FUNCTN DS CL12 CAF function to be called
RIBPTR DS F DB2 puts Release Info Block addr here
RETCODE DS F Chekcode saves R15 here
REASCODE DS F Chekcode saves R0 here
CONTROL DS CL8 GO, SHUTDOWN, or RESTART
SAVEAREA DS 18F Save area for CHEKCODE
****************************** CONSTANTS ******************************
SHUTDOWN DC CL8'SHUTDOWN' CONTROL value: Shutdown execution
To invoke RRSAF:
1. Perform one of the following actions:
v Explicitly invoke RRSAF by including in your program CALL DSNRLI
statements with the appropriate options. (A C sketch of a typical explicit call
sequence follows this procedure.)
The first option is an RRSAF connection function, which describes the action
that you want RRSAF to take. The effect of any function depends in part on
what functions the program has already performed.
To code RRSAF functions in C, COBOL, Fortran, or PL/I, follow the
individual language’s rules for making calls to assembler language routines.
Specify the return code and reason code parameters in the parameter list for
each RRSAF call.
Requirement: For C, C++, and PL/I applications, you must also include in
your program the compiler directives that are listed in the following table,
because DSNRLI is an assembler language program.
Table 19. Compiler directives to include in C, C++, and PL/I applications that contain CALL
DSNRLI statements
Language Compiler directive to include
C #pragma linkage(dsnrli, OS)
C++ extern "OS" {
int DSNRLI(
char * functn,
...); }
PL/I DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
Restriction: If your program can make its first SQL call from different
modules with different DBRMs, you cannot use a default plan name and
thus, you cannot implicitly invoke RRSAF. Instead, you must explicitly
invoke RRSAF by calling the CREATE THREAD function.
Requirement: If your application includes both SQL and IFI calls, you must
issue at least one SQL call before you issue any IFI calls. This action ensures
that your application uses the correct plan.
2. If you implicitly invoked RRSAF, determine if the implicit connection was
successful by examining the return code and reason code immediately after the
first executable SQL statement within the application program. Your program
can check these codes by performing one of the following actions:
v Examine registers 0 and 15 directly.
v Examine the SQLCA, and if the SQLCODE is -981, obtain the return and
reason code from the message text. The return code is the first token, and the
reason code is the second token.
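The following C sketch strings the explicit calls into one possible order:
IDENTIFY, SIGNON, CREATE THREAD, SQL work, commit, TERMINATE
THREAD, and TERMINATE IDENTIFY. It is illustrative only: the prototype is
modeled on the C++ form in the preceding table, the parameter lists and field
lengths follow the syntax descriptions later in this information, the subsystem,
correlation ID, and plan values are invented, and error handling is reduced to
simple return-code checks.

#pragma linkage(dsnrli, OS)
#include <string.h>

int dsnrli(char *functn, ...);   /* RRSAF language interface (assumed prototype) */

int run_with_rrsaf(void)
{
    /* 18-byte, blank-padded RRSAF function names (assumed field length). */
    static char identify_fn[19] = "IDENTIFY          ";
    static char signon_fn[19]   = "SIGNON            ";
    static char create_fn[19]   = "CREATE THREAD     ";
    static char termthd_fn[19]  = "TERMINATE THREAD  ";
    static char termid_fn[19]   = "TERMINATE IDENTIFY";

    char ssnm[5]    = "DB2A";           /* subsystem name (illustrative)           */
    char corrid[13] = "EXAMPLE     ";   /* correlation ID, 12 bytes (illustrative) */
    char acctok[22];                    /* accounting token, blanks                */
    char acctint[6];                    /* accounting interval, blanks             */
    char plan[9]    = "PLANA   ";       /* plan name (illustrative)                */
    char collection[18];                /* collection ID area, blanks              */
    char reuse[9]   = "INITIAL ";       /* RESET or INITIAL                        */
    int  termecb = 0, startecb = 0;
    void *ribptr, *eibptr;
    int  retcode = 0, reascode = 0, rc;

    memset(acctok, ' ', sizeof acctok);
    memset(acctint, ' ', sizeof acctint);
    memset(collection, ' ', sizeof collection);

    rc = dsnrli(&identify_fn[0], &ssnm[0], &ribptr, &eibptr, &termecb, &startecb,
                &retcode, &reascode);
    if (rc != 0) return rc;

    rc = dsnrli(&signon_fn[0], &corrid[0], &acctok[0], &acctint[0],
                &retcode, &reascode);
    if (rc != 0) return rc;

    rc = dsnrli(&create_fn[0], &plan[0], &collection[0], &reuse[0],
                &retcode, &reascode);
    if (rc != 0) return rc;

    /* ... issue SQL statements here; commit with SRRCMIT or SQL COMMIT ... */

    dsnrli(&termthd_fn[0], &retcode, &reascode);
    dsnrli(&termid_fn[0], &retcode, &reascode);
    return retcode;
}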
The following figure shows a conceptual example of invoking and using RRSAF.
Any task in an address space can establish a connection to DB2 through RRSAF.
Each task control block (TCB) can have only one connection to DB2. A DB2 service
request that is issued by a program that runs under a given task is associated with
that task’s connection to DB2. The service request operates independently of any
DB2 activity under any other task.
Each connected task can run a plan. Tasks within a single address space can
specify the same plan, but each instance of a plan runs independently from the
others. A task can terminate its plan and run a different plan without completely
breaking its connection to DB2.
When you design your application, consider that using multiple simultaneous
connections can increase the possibility of deadlocks and DB2 resource contention.
To commit work in RRSAF applications, use the CPIC SRRCMIT function or the
DB2 COMMIT statement. To roll back work, use the CPIC SRRBACK function or
the DB2 ROLLBACK statement.
Use the following guidelines to decide whether to use the DB2 statements or the
CPIC functions for commit and rollback operations (a brief C sketch follows this
list):
v Use DB2 COMMIT and ROLLBACK statements when all of the following
conditions are true:
– The only recoverable resource that is accessed by your application is DB2 data
that is managed by a single DB2 instance.
DB2 COMMIT and ROLLBACK statements fail if your RRSAF application
accesses recoverable resources other than DB2 data that is managed by a
single DB2 instance.
– The address space from which syncpoint processing is initiated is the same as
the address space that is connected to DB2.
v If your application accesses other recoverable resources, or syncpoint processing
and DB2 access are initiated from different address spaces, use SRRCMIT and
SRRBACK.
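As a small illustration of the second case, the following C fragment commits or
rolls back through RRS rather than through SQL. The prototypes shown here are
assumptions for this sketch (both services are called with no parameters and
return a return code); use the declarations that your z/OS compiler environment
provides.

#pragma linkage(SRRCMIT, OS)
#pragma linkage(SRRBACK, OS)

int SRRCMIT(void);   /* RRS commit  (assumed prototype) */
int SRRBACK(void);   /* RRS backout (assumed prototype) */

/* Commit if the unit of work succeeded; otherwise roll it back. */
int end_unit_of_work(int work_succeeded)
{
    return work_succeeded ? SRRCMIT() : SRRBACK();
}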
Related reference
COMMIT (SQL Reference)
ROLLBACK (SQL Reference)
Related information
z/OS Internet Library at ibm.com
The connection that RRSAF makes with DB2 has the basic properties that are listed
in the following table.
Table 20. Properties of RRSAF connections

Connection name
    Value: RRSAF
    Comments: You can use the DISPLAY THREAD command to list RRSAF
    applications that have the connection name RRSAF.

Connection type
    Value: RRSAF
    Comments: None.
If DB2 abends while an application is running, DB2 rolls back changes to the last
commit point. If DB2 terminates while processing a commit request, DB2 either
commits or rolls back any changes at the next restart. The action taken depends on
the state of the commit request when DB2 terminates.
Part of RRSAF is a DB2 load module, DSNRLI, which is also known as the RRSAF
language interface module. DSNRLI has the alias names DSNHLIR and DSNWLIR.
The module has five entry points: DSNRLI, DSNHLI, DSNHLIR, DSNWLI, and
DSNWLIR. These entry points serve the following functions:
v Entry point DSNRLI handles explicit DB2 connection service requests.
v DSNHLI and DSNHLIR handle SQL calls. Use DSNHLI if your application
program link-edits RRSAF. Use DSNHLIR if your application program loads
RRSAF.
v DSNWLI and DSNWLIR handle IFI calls. Use DSNWLI if your application
program link-edits RRSAF. Use DSNWLIR if your application program loads
RRSAF.
When you write programs that use RRSAF, ensure that they meet the following
requirements:
v The program accounts for the size of the RRSAF code. The RRSAF code requires
about 10 KB of virtual storage per address space and an additional 10 KB for
each TCB that uses RRSAF.
v If your local environment intercepts and replaces the z/OS LOAD SVC that
RRSAF uses, you must ensure that your version of LOAD manages the load list
element (LLE) and contents directory entry (CDE) chains like the standard z/OS
LOAD macro. RRSAF uses z/OS SVC LOAD to load a module as part of the
initialization after your first service request. The module is loaded into
fetch-protected storage that has the job-step protection key.
You can prepare application programs to run in RRSAF in much the same way as
you prepare applications to run in other environments, such as CICS, IMS, and
TSO. You can
prepare an RRSAF application either in the batch environment or by using the DB2
program preparation process. You can use the program preparation system either
through DB2I or through the DSNH CLIST.
Related tasks
Chapter 17, “Preparing an application to run on DB2 for z/OS,” on page 887
If you specify the return code and reason code parameters, RRSAF places the
return code in register 15 and in the return code parameter to accommodate
high-level languages that support special return code processing.
The following table summarizes the register conventions for RRSAF calls.
Table 21. Register conventions for RRSAF calls
Register Usage
R1 Parameter list pointer
R13 Address of caller’s save area
R14 Caller’s return address
R15 RRSAF entry point address
For an implicit connection request, your application should not explicitly specify
either the IDENTIFY function or the CREATE THREAD function. Your application
can execute other explicit RRSAF calls after the implicit connection is made. An
implicit connection does not perform any SIGNON processing. Your application
can execute the SIGNON function at any point of consistency. To terminate an
implicit connection, you must use the proper function calls.
For implicit connection requests, register 15 contains the return code, and register 0
contains the reason code. The return code and reason code are also in the message
text for SQLCODE -981.
Related concepts
“Summary of RRSAF behavior” on page 87
In CALL DSNRLI statements, you cannot omit any of the parameters that come before
the return code parameter by coding zeros or blanks. No defaults exist for those
parameters for explicit connection requests. Defaults are provided for only implicit
connections. All parameters starting with the return code parameter are optional.
For assembler programs that invoke RRSAF, use a standard parameter list for a
z/OS CALL. Register 1 must contain the address of a list of pointers to the
parameters. Each pointer is a 4-byte address. The last address must contain the
value 1 in the high-order bit.
The following tables summarize RRSAF behavior after various inputs from
application programs. The contents of each table cell indicate the result of calling
the function in the first column for that row followed by the function in the
current column heading. For example, if you issue TERMINATE THREAD and
then IDENTIFY, RRSAF returns reason code X’00C12201’. Use these tables to
understand the order in which your application must issue RRSAF calls, SQL
statements, and IFI requests.
The following table summarizes RRSAF behavior when the next call is to the
IDENTIFY function, the SWITCH TO function, the SIGNON function, or the
CREATE THREAD function.
Table 22. Effect of call order when next call is IDENTIFY, SWITCH TO, SIGNON, or
CREATE THREAD

Previous function: Empty (first call)
    Next IDENTIFY: IDENTIFY
    Next SWITCH TO: X'00C12205' (note 1)
    Next SIGNON, AUTH SIGNON, or CONTEXT SIGNON: X'00C12204' (note 1)
    Next CREATE THREAD: X'00C12204' (note 1)

Previous function: IDENTIFY
    Next IDENTIFY: X'00F30049' (note 1)
    Next SWITCH TO: Switch to ssnm
    Next SIGNON, AUTH SIGNON, or CONTEXT SIGNON: Signon (note 2)
    Next CREATE THREAD: X'00C12217' (note 1)
| The following table summarizes RRSAF behavior when the next call is an SQL
| statement or an IFI call or to the TERMINATE THREAD function, the TERMINATE
| IDENTIFY function, or the TRANSLATE function.
Table 23. Effect of call order when next call is SQL or IFI, TERMINATE THREAD,
TERMINATE IDENTIFY, or TRANSLATE

Previous function: Empty (first call)
    Next SQL or IFI: SQL or IFI call (note 4)
    Next TERMINATE THREAD: X'00C12204' (note 1)
    Next TERMINATE IDENTIFY: X'00C12204' (note 1)
    Next TRANSLATE: X'00C12204' (note 1)

Previous function: IDENTIFY
    Next SQL or IFI: SQL or IFI call (note 4)
    Next TERMINATE THREAD: X'00C12203' (note 1)
    Next TERMINATE IDENTIFY: TERMINATE IDENTIFY
    Next TRANSLATE: TRANSLATE

Previous function: SWITCH TO
    Next SQL or IFI: SQL or IFI call (note 4)
    Next TERMINATE THREAD: TERMINATE THREAD
    Next TERMINATE IDENTIFY: TERMINATE IDENTIFY
    Next TRANSLATE: TRANSLATE

Previous function: SIGNON, AUTH SIGNON, or CONTEXT SIGNON
    Next SQL or IFI: SQL or IFI call (note 4)
    Next TERMINATE THREAD: TERMINATE THREAD
    Next TERMINATE IDENTIFY: TERMINATE IDENTIFY
    Next TRANSLATE: TRANSLATE

Previous function: CREATE THREAD
    Next SQL or IFI: SQL or IFI call (note 4)
    Next TERMINATE THREAD: TERMINATE THREAD
    Next TERMINATE IDENTIFY: TERMINATE IDENTIFY
    Next TRANSLATE: TRANSLATE

Previous function: TERMINATE THREAD
    Next SQL or IFI: SQL or IFI call (note 4)
    Next TERMINATE THREAD: X'00C12203' (note 1)
    Next TERMINATE IDENTIFY: TERMINATE IDENTIFY
    Next TRANSLATE: TRANSLATE

Previous function: IFI
    Next SQL or IFI: SQL or IFI call (note 4)
    Next TERMINATE THREAD: TERMINATE THREAD
    Next TERMINATE IDENTIFY: TERMINATE IDENTIFY
    Next TRANSLATE: TRANSLATE

Previous function: SQL
    Next SQL or IFI: SQL or IFI call (note 4)
    Next TERMINATE THREAD: X'00F30093' (note 12)
    Next TERMINATE IDENTIFY: X'00F30093' (note 13)
    Next TRANSLATE: TRANSLATE

Previous function: SRRCMIT or SRRBACK
    Next SQL or IFI: SQL or IFI call (note 4)
    Next TERMINATE THREAD: TERMINATE THREAD
    Next TERMINATE IDENTIFY: TERMINATE IDENTIFY
    Next TRANSLATE: TRANSLATE
Related concepts
X’F3......’ codes (DB2 Codes)
The IDENTIFY function establishes the caller’s task as a user of DB2 services. If no
other task in the address space currently is connected to the specified subsystem,
the IDENTIFY function also initializes the address space to communicate with the
DB2 address spaces. The IDENTIFY function establishes the cross-memory
authorization of the address space to DB2 and builds address space control blocks.
The following diagram shows the syntax for the IDENTIFY function.
CALL DSNRLI ( function, ssnm, ribptr, eibptr, termecb, startecb
            [, retcode [, reascode [, groupoverride [, decpptr ] ] ] ] )
startecb
The address of the application’s startup ECB. If DB2 has not started when the
application issues the IDENTIFY call, DB2 posts the ECB when DB2 has
started. Enter a value of zero if you do not want to use a startup ECB. DB2
Note:
1. For C, C++, and PL/I applications, you must include the appropriate compiler
directives, because DSNRLI is an assembler language program. These compiler
directives are described in the instructions for invoking RRSAF.
| When you call the IDENTIFY function, DB2 performs the following steps:
| 1. DB2 determines whether the user address space is authorized to connect to
| DB2. DB2 invokes the z/OS SAF and passes a primary authorization ID to SAF.
| That authorization ID is the 7-byte user ID that is associated with the address
| space, unless an authorized function has built an ACEE for the address space.
| If an authorized function has built an ACEE, DB2 passes the 8-byte user ID
| from the ACEE. SAF calls an external security product, such as RACF, to
| determine if the task is authorized to use the following items:
| v The DB2 resource class (CLASS=DSNR)
| v The DB2 subsystem (SUBSYS=ssnm)
| v Connection type RRSAF
| 2. If that check is successful, DB2 calls the DB2 connection exit routine to perform
| additional verification and possibly change the authorization ID.
| 3. DB2 searches for a matching trusted context in the system cache and then the
| catalog based on the following criteria:
| v The primary authorization ID matches a trusted context SYSTEM AUTHID.
| v The job or started task name matches the JOBNAME attribute that is defined
| for the identified trusted context.
| If a trusted context is defined, DB2 checks if SECURITY LABEL is defined in
| the trusted context. If SECURITY LABEL is defined, DB2 verifies the SECURITY
| LABEL with RACF by using the RACROUTE VERIFY request. This security
| label is used to verify multi-level security for SYSTEM AUTHID. If a matching
| trusted context is defined, DB2 establishes the connection as trusted. Otherwise,
| the connection is established without any additional privileges.
| 4. DB2 then sets the connection name to RRSAF and the connection type to
| RRSAF.
Related tasks
“Invoking the Resource Recovery Services attachment facility” on page 77
The first time that you make a SWITCH TO call to a new DB2 subsystem, DB2
returns return code 4 and reason code X’00C12205’ as a warning to indicate that
the current task has not yet been identified to the new DB2 subsystem.
The following diagram shows the syntax for the SWITCH TO function.
CALL DSNRLI ( function, ssnm [, retcode [, reascode [, groupoverride ] ] ] )
Examples
1. For C, C++, and PL/I applications, you must include the appropriate compiler
directives, because DSNRLI is an assembler language program. These compiler
directives are described in the instructions for invoking RRSAF.
Generally, you issue a SIGNON call after an IDENTIFY call and before a CREATE
THREAD call. You can also issue a SIGNON call if the application is at a point of
consistency, and one of the following conditions is true:
v The value of reuse in the CREATE THREAD call was RESET.
v The value of reuse in the CREATE THREAD call was INITIAL, no held cursors
are open, the package or plan is bound with KEEPDYNAMIC(NO), and all
special registers are at their initial state. If open held cursors exist or the package
or plan is bound with KEEPDYNAMIC(YES), you can issue a SIGNON call only
if the primary authorization ID has not changed.
| After you issue a SIGNON call, subsequent SQL statements return an error
| (SQLCODE -900) if both of the following conditions are true:
| v The connection was established as trusted when it was initialized.
| v The primary authorization ID that was used when you issued the SIGNON call
| is not allowed to use the trusted connection.
The following diagram shows the syntax for the SIGNON function.
CALL DSNRLI ( function, correlation-id, accounting-token, accounting-interval
            [, retcode [, reascode [, user [, appl [, ws [, xid
            [, accounting-string ] ] ] ] ] ] ] )
accounting-string
A one-byte length field and a 255-byte area in which you can put a value for a
DB2 accounting string. This value is placed in the DDF accounting trace
records in the QMDASQLI field, which is mapped by DSNDQMDA DSECT. If
accounting-string is less than 255 characters, you must pad it on the right with
zeros to a length of 255 bytes. The entire 256 bytes is mapped by DSNDQMDA
DSECT.
This parameter is optional. If you specify accounting-string, you must also
specify retcode, reascode, user, appl, ws, and xid. If you do not specify
accounting-string, no accounting string is associated with the connection.
You can also change the value of the accounting string with RRSAF functions
AUTH SIGNON, CONTEXT SIGNON, or SET_CLIENT_ID.
You can retrieve the DDF suffix portion of the accounting string with the
CURRENT CLIENT_ACCTNG special register. The suffix portion of
accounting-string can contain a maximum of 200 characters. The QMDASFLN
field contains the accounting suffix length, and the QMDASUFX field contains
the accounting suffix value. If the DDF accounting string is set, you cannot
query the accounting token with the CURRENT CLIENT_ACCTNG special
register.
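For example, an embedded SQL C program might retrieve the suffix into a host
variable with a statement similar to the following sketch; the host-variable name
and its length are assumptions based on the 200-character maximum described
above.

EXEC SQL BEGIN DECLARE SECTION;
   char acct_suffix[201];                /* up to 200 characters plus a null */
EXEC SQL END DECLARE SECTION;

EXEC SQL SET :acct_suffix = CURRENT CLIENT_ACCTNG;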
Note:
1. For C, C++, and PL/I applications, you must include the appropriate compiler
directives, because DSNRLI is an assembler language program. These compiler
directives are described in the instructions for invoking RRSAF.
Related tasks
“Invoking the Resource Recovery Services attachment facility” on page 77
Related information
z/OS Internet Library at ibm.com
Generally, you issue an AUTH SIGNON call after an IDENTIFY call and before a
CREATE THREAD call. You can also issue an AUTH SIGNON call if the
application is at a point of consistency, and one of the following conditions is true:
v The value of reuse in the CREATE THREAD call was RESET.
v The value of reuse in the CREATE THREAD call was INITIAL, no held cursors
are open, the package or plan is bound with KEEPDYNAMIC(NO), and all
special registers are at their initial state. If open held cursors exist or the package
or plan is bound with KEEPDYNAMIC(YES), a SIGNON call is permitted only if
the primary authorization ID has not changed.
The following diagram shows the syntax for the AUTH SIGNON function.
CALL DSNRLI ( function, correlation-id, accounting-token, accounting-interval,
            primary-authid, ACEE-address, secondary-authid
            [, retcode [, reascode [, user [, appl [, ws [, xid
            [, accounting-string ] ] ] ] ] ] ] )
Note:
1. For C, C++, and PL/I applications, you must include the appropriate compiler
directives, because DSNRLI is an assembler language program. These compiler
directives are described in the instructions for invoking RRSAF.
Related tasks
“Invoking the Resource Recovery Services attachment facility” on page 77
Related reference
“SIGNON function for RRSAF” on page 95
Requirement: Before you invoke CONTEXT SIGNON, you must have called the
RRS context services function Set Context Data (CTXSDTA) to store a primary
authorization ID and optionally, the address of an ACEE in the context data whose
context key you supply as input to CONTEXT SIGNON.
The CONTEXT SIGNON function uses the context key to retrieve the primary
authorization ID from data that is associated with the current RRS context. DB2
uses the RRS context services function Retrieve Context Data (CTXRDTA) to
retrieve context data that contains the authorization ID and ACEE address. The
context data must have the following format (a C structure sketch follows the list):
Version number
A 4-byte area that contains the version number of the context data. Set this
area to 1.
Server product name
An 8-byte area that contains the name of the server product that set the
context data.
ALET A 4-byte area that can contain an ALET value. DB2 does not reference this
area.
ACEE address
A 4-byte area that contains an ACEE address or 0 if an ACEE is not
provided. DB2 requires that the ACEE is in the home address space of the
task.
If you pass an ACEE address, the CONTEXT SIGNON function uses the
value in ACEEGRPN as the secondary authorization ID if the length of the
group name (ACEEGRPL) is not 0.
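Purely as an illustration of the layout that is listed above, the context data could
be described in C as follows; the structure and field names are inventions for this
sketch.

/* Layout of the context data that CONTEXT SIGNON retrieves with CTXRDTA. */
struct context_signon_data {
    int  version;              /* version number of the context data; set to 1 */
    char server_product[8];    /* name of the server product that set the data */
    int  alet;                 /* ALET value; DB2 does not reference this area  */
    unsigned int acee_address; /* ACEE address, or 0 if no ACEE is provided     */
};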
Generally, you issue a CONTEXT SIGNON call after an IDENTIFY call and before
a CREATE THREAD call. You can also issue a CONTEXT SIGNON call if the
application is at a point of consistency, and one of the following conditions is true:
v The value of reuse in the CREATE THREAD call was RESET.
v The value of reuse in the CREATE THREAD call was INITIAL, no held cursors
are open, the package or plan is bound with KEEPDYNAMIC(NO), and all
special registers are at their initial state. If open held cursors exist or the package
or plan is bound with KEEPDYNAMIC(YES), a SIGNON call is permitted only if
the primary authorization ID has not changed.
The following diagram shows the syntax for the CONTEXT SIGNON function.
CALL DSNRLI ( function, correlation-id, accounting-token, accounting-interval,
            context-key [, retcode [, reascode [, user [, appl [, ws [, xid
            [, accounting-string ] ] ] ] ] ] ] )
Note:
1. For C, C++, and PL/I applications, you must include the appropriate compiler
directives, because DSNRLI is an assembler language program. These compiler
directives are described in the instructions for invoking RRSAF.
Related tasks
“Invoking the Resource Recovery Services attachment facility” on page 77
Related reference
“SIGNON function for RRSAF” on page 95
Note:
1. For C, C++, and PL/I applications, you must include the appropriate compiler
directives, because DSNRLI is an assembler language program. These compiler
directives are described in the instructions for invoking RRSAF.
These values can be used to identify the end user. The calling program defines the
contents of these parameters. DB2 places the parameter values in the output from
the DISPLAY THREAD command and in DB2 accounting and statistics trace
records.
The syntax for this function ends with the optional parameters retcode, reascode,
and accounting-string, which follow the required parameters.
Note:
1. For C, C++, and PL/I applications, you must include the appropriate compiler
directives, because DSNRLI is an assembler language program. These compiler
directives are described in the instructions for invoking RRSAF.
Related tasks
“Invoking the Resource Recovery Services attachment facility” on page 77
The following diagram shows the syntax of the CREATE THREAD function.
CALL DSNRLI ( function, plan, collection, reuse
            [, retcode [, reascode [, pklistptr ] ] ] )
reuse
This parameter is required. If the 8-byte area does not contain either RESET or
INITIAL, the default value is INITIAL.
retcode
A 4-byte area in which RRSAF places the return code.
This parameter is optional. If you do not specify retcode, RRSAF places the
return code in register 15 and the reason code in register 0.
reascode
A 4-byte area in which RRSAF places the reason code.
This parameter is optional. If you do not specify reascode, RRSAF places the
reason code in register 0.
If you specify reascode, you must also specify retcode.
pklistptr
A 4-byte field that contains a pointer to a user-supplied data area that contains
a list of collection IDs. A collection ID is an SQL identifier of 1 to 128 letters,
digits, or the underscore character that identifies a collection of packages. The
length of the data area is a maximum of 2050 bytes. The data area contains a
2-byte length field, followed by up to 2048 bytes of collection ID entries,
separated by commas.
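As an illustration, the following C fragment builds such a data area. The structure
name, the example collection IDs, and the assumption that the length field counts
only the bytes in the entries field are all inventions for this sketch.

#include <string.h>

/* Data area addressed by pklistptr: a 2-byte length field followed by */
/* up to 2048 bytes of comma-separated collection ID entries.          */
struct pklist_area {
    short length;            /* number of bytes used in the entries field */
    char  entries[2048];     /* comma-separated collection IDs            */
};

static struct pklist_area pklist;

void build_pklist(void)
{
    const char *ids = "COLLA,COLLB,COLLC";   /* illustrative collection IDs */
    pklist.length = (short)strlen(ids);
    memcpy(pklist.entries, ids, pklist.length);
}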
Note:
1. For C, C++, and PL/I applications, you must include the appropriate compiler
directives, because DSNRLI is an assembler language program. These compiler
directives are described in the instructions for invoking RRSAF.
If you call the TERMINATE THREAD function and the application is not at a point
of consistency, RRSAF returns reason code X’00C12211’.
The following diagram shows the syntax of the TERMINATE THREAD function.
Note:
1. For C, C++, and PL/I applications, you must include the appropriate compiler
directives, because DSNRLI is an assembler language program. These compiler
directives are described in the instructions for invoking RRSAF.
Related tasks
“Invoking the Resource Recovery Services attachment facility” on page 77
If DB2 terminates, the application must issue TERMINATE IDENTIFY to reset the
RRSAF control blocks. This action ensures that future connection requests from the
task are successful when DB2 restarts.
The TERMINATE IDENTIFY function removes the calling task’s connection to DB2.
If no other task in the address space has an active connection to DB2, DB2 also
deletes the control block structures that were created for the address space and
removes the cross-memory authorization.
If the application is not at a point of consistency when you call the TERMINATE
IDENTIFY function, RRSAF returns reason code X’00C12211’.
If the application allocated a plan, and you call the TERMINATE IDENTIFY
function without first calling the TERMINATE THREAD function, DB2 deallocates
the plan before terminating the connection.
The following diagram shows the syntax of the TERMINATE IDENTIFY function.
Note:
1. For C, C++, and PL/I applications, you must include the appropriate compiler
directives, because DSNRLI is an assembler language program. These compiler
directives are described in the instructions for invoking RRSAF.
Related tasks
“Invoking the Resource Recovery Services attachment facility” on page 77
Consider the following rules and recommendations about when to use and not use
the TRANSLATE function:
v You cannot call the TRANSLATE function from the Fortran language.
v Call the TRANSLATE function only after a successful IDENTIFY operation. For
errors that occur during SQL or IFI requests, the TRANSLATE function performs
automatically.
v The TRANSLATE function translates codes that begin with X’00F3’, but it does
not translate RRSAF reason codes that begin with X’00C1’.
If you receive error reason code X’00F30040’ (resource unavailable) after an OPEN
request, the TRANSLATE function returns the name of the unavailable database
object in the last 44 characters of the SQLERRM field.
If the TRANSLATE function does not recognize the error reason code, it returns
SQLCODE -924 (SQLSTATE ’58006’) and places a printable copy of the original
DB2 function code and the return and error reason codes in the SQLERRM field.
The contents of registers 0 and 15 do not change, unless TRANSLATE fails. In this
case, register 0 is set to X’00C12204’, and register 15 is set to 200.
The following diagram shows the syntax of the TRANSLATE function.
Note:
1. For C, C++, and PL/I applications, you must include the appropriate compiler
directives, because DSNRLI is an assembler language program. These compiler
directives are described in the instructions for invoking RRSAF.
Related tasks
“Invoking the Resource Recovery Services attachment facility” on page 77
| Assume that two subsystems are defined on the current LPAR. Subsystem DB2A is
| active, and subsystem DB2B is stopped. Suppose that you invoke RRSAF with the
| function FIND_DB2_SYSTEMS and a value of 3 for arraysz. The ssnma array and
| activea array are set to the following values:
| Table 37. Example values returned in the ssnma and activea arrays
| Array element number Values in ssnma array Values in activea array
| 1 DB2A 1
| 2 DB2B 0
| 3 (four blanks) -1
|
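A fragment like the following C sketch could walk the returned arrays; the
element types (4-character names and 4-byte integers) are assumptions based on
Table 37.

#include <stdio.h>

/* Illustrative only: report the subsystems returned by FIND_DB2_SYSTEMS. */
void report_db2_systems(char ssnma[][4], int activea[], int arraysz)
{
    for (int i = 0; i < arraysz; i++)
    {
        if (activea[i] == -1)       /* no more subsystems in the array */
            break;
        printf("%.4s is %s\n", ssnma[i],
               activea[i] == 1 ? "active" : "stopped");
    }
}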
| Related tasks
| “Invoking the Resource Recovery Services attachment facility” on page 77
When the reason code begins with X’00F3’, except for X’00F30006’, you can use the
RRSAF TRANSLATE function to obtain error message text that can be printed and
displayed.
For SQL calls, RRSAF returns standard SQL return codes in the SQLCA. RRSAF
returns IFI return codes and reason codes in the instrumentation facility
communication area (IFCA).
Related reference
“TRANSLATE function for RRSAF” on page 116
The following example pseudocode shows a DB2 thread that is reused by another
user at a point of consistency. When the application calls the SIGNON function for
user B, DB2 reuses the plan that is allocated by the CREATE THREAD function for
user A.
IDENTIFY
SIGNON user A
CREATE THREAD
SQL
...
SRRCMIT
SIGNON user B
SQL
...
SRRCMIT
The following scenario shows how you can switch the threads for four users (A, B,
C, and D) among two tasks (1 and 2).
The following sample JCL shows how to use RRSAF in a batch environment. The
DSNRRSAF DD statement starts the RRSAF trace. Use that DD statement only if
you are diagnosing a problem.
//jobname JOB z/OS_jobcard_information
//RRSJCL EXEC PGM=RRS_application_program
//STEPLIB DD DSN=application_load_library
// DD DSN=DB2_load_library
//SYSPRINT DD SYSOUT=*
//DSNRRSAF DD DUMMY
//SYSUDUMP DD SYSOUT=*
The following code segment shows how an application loads entry points DSNRLI
and DSNHLIR of the RRSAF language interface. Storing the entry points in
variables LIRLI and LISQL ensures that the application loads the entry points only
once. Delete the loaded modules when the application no longer needs to access
DB2.
****************************** GET LANGUAGE INTERFACE ENTRY ADDRESSES
LOAD EP=DSNRLI Load the RRSAF service request EP
ST R0,LIRLI Save this for RRSAF service requests
LOAD EP=DSNHLIR Load the RRSAF SQL call Entry Point
ST R0,LISQL Save this for SQL calls
* .
* . Insert connection service requests and SQL calls here
* .
DELETE EP=DSNRLI Correctly maintain use count
DELETE EP=DSNHLIR Correctly maintain use count
Each of the DB2 attachment facilities contains an entry point named DSNHLI.
When you use RRSAF but do not specify the ATTACH(RRSAF) precompiler
option, the precompiler generates BALR instructions to DSNHLI for SQL
statements in your program. To find the correct DSNHLI entry point without
including DSNRLI in your load module, code a subroutine, with entry point
DSNHLI, that passes control to entry point DSNHLIR in the DSNRLI module.
DSNHLIR is unique to DSNRLI and is at the same location as DSNHLI in
DSNRLI. DSNRLI uses 31-bit addressing. If the application that calls this
intermediate subroutine uses 24-bit addressing, the intermediate subroutine must
account for the difference.
In the following example, LISQL is addressable because the calling CSECT used
the same register 12 as CSECT DSNHLI. Your application must also establish
addressability to LISQL.
***********************************************************************
* Subroutine DSNHLI intercepts calls to LI EP=DSNHLI
***********************************************************************
DS 0D
DSNHLI CSECT Begin CSECT
STM R14,R12,12(R13) Prologue
LA R15,SAVEHLI Get save area address
ST R13,4(,R15) Chain the save areas
ST R15,8(,R13) Chain the save areas
LR R13,R15 Put save area address in R13
L R15,LISQL Get the address of real DSNHLI
BASSM R14,R15 Branch to DSNRLI to do an SQL call
* DSNRLI is in 31-bit mode, so use
* BASSM to assure that the addressing
* mode is preserved.
L R13,4(,R13) Restore R13 (caller's save area addr)
L R14,12(,R13) Restore R14 (return address)
RETURN (1,12) Restore R1-12, NOT R0 and R15 (codes)
This example uses the variables that are declared in the following code.
****************** VARIABLES SET BY APPLICATION ***********************
LIRLI DS F DSNRLI entry point address
LISQL DS F DSNHLIR entry point address
SSNM DS CL4 DB2 subsystem name for IDENTIFY
CORRID DS CL12 Correlation ID for SIGNON
ACCTTKN DS CL22 Accounting token for SIGNON
ACCTINT DS CL6 Accounting interval for SIGNON
PLAN DS CL8 DB2 plan name for CREATE THREAD
COLLID DS CL18 Collection ID for CREATE THREAD. If
* PLAN contains a plan name, not used.
REUSE DS CL8 Controls SIGNON after CREATE THREAD
CONTROL DS CL8 Action that application takes based
* on return code from RRSAF
****************** VARIABLES SET BY DB2 *******************************
STARTECB DS F DB2 startup ECB
TERMECB DS F DB2 termination ECB
EIBPTR DS F Address of environment info block
RIBPTR DS F Address of release info block
****************************** CONSTANTS ******************************
CONTINUE DC CL8'CONTINUE' CONTROL value: Everything OK
IDFYFN DC CL18'IDENTIFY ' Name of RRSAF service
SGNONFN DC CL18'SIGNON ' Name of RRSAF service
CRTHRDFN DC CL18'CREATE THREAD ' Name of RRSAF service
TRMTHDFN DC CL18'TERMINATE THREAD ' Name of RRSAF service
TMIDFYFN DC CL18'TERMINATE IDENTIFY' Name of RRSAF service
****************************** SQLCA and RIB **************************
EXEC SQL INCLUDE SQLCA
DSNDRIB Map the DB2 Release Information Block
******************* Parameter list for RRSAF calls ********************
RRSAFCLL CALL ,(*,*,*,*,*,*,*,*),VL,MF=L
The following example code shows how to issue requests for the RRSAF functions
IDENTIFY, SIGNON, CREATE THREAD, TERMINATE THREAD, and
TERMINATE IDENTIFY. This example does not show a task that waits on the DB2
termination ECB. You can code such a task and use the z/OS WAIT macro to
monitor the ECB. The task that waits on the termination ECB should detach the
sample code if the termination ECB is posted. That task can also wait on the DB2
startup ECB. This example waits on the startup ECB at its own task level.
***************************** IDENTIFY ********************************
L R15,LIRLI Get the Language Interface address
CALL (15),(IDFYFN,SSNM,RIBPTR,EIBPTR,TERMECB,STARTECB),VL,MF=X
(E,RRSAFCLL)
BAL R14,CHEKCODE Call a routine (not shown) to check
* return and reason codes
CLC CONTROL,CONTINUE Is everything still OK
BNE EXIT If CONTROL not 'CONTINUE', stop loop
USING RIB,R8 Prepare to access the RIB
L R8,RIBPTR Access RIB to get DB2 release level
WRITE 'The current DB2 release level is' RIBREL
***************************** SIGNON **********************************
L R15,LIRLI Get the Language Interface address
CALL (15),(SGNONFN,CORRID,ACCTTKN,ACCTINT),VL,MF=(E,RRSAFCLL)
BAL R14,CHEKCODE Check the return and reason codes
*************************** CREATE THREAD *****************************
L R15,LIRLI Get the Language Interface address
CALL (15),(CRTHRDFN,PLAN,COLLID,REUSE),VL,MF=(E,RRSAFCLL)
BAL R14,CHEKCODE Check the return and reason codes
****************************** SQL ************************************
* Insert your SQL calls here. The DB2 Precompiler
* generates calls to entry point DSNHLI. You should
* code a dummy entry point of that name to intercept
* all SQL calls.
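*************************** TERMINATE THREAD **************************
* (Sketch: the TERMINATE THREAD and TERMINATE IDENTIFY requests follow
* the same pattern as the requests shown above.)
L R15,LIRLI Get the Language Interface address
CALL (15),(TRMTHDFN),VL,MF=(E,RRSAFCLL)
BAL R14,CHEKCODE Check the return and reason codes
************************** TERMINATE IDENTIFY *************************
L R15,LIRLI Get the Language Interface address
CALL (15),(TMIDFYFN),VL,MF=(E,RRSAFCLL)
BAL R14,CHEKCODE Check the return and reason codes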
You can start and stop the CICS attachment facility from within an application
program.
When an SQL statement is executed while the CICS attachment facility is in
standby mode, the attachment facility issues SQLCODE -923 with a reason code
that indicates that DB2 is not available.
To determine whether the CICS attachment facility is available before your
application issues SQL statements, use the INQUIRE EXITPROGRAM command of
CICS Transaction Server in your application.
The following example shows how to use this command. In this example, the
INQUIRE EXITPROGRAM command tests whether the resource manager for SQL,
DSNCSQL, is up and running. CICS returns the results in the EIBRESP field of the
EXEC interface block (EIB) and in the field whose name is the argument of the
CONNECTST parameter (in this case, STST). If the EIBRESP value indicates that
the command completed successfully, your program can examine the STST value to
determine whether the attachment facility is connected to DB2.
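A minimal sketch of such a command follows. STST is the CONNECTST field from
the discussion above, ENTNAME is assumed to contain the entry name DSNCSQL,
and EXITPGM is an illustrative field that names the attachment exit program.
EXEC CICS INQUIRE EXITPROGRAM(EXITPGM)
     ENTRYNAME(ENTNAME)
     CONNECTST(STST) NOHANDLE
END-EXEC.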
If you use the INQUIRE EXITPROGRAM command to avoid AEY9 abends and the
CICS attachment facility is down, the storm drain effect can occur. The storm drain
effect is a condition that occurs when a system continues to receive work, even
though that system is down.
Related concepts
Storm-drain effect (DB2 Data Sharing Planning and Administration)
Related information
CICS Transaction Server Library at ibm.com
-923 (DB2 Codes)
Close all cursors that are declared with the WITH HOLD option before each sync
point. DB2 does not automatically close them. A thread for an application that
contains an open cursor cannot be reused. You should close all cursors
immediately after you finish using them.
Related concepts
“Held and non-held cursors” on page 673
Your program is not required to declare tables or views, but doing so offers the
following advantages:
v Clear documentation in the program
The declaration specifies the structure of the table or view and the data type of
each column. You can refer to the declaration for the column names and data
types in the table or view.
v Assurance that your program uses the correct column names and data types
The DB2 precompiler uses your declarations to make sure that you have used
correct column names and data types in your SQL statements. The DB2
precompiler issues a warning message when the column names and data types
in SQL statements do not correspond to the table and view declarations in your
program.
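For example, a table declaration for part of the sample employee table might look
like the following COBOL statement. The column list shown here is abbreviated
and illustrative; DCLGEN can generate the complete declaration for you.
EXEC SQL DECLARE DSN8910.EMP TABLE
  (EMPNO     CHAR(6)     NOT NULL,
   FIRSTNME  VARCHAR(12) NOT NULL,
   LASTNAME  VARCHAR(15) NOT NULL,
   WORKDEPT  CHAR(3),
   PHONENO   CHAR(4))
END-EXEC.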
Restriction: You can use DCLGEN for only C, COBOL, and PL/I programs.
Related reference
DECLARE TABLE (SQL Reference)
Requirements:
v DB2 must be active before you can use DCLGEN.
v You can use DCLGEN for table declarations only if the table or view that you
are declaring already exists.
v If you use DCLGEN, you must use it before you precompile your program.
The following table lists the C, COBOL, and PL/I data types that DCLGEN uses
for variable declarations based on the corresponding SQL data types that are used
in the source tables. var represents a variable name that DCLGEN provides.
Table 40. Type declarations that DCLGEN generates
SQL data type   C           COBOL                   PL/I
SMALLINT        short int   PIC S9(4) USAGE COMP    BIN FIXED(15)
INTEGER         long int    PIC S9(9) USAGE COMP    BIN FIXED(31)
member-name is the name of the data set member where the DCLGEN output is
stored.
Example: Suppose that you used DCLGEN to generate a table declaration and
corresponding COBOL record description for the table DSN8910.EMP, and those
declarations were stored in the data set member DECEMP. (A COBOL record
description is a two-level host structure that corresponds to the columns of a
table’s row. ) To include those declarations in your program, include the following
statement in your COBOL program:
EXEC SQL
INCLUDE DECEMP
END-EXEC.
Related reference
INCLUDE (SQL Reference)
Throughout this example, information that you must enter on each panel is in
bold-faced type.
Figure 9. The COBOL defaults panel. Shown only if the field APPLICATION LANGUAGE on the DB2I DEFAULTS
PANEL 1 panel is IBMCOB.
Figure 10. DCLGEN panel—selecting source table and destination data set
DB2 again displays the DCLGEN screen, as shown in the following figure.
If your application contains SQL statements and does not include an SQL
communications area (SQLCA), you must declare individual SQLCODE and
SQLSTATE host variables. Your program can use these variables to check whether
an SQL statement executed successfully.
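For example, in a COBOL program that does not include an SQLCA, stand-alone
declarations along the following lines can be used. This is a minimal sketch; the
exact declarations depend on the host language and the precompiler options that
you use.
WORKING-STORAGE SECTION.
01  SQLCODE   PIC S9(9) COMP-4.
01  SQLSTATE  PIC X(5).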
Related tasks
“Accessing the DB2 REXX language support application programming interfaces”
on page 409
“Defining the SQL communications area, SQLSTATE, and SQLCODE in assembler”
on page 229
“Defining the SQL communications area, SQLSTATE, and SQLCODE in C” on page
249
“Defining the SQL communications area, SQLSTATE, and SQLCODE in COBOL”
on page 295
“Defining the SQL communications area, SQLSTATE, and SQLCODE in Fortran”
on page 363
“Defining the SQL communications area, SQLSTATE, and SQLCODE in PL/I” on
page 375
“Defining the SQL communications area, SQLSTATE, and SQLCODE in REXX” on
page 403
Related reference
“Descriptions of SQL processing options” on page 904
Description of SQLCA fields (SQL Reference)
INCLUDE (SQL Reference)
The REXX SQLCA (SQL Reference)
If your program includes any of the following statements, you must include an
SQLDA in your program:
v CALL ... USING DESCRIPTOR descriptor-name
v DESCRIBE statement-name INTO descriptor-name
v DESCRIBE CURSOR host-variable INTO descriptor-name
v DESCRIBE INPUT statement-name INTO descriptor-name
v DESCRIBE PROCEDURE host-variable INTO descriptor-name
Unlike the SQLCA, a program can have more than one SQLDA, and an SQLDA
can have any valid name.
To define SQL descriptor areas, take the actions that are appropriate for the
programming language that you use.
Related tasks
“Defining SQL descriptor areas in assembler” on page 230
“Defining SQL descriptor areas in C” on page 250
“Defining SQL descriptor areas in COBOL” on page 296
“Defining SQL descriptor areas in Fortran” on page 364
“Defining SQL descriptor areas in PL/I” on page 376
“Defining SQL descriptor areas in REXX” on page 403
Related reference
“Descriptions of SQL processing options” on page 904
Description of SQLCA fields (SQL Reference)
INCLUDE (SQL Reference)
SQL descriptor area (SQLDA) (SQL Reference)
The REXX SQLCA (SQL Reference)
To declare host variables, host variable arrays, and host structures, use the
techniques that are appropriate for the programming language that you use.
Host variables
Use host variables to pass a single data item between DB2 and your application.
A host variable is a single data item that is declared in the host language to be used
within an SQL statement. You can use host variables in application programs that
are written in assembler, C, C++, COBOL, Fortran, or PL/I. Use host variables to
perform the following actions:
v Retrieve data into the host variable for your application program’s use
v Place data into the host variable to insert into a table or to change the contents
of a row
v Use the data in the host variable when evaluating a WHERE or HAVING clause
v Assign the value that is in the host variable to a special register, such as
CURRENT SQLID and CURRENT DEGREE
v Insert null values into columns by using a host indicator variable that contains a
negative value
v Use the data in the host variable in statements that process dynamic SQL, such
as EXECUTE, PREPARE, and OPEN
| A host variable array is a data array that is declared in the host language to be used
| within an SQL statement. You can use host variable arrays to perform the
| following actions:
| v Retrieve data into host variable arrays for your application program’s use
| v Place data into host variable arrays to insert rows into a table
| You typically define host variable arrays for use with multiple-row FETCH,
| INSERT, and MERGE statements.
Related concepts
“Host variable arrays in an SQL statement” on page 159
Related tasks
“Inserting multiple rows of data from host variable arrays” on page 160
“Retrieving multiple rows of data into host variable arrays” on page 160
Related reference
“Host variable arrays in C” on page 262
“Host variable arrays in COBOL” on page 307
“Host variable arrays in PL/I” on page 382
Host structures
Use host structures to pass a group of host variables between DB2 and your
application.
A host structure is a group of host variables that can be referenced with a single
name. You can use host structures in all host languages except REXX. You define
host structures with statements in the host language. You can refer to a host
structure in any context where you want to refer to the list of host variables in the
structure. A host structure reference is equivalent to a reference to each of the host
variables within the structure in the order in which they are defined in the
structure declaration. You can also use indicator variables (or indicator structures)
with host structures.
You can use indicator variable arrays and indicator structures to perform these
same actions for individual items in host data arrays and structures.
If you provide an indicator variable for the variable X, when DB2 retrieves a null
value for X, it puts a negative value in the indicator variable and does not update
X. Your program should check the indicator variable before using X. If the
indicator variable is negative, you know that X is null and any value that you find
in X is irrelevant. When your program uses variable X to assign a null value to a
column, the program should set the indicator variable to a negative number. DB2
then assigns a null value to the column and ignores any value in X.
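For example, a minimal COBOL sketch of assigning a null value through an
indicator variable follows. The host variable and indicator names are illustrative.
MOVE -1 TO PHONE-IND.
EXEC SQL
  UPDATE DSN8910.EMP
  SET PHONENO = :PHONE-HV:PHONE-IND
  WHERE EMPNO = :EMPID
END-EXEC.
Because PHONE-IND is negative, DB2 assigns a null value to PHONENO and
ignores whatever value is in PHONE-HV.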
Specify the DECLARE VARIABLE statement after the corresponding host variable
declaration and before your first reference to that host variable.
This statement associates an encoding scheme and a CCSID with individual host
variables. You can use this statement in static or dynamic SQL applications.
Restriction: You cannot use the DECLARE VARIABLE statement to control the
CCSID and encoding scheme of data that you retrieve or update by using an
SQLDA.
The DECLARE VARIABLE statement has the following effects on a host variable:
v When you use the host variable to update a table, the local subsystem or the
remote server assumes that the data in the host variable is encoded with the
CCSID and encoding scheme that the DECLARE VARIABLE statement assigns.
v When you retrieve data from a local or remote table into the host variable, the
retrieved data is converted to the CCSID and encoding scheme that are assigned
by the DECLARE VARIABLE statement.
Example
Suppose that you are writing a C program that runs on a DB2 for z/OS subsystem.
The subsystem has an EBCDIC application encoding scheme. The C program
retrieves data from the following columns of a local table that is defined with the
CCSID UNICODE option:
PARTNUM CHAR(10)
JPNNAME GRAPHIC(10)
ENGNAME VARCHAR(30)
Because the application encoding scheme for the subsystem is EBCDIC, the
retrieved data is EBCDIC. To make the retrieved data Unicode, use DECLARE
VARIABLE statements to specify that the data that is retrieved from these columns
is encoded in the default Unicode CCSIDs for the subsystem.
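A sketch of such DECLARE VARIABLE statements follows. The host variable
names are illustrative, and the statements assume that 1208 and 1200 are the
default Unicode CCSIDs for character and graphic data on the subsystem.
EXEC SQL DECLARE :hvpartnum VARIABLE CCSID 1208;
EXEC SQL DECLARE :hvjpnname VARIABLE CCSID 1200;
EXEC SQL DECLARE :hvengname VARIABLE CCSID 1208;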
For example, suppose that you fetch an integer value of 32768 into a host variable
of type SMALLINT. The conversion might cause an error if you do not provide
sufficient conversion information to DB2.
The variable to which DB2 assigns the data is called the output host variable. If you
provide an indicator variable for the output host variable or if data type
conversion is not required, DB2 returns a positive SQLCODE for the row in most
cases. In other cases where data conversion problems occur, DB2 returns a negative
SQLCODE for that row. Regardless of the SQLCODE for the row, no new values
are assigned to the host variable or to subsequent variables for that row. Any
values that are already assigned to variables remain assigned. Even when a
negative SQLCODE is returned for a row, statement processing continues and DB2
returns a positive SQLCODE for the statement (SQLSTATE 01668, SQLCODE
+354).
To determine what caused an error when retrieving data into a host variable:
1. When DB2 returns SQLCODE = +354, use the GET DIAGNOSTICS statement
with the NUMBER option to determine the number of errors and warnings.
Example: Suppose that no indicator variables are provided for the values that
are returned by the following statement:
FETCH FIRST ROWSET FROM C1 FOR 10 ROWS INTO :hva_col1, :hva_col2;
For each row with an error, DB2 records a negative SQLCODE and continues
processing until the 10 rows are fetched. When SQLCODE = +354 is returned
for the statement, you can use the GET DIAGNOSTICS statement to determine
which errors occurred for which rows. The following statement returns
num_rows = 10 and num_cond = 3:
GET DIAGNOSTICS :num_rows = ROW_COUNT, :num_cond = NUMBER;
2. To investigate the errors and warnings, use additional GET DIAGNOSTIC
statements with the CONDITION option.
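For example, a sketch of retrieving the condition information for one condition
follows; the host variable names are illustrative, and the statement is executed once
for each condition number from 1 to the value of :num_cond.
GET DIAGNOSTICS CONDITION :i
    :retsqlcode = DB2_RETURNED_SQLCODE,
    :rownum = DB2_ROW_NUMBER;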
In this example, the condition information shows that the fifth row has a data
mapping error (-304) for column 1 and that the seventh row has a data mapping
error (-802) for column 2. These rows do not contain valid data, and they should
not be used.
Related concepts
“Indicator variables, arrays, and structures” on page 145
Related reference
GET DIAGNOSTICS (SQL Reference)
Related information
+354 (DB2 Codes)
When deciding the data types of host variables, consider the following rules and
recommendations:
| v Numeric data types are compatible with each other:
| Assembler: A SMALLINT, INTEGER, BIGINT, DECIMAL, or FLOAT column is
| compatible with a numeric assembler host variable.
| Fortran: An INTEGER column is compatible with any Fortran host variable that
| is defined as INTEGER*2, INTEGER*4, REAL, REAL*4, REAL*8, or DOUBLE
| PRECISION.
| PL/I: A SMALLINT, INTEGER, BIGINT, DECIMAL, or FLOAT column is
| compatible with a PL/I host variable of BIN FIXED(15), BIN FIXED(31),
| DECIMAL(s,p), or BIN FLOAT(n), where n is from 1 to 53, or DEC FLOAT(m)
| where m is from 1 to 16.
v Character data types are compatible with each other:
Assembler: A CHAR, VARCHAR, or CLOB column is compatible with a
fixed-length or varying-length assembler character host variable.
C/C++: A CHAR, VARCHAR, or CLOB column is compatible with a
single-character, NUL-terminated, or VARCHAR structured form of a C
character host variable.
COBOL: A CHAR, VARCHAR, or CLOB column is compatible with a
fixed-length or varying-length COBOL character host variable.
| Recommendation: Use the XML host variable types for data from XML
| columns.
| v Assembler: You can assign LOB data to a file reference variable (BLOB_FILE,
| CLOB_FILE, and DBCLOB_FILE).
To embed SQL statements in your application, take action based on the program
language that you use.
Related concepts
“SQL statements in assembler programs” on page 241
“SQL statements in C programs” on page 280
“SQL statements in COBOL programs” on page 326
“SQL statements in Fortran programs” on page 371
“SQL statements in PL/I programs” on page 393
“SQL statements in REXX programs” on page 405
To delimit an SQL statement, take action based on the programming language that
you use.
Related concepts
“Delimiters in SQL statements in assembler programs” on page 246
“Delimiters in SQL statements in C programs” on page 284
“Delimiters in SQL statements in COBOL programs” on page 332
“Delimiters in SQL statements in Fortran programs” on page 374
“Delimiters in SQL statements in PL/I programs” on page 397
“Delimiters in SQL statements in REXX programs” on page 408
Restriction: These instructions do not apply if you do not know how many rows
DB2 will return or if you expect DB2 to return more than one row. In these
situations, use a cursor. A cursor enables an application to return a set of rows and
fetch either one row at a time or one rowset at a time from the result table.
In the SELECT statement specify the INTO clause with the name of one or more
host variables to contain the retrieved values. Specify one variable for each value
that is to be retrieved. The retrieved value can be a column value, a value of a host
variable, the result of an expression, or the result of an aggregate function.
Recommendation: If you want to ensure that only one row is returned, specify the
FETCH FIRST 1 ROW ONLY clause. Consider using the ORDER BY clause to
control which row is returned. If you specify both the ORDER BY clause and the
FETCH FIRST clause, ordering is performed on the entire result set before the first
row is returned.
DB2 assigns the first value in the result row to the first variable in the list, the
second value to the second variable, and so on.
If the SELECT statement returns more than one row, DB2 returns an error, and any
data that is returned is undefined and unpredictable.
Example of retrieving a single row into a host variable: Suppose that you are
retrieving the LASTNAME and WORKDEPT column values from the
DSN8910.EMP table for a particular employee. You can define a host variable in
your program to hold each column value and then name the host variables in the
INTO clause of the SELECT statement, as shown in the following COBOL example.
MOVE '000110' TO CBLEMPNO.
EXEC SQL
SELECT LASTNAME, WORKDEPT
INTO :CBLNAME, :CBLDEPT
FROM DSN8910.EMP
WHERE EMPNO = :CBLEMPNO
END-EXEC.
In this example, the host variable CBLEMPNO is preceded by a colon (:) in the
SQL statement, but it is not preceded by a colon in the COBOL MOVE statement.
This example also uses a host variable to specify a value in a search condition. The
host variable CBLEMPNO is defined for the employee number, so that you can
retrieve the name and the work department of the employee whose number is the
same as the value of the host variable, CBLEMPNO; in this case, 000110.
In the DATA DIVISION section of a COBOL program, you must declare the host
variables CBLEMPNO, CBLNAME, and CBLDEPT to be compatible with the data
types in the columns EMPNO, LASTNAME, and WORKDEPT of the
DSN8910.EMP table.
Example of ensuring that a query returns only a single row: You can use the
FETCH FIRST 1 ROW ONLY clause in a SELECT statement to ensure that only one
row is returned. This action prevents undefined and unpredictable data from being
returned when you specify the INTO clause of the SELECT statement. The
following example SELECT statement ensures that only one row of the
DSN8910.EMP table is returned.
EXEC SQL
SELECT LASTNAME, WORKDEPT
INTO :CBLNAME, :CBLDEPT
FROM DSN8910.EMP
FETCH FIRST 1 ROW ONLY
END-EXEC.
You can include an ORDER BY clause in the preceding example to control which
row is returned. The following example SELECT statement ensures that the only
row returned is the one with a last name that is first alphabetically.
EXEC SQL
SELECT LASTNAME, WORKDEPT
INTO :CBLNAME, :CBLDEPT
FROM DSN8910.EMP
ORDER BY LASTNAME
FETCH FIRST 1 ROW ONLY
END-EXEC.
Example of retrieving the results of host variable values and expressions into
host variables:
When you specify a list of items in the SELECT clause, that list can include more
than the column names of tables and views. You can request a set of column
values mixed with host variable values and constants, as in the following example.
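A COBOL sketch that is consistent with the results shown below follows; it
assumes that host variables RAISE and PERSON supply the raise amount and the
employee number.
MOVE 4476 TO RAISE.
MOVE '000220' TO PERSON.
EXEC SQL
  SELECT EMPNO, LASTNAME, SALARY, :RAISE, SALARY + :RAISE
  INTO :EMP-NUM, :PERSON-NAME, :EMP-SAL, :EMP-RAISE, :EMP-TTL
  FROM DSN8910.EMP
  WHERE EMPNO = :PERSON
END-EXEC.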
The preceding SELECT statement returns the following results. The column
headings represent the names of the host variables.
EMP-NUM PERSON-NAME EMP-SAL EMP-RAISE EMP-TTL
======= =========== ======= ========= =======
000220 LUTZ 29840 4476 34316
Before you determine whether a retrieved column value is null or truncated, you
must have defined the appropriate indicator variables, arrays, and structures.
An error occurs if you do not use an indicator variable and DB2 retrieves a null
value.
Determine the value of the indicator variable, array, or structure that is associated
with the host variable, array, or structure. Those values have the following
meanings:
v Zero: The retrieved value is not null and, for string data, was not truncated.
v Negative: The retrieved value is null (-1), or is null because of a numeric
conversion or arithmetic error (-2). Disregard the value of the host variable.
v Positive: The retrieved string value was truncated; the indicator variable
contains the original length of the value.
Examples
Example of testing an indicator variable: Assume that you have defined the
following indicator variable INDNULL for the host variable CBLPHONE.
EXEC SQL
SELECT PHONENO
INTO :CBLPHONE:INDNULL
FROM DSN8910.EMP
WHERE EMPNO = :EMPID
END-EXEC.
You can then test INDNULL for a negative value. If the value is negative, the
corresponding value of PHONENO is null, and you can disregard the contents of
CBLPHONE.
Example of testing an indicator variable array: Suppose that you declare the
following indicator array INDNULL for the host variable array CBLPHONE.
EXEC SQL
FETCH NEXT ROWSET CURS1
FOR 10 ROWS
INTO :CBLPHONE :INDNULL
END-EXEC.
After the multiple-row FETCH statement, you can test each element of the
INDNULL array for a negative value. If an element is negative, you can disregard
the contents of the corresponding element in the CBLPHONE host variable array.
You can test the indicator structure EMP-IND for negative values. If, for example,
EMP-IND(6) contains a negative value, the corresponding host variable in the host
structure (EMP-BIRTHDATE) contains a null value.
Related concepts
“Arithmetic and conversion errors” on page 216
Related tasks
“Declaring host variables and indicator variables” on page 142
Example
The following code, which uses an indicator variable, does not select the
employees who have no phone number:
MOVE -1 TO PHONE-IND.
EXEC SQL
SELECT LASTNAME
INTO :PGM-LASTNAME
FROM DSN8910.EMP
WHERE PHONENO = :PHONE-HV:PHONE-IND
END-EXEC.
Instead, use the following statement with the IS NULL predicate to select
employees who have no phone number:
EXEC SQL
SELECT LASTNAME
INTO :PGM-LASTNAME
FROM DSN8910.EMP
WHERE PHONENO IS NULL
END-EXEC.
To select employees whose phone numbers are equal to the value of :PHONE-HV
and employees who have no phone number (as in the second example), code two
predicates, one to handle the non-null values and another to handle the null
values, as in the following statement:
EXEC SQL
SELECT LASTNAME
INTO :PGM-LASTNAME
FROM DSN8910.EMP
WHERE PHONENO = :PHONE-HV OR PHONENO IS NULL
END-EXEC.
You can simplify the preceding example by coding the following statement with
the NOT form of the IS DISTINCT FROM predicate:
EXEC SQL
SELECT LASTNAME
INTO :PGM-LASTNAME
FROM DSN8910.EMP
WHERE PHONENO IS NOT DISTINCT FROM :PHONE-HV:PHONE-IND
END-EXEC.
Related tasks
“Declaring host variables and indicator variables” on page 142
Related reference
DISTINCT predicate (SQL Reference)
NULL predicate (SQL Reference)
Examples
Example of updating multiple rows by using a host variable value in the search
condition: The following example gives the employees in a particular department
a salary increase of 10%. The department value is passed through the DEPTID host
variable.
MOVE 'D11' TO DEPTID.
EXEC SQL
UPDATE DSN8910.EMP
SET SALARY = 1.10 * SALARY
WHERE WORKDEPT = :DEPTID
END-EXEC.
Restriction: These instructions apply only to inserting a single row. If you want to
insert multiple rows, use host variable arrays or the form of the INSERT statement
that selects values from another table or view.
Specify an INSERT statement with column values in the VALUES clause. Specify
host variables or a combination of host variables and constants as the column
values.
DB2 inserts the first value into the first column in the list, the second value into
the second column, and so on.
Example
The following example uses host variables to insert a single row into the activity
table.
EXEC SQL
INSERT INTO DSN8910.ACT
VALUES (:HV-ACTNO, :HV-ACTKWD, :HV-ACTDESC)
END-EXEC.
Related tasks
“Inserting multiple rows of data from host variable arrays” on page 160
Related reference
INSERT (SQL Reference)
EXEC SQL
INSERT INTO DSN8910.ACT
(ACTNO, ACTKWD, ACTDESC)
VALUES (:hva1:ind1, :hva2:ind2, :hva3:ind3)
FOR 10 ROWS;
If the elements of the ind3 indicator array contain negative values, DB2 ignores
the values in the hva3 array and assigns null to the ACTDESC column for the 10
rows that are inserted.
Related tasks
“Declaring host variables and indicator variables” on page 142
To use a host variable array in an SQL statement, specify any valid host variable
array that is declared according to the host language rules. You can specify host
variable arrays in C or C++, COBOL, and PL/I. You must declare the array in the
host program before you use it.
You can use host variable arrays to specify a program data area to contain multiple
rows of column values. A DB2 rowset cursor enables an application to retrieve and
process a set of rows from the result table of the cursor.
Related concepts
“Host variable arrays” on page 144
“Host variable arrays in an SQL statement” on page 159
Related tasks
“Accessing data by using a rowset-positioned cursor” on page 680
“Inserting multiple rows of data from host variable arrays”
Example: You can insert the number of rows that are specified in the host variable
NUM-ROWS by using the following INSERT statement:
EXEC SQL
INSERT INTO DSN8910.ACT
(ACTNO, ACTKWD, ACTDESC)
VALUES (:HVA1, :HVA2, :HVA3)
FOR :NUM-ROWS ROWS
END-EXEC.
Assume that the host variable arrays HVA1, HVA2, and HVA3 have been declared
and populated with the values that are to be inserted into the ACTNO, ACTKWD,
and ACTDESC columns. The NUM-ROWS host variable specifies the number of
rows that are to be inserted, which must be less than or equal to the dimension of
each host variable array.
Related tasks
“Retrieving multiple rows of data into host variable arrays” on page 160
In the following example, assume that your COBOL program includes the
following SQL statement:
EXEC SQL
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT
INTO :EMPNO, :FIRSTNME, :MIDINIT, :LASTNAME, :WORKDEPT
FROM DSN8910.VEMP
WHERE EMPNO = :EMPID
END-EXEC.
If you want to avoid listing host variables, you can substitute the name of a
structure, say :PEMP, that contains :EMPNO, :FIRSTNME, :MIDINIT, :LASTNAME,
and :WORKDEPT. The example then reads:
EXEC SQL
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT
INTO :PEMP
FROM DSN8910.VEMP
WHERE EMPNO = :EMPID
END-EXEC.
You can declare a host structure yourself, or you can use DCLGEN to generate a
COBOL record description, PL/I structure declaration, or C structure declaration
that corresponds to the columns of a table.
Dynamic SQL
Dynamic SQL statements are prepared and executed while the program is running.
Use dynamic SQL when you do not know what SQL statements your application
needs to execute before run time.
Before you decide to use dynamic SQL, you should consider whether using static
SQL or dynamic SQL is the best technique for your application.
For most DB2 users, static SQL, which is embedded in a host language program
and bound before the program runs, provides a straightforward, efficient path to
DB2 data. You can use static SQL when you know before run time what SQL
statements your application needs to execute.
Dynamic SQL prepares and executes the SQL statements within a program, while
the program is running. Four types of dynamic SQL are:
v Interactive SQL
| A user enters SQL statements through SPUFI or the command line processor.
| DB2 prepares and executes those statements as dynamic SQL statements.
v Embedded dynamic SQL
Your application puts the SQL source in host variables and includes PREPARE
and EXECUTE statements that tell DB2 to prepare and run the contents of those
host variables at run time. You must precompile and bind programs that include
embedded dynamic SQL.
v Deferred embedded SQL
Deferred embedded SQL statements are neither fully static nor fully dynamic.
Like static statements, deferred embedded SQL statements are embedded within
applications, but like dynamic statements, they are prepared at run time. DB2
processes deferred embedded SQL statements with bind-time rules. For example,
DB2 uses the authorization ID and qualifier determined at bind time as the plan
or package owner. Deferred embedded SQL statements are used for DB2 private
protocol access to remote data.
v Dynamic SQL executed through ODBC functions
Your application contains ODBC function calls that pass dynamic SQL
statements as arguments. You do not need to precompile and bind programs
that use ODBC function calls.
Static and dynamic SQL are each appropriate for different circumstances. You
should consider the differences between the two when determining whether static
SQL or dynamic SQL is best for your application.
When you use static SQL, you cannot change the form of SQL statements unless
you make changes to the program. However, you can increase the flexibility of
static statements by using host variables.
Example: In the following example, the UPDATE statement can update the salary
of any employee. At bind time, you know that salaries must be updated, but you
do not know until run time whose salaries should be updated, and by how much.
01  IOAREA.
    02  EMPID       PIC X(06).
    02  NEW-SALARY  PIC S9(7)V9(2) COMP-3.
     .
     .
     .
(Other declarations)
READ CARDIN RECORD INTO IOAREA
    AT END MOVE 'N' TO INPUT-SWITCH.
     .
     .
     .
(Other COBOL statements)
EXEC SQL
UPDATE DSN8910.EMP
SET SALARY = :NEW-SALARY
WHERE EMPNO = :EMPID
END-EXEC.
The statement (UPDATE) does not change, nor does its basic structure, but the
input can change the results of the UPDATE statement.
What if a program must use different types and structures of SQL statements? If
there are so many types and structures that it cannot contain a model of each one,
your program might need dynamic SQL.
| You can use one of the following programs to execute dynamic SQL:
| DB2 Query Management Facility (DB2 QMF™)
| Provides an alternative interface to DB2 that accepts almost any SQL statement
| SPUFI
| Accepts SQL statements from an input data set, and then processes and
| executes them dynamically
| command line processor
| Accepts SQL statements from a UNIX® System Services environment.
A program that provides for dynamic SQL accepts as input, or generates, an SQL
statement in the form of a character string. You can simplify the programming if
you can plan the program not to use SELECT statements, or to use only those that
return a known number of values of known types. In the most general case, in
which you do not know in advance about the SQL statements that will execute, the
program typically takes these steps:
1. Translates the input data, including any parameter markers, into an SQL
statement
To access DB2 data, an SQL statement requires an access path. Two big factors in
the performance of an SQL statement are the amount of time that DB2 uses to
determine the access path at run time and whether the access path is efficient. DB2
determines the access path for a statement at either of these times:
v When you bind the plan or package that contains the SQL statement
v When the SQL statement executes
The time at which DB2 determines the access path depends on these factors:
v Whether the statement is executed statically or dynamically
v Whether the statement contains input host variables
For static SQL statements that do not contain input host variables, DB2 determines
the access path when you bind the plan or package. This combination yields the
best performance because the access path is already determined when the program
executes.
| For static SQL statements that have input host variables, the time at which DB2
| determines the access path depends on which bind option you specify:
| REOPT(NONE), REOPT(ONCE), or REOPT(ALWAYS). REOPT(NONE) is the
| default. Do not specify REOPT(AUTO); this option is applicable only to dynamic
| statements. DB2 ignores REOPT(AUTO) for static SQL statements, because DB2 can
| cache only dynamic statements.
If you specify REOPT(NONE), DB2 determines the access path at bind time, just as
it does when there are no input variables.
DB2 ignores REOPT(ONCE) for static SQL statements because DB2 can cache only
dynamic SQL statements.
If you specify REOPT(ALWAYS), DB2 determines the access path at bind time and
again at run time, using the values in these types of input variables:
v Host variables
v Parameter markers
v Special registers
This means that DB2 must spend extra time determining the access path for
statements at run time, but if DB2 determines a significantly better access path
using the variable values, you might see an overall performance improvement. In
general, using REOPT(ALWAYS) can make static SQL statements with input
variables perform like dynamic SQL statements with constants.
For dynamic SQL statements, DB2 determines the access path at run time, when
the statement is prepared. This can make the performance worse than that of static
SQL statements. However, if you execute the same SQL statement often, you can
use the dynamic statement cache to decrease the number of times that those
dynamic statements must be prepared.
| Dynamic SQL statements with input host variables: When you bind applications
| that contain dynamic SQL statements with input host variables, use the
| REOPT(ALWAYS) option, the REOPT(ONCE) option, or the REOPT(AUTO) option.
Use REOPT(ALWAYS) when you are not using the dynamic statement cache. DB2
determines the access path for statements at each EXECUTE or OPEN of the
statement. This ensures the best access path for a statement, but using
REOPT(ALWAYS) can increase the cost of frequently used dynamic SQL
statements.
| Use REOPT(ONCE) or REOPT(AUTO) when you are using the dynamic statement
| cache:
| v If you specify REOPT(ONCE), DB2 determines the access path for
| statements only at the first EXECUTE or OPEN of the statement. It saves that
| access path in the dynamic statement cache and uses it until the statement is
| invalidated or removed from the cache. This reuse of the access path reduces the
| cost of frequently used dynamic SQL statements that contain input host
| variables; however, it does not account for changes to parameter marker values
| for dynamic statements.
| v If you specify REOPT(AUTO), DB2 determines the access path at run time. For
| each execution of a statement with parameter markers, DB2 generates a new
| access path if it determines that a new access path will improve performance.
If you specify REOPT(ALWAYS), DB2 prepares the statement twice each time it is
run.
If you specify REOPT(ONCE), DB2 prepares the statement twice only when the
statement has never been saved in the cache. If the statement has been prepared
and saved in the cache, DB2 will use the saved version of the statement to
complete the DESCRIBE statement.
For a statement that uses a cursor, you can avoid the double prepare by placing
the DESCRIBE statement after the OPEN statement in your program.
If you use predictive governing, and a dynamic SQL statement that is bound with
either REOPT(ALWAYS) or REOPT(ONCE) exceeds a predictive governing warning
threshold, your application does not receive a warning SQLCODE. However, it will
receive an error SQLCODE from the OPEN or EXECUTE statement.
You can write non-SELECT and fixed-list SELECT statements in any of the DB2
supported languages. A program containing a varying-list SELECT statement is
more difficult to write in Fortran, because the program cannot run without the
help of a subroutine to manage address variables (pointers) and storage allocation.
Most of the examples in this topic are in PL/I. Longer examples in the form of
complete programs are available in the sample applications:
DSNTEP2
Processes both SELECT and non-SELECT statements dynamically. (PL/I).
DSNTIAD
Processes only non-SELECT statements dynamically. (Assembler).
DSNTIAUL
Processes SELECT statements dynamically. (Assembler).
Library prefix.SDSNSAMP contains the sample programs. You can view the
programs online, or you can print them using ISPF, IEBPTPCH, or your own
printing program.
You can use all forms of dynamic SQL in all supported versions of COBOL.
Related concepts
“Sample COBOL dynamic SQL program” on page 333
The term “fixed-list” does not imply that you must know in advance how many
rows of data will be returned. However, you must know the number of columns
and the data types of those columns. A fixed-list SELECT statement returns a result
table that can contain any number of rows; your program looks at those rows one
at a time, using the FETCH statement. Each successive fetch returns the same
number of values as the last, and the values have the same data types each time.
Therefore, you can specify host variables as you do for static SQL.
An advantage of the fixed-list SELECT is that you can write it in any of the
programming languages that DB2 supports. Varying-list dynamic SELECT
statements require assembler, C, PL/I, or COBOL.
Example: Suppose that your program retrieves last names and phone numbers by
dynamically executing SELECT statements of this form:
SELECT LASTNAME, PHONENO FROM DSN8910.EMP
WHERE ... ;
The program reads the statements from a terminal, and the user determines the
WHERE clause.
Dynamic SELECT statements cannot use INTO. Therefore, you must use a cursor
to put the results into host variables.
Example: When you declare the cursor, use the statement name (call it STMT), and
give the cursor itself a name (for example, C1):
EXEC SQL DECLARE C1 CURSOR FOR STMT;
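A sketch of preparing the statement with an attributes host variable follows; it
assumes that the statement text is in DSTRING and that the attribute string is in
ATTRVAR.
EXEC SQL PREPARE STMT ATTRIBUTES :ATTRVAR FROM :DSTRING;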
ATTRVAR contains attributes that you want to add to the SELECT statement, such
as FETCH FIRST 10 ROWS ONLY or OPTIMIZE FOR 1 ROW. In general, if the
SELECT statement has attributes that conflict with the attributes in the PREPARE
statement, the attributes on the SELECT statement take precedence over the
attributes on the PREPARE statement. However, in this example, the SELECT
statement in DSTRING has no attributes specified, so DB2 uses the attributes in
ATTRVAR for the SELECT statement.
To execute STMT, your program must open the cursor, fetch rows from the result
table, and close the cursor.
Example: If four parameter markers are in STMT, you need the following
statement:
EXEC SQL OPEN C1 USING :PARM1, :PARM2, :PARM3, :PARM4;
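A FETCH along the following lines retrieves each row into the host variables; the
names match the discussion that follows.
EXEC SQL FETCH C1 INTO :NAME, :PHONE;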
The key feature of this statement is the use of a list of host variables to receive the
values returned by FETCH. The list has a known number of items (in this case,
two items, :NAME and :PHONE) of known data types (both are character strings,
of lengths 15 and 4, respectively).
You can use this list in the FETCH statement only because you planned the
program to use only fixed-list SELECTs. Every row that cursor C1 points to must
contain exactly two character values of appropriate length. If the program is to
handle anything else, it must use the techniques for including dynamic SQL for
varying-list SELECT statements in your program.
Because the varying-list SELECT statement requires pointer variables for the SQL
descriptor area, you cannot issue it from a Fortran program. A Fortran program
can, however, call a subroutine that is written in a language that supports pointer
variables to manage the SQLDA and storage allocation.
Suppose that your program dynamically executes SQL statements, but this time
without any limits on their form. Your program reads the statements from a
terminal, and you know nothing about them in advance. They might not even be
SELECT statements.
Now, the program must find out whether the statement is a SELECT. If it is, the
program must also find out how many values are in each row, and what their data
types are. The information comes from an SQL descriptor area (SQLDA).
The SQLDA is a structure that is used to communicate with your program, and
storage for it is usually allocated dynamically at run time.
A program that admits SQL statements of every kind for dynamic execution has
two choices:
v Provide the largest SQLDA that it could ever need. The maximum number of
columns in a result table is 750, so an SQLDA for 750 columns occupies 33 016
bytes for a single SQLDA, 66 016 bytes for a double SQLDA, or 99 016 bytes for
a triple SQLDA. Most SELECT statements do not retrieve 750 columns, so the
program does not usually use most of that space.
v Provide a smaller SQLDA, with fewer occurrences of SQLVAR. From this the
program can find out whether the statement was a SELECT and, if it was, how
many columns are in its result table. If more columns are in the result than the
SQLDA can hold, DB2 returns no descriptions. When this happens, the program
must acquire storage for a second SQLDA that is long enough to hold the
column descriptions, and ask DB2 for the descriptions again. Although this
technique is more complicated to program than the first, it is more general.
How many columns should you allow? You must choose a number that is large
enough for most of your SELECT statements, but not too wasteful of space; 40 is
a good compromise. To illustrate what you must do for statements that return
more columns than allowed, the example in this discussion uses an SQLDA that
is allocated for at least 100 columns.
As before, you need a cursor for the dynamic SELECT. For example, write:
EXEC SQL
DECLARE C1 CURSOR FOR STMT;
Suppose that your program declares an SQLDA structure with the name
MINSQLDA, having 100 occurrences of SQLVAR and SQLN set to 100. To prepare
a statement from the character string in DSTRING and also enter its description
into MINSQLDA, write this:
EXEC SQL PREPARE STMT FROM :DSTRING;
EXEC SQL DESCRIBE STMT INTO :MINSQLDA;
Equivalently, you can use the INTO clause in the PREPARE statement:
EXEC SQL
PREPARE STMT INTO :MINSQLDA FROM :DSTRING;
Do not use the USING clause in either of these examples. At the moment, only the
minimum SQLDA is in use. The following figure shows the contents of the
minimum SQLDA in use.
The SQLN field, which you must set before using DESCRIBE (or PREPARE INTO),
tells how many occurrences of SQLVAR the SQLDA is allocated for. If DESCRIBE
needs more than that, the results of the DESCRIBE depend on the contents of the
result table. Let n indicate the number of columns in the result table. Then:
v If the result table contains at least one distinct type column but no LOB
columns, you do not specify USING BOTH, and n<=SQLN<2*n, then DB2
returns base SQLVAR information in the first n SQLVAR occurrences, but no
distinct type information. Base SQLVAR information includes:
– Data type code
– Length attribute (except for LOBs)
– Column name or label
– Host variable address
– Indicator variable address
v Otherwise, if SQLN is less than the minimum number of SQLVAR occurrences
that are required to describe the result table, DB2 returns no information in the
SQLVARs.
To find out if the statement is a SELECT, your program can query the SQLD field
in MINSQLDA. If the field contains 0, the statement is not a SELECT, the
statement is already prepared, and your program can execute it. If no parameter
markers are in the statement, you can use:
EXEC SQL EXECUTE STMT;
(If the statement does contain parameter markers, you must use an SQL descriptor
area.)
Now you can allocate storage for a second, full-size SQLDA; call it FULSQLDA.
The following figure shows its structure.
After allocating sufficient space for FULSQLDA, your program must take these
steps:
The following figure shows an SQLDA that describes two columns that are not
LOB columns or distinct type columns.
If the SQLTYPE field indicates that the value can be null, the program must also
put the address of an indicator variable in the SQLIND field. The following figures
show the SQL descriptor area after you take certain actions.
In the previous figure, the DESCRIBE statement inserted all the values except the
first occurrence of the number 200. The program inserted the number 200 before it
executed DESCRIBE to tell how many occurrences of SQLVAR to allow. If the
result table of the SELECT has more columns than this, the SQLVAR fields describe
nothing.
The first SQLVAR pertains to the first column of the result table (the WORKDEPT
column). SQLVAR element 1 describes fixed-length character data that does not
allow null values (SQLTYPE=452); the length attribute is 3.
The following figure shows the SQLDA after your program acquires storage for the
column values and their indicators, and puts the addresses in the SQLDATA fields
of the SQLDA.
SQLVAR element 1 (44 bytes): SQLTYPE=452, SQLLEN=3, SQLDATA=address of FLDA, SQLIND=address of FLDAI, SQLNAME=(8) WORKDEPT
SQLVAR element 2 (44 bytes): SQLTYPE=453, SQLLEN=4, SQLDATA=address of FLDB, SQLIND=address of FLDBI, SQLNAME=(7) PHONENO
Host variables: FLDA CHAR(3) and FLDB CHAR(4); indicator variables (halfwords): FLDAI and FLDBI
Figure 16. SQL descriptor area after analyzing descriptions and acquiring storage
The following figure shows the SQLDA after your program executes a FETCH
statement.
SQLVAR element 1 (44 bytes): SQLTYPE=452, SQLLEN=3, SQLDATA=address of FLDA, SQLIND=address of FLDAI, SQLNAME=(8) WORKDEPT
SQLVAR element 2 (44 bytes): SQLTYPE=453, SQLLEN=4, SQLDATA=address of FLDB, SQLIND=address of FLDBI, SQLNAME=(7) PHONENO
Host variables after the FETCH: FLDA contains 'E11' and FLDB contains '4502'; the indicator variables FLDAI and FLDBI contain 0
Figure 17. SQL descriptor area after executing a FETCH statement
Figure 16 on page 175 shows the content of the descriptor area before the program
obtains any rows of the result table. Addresses of fields and indicator variables are
already in the SQLVAR.
All DB2 string data has an encoding scheme and CCSID associated with it. When
you select string data from a table, the selected data generally has the same
encoding scheme and CCSID as the table. If the application uses some method,
such as issuing the DECLARE VARIABLE statement, to change the CCSID of the
selected data, the data is converted from the CCSID of the table to the CCSID that
is specified by the application.
You can set the default application encoding scheme for a plan or package by
specifying the value in the APPLICATION ENCODING field of the panel
DEFAULTS FOR BIND PACKAGE or DEFAULTS FOR BIND PLAN. The default
application encoding scheme for the DB2 subsystem is the value that was specified
in the APPLICATION ENCODING field of installation panel DSNTIPF.
If you want to retrieve the data in an encoding scheme and CCSID other than the
default values, you can use one of the following techniques:
v For dynamic SQL, set the CURRENT APPLICATION ENCODING SCHEME
special register before you execute the SELECT statements. For example, to set
the CCSID and encoding scheme for retrieved data to the default CCSID for
Unicode, execute this SQL statement:
EXEC SQL SET CURRENT APPLICATION ENCODING SCHEME ='UNICODE';
The initial value of this special register is the application encoding scheme that
is determined by the BIND option.
v For static and dynamic SQL statements that use host variables and host variable
arrays, use the DECLARE VARIABLE statement to associate CCSIDs with the
host variables into which you retrieve the data. See “Setting the CCSID for host
variables” on page 146 for information about this technique.
v For static and dynamic SQL statements that use a descriptor, set the CCSID for
the retrieved data in the SQLDA. The following text describes that technique.
To change the encoding scheme for SQL statements that use a descriptor, set up the
SQLDA, and then make these additional changes to the SQLDA:
1. Put the character + in the sixth byte of field SQLDAID.
2. For each SQLVAR entry:
a. Set the length field of SQLNAME to 8.
b. Set the first two bytes of the data field of SQLNAME to X’0000’.
c. Set the third and fourth bytes of the data field of SQLNAME to the CCSID,
in hexadecimal, in which you want the results to display, or to X’0000’.
X’0000’ indicates that DB2 should use the default CCSID. If you specify a
nonzero CCSID, it must be a CCSID that DB2 supports for the encoding
scheme of the data.
For example, suppose that the table that contains WORKDEPT and PHONENO is
defined with CCSID ASCII. To retrieve data for columns WORKDEPT and
PHONENO in ASCII CCSID 437 (X’01B5’), change the SQLDA as shown in the
following figure.
SQLVAR element 1 (44 bytes): SQLTYPE=452, SQLLEN=3, SQLDATA=address of FLDA, SQLIND=address of FLDAI, SQLNAME length=8, SQLNAME data=X’000001B500000000’
SQLVAR element 2 (44 bytes): SQLTYPE=453, SQLLEN=4, SQLDATA=address of FLDB, SQLIND=address of FLDBI, SQLNAME length=8, SQLNAME data=X’000001B500000000’
Host variables: FLDA CHAR(3) and FLDB CHAR(4); indicator variables (halfwords): FLDAI and FLDBI
Figure 18. SQL descriptor area for retrieving data in ASCII CCSID 437
Restriction: You cannot use column labels with set operators (UNION,
INTERSECT, and EXCEPT).
To specify that DESCRIBE use column labels in the SQLNAME field, specify one of
the following options when you issue the DESCRIBE statement:
USING LABELS
Specifies that SQLNAME is to contain labels. If a column has no label,
SQLNAME contains nothing.
USING ANY
Specifies that SQLNAME is to contain labels wherever they exist. If a column
has no label, SQLNAME contains the column name.
| Some columns, such as those derived from functions or expressions, have neither
| name nor label; SQLNAME contains nothing for those columns. For example, if
| you use a UNION to combine two columns that do not have the same name and
| do not use a label, SQLNAME contains a string of length zero.
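For example, a sketch of a DESCRIBE statement that requests labels follows; the
statement name and descriptor name are the ones used earlier in this discussion.
EXEC SQL DESCRIBE STMT INTO :FULSQLDA USING LABELS;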
In general, the steps that you perform when you prepare an SQLDA to select rows
from a table with LOB and distinct type columns are similar to the steps that you
perform if the table has no columns of this type. The only difference is that you
need to analyze some additional fields in the SQLDA for LOB or distinct type
columns.
The USER column cannot contain nulls and is of distinct type ID, defined like this:
CREATE DISTINCT TYPE SCHEMA1.ID AS CHAR(20);
The result table for this statement has two columns, but you need four SQLVAR
occurrences in your SQLDA because the result table contains a LOB type and a
distinct type. Suppose that you prepare and describe this statement into
FULSQLDA, which is large enough to hold four SQLVAR occurrences. FULSQLDA
looks like the following figure.
Figure 19. SQL descriptor area after describing a CLOB and distinct type
The next steps are the same as for result tables without LOBs or distinct types:
1. Analyze each SQLVAR description to determine the maximum amount of space
you need for the column value.
For a LOB type, retrieve the length from the SQLLONGL field instead of the
SQLLEN field.
2. Derive the address of some storage area of the required size.
For a LOB data type, you also need a 4-byte storage area for the length of the
LOB data. You can allocate this 4-byte area at the beginning of the LOB data or
in a different location.
3. Put this address in the SQLDATA field.
For a LOB data type, if you allocated a separate area to hold the length of the
LOB data, put the address of the length field in SQLDATAL. If the length field
is at the beginning of the LOB data area, put 0 in SQLDATAL. When you use a file
reference variable for a LOB column, the indicator variable indicates whether
the data in the file is null, not whether the data to which SQLDATA points is
null.
4. If the SQLTYPE field indicates that the value can be null, the program must
also put the address of an indicator variable in the SQLIND field.
The following figure shows the contents of FULSQLDA after you fill in pointers to
the storage locations.
Figure 20. SQL descriptor area after analyzing CLOB and distinct type descriptions and acquiring storage
The following figure shows the contents of FULSQLDA after you execute a FETCH
statement.
Instead of specifying host variables to store XML values from a table, you can
create an SQLDA to point to the data areas where DB2 puts the retrieved data. The
SQLDA needs to describe the data type for each data area.
3. Check the SQLTYPE field of each SQLVAR entry. If the SQLTYPE field is 988 or
989, the column in the result set is an XML column.
Restriction: You cannot use the XML type (988/989) as a target host
variable type.
b. If the target host variable type is XML AS BLOB, XML AS CLOB, or XML
AS DBCLOB, change the first two bytes in the SQLNAME field to X’0000’
and the fifth and sixth bytes to X’0100’. These bytes indicate that the value
to be received is an XML value.
5. Populate the extended SQLVAR fields for each XML column as you would for a
LOB column, as indicated in the following table.
Table 46. Fields for an extended SQLVAR entry for an XML host variable

SQLVAR field                                      Value for an XML host variable
len.sqllonglen (SQLLONGL, SQLLONGLEN)             The length attribute of the XML host variable
* (reserved)                                      Reserved
sqldatalen (SQLDATAL, SQLDATALEN)                 A pointer to the length of the XML host variable
sqldatatype_name (SQLTNAME, SQLDATATYPENAME)      Not used
You can easily retrieve rows of the result table using a varying-list SELECT
statement. The statements differ only a little from those for the fixed-list example.
Open the cursor: If the SELECT statement contains no parameter marker, this step
is simple enough. For example:
EXEC SQL OPEN C1;
Fetch rows from the result table: This statement differs from the corresponding
one for the case of a fixed-list select. Write:
EXEC SQL
FETCH C1 USING DESCRIPTOR :FULSQLDA;
The key feature of this statement is the clause USING DESCRIPTOR :FULSQLDA.
That clause names an SQL descriptor area in which the occurrences of SQLVAR
point to other areas. Those other areas receive the values that FETCH returns. It is
possible to use that clause only because you previously set up FULSQLDA to look
like Figure 15 on page 174.
Figure 17 on page 175 shows the result of the FETCH. The data areas identified in
the SQLVAR fields receive the values from a single row of the result table.
Successive executions of the same FETCH statement put values from successive
rows of the result table into these same areas.
Close the cursor: This step is the same as for the fixed-list case. When no more
rows need to be processed, execute the following statement:
EXEC SQL CLOSE C1;
When COMMIT ends the unit of work containing OPEN, the statement in STMT
reverts to the unprepared state. Unless you defined the cursor using the WITH
HOLD option, you must prepare the statement again before you can reopen the
cursor.
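Taken together, the varying-list flow might look like the following C sketch. The names STMT, C1, DSTRING, and FULSQLDA match the examples above; storage allocation for the SQLVAR data pointers and error checking are omitted, and an SQLCA is assumed to be declared.
EXEC SQL DECLARE C1 CURSOR FOR STMT;          /* cursor for the dynamic statement  */

EXEC SQL PREPARE STMT FROM :DSTRING;          /* prepare the statement string      */
EXEC SQL DESCRIBE STMT INTO :FULSQLDA;        /* describe the result columns       */
/* ... acquire storage and set SQLDATA and SQLIND in each SQLVAR here ...          */

EXEC SQL OPEN C1;
for (;;)
{
   EXEC SQL FETCH C1 USING DESCRIPTOR :FULSQLDA;
   if (SQLCODE != 0) break;                   /* +100 means end of data            */
   /* ... process the values in the data areas ...                                 */
}
EXEC SQL CLOSE C1;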
When the number and types of parameters are known: In the preceding example,
you do not know in advance the number of parameter markers, and perhaps the
In both cases, the number and types of host variables named must agree with the
number of parameter markers in STMT and the types of parameter they represent.
The first variable (VAR1 in the examples) must have the type expected for the first
parameter marker in the statement, the second variable must have the type
expected for the second marker, and so on. There must be at least as many
variables as parameter markers.
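For example, if STMT contains two parameter markers, the statements might look like this (VAR1 and VAR2 are the host variables that the preceding paragraph refers to):
EXEC SQL EXECUTE STMT USING :VAR1, :VAR2;
or, for a SELECT statement:
EXEC SQL OPEN C1 USING :VAR1, :VAR2;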
When the number and types of parameters are not known: When you do not
know the number and types of parameters, you can adapt the SQL descriptor area.
Your program can include an unlimited number of SQLDAs, and you can use them
for different purposes. Suppose that an SQLDA, arbitrarily named DPARM,
describes a set of parameters.
The structure of DPARM is the same as that of any other SQLDA. The number of
occurrences of SQLVAR can vary, as in previous examples. In this case, every
parameter marker must have one SQLVAR. Each occurrence of SQLVAR describes
one host variable that replaces one parameter marker at run time. DB2 replaces the
parameter markers when a non-SELECT statement executes or when a cursor is
opened for a SELECT statement.
You must fill in certain fields in DPARM before using EXECUTE or OPEN; you
can ignore the other fields.
Field Use when describing host variables for parameter markers
SQLDAID
The seventh byte indicates whether more than one SQLVAR entry is used
for each parameter marker. If this byte is not blank, at least one parameter
marker represents a distinct type or LOB value, so the SQLDA has more
than one set of SQLVAR entries.
You do not set this field for a REXX SQLDA.
SQLDABC
The length of the SQLDA, which is equal to SQLN * 44 + 16. You do not
set this field for a REXX SQLDA.
SQLN The number of occurrences of SQLVAR allocated for DPARM. You do not
set this field for a REXX SQLDA.
SQLD The number of occurrences of SQLVAR actually used. This number must
not be less than the number of parameter markers. In each occurrence of
SQLVAR, put information in the following fields: SQLTYPE, SQLLEN,
SQLDATA, SQLIND.
Using the SQLDA with EXECUTE or OPEN: To indicate that the SQLDA called
DPARM describes the host variables substituted for the parameter markers at run
time, use a USING DESCRIPTOR clause with EXECUTE or OPEN.
v For a non-SELECT statement, write:
EXEC SQL EXECUTE STMT USING DESCRIPTOR :DPARM;
v For a SELECT statement, write:
EXEC SQL OPEN C1 USING DESCRIPTOR :DPARM;
When you specify the bind option REOPT(ALWAYS), DB2 reoptimizes the access
path at run time for SQL statements that contain host variables, parameter
markers, or special registers. The option REOPT(ALWAYS) has the following effects
on dynamic SQL statements:
v When you specify the option REOPT(ALWAYS), DB2 automatically uses
DEFER(PREPARE), which means that DB2 waits to prepare a statement until it
encounters an OPEN or EXECUTE statement.
v When you execute a DESCRIBE statement and then an EXECUTE statement on a
non-SELECT statement, DB2 prepares the statement twice: Once for the
DESCRIBE statement and once for the EXECUTE statement. DB2 uses the values
in the input variables only during the second PREPARE. These multiple
PREPAREs can cause performance to degrade if your program contains many
dynamic non-SELECT statements. To improve performance, consider putting the
code that contains those statements in a separate package and then binding that
package with the option REOPT(NONE).
v If you execute a DESCRIBE statement before you open a cursor for that
statement, DB2 prepares the statement twice. If, however, you execute a
DESCRIBE statement after you open the cursor, DB2 prepares the statement only
once. To improve the performance of a program bound with the option
REOPT(ALWAYS), execute the DESCRIBE statement after you open the cursor.
To prevent an automatic DESCRIBE before a cursor is opened, do not use a
PREPARE statement with the INTO clause.
v If you use predictive governing for applications bound with REOPT(ALWAYS),
DB2 does not return a warning SQLCODE when dynamic SQL statements
exceed the predictive governing warning threshold. DB2 does return an error
SQLCODE when dynamic SQL statements exceed the predictive governing error
threshold. DB2 returns the error SQLCODE for an EXECUTE or OPEN statement.
| When you specify the bind option REOPT(AUTO), DB2 optimizes the access path
| for SQL statements at the first EXECUTE or OPEN. Each time a statement is
| executed, DB2 determines if a new access path is needed to improve the
| performance of the statement. If a new access path will improve the performance,
| DB2 generates one. The option REOPT(AUTO) has the following effects on
| dynamic SQL statements:
| v When you specify the bind option REOPT(AUTO), DB2 optimizes the access
| path for SQL statements at the first EXECUTE or OPEN. Each time a statement
| is executed, DB2 determines if a new access path is needed to improve the
| performance of the statement. If a new access path will improve the
| performance, DB2 generates one.
| v When you specify the option REOPT(AUTO), DB2 automatically uses
| DEFER(PREPARE), which means that DB2 waits to prepare a statement until it
| encounters an OPEN or EXECUTE statement.
| v When DB2 prepares a statement using REOPT(AUTO), it saves the access path
| in the dynamic statement cache. This access path is used each time the statement
| is run, until DB2 determines that a new access path is needed to improve the
| performance or the statement that is in the cache is invalidated (or removed
| from the cache) and needs to be rebound.
| v The DESCRIBE statement has the following effects on dynamic statements that
| are bound with REOPT(AUTO):
| – When you execute a DESCRIBE statement before an EXECUTE statement on a
| non-SELECT statement, DB2 prepares the statement an extra time if it is not
| already saved in the cache: Once for the DESCRIBE statement and once for
| the EXECUTE statement. DB2 uses the values of the input variables only
| during the second time the statement is prepared. It then saves the statement
| in the cache. If you execute a DESCRIBE statement before an EXECUTE
| statement on a non-SELECT statement that has already been saved in the
| cache, DB2 will always prepare the non-SELECT statement for the DESCRIBE
| statement, and will prepare the statement again on EXECUTE only if DB2
| determines that a new access path different from the one already saved in the
| cache can improve the performance.
| – If you execute DESCRIBE on a statement before you open a cursor for that
| statement, DB2 always prepares the statement on DESCRIBE. However, DB2
| will not prepare the statement again on OPEN if the statement has already
| been saved in the cache and DB2 determines that a new access path is not
| needed at OPEN time. If you execute DESCRIBE on a statement after you
| open a cursor for that statement, DB2 prepares the statement only once if it
| is not already saved in the cache. If the statement is already saved in the
| cache and you execute DESCRIBE after you open a cursor for that statement,
| DB2 does not prepare the statement; it uses the statement that is saved in
| the cache.
| v If you use predictive governing for applications that are bound with
| REOPT(AUTO), DB2 does not return a warning SQLCODE when dynamic SQL
| statements exceed the predictive governing warning threshold. DB2 does return
| an error SQLCODE when dynamic SQL statements exceed the predictive
| governing error threshold. DB2 returns the error SQLCODE for an EXECUTE or
| OPEN statement.
Suppose that you design a program to read SQL DELETE statements, similar to
these, from a terminal:
DELETE FROM DSN8910.EMP WHERE EMPNO = '000190'
DELETE FROM DSN8910.EMP WHERE EMPNO = '000220'
Recall that you must prepare (precompile and bind) static SQL statements before
you can use them. You cannot prepare dynamic SQL statements in advance. The
SQL statement EXECUTE IMMEDIATE causes an SQL statement to prepare and
execute, dynamically, at run time.
Declaring the host variable: Before you prepare and execute an SQL statement, you
can read it into a host variable. If the maximum length of the SQL statement is 32
KB, declare the host variable as a character or graphic host variable according to
the following rules for the host languages:
| v In assembler, PL/I, COBOL and C, you must declare a string host variable as a
| varying-length string.
| v In Fortran, it must be a fixed-length string variable.
If the length is greater than 32 KB, you must declare the host variable as a CLOB
or DBCLOB, and the maximum is 2 MB.
Declaring a CLOB or DBCLOB host variable: You declare CLOB and DBCLOB
host variables according to certain rules.
The precompiler generates a structure that contains two elements, a 4-byte length
field and a data field of the specified length. The names of these fields vary
depending on the host language:
v In PL/I, assembler, and Fortran, the names are variable_LENGTH and
variable_DATA.
v In COBOL, the names are variable-LENGTH and variable-DATA.
v In C, the names are variable.LENGTH and variable.DATA.
Example: Using a CLOB host variable: This excerpt is from a C program that
copies an UPDATE statement into the host variable string1 and executes the
statement:
EXEC SQL BEGIN DECLARE SECTION;
...
SQL TYPE IS CLOB(4k) string1;
EXEC SQL END DECLARE SECTION;
...
/* Copy a statement into the host variable string1. */
strcpy(string1.data, "UPDATE DSN8910.EMP SET SALARY = SALARY * 1.1");
string1.length = 44;
EXEC SQL EXECUTE IMMEDIATE :string1;
...
Suppose that you want to execute DELETE statements repeatedly using a list of
employee numbers. Consider how you would do it if you could write the DELETE
statement as a static SQL statement:
< Read a value for EMP from the list. >
DO UNTIL (EMP = 0);
EXEC SQL
DELETE FROM DSN8910.EMP WHERE EMPNO = :EMP ;
< Read a value for EMP from the list. >
END;
If you know in advance that you will use only the DELETE statement and only the
table DSN8910.EMP, you can use the more efficient static SQL. Suppose further
that several different tables have rows that are identified by employee numbers,
and that users enter a table name as well as a list of employee numbers to delete.
Although variables can represent the employee numbers, they cannot represent the
table name, so you must construct and execute the entire statement dynamically.
Your program must now do these things differently:
v Use parameter markers instead of host variables
v Use the PREPARE statement
v Use EXECUTE instead of EXECUTE IMMEDIATE
You can indicate to DB2 that a parameter marker represents a host variable of a
certain data type by specifying the parameter marker as the argument of a CAST
specification. When the statement executes, DB2 converts the host variable to the
data type in the CAST specification. A parameter marker that you include in a
CAST specification is called a typed parameter marker. A parameter marker without
a CAST specification is called an untyped parameter marker.
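For example, the following statement string uses a typed parameter marker to tell DB2 to treat the value as CHAR(6), the data type of the EMPNO column in the sample employee table (a sketch; the untyped form EMPNO = ? is equally valid):
DELETE FROM DSN8910.EMP WHERE EMPNO = CAST(? AS CHAR(6))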
Example using parameter markers: Suppose that you want to prepare this
statement:
DELETE FROM DSN8910.EMP WHERE EMPNO = ?
You associate host variable :EMP with the parameter marker when you execute the
prepared statement. Suppose that S1 is the prepared statement. Then the EXECUTE
statement looks like this:
EXECUTE S1 USING :EMP;
Using the PREPARE statement: Before you prepare an SQL statement, you can
assign it to a host variable. If the length of the statement is greater than 32 KB, you
must declare the host variable as a CLOB or DBCLOB.
Example using the PREPARE statement: Assume that the character host variable
:DSTRING has the value “DELETE FROM DSN8910.EMP WHERE EMPNO = ?”.
To prepare an SQL statement from that string and assign it the name S1, write:
EXEC SQL PREPARE S1 FROM :DSTRING;
The prepared statement still contains a parameter marker, for which you must
supply a value when the statement executes. After the statement is prepared, the
table name is fixed, but the parameter marker enables you to execute the same
statement many times with different values of the employee number.
Using the EXECUTE statement: The EXECUTE statement executes a prepared SQL
statement by naming a list of one or more host variables, one or more host variable
arrays, or a host structure. This list supplies values for all of the parameter
markers.
After you prepare a statement, you can execute it many times within the same unit
of work. In most cases, COMMIT or ROLLBACK destroys statements prepared in a
unit of work. Then, you must prepare them again before you can execute them
again. However, if you declare a cursor for a dynamic statement and use the
option WITH HOLD, a commit operation does not destroy the prepared statement
if the cursor is still open. You can execute the statement in the next unit of work
without preparing it again.
Example using the EXECUTE statement: To execute the prepared statement S1 just
once, using a parameter value contained in the host variable :EMP, write:
EXEC SQL EXECUTE S1 USING :EMP;
Preparing and executing the example DELETE statement: The example in this
topic began with a DO loop that executed a static SQL statement repeatedly.
You can now write an equivalent example for a dynamic SQL statement:
< Read a statement containing parameter markers into DSTRING.>
EXEC SQL PREPARE S1 FROM :DSTRING;
< Read a value for EMP from the list. >
DO UNTIL (EMP = 0);
EXEC SQL EXECUTE S1 USING :EMP;
< Read a value for EMP from the list. >
END;
The PREPARE statement prepares the SQL statement and calls it S1. The EXECUTE
statement executes S1 repeatedly, using different values for EMP.
Using more than one parameter marker: The prepared statement (S1 in the
example) can contain more than one parameter marker. If it does, the USING
clause of EXECUTE specifies a list of variables or a host structure. The variables
must contain values that match the number and data types of parameters in S1 in
the proper order. You must know the number and types of parameters in advance
and declare the variables in your program, or you can use an SQLDA (SQL
descriptor area).
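For example, if S1 contains two parameter markers, the EXECUTE statement might look like the following sketch (:EMP and :DEPT are host variables; :DEPT is a hypothetical second variable that matches the data type of the second marker):
EXEC SQL EXECUTE S1 USING :EMP, :DEPT;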
Related concepts
“SQL statements in assembler programs” on page 241
“SQL statements in C programs” on page 280
“SQL statements in COBOL programs” on page 326
“SQL statements in Fortran programs” on page 371
“SQL statements in PL/I programs” on page 393
“SQL statements in REXX programs” on page 405
Related tasks
“Dynamically executing an SQL statement by using EXECUTE IMMEDIATE” on
page 187
Related reference
PREPARE (SQL Reference)
For example, suppose that you want to repeatedly execute a multiple-row INSERT
statement with a list of activity IDs, activity keywords, and activity descriptions
that are provided by the user. You can use the following static SQL INSERT
statement to insert multiple rows of data into the activity table:
EXEC SQL
INSERT INTO DSN8910.ACT
VALUES (:hva_actno, :hva_actkwd, :hva_actdesc)
FOR :num_rows ROWS;
However, if you want to enter the rows of data into different tables or enter
different numbers of rows, you can construct the INSERT statement dynamically.
You can use an SQLDA structure to specify data types and other information about
the host variable arrays that contain the values to insert.
Example
You can use the following code to set up an SQLDA, obtain parameter information
by using the DESCRIBE INPUT statement, and execute the statement:
SQLDAPTR=ADDR(INSQLDA); /* Get pointer to SQLDA */
SQLDAID='SQLDA'; /* Fill in SQLDA eye-catcher */
SQLDABC=LENGTH(INSQLDA); /* Fill in SQLDA length */
SQLN=1; /* Fill in number of SQLVARs */
SQLD=0; /* Initialize # of SQLVARs used */
DO IX=1 TO SQLN; /* Initialize the SQLVAR */
SQLTYPE(IX)=0;
SQLLEN(IX)=0;
To enable the dynamic statement cache to save prepared statements, specify YES
on the CACHE DYNAMIC SQL field of installation panel DSNTIP8.
Related concepts
“SQL statements in assembler programs” on page 241
“SQL statements in C programs” on page 280
“SQL statements in COBOL programs” on page 326
“SQL statements in Fortran programs” on page 371
“SQL statements in PL/I programs” on page 393
“SQL statements in REXX programs” on page 405
Related reference
Performance and optimization panel: DSNTIP8 (DB2 Installation and
Migration)
The dynamic statement cache is a pool in which DB2 saves prepared SQL
statements that can be shared among different threads, plans, and packages to
improve performance. Only certain dynamic SQL statements can be saved in this
cache.
As the DB2 ability to optimize SQL has improved, the cost of preparing a dynamic
SQL statement has grown. Applications that use dynamic SQL might be forced to
pay this cost more than once. When an application performs a commit operation, it
must issue another PREPARE statement if that SQL statement is to be executed
again. For a SELECT statement, the ability to declare a cursor WITH HOLD
provides some relief, but requires that the cursor be open at the commit point.
DB2 can save prepared dynamic statements in a cache. The cache is a dynamic
statement cache pool that all application processes can use to save and retrieve
prepared dynamic statements. After an SQL statement has been prepared and is
automatically saved in the cache, subsequent prepare requests for that same SQL
statement can avoid the costly preparation process by using the statement that is in
the cache. Statements that are saved in the cache can be shared among different
threads, plans, or packages.
Eligible statements: The following SQL statements can be saved in the cache:
SELECT
UPDATE
INSERT
DELETE
| MERGE
Distributed and local SQL statements are eligible to be saved. Prepared, dynamic
statements that use DB2 private protocol access are also eligible to be saved.
Restriction: Even though static statements that use DB2 private protocol access are
dynamic at the remote site, those statements cannot be saved in the cache.
| Statements in plans or packages that are bound with REOPT(ALWAYS) cannot be
| saved in the cache. Statements in plans and packages that are bound with
| REOPT(ONCE) or REOPT(AUTO) can be saved in the cache.
Prepared statements cannot be shared among data sharing members. Because each
member has its own EDM pool, a cached statement on one member is not
available to an application that runs on another member.
Related tasks
“Including dynamic SQL for varying-list SELECT statements in your program” on
page 169
Suppose that S1 and S2 are source statements, and P1 is the prepared version of
S1. P1 is in the dynamic statement cache.
The following conditions must be met before DB2 can use statement P1 instead of
preparing statement S2:
v S1 and S2 must be identical. The statements must pass a character by character
comparison and must be the same length. If the PREPARE statement for either
statement contains an ATTRIBUTES clause, DB2 concatenates the values in the
ATTRIBUTES clause to the statement string before comparing the strings. That
is, if A1 is the set of attributes for S1 and A2 is the set of attributes for S2, DB2
compares S1||A1 to S2||A2.
If the statement strings are not identical, DB2 cannot use the statement in the
cache.
For example, assume that S1 and S2 are specified as follows and differ only in
their blanks:
'UPDATE EMP SET SALARY=SALARY+50'
'UPDATE EMP SET SALARY = SALARY + 50'
In this case, DB2 cannot use P1 for S2. DB2 prepares S2 and saves the prepared
version of S2 in the cache.
| v The authorization ID or role that was used to prepare S1 must be used to
| prepare S2:
| – When a plan or package has run behavior, the authorization ID is the current
| SQLID value.
| For secondary authorization IDs:
| - The application process that searches the cache must have the same
| secondary authorization ID list as the process that inserted the entry into
| the cache or must have a superset of that list.
| - If the process that originally prepared the statement and inserted it into the
| cache used one of the privileges held by the primary authorization ID to
| accomplish the prepare, that ID must either be part of the secondary
| authorization ID list of the process searching the cache, or it must be the
| primary authorization ID of that process.
| – When a plan or package has bind behavior, the authorization ID is the plan
| owner’s ID. For a DDF server thread, the authorization ID is the package
| owner’s ID.
| – When a package has define behavior, then the authorization ID is the
| user-defined function or stored procedure owner.
| – When a package has invoke behavior, then the authorization ID is the
| authorization ID under which the statement that invoked the user-defined
| function or stored procedure executed.
| – If the application process has a role associated with it, DB2 uses the role to
| search the cache instead of the authorization IDs. If the trusted context that
As part of an online monitoring strategy, you can examine all statements in the
dynamic statement cache and concentrate on improving specific statements for
specific performance characteristics.
Example: If you are concerned about CPU time, you can select from the
STAT_CPU column in the DSN_STATEMENT_CACHE_TABLE table to identify the
queries that consume the most CPU time. Then you can work to improve the
performance of those specific queries.
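For example, after the cache contents have been externalized to DSN_STATEMENT_CACHE_TABLE (typically by running EXPLAIN STMTCACHE ALL), a query like the following sketch lists the most CPU-intensive cached statements. The STMT_TEXT column name is assumed here in addition to the STAT_CPU column that the text mentions:
SELECT STMT_TEXT, STAT_CPU
  FROM DSN_STATEMENT_CACHE_TABLE
  ORDER BY STAT_CPU DESC
  FETCH FIRST 10 ROWS ONLY;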
Related reference
EXPLAIN (SQL Reference)
When the dynamic statement cache is active, and you run an application bound
with KEEPDYNAMIC(YES), DB2 retains a copy of both the prepared statement
and the statement string. The prepared statement is cached locally for the
application process. In general, the statement is globally cached in the EDM pool,
to benefit other application processes. If the application issues an OPEN,
EXECUTE, or DESCRIBE after a commit operation, the application process uses its
local copy of the prepared statement to avoid a prepare and a search of the cache.
The following example illustrates this process.
PREPARE STMT1 FROM ...   Statement is prepared and put in memory.
EXECUTE STMT1
COMMIT
   .
   .
   .
EXECUTE STMT1            Application does not issue PREPARE.
COMMIT                   DB2 uses the prepared statement in memory.
   .
   .
   .
EXECUTE STMT1            Again, no PREPARE needed.
COMMIT                   DB2 uses the prepared statement in memory.
   .
   .
   .
PREPARE STMT1 FROM ...   Again, no PREPARE needed.
COMMIT                   DB2 uses the prepared statement in memory.
The local instance of the prepared SQL statement is kept in ssnmDBM1 storage
until one of the following occurs:
v The application process ends.
v A rollback operation occurs.
v The application issues an explicit PREPARE statement with the same statement
name.
If the application does issue a PREPARE for the same SQL statement name that
has a kept dynamic statement associated with it, the kept statement is discarded
and DB2 prepares the new statement.
v The statement is removed from memory because the statement has not been
used recently, and the number of kept dynamic SQL statements reaches the
subsystem default as set during installation.
The KEEPDYNAMIC option has performance implications for DRDA clients that
specify WITH HOLD on their cursors:
v If KEEPDYNAMIC(NO) is specified, a separate network message is required
when the DRDA client issues the SQL CLOSE for the cursor.
v If KEEPDYNAMIC(YES) is specified, the DB2 for z/OS server automatically
closes the cursor when SQLCODE +100 is detected, which means that the client
does not have to send a separate message to close the held cursor. This reduces
network traffic for DRDA applications that use held cursors. It also reduces the
duration of locks that are associated with the held cursor.
Considerations for data sharing: If one member of a data sharing group has
enabled the cache but another has not, and an application is bound with
KEEPDYNAMIC(YES), DB2 must implicitly prepare the statement again if the
statement is assigned to a member without the cache. This can mean a slight
reduction in performance.
Your system administrator can establish the limits for individual plans or packages,
for individual users, or for all users who do not have personal limits.
Reactive governing
The reactive governing function of the resource limit facility stops any dynamic
SQL statements that overuse system resources. When a statement exceeds a
reactive governing threshold, the application program receives SQLCODE -905. The
application must then determine what to do next.
If the failed statement involves an SQL cursor, the cursor’s position remains
unchanged. The application can then close that cursor. Any other operation with
the cursor fails and returns the same SQL error code.
If the failed SQL statement does not involve a cursor, then all changes that the
statement made are undone before the error code returns to the application. The
application can either issue another SQL statement or commit all work done so far.
Predictive governing
The predictive governing function of the resource limit facility provides an
estimate of the processing cost of SQL statements before they run.
For information about setting up the resource limit facility for predictive
governing, see the topic “The resource limit facility (governor)” in DB2 Performance
Monitoring and Tuning Guide.
Handling the +495 SQLCODE: If your requester uses deferred prepare, the
presence of parameter markers determines when the application receives the +495
SQLCODE.
Normally with deferred prepare, the PREPARE, OPEN, and first FETCH of the
data are returned to the requester. For a predictive governor warning of +495, you
would ideally like to have the option to choose beforehand whether you want the
OPEN and FETCH of the data to occur. For down-level requesters, you do not
have this option.
If SQLCODE +495 is returned to the requester, OPEN processing continues but the
first block of data is not returned with the OPEN. Thus, if your application does
not continue with the query, you have already incurred the performance cost of
OPEN processing.
If your application does not defer the prepare, SQLCODE +495 is returned to the
requester and OPEN processing does not occur.
If your application does defer prepare processing, the application receives the +495
at its usual time (OPEN or PREPARE). If you have parameter markers with
deferred prepare, you receive the +495 at OPEN time as you normally do.
However, an additional message is exchanged.
Recommendation: Do not use deferred prepare for applications that use parameter
markers and that are predictively governed at the server side.
You can check the execution of SQL statements in one of the following ways:
v By displaying specific fields in the SQLCA.
v By testing SQLCODE or SQLSTATE for specific values.
v By using the WHENEVER statement in your application program.
v By testing indicator variables to detect numeric errors.
v By using the GET DIAGNOSTICS statement in your application program to
return all the condition information that results from the execution of an SQL
statement.
v By calling DSNTIAR to display the contents of the SQLCA.
If you use the SQLCA, include the necessary instructions to display information
that is contained in the SQLCA in your application program. Alternatively, you can
use the GET DIAGNOSTICS statement, which is an SQL standard, to diagnose
problems.
v When DB2 processes an SQL statement, it places return codes that indicate the
success or failure of the statement execution in SQLCODE and SQLSTATE.
v When DB2 processes a FETCH statement, and the FETCH is successful, the
contents of SQLERRD(3) in the SQLCA is set to the number of returned rows.
v When DB2 processes a multiple-row FETCH statement, the contents of
SQLCODE is set to +100 if the last row in the table has been returned with the
set of rows.
v When DB2 processes an UPDATE, INSERT, or DELETE statement, and the
statement execution is successful, the contents of SQLERRD(3) in the SQLCA is
set to the number of rows that are updated, inserted, or deleted.
| v When DB2 processes a TRUNCATE statement and the statement execution is
| successful, SQLERRD(3) in the SQLCA is set to -1. The number of rows that are
| deleted is not returned.
v If SQLWARN0 contains W, DB2 has set at least one of the SQL warning flags
(SQLWARN1 through SQLWARNA):
– SQLWARN1 contains N for non-scrollable cursors and S for scrollable cursors
after an OPEN CURSOR or ALLOCATE CURSOR statement.
– SQLWARN4 contains I for insensitive scrollable cursors, S for sensitive static
scrollable cursors, and D for sensitive dynamic scrollable cursors, after an
OPEN CURSOR or ALLOCATE CURSOR statement, or blank if the cursor is
not scrollable.
– SQLWARN5 contains a character value of 1 (read only), 2 (read and delete),
or 4 (read, delete, and update) to indicate the operation that is allowed on the
result table of the cursor.
You should check for error codes before you commit data, and handle the errors
that they represent. The assembler subroutine DSNTIAR helps you to obtain a
formatted form of the SQLCA and a text message based on the SQLCODE field of
the SQLCA. You can retrieve this same message text by using the MESSAGE_TEXT
condition item field of the GET DIAGNOSTICS statement. Programs that require
long token message support should code the GET DIAGNOSTICS statement
instead of DSNTIAR.
DSNTIAR takes data from the SQLCA, formats it into a message, and places the
result in a message output area that you provide in your application program.
Each time you use DSNTIAR, it overwrites any previous messages in the message
output area. You should move or print the messages before using DSNTIAR again,
and before the contents of the SQLCA change, to get an accurate view of the
SQLCA.
DSNTIAR:
The assembler subroutine DSNTIAR helps you to obtain a formatted form of the
SQLCA and a text message based on the SQLCODE field of the SQLCA.
DSNTIAR can run either above or below the 16-MB line of virtual storage. The
DSNTIAR object module that comes with DB2 has the attributes AMODE(31) and
RMODE(ANY). At install time, DSNTIAR links as AMODE(31) and RMODE(ANY).
DSNTIAR runs in 31-bit mode if any of the following conditions is true:
When loading DSNTIAR from another program, be careful how you branch to
DSNTIAR. For example, if the calling program is in 24-bit addressing mode and
DSNTIAR is loaded above the 16-MB line, you cannot use the assembler BALR
instruction or CALL macro to call DSNTIAR, because they assume that DSNTIAR
is in 24-bit mode. Instead, you must use an instruction that is capable of branching
into 31-bit mode, such as BASSM.
You can dynamically link (load) and call DSNTIAR directly from a language that
does not handle 31-bit addressing. To do this, link a second version of DSNTIAR
with the attributes AMODE(24) and RMODE(24) into another load module library.
Alternatively, you can write an intermediate assembler language program that calls
DSNTIAR in 31-bit mode and then call that intermediate program in 24-bit mode
from your application.
For more information on the allowed and default AMODE and RMODE settings
for a particular language, see the application programming guide for that
language. For details on how the attributes AMODE and RMODE of an application
are determined, see the linkage editor and loader user’s guide for the language in
which you have written the application.
If a program calls DSNTIAR, the program must allocate enough storage in the
message output area to hold all of the message text.
You will probably need no more than 10 lines of 80 bytes each for your message
output area. An application program can have only one message output area.
You must define the message output area in VARCHAR format. In this varying
character format, a 2-byte length field precedes the data. The length field indicates
to DSNTIAR how many total bytes are in the output message area; the minimum
length of the output area is 240 bytes.
The following figure shows the format of the message output area, where length is
the 2-byte total length field, and the length of each line matches the logical record
length (lrecl) you specify to DSNTIAR.
(The figure shows the 2-byte length field followed by message lines 1 through n, each of length lrecl.)
When you call DSNTIAR, you must name an SQLCA and an output message area
in the DSNTIAR parameters. You must also provide the logical record length (lrecl)
as a value between 72 and 240 bytes. DSNTIAR assumes the message area contains
fixed-length records of length lrecl.
The assembler subroutine DSNTIAR helps your program read the information in
the SQLCA. The subroutine also returns its own return code.
Code Meaning
0 Successful execution.
4 More data available than could fit into the provided message area.
8 Logical record length not between 72 and 240, inclusive.
12 Message area not large enough. The message length was 240 or greater.
16 Error in TSO message routine.
20 Module DSNTIA1 could not be loaded.
24 SQLCA data error.
You can use the assembler subroutine DSNTIAR to generate the error message text
in the SQLCA.
Suppose you want your DB2 COBOL application to check for deadlocks and
timeouts, and you want to make sure your cursors are closed before continuing.
You use the statement WHENEVER SQLERROR to transfer control to an error
routine when your application receives a negative SQLCODE.
An SQLCODE of 0 or -501 resulting from the CLOSE statement indicates that the
close was successful.
To use DSNTIAR to generate the error message text, first follow these steps:
1. Choose a logical record length (lrecl) of the output lines. For this example,
assume lrecl is 72 (to fit on a terminal screen) and is stored in the variable
named ERROR-TEXT-LEN.
2. Define a message area in your COBOL application. Assuming you want an area
for up to 10 lines of length 72, you should define an area of 720 bytes, plus a
2-byte area that specifies the total length of the message output area.
01 ERROR-MESSAGE.
02 ERROR-LEN PIC S9(4) COMP VALUE +720.
02 ERROR-TEXT PIC X(72) OCCURS 10 TIMES
INDEXED BY ERROR-INDEX.
77 ERROR-TEXT-LEN PIC S9(9) COMP VALUE +72.
To display the contents of the SQLCA when SQLCODE is 0 or -501, call DSNTIAR
after the SQL statement that produces SQLCODE 0 or -501:
CALL 'DSNTIAR' USING SQLCA ERROR-MESSAGE ERROR-TEXT-LEN.
You can then print the message output area just as you would any other variable.
Your message might look like this:
DSNT408I SQLCODE = -501, ERROR: THE CURSOR IDENTIFIED IN A FETCH OR
CLOSE STATEMENT IS NOT OPEN
DSNT418I SQLSTATE = 24501 SQLSTATE RETURN CODE
DSNT415I SQLERRP = DSNXERT SQL PROCEDURE DETECTING ERROR
DSNT416I SQLERRD = -315 0 0 -1 0 0 SQL DIAGNOSTIC INFORMATION
DSNT416I SQLERRD = X'FFFFFEC5' X'00000000' X'00000000'
X'FFFFFFFF' X'00000000' X'00000000' SQL DIAGNOSTIC
INFORMATION
You can declare SQLCODE and SQLSTATE (SQLCOD and SQLSTA in Fortran) as
stand-alone host variables. If you specify the STDSQL(YES) precompiler option,
these host variables receive the return codes, and you should not include an
SQLCA in your program.
Related tasks
“Defining the SQL communications area, SQLSTATE, and SQLCODE in assembler”
on page 229
“Defining the SQL communications area, SQLSTATE, and SQLCODE in C” on page
249
“Defining the SQL communications area, SQLSTATE, and SQLCODE in COBOL”
on page 295
“Defining the SQL communications area, SQLSTATE, and SQLCODE in Fortran”
on page 363
“Defining the SQL communications area, SQLSTATE, and SQLCODE in PL/I” on
page 375
“Defining the SQL communications area, SQLSTATE, and SQLCODE in REXX” on
page 403
Related reference
SQLSTATE values and common error codes (DB2 Codes)
The WHENEVER statement must precede the first SQL statement it is to affect.
However, if your program checks SQLCODE directly, you must check SQLCODE
after each SQL statement.
Related concepts
Chapter 9, “Coding SQL statements in REXX application programs,” on page 403
Related reference
WHENEVER (SQL Reference)
You can use the GET DIAGNOSTICS statement to return diagnostic information
about the last SQL statement that was executed. You can request individual items
of diagnostic information from the following groups of items:
v Statement items, which contain information about the SQL statement as a whole
v Condition items, which contain information about each error or warning that
occurred during the execution of the SQL statement
v Connection items, which contain information about the SQL statement if it was a
CONNECT statement
| In SQL procedures, you can also retrieve diagnostic information by using handlers.
| Handlers tell the procedure what to do if a particular error occurs.
| Use the GET DIAGNOSTICS statement to handle multiple SQL errors that might
| result from the execution of a single SQL statement. First, check SQLSTATE (or
| SQLCODE) to determine whether diagnostic information should be retrieved by
| using GET DIAGNOSTICS. This method is especially useful for diagnosing
| problems that result from a multiple-row INSERT that is specified as NOT
| ATOMIC CONTINUE ON SQLEXCEPTION and from multiple-row MERGE statements.
Even if you use only the GET DIAGNOSTICS statement in your application
program to check for conditions, you must either include the instructions required
to use the SQLCA or you must declare SQLSTATE (or SQLCODE) separately in
your program.
When you use the GET DIAGNOSTICS statement, you assign the requested
diagnostic information to host variables. Declare each target host variable with a
data type that is compatible with the data type of the requested item.
To retrieve condition information, you must first retrieve the number of condition
items (that is, the number of errors and warnings that DB2 detected during the
execution of the last SQL statement). The number of condition items is at least one.
If the last SQL statement returned SQLSTATE ’00000’ (or SQLCODE 0), the number
of condition items is one.
In the following example, the first GET DIAGNOSTICS statement returns the
number of rows inserted and the number of conditions returned. The second GET
DIAGNOSTICS statement returns the following items for each condition:
SQLCODE, SQLSTATE, and the number of the row (in the rowset that was being
inserted) for which the condition occurred.
EXEC SQL BEGIN DECLARE SECTION;
long row_count, num_condns, i;
long ret_sqlcode, row_num;
char ret_sqlstate[6];
...
EXEC SQL END DECLARE SECTION;
...
EXEC SQL
INSERT INTO DSN8910.ACT
(ACTNO, ACTKWD, ACTDESC)
VALUES (:hva1, :hva2, :hva3)
FOR 10 ROWS
NOT ATOMIC CONTINUE ON SQLEXCEPTION;
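The two GET DIAGNOSTICS statements that the preceding description refers to might look like the following sketch. The item names ROW_COUNT, NUMBER, DB2_RETURNED_SQLCODE, RETURNED_SQLSTATE, and DB2_ROW_NUMBER are standard GET DIAGNOSTICS items, and the host variables match the declarations above:
EXEC SQL GET DIAGNOSTICS
    :row_count = ROW_COUNT, :num_condns = NUMBER;
printf("Number of rows inserted = %ld\n", row_count);
for (i = 1; i <= num_condns; i++) {
  EXEC SQL GET DIAGNOSTICS CONDITION :i
      :ret_sqlcode = DB2_RETURNED_SQLCODE,
      :ret_sqlstate = RETURNED_SQLSTATE,
      :row_num = DB2_ROW_NUMBER;
  printf("SQLCODE = %ld, SQLSTATE = %s, ROW NUMBER = %ld\n",
         ret_sqlcode, ret_sqlstate, row_num);
}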
In the activity table, the ACTNO column is defined as SMALLINT. Suppose that
you declare the host variable array hva1 as an array with data type long, and you
populate the array so that the value for the fourth element is 32768.
If you check the SQLCA values after the INSERT statement, the value of
SQLCODE is equal to 0, the value of SQLSTATE is ’00000’, and the value of
SQLERRD(3) is 9 for the number of rows that were inserted. However, the INSERT
statement specified that 10 rows were to be inserted.
The GET DIAGNOSTICS statement provides you with the information that you
need to correct the data for the row that was not inserted. The printed output from
your program looks like this:
Number of rows inserted = 9
SQLCODE = -302, SQLSTATE = 22003, ROW NUMBER = 4
The value 32768 for the input variable is too large for the target column ACTNO.
You can print the MESSAGE_TEXT condition item.
Related concepts
“Handlers in an SQL procedure” on page 540
Related reference
“Data types for GET DIAGNOSTICS items”
GET DIAGNOSTICS (SQL Reference)
Related information
-302 (DB2 Codes)
The following tables specify the data types for the statement, condition, and
connection information items that you can request by using the GET
DIAGNOSTICS statement.
Table 48. Data types for GET DIAGNOSTICS items that return condition information

CATALOG_NAME (VARCHAR(128))
    This item contains the server name of the table that owns a constraint that
    caused an error, or that caused an access rule or check violation.
CONDITION_NUMBER (INTEGER)
    This item contains the number of the condition.
CURSOR_NAME (VARCHAR(128))
    This item contains the name of a cursor in an invalid cursor state.
DB2_ERROR_CODE1 (INTEGER)
    This item contains an internal error code.
DB2_ERROR_CODE2 (INTEGER)
    This item contains an internal error code.
DB2_ERROR_CODE3 (INTEGER)
    This item contains an internal error code.
DB2_ERROR_CODE4 (INTEGER)
    This item contains an internal error code.
DB2_INTERNAL_ERROR_POINTER (INTEGER)
    For some errors, this item contains a negative value that is an internal
    error pointer.
DB2_MESSAGE_ID (CHAR(10))
    This item contains the message ID that corresponds to the message that is
    contained in the MESSAGE_TEXT diagnostic item.
DB2_MODULE_DETECTING_ERROR (CHAR(8))
    After any SQL statement, this item indicates which module detected the error.
DB2_ORDINAL_TOKEN_n (VARCHAR(515))
    After any SQL statement, this item contains the nth token, where n is a value
    from 1 to 100.
DB2_REASON_CODE (INTEGER)
    After any SQL statement, this item contains the reason code for errors that
    have a reason code token in the message text.
DB2_RETURNED_SQLCODE (INTEGER)
    After any SQL statement, this item contains the SQLCODE for the condition.
Table 49. Data types for GET DIAGNOSTICS items that return connection information

DB2_AUTHENTICATION_TYPE (CHAR(1))
    This item contains the authentication type (S, C, D, E, or blank).
DB2_AUTHORIZATION_ID (VARCHAR(128))
    This item contains the authorization ID that is used by the connected server.
DB2_CONNECTION_STATE (INTEGER)
    This item indicates whether the connection is unconnected (-1), local (0),
    or remote (1).
DB2_CONNECTION_STATUS (INTEGER)
    This item indicates whether updates can be committed for the current unit
    of work (1 for Yes, 2 for No).
DB2_ENCRYPTION_TYPE (CHAR(1))
    This item contains one of the following values that indicates the level of
    encryption for the connection:
    A    Only the authentication tokens (authid and password) are encrypted
    D    All of the data for the connection is encrypted
DB2_SERVER_CLASS_NAME (VARCHAR(128))
    After a CONNECT or SET CONNECTION statement, this item contains the DB2
    server class name.
DB2_PRODUCT_ID (VARCHAR(8))
    This item contains the DB2 product signature.
Related reference
GET DIAGNOSTICS (SQL Reference)
For rows in which a conversion or arithmetic expression error does occur, the
indicator variable indicates that one or more selected items have no meaningful
value. The indicator variable flags this error with a -2 for the affected host variable
and an SQLCODE of +802 (SQLSTATE ’01519’) in the SQLCA.
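For example, after a FETCH into a host variable with an indicator, a program might check for this condition as in the following sketch (C2, hvsal, and indsal are hypothetical cursor, host variable, and indicator variable names):
EXEC SQL FETCH C2 INTO :hvsal :indsal;
if (indsal == -2)
{
   /* A conversion or arithmetic error occurred for this column; */
   /* the value in hvsal is not meaningful.                      */
}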
To write an SQL application that allows you to create new tables, add columns to
them, increase the length of character columns, rearrange the columns, and delete
columns:
Use the CREATE TABLE and ALTER TABLE statements to create new tables, add
columns to existing tables, or change the data types of existing columns. Added
columns initially contain either the null value or a default value. Both statements,
like any data definition statement, are relatively expensive to execute; consider the
effects of locks.
You cannot rearrange or delete columns in a table without dropping the entire
table. You can, however, create a view on the table, which includes only the
columns you want, in the order you want. This has the same effect as redefining
the table.
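For example, the following sketch defines a view that omits one column of a table and presents the remaining columns in a different order (MYVIEW, MYTABLE, and the column names are hypothetical):
CREATE VIEW MYVIEW (COL3, COL1)
  AS SELECT COL3, COL1
     FROM MYTABLE;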
Related concepts
“Dynamic SQL” on page 162
Saving SQL statements that are translated from end user requests
If your program translates requests from end users into SQL statements and allows
users to save their requests, your program can improve performance by saving
those translated statements.
Save the corresponding SQL statements in a table with a column having a data
type of VARCHAR(n), where n is the maximum length of any SQL statement. You
must save the source SQL statements, not the prepared versions. That means that
you must retrieve and then prepare each statement before executing the version
stored in the table. In essence, your program prepares an SQL statement from a
character string and executes it dynamically.
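The following C sketch shows the idea. The table USER_REQUESTS, its columns, and the host variables are hypothetical, but the prepare-from-a-host-variable pattern is the one that this topic describes:
EXEC SQL BEGIN DECLARE SECTION;
  long reqid;                        /* key of the saved request            */
  char stmtbuf[4001];                /* source text of the saved statement  */
EXEC SQL END DECLARE SECTION;
...
/* Retrieve the saved source statement, then prepare and execute it. */
EXEC SQL SELECT STMT_TEXT INTO :stmtbuf
  FROM USER_REQUESTS
  WHERE REQUEST_ID = :reqid;
EXEC SQL PREPARE S2 FROM :stmtbuf;
EXEC SQL EXECUTE S2;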
Related concepts
“Dynamic SQL” on page 162
The following examples show you how to declare XML host variables in each
supported language. In each table, the left column contains the declaration that
you code in your application program. The right column contains the declaration
that DB2 generates.
The following table shows assembler language declarations for some typical XML
types.
Table 50. Example of assembler XML variable declarations

You declare this variable:
    BLOB_XML SQL TYPE IS XML AS BLOB 1M
DB2 generates this variable:
    BLOB_XML        DS 0FL4
    BLOB_XML_LENGTH DS FL4
    BLOB_XML_DATA   DS CL65535
                    ORG *+(983041)

You declare this variable:
    CLOB_XML SQL TYPE IS XML AS CLOB 40000K
DB2 generates this variable:
    CLOB_XML        DS 0FL4
    CLOB_XML_LENGTH DS FL4
    CLOB_XML_DATA   DS CL65535
                    ORG *+(40894465)

You declare this variable:
    DBCLOB_XML SQL TYPE IS XML AS DBCLOB 4000K
DB2 generates this variable:
    DBCLOB_XML        DS 0FL4
    DBCLOB_XML_LENGTH DS FL4
    DBCLOB_XML_DATA   DS GL65534
                      ORG *+(4030466)

You declare this variable:
    BLOB_XML_FILE SQL TYPE IS XML AS BLOB_FILE
DB2 generates this variable:
    BLOB_XML_FILE              DS 0FL4
    BLOB_XML_FILE_NAME_LENGTH  DS FL4
    BLOB_XML_FILE_DATA_LENGTH  DS FL4
    BLOB_XML_FILE_FILE_OPTIONS DS FL4
    BLOB_XML_FILE_NAME         DS CL255

You declare this variable:
    CLOB_XML_FILE SQL TYPE IS XML AS CLOB_FILE
DB2 generates this variable:
    CLOB_XML_FILE              DS 0FL4
    CLOB_XML_FILE_NAME_LENGTH  DS FL4
    CLOB_XML_FILE_DATA_LENGTH  DS FL4
    CLOB_XML_FILE_FILE_OPTIONS DS FL4
    CLOB_XML_FILE_NAME         DS CL255

You declare this variable:
    DBCLOB_XML_FILE SQL TYPE IS XML AS DBCLOB_FILE
DB2 generates this variable:
    DBCLOB_XML_FILE              DS 0FL4
    DBCLOB_XML_FILE_NAME_LENGTH  DS FL4
    DBCLOB_XML_FILE_DATA_LENGTH  DS FL4
    DBCLOB_XML_FILE_FILE_OPTIONS DS FL4
    DBCLOB_XML_FILE_NAME         DS CL255
The following table shows C and C++ language declarations that are generated by
the DB2 precompiler for some typical XML types. The declarations that the DB2
coprocessor generates might be different.
Table 51. Examples of C language variable declarations

You declare this variable:
    SQL TYPE IS XML AS BLOB (1M) blob_xml;
DB2 generates this variable:
    struct
    { unsigned long length;
      char data??(1048576??);
    } blob_xml;

You declare this variable:
    SQL TYPE IS XML AS CLOB(40000K) clob_xml;
DB2 generates this variable:
    struct
    { unsigned long length;
      char data??(40960000??);
    } clob_xml;

You declare this variable:
    SQL TYPE IS XML AS DBCLOB (4000K) dbclob_xml;
DB2 generates this variable:
    struct
    { unsigned long length;
      unsigned short data??(4096000??);
    } dbclob_xml;

You declare this variable:
    SQL TYPE IS XML AS BLOB_FILE blob_xml_file;
DB2 generates this variable:
    struct {
      unsigned long name_length;
      unsigned long data_length;
      unsigned long file_options;
      char name??(255??);
    } blob_xml_file;

You declare this variable:
    SQL TYPE IS XML AS CLOB_FILE clob_xml_file;
DB2 generates this variable:
    struct {
      unsigned long name_length;
      unsigned long data_length;
      unsigned long file_options;
      char name??(255??);
    } clob_xml_file;

You declare this variable:
    SQL TYPE IS XML AS DBCLOB_FILE dbclob_xml_file;
DB2 generates this variable:
    struct {
      unsigned long name_length;
      unsigned long data_length;
      unsigned long file_options;
      char name??(255??);
    } dbclob_xml_file;
The declarations that are generated for COBOL differ, depending on whether you
use the DB2 precompiler or the DB2 coprocessor.
The declarations that are generated for PL/I differ, depending on whether you use
the DB2 precompiler or the DB2 coprocessor.
The following table shows PL/I declarations that the DB2 precompiler generates
for some typical XML types.
Table 53. Examples of PL/I variable declarations
You declare this variable DB2 precompiler generates this variable
The encoding of XML data can be derived from the data itself, which is known as
internally encoded data, or from external sources, which is known as externally
encoded data. XML data that is sent to the database server as binary data is treated
as internally encoded data. XML data that is sent to the database server as
character data is treated as externally encoded data.
Externally encoded data can have internal encoding. That is, the data might be sent
to the database server as character data, but the data contains encoding
information. DB2 does not enforce consistency of the internal and external
encoding. When the internal and external encoding information differs, the
external encoding takes precedence. However, if there is a difference between the
external and internal encoding, intervening character conversion might have
occurred on the data, and there might be data loss.
Character data in XML columns is stored in UTF-8 encoding. The database server
handles conversion of the data from its internal or external encoding to UTF-8.
Example: The following example shows an assembler program that inserts data
from XML AS BLOB, XML AS CLOB, and CLOB host variables into an XML
column. The XML AS BLOB data is inserted as binary data, so the database server
honors the internal encoding. The XML AS CLOB and CLOB data is inserted as
character data, so the database server honors the external encoding.
**********************************************************************
* UPDATE AN XML COLUMN WITH DATA IN AN XML AS CLOB HOST VARIABLE *
**********************************************************************
EXEC SQL +
UPDATE MYCUSTOMER +
SET INFO = :XMLBUF +
WHERE CID = 1000
Example: The following example shows a C language program that inserts data
from XML AS BLOB, XML AS CLOB, and CLOB host variables into an XML
column. The XML AS BLOB data is inserted as binary data, so the database server
honors the internal encoding. The XML AS CLOB and CLOB data is inserted as
character data, so the database server honors the external encoding.
/******************************/
/* Host variable declarations */
/******************************/
EXEC SQL BEGIN DECLARE SECTION;
SQL TYPE IS XML AS CLOB( 10K ) xmlBuf;
SQL TYPE IS XML AS BLOB( 10K ) xmlblob;
SQL TYPE IS CLOB( 10K ) clobBuf;
EXEC SQL END DECLARE SECTION;
/******************************************************************/
/* Update an XML column with data in an XML AS CLOB host variable */
/******************************************************************/
EXEC SQL UPDATE MYCUSTOMER SET INFO = :xmlBuf where CID = 1000;
/******************************************************************/
/* Update an XML column with data in an XML AS BLOB host variable */
/******************************************************************/
EXEC SQL UPDATE MYCUSTOMER SET INFO = :xmlblob where CID = 1000;
/******************************************************************/
/* Update an XML column with data in a CLOB host variable. Use */
/* the XMLPARSE function to convert the data to the XML type. */
/******************************************************************/
EXEC SQL UPDATE MYCUSTOMER SET INFO = XMLPARSE(DOCUMENT :clobBuf) where CID = 1000;
Example: The following example shows a COBOL program that inserts data from
XML AS BLOB, XML AS CLOB, and CLOB host variables into an XML column.
The XML AS BLOB data is inserted as binary data, so the database server honors
the internal encoding. The XML AS CLOB and CLOB data is inserted as character
data, so the database server honors the external encoding.
******************************
* Host variable declarations *
******************************
01 XMLBUF USAGE IS SQL TYPE IS XML as CLOB(10K).
01 XMLBLOB USAGE IS SQL TYPE IS XML AS BLOB(10K).
01 CLOBBUF USAGE IS SQL TYPE IS CLOB(10K).
*******************************************************************
Example: The following example shows a PL/I program that inserts data from
XML AS BLOB, XML AS CLOB, and CLOB host variables into an XML column.
The XML AS BLOB data is inserted as binary data, so the database server honors
the internal encoding. The XML AS CLOB and CLOB data is inserted as character
data, so the database server honors the external encoding.
/******************************/
/* Host variable declarations */
/******************************/
DCL
XMLBUF SQL TYPE IS XML AS CLOB(10K),
XMLBLOB SQL TYPE IS XML AS BLOB(10K),
CLOBBUF SQL TYPE IS CLOB(10K);
/*******************************************************************/
/* Update an XML column with data in an XML AS CLOB host variable */
/*******************************************************************/
EXEC SQL UPDATE MYCUSTOMER SET INFO = :XMLBUF where CID = 1000;
/*******************************************************************/
/* Update an XML column with data in an XML AS BLOB host variable */
/*******************************************************************/
EXEC SQL UPDATE MYCUSTOMER SET INFO = :XMLBLOB where CID = 1000;
/*******************************************************************/
/* Update an XML column with data in a CLOB host variable. Use */
/* the XMLPARSE function to convert the data to the XML type. */
/*******************************************************************/
EXEC SQL UPDATE MYCUSTOMER SET INFO = XMLPARSE(DOCUMENT :CLOBBUF) where CID = 1000;
DB2 might add an XML encoding specification to the retrieved data, depending on
whether you call the XMLSERIALIZE function when you retrieve the data. If you
do not call the XMLSERIALIZE function, DB2 adds the correct XML encoding
specification to the retrieved data. If you call the XMLSERIALIZE function, DB2
adds an internal XML encoding declaration for UTF-8 encoding if you specify
INCLUDING XMLDECLARATION in the function call. When you use
INCLUDING XMLDECLARATION, you need to ensure that the retrieved data is
not converted from UTF-8 encoding to another encoding.
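For illustration only, a retrieval that explicitly serializes the XML value and requests an XML declaration might look like the following sketch. It assumes a plain BLOB host variable named blobBuf, which is not part of the preceding declarations; because the serialized value is retrieved as binary data, it is not subject to conversion away from UTF-8:
EXEC SQL BEGIN DECLARE SECTION;
  SQL TYPE IS BLOB( 10K ) blobBuf;   /* hypothetical plain BLOB host variable */
EXEC SQL END DECLARE SECTION;
EXEC SQL SELECT XMLSERIALIZE(INFO AS BLOB(10K) INCLUDING XMLDECLARATION)
         INTO :blobBuf
         FROM MYCUSTOMER
         WHERE CID = 1000;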
The following examples demonstrate how to retrieve data from XML columns in
assembler, C, COBOL, and PL/I applications. The examples use a table named
MYCUSTOMER, which is a copy of the sample CUSTOMER table.
Example: The following example shows a C language program that retrieves data
from an XML column into XML AS BLOB, XML AS CLOB, and CLOB host
variables. The data that is retrieved into an XML AS BLOB host variable is
retrieved as binary data, so the database server generates an XML declaration with
UTF-8 encoding. The data that is retrieved into an XML AS CLOB host variable is
retrieved as character data, so the database server generates an XML declaration
with an internal encoding declaration that is consistent with the external encoding.
The data that is retrieved into a CLOB host variable is retrieved as character data,
so the database server generates an XML declaration with an internal encoding
declaration. That declaration might not be consistent with the external encoding.
/******************************/
/* Host variable declarations */
/******************************/
EXEC SQL BEGIN DECLARE SECTION;
SQL TYPE IS XML AS CLOB( 10K ) xmlBuf;
SQL TYPE IS XML AS BLOB( 10K ) xmlBlob;
SQL TYPE IS CLOB( 10K ) clobBuf;
EXEC SQL END DECLARE SECTION;
Example: The following example shows a COBOL program that retrieves data
from an XML column into XML AS BLOB, XML AS CLOB, and CLOB host
variables. The data that is retrieved into an XML AS BLOB host variable is
retrieved as binary data, so the database server generates an XML declaration with
UTF-8 encoding. The data that is retrieved into an XML AS CLOB host variable is
retrieved as character data, so the database server generates an XML declaration
with an internal encoding declaration that is consistent with the external encoding.
The data that is retrieved into a CLOB host variable is retrieved as character data,
so the database server generates an XML declaration with an internal encoding
declaration. That declaration might not be consistent with the external encoding.
| ******************************
| * Host variable declarations *
| ******************************
| 01 XMLBUF USAGE IS SQL TYPE IS XML AS CLOB(10K).
| 01 XMLBLOB USAGE IS SQL TYPE IS XML AS BLOB(10K).
| 01 CLOBBUF USAGE IS SQL TYPE IS CLOB(10K).
| **********************************************************************
| * Retrieve data from an XML column into an XML AS CLOB host variable *
| **********************************************************************
| EXEC SQL SELECT INFO
| INTO :XMLBUF
| FROM MYCUSTOMER
| WHERE CID = 1000
| END-EXEC.
| **********************************************************************
| * Retrieve data from an XML column into an XML AS BLOB host variable *
| **********************************************************************
| EXEC SQL SELECT INFO
| INTO :XMLBLOB
| FROM MYCUSTOMER
| WHERE CID = 1000
| END-EXEC.
| **********************************************************************
| * RETRIEVE DATA FROM AN XML COLUMN INTO A CLOB HOST VARIABLE. *
| * BEFORE SENDING THE DATA TO THE APPLICATION, INVOKE THE *
| * XMLSERIALIZE FUNCTION TO CONVERT THE DATA FROM THE XML *
| * TYPE TO THE CLOB TYPE. *
| **********************************************************************
| EXEC SQL SELECT XMLSERIALIZE(INFO AS CLOB(10K))
| INTO :CLOBBUF
| FROM MYCUSTOMER
| WHERE CID = 1000
| END-EXEC.
Programming examples
You can write DB2 programs in assembler language, C, C++, COBOL, Fortran,
PL/I or REXX. These programs can access a local or remote DB2 subsystem and
can execute static or dynamic SQL statements. This information contains several
such programming examples.
The examples in this information use certain conventions and assumptions. Some
of the examples vary from these conventions. Exceptions are noted where they
occur.
If you specify the SQL processing option STDSQL(YES), do not define an SQLCA.
If you do, DB2 ignores your SQLCA, and your SQLCA definition causes
compile-time errors. If you specify the SQL processing option STDSQL(NO),
include an SQLCA explicitly.
If your application contains SQL statements and does not include an SQL
communications area (SQLCA), you must declare individual SQLCODE and
SQLSTATE host variables. Your program can use these variables to check whether
an SQL statement executed successfully.
To define the SQL communications area, code the SQLCA directly in the program, or use the following SQL INCLUDE statement to request a standard SQLCA declaration:
EXEC SQL INCLUDE SQLCA
Related tasks
“Checking the execution of SQL statements” on page 202
“Checking the execution of SQL statements by using the SQLCA” on page 203
“Checking the execution of SQL statements by using SQLCODE and SQLSTATE”
on page 207
“Defining the items that your program can use to check whether an SQL statement
executed successfully” on page 141
Code the SQLDA directly in the program, or use the following SQL INCLUDE
statement to request a standard SQLDA declaration:
EXEC SQL INCLUDE SQLDA
Restriction: You must place SQLDA declarations before the first SQL statement
that references the data descriptor, unless you use the TWOPASS SQL processing
option.
Related tasks
“Defining SQL descriptor areas” on page 141
Restrictions:
v Only some of the valid assembler declarations are valid host variable
declarations. If the declaration for a host variable is not valid, any SQL
statement that references the variable might result in the message
UNDECLARED HOST VARIABLE.
v The locator data types are assembler language data types and SQL data types.
You cannot use locators as column types.
Recommendations:
v Be careful of overflow. For example, suppose that you retrieve an INTEGER
column value into a DS H host variable, and the column value is larger than
32767. You get an overflow warning or an error, depending on whether you
provide an indicator variable.
v Be careful of truncation. For example, if you retrieve an 80-character CHAR
column value into a host variable that is declared as DS CL70, the rightmost ten
characters of the retrieved string are truncated. If you retrieve a floating-point or
decimal column value into a host variable declared as DS F, any fractional part
of the value is removed.
The following diagram shows the syntax for declaring numeric host variables.
[Syntax diagram: variable-name, then DC or DS with an optional duplication factor of 1, followed by one of: H L2, F L4, FD L8, P'value' Ln (1), E L4, EH L4, EB L4, ED L4, D L8, DH L8, DB L8, DD L8, or LD L16]
Notes:
1 value is a numeric value that specifies the scale of the packed decimal
variable. If value does not include a decimal point, the scale is 0.
For floating-point data types (E, EH, EB, D, DH, and DB), use the FLOAT SQL
processing option to specify whether the host variable is in IEEE binary
floating-point or z/Architecture® hexadecimal floating-point format. If you specify
FLOAT(S390), you need to define your floating-point host variables as E, EH, D, or
DH. If you specify FLOAT(IEEE), you need to define your floating-point host
variables as EB or DB. DB2 does not check if the host variable declarations or
format of the host variable contents match the format that you specified with the
FLOAT SQL processing option. Therefore, you need to ensure that your
floating-point host variable types and contents match the format that you specified
with the FLOAT SQL processing option. DB2 converts all floating-point input data
to z/Architecture hexadecimal floating-point format before storing it.
| Restriction: The FLOAT SQL processing options do not apply to the decimal
| floating-point host variable types ED, DD, or LD.
For the decimal floating-point host variable types ED, DD, and LD, you can specify
the following special values: MIN, MAX, NAN, SNAN, and INFINITY.
The following diagrams show the syntax for forms other than CLOBs.
The following diagram shows the syntax for declaring fixed-length character
strings.
[Syntax diagram: variable-name DC or DS, optional duplication factor 1, then C optionally followed by Ln (1)]
Notes:
1 If you declare a character string host variable without a length (for example,
DC C'ABCD'), DB2 interprets the length as 1. To get the correct length, specify
a length attribute (for example, DC CL4'ABCD').
The following diagram shows the syntax for declaring varying-length character
strings.
[Syntax diagram: variable-name DC or DS, optional duplication factor 1, then H L2 followed by ,CLn]
The following diagrams show the syntax for forms other than DBCLOBs. In the
syntax diagrams, value denotes one or more DBCS characters, and the symbols <
and > represent the shift-out and shift-in characters.
The following diagram shows the syntax for declaring fixed-length graphic strings.
[Syntax diagram: variable-name DC or DS, then G optionally followed by Ln, '<value>', or Ln'<value>']
The following diagram shows the syntax for declaring varying-length graphic
strings.
[Syntax diagram: variable-name DS or DC, then H L2'm' followed by ,GLn'<value>']
The following diagram shows the syntax for declaring binary host variables.
[Syntax diagram: variable-name DS XLn (1)]
Notes:
1 1 ≤ n ≤ 255
The following diagram shows the syntax for declaring varbinary host variables.
[Syntax diagram: variable-name DS H L2,XLn (1)]
Notes:
1 1 ≤ n ≤ 32704
The following diagram shows the syntax for declaring result set locators.
[Syntax diagram: variable-name DC or DS, optional duplication factor 1, then F L4]
Table Locators
The following diagram shows the syntax for declaring table locators.
| The following diagram shows the syntax for declaring BLOB, CLOB, and DBCLOB
| host variables, locators, and file reference variables.
| The following diagram shows the syntax for declaring BLOB, CLOB, and DBCLOB
| host variables and file reference variables for XML data types.
[Syntax diagram: variable-name SQL TYPE IS XML AS, then one of BINARY LARGE OBJECT, BLOB, CHARACTER LARGE OBJECT, CHAR LARGE OBJECT, CLOB, DBCLOB, BLOB_FILE, CLOB_FILE, or DBCLOB_FILE, followed by length with an optional K, M, or G suffix (1)]
Notes:
1 If you specify the length of the LOB in terms of KB, MB, or GB, do not leave
spaces between the length and K, M, or G.
ROWIDs
The following diagram shows the syntax for declaring ROWID host variables.
Related concepts
“Host variables” on page 143
“Rules for host variables in an SQL statement” on page 151
“Large objects (LOBs)” on page 430
Related tasks
“Determining whether a retrieved value in a host variable is null or truncated” on
page 154
“Inserting a single row by using a host variable” on page 158
“Inserting null values into columns by using indicator variables or arrays” on page
158
“Retrieving a single row of data into host variables” on page 152
“Updating data by using host variables” on page 157
Related reference
“Descriptions of SQL processing options” on page 904
Related information
High Level Assembler (HLASM) and Toolkit Feature library
z/OS Internet Library
The following diagram shows the syntax for declaring an indicator variable in
assembler.
Example
The following example shows a FETCH statement with the declarations of the host
variables that are needed for the FETCH statement and their associated indicator
variables.
EXEC SQL FETCH CLS_CURSOR INTO :CLSCD, X
:DAY :DAYIND, X
:BGN :BGNIND, X
:END :ENDIND
The following table describes the SQL data type and the base SQLTYPE and
SQLLEN values that the precompiler uses for host variables in SQL statements.
Table 54. SQL data types, SQLLEN values, and SQLTYPE values that the precompiler uses for host variables in
assembler programs
Assembler host variable data type   SQLTYPE of host variable(1)   SQLLEN of host variable   SQL data type
SQL TYPE IS XML AS BLOB_FILE        916/917                       267                       XML BLOB file reference(4)
SQL TYPE IS XML AS CLOB_FILE        920/921                       267                       XML CLOB file reference(4)
SQL TYPE IS XML AS DBCLOB_FILE      924/925                       267                       XML DBCLOB file reference(4)
The following table shows equivalent assembler host variables for each SQL data
type. Use this table to determine the assembler data type for host variables that
you define to receive output from the database. For example, if you retrieve
TIMESTAMP data, you can define the variable as DS CLn.
This table shows direct conversions between SQL data types and assembler data
types. However, a number of SQL data types are compatible. When you do
assignments or comparisons of data that have compatible data types, DB2 converts
those compatible data types.
Related concepts
“Compatibility of SQL and language data types” on page 148
“LOB host variable, LOB locator, and LOB file reference variable declarations” on
page 705
“Host variable data types for XML data in embedded SQL applications” on page
217
Each SQL statement in an assembler program must begin with EXEC SQL. The
EXEC and SQL keywords must appear on one line, but the remainder of the
statement can appear on subsequent lines.
Declaring tables and views: Your assembler program should include a DECLARE
statement to describe each table and view the program accesses.
Margins: Use the precompiler option MARGINS to set a left margin, a right
margin, and a continuation margin. The default values for these margins are
columns 1, 71, and 16, respectively. If EXEC SQL starts before the specified left
margin, the DB2 precompiler does not recognize the SQL statement. If you use the
default margins, you can place an SQL statement anywhere between columns 2
and 71.
Multiple-row FETCH statements: You can use only the FETCH ... USING
DESCRIPTOR form of the multiple-row FETCH statement in an assembler
program. The DB2 precompiler does not recognize declarations of host variable
arrays for an assembler program.
Names: You can use any valid assembler name for a host variable. However, do
not use external entry names or access plan names that begin with ’DSN’ or host
variable names that begin with ’SQL’. These names are reserved for DB2.
The first character of a host variable that is used in embedded SQL cannot be an
underscore. However, you can use an underscore as the first character in a symbol
that is not used in embedded SQL.
Statement labels: You can prefix an SQL statement with a label. The first line of an
SQL statement can use a label beginning in the left margin (column 1). If you do
not use a label, leave column 1 blank.
WHENEVER statement: The target for the GOTO clause in an SQL WHENEVER
statement must be a label in the assembler source code and must be within the
scope of the SQL statements that WHENEVER affects.
| In this example, the actual storage allocation is done by the DFHEIENT macro.
| TSO: The sample program in prefix.SDSNSAMP(DSNTIAD) contains an example
| of how to acquire storage for the SQLDSECT in a program that runs in a TSO
| environment. The following example code contains pieces from
| prefix.SDSNSAMP(DSNTIAD) with explanations in the comments.
| DSNTIAD CSECT CONTROL SECTION NAME
| SAVE (14,12) ANY SAVE SEQUENCE
| LR R12,R15 CODE ADDRESSABILITY
| USING DSNTIAD,R12 TELL THE ASSEMBLER
| LR R7,R1 SAVE THE PARM POINTER
| *
| * Allocate storage of size PRGSIZ1+SQLDSIZ, where:
| * - PRGSIZ1 is the size of the DSNTIAD program area
| * - SQLDSIZ is the size of the SQLDSECT, and declared
| * when the DB2 precompiler includes the SQLDSECT
| *
| L R6,PRGSIZ1 GET SPACE FOR USER PROGRAM
| A R6,SQLDSIZ GET SPACE FOR SQLDSECT
| GETMAIN R,LV=(6) GET STORAGE FOR PROGRAM VARIABLES
| LR R10,R1 POINT TO IT
| *
| * Initialize the storage
| *
| LR R2,R10 POINT TO THE FIELD
| LR R3,R6 GET ITS LENGTH
| SR R4,R4 CLEAR THE INPUT ADDRESS
| SR R5,R5 CLEAR THE INPUT LENGTH
| MVCL R2,R4 CLEAR OUT THE FIELD
| *
| * Map the storage for DSNTIAD program area
| *
| ST R13,FOUR(R10) CHAIN THE SAVEAREA PTRS
| ST R10,EIGHT(R13) CHAIN SAVEAREA FORWARD
| LR R13,R10 POINT TO THE SAVEAREA
| USING PRGAREA1,R13 SET ADDRESSABILITY
| *
| * Map the storage for the SQLDSECT
| *
| LR R9,R13 POINT TO THE PROGAREA
| A R9,PRGSIZ1 THEN PAST TO THE SQLDSECT
| USING SQLDSECT,R9 SET ADDRESSABILITY
| ...
| LTORG
| **********************************************************************
You can use the subroutine DSNTIAR to convert an SQL return code into a text
message. DSNTIAR takes data from the SQLCA, formats it into a message, and
places the result in a message output area that you provide in your application
program. For concepts and more information about the behavior of DSNTIAR, see
“Displaying SQLCA fields by calling DSNTIAR” on page 204.
You can also use the MESSAGE_TEXT condition item field of the GET
DIAGNOSTICS statement to convert an SQL return code into a text message.
Programs that require long token message support should code the GET
DIAGNOSTICS statement instead of DSNTIAR. For more information about GET
DIAGNOSTICS, see “Checking the execution of SQL statements by using the GET
DIAGNOSTICS statement” on page 209.
DSNTIAR syntax:
where MESSAGE is the name of the message output area, LINES is the
number of lines in the message output area, and LRECL is the length of each
line.
lrecl
A fullword containing the logical record length of output messages, between 72
and 240.
See “DB2 sample applications” on page 1069 for instructions on how to access and
print the source code for the sample program.
CICS: If your CICS application requires CICS storage handling, you must use the
subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
CALL DSNTIAC,(eib,commarea,sqlca,msg,lrecl),MF=(E,PARM)
DSNTIAC has extra parameters, which you must use for calls to routines that use
CICS commands.
eib EXEC interface block
commarea
communication area
You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you
must also define them in the CSD. For an example of CSD entry generation
statements for use with DSNTIAC, see member DSN8FRDO in the data set
prefix.SDSNSAMP.
The assembler source code for DSNTIAC and job DSNTEJ5A, which assembles and
link-edits DSNTIAC, are also in the data set prefix.SDSNSAMP.
Related concepts
“Dynamic SQL” on page 162
“Rules for host variables in an SQL statement” on page 151
“Possible host languages for dynamic SQL applications” on page 166
Related tasks
“Determining whether a column value is null” on page 156
“Determining whether a retrieved value in a host variable is null or truncated” on
page 154
“Dynamically executing a data change statement” on page 191
“Dynamically executing an SQL statement by using EXECUTE IMMEDIATE” on
page 187
“Dynamically executing an SQL statement by using PREPARE and EXECUTE” on
page 189
“Enabling the dynamic statement cache” on page 195
“Handling SQL error codes” on page 215
“Including dynamic SQL for fixed-list SELECT statements in your program” on
page 167
“Including dynamic SQL for non-SELECT statements in your program” on page
166
“Including dynamic SQL for varying-list SELECT statements in your program” on
page 169
“Inserting a single row by using a host variable” on page 158
“Inserting null values into columns by using indicator variables or arrays” on page
158
“Limiting CPU time for dynamic SQL statements by using the resource limit
facility” on page 200
“Retrieving a single row of data into host variables” on page 152
“Updating data by using host variables” on page 157
Delimit an SQL statement in your assembler program with the beginning keyword
EXEC SQL and an end of line or end of last continued line.
If you specify the SQL processing option STDSQL(YES), do not define an SQLCA.
If you do, DB2 ignores your SQLCA, and your SQLCA definition causes
compile-time errors. If you specify the SQL processing option STDSQL(NO),
include an SQLCA explicitly.
If your application contains SQL statements and does not include an SQL
communications area (SQLCA), you must declare individual SQLCODE and
SQLSTATE host variables. Your program can use these variables to check whether
an SQL statement executed successfully.
To define the SQL communications area, code the SQLCA directly in the program, or use the following SQL INCLUDE statement to request a standard SQLCA declaration:
EXEC SQL INCLUDE SQLCA
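For illustration, when STDSQL(YES) is in effect and no SQLCA is defined, a C program might declare the standalone variables as in the following sketch (a 4-byte SQLCODE and a 5-character SQLSTATE plus its NUL terminator are assumed):
EXEC SQL BEGIN DECLARE SECTION;
  long int SQLCODE;      /* receives the SQL return code                 */
  char SQLSTATE[6];      /* 5-character SQLSTATE plus the NUL terminator */
EXEC SQL END DECLARE SECTION;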
Related tasks
“Checking the execution of SQL statements” on page 202
“Checking the execution of SQL statements by using the SQLCA” on page 203
“Checking the execution of SQL statements by using SQLCODE and SQLSTATE”
on page 207
“Defining the items that your program can use to check whether an SQL statement
executed successfully” on page 141
Code the SQLDA directly in the program, or use the following SQL INCLUDE
statement to request a standard SQLDA declaration:
EXEC SQL INCLUDE SQLDA
Restriction: You must place SQLDA declarations before the first SQL statement
that references the data descriptor, unless you use the TWOPASS SQL processing
option.
Restriction: The DB2 coprocessor for C/C++ supports only the ONEPASS
option.
v If you specify the STDSQL(YES) SQL processing option, you must precede
the host language statements that define the host variables and host variable
arrays with the BEGIN DECLARE SECTION statement and follow the host
language statements with the END DECLARE SECTION statement.
Otherwise, these statements are optional.
v Ensure that any SQL statement that uses a host variable or host variable
array is within the scope of the statement that declares that variable or array.
| v If you are using the DB2 precompiler, ensure that the names of host variables
| and host variable arrays are unique within the program, even if the variables
| and variable arrays are in different blocks, classes, procedures, functions, or
| subroutines. You can qualify the names with a structure name to make them
| unique.
2. Optional: Define any associated indicator variables, arrays, and structures.
Related tasks
“Declaring host variables and indicator variables” on page 142
Host variables in C
In C and C++ programs, you can specify numeric, character, graphic, binary, LOB,
XML, and ROWID host variables. You can also specify result set, table, and LOB
locators and LOB and XML file reference variables.
Restrictions:
v Only some of the valid C declarations are valid host variable declarations. If the
declaration for a variable is not valid, any SQL statement that references the
variable might result in the message UNDECLARED HOST VARIABLE.
v C supports some data types and storage classes with no SQL equivalents, such
as register storage class, typedef, and long long.
| v The following locator data types are special SQL data types that do not have C
| equivalents:
Recommendations:
v Be careful of overflow. For example, suppose that you retrieve an INTEGER
column value into a short integer host variable, and the column value is larger
than 32767. You get an overflow warning or an error, depending on whether you
provide an indicator variable.
v Be careful of truncation. Ensure that the host variable that you declare can
contain the data and a NUL terminator, if needed. Retrieving a floating-point or
decimal column value into a long integer host variable removes any fractional
part of the value.
The following diagram shows the syntax for declaring numeric host variables.
[Syntax diagram: optional storage class (auto, extern, static) and qualifier (const, volatile), then one of float, double, short [int], int, long [int], long long [int], sqlint32, decimal(precision[,scale]), _Decimal32, _Decimal64, or _Decimal128, followed by variable-name or *pointer-name (1), with an optional =expression initializer]
Notes:
1 If you use the pointer notation of the host variable, you must use the DB2
coprocessor.
Restrictions:
For floating-point data types, use the FLOAT SQL processing option to specify
whether the host variable is in IEEE binary floating-point or z/Architecture
hexadecimal floating-point format. DB2 does not check if the format of the host
variable contents match the format that you specified with the FLOAT SQL
processing option. Therefore, you need to ensure that your floating-point host
variable contents match the format that you specified with the FLOAT SQL
processing option. DB2 converts all floating-point input data to z/Architecture
hexadecimal floating-point format before storing it.
The following diagrams show the syntax for forms other than CLOBs.
The following diagram shows the syntax for declaring single-character host
variables.
[Syntax diagram: optional storage class (auto, extern, static) and qualifier (const, volatile), then char or unsigned char, followed by variable-name or *pointer-name, with an optional =expression initializer; pointer notation requires the DB2 coprocessor]
The following diagram shows the syntax for declaring NUL-terminated character
host variables.
[Syntax diagram: optional storage class (auto, extern, static) and qualifier (const, volatile), then char or unsigned char, followed by one or more variable-name[length] (2)(3) or *pointer-name (1) entries, each with an optional =expression initializer]
Notes:
1 If you use the pointer notation of the host variable, you must use the DB2
coprocessor.
2 Any string that is assigned to this variable must be NUL-terminated. Any
string that is retrieved from this variable is NUL-terminated.
3 A NUL-terminated character host variable maps to a varying-length character
string (except for the NUL).
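For illustration, a NUL-terminated host variable that receives a VARCHAR(15) column value can be declared with one extra byte for the terminator, as in the following sketch (it assumes the sample EMP table; the employee number is arbitrary):
EXEC SQL BEGIN DECLARE SECTION;
  char lastname[16];     /* 15 characters of data plus the NUL terminator */
EXEC SQL END DECLARE SECTION;
EXEC SQL SELECT LASTNAME INTO :lastname
         FROM DSN8910.EMP
         WHERE EMPNO = '000010';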
The following diagram shows the syntax for declaring varying-length character
host variables that use the VARCHAR structured form.
[Syntax diagram: optional storage class (auto, extern, static) and qualifier (const, volatile), then struct { short var-1; char var-2[length]; } (with optional unsigned on var-2), followed by variable-name or *pointer-name, with an optional ={expression, expression} initializer; pointer notation requires the DB2 coprocessor]
Example: The following example code shows valid and invalid declarations of the
VARCHAR structured form:
EXEC SQL BEGIN DECLARE SECTION;
For NUL-terminated string host variables, use the SQL processing options
PADNTSTR and NOPADNTSTR to specify whether the variable should be padded
with blanks. The option that you specify determines where the NUL-terminator is
placed.
Restriction: If you use the DB2 precompiler, you cannot use a host variable that is
of the NUL-terminated form in either a PREPARE or DESCRIBE statement.
| Recommendation: Instead of using the C data type wchar_t to define graphic and
| vargraphic host variables, use one of the following techniques:
| v Define the sqldbchar data type by using the following typedef statement:
| typedef unsigned short sqldbchar;
| v Use the sqldbchar data type that is defined in the typedef statement in one of
| the following files or libraries:
| – SQL library, sql.h
| – DB2 CLI library, sqlcli.h
| – SQLUDF file in data set DSN910.SDSNC.H
| v Use the C data type unsigned short.
| Using sqldbchar or unsigned short enables you to manipulate DBCS and Unicode
| UTF-16 data in the same format in which it is stored in DB2. Using sqldbchar also
| makes applications easier to port to other platforms.
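For illustration, a NUL-terminated graphic host variable might be declared with sqldbchar as in the following sketch (the variable name is hypothetical):
typedef unsigned short sqldbchar;   /* omit if sqldbchar is already supplied by sql.h */
EXEC SQL BEGIN DECLARE SECTION;
  sqldbchar firstname[13];          /* up to 12 DBCS characters plus the NUL terminator */
EXEC SQL END DECLARE SECTION;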
The following diagrams show the syntax for forms other than DBCLOBs.
The following diagram shows the syntax for declaring single-graphic host
variables.
[Syntax diagram: optional storage class (auto, extern, static) and qualifier (const, volatile), then sqldbchar, followed by one or more variable-name (1)(2) or *pointer-name entries, each with an optional =expression initializer]
Notes:
1 You cannot use array notation in variable-name.
2 The single-graphic form declares a fixed-length graphic string of length 1.
The following diagram shows the syntax for declaring NUL-terminated graphic
host variables.
[Syntax diagram: optional storage class (auto, extern, static) and qualifier (const, volatile), then sqldbchar, followed by one or more variable-name[length] or *pointer-name (1) entries, each with an optional =expression initializer]
Notes:
1 If you use the pointer notation of the host variable, you must use the DB2
coprocessor.
The following diagram shows the syntax for declaring graphic host variables that
use the VARGRAPHIC structured form.
[Syntax diagram: optional storage class (auto, extern, static) and qualifier (const, volatile), then struct tag { short var-1; sqldbchar var-2[length]; } (1)(2)(3)(4), followed by variable-name or *pointer-name (5), with an optional ={expression, expression} initializer]
Notes:
1 You can use the struct tag to define other variables, but you cannot use them
as host variables in SQL.
2 var-1 must be less than or equal to length.
3 You cannot use var-1 or var-2 as host variables in an SQL statement.
4 length must be a decimal integer constant greater than 1 and not greater than
16352.
5 If you use the pointer notation of the host variable, you must use the DB2
coprocessor.
Example: The following example shows valid and invalid declarations of graphic
host variables that use the VARGRAPHIC structured form:
EXEC SQL BEGIN DECLARE SECTION;
| The following diagrams show the syntax for forms other than BLOBs.
| The following diagram shows the syntax for declaring binary host variables.
[Syntax diagram: optional storage class (auto, extern, static) and qualifier (const, volatile), then SQL TYPE IS BINARY(length) (1), followed by one or more variable-name entries]
| Notes:
| 1 The length must be a value from 1 to 255.
| The following diagram shows the syntax for declaring VARBINARY host variables.
[Syntax diagram: optional storage class (auto, extern, static) and qualifier (const, volatile), then SQL TYPE IS VARBINARY(length) or SQL TYPE IS BINARY VARYING(length) (1), followed by one or more variable-name entries, each with an optional ={init-len, "init-data"} initializer]
| Notes:
| 1 For VARBINARY host variables, the length must be in the range from 1 to
| 32 704.
| The C language does not have variables that correspond to the SQL binary data
| types BINARY and VARBINARY. To create host variables that can be used with
| these data types, use the SQL TYPE IS clause. The SQL precompiler replaces this
| declaration with the C language structure in the output source member.
| Recommendation: Be careful when you use binary host variables with C and C++.
| The SQL TYPE declaration for BINARY and VARBINARY does not account for the
| NUL-terminator that C expects, because binary strings are not NUL-terminated
| strings. Also, the binary host variable might contain zeroes at any point in the
| string.
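For illustration, declarations that use the SQL TYPE IS clause for binary data might look like the following sketch (the variable names are hypothetical):
EXEC SQL BEGIN DECLARE SECTION;
  SQL TYPE IS BINARY(20) checksum_hv;        /* fixed-length binary, 20 bytes           */
  SQL TYPE IS VARBINARY(1000) image_hv;      /* varying-length binary, up to 1000 bytes */
EXEC SQL END DECLARE SECTION;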
The following diagram shows the syntax for declaring result set locators.
[Syntax diagram: optional storage class and qualifier, then SQL TYPE IS RESULT_SET_LOCATOR VARYING, followed by variable-name or *pointer-name, with an optional = init-value initializer]
Table locators
The following diagram shows the syntax for declaring table locators.
[Syntax diagram: optional storage class (auto, extern, static, register) and qualifier (const, volatile), then SQL TYPE IS TABLE LIKE table-name AS LOCATOR, followed by variable-name or *pointer-name, with an optional =init-value initializer]
The following diagram shows the syntax for declaring BLOB, CLOB, and DBCLOB
host variables, locators, and file reference variables.
[Syntax diagram: optional storage class (auto, extern, static, register) and qualifier (const, volatile), then SQL TYPE IS followed by one of BINARY LARGE OBJECT, BLOB, CHARACTER LARGE OBJECT, CHAR LARGE OBJECT, CLOB, DBCLOB (each with a length), BLOB_LOCATOR, CLOB_LOCATOR, DBCLOB_LOCATOR, BLOB_FILE, CLOB_FILE, or DBCLOB_FILE, then variable-name or *pointer-name with an optional =init-value initializer (1)]
Notes:
| 1 Specify the initial value as a series of expressions. For example, specify
| ={expression, expression}. For BLOB_FILE, CLOB_FILE, and
| DBCLOB_FILE, specify ={name_length, data_length, file_option_map,
| file_name}.
| The following diagram shows the syntax for declaring BLOB, CLOB, and DBCLOB
| host variables and file reference variables for XML data types.
The following diagram shows the syntax for declaring ROWID host variables.
Constants
The syntax for constants in C and C++ programs differs from the syntax for
constants in SQL statements in the following ways:
| v C/C++ uses various forms for numeric literals (possible suffixes are: ll, LL, u, U,
| f, F, l, L, df, DF, dd, DD, dl, DL, d, D). For example, in C/C++:
| 4850976 is a decimal literal
| 0x4bD is a hexadecimal integer literal
| 03245 is an octal integer literal
| 3.2E+4 is a double floating-point literal
| 3.2E+4f is a float floating-point literal
| 3.2E+4l is a long double floating-point literal
| 0x4bDP+4 is a double hexadecimal floating-point literal
| 22.2df is a _Decimal32 decimal floating-point literal
| 0.00D is a fixed-point decimal literal (z/OS only when
| LANGLVL(EXTENDED) is specified)
| v Use C/C++ literal form only outside of SQL statements. Within SQL statements,
| use numeric constants.
v In C, character constants and string constants can use escape sequences. You
cannot use the escape sequences in SQL statements.
v Apostrophes and quotation marks have different meanings in C and SQL. In C,
you can use double quotation marks to delimit string constants, and apostrophes
to delimit character constants.
Restrictions:
v Only some of the valid C declarations are valid host variable array declarations.
If the declaration for a variable array is not valid, any SQL statement that
references the variable array might result in the message UNDECLARED HOST
VARIABLE ARRAY.
| v For both C and C++, you cannot specify the _packed attribute on the structure
| declarations for the following arrays that are used in multiple-row INSERT,
| FETCH, and MERGE statements:
| – varying-length character arrays
| – varying-length graphic arrays
| – LOB arrays
| In addition, the #pragma pack(1) directive cannot be in effect if you plan to use
| these arrays in multiple-row statements.
The following diagram shows the syntax for declaring numeric host variable
arrays.
[Syntax diagram: one of float, double, short [int], int, long [int], long long [int], decimal(precision[,scale]), _Decimal32, _Decimal64, or _Decimal128, followed by one or more variable-name[dimension] entries (1), each with an optional ={expression, ...} initializer]
Notes:
1 dimension must be an integer constant between 1 and 32767.
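For illustration, the following sketch declares a numeric host variable array and an indicator array and uses them in a rowset fetch; the cursor c1, the table, and the column are hypothetical, and c1 is assumed to be declared WITH ROWSET POSITIONING:
EXEC SQL BEGIN DECLARE SECTION;
  long int serial[10];       /* receives up to 10 INTEGER values     */
  short serial_ind[10];      /* indicator array for the fetched rows */
EXEC SQL END DECLARE SECTION;
EXEC SQL FETCH NEXT ROWSET FROM c1 FOR 10 ROWS
         INTO :serial :serial_ind;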
You can specify the following forms of character host variable arrays:
v NUL-terminated character form
v VARCHAR structured form
v CLOBs
The following diagrams show the syntax for forms other than CLOBs.
The following diagram shows the syntax for declaring NUL-terminated character
host variable arrays.
[Syntax diagram: optional storage class (auto, extern, static) and qualifier (const, volatile), then char or unsigned char, followed by one or more variable-name[dimension][length] entries (1)(2)(3), each with an optional ={expression, ...} initializer]
Notes:
1 dimension must be an integer constant between 1 and 32767.
2 Any string that is assigned to this variable must be NUL-terminated. Any
string that is retrieved from this variable is NUL-terminated.
3 The strings in a NUL-terminated character host variable array map to
varying-length character strings (except for the NUL).
The following diagram shows the syntax for declaring varying-length character
host variable arrays that use the VARCHAR structured form.
[Syntax diagram: optional storage class (auto, extern, static) and qualifier (const, volatile), then struct tag { short var-1; char var-2[length]; } (1)(2)(3), followed by one or more variable-name[dimension] entries (4), each with an optional ={expression, ...} initializer]
Notes:
1 You can use the struct tag to define other variables, but you cannot use them
as host variable arrays in SQL.
| 2 var-1 must be a scalar numeric variable.
| 3 var-2 must be a scalar CHAR array variable.
4 dimension must be an integer constant between 1 and 32767.
| The following diagram shows the syntax for declaring binary host variable arrays.
[Syntax diagram: optional storage class and qualifier, then SQL TYPE IS BINARY(length) or SQL TYPE IS VARBINARY(length), followed by one or more variable-name[dimension] entries (1)]
| Notes:
| 1 dimension must be an integer constant between 1 and 32767.
You can specify the following forms of graphic host variable arrays:
v NUL-terminated graphic form
v VARGRAPHIC structured form.
Recommendation: Instead of using the C data type wchar_t to define graphic and
vargraphic host variable arrays, use one of the following techniques:
v Define the sqldbchar data type by using the following typedef statement:
typedef unsigned short sqldbchar;
v Use the sqldbchar data type that is defined in the typedef statement in the
header files that are supplied by DB2.
v Use the C data type unsigned short.
The following diagram shows the syntax for declaring NUL-terminated graphic
host variable arrays.
[Syntax diagram: optional storage class (auto, extern, static) and qualifier (const, volatile), then sqldbchar, followed by one or more variable-name[dimension][length] entries (1)(2)(3), each with an optional ={expression, ...} initializer]
Notes:
1 dimension must be an integer constant between 1 and 32767.
2 length must be a decimal integer constant greater than 1 and not greater than
16352.
3 Any string that is assigned to this variable must be NUL-terminated. Any
string that is retrieved from this variable is NUL-terminated.
The following diagram shows the syntax for declaring graphic host variable arrays
that use the VARGRAPHIC structured form.
[Syntax diagram: optional storage class (auto, extern, static) and qualifier (const, volatile), then struct tag { short var-1; sqldbchar var-2[length]; } (1)(2)(3)(4) (with optional unsigned on var-2), followed by one or more variable-name[dimension] entries (5), each with an optional ={expression, ...} initializer]
Notes:
1 You can use the struct tag to define other variables, but you cannot use them
as host variable arrays in SQL.
2 var-1 must be a scalar numeric variable.
3 var-2 must be a scalar char array variable.
4 length must be a decimal integer constant greater than 1 and not greater than
16352.
5 dimension must be an integer constant between 1 and 32767.
Example: The following example shows valid and invalid declarations of graphic
host variable arrays that use the VARGRAPHIC structured form.
EXEC SQL BEGIN DECLARE SECTION;
/* valid declaration of host variable array vgraph */
struct VARGRAPH {
short len;
sqldbchar d[10];
} vgraph[20];
The following diagram shows the syntax for declaring BLOB, CLOB, and DBCLOB
host variable arrays, locators, and file reference variables.
[Syntax diagram: SQL TYPE IS followed by a LOB type (with a length), LOB locator type, or LOB file reference type, then one or more variable-name[dimension] entries (1), each with an optional ={expression, ...} initializer]
Notes:
1 dimension must be an integer constant between 1 and 32767.
| The following diagram shows the syntax for declaring BLOB, CLOB, and DBCLOB
| host variable arrays and file reference variable arrays for XML data types.
[Syntax diagram: SQL TYPE IS XML AS followed by a LOB type (with a length) or LOB file reference type, then one or more variable-name[dimension] entries (1), each with an optional ={expression, ...} initializer]
| Notes:
| 1 dimension must be an integer constant between 1 and 32767.
The following diagram shows the syntax for declaring ROWID variable arrays.
[Syntax diagram: optional storage class (auto, extern, static, register) and qualifier (const, volatile), then SQL TYPE IS ROWID, followed by one or more variable-name[dimension] entries (1)]
Notes:
1 dimension must be an integer constant between 1 and 32767.
Related concepts
“Host variable arrays” on page 144
“Host variable arrays in an SQL statement” on page 159
“Large objects (LOBs)” on page 430
Related tasks
“Inserting multiple rows of data from host variable arrays” on page 160
“Retrieving multiple rows of data into host variable arrays” on page 160
Host structures in C
A C host structure contains an ordered group of data fields.
Host structures
The following diagram shows the syntax for declaring host structures.
[Syntax diagram: optional storage class (auto, extern, static), qualifier (const, volatile), and packed attribute, then struct tag { ... }, where each member is one of: a numeric type (float, double, short [int], int, long [int], long long [int], sqlint32, decimal(precision[,scale]), _Decimal32, _Decimal64, or _Decimal128) var-1; a varchar structure; a binary structure; a vargraphic structure; SQL TYPE IS ROWID; a LOB data type; char [unsigned] var-2[length]; or sqldbchar var-5[length]; followed by variable-name with an optional =expression initializer]
VARCHAR structures
The following diagram shows the syntax for VARCHAR structures that are used
within declarations of host structures.
[Syntax diagram: struct tag { short [signed] int var-3; char var-4[length]; }]
VARGRAPHIC structures
The following diagram shows the syntax for VARGRAPHIC structures that are
used within declarations of host structures.
[Syntax diagram: struct tag { short [signed] int var-6; sqldbchar var-7[length]; }]
| Binary structures
| The following diagram shows the syntax for binary structures that are used within
| declarations of host structures.
The following diagram shows the syntax for LOB data types that are used within
declarations of host structures.
| The following diagram shows the syntax for LOB data types that are used within
| declarations of host structures for XML data.
| Example
In the following example, the host structure is named target, and it contains the
fields c1, c2, and c3. c1 and c3 are character arrays, and c2 is a host variable that is
equivalent to the SQL VARCHAR data type. The target host structure can be part
of another host structure but must be the deepest level of the nested structure.
struct {char c1[3];
struct {short len;
char data[5];
}c2;
char c3[2];
}target;
Related concepts
“Host structures” on page 144
The following diagram shows the syntax for declaring an indicator variable in C and C++.
[Syntax diagram: optional storage class (auto, extern, static) and qualifier (const, volatile), then [signed] short [int], followed by variable-name with an optional = expression initializer]
The following diagram shows the syntax for declaring an indicator array or a host
structure indicator array in C and C++.
[Syntax diagram: optional storage class (auto, extern, static) and qualifier (const, volatile), then [signed] short [int], followed by one or more variable-name[dimension] entries (1), each with an optional = expression initializer]
Notes:
1 dimension must be an integer constant between 1 and 32767.
Example
The following example shows a FETCH statement with the declarations of the host
variables that are needed for the FETCH statement and their associated indicator
variables.
EXEC SQL FETCH CLS_CURSOR INTO :ClsCd,
:Day :DayInd,
:Bgn :BgnInd,
:End :EndInd;
Specify the pointer host variable exactly as it was declared. The only exception is
when you reference pointers to NUL-terminated character arrays. In this case, you
do not have to include the parentheses that were part of the declaration.
The following example references this bounded character pointer host variable:
hvcharp.len = dynlen;                                   /* 1 */
hvcharp.data = (char *) malloc (hvcharp.len);           /* 2 */
EXEC SQL SET :hvcharp = 'data buffer with length';      /* 3 */
Note:
1. dynlen can be either a compile time constant or a variable with a value that is
assigned at run time.
2. Storage is dynamically allocated for hvcharp.data.
3. The SQL statement references the name of the structure, not an element within
the structure.
Example of a structure array host variable reference: Suppose that your program
declares the following pointer to the structure tbl_struct:
struct tbl_struct *ptr_tbl_struct =
(struct tbl_struct *) malloc (sizeof (struct tbl_struct) * n);
To reference this data in SQL statements, use the pointer as shown in the following
example. Assume that tbl_sel_cur is a declared cursor.
for (L_col_cnt = 0; L_col_cnt < n; L_col_cnt++)
{ ...
EXEC SQL FETCH tbl_sel_cur INTO :ptr_tbl_struct [L_col_cnt]
...
}
Related tasks
“Declaring pointer host variables in C programs”
Include an asterisk (*) in each variable declaration to indicate that the variable is a
pointer.
Restrictions:
v You cannot use pointer host variables that point to character data of an
unknown length. For example, do not specify the following declaration: char *
hvcharpu. Instead, specify the length of the data by using a bounded character
pointer host variable. A bounded character pointer host variable is a host variable
that is declared as a structure with the following elements:
– A 4-byte field that contains the length of the storage area.
– A pointer that addresses the storage area that contains the character data.
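For illustration, a bounded character pointer host variable consistent with the hvcharp example shown earlier might be declared as in the following sketch (the exact field types can vary):
EXEC SQL BEGIN DECLARE SECTION;
  struct {
    unsigned long len;    /* 4-byte length of the storage area           */
    char *data;           /* pointer to the dynamically acquired storage */
  } hvcharp;
EXEC SQL END DECLARE SECTION;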
The following example code declares a pointer to the structure tbl_struct. Storage is
allocated dynamically for up to n rows.
struct tbl_struct *ptr_tbl_struct =
(struct tbl_struct *) malloc (sizeof (struct tbl_struct) * n);
The following table describes the SQL data type and the base SQLTYPE and
SQLLEN values that the precompiler uses for host variables in SQL statements.
Table 62. SQL data types, SQLLEN values, and SQLTYPE values that the precompiler uses
for host variables in C programs
C host variable data type       SQLTYPE of host variable(1)   SQLLEN of host variable    SQL data type
short int                       500                           2                          SMALLINT
long int                        496                           4                          INTEGER
long long, long long int,       492                           8                          BIGINT(5)
  sqlint64
decimal(p,s)(2)                 484                           p in byte 1, s in byte 2   DECIMAL(p,s)(2)
The following table shows equivalent C host variables for each SQL data type. Use
this table to determine the C data type for host variables that you define to receive
output from the database. For example, if you retrieve TIMESTAMP data, you can
define a variable of NUL-terminated character form or VARCHAR structured form.
This table shows direct conversions between SQL data types and C data types.
However, a number of SQL data types are compatible. When you do assignments
or comparisons of data that have compatible data types, DB2 converts those
compatible data types.
Table 63. C host variable equivalents that you can use when retrieving data of a particular SQL data type
SQL data type                  C host variable equivalent   Notes
SMALLINT                       short int
INTEGER                        long int
DECIMAL(p,s) or NUMERIC(p,s)   decimal                      You can use the double data type if your C compiler
                                                            does not have a decimal data type; however, double
                                                            is not an exact equivalent.
REAL or FLOAT(n)               float                        1<=n<=21
Related concepts
“Compatibility of SQL and language data types” on page 148
“LOB host variable, LOB locator, and LOB file reference variable declarations” on
page 705
“Host variable data types for XML data in embedded SQL applications” on page
217
Each SQL statement in a C program must begin with EXEC SQL and end with a
semicolon (;). The EXEC and SQL keywords must appear on one line, but the
remainder of the statement can appear on subsequent lines.
In general, because C is case sensitive, use uppercase letters to enter all SQL
keywords. However, if you use the FOLD precompiler suboption, DB2 folds
lowercase letters in SBCS SQL ordinary identifiers to uppercase. For information
about host language precompiler options, see Table 150 on page 904.
You must keep the case of host variable names consistent throughout the program.
For example, if a host variable name is lowercase in its declaration, it must be
lowercase in all SQL statements. You might code an UPDATE statement in a C
program as follows:
EXEC SQL
UPDATE DSN8910.DEPT
SET MGRNO = :mgr_num
WHERE DEPTNO = :int_dept;
| Comments: You can include C comments (/* ... */) within SQL statements
| wherever you can use a blank, except between the keywords EXEC and SQL. You
| can use single-line comments (starting with //) in C language statements, but not
| in embedded SQL. You can use SQL comments within embedded SQL statements.
| You cannot nest comments.
To include EBCDIC DBCS characters in comments, you must delimit the characters
by a shift-out and shift-in control character; the first shift-in character in the DBCS
string signals the end of the DBCS string.
Declaring tables and views: Your C program should use the DECLARE TABLE
statement to describe each table and view the program accesses. You can use the
DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE statements.
You cannot nest SQL INCLUDE statements. Do not use C #include statements to
include SQL statements or C host variable declarations.
| Margins: Code SQL statements in columns 1 through 72, unless you specify other
| margins to the DB2 precompiler. If EXEC SQL is not within the specified margins,
| the DB2 precompiler does not recognize the SQL statement. The margin rules do
| not apply to the DB2 coprocessor. The DB2 coprocessor allows variable length
| source input.
Names: You can use any valid C name for a host variable, subject to the following
restrictions:
v Do not use DBCS characters.
| v Do not use external entry names or access plan names that begin with ’DSN’,
| and do not use host variable names or macro names that begin with ’SQL’ (in
| any combination of uppercase or lowercase letters). These names are reserved
| for DB2.
Nulls and NULs: C and SQL differ in the way they use the word null. The C
language has a null character (NUL), a null pointer (NULL), and a null statement
(just a semicolon). The C NUL is a single character that compares equal to 0. The C
NULL is a special reserved pointer value that does not point to any valid data
object. The SQL null value is a special value that is distinct from all non-null
values and denotes the absence of a (nonnull) value. NUL (or NUL-terminator) is
the null character in C and C++, and NULL is the SQL null value.
Trigraph characters: Some characters from the C character set are not available on
all keyboards. You can enter these characters into a C source program using a
sequence of three characters called a trigraph. The trigraph characters that DB2
supports are the same as those that the C compiler supports.
WHENEVER statement: The target for the GOTO clause in an SQL WHENEVER
statement must be within the scope of any SQL statements that the statement
WHENEVER affects.
| To use the decimal floating-point host data type, you must do the following:
| v Use z/OS 1.10 or above (z/OS V1R10 XL C/C++ ).
| v Compile with the C/C++ compiler option, DFP.
| v Specify the SQL compiler option to enable the DB2 coprocessor.
| v Specify C/C++ compiler option, ARCH(7). It is required by the DFP compiler
| option if the DFP type is used in the source.
| v Specify ’DEFINE(__STDC_WANT_DEC_FP__)’ compiler option.
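Under those prerequisites, a declaration might look like the following sketch (the variable names are hypothetical; _Decimal64 corresponds to DECFLOAT(16) and _Decimal128 to DECFLOAT(34)):
EXEC SQL BEGIN DECLARE SECTION;
  _Decimal64 unit_price;     /* corresponds to DECFLOAT(16) */
  _Decimal128 grand_total;   /* corresponds to DECFLOAT(34) */
EXEC SQL END DECLARE SECTION;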
You can use the subroutine DSNTIAR to convert an SQL return code into a text
message. DSNTIAR takes data from the SQLCA, formats it into a message, and
places the result in a message output area that you provide in your application
program. For concepts and more information about the behavior of DSNTIAR, see
“Displaying SQLCA fields by calling DSNTIAR” on page 204.
You can also use the MESSAGE_TEXT condition item field of the GET
DIAGNOSTICS statement to convert an SQL return code into a text message.
Programs that require long token message support should code the GET
DIAGNOSTICS statement instead of DSNTIAR. For more information about GET
DIAGNOSTICS, see “Checking the execution of SQL statements by using the GET
DIAGNOSTICS statement” on page 209.
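For illustration, that approach might look like the following sketch in C (the host variable name and length are arbitrary):
EXEC SQL BEGIN DECLARE SECTION;
  char msgtext[1024];        /* receives the formatted message text */
EXEC SQL END DECLARE SECTION;
EXEC SQL GET DIAGNOSTICS CONDITION 1 :msgtext = MESSAGE_TEXT;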
DSNTIAR syntax:
For C, include:
#pragma linkage (dsntiar,OS)
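The call statement itself is not reproduced here. Based on the DSNTIAC form shown later, which adds only the eib and commarea parameters, a C call to DSNTIAR would be expected to look like the following sketch (variable names are illustrative; message is the message output area and lrecl the logical record length of its lines):
rc = dsntiar(&sqlca, &message, &lrecl);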
CICS: If your CICS application requires CICS storage handling, you must use the
subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
rc = DSNTIAC(&eib, &commarea, &sqlca, &message, &lrecl);
DSNTIAC has extra parameters, which you must use for calls to routines that use
CICS commands.
&eib EXEC interface block
&commarea
communication area
You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you
must also define them in the CSD. For an example of CSD entry generation
statements for use with DSNTIAC, see job DSNTEJ5A.
The assembler source code for DSNTIAC and job DSNTEJ5A, which assembles and
link-edits DSNTIAC, are in the data set prefix.SDSNSAMP.
Delimit an SQL statement in your C program with the beginning keyword EXEC
SQL and a semicolon (;).
Programming examples in C
You can write DB2 programs in C and C++. These programs can access a local or
remote DB2 subsystem and can execute static or dynamic SQL statements. This
information contains several such programming examples.
The following figure illustrates dynamic SQL and static SQL embedded in a C
program. Each section of the program is identified with a comment. Section 1 of
the program shows static SQL; sections 2, 3, and 4 show dynamic SQL. The
function of each section is explained in detail in the prologue to the program.
/**********************************************************************/
/* Descriptive name = Dynamic SQL sample using C language */
/* */
/* Function = To show examples of the use of dynamic and static */
/* SQL. */
/* */
/* Notes = This example assumes that the EMP and DEPT tables are */
/* defined. They need not be the same as the DB2 Sample */
/* tables. */
/* */
/* Module type = C program */
/* Processor = DB2 precompiler, C compiler */
/* Module size = see link edit */
/* Attributes = not reentrant or reusable */
/* */
/* Input = */
/* */
/* symbolic label/name = DEPT */
/* description = arbitrary table */
/* symbolic label/name = EMP */
/* description = arbitrary table */
/* */
/* Output = */
/* */
/* symbolic label/name = SYSPRINT */
/* description = print results via printf */
/* */
/* Exit-normal = return code 0 normal completion */
/* */
/* Exit-error = */
/* */
/* Return code = SQLCA */
/* */
/* Abend codes = none */
/* */
/* External references = none */
/* */
/* Control-blocks = */
/* SQLCA - sql communication area */
/* */
/* Logic specification: */
/* */
/* There are four SQL sections. */
/* */
/* 1) STATIC SQL 1: using static cursor with a SELECT statement. */
/* Two output host variables. */
/* 2) Dynamic SQL 2: Fixed-list SELECT, using same SELECT statement */
/* used in SQL 1 to show the difference. The prepared string */
/* :iptstr can be assigned with other dynamic-able SQL statements. */
/* 3) Dynamic SQL 3: Insert with parameter markers. */
/* Using four parameter markers which represent four input host */
/* variables within a host structure. */
/* 4) Dynamic SQL 4: EXECUTE IMMEDIATE */
#include "stdio.h"
#include "stdefs.h"
EXEC SQL INCLUDE SQLCA;
EXEC SQL INCLUDE SQLDA;
EXEC SQL BEGIN DECLARE SECTION;
short edlevel;
struct { short len;
char x1[56];
} stmtbf1, stmtbf2, inpstr;
struct { short len;
char x1[15];
} lname;
short hv1;
struct { char deptno[4];
struct { short len;
char x[36];
} deptname;
char mgrno[7];
char admrdept[4];
} hv2;
short ind[4];
EXEC SQL END DECLARE SECTION;
EXEC SQL DECLARE EMP TABLE
(EMPNO CHAR(6) ,
FIRSTNAME VARCHAR(12) ,
MIDINIT CHAR(1) ,
LASTNAME VARCHAR(15) ,
WORKDEPT CHAR(3) ,
PHONENO CHAR(4) ,
HIREDATE DECIMAL(6) ,
JOBCODE DECIMAL(3) ,
EDLEVEL SMALLINT ,
SEX CHAR(1) ,
BIRTHDATE DECIMAL(6) ,
SALARY DECIMAL(8,2) ,
FORFNAME VARGRAPHIC(12) ,
FORMNAME GRAPHIC(1) ,
FORLNAME VARGRAPHIC(15) ,
FORADDR VARGRAPHIC(256) ) ;
EXEC SQL DECLARE DEPT TABLE
(
DEPTNO CHAR(3) ,
DEPTNAME VARCHAR(36) ,
MGRNO CHAR(6) ,
ADMRDEPT CHAR(3) );
main ()
{
printf("??/n*** begin of program ***");
EXEC SQL WHENEVER SQLERROR GO TO HANDLERR;
EXEC SQL WHENEVER SQLWARNING GO TO HANDWARN;
EXEC SQL WHENEVER NOT FOUND GO TO NOTFOUND;
/******************************************************************/
/* Assign values to host variables which will be input to DB2 */
/******************************************************************/
strcpy(hv2.deptno,"M92");
strcpy(hv2.deptname.x,"DDL");
hv2.deptname.len = strlen(hv2.deptname.x);
strcpy(hv2.mgrno,"123456");
strcpy(hv2.admrdept,"abc");
/******************************************************************/
/* Static SQL 1: DECLARE CURSOR, OPEN, FETCH, CLOSE */
The following figure contains the example C program that calls the GETPRML
stored procedure.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
main()
{
/************************************************************/
/* Include the SQLCA and SQLDA */
/************************************************************/
EXEC SQL INCLUDE SQLCA;
EXEC SQL INCLUDE SQLDA;
/************************************************************/
/* Declare variables that are not SQL-related. */
/************************************************************/
short int i; /* Loop counter */
/************************************************************/
/* Declare the following: */
/* - Parameters used to call stored procedure GETPRML */
/* - An SQLDA for DESCRIBE PROCEDURE */
/* - An SQLDA for DESCRIBE CURSOR */
/* - Result set variable locators for up to three result */
/* sets */
/************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
char procnm[19]; /* INPUT parm -- PROCEDURE name */
char schema[9]; /* INPUT parm -- User's schema */
long int out_code; /* OUTPUT -- SQLCODE from the */
/* SELECT operation. */
struct {
short int parmlen;
char parmtxt[254];
} parmlst; /* OUTPUT -- RUNOPTS values */
/* for the matching row in */
/* catalog table SYSROUTINES */
struct indicators {
short int procnm_ind;
short int schema_ind;
short int out_code_ind;
short int parmlst_ind;
} parmind;
/* Indicator variable structure */
static volatile
SQL TYPE IS RESULT_SET_LOCATOR VARYING loc1, loc2, loc3;
/* Result set locators for up to three result sets */
EXEC SQL END DECLARE SECTION;
/* SQLDA pointers for DESCRIBE PROCEDURE and DESCRIBE CURSOR */
struct sqlda *proc_da;
struct sqlda *res_da;
/*************************************************************/
/* Allocate the SQLDAs to be used for DESCRIBE */
/* PROCEDURE and DESCRIBE CURSOR. Assume that at most */
/* three cursors are returned and that each result set */
/* has no more than five columns. */
/*************************************************************/
proc_da = (struct sqlda *)malloc(SQLDASIZE(3));
res_da = (struct sqlda *)malloc(SQLDASIZE(5));
/************************************************************/
/* Call the GETPRML stored procedure to retrieve the */
/* RUNOPTS values for the stored procedure. In this */
/* example, we request the PARMLIST definition for the */
/* stored procedure named DSN8EP2. */
/* */
/* The call should complete with SQLCODE +466 because */
/* GETPRML returns result sets. */
/************************************************************/
strcpy(procnm,"dsn8ep2 ");
/* Input parameter -- PROCEDURE to be found */
strcpy(schema," ");
/* Input parameter -- Schema name for proc */
parmind.procnm_ind=0;
parmind.schema_ind=0;
parmind.out_code_ind=0;
/* Indicate that none of the input parameters */
/* have null values */
parmind.parmlst_ind=-1;
/* The parmlst parameter is an output parm. */
/* Mark PARMLST parameter as null, so the DB2 */
/* requester doesn't have to send the entire */
/* PARMLST variable to the server. This */
/* helps reduce network I/O time, because */
/* PARMLST is fairly large. */
EXEC SQL
CALL GETPRML(:procnm INDICATOR :parmind.procnm_ind,
:schema INDICATOR :parmind.schema_ind,
:out_code INDICATOR :parmind.out_code_ind,
:parmlst INDICATOR :parmind.parmlst_ind);
if(SQLCODE!=+466) /* If SQL CALL failed, */
{
/* print the SQLCODE and any */
/* message tokens */
printf("SQL CALL failed due to SQLCODE =
printf("sqlca.sqlerrmc = ");
for(i=0;i<sqlca.sqlerrml;i++)
printf("i]);
printf("\n");
}
else /* If the CALL worked, */
if(out_code!=0) /* Did GETPRML hit an error? */
printf("GETPRML failed due to RC =
/**********************************************************/
/* If everything worked, do the following: */
/* - Print out the parameters returned. */
/* - Retrieve the result sets returned. */
/**********************************************************/
else
{
printf("RUNOPTS =
/********************************************************/
/* Use the statement DESCRIBE PROCEDURE to */
/* return information about the result sets in the */
/* SQLDA pointed to by proc_da: */
/* - SQLD contains the number of result sets that were */
/* returned by the stored procedure. */
/* - Each SQLVAR entry has the following information */
/* about a result set: */
/* - SQLNAME contains the name of the cursor that */
/* the stored procedure uses to return the result */
/* set. */
/* - SQLIND contains an estimate of the number of */
/* rows in the result set. */
/* - SQLDATA contains the result locator value for */
/* the result set. */
/********************************************************/
EXEC SQL DESCRIBE PROCEDURE INTO :*proc_da;
/********************************************************/
/* Assume that you have examined SQLD and determined */
/* that there is one result set. Use the statement */
/* ASSOCIATE LOCATORS to establish a result set locator */
/* for the result set. */
/********************************************************/
EXEC SQL ASSOCIATE LOCATORS (:loc1) WITH PROCEDURE GETPRML;
/********************************************************/
/* Use the statement ALLOCATE CURSOR to associate a */
/* cursor for the result set. */
/********************************************************/
EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :loc1;
/********************************************************/
/* Use the statement DESCRIBE CURSOR to determine the */
/* columns in the result set. */
/********************************************************/
EXEC SQL DESCRIBE CURSOR C1 INTO :*res_da;
/********************************************************/
/* Call a routine (not shown here) to do the following: */
/* - Allocate a buffer for data and indicator values */
/* fetched from the result table. */
/* - Update the SQLDATA and SQLIND fields in each */
/* SQLVAR of *res_da with the addresses at which to */
/* put the fetched data and values of indicator */
/* variables. */
/********************************************************/
alloc_outbuff(res_da);
/********************************************************/
/* Fetch the data from the result table. */
/********************************************************/
while(SQLCODE==0)
EXEC SQL FETCH C1 USING DESCRIPTOR :*res_da;
}
return;
}
The output parameters from this stored procedure contain the SQLCODE from the
SELECT statement and the value of the RUNOPTS column from SYSROUTINES.
The CREATE PROCEDURE statement for this stored procedure might look like
this:
CREATE PROCEDURE GETPRML(PROCNM CHAR(18) IN, SCHEMA CHAR(8) IN,
OUTCODE INTEGER OUT, PARMLST VARCHAR(254) OUT)
LANGUAGE C
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME "GETPRML"
COLLID GETPRML
ASUTIME NO LIMIT
PARAMETER STYLE GENERAL
STAY RESIDENT NO
RUN OPTIONS "MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)"
WLM ENVIRONMENT SAMPPROG
PROGRAM TYPE MAIN
SECURITY DB2
RESULT SETS 2
COMMIT ON RETURN NO;
/***************************************************************/
/* Declare C variables for SQL operations on the parameters. */
/* These are local variables to the C program, which you must */
/* copy to and from the parameter list provided to the stored */
/* procedure. */
/***************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
char PROCNM[19];
char SCHEMA[9];
char PARMLST[255];
EXEC SQL END DECLARE SECTION;
/***************************************************************/
/* Declare cursors for returning result sets to the caller. */
/***************************************************************/
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
SELECT NAME
FROM SYSIBM.SYSTABLES
WHERE CREATOR=:SCHEMA;
main(argc,argv)
int argc;
char *argv[];
{
/********************************************************/
/* Copy the input parameters into the area reserved in */
/* the local program for SQL processing.               */
/********************************************************/
strcpy(PROCNM, argv[1]);
strcpy(SCHEMA, argv[2]);
/********************************************************/
/* Issue the SQL SELECT against the SYSROUTINES */
/* DB2 catalog table. */
/********************************************************/
strcpy(PARMLST, ""); /* Clear PARMLST */
EXEC SQL
SELECT RUNOPTS INTO :PARMLST
FROM SYSIBM.SYSROUTINES
WHERE NAME=:PROCNM AND
SCHEMA=:SCHEMA;
/********************************************************/
/* Copy SQLCODE to the output parameter list. */
/********************************************************/
*(int *) argv[3] = SQLCODE;
/********************************************************/
/* Copy the PARMLST value returned by the SELECT back to*/
/* the parameter list provided to this stored procedure.*/
/********************************************************/
strcpy(argv[4], PARMLST);
/********************************************************/
/* Open cursor C1 to cause DB2 to return a result set */
/* to the caller. */
/********************************************************/
EXEC SQL OPEN C1;
}
The linkage convention for this stored procedure is GENERAL WITH NULLS.
The output parameters from this stored procedure contain the SQLCODE from the
SELECT operation, and the value of the RUNOPTS column retrieved from the
SYSROUTINES table.
The CREATE PROCEDURE statement for this stored procedure might look like
this:
CREATE PROCEDURE GETPRML(PROCNM CHAR(18) IN, SCHEMA CHAR(8) IN,
OUTCODE INTEGER OUT, PARMLST VARCHAR(254) OUT)
LANGUAGE C
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME "GETPRML"
COLLID GETPRML
ASUTIME NO LIMIT
PARAMETER STYLE GENERAL WITH NULLS
STAY RESIDENT NO
RUN OPTIONS "MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)"
WLM ENVIRONMENT SAMPPROG
PROGRAM TYPE MAIN
SECURITY DB2
RESULT SETS 2
COMMIT ON RETURN NO;
/***************************************************************/
/* Declare C variables used for SQL operations on the */
/* parameters. These are local variables to the C program, */
/* which you must copy to and from the parameter list provided */
/* to the stored procedure. */
/***************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
char PROCNM[19];
char SCHEMA[9];
char PARMLST[255];
struct INDICATORS {
short int PROCNM_IND;
short int SCHEMA_IND;
short int OUT_CODE_IND;
short int PARMLST_IND;
} PARM_IND;
EXEC SQL END DECLARE SECTION;
/***************************************************************/
/* Declare cursors for returning result sets to the caller. */
/***************************************************************/
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
SELECT NAME
FROM SYSIBM.SYSTABLES
WHERE CREATOR=:SCHEMA;
main(argc,argv)
int argc;
char *argv[];
{
/********************************************************/
/* Copy the input parameters into the area reserved in */
/* the local program for SQL processing. */
/********************************************************/
strcpy(PROCNM, argv[1]);
strcpy(SCHEMA, argv[2]);
/********************************************************/
/* Copy null indicator values for the parameter list. */
/********************************************************/
memcpy(&PARM_IND,(struct INDICATORS *) argv[5],
sizeof(PARM_IND));
/********************************************************/
/* If any input parameter is NULL, return an error */
/* return code and assign a NULL value to PARMLST. */
/********************************************************/
if (PARM_IND.PROCNM_IND<0 ||
PARM_IND.SCHEMA_IND<0 || {
else {
/********************************************************/
/* If the input parameters are not NULL, issue the SQL */
/* SELECT against the SYSIBM.SYSROUTINES catalog */
/* table. */
/********************************************************/
strcpy(PARMLST, ""); /* Clear PARMLST */
EXEC SQL
SELECT RUNOPTS INTO :PARMLST
FROM SYSIBM.SYSROUTINES
WHERE NAME=:PROCNM AND
SCHEMA=:SCHEMA;
/********************************************************/
/* Copy SQLCODE to the output parameter list. */
/********************************************************/
*(int *) argv[3] = SQLCODE;
PARM_IND.OUT_CODE_IND = 0; /* OUT_CODE is not NULL */
}
/********************************************************/
/* Copy the RUNOPTS value back to the output parameter */
/* area. */
/********************************************************/
strcpy(argv[4], PARMLST);
/********************************************************/
/* Copy the null indicators back to the output parameter*/
/* area. */
/********************************************************/
memcpy((struct INDICATORS *) argv[5],&PARM_IND,
sizeof(PARM_IND));
/********************************************************/
/* Open cursor C1 to cause DB2 to return a result set */
/* to the caller. */
/********************************************************/
EXEC SQL OPEN C1;
}
If you specify the SQL processing option STDSQL(YES), do not define an SQLCA.
If you do, DB2 ignores your SQLCA, and your SQLCA definition causes
compile-time errors. If you specify the SQL processing option STDSQL(NO),
include an SQLCA explicitly.
For COBOL programs, when you specify STDSQL(YES), you must declare an
SQLCODE variable. DB2 declares an SQLCA area for you in the
WORKING-STORAGE SECTION. DB2 controls the structure and location of the
SQLCA.
If your application contains SQL statements and does not include an SQL
communications area (SQLCA), you must declare individual SQLCODE and
SQLSTATE host variables. Your program can use these variables to check whether
an SQL statement executed successfully.
To define the SQL communications area, code the SQLCA directly in the program,
or use the following SQL INCLUDE statement to request a standard SQLCA
declaration:
EXEC SQL INCLUDE SQLCA
Related tasks
“Checking the execution of SQL statements” on page 202
“Checking the execution of SQL statements by using the SQLCA” on page 203
“Checking the execution of SQL statements by using SQLCODE and SQLSTATE”
on page 207
“Defining the items that your program can use to check whether an SQL statement
executed successfully” on page 141
Restrictions:
v You must place SQLDA declarations before the first SQL statement that
references the data descriptor, unless you use the TWOPASS SQL processing
option.
v You cannot use the SQL INCLUDE statement for the SQLDA, because it is not
supported in COBOL.
Related tasks
“Defining SQL descriptor areas” on page 141
Restrictions:
v Only some of the valid COBOL declarations are valid host variable declarations.
If the declaration for a variable is not valid, any SQL statement that references
the variable might result in the message UNDECLARED HOST VARIABLE.
v You cannot use locators as column types.
The following locator data types are COBOL data types and SQL data types:
– Result set locator
– Table locator
– LOB locators
– LOB file reference variables
v One or more REDEFINES entries can follow any level 77 data description entry.
However, you cannot use the names in these entries in SQL statements. Entries
with the name FILLER are ignored.
Recommendations:
v Be careful of overflow. For example, suppose that you retrieve an INTEGER
column value into a PICTURE S9(4) host variable and the column value is larger
than 32767 or smaller than -32768. You get an overflow warning or an error,
depending on whether you specify an indicator variable.
v Be careful of truncation. For example, if you retrieve an 80-character CHAR
column value into a PICTURE X(70) host variable, the rightmost 10 characters of
the retrieved string are truncated. Retrieving a double precision floating-point or
decimal column value into a PIC S9(8) COMP host variable removes any
fractional part of the value. Similarly, retrieving a column value with DECIMAL
data type into a COBOL decimal variable with a lower precision might truncate
the value.
v If your varying-length string host variables receive values whose length is
greater than 9999 bytes, compile the applications in which you use those host
variables with the option TRUNC(BIN). TRUNC(BIN) lets the length field for the
string receive a value of up to 32767 bytes.
The following diagram shows the syntax for declaring floating-point or real host
variables.
In the syntax that follows, brackets [ ] enclose optional items, and braces { } enclose
alternatives that are separated by vertical bars.
{01 | 77 | level-1} variable-name [USAGE [IS]] {COMPUTATIONAL-1 | COMP-1 | COMPUTATIONAL-2 | COMP-2}
    [VALUE [IS] numeric-constant].
Notes:
1 level-1 indicates a COBOL level between 2 and 48.
2 COMPUTATIONAL-1 and COMP-1 are equivalent.
3 COMPUTATIONAL-2 and COMP-2 are equivalent.
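For example, the following declarations (data names are illustrative) fit this syntax:
77 AVG-SALARY   USAGE COMP-2.
01 UNIT-WEIGHT  USAGE COMP-1.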
The following diagram shows the syntax for declaring integer and small integer
host variables.
{01 | 77 | level-1} variable-name {PICTURE | PIC} [IS] {S9(4) | S9999 | S9(9) | S999999999 | S9(18)}
    [USAGE [IS]] {BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP}
    [VALUE [IS] numeric-constant].
Notes:
1 level-1 indicates a COBOL level between 2 and 48.
2 The COBOL binary integer data types BINARY, COMPUTATIONAL, COMP,
COMPUTATIONAL-4, and COMP-4 are equivalent.
3 COMPUTATIONAL-5 (and COMP-5) are equivalent to the other COBOL
binary integer data types if you compile the other data types with
TRUNC(BIN).
4 Any specification for scale is ignored.
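For example, the following declarations (data names are illustrative) fit this syntax:
77 DEPT-COUNT  PIC S9(4)  USAGE BINARY.
01 EMP-COUNT   PIC S9(9)  USAGE COMP.
01 TOTAL-PAID  PIC S9(18) USAGE COMP-5.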
The following diagram shows the syntax for declaring decimal host variables.
{01 | 77 | level-1} variable-name {PICTURE | PIC} [IS] picture-string
    [USAGE [IS]] {PACKED-DECIMAL | COMPUTATIONAL-3 | COMP-3
                  | {DISPLAY | NATIONAL} SIGN [IS] LEADING SEPARATE [CHARACTER]}
    [VALUE [IS] numeric-constant].
Notes:
1 level-1 indicates a COBOL level between 2 and 48.
2 The picture-string that is associated with SIGN LEADING SEPARATE must
have the form S9(i)V9(d) (or S9...9V9...9, with i and d instances of 9 or S9...9V
with i instances of 9).
3 PACKED-DECIMAL, COMPUTATIONAL-3, and COMP-3 are equivalent. The
picture-string that is associated with these types must have the form
S9(i)V9(d) (or S9...9V9...9, with i and d instances of 9) or S9(i)V.
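For example, the following declarations (data names are illustrative) fit this syntax:
01 SALARY      PIC S9(6)V9(2) USAGE PACKED-DECIMAL.
01 COMMISSION  PIC S9(4)V9(2) DISPLAY SIGN LEADING SEPARATE.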
In COBOL, you declare the SMALLINT and INTEGER data types as a number of
decimal digits. DB2 uses the full size of the integers (in a way that is similar to
processing with the TRUNC(BIN) compiler option) and can place larger values in
the host variable than would be allowed in the specified number of digits in the
COBOL declaration. If you compile with TRUNC(OPT) or TRUNC(STD), ensure
that the size of numbers in your application is within the declared number of
digits.
For small integers that can exceed 9999, use S9(4) COMP-5 or compile with
TRUNC(BIN). For large integers that can exceed 999 999 999, use S9(10) COMP-3
to obtain the decimal data type. If you use COBOL for integers that exceed the
COBOL PICTURE, specify the column as decimal to ensure that the data types
match and perform well.
If you are using a COBOL compiler that does not support decimal numbers of
more than 18 digits, use one of the following data types to hold values of greater
than 18 digits:
v A decimal variable with a precision less than or equal to 18, if the actual data
values fit. If you retrieve a decimal value into a decimal variable with a scale
that is less than the source column in the database, the fractional part of the
value might be truncated.
The following diagrams show the syntax for forms other than CLOBs.
The following diagram shows the syntax for declaring fixed-length character host
variables.
{01 | 77 | level-1} variable-name {PICTURE | PIC} [IS] picture-string
    [[USAGE [IS]] DISPLAY] [VALUE [IS] character-constant].
Notes:
1 level-1 indicates a COBOL level between 2 and 48.
2 The picture-string that is associated with these forms must be X(m) (or XX...X,
with m instances of X), where m is up to COBOL’s limitation. However, the
maximum length of the CHAR data type (fixed-length character string) in
DB2 is 255 bytes.
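For example, the following declaration (the data name and length are illustrative) fits this syntax:
01 EMP-NAME PIC X(30).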
The following diagrams show the syntax for declaring varying-length character
host variables.
{01 | level-1} variable-name.
Notes:
1 level-1 indicates a COBOL level between 2 and 48.
49 var-1 {PICTURE | PIC} [IS] {S9(4) | S9999} [USAGE [IS]]
    {BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP}
    [VALUE [IS] numeric-constant].
Notes:
1 You cannot use an intervening REDEFINE at level 49.
2 You cannot directly reference var-1 as a host variable.
3 DB2 uses the full length of the S9(4) BINARY variable even though COBOL
with TRUNC(STD) recognizes values up to only 9999. This behavior can
cause data truncation errors when COBOL statements execute and might
effectively limit the maximum length of variable-length character strings to
9999. Consider using the TRUNC(BIN) compiler option or USAGE COMP-5
to avoid data truncation.
49 var-2 {PICTURE | PIC} [IS] picture-string [[USAGE [IS]] DISPLAY]
    [VALUE [IS] character-constant].
Notes:
1 You cannot use an intervening REDEFINE at level 49.
2 You cannot directly reference var-2 as a host variable.
3 For fixed-length strings, the picture-string must be X(m) (or XX...X, with m
instances of X), where m is up to COBOL’s limitation. However, the maximum
length of the VARCHAR data type in DB2 varies depending on the data page
size.
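For example, the following declaration (data names and the length are illustrative) fits this syntax:
01 EMP-ADDRESS.
   49 EMP-ADDRESS-LEN  PIC S9(4) USAGE BINARY.
   49 EMP-ADDRESS-TEXT PIC X(60).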
The following diagrams show the syntax for forms other than DBCLOBs.
The following diagram shows the syntax for declaring fixed-length graphic host
variables.
{01 | 77 | level-1} variable-name {PICTURE | PIC} [IS] picture-string
    [USAGE [IS]] {DISPLAY-1 | NATIONAL} [VALUE [IS] graphic-constant].
Notes:
1 level-1 indicates a COBOL level between 2 and 48.
2 For fixed-length strings, the picture-string is G(m) or N(m) (or, m instances of
GG...G or NN...N), where m is up to COBOL’s limitation. However, the
maximum length of the GRAPHIC data type (fixed-length graphic string) in
DB2 is 127 double-bytes.
3 Use USAGE NATIONAL only for Unicode UTF-16 data. In the picture-string
for USAGE NATIONAL, you must use N in place of G. USAGE NATIONAL
is supported only by the DB2 coprocessor.
The following diagrams show the syntax for declaring varying-length graphic host
variables.
{01 | level-1} variable-name.
Notes:
1 level-1 indicates a COBOL level between 2 and 48.
49 var-1 {PICTURE | PIC} [IS] {S9(4) | S9999} [USAGE [IS]]
    {BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP}
    [VALUE [IS] numeric-constant].
Notes:
1 You cannot directly reference var-1 as a host variable.
2 DB2 uses the full length of the S9(4) BINARY variable even though COBOL
with TRUNC(STD) recognizes values up to only 9999. This behavior can
cause data truncation errors when COBOL statements execute and might
effectively limit the maximum length of variable-length character strings to
9999. Consider using the TRUNC(BIN) compiler option or USAGE COMP-5
to avoid data truncation.
49 var-2 {PICTURE | PIC} [IS] picture-string [USAGE [IS]] {DISPLAY-1 | NATIONAL}
    [VALUE [IS] graphic-constant].
Notes:
1 You cannot directly reference var-2 as a host variable.
2 For fixed-length strings, the picture-string is G(m) or N(m) (or, m instances of
GG...G or NN...N), where m is up to COBOL’s limitation. However, the
maximum length of the VARGRAPHIC data type in DB2 varies depending on
the data page size.
3 Use USAGE NATIONAL only for Unicode UTF-16 data. In the picture-string
for USAGE NATIONAL, you must use N in place of G. USAGE NATIONAL
is supported only by the DB2 coprocessor.
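For example, the following declaration (data names and the length are illustrative) fits this syntax:
01 KANJI-NAME.
   49 KANJI-NAME-LEN  PIC S9(4) USAGE BINARY.
   49 KANJI-NAME-TEXT PIC G(20) USAGE DISPLAY-1.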
| The following diagram shows the syntax for declaring BINARY and VARBINARY
| host variables.
01 variable-name [USAGE [IS]] SQL TYPE IS {BINARY | VARBINARY | BINARY VARYING} (length).
| Notes:
| 1 For BINARY host variables, the length must be in the range from 1 to 255.
| For VARBINARY host variables, the length must be in the range from 1 to
| 32 704.
| COBOL does not have variables that correspond to the SQL binary types BINARY
| and VARBINARY. To create host variables that can be used with these data types,
| use the SQL TYPE IS clause. The SQL precompiler replaces this declaration with a
| COBOL language structure in the output source member.
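For example, the following declarations (data names and lengths are illustrative) use the SQL TYPE IS clause:
01 SERIAL-KEY  USAGE SQL TYPE IS BINARY(16).
01 PHOTO-SIG   USAGE SQL TYPE IS VARBINARY(1000).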
The following diagram shows the syntax for declaring result set locators.
Table Locators
The following diagram shows the syntax for declaring table locators.
Notes:
1 level-1 indicates a COBOL level between 2 and 48.
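For example, declarations like the following fit these forms (the data names are illustrative, and the table name in the table-locator declaration is an assumption):
01 RESULT-LOC  USAGE SQL TYPE IS RESULT-SET-LOCATOR VARYING.
01 TRANS-LOC   USAGE SQL TYPE IS TABLE LIKE EMP AS LOCATOR.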
| The following diagram shows the syntax for declaring BLOB, CLOB, and DBCLOB
| variables and file reference variables.
| The following diagram shows the syntax for declaring BLOB, CLOB, and DBCLOB
| host variables and file reference variables for XML data types.
Notes:
1 level-1 indicates a COBOL level between 2 and 48.
The following diagram shows the syntax for declaring ROWID host variables.
Notes:
1 level-1 indicates a COBOL level between 2 and 48.
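For example, the following declarations (data names and lengths are illustrative) use the SQL TYPE IS clause for a LOB value, a LOB locator, an XML value, and a ROWID:
01 RESUME-TEXT  USAGE SQL TYPE IS CLOB(200K).
01 RESUME-LOC   USAGE SQL TYPE IS CLOB-LOCATOR.
01 XML-DOC      USAGE SQL TYPE IS XML AS CLOB(100K).
01 EMP-ROWID    USAGE SQL TYPE IS ROWID.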
Restriction: Only some of the valid COBOL declarations are valid host variable
array declarations. If the declaration for a variable array is not valid, any SQL
statement that references the variable array might result in the message
UNDECLARED HOST VARIABLE ARRAY.
You can specify the following forms of numeric host variable arrays:
v Floating-point numbers
v Integers and small integers
v Decimal numbers
The following diagram shows the syntax for declaring floating-point host variable
arrays.
level-1 variable-name [USAGE [IS]] {COMPUTATIONAL-1 | COMP-1 | COMPUTATIONAL-2 | COMP-2}
    OCCURS dimension [TIMES] [VALUE [IS] numeric-constant].
Notes:
1 level-1 indicates a COBOL level between 2 and 48.
The following diagram shows the syntax for declaring integer and small integer
host variable arrays.
level-1 variable-name {PICTURE | PIC} [IS] {S9(4) | S9999 | S9(9) | S999999999}
    [USAGE [IS]] {BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP}
    OCCURS dimension [TIMES] [VALUE [IS] numeric-constant].
Notes:
1 level-1 indicates a COBOL level between 2 and 48.
2 The COBOL binary integer data types BINARY, COMPUTATIONAL, COMP,
COMPUTATIONAL-4, and COMP-4 are equivalent.
3 COMPUTATIONAL-5 (and COMP-5) are equivalent to the other COBOL
binary integer data types if you compile the other data types with
TRUNC(BIN).
4 dimension must be an integer constant between 1 and 32767.
5 Any specification for scale is ignored.
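For example, the following declarations (data names and the dimension are illustrative) declare an integer host variable array nested in a group item:
01 HVA-AREA.
   05 EMPNO-HVA PIC S9(9) USAGE BINARY OCCURS 10 TIMES.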
The following diagram shows the syntax for declaring decimal host variable
arrays.
level-1 variable-name {PICTURE | PIC} [IS] picture-string
    [USAGE [IS]] {PACKED-DECIMAL | COMPUTATIONAL-3 | COMP-3
                  | {DISPLAY | NATIONAL} SIGN [IS] LEADING SEPARATE [CHARACTER]}
    OCCURS dimension [TIMES] [VALUE [IS] numeric-constant].
Notes:
1 level-1 indicates a COBOL level between 2 and 48.
2 PACKED-DECIMAL, COMPUTATIONAL-3, and COMP-3 are equivalent. The
picture-string that is associated with these types must have the form S9(i)V9(d)
(or S9...9V9...9, with i and d instances of 9) or S9(i)V.
3 The picture-string that is associated with SIGN LEADING SEPARATE must
have the form S9(i)V9(d) (or S9...9V9...9, with i and d instances of 9 or S9...9V
with i instances of 9).
4 dimension must be an integer constant between 1 and 32767.
You can specify the following forms of character host variable arrays:
v Fixed-length character strings
v Varying-length character strings
v CLOBs
The following diagrams show the syntax for forms other than CLOBs.
The following diagram shows the syntax for declaring fixed-length character string
arrays.
level-1 variable-name {PICTURE | PIC} [IS] picture-string [[USAGE [IS]] DISPLAY]
    OCCURS dimension [TIMES] [VALUE [IS] character-constant].
Notes:
1 level-1 indicates a COBOL level between 2 and 48.
2 The picture-string must be in the form X(m) (or XX...X, with m instances of X),
where m is up to COBOL’s limitation. However, the maximum length of the CHAR
data type (fixed-length character string) in DB2 is 255 bytes.
3 dimension must be an integer constant between 1 and 32767.
The following diagrams show the syntax for declaring varying-length character
string arrays.
level-1 variable-name OCCURS dimension [TIMES].
Notes:
1 level-1 indicates a COBOL level between 2 and 48.
2 dimension must be an integer constant between 1 and 32767.
49 var-1 {PICTURE | PIC} [IS] {S9(4) | S9999} [USAGE [IS]]
    {BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP}
    [SYNCHRONIZED | SYNC] [VALUE [IS] numeric-constant].
Notes:
1 You cannot directly reference var-1 as a host variable array.
2 DB2 uses the full length of the S9(4) BINARY variable even though COBOL
with TRUNC(STD) recognizes values up to only 9999. This behavior can
cause data truncation errors when COBOL statements execute and might
effectively limit the maximum length of variable-length character strings to
9999. Consider using the TRUNC(BIN) compiler option or USAGE COMP-5
to avoid data truncation.
49 var-2 {PICTURE | PIC} [IS] picture-string [[USAGE [IS]] DISPLAY]
    [VALUE [IS] character-constant].
Notes:
1 You cannot directly reference var-2 as a host variable array.
2 The picture-string must be in the form X(m) (or XX...X, with m instances of X),
where 1 <= m <= 32767 for fixed-length strings; for other strings, m cannot be
greater than the maximum size of a varying-length character string.
3 You cannot use an intervening REDEFINE at level 49.
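For example, the following declarations (data names, length, and dimension are illustrative) declare a varying-length character host variable array:
01 HVA-AREA2.
   05 NAME-HVA OCCURS 10 TIMES.
      49 NAME-HVA-LEN  PIC S9(4) USAGE BINARY.
      49 NAME-HVA-TEXT PIC X(40).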
You can specify the following forms of graphic host variable arrays:
v Fixed-length strings
v Varying-length strings
v DBCLOBs
The following diagrams show the syntax for forms other than DBCLOBs.
The following diagram shows the syntax for declaring fixed-length graphic string
arrays.
level-1 variable-name {PICTURE | PIC} [IS] picture-string [USAGE [IS]] {DISPLAY-1 | NATIONAL}
    OCCURS dimension [TIMES] [VALUE [IS] graphic-constant].
Notes:
1 level-1 indicates a COBOL level between 2 and 48.
2 For fixed-length strings, the format for picture-string is G(m) or N(m) (or, m
instances of GG...G or NN...N), where 1 <= m <= 127; for other strings, m
cannot be greater than the maximum size of a varying-length graphic string.
3 Use USAGE NATIONAL only for Unicode UTF-16 data. In the picture-string
for USAGE NATIONAL, you must use N in place of G.
4 You can use USAGE NATIONAL only if you are using the DB2 coprocessor.
The following diagrams show the syntax for declaring varying-length graphic
string arrays.
level-1 variable-name OCCURS dimension [TIMES].
Notes:
1 level-1 indicates a COBOL level between 2 and 48.
2 dimension must be an integer constant between 1 and 32767.
49 var-1 {PICTURE | PIC} [IS] {S9(4) | S9999} [USAGE [IS]]
    {BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP}
    [SYNCHRONIZED | SYNC] [VALUE [IS] numeric-constant].
Notes:
1 You cannot directly reference var-1 as a host variable array.
2 DB2 uses the full length of the S9(4) BINARY variable even though COBOL
with TRUNC(STD) recognizes values up to only 9999. This behavior can
cause data truncation errors when COBOL statements execute and might
effectively limit the maximum length of variable-length character strings to
9999. Consider using the TRUNC(BIN) compiler option or USAGE COMP-5
to avoid data truncation.
49 var-2 {PICTURE | PIC} [IS] picture-string [USAGE [IS]] {DISPLAY-1 | NATIONAL}
    [VALUE [IS] graphic-constant].
| The following diagram shows the syntax for declaring BLOB, CLOB, and DBCLOB
| host variable, locator, and file reference arrays.
level-1 variable-name [USAGE [IS]] SQL TYPE IS
    { {BLOB | CLOB | DBCLOB} (length [K | M | G])
    | {BLOB-LOCATOR | CLOB-LOCATOR | DBCLOB-LOCATOR}
    | {BLOB-FILE | CLOB-FILE | DBCLOB-FILE} }
    OCCURS dimension [TIMES].
Notes:
1 level-1 indicates a COBOL level between 2 and 48.
2 dimension must be an integer constant between 1 and 32767.
| The following diagram shows the syntax for declaring BLOB, CLOB, and DBCLOB
| host variable and file reference arrays for XML data types.
level-1 variable-name [USAGE [IS]] SQL TYPE IS XML AS
    { {BLOB | CLOB | DBCLOB} (length [K | M | G])
    | {BLOB-FILE | CLOB-FILE | DBCLOB-FILE} }
    OCCURS dimension [TIMES].
| Notes:
| 1 level-1 indicates a COBOL level between 2 and 48.
| 2 dimension must be an integer constant between 1 and 32767.
The following diagram shows the syntax for declaring ROWID variable arrays.
level-1 variable-name [USAGE [IS]] SQL TYPE IS ROWID OCCURS dimension [TIMES].
Notes:
1 level-1 indicates a COBOL level between 2 and 48.
2 dimension must be an integer constant between 1 and 32767.
Related concepts
“Host variable arrays” on page 144
“Host variable arrays in an SQL statement” on page 159
“Large objects (LOBs)” on page 430
Related tasks
“Inserting multiple rows of data from host variable arrays” on page 160
“Retrieving multiple rows of data into host variable arrays” on page 160
When you write an SQL statement that contains a qualified host variable name
(perhaps to identify a field within a structure), use the name of the structure
followed by a period and the name of the field. For example, for structure B that
contains field C1, specify B.C1 rather than C1 OF B or C1 IN B.
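For example, assuming a host structure B that contains character fields C1 and C2, and a table EMP with columns LASTNAME and EMPNO (the table and column names are illustrative), a qualified reference looks like this:
EXEC SQL
  SELECT LASTNAME INTO :B.C1
  FROM EMP
  WHERE EMPNO = :B.C2
END-EXEC.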
Host structures
The following diagram shows the syntax for declaring host structures.
level-1 variable-name.
    level-2 var-1 ... .
Notes:
1 level-1 indicates a COBOL level between 1 and 47.
2 level-2 indicates a COBOL level between 2 and 48.
3 For elements within a structure, use any level 02 through 48 (rather than 01
or 77), up to a maximum of two levels.
4 Using a FILLER or optional FILLER item within a host structure declaration
can invalidate the whole structure.
The following diagram shows the syntax for numeric-usage items that are used
within declarations of host structures.
[USAGE [IS]] {COMPUTATIONAL-1 | COMP-1 | COMPUTATIONAL-2 | COMP-2} [VALUE [IS] constant]
The following diagram shows the syntax for integer and decimal usage items that
are used within declarations of host structures.
[USAGE [IS]] {BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP
    | PACKED-DECIMAL | COMPUTATIONAL-3 | COMP-3
    | {DISPLAY | NATIONAL} SIGN [IS] LEADING SEPARATE [CHARACTER]}
    [VALUE [IS] constant]
The following diagram shows the syntax for CHAR inner variables that are used
within declarations of host structures.
{PICTURE | PIC} [IS] picture-string [[USAGE [IS]] DISPLAY] [VALUE [IS] constant]
The following diagrams show the syntax for VARCHAR inner variables that are
used within declarations of host structures.
49 var-2 {PICTURE | PIC} [IS] {S9(4) | S9999} [USAGE [IS]]
    {BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP}
    [VALUE [IS] numeric-constant].
Notes:
| 1 The number 49 has a special meaning to DB2. Do not specify another number.
49 var-3 {PICTURE | PIC} [IS] picture-string [[USAGE [IS]] DISPLAY]
    [VALUE [IS] character-constant].
The following diagrams show the syntax for VARGRAPHIC inner variables that
are used within declarations of host structures.
49 var-4 {PICTURE | PIC} [IS] {S9(4) | S9999} [USAGE [IS]]
    {BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP}
    [VALUE [IS] numeric-constant].
49 var-5 {PICTURE | PIC} [IS] picture-string [USAGE [IS]] {DISPLAY-1 | NATIONAL}
    [VALUE [IS] graphic-constant].
Notes:
1 For fixed-length strings, the format of picture-string is G(m) or N(m) (or, m
instances of GG...G or NN...N), where 1 <= m <= 127; for other strings, m
cannot be greater than the maximum size of a varying-length graphic string.
The following diagram shows the syntax for LOB variables, locators, and file
reference variables that are used within declarations of host structures.
| The following diagram shows the syntax for LOB variables and file reference
| variables that are used within declarations of host structures for XML.
| Example
In the following example, B is the name of a host structure that contains the
elementary items C1 and C2.
01 A
02 B
03 C1 PICTURE ...
03 C2 PICTURE ...
The following diagram shows the syntax for declaring an indicator variable in
COBOL.
{01 | 77} variable-name {PICTURE | PIC} [IS] {S9(4) | S9999} [USAGE [IS]]
    {BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP}
    [VALUE [IS] constant].
The following diagram shows the syntax for declaring an indicator array in
COBOL.
level-1 variable-name {PICTURE | PIC} [IS] {S9(4) | S9999} [USAGE [IS]]
    {BINARY | COMPUTATIONAL-4 | COMP-4 | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP}
    OCCURS dimension [TIMES] [VALUE [IS] constant].
Notes:
1 level-1 must be an integer between 2 and 48.
2 dimension must be an integer constant between 1 and 32767.
The following example shows a FETCH statement with the declarations of the host
variables that are needed for the FETCH statement and their associated indicator
variables.
EXEC SQL FETCH CLS_CURSOR INTO :CLS-CD,
:DAY :DAY-IND,
:BGN :BGN-IND,
:END :END-IND
END-EXEC.
This task applies to programs that use IBM Enterprise COBOL for z/OS and the
DB2 coprocessor.
Example
Assume that the COBOL SQLCCSID compiler option is specified and that the
COBOL CODEPAGE compiler option is specified as CODEPAGE(1141). The
following code shows how you can control the CCSID:
DATA DIVISION.
01 HV1 PIC N(10) USAGE NATIONAL.
01 HV2 PIC X(20) USAGE DISPLAY.
01 HV3 PIC X(30) USAGE DISPLAY.
...
EXEC SQL
DECLARE :HV3 VARIABLE CCSID 1047
END-EXEC.
...
PROCEDURE DIVISION.
...
EXEC SQL
SELECT C1, C2, C3 INTO :HV1, :HV2, :HV3 FROM T1
END-EXEC.
Assume that the COBOL NOSQLCCSID compiler option is specified, the COBOL
CODEPAGE compiler option is specified as CODEPAGE(1141), and the DB2 default
single byte CCSID is set to 37. In this case, each of the host variables in this
example has the following CCSIDs:
HV1 1200
HV2 37
HV3 1047
The following table describes the SQL data type and the base SQLTYPE and
SQLLEN values that the precompiler uses for host variables in SQL statements.
Table 65. SQL data types, SQLLEN values, and SQLTYPE values that the precompiler uses for host variables in
COBOL programs
COBOL host variable data type                                SQLTYPE of         SQLLEN of                    SQL data type
                                                             host variable(1)   host variable
COMP-1                                                       480                4                            REAL or FLOAT(n), 1<=n<=21
COMP-2                                                       480                8                            DOUBLE PRECISION or FLOAT(n), 22<=n<=53
S9(i)V9(d) COMP-3 or S9(i)V9(d) PACKED-DECIMAL               484                i+d in byte 1, d in byte 2   DECIMAL(i+d,d) or NUMERIC(i+d,d)
S9(i)V9(d) DISPLAY SIGN LEADING SEPARATE                     504                i+d in byte 1, d in byte 2   No exact equivalent; use DECIMAL(i+d,d) or NUMERIC(i+d,d)
S9(i)V9(d) NATIONAL SIGN LEADING SEPARATE                    504                i+d in byte 1, d in byte 2   No exact equivalent; use DECIMAL(i+d,d) or NUMERIC(i+d,d)
S9(4) COMP-4, S9(4) COMP-5, S9(4) COMP, or S9(4) BINARY      500                2                            SMALLINT
S9(9) COMP-4, S9(9) COMP-5, S9(9) COMP, or S9(9) BINARY      496                4                            INTEGER
S9(18) COMP-4, S9(18) COMP-5, S9(18) COMP, or S9(18) BINARY  492                8                            BIGINT
Fixed-length character data                                  452                n                            CHAR(n)
Varying-length character data, 1<=n<=255                     448                n                            VARCHAR(n)
Varying-length character data, m>255                         456                m                            VARCHAR(m)
Fixed-length graphic data                                    468                m                            GRAPHIC(m)
Varying-length graphic data, 1<=m<=127                       464                m                            VARGRAPHIC(m)
Varying-length graphic data, m>127                           472                m                            VARGRAPHIC(m)
SQL TYPE IS BINARY(n), 1<=n<=255                             912                n                            BINARY(n)
SQL TYPE IS VARBINARY(n), 1<=n<=32 704                       908                n                            VARBINARY(n)
The following table shows equivalent COBOL host variables for each SQL data
type. Use this table to determine the COBOL data type for host variables that you
define to receive output from the database. For example, if you retrieve
TIMESTAMP data, you can define a fixed-length character string variable of length
n.
This table shows direct conversions between SQL data types and COBOL data
types. However, a number of SQL data types are compatible. When you do
assignments or comparisons of data that have compatible data types, DB2 converts
those compatible data types.
TIME        Fixed-length character string of length n. For example:
              01 VAR-NAME PIC X(n).
            If you are using a time exit routine, n is determined by that routine.
            Otherwise, n must be at least 6; to include seconds, n must be at least 8.
TIMESTAMP   Fixed-length character string of length n. For example:
              01 VAR-NAME PIC X(n).
            n must be at least 19. To include microseconds, n must be 26; if n is less
            than 26, truncation occurs on the microseconds part.
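For example, the following declarations (data names are illustrative) can receive a TIME value that includes seconds and a TIMESTAMP value that includes microseconds:
01 START-TIME    PIC X(8).
01 UPDATE-STAMP  PIC X(26).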
| Each SQL statement in a COBOL program must begin with EXEC SQL and end
| with END-EXEC. If you are using the DB2 precompiler, the EXEC and SQL
| keywords must appear on one line, but the remainder of the statement can appear
| on subsequent lines. If you are using the DB2 coprocessor, the EXEC and SQL
| keywords can be on different lines. Do not include any tokens between the two
| keywords EXEC and SQL except for COBOL comments, including debugging lines.
| Do not include SQL comments between the keywords EXEC and SQL.
If the SQL statement appears between two COBOL statements, the period after
END-EXEC is optional and might not be appropriate. If the statement appears in
an IF...THEN set of COBOL statements, omit the ending period to avoid
inadvertently ending the IF statement.
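For example, in the following fragment (the data names and table name are illustrative), the period is omitted after END-EXEC so that the ELSE still belongs to the IF:
IF EMP-FOUND = 'Y'
   EXEC SQL
     DELETE FROM EMP WHERE EMPNO = :EMPNO-HV
   END-EXEC
ELSE
   PERFORM NOT-FOUND-ROUTINE.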
| For an SQL INCLUDE statement, the DB2 precompiler treats any text that follows
| the period after END-EXEC, and on the same line as END-EXEC, as a comment.
| The DB2 coprocessor treats this text as part of the COBOL program syntax.
In addition, you can include SQL comments (’--’) in any embedded SQL statement.
| Debugging lines: The DB2 precompiler ignores the ’D’ in column 7 on debugging
| lines and treats it as a blank. The DB2 coprocessor follows the COBOL language
| rules regarding debugging lines.
Continuation for SQL statements: The rules for continuing a character string
constant from one line to the next in an SQL statement embedded in a COBOL
program are the same as those for continuing a non-numeric literal in COBOL.
However, you can use either a quote or an apostrophe as the first nonblank
character in area B of the continuation line. The same rule applies for the
continuation of delimited identifiers and does not depend on the string delimiter
option.
| COPY: If you use the DB2 precompiler, do not use a COBOL COPY statement
| within host variable declarations. If you use the DB2 coprocessor, you can use
| COBOL COPY.
REPLACE: If you use the DB2 precompiler, the REPLACE statement has no effect
on SQL statements. It affects only the COBOL statements that the precompiler
generates.
If you use the DB2 coprocessor, the REPLACE statement replaces text strings in
SQL statements as well as in generated COBOL statements.
Declaring tables and views: Your COBOL program should include the statement
DECLARE TABLE to describe each table and view the program accesses. You can
use the DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE
statements. You should include the DCLGEN members in the DATA DIVISION.
For more information, see “DCLGEN (declarations generator)” on page 129.
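For example, a DECLARE TABLE statement for a department table might look like the following sketch (the table and column definitions are illustrative; DCLGEN produces the exact declarations for your tables):
EXEC SQL DECLARE DSN8910.DEPT TABLE
  (DEPTNO   CHAR(3)     NOT NULL,
   DEPTNAME VARCHAR(36) NOT NULL,
   MGRNO    CHAR(6)             ,
   ADMRDEPT CHAR(3)     NOT NULL)
END-EXEC.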
If you are using the DB2 precompiler, you cannot nest SQL INCLUDE statements.
In this case, do not use COBOL verbs to include SQL statements or host variable
declarations, and do not use the SQL INCLUDE statement to include CICS
preprocessor related code. In general, if you are using the DB2 precompiler, use the
SQL INCLUDE statement only for SQL-related coding. If you are using the COBOL
DB2 coprocessor, none of these restrictions apply.
Margins: You must code SQL statements that begin with EXEC SQL in columns 12
through 72. Otherwise the DB2 precompiler does not recognize the SQL statement.
Names: You can use any valid COBOL name for a host variable. Do not use
external entry names or access plan names that begin with ’DSN’, and do not use
host variable names that begin with ’SQL’. These names are reserved for DB2.
Sequence numbers: The source statements that the DB2 precompiler generates do
not include sequence numbers.
Statement labels: You can precede executable SQL statements in the PROCEDURE
DIVISION with a paragraph name, if you wish.
WHENEVER statement: The target for the GOTO clause in an SQL statement
WHENEVER must be a section name or unqualified paragraph name in the
PROCEDURE DIVISION.
If your program uses the DB2 precompiler, uses parameters that are defined in the
LINKAGE SECTION as host variables to DB2, and the address of an input parameter
might change on subsequent invocations of your program, your program must reset the
SQL-INIT-FLAG variable that the precompiler generates. To reset the flag, code the
statement MOVE ZERO TO SQL-INIT-FLAG in the PROCEDURE DIVISION of the called
program, ahead of any executable SQL statements that use those host variables, so that
the SQL runtime storage is reinitialized before the next SQL statement executes.
You can use the MESSAGE_TEXT condition item field of the GET DIAGNOSTICS
statement to convert an SQL return code into a text message. Programs that require
long token message support should code the GET DIAGNOSTICS statement
instead of DSNTIAR. For more information about GET DIAGNOSTICS, see
“Checking the execution of SQL statements by using the GET DIAGNOSTICS
statement” on page 209.
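For example, a program might retrieve the message text for the first condition like this (the host variable name and length are illustrative):
01 MSG-TEXT PIC X(240).
...
EXEC SQL
  GET DIAGNOSTICS CONDITION 1 :MSG-TEXT = MESSAGE_TEXT
END-EXEC.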
You can use the subroutine DSNTIAR to convert an SQL return code into a text
message. DSNTIAR takes data from the SQLCA, formats it into a message, and
places the result in a message output area that you provide in your application
program. For concepts and more information on the behavior of DSNTIAR, see
“Displaying SQLCA fields by calling DSNTIAR” on page 204.
DSNTIAR syntax:
CALL 'DSNTIAR' USING sqlca message lrecl.
If your CICS application requires CICS storage handling, you must use the
subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
CALL 'DSNTIAC' USING eib commarea sqlca msg lrecl.
DSNTIAC has extra parameters, which you must use for calls to routines that use
CICS commands.
eib       EXEC interface block
commarea  communication area
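For either DSNTIAR or DSNTIAC, the message output area (msg) and the logical record length (lrecl) might be declared and passed as in the following sketch (the field names and sizes are illustrative):
77 ERROR-TEXT-LEN  PIC S9(9) COMP VALUE +132.
01 ERROR-MESSAGE.
   02 ERROR-LEN    PIC S9(4) COMP VALUE +1320.
   02 ERROR-TEXT   PIC X(132) OCCURS 10 TIMES.
...
CALL 'DSNTIAR' USING SQLCA ERROR-MESSAGE ERROR-TEXT-LEN.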
You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you
must also define them in the CSD. For an example of CSD entry generation
statements for use with DSNTIAC, see job DSNTEJ5A.
The assembler source code for DSNTIAC and job DSNTEJ5A, which assembles and
link-edits DSNTIAC, are in the data set prefix.SDSNSAMP.
Delimit an SQL statement in your COBOL program with the beginning keyword
EXEC SQL and an END-EXEC.
Example
Use EXEC SQL and END-EXEC. to delimit an SQL statement in a COBOL program:
EXEC SQL
an SQL statement
END-EXEC.
Where to place SQL statements in your application: A COBOL source data set or
member can contain the following elements:
v Multiple programs
v Multiple class definitions, each of which contains multiple methods
You can put SQL statements in only the first program or class in the source data
set or member. However, you can put SQL statements in multiple methods within
a class. If an application consists of multiple data sets or members, each of the data
sets or members can contain SQL statements.
Where to place the SQLCA, SQLDA, and host variable declarations: You can put
the SQLCA, SQLDA, and SQL host variable declarations in the
WORKING-STORAGE SECTION of a program, class, or method. An SQLCA or
SQLDA in a class WORKING-STORAGE SECTION is global for all the methods of
the class. An SQLCA or SQLDA in a method WORKING-STORAGE SECTION is
local to that method only.
If a class and a method within the class both contain an SQLCA or SQLDA, the
method uses the SQLCA or SQLDA that is local.
Rules for host variables: You can declare COBOL variables that are used as host
variables in the WORKING-STORAGE SECTION or LINKAGE SECTION of a
program, class, or method. You can also declare host variables in the
LOCAL-STORAGE SECTION of a method. The scope of a host variable is the
method, class, or program within which it is defined.
“Dynamic SQL” on page 162 describes three variations of dynamic SQL statements:
v Non-SELECT statements
v Fixed-List SELECT statements
In this case, you know the number of columns returned and their data types
when you write the program.
v Varying-List SELECT statements.
This example program does not support BLOB, CLOB, or DBCLOB data types.
COBOL has a POINTER type and a SET statement that provide pointers and based
variables.
The SET statement sets a pointer from the address of an area in the linkage section
or another pointer; the statement can also set the address of an area in the linkage
section. DSN8BCU2 in “Example of the sample COBOL program” provides these
uses of the SET statement. The SET statement does not permit the use of an
address in the WORKING-STORAGE section.
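For example (a sketch; LNK-AREA is assumed to be defined in the LINKAGE SECTION and WS-PTR is assumed to have USAGE POINTER):
SET WS-PTR TO ADDRESS OF LNK-AREA.
SET ADDRESS OF LNK-AREA TO WS-PTR.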
COBOL does not provide a means to allocate main storage within a program. You
can achieve the same end by having an initial program which allocates the storage,
and then calls a second program that manipulates the pointer. (COBOL does not
permit you to directly manipulate the pointer because errors and abends are likely
to occur.)
The initial program is extremely simple. It includes a working storage section that
allocates the maximum amount of storage needed. This program then calls the
second program, passing the area or areas on the CALL statement. The second
program defines the area in the linkage section and can then use pointers within
the area.
If you need to allocate parts of storage, the best method is to use indexes or
subscripts. You can use subscripts for arithmetic and comparison operations.
The following example shows the initial program DSN8BCU1, which
allocates the storage and calls the second program DSN8BCU2. DSN8BCU2 then
defines the passed storage areas in its linkage section and includes the USING
clause on its PROCEDURE DIVISION statement.
The following example is the called program that does pointer manipulation.
**** DSN8BCU2- DB2 SAMPLE BATCH COBOL UNLOAD PROGRAM ***********
* *
* MODULE NAME = DSN8BCU2 *
* *
* DESCRIPTIVE NAME = DB2 SAMPLE APPLICATION *
* UNLOAD PROGRAM *
* BATCH *
* ENTERPRISE COBOL FOR Z/OS *
* *
* *
* FUNCTION = THIS MODULE ACCEPTS A TABLE NAME OR VIEW NAME *
* AND UNLOADS THE DATA IN THAT TABLE OR VIEW. *
* READ IN A TABLE NAME FROM SYSIN. *
* PUT DATA FROM THE TABLE INTO DD SYSREC01. *
* WRITE RESULTS TO SYSPRINT. *
* *
* NOTES = *
* DEPENDENCIES = NONE. *
* *
* RESTRICTIONS = *
* THE SQLDA IS LIMITED TO 33016 BYTES. *
* THIS SIZE ALLOWS FOR THE DB2 MAXIMUM *
* OF 750 COLUMNS. *
* *
* DATA RECORDS ARE LIMITED TO 32700 BYTES, *
* INCLUDING DATA, LENGTHS FOR VARCHAR DATA, *
* AND SPACE FOR NULL INDICATORS. *
* *
* TABLE OR VIEW NAMES ARE ACCEPTED, AND ONLY *
* ONE NAME IS ALLOWED PER RUN. *
* *
* MODULE TYPE = COBOL PROGRAM *
* PROCESSOR = DB2 PRECOMPILER *
* *
* *
* MODULE SIZE = SEE LINK EDIT *
* ATTRIBUTES = REENTRANT *
* *
* ENTRY POINT = DSN8BCU2 *
* PURPOSE = SEE FUNCTION *
* LINKAGE = *
* CALL 'DSN8BCU2' USING WORKAREA-IND RECWORK. *
* *
* INPUT = SYMBOLIC LABEL/NAME = WORKAREA-IND *
* DESCRIPTION = INDICATOR VARIABLE ARRAY *
* 01 WORKAREA-IND. *
* 02 WORKIND PIC S9(4) COMP OCCURS 750 TIMES. *
* *
* SYMBOLIC LABEL/NAME = RECWORK *
* DESCRIPTION = WORK AREA FOR OUTPUT RECORD *
* 01 RECWORK. *
* 02 RECWORK-LEN PIC S9(8) COMP. *
* *
* SYMBOLIC LABEL/NAME = SYSIN *
* DESCRIPTION = INPUT REQUESTS - TABLE OR VIEW *
* *
The following figure contains a sample COBOL program that uses two-phase
commit and DRDA to access distributed data.
IDENTIFICATION DIVISION.
PROGRAM-ID. TWOPHASE.
AUTHOR.
REMARKS.
*****************************************************************
* *
* MODULE NAME = TWOPHASE *
* *
* DESCRIPTIVE NAME = DB2 SAMPLE APPLICATION USING *
* TWO PHASE COMMIT AND THE DRDA DISTRIBUTED *
* ACCESS METHOD WITH CONNECT STATEMENTS *
* *
* COPYRIGHT = 5665-DB2 (C) COPYRIGHT IBM CORP 1982, 1989 *
* REFER TO COPYRIGHT INSTRUCTIONS FORM NUMBER G120-2083 *
* *
* STATUS = VERSION 5 *
* *
* FUNCTION = THIS MODULE DEMONSTRATES DISTRIBUTED DATA ACCESS *
* USING 2 PHASE COMMIT BY TRANSFERRING AN EMPLOYEE *
* FROM ONE LOCATION TO ANOTHER. *
* *
* NOTE: THIS PROGRAM ASSUMES THE EXISTENCE OF THE *
* TABLE SYSADM.EMP AT LOCATIONS STLEC1 AND *
* STLEC2. *
* *
* MODULE TYPE = COBOL PROGRAM *
* PROCESSOR = DB2 PRECOMPILER, ENTERPRISE COBOL FOR Z/OS *
* MODULE SIZE = SEE LINK EDIT *
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
SELECT PRINTER, ASSIGN TO S-OUT1.
DATA DIVISION.
*****************************************************************
* Variable declarations *
*****************************************************************
01 H-EMPTBL.
05 H-EMPNO PIC X(6).
05 H-NAME.
49 H-NAME-LN PIC S9(4) COMP-4.
49 H-NAME-DA PIC X(32).
05 H-ADDRESS.
49 H-ADDRESS-LN PIC S9(4) COMP-4.
49 H-ADDRESS-DA PIC X(36).
05 H-CITY.
49 H-CITY-LN PIC S9(4) COMP-4.
49 H-CITY-DA PIC X(36).
05 H-EMPLOC PIC X(4).
05 H-SSNO PIC X(11).
05 H-BORN PIC X(10).
05 H-SEX PIC X(1).
05 H-HIRED PIC X(10).
05 H-DEPTNO PIC X(3).
05 H-JOBCODE PIC S9(3)V COMP-3.
05 H-SRATE PIC S9(5) COMP.
05 H-EDUC PIC S9(5) COMP.
05 H-SAL PIC S9(6)V9(2) COMP-3.
05 H-VALIDCHK PIC S9(6)V COMP-3.
01 H-EMPTBL-IND-TABLE.
02 H-EMPTBL-IND PIC S9(4) COMP OCCURS 15 TIMES.
*****************************************************************
* Includes for the variables used in the COBOL standard *
* language procedures and the SQLCA. *
*****************************************************************
*****************************************************************
* Declaration for the table that contains employee information *
*****************************************************************
*****************************************************************
* Constants *
*****************************************************************
*****************************************************************
* Declaration of the cursor that will be used to retrieve *
* information about a transferring employee *
*****************************************************************
PROCEDURE DIVISION.
A101-HOUSE-KEEPING.
OPEN OUTPUT PRINTER.
*****************************************************************
* An employee is transferring from location STLEC1 to STLEC2. *
* Retrieve information about the employee from STLEC1, delete *
* the employee from STLEC1 and insert the employee at STLEC2 *
* using the information obtained from STLEC1. *
*****************************************************************
MAINLINE.
PERFORM CONNECT-TO-SITE-1
IF SQLCODE IS EQUAL TO 0
PERFORM PROCESS-CURSOR-SITE-1
IF SQLCODE IS EQUAL TO 0
PERFORM UPDATE-ADDRESS
PERFORM CONNECT-TO-SITE-2
IF SQLCODE IS EQUAL TO 0
PERFORM PROCESS-SITE-2.
PERFORM COMMIT-WORK.
PROG-END.
CLOSE PRINTER.
GOBACK.
*****************************************************************
* Establish a connection to STLEC1 *
*****************************************************************
CONNECT-TO-SITE-1.
*****************************************************************
* When a connection has been established successfully at STLEC1,*
PROCESS-CURSOR-SITE-1.
*****************************************************************
* Retrieve information about the transferring employee. *
* Provided that the employee exists, perform DELETE-SITE-1 to *
* delete the employee from STLEC1. *
*****************************************************************
FETCH-DELETE-SITE-1.
DELETE-SITE-1.
*****************************************************************
* Close the cursor used to retrieve information about the *
* transferring employee. *
*****************************************************************
CLOSE-CURSOR-SITE-1.
*****************************************************************
* Update certain employee information in order to make it *
* current. *
*****************************************************************
UPDATE-ADDRESS.
*****************************************************************
* Establish a connection to STLEC2 *
*****************************************************************
CONNECT-TO-SITE-2.
PROCESS-SITE-2.
*****************************************************************
* COMMIT any changes that were made at STLEC1 and STLEC2. *
*****************************************************************
COMMIT-WORK.
*****************************************************************
* Include COBOL standard language procedures *
*****************************************************************
INCLUDE-SUBS.
EXEC SQL INCLUDE COBSSUB END-EXEC.
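The EXEC SQL statements themselves are omitted from this excerpt. For the connection paragraphs, the statements take the general form shown in the following sketch (not the exact sample code):
CONNECT-TO-SITE-1.
    EXEC SQL CONNECT TO STLEC1 END-EXEC.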
The following sample program demonstrates distributed data access that uses
two-phase commit and DRDA with three-part names.
IDENTIFICATION DIVISION.
PROGRAM-ID. TWOPHASE.
AUTHOR.
REMARKS.
*****************************************************************
* *
* MODULE NAME = TWOPHASE *
* *
* DESCRIPTIVE NAME = DB2 SAMPLE APPLICATION USING *
* TWO PHASE COMMIT AND DRDA WITH *
* THREE-PART NAMES *
* *
* COPYRIGHT = 5665-DB2 (C) COPYRIGHT IBM CORP 1982, 1989 *
* REFER TO COPYRIGHT INSTRUCTIONS FORM NUMBER G120-2083 *
* *
* STATUS = VERSION 5 *
* *
* FUNCTION = THIS MODULE DEMONSTRATES DISTRIBUTED DATA ACCESS *
* USING 2 PHASE COMMIT BY TRANSFERRING AN EMPLOYEE *
* FROM ONE LOCATION TO ANOTHER. *
* *
* NOTE: THIS PROGRAM ASSUMES THE EXISTENCE OF THE *
* TABLE SYSADM.EMP AT LOCATIONS STLEC1 AND *
* STLEC2. *
* *
* MODULE TYPE = COBOL PROGRAM *
* PROCESSOR = DB2 PRECOMPILER, ENTERPRISE COBOL FOR Z/OS *
* MODULE SIZE = SEE LINK EDIT *
* ATTRIBUTES = NOT REENTRANT OR REUSABLE *
* *
* ENTRY POINT = *
* PURPOSE = TO ILLUSTRATE 2 PHASE COMMIT *
* LINKAGE = INVOKE FROM DSN RUN *
* INPUT = NONE *
* OUTPUT = *
* SYMBOLIC LABEL/NAME = SYSPRINT *
* DESCRIPTION = PRINT OUT THE DESCRIPTION OF EACH *
* STEP AND THE RESULTANT SQLCA *
* *
* EXIT NORMAL = RETURN CODE 0 FROM NORMAL COMPLETION *
* *
* EXIT ERROR = NONE *
* *
* EXTERNAL REFERENCES = *
* ROUTINE SERVICES = NONE *
* DATA-AREAS = NONE *
* CONTROL-BLOCKS = *
* SQLCA - SQL COMMUNICATION AREA *
* *
* TABLES = NONE *
* *
* CHANGE-ACTIVITY = NONE *
* *
* *
* *
* PSEUDOCODE *
* *
* MAINLINE. *
* Perform PROCESS-CURSOR-SITE-1 to obtain the information *
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
SELECT PRINTER, ASSIGN TO S-OUT1.
DATA DIVISION.
FILE SECTION.
FD PRINTER
RECORD CONTAINS 120 CHARACTERS
DATA RECORD IS PRT-TC-RESULTS
LABEL RECORD IS OMITTED.
01 PRT-TC-RESULTS.
03 PRT-BLANK PIC X(120).
WORKING-STORAGE SECTION.
*****************************************************************
* Variable declarations *
*****************************************************************
01 H-EMPTBL.
05 H-EMPNO PIC X(6).
05 H-NAME.
49 H-NAME-LN PIC S9(4) COMP-4.
49 H-NAME-DA PIC X(32).
05 H-ADDRESS.
49 H-ADDRESS-LN PIC S9(4) COMP-4.
49 H-ADDRESS-DA PIC X(36).
05 H-CITY.
49 H-CITY-LN PIC S9(4) COMP-4.
49 H-CITY-DA PIC X(36).
05 H-EMPLOC PIC X(4).
05 H-SSNO PIC X(11).
05 H-BORN PIC X(10).
05 H-SEX PIC X(1).
05 H-HIRED PIC X(10).
05 H-DEPTNO PIC X(3).
05 H-JOBCODE PIC S9(3)V COMP-3.
05 H-SRATE PIC S9(5) COMP.
05 H-EDUC PIC S9(5) COMP.
05 H-SAL PIC S9(6)V9(2) COMP-3.
05 H-VALIDCHK PIC S9(6)V COMP-3.
01 H-EMPTBL-IND-TABLE.
02 H-EMPTBL-IND PIC S9(4) COMP OCCURS 15 TIMES.
*****************************************************************
* Includes for the variables used in the COBOL standard *
* language procedures and the SQLCA. *
*****************************************************************
*****************************************************************
* Declaration for the table that contains employee information *
*****************************************************************
*****************************************************************
* Constants *
*****************************************************************
*****************************************************************
* Declaration of the cursor that will be used to retrieve *
* information about a transferring employee *
*****************************************************************
*****************************************************************
* An employee is transferring from location STLEC1 to STLEC2. *
* Retrieve information about the employee from STLEC1, delete *
* the employee from STLEC1 and insert the employee at STLEC2 *
* using the information obtained from STLEC1. *
*****************************************************************
MAINLINE.
PERFORM PROCESS-CURSOR-SITE-1
IF SQLCODE IS EQUAL TO 0
PERFORM UPDATE-ADDRESS
PERFORM PROCESS-SITE-2.
PERFORM COMMIT-WORK.
PROG-END.
CLOSE PRINTER.
GOBACK.
*****************************************************************
* Open the cursor that will be used to retrieve information *
* about the transferring employee. *
*****************************************************************
PROCESS-CURSOR-SITE-1.
*****************************************************************
* Retrieve information about the transferring employee. *
* Provided that the employee exists, perform DELETE-SITE-1 to *
* delete the employee from STLEC1. *
*****************************************************************
FETCH-DELETE-SITE-1.
*****************************************************************
* Delete the employee from STLEC1. *
*****************************************************************
DELETE-SITE-1.
*****************************************************************
* Close the cursor used to retrieve information about the *
* transferring employee. *
*****************************************************************
CLOSE-CURSOR-SITE-1.
*****************************************************************
* Update certain employee information in order to make it *
* current. *
*****************************************************************
UPDATE-ADDRESS.
MOVE TEMP-ADDRESS-LN TO H-ADDRESS-LN.
MOVE '1500 NEW STREET' TO H-ADDRESS-DA.
MOVE TEMP-CITY-LN TO H-CITY-LN.
MOVE 'NEW CITY, CA 97804' TO H-CITY-DA.
MOVE 'SJCA' TO H-EMPLOC.
****************************************************************
* Using the employee information that was retrieved from STLEC1 *
* and updated previously, insert the employee at STLEC2. *
*****************************************************************
PROCESS-SITE-2.
*****************************************************************
* COMMIT any changes that were made at STLEC1 and STLEC2. *
*****************************************************************
COMMIT-WORK.
*****************************************************************
* Include COBOL standard language procedures *
*****************************************************************
INCLUDE-SUBS.
EXEC SQL INCLUDE COBSSUB END-EXEC.
The linkage convention for this stored procedure is GENERAL WITH NULLS.
The output parameters from this stored procedure contain the SQLCODE from the
SELECT operation, and the value of the RUNOPTS column retrieved from the
SYSIBM.SYSROUTINES table.
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
DATA DIVISION.
FILE SECTION.
*
WORKING-STORAGE SECTION.
*
EXEC SQL INCLUDE SQLCA END-EXEC.
*
***************************************************
* DECLARE A HOST VARIABLE TO HOLD INPUT SCHEMA
***************************************************
01 INSCHEMA PIC X(8).
***************************************************
* DECLARE CURSOR FOR RETURNING RESULT SETS
***************************************************
*
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
SELECT NAME FROM SYSIBM.SYSTABLES WHERE CREATOR=:INSCHEMA
END-EXEC.
*
LINKAGE SECTION.
***************************************************
* DECLARE THE INPUT PARAMETERS FOR THE PROCEDURE
***************************************************
01 PROCNM PIC X(18).
01 SCHEMA PIC X(8).
***************************************************
* DECLARE THE OUTPUT PARAMETERS FOR THE PROCEDURE
***************************************************
01 OUT-CODE PIC S9(9) USAGE BINARY.
01 PARMLST.
49 PARMLST-LEN PIC S9(4) USAGE BINARY.
49 PARMLST-TEXT PIC X(254).
***************************************************
* DECLARE THE STRUCTURE CONTAINING THE NULL
* INDICATORS FOR THE INPUT AND OUTPUT PARAMETERS.
The CREATE PROCEDURE statement for this stored procedure might look like
this:
CREATE PROCEDURE GETPRML(PROCNM CHAR(18) IN, SCHEMA CHAR(8) IN,
OUTCODE INTEGER OUT, PARMLST VARCHAR(254) OUT)
LANGUAGE COBOL
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME "GETPRML"
COLLID GETPRML
ASUTIME NO LIMIT
PARAMETER STYLE GENERAL
STAY RESIDENT NO
RUN OPTIONS "MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)"
WLM ENVIRONMENT SAMPPROG
PROGRAM TYPE MAIN
SECURITY DB2
RESULT SETS 2
COMMIT ON RETURN NO;
CBL RENT
IDENTIFICATION DIVISION.
PROGRAM-ID. GETPRML.
AUTHOR. EXAMPLE.
DATE-WRITTEN. 03/25/98.
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
DATA DIVISION.
FILE SECTION.
WORKING-STORAGE SECTION.
***************************************************
* DECLARE CURSOR FOR RETURNING RESULT SETS
***************************************************
*
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
SELECT NAME FROM SYSIBM.SYSTABLES WHERE CREATOR=:INSCHEMA
END-EXEC.
*
LINKAGE SECTION.
***************************************************
* DECLARE THE INPUT PARAMETERS FOR THE PROCEDURE
***************************************************
01 PROCNM PIC X(18).
01 SCHEMA PIC X(8).
*******************************************************
* DECLARE THE OUTPUT PARAMETERS FOR THE PROCEDURE
*******************************************************
01 OUT-CODE PIC S9(9) USAGE BINARY.
01 PARMLST.
49 PARMLST-LEN PIC S9(4) USAGE BINARY.
*******************************************************
* COPY SQLCODE INTO THE OUTPUT PARAMETER AREA
*******************************************************
MOVE SQLCODE TO OUT-CODE.
*******************************************************
* OPEN CURSOR C1 TO CAUSE DB2 TO RETURN A RESULT SET
* TO THE CALLER.
*******************************************************
EXEC SQL OPEN C1
END-EXEC.
PROG-END.
GOBACK.
The following figure contains the example COBOL program that calls the
GETPRML stored procedure.
IDENTIFICATION DIVISION.
PROGRAM-ID. CALPRML.
ENVIRONMENT DIVISION.
CONFIGURATION SECTION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
SELECT REPOUT
ASSIGN TO UT-S-SYSPRINT.
DATA DIVISION.
FILE SECTION.
FD REPOUT
RECORD CONTAINS 127 CHARACTERS
LABEL RECORDS ARE OMITTED
DATA RECORD IS REPREC.
01 REPREC PIC X(127).
WORKING-STORAGE SECTION.
*****************************************************
* MESSAGES FOR SQL CALL *
*****************************************************
01 SQLREC.
02 BADMSG PIC X(34) VALUE
' SQL CALL FAILED DUE TO SQLCODE = '.
02 BADCODE PIC +9(5) USAGE DISPLAY.
02 FILLER PIC X(80) VALUE SPACES.
01 ERRMREC.
02 ERRMMSG PIC X(12) VALUE ' SQLERRMC = '.
*****************************************************
* SQL INCLUDE FOR SQLCA *
*****************************************************
EXEC SQL INCLUDE SQLCA END-EXEC.
PROCEDURE DIVISION.
*------------------
PROG-START.
OPEN OUTPUT REPOUT.
* OPEN OUTPUT FILE
MOVE 'DSN8EP2 ' TO PROCNM.
* INPUT PARAMETER -- PROCEDURE TO BE FOUND
MOVE SPACES TO SCHEMA.
* INPUT PARAMETER -- SCHEMA IN SYSROUTINES
MOVE -1 TO PARMIND.
* THE PARMLST PARAMETER IS AN OUTPUT PARM.
* MARK PARMLST PARAMETER AS NULL, SO THE DB2
* REQUESTER DOESN'T HAVE TO SEND THE ENTIRE
* PARMLST VARIABLE TO THE SERVER. THIS
* HELPS REDUCE NETWORK I/O TIME, BECAUSE
* PARMLST IS FAIRLY LARGE.
EXEC SQL
CALL GETPRML(:PROCNM,
:SCHEMA,
:OUT-CODE,
:PARMLST INDICATOR :PARMIND)
END-EXEC.
If you specify the SQL processing option STDSQL(YES), do not define an SQLCA.
If you do, DB2 ignores your SQLCA, and your SQLCA definition causes
compile-time errors. If you specify the SQL processing option STDSQL(NO),
include an SQLCA explicitly.
If your application contains SQL statements and does not include an SQL
communications area (SQLCA), you must declare individual SQLCODE and
SQLSTATE host variables. Your program can use these variables to check whether
an SQL statement executed successfully.
To define the SQL communications area, code the SQLCA directly in the program
or use the following SQL INCLUDE statement to request a standard SQLCA
declaration:
EXEC SQL INCLUDE SQLCA
DB2 sets the SQLCODE and SQLSTATE
values in the SQLCA after each SQL
statement executes. Your application should
check these values to determine whether the
last SQL statement was successful.
Related tasks
“Checking the execution of SQL statements” on page 202
“Checking the execution of SQL statements by using the SQLCA” on page 203
“Checking the execution of SQL statements by using SQLCODE and SQLSTATE”
on page 207
“Defining the items that your program can use to check whether an SQL statement
executed successfully” on page 141
Call a subroutine that is written in C, PL/I, or assembler language and that uses
the INCLUDE SQLDA statement to define the SQLDA. The subroutine can also
include SQL statements for any dynamic SQL functions that you need.
Restrictions:
v You must place SQLDA declarations before the first SQL statement that
references the data descriptor, unless you use the TWOPASS SQL processing
option.
v You cannot use the SQL INCLUDE statement for the SQLDA, because it is not
supported in Fortran.
Restrictions:
v Only some of the valid Fortran declarations are valid host variable declarations.
If the declaration for a variable is not valid, any SQL statement that references
the variable might result in the message UNDECLARED HOST VARIABLE.
v Fortran supports some data types with no SQL equivalent (for example,
REAL*16 and COMPLEX). In most cases, you can use Fortran statements to
convert between the unsupported data types and the data types that SQL allows.
v You cannot use locators as column types.
Recommendations:
v Be careful of overflow. For example, if you retrieve an INTEGER column value
into an INTEGER*2 host variable and the column value is larger than 32767 or
smaller than -32768, you get an overflow warning or an error, depending on whether you
provided an indicator variable.
v Be careful of truncation. For example, if you retrieve an 80-character CHAR
column value into a CHARACTER*70 host variable, the rightmost ten characters
of the retrieved string are truncated. Retrieving a double-precision floating-point
or decimal column value into an INTEGER*4 host variable removes any
fractional value.
The following diagram shows the syntax for declaring numeric host variables.
   INTEGER*2 | INTEGER[*4] | REAL[*4] | REAL*8 | DOUBLE PRECISION   variable-name   [ / numeric-constant / ]
Restrictions:
v Fortran does not provide an equivalent for the decimal data type. To hold a
decimal value, use one of the following variables:
– An integer or floating-point variable, which converts the value. If you use an
integer variable, you lose the fractional part of the number. If the decimal
number can exceed the maximum value for an integer or you want to
preserve a fractional value, use a floating-point variable. Floating-point
numbers are approximations of real numbers. Therefore, when you assign a
decimal number to a floating-point variable, the result might be different from
the original number.
– A character string host variable. Use the CHAR function to retrieve a decimal
value into it, as shown in the sketch after this list.
| v The SQL data type DECFLOAT has no equivalent in Fortran.
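For instance, a minimal sketch of the character-string approach might be the
following SQL statement, assuming the sample DSN8910.EMP table; the host
variable SALSTR and the EMPNO value '000010' are illustrative only:

   SELECT CHAR(SALARY)
     INTO :SALSTR
     FROM DSN8910.EMP
     WHERE EMPNO = '000010'

The CHAR function returns the character representation of the DECIMAL value,
so no fractional digits are lost.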
The following diagram shows the syntax for declaring character host variables
other than CLOBs.
   CHARACTER[*n]   variable-name[*n]   [ / character-constant / ]
The following diagram shows the syntax for declarations of ROWID variables.
   SQL TYPE IS ROWID   variable-name
Constants
The syntax for constants in Fortran programs differs from the syntax for constants
in SQL statements in the following ways:
v Fortran interprets a string of digits with a decimal point to be a real constant.
An SQL statement interprets such a string to be a decimal constant. Therefore,
use exponent notation when specifying a real (that is, floating-point) constant in
an SQL statement.
v In Fortran, a real (floating-point) constant that has a length of 8 bytes uses a D
as the exponent indicator (for example, 3.14159D+04). An 8-byte floating-point
constant in an SQL statement must use an E (for example, 3.14159E+04).
The following diagram shows the syntax for declaring an indicator variable in
Fortran.
   INTEGER*2   variable-name   [ / numeric-constant / ]
Example
The following example shows a FETCH statement with the declarations of the host
variables that are needed for the FETCH statement and their associated indicator
variables.
EXEC SQL FETCH CLS_CURSOR INTO :CLSCD,
C :DAY :DAYIND,
C :BGN :BGNIND,
C :END :ENDIND
The following table describes the SQL data type and the base SQLTYPE and
SQLLEN values that the precompiler uses for host variables in SQL statements.
Table 68. SQL data types, SQLLEN values, and SQLTYPE values that the precompiler uses for host variables in
Fortran programs
Fortran host variable data type    SQLTYPE of host variable1    SQLLEN of host variable    SQL data type
INTEGER*2 500 2 SMALLINT
INTEGER*4 496 4 INTEGER
REAL*4 480 4 FLOAT (single precision)
REAL*8 480 8 FLOAT (double precision)
CHARACTER*n 452 n CHAR(n)
SQL TYPE IS RESULT_SET_LOCATOR 972 4 Result set locator. Do not use this data type as a column type.
SQL TYPE IS BLOB_LOCATOR 960 4 BLOB locator. Do not use this data type as a column type.
SQL TYPE IS CLOB_LOCATOR 964 4 CLOB locator. Do not use this data type as a column type.
SQL TYPE IS BLOB(n) 404 n BLOB(n)
1≤n≤2147483647
SQL TYPE IS CLOB(n) 408 n CLOB(n)
1≤n≤2147483647
SQL TYPE IS ROWID 904 40 ROWID
Notes:
1. If a host variable includes an indicator variable, the SQLTYPE value is the base
SQLTYPE value plus 1.
The following table shows equivalent Fortran host variables for each SQL data
type. Use this table to determine the Fortran data type for host variables that you
define to receive output from the database. For example, if you retrieve
TIMESTAMP data, you can define a variable of type CHARACTER*n.
This table shows direct conversions between SQL data types and Fortran data
types. However, a number of SQL data types are compatible. When you do
assignments or comparisons of data that have compatible data types, DB2 converts
those compatible data types.
Each SQL statement in a Fortran program must begin with EXEC SQL. The EXEC
and SQL keywords must appear on one line, but the remainder of the statement
can appear on subsequent lines.
You cannot follow an SQL statement with another SQL statement or Fortran
statement on the same line.
Fortran does not require blanks to delimit words within a statement, but the SQL
language requires blanks. The rules for embedded SQL follow the rules for SQL
syntax, which require you to use one or more blanks as a delimiter.
Comments: You can include Fortran comment lines within embedded SQL
statements wherever you can use a blank, except between the keywords EXEC and
SQL. You can include SQL comments in any embedded SQL statement.
The DB2 precompiler does not support the exclamation point (!) as a comment
recognition character in Fortran programs.
Continuation for SQL statements: The line continuation rules for SQL statements
are the same as those for Fortran statements, except that you must specify EXEC
SQL on one line. The SQL examples in this topic have Cs in the sixth column to
indicate that they are continuations of EXEC SQL.
Declaring tables and views: Your Fortran program should also include the
DECLARE TABLE statement to describe each table and view the program accesses.
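For example, a DECLARE TABLE statement for the sample department table might
look like the following sketch (the column definitions are an assumption based
on the DSN8910.DEPT sample table):

   DECLARE DSN8910.DEPT TABLE
     (DEPTNO   CHAR(3)     NOT NULL,
      DEPTNAME VARCHAR(36) NOT NULL,
      MGRNO    CHAR(6),
      ADMRDEPT CHAR(3)     NOT NULL,
      LOCATION CHAR(16))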
You can use a Fortran character variable in the statements PREPARE and
EXECUTE IMMEDIATE, even if it is fixed-length.
You cannot nest SQL INCLUDE statements. You cannot use the Fortran INCLUDE
compiler directive to include SQL statements or Fortran host variable declarations.
Margins: Code SQL statements in columns 7 through 72, inclusive. If
EXEC SQL starts before the specified left margin, the DB2 precompiler does not
recognize the SQL statement.
Names: You can use any valid Fortran name for a host variable. Do not use
external entry names that begin with ’DSN’ or host variable names that begin with
’SQL’. These names are reserved for DB2.
Do not use the word DEBUG, except when defining a Fortran DEBUG packet. Do
not use the words FUNCTION, IMPLICIT, PROGRAM, and SUBROUTINE to
define variables.
Sequence numbers: The source statements that the DB2 precompiler generates do
not include sequence numbers.
Statement labels: You can specify statement numbers for SQL statements in
columns 1 to 5. However, during program preparation, a labeled SQL statement
generates a Fortran CONTINUE statement with that label before it generates the
code that executes the SQL statement. Therefore, a labeled SQL statement should
never be the last statement in a DO loop. In addition, you should not label SQL
statements (such as INCLUDE and BEGIN DECLARE SECTION) that occur before
the first executable SQL statement, because an error might occur.
WHENEVER statement: The target for the GOTO clause in the SQL WHENEVER
statement must be a label in the Fortran source code and must refer to a statement
in the same subprogram. The WHENEVER statement only applies to SQL
statements in the same subprogram.
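For instance, a minimal sketch, in which the statement label 900 and the error
routine it identifies are assumptions, might look like this:

   EXEC SQL WHENEVER SQLERROR GOTO 900

Any subsequent SQL statement in the same subprogram that produces an error
then branches to the Fortran statement labeled 900.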
DB2 supports Version 3 Release 1 (or later) of VS Fortran with the following
restrictions:
v The parallel option is not supported. Applications that contain SQL statements
must not use Fortran parallelism.
v You cannot use the byte data type within embedded SQL, because byte is not a
recognizable host data type.
You can use the subroutine DSNTIR to convert an SQL return code into a text
message. DSNTIR builds a parameter list and calls DSNTIAR for you. DSNTIAR
takes data from the SQLCA, formats it into a message, and places the result in a
message output area that you provide in your application program.
You can also use the MESSAGE_TEXT condition item field of the GET
DIAGNOSTICS statement to convert an SQL return code into a text message.
Programs that require long token message support should code the GET
DIAGNOSTICS statement instead of DSNTIAR. For more information about GET
DIAGNOSTICS, see “Checking the execution of SQL statements by using the GET
DIAGNOSTICS statement” on page 209.
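For instance, a minimal sketch of retrieving the message text for the first
condition might look like this; MSGTEXT is an assumed host variable that you
would declare as a sufficiently long character string:

   EXEC SQL GET DIAGNOSTICS CONDITION 1
            :MSGTEXT = MESSAGE_TEXT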
DSNTIR syntax:
CALL DSNTIR ( ERRLEN, ERRTXT, ICODE )
where ERRLEN is the total length of the message output area, ERRTXT is the
name of the message output area, and ICODE is the return code.
return-code
Accepts a return code from DSNTIAR.
Delimit an SQL statement in your Fortran program with the beginning keyword
EXEC SQL and an end of line or end of last continued line.
Related reference
“Programming examples” on page 227
If you specify the SQL processing option STDSQL(YES), do not define an SQLCA.
If you do, DB2 ignores your SQLCA, and your SQLCA definition causes
compile-time errors. If you specify the SQL processing option STDSQL(NO),
include an SQLCA explicitly.
If your application contains SQL statements and does not include an SQL
communications area (SQLCA), you must declare individual SQLCODE and
SQLSTATE host variables. Your program can use these variables to check whether
an SQL statement executed successfully.
To define the SQL communications area, code the SQLCA directly in the program
or use the following SQL INCLUDE statement to request a standard SQLCA
declaration:
EXEC SQL INCLUDE SQLCA
DB2 sets the SQLCODE and SQLSTATE
values in the SQLCA after each SQL
statement executes. Your application should
check these values to determine whether the
last SQL statement was successful.
Related tasks
“Checking the execution of SQL statements” on page 202
“Checking the execution of SQL statements by using the SQLCA” on page 203
“Checking the execution of SQL statements by using SQLCODE and SQLSTATE”
on page 207
“Defining the items that your program can use to check whether an SQL statement
executed successfully” on page 141
Code the SQLDA directly in the program, or use the following SQL INCLUDE
statement to request a standard SQLDA declaration:
EXEC SQL INCLUDE SQLDA
Restriction: You must place SQLDA declarations before the first SQL statement
that references the data descriptor, unless you use the TWOPASS SQL processing
option.
Related tasks
“Defining SQL descriptor areas” on page 141
Restrictions:
v Only some of the valid PL/I declarations are valid host variable declarations.
The precompiler uses the data attribute defaults that are specified in the PL/I
DEFAULT statement. If the declaration for a host variable is not valid, any SQL
statement that references the variable might result in the message
UNDECLARED HOST VARIABLE.
v The alignment, scope, and storage attributes of host variables have the following
restrictions:
– A declaration with the EXTERNAL scope attribute and the STATIC storage
attribute must also have the INITIAL storage attribute.
– If you use the BASED storage attribute, you must follow it with a PL/I
element-locator-expression.
– Host variables can be STATIC, CONTROLLED, BASED, or AUTOMATIC
storage class, or options. However, CICS requires that programs be reentrant.
Although the precompiler uses only the names and data attributes of variables
and ignores the alignment, scope, and storage attributes, you should not ignore
these restrictions. If you do ignore them, you might have problems compiling
the PL/I source code that the precompiler generates.
v PL/I supports some data types with no SQL equivalent (COMPLEX and BIT
variables, for example). In most cases, you can use PL/I statements to convert
between the unsupported PL/I data types and the data types that SQL supports.
v You cannot use locators as column types.
The following locator data types are PL/I data types as well as SQL data types:
– Result set locator
– Table locator
Recommendations:
v Be careful of overflow. For example, if you retrieve an INTEGER column value
into a BIN FIXED(15) host variable and the column value is larger than 32767 or
smaller than -32768, you get an overflow warning or an error, depending on
whether you provided an indicator variable.
v Be careful of truncation. For example, if you retrieve an 80-character CHAR
column value into a CHAR(70) host variable, the rightmost ten characters of the
retrieved string are truncated. Retrieving a double-precision floating-point or
decimal column value into a BIN FIXED(31) host variable removes any fractional
part of the value. Similarly, retrieving a column value with a DECIMAL data
type into a PL/I decimal variable with a lower precision might truncate the
value.
The following diagram shows the syntax for declaring numeric host variables.
   DECLARE|DCL   variable-name | ( variable-name, ... )
      BINARY|BIN | DECIMAL|DEC   { FIXED [ ( precision [,scale (1)] ) ]  |  FLOAT ( precision ) }
      [ Alignment and/or Scope and/or Storage ] (2) ;
Notes:
1 You can specify a scale only for DECIMAL FIXED.
2 You can specify host variable attributes in any order that is acceptable to
PL/I. For example, BIN FIXED(31), BINARY FIXED(31), BIN(31) FIXED, and
FIXED BIN(31) are all acceptable.
For floating-point data types, use the FLOAT SQL processing option to specify
whether the host variable is in IEEE binary floating-point or z/Architecture
hexadecimal floating-point format. DB2 does not check if the format of the host
variable contents match the format that you specified with the FLOAT SQL
processing option. Therefore, you need to ensure that your floating-point host
variable contents match the format that you specified with the FLOAT SQL
processing option. DB2 converts all floating-point input data to z/Architecture
hexadecimal floating-point format before storing it.
If the PL/I compiler that you are using does not support a decimal data type with
a precision greater than 15, use one of the following variable types for decimal
data:
The following diagram shows the syntax for declaring character host variables,
other than CLOBs.
   DECLARE|DCL   variable-name | ( variable-name, ... )
      CHARACTER|CHAR ( length ) [ VARYING|VAR ]   [ Alignment and/or Scope and/or Storage ] ;
The following diagram shows the syntax for declaring graphic host variables other
than DBCLOBs.
   DECLARE|DCL   variable-name | ( variable-name, ... )
      GRAPHIC ( length ) [ VARYING|VAR ]   [ Alignment and/or Scope and/or Storage ] ;
| PL/I does not have variables that correspond to the SQL binary data types
| BINARY and VARBINARY. To create host variables that can be used with these
| data types, use the SQL TYPE IS clause. The following shows the syntax for
| declaring binary and varbinary host variables.
|    DECLARE|DCL   variable-name   SQL TYPE IS   BINARY ( length ) | VARBINARY ( length ) ;  (1)
| Notes:
| 1 For BINARY host variables, the length must be in the range from 1 to 255.
| For VARBINARY host variables, the length must be in the range from 1 to
| 32 704.
The following diagram shows the syntax for declaring result set locators.
   DECLARE|DCL   variable-name | ( variable-name, ... )
      SQL TYPE IS RESULT_SET_LOCATOR VARYING   [ Alignment and/or Scope and/or Storage ] ;
The following diagram shows the syntax for declaring table locators.
   DCL|DECLARE   variable-name | ( variable-name, ... )
      SQL TYPE IS TABLE LIKE table-name AS LOCATOR ;
| The following diagram shows the syntax for declaring BLOB, CLOB, and DBCLOB
| host variables, locators, and file reference variables.
|    DECLARE|DCL   variable-name   SQL TYPE IS
|       BINARY LARGE OBJECT | BLOB ( length [K|M|G] )
|     | CHARACTER LARGE OBJECT | CHAR LARGE OBJECT | CLOB ( length [K|M|G] )
|     | DBCLOB ( length [K|M|G] )
|     | BLOB_LOCATOR | CLOB_LOCATOR | DBCLOB_LOCATOR
|     | BLOB_FILE | CLOB_FILE | DBCLOB_FILE ;   (1) (2)
Notes:
1 A single PL/I declaration that contains a LOB variable declaration is limited
to no more than 1000 lines of source code.
2 Variable attributes such as STATIC and AUTOMATIC are ignored if specified
on a LOB variable declaration.
| The following diagram shows the syntax for declaring BLOB, CLOB, and DBCLOB
| host variables and file reference variables for XML data types.
|    DECLARE|DCL   variable-name   SQL TYPE IS XML AS
|       BINARY LARGE OBJECT | BLOB ( length [K|M|G] )
|     | CHARACTER LARGE OBJECT | CHAR LARGE OBJECT | CLOB ( length [K|M|G] )
|     | DBCLOB ( length [K|M|G] )
|     | BLOB_FILE | CLOB_FILE | DBCLOB_FILE ;
The following diagram shows the syntax for declaring ROWID host variables.
   DECLARE|DCL   variable-name | ( variable-name, ... )   SQL TYPE IS ROWID ;
Related concepts
“Host variables” on page 143
“Rules for host variables in an SQL statement” on page 151
“Large objects (LOBs)” on page 430
Related tasks
“Determining whether a retrieved value in a host variable is null or truncated” on
page 154
“Inserting a single row by using a host variable” on page 158
“Inserting null values into columns by using indicator variables or arrays” on page
158
“Retrieving a single row of data into a host structure” on page 161
“Retrieving a single row of data into host variables” on page 152
“Updating data by using host variables” on page 157
Restrictions:
v Only some of the valid PL/I declarations are valid host variable declarations.
The precompiler uses the data attribute defaults that are specified in the PL/I
DEFAULT statement. If the declaration for a host variable is not valid, any SQL
statement that references the host variable array might result in the message
UNDECLARED HOST VARIABLE ARRAY.
v The alignment, scope, and storage attributes of host variable arrays have the
following restrictions:
– A declaration with the EXTERNAL scope attribute and the STATIC storage
attribute must also have the INITIAL storage attribute.
– If you use the BASED storage attribute, you must follow it with a PL/I
element-locator-expression.
The following diagram shows the syntax for declaring numeric host variable
arrays.
   DECLARE|DCL   variable-name ( dimension )(1) | ( variable-name ( dimension )(1), ... )
      BINARY|BIN | DECIMAL|DEC   { FIXED [ ( precision [,scale (2)] ) ]  |  FLOAT ( precision ) } (3)
      [ Alignment and/or Scope and/or Storage ] ;
Notes:
1 dimension must be an integer constant between 1 and 32767.
2 You can specify the scale for only DECIMAL FIXED.
3 You can specify host variable array attributes in any order that is acceptable
to PL/I. For example, BIN FIXED(31), BINARY FIXED(31), BIN(31) FIXED,
and FIXED BIN(31) are all acceptable.
The following diagram shows the syntax for declaring character host variable
arrays other than CLOBs.
   DECLARE|DCL   variable-name ( dimension )(1) | ( variable-name ( dimension )(1), ... )
      CHARACTER|CHAR ( length ) [ VARYING|VAR ]   [ Alignment and/or Scope and/or Storage ] ;
Notes:
1 dimension must be an integer constant between 1 and 32767.
The following diagram shows the syntax for declaring graphic host variable arrays
other than DBCLOBs.
   DECLARE|DCL   variable-name ( dimension )(1) | ( variable-name ( dimension )(1), ... )
      GRAPHIC ( length ) [ VARYING|VAR ]   [ Alignment and/or Scope and/or Storage ] ;
Notes:
1 dimension must be an integer constant between 1 and 32767.
| The following diagram shows the syntax for declaring binary variable arrays.
|    DECLARE|DCL   variable-name ( dimension ) | ( variable-name ( dimension ), ... )
|       SQL TYPE IS   BINARY ( length ) | VARBINARY ( length ) ;
| The following diagram shows the syntax for declaring BLOB, CLOB, and DBCLOB
| host variable, locator, and file reference variable arrays.
|    DCL|DECLARE   variable-name ( dimension )(1) | ( variable-name ( dimension )(1), ... )
|       SQL TYPE IS   BLOB|CLOB|DBCLOB ( length [K|M|G] )
|     | BLOB_LOCATOR | CLOB_LOCATOR | DBCLOB_LOCATOR
|     | BLOB_FILE | CLOB_FILE | DBCLOB_FILE ;
| Notes:
| 1 dimension must be an integer constant between 1 and 32767.
| The following diagram shows the syntax for declaring BLOB, CLOB, and DBCLOB
| host variable arrays and file reference variable arrays for XML data types.
|    DCL|DECLARE   variable-name ( dimension ) | ( variable-name ( dimension ), ... )
|       SQL TYPE IS XML AS   BLOB|CLOB|DBCLOB ( length [K|M|G] )
|     | BLOB_FILE | CLOB_FILE | DBCLOB_FILE ;
The following diagram shows the syntax for declaring ROWID variable arrays.
   DCL|DECLARE   variable-name ( dimension )(1) | ( variable-name ( dimension )(1), ... )
      SQL TYPE IS ROWID ;
Notes:
1 dimension must be an integer constant between 1 and 32767.
Related concepts
“Host variable arrays” on page 144
“Host variable arrays in an SQL statement” on page 159
“Large objects (LOBs)” on page 430
Related tasks
“Inserting multiple rows of data from host variable arrays” on page 160
“Retrieving multiple rows of data into host variable arrays” on page 160
Example:
DCL 1 A,
2 B CHAR,
2 (C, D) CHAR;
DCL (E, F) CHAR;
v You can specify host variable attributes in any order that is acceptable to PL/I.
For example, BIN FIXED(31), BIN(31) FIXED, and FIXED BIN(31) are all
acceptable.
When you reference a host variable, you can qualify it with a structure name. For
example, you can specify STRUCTURE.FIELD.
The following diagram shows the syntax for declaring host structures.
   DCL|DECLARE   1 structure-name ,
      2 var-1 | ( var-1, var-2, ... )   data-type ,
      ... ;
Data types
The following diagram shows the syntax for data types that are used within
declarations of host structures.
|    CHARACTER|CHAR [ ( integer ) ] [ VARYING|VAR ]
|  | GRAPHIC [ ( integer ) ] [ VARYING|VAR ]
|  | BINARY|BIN | DECIMAL|DEC   { FIXED [ ( precision [, scale] ) ]  |  FLOAT [ ( precision ) ] }
|  | SQL TYPE IS ROWID
LOB data type
The following diagram shows the syntax for LOB data types that are used within
declarations of host structures.
| The following diagram shows the syntax for LOB data types that are used within
| declarations of host structures for XML data.
| Example
In the following example, B is the name of a host structure that contains the scalars
C1 and C2.
DCL 1 A,
2 B,
3 C1 CHAR(...),
3 C2 CHAR(...);
Related concepts
“Host structures” on page 144
The following diagram shows the syntax for declaring an indicator variable in
PL/I.
   DECLARE|DCL   variable-name | ( variable-name, ... )   BINARY|BIN FIXED(15) ;  (1)
Notes:
1 You can specify host variable attributes in any order that is acceptable to
PL/I. For example, BIN FIXED(31), BIN(31) FIXED, and FIXED BIN(31) are all
acceptable.
The following diagram shows the syntax for declaring an indicator array in PL/I.
Notes:
1 dimension must be an integer constant between 1 and 32767.
Example
The following example shows a FETCH statement with the declarations of the host
variables that are needed for the FETCH statement and their associated indicator
variables.
EXEC SQL FETCH CLS_CURSOR INTO :CLS_CD,
:DAY :DAY_IND,
:BGN :BGN_IND,
:END :END_IND;
The following table describes the SQL data type and the base SQLTYPE and
SQLLEN values that the precompiler uses for host variables in SQL statements.
Table 71. SQL data types, SQLLEN values, and SQLTYPE values that the precompiler uses for host variables in PL/I
programs
PL/I host variable data type    SQLTYPE of host variable1    SQLLEN of host variable    SQL data type
BIN FIXED(n) 1<=n<=15 500 2 SMALLINT
BIN FIXED(n) 16<=n<=31 496 4 INTEGER
| FIXED BIN(63) 492 8 BIGINT
DEC FIXED(p,s) 0<=p<=31 and 0<=s<=p2   484   p in byte 1, s in byte 2   DECIMAL(p,s)
BIN FLOAT(p) 1<=p<=21 480 4 REAL or FLOAT(n) 1<=n<=21
BIN FLOAT(p) 22<=p<=53 480 8 DOUBLE PRECISION or
FLOAT(n) 22<=n<=53
DEC FLOAT(m) 1<=m<=6 480 4 FLOAT (single precision)
DEC FLOAT(m) 7<=m<=16 480 8 FLOAT (double precision)
The following table shows equivalent PL/I host variables for each SQL data type.
Use this table to determine the PL/I data type for host variables that you define to
receive output from the database. For example, if you retrieve TIMESTAMP data,
you can define a variable of type CHAR(n).
This table shows direct conversions between SQL data types and PL/I data types.
However, a number of SQL data types are compatible. When you do assignments
or comparisons of data that have compatible data types, DB2 converts those
compatible data types.
Table 72. PL/I host variable equivalents that you can use when retrieving data of a particular SQL data type
SQL data type PL/I host variable equivalent Notes
SMALLINT BIN FIXED(n) 1<=n<=15
INTEGER BIN FIXED(n) 16<=n<=31
| BIGINT FIXED BIN(63)
DECIMAL(p,s) or NUMERIC(p,s)   If p<16: DEC FIXED(p) or DEC FIXED(p,s)   p is precision; s is scale. 1<=p<=31 and 0<=s<=p
The first statement of the PL/I program must be the PROCEDURE statement with
OPTIONS(MAIN), unless the program is a stored procedure. A stored procedure
application can run as a subroutine.
Each SQL statement in a PL/I program must begin with EXEC SQL and end with
a semicolon (;). The EXEC and SQL keywords must appear on one
line, but the remainder of the statement can appear on subsequent lines.
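For instance, a minimal embedded statement with the required delimiters might
look like this (any SQL statement can appear between EXEC SQL and the
semicolon):

   EXEC SQL COMMIT;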
Continuation for SQL statements: The line continuation rules for SQL statements
are the same as those for other PL/I statements, except that you must specify
EXEC SQL on one line.
Declaring tables and views: Your PL/I program should include a DECLARE
TABLE statement to describe each table and view the program accesses. You can
use the DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE
statements. For more information, see “DCLGEN (declarations generator)” on page
129.
Including code: You can use SQL statements or PL/I host variable declarations
from a member of a partitioned data set by using the following SQL statement in
the source code where you want to include the statements:
EXEC SQL INCLUDE member-name;
You cannot nest SQL INCLUDE statements. Do not use the PL/I %INCLUDE
statement to include SQL statements or host variable DCL statements. You must
use the PL/I preprocessor to resolve any %INCLUDE statements before you use
the DB2 precompiler. Do not use PL/I preprocessor directives within SQL
statements.
Names: You can use any valid PL/I name for a host variable. Do not use external
entry names or access plan names that begin with ’DSN’, and do not use host
variable names that begin with ’SQL’. These names are reserved for DB2.
Sequence numbers: The source statements that the DB2 precompiler generates do
not include sequence numbers. IEL0378I messages from the PL/I compiler identify
lines of code without sequence numbers. You can ignore these messages.
Statement labels: You can specify a statement label for executable SQL statements.
However, the INCLUDE text-file-name and END DECLARE SECTION statements
cannot have statement labels.
WHENEVER statement: The target for the GOTO clause in an SQL WHENEVER
statement must be a label in the PL/I source code and must be within the
scope of any SQL statements that WHENEVER affects.
You can use the subroutine DSNTIAR to convert an SQL return code into a text
message. DSNTIAR takes data from the SQLCA, formats it into a message, and
places the result in a message output area that you provide in your application
program. For concepts and more information on the behavior of DSNTIAR, see
“Displaying SQLCA fields by calling DSNTIAR” on page 204.
You can also use the MESSAGE_TEXT condition item field of the GET
DIAGNOSTICS statement to convert an SQL return code into a text message.
Programs that require long token message support should code the GET
DIAGNOSTICS statement instead of DSNTIAR. For more information about GET
DIAGNOSTICS, see “Checking the execution of SQL statements by using the GET
DIAGNOSTICS statement” on page 209.
DSNTIAR syntax:
CALL DSNTIAR ( sqlca, msg, lrecl );
CICS: If your CICS application requires CICS storage handling, you must use the
subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
CALL DSNTIAC (eib, commarea, sqlca, msg, lrecl);
DSNTIAC has extra parameters, which you must use for calls to routines that use
CICS commands.
eib EXEC interface block
commarea
communication area
You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you
must also define them in the CSD. For an example of CSD entry generation
statements for use with DSNTIAC, see job DSNTEJ5A.
Delimit an SQL statement in your PL/I program with the beginning keyword EXEC
SQL and a semicolon (;).
The following figure contains the example PL/I program that calls the GETPRML
stored procedure.
/************************************************************/
/* Declare the parameters used to call the GETPRML */
/* stored procedure. */
/************************************************************/
DECLARE PROCNM CHAR(18), /* INPUT parm -- PROCEDURE name */
SCHEMA CHAR(8), /* INPUT parm -- User's schema */
OUT_CODE FIXED BIN(31),
/* OUTPUT -- SQLCODE from the */
/* SELECT operation. */
PARMLST CHAR(254) /* OUTPUT -- RUNOPTS for */
VARYING, /* the matching row in the */
/* catalog table SYSROUTINES */
PARMIND FIXED BIN(15);
/* PARMLST indicator variable */
/************************************************************/
/* Include the SQLCA */
/************************************************************/
EXEC SQL INCLUDE SQLCA;
/************************************************************/
/* Call the GETPRML stored procedure to retrieve the */
/* RUNOPTS values for the stored procedure. In this */
/* example, we request the RUNOPTS values for the */
/* stored procedure named DSN8EP2. */
/************************************************************/
PROCNM = 'DSN8EP2';
/* Input parameter -- PROCEDURE to be found */
SCHEMA = ' ';
/* Input parameter -- SCHEMA in SYSROUTINES */
PARMIND = -1; /* The PARMLST parameter is an output parm. */
/* Mark PARMLST parameter as null, so the DB2 */
/* requester doesn't have to send the entire */
/* PARMLST variable to the server. This */
/* helps reduce network I/O time, because */
/* PARMLST is fairly large. */
EXEC SQL
CALL GETPRML(:PROCNM,
:SCHEMA,
:OUT_CODE,
:PARMLST INDICATOR :PARMIND);
IF SQLCODE¬=0 THEN /* If SQL CALL failed, */
DO;
PUT SKIP EDIT('SQL CALL failed due to SQLCODE = ',
SQLCODE) (A(34),A(14));
PUT SKIP EDIT('SQLERRM = ',
SQLERRM) (A(10),A(70));
END;
ELSE /* If the CALL worked, */
IF OUT_CODE¬=0 THEN /* Did GETPRML hit an error? */
PUT SKIP EDIT('GETPRML failed due to RC = ',
OUT_CODE) (A(33),A(14));
ELSE /* Everything worked. */
PUT SKIP EDIT('RUNOPTS = ', PARMLST) (A(11),A(200));
RETURN;
END CALPRML;
The output parameters from this stored procedure contain the SQLCODE from the
SELECT operation, and the value of the RUNOPTS column retrieved from the
SYSIBM.SYSROUTINES table.
The CREATE PROCEDURE statement for this stored procedure might look like
this:
CREATE PROCEDURE GETPRML(IN PROCNM CHAR(18), IN SCHEMA CHAR(8),
OUT OUTCODE INTEGER, OUT PARMLST VARCHAR(254))
LANGUAGE PLI
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME "GETPRML"
COLLID GETPRML
ASUTIME NO LIMIT
PARAMETER STYLE GENERAL
STAY RESIDENT NO
RUN OPTIONS "MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)"
WLM ENVIRONMENT SAMPPROG
PROGRAM TYPE MAIN
SECURITY DB2
RESULT SETS 0
COMMIT ON RETURN NO;
GETPRML:
PROC(PROCNM, SCHEMA, OUT_CODE, PARMLST)
OPTIONS(MAIN NOEXECOPS REENTRANT);
/************************************************************/
/* Execute SELECT from SYSIBM.SYSROUTINES in the catalog. */
/************************************************************/
EXEC SQL
SELECT RUNOPTS INTO :PARMLST
FROM SYSIBM.SYSROUTINES
WHERE NAME=:PROCNM AND
SCHEMA=:SCHEMA;
The linkage convention for this stored procedure is GENERAL WITH NULLS.
The output parameters from this stored procedure contain the SQLCODE from the
SELECT operation, and the value of the RUNOPTS column retrieved from the
SYSIBM.SYSROUTINES table.
The CREATE PROCEDURE statement for this stored procedure might look like
this:
CREATE PROCEDURE GETPRML(IN PROCNM CHAR(18), IN SCHEMA CHAR(8),
OUT OUTCODE INTEGER, OUT PARMLST VARCHAR(254))
LANGUAGE PLI
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME "GETPRML"
COLLID GETPRML
ASUTIME NO LIMIT
PARAMETER STYLE GENERAL WITH NULLS
STAY RESIDENT NO
RUN OPTIONS "MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)"
WLM ENVIRONMENT SAMPPROG
PROGRAM TYPE MAIN
SECURITY DB2
RESULT SETS 0
COMMIT ON RETURN NO;
GETPRML:
PROC(PROCNM, SCHEMA, OUT_CODE, PARMLST, INDICATORS)
OPTIONS(MAIN NOEXECOPS REENTRANT);
IF PROCNM_IND<0 |
SCHEMA_IND<0 THEN
DO; /* If any input parm is NULL, */
OUT_CODE = 9999; /* Set output return code. */
OUT_CODE_IND = 0;
/* Output return code is not NULL.*/
PARMLST_IND = -1; /* Assign NULL value to PARMLST. */
END;
ELSE /* If input parms are not NULL, */
DO; /* */
/************************************************************/
/* Issue the SQL SELECT against the SYSIBM.SYSROUTINES */
/* DB2 catalog table. */
/************************************************************/
EXEC SQL
SELECT RUNOPTS INTO :PARMLST
FROM SYSIBM.SYSROUTINES
WHERE NAME=:PROCNM AND
SCHEMA=:SCHEMA;
PARMLST_IND = 0; /* Mark PARMLST as not NULL. */
END GETPRML;
The REXX SQLCA differs from the SQLCA for other languages. The REXX SQLCA
consists of a set of separate variables, rather than a structure. If you use the
ADDRESS DSNREXX 'CONNECT' ssid syntax to connect to DB2, the SQLCA variables
are a set of simple variables. If you use the CALL SQLDBS 'ATTACH TO' syntax to
connect to DB2, the SQLCA variables are compound variables that begin with the
stem SQLCA.
Related tasks
“Checking the execution of SQL statements” on page 202
“Checking the execution of SQL statements by using the SQLCA” on page 203
“Checking the execution of SQL statements by using SQLCODE and SQLSTATE”
on page 207
“Defining the items that your program can use to check whether an SQL statement
executed successfully” on page 141
Restrictions:
v You must place SQLDA declarations before the first SQL statement that
references the data descriptor, unless you use the TWOPASS SQL processing
option.
v You cannot use the SQL INCLUDE statement for the SQLDA, because it is not
supported in REXX.
When you assign input data to a DB2 table column, you can either let DB2
determine the type that your input data represents, or you can use an SQLDA to
tell DB2 the intended type of the input data.
When a REXX procedure assigns data to a column, it can either let DB2 determine
the data type or use an SQLDA to specify the intended data type. If the procedure
lets DB2 assign a data type for the input data, DB2 bases its choice on the input
string format.
The following table shows the SQL data types that DB2 assigns to input data and
the corresponding formats for that data. The two SQLTYPE values that are listed
for each data type are the value for a column that does not accept null values and
the value for a column that accepts null values.
Table 73. SQL input data types and REXX data formats
SQL data type assigned by DB2    SQLTYPE for data type    REXX input data format
INTEGER 496/497 A string of numerics that does not contain a decimal point or exponent
identifier. The first character can be a plus (+) or minus (-) sign. The
number that is represented must be between -2147483647 and 2147483647,
inclusive.
| BIGINT 492/493 A string of numbers that does not contain a decimal point or an exponent
| identifier. The first character can be a plus (+) or minus (-) sign. The
| number that is represented must be between -9223372036854775808 and
| -2147483648, inclusive, or between 2147483648 and 9223372036854775807.
DECIMAL(p,s) 484/485 One of the following formats:
v A string of numerics that contains a decimal point but no exponent
identifier. p represents the precision and s represents the scale of the
decimal number that the string represents. The first character can be a
plus (+) or minus (-) sign.
| v A string of numerics that does not contain a decimal point or an
| exponent identifier. The first character can be a plus (+) or minus (-)
| sign. The number that is represented is less than -9223372036854775808
| or greater than 9223372036854775807.
FLOAT 480/481 A string that represents a number in scientific notation. The string consists
of a series of numerics followed by an exponent identifier (an E or e
followed by an optional plus (+) or minus (-) sign and a series of
numerics). The string can begin with a plus (+) or minus (-) sign.
For example, when DB2 executes the following statements to update the MIDINIT
column of the EMP table, DB2 must determine a data type for HVMIDINIT:
SQLSTMT="UPDATE EMP" ,
"SET MIDINIT = ?" ,
"WHERE EMPNO = '000200'"
"EXECSQL PREPARE S100 FROM :SQLSTMT"
HVMIDINIT='H'
"EXECSQL EXECUTE S100 USING" ,
":HVMIDINIT"
Because the data that is assigned to HVMIDINIT has a format that fits a character
data type, DB2 REXX Language Support assigns a VARCHAR type to the input
data.
If you do not assign a value to a host variable before you assign the host variable
to a column, DB2 returns an error code.
Related concepts
“Compatibility of SQL and language data types” on page 148
| DB2 REXX Language Support supports all dynamic SQL statements and the
| following static SQL statements:
| v CALL
| v CLOSE
| v CONNECT
| v DECLARE CURSOR
| v DESCRIBE prepared statement or table
| v DESCRIBE CURSOR
Each SQL statement in a REXX procedure must begin with EXECSQL, in either
upper-, lower-, or mixed-case. One of the following items must follow EXECSQL:
v An SQL statement enclosed in single or double quotation marks.
v A REXX variable that contains an SQL statement. The REXX variable must not
be preceded by a colon.
For example, you can use either of the following methods to execute the COMMIT
statement in a REXX procedure:
EXECSQL "COMMIT"
rexxvar="COMMIT"
EXECSQL rexxvar
An SQL statement follows rules that apply to REXX commands. The SQL statement
can optionally end with a semicolon and can be enclosed in single or double
quotation marks, as in the following example:
'EXECSQL COMMIT;'
Comments: You cannot include REXX comments (/* ... */) or SQL comments (--)
within SQL statements. However, you can include REXX comments anywhere else
in the procedure.
Continuation for SQL statements: SQL statements that span lines follow
REXX rules for statement continuation. You can break the statement into several
strings, each of which fits on a line, and separate the strings with commas or with
concatenation operators followed by commas. For example, either of the following
statements is valid:
EXECSQL ,
"UPDATE DSN8910.DEPT" ,
"SET MGRNO = '000010'" ,
"WHERE DEPTNO = 'D11'"
"EXECSQL " || ,
" UPDATE DSN8910.DEPT " || ,
" SET MGRNO = '000010'" || ,
" WHERE DEPTNO = 'D11'"
Including code: The EXECSQL INCLUDE statement is not valid for REXX. You
therefore cannot include externally defined SQL statements in a procedure.
Margins: Like REXX commands, SQL statements can begin and end anywhere on a
line.
Names: You can use any valid REXX name that does not end with a period as a host
variable. However, host variable names should not begin with ’SQL’, ’RDI’, ’DSN’,
’RXSQL’, or ’QRW’. Variable names can be at most 64 bytes.
Nulls: A REXX null value and an SQL null value are different. The REXX language
has a null string (a string of length 0) and a null clause (a clause that contains only
blanks and comments). The SQL null value is a special value that is distinct from
all nonnull values and denotes the absence of a value. Assigning a REXX null
value to a DB2 column does not make the column value null.
Statement labels: You can precede an SQL statement with a label, in the same way
that you label REXX commands.
Handling errors and warnings: DB2 does not support the SQL WHENEVER
statement in a REXX procedure. To handle SQL errors and warnings, use the
following methods:
v To test for SQL errors or warnings, test the SQLCODE or SQLSTATE value and
the SQLWARN. values after each EXECSQL call. This method does not detect
errors in the REXX interface to DB2.
v To test for SQL errors or warnings or errors or warnings from the REXX
interface to DB2, test the REXX RC variable after each EXECSQL call. The
following table lists the values of the RC variable.
You can also use the REXX SIGNAL ON ERROR and SIGNAL ON FAILURE
keyword instructions to detect negative values of the RC variable and transfer
control to an error routine.
Table 74. REXX return codes after SQL statements
Return code Meaning
0 No SQL warning or error occurred.
Related concepts
“Dynamic SQL” on page 162
“Rules for host variables in an SQL statement” on page 151
“Possible host languages for dynamic SQL applications” on page 166
Related tasks
“Determining whether a column value is null” on page 156
“Determining whether a retrieved value in a host variable is null or truncated” on
page 154
“Dynamically executing a data change statement” on page 191
“Dynamically executing an SQL statement by using EXECUTE IMMEDIATE” on
page 187
“Dynamically executing an SQL statement by using PREPARE and EXECUTE” on
page 189
“Enabling the dynamic statement cache” on page 195
“Handling SQL error codes” on page 215
“Including dynamic SQL for fixed-list SELECT statements in your program” on
page 167
“Including dynamic SQL for non-SELECT statements in your program” on page
166
“Including dynamic SQL for varying-list SELECT statements in your program” on
page 169
“Inserting a single row by using a host variable” on page 158
“Inserting null values into columns by using indicator variables or arrays” on page
158
“Limiting CPU time for dynamic SQL statements by using the resource limit
facility” on page 200
“Retrieving a single row of data into host variables” on page 152
“Updating data by using host variables” on page 157
Delimit an SQL statement in your REXX program by preceding the statement with
EXECSQL. If the statement is in a literal string, enclose it in single or double
quotation marks.
   [ADDRESS DSNREXX]   'CONNECT' 'subsystem-ID'  |  REXX-variable
Note: CALL SQLDBS ’ATTACH TO’ ssid is equivalent to ADDRESS DSNREXX ’CONNECT’ ssid.
EXECSQL
Executes SQL statements in REXX procedures. The syntax of EXECSQL is:
   [ADDRESS DSNREXX]   EXECSQL "SQL-statement"  |  REXX-variable
Notes:
1. CALL SQLEXEC is equivalent to EXECSQL.
2. EXECSQL can be enclosed in single or double quotation marks.
DISCONNECT
Disconnects the REXX procedure from a DB2 subsystem. You should execute
DISCONNECT to release resources that are held by DB2. The syntax of
DISCONNECT is:
   [ADDRESS DSNREXX]   'DISCONNECT'
The ADD function adds DSNREXX to the REXX host command environment table.
The DELETE function deletes DSNREXX from the REXX host command
environment table.
The following figure shows an example of REXX code that makes DSNREXX
available to an application.
'SUBCOM DSNREXX' /* HOST CMD ENV AVAILABLE? */
IF RC THEN /* IF NOT, MAKE IT AVAILABLE */
S_RC = RXSUBCOM('ADD','DSNREXX','DSNREXX')
/* ADD HOST CMD ENVIRONMENT */
ADDRESS DSNREXX /* SEND ALL COMMANDS OTHER */
/* THAN REXX INSTRUCTIONS TO */
/* DSNREXX */
/* CALL CONNECT, EXECSQL, AND */
. /* DISCONNECT INTERFACES */
.
.
S_RC = RXSUBCOM('DELETE','DSNREXX','DSNREXX')
/* WHEN DONE WITH */
/* DSNREXX, REMOVE IT. */
To ensure that DB2 correctly interprets character input data in REXX programs:
Precede and follow character literals with a double quotation mark, followed by a
single quotation mark, followed by another double quotation mark ("'").
For example, suppose that you write the command stringvar = '100'. REXX
removes the single quotation marks when it evaluates the literal, so after the
command executes, stringvar contains the characters 100 (without the
apostrophes). DB2 REXX Language Support then passes the numeric value 100 to
DB2, which is not what you intended.
However, suppose that you write the following command:
stringvar = "'"100"'"
In this case, REXX assigns the string ’100’ to stringvar, including the single
quotation marks. DB2 REXX Language Support then passes the string ’100’ to DB2,
which is the desired result.
DB2 does not assign data types of SMALLINT, CHAR, or GRAPHIC to input data.
If you assign or compare this data to columns of type SMALLINT, CHAR, or
GRAPHIC, DB2 must do more work than if the data types of the input data and
columns match.
To pass the data type of an input data type to DB2 for REXX programs:
Use an SQLDA.
Examples
Example of specifying CHAR as an input data type: Suppose that you want to
tell DB2 that the data with which you update the MIDINIT column of the EMP
table is of type CHAR, rather than VARCHAR. You need to set up an SQLDA that
contains a description of a CHAR column, and then prepare and execute the
UPDATE statement using that SQLDA, as shown in the following example.
INSQLDA.SQLD = 1 /* SQLDA contains one variable */
INSQLDA.1.SQLTYPE = 453 /* Type of the variable is CHAR, */
/* and the value can be null */
INSQLDA.1.SQLLEN = 1 /* Length of the variable is 1 */
INSQLDA.1.SQLDATA = 'H' /* Value in variable is H */
INSQLDA.1.SQLIND = 0 /* Input variable is not null */
SQLSTMT="UPDATE EMP" ,
"SET MIDINIT = ?" ,
"WHERE EMPNO = '000200'"
"EXECSQL PREPARE S100 FROM :SQLSTMT"
"EXECSQL EXECUTE S100 USING DESCRIPTOR :INSQLDA"
Example of specifying the input data type as DECIMAL with precision and
scale: Suppose that you want to tell DB2 that the data is of type DECIMAL with
precision and nonzero scale. You need to set up an SQLDA that contains a
description of a DECIMAL column, as shown in the following example.
INSQLDA.SQLD = 1 /* SQLDA contains one variable */
INSQLDA.1.SQLTYPE = 484 /* Type of variable is DECIMAL */
INSQLDA.1.SQLLEN.SQLPRECISION = 18 /* Precision of variable is 18 */
INSQLDA.1.SQLLEN.SQLSCALE = 8 /* Scale of variable is 8 */
INSQLDA.1.SQLDATA = 9876543210.87654321 /* Value in variable */
Execute the SET CURRENT PACKAGESET statement to select one of the following
DB2 REXX Language Support packages with the isolation level that you need.
Note:
1. These packages enable your procedure to access DB2 and are bound when you
install DB2 REXX Language Support.
For example, to change the isolation level to cursor stability, execute the following
SQL statement:
"EXECSQL SET CURRENT PACKAGESET='DSNREXCS'"
The following table gives the format for each type of output data.
Table 76. SQL output data types and REXX data formats
SQL data type REXX output data format
| SMALLINT, INTEGER, BIGINT A string of numerics that does not contain leading zeroes, a decimal point, or
| an exponent identifier. If the string represents a negative number, it begins
| with a minus (-) sign. The numeric value is between -9223372036854775808 and
| 9223372036854775807, inclusive.
DECIMAL(p,s) A string of numerics with one of the following formats:
v Contains a decimal point but not an exponent identifier. The string is
padded with zeroes to match the scale of the corresponding table column. If
the value represents a negative number, it begins with a minus (-) sign.
v Does not contain a decimal point or an exponent identifier. The numeric
value is less than -2147483647 or greater than 2147483647. If the value is
negative, it begins with a minus (-) sign.
FLOAT(n), REAL, DOUBLE A string that represents a number in scientific notation. The string consists of a
numeric, a decimal point, a series of numerics, and an exponent identifier. The
exponent identifier is an E followed by a minus (-) sign and a series of
numerics if the number is between -1 and 1. Otherwise, the exponent identifier
is an E followed by a series of numerics. If the string represents a negative
number, it begins with a minus (-) sign.
| DECFLOAT REXX emulates the DECFLOAT data type with DOUBLE, so support for
| DECFLOAT is limited to the REXX support for DOUBLE. The following special
| values are not supported:
| v INFINITY
| v SNAN
| v NAN
CHAR(n), VARCHAR(n) A character string of length n bytes. The string is not enclosed in single or
double quotation marks.
GRAPHIC(n), VARGRAPHIC(n) A string of length 2*n bytes. Each pair of bytes represents a double-byte
character. This string does not contain a leading G, is not enclosed in quotation
marks, and does not contain shift-out or shift-in characters.
The following names are valid for cursors and prepared statements in REXX
applications:
c1 to c100
Cursor names for DECLARE CURSOR, OPEN, CLOSE, and FETCH statements.
By default, c1 to c100 are defined with the WITH RETURN clause, and c51 to
c100 are defined with the WITH HOLD clause. You can use the ATTRIBUTES
clause of the PREPARE statement to override these attributes or add additional
attributes. For example, you might want to add attributes to make your cursor
scrollable; a sketch appears after this list.
c101 to c200
Cursor names for ALLOCATE, DESCRIBE, FETCH, and CLOSE statements that
are used to retrieve result sets in a program that calls a stored procedure.
s1 to s100
Prepared statement names for DECLARE STATEMENT, PREPARE, DESCRIBE,
and EXECUTE statements.
Use only the predefined names for cursors and statements. When you associate a
cursor name with a statement name in a DECLARE CURSOR statement, the cursor
name and the statement must have the same number. For example, if you declare
cursor c1, you need to declare it for statement s1:
EXECSQL 'DECLARE C1 CURSOR FOR S1'
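For instance, a sketch of giving cursor c1 scrollability might use the
ATTRIBUTES clause as follows; ATTRVAR and SQLSTMT are hypothetical REXX
variables, ATTRVAR containing an attribute string such as 'INSENSITIVE SCROLL',
and each statement would be passed to DB2 as an EXECSQL string as described
earlier:

   DECLARE C1 CURSOR FOR S1
   PREPARE S1 ATTRIBUTES :ATTRVAR FROM :SQLSTMT
   OPEN C1

Because the attributes are supplied at PREPARE time, the predefined cursor name
c1 keeps its required pairing with statement name s1.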
The following example shows a complete DB2 REXX application named DRAW.
DRAW must be invoked from the command line of an ISPF edit session. DRAW
takes a table or view name as input and produces a SELECT, INSERT, or UPDATE
SQL statement or a LOAD utility control statement that includes the columns of
the table as output.
DRAW syntax:
   %DRAW object-name ( [SSID=ssid] [TYPE= SELECT | INSERT | UPDATE | LOAD]
DRAW parameters:
object-name
The name of the table or view for which DRAW builds an SQL statement or
utility control statement. The name can be a one-, two-, or three-part name.
The table or view to which object-name refers must exist before DRAW can run.
object-name is a required parameter.
SSID=ssid
Specifies the name of the local DB2 subsystem.
S can be used as an abbreviation for SSID.
If you invoke DRAW from the command line of the edit session in SPUFI,
SSID=ssid is an optional parameter. DRAW uses the subsystem ID from the
DB2I Defaults panel.
TYPE=operation-type
The type of statement that DRAW builds.
T can be used as an abbreviation for TYPE.
operation-type has one of the following values: SELECT, INSERT, UPDATE, or LOAD.
DRAW examples:
v Generate a SELECT statement for table DSN8910.EMP at the local subsystem. Use
the default DB2I subsystem ID.
v Generate a LOAD control statement to load values into table DSN8910.EMP. The
local subsystem ID is DSN.
For example, the following statements set a value in the WORKDEPT column in
table EMP to null:
SQLSTMT="UPDATE EMP" ,
"SET WORKDEPT = ?"
HVWORKDEPT='000'
INDWORKDEPT=-1
"EXECSQL PREPARE S100 FROM :SQLSTMT"
"EXECSQL EXECUTE S100 USING :HVWORKDEPT :INDWORKDEPT"
In the following program, the phone number for employee Haas is selected into
variable HVPhone. After the SELECT statement executes, if no phone number for
employee Haas is found, indicator variable INDPhone contains -1.
'SUBCOM DSNREXX'
IF RC THEN ,
S_RC = RXSUBCOM('ADD','DSNREXX','DSNREXX')
ADDRESS DSNREXX
'CONNECT' 'DSN'
SQLSTMT = ,
"SELECT PHONENO FROM DSN8910.EMP WHERE LASTNAME='HAAS'"
"EXECSQL DECLARE C1 CURSOR FOR S1"
"EXECSQL PREPARE S1 FROM :SQLSTMT"
Say "SQLCODE from PREPARE is "SQLCODE
"EXECSQL OPEN C1"
Say "SQLCODE from OPEN is "SQLCODE
"EXECSQL FETCH C1 INTO :HVPhone :INDPhone"
Say "SQLCODE from FETCH is "SQLCODE
If INDPhone < 0 Then ,
Say 'Phone number for Haas is null.'
"EXECSQL CLOSE C1"
Say "SQLCODE from CLOSE is "SQLCODE
S_RC = RXSUBCOM('DELETE','DSNREXX','DSNREXX')
Creating tables
Creating a table provides a logical place to store related data on a DB2
subsystem.
To create a table, use a CREATE TABLE statement that includes the following
elements:
v The name of the table
v A list of the columns that make up the table. For each column, specify the
following information:
– The column’s name (for example, SERIAL).
– The data type and length attribute (for example, CHAR(8)).
– Optionally, a default value.
– Optionally, a referential constraint or check constraint.
Separate each column description from the next with a comma, and enclose the
entire list of column descriptions in parentheses. A sketch of a complete
CREATE TABLE statement appears after the list of column options below.
For more information about check constraints, see “Check constraints” on page
434.
If you want to constrain the input or identify the default of a column, you can use
the following values:
v NOT NULL, when the column cannot contain null values.
v UNIQUE, when the value for each row must be unique, and the column cannot
contain null values.
v DEFAULT, when the column has one of the following DB2-assigned defaults:
– For numeric columns, 0 (zero) is the default value.
– For character or graphic fixed-length strings, blank is the default value.
– For binary fixed-length strings, a set of hexadecimal zeros is the default value.
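The following minimal sketch combines these elements; except for the SERIAL
column mentioned above, the table name, the other columns, and the check
constraint are hypothetical:

   CREATE TABLE PARTS
     (SERIAL   CHAR(8)     NOT NULL,
      DESCRIPT VARCHAR(40) NOT NULL,
      QTY      SMALLINT    NOT NULL WITH DEFAULT,
      SHIPPED  DATE,
      CHECK (QTY >= 0));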
Data types
When you create a DB2 table, you define each column to have a specific data type.
The data type of a column determines what you can and cannot do with the
column.
When you perform operations on columns, the data must be compatible with the
data type of the referenced column. For example, you cannot insert character data,
such as a last name, into a column whose data type is numeric. Similarly, you
cannot compare columns that contain incompatible data types.
The data type for a column can be a distinct type, which is a user-defined data
type, or a DB2 built-in data type. As shown in the following figure, DB2 built-in
data types have four general categories: datetime, string, numeric, and row
identifier (ROWID).
The following table shows whether operands of any two data types are compatible,
Y (Yes), or incompatible, N (No). Notes are indicated either as a superscript
number next to Y or N or as a value in the column of the table.
Table 77. Supported casts between built-in data types
The table is a matrix in which the rows list the source data types (“Cast from data
type”) and the columns list the target data types (“To data type”): SMALLINT,
INTEGER, BIGINT, DECIMAL, DECFLOAT, REAL, DOUBLE, CHAR, VARCHAR, CLOB,
GRAPHIC, VARGRAPHIC, DBCLOB, BINARY, VARBINARY, BLOB, DATE, TIME,
TIMESTAMP, ROWID, and XML. Each entry contains Y or N to indicate whether a
value of the row data type can be cast to the column data type.
Note:
1. Other synonyms for the listed data types are considered to be the same as the synonym listed. Some exceptions
exist when the cast involves character string data if the subtype is FOR BIT DATA.
2. The result length for these casts is 3 * LENGTH(graphic string).
3. These data types are castable between each other only if the data is Unicode.
Related concepts
“Distinct types” on page 477
Data types (SQL Reference)
Example: Adding a CLOB column: Suppose that you want to add a resume for
each employee to the employee table. The employee resumes are no more than 5
MB in size. Because the employee resumes contain single-byte characters, you can
define the resumes to DB2 as CLOBs. You therefore need to add a column of data
type CLOB with a length of 5 MB to the employee table. If you want to define a
ROWID column explicitly, you must define it before you define the CLOB column.
First, execute an ALTER TABLE statement to add the ROWID column, and then
execute another ALTER TABLE statement to add the CLOB column. The following
statements create these columns:
ALTER TABLE EMP
ADD ROW_ID ROWID NOT NULL GENERATED ALWAYS;
COMMIT;
ALTER TABLE EMP
ADD EMP_RESUME CLOB(5M);
COMMIT;
| If you explicitly created the table space for this table and the CURRENT RULES
| special register is not set to STD, you then need to define a LOB table space and an
| auxiliary table to hold the employee resumes. You also need to define an index on
| the auxiliary table. You must define the LOB table space in the same database as
| the associated base table. The following statements create these objects:
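A sketch of these statements follows; the database name (MYDB) and the table
space, auxiliary table, and index names are illustrative only:
CREATE LOB TABLESPACE RESUMETS
  IN MYDB;
CREATE AUXILIARY TABLE EMP_RESUME_TAB
  IN MYDB.RESUMETS
  STORES EMP
  COLUMN EMP_RESUME;
CREATE UNIQUE INDEX XEMP_RESUME
  ON EMP_RESUME_TAB;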
| You can then load your employee resumes into DB2. In your application, you can
define a host variable to hold the resume, copy the resume data from a file into the
host variable, and then execute an UPDATE statement to copy the data into DB2.
Although the LOB data is stored in the auxiliary table, your UPDATE statement
specifies the name of the base table. The following code declares a host variable to
store the resume in the C language:
SQL TYPE IS CLOB(5M) resumedata;
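The UPDATE statement itself might look like the following sketch:
EXEC SQL
  UPDATE EMP
  SET EMP_RESUME = :resumedata
  WHERE EMPNO = :employeenum;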
In this statement, employeenum is a host variable that identifies the employee who
is associated with a resume.
You can use DB2 to store LOB data, but this data is stored differently than other
kinds of data.
Although a table can have a LOB column, the actual LOB data is stored in
another table, which is called the auxiliary table. This auxiliary table exists in a
separate table space called a LOB table space. One auxiliary table must exist for
each LOB column. The table with the LOB column is called the base table. The
base table has a ROWID column that DB2 uses to locate the data in the auxiliary
table. The auxiliary table must have exactly one index.
For more information about when DB2 creates implicitly hidden ROWID columns,
see the following topics in DB2 SQL Reference:
v “CREATE TABLE”
v “ALTER TABLE”
v “ALTER VIEW”
For more information about selecting hidden columns, see the topic “select-clause”
in DB2 SQL Reference.
Identity columns
An identity column contains a unique numeric value for each row in the table.
DB2 can automatically generate sequential numeric values for this column as rows
are inserted into the table. Thus, identity columns are ideal for primary key values,
such as employee numbers or product numbers.
If you define a column with the AS IDENTITY attribute, and with the
GENERATED ALWAYS and NO CYCLE attributes, DB2 automatically generates a
monotonically increasing or decreasing sequential number for the value of that
column when a new row is inserted into the table. However, for DB2 to guarantee
that the values of the identity column are unique, you should define a unique
index on that column.
You can use identity columns for primary keys that are typically unique sequential
numbers, for example, order numbers or employee numbers. By doing so, you can
avoid the concurrency problems that can result when an application generates its
own unique counter outside the database.
Recommendation: Set the values of the foreign keys in the dependent tables after
loading the parent table. If you use an identity column as a parent key in a
referential integrity structure, loading data into that structure could be quite
complicated. The values for the identity column are not known until the table is
loaded because the column is defined as GENERATED ALWAYS.
You might have gaps in identity column values for the following reasons:
v If other applications are inserting values into the same identity column
v If DB2 terminates abnormally before it assigns all the cached values
v If your application rolls back a transaction that inserts identity values
The values that DB2 generates for an identity column depend on how the column
is defined. The START WITH option determines the first value that DB2 generates.
The values advance by the INCREMENT BY value in ascending or descending
order.
The MINVALUE and MAXVALUE options determine the minimum and maximum
values that DB2 generates. The CYCLE or NO CYCLE option determines whether
DB2 wraps values when it has generated all values between the START WITH
value and MAXVALUE if the values are ascending, or between the START WITH
value and MINVALUE if the values are descending.
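For example, the behavior that is described next assumes that table T1 is defined
with an identity column similar to the following sketch:
CREATE TABLE T1
  (CHARCOL1  CHAR(1),
   IDENTCOL1 SMALLINT GENERATED ALWAYS AS IDENTITY
     (START WITH -1, INCREMENT BY 1,
      MINVALUE -3, MAXVALUE 3, CYCLE));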
Now suppose that you execute the following INSERT statement eight times:
INSERT INTO T1 (CHARCOL1) VALUES ('A');
When DB2 generates values for IDENTCOL1, it starts with -1 and increments by 1
until it reaches the MAXVALUE of 3 on the fifth INSERT. To generate the value for
the sixth INSERT, DB2 cycles back to MINVALUE, which is -3. T1 looks like this
after the eight INSERTs are executed:
CHARCOL1 IDENTCOL1
======== =========
A -1
A 0
A 1
A 2
A 3
A -3
A -2
A -1
The value of IDENTCOL1 for the eighth INSERT repeats the value of IDENTCOL1
for the first INSERT.
The SELECT from INSERT statement enables you to insert a row into a parent
table with its primary key defined as a DB2-generated identity column, and
retrieve the value of the primary or parent key. You can then use this generated
value as a foreign key in a dependent table. For information about the SELECT
from INSERT statement, see “Selecting values while inserting data” on page 613.
Example: Using SELECT from INSERT: Suppose that an EMPLOYEE table and a
DEPARTMENT table are defined in the following way:
CREATE TABLE EMPLOYEE
(EMPNO INTEGER GENERATED ALWAYS AS IDENTITY
PRIMARY KEY NOT NULL,
NAME CHAR(30) NOT NULL,
SALARY DECIMAL(7,2) NOT NULL,
WORKDEPT SMALLINT);
When you insert a new employee into the EMPLOYEE table, to retrieve the value
for the EMPNO column, you can use the following SELECT from INSERT
statement:
EXEC SQL
SELECT EMPNO INTO :hv_empno
FROM FINAL TABLE (INSERT INTO EMPLOYEE (NAME, SALARY, WORKDEPT)
VALUES ('New Employee', 75000.00, 11));
The SELECT statement returns the DB2-generated identity value for the EMPNO
column in the host variable :hv_empno.
You can then use the value in :hv_empno to update the MGRNO column in the
DEPARTMENT table with the new employee as the department manager:
EXEC SQL
UPDATE DEPARTMENT
SET MGRNO = :hv_empno
WHERE DEPTNO = 11;
Constraints are rules that limit the values that you can insert, delete, or update in a
table. There are two types of constraints:
v Check constraints determine the values that a column can contain. Check
constraints are discussed in “Check constraints” on page 434.
v Referential constraints preserve relationships between tables by ensuring that
every foreign key value exists as a parent key value. Referential constraints are
discussed in “Referential constraints.”
Triggers are a series of actions that are invoked when a table is updated. Triggers
are discussed in “Creating triggers” on page 457.
Check constraints:
A check constraint is a rule that specifies the values that are allowed in one or more
columns of every row of a base table. For example, you can define a check
constraint to ensure that all values in a column that contains ages are positive
numbers.
Check constraints designate the values that specific columns of a base table can
contain, providing you a method of controlling the integrity of data entered into
tables. You can create tables with check constraints using the CREATE TABLE
statement, or you can add the constraints with the ALTER TABLE statement.
However, if the check integrity is compromised or cannot be guaranteed for a
table, the table space or partition that contains the table is placed in a check
pending state. Check integrity is the condition that exists when each row of a table
conforms to the check constraints defined on that table.
For example, you might want to make sure that no salary can be below 15000
dollars. To do this, you can create the following check constraint:
CREATE TABLE EMPSAL
(ID INTEGER NOT NULL,
SALARY INTEGER CHECK (SALARY >= 15000));
Using check constraints makes your programming task easier, because you do not
need to enforce those constraints within application programs or with a validation
routine. Define check constraints on one or more columns in a table when that
table is created or altered.
The syntax of a check constraint is checked when the constraint is defined, but the
meaning of the constraint is not checked. The following examples show mistakes
that are not caught. Column C1 is defined as INTEGER NOT NULL.
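For instance, check constraints like the following ones are accepted even though
they can never be satisfied; both examples are illustrative sketches:
CHECK (C1 IS NULL)          -- contradicts the NOT NULL attribute of C1
CHECK (C1 > 10 AND C1 < 5)  -- no value can satisfy both conditions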
After check constraints are defined on a table, any change must satisfy those
constraints if it is made by:
v The LOAD utility with the option ENFORCE CONSTRAINTS
| v An SQL insert operation
| v An SQL update operation
A row satisfies a check constraint if its condition evaluates either to true or to
unknown. A condition can evaluate to unknown for a row if one of the named
columns contains the null value for that row.
Any constraint defined on columns of a base table applies to the views defined on
that base table.
When you use ALTER TABLE to add a check constraint to already populated
tables, the enforcement of the check constraint is determined by the value of the
CURRENT RULES special register as follows:
v If the value is STD, the check constraint is enforced immediately when it is
defined. If a row does not conform, the check constraint is not added to the
table and an error occurs.
v If the value is DB2, the check constraint is added to the table description but its
enforcement is deferred. Because there might be rows in the table that violate
the check constraint, the table is placed in CHECK-pending status.
CHECK-pending status:
Referential constraints:
A table can serve as the “master list” of all occurrences of an entity. In the sample
application, the employee table serves that purpose for employees; the numbers
that appear in that table are the only valid employee numbers. Likewise, the
department table provides a master list of all valid department numbers; the
project activity table provides a master list of activities performed for projects; and
so on.
The following figure shows the relationships that exist among the tables in the
sample application. Arrows point from parent tables to dependent tables.
(The figure shows the DEPT, EMP, ACT, PROJ, PROJACT, and EMPPROJACT tables,
connected by arrows that are labeled with the delete rules CASCADE, SET NULL,
and RESTRICT.)
When a table refers to an entity for which there is a master list, it should identify
an occurrence of the entity that actually appears in the master list; otherwise, either
the reference is invalid or the master list is incomplete. Referential constraints
enforce the relationship between a table and a master list.
A cycle is a set of two or more tables that are ordered so that each is a dependent
of the one before it, and the first is a dependent of the last. Every table in the cycle
is a descendent of itself. DB2 restricts certain operations on cycles.
In the sample application, the employee and department tables are a cycle; each is
a dependent of the other.
(The figure shows an example of a valid cycle of tables and an example of an
invalid cycle, each starting from TABLE1.)
Recommendation: Avoid creating a cycle in which all the delete rules are
RESTRICT and none of the foreign keys allows nulls. If you do this, no row of any
of the tables can ever be deleted.
You cannot use referential constraints on a security label column, which is used for
multilevel security with row-level granularity. However, you can use referential
constraints on other columns in the row.
DB2 does not enforce multilevel security with row-level granularity when it is
already enforcing referential constraints. Referential constraints are enforced when
the following situations occur:
v An insert operation is applied to a dependent table.
v An update operation is applied to a foreign key of a dependent table, or to the
parent key of a parent table.
v A delete operation is applied to a parent table. In addition to all referential
constraints being enforced, the DB2 system enforces all delete rules for all
dependent rows that are affected by the delete operation. If all referential
constraints and delete rules are not satisfied, the delete operation will not
succeed.
v The LOAD utility with the ENFORCE CONSTRAINTS option is run on a
dependent table.
v The CHECK DATA utility is run.
For more information about multilevel security with row-level granularity, see the
topic “Multilevel security” in DB2 Administration Guide.
DB2 ignores informational referential constraints during insert, update, and delete
operations. Some utilities ignore these constraints; other utilities recognize them.
For example, CHECK DATA and LOAD ignore these constraints. QUIESCE
TABLESPACESET recognizes these constraints by quiescing all table spaces related
to the specified table space.
You should use this type of referential constraint only when an application process
verifies the data in a referential integrity relationship. For example, when inserting
a row in a dependent table, the application should verify that a foreign key exists
as a primary or unique key in the parent table. To define an informational
referential constraint, use the NOT ENFORCED option of the referential constraint
definition in a CREATE TABLE or ALTER TABLE statement. For more information
about the NOT ENFORCED option, see the topic “CREATE TABLE” in DB2 SQL
Reference.
The primary key of a table, if one exists, uniquely identifies each occurrence of an
entity in the table. The PRIMARY KEY clause of the CREATE TABLE or ALTER
TABLE statements identifies the column or columns of the primary key. Each
identified column must be defined as NOT NULL.
| Another way to allow only unique values in a column is to specify the UNIQUE
| clause when you create or alter a table. For more information about the UNIQUE
| clause, see the topics “CREATE TABLE” and “ALTER TABLE” in DB2 SQL
| Reference.
A table can have no more than one primary key. A primary key has the same
restrictions as index keys:
v The key can include no more than 64 columns.
v You cannot specify a column name twice.
v The sum of the column length attributes cannot be greater than 2000.
You define a list of columns as the primary key of a table with the PRIMARY KEY
clause in the CREATE TABLE statement.
To add a primary key to an existing table, use the PRIMARY KEY clause in an
ALTER TABLE statement. In this case, a unique index must already exist.
Consider the following items when you plan for primary keys:
v The theoretical model of a relational database suggests that every table should
have a primary key to uniquely identify the entities it describes. However, you
must weigh that model against the potential cost of index maintenance
overhead. DB2 does not require you to define a primary key for tables with no
dependents.
v Choose a primary key whose values will not change over time. Choosing a
primary key with persistent values enforces the good practice of having unique
identifiers that remain the same for the lifetime of the entity occurrence.
v A primary key column should not have default values unless the primary key is
a single TIMESTAMP column.
v Choose the minimum number of columns to ensure uniqueness of the primary
key.
v An updatable view that is defined on a table with a primary key should include
all columns of the key. Although this is necessary only if the view is
used for inserts, the unique identification of rows can be useful if the view is
used for updates, deletes, or selects.
v You can drop a primary key later, by using SQL, if your database or application
changes and the key is no longer appropriate.
A parent key is either a primary key or a unique key in the parent table of a
referential constraint. This key consists of a column or set of columns. The values
of a parent key determine the valid values of the foreign key in the constraint.
If every row in a table represents relationships for a unique entity, the table should
have one column or a set of columns that provides a unique identifier for the rows
of the table. This column (or set of columns) is called the parent key of the table. To
ensure that the parent key does not contain duplicate values, you must create a
unique index on the column or columns that constitute the parent key. Defining
the parent key is called entity integrity, because it requires each entity to have a
unique key.
In some cases, using a timestamp as part of the key can be helpful, for example
when a table does not have a “natural” unique key or if arrival sequence is the
key.
Table 78 shows part of the project table which has the primary key column,
PROJNO.
Table 78. Part of the project table with the primary key column, PROJNO
PROJNO PROJNAME DEPTNO
Table 79 shows part of the project activity table, which has a primary key that
contains more than one column. The primary key is a composite key, which consists
of the PROJNO, ACTNO, and ACSTDATE columns.
Table 79. Part of the Project activities table with a composite primary key
PROJNO ACTNO ACSTAFF ACSTDATE ACENDATE
You define a list of columns as a foreign key of a table with the FOREIGN KEY
clause in the CREATE TABLE statement.
A foreign key can refer to either a unique or a primary key of the parent table. If
the foreign key refers to a non-primary unique key, you must specify the column
names of the key explicitly. If the column names of the key are not specified
explicitly, the default is to refer to the column names of the primary key of the
parent table.
The column names you specify identify the columns of the parent key. The
privilege set must include the ALTER or the REFERENCES privilege on the
columns of the parent key. A unique index must exist on the parent key columns of
the parent table.
The relationship name: You can choose a constraint name for the relationship that
is defined by a foreign key. If you do not choose a name, DB2 generates one from
the name of the first column of the foreign key, in the same way that it generates
the name of an implicitly created table space.
The name is used in error messages, queries to the catalog, and DROP FOREIGN
KEY statements. Hence, you might want to choose one if you are experimenting
with your database design and have more than one foreign key beginning with the
same column (otherwise DB2 generates the name).
You can create an index on the columns of a foreign key in the same way you
create one on any other set of columns. Most often it is not a unique index. If you
do create a unique index on a foreign key, it introduces an additional constraint on
the values of the columns.
To let an index on the foreign key be used on the dependent table for a delete
operation on a parent table, the columns of the index on the foreign key must be
identical to and in the same order as the columns in the foreign key.
A foreign key can also be the primary key; then the primary index is also a unique
index on the foreign key. In that case, every row of the parent table has at most
one dependent row. The dependent table might be used to hold information that
pertains to only a few of the occurrences of the entity described by the parent
table. For example, a dependent of the employee table might contain information
that applies only to employees working in a different country.
The primary key can share columns of the foreign key if the first n columns of the
foreign key are the same as the primary key’s columns. Again, the primary index
serves as an index on the foreign key. In the sample project activity table, the
primary index (on PROJNO, ACTNO, ACSTDATE) serves as an index on the
foreign key on PROJNO. It does not serve as an index on the foreign key on
ACTNO, because ACTNO is not the first column of the index.
You can add a foreign key to an existing table; in fact, that is sometimes the only
way to proceed. To make a table self-referencing, you must add a foreign key after
creating it.
The encrypted value should be extracted from the parent table (the primary key)
and used for the dependent table (the foreign key). You can do this in one of the
following two ways:
| v Use the FINAL TABLE clause on a SELECT from UPDATE, SELECT from
| INSERT, or SELECT from MERGE statement.
v Use the ENCRYPT_TDES function to encrypt the foreign key using the same
password as the primary key. The encrypted value of the foreign key will be the
same as the encrypted value of the primary key.
The SET ENCRYPTION PASSWORD statement sets the password that will be used
for the ENCRYPT_TDES function.
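For illustration, the following sketch shows how the password and the function can
be used together; the table names (CUSTOMER and CUSTOMER_ORDER), column
names, and password are hypothetical, and the encrypted columns are assumed to
be defined as VARCHAR FOR BIT DATA with sufficient length:
SET ENCRYPTION PASSWORD = 'Pacific99';
INSERT INTO CUSTOMER (CUSTNO, SSN)
  VALUES (1, ENCRYPT_TDES('123-45-6789'));
INSERT INTO CUSTOMER_ORDER (ORDERNO, CUST_SSN)
  VALUES (100, ENCRYPT_TDES('123-45-6789'));
Because both INSERT statements use the same encryption password, the encrypted
value of the foreign key in CUSTOMER_ORDER matches the encrypted value of the
primary key in CUSTOMER.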
For more information about the SET ENCRYPTION PASSWORD special register,
see the topic “ENCRYPTION PASSWORD” in DB2 SQL Reference.
For more information about the ENCRYPT_TDES statement, see the topic
“ENCRYPT_TDES” in DB2 SQL Reference.
Creating work tables for the EMP and DEPT sample tables
Before testing SQL statements that insert, update, and delete rows in the
DSN8910.EMP and DSN8910.DEPT sample tables, you should create duplicates of
these tables, so that the original sample tables remain intact. These duplicate tables
are called work tables.
This topic shows how to create the department and employee work tables and
how to fill a work table with the contents of another table.
Each of these topics assumes that you logged on by using your own authorization
ID. The authorization ID qualifies the name of each object that you create. For
example, if your authorization ID is SMITH, and you create table YDEPT, the name
of the table is SMITH.YDEPT. If you want to access table DSN8910.DEPT, you
must refer to it by its complete name. If you want to access your own table
YDEPT, you need only to refer to it as YDEPT.
Use the following statements to create a new department table called YDEPT,
modeled after the existing table, DSN8910.DEPT, and an index for YDEPT:
CREATE TABLE YDEPT
LIKE DSN8910.DEPT;
CREATE UNIQUE INDEX YDEPTX
ON YDEPT (DEPTNO);
If you want DEPTNO to be a primary key, as in the sample table, explicitly define
the key. Use an ALTER TABLE statement, as in the following example:
ALTER TABLE YDEPT
PRIMARY KEY(DEPTNO);
For information about using the INSERT statement, see “Inserting rows by using
the INSERT statement” on page 605.
You can use the following statements to create a new employee table called YEMP:
CREATE TABLE YEMP
(EMPNO CHAR(6) PRIMARY KEY NOT NULL,
FIRSTNME VARCHAR(12) NOT NULL,
MIDINIT CHAR(1) NOT NULL,
LASTNAME VARCHAR(15) NOT NULL,
WORKDEPT CHAR(3) REFERENCES YDEPT
ON DELETE SET NULL,
PHONENO CHAR(4) UNIQUE NOT NULL,
HIREDATE DATE ,
JOB CHAR(8) ,
EDLEVEL SMALLINT ,
SEX CHAR(1) ,
BIRTHDATE DATE ,
SALARY DECIMAL(9, 2) ,
BONUS DECIMAL(9, 2) ,
COMM DECIMAL(9, 2) );
This statement also creates a referential constraint between the foreign key in
YEMP (WORKDEPT) and the primary key in YDEPT (DEPTNO). It also restricts all
phone numbers to unique numbers.
If you want to change a table definition after you create it, use the ALTER TABLE
statement. If you want to change a table name after you create it, use the RENAME
statement.
You can change a table definition by using the ALTER TABLE statement only in
certain ways. For example, you can add and drop constraints on columns in a
table. You can also change the data type of a column within character data types,
within numeric data types, and within graphic data types. You can add a column
to a table. However, you cannot use the ALTER TABLE statement to drop a
column from a table.
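The following sketch shows two such changes to the YEMP table that was created
earlier; the new column name is illustrative only:
ALTER TABLE YEMP
  ADD EMAIL VARCHAR(64);
ALTER TABLE YEMP
  ALTER COLUMN JOB SET DATA TYPE CHAR(12);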
Related tasks
Altering DB2 tables (DB2 Administration Guide)
Related reference
ALTER TABLE (SQL Reference)
RENAME (SQL Reference)
Each application process has its own instance of the created temporary table.
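One way to create the definition of such a table is to list the columns explicitly. A
minimal sketch follows; the column list is abbreviated and illustrative, but it
includes the DESCRIPTION and CURDATE columns that are discussed below:
CREATE GLOBAL TEMPORARY TABLE TEMPPROD
  (SERIAL      CHAR(8)     NOT NULL,
   DESCRIPTION VARCHAR(60) NOT NULL,
   CURDATE     DATE        NOT NULL);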
Example: You can also create this same definition by copying the definition of a
base table (named PROD) by using the LIKE clause:
CREATE GLOBAL TEMPORARY TABLE TEMPPROD LIKE PROD;
The SQL statements in the previous examples create identical definitions for the
TEMPPROD table, but these tables differ slightly from the PROD sample table. The
PROD sample table contains two columns, DESCRIPTION and
CURDATE, that are defined as NOT NULL WITH DEFAULT. Because created
temporary tables do not support non-null default values, the DESCRIPTION and
CURDATE columns in the TEMPPROD table are defined as NOT NULL and do
not have defaults.
After you run one of the two CREATE statements, the definition of TEMPPROD
exists, but no instances of the table exist. To create an instance of TEMPPROD, you
must use TEMPPROD in an application. DB2 creates an instance of the table when
TEMPPROD is specified in one of the following SQL statements:
v OPEN
v SELECT
v INSERT
v DELETE
Restriction: You cannot use the MERGE statement with created temporary tables.
An instance of a created temporary table exists at the current server until one of
the following actions occurs:
v The application process ends.
v The remote server connection through which the instance was created
terminates.
v The unit of work in which the instance was created completes.
When you run a ROLLBACK statement, DB2 deletes the instance of the created
temporary table. When you run a COMMIT statement, DB2 deletes the instance
of the created temporary table unless a cursor for accessing the created
temporary table is defined with the WITH HOLD clause and is open.
Example: Suppose that you create a definition of TEMPPROD and then run an
application that contains the following statements:
EXEC SQL DECLARE C1 CURSOR WITH HOLD
  FOR SELECT * FROM TEMPPROD;
EXEC SQL INSERT INTO TEMPPROD SELECT * FROM PROD;
EXEC SQL OPEN C1;
...
EXEC SQL COMMIT;
...
EXEC SQL CLOSE C1;
In this case, DB2 does not delete the contents of TEMPPROD until the application
ends because C1, a cursor that is defined with the WITH HOLD clause, is open
when the COMMIT statement runs. In either case, DB2 drops the instance of
TEMPPROD when the application ends.
To drop the definition of TEMPPROD, you must run the following statement:
DROP TABLE TEMPPROD;
Temporary tables
Use temporary tables when you need to store data for only the duration of an
application process. Depending on whether you want to share the table definition,
you can create a created temporary table or a declared temporary table.
Temporary tables are especially useful when you need to sort or query
intermediate result tables that contain a large number of rows, but you want to
store only a small subset of those rows permanently.
Temporary tables can also return result sets from stored procedures. The following
topics provide more details about created temporary tables and declared temporary
tables:
v “Creating created temporary tables” on page 443
v “Creating declared temporary tables”
For more information, see “Writing an external procedure to return result sets to a
DRDA client” on page 584.
You create an instance of a declared temporary table by using the SQL DECLARE
GLOBAL TEMPORARY TABLE statement. That instance is known only to the
application process in which the table is declared.
| Before you can define declared temporary tables, you must have a WORKFILE
| database that has at least one table space with a 32-KB page size.
If you need to delete the definition before the application process completes, you
can do that with the DROP TABLE statement. For example, to drop the definition
of TEMPPROD, run the following statement:
DROP TABLE SESSION.TEMPPROD;
DB2 creates an empty instance of a declared temporary table when it runs the
DECLARE GLOBAL TEMPORARY TABLE statement. You can then perform the
following actions:
v Populate the declared temporary table by using INSERT statements
v Modify the table using searched or positioned UPDATE or DELETE statements
v Query the table using SELECT statements
v Create indexes on the declared temporary table
Example: Suppose that you run the following statement in an application program:
EXEC SQL DECLARE GLOBAL TEMPORARY TABLE TEMPPROD
AS (SELECT * FROM BASEPROD)
DEFINITION ONLY
INCLUDING IDENTITY COLUMN ATTRIBUTES
INCLUDING COLUMN DEFAULTS
ON COMMIT PRESERVE ROWS;
EXEC SQL INSERT INTO SESSION.TEMPPROD SELECT * FROM BASEPROD;
...
EXEC SQL COMMIT;
...
Answer: Add a column with the data type ROWID or an identity column. ROWID
columns and identity columns contain a unique value for each row in the table.
You can define the column as GENERATED ALWAYS, which means that you
cannot insert values into the column, or GENERATED BY DEFAULT, which means
that DB2 generates a value if you do not specify one. If you define the ROWID or
identity column as GENERATED BY DEFAULT, you need to define a unique index
that includes only that column to guarantee uniqueness.
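A sketch of this approach, with illustrative table and index names, follows:
ALTER TABLE ORDERS
  ADD ORDER_ID BIGINT GENERATED BY DEFAULT AS IDENTITY;
CREATE UNIQUE INDEX XORDER_ID
  ON ORDERS (ORDER_ID);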
| You can complete the table definition by performing one of the following actions,
| depending on why the table definition was incomplete:
| v Creating a primary index or altering the table to drop the primary key.
| v Creating a unique index on the unique key or altering the table to drop the
| unique key.
| v Defining a unique index on the ROWID column.
| v Creating the necessary LOB objects.
| Example of creating a primary index: To create the primary index for the project
| activity table, issue the following SQL statement:
| CREATE UNIQUE INDEX XPROJAC1
| ON DSN8910.PROJACT (PROJNO, ACTNO, ACSTDATE);
Use the DROP TABLE statement with care: Dropping a table is not equivalent to
deleting all its rows. When you drop a table, you lose more than its data and its
definition. You lose all synonyms, views, indexes, and referential and check
constraints that are associated with that table. You also lose all authorities that are
granted on the table.
For more information about the syntax of the DROP statement, see the topic
“DROP” in DB2 SQL Reference.
Defining a view
A view is a named specification of a result table. Use views to control which users
have access to certain data or to simplify writing SQL statements.
Use the CREATE VIEW statement to define a view and give the view a name, just
as you do for a table. The view that is created with the following statement shows
each department manager’s name with the department data in the DSN8910.DEPT
table.
CREATE VIEW VDEPTM AS
SELECT DEPTNO, MGRNO, LASTNAME, ADMRDEPT
FROM DSN8910.DEPT, DSN8910.EMP
WHERE DSN8910.EMP.EMPNO = DSN8910.DEPT.MGRNO;
When a program accesses the data that is defined by a view, DB2 uses the view
definition to return a set of rows that the program can access with SQL statements.
Example: To see the departments that are administered by department D01 and the
managers of those departments, run the following statement, which returns
information from the VDEPTM view:
SELECT DEPTNO, LASTNAME
FROM VDEPTM
WHERE ADMRDEPT = 'D01';
| When you create a view, you can reference the SESSION_USER and CURRENT
| SQLID special registers in the CREATE VIEW statement. When referencing the
| view, DB2 uses the value of the SESSION_USER or CURRENT SQLID special
| register that belongs to the user of the SQL statement (SELECT, UPDATE, INSERT,
| or DELETE) rather than the creator of the view. In other words, a reference to a
| special register in a view definition refers to its run-time value.
| You can use views to limit access to certain kinds of data, such as salary
| information. Alternatively, you can use the IMPLICITLY HIDDEN clause of a
| CREATE TABLE statement to define a column of a table to be hidden from some
| operations.
Views
A view does not contain data; it is a stored definition of a set of rows and
columns. A view can present any or all of the data in one or more tables and, in
most cases, is interchangeable with a table.
Although you cannot modify an existing view, you can drop it and create a new
one if your base tables change in a way that affects the view. Dropping and
creating views does not affect the base tables or their data.
Some views are read-only and thus cannot be used to update the table data. For
those views that are updatable, several restrictions apply.
| For complex views, you can make insert, update and delete operations possible by
| defining INSTEAD OF triggers.
For more information about INSTEAD OF triggers, see “Inserting, updating, and
deleting data in views by using INSTEAD OF triggers” on page 466.
For more information about read-only views, see the topic “CREATE VIEW” in
DB2 SQL Reference.
Dropping a view
When you drop a view, you also drop all views that are defined on that view. The
base table is not affected.
You can use a common table expression in a SELECT statement by using the WITH
clause at the beginning of the statement.
Example: WITH clause in a SELECT statement: The following statement finds the
department with the highest total pay. The query involves two levels of
aggregation. First, you need to determine the total pay for each department by
using the SUM function and order the results by using the GROUP BY clause. You
then need to find the department with highest total pay based on the total pay for
each department.
WITH DTOTAL (deptno, totalpay) AS
(SELECT deptno, sum(salary+bonus)
FROM DSN8810.EMP
GROUP BY deptno)
SELECT deptno
FROM DTOTAL
WHERE totalpay = (SELECT max(totalpay)
FROM DTOTAL);
The result table for the common table expression, DTOTAL, contains the
department number and total pay for each department in the employee table. The
fullselect in the previous example uses the result table for DTOTAL to find the
department with the highest total pay. The result table for the entire statement
looks similar to the following results:
DEPTNO
======
D11
You can use common table expressions before a fullselect in a CREATE VIEW
statement. This technique is useful if you need to use the results of a common
table expression in more than one query.
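For example, a view such as the following sketch, which is consistent with the
RICH_DEPT results that are described next, can be created:
CREATE VIEW RICH_DEPT (DEPTNO) AS
  WITH DTOTAL (DEPTNO, TOTALPAY) AS
    (SELECT DEPTNO, SUM(SALARY+BONUS)
     FROM DSN8810.EMP
     GROUP BY DEPTNO)
  SELECT DEPTNO
  FROM DTOTAL
  WHERE TOTALPAY > (SELECT AVG(TOTALPAY) FROM DTOTAL);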
The fullselect in the previous example uses the result table for DTOTAL to find the
departments that have a greater-than-average total pay. The result table is saved as
the RICH_DEPT view and looks similar to the following results:
DEPTNO
======
A00
D11
D21
You can use common table expressions before a fullselect in an INSERT statement.
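For example, assuming a hypothetical target table PAYTOTALS with columns
DEPTNO and TOTALPAY, such an INSERT statement might look like this sketch:
INSERT INTO PAYTOTALS (DEPTNO, TOTALPAY)
  WITH DTOTAL (DEPTNO, TOTALPAY) AS
    (SELECT DEPTNO, SUM(SALARY+BONUS)
     FROM DSN8810.EMP
     GROUP BY DEPTNO)
  SELECT DEPTNO, TOTALPAY
  FROM DTOTAL;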
| You can define a common table expression wherever you can have a fullselect
| statement. For example, you can include a common table expression in a SELECT,
| INSERT, SELECT INTO, or CREATE VIEW statement.
Each common table expression must have a unique name and be defined only
once. However, you can reference a common table expression many times in the
same SQL statement. Unlike regular views or nested table expressions, which are
derived each time that they are referenced, a common table expression is evaluated
only once and can be referenced many times in the same query.
Consider a table of parts with associated subparts and the quantity of subparts
required by each part. For more information about recursive SQL, refer to
“Creating recursive SQL by using common table expressions” on page 656.
Assume that the PARTLIST table is populated with the values that are in the
following table:
Table 80. PARTLIST table
PART SUBPART QUANTITY
00 01 5
00 05 3
01 02 2
01 03 3
01 04 4
01 06 3
02 05 7
02 06 6
03 07 6
04 08 10
04 09 11
05 10 10
05 11 10
06 12 10
06 13 10
07 14 8
07 12 8
Single level explosion answers the question, ″What parts are needed to build the
part identified by ’01’?″. The list will include the direct subparts, subparts of the
subparts and so on. However, if a part is used multiple times, its subparts are only
listed once.
WITH RPL (PART, SUBPART, QUANTITY) AS
(SELECT ROOT.PART, ROOT.SUBPART, ROOT.QUANTITY
FROM PARTLIST ROOT
WHERE ROOT.PART = '01'
UNION ALL
SELECT CHILD.PART, CHILD.SUBPART, CHILD.QUANTITY
FROM RPL PARENT, PARTLIST CHILD
WHERE PARENT.SUBPART = CHILD.PART)
SELECT DISTINCT PART, SUBPART, QUANTITY
FROM RPL
ORDER BY PART, SUBPART, QUANTITY;
The preceding query includes a common table expression, identified by the name
RPL, that expresses the recursive part of this query. It illustrates the basic elements
of a recursive common table expression.
The second operand (fullselect) of the UNION uses RPL to compute subparts of
subparts by using the FROM clause to refer to the common table expression RPL
and the source table PARTLIST with a join of a part from the source table (child) to
a subpart of the current result contained in RPL (parent). The result then goes back
into RPL. The second operand of UNION is applied repeatedly until no more
subparts exist.
The SELECT DISTINCT in the main fullselect of this query ensures the same
part/subpart is not listed more than once.
Observe in the result that part ’01’ contains subpart ’02’ which contains subpart
’06’ and so on. Further, notice that part ’06’ is reached twice, once through part ’01’
directly and another time through part ’02’. In the output, however, the subparts of
part ’06’ are listed only once (this is the result of using a SELECT DISTINCT).
An infinite loop can result if you do not code what you intend. Carefully determine
what to code so that there is a definite end to the recursion cycle.
A summarized explosion answers the question, ″What is the total quantity of each
part required to build part ’01’?″ The main difference from a single level explosion
is the need to aggregate the quantities. A single level explosion indicates the
quantity of subparts required for the part whenever it is required. It does not
indicate how many of each subpart is needed to build part ’01’.
WITH RPL (PART, SUBPART, QUANTITY) AS
(
SELECT ROOT.PART, ROOT.SUBPART, ROOT.QUANTITY
FROM PARTLIST ROOT
WHERE ROOT.PART = '01'
UNION ALL
SELECT PARENT.PART, CHILD.SUBPART,
PARENT.QUANTITY*CHILD.QUANTITY
FROM RPL PARENT, PARTLIST CHILD
WHERE PARENT.SUBPART = CHILD.PART
)
SELECT PART, SUBPART, SUM(QUANTITY) AS "Total QTY Used"
FROM RPL
GROUP BY PART, SUBPART
ORDER BY PART, SUBPART;
In the preceding query, the select list of the second operand of the UNION in the
recursive common table expression, identified by the name RPL, shows the
aggregation of the quantity. To determine how many of each subpart is used, the
quantity of the parent is multiplied by the quantity of the child, and the main
fullselect then sums these quantities for each part and subpart by using the
GROUP BY clause.
Consider the total quantity for subpart ’06’. The value of 15 is derived from a
quantity of 3 directly for part ’01’ and a quantity of 6 for part ’02’ which is needed
two times by part ’01’.
This query is similar to the query in example 1. The column LEVEL is introduced
to count the level each subpart is from the original part. In the initialization
fullselect, the value for the LEVEL column is initialized to 1. In the subsequent
fullselect, the level from the parent table increments by 1. To control the number of
levels in the result, the second fullselect includes the condition that the level of the
parent must be less than 2. This ensures that the second fullselect only processes
children to the second level.
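A sketch of such a query, consistent with that description, follows:
WITH RPL (LEVEL, PART, SUBPART, QUANTITY) AS
  (SELECT 1, ROOT.PART, ROOT.SUBPART, ROOT.QUANTITY
   FROM PARTLIST ROOT
   WHERE ROOT.PART = '01'
   UNION ALL
   SELECT PARENT.LEVEL+1, CHILD.PART, CHILD.SUBPART, CHILD.QUANTITY
   FROM RPL PARENT, PARTLIST CHILD
   WHERE PARENT.SUBPART = CHILD.PART
     AND PARENT.LEVEL < 2)
SELECT PART, LEVEL, SUBPART, QUANTITY
FROM RPL
ORDER BY PART, LEVEL, SUBPART;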
Creating triggers
A trigger is a set of SQL statements that execute when a certain event occurs in a
table. Use triggers to control changes in DB2 databases. Triggers are more powerful
than constraints, because they can monitor a broader range of changes and
perform a broader range of actions than constraints can.
For example, a constraint can disallow an update to the salary column of the
employee table if the new value is over a certain amount. A trigger can monitor
the amount by which the salary changes, as well as the salary value. If the change
is above a certain amount, the trigger might substitute a valid value and call a
user-defined function to send a notice to an administrator about the invalid
update.
Triggers also move application logic into DB2, which can result in faster
application development and easier maintenance. For example, you can write
applications to control salary changes in the employee table, but each application
program that changes the salary column must include logic to check those changes.
A better method is to define a trigger that controls changes to the salary column.
Then DB2 does the checking for any application that modifies salaries.
You create triggers using the CREATE TRIGGER statement. The following figure
shows an example of a CREATE TRIGGER statement.
CREATE TRIGGER REORDER
  AFTER UPDATE OF ON_HAND, MAX_STOCKED ON PARTS
When you execute this CREATE TRIGGER statement, DB2 creates a trigger
package called REORDER and associates the trigger package with table PARTS.
DB2 records the timestamp when it creates the trigger. If you define other triggers
on the PARTS table, DB2 uses this timestamp to determine which trigger to
activate first. The trigger is now ready to use.
When you no longer want to use trigger REORDER, you can delete the trigger by
executing the statement:
DROP TRIGGER REORDER;
Executing this statement drops trigger REORDER and its associated trigger
package named REORDER.
If you drop table PARTS, DB2 also drops trigger REORDER and its trigger
package.
Parts of a trigger:
Trigger name:
Use an ordinary identifier to name your trigger. You can use a qualifier or let DB2
determine the qualifier. When DB2 creates a trigger package for the trigger, it uses
the qualifier for the collection ID of the trigger package. DB2 uses these rules to
determine the qualifier:
v If you use static SQL to execute the CREATE TRIGGER statement, DB2 uses the
authorization ID in the bind option QUALIFIER for the plan or package that
contains the CREATE TRIGGER statement. If the bind command does not
include the QUALIFIER option, DB2 uses the owner of the package or plan.
| v If you use dynamic SQL to execute the CREATE TRIGGER statement, DB2 uses
| the authorization ID in special register CURRENT SCHEMA.
Subject table:
When you perform an insert, update, or delete operation on this table, the trigger
is activated. You must name a local table in the CREATE TRIGGER statement. You
cannot define a trigger on a catalog table or on a view.
Trigger activation time:
The two choices for trigger activation time are NO CASCADE BEFORE and
AFTER. NO CASCADE BEFORE means that the trigger is activated before DB2
makes any changes to the subject table, and that the triggered action does not
activate any other triggers. AFTER means that the trigger is activated after DB2
makes changes to the subject table and can activate other triggers. Triggers with an
activation time of NO CASCADE BEFORE are known as before triggers. Triggers
with an activation time of AFTER are known as after triggers.
Triggering event:
Every trigger is associated with an event. A trigger is activated when the triggering
event occurs in the subject table. The triggering event is one of the following SQL
operations:
v insert
v update
v delete
A triggering event can also be an update or delete operation that occurs as the
result of a referential constraint with ON DELETE SET NULL or ON DELETE
CASCADE.
Triggers are not activated as the result of updates made to tables by DB2 utilities,
with the exception of the LOAD utility when it is specified with the RESUME YES
and SHRLEVEL CHANGE options.
When the triggering event for a trigger is an update operation, the trigger is called
an update trigger. Similarly, triggers for insert operations are called insert triggers,
and triggers for delete operations are called delete triggers.
If the triggering SQL operation is an update operation, the event can be associated
with specific columns of the subject table. In this case, the trigger is activated only
if the update operation updates any of the specified columns.
Granularity:
The triggering SQL statement might modify multiple rows in the table. The
granularity of the trigger determines whether the trigger is activated only once for
the triggering SQL statement or once for every row that the SQL statement
modifies. The granularity values are:
v FOR EACH ROW
The trigger is activated once for each row that DB2 modifies in the subject table.
If the triggering SQL statement modifies no rows, the trigger is not activated.
However, if the triggering SQL statement updates a value in a row to the same
value, the trigger is activated. For example, if an UPDATE trigger is defined on
table COMPANY_STATS, the following SQL statement will activate the trigger.
UPDATE COMPANY_STATS SET NBEMP = NBEMP;
v FOR EACH STATEMENT
The trigger is activated once when the triggering SQL statement executes. The
trigger is activated even if the triggering SQL statement modifies no rows.
Triggers with a granularity of FOR EACH ROW are known as row triggers.
Triggers with a granularity of FOR EACH STATEMENT are known as statement
triggers. Statement triggers can only be after triggers.
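For example, a row trigger like the following sketch, which is consistent with the
NEW_HIRE trigger that is mentioned next and which assumes the COMPANY_STATS
table with its NBEMP column, increments an employee count for each inserted row:
CREATE TRIGGER NEW_HIRE
  AFTER INSERT ON EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
  END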
Trigger NEW_HIRE is activated once for every row inserted into the employee
table.
When you code a row trigger, you might need to refer to the values of columns in
each updated row of the subject table. To do this, specify transition variables in the
REFERENCING clause of your CREATE TRIGGER statement. The two types of
transition variables are:
v Old transition variables, specified with the OLD transition-variable clause, capture
the values of columns before the triggering SQL statement updates them. You
can define old transition variables for update and delete triggers.
v New transition variables, specified with the NEW transition-variable clause,
capture the values of columns after the triggering SQL statement updates them.
You can define new transition variables for update and insert triggers.
Suppose that you have created tables T and S, with the following definitions:
CREATE TABLE T
(ID SMALLINT GENERATED BY DEFAULT AS IDENTITY (START WITH 100),
C2 SMALLINT,
C3 SMALLINT,
C4 SMALLINT);
CREATE TABLE S
(ID SMALLINT GENERATED ALWAYS AS IDENTITY,
C1 SMALLINT);
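Suppose that you first execute the following statement, a sketch that is consistent
with the description that follows:
INSERT INTO S (C1)
  VALUES (5);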
This statement inserts a row into S with a value of 5 for column C1 and a value of
1 for identity column ID. Next, suppose that you execute the following SQL
statement, which activates trigger TR1:
INSERT INTO T (C2)
VALUES (IDENTITY_VAL_LOCAL());
This insert statement, and the subsequent activation of trigger TR1, have the
following results:
v The INSERT statement obtains the most recent value that was assigned to an
identity column (1), and inserts that value into column C2 of table T. 1 is the
value that DB2 inserted into identity column ID of table S.
v When the INSERT statement executes, DB2 inserts the value 100 into identity
column ID of table T, because 100 is the START WITH value for that column.
Transition tables:
If you want to refer to the entire set of rows that a triggering SQL statement
modifies, rather than to individual rows, use a transition table. Like transition
variables, transition tables can appear in the REFERENCING clause of a CREATE
TRIGGER statement. Transition tables are valid for both row triggers and statement
triggers. The two types of transition tables are:
v Old transition tables, specified with the OLD TABLE transition-table-name clause,
capture the values of columns before the triggering SQL statement updates
them. You can define old transition tables for update and delete triggers.
v New transition tables, specified with the NEW TABLE transition-table-name
clause, capture the values of columns after the triggering SQL statement updates
them. You can define new transition tables for update and insert triggers.
The scope of old and new transition table names is the trigger body. If another
table exists that has the same name as a transition table, any unqualified reference
to that name in the trigger body points to the transition table. To reference the
other table in the trigger body, you must use the fully qualified table name.
The following example uses a new transition table to capture the set of rows that
are inserted into the INVOICE table:
CREATE TRIGGER LRG_ORDR
AFTER INSERT ON INVOICE
REFERENCING NEW TABLE AS N_TABLE
FOR EACH STATEMENT MODE DB2SQL
BEGIN ATOMIC
SELECT LARGE_ORDER_ALERT(CUST_NO,
TOTAL_PRICE, DELIVERY_DATE)
FROM N_TABLE WHERE TOTAL_PRICE > 10000;
END
Triggered action:
When a trigger is activated, a triggered action occurs. Every trigger has one
triggered action, which consists of a trigger condition and a trigger body.
Trigger condition:
For a row trigger, DB2 evaluates the trigger condition once for each modified row
of the subject table. For a statement trigger, DB2 evaluates the trigger condition
once for each execution of the triggering SQL statement.
If the trigger condition of a before trigger has a fullselect, the fullselect cannot
reference the subject table.
The following example shows a trigger condition that causes the trigger body to
execute only when the number of ordered items is greater than the number of
available items:
CREATE TRIGGER CK_AVAIL
NO CASCADE BEFORE INSERT ON ORDERS
REFERENCING NEW AS NEW_ORDER
FOR EACH ROW MODE DB2SQL
WHEN (NEW_ORDER.QUANTITY >
(SELECT ON_HAND FROM PARTS
WHERE NEW_ORDER.PARTNO=PARTS.PARTNO))
BEGIN ATOMIC
VALUES(ORDER_ERROR(NEW_ORDER.PARTNO,
NEW_ORDER.QUANTITY));
END
Trigger body:
In the trigger body, you code the SQL statements that you want to execute
whenever the trigger condition is true. If the trigger body consists of more than
one statement, it must begin with BEGIN ATOMIC and end with END. You cannot
include host variables or parameter markers in your trigger body. If the trigger
body contains a WHERE clause that references transition variables, the comparison
operator cannot be LIKE.
The statements you can use in a trigger body depend on the activation time of the
trigger. For a list of valid SQL statements for triggers, see the ″Allowable SQL
statements″ table in the CREATE TRIGGER (SQL Reference) topic.
The following list provides more detailed information about SQL statements that
are valid in triggers:
v fullselect, CALL, and VALUES
Use a fullselect or the VALUES statement in a trigger body to conditionally or
unconditionally invoke a user-defined function. Use the CALL statement to
invoke a stored procedure. See “Invoking stored procedures and user-defined
functions from triggers” on page 465 for more information on invoking
user-defined functions and stored procedures from triggers.
A fullselect in the trigger body of a before trigger cannot reference the subject
table.
v SIGNAL
Use the SIGNAL statement in the trigger body to report an error condition and
back out any changes that are made by the trigger, as well as actions that result
from referential constraints on the subject table. When DB2 executes the SIGNAL
statement, it returns the SQLSTATE and message text that you specify to the
application whose statement activated the trigger.
If any SQL statement in the trigger body fails during trigger execution, DB2 rolls
back all changes that are made by the triggering SQL statement and the triggered
SQL statements. However, if the trigger body executes actions that are outside of
DB2’s control or are not under the same commit coordination as the DB2
subsystem in which the trigger executes, DB2 cannot undo those actions. Examples
of external actions that are not under DB2’s control are:
v Performing updates that are not under RRS commit control
v Sending an electronic mail message
If the trigger executes external actions that are under the same commit
coordination as the DB2 subsystem under which the trigger executes, and an error
occurs during trigger execution, DB2 places the application process that issued the
triggering statement in a must-rollback state. The application must then execute a
rollback operation to roll back those external actions. Examples of external actions
that are under the same commit coordination as the triggering SQL operation are:
v Executing a distributed update operation
v From a user-defined function or stored procedure, executing an external action
that affects an external resource manager that is under RRS commit control.
| Because a before trigger must not modify any table, functions and procedures that
| you invoke from a trigger cannot include INSERT, UPDATE, DELETE, or MERGE
| statements that modify the subject table.
Use the VALUES statement to execute a function unconditionally; that is, once for
each execution of a statement trigger or once for each row in a row trigger. In this
example, user-defined function PAYROLL_LOG executes every time an update
operation occurs that activates trigger PAYROLL1:
CREATE TRIGGER PAYROLL1
AFTER UPDATE ON PAYROLL
FOR EACH STATEMENT MODE DB2SQL
BEGIN ATOMIC
VALUES(PAYROLL_LOG(USER, 'UPDATE',
CURRENT TIME, CURRENT DATE));
END
When you call a user-defined function or stored procedure from a trigger, you
might want to give the function or procedure access to the entire set of modified
rows. That is, you want to pass a pointer to the old or new transition table. You do
this using table locators.
Most of the code for using a table locator is in the function or stored procedure
that receives the locator. To pass the transition table from a trigger, specify the
parameter TABLE transition-table-name when you invoke the function or stored
| Complex views are those views that are defined on expressions or multiple tables.
| In some cases, those views are read only. In these cases, INSTEAD OF triggers
| make the insert, update and delete operations possible. If the complex view is not
| read only, you can request an insert, update, or delete operation. However, DB2
| automatically decides how to perform that operation on the base tables that are
| referenced in the view. With INSTEAD OF triggers, you can define exactly how
| DB2 is to execute an insert, update, or delete operation on the view. You no longer
| leave the decision to DB2.
| Example: Suppose that you create the following view on the sample tables
| DSN8910.EMP and DSN8910.DEPT:
| CREATE VIEW EMPV (EMPNO, FIRSTNME, MIDINIT, LASTNAME, PHONENO, HIREDATE,DEPTNAME)
| AS SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, PHONENO, HIREDATE, DEPTNAME
| FROM DSN8910.EMP, DSN8910.DEPT WHERE DSN8910.EMP.WORKDEPT
| = DSN8910.DEPT.DEPTNO
| Suppose that you also define the following three INSTEAD OF triggers:
| CREATE TRIGGER EMPV_INSERT INSTEAD OF INSERT ON EMPV
| REFERENCING NEW AS NEWEMP
| FOR EACH ROW MODE DB2SQL
| Because the view is on a query with an inner join, the view is read only. However,
| the INSTEAD OF triggers make insert, update, and delete operations possible.
| Table 84 describes what happens for various insert, update, and delete operations
| on the EMPV view.
| Table 84. Results of INSTEAD OF triggers
| SQL statement Result
|| INSERT INTO EMPV VALUES (...) The EMPV_INSERT trigger is activated. This
| trigger inserts the row into the base table
| DSN8910.EMP if the department name
| matches a value in the WORKDEPT column
| in the DSN8910.DEPT table. Otherwise, an
| error is returned. If a query had been used
| instead of a VALUES clause on the INSERT
| statement, the trigger body would be
| processed for each row from the query.
|| UPDATE EMPV The EMPV_UPDATE trigger is activated. This
|| SET DEPTNAME='PLANNING & STRATEGY' trigger updates the DEPTNAME column in
| WHERE DEPTNAME='PLANNING' the DSN8910.DEPT table for any qualifying
| rows.
|| DELETE FROM EMPV The EMPV_DELETE trigger is activated. This
|| WHERE HIREDATE<'1910-01-01' trigger deletes the qualifying rows from the
| DSN8910.EMP table.
|
| Trigger packages
A trigger package is a special type of package that is created only when you
execute a CREATE TRIGGER statement. A trigger package executes only when its
associated trigger is activated.
Unlike other packages, a trigger package is freed if you drop the table on which
the trigger is defined, so you can recreate the trigger package only by recreating
the table and the trigger.
You can use the subcommand REBIND TRIGGER PACKAGE to rebind a trigger
package that DB2 has marked as inoperative. You can also use REBIND TRIGGER
PACKAGE to change the option values with which DB2 originally bound the
trigger package. You can change only a limited subset of the default bind options
that DB2 used when creating the package. For a description of these options, see
the topic “REBIND TRIGGER PACKAGE (DSN)” in DB2 Command Reference.
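For example, a DSN subcommand like the following one rebinds the trigger package for the PAYROLL1 trigger shown earlier. The collection ID of a trigger package is the schema of the trigger; the schema name ADMF001 and the options chosen here are illustrative assumptions:
REBIND TRIGGER PACKAGE(ADMF001.PAYROLL1) ISOLATION(CS) CURRENTDATA(NO)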
Trigger cascading
When a trigger performs an SQL operation, it might modify the subject table or
other tables with triggers, so DB2 also activates those triggers. This situation is
called trigger cascading.
A trigger that is activated as the result of another trigger can be activated at the
same level as the original trigger or at a different level. Two triggers, A and B, are
activated at different levels if trigger B is activated after trigger A is activated and
trigger B completes before trigger A completes. If trigger B is activated after trigger
A is activated and trigger B completes after trigger A completes, then the triggers
are at the same level.
For example, in these cases, trigger A and trigger B are activated at the same level:
v Table X has two triggers that are defined on it, A and B. A is a before trigger and
B is an after trigger. An update to table X causes both trigger A and trigger B to
activate.
v Trigger A updates table X, which has a referential constraint with table Y, which
has trigger B defined on it. The referential constraint causes table Y to be
updated, which activates trigger B.
In these cases, trigger A and trigger B are activated at different levels:
v Trigger A is defined on table X, and trigger B is defined on table Y. Trigger B is
an update trigger. An update to table X activates trigger A, which contains an
UPDATE statement on table Y in its trigger body. This UPDATE statement
activates trigger B.
v Trigger A calls a stored procedure. The stored procedure contains an INSERT
statement for table X, which has insert trigger B defined on it. When the INSERT
statement on table X executes, trigger B is activated.
When triggers are activated at different levels, it is called trigger cascading. Trigger
cascading can occur only for after triggers because DB2 does not support cascading
of before triggers.
To prevent the possibility of endless trigger cascading, DB2 supports only 16 levels
of cascading of triggers, stored procedures, and user-defined functions. If a trigger,
user-defined function, or stored procedure at the 17th level is activated, DB2
returns SQLCODE -724 and backs out all SQL changes in the 16 levels of cascading.
You can write a monitor program that issues IFI READS requests to collect DB2
trace information about the levels of cascading of triggers, user-defined functions,
and stored procedures in your programs. For information on how to write a
monitor program, see the topic “Invoking IFI from your program” in DB2
Performance Monitoring and Tuning Guide.
DB2 records the timestamp when each CREATE TRIGGER statement executes.
When an event occurs in a table that activates more than one trigger, DB2 uses the
stored timestamps to determine which trigger to activate first.
DB2 always activates all before triggers that are defined on a table before the after
triggers that are defined on that table, but within the set of before triggers, the
activation order is by timestamp, and within the set of after triggers, the activation
order is by timestamp.
In this example, triggers NEWHIRE1 and NEWHIRE2 have the same triggering
event (INSERT), the same subject table (EMP), and the same activation time
(AFTER). Suppose that the CREATE TRIGGER statement for NEWHIRE1 is run
before the CREATE TRIGGER statement for NEWHIRE2:
CREATE TRIGGER NEWHIRE1
AFTER INSERT ON EMP
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
END
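The CREATE TRIGGER statement for NEWHIRE2 is not shown above. A sketch of what a second, similar trigger might look like follows; the DEPT_STATS table, its NBEMP column, and the NEWEMP correlation name are illustrative assumptions:
CREATE TRIGGER NEWHIRE2
  AFTER INSERT ON EMP
  REFERENCING NEW AS NEWEMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE DEPT_STATS SET NBEMP = NBEMP + 1
      WHERE DEPTNO = NEWEMP.WORKDEPT;
  END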
When an insert operation occurs on table EMP, DB2 activates NEWHIRE1 first
because NEWHIRE1 was created first. Now suppose that someone drops and
re-creates NEWHIRE1. NEWHIRE1 now has a later timestamp than NEWHIRE2,
so the next time an insert operation occurs on EMP, NEWHIRE2 is activated before
NEWHIRE1.
If two row triggers are defined for the same action, the trigger that was created
earlier is activated first for all affected rows. Then the second trigger is activated
for all affected rows. In the previous example, suppose that an INSERT statement
with a fullselect inserts 10 rows into table EMP. NEWHIRE1 is activated for all 10
rows, then NEWHIRE2 is activated for all 10 rows.
In general, the following steps occur when triggering SQL statement S1 performs
an insert, update, or delete operation on table T1:
1. DB2 determines the rows of T1 to modify. Call that set of rows M1. The
contents of M1 depend on the SQL operation:
v For a delete operation, all rows that satisfy the search condition of the
statement for a searched delete operation, or the current row for a positioned
delete operation
v For an insert operation, the row identified by the VALUES clause, or the
rows identified by the result table of the fullselect within the INSERT
statement
v For an update operation, all rows that satisfy the search condition of the
statement for a searched update operation, or the current row for a
positioned update operation
2. DB2 processes all before triggers that are defined on T1, in order of creation.
Each before trigger executes the triggered action once for each row in M1. If
M1 is empty, the triggered action does not execute.
If an error occurs when the triggered action executes, DB2 rolls back all
changes that are made by S1.
| 3. DB2 makes the changes that are specified in statement S1 to table T1, unless an
| INSTEAD OF trigger is defined for that action. If an appropriate INSTEAD OF
| trigger is defined, DB2 executes the trigger instead of the statement and skips
| the remaining steps in this list.
| If an error occurs, DB2 rolls back all changes that are made by S1.
4. If M1 is not empty, DB2 applies all the following constraints and checks that
are defined on table T1:
v Referential constraints
v Check constraints
v Checks that are due to updates of the table through views defined WITH
CHECK OPTION
Referential constraints with rules of DELETE CASCADE or DELETE SET NULL
are applied before delete triggers or before update triggers on the dependent
tables are activated.
If any constraint is violated, DB2 rolls back all changes that are made by
constraint actions or by statement S1.
5. DB2 processes all after triggers that are defined on T1, and all after triggers on
tables that are modified as the result of referential constraint actions, in order of
creation.
Each after row trigger executes the triggered action once for each row in M1. If
M1 is empty, the triggered action does not execute.
Each after statement trigger executes the triggered action once for each
execution of S1, even if M1 is empty.
If any triggered actions contain SQL insert, update, or delete operations, DB2
repeats steps 1 through 5 for each operation.
For example, table DEPT is a parent table of EMP, with these conditions:
v The DEPTNO column of DEPT is the primary key.
v The WORKDEPT column of EMP is the foreign key.
v The constraint is ON DELETE SET NULL.
Suppose the following trigger is defined on EMP:
CREATE TRIGGER EMPRAISE
AFTER UPDATE ON EMP
REFERENCING NEW TABLE AS NEWEMPS
FOR EACH STATEMENT MODE DB2SQL
BEGIN ATOMIC
VALUES(CHECKEMP(TABLE NEWEMPS));
END
Also suppose that an SQL statement deletes the row with department number E21
from DEPT. Because of the constraint, DB2 finds the rows in EMP with a
WORKDEPT value of E21 and sets WORKDEPT in those rows to null. This is
equivalent to an update operation on EMP, which has update trigger EMPRAISE.
Therefore, because EMPRAISE is an after trigger, EMPRAISE is activated after the
constraint action sets WORKDEPT values to null.
If a subject table has a security label column, the column in the transition table or
transition variable that corresponds to the security label column in the subject table
does not inherit the security label attribute. This means that the multilevel security
check with row-level granularity is not enforced for the transition table or the
transition variable. If you add a security label column to a subject table using the
ALTER TABLE statement, the rules are the same as when you add any column to a
subject table because the column in the transition table or the transition variable
that corresponds to the security label column does not inherit the security label
attribute.
If the ID you are using does not have write-down privilege and you execute an
insert or update operation, the security label value of your ID is assigned to the
security label column for the rows that you are inserting or updating.
When a BEFORE trigger is activated, the value of the transition variable that
corresponds to the security label column is the security label of the ID if either of
the following conditions is true:
v The user does not have write-down privilege
v The value for the security label column is not specified
If the user does not have write-down privilege, and the trigger changes the
transition variable that corresponds to the security label column, the value of the
security label column is changed back to the security label value of the user before
the row is written to the page. For more information about multilevel security with
row-level granularity, see the topic “Multilevel security” in DB2 Administration
Guide.
Two common reasons that you can get inconsistent results are:
v Positioned UPDATE or DELETE statements that use uncorrelated subqueries
cause triggers to operate on a larger result table than you intended.
v DB2 does not always process rows in the same order, so triggers that propagate
rows of a table can generate different result tables at different times.
The following examples demonstrate these situations.
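The definitions that the first example relies on are not repeated here. A minimal sketch of the kind of setup it assumes follows; the column data types and the exact trigger body are illustrative assumptions, but they match the discussion below (cursor C1 selects from T1 with an uncorrelated subquery on T2, and the body of trigger TR1 deletes the row with value 2 from T2):
CREATE TABLE T1 (A1 INT);
CREATE TABLE T2 (B1 INT);

CREATE TRIGGER TR1
  AFTER UPDATE ON T1
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    DELETE FROM T2 WHERE B1 = 2;
  END

DCL C1 CURSOR FOR
  SELECT A1 FROM T1
  WHERE A1 IN (SELECT B1 FROM T2)
  FOR UPDATE OF A1;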
When DB2 executes the FETCH statement that positions cursor C1 for the first
time, DB2 evaluates the subselect, SELECT B1 FROM T2, to produce a result table
that contains the two values of column B1 in table T2:
1
2
When DB2 executes the positioned UPDATE statement for the first time, trigger
TR1 is activated. When the body of trigger TR1 executes, the row with value 2 is
deleted from T2. However, because SELECT B1 FROM T2 is evaluated only once,
when the FETCH statement is executed again, DB2 still finds the second row of T1,
even though the matching row of T2 has already been deleted, and the positioned
UPDATE statement and the triggered action execute for that row as well.
To avoid processing the second row of T1 after its matching row in T2 has been
deleted, use a correlated subquery in the cursor declaration:
DCL C1 CURSOR FOR
SELECT A1 FROM T1 X
WHERE EXISTS (SELECT B1 FROM T2 WHERE X.A1 = B1)
FOR UPDATE OF A1;
In this case, the subquery, SELECT B1 FROM T2 WHERE X.A1 = B1, is evaluated
for each FETCH statement. The first time that the FETCH statement executes, it
positions the cursor to the first row of T1. The positioned UPDATE operation
activates the trigger, which deletes the second row of T2. Therefore, when the
FETCH statement executes again, no row is selected, so no update operation or
triggered action occurs.
The contents of tables T2 and T3 after the UPDATE statement executes depend on
the order in which DB2 updates the rows of T1.
If DB2 updates the first row of T1 first, after the UPDATE statement and the
trigger execute for the first time, the values in the three tables are:
Table T1 Table T2 Table T3
A1 B1 C1
== == ==
2 2 2
2
After the second row of T1 is updated, the values in the three tables are:
However, if DB2 updates the second row of T1 first, after the UPDATE statement
and the trigger execute for the first time, the values in the three tables are:
Table T1 Table T2 Table T3
A1 B1 C1
== == ==
1 3 3
3
After the first row of T1 is updated, the values in the three tables are:
Table T1 Table T2 Table T3
A1 B1 C1
== == ==
2 3 3
3 2 3
2
Sequence objects
A sequence is a user-defined object that generates a sequence of numeric values
according to the specification with which the sequence was created. Sequences,
unlike identity columns, are not associated with tables. Applications refer to a
sequence object to get its current or next value.
Your application can reference a sequence object and coordinate the value as keys
across multiple rows and tables. However, a table column that gets its values from
a sequence object does not necessarily have unique values in that column. Even if
the sequence object has been defined with the NO CYCLE clause, some other
application might insert values into that table column other than values you obtain
by referencing that sequence object.
You create a sequence object with the CREATE SEQUENCE statement, alter it with
the ALTER SEQUENCE statement, and drop it with the DROP SEQUENCE
statement. You grant access to a sequence with the GRANT (privilege) ON
SEQUENCE statement, and revoke access to the sequence with the REVOKE
(privilege) ON SEQUENCE statement.
The MINVALUE and MAXVALUE options determine the minimum and maximum
values that DB2 generates. The CYCLE or NO CYCLE option determines whether
DB2 wraps values when it has generated all values between the START WITH
value and MAXVALUE if the values are ascending, or between the START WITH
value and MINVALUE if the values are descending.
Keys across multiple tables: You can use the same sequence number as a key
value in two separate tables by first generating the sequence value with a NEXT
VALUE expression to insert the first row in the first table. You can then reference
this same sequence value with a PREVIOUS VALUE expression to insert the other
rows in the second table.
Example: Suppose that an ORDERS table and an ORDER_ITEMS table are defined
in the following way:
CREATE TABLE ORDERS
(ORDERNO INTEGER NOT NULL,
ORDER_DATE DATE DEFAULT,
CUSTNO SMALLINT,
PRIMARY KEY (ORDERNO));
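The ORDER_ITEMS definition is not shown above. A definition along the following lines fits the example; the PARTNO and QUANTITY columns are illustrative assumptions:
CREATE TABLE ORDER_ITEMS
  (ORDERNO INTEGER NOT NULL,
   PARTNO INTEGER NOT NULL,
   QUANTITY SMALLINT NOT NULL,
   PRIMARY KEY (ORDERNO, PARTNO));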
You create a sequence named ORDER_SEQ to use as key values for both the
ORDERS and ORDER_ITEMS tables:
CREATE SEQUENCE ORDER_SEQ AS INTEGER
START WITH 1
INCREMENT BY 1
NO MAXVALUE
NO CYCLE
CACHE 20;
You can then use the same sequence number as a primary key value for the
ORDERS table and as part of the primary key value for the ORDER_ITEMS table:
INSERT INTO ORDERS (ORDERNO, CUSTNO)
VALUES (NEXT VALUE FOR ORDER_SEQ, 12345);
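The second INSERT statement, which the following paragraph refers to, is not shown above. It uses a PREVIOUS VALUE expression to reuse the same sequence number; the PARTNO and QUANTITY values are illustrative assumptions:
INSERT INTO ORDER_ITEMS (ORDERNO, PARTNO, QUANTITY)
  VALUES (PREVIOUS VALUE FOR ORDER_SEQ, 987654, 5);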
The NEXT VALUE expression in the first INSERT statement generates a sequence
number value for the sequence object ORDER_SEQ. The PREVIOUS VALUE
expression in the second INSERT statement retrieves that same value because it
was the sequence number most recently generated for that sequence object within
the current application process.
With those extensions, you can store instances of object-oriented data types in
columns of tables and operate on them using functions in SQL statements. In
addition, you can control the types of operations that users can perform on those
data types.
To define a distinct type, issue the CREATE DISTINCT TYPE statement. For example,
you can create distinct types for euros and yen by issuing CREATE DISTINCT TYPE
statements like the US_DOLLAR, EURO, and JAPANESE_YEN definitions that are shown
later in this topic.
Distinct types
A distinct type is a user-defined data type that is based on existing built-in DB2
data types. You can use distinct types in the same way that you use built-in data
types, in any type of SQL application except for a DB2 private protocol application.
Each distinct type has the same internal representation as a built-in data type.
Suppose you want to define some audio and video data in a DB2 table. You can
define columns for both types of data as BLOB, but you might want to use a data
type that more specifically describes the data. To do that, define distinct types. You
can then use those types when you define columns in a table or manipulate the
data in those columns. For example, you can define distinct types for the audio
and video data like this:
CREATE DISTINCT TYPE AUDIO AS BLOB (1M);
CREATE DISTINCT TYPE VIDEO AS BLOB (1M);
For more information on LOB data, see “Large objects (LOBs)” on page 430.
After you define distinct types and columns of those types, you can use those data
types in the same way you use built-in types. You can use the data types in
assignments, comparisons, function invocations, and stored procedure calls.
However, when you assign one column value to another or compare two column
values, those values must be of the same distinct type. For example, you must
assign a column value of type VIDEO to a column of type VIDEO, and you can
compare a column value of type AUDIO only to a column of type AUDIO. When
you assign a host variable value to a column with a distinct type, you can use any
host data type that is compatible with the source data type of the distinct type. For
example, to receive an AUDIO or VIDEO value, you can define a host variable like
this:
SQL TYPE IS BLOB (1M) HVAV;
For example, if you have defined a user-defined function to convert U.S. dollars to
euro currency, you do not want anyone to use this same user-defined function to
convert Japanese yen to euros because the U.S. dollars to euros function returns the
wrong amount. Suppose you define three distinct types:
CREATE DISTINCT TYPE US_DOLLAR AS DECIMAL(9,2);
CREATE DISTINCT TYPE EURO AS DECIMAL(9,2);
CREATE DISTINCT TYPE JAPANESE_YEN AS DECIMAL(9,2);
Suppose that you keep electronic mail documents that are sent to your company in
a DB2 table. The DB2 data type of an electronic mail document is a CLOB, but you
define it as a distinct type so that you can control the types of operations that are
performed on the electronic mail. The distinct type is defined like this:
CREATE DISTINCT TYPE E_MAIL AS CLOB(5M);
You have also defined and written user-defined functions to search for and return
the following information about an electronic mail document:
v Subject
v Sender
v Date sent
v Message content
v Indicator of whether the document contains a user-specified string
The user-defined function definitions look like this:
CREATE FUNCTION SUBJECT(E_MAIL)
RETURNS VARCHAR(200)
EXTERNAL NAME 'SUBJECT'
LANGUAGE C
PARAMETER STYLE SQL
NO SQL
DETERMINISTIC
NO EXTERNAL ACTION;
CREATE FUNCTION SENDER(E_MAIL)
RETURNS VARCHAR(200)
EXTERNAL NAME 'SENDER'
The table that contains the electronic mail documents is defined like this:
CREATE TABLE DOCUMENTS
(LAST_UPDATE_TIME TIMESTAMP,
DOC_ROWID ROWID NOT NULL GENERATED ALWAYS,
A_DOCUMENT E_MAIL);
Because the table contains a column with a source data type of CLOB, the table
requires an associated LOB table space, auxiliary table, and index on the auxiliary
table. Use statements like this to define the LOB table space, the auxiliary table,
and the index:
CREATE LOB TABLESPACE DOCTSLOB
LOG YES
GBPCACHE SYSTEM;
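The statements for the auxiliary table and its index are not shown above. Statements like the following ones would complete the set; the names A_DOC_AUX and A_DOC_AUX_IX are illustrative assumptions:
CREATE AUX TABLE A_DOC_AUX
  IN DOCTSLOB
  STORES DOCUMENTS
  COLUMN A_DOCUMENT;

CREATE UNIQUE INDEX A_DOC_AUX_IX
  ON A_DOC_AUX;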
To populate the document table, you write code that executes an INSERT
statement to put the first part of a document in the table, and then executes
multiple UPDATE statements to concatenate the remaining parts of the document.
For example:
EXEC SQL BEGIN DECLARE SECTION;
char hv_current_time[26];
SQL TYPE IS CLOB (1M) hv_doc;
EXEC SQL END DECLARE SECTION;
/* Determine the current time and put this value */
/* into host variable hv_current_time. */
/* Read up to 1 MB of document data from a file */
/* into host variable hv_doc. */
Now that the data is in the table, you can execute queries to learn more about the
documents. For example, you can execute this query to determine which
documents contain the word "performance":
SELECT SENDER(A_DOCUMENT), SENDING_DATE(A_DOCUMENT),
SUBJECT(A_DOCUMENT)
FROM DOCUMENTS
WHERE CONTAINS(A_DOCUMENT,'performance') = 1;
Because the electronic mail documents can be very large, you might want to use
LOB locators to manipulate the document data instead of fetching all of a
document into a host variable. You can use a LOB locator on any distinct type that
is defined on one of the LOB types. The following example shows how you can
cast a LOB locator as a distinct type, and then use the result in a user-defined
function that takes a distinct type as an argument:
EXEC SQL BEGIN DECLARE SECTION;
long hv_len;
char hv_subject[200];
SQL TYPE IS CLOB_LOCATOR hv_email_locator;
EXEC SQL END DECLARE SECTION;
.
.
.
/* Select a document into a CLOB locator. */
EXEC SQL SELECT A_DOCUMENT, SUBJECT(A_DOCUMENT)
INTO :hv_email_locator, :hv_subject
FROM DOCUMENTS
WHERE LAST_UPDATE_TIME = :hv_current_time;
.
.
.
/* Extract the subject from the document. The */
/* SUBJECT function takes an argument of type */
/* E_MAIL, so cast the CLOB locator as E_MAIL. */
EXEC SQL SET :hv_subject =
SUBJECT(CAST(:hv_email_locator AS E_MAIL));
.
.
.
If you discover after you define the function that any of these characteristics is not
appropriate for the function, you can use an ALTER FUNCTION statement to
change information in the definition. You cannot use ALTER FUNCTION to change
some of the characteristics of a user-defined function definition. For more
information about ALTER FUNCTION, see the topics “ALTER FUNCTION
(external)” and “ALTER FUNCTION (SQL scalar)” in DB2 SQL Reference.
Examples
The output from the user-defined function is of type float, but users require integer
output for their SQL statements. The user-defined function is written in C and
contains no SQL statements. The function is defined to stop when the number of
abnormal terminations is equal to 3.
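The CREATE FUNCTION statement for this example is not reproduced above. A definition along the following lines matches the description; the function name FINDV, the input parameter types, and the load module name are illustrative assumptions:
CREATE FUNCTION FINDV (INTEGER, FLOAT)
  RETURNS INTEGER CAST FROM FLOAT
  EXTERNAL NAME 'FINDV'
  LANGUAGE C
  PARAMETER STYLE SQL
  NO SQL
  NO EXTERNAL ACTION
  FENCED
  STOP AFTER 3 FAILURES;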
The user-defined function takes two integer values as input. The output from the
user-defined function is of type integer. The user-defined function is in the MATH
schema, is written in assembler, and contains no SQL statements. This CREATE
FUNCTION statement defines the user-defined function:
CREATE FUNCTION MATH."/" (INT, INT)
RETURNS INTEGER
SPECIFIC DIVIDE
EXTERNAL NAME 'DIVIDE'
LANGUAGE ASSEMBLE
PARAMETER STYLE SQL
NO SQL
DETERMINISTIC
NO EXTERNAL ACTION
FENCED;
Example: Definition for an SQL user-defined function: You can define an SQL
user-defined function for the tangent of a value by using the existing built-in SIN
and COS functions:
CREATE FUNCTION TAN (X DOUBLE)
RETURNS DOUBLE
LANGUAGE SQL
CONTAINS SQL
NO EXTERNAL ACTION
DETERMINISTIC
RETURN SIN(X)/COS(X);
The user-defined function is written in COBOL, uses SQL only to perform queries,
always produces the same output for given input, and should not execute as a
parallel task.
User-defined functions
A user-defined function is an extension to the SQL language. It is a small program
that you write, similar to a host language subprogram or function. However, a
user-defined function is often the better choice for an SQL application because you
can invoke it in an SQL statement.
This section contains information that applies to all user-defined functions and
specific information about user-defined functions in languages other than Java.
To prepare an SQL scalar function for execution, you execute the CREATE
FUNCTION statement, either statically or dynamically.
Related tasks
“Defining a user-defined function” on page 480
Related reference
CREATE FUNCTION (SQL scalar) (SQL Reference)
| Sourced functions
| A sourced function is a function that invokes another function that already exists at
| the server. The function inherits the attributes of the underlying source function.
| The source function can be a built-in, external, SQL, or sourced function.
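For example, assuming the EURO distinct type that is defined earlier in this information, a sourced function can make the built-in AVG column function available for that type; the schema name MYSCHEMA is an illustrative assumption:
CREATE FUNCTION MYSCHEMA.AVG (EURO)
  RETURNS EURO
  SOURCE SYSIBM.AVG (DECIMAL(9,2));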
Related reference
CREATE FUNCTION (SQL Reference)
Your user-defined function can also access remote data using the following
methods:
v DB2 private protocol access using three-part names or aliases for three-part
names
v DRDA access using three-part names or aliases for three-part names
v DRDA access using CONNECT or SET CONNECTION statements
The user-defined function and the application that calls it can access the same
remote site if both use the same protocol.
If you code your user-defined function as a subprogram and manage the storage
and files yourself, you can get better performance. The user-defined function
should always free any allocated storage before it exits. To keep data between
invocations of the user-defined function, use a scratchpad.
You must code a user-defined table function that accesses external resources as a
subprogram. Also ensure that the definer specifies the EXTERNAL ACTION
parameter in the CREATE FUNCTION or ALTER FUNCTION statement. Program
variables for a subprogram persist between invocations of the user-defined function.
The following figure shows the structure of the parameter list that DB2 passes to a
user-defined function. An explanation of each parameter follows.
[Figure: The parameter list that DB2 passes to a user-defined function contains, in
order, the input parameters and their data, the result parameters, the input parameter
indicators, the result indicators, the SQLSTATE, the qualified user-defined function
name, the specific name, the diagnostic message text area, the scratchpad, the call
type, and the DBINFO structure.]
1. For a user-defined scalar function, only one result and one result indicator are passed.
2. Passed if the SCRATCHPAD option is specified in the user-defined function definition.
3. Passed if the FINAL CALL option is specified in a user-defined scalar function definition;
always passed for a user-defined table function.
4. For PL/I, this value is the address of a pointer to the DBINFO data.
5. Passed if the DBINFO option is specified in the user-defined function definition.
DB2 obtains the input parameters from the invoker’s parameter list, and your
user-defined function receives those parameters according to the rules of the host
language in which the user-defined function is written. The number of input
parameters is the same as the number of parameters in the user-defined function
invocation. If one of the parameters in the function invocation is an expression,
DB2 evaluates the expression and assigns the result of the expression to the
parameter.
For all data types except LOBs, ROWIDs, locators, and VARCHAR (with C
language), see the tables listed in the following table for the host data types that
are compatible with the data types in the user-defined function definition.
Table 86. Listing of tables of compatible data types
Language      Compatible data types table
Assembler     “Compatibility of SQL and language data types” on page 148
C             “Compatibility of SQL and language data types” on page 148
COBOL         “Compatibility of SQL and language data types” on page 148
PL/I          “Compatibility of SQL and language data types” on page 148
For LOBs, ROWIDs, and locators, see the following table for the assembler data
types that are compatible with the data types in the user-defined function
definition.
Table 87. Compatible assembler language declarations for LOBs, ROWIDs, and locators
SQL data type in definition Assembler declaration
TABLE LOCATOR DS FL4
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
BLOB(n) If n <= 65535:
var DS 0FL4
var_length DS FL4
var_data DS CLn
If n > 65535:
var DS 0FL4
var_length DS FL4
var_data DS CL65535
ORG var_data+(n-65535)
CLOB(n) If n <= 65535:
var DS 0FL4
var_length DS FL4
var_data DS CLn
If n > 65535:
var DS 0FL4
var_length DS FL4
var_data DS CL65535
ORG var_data+(n-65535)
DBCLOB(n) If m (=2*n) <= 65534:
var DS 0FL4
var_length DS FL4
var_data DS CLm
If m > 65534:
var DS 0FL4
var_length DS FL4
var_data DS CL65534
ORG var_data+(m-65534)
For LOBs, ROWIDs, VARCHARs, and locators see the following table for the C
data types that are compatible with the data types in the user-defined function
definition.
Table 88. Compatible C language declarations for LOBs, ROWIDs, VARCHARs, and locators
SQL data type in definition1 C declaration
TABLE LOCATOR unsigned long
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
BLOB(n) struct
{unsigned long length;
char data[n];
} var;
CLOB(n) struct
{unsigned long length;
char var_data[n];
} var;
DBCLOB(n) struct
{unsigned long length;
sqldbchar data[n];
} var;
ROWID struct {
short int length;
char data[40];
} var;
VARCHAR(n)2 If PARAMETER VARCHAR NULTERM is
specified or implied:
char data[n+1];
For LOBs, ROWIDs, and locators, see the following table for the COBOL data types
that are compatible with the data types in the user-defined function definition.
For LOBs, ROWIDs, and locators, see the following table for the PL/I data types
that are compatible with the data types in the user-defined function definition.
Table 90. Compatible PL/I declarations for LOBs, ROWIDs, and locators
SQL data type in definition PL/I
TABLE LOCATOR BIN FIXED(31)
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
BLOB(n) If n <= 32767:
01 var,
03 var_LENGTH
BIN FIXED(31),
03 var_DATA
CHAR(n);
If n > 32767:
01 var,
02 var_LENGTH
BIN FIXED(31),
02 var_DATA,
03 var_DATA1(n)
CHAR(32767),
03 var_DATA2
CHAR(mod(n,32767));
CLOB(n) If n <= 32767:
01 var,
03 var_LENGTH
BIN FIXED(31),
03 var_DATA
CHAR(n);
If n > 32767:
01 var,
02 var_LENGTH
BIN FIXED(31),
02 var_DATA,
03 var_DATA1(n)
CHAR(32767),
03 var_DATA2
CHAR(mod(n,32767));
DBCLOB(n) If n <= 16383:
01 var,
03 var_LENGTH
BIN FIXED(31),
03 var_DATA
GRAPHIC(n);
If n > 16383:
01 var,
02 var_LENGTH
BIN FIXED(31),
02 var_DATA,
03 var_DATA1(n)
GRAPHIC(16383),
03 var_DATA2
GRAPHIC(mod(n,16383));
ROWID CHAR(40) VAR;
Result parameters: Set these values in your user-defined function before exiting.
For a user-defined scalar function, you return one result parameter. For a
user-defined table function, you return the same number of parameters as columns
in the RETURNS TABLE clause of the CREATE FUNCTION statement. DB2
allocates a buffer for each result parameter value and passes the buffer address to
the user-defined function. Your user-defined function places each result parameter
value in its buffer. You must ensure that the length of the value you place in each
output buffer does not exceed the buffer length. Use the SQL data type and length
in the CREATE FUNCTION statement to determine the buffer length.
See “Parameters for external user-defined functions” on page 490 to determine the
host data type to use for each result parameter value. If the CREATE FUNCTION
statement contains a CAST FROM clause, use a data type that corresponds to the
SQL data type in the CAST FROM clause. Otherwise, use a data type that
corresponds to the SQL data type in the RETURNS or RETURNS TABLE clause.
Input parameter indicators: These are SMALLINT values, which DB2 sets before it
passes control to the user-defined function. You use the indicators to determine
whether the corresponding input parameters are null. The number and order of the
indicators are the same as the number and order of the input parameters. On entry
to the user-defined function, each indicator contains one of these values:
0 The input parameter value is not null.
negative
The input parameter value is null.
Code the user-defined function to check all indicators for null values unless the
user-defined function is defined with RETURNS NULL ON NULL INPUT. A
user-defined function defined with RETURNS NULL ON NULL INPUT executes
only if all input parameters are not null.
Result indicators: These are SMALLINT values, which you must set before the
user-defined function ends to indicate to the invoking program whether each result
parameter value is null. A user-defined scalar function has one result indicator. A
user-defined table function has the same number of result indicators as the number
of result parameters. The order of the result indicators is the same as the order of
the result parameters. Set each result indicator to one of these values:
0 or positive
The result parameter is not null.
negative
The result parameter is null.
SQLSTATE value: This CHAR(5) value represents the SQLSTATE that is passed in
to the program from the database manager. The initial value is set to ‘00000’.
Although the SQLSTATE is usually not set by the program, it can be set as the
result SQLSTATE that is used to return an error or a warning. Returned values that
start with anything other than ‘00’, ‘01’, or ‘02’ are error conditions.
User-defined function name: DB2 sets this value in the parameter list before the
user-defined function executes. This value is VARCHAR(257): 128 bytes for the
schema name, 1 byte for a period, and 128 bytes for the user-defined function
name. If you use the same code to implement multiple versions of a user-defined
function, you can use this parameter to determine which version of the function
the invoker wants to execute.
Specific name: DB2 sets this value in the parameter list before the user-defined
function executes. This value is VARCHAR(128) and is either the specific name
from the CREATE FUNCTION statement or a specific name that DB2 generated. If
you use the same code to implement multiple versions of a user-defined function,
you can use this parameter to determine which version of the function the invoker
wants to execute.
Diagnostic message: Your user-defined function can set this CHAR or VARCHAR
value to a character string of up to 1000 bytes before exiting. Use this area to pass
descriptive information about an error or warning to the invoker.
You must ensure that your user-defined function does not write more bytes to the
scratchpad than the scratchpad length.
Call type: For a user-defined scalar function, if the definer specified FINAL CALL
in the CREATE FUNCTION statement, DB2 passes this parameter to the
user-defined function. For a user-defined table function, DB2 always passes this
parameter to the user-defined function.
On entry to a user-defined scalar function, the call type parameter has one of the
following values:
-1 This is the first call to the user-defined function for the SQL statement. For
a first call, all input parameters are passed to the user-defined function. In
addition, the scratchpad, if allocated, is set to binary zeros.
0 This is a normal call. For a normal call, all the input parameters are passed
to the user-defined function. If a scratchpad is also passed, DB2 does not
modify it.
1 This is a final call. For a final call, no input parameters are passed to the
user-defined function. If a scratchpad is also passed, DB2 does not modify
it.
This type of final call occurs when the invoking application explicitly
closes a cursor. When a value of 1 is passed to a user-defined function, the
user-defined function can execute SQL statements.
255 This is a final call. For a final call, no input parameters are passed to the
user-defined function. If a scratchpad is also passed, DB2 does not modify
it.
This type of final call occurs when the invoking application executes a
COMMIT or ROLLBACK statement, or when the invoking application
abnormally terminates. When a value of 255 is passed to the user-defined
function, the user-defined function cannot execute any SQL statements,
except for CLOSE CURSOR. If the user-defined function executes any close
cursor statements during this type of final call, the user-defined function
should tolerate SQLCODE -501 because DB2 might have already closed
cursors before the final call.
During the first call, your user-defined scalar function should acquire any system
resources it needs. During the final call, the user-defined scalar function should
release any resources it acquired during the first call. The user-defined scalar
function should return a result value only during normal calls. DB2 ignores any
results that are returned during a final call. However, the user-defined scalar
function can set the SQLSTATE and diagnostic message area during the final call.
On entry to a user-defined table function, the call type parameter has one of the
following values:
-2 This is the first call to the user-defined function for the SQL statement. A
first call occurs only if the FINAL CALL keyword is specified in the
user-defined function definition. For a first call, all input parameters are
passed to the user-defined function. In addition, the scratchpad, if
allocated, is set to binary zeros.
-1 This is the open call to the user-defined function by an SQL statement. If
FINAL CALL is not specified in the user-defined function definition, all
input parameters are passed to the user-defined function, and the
scratchpad, if allocated, is set to binary zeros during the open call. If
FINAL CALL is specified for the user-defined function, DB2 does not
modify the scratchpad.
0 This is a fetch call to the user-defined function by an SQL statement. For a
fetch call, all input parameters are passed to the user-defined function. If a
scratchpad is also passed, DB2 does not modify it.
1 This is a close call. For a close call, no input parameters are passed to the
user-defined function. If a scratchpad is also passed, DB2 does not modify
it.
2 This is a final call. This type of final call occurs only if FINAL CALL is
specified in the user-defined function definition. For a final call, no input
parameters are passed to the user-defined function. If a scratchpad is also
passed, DB2 does not modify it.
This type of final call occurs when the invoking application executes a
CLOSE CURSOR statement.
255 This is a final call. For a final call, no input parameters are passed to the
user-defined function. If a scratchpad is also passed, DB2 does not modify
it.
This type of final call occurs when the invoking application executes a
COMMIT or ROLLBACK statement, or when the invoking application
abnormally terminates. When a value of 255 is passed to the user-defined
function, the user-defined function cannot execute any SQL statements,
except for CLOSE CURSOR. If the user-defined function executes any close
cursor statements during this type of final call, the user-defined function
should tolerate SQLCODE -501 because DB2 might have already closed
cursors before the final call.
During the close call, a user-defined table function can set the SQLSTATE and
diagnostic message area.
If you write your user-defined function in C or C++, you can use the declarations
in member SQLUDF of DSN910.SDSNC.H for many of the passed parameters. To
include SQLUDF, make these changes to your program:
v Put this statement in your source code:
#include <sqludf.h>
v Include the DSN910.SDSNC.H data set in the SYSLIB concatenation for the
compile step of your program preparation job.
The following examples show how a user-defined function that is written in each
of the supported host languages receives the parameter list that is passed by DB2.
These examples assume that the user-defined function is defined with the
SCRATCHPAD, FINAL CALL, and DBINFO parameters.
Assembler: The following figure shows the parameter conventions for a user-defined
scalar function that is written as a main program that receives two parameters and
returns one result. For an assembler language user-defined function that is a
subprogram, the conventions are the same. In either case, you must include the
CEEENTRY and CEEEXIT macros.
MYMAIN CEEENTRY AUTO=PROGSIZE,MAIN=YES,PLIST=OS
USING PROGAREA,R13
For subprograms, you pass the parameters directly. For main programs, you use
the standard argc and argv variables to access the input and output parameters:
v The argv variable contains an array of pointers to the parameters that are passed
to the user-defined function. All string parameters that are passed back to DB2
must be null terminated.
– argv[0] contains the address of the load module name for the user-defined
function.
– argv[1] through argv[n] contain the addresses of parameters 1 through n.
v The argc variable contains the number of parameters that are passed to the
external user-defined function, including argv[0].
The following figure shows the parameter conventions for a user-defined scalar
function that is written as a main program that receives two parameters and
returns one result.
#include <stdlib.h>
#include <stdio.h>
main(argc,argv)
int argc;
char *argv[];
{
/***************************************************/
/* Assume that the user-defined function invocation*/
/* included 2 input parameters in the parameter */
/* list. Also assume that the definition includes */
/* the SCRATCHPAD, FINAL CALL, and DBINFO options, */
/* so DB2 passes the scratchpad, calltype, and */
/* dbinfo parameters. */
/* The argv vector contains these entries: */
/* argv[0] 1 load module name */
/* argv[1-2] 2 input parms */
/* argv[3] 1 result parm */
/* argv[4-5] 2 null indicators */
/* argv[6] 1 result null indicator */
/* argv[7] 1 SQLSTATE variable */
/* argv[8] 1 qualified func name */
/* argv[9] 1 specific func name */
/* argv[10] 1 diagnostic string */
/* argv[11] 1 scratchpad */
/* argv[12] 1 call type */
/* argv[13] + 1 dbinfo */
/* ------ */
/* 14 for the argc variable */
/***************************************************/
if (argc != 14)
{
.
.
.
/**********************************************************/
/* This section would contain the code executed if the */
/* user-defined function is invoked with the wrong number */
/* of parameters. */
/**********************************************************/
}
/***************************************************/
/* Assume the first parameter is an integer. */
/* The following code shows how to copy the integer*/
/* parameter into the application storage. */
/***************************************************/
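/* Illustrative sketch: copy the first input parameter, */
/* which argv[1] points to, into local storage. */
int parm1;
parm1 = *(int *) argv[1];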
/***************************************************/
/* Access the null indicator for the first */
/* parameter on the invoked user-defined function */
/* as follows: */
/***************************************************/
short int ind1;
ind1 = *(short int *) argv[4];
/***************************************************/
/* Use the following expression to assign */
/* 'xxxxx' to the SQLSTATE returned to caller on */
/* the SQL statement that contains the invoked */
/* user-defined function. */
/***************************************************/
strcpy(argv[7],"xxxxx\0");
/***************************************************/
/* Obtain the value of the qualified function */
/* name with this expression. */
/***************************************************/
char f_func[28];
strcpy(f_func,argv[8]);
/***************************************************/
/* Obtain the value of the specific function */
/* name with this expression. */
/***************************************************/
char f_spec[19];
strcpy(f_spec,argv[9]);
/***************************************************/
/* Use the following expression to assign */
/* 'yyyyyyyy' to the diagnostic string returned */
/* in the SQLCA associated with the invoked */
/* user-defined function. */
/***************************************************/
strcpy(argv[10],"yyyyyyyy\0");
/***************************************************/
/* Use the following expression to assign the */
/* result of the function. */
/***************************************************/
char l_result[11];
strcpy(argv[3],l_result);
.
.
.
}
The following figure shows the parameter conventions for a user-defined scalar
function written as a C subprogram that receives two parameters and returns one
result.
#pragma runopts(plist(os))
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <sqludf.h>
l_p1 = *parm1;
strcpy(l_p2,parm2);
l_ind1 = *f_ind1;
l_ind2 = *f_ind2;
strcpy(ludf_sqlstate,udf_sqlstate);
strcpy(ludf_fname,udf_fname);
strcpy(ludf_specname,udf_specname);
l_udf_call_type = *udf_call_type;
strcpy(ludf_msgtext,udf_msgtext);
memcpy(&ludf_scratchpad,udf_scratchpad,sizeof(ludf_scratchpad));
memcpy(&ludf_dbinfo,udf_dbinfo,sizeof(ludf_dbinfo));
.
.
.
}
The following figure shows the parameter conventions for a user-defined scalar
function that is written as a C++ subprogram that receives two parameters and
returns one result. This example demonstrates that you must use an extern "C"
modifier to indicate that you want the C++ subprogram to receive parameters
according to the C linkage convention. This modifier is necessary because the
CEEPIPI CALL_SUB interface, which DB2 uses to call the user-defined function,
passes parameters using the C linkage convention.
#pragma runopts(plist(os))
#include <stdlib.h>
#include <stdio.h>
#include <sqludf.h>
{
/***************************************************/
/* Define local copies of parameters. */
/***************************************************/
int l_p1;
char l_p2[11];
short int l_ind1;
short int l_ind2;
COBOL: The following figure shows the parameter conventions for a user-defined
table function that is written as a main program that receives two parameters and
returns two results. For a COBOL user-defined function that is a subprogram, the
conventions are the same.
PL/I: The following figure shows the parameter conventions for a user-defined
scalar function that is written as a main program that receives two parameters and
returns one result. For a PL/I user-defined function that is a subprogram, the
conventions are the same.
*PROCESS SYSTEM(MVS);
MYMAIN: PROC(UDF_PARM1, UDF_PARM2, UDF_RESULT,
UDF_IND1, UDF_IND2, UDF_INDR,
UDF_SQLSTATE, UDF_NAME, UDF_SPEC_NAME,
UDF_DIAG_MSG, UDF_SCRATCHPAD,
UDF_CALL_TYPE, UDF_DBINFO)
OPTIONS(MAIN NOEXECOPS REENTRANT);
When the primary program of a user-defined function calls another program, DB2
uses the CURRENT PACKAGE PATH special register to determine the list of
collections to search for the called program’s package. The primary program can
change this list of collections by executing the SET CURRENT PACKAGE PATH
statement.
The following table shows information you need when you use special registers in
a user-defined function or stored procedure.
Table 91. Characteristics of special registers in a user-defined function or a stored procedure
For each special register, the table shows the initial value when the INHERIT SPECIAL
REGISTERS option is specified, the initial value when the DEFAULT SPECIAL REGISTERS
option is specified, and whether the routine can use a SET statement to modify the register.

CURRENT APPLICATION ENCODING SCHEME
  INHERIT SPECIAL REGISTERS: The value of bind option ENCODING for the user-defined
    function or stored procedure package
  DEFAULT SPECIAL REGISTERS: The value of bind option ENCODING for the user-defined
    function or stored procedure package
  Routine can use SET to modify: Yes

CURRENT CLIENT_ACCTNG
  INHERIT SPECIAL REGISTERS: Inherited from the invoking application
  DEFAULT SPECIAL REGISTERS: Inherited from the invoking application
  Routine can use SET to modify: Not applicable (note 5)

CURRENT CLIENT_APPLNAME
  INHERIT SPECIAL REGISTERS: Inherited from the invoking application
  DEFAULT SPECIAL REGISTERS: Inherited from the invoking application
  Routine can use SET to modify: Not applicable (note 5)

CURRENT CLIENT_USERID
  INHERIT SPECIAL REGISTERS: Inherited from the invoking application
  DEFAULT SPECIAL REGISTERS: Inherited from the invoking application
  Routine can use SET to modify: Not applicable (note 5)

CURRENT CLIENT_WRKSTNNAME
  INHERIT SPECIAL REGISTERS: Inherited from the invoking application
  DEFAULT SPECIAL REGISTERS: Inherited from the invoking application
  Routine can use SET to modify: Not applicable (note 5)

CURRENT DATE
  INHERIT SPECIAL REGISTERS: New value for each SQL statement in the user-defined
    function or stored procedure package (note 1)
  DEFAULT SPECIAL REGISTERS: New value for each SQL statement in the user-defined
    function or stored procedure package (note 1)
  Routine can use SET to modify: Not applicable (note 5)

| CURRENT DEBUG MODE
|   INHERIT SPECIAL REGISTERS: Inherited from the invoking application
|   DEFAULT SPECIAL REGISTERS: DISALLOW
|   Routine can use SET to modify: Yes

| CURRENT DECFLOAT ROUNDING MODE
|   INHERIT SPECIAL REGISTERS: Inherited from the invoking application
|   DEFAULT SPECIAL REGISTERS: The value of bind option ROUNDING for the user-defined
|     function or stored procedure package
|   Routine can use SET to modify: Yes

CURRENT DEGREE
  INHERIT SPECIAL REGISTERS: CURRENT DEGREE (note 2)
  DEFAULT SPECIAL REGISTERS: The value of field CURRENT DEGREE on installation panel
    DSNTIP8
  Routine can use SET to modify: Yes

CURRENT LOCALE LC_CTYPE
  INHERIT SPECIAL REGISTERS: Inherited from the invoking application
  DEFAULT SPECIAL REGISTERS: The value of field CURRENT LC_CTYPE on installation panel
    DSNTIPF
  Routine can use SET to modify: Yes
To access transition tables in a user-defined function, use table locators, which are
pointers to the transition tables. You declare table locators as input parameters in
the CREATE FUNCTION statement using the TABLE LIKE table-name AS
LOCATOR clause.
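For example, a definition for the CHECKEMP function that the examples below implement might look like the following statement; apart from the table locator parameter, the options shown (language, SQL access, and so on) are illustrative assumptions:
CREATE FUNCTION CHECKEMP (TABLE LIKE EMP AS LOCATOR)
  RETURNS INTEGER
  EXTERNAL NAME 'CHECKEMP'
  LANGUAGE C
  PARAMETER STYLE SQL
  READS SQL DATA
  NO EXTERNAL ACTION
  FENCED;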
Assembler: The following example shows how an assembler program accesses rows
of transition table NEWEMPS.
CHECKEMP CSECT
SAVE (14,12) ANY SAVE SEQUENCE
LR R12,R15 CODE ADDRESSABILITY
USING CHECKEMP,R12 TELL THE ASSEMBLER
LR R7,R1 SAVE THE PARM POINTER
USING PARMAREA,R7 SET ADDRESSABILITY FOR PARMS
USING SQLDSECT,R8 ESTABLISH ADDRESSABILITY TO SQLDSECT
L R6,PROGSIZE GET SPACE FOR USER PROGRAM
GETMAIN R,LV=(6) GET STORAGE FOR PROGRAM VARIABLES
LR R10,R1 POINT TO THE ACQUIRED STORAGE
LR R2,R10 POINT TO THE FIELD
LR R3,R6 GET ITS LENGTH
SR R4,R4 CLEAR THE INPUT ADDRESS
SR R5,R5 CLEAR THE INPUT LENGTH
MVCL R2,R4 CLEAR OUT THE FIELD
ST R13,FOUR(R10) CHAIN THE SAVEAREA PTRS
ST R10,EIGHT(R13) CHAIN SAVEAREA FORWARD
LR R13,R10 POINT TO THE SAVEAREA
USING PROGAREA,R13 SET ADDRESSABILITY
ST R6,GETLENTH SAVE THE LENGTH OF THE GETMAIN
.
.
.
************************************************************
* Declare table locator host variable TRIGTBL *
************************************************************
TRIGTBL SQL TYPE IS TABLE LIKE EMP AS LOCATOR
************************************************************
* Declare a cursor to retrieve rows from the transition *
* table *
************************************************************
C or C++: The following example shows how a C or C++ program accesses rows
of transition table NEWEMPS.
int CHECK_EMP(int trig_tbl_id)
{
.
.
.
/**********************************************************/
/* Declare table locator host variable trig_tbl_id */
/**********************************************************/
EXEC SQL BEGIN DECLARE SECTION;
SQL TYPE IS TABLE LIKE EMP AS LOCATOR trig_tbl_id;
char name[25];
EXEC SQL END DECLARE SECTION;
.
.
.
/**********************************************************/
/* Declare a cursor to retrieve rows from the transition */
/* table */
/**********************************************************/
EXEC SQL DECLARE C1 CURSOR FOR
SELECT NAME FROM TABLE(:trig_tbl_id LIKE EMP)
WHERE SALARY > 100000;
/**********************************************************/
/* Fetch a row from transition table */
/**********************************************************/
EXEC SQL OPEN C1;
EXEC SQL FETCH C1 INTO :name;
.
.
.
EXEC SQL CLOSE C1;
.
.
.
}
COBOL: The following example shows how a COBOL program accesses rows of
transition table NEWEMPS.
PL/I: The following example shows how a PL/I program accesses rows of
transition table NEWEMPS.
CHECK_EMP: PROC(TRIG_TBL_ID) RETURNS(BIN FIXED(31))
OPTIONS(MAIN NOEXECOPS REENTRANT);
/****************************************************/
/* Declare table locator host variable TRIG_TBL_ID */
/****************************************************/
DECLARE TRIG_TBL_ID SQL TYPE IS TABLE LIKE EMP AS LOCATOR;
DECLARE NAME CHAR(24);
.
.
.
/****************************************************/
/* Declare a cursor to retrieve rows from the */
/* transition table */
/****************************************************/
EXEC SQL DECLARE C1 CURSOR FOR
SELECT NAME FROM TABLE(:TRIG_TBL_ID LIKE EMP)
WHERE SALARY > 100000;
/****************************************************/
/* Retrieve rows from the transition table */
/****************************************************/
EXEC SQL OPEN C1;
EXEC SQL FETCH C1 INTO :NAME;
.
.
.
EXEC SQL CLOSE C1;
You should include code in your program to check for a user-defined function
abend and to roll back the unit of work that contains the user-defined function
invocation.
The scratchpad consists of a 4-byte length field, followed by the scratchpad area.
The definer can specify the length of the scratchpad area in the CREATE
FUNCTION statement. The specified length does not include the length field. The
default size is 100 bytes. DB2 initializes the scratchpad for each function to binary
zeros at the beginning of execution for each subquery of an SQL statement and
does not examine or change the content thereafter. On each invocation of the
user-defined function, DB2 passes the scratchpad to the user-defined function. You
can therefore use the scratchpad to preserve information between invocations of a
reentrant user-defined function.
The scratchpad length is not specified, so the scratchpad has the default length of
100 bytes, plus 4 bytes for the length field. The user-defined function increments
an integer value and stores it in the scratchpad on each execution.
#pragma linkage(ctr,fetchable)
#include <stdlib.h>
#include <stdio.h>
/* Structure scr defines the passed scratchpad for function ctr */
struct scr {
long len;
long countr;
char not_used[96];
};
/***************************************************************/
/* Function ctr: Increments a counter and reports the value */
/* from the scratchpad. */
/* */
/* Input: None */
/* Output: INTEGER out the value from the scratchpad */
/***************************************************************/
void ctr(
long *out, /* Output answer (counter) */
short *outnull, /* Output null indicator */
char *sqlstate, /* SQLSTATE */
char *funcname, /* Function name */
char *specname, /* Specific function name */
char *mesgtext, /* Message text insert */
struct scr *scratchptr) /* Scratchpad */
{
*out = ++scratchptr->countr; /* Increment counter and */
/* copy to output variable */
*outnull = 0; /* Set output null indicator*/
return;
}
/* end of user-defined function ctr */
Suppose that your organization needs a user-defined scalar function that calculates
the bonus that each employee receives. All employee data, including salaries,
commissions, and bonuses, is kept in the employee table, EMP. The input fields for
the bonus calculation function are the values of the SALARY and COMM columns.
The output from the function goes into the BONUS column. Because this function
gets its input from a DB2 table and puts the output in a DB2 table, a convenient
way to manipulate the data is through a user-defined function.
The user-defined function’s definer and invoker determine that this new
user-defined function should have these characteristics:
v The user-defined function name is CALC_BONUS.
v The two input fields are of type DECIMAL(9,2).
v The output field is of type DECIMAL(9,2).
v The program for the user-defined function is written in COBOL and has a load
module name of CBONUS.
User-defined function invokers write and prepare application programs that invoke
CALC_BONUS. An invoker might write a statement like this, which uses the
user-defined function to update the BONUS field in the employee table:
UPDATE EMP
SET BONUS = CALC_BONUS(SALARY,COMM);
Member DSN8DUWC contains a client program that shows you how to invoke the
WEATHER user-defined table function.
Member DSNTEJ2U shows you how to define and prepare the sample user-defined
functions and the client program.
The DB2 installer sets the size of the routine authorization cache by entering a size
in field ROUTINE AUTH CACHE of DB2 installation panel DSNTIPP.
Related reference
Protection panel: DSNTIPP (DB2 Installation and Migration)
Stored procedures
| A stored procedure is a compiled program, stored at a DB2 local or remote server,
| that can execute SQL statements. You can invoke a stored procedure from an
| application program or from the command line processor.
| A typical stored procedure contains two or more SQL statements and some
| manipulative or logical processing in a host language or SQL procedure
| statements. A client application program uses the SQL statement CALL to invoke
| the stored procedure. You can also invoke a stored procedure from the command
| line processor. For more information about how to invoke the stored procedure
| from the command line processor, see “Running stored procedures from the
| command line processor” on page 1006.
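For example, an embedded CALL of the UPDATESALARY2 procedure that is defined later in this information might look like the following statement; the host variable names are illustrative assumptions:
EXEC SQL CALL UPDATESALARY2 (:empnumbr, :rating);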
Consider using stored procedures for a client/server application that does at least
one of the following things:
v Executes multiple remote SQL statements.
Remote SQL statements can create many network send and receive operations,
which results in increased processor costs. Stored procedures can encapsulate
many of your application’s SQL statements into a single message to the DB2 server.
The following figure shows processing without using stored procedures. A client
application embeds SQL statements and communicates with the server separately
for each statement. This results in increased network traffic and processor costs.
The following figure shows processing with stored procedures. The same series of
SQL statements that is illustrated in the previous figure uses a single send and
receive operation, reducing network traffic and the cost of processing these
statements.
Note:
| 1. Alternatively, steps 4 and 5 can be accomplished with a single MERGE
| statement.
The following figure illustrates the steps that are involved in executing this stored
procedure.
[Figure: Steps in executing the stored procedure. In the final steps, control returns
to the application (step 8), and the application issues an SQL COMMIT or ROLLBACK
statement and receives the result of that COMMIT or ROLLBACK (step 9).]
Notes:
1. The workstation application uses the SQL CONNECT statement to create a
conversation with DB2.
2. DB2 creates a DB2 thread to process SQL requests.
SQL procedures
An SQL procedure is a stored procedure that contains only SQL statements.
The source code for these procedures is specified in an SQL CREATE PROCEDURE
statement. The part of the CREATE PROCEDURE statement that contains SQL
statements is called the procedure body.
| DB2 for z/OS supports the following two types of SQL procedures:
| native SQL procedures
| Native SQL procedures are procedures whose body is written in SQL, and
| DB2 does not generate an associated C program. Native SQL procedures
| have the following advantages:
| v You can create them in one step.
| v They do not run in a WLM environment.
| v They might be eligible for zIIP redirect if they are invoked remotely
| through a DRDA client.
| v They usually perform better than external SQL procedures.
| v They support more capabilities, such as nested compound statements,
| than external SQL procedures.
| v DB2 can manage multiple versions of these procedures for you.
| Starting in Version 9.1, all SQL procedures that are created without the
| FENCED or EXTERNAL options in the CREATE PROCEDURE statement
| are native SQL procedures.
| external SQL procedures
| External SQL procedures are procedures whose body is written in SQL, but
| DB2 supports them by generating an associated C program for each
| procedure. All SQL procedures that were created prior to Version 9.1 are
| external SQL procedures. Starting in Version 9.1, you can create an external
| SQL procedure by specifying FENCED or EXTERNAL in the CREATE
| PROCEDURE statement.
| The body of an SQL procedure contains one or more SQL statements. You can also
| declare variables, conditions, and condition handlers and reference parameters,
| variables, and conditions.
To store data that you use only within an SQL procedure, you can declare SQL
variables. SQL variables are the equivalent of host variables in external stored
procedures. SQL variables can have the same data types and lengths as SQL
procedure parameters.
The general form of a declaration for an SQL variable that you use as a result set
locator is:
DECLARE SQL-variable-name RESULT_SET_LOCATOR VARYING;
You can perform any operations on SQL variables that you can perform on host
variables in SQL statements.
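For instance, declarations like the following (the variable names are illustrative) can appear at the beginning of a compound statement in the procedure body:
  DECLARE V_DEPTNO CHAR(3) DEFAULT 'A00';      -- SQL variable with a default value
  DECLARE V_TOTAL DECIMAL(9,2);                -- SQL variable without a default
  DECLARE V_RS RESULT_SET_LOCATOR VARYING;     -- SQL variable used as a result set locator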
These examples show how to use CASE statements, compound statements, and
nested statements within an SQL procedure body.
Example: CASE statement: The following SQL procedure demonstrates how to use
a CASE statement. The procedure receives an employee’s ID number and rating as
input parameters. The CASE statement modifies the employee’s salary and bonus,
using a different UPDATE statement for each of the possible ratings.
CREATE PROCEDURE UPDATESALARY2
(IN EMPNUMBR CHAR(6),
IN RATING INT)
LANGUAGE SQL
MODIFIES SQL DATA
CASE RATING
WHEN 1 THEN
UPDATE CORPDATA.EMPLOYEE
SET SALARY = SALARY * 1.10, BONUS = 1000
WHERE EMPNO = EMPNUMBR;
WHEN 2 THEN
UPDATE CORPDATA.EMPLOYEE
SET SALARY = SALARY * 1.05, BONUS = 500
WHERE EMPNO = EMPNUMBR;
-- ...additional WHEN or ELSE clauses for the remaining ratings...
END CASE
Language requirements for the external stored procedure and its caller
You can write an external stored procedure in Assembler, C, C++, COBOL, Java,
REXX, or PL/I. All programs must be designed to run using Language
Environment. Your COBOL and C++ stored procedures can contain object-oriented
extensions. See “Object-oriented extensions in COBOL” on page 333 for
information about including object-oriented extensions in SQL applications. For
information about writing Java stored procedures, see the topic “Creating Java
stored procedures” in DB2 Application Programming Guide and Reference for Java.
The program that calls the stored procedure can be in any language that supports
the SQL CALL statement. ODBC applications can use an escape clause to pass a
stored procedure call to DB2.
SQL procedures and external procedures consist of a procedure definition and the
code for the procedure program.
Both an SQL procedure definition and an external procedure definition specify the
following information:
v The procedure name.
v Input and output parameter attributes.
v The language in which the procedure is written. For an SQL procedure, the
language is SQL.
v Information that will be used when the procedure is called, such as run-time
options, length of time that the procedure can run, and whether the procedure
returns result sets.
An SQL procedure and external procedure share the same rules for the use of
COMMIT and ROLLBACK statements in a procedure.
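Example: The notes that follow refer to an SQL procedure definition along the following lines (a reconstruction based on those notes; the exact statement in the original may differ slightly):
  CREATE PROCEDURE UPDATESALARY1           1
   (IN EMPNUMBR CHAR(10),                  2
    IN RATE DECIMAL(6,2))
   LANGUAGE SQL                            3
   UPDATE EMP                              4
     SET SALARY = SALARY * RATE
     WHERE EMPNO = EMPNUMBR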
| Notes:
| 1 The stored procedure name is UPDATESALARY1.
| 2 The two parameters have data types of CHAR(10) and DECIMAL(6,2).
| Both are input parameters.
| 3 LANGUAGE SQL indicates that this is an SQL procedure, so a
| procedure body follows the other parameters.
| 4 The procedure body consists of a single SQL UPDATE statement, which
| updates rows in the employee table.
| Example: The following example shows a definition for an equivalent external
| stored procedure that is written in COBOL. The stored procedure program,
| which updates employee salaries, is called UPDSAL.
| CREATE PROCEDURE UPDATESALARY1 1
| (IN EMPNUMBR CHAR(10), 2
| IN RATE DECIMAL(6,2))
| LANGUAGE COBOL 3
| EXTERNAL NAME UPDSAL; 4
| Notes:
| 1 The stored procedure name is UPDATESALARY1.
| 2 The two parameters have data types of CHAR(10) and DECIMAL(6,2).
| Both are input parameters.
| 3 LANGUAGE COBOL indicates that this is an external procedure, so the
| code for the stored procedure is in a separate, COBOL program.
| 4 The name of the load module that contains the executable stored
| procedure program is UPDSAL.
For native SQL procedures that satisfy at least one of the following conditions, you
must set up the stored procedure environment:
v The native SQL procedure calls at least one external stored procedure, external
SQL procedure, or user-defined function.
| v The native SQL procedure is defined with ALLOW DEBUG MODE or
| DISALLOW DEBUG MODE. If you specify DISABLE DEBUG MODE, you do
| not need to set up the stored procedure environment.
If you changed the JCL startup procedure for an existing WLM application
environment, refresh the WLM application environment for stored procedures (DB2
Administration Guide).
| Example: The following example contains three declarations of the variable A. One
| instance is declared in the outer compound statement, which has the label
| OUTER1. The other instances are declared in the inner compound statements with
| the labels INNER1 and INNER2. In the INNER1 compound statement, DB2
| presumes that the unqualified references to A in the assignment statement and
| UPDATE statement refer to the instance of A that is declared in the INNER1
| compound statement. To refer to the instance of A that is declared in the OUTER1
| compound statement, qualify the variable as OUTER1.A.
| CREATE PROCEDURE P2 ()
| LANGUAGE SQL
|
| -- Outermost compound statement ------------------------
| OUTER1: BEGIN 1
| DECLARE A INT DEFAULT 100;
|
| -- Inner compound statement with label INNER1 ---
| INNER1: BEGIN 2
| DECLARE A INT DEFAULT NULL;
| DECLARE W INT DEFAULT NULL;
|
| SET A = A + OUTER1.A; 3
|
| UPDATE T1 SET T1.B = 5
| WHERE T1.B = A; 4
|
| SET OUTER1.A = 100; 5
|
| SET INNER1.A = 200; 6
| END INNER1; 7
| -- End of inner compound statement INNER1 ------
|
| -- Inner compound statement with label INNER2 ---
| INNER2: BEGIN 8
| DECLARE A INT DEFAULT NULL;
| DECLARE Z INT DEFAULT NULL;
|
| SET A = A + OUTER1.A;
|
| END INNER2; 9
| -- End of inner compound statement INNER2 ------
|
| SET OUTER1.A = 100; 10
|
| END OUTER1 11
| Nested compound statements are blocks of SQL statements that are contained by
| other blocks of SQL statements. Use nested compound statements in native SQL
| procedures to define condition handlers that execute more than one statement and
| to define different scopes for variables and condition handlers.
| The following pseudo code shows a basic structure of an SQL procedure with
| nested compound statements:
| CREATE PROCEDURE...
| OUTERMOST: BEGIN
| ...
| INNER1: BEGIN
| ...
| INNERMOST: BEGIN
| ...
| ...
| END INNERMOST;
| END INNER1;
| INNER2: BEGIN
| ...
| ...
| END INNER2;
| END OUTERMOST
| You can define a label for each compound statement in an SQL procedure. This
| label enables you to reference this block of statements in other statements such as
| the GOTO, LEAVE, and ITERATE SQL PL control statements. You can also use the
| label to qualify a variable when necessary. Labels are not required.
| You can reference the cursor within the compound statement in which it is
| declared and any nested statements. If the cursor is declared as a result set cursor,
| even if the cursor is not declared in the outermost compound statement, any
| calling application can reference it.
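A minimal sketch (assuming a DEPT table; the labels are illustrative): a result set cursor that is declared in an inner compound statement can still be returned to the calling application.
  OUTER1: BEGIN
    INNER1: BEGIN
      -- Result set cursor declared in a nested compound statement
      DECLARE C1 CURSOR WITH RETURN FOR
        SELECT DEPTNO, DEPTNAME FROM DEPT;
      OPEN C1;             -- left open so that the caller receives the result set
    END INNER1;
  END OUTER1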
| If an error occurs when an SQL procedure executes, the procedure ends unless you
| include statements to tell the procedure to perform some other action. These
| statements are called handlers.
| In general, the way that a handler works is that when an error occurs that matches
| the specified condition, the associated SQL-procedure-statement executes. When the
| SQL-procedure-statement completes, DB2 performs the action that is indicated by
| the handler type.
| Types of handlers
| The handler type determines what happens after the completion of the
| SQL-procedure-statement. You can declare the handler type to be either CONTINUE
| or EXIT:
| CONTINUE
| Specifies that after SQL-procedure-statement completes, execution continues with
| the statement after the statement that caused the error.
| EXIT
| Specifies that after SQL-procedure-statement completes, execution continues at
| the end of the compound statement that contains the handler.
| Example: EXIT handler: This handler places the string ’Table does not exist’ into
| output parameter OUT_BUFFER when condition NO_TABLE occurs. NO_TABLE is
| previously declared as SQLSTATE 42704 (name is an undefined name). The handler
| then causes the SQL procedure to exit the compound statement in which the
| handler is declared.
| DECLARE NO_TABLE CONDITION FOR '42704';
|    .
|    .
|    .
| DECLARE EXIT HANDLER FOR NO_TABLE
|   SET OUT_BUFFER='Table does not exist';
| A condition handler defines the action that an SQL procedure takes when a
| particular condition occurs. You must specify the action as a single SQL procedure
| statement.
| To define a condition handler that executes more than one statement when the
| specified condition occurs, specify a compound statement within the declaration of
| that handler.
| Example: The following example shows a condition handler that captures the
| SQLSTATE value and sets a local flag to TRUE.
| BEGIN
| DECLARE SQLSTATE CHAR(5);
| DECLARE PrvSQLState CHAR(5) DEFAULT '00000';
| DECLARE ExceptState INT;
| DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
| BEGIN
| SET PrvSQLState = SQLSTATE;
| SET ExceptState = TRUE;
| END;
| ...
| END
| To control how errors are handled within different scopes in an SQL procedure:
| 1. Optional: Declare a condition by specifying a DECLARE CONDITION
| statement within the compound statement in which you want to reference it.
| You can reference a condition in the declaration of a condition handler, a
| SIGNAL statement, or a RESIGNAL statement.
| Restriction: If multiple conditions with that name exist within the same scope,
| you cannot explicitly refer to a condition that is not the most local in scope.
| DB2 uses the condition in the innermost compound statement.
| 2. Declare a condition handler by specifying a DECLARE HANDLER statement
| within the compound statement to which you want the condition handler to
| apply. Within the declaration of the condition handler, you can specify a
| previously defined condition.
| Example: In the following example, a condition with the name ABC is declared
| twice, and a condition named XYZ is declared once.
| CREATE PROCEDURE...
| DECLARE ABC CONDITION...
|
| DECLARE XYZ CONDITION...
| BEGIN
| DECLARE ABC CONDITION...
| SIGNAL ABC; 1
| END;
|
| SIGNAL ABC; 2
| Example: In the following example, the compound statement with the label
| OUTER contains two other compound statements: INNER1A and INNER1B. The
| INNER1A compound statement contains another compound statement, which has
| the label INNER1A2, and the declaration for a condition handler HINNER1A. The
| body of the condition handler HINNER1A contains another compound statement,
| which defines another condition handler, HINNER1A_HANDLER.
| OUTER:
| BEGIN <=============.
| -- Handler for OUTER |
| DECLARE ... HANDLER -- HOUTER |
| BEGIN <---. |
| : | |
| END; -- End of handler <---. |
| : |
| : |
| |
| -- Level 1 - first compound statement |
| INNER1A: |
| BEGIN <---------. |
| -- Handler for INNER1A | |
| DECLARE ... HANDLER -- HINNER1A | |
| BEGIN <------. | |
| -- Handler for handler HINNER1A | |
| DECLARE...HANDLER --HINNER1A_HANDLER| | |
| BEGIN <---. | | |
| : | | | |
| END; -- End of handler <---. | | |
| : | | |
| : -- stmt that gets condition | | | 2
| : | | |
| : -- more statements in handler | | |
| END; -- End of HINNER1A handler<------. | |
| | |
| INNER1A2: | |
| BEGIN <--. | |
| DECLARE ... HANDLER...-- HINNER1A2 | | |
| BEGIN; <---. | | |
| : | | | |
| Example: In the following example, DB2 checks for SQLSTATE 22H11 only for
| statements inside the INNER compound statement. DB2 checks for
| SQLEXCEPTION for all statements in both the OUTER and INNER blocks.
| OUTER: BEGIN
| DECLARE var1 INT;
| DECLARE EXIT HANDLER FOR SQLEXCEPTION
| RETURN -3;
|
| INNER: BEGIN
| DECLARE EXIT HANDLER FOR SQLSTATE '22H11'
| RETURN -1;
| DECLARE C1 CURSOR FOR SELECT col1 FROM table1;
| OPEN C1;
| CLOSE C1;
| :
| : -- more statements
| END INNER;
| :
| : -- more statements
| Example: In the following example, DB2 checks for SQLSTATE 42704 only for
| statements inside the A compound statement.
| CREATE PROCEDURE EXIT_TEST ()
| LANGUAGE SQL
| BEGIN
| DECLARE OUT_BUFFER VARCHAR(80);
| DECLARE NO_TABLE CONDITION FOR SQLSTATE '42704';
|
| A: BEGIN 1
| DECLARE EXIT HANDLER FOR NO_TABLE 3
| BEGIN
| SET OUT_BUFFER ='Table does not exist'; 4
| END;
|
| -- Drop potentially nonexistent table:
| DROP TABLE JAVELIN; 2
|
| B: SET OUT_BUFFER ='Table dropped successfully';
| END;
| -- Copy OUT_BUFFER to some message table:
| C: INSERT INTO MESSAGES VALUES (OUT_BUFFER); 5
| The following notes describe a possible flow for the preceding example:
| 1. A nested compound statement with label A confines the scope of the
| NO_TABLE exit handler to the statements that are specified in the A compound
| statement.
| Example: In the following example SQL procedure, the condition handler for
| exception1 is not within the scope of the condition handler for exception0. If
| exception condition exception1 is raised in the body of the condition handler for
| exception0, no appropriate handler exists, and the procedure terminates with an
| unhandled exception.
| CREATE PROCEDURE divide ( .....)
| LANGUAGE SQL CONTAINS SQL
| BEGIN
| DECLARE dn_too_long CHAR(5) DEFAULT 'abcde';
|
| -- Declare condition names --------------------------
| DECLARE exception0 CONDITION FOR SQLSTATE '22001';
| DECLARE exception1 CONDITION FOR SQLSTATE 'xxxxx';
|
| -- Declare cursors ----------------------------------
| DECLARE cursor1 CURSOR WITH RETURN FOR
| SELECT * FROM dept;
|
| -- Declare handlers ---------------------------------
| DECLARE CONTINUE HANDLER FOR exception0
| BEGIN
| some SQL statement that causes an error 'xxxxx'
| END
|
| DECLARE CONTINUE HANDLER FOR exception1
| BEGIN
| ...
| END
|
|
| -- Mainline of procedure ----------------------------
| INSERT INTO DEPT (DEPTNO) VALUES (dn_too_long);
| -- Assume that this statement results in SQLSTATE '22001'
|
| OPEN CURSOR1;
| END
| Handlers specify the action that an SQL procedure takes when a particular error or
| condition occurs. In some cases, you want to retrieve additional diagnostic
| information about the error or warning condition.
| Example: Using GET DIAGNOSTICS to retrieve message text: Suppose that you
| create an SQL procedure, named divide1, that computes the result of the division
| of two integers. You include GET DIAGNOSTICS to return the text of the division
| error message as an output parameter:
| CREATE PROCEDURE divide1
| (IN numerator INTEGER, IN denominator INTEGER,
| OUT divide_result INTEGER, OUT divide_error VARCHAR(1000))
| LANGUAGE SQL
| BEGIN
| DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
| GET DIAGNOSTICS CONDITION 1 divide_error = MESSAGE_TEXT;
| SET divide_result = numerator / denominator;
| END
| You can create a condition handler to specify that you want to ignore a condition
| within a particular scope of statements in an SQL procedure.
| Example
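For instance, a CONTINUE handler whose only action is to set a local variable effectively ignores the condition within the compound statement in which the handler is declared (the names and SQLSTATE value here are illustrative):
  DECLARE ROW_NOT_FOUND CONDITION FOR SQLSTATE '02000';
  DECLARE IGNORED INT DEFAULT 0;
  DECLARE CONTINUE HANDLER FOR ROW_NOT_FOUND
    SET IGNORED = IGNORED + 1;   -- no other action; processing continues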
| You can use either a SIGNAL or RESIGNAL statement to raise a condition with a
| specific SQLSTATE and message text within an SQL procedure. The SIGNAL and
| RESIGNAL statements differ in the following ways:
| v You can use the SIGNAL statement anywhere within an SQL procedure. You
| must specify the SQLSTATE value. In addition, you can use the SIGNAL
| statement in a trigger body. For information about using the SIGNAL statement
| in a trigger, see “Creating triggers” on page 457.
| v You can use the RESIGNAL statement only within a handler of an SQL
| procedure. If you do not specify the SQLSTATE value, DB2 uses the same
| SQLSTATE value that activated the handler.
| You can use any valid SQLSTATE value in a SIGNAL or RESIGNAL statement,
| except an SQLSTATE class with ’00’ as the first two characters.
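The notes that follow describe three RESIGNAL statements of roughly the following form (a sketch; the SQLSTATE value and message text come from the notes):
  RESIGNAL;                                             -- 1
  RESIGNAL SQLSTATE '98765';                            -- 2
  RESIGNAL SQLSTATE '98765' SET MESSAGE_TEXT = 'xyz';   -- 3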
| Note:
| 1. This statement raises the current condition with the existing SQLSTATE,
| SQLCODE, message text, and tokens.
| 2. This statement raises a new condition (SQLSTATE ’98765’). Existing message
| text and tokens are reset. The SQLCODE is set to -438 for an error or 438 for a
| warning.
| 3. This statement raises a new condition (SQLSTATE ’98765’) with new message
| text (’xyz’). The SQLCODE is set to -438 for an error or 438 for a warning.
| You can use the SIGNAL statement anywhere within an SQL procedure to raise a
| particular condition. This example shows how to use the SIGNAL statement to
| raise a particular SQLSTATE and set the associated message text.
| The following example uses an ORDERS table and a CUSTOMERS table that are
| defined in the following way:
| CREATE TABLE ORDERS
| (ORDERNO INTEGER NOT NULL,
| PARTNO INTEGER NOT NULL,
| ORDER_DATE DATE DEFAULT,
| CUSTNO INTEGER NOT NULL,
| QUANTITY SMALLINT NOT NULL,
| CONSTRAINT REF_CUSTNO FOREIGN KEY (CUSTNO)
| REFERENCES CUSTOMERS (CUSTNO) ON DELETE RESTRICT,
| PRIMARY KEY (ORDERNO,PARTNO));
| CREATE TABLE CUSTOMERS
| (CUSTNO INTEGER NOT NULL,
| CUSTNAME VARCHAR(30),
| CUSTADDR VARCHAR(80),
| PRIMARY KEY (CUSTNO));
| Suppose that you have an SQL procedure for an order system that signals an
| application error when a customer number is not known to the application. The
| ORDERS table has a foreign key to the CUSTOMERS table, which requires that the
| CUSTNO exist in the CUSTOMERS table before an order can be inserted:
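A sketch of such a procedure follows (the procedure name, SQLSTATE values, and message text are illustrative). A handler for the referential-constraint violation signals an application-specific SQLSTATE with its own message text:
  CREATE PROCEDURE SUBMIT_ORDER
    (IN ONO INTEGER, IN PNO INTEGER,
     IN CNO INTEGER, IN QTY SMALLINT)
    LANGUAGE SQL
    MODIFIES SQL DATA
  BEGIN
    DECLARE EXIT HANDLER FOR SQLSTATE VALUE '23503'
      SIGNAL SQLSTATE '75002'
        SET MESSAGE_TEXT = 'Customer number is not known';
    INSERT INTO ORDERS (ORDERNO, PARTNO, CUSTNO, QUANTITY)
      VALUES (ONO, PNO, CNO, QTY);
  END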
| In this example, the SIGNAL statement is in the handler. However, you can use the
| SIGNAL statement to invoke a handler when a condition occurs that will result in
| an error.
| Related concepts
| “Example of the RESIGNAL statement in a handler”
| You can use the RESIGNAL statement in an SQL procedure to assign a different
| value to the condition that activated the handler. This example shows how to use
| the RESIGNAL statement to reset a particular SQLSTATE and the associated
| message text.
| Suppose that you create an SQL procedure, named divide2, that computes the
| result of the division of two integers. You include SIGNAL to invoke the handler
| with an overflow condition that is caused by a zero divisor, and you include
| RESIGNAL to set a different SQLSTATE value for that overflow condition:
| CREATE PROCEDURE divide2
| (IN numerator INTEGER, IN denominator INTEGER,
| OUT divide_result INTEGER)
| LANGUAGE SQL
| BEGIN
| DECLARE overflow CONDITION FOR SQLSTATE '22003';
| DECLARE CONTINUE HANDLER FOR overflow
| RESIGNAL SQLSTATE '22375';
| IF denominator = 0 THEN
| SIGNAL overflow;
| ELSE
| SET divide_result = numerator / denominator;
| END IF;
| END
| If the following SQL procedure is invoked with argument values 1, 0, and 0, the
| procedure returns a value of 2 for RC and sets the oparm1 parameter to 650.
| CREATE PROCEDURE resig4
| (IN iparm1 INTEGER, INOUT oparm1 INTEGER, INOUT rc INTEGER)
| LANGUAGE SQL
| A1: BEGIN
| DECLARE c1 INT DEFAULT 1;
| DECLARE CONTINUE HANDLER FOR SQLSTATE VALUE '01ABX'
| BEGIN
| .... some other statements
| SET RC = 3; 6
| END;
| When you issue a SIGNAL statement, a new diagnostics area is logically created.
| When you issue a RESIGNAL statement, the current diagnostics area is updated.
| When you issue a SIGNAL statement, a new diagnostics area is logically created.
| In that diagnostics area, RETURNED_SQLSTATE is set to the SQLSTATE or
| condition name specified. If you specified message text as part of the SIGNAL
| statement, MESSAGE_TEXT in the diagnostics area is also set to the specified
| value.
| When you issue a RESIGNAL statement with a SQLSTATE value, condition name,
| or message text, the current diagnostics area is updated with the specified
| information.
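For example, a statement like the following (the SQLSTATE value and message text are illustrative) creates a new diagnostics area in which RETURNED_SQLSTATE is '75001' and MESSAGE_TEXT is the specified string:
  SIGNAL SQLSTATE '75001'
    SET MESSAGE_TEXT = 'Quantity on hand is too low';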
| You can use the RETURN statement in an SQL procedure to return an integer
| status value. If a RETURN statement with a specified return value is used to return
| from a procedure, DB2 sets the SQLCODE in the SQLCA to 0, and the caller must
| retrieve the return status of the procedure in either of the following ways:
| v By using the DB2_RETURN_STATUS item of GET DIAGNOSTICS to retrieve the
| return value of the RETURN statement
| v By retrieving SQLERRD(1) of the SQLCA (SQLERRD[0] of the SQLCA in C
| applications), which contains the return value of the RETURN statement
| When the SQLCODE is not less than zero, the caller can access the value directly
| from the SQLCA returned from processing the CALL of an SQL procedure by
| retrieving the value of SQLERRD[0]. When the SQLCODE is less than zero, the
| SQLERRD[0] value is not set and the application should assume a return status
| value of -1.
| Example: Using GET DIAGNOSTICS to retrieve the return status: Suppose that
| you create an SQL procedure, named TESTIT, that calls another SQL procedure,
| named TRYIT. The TRYIT procedure returns a status value, and the TESTIT
| procedure retrieves that value with the DB2_RETURN_STATUS item of GET
| DIAGNOSTICS:
| CREATE PROCEDURE TESTIT ()
| LANGUAGE SQL
| A1:BEGIN
| DECLARE RETVAL INTEGER DEFAULT 0;
| ...
| CALL TRYIT;
| GET DIAGNOSTICS RETVAL = DB2_RETURN_STATUS;
| IF RETVAL <> 0 THEN
| ...
| LEAVE A1;
| ELSE
| ...
| END IF;
| END A1
| To make copies of a package for a native SQL procedure, specify the BIND
| PACKAGE command with the COPY option. For copies that are created on the
| current server, specify a different schema qualifier, which is the collection ID. For
| the first copy that is created on a remote server, you can specify the same schema
| qualifier. For other copies on that remote server, specify a different schema
| qualifier.
| If you later change the native SQL procedure, you might need to explicitly rebind
| any local or remote copies of the package that exist for that version of the
| procedure. For more information on the situations when this rebinding is
| necessary and how to rebind, see “Replacing copies of a package for a version of a
| native SQL procedure” on page 555 and “Regenerating an existing version of a
| native SQL procedure” on page 560.
| Example: The following native SQL procedure sets the CURRENT PACKAGESET
| special register to ensure that DB2 uses the package with the collection ID COLL2
| for this version of the procedure. Consequently, you must create such a package.
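A minimal sketch of such a procedure (the procedure name, version ID, and body are illustrative):
  CREATE PROCEDURE TEST.MYPROC ()
    VERSION V1
    LANGUAGE SQL
  BEGIN
    -- Direct DB2 to the package copy in collection COLL2
    SET CURRENT PACKAGESET = 'COLL2';
    -- ...remaining statements of the procedure...
  END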
| For more information about the bind options that you can set when you create or
| alter a native SQL procedure, see the topic “ALTER PROCEDURE (SQL - native)”
| in DB2 SQL Reference.
| When you change a version of a native SQL procedure and the ALTER
| PROCEDURE REPLACE statement contains a certain option or options, you must
| replace any local or remote copies of the package that exist for that version of the
| procedure.
| If you specify any of the following ALTER PROCEDURE options, you must replace
| copies of the package:
| v REPLACE VERSION
| v REGENERATE
| v DISABLE DEBUG MODE
| v QUALIFIER
| v PACKAGE OWNER
| v DEFER PREPARE
| v NODEFER PREPARE
| v CURRENT DATA
| v DEGREE
| v DYNAMICRULES
| v APPLICATION ENCODING SCHEME
| v WITH EXPLAIN
| v WITHOUT EXPLAIN
| v WITH IMMEDIATE WRITE
| v WITHOUT IMMEDIATE WRITE
| v ISOLATION LEVEL
| v WITH KEEP DYNAMIC
| v WITHOUT KEEP DYNAMIC
| v OPTHINT
| v SQL PATH
| v RELEASE AT COMMIT
| v RELEASE AT DEALLOCATE
| v REOPT
| v VALIDATE RUN
| To replace copies of a package for a version of a native SQL procedure, specify the
| BIND PACKAGE command with the COPY option, the ACTION(REPLACE) option,
| and the appropriate package name and version ID.
| You can define multiple versions of a native SQL procedure. DB2 maintains this
| version information for you.
| One or more versions of a procedure can exist at any point in time at the current
| server, but only one version of a procedure is considered the active version. When
| you first create a procedure, that initial version is considered the active version of
| the procedure.
| Using multiple versions of a native SQL procedure has the following advantages:
| v You can keep the existing version of a procedure active while you create another
| version. When the other version is ready, you can make it the active one.
| v When you make another version of a procedure active, you do not need to
| change any existing calls to that procedure.
| v You can easily switch back to a previous version of a procedure if the version
| that you switched to does not work as planned.
| v You can drop an unneeded version of a procedure.
| A new version of a native SQL procedure can have different values for the
| following items:
| v parameter names
| v procedure options
| v procedure body
| Restriction: A new version of a native SQL procedure cannot have different values
| for the following items:
| v number of parameters
| v parameter data types
| v parameter attributes for character data
| v parameter CCSIDs
| v Whether a parameter is an input or output parameter, as defined by the IN,
| OUT, and INOUT options
| If you need to specify different values for any of the preceding items, create a new
| native SQL procedure.
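As a sketch of how versions are managed (the procedure and version names are illustrative, and option details are omitted), you might add and then activate a new version as follows:
  -- Add a second version of an existing procedure P1
  ALTER PROCEDURE P1
    ADD VERSION V2 (IN PARM1 INTEGER)
    BEGIN
      DECLARE WORKVAL INTEGER DEFAULT 0;
      SET WORKVAL = PARM1;    -- revised logic for version V2
    END
  -- Make the new version the active version
  ALTER PROCEDURE P1 ACTIVATE VERSION V2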
| Requirements:
| v The remote server must be properly defined in the communications database of
| the DB2 subsystem from which you deploy the native SQL procedure.
| v The target DB2 subsystem must be operating at a PTF level that is compatible
| with the PTF level of the local DB2 subsystem.
| Tip: When specifying the parameters for the DEPLOY option, consider the
| following naming rules for native SQL procedures:
| v The collection ID is the same as the schema name in the original CREATE
| PROCEDURE statement.
| v The package ID is the same as the procedure name in the original CREATE
| PROCEDURE statement.
| COPYVER
| Specify the version of the procedure whose logic you want to use on the target
| server.
| ACTION(ADD) or ACTION(REPLACE)
| Specify whether you want DB2 to create a new version of the native SQL
| procedure and its associated package or to replace the specified version.
| Optionally, you can also specify the bind options QUALIFIER or OWNER if you want
| to change them.
| If you created the external SQL procedure in a previous release of DB2, consider
| the release incompatibilities for applications that use stored procedures. Examine
| your external SQL procedure source code, and make any adjustments.
| For external SQL procedures, DB2 assumes that C1 is a parameter. For native
| SQL procedures, DB2 assumes that C1 is a column in table T1. If such a
| column does not exist, DB2 then assumes that C1 is a parameter.
| 4. Issue the same GRANT EXECUTE statements that you used to originally grant
| privileges for this stored procedure.
| 5. Test your new native SQL procedure.
Before you remove an existing version of a native SQL procedure, ensure that the
version is not active. If the version is the active version, designate a different active
version before proceeding.
| Issue the ALTER PROCEDURE statement with the DROP VERSION clause and the
| name of the version that you want to drop. If you instead want to drop all
| versions of the procedure, use the DROP statement.
| Example of dropping a version that is not active: The following statement drops
| the OLD_PRODUCTION version of the P1 procedure.
| ALTER PROCEDURE P1 DROP VERSION OLD_PRODUCTION
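To drop the procedure and all of its versions, use the DROP statement instead:
  DROP PROCEDURE P1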
| Configure DB2 for running stored procedures and user-defined functions (DB2
| Installation Guide).
| The SQL procedure processor, DSNTPSMP, is a REXX stored procedure that you
| can use to prepare an external SQL procedure for execution.
| You can also use DSNTPSMP to perform selected steps in the preparation process
| or delete an existing external SQL procedure. DSNTPSMP is the only preparation
| method for enabling external SQL procedures to be debugged with either the SQL
| Debugger or the Unified Debugger.
| The following example shows sample JCL for a startup procedure for the address
| space in which DSNTPSMP runs.
| //DSN8WLMP PROC DB2SSN=DSN,NUMTCB=1,APPLENV=WLMTPSMP 1
| //*
| //WLMTPSMP EXEC PGM=DSNX9WLM,TIME=1440, 2
| // PARM='&DB2SSN,&NUMTCB,&APPLENV',
| // REGION=0M,DYNAMNBR=10
| //STEPLIB DD DISP=SHR,DSN=DSN910.SDSNEXIT 3
| // DD DISP=SHR,DSN=DSN910.SDSNLOAD
| // DD DISP=SHR,DSN=CBC.SCCNCMP
| // DD DISP=SHR,DSN=CEE.SCEERUN
| //SYSEXEC DD DISP=SHR,DSN=DSN910.SDSNCLST 4
| //SYSTSPRT DD SYSOUT=A
| //CEEDUMP DD SYSOUT=A
| //SYSABEND DD DUMMY
| //*
| //SQLDBRM DD DISP=SHR,DSN=DSN910.DBRMLIB.DATA 5
| //SQLCSRC DD DISP=SHR,DSN=USER.PSMLIB.DATA 6
| //SQLLMOD DD DISP=SHR,DSN=DSN910.RUNLIB.LOAD 7
| //SQLLIBC DD DISP=SHR,DSN=CEE.SCEEH.H 8
| // DD DISP=SHR,DSN=CEE.SCEEH.SYS.H
| //SQLLIBL DD DISP=SHR,DSN=CEE.SCEELKED 9
| // DD DISP=SHR,DSN=DSN910.SDSNLOAD
| //SYSMSGS DD DISP=SHR,DSN=CEE.SCEEMSGP(EDCPMSGE) 10
| //*
| //* DSNTPSMP Configuration File - CFGTPSMP (optional) 11
| //* A site-provided sequential dataset or member, used to
| //* define customized operation of DSNTPSMP in this APPLENV
| //*
| //* CFGTPSMP DD DISP=SHR,DSN=
| //*
| //SQLSRC DD UNIT=SYSALLDA,SPACE=(800,(20,20)), 12
| // DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
| //SQLPRINT DD UNIT=SYSALLDA,SPACE=(16000,(20,20)),
| // DCB=(RECFM=VB,LRECL=137,BLKSIZE=882)
| //SQLTERM DD UNIT=SYSALLDA,SPACE=(4000,(20,20)),
| // DCB=(RECFM=VB,LRECL=137,BLKSIZE=882)
| //SQLOUT DD UNIT=SYSALLDA,SPACE=(16000,(20,20)),
| // DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
| //SQLCPRT DD UNIT=SYSALLDA,SPACE=(16000,(20,20)),
| // DCB=(RECFM=VB,LRECL=137,BLKSIZE=882)
| //SQLUT1 DD UNIT=SYSALLDA,SPACE=(16000,(20,20)),
| // DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
| //SQLUT2 DD UNIT=SYSALLDA,SPACE=(16000,(20,20)),
| // DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
| //SQLCIN DD UNIT=SYSALLDA,SPACE=(8000,(20,20))
| //SQLLIN DD UNIT=SYSALLDA,SPACE=(3200,(30,30)),
| // DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
| //SYSMOD DD UNIT=SYSALLDA,SPACE=(8000,(20,20)),
| // DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
| //SQLDUMMY DD DUMMY
| Also ensure that you have the required authorizations, as indicated in the
| following table, for invoking DSNTPSMP.
Table 94. Required authorizations for invoking DSNTPSMP

Procedure privilege to run application programs that invoke the stored procedure
   EXECUTE ON PROCEDURE SYSPROC.DSNTPSMP

Collection privilege to use BIND to create packages in the specified collection.
You can use an asterisk (*) as the identifier for a collection.
   CREATE ON COLLECTION collection-id

Package privilege to use BIND or REBIND to bind packages in the specified collection
   BIND ON PACKAGE collection-id.*

System privilege to use BIND with the ADD option to create packages and plans
   BINDADD

Schema privilege to create, alter, or drop stored procedures in the specified schema.
The BUILDOWNER authorization ID must have the CREATEIN privilege on the schema.
You can use an asterisk (*) as the identifier for a schema.
   CREATEIN, ALTERIN, DROPIN ON SCHEMA schema-name

Table privileges to select or delete from, insert into, or update the specified
catalog tables
   SELECT ON TABLE SYSIBM.SYSROUTINES
   SELECT ON TABLE SYSIBM.SYSPARMS
   SELECT, INSERT, UPDATE, DELETE ON TABLE SYSIBM.SYSROUTINES_SRC
   SELECT, INSERT, UPDATE, DELETE ON TABLE SYSIBM.SYSROUTINES_OPTS
   ALL ON TABLE SYSIBM.SYSPSMOUT

Any privileges that are required for the SQL statements that are contained within
the SQL procedure body. These privileges must be associated with the OWNER
authorization-id that is specified in your bind options. The default owner is the
user that is invoking DSNTPSMP.
   Syntax varies depending on the SQL statements in the procedure body
| You can invoke the SQL procedure processor, DSNTPSMP, from an application
| program by using an SQL CALL statement. DSNTPSMP prepares an external SQL
| procedure.
| The following diagrams show the syntax of invoking DSNTPSMP through the SQL
| CALL statement:
| [Figure 33 (syntax diagram). CALL DSNTPSMP bind-options, compiler-options, precompiler-options, prelink-options, link-options]
| Note: You must specify:
| v The DSNTPSMP parameters in the order listed
| v The empty string if an optional parameter is not required for the function
| v The options in the order: bind, compiler, precompiler, prelink, and link
| These examples show how to invoke the BUILD, DESTROY, REBUILD, and
| REBIND functions of DSNTPSMP.
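For instance, a BUILD request might look like the following sketch, which mirrors the REBUILD_DEBUG example that follows (the schema, options, and data set names are illustrative):
  EXEC SQL CALL SYSPROC.DSNTPSMP('BUILD','MYSCHEMA.SQLPROC','',
       'VALIDATE(BIND)',
       'SOURCE,LIST,LONGNAME,RENT',
       'SOURCE,XREF,STDSQL(NO)',
       '',
       'AMODE=31,RMODE=ANY,MAP,RENT',
       '','DSN910.SDSNSAMP(PROCSRC)','','',
       :returnval);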
| If you want to recreate an existing external SQL procedure for debugging with the
| SQL Debugger and the Unified Debugger, use the following CALL statement,
| which includes the REBUILD_DEBUG function:
| EXEC SQL CALL SYSPROC.DSNTPSMP('REBUILD_DEBUG','MYSCHEMA.SQLPROC','',
| 'VALIDATE(BIND)',
| 'SOURCE,LIST,LONGNAME,RENT',
| 'SOURCE,XREF,STDSQL(NO)',
| '',
| 'AMODE=31,RMODE=ANY,MAP,RENT',
| '','DSN910.SDSNSAMP(PROCSRC)','','',
| :returnval);
| DSNTPSMP returns one result set that contains messages and listings. You can
| write your client program to retrieve information from this result set.
| Rows in the message result set are ordered by processing step, ddname, and
| sequence number.
| Restriction: You cannot use JCL to prepare an external SQL procedure for
| debugging with the DB2 stored procedure debugger or the Unified Debugger. If
| you plan to use either of these debugging tools, use either DSNTPSMP or IBM
| Optim Development Studio to create the external SQL procedure.
| To create an external SQL procedure by using JCL, include the following job steps
| in your JCL job:
| 1. Issue a CREATE PROCEDURE statement that includes either the FENCED
| keyword or the EXTERNAL keyword and the procedure body, which is written
| in SQL.
| Alternatively, you can issue the CREATE PROCEDURE statement dynamically
| by using an application such as SPUFI, DSNTEP2, DSNTIAD, or the command
| line processor.
| Example: Suppose that you define an external SQL procedure by issuing the
| following CREATE PROCEDURE statement dynamically:
| CREATE PROCEDURE DEVL7083.EMPDTLSS
| (
| IN PEMPNO CHAR(6)
| ,OUT PFIRSTNME VARCHAR(12)
| ,OUT PMIDINIT CHAR(1)
| ,OUT PLASTNAME VARCHAR(15)
| ,OUT PWORKDEPT CHAR(3)
| ,OUT PHIREDATE DATE
| ,OUT PSALARY DEC(9,2)
| ,OUT PSQLCODE INTEGER
| )
| RESULT SETS 0
| MODIFIES SQL DATA
| FENCED
| NO DBINFO
| WLM ENVIRONMENT DB9AWLMR
| STAY RESIDENT NO
| COLLID DEVL7083
| PROGRAM TYPE MAIN
| RUN OPTIONS 'TRAP(OFF),RPTOPTS(OFF)'
| COMMIT ON RETURN NO
| LANGUAGE SQL
| BEGIN
| DECLARE SQLCODE INTEGER;
| DECLARE SQLSTATE CHAR(5);
| DECLARE EXIT HANDLER FOR SQLEXCEPTION SET PSQLCODE = SQLCODE;
| SELECT
| FIRSTNME
| , MIDINIT
| , LASTNAME
| , WORKDEPT
| , HIREDATE
| , SALARY
| INTO PFIRSTNME
| , PMIDINIT
| , PLASTNAME
| , PWORKDEPT
| , PHIREDATE
| , PSALARY
| FROM EMP
| WHERE EMPNO = PEMPNO
| ;
| END
| You can use JCL that is similar to the following JCL to prepare the procedure:
| The following table lists the sample jobs that DB2 provides for external SQL
| procedures.
Table 99. External SQL procedure samples shipped with DB2

DSNHSQL (JCL procedure)
   Precompiles, compiles, prelink-edits, and link-edits an external SQL procedure.

DSNTEJ63 (JCL job)
   Invokes JCL procedure DSNHSQL to prepare external SQL procedure DSN8ES1 for execution.

DSN8ES1 (external SQL procedure)
   A stored procedure that accepts a department number as input and returns a result
   set that contains salary information for each employee in that department.

DSNTEJ64 (JCL job)
   Prepares client program DSN8ED3 for execution.

DSN8ED3 (C program)
   Calls SQL procedure DSN8ES1.

DSN8ES2 (external SQL procedure)
   A stored procedure that accepts one input parameter and returns two output
   parameters. The input parameter specifies a bonus to be awarded to managers. The
   external SQL procedure updates the BONUS column of the DSN8910.EMP sample table.
   If no SQL error occurs when the external SQL procedure runs, the first output
   parameter contains the total of all bonuses awarded to managers and the second
   output parameter contains a null value. If an SQL error occurs, the second output
   parameter contains an SQLCODE.

DSN8ED4 (C program)
   Calls the SQL procedure processor, DSNTPSMP, to prepare DSN8ES2 for execution.
Configure DB2 for running stored procedures and user-defined functions (DB2
Installation Guide).
Restriction: These instructions do not apply to Java stored procedures. The process
for creating a Java stored procedure is different. The preparation process varies
depending on what the procedure contains.
| Suppose that you have written and prepared a stored procedure that has the
| following characteristics:
| v The name of the stored procedure is B.
| v The stored procedure has the following two parameters:
| – An integer input parameter that is named V1
| – A character output parameter of length 9 that is named V2
| v The stored procedure is written in the C language.
| v The stored procedure contains no SQL statements.
| v The same input always produces the same output.
| v The load module name is SUMMOD.
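A definition with these characteristics might look like the following sketch (additional options, such as the WLM environment and parameter style, are omitted here):
  CREATE PROCEDURE B(IN V1 INTEGER, OUT V2 CHAR(9))
    LANGUAGE C
    DETERMINISTIC
    NO SQL
    EXTERNAL NAME SUMMOD;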
| Example: The following statement allows an application that runs under the
| authorization ID JONES to call stored procedure SPSCHEMA.STORPRCA:
| GRANT EXECUTE ON PROCEDURE SPSCHEMA.STORPRCA TO JONES;
You can now invoke the stored procedure from an application program.
DBINFO structure
Use the DBINFO structure to pass environment information to user-defined
functions and stored procedures. Some fields in the structure are not used for
stored procedures.
DBINFO is a structure that contains information such as the name of the current
server, the application runtime authorization ID, and the identification of the version
and release of the database manager that invoked the procedure.
As part of the process of creating an external stored procedure, you prepare the
procedure, which means that you precompile, compile, link-edit, and bind the
application. The result of this process is a DB2 package. You do not need to create
a DB2 plan for an external procedure. The procedure runs under the caller’s thread
and uses the plan from the client program that calls it.
The calling application can use a DB2 package or plan to execute the CALL
statement.
Both the stored procedure package and the calling application plan or package
must exist on the server before you run the calling application.
The following figure shows this relationship between a client program and a stored
procedure. In the figure, the client program, which was bound into package A,
issues a CALL statement to program B. Program B is an external stored procedure
in a WLM address space. This external stored procedure was bound into package
B.
You can control access to the stored procedure package by specifying the ENABLE
bind option when you bind the package.
In the following situations, the stored procedure might use more than one package:
v You bind a DBRM several times into several versions of the same package, all of
which have the same package name but reside in different package collections.
Your stored procedure can switch from one version to another by using the SET
CURRENT PACKAGESET statement.
v The stored procedure calls another program that contains SQL statements. This
program has an associated package. This package must exist at the location
where the stored procedure is defined and at the location where the SQL
statements are executed.
Related reference
BIND and REBIND options (DB2 Command Reference)
BIND PACKAGE (DSN) (DB2 Command Reference)
SET CURRENT PACKAGESET (SQL Reference)
Stored procedures can access tables at other DB2 locations by using three-part
object names or CONNECT statements. If you use CONNECT statements, you use
DRDA access to access tables. If you use three-part object names or aliases for
three-part object names, the distributed access method depends on the value of
DBPROTOCOL that you specified when you bound the stored procedure package.
A value of PRIVATE tells DB2 to use DB2 private protocol access to access remote
data for the stored procedure. A value of DRDA tells DB2 to use DRDA access.
| Recommendation: Do not use private protocol. Support for private protocol will be
| removed in a future release of DB2.
When a local DB2 application calls a stored procedure, the stored procedure cannot
have DB2 private protocol access to any DB2 sites already connected to the calling
program by DRDA access.
ODBA support uses RRS for syncpoint control of DB2 and IMS resources.
Therefore, stored procedures that use ODBA can run only in WLM-established
stored procedures address spaces.
When you write a stored procedure that uses ODBA, follow the rules for writing
an IMS application program that issues DL/I calls. For information about writing
DL/I applications, see IMS Application Programming: Database Manager and IMS
Application Programming: Transaction Manager.
IMS work that is performed in a stored procedure is in the same commit scope as
the stored procedure. As with any other stored procedure, the calling application
commits work.
A stored procedure that uses ODBA must issue a DPSB PREP call to deallocate a
PSB when all IMS work under that PSB is complete. The PREP keyword tells IMS
to move inflight work to an indoubt state. When work is in the indoubt state, IMS
does not require activation of syncpoint processing when the DPSB call is executed.
IMS commits or backs out the work as part of RRS two-phase commit when the
stored procedure caller executes COMMIT or ROLLBACK.
A sample COBOL stored procedure and client program demonstrate accessing IMS
data using the ODBA interface. The stored procedure source code is in member
DSN8EC1 and is prepared by job DSNTEJ61. The calling program source code is in
member DSN8EC2 and is prepared and executed by job DSNTEJ62. All code is in
data set DSN910.SDSNSAMP.
The startup procedure for a stored procedures address space in which stored
procedures that use ODBA run must include a DFSRESLB DD statement and an
extra data set in the STEPLIB concatenation. See “Setting up the stored procedures
environment” on page 533 for more information.
For each result set you want returned, your stored procedure must:
v Declare a cursor with the option WITH RETURN.
v Open the cursor.
v If the cursor is scrollable, ensure that the cursor is positioned before the first row
of the result table.
When the stored procedure ends, DB2 returns the rows in the query result set to
the client.
DB2 does not return result sets for cursors that are closed before the stored
procedure terminates. The stored procedure must execute a CLOSE statement for
each cursor associated with a result set that should not be returned to the DRDA
client.
Example: Declaring a cursor to return a result set: Suppose you want to return a
result set that contains entries for all employees in department D11. First, declare a
cursor that describes this subset of employees:
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
SELECT * FROM DSN8910.EMP
WHERE WORKDEPT='D11';
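Then, before the stored procedure ends, open the cursor so that DB2 can return the result set to the client:
EXEC SQL OPEN C1;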
DB2 returns the result set and the name of the SQL cursor for the stored procedure
to the client.
Use meaningful cursor names for returning result sets: The name of the cursor that
is used to return result sets is made available to the client application through
extensions to the DESCRIBE statement. See “Writing a program or stored
procedure to receive the result sets from a stored procedure” on page 600 for more
information.
Use cursor names that are meaningful to the DRDA client application, especially
when the stored procedure returns multiple result sets.
Objects from which you can return result sets: You can use any of these objects in
the SELECT statement that is associated with the cursor for a result set:
v Tables, synonyms, views, created temporary tables, declared temporary tables,
and aliases defined at the local DB2 subsystem
v Tables, synonyms, views, created temporary tables, and aliases defined at remote
DB2 for z/OS systems that are accessible through DB2 private protocol access
Returning a subset of rows to the client: If you execute FETCH statements with a
result set cursor, DB2 does not return the fetched rows to the client program. For
example, if you declare a cursor WITH RETURN and then execute the statements
OPEN, FETCH, and FETCH, the client receives data beginning with the third row
in the result set. If the result set cursor is scrollable and you fetch rows with it, you
need to position the cursor before the first row of the result table after you fetch
the rows and before the stored procedure ends.
Using a temporary table to return result sets: You can use a created temporary
table or declared temporary table to return result sets from a stored procedure.
This capability can be used to return nonrelational data to a DRDA client.
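A minimal sketch of the approach in an external stored procedure (the table and column names are illustrative): declare a temporary table, insert the rows to be returned, and open a result set cursor on that table.
  EXEC SQL DECLARE GLOBAL TEMPORARY TABLE SESSION.MSGS
    (SEQNO INTEGER, MSGTEXT VARCHAR(120))
    ON COMMIT PRESERVE ROWS;
  EXEC SQL INSERT INTO SESSION.MSGS VALUES (1, 'First line of output');
  EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
    SELECT SEQNO, MSGTEXT FROM SESSION.MSGS;
  EXEC SQL OPEN C1;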
For example, you can access IMS data from a stored procedure in the following
way:
v Use APPC/MVS to issue an IMS transaction.
If the stored procedure calls other programs that contain SQL statements, each of
those called programs must have a DB2 package. The owner of the package or
plan that contains the CALL statement must have EXECUTE authority for all
packages that the other programs use.
When a stored procedure calls another program, DB2 determines which collection
the package of the called program belongs to in one of the following ways:
| v If the stored procedure definition contains PACKAGE PATH with a specified list
| of collection IDs, DB2 uses those collection IDs. If you also specify COLLID, DB2
| ignores that clause.
v If the stored procedure definition contains COLLID collection-id, DB2 uses
collection-id.
v If the stored procedure executes SET CURRENT PACKAGE PATH and contains
the NO COLLID option, DB2 uses the CURRENT PACKAGE PATH special
register. The package of the called program comes from the list of collections in
the CURRENT PACKAGE PATH special register. For example, assume that
CURRENT PACKAGE PATH contains the list COLL1, COLL2, COLL3, COLL4.
DB2 searches for the first package (in the order of the list) that exists in these
collections.
v If the stored procedure does not execute SET CURRENT PACKAGE PATH and
instead executes SET CURRENT PACKAGESET, DB2 uses the CURRENT
PACKAGESET special register. The package of the called program comes from
the collection that is specified in the CURRENT PACKAGESET special register.
v If both of the following conditions are true, DB2 uses the collection ID of the
package that contains the SQL statement CALL:
– the stored procedure does not execute SET CURRENT PACKAGE PATH or
SET CURRENT PACKAGESET
– the stored procedure definition contains the NO COLLID option
When control returns from the stored procedure, DB2 restores the value of the
CURRENT PACKAGESET special register to the value that it contained before the
client program executed the SQL statement CALL.
Using reentrant stored procedures can lead to improved performance: a reentrant
load module does not need to be loaded into storage for every call, and a single
copy can be shared by concurrent requests in the stored procedures address space.
For all data types except LOBs, ROWIDs, locators, and VARCHARs (for C
language), see the tables listed in the following table for the host data types that
are compatible with the data types in the stored procedure definition.
For LOBs, ROWIDs, VARCHARs, and locators, the following table shows
compatible declarations for the assembler language.
For LOBs, ROWIDs, and locators, the following table shows compatible
declarations for the C language.
Table 102. Compatible C language declarations for LOBs, ROWIDs, and locators
SQL data type in definition C declaration
TABLE LOCATOR unsigned long
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
For LOBs, ROWIDs, and locators, the following table shows compatible
declarations for COBOL.
Table 103. Compatible COBOL declarations for LOBs, ROWIDs, and locators

TABLE LOCATOR, BLOB LOCATOR, CLOB LOCATOR, or DBCLOB LOCATOR
   01 var PIC S9(9) USAGE IS BINARY.

BLOB(n) or CLOB(n)
   If n <= 32767:
      01 var.
         49 var-LENGTH PIC 9(9) USAGE COMP.
         49 var-DATA PIC X(n).
   If n > 32767:
      01 var.
         02 var-LENGTH PIC S9(9) USAGE COMP.
         02 var-DATA.
            49 FILLER PIC X(32767).
            49 FILLER PIC X(32767).
              ...
            49 FILLER PIC X(mod(n,32767)).

DBCLOB(n)
   If n <= 32767:
      01 var.
         49 var-LENGTH PIC 9(9) USAGE COMP.
         49 var-DATA PIC G(n) USAGE DISPLAY-1.
   If n > 32767:
      01 var.
         02 var-LENGTH PIC S9(9) USAGE COMP.
         02 var-DATA.
            49 FILLER PIC G(32767) USAGE DISPLAY-1.
            49 FILLER PIC G(32767) USAGE DISPLAY-1.
              ...
            49 FILLER PIC G(mod(n,32767)) USAGE DISPLAY-1.

ROWID
   01 var.
      49 var-LEN PIC 9(4) USAGE COMP.
      49 var-DATA PIC X(40).
For LOBs, ROWIDs, and locators, the following table shows compatible
declarations for PL/I.
Table 104. Compatible PL/I declarations for LOBs, ROWIDs, and locators

TABLE LOCATOR, BLOB LOCATOR, CLOB LOCATOR, or DBCLOB LOCATOR
   BIN FIXED(31)

BLOB(n)
   If n > 32767:
      01 var,
         02 var_LENGTH BIN FIXED(31),
         02 var_DATA,
            03 var_DATA1(n) CHAR(32767),
            03 var_DATA2 CHAR(mod(n,32767));

CLOB(n)
   If n <= 32767:
      01 var,
         03 var_LENGTH BIN FIXED(31),
         03 var_DATA CHAR(n);
   If n > 32767:
      01 var,
         02 var_LENGTH BIN FIXED(31),
         02 var_DATA,
            03 var_DATA1(n) CHAR(32767),
            03 var_DATA2 CHAR(mod(n,32767));

DBCLOB(n)
   If n <= 16383:
      01 var,
         03 var_LENGTH BIN FIXED(31),
         03 var_DATA GRAPHIC(n);
   If n > 16383:
      01 var,
         02 var_LENGTH BIN FIXED(31),
         02 var_DATA,
            03 var_DATA1(n) GRAPHIC(16383),
            03 var_DATA2 GRAPHIC(mod(n,16383));

ROWID
   CHAR(40) VAR
A REXX stored procedure is different from other REXX procedures in the following
ways:
v A REXX stored procedure must not execute any of the following DSNREXX
commands that are used for the DB2 subsystem thread attachment:
ADDRESS DSNREXX CONNECT
ADDRESS DSNREXX DISCONNECT
CALL SQLDBS ATTACH TO
CALL SQLDBS DETACH
When you execute SQL statements in your stored procedure, DB2 establishes the
connection for you.
v A REXX stored procedure must run in a WLM-established stored procedures
address space.
Unlike other stored procedures, you do not prepare REXX stored procedures for
execution. REXX stored procedures run using one of four packages that are bound
during the installation of DB2 REXX Language Support. The current isolation level
at which the stored procedure runs depends on the package that DB2 uses when
the stored procedure runs:
Package name
Isolation level
DSNREXRR
Repeatable read (RR)
This topic shows an example of a REXX stored procedure that executes DB2
commands. The stored procedure performs the following actions:
v Receives one input parameter, which contains a DB2 command.
v Calls the IFI COMMAND function to execute the command.
v Extracts the command result messages from the IFI return area and places the
messages in a created temporary table. Each row of the temporary table contains
a sequence number and the text of one message.
v Opens a cursor to return a result set that contains the command result messages.
v Returns the unformatted contents of the IFI return area in an output parameter.
The following example shows the COMMAND stored procedure that executes DB2
commands.
/* REXX */
PARSE UPPER ARG CMD /* Get the DB2 command text */
/* Remove enclosing quotation marks */
IF LEFT(CMD,2) = ""'" & RIGHT(CMD,2) = "'"" THEN
CMD = SUBSTR(CMD,2,LENGTH(CMD)-2)
ELSE
IF LEFT(CMD,2) = """'" & RIGHT(CMD,2) = "'""" THEN
CMD = SUBSTR(CMD,3,LENGTH(CMD)-4)
COMMAND = SUBSTR("COMMAND",1,18," ")
/****************************************************************/
/* Set up the IFCA, return area, and output area for the */
/* IFI COMMAND call. */
/****************************************************************/
IFCA = SUBSTR('00'X,1,180,'00'X)
IFCA = OVERLAY(D2C(LENGTH(IFCA),2),IFCA,1+0)
IFCA = OVERLAY("IFCA",IFCA,4+1)
RTRNAREASIZE = 262144 /*1048572*/
RTRNAREA = D2C(RTRNAREASIZE+4,4)LEFT(' ',RTRNAREASIZE,' ')
OUTPUT = D2C(LENGTH(CMD)+4,2)||'0000'X||CMD
BUFFER = SUBSTR(" ",1,16," ")
/****************************************************************/
/* Make the IFI COMMAND call. */
/****************************************************************/
ADDRESS LINKPGM "DSNWLIR COMMAND IFCA RTRNAREA OUTPUT"
WRC = RC
RTRN= SUBSTR(IFCA,12+1,4)
REAS= SUBSTR(IFCA,16+1,4)
| Suppose that an existing C stored procedure was defined with the following
| statement:
| CREATE PROCEDURE B(IN V1 INTEGER, OUT V2 CHAR(9))
| LANGUAGE C
| DETERMINISTIC
| NO SQL
| EXTERNAL NAME SUMMOD
| COLLID SUMCOLL
| ASUTIME LIMIT 900
| PARAMETER STYLE GENERAL WITH NULLS
| STAY RESIDENT NO
| RUN OPTIONS 'MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)'
| WLM ENVIRONMENT PAYROLL
| PROGRAM TYPE MAIN
| SECURITY DB2
| DYNAMIC RESULT SETS 10
| COMMIT ON RETURN NO;
The first alternative is simpler to write, but if you use the second alternative, you
do not need to make major modifications to your client program if the stored
procedure changes.
Writing for a fixed number of result sets is the only alternative that is available
when the program that receives the result sets is itself an SQL procedure.
You do not need to connect to the remote location when you execute these
statements:
v DESCRIBE PROCEDURE
v ASSOCIATE LOCATORS
v ALLOCATE CURSOR
v DESCRIBE CURSOR
v FETCH
v CLOSE
The following example demonstrates how you receive result sets when you know
how many result sets are returned and what is in each result set.
/*************************************************************/
/* Declare result set locators. For this example, */
/* assume you know that two result sets will be returned. */
/* Also, assume that you know the format of each result set. */
/*************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
static volatile SQL TYPE IS RESULT_SET_LOCATOR *loc1, *loc2;
EXEC SQL END DECLARE SECTION;
.
.
.
/*************************************************************/
/* Call stored procedure P1. */
/* Check for SQLCODE +466, which indicates that result sets */
/* were returned. */
/*************************************************************/
EXEC SQL CALL P1(:parm1, :parm2, ...);
if(SQLCODE==+466)
{
/*************************************************************/
/* Establish a link between each result set and its */
/* locator using the ASSOCIATE LOCATORS. */
/*************************************************************/
EXEC SQL ASSOCIATE LOCATORS (:loc1, :loc2) WITH PROCEDURE P1;
.
.
.
/*************************************************************/
/* Associate a cursor with each result set. */
/*************************************************************/
EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :loc1;
EXEC SQL ALLOCATE C2 CURSOR FOR RESULT SET :loc2;
/*************************************************************/
/* Fetch the result set rows into host variables. */
/*************************************************************/
while(SQLCODE==0)
{
EXEC SQL FETCH C1 INTO :order_no, :cust_no;
.
.
.
}
while(SQLCODE==0)
{
EXEC SQL FETCH C2 INTO :order_no, :item_no, :quantity;
.
.
.
}
}
The following example demonstrates how you receive result sets when you do not
know how many result sets are returned or what is in each result set.
/*************************************************************/
/* Declare result set locators. For this example, */
/* assume that no more than three result sets will be */
/* returned, so declare three locators. Also, assume */
/* that you do not know the format of the result sets. */
/*************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
static volatile SQL TYPE IS RESULT_SET_LOCATOR *loc1, *loc2, *loc3;
EXEC SQL END DECLARE SECTION;
| The following example demonstrates how you can use an SQL procedure to
| receive result sets. The logic assumes that no handler exists to intercept the +466
| SQLCODE, such as DECLARE CONTINUE HANDLER FOR SQLWARNING ..... Such a
| handler causes SQLCODE to be reset to zero. Then the test for IF SQLCODE = 466
| is never true and the statements in the IF body are never executed.
| DECLARE RESULT1 RESULT_SET_LOCATOR VARYING;
| DECLARE RESULT2 RESULT_SET_LOCATOR VARYING;
| DECLARE AT_END, VAR1, VAR2, TOTAL1, TOTAL2 INT DEFAULT 0;
| DECLARE SQLCODE INTEGER DEFAULT 0;
| DECLARE CONTINUE HANDLER FOR NOT FOUND SET AT_END = 99;
| SET TOTAL1 = 0;
| SET TOTAL2 = 0;
| CALL TARGETPROCEDURE();
| IF SQLCODE = 466 THEN
| ASSOCIATE RESULT SET LOCATORS(RESULT1,RESULT2)
| WITH PROCEDURE TARGETPROCEDURE;
| ALLOCATE RSCUR1 CURSOR FOR RESULT1;
| ALLOCATE RSCUR2 CURSOR FOR RESULT2;
| WHILE AT_END = 0 DO
| FETCH RSCUR1 INTO VAR1;
| SET TOTAL1 = TOTAL1 + VAR1;
| SET VAR1 = 0; /* Reset so the last value fetched is not added after AT_END */
| END WHILE;
| SET AT_END = 0; /* Reset for next loop */
| WHILE AT_END = 0 DO
| FETCH RSCUR2 INTO VAR2;
| SET TOTAL2 = TOTAL2 + VAR2;
| SET VAR2 = 0; /* Reset so the last value fetched is not added after AT_END */
| END WHILE;
| END IF;
Related concepts
“Examples of programs that call stored procedures” on page 228
Besides using stand-alone INSERT statements, you can use the following ways to
insert data into a table:
| v You can use the MERGE statement to insert new data and update existing data
| in the same operation. For information about how to perform this operation, see
| “Inserting data and updating data in a single operation” on page 611.
v You can write an application program to prompt for and enter large amounts of
data into a table.
v You can also use the DB2 LOAD utility to enter data from other sources. For
information about the LOAD utility, see the topic “LOAD” in DB2 Utility Guide
and Reference.
Use an INSERT statement to add new rows to a table or view. Using an INSERT
statement, you can do the following actions:
v Specify the column values to insert a single row. You can specify constants, host
variables, expressions, DEFAULT, or NULL by using the VALUES clause.
v In an application program, specify arrays of column values to insert multiple
rows into a table. “Inserting multiple rows of data from host variable arrays” on
page 160 explains how to use host variable arrays in the VALUES clause of the
INSERT FOR n ROWS statement to add multiple rows of column values to a
table.
v Include a SELECT statement in the INSERT statement to tell DB2 that another
table or view contains the data for the new row or rows. “Inserting rows into a
table from another table” on page 607 explains how to use the SELECT
statement within an INSERT statement to add multiple rows to a table.
In each case, for every row that you insert, you must provide a value for any
column that does not have a default value. For a column that meets one of the
following conditions, specify DEFAULT to tell DB2 to insert the default value for
that column:
v The column is nullable.
v The column is defined with a default value.
v The column has data type ROWID. ROWID columns always have default
values.
v The column is an identity column. Identity columns always have default values.
v The column is a row change timestamp column.
For more information about inserting data into ROWID columns, see “Rules for
inserting data into a ROWID column” on page 608.
For more information about inserting data into identity columns, see “Rules for
inserting data into an identity column” on page 608.
| For more information about row change timestamp columns, see the topic
| “CREATE TABLE” in DB2 SQL Reference.
You can use the VALUES clause of the INSERT statement to insert a single row of
column values into a table. You can either name all of the columns for which you
are providing values, or you can omit the list of column names. If you omit the
column name list, you must specify values for all of the columns.
Recommendation: For static INSERT statements, name all of the columns for
which you are providing values for the following reasons:
v Your INSERT statement is independent of the table format. (For example, you do
not need to change the statement when a column is added to the table.)
v You can verify that you are specifying the values in order.
v Your source statements are more self-descriptive.
If you do not name the columns in a static INSERT statement and a column is
later added to the table, an error occurs after any rebind of the INSERT statement
unless you change the statement to include a value for the new column. This is
true even if the new column has a default value.
When you list the column names, you must specify their corresponding values in
the same order as in the list of column names.
After inserting a new department row into your YDEPT table, you can use a
SELECT statement to see what you have loaded into the table. The following SQL
statement shows you all of the new department rows that you have inserted:
SELECT *
FROM YDEPT
WHERE DEPTNO LIKE 'E%'
ORDER BY DEPTNO;
Example: The following statement also inserts a row into the YEMP table. Because
the unspecified columns allow null values, DB2 inserts null values into the
columns that you do not specify.
INSERT INTO YEMP
(EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, JOB)
VALUES ('000410', 'MILLARD', 'K', 'FILLMORE', 'D11', '4888', 'MANAGER');
Use a fullselect within an INSERT statement to select rows from one table to insert
into another table.
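For this example, assume that a target table named TELE was created with a
statement like the following sketch; the column definitions are assumptions that
match the corresponding columns of DSN8910.EMP:
CREATE TABLE TELE
(NAME2 VARCHAR(15) NOT NULL,
NAME1 VARCHAR(12) NOT NULL,
PHONE CHAR(4));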
The following statement copies data from DSN8910.EMP into the newly created
table:
INSERT INTO TELE
SELECT LASTNAME, FIRSTNME, PHONENO
FROM DSN8910.EMP
WHERE WORKDEPT = 'D21';
The two previous statements create and fill a table, TELE, that looks similar to the
following table:
NAME2 NAME1 PHONE
=============== ============ =====
PULASKI EVA 7831
JEFFERSON JAMES 2094
MARINO SALVATORE 3780
SMITH DANIEL 0961
JOHNSON SYBIL 8953
PEREZ MARIA 9001
MONTEVERDE ROBERT 3780
The CREATE TABLE statement example creates a table which, at first, is empty.
The table has columns for last names, first names, and phone numbers, but does
not have any rows.
The INSERT statement fills the newly created table with data that is selected from
the DSN8910.EMP table: the names and phone numbers of employees in
department D21.
A ROWID column is a column that is defined with a ROWID data type. You must
have a column with a ROWID data type in a table that contains a LOB column.
The ROWID column is stored in the base table and is used to look up the actual
LOB data in the LOB table space. In addition, a ROWID column enables you to
write queries that navigate directly to a row in a table. For information about using
ROWID columns for direct-row access, see “Specifying direct row access by using
row IDs” on page 702.
Before you insert data into a ROWID column, you must know how the ROWID
column is defined. ROWID columns can be defined as GENERATED ALWAYS or
GENERATED BY DEFAULT. GENERATED ALWAYS means that DB2 generates a
value for the column, and you cannot insert data into that column. If the column is
defined as GENERATED BY DEFAULT, you can insert a value, and DB2 provides a
default value if you do not supply one.
Example: Suppose that tables T1 and T2 have two columns: an integer column and
a ROWID column. For the following statement to run successfully, ROWIDCOL2
must be defined as GENERATED BY DEFAULT.
INSERT INTO T2 (INTCOL2,ROWIDCOL2)
SELECT * FROM T1;
Before you insert data into an identity column, you must know how the column is
defined. Identity columns are defined with the GENERATED ALWAYS or
GENERATED BY DEFAULT clause. GENERATED ALWAYS means that DB2
generates a value for the column, and you cannot insert data into that column. If
the column is defined as GENERATED BY DEFAULT, you can insert a value, and
DB2 provides a default value if you do not supply one.
Example: Suppose that tables T1 and T2 have two columns: a character column
and an integer column that is defined as an identity column. For the following
statement to run successfully, IDENTCOL2 must be defined as GENERATED BY
DEFAULT.
INSERT INTO T2 (CHARCOL2,IDENTCOL2)
SELECT * FROM T1;
If you need to assign a value of one distinct type to a column of another distinct
type, a function must exist that converts the value from one type to another.
Because DB2 provides cast functions only between distinct types and their source
types, you must write the function to convert from one distinct type to another.
Example: Suppose that you need to insert values from the TOTAL column in
JAPAN_SALES, which has the distinct type JAPANESE_YEN, into the TOTAL
column of JAPAN_SALES_03, which has the distinct type US_DOLLAR. Because INSERT statements follow
assignment rules, DB2 does not let you insert the values directly from one column
to the other because the columns are of different distinct types. Suppose that a
user-defined function called US_DOLLAR has been written that accepts values of
type JAPANESE_YEN as input and returns values of type US_DOLLAR. You can
then use this function to insert values into the JAPAN_SALES_03 table:
INSERT INTO JAPAN_SALES_03
SELECT PRODUCT_ITEM, US_DOLLAR(TOTAL)
FROM JAPAN_SALES
WHERE YEAR = 2003;
The rules for assigning distinct types to host variables or host variables to columns
of distinct types differ from the rules for constants and columns.
You can assign a column value of a distinct type to a host variable if you can
assign a column value of the distinct type’s source type to the host variable. In the
following example, you can assign SIZECOL1 and SIZECOL2, which have distinct
type SIZE, to host variables of type double and short because the source type of
SIZE, which is INTEGER, can be assigned to host variables of type double or short.
EXEC SQL BEGIN DECLARE SECTION;
double hv1;
short hv2;
EXEC SQL END DECLARE SECTION;
CREATE DISTINCT TYPE SIZE AS INTEGER;
CREATE TABLE TABLE1 (SIZECOL1 SIZE, SIZECOL2 SIZE);
.
.
.
SELECT SIZECOL1, SIZECOL2
INTO :hv1, :hv2
FROM TABLE1;
When you assign a value in a host variable to a column with a distinct type, the
type of the host variable must be able to cast to the distinct type. For a table of
base data types and the base data types to which they can be cast, see the topic
“Promotion of data types” in DB2 SQL Reference.
In this example, values of host variable hv2 can be assigned to columns SIZECOL1
and SIZECOL2, because C data type short is equivalent to DB2 data type
SMALLINT, and SMALLINT is promotable to data type INTEGER. However,
values of hv1 cannot be assigned to SIZECOL1 and SIZECOL2, because C data
type double, which is equivalent to DB2 data type DOUBLE, is not promotable to
data type INTEGER.
EXEC SQL BEGIN DECLARE SECTION;
double hv1;
short hv2;
EXEC SQL END DECLARE SECTION;
CREATE DISTINCT TYPE SIZE AS INTEGER;
CREATE TABLE TABLE1 (SIZECOL1 SIZE, SIZECOL2 SIZE);
.
.
.
INSERT INTO TABLE1
| You can update existing data and insert new data in a single operation by using
| the MERGE statement.
| For example, an application might request a set of rows from a database, enable a
| user to modify the data through a GUI, and then store the modified data in the
| database. Some of this modified data is updates to existing rows, and some of this
| data is new rows. You can do these update and insert operations in one step.
| To update existing data and insert new data, specify a MERGE statement with
| the WHEN MATCHED and WHEN NOT MATCHED clauses. These clauses
| specify how DB2 handles matched and unmatched data. If DB2 finds a matching
| row, that row is updated. If DB2 does not find a matching row, a new row is
| inserted.
| Example: Suppose that you need to update the inventory at a car dealership. You
| need to add new car models to the inventory and update information about car
| models that are already in the inventory.
| You could make these changes with the following series of statements:
| UPDATE INVENTORY
| SET QUANTITY = QUANTITY + :hv_delta
| WHERE MODEL = :hv_model;
|
| GET DIAGNOSTICS :rc = ROW_COUNT;
|
| IF rc = 0 THEN
| INSERT INTO INVENTORY VALUES (:hv_model, :hv_delta);
| END IF;
| The MERGE statement simplifies the update and the insert into a single statement:
| MERGE INTO INVENTORY
| USING ( VALUES (:hv_model, :hv_delta) ) AS SOURCE(MODEL, DELTA)
| ON INVENTORY.MODEL = SOURCE.MODEL
| WHEN MATCHED THEN UPDATE SET QUANTITY = QUANTITY + SOURCE.DELTA
| WHEN NOT MATCHED THEN INSERT VALUES (SOURCE.MODEL, SOURCE.DELTA)
| NOT ATOMIC CONTINUE ON SQLEXCEPTION;
| Example: Suppose that you need to input data into the STOCK table, which
| contains company stock symbols and stock prices from your stock portfolio. Some
| of your input data refers to companies that are already in the STOCK table; some
| of the data refers to companies that you are adding to your stock portfolio. If the
| stock symbol exists in the SYMBOL column of the STOCK table, you need to
| update the PRICE column. If the company stock symbol is not yet in the STOCK
| table, you need to insert a new row with the stock symbol and the stock price.
| Furthermore, you need to add a new value DELTA to your output to show the
| change in stock price.
| Suppose that the STOCK table contains the data that is shown in Table 105.
| Table 105. STOCK table before SELECT FROM MERGE statement
| SYMBOL PRICE
| XCOM 95.00
| YCOM 24.50
|
| Now, suppose that :hv_symbol and :hv_price are host variable arrays that contain
| updated data that corresponds to the data that is shown in Table 105. Table 106
| shows the host variable data for stock activity.
| Table 106. Host variable arrays of stock activity
| hv_symbol hv_price
| XCOM 97.00
| NEWC 30.00
| XCOM 107.00
|
| NEWC is new to the STOCK table, so its symbol and price need to be inserted into
| the STOCK table. The rows for XCOM in Table 106 represent changed stock prices,
| so these values need to be updated in the STOCK table. Also, the output needs to
| show the change in stock prices as a DELTA value.
| The following SELECT FROM MERGE statement updates the price of XCOM,
| inserts the symbol and price for NEWC, and returns an output that includes a
| DELTA value for the change in stock price.
| SELECT SYMBOL, PRICE, DELTA FROM FINAL TABLE
| (MERGE INTO STOCK AS S INCLUDE (DELTA DECIMAL(5,2))
| USING ((:hv_symbol, :hv_price) FOR :hv_nrows ROWS) AS R (SYMBOL, PRICE)
| ON S.SYMBOL = R.SYMBOL
| WHEN MATCHED THEN UPDATE SET DELTA = R.PRICE - S.PRICE, PRICE = R.PRICE
| WHEN NOT MATCHED THEN INSERT (SYMBOL, PRICE, DELTA)
| VALUES (R.SYMBOL, R.PRICE, R.PRICE)
| NOT ATOMIC CONTINUE ON SQLEXCEPTION);
| The INCLUDE clause specifies that an additional column, DELTA, can be returned
| in the output without adding a column to the STOCK table. The UPDATE portion
| of the MERGE statement sets the DELTA value to the difference between the new
| price and the previous stock price. The INSERT portion of the
| MERGE statement sets the DELTA value to the same value as the PRICE column.
| After the SELECT FROM MERGE statement is processed, the STOCK table
| contains the data that is shown in Table 107.
| Table 107. STOCK table after SELECT FROM MERGE statement
| SYMBOL PRICE
| XCOM 107.00
| YCOM 24.50
| NEWC 30.00
|
| The following output of the SELECT FROM MERGE statement includes both
| updates to XCOM and a DELTA value for each output row.
| SYMBOL PRICE DELTA
| =============================
| XCOM 97.00 2.00
| NEWC 30.00 30.00
| XCOM 107.00 10.00
You can select values from rows that are being inserted by specifying the INSERT
statement in the FROM clause of the SELECT statement. When you insert one or
more new rows into a table, you can retrieve:
v The value of an automatically generated column such as a ROWID or identity
column
v Any default values for columns
v All values for an inserted row, without specifying individual column names
v All values that are inserted by a multiple-row INSERT operation
v Values that are changed by a BEFORE INSERT trigger
Example: In addition to examples that use the DB2 sample tables, the examples in
this topic use an EMPSAMP table that has the following definition:
CREATE TABLE EMPSAMP
(EMPNO INTEGER GENERATED ALWAYS AS IDENTITY,
NAME CHAR(30),
SALARY DECIMAL(10,2),
DEPTNO SMALLINT,
LEVEL CHAR(30),
HIRETYPE VARCHAR(30) NOT NULL WITH DEFAULT 'New Hire',
HIREDATE DATE NOT NULL WITH DEFAULT);
Assume that you need to insert a row for a new employee into the EMPSAMP
table. To find out the values for the generated EMPNO, HIRETYPE, and
HIREDATE columns, you can use a SELECT FROM INSERT statement similar to the
following sketch, in which the host variable names are illustrative:
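EXEC SQL SELECT EMPNO, HIRETYPE, HIREDATE
INTO :empno, :hiretype, :hiredate
FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY, DEPTNO, LEVEL)
VALUES ('Mary Smith', 35000.00, 11, 'Associate'));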
The SELECT statement returns the DB2-generated identity value for the EMPNO
column, the default value ’New Hire’ for the HIRETYPE column, and the value of
the CURRENT DATE special register for the HIREDATE column.
Recommendation: Use the SELECT FROM INSERT statement to insert a row into a
parent table and retrieve the value of a primary key that was generated by DB2 (a
ROWID or identity column). In another INSERT statement, specify this generated
value as a value for a foreign key in a dependent table. For an example of this
method, see “Identity columns” on page 431.
The rows that are inserted into the target table produce a result table whose
columns can be referenced in the SELECT list of the query. The columns of the
result table are affected by the columns, constraints, and triggers that are defined
for the target table:
v The result table includes DB2-generated values for identity columns, ROWID
columns, or row change timestamp columns.
v Before DB2 generates the result table, it enforces any constraints that affect the
insert operation (that is, check constraints, unique index constraints, and
referential integrity constraints).
v The result table includes any changes that result from a BEFORE trigger that is
activated by the insert operation. An AFTER trigger does not affect the values in
the result table. For information about triggers, see “Creating triggers” on page
457.
The INSERT statement in the FROM clause of the following SELECT statement
inserts a new employee into the EMPSAMP table:
SELECT NAME, SALARY
FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY, LEVEL)
VALUES('Mary Smith', 35000.00, 'Associate'));
If a BEFORE INSERT trigger on the EMPSAMP table changes the salary of new
employees to 40000.00, the SELECT statement returns a salary of 40000.00 for Mary
Smith instead of the initial salary of 35000.00 that was explicitly specified in the
INSERT statement.
When you insert a new row into a table, you can retrieve any column in the result
table of the SELECT FROM INSERT statement. When you embed this statement in
an application, you retrieve the row into host variables by using the SELECT ... INTO
form of the statement.
Example: You can retrieve all the values for a row that is inserted into a structure:
EXEC SQL SELECT * INTO :empstruct
FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY, DEPTNO, LEVEL)
VALUES('Mary Smith', 35000.00, 11, 'Associate'));
For this example, :empstruct is a host variable structure that is declared with
variables for each of the columns in the EMPSAMP table.
If the INSERT statement references a view that is defined with a search condition,
that view must be defined with the WITH CASCADED CHECK OPTION option.
When you insert data into the view, the result table of the SELECT FROM INSERT
statement includes only rows that satisfy the view definition.
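Example: Assume, for illustration, that table T1 and view V1 are defined with
statements like the following sketch; the identity column and the search condition
are assumptions:
CREATE TABLE T1
(C1 INTEGER GENERATED ALWAYS AS IDENTITY,
I1 INTEGER);
CREATE VIEW V1 (C1, I1)
AS SELECT C1, I1 FROM T1 WHERE I1 > 10
WITH CASCADED CHECK OPTION;
The following statement inserts a row through the view and retrieves the generated
value of C1 for the inserted row: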
SELECT C1 FROM
FINAL TABLE (INSERT INTO V1 (I1) VALUES(12));
The value 12 satisfies the search condition of the view definition, and the result
table consists of the value for C1 in the inserted row.
If you use a value that does not satisfy the search condition of the view definition,
the insert operation fails, and DB2 returns an error.
Example: Inserting rows with ROWID values: To see the values of the ROWID
columns that are inserted into the employee photo and resume table, you can
declare the following cursor:
EXEC SQL DECLARE CS1 CURSOR FOR
SELECT EMP_ROWID
FROM FINAL TABLE (INSERT INTO DSN8910.EMP_PHOTO_RESUME (EMPNO)
SELECT EMPNO FROM DSN8910.EMP);
Example: Using the FETCH FIRST clause: To see only the first five rows that are
inserted into the employee photo and resume table, use the FETCH FIRST clause:
EXEC SQL DECLARE CS2 CURSOR FOR
SELECT EMP_ROWID
FROM FINAL TABLE (INSERT INTO DSN8910.EMP_PHOTO_RESUME (EMPNO)
SELECT EMPNO FROM DSN8910.EMP)
FETCH FIRST 5 ROWS ONLY;
Example: Using the INPUT SEQUENCE clause: To retrieve rows in the order in
which they are inserted, use the INPUT SEQUENCE clause. The following cursor
declaration is a sketch that is patterned on the previous examples; the host-variable
array :hva_empno and the row count :hv_nrows are illustrative:
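EXEC SQL DECLARE CS3 CURSOR FOR
SELECT EMP_ROWID
FROM FINAL TABLE (INSERT INTO DSN8910.EMP_PHOTO_RESUME (EMPNO)
VALUES (:hva_empno)
FOR :hv_nrows ROWS)
ORDER BY INPUT SEQUENCE;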
Example: Inserting rows with multiple encoding CCSIDs: Suppose that you want
to populate an ASCII table with values from an EBCDIC table and then see
selected values from the ASCII table. You can use the following cursor to select the
EBCDIC columns, populate the ASCII table, and then retrieve the ASCII values:
EXEC SQL DECLARE CS4 CURSOR FOR
SELECT C1, C2
FROM FINAL TABLE (INSERT INTO ASCII_TABLE
SELECT * FROM EBCDIC_TABLE);
You can use the INCLUDE clause to introduce a new column to the result table but
not add a column to the target table.
Example: Suppose that you need to insert department number data into the project
table. Suppose also that you want to retrieve the department number and the
corresponding manager number for each department. Because MGRNO is not a
column in the project table, you can use the INCLUDE clause to include the
manager number in your result but not in the insert operation. The following
SELECT FROM INSERT statement performs the insert operation and retrieves the
data.
DECLARE CS1 CURSOR FOR
SELECT DEPTNO, manager_num FROM FINAL TABLE
(INSERT INTO PROJ (DEPTNO) INCLUDE(manager_num CHAR(6))
SELECT DEPTNO, MGRNO FROM DEPT);
In an application program, when you insert multiple rows into a table, you declare
a cursor so that the INSERT statement is in the FROM clause of the SELECT
statement of the cursor. The result table of the cursor is determined during OPEN
cursor processing. The result table may or may not be affected by other processes
in your application.
When you declare a scrollable cursor, the cursor must be declared with the
INSENSITIVE keyword if an INSERT statement is in the FROM clause of the
cursor specification. The result table is generated during OPEN cursor processing
and does not reflect any future changes. You cannot declare the cursor with the
SENSITIVE DYNAMIC or SENSITIVE STATIC keywords. For information about
cursor sensitivity, see “Types of cursors” on page 671.
Example: Assume that your application declares a cursor, opens the cursor,
performs a fetch, updates the table, and then fetches additional rows:
EXEC SQL DECLARE CS1 CURSOR FOR
SELECT SALARY
FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY, LEVEL)
SELECT NAME, INCOME, BAND FROM OLD_EMPLOYEE);
EXEC SQL OPEN CS1;
EXEC SQL FETCH CS1 INTO :hv_salary;
/* print fetch result */
...
EXEC SQL UPDATE EMPSAMP SET SALARY = SALARY + 500;
while (SQLCODE == 0) {
EXEC SQL FETCH CS1 INTO :hv_salary;
/* print fetch result */
...
}
The fetches that occur after the updates return the rows that were generated when
the cursor was opened. If you use a simple SELECT (with no INSERT statement in
the FROM clause), the fetches might return the updated values, depending on the
access path that DB2 uses.
When you declare a cursor with the WITH HOLD option and open the cursor, all
of the rows are inserted into the target table. The WITH HOLD option has no
effect on the SELECT FROM INSERT statement of the cursor definition. After your
application performs a commit, you can continue to retrieve all of the inserted
rows. For information about held cursors, see “Held and non-held cursors” on
page 673.
Example: Assume that the employee table in the DB2 sample application has five
rows. Your application declares a WITH HOLD cursor, opens the cursor, fetches
two rows, performs a commit, and then fetches the third row successfully:
EXEC SQL DECLARE CS2 CURSOR WITH HOLD FOR
SELECT EMP_ROWID
FROM FINAL TABLE (INSERT INTO DSN8910.EMP_PHOTO_RESUME (EMPNO)
SELECT EMPNO FROM DSN8910.EMP);
EXEC SQL OPEN CS2; /* Inserts 5 rows */
EXEC SQL FETCH CS2 INTO :hv_rowid; /* Retrieves ROWID for 1st row */
EXEC SQL FETCH CS2 INTO :hv_rowid; /* Retrieves ROWID for 2nd row */
EXEC SQL COMMIT; /* Commits 5 rows */
EXEC SQL FETCH CS2 INTO :hv_rowid; /* Retrieves ROWID for 3rd row */
When you set a savepoint prior to opening the cursor and then roll back to that
savepoint, all of the insertions are undone. For information about savepoints and
ROLLBACK processing, see “Undoing selected changes within a unit of work by
using savepoints” on page 28.
What happens if an error occurs: In an application program, when you insert one
or more rows into a table by using the SELECT FROM INSERT statement, the
result table of the insert operation may or may not be affected, depending on
where the error occurred in the application processing.
During SELECT INTO processing: If the insert processing or the select processing
fails during a SELECT INTO statement, no rows are inserted into the target table,
and no rows are returned from the result table of the insert operation.
Example: Assume that the employee table of the DB2 sample application has one
row, and that the SALARY column has a value of 9,999,000.00.
EXEC SQL SELECT EMPNO INTO :hv_empno
FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY)
SELECT FIRSTNAME || MIDINIT || LASTNAME,
SALARY + 10000.00
FROM DSN8910.EMP);
The addition of 10000.00 causes a decimal overflow to occur, and no rows are
inserted into the EMPSAMP table.
During OPEN cursor processing: If the insertion of any row fails during the
OPEN cursor processing, all previously successful insertions are undone. The result
table of the insert is empty.
During FETCH processing: If the FETCH statement fails while retrieving rows
from the result table of the insert operation, a negative SQLCODE is returned to
the application, but the result table still contains the original number of rows that
was determined during the OPEN cursor processing. At this point, you can undo
all of the inserts.
Example: Assume that the result table contains 100 rows and the 90th row that is
being fetched from the cursor returns a negative SQLCODE:
EXEC SQL DECLARE CS1 CURSOR FOR
SELECT EMPNO
FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY)
SELECT FIRSTNAME || MIDINIT || LASTNAME, SALARY + 10000.00
FROM DSN8910.EMP);
EXEC SQL OPEN CS1; /* Inserts 100 rows */
while (SQLCODE == 0)
EXEC SQL FETCH CS1 INTO :hv_empno;
if (SQLCODE == -904) /* If SQLCODE is -904, undo all inserts */
EXEC SQL ROLLBACK;
else /* Else, commit inserts */
EXEC SQL COMMIT;
| To preserve the order of the derived table, specify the ORDER OF clause with the
| ORDER BY clause. These two clauses ensure that the result rows of a fullselect
| follow the same order as the result table of a subquery within the fullselect.
| You can use the ORDER OF clause in any query that uses an ORDER BY clause,
| but the ORDER OF clause is most useful with queries that contain a set operator,
| such as UNION.
| Example: The following example joins data from table T1 to the result table of a
| nested table expression. The nested table expression is ordered by the second
| column in table T2. The ORDER BY ORDER OF TEMP clause in the query specifies
| that the fullselect result rows are to be returned in the same order as the nested
| table expression.
| SELECT T1.C1, T1.C2, TEMP.Cy, TEMP.Cx
| FROM T1, (SELECT T2.C1, T2.C2 FROM T2 ORDER BY 2) as TEMP(Cx, Cy)
| WHERE Cy = T1.C1
| ORDER BY ORDER OF TEMP;
| Alternatively, you can produce the same result by explicitly stating the ORDER BY
| column TEMP.Cy in the fullselect instead of using the ORDER OF syntax.
| SELECT T1.C1, T1.C2, TEMP.Cy, TEMP.Cx
| FROM T1, (SELECT T2.C1, T2.C2 FROM T2 ORDER BY 2) as TEMP(Cx, Cy)
| WHERE Cy = T1.C1
| ORDER BY TEMP.Cy;
|
Adding data to the end of a table
In a relational database, the rows of a table are not ordered, and thus, the table has
no “end”. However, depending on your goal, you can perform several actions to
simulate adding data to the end of a table.
Question: How can I store a large volume of data that is not defined as a set of
columns in a table?
| Answer: You can store the data in a table in a binary string, a LOB, or an XML
| column.
To change the data in a table, use the UPDATE statement. You can also use the
UPDATE statement to remove a value from a column (without removing the row)
by changing the column value to null.
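For example, the following sketch, which uses the YEMP table from the earlier
examples, removes the phone number of one employee without removing the row:
UPDATE YEMP
SET PHONENO = NULL
WHERE EMPNO = '000200';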
You cannot update rows in a created temporary table, but you can update rows in
a declared temporary table.
The SET clause names the columns that you want to update and provides the
values that you want to assign to those columns. You can replace a column value
in the SET clause with any of the following items:
v A null value
The column to which you assign the null value must not be defined as NOT
NULL.
v An expression, which can be any of the following items:
– A column
– A constant
– A scalar fullselect
– A host variable
– A special register
| v A default value
In addition, you can replace one or more column values in the SET clause with the
column values in a row that is returned by a fullselect.
If you omit the WHERE clause, DB2 updates every row in the table or view with
the values that you supply.
If DB2 finds an error while executing your UPDATE statement (for example, an
update value that is too large for the column), it stops updating and returns an
error. No rows in the table change. Rows that were already changed, if any, are
restored to their previous values. If the UPDATE statement is successful,
SQLERRD(3) is set to the number of rows that are updated.
Example: The following statement supplies a missing middle initial and changes
the job for employee 000200.
UPDATE YEMP
SET MIDINIT = 'H', JOB = 'FIELDREP'
WHERE EMPNO = '000200';
The following statement gives everyone in department D11 a raise of 400.00. The
statement can update several rows.
UPDATE YEMP
SET SALARY = SALARY + 400.00
WHERE WORKDEPT = 'D11';
The following statement sets the salary for employee 000190 to the average salary
and sets the bonus to the minimum bonus for all employees.
UPDATE YEMP
SET (SALARY, BONUS) =
(SELECT AVG(SALARY), MIN(BONUS)
FROM EMP)
WHERE EMPNO = '000190';
| You can select values from rows that are being updated by specifying the UPDATE
| statement in the FROM clause of the SELECT statement. When you update one or
| more rows in a table, you can retrieve:
| v The value of an automatically generated column such as a ROWID or identity
| column
| v Any default values for columns
| v All values for an updated row, without specifying individual column names
| In most cases, you want to use the FINAL TABLE clause with SELECT FROM
| UPDATE statements. The FINAL TABLE consists of the rows of the table or view
| after the update occurs.
| Example: Suppose that all clerks for a company are receiving a 30 percent
| increase in their bonus. You can use the following SELECT FROM UPDATE
| statement to increase the bonus of each clerk by 30 percent and to retrieve the
| bonus for each clerk.
| DECLARE CS1 CURSOR FOR
| SELECT LASTNAME, BONUS FROM FINAL TABLE
| (UPDATE EMP SET BONUS = BONUS * 1.3
| WHERE JOB = 'CLERK');
| FETCH CS1 INTO :lastname, :bonus;
| You can use the INCLUDE clause to introduce a new column to the result table but
| not add the column to the target table.
Question: Are there any special techniques for updating large volumes of data?
Answer: Yes. When updating large volumes of data using a cursor, you can
minimize the amount of time that you hold locks on the data by declaring the
cursor with the HOLD option and by issuing commits frequently.
You can use the DELETE statement to remove entire rows from a table. The
DELETE statement removes zero or more rows of a table, depending on how many
rows satisfy the search condition that you specify in the WHERE clause. If you
omit a WHERE clause from a DELETE statement, DB2 removes all rows from the
table or view that you have named. The DELETE statement does not remove
specific columns from the row.
This DELETE statement deletes each row in the YEMP table that has an employee
number 000060.
DELETE FROM YEMP
WHERE EMPNO = '000060';
When this statement executes, DB2 deletes any row from the YEMP table that
meets the search condition.
If DB2 finds an error while executing your DELETE statement, it stops deleting
data and returns error codes in the SQLCODE and SQLSTATE host variables or
related fields in the SQLCA. The data in the table does not change.
The DELETE statement is a powerful statement that deletes all rows of a table
unless you specify a WHERE clause to limit it. (With segmented table spaces,
deleting all rows of a table is very fast.) For example, the following statement
deletes every row in the YDEPT table:
DELETE FROM YDEPT;
If the statement executes, the table continues to exist (that is, you can insert rows
into it), but it is empty. All existing views and authorizations on the table remain
intact when using DELETE. By comparison, using DROP TABLE drops all views
and authorizations, which can invalidate plans and packages. For information
about the DROP statement, see “Dropping tables” on page 449.
| Alternatively, you can use the TRUNCATE statement to delete all of the rows in a
| table. A TRUNCATE statement can provide the following advantages over a
| DELETE statement:
| v The TRUNCATE statement can ignore delete triggers
| v The TRUNCATE statement can perform an immediate commit
| v The TRUNCATE statement can keep storage allocated for the table
| The TRUNCATE statement does not, however, reset the count for an automatically
| generated value for an identity column on the table. If 14872 was the next identity
| column value to be generated before a TRUNCATE statement, 14872 would be the
| next value generated after the TRUNCATE statement.
| The following examples demonstrate how to use the TRUNCATE statement with
| various keywords.
| Example: Suppose that you need to empty the data from an old inventory table,
| regardless of any existing delete triggers, and that you need to make the space that
| is allocated for the table available for other uses. A statement like the following
| sketch, which uses the DROP STORAGE and IGNORE DELETE TRIGGERS
| keywords, accomplishes this:
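| TRUNCATE INVENTORY_TABLE
| DROP STORAGE
| IGNORE DELETE TRIGGERS;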
| Example: Suppose that you need to empty the data from an old inventory table
| permanently, regardless of any existing delete triggers, and that you need to
| preserve the space that is allocated for the table. You need the emptied data to be
| completely unavailable, so that a ROLLBACK statement cannot return the data.
| Use the following statement.
| TRUNCATE INVENTORY_TABLE
| REUSE STORAGE
| IGNORE DELETE TRIGGERS
| IMMEDIATE;
| You can select values from rows that are being deleted by specifying the DELETE
| statement in the FROM clause of the SELECT statement. When you delete one or
| more rows in a table, you can retrieve:
| v Any default values for columns
| v All values for a deleted row, without specifying individual column names
| v Calculated values based on deleted rows
| When you use a SELECT FROM DELETE statement, you must use the FROM OLD
| TABLE clause to retrieve deleted values. The OLD TABLE consists of the rows of
| the table or view before the delete occurs.
| Example: Suppose that a company is eliminating all operator positions and that
| the company wants to know how much salary money it will save by eliminating
| these positions. You can use the following SELECT FROM DELETE statement to
| delete operators from the EMP table and to retrieve the sum of operator salaries.
| SELECT SUM(SALARY) INTO :salary FROM OLD TABLE
| (DELETE FROM EMP
| WHERE JOB = 'OPERATOR');
| To retrieve row-by-row output of deleted data, use a cursor with a SELECT FROM
| DELETE statement.
| Example: Suppose that a company is eliminating all analyst positions and that the
| company wants to know how many years of experience each analyst had with the
| company. You can use the following SELECT FROM DELETE statement to delete
| analysts from the EMP table and to retrieve the experience of each analyst.
| DECLARE CS1 CURSOR FOR
| SELECT YEAR(CURRENT DATE - HIREDATE) FROM OLD TABLE
| (DELETE FROM EMP
| WHERE JOB = 'ANALYST');
| FETCH CS1 INTO :years_of_service;
| If you need to retrieve calculated data that is based on the data that you delete
| but you do not want to add a column to the target table, you can compute the
| values in the SELECT list, as shown in the sketch after the following example.
| Example: Suppose that you need to delete managers from the EMP table and that
| you need to retrieve the salary and the years of employment for each manager.
| You can use the following SELECT FROM DELETE statement to perform the delete
| operation and to retrieve the required data.
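| The following sketch, which follows the pattern of the previous example,
| computes the years of employment in the SELECT list; the host variable names
| are illustrative:
| DECLARE CS2 CURSOR FOR
| SELECT SALARY, YEAR(CURRENT DATE - HIREDATE) FROM OLD TABLE
| (DELETE FROM EMP
| WHERE JOB = 'MANAGER');
| FETCH CS2 INTO :salary, :years_of_service;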
The contents of the DB2 catalog tables can be a useful reference tool when you
begin to develop an SQL statement or an application program.
The catalog table, SYSIBM.SYSTABAUTH, lists table privileges that are granted to
authorization IDs. To display the tables that you have authority to access (by
privileges granted either to your authorization ID or to PUBLIC), you can execute
an SQL statement similar to the one shown in the following example. To do this,
you must have the SELECT privilege on SYSIBM.SYSTABAUTH.
Example: The following statement displays the tables that the current user has
authority to access:
SELECT DISTINCT TCREATOR, TTNAME
FROM SYSIBM.SYSTABAUTH
WHERE GRANTEE IN (USER, 'PUBLIC', 'PUBLIC*') AND GRANTEETYPE = ' ';
In this query, the predicate GRANTEETYPE = ' ' selects authorization IDs.
Exception: If your DB2 subsystem uses an exit routine for access control
authorization, you cannot rely on catalog queries to tell you the tables that you can
access. When such an exit routine is installed, both RACF and DB2 control table
access.
If you display column information about a table that includes LOB or ROWID
columns, the LENGTH field for those columns contains the number of bytes that
those columns occupy in the base table. The LENGTH field does not contain the
length of the LOB or ROWID data.
| Consider developing SQL statements similar to the examples in this section, and
| then running them dynamically using SPUFI, the command line processor, or DB2
| Query Management Facility (DB2 QMF).
For more advanced topics on using SELECT statements, see “Subqueries” on page
659, and “Remote servers and distributed data” on page 32.
| You do not need to know the column names to select DB2 data. Use an asterisk (*)
| in the SELECT clause to indicate that you want to retrieve all columns of each
| selected row of the named table. Implicitly hidden columns, such as ROWID
| columns and XML document ID columns, are not included in the result of the
| SELECT * statement. To view the values of these columns, you must specify the
| column name.
Example: SELECT *: The following SQL statement selects all columns from the
department table:
SELECT *
FROM DSN8910.DEPT;
Because the example does not specify a WHERE clause, the statement retrieves
data from all rows.
The dashes for MGRNO and LOCATION in the result table indicate null values.
SELECT * is recommended mostly for use with dynamic SQL and view definitions.
You can use SELECT * in static SQL, but doing so is not recommended for host
variable compatibility and performance reasons.
If you list the column names in a static SELECT statement instead of using an
asterisk, you can avoid the problem that sometimes occurs with SELECT *. You can
also see the relationship between the receiving host variables and the columns in
the result table.
For more information about host variables, see “Host variables” on page 143.
Select the column or columns you want to retrieve by naming each column. All
columns appear in the order you specify, not in their order in the table.
Example: SELECT column-name: The following SQL statement retrieves only the
MGRNO and DEPTNO columns from the department table:
SELECT MGRNO, DEPTNO
FROM DSN8910.DEPT;
With a single SELECT statement, you can select data from one column or as many
as 750 columns.
| To SELECT data from implicitly hidden columns, such as ROWID and XML
| document ID, look up the column names in SYSIBM.SYSCOLUMNS and specify
| these names in the SELECT list. For example, suppose that you create and
| populate the following table:
| CREATE TABLE MEMBERS (MEMBERID INTEGER,
| BIO XML,
| REPORT XML,
| RECOMMENDATIONS XML);
DB2 evaluates a predicate for each row as true, false, or unknown. Results are
unknown only if an operand is null.
If a search condition contains a column of a distinct type, the value to which that
column is compared must be of the same distinct type, or you must cast the value
to the distinct type. See “Distinct types” on page 477 for more information about
distinct types.
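For example, assuming the SIZE distinct type and the TABLE1 table that are
defined earlier in this information, comparing a SIZE column with an integer
constant requires the generated cast function SIZE to cast the constant (a sketch):
SELECT SIZECOL1
FROM TABLE1
WHERE SIZECOL1 > SIZE(100);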
The following table lists the type of comparison, the comparison operators, and an
example of each type of comparison that you can use in a predicate in a WHERE
clause.
Table 108. Comparison operators used in conditions
Type of comparison                     Comparison operator  Example
Equal to                               =                    DEPTNO = 'X01'
Not equal to                           <>                   DEPTNO <> 'X01'
Less than                              <                    AVG(SALARY) < 30000
Less than or equal to                  <=                   AGE <= 25
Not less than                          >=                   AGE >= 21
Greater than                           >                    SALARY > 2000
Greater than or equal to               >=                   SALARY >= 5000
Not greater than                       <=                   SALARY <= 5000
Equal to null                          IS NULL              PHONENO IS NULL
Not equal to another value, or one     IS DISTINCT FROM     PHONENO IS DISTINCT FROM :PHONEHV
value is equal to null
Similar to another value               LIKE                 NAME LIKE '%SMITH%' or STATUS LIKE 'N_'
At least one of two conditions         OR                   HIREDATE < '1965-01-01' OR SALARY < 16000
Both of two conditions                 AND                  HIREDATE < '1965-01-01' AND SALARY < 16000
Between two values                     BETWEEN              SALARY BETWEEN 20000 AND 40000
Equals a value in a set                IN (X, Y, Z)         DEPTNO IN ('B01', 'C01', 'D01')
Note: SALARY BETWEEN 20000 AND 40000 is equivalent to SALARY >= 20000 AND
SALARY <= 40000. For more information about predicates, see the topic "Predicates" in DB2
SQL Reference.
You can also search for rows that do not satisfy one of the preceding conditions by
using the NOT keyword before the specified condition.
You can search for rows that do not satisfy the IS DISTINCT FROM predicate by
using either of the following predicates:
v value 1 IS NOT DISTINCT FROM value 2
v NOT(value 1 IS DISTINCT FROM value 2)
Example: SELECT with an expression: This SQL statement generates a result table
in which the second column is a derived column that is generated by adding the
values of the SALARY, BONUS, and COMM columns.
SELECT EMPNO, (SALARY + BONUS + COMM)
FROM DSN8910.EMP;
To order the rows in a result table by the values in a derived column, specify a
name for the column by using the AS clause, and specify that name in the ORDER
BY clause. For information about using the ORDER BY clause, see “Ordering the
result table rows” on page 634.
| You can select all XML data that is stored in a particular column by specifying
| SELECT column name or SELECT *, just as you would for columns of any other
| data type. Alternatively, you can select only a subset of data from an XML column
| by using an XPath expression in a SELECT statement. XPath expressions identify
| specific nodes in an XML document. For more information about XPath, see the
| topic “DB2 XPath basics” in DB2 XML Guide.
| Example: Suppose that you store purchase orders as XML documents in the
| POrder column in the PurchaseOrders table. You need to find in each purchase
| order the items whose product name is equal to a name in the Product table. You
| can use the following statement to find these values:
| SELECT XMLQUERY('//item[productName = $n]'
| PASSING PO.POrder,
| P.name AS "n")
| FROM PurchaseOrders PO, Product P;
Result tables
The data that is retrieved by an SQL statement is always in the form of a table,
which is called a result table. Like the tables from which you retrieve the data, a
result table has rows and columns. A program fetches this data one row at a time.
Example result table: Assume that you issue the following SELECT statement,
which retrieves the last name, first name, and phone number of employees in
department D11 from the sample employee table:
SELECT LASTNAME, FIRSTNME, PHONENO
FROM DSN8910.EMP
WHERE WORKDEPT = 'D11'
ORDER BY LASTNAME;
The DISTINCT keyword removes redundant duplicate rows from your result table,
so that each row contains unique data.
| Restriction: You cannot use the DISTINCT keyword with LOB columns or XML
| columns.
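For example, the following query, a simple sketch that uses the sample employee
table, lists each department number only once, even though many employees share
a department:
SELECT DISTINCT WORKDEPT
FROM DSN8910.EMP;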
With the AS clause, you can name result columns in a SELECT statement. For
more information about the syntax of the AS clause, see the topic “select-clause” in
DB2 SQL Reference.
Example: CREATE VIEW with AS clause: You can specify result column names in
the select-clause of a CREATE VIEW statement. You do not need to supply the
column list of CREATE VIEW, because the AS keyword names the derived column.
The columns in the view EMP_SAL are EMPNO and TOTAL_SAL.
CREATE VIEW EMP_SAL AS
SELECT EMPNO,SALARY+BONUS+COMM AS TOTAL_SAL
FROM DSN8910.EMP;
For more information about using the CREATE VIEW statement, see “Defining a
view” on page 449.
Example: set operator with AS clause: You can use the AS clause with set
operators, such as UNION. In this example, the AS clause is used to give the same
name to corresponding columns of tables in a UNION. The third result column
from the union of the two tables has the name TOTAL_VALUE, even though it
contains data that is derived from columns with different names:
SELECT 'On hand' AS STATUS, PARTNO, QOH * COST AS TOTAL_VALUE
FROM PART_ON_HAND
UNION ALL
SELECT 'Ordered' AS STATUS, PARTNO, QORDER * COST AS TOTAL_VALUE
FROM ORDER_PART
ORDER BY PARTNO, TOTAL_VALUE;
The column STATUS and the derived column TOTAL_VALUE have the same name
in the first and second result tables. They are combined in the union of the two
result tables, which is similar to the following partial output:
STATUS PARTNO TOTAL_VALUE
======= ====== ===========
On hand 00557 345.60
Ordered 00557 150.50
.
.
.
For information about unions, see “Combining result tables from multiple SELECT
statements” on page 637.
Example: GROUP BY derived column: You can use the AS clause in a FROM clause
to assign a name to a derived column that you want to refer to in a GROUP BY
clause. This SQL statement names HIREYEAR in the nested table expression,
which lets you use the name of that result column in the GROUP BY clause. The
following query is a sketch of this technique that uses the sample employee table:
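SELECT AVG(SALARY) AS AVG_SALARY, HIREYEAR
FROM (SELECT SALARY, YEAR(HIREDATE) AS HIREYEAR
FROM DSN8910.EMP) AS NEWEMP
GROUP BY HIREYEAR;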
You cannot use GROUP BY with a name that is defined with an AS clause for the
derived column YEAR(HIREDATE) in the outer SELECT, because that name does
not exist when the GROUP BY runs. However, you can use GROUP BY with a
name that is defined with an AS clause in the nested table expression, because the
nested table expression runs before the GROUP BY that references the name. For
more information about using the GROUP BY clause, see “Summarizing group
values” on page 642.
To retrieve rows in a specific order, use the ORDER BY clause. Using ORDER BY is
the only way to guarantee that your rows are ordered as you want them. The
following topics show you how to use the ORDER BY clause.
The order of the selected rows depends on the sort keys that you identify in the
ORDER BY clause. A sort key can be a column name, an integer that represents the
number of a column in the result table, or an expression. DB2 orders the rows by
the first sort key, followed by the second sort key, and so on.
You can list the rows in ascending or descending order. Null values appear last in
an ascending sort and first in a descending sort.
| DB2 sorts strings in the collating sequence associated with the encoding scheme of
| the table. DB2 sorts numbers algebraically and sorts datetime values
| chronologically.
| Restriction: You cannot use the ORDER BY clause with LOB or XML columns.
Example: ORDER BY clause with a column name as the sort key: Retrieve the
employee numbers, last names, and hire dates of employees in department A00 in
ascending order of hire dates:
SELECT EMPNO, LASTNAME, HIREDATE
FROM DSN8910.EMP
WHERE WORKDEPT = 'A00'
ORDER BY HIREDATE ASC;
Example: ORDER BY clause with an expression as the sort key: The following
subselect retrieves the employee numbers, salaries, commissions, and total
compensation of employees, and orders the rows by the total compensation
expression. The statement is a sketch; total compensation is computed as salary
plus commission:
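SELECT EMPNO, SALARY, COMM, SALARY+COMM
FROM DSN8910.EMP
ORDER BY SALARY+COMM;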
If you use the AS clause to name an unnamed column in a SELECT statement, you
can use that name in the ORDER BY clause.
Example: ORDER BY clause that uses a derived column: The following SQL
statement orders the selected information by total salary:
SELECT EMPNO, (SALARY + BONUS + COMM) AS TOTAL_SAL
FROM DSN8910.EMP
ORDER BY TOTAL_SAL;
| Example: Suppose that you want a list of employees and salaries from department
| D11 in the sample EMP table. You can return a numbered list that is ordered by
| last name by submitting the following query:
| SELECT ROW_NUMBER() OVER (ORDER BY LASTNAME) AS NUMBER,
| WORKDEPT, LASTNAME, SALARY
| FROM DSN8910.EMP
| WHERE WORKDEPT='D11'
| To rank rows, use one of the following ranking specifications in an SQL statement:
| RANK
| Returns a rank number for each row value. Use this specification if you
| want rank numbers to be skipped when duplicate row values exist. For
| example, suppose the top five finishers in a marathon have the following
| times:
| v 2:31:57
| v 2:34:52
| v 2:34:52
| v 2:37:26
| v 2:38:01
| When you use the RANK specification, DB2 returns the following rank
| numbers:
| Table 109. Example of values returned when you specify RANK
| Value Rank number
| 2:31:57 1
| 2:34:52 2
| 2:34:52 2
| 2:37:26 4
| 2:38:01 5
|
| DENSE_RANK
| Returns a rank number for each row value. Use this specification if you do
| not want rank numbers to be skipped when duplicate row values exist. For
| example, when you specify DENSE_RANK with the same times that are
| listed in the description of RANK, DB2 returns the following rank
| numbers:
| Table 110. Example of values returned when you specify DENSE_RANK
| Value Rank number
| 2:31:57 1
| 2:34:52 2
| 2:34:52 2
| 2:37:26 3
| 2:38:01 4
|
| Example: Suppose that you had the following values in the DATA column of table
| T1:
| DATA
| -------
| 100
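| A RANK query over this data has the same shape as the DENSE_RANK query that
| follows; a sketch of the RANK version is:
| SELECT DATA,
| RANK() OVER (ORDER BY DATA DESC) AS RANK_DATA
| FROM T1
| ORDER BY RANK_DATA;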
| Suppose that you use the following DENSE_RANK specification on the same data:
| SELECT DATA,
| DENSE_RANK() OVER (ORDER BY DATA DESC) AS RANK_DATA
| FROM T1
| ORDER BY RANK_DATA;
| In the example with the RANK specification, two equal values are both ranked as
| 4. The next rank number is 6. Number 5 is skipped.
| In the example with the DENSE_RANK option, those two equal values are also
| ranked as 4. However, the next rank number is 5. With DENSE_RANK, no gaps
| exist in the sequential rank numbering.
| For more information about the syntax of the RANK and DENSE_RANK
| specifications, see the topic “OLAP-specification” in DB2 SQL Reference.
| To combine two or more SELECT statements to form a single result table, use one
| of the following keywords:
| UNION
| Returns all of the rows from the result table of each SELECT statement. If
| you do not specify ALL, redundant duplicate rows are eliminated.
| EXCEPT
| Returns the rows from the result table of the first SELECT statement that
| do not also appear in the result table of the second SELECT statement.
| INTERSECT
| Returns only the rows that appear in the result table of both SELECT
| statements.
| When you specify one of the preceding set operators (UNION, EXCEPT, or
| INTERSECT), DB2 processes each SELECT statement to form an interim result
| table, and then combines the interim result table of each statement. If the nth
| column of the first result table (R1) and the nth column of the second result table
| (R2) have the same result column name, the nth column of the result table has that
| same result column name. If the nth column of R1 and the nth column of R2 do
| not have the same names, the result column is unnamed.
| Examples: Assume that you want to combine the results of two SELECT statements
| that return the following result tables:
| R1 result table
| COL1 COL2
| a a
| a b
| a c
| R2 result table
| COL1 COL2
| a b
| a c
| a d
| A UNION operation combines the two result tables and returns four rows:
| COL1 COL2
| a a
| a b
| a c
| a d
| An EXCEPT operation combines the two result tables and returns one row.
| The result of the EXCEPT operation depends on which SELECT
| statement is included before the EXCEPT keyword in the SQL statement. If
| the SELECT statement that returns the R1 result table is listed first, the
| result is a single row:
| COL1 COL2
| a a
| If the SELECT statement that returns the R2 result table is listed first, the
| final result is a different row:
| COL1 COL2
| a d
| An INTERSECT operation combines the two result tables and returns two rows:
|
| COL1 COL2
| a b
| a c
| To eliminate redundant duplicate rows when combining result tables, specify one
| of the following keywords:
| v UNION or UNION DISTINCT
| v EXCEPT or EXCEPT DISTINCT
| v INTERSECT or INTERSECT DISTINCT
| To order the entire result table, specify the ORDER BY clause at the end.
| Examples: Assume that you have the following tables to manage stock at two book
| stores.
| Table 111. STOCKA
| ISBN TITLE AUTHOR NOBEL PRIZE
| 8778997709 For Whom the Bell Tolls Hemmingway N
| 4599877699 The Good Earth Buck Y
| 9228736278 A Tale of Two Cities Dickens N
| 1002387872 Beloved Morrison Y
| 4599877699 The Good Earth Buck Y
| 0087873532 The Labyrinth of Solitude Paz Y
|
| Table 112. STOCKB
| ISBN TITLE AUTHOR NOBEL PRIZE
| 6689038367 The Grapes of Wrath Steinbeck Y
| 2909788445 The Silent Cry Oe Y
| 1182983745 Light in August Faulkner Y
| 9228736278 A Tale of Two Cities Dickens N
| 1002387872 Beloved Morrison Y
|
| Example 1: UNION clause: Suppose that you want a list of books whose authors
| have won the Nobel Prize and that are in stock at either store. The following SQL
| statement returns these books in order by author name without redundant
| duplicate rows:
| SELECT TITLE, AUTHOR
| FROM STOCKA
| WHERE NOBELPRIZE = 'Y'
| UNION
| SELECT TITLE, AUTHOR
| FROM STOCKB
| WHERE NOBELPRIZE = 'Y'
| ORDER BY AUTHOR
| Example 2: EXCEPT: Suppose that you want a list of books that are only in
| STOCKA. The following SQL statement returns the book names that are in
| STOCKA only without any redundant duplicate rows:
| SELECT TITLE
| FROM STOCKA
| EXCEPT
| SELECT TITLE
| FROM STOCKB
| ORDER BY TITLE;
| Example 3: INTERSECT: Suppose that you want a list of books that are in both
| STOCKA and in STOCKB. The following statement returns a list of all books from
| both of these tables, with redundant duplicate rows removed:
| SELECT TITLE
| FROM STOCKA
| INTERSECT
| SELECT TITLE
| FROM STOCKB
| ORDER BY TITLE;
| To keep all duplicate rows when combining result tables, specify ALL with one of
| the following set operator keywords:
| v UNION ALL
| v EXCEPT ALL
| v INTERSECT ALL
| To order the entire result table, specify the ORDER BY clause at the end.
| Examples: The following examples use the STOCKA and STOCKB tables.
| Example: UNION ALL: The following SQL statement returns a list of books that
| won Nobel prizes and are in stock at either store, with duplicates included:
| SELECT TITLE, AUTHOR
| FROM STOCKA
| WHERE NOBELPRIZE = 'Y'
| UNION ALL
| SELECT TITLE, AUTHOR
| FROM STOCKB
| WHERE NOBELPRIZE = 'Y'
| ORDER BY AUTHOR
| Example: EXCEPT ALL: Suppose that you want a list of books that are only in
| STOCKA. The following SQL statement returns the book names that are in
| STOCKA only with all duplicate rows:
| SELECT TITLE
| FROM STOCKA
| EXCEPT ALL
| SELECT TITLE
| FROM STOCKB
| ORDER BY TITLE;
| Example: INTERSECT ALL clause: Suppose that you want a list of books that are
| in both STOCKA and in STOCKB, including any duplicate matches. The following
| SQL statement returns the book names that are in both tables, keeping all
| duplicate rows:
| SELECT TITLE
| FROM STOCKA
| INTERSECT ALL
| SELECT TITLE
| FROM STOCKB
| ORDER BY TITLE;
Except for the columns that are named in the GROUP BY clause, the SELECT
statement must specify any other selected columns as an operand of one of the
aggregate functions.
Example: GROUP BY clause using one column: The following SQL statement lists,
for each department, the lowest and highest education level within that
department:
SELECT WORKDEPT, MIN(EDLEVEL), MAX(EDLEVEL)
FROM DSN8910.EMP
GROUP BY WORKDEPT;
If a column that you specify in the GROUP BY clause contains null values, DB2
considers those null values to be equal. Thus, all nulls form a single group.
When it is used, the GROUP BY clause follows the FROM clause and any WHERE
clause, and it precedes the ORDER BY clause.
You can group the rows by the values of more than one column.
Example: GROUP BY clause using more than one column: The following statement
finds the average salary for men and women in departments A00 and C01:
SELECT WORKDEPT, SEX, AVG(SALARY) AS AVG_SALARY
FROM DSN8910.EMP
WHERE WORKDEPT IN ('A00', 'C01')
GROUP BY WORKDEPT, SEX;
Filtering groups
If you group rows in the result table, you can also specify a search condition that
each retrieved group must satisfy. The search condition tests properties of each
group rather than properties of individual rows in the group.
To filter groups, use the HAVING clause to specify a search condition. The
HAVING clause acts like a WHERE clause for groups, and it contains the same
kind of search conditions that you specify in a WHERE clause.
Example: HAVING clause: The following SQL statement includes a HAVING clause
that specifies a search condition for groups of work departments in the employee
table:
SELECT WORKDEPT, AVG(SALARY) AS AVG_SALARY
FROM DSN8910.EMP
GROUP BY WORKDEPT
HAVING COUNT(*) > 1
ORDER BY WORKDEPT;
Compare the preceding example with the second example shown in “Summarizing
group values” on page 642. The clause, HAVING COUNT(*) > 1, ensures that only
departments with more than one member are displayed. In this case, departments
B01 and E01 do not display because the HAVING clause tests a property of the
group.
Example: HAVING clause used with a GROUP BY clause: Use the HAVING clause
to retrieve the average salary and minimum education level of women in each
department for which all female employees have an education level greater than or
equal to 16. Assuming that you want results from only departments A00 and D11,
the following SQL statement tests the group property, MIN(EDLEVEL):
SELECT WORKDEPT, AVG(SALARY) AS AVG_SALARY,
MIN(EDLEVEL) AS MIN_EDLEVEL
FROM DSN8910.EMP
WHERE SEX = 'F' AND WORKDEPT IN ('A00', 'D11')
GROUP BY WORKDEPT
HAVING MIN(EDLEVEL) >= 16;
When you specify both GROUP BY and HAVING, the HAVING clause must follow
the GROUP BY clause. A function in a HAVING clause can include DISTINCT if
you have not used DISTINCT anywhere else in the same SELECT statement. You
can also connect multiple predicates in a HAVING clause with AND or OR, and
you can use NOT for any predicate of a search condition.
| To find the rows that were changed within a specified period of time, specify the
| ROW CHANGE TIMESTAMP expression in the predicate of your SQL statement.
| For more information about the syntax of ROW CHANGE TIMESTAMP
| expression, see the topic “ROW CHANGE expression” in DB2 SQL Reference.
| If the table does not have a ROW CHANGE TIMESTAMP column, DB2 returns all
| rows on each page that has had any changes within the given time period. In this
| case, your result set can contain rows that have not been updated in the given time
| period, if other rows on that page have been updated or inserted.
| Example: Suppose that the TAB table has a ROW CHANGE TIMESTAMP column
| and that you want to return all of the records that have changed in the last 30
| days. The following query returns all of those rows.
| SELECT * FROM TAB
| WHERE ROW CHANGE TIMESTAMP FOR TAB <= CURRENT TIMESTAMP AND
| ROW CHANGE TIMESTAMP FOR TAB >= CURRENT TIMESTAMP - 30 days;
| Example: Suppose that you want to return all of the records that have changed
| since 9:00 AM January 1, 2004. The following query returns all of those rows.
| SELECT * FROM TAB
| WHERE ROW CHANGE TIMESTAMP FOR TAB >= '2004-01-01-09.00.00';
You can use a SELECT statement to retrieve and join column values from two or
more tables into a single row.
An operand of a join can be more complex than the name of a single table. You
can specify one of the following items as a join operand:
nested table expression
A fullselect that is enclosed in parentheses and followed by a correlation name.
The correlation name lets you refer to the result of that expression.
Using a nested table expression in a join can be helpful when you want to
create a temporary table to use in a join. You can specify the nested table
expression as either the right or left operand of a join, depending on which
unmatched rows you want included.
user-defined table function
A user-defined function that returns a table.
Using a user-defined table function in a join can be helpful when you want to
perform some operation on the values in a table before you join them to
another table.
The correlated references are valid because they do not occur in the table
expression where CHEAP_PARTS is defined. The correlated references are from a
table specification at a higher level in the hierarchy of subqueries.
Example of using a nested table expression as the right operand of a join: The
following query contains a fullselect as the right operand of a left outer
join with the PROJECTS table. The correlation name is TEMP. In this case the
unmatched rows from the PROJECTS table are included, but the unmatched rows
from the nested table expression are not.
SELECT PROJECT, COALESCE(PROJECTS.PROD#, PRODNUM) AS PRODNUM,
PRODUCT, PART, UNITS
FROM PROJECTS LEFT JOIN
(SELECT PART,
COALESCE(PARTS.PROD#, PRODUCTS.PROD#) AS PRODNUM,
PRODUCTS.PRODUCT
FROM PARTS FULL OUTER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#) AS TEMP ON PROJECTS.PROD# = PRODNUM;
Use correlation names to refer to the results of a nested table expression. After you
specify the correlation name for an expression, any subsequent reference to this
correlation name is called a correlated reference.
Example: In this example, the correlated reference T.C2 is valid because the table
specification T, to which it refers, appears to its left in the FROM clause.
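The example itself is not reproduced in this excerpt; a join of the following form,
in which the table function TF3 takes a correlated argument from table T, fits the
description (the column names C1 and C5 are assumptions for illustration):
SELECT T.C1, Z.C5
FROM T, TABLE(TF3(T.C2)) AS Z;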
If you specify the join in the opposite order, with T following TABLE(TF3(T.C2)),
T.C2 is invalid.
Example: In this example, the correlated reference D.DEPTNO is valid because the
nested table expression within which it appears is preceded by TABLE, and the
table specification D appears to the left of the nested table expression in the FROM
clause.
SELECT D.DEPTNO, D.DEPTNAME,
EMPINFO.AVGSAL, EMPINFO.EMPCOUNT
FROM DEPT D,
TABLE(SELECT AVG(E.SALARY) AS AVGSAL,
COUNT(*) AS EMPCOUNT
FROM EMP E
WHERE E.WORKDEPT=D.DEPTNO) AS EMPINFO;
To join more than two tables, specify join conditions that include columns from all
of the relevant tables.
Example: Suppose that you want a result table that shows employees who have
projects that they are responsible for, their projects, and their department names.
You need to join three tables to get all the information. You can use the following
SELECT statement:
SELECT EMPNO, LASTNAME, DEPTNAME, PROJNO
FROM DSN8910.EMP, DSN8910.PROJ, DSN8910.DEPT
WHERE EMPNO = RESPEMP
AND WORKDEPT = DSN8910.DEPT.DEPTNO;
Joining more than two tables by using more than one join type:
When joining more than two tables, you do not have to use the same join type for
every join.
To join tables by using more than one join type, specify the join types in the FROM
clause.
Example: Suppose that you want a result table that shows the following items:
v employees whose last name begins with ’S’ or a letter that comes after ’S’ in the
alphabet
v the department names for these employees
v any projects that these employees are responsible for
You can use the following SELECT statement:
SELECT EMPNO, LASTNAME, DEPTNAME, PROJNO
FROM DSN8910.EMP INNER JOIN DSN8910.DEPT
ON WORKDEPT = DSN8910.DEPT.DEPTNO
LEFT OUTER JOIN DSN8910.PROJ
ON EMPNO = RESPEMP
WHERE LASTNAME > 'S';
DB2 determines the intermediate and final results of the previous query by
performing the following logical steps:
1. Join the employee and department tables on matching department numbers,
dropping the rows where the last name begins with a letter before ’S’ in the
alphabet.
2. Join the intermediate result table with the project table on the employee
number, keeping the rows for which no matching employee number exists in
the project table.
3. Process the select list in the final result table, leaving only four columns.
To request an inner join, execute a SELECT statement in which you specify the
tables that you want to join in the FROM clause, and specify a WHERE clause or
an ON clause to indicate the join condition. The join condition can be any simple
or compound search condition that does not contain a subquery reference. For the
complete syntax of a join condition, see the topic “from-clause” in DB2 SQL
Reference.
Example: You can join the PARTS and PRODUCTS tables in “Sample data for
joins” on page 655 on the PROD# column to get a table of parts with their
suppliers and the products that use the parts.
To do this, you can use either one of the following SELECT statements:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
FROM PARTS, PRODUCTS
WHERE PARTS.PROD# = PRODUCTS.PROD#;
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
FROM PARTS INNER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#;
If you omit the WHERE clause, or if you specify a join condition that is always
true, the number of rows in the result table is the product of the number of rows
in each table.
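The query that produces the following result is not reproduced in this excerpt;
based on the output shown, it is an inner join with an additional condition in the
ON clause, roughly of this form (a sketch):
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
FROM PARTS INNER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#
AND SUPPLIER NOT LIKE 'A%';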
The result of the query is all rows that do not have a supplier that begins with A.
The result table looks like the following output:
PART SUPPLIER PROD# PRODUCT
======= ============ ===== ==========
MAGNETS BATEMAN 10 GENERATOR
PLASTIC PLASTIK_CORP 30 RELAY
The following SQL statement joins table DSN8910.PROJ to itself and returns the
number and name of each major project followed by the number and name of the
project that is part of it:
SELECT A.PROJNO, A.PROJNAME, B.PROJNO, B.PROJNAME
FROM DSN8910.PROJ A, DSN8910.PROJ B
WHERE A.PROJNO = B.MAJPROJ;
In this example, the comma in the FROM clause implicitly specifies an inner join,
and it acts the same as if the INNER JOIN keywords had been used. When you
use the comma for an inner join, you must specify the join condition on the
WHERE clause. When you use the INNER JOIN keywords, you must specify the
join condition on the ON clause.
Outer joins
An outer join is a method of combining two or more tables so that the result
includes unmatched rows of one or the other table, or of both. The matching is
based on the join condition.
The following table illustrates how the PARTS and PRODUCTS tables in “Sample
data for joins” on page 655 can be combined using the three outer join functions.
Figure 35. Three outer joins from the PARTS and PRODUCTS tables. The figure (not
reproduced here) shows which PARTS rows match PRODUCTS rows on the PROD# column
and which rows on each side are unmatched.
The result table contains data that is joined from all of the tables, for rows that
satisfy the search conditions.
The result columns of a join have names if the outermost SELECT list refers to
base columns. However, if you use a function (such as COALESCE or VALUE) to
build a column of the result, that column does not have a name unless you use the
AS clause in the SELECT list.
If you are joining two tables and want the result set to include unmatched rows
from both tables, use a FULL OUTER JOIN clause. The matching is based on the
join condition. If any column of the result table does not have a value, that column
has the null value in the result table.
The join condition for a full outer join must be a simple search condition that
compares two columns or an invocation of a cast function that has a column name
as its argument.
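The full outer join query itself is not reproduced in this excerpt; based on the
result columns that follow, it has roughly this form:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
FROM PARTS FULL OUTER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#;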
The result table from the query looks similar to the following output:
PART SUPPLIER PROD# PRODUCT
======= ============ ===== ==========
WIRE ACWF 10 GENERATOR
MAGNETS BATEMAN 10 GENERATOR
PLASTIC PLASTIK_CORP 30 RELAY
BLADES ACE_STEEL 205 SAW
OIL WESTERN_CHEM 160 -----------
------- ------------ --- SCREWDRIVER
The product number in the result of the example for “Full outer join” on page 651
is null for SCREWDRIVER, even though the PRODUCTS table contains a product
number for SCREWDRIVER. If you select PRODUCTS.PROD# instead, PROD# is
null for OIL. If you select both PRODUCTS.PROD# and PARTS.PROD#, the result
contains two columns, both of which contain some null values. You can merge data
from both columns into a single column, eliminating the null values, by using the
COALESCE function.
With the same PARTS and PRODUCTS tables, the following example merges the
non-null data from the PROD# columns:
SELECT PART, SUPPLIER,
COALESCE(PARTS.PROD#, PRODUCTS.PROD#) AS PRODNUM, PRODUCT
FROM PARTS FULL OUTER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#;
The AS clause (AS PRODNUM) provides a name for the result of the COALESCE
function.
If you are joining two tables and want the result set to include unmatched rows
from only one table, use a LEFT OUTER JOIN clause or a RIGHT OUTER JOIN
clause. The matching is based on the join condition.
As in an inner join, the join condition can be any simple or compound search
condition that does not contain a subquery reference.
Example: The following example uses the tables in “Sample data for joins” on
page 655. To include rows from the PARTS table that have no matching values in
the PRODUCTS table, and to include prices that exceed 10.00, run the following
query:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT, PRICE
FROM PARTS LEFT OUTER JOIN PRODUCTS
ON PARTS.PROD#=PRODUCTS.PROD#
AND PRODUCTS.PRICE>10.00;
A row from the PRODUCTS table is in the result table only if its product number
matches the product number of a row in the PARTS table and the price is greater
than 10.00 for that row. Rows in which the PRICE value does not exceed 10.00 are
included in the result of the join, but the PRICE value is set to null.
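The result table is not reproduced in this excerpt; applying the preceding query to
the sample PARTS and PRODUCTS data gives output similar to the following (row
order can vary):
PART    SUPPLIER     PROD# PRODUCT     PRICE
======= ============ ===== =========== =====
WIRE    ACWF         10    GENERATOR   45.75
MAGNETS BATEMAN      10    GENERATOR   45.75
BLADES  ACE_STEEL    205   SAW         18.90
PLASTIC PLASTIK_CORP 30    ----------- -----
OIL     WESTERN_CHEM 160   ----------- -----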
In this result table, the row for PROD# 30 has null values on the right two columns
because the price of PROD# 30 is less than 10.00. PROD# 160 has null values on
the right two columns because PROD# 160 does not match another product
number.
If you are joining two tables and want the result set to include unmatched rows
from only one table, use a LEFT OUTER JOIN clause or a RIGHT OUTER JOIN
clause. The matching is based on the join condition.
The clause RIGHT OUTER JOIN includes rows from the table that is specified after
RIGHT OUTER JOIN that have no matching values in the table that is specified
before RIGHT OUTER JOIN.
As in an inner join, the join condition can be any simple or compound search
condition that does not contain a subquery reference.
Example: The following example uses the tables in “Sample data for joins” on
page 655. To include rows from the PRODUCTS table that have no corresponding
rows in the PARTS table, execute this query:
SELECT PART, SUPPLIER, PRODUCTS.PROD#, PRODUCT, PRICE
FROM PARTS RIGHT OUTER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#
AND PRODUCTS.PRICE>10.00;
A row from the PARTS table is in the result table only if its product number
matches the product number of a row in the PRODUCTS table and the price is
greater than 10.00 for that row.
Because the PRODUCTS table can have rows with nonmatching product numbers
in the result table, and the PRICE column is in the PRODUCTS table, rows in
which PRICE is less than or equal to 10.00 are included in the result. The PARTS
columns contain null values for these rows in the result table.
SQL rules dictate that the result of a SELECT statement look as if the clauses had
been evaluated in this order:
v FROM
v WHERE
v GROUP BY
v HAVING
v SELECT
A join operation is part of a FROM clause; therefore, for the purpose of predicting
which rows will be returned from a SELECT statement that contains a join
operation, assume that the join operation is performed first.
Example: Suppose that you want to obtain a list of part names, supplier names,
product numbers, and product names from the PARTS and PRODUCTS tables. You
want to include rows from either table where the PROD# value does not match a
PROD# value in the other table, which means that you need to do a full outer join.
You also want to exclude rows for product number 10. Consider the following
SELECT statement:
SELECT PART, SUPPLIER,
VALUE(PARTS.PROD#,PRODUCTS.PROD#) AS PRODNUM, PRODUCT
FROM PARTS FULL OUTER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#
WHERE PARTS.PROD# <> '10' AND PRODUCTS.PROD# <> '10';
DB2 performs the join operation first. The result of the join operation includes
rows from one table that do not have corresponding rows from the other table.
However, the WHERE clause then excludes the rows from both tables that have
null values for the PROD# column.
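The statement that the next paragraph describes is not reproduced in this excerpt.
A query of roughly the following form (a sketch; the correlation names A and B are
assumptions) moves the filtering into nested table expressions so that each
WHERE clause is applied to its table before the join:
SELECT PART, SUPPLIER,
VALUE(A.PROD#, B.PROD#) AS PRODNUM, PRODUCT
FROM (SELECT PART, SUPPLIER, PROD# FROM PARTS
WHERE PROD# <> '10') AS A
FULL OUTER JOIN
(SELECT PROD#, PRODUCT FROM PRODUCTS
WHERE PROD# <> '10') AS B
ON A.PROD# = B.PROD#;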
For this statement, DB2 applies the WHERE clause to each table separately. DB2
then performs the full outer join operation, which includes rows in one table that
do not have a corresponding row in the other table. The final result includes rows
with the null value for the PROD# column and looks similar to the following
output:
PART SUPPLIER PRODNUM PRODUCT
======= ============ ======= ===========
OIL WESTERN_CHEM 160 -----------
BLADES ACE_STEEL 205 SAW
PLASTIC PLASTIK_CORP 30 RELAY
------- ------------ 505 SCREWDRIVER
The examples in these topics use the following two tables to show various types of
joins:
The PARTS table The PRODUCTS table
PART PROD# SUPPLIER PROD# PRODUCT PRICE
======= ===== ============ ===== =========== =====
WIRE 10 ACWF 505 SCREWDRIVER 3.70
OIL 160 WESTERN_CHEM 30 RELAY 7.55
MAGNETS 10 BATEMAN 205 SAW 18.90
PLASTIC 30 PLASTIK_CORP 10 GENERATOR 45.75
BLADES 205 ACE_STEEL
Question: How can I tell DB2 that I want only a few of the thousands of rows that
satisfy a query?
DB2 usually optimizes queries to retrieve all rows that qualify. But sometimes you
want to retrieve only the first few rows. For example, to retrieve the first row that
is greater than or equal to a known value, code:
SELECT column list FROM table
WHERE key >= value
ORDER BY key ASC
Even with the ORDER BY clause, DB2 might fetch all the data first and sort it
afterwards, which could be wasteful. Instead, you can write the query in one of the
following ways:
SELECT * FROM table
WHERE key >= value
ORDER BY key ASC
OPTIMIZE FOR 1 ROW
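The second way, not reproduced in this excerpt, limits the result table directly
with the FETCH FIRST clause, which the following paragraphs discuss:
SELECT * FROM table
WHERE key >= value
ORDER BY key ASC
FETCH FIRST 1 ROW ONLY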
Use OPTIMIZE FOR 1 ROW to influence the access path. OPTIMIZE FOR 1 ROW
tells DB2 to select an access path that returns the first qualifying row quickly.
Use FETCH FIRST n ROWS ONLY to limit the number of rows in the result table
to n rows. FETCH FIRST n ROWS ONLY has the following benefits:
v When you use FETCH statements to retrieve data from a result table, FETCH
FIRST n ROWS ONLY causes DB2 to retrieve only the number of rows that you
need. This can have performance benefits, especially in distributed applications.
If you try to execute a FETCH statement to retrieve the n+1st row, DB2 returns a
+100 SQLCODE.
v When you use FETCH FIRST ROW ONLY in a SELECT INTO statement, you
never retrieve more than one row. Using FETCH FIRST ROW ONLY in a
SELECT INTO statement can prevent SQL errors that are caused by
inadvertently selecting more than one value into a host variable.
When you specify FETCH FIRST n ROWS ONLY but not OPTIMIZE FOR n ROWS,
OPTIMIZE FOR n ROWS is implied. When you specify FETCH FIRST n ROWS
ONLY and OPTIMIZE FOR m ROWS, and m is less than n, DB2 optimizes the
query for m rows. If m is greater than n, DB2 optimizes the query for n rows.
You can use common table expressions to create recursive SQL. If a fullselect of a
common table expression contains a reference to itself in a FROM clause, the
common table expression is a recursive common table expression.
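The following is a minimal sketch of such a recursive common table expression; the
PARTLIST table, its columns, and the starting part number '01' are assumptions for
illustration:
WITH RPL (PART, SUBPART, QUANTITY) AS
(SELECT ROOT.PART, ROOT.SUBPART, ROOT.QUANTITY
FROM PARTLIST ROOT
WHERE ROOT.PART = '01'
UNION ALL
SELECT CHILD.PART, CHILD.SUBPART, CHILD.QUANTITY
FROM RPL PARENT, PARTLIST CHILD
WHERE PARENT.SUBPART = CHILD.PART)
SELECT DISTINCT PART, SUBPART, QUANTITY
FROM RPL;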
See “Examples of recursive common table expressions” on page 453 for examples
of bill-of-materials applications that use recursive common table expressions.
Answer: On the SELECT statement, use the FOR UPDATE clause without a column
list, or the FOR UPDATE OF clause with a column list. For a more efficient
program, specify a column list with only those columns that you intend to update.
Then use the positioned UPDATE statement. The clause WHERE CURRENT OF
identifies the cursor that points to the row you want to update.
For static SQL statements, the simplest way to avoid a division error is to override
DEC31 rules by specifying the precompiler option DEC(15). In some cases you can
avoid a division error by specifying D31.s, where s is a number between 1 and 9
and represents the minimum scale to be used for division operations. This
specification reduces the probability of errors for statements that are embedded in
the program.
If the dynamic SQL statements have bind, define, or invoke behavior and the value
of the installation option for USE FOR DYNAMICRULES on panel DSNTIP4 is
NO, you can use the precompiler option DEC(15), DEC15, or D15.s to override
DEC31 rules, where s is a number between 1 and 9.
For a dynamic statement, or for a single static statement, use the scalar function
DECIMAL to specify values of the precision and scale for a result that causes no
errors.
Before you execute a dynamic statement, set the value of special register
CURRENT PRECISION to DEC15 or D15.s, where s is a number between 1 and 9.
Even if you use DEC31 rules, multiplication operations can sometimes cause
overflow because the precision of the product is greater than 31. To avoid overflow
from multiplication of large numbers, use the MULTIPLY_ALT built-in function
instead of the multiplication operator.
| To control how DB2 rounds decimal floating point numbers, set the CURRENT
| DECFLOAT ROUNDING MODE special register.
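| For example, the following statement (a sketch; ROUND_HALF_EVEN is one of the
| rounding mode values that the special register accepts) sets the rounding mode:
| SET CURRENT DECFLOAT ROUNDING MODE = ROUND_HALF_EVEN;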
| Related concepts
| CURRENT DECFLOAT ROUNDING MODE (SQL Reference)
| Related reference
| SET CURRENT DECFLOAT ROUNDING MODE (SQL Reference)
Subqueries
When you need to narrow your search condition based on information in an
interim table, you can use a subquery. For example, you might want to find all
employee numbers in one table that also exist for a given project in a second table.
Suppose that you want a list of the employee numbers, names, and commissions of
all employees who work on a particular project, whose project number is MA2111.
The first part of the SELECT statement is easy to write:
SELECT EMPNO, LASTNAME, COMM
FROM DSN8910.EMP
WHERE EMPNO
.
.
.
However, you cannot proceed because the DSN8910.EMP table does not include
project number data. You do not know which employees are working on project
MA2111 without issuing another SELECT statement against the
DSN8910.EMPPROJACT table.
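The complete statement, which is repeated later in this topic, has the following
form:
SELECT EMPNO, LASTNAME, COMM
FROM DSN8910.EMP
WHERE EMPNO IN
(SELECT EMPNO
FROM DSN8910.EMPPROJACT
WHERE PROJNO = 'MA2111');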
To better understand the results of this SQL statement, imagine that DB2 goes
through the following process:
1. DB2 evaluates the subquery to obtain a list of EMPNO values:
(SELECT EMPNO
FROM DSN8910.EMPPROJACT
WHERE PROJNO = 'MA2111');
The result is in an interim result table, similar to the one in the following
output:
EMPNO
=====
200
200
220
2. The interim result table then serves as a list in the search condition of the outer
SELECT. Effectively, DB2 executes this statement:
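The statement that DB2 effectively executes is not reproduced in this excerpt;
conceptually it is equivalent to the following, using the values from the interim
result table above:
SELECT EMPNO, LASTNAME, COMM
FROM DSN8910.EMP
WHERE EMPNO IN
('200', '200', '220');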
A subquery executes only once, if the subquery is the same for every row or
group. This kind of subquery is uncorrelated, which means that it executes only
once. For example, in the following statement, the content of the subquery is the
same for every row of the table DSN8910.EMP:
SELECT EMPNO, LASTNAME, COMM
FROM DSN8910.EMP
WHERE EMPNO IN
(SELECT EMPNO
FROM DSN8910.EMPPROJACT
WHERE PROJNO = 'MA2111');
Subqueries that vary in content from row to row or group to group are correlated
subqueries. For information about correlated subqueries, see “Correlated
subqueries” on page 663.
Subqueries can also appear in the predicates of other subqueries. Such subqueries
are nested subqueries at some level of nesting. For example, a subquery within a
subquery within an outer SELECT has a nesting level of 2. DB2 allows nesting
down to a level of 15, but few queries require a nesting level greater than 1.
The relationship of a subquery to its outer SELECT is the same as the relationship
of a nested subquery to a subquery, and the same rules apply, except where
otherwise noted.
Except for a subquery of a basic predicate, the result table can contain more than
one row. For more information, see “Places where you can include a subquery.”
You can specify a subquery in either a WHERE or HAVING clause by using one of
the following items:
v A basic predicate with a comparison operator
v A quantified predicate (a comparison operator followed by ALL, ANY, or SOME)
v An IN predicate
v An EXISTS predicate
You can use a subquery immediately after any of the comparison operators. If you
do, the subquery can return at most one value. DB2 compares that value with the
value to the left of the comparison operator.
Example: The following SQL statement returns the employee numbers, names, and
salaries for employees whose education level is higher than the average
company-wide education level.
SELECT EMPNO, LASTNAME, SALARY
FROM DSN8910.EMP
WHERE EDLEVEL >
(SELECT AVG(EDLEVEL)
FROM DSN8910.EMP);
You can use a subquery after a comparison operator, followed by the keyword
ALL, ANY, or SOME. The number of columns and rows that the subquery can
return for a quantified predicate depends on the type of quantified predicate:
v For = SOME, = ANY, or <> ALL, the subquery can return one or many rows and
one or many columns. The number of columns in the result table must match
the number of columns on the left side of the operator.
v For all other quantified predicates, the subquery can return one or many rows,
but no more than one column.
ALL predicate: Use ALL to indicate that the values on the left side of the operator
must compare in the indicated way to all of the values that the subquery returns.
For example, suppose that you use the greater-than comparison operator with ALL:
WHERE expression > ALL (subquery)
To satisfy this WHERE clause, the column value must be greater than all of the
values that the subquery returns. A subquery that returns an empty result table
satisfies the predicate.
Now suppose that you use the <> operator with ALL in a WHERE clause like this:
WHERE (column1, column2, ... columnn) <> ALL (subquery)
To satisfy this WHERE clause, each column value must be unequal to all of the
values in the corresponding column of the result table that the subquery returns. A
subquery that returns an empty result table satisfies the predicate.
Use ANY or SOME to indicate that the values on the left side of the operator must
compare in the indicated way to at least one of the values that the subquery
returns. For example, suppose that you use the greater-than comparison operator
with ANY:
WHERE expression > ANY (subquery)
To satisfy this WHERE clause, the value in the expression must be greater than at
least one of the values (that is, greater than the lowest value) that the subquery
returns. A subquery that returns an empty result table does not satisfy the
predicate.
Now suppose that you use the = operator with SOME in a WHERE clause like
this:
WHERE (column1, column2, ... columnn) = SOME (subquery)
To satisfy this WHERE clause, each column value must be equal to at least one of
the values in the corresponding column of the result table that the subquery
returns. A subquery that returns an empty result table does not satisfy the
predicate.
IN predicate in a subquery:
You can use IN to say that the value or values on the left side of the IN operator
must be among the values that are returned by the subquery. Using IN is
equivalent to using = ANY or = SOME.
When you use the keyword EXISTS, DB2 checks whether the subquery returns one
or more rows. Returning one or more rows satisfies the condition; returning no
rows does not satisfy the condition.
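The example that the next paragraphs refer to is not reproduced in this excerpt. An
uncorrelated EXISTS subquery of the following form fits the description (the
PRSTDATE condition is an assumption for illustration):
SELECT EMPNO, LASTNAME
FROM DSN8910.EMP
WHERE EXISTS
(SELECT *
FROM DSN8910.PROJ
WHERE PRSTDATE > '2006-01-01');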
The result of the subquery is always the same for every row that is examined for
the outer SELECT. Therefore, either every row appears in the result of the outer
SELECT or none appears. A correlated subquery is more powerful than the
uncorrelated subquery that is used in this example because the result of a
correlated subquery is evaluated for each row of the outer SELECT.
As shown in the example, you do not need to specify column names in the
subquery of an EXISTS clause. Instead, you can code SELECT *. You can also use
the EXISTS keyword with the NOT keyword in order to select rows when the data
or condition that you specify does not exist; that is, you can code the following
clause:
WHERE NOT EXISTS (SELECT ...);
Correlated subqueries
A correlated subquery is a subquery that DB2 reevaluates when it examines a new
row (in a WHERE clause) or a group of rows (in a HAVING clause) as it executes
the outer SELECT statement.
In an uncorrelated subquery, DB2 executes the subquery once, substitutes the result
of the subquery in the right side of the search condition, and evaluates the outer
SELECT based on the value of the search condition.
Suppose that you want a list of all the employees whose education levels are
higher than the average education levels in their respective departments. To get
this information, DB2 must search the DSN8910.EMP table. For each employee in
the table, DB2 needs to compare the employee’s education level to the average
education level for that employee’s department.
For this example, you need to use a correlated subquery, which differs from an
uncorrelated subquery. An uncorrelated subquery compares the employee’s
education level to the average of the entire company, which requires looking at the
entire table. A correlated subquery evaluates only the department that corresponds
to the particular employee.
In the subquery, you tell DB2 to compute the average education level for the
department number in the current row. The following query performs this action:
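The query itself is not shown at this point in the excerpt; it is the same query
that appears later in this topic:
SELECT EMPNO, LASTNAME, WORKDEPT, EDLEVEL
FROM DSN8910.EMP AS X
WHERE EDLEVEL >
(SELECT AVG(EDLEVEL)
FROM DSN8910.EMP
WHERE WORKDEPT = X.WORKDEPT);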
A correlated subquery looks like an uncorrelated one, except for the presence of
one or more correlated references. In the example, the single correlated reference is
the occurrence of X.WORKDEPT in the WHERE clause of the subselect. In this
clause, the qualifier X is the correlation name that is defined in the FROM clause of
the outer SELECT statement. X designates rows of the first instance of
DSN8910.EMP. At any time during the execution of the query, X designates the
row of DSN8910.EMP to which the WHERE clause is being applied.
Consider what happens when the subquery executes for a given row of
DSN8910.EMP. Before it executes, X.WORKDEPT receives the value of the
WORKDEPT column for that row. Suppose, for example, that the row is for
Christine Haas. Her work department is A00, which is the value of WORKDEPT
for that row. Therefore, the following is the subquery that is executed for that row:
(SELECT AVG(EDLEVEL)
FROM DSN8910.EMP
WHERE WORKDEPT = 'A00');
The subquery produces the average education level of Christine’s department. The
outer SELECT then compares this average to Christine’s own education level. For
some other row for which WORKDEPT has a different value, that value appears in
the subquery in place of A00. For example, in the row for Michael L Thompson,
this value is B01, and the subquery for his row delivers the average education level
for department B01.
The result table that is produced by the query is similar to the following output:
EMPNO LASTNAME WORKDEPT EDLEVEL
====== ========= ======== =======
000010 HAAS A00 18
000030 KWAN C01 20
000070 PULASKI D21 16
000090 HENDERSON E11 16
When you use a correlated reference in a subquery, the correlation name can be
defined in the outer SELECT or in any of the subqueries that contain the reference.
You can define a correlation name for each table name in a FROM clause. Specify
the correlation name after its table name. Leave one or more blanks between a
table name and its correlation name. You can include the word AS between the
table name and the correlation name to increase the readability of the SQL
statement.
The following example demonstrates the use of a correlated reference in the search
condition of a subquery:
SELECT EMPNO, LASTNAME, WORKDEPT, EDLEVEL
FROM DSN8910.EMP AS X
WHERE EDLEVEL >
(SELECT AVG(EDLEVEL)
FROM DSN8910.EMP
WHERE WORKDEPT = X.WORKDEPT);
The following example demonstrates the use of a correlated reference in the select
list of a subquery:
UPDATE BP1TBL T1
SET (KEY1, CHAR1, VCHAR1) =
(SELECT VALUE(T2.KEY1,T1.KEY1), VALUE(T2.CHAR1,T1.CHAR1),
VALUE(T2.VCHAR1,T1.VCHAR1)
FROM BP2TBL T2
WHERE (T2.KEY1 = T1.KEY1))
WHERE KEY1 IN
(SELECT KEY1
FROM BP2TBL T3
WHERE KEY2 > 0);
Use correlation names in an UPDATE statement to refer to the rows that you are
updating. The subquery for which you specified a correlation name is called a
correlated subquery.
For example, when all activities of a project must complete before September 2006,
your department considers that project to be a priority project. Assume that you
have added the PRIORITY column to DSN8910.PROJ. You can use the following
SQL statement to evaluate the projects in the DSN8910.PROJ table, and write a 1 (a
flag to indicate PRIORITY) in the PRIORITY column for each priority project:
UPDATE DSN8910.PROJ X
SET PRIORITY = 1
WHERE DATE('2006-09-01') >
(SELECT MAX(ACENDATE)
FROM DSN8910.PROJACT
WHERE PROJNO = X.PROJNO);
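The DELETE statement that the next paragraph describes is not reproduced in this
excerpt; a statement of roughly the following form deletes each project whose
combined staffing is less than 0.5 (the ACSTAFF column is taken from the sample
DSN8910.PROJACT table):
DELETE FROM DSN8910.PROJ X
WHERE .5 >
(SELECT SUM(ACSTAFF)
FROM DSN8910.PROJACT
WHERE PROJNO = X.PROJNO);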
To process this statement, DB2 determines for each project (represented by a row in
the DSN8910.PROJ table) whether the combined staffing for that project is less than
0.5. If it is, DB2 deletes that row from the DSN8910.PROJ table.
To continue this example, suppose that DB2 deletes a row in the DSN8910.PROJ
table. You must also delete rows that are related to the deleted project in the
DSN8910.PROJACT table. To do this, use a statement similar to this statement:
DELETE FROM DSN8910.PROJACT X
WHERE NOT EXISTS
(SELECT *
FROM DSN8910.PROJ
WHERE PROJNO = X.PROJNO);
DB2 determines, for each row in the DSN8910.PROJACT table, whether a row with
the same project number exists in the DSN8910.PROJ table. If not, DB2 deletes the
row from DSN8910.PROJACT.
This example uses a copy of the employee table for the subquery.
DB2 restricts delete operations for dependent tables that are involved in referential
constraints. If a DELETE statement has a subquery that references a table that is
involved in the deletion, the last delete rule in the path to that table must be
RESTRICT or NO ACTION.
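The statement that the next paragraph discusses is not reproduced in this excerpt.
A hypothetical statement of the following form, which deletes from the department
table while its subquery references the dependent employee table, illustrates the
restriction:
DELETE FROM DSN8910.DEPT
WHERE DEPTNO NOT IN
(SELECT WORKDEPT
FROM DSN8910.EMP);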
With the referential constraints that are defined for the sample tables, this
statement causes an error because the result table for the subquery is not
materialized before the deletion occurs. Because DSN8910.EMP is a dependent
table of DSN8910.DEPT, the deletion involves the table that is referred to in the
subquery, and the last delete rule in the path to EMP is SET NULL, not RESTRICT
or NO ACTION. If the statement could execute, its results would depend on the
order in which DB2 accesses the rows. Therefore, DB2 prohibits the deletion.
Example: Suppose that you create a view that combines the values of the
US_SALES, EUROPEAN_SALES, and JAPAN_SALES tables. The TOTAL columns
in the three tables are of different distinct types. Before you combine the table
values, you must convert the types of two of the TOTAL columns to the type of
the third TOTAL column. Assume that the US_DOLLAR type has been chosen as
the common distinct type. Because DB2 does not generate cast functions to convert
from one distinct type to another, two user-defined functions must exist:
v A function called EURO_TO_US that converts values of type EURO to type
US_DOLLAR
v A function called YEN_TO_US that converts values of type JAPANESE_YEN to
type US_DOLLAR
Then you can execute a query like this to display a table of combined sales:
SELECT PRODUCT_ITEM, MONTH, YEAR, TOTAL
FROM US_SALES
UNION
SELECT PRODUCT_ITEM, MONTH, YEAR, EURO_TO_US(TOTAL)
FROM EUROPEAN_SALES
UNION
SELECT PRODUCT_ITEM, MONTH, YEAR, YEN_TO_US(TOTAL)
FROM JAPAN_SALES;
Because the result type of both the YEN_TO_US function and the EURO_TO_US
function is US_DOLLAR, you have satisfied the requirement that the distinct types
of the combined columns are the same.
The basic rule for comparisons is that the data types of the operands must be
compatible. The compatibility rule defines, for example, that all numeric types
(SMALLINT, INTEGER, FLOAT, and DECIMAL) are compatible. That is, you can
compare an INTEGER value with a value of type FLOAT. However, you cannot
compare an object of a distinct type to an object of a different type. You can
compare an object with a distinct type only to an object with exactly the same
distinct type.
For example, suppose you want to know which products sold more than
$100 000.00 in the US in the month of July in 2003 (7/03). Because you cannot
compare data of type US_DOLLAR with instances of data of the source type of
US_DOLLAR (DECIMAL) directly, you must use a cast function to cast data from
DECIMAL to US_DOLLAR or from US_DOLLAR to DECIMAL. Whenever you
create a distinct type, DB2 creates two cast functions, one to cast from the source
type to the distinct type and the other to cast from the distinct type to the source
type. For distinct type US_DOLLAR, DB2 creates a cast function called DECIMAL
and a cast function called US_DOLLAR. When you compare an object of type
US_DOLLAR to an object of type DECIMAL, you can use one of those cast
functions to make the data types identical for the comparison. Suppose table
US_SALES is defined like this:
CREATE TABLE US_SALES
(PRODUCT_ITEM INTEGER,
MONTH INTEGER CHECK (MONTH BETWEEN 1 AND 12),
YEAR INTEGER CHECK (YEAR > 1990),
TOTAL US_DOLLAR);
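For example, a query of the following form (a sketch) casts the DECIMAL constant
to the distinct type by using the US_DOLLAR cast function:
SELECT PRODUCT_ITEM
FROM US_SALES
WHERE TOTAL > US_DOLLAR(100000.00)
AND MONTH = 7
AND YEAR = 2003;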
The casting satisfies the requirement that the compared data types are identical.
You cannot use host variables in statements that you prepare for dynamic
execution. As explained in “Dynamically executing an SQL statement by using
PREPARE and EXECUTE” on page 189, you can substitute parameter markers for
host variables when you prepare a statement, and then use host variables when
you execute the statement.
If you use a parameter marker in a predicate of a query, and the column to which
you compare the value represented by the parameter marker is of a distinct type,
you must cast the parameter marker to the distinct type, or cast the column to its
source type.
For example, suppose that distinct type CNUM is defined like this:
CREATE DISTINCT TYPE CNUM AS INTEGER;
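The first alternative, casting the column to its source type, is not shown in this
excerpt; based on the statement that follows, it has roughly this form (the
CUSTOMER table and its columns are taken from that statement):
SELECT FIRST_NAME, LAST_NAME, PHONE_NUM FROM CUSTOMER
WHERE CAST(CUST_NUM AS INTEGER) = ?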
Alternatively, you can cast the parameter marker to the distinct type:
SELECT FIRST_NAME, LAST_NAME, PHONE_NUM FROM CUSTOMER
WHERE CUST_NUM = CAST (? AS CNUM)
The preceding list of restrictions does not apply to SQL statements that are executed
at a lower level of nesting as a result of an after trigger. For example, suppose an
UPDATE statement at nesting level 1 activates an after update trigger, which calls
a stored procedure. The stored procedure executes two SQL statements that
reference the triggering table: one SELECT statement and one INSERT statement.
In this situation, both the SELECT and the INSERT statements can be executed
even though they are at nesting level 3.
Although trigger activations count in the levels of SQL statement nesting, the
previous restrictions on SQL statements do not apply to SQL statements that are
executed in the trigger body.
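The trigger that the following example relies on is not reproduced in this excerpt.
A trigger of roughly the following form fits the description: an after trigger on
table T1 whose body updates T1 (the trigger name, the column C1, and the SET
clause are assumptions):
CREATE TRIGGER TR1
AFTER INSERT ON T1
FOR EACH STATEMENT MODE DB2SQL
BEGIN ATOMIC
UPDATE T1 SET C1 = C1 + 1;
END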
Now suppose that you execute this SQL statement at level 1 of nesting:
INSERT INTO T1 VALUES(...);
Although the UPDATE statement in the trigger body is at level 2 of nesting and
modifies the same table that the triggering statement updates, DB2 can execute the
INSERT statement successfully.
Use either of the following types of cursors to retrieve rows from a result table:
v A row-positioned cursor retrieves at most a single row at a time from the result
table into host variables. At any point in time, the cursor is positioned on at
most a single row. For information about how to use a row-positioned cursor,
see “Accessing data by using a row-positioned cursor” on page 675.
v A rowset-positioned cursor retrieves zero, one, or more rows at a time, as a
rowset, from the result table into host variable arrays. At any point in time, the
cursor can be positioned on a rowset. You can reference all of the rows in the
rowset, or only one row in the rowset, when you use a positioned DELETE or
positioned UPDATE statement. For information about how to use a
rowset-positioned cursor, see “Accessing data by using a rowset-positioned
cursor” on page 680.
Cursors bound with cursor stability that are used in block fetch operations are
particularly vulnerable to reading data that has already changed. In a block fetch,
database access prefetches rows ahead of the row retrieval controlled by the
application. During that time the cursor might close, and the locks might be
released, before the application receives the data. Thus, it is possible for the
application to fetch a row of values that no longer exists, or to miss a recently
inserted row. In many cases, that is acceptable; a case for which it is not acceptable
is said to require data currency.
If your application requires data currency for a cursor, you need to prevent block
fetching for the data to which it points. To prevent block fetching for a distributed
cursor, declare the cursor with the FOR UPDATE clause.
Types of cursors
You can declare cursors, both row-positioned and rowset-positioned, as scrollable
or not scrollable, held or not held, and returnable or not returnable.
When you declare a cursor, you tell DB2 whether you want the cursor to be
scrollable or non-scrollable by including or omitting the SCROLL clause. This
clause determines whether the cursor moves sequentially forward through the
result table or can move randomly through the result table.
If you want to order the rows of the cursor’s result set, and you also want the
cursor to be updatable, you need to declare the cursor as scrollable, even if you
use it only to retrieve rows (or rowsets) sequentially. You can use the ORDER BY
clause in the declaration of an updatable cursor only if you declare the cursor as
scrollable.
To indicate that a cursor is scrollable, you declare it with the SCROLL keyword.
The following examples show a characteristic of scrollable cursors: the sensitivity.
Declaring a scrollable cursor with the INSENSITIVE keyword has the following
effects:
v The size, the order of the rows, and the values for each row of the result table
do not change after the application opens the cursor.
v The result table is read-only. Therefore, you cannot declare the cursor with the
FOR UPDATE clause, and you cannot use the cursor for positioned update or
delete operations.
The following figure shows a declaration for a sensitive static scrollable cursor.
EXEC SQL DECLARE C2 SENSITIVE STATIC SCROLL CURSOR FOR
SELECT DEPTNO, DEPTNAME, MGRNO
FROM DSN8910.DEPT
ORDER BY DEPTNO
END-EXEC.
The following figure shows a declaration for a sensitive dynamic scrollable cursor.
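The declaration is not reproduced in this excerpt; by analogy with the preceding
static declaration, it would look like the following statement (the cursor name C3
is an assumption):
EXEC SQL DECLARE C3 SENSITIVE DYNAMIC SCROLL CURSOR FOR
SELECT DEPTNO, DEPTNAME, MGRNO
FROM DSN8910.DEPT
ORDER BY DEPTNO
END-EXEC.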
Both the INSENSITIVE cursor and the SENSITIVE STATIC cursor follow the static
cursor model:
v The size of the result table does not grow after the application opens the cursor.
Rows that are inserted into the underlying table are not added to the result
table.
v The order of the rows does not change after the application opens the cursor.
If the cursor declaration contains an ORDER BY clause, and the columns that are
in the ORDER BY clause are updated after the cursor is opened, the order of the
rows in the result table does not change.
When you declare a cursor as SENSITIVE, you can declare it either STATIC or
DYNAMIC. The SENSITIVE DYNAMIC cursor follows the dynamic cursor model:
v The size and contents of the result table can change with every fetch.
The base table can change while the cursor is scrolling on it. If another
application process changes the data, the cursor sees the newly changed data
when it is committed. If the application process of the cursor changes the data,
the cursor sees the newly changed data immediately.
v The order of the rows can change after the application opens the cursor.
If the cursor declaration contains an ORDER BY clause, and columns that are in
the ORDER BY clause are updated after the cursor is opened, the order of the
rows in the result table changes.
If the program abnormally terminates, the cursor position is lost. To prepare for
restart, your program must reposition the cursor.
The following restrictions apply to cursors that are declared WITH HOLD:
v Do not use DECLARE CURSOR WITH HOLD with the new user signon from a
DB2 attachment facility, because all open cursors are closed.
v Do not declare a WITH HOLD cursor in a thread that might become inactive. If
you do, its locks are held indefinitely.
CICS: You should always close cursors that you no longer need. If you let DB2 close a
CICS attachment cursor, the cursor might not close until the CICS attachment
facility reuses or terminates the thread.
The following cursor declaration causes the cursor to maintain its position in the
DSN8910.EMP table after a commit point:
EXEC SQL
DECLARE EMPLUPDT CURSOR WITH HOLD FOR
SELECT EMPNO, LASTNAME, PHONENO, JOB, SALARY, WORKDEPT
FROM DSN8910.EMP
END-EXEC.
Your program can have several cursors, each of which performs the previous steps.
The following example shows a simple form of the DECLARE CURSOR statement:
EXEC SQL
DECLARE C1 CURSOR FOR
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY
FROM DSN8910.EMP
END-EXEC.
You can use this cursor to list select information about employees.
More complicated cursors might include WHERE clauses or joins of several tables.
For example, suppose that you want to use a cursor to list employees who work
on a certain project. Declare a cursor like this to identify those employees:
EXEC SQL
DECLARE C2 CURSOR FOR
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY
FROM DSN8910.EMP X
WHERE EXISTS
(SELECT *
FROM DSN8910.PROJ Y
WHERE X.EMPNO=Y.RESPEMP
AND Y.PROJNO=:GOODPROJ)
END-EXEC.
Declaring cursors for tables that use multilevel security: You can declare a cursor
that retrieves rows from a table that uses multilevel security with row-level
granularity. However, the result table for the cursor contains only those rows that
have a security label value that is equivalent to or dominated by the security label
value of your ID. For more information about multilevel security with row-level
granularity, see the topic “Multilevel security” in DB2 Administration Guide.
Updating a column: You can update columns in the rows that you retrieve.
Updating a row after you use a cursor to retrieve it is called a positioned update. If
you intend to perform any positioned updates on the identified table, include the
FOR UPDATE clause. The FOR UPDATE clause has two forms:
v The first form is FOR UPDATE OF column-list. Use this form when you know in
advance which columns you need to update.
v The second form is FOR UPDATE, with no column list. Use this form when you
might use the cursor to update any of the columns of the table.
For example, you can use this cursor to update only the SALARY column of the
employee table:
EXEC SQL
DECLARE C1 CURSOR FOR
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY
FROM DSN8910.EMP X
WHERE EXISTS
(SELECT *
FROM DSN8910.PROJ Y
WHERE X.EMPNO=Y.RESPEMP
AND Y.PROJNO=:GOODPROJ)
FOR UPDATE OF SALARY;
If you might use the cursor to update any column of the employee table, define
the cursor like this:
EXEC SQL
DECLARE C1 CURSOR FOR
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY
FROM DSN8910.EMP X
WHERE EXISTS
(SELECT *
FROM DSN8910.PROJ Y
WHERE X.EMPNO=Y.RESPEMP
AND Y.PROJNO=:GOODPROJ)
FOR UPDATE;
DB2 must do more processing when you use the FOR UPDATE clause without a
column list than when you use the FOR UPDATE clause with a column list.
Therefore, if you intend to update only a few columns of a table, your program
can run more efficiently if you include a column list.
The precompiler options NOFOR and STDSQL affect the use of the FOR UPDATE
clause in static SQL statements. For information about these options, see Table 150
on page 904. If you do not specify the FOR UPDATE clause in a DECLARE
CURSOR statement, and you do not specify the STDSQL(YES) option or the
NOFOR precompiler options, you receive an error if you execute a positioned
UPDATE statement.
Read-only result table: Some result tables cannot be updated—for example, the
result of joining two or more tables. For information about the defining
characteristics of read-only result tables, see the topic “DECLARE CURSOR” in
DB2 SQL Reference.
To open a row cursor, execute the OPEN statement in your program. DB2 then
uses the SELECT statement within DECLARE CURSOR to identify a set of rows. If
you use host variables in the search condition of that SELECT statement, DB2 uses
the current value of the variables to select the rows. The result table that satisfies
the search condition might contain zero, one, or many rows. An example of an
OPEN statement is:
EXEC SQL
OPEN C1
END-EXEC.
Two factors that influence the amount of time that DB2 requires to process the
OPEN statement are:
v Whether DB2 must perform any sorts before it can retrieve rows
v Whether DB2 uses parallelism to process the SELECT statement of the cursor
To determine whether the program has retrieved the last row of data, test the
SQLCODE field for a value of 100 or the SQLSTATE field for a value of ’02000’.
These codes occur when a FETCH statement has retrieved the last row in the result
table and your program issues a subsequent FETCH. For example:
IF SQLCODE = 100 GO TO DATA-NOT-FOUND.
The SELECT statement within the DECLARE CURSOR statement identifies the result
table from which you fetch rows, but DB2 does not retrieve any data until your
application program executes a FETCH statement.
When your program executes the FETCH statement, DB2 positions the cursor on a
row in the result table. That row is called the current row. DB2 then copies the
current row contents into the program host variables that you specify on the INTO
clause of FETCH. This sequence repeats each time you issue FETCH, until you
process all rows in the result table.
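For example, a FETCH statement against the C1 cursor that is declared earlier in
this topic might look like the following statement (the host variable names are
assumptions):
EXEC SQL
FETCH C1 INTO :HV-EMPNO, :HV-FIRSTNME, :HV-MIDINIT, :HV-LASTNAME, :HV-SALARY
END-EXEC.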
The row that DB2 points to when you execute a FETCH statement depends on
whether the cursor is declared as scrollable or non-scrollable.
When you query a remote subsystem with FETCH, consider using block fetch for
better performance. Block fetch processes rows ahead of the current row. You
cannot use a block fetch when you perform a positioned update or delete
operation.
After your program has executed a FETCH statement to retrieve the current row,
you can use a positioned UPDATE statement to modify the data in that row. An
example of a positioned UPDATE statement is:
EXEC SQL
UPDATE DSN8910.EMP
SET SALARY = 50000
WHERE CURRENT OF C1
END-EXEC.
After your program has executed a FETCH statement to retrieve the current row,
you can use a positioned DELETE statement to delete that row. An example of a
positioned DELETE statement looks like this:
EXEC SQL
DELETE FROM DSN8910.EMP
WHERE CURRENT OF C1
END-EXEC.
A positioned DELETE statement deletes the row on which the cursor is positioned.
Recommendation: To free the resources that are held by the cursor, close the
cursor explicitly by issuing the CLOSE statement.
Your program can have several cursors, each of which performs the previous steps.
To declare a rowset cursor, use the WITH ROWSET POSITIONING clause in the
DECLARE CURSOR statement. The following example shows how to declare a
rowset cursor:
EXEC SQL
DECLARE C1 CURSOR WITH ROWSET POSITIONING FOR
SELECT EMPNO, LASTNAME, SALARY
FROM DSN8910.EMP
END-EXEC.
To open a rowset cursor, execute the OPEN statement in your program. DB2 then
uses the SELECT statement within DECLARE CURSOR to identify the rows in the
result table. For more information about the OPEN CURSOR process, see “Opening
a row cursor” on page 677.
To determine whether the program has retrieved the last row of data in the result
table, test the SQLCODE field for a value of +100 or the SQLSTATE field for a
value of ’02000’. With a rowset cursor, these codes occur when a FETCH statement
retrieves the last row in the result table. However, when the last row has been
returned as part of a rowset, that same FETCH statement might also have returned
one or more other rows.
To determine the number of retrieved rows, use either of the following values:
v The contents of the SQLERRD(3) field in the SQLCA
v The contents of the ROW_COUNT item of GET DIAGNOSTICS
For information about GET DIAGNOSTICS, see “Checking the execution of SQL
statements by using the GET DIAGNOSTICS statement” on page 209.
If you declare the cursor as dynamic scrollable, and SQLCODE has the value +100,
you can continue with a FETCH statement until no more rows are retrieved.
Additional fetches might retrieve more rows because a dynamic scrollable cursor is
sensitive to updates by other application processes. For information about dynamic
cursors, see “Types of cursors” on page 671.
You can execute these static SQL statements when you use a rowset cursor:
v A multiple-row FETCH statement that copies a rowset of column values into
either of the following data areas:
– Host variable arrays that are declared in your program
– Dynamically-allocated arrays whose storage addresses are put into an SQL
descriptor area (SQLDA), along with the attributes of the columns that are to
be retrieved
v After either form of the multiple-row FETCH statement, you can issue:
– A positioned UPDATE statement on the current rowset
– A positioned DELETE statement on the current rowset
You must use the WITH ROWSET POSITIONING clause of the DECLARE
CURSOR statement if you plan to use a rowset-positioned FETCH statement.
The following example shows a FETCH statement that retrieves 20 rows into host
variable arrays that are declared in your program:
EXEC SQL
FETCH NEXT ROWSET FROM C1
FOR 20 ROWS
INTO :HVA-EMPNO, :HVA-LASTNAME, :HVA-SALARY :INDA-SALARY
END-EXEC.
When your program executes a FETCH statement with the ROWSET keyword, the
cursor is positioned on a rowset in the result table. That rowset is called the current
rowset. The dimension of each of the host variable arrays must be greater than or
equal to the number of rows to be retrieved.
Suppose that you want to dynamically allocate the storage needed for the arrays of
column values that are to be retrieved from the employee table. You must:
1. Declare an SQLDA structure and the variables that reference the SQLDA.
2. Dynamically allocate the SQLDA and the arrays needed for the column values.
3. Set the fields in the SQLDA for the column values to be retrieved.
4. Open the cursor.
5. Fetch the rows.
Your program must also declare variables that reference the SQLDA structure, the
SQLVAR structure within the SQLDA, and the DECLEN structure for the precision
and scale if you are retrieving a DECIMAL column. For C programs, the code
looks like this:
struct sqlda *sqldaptr;
struct sqlvar *varptr;
struct DECLEN {
unsigned char precision;
unsigned char scale;
};
Before you can set the fields in the SQLDA for the column values to be retrieved,
you must dynamically allocate storage for the SQLDA structure. For C programs,
the code looks like this:
sqldaptr = (struct sqlda *) malloc (3 * 44 + 16);
The size of the SQLDA is SQLN * 44 + 16, where the value of the SQLN field is the
number of output columns.
You must set the fields in the SQLDA structure for your FETCH statement.
Suppose you want to retrieve the columns EMPNO, LASTNAME, and SALARY.
The C code to set the SQLDA fields for these columns looks like this:
| strcpy(sqldaptr->sqldaid,"SQLDA");
| sqldaptr->sqldabc = 148; /* number bytes of storage allocated for the SQLDA */
| sqldaptr->sqln = 3; /* number of SQLVAR occurrences */
| sqldaptr->sqld = 3;
| varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0])); /* Point to first SQLVAR */
| varptr->sqltype = 452; /* data type CHAR(6) */
| varptr->sqllen = 6;
| varptr->sqldata = (char *) hva1;
| varptr->sqlind = (short *) inda1;
| varptr->sqlname.length = 8;
| memcpy(varptr->sqlname.data, "\x00\x00\x00\x00\x00\x01\x00\x14",varptr->sqlname.length);
| varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0]) + 1); /* Point to next SQLVAR */
| varptr->sqltype = 448; /* data type VARCHAR(15) */
| varptr->sqllen = 15;
| varptr->sqldata = (char *) hva2;
| varptr->sqlind = (short *) inda2;
| varptr->sqlname.length = 8;
| memcpy(varptr->sqlname.data, "\x00\x00\x00\x00\x00\x01\x00\x14",varptr->sqlname.length);
| varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0]) + 2); /* Point to next SQLVAR */
| varptr->sqltype = 485; /* data type DECIMAL(9,2) */
| ((struct DECLEN *) &(varptr->sqllen))->precision = 9;
| ((struct DECLEN *) &(varptr->sqllen))->scale = 2;
| varptr->sqldata = (char *) hva3;
| varptr->sqlind = (short *) inda3;
| varptr->sqlname.length = 8;
| memcpy(varptr->sqlname.data, "\x00\x00\x00\x00\x00\x01\x00\x14",varptr->sqlname.length);
For information about using the SQLDA in dynamic SQL, see “Dynamic SQL” on
page 162.
For more information about the SQLDA, see the topic “SQL descriptor area
(SQLDA)” in DB2 SQL Reference.
You can open the cursor only after all of the fields have been set in the output
SQLDA:
EXEC SQL OPEN C1;
After the OPEN statement, the program fetches the next rowset:
EXEC SQL
FETCH NEXT ROWSET FROM C1
FOR 20 ROWS
USING DESCRIPTOR :*sqldaptr;
The USING clause of the FETCH statement names the SQLDA that describes the
columns that are to be retrieved.
After your program executes a FETCH statement to establish the current rowset,
you can use a positioned UPDATE statement with either of the following clauses:
v Use WHERE CURRENT OF to modify all of the rows in the current rowset
v Use FOR ROW n OF ROWSET to modify row n in the current rowset
For information about restrictions for a positioned UPDATE, see “Executing SQL
statements by using a row cursor” on page 678.
When the UPDATE statement is executed, the cursor must be positioned on a row
or rowset of the result table. If the cursor is positioned on a row, that row is
updated. If the cursor is positioned on a rowset, all of the rows in the rowset are
updated.
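For example, a positioned UPDATE of the following form modifies only the third
row of the current rowset of cursor C1 (a sketch; the SET clause is an assumption,
and the complete clause syntax is in the topic “UPDATE” in DB2 SQL Reference):
EXEC SQL
UPDATE DSN8910.EMP
SET SALARY = SALARY * 1.05
WHERE CURRENT OF C1 FOR ROW 3 OF ROWSET
END-EXEC.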
After your program executes a FETCH statement to establish the current rowset,
you can use a positioned DELETE statement with either of the following clauses:
v Use WHERE CURRENT OF to delete all of the rows in the current rowset
v Use FOR ROW n OF ROWSET to delete row n in the current rowset
For information about restrictions for a positioned DELETE, see “Executing SQL
statements by using a row cursor” on page 678.
When the DELETE statement is executed, the cursor must be positioned on a row
or rowset of the result table. If the cursor is positioned on a row, that row is
deleted, and the cursor is positioned before the next row of its result table. If the
cursor is positioned on a rowset, all of the rows in the rowset are deleted, and the
cursor is positioned before the next rowset of its result table.
If you do not explicitly specify the number of rows in a rowset, DB2 implicitly
determines the number of rows based on the last fetch request.
To explicitly set the size of a rowset, use the FOR n ROWS clause in the FETCH
statement. If a FETCH statement specifies the ROWSET keyword, and not the FOR
n ROWS clause, the size of the rowset is implicitly set to the size of the rowset that
was most recently specified in a prior FETCH statement. If a prior FETCH
statement did not specify the FOR n ROWS clause or the ROWSET keyword, the
size of the current rowset is implicitly set to 1. For examples of rowset positioning,
see Table 121 on page 697.
Recommendation: To free the resources held by the cursor, close the cursor
explicitly by issuing the CLOSE statement.
When you open any cursor, the cursor is positioned before the first row of the
result table. You move a scrollable cursor around in the result table by specifying a
fetch orientation keyword in a FETCH statement. A fetch orientation keyword
indicates the absolute or relative position of the cursor when the FETCH statement
is executed. The following table lists the fetch orientation keywords that you can
specify and their meanings. These keywords apply to both row-positioned
scrollable cursors and rowset-positioned scrollable cursors.
Table 119. Positions for a scrollable cursor
Keyword in FETCH statement    Cursor position when FETCH is executed (see note 1)
BEFORE                        Before the first row
FIRST or ABSOLUTE +1          On the first row
LAST or ABSOLUTE -1           On the last row
AFTER                         After the last row
ABSOLUTE (see note 2)         On an absolute row number, from before the first
                              row forward or from after the last row backward
RELATIVE (see note 2)         On the row that is forward or backward a relative
                              number of rows from the current row
CURRENT                       On the current row
PRIOR or RELATIVE -1          On the previous row
NEXT                          On the next row (default)
Notes:
1. The cursor position applies to both row position and rowset position, for example, before
the first row or before the first rowset.
2. For more information about ABSOLUTE and RELATIVE, see the topic “FETCH” in DB2
SQL Reference.
Example: To use the cursor that is declared in “Types of cursors” on page 671 to
fetch the fifth row of the result table, use a FETCH statement like this:
EXEC SQL FETCH ABSOLUTE +5 C1 INTO :HVDEPTNO, :DEPTNAME, :MGRNO;
To fetch the fifth row from the end of the result table, use this FETCH statement:
EXEC SQL FETCH ABSOLUTE -5 C1 INTO :HVDEPTNO, :DEPTNAME, :MGRNO;
When you declare a cursor as SENSITIVE STATIC, changes that other processes or
cursors make to the underlying table can be visible to the result table of the cursor.
Whether those changes are visible depends on whether you specify SENSITIVE or
INSENSITIVE when you execute FETCH statements with the cursor. When you
specify FETCH INSENSITIVE, changes that other processes or other cursors make
to the underlying table are not visible in the result table. When you specify FETCH
SENSITIVE, changes that other processes or cursors make to the underlying table
are visible in the result table.
When you declare a cursor as SENSITIVE DYNAMIC, changes that other processes
or cursors make to the underlying table are visible to the result table after the
changes are committed.
The following table summarizes the sensitivity values and their effects on the
result table of a scrollable cursor.
Table 120. How sensitivity affects the result table for a scrollable cursor
DECLARE sensitivity INSENSITIVE
  FETCH INSENSITIVE: No changes to the underlying table are visible in the
  result table. Positioned UPDATE and DELETE statements using the cursor are
  not allowed.
  FETCH SENSITIVE: Not valid.
DECLARE sensitivity SENSITIVE STATIC
  FETCH INSENSITIVE: Only positioned updates and deletes that are made by the
  cursor are visible in the result table.
  FETCH SENSITIVE: All updates and deletes are visible in the result table.
  Inserts made by other processes are not visible in the result table.
DECLARE sensitivity SENSITIVE DYNAMIC
  FETCH INSENSITIVE: Not valid.
  FETCH SENSITIVE: All committed changes are visible in the result table,
  including updates, deletes, inserts, and changes in the order of the rows.
Answer: Declare your cursor as scrollable. When you select rows from the table,
you can use the various forms of the FETCH statement to move the cursor: to an
absolute row number, forward or backward a certain number of rows, to the first
or last row, or before the first row or after the last row. You can use any
combination of these FETCH statements to change direction repeatedly.
You can use code like the following example to move forward in the department
table by 10 records, backward five records, and forward again by three records:
/**************************/
/* Declare host variables */
/**************************/
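A minimal sketch of such code, assuming a scrollable cursor C1 on DSN8910.DEPT
and a host variable hvdeptname (the names are illustrative, and error checking
is omitted):
EXEC SQL BEGIN DECLARE SECTION;
char hvdeptname[37];
EXEC SQL END DECLARE SECTION;
EXEC SQL DECLARE C1 SENSITIVE STATIC SCROLL CURSOR FOR
SELECT DEPTNAME FROM DSN8910.DEPT;
EXEC SQL OPEN C1;
EXEC SQL FETCH RELATIVE +10 C1 INTO :hvdeptname;   /* forward 10 rows */
EXEC SQL FETCH RELATIVE -5 C1 INTO :hvdeptname;    /* backward 5 rows */
EXEC SQL FETCH RELATIVE +3 C1 INTO :hvdeptname;    /* forward 3 rows  */
EXEC SQL CLOSE C1;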
To determine the number of rows in the result table of a scrollable cursor,
execute a FETCH statement, such as FETCH AFTER, that positions the
cursor after the last row. You can then examine the fields SQLERRD(1) and
SQLERRD(2) in the SQLCA (fields sqlerrd[0] and sqlerrd[1] for C and C++) for the
number of rows in the result table. Alternatively, you can use the GET
DIAGNOSTICS statement to retrieve the number of rows in the ROW_COUNT
statement item.
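A minimal sketch of this technique, assuming a scrollable cursor C1 and an
integer host variable numrows (both names are illustrative):
EXEC SQL FETCH AFTER FROM C1;                    /* position after the last row       */
EXEC SQL GET DIAGNOSTICS :numrows = ROW_COUNT;   /* number of rows in the result table */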
You can remove a delete hole only by opening the scrollable cursor, setting a
savepoint, executing a positioned DELETE statement with the scrollable cursor,
and rolling back to the savepoint.
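The following sketch shows that sequence, assuming a SENSITIVE STATIC scrollable
cursor C3 on table A and a host variable hvcol1 (all names are illustrative):
EXEC SQL OPEN C3;
EXEC SQL SAVEPOINT SVPT1 ON ROLLBACK RETAIN CURSORS;
EXEC SQL FETCH SENSITIVE ABSOLUTE +3 C3 INTO :hvcol1;
EXEC SQL DELETE FROM A WHERE CURRENT OF C3;   /* creates the delete hole */
EXEC SQL ROLLBACK TO SAVEPOINT SVPT1;         /* removes the delete hole */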
You can convert an update hole back to a result table row by updating the row in
the base table, as shown in the following figure. You can update the base table
with a searched UPDATE statement in the same application process, or a searched
or positioned UPDATE statement in another application process. After you update
the base table, if the row qualifies for the result table, the update hole disappears.
If the scrollable cursor creates the hole, the hole is visible when you execute a
FETCH statement for the row that contains the hole. The FETCH statement can be
FETCH INSENSITIVE or FETCH SENSITIVE.
If an update or delete operation outside the scrollable cursor creates the hole, the
hole is visible at the following times:
v If you execute a FETCH SENSITIVE statement for the row that contains the hole,
the hole is visible when you execute the FETCH statement.
v If you execute a FETCH INSENSITIVE statement, the hole is not visible when
you execute the FETCH statement. DB2 returns the row as it was before the
update or delete operation occurred. However, if you follow the FETCH
INSENSITIVE statement with a positioned UPDATE or DELETE statement, the
hole becomes visible.
In some situations, you might not be able to fetch a row from the result table of a
scrollable cursor, depending on how the cursor is declared:
v Scrollable cursors that are declared as INSENSITIVE or SENSITIVE STATIC
follow a static model, which means that DB2 determines the size of the result
table and the order of the rows when you open the cursor.
Deleting or updating rows after a static cursor is open can result in holes in the
result table. See “Removing a delete hole or update hole” on page 688.
v Scrollable cursors that are declared as SENSITIVE DYNAMIC follow a dynamic
model, which means that the size and contents of the result table, and the order
of the rows, can change after you open the cursor.
A dynamic cursor scrolls directly on the base table. If the current row of the
cursor is deleted or if it is updated so that it no longer satisfies the search
condition, and the next cursor operation is FETCH CURRENT, then DB2 issues
an SQL warning.
The following examples demonstrate how delete and update holes can occur when
you use a SENSITIVE STATIC scrollable cursor.
Suppose that table A consists of one integer column, COL1, which has the values
shown in the following figure.
Now suppose that you declare a SENSITIVE STATIC scrollable cursor on table A and
use it to execute a positioned DELETE statement against the third row of the
result table. The positioned DELETE statement creates a delete hole, as shown in
the following figure.
After you execute the positioned delete statement, the third row is deleted from
the result table, but the result table does not shrink to fill the space that the deleted
row creates.
Suppose that you declare the following SENSITIVE STATIC scrollable cursor,
which you use to update rows in A:
EXEC SQL DECLARE C4 SENSITIVE STATIC SCROLL CURSOR FOR
SELECT COL1
FROM A
WHERE COL1<6;
The searched UPDATE statement creates an update hole, as shown in the following
figure.
After you execute the searched UPDATE statement, the last row no longer qualifies
for the result table, but the result table does not shrink to fill the space that the
disqualified row creates.
| FETCH WITH CONTINUE breaks XML and LOB values into manageable pieces
| and processes the pieces one at a time to avoid the following buffer allocation
| problems:
| v Allocating overly large or unnecessary space for buffers. If some LOB values are
| shorter than the maximum length for values in a column, you can waste buffer
| space if you allocate enough space for the maximum length. The buffer
| allocation problem can be even worse for XML data because an XML column
| does not have a defined maximum length. If you use FETCH WITH
| CONTINUE, you can allocate more appropriate buffer space for the actual
| length of the XML and LOB values.
| v Truncating very large XML and LOB data. If a very large XML or LOB value
| does not fit in the host variable buffer space that is provided by the application
| program, DB2 truncates the value. If the application program retries this fetch
| with a larger buffer, two problems exist. First, when using a non-scrollable
| cursor, you cannot re-fetch the current row without closing, reopening, and
| repositioning the cursor to the row that was truncated. Second, if you do not use
| FETCH WITH CONTINUE, DB2 does not return the actual length of the entire
| value to the application program. Thus, DB2 does not know how large a buffer
| to reallocate. If you use FETCH WITH CONTINUE, DB2 preserves the truncated
| portion of the data for subsequent retrieval and returns the actual length of the
| entire data value so that the application can reallocate a buffer of the appropriate
| size.
| DB2 provides two methods for using FETCH WITH CONTINUE with LOB and
| XML data:
| v “Dynamically allocating buffers when fetching XML and LOB data”
| v “Moving data through fixed-size buffers when fetching XML and LOB data” on
| page 692
| To use dynamic buffer allocation for LOB and XML data, perform the following
| steps:
| 1. Use an initial FETCH WITH CONTINUE to fetch data into a pre-allocated
| buffer of a moderate size.
| 2. If the value is too large to fit in the buffer, use the length information that is
| returned by DB2 to allocate the appropriate amount of storage.
| 3. Use a single FETCH CURRENT CONTINUE statement to retrieve the
| remainder of the data.
| Example: Suppose that table T1 was created with the following statement:
| CREATE TABLE T1 (C1 INT, C2 CLOB(100M), C3 CLOB(32K), C4 XML);
| Now, suppose that you declare CURSOR1, prepare and describe statement
| DYNSQLSTMT1 with descriptor sqlda, and open CURSOR1 with the following
| statements:
| EXEC SQL DECLARE CURSOR1 CURSOR FOR DYNSQLSTMT1;
| EXEC SQL PREPARE DYNSQLSTMT1 FROM 'SELECT * FROM T1';
| EXEC SQL DESCRIBE DYNSQLSTMT1 INTO DESCRIPTOR :SQLDA;
| EXEC SQL OPEN CURSOR1;
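A minimal sketch of the initial fetch, assuming that the application has already
set moderate-sized data buffers and length fields in the SQLDA:
EXEC SQL FETCH WITH CONTINUE CURSOR1 INTO DESCRIPTOR :SQLDA;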
| Because C2 and C4 contain data that does not fit in the buffer, some of the data is
| truncated. Your application can use the information that DB2 returns to allocate
| large enough buffers for the remaining data and reset the data pointers and length
| fields in SQLDA. At that point, you can resume the fetch and complete the process
| with the following FETCH CURRENT CONTINUE statement and CLOSE CURSOR
| statement:
| EXEC SQL FETCH CURRENT CONTINUE CURSOR1 INTO DESCRIPTOR :SQLDA;
| EXEC SQL CLOSE CURSOR1;
| The application needs to concatenate the two returned pieces of the data value.
| One technique is to move the first piece of data to the dynamically-allocated larger
| buffer before the FETCH CONTINUE. Set the SQLDATA pointer in the SQLDA
| structure to point immediately after the last byte of this truncated value. DB2 then
| writes the remaining data to this location and thus completes the concatenation.
| To use fixed buffer allocation for LOB and XML data, perform the following steps:
| 1. Use an initial FETCH WITH CONTINUE to fetch data into a pre-allocated
| buffer of a moderate size.
| 2. If the value is too large to fit in the buffer, use as many FETCH CONTINUE
| statements as necessary to process all of the data through a fixed buffer.
| After each FETCH operation, check whether a column was truncated by first
| examining the SQLWARN1 field in the returned SQLCA. If that field contains a
| ’W’ value, at least one column in the returned row has been truncated. To then
| determine if a particular LOB or XML column was truncated, your application
| must compare the value that is returned in the length field with the declared
| length of the host variable. If a column is truncated, continue to use FETCH
| CONTINUE statements until all of the data has been retrieved.
| After you fetch each piece of the data, move it out of the buffer to make way
| for the next fetch. Your application can write the pieces to an output file or
| reconstruct the entire data value in a buffer above the 2-GB bar.
| Example: Suppose that table T1 was created with the following statement:
| CREATE TABLE T1 (C1 INT, C2 CLOB(100M), C3 CLOB(32K), C4 XML);
| Next, suppose that you use the following statements to declare and open
| CURSOR1 and to FETCH WITH CONTINUE:
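A minimal sketch of such statements, assuming that the application reads column
C2 through a 32 KB CLOB host variable named CLOBHV (the declared length matches
the 32767 bytes discussed later in this topic; the names are illustrative):
EXEC SQL BEGIN DECLARE SECTION;
SQL TYPE IS CLOB(32767) CLOBHV;
EXEC SQL END DECLARE SECTION;
EXEC SQL DECLARE CURSOR1 CURSOR FOR SELECT C2 FROM T1;
EXEC SQL OPEN CURSOR1;
EXEC SQL FETCH WITH CONTINUE CURSOR1 INTO :CLOBHV;   /* initial fetch into the fixed buffer */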
| As each piece of the data value is fetched, move it from the buffer to the output
| file.
| Because the 10 MB value in C2 does not fit into the 32 KB buffer, some of the data
| is truncated. Your application can loop through the following FETCH CURRENT
| CONTINUE:
| EXEC SQL FETCH CURRENT CONTINUE CURSOR1 INTO :CLOBHV;
| After each FETCH operation, you can determine if the data was truncated by first
| checking if the SQLWARN1 field in the returned SQLCA contains a ’W’ value. If
| so, then check if the length value, which is returned in CLOBHV_LENGTH, is
| greater than the declared length of 32767. (CLOBHV_LENGTH is declared as part
| of the precompiler expansion of the CLOBHV declaration.) If the value is greater,
| that value has been truncated and more data can be retrieved with the next
| FETCH CONTINUE operation.
| When all of the data has moved to the output file, you can close the cursor:
| EXEC SQL CLOSE CURSOR1;
After you open a cursor, you can determine the following attributes of the cursor
by checking the following SQLWARN and SQLERRD fields of the SQLCA:
SQLWARN1
Indicates whether the cursor is scrollable or non-scrollable.
SQLWARN4
Indicates whether the cursor is insensitive (I), sensitive static (S), or sensitive
dynamic (D).
SQLWARN5
Indicates whether the cursor is read-only, readable and deletable, or readable,
deletable, and updatable.
SQLERRD(1) and SQLERRD(2)
| These two fields together contain an 8-byte integer that represents the
| number of rows in the result table of a cursor when the cursor is positioned
| after the last row. The cursor is positioned after the last row when the
| SQLCODE is 100. These fields are not set for dynamic scrollable cursors.
SQLERRD(3)
| The number of rows in the result table when the SELECT statement of the
| cursor contains a data change statement.
If the OPEN statement executes with no errors or warnings, DB2 does not set
SQLWARN0 when it sets SQLWARN1, SQLWARN4, or SQLWARN5.
For more information about the SQLCA fields, see the topic “Description of
SQLCA fields” in DB2 SQL Reference.
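For example, a C program might test the sensitivity of a cursor after the OPEN
statement as in the following sketch, which uses the I, S, and D values listed
above (the rest of the program is omitted):
EXEC SQL OPEN C1;
if (sqlca.sqlcode == 0)
{
  switch (sqlca.sqlwarn[4])            /* SQLWARN4: cursor sensitivity */
  {
    case 'I':  /* insensitive       */  break;
    case 'S':  /* sensitive static  */  break;
    case 'D':  /* sensitive dynamic */  break;
  }
}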
After you open a cursor, you can determine the following attributes of the cursor
by checking these GET DIAGNOSTICS items:
DB2_SQL_ATTR_CURSOR_HOLD
Indicates whether the cursor can be held open across commits (Y or N)
DB2_SQL_ATTR_CURSOR_ROWSET
Indicates whether the cursor can use rowset positioning (Y or N)
DB2_SQL_ATTR_CURSOR_SCROLLABLE
Indicates whether the cursor is scrollable (Y or N)
DB2_SQL_ATTR_CURSOR_SENSITIVITY
Indicates whether the cursor is insensitive or sensitive to changes that are
made by other processes (I or S)
DB2_SQL_ATTR_CURSOR_TYPE
Indicates whether the cursor is forward (F), declared static (S for INSENSITIVE
or SENSITIVE STATIC), or dynamic (D for SENSITIVE DYNAMIC)
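For example, after the OPEN statement, a program might retrieve several of these
items into CHAR(1) host variables (the variable names are illustrative):
EXEC SQL GET DIAGNOSTICS
:HVHOLD   = DB2_SQL_ATTR_CURSOR_HOLD,
:HVSCROLL = DB2_SQL_ATTR_CURSOR_SCROLLABLE,
:HVSENS   = DB2_SQL_ATTR_CURSOR_SENSITIVITY,
:HVTYPE   = DB2_SQL_ATTR_CURSOR_TYPE;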
For more information about the GET DIAGNOSTICS statement, see “Checking the
execution of SQL statements by using the GET DIAGNOSTICS statement” on page
209.
Question: When a program retrieves data from the database, how can the program
scroll backward through the data?
Using a scrollable cursor: Using a scrollable cursor to fetch backward through data
involves these basic steps:
1. Declare the cursor with the SCROLL keyword.
2. Open the cursor.
3. Execute a FETCH statement to position the cursor at the end of the result table.
4. In a loop, execute FETCH statements that move the cursor backward and then
retrieve the data.
5. When you have retrieved all the data, close the cursor.
You can use code like the following example to retrieve department names in
reverse order from table DSN8910.DEPT:
/**************************/
/* Declare host variables */
/**************************/
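A minimal sketch of such code, assuming an INSENSITIVE scrollable cursor C2 and
a host variable hvdeptname (the names are illustrative, and error handling is
omitted):
#include <stdio.h>
EXEC SQL INCLUDE SQLCA;
EXEC SQL BEGIN DECLARE SECTION;
char hvdeptname[37];
EXEC SQL END DECLARE SECTION;
EXEC SQL DECLARE C2 INSENSITIVE SCROLL CURSOR FOR
SELECT DEPTNAME FROM DSN8910.DEPT;
EXEC SQL OPEN C2;
EXEC SQL FETCH AFTER FROM C2;                  /* position after the last row */
EXEC SQL FETCH PRIOR C2 INTO :hvdeptname;
while (SQLCODE == 0)
{
  printf("%s\n", hvdeptname);
  EXEC SQL FETCH PRIOR C2 INTO :hvdeptname;    /* move backward one row */
}
EXEC SQL CLOSE C2;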
Question: How can you scroll backward and update data that was retrieved
previously?
Answer: Use a scrollable cursor that is declared with the FOR UPDATE clause.
Using a scrollable cursor to update backward involves these basic steps:
1. Declare the cursor with the SENSITIVE STATIC SCROLL keywords.
2. Open the cursor.
3. Execute a FETCH statement to position the cursor at the end of the result table.
4. Execute FETCH statements that move the cursor backward until you reach the
row that you want to update.
5. Execute the UPDATE WHERE CURRENT OF statement to update the current
row.
6. Repeat steps 4 and 5 until you have updated all of the rows that you need to update.
7. When you have retrieved and updated all the data, close the cursor.
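A minimal sketch of these steps, assuming a cursor C3 on DSN8910.EMP and host
variables hvempno and hvsalary (the names are illustrative; looping and error
handling are omitted):
EXEC SQL DECLARE C3 SENSITIVE STATIC SCROLL CURSOR FOR
SELECT EMPNO, SALARY FROM DSN8910.EMP
FOR UPDATE OF SALARY;
EXEC SQL OPEN C3;
EXEC SQL FETCH AFTER FROM C3;                  /* position after the last row */
EXEC SQL FETCH SENSITIVE PRIOR C3 INTO :hvempno, :hvsalary;
EXEC SQL UPDATE DSN8910.EMP
SET SALARY = SALARY * 1.05
WHERE CURRENT OF C3;
EXEC SQL CLOSE C3;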
For information about using a multiple-row FETCH statement, see “Executing SQL
statements by using a rowset cursor” on page 681.
The following table shows the interaction between row and rowset positioning for
a scrollable cursor. Assume that you declare the scrollable cursor on a table with 15
rows.
The following example shows how to retrieve data backward with a cursor.
**************************************************
* Declare a cursor to retrieve the data backward *
* from the EMP table. The cursor has access to *
* changes by other processes. *
**************************************************
EXEC SQL
DECLARE THISEMP SENSITIVE STATIC SCROLL CURSOR FOR
SELECT EMPNO, LASTNAME, WORKDEPT, JOB
FROM DSN8910.EMP
END-EXEC.
**************************************************
* Open the cursor *
**************************************************
The following example shows how to update an entire rowset with a cursor.
**************************************************
* Declare a rowset cursor to update the JOB *
* column of the EMP table. *
**************************************************
EXEC SQL
DECLARE EMPSET CURSOR
WITH ROWSET POSITIONING FOR
SELECT EMPNO, LASTNAME, WORKDEPT, JOB
FROM DSN8910.EMP
WHERE WORKDEPT = 'D11'
FOR UPDATE OF JOB
END-EXEC.
**************************************************
* Open the cursor. *
**************************************************
EXEC SQL
OPEN EMPSET
END-EXEC.
The following example shows how to update specific rows with a rowset cursor.
*****************************************************
* Declare a static scrollable rowset cursor. *
*****************************************************
EXEC SQL
DECLARE EMPSET SENSITIVE STATIC SCROLL CURSOR
WITH ROWSET POSITIONING FOR
SELECT EMPNO, WORKDEPT, JOB
FROM DSN8910.EMP
FOR UPDATE OF JOB
END-EXEC.
*****************************************************
* Open the cursor. *
*****************************************************
EXEC SQL
OPEN EMPSET
END-EXEC.
*****************************************************
* Fetch next rowset to position the cursor. *
*****************************************************
EXEC SQL
FETCH SENSITIVE NEXT ROWSET FROM EMPSET
FOR :SIZE-ROWSET ROWS
INTO :HVA-EMPNO,
When you select a ROWID column, the value implicitly contains the location of the
retrieved row. If you use the value from the ROWID column in the search
condition of a subsequent query, DB2 can choose to navigate directly to that row.
The following code uses the SELECT from INSERT statement to retrieve the value
of the ROWID column from a new row that is inserted into the EMPLOYEE table.
This value is then used to reference that row for the update of the SALARY
column.
EXEC SQL BEGIN DECLARE SECTION;
SQL TYPE IS ROWID hv_emp_rowid;
short hv_dept, hv_empno;
char hv_name[30];
decimal(7,2) hv_salary;
EXEC SQL END DECLARE SECTION;
...
EXEC SQL
SELECT EMP_ROWID INTO :hv_emp_rowid
FROM FINAL TABLE (INSERT INTO EMPLOYEE
VALUES (DEFAULT, :hv_empno, :hv_name, :hv_salary, :hv_dept));
EXEC SQL
UPDATE EMPLOYEE
SET SALARY = SALARY + 1200
WHERE EMP_ROWID = :hv_emp_rowid;
For DB2 to be able to use direct row access for the update operation, the SELECT
from INSERT statement and the UPDATE statement must execute within the same
unit of work. If these statements execute in different units of work, the ROWID
value for the inserted row might change due to a REORG of the table space before
| the update operation. Alternatively, you can use a SELECT from MERGE
| statement. The MERGE statement performs INSERT and UPDATE operations as
| one coordinated statement.
If you define a column in a table to have the ROWID data type, DB2 provides a
unique value for each row in the table only if you define the column as
GENERATED ALWAYS. The purpose of the value in the ROWID column is to
uniquely identify rows in the table.
Requirement: To use direct row access, you must use a retrieved ROWID value
before you commit. When your application commits, it releases its claim on the
table space. After the commit, a REORG on your table space might execute and
change the physical location of the rows.
The value that you retrieve from a ROWID column is a varying-length character
value that is not monotonically ascending or descending (the value is not always
increasing or not always decreasing). Therefore, a ROWID column does not
provide suitable values for many types of entity keys, such as order numbers or
employee numbers.
When you specify a particular row ID, or RID, DB2 can navigate directly to the
specified row for those queries that qualify for direct row access.
Before you begin this task, ensure that the query qualifies for direct row access. To
qualify, the search condition must be a Boolean term, stage 1 predicate that fits one
of the following criteria:
v A simple Boolean term predicate of the following form:
RID (table designator) = noncolumn expression
To specify direct row access by using RIDs, specify the RID function in the search
condition of a SELECT, DELETE, or UPDATE statement.
The RID function returns the RID of a row, which you can use to uniquely identify
a row.
| Restriction: Because DB2 might reuse RID numbers when the REORG utility is
| run, the RID function might return different values when invoked for a row
| multiple times.
| If you specify a RID and DB2 cannot locate the row through direct row access, DB2
| does not switch to another access method. Instead, DB2 returns no rows.
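A minimal sketch of this technique, assuming that the RID of a qualifying row in
DSN8910.EMP is saved in the BIGINT host variable hv_rid and then used to update
that row (the host variable names are illustrative):
EXEC SQL SELECT RID(E), LASTNAME
INTO :hv_rid, :hv_lastname
FROM DSN8910.EMP E
WHERE EMPNO = '000130';
EXEC SQL UPDATE DSN8910.EMP E
SET SALARY = SALARY + 500
WHERE RID(E) = :hv_rid;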
For example, you can use the following statements to extract information about an
employee’s department from the resume:
EXEC SQL BEGIN DECLARE SECTION;
char employeenum[6];
long deptInfoBeginLoc;
long deptInfoEndLoc;
SQL TYPE IS CLOB_LOCATOR resume;
SQL TYPE IS CLOB_LOCATOR deptBuffer;
EXEC SQL END DECLARE SECTION;
...
EXEC SQL DECLARE C1 CURSOR FOR
SELECT EMPNO, EMP_RESUME FROM EMP;
...
EXEC SQL FETCH C1 INTO :employeenum, :resume;
...
EXEC SQL SET :deptInfoBeginLoc =
POSSTR(:resume, 'Department Information');
These statements use host variables of data type large object locator (LOB locator).
LOB locators let you manipulate LOB data without moving the LOB data into host
variables. By using LOB locators, you need much smaller amounts of memory for
your programs. LOB locators are discussed in “Saving storage when manipulating
LOBs by using LOB locators” on page 711.
| You can also use LOB file reference variables when you are working with LOB
| data. You can use LOB file reference variables to insert LOB data from a file into a
| DB2 table or to retrieve LOB data from a DB2 table. LOB file reference variables
| are discussed in “LOB file reference variables” on page 715.
For instructions on how to prepare and run the sample LOB applications, see the
topic “ Phase 7: Accessing LOB data” in DB2 Installation Guide.
| You can declare LOB host variables and LOB locators in assembler, C, C++,
| COBOL, Fortran, and PL/I. Additionally, you can declare LOB file reference
| variables in assembler, C, C++, COBOL, and PL/I. For each host variable, locator,
| or file reference variable of SQL type BLOB, CLOB, or DBCLOB that you declare,
| DB2 generates an equivalent declaration that uses host language data types. When
| you refer to a LOB host variable, LOB locator, or LOB file reference variable in an
| SQL statement, you must use the variable that you specified in the SQL type
| declaration. When you refer to the host variable in a host language statement, you
| must use the variable that DB2 generates.
| Declare LOB host variables that are referenced by the precompiler in SQL
| statements by using the SQL TYPE IS BLOB, SQL TYPE IS CLOB, or SQL TYPE IS
| DBCLOB keywords.
LOB host variables that are referenced only by an SQL statement that uses a
DESCRIPTOR should use the same form as declared by the precompiler. In this
form, the LOB host-variable-array consists of a 31-bit length, followed by the data,
followed by another 31-bit length, followed by the data, and so on. The 31-bit
length must be fullword aligned.
The following examples show you how to declare LOB host variables in each
supported language. In each table, the left column contains the declaration that
you code in your application program. The right column contains the declaration
that DB2 generates.
The following table shows C and C++ language declarations for some typical LOB
types.
| The declarations that are generated for COBOL depend on whether you use the
| DB2 precompiler or the DB2 coprocessor. The following table shows COBOL
| declarations that the DB2 precompiler generates for some typical LOB types. The
| declarations that the DB2 coprocessor generates might be different.
The following table shows Fortran declarations for some typical LOB types.
Table 126. Examples of Fortran variable declarations
You declare this variable DB2 generates this variable
| The declarations that are generated for PL/I depend on whether you use the DB2
| precompiler or the DB2 coprocessor. The following table shows PL/I declarations
| that the DB2 precompiler generates for some typical LOB types. The declarations
| that the DB2 coprocessor generates might be different.
| Table 127. Examples of PL/I variable declarations by the DB2 precompiler
| You declare this variable DB2 precompiler generates this variable
LOB materialization
Materialization means that DB2 puts the data that is selected into a work file. This
action can slow performance. Because LOB values can be very large, DB2 avoids
materializing LOB data until absolutely necessary.
DB2 stores LOB values in contiguous storage. DB2 must materialize LOBs when
your application program performs the following actions:
v Calls a user-defined function with a LOB as an argument
v Moves a LOB into or out of a stored procedure
v Assigns a LOB host variable to a LOB locator host variable
v Converts a LOB from one CCSID to another
The amount of storage that is used for LOB materialization depends on a number
of factors including:
v The size of the LOBs
v The number of LOBs that need to be materialized in a statement
DB2 loads LOBs into virtual pools above the bar. If insufficient space is available
for LOB materialization, your application receives SQLCODE -904.
Although you cannot completely avoid LOB materialization, you can minimize it
by using LOB locators, rather than LOB host variables in your application
programs. See “Saving storage when manipulating LOBs by using LOB locators”
for information on how to use LOB locators.
To retrieve LOB data from a DB2 table, you can define host variables that are large
enough to hold all of the LOB data. This requires your application to allocate large
amounts of storage, and requires DB2 to move large amounts of data, which can
be inefficient or impractical. Instead, you can use LOB locators. LOB locators let
you manipulate LOB data without retrieving the data from the DB2 table. Using
LOB locators for LOB data retrieval is a good choice in the following situations:
v When you move only a small part of a LOB to a client program
v When the entire LOB does not fit in the application’s memory
v When the program needs a temporary LOB value from a LOB expression but
does not need to save the result
v When performance is important
A LOB locator is associated with a LOB value or expression, not with a row in a
DB2 table or a physical storage location in a table space. Therefore, after you select
If you want to remove the association between a LOB locator and its value before a
unit of work ends, execute the FREE LOCATOR statement. To keep the association
between a LOB locator and its value after the unit of work ends, execute the
HOLD LOCATOR statement. After you execute a HOLD LOCATOR statement, the
locator keeps the association with the corresponding value until you execute a
FREE LOCATOR statement or the program ends.
If you execute HOLD LOCATOR or FREE LOCATOR dynamically, you cannot use
EXECUTE IMMEDIATE. For more information about the HOLD LOCATOR
statement, see the topic “HOLD LOCATOR” in DB2 SQL Reference. For more
information about the FREE LOCATOR statement, see the topic “FREE LOCATOR”
in DB2 SQL Reference.
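For example, to keep the CLOB locator resume from the earlier example valid
across a commit and then release it explicitly (statement placement is
illustrative):
EXEC SQL HOLD LOCATOR :resume;   /* locator keeps its value after the commit */
EXEC SQL COMMIT;
...
EXEC SQL FREE LOCATOR :resume;   /* explicitly release the locator           */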
For host variables other than LOB locators, when you select a null value into a
host variable, DB2 assigns a negative value to the associated indicator variable.
However, for LOB locators, DB2 uses indicator variables differently. A LOB locator
is never null. When you select a LOB column using a LOB locator and the LOB
column contains a null value, DB2 assigns a null value to the associated indicator
variable. The value in the LOB locator does not change. In a client/server
environment, this null information is recorded only at the client.
When you use LOB locators to retrieve data from columns that can contain null
values, define indicator variables for the LOB locators, and check the indicator
variables after you fetch data into the LOB locators. If an indicator variable is null
after a fetch operation, you cannot use the value in the LOB locator.
You can use a VALUES INTO or SET statement to obtain the results of functions
that operate on LOB locators, such as LENGTH or SUBSTR. VALUES INTO and
SET statements are processed in the application encoding scheme for the plan or
package that contains the statement. If that encoding scheme is different from the
By using these tables, you can obtain the same result as you would with a
VALUES INTO or SET statement.
Example: Suppose that the encoding scheme of the following statement is EBCDIC:
SET :unicode_hv = SUBSTR(:Unicode_lob_locator,X,Y);
Because the program in the following figure uses LOB locators, rather than placing
the LOB data into host variables, no LOB data is moved until the INSERT
statement executes. In addition, no LOB data moves between the client and the
server.
EXEC SQL INCLUDE SQLCA;
/**************************/
/* Declare host variables */ 1
/**************************/
EXEC SQL BEGIN DECLARE SECTION;
/*************************************************/
/* Delete any instance of "A00130" from previous */
/* executions of this sample */
/*************************************************/
EXEC SQL DELETE FROM EMP_RESUME WHERE EMPNO = 'A00130';
/*************************************************/
/* Use a single row select to get the document */ 2
/*************************************************/
EXEC SQL SELECT RESUME
INTO :HV_DOC_LOCATOR1
FROM EMP_RESUME
WHERE EMPNO = '000130'
AND RESUME_FORMAT = 'ascii';
/*****************************************************/
/* Use the POSSTR function to locate the start of */
/* sections "Department Information" and "Education" */ 3
/*****************************************************/
EXEC SQL SET :HV_START_DEPTINFO =
POSSTR(:HV_DOC_LOCATOR1, 'Department Information');
/*******************************************************/
/* Append the Department Information to the end */
/* of the resume */
/*******************************************************/
EXEC SQL SET :HV_DOC_LOCATOR3 =
:HV_DOC_LOCATOR2 || :HV_NEW_SECTION_LOCATOR;
/*******************************************************/
/* Store the modified resume in the table. This is */ 4
/* where the LOB data really moves. */
/*******************************************************/
EXEC SQL INSERT INTO EMP_RESUME VALUES ('A00130', 'ascii',
:HV_DOC_LOCATOR3, DEFAULT);
/*********************/
/* Free the locators */ 5
/*********************/
EXEC SQL FREE LOCATOR :HV_DOC_LOCATOR1, :HV_DOC_LOCATOR2, :HV_DOC_LOCATOR3;
| When you use a file reference variable, you can select or insert an entire LOB or
| XML value without contiguous application storage to contain the entire LOB or
| XML value. LOB file reference variables move LOB or XML values from the
| database server to an application or from an application to the database server
| without going through the application’s memory. Furthermore, LOB file reference
| variables bypass the host language limitation on the maximum size allowed for
| dynamic storage to contain a LOB value.
| You can declare LOB or XML values as LOB file reference variables or LOB file
| reference arrays for applications that are written in C, COBOL, PL/I, and
| assembler. The LOB file reference variables do not contain LOB data; they
| represent a file that contains LOB data. Database queries, updates, and inserts can
| use file reference variables to store or retrieve column values. As with other host
| variables, a LOB file reference variable can have an associated indicator variable.
| With the DB2-generated construct, you can use the following code to select from a
| CLOB column in the database into a new file that is referenced by :hv_text_file.
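A minimal sketch of such a SELECT, assuming the EMP_RESUME table that is used
elsewhere in this information and a hypothetical output path (SQL_FILE_CREATE
asks DB2 to create a new file):
strcpy(hv_text_file.name, "/u/gainer/resumes/emp000130.txt");
hv_text_file.name_length = strlen(hv_text_file.name);
hv_text_file.file_options = SQL_FILE_CREATE;     /* create a new output file */
EXEC SQL SELECT RESUME
INTO :hv_text_file
FROM EMP_RESUME
WHERE EMPNO = '000130' AND RESUME_FORMAT = 'ascii';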
| Similarly, you can use the following code to insert the data from a file that
| is referenced by :hv_text_file into a CLOB column.
| strcpy(hv_text_file.name, "/u/gainer/patents/chips.13");
| hv_text_file.name_length = strlen("/u/gainer/patents/chips.13");
| hv_text_file.file_options = SQL_FILE_READ;
| strcpy(hv_patent_title, "A Method for Pipelining Chip Consumption");
| EXEC SQL INSERT INTO PATENTS(TITLE, TEXT)
| VALUES(:hv_patent_title, :hv_text_file);
| For examples of how to declare file reference variables for XML data in C, COBOL,
| and PL/I, see “Host variable data types for XML data in embedded SQL
| applications” on page 217.
You reference a sequence by using the NEXT VALUE expression or the PREVIOUS
VALUE expression, specifying the name of the sequence:
v A NEXT VALUE expression generates and returns the next value for the
specified sequence. If a query contains multiple instances of a NEXT VALUE
expression with the same sequence name, the sequence value increments only
once for that query. The ROLLBACK statement has no effect on values already
generated.
v A PREVIOUS VALUE expression returns the most recently generated value for
the specified sequence for a previous NEXT VALUE expression that specified the
same sequence within the current application process. The value of the
PREVIOUS VALUE expression persists until the next value is generated for the
Question: Are there any special techniques for fetching and displaying large
volumes of data?
Answer: There are no special techniques, but for large numbers of rows, efficiency
can become very important. In particular, you need to be aware of locking
considerations, including the possibilities of lock escalation.
If your program allows input from a terminal before it commits the data and
thereby releases locks, it is possible that a significant loss of concurrency results.
| For information about how to create a table with a ROW CHANGE TIMESTAMP
| column, see the topic “CREATE TABLE” inDB2 SQL Reference.
| To determine when a row was changed, issue a SELECT statement with the ROW
| CHANGE TIMESTAMP column in the column list.
| If a qualifying row does not have a value for the ROW CHANGE TIMESTAMP
| column, DB2 returns the time that the page in which that row resides was
| updated.
| Example: Suppose that you issue the following statements to create, populate, and
| alter a table:
| CREATE TABLE T1 (C1 INTEGER NOT NULL);
| INSERT INTO T1 VALUES (1);
| ALTER TABLE T1 ADD COLUMN C2 NOT NULL GENERATED ALWAYS
| FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP;
| Because the ROW CHANGE TIMESTAMP column was added after the data was
| inserted, the following statement returns the time that the page was last modified:
| SELECT T1.C2 FROM T1 WHERE T1.C1 = 1;
| Now suppose that you insert a second row with the statement INSERT INTO T1
| VALUES (2), and assume that this row is added to the same page as the first
| row. The following statement returns the time that the value 2 was inserted
| into the table:
| SELECT T1.C2 FROM T1 WHERE T1.C1 = 2;
| To check whether an XML column contains a certain value, specify the XMLEXISTS
| predicate in the WHERE clause of your SQL statement. Include the following
| parameters for the XMLEXISTS predicate:
| v An XPath expression that is embedded in a character string literal. Specify an
| XPath expression that identifies the XML data that you are looking for. If the
| result of the XPath expression is an empty sequence, XMLEXISTS returns false.
| If the result is not empty, XMLEXISTS returns true. If the evaluation of the
| XPath expression returns an error, XMLEXISTS returns an error.
| v The XML column name. Specify this value after the PASSING keyword.
| Example: Suppose that you want to return only purchase orders that have a billing
| address. Assume that column XMLPO stores the XML purchase order documents
| and that the billTo nodes within these documents contain any billing addresses.
| You can use the following SELECT statement with the XMLEXISTS predicate:
| SELECT XMLPO FROM T1
| WHERE XMLEXISTS ('declare namespace ipo="http://www.example.com/IPO";
| /ipo:purchaseOrder[billTo]'
| PASSING XMLPO);
| Related reference
| XMLEXISTS predicate (SQL Reference)
The expression does not include a column of a table. The three ways to return a
value in a host variable are shown in the following examples.
Example: To set the contents of a host variable to the value of an expression, use
the SET host-variable assignment statement:
EXEC SQL SET :hvrandval = RAND(:hvrand);
Example: To return the value of an expression in a host variable, use the VALUES
INTO statement:
EXEC SQL VALUES RAND(:hvrand)
INTO :hvrandval;
Example: To select the expression from the DB2-provided EBCDIC table, named
SYSIBM.SYSDUMMY1, which consists of one row, use the following statement:
EXEC SQL SELECT RAND(:hvrand)
INTO :hvrandval
FROM SYSIBM.SYSDUMMY1;
The restart capabilities for DB2 and IMS databases, as well as for sequential data
sets that are accessed through GSAM, are available through the IMS Checkpoint
and Restart facility.
DB2 allows access to both DB2 and DL/I data through the use of the following
DB2 and IMS facilities:
v IMS synchronization calls, which commit and abnormally terminate units of
recovery
In a data sharing environment, DL/I batch supports group attachment. You can
specify a group attachment name instead of a subsystem name in the SSN
parameter of the DDITV02 data set for the DL/I batch job. For information about
the SSN parameter and the DDITV02 data set, see “Input and output data sets for
DL/I batch jobs” on page 949.
Requirements for using DB2 in a DL/I batch job: Using DB2 in a DL/I batch job
requires the following changes to the application program and the job step JCL:
v Add SQL statements to your application program to gain access to DB2 data.
You must then precompile the application program and bind the resulting
DBRM into a plan or package, as described in Chapter 17, “Preparing an
application to run on DB2 for z/OS,” on page 887.
v Before you run the application program, use JOBLIB, STEPLIB, or the link list to
access the DB2 load library, so that DB2 modules can be loaded.
v In a data set that is specified by a DDITV02 DD statement, specify the program
name and plan name for the application, and the connection name for the DL/I
batch job.
In an input data set or in a subsystem member, specify information about the
connection between DB2 and IMS. The input data set name is specified with a
DDITV02 DD statement. The subsystem member name is specified by the
parameter SSM= on the DL/I batch invocation procedure. For detailed
information about the contents of the subsystem member and the DDITV02 data
set, see “Input and output data sets for DL/I batch jobs” on page 949.
v Optionally specify an output data set using the DDOTV02 DD statement. You
might need this data set to receive messages from the IMS attachment facility
about indoubt threads and diagnostic information.
Address spaces in DL/I batch: A DL/I batch region is independent of both the
IMS control region and the CICS address space. The DL/I batch region loads the
DL/I code into the application region along with the application program.
Commits in DL/I batch: Commit IMS batch applications frequently so that you do
not use resources for an extended time. For more information about coordinated
commits for recovery, see the topic “Multiple system consistency” in DB2
Administration Guide.
SQL statements and IMS calls in DL/I batch: DL/I batch applications cannot use
the SQL COMMIT and ROLLBACK statements; otherwise, you get an SQL error
code. DL/I batch applications also cannot use ROLS, SETS, and SYNC calls;
otherwise the application program abnormally terminates.
Checkpoint calls in DL/I batch: Write your program with SQL statements and
DL/I calls, and use checkpoint calls. The frequency of checkpoints depends on the
application design. All checkpoints that are issued by a batch application program
must be unique. At a checkpoint, DL/I positioning is lost, DB2 cursors are closed
(with the possible exception of cursors that are defined as WITH HOLD), commit
You can also have IMS dynamically back out the updates within the same job. You
must specify the BKO parameter as ’Y’ and allocate the IMS log to DASD.
You could have a problem if the system on which the job is run fails after the
program terminates but before the job step ends. If you do not have a checkpoint
call before the program ends, DB2 commits the unit of work without involving
IMS. If the system fails before DL/I commits the data, the DB2 data is out of
synchronization with the DL/I changes. If the system fails during DB2 commit
processing, the DB2 data could be indoubt. When you restart the application
program, use the XRST call to obtain checkpoint information and resolve any DB2
indoubt work units.
Checkpoint and XRST considerations in DL/I batch: If you use an XRST call, DB2
assumes that any checkpoint that is issued is a symbolic checkpoint. The options of
the symbolic checkpoint call differ from the options of a basic checkpoint call.
Using the incorrect form of the checkpoint call can cause problems.
If you do not use an XRST call, DB2 assumes that any checkpoint call that is issued
is a basic checkpoint.
See the following topics for details that you should know before you invoke a
user-defined function:
v “How DB2 resolves functions” on page 726
v “Cases when DB2 casts arguments for a user-defined function” on page 733
v “Abnormal termination of an external user-defined function” on page 517
For information about the syntax for invoking a table function, see the topic
“from-clause” in DB2 SQL Reference.
For information about the syntax for invoking a user-defined scalar function, see
the topic “Function invocation” in DB2 SQL Reference.
The access path that DB2 chooses for a predicate determines whether a
user-defined function in that predicate is executed. To ensure that DB2 executes the
external action for each row of the result table, put the user-defined function
invocation in the SELECT list.
The results can differ even more, depending on the order in which DB2 retrieves
the rows from the table. Suppose that an ascending index is defined on column C2.
Then DB2 retrieves row 3 first, row 1 second, and row 2 third. This means that
row 1 satisfies the predicate WHERE COUNTER()=2. The value of COUNTER in
the select list is again 1, so the result of the query in this case is:
COUNTER() C1 C2
--------- -- --
1 1 b
When you use the following techniques, you can simplify function resolution:
v When you invoke a function, use the qualified name. This causes DB2 to search
for functions only in the schema you specify. This has two advantages:
– DB2 is less likely to choose a function that you did not intend to use. Several
functions might fit the invocation equally well. DB2 picks the function whose
schema name is earliest in the SQL path, which might not be the function you
want.
– The number of candidate functions is smaller, so DB2 takes less time for
function resolution.
v Cast parameters in a user-defined function invocation to the types in the
user-defined function definition. For example, if an input parameter for
user-defined function FUNC is defined as DECIMAL(13,2), and the value you
want to pass to the user-defined function is an integer value, cast the integer
value to DECIMAL(13,2):
SELECT FUNC(CAST (INTCOL AS DECIMAL(13,2))) FROM T1;
| v Use the data type BIGINT for numeric parameters in a user-defined function. If
| you use BIGINT as the parameter type, when you invoke the function, you can
| pass in SMALLINT, INTEGER, or BIGINT values. If you use SMALLINT or
| REAL as the parameter type, you must pass parameters of the same types. For
| example, if user-defined function FUNC is defined with a parameter of type
| SMALLINT, only an invocation with a parameter of type SMALLINT resolves
| correctly. The following call does not resolve to FUNC because the constant 123
| is of type INTEGER, not SMALLINT:
| SELECT FUNC(123) FROM T1;
| v Avoid defining user-defined function string parameters with fixed-length string
| types. If you define a parameter with a fixed-length string type (CHAR,
| GRAPHIC, or BINARY), you can invoke the user-defined function only with a
| fixed-length string parameter. However, if you define the parameter with a
| varying-length string type (VARCHAR, VARGRAPHIC, or VARBINARY), you
| can invoke the user-defined function with either a fixed-length string parameter
| or a varying-length string parameter.
| If you must define parameters for a user-defined function as CHAR or BINARY,
| and you call the user-defined function from a C program or SQL procedure, you
| need to cast the corresponding parameter values in the user-defined function
| invocation to CHAR or BINARY to ensure that DB2 invokes the correct function.
| For example, suppose that a C program calls user-defined function CVRTNUM,
| which takes one input parameter of type CHAR(6). Also suppose that you
| declare host variable empnumbr as char empnumbr[6]. When you invoke
| CVRTNUM, cast empnumbr to CHAR:
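For example, an invocation like the following one resolves to the CHAR(6)
version of CVRTNUM (the host variable hvresult and the use of
SYSIBM.SYSDUMMY1 are illustrative):
EXEC SQL SELECT CVRTNUM(CAST(:empnumbr AS CHAR(6)))
INTO :hvresult
FROM SYSIBM.SYSDUMMY1;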
Several user-defined functions with the same name but different numbers or types
of parameters can exist in a DB2 subsystem. Several user-defined functions with
the same name can have the same number of parameters, as long as the data types
of any of the first 30 parameters are different. In addition, several user-defined
functions might have the same name as a built-in function. When you invoke a
function, DB2 must determine which user-defined function or built-in function to
execute.
The remainder of this section discusses details of the function resolution process
and gives suggestions on how you can ensure that DB2 picks the right function.
To determine whether a data type is promotable to another data type, see the topic
“Promotion of data types” in DB2 SQL Reference.
More than one function instance might be a candidate for execution. In that case,
DB2 determines which function instances are the best fit for the invocation by
comparing parameter data types.
If the data types of all parameters in a function instance are the same as those in
the function invocation, that function instance is a best fit. If no exact match exists,
DB2 compares data types in the parameter lists from left to right, using this
method:
1. DB2 compares the data types of the first parameter in the function invocation
to the data type of the first parameter in each function instance.
If the first parameter in the invocation is an untyped parameter marker, DB2
does not do the comparison.
2. For the first parameter, if one function instance has a data type that fits the
function invocation better than the data types in the other instances, that
function is a best fit. For information about the possible fits for each data type,
in best-to-worst order, see the topic “Promotion of data types” in DB2 SQL
Reference.
3. If the data types of the first parameter are the same for all function instances,
or if the first parameter in the function invocation is an untyped parameter
marker, DB2 repeats this process for the next parameter. DB2 continues this
process for each parameter until it finds a best fit.
Candidate 2:
CREATE FUNCTION FUNC(VARCHAR(20),REAL,DOUBLE)
RETURNS DECIMAL(9,2)
EXTERNAL NAME 'FUNC2'
PARAMETER STYLE SQL
LANGUAGE COBOL;
DB2 compares the data type of the first parameter in the user-defined function
invocation to the data types of the first parameters in the candidate functions.
Because the first parameter in the invocation has data type VARCHAR, and both
The data type of the second parameter in the invocation is SMALLINT. INTEGER,
which is the data type of candidate 1, is a better fit to SMALLINT than REAL,
which is the data type of candidate 2. Therefore, candidate 1 is the DB2 choice for
execution.
DSN_FUNCTION_TABLE
The function table, DSN_FUNCTION_TABLE, contains descriptions of functions
that are used in specified SQL statements.
Recommendation: Do not manually insert data into or delete data from EXPLAIN
tables. The data is intended to be manipulated only by the DB2 EXPLAIN function
and optimization tools.
| Important: If mixed data strings are allowed on a DB2 subsystem, EXPLAIN tables
| must be created with CCSID UNICODE. This includes, but is not limited to, mixed
| data strings that are used for tokens, SQL statements, application names, program
| names, correlation names, and collection IDs.
Your subsystem or data sharing group can contain more than one of these tables,
including a table with the qualifier SYSIBM, a table with the qualifier DB2OSC,
and additional tables that are qualified by user IDs.
SYSIBM
One instance of each EXPLAIN table can be created with the SYSIBM
qualifier. SQL optimization tools use these tables. You can find the SQL
statement for creating these tables in member DSNTESC of the
SDSNSAMP library.
userID You can create additional instances of EXPLAIN tables that are qualified by
user ID. These tables are populated with statement cost information when
you issue the EXPLAIN statement or bind, or rebind, a plan or package
with the EXPLAIN(YES) option. SQL optimization tools might also create
EXPLAIN tables that are qualified by a user ID. You can find the SQL
statement for creating an instance of these tables in member DSNTESC of
the SDSNSAMP library.
DB2OSC
SQL optimization tools, such as Optimization Service Center for DB2 for
z/OS, create EXPLAIN tables to collect information about SQL statements
and workloads to enable analysis and tuning processes. You can find the
SQL statements for creating EXPLAIN tables in member DSNTIJOS of the
SDSNSAMP library. You can also create this table from the Optimization
Service Center interface on your workstation.
Column descriptions
Example: Defining a function with distinct types as arguments: Suppose that you
want to invoke the built-in function HOUR with a distinct type that is defined like
this:
CREATE DISTINCT TYPE FLIGHT_TIME AS TIME;
The HOUR function takes only the TIME or TIMESTAMP data type as an
argument, so you need a sourced function that is based on the HOUR function that
accepts the FLIGHT_TIME data type. You might declare a function like this:
CREATE FUNCTION HOUR(FLIGHT_TIME)
RETURNS INTEGER
SOURCE SYSIBM.HOUR(TIME);
Example: Casting function arguments to acceptable types: Another way you can
invoke the HOUR function is to cast the argument of type FLIGHT_TIME to the
TIME data type before you invoke the HOUR function. Suppose table
FLIGHT_INFO contains column DEPARTURE_TIME, which has data type
FLIGHT_TIME, and you want to use the HOUR function to extract the hour of
departure from that column.
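A query of that form might look like the following sketch, in which the CAST
converts the distinct type to TIME so that the built-in HOUR function applies:
SELECT HOUR(CAST(DEPARTURE_TIME AS TIME))
FROM FLIGHT_INFO;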
Example: Using an infix operator with distinct type arguments: Suppose you want
to add two values of type US_DOLLAR. Before you can do this, you must define a
version of the + function that accepts values of type US_DOLLAR as operands:
CREATE FUNCTION "+"(US_DOLLAR,US_DOLLAR)
RETURNS US_DOLLAR
SOURCE SYSIBM."+"(DECIMAL(9,2),DECIMAL(9,2));
Because the US_DOLLAR type is based on the DECIMAL(9,2) type, the source
function must be the version of + with arguments of type DECIMAL(9,2).
This means that EURO_TO_US accepts only the EURO type as input. Therefore, if
you want to call EURO_TO_US with a constant or host variable argument, you
must cast that argument to distinct type EURO:
SELECT * FROM US_SALES
WHERE TOTAL = EURO_TO_US(EURO(:H1));
SELECT * FROM US_SALES
WHERE TOTAL = EURO_TO_US(EURO(10000));
Whenever you invoke a user-defined function, DB2 assigns your input argument
values to parameters with the data types and lengths in the user-defined function
definition.
When you invoke a user-defined function that is sourced on another function, DB2
casts your arguments to the data types and lengths of the sourced function.
Now suppose that PRICE2 has the DECIMAL(9,2) value 0001234.56. DB2 must first
assign this value to the data type of the input parameter in the definition of
TAXFN2, which is DECIMAL(8,2). The input parameter value then becomes
001234.56. Next, DB2 casts the parameter value to a source function parameter,
which is DECIMAL(6,0). The parameter value then becomes 001234. (When you
cast a value, that value is truncated, rather than rounded.)
Now, if TAXFN1 returns the DECIMAL(5,2) value 123.45, DB2 casts the value to
DECIMAL(5,0), which is the result type for TAXFN2, and the value becomes 00123.
This is the value that DB2 assigns to column SALESTAX2 in the UPDATE
statement.
You can use untyped parameter markers in a function invocation. However, DB2
cannot compare the data types of untyped parameter markers to the data types of
candidate functions. Therefore, DB2 might find more than one function that
qualifies for invocation. If this happens, an SQL error occurs. To ensure that DB2
picks the right function to execute, cast the parameter markers in your function
invocation to the data types of the parameters in the function that you want to
execute. For example, suppose that two versions of function FX exist. One version
of FX is defined with a parameter of type of DECIMAL(9,2), and the other is
defined with a parameter of type INTEGER. You want to invoke FX with a
parameter marker, and you want DB2 to execute the version of FX that has a
DECIMAL(9,2) parameter. You need to cast the parameter marker to a
DECIMAL(9,2) type by using a CAST specification:
SELECT FX(CAST(? AS DECIMAL(9,2))) FROM T1;
Related concepts
Assignment and comparison (SQL Reference)
| Before you call a stored procedure, ensure that you have all of the following
| authorizations that are required to run the stored procedure:
| v Authorization to execute the stored procedure that is referenced in the CALL
| statement.
| The authorizations that you need depend on whether the form of the CALL
| statement is CALL procedure-name or CALL :host-variable.
| v Authorization to execute any triggers or user-defined functions that the stored
| procedure invokes.
| v Authorization to execute the stored procedure package and any packages under
| the stored procedure package.
| For example, if the stored procedure invokes any user-defined functions, you
| need authorization to execute the packages for those user-defined functions.
| An application program that calls a stored procedure can perform one or more of
| the following actions:
| v Call more than one stored procedure.
| v Call a single stored procedure more than once at the same or at different levels
| of nesting. However, do not assume that the variables for the stored procedures
| persist between calls.
| If a stored procedure runs as a main program, before each call, Language
| Environment reinitializes the storage that is used by the stored procedure.
| Program variables for the stored procedure do not persist between calls.
| If a stored procedure runs as a subprogram, Language Environment does not
| initialize the storage between calls. Program variables for the stored procedure
| can persist between calls. However, you should not assume that your program
| variables are available from one stored procedure call to another call for the
| following reasons:
| – Stored procedures from other users can run in an instance of Language
| Environment between two executions of your stored procedure.
| – Consecutive executions of a stored procedure might run in different stored
| procedure address spaces.
| – The z/OS operator might refresh Language Environment between two
| executions of your stored procedure.
| v Call a local or remote stored procedure.
| If both the client and server application environments support two-phase
| commit, the coordinator controls updates between the application, the server,
| and the stored procedures. If either side does not support two-phase commit,
| updates fail.
| v Mix CALL statements with other SQL statements.
| v Use any of the DB2 attachment facilities.
| DB2 runs stored procedures under the DB2 thread of the calling application, which
| means that the stored procedures are part of the caller’s unit of work.
| Example of passing parameters that can have null values: The preceding
| examples assume that none of the input parameters can have null values. The
| following example shows how to allow for null values for the parameters by
| passing indicator variables in the parameter list:
| EXEC SQL CALL A (:EMP :IEMP, :PRJ :IPRJ, :ACT :IACT,
| :EMT :IEMT, :EMS :IEMS, :EME :IEME,
| :TYPE :ITYPE, :CODE :ICODE);
| In this example, :IEMP, :IPRJ, :IACT, :IEMT, :IEMS, :IEME, :ITYPE, and
| :ICODE are indicator variables for the parameters.
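For example, a C caller might declare the indicator variables as halfword integers and set them before issuing the CALL statement. The following is a minimal sketch; only two of the indicators are shown being set, and the declarations of the other host variables are assumed to exist with data types that match the stored procedure definition:
short IEMP, IPRJ, IACT, IEMT, IEMS, IEME, ITYPE, ICODE;
  .
  .
  .
IEMP = 0;                  /* :EMP contains an input value             */
IPRJ = -1;                 /* Pass a null value for :PRJ, regardless   */
                           /* of the contents of host variable PRJ     */
On return from the CALL statement, the caller can test the indicator variables for the output parameters in the same way.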
| Example of passing string constants and null values: The following example
| CALL statement passes integer and character string constants, a null value,
| and several host variables:
| EXEC SQL CALL A ('000130', 'IF1000', 90, 1.0, NULL, '2009-10-01',
| :TYPE, :CODE);
| Example of using a host variable for the stored procedure name: The
| following example CALL statement uses a host variable for the name of the
| stored procedure:
| EXEC SQL CALL :procnm (:EMP, :PRJ, :ACT, :EMT, :EMS, :EME,
| :TYPE, :CODE);
| One advantage of using this form is that you can change the encoding scheme
| of the stored procedure parameter values. For example, if the subsystem on
| which the stored procedure runs has an EBCDIC encoding scheme, and you
| want to retrieve data in ASCII CCSID 437, you can specify the desired CCSIDs
| for the output parameters in the SQLVAR fields of the SQLDA.
| This technique for overriding the CCSIDs of parameters is the same as the
| technique for overriding the CCSIDs of variables, which is described in the
| information about including dynamic SQL for varying-list SELECT statements
| in your program. When you use this technique, the defined encoding scheme
| of the parameter must be different from the encoding scheme that you specify
| in the SQLDA. Otherwise, no conversion occurs.
| The defined encoding scheme for the parameter is the encoding scheme that
| you specify in the CREATE PROCEDURE statement, or if you do not specify
| an encoding scheme in this statement, the default encoding scheme for the
| subsystem.
| Because the following example CALL statement uses a host variable name for
| the stored procedure and an SQLDA for the parameter list, it can be reused to
| call different stored procedures with different parameter lists:
| EXEC SQL CALL :procnm USING DESCRIPTOR :sqlda;
| Your client program must assign a stored procedure name to the host variable
| procnm and load the SQLDA with the parameter information before issuing
| the SQL CALL statement.
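For example, a C client might prepare and issue the call like this. This is a minimal sketch: the procedure name MYPROC is illustrative, and the SQLDA that :sqlda identifies is assumed to have been allocated and loaded with one SQLVAR entry per parameter, as described above:
char procnm[255];                     /* Host variable for the name      */
  .
  .
  .
strcpy(procnm, "SYSPROC.MYPROC");     /* MYPROC is an illustrative name  */
                                      /* The SQLDA identified by :sqlda  */
                                      /* must already be loaded          */
EXEC SQL CALL :procnm USING DESCRIPTOR :sqlda;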
| 5. Process any output, including the OUT and INOUT parameters.
| 6. If the stored procedure returns multiple result sets, retrieve those result
|    sets, as shown in the sketch that follows this list.
|    Recommendation: Close the result sets after you retrieve them, and issue
|    commits often to prevent DB2 storage shortages and EDM POOL FULL
|    conditions.
| 7. For PL/I applications, also perform the following actions:
| a. Include the runtime option NOEXECOPS in your source code.
| b. Specify the compile-time option SYSTEM(MVS).
| These additional steps ensure that the linkage conventions work correctly on
| z/OS.
| 8. For C applications, include the following line in your source code:
| #pragma runopts(PLIST(OS))
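The following is a minimal sketch of step 6 for a C caller. It assumes that stored procedure A returns a single result set whose only column is CHAR(9); the locator, cursor, and host-variable names are illustrative, and the SQLCA is assumed to be included:
static volatile SQL TYPE IS RESULT_SET_LOCATOR VARYING loc1;
char outval[10];
  .
  .
  .
EXEC SQL ASSOCIATE LOCATORS (:loc1) WITH PROCEDURE A;
EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :loc1;
EXEC SQL FETCH C1 INTO :outval;
while (SQLCODE == 0)
{
  /* Process the row, then fetch the next row */
  EXEC SQL FETCH C1 INTO :outval;
}
EXEC SQL CLOSE C1;                /* Close the result set cursor        */
EXEC SQL COMMIT;                  /* Commit often to release resources  */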
Parameters are defined as part of the stored procedure definition in the CREATE
PROCEDURE statement. They can be one of the following types:
IN Input-only parameters, which provide values to the stored procedure
OUT Output-only parameters, which return values from the stored procedure to
the calling program
INOUT
Input/output parameters, which provide values to or return values from
the stored procedure
If a stored procedure fails to set one or more of the output-only parameters, DB2
does not return an error. Instead, DB2 returns the output parameters to the calling
program, with the values that were established on entry to the stored procedure.
DB2 supports three parameter list conventions. DB2 chooses the parameter list
convention based on the value of the PARAMETER STYLE parameter in the stored
procedure definition: GENERAL, GENERAL WITH NULLS, or SQL.
v Use GENERAL when you do not want the calling program to pass null values
for input parameters (IN or INOUT) to the stored procedure. The stored
procedure must contain a variable declaration for each parameter passed in the
CALL statement.
The following figure shows the structure of the parameter list for PARAMETER
STYLE GENERAL.
v Use GENERAL WITH NULLS to allow the calling program to supply a null
value for any parameter passed to the stored procedure. For the GENERAL
WITH NULLS linkage convention, the stored procedure must do the following
tasks:
– Declare a variable for each parameter passed in the CALL statement.
– Declare a null indicator structure containing an indicator variable for each
parameter.
– On entry, examine all indicator variables associated with input parameters to
determine which parameters contain null values.
– On exit, assign values to all indicator variables associated with output
variables. An indicator variable for an output variable that returns a null
value to the caller must be assigned a negative number. Otherwise, the
indicator variable must be assigned the value 0.
In the CALL statement, follow each parameter with its indicator variable, using
one of the following forms:
host-variable :indicator-variable
or
host-variable INDICATOR :indicator-variable.
The following figure shows the structure of the parameter list for PARAMETER
STYLE GENERAL WITH NULLS.
v Like GENERAL WITH NULLS, option SQL lets you supply a null value for any
parameter that is passed to the stored procedure. In addition, DB2 passes input
and output parameters to the stored procedure that contain this information:
– The SQLSTATE that is to be returned to DB2. This is a CHAR(5) parameter
that represents the SQLSTATE that is passed in to the program from the
database manager. The initial value is set to ‘00000’. Although the SQLSTATE
is usually not set by the program, it can be set as the result SQLSTATE that is
used to return an error or a warning. Returned values that start with
anything other than ‘00’, ‘01’, or ‘02’ are error conditions.
– The qualified name of the stored procedure. This is a VARCHAR(128) value.
– The specific name of the stored procedure. The specific name is a
VARCHAR(128) value that is the same as the unqualified name.
| – The SQL diagnostic string that is to be returned to DB2. This is a
| VARCHAR(1000) value. Use this area to pass descriptive information about
| an error or warning to the caller.
SQL is not a valid linkage convention for a REXX language stored procedure.
The following figure shows the structure of the parameter list for PARAMETER
STYLE SQL.
For these examples, assume that a COBOL application has the following parameter
declarations and CALL statement:
************************************************************
* PARAMETERS FOR THE SQL STATEMENT CALL *
************************************************************
01 V1 PIC S9(9) USAGE COMP.
01 V2 PIC X(9).
  .
  .
  .
EXEC SQL CALL A (:V1, :V2) END-EXEC.
In the CREATE PROCEDURE statement, the parameters are defined like this:
IN V1 INT, OUT V2 CHAR(9)
The following figures show how an assembler, C, COBOL, and PL/I stored
procedure uses the GENERAL linkage convention to receive parameters.
The following figure shows how a stored procedure in the C language receives
these parameters.
#pragma runopts(PLIST(OS))
#pragma options(RENT)
#include <stdlib.h>
#include <stdio.h>
/*****************************************************************/
/* Code for a C language stored procedure that uses the */
/* GENERAL linkage convention. */
/*****************************************************************/
main(argc,argv)
int argc; /* Number of parameters passed */
char *argv[]; /* Array of strings containing */
/* the parameter values */
{
long int locv1; /* Local copy of V1 */
char locv2[10]; /* Local copy of V2 */
/* (null-terminated) */
.
.
.
/***************************************************************/
The following figure shows how a stored procedure in the COBOL language
receives these parameters.
CBL RENT
IDENTIFICATION DIVISION.
************************************************************
* CODE FOR A COBOL LANGUAGE STORED PROCEDURE THAT USES THE *
* GENERAL LINKAGE CONVENTION. *
************************************************************
PROGRAM-ID. A.
.
.
.
DATA DIVISION.
.
.
.
LINKAGE SECTION.
************************************************************
* DECLARE THE PARAMETERS PASSED BY THE SQL STATEMENT *
* CALL HERE. *
************************************************************
01 V1 PIC S9(9) USAGE COMP.
01 V2 PIC X(9).
.
.
.
PROCEDURE DIVISION USING V1, V2.
************************************************************
* THE USING PHRASE INDICATES THAT VARIABLES V1 AND V2 *
* WERE PASSED BY THE CALLING PROGRAM. *
************************************************************
.
.
.
****************************************
* ASSIGN A VALUE TO OUTPUT VARIABLE V2 *
****************************************
MOVE '123456789' TO V2.
The following figure shows how a stored procedure in the PL/I language receives
these parameters.
*PROCESS SYSTEM(MVS);
A: PROC(V1, V2) OPTIONS(MAIN NOEXECOPS REENTRANT);
/***************************************************************/
/* Code for a PL/I language stored procedure that uses the */
/* GENERAL linkage convention. */
/***************************************************************/
For these examples, assume that a C application has the following parameter
declarations and CALL statement:
/************************************************************/
/* Parameters for the SQL statement CALL */
/************************************************************/
long int v1;
char v2[10]; /* Allow an extra byte for */
/* the null terminator */
/************************************************************/
/* Indicator structure */
/************************************************************/
struct indicators {
short int ind1;
short int ind2;
} indstruc;
.
.
.
indstruc.ind1 = 0; /* Remember to initialize the */
/* input parameter's indicator*/
/* variable before executing */
/* the CALL statement */
EXEC SQL CALL B (:v1 :indstruc.ind1, :v2 :indstruc.ind2);
.
.
.
In the CREATE PROCEDURE statement, the parameters are defined like this:
IN V1 INT, OUT V2 CHAR(9)
The following figure shows how a stored procedure in assembler language receives
these parameters.
*******************************************************************
* CODE FOR AN ASSEMBLER LANGUAGE STORED PROCEDURE THAT USES *
* THE GENERAL WITH NULLS LINKAGE CONVENTION. *
*******************************************************************
B CEEENTRY AUTO=PROGSIZE,MAIN=YES,PLIST=OS
USING PROGAREA,R13
*******************************************************************
* BRING UP THE LANGUAGE ENVIRONMENT. *
*******************************************************************
The following figure shows how a stored procedure in the C language receives
these parameters.
#pragma options(RENT)
#pragma runopts(PLIST(OS))
#include <stdlib.h>
#include <stdio.h>
/*****************************************************************/
/* Code for a C language stored procedure that uses the */
/* GENERAL WITH NULLS linkage convention. */
/*****************************************************************/
main(argc,argv)
int argc; /* Number of parameters passed */
char *argv[]; /* Array of strings containing */
/* the parameter values */
{
long int locv1; /* Local copy of V1 */
char locv2[10]; /* Local copy of V2 */
/* (null-terminated) */
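/***************************************************************/
/* A sketch of how the null indicators might be examined and   */
/* set. It assumes that, for GENERAL WITH NULLS, the indicator */
/* array is passed as one additional argument after the        */
/* parameters, so here it is argv[3]; string.h is assumed to   */
/* be included for memcpy.                                     */
/***************************************************************/
short *inds;                     /* Pointer to the indicator array  */
  .
  .
  .
inds = (short *) argv[3];
if (inds[0] < 0)                 /* V1 was passed as a null value   */
  {
    /* Handle the null input value */
  }
memcpy(argv[2],"123456789",9);   /* Assign a value to output V2     */
inds[1] = 0;                     /* Indicate that V2 is not null    */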
The following figure shows how a stored procedure in the COBOL language
receives these parameters.
CBL RENT
IDENTIFICATION DIVISION.
************************************************************
* CODE FOR A COBOL LANGUAGE STORED PROCEDURE THAT USES THE *
* GENERAL WITH NULLS LINKAGE CONVENTION. *
************************************************************
PROGRAM-ID. B.
.
.
.
DATA DIVISION.
.
.
.
LINKAGE SECTION.
************************************************************
* DECLARE THE PARAMETERS AND THE INDICATOR ARRAY THAT *
* WERE PASSED BY THE SQL STATEMENT CALL HERE. *
************************************************************
01 V1 PIC S9(9) USAGE COMP.
01 V2 PIC X(9).
*
01 INDARRAY.
10 INDVAR PIC S9(4) USAGE COMP OCCURS 2 TIMES.
The following figure shows how a stored procedure in the PL/I language receives
these parameters.
*PROCESS SYSTEM(MVS);
A: PROC(V1, V2, INDSTRUC) OPTIONS(MAIN NOEXECOPS REENTRANT);
/***************************************************************/
/* Code for a PL/I language stored procedure that uses the */
/* GENERAL WITH NULLS linkage convention. */
/***************************************************************/
/***************************************************************/
/* Indicate on the PROCEDURE statement that two parameters */
/* and an indicator variable structure were passed by the SQL */
/* statement CALL. Then declare them in the following section.*/
/* For PL/I, you must declare an indicator variable structure, */
/* not an array. */
/***************************************************************/
DCL V1 BIN FIXED(31),
V2 CHAR(9);
DCL
01 INDSTRUC,
02 IND1 BIN FIXED(15),
02 IND2 BIN FIXED(15);
.
.
.
IF IND1 < 0 THEN
CALL NULLVAL; /* If indicator variable is negative */
/* then V1 is null */
.
.
.
V2 = '123456789'; /* Assign a value to output variable V2 */
IND2 = 0; /* Assign 0 to V2's indicator variable */
For these examples, assume that a C application has the following parameter
declarations and CALL statement:
In the CREATE PROCEDURE statement, the parameters are defined like this:
IN V1 INT, OUT V2 CHAR(9)
The following figure shows how a stored procedure in assembler language receives
these parameters.
*******************************************************************
* CODE FOR AN ASSEMBLER LANGUAGE STORED PROCEDURE THAT USES *
* THE SQL LINKAGE CONVENTION. *
*******************************************************************
B CEEENTRY AUTO=PROGSIZE,MAIN=YES,PLIST=OS
USING PROGAREA,R13
*******************************************************************
* BRING UP THE LANGUAGE ENVIRONMENT. *
*******************************************************************
.
.
.
*******************************************************************
* GET THE PASSED PARAMETER VALUES. THE SQL LINKAGE *
* CONVENTION IS AS FOLLOWS: *
* ON ENTRY, REGISTER 1 POINTS TO A LIST OF POINTERS. IF N *
* PARAMETERS ARE PASSED, THERE ARE 2N+4 POINTERS. THE FIRST *
* N POINTERS ARE THE ADDRESSES OF THE N PARAMETERS, JUST AS *
* WITH THE GENERAL LINKAGE CONVENTION. THE NEXT N POINTERS ARE *
* THE ADDRESSES OF THE INDICATOR VARIABLE VALUES. THE LAST *
* 4 POINTERS (5, IF DBINFO IS PASSED) ARE THE ADDRESSES OF *
* INFORMATION ABOUT THE STORED PROCEDURE ENVIRONMENT AND *
* EXECUTION RESULTS. *
*******************************************************************
L R7,0(R1) GET POINTER TO V1
MVC LOCV1(4),0(R7) MOVE VALUE INTO LOCAL COPY OF V1
L R7,8(R1) GET POINTER TO 1ST INDICATOR VARIABLE
MVC LOCI1(2),0(R7) MOVE VALUE INTO LOCAL STORAGE
L R7,20(R1) GET POINTER TO STORED PROCEDURE NAME
MVC LOCSPNM(20),0(R7) MOVE VALUE INTO LOCAL STORAGE
L R7,24(R1) GET POINTER TO DBINFO
MVC LOCDBINF(DBINFLN),0(R7)
* MOVE VALUE INTO LOCAL STORAGE
LH R7,LOCI1 GET INDICATOR VARIABLE FOR V1
LTR R7,R7 CHECK IF IT IS NEGATIVE
BM NULLIN IF SO, V1 IS NULL
main(argc,argv)
int argc;
char *argv[];
{
int parm1;
short int ind1;
char p_proc[28];
char p_spec[19];
/***************************************************/
/* Assume that the SQL CALL statement included */
/* 3 input/output parameters in the parameter list.*/
/* The argv vector will contain these entries: */
/* argv[0] 1 contains load module */
/* argv[1-3] 3 input/output parms */
/* argv[4-6] 3 null indicators */
/* argv[7] 1 SQLSTATE variable */
/* argv[8] 1 qualified proc name */
/* argv[9] 1 specific proc name */
/* argv[10] 1 diagnostic string */
/* argv[11] + 1 dbinfo */
/* ------ */
/* 12 for the argc variable */
/***************************************************/
if (argc != 12) {
.
.
.
/* We end up here when invoked with wrong number of parms */
}
/***************************************************/
/* Assume the first parameter is an integer. */
/* The following code shows how to copy the integer*/
/* parameter into the application storage. */
/***************************************************/
parm1 = *(int *) argv[1];
/***************************************************/
/* We can access the null indicator for the first */
/* parameter on the SQL CALL as follows: */
/***************************************************/
ind1 = *(short int *) argv[4];
/***************************************************/
/* We can use the following expression */
/* to assign 'xxxxx' to the SQLSTATE returned to */
/* caller on the SQL CALL statement. */
/***************************************************/
strcpy(argv[7],"xxxxx");
/***************************************************/
/* We obtain the value of the qualified procedure */
/* name with this expression. */
/***************************************************/
strcpy(p_proc,argv[8]);
/***************************************************/
/* We obtain the value of the specific procedure */
/* name with this expression. */
/***************************************************/
strcpy(p_spec,argv[9]);
/***************************************************/
/* We can use the following expression to assign */
/* 'yyyyyyyy' to the diagnostic string returned */
/* in the SQLDA associated with the CALL statement.*/
/***************************************************/
strcpy(argv[10],"yyyyyyyy");
The following figure shows how a stored procedure in the COBOL language
receives these parameters.
| CBL RENT
| IDENTIFICATION DIVISION.
|   .
|   .
|   .
| DATA DIVISION.
The following figure shows how a stored procedure in the PL/I language receives
these parameters.
| *PROCESS SYSTEM(MVS);
| MYMAIN: PROC(PARM1, PARM2, ...,
| P_IND1, P_IND2, ...,
| P_SQLSTATE, P_PROC, P_SPEC, P_DIAG, DBINFO)
| OPTIONS(MAIN NOEXECOPS REENTRANT);
|
| DCL PARM1 ... /* first parameter */
| DCL PARM2 ...               /* second parameter */
|   .
|   .
|   .
| DCL P_IND1 BIN FIXED(15);   /* indicator for 1st parm */
| DCL P_IND2 BIN FIXED(15);   /* indicator for 2nd parm */
|   .
|   .
|   .
| DCL P_SQLSTATE CHAR(5);     /* SQLSTATE to return to DB2 */
| DCL 01 P_PROC CHAR(27)      /* Qualified procedure name */
|        VARYING;
| DCL 01 P_SPEC CHAR(18)      /* Specific stored proc */
|        VARYING;
| DCL 01 P_DIAG CHAR(1000)    /* Diagnostic string */
|        VARYING;
| DCL DBINFO PTR;
DCL 01 SP_DBINFO BASED(DBINFO), /* Dbinfo */
03 UDF_DBINFO_LLEN BIN FIXED(15), /* location length */
03 UDF_DBINFO_LOC CHAR(128), /* location name */
03 UDF_DBINFO_ALEN BIN FIXED(15), /* auth ID length */
03 UDF_DBINFO_AUTH CHAR(128), /* authorization ID */
03 UDF_DBINFO_CCSID, /* CCSIDs for DB2 for z/OS */
05 R1 BIN FIXED(15), /* Reserved */
05 UDF_DBINFO_ASBCS BIN FIXED(15), /* ASCII SBCS CCSID */
05 R2 BIN FIXED(15), /* Reserved */
05 UDF_DBINFO_ADBCS BIN FIXED(15), /* ASCII DBCS CCSID */
05 R3 BIN FIXED(15), /* Reserved */
05 UDF_DBINFO_AMIXED BIN FIXED(15), /* ASCII MIXED CCSID */
05 R4 BIN FIXED(15), /* Reserved */
05 UDF_DBINFO_ESBCS BIN FIXED(15), /* EBCDIC SBCS CCSID */
05 R5 BIN FIXED(15), /* Reserved */
05 UDF_DBINFO_EDBCS BIN FIXED(15), /* EBCDIC DBCS CCSID */
05 R6 BIN FIXED(15), /* Reserved */
05 UDF_DBINFO_EMIXED BIN FIXED(15), /* EBCDIC MIXED CCSID*/
05 R7 BIN FIXED(15), /* Reserved */
05 UDF_DBINFO_USBCS BIN FIXED(15), /* Unicode SBCS CCSID */
05 R8 BIN FIXED(15), /* Reserved */
05 UDF_DBINFO_UDBCS BIN FIXED(15), /* Unicode DBCS CCSID */
05 R9 BIN FIXED(15), /* Reserved */
05 UDF_DBINFO_UMIXED BIN FIXED(15), /* Unicode MIXED CCSID*/
05 UDF_DBINFO_ENCODE BIN FIXED(31), /* SP encode scheme */
05 UDF_DBINFO_RESERV0 CHAR(20), /* reserved */
You can use the following procedure regardless of whether the linkage convention
for the stored procedure is GENERAL, GENERAL WITH NULLS, or SQL.
For example, suppose that a stored procedure that is defined with the GENERAL
linkage convention takes one integer input parameter and one character output
parameter of length 6000. You do not want to pass the 6000 byte storage area to
the stored procedure. The following example PL/I program passes only 2 bytes to
the stored procedure for the output variable and receives all 6000 bytes from the
stored procedure:
DCL INTVAR BIN FIXED(31); /* This is the input variable */
DCL BIGVAR(6000); /* This is the output variable */
DCL I1 BIN FIXED(15); /* This is an indicator variable */
  .
  .
  .
I1 = -1; /* Setting I1 to -1 causes only */
/* a two byte area representing */
/* I1 to be passed to the */
/* stored procedure, instead of */
/* the 6000 byte area for BIGVAR*/
EXEC SQL CALL PROCX(:INTVAR, :BIGVAR INDICATOR :I1);
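A comparable sketch in C, assuming the same PROCX definition (the host-variable names are illustrative), follows:
long int intvar;               /* Input variable                        */
char bigvar[6001];             /* Output variable (extra byte for the   */
                               /* null terminator)                      */
short i1;                      /* Indicator variable                    */
  .
  .
  .
i1 = -1;                       /* Pass only the 2-byte indicator to the */
                               /* stored procedure instead of the       */
                               /* 6000-byte area for bigvar             */
EXEC SQL CALL PROCX(:intvar, :bigvar INDICATOR :i1);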
The format of the parameters that you pass in the CALL statement in an
application must be compatible with the data types of the parameters in the
CREATE PROCEDURE statement.
For all data types except LOBs, ROWIDs, locators, and VARCHARs (for C
language), see the tables listed in the following table for the host data types that
are compatible with the data types in the stored procedure definition.
Table 129. Listing of tables of compatible data types
Language Compatible data types table
Assembler “Equivalent SQL and assembler data types” on
page 236
C “Equivalent SQL and C data types” on page 275
COBOL “Equivalent SQL and COBOL data types” on
page 322
PL/I “Equivalent SQL and PL/I data types” on page
389
The following table lists each SQL data type that you can specify for the
parameters in the CREATE PROCEDURE statement and the corresponding format
for a REXX parameter that represents that data type.
Table 130. Parameter formats for a CALL statement in a REXX procedure
SQL data type REXX format
SMALLINT, INTEGER, BIGINT
      A string of numerics that does not contain a decimal point or exponent
      identifier. The first character can be a plus or minus sign. This format
      also applies to indicator variables that are passed as parameters.
DECIMAL(p,s), NUMERIC(p,s)
      A string of numerics that has a decimal point but no exponent identifier.
      The first character can be a plus or minus sign.
REAL, FLOAT(n), DOUBLE, DECFLOAT
      A string that represents a number in scientific notation. The string
      consists of a series of numerics followed by an exponent identifier (an E
      or e followed by an optional plus or minus sign and a series of numerics).
CHARACTER(n), VARCHAR(n), VARCHAR(n) FOR BIT DATA
      A string of length n, enclosed in single quotation marks. If you use host
      variables, the REXX format of BINARY and VARBINARY data is BX followed by
      a string that is enclosed in single quotation marks.
DATE  A string of length 10, enclosed in single quotation marks. The format of
      the string depends on the value of field DATE FORMAT that you specify
      when you install DB2.
TIME  A string of length 8, enclosed in single quotation marks. The format of
      the string depends on the value of field TIME FORMAT that you specify
      when you install DB2.
TIMESTAMP
      A string of length 26, enclosed in single quotation marks. The string has
      the format yyyy-mm-dd-hh.mm.ss.nnnnnn.
XML   No equivalent.
The following figure demonstrates how a REXX procedure calls the stored
procedure in “REXX stored procedures” on page 594. The REXX procedure
performs the following actions:
v Connects to the DB2 subsystem that was specified by the REXX procedure
invoker.
v Calls the stored procedure to execute a DB2 command that was specified by the
REXX procedure invoker.
v Retrieves rows from a result set that contains the command output messages.
/* REXX */
PARSE ARG SSID COMMAND /* Get the SSID to connect to */
/* and the DB2 command to be */
/* executed */
/****************************************************************/
/* Set up the host command environment for SQL calls. */
/****************************************************************/
"SUBCOM DSNREXX" /* Host cmd env available? */
IF RC THEN /* No--make one */
S_RC = RXSUBCOM('ADD','DSNREXX','DSNREXX')
/****************************************************************/
/* Connect to the DB2 subsystem. */
/****************************************************************/
ADDRESS DSNREXX "CONNECT" SSID
IF SQLCODE ¬= 0 THEN CALL SQLCA
PROC = 'COMMAND'
RESULTSIZE = 32703
RESULT = LEFT(' ',RESULTSIZE,' ')
/****************************************************************/
/* Call the stored procedure that executes the DB2 command. */
/* The input variable (COMMAND) contains the DB2 command. */
/* The output variable (RESULT) will contain the return area */
/* from the IFI COMMAND call after the stored procedure */
/* executes. */
/****************************************************************/
ADDRESS DSNREXX "EXECSQL" ,
"CALL" PROC "(:COMMAND, :RESULT)"
Before you can call a stored procedure from your embedded SQL application, you
must bind a package for the client program on the remote system. You can use the
remote DRDA bind capability on your DRDA client system to bind the package to
the remote system.
If you have packages that contain SQL CALL statements that you bound before
DB2 Version 6, you can get better performance from those packages if you rebind
them in DB2 Version 6 or later. Rebinding lets DB2 obtain some information from
the catalog at bind time that it would otherwise have to obtain at run time.
For an ODBC or CLI application, the DB2 packages and plan associated with the
ODBC driver must be bound to DB2 before you can run your application. A stored
procedure that runs under DB2 ODBC on a remote DB2 database server does not
need to be bound into the DB2 ODBC plan. It can be bound as a package at the
remote site.
A z/OS client can bind the DBRM to a remote server by specifying a location
name on the command BIND PACKAGE. For example, suppose you want a client
program to call a stored procedure at location LOCA. You precompile the program
to produce DBRM A. Then you can use the following command to bind DBRM A
into package collection COLLA at location LOCA:
BIND PACKAGE (LOCA.COLLA) MEMBER(A)
The plan for the package resides only at the client system.
However, if you do not qualify the stored procedure name, DB2 uses the following
method to determine which stored procedure to run:
1. DB2 searches the list of schema names from the PATH bind option or the
CURRENT PATH special register from left to right until it finds a schema name
for which a stored procedure definition exists with the name in the CALL
statement.
DB2 uses schema names from the PATH bind option for CALL statements of
the following form:
CALL procedure-name
DB2 uses schema names from the CURRENT PATH special register for CALL
statements of the following form:
CALL host-variable
2. When DB2 finds a stored procedure definition, DB2 executes that stored
procedure if the following conditions are true:
v The caller is authorized to execute the stored procedure.
v The stored procedure has the same number of parameters as in the CALL
statement.
If both conditions are not true, DB2 continues to go through the list of schemas
until it finds a stored procedure that meets both conditions or reaches the end
of the list.
3. If DB2 cannot find a suitable stored procedure, it returns an SQL error code for
the CALL statement.
For example, suppose that you want to write one program, PROGY, that calls one
of two versions of a stored procedure named PROCX. The load module for both
stored procedures is named SUMMOD. Each version of SUMMOD is in a different
load library. The stored procedures run in different WLM environments, and the
startup JCL for each WLM environment includes a STEPLIB concatenation that
specifies the correct load library for the stored procedure module.
First, define the two stored procedures in different schemas and different WLM
environments:
CREATE PROCEDURE TEST.PROCX(IN V1 INTEGER, OUT V2 CHAR(9))
LANGUAGE C
EXTERNAL NAME SUMMOD
WLM ENVIRONMENT TESTENV;
CREATE PROCEDURE PROD.PROCX(IN V1 INTEGER, OUT V2 CHAR(9))
LANGUAGE C
EXTERNAL NAME SUMMOD
WLM ENVIRONMENT PRODENV;
Bind two plans for PROGY. In one BIND statement, specify PATH(TEST). In the
other BIND statement, specify PATH(PROD).
To call TEST.PROCX, execute PROGY with the plan that you bound with
PATH(TEST). To call PROD.PROCX, execute PROGY with the plan that you bound
with PATH(PROD).
Each instance of the stored procedure runs serially within the same DB2 thread
and opens its own result sets. These multiple calls invoke multiple instances of any
packages that are invoked while running the stored procedure. These instances are
invoked at either the same or different level of nesting under one DB2 connection
or thread.
DB2 storage shortages and EDM POOL FULL conditions can occur if you call too
many instances of a stored procedure or if you open too many cursors. If the
stored procedure issues remote SQL statements to another DB2 server, these
conditions can occur at both the DB2 client and at the DB2 server.
The calling application for the stored procedure should close the result sets and
issue commits often. Even read-only applications should perform these actions.
Applications that fail to do so terminate abnormally with DB2 storage shortage
and EDM POOL FULL conditions.
You can set the maximum number of stored procedure instances and the maximum
number of open cursors on installation panel DSNTIPX. For more information
about setting the maximum number of stored procedure instances and the
maximum number of open cursors per DB2 thread or connection, see the topic
“Routine parameters panel: DSNTIPX” in DB2 Installation Guide.
Consider the following when you develop stored procedures that access non-DB2
resources:
| v When a stored procedure runs, the stored procedure uses the Recoverable
| Resource Manager Services for commitment control. When DB2 commits or rolls
| back work in this environment, DB2 coordinates all updates made to recoverable
| resources by other RRS compliant resource managers in the z/OS system.
| v When a stored procedure runs, DB2 can establish a RACF environment for
| accessing non-DB2 resources. The authority used when the stored procedure
| accesses protected z/OS resources depends on the value of SECURITY in the
| stored procedure definition:
| – If the value of SECURITY is DB2, the authorization ID associated with the
| stored procedures address space is used.
| – If the value of SECURITY is USER, the authorization ID under which the
| CALL statement is executed is used.
| – If the value of SECURITY is DEFINER, the authorization ID under which the
| CREATE PROCEDURE statement was executed is used.
v Not all non-DB2 resources can tolerate concurrent access by multiple TCBs in the
same address space. You might need to serialize the access within your
application.
CICS
| If your system is running a release of CICS that uses z/OS RRS, then z/OS RRS
| controls commitment of all resources.
IMS
If your system is not running a release of IMS that uses z/OS RRS, you can use
one of the following methods to access DL/I data from your stored procedure:
v Use the CICS EXCI interface to run a CICS transaction synchronously. That CICS
transaction can, in turn, access DL/I data.
v Invoke IMS transactions asynchronously using the MQI.
v Use APPC through the CPI Communications application programming interface.
Exception: If an existing active version is still being used by a process, the new
active version is not used until the next call to that procedure.
To temporarily override the active version of a native SQL procedure, specify the
following statements in your program in the following order:
1. The SET CURRENT ROUTINE VERSION statement with the name of the
version of the procedure that you want to use. If the specified version does not
exist, the active version is used.
2. The CALL statement with the name of the procedure.
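For example, a C program that wants to test version V2 of a native SQL procedure named MYSCHEMA.MYPROC might issue the following statements (a minimal sketch; the version identifier, procedure name, and parameter list are illustrative):
EXEC SQL SET CURRENT ROUTINE VERSION = 'V2';
EXEC SQL CALL MYSCHEMA.MYPROC (:parm1, :parm2);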
Special cases:
– For REXX stored procedures, you must set the NUMTCB parameter to 1.
| – Stored procedures that invoke utilities can invoke only one utility at a time in
| a single address space. Consequently, the value of the NUMTCB parameter is
| forced to 1 for those procedures.
Related tasks
“Setting up the stored procedures environment” on page 533
Maximizing the number of procedures or functions that run in an address
space (DB2 Performance)
To execute the CALL statement, the SQL authorization ID of the process must have
READ access or higher to the z/OS Security Server System Authorization Facility
(SAF) resource profile ssid.WLM_REFRESH.WLM-environment-name in resource
class DSNR. This is a different resource profile from the ssid.WLMENV.WLM-
environment-name resource profile, which DB2 uses to determine whether a stored
procedure or user-defined function is authorized to run in the specified WLM
environment.
For more information about permitting access to the extended MCS console, see
z/OS MVS Planning: Operations.
The following syntax diagram shows the SQL CALL statement for invoking
WLM_REFRESH. The linkage convention for WLM_REFRESH is GENERAL WITH
NULLS.
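For example, a C program might invoke WLM_REFRESH as follows. This is a minimal sketch that assumes the procedure takes the WLM environment name and subsystem ID as inputs and returns a status message and status code as outputs; because the linkage convention is GENERAL WITH NULLS, each parameter is followed by an indicator variable. The environment name WLMENV1 is illustrative:
char wlmenv[33];                 /* WLM environment name (input)      */
char ssid[5];                    /* DB2 subsystem ID (input)          */
char msgtext[121];               /* Status message (output)           */
long int status;                 /* Status code (output)              */
short indenv, indssid, indmsg, indstat;
  .
  .
  .
strcpy(wlmenv, "WLMENV1");
strcpy(ssid, "DSN");
indenv = 0;
indssid = 0;
indmsg = -1;                     /* Output parameters passed as null  */
indstat = -1;
EXEC SQL CALL SYSPROC.WLM_REFRESH(:wlmenv :indenv, :ssid :indssid,
                                  :msgtext :indmsg, :status :indstat);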
| The following DB2 for z/OS client special registers can be changed:
| v CURRENT CLIENT_ACCTNG
| v CURRENT CLIENT_USERID
| v CURRENT CLIENT_WRKSTNNAME
| This procedure is not under transaction control and client information changes
| made by the procedure are independent of committing or rolling back units of
| work.
| Environment
| Authorization
| To execute the CALL statement, the owner of the package or plan that contains the
| CALL statement must have one or more of the following privileges on each
| package that the stored procedure uses:
| v The EXECUTE privilege on the package for DSNADMSI
| v Ownership of the package
| v PACKADM authority for the package collection
| v SYSADM authority
| Syntax
|   CALL SYSPROC.WLM_SET_CLIENT_INFO ( client_userid , client_wrkstnname ,
|                                      client_applname , client_acctstr )
|   NULL can be specified for any of the four parameters.
|
| Procedure parameters
| client_userid
| An input argument of type VARCHAR(255) that specifies the user ID for the
| client. If NULL is specified, the value remains unchanged. If an empty string
| ('') is specified, the user ID for the client is reset to the default value, which is
| blank. If the value specified exceeds 16 bytes, it is truncated to 16 bytes. If the
| value specified is less than 16 bytes, it is padded on the right with blanks to a
| length of 16 bytes.
| client_wrkstnname
| An input argument of type VARCHAR(255) that specifies the workstation
| name for the client. If NULL is specified, the value remains unchanged. If an
| empty string ('') is specified, the workstation name for the client is reset to the
| default value, which is blank. If the value specified exceeds 18 bytes, it is
| truncated to 18 bytes. If the value specified is less than 18 bytes, it is padded
| on the right with blanks to a length of 18 bytes.
| client_applname
| An input argument of type VARCHAR(255) that specifies the application name
| for the client. If NULL is specified, the value remains unchanged. If an empty
| string ('') is specified, the application name for the client is reset to the
| default value, which is blank.
| Examples
| Set the user ID, workstation name, application name, and accounting string for the
| client.
| strcpy(user_id, "db2user");
| strcpy(wkstn_name, "mywkstn");
| strcpy(appl_name, "db2bp.exe");
| strcpy(acct_str, "myacctstr");
| iuser_id = 0;
| iwkstn_name = 0;
| iappl_name = 0;
| iacct_str = 0;
| EXEC SQL CALL SYSPROC.WLM_SET_CLIENT_INFO(:user_id:iuser_id, :wkstn_name:iwkstn_name,
| :appl_name:iappl_name, :acct_str:iacct_str);
| Set the user ID to db2user for the client without setting the other client attributes.
| strcpy(user_id, "db2user");
| iuser_id = 0;
| iwkstn_name = -1;
| iappl_name = -1;
| iacct_str = -1;
| EXEC SQL CALL SYSPROC.WLM_SET_CLIENT_INFO(:user_id:iuser_id, :wkstn_name:iwkstn_name,
| :appl_name:iappl_name, :acct_str:iacct_str);
| Reset the user ID for the client to blank without modifying the values of the other
| client attributes.
| strcpy(user_id, "");
| iuser_id = 0;
| iwkstn_name = -1;
| iappl_name = -1;
| iacct_str = -1;
| EXEC SQL CALL SYSPROC.WLM_SET_CLIENT_INFO(:user_id:iuser_id, :wkstn_name:iwkstn_name,
| :appl_name:iappl_name, :acct_str:iacct_str);
Environment
If you use CICS Transaction Server for OS/390® Version 1 Release 3 or later, you
can register your CICS system as a resource manager with recoverable resource
management services (RRMS). When you do that, changes to DB2 databases that
are made by the program that calls DSNACICS and the CICS server program that
DSNACICS invokes are in the same two-phase commit scope. This means that
when the calling program performs an SQL COMMIT or ROLLBACK, DB2 and
RRS inform CICS about the COMMIT or ROLLBACK.
If the CICS server program that DSNACICS invokes accesses DB2 resources, the
server program runs under a separate unit of work from the original unit of work
that calls the stored procedure. This means that the CICS server program might
deadlock with locks that the client program acquires.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the
CALL statement must have one or more of the following privileges:
v The EXECUTE privilege on stored procedure DSNACICS
v Ownership of the stored procedure
v SYSADM authority
The CICS server program that DSNACICS calls runs under the same user ID as
DSNACICS. That user ID depends on the SECURITY parameter that you specify
when you define DSNACICS.
The DSNACICS caller also needs authorization from an external security system,
such as RACF, to use CICS resources.
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this
stored procedure.
Because the linkage convention for DSNACICS is GENERAL WITH NULLS, if you
pass parameters in host variables, you need to include a null indicator with every
host variable. Null indicators for input host variables must be initialized before
you execute the CALL statement.
Option descriptions
parm-level
Specifies the level of the parameter list that is supplied to the stored procedure.
This is an input parameter of type INTEGER. The value must be 1.
pgm-name
Specifies the name of the CICS program that DSNACICS invokes. This is the
name of the program that the CICS mirror transaction calls, not the CICS
transaction name.
This is an input parameter of type CHAR(8).
CICS-applid
Specifies the applid of the CICS system to which DSNACICS connects.
This is an input parameter of type CHAR(8).
CICS-level
Specifies the level of the target CICS subsystem:
1 The CICS subsystem is CICS for MVS/ESA™ Version 4 Release 1, CICS
Transaction Server for OS/390 Version 1 Release 1, or CICS Transaction
Server for OS/390 Version 1 Release 2.
2 The CICS subsystem is CICS Transaction Server for OS/390 Version 1
Release 3 or later.
This is an input parameter of type INTEGER.
connect-type
Specifies whether the CICS connection is generic or specific. Possible values are
GENERIC or SPECIFIC.
This is an input parameter of type CHAR(8).
netname
If the value of connect-type is SPECIFIC, specifies the name of the specific
connection that is to be used. This value is ignored if the value of
connect-type is GENERIC.
This is an input parameter of type CHAR(8).
mirror-trans
Specifies the name of the CICS mirror transaction to invoke. This mirror
transaction calls the CICS server program that is specified in the pgm-name
parameter. mirror-trans must be defined to the CICS server region, and the
CICS resource definition for mirror-trans must specify DFHMIRS as the
program that is associated with the transaction.
If this parameter contains blanks, DSNACICS passes a mirror transaction
parameter value of null to the CICS EXCI interface. This allows an installation
DSNACICS always calls user exit routine DSNACICX. You can use DSNACICX to
change the values of DSNACICS input parameters before you pass those
parameters to CICS. If you do not supply your own version of DSNACICX,
DSNACICS calls the default DSNACICX, which modifies no values and does an
immediate return to DSNACICS. The source code for the default version of
DSNACICX is in member DSNASCIX in data set prefix.SDSNSAMP. The source
code for a sample version of DSNACICX that is written in COBOL is in member
DSNASCIO in data set prefix.SDSNSAMP.
Example
The following PL/I example shows the variable declarations and SQL CALL
statement for invoking the CICS transaction that is associated with program
CICSPGM1.
/***********************/
/* DSNACICS PARAMETERS */
/***********************/
DECLARE PARM_LEVEL BIN FIXED(31);
DECLARE PGM_NAME CHAR(8);
DECLARE CICS_APPLID CHAR(8);
DECLARE CICS_LEVEL BIN FIXED(31);
DECLARE CONNECT_TYPE CHAR(8);
DECLARE NETNAME CHAR(8);
DECLARE MIRROR_TRANS CHAR(4);
DECLARE COMMAREA_TOTAL_LEN BIN FIXED(31);
DECLARE SYNC_OPTS BIN FIXED(31);
DECLARE RET_CODE BIN FIXED(31);
DECLARE MSG_AREA CHAR(500) VARYING;
/***********************************************/
/* INDICATOR VARIABLES FOR DSNACICS PARAMETERS */
/***********************************************/
DECLARE 1 IND_VARS,
3 IND_PARM_LEVEL BIN FIXED(15),
3 IND_PGM_NAME BIN FIXED(15),
3 IND_CICS_APPLID BIN FIXED(15),
3 IND_CICS_LEVEL BIN FIXED(15),
3 IND_CONNECT_TYPE BIN FIXED(15),
3 IND_NETNAME BIN FIXED(15),
3 IND_MIRROR_TRANS BIN FIXED(15),
3 IND_COMMAREA BIN FIXED(15),
3 IND_COMMAREA_TOTAL_LEN BIN FIXED(15),
3 IND_SYNC_OPTS BIN FIXED(15),
3 IND_RETCODE BIN FIXED(15),
3 IND_MSG_AREA BIN FIXED(15);
/**************************/
PGM_NAME = 'CICSPGM1';
IND_PGM_NAME = 0 ;
MIRROR_TRANS = 'MIRT';
IND_MIRROR_TRANS = 0;
P1 = ADDR(COMMAREA_STG);
COMMAREA_INPUT = 'THIS IS THE INPUT FOR CICSPGM1';
COMMAREA_OUTPUT = ' ';
COMMAREA_LEN = LENGTH(COMMAREA_INPUT);
IND_COMMAREA = 0;
SYNC_OPTS= 1;
IND_SYNC_OPTS = 0;
IND_CICS_APPLID= -1;
IND_CICS_LEVEL = -1;
IND_CONNECT_TYPE = -1;
IND_NETNAME = -1;
/*****************************************/
/* INITIALIZE OUTPUT PARAMETERS TO NULL. */
/*****************************************/
IND_RETCODE = -1;
IND_MSG_AREA= -1;
/*****************************************/
/* CALL DSNACICS TO INVOKE CICSPGM1. */
/*****************************************/
EXEC SQL
CALL SYSPROC.DSNACICS(:PARM_LEVEL :IND_PARM_LEVEL,
:PGM_NAME :IND_PGM_NAME,
:CICS_APPLID :IND_CICS_APPLID,
:CICS_LEVEL :IND_CICS_LEVEL,
:CONNECT_TYPE :IND_CONNECT_TYPE,
:NETNAME :IND_NETNAME,
:MIRROR_TRANS :IND_MIRROR_TRANS,
:COMMAREA_STG :IND_COMMAREA,
:COMMAREA_TOTAL_LEN :IND_COMMAREA_TOTAL_LEN,
:SYNC_OPTS :IND_SYNC_OPTS,
:RET_CODE :IND_RETCODE,
:MSG_AREA :IND_MSG_AREA);
Output
DSNACICS places the return code from DSNACICS execution in the return-code
parameter. If the value of the return code is non-zero, DSNACICS puts its own
error messages and any error messages that are generated by CICS and the
DSNACICX user exit routine in the msg-area parameter.
Restrictions
Because DSNACICS uses the distributed program link (DPL) function to invoke
CICS server programs, server programs that you invoke through DSNACICS can
contain only the CICS API commands that the DPL function supports. The list of
supported commands is documented in CICS Transaction Server for z/OS
Application Programming Reference.
| DSNACICS does not propagate the transaction identifier (XID) of the thread. The
| stored procedure runs under a new private context rather than under the native
| context of the task that called it.
Debugging
If you receive errors when you call DSNACICS, ask your system administrator to
add a DSNDUMP DD statement in the startup procedure for the address space in
which DSNACICS runs. The DSNDUMP DD statement causes DB2 to generate an
SVC dump whenever DSNACICS issues an error message.
General considerations
DSNACICS always calls an exit routine named DSNACICX. DSNACICS calls your
DSNACICX exit routine if it finds it before the default DSNACICX exit routine.
Otherwise, it calls the default DSNACICX exit routine.
The DSNACICX exit routine is taken whenever DSNACICS is called. The exit
routine is taken before DSNACICS invokes the CICS server program.
DB2 loads DSNACICX only once, when DSNACICS is first invoked. If you change
DSNACICX, you can load the new version by quiescing and then resuming the
WLM application environment for the stored procedure address space in which
DSNACICS runs:
VARY WLM,APPLENV=DSNACICS-applenv-name,QUIESCE
VARY WLM,APPLENV=DSNACICS-applenv-name,RESUME
Parameter list
The following table shows the contents of the DSNACICX exit parameter list, XPL.
Member DSNDXPL in data set prefix.SDSNMACS contains an assembler language
mapping macro for XPL. Sample exit routine DSNASCIO in data set
prefix.SDSNSAMP includes a COBOL mapping macro for XPL.
Table 134. Contents of the XPL exit parameter list
Name        Hex offset  Data type           Description             Corresponding DSNACICS parameter
XPL_EYEC    0           Character, 4 bytes  Eye-catcher: ’XPL ’
XPL_LEN     4           Character, 4 bytes  Length of the exit
                                            parameter list
XPL_LEVEL   8           4-byte integer      Level of the            parm-level
                                            parameter list
DSNAIMS uses the IMS Open Transaction Manager Access (OTMA) API to
connect to IMS and execute the transactions.
| To use a two-phase commit process, you must have IMS Version 8 with UQ70789
| or later.
Authorization
To set up and run DSNAIMS, you must be authorized to perform the following
steps:
1. Use the job DSNTIJIM to issue the CREATE PROCEDURE statement for
DSNAIMS and to grant the execution of DSNAIMS to PUBLIC. DSNTIJIM is
provided in the SDSNSAMP data set. You need to customize DSNTIJIM to fit
the parameters of your system.
2. Ensure that OTMA C/I is initialized. See IMS Open Transaction Manager Access
Guide and Reference for an explanation of the C/I initialization.
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this
stored procedure:
Option descriptions
dsnaims-function
A string that indicates whether the transaction is send-only, receive-only, or
send-and-receive. Possible values are:
SENDRECV
Sends and receives IMS data. SENDRECV invokes an IMS transaction
or command and returns the result to the caller. The transaction can be
an IMS full-function or fast path transaction. SENDRECV does not support
multiple iterations of a conversational transaction.
SEND Sends IMS data. SEND invokes an IMS transaction or command, but
does not receive IMS data. If result data exists, it can be retrieved with
the RECEIVE function. A send-only transaction cannot be an IMS fast
path transaction or a conversational transaction.
Examples
By default DSNAIMS connects to only one IMS subsystem at a time. The first
request to DSNAIMS determines to which IMS subsystem the stored procedure
connects. DSNAIMS attempts to reconnect to IMS only in the following cases:
v IMS is restarted and the saved connection is no longer valid
v WLM loads another DSNAIMS task
| PSPI
| Environment
| Before you can invoke DSNAEXP, table sqlid.PLAN_TABLE must exist. sqlid is the
| value that you specify for the sqlid input parameter when you call DSNAEXP.
| Authorization required
| To execute the CALL DSN8.DSNAEXP statement, the owner of the package or plan
| that contains the CALL statement must have one or more of the following
| privileges on each package that the stored procedure uses:
| v The EXECUTE privilege on the package for DSNAEXP
| v Ownership of the package
| v PACKADM authority for the package collection
| v SYSADM authority
| In addition:
| v The SQL authorization ID of the process in which DSNAEXP is called must have
| the authority to execute SET CURRENT SQLID=sqlid.
| v The SQL authorization ID of the process must also have one of the following
| characteristics:
| – Be the owner of a plan table named PLAN_TABLE
| – Have an alias on a plan table named owner.PLAN_TABLE and have SELECT
| and INSERT privileges on the table
| The following syntax diagram shows the CALL statement for invoking DSNAEXP.
| Because the linkage convention for DSNAEXP is GENERAL, you cannot pass null
| values to the stored procedure.
|
| DSNAEXP output
| PSPI
To execute the CALL statement, the owner of the package or plan that contains the
CALL statement must have one or more of the following privileges:
v The EXECUTE privilege on stored procedure DXXMQINSERT
v Ownership of the stored procedure
v SYSADM authority
The following syntax diagram shows the SQL CALL statement for invoking
DXXMQINSERT. Because the linkage convention for DXXMQINSERT is GENERAL
WITH NULLS, if you pass parameters in host variables, you need to include a null
indicator with every host variable. Null indicators for input host variables must be
initialized before you execute the CALL statement.
DXXMQINSERT output
To execute the CALL statement, the owner of the package or plan that contains the
CALL statement must have one or more of the following privileges:
v The EXECUTE privilege on stored procedure DXXMQSHRED
v Ownership of the stored procedure
v SYSADM authority
The following syntax diagram shows the SQL CALL statement for invoking
DXXMQSHRED. Because the linkage convention for DXXMQSHRED is GENERAL
WITH NULLS, if you pass parameters in host variables, you need to include a null
indicator with every host variable. Null indicators for input host variables must
be initialized before you execute the CALL statement.
DXXMQSHRED output
To execute the CALL statement, the owner of the package or plan that contains the
CALL statement must have one or more of the following privileges:
v The EXECUTE privilege on stored procedure DXXMQINSERTCLOB
v Ownership of the stored procedure
v SYSADM authority
The following syntax diagram shows the SQL CALL statement for invoking
DXXMQINSERTCLOB. Because the linkage convention for DXXMQINSERTCLOB
is GENERAL WITH NULLS, if you pass parameters in host variables, you need to
include a null indicator with every host variable. Null indicators for input host
variables must be initialized before you execute the CALL statement.
DXXMQINSERTCLOB output
To execute the CALL statement, the owner of the package or plan that contains the
CALL statement must have one or more of the following privileges:
v The EXECUTE privilege on stored procedure DXXMQSHREDCLOB
v Ownership of the stored procedure
v SYSADM authority
The following syntax diagram shows the SQL CALL statement for invoking
DXXMQSHREDCLOB. Because the linkage convention for DXXMQSHREDCLOB is
GENERAL WITH NULLS, if you pass parameters in host variables, you need to
include a null indicator with every host variable. Null indicators for input host
variables must be initialized before you execute the CALL statement.
DXXMQSHREDCLOB output
To execute the CALL statement, the owner of the package or plan that contains the
CALL statement must have one or more of the following privileges:
v The EXECUTE privilege on stored procedure DXXMQINSERTALL
v Ownership of the stored procedure
v SYSADM authority
DXXMQINSERTALL output
To execute the CALL statement, the owner of the package or plan that contains the
CALL statement must have one or more of the following privileges:
v The EXECUTE privilege on stored procedure DXXMQSHREDALL
v Ownership of the stored procedure
v SYSADM authority
The following syntax diagram shows the SQL CALL statement for invoking
DXXMQSHREDALL. Because the linkage convention for DXXMQSHREDALL is
GENERAL WITH NULLS, if you pass parameters in host variables, you need to
include a null indicator with every host variable. Null indicators for input host
variables must be initialized before you execute the CALL statement.
DXXMQSHREDALL output
To execute the CALL statement, the owner of the package or plan that contains the
CALL statement must have one or more of the following privileges:
v The EXECUTE privilege on stored procedure DXXMQSHREDALLCLOB
v Ownership of the stored procedure
v SYSADM authority
The following syntax diagram shows the SQL CALL statement for invoking
DXXMQSHREDALLCLOB. Because the linkage convention for DXXMQSHREDALLCLOB is
GENERAL WITH NULLS, if you pass parameters in host variables, you need to include
a null indicator with every host variable. Null indicators for input host variables
must be initialized before you execute the CALL statement.
DXXMQSHREDALLCLOB output
To execute the CALL statement, the owner of the package or plan that contains the
CALL statement must have one or more of the following privileges:
v The EXECUTE privilege on stored procedure DXXMQINSERTALLCLOB
v Ownership of the stored procedure
v SYSADM authority
The following syntax diagram shows the SQL CALL statement for invoking
DXXMQINSERTALLCLOB. Because the linkage convention for
DXXMQINSERTALLCLOB is GENERAL WITH NULLS, if you pass parameters in
host variables, you need to include a null indicator with every host variable. Null
indicators for input host variables must be initialized before you execute the CALL
statement.
DXXMQINSERTALLCLOB output
To execute the CALL statement, the owner of the package or plan that contains the
CALL statement must have one or more of the following privileges:
v The EXECUTE privilege on stored procedure DXXMQGEN
v Ownership of the stored procedure
v SYSADM authority
The following syntax diagram shows the SQL CALL statement for invoking
DXXMQGEN. Because the linkage convention for DXXMQGEN is GENERAL
WITH NULLS, if you pass parameters in host variables, you need to include a null
indicator with every host variable. Null indicators for input host variables must be
initialized before you execute the CALL statement.
DXXMQGEN output
To execute the CALL statement, the owner of the package or plan that contains the
CALL statement must have one or more of the following privileges:
v The EXECUTE privilege on stored procedure DXXMQRETRIEVE
v Ownership of the stored procedure
v SYSADM authority
The following syntax diagram shows the SQL CALL statement for invoking
DXXMQRETRIEVE. Because the linkage convention for DXXMQRETRIEVE is
GENERAL WITH NULLS, if you pass parameters in host variables, you need to
include a null indicator with every host variable. Null indicators for input host
variables must be initialized before you execute the CALL statement.
DXXMQRETRIEVE output
To execute the CALL statement, the owner of the package or plan that contains the
CALL statement must have one or more of the following privileges:
v The EXECUTE privilege on stored procedure DXXMQGENCLOB
v Ownership of the stored procedure
v SYSADM authority
The following syntax diagram shows the SQL CALL statement for invoking
DXXMQGENCLOB. Because the linkage convention for DXXMQGENCLOB is
GENERAL WITH NULLS, if you pass parameters in host variables, you need to
include a null indicator with every host variable. Null indicators for input host
variables must be initialized before you execute the CALL statement.
DXXMQGENCLOB output
To execute the CALL statement, the owner of the package or plan that contains the
CALL statement must have one or more of the following privileges:
v The EXECUTE privilege on stored procedure DXXMQRETRIEVECLOB
v Ownership of the stored procedure
v SYSADM authority
The following syntax diagram shows the SQL CALL statement for invoking
DXXMQRETRIEVECLOB. Because the linkage convention for
DXXMQRETRIEVECLOB is GENERAL WITH NULLS, if you pass parameters in
host variables, you need to include a null indicator with every host variable. Null
indicators for input host variables must be initialized before you execute the CALL
statement.
DXXMQRETRIEVECLOB output
The user that calls this stored procedure is considered the creator of this XML
schema. DB2 obtains the namespace attribute from the schema document when
XSR_COMPLETE is invoked.
The user ID of the caller of the procedure must have the EXECUTE privilege on
the XSR_REGISTER stored procedure.
The following syntax diagram shows the CALL statement for invoking
XSR_REGISTER.
CALL SYSPROC.XSR_REGISTER ( rschema , name , schemalocation ,
                            content , docproperty )
NULL can be specified for docproperty.
Example of XSR_REGISTER
Each XML schema in the XSR can consist of one or more XML schema documents.
When an XML schema consists of multiple documents, you need to call
XSR_ADDSCHEMADOC for the additional documents.
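For example, a client program might register an XML schema that consists of two documents and then complete the registration with a call sequence like the following. This is a minimal embedded SQL sketch: the host-variable names, the BLOB lengths, and the assumption that the schema documents have already been read into the content host variables are all illustrative, and the parameter names follow the syntax diagrams in this information:
char rschema[129];                       /* Relational schema for the XSR entry  */
char name[129];                          /* XML schema name                      */
char schemaloc[1001];                    /* Schema location value                */
char schemaloc2[1001];                   /* Location of the second document      */
SQL TYPE IS BLOB(1M) content;            /* First schema document                */
SQL TYPE IS BLOB(1M) content2;           /* Second schema document               */
SQL TYPE IS BLOB(1M) docproperty;        /* Document properties                  */
SQL TYPE IS BLOB(1M) schemaproperties;   /* Schema properties                    */
long int decompose;                      /* 1 = schema is used for decomposition */
  .
  .
  .
EXEC SQL CALL SYSPROC.XSR_REGISTER
  (:rschema, :name, :schemaloc, :content, :docproperty);
EXEC SQL CALL SYSPROC.XSR_ADDSCHEMADOC
  (:rschema, :name, :schemaloc2, :content2, :docproperty);
EXEC SQL CALL SYSPROC.XSR_COMPLETE
  (:rschema, :name, :schemaproperties, :decompose);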
The user ID of the caller of the procedure must have the EXECUTE privilege on
the XSR_ADDSCHEMADOC stored procedure.
The following syntax diagram shows the CALL statement for invoking
XSR_ADDSCHEMADOC.
CALL SYSPROC.XSR_ADDSCHEMADOC ( rschema , name , schemalocation ,
                                content , docproperty )
NULL can be specified for rschema, schemalocation, and docproperty.
Example of XSR_ADDSCHEMADOC
An XML schema is not available for validation until the schema registration
completes through a call to this stored procedure.
The user ID of the caller of the procedure must have the EXECUTE privilege on
the XSR_COMPLETE stored procedure.
The following syntax diagram shows the CALL statement for invoking
XSR_COMPLETE.
CALL SYSPROC.XSR_COMPLETE ( rschema , name , schemaproperties ,
                            issuedfordecomposition )
Example of XSR_COMPLETE
The user ID of the caller of the procedure must have the EXECUTE privilege on
the XSR_REMOVE stored procedure.
The following syntax diagram shows the CALL statement for invoking
XSR_REMOVE.
CALL SYSPROC.XSR_REMOVE ( rschema , name )
NULL can be specified for rschema.
Example of XSR_REMOVE
The following syntax diagram shows the CALL statement for invoking
XDBDECOMPXML.
CALL SYSPROC.XDBDECOMPXML (rschema, name, xmldoc, documentid)
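As a sketch, assuming that the hypothetical POSCHEMA schema has been registered
and enabled for decomposition, a BLOB host variable :xmldoc that holds the
document to decompose, and an arbitrary document identifier, the call might look
like this:
CALL SYSPROC.XDBDECOMPXML ('SYSXSR', 'POSCHEMA', :xmldoc, 'ORDER0001')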
These two methods of coding applications for distributed access are illustrated by
the following example.
Example: Spiffy Computer has a master project table that supplies information
about all projects that are currently active throughout the company. Spiffy has
several branches in various locations around the world, each a DB2 location that
maintains a copy of the project table named DSN8910.PROJ. The main branch
location occasionally inserts data into all copies of the table. The application that
makes the inserts uses a table of location names. For each row that is inserted, the
application executes an INSERT statement in DSN8910.PROJ for each location.
Copying a table from a remote location: To copy a table from one location to
another, you can either write your own application program or use the DB2
DataPropagator™ product.
When you use three-part table names, the way you code your application is the
same, regardless of the access method you choose. You determine the access
method when you bind the SQL statements into a package or plan. If you use
DRDA access, you must bind the DBRMs for the SQL statements to be executed at
the server to packages that reside at that server.
In a three-part table name, the first part denotes the location. The local DB2 makes
and breaks an implicit connection to a remote server as needed.
When a three-part name is parsed and forwarded to a remote location, any special
register settings are automatically propagated to the remote server. This propagation
ensures that the SQL statements are processed in the same way regardless of the site
at which they run.
The following example assumes that all systems involved implement two-phase
commit. This example suggests updating several systems in a loop and ending the
unit of work by committing only when the loop is complete. Updates are
coordinated across the entire set of systems.
The following overview shows how the application uses three-part names:
Read input values
Do for all locations
Read location name
Set up statement to prepare
Prepare statement
Execute statement
End loop
Commit
After the application obtains a location name, for example ’SAN_JOSE’, it next
creates the following character string:
INSERT INTO SAN_JOSE.DSN8910.PROJ VALUES (?,?,?,?,?,?,?,?)
The application assigns the character string to the variable INSERTX and then
executes these statements:
EXEC SQL
PREPARE STMT1 FROM :INSERTX;
EXEC SQL
EXECUTE STMT1 USING :PROJNO, :PROJNAME, :DEPTNO, :RESPEMP,
:PRSTAFF, :PRSTDATE, :PRENDATE, :MAJPROJ;
The host variables for Spiffy’s project table match the declaration for the sample
project table.
To keep the data consistent at all locations, the application commits the work only
when the loop has executed for all locations. Either every location has committed
the INSERT or, if a failure has prevented any location from inserting, all other
locations have rolled back the INSERT. (If a failure occurs during the commit
process, the entire unit of work can be indoubt.)
Three-part names and multiple servers: If you use a three-part name, or an alias
that resolves to one, in a statement that is executed at a remote server by DRDA
access, and if the location name is not that of the server, then the method by which
the remote server accesses data at the named location depends on the value of
DBPROTOCOL. If the package at the first remote server is bound with
DBPROTOCOL(PRIVATE), DB2 uses DB2 private protocol access to access the
second remote server. If the package at the first remote server is bound with
DBPROTOCOL(DRDA), DB2 uses DRDA access to access the second remote server.
The following steps are recommended so that access to the second remote server is
by DRDA access:
1. Rebind the package at the first remote server with DBPROTOCOL(DRDA).
2. Bind the package that contains the three-part name at the second server.
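For example, step 1 might be performed with a REBIND subcommand like the
following one; the collection and package names are hypothetical:
REBIND PACKAGE(MYCOLL.MYPKG) DBPROTOCOL(DRDA)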
Example
You can perform the following series of actions, which includes a forward
reference to a declared temporary table:
EXEC SQL CONNECT TO CHICAGO; /* Connect to the remote site */
EXEC SQL
DECLARE GLOBAL TEMPORARY TABLE T1 /* Define the temporary table */
(CHARCOL CHAR(6) NOT NULL); /* at the remote site */
EXEC SQL CONNECT RESET; /* Connect back to local site */
EXEC SQL INSERT INTO CHICAGO.SESSION.T1
(VALUES 'ABCDEF'); /* Access the temporary table*/
/* at the remote site (forward reference) */
However, you cannot perform the following series of actions, which includes a
backward reference to the declared temporary table:
EXEC SQL
DECLARE GLOBAL TEMPORARY TABLE T1 /* Define the temporary table */
(CHARCOL CHAR(6) NOT NULL); /* at the local site (ATLANTA)*/
EXEC SQL CONNECT TO CHICAGO; /* Connect to the remote site */
EXEC SQL INSERT INTO ATLANTA.SESSION.T1
(VALUES 'ABCDEF'); /* Cannot access temp table */
/* from the remote site (backward reference)*/
You must bind the DBRMs for the SQL statements to be executed at the server to
packages that reside at that server.
The following example assumes that all systems involved implement two-phase
commit. This example suggests updating several systems in a loop and ending the
unit of work by committing only when the loop is complete. Updates are
coordinated across the entire set of systems.
In this example, Spiffy’s application executes CONNECT for each server in turn,
and the server executes INSERT. In this case, the tables to be updated each have
the same name, although each table is defined at a different server. The application
executes the statements in a loop, with one iteration for each server.
The following overview shows how the application uses explicit CONNECTs:
Read input values
Do for all locations
Read location name
Connect to location
Execute insert statement
End loop
Commit
Release all
For example, the application inserts a new location name into the variable
LOCATION_NAME and executes the following statements:
EXEC SQL
CONNECT TO :LOCATION_NAME;
EXEC SQL
INSERT INTO DSN8910.PROJ VALUES (:PROJNO, :PROJNAME, :DEPTNO, :RESPEMP,
:PRSTAFF, :PRSTDATE, :PRENDATE, :MAJPROJ);
To keep the data consistent at all locations, the application commits the work only
when the loop has executed for all locations. Either every location has committed
the INSERT or, if a failure has prevented any location from inserting, all other
locations have rolled back the INSERT. (If a failure occurs during the commit
process, the entire unit of work can be indoubt.)
The host variables for Spiffy’s project table match the declaration for the sample
project table. LOCATION_NAME is a character-string variable of length 16.
Related reference
Project table (Introduction to DB2 for z/OS)
DB2 uses the DBALIAS value in the SYSIBM.LOCATIONS table to override the
location name that an application uses to access a server.
For example, suppose that an employee database is deployed across two sites and
that both sites make themselves known as location name EMPLOYEE. To access
each site, insert a row for each site into SYSIBM.LOCATIONS with the location
names SVL_EMPLOYEE and SJ_EMPLOYEE. Both rows contain EMPLOYEE as the
DBALIAS value. When an application issues a CONNECT TO SVL_EMPLOYEE
statement, DB2 searches the SYSIBM.LOCATIONS table to retrieve the location and
network attributes of the database server. Because the DBALIAS value is not blank,
DB2 uses the alias EMPLOYEE, and not the location name, to access the database.
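As a sketch, the two rows might be inserted as follows. The LINKNAME values are
placeholders; they must match your communications definitions (for example, rows
in SYSIBM.IPNAMES or SYSIBM.LUNAMES).
INSERT INTO SYSIBM.LOCATIONS (LOCATION, LINKNAME, DBALIAS)
  VALUES ('SVL_EMPLOYEE', 'SVLLINK', 'EMPLOYEE');
INSERT INTO SYSIBM.LOCATIONS (LOCATION, LINKNAME, DBALIAS)
  VALUES ('SJ_EMPLOYEE', 'SJLINK', 'EMPLOYEE');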
If the application uses fully qualified object names in its SQL statements, DB2
sends the statements to the remote server without modification. For example,
suppose that the application issues the statement SELECT * FROM
SVL_EMPLOYEE.authid.table with the fully-qualified object name. However, DB2
accesses the remote server by using the EMPLOYEE alias. The remote server must
identify itself as both SVL_EMPLOYEE and EMPLOYEE; otherwise, it rejects the
SQL statement with a message indicating that the database is not found.
Releasing connections
When you connect to remote locations explicitly, you must also break those
connections explicitly.
To break the connections, you can use the RELEASE statement. The RELEASE
statement differs from the CONNECT statement in the following ways:
v While the CONNECT statement makes an immediate connection, the RELEASE
statement does not immediately break a connection. The RELEASE statement
labels connections for release at the next commit point. A connection that has
been labeled for release is in the release-pending state and can still be used before
the next commit point.
v While the CONNECT statement connects to exactly one remote system, you can
use the RELEASE statement to specify a single connection or a set of connections
for release at the next commit point.
Example: By using the RELEASE statement, you can place any of the following
connections in the release-pending state:
v A specific connection that the next unit of work does not use:
EXEC SQL RELEASE SPIFFY1;
v The current SQL connection, whatever its location name:
EXEC SQL RELEASE CURRENT;
v All connections except the local connection:
EXEC SQL RELEASE ALL;
v All DB2 private protocol connections. If the first phase of your application
program uses DB2 private protocol access and the second phase uses DRDA
access, open DB2 private protocol connections from the first phase could cause a
CONNECT operation to fail in the second phase. To prevent that error, execute
the following statement before the commit operation that separates the two
phases:
EXEC SQL RELEASE ALL PRIVATE;
PRIVATE refers to DB2 private protocol connections, which exist only between
instances of DB2 for z/OS.
If you transmit mixed data between your local system and a remote system, put
the data in varying-length character strings instead of fixed-length character
strings.
The special register CURRENT SERVER contains the location name of the system
you are connected to. You can assign that name to a host variable with a statement
like this:
EXEC SQL SET :CS = CURRENT SERVER;
When a distributed query against an ASCII or Unicode table arrives at the DB2 for
z/OS server, the server indicates in the reply message that the columns of the
result table contain ASCII or Unicode data, rather than EBCDIC data. The reply
message also includes the CCSIDs of the data to be returned. The CCSID of data
from a column is the CCSID that was in effect when the column was defined.
The encoding scheme in which DB2 returns data depends on two factors:
v The encoding scheme of the requesting system.
If the requester is ASCII or Unicode, the returned data is ASCII or Unicode. If
the requester is EBCDIC, the returned data is EBCDIC, even though it is stored
at the server as ASCII or Unicode. However, if the SELECT statement that is
used to retrieve the data contains an ORDER BY clause, the data displays in
ASCII or Unicode order.
v Whether the application program overrides the CCSID for the returned data. The
ways to do this are as follows:
– For static SQL
You can bind a plan or package with the ENCODING bind option to control
the CCSIDs for all static data in that plan or package. For example, if you
specify ENCODING(UNICODE) when you bind a package at a remote DB2
for z/OS system, the data that is returned in host variables from the remote
system is encoded in the default Unicode CCSID for that system.
For more information about the ENCODING bind options, see the topic
“BIND and REBIND options” in DB2 Command Reference.
– For static or dynamic SQL
An application program can specify overriding CCSIDs for individual host
variables in DECLARE VARIABLE statements. See “Setting the CCSID for
host variables” on page 146 for information about how to specify the CCSID
for a host variable.
An application program that uses an SQLDA can specify an overriding
CCSID for the returned data in the SQLDA. When the application program
executes a FETCH statement, you receive the data in the CCSID that is
specified in the SQLDA.
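As a sketch of these two approaches (the location, collection, program, and
host-variable names are hypothetical), you might bind the remote package with a
Unicode encoding:
BIND PACKAGE(CHICAGO.MYCOLL) MEMBER(MYPROG) ENCODING(UNICODE)
and, in the program, declare an overriding CCSID for an individual host variable:
EXEC SQL DECLARE :HV1 VARIABLE CCSID 1208;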
If a DB2 for z/OS server processes an OPEN cursor statement for a scrollable
cursor, and the OPEN cursor statement comes from a requester that does not
support scrollable cursors, the DB2 for z/OS server returns an SQL error. However,
if a stored procedure at the server uses a scrollable cursor to return a result set, the
down-level requester can access data through that cursor. The DB2 for z/OS server
Chapter 15. Coding methods for distributed data 841
converts the scrollable result set cursor to a non-scrollable cursor. The requester can
retrieve the data using sequential FETCH statements.
Restriction: All DB2 MQ functions that use AMI are deprecated. You can convert
those applications that use the AMI-based functions to use the MQI-based
functions.
Related tasks
“Converting applications to use the MQI functions” on page 861
Related information
WebSphere MQ information center
WebSphere MQ messages
WebSphere MQ uses messages to pass information between applications.
| DB2 communicates with the WebSphere message handling system through a set of
| external user-defined functions, which are called DB2 MQ functions. These
| functions use either the MQI or the AMI.
| Restriction: The AMI and all DB2 MQ functions that use the AMI have been
| deprecated.
| When you send a message, you must specify the following three components:
| message data
| Defines what is sent from one program to another.
| service
| Defines where the message is going to or coming from. The parameters for
| managing a queue are defined in the service, which is typically defined by
| a system administrator. The complexity of the parameters in the service is
| hidden from the application program.
| policy Defines how the message is handled. Policies control such items as:
| v The attributes of the message, for example, the priority.
| v Options for send and receive operations, for example, whether an
| operation is part of a unit of work.
| The default service and policy are set as part of defining the WebSphere MQ
| configuration for a particular installation of DB2. (This action is typically
| performed by a system administrator.) DB2 provides the default service
| DB2.DEFAULT.SERVICE and the default policy DB2.DEFAULT.POLICY.
| How services and policies are stored and managed depends on whether you are
| using the AMI or the MQI.
| Related tasks
| Enabling WebSphere MQ user-defined functions (DB2 Installation and
| Migration)
| Related information
| WebSphere MQ information center
| One way to send and receive WebSphere MQ messages from DB2 applications is to
| use the DB2 MQ functions that use MQI.
| These MQI-based functions use the services and policies that are defined in two
| DB2 tables, SYSIBM.MQSERVICE_TABLE and SYSIBM.MQPOLICY_TABLE.
| The application program does not need to know the details of the services and
| policies that are defined in these tables. The application need only specify which
| service and policy to use for each message that it sends and receives. The
| application specifies this information when it calls a DB2 MQ function.
| Related concepts
| “DB2 MQ functions and DB2 MQ XML stored procedures” on page 846
| Related reference
| “DB2 MQ tables” on page 853
| The MQI-based DB2 MQ functions use the services that are defined in the DB2
| table SYSIBM.MQSERVICE_TABLE. This table is user-managed and is typically
| created and maintained by a system administrator. This table contains a row for
| each defined service, including your customized services and the default service
| that is provided by DB2.
| The application program does not need to know the details of the defined services.
| When an application program calls an MQI-based DB2 MQ function, the program
| selects a service from SYSIBM.MQSERVICE_TABLE by specifying it as a parameter.
| Related concepts
| “DB2 MQ functions and DB2 MQ XML stored procedures” on page 846
| “WebSphere MQ message handling” on page 843
| Related reference
| “DB2 MQ tables” on page 853
| A policy controls how the MQ messages are handled. DB2 MQI policies are
| defined in the DB2 table SYSIBM.MQPOLICY_TABLE.
| The MQI-based DB2 MQ functions use the policies that are defined in the DB2
| table SYSIBM.MQPOLICY_TABLE. This table is user-managed and is typically
| created and maintained by a system administrator. This table contains a row for
| each defined policy, including your customized policies and the default policy that
| is provided by DB2.
| The application program does not need to know the details of the defined policies.
| When an application program calls an MQI-based DB2 MQ function, the program
| selects a policy from SYSIBM.MQPOLICY_TABLE by specifying it as a parameter.
| One way to send and receive WebSphere MQ messages from DB2 applications is to
| use the DB2 MQ functions that use AMI. However, be aware that this interface and
| the associated DB2 MQ functions have been deprecated.
| Restriction: The AMI and the DB2 MQ functions that use the AMI have been
| deprecated. You can convert those applications that use the AMI-based functions to
| use the MQI-based functions.
| The AMI-based functions use the services and policies that are defined in AMI
| configuration files, which are in XML format. Typically, these files are created and
| maintained by a system administrator. These files also define any default services
| and policies, including the defaults that are provided by DB2.
| The application program does not need to know the details of the services and
| policies that are defined in these files. The application need only specify which
| service and policy to use for each message that it sends and receives. The
| application specifies this information when it calls a DB2 MQ function.
| The AMI uses the service and policy to interpret and construct the MQ headers
| and message descriptors. The AMI does not act on the message data.
| Related concepts
| “DB2 MQ functions and DB2 MQ XML stored procedures” on page 846
| Related tasks
| “Converting applications to use the MQI functions” on page 861
| Related information
| WebSphere MQ information center
| AMI services:
| Restriction: The AMI and the DB2 MQ functions that use the AMI have been
| deprecated.
| The AMI-based DB2 MQ functions use the services that are defined in AMI
| configuration files, which are in XML format. These files are typically created and
| maintained by a system administrator. These files contain all of the defined
| services, including your customized services and any default services, such as the
| one that DB2 provides.
| The application program does not need to know the details of the defined services.
| When an application program calls an AMI-based DB2 MQ function, the program
| selects a service from the AMI configuration file by specifying it as a parameter.
| AMI policies:
| A policy controls how the MQ messages are handled. AMI policies are defined in
| AMI configuration files.
| Restriction: The AMI and the DB2 MQ functions that use the AMI have been
| deprecated.
| The AMI-based DB2 MQ functions use the policies that are defined in AMI
| configuration files, which are in XML format. These files are typically created and
| maintained by a system administrator. These files contain all of the defined
| policies, including your customized policies and any default policies, such as the
| one that DB2 provides.
| The application program does not need to know the details of the defined policies.
| When an application program calls an AMI-based DB2 MQ function, the program
| selects a policy from the AMI configuration file by specifying it as a parameter.
| Related concepts
| “DB2 MQ functions and DB2 MQ XML stored procedures”
| “WebSphere MQ message handling” on page 843
You can use the DB2 MQ functions and stored procedures to send messages to a
message queue or to receive messages from the message queue. You can send a
request to a message queue and receive a response, and you can also publish
messages to the WebSphere MQ publisher and subscribe to messages that have
been published with specific topics. The DB2 MQ XML functions and stored
procedures enable you to query XML documents and then publish the results to a
message queue.
| The WebSphere MQ server is located on the same z/OS system as the DB2
| database server. The DB2 MQ functions and stored procedures are registered with
| the DB2 database server and provide access to the WebSphere MQ server by using
| the MQI.
| The DB2 MQ functions include scalar functions, table functions, and XML-specific
| functions. For each of these functions, you can call a version that uses the MQI or
| a version that uses the AMI. (Any exceptions are noted in the description of these
| functions.) The function signatures are the same. However, the qualifying schema
| names are different. To call an MQI-based function, specify the schema name
| DB2MQ. To call an AMI-based function, specify the schema names DB2MQ1C,
| DB2MQ1N, DB2MQ2C, or DB2MQ2N.
| Restriction: All DB2 MQ functions that use the AMI have been deprecated. You
| can convert those applications that use the AMI-based functions to use the
| MQI-based functions.
| Requirement: Before you can call the version of these functions that uses the MQI,
| you need to populate the DB2 MQ tables.
The following table describes the MQ table functions that DB2 can use.
Table 136. DB2 MQ table functions
MQREADALL (receive-service, service-policy, num-rows)
MQREADALL returns a table that contains the messages and message metadata
in VARCHAR variables from the MQ location specified by receive-service, using
the policy defined in service-policy. This operation does not remove the messages
from the queue. If num-rows is specified, a maximum of num-rows messages is
returned; if num-rows is not specified, all available messages are returned.

MQREADALLCLOB (receive-service, service-policy, num-rows)
MQREADALLCLOB returns a table that contains the messages and message
metadata in CLOB variables from the MQ location specified by receive-service,
using the policy defined in service-policy. This operation does not remove the
messages from the queue. If num-rows is specified, a maximum of num-rows
messages is returned; if num-rows is not specified, all available messages are
returned.

MQRECEIVEALL (receive-service, service-policy, correlation-id, num-rows)
MQRECEIVEALL returns a table that contains the messages and message
metadata in VARCHAR variables from the MQ location specified by
receive-service, using the policy defined in service-policy. This operation removes
the messages from the queue. If correlation-id is specified, only those messages
with a matching correlation identifier are returned; if correlation-id is not
specified, all available messages are returned. If num-rows is specified, a
maximum of num-rows messages is returned; if num-rows is not specified, all
available messages are returned.
The following table describes the MQ functions that DB2 can use to work with
XML data.
You can use the WebSphere MQ XML stored procedures to retrieve an XML
document from a message queue, decompose it into untagged data, and store the
data in DB2 tables. You can also compose an XML document from DB2 data and
send the document to an MQSeries(R) message queue.
| Restriction: All of these DB2 MQ XML composition stored procedures have been
| deprecated. Instead of using these composition stored procedures, you can
| generate XML documents from existing tables and send them to an MQ message
| queue.
Table 139. DB2 MQ XML composition stored procedures
DXXMQGEN and DXXMQGENALL
Generate XML documents from existing database tables and send the generated
XML documents to a message queue. The DXXMQGEN and DXXMQGENALL
stored procedures take a DAD file as input; they do not require an enabled
XML collection name as input.

DXXMQRETRIEVE and DXXMQRETRIEVECLOB
Generate XML documents from existing database tables and send the generated
XML documents to a message queue. The DXXMQRETRIEVE and
DXXMQRETRIEVECLOB stored procedures require an enabled XML collection
name as input.
| Restriction: The AMI and the DB2 MQ functions that use the AMI have been
| deprecated.
| The schema name when you use AMI-based DB2 MQ functions and stored
| procedures for a single-phase commit is DB2MQ1N. The schema name when you
| use AMI-based DB2 MQ functions and stored procedures for a two-phase commit
| is DB2MQ2N. The schema names DB2MQ1C and DB2MQ2C are still valid, but
| they do not support the parameter style that allows the value to contain binary ’0’.
| You need to assign these two versions of the AMI-based DB2 MQ functions and
| stored procedures to different WLM environments, which guarantees that the
| versions are never invoked from the same address space.
| For MQI-based DB2 MQ functions, you can specify whether the function is for
| one-phase commit or two-phase commit by using the value in the SYNCPOINT
| column of the table SYSIBM.MQPOLICY_TABLE.
This type of commit is typically used in the case of application error. You might
want to use WebSphere MQ messaging functions to notify a system programmer about the error.
If your application uses two-phase commit, RRS coordinates the commit process. If
a transaction is rolled back, the messages that have been sent to a queue within the
current unit of work are discarded.
| The DB2 MQ XML stored procedures for composing XML documents have been
| deprecated. Instead of using those stored procedures, you can achieve the same
| result by completing the steps in this task.
| The DB2 MQ XML stored procedures for decomposing XML documents have been
| deprecated. Instead of using those stored procedures, you can achieve the same
| result by completing the steps in this task.
| DB2 MQ tables
| The DB2 MQ tables contain service and policy definitions that are used by the
| MQI-based DB2 MQ functions. You must populate the DB2 MQ tables before you
| can use these MQI-based functions.
| If you previously used the AMI-based DB2 MQ functions, you used AMI
| configuration files instead of these tables. To use the MQI-based DB2 MQ
| functions, you need to move the data from those configuration files to the DB2
| tables SYSIBM.MQSERVICE_TABLE and SYSIBM.MQPOLICY_TABLE.
| The default value for this column is -1, which sets the
| Priority field in the MQMD to the value
| MQPRI_PRIORITY_AS_Q_DEF.
| SEND_PERSISTENCE This column indicates whether the message persists
| despite any system failures or instances of restarting the
| queue manager.
| The default value is -1, which sets the Expiry field to the
| value MQEI_UNLIMITED.
| SEND_RETRY_COUNT This column contains the number of times that the MQ
| function is to try to send a message if the procedure fails.
| To convert an application to use the MQI functions, perform the following actions:
| 1. Set up the DB2 MQ functions that are based on MQI by performing the
| following actions:
| a. Run installation job DSNTIJSG. This job binds the new MQI-based DB2 MQ
| functions and creates the tables SYSIBM.MQSERVICE_TABLE and
| SYSIBM.MQPOLICY_TABLE.
| b. Convert the contents of the AMI configuration files to rows in the tables
| SYSIBM.MQSERVICE_TABLE and SYSIBM.MQPOLICY_TABLE.
| 2. If the application contains unqualified references to DB2 MQ functions, set the
| CURRENT PATH special register to the schema name DB2MQ.
| 3. If the application contains qualified references to DB2 MQ functions, change the
| schema names in those references from the old names (DB2MQ1N, DB2MQ2N,
| DB2MQ1C, and DB2MQ2C) to DB2MQ.
| 4. Change the size of any host variables to accommodate for the following larger
| message sizes:
| v DB2 MQ functions for VARCHAR data can have a maximum message size
| of 32 KB.
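For example, for steps 2 and 3, an application that previously called
DB2MQ1N.MQSEND can either change the reference to DB2MQ.MQSEND or leave
the reference unqualified and set the SQL path before calling the function. A sketch
of the latter approach follows; any additional schemas that your application needs
would also go in the path:
EXEC SQL SET CURRENT PATH = 'DB2MQ';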
If you send more than one column of information, separate the columns with the
characters || ' ' ||.
The following examples use the DB2MQ2N schema for two-phase commit, with
the default service DB2.DEFAULT.SERVICE and the default policy
DB2.DEFAULT.POLICY.
Example: The following SQL SELECT statement sends a message that consists of
the string ″Testing msg″:
SELECT DB2MQ2N.MQSEND ('Testing msg')
FROM SYSIBM.SYSDUMMY1;
COMMIT;
When you use single-phase commit, you do not need to use a COMMIT statement.
For example:
SELECT DB2MQ1N.MQSEND ('Testing msg')
FROM SYSIBM.SYSDUMMY1;
Example: Assume that you have an EMPLOYEE table, with VARCHAR columns
LASTNAME, FIRSTNAME, and DEPARTMENT. To send a message that contains
this information for each employee in DEPARTMENT 5LGA, issue the following
SQL SELECT statement:
SELECT DB2MQ2N.MQSEND (LASTNAME || ' ' || FIRSTNAME || ' ' || DEPARTMENT)
FROM EMPLOYEE WHERE DEPARTMENT = '5LGA';
COMMIT;
A message that is retrieved using a receive operation can be retrieved only once,
whereas a message that is retrieved using a read operation allows the same
message to be retrieved many times.
Example: The following SQL SELECT statement reads the message at the head of
the queue that is specified by the default service and policy:
SELECT DB2MQ2N.MQREAD()
FROM SYSIBM.SYSDUMMY1;
Example: The following SQL SELECT statement causes the contents of a queue to
be materialized as a DB2 table:
SELECT T.*
FROM TABLE(DB2MQ2N.MQREADALL()) T;
The result table T of the table function consists of all the messages in the queue,
which is defined by the default service, and the metadata about those messages.
The first column of the materialized result table is the message itself, and the
remaining columns contain the metadata. The SELECT statement returns both the
messages and the metadata.
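Example: The following SQL SELECT statement retrieves only the messages, by
selecting the message column (MSG) of the result table:
SELECT T.MSG
FROM TABLE(DB2MQ2N.MQREADALL()) T;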
The result table T of the table function consists of all the messages in the queue,
which is defined by the default service, and the metadata about those messages.
This SELECT statement returns only the messages.
Example: The following SQL SELECT statement receives (removes) the message at
the head of the queue:
SELECT DB2MQ2N.MQRECEIVE()
FROM SYSIBM.SYSDUMMY1;
COMMIT;
The following two scenarios are very common when interconnecting applications:
v Request-and-reply communication method
v Publish-and-subscribe method
The following examples use the DB2MQ1N schema for single-phase commit.
Example: The following SQL SELECT statement sends a message consisting of the
string ″Msg with corr id″ to the service MYSERVICE, using the policy MYPOLICY
with correlation identifier CORRID1:
SELECT DB2MQ1N.MQSEND ('MYSERVICE', 'MYPOLICY', 'Msg with corr id', 'CORRID1')
FROM SYSIBM.SYSDUMMY1;
Example: The following SQL SELECT statement receives the first message that
matches the identifier CORRID1 from the queue that is specified by the service
MYSERVICE, using the policy MYPOLICY:
SELECT DB2MQ1N.MQRECEIVE ('MYSERVICE', 'MYPOLICY', 'CORRID1')
FROM SYSIBM.SYSDUMMY1;
Publish-and-subscribe method:
Simple data publication: In many cases, only a simple message needs to be sent
using the MQSEND function. When a message needs to be sent to multiple
recipients concurrently, the distribution list facility of the MQSeries® AMI can be
used.
You define distribution lists by using the AMI administration tool. A distribution list
comprises a list of individual services. A message that is sent to a distribution list
is forwarded to every service defined within the list. Publishing messages to a
distribution list is especially useful when there are multiple services that are
interested in every message.
Example: The following example shows how to send a message to the distribution
list ″InterestedParties″:
SELECT DB2MQ2N.MQSEND ('InterestedParties','Information of general interest')
FROM SYSIBM.SYSDUMMY1;
When you require more control over the messages that a particular service should
receive, you can use the MQPUBLISH function, in conjunction with the WebSphere
MQSeries Integrator facility. This facility provides a publish-and-subscribe system,
which provides a scalable, secure environment in which many subscribers can
register to receive messages from multiple publishers. Subscribers are defined by
queues, which are represented by service names.
MQPUBLISH enables you to specify a list of topics that are associated with a
message. Topics enable subscribers to more clearly specify the messages they
receive. The following sequence illustrates how the publish-and-subscribe
capabilities are used:
1. An MQSeries administrator configures the publish-and-subscribe capability of
the WebSphere MQSeries Integrator facility.
2. Interested applications subscribe to subscriber services that are defined in the
WebSphere MQSeries Integrator configuration. Each subscriber selects relevant
topics and can also use the content-based subscription techniques that are
provided by Version 2 of the WebSphere MQSeries Integrator facility.
3. A DB2 application publishes a message to a specified publisher service. The
message indicates the topic it concerns.
4. The MQSeries functions provided by DB2 for z/OS handle the mechanics of
publishing the message. The message is sent to the WebSphere MQSeries
Integrator facility by using the specified service policy.
5. The WebSphere MQSeries Integrator facility accepts the message from the
specified service, performs any processing defined by the WebSphere MQSeries
Integrator configuration, and determines which subscriptions the message
satisfies. It then forwards the message to the subscriber queues that match the
subscriber service and topic of the message.
6. Applications that subscribe to the specific service, and register an interest in the
specific topic, will receive the message in their receiving service.
Example: To publish the last name, first name, department, and age of employees
who are in department 5LGA, using all the defaults and a topic of EMP, you can
use the following statement:
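A sketch of such a statement follows. Because the sample employee table stores a
birth date rather than an age, an age expression is used here as an illustration:
SELECT DB2MQ2N.MQPUBLISH (LASTNAME || ' ' || FIRSTNAME || ' ' || DEPARTMENT
  || ' ' || CHAR(YEAR(CURRENT DATE - BIRTHDATE)), 'EMP')
FROM DSN8910.EMP
WHERE DEPARTMENT = '5LGA';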
Example: The following statement publishes messages that contain only the last
name of employees who are in department 5LGA to the HR_INFO_PUB publisher
service using the SPECIAL_POLICY service policy:
SELECT DB2MQ2N.MQPUBLISH ('HR_INFO_PUB', 'SPECIAL_POLICY', LASTNAME,
'ALL_EMP:5LGA', 'MANAGER')
FROM DSN8910.EMP
WHERE DEPARTMENT = '5LGA';
The messages indicate that the sender has the MANAGER correlation ID. The topic
string demonstrates that you can specify multiple topics, concatenated by using a
’:’ (a colon). In this example, the use of two topics enables subscribers of both the
ALL_EMP and the 5LGA topics to receive these messages.
To receive published messages, you must first register your application’s interest in
messages of a given topic and indicate the name of the subscriber service to which
messages are sent. An AMI subscriber service defines a broker service and a
receiver service. The broker service is how the subscriber communicates with the
publish-and-subscribe broker. The receiver service is the location where messages
that match the subscription request are sent.
Example: The following statement subscribes to the topic ALL_EMP and indicates
that messages be sent to the subscriber service, ″aSubscriber″:
SELECT DB2MQ2N.MQSUBSCRIBE ('aSubscriber','ALL_EMP')
FROM SYSIBM.SYSDUMMY1;
Example: The following statement non-destructively reads the first message, where
the subscriber service, ″aSubscriber″, defines the receiver service as
″aSubscriberReceiver″:
SELECT DB2MQ2N.MQREAD ('aSubscriberReceiver')
FROM SYSIBM.SYSDUMMY1;
To display both the messages and the topics with which they are published, you
can use one of the table functions.
Example: The following statement receives the first five messages from
″aSubscriberReceiver″ and displays both the message and the topic for each of the
five messages:
SELECT t.msg, t.topic
FROM table (DB2MQ2N.MQRECEIVEALL ('aSubscriberReceiver',5)) t;
Example: To read all of the messages with the topic ALL_EMP, issue the following
statement:
SELECT t.msg
FROM table (DB2MQ2N.MQREADALL ('aSubscriberReceiver')) t
WHERE t.topic = 'ALL_EMP';
Example: The following statement unsubscribes from the ALL_EMP topic of the
″aSubscriber″ subscriber service:
SELECT DB2MQ2N.MQUNSUBSCRIBE ('aSubscriber', 'ALL_EMP')
FROM SYSIBM.SYSDUMMY1;
Example: The following example shows how you can use the MQSeries functions
of DB2 for z/OS with a trigger to publish a message each time a new employee is
hired:
CREATE TRIGGER new_employee AFTER INSERT ON DSN8910.EMP
REFERENCING NEW AS n
FOR EACH ROW MODE DB2SQL
SELECT DB2MQ2N.MQPUBLISH ('HR_INFO_PUB', current date || ' ' ||
n.LASTNAME || ' ' || n.DEPARTMENT, 'NEW_EMP');
If the program needs information from the reply, the program suspends processing
and waits for a reply message. If the messaging programs use an intermediate
queue that holds messages, the requestor program and the receiver program do
not need to be running at the same time. The requestor program places a request
message on a queue and then exits. The receiver program retrieves the request
from the queue and processes the request.
MQListener combines messaging with database operations. You can configure the
MQListener daemon to listen to the WebSphere MQ message queues that you
specify in a configuration database. MQListener reads the messages that arrive
from the queue and calls DB2 stored procedures using the messages as input
parameters. If the message requires a reply, MQListener creates a reply from the
output of the stored procedure.
MQListener tasks are grouped together into named configurations. By default, the
configuration name is empty. If you do not specify the name of a configuration for
a task, MQListener uses the configuration with an empty name.
Transaction support: Both one-phase and two-phase commit environments are
supported. In a one-phase commit environment, database interactions and MQ
interactions are independent. In a two-phase commit environment, database
interactions and MQ interactions are combined in a single unit of work.
’db2mqln1’ is the name of the executable for one-phase commit, and ’db2mqln2’ is
the name of the executable for two-phase commit.
| Stored Procedure Interface: The stored procedure interface for MQListener takes
| the incoming message as input and returns the reply, which might be NULL, as
| output:
| schema.proc(in inMsg inMsgType, out outMsg outMsgType)
| The data type for inMsgType and the data type for outMsgType can be VARCHAR,
| VARBINARY, CLOB, or BLOB of any length and are determined at startup. The
| input data type and output data type can be different data types. If an incoming
| message is a request and has a specified reply-to queue, the message in outMsg
| will be sent to the specified queue. The incoming message can be one of the
| following message types:
| v Datagram
| v Datagram with report requested
| v Request message with reply
| v Request message with reply and report requested
Before you can use MQListener, you must configure your database environment so
that your applications can use messaging with database operations. You must also
configure WebSphere MQ for MQListener.
Use the following procedure to configure the environment for MQListener and to
develop a simple application that receives a message, inserts the message in a
table, and creates a simple response message:
1. Configure MQListener to run in the DB2 environment.
Configure your database environment so that your applications can use messaging
with database operations.
Ensure that the person who runs the installation job has the required authority to
create the configuration table and to bind the DBRMs.
Follow the instructions in the README file that is created in the MQListener
installation path in z/OS UNIX System Services to complete the configuration
process.
Two environment variables control logging and tracing for MQListener. These
variables are defined in the file .profile.
MQLSNTRC
When this environment variable is set to 1, MQListener writes function
entry, data, and exit points to a unique HFS or zFS file. A new trace file is
generated whenever any of the MQListener commands are run. IBM
software support uses this trace file for debugging if you report a problem.
Do not define this variable unless you are asked to do so.
MQLSNLOG
The log file contains diagnostic information about major events. Set this
environment variable to the name of the file where all log information is to
be written. All instances of the MQListener daemon that run one or more
tasks share the same file. To monitor the MQListener daemon, always set
this variable. While the MQListener daemon is running, open the log and
trace files in read mode only (use the cat, more, or tail commands in z/OS
UNIX System Services to open the files), because the daemon process has
them open for writing.
Refer to the README file for more details about these variables.
If you use MQListener, you must create the MQListener configuration table
SYSMQL.LISTENERS by running installation job DSNTIJML.
The following table describes each of the columns of the configuration table
SYSMQL.LISTENERS.
As part of configuring MQListener in DB2 for z/OS, you must configure at least
one MQListener task.
Restriction:
v Use the same queue manager for the request queue and the reply queue.
v MQListener does not support logical messages that are composed of multiple
physical messages. MQListener processes physical messages independently.
You can create a sample stored procedure, APROC, that can be used by
MQListener to store a message in a table. The stored procedure returns the string
OK if the message is successfully inserted into the table.
The following steps create DB2 objects that you can use with MQListener
applications:
| 1. Create a table using SPUFI, DSNTEP2, or the command line processor in the
| subsystem where you want to run MQListener:
| CREATE TABLE PROCTABLE (MSG VARCHAR(25) CHECK (MSG NOT LIKE 'FAIL%'));
| The table contains a check constraint so that messages that start with the
| characters FAIL cannot be inserted into the table. The check constraint is used
| to demonstrate the behavior of MQListener when the stored procedure fails.
2. Create the following SQL procedure and define it to the same DB2 subsystem:
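A sketch of such a procedure follows. The TEST schema, the parameter names, and
the parameter lengths are illustrative; the collection ID TESTLSRN and the WLM
environment TESTWLMX come from the description that follows, and the
procedure inserts the incoming message into PROCTABLE and returns the string
OK:
CREATE PROCEDURE TEST.APROC (IN PIN VARCHAR(25), OUT POUT VARCHAR(2))
  LANGUAGE SQL
  FENCED
  COLLID TESTLSRN
  WLM ENVIRONMENT TESTWLMX
  P1: BEGIN
    INSERT INTO PROCTABLE VALUES (PIN);
    SET POUT = 'OK';
  END P1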
TESTLSRN is the name of the collection that is used for this stored procedure
and TESTWLMX is the name of the WLM environment where this stored
procedure will run.
3. Optional: Bind the collection TESTLSRN to the plan DB2MQLSN, which is used
by MQListener:
BIND PLAN(DB2MQLSN) +
PKLIST(LSNR.*,TESTLSRN.*) +
ACTION(REP) DISCONNECT(EXPLICIT);
If your application calls a stored procedure or user-defined function that is
defined with the COLLID option, the application does not need to include the
collection ID in its plan. Thus, this step is optional.
| MQListener reads from WebSphere MQ message queues and calls DB2 stored
| procedures with those messages. If any errors occur during this process and the
| conditions are such that the message is to be sent to the deadletter queue,
| MQListener returns a reason code to the deadletter queue.
| The following table describes the reason codes that the MQListener daemon
| returns.
| Table 143. Reason codes that MQListener returns
| Reason code Explanation
| 900 The call to a stored procedure was successful but an error occurred during the DB2
| commit process and either of the following conditions was true:
| v No exception report was requested.1
| v An exception report was requested, but could not be delivered.
| This reason code applies only to one-phase commit environments.
| 901 The call to the specified stored procedure failed and the disposition of the MQ message is
| that an exception report be generated and the original message be sent to the
| queue.
| Note:
| 1. To specify that the receiver application generate exception reports if errors
| occur, set the report field in the MQMD structure that was used when sending
| the message to one of the following values:
| v MQRO_EXCEPTION
| v MQRO_EXCEPTION_WITH_DATA
| v MQRO_EXCEPTION_WITH_FULL_DATA
| For more information about the report field, see the WebSphere MQ
| Information Center at http://publib.boulder.ibm.com/infocenter/wmqv6/
| v6r0/index.jsp.
| MQListener examples:
Before you run the MQListener daemon, add the following configuration, named
ACFG, to the configuration table by issuing the following command:
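As a sketch only, assuming that the add subcommand takes -queue and -proc
options in the same style as the run options shown below, and using the input
queue IN_Q and the sample stored procedure (TEST.APROC in the sketch above),
the command might resemble:
db2mqln2 add
   -ssID DB7A
   -config ACFG
   -queue IN_Q
   -proc TEST.APROC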
Run the MQListener daemon for two-phase commit for the added configuration
’ACFG’. To run MQListener with all of the tasks specified in a configuration, issue
the following command:
db2mqln2 run
-ssID DB7A
-config ACFG
-adminQueue ADMIN_Q
-adminQMgr MQND
The following examples show how to use MQListener to send a simple message
and then inspect the results of the message in the WebSphere MQ queue manager
and the database. The examples include queries to determine if the input queue
contains a message or to determine if a record is placed in the table by the stored
procedure.
MQListener example 2: Sending requests to the input queue and inspecting the
reply:
1. Start with a clean database table by issuing the following SQL statement:
delete from PROCTABLE
2. Send a request to the input queue, ’IN_Q’, with the message as ’another sample
message’. Refer to Websphere MQ sample CSQ4BCK1 to send a message to the
queue. Specify the MsgType option for ’Message Descriptor’ as
’MQMT_REQUEST’ and the queue name for the ReplytoQ option.
3. Query the table by using the following statement to verify that the sample
message is inserted:
select * from PROCTABLE
4. Display the number of messages that remain on the input queue to verify that
the message has been removed. Issue the following command from a z/OS
console:
/-MQND display queue('IN_Q') curdepth
Note: In this example, if a request message with added options for ’exception
report’ is sent (the Report option is specified for ’Message Descriptor’), an
exception report is sent to the reply queue and the original message is sent to the
deadletter queue.
DB2 as a web services consumer: DB2 can act as a client for web services, which
enables you to be a consumer of web services in your DB2 applications. Web
services use Simple Object Access Protocol (SOAP). SOAP is an XML protocol
that has the following characteristics:
v An envelope that defines a framework for describing the contents of a message
and how to process the message
v A set of encoding rules for expressing instances of application-defined data types
v A convention for representing SOAP requests and responses
DB2 as a web services provider: You can enable your DB2 data and applications
as web services through the Web Services Object Runtime Framework (WORF).
You can define a web service in DB2 by using a Document Access Definition
Extension (DADX). In the DADX file, you can define web services based on SQL
statements and stored procedures. Based on your definitions in the DADX file,
WORF performs the following actions:
v Handles the connection to DB2 and the execution of the SQL and the stored
procedure call
v Converts the result to a web service
v Handles the generation of any Web Services Definition Language (WSDL) and
UDDI (Universal Description, Discovery, and Integration) information that the
client application needs
For more information about using DB2 as a web services provider, see DB2
Information Integrator Application Developer’s Guide.
When a consumer receives the result of a web services request, the SOAP envelope
is stripped and the XML document is returned. An application program can
process the result data and perform a variety of operations, including inserting or
updating a table with the result data.
Example: The following example shows an HTTP post header that posts a SOAP
request envelope to a host. The SOAP envelope body shows a temperature request
for Barcelona.
POST /soap/servlet/rpcrouter HTTP/1.0
Host: services.xmethods.net
Connection: Keep-Alive
User-Agent: DB2SOAP/1.0
Content-Type: text/xml; charset="UTF-8"
SOAPAction: ""
Content-Length: 410
Example: The following example is the result of the preceding example. This
example shows the HTTP response header with the SOAP response envelope. The
result shows that the temperature is 85 degrees Fahrenheit in Barcelona.
HTTP/1.1 200 OK
Date: Wed, 31 Jul 2002 22:06:41 GMT
Server: Enhydra-MultiServer/3.5.2
Status: 200
Content-Type: text/xml; charset=utf-8
Servlet-Engine: Lutris Enhydra Application Server/3.5.2
(JSP 1.1; Servlet 2.2; Java™ 1.3.1_04;
Linux 2.4.7-10smp i386; java.vendor=Sun Microsystems Inc.)
Content-Length: 467
Set-Cookie:JSESSIONID=JLEcR34rBc2GTIkn-0F51ZDk;Path=/soap
X-Cache: MISS from www.xmethods.net
Keep-Alive: timeout=15, max=10
Connection: Keep-Alive
Example: The following example shows how to insert the result from a web
service into a table:
INSERT INTO MYTABLE(XMLCOL) VALUES (DB2XML.SOAPHTTPC(
'http://www.myserver.com/services/db2sample/list.dadx/SOAP',
'http://tempuri.org/db2sample/list.dadx',
'<listDepartments xmlns="http://tempuri.org/db2sample/list.dadx">
<deptno>A00</deptno>
</listDepartments>'))
| Example
| The following example shows how to insert the complete result from a web service
| into a table using SOAPHTTPNC.
| INSERT INTO EMPLOYEE(XMLCOL)
| VALUES (DB2XML.SOAPHTTPNC(
| 'http://www.myserver.com/services/db2sample/list.dadx/SOAP',
| 'http://tempuri.org/db2sample/list.dadx',
| '<?xml version="1.0" encoding="UTF-8" ?>' ||
| '<SOAP-ENV:Envelope ' ||
| 'xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" ' ||
| 'xmlns:xsd="http://www.w3.org/2001/XMLSchema" ' ||
| 'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">' ||
| '<SOAP-ENV:Body>' ||
| '<listDepartments xmlns="http://tempuri.org/db2sample/list.dadx">
| <deptNo>A00</deptNo>
| </listDepartments>' ||
| '</SOAP-ENV:Body>' ||
| '</SOAP-ENV:Envelope>'))
Table 145. SQLSTATE values for SOAPHTTPNV and SOAPHTTPNC user-defined functions
SQLSTATE Description
| 38350 An unexpected NULL value was specified for the endpoint, action, or
| SOAP input.
| 38351 A dynamic memory allocation error.
| 38352 An unknown or unsupported transport protocol.
| 38353 An invalid URL was specified.
| 38354 An error occurred while resolving the hostname.
| 38355 A memory exception for socket.
| 38356 An error occurred during socket connect.
| 38357 An error occurred while setting socket options.
| 38358 An error occurred during input/output control (ioctl) to verify HTTPS
| enablement.
| 38359 An error occurred while reading from the socket.
Related tasks
Enabling Web service user-defined functions (DB2 Installation and Migration)
You can perform these steps by using one of the following methods:
Productivity hint: To avoid rework, first test your SQL statements using SPUFI.
Then compile your program without SQL statements, and resolve all compiler
errors. Finally, proceed with program preparation by using either the DB2
precompiler or a host compiler that supports the DB2 coprocessor.
For information about running REXX programs, which you do not prepare for
execution, see “Running a DB2 REXX application” on page 998.
The following topics provide details on preparing and running a DB2 application:
“Processing SQL statements” on page 890
“Compiling and link-editing an application” on page 915
“Binding an application” on page 916
Chapter 18, “Running an application on DB2 for z/OS,” on page 995.
Binding a package is not necessary in all cases. These instructions assume that you
bind some of your DBRMs into packages and include a package list in your plan.
For more information about when to bind a package, see “DB2 program
preparation overview” on page 947.
A number of methods are available for preparing an application to run. You can:
v Use DB2 interactive (DB2I) panels, which lead you step by step through the
preparation process.
v Submit a background job using JCL (which the program preparation panels can
create for you).
v Start the DSNH CLIST in TSO foreground or background.
v Use TSO prompters and the DSN command processor.
v Use JCL procedures added to your SYS1.PROCLIB (or equivalent) at DB2 install
time.
For information about using the DB2I panels, see Chapter 17, “Preparing an
application to run on DB2 for z/OS,” on page 887.
For information about using the DSNH CLIST, see the topic “DSNH (TSO CLIST)”
in DB2 Command Reference.
For information about the DSN command processor, see the topic “TSO attachment
facility” in DB2 Administration Guide.
For information about using the command line processor, see the topic “Command
line processor” in DB2 Command Reference.
If you develop programs using TSO and ISPF, you can prepare them to run by
using the DB2 Program Preparation panels. These panels guide you step by step
through the process of preparing your application to run. Other ways of preparing
a program to run are available, but using DB2 Interactive (DB2I) is the easiest
because it leads you automatically from task to task.
Important: If your C++ program satisfies both of the following conditions, you
must use a JCL procedure to prepare it:
v The program consists of more than one data set or member.
v More than one data set or member contains SQL statements.
Use the following guidelines when you prepare a program to access DB2 and DL/I
in a batch program:
v “Processing SQL statements by using the DB2 precompiler” on page 892
v “Binding a batch program” on page 930
v “Compiling and link-editing an application” on page 915
v “Loading and running a batch program” on page 1002
As DB2I leads you through a series of panels, enter the default values that you
want on the following panels when they are displayed.
Table 146. DB2I panels to use to set default values
If you want to set the following default values... Use this panel
Related reference
“DB2I Defaults Panel 1” on page 961
“DB2I Defaults Panel 2” on page 963
“Defaults for Bind Package panel” on page 973
“Defaults for Bind Plan panel” on page 976
For assembler or Fortran applications, use the DB2 precompiler to prepare the SQL
statements.
CICS: If the application contains CICS commands, you must translate the program
before you compile it. (See “Translating command-level statements in a CICS
program” on page 901.)
| DB2 version in DSNHDECP module: When you process SQL statements in your
| program, if the DB2 version in DSNHDECP is the default system-provided version,
| DB2 issues a warning and processing continues. In this case, ensure that the
| information in DSNHDECP that DB2 uses accurately reflects your environment.
Because most compilers do not recognize SQL statements, you can prevent
compiler errors by using either the DB2 precompiler or the DB2 coprocessor.
The precompiler scans the program and returns modified source code, which you
can then compile and link edit. The precompiler also produces a DBRM (database
request module). You can bind this DBRM to a package or plan using the BIND
subcommand. (For information about packages and plans, see “DB2 program
preparation overview” on page 947.) When you complete these steps, you can run
your DB2 application.
Alternatively, you can use the DB2 coprocessor for the host language. The DB2
coprocessor performs DB2 precompiler functions at compile time. When you use
the DB2 coprocessor, the compiler (rather than the precompiler) scans the program
and returns the modified source code. The DB2 coprocessor also produces a
DBRM.
Before you run the DB2 precompiler, use DCLGEN to obtain accurate SQL
DECLARE TABLE statements. The precompiler checks table and column references
against SQL DECLARE TABLE statements in the program, not the actual tables
and columns. For information about DCLGEN, see “DCLGEN (declarations
generator)” on page 129.
DB2 does not need to be active when you precompile your program.
You do not need to precompile the program on the same DB2 subsystem on which
you bind the DBRM and run the program. You can bind a DBRM and run it on a
DB2 subsystem at the previous release level, if the original program does not use
any properties of DB2 that are unique to the current release. You can also run
applications on the current release that were previously bound on subsystems at
the previous release level.
You must precompile the contents of each data set or member separately, but the
prelinker must receive all of the compiler output together.
You can use the SQL INCLUDE statement to get secondary input from the include
library, SYSLIB. The SQL INCLUDE statement reads input from the specified
member of SYSLIB until it reaches the end of the member.
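For example, assuming a hypothetical SYSLIB member named MYDECLS that
contains DECLARE TABLE statements and host-variable declarations, the program
can pull it in with:
EXEC SQL INCLUDE MYDECLS;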
Another preprocessor, such as the PL/I macro preprocessor, can generate source
statements for the precompiler. Any preprocessor that runs before the precompiler
must be able to pass on SQL statements. Similarly, other preprocessors can process
the source code, after you precompile and before you compile or assemble.
You can call the precompiler from an assembler program by using one of the
macro instructions ATTACH, CALL, LINK, or XCTL. For more information about
these macros, see z/OS MVS Programming: Assembler Services Reference, Volumes 1
and 2.
To call the precompiler, specify DSNHPC as the entry point name. You can pass
three address options to the precompiler; the following topics describe their
formats. The options are addresses of:
v A precompiler option list
v A list of alternative DD names for the data sets that the precompiler uses
v A page number to use for the first page of the compiler listing on SYSPRINT
When you call the precompiler, you can specify a number of options for SQL
statement processing. You must specify that option list in a particular format.
The option list must begin on a 2-byte boundary. The first 2 bytes contain a binary
count of the number of bytes in the list (excluding the count field). The remainder
of the list is EBCDIC and can contain precompiler option keywords, separated by
one or more blanks, a comma, or both.
When you call the precompiler, you can specify a list of alternative DD names for
the data sets that the precompiler uses. You must specify this list in a particular
format.
The DD name list must begin on a 2-byte boundary. The first 2 bytes contain a
binary count of the number of bytes in the list (excluding the count field). Each
entry in the list is an 8-byte field, left-justified, and padded with blanks if needed.
When you call the precompiler, you can specify a page number to use for the first
page of the compiler listing on SYSPRINT. You must specify this page number in
a particular format.
A 6-byte field beginning on a 2-byte boundary contains the page number. The first
2 bytes must contain the binary value 4 (the length of the remainder of the field).
The last 4 bytes contain the page number in character or zoned-decimal format.
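For example, to start the listing at page 1, the field could contain X'0004F0F0F0F1': a halfword value of 4 followed by the characters 0001 in EBCDIC.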
The precompiler adds 1 to the last page number that is used in the precompiler
listing and puts this value into the page-number field before returning control to
the calling routine. Thus, if you call the precompiler again, page numbering is
continuous.
| Exception: For PL/I, the DB2 coprocessor is called from the PL/I SQL preprocessor
| instead of the compiler.
| The DB2 coprocessor has fewer restrictions on SQL programs than the DB2
| precompiler. When you process SQL statements with the DB2 coprocessor, you can
| do things in your program that the DB2 precompiler does not allow.
| To process SQL statements by using the DB2 coprocessor, perform one of the
| following actions:
| v Submit a JCL job to process the SQL statements. Include the following
| information:
| – Specify the SQL compiler option when you compile your program:
| The SQL compiler option indicates that you want the compiler to invoke the
| DB2 coprocessor. Specify a list of SQL processing options in parentheses after
| the SQL keyword. Table 150 on page 904 lists the options that you can specify.
| For COBOL and PL/I, enclose the list of SQL processing options in single or
| double quotation marks. For PL/I, separate options in the list by a comma,
| blank, or both.
| Examples:
| C/C++
| SQL(APOSTSQL STDSQL(NO))
| COBOL
| SQL("APOSTSQL STDSQL(NO)")
| PL/I
| PP(SQL("APOSTSQL,STDSQL(NO)"))
| – For PL/I programs that use BIGINT or LOB data types, specify the following
| compiler options when you compile your program: LIMITS(FIXEDBIN(63),
| FIXEDDEC(31))
| – If needed, increase the user’s region size so that it can accommodate more
| memory for the DB2 coprocessor.
| – Include DD statements for the following data sets in the JCL for your compile
| step (sample DD statements follow this list):
| - DB2 load library (prefix.SDSNLOAD)
| The DB2 coprocessor calls DB2 modules to process the SQL statements. You
| therefore need to include the name of the DB2 load library data set in the
| STEPLIB concatenation for the compile step.
| - DBRM library
| The DB2 coprocessor produces a DBRM. DBRMs and the DBRM library are
| described in “Output from the DB2 precompiler” on page 897. You need to
| include a DBRMLIB DD statement that specifies the DBRM library data set.
| - Library for SQL INCLUDE statements
| If your program contains SQL INCLUDE member-name statements that
| specify secondary input to the source program, you need to also specify the
| data set for member-name. Include the name of the data set that contains
| member-name in the SYSLIB concatenation for the compile step.
| To use the alternate DBRMLIB DD name, Enterprise COBOL V4.1 or later is
| required.
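The following fragment is a hedged sketch of how those DD statements might look when the compile step of the procedure is named COBOL; the data set names are placeholders only:
//COBOL.STEPLIB DD DISP=SHR,DSN=prefix.SDSNLOAD
//COBOL.DBRMLIB DD DISP=SHR,DSN=prefix.DBRMLIB.DATA(MYPROG)
//COBOL.SYSLIB DD DISP=SHR,DSN=prefix.SRCLIB.DATA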
| Related information
| IBM System z Enterprise Development Tools & Compilers information center
CICS
Program and process requirements: Use the DB2 precompiler before the CICS
translator to prevent the precompiler from mistaking CICS translator output for
graphic data.
If your source program is in COBOL, you must specify a string delimiter that is
the same for the DB2 precompiler, COBOL compiler, and CICS translator. The
defaults for the DB2 precompiler and COBOL compiler are not compatible with the
default for the CICS translator.
If the SQL statements in your source program refer to host variables that a pointer
stored in the CICS TWA addresses, you must make the host variables addressable
to the TWA before you execute those statements. For example, a COBOL
application can issue the following statement to establish addressability to the
TWA:
EXEC CICS ADDRESS
TWA (address-of-twa-area)
END-EXEC
You can run CICS applications only from CICS address spaces. This restriction
applies to the RUN subcommand of the DSN command processor; all of those
possibilities occur in TSO.
prefix.SDSNSAMP contains examples of the JCL that is used to prepare and run a
CICS program that includes SQL statements. For a list of CICS program names and
JCL member names, see “Sample applications in CICS” on page 1077. The set of
JCL includes:
v PL/I macro phase
v DB2 precompiling
v CICS Command Language Translation
v Compiling of the host language source statements
v Link-editing of the compiler output
v Binding of the DBRM
v Running of the prepared application.
Depending on whether you use the DB2 precompiler or the DB2 coprocessor,
ensure that you account for the following differences:
v Differences in handling source CCSIDs:
The DB2 precompiler and DB2 coprocessor convert the SQL statements of your
source program to UTF-8 for parsing.
| The precompiler or DB2 coprocessor uses the source CCSID(n) value, as
| described in Table 150 on page 904, to convert from that CCSID to CCSID 1208
| (UTF-8). The CCSID value must be an EBCDIC CCSID. If you want to prepare a
| source program that is written in a CCSID that cannot be directly converted to
| or from CCSID 1208, you must create an indirect conversion. For information
| about indirect conversions, see z/OS Support for Unicode.
v Differences in handling host variable CCSIDs:
– COBOL:
DB2 precompiler:
The DB2 precompiler sets CCSIDs for alphanumeric host variables only
when the program includes an explicit DECLARE :hv VARIABLE
statement.
DB2 coprocessor:
The COBOL compiler with National Character Support always sets
CCSIDs for alphanumeric variables, including host variables that are used
within SQL, to the source CCSID. Alternatively, you can specify that you
want the COBOL DB2 coprocessor to handle CCSIDs the same way as the
precompiler. For more information about this option, see Enterprise
COBOL for z/OS Programming Guide.
Example: Assume that DB2 has mapped a FOR BIT DATA column to a host
variable in the following way:
01 hv1 pic x(5).
01 hv2 pic x(5).
EXEC SQL
INSERT INTO T1 VALUES (:hv1, :hv2)
END-EXEC.
DB2 precompiler: In the modified source from the DB2 precompiler, hv1 and
hv2 are represented to DB2 through SQLDA in the following way, without
CCSIDs:
for hv1 and hv2: NO CCSID
DB2 coprocessor: In the modified source from the DB2 coprocessor with the
National Character Support for COBOL, hv1 and hv2 are represented to DB2
in the following way, with CCSIDs: (Assume that the source CCSID is 1140.)
for hv1 and hv2, the value for CCSID is set to '1140' ('474'x) in input SQLDA
of the INSERT statement.
'7F00000474000000007F'x
To ensure that no discrepancy exists between the column with FOR BIT
DATA and the host variable with CCSID 1140, add the following statement
for :hv1 or use the DB2 precompiler:
EXEC SQL DECLARE :hv1 VARIABLE FOR BIT DATA END-EXEC.
For hv1, which is declared with FOR BIT DATA, the value in SQL---AVAR-NAME-DATA is
set to 'FFFF'x for the CCSID instead of '474'x.
For more information about these options, see IBM Enterprise PL/I for z/OS
Programming Guide.
If you are using the DB2 precompiler, specify SQL processing options in one of the
following ways:
v With DSNH operands
v With the PARM.PC option of the EXEC JCL statement
v On DB2I panels
If you are using the DB2 coprocessor, specify SQL processing options in one of the
following ways:
v For C or C++, specify the options as the argument of the SQL compiler option.
v For COBOL, specify the options as the argument of the SQL compiler option.
v For PL/I, specify the options as the argument of the PP(SQL(’option,...’))
compiler option.
For examples of how to specify the DB2 coprocessor options, see “Processing SQL
statements by using the DB2 coprocessor” on page 898.
DB2 assigns default values for any SQL processing options for which you do not
explicitly specify a value. Those defaults are the values that are specified on the
APPLICATION PROGRAMMING DEFAULTS installation panels.
The following table shows the options that you can specify when you use the DB2
precompiler or DB2 coprocessor. The table also includes abbreviations for those
options and indicates which options are ignored for a particular host language or
by the DB2 coprocessor. This table uses a vertical bar (|) to separate mutually
exclusive options, and brackets ([ ]) to indicate that you can sometimes omit the
enclosed option.
Table 150. SQL processing options
Option keyword Meaning
APOST1 Indicates that the DB2 precompiler is to use the apostrophe (’) as the string delimiter
in host language statements that it generates.
This option is not available in all languages; see Table 152 on page 912.
APOST and QUOTE are mutually exclusive options. The default is in the field
STRING DELIMITER on Application Programming Defaults Panel 1 during
installation. If STRING DELIMITER is the apostrophe (’), APOST is the default.
APOSTSQL and QUOTESQL are mutually exclusive options. The default is in the
field SQL STRING DELIMITER on Application Programming Defaults Panel 1
during installation. If SQL STRING DELIMITER is the apostrophe (’), APOSTSQL is
the default.
ATTACH(TSO|CAF|RRSAF) Specifies the attachment facility that the application uses to access DB2. TSO, CAF,
and RRSAF applications that load the attachment facility can use this option to
specify the correct attachment facility, instead of coding a dummy DSNHLI entry
point.
The default setting is the EBCDIC system CCSID as specified on the panel DSNTIPF
during installation.
The DB2 coprocessor uses the following process to determine the CCSID of the
source statements:
1. If the CCSID of the source program is specified by a compiler option, such as
the COBOL CODEPAGE compiler option, the DB2 coprocessor uses that CCSID.
2. If the CCSID is not specified by a compiler option:
a. If the CCSID suboption of the SQL compiler option is specified and contains
a valid EBCDIC CCSID, that CCSID is used.
b. If the CCSID suboption of the SQL compiler option is not specified, and the
compiler supports an option for specifying the CCSID, such as the COBOL
CODEPAGE compiler option, the default for the CCSID compiler option is
used.
c. If the CCSID suboption of the SQL compiler option is not specified, and the
compiler does not support an option for specifying the CCSID, the default
CCSID from DSNHDECP is used.
d. If the CCSID suboption of the SQL option is specified and contains an invalid
CCSID, compilation terminates.
If you specify CCSID(1026) or CCSID(1155), the DB2 coprocessor does not support
the code point ’FC’X for the double quotation mark (″).
COMMA Recognizes the comma (,) as the decimal point indicator in decimal or floating point
literals in the following cases:
v For static SQL statements in COBOL programs
v For dynamic SQL statements, when the value of installation parameter DYNRULS
is NO and the package or plan that contains the SQL statements has
DYNAMICRULES bind, define, or invoke behavior.
COMMA and PERIOD are mutually exclusive options. The default (COMMA or
PERIOD) is chosen under DECIMAL POINT IS on Application Programming
Defaults Panel 1 during installation.
The default format is determined by the installation defaults of the system where
the program is bound, not by the installation defaults of the system where the
program is precompiled.
You cannot use the LOCAL option unless you have a date exit routine.
DEC(15|31) D(15.s|31.s) Specifies the maximum precision for decimal arithmetic operations. See “Precision
for operations with decimal numbers” on page 657.
If the form Dpp.s is specified, pp must be either 15 or 31, and s, which represents the
minimum scale to be used for division, must be a number between 1 and 9.
FLAG(I|W|E|S)1 Suppresses diagnostic messages below the specified severity level (Informational,
Warning, Error, and Severe error for severity codes 0, 4, 8, and 12 respectively).
GRAPHIC Indicates that the source code might use mixed data, and that X’0E’ and X’0F’ are
special control characters (shift-out and shift-in) for EBCDIC data.
For C, specify:
v C if you do not want DB2 to fold lowercase letters in SBCS SQL ordinary
identifiers to uppercase
v C(FOLD) if you want DB2 to fold lowercase letters in SBCS SQL ordinary
identifiers to uppercase
If you omit the HOST option, the DB2 precompiler issues a level-4 diagnostic
message and uses the default value for this option.
This option also sets the language-dependent defaults; see Table 152 on page 912.
LEVEL[(aaaa)] L Defines the level of a module, where aaaa is any alphanumeric value of up to seven
characters. This option is not recommended for general use, and the DSNH CLIST
and the DB2I panels do not support it. For more information, see “Setting the
program level” on page 931.
For assembler, C, C++, Fortran, and PL/I, you can omit the suboption (aaaa). The
resulting consistency token is blank. For COBOL, you need to specify the suboption.
LINECOUNT1(n) LC Defines the number of lines per page to be n for the DB2 precompiler listing. This
includes header lines that are inserted by the DB2 precompiler. The default setting is
LINECOUNT(60).
MARGINS1(m,n[,c]) MAR Specifies what part of each source record contains host language or SQL statements.
For assembler, this option also specifies where column continuations begin. The first
option (m) is the beginning column for statements. The second option (n) is the
ending column for statements. The third option (c) specifies where assembler
continuations begin. Otherwise, the DB2 precompiler places a continuation indicator
in the column immediately following the ending column. Margin values can range
from 1 to 80.
Default values depend on the HOST option that you specify; see Table 152 on page
912.
The DSNH CLIST and the DB2I panels do not support this option. In assembler, the
margin option must agree with the ICTL instruction, if presented in the source.
| NEWFUN(YES) causes the precompiler to accept syntax that is new for the current
| version of DB2.
| NEWFUN(NO) causes the precompiler to reject any syntax that is introduced in the
| current version.
| The NEWFUN option applies only to the precompilation process by either the
| precompiler or the DB2 coprocessor, regardless of the current migration mode. You
| are responsible for ensuring that you bind the resulting DBRM on a subsystem in
| the correct migration mode. For example, suppose that you have an application that
| contains new syntax for the current version of DB2. You can use NEWFUN(YES) to
| precompile that application on a subsystem in any migration mode. However, you
| cannot bind the resulting DBRM on a subsystem that is not in new-function mode.
NOFOR In static SQL, eliminates the need for the FOR UPDATE or FOR UPDATE OF clause
in DECLARE CURSOR statements. When you use NOFOR, your program can make
positioned updates to any columns that the program has DB2 authority to update.
When you do not use NOFOR, if you want to make positioned updates to any
columns that the program has DB2 authority to update, you need to specify FOR
UPDATE with no column list in your DECLARE CURSOR statements. The FOR
UPDATE clause with no column list applies to static or dynamic SQL statements.
Regardless of whether you use NOFOR, you can specify FOR UPDATE OF with a
column list to restrict updates to only the columns that are named in the clause, and
you can specify the acquisition of update locks.
If the resulting DBRM is very large, you might need extra storage when you specify
NOFOR or use the FOR UPDATE clause with no column list.
| NOGRAPHIC This option is no longer used for SQL statement processing.
Indicates the use of X’0E’ and X’0F’ in a string, but not as control characters.
Default values depend on the HOST option specified; see Table 152 on page 912.
COMMA and PERIOD are mutually exclusive options. The default (COMMA or
PERIOD) is specified in the field DECIMAL POINT IS on Application Programming
Defaults Panel 1 during installation.
QUOTE1 Q Indicates that the DB2 precompiler is to use the quotation mark (") as the string
delimiter in host language statements that it generates.
QUOTE is valid only for COBOL applications. QUOTE is not valid for either of the
following combinations of precompiler options:
v CCSID(1026) and HOST(IBMCOB)
v CCSID(1155) and HOST(IBMCOB)
SQL(DB2), the default, means to interpret SQL statements and check syntax for use
by DB2 for z/OS. SQL(DB2) is recommended when the database server is DB2 for
z/OS.
STDSQL(NO|YES)3 Indicates to which rules the output statements should conform.
STDSQL(YES)3 indicates that the precompiled SQL statements in the source program
conform to certain rules of the SQL standard. STDSQL(NO) indicates conformance
to DB2 rules.
The default format is determined by the installation defaults of the system where
the program is bound, not by the installation defaults of the system where the
program is precompiled.
You cannot use the LOCAL option unless you have a time exit routine.
TWOPASS TW Processes in two passes, so that declarations need not precede references. Default
values depend on the HOST option that is specified; see Table 152 on page 912.
| For the DB2 coprocessor, you can specify the TWOPASS option for only PL/I
| applications. For C/C++ and COBOL applications, the DB2 coprocessor uses the
| ONEPASS option.
When you specify VERSION, the SQL statement processor creates a version
identifier in the program and DBRM. This affects the size of the load module and
DBRM. DB2 uses the version identifier when you bind the DBRM to a plan or
package.
If you do not specify a version at precompile time, an empty string is the default
version identifier. If you specify AUTO, the SQL statement processor uses the
consistency token to generate the version identifier. If the consistency token is a
timestamp, the timestamp is converted into ISO character format and is used as the
version identifier. The timestamp that is used is based on the store clock value. For
information about using VERSION, see “Creating a package version” on page 918.
XREF1 Includes a sorted cross-reference listing of symbols that are used in SQL statements
in the listing output.
Notes:
1. The DB2 coprocessor ignores this option when the DB2 coprocessor is invoked by the compiler to prepare the
application.
2. This option is always in effect when the DB2 coprocessor is invoked by the compiler to prepare the application.
3. You can use STDSQL(86) as in prior releases of DB2. The SQL statement processor treats it the same as
STDSQL(YES).
4. Precompiler options do not affect ODBC behavior.
Some SQL statement processor options have default values based on the host
language. Some options do not apply to some languages. Table 152 shows the
language-dependent options and defaults.
Table 152. Language-dependent DB2 precompiler options and defaults
HOST value Defaults
ASM APOST1, APOSTSQL1, PERIOD1, TWOPASS, MARGINS(1,71,16)
C or CPP APOST1, APOSTSQL1, PERIOD1, ONEPASS, MARGINS(1,72)
IBMCOB QUOTE2, QUOTESQL2, PERIOD, ONEPASS1, MARGINS(8,72)1
FORTRAN APOST1, APOSTSQL1, PERIOD1, ONEPASS1, MARGINS(1,72)1
PLI APOST1, APOSTSQL1, PERIOD1, ONEPASS, MARGINS(2,72)
Notes:
1. Forced for this language; no alternative is allowed.
2. The default is chosen on Application Programming Defaults Panel 1 during installation. The IBM-supplied
installation defaults for string delimiters are QUOTE (host language delimiter) and QUOTESQL (SQL escape
character). The installer can replace the IBM-supplied defaults with other defaults. The precompiler options that
you specify override any defaults that are in effect.
The following SQL statement processing options are relevant for DRDA access:
CONNECT
Use CONNECT(2), explicitly or by default.
The following table gives generic descriptions of program preparation options, lists
the equivalent DB2 option for each one, and indicates if appropriate, whether it is
a bind package (B) or a precompiler (P) option. In addition, the table indicates
whether a DB2 server supports the option.
Table 153. Program preparation options for packages
Generic option description          Equivalent for requesting DB2          Bind (B) or precompile (P) option          DB2 server support
Package replacement: protect existing packages          ACTION(ADD)          B          Supported
Package replacement: replace existing packages          ACTION(REPLACE)          B          Supported
Package replacement: version name          ACTION(REPLACE REPLVER(version-id))          B          Supported
Statement string delimiter          APOSTSQL/QUOTESQL          P          Supported
DRDA access: SQL CONNECT (Type 1)          CONNECT(1)          P          Supported
DRDA access: SQL CONNECT (Type 2)          CONNECT(2)          P          Supported
Block protocol: Do not block data for an ambiguous cursor          CURRENTDATA(YES)          B          Supported
Block protocol: Block data when possible          CURRENTDATA(NO)          B          Supported
Block protocol: Never block data          (Not available)          Not supported
Name of remote database          CURRENTSERVER(location name)          B          Supported as a BIND PLAN option
Date format of statement          DATE          P          Supported
Protocol for remote access          DBPROTOCOL          B          Not supported
Maximum decimal precision: 15          DEC(15)          P          Supported
Maximum decimal precision: 31          DEC(31)          P          Supported
Defer preparation of dynamic SQL          DEFER(PREPARE)          B          Supported
Do not defer preparation of dynamic SQL          NODEFER(PREPARE)          B          Supported
You can use one of the following methods to compile and link-edit an application:
v DB2I panels
v The DSNH command procedure (a TSO CLIST)
v JCL procedures supplied with DB2
v JCL procedures supplied with a host language compiler
| If you use the DB2 coprocessor, you process SQL statements as you compile your
| program. For programs other than C and C++ programs, you must use JCL
| procedures when you use the DB2 coprocessor. For C and C++ programs, you can
| use either JCL procedures or UNIX System Services on z/OS to invoke the DB2
| coprocessor.
TSO and batch: Include the DB2 TSO attachment facility language interface
module (DSNELI) or DB2 call attachment facility language interface module
(DSNALI). For a program that uses 31-bit addressing, link-edit the program with
the AMODE=31 and RMODE=ANY options. For more details, see the appropriate
z/OS publication.
IMS and DB2 share a common alias name, DSNHLI, for the language interface
module. You must do the following when you concatenate your libraries:
v If you use IMS, be sure to concatenate the IMS library first so that the
application program compiles with the correct IMS version of DSNHLI.
v If you run your application program only under DB2, be sure to concatenate the
DB2 library first.
CICS: Include the DB2 CICS language interface module (DSNCLI). You can link
DSNCLI with your program in either 24-bit or 31-bit addressing mode
(AMODE=31). If your application program runs in 31-bit addressing mode, you
should link-edit the DSNCLI stub to your application with the attributes
AMODE=31 and RMODE=ANY so that your application can run above the 16-MB
line.
You also need the CICS EXEC interface module that is appropriate for the
programming language. CICS requires that this module be the first control section
(CSECT) in the final load module.
The size of the executable load module that is produced by the link-edit step varies
depending on the values that the SQL statement processor inserts into the source
code of the program.
Link-editing a batch program: DB2 has language interface routines for each
unique supported environment. DB2 requires the IMS language interface routine
for DL/I batch. You need to have DFSLI000 link-edited with the application
program.
Related tasks
Chapter 17, “Preparing an application to run on DB2 for z/OS,” on page 887
Related reference
DSNH (TSO CLIST) (DB2 Command Reference)
Related information
z/OS Internet Library
CICS Transaction Server Library at ibm.com
Binding an application
You must bind the DBRM that is produced by the SQL statement processor to a
plan or package before your DB2 application can run.
A plan can contain DBRMs, a package list that specifies packages or collections of
packages, or a combination of DBRMs and a package list. The plan must contain at
least one package or at least one directly bound DBRM. Each package that you
bind can contain only one DBRM.
Exception: You do not need to bind a DBRM if the only SQL statement in the
program is SET CURRENT PACKAGESET.
You must bind plans locally, regardless of whether they reference packages that
run remotely. However, you must bind the packages that run at remote locations at
those remote locations.
| For C and C++ programs whose corresponding DBRMs are in HFS files, you can
| use the command line processor to bind the DBRMs to packages. Optionally, you
| can also copy the DBRM into a partitioned data set member by using the oput and
| oget commands and then bind it by using conventional JCL.
From a DB2 requester, you can run a plan by specifying it in the RUN
subcommand, but you cannot run a package directly. You must include the
package in a plan and then run the plan.
Develop a naming convention and strategy for the most effective and efficient use
of your plans and packages.
v To bind a new plan or package, other than a trigger package, use the
subcommand BIND PLAN or BIND PACKAGE with the option
ACTION(REPLACE).
To bind a new trigger package, recreate the trigger associated with the trigger
package.
When you bind a package, you specify the collection to which the package
belongs. The collection is not a physical entity, and you do not create it; the
collection name is merely a convenient way of referring to a group of packages.
If a local stored procedure uses a cursor to access data through DRDA access, and
the cursor-related statement is bound in a separate package under the stored
procedure, you must bind this separate package both locally and remotely. In
addition, the invoker or owner of the stored procedure must be authorized to
execute both local and remote packages. At your local system, you must bind a
plan whose package list includes all those packages, local and remote.
To bind a package at a remote DB2 system, you must have all the privileges or
authority there that you would need to bind the package on your local system.
The bind process for a remote package is the same as for a local package, except
that the local communications database must be able to recognize the location
name that you use as resolving to a remote location.
Example:
To bind the DBRM PROGA at the location PARIS, in the collection GROUP1, use:
BIND PACKAGE(PARIS.GROUP1)
MEMBER(PROGA)
Then, include the remote package in the package list of a local plan, such as
PLANB, by using:
BIND PLAN (PLANB)
PKLIST(PARIS.GROUP1.PROGA)
The ENCODING bind option has the following effect on a remote application:
v If you bind a package locally, which is recommended, and you specify the
ENCODING bind option for the local package, the ENCODING bind option for
the local package applies to the remote application.
v If you do not bind a package locally, and you specify the ENCODING bind
option for the plan, the ENCODING bind option for the plan applies to the
remote application.
v If you do not specify an ENCODING bind option for the package or plan at the
local site, the value of APPLICATION ENCODING that was specified on
installation panel DSNTIPF at the local site applies to the remote application.
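For example (a sketch only; the collection and member names are illustrative), you might bind the local package with an explicit encoding:
BIND PACKAGE(GROUP1) MEMBER(PROGA) ENCODING(UNICODE)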
When you bind or rebind, DB2 checks authorizations, reads and updates the
catalog, and creates the package in the directory at the remote site. DB2 does not
read or update catalogs or check authorizations at the local site.
If you specify the option EXPLAIN(YES) and you do not specify the option
SQLERROR(CONTINUE), PLAN_TABLE must exist at the location that is specified
on the BIND or REBIND subcommand. This location could also be the default
location.
If you bind with the option COPY, the COPY privilege must exist locally. DB2
performs authorization checking, reads and updates the catalog, and creates the
package in the directory at the remote site. DB2 reads the catalog records that are
related to the copied package at the local site. DB2 converts values that are
returned from the remote site in ISO format if all of the following conditions are
true:
v If the local site is installed with time or date format LOCAL
v A package is created at a remote site with the COPY option
v The SQL statement does not specify a different format.
After you bind a package, you can rebind, free, or bind it with the REPLACE
option using either a local or a remote bind.
You can create a different package version for each version of the program. Each
package has the same package name and collection name, but a different version
number is associated with each package. The plan that includes that package
includes all versions of that package. Thus, you can run a program that is
associated with any one of the package versions without having to rebind the
application plan, rename the plan, or change any RUN subcommands that use it.
Example: Suppose that you bound a plan with the following statement:
BIND PLAN (PLAN1) PKLIST (COLLECT.*)
The following steps show how to create two versions of a package, one for each of
two programs.
| Only DBRMs for C and C++ programs can be generated to HFS files.
| Restrictions:
| You cannot specify the REBIND command with the command line processor.
| Alternatively, specify the BIND command with the ACTION(REPLACE) option.
| You cannot specify the FREE PACKAGE command with the command line
| processor. Alternatively, specify the DROP PACKAGE statement to drop the
| existing packages.
| Use the command line processor BIND command to bind DBRMs that are in HFS
| files to packages.
| The command line processor BIND command has the following syntax:
|
| BIND dbrm-file-name [-COLLECTION collection-name] options-clause
|
| If you do not specify a collection, DB2 uses NULLID. You can specify the
| options in options-clause after collection-name in any order.
|
| options-clause:
| [DYNAMICRULES(RUN|BIND|DEFINEBIND|DEFINERUN|INVOKEBIND|INVOKERUN)]
| [ENCODING(ASCII|EBCDIC|UNICODE|ccsid)]
| [EXPLAIN(NO|YES|ALL)]
| [REOPT(NONE|ALWAYS)]
| [RELEASE(COMMIT|DEALLOCATE)]
| [OPTHINT('hint-ID')]
|
| The defaults are DYNAMICRULES(RUN), EXPLAIN(NO), REOPT(NONE), and
| RELEASE(COMMIT). You can specify NOREOPT(VARS) as a synonym of REOPT(NONE)
| and REOPT(VARS) as a synonym of REOPT(ALWAYS).
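For example (a hedged sketch; the HFS path and collection name are placeholders), the following command binds a DBRM file into collection GROUP1 and requests EXPLAIN output:
BIND /u/myuser/dbrm/proga.dbrm -COLLECTION GROUP1 EXPLAIN(YES)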
To bind an application plan, use the BIND PLAN subcommand with at least one of
the following options:
MEMBER
Specify this option to bind DBRMs directly to the plan. After the keyword
MEMBER, specify the member names of the DBRMS.
PKLIST
Specify this option to include package lists in the plan. After the keyword
PKLIST, specify the names of the packages to include in the package list. To
include an entire collection of packages in the list, use an asterisk after the
collection name. For example, PKLIST(GROUP1.*).
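For example (the plan, program, and collection names are hypothetical), the following subcommand binds two DBRMs directly to the plan and also includes every package in collection GROUP1:
BIND PLAN(PLANX) MEMBER(PROGA,PROGB) PKLIST(GROUP1.*)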
Specifying the package list for the PKLIST option of BIND PLAN:
The order in which you specify packages in a package list can affect run-time
performance. Searching for the specific package involves searching the DB2
directory, which can be costly. When you use collection-id.* with the PKLIST
keyword, you should specify first the collections in which DB2 is most likely to
find a package.
For example, suppose that the package list is PKLIST(COLL1.*, COLL2.*, COLL3.*,
COLL4.*) and that you execute program PROG1. DB2 does the following package search:
1. Checks to see if program PROG1 is bound as part of the plan
2. Searches for COLL1.PROG1.timestamp
3. If it does not find COLL1.PROG1.timestamp, searches for
COLL2.PROG1.timestamp
4. If it does not find COLL2.PROG1.timestamp, searches for
COLL3.PROG1.timestamp
5. If it does not find COLL3.PROG1.timestamp, searches for
COLL4.PROG1.timestamp.
If the order of search is not important: In many cases, the order in which DB2
searches the packages is not important to you and does not affect performance. For
an application that runs only at your local DB2 system, you can name every
package differently and include them all in the same collection. The package list on
your BIND PLAN subcommand can read:
PKLIST (collection.*)
If your application uses DRDA access, you must bind some packages at remote
locations. Use the same collection name at each location, and identify your package
list as:
PKLIST (*.collection.*)
If you use an asterisk for part of a name in a package list, DB2 checks the
authorization for the package to which the name resolves at run time. To avoid the
checking at run time in the preceding example, you can grant EXECUTE authority
for the entire collection to the owner of the plan before you bind the plan.
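For example (the collection name and authorization ID are hypothetical), the following statement grants that authority for every package in collection GROUP1:
GRANT EXECUTE ON PACKAGE GROUP1.* TO PLANOWNR;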
Related tasks
“Maximizing the performance of an application that accesses distributed data” on
page 34
Related reference
BIND PLAN (DSN) (DB2 Command Reference)
(Usually, the consistency token is in an internal DB2 format. You can override that
token if you want: see “Setting the program level” on page 931.)
You also need other identifiers. The consistency token alone uniquely identifies a
DBRM that is bound directly to a plan, but it does not necessarily identify a
unique package. When you bind DBRMs directly to a particular plan, you bind
each one only once. But you can bind the same DBRM to many packages, at
different locations and in different collections, and then you can include all those
packages in the package list of the same plan. All those packages will have the
same consistency token. You can specify a particular location or a particular
collection at run time.
You can change the value of CURRENT SERVER by using the SQL CONNECT
statement in your program. If you do not use CONNECT, the value of CURRENT
SERVER is the location name of your local DB2 subsystem (or blank, if your DB2
subsystem has no location name).
| If you do not set these registers, they contain an empty string when your
| application begins to run, and they remain as an empty string. In this case, DB2
| searches the available collections, using methods described in “Binding an
| application plan” on page 922.
However, explicitly specifying the intended collection by using the special registers
can avoid a potentially costly search through a package list that has many
qualifying entries. In addition, DB2 uses the values in these special registers for
applications that do not run under a plan. How DB2 uses these special registers is
described in “Overriding the values that DB2 uses to resolve package lists.”
| When you call a stored procedure, the special register CURRENT PACKAGESET
| contains the value that you specified for the COLLID parameter when you defined
| the stored procedure. If the routine was defined without a value for the COLLID
| parameter, the value of the special register is inherited from the calling program.
| Also, the special register CURRENT PACKAGE PATH contains the value that you
| specified for the PACKAGE PATH parameter when you defined the stored
| procedure. When the stored procedure returns control to the calling program, DB2
| restores this register to the value that it contained before the call.
If you set CURRENT PACKAGE PATH, DB2 uses the value of CURRENT
PACKAGE PATH as the collection name list for package resolution. For example, if
CURRENT PACKAGE PATH contains the list COLL1, COLL2, COLL3, COLL4,
DB2 searches for the first package that exists in the following order:
COLL1.PROG1.timestamp
COLL2.PROG1.timestamp
COLL3.PROG1.timestamp
COLL4.PROG1.timestamp
If you set CURRENT PACKAGESET and not CURRENT PACKAGE PATH, DB2
uses the value of CURRENT PACKAGESET as the collection for package
resolution. For example, if CURRENT PACKAGESET contains COLL5, DB2 uses
COLL5.PROG1.timestamp for the package search.
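For example, searches like those described above result from statements such as the following (the collection names are illustrative only):
SET CURRENT PACKAGE PATH = 'COLL1,COLL2,COLL3,COLL4';
SET CURRENT PACKAGESET = 'COLL5';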
When CURRENT PACKAGE PATH is set, the server that receives the request
ignores the collection that is specified by the request and instead uses the value of
CURRENT PACKAGE PATH at the server to resolve the package. Specifying a
collection list with the CURRENT PACKAGE PATH special register can avoid the
need to issue multiple SET CURRENT PACKAGESET statements to switch
collections for the package search.
The following statements show how the setting of CURRENT PACKAGE PATH
determines which package is invoked:
SET CURRENT PACKAGE PATH = 'A,B'
SELECT ... FROM T1 ...
The collections in PACKAGE PATH determine which package is invoked.
CONNECT TO S2 ...
SET CURRENT PACKAGE PATH = 'X,Y'
SELECT ... FROM T1 ...
The collections in PACKAGE PATH that are set at server S2 determine which
package is invoked.
Example
| Suppose that you need to access data at a remote server CHICAGO, by using the
| following query:
SELECT * FROM CHICAGO.DSN8910.EMP
WHERE EMPNO = '0001000';
This statement can be executed with DRDA access or DB2 private protocol access.
The method of access depends on the DBPROTOCOL bind option that you specify
when you bind your DBRMs into packages. DRDA is used by default if you do not
specify a DBPROTOCOL bind option.
If you bind the DBRM that contains the statement by using one of the following
processes, you access the server using DRDA access:
Local-bind DRDA access process:
1. Bind the DBRM into a package at the local DB2 using the bind option
DBPROTOCOL(DRDA).
2. Bind the DBRM into a package at the remote location (CHICAGO).
3. Bind the packages into a plan using bind option
DBPROTOCOL(DRDA).
Remote-bind DRDA access process:
1. Bind the DBRM into a package at the remote location.
2. Bind the remote package and the DBRM into a plan at the local site,
using the bind option DBPROTOCOL(DRDA).
In some cases you cannot use private protocol to access distributed data. The
following examples require DRDA access.
Example
Suppose that you need to access data at a remote server CHICAGO, by using the
following CONNECT and SELECT statements:
EXEC SQL
CONNECT TO CHICAGO;
EXEC SQL SELECT * FROM DSN8910.EMP
WHERE EMPNO = '0001000';
This example requires DRDA access and the correct binding procedure to work
from a remote server. Before you can execute the query at location CHICAGO, you
must bind the application as a remote package at the CHICAGO server. Before you
can run the application, you must also bind a local package and a local plan with a
package list that includes the local and remote package.
Example
Suppose that you need to call a stored procedure at the remote server ATLANTA,
by using the following CONNECT and CALL statements:
EXEC SQL
CONNECT TO ATLANTA;
EXEC SQL
CALL procedure_name (parameter_list);
This example requires DRDA access because private protocol does not support
stored procedures. The parameter list is a list of host variables that is passed to the
stored procedure and into which it returns the results of its execution. To execute,
the stored procedure must already exist at the ATLANTA server.
The following options of BIND PLAN are particularly relevant to binding a plan
that uses DRDA access:
DISCONNECT
For most flexibility, use DISCONNECT(EXPLICIT), explicitly or by default.
That requires you to use RELEASE statements in your program to explicitly
end connections.
The other values of the option are also useful.
DISCONNECT(AUTOMATIC) ends all remote connections during a
commit operation, without the need for RELEASE statements in your
program.
DISCONNECT(CONDITIONAL) ends remote connections during a
commit operation except when an open cursor defined as WITH HOLD is
associated with the connection.
SQLRULES
Use SQLRULES(DB2), explicitly or by default.
SQLRULES(STD) applies the rules of the SQL standard to your CONNECT
statements, so that CONNECT TO x is an error if you are already connected to
x. Use STD only if you want that statement to return an error code.
If your program selects LOB data from a remote location, and you bind the
plan for the program with SQLRULES(DB2), the format in which you retrieve
the LOB data with a cursor is restricted. After you open the cursor to retrieve
the LOB data, you must retrieve all of the data using a LOB variable, or
retrieve all of the data using a LOB locator variable. If the value of SQLRULES
is STD, this restriction does not exist.
If you intend to switch between LOB variables and LOB locators to retrieve
data from a cursor, execute the SET CURRENT RULES = 'STD' statement before you
connect to the remote location.
CURRENTDATA
Use CURRENTDATA(NO) to force block fetch for ambiguous cursors. See
“Maximizing the performance of an application that accesses distributed data”
on page 34 for more information.
DBPROTOCOL
Use DBPROTOCOL(PRIVATE) if you want DB2 to use DB2 private protocol
access for accessing remote data that is specified with three-part names.
Use DBPROTOCOL(DRDA) if you want DB2 to use DRDA access to access
remote data that is specified with three-part names. You must bind a package
at all locations whose names are specified in three-part names.
The package value for the DBPROTOCOL option overrides the plan option.
For example, if you specify DBPROTOCOL(DRDA) for a remote package and
DBPROTOCOL(PRIVATE) for the plan, DB2 uses DRDA access when it
accesses data at that location using a three-part name. If you do not specify
any value for DBPROTOCOL, DB2 uses the value of DATABASE PROTOCOL
on installation panel DSNTIP5.
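For example (a sketch; the collection and member names are placeholders), the following subcommand binds a package that resolves three-part names with DRDA access, overriding a plan that was bound with DBPROTOCOL(PRIVATE):
BIND PACKAGE(COLLA) MEMBER(PROGA) DBPROTOCOL(DRDA)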
To find out which options are supported by a specific server DBMS, refer to the
documentation provided for that server.
The owner of the plan or package must have all the privileges that are required to
execute the SQL statements embedded in it.
You can specify the plan name to DB2 in one of the following ways:
v In the DDITV02 input data set.
v In subsystem member specification.
v By default; the plan name is then the application load module name that is
specified in DDITV02.
DB2 passes the plan name to the IMS attach package. If you do not specify a plan
name in DDITV02, and a resource translation table (RTT) does not exist or the
name is not in the RTT, DB2 uses the passed name as the plan name. If the name
exists in the RTT, the name translates to the plan that is specified for the RTT.
Recommendation: Give the DB2 plan the same name as that of the application
load module, which is the IMS attachment facility default. The plan name must be
the same as the program name.
To turn an existing plan into packages to run remotely, perform the following
actions for each remote location:
1. Choose a name for a collection to contain all the packages in the plan, such as
REMOTE1. (You can use more than one collection if you like, but one is
enough.)
2. Assuming that the server is a DB2 system, at the remote location execute:
a. GRANT CREATE IN COLLECTION REMOTE1 TO authorization-name;
b. GRANT BINDADD TO authorization-name;
where authorization-name is the owner of the package.
3. Bind each DBRM as a package at the remote location, using the instructions
under “Binding packages at a remote location” on page 917. Before run time,
the package owner must have all the data access privileges that are needed at
the remote location. If the owner does not yet have those privileges when you
are binding, use the VALIDATE(RUN) option. The option lets you create the
package, even if the authorization checks fail. DB2 checks the privileges again
at run time.
4. Bind a new application plan at your local DB2 system, using these options:
PKLIST (location-name.REMOTE1.*)
CURRENTSERVER (location-name)
where location-name is the value of LOCATION, in SYSIBM.LOCATIONS at
your local DB2 system, that denotes the remote location at which you intend to
run. You do not need to bind any DBRMs directly to that plan; the package list
is sufficient.
When you now run the existing application at your local DB2 system using the
new application plan, these things happen:
v You connect immediately to the remote location that is named in the
CURRENTSERVER option.
v DB2 searches for the package in the collection REMOTE1 at the remote location.
v Any UPDATE, DELETE, or INSERT statements in your application affect tables
at the remote location.
v Any results from SELECT statements are returned to your existing application
program, which processes them as though they came from your local DB2
system.
To override the construction of the consistency token by DB2, use the LEVEL (aaaa)
option. DB2 uses the value that you choose for aaaa to generate the consistency
token. Although this method is not recommended for general use and the DSNH
CLIST or the DB2 Program Preparation panels do not support it, this method
enables you to perform the following actions:
The dynamic SQL attributes that are determined by the value of the
DYNAMICRULES bind option and the run-time environment are collectively called
the dynamic SQL statement behavior. The four behaviors are:
v Run behavior
v Bind behavior
v Define behavior
v Invoke behavior
The following table shows the dynamic SQL attribute values for each type of
dynamic SQL behavior.
Table 156. Definitions of dynamic SQL statement behaviors
For each dynamic SQL attribute, the following list shows the setting for bind
behavior, run behavior, define behavior, and invoke behavior.
v Authorization ID: Bind behavior, plan or package owner. Run behavior,
CURRENT SQLID. Define behavior, user-defined function or stored procedure
owner. Invoke behavior, authorization ID of invoker1.
v Default qualifier for unqualified objects: Bind behavior, bind OWNER or
QUALIFIER value. Run behavior, CURRENT SCHEMA. Define behavior, user-defined
function or stored procedure owner. Invoke behavior, authorization ID of
invoker.
v CURRENT SQLID2: Bind behavior, not applicable. Run behavior, applies. Define
behavior, not applicable. Invoke behavior, not applicable.
v Source for application programming options: Bind behavior, determined by
DSNHDECP parameter DYNRULS3. Run behavior, install panel DSNTIP4. Define
behavior, determined by DSNHDECP parameter DYNRULS3. Invoke behavior,
determined by DSNHDECP parameter DYNRULS3.
v Can execute GRANT, REVOKE, CREATE, ALTER, DROP, RENAME?: Bind behavior, no.
Run behavior, yes. Define behavior, no. Invoke behavior, no.
Notes:
1. If the invoker is the primary authorization ID of the process or the CURRENT SQLID value, secondary
authorization IDs are also checked if they are needed for the required authorization. Otherwise, only one ID, the
ID of the invoker, is checked for the required authorization.
2. DB2 uses the value of CURRENT SQLID as the authorization ID for dynamic SQL statements only for plans and
packages that have run behavior. For the other dynamic SQL behaviors, DB2 uses the authorization ID that is
associated with each dynamic SQL behavior, as shown in this table.
The value to which CURRENT SQLID is initialized is independent of the dynamic SQL behavior. For stand-alone
programs, CURRENT SQLID is initialized to the primary authorization ID.
You can execute the SET CURRENT SQLID statement to change the value of CURRENT SQLID for packages with
any dynamic SQL behavior, but DB2 uses the CURRENT SQLID value only for plans and packages with run
behavior.
3. The value of DSNHDECP parameter DYNRULS, which you specify in field USE FOR DYNAMICRULES in
installation panel DSNTIP4, determines whether DB2 uses the SQL statement processing options or the
application programming defaults for dynamic SQL statements. See “Options for SQL statement processing” on
page 904 for more information.
The CACHESIZE option (optional) enables you to specify the size of the cache to
acquire for the plan. DB2 uses this cache for caching the authorization IDs of those
users that are running a plan. An authorization ID can take up to 128 bytes of
storage. DB2 uses the CACHESIZE value to determine the amount of storage to
acquire for the authorization cache. DB2 acquires storage from the EDM storage
pool.
The size of the cache that you specify depends on the number of individual
authorization IDs that are actively using the plan. Required overhead takes 32
bytes, and each authorization ID takes up 8 bytes of storage. The minimum cache
size is 256 bytes (enough for 28 entries and overhead information) and the
maximum is 4096 bytes (enough for 508 entries and overhead information). You
should specify size in multiples of 256 bytes; otherwise, the specified value rounds
up to the next highest value that is a multiple of 256.
If you run the plan infrequently, or if authority to run the plan is granted to
PUBLIC, you might want to turn off caching for the plan so that DB2 does not use
unnecessary storage. To do this, specify a value of 0 for the CACHESIZE option.
Any plan that you run repeatedly is a good candidate for tuning by using the
CACHESIZE option. Also, if you have a plan that a large number of users run
concurrently, you might want to use a larger CACHESIZE.
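The following subcommands are hedged illustrations (the plan, member, and collection names are hypothetical). The first turns off authorization caching for a rarely used plan; the second sets a larger cache for a heavily shared plan, enough for (1024 - 32) / 8 = 124 authorization IDs:
BIND PLAN(ADHOC1) MEMBER(ADHOC1) CACHESIZE(0)
BIND PLAN(PAYROLL) PKLIST(PAYCOLL.*) CACHESIZE(1024)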
Authorization cache
DB2 uses the authorization cache for caching the authorization IDs of those users
that are running a plan.
When DB2 determines that you have the EXECUTE privilege on a plan, package
collection, stored procedure, or user-defined function, DB2 can cache your
authorization ID. When you run the plan, package, stored procedure, or
user-defined function, DB2 can check your authorization more quickly.
For more information about setting the size of the package authorization cache, see
the topic “Protection panel: DSNTIPP” in DB2 Installation Guide.
You could create packages and plans using the following bind statements:
BIND PACKAGE(PKGB) MEMBER(PKGB)
BIND PLAN(MAIN) MEMBER(MAIN,PLANA) PKLIST(*.PKGB.*)
BIND PLAN(PLANC) MEMBER(PLANC)
The following scenario illustrates thread association for a task that runs program
MAIN. Suppose that you execute the following SQL statements in the indicated
order. For each SQL statement, the resulting event is described.
1. EXEC CICS START TRANSID(MAIN)
TRANSID(MAIN) executes program MAIN.
2. EXEC SQL SELECT...
Program MAIN issues an SQL SELECT statement. The default dynamic plan
exit routine selects plan MAIN.
3. EXEC CICS LINK PROGRAM(PROGA)
Program PROGA is invoked.
4. EXEC SQL SELECT...
DB2 does not call the default dynamic plan exit routine, because the program
does not issue a sync point. The plan is MAIN.
5. EXEC CICS LINK PROGRAM(PROGB)
Program PROGB is invoked.
6. EXEC SQL SELECT...
DB2 does not call the default dynamic plan exit routine, because the program
does not issue a sync point. The plan is MAIN and the program uses package
PKGB.
7. EXEC CICS SYNCPOINT
DB2 calls the dynamic plan exit routine when the next SQL statement
executes.
8. EXEC CICS LINK PROGRAM(PROGC)
Program PROGC is invoked.
9. EXEC SQL SELECT...
DB2 calls the default dynamic plan exit routine and selects PLANC.
10. EXEC SQL SET CURRENT SQLID = 'ABC'
CICS: With packages, you probably do not need dynamic plan selection and its
accompanying exit routine. A package that is listed within a plan is not accessed
until it is executed. However, you can use dynamic plan selection and packages
together, which can reduce the number of plans in an application and the effort to
maintain the dynamic plan exit routine.
Rebinding an application
You need to rebind an application if you want to change any bind options or when
you make changes that affect the plan or package, such as creating an index, but
have not changed the SQL statements. In some cases, DB2 automatically rebinds
the plan or package for you.
If you change the SQL statements, you need to replace the plan or package.
Rebinding a package
You need to rebind a package when you make changes that affect the package but
have not changed the SQL statements. For example, if you create a new index, you
need to rebind the package. If you change the SQL, you need to use the BIND
PACKAGE command with the ACTION(REPLACE) option.
To rebind a package, other than a trigger package, use the REBIND subcommand.
To rebind a trigger package, use the REBIND TRIGGER PACKAGE subcommand.
You can change any of the bind options for a package when you rebind it.
The following table clarifies which packages are bound, depending on how you
specify collection-id (coll-id), package-id (pkg-id), and version-id (ver-id) on the
REBIND PACKAGE subcommand. For syntax and descriptions of this
subcommand, see the topic “REBIND PACKAGE (DSN)” in DB2 Command
Reference.
REBIND PACKAGE does not apply to packages for which you do not have the
BIND privilege. An asterisk (*) used as an identifier for collections, packages, or
versions does not apply to packages at remote sites.
Table 158. Behavior of REBIND PACKAGE specification. "All" means all collections,
packages, or versions at the local DB2 server for which the authorization ID that issues the
command has the BIND privilege.
Input          Collections affected          Packages affected          Versions affected
Example: The following example shows the options for rebinding a package at the
remote location. The location name is SNTERSA. The collection is GROUP1, the
package ID is PROGA, and the version ID is V1. The connection types shown in
the REBIND subcommand replace connection types that are specified on the
original BIND subcommand. For information about the REBIND subcommand
options, see the topic “BIND and REBIND options” in DB2 Command Reference.
REBIND PACKAGE(SNTERSA.GROUP1.PROGA.(V1)) ENABLE(CICS,REMOTE)
You can use the asterisk on the REBIND subcommand for local packages, but not
for packages at remote sites. Any of the following commands rebinds all versions
of all packages in all collections, at the local DB2 system, for which you have the
BIND privilege.
REBIND PACKAGE (*)
REBIND PACKAGE (*.*)
REBIND PACKAGE (*.*.(*))
Either of the following commands rebinds all versions of all packages in the local
collection LEDGER for which you have the BIND privilege.
REBIND PACKAGE (LEDGER.*)
REBIND PACKAGE (LEDGER.*.(*))
Either of the following commands rebinds the empty string version of the package
DEBIT in all collections, at the local DB2 system, for which you have the BIND
privilege.
REBIND PACKAGE (*.DEBIT)
REBIND PACKAGE (*.DEBIT.())
Rebinding a plan
You need to rebind a plan when you make changes that affect the plan but have
not changed the SQL statements. For example, if you create a new index, you need
to rebind the plan. If you change the SQL, you need to use the BIND PLAN
command with the ACTION(REPLACE) option.
To rebind a plan, use the REBIND subcommand. You can change any of the bind
options for that plan.
When you rebind a plan, use the PKLIST keyword to replace any previously
specified package list. Omit the PKLIST keyword to use the previous package
list for rebinding. Use the NOPKLIST keyword to delete any package list that was
specified when the plan was previously bound.
Example: Rebinds the plan and drops the entire package list:
REBIND PLAN(PLANA) NOPKLIST
Related reference
BIND and REBIND options (DB2 Command Reference)
For a description of the technique and several examples of its use, see “Sample
program to create REBIND subcommands for lists of plans and packages” on page
939.
One situation in which this technique might be useful is when a resource becomes
unavailable during a rebind of many plans or packages. DB2 normally terminates
the rebind and does not rebind the remaining plans or packages. Later, however,
you might want to rebind only the objects that remain to be rebound. You can
build REBIND subcommands for the remaining plans or packages by using
DSNTIAUL to select the plans or packages from the DB2 catalog and to create the
REBIND subcommands. You can then submit the subcommands through the DSN
command processor, as usual.
You might first need to edit the output from DSNTIAUL so that DSN can accept it
as input. The CLIST DSNTEDIT can perform much of that task for you.
Building REBIND subcommands: The examples that follow illustrate the following
techniques:
v Using SELECT to select specific packages or plans to be rebound
v Using the CONCAT operator to concatenate the REBIND subcommand syntax
around the plan or package names
v Using the SUBSTR function to convert a varying-length string to a fixed-length
string
v Appending additional blanks to the REBIND PLAN and REBIND PACKAGE
subcommands, so that the DSN command processor can accept the record length
as valid input
For both REBIND PLAN and REBIND PACKAGE subcommands, add the DSN
command that the statement needs as the first line in the sequential data set, and
add END as the last line, using TSO edit commands. When you have edited the
sequential data set, you can run it to rebind the selected plans or packages.
If the SELECT statement returns no qualifying rows, then DSNTIAUL does not
generate REBIND subcommands.
The examples in this topic generate REBIND subcommands that work in DB2 for
z/OS Version 9.1. You might need to modify the examples for prior releases of DB2
that do not allow all of the same syntax.
Example 1:
REBIND all plans without terminating because of unavailable resources.
SELECT SUBSTR('REBIND PLAN('CONCAT NAME
CONCAT') ',1,45)
FROM SYSIBM.SYSPLAN;
Example 2:
REBIND all versions of all packages without terminating because of
unavailable resources.
SELECT SUBSTR('REBIND PACKAGE('CONCAT COLLID CONCAT'.'
CONCAT NAME CONCAT'.(*)) ',1,55)
FROM SYSIBM.SYSPACKAGE;
Example 3:
REBIND all plans bound before a given date and time.
SELECT SUBSTR('REBIND PLAN('CONCAT NAME
CONCAT') ',1,45)
FROM SYSIBM.SYSPLAN
WHERE BINDDATE <= 'yyyymmdd' AND
BINDTIME <= 'hhmmssth';
where yyyymmdd represents the date portion and hhmmssth represents the
time portion of the timestamp string.
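For instance, to list the plans that were bound on or before noon on December 31,
2008, you might substitute literal values such as the following (the values are
illustrative only):
SELECT SUBSTR('REBIND PLAN('CONCAT NAME
CONCAT') ',1,45)
FROM SYSIBM.SYSPLAN
WHERE BINDDATE <= '20081231' AND
BINDTIME <= '12000000';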
Example 4:
REBIND all versions of all packages bound before a given date and time.
SELECT SUBSTR('REBIND PACKAGE('CONCAT COLLID CONCAT'.'
CONCAT NAME CONCAT'.(*)) ',1,55)
FROM SYSIBM.SYSPACKAGE
WHERE BINDTIME <= 'timestamp';
where timestamp represents a timestamp string of the form
'YYYY-MM-DD-hh.mm.ss'.
Example 8:
REBIND all versions of all packages bound within a given date and time
range.
SELECT SUBSTR('REBIND PACKAGE('CONCAT COLLID CONCAT'.'
CONCAT NAME CONCAT'.(*)) ',1,55)
FROM SYSIBM.SYSPACKAGE
WHERE BINDTIME >= 'timestamp1' AND
BINDTIME <= 'timestamp2';
You specify the date and time period for which you want packages to be rebound
in the WHERE clause of the SELECT statement that contains the REBIND command.
In the preceding example, the WHERE clause looks like the following clause:
WHERE BINDTIME >= 'YYYY-MM-DD-hh.mm.ss' AND
BINDTIME <= 'YYYY-MM-DD-hh.mm.ss'
The following example shows sample JCL that uses DEGREE(ANY) to rebind all
plans that were bound without specifying the DEGREE keyword on BIND.
//REBINDS JOB MSGLEVEL=(1,1),CLASS=A,MSGCLASS=A,USER=SYSADM,
// REGION=1024K
//*********************************************************************/
//SETUP EXEC TSOBATCH
//SYSPRINT DD SYSOUT=*
//SYSPUNCH DD SYSOUT=*
//SYSREC00 DD DSN=SYSADM.SYSTSIN.DATA,
// UNIT=SYSDA,DISP=SHR
//*********************************************************************/
//*
//* REBIND ALL PLANS THAT WERE BOUND WITHOUT SPECIFYING THE DEGREE
//* KEYWORD ON BIND WITH DEGREE(ANY)
//*
//*********************************************************************/
//SYSTSIN DD *
DSN S(DSN)
RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB91) PARM('SQL')
END
//SYSIN DD *
SELECT SUBSTR('REBIND PLAN('CONCAT NAME
CONCAT') DEGREE(ANY) ',1,45)
FROM SYSIBM.SYSPLAN
WHERE DEGREE = ' ';
/*
//*********************************************************************/
//*
//* PUT IN THE DSN COMMAND STATEMENTS
//*
Automatic rebinding
Automatic rebind might occur if an authorized user invokes a plan or package
when the attributes of the data on which the plan or package depends change, or
if the environment in which the package executes changes. Whether the automatic
rebind occurs depends on the value of the field AUTO BIND on installation panel
DSNTIPO.
The options used for an automatic rebind are the options used during the most
recent bind process.
| In the following cases, DB2 automatically rebinds a plan or package that has not
| been marked as invalid if the ABIND subsystem parameter is set to YES (the
| default):
v A plan or package that is bound on a release of DB2 that is more recent than the
release in which it is being run. This situation can happen in a data sharing
environment or after a DB2 subsystem has fallen back to a previous release of
DB2.
| v A plan or package that was bound prior to DB2 Version 4 Release 1. Plans and
| packages that are bound prior to Version 4 Release 1 are automatically rebound
| when they are run on the current release of DB2.
v A plan or package that has a location dependency and runs at a location other
than the one at which it was bound. This situation can happen when members
of a data sharing group are defined with location names, and a package runs on
a different member from the one on which it was bound.
| In the following cases, DB2 automatically rebinds a plan or package that has not
| been marked as invalid if the ABIND subsystem parameter is set to COEXIST:
| v The subsystem on which the plan or package runs is in a data sharing group.
| v The plan or package was previously bound on the current DB2 release and is
| now running on the previous DB2 release.
If the ABIND subsystem parameter is set to NO and you attempt to execute a plan
or package that requires a rebind, but cannot be automatically rebound, DB2
returns an error.
Whether EXPLAIN runs during automatic rebind depends on the value of the field
EXPLAIN PROCESSING on installation panel DSNTIPO, and on whether you
specified EXPLAIN(YES). Automatic rebind fails for all EXPLAIN errors except
“PLAN_TABLE not found.”
The SQLCA is not available during automatic rebind. Therefore, if you encounter
lock contention during an automatic rebind, DSNT501I messages cannot
accompany any DSNT376I messages that you receive. To see the matching
DSNT501I messages, you must issue the subcommand REBIND PLAN or REBIND
PACKAGE.
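If you want to identify the plans and packages that are currently marked invalid,
and are therefore candidates for automatic rebind the next time that they run, you
can query the catalog. The following queries are a sketch only; they assume that a
VALID column value of 'N' marks an invalid object:
SELECT NAME
FROM SYSIBM.SYSPLAN
WHERE VALID = 'N';
SELECT COLLID, NAME, VERSION
FROM SYSIBM.SYSPACKAGE
WHERE VALID = 'N';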
Related reference
AUTO BIND field (ABIND subsystem parameter) (DB2 Installation and
Migration)
Not only does SQLRULES specify the rules under which a type 2 CONNECT
statement executes, but it also sets the initial value of the special register
CURRENT RULES when the database server is the local DB2 system. When the
server is not the local DB2 system, the initial value of CURRENT RULES is DB2.
After binding a plan, you can change the value in CURRENT RULES in an
application program by using the statement SET CURRENT RULES.
CURRENT RULES determines the SQL rules, DB2 or SQL standard, that apply to
SQL behavior at run time. For example, the value in CURRENT RULES affects the
behavior of defining check constraints by issuing the ALTER TABLE statement on a
populated table:
v If CURRENT RULES has a value of STD and no existing rows in the table
violate the check constraint, DB2 adds the constraint to the table definition.
Otherwise, an error occurs and DB2 does not add the check constraint to the
table definition.
If the table contains data and is already in a check pending status, the ALTER
TABLE statement fails.
v If CURRENT RULES has a value of DB2, DB2 adds the constraint to the table
definition, defers the enforcing of the check constraints, and places the table
space or partition in CHECK-pending status.
You can use the statement SET CURRENT RULES to control the action that the
statement ALTER TABLE takes. Assuming that the value of CURRENT RULES is
initially STD, the following SQL statements change the SQL rules to DB2, add a
check constraint, defer validation of that constraint, place the table in
CHECK-pending status, and restore the rules to STD.
EXEC SQL SET CURRENT RULES = 'DB2';
EXEC SQL ALTER TABLE DSN8910.EMP
  ADD CONSTRAINT C1 CHECK (BONUS <= 1000.0);
EXEC SQL SET CURRENT RULES = 'STD';
(The table name, constraint name, and check condition in this example are illustrative.)
See “Check constraints” on page 434 for information about check constraints.
You can also use CURRENT RULES in host variable assignments. For example, if
you want to store the value of the CURRENT RULES special register at a
particular point in time, you can assign the value to a host variable, as in the
following statement:
SET :XRULE = CURRENT RULES;
You can also use CURRENT RULES as the argument of a search-condition. For
example, the following statement retrieves rows where the COL1 column contains
the same value as the CURRENT RULES special register.
SELECT * FROM SAMPTBL WHERE COL1 = CURRENT RULES;
If your application program includes SQL statements, you need to process those
SQL statements by using either the DB2 precompiler or the DB2 coprocessor that is
provided with a compiler. Both the precompiler and the coprocessor perform the
following actions:
v Replace the SQL statements in your source programs with calls to DB2
language interface modules
v Create a database request module (DBRM), which communicates your SQL
requests to DB2 during the bind process
The following figure illustrates the program preparation process when you use the
DB2 precompiler. After you process SQL statements in your source program by
using the DB2 precompiler, you create a load module, possibly one or more
packages, and an application plan. Creating a load module involves compiling the
modified source code that is produced by the precompiler into an object program,
and link-editing the object program to create a load module. Creating a package or
an application plan, a process unique to DB2, involves binding one or more
DBRMs, which are created by the DB2 precompiler, using the BIND PACKAGE or
BIND PLAN commands.
The following figure illustrates the program preparation process when you use the
DB2 coprocessor. The process is similar to the process for the DB2 precompiler,
except that the DB2 coprocessor does not create modified source for your
application program.
Figure 44. Program preparation with the DB2 coprocessor. The figure shows a single compile
step that processes the SQL statements and produces an object program and a DBRM; the
bind package and bind plan steps produce a plan from the DBRM, and link-editing the object
program produces a load module.
Before you can run a DL/I batch job, you need to provide values for a number of
input parameters. The input parameters are positional and delimited by commas.
You can specify values for the following parameters using a DDITV02 data set or a
subsystem member:
SSN,LIT,ESMT,RTT,REO,CRC
You can specify values for the following parameters only in a DDITV02 data set:
CONNECTION_NAME,PLAN,PROG
If you use the DDITV02 data set and specify a subsystem member, the values in
the DDITV02 DD statement override the values in the specified subsystem
member. If you provide neither, DB2 abnormally terminates the application
program with system abend code X’04E’ and a unique reason code in register 15.
DDITV02 is the DD name for a data set that has DCB options of LRECL=80 and
RECFM=F or FB.
You might want to save and print the data set, as the information is useful for
diagnostic purposes. You can use the IMS module, DFSERA10, to print the
variable-length data set records in both hexadecimal and character format.
DB2 has a unique JCL procedure for each supported language, with appropriate
defaults for starting the DB2 precompiler and host language compiler or assembler.
The procedures are in prefix.SDSNSAMP member DSNTIJMV, which installs the
procedures.
Table 159. Procedures for precompiling programs
  Language               Procedure      Invocation included in...
  High-level assembler   DSNHASM        DSNTEJ2A
  C                      DSNHC          DSNTEJ2D
  C++                    DSNHCPP        DSNTEJ2E
                         DSNHCPP2 (2)   N/A
| Enterprise COBOL       DSNHICOB       DSNTEJ2C (1)
  Fortran                DSNHFOR        DSNTEJ2F
  PL/I                   DSNHPLI        DSNTEJ2P
  SQL                    DSNHSQL        DSNTEJ63
Notes:
1. You must customize these programs to invoke the procedures that are listed in this table.
2. This procedure demonstrates how you can prepare an object-oriented program that
consists of two data sets or members, both of which contain SQL.
If you use the PL/I macro processor, you must not use the PL/I *PROCESS
statement in the source to pass options to the PL/I compiler. You can specify the
needed options on the PARM.PLI= parameter of the EXEC statement in the
DSNHPLI procedure.
member must be DSNELI, except for FORTRAN, in which case member must be
DSNHFT.
ENTRY specification varies depending on the host language. Include one of the
following:
DLITCBL, for COBOL applications
PLICALLA, for PL/I applications
The program name, for assembler language applications.
CICS
//LKED.SYSIN DD *
INCLUDE SYSLIB(DSNCLI)
/*
Related tasks
“Making the CAF language interface (DSNALI) available” on page 52
“Compiling and link-editing an application” on page 915
The following example illustrates the necessary changes. This example assumes the
use of a COBOL program. For any other programming language, change the CICS
procedure name and the DB2 precompiler options.
//TESTC01 JOB
//*
//*********************************************************
//* DB2 PRECOMPILE THE COBOL PROGRAM
//*********************************************************
(1) //PC EXEC PGM=DSNHPC,
(1) // PARM='HOST(COB2),XREF,SOURCE,FLAG(I),APOST'
(1) //STEPLIB DD DISP=SHR,DSN=prefix.SDSNEXIT
(1) // DD DISP=SHR,DSN=prefix.SDSNLOAD
For more information about the procedure DFHEITVL, other CICS procedures, or
CICS requirements for application programs, please see the appropriate CICS
manual.
If you are preparing a particularly large or complex application, you can use
another preparation method. For example, if your program requires four of your
own link-edit include libraries, you cannot prepare the program with DB2I,
because DB2I limits the number of include libraries to three, plus language, IMS, or
CICS libraries.
Figure 45 shows an example of the DB2I Primary Option Menu. From this point,
you can access all of the DB2I panels without passing through panels that you do
not need. For example, to bind a program, enter the number that corresponds to
BIND/REBIND/FREE to reach the BIND PLAN panel without seeing the ones
previous to it.
Figure 45. Initiating program preparation through DB2I. Specify Program Preparation on the
DB2I Primary Option Menu.
The following descriptions explain the functions on the DB2I Primary Option
Menu.
1 SPUFI
Lets you develop and execute one or more SQL statements interactively.
For further information, see “Executing SQL by using SPUFI” on page
1013.
2 DCLGEN
Lets you generate C, COBOL, or PL/I data declarations of tables. For
further information, see “DCLGEN (declarations generator)” on page 129.
3 PROGRAM PREPARATION
Lets you prepare and run an application program. For more
information, see “DB2 Program Preparation panel” on page 957.
4 PRECOMPILE
Lets you convert embedded SQL statements into statements that your host
language can process. For further information, see “Precompile panel” on
page 964.
5 BIND/REBIND/FREE
Lets you bind, rebind, or free a package or application plan. For more
information, see “Bind/Rebind/Free Selection panel” on page 985.
6 RUN
Lets you run an application program in a TSO or batch environment. For
more information, see “DB2I Run panel” on page 996.
The following table describes each of the panels that you need to use to prepare an
application.
Table 160. DB2I panels used for program preparation

DB2 Program Preparation ("DB2 Program Preparation panel" on page 957)
    Lets you choose specific program preparation functions to perform. For the
    functions that you choose, you can also display the associated panels to
    specify options for performing those functions.
    This panel also lets you change the DB2I default values and perform other
    precompile and prelink functions.

DB2I Defaults Panel 1 ("DB2I Defaults Panel 1" on page 961)
    Lets you change many of the system defaults that are set at DB2 installation
    time.

DB2I Defaults Panel 2 ("DB2I Defaults Panel 2" on page 963)
    Lets you change your default job statement and set additional COBOL options.

Precompile ("Precompile panel" on page 964)
    Lets you specify values for precompile functions.
    You can reach this panel directly from the DB2I Primary Option Menu or from
    the DB2 Program Preparation panel. If you reach this panel from the Program
    Preparation panel, many of the fields contain values from the Primary and
    Precompile panels.

Bind Package ("Bind Package panel" on page 967)
    Lets you change many options when you bind a package.
    You can reach this panel directly from the DB2I Primary Option Menu or from
    the DB2 Program Preparation panel. If you reach this panel from the DB2
    Program Preparation panel, many of the fields contain values from the Primary
    and Precompile panels.

Bind Plan ("Bind Plan panel" on page 969)
    Lets you change options when you bind an application plan.
    You can reach this panel directly from the DB2I Primary Option Menu or as a
    part of the program preparation process. This panel also follows the Bind
    Package panels.

Defaults for Bind or Rebind Package or Plan panels ("Defaults for Bind Package panel" on page 973)
    Let you change the defaults for BIND or REBIND PACKAGE or PLAN.

Program Preparation: Compile, Link, and Run ("Program Preparation: Compile, Link, and Run panel" on page 982)
    Lets you control the compiler or assembler and the linkage editor. For TSO
    programs, the panel also lets you run programs.
For the functions you choose, you can also choose to display the associated panels
to specify options for performing those functions. Some of the functions you can
select are:
v Precompile. The panel for this function lets you control the DB2 precompiler. See
page “Precompile panel” on page 964.
v Bind a package. The panel for this function lets you bind your program’s DBRM
to a package (see page “Bind Package panel” on page 967), and change your
defaults for binding the packages (see page “Defaults for Bind Package panel”
on page 973).
v Bind a plan. The panel for this function lets you create your program’s
application plan (see page “Bind Plan panel” on page 969), and change your
defaults for binding the plans (see page “Defaults for Bind Package panel” on
page 973).
v Compile, link, and run. The panel for these functions lets you control the
compiler or assembler and the linkage editor. See page “Program Preparation:
Compile, Link, and Run panel” on page 982.
TSO and batch
For TSO programs, you can use the program preparation panels to control the host
language run-time processor and the program itself.
The Program Preparation panel also lets you change the DB2I default values, and
perform other precompile and prelink functions.
On the DB2 Program Preparation panel, shown in the following figure, enter the
name of the source program data set (this example uses SAMPLEPG.COBOL) and
specify the other options you want to include. When finished, press ENTER to
view the next panel.
Figure 46. The DB2 Program Preparation panel. Enter the source program data set name
and other options.
The following explains the functions on the DB2 Program Preparation panel and
how to fill in the necessary fields in order to start program preparation.
1 INPUT DATA SET NAME
Lets you specify the input data set name. The input data set name can be a
PDS or a sequential data set, and can also include a member name. If you
do not enclose the data set name in apostrophes, a standard TSO prefix
(user ID) qualifies the data set name.
The input data set name you specify is used to precompile, bind, link-edit,
and run the program.
2 DATA SET NAME QUALIFIER
Lets you qualify temporary data set names involved in the program
preparation process. Use any character string from 1 to 8 characters that
conforms to normal TSO naming conventions. (The default is TEMP.)
For programs that you prepare in the background or that use EDITJCL for
the PREPARATION ENVIRONMENT option, DB2 creates a data set named
tsoprefix.qualifier.CNTL to contain the program preparation JCL. The name
tsoprefix represents the prefix TSO assigns, and qualifier represents the value
you enter in the DATA SET NAME QUALIFIER field. If a data set with
this name already exists, DB2 deletes it.
3 PREPARATION ENVIRONMENT
Lets you specify whether program preparation occurs in the foreground or
background. You can also specify EDITJCL, in which case you are able to
edit and then submit the job. Use:
FOREGROUND to use the values you specify on the Program
Preparation panel and to run immediately.
BACKGROUND to create and submit a file containing a DSNH CLIST
that runs immediately using the JOB control statement from either the
DB2I Defaults panel or your site’s SUBMIT exit. The file is saved.
EDITJCL to create and open a file containing a DSNH CLIST in edit
mode. You can then submit the CLIST or save it.
4 RUN TIME ENVIRONMENT
Lets you specify the environment (TSO, CAF, CICS, IMS, RRSAF) in which
your program runs.
Fields 6 through 15 let you select the functions to perform and choose whether to
show the DB2I panels for the functions that you select. Use Y for YES, or N for NO.
If you are willing to accept default values for all the steps, enter N under Display
panel? for all the other preparation panels listed.
To make changes to the default values, enter Y under Display panel? for any
panel that you want to see. DB2I then displays each of the panels that you request.
After all the panels display, DB2 proceeds with the steps involved in preparing
your program to run.
Variables for all functions used during program preparation are maintained
separately from variables entered from the DB2I Primary Option Menu. For
example, the bind plan variables you enter on the Program Preparation panel are
saved separately from those on any Bind Plan panel that you reach from the
Primary Option Menu.
6 CHANGE DEFAULTS
Lets you specify whether to change the DB2I defaults. Enter Y in the
Display panel? field next to this option; otherwise enter N. Minimally, you
should specify your subsystem identifier and programming language on
the Defaults panel.
7 PL/I MACRO PHASE
Lets you specify whether to display the “Program Preparation: Compile,
Link, and Run” panel to control the PL/I macro phase by entering PL/I
options in the OPTIONS field of that panel. That panel also displays for
options COMPILE OR ASSEMBLE, LINK, and RUN.
This field applies to PL/I programs only. If your program is not a PL/I
program or does not use the PL/I macro processor, specify N in the
Perform function field for this option, which sets the Display panel? field
to the default N.
8 PRECOMPILE
Lets you specify whether to display the Precompile panel. To see this panel
enter Y in the Display panel? field next to this option; otherwise enter N.
9 CICS COMMAND TRANSLATION
Lets you specify whether to use the CICS command translator. This field
applies to CICS programs only.
Pressing ENTER takes you to the first panel in the series you specified, in this
example to the DB2I Defaults panel. If, at any point in your progress from panel to
panel, you press the END key, you return to this first panel, from which you can
change your processing specifications. Asterisks (*) in the Display panel? column of
rows 7 through 14 indicate which panels you have already examined. You can see
a panel again by writing a Y over an asterisk.
Related reference
“Bind Package panel” on page 967
“Bind Plan panel” on page 969
“DB2I Defaults Panel 1”
“Precompile panel” on page 964
“Program Preparation: Compile, Link, and Run panel” on page 982
DSNH (TSO CLIST) (DB2 Command Reference)
The following figure shows the fields that affect the processing of the other DB2I
panels.
If you specify IBMCOB, DB2 prompts you for more COBOL defaults on
panel DSNEOP02. See “DB2I Defaults Panel 2” on page 963.
You cannot specify FORTRAN for IMS or CICS programs.
4 LINES/PAGE OF LISTING
Lets you specify the number of lines to print on each page of listing or
SPUFI output. The default is 60.
5 MESSAGE LEVEL
Lets you specify the lowest level of message to return to you during the
BIND phase of the preparation process. Use:
I For all information, warning, error, and severe error messages
W For warning, error, and severe error messages
E For error and severe error messages
S For severe error messages only
6 SQL STRING DELIMITER
Lets you specify the symbol used to delimit a string in SQL statements in
COBOL programs. This option is valid only when the application language
is IBMCOB. Use:
DEFAULT
To use the default defined at installation time
’ For an apostrophe
″ For a quotation mark
7 DECIMAL POINT
Lets you specify how your host language source program represents
decimal separators and how SPUFI displays decimal separators in its
output. Use a comma (,) or a period (.). The default is a period (.).
8 STOP IF RETURN CODE >=
Lets you specify the smallest value of the return code (from precompile,
compile, link-edit, or bind) that will prevent later steps from running. Use:
4 To stop on warnings and more severe errors.
8 To stop on errors and more severe errors. The default is 8.
9 NUMBER OF ROWS
Lets you specify the default number of input entry rows to generate on the
Suppose that the default programming language is PL/I and the default number of
lines per page of program listing is 60. Your program is in COBOL, so you want to
change field 3, APPLICATION LANGUAGE. You also want to print 80 lines to the
page, so you need to change field 4, LINES/PAGE OF LISTING, as well. Figure 47
on page 961 shows the entries that you make in DB2I Defaults Panel 1 to make
these changes. In this case, pressing ENTER takes you to DB2I Defaults Panel 2.
The following figure shows the DB2I Defaults Panel 2 when IBMCOB is selected.
Pressing ENTER takes you to the next panel you specified on the DB2 Program
Preparation panel, in this case, to the Precompile panel.
Precompile panel
After you set the DB2I defaults, you can precompile your application. You can
reach the Precompile panel in two ways: you can either specify it as a part of the
program preparation process from the DB2 Program Preparation panel, or you can
reach it directly from the DB2I Primary Option Menu.
The way you choose to reach the panel determines the default values of the fields
it contains. Figure 49 shows the Precompile panel.
Figure 49. The Precompile panel. Specify the include library, if any, that your program should
use, and any other options you need.
The following explains the functions on the Precompile panel, and how to enter
the fields for preparing to precompile.
You can reach the Bind Package panel either directly from the DB2I Primary
Option Menu, or as a part of the program preparation process. If you enter the
Bind Package panel from the Program Preparation panel, many of the Bind
Package entries contain values from the Primary and Precompile panels. Figure 50
shows the Bind Package panel.
The following information explains the functions on the Bind Package panel and
how to fill the necessary fields in order to bind your program.
1 LOCATION NAME
Lets you specify the system at which to bind the package. You can use
from 1 to 16 characters to specify the location name. The location name
must be defined in the catalog table SYSIBM.LOCATIONS. The default is
the local DBMS.
2 COLLECTION-ID
| Lets you specify the collection the package is in. You can use from 1 to 18
| characters to specify the collection, and the first character must be
| alphabetic.
3 DBRM: COPY:
Lets you specify whether you are creating a new package (DBRM) or
making a copy of a package that already exists (COPY). Use:
DBRM
To create a new package. You must specify values in the LIBRARY,
PASSWORD, and MEMBER fields.
If you enter the Bind Plan panel from the Program Preparation panel, many of the
Bind Plan entries contain values from the Primary and Precompile panels.
The following explains the functions on the Bind Plan panel and how to fill the
necessary fields in order to bind your program.
1 MEMBER
Lets you specify the DBRMs to include in the plan. You can specify a name
from 1 to 8 characters. You must specify MEMBER or INCLUDE
PACKAGE LIST, or both. If you do not specify MEMBER, fields 2, 3, and 4
are ignored.
The default member name depends on the input data set.
v If the input data set is partitioned, the default name is the member name
of the input data set specified in field 1 of the DB2 Program Preparation
panel.
v If the input data set is sequential, the default name is the second
qualifier of this input data set.
If you reached this panel directly from the DB2I Primary Option Menu,
you must provide values for the MEMBER and LIBRARY fields.
If you plan to use more than one DBRM, you can include the library name
and member name of each DBRM in the MEMBER and LIBRARY fields,
separating entries with commas. You can also specify more DBRMs by
using the ADDITIONAL DBRMS? field on this panel.
2 PASSWORD
Lets you enter passwords for the libraries you list in the LIBRARY field.
You can use this field only if you reached the Bind Plan panel directly
from the DB2 Primary Option Menu.
With a few minor exceptions, the options on this panel are the same as the options
for the defaults for rebinding a package. However, the defaults for REBIND
PACKAGE are different from those shown in the preceding figure, and you can
specify SAME in any field to specify the values used the last time the package was
bound. For rebinding, the default value for all fields is SAME.
1 ISOLATION LEVEL
Lets you specify how far to isolate your application from the effects of
other running applications. The default is the value used for the old plan
or package if you are replacing an existing one.
2 PLAN VALIDATION TIME
Lets you specify RUN or BIND to tell whether to check authorization at
run time or at bind time. The default is that used for the old plan or
package, if you are replacing it.
3 RESOURCE RELEASE TIME
Lets you specify COMMIT or DEALLOCATE to tell when to release locks
on resources. The default is that used for the old plan or package, if you
are replacing it.
The options on this panel are mostly the same as the options for the defaults for
rebinding a package. However, for REBIND PLAN defaults, you can specify SAME
in any field to specify the values used the last time the plan was bound. For
rebinding, the default value for all fields is SAME.
1 ISOLATION LEVEL
Lets you specify how far to isolate your application from the effects of
other running applications. The default is the value used for the old plan
or package if you are replacing an existing one.
2 VALIDATION TIME
Lets you specify RUN or BIND to tell whether to check authorization at
run time or at bind time. The default is that used for the old plan or
package, if you are replacing it.
3 RESOURCE RELEASE TIME
Lets you specify COMMIT or DEALLOCATE to tell when to release locks
on resources. The default is that used for the old plan or package, if you
are replacing it.
4 EXPLAIN PATH SELECTION
Lets you specify YES or NO for whether to obtain EXPLAIN information
about how SQL statements in the package execute. The default is NO.
The bind process inserts information into the table owner.PLAN_TABLE,
where owner is the authorization ID of the plan or package owner. If you
defined owner.DSN_STATEMNT_TABLE, DB2 also inserts information
about the cost of statement execution into that table. If you specify YES in
this field and BIND in the VALIDATION TIME field, and if you do not
correctly define PLAN_TABLE, the bind fails.
5 DATA CURRENCY
Lets you specify YES or NO for whether you need data currency for
ambiguous cursors opened at remote locations.
Data is current if the data within the host structure is identical to the data
within the base table. Data is always current for local processing.
6 PARALLEL DEGREE
Lets you specify ANY to run queries using parallel processing (when
possible) or 1 to request that DB2 not execute queries in parallel.
7 RESOURCE ACQUISITION TIME
Lets you specify when to acquire locks on resources. Use:
USE (default) to open table spaces and acquire locks only when the
program bound to the plan first uses them.
ALLOCATE to open all table spaces and acquire all locks when you
allocate the plan. This value has no effect on dynamic SQL.
8 REOPTIMIZE FOR INPUT VARS
Specifies whether DB2 determines access paths at bind time and again at
execution time for statements that contain:
v Input host variables
v Parameter markers
v Special registers
If you specify ALWAYS, DB2 determines the access paths again at
execution time. When you specify ALWAYS for this option, you must also
specify YES for DEFER PREPARE, or you will receive a bind error. If you
specify ONCE, DB2 determines the access path at the first execution or
open time. It saves and continues to use that access path for that specific
statement until the statement is invalidated or removed from the dynamic
statement cache or until the statement needs to be prepared again. The
default, NONE, specifies that DB2 does not determine the access path at
bind time using input host variables, parameter markers, or special
registers.
9 DEFER PREPARE
Lets you defer preparation of dynamic SQL statements until DB2
encounters the first OPEN, DESCRIBE, or EXECUTE statement that refers
to those statements. Specify YES to defer preparation of the statement.
10 KEEP DYN SQL PAST COMMIT
Specifies whether DB2 keeps dynamic SQL statements after commit points.
YES causes DB2 to keep dynamic SQL statements after commit points. An
application can execute a PREPARE statement for a dynamic SQL
statement once and execute that statement after later commit points
without executing PREPARE again.
11 DBPROTOCOL
Specifies whether DB2 uses DRDA protocol or DB2 private protocol to
execute statements that contain 3-part names.
12 APPLICATION ENCODING
Specifies the application encoding scheme to be used:
blank Indicates that all host variables in static SQL statements are
encoded using the encoding scheme in the DEF ENCODING
SCHEME field of installation panel DSNTIPF.
ASCII Indicates that the CCSIDs for all host variables in static SQL
statements are determined by the values in the ASCII CODED
CHAR SET and MIXED DATA fields of installation panel
DSNTIPF.
EBCDIC
Indicates that the CCSIDs for all host variables in static SQL
statements are determined by the values in the EBCDIC CODED
CHAR SET and MIXED DATA fields of installation panel
DSNTIPF.
UNICODE
Indicates that the CCSIDs of all host variables in static SQL
statements are determined by the value in the UNICODE CCSID
field of installation panel DSNTIPF.
To enable or disable connection types (that is, allow or prevent the connection from
running the package or plan), enter the following information.
If you use the DISPLAY command under TSO on this panel, you can determine
what you have currently defined as “enabled” or “disabled” in your ISPF
DSNSPFT library (member DSNCONNS). The information does not reflect the
current state of the DB2 Catalog.
If you type DISPLAY ENABLED on the command line, you get the connection
names that are currently enabled for your TSO connection types. For example:
Display OF ALL connection name(s) to be ENABLED
CONNECTION SUBSYSTEM
CICS1 ENABLED
CICS2 ENABLED
CICS3 ENABLED
CICS4 ENABLED
DLI1 ENABLED
DLI2 ENABLED
DLI3 ENABLED
DLI4 ENABLED
DLI5 ENABLED
A list panel looks like an ISPF edit session and lets you scroll and use a limited set
of commands.
CMD
"""" value ...
"""" value ...
""""
""""
""""
""""
All of the list panels let you enter limited commands in two places:
v On the system command line, prefixed by ====>
v In a special command area, identified by ″″″″
When you finish with a list panel, specify END to save the current panel values
and continue processing.
For TSO programs, the panel also lets you run programs.
Figure 56. The Program Preparation: Compile, Link, and Run panel
Your application could need other data sets besides SYSIN and SYSPRINT. If so,
remember to catalog and allocate them before you run your program.
When you press ENTER after entering values in this panel, DB2 compiles and
link-edits the application. If you specified in the DB2 Program Preparation panel
that you want to run the application, DB2 also runs the application.
DB2I panels that are used to rebind and free plans and packages
A set of DB2I panels lets you rebind or free plans and packages.
Table 161 on page 985 describes additional panels that you can use to Rebind and
Free packages and plans. It also describes the Run panel, which you can use to run
application programs that have already been prepared.
or
Enter package name(s) to be rebound:
2 LOCATION NAME ............. ===> (Defaults to local)
3 COLLECTION-ID ............. ===> (Required)
4 PACKAGE-ID ................ ===> (Required)
5 VERSION-ID ................ ===>
(*, Blank, (), or version-id)
6 ADDITIONAL PACKAGES? ...... ===> (Yes to include more packages)
This panel lets you choose options for rebinding a package. For information about
the rebind options that these fields represent, see the topic “BIND and REBIND
options” in DB2 Command Reference.
1 Rebind all local packages
Lets you rebind all packages on the local DBMS. To do so, place an asterisk
(*) in this field; otherwise, leave it blank.
2 LOCATION NAME
Lets you specify where to bind the package. If you specify a location name,
you should use from 1 to 16 characters, and you must have defined it in
the catalog table SYSIBM.LOCATIONS.
3 COLLECTION-ID
| Lets you specify the collection of the package to rebind. You must specify a
| collection ID from 1 to 8 characters, or an asterisk (*) to rebind all
| collections in the local DB2 system. You cannot use the asterisk to rebind a
| remote collection.
4 PACKAGE-ID
Lets you specify the name of the package to rebind. You must specify a
package ID from 1 to 8 characters, or an asterisk (*) to rebind all packages
in the specified collections in the local DB2 system. You cannot use the
asterisk to rebind a remote package.
5 VERSION-ID
Lets you specify the version of the package to rebind. You must specify a
version ID from 1 to 64 characters, or an asterisk (*) to rebind all versions
in the specified collections and packages in the local DB2 system. You
cannot use the asterisk to rebind a remote version.
6 ADDITIONAL PACKAGES?
Lets you indicate whether to name more packages to rebind. Use YES to
specify more packages on an additional panel, described on “Panels for
entering lists of values” on page 981. The default is NO.
7 CHANGE CURRENT DEFAULTS?
Lets you indicate whether to change the binding defaults. Use:
NO (default) to retain the binding defaults of the previous package.
or
Enter trigger package name(s) to be rebound:
2 LOCATION NAME ............. ===> (Defaults to local)
3 COLLECTION-ID (SCHEMA NAME) ===> (Required)
4 PACKAGE-ID (TRIGGER NAME).. ===> (Required)
This panel lets you choose options for rebinding a trigger package. For information
about the rebind options that these fields represent, see the topic “BIND and
REBIND options” in DB2 Command Reference.
1 Rebind all trigger packages
Lets you rebind all packages on the local DBMS. To do so, place an asterisk
(*) in this field; otherwise, leave it blank.
2 LOCATION NAME
Lets you specify where to bind the trigger package. If you specify a
location name, you should use from 1 to 16 characters, and you must have
defined it in the catalog table SYSIBM.LOCATIONS.
3 COLLECTION-ID (SCHEMA NAME)
| Lets you specify the collection of the trigger package to rebind. You must
| specify a collection ID from 1 to 8 characters, or an asterisk (*) to rebind all
| collections in the local DB2 system. You cannot use the asterisk to rebind a
| remote collection.
4 PACKAGE-ID
Lets you specify the name of the trigger package to rebind. You must
specify a package ID from 1 to 8 characters, or an asterisk (*) to rebind all
trigger packages in the specified collections in the local DB2 system. You
cannot use the asterisk to rebind a remote trigger package.
5 ISOLATION LEVEL
Lets you specify how far to isolate your application from the effects of
other running applications. The default is the value used for the old trigger
package.
6 RESOURCE RELEASE TIME
Lets you specify COMMIT or DEALLOCATE to tell when to release locks
on resources. The default is that used for the old trigger package.
7 EXPLAIN PATH SELECTION
Lets you specify YES or NO for whether to obtain EXPLAIN information
about how SQL statements in the package execute. The default is the value
used for the old trigger package.
The bind process inserts information into the table owner.PLAN_TABLE,
where owner is the authorization ID of the plan or package owner. If you
defined owner.DSN_STATEMNT_TABLE, DB2 also inserts information
about the cost of statement execution into that table. If you specify YES in
This panel lets you specify options for rebinding your plan.
1 PLAN NAME
Lets you name the application plan to rebind. You can specify a name from
1 to 8 characters, and the first character must be alphabetic. Do not begin
or
Enter package name(s) to be freed:
2 LOCATION NAME ............. ===> (Defaults to local)
3 COLLECTION-ID ............. ===> (Required)
4 PACKAGE-ID ................ ===> (* to free all packages)
5 VERSION-ID ................ ===>
(*, Blank, (), or version-id)
6 ADDITIONAL PACKAGES?....... ===> (Yes to include more packages)
At run time, DB2 verifies that the information in the application plan and its
associated packages is consistent with the corresponding information in the DB2
catalog. If any destructive changes, such as DROP or REVOKE, occur (either to the
data structures that your application accesses or to the binder’s authority to access
those data structures), DB2 automatically rebinds packages or the plan as needed.
Establishing a test environment: This topic describes how to design a test data
structure and how to fill tables with test data.
CICS
Before you run an application, ensure that the following two conditions are
met:
v The corresponding entries in the SNT and RACF control areas authorize your
application to run.
v The program and its transaction code are defined in the CICS CSD.
The system administrator is responsible for these functions.
The DSN command processor uses the TSO attachment facility to access DB2 and
provides an alternative method for running programs that access DB2 in a TSO
environment.
| When you run an application by using the DSN command processor, that
| application can run in a trusted connection if DB2 finds a matching trusted context.
You can use the DSN command processor implicitly during program development
for functions such as:
v Using the declarations generator (DCLGEN)
v Running the BIND, REBIND, and FREE subcommands on DB2 plans and
packages for your program
v Using SPUFI (SQL Processor Using File Input) to test some of the SQL functions
in the program
The DSN command processor runs with the TSO terminal monitor program (TMP).
Because the TMP runs in either foreground or background, DSN applications run
interactively or as batch jobs.
The DSN command processor can provide these services to a program that runs
under it:
v Automatic connection to DB2
v Attention key support
v Translation of return codes into error messages
When using DSN services, your application runs under the control of DSN.
Because TSO executes the ATTACH macro to start DSN, and DSN executes the
ATTACH macro to start a part of itself, your application gains control that is two
task levels below TSO.
If these limitations are too severe, consider having your application use the call
attachment facility or Resource Recovery Services attachment facility. For more
information about these attachment facilities, see “Call attachment facility” on page
49 and “Resource Recovery Services attachment facility” on page 79.
DSN return code processing: At the end of a DSN session, register 15 contains the
highest value that is placed there by any DSN subcommand that is used in the
session or by any program that is run by the RUN subcommand. Your run-time
environment might format that value as a return code. The value does not,
however, originate in DSN.
You can reach the Run panel only through the DB2I Primary Options Menu. You
can accomplish the same task using the “Program Preparation: Compile, Link, and
Run” panel. You should use this panel if you have already prepared the program
and simply want to run it. Figure 63 shows the run options.
The RUN subcommand prompts you for more input. To end the DSN
processor, use the END command.
Before running the program, be sure to allocate any data sets that your program
needs.
This sequence also works in ISPF option 6. You can package this sequence in a
CLIST. DB2 does not support access to multiple DB2 subsystems from a single
address space.
The PARMS keyword of the RUN subcommand enables you to pass parameters to
the run-time processor and to your application program:
PARMS ('/D01, D02, D03')
The slash (/) indicates that you are passing parameters. For some languages, you
pass parameters and run-time options in the form PARMS('parameters/run-time-options').
An example of the PARMS keyword might be:
PARMS ('D01, D02, D03/')
Check your host language publications for the correct form of the PARMS option.
In a batch environment, you might use statements like these to invoke application
REXXPROG:
//RUNREXX EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSEXEC DD DISP=SHR,DSN=SYSADM.REXX.EXEC
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
%REXXPROG parameters
The SYSEXEC data set contains your REXX application, and the SYSTSIN data set
contains the command that you use to invoke the application.
Disadvantages: Compared to the modular structure using DSN, the structure using
CAF is likely to require a more complex program, which in turn might require
assembler language subroutines. For more information, see “Call attachment
facility” on page 49.
ISPF
The Interactive System Productivity Facility (ISPF) helps you to construct and
execute dialogs. DB2 includes a sample application that illustrates how to use ISPF
through the call attachment facility (CAF).
For more information about the ISPF/CAF sample application, see the topic
“Running dynamic SQL and the ISPF/CAF application” in DB2 Installation Guide.
For more information about using the sample applications, see “DB2 sample
applications” on page 1069.
For more information about printing the listing sample applications, see the topic
“Printing the sample application listings” in DB2 Installation Guide.
There are some restrictions on how you make and break connections to DB2 in any
structure. If you use the PGM option of ISPF SELECT, ISPF passes control to your
load module by the LINK macro; if you use CMD, ISPF passes control by the
ATTACH macro.
The DSN command processor (see “DSN command processor” on page 995)
permits only single task control block (TCB) connections. Take care not to change
the TCB after the first SQL statement. ISPF SELECT services change the TCB if you
started DSN under ISPF, so you cannot use these to pass control from load module
to load module. Instead, use LINK, XCTL, or LOAD.
The following figure shows the task control blocks that result from attaching the
DSN command processor below TSO or ISPF.
The figure shows that TSO or ISPF attaches the DSN initialization load module
(alias DSN), which in turn attaches the DSN main load module. DSN then either
links to an ordinary application program (see Note 2) or attaches an application
command processor (see Note 1).
Notes:
1. The RUN command with the CP option causes DSN to attach your program
and create a new TCB.
2. The RUN command without the CP option causes DSN to link to your
program.
If you are in ISPF and running under DSN, you can perform an ISPLINK to
another program, which calls a CLIST. In turn, the CLIST uses DSN and another
application. Each such use of DSN creates a separate unit of recovery (process or
transaction) in DB2.
All such initiated DSN work units are unrelated, with regard to isolation (locking)
and recovery (commit). It is possible to deadlock with yourself; that is, one unit
(DSN) can request a serialized resource (a data page, for example) that another
unit (DSN) holds incompatibly.
A COMMIT in one program applies only to that process. There is no facility for
coordinating the processes.
The application has one large load module and one plan.
Disadvantages: For large programs of this type, you want a more modular design,
making the plan more flexible and easier to maintain. If you have one large plan,
If you want to use ISPLINK, then call ISPF to run under DSN:
DSN
RUN PROGRAM(ISPF) PLAN(MYPLAN)
END
You then need to leave ISPF before you can start your application.
You might write some functions as separately compiled and loaded programs,
others as EXECs or CLISTs. You can start any of those programs or functions
through the ISPF SELECT service, and you can start that from a program, a CLIST,
or an ISPF selection panel.
When you use the ISPF SELECT service, you can specify whether ISPF should
create a new ISPF variable pool before calling the function. You can also break a
large application into several independent parts, each with its own ISPF variable
pool.
You can call different parts of the program in different ways. For example, you can
use the PGM option of ISPF SELECT:
PGM(program-name) PARM(parameters)
For a part that accesses DB2, the command can name a CLIST that starts DSN:
DSN
RUN PROGRAM(PART1) PLAN(PLAN1) PARM(input from panel)
END
Breaking the application into separate modules makes it more flexible and easier to
maintain. Furthermore, some of the application might be independent of DB2;
portions of the application that do not call DB2 can run, even if DB2 is not
running. A stopped DB2 database does not interfere with parts of the program that
refer only to other databases.
To run a program using DB2, you need a DB2 plan. The bind process creates the
DB2 plan. DB2 first verifies whether the DL/I batch job step can connect to DB2.
Then DB2 verifies whether the application program can access DB2 and enforces
user identification of batch jobs that access DB2.
The primary authorization ID is the value of the USER parameter on the job
statement, if that is available. If that parameter is not available, the primary
authorization ID is the TSO logon name if the job is submitted. Otherwise, the
primary authorization ID is the IMS PSB name. In that case, however, the ID must
not begin with the string “SYSADM” because this string causes the job to
abnormally terminate. The batch job is rejected if you try to change the
authorization ID in an exit routine.
For guidelines on finding the last successful checkpoint, see “Finding the DL/I
batch checkpoint ID” on page 1005.
JCL example of a batch backout: The skeleton JCL example that follows illustrates a
batch backout for PSB=IVP8CA.
JCL example of restarting a DL/I batch job: Operational procedures can restart a
DL/I batch job step for an application program using IMS XRST and symbolic
CHKP calls.
You cannot restart a BMP application program in a DB2 DL/I batch environment.
The symbolic checkpoint records are not accessed, causing an IMS user abend
U0102.
IMS also records the checkpoint ID in the type X’41’ IMS log record. Symbolic
CHKP calls also create one or more type X’18’ records on the IMS log. XRST uses
the type X’18’ log records to reposition DL/I databases and return information to
the application program.
DB2 performs one of two actions automatically when restarted, if the failure occurs
outside the indoubt period: it either backs out the work unit to the prior
checkpoint, or it commits the data without any assistance. If the operator then
issues the following command, no work unit information is displayed:
-DISPLAY THREAD(*) TYPE(INDOUBT)
| Use the following syntax for the command line processor CALL statement.
| CALL procedure-name ( parameter, ... )
| Notes:
| 1 If you specify an unqualified stored procedure name, DB2 searches the
| schema list in the CURRENT PATH special register. DB2 searches this list for
| a stored procedure with the specified number of input and output
| parameters.
| 2 Specify a question mark (?) as a placeholder for each output parameter.
| 3 For non-numeric, BLOB, or CLOB input parameters, enclose each value in
| single quotation marks (’). The exception is if the data is a BLOB or CLOB
| value that is to be read from a file. In that case, use the notation file://fully
| qualified file name.
| 4 Specify the input and output parameters in the order that they are specified
| in the signature for the stored procedure.
| To invoke the stored procedure from the command line processor, you can specify
| the following CALL statement:
| CALL TEST.DEPT_MEDIAN(51, ?)
| Assume that the stored procedure returns a value of 25,000. The command line
| processor displays that value as the value of the output parameter.
| Example: Suppose that stored procedure TEST.BLOBSP is defined with one input
| parameter of type BLOB and one output parameter. You can invoke this stored
| procedure from the command line processor with the following statement:
| CALL TEST.BLOBSP(file:///tmp/photo.bmp,?)
| The command line processor reads the contents from /tmp/photo.bmp as the
| input parameter. Alternatively, you can invoke this stored procedure by specifying
| the input parameter in the CALL statement itself, as in the following example:
| CALL TEST.BLOBSP('abcdef',?)
Example of running a batch DB2 application in TSO
Most application programs that are written for the batch environment run under
the TSO Terminal Monitor Program (TMP) in background mode.
The following figure shows the JCL statements that you need in order to start such
a job. The list that follows explains each statement.
//jobname JOB USER=MY DB2ID
//GO EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB DD DSN=prefix.SDSNEXIT,DISP=SHR
//         DD DSN=prefix.SDSNLOAD,DISP=SHR
   .
   .
   .
//SYSTSPRT DD SYSOUT=A
//SYSTSIN DD *
DSN SYSTEM (ssid)
RUN PROG (SAMPPGM) -
PLAN (SAMPLAN) -
LIB (SAMPPROJ.SAMPLIB) -
PARMS ('/D01 D02 D03')
END
/*
v The JOB option identifies this as a job card. The USER option specifies the DB2
authorization ID of the user.
v The EXEC statement calls the TSO Terminal Monitor Program (TMP).
v The STEPLIB statement specifies the library in which the DSN Command
Processor load modules and the default application programming defaults
module, DSNHDECP, reside. It can also reference the libraries in which user
applications, exit routines, and the customized DSNHDECP module reside. The
customized DSNHDECP module is created during installation.
v Subsequent DD statements define additional files that are needed by your
program.
v The DSN command connects the application to a particular DB2 subsystem.
v The RUN subcommand specifies the name of the application program to run.
v The PLAN keyword specifies the plan name.
v The LIB keyword specifies the library that the application should access.
v The PARMS keyword passes parameters to the run-time processor and the
application program.
v END ends the DSN command processor.
Usage notes:
The following CLIST calls a DB2 application program named MYPROG. ssid
represents the DB2 subsystem name or group attachment name.
PROC 0 /* INVOCATION OF DSN FROM A CLIST */
DSN SYSTEM(ssid) /* INVOKE DB2 SUBSYSTEM ssid */
IF &LASTCC = 0 THEN /* BE SURE DSN COMMAND WAS SUCCESSFUL */
DO /* IF SO THEN DO DSN RUN SUBCOMMAND */
DATA /* ELSE OMIT THE FOLLOWING: */
RUN PROGRAM(MYPROG)
END
ENDDATA /* THE RUN AND THE END ARE FOR DSN */
END
EXIT
First, ensure that you can respond to the program’s interactive requests for data
and that you can recognize the expected results. Then, enter the transaction code
that is associated with the program. Users of the transaction code must be
authorized to run the program.
First, ensure that the corresponding entries in the SNT and RACF control areas
allow run authorization for your application. The system administrator is
responsible for these functions.
Submit the job control statements that are needed to run the program.
Also, be sure to define to CICS the transaction code that is assigned to your
program and the program itself.
Issue the NEWCOPY command if CICS has not been reinitialized since the
program was last bound and compiled.
If your location has a separate DB2 system for testing, you can create the test
tables and views on the test system and then test your program thoroughly on that
system. This information assumes that you do all testing on a separate system, and
that the person who created the test tables and views has an authorization ID of
TEST. The table names are TEST.EMP, TEST.PROJ and TEST.DEPT.
To design test tables and views, first analyze the data needs of your application.
1. List the data that your application accesses and describe how it accesses each
data item. For example, suppose that you are testing an application that
accesses the DSN8910.EMP, DSN8910.DEPT, and DSN8910.PROJ tables. You
might record the information about the data as shown in Table 162 on page
1010.
2. Determine the test tables and views that you need to test your application.
Create a test table on your list when either of the following conditions exists:
v The application modifies data in the table.
v You need to create a view that is based on a test table because your
application modifies data in the view.
To continue the example, create these test tables:
v TEST.EMP, with the following format:
DEPTNO      MGRNO
  .           .
  .           .
  .           .
Because the application does not change any data in the DSN8910.DEPT table,
you can base the view on the table itself (rather than on a test table). However,
a safer approach is to have a complete set of test tables and to test the program
thoroughly using only test data.
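As an illustration only (the use of LIKE and the view definition are hypothetical,
not part of the sample application), you might create the test employee table and a
view over the department table as follows:
CREATE TABLE TEST.EMP LIKE DSN8910.EMP;    -- copies the column definitions, not the data

CREATE VIEW TEST.DEPT AS
  SELECT DEPTNO, MGRNO
    FROM DSN8910.DEPT;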
If you intend to use existing tables and views (either directly or as the basis for a
view), you need privileges to access those tables and views. Your DBA can grant
those privileges.
To create a view, you must have authorization for each table and view on which
you base the view. You then have the same privileges over the view that you have
over the tables and views on which you based the view. Before trying the
examples, have your DBA grant you the privileges to create new tables and views
and to access existing tables. Obtain the names of tables and views that you are
authorized to access (as well as the privileges you have for each table) from your
DBA.
The following SQL statements show how to create a complete test structure to
contain a small table named SPUFINUM. The test structure consists of:
v A storage group named SPUFISG
v A database named SPUFIDB
v A table space named SPUFITS in database SPUFIDB and using storage group
SPUFISG
v A table named SPUFINUM within the table space SPUFITS
CREATE STOGROUP SPUFISG
VOLUMES (user-volume-number)
VCAT DSNCAT ;
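The remaining objects in the list are created with CREATE DATABASE, CREATE
TABLESPACE, and CREATE TABLE statements that follow the same pattern. The
sketch below shows one possible form; the column definitions for SPUFINUM and
the omitted space and buffer pool options are illustrative assumptions, not the
exact definitions that the sample jobs use:
CREATE DATABASE SPUFIDB
  STOGROUP SPUFISG;

CREATE TABLESPACE SPUFITS
  IN SPUFIDB
  USING STOGROUP SPUFISG;

CREATE TABLE SPUFINUM
  ( XVAL    CHAR(12)      NOT NULL,
    ISFLOAT FLOAT,
    DEC30   DECIMAL(3,0),
    DEC31   DECIMAL(3,1) )
  IN SPUFIDB.SPUFITS;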
For more information about the syntax of each CREATE statement, see the topics
“CREATE STOGROUP”, “CREATE DATABASE”, “CREATE TABLESPACE”, and
“CREATE TABLE” in DB2 SQL Reference.
Populating the test tables with data
To populate test tables, use SQL INSERT statements or the LOAD utility.
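For example, you might add a few rows to the test employee table with INSERT
statements like the following ones. The column names and values shown here are
only an illustration; use the columns that your test table actually contains.
INSERT INTO TEST.EMP (EMPNO, LASTNAME, WORKDEPT, PHONENO)
  VALUES ('000010', 'HAAS', 'A00', '3978');

INSERT INTO TEST.EMP (EMPNO, LASTNAME, WORKDEPT, PHONENO)
  VALUES ('000020', 'THOMPSON', 'B01', '3476');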
Test with SPUFI: You can use SPUFI (an interface between ISPF and DB2) to test
SQL statements in a TSO/ISPF environment. With SPUFI panels, you can put SQL
statements into a data set that DB2 subsequently executes. The SPUFI Main panel
has several functions that enable you to:
v Name an input data set to hold the SQL statements that are passed to DB2 for
execution
v Name an output data set to contain the results of executing the SQL statements
v Specify SPUFI processing options
For more information about how to use SPUFI, see “Executing SQL by using
SPUFI” on page 1013.
| Test with the command line processor: You can use the command line processor
| to test SQL statements from UNIX System Services on z/OS. For more information
| about how to use the command line processor and which statements you can run,
| see the topic “Working with DB2 Commands” in DB2 Command Reference.
| SQL statements that are executed under SPUFI or the command line processor
| operate on actual tables (in this case, the tables that you created for testing).
| Consequently, before you access DB2 data:
| v Make sure that all tables and views that your SQL statements refer to exist.
Before you use SPUFI, allocate an input data set to store the SQL statements that
you want to execute, if such a data set does not already exist. For information
about allocating data sets, see z/OS ISPF User’s Guide Volumes 1 and 2.
Important: Ensure that the TSO terminal CCSID matches the DB2 CCSID. If these
CCSIDs do not match, data corruption can occur. If SPUFI issues the warning
message DSNE345I, terminate your SPUFI session and notify the system
administrator.
Before you begin this task, you can specify whether TSO message IDs are
displayed by using the TSO PROFILE command. To view message IDs, type TSO
PROFILE MSGID on the ISPF command line. To suppress message IDs, type TSO
PROFILE NOMSGID.
To begin using SPUFI, you need to open and fill out the SPUFI panel.
DSNESP01 SPUFI SSID: DSN
===>
Enter the input data set name: (Can be sequential or partitioned)
1 DATA SET NAME..... ===> EXAMPLES(XMP1)
2 VOLUME SERIAL..... ===> (Enter if not cataloged)
3 DATA SET PASSWORD. ===> (Enter if password protected)
Enter the output data set name: (Must be a sequential data set)
4 DATA SET NAME..... ===> RESULT
3. Optional: Specify new values in any of the other fields on the SPUFI panel. For
more information about these fields, see “The SPUFI panel” on page 1017.
After you open SPUFI, specify the initial options, and optionally change any SPUFI
defaults, you can enter one or more SQL statements to execute.
Before you begin this task, you must complete the task “Opening SPUFI and
specifying initial options.”
If the input data set that you specified on the SPUFI panel already contains all of
the SQL statements that you want to execute, you can bypass this editing step by
specifying NO for the EDIT INPUT field on the SPUFI panel.
You can use SPUFI to submit the SQL statements in a data set to DB2.
For information about how to interpret the output in the output data set, see
“Output from SPUFI” on page 1025.
Your system administrator might use the DB2 resource limit facility (governor) to
set time limits for processing SQL statements in SPUFI. Those limits can be error
limits or warning limits.
If you execute an SQL statement through SPUFI that runs longer than this error
time limit, SPUFI terminates processing of that SQL statement and all statements
that follow in the SPUFI input data set. SPUFI displays a panel that lets you
commit or roll back the previously uncommitted changes that you have made.
That panel is shown in the following figure.
Statement text
Your SQL statement has exceeded the resource utilization threshold set
by your site administrator.
You must ROLLBACK or COMMIT all the changes made since the last COMMIT.
SPUFI processing for the current input file will terminate immediately
after the COMMIT or ROLLBACK is executed.
If you execute an SQL statement through SPUFI that runs longer than the warning
time limit for predictive governing, SPUFI displays the SQL STATEMENT
RESOURCE LIMIT EXCEEDED panel. On this panel, you can tell DB2 to continue
executing that statement, or stop processing that statement and continue to the
next statement in the SPUFI input data set. That panel is shown in the following
figure.
Statement text
You can now either CONTINUE executing this statement or BYPASS the execution
of this statement. SPUFI processing for the current input file will continue
after the CONTINUE or BYPASS processing is completed.
For information about the DB2 governor and how to set error and warning time
limits, see the topic “Controlling resource usage” in DB2 Administration Guide.
SPUFI
You can use SPUFI to execute SQL statements dynamically.
You can put comments about SQL statements either on separate lines or on the
same line. In either case, use two hyphens (--) to begin a comment. Any text
other than #SET TERMINATOR or #SET TOLWARN after the two hyphens is treated as
a comment; DB2 ignores everything to the right of the two hyphens.
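For example, both of the following comment placements are valid. The table name
is only an illustration.
-- This comment is on a separate line
SELECT LASTNAME, WORKDEPT    -- This comment is on the same line as the statement
  FROM TEST.EMP;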
After you complete any fields on the SPUFI panel and press Enter, those settings
are saved. When the SPUFI panel displays again, the data entry fields on the panel
contain the values that you previously entered. You can specify data set names and
processing options each time the SPUFI panel is displayed, as needed. Values that
you do not change remain in effect.
The following descriptions explain the fields that are available on the SPUFI panel.
1,2,3 INPUT DATA SET NAME
Identify the input data set in fields 1 through 3. This data set contains one
or more SQL statements that you want to execute. Allocate this data set
before you use SPUFI, if one does not already exist. Consider the following
rules:
v The name of the data set must conform to standard TSO naming
conventions.
v The data set can be empty before you begin the session. You can then
add the SQL statements by editing the data set from SPUFI.
v The data set can be either sequential or partitioned, but it must have the
following DCB characteristics:
– A record format (RECFM) of either F or FB.
– A logical record length (LRECL) of either 79 or 80. Use 80 for any
data set that the EXPORT command of DB2 QMF did not create.
v Data in the data set can begin in column 1. It can extend to column 71 if
the logical record length is 79, and to column 72 if the logical record
length is 80. SPUFI assumes that the last 8 bytes of each record are for
sequence numbers.
If you use this panel a second time, the name of the data set you
previously used displays in the field DATA SET NAME. To create a new
member of an existing partitioned data set, change only the member name.
4 OUTPUT DATA SET NAME
Enter the name of a data set to receive the output of the SQL statement.
You do not need to allocate the data set before you do this.
If the data set exists, the new output replaces its content. If the data set
does not exist, DB2 allocates a data set on the device type specified on the
CURRENT SPUFI DEFAULTS panel and then catalogs the new data set.
The device must be a direct-access storage device, and you must be
authorized to allocate space on that device.
Attributes required for the output data set are:
v Organization: sequential
v Record format: F, FB, FBA, V, VB, or VBA
v Record length: 80 to 32768 bytes, not less than the input data set
“Executing SQL by using SPUFI” on page 1013 shows the simplest choice,
entering RESULT. SPUFI allocates a data set named userid.RESULT and
sends all output to that data set. If a data set named userid.RESULT already
exists, SPUFI sends DB2 output to it, replacing all existing data.
5 CHANGE DEFAULTS
Enables you to change control values and characteristics of the output data
set and format of your SPUFI session. If you specify Y(YES), you can look
at the SPUFI defaults panel. See “Changing SPUFI defaults” on page 1019
for more information about the values you can specify and how they affect
SPUFI processing and output characteristics. You do not need to change
the SPUFI defaults for this example.
6 EDIT INPUT
To edit the input data set, leave Y(YES) on line 6. You can use the ISPF
editor to create a new member of the input data set and enter SQL
statements in it. (To process a data set that already contains a set of SQL
statements you want to execute immediately, enter N (NO). Specifying N
bypasses step 3 described in “Executing SQL by using SPUFI” on page
1013.)
7 EXECUTE
To execute SQL statements contained in the input data set, leave Y(YES) on
line 7.
SPUFI handles the SQL statements that can be dynamically prepared.
8 AUTOCOMMIT
To make changes to the DB2 data permanent, leave Y(YES) on line 8.
Specifying Y makes SPUFI issue COMMIT if all statements execute
successfully. If all statements do not execute successfully, SPUFI issues a
ROLLBACK statement, which deletes changes already made to the file
(back to the last commit point). For information about the COMMIT
statement, see the topic “COMMIT” in DB2 SQL Reference. For information
about the ROLLBACK statement, see the topic “ROLLBACK” in DB2 SQL
Reference.
If you specify N, DB2 displays the SPUFI COMMIT OR ROLLBACK panel
after it executes the SQL in your input data set. That panel prompts you to
COMMIT, ROLLBACK, or DEFER any updates made by the SQL. If you
enter DEFER, you neither commit nor roll back your changes.
9 BROWSE OUTPUT
To look at the results of your query, leave Y(YES) on line 9. SPUFI saves
the results in the output data set. You can look at them at any time, until
you delete or write over the data set.
10 CONNECT LOCATION
Specify the name of the database server, if applicable, to which you want
to submit SQL statements. SPUFI then issues a type 2 CONNECT
statement to this server.
SPUFI provides default values the first time that you use SPUFI for all options
except the DB2 subsystem name. Any changes that you make to these values
remain in effect until you change the values again.
v SPUFI panel. This panel opens if you specified NO for all of the processing
options on the SPUFI panel.
If you press the END key on the CURRENT SPUFI DEFAULTS panel, the
SPUFI panel is displayed, and you lose all the changes that you made on the
CURRENT SPUFI DEFAULTS panel.
5. If the CURRENT SPUFI DEFAULTS - PANEL 2 panel opens, specify values for
the fields on that panel and press Enter. All fields must contain a value. For
more information about these values, see “CURRENT SPUFI DEFAULTS -
PANEL 2 panel” on page 1023.
SPUFI saves your changes, and one of the following panels or data sets opens:
v EDIT panel. This panel opens if you specified YES in the EDIT INPUT field
on the SPUFI panel.
v Output data set. This data set opens if you specified NO in the EDIT INPUT
field on the SPUFI panel.
v SPUFI panel. This panel opens if you specified NO for all of the processing
options on the SPUFI panel.
| 6 SQL FORMAT
| Specify how SPUFI pre-processes the SQL input before passing it to DB2.
| Select one of the following options:
| SQL This is the preferred mode for SQL statements other than SQL
| procedural language. When you use this option, which is the
| default, SPUFI collapses each line of an SQL statement into a single
| line before passing the statement to DB2. SPUFI also discards all
| SQL comments.
| SQLCOMNT
| This mode is suitable for all SQL, but it is intended primarily for
| SQL procedural language processing. When this option is in effect,
| behavior is similar to SQL mode, except that SPUFI does not
| discard SQL comments. Instead, it automatically terminates each
| SQL comment with a line feed character (hex 25), unless the
| comment is already terminated by one or more line formatting
| characters. Use this option to process SQL procedural language
| with minimal modification by SPUFI.
| SQLPL
| This mode is suitable for all SQL, but it is intended primarily for
| SQL procedural language processing. When this option is in effect,
| SPUFI retains SQL comments and terminates each line of an SQL
| statement with a line feed character (hex 25) before passing the
| statement to DB2. Lines that end with a split token are not
| terminated with a line feed character. Use this mode to obtain
| improved diagnostics and debugging of SQL procedural language.
| You can also specify how SPUFI pre-processes the SQL input by using the
| --#SET SQLFORMAT statement. For more information about how to enter this
| statement, see “Executing SQL by using SPUFI” on page 1013.
| 7 SPACE UNIT
| Specify how space for the SPUFI output data set is to be allocated.
| TRK Track
| CYL Cylinder
| 8 PRIMARY SPACE
| Specify how many tracks or cylinders of primary space are to be allocated.
| 9 SECONDARY SPACE
| Specify how many tracks or cylinders of secondary space are to be
| allocated.
| 10 RECORD LENGTH
The record length must be at least 80 bytes. The maximum record length
| depends on the device type you use. The default value allows a 32756-byte
record.
Each record can hold a single line of output. If a line is longer than a
record, the output is truncated, and SPUFI discards fields that extend
beyond the record length.
| 11 BLOCKSIZE
Follow the normal rules for selecting the block size. For record format F,
the block size is equal to the record length. For FB and FBA, choose a
block size that is an even multiple of LRECL. For VB and VBA only, the
block size must be 4 bytes larger than the block size for FB or FBA.
This panel opens if you specify YES in the CHANGE PLAN NAMES field of the
CURRENT SPUFI DEFAULTS panel.
| DSNESP07 CURRENT SPUFI DEFAULTS - PANEL 2 SSID: DSN
| ===>
| Enter the following to control your SPUFI session:
| 1 CS ISOLATION PLAN ===> DSNESPCS (Name of plan for CS isolation level)
| 2 RR ISOLATION PLAN ===> DSNESPRR (Name of plan for RR isolation level)
| 3 UR ISOLATION PLAN ===> DSNESPUR (Name of plan for UR isolation level)
|
| Indicate warning message status:
| 4 BLANK CCSID WARNING ===> YES (Show warning if terminal CCSID is blank)
|
| PRESS: ENTER to process END to exit HELP for more information
|
| Figure 70. CURRENT SPUFI DEFAULTS - PANEL 2
|
The following descriptions explain the information on the CURRENT SPUFI
DEFAULTS - PANEL 2 panel.
1 CS ISOLATION PLAN
Specify the name of the plan that SPUFI uses when you specify an
isolation level of cursor stability (CS). By default, this name is DSNESPCS.
2 RR ISOLATION PLAN
Specify the name of the plan that SPUFI uses when you specify an
isolation level of repeatable read (RR). By default, this name is DSNESPRR.
3 UR ISOLATION PLAN
Specify the name of the plan that SPUFI uses when you specify an
isolation level of uncommitted read (UR). By default, this name is
DSNESPUR.
4 BLANK CCSID WARNING
Indicate whether to receive message DSNE345I when the terminal CCSID
setting is blank. A blank terminal CCSID setting occurs when the terminal
code page and character set cannot be queried or if they are not supported
by ISPF.
Recommendation: To avoid possible data contamination, use the default
setting of YES, unless you are specifically directed by your DB2 system
administrator to use NO.
Overriding the default SQL termination character is useful if you need to use a
different SQL terminator character for one particular SQL statement.
To set the SQL terminator character in a SPUFI input data set, specify the text
--#SET TERMINATOR character before that SQL statement to which you want this
character to apply. This text specifies that SPUFI is to interpret character as a
statement terminator.
Use a character other than a semicolon if you plan to execute a statement that
contains embedded semicolons. For example, suppose that you choose the
character # as the statement terminator. In this case, a CREATE TRIGGER
statement with embedded semicolons looks like this:
CREATE TRIGGER NEW_HIRE
AFTER INSERT ON EMP
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
END#
Example: The following example activates and then deactivates toleration of SQL
warnings:
SELECT * FROM MY.T1;
--#SET TOLWARN YES
SELECT * FROM YOUR.T1;
--#SET TOLWARN NO
Figure 71 shows the output from the sample program. An output data set contains
the following items for each SQL statement that DB2 executes:
v The executed SQL statement, copied from the input data set
v The results of executing the SQL statement
v The formatted SQLCA, if an error occurs during statement execution
At the end of the data set are summary statistics that describe the processing of the
input data set as a whole.
For SELECT statements that are executed with SPUFI, the message “SQLCODE IS
100” indicates an error-free result. If the message SQLCODE IS 100 is the only
result, DB2 is unable to find any rows that satisfy the condition that is specified in
the statement.
For all other types of SQL statements that are executed with SPUFI, the message
“SQLCODE IS 0” indicates an error-free result.
Other messages that you could receive from the processing of SQL statements
include:
v The number of rows that DB2 processed, that either:
– Your select operation retrieved
– Your update operation modified
– Your insert operation added to a table
– Your delete operation deleted from a table
v Which columns display truncated data because the data was too wide
You can use the Debug Tool either interactively or in batch mode.
Using the Debug Tool interactively: To test a user-defined function interactively
using the Debug Tool, you must have the Debug Tool installed on the z/OS system
where the user-defined function runs. To debug your user-defined function using
the Debug Tool, do the following:
1. Compile the user-defined function with the TEST option. This places
information in the program that the Debug Tool uses.
2. Invoke the Debug Tool. One way to do that is to specify the Language
Environment run-time TEST option. The TEST option controls when and how
the Debug Tool is invoked. The most convenient place to specify run-time
options is with the RUN OPTIONS clause of CREATE FUNCTION or ALTER
FUNCTION. See “Components of a user-defined function definition” on page
485 for more information about the RUN OPTIONS clause. For example,
suppose that you code this option:
TEST(ALL,*,PROMPT,JBJONES%SESSNA:)
This should be the first command that you enter from the terminal or include
in your commands file.
Using the Debug Tool in batch mode: To test your user-defined function in batch
mode, you must have the Debug Tool installed on the z/OS system where the
user-defined function runs. To debug your user-defined function in batch mode
using the Debug Tool, do the following:
1. If you plan to use the Language Environment run-time TEST option to invoke
the Debug Tool, compile the user-defined function with the TEST option. This
places information in the program that the Debug Tool uses during a
debugging session.
2. Allocate a log data set to receive the output from the Debug Tool. Put a DD
statement for the log data set in the startup procedure for the stored procedures
address space.
3. Enter commands in a data set that you want the Debug Tool to execute. Put a
DD statement for that data set in the startup procedure for the stored
procedures address space. To define the data set that contains the commands to
the Debug Tool, specify its data set name or DD name in the TEST run-time
option. For example, this option tells the Debug Tool to look for the commands
in the data set that is associated with DD name TESTDD:
TEST(ALL,TESTDD,PROMPT,*)
For more information about the Debug Tool, see Debug Tool User’s Guide and
Reference.
You can serialize I/O by running the WLM-established stored procedure address
space with NUMTCB=1.
You can then use TSO TEST and other commonly used debugging tools.
Testing a user-defined function by using SQL INSERT
statements
You can use SQL to insert debugging information into a DB2 table. This allows
other machines in the network (such as workstations) to easily access the data in
the table using DRDA access.
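For example, you might create a debugging table and insert a row into it at each
point of interest in the user-defined function. The table, column, and function
names in this sketch are hypothetical:
CREATE TABLE MYID.DEBUG_MESSAGES
  ( LOGTIME  TIMESTAMP    NOT NULL WITH DEFAULT,   -- when the message was written
    FUNCNAME VARCHAR(18),                          -- which function wrote it
    MSGTEXT  VARCHAR(200) );                       -- the debugging message

-- Issued from within the user-defined function:
INSERT INTO MYID.DEBUG_MESSAGES (FUNCNAME, MSGTEXT)
  VALUES ('MYUDF', 'Entered main processing loop');
A workstation application can then read the table through DRDA access while the
function runs or after it completes.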
| For detailed information about the Debug Tool, see Debug Tool User’s Guide and
| Reference.
| Before you begin debugging, write your COBOL stored procedure and set up the
| WLM environment.
| PROGRAM TYPE MAIN
| WLM ENVIRONMENT WLMENV1
| RUN OPTIONS 'POSIX(ON),TEST(,,,VADTCPIP&9.63.51.17:*)'
| 3. In the JCL startup procedure for WLM-established stored procedures address
| space, add the data set name of the Debug Tool load library to the STEPLIB
| concatenation. For example, suppose that ENV1PROC is the JCL procedure for
| application environment WLMENV1. The modified JCL for ENV1PROC might
| look like this:
| //DSNWLM PROC RGN=0K,APPLENV=WLMENV1,DB2SSN=DSN,NUMTCB=8
| //IEFPROC EXEC PGM=DSNX9WLM,REGION=&RGN,TIME=NOLIMIT,
| // PARM='&DB2SSN,&NUMTCB,&APPLENV'
| //STEPLIB DD DISP=SHR,DSN=DSN910.RUNLIB.LOAD
| // DD DISP=SHR,DSN=CEE.SCEERUN
| // DD DISP=SHR,DSN=DSN910.SDSNLOAD
| // DD DISP=SHR,DSN=EQAW.SEQAMOD <== DEBUG TOOL
| 4. On the workstation, start the VisualAge Remote Debugger daemon. This
| daemon waits for incoming requests from TCP/IP.
| 5. Call the stored procedure. When the stored procedure starts, a window that
| contains the debug session is displayed on the workstation. You can then
| execute Debug Tool commands to debug the stored procedure.
| The code against which you run the debug tools is the C source program that is
| produced by the program preparation process for the stored procedure. For
| detailed information about the Debug Tool, see Debug Tool User’s Guide and
| Reference.
| Before you begin debugging, write your C++ stored procedure and set up the
| WLM environment.
| To test the stored procedure with the Distributed Debugger feature of the C/C++
| Productivity Tools for z/OS and the Debug Tool:
| 1. When you define the stored procedure, include run-time option TEST with the
| suboption VADTCPIP&ipaddr in your RUN OPTIONS argument.
| VADTCPIP& tells the Debug Tool that it is interfacing with a workstation that
| runs VisualAge C++ and is configured for TCP/IP communication with your
| z/OS system. ipaddr is the IP address of the workstation on which you display
| your debug information. For example, this RUN OPTIONS value in a stored
| procedure definition indicates that debug information should go to the
| workstation with IP address 9.63.51.17:
| RUN OPTIONS 'POSIX(ON),TEST(,,,VADTCPIP&9.63.51.17:*)'
| 2. Precompile the stored procedure. Ensure that the modified source program that
| is the output from the precompile step is in a permanent, catalogued data set.
| 3. Compile the output from the precompile step. Specify the TEST, SOURCE, and
| OPT(0) compiler options.
| 4. In the JCL startup procedure for the stored procedures address space, add the
| data set name of the Debug Tool load library to the STEPLIB concatenation.
| With the Unified Debugger, you can observe the execution of the procedure code,
| set breakpoints for lines, and view or modify variable values.
| Related concepts
| Java stored procedures and user-defined functions (Programming for Java)
| Related tasks
| “Creating an external SQL procedure by using DSNTPSMP” on page 565
| Related reference
| “Sample programs to help you prepare and run external SQL procedures” on page
| 576
| The Unified Debugger (DB2 9 for z/OS Stored Procedures: Through the CALL
| and Beyond)
| Integrated Data Management Information Center
| ALTER PROCEDURE (SQL - native) (SQL Reference)
| CREATE PROCEDURE (SQL - native) (SQL Reference)
| Using Debug Tool interactively: To test a stored procedure interactively using the
| Debug Tool, you must have the Debug Tool installed on the z/OS system where
| the stored procedure runs.
| Using Debug Tool in batch mode: To test your stored procedure in batch mode, you
| must have the Debug Tool installed on the z/OS system where the stored
| procedure runs. To debug your stored procedure in batch mode using the Debug
| Tool, do the following:
| v Compile the stored procedure with option TEST, if you plan to use the Language
| Environment run-time option TEST to invoke the Debug Tool. This places
| information in the program that the Debug Tool uses during a debugging
| session.
| v Allocate a log data set to receive the output from the Debug Tool. Put a DD
| statement for the log data set in the start-up procedure for the stored procedures
| address space.
| v Enter commands in a data set that you want the Debug Tool to execute. Put a
| DD statement for that data set in the start-up procedure for the stored
| procedures address space. To define the commands data set to the Debug Tool,
| specify the commands data set name or DD name in the TEST run-time option.
| For example, to specify that the Debug Tool use the commands that are in the
| data set that is associated with the DD name TESTDD, include the following
| parameter in the TEST option:
| TEST(ALL,TESTDD,PROMPT,*)
| The first command in the commands data set should be:
| SET LOG ON FILE ddname;
| This command directs output from your debugging session to the log data set
| that you defined in the previous step. For example, if you defined a log data set
| with DD name INSPLOG in the stored procedures address space start-up
| procedure, the first command should be the following command:
| SET LOG ON FILE INSPLOG;
| v Invoke the Debug Tool. The following are two possible methods for invoking the
| Debug Tool:
| – Specify the run-time option TEST. The most convenient place to do that is in
| the RUN OPTIONS parameter of the CREATE PROCEDURE or ALTER
| PROCEDURE statement for the stored procedure.
| – Put CEETEST calls in the stored procedure source code. If you use this
| approach for an existing stored procedure, you must recompile, re-link, and
| bind it, and issue the STOP PROCEDURE and START PROCEDURE
| commands to reload the stored procedure.
| You can combine the run-time option TEST with CEETEST calls. For example,
| you might want to use TEST to name the commands data set but use
| CEETEST calls to control when the Debug Tool takes control.
| For more information about using the Debug Tool for z/OS, see Debug Tool User’s
| Guide and Reference.
| Recording stored procedure debugging messages in a file
| You can debug external stored procedures and external SQL procedures by
| recording debugging messages in a disk file or JES spool file. You cannot use this
| debugging technique for native SQL procedures or Java stored procedures.
| Using this method, you can use TSO TEST and other commonly used debugging
| tools.
For information about the compiler or assembler test facilities, see the publications
for the compiler or CODE/370. The compiler publications include information
about the appropriate debugger for the language you are using.
You can also use ISPF Dialog Test to debug your program. You can run all or
portions of your application, examine the results, make changes, and rerun it. For
more information about ISPF Dialog Test, see z/OS ISPF User’s Guide Volumes 1 and
2.
v Your JCL.
IMS
– If you are using IMS, have you included the DL/I option statement in the
correct format?
– Have you included the region size parameter in the EXEC statement? Does it
specify a region size that is large enough for the required storage for the DB2
interface, the TSO, IMS, or CICS system, and your program?
– Have you included the names of all data sets (DB2 and non-DB2) that the
program requires?
v Your program.
You can also use dumps to help localize problems in your program. For
example, one of the more common error situations occurs when your program is
running and you receive a message that it abended. In this situation, your test
procedure might be to capture a TSO dump. To do so, you must allocate a
SYSUDUMP or SYSABEND dump data set before calling DB2. When you press
the ENTER key (after the error message and READY message), the system
requests a dump. You then need to use the FREE command to deallocate the
dump data set.
The DB2 precompiler provides SYSTERM output when you allocate the DD name
SYSTERM. If you use the program preparation panels to prepare and run your
program, DB2I allocates SYSTERM according to the TERM option that you specify.
You can use the line number that is provided in each error message in the
SYSTERM output to locate the failing source statement.
DSNH104I E DSNHPARS LINE 32 COL 26 ILLEGAL SYMBOL "X" VALID SYMBOLS ARE:, FROM (1)
SELECT VALUE INTO HIPPO X; (2)
Notes:
1. Error message.
2. The failing source statement.
When you use the program preparation panels to prepare and run your program,
DB2 allocates SYSPRINT according to the TERM option that you specify (on line 12 of
the PROGRAM PREPARATION: COMPILE, PRELINK, LINK, AND RUN panel).
As an alternative, when you use the DSNH command procedure (CLIST), you can
specify PRINT(TERM) to obtain SYSPRINT output at your terminal, or you can
specify PRINT(qualifier) to place the SYSPRINT output into a data set named
authorizationID.qualifier.PCLIST. Assuming that you do not specify PRINT as
LEAVE, NONE, or TERM, DB2 issues a message when the precompiler finishes,
telling you where to find your precompiler listings. This helps you locate your
diagnostics quickly and easily.
The SYSPRINT output can provide information about your precompiled source
module if you specify the options SOURCE and XREF when you start the DB2
precompiler.
| CONNECT(2)
| DEC(15)
| FLAG(I)
| HOST(PLI)
| FLOAT(S390)
| LINECOUNT(60)
| MARGINS(2,72)
| NEWFUN(YES)
| ONEPASS
| OPTIONS
| PERIOD
| SOURCE
| STDSQL(NO)
| SQL(DB2)
| XREF
Notes:
1. This section lists the options that are specified at precompilation time. This list
does not appear if one of the precompiler options is NOOPTIONS.
2. This section lists the options that are in effect, including defaults, forced values,
and options that you specified. The DB2 precompiler overrides or ignores any
options that you specify that are inappropriate for the host language.
Notes:
v The left column of sequence numbers, which the DB2 precompiler generates, is
for use with the symbol cross-reference listing, the precompiler error messages,
and the BIND error messages.
v The right column shows sequence numbers that come from the sequence
numbers that are supplied with your source statements.
...
Notes:
DATA NAMES
Identifies the symbolic names that are used in source statements. Names
enclosed in double quotation marks (″) or apostrophes (’) are names of SQL
entities such as tables, columns, and authorization IDs. Other names are host
variables.
DEFN
Is the number of the line that the precompiler generates to define the name.
**** means that the object was not defined, or the precompiler did not
recognize the declarations.
REFERENCE
Contains two kinds of information: the symbolic name, which the source
program defines, and which lines refer to the symbolic name. If the symbolic
name refers to a valid host variable, the list also identifies the data type or the
word STRUCTURE.
SOURCE STATISTICS
SOURCE LINES READ: 15231
NUMBER OF SYMBOLS: 1282
SYMBOL TABLE BYTES EXCLUDING ATTRIBUTES: 64323
Notes:
1. Summary statement that indicates the number of source lines.
2. Summary statement that indicates the number of symbolic names in the symbol
table (SQL names and host names).
3. Storage requirement statement that indicates the number of bytes for the
symbol table.
4. Summary statement that indicates the number of messages that are printed.
5. Summary statement that indicates the number of errors that are detected but
not printed. You might get this statement if you specify the option FLAG.
6. Storage requirement statement that indicates the number of bytes of working
storage that are actually used by the DB2 precompiler to process your source
statements.
7. Return code 0 = success, 4 = warning, 8 = error, 12 = severe error, and 16 =
unrecoverable error.
8. Error messages (this example detects only one error).
When your program encounters an error that does not result in an abend, it can
pass all the required error information to a standard error routine. Online
programs might also send an error message to the terminal.
The TSO TEST command is especially useful for debugging assembler programs.
For more information about the TEST command, see z/OS TSO/E Command
Reference.
When your program encounters an error, it can pass all the required error
information to a standard error routine. Online programs can also send an error
message to the originating logical terminal.
An interactive program also can send a message to the master terminal operator
giving information about the termination of the program. To do that, the program
places the logical terminal name of the master terminal in an express PCB and
issues one or more ISRT calls.
Some organizations run a BMP at the end of the day to list all the errors that
occurred during the day. If your organization does this, you can send a message by
using an express PCB that has its destination set for that BMP.
Batch Terminal Simulator: The Batch Terminal Simulator (BTS) enables you to test
IMS application programs. BTS traces application program DL/I calls and SQL
statements, and it simulates data communication functions. It can make a TSO
terminal appear as an IMS terminal to the terminal operator, which enables the end
user to interact with the application as though it were an online application. The
user can use any application program that is under the user’s control to access any
database (whether DL/I or DB2) that is under the user’s control. Access to DB2
databases requires BTS to operate in batch BMP or TSO BMP mode.
v Transaction dump, if produced
Using CICS facilities, you can have a printed error record; you can also print the
SQLCA and SQLDA contents.
CICS provides the following aids to the testing, monitoring, and debugging of
application programs:
v Execution (Command Level) Diagnostic Facility (EDF). EDF shows CICS
commands for all releases of CICS.
v Abend recovery. You can use the HANDLE ABEND command to deal with
abend conditions. You can use the ABEND command to cause a task to abend.
v Trace facility. A trace table can contain entries showing the execution of various
CICS commands, SQL statements, and entries that are generated by application
programs; you can have these entries written to main storage and, optionally, to
an auxiliary storage device.
v Dump facility. You can specify areas of main storage to dump onto a sequential
data set, either tape or disk, for subsequent offline formatting and printing with
a CICS utility program.
v Journals. For statistical or monitoring purposes, facilities can create entries in
special data sets called journals. The system log is a journal.
v Recovery. When an abend occurs, CICS restores certain resources to their
original state so that the operator can easily resubmit a transaction for restart.
You can use the SYNCPOINT command to subdivide a program so that you
only need to resubmit the uncompleted part of a transaction.
For more details about each of these topics, see CICS Transaction Server for z/OS
Application Programming Reference.
EDF intercepts the running application program at various points and displays
helpful information about the statement type, input and output variables, and any
error conditions after the statement executes. It also displays any screens that the
application program sends, so that you can converse with the application program
during testing just as a user would on a production system.
EDF displays essential information before and after an SQL statement runs, while
the task is in EDF mode. This can be a significant aid in debugging CICS
transaction programs that contain SQL statements. The SQL information that EDF
displays is helpful for debugging programs and for error analysis after an SQL
error or warning. Using this facility reduces the amount of work that you need to
do to write special error handlers.
EDF before execution: The following figure shows an example of an EDF screen
before it executes an SQL statement. The names of the key information fields on
this panel are in boldface.
ENTER: CONTINUE
PF1 : UNDEFINED PF2 : UNDEFINED PF3 : UNDEFINED
PF4 : SUPPRESS DISPLAYS PF5 : WORKING STORAGE PF6 : USER DISPLAY
PF7 : SCROLL BACK PF8 : SCROLL FORWARD PF9 : STOP CONDITIONS
PF10: PREVIOUS DISPLAY PF11: UNDEFINED PF12: ABEND USER TASK
SQL statements that contain input host variables: The IVAR (input host variables)
section and its attendant fields appear only when the executing statement contains
input host variables.
The host variables section includes the variables from predicates, the values used
for inserting or updating, and the text of dynamic SQL statements that are being
prepared. The address of the input variable is AT X’nnnnnnnn’.
Specifies the length of the host variable.
v IND=indicator variable status number
Specifies the indicator variable that is associated with this particular host
variable. A value of zero indicates that no indicator variable exists. If the value
for the selected column is null, DB2 puts a negative value in the indicator
variable for this host variable. For more information about indicator variables, see
“Indicator variables, arrays, and structures” on page 145.
v DATA=host variable data
Specifies the data, displayed in hexadecimal format, that is associated with this
host variable. If the data exceeds what can display on a single line, three periods
(...) appear at the far right to indicate that more data is present.
EDF after execution: The following figure shows an example of the first EDF screen
that is displayed after an SQL statement executes. The names of the key
information fields on this panel are in boldface.
ENTER: CONTINUE
PF1 : UNDEFINED PF2 : UNDEFINED PF3 : END EDF SESSION
PF4 : SUPPRESS DISPLAYS PF5 : WORKING STORAGE PF6 : USER DISPLAY
PF7 : SCROLL BACK PF8 : SCROLL FORWARD PF9 : STOP CONDITIONS
PF10: PREVIOUS DISPLAY PF11: UNDEFINED PF12: ABEND USER TASK
Plus signs (+) on the left of the screen indicate that you can see additional EDF
output by using PF keys to scroll the screen forward or back.
The OVAR (output host variables) section and its attendant fields are displayed
only when the executing statement returns output host variables.
The following figure contains the rest of the EDF output for this example.
ENTER: CONTINUE
PF1 : UNDEFINED PF2 : UNDEFINED PF3 : END EDF SESSION
PF4 : SUPPRESS DISPLAYS PF5 : WORKING STORAGE PF6 : USER DISPLAY
PF7 : SCROLL BACK PF8 : SCROLL FORWARD PF9 : STOP CONDITIONS
PF10: PREVIOUS DISPLAY PF11: UNDEFINED PF12: ABEND USER TASK
The attachment facility automatically displays SQL information while in the EDF
mode. (You can start EDF as outlined in the appropriate CICS application
programmer’s reference manual.) If this information is not displayed, contact the
person that is responsible for installing and migrating DB2.
Answer: When you receive an SQL error because of a constraint violation, print out
the SQLCA. You can use the DSNTIAR routine described in “Displaying SQLCA
fields by calling DSNTIAR” on page 204 to format the SQLCA for you. Check the
SQL error message insertion text (SQLERRM) for the name of the constraint. For
information about possible violations, see SQLCODEs -530 through -548 in the
topic “Error SQL codes” in DB2 Codes.
Chapter 20. DB2 sample applications and data
DB2 provides sample data and applications that you can use to learn about DB2
capabilities or as models for your own situations.
Related reference
DB2 sample tables (Introduction to DB2 for z/OS)
The sample storage group, databases, table spaces, tables, and views are
created when you run the installation sample jobs DSNTEJ1 and DSNTEJ7. DB2
sample objects that include LOBs are created in job DSNTEJ7. All other sample
objects are created in job DSNTEJ1. The CREATE INDEX statements for the sample
tables are not shown here; they, too, are created by the DSNTEJ1 and DSNTEJ7
sample jobs.
The activity table resides in database DSN8D91A and is created with the
following statement:
CREATE TABLE DSN8910.ACT
(ACTNO SMALLINT NOT NULL,
ACTKWD CHAR(6) NOT NULL,
ACTDESC VARCHAR(20) NOT NULL,
PRIMARY KEY (ACTNO) )
IN DSN8D91A.DSN8S91P
CCSID EBCDIC;
The following table shows the content of the columns in the activity table.
Table 166. Columns of the activity table
Column Column name Description
1 ACTNO Activity ID (the primary key)
2 ACTKWD Activity keyword (up to six characters)
3 ACTDESC Activity description
The activity table is a parent table of the project activity table, through a foreign
key on column ACTNO.
The following table shows the content of the columns in the department table.
Table 168. Columns of the department table
Column Column name Description
1 DEPTNO Department ID, the primary key.
2 DEPTNAME A name that describes the general activities of the
department.
3 MGRNO Employee number (EMPNO) of the department
manager.
4 ADMRDEPT ID of the department to which this department
reports; the department at the highest level reports
to itself.
5 LOCATION The remote location name.
The LOCATION column contains null values until sample job DSNTEJ6 updates
this column with the location name.
The department table is a dependent of the employee table, through its foreign key
on column MGRNO.
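For reference, the department table follows the same pattern as the activity table
shown earlier. The following sketch assumes the table space DSN8D91A.DSN8S91D,
based on the naming conventions of the other sample objects; the sample job
DSNTEJ1 contains the authoritative definition:
CREATE TABLE DSN8910.DEPT
  (DEPTNO   CHAR(3)     NOT NULL,
   DEPTNAME VARCHAR(36) NOT NULL,
   MGRNO    CHAR(6)             ,
   ADMRDEPT CHAR(3)     NOT NULL,
   LOCATION CHAR(16)            ,
   PRIMARY KEY (DEPTNO) )
  IN DSN8D91A.DSN8S91D
  CCSID EBCDIC;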
The following table shows the type of content of each of the columns in the
employee table. The table has a check constraint, NUMBER, which checks that the
four-digit phone number is in the numeric range 0000 to 9999.
Table 171. Columns of the employee table
Column Column name Description
1 EMPNO Employee number (the primary key)
2 FIRSTNME First name of employee
3 MIDINIT Middle initial of employee
4 LASTNAME Last name of employee
5 WORKDEPT ID of department in which the employee works
6 PHONENO Employee telephone number
7 HIREDATE Date of hire
8 JOB Job held by the employee
9 EDLEVEL Number of years of formal education
10 SEX Sex of the employee (M or F)
11 BIRTHDATE Date of birth
12 SALARY Yearly salary in dollars
13 BONUS Yearly bonus in dollars
14 COMM Yearly commission in dollars
(Table 173 shows the right half of the content of the employee table.)
The employee table is a dependent of the department table, through its foreign key
on column WORKDEPT.
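As an illustration of this relationship, the following query (not part of the
sample applications) joins the two tables to list each employee with the name of
the department in which the employee works:
SELECT E.EMPNO, E.LASTNAME, E.WORKDEPT, D.DEPTNAME
  FROM DSN8910.EMP E, DSN8910.DEPT D
  WHERE E.WORKDEPT = D.DEPTNO;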
Each row of the photo and resume table contains a photo of the employee,
in two formats, and the employee’s resume. The photo and resume table resides in
table space DSN8D91L.DSN8S91B. The following statement creates the table:
CREATE TABLE DSN8910.EMP_PHOTO_RESUME
(EMPNO CHAR(06) NOT NULL,
EMP_ROWID ROWID NOT NULL GENERATED ALWAYS,
PSEG_PHOTO BLOB(500K),
BMP_PHOTO BLOB(100K),
RESUME CLOB(5K),
PRIMARY KEY (EMPNO))
IN DSN8D91L.DSN8S91B
CCSID EBCDIC;
DB2 requires an auxiliary table for each LOB column in a table. The following
statements define the auxiliary tables for the three LOB columns in
DSN8910.EMP_PHOTO_RESUME:
CREATE AUX TABLE DSN8910.AUX_BMP_PHOTO
IN DSN8D91L.DSN8S91M
STORES DSN8910.EMP_PHOTO_RESUME
COLUMN BMP_PHOTO;
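The auxiliary tables for the other two LOB columns follow the same pattern. In the
following sketch, the LOB table space names are assumptions that follow the sample
naming conventions; the sample job DSNTEJ7 contains the authoritative definitions:
CREATE AUX TABLE DSN8910.AUX_PSEG_PHOTO
  IN DSN8D91L.DSN8S91L
  STORES DSN8910.EMP_PHOTO_RESUME
  COLUMN PSEG_PHOTO;

CREATE AUX TABLE DSN8910.AUX_EMP_RESUME
  IN DSN8D91L.DSN8S91N
  STORES DSN8910.EMP_PHOTO_RESUME
  COLUMN RESUME;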
The following table shows the content of the columns in the employee photo and
resume table.
Table 175. Columns of the employee photo and resume table
Column Column name Description
1 EMPNO Employee ID (the primary key).
2 EMP_ROWID Row ID to uniquely identify each row of the table.
DB2 supplies the values of this column.
3 PSEG_PHOTO Employee photo, in PSEG format.
4 BMP_PHOTO Employee photo, in BMP format.
5 RESUME Employee resume.
The following table shows the indexes for the employee photo and resume table.
Table 176. Indexes of the employee photo and resume table
Name On column Type of index
DSN8910.XEMP_PHOTO_RESUME EMPNO Primary, ascending
The employee photo and resume table is a parent table of the project table,
through a foreign key on column RESPEMP.
The project table resides in database DSN8D91A. Because this table has
foreign keys that reference DEPT and EMP, those tables and the indexes on their
primary keys must be created first. Then PROJ is created with the following
statement:
CREATE TABLE DSN8910.PROJ
(PROJNO CHAR(6) PRIMARY KEY NOT NULL,
PROJNAME VARCHAR(24) NOT NULL WITH DEFAULT
'PROJECT NAME UNDEFINED',
DEPTNO CHAR(3) NOT NULL REFERENCES
DSN8910.DEPT ON DELETE RESTRICT,
RESPEMP CHAR(6) NOT NULL REFERENCES
DSN8910.EMP ON DELETE RESTRICT,
PRSTAFF DECIMAL(5, 2) ,
PRSTDATE DATE ,
PRENDATE DATE ,
MAJPROJ CHAR(6))
IN DSN8D91A.DSN8S91P
CCSID EBCDIC;
Because the project table is self-referencing, the foreign key for that constraint must
be added later with the following statement:
ALTER TABLE DSN8910.PROJ
FOREIGN KEY RPP (MAJPROJ) REFERENCES DSN8910.PROJ
ON DELETE CASCADE;
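Because MAJPROJ refers back to PROJNO, a query like the following one lists the
direct subprojects of a given major project. The project number shown here is only
an illustrative value:
SELECT PROJNO, PROJNAME, MAJPROJ
  FROM DSN8910.PROJ
  WHERE MAJPROJ = 'MA2100';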
The following table shows the content of the columns of the project table.
Table 178. Columns of the project table
Column Column name Description
1 PROJNO Project ID (the primary key)
2 PROJNAME Project name
3 DEPTNO ID of department responsible for the project
The following table shows the indexes for the project table:
Table 179. Indexes of the project table
Name On column Type of index
DSN8910.XPROJ1 PROJNO Primary, ascending
DSN8910.XPROJ2 RESPEMP Ascending
The project activity table resides in database DSN8D91A. Because this table
has foreign keys that reference PROJ and ACT, those tables and the indexes on
their primary keys must be created first. Then PROJACT is created with the
following statement:
CREATE TABLE DSN8910.PROJACT
(PROJNO CHAR(6) NOT NULL,
ACTNO SMALLINT NOT NULL,
ACSTAFF DECIMAL(5,2) ,
ACSTDATE DATE NOT NULL,
ACENDATE DATE ,
PRIMARY KEY (PROJNO, ACTNO, ACSTDATE),
FOREIGN KEY RPAP (PROJNO) REFERENCES DSN8910.PROJ
ON DELETE RESTRICT,
FOREIGN KEY RPAA (ACTNO) REFERENCES DSN8910.ACT
ON DELETE RESTRICT)
IN DSN8D91A.DSN8S91P
CCSID EBCDIC;
The following table shows the content of the columns of the project activity table.
Table 180. Columns of the project activity table
Column Column name Description
1 PROJNO Project ID
2 ACTNO Activity ID
3 ACSTAFF Estimated mean number of employees that are
needed to staff the activity
4 ACSTDATE Estimated activity start date
5 ACENDATE Estimated activity completion date
The following table shows the index of the project activity table:
Table 181. Index of the project activity table
Name On columns Type of index
DSN8910.XPROJAC1 PROJNO, ACTNO, ACSTDATE Primary, ascending
The project activity table is a parent table of the employee to project activity table,
through a foreign key on columns PROJNO, ACTNO, and EMSTDATE. It is a
dependent of the following tables:
v The activity table, through its foreign key on column ACTNO
v The project table, through its foreign key on column PROJNO
Related reference
“Project table (DSN8910.PROJ)” on page 1056
“Activity table (DSN8910.ACT)” on page 1049
The following table shows the content of the columns in the employee to project
activity table.
Table 182. Columns of the employee to project activity table
Column Column name Description
1 EMPNO Employee ID number
2 PROJNO Project ID of the project
3 ACTNO ID of the activity within the project
4 EMPTIME A proportion of the employee’s full time (between
0.00 and 1.00) that is to be spent on the activity
5 EMSTDATE Date the activity starts
6 EMENDATE Date the activity ends
The following table shows the indexes for the employee to project activity table:
Table 183. Indexes of the employee to project activity table
Name On columns Type of index
DSN8910.XEMPPROJACT1 PROJNO, ACTNO, EMSTDATE, EMPNO Unique, ascending
DSN8910.XEMPPROJACT2 EMPNO Ascending
The table resides in database DSN8D91A, and is defined with the following
statement:
CREATE TABLE DSN8910.DEMO_UNICODE
(LOWER_A_TO_Z CHAR(26) ,
UPPER_A_TO_Z CHAR(26) ,
ZERO_TO_NINE CHAR(10) ,
X00_TO_XFF VARCHAR(256) FOR BIT DATA)
IN DSN8D81E.DSN8S81U
CCSID UNICODE;
The following table shows the content of the columns in the Unicode sample table:
Table 184. Columns of the Unicode sample table
Column Column Name Description
1 LOWER_A_TO_Z Array of characters, ’a’ to ’z’
2 UPPER_A_TO_Z Array of characters, ’A’ to ’Z’
3 ZERO_TO_NINE Array of characters, ’0’ to ’9’
4 X00_TO_XFF Array of characters, x’00’ to x’FF’
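For example, you might run a query like the following one under SPUFI to compare
the character and hexadecimal forms of one of the columns; the use of the HEX
function here is only an illustration:
SELECT ZERO_TO_NINE, HEX(ZERO_TO_NINE)
  FROM DSN8910.DEMO_UNICODE;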
The following figure shows relationships among the sample tables. You can
find descriptions of the columns with the descriptions of the tables.
[Figure: Relationships among the sample tables DEPT, EMP, EMP_PHOTO_RESUME, ACT,
PROJ, PROJACT, and EMPPROJACT, showing the delete rules (CASCADE, SET NULL, and
RESTRICT) of the foreign keys that connect them.]
The following table indicates the tables on which each view is defined and
the sample applications that use the view. All view names have the qualifier
DSN8910.
Table 185. Views on sample tables
View name     On tables or views      Used in application
VDEPT         DEPT                    Organization, Project
VHDEPT        DEPT                    Distributed organization
VEMP          EMP                     Distributed organization, Organization, Project
VPROJ         PROJ                    Project
VACT          ACT                     Project
VPROJACT      PROJACT                 Project
VEMPPROJACT   EMPPROJACT              Project
VDEPMG1       DEPT, EMP               Organization
VEMPDPT1      DEPT, EMP               Organization
VASTRDE1      DEPT
VASTRDE2      VDEPMG1, EMP            Organization
VPROJRE1      PROJ, EMP               Project
VPSTRDE1      VPROJRE1, VPROJRE2      Project
The following figure shows how the sample tables are related to databases
and storage groups. Two databases are used to illustrate the possibility.
[Figure: Relationship of the sample tables to databases and storage groups. Separate
table spaces hold the department table (DSN8SvrD), the employee table (DSN8SvrE), the
other application tables, and the tables that are common for programming (DSN8SvrP);
separate LOB spaces hold the employee photo and resume table.]
In addition to the storage group and databases that are shown in the preceding
figure, the storage group DSN8G91U and database DSN8D91U are created when
you run DSNTEJ2A.
The storage group that is used to store sample application data is defined
by the following statement:
CREATE STOGROUP DSN8G910
VOLUMES (DSNV01)
VCAT DSNC910;
DSN8D91P is the database that is used for tables that are related to
programs. The other databases are used for tables that are related to applications.
The databases are defined by the following statements:
| CREATE DATABASE DSN8D91A
| STOGROUP DSN8G910
| BUFFERPOOL BP0
| CCSID EBCDIC;
|
| CREATE DATABASE DSN8D91P
| STOGROUP DSN8G910
| BUFFERPOOL BP0
| CCSID EBCDIC;
|
| CREATE DATABASE DSN8D91L
| STOGROUP DSN8G910
| BUFFERPOOL BP0
| CCSID EBCDIC;
This topic describes the DB2 sample applications and the environments under
which each application runs. It also provides information on how to use the
applications, and how to print the application listings.
You can examine the source code for the sample application programs in the online
sample library included with the DB2 product. The name of this sample library is
prefix.SDSNSAMP.
You can use the applications interactively by accessing data in the sample tables on
screen displays (panels). You can also access the sample tables in batch when using
the phone applications. All sample objects have PUBLIC authorization, which
makes the samples easier to run.
DB2 provides four sample programs that many users find helpful as productivity
aids. These programs are shipped as source code, so you can modify them to meet
your needs. The programs are:
DSNTIAUL
The sample unload program. This program, which is written in assembler
language, is a simple alternative to the UNLOAD utility. It unloads some
or all rows from up to 100 DB2 tables. With DSNTIAUL, you can unload
data of any DB2 built-in data type or distinct type. DSNTIAUL unloads the
rows in a form that is compatible with the LOAD utility and generates
utility control statements for LOAD. DSNTIAUL also lets you execute any
SQL non-SELECT statement that can be executed dynamically.
DSNTIAD
A sample dynamic SQL program that is written in assembler language.
With this program, you can execute any SQL statement that can be
executed dynamically, except a SELECT statement.
DSNTEP2
A sample dynamic SQL program that is written in the PL/I language. With
this program, you can execute any SQL statement that can be executed
dynamically. You can use the source version of DSNTEP2 and modify it to
meet your needs, or, if you do not have a PL/I compiler at your
installation, you can use the object code version of DSNTEP2.
DSNTEP4
A sample dynamic SQL program that is written in the PL/I language. This
program is similar to DSNTEP2, except that it uses multi-row fetch to improve
performance.
Because these four programs also accept the static SQL statements CONNECT, SET
CONNECTION, and RELEASE, you can use the programs to access DB2 tables at
remote locations.
You can use DSNTEP2, DSNTEP4, and DSNTIAUL to retrieve Unicode UTF-16
graphic data. However, these programs might not be able to display some
characters, if those characters have no mapping in the target SBCS EBCDIC CCSID.
DSNTIAUL and DSNTIAD are shipped only as source code, so you must
precompile, assemble, link, and bind them before you can use them. If you want to
use the source code version of DSNTEP2 or DSNTEP4, you must precompile,
compile, link, and bind it. You need to bind the object code version of DSNTEP2 or
DSNTEP4 before you can use it. Usually a system administrator prepares the
programs as part of the installation process. The following table indicates which
installation job prepares each sample program. All installation jobs are in data set
DSN910.SDSNSAMP.
Table 186. Jobs that prepare DSNTIAUL, DSNTIAD, DSNTEP2, and DSNTEP4
Program name Program preparation job
DSNTIAUL DSNTEJ2A
DSNTIAD DSNTIJTM
DSNTEP2 (source) DSNTEJ1P
DSNTEP2 (object) DSNTEJ1L
DSNTEP4 (source) DSNTEJ1P
DSNTEP4 (object) DSNTEJ1L
The following table lists the load module name and plan name that you must
specify, and the parameters that you can specify when you run each program. See
the following topics for the meaning of each parameter.
Table 187. DSN RUN option values for DSNTIAUL, DSNTIAD, DSNTEP2, and DSNTEP4
Program name Load module Plan Parameters
DSNTIAUL DSNTIAUL DSNTIB91 SQL
number of rows per fetch
TOLWARN(NO|YES)
DSNTIAD DSNTIAD DSNTIA91 RC0
SQLTERM(termchar)
DSNTEP2 DSNTEP2 DSNTEP91 ALIGN(MID)
or ALIGN(LHS)
NOMIXED or MIXED
SQLTERM(termchar)
TOLWARN(NO|YES)
| PREPWARN
The remainder of this section contains the following information about running
each program:
v Descriptions of the input parameters
v Data sets that you must allocate before you run the program
v Return codes from the program
v Examples of invocation
Related reference
RUN (DSN) (DB2 Command Reference)
Organization application:
Project application:
The phone application lets you view or update individual employee phone
numbers. There are different versions of the application for ISPF/TSO, CICS, IMS,
and batch:
v ISPF/TSO applications use COBOL and PL/I.
v CICS and IMS applications use PL/I.
v Batch applications use C, C++, COBOL, FORTRAN, and PL/I.
The user-defined function applications consist of a client program that invokes the
sample user-defined functions and a set of user-defined functions that perform the
following functions:
v Convert the current date to a user-specified format
v Convert a date from one format to another
v Convert the current time to a user-specified format
v Convert a time from one format to another
v Return the day of the week for a user-specified date
v Return the month for a user-specified date
v Format a floating point number as a currency value
v Return the table name for a table, view, or alias
v Return the qualifier for a table, view or alias
v Return the location for a table, view or alias
v Return a table of weather information
All programs are written in C or C++ and run in the TSO batch environment.
LOB application:
The programs that create and populate the LOB objects use DSNTIAD and run in
the TSO batch environment. The program that manipulates the LOB data is written
in C and runs under ISPF/TSO.
The following table shows the environments under which each application runs,
and the languages the applications use for each environment.
Table 188. Application languages and environments
Programs               ISPF/TSO    IMS         CICS        Batch       SPUFI
Dynamic SQL programs   Assembler
                       PL/I
Exit routines          Assembler   Assembler   Assembler   Assembler   Assembler
Organization           COBOL       COBOL       COBOL
                                   PL/I        PL/I
Note:
1. All of the stored procedure applications consist of a calling program, a stored
procedure program, or both.
Related reference
“Data sets that the precompiler uses” on page 893
DSNTIAUL
Use the DSNTIAUL program to unload data from DB2 tables into sequential data
sets.
This topic contains information that you need when you run DSNTIAUL,
including parameters, data sets, return codes, and invocation examples.
DSNTIAUL parameters:
SQL
Specify SQL to indicate that your input data set contains one or more complete
SQL statements, each of which ends with a semicolon. You can include any
SQL statement that can be executed dynamically in your input data set. In
addition, you can include the static SQL statements CONNECT, SET
CONNECTION, or RELEASE. DSNTIAUL uses the SELECT statements to
determine which tables to unload and dynamically executes all other
statements except CONNECT, SET CONNECTION, and RELEASE. DSNTIAUL
executes CONNECT, SET CONNECTION, and RELEASE statically to connect
to remote locations.
number of rows per fetch
Specify a number from 1 to 32767 to specify the number of rows per fetch that
DSNTIAUL retrieves. If you do not specify this number, DSNTIAUL retrieves
100 rows per fetch. This parameter can be specified with the SQL parameter.
Specify 1 to retrieve data from a remote site when DSNTIAUL is bound with
the DBPROTOCOL(PRIVATE) option.
TOLWARN
Specify NO (the default) or YES to indicate whether DSNTIAUL continues to
retrieve rows after receiving an SQL warning:
NO If a warning occurs when DSNTIAUL executes an OPEN or FETCH to
retrieve rows, DSNTIAUL stops retrieving rows. If the SQLWARN1,
SQLWARN2, SQLWARN6, or SQLWARN7 flag is set when DSNTIAUL
executes a FETCH to retrieve rows, DSNTIAUL continues to retrieve
rows.
YES If a warning occurs when DSNTIAUL executes an OPEN or FETCH to
retrieve rows, DSNTIAUL continues to retrieve rows.
| LOBFILE(prefix)
| Specify LOBFILE to indicate that you want DSNTIAUL to dynamically allocate
| data sets, each to receive the full content of a LOB cell. (A LOB cell is the
| intersection of a row and a LOB column.) If you do not specify the LOBFILE
| option, you can unload up to only 32 KB of data from a LOB column.
| prefix
| Specify a high-level qualifier for these dynamically allocated data sets. You
| can specify up to 17 characters. The qualifier must conform with the rules
| for TSO data set names.
| DSNTIAUL uses a naming convention for these dynamically allocated data sets
| of prefix.Qiiiiiii.Cjjjjjjj.Rkkkkkkk, where these qualifiers have the following
| values:
| prefix
| The high-level qualifier that you specify in the LOBFILE option.
| Qiiiiiii
| The sequence number (starting from 0) of a query that returns one or more
| LOB columns.
| Cjjjjjjj
| The sequence number (starting from 0) of a column in the result set of the
| query.
| Rkkkkkkk
| The sequence number (starting from 0) of a row in the result set of the
| query.
If you do not specify the SQL parameter, your input data set must contain one or
more single-line statements (without a semicolon) that use the following syntax:
table or view name [WHERE conditions] [ORDER BY columns]
Each input statement must be a valid SQL SELECT statement with the clause
SELECT * FROM omitted and with no ending semicolon. DSNTIAUL generates a
SELECT statement for each input statement by appending your input line to
SELECT * FROM, then uses the result to determine which tables to unload. For this
input format, the text for each table specification can be a maximum of 72 bytes
and must not span multiple lines.
You can use the input statements to specify SELECT statements that join two or
more tables or select specific columns from a table. If you specify columns, you
need to modify the LOAD statement that DSNTIAUL generates.
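For example, when you specify the SQL parameter, the SYSIN data set can contain
complete SELECT statements that unload only selected columns or the result of a
join. The following statements are only a sketch; they assume the sample tables
DSN8910.EMP and DSN8910.PROJ and typical sample-table column names (EMPNO,
LASTNAME, WORKDEPT, PROJNO, PROJNAME, RESPEMP). If you unload specific columns
in this way, edit the generated LOAD statement to match the unloaded columns:
SELECT EMPNO, LASTNAME, WORKDEPT
  FROM DSN8910.EMP;
SELECT P.PROJNO, P.PROJNAME, E.LASTNAME
  FROM DSN8910.PROJ P, DSN8910.EMP E
  WHERE P.RESPEMP = E.EMPNO;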
Define all data sets as sequential data sets. You can specify the record length and
block size of the SYSPUNCH and SYSRECnn data sets. The maximum record
length for the SYSPUNCH and SYSRECnn data sets is 32760 bytes.
Suppose that you want to unload the rows for department D01 from the project
table. Because you can fit the table specification on one line, and you do not want
to execute any non-SELECT statements, you do not need the SQL parameter. Your
invocation looks like the one that is shown in the following figure:
//UNLOAD EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(DSN)
RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB91) -
LIB('DSN910.RUNLIB.LOAD')
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSREC00 DD DSN=DSN8UNLD.SYSREC00,
// UNIT=SYSDA,SPACE=(32760,(1000,500)),DISP=(,CATLG),
// VOL=SER=SCR03
//SYSPUNCH DD DSN=DSN8UNLD.SYSPUNCH,
// UNIT=SYSDA,SPACE=(800,(15,15)),DISP=(,CATLG),
// VOL=SER=SCR03,RECFM=FB,LRECL=120,BLKSIZE=1200
//SYSIN DD *
DSN8910.PROJ WHERE DEPTNO='D01'
Suppose that you also want to use DSNTIAUL to perform the following actions:
v Unload all rows from the project table
v Unload only rows from the employee table for employees in departments with
department numbers that begin with D, and order the unloaded rows by
employee number
v Lock both tables in share mode before you unload them
For these activities, you must specify the SQL parameter and specify the number of
rows per fetch when you run DSNTIAUL. Your DSNTIAUL invocation is shown in
the following figure:
//UNLOAD EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(DSN)
RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB91) PARMS('SQL,250') -
LIB('DSN910.RUNLIB.LOAD')
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSREC00 DD DSN=DSN8UNLD.SYSREC00,
// UNIT=SYSDA,SPACE=(32760,(1000,500)),DISP=(,CATLG),
// VOL=SER=SCR03
//SYSREC01 DD DSN=DSN8UNLD.SYSREC01,
// UNIT=SYSDA,SPACE=(32760,(1000,500)),DISP=(,CATLG),
// VOL=SER=SCR03
//SYSPUNCH DD DSN=DSN8UNLD.SYSPUNCH,
// UNIT=SYSDA,SPACE=(800,(15,15)),DISP=(,CATLG),
// VOL=SER=SCR03,RECFM=FB,LRECL=120,BLKSIZE=1200
//SYSIN DD *
LOCK TABLE DSN8910.EMP IN SHARE MODE;
LOCK TABLE DSN8910.PROJ IN SHARE MODE;
SELECT * FROM DSN8910.PROJ;
SELECT * FROM DSN8910.EMP
WHERE WORKDEPT LIKE 'D%'
ORDER BY EMPNO;
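If you also want DSNTIAUL to continue retrieving rows after SQL warnings, you
could add TOLWARN(YES) to the parameter string. The following RUN subcommand is
only a sketch; it assumes that TOLWARN follows the number of rows per fetch in
the comma-separated parameter string, in the order that Table 187 lists:
RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB91) PARMS('SQL,250,TOLWARN(YES)') -
     LIB('DSN910.RUNLIB.LOAD')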
If you want to obtain the LOAD utility control statements for loading rows into a
table, but you do not want to unload the rows, you can set the data set names for
the SYSRECnn data sets to DUMMY. For example, to obtain the utility control
statements for loading rows into the department table, you invoke DSNTIAUL as
shown in the following figure:
//UNLOAD EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(DSN)
RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB91) -
LIB('DSN910.RUNLIB.LOAD')
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSREC00 DD DUMMY
//SYSPUNCH DD DSN=DSN8UNLD.SYSPUNCH,
// UNIT=SYSDA,SPACE=(800,(15,15)),DISP=(,CATLG),
// VOL=SER=SCR03,RECFM=FB,LRECL=120,BLKSIZE=1200
//SYSIN DD *
DSN8910.DEPT
| This example uses the sample LOB table with the following structure:
CREATE TABLE DSN8910.EMP_PHOTO_RESUME
( EMPNO CHAR(06) NOT NULL,
EMP_ROWID ROWID NOT NULL GENERATED ALWAYS,
PSEG_PHOTO BLOB(500K),
BMP_PHOTO BLOB(100K),
RESUME CLOB(5K))
The following call to DSNTIAUL unloads the sample LOB table. The parameters
for DSNTIAUL indicate the following options:
v The input data set (SYSIN) contains SQL.
v DSNTIAUL is to retrieve 2 rows per fetch.
v DSNTIAUL places the LOB data in data sets with a high-level qualifier of
DSN8UNLD.
//UNLOAD EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(DSN)
RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB91) -
PARMS('SQL,2,LOBFILE(DSN8UNLD)') -
LIB('DSN910.RUNLIB.LOAD')
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSREC00 DD DSN=DSN8UNLD.SYSREC00,
// UNIT=SYSDA,SPACE=(800,(15,15)),DISP=(,CATLG),
// VOL=SER=SCR03,RECFM=FB
//SYSPUNCH DD DSN=DSN8UNLD.SYSPUNCH,
// UNIT=SYSDA,SPACE=(800,(15,15)),DISP=(,CATLG),
// VOL=SER=SCR03,RECFM=FB
//SYSIN DD *
SELECT * FROM DSN8910.EMP_PHOTO_RESUME;
Given that the sample LOB table has 4 rows of data, DSNTIAUL produces the
following output:
v Data for columns EMPNO and EMP_ROWID is placed in the data set that is
allocated according to the SYSREC00 DD statement. The data set name is
DSN8UNLD.SYSREC00.
v A generated LOAD statement is placed in the data set that is allocated according
to the SYSPUNCH DD statement. The data set name is DSN8UNLD.SYSPUNCH.
v The following data sets are dynamically created to store LOB data:
– DSN8UNLD.Q0000000.C0000002.R0000000
– DSN8UNLD.Q0000000.C0000002.R0000001
– DSN8UNLD.Q0000000.C0000002.R0000002
– DSN8UNLD.Q0000000.C0000002.R0000003
– DSN8UNLD.Q0000000.C0000003.R0000000
– DSN8UNLD.Q0000000.C0000003.R0000001
– DSN8UNLD.Q0000000.C0000003.R0000002
– DSN8UNLD.Q0000000.C0000003.R0000003
– DSN8UNLD.Q0000000.C0000004.R0000000
– DSN8UNLD.Q0000000.C0000004.R0000001
– DSN8UNLD.Q0000000.C0000004.R0000002
– DSN8UNLD.Q0000000.C0000004.R0000003
For example, DSN8UNLD.Q0000000.C0000004.R0000001 means that the data set
contains data that is unloaded from the second row (R0000001) and the fifth
column (C0000004) of the result set for the first query (Q0000000).
DSNTIAD parameters:
RC0
If you specify this parameter, DSNTIAD ends with return code 0, even if the
program encounters SQL errors. If you do not specify RC0, DSNTIAD ends
with a return code that reflects the severity of the errors that occur. Without
RC0, DSNTIAD terminates if more than 10 SQL errors occur during a single
execution.
SQLTERM(termchar)
Specify this parameter to indicate the character that you use to end each SQL
statement. You can use any special character except one of those listed in the
following table. SQLTERM(;) is the default.
Table 193. Invalid special characters for the SQL terminator
Name                    Character   Hexadecimal representation
blank                               X'40'
comma                   ,           X'6B'
double quotation mark   "           X'7F'
left parenthesis        (           X'4D'
right parenthesis       )           X'5D'
single quotation mark   '           X'7D'
underscore              _           X'6D'
Use a character other than a semicolon if you plan to execute a statement that
contains embedded semicolons.
Example:
Suppose that you specify the parameter SQLTERM(#) to indicate that the
character # is the statement terminator. Then a CREATE TRIGGER statement
with embedded semicolons looks like this:
CREATE TRIGGER NEW_HIRE
AFTER INSERT ON EMP
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
END#
Suppose that you want to execute 20 UPDATE statements, and you do not want
DSNTIAD to terminate if more than 10 errors occur. Your invocation looks like the
one that is shown in the following figure:
//RUNTIAD EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(DSN)
RUN PROGRAM(DSNTIAD) PLAN(DSNTIA91) PARMS('RC0') -
LIB('DSN910.RUNLIB.LOAD')
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN DD *
UPDATE DSN8910.PROJ SET DEPTNO='J01' WHERE DEPTNO='A01';
UPDATE DSN8910.PROJ SET DEPTNO='J02' WHERE DEPTNO='A02';
.
.
.
UPDATE DSN8910.PROJ SET DEPTNO='J20' WHERE DEPTNO='A20';
| Important: When you allocate a new data set with the SYSPRINT DD statement,
| either specify a DCB with LRECL=133, or do not specify the DCB parameter.
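For example, a SYSPRINT DD statement that allocates a new data set might look
like the following sketch. The data set name is hypothetical, and the UNIT,
SPACE, and RECFM values are only illustrative; the point is the LRECL=133
specification:
//SYSPRINT DD DSN=USER01.DSNTIAD.SYSPRINT,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(TRK,(5,5)),
//            DCB=(RECFM=FBA,LRECL=133)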
Be careful to choose a character for the statement terminator that is not used
within the statement.
If you want to change the SQL terminator within a series of SQL statements,
you can use the --#SET TERMINATOR control statement.
Example: Suppose that you have an existing set of SQL statements to which
you want to add a CREATE TRIGGER statement that has embedded
semicolons. You can use the default SQLTERM value, which is a semicolon, for
all of the existing SQL statements. Before you execute the CREATE TRIGGER
statement, include the --#SET TERMINATOR # control statement to change the
SQL terminator to the character #:
SELECT * FROM DEPT;
SELECT * FROM ACT;
SELECT * FROM EMPPROJACT;
SELECT * FROM PROJ;
SELECT * FROM PROJACT;
--#SET TERMINATOR #
CREATE TRIGGER NEW_HIRE
AFTER INSERT ON EMP
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
END#
Suppose that you want to use DSNTEP2 to execute SQL SELECT statements that
might contain DBCS characters. You also want left-aligned output. Your invocation
looks like the one in the following figure.
//RUNTEP2 EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(DSN)
RUN PROGRAM(DSNTEP2) PLAN(DSNTEP91) PARMS('/ALIGN(LHS) MIXED TOLWARN(YES)') -
LIB('DSN910.RUNLIB.LOAD')
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN DD *
SELECT * FROM DSN8910.PROJ;
Suppose that you want to use DSNTEP4 to execute SQL SELECT statements that
might contain DBCS characters, and you want center-aligned output. You also
want DSNTEP4 to fetch 250 rows at a time. Your invocation looks like the one in
the following figure:
//RUNTEP2 EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(DSN)
RUN PROGRAM(DSNTEP4) PLAN(DSNTEP481) PARMS('/ALIGN(MID) MIXED') -
LIB('DSN910.RUNLIB.LOAD')
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN DD *
--#SET MULT_FETCH 250
SELECT * FROM DSN8910.EMP;
Disclaimer: Any Web addresses that are included here are accurate at the time this
information is being published. However, Web addresses sometimes change. If you
visit a Web address that is listed here but that is no longer valid, you can try to
find the current Web address for the product information that you are looking for
at either of the following sites:
v http://www.ibm.com/support/publications/us/library/index.shtml, which lists
the IBM information centers that are available for various IBM products
v http://www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/
pbi.cgi, which is the IBM Publications Center, where you can download online
PDF books or order printed books for various IBM products
The primary place to find and use information about DB2 for z/OS is the
Information Management Software for z/OS Solutions Information Center
(http://publib.boulder.ibm.com/infocenter/imzic), which also contains information
about IMS, QMF, and many DB2 and IMS Tools products. This information center
is also available as an installable information center that can run on a local system
or on an intranet server. You can order the Information Management for z/OS
Solutions Information Center DVD (SK5T-7377) for a low cost from the IBM
Publications Center (www.ibm.com/shop/publications/order).
The majority of the DB2 for z/OS information in this information center is also
available in the books that are identified in the following table. You can access
these books at the DB2 for z/OS library Web site (http://www.ibm.com/software/
data/db2/zos/library.html) or at the IBM Publications Center
(http://www.ibm.com/shop/publications/order).
Table 196. DB2 Version 9.1 for z/OS book titles
                                                       Publication  Information  PDF     BookManager®  Printed
Title                                                  number       center       format  format        book
DB2 Version 9.1 for z/OS Administration Guide          SC18-9840    X            X       X             X
DB2 Version 9.1 for z/OS Application Programming       SC18-9841    X            X       X             X
  & SQL Guide
DB2 Version 9.1 for z/OS Application Programming       SC18-9842    X            X       X             X
  Guide and Reference for Java
DB2 Version 9.1 for z/OS Codes                         GC18-9843    X            X       X             X
DB2 Version 9.1 for z/OS Command Reference             SC18-9844    X            X       X             X
DB2 Version 9.1 for z/OS Data Sharing: Planning        SC18-9845    X            X       X             X
  and Administration
In the following table, related product names are listed in alphabetic order, and the
associated Web addresses of product information centers or library Web pages are
indicated.
These resources include information about the following products and others:
v DB2 Administration Tool
v DB2 Automation Tool
v DB2 Log Analysis Tool
v DB2 Object Restore Tool
v DB2 Query Management Facility
v DB2 SQL Performance Analyzer
DB2 Universal Database for iSeries   Information center: http://www.ibm.com/systems/i/infocenter/
Debug Tool for z/OS                  Information center: http://publib.boulder.ibm.com/infocenter/pdthelp/v1r1/index.jsp
Enterprise COBOL for z/OS            Information center: http://publib.boulder.ibm.com/infocenter/pdthelp/v1r1/index.jsp
Enterprise PL/I for z/OS             Information center: http://publib.boulder.ibm.com/infocenter/pdthelp/v1r1/index.jsp
IMS                                  Information center: http://publib.boulder.ibm.com/infocenter/imzic
Table 197. Related product information resource locations (continued)
Related product Information resources
IMS Tools One of the following locations:
v Information center: http://publib.boulder.ibm.com/infocenter/imzic
v Library Web site: http://www.ibm.com/software/data/db2imstools/library.html
These resources have information about the following products and others:
v IMS Batch Terminal Simulator for z/OS
v IMS Connect
v IMS HALDB Conversion and Maintenance Aid
v IMS High Performance Utility products
v IMS DataPropagator
v IMS Online Reorganization Facility
v IMS Performance Analyzer
Integrated Data Information center: http://publib.boulder.ibm.com/infocenter/idm/v2r2/index.jsp
Management products
This information center has information about the following products and others:
v IBM Data Studio
v InfoSphere™ Data Architect
v InfoSphere Warehouse
v Optim™ Database Administrator
v Optim Development Studio
v Optim Query Tuner
PL/I Information center: http://publib.boulder.ibm.com/infocenter/pdthelp/v1r1/index.jsp
z/OS
This resource includes information about the following z/OS elements and components:
v Character Data Representation Architecture
v Device Support Facilities
v DFSORT
v Fortran
v High Level Assembler
v NetView®
v SMP/E for z/OS
v SNA
v TCP/IP
v TotalStorage® Enterprise Storage Server®
v VTAM
v z/OS C/C++
v z/OS Communications Server
v z/OS DCE
v z/OS DFSMS™
v z/OS DFSMS Access Method Services
v z/OS DFSMSdss™
v z/OS DFSMShsm™
v z/OS DFSMSdfp
v z/OS ICSF
v z/OS ISPF
v z/OS JES3
v z/OS Language Environment
v z/OS Managed System Infrastructure
v z/OS MVS
v z/OS MVS JCL
v z/OS Parallel Sysplex
v z/OS RMF™
v z/OS Security Server
v z/OS UNIX System Services
z/OS XL C/C++ http://www.ibm.com/software/awdtools/czos/library/
The following information resources from IBM are not necessarily specific to a
single product:
v The DB2 for z/OS Information Roadmap; available at: http://www.ibm.com/
software/data/db2/zos/roadmap.html
v DB2 Redbooks® and Redbooks about related products; available at:
http://www.ibm.com/redbooks
v IBM Educational resources:
– Information about IBM educational offerings is available on the Web at:
http://www.ibm.com/software/sw-training/
– A collection of glossaries of IBM terms in multiple languages is available on
the IBM Terminology Web site at: http://www.ibm.com/software/
globalization/terminology/index.jsp
v National Language Support information; available at the IBM Publications
Center at: http://www.elink.ibmlink.ibm.com/public/applications/publications/
cgibin/pbi.cgi
v SQL Reference for Cross-Platform Development; available at the following
developerWorks® site: http://www.ibm.com/developerworks/db2/library/
techarticle/0206sqlref/0206sqlref.html
The following information resources are not published by IBM but can be useful to
users of DB2 for z/OS and related products:
v Database design topics:
– DB2 for z/OS and OS/390 Development for Performance Volume I, by Gabrielle
Wiorkowski, Gabrielle & Associates, ISBN 0-96684-605-2
– DB2 for z/OS and OS/390 Development for Performance Volume II, by Gabrielle
Wiorkowski, Gabrielle & Associates, ISBN 0-96684-606-0
– Handbook of Relational Database Design, by C. Fleming and B. Von Halle,
Addison Wesley, ISBN 0-20111-434-8
v Distributed Relational Database Architecture (DRDA) specifications;
http://www.opengroup.org
v Domain Name System: DNS and BIND, Third Edition, Paul Albitz and Cricket
Liu, O’Reilly, ISBN 0-59600-158-4
v Microsoft® Open Database Connectivity (ODBC) information;
http://msdn.microsoft.com/library/
v Unicode information; http://www.unicode.org
Stay current with the latest information about DB2 by visiting the DB2 home page
on the Web:
www.ibm.com/software/db2zos
On the DB2 home page, you can find links to a wide variety of information
resources about DB2. You can read news items that keep you informed about the
latest enhancements to the product. Product announcements, press releases, fact
sheets, and technical articles help you plan and implement your database
management strategy.
The official DB2 for z/OS information is available in various formats and delivery
methods. IBM provides mid-version updates to the information in the information
center and in softcopy updates that are available on the Web and on CD-ROM.
Information Management Software for z/OS Solutions Information Center
DB2 product information is viewable in the information center, which is
the primary delivery vehicle for information about DB2 for z/OS, IMS,
QMF, and related tools. This information center enables you to search
across related product information in multiple languages for data
management solutions for the z/OS environment and print individual
topics or sets of related topics. You can also access, download, and print
PDFs of the publications that are associated with the information center
topics. Product technical information is provided in a format that offers
more options and tools for accessing, integrating, and customizing
information resources. The information center is based on Eclipse open
source technology.
The Information Management Software for z/OS Solutions Information
Center is viewable at the following Web site:
http://publib.boulder.ibm.com/infocenter/imzic
CD-ROMs and DVD
Books for DB2 are available on a CD-ROM that is included with your
product shipment:
v DB2 V9.1 for z/OS Licensed Library Collection, LK3T-7195, in English
The CD-ROM contains the collection of books for DB2 V9.1 for z/OS in
PDF and BookManager formats. Periodically, IBM refreshes the books on
subsequent editions of this CD-ROM.
The books for DB2 for z/OS are also available on the following CD-ROM
and DVD collection kits, which contain online books for many IBM
products:
v IBM z/OS Software Products Collection, SK3T-4270, in English
v IBM z/OS Software Products DVD Collection, SK3T-4271, in English
PDF format
Many of the DB2 books are available in PDF (Portable Document Format)
for viewing or printing from CD-ROM or the DB2 home page on the Web
or from the information center. Download the PDF books to your intranet
for distribution throughout your enterprise.
BookManager format
You can use online books on CD-ROM to read, search across books, print
portions of the text, and make notes in these BookManager books. Using
the IBM Softcopy Reader, appropriate IBM Library Readers, or the
BookManager Read product, you can view these books in the z/OS,
Windows, and VM environments. You can also view and search many of
the DB2 BookManager books on the Web.
DB2 education
IBM Education and Training offers a wide variety of classroom courses to help you
quickly and efficiently gain DB2 expertise. IBM schedules classes in cities all
over the world. You can find class information, by country, at the IBM Learning
Services Web site:
www.ibm.com/services/learning
IBM also offers classes at your location, at a time that suits your needs. IBM can
customize courses to meet your exact requirements. For more information,
including the current local schedule, contact your IBM representative.
From the IBM Publications Center (http://www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/pbi.cgi),
you can go to the Publication Notification System (PNS). PNS users receive
electronic notifications of updated publications in
their profiles. You have the option of ordering the updates by using the
publications direct ordering application or any other IBM publication ordering
channel. The PNS application does not send automatic shipments of publications.
You will receive updated publications and a bill for them if you respond to the
electronic notification.
You can also order DB2 publications and CD-ROMs from your IBM representative
or the IBM branch office that serves your locality. If your location is within the
United States or Canada, you can place your order by calling one of the toll-free
numbers:
v In the U.S., call 1-800-879-2755.
v In Canada, call 1-800-426-4968.
| If you are new to DB2 for z/OS, Introduction to DB2 for z/OS provides a
| comprehensive introduction to DB2 Version 9.1 for z/OS. Topics included in this
| book explain the basic concepts that are associated with relational database
| management systems in general, and with DB2 for z/OS in particular.
The most rewarding task associated with a database management system is asking
questions of it and getting answers; this task is called end use. Other tasks are
also necessary: defining the parameters of the system, putting the data in place,
and so on. The tasks that are associated with DB2 are grouped into the following
major categories.
Installation
If you are involved with installing DB2, you will need to use a variety of resources,
such as:
v DB2 Program Directory
v DB2 Installation Guide
v DB2 Administration Guide
v DB2 Application Programming Guide and Reference for Java
v DB2 Codes
v DB2 Internationalization Guide
v DB2 Messages
v DB2 Performance Monitoring and Tuning Guide
v DB2 RACF Access Control Module Guide
v DB2 Utility Guide and Reference
If you will be using data sharing capabilities you also need DB2 Data Sharing:
Planning and Administration, which describes installation considerations for data
sharing.
If you will be installing and configuring DB2 ODBC, you will need DB2 ODBC
Guide and Reference.
If you are installing IBM Spatial Support for DB2 for z/OS, you will need IBM
Spatial Support for DB2 for z/OS User’s Guide and Reference.
If you are installing IBM OmniFind® Text Search Server for DB2 for z/OS, you will
need IBM OmniFind Text Search Server for DB2 for z/OS Installation, Administration,
and Reference.
End use
End users issue SQL statements to retrieve data. They can also insert, update, or
delete data with SQL statements. They might need an introduction to SQL,
detailed instructions for using SPUFI, and an alphabetized reference to the types of
SQL statements. This information is found in DB2 Application Programming and SQL
Guide, and DB2 SQL Reference.
End users can also issue SQL statements through the DB2 Query Management
Facility (QMF) or some other program, and the library for that licensed program
might provide all the instruction or reference material they need. For a list of the
titles in the DB2 QMF library, see the bibliography at the end of this book.
Application programming
Some users access DB2 without knowing it, using programs that contain SQL
statements. DB2 application programmers write those programs. Because they
write SQL statements, they need the same resources that end users do.
The material needed for writing a host program containing SQL is in DB2
Application Programming and SQL Guide.
| The material needed for writing applications that use JDBC and SQLJ to access
| DB2 servers is in DB2 Application Programming Guide and Reference for Java. The
| material needed for writing applications that use DB2 CLI or ODBC to access DB2
| servers is in DB2 ODBC Guide and Reference. The material needed for working with
| XML data in DB2 is in DB2 XML Guide. For handling errors, see DB2 Messages and
| DB2 Codes.
If you are a software vendor implementing DRDA clients and servers, you will
need DB2 Reference for Remote DRDA Requesters and Servers.
DB2 Performance Monitoring and Tuning Guide explains how to monitor the
performance of the DB2 system and its parts. It also lists things that can be done to
make some parts run faster.
If you will be using the RACF access control module for DB2 authorization
checking, you will need DB2 RACF Access Control Module Guide.
If you are involved with DB2 only to design the database, or plan operational
procedures, you need DB2 Administration Guide. If you also want to carry out your
own plans by creating DB2 objects, granting privileges, running utility jobs, and so
on, you also need:
v DB2 SQL Reference, which describes the SQL statements you use to create, alter,
and drop objects and grant and revoke privileges
v DB2 Utility Guide and Reference, which explains how to run utilities
v DB2 Command Reference, which explains how to run commands
If you will be using data sharing, you need DB2 Data Sharing: Planning and
Administration, which describes how to plan for and implement data sharing.
Diagnosis
Diagnosticians detect and describe errors in the DB2 program. They might also
recommend or apply a remedy. The documentation for this task is in DB2 Diagnosis
Guide and Reference, DB2 Messages, and DB2 Codes.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user’s responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply
to you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
IBM Corporation
J46A/G4
555 Bailey Avenue
San Jose, CA 95141-1003
U.S.A.
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
If you are viewing this information softcopy, the photographs and color
illustrations may not appear.
Trademarks
IBM, the IBM logo, and ibm.com® are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available on the Web at http://www.ibm.com/
legal/copytrade.shtml.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Java and all Java-based trademarks and logos are trademarks of Sun Microsystems,
Inc. in the United States, other countries, or both.
Glossary
abend  See abnormal end of task.
abend reason code
       A 4-byte hexadecimal code that uniquely identifies a problem with DB2.
abnormal end of task (abend)
       Termination of a task, job, or subsystem because of an error condition
       that recovery facilities cannot resolve during execution.
access method services
       The facility that is used to define, alter, delete, print, and reproduce
       VSAM key-sequenced data sets.
access path
       The path that is used to locate data that is specified in SQL
       statements. An access path can be indexed or sequential.
active log
       The portion of the DB2 log to which log records are written as they are
       generated. The active log always contains the most recent log records.
       See also archive log.
address space
       A range of virtual storage pages that is identified by a number (ASID)
       and a collection of segment and page tables that map the virtual pages
       to real pages of the computer's memory.
address space connection
       The result of connecting an allied address space to DB2. See also allied
       address space and task control block.
address space identifier (ASID)
       A unique system-assigned identifier for an address space.
AFTER trigger
       A trigger that is specified to be activated after a defined trigger
       event (an insert, update, or delete operation on the table that is
       specified in a trigger definition). Contrast with BEFORE trigger and
       INSTEAD OF trigger.
agent  In DB2, the structure that associates all processes that are involved
       in a DB2 unit of work. See also allied agent and system agent.
aggregate function
       An operation that derives its result by using values from one or more
       rows. Contrast with scalar function.
alias  An alternative name that can be used in SQL statements to refer to a
       table or view in the same or a remote DB2 subsystem. An alias can be
       qualified with a schema qualifier and can thereby be referenced by
       other users. Contrast with synonym.
allied address space
       An area of storage that is external to DB2 and that is connected to
       DB2. An allied address space can request DB2 services. See also address
       space.
allied agent
       An agent that represents work requests that originate in allied address
       spaces. See also system agent.
allied thread
       A thread that originates at the local DB2 subsystem and that can access
       data at a remote DB2 subsystem.
allocated cursor
       A cursor that is defined for a stored procedure result set by using the
       SQL ALLOCATE CURSOR statement.
ambiguous cursor
       A database cursor for which DB2 cannot determine whether it is used for
       update or read-only purposes.
APAR   See authorized program analysis report.
APF    See authorized program facility.
API    See application programming interface.
APPL   A VTAM network definition statement that is used to define DB2 to VTAM
       as an application program that uses SNA LU 6.2 protocols.
application
       A program or set of programs that performs a task; for example, a
       payroll application.
application plan
       The control structure that is produced during the bind process. DB2
       uses the
CLOB   See character large object.
clone object
       An object that is associated with a clone table, including the clone
       table itself and check constraints, indexes, and BEFORE triggers on the
       clone table.
clone table
       A table that is structurally identical to a base table. The base and
       clone table each
coded character set identifier (CCSID)
       A 16-bit number that uniquely identifies a coded representation of
       graphic characters. It designates an encoding scheme identifier and one
       or more pairs that consist of a character set identifier and an
       associated code page identifier.
code page
       A set of assignments of characters to code
subcomponent
       A group of closely related DB2 modules that work together to provide a
       general function.
subject table
       The table for which a trigger is created. When the defined triggering
       event occurs on this table, the trigger is activated.
subquery
       A SELECT statement within the WHERE or HAVING clause of another SQL
       statement; a nested SQL statement.
subselect
       That form of a query that includes only a SELECT clause, FROM clause,
       and optionally a WHERE clause, GROUP BY clause, HAVING clause, ORDER BY
       clause, or FETCH FIRST clause.
synonym
       In SQL, an alternative name for a table or view. Synonyms can be used
       to refer only to objects at the subsystem in which the synonym is
       defined. A synonym cannot be qualified and can therefore not be used by
       other users. Contrast with alias.
Sysplex
       See Parallel Sysplex.
Sysplex query parallelism
       Parallel execution of a single query that is accomplished by using
       multiple tasks on more than one DB2 subsystem. See also query CP
       parallelism.
system administrator
       The person at a computer installation who designs, controls, and
       manages the use of the computer system.
Index
CICS applications column (continued)
thread reuse 125 labels, usage 169
CICS attachment facility name, with UPDATE statement 620
controlling from applications 124 retrieving, with SELECT 628
detecting whether it is operational 124 specified in CREATE TABLE 425
starting 124 width of results 1023, 1026
stopping 124 COMMA precompiler option 905
client 32 command line processor
CLOSE binding 919
statement CALL statement 1006
description 679 stored procedures 1006
recommendation 685 commit point
WHENEVER NOT FOUND clause 167, 169 description 20
CLOSE (connection function of CAF) IMS unit of work 24
description 58 COMMIT statement
language examples 65 description 1018
program example 71 in a stored procedure 598
syntax 65 when to issue 20
COALESCE function 652 with RRSAF 80
COBOL common table expressions
creating stored procedure 577 description 452
COBOL application program examples 453
compiling 915 in a CREATE VIEW statement 451
controlling CCSID 320 in a SELECT statement 451
data type compatibility 322 in an INSERT statement 451
DB2 precompiler option defaults 912 infinite loops 657
DCLGEN support 135 recursion 453
declaring tables 326 comparison
declaring views 326 compatibility rules 426
defining the SQLDA 141, 296 HAVING clause
dynamic SQL 166 subquery 661
host structure 314 operator, subquery 661
host variable WHERE clause
use of hyphens 326 subquery 661
host variable array, declaring 297 compatibility
host variable, declaring 297 data types 426
INCLUDE statement 326 rules 426
including SQLCA 295 composite key 440
indicator variable array declaration 319 compound statement
indicator variable declaration 319 example
naming convention 326 dynamic SQL 531
object-oriented extensions 333 nested IF and WHILE statements 529
options 326 EXIT handler 540
preparation 915 labels 529
resetting SQL-INIT-FLAG 326 compound statements
sample program 333 nested 537
SQLCODE host variable 295 within the declaration of a condition handler 541
SQLSTATE host variable 295 concurrency 18
variable array declaration 307 condition handlers
variable declaration 298 empty 549
WHENEVER statement 326 conditions
with classes, preparing 892 ignoring 549
coding SQL statements CONNECT
dynamic 162 statement
collection, package SPUFI 1018
identifying 925 CONNECT (connection function of CAF)
SET CURRENT PACKAGESET statement 925 description 58
colon language examples 59
preceding a host variable 151 program example 71
preceding a host variable array 159 syntax 59
column CONNECT LOCATION field of SPUFI panel 1018
data types 426 CONNECT precompiler option 906
default value CONNECT processing option
system-defined 425 enforcing restricted system rules 34
user-defined 426 CONNECT statement, with DRDA access 837
displaying, list of 627 connecting
heading created by SPUFI 1027 DB2 45
cursor (continued) DB2 MQ tables
rowset-positioned (continued) descriptions 853
retrieving a rowset of data 681 DB2 private protocol access
steps in using 680 coding an application 835
updating a current rowset 681 compared to DRDA access 32
scrollable DB2-established address spaces
description 671 stored procedures 534
dynamic 671 DB2-supplied stored procedures 766
fetch orientation 685 DB2I
INSENSITIVE 672 default panels 889
retrieving rows 685 invoking DCLGEN 130
SENSITIVE DYNAMIC 673 DB2I (DB2 Interactive)
SENSITIVE STATIC 672 background processing
sensitivity 672 run-time libraries 961
static 671 EDITJCL processing
updatable 671 run-time libraries 961
static scrollable 671 interrupting 1013
types 671 menu 1013
WITH HOLD panels
description 674 BIND PACKAGE 967
BIND PLAN 970
Compile, Link, and Run 982
D Current SPUFI Defaults 1019
DB2I Primary Option Menu 955, 1013
data
Defaults for BIND PACKAGE 973
accessing from an application program 627
Defaults for BIND PLAN 976
adding 605
Precompile 964
adding to the end of a table 619
Program Preparation 957
associated with WHERE clause 630
System Connection Types 980
currency 671
preparing programs 887
distributed 32
program preparation example 957
modifying 605
selecting
not in a table 719
SPUFI 1013
retrieval using SELECT * 658
SPUFI 1013
retrieving a rowset 681
DB2I defaults
retrieving a set of rows 678
setting 889
retrieving large volumes 718
DBCS (double-byte character set)
scrolling backward through 694
translation in CICS 901
security and integrity 19
DBINFO
updating during retrieval 657
passing to external stored procedure 577
updating previously retrieved data 696
user-defined function 500
data encryption 442
DBRM (database request module)
data integrity
binding to a package 917
tables 433
binding to a plan 922
data type
deciding how to bind 14
built-in 426
description 897
comparisons 151
DBRMs in HFS files
compatibility
binding 919
assembler application program 236
DCLGEN
C application program 275
COBOL example 138
COBOL and SQL 322
data types 135
Fortran and SQL 369
declaring indicator variable arrays 130
PL/I application program 389
generating table and view declarations 129
REXX and SQL 404
generating table and view declarations from DB2I 130
result set locator 600
INCLUDE statement 137
data types
including declarations in a program 137
compatibility 148
invoking 129
used by DCLGEN 135
using from DB2I 130
DATE precompiler option 906
variable declarations 135
datetime data type 426
DCLGEN (declarations generator)
DB2
description 129
connection from a program 45
DDITV02 input data set 949
DB2 abend
DDOTV02 output data set 949
DL/I batch 720
debugging
DB2 books online 1097
recording messages for stored procedures 1036
DB2 coprocessor 898
stored procedures 1030
processing SQL statements 890
debugging application programs 1037
DB2 Information Center for z/OS solutions 1097
distributed data (continued) DSNALI (continued)
planning making available 52
DB2 private protocol access 926 DSNALI (CAF language interface module)
DRDA access 926 example of deleting 71
program preparation 930 example of loading 71
programming DSNCLI (CICS language interface module) 915
coding with DB2 private protocol access 835 DSNH command of TSO 1038
coding with DRDA access 835 DSNHASM procedure 952
retrieving from ASCII or Unicode tables 841 DSNHC procedure 952
savepoints 32 DSNHCOB procedure 952
scrollable cursors 32 DSNHCOB2 procedure 952
three-part table names 835 DSNHCPP procedure 952
transmitting mixed data 839 DSNHCPP2 procedure 952
two-phase commit 33 DSNHFOR procedure 952
using alias for location 838 DSNHICB2 procedure 952
distributed queries DSNHICOB procedure 952
fast implicit close 44 DSNHLI entry point to DSNALI
DL/I batch program example 71
application programming 720 DSNHLI2 entry point to DSNALI
checkpoint ID 1005 program example 71
DB2 requirements 720 DSNHPLI procedure 952
DDITV02 input data set 949 DSNMTV01 module 1002
DSNMTV01 module 1002 DSNRLI
features 720 loading 83
SSM= parameter 1002 making available 83
submitting an application 1002 DSNTEDIT CLIST 939
double-byte character large object (DBCLOB) 430 DSNTEP2 and DSNTEP4 sample program
DRDA access specifying SQL terminator 1077, 1085
accessing remote temporary table 837 DSNTEP2 sample program
bind options 928 how to run 1069
coding an application 835 parameters 1069
compared to DB2 private protocol access 32 program preparation 1069
connecting to remote server 837 DSNTEP4 sample program
planning 926 how to run 1069
precompiler options 912 parameters 1069
preparing programs 928 program preparation 1069
programming hints 840 DSNTIAC subroutine
releasing connections 839 assembler 241
sample program 344 C 280
SQL limitations at different servers 840 COBOL 326
DRDA access with CONNECT statements PL/I 393
sample program 344 DSNTIAD sample program
DRDA with three-part names how to run 1069
sample program 351 parameters 1069
DROP TABLE statement 449 program preparation 1069
DSN applications, running with CAF 46 specifying SQL terminator 1083
DSN command of TSO DSNTIAR subroutine
return code processing 995 assembler 216
RUN subcommands 995 C 280
DSN_FUNCTION_TABLE COBOL 326
column descriptions 729 description 204
DSN_FUNCTION_TABLE table 729 Fortran 371
DSN_STATEMENT_CACHE_TABLE 198 PL/I 393
DSN8BC3 sample program 326 return codes 206
DSN8BD3 sample program 280 using 204
DSN8BE3 sample program 280 DSNTIAUL sample program
DSN8BF3 sample program 371 how to run 1069
DSN8BP3 sample program 393 parameters 1069
DSNACICS stored procedure 778 program preparation 1069
DSNAEXP stored procedure DSNTIJSD sample program
description 791 using to set up the Unified Debugger 1033
example call 793 DSNTIR subroutine 371
option descriptions 792 DSNTPSMP
output 793 creating external SQL procedures 565
syntax diagram 791 required authorizations 565
DSNALI syntax for invoking 566
loading 52 DSNTRACE data set 69
error (continued) FETCH WITH CONTINUE 690
return codes 203 file reference variable 715
run 1037 DB2-generated construct 715
errors when retrieving data into a host variable FIND_DB2_SYSTEMS (connection function of RRSAF)
determining cause 147 language examples 118
EXCEPT syntax 118
eliminating duplicate rows 639 fixed buffer allocation
keeping duplicate rows with ALL 640 FETCH WITH CONTINUE 692
EXCEPT clause FLAG precompiler option 906
columns of result table 637 FLOAT precompiler option 906
exception condition handling 208 FOLD
EXEC SQL delimiter 151 value for C and CPP 907
EXECUTE IMMEDIATE statement 187 value of precompiler option HOST 907
EXECUTE statement FOR UPDATE clause 676
dynamic execution 189 FOREIGN KEY clause
parameter types 169 description 440
USING DESCRIPTOR clause 169 usage 440
EXISTS predicate, subquery 661 format
EXIT handler (SQL procedure) 540 SELECT statement results 1026
exit routine SQL in input data set 1013
abend recovery with CAF 51 formatting
attention processing with CAF 51 result tables 632
EXPLAIN Fortran application program
automatic rebind 944 @PROCESS statement 371
EXPLAIN STATEMENT CACHE ALL 198 byte data type 371
EXPLAIN tables 729 constant syntax 365
external SQL procedure data type compatibility 369
creating 561 declaring tables 371
external SQL procedures declaring views 371
creating by using DSNTPSMP 565 defining the SQLDA 141, 364
creating by using JCL 573 host variable, declaring 365
debugging with the Unified Debugger 1033 INCLUDE statement 371
migrating to native SQL procedures 559 including SQLCA 363
setting up the environment 533 indicator variable declaration 368
external stored procedure naming convention 371
creating 577 parallel option 371
modifying the definition 597 precompiler option defaults 912
package 582 SQLCODE host variable 363
package authorizations 582 SQLSTATE host variable 363
plan 582 statement labels 371
preparing 577 variable declaration 365
running as authorized program 577 WHENEVER statement 371
external stored procedures FROM clause
setting up the environment 533 joining tables 644
SELECT statement 628
FRR (functional recovery routine)
F in CAF 51
FULL OUTER JOIN clause 651
fast implicit close 44
function resolution 726
FETCH CURRENT CONTINUE 690
functional recovery routine (FRR)
FETCH FIRST n ROWS ONLY clause
in CAF 51
effect on distributed performance 34
FETCH statement
description, multiple rows 681
description, single row 678 G
fetch orientation 685 general-use programming information, described 1107
host variables 167 generating table and view declarations
multiple-row by using DCLGEN 129
assembler 241 with DCLGEN from DB2I 130
description 681 generating XML documents for MQ message queue 852
FOR n ROWS clause 684 GET DIAGNOSTICS statement
number of rows in rowset 684 condition items 209
using with descriptor 681 connection items 209
using with host variable arrays 681 data types for items 209, 212
row and rowset positioning 696 description 209
scrolling through data 694 multiple-row INSERT 209
USING DESCRIPTOR clause 169 RETURN_STATUS item 549
using row-positioned cursor 678 ROW_COUNT item 681
indicator variables join operation (continued)
using to pass large output parameters 756 joining a table to itself 650
indicator variablesassembler syntax 235 joining tables 644
indicator variablesC/C++ syntax 271 LEFT OUTER JOIN 652
indicator variablesCOBOL syntax 319 more than one join 647
indicator variablesFortran syntax 368 more than one join type 648
indicator variablesPL/I syntax 388 operand
infinite loop 657 nested table expression 644
informational referential constraint user-defined table function 644
automatic query rewrite 438 RIGHT OUTER JOIN 653
description 438 SQL rules 654
INNER JOIN clause 649
input data set DDITV02 949
input parameters
stored procedures 740
K
KEEPDYNAMIC option
INSERT statement
BIND PACKAGE subcommand 199
description 605
BIND PLAN subcommand 199
multiple rows 607
key
single row 606
composite 440
VALUES clause 605
foreign 440
with identity column 609
parent 440
with ROWID column 608
primary
inserting
choosing 439
values from host variable arrays 161
defining 439
inserting data
recommendations for defining 438
by using host variables 158
using timestamp 439
Interactive System Productivity Facility (ISPF) 1013
unique 448
internal resource lock manager (IRLM) 1002
INTERSECT
eliminating duplicate rows 639
keeping duplicate rows with ALL 640 L
INTERSECT clause label, column 169
columns of result table 637 language interface modules
invalid SQL terminator characters 1083 DSNCLI 915
invoking large object (LOB)
call attachment facility (CAF) 46 character conversion 712
Resource Recovery Services attachment facility declaring host variables 705
(RRSAF) 77 for precompiler 705
invoking stored procedures declaring LOB file reference variables 705
syntax for command line processor 1006 declaring LOB locators 705
isolation level defining and moving data into DB2 428
REXX 411 description 430
ISPF (Interactive System Productivity Facility) expression 713
browse 1018, 1026 file reference variable 715
DB2 uses dialog management 1013 indicator variable 712
DB2I Primary Option Menu 955 locator 711
Program Preparation panel 957 materialization 711
programming 999 sample applications 704
scroll command 1027 LEFT OUTER JOIN clause 652
ISPLINK SELECT services 998 LEVEL precompiler option 907
libraries
for table declarations and host-variable structures 138
J library 1097
LINECOUNT precompiler option 907
Java stored procedures
link-editing 915
debugging with the Unified Debugger 1033
AMODE option 984
JCL (job control language)
RMODE option 984
batch backout example 1003
LOAD z/OS macro used by CAF 53
DDNAME list format 896
LOAD z/OS macro used by RRSAF 85
page number format 897
LOB column, definition 428
precompilation procedures 952
LOB file reference variable
precompiler option list format 896
assembler 231
preparing a CICS program 953
C/C++ 251, 262
preparing a object-oriented program 892
COBOL 298, 307
starting a TSO batch application 1007
PL/I 377, 382
join operation
LOB host variable array
FULL OUTER JOIN 651
C/C++ 262
INNER JOIN 649
N numeric host variable array (continued)
COBOL 307
naming convention PL/I 382
assembler 241 NUMTCB parameter 766
C 280
COBOL 326
Fortran 371
PL/I 393 O
REXX 405 object-oriented program, preparation 892
tables you create 442 objects
native SQL procedures creating in a application program 425
BIND COPY 554 ON clause, joining tables 644
BIND COPY REPLACE 555 ONEPASS precompiler option 909
creating 535 online 1097
debugging with the Unified Debugger 1033 online books 1097
deploying to another server 558 OPEN
deploying to production 558 statement
migrating from external SQL procedures 559 opening a cursor 677
packages for 554 opening a rowset cursor 680
replacing packages for 555 prepared SELECT 167
nested compound statements USING DESCRIPTOR clause 169
cursor declarations 539 without parameter markers 169
definition 537 OPEN (connection function of CAF)
for controlling scope of conditions 542 description 58
scope of variables 535 language examples 63
statement labels 537 program example 71
nested table expression syntax 63
correlated reference 645 syntax usage 63
correlation name 645 OPTIMIZE FOR n ROWS
join operation 644 effect on distributed applications 41
NEWFUN OPTIMIZE FOR n ROWS clause
precompiler option 908 effect on distributed performance 34
NODYNAM option of COBOL 326 OPTIONS precompiler option 909
NOFOR precompiler option 908 ORDER BY clause
NOGRAPHIC precompiler option 908 SELECT statement 634
nontabular data storage 620 with ORDER OF clause 619
NOOPTIONS precompiler option 908 ORDER OF clause 619
NOPADNTSTR precompiler option 908 organization application
NOSOURCE precompiler option 909 examples 1071
NOT FOUND clause of WHENEVER statement 208 outer join
not logged FULL OUTER JOIN 651
table spaces LEFT OUTER JOIN 652
recovering 30 RIGHT OUTER JOIN 653
NOXREF precompiler option 909 output host variable
NUL character in C 280 determining if null 154
null determining if truncated 154
determining value of output host variable 154 output host variable processing
NULL errors 147
pointer in C 280 output parameters
null value stored procedures 740, 756
column value of UPDATE statement 620
determining column value 156
inserting into columns 158 P
Null, in REXX 405 package
numeric advantages 14
data binding
width of column in results 1026 DBRM to a package 916
numeric data remote 917
description 426 to plans 922
width of column in results 1023 identifying at run time 924
numeric host variable invalid 16
assembler 231 invalidated 944
C/C++ 251 listing 922
COBOL 298 location 924
Fortran 365 rebinding examples 937
PL/I 377 rebinding with pattern-matching characters 936
numeric host variable array selecting 924, 925
C/C++ 262
PRIMARY KEY clause (continued)
   CREATE TABLE statement 438
problem determination, guidelines 1037
procedures
   inheriting special registers 511
   WLM_SET_CLIENT_INFO 776
product-sensitive programming information, described 1107
production environment
   deploying native SQL procedures 558
program preparation 887
program problems checklist
   documenting error situations 1037
   error messages 1042, 1043
programming interface information, described 1107
project activity sample table 1057
project application, description 1071
project sample table 1056
PSPI symbols 1107

Q
queries
   in application programs 127
   tuning in application programs 720
QUOTE precompiler option 909
QUOTESQL precompiler option 909

R
RANK specification 636
   example 636
reason code
   CAF 69
   RRSAF 119
   X″00D44057″ 720
REBIND PACKAGE subcommand of DSN
   generating list of 939
   rebinding with wildcard characters 936
   remote 917
REBIND PLAN subcommand of DSN
   generating list of 939
   options
      NOPKLIST 938
      PKLIST 938
   remote 917
REBIND TRIGGER PACKAGE subcommand of DSN 468
rebinding
   automatically
      conditions for 944
   changes that require 16
   list of plans and packages 938
   lists of plans or packages 938, 939
   packages with pattern-matching characters 936
   planning for 944
   plans 938
recovering
   table spaces that are not logged 30
recovery
   IMS programs 21, 28
   planning for in your application 19
recursive SQL
   controlling depth 453
   description 656
   examples 453
   infinite loops 657
   rules 656
   single level explosion 453
   summarized explosion 453
referential constraint
   defining 436
   description 436
   determining violations 1047
   informational 438
   name 440
   on tables with data encryption 442
   on tables with multilevel security 437
referential integrity
   effect on subqueries 666
   programming considerations 1047
register conventions
   RRSAF 85
register XML schema
   XSR_REGISTER 827
registers
   changed by CAF (call attachment facility) 54
release incompatibilities
   applications and SQL 1
RELEASE SAVEPOINT statement 28
RELEASE statement, with DRDA access 839
remote queries
   performance 34
REPLACE statement (COBOL) 326
requester 32
resetting control blocks
   CAF 66
RESIGNAL statement
   raising a condition 549
   setting SQLSTATE value 551
resource limit facility (governor)
   description 200
   writing an application for predictive governing 201
Resource Recovery Services attachment facility (RRSAF)
   application program
      preparation 85
   authorization IDs 81
   behavior summary 87
   connection functions 89
   connection name 81
   connection properties 81
   connection type 81
   DB2 abends 81
   description 80
   implicit connections 86
   invoking 77
   loading 83
   making available 83
   parameters for CALL DSNRLI 86
   program examples 121
   program requirements 85
   register conventions 85
   return codes and reason codes 119
   sample JCL 121
   sample scenarios 119
   scope 81
   terminated task 81
restart, DL/I batch programs using JCL 1003
restricted system
   definition 34
   forcing rules 34
   update rules 34
restricted systems 31
sample application (continued)
   DRDA access 344
   DRDA access with CONNECT statements 344
   DRDA with three-part names 351
   dynamic SQL 285
   environments 1073
   languages 1073
   LOB 1071
   organization 1071
   phone 1071
   programs 1069
   project 1071
   static SQL 285
   stored procedure 1071
   structure of 1065
   use 1069
   user-defined function 1071
sample applications 1049
   TSO 1074
sample data 1049
sample program
   DSN8BC3 326
   DSN8BD3 280
   DSN8BE3 280
   DSN8BF3 371
   DSN8BP3 393
sample table
   DSN8910.ACT (activity) 1049
   DSN8910.DEMO_UNICODE (Unicode sample) 1059
   DSN8910.DEPT (department) 1050
   DSN8910.EMP (employee) 1051
   DSN8910.EMP_PHOTO_RESUME (employee photo and resume) 1055
   DSN8910.EMPPROJACT (employee-to-project activity) 1058
   DSN8910.PROJ (project) 1056
   PROJACT (project activity) 1057
   views on 1061
sample tables 1049
samples
   provided by DB2 1049
savepoint
   distributed environment 32
SAVEPOINT statement 28
savepoints 28
scalar pointer host variable
   declaring 273
   referencing in SQL statements 272
scrollable cursor
   comparison of types 686
   DB2 for z/OS down-level requester 841
   distributed environment 32
   dynamic
      dynamic model 671
      fetching current row 689
   fetch orientation 685
   retrieving rows 685
   sensitive dynamic 673
   sensitive static 672
   sensitivity 686
   static
      creating delete hole 689
      creating update hole 689
      holes in result table 689
      number of rows 687
      removing holes 688
      static model 671
      updatable 671
scrolling
   backward through data 694
   backward using identity columns 694
   backward using ROWIDs 694
   in any direction 686
   ISPF (Interactive System Productivity Facility) 1027
search condition
   comparison operators 630
   NOT keyword 630
   SELECT statement 659
   WHERE clause 630
SELECT FROM DELETE statement
   description 624
   retrieving
      multiple rows 624
   with INCLUDE clause 624
SELECT FROM INSERT statement
   BEFORE trigger values 614
   default values 613
   description 613
   inserting into view 615
   multiple rows
      cursor sensitivity 616
      effect of changes 617
      effect of SAVEPOINT and ROLLBACK 617
      effect of WITH HOLD 617
      processing errors 613
      result table of cursor 613
      using cursor 615
      using FETCH FIRST 615
      using INPUT SEQUENCE 615
   result table 613
   retrieving
      BEFORE trigger values 613
      default values 613
      generated values 613
      multiple rows 613
      special registers 613
   using SELECT INTO 615
SELECT FROM MERGE statement
   description 612
   with INCLUDE clause 612
SELECT FROM UPDATE statement
   description 621
   retrieving
      multiple rows 621
   with INCLUDE clause 613, 621
SELECT INTO
   using with host variables 152
SELECT statement
   AS clause
      with ORDER BY clause 634
   changing result format 1026
   clauses
      DISTINCT 632
      EXCEPT 637
      FROM 628
      GROUP BY 642
      HAVING 643
      INTERSECT 637
      ORDER BY 634
      UNION 637
      WHERE 628
SQL statement terminator
   modifying in DSNTEP2 and DSNTEP4 1077, 1085
   modifying in DSNTIAD 1083
   modifying in SPUFI 1019
   specifying in SPUFI 1019
SQL statements
   ALLOCATE CURSOR 600
   ALTER FUNCTION 480
   ASSOCIATE LOCATORS 600
   checking for successful execution 141
   CLOSE 167, 679, 685
   COBOL program sections 326
   coding REXX 151
   comments
      assembler 241
      C 280
      COBOL 326
      Fortran 371
      PL/I 393
      REXX 405
   CONNECT, with DRDA access 837
   continuation
      assembler 241
      C 280
      COBOL 326
      Fortran 371
      PL/I 393
      REXX 405
   CREATE FUNCTION 480
   DECLARE CURSOR
      description 675, 680
      example 167, 169
   DELETE
      description 678
      example 622
   DESCRIBE 169
   DESCRIBE CURSOR 600
   DESCRIBE PROCEDURE 600
   embedded 895
   error return codes 204
   EXECUTE 189
   EXECUTE IMMEDIATE 187
   FETCH
      description 678, 681
      example 167
   Fortran program sections 371
   in application programs 127
   INSERT 605
   labels
      assembler 241
      C 280
      COBOL 326
      Fortran 371
      PL/I 393
      REXX 405
   margins
      assembler 241
      C 280
      COBOL 326
      Fortran 371
      PL/I 393
      REXX 405
   MERGE
      example 611
   OPEN
      description 677, 680
      example 167
   PL/I program sections 393
   PREPARE 189
   RELEASE, with DRDA access 839
   REXX program sections 405
   SELECT
      description 628
      joining a table to itself 650
      joining tables 644
   SELECT FROM DELETE 624
   SELECT FROM INSERT 613
   SELECT FROM MERGE 612
   SELECT FROM UPDATE 621
   set symbols 241
   UPDATE
      description 678, 681
      example 620
   WHENEVER 208
SQL terminator, specifying in DSNTEP2 and DSNTEP4 1077, 1085
SQL terminator, specifying in DSNTIAD 1083
SQL variable 527
SQL-INIT-FLAG, resetting 326
SQLCA (SQL communication area)
   checking SQLCODE 207
   checking SQLERRD(3) 203
   checking SQLSTATE 207
   checking SQLWARN0 203
   description 203
   DSNTIAC subroutine
      assembler 241
      C 280
      COBOL 326
      PL/I 393
   DSNTIAR subroutine
      assembler 216
      C 280
      COBOL 326
      Fortran 371
      PL/I 393
      sample C program 285
SQLCA (SQL communications area)
   assembler 229
   C/C++ 249
   COBOL 295
   deciding whether to include 141
   Fortran 363
   PL/I 375
   REXX 403
SQLCODE
   -923 949
   -925 720
   -926 720
   +100 208
   +802 216
   values 207
SQLCODE host variable
   deciding whether to declare 141
SQLDA
   setting an XML host variable 169
   XML column 169
SQLDA (SQL descriptor area)
   allocating storage 169, 681
   assembler 141, 230
   assembler program 169
   C 169
   C/C++ 141, 250
stored procedures (continued)
   parameter list 740
   passing large output parameters 756
   recording debugging messages 1036
   running concurrently 766
   setting up the environment 533
   SQLJ.ALTER_JAVA_PATH 766
   SQLJ.DB2_INSTALL_JAR 766
   SQLJ.DB2_REMOVE_JAR 766
   SQLJ.DB2_REPLACE_JAR 766
   SQLJ.DB2_UPDATEJARINFO 766
   SQLJ.INSTALL_JAR 766
   SQLJ.REMOVE_JAR 766
   SQLJ.REPLACE_JAR 766
   syntax for invoking from command line processor 1006
   WLM_REFRESH 766
   XDBDECOMPXML 766
   XSR_ADDSCHEMADOC 766
   XSR_COMPLETE 766
   XSR_REGISTER 766
   XSR_REMOVE 766
stored procedures, moving to a WLM-established environment 534
storm drain effect 124
string
   data type 426
structure array host variable
   declaring 273
   referencing in SQL statements 272
subquery
   basic predicate 661
   conceptual overview 659
   correlated
      DELETE statement 664
      description 663
      example 663
      UPDATE statement 664
   DELETE statement 664
   description 659
   EXISTS predicate 661
   IN predicate 661
   quantified predicate 661
   referential constraints 666
   restrictions with DELETE 666
   UPDATE statement 664
subsystem
   identifier (SSID), specifying 961
subsystem parameters 766
summarizing group values 642
SWITCH TO (connection function of RRSAF)
   language examples 93
   syntax 93
SYNC call, IMS 24
synchronization call abends 720
SYNCPOINT command of CICS 20
syntax diagram
   how to read xv
SYSIBM.MQPOLICY_TABLE
   column descriptions 853
SYSIBM.MQSERVICE_TABLE
   column descriptions 853
SYSLIB data sets 952
SYSPRINT precompiler output
   options section 1039
   source statements section, example 1039
   summary section, example 1039
   symbol cross-reference section 1039
   used to analyze errors 1039
SYSTERM output to analyze errors 1038

T
table
   altering
      changing definitions 442
      using CREATE and ALTER 216
   copying from remote locations 835
   declaring in a program 128
   deleting rows 622
   dependent, cycle restrictions 436
   displaying, list of 627
   DROP statement 449
   filling with test data 1012
   incomplete definition of 448
   inserting multiple rows 607
   inserting single row 606
   loading, in referential structure 435
   merging rows 611
   populating 1012
   referential structure 436
   retrieving 670
   selecting values as you delete rows 624
   selecting values as you insert rows 613
   selecting values as you merge rows 612
   selecting values as you update rows 621
   temporary 445
   updating rows 620
   using three-part table names 835
table and view declarations
   generating with DCLGEN 129
   including in an application program 137
table declarations
   adding to libraries 138
table locator
   assembler 231
   C/C++ 251
   COBOL 298
   PL/I 377
table space
   for sample application 1067
   not logged
      recovering 30
tables
   creating for data integrity 433
TCB (task control block)
   capabilities with CAF 49
   capabilities with RRSAF 80
temporary table
   advantages of 445
   working with 445
terminal monitor program (TMP) 995
TERMINATE IDENTIFY (connection function of RRSAF)
   language examples 115
   program example 121
   syntax 115
TERMINATE THREAD (connection function of RRSAF)
   language examples 114
   program example 121
   syntax 114
TEST command of TSO 1042
test environment, designing 995
test tables 1009
test views of existing tables 1009
user-defined function (UDF) (continued)
   invoking from predicate 723
   main program 487
   multiple programs 510
   naming 497
   nesting SQL statements 669
   parallelism considerations 487
   parameter conventions 490
      assembler 503
      C 504
      COBOL 509
      PL/I 509
   preparing 517
   reentrant 510
   restrictions 487
   samples 519
   scratchpad 498, 518
   scrollable cursor 723
   setting result values 496
   simplifying function resolution 725
   specific name 497
   steps in creating and using 483
   subprogram 487
   table locators
      assembler 514
      C 515
      COBOL 515
      PL/I 516
   testing 1027
   types 483
user-defined functions
   SOAPHTTPNC 883
   SOAPHTTPNV 883
USING DESCRIPTOR clause
   EXECUTE statement 169
   FETCH statement 169
   OPEN statement 169

V
VALUES clause, INSERT statement 605
varbinary host variable
   assembler 231
   C/C++ 251
   COBOL 298
   PL/I 377
varbinary host variable array
   C/C++ 262
   PL/I 382
variable
   assembler 231
   C/C++ 251
   COBOL 298
   declaring in SQL procedure 527
   Fortran 365
   PL/I 377
variable array
   C/C++ 262
   COBOL 307
   PL/I 382
version of a package 919
VERSION precompiler option 911, 919
view
   contents 450
   declaring in a program 128
   description 449
   dropping 451
   identity columns 450
   join of two or more tables 450
   referencing special registers 449
   retrieving 670
   summary data 450
   union of two or more tables 450
   using
      deleting rows 622
      inserting rows 605
      updating rows 620

W
Web service consumer
   SQLSTATEs 883
WebSphere MQ
   APIs 842
   Application Messaging Interface (AMI) 845
   commit environment 851
   description 842
   interaction with DB2 842
   message handling 843
   Message Queue Interface (MQI) 843
   messages 842
WHENEVER statement
   assembler 241
   C 280
   COBOL 326
   CONTINUE clause 208
   Fortran 371
   GO TO clause 208
   NOT FOUND clause 208, 678
   PL/I 393
   specifying 208
   SQL error codes 208
   SQLERROR clause 208
   SQLWARNING clause 208
WHERE clause
   SELECT statement
      description 628
      joining a table to itself 650
      joining tables 644
WITH clause
   common table expressions 452
WITH HOLD clause
   and CICS 674
   and IMS 674
   DECLARE CURSOR statement 674
   restrictions 674
WITH HOLD cursor
   effect on dynamic SQL 189
WLM environment
   moving stored procedures 534
WLM_REFRESH stored procedure
   description 774
   option descriptions 775
   sample JCL 776
   syntax diagram 775
WLM_SET_CLIENT_INFO procedure 776
write-down privilege 471

X
XDBDECOMPXML
   authorization 833