DB2 - Application Programming & SQL Guide - Ver 7
DB2 Universal Database for OS/390 and z/OS
Application Programming
and SQL Guide
Version 7
SC26-9933-01
Note
Before using this information and the product it supports, be sure to read the
general information under “Notices” on page 949.
Contents v
Determining compatibility of SQL and FORTRAN data types. . . . . . . 172
Using indicator variables . . . . . . . . . . . . . . . . . . . . 172
Handling SQL error return codes . . . . . . . . . . . . . . . . . 173
Coding SQL statements in a PL/I application . . . . . . . . . . . . . 174
Defining the SQL communication area . . . . . . . . . . . . . . . 174
Defining SQL descriptor areas . . . . . . . . . . . . . . . . . . 174
Embedding SQL statements . . . . . . . . . . . . . . . . . . 175
Using host variables . . . . . . . . . . . . . . . . . . . . . 177
Declaring host variables . . . . . . . . . . . . . . . . . . . . 178
Using host structures . . . . . . . . . . . . . . . . . . . . . 181
Determining equivalent SQL and PL/I data types . . . . . . . . . . . 182
Determining compatibility of SQL and PL/I data types . . . . . . . . . 186
Using indicator variables . . . . . . . . . . . . . . . . . . . . 187
Handling SQL error return codes . . . . . . . . . . . . . . . . . 188
Coding SQL statements in a REXX application. . . . . . . . . . . . . 189
Defining the SQL communication area . . . . . . . . . . . . . . . 189
Defining SQL descriptor areas . . . . . . . . . . . . . . . . . . 190
Accessing the DB2 REXX Language Support application programming
interfaces . . . . . . . . . . . . . . . . . . . . . . . . 190
Embedding SQL statements in a REXX procedure . . . . . . . . . . 192
Using cursors and statement names . . . . . . . . . . . . . . . 194
Using REXX host variables and data types . . . . . . . . . . . . . 194
Using indicator variables . . . . . . . . . . . . . . . . . . . . 197
Setting the isolation level of SQL statements in a REXX procedure . . . . 198
The statement LOCK TABLE . . . . . . . . . . . . . . . . . . 352
Access paths . . . . . . . . . . . . . . . . . . . . . . . . 353
LOB locks . . . . . . . . . . . . . . . . . . . . . . . . . . 355
Relationship between transaction locks and LOB locks . . . . . . . . . 355
Hierarchy of LOB locks . . . . . . . . . . . . . . . . . . . . 356
LOB and LOB table space lock modes. . . . . . . . . . . . . . . 357
Duration of locks . . . . . . . . . . . . . . . . . . . . . . . 357
Instances when locks on LOB table space are not taken . . . . . . . . 358
The LOCK TABLE statement . . . . . . . . . . . . . . . . . . 358
Program preparation considerations . . . . . . . . . . . . . . . . . 484
Precompiling . . . . . . . . . . . . . . . . . . . . . . . . 484
Binding . . . . . . . . . . . . . . . . . . . . . . . . . . 484
Link-editing . . . . . . . . . . . . . . . . . . . . . . . . . 485
Loading and running . . . . . . . . . . . . . . . . . . . . . 485
Restart and recovery . . . . . . . . . . . . . . . . . . . . . . 486
JCL example of a batch backout . . . . . . . . . . . . . . . . . 486
JCL example of restarting a DL/I batch job . . . . . . . . . . . . . 487
Finding the DL/I batch checkpoint ID . . . . . . . . . . . . . . . 488
General rules about predicate evaluation . . . . . . . . . . . . . . . 631
Order of evaluating predicates . . . . . . . . . . . . . . . . . . 632
Summary of predicate processing . . . . . . . . . . . . . . . . 632
Examples of predicate properties . . . . . . . . . . . . . . . . . 636
Predicate filter factors . . . . . . . . . . . . . . . . . . . . . 637
DB2 predicate manipulation . . . . . . . . . . . . . . . . . . . 642
Column correlation . . . . . . . . . . . . . . . . . . . . . . 645
Using host variables efficiently . . . . . . . . . . . . . . . . . . . 648
Using REOPT(VARS) to change the access path at run time . . . . . . 648
Rewriting queries to influence access path selection. . . . . . . . . . 649
Writing efficient subqueries . . . . . . . . . . . . . . . . . . . . 652
Correlated subqueries . . . . . . . . . . . . . . . . . . . . . 653
Noncorrelated subqueries . . . . . . . . . . . . . . . . . . . 654
Subquery transformation into join. . . . . . . . . . . . . . . . . 655
Subquery tuning . . . . . . . . . . . . . . . . . . . . . . . 657
Using scrollable cursors efficiently . . . . . . . . . . . . . . . . . 658
Writing efficient queries on views with UNION operators . . . . . . . . . 659
Special techniques to influence access path selection . . . . . . . . . . 660
Obtaining information about access paths . . . . . . . . . . . . . 661
Minimizing overhead for retrieving few rows: OPTIMIZE FOR n ROWS 661
Fetching a limited number of rows: FETCH FIRST n ROWS ONLY . . . . 663
Reducing the number of matching columns . . . . . . . . . . . . . 664
Adding extra local predicates . . . . . . . . . . . . . . . . . . 665
Creating indexes for efficient star schemas . . . . . . . . . . . . . 666
Rearranging the order of tables in a FROM clause . . . . . . . . . . 668
Updating catalog statistics . . . . . . . . . . . . . . . . . . . 668
Using a subsystem parameter . . . . . . . . . . . . . . . . . . 670
Chapter 29. Programming for the call attachment facility (CAF) . . . . . 733
Call attachment facility capabilities and restrictions . . . . . . . . . . . 733
Capabilities when using CAF . . . . . . . . . . . . . . . . . . 733
CAF requirements . . . . . . . . . . . . . . . . . . . . . . 734
How to use CAF . . . . . . . . . . . . . . . . . . . . . . . . 736
Summary of connection functions . . . . . . . . . . . . . . . . 737
Accessing the CAF language interface. . . . . . . . . . . . . . . 739
General properties of CAF connections . . . . . . . . . . . . . . 740
CAF function descriptions . . . . . . . . . . . . . . . . . . . 741
CONNECT: Syntax and usage . . . . . . . . . . . . . . . . . . 743
OPEN: Syntax and usage . . . . . . . . . . . . . . . . . . . 747
CLOSE: Syntax and usage . . . . . . . . . . . . . . . . . . . 749
DISCONNECT: Syntax and usage . . . . . . . . . . . . . . . . 750
TRANSLATE: Syntax and usage . . . . . . . . . . . . . . . . . 751
Summary of CAF behavior . . . . . . . . . . . . . . . . . . . 753
Sample scenarios . . . . . . . . . . . . . . . . . . . . . . . 754
A single task with implicit connections . . . . . . . . . . . . . . . 754
A single task with explicit connections . . . . . . . . . . . . . . . 754
Several tasks . . . . . . . . . . . . . . . . . . . . . . . . 754
Exits from your application . . . . . . . . . . . . . . . . . . . . 755
Attention exits . . . . . . . . . . . . . . . . . . . . . . . . 755
Recovery routines . . . . . . . . . . . . . . . . . . . . . . 755
Error messages and DSNTRACE . . . . . . . . . . . . . . . . . . 756
CAF return codes and reason codes . . . . . . . . . . . . . . . . 756
Subsystem support subcomponent codes (X'00F3') . . . . . . . . . . 757
Program examples . . . . . . . . . . . . . . . . . . . . . . . 757
Sample JCL for using CAF . . . . . . . . . . . . . . . . . . . 757
Sample assembler code for using CAF . . . . . . . . . . . . . . 757
Loading and deleting the CAF language interface. . . . . . . . . . . 758
Establishing the connection to DB2 . . . . . . . . . . . . . . . . 758
Checking return codes and reason codes. . . . . . . . . . . . . . 760
Using dummy entry point DSNHLI . . . . . . . . . . . . . . . . 763
Variable declarations . . . . . . . . . . . . . . . . . . . . . 764
Example DB2 REXX application . . . . . . . . . . . . . . . . . . 866
Sample COBOL program using DRDA access . . . . . . . . . . . . . 880
Sample COBOL program using DB2 private protocol access . . . . . . . 888
Examples of using stored procedures . . . . . . . . . . . . . . . . 894
Calling a stored procedure from a C program . . . . . . . . . . . . 895
Calling a stored procedure from a COBOL program . . . . . . . . . . 898
Calling a stored procedure from a PL/I program . . . . . . . . . . . 901
C stored procedure: GENERAL . . . . . . . . . . . . . . . . . 903
C stored procedure: GENERAL WITH NULLS . . . . . . . . . . . . 905
COBOL stored procedure: GENERAL . . . . . . . . . . . . . . . 907
COBOL stored procedure: GENERAL WITH NULLS. . . . . . . . . . 910
PL/I stored procedure: GENERAL . . . . . . . . . . . . . . . . 912
PL/I stored procedure: GENERAL WITH NULLS . . . . . . . . . . . 913
Appendix J. Summary of changes to DB2 for OS/390 and z/OS Version 7 945
Enhancements for managing data . . . . . . . . . . . . . . . . . 945
Enhancements for reliability, scalability, and availability. . . . . . . . . . 945
Easier development and integration of e-business applications . . . . . . . 946
Improved connectivity . . . . . . . . . . . . . . . . . . . . . . 947
Features of DB2 for OS/390 and z/OS . . . . . . . . . . . . . . . . 948
Migration considerations . . . . . . . . . . . . . . . . . . . . . 948
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . 949
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . 953
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . 971
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-1
About this book
This book discusses how to design and write application programs that access
DB2® for OS/390® (DB2), a highly flexible relational database management system
(DBMS).
Important
In this version of DB2 for OS/390 and z/OS, some utility functions are
available as optional products. You must separately order and purchase a
license to such utilities, and discussion of those utility functions in this
publication is not intended to otherwise imply that you have a license to them.
When referring to a DB2 product other than DB2 for OS/390 and z/OS, this book
uses the product’s full name to avoid ambiguity.
v Required items appear on the main path of a syntax diagram. Optional items
  appear below the main path:
  (Diagram: required_item on the main path, with optional_item below it)
  If an optional item appears above the main path, that item has no effect on the
  execution of the statement and is used only for readability:
  (Diagram: required_item on the main path, with optional_item above it)
v If you can choose from two or more items, they appear vertically, in a stack.
  If you must choose one of the items, one item of the stack appears on the main
  path:
  (Diagram: required_item followed by required_choice1 on the main path, with
  required_choice2 stacked below it)
  If choosing one of the items is optional, the entire stack appears below the
  main path:
  (Diagram: required_item on the main path, with optional_choice1 and
  optional_choice2 stacked below it)
  If one of the items is the default, it appears above the main path and the
  remaining choices are shown below:
  (Diagram: required_item on the main path, with default_choice above it and the
  optional_choice entries stacked below it)
v An arrow returning to the left, above the main line, indicates an item that can
  be repeated:
  (Diagram: required_item followed by repeatable_item with a repeat arrow)
  If the repeat arrow contains a comma, you must separate repeated items with a
  comma.
  A repeat arrow above a stack indicates that you can repeat the items in the
  stack.
v Keywords appear in uppercase (for example, FROM). They must be spelled exactly
as shown. Variables appear in all lowercase letters (for example, column-name).
They represent user-supplied names or values.
v If punctuation marks, parentheses, arithmetic operators, or other such symbols
are shown, you must enter them as part of the syntax.
http://www.ibm.com/software/db2os390
The Web site has a feedback page that you can use to send comments.
v Complete the readers’ comment form at the back of the book and return it by
mail, by fax (800-426-7773 for the United States and Canada), or by giving it to
an IBM representative.
For more advanced topics on using SELECT statements, see “Chapter 4. Using
subqueries” on page 43, “Chapter 19. Planning to access distributed data” on
page 369, and Chapter 4 of DB2 SQL Reference.
Examples of SQL statements illustrate the concepts that this chapter discusses.
Consider developing SQL statements similar to these examples and then executing
them dynamically using SPUFI or Query Management Facility (QMF).
Result tables
The data retrieved through SQL is always in the form of a table, which is called a
result table. Like the tables from which you retrieve the data, a result table has rows
and columns. A program fetches this data one row at a time.
Example: SELECT statement: This SELECT statement retrieves the last name,
first name, and phone number of employees in department D11 from the sample
employee table:
SELECT LASTNAME, FIRSTNME, PHONENO
FROM DSN8710.EMP
WHERE WORKDEPT = 'D11'
ORDER BY LASTNAME;
The result table displays in this form after SPUFI fetches and formats it. The format
of your results might be different.
Data types
When you create a DB2 table, you define each column to have a specific data type.
The data type can be a built-in data type or a distinct type. This section discusses
built-in data types. For information on distinct types, see “Chapter 15. Creating and
using distinct types” on page 301. The data type of a column determines what you
can and cannot do with it. When you perform operations on columns, the data must
be compatible with the data type of the referenced column. For example, you
cannot insert character data, like a last name, into a column whose data type is
numeric. Similarly, you cannot compare columns containing incompatible data
types.
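For example, with the sample employee table, the first predicate below compares compatible operands and the second does not; these statements are illustrative sketches:

```sql
-- Valid: numeric column compared with a numeric value
SELECT EMPNO
  FROM DSN8710.EMP
  WHERE SALARY > 30000;

-- Invalid: character column compared with a numeric value,
-- so DB2 rejects the statement
SELECT EMPNO
  FROM DSN8710.EMP
  WHERE LASTNAME > 30000;
```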
For more detailed information on each data type, see Chapter 2 of DB2 SQL
Reference.
Table 1 on page 5 shows whether operands of any two data types are compatible
(Yes) or incompatible (No).
Because the example does not specify a WHERE clause, the statement retrieves
data from all rows.
The dashes for MGRNO and LOCATION in the result table indicate null values.
SELECT * is recommended mostly for use with dynamic SQL and view definitions.
You can use SELECT * in static SQL, but this is not recommended; if you add a
column to the table to which SELECT * refers, the program might reference
columns for which you have not defined receiving host variables. For more
information on host variables, see “Accessing data using host variables and host
structures” on page 67.
If you list the column names in a static SELECT statement instead of using an
asterisk, you can avoid the problem just mentioned. You can also see the
relationship between the receiving host variables and the columns in the result
table.
Example: SELECT column-name: This SQL statement selects only the MGRNO
and DEPTNO columns from the department table:
SELECT MGRNO, DEPTNO
FROM DSN8710.DEPT;
With a single SELECT statement, you can select data from one column or as many
as 750 columns.
For example, if you want to execute a DB2 built-in function on a host variable,
you can use an SQL statement like this:
SELECT RAND(:HRAND)
FROM SYSIBM.SYSDUMMY1;
If you want to order the rows of data in the result table, use the ORDER BY clause
described in “Putting the rows in order: ORDER BY” on page 9.
Example: CREATE VIEW with AS clause: You can specify result column names in
the select-clause of a CREATE VIEW statement. You do not need to supply the
column list of CREATE VIEW, because the AS keyword names the derived column.
The columns in the view EMP_SAL are EMPNO and TOTAL_SAL.
CREATE VIEW EMP_SAL AS
SELECT EMPNO,SALARY+BONUS+COMM AS TOTAL_SAL
FROM DSN8710.EMP;
Example: UNION ALL with AS clause: You can use the AS clause to give the
same name to corresponding columns of tables in a union. The third result column
from the union of the two tables has the name TOTAL_VALUE, even though it
contains data derived from columns with different names:
SELECT 'On hand' AS STATUS, PARTNO, QOH * COST AS TOTAL_VALUE
FROM PART_ON_HAND
UNION ALL
SELECT 'Ordered' AS STATUS, PARTNO, QORDER * COST AS TOTAL_VALUE
FROM ORDER_PART
ORDER BY PARTNO, TOTAL_VALUE;
The column STATUS and the derived column TOTAL_VALUE have the same name
in the first and second result tables, and are combined in the union of the two result
tables:
STATUS PARTNO TOTAL_VALUE
----------- ------ -----------
On hand 00557 345.60
Ordered     00557      150.50
 .
 .
 .
For information on unions, see “Merging lists of values: UNION” on page 12.
Example: FROM clause with AS clause: Use the AS clause in a FROM clause to
assign a name to a derived column that you want to refer to in a GROUP BY
clause. Using the AS clause in the first SELECT clause causes an error, because
the names assigned in the AS clause do not yet exist when the GROUP BY
executes. However, you can use an AS clause of a subselect in the outer GROUP
BY clause, because the subselect is at a lower level than the GROUP BY that
references the name. This SQL statement names HIREYEAR in the nested table
expression, which lets you use the name of that result column in the GROUP BY
clause:
SELECT HIREYEAR, AVG(SALARY)
FROM (SELECT YEAR(HIREDATE) AS HIREYEAR, SALARY
FROM DSN8710.EMP) AS NEWEMP
GROUP BY HIREYEAR;
If a search condition contains a column of a distinct type, the value to which that
column is compared must be of the same distinct type, or you must cast the value
to the distinct type. See “Chapter 15. Creating and using distinct types” on page 301
for more information.
The next sections illustrate different comparison operators that you can use in a
predicate in a WHERE clause. The following table lists the comparison operators.
Table 2. Comparison operators used in conditions

Type of comparison             Specified with...  Example
-----------------------------  -----------------  ------------------------------------------
Equal to null                  IS NULL            PHONENO IS NULL
Equal to                       =                  DEPTNO = 'X01'
Not equal to                   <>                 DEPTNO <> 'X01'
Less than                      <                  AVG(SALARY) < 30000
Less than or equal to          <=                 AGE <= 25
Not less than                  >=                 AGE >= 21
Greater than                   >                  SALARY > 2000
Greater than or equal to       >=                 SALARY >= 5000
Not greater than               <=                 SALARY <= 5000
Similar to another value       LIKE               NAME LIKE '%SMITH%' or STATUS LIKE 'N_'
At least one of two conditions OR                 HIREDATE < '1965-01-01' OR SALARY < 16000
Both of two conditions         AND                HIREDATE < '1965-01-01' AND SALARY < 16000
Between two values             BETWEEN            SALARY BETWEEN 20000 AND 40000
Equals a value in a set        IN (X, Y, Z)       DEPTNO IN ('B01', 'C01', 'D01')
You can also search for rows that do not satisfy one of the above conditions, by
using the NOT keyword before the specified condition.
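For example, this sketch uses NOT to negate a BETWEEN condition on the sample employee table:

```sql
SELECT EMPNO, SALARY
  FROM DSN8710.EMP
  WHERE NOT (SALARY BETWEEN 20000 AND 40000);
```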
You can list the rows in ascending or descending order. Null values appear last in
an ascending sort and first in a descending sort.
Example: ORDER BY clause with a column name as the sort key: Retrieve the
employee numbers, last names, and hire dates of employees in department A00 in
ascending order of hire dates:
SELECT EMPNO, LASTNAME, HIREDATE
FROM DSN8710.EMP
WHERE WORKDEPT = 'A00'
ORDER BY HIREDATE ASC;
Example: ORDER BY clause with an expression as the sort key: Retrieve the
employee numbers, salaries, commissions, and total compensation (salary plus
commission) for employees with a total compensation of greater than 40000. Order
the results by total compensation:
SELECT EMPNO, SALARY, COMM, SALARY+COMM AS "TOTAL COMP"
FROM DSN8710.EMP
WHERE SALARY+COMM > 40000
ORDER BY SALARY+COMM;
Except for the columns named in the GROUP BY clause, the SELECT statement
must specify any other selected columns as an operand of one of the column
functions.
If a column you specify in the GROUP BY clause contains null values, DB2
considers those null values to be equal. Thus, all nulls form a single group.
When it is used, the GROUP BY clause follows the FROM clause and any WHERE
clause, and precedes the ORDER BY clause.
You can also group the rows by the values of more than one column. For example,
the following statement finds the average salary for men and women in departments
A00 and C01:
SELECT WORKDEPT, SEX, AVG(SALARY) AS AVG_SALARY
FROM DSN8710.EMP
WHERE WORKDEPT IN ('A00', 'C01')
GROUP BY WORKDEPT, SEX;
DB2 groups the rows first by department number and next (within each department)
by sex before DB2 derives the average SALARY value for each group.
Compare the preceding example with the second example shown in “Summarizing
group values: GROUP BY” on page 10. There, the HAVING COUNT(*) > 1 clause
ensures that DB2 retrieves only groups that contain more than one row.
The HAVING clause tests a property of the group. For example, you could use it to
retrieve the average salary and minimum education level of women in each
department in which all female employees have an education level greater than or
equal to 16. Assuming you only want results from departments A00 and D11, the
following SQL statement tests the group property, MIN(EDLEVEL):
SELECT WORKDEPT, AVG(SALARY) AS AVG_SALARY,
MIN(EDLEVEL) AS MIN_EDLEVEL
FROM DSN8710.EMP
WHERE SEX = 'F' AND WORKDEPT IN ('A00', 'D11')
GROUP BY WORKDEPT
HAVING MIN(EDLEVEL) >= 16;
When you specify both GROUP BY and HAVING, the HAVING clause must follow
the GROUP BY clause. A function in a HAVING clause can include DISTINCT if you
have not used DISTINCT anywhere else in the same SELECT statement. You can
also connect multiple predicates in a HAVING clause with AND and OR, and you
can use NOT for any predicate of a search condition.
When you use the UNION statement, the SQLNAME field of the SQLDA contains
the column names of the first operand.
If you have an ORDER BY clause, it must appear after the last SELECT statement
that is part of the union. In this example, the first column of the final result table
determines the final order of the rows.
Avoiding decimal arithmetic errors: For static SQL statements, the simplest way
to avoid a division error is to override DEC31 rules by specifying the precompiler
option DEC(15). In some cases you can avoid a division error by specifying
D31.s, which reduces the probability of errors for statements embedded in the
program. (The number s is between 1 and 9 and represents the minimum scale to
be used for division operations.)
If the dynamic SQL statements have bind, define, or invoke behavior and the value
of the installation option for USE FOR DYNAMICRULES on panel DSNTIPF is NO,
you can use the precompiler option DEC(15), DEC15, or D15.s to override DEC31
rules.
For a dynamic statement, or for a single static statement, use the scalar function
DECIMAL to specify values of the precision and scale for a result that causes no
errors.
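For example, in this sketch (the choice of columns is arbitrary), casting the dividend with the DECIMAL function controls the precision and scale that DB2 uses for the quotient:

```sql
SELECT EMPNO, DECIMAL(SALARY, 15, 2) / COMM
  FROM DSN8710.EMP;
```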
For a dynamic statement, before you execute the statement, set the value of
special register CURRENT PRECISION to DEC15 or D15.s.
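For example, this embedded statement (a sketch) sets the register so that subsequent dynamic statements use 15-digit precision with a minimum division scale of 5:

```sql
EXEC SQL SET CURRENT PRECISION = 'D15.5';
```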
Even if you use DEC31 rules, multiplication operations can sometimes cause
overflow because the precision of the product is greater than 31. To avoid overflow
from multiplication of large numbers, use the MULTIPLY_ALT built-in function
instead of the multiplication operator.
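For example, this sketch multiplies two decimal columns of the sample employee table with MULTIPLY_ALT instead of the * operator:

```sql
SELECT EMPNO, MULTIPLY_ALT(SALARY, COMM) AS SAL_COMM_PRODUCT
  FROM DSN8710.EMP;
```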
The contents of the DB2 system catalog tables can be a useful reference tool when
you begin to develop an SQL statement or an application program.
If the table about which you display column information includes LOB or ROWID
columns, the LENGTH field for those columns contains the number of bytes that
those columns occupy in the base table, rather than the length of the LOB or ROWID data.
To determine the maximum length of data for a LOB or ROWID column, include the
LENGTH2 column in your query. For example:
SELECT NAME, COLTYPE, LENGTH, LENGTH2
FROM SYSIBM.SYSCOLUMNS
WHERE TBNAME = 'EMP_PHOTO_RESUME'
AND TBCREATOR = 'DSN8710';
You must separate each column description from the next with a comma, and
enclose the entire list of column descriptions in parentheses.
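For example, a column list might look like the following sketch for the YDEPT work table that later examples use; the data types shown mirror the sample department table and are assumptions:

```sql
CREATE TABLE YDEPT
  (DEPTNO   CHAR(3)     NOT NULL,
   DEPTNAME VARCHAR(36) NOT NULL,
   MGRNO    CHAR(6),
   ADMRDEPT CHAR(3)     NOT NULL,
   LOCATION CHAR(16));
```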
Each example shown in this chapter assumes you logged on using your own
authorization ID. The authorization ID qualifies the name of each object you create.
For example, if your authorization ID is SMITH, and you create table YDEPT, the
name of the table is SMITH.YDEPT. If you want to access table DSN8710.DEPT,
you must refer to it by its complete name. If you want to access your own table
YDEPT, you need only to refer to it as “YDEPT”.
If you want DEPTNO to be a primary key as in the sample table, explicitly define
the key. Use an ALTER TABLE statement:
ALTER TABLE YDEPT
PRIMARY KEY(DEPTNO);
You can use an INSERT statement with a SELECT clause to copy rows from one
table to another. For example, a statement like this copies all of the rows from
DSN8710.DEPT to your own YDEPT work table:
INSERT INTO YDEPT
   SELECT *
     FROM DSN8710.DEPT;
For information on the INSERT statement, see “Modifying DB2 data” on page 25.
This statement also creates a referential constraint between the foreign key in
YEMP (WORKDEPT) and the primary key in YDEPT (DEPTNO). It also restricts all
phone numbers to unique numbers.
If you want to change a table definition after you create it, use the statement ALTER
TABLE.
If you want to change a table name after you create it, use the statement RENAME
TABLE. For details on the ALTER TABLE and RENAME TABLE statements, see
Chapter 5 of DB2 SQL Reference. You cannot drop a column from a table or
change a column definition. However, you can add and drop constraints on columns
in a table.
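For example, a referential constraint like the one between YEMP (WORKDEPT) and YDEPT (DEPTNO) that is described earlier could be added after the tables exist; this is a sketch:

```sql
ALTER TABLE YEMP
  ADD FOREIGN KEY (WORKDEPT)
  REFERENCES YDEPT;
```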
Example: You can also create a definition by copying the definition of a base table:
CREATE GLOBAL TEMPORARY TABLE TEMPPROD LIKE PROD;
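For comparison, an equivalent explicit-column definition might look like the following sketch. The book names only the DESCRIPTION and CURDATE columns of PROD, so the remaining column and all data types here are assumptions:

```sql
CREATE GLOBAL TEMPORARY TABLE TEMPPROD
  (SERIAL      CHAR(8)     NOT NULL,
   DESCRIPTION VARCHAR(60) NOT NULL,
   CURDATE     DATE        NOT NULL);
```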
The SQL statements in the previous examples create identical definitions, even
though table PROD contains two columns, DESCRIPTION and CURDATE, that are
defined as NOT NULL WITH DEFAULT. Because created temporary tables do not
support WITH DEFAULT, DB2 changes the definitions of DESCRIPTION and
CURDATE to NOT NULL when you use the second method to define TEMPPROD.
After you execute one of the two CREATE statements, the definition of TEMPPROD
exists, but no instances of the table exist. To drop the definition of TEMPPROD, you
must execute this statement:
DROP TABLE TEMPPROD;
An instance of a created temporary table exists at the current server until one of the
following actions occurs:
v The remote server connection under which the instance was created terminates.
v The unit of work under which the instance was created completes.
When you execute a ROLLBACK statement, DB2 deletes the instance of the
created temporary table. When you execute a COMMIT statement, DB2 deletes
the instance of the created temporary table unless a cursor for accessing the
created temporary table is defined WITH HOLD and is open.
v The application process ends.
For example, suppose that you create a definition of TEMPPROD and then run an
application that contains these statements:
EXEC SQL DECLARE C1 CURSOR FOR SELECT * FROM TEMPPROD;
EXEC SQL INSERT INTO TEMPPROD SELECT * FROM PROD;
EXEC SQL OPEN C1;
 .
 .
 .
EXEC SQL COMMIT;
When you execute the INSERT statement, DB2 creates an instance of TEMPPROD
and populates that instance with rows from table PROD. When the COMMIT
statement is executed, DB2 deletes all rows from TEMPPROD. If, however, you
change the declaration of C1 to:
EXEC SQL DECLARE C1 CURSOR WITH HOLD
FOR SELECT * FROM TEMPPROD;
DB2 does not delete the contents of TEMPPROD until the application ends
because C1, a cursor defined WITH HOLD, is open when the COMMIT statement is
executed. In either case, DB2 drops the instance of TEMPPROD when the
application ends.
Before you can define declared temporary tables, you must create a special
database and table spaces for them. You do that by executing the CREATE
DATABASE statement with the AS TEMP clause, and then creating segmented
table spaces in that database. A DB2 subsystem can have only one database for
declared temporary tables, but that database can contain more than one table
space.
Example: These statements create a database and table space for declared
temporary tables:
CREATE DATABASE DTTDB AS TEMP;
CREATE TABLESPACE DTTTS IN DTTDB
SEGSIZE 4;
You can define a declared temporary table in any of the following ways:
v Specify all the columns in the table.
Unlike columns of created temporary tables, columns of declared temporary
tables can include the WITH DEFAULT clause.
v Use a LIKE clause to copy the definition of a base table, created temporary
table, or view.
If the base table or created temporary table that you copy has identity columns,
you can specify that the corresponding columns in the declared temporary table
are also identity columns. Do that by specifying the INCLUDING IDENTITY
COLUMN ATTRIBUTES clause when you define the declared temporary table.
v Use a fullselect to choose specific columns from a base table, created temporary
table, or view.
If the base table, created temporary table, or view from which you select columns
has identity columns, you can specify that the corresponding columns in the
declared temporary table are also identity columns. Do that by specifying the
INCLUDING IDENTITY COLUMN ATTRIBUTES clause when you define the
declared temporary table.
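For example, the fullselect form might look like this sketch; the table and column choices are illustrative:

```sql
DECLARE GLOBAL TEMPORARY TABLE TEMPEMP
  AS (SELECT EMPNO, LASTNAME, WORKDEPT
        FROM DSN8710.EMP)
  DEFINITION ONLY
  ON COMMIT PRESERVE ROWS;
```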
DB2 creates an empty instance of a declared temporary table when it executes the
DECLARE GLOBAL TEMPORARY TABLE statement. You can populate the
declared temporary table using INSERT statements, modify the table using
searched or positioned UPDATE or DELETE statements, and query the table using
SELECT statements. You can also create indexes on the declared temporary table.
For example, suppose that you execute statements like these in an application
program:
EXEC SQL DECLARE GLOBAL TEMPORARY TABLE TEMPPROD
              LIKE BASEPROD
              ON COMMIT PRESERVE ROWS;
EXEC SQL INSERT INTO SESSION.TEMPPROD
              SELECT * FROM BASEPROD;
 .
 .
 .
EXEC SQL COMMIT;
When DB2 executes the DECLARE GLOBAL TEMPORARY TABLE statement, DB2
creates an empty instance of TEMPPROD. The INSERT statement populates that
instance with rows from table BASEPROD. The qualifier, SESSION, must be
specified in any statement that references TEMPPROD. When DB2 executes the
COMMIT statement, DB2 keeps all rows in TEMPPROD because TEMPPROD is
defined with ON COMMIT PRESERVE ROWS. When the program ends, DB2 drops
TEMPPROD.
Use the DROP TABLE statement with care: Dropping a table is NOT equivalent
to deleting all of its rows. When you drop a table, you lose more than its data
and its definition: you also lose all synonyms, views, indexes, and referential and
check constraints that are associated with that table, and all authorities that are
granted on the table.
For more information on the DROP statement, see Chapter 5 of DB2 SQL
Reference.
Use the CREATE VIEW statement to define a view and give the view a name, just
as you do for a table.
CREATE VIEW VDEPTM AS
SELECT DEPTNO, MGRNO, LASTNAME, ADMRDEPT
FROM DSN8710.DEPT, DSN8710.EMP
WHERE DSN8710.EMP.EMPNO = DSN8710.DEPT.MGRNO;
This view shows each department manager’s name with the department data in the
DSN8710.DEPT table.
When you create a view, you can reference the USER and CURRENT SQLID
special registers in the CREATE VIEW statement. When referencing the view, DB2
uses the value of the USER or CURRENT SQLID that belongs to the user of the
SQL statement (SELECT, UPDATE, INSERT, or DELETE) rather than the creator of
the view. In other words, a reference to a special register in a view definition refers
to its run-time value.
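For example, this sketch assumes a hypothetical PROJECTS table with an OWNER column that stores authorization IDs; each user who queries the view sees only that user's own rows, because USER is evaluated at run time:

```sql
CREATE VIEW MY_PROJECTS AS
  SELECT *
    FROM PROJECTS
    WHERE OWNER = USER;
```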
You can use views to limit access to certain kinds of data, such as salary
information. You can also use views to do the following:
v Make a subset of a table’s data available to an application. For example, a view
based on the employee table might contain rows for a particular department only.
v Combine columns from two or more tables and make the combined data
available to an application. By using a SELECT statement that matches values in
one table with those in another table, you can create a view that presents data
from both tables. However, you can only select data from this type of view. You
cannot update, delete, or insert data using a view that joins two or more tables.
v Combine rows from two or more tables and make the combined data available to
an application. By using two or more subselects that are connected by UNION or
UNION ALL operators, you can create a view that presents data from several
tables. However, you can only select data from this type of view. You cannot
update, delete, or insert data using a view that contains UNION operations.
v Present computed data, and make the resulting data available to an application.
You can compute such data using any function or operation that you can use in a
SELECT statement.
In either case, for every row you insert, you must provide a value for any column
that does not have a default value. For a column that meets one of these
conditions, you can specify DEFAULT to tell DB2 to insert the default value for that
column:
- Is nullable.
- Is defined with a default value.
- Has data type ROWID. ROWID columns always have default values.
- Is an identity column. Identity columns always have default values.
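For example, the following INSERT statement specifies DEFAULT for two columns.
(This is a sketch: it assumes the YDEPT table used in the examples below, in
which MGRNO and LOCATION are nullable and therefore have default values.)
INSERT INTO YDEPT (DEPTNO, DEPTNAME, MGRNO, ADMRDEPT, LOCATION)
  VALUES ('F01', 'PLANNING', DEFAULT, 'A00', DEFAULT);
DB2 inserts null values for MGRNO and LOCATION.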
The values that you can insert into a ROWID column or identity column depend on
whether the column is defined with GENERATED ALWAYS or GENERATED BY
DEFAULT. See “Inserting data into a ROWID column” on page 27 and “Inserting
data into an identity column” on page 27 for more information.
You can name all columns for which you are providing values. Alternatively, you can
omit the column name list.
For static insert statements, it is a good idea to name all columns for which you are
providing values because:
- Your insert statement is independent of the table format. (For example, you do
  not have to change the statement when a column is added to the table.)
- You can verify that you are supplying the values in the proper order.
- Your source statements are more self-descriptive.
If you do not name the columns in a static insert statement, and a column is later
added to the table being inserted into, the statement fails after any rebind unless
you change the statement to include a value for the new column. This is true even if
the new column has a default value.
For example,
INSERT INTO YDEPT (DEPTNO, DEPTNAME, MGRNO, ADMRDEPT, LOCATION)
VALUES ('E31', 'DOCUMENTATION', '000010', 'E01', ' ');
After inserting a new department row into your YDEPT table, you can use a
SELECT statement to see what you have loaded into the table. This SQL
statement:
SELECT *
FROM YDEPT
WHERE DEPTNO LIKE 'E%'
ORDER BY DEPTNO;
shows you all the new department rows that you have inserted:
DEPTNO DEPTNAME MGRNO ADMRDEPT LOCATION
====== ==================================== ====== ======== ===========
E01 SUPPORT SERVICES 000050 A00 -----------
E11 OPERATIONS 000090 E01 -----------
E21 SOFTWARE SUPPORT 000100 E01 -----------
E31 DOCUMENTATION 000010 E01 -----------
The following statement creates a table, TELE, to hold the data. (The column
definitions match the corresponding columns of DSN8710.EMP.)
CREATE TABLE TELE
  (NAME2 VARCHAR(15) NOT NULL,
   NAME1 VARCHAR(12) NOT NULL,
   PHONE CHAR(4));
This statement copies data from DSN8710.EMP into the newly created table:
INSERT INTO TELE
SELECT LASTNAME, FIRSTNME, PHONENO
FROM DSN8710.EMP
WHERE WORKDEPT = 'D21';
The two previous statements create and fill a table, TELE, that looks like this:
NAME2 NAME1 PHONE
=============== ============ =====
PULASKI EVA 7831
JEFFERSON JAMES 2094
MARINO SALVATORE 3780
SMITH DANIEL 0961
JOHNSON SYBIL 8953
PEREZ MARIA 9001
MONTEVERDE ROBERT 3780
The INSERT statement fills the newly created table with data selected from the
DSN8710.EMP table: the names and phone numbers of employees in Department
D21.
Before you insert data into a ROWID column, you must know how the ROWID
column is defined. ROWID columns can be defined as GENERATED ALWAYS or
GENERATED BY DEFAULT. GENERATED ALWAYS means that DB2 generates a
value for the column, and you cannot insert data into that column. If the column is
defined as GENERATED BY DEFAULT, you can insert a value, and DB2 provides a
default value if you do not supply one. For example, suppose that tables T1 and T2
have two columns: an integer column and a ROWID column. For the following
statement to execute successfully, ROWIDCOL2 must be defined as GENERATED
BY DEFAULT.
INSERT INTO T2 (INTCOL2,ROWIDCOL2)
SELECT INTCOL1, ROWIDCOL1 FROM T1;
Before you insert data into an identity column, you must know whether the column
is defined as GENERATED ALWAYS or GENERATED BY DEFAULT. If you try to
insert a value into an identity column that is defined as GENERATED ALWAYS, the
insert operation fails.
The values that DB2 generates for an identity column depend on how the column is
defined. The START WITH parameter determines the first value that DB2
generates. The MINVALUE and MAXVALUE parameters determine the minimum
and maximum values that DB2 generates. The CYCLE or NO CYCLE parameter
determines whether DB2 wraps values when it has generated all values between
the START WITH value and MAXVALUE, if the values are ascending, or between
the START WITH value and MINVALUE, if the values are descending.
Identity columns that are defined with GENERATED ALWAYS and NO CYCLE are
guaranteed to have unique values. For identity columns that are defined as
GENERATED BY DEFAULT and NO CYCLE, only the values that DB2 generates
are guaranteed to be unique among each other. To guarantee unique values in an
identity column, you need to create a unique index on the identity column.
For example, suppose that you create table T1 with an identity column. (The
definition shown here matches the values that the example generates.)
CREATE TABLE T1
  (CHARCOL1 CHAR(1),
   IDENTCOL1 SMALLINT GENERATED ALWAYS AS IDENTITY
     (START WITH -1,
      INCREMENT BY 1,
      MINVALUE -3,
      MAXVALUE 3,
      CYCLE));
Now suppose that you execute the following INSERT statement six times:
INSERT INTO T1 (CHARCOL1) VALUES ('A');
When DB2 generates values for IDENTCOL1, it starts with -1 and increments by 1
until it reaches the MAXVALUE of 3 on the fifth INSERT. To generate the value for
the sixth INSERT, DB2 cycles back to MINVALUE, which is -3. T1 looks like this
after the six INSERTs are executed:
CHARCOL1 IDENTCOL1
======== =========
A            -1
A             0
A             1
A             2
A             3
A            -3
Examples: This statement inserts information about a new employee into the
YEMP table. Because YEMP has a foreign key WORKDEPT referencing the
primary key DEPTNO in YDEPT, the value inserted for WORKDEPT (E31) must be
a value of DEPTNO in YDEPT or null.
INSERT INTO YEMP
VALUES ('000400', 'RUTHERFORD', 'B', 'HAYES', 'E31',
'5678', '1983-01-01', 'MANAGER', 16, 'M', '1943-07-10', 24000,
500, 1900);
The following statement also inserts a row into the YEMP table. However, the
statement does not specify a value for every column. Because the unspecified
columns allow nulls, DB2 inserts null values into the columns not specified.
Because YEMP has a foreign key WORKDEPT referencing the primary key
DEPTNO in YDEPT, the value inserted for WORKDEPT (D11) must be a value of
DEPTNO in YDEPT or null.
INSERT INTO YEMP
(EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, JOB)
VALUES ('000410', 'MILLARD', 'K', 'FILLMORE', 'D11', '4888', 'MANAGER');
The SET clause names the columns that you want to update and provides the
values you want to assign to those columns. You can replace a column value with
any of the following items:
- A null value
  The column to which you assign the null value must not be defined as NOT
  NULL.
- An expression
  An expression can be any of the following items:
  - A column
  - A constant
  - A fullselect that returns a scalar or a row
  - A host variable
  - A special register
If you omit the WHERE clause, DB2 updates every row in the table or view with the
values you supply.
If DB2 finds an error while executing your UPDATE statement (for instance, an
update value that is too large for the column), it stops updating and returns error
codes in the SQLCODE and SQLSTATE host variables or related fields in the
SQLCA. No rows in the table change (rows already changed, if any, are restored to
their previous values). If the UPDATE statement is successful, SQLERRD(3) is set
to the number of rows updated.
Examples: The following statement supplies a missing middle initial and changes
the job for employee 000200.
UPDATE YEMP
SET MIDINIT = 'H', JOB = 'FIELDREP'
WHERE EMPNO = '000200';
The following statement gives everyone in department D11 a $400 raise. The
statement can update several rows.
UPDATE YEMP
SET SALARY = SALARY + 400.00
WHERE WORKDEPT = 'D11';
The following statement sets the salary and bonus for employee 000190 to the
average salary and minimum bonus for all employees.
UPDATE YEMP
SET (SALARY, BONUS) =
(SELECT AVG(SALARY), MIN(BONUS)
FROM EMP)
WHERE EMPNO = '000190';
You can use DELETE to remove all rows from a created temporary table or
declared temporary table. However, you can use DELETE with a WHERE clause to
remove only selected rows from a declared temporary table.
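For example, the following statement removes only the rows that satisfy the search
condition. (This is a sketch; it assumes a declared temporary table named
SESSION.TEMPEMP that has a SALARY column.)
DELETE FROM SESSION.TEMPEMP
  WHERE SALARY < 30000.00;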
This DELETE statement deletes each row in the YEMP table that has an employee
number 000060.
DELETE FROM YEMP
WHERE EMPNO = '000060';
When this statement executes, DB2 deletes any row from the YEMP table that
meets the search condition.
If DB2 finds an error while executing your DELETE statement, it stops deleting data
and returns error codes in the SQLCODE and SQLSTATE host variables or related
fields in the SQLCA. The data in the table does not change.
The following statement:
DELETE FROM YDEPT;
deletes every row in the YDEPT table. If the statement executes, the table
continues to exist (that is, you can insert rows into it), but it is empty. All existing
views and authorizations on the table remain intact when you use DELETE. By
comparison, using DROP TABLE drops all views and authorizations, which can
invalidate plans and packages. For information on the DROP statement, see
“Dropping tables: DROP TABLE” on page 23.
DB2 supports these types of joins: inner join, left outer join, right outer join, and full
outer join.
You can specify joins in the FROM clause of a query. Figure 2 shows the ways to
combine tables by using outer join functions.
The result table contains data joined from all of the tables, for rows that satisfy the
search conditions.
The result columns of a join have names if the outermost SELECT list refers to
base columns. But, if you use a function (such as COALESCE or VALUE) to build a
column of the result, then that column does not have a name unless you use the
AS clause in the SELECT list.
To distinguish the different types of joins, the examples in this section use the
following two tables:
The PARTS table The PRODUCTS table
PART PROD# SUPPLIER PROD# PRODUCT PRICE
======= ===== ============ ===== =========== =====
WIRE 10 ACWF 505 SCREWDRIVER 3.70
OIL 160 WESTERN_CHEM 30 RELAY 7.55
MAGNETS 10 BATEMAN 205 SAW 18.90
PLASTIC 30 PLASTIK_CORP 10 GENERATOR 45.75
BLADES 205 ACE_STEEL
In the simplest type of inner join, the join condition is column1=column2. For
example, you can join the PARTS and PRODUCTS tables on the PROD# column to
get a table of parts with their suppliers and the products that use the parts.
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
FROM PARTS, PRODUCTS
WHERE PARTS.PROD# = PRODUCTS.PROD#;
or
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
FROM PARTS INNER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#;
In either case, the result table contains one row for each pair of rows from the two
tables whose PROD# values are equal.
You can specify more complicated join conditions to obtain different sets of results.
For example, to eliminate the suppliers that begin with the letter A from the table of
parts, suppliers, product numbers, and products, write a query like this:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
FROM PARTS INNER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#
WHERE SUPPLIER NOT LIKE 'A%';
The result of the query is all rows that do not have a supplier that begins with A:
PART SUPPLIER PROD# PRODUCT
======= ============ ===== ==========
MAGNETS BATEMAN 10 GENERATOR
PLASTIC PLASTIK_CORP 30 RELAY
Example of joining a table to itself using an inner join: The following example
joins table DSN8710.PROJ to itself and returns the number and name of each
“major” project followed by the number and name of the project that is part of it. In
this example, A indicates the first instance of table DSN8710.PROJ and B indicates
a second instance of this table. The join condition is such that the value in column
PROJNO in table DSN8710.PROJ A must be equal to a value in column MAJPROJ
in table DSN8710.PROJ B:
SELECT A.PROJNO, A.PROJNAME, B.PROJNO, B.PROJNAME
FROM DSN8710.PROJ A, DSN8710.PROJ B
WHERE A.PROJNO = B.MAJPROJ;
In this example, the comma in the FROM clause implicitly specifies an inner join,
and acts the same as if the INNER JOIN keywords had been used. When you use
the comma for an inner join, you must specify the join condition on the WHERE
clause. When you use the INNER JOIN keywords, you must specify the join
condition on the ON clause.
The join condition for a full outer join must be a simple search condition that
compares two columns or cast functions that contain columns.
For example, the following query performs a full outer join of the PARTS and
PRODUCTS tables:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
FROM PARTS FULL OUTER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#;
You probably noticed that the result of the example for “Full outer join” on page 35
is null for SCREWDRIVER, even though the PRODUCTS table contains a product
number for SCREWDRIVER. If you select PRODUCTS.PROD# instead, PROD# is
null for OIL. If you select both PRODUCTS.PROD# and PARTS.PROD#, the result
contains two columns, and both columns contain some null values. You can merge
data from both columns into a single column, eliminating the null values, by using
the COALESCE function. For example:
SELECT PART, SUPPLIER,
   COALESCE(PARTS.PROD#, PRODUCTS.PROD#) AS PRODNUM, PRODUCT
FROM PARTS FULL OUTER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#;
The AS clause (AS PRODNUM) provides a name for the result of the COALESCE
function.
As in an inner join, the join condition can be any simple or compound search
condition that does not contain a subquery reference.
For example, to include rows from the PARTS table that have no matching values in
the PRODUCTS table and include only prices greater than 10.00, execute this
query:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT, PRICE
FROM PARTS LEFT OUTER JOIN PRODUCTS
ON PARTS.PROD#=PRODUCTS.PROD#
AND PRODUCTS.PRICE>10.00;
Because the PARTS table can have nonmatching rows, and the PRICE column is
not in the PARTS table, rows in which PRICE is less than or equal to 10.00 are not
included in the result of the join.
As in an inner join, the join condition can be any simple or compound search
condition that does not contain a subquery reference.
For example, to include rows from the PRODUCTS table that have no matching
values in the PARTS table and include prices greater than 10.00, execute this
query:
SELECT PART, SUPPLIER, PRODUCTS.PROD#, PRODUCT, PRICE
FROM PARTS RIGHT OUTER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#
AND PRODUCTS.PRICE>10.00;
Because the PRODUCTS table can have nonmatching rows, and the PRICE
column is in the PRODUCTS table, rows in which PRICE is less than or equal to
10.00 are also included in the result of the join. The predicate
PRODUCTS.PRICE>10.00 does not eliminate any rows from the result. When
PRODUCTS.PRICE is less than or equal to 10.00, the PARTS columns in the result
table contain null values.
A join operation is part of a FROM clause; therefore, for the purpose of predicting
which rows will be returned from a SELECT statement containing a join operation,
assume that the join operation is performed first.
The result is not the desired one, because DB2 performs the join operation first and
then applies the WHERE clause. The WHERE clause excludes rows where PROD#
has a null value, so the result is the same as if you had specified an inner join.
When you apply the conditions to the tables before the join (for example, in nested
table expressions in the FROM clause), DB2 applies the WHERE clause to each
table separately, so that no rows are eliminated because PROD# is null. DB2 then
performs the full outer join operation, and the desired table is obtained:
PART SUPPLIER PRODNUM PRODUCT
======= ============ ======= ===========
OIL WESTERN_CHEM 160 -----------
BLADES ACE_STEEL 205 SAW
PLASTIC PLASTIK_CORP 30 RELAY
------- ------------ 505 SCREWDRIVER
You can join the results of a user-defined table function with a table, just as you can
join two tables. For example, suppose CVTPRICE is a table function that converts
the prices in the PRODUCTS table to the currency you specify and returns the
PRODUCTS table with the prices in those units. You can obtain a table of parts,
suppliers, and product prices with the prices in your choice of currency by executing
a query like this:
SELECT PART, SUPPLIER, PARTS.PROD#, Z.PRODUCT, Z.PRICE
FROM PARTS, TABLE(CVTPRICE(:CURRENCY)) AS Z
WHERE PARTS.PROD# = Z.PROD#;
The correlated reference D.DEPTNO is valid because the nested table expression
within which it appears is preceded by TABLE and the table specification D appears
to the left of the nested table expression in the FROM clause. If you remove the
keyword TABLE, D.DEPTNO is invalid.
Conceptual overview
Suppose you want a list of the employee numbers, names, and commissions of all
employees working on a particular project, say project number MA2111. The first
part of the SELECT statement is easy to write:
SELECT EMPNO, LASTNAME, COMM
FROM DSN8710.EMP
WHERE EMPNO
...
But you cannot go further, because the DSN8710.EMP table does not include
project number data. You do not know which employees are working on project
MA2111 without issuing another SELECT statement against the
DSN8710.EMPPROJACT table. You can combine both requests into one SQL
statement by using a subquery:
SELECT EMPNO, LASTNAME, COMM
FROM DSN8710.EMP
WHERE EMPNO IN
   (SELECT EMPNO
    FROM DSN8710.EMPPROJACT
    WHERE PROJNO = 'MA2111');
To better understand what results from this SQL statement, imagine that DB2 goes
through the following process:
1. DB2 evaluates the subquery to obtain a list of EMPNO values:
(SELECT EMPNO
FROM DSN8710.EMPPROJACT
WHERE PROJNO = 'MA2111');
000200
000200
000220
2. The interim result table then serves as a list in the search condition of the outer
SELECT. Effectively, DB2 executes this statement:
SELECT EMPNO, LASTNAME, COMM
FROM DSN8710.EMP
WHERE EMPNO IN
   ('000200', '000200', '000220');
This kind of subquery is uncorrelated. In the previous query, for example, the
content of the subquery is the same for every row of the table DSN8710.EMP.
Subqueries that vary in content from row to row or group to group are correlated
subqueries. For information on correlated subqueries, see “Using correlated
subqueries” on page 47. All of the information preceding that section applies to both
correlated and uncorrelated subqueries.
Subqueries can also appear in the predicates of other subqueries. Such subqueries
are nested subqueries at some level of nesting. For example, a subquery within a
subquery within an outer SELECT has a level of nesting of 2. DB2 allows nesting
down to a level of 15, but few queries require a nesting level greater than 1.
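For example, the following query has a nesting level of 2. (This query is an
illustrative sketch against the sample tables; the department name is assumed.)
The innermost subquery finds the number of the named department, the middle
subquery finds the departments that it administers, and the outer SELECT lists the
employees who work in those departments:
SELECT LASTNAME
FROM DSN8710.EMP
WHERE WORKDEPT IN
   (SELECT DEPTNO
    FROM DSN8710.DEPT
    WHERE ADMRDEPT IN
       (SELECT DEPTNO
        FROM DSN8710.DEPT
        WHERE DEPTNAME = 'SPIFFY COMPUTER SERVICE DIV.'));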
The relationship of a subquery to its outer SELECT is the same as the relationship
of a nested subquery to a subquery, and the same rules apply, except where
otherwise noted.
Except for a subquery of a basic predicate, the result table can contain more than
one row.
Basic predicate
You can use a subquery immediately after any of the comparison operators. If you
do, the subquery can return at most one value. DB2 compares that value with the
value to the left of the comparison operator.
For example, the following SQL statement returns the employee numbers, names,
and salaries for employees whose education level is higher than the average
company-wide education level.
SELECT EMPNO, LASTNAME, SALARY
FROM DSN8710.EMP
WHERE EDLEVEL >
(SELECT AVG(EDLEVEL)
FROM DSN8710.EMP);
Use ALL to indicate that the operands on the left side of the comparison must
compare in the same way with all the values the subquery returns. For example,
suppose you use the greater-than comparison operator with ALL:
WHERE column > ALL (subquery)
To satisfy this WHERE clause, the column value must be greater than all the values
that the subquery returns. A subquery that returns an empty result table satisfies the
predicate.
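For example, the following query is a sketch against the sample employee table. It
returns the employees whose salary is greater than that of every employee in
department E21; if department E21 had no employees, the predicate would be
satisfied for every row:
SELECT EMPNO, LASTNAME, SALARY
FROM DSN8710.EMP
WHERE SALARY > ALL
   (SELECT SALARY
    FROM DSN8710.EMP
    WHERE WORKDEPT = 'E21');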
Now suppose that you use the <> operator with ALL in a WHERE clause like this:
WHERE column1, column2, ... columnn <> ALL (subquery)
To satisfy this WHERE clause, each column value must be unequal to all the values
in the corresponding column of the result table that the subquery returns. A
subquery that returns an empty result table satisfies the predicate.
Use ANY or SOME to indicate that the values on the left side of the operator must
compare in the indicated way to at least one of the values that the subquery
returns. For example, suppose you use the greater-than comparison operator with
ANY:
WHERE expression > ANY (subquery)
To satisfy this WHERE clause, the value in the expression must be greater than at
least one of the values (that is, greater than the lowest value) that the subquery
returns. A subquery that returns an empty result table does not satisfy the predicate.
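For example, the following sketch against the sample employee table returns the
employees whose salary is greater than that of at least one employee in
department E21:
SELECT EMPNO, LASTNAME, SALARY
FROM DSN8710.EMP
WHERE SALARY > ANY
   (SELECT SALARY
    FROM DSN8710.EMP
    WHERE WORKDEPT = 'E21');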
Now suppose that you use the = operator with SOME in a WHERE clause like this:
WHERE column1, column2, ... columnn = SOME (subquery)
To satisfy this WHERE clause, each column value must be equal to at least one of
the values in the corresponding column of the result table that the subquery returns.
A subquery that returns an empty result table does not satisfy the predicate.
If a subquery that returns one or more null values gives you unexpected results,
see the description of quantified predicates in Chapter 2 of DB2 SQL Reference.
For example, suppose that you execute this query:
SELECT EMPNO, LASTNAME
FROM DSN8710.EMP
WHERE EXISTS
   (SELECT *
    FROM DSN8710.PROJ
    WHERE PRSTDATE > '1986-01-01');
In the example, the search condition is true if any project represented in the
DSN8710.PROJ table has an estimated start date that is later than 1 January
1986. This example does not show the full power of EXISTS, because the result is
always the same for every row examined for the outer SELECT. As a consequence,
either every row appears in the results, or none appear. A correlated subquery is
more powerful, because the subquery would change from row to row.
As shown in the example, you do not need to specify column names in the
subquery of an EXISTS clause. Instead, you can code SELECT *. You can also use
the EXISTS keyword with the NOT keyword in order to select rows when the data
or condition you specify does not exist; that is, you can code
WHERE NOT EXISTS (SELECT ...);
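For example, the following sketch against the sample tables uses NOT EXISTS to
list the departments that have no employees:
SELECT DEPTNO, DEPTNAME
FROM DSN8710.DEPT D
WHERE NOT EXISTS
   (SELECT *
    FROM DSN8710.EMP
    WHERE WORKDEPT = D.DEPTNO);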
In the subquery, you tell DB2 to compute the average education level for the
department number in the current row. A query that does this follows:
SELECT EMPNO, LASTNAME, WORKDEPT, EDLEVEL
FROM DSN8710.EMP X
WHERE EDLEVEL >
(SELECT AVG(EDLEVEL)
FROM DSN8710.EMP
WHERE WORKDEPT = X.WORKDEPT);
Consider what happens when the subquery executes for a given row of
DSN8710.EMP. Before it executes, X.WORKDEPT receives the value of the
WORKDEPT column for that row. Suppose, for example, that the row is for
CHRISTINE HAAS. Her work department is A00, which is the value of WORKDEPT
for that row. The subquery executed for that row is therefore:
(SELECT AVG(EDLEVEL)
FROM DSN8710.EMP
WHERE WORKDEPT = 'A00');
The subquery produces the average education level of Christine’s department. The
outer subselect then compares this to Christine’s own education level. For some
other row for which WORKDEPT has a different value, that value appears in the
subquery in place of A00. For example, in the row for MICHAEL L THOMPSON, this
value is B01, and the subquery for his row delivers the average education level for
department B01.
The result table produced by the query contains the employee number, last name,
work department, and education level of each employee whose education level is
higher than the average for that employee’s department.
When you use a correlation name in a subquery, the correlation name can be
defined in the FROM clause of the outer-level SELECT, or in any of the subqueries
that contain the reference. Suppose, for example, that a query contains subqueries
A, B, and C, and that A contains B and B contains C. Then C could use a
correlation name defined in B, A, or the outer SELECT.
You can define a correlation name for each table name appearing in a FROM
clause. Append the correlation name after its table name. Leave one or more
blanks between a table name and its correlation name. You can include the word
AS between the table name and the correlation name to increase the readability of
the statement.
The following example demonstrates the use of a correlation name in the select list
of a subquery:
UPDATE BP1TBL T1
SET (KEY1, CHAR1, VCHAR1) =
(SELECT VALUE(T2.KEY1,T1.KEY1), VALUE(T2.CHAR1,T1.CHAR1), VALUE(T2.VCHAR1,T1.VCHAR1)
FROM BP2TBL T2
WHERE (T2.KEY1 = T1.KEY1))
WHERE KEY1 IN
(SELECT KEY1
FROM BP2TBL T3
WHERE KEY2 > 0);
Suppose that you want to delete each project from the DSN8710.PROJ table for
which the combined amount of time spent on its activities is less than half a
person:
DELETE FROM DSN8710.PROJ X
WHERE .5 >
   (SELECT SUM(EMPTIME)
    FROM DSN8710.EMPPROJACT
    WHERE PROJNO = X.PROJNO);
To process this statement, DB2 determines for each project (represented by a row
in the DSN8710.PROJ table) whether or not the combined staffing for that project is
less than 0.5. If it is, DB2 deletes that row from the DSN8710.PROJ table.
To continue this example, suppose DB2 deletes a row in the DSN8710.PROJ table.
You must also delete rows related to the deleted project in the DSN8710.PROJACT
table. To do this, use:
DELETE FROM DSN8710.PROJACT X
WHERE NOT EXISTS
(SELECT *
FROM DSN8710.PROJ
WHERE PROJNO = X.PROJNO);
DB2 determines, for each row in the DSN8710.PROJACT table, whether a row with
the same project number exists in the DSN8710.PROJ table. If not, DB2 deletes the
row in DSN8710.PROJACT.
This example uses copies of the employee and department tables that do not have
referential constraints.
DB2 restricts delete operations for dependent tables that are involved in referential
constraints. If a DELETE statement has a subquery that references a table involved
in the deletion, the last delete rule in the path to that table must be RESTRICT or
NO ACTION. For example, without referential constraints, the following statement
deletes departments from the department table whose managers are not listed
correctly in the employee table:
DELETE FROM DSN8710.DEPT THIS
WHERE NOT DEPTNO =
(SELECT WORKDEPT
FROM DSN8710.EMP
WHERE EMPNO = THIS.MGRNO);
With the referential constraints defined for the sample tables, the statement causes
an error. The deletion involves the table referred to in the subquery (DSN8710.EMP
is a dependent table of DSN8710.DEPT) and the last delete rule in the path to EMP
is SET NULL, not RESTRICT or NO ACTION. If the statement could execute, its
results would again depend on the order in which DB2 accesses the rows.
To use SPUFI, select SPUFI from the DB2I Primary Option Menu as shown in
Figure 3.
From then on, when the SPUFI panel displays, the data entry fields on the panel
contain the values that you previously entered. You can specify data set names and
processing options each time the SPUFI panel displays, as needed. Values you do
not change remain in effect.
Enter the output data set name: (Must be a sequential data set)
4 DATA SET NAME..... ===> RESULT
If you use this panel a second time, the name of the data set you
previously used displays in the field DATA SET NAME. To create a new
member of an existing partitioned data set, change only the member name.
4 OUTPUT DATA SET NAME
Enter the name of a data set to receive the output of the SQL statement.
You do not need to allocate the data set before you do this.
If the data set exists, the new output replaces its content. If the data set
does not exist, DB2 allocates a data set on the device type specified on the
CURRENT SPUFI DEFAULTS panel and then catalogs the new data set.
The device must be a direct-access storage device, and you must be
authorized to allocate space on that device.
Attributes required for the output data set are:
Specify values for the following options on the CURRENT SPUFI DEFAULTS panel.
All fields must contain a value.
1 SQL TERMINATOR
Allows you to specify the character that you use to end each SQL
statement. You can specify any character except one of those listed in
Table 3. A semicolon is the default.
Table 3. Invalid special characters for the SQL terminator
Name               Character   Hexadecimal Representation
blank                          X'40'
comma              ,           X'6B'
double quote       "           X'7F'
left parenthesis   (           X'4D'
right parenthesis  )           X'5D'
single quote       '           X'7D'
underscore         _           X'6D'
Be careful to choose a character for the SQL terminator that is not used
within the statement.
You can also set or change the SQL terminator within a SPUFI input data
set using the --#SET TERMINATOR statement. See “Entering SQL statements”
on page 56 for details.
2 ISOLATION LEVEL
Allows you to specify the isolation level for your SQL statements. See “The
ISOLATION option” on page 343 for more information.
3 MAX SELECT LINES
The maximum number of output lines that a SELECT statement can return.
To limit the number of rows retrieved, enter another maximum number
greater than 1.
4 RECORD LENGTH
The record length must be at least 80 bytes. The maximum record length
depends on the device type you use. The default value allows a 4092-byte
record.
Each record can hold a single line of output. If a line is longer than a record, the
line is truncated; SPUFI discards the fields that extend beyond the record length.
5 BLOCKSIZE
Follow the normal rules for selecting the block size. For record format F, the
block size is equal to record length. For FB and FBA, choose a block size
that is an even multiple of LRECL. For VB and VBA only, the block size
must be 4 bytes larger than the block size for FB or FBA.
6 RECORD FORMAT
Specify F, FB, FBA, V, VB, or VBA. FBA and VBA formats insert a printer
control character after the number of lines specified in the LINES/PAGE OF
LISTING field on the DB2I Defaults panel. The record format default is VB
(variable-length blocked).
7 DEVICE TYPE
Allows you to specify a standard MVS name for direct-access storage
device types. The default is SYSDA. SYSDA specifies that MVS is to select
an appropriate direct access storage device.
8 MAX NUMERIC FIELD
The maximum width of a numeric value column in your output. Choose a
value greater than 0. The IBM-supplied default is 20. For more information,
see “Format of SELECT statement results” on page 58.
9 MAX CHAR FIELD
The maximum width of a character value column in your output. DATETIME
and GRAPHIC data strings are externally represented as characters, and
this maximum width applies to them as well.
Column names are the column identifiers that you can use in SQL
statements. If an SQL statement has an AS clause for a column, SPUFI
displays the contents of the AS clause in the heading, rather than the
column name. You define column labels with LABEL ON statements.
When you have entered your SPUFI options, press the ENTER key to continue.
SPUFI then processes the next processing option for which you specified YES. If all
other processing options are NO, SPUFI displays the SPUFI panel.
If you press the END key, you return to the SPUFI panel, but you lose all the
changes you made on the SPUFI Defaults panel. If you press ENTER, SPUFI
saves your changes.
On the panel, use the ISPF EDIT program to enter SQL statements that you want
to execute, as shown in Figure 6 on page 57.
Move the cursor to the first input line and enter the first part of an SQL statement.
You can enter the rest of the SQL statement on subsequent lines, as shown in
Figure 6 on page 57. Indenting your lines and entering your statements on several
lines make your statements easier to read, without changing how DB2 processes
them.
You can put more than one SQL statement in the input data set. You can put an
SQL statement on one line of the input data set or on more than one line. DB2
executes the statements in the order you placed them in the data set. Do not put
more than one SQL statement on a single line. The first one executes, but DB2
ignores the other SQL statements on the same line.
In your SPUFI input data set, end each SQL statement with the statement
terminator that you specified in the CURRENT SPUFI DEFAULTS panel.
When you have entered your SQL statements, press the END PF key to save the
file and to execute the SQL statements.
Pressing the END PF key saves the data set. You can save the data set and
continue editing it by entering the SAVE command. In fact, it is a good practice to
save the data set after every 10 minutes or so of editing.
Figure 6 shows what the panel looks like if you enter the sample SQL statement,
followed by a SAVE command.
You can bypass the editing step by resetting the EDIT INPUT processing option:
EDIT INPUT ... ===> NO
You can put comments about SQL statements either on separate lines or on the
same line as a statement. In either case, use two hyphens (--) to begin a comment,
and specify any text other than #SET TERMINATOR after the hyphens. DB2
ignores everything to the right of the two hyphens.
Use the text --#SET TERMINATOR character in a SPUFI input data set as an
instruction to SPUFI to interpret character as a statement terminator. You can
specify any single-byte character except one of the characters that are listed in
Table 3 on page 54. The terminator that you specify overrides a terminator that you
specified in option 1 of the CURRENT SPUFI DEFAULTS panel or in a previous
--#SET TERMINATOR statement.
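For example, the following input (a sketch) changes the terminator to the pound
sign, executes a statement, and then restores the semicolon:
--#SET TERMINATOR #
SELECT * FROM DSN8710.DEPT#
--#SET TERMINATOR ;
SELECT * FROM DSN8710.EMP;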
You can bypass the DB2 processing step by resetting the EXECUTE processing
option:
EXECUTE ..... ===> NO
Your SQL statement might take a long time to execute, depending on how large a
table DB2 has to search, or on how many rows DB2 has to process. To interrupt
DB2’s processing, press the PA1 key and respond to the prompting message that
asks you if you really want to stop processing. This cancels the executing SQL
statement and returns you to the ISPF-PDF menu.
What happens to the output data set? This depends on how much of the input data
set DB2 was able to process before you interrupted its processing. DB2 might not
have opened the output data set yet, or the output data set might contain all or part
of the results data produced so far.
At the end of the data set are summary statistics that describe the processing of the
input data set as a whole.
For all other types of SQL statements executed with SPUFI, the message
“SQLCODE IS 0” indicates an error-free result.
You can change the amount of data displayed for numeric and character columns
by changing values on the CURRENT SPUFI DEFAULTS panel, as described in
“Changing SPUFI defaults (optional)” on page 54.
v A null value displays as a series of hyphens (-).
v A ROWID or BLOB column value displays in hexadecimal.
v A CLOB column value displays in the same way as a VARCHAR column value.
v A DBCLOB column value displays in the same way as a VARGRAPHIC column
value.
v A heading identifies each selected column, and repeats at the top of each output
page. The contents of the heading depend on the value you specified in field
COLUMN HEADING of the CURRENT SPUFI DEFAULTS panel.
Other messages that you could receive from the processing of SQL statements
include:
v The number of rows that DB2 processed, that either:
– Your SELECT statement retrieved
– Your UPDATE statement modified
– Your INSERT statement added to a table
– Your DELETE statement deleted from a table
v Which columns display truncated data because the data was too wide
ODBC lets you access data through ODBC function calls in your application. You
execute SQL statements by passing them to DB2 through an ODBC function call.
ODBC eliminates the need for precompiling and binding your application and
increases the portability of your application by using the ODBC interface.
If you are writing your applications in Java, you can use JDBC application
support to access DB2. JDBC is similar to ODBC but is designed specifically for
use with Java and is therefore a better choice than ODBC for making DB2 calls
from Java applications.
For more information on using JDBC, see DB2 ODBC Guide and Reference.
v Delimit SQL statements, as described in “Delimiting an SQL statement” on
page 66.
v Declare the tables you use, as described in “Declaring table and view definitions”
on page 67. (This is optional.)
v Declare the data items used to pass data between DB2 and a host language, as
described in “Accessing data using host variables and host structures” on
page 67.
v Code SQL statements to access DB2 data. See “Accessing data using host
variables and host structures” on page 67.
For information about using the SQL language, see “Part 1. Using SQL queries”
on page 1 and in DB2 SQL Reference. Details about how to use SQL
statements within an application program are described in “Chapter 9.
Embedding SQL statements in host languages” on page 107.
v Declare a communications area (SQLCA), or handle exceptional conditions that
DB2 indicates with return codes, in the SQLCA. See “Checking the execution of
SQL statements” on page 74 for more information.
In addition to these basic requirements, you should also consider several special
topics:
v “Chapter 7. Using a cursor to retrieve a set of rows” on page 81 discusses how to
use a cursor in your application program to select a set of rows and then process
the set one row at a time.
This section includes information about using SQL in application programs written in
assembler, C, COBOL, FORTRAN, PL/I, and REXX. You can also use SQL in
application programs written in Ada, APL2®, BASIC, and Prolog. See the following
publications for more information about these languages:
Ada IBM Ada/370 SQL Module Processor for DB2 Database Manager
User's Guide
APL2 APL2 Programming: Using Structured Query Language (SQL)
BASIC IBM BASIC/MVS Language Reference
Prolog/MVS & VM
IBM SAA AD/Cycle® Prolog/MVS & VM Programmer's Guide
Some of the examples vary from these conventions. Exceptions are noted where
they occur.
For example, use EXEC SQL and END-EXEC to delimit an SQL statement in a
COBOL program:
EXEC SQL
an SQL statement
END-EXEC.
You do not have to declare tables or views, but there are advantages if you do. One
advantage is documentation. For example, the DECLARE statement specifies the
structure of the table or view you are working with, and the data type of each
column. You can refer to the DECLARE statement for the column names and data
types in the table or view. Another advantage is that the DB2 precompiler uses your
declarations to make sure you have used correct column names and data types in
your SQL statements. The DB2 precompiler issues a warning message when the
column names and data types do not correspond to the SQL DECLARE statements
in your program.
For example, the DECLARE TABLE statement for the DSN8710.DEPT table looks
like this:
EXEC SQL
DECLARE DSN8710.DEPT TABLE
(DEPTNO CHAR(3) NOT NULL,
DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6) ,
ADMRDEPT CHAR(3) NOT NULL,
LOCATION CHAR(16) )
END-EXEC.
When you declare a table or view that contains a column with a distinct type, it is
best to declare that column with the source type of the distinct type, rather than the
distinct type itself. When you declare the column with the source type, DB2 can
check embedded SQL statements that reference that column at precompile time.
A host structure is a group of host variables that an SQL statement can refer to
using a single name. You can use host structures in all languages except REXX.
Use host language statements to define the host structures.
To optimize performance, make sure the host language declaration maps as closely
as possible to the data type of the associated data in the database; see “Chapter 9.
Embedding SQL statements in host languages” on page 107. For more performance
suggestions, see “Part 6. Additional programming techniques” on page 489.
You can use a host variable to represent a data value, but you cannot use it to
represent a table, view, or column name. (You can specify table, view, or column
names at run time using dynamic SQL. See “Chapter 23. Coding dynamic SQL in
application programs” on page 497 for more information.)
Host variables follow the naming conventions of the host language. A colon (:) must
precede host variables used in SQL to tell DB2 that the variable is not a column
name. A colon must not precede host variables outside of SQL statements.
For more information about declaring host variables, see the appropriate language
section:
v Assembler: “Using host variables” on page 111
v C: “Using host variables” on page 124
v COBOL: “Using host variables” on page 146
v FORTRAN: “Using host variables” on page 167
v PL/I: “Using host variables” on page 177
v REXX: “Using REXX host variables and data types” on page 194
Retrieving a single row of data: The INTO clause of the SELECT statement
names one or more host variables to contain the column values returned. The
named variables correspond one-to-one with the list of column names in the
SELECT list.
In the DATA DIVISION of the program, you must declare the host variables
CBLEMPNO, CBLNAME, and CBLDEPT to be compatible with the data types in the
columns EMPNO, LASTNAME, and WORKDEPT of the DSN8710.EMP table.
If the SELECT statement returns more than one row, this is an error, and any data
returned is undefined and unpredictable.
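For example, a single-row retrieval into the host variables named above might be coded like this in COBOL (the employee number '000220' is illustrative):
EXEC SQL
  SELECT EMPNO, LASTNAME, WORKDEPT
    INTO :CBLEMPNO, :CBLNAME, :CBLDEPT
    FROM DSN8710.EMP
    WHERE EMPNO = '000220'
END-EXEC.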
Retrieving multiple rows of data: If you do not know how many rows DB2 will
return, or if you expect more than one row to be returned, you must use an
alternative to the SELECT ... INTO statement.
The DB2 cursor enables an application to process a set of rows and retrieve one
row at a time from the result table. For information on using cursors, see
“Chapter 7. Using a cursor to retrieve a set of rows” on page 81.
Specifying a list of items in a select clause: When you specify a list of items in
the SELECT clause, you can use more than the column names of tables and views.
You can request a set of column values mixed with host variable values and
constants. For example:
MOVE 4476 TO RAISE.
MOVE '000220' TO PERSON.
EXEC SQL
SELECT EMPNO, LASTNAME, SALARY, :RAISE, SALARY + :RAISE
INTO :EMP-NUM, :PERSON-NAME, :EMP-SAL, :EMP-RAISE, :EMP-TTL
FROM DSN8710.EMP
WHERE EMPNO = :PERSON
END-EXEC.
The results shown below have column headings that represent the names of the
host variables:
EMP-NUM PERSON-NAME EMP-SAL EMP-RAISE EMP-TTL
======= =========== ======= ========= =======
000220 LUTZ 29840 4476 34316
Retrieving data into host variables: If the value for the column you retrieve is
null, DB2 puts a negative value in the indicator variable. If it is null because of a
numeric or character conversion error, or an arithmetic expression error, DB2 sets
the indicator variable to -2. See “Handling arithmetic or conversion errors” on
page 75 for more information.
If you do not use an indicator variable and DB2 retrieves a null value, an error
results.
When DB2 retrieves the value of a column, you can test the indicator variable. If the
indicator variable’s value is less than zero, the column value is null. When the
column value is null, the value of the host variable does not change from its
previous value.
You can also use an indicator variable to verify that a retrieved character string
value is not truncated. If the indicator variable contains a positive integer, the
integer is the original length of the string.
You can specify an indicator variable, preceded by a colon, immediately after the
host variable. Optionally, you can use the word INDICATOR between the host
variable and its indicator variable. Thus, the following two examples are equivalent:
EXEC SQL
  SELECT PHONENO
    INTO :CBLPHONE:INDNULL
    FROM DSN8710.EMP
    WHERE EMPNO = :EMPID
END-EXEC.
EXEC SQL
  SELECT PHONENO
    INTO :CBLPHONE INDICATOR :INDNULL
    FROM DSN8710.EMP
    WHERE EMPNO = :EMPID
END-EXEC.
You can then test INDNULL for a negative value. If it is negative, the corresponding
value of PHONENO is null, and you can disregard the contents of CBLPHONE.
Inserting null values into columns using host variables: You can use an
indicator variable to insert a null value from a host variable into a column. When
DB2 processes INSERT and UPDATE statements, it checks the indicator variable (if
it exists). If the indicator variable is negative, the column value is null. If the
indicator variable is greater than -1, the associated host variable contains a value
for the column.
For example, suppose your program reads an employee ID and a new phone
number, and must update the employee table with the new number. The new
number could be missing if the old number is incorrect, but a new number is not yet
available. If it is possible that the new value for column PHONENO might be null,
you can code:
EXEC SQL
UPDATE DSN8710.EMP
SET PHONENO = :NEWPHONE:PHONEIND
WHERE EMPNO = :EMPID
END-EXEC.
When NEWPHONE contains other than a null value, set PHONEIND to zero by
preceding the statement with:
MOVE 0 TO PHONEIND.
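Conversely, when no new phone number is available, set the indicator variable to a negative value before the statement; DB2 then assigns a null value to PHONENO and ignores the contents of NEWPHONE:
MOVE -1 TO PHONEIND.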
Use IS NULL to test for a null column value: You cannot determine whether a
column value is null by comparing the column to a host variable that has an
indicator variable set to -1. Two DB2 null values are not equal to each other. To test
whether a column has a null value, use the IS NULL comparison operator. For
example, the following code does not select the employees who have no
phone number:
MOVE -1 TO PHONE-IND.
EXEC SQL
SELECT LASTNAME
INTO :PGM-LASTNAME
FROM DSN8710.EMP
WHERE PHONENO = :PHONE-HV:PHONE-IND
END-EXEC.
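By contrast, this sketch uses the IS NULL comparison operator to find an employee who has no phone number (as with the previous example, use a cursor if more than one row can qualify):
EXEC SQL
  SELECT LASTNAME
    INTO :PGM-LASTNAME
    FROM DSN8710.EMP
    WHERE PHONENO IS NULL
END-EXEC.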
| When you use a DECLARE VARIABLE statement in a program, put the DECLARE
| VARIABLE statement after the corresponding host variable declaration and before
| you refer to that host variable.
| Because the application encoding scheme for the subsystem is EBCDIC, the
| retrieved data is EBCDIC. To make the retrieved data Unicode, use DECLARE
| VARIABLE statements to specify that the data that is retrieved from these columns
| is encoded in the default Unicode CCSIDs for the subsystem. Suppose that you
| want to retrieve the character data in Unicode CCSID 1208 and the graphic data in
| Unicode CCSID 1200. Use DECLARE VARIABLE statements like these:
| EXEC SQL BEGIN DECLARE SECTION;
| char hvpartnum[11];
| EXEC SQL DECLARE :hvpartnum VARIABLE CCSID 1208;
| wchar_t hvjpnname[11];
| EXEC SQL DECLARE :hvjpnname VARIABLE CCSID 1200;
| struct {
| short len;
| char d[30];
| } hvengname;
| EXEC SQL DECLARE :hvengname VARIABLE CCSID 1208;
| EXEC SQL END DECLARE SECTION;
If you want to avoid listing host variables, you can substitute the name of a
structure, say :PEMP, that contains :EMPNO, :FIRSTNME, :MIDINIT, :LASTNAME,
and :WORKDEPT. The example then reads:
EXEC SQL
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT
INTO :PEMP
FROM DSN8710.VEMP
WHERE EMPNO = :EMPID
END-EXEC.
You can declare a host structure yourself, or you can use DCLGEN to generate a
COBOL record description, PL/I structure declaration, or C structure declaration that
corresponds to the columns of a table. For more details about coding a host
structure in your program, see “Chapter 9. Embedding SQL statements in host
languages” on page 107. For more information on using DCLGEN and the
restrictions that apply to the C language, see “Chapter 8. Generating declarations
for your tables using DCLGEN” on page 95.
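For example, a COBOL declaration of the PEMP host structure used above might look like this. The lengths follow the sample table definitions; this is a sketch rather than actual DCLGEN output:
01  PEMP.
    02 EMPNO    PIC X(6).
    02 FIRSTNME.
       49 FIRSTNME-LEN  PIC S9(4) USAGE COMP.
       49 FIRSTNME-TEXT PIC X(12).
    02 MIDINIT  PIC X(1).
    02 LASTNAME.
       49 LASTNAME-LEN  PIC S9(4) USAGE COMP.
       49 LASTNAME-TEXT PIC X(15).
    02 WORKDEPT PIC X(3).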
MOVE '000230' TO EMPNO.
.
.
.
EXEC SQL
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, BIRTHDATE
INTO :PEMP-ROW:EMP-IND
FROM DSN8710.EMP
WHERE EMPNO = :EMPNO
END-EXEC.
In this example, EMP-IND is an array containing six values, which you can test for
negative values. If, for example, EMP-IND(6) contains a negative value, the
corresponding host variable in the host structure (EMP-BIRTHDATE) contains a null
value.
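For example, EMP-IND might be declared as an array of six halfword indicator variables, like this:
01  EMP-IND-ARRAY.
    02 EMP-IND PIC S9(4) USAGE COMP OCCURS 6 TIMES.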
The meaning of SQLCODEs other than 0 and 100 varies with the particular product
implementing SQL.
An advantage to using the SQLCODE field is that it can provide more specific
information than the SQLSTATE. Many of the SQLCODEs have associated tokens
in the SQLCA that indicate, for example, which object incurred an SQL error.
To conform to the SQL standard, you can declare SQLCODE and SQLSTATE
(SQLCOD and SQLSTA in FORTRAN) as stand-alone host variables. If you specify
the STDSQL(YES) precompiler option, these host variables receive the return
codes, and you should not include an SQLCA in your program.
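For example, in a COBOL program that you precompile with STDSQL(YES), you might declare the stand-alone host variables like this (a sketch):
EXEC SQL BEGIN DECLARE SECTION END-EXEC.
01  SQLCODE  PIC S9(9) USAGE COMP.
01  SQLSTATE PIC X(5).
EXEC SQL END DECLARE SECTION END-EXEC.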
The WHENEVER statement is not supported for REXX. For information on REXX
error handling, see “Embedding SQL statements in a REXX procedure” on
page 192.
The WHENEVER statement must precede the first SQL statement it is to affect.
However, if your program checks SQLCODE directly, it must check SQLCODE after
the SQL statement executes.
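For example, to branch to an error-handling paragraph whenever an SQL statement returns a negative SQLCODE, you might code the following; the paragraph name ERROR-ROUTINE is hypothetical:
EXEC SQL
  WHENEVER SQLERROR GO TO ERROR-ROUTINE
END-EXEC.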
You can find the programming language specific syntax and details for calling
DSNTIAR on the following pages:
For assembler programs, see page 119
For C programs, see page 139
For COBOL programs, see page 161
For FORTRAN programs, see page 173
For PL/I programs, see page 188
DSNTIAR takes data from the SQLCA, formats it into a message, and places the
result in a message output area that you provide in your application program. Each
time you use DSNTIAR, it overwrites any previous messages in the message output
area. You should move or print the messages before using DSNTIAR again, and
before the contents of the SQLCA change, to get an accurate view of the SQLCA.
You must define the message output area in VARCHAR format. In this varying
character format, a two-byte length field precedes the data. The length field tells
DSNTIAR how many total bytes are in the output message area; its minimum value
is 240.
Figure 8 on page 77 shows the format of the message output area, where length is
the two-byte total length field, and the length of each line matches the logical record
length (lrecl) you specify to DSNTIAR.
When you call DSNTIAR, you must name an SQLCA and an output message area
in its parameters. You must also provide the logical record length (lrecl) as a value
between 72 and 240 bytes. DSNTIAR assumes the message area contains
fixed-length records of length lrecl.
When loading DSNTIAR from another program, be careful how you branch to
DSNTIAR. For example, if the calling program is in 24-bit addressing mode and
DSNTIAR is loaded above the 16-megabyte line, you cannot use the assembler
BALR instruction or CALL macro to call DSNTIAR, because those instructions
assume that the called program runs in 24-bit addressing mode.
You can dynamically link (load) and call DSNTIAR directly from a language that
does not handle 31-bit addressing (OS/VS COBOL, for example). To do this, link a
second version of DSNTIAR with the attributes AMODE(24) and RMODE(24) into
another load module library. Or, you can write an intermediate assembler language
program that calls DSNTIAR in 31-bit mode; then call that intermediate program in
24-bit mode from your application.
For more information on the allowed and default AMODE and RMODE settings for a
particular language, see the application programming guide for that language. For
details on how the attributes AMODE and RMODE of an application are determined,
see the linkage editor and loader user’s guide for the language in which you have
written the application.
In your error routine, you write a section that checks for SQLCODE -911 or -913.
You can receive either of these SQLCODEs when there is a deadlock or timeout.
When one of these errors occurs, the error routine closes your cursors by issuing
the statement:
EXEC SQL CLOSE cursor-name
An SQLCODE of 0 or -501 from that statement indicates that the close was
successful.
You can use DSNTIAR in the error routine to generate the complete message text
associated with the negative SQLCODEs.
1. Choose a logical record length (lrecl) of the output lines. For this example,
assume lrecl is 72, to fit on a terminal screen, and is stored in the variable
named ERROR-TEXT-LEN.
2. Define a message area in your COBOL application. Assuming you want an area
for up to 10 lines of length 72, you should define an area of 720 bytes, plus a
2-byte area that specifies the length of the message output area.
01 ERROR-MESSAGE.
02 ERROR-LEN PIC S9(4) COMP VALUE +720.
02 ERROR-TEXT PIC X(72) OCCURS 10 TIMES
INDEXED BY ERROR-INDEX.
77 ERROR-TEXT-LEN PIC S9(9) COMP VALUE +72.
To display the contents of the SQLCA when SQLCODE is other than 0 or -501, you
should first format the message by calling DSNTIAR after the SQL statement that
produces such an SQLCODE:
CALL 'DSNTIAR' USING SQLCA ERROR-MESSAGE ERROR-TEXT-LEN.
You can then print the message output area just as you would any other variable.
Your message might look like the following:
Your program can have several cursors, each of which performs the previous steps.
The following example shows a simple form of the DECLARE CURSOR statement:
EXEC SQL
DECLARE C1 CURSOR FOR
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY
FROM DSN8710.EMP
END-EXEC.
More complicated cursors might include WHERE clauses or joins of several tables.
For example, suppose that you want to use a cursor to list employees who work on
a certain project. Declare a cursor like this to identify those employees:
EXEC SQL
DECLARE C2 CURSOR FOR
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY
FROM DSN8710.EMP X
WHERE EXISTS
| Updating a column: You can update columns in the rows that you retrieve.
| Updating a row after you use a cursor to retrieve it is called a positioned update. If
| you intend to perform any positioned updates on the identified table, include the
| FOR UPDATE clause. The FOR UPDATE clause has two forms. The first form is
| FOR UPDATE OF column-list. Use this form when you know in advance which
| columns you need to update. The second form of the FOR UPDATE clause is FOR
| UPDATE, with no column list. Use this form when you might use the cursor to
| update any of the columns of the table.
For example, you can use this cursor to update only the SALARY column of the
employee table:
EXEC SQL
DECLARE C1 CURSOR FOR
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY
FROM DSN8710.EMP X
WHERE EXISTS
(SELECT *
FROM DSN8710.PROJ Y
WHERE X.EMPNO=Y.RESPEMP
AND Y.PROJNO=:GOODPROJ)
FOR UPDATE OF SALARY;
If you might use the cursor to update any column of the employee table, define the
cursor like this:
EXEC SQL
DECLARE C1 CURSOR FOR
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY
FROM DSN8710.EMP X
WHERE EXISTS
(SELECT *
FROM DSN8710.PROJ Y
WHERE X.EMPNO=Y.RESPEMP
AND Y.PROJNO=:GOODPROJ)
FOR UPDATE;
| DB2 must do more processing when you use the FOR UPDATE clause without a
| column list than when you use the FOR UPDATE OF clause with a column list.
| Therefore, if you intend to update only a few columns of a table, your program can
| run more efficiently if you include a column list.
The precompiler options NOFOR and STDSQL affect the use of the FOR UPDATE
clause in static SQL statements. For information on these options, see Table 48 on
| page 403. If you do not specify the FOR UPDATE clause in a DECLARE CURSOR
| statement, and you do not specify the STDSQL(YES) or NOFOR precompiler
| options, you receive an error if you execute a positioned UPDATE statement.
You can update a column of the identified table even though it is not part of the
result table. In this case, you do not need to name the column in the SELECT
statement. When the cursor retrieves a row (using FETCH) that contains a column
value you want to update, you can use UPDATE ... WHERE CURRENT OF to
identify the row that is to be updated.
Two factors that influence the amount of time that DB2 requires to process the
OPEN statement are:
v Whether DB2 must perform any sorts before it can retrieve rows from the result
table
v Whether DB2 uses parallelism to process the SELECT statement associated with
the cursor
For more information, see “The effect of sorts on OPEN CURSOR” on page 710.
Your program must anticipate and handle an end-of-data whenever you use a
cursor to fetch a row. For further information about the WHENEVER NOT FOUND
statement, see “Checking the execution of SQL statements” on page 74.
The SELECT statement within the DECLARE CURSOR statement identifies the result
table from which you fetch rows, but DB2 does not retrieve any data until your
application program executes a FETCH statement.
When your program executes the FETCH statement, DB2 uses the cursor to point
to a row in the result table. That row is called the current row. DB2 then copies the
current row contents into the program host variables that you specified on the INTO
clause of FETCH. This sequence repeats each time you issue FETCH, until you
have processed all rows in the result table.
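For example, a FETCH statement for the cursor C1 declared earlier might look like this; the host variable names are illustrative and must be declared with compatible data types:
EXEC SQL
  FETCH C1
    INTO :EMP-NUM, :FIRST-NAME, :MID-INIT, :LAST-NAME, :EMP-SAL
END-EXEC.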
The row that DB2 points to when you execute a FETCH statement depends on
whether the cursor is declared as scrollable or non-scrollable. See “Scrollable and
non-scrollable cursors” on page 85 for more information.
When you query a remote subsystem with FETCH, consider using block fetch for
better performance. For more information see “Use block fetch” on page 385. Block
fetch processes rows ahead of the current row. You cannot use a block fetch when
you perform a positioned update or delete operation.
A positioned UPDATE statement updates the row that the cursor points to.
A positioned DELETE statement deletes the row that cursor-name points to.
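For example, positioned UPDATE and DELETE statements that use cursor C1, declared earlier with FOR UPDATE OF SALARY, might look like this (:RAISE is an assumed host variable):
EXEC SQL
  UPDATE DSN8710.EMP
    SET SALARY = SALARY + :RAISE
    WHERE CURRENT OF C1
END-EXEC.
EXEC SQL
  DELETE FROM DSN8710.EMP
    WHERE CURRENT OF C1
END-EXEC.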
If you finish processing the rows of the result table, and you do not want to use the
cursor, you can let DB2 automatically close the cursor when your program
terminates.
Types of cursors
Cursors can be scrollable or not scrollable. They can also be held or not held. The
following sections discuss these characteristics in more detail.
| If you want to order the rows of the cursor's result set, and you also want the cursor
| to be updatable, you need to declare the cursor as scrollable, even if you use it
| only to retrieve rows sequentially. You can use the ORDER BY clause in the
| declaration of an updatable cursor only if you declare the cursor as scrollable.
|
| EXEC SQL DECLARE C1 INSENSITIVE SCROLL CURSOR FOR
| SELECT DEPTNO, DEPTNAME, MGRNO
| FROM DSN8710.DEPT
| ORDER BY DEPTNO
| END-EXEC.
|
| Figure 9. Declaration for an insensitive scrollable cursor
|
| Declaring a scrollable cursor with the INSENSITIVE keyword has the following
| effects:
| v The size, the order of the rows, and the values for each row of the result table do
| not change after you open the cursor.
| v The result table is read-only. Therefore, you cannot declare the cursor with the
| FOR UPDATE clause, and you cannot use the cursor for positioned update or
| delete operations.
|
| EXEC SQL DECLARE C2 SENSITIVE STATIC SCROLL CURSOR FOR
| SELECT DEPTNO, DEPTNAME, MGRNO
| FROM DSN8710.DEPT
| ORDER BY DEPTNO
| END-EXEC.
|
| Figure 10. Declaration for a sensitive scrollable cursor
|
| Declaring a cursor as SENSITIVE has the following effects:
| v When you execute positioned UPDATE and DELETE statements with the cursor,
| those updates are visible in the result table.
| v When the current value of a row no longer satisfies the SELECT statement for
| the cursor, that row is no longer visible in the result table.
| v When a row of the result table is deleted from the underlying table, the row is no
| longer visible in the result table.
| v Changes that are made to the underlying table by other cursors or other
| application processes can be visible in the result table, depending on whether the
| FETCH statements that you use with the cursor are FETCH INSENSITIVE or
| FETCH SENSITIVE statements.
| If the OPEN statement executes with no errors or warnings, DB2 does not set
| SQLWARN0 when it sets SQLWARN1, SQLWARN4, or SQLWARN5. See Appendix
| C of DB2 SQL Reference for specific information on fields in the SQLCA.
| Retrieving rows with a scrollable cursor: When you open any cursor, the cursor
| is positioned before the first row of the result table. You move a scrollable cursor
| around in the result table by specifying a fetch orientation keyword in a FETCH
| statement. A fetch orientation keyword indicates the absolute or relative position of
| the cursor when the FETCH statement is executed. Table 4 lists the fetch
| orientation keywords that you can specify and their meanings.
| Table 4. Positions for a scrollable cursor
| Keyword in FETCH statement Cursor position when the FETCH is executed
| BEFORE Before the first row
| FIRST or ABSOLUTE +1 At the first row
| LAST or ABSOLUTE −1 At the last row
| AFTER After the last row
| ABSOLUTE (1) To an absolute row number, from before the first
| row forward or from after the last row backward
| RELATIVE (1) Forward or backward a relative number of rows
| CURRENT At the current row
| PRIOR or RELATIVE −1 To the previous row
| NEXT or RELATIVE +1 To the next row (default)
| Note:
| 1. ABSOLUTE and RELATIVE are described in greater detail in the discussion of FETCH in
| Chapter 5 of DB2 SQL Reference.
|
| For example, to use the cursor that is declared in Figure 9 on page 86 to fetch the
| fifth row of the result table, use a FETCH statement like this:
| EXEC SQL FETCH ABSOLUTE +5 C1 INTO :HVDEPTNO, :DEPTNAME, :MGRNO;
# To fetch the fifth row from the end of the result table, use this FETCH statement:
# EXEC SQL FETCH ABSOLUTE -5 C1 INTO :HVDEPTNO, :DEPTNAME, :MGRNO;
| Determining the number of rows in the result table for a scrollable cursor:
| You can determine how many rows are in the result table of an INSENSITIVE or
| SENSITIVE STATIC scrollable cursor. To do that, execute a FETCH statement, such
| as FETCH AFTER, that positions the cursor after the last row. Then examine the
| SQLCA. Fields SQLERRD(1) and SQLERRD(2) (fields sqlerrd[0] and sqlerrd[1] for
| C and C++) contain the number of rows in the result table.
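| For example, in COBOL you can position the cursor after the last row and then
| obtain the row count from the SQLCA (NUM-ROWS is an assumed variable):
| EXEC SQL
|   FETCH AFTER FROM C1
| END-EXEC.
| MOVE SQLERRD(1) TO NUM-ROWS.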
| Holes in the result table: Scrollable cursors that are declared as INSENSITIVE
| or SENSITIVE STATIC follow a static model, which means that DB2 determines the
| size of the result table and the order of the rows when you open the cursor.
| Updating or deleting rows from the underlying table after the cursor is open can
| result in holes in the result table. A hole in the result table occurs when a delete or
| update operation results in a difference between the result table and the underlying
| base table.
| Example: Creating a delete hole: Suppose that table A consists of one integer
| column, COL1, which has the following values:
|
|
| The positioned delete statement creates a delete hole, as shown in Figure 11.
|
|
|
| Figure 11. Creating a delete hole
|
| After you execute the positioned delete statement, the third row is deleted from the
| result table, but the result table does not shrink to fill the space that the deleted row
| creates.
| Example: Creating an update hole: Suppose that you declare the following cursor,
| which you use to update rows in A:
| EXEC SQL DECLARE C4 SENSITIVE STATIC SCROLL CURSOR FOR
| SELECT COL1
| FROM A
| WHERE COL1<6;
| The searched UPDATE statement creates an update hole, as shown in Figure 12.
|
|
|
| Figure 12. Creating an update hole
|
| If you try to fetch from a delete hole, DB2 issues an SQL warning. If you try to
| update or delete the delete hole, DB2 issues an SQL error. You can remove a
| delete hole only by opening the scrollable cursor, setting a savepoint, executing a
| positioned DELETE statement with the scrollable cursor, and rolling back to the
| savepoint.
| If you try to fetch from an update hole, DB2 issues an SQL warning. If you try to
| delete the update hole, DB2 issues an SQL error. However, you can convert an
| update hole back to a result table row by updating the row in the base table, as
| shown in Figure 13. You can update the base table with a searched UPDATE
| statement in the same application process, or a searched or positioned UPDATE
| statement in another application process. After you update the base table, if the row
| qualifies for the result table, the update hole disappears.
|
|
|
| Figure 13. Removing an update hole
|
| A hole becomes visible to a cursor when a cursor operation returns a non-zero
| SQLCODE. The point at which a hole becomes visible depends on the following
| factors:
| v Whether the scrollable cursor creates the hole
| v Whether the FETCH statement is FETCH SENSITIVE or FETCH INSENSITIVE
| If the scrollable cursor creates the hole, the hole is visible when you execute a
| FETCH statement for the row that contains the hole. The FETCH statement can be
| FETCH INSENSITIVE or FETCH SENSITIVE.
| If an update or delete operation outside the scrollable cursor creates the hole, the
| hole is visible at the following times:
| v If you execute a FETCH SENSITIVE statement for the row that contains the hole,
| the hole is visible when you execute the FETCH statement.
| v If you execute a FETCH INSENSITIVE statement, the hole is not visible when
| you execute the FETCH statement. DB2 returns the row as it was before the
| update or delete operation occurred. However, if you follow the FETCH
| INSENSITIVE statement with a positioned UPDATE or DELETE statement, the
| hole becomes visible.
| The page size of the TEMP table space must be large enough to hold the longest
| row in the declared temporary table. See Part 2 of DB2 Installation Guide for
| information on calculating the page size for TEMP table spaces that are used for
| scrollable cursors.
After a commit operation, a held cursor is positioned after the last row retrieved and
before the next logical row of the result table to be returned.
If the program abnormally terminates, the cursor position is lost. To prepare for
restart, your program must reposition the cursor.
The following restrictions apply to cursors that are declared WITH HOLD:
v Do not use DECLARE CURSOR WITH HOLD with the new user signon from a
DB2 attachment facility, because all open cursors are closed.
v Do not declare a WITH HOLD cursor in a thread that could become inactive. If
you do, its locks are held indefinitely.
You should always close cursors that you no longer need. If you let DB2 close
a CICS attachment cursor, the cursor might not close until the CICS
attachment facility reuses or terminates the thread.
The following cursor declaration causes the cursor to maintain its position in the
DSN8710.EMP table after a commit point:
EXEC SQL
DECLARE EMPLUPDT CURSOR WITH HOLD FOR
SELECT EMPNO, LASTNAME, PHONENO, JOB, SALARY, WORKDEPT
FROM DSN8710.EMP
WHERE WORKDEPT < 'D11'
ORDER BY EMPNO
END-EXEC.
**************************************************
* Close the cursor *
**************************************************
CLOSE-THISEMP.
EXEC SQL
CLOSE THISEMP
END-EXEC.
You must run DCLGEN, supplying it with the table or view name, before you
precompile your program. To use the declarations that DCLGEN generates in your
program, use the SQL INCLUDE statement.
DB2 must be active before you can use DCLGEN. You can start DCLGEN in
several different ways:
v From ISPF through DB2I. Select the DCLGEN option on the DB2I Primary Option
Menu panel. Next, fill in the DCLGEN panel with the information it needs to build
the declarations. Then press ENTER.
v Directly from TSO. To do this, sign on to TSO, issue the TSO command DSN,
and then issue the subcommand DCLGEN.
v From a CLIST, running in TSO foreground or background, that issues DSN and
then DCLGEN.
v With JCL. Supply the required information, using JCL, and run DCLGEN in batch.
If you want to start DCLGEN in the foreground and your table names include
DBCS characters, you must be able to input and display double-byte characters. If
you do not have a terminal that displays DBCS characters, you can enter DBCS
characters by using the hex mode of ISPF edit.
If you do not specify a location, then this option defaults to the local location
name. This field applies to DB2 private protocol access only (that is, the
location you name must be another DB2 for OS/390 and z/OS).
4 DATA SET NAME
Is the name of the data set you allocated to contain the declarations that
DCLGEN produces. You must supply a name; there is no default.
The data set must already exist, be accessible to DCLGEN, and can be
either sequential or partitioned. If you do not enclose the data set name in
apostrophes, DCLGEN adds a standard TSO prefix (user ID) and suffix
(language). DCLGEN knows what the host language is from the DB2I
defaults panel.
For example, for library name LIBNAME(MEMBNAME), the name becomes:
userid.libname.language(membname)
If this data set is password protected, you must supply the password in the
DATA SET PASSWORD field.
5 DATA SET PASSWORD
Is the password for the data set in the DATA SET NAME field, if the data
set is password protected. It does not display on your terminal, and is not
recognized if you issued it from a previous session.
6 ACTION
Tells DCLGEN what to do with the output when it is sent to a partitioned
data set. (The option is ignored if the data set you specify in DATA SET
NAME field is sequential.)
ADD indicates that an old version of the output does not exist, and
creates a new member with the specified data set name. This is the
default.
REPLACE replaces an old version, if it already exists. If the member
does not exist, this option creates a new member.
7 COLUMN LABEL
Tells DCLGEN whether to include labels declared on any columns of the
table or view as comments in the data declarations. (The SQL statement
LABEL ON creates column labels to use as supplements to column names.)
Use:
YES to include column labels.
NO to ignore column labels. This is the default.
8 STRUCTURE NAME
Is the name of the generated data structure. The name can be up to 31
characters. If the name is not a DBCS string, and the first character is not
The default is NO, which does not generate an indicator variable array.
If you are using an SQL reserved word as an identifier, you must edit the DCLGEN
output in order to add the appropriate SQL delimiters.
DCLGEN produces output that is intended to meet the needs of most users, but
occasionally, you will need to edit the DCLGEN output to work in your specific case.
For example, DCLGEN is unable to determine whether a column defined as NOT
NULL also contains the DEFAULT clause, so you must edit the DCLGEN output to
add the DEFAULT clause to the appropriate column definitions.
For further details about the DCLGEN subcommand, see Chapter 2 of DB2
Command Reference.
Fill in the COBOL defaults panel as necessary. Press Enter to save the new
defaults, if any, and return to the DB2I Primary Option menu.
Figure 18. The COBOL defaults panel. Shown only if the field APPLICATION LANGUAGE on
the DB2I Defaults panel is COBOL, COB2, or IBMCOB.
Fill in the fields as shown in Figure 19 on page 103, and then press Enter.
Figure 19. DCLGEN panel—selecting source table and destination data set
If the operation succeeds, a message displays at the top of your screen as shown
in Figure 20.
DB2 then displays the screen as shown in Figure 21 on page 104. Press Enter to
return to the DB2I Primary Option menu.
For information on reading the syntax diagrams in this chapter, see “How to read
the syntax diagrams” on page xix.
For information on writing embedded SQL application programs in Java, see DB2
Application Programming Guide and Reference for Java.
DB2 sets the SQLCODE and SQLSTATE values after each SQL statement
executes. An application can check these variable values to determine whether the
last SQL statement was successful. All SQL statements in the program must be
within the scope of the declaration of the SQLCODE and SQLSTATE variables.
If your program is reentrant, you must include the SQLCA within a unique data area
acquired for your task (a DSECT). For example, at the beginning of your program,
specify:
PROGAREA DSECT
EXEC SQL INCLUDE SQLCA
As an alternative, you can create a separate storage area for the SQLCA and
provide addressability to that area.
See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE
statement and Appendix C of DB2 SQL Reference for a complete description of
SQLCA fields.
You must place SQLDA declarations before the first SQL statement that references
the data descriptor unless you use the precompiler option TWOPASS. See Chapter
5 of DB2 SQL Reference for more information about the INCLUDE statement and
Appendix C of DB2 SQL Reference for a complete description of SQLDA fields.
Each SQL statement in an assembler program must begin with EXEC SQL. The
EXEC and SQL keywords must appear on one line, but the remainder of the
statement can appear on subsequent lines.
Continuation for SQL statements: The line continuation rules for SQL statements
are the same as those for assembler statements, except that you must specify
EXEC SQL within one line. Any part of the statement that does not fit on one line
can appear on subsequent lines, beginning at the continuation margin (column 16,
the default). Every line of the statement, except the last, must have a continuation
character (a non-blank character) immediately after the right margin in column 72.
Declaring tables and views: Your assembler program should include a DECLARE
statement to describe each table and view the program accesses.
Margins: The precompiler option MARGINS allows you to set a left margin, a right
margin, and a continuation margin. The default values for these margins are
columns 1, 71, and 16, respectively. If EXEC SQL starts before the specified left
margin, the DB2 precompiler does not recognize the SQL statement. If you use the
default margins, you can place an SQL statement anywhere between columns 2
and 71.
Names: You can use any valid assembler name for a host variable. However, do
not use external entry names or access plan names that begin with 'DSN' or host
variable names that begin with 'SQL'. These names are reserved for DB2.
Statement labels: You can prefix an SQL statement with a label. The first line of an
SQL statement can use a label beginning in the left margin (column 1). If you do
not use a label, leave column 1 blank.
TSO
The sample program in prefix.SDSNSAMP(DSNTIAD) contains an example
of how to acquire storage for the SQLDSECT in a program that runs in a
TSO environment.
You can precede the assembler statements that define host variables with the
statement BEGIN DECLARE SECTION, and follow the assembler statements with
the statement END DECLARE SECTION. You must use the statements BEGIN
DECLARE SECTION and END DECLARE SECTION when you use the precompiler
option STDSQL(YES).
You can declare host variables in normal assembler style (DC or DS), depending on
the data type and the limitations on that data type. You can specify a value on DC
or DS declarations (for example, DC H'5'). The DB2 precompiler examines only
packed decimal declarations.
An SQL statement that uses a host variable must be within the scope of the
statement that declares the variable.
For floating point data types (E, EH, EB, D, DH, and DB), DB2 uses the FLOAT
precompiler option to determine whether the host variable is in IEEE floating point
or System/390 floating point format. If the precompiler option is FLOAT(S390), you
need to define your floating point host variables as E, EH, D, or DH. If the
precompiler option is FLOAT(IEEE), you need to define your floating point host
variables as EB or DB. DB2 converts all floating point input data to System/390
floating point before storing it.
variable-name DC|DS [1] type
where type is one of the following:
v HL2 (halfword integer)
v FL4 (fullword integer)
v PLn'value' (packed decimal of length n)
v EL4, EHL4, or EBL4 (single-precision floating point)
v DL8, DHL8, or DBL8 (double-precision floating point)
Character host variables: There are three valid forms for character host variables:
v Fixed-length strings
v Varying-length strings
v CLOBs
The following figures show the syntax for forms other than CLOBs. See Figure 30
on page 114 for the syntax of CLOBs.
Fixed-length strings:
variable-name DC|DS [1] CLn
Varying-length strings:
variable-name DC|DS [1] HL2,CLn
Graphic host variables: There are three valid forms for graphic host variables:
The following figures show the syntax for forms other than DBCLOBs. See
Figure 30 on page 114 for the syntax of DBCLOBs. In the syntax diagrams, value
denotes one or more DBCS characters, and the symbols < and > represent shift-out
and shift-in characters.
Fixed-length graphic strings:
variable-name DC|DS GLn'<value>'
Varying-length graphic strings:
variable-name DS|DC HL2'm',GLn'<value>'
Result set locators: The following figure shows the syntax for declarations of result
set locators. See “Chapter 24. Using stored procedures for client/server processing”
on page 527 for a discussion of how to use these host variables.
variable-name DC|DS [1] FL4
Table Locators: The following figure shows the syntax for declarations of table
locators. See “Accessing transition tables in a user-defined function or stored
procedure” on page 279 for a discussion of how to use these host variables.
LOB variables and locators: The following figure shows the syntax for
declarations of BLOB, CLOB, and DBCLOB host variables and locators.
If you specify the length of the LOB in terms of KB, MB, or GB, you must leave no
spaces between the length and K, M, or G.
See “Chapter 13. Programming for large objects (LOBs)” on page 229 for a
discussion of how to use these host variables.
ROWIDs: The following figure shows the syntax for declarations of ROWID
variables. See “Chapter 13. Programming for large objects (LOBs)” on page 229 for
a discussion of how to use these host variables.
Table 8 helps you define host variables that receive output from the database. You
can use Table 8 to determine the assembler data type that is equivalent to a given
SQL data type. For example, if you retrieve TIMESTAMP data, you can use the
table to define a suitable host variable in the program that receives the data value.
| Table 8 shows direct conversions between DB2 data types and host data types.
| However, a number of DB2 data types are compatible. When you do assignments
| or comparisons of data that have compatible data types, DB2 does conversions
| between those compatible data types. See Table 1 on page 5 for information on
| compatible data types.
Table 8. SQL data types mapped to typical assembler declarations
SQL Data Type Assembler Equivalent Notes
SMALLINT DS HL2
INTEGER DS F
Host graphic data type: You can use the assembler data type “host graphic” in
SQL statements when the precompiler option GRAPHIC is in effect. However, you
cannot use assembler DBCS literals in SQL statements, even when GRAPHIC is in
effect.
Floating point host variables: All floating point data is stored in DB2 in
System/390 floating point format. However, your host variable data can be in
System/390 floating point format or IEEE floating point format. DB2 uses the
FLOAT(S390|IEEE) precompiler option to determine whether your floating point host
variables are in IEEE floating point format or System/390 floating point format. DB2
does no checking to determine whether the host variable declarations or format of
the host variable contents match the precompiler option. Therefore, you need to
ensure that your floating point host variable types and contents match the
precompiler option.
Special Purpose Assembler Data Types: The locator data types are assembler
language data types as well as SQL data types. You cannot use locators as column
types. For information on how to use these data types, see the following sections:
Table locator “Accessing transition tables in a user-defined function or stored
procedure” on page 279
LOB locators “Chapter 13. Programming for large objects (LOBs)” on page 229
When your program uses X to assign a null value to a column, the program should
set the indicator variable to a negative number. DB2 then assigns a null value to the
column and ignores any value in X.
You declare indicator variables in the same way as host variables. You can mix the
declarations of the two types of variables in any way that seems appropriate. For
more information on indicator variables, see “Using indicator variables with host
variables” on page 70 or Chapter 2 of DB2 SQL Reference.
Example:
The following figure shows the syntax for a valid indicator variable.
variable-name DC|DS [1] HL2
DSNTIAR syntax
CALL DSNTIAR,(sqlca, message, lrecl),MF=(E,PARM)
MESSAGE DS H,CL(LINES*LRECL)
ORG MESSAGE
MESSAGEL DC AL2(LINES*LRECL)
MESSAGE1 DS CL(LRECL) text line 1
MESSAGE2 DS CL(LRECL) text line 2
.
.
.
where MESSAGE is the name of the message output area, LINES is the
number of lines in the message output area, and LRECL is the length of each
line.
lrecl
A fullword containing the logical record length of output messages, between 72
and 240.
CICS
If your CICS application requires CICS storage handling, you must use the
subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
CALL DSNTIAC,(eib,commarea,sqlca,msg,lrecl),MF=(E,PARM)
DSNTIAC has extra parameters, which you must use for calls to routines that
use CICS commands.
eib EXEC interface block
commarea communication area
You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you
must also define them in the CSD. For an example of CSD entry generation
statements for use with DSNTIAC, see member DSN8FRDO in the data set
prefix.SDSNSAMP.
The assembler source code for DSNTIAC and job DSNTEJ5A, which
assembles and link-edits DSNTIAC, are also in the data set
prefix.SDSNSAMP.
DB2 sets the SQLCODE and SQLSTATE values after each SQL statement
executes. An application can check these variable values to determine whether the
last SQL statement was successful. All SQL statements in the program must be
within the scope of the declaration of the SQLCODE and SQLSTATE variables.
A standard declaration includes both a structure definition and a static data area
named 'sqlca'. See Chapter 5 of DB2 SQL Reference for more information about
the INCLUDE statement and Appendix C of DB2 SQL Reference for a complete
description of SQLCA fields.
Unlike the SQLCA, more than one SQLDA can exist in a program, and an SQLDA
can have any valid name. You can code an SQLDA in a C program either directly or
by using the SQL INCLUDE statement. The SQL INCLUDE statement requests a
standard SQLDA declaration:
EXEC SQL INCLUDE SQLDA;
A standard declaration includes only a structure definition with the name 'sqlda'.
See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE
statement and Appendix C of DB2 SQL Reference for a complete description of
SQLDA fields.
You must place SQLDA declarations before the first SQL statement that references
the data descriptor, unless you use the precompiler option TWOPASS. You can
place an SQLDA declaration wherever C allows a structure definition. Normal C
scoping rules apply.
Each SQL statement in a C program must begin with EXEC SQL and end with a
semi-colon (;). The EXEC and SQL keywords must appear all on one line, but the
remainder of the statement can appear on subsequent lines.
# In general, because C is case sensitive, use uppercase letters to enter SQL words.
# However, if you use the FOLD precompiler suboption, DB2 folds lowercase letters
# in SBCS SQL ordinary identifiers to uppercase. For information on host language
# precompiler options, see Table 48 on page 403.
You must keep the case of host variable names consistent throughout the program.
For example, if a host variable name is lowercase in its declaration, it must be
lowercase in all SQL statements. You might code an UPDATE statement in a C
program as follows:
EXEC SQL
UPDATE DSN8710.DEPT
SET MGRNO = :mgr_num
WHERE DEPTNO = :int_dept;
Comments: You can include C comments (/* ... */) within SQL statements wherever
you can use a blank, except between the keywords EXEC and SQL. You can use
single-line comments (starting with //) in C language statements, but not in
embedded SQL. You cannot nest comments.
Declaring tables and views: Your C program should use the statement DECLARE
TABLE to describe each table and view the program accesses. You can use the
DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE
statements. For details, see “Chapter 8. Generating declarations for your tables
using DCLGEN” on page 95.
You cannot nest SQL INCLUDE statements. Do not use C #include statements to
include SQL statements or C host variable declarations.
Margins: Code SQL statements in columns 1 through 72, unless you specify other
margins to the DB2 precompiler. If EXEC SQL is not within the specified margins,
the DB2 precompiler does not recognize the SQL statement.
Names: You can use any valid C name for a host variable, subject to the following
restrictions:
Nulls and NULs: C and SQL differ in the way they use the word null. The C
language has a null character (NUL), a null pointer (NULL), and a null statement
(just a semicolon). The C NUL is a single character which compares equal to 0. The
C NULL is a special reserved pointer value that does not point to any valid data
object. The SQL null value is a special value that is distinct from all nonnull values
and denotes the absence of a (nonnull) value. In this chapter, NUL is the null
character in C and NULL is the SQL null value.
Sequence numbers: The source statements that the DB2 precompiler generates
do not include sequence numbers.
Statement labels: You can precede SQL statements with a label, if you wish.
Trigraphs: Some characters from the C character set are not available on all
keyboards. You can enter these characters into a C source program using a
sequence of three characters called a trigraph. The trigraphs that DB2 supports are
the same as those that the C/370 compiler supports.
WHENEVER statement: The target for the GOTO clause in an SQL WHENEVER
statement must be within the scope of any SQL statements that the statement
WHENEVER affects.
Special C considerations:
v Use of the C/370 multi-tasking facility, where multiple tasks execute SQL
statements, causes unpredictable results.
v You must run the DB2 precompiler before running the C preprocessor.
v The DB2 precompiler does not support C preprocessor directives.
v If you use conditional compiler directives that contain C code, either place them
after the first C token in your application program, or include them in the C
program using the #include preprocessor directive.
Please refer to the appropriate C documentation for further information on C
preprocessor directives.
Precede C statements that define the host variables with the statement BEGIN
DECLARE SECTION, and follow the C statements with the statement END
DECLARE SECTION. You can have more than one host variable declaration
section in your program.
The names of host variables must be unique within the program, even if the host
variables are in different blocks, classes, or procedures. You can qualify the host
variable names with a structure name to make them unique.
Host variables must be scalar variables or host structures; they cannot be elements
of vectors or arrays (subscripted variables) unless you use character arrays to
hold strings. You can use an array of indicator variables when you associate the
array with a host structure.
Numeric host variables: The following figure shows the syntax for valid numeric
host variable declarations.
[auto|extern|static] [const|volatile]
{ float | double | short [int] | sqlint32 | long [int] | decimal(integer[,integer]) }
variable-name [=expression] ;
Character host variables: There are four valid forms for character host variables:
v Single-character form
v NUL-terminated character form
v VARCHAR structured form
v CLOBs
The following figures show the syntax for forms other than CLOBs. See Figure 42
on page 129 for the syntax of CLOBs.
[auto|extern|static] [const|volatile] [unsigned] char variable-name [=expression] ;
Figure 34. Single-character form
[auto|extern|static] [const|volatile] [unsigned] char variable-name[length] [=expression] ;
Notes:
1. On input, the string contained by the variable must be NUL-terminated.
2. On output, the string is NUL-terminated.
3. A NUL-terminated character host variable maps to a varying length character
string (except for the NUL).
[auto|extern|static] [const|volatile] struct [tag]
{ short [int] var-1 ; [unsigned] char var-2[length] ; }
variable-name [={ expression, expression }] ;
Notes:
v var-1 and var-2 must be simple variable references. You cannot use them as host
variables.
v You can use the struct tag to define other data areas, which you cannot use as
host variables.
Example:
EXEC SQL BEGIN DECLARE SECTION;
struct VARCHAR {
short len;
char s[10];
} vstring;
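In ordinary C code, a structure such as vstring is typically filled by setting the length field and copying the bytes; no trailing NUL is required. The following is a minimal plain-C sketch (the helper name varchar_set is ours, not part of DB2):

```c
#include <assert.h>
#include <string.h>

struct VARCHAR {
    short len;     /* current length, in bytes */
    char  s[10];   /* data bytes; no NUL terminator needed */
};

/* Copy a C string into the VARCHAR structured form,
 * truncating to the size of the data array. */
static void varchar_set(struct VARCHAR *v, const char *src) {
    size_t n = strlen(src);
    if (n > sizeof v->s)
        n = sizeof v->s;
    memcpy(v->s, src, n);
    v->len = (short)n;
}
```

For example, varchar_set(&vstring, "BELL") sets len to 4 and copies the four data bytes.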
Graphic host variables: There are four valid forms for graphic host variables:
You can use the C data type wchar_t to define a host variable that inserts, updates,
deletes, and selects data from GRAPHIC or VARGRAPHIC columns.
The following figures show the syntax for forms other than DBCLOBs. See
Figure 42 on page 129 for the syntax of DBCLOBs.
[auto|extern|static] [const|volatile] wchar_t variable-name [=expression] ;
Figure 37. Single-graphic form
Notes:
1. length must be a decimal integer constant greater than 1 and not greater than
16352.
2. On input, the string in variable-name must be NUL-terminated.
3. On output, the string is NUL-terminated.
4. The NUL-terminated graphic form does not accept single byte characters into
variable-name.
[auto|extern|static] [const|volatile] struct [tag]
{ short [int] var-1 ; wchar_t var-2[length] ; }
variable-name ;
Example:
EXEC SQL BEGIN DECLARE SECTION;
struct VARGRAPH {
short len;
wchar_t d[10];
} vgraph;
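Filling the vgraph structure works the same way as for vstring, except that the length counts double-byte characters rather than bytes. A plain-C sketch (the helper name vargraph_set is ours):

```c
#include <assert.h>
#include <wchar.h>

struct VARGRAPH {
    short   len;    /* length in DBCS characters, not bytes */
    wchar_t d[10];
};

/* Copy a wide string into the VARGRAPHIC structured form,
 * truncating to the capacity of the data array. */
static void vargraph_set(struct VARGRAPH *v, const wchar_t *src) {
    size_t n = wcslen(src);
    size_t cap = sizeof v->d / sizeof v->d[0];
    if (n > cap)
        n = cap;
    for (size_t i = 0; i < n; i++)
        v->d[i] = src[i];
    v->len = (short)n;
}
```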
Result set locators: The following figure shows the syntax for declarations of result
set locators. See “Chapter 24. Using stored procedures for client/server processing”
on page 527 for a discussion of how to use these host variables.
[auto|extern|static] [const|volatile] SQL TYPE IS RESULT_SET_LOCATOR VARYING variable-name [= init-value] ;
Table Locators: The following figure shows the syntax for declarations of table
locators. See “Accessing transition tables in a user-defined function or stored
procedure” on page 279 for a discussion of how to use these host variables.
[auto|extern|static] [const|volatile] SQL TYPE IS TABLE LIKE table-name AS LOCATOR variable-name [= init-value] ;
LOB Variables and Locators: The following figure shows the syntax for
declarations of BLOB, CLOB, and DBCLOB host variables and locators. See
“Chapter 13. Programming for large objects (LOBs)” on page 229 for a discussion of
how to use these host variables.
[auto|extern|static|register] [const|volatile] SQL TYPE IS { BLOB | CLOB | DBCLOB } (length [K|M|G]) variable-name ;
ROWIDs: The following figure shows the syntax for declarations of ROWID
variables. See “Chapter 13. Programming for large objects (LOBs)” on page 229 for
a discussion of how to use these host variables.
In this example, target is the name of a host structure consisting of the c1, c2, and
c3 fields. c1 and c3 are character arrays, and c2 is the host variable equivalent to
the SQL VARCHAR data type. The target host structure can be part of another host
structure but must be the deepest level of the nested structure.
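A host structure matching that description might look like the following plain-C sketch (the field lengths here are illustrative assumptions, not taken from the manual):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical host structure "target" with three fields:
 * c1 and c3 are fixed-length character arrays, and c2 is the
 * structured form equivalent to the SQL VARCHAR data type. */
struct target_t {
    char c1[3];
    struct {
        short len;      /* current length of the VARCHAR data */
        char  data[5];
    } c2;
    char c3[2];
};
```

The whole structure can then be named in a single FETCH ... INTO :target, with each field receiving the corresponding column.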
The following figure shows the syntax for valid host structures.
[auto|extern|static] [const|volatile] [packed] struct [tag] { member-declarations } variable-name [= expression] ;
where each member declaration is one of the following:
v float | double | short [int] | sqlint32 | long [int] | decimal(integer[,integer]) var-1 ;
v [unsigned] char var-2[length] ;
v wchar_t var-5[length] ;
v a varchar structure: struct [tag] { short [signed] [int] var-3 ; [unsigned] char var-4[length] ; }
v a vargraphic structure: struct [tag] { short [signed] [int] var-6 ; wchar_t var-7[length] ; }
v SQL TYPE IS ROWID
v a LOB data type
Table 10 on page 133 helps you define host variables that receive output from the
database. You can use the table to determine the C data type that is equivalent to a
given SQL data type. For example, if you retrieve TIMESTAMP data, you can use
the table to define a suitable host variable in the program that receives the data
value.
| Table 10 on page 133 shows direct conversions between DB2 data types and host
| data types. However, a number of DB2 data types are compatible. When you do
| assignments or comparisons of data that have compatible data types, DB2 does
| conversions between those compatible data types. See Table 1 on page 5 for
| information on compatible data types.
C data types with no SQL equivalent: C supports some data types and storage
classes with no SQL equivalents, for example, register storage class, typedef, and
the pointer.
SQL data types with no C equivalent: If your C compiler does not have a decimal
data type, then there is no exact equivalent for the SQL DECIMAL data type. In this
case, to hold the value of such a variable, you can use:
v An integer or floating-point variable, which converts the value. If you choose
integer, you will lose the fractional part of the number. If the decimal number can
exceed the maximum value for an integer, or if you want to preserve a fractional
value, you can use floating-point numbers. Floating-point numbers are
approximations of real numbers. Hence, when you assign a decimal number to a
floating point variable, the result could be different from the original number.
v A character string host variable. Use the CHAR function to get a string
representation of a decimal number.
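Both alternatives behave as described: an integer host variable loses the fractional part, while a character-string host variable preserves the exact digits and can be converted when arithmetic is needed. A plain-C illustration (the value 123.45 stands in for a DECIMAL(5,2) column value, and the string is the kind of result the CHAR function produces; the helper names are ours):

```c
#include <assert.h>
#include <stdlib.h>

/* Receiving a decimal value in an integer host variable
 * drops the fractional part of the number. */
static long decimal_as_integer(double decimal_value) {
    return (long)decimal_value;
}

/* Receiving the CHAR() string form keeps every digit; convert
 * to floating point only when you need to compute with it. */
static double decimal_from_string(const char *s) {
    return strtod(s, NULL);
}
```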
Floating point host variables: All floating point data is stored in DB2 in
System/390 floating point format. However, your host variable data can be in
System/390 floating point format or IEEE floating point format. DB2 uses the
FLOAT(S390|IEEE) precompiler option to determine whether your floating point host
variables are in IEEE floating point or System/390 floating point format. DB2 does
no checking to determine whether the contents of a host variable match the
precompiler option. Therefore, you need to ensure that your floating point data
format matches the precompiler option.
Special Purpose C Data Types: The locator data types are C data types as well
as SQL data types. You cannot use locators as column types. For information on
how to use these data types, see the following sections:
Result set locator
“Chapter 24. Using stored procedures for client/server processing”
on page 527
Table locator “Accessing transition tables in a user-defined function or stored
procedure” on page 279
LOB locators “Chapter 13. Programming for large objects (LOBs)” on page 229
If you assign a string of length n to a NUL-terminated variable with a length that is:
v less than or equal to n, then DB2 inserts the characters into the host variable as
long as the characters fit up to length (n-1) and appends a NUL at the end of the
string. DB2 sets SQLWARN[1] to W and any indicator variable you provide to the
original length of the source string.
v equal to n+1, then DB2 inserts the characters into the host variable and appends
a NUL at the end of the string.
v greater than n+1, then the rules depend on whether the source string is a value
of a fixed-length string column or a varying-length string column. See Chapter 2
of DB2 SQL Reference for more information.
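The first rule can be imitated in plain C: copy what fits, reserve one byte for the NUL, and report the original length the way an indicator variable would. This sketch illustrates the rule only; it is not DB2's implementation:

```c
#include <assert.h>
#include <string.h>

/* Assign src to a NUL-terminated buffer of total size bufsize.
 * Copies at most bufsize-1 characters, always appends a NUL, and
 * returns the original source length (as a truncation indicator). */
static long assign_nul_terminated(char *buf, size_t bufsize, const char *src) {
    size_t n = strlen(src);
    size_t copy = (n < bufsize) ? n : bufsize - 1;
    memcpy(buf, src, copy);
    buf[copy] = '\0';
    return (long)n;
}
```

With a 5-byte buffer, assigning "ABCDEFG" yields the string "ABCD" and an indicator of 7, the length of the source string.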
PREPARE or DESCRIBE statements: You cannot use a host variable that is of the
NUL-terminated form in either a PREPARE or DESCRIBE statement.
Truncation: Be careful of truncation. Ensure the host variable you declare can
contain the data and a NUL terminator, if needed. Retrieving a floating-point or
decimal column value into a long integer host variable removes any fractional part
of the value.
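The loss of the fractional part is ordinary C conversion behavior, which truncates toward zero:

```c
#include <assert.h>

/* Receiving a floating-point or decimal column value into a long
 * integer host variable truncates toward zero, dropping the fraction. */
static long receive_into_long(double column_value) {
    return (long)column_value;
}
```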
In SQL, you can use quotes to delimit identifiers and apostrophes to delimit string
constants. The following examples illustrate the use of apostrophes and quotes in
SQL.
Quotes
SELECT "COL#1" FROM TBL1;
Apostrophes
SELECT COL1 FROM TBL1 WHERE COL2 = 'BELL';
Character data in SQL is distinct from integer data. Character data in C is a subtype
of integer data.
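That difference matters in mixed expressions: in C a character participates directly in integer arithmetic using its code value, which has no SQL counterpart for CHAR data. For example:

```c
#include <assert.h>

/* In C, char is an integer type: arithmetic on a character
 * yields an int one above (here) the character's code value. */
static int next_code(char c) {
    return c + 1;
}
```

In both the EBCDIC and ASCII code pages, next_code('A') equals the code for 'B'.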
Varying-length strings: For varying-length BIT data, use the VARCHAR structured
form. Some C string manipulation functions process NUL-terminated strings and
When your program uses X to assign a null value to a column, the program should
set the indicator variable to a negative number. DB2 then assigns a null value to the
column and ignores any value in X.
You declare indicator variables in the same way as host variables. You can mix the
declarations of the two types of variables in any way that seems appropriate. For
more information about indicator variables, see “Using indicator variables with host
variables” on page 70.
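The convention can be sketched in plain C by pairing the host variable X with a short indicator and testing or setting the indicator's sign. The pairing below is our illustration only; in a real program DB2 reads the indicator on INSERT or UPDATE and sets it on FETCH:

```c
#include <assert.h>

/* Hypothetical pairing of a host variable X with its indicator. */
struct nullable_long {
    long  x;
    short ind;   /* negative means SQL null; x is then ignored */
};

static void set_null(struct nullable_long *v)            { v->ind = -1; }
static void set_value(struct nullable_long *v, long val) { v->x = val; v->ind = 0; }
static int  is_sql_null(const struct nullable_long *v)   { return v->ind < 0; }
```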
Example:
The following figure shows the syntax for a valid indicator variable.
[auto|extern|static] [const|volatile] [signed] short [int] variable-name [, variable-name]... ;
Figure 48. Indicator variable
The following figure shows the syntax for a valid indicator array.
[auto|extern|static] [const|volatile] [signed] short [int] variable-name[dimension] [= expression] [, variable-name[dimension]]... ;
DSNTIAR syntax
rc = dsntiar(&sqlca, &message, &lrecl);
where message is the name of the message output area and lrecl is the logical
record length of each of its lines.
&lrecl
A fullword containing the logical record length of output messages, between 72
and 240.
For C, include:
#pragma linkage (dsntiar,OS)
CICS
If your CICS application requires CICS storage handling, you must use the
subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
rc = DSNTIAC(&eib, &commarea, &sqlca, &message, &lrecl);
DSNTIAC has extra parameters, which you must use for calls to routines that
use CICS commands.
&eib EXEC interface block
&commarea
communication area
You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you
must also define them in the CSD. For an example of CSD entry generation
statements for use with DSNTIAC, see job DSNTEJ5A.
The assembler source code for DSNTIAC and job DSNTEJ5A, which
assembles and link-edits DSNTIAC, are in the data set prefix.SDSNSAMP.
Using C++ data types as host variables: You can use class members as host
variables. Class members used as host variables are accessible to any SQL
statement within the class.
Except where noted otherwise, this information pertains to all COBOL compilers
supported by DB2 for OS/390 and z/OS.
DB2 sets the SQLCODE and SQLSTATE values after each SQL statement
executes. An application can check these variable values to determine whether the
last SQL statement was successful. All SQL statements in the program must be
within the scope of the declaration of the SQLCODE and SQLSTATE variables.
When you use the precompiler option STDSQL(YES), you must declare an
SQLCODE variable. DB2 declares an SQLCA area for you in the
WORKING-STORAGE SECTION. DB2 controls that SQLCA, so your application
programs should not make assumptions about its structure or location.
You can specify INCLUDE SQLCA or a declaration for SQLCODE wherever you
can specify a 77 level or a record description entry in the WORKING-STORAGE
SECTION. You can declare a stand-alone SQLCODE variable in either the
WORKING-STORAGE SECTION or LINKAGE SECTION.
Unlike the SQLCA, there can be more than one SQLDA in a program, and an
SQLDA can have any valid name. The DB2 SQL INCLUDE statement does not
provide an SQLDA mapping for COBOL. You can define the SQLDA using one of
the following two methods:
v For COBOL programs compiled with any compiler except the OS/VS COBOL
compiler, you can code the SQLDA declarations in your program. For more
information, see “Using dynamic SQL in COBOL” on page 526. You must place
SQLDA declarations in the WORKING-STORAGE SECTION or LINKAGE
SECTION of your program, wherever you can specify a record description entry
in that section.
v For COBOL programs compiled with any COBOL compiler, you can call a
subroutine (written in C, PL/I, or assembler language) that uses the DB2
INCLUDE SQLDA statement to define the SQLDA. The subroutine can also
include SQL statements for any dynamic SQL functions you need. You must use
this method if you compile your program using OS/VS COBOL. The SQLDA
definition includes the POINTER data type, which OS/VS COBOL does not
support. For more information on using dynamic SQL, see “Chapter 23. Coding
dynamic SQL in application programs” on page 497.
You must place SQLDA declarations before the first SQL statement that references
the data descriptor. An SQL statement that uses a host variable must be within the
scope of the statement that declares the variable.
Each SQL statement in a COBOL program must begin with EXEC SQL and end
with END-EXEC. If the SQL statement appears between two COBOL statements,
the period is optional and might not be appropriate. If the statement appears in an
IF...THEN set of COBOL statements, leave off the ending period to avoid
inadvertently ending the IF statement. The EXEC and SQL keywords must appear
on one line, but the remainder of the statement can appear on subsequent lines.
In addition, you can include SQL comments in any embedded SQL statement if you
specify the precompiler option STDSQL(YES).
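For example, the following sketch (the table, column, and host-variable names are hypothetical) shows an embedded SQL statement inside an IF construct, with the ending period omitted after END-EXEC so that the IF statement is not inadvertently terminated:

```cobol
IF UPDATE-NEEDED = 'Y'
    EXEC SQL
        UPDATE EMP
           SET SALARY = :NEW-SALARY
         WHERE EMPNO = :EMP-NUMBER
    END-EXEC
ELSE
    PERFORM NO-UPDATE-ROUTINE
END-IF.
```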
Continuation for SQL statements: The rules for continuing a character string
constant from one line to the next in an SQL statement embedded in a COBOL
program are the same as those for continuing a non-numeric literal in COBOL.
However, you can use either a quotation mark or an apostrophe as the first
nonblank character in area B of the continuation line. The same rule applies for the
continuation of delimited identifiers and does not depend on the string delimiter
option.
Declaring tables and views: Your COBOL program should include the statement
DECLARE TABLE to describe each table and view the program accesses. You can
use the DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE
statements. You should include the DCLGEN members in the DATA DIVISION. For
details, see “Chapter 8. Generating declarations for your tables using DCLGEN” on
page 95.
You cannot nest SQL INCLUDE statements. Do not use COBOL verbs to include
SQL statements or COBOL host variable declarations, or use the SQL INCLUDE
statement to include CICS preprocessor related code. In general, use the SQL
INCLUDE only for SQL-related coding.
Margins: Code SQL statements in columns 12 through 72. If EXEC SQL starts
before column 12, the DB2 precompiler does not recognize the SQL statement.
The precompiler option MARGINS allows you to set new left and right margins
between 1 and 80. However, you must not code the statement EXEC SQL before
column 12.
Names: You can use any valid COBOL name for a host variable. Do not use
external entry names or access plan names that begin with 'DSN' and host variable
names that begin with 'SQL'. These names are reserved for DB2.
Sequence numbers: The source statements that the DB2 precompiler generates
do not include sequence numbers.
WHENEVER statement: The target for the GOTO clause in an SQL statement
WHENEVER must be a section name or unqualified paragraph name in the
PROCEDURE DIVISION.
You can specify the option DYNAM when compiling a COBOL program if
you use VS COBOL II or COBOL/370™, or if you use OS/VS COBOL with
the VS COBOL II or COBOL/370 run-time libraries.
IMS and DB2 share a common alias name, DSNHLI, for the language
interface module. You must do the following when you concatenate your
libraries:
– If you use IMS with the COBOL option DYNAM, be sure to concatenate
the IMS library first.
– If you run your application program only under DB2, be sure to
concatenate the DB2 library first.
You must specify the option NODYNAM when you compile a COBOL
program that includes SQL statements. You cannot use DYNAM.
Because stored procedures use CAF, you must also compile COBOL stored
procedures with the option NODYNAM.
DB2 assigns values to COBOL binary integer host variables as if you had
specified the COBOL compiler option TRUNC(BIN).
v If a COBOL program contains several entry points or is called several times, the
USING clause of the entry statement that executes before the first SQL
statement executes must contain the SQLCA and all linkage section entries that
any SQL statement uses as host variables.
v The REPLACE statement has no effect on SQL statements. It affects only the
COBOL statements that the precompiler generates.
v Do not use COBOL figurative constants (such as ZERO and SPACE), symbolic
characters, reference modification, and subscripts within SQL statements.
v Observe the rules in Chapter 2 of DB2 SQL Reference when you name SQL
# identifiers. However, for COBOL only, the names of SQL identifiers can follow the
# rules for naming COBOL words, if the names do not exceed the allowable length
# for the DB2 object. For example, the name 1ST-TIME is a valid cursor name
# because it is a valid COBOL word, but the name 1ST_TIME is not valid because
# it is not a valid SQL identifier or a valid COBOL word.
v Observe these rules for hyphens:
– Surround hyphens used as subtraction operators with spaces. DB2 usually
interprets a hyphen with no spaces around it as part of a host variable name.
If you pass host variables with address changes into a program more than once,
then the called program must reset SQL-INIT-FLAG. Resetting this flag indicates
that the storage must be initialized when the next SQL statement executes. To reset the
flag, insert the statement MOVE ZERO TO SQL-INIT-FLAG in the called program’s
PROCEDURE DIVISION, ahead of any executable SQL statements that use the
host variables.
End of Product-sensitive Programming Interface
You can precede COBOL statements that define the host variables with the
statement BEGIN DECLARE SECTION, and follow the statements with the
statement END DECLARE SECTION. You must use the statements BEGIN
DECLARE SECTION and END DECLARE SECTION when you use the precompiler
option STDSQL(YES).
The names of host variables should be unique within the source data set or
member, even if the host variables are in different blocks, classes, or procedures.
You can qualify the host variable names with a structure name to make them
unique.
An SQL statement that uses a host variable must be within the scope of the
statement that declares the variable.
You cannot define host variables, other than indicator variables, as arrays. You can
specify OCCURS only when defining an indicator structure. You cannot specify
OCCURS for any other type of host variable.
Numeric host variables: The following figures show the syntax for valid numeric
host variable declarations.
{01 | 77 | level-1} variable-name [USAGE [IS]]
    {COMPUTATIONAL-1 | COMP-1 | COMPUTATIONAL-2 | COMP-2}
    [VALUE [IS] numeric-constant] .
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. COMPUTATIONAL-1 and COMP-1 are equivalent.
3. COMPUTATIONAL-2 and COMP-2 are equivalent.
{01 | 77 | level-1} variable-name {PICTURE | PIC} [IS]
    {S9(4) | S9999 | S9(9) | S999999999}
    [USAGE [IS]] {BINARY | COMPUTATIONAL-4 | COMP-4
                 | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP}
    [VALUE [IS] numeric-constant] .
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. BINARY, COMP, COMPUTATIONAL, COMPUTATIONAL-4, COMP-4, COMPUTATIONAL-5, and
COMP-5 are equivalent.
3. Any specification for scale is ignored.
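As a sketch (the variable names are hypothetical), the following declarations fit the integer syntax above:

```cobol
77  HWORD-VAR  PIC S9(4)  USAGE COMP-4.
77  FWORD-VAR  PIC S9(9)  USAGE COMP-5  VALUE 0.
01  COUNT-VAR  PIC S9(9)  BINARY.
```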
{01 | 77 | level-1} variable-name {PICTURE | PIC} [IS] picture-string
    [USAGE [IS]] {PACKED-DECIMAL | COMPUTATIONAL-3 | COMP-3
                 | DISPLAY SIGN [IS] LEADING SEPARATE [CHARACTER]}
    [VALUE [IS] numeric-constant] .
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. PACKED-DECIMAL, COMPUTATIONAL-3, and COMP-3 are equivalent. The
picture-string associated with these types must have the form S9(i)V9(d) (or
S9...9V9...9, with i and d instances of 9) or S9(i)V.
3. The picture-string associated with SIGN LEADING SEPARATE must have the
form S9(i)V9(d) (or S9...9V9...9, with i and d instances of 9) or S9(i)V (or
S9...9V, with i instances of 9).
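As a sketch (the variable names are hypothetical), the following declarations fit the decimal syntax above:

```cobol
01  AMOUNT-VAR   PIC S9(7)V9(2)  USAGE COMP-3.
01  RATE-VAR     PIC S9(3)V9(4)  PACKED-DECIMAL.
01  DISP-AMOUNT  PIC S9(5)V9(2)  DISPLAY SIGN LEADING SEPARATE.
```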
Character host variables: There are three valid forms of character host variables:
v Fixed-length strings
v Varying-length strings
v CLOBs
The following figures show the syntax for forms other than CLOBs. See Figure 59
on page 151 for the syntax of CLOBs.
{01 | 77 | level-1} variable-name {PICTURE | PIC} [IS] picture-string
    [USAGE [IS] DISPLAY] [VALUE [IS] character-constant] .
{01 | level-1} variable-name .
    49 var-1 {PICTURE | PIC} [IS] {S9(4) | S9999}
        [USAGE [IS]] {BINARY | COMPUTATIONAL-4 | COMP-4
                     | COMPUTATIONAL-5 | COMP-5}
        [VALUE [IS] numeric-constant] .
    49 var-2 {PICTURE | PIC} [IS] picture-string
        [USAGE [IS] DISPLAY] [VALUE [IS] character-constant] .
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. The picture-string associated with these forms must be X(m) (or XX...X, with m
instances of X), with 1 <= m <= 255 for fixed-length strings; for other strings, m
cannot be greater than the maximum size of a varying-length character string.
DB2 uses the full length of the S9(4) variable even though IBM COBOL for MVS
and VM only recognizes values up to 9999. This can cause data truncation
errors when COBOL statements execute and might effectively limit the
maximum length of variable-length character strings to 9999. Consider using the
TRUNC(OPT) or NOTRUNC COBOL compiler option (whichever is appropriate)
to avoid data truncation.
3. You cannot directly reference var-1 and var-2 as host variables.
# 4. You cannot use an intervening REDEFINE at level 49.
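As a sketch (the names and lengths are hypothetical), a fixed-length string and a varying-length string declared according to the forms above:

```cobol
01  FIXED-NAME   PIC X(30).
01  VAR-NAME.
    49  VAR-NAME-LEN   PIC S9(4)  USAGE BINARY.
    49  VAR-NAME-TEXT  PIC X(100).
```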
Graphic character host variables: There are three valid forms for graphic
character host variables:
v Fixed-length strings
v Varying-length strings
v DBCLOBs
The following figures show the syntax for forms other than DBCLOBs. See
Figure 59 on page 151 for the syntax of DBCLOBs.
{01 | 77 | level-1} variable-name {PICTURE | PIC} [IS] picture-string
    [USAGE [IS]] DISPLAY-1 [VALUE [IS] graphic-constant] .
{01 | level-1} variable-name .
    49 var-1 {PICTURE | PIC} [IS] {S9(4) | S9999}
        [USAGE [IS]] {BINARY | COMPUTATIONAL-4 | COMP-4
                     | COMPUTATIONAL-5 | COMP-5}
        [VALUE [IS] numeric-constant] .
    49 var-2 {PICTURE | PIC} [IS] picture-string
        [USAGE [IS]] DISPLAY-1 [VALUE [IS] graphic-constant] .
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. The picture-string associated with these forms must be G(m) (or GG...G, with m
instances of G), with 1 <= m <= 127 for fixed-length strings. You can use N in
place of G for COBOL graphic variable declarations. If you use N for graphic
variable declarations, USAGE DISPLAY-1 is optional. For strings other than
fixed-length, m cannot be greater than the maximum size of a varying-length
graphic string.
DB2 uses the full size of the S9(4) variable even though some COBOL
implementations restrict the maximum length of varying-length graphic string to
9999. This can cause data truncation errors when COBOL statements execute
and might effectively limit the maximum length of variable-length graphic strings
to 9999. Consider using the TRUNC(OPT) or NOTRUNC COBOL compiler
option (whichever is appropriate) to avoid data truncation.
3. You cannot directly reference var-1 and var-2 as host variables.
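As a sketch (the names and lengths are hypothetical), a fixed-length graphic string and a varying-length graphic string declared according to the forms above:

```cobol
01  FIXED-GNAME  PIC G(10)  USAGE DISPLAY-1.
01  VAR-GNAME.
    49  VAR-GNAME-LEN   PIC S9(4)  USAGE BINARY.
    49  VAR-GNAME-TEXT  PIC G(50)  USAGE DISPLAY-1.
```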
Table Locators: The following figure shows the syntax for declarations of table
locators. See “Accessing transition tables in a user-defined function or stored
procedure” on page 279 for a discussion of how to use these host variables.
{01 | 77 | level-1} variable-name [USAGE [IS]]
    SQL TYPE IS TABLE LIKE table-name AS LOCATOR .
Note: level-1 indicates a COBOL level between 2 and 48.
LOB Variables and Locators: The following figure shows the syntax for
declarations of BLOB, CLOB, and DBCLOB host variables and locators. See
“Chapter 13. Programming for large objects (LOBs)” on page 229 for a discussion of
how to use these host variables.
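As a sketch (the variable names and lengths are hypothetical), a CLOB host variable and a BLOB locator declared with the SQL TYPE IS notation:

```cobol
01  RESUME-VAR  USAGE IS SQL TYPE IS CLOB(200K).
01  PHOTO-LOC   USAGE IS SQL TYPE IS BLOB-LOCATOR.
```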
A host structure name can be a group name whose subordinate levels name
elementary data items. In the following example, B is the name of a host structure
consisting of the elementary items C1 and C2.
01 A.
   02 B.
      03 C1 PICTURE ...
      03 C2 PICTURE ...
When you write an SQL statement using a qualified host variable name (perhaps to
identify a field within a structure), use the name of the structure followed by a
period and the name of the field. For example, specify B.C1 rather than C1 OF B or
C1 IN B.
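A sketch (the table and column names are hypothetical) of an SQL statement that references the qualified field B.C1 from the structure above:

```cobol
EXEC SQL
    SELECT COL1
      INTO :B.C1
      FROM MYTABLE
END-EXEC.
```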
The precompiler does not recognize host variables or host structures on any
subordinate levels after one of these items:
v A COBOL item that must begin in area A
v Any SQL statement (except SQL INCLUDE)
v Any SQL statement within an included member
When the precompiler encounters one of the above items in a host structure, it
therefore considers the structure to be complete.
Figure 61 on page 153 shows the syntax for valid host structures.
level-1 variable-name .
    level-2 var-1 {PICTURE | PIC} [IS] picture-string
        [USAGE [IS]] {BINARY | COMPUTATIONAL-4 | COMP-4
                     | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP
                     | PACKED-DECIMAL | COMPUTATIONAL-3 | COMP-3
                     | DISPLAY SIGN [IS] LEADING SEPARATE [CHARACTER]}
        [VALUE [IS] constant] .

    level-2 var-1 {PICTURE | PIC} [IS] picture-string
        [USAGE [IS] DISPLAY] [VALUE [IS] constant] .

    level-2 variable-name .
        49 var-2 {PICTURE | PIC} [IS] {S9(4) | S9999}
            [USAGE [IS]] {BINARY | COMPUTATIONAL-4 | COMP-4
                         | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP}
            [VALUE [IS] numeric-constant] .
        49 var-3 {PICTURE | PIC} [IS] picture-string
            [USAGE [IS] DISPLAY] [VALUE [IS] constant] .

    level-2 variable-name .
        49 var-4 {PICTURE | PIC} [IS] {S9(4) | S9999}
            [USAGE [IS]] {BINARY | COMPUTATIONAL-4 | COMP-4
                         | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP}
            [VALUE [IS] numeric-constant] .
        49 var-5 {PICTURE | PIC} [IS] picture-string
            [USAGE [IS]] DISPLAY-1 [VALUE [IS] graphic-constant] .
Notes:
1. level-1 indicates a COBOL level between 1 and 47.
2. level-2 indicates a COBOL level between 2 and 48.
3. For elements within a structure use any level 02 through 48 (rather than 01 or
77), up to a maximum of two levels.
4. Using a FILLER or optional FILLER item within a host structure declaration can
invalidate the whole structure.
5. You cannot use picture-string for floating point elements but must use it for other
data types.
Table 13 helps you define host variables that receive output from the database. You
can use the table to determine the COBOL data type that is equivalent to a given
SQL data type. For example, if you retrieve TIMESTAMP data, you can use the
table to define a suitable host variable in the program that receives the data value.
| Table 13 shows direct conversions between DB2 data types and host data types.
| However, a number of DB2 data types are compatible. When you do assignments
| or comparisons of data that have compatible data types, DB2 does conversions
| between those compatible data types. See Table 1 on page 5 for information on
| compatible data types.
Table 13. SQL data types mapped to typical COBOL declarations

SMALLINT
    COBOL data type: S9(4) COMP-4, S9(4) COMP-5, or S9(4) BINARY.
INTEGER
    COBOL data type: S9(9) COMP-4, S9(9) COMP-5, or S9(9) BINARY.
DATE
    COBOL data type: fixed-length character string of length n. For example,
    01 VAR-NAME PIC X(n).
    Notes: if you are using a date exit routine, n is determined by that
    routine. Otherwise, n must be at least 10.
TIME
    COBOL data type: fixed-length character string of length n. For example,
    01 VAR-NAME PIC X(n).
    Notes: if you are using a time exit routine, n is determined by that
    routine. Otherwise, n must be at least 6; to include seconds, n must be
    at least 8.
TIMESTAMP
    COBOL data type: fixed-length character string of length n. For example,
    01 VAR-NAME PIC X(n).
    Notes: n must be at least 19. To include microseconds, n must be 26; if
    n is less than 26, truncation occurs on the microseconds part.
Result set locator
    COBOL data type: SQL TYPE IS RESULT-SET-LOCATOR.
    Notes: use this data type only for receiving result sets. Do not use
    this data type as a column type.
Table locator
    COBOL data type: SQL TYPE IS TABLE LIKE table-name AS LOCATOR.
    Notes: use this data type only in a user-defined function or stored
    procedure to receive rows of a transition table. Do not use this data
    type as a column type.
BLOB locator
    COBOL data type: USAGE IS SQL TYPE IS BLOB-LOCATOR.
    Notes: use this data type only to manipulate data in BLOB columns. Do
    not use this data type as a column type.
CLOB locator
    COBOL data type: USAGE IS SQL TYPE IS CLOB-LOCATOR.
    Notes: use this data type only to manipulate data in CLOB columns. Do
    not use this data type as a column type.
SQL data types with no COBOL equivalent: If you are using a COBOL compiler
that does not support decimal numbers of more than 18 digits, use one of the
following data types to hold values greater than 18 digits:
v A decimal variable with a precision less than or equal to 18, if the actual data
values fit. If you retrieve a decimal value into a decimal variable with a scale that
is less than the source column in the database, then the fractional part of the
value could be truncated.
v An integer or a floating-point variable, which converts the value. If you choose
integer, you lose the fractional part of the number. If the decimal number could
exceed the maximum value for an integer or, if you want to preserve a fractional
value, you can use floating point numbers. Floating-point numbers are
approximations of real numbers. Hence, when you assign a decimal number to a
floating point variable, the result could be different from the original number.
v A character string host variable. Use the CHAR function to retrieve a decimal
value into it.
Special Purpose COBOL Data Types: The locator data types are COBOL data
types as well as SQL data types. You cannot use locators as column types. For
information on how to use these data types, see the following sections:
Result set locator
“Chapter 24. Using stored procedures for client/server processing”
on page 527
Table locator “Accessing transition tables in a user-defined function or stored
procedure” on page 279
LOB locators “Chapter 13. Programming for large objects (LOBs)” on page 229
Level 77 data description entries: One or more REDEFINES entries can follow
any level 77 data description entry. However, you cannot use the names in these
entries in SQL statements. Entries with the name FILLER are ignored.
SMALLINT and INTEGER data types: In COBOL, you declare the SMALLINT and
INTEGER data types as a number of decimal digits. DB2 uses the full size of the
integers (in a way that is similar to processing with the COBOL options
TRUNC(OPT) or NOTRUNC) and can place larger values in the host variable than
would be allowed in the specified number of digits in the COBOL declaration.
For small integers that can exceed 9999, use S9(5) COMP. For large integers that
can exceed 999,999,999, use S9(10) COMP-3 to obtain the decimal data type. If
you use COBOL for integers that exceed the COBOL PICTURE, then specify the
column as decimal to ensure that the data types match and perform well.
# Similarly, retrieving a column value with DECIMAL data type into a COBOL decimal
# variable with a lower precision could truncate the value.
When your program uses X to assign a null value to a column, the program should
set the indicator variable to a negative number. DB2 then assigns a null value to the
column and ignores any value in X.
You declare indicator variables in the same way as host variables. You can mix the
declarations of the two types of variables in any way that seems appropriate. You
can define indicator variables as scalar variables or as array elements in a structure
form or as an array variable using a single level OCCURS clause. For more
information about indicator variables, see “Using indicator variables with host
variables” on page 70.
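As a sketch (the table, column, and variable names are hypothetical), the following shows an indicator variable declared alongside its host variable and then set negative to assign a null value:

```cobol
01  SALARY-VAR  PIC S9(7)V9(2)  COMP-3.
01  SALARY-IND  PIC S9(4)  BINARY.
*   ... later, in the PROCEDURE DIVISION:
    MOVE -1 TO SALARY-IND.
    EXEC SQL
        UPDATE EMP
           SET SALARY = :SALARY-VAR:SALARY-IND
         WHERE EMPNO = :EMP-NUMBER
    END-EXEC.
```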
The following figure shows the syntax for a valid indicator variable.
{01 | 77} variable-name {PICTURE | PIC} [IS] {S9(4) | S9999}
    [USAGE [IS]] {BINARY | COMPUTATIONAL-4 | COMP-4
                 | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP}
    [VALUE [IS] constant] .
The following figure shows the syntax for valid indicator array declarations.
level-1 variable-name {PICTURE | PIC} [IS] {S9(4) | S9999}
    [USAGE [IS]] {BINARY | COMPUTATIONAL-4 | COMP-4
                 | COMPUTATIONAL-5 | COMP-5 | COMPUTATIONAL | COMP}
    OCCURS dimension [TIMES] [VALUE [IS] constant] .
DSNTIAR syntax
CALL 'DSNTIAR' USING sqlca message lrecl.
CICS
If your CICS application requires CICS storage handling, you must use the
subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
CALL 'DSNTIAC' USING eib commarea sqlca msg lrecl.
DSNTIAC has extra parameters, which you must use for calls to routines that
use CICS commands.
eib EXEC interface block
commarea communication area
You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you
must also define them in the CSD. For an example of CSD entry generation
statements for use with DSNTIAC, see job DSNTEJ5A.
The assembler source code for DSNTIAC and job DSNTEJ5A, which
assembles and link-edits DSNTIAC, are in the data set prefix.SDSNSAMP.
Where to Place SQL Statements in Your Application: An IBM COBOL for MVS &
VM source data set or member can contain the following elements:
v Multiple programs
v Multiple class definitions, each of which contains multiple methods
You can put SQL statements in only the first program or class in the source data
set or member. However, you can put SQL statements in multiple methods within a
class. If an application consists of multiple data sets or members, each of the data
sets or members can contain SQL statements.
Where to Place the SQLCA, SQLDA, and Host Variable Declarations: You can
put the SQLCA, SQLDA, and SQL host variable declarations in the
WORKING-STORAGE SECTION of a program, class, or method. An SQLCA or
SQLDA in a class WORKING-STORAGE SECTION is global for all the methods of
the class. An SQLCA or SQLDA in a method WORKING-STORAGE SECTION is
local to that method only.
If a class and a method within the class both contain an SQLCA or SQLDA, the
method uses the SQLCA or SQLDA that is local.
DB2 sets the SQLCOD and SQLSTA (or SQLSTATE) values after each SQL
statement executes. An application can check the values of these variables to determine
whether the last SQL statement was successful. All SQL statements in the program
must be within the scope of the declaration of the SQLCOD and SQLSTA (or
SQLSTATE) variables.
See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE
statement and Appendix C of DB2 SQL Reference for a complete description of
SQLCA fields.
Unlike the SQLCA, there can be more than one SQLDA in a program, and an
SQLDA can have any valid name. DB2 does not support the INCLUDE SQLDA
statement for FORTRAN programs. If present, an error message results.
You must place SQLDA declarations before the first SQL statement that references
the data descriptor.
You can code SQL statements in a FORTRAN program wherever you can place
executable statements. If the SQL statement is within an IF statement, the
precompiler generates any necessary THEN and END IF statements.
Each SQL statement in a FORTRAN program must begin with EXEC SQL. The
EXEC and SQL keywords must appear on one line, but the remainder of the
statement can appear on subsequent lines.
You cannot follow an SQL statement with another SQL statement or FORTRAN
statement on the same line.
FORTRAN does not require blanks to delimit words within a statement, but the SQL
language requires blanks. The rules for embedded SQL follow the rules for SQL
syntax, which require you to use one or more blanks as a delimiter.
Comments: You can include FORTRAN comment lines within embedded SQL
statements wherever you can use a blank, except between the keywords EXEC and
SQL. You can include SQL comments in any embedded SQL statement if you
specify the precompiler option STDSQL(YES).
The DB2 precompiler does not support the exclamation point (!) as a comment
recognition character in FORTRAN programs.
Declaring tables and views: Your FORTRAN program should also include the
statement DECLARE TABLE to describe each table and view the program
accesses.
You can use a FORTRAN character variable in the statements PREPARE and
EXECUTE IMMEDIATE, even if it is fixed-length.
You cannot nest SQL INCLUDE statements. You cannot use the FORTRAN
INCLUDE compiler directive to include SQL statements or FORTRAN host variable
declarations.
Margins: Code the SQL statements between columns 7 through 72, inclusive. If
EXEC SQL starts before the specified left margin, the DB2 precompiler does not
recognize the SQL statement.
Names: You can use any valid FORTRAN name for a host variable. Do not use
external entry names that begin with 'DSN' and host variable names that begin with
'SQL'. These names are reserved for DB2.
Do not use the word DEBUG, except when defining a FORTRAN DEBUG packet.
Do not use the words FUNCTION, IMPLICIT, PROGRAM, and SUBROUTINE to
define variables.
Sequence numbers: The source statements that the DB2 precompiler generates
do not include sequence numbers.
Statement labels: You can specify statement numbers for SQL statements in
columns 1 to 5. However, during program preparation, a labelled SQL statement
generates a FORTRAN statement CONTINUE with that label before it generates
the code that executes the SQL statement. Therefore, a labelled SQL statement
should never be the last statement in a DO loop. In addition, you should not label
SQL statements (such as INCLUDE and BEGIN DECLARE SECTION) that occur
before the first executable SQL statement because an error might occur.
WHENEVER statement: The target for the GOTO clause in the SQL statement
WHENEVER must be a label in the FORTRAN source and must refer to a
statement in the same subprogram. The statement WHENEVER only applies to
SQL statements in the same subprogram.
You can precede FORTRAN statements that define the host variables with a BEGIN
DECLARE SECTION statement and follow the statements with an END DECLARE
SECTION statement. You must use the statements BEGIN DECLARE SECTION
and END DECLARE SECTION when you use the precompiler option
STDSQL(YES).
The names of host variables should be unique within the program, even if the host
variables are in different blocks, functions, or subroutines.
When you declare a character host variable, you must not use an expression to
define the length of the character variable. You can use a character host variable
with an undefined length (for example, CHARACTER *(*)). The length of any such
variable is determined when its associated SQL statement executes.
An SQL statement that uses a host variable must be within the scope of the
statement that declares the variable.
You must be careful when calling subroutines that might change the attributes of a
host variable. Such alteration can cause an error while the program is running. See
Appendix C of DB2 SQL Reference for more information.
Numeric host variables: The following figure shows the syntax for valid numeric
host variable declarations.
{INTEGER*2 | INTEGER [*4] | REAL [*4] | REAL*8 | DOUBLE PRECISION}
    variable-name [/ numeric-constant /]
Figure 69. Numeric host variables
Character host variables: The following figure shows the syntax for valid character
host variable declarations other than CLOBs. See Figure 72 for the syntax of
CLOBs.
CHARACTER [*n] variable-name [*n] [/ character-constant /]
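As a sketch (the variable names are hypothetical), the following FORTRAN declarations fit the numeric and character forms above:

```fortran
      INTEGER*2 EMPCNT /0/
      INTEGER*4 TOTPAY
      REAL*8 AVGSAL
      CHARACTER*30 EMPNAM
```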
Result set locators: The following figure shows the syntax for declarations of result
set locators. See “Chapter 24. Using stored procedures for client/server processing”
on page 527 for a discussion of how to use these host variables.
LOB Variables and Locators: The following figure shows the syntax for
declarations of BLOB and CLOB host variables and locators. See “Chapter 13.
Programming for large objects (LOBs)” on page 229 for a discussion of how to use
these host variables.
ROWIDs: The following figure shows the syntax for declarations of ROWID
variables. See “Chapter 13. Programming for large objects (LOBs)” on page 229 for
a discussion of how to use these host variables.
Table 15 on page 170 helps you define host variables that receive output from the
database. You can use the table to determine the FORTRAN data type that is
equivalent to a given SQL data type. For example, if you retrieve TIMESTAMP data,
you can use the table to define a suitable host variable in the program that receives
the data value.
| Table 15 on page 170 shows direct conversions between DB2 data types and host
| data types. However, a number of DB2 data types are compatible. When you do
| assignments or comparisons of data that have compatible data types, DB2 does
| conversions between those compatible data types. See Table 1 on page 5 for
| information on compatible data types.
SQL data types with no FORTRAN equivalent: FORTRAN does not provide an
equivalent for the decimal data type. To hold the value of such a variable, you can
use:
v An integer or a floating-point variable, which converts the value. If you choose
integer, however, you lose the fractional part of the number. If the decimal
number can exceed the maximum value for an integer or you want to preserve a
fractional value, you can use floating point numbers. Floating-point numbers are
approximations of real numbers. When you assign a decimal number to a floating
point variable, the result could be different from the original number.
v A character string host variable. Use the CHAR function to retrieve a decimal
value into it.
Special-purpose FORTRAN data types: The locator data types are FORTRAN
data types as well as SQL data types. You cannot use locators as column types.
For information on how to use these data types, see the following sections:
Result set locator
“Chapter 24. Using stored procedures for client/server processing”
on page 527
LOB locators “Chapter 13. Programming for large objects (LOBs)” on page 229
| Processing Unicode data: Because FORTRAN does not support graphic data
| types, FORTRAN applications can process only Unicode tables that use UTF-8
| encoding.
When your program uses X to assign a null value to a column, the program should
set the indicator variable to a negative number. DB2 then assigns a null value to the
column and ignores any value in X.
You declare indicator variables in the same way as host variables. You can mix the
declarations of the two types of variables in any way that seems appropriate. For
more information about indicator variables, see “Using indicator variables with host
variables” on page 70.
The following figure shows the syntax for a valid indicator variable.
INTEGER*2 variable-name [/ numeric-constant /]
DSNTIR syntax
CALL DSNTIR ( error-length, message, return-code )
where error-length is the total length of the message output area, message is
the name of the message output area, and return-code is the return code.
return-code
Accepts a return code from DSNTIAR.
DB2 sets the SQLCODE and SQLSTATE values after each SQL statement
executes. An application can check the values of these variables to determine whether the
last SQL statement was successful. All SQL statements in the program must be
within the scope of the declaration of the SQLCODE and SQLSTATE variables.
See Chapter 5 of DB2 SQL Reference for more information about the INCLUDE
statement and Appendix C of DB2 SQL Reference for a complete description of
SQLCA fields.
You must declare an SQLDA before the first SQL statement that references that
data descriptor, unless you use the precompiler option TWOPASS. See Chapter 5
of DB2 SQL Reference for more information about the INCLUDE statement and
Appendix C of DB2 SQL Reference for a complete description of SQLDA fields.
You can code SQL statements in a PL/I program wherever you can use executable
statements.
Each SQL statement in a PL/I program must begin with EXEC SQL and end with a
semicolon (;). The EXEC and SQL keywords must appear all on one line, but the
remainder of the statement can appear on subsequent lines.
Continuation for SQL statements: The line continuation rules for SQL statements
are the same as those for other PL/I statements, except that you must specify
EXEC SQL on one line.
Declaring tables and views: Your PL/I program should also include a DECLARE
TABLE statement to describe each table and view the program accesses. You can
use the DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE
statements. For details, see “Chapter 8. Generating declarations for your tables
using DCLGEN” on page 95.
Including code: You can use SQL statements or PL/I host variable declarations
from a member of a partitioned data set by using the following SQL statement in the
source code where you want to include the statements:
EXEC SQL INCLUDE member-name;
Margins: Code SQL statements in columns 2 through 72, unless you have
specified other margins to the DB2 precompiler. If EXEC SQL starts before the
specified left margin, the DB2 precompiler does not recognize the SQL statement.
Names: You can use any valid PL/I name for a host variable. However, do not use
external entry names or access plan names that begin with 'DSN', or host variable
names that begin with 'SQL'. These names are reserved for DB2.
Sequence numbers: The source statements that the DB2 precompiler generates
do not include sequence numbers. IEL0378 messages from the PL/I compiler
identify lines of code without sequence numbers. You can ignore these messages.
Statement labels: You can specify a statement label for executable SQL
statements. However, the statements INCLUDE text-file-name and END DECLARE
SECTION cannot have statement labels.
WHENEVER statement: The target for the GOTO clause in an SQL WHENEVER
statement must be a label in the PL/I source code and must be within the scope
of any SQL statements that WHENEVER affects.
You can precede PL/I statements that define the host variables with the statement
BEGIN DECLARE SECTION, and follow the statements with the statement END
DECLARE SECTION.
A colon (:) must precede all host variables in an SQL statement, with the following
exception. If the SQL statement meets the following conditions, a host variable in
the SQL statement cannot be preceded by a colon:
v The SQL statement is an EXECUTE IMMEDIATE or PREPARE statement.
v The SQL statement is in a program that also contains a DECLARE VARIABLE
statement.
v The host variable is part of a string expression, but the host variable is not the
only component of the string expression.
The names of host variables should be unique within the program, even if the host
variables are in different blocks or procedures. You can qualify the host variable
names with a structure name to make them unique.
An SQL statement that uses a host variable must be within the scope of the
statement that declares the variable.
Host variables must be scalar variables or structures of scalars. You cannot declare
host variables as arrays, although you can use an array of indicator variables when
you associate the array with a host structure.
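For example, you can pair a host structure with an array of indicator variables (a sketch; the structure and column names are illustrative):

```pli
DCL 1 PEMP,                        /* host structure of scalars    */
      2 EMPNO    CHAR(6),
      2 WORKDEPT CHAR(3);
DCL EMPIND(2) BIN FIXED(15);       /* indicator array for PEMP     */
EXEC SQL FETCH C1 INTO :PEMP :EMPIND;
```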
The precompiler uses only the names and data attributes of the variables; it ignores
the alignment, scope, and storage attributes. Even though the precompiler ignores
alignment, scope, and storage, if you ignore the restrictions on their use, you might
have problems compiling the PL/I source code that the precompiler generates.
These restrictions are as follows:
v A declaration with the EXTERNAL scope attribute and the STATIC storage
attribute must also have the INITIAL storage attribute.
v If you use the BASED storage attribute, you must follow it with a PL/I
element-locator-expression.
v Host variables can be STATIC, CONTROLLED, BASED, or AUTOMATIC storage
class, or options. However, CICS requires that programs be reentrant.
Numeric host variables: The following shows the syntax for valid numeric host
variable declarations:
DECLARE (or DCL) variable-name (or a parenthesized, comma-separated list of
variable names), followed by one of the following data types:
   BINARY (or BIN) FIXED ( precision )
   BINARY (or BIN) FLOAT ( precision )
   DECIMAL (or DEC) FIXED ( precision ,scale )
   DECIMAL (or DEC) FLOAT ( precision )
optionally followed by Alignment and/or Scope and/or Storage attributes.
Notes:
1. You can specify host variable attributes in any order acceptable to PL/I. For
example, BIN FIXED(31), BINARY FIXED(31), BIN(31) FIXED, and FIXED
BIN(31) are all acceptable.
2. You can specify a scale for only DECIMAL FIXED.
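For example, the following declarations are all valid numeric host variables (illustrative names):

```pli
DCL EMPL_COUNT BIN FIXED(31);       /* SQL INTEGER        */
DCL ROW_NUM    BIN FIXED(15);       /* SQL SMALLINT       */
DCL SALARY     DEC FIXED(9,2);      /* SQL DECIMAL(9,2)   */
DCL AVG_SAL    DEC FLOAT(16);       /* SQL FLOAT          */
```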
Character host variables: The following shows the syntax for valid character
host variable declarations, other than CLOBs. See Figure 80 on page 180 for the
syntax of CLOBs.
DECLARE (or DCL) variable-name CHARACTER (or CHAR) ( length ), optionally
with VARYING (or VARY), followed by optional Alignment and/or Scope and/or
Storage attributes.
Graphic host variables: The following shows the syntax for valid graphic
host variable declarations, other than DBCLOBs. See Figure 80 on page 180 for the
syntax of DBCLOBs.
DECLARE (or DCL) variable-name (or a parenthesized list) GRAPHIC ( length ),
optionally with VARYING (or VARY), followed by optional Alignment and/or Scope
and/or Storage attributes.
Table Locators: The following shows the syntax for declarations of table
locators. See “Accessing transition tables in a user-defined function or stored
procedure” on page 279 for a discussion of how to use these host variables.
DCL variable-name SQL TYPE IS TABLE LIKE table-name AS LOCATOR;
LOB Variables and Locators: The following shows the syntax for
declarations of BLOB, CLOB, and DBCLOB host variables and locators. See
“Chapter 13. Programming for large objects (LOBs)” on page 229 for a discussion of
how to use these host variables.
DCL variable-name SQL TYPE IS BLOB | CLOB | DBCLOB ( length )   (LOB variable)
DCL variable-name SQL TYPE IS BLOB_LOCATOR | CLOB_LOCATOR | DBCLOB_LOCATOR   (LOB locator)
ROWIDs: The following shows the syntax for declarations of ROWID
variables. See “Chapter 13. Programming for large objects (LOBs)” on page 229 for
a discussion of how to use these host variables.
DCL variable-name SQL TYPE IS ROWID;
For example:
DCL 1 A,
      2 B,
        3 C1 CHAR(...),
        3 C2 CHAR(...);
In this example, B is the name of a host structure consisting of the scalars C1 and
C2.
You can use the structure name as shorthand notation for a list of scalars. You can
qualify a host variable with a structure name (for example, STRUCTURE.FIELD).
Host structures are limited to two levels. You can think of a host structure for DB2
data as a named group of host variables.
You must terminate the host structure variable by ending the declaration with a
semicolon. For example:
DCL 1 A,
2 B CHAR,
2 (C, D) CHAR;
DCL (E, F) CHAR;
You can specify host variable attributes in any order acceptable to PL/I. For
example, BIN FIXED(31), BIN(31) FIXED, and FIXED BIN(31) are all acceptable.
The following shows the syntax for valid host structures:
DCL 1 structure-name,
      2 var-1 data-type,
      ...
      2 var-n data-type;
(A level-2 entry can also name a parenthesized list of variables, for example
2 ( var-1, var-2 ) data-type.) Each data-type is one of the following:
   BINARY (or BIN) FIXED ( precision ) or FLOAT ( precision )
   DECIMAL (or DEC) FIXED ( precision ,scale ) or FLOAT ( precision )
   CHARACTER (or CHAR) ( integer ), optionally VARYING (or VARY)
   GRAPHIC ( integer ), optionally VARYING (or VARY)
   SQL TYPE IS ROWID
   a LOB data type
Table 17 on page 184 helps you define host variables that receive output from the
database. You can use the table to determine the PL/I data type that is equivalent
to a given SQL data type. For example, if you retrieve TIMESTAMP data, you can
use the table to define a suitable host variable in the program that receives the data
value.
Table 17 on page 184 shows direct conversions between DB2 data types and host
data types. However, a number of DB2 data types are compatible. When you do
assignments or comparisons of data that have compatible data types, DB2 does
conversions between those compatible data types. See Table 1 on page 5 for
information on compatible data types.
PL/I Data Types with No SQL Equivalent: PL/I supports some data types with no
SQL equivalent (COMPLEX and BIT variables, for example). In most cases, you
can use PL/I statements to convert between the unsupported PL/I data types and
the data types that SQL supports.
SQL data types with no PL/I equivalent: If the PL/I compiler you are using does
not support a decimal data type with a precision greater than 15, use the following
types of variables for decimal data:
v Decimal variables with precision less than or equal to 15, if the actual data
values fit. If you retrieve a decimal value into a decimal variable with a scale that
is less than the scale of the source column in the database, the fractional part of
the value might be truncated.
v An integer or a floating-point variable, which converts the value. If you choose
integer, you lose the fractional part of the number. If the decimal number can
exceed the maximum value for an integer or you want to preserve a fractional
value, you can use floating point numbers. Floating-point numbers are
approximations of real numbers. When you assign a decimal number to a floating
point variable, the result could be different from the original number.
v A character string host variable. Use the CHAR function to retrieve a decimal
value into it.
Floating point host variables: All floating point data is stored in DB2 in
System/390 floating point format. However, your host variable data can be in
System/390 floating point format or IEEE floating point format. DB2 uses the
FLOAT(S390|IEEE) precompiler option to determine whether your floating point host
variables are in IEEE floating point format or System/390 floating point format. If
you use this option for a PL/I program, you must compile the program using IBM
Enterprise PL/I for z/OS and OS/390 Version 3 Release 1 or later. DB2 does no
checking to determine whether the host variable declarations or format of the host
variable contents match the precompiler option. Therefore, you need to ensure that
your floating point host variable types and contents match the precompiler option.
Special Purpose PL/I Data Types: The locator data types are PL/I data types as
well as SQL data types. You cannot use locators as column types. For information
on how to use these data types, see the following sections:
Result set locator
“Chapter 24. Using stored procedures for client/server processing”
on page 527
PL/I scoping rules: The precompiler does not support PL/I scoping rules.
Similarly, retrieving a column value with a DECIMAL data type into a PL/I decimal
variable with a lower precision could truncate the value.
When your program uses the host variable X to assign a null value to a column, the
program should set the associated indicator variable to a negative number. DB2 then
assigns a null value to the column and ignores any value in X.
You declare indicator variables in the same way as host variables. You can mix the
declarations of the two types of variables in any way that seems appropriate. For
more information about indicator variables, see “Using indicator variables with host
variables” on page 70.
The following shows the syntax for a valid indicator variable:
DCL variable-name BIN FIXED(15);
The following shows the syntax for a valid indicator array:
DCL variable-name ( dimension ) BIN FIXED(15);
Alignment and/or Scope and/or Storage attributes can follow, in any order
acceptable to PL/I.
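A sketch of how an indicator variable is used on a FETCH (the cursor, host variable, and default value are illustrative):

```pli
DCL CLS_CD  CHAR(7);
DCL DAY     BIN FIXED(15);
DCL DAY_IND BIN FIXED(15);      /* indicator variable for DAY        */
EXEC SQL FETCH CLS_CURS INTO :CLS_CD, :DAY :DAY_IND;
IF DAY_IND < 0 THEN
   DAY = 0;                     /* DAY is null; supply a default     */
```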
DSNTIAR syntax
CALL DSNTIAR ( sqlca, message, lrecl );
CICS
If your CICS application requires CICS storage handling, you must use the
subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
CALL DSNTIAC ( eib, commarea, sqlca, message, lrecl );
DSNTIAC has extra parameters, which you must use for calls to routines that
use CICS commands.
eib       EXEC interface block
commarea  communication area
You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you
must also define them in the CSD. For an example of CSD entry generation
statements for use with DSNTIAC, see job DSNTEJ5A.
The assembler source code for DSNTIAC and job DSNTEJ5A, which
assembles and link-edits DSNTIAC, are in the data set prefix.SDSNSAMP.
DB2 sets the SQLCODE and SQLSTATE values after each SQL statement
executes. An application can check these variable values to determine whether the
last SQL statement was successful.
See Appendix C of DB2 SQL Reference for information on the fields in the REXX
SQLCA.
A REXX procedure can contain more than one SQLDA. Each SQLDA consists of a
set of REXX variables with a common stem. The stem must be a REXX variable
name that contains no periods and is the same as the value of descriptor-name that
you specify when you use the SQLDA in an SQL statement. DB2 does not support
the INCLUDE SQLDA statement in REXX.
See Appendix C of DB2 SQL Reference for information on the fields in a REXX
SQLDA.
CONNECT
Connects the REXX procedure to a DB2 subsystem. The syntax of CONNECT is:
ADDRESS DSNREXX 'CONNECT' 'subsystem-ID' | REXX-variable
Note: CALL SQLDBS 'ATTACH TO' ssid is equivalent to ADDRESS DSNREXX 'CONNECT' ssid.
EXECSQL
Executes SQL statements in REXX procedures. The syntax of EXECSQL is:
ADDRESS DSNREXX "EXECSQL" "SQL-statement" | REXX-variable
Note: CALL SQLEXEC is equivalent to EXECSQL.
See “Embedding SQL statements in a REXX procedure” on page 192 for more
information.
DISCONNECT
Disconnects the REXX procedure from a DB2 subsystem. You should execute
DISCONNECT to release resources that are held by DB2. The syntax of
DISCONNECT is:
ADDRESS DSNREXX 'DISCONNECT'
Note: CALL SQLDBS 'DETACH' is equivalent to DISCONNECT.
These application programming interfaces are available through the DSNREXX host
command environment. To make DSNREXX available to the application, invoke the
RXSUBCOM function. The syntax is:
S_RC = RXSUBCOM ( function, 'DSNREXX', 'DSNREXX' )
where function is ADD or DELETE.
The ADD function adds DSNREXX to the REXX host command environment table.
The DELETE function deletes DSNREXX from the REXX host command
environment table.
S_RC = RXSUBCOM('DELETE','DSNREXX','DSNREXX')
/* WHEN DONE WITH */
/* DSNREXX, REMOVE IT. */
Each SQL statement in a REXX procedure must begin with EXECSQL, in
uppercase, lowercase, or mixed case. One of the following items must follow
EXECSQL:
v An SQL statement enclosed in single or double quotation marks.
v A REXX variable that contains an SQL statement. The REXX variable must not
be preceded by a colon.
For example, you can use either of the following methods to execute the COMMIT
statement in a REXX procedure:
EXECSQL "COMMIT"
rexxvar="COMMIT"
EXECSQL rexxvar
An SQL statement follows rules that apply to REXX commands. The SQL statement
can optionally end with a semicolon and can be enclosed in single or double
quotation marks, as in the following example:
'EXECSQL COMMIT';
Continuation for SQL statements: SQL statements that span lines follow REXX
rules for statement continuation. You can break the statement into several strings,
each of which fits on a line, and separate the strings with commas or with
concatenation operators followed by commas. For example, either of the following
statements is valid:
EXECSQL ,
"UPDATE DSN8710.DEPT" ,
"SET MGRNO = '000010'" ,
"WHERE DEPTNO = 'D11'"
"EXECSQL " || ,
" UPDATE DSN8710.DEPT " || ,
" SET MGRNO = '000010'" || ,
" WHERE DEPTNO = 'D11'"
Including code: The EXECSQL INCLUDE statement is not valid for REXX. You
therefore cannot include externally defined SQL statements in a procedure.
Margins: Like REXX commands, SQL statements can begin and end anywhere on
a line.
Names: You can use any valid REXX name that does not end with a period as a
host variable. However, host variable names should not begin with 'SQL', 'RDI',
'DSN', 'RXSQL', or 'QRW'. Variable names can be at most 64 bytes.
Nulls: A REXX null value and an SQL null value are different. The REXX language
has a null string (a string of length 0) and a null clause (a clause that contains only
blanks and comments). The SQL null value is a special value that is distinct from all
nonnull values and denotes the absence of a value. Assigning a REXX null value to
a DB2 column does not make the column value null.
Statement labels: You can precede an SQL statement with a label, in the same
way that you label REXX commands.
Handling errors and warnings: DB2 does not support the SQL WHENEVER
statement in a REXX procedure. To handle SQL errors and warnings, use the
following methods:
v To test for SQL errors or warnings, test the SQLCODE or SQLSTATE value and
the SQLWARN. values after each EXECSQL call. This method does not detect
errors in the REXX interface to DB2.
v To test for SQL errors or warnings or errors or warnings from the REXX interface
to DB2, test the REXX RC variable after each EXECSQL call. Table 18 lists the
values of the RC variable.
You can also use the REXX SIGNAL ON ERROR and SIGNAL ON FAILURE
keyword instructions to detect negative values of the RC variable and transfer
control to an error routine.
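For example, a procedure might route all nonzero RC values to a single error routine (a sketch that assumes DSNREXX is already set up and connected):

```rexx
SIGNAL ON ERROR                 /* trap commands that set RC       */
SIGNAL ON FAILURE
ADDRESS DSNREXX
"EXECSQL COMMIT"
EXIT
ERROR:
FAILURE:
Say 'EXECSQL failed, RC='RC 'SQLCODE='SQLCODE
EXIT 8
```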
Table 18. REXX return codes after SQL statements
Return code Meaning
0 No SQL warning or error occurred.
+1 An SQL warning occurred.
-1 An SQL error occurred.
Use only the predefined names for cursors and statements. When you associate a
cursor name with a statement name in a DECLARE CURSOR statement, the cursor
name and the statement name must have the same number. For example, if you
declare cursor c1, you must declare it for statement s1:
EXECSQL 'DECLARE C1 CURSOR FOR S1'
A REXX host variable can be a simple or compound variable. DB2 REXX Language
Support evaluates compound variables before DB2 processes SQL statements that
contain the variables. In the following example, the host variable that is passed to
DB2 is :x.1.2:
a=1
b=2
EXECSQL 'OPEN C1 USING :x.a.b'
When you assign input data to a DB2 table column, you can either let DB2
determine the type that your input data represents, or you can use an SQLDA to tell
DB2 the intended type of the input data.
If you do not assign a value to a host variable before you assign the host variable
to a column, DB2 returns an error code.
Table 19. SQL input data types and REXX data formats
SQL data type      SQLTYPE for data type      REXX input data format
INTEGER 496/497 A string of numerics that does not contain a decimal point or
exponent identifier. The first character can be a plus (+) or minus (−)
sign. The number that is represented must be between -2147483647
and 2147483647, inclusive.
DECIMAL(p,s) 484/485 One of the following formats:
v A string of numerics that contains a decimal point but no exponent
identifier. p represents the precision and s represents the scale of
the decimal number that the string represents. The first character
can be a plus (+) or minus (−) sign.
v A string of numerics that does not contain a decimal point or an
exponent identifier. The first character can be a plus (+) or minus
(−) sign. The number that is represented is less than -2147483647
or greater than 2147483647.
FLOAT 480/481 A string that represents a number in scientific notation. The string
consists of a series of numerics followed by an exponent identifier
(an E or e followed by an optional plus (+) or minus (−) sign and a
series of numerics). The string can begin with a plus (+) or minus (−)
sign.
VARCHAR(n) 448/449 One of the following formats:
v A string of length n, enclosed in single or double quotation marks.
v The character X or x, followed by a string enclosed in single or
double quotation marks. The string within the quotation marks has
a length of 2*n bytes and is the hexadecimal representation of a
string of n characters.
v A string of length n that does not have a numeric or graphic
format, and does not satisfy either of the previous conditions.
VARGRAPHIC(n) 464/465 One of the following formats:
v The character G, g, N, or n, followed by a string enclosed in single
or double quotation marks. The string within the quotation marks
begins with a shift-out character (X'0E') and ends with a shift-in
character (X'0F'). Between the shift-out character and shift-in
character are n double-byte characters.
v The characters GX, Gx, gX, or gx, followed by a string enclosed in
single or double quotation marks. The string within the quotation
marks has a length of 4*n bytes and is the hexadecimal
representation of a string of n double-byte characters.
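For example, the following REXX assignments produce input data in the formats above (illustrative values):

```rexx
hvint  = '12345'       /* INTEGER: numerics, no decimal point        */
hvdec  = '123.45'      /* DECIMAL(5,2): numerics with decimal point  */
hvflt  = '1.2E+5'      /* FLOAT: scientific notation                 */
hvchar = "'Smith'"     /* VARCHAR(5): string in quotation marks      */
```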
For example, when DB2 executes the following statements to update the MIDINIT
column of the EMP table, DB2 must determine a data type for HVMIDINIT:
SQLSTMT="UPDATE EMP" ,
        "SET MIDINIT = ?" ,
        "WHERE EMPNO = '000200'"
HVMIDINIT='H'
"EXECSQL PREPARE S100 FROM :SQLSTMT"
"EXECSQL EXECUTE S100 USING" ,
        ":HVMIDINIT"
Because the data that is assigned to HVMIDINIT has a format that fits a character
data type, DB2 REXX Language Support assigns a VARCHAR type to the input
data.
Enclosing the string in apostrophes is not adequate because REXX removes the
apostrophes when it assigns a literal to a variable. For example, suppose that you
want to pass the value in host variable stringvar to DB2. The value that you want to
pass is the string '100'. The first thing that you need to do is to assign the string to
the host variable. You might write a REXX command like this:
stringvar = '100'
After the command executes, stringvar contains the characters 100 (without the
apostrophes). DB2 REXX Language Support then passes the numeric value 100 to
DB2, which is not what you intended.
To pass the string '100', including the quotation marks, enclose the single-quoted
string in double quotation marks when you assign it:
stringvar = "'100'"
In this case, REXX assigns the string '100' to stringvar, including the single
quotation marks. DB2 REXX Language Support then passes the string '100' to DB2,
which is the desired result.
To indicate the data type of input data to DB2, use an SQLDA. For example,
suppose you want to tell DB2 that the data with which you update the MIDINIT
column of the EMP table is of type CHAR, rather than VARCHAR. You need to set
up an SQLDA that contains a description of a CHAR column, and then prepare and
execute the UPDATE statement using that SQLDA:
INSQLDA.SQLD = 1 /* SQLDA contains one variable */
INSQLDA.1.SQLTYPE = 453 /* Type of the variable is CHAR, */
/* and the value can be null */
INSQLDA.1.SQLLEN = 1 /* Length of the variable is 1 */
INSQLDA.1.SQLDATA = 'H' /* Value in variable is H */
INSQLDA.1.SQLIND = 0 /* Input variable is not null */
SQLSTMT="UPDATE EMP" ,
"SET MIDINIT = ?" ,
"WHERE EMPNO = '000200'"
"EXECSQL PREPARE S100 FROM :SQLSTMT"
"EXECSQL EXECUTE S100 USING" ,
"DESCRIPTOR :INSQLDA"
Because you cannot use the SELECT INTO statement in a REXX procedure, to
retrieve data from a DB2 table you must prepare a SELECT statement, open a
cursor for the prepared statement, and then fetch rows into host variables or an
SQLDA using the cursor. The following example demonstrates how you can retrieve
data from a DB2 table using an SQLDA:
SQLSTMT= ,
'SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME,' ,
' WORKDEPT, PHONENO, HIREDATE, JOB,' ,
' EDLEVEL, SEX, BIRTHDATE, SALARY,' ,
' BONUS, COMM' ,
' FROM EMP'
"EXECSQL DECLARE C1 CURSOR FOR S1"
"EXECSQL PREPARE S1 INTO :OUTSQLDA FROM :SQLSTMT"
"EXECSQL OPEN C1"
Do Until(SQLCODE ¬= 0)
   "EXECSQL FETCH C1 USING DESCRIPTOR :OUTSQLDA"
If SQLCODE = 0 Then Do
Line = ''
Do I = 1 To OUTSQLDA.SQLD
Line = Line OUTSQLDA.I.SQLDATA
End I
Say Line
End
End
The way that you use indicator variables for input host variables in REXX
procedures is slightly different from the way that you use indicator variables in other
languages. When you want to pass a null value to a DB2 column, in addition to
putting a negative value in an indicator variable, you also need to put a valid value
in the corresponding host variable. For example, to set a value of WORKDEPT in
table EMP to null, use statements like these:
SQLSTMT="UPDATE EMP" ,
"SET WORKDEPT = ?"
HVWORKDEPT='000'
INDWORKDEPT=-1
"EXECSQL PREPARE S100 FROM :SQLSTMT"
"EXECSQL EXECUTE S100 USING :HVWORKDEPT :INDWORKDEPT"
After you retrieve data from a column that can contain null values, you should
always check the indicator variable that corresponds to the output host variable for
that column. If the indicator variable value is negative, the retrieved value is null, so
you can disregard the value in the host variable.
In the following program, the phone number for employee Haas is selected into
variable HVPhone. After the SELECT statement executes, if no phone number for
employee Haas is found, indicator variable INDPhone contains -1.
'SUBCOM DSNREXX'
IF RC THEN ,
S_RC = RXSUBCOM('ADD','DSNREXX','DSNREXX')
ADDRESS DSNREXX
'CONNECT' 'DSN'
SQLSTMT = ,
"SELECT PHONENO FROM DSN8710.EMP WHERE LASTNAME='HAAS'"
"EXECSQL DECLARE C1 CURSOR FOR S1"
"EXECSQL PREPARE S1 FROM :SQLSTMT"
Say "SQLCODE from PREPARE is "SQLCODE
"EXECSQL OPEN C1"
Say "SQLCODE from OPEN is "SQLCODE
"EXECSQL FETCH C1 INTO :HVPhone :INDPhone"
Say "SQLCODE from FETCH is "SQLCODE
If INDPhone < 0 Then ,
Say 'Phone number for Haas is null.'
"EXECSQL CLOSE C1"
Say "SQLCODE from CLOSE is "SQLCODE
S_RC = RXSUBCOM('DELETE','DSNREXX','DSNREXX')
Constraints are rules that limit the values that you can insert, delete, or update in a
table. There are two types of constraints:
v Table check constraints determine the values that a column can contain. Table
check constraints are discussed in “Using table check constraints”.
v Referential constraints preserve relationships between tables. Referential
constraints are discussed in “Using referential constraints” on page 203.
Triggers are a series of actions that are invoked when a table is updated. Triggers
are discussed in “Chapter 11. Using triggers for active data” on page 209.
For example, you might want to make sure that no salary can be below 15000
dollars.
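A table check constraint of this general form enforces that rule (a sketch; the table and column names are illustrative):

```sql
CREATE TABLE EMPSAL
  (ID     INTEGER NOT NULL,
   SALARY INTEGER CHECK (SALARY >= 15000));
```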
Using table check constraints makes your programming task easier, because you
do not need to enforce those constraints within application programs or with a
validation routine. Define table check constraints on one or more columns in a table
when that table is created or altered.
Constraint considerations
The syntax of a table check constraint is checked when the constraint is defined,
but the meaning of the constraint is not checked. The following examples show
mistakes that are not caught. Column C1 is defined as INTEGER NOT NULL.
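For instance, for a column C1 defined as INTEGER NOT NULL, each of the following constraints is syntactically valid but can never be satisfied (illustrative sketches):

```sql
CHECK (C1 IS NULL)             -- conflicts with the NOT NULL attribute
CHECK (C1 >= 10 AND C1 <= 5)   -- no value satisfies both conditions
```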
A table check constraint is not checked for consistency with other types of
constraints. For example, a column in a dependent table can have a referential
constraint with a delete rule of SET NULL. You can also define a check constraint
that prohibits nulls in the column. As a result, an attempt to delete a parent row
fails, because setting the dependent row to null violates the check constraint.
Similarly, a table check constraint is not checked for consistency with a validation
routine, which is applied to a table before a check constraint. If the routine requires
a column to be greater than or equal to 10 and a check constraint requires the
same column to be less than 10, table inserts are not possible. Plans and packages
do not need to be rebound after table check constraints are defined on or removed
from a table.
Any constraint defined on columns of a base table applies to the views defined on
that base table.
When you use ALTER TABLE to add a table check constraint to already populated
tables, the enforcement of the check constraint is determined by the value of the
CURRENT RULES special register as follows:
v If the value is STD, the check constraint is enforced immediately when it is
defined. If a row does not conform, the table check constraint is not added to the
table and an error occurs.
v If the value is DB2, the check constraint is added to the table description but its
enforcement is deferred. Because there might be rows in the table that violate
the check constraint, the table is placed in check pending status.
Table check violations place a table space or partition in check pending status when
any of these conditions exist:
Figure 89. Relationships among tables in the sample application. Arrows point from parent
tables to dependent tables. (The figure connects the DEPT, EMP, PROJ, ACT, PROJACT,
and EMPPROJACT tables with delete rules of CASCADE, SET NULL, and RESTRICT.)
When a table refers to an entity for which there is a master list, it should identify an
occurrence of the entity that actually appears in the master list; otherwise, either the
reference is invalid or the master list is incomplete. Referential constraints enforce
the relationship between a table and a master list.
Figure 90 shows part of the project table with the primary key column indicated.
Figure 90. Part of the project table, with the primary key column (PROJNO) indicated.
Figure 91 shows a primary key containing more than one column; the primary key is
a composite key.
Figure 91. A composite primary key. The PROJNO, ACTNO, and ACSTDATE columns are all
parts of the key.
The primary key of a table, if one exists, uniquely identifies each occurrence of an
entity about which the table contains information. The PRIMARY KEY clause of the
CREATE TABLE or ALTER TABLE statements identifies the column or columns of
the primary key. Each identified column must be defined as NOT NULL.
Another way to allow only unique values in a column is to create a table using the
UNIQUE clause of the CREATE TABLE or ALTER TABLE statement. Like the
PRIMARY KEY clause, specifying a UNIQUE clause prevents use of the table until
you create an index to enforce the uniqueness of the key. And if you use the
UNIQUE clause in an ALTER TABLE statement, a unique index must already exist.
For more information about the UNIQUE clause, see Chapter 5 of DB2 SQL
Reference.
A table can have no more than one primary key. A primary key obeys the same
restrictions as do index keys:
v The key can include no more than 64 columns.
v No column can be named twice.
v The sum of the column length attributes cannot be greater than 255.
You define a list of columns as the primary key of a table with the PRIMARY KEY
clause in the CREATE TABLE statement.
To add a primary key to an existing table, use the PRIMARY KEY clause in an
ALTER TABLE statement. In this case, a unique index must already exist.
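For example, the primary key of the project activity table could be defined when the table is created, or added afterward (a sketch; the column definitions are abbreviated):

```sql
CREATE TABLE DSN8710.PROJACT
  (PROJNO   CHAR(6)  NOT NULL,
   ACTNO    SMALLINT NOT NULL,
   ACSTDATE DATE     NOT NULL,
   PRIMARY KEY (PROJNO, ACTNO, ACSTDATE));

-- Or, if PROJACT exists and a unique index exists on the key columns:
ALTER TABLE DSN8710.PROJACT
  ADD PRIMARY KEY (PROJNO, ACTNO, ACSTDATE);
```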
Incomplete definition
If a table is created with a primary key, its primary index is the first unique index
created on its primary key columns, with the same order of columns as the primary
key columns. The columns of the primary index can be in either ascending or
descending order. The table has an incomplete definition until you create an index
on the parent key. This incomplete definition status is recorded as a P in the
TABLESTATUS column of SYSIBM.SYSTABLES. Use of a table with an incomplete
definition is severely restricted: you can drop the table, create the primary index,
and drop or create other indexes; you cannot load the table, insert data, retrieve
data, update data, delete data, or create foreign keys that reference the primary
key.
Because of these restrictions, plan to create the primary index soon after creating
the table. For example, to create the primary index for the project activity table,
issue:
CREATE UNIQUE INDEX XPROJAC1
ON DSN8710.PROJACT (PROJNO, ACTNO, ACSTDATE);
Creating the primary index resets the incomplete definition status and its associated
restrictions. But if you drop the primary index, it reverts to incomplete definition
status; to reset the status, you must create the primary index or alter the table to
drop the primary key.
If the primary key is added later with ALTER TABLE, a unique index on the key
columns must already exist. If more than one unique index is on those columns,
DB2 chooses one arbitrarily to be the primary index.
You define a list of columns as a foreign key of a table with the FOREIGN KEY
clause in the CREATE TABLE statement.
A foreign key can refer to either a unique or a primary key of the parent table. If the
foreign key refers to a non-primary unique key, you must specify the column names
of the key explicitly. If the column names of the key are not specified explicitly, the
default is to refer to the column names of the primary key of the parent table.
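For example, a foreign key on PROJNO in the project activity table, referencing the project table, might be defined like this (a sketch; the constraint name RPAP is illustrative):

```sql
ALTER TABLE DSN8710.PROJACT
  ADD FOREIGN KEY RPAP (PROJNO)
      REFERENCES DSN8710.PROJ
      ON DELETE RESTRICT;
```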
The column names you specify identify the columns of the parent key. The privilege
set must include the ALTER or the REFERENCES privilege on the columns of the
parent key. A unique index must exist on the parent key columns of the parent
table.
The foreign key name is used in error messages, queries to the catalog, and DROP
FOREIGN KEY statements. Hence, you might want to choose a name yourself if you
are experimenting with your database design and have more than one foreign key
beginning with the same column (otherwise, DB2 generates the name).
You can create an index on the columns of a foreign key in the same way you
create one on any other set of columns. Most often it is not a unique index. If you
do create a unique index on a foreign key, it introduces an additional constraint on
the values of the columns.
To let an index on the foreign key be used on the dependent table for a delete
operation on a parent table, the leading columns of the index on the foreign key
must be identical to and in the same order as the columns in the foreign key.
A foreign key can also be the primary key; then the primary index is also a unique
index on the foreign key. In that case, every row of the parent table has at most
one dependent row. The dependent table might be used to hold information that
pertains to only a few of the occurrences of the entity described by the parent table.
For example, a dependent of the employee table might contain information that
applies only to employees working in a different country.
The primary key can share columns of the foreign key if the first n columns of the
foreign key are the same as the primary key’s columns. Again, the primary index
serves as an index on the foreign key. In the sample project activity table, the
primary index (on PROJNO, ACTNO, ACSTDATE) serves as an index on the
foreign key on PROJNO. It does not serve as an index on the foreign key on
ACTNO, because ACTNO is not the first column of the index.
When a foreign key is added to a populated table, the table space is put into check
pending status.
DB2 does not allow you to create a cycle in which a delete operation on a table
involves that same table. Enforcing that principle creates rules about adding a
foreign key to a table:
v In a cycle of two tables, neither delete rule can be CASCADE.
v In a cycle of more than two tables, two or more delete rules must not be
CASCADE. For example, in a cycle with three tables, two of the delete rules
must be other than CASCADE. This concept is illustrated in Figure 93 on
page 208.
A self-referencing table forms a cycle of one: a delete operation on the table
involves that same table, and the delete rule must be CASCADE or NO ACTION.
Figure 93. Valid and invalid delete cycles. The left cycle is valid because two or more delete
rules are not CASCADE. The cycle on the right is invalid because of the two cascading
deletes.
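The self-referencing case can be sketched with SQLite, which supports referential delete rules through Python's sqlite3 module. This is a minimal illustration, not DB2: the syntax differs, the EMP table and its columns are hypothetical, and SQLite enforces foreign keys only when the pragma is enabled.

```python
import sqlite3

# Hypothetical self-referencing table: each row may point at a "manager" row
# in the same table. The self-reference is a cycle of one, and its delete
# rule is CASCADE, one of the two rules allowed for self-referencing tables.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite checks FKs only when enabled
con.execute("""
    CREATE TABLE EMP (
        EMPNO INTEGER PRIMARY KEY,
        MGRNO INTEGER REFERENCES EMP(EMPNO) ON DELETE CASCADE
    )""")
con.executemany("INSERT INTO EMP VALUES (?, ?)",
                [(1, None), (2, 1), (3, 2)])

# Deleting the top row cascades down the chain: row 2 depends on 1, row 3 on 2.
con.execute("DELETE FROM EMP WHERE EMPNO = 1")
remaining = con.execute("SELECT COUNT(*) FROM EMP").fetchone()[0]
print(remaining)  # 0
```

The same chain with NO ACTION instead of CASCADE would reject the delete outright, which is why those are the only two rules that make a self-referencing cycle consistent.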
Triggers also move application logic into DB2, which can result in faster application
development and easier maintenance. For example, you can write applications to
control salary changes in the employee table, but each application program that
changes the salary column must include logic to check those changes. A better
method is to define a trigger that controls changes to the salary column. Then DB2
does the checking for any application that modifies salaries.
You create triggers using the CREATE TRIGGER statement. Figure 94 on page 210
shows an example of a CREATE TRIGGER statement.
When you execute this CREATE TRIGGER statement, DB2 creates a trigger
package called REORDER and associates the trigger package with table PARTS.
DB2 records the timestamp when it creates the trigger. If you define other triggers
on the PARTS table, DB2 uses this timestamp to determine which trigger to activate
first. The trigger is now ready to use.
When you no longer want to use trigger REORDER, you can delete the trigger by
executing the statement:
DROP TRIGGER REORDER;
Executing this statement drops trigger REORDER and its associated trigger
package named REORDER.
If you drop table PARTS, DB2 also drops trigger REORDER and its trigger
package.
Trigger name: Use a short, ordinary identifier to name your trigger. You can use a
qualifier or let DB2 determine the qualifier. When DB2 creates a trigger package for
the trigger, it uses the qualifier for the collection ID of the trigger package. DB2
uses these rules to determine the qualifier:
v If you use static SQL to execute the CREATE TRIGGER statement, DB2 uses
the authorization ID in the bind option QUALIFIER for the plan or package that
contains the CREATE TRIGGER statement. If the bind command does not
include the QUALIFIER option, DB2 uses the owner of the package or plan.
v If you use dynamic SQL to execute the CREATE TRIGGER statement, DB2 uses
the authorization ID in special register CURRENT SQLID.
Subject table: When you perform an insert, update, or delete operation on this
table, the trigger is activated. You must name a local table in the CREATE
TRIGGER statement. You cannot define a trigger on a catalog table or on a view.
Trigger activation time: The two choices for trigger activation time are NO
CASCADE BEFORE and AFTER. NO CASCADE BEFORE means that the trigger
is activated before DB2 makes any changes to the subject table, and that the
triggered action does not activate any other triggers. AFTER means that the trigger
is activated after DB2 makes changes to the subject table and can activate other
triggers. Triggers with an activation time of NO CASCADE BEFORE are known as
before triggers. Triggers with an activation time of AFTER are known as after
triggers.
A triggering event can also be an update or delete operation that occurs as the
result of a referential constraint with ON DELETE SET NULL or ON DELETE
CASCADE.
Triggers are not activated as the result of updates made to tables by DB2 utilities.
When the triggering event for a trigger is an update operation, the trigger is called
an update trigger. Similarly, triggers for insert operations are called insert triggers,
and triggers for delete operations are called delete triggers.
The SQL statement that performs the triggering SQL operation is called the
triggering SQL statement.
Each triggering event is associated with one subject table and one SQL operation. If
the triggering SQL operation is an update operation, the event can be associated
with specific columns of the subject table. In this case, the trigger is activated only if
the update operation updates any of the specified columns.
For example, the following trigger, PAYROLL1, which invokes user-defined function
named PAYROLL_LOG, is activated only if an update operation is performed on
columns SALARY or BONUS of table PAYROLL:
CREATE TRIGGER PAYROLL1
AFTER UPDATE OF SALARY, BONUS ON PAYROLL
FOR EACH STATEMENT MODE DB2SQL
BEGIN ATOMIC
VALUES(PAYROLL_LOG(USER, 'UPDATE', CURRENT TIME, CURRENT DATE));
END
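A column-specific update trigger can be sketched in SQLite. This is an analogy, not DB2 syntax: SQLite has no MODE DB2SQL and no statement triggers, so the sketch uses a row trigger, and a hypothetical AUDIT_LOG table stands in for the PAYROLL_LOG user-defined function.

```python
import sqlite3

# The trigger names SALARY and BONUS in its UPDATE OF list, so an update that
# touches only COMM does not activate it.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE PAYROLL (EMPNO INTEGER, SALARY REAL, BONUS REAL, COMM REAL);
    CREATE TABLE AUDIT_LOG (NOTE TEXT);
    CREATE TRIGGER PAYROLL1
    AFTER UPDATE OF SALARY, BONUS ON PAYROLL
    FOR EACH ROW
    BEGIN
        INSERT INTO AUDIT_LOG VALUES ('UPDATE');
    END;
    INSERT INTO PAYROLL VALUES (10, 50000, 500, 0);
""")
con.execute("UPDATE PAYROLL SET COMM = 100 WHERE EMPNO = 10")      # not logged
con.execute("UPDATE PAYROLL SET SALARY = 51000 WHERE EMPNO = 10")  # logged
logged = con.execute("SELECT COUNT(*) FROM AUDIT_LOG").fetchone()[0]
print(logged)  # 1
```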
Granularity: The triggering SQL statement might modify multiple rows in the table.
The granularity of the trigger determines whether the trigger is activated only once
for the triggering SQL statement or once for every row that the SQL statement
modifies. The granularity values are:
v FOR EACH ROW
The trigger is activated once for each row that DB2 modifies in the subject table.
If the triggering SQL statement modifies no rows, the trigger is not activated.
However, if the triggering SQL statement updates a value in a row to the same
value, the trigger is activated. For example, if an UPDATE trigger is defined on
table COMPANY_STATS, the following SQL statement will activate the trigger:
UPDATE COMPANY_STATS SET NBEMP = NBEMP;
v FOR EACH STATEMENT
The trigger is activated once when the triggering SQL statement executes. The
trigger is activated even if the triggering SQL statement modifies no rows.
Triggers with a granularity of FOR EACH ROW are known as row triggers. Triggers
with a granularity of FOR EACH STATEMENT are known as statement triggers.
Statement triggers can only be after triggers.
Trigger NEW_HIRE is activated once for every row inserted into the employee
table.
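Row-trigger granularity can be demonstrated with SQLite, hedged as follows: SQLite supports only FOR EACH ROW, so the statement-trigger half of the comparison cannot be shown, and the FIRED counter table is hypothetical.

```python
import sqlite3

# A row trigger fires once per modified row, and not at all when the
# triggering statement modifies no rows.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE EMP (EMPNO INTEGER, WORKDEPT TEXT);
    CREATE TABLE FIRED (N INTEGER);
    CREATE TRIGGER COUNT_UPDATES
    AFTER UPDATE ON EMP
    FOR EACH ROW
    BEGIN
        INSERT INTO FIRED VALUES (1);
    END;
    INSERT INTO EMP VALUES (1, 'A00');
    INSERT INTO EMP VALUES (2, 'A00');
    INSERT INTO EMP VALUES (3, 'B01');
""")
con.execute("UPDATE EMP SET WORKDEPT = 'C01' WHERE WORKDEPT = 'A00'")  # 2 rows
con.execute("UPDATE EMP SET WORKDEPT = 'C01' WHERE WORKDEPT = 'ZZZ'")  # 0 rows
fired = con.execute("SELECT COUNT(*) FROM FIRED").fetchone()[0]
print(fired)  # 2
```

A statement trigger in DB2 would instead fire exactly once for each of the two UPDATE statements, including the one that modified no rows.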
Transition variables: When you code a row trigger, you might need to refer to the
values of columns in each updated row of the subject table. To do this, specify
Suppose that you have created tables T and S, with the following definitions:
CREATE TABLE T
(ID SMALLINT GENERATED BY DEFAULT AS IDENTITY (START WITH 100),
C2 SMALLINT,
C3 SMALLINT,
C4 SMALLINT);
CREATE TABLE S
(ID SMALLINT GENERATED ALWAYS AS IDENTITY,
C1 SMALLINT);
This statement inserts a row into S with a value of 5 for column C1 and a value of 1
for identity column ID. Next, suppose that you execute the following SQL statement,
which activates trigger TR1:
INSERT INTO T (C2)
VALUES (IDENTITY_VAL_LOCAL());
This insert statement, and the subsequent activation of trigger TR1, have the
following results:
v The INSERT statement obtains the most recent value that was assigned to an
identity column (1), and inserts that value into column C2 of table T. 1 is the
value that DB2 inserted into identity column ID of table S.
v When the INSERT statement executes, DB2 inserts the value 100 into identity
column ID of table T.
v The first statement in the body of trigger TR1 inserts the value of transition
variable N.ID (100) into column C3. N.ID is the value that identity column ID
contains after the INSERT statement executes.
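The first step of that interaction has a rough analog in SQLite's last_insert_rowid(), which, like IDENTITY_VAL_LOCAL(), returns the most recently assigned identity value. This is only a sketch: the semantics differ from DB2, and without a START WITH clause T.ID begins at 1 rather than 100.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE S (ID INTEGER PRIMARY KEY, C1 INTEGER)")
con.execute("CREATE TABLE T (ID INTEGER PRIMARY KEY, C2 INTEGER)")

con.execute("INSERT INTO S (C1) VALUES (5)")   # S.ID is assigned 1
# The VALUES expression is evaluated before the new T row receives its own
# identity, so C2 gets the value just assigned in S, not T's new ID.
con.execute("INSERT INTO T (C2) VALUES (last_insert_rowid())")
row = con.execute("SELECT ID, C2 FROM T").fetchone()
print(row)  # (1, 1)
```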
Transition tables: If you want to refer to the entire set of rows that a triggering
SQL statement modifies, rather than to individual rows, use a transition table. Like
transition variables, transition tables can appear in the REFERENCING clause of a
CREATE TRIGGER statement. Transition tables are valid for both row triggers and
statement triggers. The two types of transition tables are:
v Old transition tables, specified with the OLD TABLE transition-table-name clause,
capture the values of columns before the triggering SQL statement updates them.
You can define old transition tables for update and delete triggers.
v New transition tables, specified with the NEW TABLE transition-table-name
clause, capture the values of columns after the triggering SQL statement updates
them. You can define new transition tables for update and insert triggers.
The scope of old and new transition table names is the trigger body. If another table
exists that has the same name as a transition table, any unqualified reference to
that name in the trigger body points to the transition table. To reference the other
table in the trigger body, you must use the fully qualified table name.
The following example uses a new transition table to capture the set of rows that
are inserted into the INVOICE table:
CREATE TRIGGER LRG_ORDR
AFTER INSERT ON INVOICE
REFERENCING NEW TABLE AS N_TABLE
FOR EACH STATEMENT MODE DB2SQL
BEGIN ATOMIC
SELECT LARGE_ORDER_ALERT(CUST_NO,
TOTAL_PRICE, DELIVERY_DATE)
FROM N_TABLE WHERE TOTAL_PRICE > 10000;
END
Trigger condition: If you want the triggered action to occur only when certain
conditions are true, code a trigger condition. A trigger condition is similar to a
predicate in a SELECT, except that the trigger condition begins with WHEN, rather
than WHERE. If you do not include a trigger condition in your triggered action, the
trigger body executes every time the trigger is activated.
For a row trigger, DB2 evaluates the trigger condition once for each modified row of
the subject table. For a statement trigger, DB2 evaluates the trigger condition once
for each execution of the triggering SQL statement.
The following example shows a trigger condition that causes the trigger body to
execute only when the number of ordered items is greater than the number of
available items:
CREATE TRIGGER CK_AVAIL
NO CASCADE BEFORE INSERT ON ORDERS
REFERENCING NEW AS NEW_ORDER
FOR EACH ROW MODE DB2SQL
WHEN (NEW_ORDER.QUANTITY >
(SELECT ON_HAND FROM PARTS
WHERE NEW_ORDER.PARTNO=PARTS.PARTNO))
BEGIN ATOMIC
VALUES(ORDER_ERROR(NEW_ORDER.PARTNO,
NEW_ORDER.QUANTITY));
END
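The CK_AVAIL condition can be sketched in SQLite, whose trigger WHEN clause likewise accepts a subquery. This is an analogy, not the DB2 statement: RAISE(ABORT, ...) stands in for the ORDER_ERROR user-defined function, and the PARTS/ORDERS rows are invented for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE PARTS (PARTNO INTEGER, ON_HAND INTEGER);
    CREATE TABLE ORDERS (PARTNO INTEGER, QUANTITY INTEGER);
    CREATE TRIGGER CK_AVAIL
    BEFORE INSERT ON ORDERS
    FOR EACH ROW
    WHEN NEW.QUANTITY >
         (SELECT ON_HAND FROM PARTS WHERE PARTS.PARTNO = NEW.PARTNO)
    BEGIN
        SELECT RAISE(ABORT, 'ordered quantity exceeds stock');
    END;
    INSERT INTO PARTS VALUES (1, 10);
""")
con.execute("INSERT INTO ORDERS VALUES (1, 5)")       # condition false: allowed
try:
    con.execute("INSERT INTO ORDERS VALUES (1, 50)")  # condition true: aborted
except sqlite3.IntegrityError as err:
    print(err)
orders = con.execute("SELECT COUNT(*) FROM ORDERS").fetchone()[0]
print(orders)  # 1
```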
Trigger body: In the trigger body, you code the SQL statements that you want to
execute whenever the trigger condition is true. The trigger body begins with BEGIN
ATOMIC and ends with END. You cannot include host variables or parameter
markers in your trigger body. If the trigger body contains a WHERE clause that
references transition variables, the comparison operator cannot be LIKE.
The statements you can use in a trigger body depend on the activation time of the
trigger. Table 21 summarizes which SQL statements you can use in which types of
triggers.
Table 21. Valid SQL statements for triggers and trigger activation times
SQL Statement Valid for Activation Time
Before After
SELECT Yes Yes
VALUES Yes Yes
CALL Yes Yes
SIGNAL SQLSTATE Yes Yes
SET transition-variable Yes No
INSERT No Yes
UPDATE No Yes
DELETE No Yes
The following list provides more detailed information about SQL statements that are
valid in triggers:
v SELECT, VALUES, and CALL
Use the SELECT or VALUES statement in a trigger body to conditionally or
unconditionally invoke a user-defined function. Use the CALL statement to invoke
a stored procedure. See “Invoking stored procedures and user-defined functions
from triggers” on page 217 for more information on invoking user-defined
functions and stored procedures from triggers.
A SELECT statement in the trigger body of a before trigger cannot reference the
subject table.
v SET transition-variable
If any SQL statement in the trigger body fails during trigger execution, DB2 rolls
back all changes that are made by the triggering SQL statement and the triggered
SQL statements. However, if the trigger body executes actions that are outside of
DB2's control or are not under the same commit coordination as the DB2
subsystem in which the trigger executes, DB2 cannot undo those actions. Examples
of external actions that are not under DB2's control are:
v Performing updates that are not under RRS commit control
v Sending an electronic mail message
If the trigger executes external actions that are under the same commit coordination
as the DB2 subsystem under which the trigger executes, and an error occurs during
trigger execution, DB2 places the application process that issued the triggering
statement in a must-rollback state. The application must then execute a rollback
operation to roll back those external actions. Examples of external actions that are
under the same commit coordination as the triggering SQL operation are:
v Executing a distributed update operation
Because a before trigger must not modify any table, functions and procedures that
you invoke from a trigger cannot include INSERT, UPDATE, or DELETE statements
that modify the subject table.
Use the VALUES statement to execute a function unconditionally; that is, once for
each execution of a statement trigger or once for each row in a row trigger. In this
example, user-defined function PAYROLL_LOG executes every time an update
operation occurs that activates trigger PAYROLL1:
CREATE TRIGGER PAYROLL1
AFTER UPDATE ON PAYROLL
FOR EACH STATEMENT MODE DB2SQL
BEGIN ATOMIC
VALUES(PAYROLL_LOG(USER, 'UPDATE',
CURRENT TIME, CURRENT DATE));
END
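Invoking an application-defined function from a trigger body can be sketched with SQLite, where a SELECT in the trigger body calls a function registered through sqlite3.Connection.create_function. The payroll_log function here is a hypothetical stand-in for DB2's PAYROLL_LOG, and SQLite fires the row trigger once per updated row rather than once per statement.

```python
import sqlite3

calls = []  # records each invocation made from inside the trigger
con = sqlite3.connect(":memory:")
con.create_function("payroll_log", 2,
                    lambda who, op: calls.append((who, op)))
con.executescript("""
    CREATE TABLE PAYROLL (EMPNO INTEGER, SALARY REAL);
    CREATE TRIGGER PAYROLL1
    AFTER UPDATE ON PAYROLL
    FOR EACH ROW
    BEGIN
        SELECT payroll_log('USER1', 'UPDATE');
    END;
    INSERT INTO PAYROLL VALUES (10, 50000);
""")
con.execute("UPDATE PAYROLL SET SALARY = 51000")
print(calls)  # [('USER1', 'UPDATE')]
```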
Trigger cascading
An SQL operation that a trigger performs might modify the subject table or other
tables with triggers, so DB2 also activates those triggers. A trigger that is activated
as the result of another trigger can be activated at the same level as the original
trigger or at a different level. Two triggers, A and B, are activated at different levels
if trigger B is activated after trigger A is activated and completes before trigger A
completes. If trigger B is activated after trigger A is activated and completes after
trigger A completes, then the triggers are at the same level.
For example, in these cases, trigger A and trigger B are activated at the same level:
v Table X has two triggers that are defined on it, A and B. A is a before trigger and
B is an after trigger. An update to table X causes both trigger A and trigger B to
activate.
v Trigger A updates table X, which has a referential constraint with table Y, which
has trigger B defined on it. The referential constraint causes table Y to be
updated, which activates trigger B.
In these cases, trigger A and trigger B are activated at different levels:
v Trigger A is defined on table X, and trigger B is defined on table Y. Trigger B is
an update trigger. An update to table X activates trigger A, which contains an
UPDATE statement on table Y in its trigger body. This UPDATE statement
activates trigger B.
v Trigger A calls a stored procedure. The stored procedure contains an INSERT
statement for table X, which has insert trigger B defined on it. When the INSERT
statement on table X executes, trigger B is activated.
When triggers are activated at different levels, it is called trigger cascading. Trigger
cascading can occur only for after triggers because DB2 does not support
cascading of before triggers.
To prevent the possibility of endless trigger cascading, DB2 supports only 16 levels
of cascading of triggers, stored procedures, and user-defined functions. If a trigger,
user-defined function, or stored procedure at the 17th level is activated, DB2 returns
SQLCODE -724 and backs out all SQL changes in the 16 levels of cascading.
However, as with any other SQL error that occurs during trigger execution, if any
action occurs that is outside the control of DB2, that action is not backed out.
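A small cascading chain can be sketched in SQLite: an insert into table A activates a trigger whose INSERT into B activates a second trigger, which inserts into C. The table and trigger names are hypothetical, and SQLite's depth limit and recursive_triggers pragma differ from DB2's 16-level rule; the sketch only shows the cascading mechanism itself.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA recursive_triggers = ON")  # allow trigger-fired statements
con.executescript("""
    CREATE TABLE A (V INTEGER);
    CREATE TABLE B (V INTEGER);
    CREATE TABLE C (V INTEGER);
    CREATE TRIGGER TR_A AFTER INSERT ON A
    BEGIN INSERT INTO B VALUES (NEW.V); END;
    CREATE TRIGGER TR_B AFTER INSERT ON B
    BEGIN INSERT INTO C VALUES (NEW.V); END;
""")
# One INSERT at level 1 activates TR_A; its INSERT activates TR_B at the
# next level, so the value propagates A -> B -> C.
con.execute("INSERT INTO A VALUES (7)")
reached = con.execute("SELECT V FROM C").fetchone()
print(reached)  # (7,)
```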
You can write a monitor program that issues IFI READS requests to collect DB2
trace information about the levels of cascading of triggers, user-defined functions,
DB2 always activates all before triggers that are defined on a table before the after
triggers that are defined on that table, but within the set of before triggers, the
activation order is by timestamp, and within the set of after triggers, the activation
order is by timestamp.
In this example, triggers NEWHIRE1 and NEWHIRE2 have the same triggering
event (INSERT), the same subject table (EMP), and the same activation time
(AFTER). Suppose that the CREATE TRIGGER statement for NEWHIRE1 is run
before the CREATE TRIGGER statement for NEWHIRE2:
CREATE TRIGGER NEWHIRE1
AFTER INSERT ON EMP
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
END
When an insert operation occurs on table EMP, DB2 activates NEWHIRE1 first
because NEWHIRE1 was created first. Now suppose that someone drops and
recreates NEWHIRE1. NEWHIRE1 now has a later timestamp than NEWHIRE2, so
the next time an insert operation occurs on EMP, NEWHIRE2 is activated before
NEWHIRE1.
If two row triggers are defined for the same action, the trigger that was created
earlier is activated first for all affected rows. Then the second trigger is activated for
all affected rows. In the previous example, suppose that an INSERT statement with
a fullselect inserts 10 rows into table EMP. NEWHIRE1 is activated for all 10 rows,
then NEWHIRE2 is activated for all 10 rows.
In general, the following steps occur when triggering SQL statement S1 performs an
insert, update, or delete operation on table T1:
1. DB2 determines the rows of T1 to modify. Call that set of rows M1. The
contents of M1 depend on the SQL operation:
If any constraint is violated, DB2 rolls back all changes that are made by
constraint actions or by statement S1.
5. DB2 processes all after triggers that are defined on T1, and all after triggers on
tables that are modified as the result of referential constraint actions, in order of
creation.
Each after row trigger executes the triggered action once for each row in M1. If
M1 is empty, the triggered action does not execute.
Each after statement trigger executes the triggered action once for each
execution of S1, even if M1 is empty.
If any triggered actions contain SQL insert, update, or delete operations, DB2
repeats steps 1 through 5 for each operation.
For example, table DEPT is a parent table of EMP, with these conditions:
v The DEPTNO column of DEPT is the primary key.
v The WORKDEPT column of EMP is the foreign key.
v The constraint is ON DELETE SET NULL.
Suppose the following trigger is defined on EMP:
CREATE TRIGGER EMPRAISE
AFTER UPDATE ON EMP
REFERENCING NEW TABLE AS NEWEMPS
Also suppose that an SQL statement deletes the row with department number E21
from DEPT. Because of the constraint, DB2 finds the rows in EMP with a
WORKDEPT value of E21 and sets WORKDEPT in those rows to null. This is
equivalent to an update operation on EMP, which has update trigger EMPRAISE.
Therefore, because EMPRAISE is an after trigger, EMPRAISE is activated after the
constraint action sets WORKDEPT values to null.
EXEC SQL OPEN C1;
...
When DB2 executes the FETCH statement that positions cursor C1 for the first
time, DB2 evaluates the subselect, SELECT B1 FROM T2, to produce a result table
that contains the two values of column B1 in T2:
1
2
When DB2 executes the positioned UPDATE statement for the first time, trigger
TR1 is activated. When the body of trigger TR1 executes, the row with value 2 is
deleted from T2. However, because SELECT B1 FROM T2 is evaluated only once,
when the FETCH statement is executed again, DB2 finds the second row of T1,
even though the second row of T2 was deleted. The FETCH statement positions
the cursor to the second row of T1, and the second row of T1 is updated. The
update operation causes the trigger to be activated again, which causes DB2 to
attempt to delete the second row of T2, even though that row was already deleted.
To avoid processing of the second row after it should have been deleted, use a
correlated subquery in the cursor declaration:
DCL C1 CURSOR FOR
SELECT A1 FROM T1 X
WHERE EXISTS (SELECT B1 FROM T2 WHERE X.A1 = B1)
FOR UPDATE OF A1;
In this case, the subquery, SELECT B1 FROM T2 WHERE X.A1 = B1, is evaluated
for each FETCH statement. The first time that the FETCH statement executes, it
positions the cursor to the first row of T1. The positioned UPDATE operation
activates the trigger, which deletes the second row of T2. Therefore, when the
FETCH statement executes again, no row is selected, so no update operation or
triggered action occurs.
If DB2 updates the first row of T1 first, after the UPDATE statement and the trigger
execute for the first time, the values in the three tables are:
Table T1 Table T2 Table T3
A1 B1 C1
== == ==
2 2 2
2
After the second row of T1 is updated, the values in the three tables are:
Table T1 Table T2 Table T3
A1 B1 C1
== == ==
2 2 2
3 3 2
3
However, if DB2 updates the second row of T1 first, after the UPDATE statement
and the trigger execute for the first time, the values in the three tables are:
Table T1 Table T2 Table T3
A1 B1 C1
== == ==
1 3 3
3
After the first row of T1 is updated, the values in the three tables are:
Table T1 Table T2 Table T3
A1 B1 C1
== == ==
2 3 3
3 2 3
2
Introduction to LOBs
Working with LOBs involves defining the LOBs to DB2, moving the LOB data into
DB2 tables, then using SQL operations to manipulate the data. This chapter
concentrates on manipulating LOB data using SQL statements. For information on
defining LOBs to DB2, see Chapter 5 of DB2 SQL Reference. For information on
how DB2 utilities manipulate LOB data, see Part 2 of DB2 Utility Guide and
Reference.
These are the basic steps for defining LOBs and moving the data into DB2:
1. Define a column of the appropriate LOB type and a row identifier (ROWID)
column in a DB2 table. Define only one ROWID column, even if there are
multiple LOB columns in the table.
The LOB column holds information about the LOB, not the LOB data itself. The
table that contains the LOB information is called the base table. DB2 uses the
ROWID column to locate your LOB data. You need only one ROWID column in
a table that contains one or more LOB columns. You can define the LOB
column and the ROWID column in a CREATE TABLE or ALTER TABLE
statement. If you are adding a LOB column and a ROWID column to an existing
table, you must use two ALTER TABLE statements. Add the ROWID with the
first ALTER TABLE statement and the LOB column with the second.
2. Create a table space and table to hold the LOB data.
The table space and table are called a LOB table space and an auxiliary table.
If your base table is nonpartitioned, you must create one LOB table space and
one auxiliary table for each LOB column. If your base table is partitioned, for
each LOB column, you must create one LOB table space and one auxiliary
table for each partition. For example, if your base table has three partitions, you
must create three LOB table spaces and three auxiliary tables for each LOB
column. Create these objects using the CREATE LOB TABLESPACE and
CREATE AUXILIARY TABLE statements.
3. Create an index on the auxiliary table.
For example, suppose you want to add a resume for each employee to the
employee table. Employee resumes are no more than 5 MB in size. The employee
resumes contain single-byte characters, so you can define the resumes to DB2 as
CLOBs. You therefore need to add a column of data type CLOB with a length of 5
MB to the employee table. If a ROWID column has not been defined in the table,
you need to add the ROWID column before you add the CLOB column. Execute an
ALTER TABLE statement to add the ROWID column, and then execute another
ALTER TABLE statement to add the CLOB column. You might use statements like
this:
ALTER TABLE EMP
ADD ROW_ID ROWID NOT NULL GENERATED ALWAYS;
COMMIT;
ALTER TABLE EMP
ADD EMP_RESUME CLOB(5M);
COMMIT;
Next, you need to define a LOB table space and an auxiliary table to hold the
employee resumes. You also need to define an index on the auxiliary table. You
must define the LOB table space in the same database as the associated base
table. You can use statements like this:
CREATE LOB TABLESPACE RESUMETS
IN DSN8D71A
LOG NO;
COMMIT;
CREATE AUXILIARY TABLE EMP_RESUME_TAB
IN DSN8D71A.RESUMETS
STORES DSN8710.EMP
COLUMN EMP_RESUME;
CREATE UNIQUE INDEX XEMP_RESUME
ON EMP_RESUME_TAB;
COMMIT;
Now that your DB2 objects for the LOB data are defined, you can load your
employee resumes into DB2. To do this in an SQL application, you can define a
host variable to hold the resume, copy the resume data from a file into the host
variable, and then execute an UPDATE statement to copy the data into DB2.
Although the data goes into the auxiliary table, your UPDATE statement specifies
the name of the base table. The C language declaration of the host variable might
be:
SQL TYPE IS CLOB(5M) resumedata;
In this example, employeenum is a host variable that identifies the employee who is
associated with a resume.
After your LOB data is in DB2, you can write SQL applications to manipulate the
data. You can use most SQL statements with LOBs. For example, you can use
statements like these to extract information about an employee's department from
the resume:
EXEC SQL BEGIN DECLARE SECTION;
long deptInfoBeginLoc;
long deptInfoEndLoc;
SQL TYPE IS CLOB_LOCATOR resume;
SQL TYPE IS CLOB_LOCATOR deptBuffer;
EXEC SQL END DECLARE SECTION;
...
EXEC SQL FETCH C1 INTO :employeenum, :resume;
...
These statements use host variables of data type large object locator (LOB locator).
LOB locators let you manipulate LOB data without moving the LOB data into host
variables. By using LOB locators, you need much smaller amounts of memory for
your programs. LOB locators are discussed in “Using LOB locators to save storage”
on page 236.
Sample LOB applications: Table 22 lists the sample programs that DB2 provides
to assist you in writing applications to manipulate LOB data. All programs reside in
data set DSN710.SDSNSAMP.
Table 22. LOB samples shipped with DB2
Member that Language Function
contains
source code
DSNTEJ7 JCL Demonstrates how to create a table with LOB columns, an
auxiliary table, and an auxiliary index. Also demonstrates
how to load LOB data that is 32KB or less into a LOB table
space.
DSN8DLPL C Demonstrates the use of LOB locators and UPDATE
statements to move binary data into a column of type
BLOB.
DSN8DLRV C Demonstrates how to use a locator to manipulate data of
type CLOB.
For instructions on how to prepare and run the sample LOB applications, see Part 2
of DB2 Installation Guide.
You can declare LOB host variables and LOB locators in assembler, C, C++,
COBOL, FORTRAN, and PL/I. For each host variable or locator of SQL type BLOB,
CLOB, or DBCLOB that you declare, DB2 generates an equivalent declaration that
uses host language data types. When you refer to a LOB host variable or locator in
an SQL statement, you must use the variable you specified in the SQL type
declaration. When you refer to the host variable in a host language statement, you
must use the variable that DB2 generates. See “Part 2. Coding SQL in your host
application program” on page 61 for the syntax of LOB declarations in each
language and for host language equivalents for each LOB type.
The following examples show you how to declare LOB host variables in each
supported language. In each table, the left column contains the declaration that you
code in your application program. The right column contains the declaration that
DB2 generates.
49 FILLER
PIC X(1048576-32*32767).
49 FILLER
PIC X(40960000-1250*32767).
You declare this variable:
   01 DBCLOB-VAR USAGE IS
      SQL TYPE IS DBCLOB(4000K).
DB2 generates this variable:
   01 DBCLOB-VAR.
      02 DBCLOB-VAR-LENGTH
         PIC 9(9) COMP.
      02 DBCLOB-VAR-DATA.
         49 FILLER PIC G(32767)
            USAGE DISPLAY-1. (2)
         49 FILLER PIC G(32767)
            USAGE DISPLAY-1.
         ... (repeat 1248 times) ...
         49 FILLER
            PIC G(20480000-1250*32767)
            USAGE DISPLAY-1.
You declare this variable:
   01 BLOB-LOC USAGE IS SQL
      TYPE IS BLOB-LOCATOR.
DB2 generates this variable:
   01 BLOB-LOC PIC S9(9) USAGE IS BINARY.
You declare this variable:
   01 CLOB-LOC USAGE IS SQL
      TYPE IS CLOB-LOCATOR.
DB2 generates this variable:
   01 CLOB-LOC PIC S9(9) USAGE IS BINARY.
You declare this variable:
   01 DBCLOB-LOC USAGE IS SQL
      TYPE IS DBCLOB-LOCATOR.
DB2 generates this variable:
   01 DBCLOB-LOC PIC S9(9) USAGE IS BINARY.
Notes:
1. Because the COBOL language allows character declarations of no more than 32767
bytes, for BLOB or CLOB host variables that are greater than 32767 bytes in length, DB2
creates multiple host language declarations of 32767 or fewer bytes.
2. Because the COBOL language allows graphic declarations of no more than 32767
double-byte characters, for DBCLOB host variables that are greater than 32767
double-byte characters in length, DB2 creates multiple host language declarations of
32767 or fewer double-byte characters.
Declarations of LOB host variables in PL/I: Table 27 shows PL/I declarations for
some typical LOB types.
Table 27. Examples of PL/I variable declarations
You declare this variable:
   DCL BLOB_VAR
      SQL TYPE IS BLOB (1M);
DB2 generates this variable:
   DCL 1 BLOB_VAR,
      2 BLOB_VAR_LENGTH FIXED BINARY(31),
      2 BLOB_VAR_DATA, (1)
         3 BLOB_VAR_DATA1(32)
            CHARACTER(32767),
         3 BLOB_VAR_DATA2
            CHARACTER(1048576-32*32767);
You declare this variable:
   DCL CLOB_VAR
      SQL TYPE IS CLOB (40000K);
DB2 generates this variable:
   DCL 1 CLOB_VAR,
      2 CLOB_VAR_LENGTH FIXED BINARY(31),
      2 CLOB_VAR_DATA, (1)
         3 CLOB_VAR_DATA1(1250)
            CHARACTER(32767),
         3 CLOB_VAR_DATA2
            CHARACTER(40960000-1250*32767);
You declare this variable:
   DCL DBCLOB_VAR
      SQL TYPE IS DBCLOB (4000K);
DB2 generates this variable:
   DCL 1 DBCLOB_VAR,
      2 DBCLOB_VAR_LENGTH FIXED BINARY(31),
      2 DBCLOB_VAR_DATA, (2)
         3 DBCLOB_VAR_DATA1(2500)
            GRAPHIC(16383),
         3 DBCLOB_VAR_DATA2
            GRAPHIC(40960000-2500*16383);
You declare this variable:
   DCL blob_loc
      SQL TYPE IS BLOB_LOCATOR;
DB2 generates this variable:
   DCL blob_loc FIXED BINARY(31);
You declare this variable:
   DCL clob_loc
      SQL TYPE IS CLOB_LOCATOR;
DB2 generates this variable:
   DCL clob_loc FIXED BINARY(31);
You declare this variable:
   DCL dbclob_loc
      SQL TYPE IS DBCLOB_LOCATOR;
DB2 generates this variable:
   DCL dbclob_loc FIXED BINARY(31);
LOB materialization
LOB materialization means that DB2 places a LOB value into contiguous storage in
a data space. Because LOB values can be very large, DB2 avoids materializing
LOB data until absolutely necessary. However, DB2 must materialize LOBs when
your application program:
v Calls a user-defined function with a LOB as an argument
v Moves a LOB into or out of a stored procedure
v Assigns a LOB host variable to a LOB locator host variable
v Converts a LOB from one CCSID to another
Data spaces for LOB materialization: The amount of storage that is used in data
spaces for LOB materialization depends on a number of factors including:
v The size of the LOBs
v The number of LOBs that need to be materialized in a statement
DB2 allocates a certain number of data spaces for LOB materialization. If there is
insufficient space available in a data space for LOB materialization, your application
receives SQLCODE -904.
Although you cannot completely avoid LOB materialization, you can minimize it by
using LOB locators, rather than LOB host variables in your application programs.
See “Using LOB locators to save storage” for information on how to use LOB
locators.
A LOB locator is associated with a LOB value or expression, not with a row in a
DB2 table or a physical storage location in a table space. Therefore, after you
select a LOB value using a locator, the value in the locator normally does not
change until the current unit of work ends. However the value of the LOB itself can
change.
If you want to remove the association between a LOB locator and its value before a
unit of work ends, execute the FREE LOCATOR statement. To keep the association
between a LOB locator and its value after the unit of work ends, execute the HOLD
LOCATOR statement. After you execute a HOLD LOCATOR statement, the locator
keeps the association with the corresponding value until you execute a FREE
LOCATOR statement or the program ends.
If you execute HOLD LOCATOR or FREE LOCATOR dynamically, you cannot use
EXECUTE IMMEDIATE. For more information on HOLD LOCATOR and FREE
LOCATOR, see Chapter 5 of DB2 SQL Reference.
Because the program uses LOB locators, rather than placing the LOB data into host
variables, no LOB data is moved until the INSERT statement executes. In addition,
no LOB data moves between the client and the server.
/**************************/
/* Declare host variables */ 1
/**************************/
EXEC SQL BEGIN DECLARE SECTION;
char userid[9];
char passwd[19];
long HV_START_DEPTINFO;
long HV_START_EDUC;
long HV_RETURN_CODE;
SQL TYPE IS CLOB_LOCATOR HV_NEW_SECTION_LOCATOR;
SQL TYPE IS CLOB_LOCATOR HV_DOC_LOCATOR1;
SQL TYPE IS CLOB_LOCATOR HV_DOC_LOCATOR2;
SQL TYPE IS CLOB_LOCATOR HV_DOC_LOCATOR3;
EXEC SQL END DECLARE SECTION;
/*************************************************/
/* Use a single row select to get the document */ 2
/*************************************************/
EXEC SQL SELECT RESUME
INTO :HV_DOC_LOCATOR1
FROM EMP_RESUME
WHERE EMPNO = '000130'
AND RESUME_FORMAT = 'ascii';
/*****************************************************/
/* Use the POSSTR function to locate the start of */
/* sections "Department Information" and "Education" */ 3
/*****************************************************/
EXEC SQL SET :HV_START_DEPTINFO =
POSSTR(:HV_DOC_LOCATOR1, 'Department Information');
EXEC SQL SET :HV_START_EDUC =
POSSTR(:HV_DOC_LOCATOR1, 'Education');
/*******************************************************/
/* Replace Department Information section with nothing */
/*******************************************************/
EXEC SQL SET :HV_DOC_LOCATOR2 =
SUBSTR(:HV_DOC_LOCATOR1, 1, :HV_START_DEPTINFO -1)
|| SUBSTR (:HV_DOC_LOCATOR1, :HV_START_EDUC);
/*******************************************************/
/* Associate a new locator with the Department */
/* Information section */
/*******************************************************/
EXEC SQL SET :HV_NEW_SECTION_LOCATOR =
SUBSTR(:HV_DOC_LOCATOR1, :HV_START_DEPTINFO,
:HV_START_EDUC -:HV_START_DEPTINFO);
/*******************************************************/
/* Append the Department Information to the end */
/* of the resume */
/*******************************************************/
EXEC SQL SET :HV_DOC_LOCATOR3 =
:HV_DOC_LOCATOR2 || :HV_NEW_SECTION_LOCATOR;
/*******************************************************/
/* Store the modified resume in the table. This is */ 4
/* where the LOB data really moves. */
/*******************************************************/
EXEC SQL INSERT INTO EMP_RESUME VALUES ('A00130', 'ascii',
:HV_DOC_LOCATOR3, DEFAULT);
/*********************/
/* Free the locators */ 5
/*********************/
EXEC SQL FREE LOCATOR :HV_DOC_LOCATOR1, :HV_DOC_LOCATOR2, :HV_DOC_LOCATOR3;
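The locator arithmetic in steps 2 through 4 can be simulated in ordinary C on an in-memory string, which makes the POSSTR/SUBSTR bookkeeping easier to follow. This is a standalone sketch with hypothetical helper names (posstr, move_section), not embedded SQL:

```c
#include <stdlib.h>
#include <string.h>

/* posstr: 1-based position of needle in haystack, 0 if absent
   (mirrors the SQL POSSTR function used in step 3 above)          */
static long posstr(const char *haystack, const char *needle)
{
    const char *p = strstr(haystack, needle);
    return p ? (long)(p - haystack) + 1 : 0;
}

/* Remove the section that starts at 'sect' and ends just before
   'next', then append it to the end of the document, as the
   locator example does with the Department Information and
   Education sections. Caller frees the result.                    */
static char *move_section(const char *doc, const char *sect, const char *next)
{
    long start = posstr(doc, sect);  /* like :HV_START_DEPTINFO    */
    long end   = posstr(doc, next);  /* like :HV_START_EDUC        */
    char *out;

    if (start == 0 || end <= start)
        return NULL;
    out = malloc(strlen(doc) + 1);
    if (out == NULL)
        return NULL;
    /* SUBSTR(doc, 1, start - 1) || SUBSTR(doc, end) ...           */
    memcpy(out, doc, (size_t)(start - 1));
    strcpy(out + (start - 1), doc + (end - 1));
    /* ... || the removed section, appended at the end             */
    strncat(out, doc + (start - 1), (size_t)(end - start));
    return out;
}
```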
When you use LOB locators to retrieve data from columns that can contain null
values, define indicator variables for the LOB locators, and check the indicator
variables after you fetch data into the LOB locators. If an indicator variable is null
after a fetch operation, you cannot use the value in the LOB locator.
| This chapter contains information that applies to all user-defined functions and
| specific information about user-defined functions in languages other than Java™.
| For information on writing, preparing, and running Java user-defined functions, see
| DB2 Application Programming Guide and Reference for Java.
The user-defined function's definer and invoker determine that this new user-defined
function should have these characteristics:
v The user-defined function name is CALC_BONUS.
v The two input fields are of type DECIMAL(9,2).
v The output field is of type DECIMAL(9,2).
v The program for the user-defined function is written in COBOL and has a load
module name of CBONUS.
User-defined function invokers write and prepare application programs that invoke
CALC_BONUS. An invoker might write a statement like this, which uses the
user-defined function to update the BONUS field in the employee table:
UPDATE EMP
SET BONUS = CALC_BONUS(SALARY,COMM);
Member DSN8DUWC contains a client program that shows you how to invoke the
WEATHER user-defined table function.
Member DSNTEJ2U shows you how to define and prepare the sample user-defined
functions and the client program.
The user-defined function takes two integer values as input. The output from the
user-defined function is of type integer. The user-defined function is in the MATH
schema, is written in assembler, and contains no SQL statements. This CREATE
FUNCTION statement defines the user-defined function:
Suppose you want the FINDSTRING user-defined function to work on BLOB data
types, as well as CLOB types. You can define another instance of the user-defined
function that specifies a BLOB type as input:
CREATE FUNCTION FINDSTRING (BLOB(500K), VARCHAR(200))
RETURNS INTEGER
CAST FROM FLOAT
SPECIFIC FINDSTRINBLOB
EXTERNAL NAME 'FNDBLOB'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
DETERMINISTIC
NO EXTERNAL ACTION
FENCED;
The user-defined function is written in COBOL, uses SQL only to perform queries,
always produces the same output for given input, and should not execute as a
parallel task. The program is reentrant, and successive invocations of the
user-defined function share information. You expect an invocation of the
user-defined function to return about 20 rows.
Your user-defined function can also access remote data using the following
methods:
v DB2 private protocol access using three-part names or aliases for three-part
names
v DRDA access using three-part names or aliases for three-part names
v DRDA access using CONNECT or SET CONNECTION statements
The user-defined function and the application that calls it can access the same
remote site if both use the same protocol.
You can write an external user-defined function in assembler, C, C++, COBOL, PL/I,
or Java. User-defined functions that are written in COBOL can include
object-oriented extensions, just as other DB2 COBOL programs can. For
information on writing Java user-defined functions, see DB2 Application
Programming Guide and Reference for Java.
The following sections include additional information that you need when you write
a user-defined function:
v “Restrictions on user-defined function programs” on page 249
v “Coding your user-defined function as a main program or as a subprogram” on
page 249
v “Parallelism considerations” on page 249
v “Passing parameter values to and from a user-defined function” on page 251
v “Examples of passing parameters in a user-defined function” on page 263
v “Using special registers in a user-defined function” on page 276
v “Using a scratchpad in a user-defined function” on page 277
v “Accessing transition tables in a user-defined function or stored procedure” on
page 279
If you code your user-defined function as a subprogram and manage the storage
and files yourself, you can get better performance. The user-defined function should
always free any allocated storage before it exits. To keep data between invocations
of the user-defined function, use a scratchpad.
You must code a user-defined table function that accesses external resources as a
subprogram. Also ensure that the definer specifies the EXTERNAL ACTION
parameter in the CREATE FUNCTION or ALTER FUNCTION statement. Program
variables for a subprogram persist between invocations of the user-defined function,
and use of the EXTERNAL ACTION parameter ensures that the user-defined
function stays in the same address space from one invocation to another.
Parallelism considerations
If the definer specifies the parameter ALLOW PARALLEL in the definition of a
user-defined scalar function, and the invoking SQL statement runs in parallel, the
function can run under a parallel task. DB2 executes a separate instance of the
user-defined function for each parallel task. When you write your function program,
you need to understand how the following parameter values interact with ALLOW
PARALLEL so that you can avoid unexpected results:
v SCRATCHPAD
When an SQL statement invokes a user-defined function that is defined with the
ALLOW PARALLEL parameter, DB2 allocates one scratchpad for each parallel
task of each reference to the function. This can lead to unpredictable or incorrect
results.
When the query is executed with no parallelism, DB2 invokes COUNTER once
for each row of table T1, and there is one scratchpad for COUNTER, which DB2
initializes the first time that COUNTER executes. COUNTER returns 1 the first
time it executes, 2 the second time, and so on. The result table for the query is
therefore:
1
2
3
4
5
6
7
8
9
10
Now suppose that the query is run with parallelism, and DB2 creates three
parallel tasks. DB2 executes the predicate WHERE C1 = COUNTER() for each
parallel task. This means that each parallel task invokes its own instance of the
user-defined function and has its own scratchpad. DB2 initializes the scratchpad
to zero on the first call to the user-defined function for each parallel task.
Figure 97 on page 252 shows the structure of the parameter list that DB2 passes to
a user-defined function. An explanation of each parameter follows.
Input parameter values: DB2 obtains the input parameters from the invoker's
parameter list, and your user-defined function receives those parameters according
to the rules of the host language in which the user-defined function is written. The
number of input parameters is the same as the number of parameters in the
user-defined function invocation. If one of the parameters in the function invocation
is an expression, DB2 evaluates the expression and assigns the result of the
expression to the parameter.
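Under PARAMETER STYLE DB2SQL, the leading entries of that parameter list arrive in a fixed order: input parameters, result parameter, input indicators, result indicator, and SQLSTATE. A standalone C sketch of a CALC_BONUS-style body follows; the names are hypothetical, DECIMAL(9,2) values are shown as double for illustration only, and the trailing function-name, specific-name, and diagnostic parameters are omitted:

```c
#include <string.h>

/* Sketch of a scalar UDF body in PARAMETER STYLE DB2SQL order:
   input parameters, result parameter, input indicators, result
   indicator, SQLSTATE. (Hypothetical names; the real list also
   carries the function name, specific name, and diagnostic area.) */
void calc_bonus(double *salary, double *comm,        /* inputs      */
                double *bonus,                       /* result      */
                short *ind_salary, short *ind_comm,  /* input inds  */
                short *ind_bonus,                    /* result ind  */
                char sqlstate[6])
{
    /* A negative indicator means the corresponding input is null  */
    if (*ind_salary < 0 || *ind_comm < 0) {
        *ind_bonus = -1;             /* result is null             */
        strcpy(sqlstate, "00000");   /* successful execution       */
        return;
    }
    *bonus = (*salary + *comm) / 10.0; /* illustrative 10% bonus   */
    *ind_bonus = 0;                    /* result is not null       */
    strcpy(sqlstate, "00000");
}
```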
Table 31. Compatible assembler language declarations for LOBs, ROWIDs, and locators
SQL data type in definition Assembler declaration
TABLE LOCATOR DS FL4
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
BLOB(n) If n <= 65535:
var DS 0FL4
var_length DS FL4
var_data DS CLn
If n > 65535:
var DS 0FL4
var_length DS FL4
var_data DS CL65535
ORG var_data+(n-65535)
CLOB(n) If n <= 65535:
var DS 0FL4
var_length DS FL4
var_data DS CLn
If n > 65535:
var DS 0FL4
var_length DS FL4
var_data DS CL65535
ORG var_data+(n-65535)
DBCLOB(n) If m (=2*n) <= 65534:
var DS 0FL4
var_length DS FL4
var_data DS CLm
If m > 65534:
var DS 0FL4
var_length DS FL4
var_data DS CL65534
ORG var_data+(m-65534)
ROWID DS HL2,CL40
Table 32. Compatible C language declarations for LOBs, ROWIDs, and locators
SQL data type in definition C declaration
TABLE LOCATOR unsigned long
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
Table 33. Compatible COBOL declarations for LOBs, ROWIDs, and locators
SQL data type in definition COBOL declaration
TABLE LOCATOR 01 var PIC S9(9) USAGE IS BINARY.
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
BLOB(n) If n <= 32767:
01 var.
49 var-LENGTH PIC 9(9)
USAGE COMP.
49 var-DATA PIC X(n).
If n > 32767:
01 var.
02 var-LENGTH PIC S9(9)
USAGE COMP.
02 var-DATA.
49 FILLER
PIC X(32767).
49 FILLER
PIC X(32767).
.
.
.
49 FILLER
PIC X(mod(n,32767)).
DBCLOB(n) If n <= 32767:
01 var.
49 var-LENGTH PIC 9(9)
USAGE COMP.
49 var-DATA PIC G(n)
USAGE DISPLAY-1.
If n > 32767:
01 var.
02 var-LENGTH PIC S9(9)
USAGE COMP.
02 var-DATA.
49 FILLER
PIC G(32767)
USAGE DISPLAY-1.
49 FILLER
PIC G(32767)
USAGE DISPLAY-1.
.
.
.
49 FILLER
PIC G(mod(n,32767))
USAGE DISPLAY-1.
ROWID 01 var.
49 var-LEN PIC 9(4)
USAGE COMP.
49 var-DATA PIC X(40).
Table 34. Compatible PL/I declarations for LOBs, ROWIDs, and locators
SQL data type in definition PL/I
TABLE LOCATOR BIN FIXED(31)
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
Result parameters: Set these values in your user-defined function before exiting.
For a user-defined scalar function, you return one result parameter. For a
user-defined table function, you return the same number of parameters as columns
in the RETURNS TABLE clause of the CREATE FUNCTION statement. DB2
allocates a buffer for each result parameter value and passes the buffer address to
the user-defined function. Your user-defined function places each result parameter
value in its buffer. You must ensure that the length of the value that you place in
each result buffer does not exceed the length of the buffer.
See “Passing parameter values to and from a user-defined function” on page 251 to
determine the host data type to use for each result parameter value. If the CREATE
FUNCTION statement contains a CAST FROM clause, use a data type that
corresponds to the SQL data type in the CAST FROM clause. Otherwise, use a
data type that corresponds to the SQL data type in the RETURNS or RETURNS
TABLE clause.
To improve performance for user-defined table functions that return many columns,
you can pass values for a subset of columns to the invoker. For example, a
user-defined table function might be defined to return 100 columns, but the invoker
needs values for only two columns. Use the DBINFO parameter to indicate to DB2
the columns for which you will return values. Then return values for only those
columns. See the explanation of DBINFO below for information on how to indicate
the columns of interest.
Input parameter indicators: These are SMALLINT values, which DB2 sets before
it passes control to the user-defined function. You use the indicators to determine
whether the corresponding input parameters are null. The number and order of the
indicators are the same as the number and order of the input parameters. On entry
to the user-defined function, each indicator contains one of these values:
0 The input parameter value is not null.
negative The input parameter value is null.
Code the user-defined function to check all indicators for null values unless the
user-defined function is defined with RETURNS NULL ON NULL INPUT. A
user-defined function defined with RETURNS NULL ON NULL INPUT executes only
if all input parameters are not null.
Result indicators: These are SMALLINT values, which you must set before the
user-defined function ends to indicate to the invoking program whether each result
parameter value is null. A user-defined scalar function has one result indicator. A
user-defined table function has the same number of result indicators as the number
of result parameters. The order of the result indicators is the same as the order of
the result parameters. Set each result indicator to one of these values:
0 or positive The result parameter is not null.
negative The result parameter is null.
SQLSTATE value: This is a CHAR(5) value, which you must set before the
user-defined function ends. The user-defined function can return one of these
SQLSTATE values:
00000 Use this value to indicate that the user-defined function executed
without any warnings or errors.
01Hxx Use these values to indicate that the user-defined function detected
a warning condition. xx can be any two single-byte alphanumeric
characters. DB2 returns SQLCODE +462 if the user-defined
function sets the SQLSTATE to 01Hxx.
02000 Use this value to indicate that no more rows are to be returned
from a user-defined table function.
38yxx Use these values to indicate that the user-defined function detected
an error condition.
When your user-defined function returns an SQLSTATE of 38yxx other than one of
the four listed above, DB2 returns SQLCODE -443.
If both the user-defined function and DB2 set an SQLSTATE value, DB2 returns its
SQLSTATE value to the invoker.
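The SQLSTATE conventions above can be summarized in a small classifier; this is an illustrative sketch of the documented mapping only, and `sqlcode_for_udf_sqlstate` is a hypothetical helper, not a DB2 API:

```c
#include <string.h>

/* Map a UDF-supplied SQLSTATE to the SQLCODE behavior described in
   the text: 00000 is success, 01Hxx raises warning SQLCODE +462,
   02000 signals end of table (SQLCODE +100), and an unrecognized
   38yxx value results in SQLCODE -443. Sketch only; real DB2
   distinguishes additional cases.                                 */
int sqlcode_for_udf_sqlstate(const char *sqlstate)
{
    if (strcmp(sqlstate, "00000") == 0)  return 0;    /* success      */
    if (strncmp(sqlstate, "01H", 3) == 0) return 462; /* warning      */
    if (strcmp(sqlstate, "02000") == 0)  return 100;  /* end of table */
    return -443;                                      /* 38yxx error  */
}
```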
User-defined function name: DB2 sets this value in the parameter list before the
user-defined function executes. This value is VARCHAR(137): 8 bytes for the
schema name, 1 byte for a period, and 128 bytes for the user-defined function
name. If you use the same code to implement multiple versions of a user-defined
function, you can use this parameter to determine which version of the function the
invoker wants to execute.
Specific name: DB2 sets this value in the parameter list before the user-defined
function executes. This value is VARCHAR(128) and is either the specific name
from the CREATE FUNCTION statement or a specific name that DB2 generated. If
you use the same code to implement multiple versions of a user-defined function,
you can use this parameter to determine which version of the function the invoker
wants to execute.
You must ensure that your user-defined function does not write more bytes to the
scratchpad than the scratchpad length.
Call type: For a user-defined scalar function, if the definer specified FINAL CALL in
the CREATE FUNCTION statement, DB2 passes this parameter to the user-defined
function. For a user-defined table function, DB2 always passes this parameter to
the user-defined function.
On entry to a user-defined scalar function, the call type parameter has one of the
following values:
-1 This is the first call to the user-defined function for the SQL statement. For
a first call, all input parameters are passed to the user-defined function. In
addition, the scratchpad, if allocated, is set to binary zeros.
0 This is a normal call. For a normal call, all the input parameters are passed
to the user-defined function. If a scratchpad is also passed, DB2 does not
modify it.
1 This is a final call. For a final call, no input parameters are passed to the
user-defined function. If a scratchpad is also passed, DB2 does not modify
it.
This type of final call occurs when the invoking application explicitly closes
a cursor. When a value of 1 is passed to a user-defined function, the
user-defined function can execute SQL statements.
255 This is a final call. For a final call, no input parameters are passed to the
user-defined function. If a scratchpad is also passed, DB2 does not modify
it.
This type of final call occurs when the invoking application executes a
COMMIT or ROLLBACK statement, or when the invoking application
abnormally terminates. When a value of 255 is passed to the user-defined
function, the user-defined function cannot execute any SQL statements,
except for CLOSE CURSOR. If the user-defined function executes any
close cursor statements during this type of final call, the user-defined
function should tolerate SQLCODE -501 because DB2 might have already
closed cursors before the final call.
During the first call, your user-defined scalar function should acquire any system
resources it needs. During the final call, the user-defined scalar function should
release any resources that it acquired during the first call. The user-defined scalar
function should return a result value only during normal calls; DB2 ignores any
results that are returned during a final call.
If an invoking SQL statement contains more than one user-defined scalar function,
and one of those user-defined functions returns an error SQLSTATE, DB2 invokes
all of the user-defined functions for a final call, and the invoking SQL statement
receives the SQLSTATE of the first user-defined function with an error.
On entry to a user-defined table function, the call type parameter has one of the
following values:
-2 This is the first call to the user-defined function for the SQL statement. A
first call occurs only if the FINAL CALL keyword is specified in the
user-defined function definition. For a first call, all input parameters are
passed to the user-defined function. In addition, the scratchpad, if allocated,
is set to binary zeros.
-1 This is the open call to the user-defined function by an SQL statement. If
FINAL CALL is not specified in the user-defined function definition, all input
parameters are passed to the user-defined function, and the scratchpad, if
allocated, is set to binary zeros during the open call. If FINAL CALL is
specified for the user-defined function, DB2 does not modify the scratchpad.
0 This is a fetch call to the user-defined function by an SQL statement. For a
fetch call, all input parameters are passed to the user-defined function. If a
scratchpad is also passed, DB2 does not modify it.
1 This is a close call. For a close call, no input parameters are passed to the
user-defined function. If a scratchpad is also passed, DB2 does not modify
it.
2 This is a final call. This type of final call occurs only if FINAL CALL is
specified in the user-defined function definition. For a final call, no input
parameters are passed to the user-defined function. If a scratchpad is also
passed, DB2 does not modify it.
This type of final call occurs when the invoking application executes a
CLOSE CURSOR statement.
255 This is a final call. For a final call, no input parameters are passed to the
user-defined function. If a scratchpad is also passed, DB2 does not modify
it.
This type of final call occurs when the invoking application executes a
COMMIT or ROLLBACK statement, or when the invoking application
abnormally terminates. When a value of 255 is passed to the user-defined
function, the user-defined function cannot execute any SQL statements,
except for CLOSE CURSOR. If the user-defined function executes any
close cursor statements during this type of final call, the user-defined
function should tolerate SQLCODE -501 because DB2 might have already
closed cursors before the final call.
During the close call, a user-defined table function can set the SQLSTATE and
diagnostic message area.
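A minimal standalone C sketch of a table-function body that dispatches on these call types follows. The names are hypothetical (DB2 passes the raw integers), and the function simply returns the values 1 through 3, then signals end-of-table with SQLSTATE 02000 on the next fetch:

```c
#include <string.h>

/* Call types for a user-defined table function, as listed above.
   (Symbolic names are illustrative; DB2 passes the raw integers.) */
enum { FIRST_CALL = -2, OPEN_CALL = -1, FETCH_CALL = 0,
       CLOSE_CALL = 1, FINAL_CALL = 2, FINAL_CALL_255 = 255 };

/* Sketch of a table-function body that returns the values 1..3,
   keeping its position in the scratchpad between fetch calls.     */
void counter_rows(long *result, short *result_ind, char sqlstate[6],
                  long *scratchpad, int call_type)
{
    strcpy(sqlstate, "00000");
    switch (call_type) {
    case OPEN_CALL:
        *scratchpad = 0;            /* DB2 zeroes it; be explicit  */
        break;
    case FETCH_CALL:
        if (*scratchpad >= 3) {     /* no more rows to return      */
            strcpy(sqlstate, "02000");
        } else {
            *result = ++(*scratchpad);
            *result_ind = 0;        /* result column is not null   */
        }
        break;
    case CLOSE_CALL:
    case FINAL_CALL:
    case FINAL_CALL_255:
        break;                      /* release resources here      */
    }
}
```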
CEETERM RC=0
*******************************************************************
* VARIABLE DECLARATIONS AND EQUATES *
*******************************************************************
R1 EQU 1 REGISTER 1
R7 EQU 7 REGISTER 7
PPA CEEPPA , CONSTANTS DESCRIBING THE CODE BLOCK
LTORG , PLACE LITERAL POOL HERE
PROGAREA DSECT
ORG *+CEEDSASZ LEAVE SPACE FOR DSA FIXED PART
PARM1 DS F PARAMETER 1
PARM2 DS F PARAMETER 2
RESULT DS CL9 RESULT
F_IND1 DS H INDICATOR FOR PARAMETER 1
F_IND2 DS H INDICATOR FOR PARAMETER 2
F_INDR DS H INDICATOR FOR RESULT
C or C++:
For subprograms, you pass the parameters directly. For main programs, you use
the standard argc and argv variables to access the input and output parameters:
v The argv variable contains an array of pointers to the parameters that are passed
to the user-defined function. All string parameters that are passed back to DB2
must be null terminated.
– argv[0] contains the address of the load module name for the user-defined
function.
– argv[1] through argv[n] contain the addresses of parameters 1 through n.
v The argc variable contains the number of parameters that are passed to the
external user-defined function, including argv[0].
Figure 99 on page 266 shows the parameter conventions for a user-defined scalar
function that is written as a main program that receives two parameters and returns
one result.
main(argc,argv)
int argc;
char *argv[];
{
/***************************************************/
/* Assume that the user-defined function invocation*/
/* included 2 input parameters in the parameter */
/* list. Also assume that the definition includes */
/* the SCRATCHPAD, FINAL CALL, and DBINFO options, */
/* so DB2 passes the scratchpad, calltype, and */
/* dbinfo parameters. */
/* The argv vector contains these entries: */
/* argv[0] 1 load module name */
/* argv[1-2] 2 input parms */
/* argv[3] 1 result parm */
/* argv[4-5] 2 null indicators */
/* argv[6] 1 result null indicator */
/* argv[7] 1 SQLSTATE variable */
/* argv[8] 1 qualified func name */
/* argv[9] 1 specific func name */
/* argv[10] 1 diagnostic string */
/* argv[11] 1 scratchpad */
/* argv[12] 1 call type */
/* argv[13] + 1 dbinfo */
/* ------ */
/* 14 for the argc variable */
/***************************************************/
if (argc != 14)
{
.
.
.
/**********************************************************/
/* This section would contain the code executed if the */
/* user-defined function is invoked with the wrong number */
/* of parameters. */
/**********************************************************/
}
Figure 99. How a C or C++ user-defined function that is written as a main program receives
parameters (Part 1 of 2)
/***************************************************/
/* Access the null indicator for the first */
/* parameter on the invoked user-defined function */
/* as follows: */
/***************************************************/
short int ind1;
ind1 = *(short int *) argv[4];
/***************************************************/
/* Use the expression below to assign */
/* 'xxxxx' to the SQLSTATE returned to caller on */
/* the SQL statement that contains the invoked */
/* user-defined function. */
/***************************************************/
strcpy(argv[7],"xxxxx\0");
/***************************************************/
/* Obtain the value of the qualified function */
/* name with this expression. */
/***************************************************/
char f_func[28];
strcpy(f_func,argv[8]);
/***************************************************/
/* Obtain the value of the specific function */
/* name with this expression. */
/***************************************************/
char f_spec[19];
strcpy(f_spec,argv[9]);
/***************************************************/
/* Use the expression below to assign */
/* 'yyyyyyyy' to the diagnostic string returned */
/* in the SQLCA associated with the invoked */
/* user-defined function. */
/***************************************************/
strcpy(argv[10],"yyyyyyyy\0");
/***************************************************/
/* Use the expression below to assign the */
/* result of the function. */
/***************************************************/
char l_result[11];
strcpy(argv[3],l_result);
.
.
.
Figure 99. How a C or C++ user-defined function that is written as a main program receives
parameters (Part 2 of 2)
Figure 100 on page 268 shows the parameter conventions for a user-defined scalar
function written as a C subprogram that receives 2 parameters and returns one
result.
Figure 100. How a C language user-defined function that is written as a subprogram receives
parameters (Part 1 of 2)
l_p1 = *parm1;
strcpy(l_p2,parm2);
l_ind1 = *f_ind1;
l_ind2 = *f_ind2;
strcpy(ludf_sqlstate,udf_sqlstate);
strcpy(ludf_fname,udf_fname);
strcpy(ludf_specname,udf_specname);
l_udf_call_type = *udf_call_type;
strcpy(ludf_msgtext,udf_msgtext);
memcpy(&ludf_scratchpad,udf_scratchpad,sizeof(ludf_scratchpad));
memcpy(&ludf_dbinfo,udf_dbinfo,sizeof(ludf_dbinfo));
.
.
.
Figure 100. How a C language user-defined function that is written as a subprogram receives
parameters (Part 2 of 2)
Figure 101 on page 270 shows the parameter conventions for a user-defined scalar
function that is written as a C++ subprogram that receives two parameters and
returns one result. This example demonstrates that you must use an extern "C"
modifier to indicate that you want the C++ subprogram to receive parameters
according to the C linkage convention. This modifier is necessary because the
CEEPIPI CALL_SUB interface, which DB2 uses to call the user-defined function,
passes parameters using the C linkage convention.
Figure 101. How a C++ user-defined function that is written as a subprogram receives
parameters (Part 1 of 2)
Figure 101. How a C++ user-defined function that is written as a subprogram receives
parameters (Part 2 of 2)
COBOL: Figure 102 on page 272 shows the parameter conventions for a
user-defined table function that is written as a main program that receives two
parameters and returns two results. For a COBOL user-defined function that is a
subprogram, the conventions are the same.
DATA DIVISION.
.
.
.
LINKAGE SECTION.
*********************************************************
* Declare each of the parameters *
*********************************************************
01 UDFPARM1 PIC S9(9) USAGE COMP.
01 UDFPARM2 PIC X(10).
.
.
.
*********************************************************
* Declare these variables for result parameters *
*********************************************************
01 UDFRESULT1 PIC X(10).
01 UDFRESULT2 PIC X(10).
.
.
.
*********************************************************
* Declare a null indicator for each parameter *
*********************************************************
01 UDF-IND1 PIC S9(4) USAGE COMP.
01 UDF-IND2 PIC S9(4) USAGE COMP.
.
.
.
*********************************************************
* Declare a null indicator for result parameter *
*********************************************************
01 UDF-RIND1 PIC S9(4) USAGE COMP.
01 UDF-RIND2 PIC S9(4) USAGE COMP.
.
.
.
*********************************************************
* Declare the SQLSTATE that can be set by the *
* user-defined function *
*********************************************************
01 UDF-SQLSTATE PIC X(5).
*********************************************************
* Declare the qualified function name *
*********************************************************
01 UDF-FUNC.
49 UDF-FUNC-LEN PIC 9(4) USAGE BINARY.
49 UDF-FUNC-TEXT PIC X(137).
*********************************************************
* Declare the specific function name *
*********************************************************
01 UDF-SPEC.
49 UDF-SPEC-LEN PIC 9(4) USAGE BINARY.
49 UDF-SPEC-TEXT PIC X(128).
PL/I: Figure 103 on page 275 shows the parameter conventions for a user-defined
scalar function that is written as a main program that receives two parameters and
returns one result. For a PL/I user-defined function that is a subprogram, the
conventions are the same.
Table 35 shows information you need when you use special registers in a
user-defined function.
| Table 35. Characteristics of special registers in a user-defined function
| (For each special register: the initial value when the INHERIT SPECIAL
| REGISTERS option is specified, the initial value when the DEFAULT
| SPECIAL REGISTERS option is specified, and whether the function can use
| a SET statement to modify the register.)
| CURRENT APPLICATION ENCODING SCHEME
| INHERIT: The value of bind option ENCODING for the user-defined
| function package1
| DEFAULT: The value of bind option ENCODING for the user-defined
| function package1
| Can use SET: Yes
| CURRENT DATE
| INHERIT: New value for each SQL statement in the user-defined function
| package2
| DEFAULT: New value for each SQL statement in the user-defined function
| package2
| Can use SET: Not applicable5
| CURRENT DEGREE
| INHERIT: Inherited from invoker3
| DEFAULT: The value of field CURRENT DEGREE on installation panel
| DSNTIP4
| Can use SET: Yes
| CURRENT LOCALE LC_CTYPE
| INHERIT: Inherited from invoker
| DEFAULT: The value of field LOCALE LC_CTYPE on installation panel
| DSNTIPF
| Can use SET: Yes
| CURRENT MEMBER
| INHERIT: New value for each SET host-variable=CURRENT MEMBER statement
| DEFAULT: New value for each SET host-variable=CURRENT MEMBER statement
| Can use SET: No
| CURRENT OPTIMIZATION HINT
| INHERIT: The value of bind option OPTHINT for the user-defined function
| package or inherited from invoker6
| DEFAULT: The value of bind option OPTHINT for the user-defined function
| package
| Can use SET: Yes
| CURRENT PACKAGESET
| INHERIT: Inherited from invoker4
| DEFAULT: Inherited from invoker4
| Can use SET: Yes
| CURRENT PATH
| INHERIT: The value of bind option PATH for the user-defined function
| package or inherited from invoker6
| DEFAULT: The value of bind option PATH for the user-defined function
| package
| Can use SET: Yes
| CURRENT PRECISION
| INHERIT: Inherited from invoker
| DEFAULT: The value of field DECIMAL ARITHMETIC on installation panel
| DSNTIP4
| Can use SET: Yes
| CURRENT RULES
| INHERIT: Inherited from invoker
| DEFAULT: The value of bind option SQLRULES for the user-defined
| function package
| Can use SET: Yes
| CURRENT SERVER
| INHERIT: Inherited from invoker
| DEFAULT: Inherited from invoker
| Can use SET: Yes
The scratchpad consists of a 4-byte length field, followed by the scratchpad area.
The definer can specify the length of the scratchpad area in the CREATE
FUNCTION statement. The specified length does not include the length field. The
default size is 100 bytes. DB2 initializes the scratchpad for each function to binary
zeros at the beginning of execution for each subquery of an SQL statement and
does not examine or change the content thereafter. On each invocation of the
user-defined function, DB2 passes the scratchpad to the user-defined function. You
can therefore use the scratchpad to preserve information between invocations of a
reentrant user-defined function.
Figure 104 on page 279 demonstrates how to enter information in a scratchpad for
a user-defined function defined like this:
CREATE FUNCTION COUNTER()
RETURNS INT
SCRATCHPAD
FENCED
NOT DETERMINISTIC
NO SQL
NO EXTERNAL ACTION
LANGUAGE C
PARAMETER STYLE DB2SQL
EXTERNAL NAME 'UDFCTR';
The scratchpad length is not specified, so the scratchpad has the default length of
100 bytes, plus 4 bytes for the length field. The user-defined function increments an
integer value and stores it in the scratchpad on each execution.
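Assuming the default layout described above (a 4-byte length field followed by a 100-byte area, zeroed before the first call), the COUNTER logic can be sketched in standalone C:

```c
#include <string.h>

/* Layout that DB2 passes for SCRATCHPAD: a 4-byte length field
   followed by the scratchpad area (default 100 bytes). The area is
   set to binary zeros before the first call of the statement.     */
struct scratchpad {
    int  length;          /* set by DB2 to the area length (100)   */
    char area[100];
};

/* The COUNTER function body: keep the running count in the first
   bytes of the area to preserve it between invocations.           */
int counter(struct scratchpad *sp)
{
    int count;
    memcpy(&count, sp->area, sizeof count);  /* read previous value */
    count++;
    memcpy(sp->area, &count, sizeof count);  /* keep for next call  */
    return count;
}
```

Because each parallel task receives its own zeroed scratchpad, a second task's count restarts at 1, which is the source of the unpredictable results described under "Parallelism considerations".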
To access transition tables in a user-defined function, use table locators, which are
pointers to the transition tables. You declare table locators as input parameters in
the CREATE FUNCTION statement using the TABLE LIKE table-name AS
LOCATOR clause. See Chapter 5 of DB2 SQL Reference for more information.
The five basic steps to accessing transition tables in a user-defined function are:
1. Declare input parameters to receive table locators. You must define each
parameter that receives a table locator as an unsigned 4-byte integer.
2. Declare table locators. You can declare table locators in assembler, C, C++,
COBOL, PL/I, and in an SQL procedure compound statement. The syntax for
declaring table locators in C, C++, COBOL, and PL/I is described in “Chapter 9.
Embedding SQL statements in host languages” on page 107. The syntax for
declaring table locators in an SQL procedure is described in Chapter 6 of DB2
SQL Reference.
3. Declare a cursor to access the rows in each transition table.
4. Assign the input parameter values to the table locators.
5. Access the transition tables.
The following examples show how a user-defined function that is written in C, C++,
COBOL, or PL/I accesses a transition table for a trigger. The transition table,
NEWEMP, contains modified rows of the employee sample table. The trigger is
defined like this:
CREATE TRIGGER EMPRAISE
AFTER UPDATE ON EMP
REFERENCING NEW TABLE AS NEWEMPS
FOR EACH STATEMENT MODE DB2SQL
BEGIN ATOMIC
VALUES (CHECKEMP(TABLE NEWEMPS));
END;
Assembler: Figure 105 on page 281 shows how an assembler program accesses
rows of transition table NEWEMPS.
************************************************************
* Declare table locator host variable TRIGTBL *
************************************************************
TRIGTBL SQL TYPE IS TABLE LIKE EMP AS LOCATOR
************************************************************
* Declare a cursor to retrieve rows from the transition *
* table *
************************************************************
EXEC SQL DECLARE C1 CURSOR FOR X
SELECT LASTNAME FROM TABLE(:TRIGTBL LIKE EMP) X
WHERE SALARY > 100000
************************************************************
* Copy table locator for trigger transition table *
************************************************************
L R2,TABLOC GET ADDRESS OF LOCATOR
L R2,0(0,R2) GET LOCATOR VALUE
ST R2,TRIGTBL
EXEC SQL OPEN C1
EXEC SQL FETCH C1 INTO :NAME
.
.
.
Figure 105. How an assembler user-defined function accesses a transition table (Part 1 of 2)
NAME     DS    CL24
.
.
.
DS 0D
PROGSIZE EQU *-PROGAREA DYNAMIC WORKAREA SIZE
PARMAREA DSECT
TABLOC   DS    A        INPUT PARAMETER FOR TABLE LOCATOR
.
.
.
END CHECKEMP
Figure 105. How an assembler user-defined function accesses a transition table (Part 2 of 2)
C or C++: Figure 106 shows how a C or C++ program accesses rows of transition
table NEWEMPS.
/**********************************************************/
/* Declare table locator host variable trig_tbl_id */
/**********************************************************/
EXEC SQL BEGIN DECLARE SECTION;
SQL TYPE IS TABLE LIKE EMP AS LOCATOR trig_tbl_id;
char name[25];
EXEC SQL END DECLARE SECTION;
.
.
.
/**********************************************************/
/* Declare a cursor to retrieve rows from the transition */
/* table */
/**********************************************************/
EXEC SQL DECLARE C1 CURSOR FOR
SELECT NAME FROM TABLE(:trig_tbl_id LIKE EMP)
WHERE SALARY > 100000;
/**********************************************************/
/* Fetch a row from transition table */
/**********************************************************/
EXEC SQL OPEN C1;
EXEC SQL FETCH C1 INTO :name;
.
.
.
COBOL: Figure 107 on page 283 shows how a COBOL program accesses rows of
transition table NEWEMPS.
LINKAGE SECTION.
*********************************************************
* Declare table locator host variable TRIG-TBL-ID *
*********************************************************
01 TRIG-TBL-ID SQL TYPE IS TABLE LIKE EMP AS LOCATOR.
.
.
.
*********************************************************
* Declare cursor to retrieve rows from transition table *
*********************************************************
EXEC SQL DECLARE C1 CURSOR FOR
SELECT NAME FROM TABLE(:TRIG-TBL-ID LIKE EMP)
WHERE SALARY > 100000 END-EXEC.
*********************************************************
* Fetch a row from transition table *
*********************************************************
EXEC SQL OPEN C1 END-EXEC.
EXEC SQL FETCH C1 INTO :NAME END-EXEC.
.
.
.
PROG-END.
GOBACK.
PL/I: Figure 108 on page 284 shows how a PL/I program accesses rows of
transition table NEWEMPS.
/****************************************************/
/* Declare a cursor to retrieve rows from the */
/* transition table */
/****************************************************/
EXEC SQL DECLARE C1 CURSOR FOR
SELECT NAME FROM TABLE(:TRIG_TBL_ID LIKE EMP)
WHERE SALARY > 100000;
/****************************************************/
/* Retrieve rows from the transition table */
/****************************************************/
EXEC SQL OPEN C1;
EXEC SQL FETCH C1 INTO :NAME;
.
.
.
END CHECK_EMP;
When the primary program of a user-defined function calls another program, DB2
uses the CURRENT PACKAGESET special register to determine the collection to
search for the called program's package. The primary program can change this
collection ID by executing the statement SET CURRENT PACKAGESET. If the
value of CURRENT PACKAGESET is blank, DB2 uses the method described in
“The order of search” on page 416 to search for the package.
To maximize the number of user-defined functions and stored procedures that can
run concurrently, follow these preparation recommendations:
v Ask the system administrator to set the region size parameter in the startup
procedures for the WLM-established stored procedures address spaces to
REGION=0. This lets an address space obtain the largest possible amount of
storage below the 16-MB line.
The definer can list these options as values of the RUN OPTIONS parameter of
CREATE FUNCTION, or the system administrator can establish these options as
defaults during Language Environment installation.
This should be the first command that you enter from the terminal or include in
your commands file.
This command directs output from your debugging session to the log data set
you defined in step 2. For example, if you defined a log data set with DD name
INSPLOG in the start-up procedure for the stored procedures address space,
the first command should be:
SET LOG ON FILE INSPLOG;
You can combine the Language Environment run-time TEST option with
CEETEST calls. For example, you might want to use TEST to name the
commands data set but use CEETEST calls to control when the Debug Tool
takes control.
Driver applications: You can write a small driver application that calls the
user-defined function as a subprogram and passes the parameter list for the
user-defined function. You can then test and debug the user-defined function as a
normal DB2 application under TSO. You can then use TSO TEST and other
commonly used debugging tools.
SQL INSERT: You can use SQL to insert debugging information into a DB2 table.
This allows other machines in the network (such as a workstation) to easily access
the data in the table using DRDA access.
DB2 discards the debugging information if the application executes the ROLLBACK
statement. To prevent the loss of the debugging data, code the calling application
so that it retrieves the diagnostic data before executing the ROLLBACK statement.
| See “Defining a user-defined function” on page 244 and Chapter 5 of DB2 SQL
| Reference for a description of the parameters that you can specify in the CREATE
| FUNCTION statement for an SQL scalar function.
| To prepare an SQL scalar function for execution, you execute the CREATE
| FUNCTION statement, either statically or dynamically.
See the following sections for details you should know before you invoke a
user-defined function:
v “Syntax for user-defined function invocation”
v “Ensuring that DB2 executes the intended user-defined function” on page 290
v “Casting of user-defined function arguments” on page 296
v “What happens when a user-defined function abnormally terminates” on page 297
The syntax for invoking a user-defined scalar function is:

   function-name ( [ ALL | DISTINCT ] expression , ...
                 | TABLE transition-table-name )

Figure 109. Syntax for user-defined scalar function invocation
Use the syntax shown in Figure 110 on page 290 when you invoke a table function:
   TABLE ( function-name ( expression , ... ) ) correlation-clause

correlation-clause:
   [ AS ] correlation-name [ ( column-name , ... ) ]
See Chapter 2 of DB2 SQL Reference for more information about the syntax of
user-defined function invocation.
| If two or more candidates fit the unqualified function invocation equally well
| because the function invocation contains parameter markers, DB2 issues an
| error.
The remainder of this section discusses details of the function resolution process
and gives suggestions on how you can ensure that DB2 picks the right function.
To determine whether a data type is promotable to another data type, see Table 36.
The first column lists data types in function invocations. The second column lists
data types to which the types in the first column can be promoted, in order from
best fit to worst fit. For example, suppose that in this statement, the data type of A
is SMALLINT:
SELECT USER1.ADDTWO(A) FROM TABLEA;
If the data types of all parameters in a function instance are the same as those in
the function invocation, that function instance is a best fit. If no exact match exists,
DB2 compares data types in the parameter lists from left to right, using this method:
1. DB2 compares the data types of the first parameter in the function invocation to
the data type of the first parameter in each function instance.
| If the first parameter in the invocation is an untyped parameter marker, DB2
| does not do the comparison.
2. For the first parameter, if one function instance has a data type that fits the
function invocation better than the data types in the other instances, that
function is a best fit. Table 36 on page 292 shows the possible fits for each data
type, in best-to-worst order.
| 3. If the data types of the first parameter are the same for all function instances, or
| if the first parameter in the function invocation is an untyped parameter marker,
| DB2 repeats this process for the next parameter. DB2 continues this process for
| each parameter until it finds a best fit.
Candidate 2:
CREATE FUNCTION FUNC(VARCHAR(20),REAL,DOUBLE)
RETURNS DECIMAL(9,2)
EXTERNAL NAME 'FUNC2'
PARAMETER STYLE DB2SQL
LANGUAGE COBOL;
DB2 compares the data type of the first parameter in the user-defined function
invocation to the data types of the first parameters in the candidate functions.
Because the first parameter in the invocation has data type VARCHAR, and both
candidate functions have VARCHAR as the data type of their first parameter, DB2
proceeds to the second parameter.
The data type of the second parameter in the invocation is SMALLINT. INTEGER,
which is the data type of candidate 1, is a better fit to SMALLINT than REAL, which
is the data type of candidate 2. Therefore, candidate 1 is DB2's choice for
execution.
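The left-to-right comparison that chooses candidate 1 can be sketched in standalone C. This is illustrative only: the promotion chain below is a small subset of Table 36, and the helper names are invented.

```c
#include <string.h>

/* One numeric promotion chain, in best-to-worst order. Distance 0
   is an exact match; a smaller distance is a better fit. */
static const char *chain[] = {"SMALLINT", "INTEGER", "DECIMAL", "REAL", "DOUBLE"};

int promo_distance(const char *from, const char *to)
{
    int start = -1;
    for (int i = 0; i < 5; i++)
        if (strcmp(chain[i], from) == 0) start = i;
    if (start < 0) return -1;                 /* type not in this chain */
    for (int i = start; i < 5; i++)
        if (strcmp(chain[i], to) == 0) return i - start;
    return -1;                                /* not promotable         */
}

/* Compares two candidate parameter lists against an invocation,
   parameter by parameter from left to right, as in steps 1-3.
   Returns 1 or 2 for the better-fitting candidate, 0 for a tie. */
int better_candidate(const char *inv[], const char *c1[],
                     const char *c2[], int n)
{
    for (int i = 0; i < n; i++) {
        int d1 = promo_distance(inv[i], c1[i]);
        int d2 = promo_distance(inv[i], c2[i]);
        if (d1 < 0) d1 = 999;                 /* not promotable: worst  */
        if (d2 < 0) d2 = 999;
        if (d1 != d2) return d1 < d2 ? 1 : 2; /* smaller distance wins  */
    }
    return 0;
}
```

For the FUNC example above, the second parameter decides the outcome: SMALLINT promotes to INTEGER at distance 1 but to REAL at distance 3, so candidate 1 is the better fit.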
Before you use EXPLAIN to obtain information about function resolution, create
DSN_FUNCTION_TABLE. The table definition looks like this:
CREATE TABLE DSN_FUNCTION_TABLE
(QUERYNO INTEGER NOT NULL WITH DEFAULT,
QBLOCKNO INTEGER NOT NULL WITH DEFAULT,
APPLNAME CHAR(8) NOT NULL WITH DEFAULT,
PROGNAME CHAR(8) NOT NULL WITH DEFAULT,
COLLID CHAR(18) NOT NULL WITH DEFAULT,
GROUP_MEMBER CHAR(8) NOT NULL WITH DEFAULT,
EXPLAIN_TIME TIMESTAMP NOT NULL WITH DEFAULT,
SCHEMA_NAME CHAR(8) NOT NULL WITH DEFAULT,
FUNCTION_NAME CHAR(18) NOT NULL WITH DEFAULT,
SPEC_FUNC_NAME CHAR(18) NOT NULL WITH DEFAULT,
FUNCTION_TYPE CHAR(2) NOT NULL WITH DEFAULT,
VIEW_CREATOR CHAR(8) NOT NULL WITH DEFAULT,
VIEW_NAME CHAR(18) NOT NULL WITH DEFAULT,
PATH VARCHAR(254) NOT NULL WITH DEFAULT,
FUNCTION_TEXT VARCHAR(254) NOT NULL WITH DEFAULT);
When you invoke a user-defined function that is sourced on another function, DB2
casts your parameters to the data types and lengths of the sourced function.
The following example demonstrates what happens when the parameter definitions
of a sourced function differ from those of the function on which it is sourced.
Now suppose that PRICE2 has the DECIMAL(9,2) value 0001234.56. DB2 must
first assign this value to the data type of the input parameter in the definition of
TAXFN2, which is DECIMAL(8,2). The input parameter value then becomes
001234.56. Next, DB2 casts the parameter value to a source function parameter,
which is DECIMAL(6,0). The parameter value then becomes 001234. (When you
cast a value, that value is truncated, rather than rounded.)
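The effect of that truncation can be seen in a small standalone C sketch. The scaled-integer representation and the helper name are inventions for illustration; DB2 performs this internally.

```c
/* Standalone illustration of the truncation rule described above:
   casts drop fractional digits rather than rounding. Decimal values
   are modeled as scaled integers: 123456 with scale 2 represents
   1234.56. This is an invented helper, not a DB2 API. */
long cast_decimal_scale(long value, int from_scale, int to_scale)
{
    while (from_scale > to_scale) {
        value /= 10;        /* integer division drops the digit */
        from_scale--;
    }
    return value;
}
```

Casting 1234.56 (scale 2) to scale 0 yields 1234, not 1235, matching the DECIMAL(8,2) to DECIMAL(6,0) example above.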
Although trigger activations count in the levels of SQL statement nesting, the
previous restrictions on SQL statements do not apply to SQL statements that are
executed in the trigger body. For example, suppose that trigger TR1 is defined on
table T1:
CREATE TRIGGER TR1
AFTER INSERT ON T1
FOR EACH STATEMENT MODE DB2SQL
BEGIN ATOMIC
UPDATE T1 SET C1=1;
END
Now suppose that you execute an INSERT statement on table T1 at level 1 of nesting.
Although the UPDATE statement in the trigger body is at level 2 of nesting and
modifies the same table that the triggering statement updates, DB2 can execute the
INSERT statement successfully.
The access path that DB2 chooses for a predicate determines whether a
user-defined function in that predicate is executed. To ensure that DB2 executes the
external action for each row of the result set, put the user-defined function
invocation in the SELECT list.
The results can differ even more, depending on the order in which DB2 retrieves
the rows from the table. Suppose that an ascending index is defined on column C2.
Then DB2 retrieves row 3 first, row 1 second, and row 2 third. This means that row
1 satisfies the predicate WHERE COUNTER()=2. The value of COUNTER in the
select list is again 1, so the result of the query in this case is:
COUNTER() C1 C2
--------- -- --
1 1 b
| A similar situation occurs with scrollable cursors and nondeterministic functions. The
| result of a nondeterministic user-defined function can be different each time you
| execute the user-defined function. If the select list of a scrollable cursor contains a
| nondeterministic user-defined function, and you use that cursor to retrieve the same
| row multiple times, the results can differ each time you retrieve the row.
You must define a column of type ROWID in the table because tables with any type
of LOB columns require a ROWID column, and internally, the VIDEO_CATALOG
table contains two LOB columns. For more information on LOB data, see
“Chapter 13. Programming for large objects (LOBs)” on page 229.
After you define distinct types and columns of those types, you can use those data
types in the same way you use built-in types. You can use the data types in
assignments, comparisons, function invocations, and stored procedure calls.
However, when you assign one column value to another or compare two column
values, those values must be of the same distinct type. For example, you must
assign a column value of type VIDEO to a column of type VIDEO, and you can
compare a column value of type AUDIO only to a column of type AUDIO. When you
assign a host variable value to a column with a distinct type, you can use any host
data type that is compatible with the source data type of the distinct type. For
example, to receive an AUDIO or VIDEO value, you can define a host variable like
this:
SQL TYPE IS BLOB (1M) HVAV;
For example, if you have defined a user-defined function to convert U.S. dollars to
euro currency, you do not want anyone to use this same user-defined function to
convert Japanese Yen to euros because the U.S. dollars to euros function returns
the wrong amount. Suppose you define three distinct types:
CREATE DISTINCT TYPE US_DOLLAR AS DECIMAL(9,2) WITH COMPARISONS;
CREATE DISTINCT TYPE EURO AS DECIMAL(9,2) WITH COMPARISONS;
CREATE DISTINCT TYPE JAPANESE_YEN AS DECIMAL(9,2) WITH COMPARISONS;
DB2 does not let you compare data of a distinct type directly to data of its source
type. However, you can compare a distinct type to its source type by using a cast
function.
For example, suppose you want to know which products sold more than US
$100 000.00 in the US in the month of July in 1992 (7/92). Because you cannot
compare data of type US_DOLLAR with instances of data of the source type of
US_DOLLAR (DECIMAL) directly, you must use a cast function to cast data from
DECIMAL to US_DOLLAR or from US_DOLLAR to DECIMAL. Whenever you
create a distinct type, DB2 creates two cast functions, one to cast from the source
type to the distinct type and the other to cast from the distinct type to the source
type. For distinct type US_DOLLAR, DB2 creates a cast function called DECIMAL
and a cast function called US_DOLLAR. When you compare an object of type
US_DOLLAR to an object of type DECIMAL, you can use one of those cast
functions to make the data types identical for the comparison. Suppose table
US_SALES is defined like this:
CREATE TABLE US_SALES
(PRODUCT_ITEM INTEGER,
MONTH INTEGER CHECK (MONTH BETWEEN 1 AND 12),
YEAR INTEGER CHECK (YEAR > 1985),
TOTAL US_DOLLAR);
For example, this query uses the US_DOLLAR cast function so that the comparison
is between two US_DOLLAR values:
SELECT PRODUCT_ITEM
  FROM US_SALES
  WHERE TOTAL > US_DOLLAR(100000.00)
    AND MONTH = 7
    AND YEAR = 1992;
The casting satisfies the requirement that the compared data types are identical.
You cannot use host variables in statements that you prepare for dynamic
execution. As explained in “Using parameter markers” on page 508, you can
substitute parameter markers for host variables when you prepare a statement, and
then use host variables when you execute the statement.
If you use a parameter marker in a predicate of a query, and the column to which
you compare the value represented by the parameter marker is of a distinct type,
you must cast the parameter marker to the distinct type, or cast the column to its
source type.
Alternatively, you can cast the parameter marker to the distinct type:
SELECT FIRST_NAME, LAST_NAME, PHONE_NUM FROM CUSTOMER
WHERE CUST_NUM = CAST (? AS CNUM)
If you need to assign a value of one distinct type to a column of another distinct
type, a function must exist that converts the value from one type to another.
Because DB2 provides cast functions only between distinct types and their source
types, you must write the function to convert from one distinct type to another.
You need to insert values from the TOTAL column in JAPAN_SALES into the
TOTAL column of JAPAN_SALES_98. Because INSERT statements follow
assignment rules, DB2 does not let you insert the values directly from one column
to the other because the columns are of different distinct types. Suppose that a
user-defined function called US_DOLLAR has been written that accepts values of
type JAPANESE_YEN as input and returns values of type US_DOLLAR. You can
then use this function to insert values into the JAPAN_SALES_98 table:
INSERT INTO JAPAN_SALES_98
SELECT PRODUCT_ITEM, US_DOLLAR(TOTAL)
FROM JAPAN_SALES
WHERE YEAR = 1998;
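As a rough illustration, the body of such a conversion function might look like the following C sketch. This is not the actual routine: real DECIMAL parameters arrive in packed-decimal form, and PARAMETER STYLE DB2SQL appends trailing SQLSTATE, name, and message arguments. Here the values are assumed to be DOUBLEs, and the exchange rate is invented.

```c
/* Hypothetical body for a yen-to-dollars conversion function.
   Assumes DOUBLE parameters rather than DECIMAL and an invented,
   fixed exchange rate; indicator variables follow the usual
   convention (negative means null). */
void yen_to_us_dollar(double *yen, short *yen_ind,
                      double *usd, short *usd_ind)
{
    const double RATE = 0.0095;        /* assumed rate, illustration only */
    if (*yen_ind < 0) {                /* propagate a null input          */
        *usd_ind = -1;
        return;
    }
    *usd = *yen * RATE;
    *usd_ind = 0;
}
```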
You can assign a column value of a distinct type to a host variable if you can assign
a column value of the distinct type's source type to the host variable. In the
following example, you can assign SIZECOL1 and SIZECOL2, which have distinct
type SIZE, to host variables of type double and short because the source type of
SIZE, which is INTEGER, can be assigned to host variables of type double or short.
EXEC SQL BEGIN DECLARE SECTION;
double hv1;
short hv2;
EXEC SQL END DECLARE SECTION;
CREATE DISTINCT TYPE SIZE AS INTEGER;
CREATE TABLE TABLE1 (SIZECOL1 SIZE, SIZECOL2 SIZE);
.
.
.
In this example, values of host variable hv2 can be assigned to columns SIZECOL1
and SIZECOL2, because C data type short is equivalent to DB2 data type
SMALLINT, and SMALLINT is promotable to data type INTEGER. However, values
of hv1 cannot be assigned to SIZECOL1 and SIZECOL2, because C data type
double, which is equivalent to DB2 data type DOUBLE, is not promotable to data
type INTEGER.
Because the result type of both US_DOLLAR functions is US_DOLLAR, you have
satisfied the requirement that the distinct types of the combined columns are the
same.
The HOUR function takes only the TIME or TIMESTAMP data type as an argument,
so you need a sourced function that is based on the HOUR function that accepts
the FLIGHT_TIME data type. You might declare a function like this:
CREATE FUNCTION HOUR(FLIGHT_TIME)
RETURNS INTEGER
SOURCE SYSIBM.HOUR(TIME);
Example: Using an infix operator with distinct type arguments: Suppose you
want to add two values of type US_DOLLAR. Before you can do this, you must
define a version of the + function that accepts values of type US_DOLLAR as
operands:
CREATE FUNCTION "+"(US_DOLLAR,US_DOLLAR)
RETURNS US_DOLLAR
SOURCE SYSIBM."+"(DECIMAL(9,2),DECIMAL(9,2));
Because the US_DOLLAR type is based on the DECIMAL(9,2) type, the source
function must be the version of + with arguments of type DECIMAL(9,2).
This means that EURO_TO_US accepts only the EURO type as input. Therefore, if
you want to call EURO_TO_US with a constant or host variable argument, you must
cast that argument to distinct type EURO:
SELECT * FROM US_SALES
WHERE TOTAL = EURO_TO_US(EURO(:H1));
SELECT * FROM US_SALES
WHERE TOTAL = EURO_TO_US(EURO(10000));
Suppose you keep electronic mail documents that are sent to your company in a
DB2 table. The DB2 data type of an electronic mail document is a CLOB, but you
define it as a distinct type so that you can control the types of operations that are
performed on the electronic mail. The distinct type is defined like this:
CREATE DISTINCT TYPE E_MAIL AS CLOB(5M);
You have also defined and written user-defined functions to search for and return
the following information about an electronic mail document:
v Subject
v Sender
v Date sent
v Message content
v Indicator of whether the document contains a user-specified string
The user-defined function definitions look like this:
CREATE FUNCTION SUBJECT(E_MAIL)
RETURNS VARCHAR(200)
EXTERNAL NAME 'SUBJECT'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
DETERMINISTIC
NO EXTERNAL ACTION;
CREATE FUNCTION SENDER(E_MAIL)
RETURNS VARCHAR(200)
EXTERNAL NAME 'SENDER'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
DETERMINISTIC
NO EXTERNAL ACTION;
CREATE FUNCTION SENDING_DATE(E_MAIL)
RETURNS DATE
EXTERNAL NAME 'SENDDATE'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
DETERMINISTIC
NO EXTERNAL ACTION;
CREATE FUNCTION CONTENTS(E_MAIL)
RETURNS CLOB(1M)
EXTERNAL NAME 'CONTENTS'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
DETERMINISTIC
NO EXTERNAL ACTION;
CREATE FUNCTION CONTAINS(E_MAIL, VARCHAR (200))
RETURNS INTEGER
EXTERNAL NAME 'CONTAINS'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
DETERMINISTIC
NO EXTERNAL ACTION;
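As an illustration of what the routine behind EXTERNAL NAME 'CONTAINS' might do, the following C sketch searches a length-delimited document for a string. The function body and its simplified argument list are assumptions; a real PARAMETER STYLE DB2SQL routine also receives indicator variables, an SQLSTATE, and name and message arguments.

```c
#include <string.h>

/* Hypothetical sketch only. A CLOB value is not null-terminated,
   so the document is passed as a pointer plus an explicit length.
   Returns 1 if the search string occurs in the document, else 0. */
int contains(const char *doc, unsigned long doc_len, const char *search)
{
    unsigned long n = strlen(search);
    if (n == 0 || n > doc_len)
        return n == 0;                  /* empty string always matches */
    for (unsigned long i = 0; i + n <= doc_len; i++)
        if (memcmp(doc + i, search, n) == 0)
            return 1;
    return 0;
}
```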
Because the table contains a column with a source data type of CLOB, the table
requires a ROWID column and an associated LOB table space, auxiliary table, and
index on the auxiliary table. Use statements like this to define the LOB table space,
the auxiliary table, and the index:
CREATE LOB TABLESPACE DOCTSLOB
LOG YES
GBPCACHE SYSTEM;
To populate the document table, you write code that executes an INSERT statement
to put the first part of a document in the table, and then executes multiple UPDATE
statements to concatenate the remaining parts of the document. For example:
EXEC SQL BEGIN DECLARE SECTION;
char hv_current_time[26];
SQL TYPE IS CLOB (1M) hv_doc;
EXEC SQL END DECLARE SECTION;
/* Determine the current time and put this value */
/* into host variable hv_current_time. */
/* Read up to 1 MB of document data from a file */
/* into host variable hv_doc.                       */
.
.
.
Now that the data is in the table, you can execute queries to learn more about the
documents. For example, you can execute this query to determine which
documents contain the word 'performance':
SELECT SENDER(A_DOCUMENT), SENDING_DATE(A_DOCUMENT),
SUBJECT(A_DOCUMENT)
FROM DOCUMENTS
WHERE CONTAINS(A_DOCUMENT,'performance') = 1;
Because the electronic mail documents can be very large, you might want to use
LOB locators to manipulate the document data instead of fetching all of a document
into a host variable. You can use a LOB locator on any distinct type that is defined
on one of the LOB types. The following example shows how you can cast a LOB
locator as a distinct type, and then use the result in a user-defined function that
takes a distinct type as an argument:
Figure 112 illustrates the program preparation process when you use the DB2
precompiler. Figure 113 on page 316 illustrates the program preparation process
when you use an SQL statement coprocessor. “Chapter 20. Preparing an
application program to run” on page 397 supplies specific details about
accomplishing these steps.
After you have processed SQL statements in your source program, you create a
load module, possibly one or more packages, and an application plan. Creating a
load module involves compiling and link-editing the modified source code that is
produced by the precompiler. Creating a package or an application plan, a process
unique to DB2, involves binding one or more DBRMs.
A few options, however, can affect the way that you write your program. For
example, you need to know if you are using NOFOR or STDSQL(YES) before you
begin coding.
| Before you begin writing your program, review the list of options in Table 48 on
| page 403. You can specify any of those options whether you use the DB2
| precompiler or an SQL statement coprocessor. However, the SQL statement
| coprocessor might ignore certain options because there are compiler options that
| provide the same information.
Binding or rebinding a package or plan in use: Packages and plans are locked
when you bind or run them. Packages that run under a plan are not locked until the
plan uses them. If you run a plan and some packages in the package list never run,
those packages are never locked.
You cannot bind or rebind a package or a plan while it is running. However, you can
bind a different version of a package that is running.
Options for binding and rebinding: Several of the options of BIND PACKAGE
and BIND PLAN can affect your program design. For example, you can use a bind
option to ensure that a package or plan can run only from a particular CICS
connection or a particular IMS region—you do not have to enforce this in your code.
Several other options are discussed at length in later chapters, particularly the ones
that affect your program’s use of locks, such as the option ISOLATION. Before you
finish reading this chapter, you might want to review those options in Chapter 2 of
DB2 Command Reference.
Input to binding the plan can include DBRMs only, a package list only, or a
combination of the two. When choosing one of those alternatives for your
application, consider the impact of rebinding and see “Planning for changes to your
application” on page 319.
Binding all DBRMs to a plan is suitable for small applications that are unlikely to
change or that require all resources to be acquired when the plan is allocated rather
than when your program first uses them.
Advantages of packages
You must decide how to use packages based on your application design and your
operational objectives. Keep in mind the following:
Ease of maintenance: When you use packages, you do not need to bind the entire
plan again when you change one SQL statement. You need to bind only the
package associated with the changed SQL statement.
Flexibility in using bind options: The options of BIND PLAN apply to all DBRMs
bound directly to the plan. The options of BIND PACKAGE apply only to the single
DBRM bound to that package. The package options need not all be the same as
the plan options, and they need not be the same as the options for other packages
used by the same plan.
Flexibility in using name qualifiers: You can use a bind option to name a qualifier
for the unqualified object names in SQL statements in a plan or package. By using
packages, you can use different qualifiers for SQL statements in different parts of
your application. By rebinding, you can redirect your SQL statements, for example,
from a test table to a production table.
With packages, you probably do not need dynamic plan selection and its
accompanying exit routine. A package listed within a plan is not accessed until
it is executed. However, it is possible to use dynamic plan selection and
packages together. Doing so can reduce the number of plans in an
application, and hence the effort of maintaining the dynamic plan exit routine.
See “Using packages with dynamic plan selection” on page 423 for information
on using packages with dynamic plan selection.
A change to your program probably invalidates one or more of your packages and
perhaps your entire plan. For some changes, you must bind a new object; for
others, rebinding is sufficient.
v To bind a new plan or package, other than a trigger package, use the
subcommand BIND PLAN or BIND PACKAGE with the option
ACTION(REPLACE).
To bind a new trigger package, recreate the trigger associated with the trigger
package.
v To rebind an existing plan or package, other than a trigger package, use the
REBIND subcommand.
To rebind a trigger package, use the REBIND TRIGGER PACKAGE subcommand.
Table 37 tells which action particular types of change require. For more information
on trigger packages, see “Working with trigger packages” on page 322.
If you want to change the bind options in effect when the plan or package runs,
review the descriptions of those options in Chapter 2 of DB2 Command Reference.
Not all options of BIND are also available on REBIND.
A plan or package can also become invalid for reasons that do not depend on
operations in your program: for example, if an index is dropped that is used as an
access path by one of your queries. In those cases, DB2 might rebind the plan or
package automatically, the next time it is used. (For details about that operation,
see “Automatic rebinding” on page 322.)
Table 37. Changes requiring BIND or REBIND

Change made:                        Minimum action necessary:
Drop a table, index, or other       If a table with a trigger is dropped, recreate the
object, and recreate the object     trigger if you recreate the table. Otherwise, no
                                    change is required; automatic rebind is attempted
                                    at the next run.
Revoke an authorization to use      None required; automatic rebind is attempted at
an object                           the next run. Automatic rebind fails if
                                    authorization is still not available; then you
                                    must issue REBIND for the package or plan.
Run RUNSTATS to update catalog      Issue REBIND for the package or plan to possibly
statistics                          change the access path chosen.
Dropping objects
If you drop an object that a package depends on, the following occurs:
v If the package is not appended to any running plan, the package becomes
invalid.
v If the package is appended to a running plan, and the drop occurs outside of that
plan, the object is not dropped, and the package does not become invalid.
v If the package is appended to a running plan, and the drop occurs within that
plan, the package becomes invalid.
In all cases, the plan does not become invalid unless it has a DBRM referencing
the dropped object. If the package or plan becomes invalid, automatic rebind occurs
the next time the package or plan is allocated.
Rebinding a package
Table 38 clarifies which packages are bound, depending on how you specify
collection-id (coll-id) package-id (pkg-id), and version-id (ver-id) on the REBIND
PACKAGE subcommand. For syntax and descriptions of this subcommand, see
Chapter 2 of DB2 Command Reference.
REBIND PACKAGE does not apply to packages for which you do not have the
BIND privilege. An asterisk (*) used as an identifier for collections, packages, or
versions does not apply to packages at remote sites.
Table 38. Behavior of REBIND PACKAGE specification. “All” means all collections, packages,
or versions at the local DB2 server for which the authorization ID that issues the command
has the BIND privilege.

INPUT                    Collections     Packages     Versions
                         affected        affected     affected
*                        all             all          all
*.*.(*)                  all             all          all
*.*                      all             all          all
*.*.(ver-id)             all             all          ver-id
*.*.()                   all             all          empty string
coll-id.*                coll-id         all          all
coll-id.*.(*)            coll-id         all          all
coll-id.*.(ver-id)       coll-id         all          ver-id
coll-id.*.()             coll-id         all          empty string
coll-id.pkg-id.(*)       coll-id         pkg-id       all
The following example shows the options for rebinding a package at the remote
location, SNTERSA. The collection is GROUP1, the package ID is PROGA, and the
version ID is V1. The connection types shown in the REBIND subcommand replace
connection types specified on the original BIND subcommand. For information on
the REBIND subcommand options, see DB2 Command Reference.
REBIND PACKAGE(SNTERSA.GROUP1.PROGA.(V1)) ENABLE(CICS,REMOTE)
You can use the asterisk on the REBIND subcommand for local packages, but not
for packages at remote sites. Any of the following commands rebinds all versions of
all packages in all collections, at the local DB2 system, for which you have the
BIND privilege.
Either of the following commands rebinds all versions of all packages in the local
collection LEDGER for which you have the BIND privilege.
Either of the following commands rebinds the empty string version of the package
DEBIT in all collections, at the local DB2 system, for which you have the BIND
privilege.
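The wildcard forms in Table 38 suggest sketches like the following for the three cases above (derived from the table rather than quoted from the command reference):

```
REBIND PACKAGE(*)
REBIND PACKAGE(*.*.(*))

REBIND PACKAGE(LEDGER.*)
REBIND PACKAGE(LEDGER.*.(*))

REBIND PACKAGE(*.DEBIT.())
```

The first pair rebinds all versions of all packages in all collections; the second pair restricts the rebind to collection LEDGER; the last command names the empty string version of DEBIT in every collection.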
Rebinding a plan
Using the PKLIST keyword replaces any previously specified package list. Omitting
the PKLIST keyword allows the use of the previous package list for rebinding. Using
the NOPKLIST keyword deletes any package list specified when the plan was
previously bound.
The following example rebinds PLANA and changes the package list.
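A sketch of such a command, assuming a hypothetical package list GROUP1.*:

```
REBIND PLAN(PLANA) PKLIST(GROUP1.*)
```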
For a description of the technique and several examples of its use, see
“Appendix E. REBIND subcommands for lists of plans or packages” on page 915.
As with any other package, DB2 marks a trigger package invalid when you drop a
table, index, or view on which the trigger package depends. DB2 executes an
automatic rebind the next time the trigger activates. However, if the automatic
rebind fails, DB2 marks the trigger package inoperative.
Unlike other packages, a trigger package is freed if you drop the table on which the
trigger is defined, so you can recreate the trigger package only by recreating the
table and the trigger.
You can use the subcommand REBIND TRIGGER PACKAGE to rebind a trigger
package that DB2 has marked inoperative. You can also use REBIND TRIGGER
PACKAGE to change the option values with which DB2 originally bound the trigger
package. The default values for the options that you can change are:
v CURRENTDATA(YES)
v EXPLAIN(YES)
v FLAG(I)
v ISOLATION(RR)
v IMMEDWRITE(NO)
v RELEASE(COMMIT)
When you run REBIND TRIGGER PACKAGE, you can change only the values of
options CURRENTDATA, EXPLAIN, FLAG, IMMEDWRITE, ISOLATION, and
RELEASE.
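For example, a rebind that changes the isolation level of a trigger package might be sketched as follows (the schema name MYSCHEMA and trigger name MYTRIG are hypothetical):

```
REBIND TRIGGER PACKAGE(MYSCHEMA.MYTRIG) ISOLATION(CS)
```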
Automatic rebinding
Automatic rebind might occur if an authorized user invokes a plan or package when
the attributes of the data on which the plan or package depends change, or if the
environment in which the package executes changes. Whether the automatic rebind
occurs depends on the value of the field AUTO BIND on installation panel
DSNTIPO. The options used for an automatic rebind are the options used during
the most recent bind process.
In the following cases, DB2 might automatically rebind a plan or package that has
not been marked as invalid:
v A plan or package is bound in a different release of DB2 from the release in
which it was first used.
v A plan or package has a location dependency and runs at a location other than
the one at which it was bound. This can happen when members of a data
sharing group are defined with location names, and a package runs on a different
member from the one on which it was bound.
Whether EXPLAIN runs during automatic rebind depends on the value of the field
EXPLAIN PROCESSING on installation panel DSNTIPO, and on whether you
specified EXPLAIN(YES). Automatic rebind fails for all EXPLAIN errors except
“PLAN_TABLE not found.”
The SQLCA is not available during automatic rebind. Therefore, if you encounter
lock contention during an automatic rebind, DSNT501I messages cannot
accompany any DSNT376I messages that you receive. To see the matching
DSNT501I messages, you must issue the subcommand REBIND PLAN or REBIND
PACKAGE.
After the basic recommendations, the chapter tells what you can do about a major
technique that DB2 uses to control concurrency.
v Transaction locks mainly control access by SQL statements. Those locks are
the ones over which you have the most control.
– “Aspects of transaction locks” on page 333 describes the various types of
transaction locks that DB2 uses and how they interact.
– “Lock tuning” on page 339 describes what you can change to control locking.
Your choices include:
- “Bind options” on page 339
- “Isolation overriding with SQL statements” on page 351
- “The statement LOCK TABLE” on page 352
To prevent those situations from occurring unless they are specifically allowed, DB2
might use locks to control concurrency.
What do locks do? A lock associates a DB2 resource with an application process
in a way that affects how other processes can access the same resource. The
process associated with the resource is said to “hold” or “own” the lock. DB2 uses
locks to ensure that no process accesses data that has been changed, but not yet
committed, by another process.
What do you do about locks? To preserve data integrity, your application process
acquires locks implicitly, that is, under DB2 control. It is not necessary for a process
to request a lock explicitly to conceal uncommitted data. Therefore, sometimes you
need not do anything about DB2 locks. Nevertheless, processes acquire, or avoid
acquiring, locks based on certain general parameters. You can make better use of
your resources and improve concurrency by understanding the effects of those
parameters.
Suspension
Definition: An application process is suspended when it requests a lock that is
already held by another application process and cannot be shared. The suspended
process temporarily stops running.
Order of precedence for lock requests: Incoming lock requests are queued.
Requests for lock promotion, and requests for a lock by an application process that
already holds a lock on the same object, precede requests for locks by new
applications. Within those groups, the request order is “first in, first out”.
Example: Using an application for inventory control, two users attempt to reduce
the quantity on hand of the same item at the same time. The two lock requests are
queued. The second request in the queue is suspended and waits until the first
request releases its lock.
Timeout
Definition: An application process is said to time out when it is terminated because
it has been suspended for longer than a preset interval.
IMS
If you are using IMS, and a timeout occurs, the following actions take place:
v In a DL/I batch application, the application process abnormally terminates
with a completion code of 04E and a reason code of 00D44033 or
00D44050.
v In any IMS environment except DL/I batch:
– DB2 performs a rollback operation on behalf of your application process
to undo all DB2 updates that occurred during the current unit of work.
– For a non-message driven BMP, IMS issues a rollback operation on
behalf of your application. If this operation is successful, IMS returns
control to your application, and the application receives SQLCODE -911.
If the operation is unsuccessful, IMS issues user abend code 0777, and
the application does not receive an SQLCODE.
– For an MPP, IFP, or message driven BMP, IMS issues user abend code
0777, rolls back all uncommitted changes, and reschedules the
transaction. The application does not receive an SQLCODE.
COMMIT and ROLLBACK operations do not time out. The command STOP
DATABASE, however, may time out and send messages to the console, but it will
retry up to 15 times.
Deadlock
Definition: A deadlock occurs when two or more application processes each hold
locks on resources that the others need and without which they cannot proceed.
Example: Figure 114 on page 328 illustrates a deadlock between two transactions.
Notes:
1. Jobs EMPLJCHG and PROJNCHG are two transactions. Job EMPLJCHG
accesses table M, and acquires an exclusive lock for page B, which contains
record 000300.
2. Job PROJNCHG accesses table N, and acquires an exclusive lock for page A,
which contains record 000010.
3. Job EMPLJCHG requests a lock for page A of table N while still holding the lock
on page B of table M. The job is suspended, because job PROJNCHG is
holding an exclusive lock on page A.
4. Job PROJNCHG requests a lock for page B of table M while still holding the
lock on page A of table N. The job is suspended, because job EMPLJCHG is
holding an exclusive lock on page B. The situation is a deadlock.
Effects: After a preset time interval (the value of DEADLOCK TIME), DB2 can roll
back the current unit of work for one of the processes or request a process to
terminate. That frees the locks and allows the remaining processes to continue. If
statistics trace class 3 is active, DB2 writes a trace record with IFCID 0172. Reason
code 00C90088 is returned in the SQLERRD(3) field of the SQLCA.
If you are using IMS, and a deadlock occurs, the following actions take place:
v In a DL/I batch application, the application process abnormally terminates
with a completion code of 04E and a reason code of 00D44033 or
00D44050.
v In any IMS environment except DL/I batch:
– DB2 performs a rollback operation on behalf of your application process
to undo all DB2 updates that occurred during the current unit of work.
– For a non-message driven BMP, IMS issues a rollback operation on
behalf of your application. If this operation is successful, IMS returns
control to your application, and the application receives SQLCODE -911.
If the operation is unsuccessful, IMS issues user abend code 0777, and
the application does not receive an SQLCODE.
– For an MPP, IFP, or message driven BMP, IMS issues user abend code
0777, rolls back all uncommitted changes, and reschedules the
transaction. The application does not receive an SQLCODE.
CICS
If you are using CICS and a deadlock occurs, the CICS attachment facility
decides whether or not to roll back one of the application processes, based on
the value of the ROLBE or ROLBI parameter. If your application process is
chosen for rollback, it receives one of two SQLCODEs in the SQLCA:
-911 A SYNCPOINT command with the ROLLBACK option was
issued on behalf of your application process. All updates
(CICS commands and DL/I calls, as well as SQL statements)
that occurred during the current unit of work have been
undone. (SQLSTATE '40001')
-913 A SYNCPOINT command with the ROLLBACK option was not
issued. DB2 rolls back only the incomplete SQL statement that
encountered the deadlock or timed out. CICS does not roll
back any resources. Your application process should either
issue a SYNCPOINT command with the ROLLBACK option
itself or terminate. (SQLSTATE '57033')
Consider using the DSNTIAC subroutine to check the SQLCODE and display
the SQLCA. Your application must take appropriate actions before resuming.
Keep unlike things apart: Give users different authorization IDs for work with
different databases; for example, one ID for work with a shared database and
another for work with a private database. This effectively adds to the number of
possible (but not concurrent) application processes while minimizing the number of
databases each application process can access.
Plan for batch inserts: If your application does sequential batch insertions,
excessive contention on the space map pages for the table space can occur. This
problem is especially apparent in data sharing, where contention on the space map
means the added overhead of page P-lock negotiation. For these types of
applications, consider using the MEMBER CLUSTER option of CREATE
TABLESPACE. This option causes DB2 to disregard the clustering index (or implicit
clustering index) when assigning space for the SQL INSERT statement. For more
information about using this option in data sharing, see Chapter 6 of DB2 Data
Sharing: Planning and Administration. For the syntax, see Chapter 5 of DB2 SQL
Reference.
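A sketch of such a definition (the database, storage group, and table space names are hypothetical):

```sql
CREATE TABLESPACE BATCHTS IN BATCHDB
  USING STOGROUP BATCHSG
  MEMBER CLUSTER
  LOCKSIZE ROW;
```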
Use LOCKSIZE ANY until you have reason not to: LOCKSIZE ANY is the default
for CREATE TABLESPACE. It allows DB2 to choose the lock size, and DB2 usually
chooses LOCKSIZE PAGE and LOCKMAX SYSTEM for non-LOB table spaces. For
| LOB table spaces, it chooses LOCKSIZE LOB and LOCKMAX SYSTEM. You
| should use LOCKSIZE TABLESPACE or LOCKSIZE TABLE only for read-only table
| spaces or tables, or when concurrent access to the object is not needed. Before
you choose LOCKSIZE ROW, you should estimate whether there will be an
increase in overhead for locking and weigh that against the increase in concurrency.
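For instance, accepting the recommended defaults can be sketched as (the database and table space names are hypothetical):

```sql
CREATE TABLESPACE ORDERTS IN ORDERDB
  LOCKSIZE ANY
  LOCKMAX SYSTEM;
```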
Examine small tables: For small tables with high concurrency requirements,
estimate the number of pages in the data and in the index. If the index entries are
short or they have many duplicates, then the entire index can be one root page and
a few leaf pages. In this case, spread out your data to improve concurrency, or
consider it a reason to use row locks.
Partition the data: Online queries typically make few data changes, but they occur
often. Batch jobs are just the opposite; they run for a long time and change many
rows, but occur infrequently. The two do not run well together. You might be able to
separate online applications from batch, or two batch jobs from each other. To
separate online and batch applications, provide separate partitions. Partitioning can
also effectively separate batch jobs from each other.
Fewer rows of data per page: By using the MAXROWS clause of CREATE or
ALTER TABLESPACE, you can specify the maximum number of rows that can be
on a page. For example, if you use MAXROWS 1, each row occupies a whole
page, and you confine a page lock to a single row. Consider this option if you have
a reason to avoid using row locking, such as in a data sharing environment where
row locking overhead can be excessive.
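A sketch of such a change (the database and table space names are hypothetical; MAXROWS 1 limits each page to one row):

```sql
ALTER TABLESPACE ORDERDB.ORDERTS MAXROWS 1;
```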
Taking commit points frequently in a long running unit of recovery (UR) has the
following benefits:
v Reduces lock contention
v Improves the effectiveness of lock avoidance, especially in a data sharing
environment
v Reduces the elapsed time for DB2 system restart following a system failure
v Reduces the elapsed time for a unit of recovery to rollback following an
application failure or an explicit rollback request by the application
v Provides more opportunity for utilities, such as online REORG, to break in
| Consider using the UR CHECK FREQ field or the UR LOG WRITE CHECK field of
| installation panel DSNTIPN to help you identify those applications that are not
| committing frequently. UR CHECK FREQ, which identifies when too many
| checkpoints have occurred without a UR issuing a commit, is helpful in monitoring
| overall system activity. UR LOG WRITE CHECK enables you to detect applications
| that might write too many log records between commit points, potentially creating a
| lengthy recovery situation for critical tables.
Even though an application might conform to the commit frequency standards of the
installation under normal operational conditions, variation can occur based on
system workload fluctuations. For example, a low-priority application might issue a
commit frequently on a system that is lightly loaded. However, under a heavy
system load, the use of the CPU by the application may be pre-empted, and, as a
result, the application may violate the rule set by the UR CHECK FREQ parameter.
For this reason, add logic to your application to commit based on time elapsed
since last commit, and not solely based on the amount of SQL processing
performed. In addition, take frequent commit points in a long running unit of work
that is read-only to reduce lock contention and to provide opportunities for utilities,
such as online REORG, to access the data.
Close cursors: If you define a cursor using the WITH HOLD option, the locks it
needs can be held past a commit point. Use the CLOSE CURSOR statement as
soon as possible in your program to cause those locks to be released and the
resources they hold to be freed at the first commit point that follows the CLOSE
CURSOR statement. Whether page or row locks are held for WITH HOLD cursors
is controlled by the RELEASE LOCKS parameter on panel DSNTIP4.
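The pattern can be sketched in SQL as follows (the cursor, table, and column names are hypothetical):

```sql
DECLARE C1 CURSOR WITH HOLD FOR
  SELECT EMPNO FROM EMP;

-- After the last row is fetched, close the cursor promptly so that
-- its locks are freed at the first commit point that follows.
CLOSE C1;
```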
Bind plans with ACQUIRE(USE): ACQUIRE(USE), which indicates that DB2 will
acquire table and table space locks when the objects are first used and not when
the plan is allocated, is the best choice for concurrency. Packages are always
| bound with ACQUIRE(USE), by default. ACQUIRE(ALLOCATE) can provide better
| protection against timeouts. Consider ACQUIRE(ALLOCATE) for applications that
| need gross locks instead of intent locks or that run with other applications that may
| request gross locks instead of intent locks. Acquiring the locks at plan allocation
| also prevents any one transaction in the application from incurring the cost of
| acquiring the table and table space locks. If you need ACQUIRE(ALLOCATE), you
| might want to bind all DBRMs directly to the plan.
For information on how to make an agent part of a global transaction for RRSAF
applications, see “Chapter 30. Programming for the Recoverable Resource Manager
Services attachment facility (RRSAF)” on page 767.
Knowing the aspects helps you understand why a process suspends or times out or
why two processes deadlock.
As Figure 115 on page 334 suggests, row locks and page locks occupy an equal
place in the hierarchy of lock sizes.
Figure 115 shows the lock hierarchy for each kind of table space:
v Simple or partitioned table space: a table space lock, with row locks or page
locks beneath it.
v Segmented table space: a table space lock, then table locks, with row locks or
page locks beneath each table lock.
v LOB table space: a LOB table space lock, with LOB locks beneath it.
No matter how LOCKPART is defined, utility jobs can control separate partitions
of a table space or index space and can run concurrently with operations on
other partitions.
v A simple table space can contain more than one table. A lock on the table
space locks all the data in every table. A single page of the table space can
contain rows from every table. A lock on a page locks every row in the page, no
matter what tables the data belongs to. Thus, a lock needed to access data from
one table can make data from other tables temporarily unavailable. That effect
can be partly undone by using row locks instead of page locks. But that step
does not relieve the sweeping effect of a table space lock.
v In a segmented table space, rows from different tables are contained in different
pages. Locking a page does not lock data from more than one table. Also, DB2
can acquire a table lock, which locks only the data from one specific table.
Because a single row, of course, contains data from only one table, the effect of
a row lock is the same as for a simple or partitioned table space: it locks one row
of data from one table.
v In a LOB table space, pages are not locked. Because there is no concept of a
row in a LOB table space, rows are not locked. Instead, LOBs are locked. See
“LOB locks” on page 355 for more information.
Effects
For maximum concurrency, locks on a small amount of data held for a short
duration are better than locks on a large amount of data held for a long duration.
However, acquiring a lock requires processor time, and holding a lock requires
storage; thus, acquiring and holding one table space lock is more economical than
acquiring and holding many page locks. Consider that trade-off to meet your
performance and concurrency objectives.
Duration of partition, table, and table space locks: Partition, table, and table
space locks can be acquired when a plan is first allocated, or you can delay
acquiring them until the resource they lock is first used. They can be released at
the next commit point or be held until the program terminates.
Duration of page and row locks: If a page or row is locked, DB2 acquires the lock
only when it is needed. When the lock is released depends on many factors, but it
is rarely held beyond the next commit point.
For information about controlling the duration of locks, see “Bind options” on
page 339.
The possible modes for page and row locks and the modes for partition, table, and
table space locks are listed below. See “LOB locks” on page 355 for more
information about modes for LOB locks and locks on LOB table spaces.
When a page or row is locked, the table, partition, or table space containing it is
also locked. In that case, the table, partition, or table space lock has one of the
intent modes: IS, IX, or SIX. The modes S, U, and X of table, partition, and table
space locks are sometimes called gross modes. In the context of reading, SIX is a
gross mode lock because you don’t get page or row locks; in this sense, it is like an
S lock.
Example: An SQL statement locates John Smith in a table of customer data and
changes his address. The statement locks the entire table space in mode IX and
the specific row that it changes in mode X.
Definition: Locks of some modes do not shut out all other users. Assume that
application process A holds a lock on a table space that process B also wants to
access. DB2 requests, on behalf of B, a lock of some particular mode. If the mode
of A’s lock permits B’s request, the two locks (or modes) are said to be compatible.
Compatible lock modes: Compatibility for page and row locks is easy to define.
Table 39 shows whether page locks of any two modes, or row locks of any two
modes, are compatible (Yes) or not (No). No question of compatibility of a page lock
with a row lock can arise, because a table space cannot use both page and row
locks.
Table 39. Compatibility of page lock and row lock modes
Lock Mode S U X
S Yes Yes No
U Yes No No
X No No No
Compatibility for table space locks is slightly more complex. Table 40 shows
whether or not table space locks of any two modes are compatible.
Table 40. Compatibility of table and table space (or partition) lock modes
Lock Mode  IS   IX   S    U    SIX  X
IS         Yes  Yes  Yes  Yes  Yes  No
IX         Yes  Yes  No   No   No   No
S          Yes  No   Yes  Yes  No   No
U          Yes  No   Yes  No   No   No
SIX        Yes  No   No   No   No   No
X          No   No   No   No   No   No
The underlying data page or row locks are acquired to serialize the reading and
updating of index entries to ensure the data is logically consistent, meaning that the
data is committed and not subject to rollback or abort. The data locks can be held
for a long duration such as until commit. However, the page latches are only held
for a short duration while the transaction is accessing the page. Because the index
pages are not locked, hot spot insert scenarios (which involve several transactions
trying to insert different entries into the same index page at the same time) do not
cause contention problems in the index.
A query that uses index-only access might lock the data page or row, and that lock
can contend with other processes that lock the data. However, using lock avoidance
techniques can reduce the contention. See “Lock avoidance” on page 348 for more
information about lock avoidance.
Lock tuning
This section describes what you can change to affect how a particular application
uses transaction locks, under:
v “Bind options”
v “Isolation overriding with SQL statements” on page 351
v “The statement LOCK TABLE” on page 352
Bind options
These options determine when an application process acquires and releases its
locks and to what extent it isolates its actions from possible effects of other
processes acting concurrently.
Effect of LOCKPART YES: Partition locks follow the same rules as table space
locks, and all partitions are held for the same duration. Thus, if one package is
using RELEASE(COMMIT) and another is using RELEASE(DEALLOCATE), all
partitions use RELEASE(DEALLOCATE).
For table spaces defined as LOCKPART YES, lock demotion occurs as with other
table spaces; that is, the lock is demoted at the table space level, not the partition
level.
Restriction: This combination is not allowed for BIND PACKAGE. Use this
combination if processing efficiency is more important than concurrency. It is a good
choice for batch jobs that would release table and table space locks only to
reacquire them almost immediately. It might even improve concurrency, by allowing
batch jobs to finish sooner. Generally, do not use this combination if your
application contains many SQL statements that are often not executed.
IMS
A CHKP or SYNC call (for single-mode transactions), a GU call to the I/O
PCB, or a ROLL or ROLB call is completed
CICS
A SYNCPOINT command is issued.
Exception: If the cursor is defined WITH HOLD, table or table space locks
necessary to maintain cursor position are held past the commit point. (See “The
effect of WITH HOLD for a cursor” on page 350 for more information.)
v The least restrictive lock needed to execute each SQL statement is used except
when a more restrictive lock remains from a previous statement. In that case,
that lock is used without change.
Figure 116. How an application using RR isolation acquires locks. All locks are held until the
application commits.
Applications that use repeatable read can leave rows or pages locked for
longer periods, especially in a distributed environment, and they can claim
more logical partitions than similar applications using cursor stability.
They are also subject to being drained more often by utility operations.
Because so many locks can be taken, lock escalation might take place.
Frequent commits release the locks and can help avoid lock escalation.
With repeatable read, lock promotion occurs for table space scan to prevent
the insertion of rows that might qualify for the predicate. (If access is via
index, DB2 locks the key range. If access is via table space scans, DB2
locks the table, partition, or table space.)
ISOLATION (RS)
Allows the application to read the same pages or rows more than once
without allowing qualifying rows to be updated or deleted by another
Figure 117. How an application using RS isolation acquires locks when no lock avoidance
techniques are used. Locks L2 and L4 are held until the application commits. The other locks
aren’t held.
Applications using read stability can leave rows or pages locked for long
periods, especially in a distributed environment.
| Figure 118 and Figure 119 show processing of positioned update and delete
| operations without optimistic concurrency control and with optimistic
| concurrency control.
Figure 118. Positioned updates and deletes without optimistic concurrency control
Figure 119. Positioned updates and deletes with optimistic concurrency control
ISOLATION (UR)
Allows the application to read while acquiring few locks, at the risk of
reading uncommitted data. UR isolation applies only to read-only
operations: SELECT, SELECT INTO, or FETCH from a read-only result
table.
There is an element of uncertainty about reading uncommitted data.
Example: An application tracks the movement of work from station to
station along an assembly line. As items move from one station to another,
the application subtracts from the count of items at the first station and
adds to the count of items at the second. Assume you want to query the
count of items at all the stations, while the application is running
concurrently.
What can happen if your query reads data that the application has changed
but has not committed?
If the application subtracts an amount from one record before adding it
to another, the query could miss the amount entirely.
If the application adds first and then subtracts, the query could add the
amount twice.
If those situations can occur and are unacceptable, do not use UR isolation.
Restrictions: You cannot use UR isolation for the types of statement listed
below. If you bind with ISOLATION(UR), and the statement does not specify
WITH RR or WITH RS, then DB2 uses CS isolation for:
v INSERT, UPDATE, and DELETE
v Any cursor defined with FOR UPDATE OF
When can you use uncommitted read (UR)? You can probably use UR
isolation in cases like the following ones:
v When errors cannot occur.
Example: A reference table, like a table of descriptions of parts by part
number. It is rarely updated, and reading an uncommitted update is
probably no more damaging than reading the table 5 seconds earlier. Go
ahead and read it with ISOLATION(UR).
Example: The employee table of Spiffy Computer, our hypothetical user.
For security reasons, updates can be made to the table only by members
of a single department. And that department is also the only one that can
query the entire table. It is easy to restrict queries to times when no
updates are being made and then run with UR isolation.
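A query against such a reference table might be sketched as (the PARTS table and its columns are hypothetical):

```sql
SELECT DESCRIPTION
  FROM PARTS
  WHERE PARTNO = :PARTNO
  WITH UR;
```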
v When an error is acceptable.
Example: Spiffy wants to do some statistical analysis on employee data.
A typical question is, “What is the average salary by sex within education
Local access: Locally, CURRENTDATA(YES) means that the data upon which
the cursor is positioned cannot change while the cursor is positioned on it. If the
cursor is positioned on data in a local base table or index, then the data returned
with the cursor is current with the contents of that table or index. If the cursor is
positioned on data in a work file, the data returned with the cursor is current only
with the contents of the work file; it is not necessarily current with the contents of
the underlying table or index.
Figure 120. How an application using isolation CS with CURRENTDATA(YES) acquires locks.
This figure shows access to the base table. The L2 and L4 locks are released after DB2
moves to the next row or page. When the application commits, the last lock is released.
As with work files, if a cursor uses query parallelism, data is not necessarily current
with the contents of the table or index, regardless of whether a work file is used.
Therefore, for work file access or for parallelism on read-only queries, the
CURRENTDATA option has no effect.
If you are using parallelism but want to maintain currency with the data, you have
the following options:
v Disable parallelism (use SET CURRENT DEGREE = ’1’ or bind with DEGREE(1)).
v Use isolation RR or RS (parallelism can still be used).
v Use the LOCK TABLE statement (parallelism can still be used).
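The first and third options can be sketched as (the table name is hypothetical):

```sql
SET CURRENT DEGREE = '1';

LOCK TABLE PARTS IN SHARE MODE;
```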
To take the best advantage of this method of avoiding locks, make sure all
applications that are accessing data concurrently issue COMMITs frequently.
Figure 121 on page 349 shows how DB2 can avoid taking locks and Table 41 on
page 349 summarizes the factors that influence lock avoidance.
Figure 121. Best case of avoiding locks using CS isolation with CURRENTDATA(NO). This
figure shows access to the base table. If DB2 must take a lock, then locks are released when
DB2 moves to the next row or page, or when the application commits (the same as
CURRENTDATA(YES)).
Table 41. Lock avoidance factors. “Returned data” means data that satisfies the predicate.
“Rejected data” is that which does not satisfy the predicate.
Isolation  CURRENTDATA  Cursor type                      Avoid locks on  Avoid locks on
                                                         returned data?  rejected data?
UR         N/A          Read-only                        N/A             N/A
CS         YES          Read-only, Updatable, Ambiguous  No              Yes
CS         NO           Read-only                        Yes             Yes
CS         NO           Updatable                        No              Yes
CS         NO           Ambiguous                        Yes             Yes
RS         N/A          Read-only, Updatable, Ambiguous  No              Yes
RR         N/A          Read-only, Updatable, Ambiguous  No              No
For example, the plan value for CURRENTDATA has no effect on the packages
executing under that plan. If you do not specify a CURRENTDATA option explicitly
when you bind a package, the default is CURRENTDATA(YES).
The rules are slightly different for the bind options RELEASE and ISOLATION. The
values of those two options are set when the lock on the resource is acquired and
usually stay in effect until the lock is released. But a conflict can occur if a
statement that is bound with one pair of values requests a lock on a resource that
is already locked by a statement that is bound with a different pair of values. DB2
resolves the conflict by resetting each option with the available value that causes
the lock to be held for the greatest duration.
Table 42 shows how conflicts between isolation levels are resolved. The first column
is the existing isolation level, and the remaining columns show what happens when
another isolation level is requested by a new application process.
Table 42. Resolving isolation conflicts
Existing     Requested isolation
isolation    UR    CS    RS    RR
UR           n/a   CS    RS    RR
CS           CS    n/a   RS    RR
RS           RS    RS    n/a   RR
RR           RR    RR    RR    n/a
For locks and claims needed for cursor position, the rules described above differ as
follows:
Page and row locks: If your installation specifies NO in the RELEASE LOCKS
field of installation panel DSNTIP4, as described in Part 5 (Volume 2) of DB2
Administration Guide, a page or row lock is held past the commit point. This page
or row lock is not necessary for cursor position, but the NO option is provided for
compatibility with applications that might rely on this lock. However, an X or U lock
is demoted to an S lock at commit. (Because changes have been committed, exclusive
control is no longer needed.) The lock is released at the next commit
point, provided that no cursor is still positioned on that page or row.
A YES for RELEASE LOCKS means that no data page or row locks are held past
commit.
Claims: All claims, for any claim class, are held past the commit point. They are
released at the next commit point after all held cursors have moved off the object or
have been closed.
finds the maximum, minimum, and average bonus in the sample employee table.
The statement is executed with uncommitted read isolation, regardless of the value
of ISOLATION with which the plan or package containing the statement is bound.
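Such a statement might look like the following sketch against the sample employee table (the host-variable names are illustrative):

```sql
EXEC SQL
  SELECT MAX(BONUS), MIN(BONUS), AVG(BONUS)
    INTO :MAXB, :MINB, :AVGB
    FROM DSN8710.EMP
    WITH UR;
```

The WITH UR clause on the statement overrides the isolation with which the plan or package was bound.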
Using KEEP UPDATE LOCKS on the WITH clause: You can use the KEEP
UPDATE LOCKS clause when you specify a SELECT with FOR UPDATE
OF. This option is valid only when you use WITH RR or WITH RS. By using this
clause, you tell DB2 to acquire an X lock instead of a U or S lock on all the
qualified pages or rows.
Here is an example:
SELECT ...
FOR UPDATE OF WITH RS KEEP UPDATE LOCKS;
With read stability (RS) isolation, a row or page rejected during stage 2 processing
still has the X lock held on it, even though it is not returned to the application.
With repeatable read (RR) isolation, DB2 acquires the X locks on all pages or rows
that fall within the range of the selection expression.
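A fuller sketch of the clause, using the sample employee table (the cursor name, column list, and predicate are illustrative):

```sql
EXEC SQL
  DECLARE C1 CURSOR FOR
    SELECT EMPNO, SALARY
      FROM DSN8710.EMP
      WHERE WORKDEPT = 'D11'
      FOR UPDATE OF SALARY
      WITH RS KEEP UPDATE LOCKS;
```

With this declaration, DB2 takes X locks, rather than U or S locks, on the qualifying pages or rows as the cursor is processed.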
Executing the statement requests a lock immediately, unless a suitable lock exists
already, as described below. The bind option RELEASE determines when locks
acquired by LOCK TABLE or LOCK TABLE with the PART option are released.
You can use LOCK TABLE on any table, including auxiliary tables of LOB table
spaces. See “The LOCK TABLE statement” on page 358 for information about
locking auxiliary tables.
Caution when using LOCK TABLE with simple table spaces: The statement
locks all tables in a simple table space, even though you name only one table. No
other process can update the table space for the duration of the lock. If the lock is
in exclusive mode, no other process can read the table space, unless that process
is running with UR isolation.
Additional examples of LOCK TABLE: You might want to lock a table or partition
that is normally shared for any of the following reasons:
Taking a “snapshot”
If you want to access an entire table throughout a unit of work as it
was at a particular moment, you must lock out concurrent changes.
If other processes can access the table, use LOCK TABLE IN
SHARE MODE. (RR isolation is not enough; it locks out changes
only from rows or pages you have already accessed.)
Avoiding overhead
If you want to update a large part of a table, it can be more efficient
to prevent concurrent access than to lock each page as it is
updated and unlock it when it is committed. Use LOCK TABLE IN
EXCLUSIVE MODE.
Preventing timeouts
Your application has a high priority and must not risk timeouts from
contention with other application processes. Depending on whether
your application updates the data, use either LOCK TABLE IN EXCLUSIVE
MODE or LOCK TABLE IN SHARE MODE.
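For example, the following statements (sketches; the partition number is illustrative) lock the sample employee table in share mode and one partition in exclusive mode:

```sql
EXEC SQL LOCK TABLE DSN8710.EMP IN SHARE MODE;
EXEC SQL LOCK TABLE DSN8710.EMP PART 1 IN EXCLUSIVE MODE;
```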
Access paths
The access path used can affect the mode, size, and even the object of a lock. For
example, an UPDATE statement using a table space scan might need an X lock on
the entire table space. If rows to be updated are located through an index, the
same statement might need only an IX lock on the table space and X locks on
individual pages or rows.
If you use the EXPLAIN statement to investigate the access path chosen for an
SQL statement, then check the lock mode in column TSLOCKMODE of the
resulting PLAN_TABLE. If the table resides in a nonsegmented table space, or is
defined with LOCKSIZE TABLESPACE, the mode shown is that of the table space
lock. Otherwise, the mode is that of the table lock.
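For example (a sketch; the query number and the UPDATE statement are illustrative, and the PLAN_TABLE query would be issued interactively, such as through SPUFI):

```sql
EXEC SQL
  EXPLAIN PLAN SET QUERYNO = 13 FOR
    UPDATE DSN8710.EMP SET SALARY = SALARY * 1.05
      WHERE WORKDEPT = 'D11';

-- Then, interactively:
SELECT TSLOCKMODE FROM PLAN_TABLE WHERE QUERYNO = 13;
```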
IMS
A CHKP or SYNC call, or (for single-mode transactions) a GU call to the I/O
PCB
CICS
A SYNCPOINT command.
LOB locks
The locking activity for LOBs is described separately from transaction locks
because the purpose of LOB locks is different from that of regular transaction locks.
Terminology: A lock that is taken on a LOB value in a LOB table space is called a
LOB lock.
DB2 also obtains locks on the LOB table space and the LOB values stored in that
LOB table space, but those locks have the following primary purposes:
v To determine whether space from a deleted LOB can be reused by an inserted or
updated LOB
Storage for a deleted LOB is not reused until no more readers (including held
locators) are on the LOB and the delete operation has been committed.
v To prevent deallocating space for a LOB that is currently being read
A LOB can be deleted from one application’s point of view while a reader from
another application is reading the LOB. The reader continues reading the LOB
because all readers, including those that use uncommitted read
isolation, acquire S-locks on LOBs to prevent the storage for the LOB they are
reading from being deallocated. That lock is held until commit. A held LOB
locator or a held cursor causes the LOB lock and LOB table space lock to be held
past commit.
Table 44 shows the relationship between the action that is occurring on the LOB
value and the associated LOB table space and LOB locks that are acquired.
Table 44. Locks that are acquired for operations on LOBs. This table does not account for
gross locks that can be taken because of LOCKSIZE TABLESPACE, the LOCK TABLE
statement, or lock escalation.

Action on LOB value          LOB table space lock   LOB lock                   Comment
Read (including UR)          IS                     S                          Prevents storage from being reused
                                                                               while the LOB is being read or while
                                                                               locators are referencing the LOB
Insert                       IX                     X                          Prevents other processes from seeing
                                                                               a partial LOB
Delete                       IS                     S                          Holds space in case the delete is
                                                                               rolled back. (The X lock is on the
                                                                               base table row or page.) Storage is
                                                                               not reusable until the delete is
                                                                               committed and no other readers of
                                                                               the LOB exist.
Update                       IS->IX                 Two LOB locks: an          Operation is a delete followed by an
                                                    S-lock for the delete      insert.
                                                    and an X-lock for the
                                                    insert.
Update the LOB to null or    IS                     S                          No insert, just a delete.
zero-length
Update a null or             IX                     X                          No delete, just an insert.
zero-length LOB to a value
Duration of locks
Duration of locks on LOB table spaces
Locks on LOB table spaces are acquired when they are needed; that is, the
ACQUIRE option of BIND has no effect on when the table space lock on the LOB
table space is taken. The table space lock is released according to the value
specified on the RELEASE option of BIND (except when a cursor is defined WITH
HOLD or if a held LOB locator exists).
If the application uses HOLD LOCATOR, the LOB lock is not freed until the first
commit operation after a FREE LOCATOR statement is issued, or until the thread is
deallocated.
A note about INSERT with fullselect: Because LOB locks are held until commit
and because locks are put on each LOB column in both a source table and a target
table, it is possible that a statement such as an INSERT with a fullselect that
involves LOB columns can accumulate many more locks than a similar statement
that does not involve LOB columns. To prevent system problems caused by too
many locks, you can:
v Ensure that you have lock escalation enabled for the LOB table spaces that are
involved in the INSERT. In other words, make sure that LOCKMAX is non-zero
for those LOB table spaces.
v Alter the LOB table space to change the LOCKSIZE to TABLESPACE before
executing the INSERT with fullselect.
v Increase the LOCKMAX value on the table spaces involved and ensure that the
user lock limit is sufficient.
v Use LOCK TABLE statements to lock the LOB table spaces. (Locking the
auxiliary table that is contained in the LOB table space locks the LOB table
space.)
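For example, assuming a hypothetical auxiliary table DSN8710.EMP_PHOTO_AUX contained in the LOB table space:

```sql
EXEC SQL LOCK TABLE DSN8710.EMP_PHOTO_AUX IN SHARE MODE;
```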
If your application intercepts abends, DB2 commits work because it is unaware that
an abend has occurred. If you want DB2 to roll back work automatically when an
abend occurs in your program, do not let the program or runtime environment
intercept the abend. For example, if your program uses Language Environment, and
you want DB2 to roll back work automatically when an abend occurs in the
program, specify the runtime options ABTERMENC(ABEND) and TRAP(ON).
A unit of work is a logically distinct procedure containing steps that change the data.
If all the steps complete successfully, you want the data changes to become
permanent. But if any of the steps fail, you want all modified data to return to the
values it had before the procedure began.
For example, suppose two employees in the sample table DSN8710.EMP exchange
offices. You need to exchange their office phone numbers in the PHONENO
column. You would use two UPDATE statements to make each phone number
current. Both statements, taken together, are a unit of work. You want both
statements to complete successfully. For example, if only one statement is
successful, you want both phone numbers rolled back to their original values before
attempting another update.
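The exchange might be sketched as follows (the employee numbers and phone numbers are illustrative; a real program would first read the current values into host variables):

```sql
EXEC SQL
  UPDATE DSN8710.EMP SET PHONENO = '4510' WHERE EMPNO = '000130';
EXEC SQL
  UPDATE DSN8710.EMP SET PHONENO = '3490' WHERE EMPNO = '000140';
EXEC SQL COMMIT;
```

If either UPDATE fails, the program would issue ROLLBACK instead of COMMIT so that neither change becomes permanent.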
When a unit of work completes, all locks implicitly acquired by that unit of work after
it begins are released, allowing a new unit of work to begin.
The amount of processing time used by a unit of work in your program determines
the length of time DB2 prevents other users from accessing that locked data. When
several programs try to use the same data concurrently, each program’s unit of
work must be as short as possible to minimize the interference between the
programs. The remainder of this chapter describes the way a unit of work functions
in various environments. For more information on unit of work, see Chapter 1 of
DB2 SQL Reference or Part 4 (Volume 1) of DB2 Administration Guide.
A commit point occurs when you issue a COMMIT statement or your program
terminates normally. You should issue a COMMIT statement only when you are sure
the data is in a consistent state.
Before you can connect to another DBMS you must issue a COMMIT statement. If
the system fails at this point, DB2 cannot know that your transaction is complete. In
this case, as in the case of a failure during a one-phase commit operation for a
single subsystem, you must make your own provision for maintaining data integrity.
If your program abends or the system fails, DB2 backs out uncommitted data
changes. Changed data returns to its original condition without interfering with other
system activities.
Consider the inventory example, in which the quantity of items sold is subtracted
from the inventory file and then added to the reorder file. When both transactions
complete (and not before) and the data in the two files is consistent, the program
can then issue a DL/I TERM call or a SYNCPOINT command. If one of the steps
fails, you want the data to return to the value it had before the unit of work began.
That is, you want it rolled back to a previous point of consistency. You can achieve
this by using the SYNCPOINT command with the ROLLBACK option.
By using a SYNCPOINT command with the ROLLBACK option, you can back out
uncommitted data changes. For example, a program that updates a set of related
rows sometimes encounters an error after updating several of them. The program
can use the SYNCPOINT command with the ROLLBACK option to undo all of the
updates without giving up control.
The SQL COMMIT and ROLLBACK statements are not valid in a CICS
environment. You can coordinate DB2 with CICS functions used in programs, so
that DB2 and non-DB2 data are consistent.
A commit point can occur in a program as the result of any one of the following four
events:
v The program terminates normally. Normal program termination is always a
commit point.
v The program issues a checkpoint call. Checkpoint calls are a program’s means
of explicitly indicating to IMS that it has reached a commit point in its processing.
v The program issues a SYNC call. The SYNC call is a Fast Path system service
call to request commit point processing. You can use a SYNC call only in a
nonmessage-driven Fast Path program.
v For a program that processes messages as its input, a commit point can occur
when the program retrieves a new message. IMS considers a new message the
start of a new unit of work in the program. Unless you define the transaction as
single- or multiple-mode on the TRANSACT statement of the APPLCTN macro
for the program, retrieving a new message does not signal a commit point. For
more information about the APPLCTN macro, see the IMS Install Volume 2:
System Definition and Tailoring.
– If you specify single-mode, a commit point in DB2 occurs each time the
program issues a call to retrieve a new message. Specifying single-mode can
simplify recovery; you can restart the program from the most recent call for a
new message if the program abends. When IMS restarts the program, the
program starts by processing the next message.
– If you specify multiple-mode, a commit point occurs when the program issues
a checkpoint call or when it terminates normally. Those are the only times
during the program that IMS sends the program’s output messages to their
destinations. Because there are fewer commit points to process in
multiple-mode programs than in single-mode programs, multiple-mode
programs could perform slightly better than single-mode programs. When a
multiple-mode program abends, IMS can restart it only from a checkpoint call.
Instead of having only the most recent message to reprocess, a program might have several messages to reprocess.
DB2 does some processing with single- and multiple-mode programs that IMS does
not. When a multiple-mode program issues a call to retrieve a new message, DB2
performs an authorization check and closes all open cursors in the program.
If the program processes messages, IMS sends the output messages that the
application program produces to their final destinations. Until the program reaches a
commit point, IMS holds the program’s output messages at a temporary destination.
If the program abends, people at terminals and other application programs do not
receive inaccurate information from the terminating application program.
The SQL COMMIT and ROLLBACK statements are not valid in an IMS
environment.
If the system fails, DB2 backs out uncommitted changes to data. Changed data
returns to its original state without interfering with other system activities.
Sometimes DB2 data does not return to a consistent state immediately. DB2 does
not process data in an indoubt state until you restart IMS. To ensure that DB2 and
IMS are synchronized, you must restart both DB2 and IMS.
There are two calls available to IMS programs to simplify program recovery: the
symbolic checkpoint call and the restart call.
Programs that issue symbolic checkpoint calls can specify as many as seven data
areas in the program to be restored at restart. Symbolic checkpoint calls do not
support OS/VS files; if your program accesses OS/VS files, you can convert those
files to GSAM and use symbolic checkpoints. DB2 always recovers to the last
checkpoint. You must restart the program from that point.
However, message-driven BMPs must issue checkpoint calls rather than get-unique
calls to establish commit points, because they can restart only from a checkpoint. If
a program abends after issuing a get-unique call, IMS backs out the database
updates to the most recent commit point—the get-unique call.
Checkpoints also close all open cursors, which means you must reopen the cursors
you want and re-establish positioning.
If a batch-oriented BMP does not issue checkpoints frequently enough, IMS can
abend that BMP or another application program for one of these reasons:
v If a BMP retrieves and updates many database records between checkpoint
calls, it can monopolize large portions of the databases and cause long waits for
other programs needing those segments. (The exception to this is a BMP with a
processing option of GO. IMS does not enqueue segments for programs with this
processing option.) Issuing checkpoint calls releases the segments that the BMP
has enqueued and makes them available to other programs.
v If IMS is using program isolation enqueuing, the space needed to enqueue
information about the segments that the program has read and updated must not
exceed the amount defined for the IMS system. If a BMP enqueues too many
segments, IMS can terminate the program abnormally.
When you issue a DL/I CHKP call from an application program using DB2
databases, IMS processes the CHKP call for all DL/I databases, and DB2 commits
all the DB2 database resources. No checkpoint information is recorded for DB2
databases in the IMS log or the DB2 log. The application program must record
relevant information about DB2 databases for a checkpoint, if necessary.
One way to do this is to put such information in a data area included in the DL/I
CHKP call. Re-establishing position within a DB2 database as a result of the
commit processing that a DL/I CHKP call triggers can have undesirable
performance implications. The fastest way to re-establish a position in a
DB2 database is to use an index on the target table, with a key that matches
one-to-one with every column in the SQL predicate.
Another limitation of processing DB2 databases in a BMP program is that you can
restart the program only from the latest checkpoint, not from any checkpoint as
you can with IMS databases.
Using ROLL
Issuing a ROLL call causes IMS to terminate the program with a user abend code
U0778. This terminates the program without a storage dump.
When you issue a ROLL call, the only option you supply is the call function, ROLL.
Using ROLB
The advantage of using ROLB is that IMS returns control to the program after
executing ROLB, thus the program can continue processing. The options for ROLB
are:
v The call function, ROLB
v The name of the I/O PCB.
In batch programs
If your IMS system log is on direct access storage, and if you specify dynamic
backout with run option BKO=Y, you can use the ROLB call in a batch program. The
ROLB call backs out the database updates since the last commit point and returns
control to your program. You cannot specify the address of an I/O area as one of
the options on the call; if you do, your program receives an AD status code. You
must, however, have an I/O PCB for your program. To obtain one, specify
CMPAT=YES in the PSBGEN statement for your program’s PSB. For more
information on using the CMPAT keyword, see IMS Utilities Reference: System.
Example: Rolling back to the most recently created savepoint: When the
ROLLBACK TO SAVEPOINT statement is executed in the following code, DB2 rolls
back work to savepoint B.
EXEC SQL SAVEPOINT A;
   ...
EXEC SQL SAVEPOINT B;
   ...
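The rollback itself might look like this (a sketch; when the savepoint name is omitted, DB2 rolls back to the most recently set savepoint):

```sql
EXEC SQL ROLLBACK TO SAVEPOINT;
```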
When savepoints are active, you cannot access remote sites using three-part
names or aliases for three-part names. You can, however, use DRDA access with
explicit CONNECT statements when savepoints are active. If you set a savepoint
before you execute a CONNECT statement, the scope of that savepoint is the local
site. If you set a savepoint after you execute the CONNECT statement, the scope
of that savepoint is the site to which you are connected.
You can set a savepoint with the same name multiple times within a unit of work.
Each time that you set the savepoint, the new value of the savepoint replaces the
old value.
Example: Setting a savepoint multiple times: Suppose that the following actions
take place within a unit of work:
1. Application A sets savepoint S.
2. Application A calls stored procedure P.
3. Stored procedure P sets savepoint S.
If you do not want a savepoint to have different values within a unit of work, you
can use the UNIQUE option in the SAVEPOINT statement. If an application
executes a SAVEPOINT statement for a savepoint that was previously defined as
unique, an SQL error occurs.
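For example (the savepoint name is illustrative):

```sql
EXEC SQL
  SAVEPOINT S UNIQUE ON ROLLBACK RETAIN CURSORS;
```

A second SAVEPOINT S statement in the same unit of work would then fail with an SQL error instead of silently replacing the savepoint.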
Savepoints are automatically released at the end of a unit of work. However, if you
no longer need a savepoint before the end of a transaction, you should execute the
SQL RELEASE SAVEPOINT statement. Releasing savepoints is essential if you
need to use three-part names to access remote locations.
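For example (the savepoint name is illustrative):

```sql
EXEC SQL RELEASE SAVEPOINT S;
```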
In this chapter, we assume that you are requesting services from a remote DBMS.
That DBMS is a server in that situation, and your local system is a requester or
client.
Your application can be connected to many DBMSs at one time; the one currently
performing work is the current server. When the local system is performing work, it
also is called the current server.
A remote server can be truly remote in the physical sense: thousands of miles
away. But that is not necessary; it could even be another subsystem of the same
operating system your local DBMS runs under. We assume that your local DBMS is
an instance of DB2 for OS/390 and z/OS. A remote server could be an instance of
DB2 for OS/390 and z/OS also, or an instance of one of many other products.
A DBMS, whether local or remote, is known to your DB2 system by its location
name. The location name of a remote DBMS is recorded in the communications
database. (If you need more information about location names or the
communications database, see Part 3 of DB2 Installation Guide.)
Example 1: You can write a query like this to access data at a remote server:
SELECT * FROM CHICAGO.DSN8710.EMP
WHERE EMPNO = '0001000';
The mode of access depends on whether you bind your DBRMs into packages and
on the value of field DATABASE PROTOCOL in installation panel DSNTIP5 or the
value of bind option DBPROTOCOL. Bind option DBPROTOCOL overrides the
installation setting.
Example 2: You can also write statements like these to accomplish the same task:
Before you can execute the query at location CHICAGO, you must bind the
application as a remote package at the CHICAGO server. Before you can run the
application, you must also bind a local plan with a package list that includes the
remote package.
Example 3: You can call a stored procedure, which is a subroutine that can contain
many SQL statements. Your program executes this:
EXEC SQL
CONNECT TO ATLANTA;
EXEC SQL
CALL procedure_name (parameter_list);
The parameter list is a list of host variables that is passed to the stored procedure
and into which it returns the results of its execution. The stored procedure must
already exist at location ATLANTA.
Two methods of access: The examples above show two different methods for
accessing distributed data.
v Example 1 shows a statement that can be executed with DB2 private protocol
access or DRDA access.
If you bind the DBRM that contains the statement into a plan at the local DB2
and specify the bind option DBPROTOCOL(PRIVATE), you access the server
using DB2 private protocol access.
If you bind the DBRM that contains the statement using one of these methods,
you access the server using DRDA access.
Method 1:
– Bind the DBRM into a package at the local DB2 using the bind option
DBPROTOCOL(DRDA®).
– Bind the DBRM into a package at the remote location (CHICAGO).
– Bind the packages into a plan using bind option DBPROTOCOL(DRDA).
Method 2:
– Bind the DBRM into a package at the remote location.
– Bind the remote package and the DBRM into a plan at the local site, using the
bind option DBPROTOCOL(DRDA).
v Examples 2 and 3 show statements that are executed with DRDA access only.
When you use these methods for DRDA access, your application must include an
explicit CONNECT statement to switch your connection from one system to
another.
If you update two or more DBMSs you must consider how updates can be
coordinated, so that units of work at the two DBMSs are either both committed or
both rolled back. Be sure to read “Coordinating updates to two or more data
sources” on page 379.
You can use the resource limit facility at the server to govern distributed SQL
statements. Governing is by plan for DB2 private protocol access and by package
for DRDA access. See “Considerations for moving from DB2 private protocol access
to DRDA access” on page 393 for information on changes you need to make to
your resource limit facility tables when you move from DB2 private protocol access
to DRDA access.
Because platforms other than DB2 for OS/390 and z/OS might not support the
three-part name syntax, you should not code applications with three-part names if
you plan to port those applications to other platforms.
In a three-part table name, the first part denotes the location. The local DB2 makes
and breaks an implicit connection to a remote server as needed.
The following overview shows how the application uses three-part names:
Read input values
Do for all locations
Read location name
Set up statement to prepare
Prepare statement
Execute statement
End loop
Commit
After the application obtains a location name, for example 'SAN_JOSE', it next
creates the following character string:
INSERT INTO SAN_JOSE.DSN8710.PROJ VALUES (?,?,?,?,?,?,?,?)
The application assigns the character string to the variable INSERTX and then
executes these statements:
EXEC SQL
PREPARE STMT1 FROM :INSERTX;
EXEC SQL
EXECUTE STMT1 USING :PROJNO, :PROJNAME, :DEPTNO, :RESPEMP,
:PRSTAFF, :PRSTDATE, :PRENDATE, :MAJPROJ;
The host variables for Spiffy’s project table match the declaration for the sample
project table in “Project table (DSN8710.PROJ)” on page 822.
To keep the data consistent at all locations, the application commits the work only
when the loop has executed for all locations. Either every location has committed
the INSERT or, if a failure has prevented any location from inserting, all other
locations have rolled back the INSERT. (If a failure occurs during the commit
process, the entire unit of work can be indoubt.)
Programming hint: You might find it convenient to use aliases when creating
character strings that become prepared statements, instead of using full three-part
names like SAN_JOSE.DSN8710.PROJ. For information on aliases, see the section
on CREATE ALIAS in DB2 SQL Reference.
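For example, with a hypothetical alias, the prepared string no longer needs to embed the location name:

```sql
CREATE ALIAS SJPROJ FOR SAN_JOSE.DSN8710.PROJ;

-- The string assigned to INSERTX then becomes:
-- INSERT INTO SJPROJ VALUES (?,?,?,?,?,?,?,?)
```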
In this example, Spiffy’s application executes CONNECT for each server in turn and
the server executes INSERT. In this case the tables to be updated each have the
same name, though each is defined at a different server. The application executes
the statements in a loop, with one iteration for each server.
The application connects to each new server by means of a host variable in the
CONNECT statement. CONNECT changes the special register CURRENT SERVER
to show the location of the new server. The values to insert in the table are
transmitted to a location as input host variables.
The following overview shows how the application uses explicit CONNECTs:
Read input values
Do for all locations
Read location name
Connect to location
Execute insert statement
End loop
Commit
Release all
The application inserts a new location name into the variable LOCATION_NAME,
and executes the following statements:
EXEC SQL
CONNECT TO :LOCATION_NAME;
EXEC SQL
INSERT INTO DSN8710.PROJ VALUES (:PROJNO, :PROJNAME, :DEPTNO, :RESPEMP,
:PRSTAFF, :PRSTDATE, :PRENDATE, :MAJPROJ);
To keep the data consistent at all locations, the application commits the work only
when the loop has executed for all locations. Either every location has committed
the INSERT or, if a failure has prevented any location from inserting, all other
locations have rolled back the INSERT. (If a failure occurs during the commit
process, the entire unit of work can be indoubt.)
The host variables for Spiffy’s project table match the declaration for the sample
project table in “Project table (DSN8710.PROJ)” on page 822. LOCATION_NAME is
a character-string variable of length 16.
Releasing connections
When you connect to remote locations explicitly, you must also break those
connections explicitly. You have considerable flexibility in determining how long
connections remain open, so the RELEASE statement differs significantly from
CONNECT.
Examples: Using the RELEASE statement, you can place any of the following in
the release-pending state.
v A specific connection that the next unit of work does not use:
EXEC SQL RELEASE SPIFFY1;
v The current SQL connection, whatever its location name:
EXEC SQL RELEASE CURRENT;
v All connections except the local connection:
EXEC SQL RELEASE ALL;
v All DB2 private protocol connections. If the first phase of your application
program uses DB2 private protocol access and the second phase uses DRDA
access, then open DB2 private protocol connections from the first phase could
cause a CONNECT operation to fail in the second phase. To prevent that error,
execute the following statement before the commit operation that separates the
two phases:
EXEC SQL RELEASE ALL PRIVATE;
PRIVATE refers to DB2 private protocol connections, which exist only between
instances of DB2 for OS/390 and z/OS.
Three-part names and multiple servers: If you use a three-part name, or an alias
that resolves to one, in a statement executed at a remote server by DRDA access,
and if the location name is not that of the server, then the method by which the
remote server accesses data at the named location depends on the value of
DBPROTOCOL. If the package at the first remote server is bound with
DBPROTOCOL(PRIVATE), DB2 uses DB2 private protocol access to access the
second remote server. If the package at the first remote server is bound with
DBPROTOCOL(DRDA), DB2 uses DRDA access to access the second remote
server. We recommend that you follow these steps so that access to the second
remote server is by DRDA access:
v Rebind the package at the first remote server with DBPROTOCOL(DRDA).
v Bind the package that contains the three-part name at the second server.
Accessing declared temporary tables using three-part names: You can access
a remote declared temporary table using a three-part name only if you use DRDA
access. However, if you combine explicit CONNECT statements and three-part
names in your application, a reference to a remote declared temporary table must
be a forward reference. For example, you can perform the following series of
actions, which includes a forward reference to a declared temporary table:
EXEC SQL CONNECT TO CHICAGO; /* Connect to the remote site */
EXEC SQL
DECLARE GLOBAL TEMPORARY TABLE T1 /* Define the temporary table */
(CHARCOL CHAR(6) NOT NULL); /* at the remote site */
EXEC SQL CONNECT RESET; /* Connect back to local site */
EXEC SQL INSERT INTO CHICAGO.SESSION.T1
(VALUES 'ABCDEF'); /* Access the temporary table*/
/* at the remote site (forward reference) */
However, you cannot perform the following series of actions, which includes a
backward reference to the declared temporary table:
EXEC SQL
DECLARE GLOBAL TEMPORARY TABLE T1 /* Define the temporary table */
(CHARCOL CHAR(6) NOT NULL); /* at the local site (ATLANTA)*/
EXEC SQL CONNECT TO CHICAGO; /* Connect to the remote site */
EXEC SQL INSERT INTO ATLANTA.SESSION.T1
(VALUES 'ABCDEF'); /* Cannot access temp table */
/* from the remote site (backward reference)*/
Savepoints: In a distributed environment, you can set savepoints only if you use
DRDA access with explicit CONNECT statements. If you set a savepoint and then
execute an SQL statement with a three-part name, an SQL error occurs.
Precompiler options
The following precompiler options are relevant to preparing a package to be run
using DRDA access:
CONNECT
Use CONNECT(2), explicitly or by default.
CONNECT(1) causes your CONNECT statements to allow only the restricted
function known as “remote unit of work”. Be particularly careful to avoid
CONNECT(1) if your application updates more than one DBMS in a single unit
of work.
SQL
Use SQL(ALL) explicitly for a package that runs on a server that is not DB2 for
OS/390 and z/OS. The precompiler then accepts any statement that obeys
DRDA rules.
Use SQL(DB2), explicitly or by default, if the server is DB2 for OS/390 and
z/OS only. The precompiler then rejects any statement that does not obey the
rules of DB2 for OS/390 and z/OS.
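As an illustration, both options can be passed through the DSNH CLIST when you prepare the program. (This is a sketch only; the data set name and member are placeholders, and the exact DSNH operands for your installation may differ.)
DSNH INPUT('prefix.SRCLIB.DATA(MYPROG)') HOST(COB2) -
     CONNECT(2) SQL(ALL)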
DB2 and IMS, and DB2 and CICS, jointly implement a two-phase commit process.
You can update an IMS database and a DB2 table in the same unit of work. If a
system or communication failure occurs between committing the work on IMS and
on DB2, then the two programs restore the two systems to a consistent point when
activity resumes.
Details of the two-phase commit process are not important to the rest of this
description. You can read them in Part 4 (Volume 1) of DB2 Administration Guide.
Versions 3 and later of DB2 for OS/390 and z/OS implement two-phase commit. For
other types of DBMS, check the product specifications.
To achieve the effect of coordinated updates with a restricted system, you must first
update one system and commit that work, and then update the second system and
commit its work. If a failure occurs after the first update is committed and before the
second is committed, there is no automatic provision for bringing the two systems
back to a consistent point. Your program must assume that task.
If these conditions are not met, then you are restricted to read-only operations.
Restricting to CONNECT (type 1): You can also restrict your program completely
to the rules for restricted systems, by using the type 1 rules for CONNECT. Those
rules are compatible with packages that were bound on Version 2 Release 3 of DB2
for MVS and were not rebound on a later version. To put those rules into effect for a
package, use the precompiler option CONNECT(1). Be careful not to use packages
precompiled with CONNECT(1) and packages precompiled with CONNECT(2) in
the same package list. The first CONNECT statement executed by your program
determines which rules are in effect for the entire execution: type 1 or type 2. An
attempt to execute a later CONNECT statement precompiled with the other type
returns an error.
For more information about CONNECT (Type 1) and about managing connections
to other systems, see Chapter 1 of DB2 SQL Reference.
Use LOB locators instead of LOB host variables: If you need to store only a
portion of a LOB value at the client, or if your client program manipulates the LOB
data but does not need a copy of it, LOB locators are a good choice. When a client
program retrieves a LOB column from a server into a locator, DB2 transfers only the
4-byte locator value to the client, not the entire LOB value. For information on how

to use LOB locators in an application, see “Using LOB locators to save storage” on
page 236.
Use stored procedure result sets: When you return LOB data to a client program
from a stored procedure, use result sets, rather than passing the LOB data to the
client in parameters. Using result sets to return data causes less LOB
materialization and less movement of data among address spaces. For information
on how to write a stored procedure to return result sets, see “Writing a stored
procedure to return result sets to a DRDA client” on page 547. For information on
how to write a client program to receive result sets, see “Writing a DB2 for OS/390
and z/OS client program or SQL procedure to receive result sets” on page 602.
Set the CURRENT RULES special register to DB2: When a DB2 for OS/390 and
z/OS server receives an OPEN request for a cursor, the server uses the value in
the CURRENT RULES special register to determine the type of host variables the
associated statement uses to retrieve LOB values. If you specify a value of DB2 for
CURRENT RULES before you perform a CONNECT, and the first FETCH for the
cursor uses a LOB locator to retrieve LOB column values, DB2 lets you use only
LOB locators for all subsequent FETCH statements for that column until you close
the cursor. If the first FETCH uses a host variable, DB2 lets you use only host
variables for all subsequent FETCH statements for that column until you close the
cursor. However, if you set the value of CURRENT RULES to STD, DB2 lets you
use the same open cursor to fetch a LOB column into either a LOB locator or a
host variable.
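For example, the special register can be set before the application connects to the server. (A sketch; CHICAGO is a hypothetical location name.)
EXEC SQL SET CURRENT RULES = 'DB2';  /* First FETCH fixes locator vs. host variable */
EXEC SQL CONNECT TO CHICAGO;         /* Server now uses the DB2 rules for this cursor */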
Although a value of STD for CURRENT RULES gives you more programming
flexibility when you retrieve LOB data, you get better performance if you use a
value of DB2. With the STD option, the server must send and receive network
messages for each FETCH to indicate whether the data being transferred is a LOB
locator or a LOB value. With the DB2 option, the server knows the size of the LOB
data after the first FETCH, so an extra message about LOB data size is
unnecessary. The server can send multiple blocks of data to the requester at one
time, which reduces the total time for data transfer.
For example, an end user might want to browse through a large set of employee
records but want to look at pictures of only a few of those employees. At the server,
you set the CURRENT RULES special register to DB2. In the application, you
declare and open a cursor to select employee records. The application then fetches
all picture data into 4-byte LOB locators. Because DB2 knows that 4 bytes of LOB
data is returned for each FETCH, DB2 can fill the network buffers with locators for
many pictures. When a user wants to see a picture for a particular person, the
application can retrieve the picture from the server by assigning the value
referenced by the LOB locator to a LOB host variable:
FETCH C1 INTO :my_loc; /* Fetch BLOB into LOB locator */
.
.
.
DEFER(PREPARE)
To improve performance for both static and dynamic SQL used in DB2 private
protocol access, and for dynamic SQL in DRDA access, consider specifying the
option DEFER(PREPARE) when you bind or rebind your plans or packages.
Remember that statically bound SQL statements in DB2 private protocol access are
processed dynamically. When a dynamic SQL statement accesses remote data, the
PREPARE and EXECUTE statements can be transmitted over the network together
and processed at the remote location, and responses to both statements can be
sent together back to the local subsystem, thus reducing traffic on the network. DB2
does not prepare the dynamic SQL statement until the statement executes. (The
exception to this is dynamic SELECT, which combines PREPARE and DESCRIBE,
whether or not the DEFER(PREPARE) option is in effect.)
All PREPARE messages for dynamic SQL statements that refer to a remote object
will be deferred until either:
v The statement executes
v The application requests a description of the results of the statement.
In general, when you defer PREPARE, DB2 returns SQLCODE 0 from PREPARE
statements. You must therefore code your application to handle any SQL codes that
might have been returned from the PREPARE statement after the associated
EXECUTE or DESCRIBE statement.
When you use predictive governing, the SQL code returned to the requester if the
server exceeds a predictive governing warning threshold depends on the level of
DRDA at the requester. See “Writing an application to handle predictive governing”
on page 505 for more information.
For DB2 private protocol access, when a static SQL statement refers to a remote
object, the transparent PREPARE statement and the EXECUTE statements are
automatically combined and transmitted across the network together. The
PREPARE statement is deferred only if you specify the bind option
DEFER(PREPARE).
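The option is specified on the BIND or REBIND subcommand. The following sketch uses hypothetical collection and member names:
BIND PACKAGE(COLLA) MEMBER(MYPROG) -
     ACTION(REPLACE) DEFER(PREPARE)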
PKLIST
The order in which you specify package collections in a package list can affect the
performance of your application program. When a local instance of DB2 attempts to
You can also specify the package collection associated with an SQL statement in
your application program. Execute the SQL statement SET CURRENT
PACKAGESET before you execute an SQL statement to tell DB2 which package
collection to search for the statement.
When you use DEFER(PREPARE) with DRDA access, the package containing the
statements whose preparation you want to defer must be the first qualifying entry in
DB2’s package search sequence. (See “Identifying packages at run time” on
page 415 for more information.) For example, assume that the package list for a
plan contains two entries:
PKLIST(LOCB.COLLA.*, LOCB.COLLB.*)
If the intended package is in collection COLLB, ensure that DB2 searches that
collection first. You can do this by executing the SQL statement
SET CURRENT PACKAGESET = 'COLLB';
For NODEFER(PREPARE), the collections in the package list can be in any order,
but if the package is not found in the first qualifying PKLIST entry, there is
significant network overhead for searching through the list.
REOPT(VARS)
When you specify REOPT(VARS), DB2 determines access paths at both bind time
and run time for statements that contain one or more of the following variables:
v Host variables
v Parameter markers
v Special registers
At run time, DB2 uses the values in those variables to determine the access paths.
If you specify the bind option REOPT(VARS), DB2 sets the bind option
DEFER(PREPARE) automatically.
Because there are performance costs when DB2 reoptimizes the access path at run
time, we recommend that you do the following:
CURRENTDATA(NO)
Use this bind option to force block fetch for ambiguous queries. See “Use block
fetch” for more information on block fetch.
KEEPDYNAMIC(YES)
Use this bind option to improve performance for queries that use cursors defined
WITH HOLD. With KEEPDYNAMIC(YES), DB2 automatically closes the cursor
when there is no more data to retrieve. The client does not need to send a network
message to tell DB2 to close the cursor. For more information on
KEEPDYNAMIC(YES), see “Keeping prepared statements after commit points” on
page 502.
DBPROTOCOL(DRDA)
If the value of installation default DATABASE PROTOCOL is not DRDA, use this
bind option to cause DB2 to use DRDA access to execute SQL statements with
three-part names. Statements that use DRDA access perform better at execution
time because:
v Binding occurs when the package is bound, not during program execution.
v DB2 does not destroy static statement information at COMMIT time, as it does
with DB2 private protocol access. This means that with DRDA access, if a
COMMIT occurs between two executions of a statement, DB2 does not need to
prepare the statement twice.
How to ensure block fetching: To use either type of block fetch, DB2 must
determine that the cursor is not used for updating or deleting. Indicate that the
cursor does not modify data by adding FOR FETCH ONLY or FOR READ ONLY to
the query in the DECLARE CURSOR statement. If you do not use FOR FETCH
ONLY or FOR READ ONLY, DB2 still uses block fetch for the query if:
| Table 47 summarizes the conditions under which a DB2 server uses block fetch for
| a scrollable cursor when the cursor is used to retrieve result sets.
| Table 47. Effect of CURRENTDATA and isolation level on block fetch for a scrollable cursor
| that is used for a stored procedure result set
|
| Isolation      Cursor sensitivity  CURRENTDATA  Cursor type  Block fetch
| CS, RR, or RS  INSENSITIVE         Yes          Read-only    Yes
|                INSENSITIVE         No           Read-only    Yes
|                SENSITIVE           Yes          Read-only    No
|                SENSITIVE           No           Read-only    Yes
| UR             INSENSITIVE         Yes          Read-only    Yes
|                INSENSITIVE         No           Read-only    Yes
|                SENSITIVE           Yes          Read-only    Yes
|                SENSITIVE           No           Read-only    Yes
|
| When a DB2 for OS/390 and z/OS requester uses a scrollable cursor to retrieve
| data from a DB2 for OS/390 and z/OS server, the following conditions are true:
| v The requester never requests more than 64 rows in a query block, even if more
| rows fit in the query block. In addition, the requester never requests extra query
| blocks. This is true even if the setting of field EXTRA BLOCKS REQ in the
| DISTRIBUTED DATA FACILITY PANEL 2 installation panel on the requester
| allows extra query blocks to be requested. If you want to limit the number of rows
| that the server returns to fewer than 64, you can specify the FETCH FIRST n
| ROWS ONLY clause when you declare the cursor.
| v The requester discards rows of the result table if the application does not use
| those rows. For example, if the application fetches row n and then fetches row
| n+2, the requester discards row n+1. The application gets better performance for
| a blocked scrollable cursor if it mostly scrolls forward, fetches most of the rows in
The number of rows that DB2 transmits on each network transmission depends on
the following factors:
v If n rows of the SQL result set fit within a single DRDA query block, a DB2 server
can send n rows to any DRDA client. In this case, DB2 sends n rows in each
network transmission, until the entire query result set is exhausted.
v If n rows of the SQL result set exceed a single DRDA query block, the number of
rows that are contained in each network transmission depends on the client’s
DRDA software level and configuration:
– If the client does not support extra query blocks, the DB2 server automatically
reduces the value of n to match the number of rows that fit within a DRDA
query block.
– If the client supports extra query blocks, the DRDA client can choose to
accept multiple DRDA query blocks in a single data transmission. DRDA
allows the client to establish an upper limit on the number of DRDA query
blocks in each network transmission.
The number of rows that a DB2 server sends is the smaller of n rows and the
number of rows that fit within the lesser of these two limitations:
- The value of EXTRA BLOCKS SRV in install panel DSNTIP5 at the DB2
server
This is the maximum number of extra DRDA query blocks that the DB2
server returns to a client in a single network transmission.
- The client’s extra query block limit, which is obtained from the DDM
MAXBLKEXT parameter received from the client
When DB2 acts as a DRDA client, the DDM MAXBLKEXT parameter is set
to the value that is specified on the EXTRA BLOCKS REQ install option of
the DSNTIP5 install panel.
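As an illustration of these limits, with hypothetical values that are not defaults from the panels:
Assume OPTIMIZE FOR 1000 ROWS, a query block that holds 100 rows,
EXTRA BLOCKS SRV = 4 at the server, and a client MAXBLKEXT of 8.

  Blocks per transmission = 1 + min(4, 8) = 5
  Rows per transmission   = min(1000, 5 x 100) = 500 rows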
Specifying a large value for n in OPTIMIZE FOR n ROWS can increase the number
of DRDA query blocks that a DB2 server returns in each network transmission. This
function can improve performance significantly for applications that use DRDA
access to download large amounts of data. However, this same function can
In Figure 123, the DRDA client opens a cursor and fetches rows from the cursor. At
some point before all rows in the query result set are returned, the application
issues an SQL INSERT. DB2 uses normal DRDA blocking, which has two
advantages over the blocking that is used for OPTIMIZE FOR n ROWS:
v If the application issues an SQL statement other than FETCH (the example
shows an INSERT statement), the DRDA client can transmit the SQL statement
immediately, because the DRDA connection is not in use after the SQL OPEN.
v If the SQL application closes the cursor before fetching all the rows in the query
result set, the server fetches only the number of rows that fit in one query block,
which is 100 rows of the result set. Basically, the DRDA query block size places
an upper limit on the number of rows that are fetched unnecessarily.
In Figure 124 on page 390, the DRDA client opens a cursor and fetches rows from
the cursor using OPTIMIZE FOR n ROWS. Both the DRDA client and the DB2
server are configured to support multiple DRDA query blocks. At some time before
the end of the query result set, the application issues an SQL INSERT. Because
OPTIMIZE FOR n ROWS is being used, the DRDA connection is not available
when the SQL INSERT is issued because the connection is still being used to
receive the DRDA query blocks for 1000 rows of data. This causes two
performance problems:
v Application elapsed time can increase if the DRDA client waits for a large query
result set to be transmitted, before the DRDA connection can be used for other
SQL statements. Figure 124 on page 390 shows how an SQL INSERT statement
can be delayed because of a large query result set.
v If the application closes the cursor before fetching all the rows in the SQL result
set, the server might fetch a large number of rows unnecessarily.
For more information on OPTIMIZE FOR n ROWS, see “Minimizing overhead for
retrieving few rows: OPTIMIZE FOR n ROWS” on page 661.
| Fast implicit close and FETCH FIRST n ROWS ONLY: Fast implicit close means
| that during a distributed query, the DB2 server automatically closes the cursor after
| it prefetches the nth row if you specify FETCH FIRST n ROWS ONLY, or when
| there are no more rows to return. Fast implicit close can improve performance
| because it saves an additional network transmission between the client and the
| server.
| DB2 uses fast implicit close when the following conditions are true:
| v The query uses limited block fetch.
| v The query retrieves no LOBs.
| v The cursor is not a scrollable cursor.
| v Either of the following conditions is true:
| – The cursor is declared WITH HOLD, and the package or plan that contains
| the cursor is bound with the KEEPDYNAMIC(YES) option.
| – The cursor is not defined WITH HOLD.
| When you use FETCH FIRST n ROWS ONLY, and DB2 does a fast implicit close,
| the DB2 server closes the cursor after it prefetches the nth row, or when there are
| no more rows to return.
# For OPTIMIZE FOR n ROWS, when n is 1, 2, or 3, DB2 uses the value 16 (instead
# of n) for network blocking and prefetches 16 rows. As a result, network usage is
# more efficient even though DB2 uses the small value of n for query optimization.
| Suppose that you need only one row of the result table. To avoid 15 unnecessary
| prefetches, add the FETCH FIRST 1 ROW ONLY clause:
| SELECT * FROM EMP
| OPTIMIZE FOR 1 ROW
| FETCH FIRST 1 ROW ONLY;
| DB2 for OS/390 and z/OS support for the rowset parameter
The rowset parameter can be used in ODBC and JDBC applications on some
platforms to limit the number of rows that are returned from a fetch operation. If a
DRDA requester sends the rowset parameter to a DB2 for OS/390 and z/OS server,
the server does the following things:
v Returns no more than the number of rows in the rowset parameter
v Returns extra query blocks if the value of field EXTRA BLOCKS SRV in the
DISTRIBUTED DATA FACILITY PANEL 2 installation panel on the server allows
extra query blocks to be returned
v Processes the FETCH FIRST n ROWS ONLY clause, if it is specified
v Does not process the OPTIMIZE FOR n ROWS clause
| How to prevent block fetching: If your application requires data currency for a
| cursor, you need to prevent block fetching for the data it points to. To prevent block
| fetching for a distributed cursor, declare the cursor with the FOR UPDATE or FOR
| UPDATE OF clause.
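For example, a cursor declared as follows is never block fetched. (A sketch; DSN8710.EMP is the Version 7 sample employee table, used here for illustration.)
EXEC SQL
  DECLARE C1 CURSOR FOR
    SELECT EMPNO, SALARY
      FROM DSN8710.EMP
      FOR UPDATE OF SALARY;   /* Prevents block fetch; preserves data currency */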
| When ASCII MIXED data or Unicode MIXED data is converted to EBCDIC MIXED,
the converted string is longer than the source string. An error occurs if that
conversion is done to a fixed-length input host variable. The remedy is to use a
varying-length string variable with a maximum length that is sufficient to contain the
expansion.
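For example, in PL/I a varying-length string host variable can be declared as follows; it maps to the SQL VARCHAR data type. (A sketch; the name and the length 400 are arbitrary illustrations.)
DCL DESCRIPTION CHAR(400) VARYING;  /* VARCHAR(400); leaves room for expansion */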
For dynamic SQL applications, bind packages at all remote locations that users
might access with three-part names.
2. Bind the application into a package at every location that is named in the
application. Also bind a package locally.
This situation can become more complicated if you use three-part names to
access DB2 objects from remote sites. For example, suppose you are
connected explicitly to LOC2, and you use DRDA access to execute the
following statement:
SELECT * FROM YRALIAS;
Before you can run DB2 applications of the first type, you must precompile,
compile, link-edit, and bind them.
Productivity hint: To avoid rework, first test your SQL statements using SPUFI,
then compile your program without SQL statements and resolve all compiler errors.
Then proceed with the preparation and the DB2 precompile and bind steps.
Because most compilers do not recognize SQL statements, you must use the DB2
precompiler before you compile the program to prevent compiler errors. The
precompiler scans the program and returns modified source code, which you can
then compile and link-edit. The precompiler also produces a DBRM (database
request module). Bind this DBRM to a package or plan using the BIND
subcommand. (For information on packages and plans, see “Chapter 16. Planning
for DB2 program preparation” on page 315.) When you complete these steps, you
can run your DB2 application.
This chapter details the steps to prepare your application program to run. It includes
instructions for the main steps for producing an application program, additional
steps you might need, and steps for rebinding.
For information on running REXX programs, which you do not prepare for
execution, see “Running a DB2 REXX application” on page 428.
For information on preparing and executing Java programs, see DB2 Application
Programming Guide and Reference for Java.
There are several ways to control the steps in program preparation. They are
described in “Using JCL procedures to prepare applications” on page 428.
For COBOL, you can use one of the following techniques to process SQL
statements:
v Use the DB2 precompiler before you compile your program.
You can use this technique with any version of COBOL.
v Use the COBOL SQL statement coprocessor as you compile your program.
You invoke the SQL statement coprocessor by specifying the SQL compiler
option. You need IBM COBOL for OS/390 & VM Version 2 Release 2 or later to
use this technique. For more information on using the COBOL SQL statement
coprocessor, see IBM COBOL for OS/390 & VM Programming Guide.
# For PL/I, you can use one of the following techniques to process SQL statements:
# v Use the DB2 precompiler before you compile your program.
# You can use this technique with any version of PL/I.
# v Use the PL/I SQL statement coprocessor as you compile your program.
# You invoke the SQL statement coprocessor by specifying the PP(SQL('...'))
# compiler option. You need IBM Enterprise PL/I for z/OS and OS/390 Version 3
# Release 1 or later to use this technique. For more information on using the PL/I
# SQL statement coprocessor, see IBM Enterprise PL/I for z/OS and OS/390
# Programming Guide.
| In this section, references to an SQL statement processor apply to either the DB2
| precompiler or an SQL statement coprocessor. References to the DB2 precompiler
| apply specifically to the precompiler that is provided with DB2.
CICS
If the application contains CICS commands, you must translate the program
before you compile it. (See “Translating command-level statements in a CICS
program” on page 410.)
When you precompile your program, DB2 does not need to be active. The
precompiler does not validate the names of tables and columns that are used in
SQL statements. However, the precompiler checks table and column references
against SQL DECLARE TABLE statements in the program. Therefore, you should
use DCLGEN to obtain accurate SQL DECLARE TABLE statements.
| Input to the precompiler: The primary input for the precompiler consists of
| statements in the host programming language and embedded SQL statements.
|
| Important
| The size of a source program that DB2 can precompile is limited by the region
| size and the virtual memory available to the precompiler. The maximum region
| size and memory available to the DB2 precompiler is usually around 8 MB, but
| these amounts vary with each system installation.
|
|
| You can use the SQL INCLUDE statement to get secondary input from the include
| library, SYSLIB. The SQL INCLUDE statement reads input from the specified
| member of SYSLIB until it reaches the end of the member.
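For example (DECEMP is a hypothetical SYSLIB member, such as DCLGEN output for an employee table):
EXEC SQL INCLUDE SQLCA;    /* Expand the SQL communication area            */
EXEC SQL INCLUDE DECEMP;   /* Read member DECEMP from SYSLIB (hypothetical) */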
| Another preprocessor, such as the PL/I macro preprocessor, can generate source
| statements for the precompiler. Any preprocessor that runs before the precompiler
| must be able to pass on SQL statements. Similarly, other preprocessors can
| process the source code, after you precompile and before you compile or
| assemble.
| There are limits on the forms of source statements that can pass through the
| precompiler. For example, constants, comments, and other source syntax that are
| not accepted by the host compilers (such as a missing right brace in C) can
| interfere with precompiler source scanning and cause errors. You might want to run
| the host compiler before the precompiler to find the source statements that are
| unacceptable to the host compiler. At this point you can ignore the compiler error
| messages for SQL statements. After the source statements are free of unacceptable
| compiler errors, you can then perform the normal DB2 program preparation process
| for that host language.
| Output from the precompiler: The following sections describe various kinds of
output from the precompiler.
Listing output: The output data set, SYSPRINT, used to print output from the
precompiler, has an LRECL of 133 and a RECFM of FBA. Statement numbers in
the precompiler listing display as they appear in the source program. However,
DB2 stores statement numbers greater than 32767 as 0 in the DBRM.
| The DB2 precompiler writes the following information in the SYSPRINT data set:
| v Precompiler source listing
| Modified source statements: The DB2 precompiler writes the source statements
| that it processes to SYSCIN, the input data set to the compiler or assembler. This
| data set must have attributes RECFM F or FB, and LRECL 80. The modified source
| code contains calls to the DB2 language interface. The SQL statements that the
| calls replace appear as comments.
Database request modules: The major output from the precompiler is a database
request module (DBRM). That data set contains the SQL statements and host
variable information extracted from the source program, along with information that
identifies the program and ties the DBRM to the translated source statements. It
becomes the input to the bind process.
The data set requires space to hold all the SQL statements plus space for each
host variable name and some header information. The header information alone
requires approximately two records for each DBRM, 20 bytes for each SQL record,
and 6 bytes for each host variable. For an exact format of the DBRM, see the
DBRM mapping macro, DSNXDBRM in library prefix.SDSNMACS. The DCB
attributes of the data set are RECFM FB, LRECL 80. The precompiler sets the
characteristics. You can use IEBCOPY, IEHPROGM, TSO commands COPY and
DELETE, or other PDS management tools for maintaining these data sets.
The DB2 language preparation procedures in job DSNTIJMV use the DISP=OLD
parameter to enforce data integrity. However, the installation process converts the
DISP=OLD parameter for the DBRM library data set to DISP=SHR, which can
cause data integrity problems when you run multiple precompilation jobs. If you plan
to run multiple precompilation jobs and are not using DFSMSdfp’s partitioned data
set extended (PDSE), you must change the DB2 language preparation procedures
(DSNHCOB, DSNHCOB2, DSNHICOB, DSNHFOR, DSNHC, DSNHPLI,
DSNHASM, DSNHSQL) to specify the DISP=OLD parameter instead of the
DISP=SHR parameter.
To use the COBOL SQL statement coprocessor, you need to do the following
things:
v Specify the following options when you compile your program:
– SQL
The SQL compiler option indicates that you want the compiler to invoke the
SQL statement coprocessor. Specify a list of SQL processing options (within
single or double quotes and enclosed in parentheses) after the SQL keyword.
Table 48 on page 403 lists the options that you can specify.
For example, suppose that you want to process SQL statements as you
compile a COBOL program. In your program, the apostrophe is the string
delimiter in SQL statements, and the SQL statements conform to DB2 rules.
This means that you need to specify the APOSTSQL and STDSQL(NO)
options. Therefore, you need to include this option in your compile step:
SQL("APOSTSQL STDSQL(NO)")
– LIB
You need to specify the LIB option when you specify the SQL option, whether
or not you have any COPY, BASIS, or REPLACE statements in your program.
– SIZE(nnnnnn)
You might need to increase the SIZE value so that the user region is large
enough for the SQL statement coprocessor. Do not specify SIZE(MAX).
v Include DD statements for the following data sets in the JCL for your compile
step:
– DB2 load library (prefix.SDSNLOAD)
The SQL statement coprocessor calls DB2 modules to do the SQL statement
processing. You therefore need to include the name of the DB2 load library
data set in the STEPLIB concatenation for the compile step.
– DBRM library
The SQL statement coprocessor produces a DBRM. DBRMs and the DBRM
library are described in “Output from the precompiler” on page 399. You need
to include a DBRMLIB DD statement that specifies the DBRM library data set.
– Library for SQL INCLUDE statements
If your program contains SQL INCLUDE member-name statements that
specify secondary input to the source program, you need to include the name
of the data set that contains member-name in the SYSLIB concatenation for
the compile step.
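The compile step might look like the following JCL sketch. (IGYCRCTL is the IBM COBOL compiler; prefix, MYPROG, and the SIZE value are placeholders, and your installation's data set names will differ.)
//COB      EXEC PGM=IGYCRCTL,
//  PARM='SQL("APOSTSQL STDSQL(NO)") LIB SIZE(4000K)'
//STEPLIB  DD DISP=SHR,DSN=prefix.SDSNLOAD
//DBRMLIB  DD DISP=SHR,DSN=prefix.DBRMLIB.DATA(MYPROG)
//SYSLIB   DD DISP=SHR,DSN=prefix.SRCLIB.DATA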
# To use the SQL statement preprocessor, you need to do the following things:
# v Specify the following options when you compile your program by using the IBM
# Enterprise PL/I for z/OS and OS/390 Version 3 Release 1 or later:
# – PP(SQL('option, ...'))
# This compiler option indicates that you want the compiler to invoke the SQL
# statement preprocessor. Specify a list of SQL processing options (within single
# or double quotes and enclosed in parentheses) after the SQL keyword.
# Separate options in the list by a comma, blank, or both. Table 48 on page 403
# lists the options that you can specify.
# For example, suppose that you want to process SQL statements as you
# compile a PL/I program. In your program, the DATE data types require USA
# format, and the SQL statements conform to DB2 rules. This means that you
# need to specify the DATE(USA) and STDSQL(NO) options. Therefore, you
# need to include this option in your compile step:
# PP(SQL('DATE(USA), STDSQL(NO)'))
# – LIMITS(FIXEDBIN(63), FIXEDDEC(31))
# These options are required for LOB support.
# – SIZE(nnnnnn)
# You might need to increase the SIZE value so that the user region is large
# enough for the SQL statement preprocessor. Do not specify SIZE(MAX).
# v Include DD statements for the following data sets in the JCL for your compile
# step:
# – DB2 load library (prefix.SDSNLOAD)
# The SQL preprocessor calls DB2 modules to do the SQL statement
# processing. You therefore need to include the name of the DB2 load library
# data set in the STEPLIB concatenation for the compile step.
# – DBRM library
# The SQL preprocessor produces a DBRM. DBRMs and the DBRM library are
# described in “Output from the precompiler” on page 399. You need to include
# a DBRMLIB DD statement that specifies the DBRM library data set.
# – Library for SQL INCLUDE statements
# If your program contains SQL INCLUDE member-name statements that
# specify secondary input to the source program, you need to include the name
# of the data set that contains member-name in the SYSLIB concatenation for
# the compile step.
If you use the DB2 precompiler, you can specify SQL processing options in one of
the following ways:
v With DSNH operands
If you use the COBOL SQL statement coprocessor, you specify the coprocessor
options as the argument of the SQL compiler option.
# If you use the PL/I SQL statement preprocessor, you specify the preprocessor
# options as the argument of the PP(SQL('...')) compiler option.
DB2 assigns default values for any SQL processing options for which you do not
explicitly specify a value. Those defaults are the values that are specified in the
APPLICATION PROGRAMMING DEFAULTS installation panels.
Table of SQL processing options: Table 48 shows the options you can specify
when you use the DB2 precompiler or an SQL statement coprocessor. The table
also includes abbreviations for those options.
The table uses a vertical bar (|) to separate mutually exclusive options, and
brackets ([ ]) to indicate that you can sometimes omit the enclosed option.
Table 48. SQL processing options
APOST(2,3)
  Recognizes the apostrophe (') as the string delimiter within host language
  statements. The option is not available in all languages; see Table 50 on
  page 409.
  APOST and QUOTE are mutually exclusive options. The default is in the field
  STRING DELIMITER on Application Programming Defaults Panel 1 when DB2 is
  installed. If STRING DELIMITER is the apostrophe ('), APOST is the default.
APOSTSQL(2,3)
  Recognizes the apostrophe (') as the string delimiter and the quotation mark
  (") as the SQL escape character within SQL statements. If you have a COBOL
  program and you specify SQLFLAG, then you should also specify APOSTSQL.
  APOSTSQL and QUOTESQL are mutually exclusive options. The default is in the
  field SQL STRING DELIMITER on Application Programming Defaults Panel 1 when
  DB2 is installed. If SQL STRING DELIMITER is the apostrophe ('), APOSTSQL is
  the default.
ATTACH(TSO|CAF|RRSAF)
  Specifies the attachment facility that the application uses to access DB2.
  TSO, CAF, and RRSAF applications that load the attachment facility can use
  this option to specify the correct attachment facility, instead of coding a
  dummy DSNHLI entry point.
COMMA Recognizes the comma (,) as the decimal point indicator in decimal or floating-point literals. COMMA and PERIOD are mutually exclusive options. The default (COMMA or PERIOD) is chosen under DECIMAL POINT IS on Application Programming Defaults Panel 1 when DB2 is installed.
DATE(ISO|USA|EUR|JIS|LOCAL) Specifies the format for date output. The default is in the field DATE FORMAT on Application Programming Defaults Panel 2 when DB2 is installed. You cannot use the LOCAL option unless you have a date exit routine.
# DEC(15|31) Specifies the maximum precision for decimal arithmetic operations. See “Using 15-digit
# D(15.s|31.s) and 31-digit precision for decimal numbers” on page 13.
# If the form Dpp.s is specified, pp must be either 15 or 31, and s, which represents the
# minimum scale to be used for division, must be a number between 1 and 9.
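For example, assuming the Dpp.s form described above, either of the following specifications (values are illustrative) requests 31-digit precision; the second also sets a minimum division scale of 5:

DEC(31)
D31.5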
# FLAG(I|W|E|S)3 Suppresses diagnostic messages below the specified severity level (Informational,
Warning, Error, and Severe error for severity codes 0, 4, 8, and 12 respectively).
GRAPHIC Indicates that X'0E' and X'0F' are special control characters (shift-out and shift-in) for EBCDIC data. GRAPHIC and NOGRAPHIC are mutually exclusive options. The default (GRAPHIC or NOGRAPHIC) is chosen under MIXED DATA on Application Programming Defaults Panel 1 when DB2 is installed.
HOST Defines the host language that contains the SQL statements. For C, specify:
v C if you do not want DB2 to fold lowercase letters in SBCS SQL ordinary identifiers to uppercase
v C(FOLD) if you want DB2 to fold lowercase letters in SBCS SQL ordinary identifiers to uppercase
If you omit the HOST option, the DB2 precompiler issues a level-4 diagnostic message and uses the default value for this option.
This option also sets the language-dependent defaults; see Table 50 on page 409.
LEVEL[(aaaa)] Defines the level of a module, where aaaa is any alphanumeric value of up to seven
L characters. This option is not recommended for general use, and the DSNH CLIST
and the DB2I panels do not support it. For more information, see “Setting the program
level” on page 418.
For assembler, C, C++, FORTRAN, and PL/I, you can omit the suboption (aaaa). The
resulting consistency token is blank. For COBOL, you need to specify the suboption.
LINECOUNT1,2,3(n)
LC
Defines the number of lines per page to be n for the DB2 precompiler listing. This includes header lines inserted by the DB2 precompiler. The default setting is LINECOUNT(60).
MARGINS2,3(m,n[,c])
MAR
Specifies what part of each source record contains host language or SQL statements and, for assembler, where continuations begin. The first option (m) is the beginning column for statements. The second option (n) is the ending column for statements. The third option (c) specifies, for assembler, the column in which continuations begin. Otherwise, the DB2 precompiler places a continuation indicator in the column immediately following the ending column. Margin values can range from 1 to 80.
Default values depend on the HOST option you specify; see Table 50 on page 409. The DSNH CLIST and the DB2I panels do not support this option. In assembler, the margin option must agree with the ICTL instruction, if present in the source.
NOFOR In static SQL, eliminates the need for the FOR UPDATE or FOR UPDATE OF clause in DECLARE CURSOR statements. When you use NOFOR, your program can make positioned updates to any columns that the program has DB2 authority to update.
When you do not use NOFOR, if you want to make positioned updates to any columns that the program has DB2 authority to update, you need to specify FOR UPDATE with no column list in your DECLARE CURSOR statements. The FOR UPDATE clause with no column list applies to static or dynamic SQL statements.
Whether or not you use NOFOR, you can specify FOR UPDATE OF with a column list to restrict updates to only the columns named in the clause and to specify the acquisition of update locks.
If the resulting DBRM is very large, you might need extra storage when you specify NOFOR or use the FOR UPDATE clause with no column list.
NOGRAPHIC Indicates the use of X'0E' and X'0F' in a string, but not as control characters.
GRAPHIC and NOGRAPHIC are mutually exclusive options. The default (GRAPHIC or
NOGRAPHIC) is chosen under MIXED DATA on Application Programming Defaults
Panel 1 when DB2 is installed.
NOOPTIONS2
NOOPTN
Suppresses the DB2 precompiler options listing.
NOSOURCE2,3
NOS
Suppresses the DB2 precompiler source listing. This is the default.
NOXREF2,3
NOX
Suppresses the DB2 precompiler cross-reference listing. This is the default.
| ONEPASS2 Processes in one pass, to avoid the additional processing time for making two passes.
| ON Declarations must appear before SQL references.
| Default values depend on the HOST option specified; see Table 50 on page 409.
PERIOD Recognizes the period (.) as the decimal point indicator in decimal or floating-point literals. COMMA and PERIOD are mutually exclusive options. The default (COMMA or PERIOD) is chosen under DECIMAL POINT IS on Application Programming Defaults Panel 1 when DB2 is installed.
QUOTE2,3
Q
Recognizes the quotation mark (") as the string delimiter within host language statements. This option applies only to COBOL.
QUOTESQL2,3 Recognizes the quotation mark (") as the string delimiter and the apostrophe (') as the SQL escape character within SQL statements. This option applies only to COBOL.
SQL(ALL|DB2) Indicates whether the SQL statements are interpreted for DB2 for OS/390 and z/OS or for another server. SQL(DB2), the default, means to interpret SQL statements and check syntax for use by DB2 for OS/390 and z/OS. SQL(DB2) is recommended when the database server is DB2 for OS/390 and z/OS.
SQLFLAG(IBM|STD[(ssname[,qualifier])]) Specifies the standard used to check the syntax of SQL statements. When statements deviate from the standard, the SQL statement processor writes informational messages (flags) to the output listing. The SQLFLAG option is independent of other
SQL statement processor options, including SQL and STDSQL. However, if you have
a COBOL program and you specify SQLFLAG, then you should also specify
APOSTSQL.
IBM checks SQL statements against the syntax of IBM SQL Version 1. You can also
use SAA for this option, as in releases before Version 7.
STD checks SQL statements against the syntax of the entry level of the ANSI/ISO
SQL standard of 1992. You can also use 86 for this option, as in releases before
Version 7.
ssname requests semantics checking, using the specified DB2 subsystem name for
catalog access. If you do not specify ssname, the SQL statement processor checks
only the syntax.
qualifier specifies the qualifier used for flagging. If you specify a qualifier, you must
always specify the ssname first. If qualifier is not specified, the default is the
authorization ID of the process that started the SQL statement processor.
STDSQL(NO|YES)1 Indicates to which rules the output statements should conform.
STDSQL(YES) indicates that the precompiled SQL statements in the source program
conform to certain rules of the SQL standard. STDSQL(NO) indicates conformance to
DB2 rules.
The default is in the field STD SQL LANGUAGE on Application Programming Defaults
Panel 2 when DB2 is installed.
TIME(ISO|USA|EUR|JIS|LOCAL) Specifies the format for time output. The default is in the field TIME FORMAT on Application Programming Defaults Panel 2 when DB2 is installed. You cannot use the LOCAL option unless you have a time exit routine.
TWOPASS2
TW
Processes in two passes, so that declarations need not precede references. Default values depend on the HOST option specified; see Table 50 on page 409.
VERSION[(aaaa|AUTO)] Defines the version identifier of a package and the resulting DBRM. If you do not specify a version at precompile time, an empty string is the default version identifier. If you specify AUTO, the SQL statement processor uses the consistency token to generate the version identifier. If the consistency token is a timestamp, the timestamp is converted into ISO character format and used as the version identifier. The timestamp used is based on the System/370 Store Clock value. For information on using VERSION, see “Identifying a package version” on page 417.
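For example, a precompile step might request automatically generated version identifiers (the step name and HOST value here are illustrative; DSNHPC is the DB2 precompiler program):

//PC EXEC PGM=DSNHPC,
// PARM='HOST(IBMCOB),VERSION(AUTO)'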
# XREF2,3 Includes a sorted cross-reference listing of symbols used in SQL statements in the
| listing output.
Notes:
1. You can use STDSQL(86) as in prior releases of DB2. The SQL statement processor treats it the same as
STDSQL(YES).
2. This option is ignored when the COBOL compiler precompiles the application.
# 3. This option is ignored when the PL/I compiler precompiles the application.
Defaults for options of the SQL statement processor: Some SQL statement
processor options have defaults based on values specified on the Application
Programming Defaults panels. Table 49 shows those options and defaults:
Table 49. IBM-supplied installation default SQL statement processing options. The installer can change these defaults.

Install option (DSNTIPF)   Install default      Equivalent SQL statement   Available SQL statement
                                                processing option          processing options
STRING DELIMITER           quotation mark (")   QUOTE                      APOST, QUOTE
SQL STRING DELIMITER       quotation mark (")   QUOTESQL                   APOSTSQL, QUOTESQL
DECIMAL POINT IS           PERIOD               PERIOD                     COMMA, PERIOD
DATE FORMAT                ISO                  DATE(ISO)                  DATE(ISO|USA|EUR|JIS|LOCAL)
DECIMAL ARITHMETIC         DEC15                DEC(15)                    DEC(15|31)
MIXED DATA                 NO                   NOGRAPHIC                  GRAPHIC, NOGRAPHIC
For dynamic SQL statements, another application programming default, USE FOR DYNAMICRULES, determines
whether DB2 uses the application programming default or the SQL statement processor option for the following install
options:
v STRING DELIMITER
v SQL STRING DELIMITER
v DECIMAL POINT IS
v DECIMAL ARITHMETIC
v MIXED DATA
If the value of USE FOR DYNAMICRULES is YES, then dynamic SQL statements use the application programming
defaults. If the value of USE FOR DYNAMICRULES is NO, then dynamic SQL statements in packages or plans with
bind, define, and invoke behavior use the SQL statement processor options. See “Using DYNAMICRULES to specify
behavior of dynamic SQL statements” on page 418 for an explanation of bind, define, and invoke behavior.
Some SQL statement processor options have default values based on the host
language. Some options do not apply to some languages. Table 50 shows the
language-dependent options and defaults.
Table 50. Language-dependent DB2 precompiler options and defaults

HOST value               Defaults
ASM                      APOST1, APOSTSQL1, PERIOD1, TWOPASS, MARGINS(1,71,16)
C or CPP                 APOST1, APOSTSQL1, PERIOD1, ONEPASS, MARGINS(1,72)
COBOL, COB2, or IBMCOB   QUOTE2, QUOTESQL2, PERIOD, ONEPASS1, MARGINS(8,72)1
FORTRAN                  APOST1, APOSTSQL1, PERIOD1, ONEPASS1, MARGINS(1,72)1
PLI                      APOST1, APOSTSQL1, PERIOD1, ONEPASS, MARGINS(2,72)

Notes:
1. Forced for this language; no alternative allowed.
2. The default is chosen on Application Programming Defaults Panel 1 when DB2 is installed. The IBM-supplied
installation defaults for string delimiters are QUOTE (host language delimiter) and QUOTESQL (SQL escape
character). The installer can replace the IBM-supplied defaults with other defaults. The precompiler options you
specify override any defaults in effect.
If your source program is in COBOL, you must specify a string delimiter that is
the same for the DB2 precompiler, COBOL compiler, and CICS translator. The
defaults for the DB2 precompiler and COBOL compiler are not compatible with
the default for the CICS translator.
If the SQL statements in your source program refer to host variables that a
pointer stored in the CICS TWA addresses, you must make the host variables
addressable to the TWA before you execute those statements. For example, a
COBOL application can issue the following statement to establish
addressability to the TWA:
EXEC CICS ADDRESS
TWA (address-of-twa-area)
END-EXEC
You can run CICS applications only from CICS address spaces. This
restriction applies to the RUN option of the DSN command processor, which
runs only in TSO.
You can append JCL from a job created by the DB2 Program Preparation
panels to the CICS translator JCL to prepare an application program. To run
the prepared program under CICS, you might need to update the RCT and
define programs and transactions to CICS. Your system programmer must
make the appropriate resource control table (RCT) and CICS resource or table
entries. For information on the required resource entries, see Part 2 of DB2
Installation Guide and CICS for MVS/ESA Resource Definition Guide.
| If you use an SQL statement coprocessor, you process SQL statements as you
| compile your program. You must use JCL procedures when you use the SQL
| statement coprocessor.
The purpose of the link edit step is to produce an executable load module. To
enable your application to interface with the DB2 subsystem, you must use a
link-edit procedure that builds a load module that satisfies these requirements:
For a program that uses 31-bit addressing, link-edit the program with the
AMODE=31 and RMODE=ANY options.
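For example, a link-edit step might pass these attributes to the linkage editor (the step name is illustrative; IEWL is the conventional linkage-editor program name):

//LKED EXEC PGM=IEWL,
// PARM='AMODE=31,RMODE=ANY'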
CICS
Include the DB2 CICS language interface module (DSNCLI).
You can link DSNCLI with your program in either 24-bit or 31-bit addressing
mode. If your application program runs in 31-bit addressing
mode, you should link-edit the DSNCLI stub to your application with the
attributes AMODE=31 and RMODE=ANY so that your application can run
above the 16M line. For more information on compiling and link-editing CICS
application programs, see the appropriate CICS manual.
You also need the CICS EXEC interface module appropriate for the
programming language. CICS requires that this module be the first control
section (CSECT) in the final load module.
The size of the executable load module that is produced by the link-edit step varies
depending on the values that the SQL statement processor inserts into the source
code of the program.
For more information on compiling and link-editing, see “Using JCL procedures to
prepare applications” on page 428.
For more information on link-editing attributes, see the appropriate MVS manuals.
For details on DSNH, see Chapter 2 of DB2 Command Reference.
Exception
| You do not need to bind a DBRM if the only SQL statement in the program is
| SET CURRENT PACKAGESET.
| Because you do not need a plan or package to execute the SET CURRENT
| PACKAGESET statement, the ENCODING bind option does not affect the SET
| CURRENT PACKAGESET statement. An application that needs to provide a host
| variable value in an encoding scheme other than the system default encoding
| scheme must use the DECLARE VARIABLE statement to specify the encoding
| scheme of the host variable.
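A sketch of such a declaration follows (the host variable name and the encoding scheme are illustrative; see the DECLARE VARIABLE description in DB2 SQL Reference for the exact syntax in your host language):

EXEC SQL DECLARE :HV1 VARIABLE CCSID UNICODE;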
From a DB2 requester, you can run a plan by naming it in the RUN subcommand,
but you cannot run a package directly. You must include the package in a plan and
then run the plan.
To bind a package at a remote DB2 system, you must have all the privileges or
authority there that you would need to bind the package on your local system. To
bind a package at another type of system, such as SQL/DS, you need any
privileges that system requires to execute its SQL statements and use its data
objects.
The bind process for a remote package is the same as for a local package, except
that the local communications database must be able to recognize the location
name you use as resolving to a remote location. To bind the DBRM PROGA at the
location PARIS, in the collection GROUP1, use:
BIND PACKAGE(PARIS.GROUP1)
MEMBER(PROGA)
Then, include the remote package in the package list of a local plan, say PLANB,
by using:
BIND PLAN (PLANB)
PKLIST(PARIS.GROUP1.PROGA)
| The ENCODING bind option has the following effect on a remote application:
| v If you bind a package locally, which is recommended, and you specify the
| ENCODING bind option for the local package, the ENCODING bind option for the
| local package applies to the remote application.
| v If you do not bind a package locally, and you specify the ENCODING bind option
| for the plan, the ENCODING bind option for the plan applies to the remote
| application.
| v If you do not specify an ENCODING bind option for the package or plan at the
| local site, the value of APPLICATION ENCODING that was specified on
| installation panel DSNTIPF at the local site applies to the remote application.
When you bind or rebind, DB2 checks authorizations, reads and updates the
catalog, and creates the package in the directory at the remote site. DB2 does not
read or update catalogs or check authorizations at the local site.
If you bind with the option COPY, the COPY privilege must exist locally. DB2
performs authorization checking, reads and updates the catalog, and creates the
package in the directory at the remote site. DB2 reads the catalog records related
to the copied package at the local site. If the local site is installed with time or date
format LOCAL, and a package is created at a remote site using the COPY option,
the COPY option causes DB2 at the remote site to convert values that are returned
to the local site into ISO format, unless an SQL statement specifies a different format.
Once you bind a package, you can rebind, free, or bind it with the REPLACE option
using either a local or a remote bind.
Turning an existing plan into packages to run remotely: If you have used DB2
before, you might have an existing application that you want to run at a remote
location, using DRDA access. To do that, you need to rebind the DBRMs in the
current plan as packages at the remote location. You also need a new plan that
includes those remote packages in its package list.
When you now run the existing application at your local DB2, using the new
application plan, these things happen:
v You connect immediately to the remote location named in the
CURRENTSERVER option.
v When about to run a package, DB2 searches for it in the collection REMOTE1 at
the remote location.
v Any UPDATE, DELETE, or INSERT statements in your application affect tables
at the remote location.
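A sketch of the bind for such a plan follows (the plan name PLANX and the location name REMLOC are illustrative; REMOTE1 is the collection from the scenario above, and CURRENTSERVER names the remote location to connect to):

BIND PLAN(PLANX)
PKLIST(REMLOC.REMOTE1.*)
CURRENTSERVER(REMLOC)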
Binding DBRMs directly to a plan: A plan can contain DBRMs bound directly to
it. To bind three DBRMs—PROGA, PROGB, and PROGC—directly to plan PLANW,
use:
BIND PLAN(PLANW)
MEMBER(PROGA,PROGB,PROGC)
You can include as many DBRMs in a plan as you wish. However, if you use a
large number of DBRMs in a plan (more than 500, for example), you could have
trouble maintaining the plan. To ease maintenance, you can bind each DBRM
separately as a package, specifying the same collection for all packages bound,
and then bind a plan specifying that collection in the plan’s package list. If the
design of the application prevents this method, see if your system administrator can
increase the size of the EDM pool to be at least 10 times the size of either the
largest database descriptor (DBD) or the plan, whichever is greater.
To bind DBRMs directly to the plan, and also include packages in the package list,
use both MEMBER and PKLIST. The example below includes:
v The DBRMs PROG1 and PROG2
v All the packages in a collection called TEST2
v The packages PROGA and PROGC in the collection GROUP1
MEMBER(PROG1,PROG2)
PKLIST(TEST2.*,GROUP1.PROGA,GROUP1.PROGC)
You must specify MEMBER, PKLIST, or both options. The plan that results consists
of one of the following:
v Programs associated with DBRMs in the MEMBER list only
v Programs associated with packages and collections identified in PKLIST only
v A combination of the specifications on MEMBER and PKLIST
(Usually, the consistency token is in an internal DB2 format. You can override that
token if you wish: see “Setting the program level” on page 418.)
But you need other identifiers also. The consistency token alone uniquely identifies
a DBRM bound directly to a plan, but it does not necessarily identify a unique
package. When you bind DBRMs directly to a particular plan, you bind each one
Identifying the location: When your program executes an SQL statement, DB2
uses the value in the CURRENT SERVER special register to determine the location
of the necessary package or DBRM. If the current server is your local DB2 and it
does not have a location name, the value is blank.
You can change the value of CURRENT SERVER by using the SQL CONNECT
statement in your program. If you do not use CONNECT, the value of CURRENT
SERVER is the location name of your local DB2 (or blank, if your DB2 has no
location name).
Identifying the collection: When your program executes an SQL statement, DB2
uses the value in the CURRENT PACKAGESET special register as the collection
name for a necessary package. To set or change that value within your program,
use the SQL SET CURRENT PACKAGESET statement.
If you do not use SET CURRENT PACKAGESET, the value in the register is blank
when your application begins to run and remains blank. In that case, the order in
which DB2 searches available collections can be important.
When you call a stored procedure, the special register CURRENT PACKAGESET
contains the value that you specified for the COLLID parameter when you defined
the stored procedure. When the stored procedure returns control to the calling
program, DB2 restores CURRENT PACKAGESET to the value it contained before
the call.
The order of search: The order in which you specify packages in a package list
can affect run-time performance. Searching for the specific package involves
searching the DB2 directory, which can be costly. When you use collection-id.* with
the PKLIST keyword, you should specify first the collections in which DB2 is most likely
to find a package.
For example, suppose that you perform the following bind:
BIND PLAN (PLAN1) PKLIST (COL1.*, COL2.*, COL3.*, COL4.*)
When you then execute program PROG1, DB2 does the following:
1. Checks to see if there is a PROG1 program bound as part of the plan
2. Searches for COL1.PROG1.timestamp
3. If it does not find COL1.PROG1.timestamp, searches for
COL2.PROG1.timestamp
4. If it does not find COL2.PROG1.timestamp, searches for
COL3.PROG1.timestamp
5. If it does not find COL3.PROG1.timestamp, searches for
COL4.PROG1.timestamp.
If you use the BIND PLAN option DEFER(PREPARE), DB2 does not search all
collections in the package list. See “Use bind options that improve performance”
on page 383 for more information.
If you set the special register CURRENT PACKAGESET, DB2 skips the check
for programs that are part of the plan and uses the value of CURRENT
PACKAGESET as the collection. For example, if CURRENT PACKAGESET
contains COL5, then DB2 uses COL5.PROG1.timestamp for the search.
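For example, the following embedded statement directs the search to collection COL5:

EXEC SQL SET CURRENT PACKAGESET = 'COL5';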
If the order of search is not important: In many cases, DB2’s order of search is
not important to you and does not affect performance. For an application that runs
only at your local DB2, you can name every package differently and include them
all in the same collection. The package list on your BIND PLAN subcommand can
read:
PKLIST (collection.*)
You can add packages to the collection even after binding the plan. DB2 lets you
bind packages having the same package name into the same collection only if their
version IDs are different.
If your application uses DRDA access, you must bind some packages at remote
locations. Use the same collection name at each location, and identify your package
list as:
PKLIST (*.collection.*)
If you use an asterisk for part of a name in a package list, DB2 checks the
authorization for the package to which the name resolves at run time. To avoid the
checking at run time in the example above, you can grant EXECUTE authority for
the entire collection to the owner of the plan before you bind the plan.
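For example, if OWNER1 owns the plan (the names here are illustrative), a grant like the following avoids the run-time check:

GRANT EXECUTE ON PACKAGE collection.* TO OWNER1;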
You can do that with many versions of the program, without having to rebind the
application plan. Neither do you have to rename the plan or change any RUN
subcommands that use it.
The BIND and RUN values can be specified for packages and plans. The other
values can be specified only for packages.
Table 52. Definitions of dynamic SQL statement behaviors

Dynamic SQL attribute        Bind behavior       Run behavior    Define behavior      Invoke behavior
Authorization ID             Plan or package     Current SQLID   User-defined         Authorization ID of
                             owner                               function or stored   invoker1
                                                                 procedure owner
Default qualifier for        Bind OWNER or       Current SQLID   User-defined         Authorization ID of
unqualified objects          QUALIFIER value                     function or stored   invoker
                                                                 procedure owner
CURRENT SQLID2               Not applicable      Applies         Not applicable       Not applicable
Source for application       Determined by       Install panel   Determined by        Determined by
programming options          DSNHDECP            DSNTIPF         DSNHDECP             DSNHDECP
                             parameter                           parameter            parameter
                             DYNRULS3                            DYNRULS3             DYNRULS3
Can execute GRANT,           No                  Yes             No                   No
REVOKE, CREATE,
ALTER, DROP, RENAME?
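The behaviors in Table 52 are selected with the DYNAMICRULES bind option; for example (the collection and member names are illustrative):

BIND PACKAGE(COLL1) MEMBER(PROGA) DYNAMICRULES(BIND)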
Determining the optimal authorization cache size: When DB2 determines that
you have the EXECUTE privilege on a plan, package collection, stored procedure,
or user-defined function, DB2 can cache your authorization ID. When you run the
plan, package, stored procedure, or user-defined function, DB2 can check your
authorization more quickly.
Determining the authorization cache size for plans: The CACHESIZE option
(optional) allows you to specify the size of the cache to acquire for the plan. DB2
uses this cache for caching the authorization IDs of those users running a plan.
DB2 uses the CACHESIZE value to determine the amount of storage to acquire for
the authorization cache. DB2 acquires storage from the EDM storage pool. The
default CACHESIZE value is 1024 or the size set at install time.
The size of the cache you specify depends on the number of individual
authorization IDs actively using the plan. Required overhead takes 32 bytes, and
each authorization ID takes up 8 bytes of storage. The minimum cache size is 256
bytes (enough for 28 entries and overhead information) and the maximum is 4096
bytes (enough for 508 entries and overhead information). You should specify size in
multiples of 256 bytes; otherwise, the specified value rounds up to the next highest
value that is a multiple of 256.
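For example, a plan run concurrently by about 100 authorization IDs needs 32 + (100 × 8) = 832 bytes, which rounds up to 1024; you might bind it as follows (the plan name is illustrative):

BIND PLAN(PLANX) CACHESIZE(1024)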
If you run the plan infrequently, or if authority to run the plan is granted to PUBLIC,
you might want to turn off caching for the plan so that DB2 does not use
unnecessary storage. To do this, specify a value of 0 for the CACHESIZE option.
Any plan that you run repeatedly is a good candidate for tuning using the
CACHESIZE option. Also, if you have a plan that a large number of users run
concurrently, you might want to use a larger CACHESIZE.
See DB2 Installation Guide for more information on setting the size of the package
authorization cache.
Determining the authorization cache size for stored procedures and user-defined
functions: DB2 provides a single routine authorization cache for an entire DB2
subsystem. The routine authorization cache stores a list of authorization IDs that
have the EXECUTE privilege on user-defined functions or stored procedures. The
DB2 installer sets the size of the routine authorization cache by entering a size in
field ROUTINE AUTH CACHE of DB2 installation panel DSNTIPP. A 32KB
authorization cache is large enough to hold authorization information for about 380
stored procedures or user-defined functions.
See DB2 Installation Guide for more information on setting the size of the routine
authorization cache.
Specifying the SQL rules: Not only does SQLRULES specify the rules under
which a type 2 CONNECT statement executes, but it also sets the initial value of
the special register CURRENT RULES when the database server is the local DB2.
When the server is not the local DB2, the initial value of CURRENT RULES is DB2.
After binding a plan, you can change the value in CURRENT RULES in an
application program using the statement SET CURRENT RULES.
CURRENT RULES determines the SQL rules, DB2 or SQL standard, that apply to
SQL behavior at run time. For example, the value in CURRENT RULES affects the
behavior of defining check constraints using the statement ALTER TABLE on a
populated table:
v If CURRENT RULES has a value of STD and no existing rows in the table
violate the check constraint, DB2 adds the constraint to the table definition.
Otherwise, an error occurs and DB2 does not add the check constraint to the
table definition.
If the table contains data and is already in a check pending status, the ALTER
TABLE statement fails.
v If CURRENT RULES has a value of DB2, DB2 adds the constraint to the table
definition, defers the enforcing of the check constraints, and places the table
space or partition in check pending status.
You can use the statement SET CURRENT RULES to control the action that the
statement ALTER TABLE takes. Assuming that the value of CURRENT RULES is
initially STD, the following SQL statements change the SQL rules to DB2, add a
check constraint, defer validation of that constraint and place the table in check
pending status, and restore the rules to STD.
EXEC SQL
SET CURRENT RULES = 'DB2';
EXEC SQL
ALTER TABLE DSN8710.EMP
ADD CONSTRAINT C1 CHECK (BONUS <= 1000.0);
EXEC SQL
SET CURRENT RULES = 'STD';
You can also use the CURRENT RULES in host variable assignments, for example:
SET :XRULE = CURRENT RULES;
CICS
You can use packages and dynamic plan selection together, but when you
dynamically switch plans, the following conditions must exist:
v All special registers, including CURRENT PACKAGESET, must contain their
initial values.
v The value in the CURRENT DEGREE special register cannot have changed
during the current transaction.
The benefit of using dynamic plan selection and packages together is that you
can convert individual programs in an application containing many programs
and plans, one at a time, to use a combination of plans and packages. This
reduces the number of plans per application, and having fewer plans reduces
the effort needed to maintain the dynamic plan exit.
You could create packages and plans by using the following bind statements:
BIND PACKAGE(PKGB) MEMBER(PKGB)
BIND PLAN(MAIN) MEMBER(MAIN,PLANA) PKLIST(*.PKGB.*)
BIND PLAN(PLANC) MEMBER(PLANC)
The following scenario illustrates thread association for a task that runs
program MAIN:
Sequence of SQL Statements
Events
1. EXEC CICS START TRANSID(MAIN)
TRANSID(MAIN) executes program MAIN.
2. EXEC SQL SELECT...
Program MAIN issues an SQL SELECT statement. The default
dynamic plan exit selects plan MAIN.
3. EXEC CICS LINK PROGRAM(PROGA)
You can use the DSN command processor implicitly during program development
for functions such as:
v Using the declarations generator (DCLGEN)
The DSN command processor runs with the TSO terminal monitor program (TMP).
Because the TMP runs in either foreground or background, DSN applications run
interactively or as batch jobs.
The DSN command processor can provide these services to a program that runs
under it:
v Automatic connection to DB2
v Attention key support
v Translation of return codes into error messages
Limitations of the DSN command processor: When using DSN services, your
application runs under the control of DSN. Because TSO executes the ATTACH
macro to start DSN, and DSN executes the ATTACH macro to start a part of itself,
your application gains control two task levels below that of TSO.
If these limitations are too severe, consider having your application use the call
attachment facility or Recoverable Resource Manager Services attachment facility.
For more information on these attachment facilities, see “Chapter 29. Programming
for the call attachment facility (CAF)” on page 733 and “Chapter 30. Programming
for the Recoverable Resource Manager Services attachment facility (RRSAF)” on
page 767.
DSN return code processing: At the end of a DSN session, register 15 contains
the highest value placed there by any DSN subcommand used in the session or by
any program run by the RUN subcommand. Your runtime environment might format
that value as a return code. The value does not, however, originate in DSN.
The following example shows how to start a TSO foreground application. The name
of the application is SAMPPGM, and ssid is the system ID:
TSO Prompt: READY
Enter: DSN SYSTEM(ssid)
DSN Prompt: DSN
Enter: RUN PROGRAM(SAMPPGM) -
PLAN(SAMPLAN) -
LIB(SAMPPROJ.SAMPLIB) -
PARMS('/D01 D02 D03')
.
.
.
(Here the program runs and might prompt you for input)
DSN Prompt: DSN
Enter: END
TSO Prompt: READY
The PARMS keyword of the RUN subcommand allows you to pass parameters to
the run-time processor and to your application program:
PARMS ('/D01, D02, D03')
The slash (/) indicates that you are passing parameters. For some languages, you
pass parameters and run-time options in the form PARMS('parameters/run-time-
options'). In those environments, an example of the PARMS keyword might be:
PARMS ('D01, D02, D03/')
Check your host language publications for the correct form of the PARMS option.
//SYSTSPRT DD SYSOUT=A
//SYSTSIN DD *
DSN SYSTEM (ssid)
RUN PROG (SAMPPGM) -
PLAN (SAMPLAN) -
LIB (SAMPPROJ.SAMPLIB) -
PARMS ('/D01 D02 D03')
END
/*
Figure 125. JCL for running a DB2 application under the TSO terminal monitor program
v The JOB option identifies this as a job card. The USER option specifies the DB2
authorization ID of the user.
v The EXEC statement calls the TSO Terminal Monitor Program (TMP).
v The STEPLIB statement specifies the library in which the DSN Command
Processor load modules and the default application programming defaults
module, DSNHDECP, reside. It can also reference the libraries in which user
applications, exit routines, and the customized DSNHDECP module reside. The
customized DSNHDECP module is created during installation. If you do not
specify a library containing the customized DSNHDECP, DB2 uses the default
DSNHDECP.
v Subsequent DD statements define additional files needed by your program.
v The DSN command connects the application to a particular DB2 subsystem.
v The RUN subcommand specifies the name of the application program to run.
v The PLAN keyword specifies plan name.
v The LIB keyword specifies the library the application should access.
v The PARMS keyword passes parameters to the run-time processor and the
application program.
Usage notes:
v Keep DSN job steps short.
v We recommend that you not use DSN to call the EXEC command processor to
run CLISTs that contain ISPEXEC statements; results are unpredictable.
v If your program abends or gives you a non-zero return code, DSN terminates.
v You can use a group attachment name instead of a specific ssid to connect to a
member of a data sharing group. For more information, see DB2 Data Sharing:
Planning and Administration.
For more information on using the TSO TMP in batch mode, see OS/390 TSO/E
User's Guide.
The following CLIST calls a DB2 application program named MYPROG. The DB2
subsystem name or group attachment name should replace ssid.
IMS
To Run a Message-Driven Program
First, be sure you can respond to the program’s interactive requests for data
and that you can recognize the expected results. Then, enter the transaction
code associated with the program. Users of the transaction code must be
authorized to run the program.
CICS
First, ensure that the corresponding entries in the RCT, SNT, and RACF*
control areas allow run authorization for your application. The system
administrator is responsible for these functions; see Part 3 (Volume 1) of DB2
Administration Guide for more information.
Also, be sure to define to CICS the transaction code assigned to your program
and the program itself.
Issue the NEWCOPY command if CICS has not been reinitialized since the
program was last bound and compiled.
In a batch environment, you might use statements like these to invoke procedure
REXXPROG:
//RUNREXX EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSEXEC DD DISP=SHR,DSN=SYSADM.REXX.EXEC
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
%REXXPROG parameters
The SYSEXEC data set contains your REXX application, and the SYSTSIN data set
contains the command that you use to invoke the application.
This section describes how to use JCL procedures to prepare a program. For
information on using the DSNH CLIST, the TSO DSN command processor, or JCL
procedures added to your SYS1.PROCLIB, see Chapter 2 of DB2 Command
Reference.
If you use the PL/I macro processor, you must not use the PL/I *PROCESS
statement in the source to pass options to the PL/I compiler. You can specify the
needed options on the PARM.PLI= parameter of the EXEC statement in DSNHPLI
procedure.
member must be DSNELI, except for FORTRAN, in which case member must
be DSNHFT.
CICS
//LKED.SYSIN DD *
INCLUDE SYSLIB(DSNCLI)
/*
For more information on required CICS modules, see “Step 2: Compile (or
assemble) and link-edit the application” on page 411.
To call the precompiler, specify DSNHPC as the entry point name. You can pass
three address options to the precompiler; the following sections describe their
formats. The options are addresses of:
v A precompiler option list
v A list of alternate ddnames for the data sets that the precompiler uses
v A page number to use for the first page of the compiler listing on SYSPRINT.
The precompiler adds 1 to the last page number used in the precompiler listing and
puts this value into the page-number field before returning control to the calling
routine. Thus, if you call the precompiler again, page numbering is continuous.
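The page-numbering convention can be sketched as follows. This Python model is only illustrative: the real interface passes the address of a page-number field to entry point DSNHPC, and listing_pages is an invented stand-in for however many pages one precompilation prints.

```python
def precompile(first_page, listing_pages):
    """Model of the precompiler's page-number handling: number the
    listing starting at first_page, then hand back last-used + 1 so
    that a subsequent call continues the numbering."""
    last_used = first_page + listing_pages - 1
    return last_used + 1  # value stored into the caller's page-number field

page = 1
page = precompile(page, listing_pages=12)  # listing occupies pages 1-12
page = precompile(page, listing_pages=5)   # next listing starts at page 13
```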
CICS
Instead of using the DB2 Program Preparation panels to prepare your CICS program, you can tailor
CICS-supplied JCL procedures to do that. To tailor a CICS procedure, you need to add some steps
and change some DD statements. Make changes as needed to do the following:
v Process the program with the DB2 precompiler.
v Bind the application plan. You can do this any time after you precompile the program. You can
bind the program either on line by the DB2I panels or as a batch step in this or another MVS job.
v Include a DD statement in the linkage editor step to access the DB2 load library.
v Be sure the linkage editor control statements contain an INCLUDE statement for the DB2
language interface module.
The following example illustrates the necessary changes. This example assumes the use of a VS
COBOL II or COBOL/370 program. For any other programming language, change the CICS
procedure name and the DB2 precompiler options.
//TESTC01 JOB
//*
//*********************************************************
//* DB2 PRECOMPILE THE COBOL PROGRAM
//*********************************************************
(1) //PC EXEC PGM=DSNHPC,
(1) // PARM='HOST(COB2),XREF,SOURCE,FLAG(I),APOST'
(1) //STEPLIB DD DISP=SHR,DSN=prefix.SDSNEXIT
(1) // DD DISP=SHR,DSN=prefix.SDSNLOAD
(1) //DBRMLIB DD DISP=OLD,DSN=USER.DBRMLIB.DATA(TESTC01)
(1) //SYSCIN DD DSN=&&DSNHOUT,DISP=(MOD,PASS),UNIT=SYSDA,
(1) // SPACE=(800,(500,500))
(1) //SYSLIB DD DISP=SHR,DSN=USER.SRCLIB.DATA
(1) //SYSPRINT DD SYSOUT=*
(1) //SYSTERM DD SYSOUT=*
(1) //SYSUDUMP DD SYSOUT=*
(1) //SYSUT1 DD SPACE=(800,(500,500),,,ROUND),UNIT=SYSDA
(1) //SYSUT2 DD SPACE=(800,(500,500),,,ROUND),UNIT=SYSDA
(1) //SYSIN DD DISP=SHR,DSN=USER.SRCLIB.DATA(TESTC01)
(1) //*
For more information about the procedure DFHEITVL, other CICS procedures, or CICS requirements
for application programs, please see the appropriate CICS manual.
If you are preparing a particularly large or complex application, you can use one of
the last two techniques mentioned above. For example, if your program requires
four of your own link-edit include libraries, you cannot prepare the program with
DB2I, because DB2I limits the number of include libraries to three plus language,
IMS or CICS, and DB2 libraries. Therefore, you would need another preparation
method. Programs using the call attachment facility can use either of the last two
techniques mentioned above. Be careful to use the correct language interface.
You must precompile the contents of each data set or member separately, but the
prelinker must receive all of the compiler output together.
This section describes the options you can specify on the program preparation
panels. For the purposes of describing the process, the program preparation
examples assume that you are using COBOL programs that run under TSO.
Attention: If your C++ or IBM COBOL for MVS & VM program satisfies both of
these conditions, you need to use a JCL procedure to prepare it:
v The program consists of more than one data set or member.
v More than one data set or member contains SQL statements.
See “Using JCL to prepare a program with object-oriented extensions” on page 433
for more information.
DB2I help
The online help facility enables you to select information in an online DB2 book
from a DB2I panel.
For instructions on setting up DB2 online help, see the discussion of setting up DB2
online help in Part 2 of DB2 Installation Guide.
If your site makes use of CD-ROM updates, you can make the updated books
accessible from DB2I. Select Option 10 on the DB2I Defaults Panel and enter the
new book data set names. You must have write access to prefix.SDSNCLST to
perform this function.
Figure 126. Initiating program preparation through DB2I. Specify Program Preparation on the
DB2I Primary Option Menu.
The following explains the functions on the DB2I Primary Option Menu.
1 SPUFI
Lets you develop and execute one or more SQL statements interactively.
For further information, see “Chapter 5. Executing SQL from your terminal
using SPUFI” on page 51.
2 DCLGEN
Lets you generate C, COBOL, or PL/I data declarations of tables. For
further information, see “Chapter 8. Generating declarations for your tables
using DCLGEN” on page 95.
3 PROGRAM PREPARATION
Lets you prepare and run an application program. For more
information, see “The DB2 Program Preparation panel” on page 436.
4 PRECOMPILE
Lets you convert embedded SQL statements into statements that your host
language can process. For further information, see “The Precompile panel”
on page 443.
5 BIND/REBIND/FREE
Lets you bind, rebind, or free a package or application plan.
6 RUN
Lets you run an application program in a TSO or batch environment.
7 DB2 COMMANDS
Lets you issue DB2 commands. For more information about DB2
commands, see Chapter 2 of DB2 Command Reference.
8 UTILITIES
Lets you call DB2 utility programs. For more information, see DB2 Utility
Guide and Reference.
D DB2I DEFAULTS
Lets you set DB2I defaults. See “DB2I Defaults Panel 1” on page 440.
X EXIT
Lets you exit DB2I.
The Program Preparation panel also lets you change the DB2I default values (see
page 440), and perform other precompile and prelink functions.
On the DB2 Program Preparation panel, shown in Figure 127, enter the name of
the source program data set (this example uses SAMPLEPG.COBOL) and specify
the other options you want to include. When finished, press ENTER to view the
next panel.
Figure 127. The DB2 program preparation panel. Enter the source program data set name
and other options.
The following explains the functions on the DB2 Program Preparation panel and
how to fill in the necessary fields in order to start program preparation.
If you are willing to accept default values for all the steps, enter N under DISPLAY
PANEL for all the other preparation panels listed.
To make changes to the default values, enter Y under DISPLAY PANEL for any
panel you want to see. DB2I then displays each of the panels that you request.
After all the panels display, DB2 proceeds with the steps involved in preparing your
program to run.
Variables for all functions used during program preparation are maintained
separately from variables entered from the DB2I Primary Option Menu. For
example, the bind plan variables you enter on the program preparation panel are
saved separately from those on any bind plan panel that you reach from the
Primary Option Menu.
6 CHANGE DEFAULTS
Lets you specify whether to change the DB2I defaults. Enter Y in the
Display Panel field next to this option; otherwise enter N. Minimally, you
should specify your subsystem identifier and programming language on the
defaults panel. For more information, see “DB2I Defaults Panel 1” on
page 440.
7 PL/I MACRO PHASE
Lets you specify whether to display the “Program Preparation: Compile,
Link, and Run” panel to control the PL/I macro phase by entering PL/I
options in the OPTIONS field of that panel. That panel also displays for
options COMPILE OR ASSEMBLE, LINK, and RUN.
This field applies to PL/I programs only. If your program is not a PL/I
program or does not use the PL/I macro processor, specify N in the
Perform function field for this option, which sets the Display panel field to
the default N.
For information on PL/I options, see “The Program Preparation: Compile,
Link, and Run panel” on page 460.
8 PRECOMPILE
Lets you specify whether to display the Precompile panel. To see this panel
enter Y in the Display Panel field next to this option; otherwise enter N. For
information on the Precompile panel, see “The Precompile panel” on
page 443.
9 CICS COMMAND TRANSLATION
Lets you specify whether to use the CICS command translator. This field
applies to CICS programs only.
If you run under TSO or IMS, ignore this step; this allows the Perform
function field to default to N.
If you are using CICS and have precompiled your program, you must
translate your program using the CICS command translator.
There is no separate DB2I panel for the command translator. You can
specify translation options on the Other Options field of the DB2
Program Preparation panel, or in your source program if it is not an
assembler program.
10 BIND PACKAGE
Lets you specify whether to display the BIND PACKAGE panel. To see it,
enter Y in the Display panel field next to this option; otherwise, enter N. For
information on the panel, see “The Bind Package panel” on page 446.
11 BIND PLAN
Lets you specify whether to display the BIND PLAN panel. To see it, enter
Y in the Display panel field next to this option; otherwise, enter N. For
information on the panel, see “The Bind Plan panel” on page 450.
12 COMPILE OR ASSEMBLE
Lets you specify whether to display the “Program Preparation: Compile,
Link, and Run” panel. To see this panel enter Y in the Display Panel field
next to this option; otherwise, enter N.
For information on the panel, see “The Program Preparation: Compile, Link,
and Run panel” on page 460.
13 PRELINK
Lets you use the prelink utility to make your C, C++, or IBM COBOL for
MVS & VM program reentrant. This utility concatenates compile-time
initialization information from one or more text decks into a single
initialization unit. To use the utility, enter Y in the Display Panel field next to
this option; otherwise, enter N. If you request this step, then you must also
request the compile step and the link-edit step.
For more information on the prelink utility, see OS/390 Language
Environment for OS/390 & VM Programming Guide.
14 LINK
Lets you specify whether to display the “Program Preparation: Compile,
Link, and Run” panel. To see it, enter Y in the Display Panel field next to
this option; otherwise, enter N. If you specify Y in the Display Panel field for
the COMPILE OR ASSEMBLE option, you do not need to make any
changes to this field; the panel displayed for COMPILE OR ASSEMBLE is
the same as the panel displayed for LINK. You can make the changes you
want to affect the link-edit step at the same time you make the changes to
the compile step.
For information on the panel, see “The Program Preparation: Compile, Link,
and Run panel” on page 460.
IMS and CICS programs cannot run using DB2I. If you are using IMS
or CICS, use N in these fields.
If you are using TSO and want to run your program, you must enter Y
in the Perform function column next to this option. You can also
indicate that you want to specify options and values to affect the
running of your program, by entering Y in the Display panel column.
Pressing ENTER takes you to the first panel in the series you specified, in this
example to the DB2I Defaults panel. If, at any point in your progress from panel to
panel, you press the END key, you return to this first panel, from which you can
change your processing specifications. Asterisks (*) in the Display Panel column of
rows 7 through 14 indicate which panels you have already examined. You can see
a panel again by writing a Y over an asterisk.
Suppose that the default programming language is PL/I and the default number of
lines per page of program listing is 60. Your program is in COBOL, so you want to
change field 3, APPLICATION LANGUAGE. You also want to print 80 lines to the
page, so you need to change field 4, LINES/PAGE OF LISTING, as well. Figure 128
on page 441 shows the entries that you make in DB2I Defaults panel 1 to make
these changes. In this case, pressing ENTER takes you to DB2 DEFAULTS panel
2.
Pressing ENTER takes you to the next panel you specified on the DB2 Program
Preparation panel, in this case, to the Precompile panel.
Figure 130. The precompile panel. Specify the include library, if any, that your program
should use, and any other options you need.
The following explains the functions on the Precompile panel, and how to enter the
fields for preparing to precompile.
1 INPUT DATA SET
Lets you specify the data set name of the source program and SQL
statements to precompile.
If you reached this panel through the DB2 Program Preparation panel, this
field contains the data set name specified there. You can override it on this
panel if you wish.
If you reached this panel directly from the DB2I Primary Option Menu, you
must enter the data set name of the program you want to precompile. The
data set name can include a member name. If you do not enclose the data
set name with apostrophes, a standard TSO prefix (user ID) qualifies the
data set name.
2 INCLUDE LIBRARY
Lets you enter the name of a library containing members that the
precompiler should include. These members can contain output from
DCLGEN. If you do not enclose the name in apostrophes, a standard TSO
prefix (user ID) qualifies the name.
You can request additional INCLUDE libraries by entering DSNH CLIST
parameters of the form PnLIB(dsname) (where n is 2, 3, or 4) on the OTHER
OPTIONS field of this panel or on the OTHER DSNH OPTIONS field of the
Program Preparation panel.
3 DSNAME QUALIFIER
Lets you specify a character string that qualifies temporary data set names
during precompile. Use any character string from 1 to 8 characters in length
that conforms to normal TSO naming conventions.
If you reached this panel through the DB2 Program Preparation panel, this
field contains the data set name qualifier specified there. You can override it
on this panel if you wish.
If you reached this panel from the DB2I Primary Option Menu, you can
either specify a DSNAME QUALIFIER or let the field take its default value,
TEMP.
For IMS and TSO programs, DB2 stores the precompiled source
statements (to pass to the compile or assemble step) in a data set
named tsoprefix.qualifier.suffix. A data set named
tsoprefix.qualifier.PCLIST contains the precompiler print listing.
CICS
For CICS programs, the data set tsoprefix.qualifier.suffix receives the
precompiled source statements in preparation for CICS command
translation.
When the precompiler completes its work, control passes to the CICS
command translator. Because there is no panel for the translator,
translation takes place automatically. The data set
tsoprefix.qualifier.CXLIST contains the output from the command
translator.
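Two naming rules from this panel description can be sketched in Python. The sketch is illustrative only; USER01 and the helper names are invented, and real TSO prefixing has more rules (for example, PROFILE NOPREFIX) than are shown here:

```python
def resolve_dsn(name, tso_prefix):
    """Names not enclosed in apostrophes get the standard TSO prefix."""
    if name.startswith("'") and name.endswith("'"):
        return name.strip("'")              # fully qualified as entered
    return f"{tso_prefix}.{name}"

def temp_dsn(tso_prefix, qualifier, suffix):
    """Temporary data set names have the form tsoprefix.qualifier.suffix,
    for example the PCLIST and CXLIST listing data sets."""
    return f"{tso_prefix}.{qualifier}.{suffix}"

print(resolve_dsn("SRCLIB.DATA(TESTC01)", "USER01"))  # USER01.SRCLIB.DATA(TESTC01)
print(temp_dsn("USER01", "TEMP", "PCLIST"))           # USER01.TEMP.PCLIST
```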
The following information explains the functions on the BIND PACKAGE panel and
how to fill the necessary fields in order to bind your program. For more information,
see the BIND PACKAGE command in Chapter 2 of DB2 Command Reference.
1 LOCATION NAME
Lets you specify the system at which to bind the package. You can use
from 1 to 16 characters to specify the location name. The location name
must be defined in the catalog table SYSIBM.LOCATIONS. The default is
the local DBMS.
2 COLLECTION-ID
Lets you specify the collection the package is in. You can use from 1 to 18
characters to specify the collection, and the first character must be
alphabetic.
3 DBRM: COPY:
Lets you specify whether you are creating a new package (DBRM) or
making a copy of a package that already exists (COPY). Use:
DBRM
To create a new package. You must specify values in the LIBRARY,
PASSWORD, and MEMBER fields.
COPY
To copy an existing package. You must specify values in the
COLLECTION-ID and PACKAGE-ID fields. (The VERSION field is
optional.)
4 MEMBER or COLLECTION-ID
MEMBER (for new packages): If you are creating a new package, this
option lets you specify the DBRM to bind. You can specify a member name
from 1 to 8 characters. The default name depends on the input data set
name.
v If the input data set is partitioned, the default name is the member name
of the input data set specified in the INPUT DATA SET NAME field of the
DB2 Program Preparation panel.
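The length rules for the LOCATION NAME and COLLECTION-ID fields can be checked mechanically. The sketch below encodes only the constraints this panel description states; it is not a complete DB2 identifier validator:

```python
def valid_location_name(name):
    """LOCATION NAME: from 1 to 16 characters."""
    return 1 <= len(name) <= 16

def valid_collection_id(name):
    """COLLECTION-ID: from 1 to 18 characters, first character alphabetic."""
    return 1 <= len(name) <= 18 and name[0].isalpha()
```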
If you enter the BIND PLAN panel from the Program Preparation panel, many of the
BIND PLAN entries contain values from the Primary and Precompile panels. See
Figure 132.
Figure 132. The Bind Plan panel
The following explains the functions on the BIND PLAN panel and how to fill the
necessary fields in order to bind your program. For more information, see the BIND
PLAN command in Chapter 2 of DB2 Command Reference.
1 MEMBER
Lets you specify the DBRMs to include in the plan. You can specify a name
from 1 to 8 characters. You must specify MEMBER or INCLUDE PACKAGE
LIST, or both. If you do not specify MEMBER, fields 2, 3, and 4 are ignored.
The default member name depends on the input data set.
v If the input data set is partitioned, the default name is the member name
of the input data set specified in field 1 of the DB2 Program Preparation
panel.
v If the input data set is sequential, the default name is the second qualifier
of this input data set.
If you reached this panel directly from the DB2I Primary Option Menu, you
must provide values for the MEMBER and LIBRARY fields.
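The default-name rules in the two bullets above can be sketched in Python. This is illustrative only; it assumes a well-formed data set name with a member in parentheses or with at least two qualifiers:

```python
def default_member_name(input_dsn):
    """Derive the default DBRM member name from the input data set name."""
    if input_dsn.endswith(")"):                    # partitioned: DSN(MEMBER)
        return input_dsn[input_dsn.index("(") + 1:-1]
    return input_dsn.split(".")[1]                 # sequential: second qualifier

print(default_member_name("SAMPPROJ.SAMPLIB(TESTC01)"))  # TESTC01
print(default_member_name("USER01.SAMPPGM.COBOL"))       # SAMPPGM
```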
For more information about this option, see the bind option IMMEDWRITE
in Chapter 2 of DB2 Command Reference.
When you finish making changes to this panel, press ENTER to go to the second of
the program preparation panels, Program Prep: Compile, Link, and Run.
This panel lets you change your defaults for BIND PACKAGE options. With a few
minor exceptions, the options on this panel are the same as the options for the
defaults for rebinding a package. However, the defaults for REBIND PACKAGE are
different from those shown in the above figure, and you can specify SAME in any
field to specify the values used the last time the package was bound. For rebinding,
the default value for all fields is SAME.
If you specify YES, DB2 determines the access paths again at execution
time. When you specify YES for this option, you must also specify YES for
DEFER PREPARE, or you will receive a bind error.
9 DEFER PREPARE
Lets you defer preparation of dynamic SQL statements until DB2
encounters the first OPEN, DESCRIBE, or EXECUTE statement that refers
to those statements. Specify YES to defer preparation of the statement. For
information on using this option, see “Use bind options that improve
performance” on page 383.
10 KEEP DYN SQL PAST COMMIT
Specifies whether DB2 keeps dynamic SQL statements after commit points.
YES causes DB2 to keep dynamic SQL statements after commit points. An
application can execute a PREPARE statement for a dynamic SQL
statement once and execute that statement after later commit points without
executing PREPARE again. For more information, see “Performance of
static and dynamic SQL” on page 499.
11 DBPROTOCOL
Specifies whether DB2 uses DRDA protocol or DB2 private protocol to
execute statements that contain 3-part names. For more information, see
“Chapter 19. Planning to access distributed data” on page 369.
12 APPLICATION ENCODING
Specifies the application encoding scheme to be used:
blank Indicates that all host variables in static SQL statements are
encoded using the encoding scheme in the DEF
ENCODING SCHEME field of installation panel DSNTIPF.
ASCII Indicates that the CCSIDs for all host variables in static
SQL statements are determined by the values in the ASCII
CODED CHAR SET and MIXED DATA fields of installation
panel DSNTIPF.
EBCDIC Indicates that the CCSIDs for all host variables in static
SQL statements are determined by the values in the
EBCDIC CODED CHAR SET and MIXED DATA fields of
installation panel DSNTIPF.
UNICODE Indicates that the CCSIDs of all host variables in static SQL
statements are determined by the value in the UNICODE
CCSID field of installation panel DSNTIPF.
ccsid Specifies a CCSID that determines the set of CCSIDs that
are used for all host variables in static SQL statements. If
you specify ccsid, this value should be a mixed CCSID. For
Unicode, the mixed CCSID is a UTF-8 CCSID. DB2 derives
the SBCS and DBCS CCSIDs.
13 OPTIMIZATION HINT
Specifies whether you want to use optimization hints to determine access
paths. Specify 'hint-id' to indicate that you want DB2 to use the optimization
hints in owner.PLAN_TABLE, where owner is the authorization ID of the
plan or package owner. 'hint-id' is a delimited string of up to 8 characters
that DB2 compares to the value of OPTHINT in owner.PLAN_TABLE to
determine the rows to use for optimization hints. If you specify a nonblank
For more information about this option, see the bind option IMMEDWRITE
in Chapter 2 of DB2 Command Reference.
15 DYNAMIC RULES
For plans, lets you specify whether run-time (RUN) or bind-time (BIND)
rules apply to dynamic SQL statements at run time.
For packages, lets you specify whether run-time (RUN) or bind-time (BIND)
rules apply to dynamic SQL statements at run time. For packages that run
under an active user-defined function or stored procedure environment, the
INVOKEBIND, INVOKERUN, DEFINEBIND, and DEFINERUN options
indicate who must have authority to execute dynamic SQL statements in the
package.
For packages, the default rules for a package on the local server are the
same as the rules for the plan to which the package is appended at run time.
For a package on the remote server, the default is RUN.
If you specify rules for a package that are different from the rules for the
plan, the SQL statements for the package use the rules you specify for that
package. If a package that is bound with DEFINEBIND or INVOKEBIND is
not executing under an active stored procedure or user-defined function
environment, SQL statements for that package use BIND rules. If a
package that is bound with DEFINERUN or INVOKERUN is not executing
under an active stored procedure or user-defined function environment,
SQL statements for that package use RUN rules.
For more information, see “Using DYNAMICRULES to specify behavior of
dynamic SQL statements” on page 418.
For packages:
7 SQLERROR PROCESSING
Lets you specify CONTINUE to continue to create a package after finding
SQL errors, or NOPACKAGE to avoid creating a package after finding SQL
errors.
For a description of the effects of those values, see “The ACQUIRE and
RELEASE options” on page 339.
16 SQLRULES
Lets you specify whether a CONNECT (Type 2) statement executes
according to DB2 rules (DB2) or the SQL standard (STD). For information,
see “Specifying the SQL rules” on page 421.
17 DISCONNECT
Lets you specify which remote connections end during a commit or a
rollback. Regardless of what you specify, all connections in the
released-pending state end during commit.
Use:
EXPLICIT to end connections in the release-pending state only at
COMMIT or ROLLBACK
AUTOMATIC to end all remote connections
CONDITIONAL to end remote connections that have no open cursors
WITH HOLD associated with them.
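The DISCONNECT rules above, together with the released-pending rule, can be expressed as a small decision routine. This Python sketch is illustrative only; the tuple representation of a connection is invented for the example:

```python
def connections_ended_at_commit(option, connections):
    """Apply the DISCONNECT rules: release-pending connections always
    end at commit; the option controls what happens to the rest.
    Each connection is (name, state, has_held_cursor)."""
    ended = []
    for name, state, has_held_cursor in connections:
        if state == "release-pending":            # always ended at commit
            ended.append(name)
        elif option == "AUTOMATIC":               # end all remote connections
            ended.append(name)
        elif option == "CONDITIONAL" and not has_held_cursor:
            ended.append(name)                    # no open WITH HOLD cursors
        # EXPLICIT ends only the release-pending connections
    return ended

conns = [("TOKYO", "release-pending", False),
         ("PARIS", "active", True),
         ("OSLO", "active", False)]
print(connections_ended_at_commit("CONDITIONAL", conns))  # ['TOKYO', 'OSLO']
```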
For each connection type that has a second arrow, under SPECIFY
CONNECTION NAMES?, enter Y if you want to list specific connection
names of that type. Leave N (the default) if you do not. If you use Y in any
of those fields, you see another panel on which you can enter the
connection names. For more information, see “Panels for entering lists of
values”.
If you use the DISPLAY command under TSO on this panel, you can determine
what you have currently defined as “enabled” or “disabled” in your ISPF DSNSPFT
library (member DSNCONNS). The information does not reflect the current state of
the DB2 Catalog.
If you type DISPLAY ENABLED on the command line, you get the connection
names that are currently enabled for your TSO connection types. For example:
Display OF ALL connection name(s) to be ENABLED
CONNECTION SUBSYSTEM
CICS1 ENABLED
CICS2 ENABLED
CICS3 ENABLED
CICS4 ENABLED
DLI1 ENABLED
DLI2 ENABLED
DLI3 ENABLED
DLI4 ENABLED
DLI5 ENABLED
CMD
"""" value ...
"""" value ...
""""
""""
""""
""""
For the syntax of specifying names on a list panel, see Chapter 2 of DB2 Command
Reference for the type of name you need to specify.
All of the list panels let you enter limited commands in two places:
v On the system command line, prefixed by ====>
v In a special command area, identified by """"
When you finish with a list panel, specify END to save the current panel values
and continue processing.
Figure 137. The program preparation: Compile, link, and run panel
Your application could need other data sets besides SYSIN and SYSPRINT. If so,
remember to catalog and allocate them before you run your program.
When you press ENTER after entering values in this panel, DB2 compiles and
link-edits the application. If you specified in the DB2 PROGRAM PREPARATION
panel that you want to run the application, DB2 also runs the application.
CICS
Before you run an application, ensure that the corresponding entries in the
RCT, SNT, and RACF control areas authorize your application to run. The
system administrator is responsible for these functions; see Part 3 (Volume 1)
of DB2 Administration Guide for more information on the functions.
In addition, ensure that the program and its transaction code are defined in the
CICS CSD.
If your location has a separate DB2 system for testing, you can create the test
tables and views on the test system, then test your program thoroughly on that
system. This chapter assumes that you do all testing on a separate system, and
that the person who created the test tables and views has an authorization ID of
TEST. The table names are TEST.EMP, TEST.PROJ, and TEST.DEPT.
2. Determine the test tables and views you need to test your application.
Add a test table to your list when either:
v The application modifies data in the table
v You need to create a view based on a test table because your application
modifies the view’s data.
(A sample test table excerpt with columns DEPTNO and MGRNO displays here.)
Obtaining authorization
Before you can create a table, you need to be authorized to create tables and to
use the table space in which the table is to reside. You must also have authority to
bind and run programs you want to test. Your DBA can grant you the authorization
needed to create and access tables and to bind and run programs.
If you intend to use existing tables and views (either directly or as the basis for a
view), you need privileges to access those tables and views. Your DBA can grant
those privileges.
For details about each CREATE statement, see DB2 SQL Reference.
SQL statements executed under SPUFI operate on actual tables (in this case, the
tables you have created for testing). Consequently, before you access DB2 data:
v Make sure that all tables and views your SQL statements refer to exist
v If the tables or views do not exist, create them (or have your database
administrator create them). You can use SPUFI to issue the CREATE statements
used to create the tables and views you need for testing.
For more details about how to use SPUFI, see “Chapter 5. Executing SQL from
your terminal using SPUFI” on page 51.
When your program encounters an error that does not result in an abend, it can
pass all the required error information to a standard error routine. Online programs
might also send an error message to the terminal.
For more information about the TEST command, see OS/390 TSO/E Command
Reference.
ISPF Dialog Test is another option to help you debug.
When your program encounters an error, it can pass all the required error
information to a standard error routine. Online programs can also send an error
message to the originating logical terminal.
An interactive program also can send a message to the master terminal operator
giving information about the program’s termination. To do that, the program places
the logical terminal name of the master terminal in an express PCB and issues one
or more ISRT calls.
Some sites run a BMP at the end of the day to list all the errors that occurred
during the day. If your location does this, you can send a message using an
express PCB that has its destination set for that BMP.
Batch Terminal Simulator (BTS): The Batch Terminal Simulator (BTS) allows you
to test IMS application programs. BTS traces application program DL/I calls and
SQL statements, and simulates data communication functions. It can make a TSO
terminal appear as an IMS terminal to the terminal operator, allowing the end user
to interact with the application as though it were online. The user can use any
application program under the user’s control to access any database (whether DL/I
or DB2) under the user’s control. Access to DB2 databases requires BTS to operate
Using CICS facilities, you can have a printed error record; you can also print the
SQLCA (and SQLDA) contents.
For more details about each of these topics, see CICS for MVS/ESA Application
Programming Reference.
EDF displays essential information before and after an SQL statement, while the
task is in EDF mode. This can be a significant aid in debugging CICS transaction
programs containing SQL statements. The SQL information that EDF displays is
helpful for debugging programs and for error analysis after an SQL error or warning.
Using this facility reduces the amount of work you need to do to write special error
handlers.
ENTER: CONTINUE
PF1 : UNDEFINED PF2 : UNDEFINED PF3 : UNDEFINED
PF4 : SUPPRESS DISPLAYS PF5 : WORKING STORAGE PF6 : USER DISPLAY
PF7 : SCROLL BACK PF8 : SCROLL FORWARD PF9 : STOP CONDITIONS
PF10: PREVIOUS DISPLAY PF11: UNDEFINED PF12: ABEND USER TASK
SQL statements containing input host variables: The IVAR (input host variables)
section and its attendant fields only appear when the executing statement contains
input host variables.
The host variables section includes the variables from predicates, the values used
for inserting or updating, and the text of dynamic SQL statements being prepared.
The address of the input variable is AT 'nnnnnnnn'.
EDF after execution: Figure 139 on page 471 shows an example of the first EDF
screen displayed after the execution of an SQL statement. The names of the key
information fields on this panel are in boldface.
Plus signs (+) on the left of the screen indicate that you can see additional EDF
output by using PF keys to scroll the screen forward or back.
The OVAR (output host variables) section and its attendant fields only appear when
the executing statement returns output host variables.
Figure 140 on page 472 contains the rest of the EDF output for our example.
The attachment facility automatically displays SQL information while in the EDF
mode. (You can start EDF as outlined in the appropriate CICS application
programmer’s reference manual.) If this is not the case, contact your installer and
see Part 2 of DB2 Installation Guide.
IMS
– If you are using IMS, have you included the DL/I option statement in the
correct format?
– Have you included the region size parameter in the EXEC statement? Does it
specify a region size large enough for the storage required for the DB2
interface, the TSO, IMS, or CICS system, and your program?
– Have you included the names of all data sets (DB2 and non-DB2) that the
program requires?
v Your program.
You can also use dumps to help localize problems in your program. For example,
one of the more common error situations occurs when your program is running
and you receive a message that it abended. In this instance, your test procedure
might be to capture a TSO dump. To do so, you must allocate a SYSUDUMP or
SYSABEND dump data set before calling DB2. When you press the ENTER key
(after the error message and READY message), the system requests a dump.
You then need to FREE the dump data set.
The SYSTERM output provides a brief summary of the results from the precompiler,
all error messages that the precompiler generated, and the statement in error, when
possible. Sometimes, the error messages by themselves are not enough. In such
cases, you can use the line number provided in each error message to locate the
failing source statement.
DSNH104I E DSNHPARS LINE 32 COL 26 ILLEGAL SYMBOL "X" VALID SYMBOLS ARE:, FROM
SELECT VALUE INTO HIPPO X;
When you use the Program Preparation panels to prepare and run your program,
DB2 allocates SYSPRINT according to the TERM option you specify (on line 12 of the
PROGRAM PREPARATION: COMPILE, PRELINK, LINK, AND RUN panel). As an
alternative, when you use the DSNH command procedure (CLIST), you can specify
PRINT(TERM) to obtain SYSPRINT output at your terminal, or you can specify
PRINT(qualifier) to place the SYSPRINT output into a data set named
authorizationid.qualifier.PCLIST. Assuming that you do not specify PRINT as
LEAVE, NONE, or TERM, DB2 issues a message when the precompiler finishes,
telling you where to find your precompiler listings. This helps you locate your
diagnostics quickly and easily.
The SYSPRINT output can provide information about your precompiled source
module if you specify the options SOURCE and XREF when you start the DB2
precompiler.
...
SOURCE STATISTICS
SOURCE LINES READ: 15231
NUMBER OF SYMBOLS: 1282
SYMBOL TABLE BYTES EXCLUDING ATTRIBUTES: 64323
The restart capabilities for DB2 and IMS databases, as well as for sequential data
sets accessed through GSAM, are available through the IMS Checkpoint and
Restart facility.
DB2 allows access to both DB2 and DL/I data through the use of the following DB2
and IMS facilities:
v IMS synchronization calls, which commit and abend units of recovery
v The DB2 IMS attachment facility, which handles the two-phase commit protocol
and allows both systems to synchronize a unit of recovery during a restart after a
failure
v The IMS log, used to record the instant of commit.
In a data sharing environment, DL/I batch supports group attachment. You can
specify a group attachment name instead of a subsystem name in the SSN parameter.
Authorization
When the batch application tries to run the first SQL statement, DB2 checks
whether the authorization ID has the EXECUTE privilege for the plan. DB2 uses the
same ID for later authorization checks, and the same ID also identifies records
from the accounting and performance traces.
The primary authorization ID is the value of the USER parameter on the job
statement, if that is available. It is the TSO logon name if the job is submitted
from TSO. Otherwise, it is the IMS PSB name. In that case, however, the ID must
not begin with the string “SYSADM”, because that causes the job to abend. The batch job is
rejected if you try to change the authorization ID in an exit routine.
Address spaces
A DL/I batch region is independent of both the IMS control region and the CICS
address space. The DL/I batch region loads the DL/I code into the application
region along with the application program.
Commits
Commit IMS batch applications frequently so that you do not tie up resources for an
extended time. If you need coordinated commits for recovery, see Part 4 (Volume 1)
of DB2 Administration Guide.
Checkpoint calls
Write your program with SQL statements and DL/I calls, and use checkpoint calls.
All checkpoints issued by a batch application program must be unique. The
frequency of checkpoints depends on the application design. At a checkpoint, DL/I
positioning is lost, DB2 cursors are closed (with the possible exception of cursors
defined as WITH HOLD), commit duration locks are freed (again with some
exceptions), and database changes are considered permanent to both IMS and
DB2.
It is also possible to have IMS dynamically back out the updates within the same
job. You must specify the BKO parameter as 'Y' and allocate the IMS log to DASD.
You could have a problem if the system fails after the program terminates, but
before the job step ends. If you do not have a checkpoint call before the program
ends, DB2 commits the unit of work without involving IMS. If the system fails before
DL/I commits the data, then the DB2 data is out of synchronization with the DL/I
changes. If the system fails during DB2 commit processing, the DB2 data could be
indoubt.
It is recommended that you always issue a symbolic checkpoint at the end of any
update job to coordinate the commit of the outstanding unit of work for IMS and
DB2. When you restart the application program, you must use the XRST call to
obtain checkpoint information and resolve any DB2 indoubt work units.
If you do not use an XRST call, then DB2 assumes that any checkpoint call issued
is a basic checkpoint.
You can specify values for the following parameters using a DDITV02 data set or a
subsystem member:
SSN,LIT,ESMT,RTT,REO,CRC
You can specify values for the following parameters only in a DDITV02 data set:
CONNECTION_NAME,PLAN,PROG
If you use the DDITV02 data set and specify a subsystem member, the values in
the DDITV02 DD statement override the values in the specified subsystem member.
If you provide neither, DB2 abends the application program with system abend code
X'04E' and a unique reason code in register 15.
DDITV02 is the DD name for a data set that has DCB options of LRECL=80 and
RECFM=F or FB.
A subsystem member is a member in the IMS procedure library. Its name is derived
by concatenating the value of the SSM parameter to the value of the IMSID
parameter. You specify the SSM parameter and the IMSID parameter when you
invoke the DLIBATCH procedure, which starts the DL/I batch processing
environment.
If the application program uses the XRST call, and if coordinated recovery
is required on the XRST call, then REO is ignored. In that case, the
application program terminates abnormally if DB2 is not operational.
If a batch update job fails, you must use a separate job to restart the batch
job. The connection name used in the restart job must be the same as the
name used in the batch job that failed. Or, if the default connection name is
used, the restart job must have the same job name as the batch update job
that failed.
You might want to save and print the data set, as the information is useful for
diagnostic purposes. You can use the IMS module, DFSERA10, to print the
variable-length data set records in both hexadecimal and character format.
Precompiling
When you add SQL statements to an application program, you must precompile the
application program and bind the resulting DBRM into a plan or package, as
described in “Chapter 20. Preparing an application program to run” on page 397.
Binding
The owner of the plan or package must have all the privileges required to execute
the SQL statements embedded in it. Before a batch program can issue SQL
statements, a DB2 plan must exist.
You can specify the plan name to DB2 in one of the following ways:
v In the DDITV02 input data set.
v In subsystem member specification.
v By default; the plan name is then the application load module name specified in
DDITV02.
DB2 passes the plan name to the IMS attachment facility. If you do not specify a plan
name in DDITV02, and a resource translation table (RTT) does not exist or the
name is not in the RTT, then DB2 uses the passed name as the plan name. If the
name exists in the RTT, then the name translates to the plan specified for the RTT.
Link-editing
DB2 has language interface routines for each unique supported environment. DB2
requires the IMS language interface routine for DL/I batch. It is also necessary to
have DFSLI000 link-edited with the application program.
See “Submitting a DL/I batch application without using DSNMTV01” on page 486
for an example of JCL used to submit a DL/I batch application by this method.
You cannot restart a BMP application program in a DB2 DL/I batch environment.
The symbolic checkpoint records are not accessed, causing an IMS user abend
U0102.
To restart a batch job that terminated abnormally or prematurely, find the checkpoint
ID for the job on the MVS system log or from the SYSOUT listing of the failing job.
Before you restart the job step, place the checkpoint ID in the CKPTID=value option
of the DLIBATCH procedure, then submit the job. If the default connection name is
used (that is, you did not specify the connection name option in the DDITV02 input
data set), the job name of the restart job must be the same as the failing job. Refer
to the following skeleton example, in which the last checkpoint ID value was
IVP80002:
//ISOCS04 JOB 3000,OJALA,MSGLEVEL=(1,1),NOTIFY=OJALA,
// MSGCLASS=T,CLASS=A
//* ******************************************************************
//*
//* THE FOLLOWING STEP RESTARTS COBOL PROGRAM IVP8CP22, WHICH UPDATES
//* BOTH DB2 AND DL/I DATABASES, FROM CKPTID=IVP80002.
//*
//* ******************************************************************
//RSTRT EXEC DLIBATCH,DBRC=Y,COND=EVEN,LOGT=SYSDA,
// MBR=DSNMTV01,PSB=IVP8CA,BKO=Y,IRLM=N,CKPTID=IVP80002
//G.STEPLIB DD
// DD
// DD DSN=prefix.SDSNLOAD,DISP=SHR
// DD DSN=prefix.RUNLIB.LOAD,DISP=SHR
// DD DSN=SYS1.COB2LIB,DISP=SHR
// DD DSN=IMS.PGMLIB,DISP=SHR
//* other program libraries
//* G.IEFRDER data set required
//G.STEPCAT DD DSN=IMSCAT,DISP=SHR
//* G.IMSLOGR data set required
//G.DDOTV02 DD DSN=&TEMP2,DISP=(NEW,PASS,DELETE),
// SPACE=(TRK,(1,1),RLSE),UNIT=SYSDA,
// DCB=(RECFM=VB,BLKSIZE=4096,LRECL=4092)
During the commit process the application program checkpoint ID is passed to DB2.
If a failure occurs during the commit process, creating an indoubt work unit, DB2
remembers the checkpoint ID. You can use the following techniques to find the last
checkpoint ID:
v Look at the SYSOUT listing for the job step to find message DFS0540I, which
contains the checkpoint IDs issued. Use the last checkpoint ID listed.
v Look at the MVS console log to find message(s) DFS0540I containing the
checkpoint ID issued for this batch program. Use the last checkpoint ID listed.
v Submit the IMS Batch Backout utility to back out the DL/I databases to the last
(default) checkpoint ID. When the batch backout finishes, message DFS395I
provides the last valid IMS checkpoint ID. Use this checkpoint ID on restart.
v When restarting DB2, the operator can issue the command -DISPLAY
THREAD(*) TYPE(INDOUBT) to obtain a possible indoubt unit of work
(connection name and checkpoint ID). If you restarted the application program
from this checkpoint ID, it could work because the checkpoint is recorded on the
IMS log; however, it could fail with an IMS user abend U102 because IMS did not
finish logging the information before the failure. In that case, restart the
application program from the previous checkpoint ID.
DB2 performs one of two actions automatically when restarted, if the failure occurs
outside the indoubt period: it either backs out the work unit to the prior checkpoint,
or it commits the data without any assistance. If the operator then issues the
command
-DISPLAY THREAD(*) TYPE(INDOUBT)
For most DB2 users, static SQL—embedded in a host language program and
bound before the program runs—provides a straightforward, efficient path to DB2
data. You can use static SQL when you know before run time what SQL statements
your application needs to execute.
Dynamic SQL prepares and executes the SQL statements within a program, while
the program is running. There are four types of dynamic SQL:
v Embedded dynamic SQL
Your application puts the SQL source in host variables and includes PREPARE
and EXECUTE statements that tell DB2 to prepare and run the contents of those
host variables at run time. You must precompile and bind programs that include
embedded dynamic SQL.
v Interactive SQL
A user enters SQL statements through SPUFI. DB2 prepares and executes those
statements as dynamic SQL statements.
v Deferred embedded SQL
Deferred embedded SQL statements are neither fully static nor fully dynamic.
Like static statements, deferred embedded SQL statements are embedded within
applications, but like dynamic statements, they are prepared at run time. DB2
processes deferred embedded SQL statements with bind-time rules. For
example, DB2 uses the authorization ID and qualifier determined at bind time as
the plan or package owner. Deferred embedded SQL statements are used for
DB2 private protocol access to remote data.
v Dynamic SQL executed through ODBC functions
Your application contains ODBC function calls that pass dynamic SQL statements
as arguments. You do not need to precompile and bind programs that use ODBC
function calls. See DB2 ODBC Guide and Reference for information on ODBC.
“Choosing between static and dynamic SQL” on page 498 suggests some reasons
for choosing either static or dynamic SQL.
The rest of this chapter shows you how to code dynamic SQL in applications that
contain three types of SQL statements:
v “Dynamic SQL for non-SELECT statements” on page 507. Those statements
include DELETE, INSERT, and UPDATE.
v “Dynamic SQL for fixed-list SELECT statements” on page 511. A SELECT
statement is fixed-list if you know in advance the number and type of data items
in each row of the result.
v “Dynamic SQL for varying-list SELECT statements” on page 513. A SELECT
statement is varying-list if you cannot know in advance how many data items to
allow for or what their data types are.
In the example below, the UPDATE statement can update the salary of any
employee. At bind time, you know that salaries must be updated, but you do not
know until run time whose salaries should be updated, and by how much.
01 IOAREA.
   02 EMPID      PIC X(06).
   02 NEW-SALARY PIC S9(7)V9(2) COMP-3.
   .
   .
   .
(Other declarations)
READ CARDIN RECORD INTO IOAREA
   AT END MOVE 'N' TO INPUT-SWITCH.
   .
   .
   .
(Other COBOL statements)
EXEC SQL
   UPDATE DSN8710.EMP
   SET SALARY = :NEW-SALARY
   WHERE EMPNO = :EMPID
END-EXEC.
The statement (UPDATE) does not change, nor does its basic structure, but the
input can change the results of the UPDATE statement.
One example of such a program is the Query Management Facility (QMF), which
provides an alternative interface to DB2 that accepts almost any SQL statement.
SPUFI is another example; it accepts SQL statements from an input data set, and
then processes and executes them dynamically.
The time at which DB2 determines the access path depends on these factors:
v Whether the statement is executed statically or dynamically
v Whether the statement contains input host variables
If you specify NOREOPT(VARS), DB2 determines the access path at bind time, just
as it does when there are no input variables.
If you specify REOPT(VARS), DB2 determines the access path at bind time and
again at run time, using the values in these types of input variables:
v Host variables
v Parameter markers
v Special registers
This means that DB2 must spend extra time determining the access path for
statements at run time, but if DB2 determines a significantly better access path
using the variable values, you might see an overall performance improvement. In
general, using REOPT(VARS) can make static SQL statements with input variables
If you use predictive governing, and a dynamic SQL statement bound with
REOPT(VARS) exceeds a predictive governing warning threshold, your application
does not receive a warning SQLCODE. However, if the statement exceeds a
predictive governing error threshold, the application receives an error SQLCODE
from the OPEN or EXECUTE statement.
DB2 can save prepared dynamic statements in a cache. The cache is a DB2-wide
cache in the EDM pool that all application processes can use to store and retrieve
prepared dynamic statements. After an SQL statement has been prepared and
automatically stored in the cache, subsequent prepare requests for that same SQL
statement can avoid the costly preparation process by using the statement in the
cache. Cached statements can be shared among different threads, plans, or
packages.
For example:
  PREPARE STMT1 FROM ...
  EXECUTE STMT1
  COMMIT
  .
  .
  .
  PREPARE STMT1 FROM ...   Identical statement. DB2 uses the prepared
  EXECUTE STMT1            statement from the cache.
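The reuse that the example shows can be pictured with a small model: a lookup keyed by the exact statement string, so that a byte-identical PREPARE finds the earlier prepared statement and skips the full prepare. The following Python sketch is only an illustration of that behavior; the StatementCache class and its fields are invented for the example and do not reflect DB2 internals.

```python
# Illustrative model of a prepared-statement cache (not DB2 internals).
# A PREPARE for a byte-identical statement string reuses the cached entry.

class StatementCache:
    def __init__(self):
        self._cache = {}          # statement string -> "prepared" object
        self.prepare_count = 0    # how many full prepares actually ran

    def prepare(self, sql):
        if sql in self._cache:    # identical string: cache hit
            return self._cache[sql]
        self.prepare_count += 1   # costly full prepare happens here
        prepared = ("PLAN-FOR", sql)   # stand-in for an access path
        self._cache[sql] = prepared
        return prepared

cache = StatementCache()
s1 = cache.prepare("UPDATE EMP SET SALARY=SALARY+50")
s2 = cache.prepare("UPDATE EMP SET SALARY=SALARY+50")      # identical: reused
s3 = cache.prepare("UPDATE EMP SET SALARY = SALARY + 50")  # differs: re-prepared
print(cache.prepare_count)   # 2 full prepares for 3 PREPARE requests
print(s1 is s2)              # True
```

Note that the third request differs from the first only in spacing, yet it forces a fresh prepare, matching the character-by-character comparison that the cache uses.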
Eligible Statements: The following SQL statements are eligible for caching:
SELECT
UPDATE
INSERT
DELETE
Distributed and local SQL statements are eligible. Prepared, dynamic statements
using DB2 private protocol access are eligible.
Restrictions: Even though static statements that use DB2 private protocol access
are dynamic at the remote site, those statements are not eligible for caching.
Statements in plans or packages bound with REOPT(VARS) are not eligible for
caching. See “How bind option REOPT(VARS) affects dynamic SQL” on page 525
for more information about REOPT(VARS).
The following conditions must be met before DB2 can use statement P1 instead of
preparing statement S2:
v S1 and S2 must be identical. The statements must pass a character-by-character
comparison and must be the same length. If the PREPARE statement for either
statement contains an ATTRIBUTES clause, DB2 concatenates the values in the
ATTRIBUTES clause to the statement string before comparing the strings. That
is, if A1 is the set of attributes for S1 and A2 is the set of attributes for S2, DB2
compares S1||A1 to S2||A2.
If the statement strings are not identical, DB2 cannot use the statement in the
cache.
For example, suppose that S1 is
'UPDATE EMP SET SALARY=SALARY+50'
and S2 is
'UPDATE EMP SET SALARY = SALARY + 50'
The two statements are equivalent SQL, but the statement strings are not identical.
In that case, DB2 prepares S2 and puts the prepared version of S2 in the cache.
v The authorization ID that was used to prepare S1 must be used to prepare S2:
– When a plan or package has run behavior, the authorization ID is the current
SQLID value.
For secondary authorization IDs:
- The application process that searches the cache must have the same
secondary authorization ID list as the process that inserted the entry into
the cache or must have a superset of that list.
- If the process that originally prepared the statement and inserted it into the
cache used one of the privileges held by the primary authorization ID to
accomplish the prepare, that ID must either be part of the secondary
authorization ID list of the process searching the cache, or it must be the
primary authorization ID of that process.
– When a plan or package has bind behavior, the authorization ID is the plan
owner's ID. For a DDF server thread, the authorization ID is the package
owner's ID.
– When a package has define behavior, then the authorization ID is the
user-defined function or stored procedure owner.
– When a package has invoke behavior, then the authorization ID is the
authorization ID under which the statement that invoked the user-defined
function or stored procedure executed.
For an explanation of bind, run, define, and invoke behavior, see “Using
DYNAMICRULES to specify behavior of dynamic SQL statements” on page 418.
v When the plan or package that contains S2 is bound, the values of these bind
options must be the same as when the plan or package that contains S1 was
bound:
CURRENTDATA
DYNAMICRULES
ISOLATION
SQLRULES
QUALIFIER
v When S2 is prepared, the values of special registers CURRENT DEGREE,
CURRENT RULES, and CURRENT PRECISION must be the same as when S1
was prepared.
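Taken together, the conditions above behave like a compound cache key: the statement string concatenated with its ATTRIBUTES values (S||A), the authorization ID, the listed bind options, and the listed special registers must all match. The Python sketch below is a schematic model of that comparison; the cache_key function and its parameter names are invented for illustration and are not DB2 structures.

```python
# Schematic cache key mirroring the matching rules above (names invented):
# DB2 compares S||A (statement plus ATTRIBUTES), the authorization ID,
# selected bind options, and selected special registers.

def cache_key(sql, attributes, auth_id, bind_opts, registers):
    # Statement string with ATTRIBUTES clause values concatenated (S||A)
    stmt = sql + (attributes or "")
    # Bind options that must match
    opts = tuple(bind_opts[k] for k in
                 ("CURRENTDATA", "DYNAMICRULES", "ISOLATION",
                  "SQLRULES", "QUALIFIER"))
    # Special registers that must match
    regs = tuple(registers[k] for k in
                 ("CURRENT DEGREE", "CURRENT RULES", "CURRENT PRECISION"))
    return (stmt, auth_id, opts, regs)

opts = {"CURRENTDATA": "NO", "DYNAMICRULES": "RUN", "ISOLATION": "CS",
        "SQLRULES": "DB2", "QUALIFIER": "TEST"}
regs = {"CURRENT DEGREE": "1", "CURRENT RULES": "DB2",
        "CURRENT PRECISION": "DEC15"}

k1 = cache_key("UPDATE EMP SET SALARY=SALARY+50", "", "TEST", opts, regs)
k2 = cache_key("UPDATE EMP SET SALARY=SALARY+50", "", "TEST", opts, regs)
k3 = cache_key("UPDATE EMP SET SALARY = SALARY + 50", "", "TEST", opts, regs)
print(k1 == k2)   # True: S2 can use the cached version of S1
print(k1 == k3)   # False: the strings differ, so DB2 prepares S2 again
```

A change in any component of the key, such as a different CURRENT DEGREE value at prepare time, would likewise prevent reuse.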
Figure 146. Writing dynamic SQL to use the bind option KEEPDYNAMIC(YES)
Figure 147. Using KEEPDYNAMIC(YES) when the dynamic statement cache is not active
When the dynamic statement cache is active, and you run an application bound
with KEEPDYNAMIC(YES), DB2 retains a copy of both the prepared statement and
the statement string. The prepared statement is cached locally for the application
process. It is likely that the statement is globally cached in the EDM pool, to benefit
other application processes. If the application issues an OPEN, EXECUTE, or
DESCRIBE after a commit operation, the application process uses its local copy of
the prepared statement to avoid a prepare and a search of the cache. Figure 148
on page 504 illustrates this process.
Figure 148. Using KEEPDYNAMIC(YES) when the dynamic statement cache is active
The local instance of the prepared SQL statement is kept in ssnmDBM1 storage
until one of the following occurs:
v The application process ends.
v A rollback operation occurs.
v The application issues an explicit PREPARE statement with the same statement
name.
If the application does issue a PREPARE for the same SQL statement name that
has a kept dynamic statement associated with it, the kept statement is discarded
and DB2 prepares the new statement.
v The statement is removed from memory because the statement has not been
used recently, and the number of kept dynamic SQL statements reaches a limit
set at installation time.
The KEEPDYNAMIC option has performance implications for DRDA clients that
specify WITH HOLD on their cursors:
v If KEEPDYNAMIC(NO) is specified, a separate network message is required
when the DRDA client issues the SQL CLOSE for the cursor.
v If KEEPDYNAMIC(YES) is specified, the DB2 for OS/390 and z/OS server
automatically closes the cursor when SQLCODE +100 is detected, which means
that the client does not have to send a separate message to close the held
cursor. This reduces network traffic for DRDA applications that use held cursors.
It also reduces the duration of locks that are associated with the held cursor.
Considerations for data sharing: If one member of a data sharing group has
enabled the cache but another has not, and an application is bound with
KEEPDYNAMIC(YES), DB2 must implicitly prepare the statement again if the
statement is assigned to a member without the cache. This can mean a slight
reduction in performance.
The governor controls only the dynamic SQL manipulative statements SELECT,
UPDATE, DELETE, and INSERT. Each dynamic SQL statement used in a program
is subject to the same limits. The limit can be a reactive governing limit or a
predictive governing limit. If the statement exceeds a reactive governing limit, the
statement receives an error SQL code. If the statement exceeds a predictive
governing limit, it receives a warning or error SQL code. “Writing an application to
handle predictive governing” explains more about predictive governing SQL codes.
Your system administrator can establish the limits for individual plans or packages,
for individual users, or for all users who do not have personal limits.
Follow the procedures defined by your location for adding, dropping, or modifying
entries in the resource limit specification table. For more information on the
resource limit specification tables, see Part 5 (Volume 2) of DB2 Administration
Guide.
If the failed statement involves an SQL cursor, the cursor’s position remains
unchanged. The application can then close that cursor. All other operations on the
cursor fail and return the same SQL error code.
If the failed SQL statement does not involve a cursor, then all changes that the
statement made are undone before the error code returns to the application. The
application can either issue another SQL statement or commit all work done so far.
For information about setting up the resource limit facility for predictive governing,
see Part 5 (Volume 2) of DB2 Administration Guide.
Normally with deferred prepare, the PREPARE, OPEN, and first FETCH of the data
are returned to the requester. For a predictive governor warning of +495, you would
ideally like to have the option to choose beforehand whether you want the OPEN
and FETCH of the data to occur. For downlevel requesters, you do not have this
option.
If your application does defer prepare processing, the application receives the +495
at its usual time (OPEN or PREPARE). If you have parameter markers with
deferred prepare, you receive the +495 at OPEN time as you normally do.
However, an additional message is exchanged.
Recommendation: Do not use deferred prepare for applications that use parameter
markers and that are predictively governed at the server side.
All SQL in REXX programs is dynamic SQL. For information on how to write SQL
REXX applications, see “Coding SQL statements in a REXX application” on
page 189.
Library prefix.SDSNSAMP contains the sample programs. You can view the
programs online, or you can print them using ISPF, IEBPTPCH, or your own printing
program.
If you know in advance that you will use only the DELETE statement and only the
table DSN8710.EMP, then you can use the more efficient static SQL. Suppose
further that there are several different tables with rows identified by employee
numbers, and that users enter a table name as well as a list of employee numbers
to delete. Although variables can represent the employee numbers, they cannot
represent the table name, so you must construct and execute the entire statement
dynamically. Your program must now do these things differently:
v Use parameter markers instead of host variables
v Use the PREPARE statement
v Use EXECUTE instead of EXECUTE IMMEDIATE.
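As a rough illustration of those three changes, the following sketch uses Python's sqlite3 module as a stand-in for embedded dynamic SQL; DB2 itself is not assumed, and the table contents are invented. The table name, which cannot be a parameter marker, is built into the statement string, while the employee number remains a marker that is supplied at execution time.

```python
# Sketch of the approach described above, with Python's sqlite3 module
# standing in for embedded dynamic SQL (sample table and data are invented).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP (EMPNO TEXT, SALARY REAL)")
conn.executemany("INSERT INTO EMP VALUES (?, ?)",
                 [("000010", 52750.0), ("000020", 41250.0), ("000030", 38250.0)])

table_name = "EMP"                 # entered by the user at run time
if not table_name.isidentifier():  # guard the part that cannot be a marker
    raise ValueError("invalid table name")

# Like PREPARE S1 FROM :DSTRING -- the string contains a parameter marker
dstring = "DELETE FROM %s WHERE EMPNO = ?" % table_name

# Like EXECUTE S1 USING :EMP -- run the prepared statement for each number
for empno in ["000010", "000030"]:
    conn.execute(dstring, (empno,))

remaining = [r[0] for r in conn.execute("SELECT EMPNO FROM EMP")]
print(remaining)   # ['000020']
```

Because only the employee number varies between executions, the statement string is constructed once and reused, which is the point of replacing EXECUTE IMMEDIATE with PREPARE and EXECUTE.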
You can indicate to DB2 that a parameter marker represents a host variable of a
certain data type by specifying the parameter marker as the argument of a CAST
function. When the statement executes, DB2 converts the host variable to the data
type specified in the CAST function.
Because DB2 can evaluate an SQL statement with typed parameter markers more
efficiently than a statement with untyped parameter markers, we recommend that
you use typed parameter markers whenever possible. Under certain circumstances
you must use typed parameter markers. See Chapter 5 of DB2 SQL Reference for
rules for using untyped or typed parameter markers.
You associate host variable :EMP with the parameter marker when you execute the
prepared statement. Suppose S1 is the prepared statement. Then the EXECUTE
statement looks like this:
EXECUTE S1 USING :EMP;
For example, let the variable :DSTRING have the value “DELETE FROM
DSN8710.EMP WHERE EMPNO = ?”. To prepare an SQL statement from that
string and assign it the name S1, write:
EXEC SQL PREPARE S1 FROM :DSTRING;
The prepared statement still contains a parameter marker, for which you must
supply a value when the statement executes. After the statement is prepared, the
table name is fixed, but the parameter marker allows you to execute the same
statement many times with different values of the employee number.
After you prepare a statement, you can execute it many times within the same unit
of work. In most cases, COMMIT or ROLLBACK destroys statements prepared in a
unit of work. Then, you must prepare them again before you can execute them
again. However, if you declare a cursor for a dynamic statement and use the option
WITH HOLD, a commit operation does not destroy the prepared statement if the
cursor is still open. You can execute the statement in the next unit of work without
preparing it again.
To execute the prepared statement S1 just once, using a parameter value contained
in the host variable :EMP, write:
EXEC SQL EXECUTE S1 USING :EMP;
You can now write an equivalent example for a dynamic SQL statement:
< Read a statement containing parameter markers into DSTRING.>
EXEC SQL PREPARE S1 FROM :DSTRING;
< Read a value for EMP from the list. >
DO UNTIL (EMPNO = 0);
EXEC SQL EXECUTE S1 USING :EMP;
< Read a value for EMP from the list. >
END;
The PREPARE statement prepares the SQL statement and calls it S1. The
EXECUTE statement executes S1 repeatedly, using different values for EMP.
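The prepare-once, execute-many pattern above can be sketched with Python's sqlite3 module standing in for DB2 (the table contents here are illustrative, not the sample database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP (EMPNO TEXT, LASTNAME TEXT)")
conn.executemany("INSERT INTO EMP VALUES (?, ?)",
                 [("000010", "HAAS"), ("000020", "THOMPSON"), ("000030", "KWAN")])

# The statement text is fixed once, like PREPARE S1 FROM :DSTRING;
# each execute supplies a new value for the marker, like EXECUTE S1 USING :EMP.
delete_stmt = "DELETE FROM EMP WHERE EMPNO = ?"
for empno in ("000010", "000030"):
    conn.execute(delete_stmt, (empno,))

remaining = [r[0] for r in conn.execute("SELECT EMPNO FROM EMP")]
print(remaining)  # only employee 000020 is left
```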
Before you execute DESCRIBE INPUT, you must allocate an SQLDA with enough
instances of SQLVAR to represent all parameter markers in the SQL statements
you want to describe.
After you execute DESCRIBE INPUT, you code the application in the same way as
any other application in which you execute a prepared statement using an SQLDA.
First, you obtain the addresses of the input host variables and their indicator
variables and insert those addresses into the SQLDATA and SQLIND fields. Then
you execute the prepared SQL statement.
The term “fixed-list” does not imply that you must know in advance how many rows
of data will return; however, you must know the number of columns and the data
types of those columns. A fixed-list SELECT statement returns a result table that
can contain any number of rows; your program looks at those rows one at a time,
using the FETCH statement. Each successive fetch returns the same number of
values as the last, and the values have the same data types each time. Therefore,
you can specify host variables as you do for static SQL.
An advantage of the fixed-list SELECT is that you can write it in any of the
programming languages that DB2 supports. Varying-list dynamic SELECT
statements require assembler, C, PL/I, and versions of COBOL other than OS/VS
COBOL.
For a sample program written in C illustrating dynamic SQL with fixed-list SELECT
statements, see Figure 247 on page 863.
Suppose that your program retrieves last names and phone numbers by
dynamically executing SELECT statements of this form:
SELECT LASTNAME, PHONENO FROM DSN8710.EMP
WHERE ... ;
The program reads the statements from a terminal, and the user determines the
WHERE clause.
| ATTRVAR contains attributes that you want to add to the SELECT statement, such
| as FETCH FIRST 10 ROWS ONLY or OPTIMIZE FOR 1 ROW. In general, if the
| SELECT statement has attributes that conflict with the attributes in the PREPARE
| statement, the attributes on the SELECT statement take precedence over the
| attributes on the PREPARE statement. However, in this example, the SELECT
| statement in DSTRING has no attributes specified, so DB2 uses the attributes in
| ATTRVAR for the SELECT statement.
To execute STMT, your program must open the cursor, fetch rows from the result
table, and close the cursor. The following sections describe how to do those steps.
If STMT contains parameter markers, then you must use the USING clause of
OPEN to provide values for all of the parameter markers in STMT. If there are four
parameter markers in STMT, you need:
EXEC SQL OPEN C1 USING :PARM1, :PARM2, :PARM3, :PARM4;
The key feature of this statement is the use of a list of host variables to receive the
values returned by FETCH. The list has a known number of items (two—:NAME
and :PHONE) of known data types (both are character strings, of lengths 15 and 4,
respectively).
It is possible to use this list in the FETCH statement only because you planned the
program to use only fixed-list SELECTs. Every row that cursor C1 points to must
contain exactly two character values of appropriate length. If the program is to
handle anything else, it must use the techniques described under Dynamic SQL for
varying-list SELECT statements.
Now there is a new wrinkle. The program must find out whether the statement is a
SELECT. If it is, the program must also find out how many values are in each row,
and what their data types are. The information comes from an SQL descriptor area
(SQLDA).
For a complete layout of the SQLDA and the descriptions given by INCLUDE
statements, see Appendix C of DB2 SQL Reference.
Equivalently, you can use the INTO clause in the PREPARE statement:
EXEC SQL
PREPARE STMT INTO :MINSQLDA FROM :DSTRING;
Do not use the USING clause in either of these examples. At the moment, only the
minimum SQLDA is in use. Figure 149 shows the contents of the minimum SQLDA
in use.
Whether or not your SQLDA is big enough, whenever you execute DESCRIBE, DB2
returns the following values, which you can use to build an SQLDA of the correct
size:
v SQLD
0 if the SQL statement is not a SELECT. Otherwise, the number of columns in
the result table. The number of SQLVAR occurrences you need for the SELECT
depends on the value in the 7th byte of SQLDAID.
v The 7th byte of SQLDAID
2 if each column in the result table requires 2 SQLVAR entries. 3 if each column
in the result table requires 3 SQLVAR entries.
(If the statement does contain parameter markers, you must use an SQL descriptor
area; for instructions, see “Executing arbitrary statements with parameter markers”
on page 524.)
Figure 152, Figure 153, and Figure 154 on page 519 show the SQL descriptor area
after you take certain actions. Table 57 on page 519 describes the values in the
descriptor area. In Figure 152, the DESCRIBE statement inserted all the values
except the first occurrence of the number 200. The program inserted the number
200 before it executed DESCRIBE to tell how many occurrences of SQLVAR to
allow. If the result table of the SELECT has more columns than this, the SQLVAR
fields describe nothing.
The next set of five values, the first SQLVAR, pertains to the first column of the
result table (the WORKDEPT column). SQLVAR element 1 contains fixed-length
character strings and does not allow null values (SQLTYPE=452); the length
attribute is 3. For information on SQLTYPE values, see Appendix C of DB2 SQL
Reference.
Figure 153. SQL descriptor area after analyzing descriptions and acquiring storage
Figure 153 on page 518 shows the content of the descriptor area before the
program obtains any rows of the result table. Addresses of fields and indicator
variables are already in the SQLVAR.
| In dynamic SQL statements, if you want to retrieve the data in an encoding scheme
| and CCSID other than the default values, you can use one of the following
| techniques:
| v Set the CURRENT APPLICATION ENCODING SCHEME special register before
| you execute the SELECT statements. For example, to set the CCSID and
| encoding scheme for retrieved data to the default CCSID for Unicode, execute
| this SQL statement:
| EXEC SQL SET CURRENT APPLICATION ENCODING SCHEME = 'UNICODE';
| v For fixed-list SELECT statements, use the DECLARE VARIABLE statement to
| associate CCSIDs with the host variables into which you retrieve the data. See
| “Changing the coded character set ID of host variables” on page 72 for
| information on this technique.
| v For varying-list SELECT statements, set the CCSID for the retrieved data in the
| SQLDA. The following text describes that technique.
To change the encoding scheme of retrieved data, set up the SQLDA as you would
for any other varying-list SELECT statement. Then make these additional changes
to the SQLDA:
1. Put the character + in the sixth byte of field SQLDAID.
2. For each SQLVAR entry:
v Set the length field of SQLNAME to 8.
v Set the first two bytes of the data field of SQLNAME to X'0000'.
v Set the third and fourth bytes of the data field of SQLNAME to the CCSID, in
hexadecimal, in which you want the results to display. You can specify any
CCSID that meets either of the following conditions:
– There is a row in catalog table SYSSTRINGS that has a matching value
for OUTCCSID.
– Language Environment supports conversion to that CCSID. See OS/390
C/C++ Programming Guide for information on the conversions that
Language Environment supports.
For REXX, you set the CCSID in the stem.n.SQLCCSID field instead of setting
the SQLDAID and SQLNAME fields.
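The byte layout described above can be sketched as plain byte manipulation. This Python fragment is illustrative only (it builds the two affected fragments, not a complete SQLDA): it places the character + in the sixth byte of SQLDAID and encodes CCSID 437 in the SQLNAME override, following the steps in the text.

```python
# SQLDAID is an 8-byte eye-catcher; the sixth byte (index 5) becomes '+'.
sqldaid = bytearray(b"SQLDA   ")
sqldaid[5] = ord("+")

def ccsid_sqlname(ccsid):
    """Build the SQLNAME override: a 2-byte length field of 8, then 8 data
    bytes whose first two bytes are X'0000' and whose third and fourth
    bytes hold the CCSID in hexadecimal (big-endian)."""
    data = b"\x00\x00" + ccsid.to_bytes(2, "big") + b"\x00" * 4
    return (8).to_bytes(2, "big") + data

sqlname = ccsid_sqlname(437)   # ASCII CCSID 437, as in the example below
```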
For example, suppose the table that contains WORKDEPT and PHONENO is
defined with CCSID ASCII. To retrieve data for columns WORKDEPT and
PHONENO in ASCII CCSID 437, set up the SQLDA as Figure 155 shows.
Figure 155. SQL descriptor area for retrieving data in ASCII CCSID 437
In this case, SQLNAME contains nothing for a column with no label. If you prefer to
use labels wherever they exist, but column names where there are no labels, write
USING ANY. (Some columns, such as those derived from functions or expressions,
have neither name nor label; SQLNAME contains nothing for those columns.
However, if the column is the result of a UNION, SQLNAME contains the names of
the columns of the first operand of the UNION.)
You can also write USING BOTH to obtain the name and the label when both exist.
However, to obtain both, you need a second set of occurrences of SQLVAR in
FULSQLDA. The first set contains descriptions of all the columns using names; the
second set contains descriptions using labels. This means that you must allocate a
longer SQLDA for the second DESCRIBE statement ((16 + SQLD * 88 bytes)
instead of (16 + SQLD * 44)). You must also put double the number of columns
(SQLD * 2) in the SQLN field of the second SQLDA. Otherwise, if there is not
enough space available, DESCRIBE does not enter descriptions of any of the
columns.
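The storage arithmetic above can be captured in a small helper. This Python sketch (the function name is ours) computes the SQLDA length from the formulas in the text: 16 bytes of header plus 44 bytes per SQLVAR occurrence, doubled when USING BOTH requires a second set of SQLVAR entries.

```python
def sqlda_length(sqld, sqlvar_sets=1):
    """16-byte SQLDA header plus 44 bytes per SQLVAR occurrence.
    Pass sqlvar_sets=2 for USING BOTH (names and labels)."""
    return 16 + sqld * 44 * sqlvar_sets

# For a 2-column result table:
print(sqlda_length(2))     # 104 bytes with one set of SQLVARs
print(sqlda_length(2, 2))  # 192 bytes with both names and labels (16 + 2 * 88)
```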
USER cannot contain nulls and is of distinct type ID, defined like this:
CREATE DISTINCT TYPE SCHEMA1.ID AS CHAR(20);
The result table for this statement has two columns, but you need four SQLVAR
occurrences in your SQLDA because the result table contains a LOB type and a
distinct type. Suppose you prepare and describe this statement into FULSQLDA,
Figure 156. SQL descriptor area after describing a CLOB and distinct type
The next steps are the same as for result tables without LOBs or distinct types:
1. Analyze each SQLVAR description to determine the maximum amount of space
you need for the column value.
For a LOB type, retrieve the length from the SQLLONGL field instead of the
SQLLEN field.
2. Derive the address of some storage area of the required size.
For a LOB data type, you also need a 4-byte storage area for the length of the
LOB data. You can allocate this 4-byte area at the beginning of the LOB data or
in a different location.
3. Put this address in the SQLDATA field.
For a LOB data type, if you allocated a separate area to hold the length of the
LOB data, put the address of the length field in SQLDATAL. If the length field is
at the beginning of the LOB data area, put 0 in SQLDATAL.
4. If the SQLTYPE field indicates that the value can be null, the program must also
put the address of an indicator variable in the SQLIND field.
Figure 157 and Figure 158 on page 523 show the contents of FULSQLDA after you
fill in pointers to storage locations and execute FETCH.
Figure 157. SQL descriptor area after analyzing CLOB and distinct type descriptions and
acquiring storage
For cases when there are parameter markers, see “Executing arbitrary statements
with parameter markers” on page 524.
The key feature of this statement is the clause USING DESCRIPTOR :FULSQLDA.
That clause names an SQL descriptor area in which the occurrences of SQLVAR
point to other areas. Those other areas receive the values that FETCH returns. It is
possible to use that clause only because you previously set up FULSQLDA to look
like Figure 152 on page 518.
Figure 154 on page 519 shows the result of the FETCH. The data areas identified
in the SQLVAR fields receive the values from a single row of the result table.
Successive executions of the same FETCH statement put values from successive
rows of the result table into these same areas.
In both cases, the number and types of host variables named must agree with the
number of parameter markers in STMT and the types of parameter they represent.
The first variable (VAR1 in the examples) must have the type expected for the first
parameter marker in the statement, the second variable must have the type
expected for the second marker, and so on. There must be at least as many
variables as parameter markers.
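A client program can sanity-check this rule (at least as many host variables as parameter markers) before executing. This Python sketch is our own helper, not a DB2 API; it counts markers while skipping question marks inside string literals:

```python
def count_parameter_markers(sql):
    """Count '?' markers in an SQL string, ignoring quoted literals.
    A simplified sketch: it does not handle every quoting edge case."""
    count, in_string = 0, False
    for ch in sql:
        if ch == "'":
            in_string = not in_string
        elif ch == "?" and not in_string:
            count += 1
    return count

stmt = "UPDATE EMP SET JOB = '?' WHERE EMPNO = ? AND WORKDEPT = ?"
print(count_parameter_markers(stmt))  # 2: the '?' inside quotes is a literal
```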
The structure of DPARM is the same as that of any other SQLDA. The number of
occurrences of SQLVAR can vary, as in previous examples. In this case, there must
be one for every parameter marker. Each occurrence of SQLVAR describes one
host variable that replaces one parameter marker at run time. This happens either
when a non-SELECT statement executes or when a cursor is opened for a
SELECT statement.
You must fill in certain fields in DPARM before using EXECUTE or OPEN; you can
ignore the other fields.
This chapter contains information that applies to all stored procedures and specific
information about stored procedures in languages other than Java. For information
on writing, preparing, and running Java stored procedures, see DB2 Application
Programming Guide and Reference for Java.
Consider using stored procedures for a client/server application that does at least
one of the following things:
v Executes many remote SQL statements.
Remote SQL statements can create many network send and receive operations,
which results in increased processor costs.
Stored procedures can encapsulate many of your application’s SQL statements
into a single message to the DB2 server, reducing network traffic to a single send
and receive operation for a series of SQL statements.
v Accesses host variables for which you want to guarantee security and integrity.
Stored procedures remove SQL applications from the workstation, which prevents
workstation users from manipulating the contents of sensitive SQL statements
and host variables.
Figure 159 on page 528 and Figure 160 on page 528 illustrate the difference
between using stored procedures and not using stored procedures.
[Figure 160 diagram: a client application issues EXEC SQL CALL PROCX; on the
OS/390 system, DB2 schedules PROCX in the stored procedures region, where it
performs the EXEC SQL DECLARE C1, OPEN C1, UPDATE, and INSERT statements.]
Figure 160. Processing with stored procedures. The same series of SQL statements uses a
single send or receive operation.
The application can call more stored procedures, or it can execute more SQL
statements. DB2 receives and processes the COMMIT or ROLLBACK request.
The COMMIT or ROLLBACK operation covers all SQL operations, whether
executed by the application or by stored procedures, for that unit of work.
If the application involves IMS or CICS, similar processing occurs based on the
IMS or CICS sync point rather than on an SQL COMMIT or ROLLBACK
statement.
9. DB2 returns a reply message to the application describing the outcome of the
COMMIT or ROLLBACK operation.
10. The workstation application executes the following steps to retrieve the
contents of table EMPPROJACT, which the stored procedure has returned in a
result set:
a. Declares a result set locator for the result set being returned.
b. Executes the ASSOCIATE LOCATORS statement to associate the result
set locator with the result set.
c. Executes the ALLOCATE CURSOR statement to associate a cursor with
the result set.
d. Executes the FETCH statement with the allocated cursor multiple times to
retrieve the rows in the result set.
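The four steps above can be sketched with sqlite3 as a stand-in. DB2's ASSOCIATE LOCATORS and ALLOCATE CURSOR statements have no direct Python equivalent; in this illustrative fragment, the cursor object returned by the "procedure" plays the role of the allocated cursor:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMPPROJACT (EMPNO TEXT, PROJNO TEXT)")
conn.executemany("INSERT INTO EMPPROJACT VALUES (?, ?)",
                 [("000130", "IF1000"), ("000140", "IF2000")])

def procedure_returning_result_set(conn):
    # The open cursor stands in for a result set that a stored
    # procedure leaves open when it ends.
    return conn.execute("SELECT EMPNO, PROJNO FROM EMPPROJACT ORDER BY EMPNO")

cursor = procedure_returning_result_set(conn)  # steps a-c: obtain the cursor
rows = []
while True:                                    # step d: repeated FETCH
    row = cursor.fetchone()
    if row is None:
        break
    rows.append(row)
print(rows)
```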
Perform these tasks to prepare the DB2 subsystem to run stored procedures:
v Decide whether to use WLM-established address spaces or DB2-established
address spaces for stored procedures.
See Part 5 (Volume 2) of DB2 Administration Guide for a comparison of the two
environments.
If you are currently using DB2-established address spaces and want to convert to
WLM-established address spaces, see “Moving stored procedures to a
WLM-established environment (for system administrators)” on page 538 for
information on what you need to do.
v Define JCL procedures for the stored procedures address spaces
Member DSNTIJMV of data set DSN710.SDSNSAMP contains sample JCL
procedures for starting WLM-established and DB2-established address spaces. If
you enter a WLM procedure name or a DB2 procedure name in installation panel
DSNTIPX, DB2 customizes a JCL procedure for you. See Part 2 of DB2
Installation Guide for details.
v For WLM-established address spaces, define WLM application environments for
groups of stored procedures and associate a JCL startup procedure with each
application environment.
See Part 5 (Volume 2) of DB2 Administration Guide for information on how to do
this.
v If you plan to execute stored procedures that use the ODBA interface to access
IMS databases, modify the startup procedures for the address spaces in which
those stored procedures will run in the following way:
– Add the data set name of the IMS data set that contains the ODBA callable
interface code (usually IMS.RESLIB) to the end of the STEPLIB
concatenation.
– After the STEPLIB DD statement, add a DFSRESLB DD statement that
names the IMS data set that contains the ODBA callable interface code.
v Install Language Environment and the appropriate compilers.
See OS/390 Language Environment for OS/390 & VM Customization for
information on installing Language Environment.
See “Language requirements for the stored procedure and its caller” on page 540
for minimum compiler and Language Environment requirements.
See “Linkage conventions” on page 574 for an example of coding the DBINFO
parameter list in a stored procedure.
Later, you need to make the following changes to the stored procedure definition:
v It selects data from DB2 tables but does not modify DB2 data.
v The parameters can have null values, and the stored procedure can return a
diagnostic string.
v The length of time the stored procedure runs should not be limited.
v If the stored procedure is called by another stored procedure or a user-defined
function, the stored procedure uses the WLM environment of the caller.
Execute this ALTER PROCEDURE statement to make the changes:
ALTER PROCEDURE B
READS SQL DATA
ASUTIME NO LIMIT
PARAMETER STYLE DB2SQL
WLM ENVIRONMENT (PAYROLL,*);
The method that you use to perform these tasks depends on whether you are using
WLM-established or DB2-established address spaces.
In compatibility mode, you must stop and start stored procedures address
spaces when you refresh Language Environment.
To check for stored procedures with nonblank AUTHID or LUNAME values, execute
this query:
SELECT * FROM SYSIBM.SYSPROCEDURES
WHERE AUTHID<>' ' OR LUNAME<>' ';
Then use CREATE PROCEDURE to create definitions for all stored procedures that
are identified by the SELECT statement. You cannot specify AUTHID or LUNAME
using CREATE PROCEDURE. However, AUTHID and LUNAME let you define
several versions of a stored procedure, such as a test version and a production
version. You can accomplish the same task by specifying a unique schema name
for each stored procedure with the same name. For example, for stored procedure
INVENTORY, you might define TEST.INVENTORY and PRODTN.INVENTORY.
There are two types of stored procedures: external stored procedures and SQL
procedures. External stored procedures are written in a host language. The source
code for an external stored procedure is separate from the definition for the stored
procedure. An external stored procedure is much like any other SQL application. It
can include static or dynamic SQL statements, IFI calls, and DB2 commands issued
through IFI. SQL procedures are written using SQL procedures statements, which
are part of a CREATE PROCEDURE statement. This section discusses writing and
preparing external stored procedures.
The program that calls the stored procedure can be in any language that supports
the SQL CALL statement. ODBC applications can use an escape clause to pass a
stored procedure call to DB2.
If the stored procedure calls other programs that contain SQL statements, each of
those called programs must have a DB2 package. The owner of the package or
plan that contains the CALL statement must have EXECUTE authority for all
packages that the other programs use.
When a stored procedure calls another program, DB2 determines which collection
the called program’s package belongs to in one of the following ways:
v If the stored procedure executes SET CURRENT PACKAGESET, the called
program’s package comes from the collection specified in SET CURRENT
PACKAGESET.
v If the stored procedure does not execute SET CURRENT PACKAGESET,
– If the stored procedure definition contains NO COLLID, DB2 uses the
collection ID of the package that contains the SQL statement CALL.
– If the stored procedure definition contains COLLID collection-id, DB2 uses
collection-id.
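The collection-resolution rules above amount to a small decision, sketched here in Python (function and parameter names are ours, purely illustrative):

```python
def resolve_collection(current_packageset, procedure_collid, caller_collid):
    """Pick the collection for a package that a stored procedure calls.
    current_packageset: value set by SET CURRENT PACKAGESET, or None.
    procedure_collid:   collection-id from COLLID in the definition, or
                        None when the definition specifies NO COLLID."""
    if current_packageset is not None:
        return current_packageset
    if procedure_collid is None:   # NO COLLID: use the calling package's
        return caller_collid       # collection ID
    return procedure_collid

print(resolve_collection(None, None, "CALLER_COLL"))              # CALLER_COLL
print(resolve_collection(None, "PROC_COLL", "CALLER_COLL"))       # PROC_COLL
print(resolve_collection("SET_COLL", "PROC_COLL", "CALLER_COLL")) # SET_COLL
```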
When control returns from the stored procedure, DB2 restores the value of the
special register CURRENT PACKAGESET to the value it contained before the client
program executed the SQL statement CALL.
/******************************************************************/
/* This C subprogram is a stored procedure that uses linkage */
/* convention GENERAL and receives 3 parameters. */
/******************************************************************/
#pragma linkage(cfunc,fetchable)
#include <stdlib.h>
void cfunc(char p1[11],long *p2,short *p3)
{
/****************************************************************/
/* Declare variables used for SQL operations. These variables */
/* are local to the subprogram and must be copied to and from */
/* the parameter list for the stored procedure call. */
/****************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
char parm1[11];
long int parm2;
short int parm3;
EXEC SQL END DECLARE SECTION;
/*************************************************************/
/* Receive input parameter values into local variables. */
/*************************************************************/
strcpy(parm1,p1);
parm2 = *p2;
parm3 = *p3;
/*************************************************************/
/* Perform operations on local variables. */
/*************************************************************/
.
.
.
/*************************************************************/
/* Set values to be passed back to the caller. */
/*************************************************************/
strcpy(parm1,"SETBYSP");
parm2 = 100;
parm3 = 200;
/*************************************************************/
/* Copy values to output parameters. */
/*************************************************************/
strcpy(p1,parm1);
*p2 = parm2;
*p3 = parm3;
}
/******************************************************************/
/* This C++ subprogram is a stored procedure that uses linkage */
/* convention GENERAL and receives 3 parameters. */
/* The extern statement is required. */
/******************************************************************/
extern "C" void cppfunc(char p1[11],long *p2,short *p3);
#pragma linkage(cppfunc,fetchable)
#include <stdlib.h>
EXEC SQL INCLUDE SQLCA;
void cppfunc(char p1[11],long *p2,short *p3)
{
/****************************************************************/
/* Declare variables used for SQL operations. These variables */
/* are local to the subprogram and must be copied to and from */
/* the parameter list for the stored procedure call. */
/****************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
char parm1[11];
long int parm2;
short int parm3;
EXEC SQL END DECLARE SECTION;
/*************************************************************/
/* Receive input parameter values into local variables. */
/*************************************************************/
strcpy(parm1,p1);
parm2 = *p2;
parm3 = *p3;
/*************************************************************/
/* Perform operations on local variables. */
/*************************************************************/
.
.
.
/*************************************************************/
/* Set values to be passed back to the caller. */
/*************************************************************/
strcpy(parm1,"SETBYSP");
parm2 = 100;
parm3 = 200;
/*************************************************************/
/* Copy values to output parameters. */
/*************************************************************/
strcpy(p1,parm1);
*p2 = parm2;
*p3 = parm3;
}
| You cannot include ROLLBACK statements in a stored procedure if DB2 is not the
| commit coordinator.
The local DB2 application cannot use DRDA access to connect to any location that
the stored procedure has already accessed using DB2 private protocol access.
Before making the DB2 private protocol connection, the local DB2 application must
first execute the RELEASE statement to terminate the DB2 private protocol
connection, and then commit the unit of work.
ODBA support uses OS/390 RRS for syncpoint control of DB2 and IMS resources.
Therefore, stored procedures that use ODBA can run only in WLM-established
stored procedures address spaces.
When you write a stored procedure that uses ODBA, follow the rules for writing an
IMS application program that issues DL/I calls. See IMS Application Programming:
Database Manager and IMS Application Programming: Transaction Manager for
information on writing DL/I applications.
IMS work that is performed in a stored procedure is in the same commit scope as
the stored procedure. As with any other stored procedure, the calling application
commits work.
A stored procedure that uses ODBA must issue a DPSB PREP call to deallocate a
PSB when all IMS work under that PSB is complete. The PREP keyword tells IMS
to move inflight work to an indoubt state. When work is in the indoubt state, IMS
does not require activation of syncpoint processing when the DPSB call is
executed. IMS commits or backs out the work as part of RRS two-phase commit
when the stored procedure caller executes COMMIT or ROLLBACK.
A sample COBOL stored procedure and client program demonstrate accessing IMS
data using the ODBA interface. The stored procedure source code is in member
DSN8EC1 and is prepared by job DSNTEJ61. The calling program source code is
in member DSN8EC1 and is prepared and executed by job DSNTEJ62. All code is
in data set DSN710.SDSNSAMP.
The startup procedure for a stored procedures address space in which stored
procedures that use ODBA run must include a DFSRESLB DD statement and an
extra data set in the STEPLIB concatenation. See “Setting up the stored procedures
environment” on page 532 for more information.
For each result set you want returned, your stored procedure must:
When the stored procedure ends, DB2 returns the rows in the query result set to
the client.
DB2 does not return result sets for cursors that are closed before the stored
procedure terminates. The stored procedure must execute a CLOSE statement for
each cursor associated with a result set that should not be returned to the DRDA
client.
Example: Suppose you want to return a result set that contains entries for all
employees in department D11. First, declare a cursor that describes this subset of
employees:
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
SELECT * FROM DSN8710.EMP
WHERE WORKDEPT='D11';
DB2 returns the result set and the name of the SQL cursor for the stored procedure
to the client.
Use meaningful cursor names for returning result sets: The name of the cursor
that is used to return result sets is made available to the client application through
extensions to the DESCRIBE statement. See “Writing a DB2 for OS/390 and z/OS
client program or SQL procedure to receive result sets” on page 602 for more
information.
Use cursor names that are meaningful to the DRDA client application, especially
when the stored procedure returns multiple result sets.
Objects from which you can return result sets: You can use any of these objects
in the SELECT statement associated with the cursor for a result set:
v Tables, synonyms, views, created temporary tables, declared temporary tables,
and aliases defined at the local DB2 system
v Tables, synonyms, views, created temporary tables, and aliases defined at
remote DB2 for OS/390 and z/OS systems that are accessible through DB2
private protocol access
Returning a subset of rows to the client: If you execute FETCH statements with
a result set cursor, DB2 does not return the fetched rows to the client program. For
example, if you declare a cursor WITH RETURN and then execute the statements
OPEN, FETCH, FETCH, the client receives data beginning with the third row in the
| result set. If the result set cursor is scrollable and you fetch rows with it, you need
| to position the cursor before the first row of the result table after you fetch the rows
| and before the stored procedure ends.
Using a temporary table to return result sets: You can use a created temporary
table or declared temporary table to return result sets from a stored procedure. This
capability can be used to return nonrelational data to a DRDA client.
The calling application can use a DB2 package or plan to execute the CALL
statement. The stored procedure must use a DB2 package as Figure 165 shows.
The server program might use more than one package. These packages come from
two sources:
Unlike other stored procedures, you do not prepare REXX stored procedures for
execution. REXX stored procedures run using one of four packages that are bound
during the installation of DB2 REXX Language Support. The package that DB2
uses when the stored procedure runs depends on the current isolation level at
which the stored procedure runs:
Package name Isolation level
DSNREXRR Repeatable read (RR)
DSNREXRS Read stability (RS)
DSNREXCS Cursor stability (CS)
DSNREXUR Uncommitted read (UR)
Figure 167 on page 552 shows an example of a REXX stored procedure that
executes DB2 commands. The stored procedure performs the following actions:
v Receives one input parameter, which contains a DB2 command.
v Calls the IFI COMMAND function to execute the command.
v Extracts the command result messages from the IFI return area and places the
messages in a created temporary table. Each row of the temporary table
contains a sequence number and the text of one message.
v Opens a cursor to return a result set that contains the command result
messages.
v Returns the unformatted contents of the IFI return area in an output parameter.
Figure 166 on page 552 shows the definition of the stored procedure.
/* REXX */
PARSE UPPER ARG CMD /* Get the DB2 command text */
/* Remove enclosing quotes */
IF LEFT(CMD,2) = ""'" & RIGHT(CMD,2) = "'"" THEN
CMD = SUBSTR(CMD,2,LENGTH(CMD)-2)
ELSE
IF LEFT(CMD,2) = """'" & RIGHT(CMD,2) = "'""" THEN
CMD = SUBSTR(CMD,3,LENGTH(CMD)-4)
COMMAND = SUBSTR("COMMAND",1,18," ")
/****************************************************************/
/* Set up the IFCA, return area, and output area for the */
/* IFI COMMAND call. */
/****************************************************************/
IFCA = SUBSTR('00'X,1,180,'00'X)
IFCA = OVERLAY(D2C(LENGTH(IFCA),2),IFCA,1+0)
IFCA = OVERLAY("IFCA",IFCA,4+1)
RTRNAREASIZE = 262144 /*1048572*/
RTRNAREA = D2C(RTRNAREASIZE+4,4)LEFT(' ',RTRNAREASIZE,' ')
OUTPUT = D2C(LENGTH(CMD)+4,2)||'0000'X||CMD
BUFFER = SUBSTR(" ",1,16," ")
/****************************************************************/
/* Make the IFI COMMAND call. */
/****************************************************************/
ADDRESS LINKPGM "DSNWLIR COMMAND IFCA RTRNAREA OUTPUT"
WRC = RC
RTRN= SUBSTR(IFCA,12+1,4)
REAS= SUBSTR(IFCA,16+1,4)
TOTLEN = C2D(SUBSTR(IFCA,20+1,4))
/****************************************************************/
/* Set up the host command environment for SQL calls. */
/****************************************************************/
"SUBCOM DSNREXX" /* Host cmd env available? */
IF RC THEN /* No--add host cmd env */
S_RC = RXSUBCOM('ADD','DSNREXX','DSNREXX')
EXIT SUBSTR(RTRNAREA,1,TOTLEN+4)
|| 'SQLWARN ='SQLWARN.0',',
|| SQLWARN.1',',
|| SQLWARN.2',',
|| SQLWARN.3',',
|| SQLWARN.4',',
|| SQLWARN.5',',
|| SQLWARN.6',',
|| SQLWARN.7',',
|| SQLWARN.8',',
|| SQLWARN.9',',
|| SQLWARN.10';' ,
|| 'SQLSTATE='SQLSTATE';'
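The quote-removal logic at the start of the REXX procedure strips either one pair of enclosing apostrophes, or a two-character prefix of a double quote followed by an apostrophe and the matching suffix. A C sketch of the same logic (illustrative only; not part of the DB2 sample) is:

```c
#include <assert.h>
#include <string.h>

/* Remove enclosing quotation marks from a command string, in place,
   mirroring the REXX procedure's quote handling: strip '...' (one
   character from each side) or "'..."' (two characters from each side). */
static void strip_quotes(char *buf)
{
    size_t len = strlen(buf);
    if (len >= 2 && buf[0] == '\'' && buf[len - 1] == '\'') {
        memmove(buf, buf + 1, len - 2);   /* drop enclosing '...' */
        buf[len - 2] = '\0';
    } else if (len >= 4 && strncmp(buf, "\"'", 2) == 0
               && strcmp(buf + len - 2, "'\"") == 0) {
        memmove(buf, buf + 2, len - 4);   /* drop enclosing "'...'" pair */
        buf[len - 4] = '\0';
    }
}
```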
Creating an SQL procedure involves writing the source statements for the SQL
procedure, creating the executable form of the SQL procedure, and defining the
SQL procedure to DB2. This section discusses how to write and prepare an SQL
procedure. The following topics are included:
v “Comparison of an SQL procedure and an external procedure”
v “Statements that you can include in a procedure body” on page 556
v “Terminating statements in an SQL procedure” on page 559
v “Handling errors in an SQL procedure” on page 559
v “Examples of SQL procedures” on page 561
v “Preparing an SQL procedure” on page 563
For information on the syntax of the CREATE PROCEDURE statement and the
procedure body, see DB2 SQL Reference.
An external stored procedure definition and an SQL procedure definition specify the
following common information:
v The procedure name.
v Input and output parameter attributes.
v The language in which the procedure is written. For an SQL procedure, the
language is SQL.
v Information that will be used when the procedure is called, such as run-time
options, length of time that the procedure can run, and whether the procedure
returns result sets.
An external stored procedure and an SQL procedure share the same rules for the
use of COMMIT and ROLLBACK statements in a procedure. For information about
the restrictions for the use of these statements and their effect, see “Using COMMIT
and ROLLBACK statements in a stored procedure” on page 544.
An external stored procedure and an SQL procedure differ in the way that they
specify the code for the stored procedure. An external stored procedure definition
specifies the name of the stored procedure program. An SQL procedure definition
contains the source code for the stored procedure.
For an external stored procedure, you define the stored procedure to DB2 by
executing the CREATE PROCEDURE statement. You change the definition of the
stored procedure by executing the ALTER PROCEDURE statement. For an SQL
procedure, you define the stored procedure to DB2 by preprocessing a CREATE
PROCEDURE statement, then executing the CREATE PROCEDURE statement
statically or dynamically. As with an external stored procedure, you change the
definition by executing the ALTER PROCEDURE statement. You cannot change the
procedure body with the ALTER PROCEDURE statement. See “Preparing an SQL
procedure” on page 563 for more information on defining an SQL procedure to DB2.
See the discussion of the procedure body in DB2 SQL Reference for detailed
descriptions and syntax of each of these statements.
The general form of a declaration for an SQL variable that you use as a result set
locator is:
DECLARE SQL-variable-name RESULT_SET_LOCATOR VARYING;
You can perform any operations on SQL variables that you can perform on host
variables in SQL statements.
Qualifying SQL variable names and other object names is a good way to avoid
ambiguity. Use the following guidelines to determine when to qualify variable
names:
Important
The way that DB2 determines the qualifier for unqualified names might change
in the future. To avoid changing your code later, qualify all SQL variable
names.
In general, the way that a handler works is that when an error occurs that matches
condition, SQL-procedure-statement executes. When SQL-procedure-statement
completes, DB2 performs the action that is indicated by handler-type.
Example: CONTINUE handler: This handler sets flag at_end when no more rows
satisfy a query. The handler then causes execution to continue after the statement
that returned no rows.
DECLARE CONTINUE HANDLER FOR NOT FOUND SET at_end=1;
Example: EXIT handler: This handler places the string 'Table does not exist' into
output parameter OUT_BUFFER when condition NO_TABLE occurs. NO_TABLE is
previously declared as SQLSTATE 42704 (name is an undefined name). The
handler then causes the SQL procedure to exit the compound statement in which
the handler is declared.
DECLARE NO_TABLE CONDITION FOR '42704';
.
.
.
If you want to pass the SQLCODE or SQLSTATE values to the caller, your SQL
procedure definition needs to include output parameters for those values. After an
error occurs, and before control returns to the caller, you can assign the value of
SQLCODE or SQLSTATE to the corresponding output parameter. For example, you
might include assignment statements in an SQLEXCEPTION handler to assign the
SQLCODE value to an output parameter:
CREATE PROCEDURE UPDATESALARY1
(IN EMPNUMBR CHAR(6),
OUT SQLCPARM INTEGER)
LANGUAGE SQL
.
.
.
BEGIN
DECLARE SQLCODE INTEGER;
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
SET SQLCPARM = SQLCODE;
.
.
.
Because the IF statement is true, the SQLCODE value is reset to zero, and you
lose the previous SQLCODE value.
Example: CASE statement: The following SQL procedure demonstrates how to use
a CASE statement. The procedure receives an employee's ID number and rating as
input parameters. The CASE statement modifies the employee's salary and bonus,
using a different UPDATE statement for each of the possible ratings.
CREATE PROCEDURE UPDATESALARY2
(IN EMPNUMBR CHAR(6),
IN RATING INT)
LANGUAGE SQL
MODIFIES SQL DATA
CASE RATING
WHEN 1 THEN
UPDATE CORPDATA.EMPLOYEE
SET SALARY = SALARY * 1.10, BONUS = 1000
WHERE EMPNO = EMPNUMBR;
WHEN 2 THEN
UPDATE CORPDATA.EMPLOYEE
SET SALARY = SALARY * 1.05, BONUS = 500
WHERE EMPNO = EMPNUMBR;
ELSE
UPDATE CORPDATA.EMPLOYEE
SET SALARY = SALARY * 1.03, BONUS = 0
WHERE EMPNO = EMPNUMBR;
END CASE
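The branch logic of the CASE statement above can be sketched outside SQL. The following C fragment (type and function names are invented for illustration) computes the salary factor and bonus that each rating receives:

```c
#include <assert.h>

/* Salary adjustment chosen by the CASE statement: rating 1 gets a 10%
   raise and a 1000 bonus, rating 2 gets 5% and 500, anything else
   gets 3% and no bonus. */
struct adjustment { double salary_factor; double bonus; };

static struct adjustment adjust_for_rating(int rating)
{
    struct adjustment a;
    switch (rating) {
    case 1:  a.salary_factor = 1.10; a.bonus = 1000; break;
    case 2:  a.salary_factor = 1.05; a.bonus = 500;  break;
    default: a.salary_factor = 1.03; a.bonus = 0;    break;
    }
    return a;
}
```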
If any SQL statement in the procedure body receives a negative SQLCODE, the
SQLEXCEPTION handler receives control. This handler sets output parameter
DEPTSALARY to NULL and ends execution of the SQL procedure. When this
handler is invoked, the SQLCODE and SQLSTATE are set to 0.
CREATE PROCEDURE RETURNDEPTSALARY
(IN DEPTNUMBER CHAR(3),
OUT DEPTSALARY DECIMAL(15,2),
OUT DEPTBONUSCNT INT)
LANGUAGE SQL
READS SQL DATA
P1: BEGIN
DECLARE EMPLOYEE_SALARY DECIMAL(9,2);
DECLARE EMPLOYEE_BONUS DECIMAL(9,2);
DECLARE TOTAL_SALARY DECIMAL(15,2) DEFAULT 0;
DECLARE BONUS_CNT INT DEFAULT 0;
DECLARE END_TABLE INT DEFAULT 0;
DECLARE C1 CURSOR FOR
SELECT SALARY, BONUS FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = DEPTNUMBER;
DECLARE CONTINUE HANDLER FOR NOT FOUND
SET END_TABLE = 1;
DECLARE EXIT HANDLER FOR SQLEXCEPTION
SET DEPTSALARY = NULL;
OPEN C1;
FETCH C1 INTO EMPLOYEE_SALARY, EMPLOYEE_BONUS;
WHILE END_TABLE = 0 DO
SET TOTAL_SALARY = TOTAL_SALARY + EMPLOYEE_SALARY + EMPLOYEE_BONUS;
IF EMPLOYEE_BONUS > 0 THEN
SET BONUS_CNT = BONUS_CNT + 1;
END IF;
FETCH C1 INTO EMPLOYEE_SALARY, EMPLOYEE_BONUS;
END WHILE;
CLOSE C1;
SET DEPTSALARY = TOTAL_SALARY;
SET DEPTBONUSCNT = BONUS_CNT;
END P1
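The fetch loop in RETURNDEPTSALARY reduces to a simple aggregation: total each row's salary plus bonus, and count the rows with a nonzero bonus. The following C sketch (names are illustrative; arrays stand in for the fetched rows) shows the same computation:

```c
#include <assert.h>

/* Department totals accumulated by the WHILE loop in RETURNDEPTSALARY. */
struct totals { double dept_salary; int bonus_cnt; };

static struct totals sum_department(const double salary[],
                                    const double bonus[], int nrows)
{
    struct totals t = { 0.0, 0 };
    for (int i = 0; i < nrows; i++) {      /* one FETCH per row */
        t.dept_salary += salary[i] + bonus[i];
        if (bonus[i] > 0)
            t.bonus_cnt++;
    }
    return t;
}
```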
There are three methods available for preparing an SQL procedure to run:
v Using IBM DB2 Stored Procedure Builder, which runs on Windows NT, Windows
95, or Windows 98.
v Using JCL. See “Using JCL to prepare an SQL procedure” on page 564.
v Using the DB2 for OS/390 and z/OS SQL procedure processor. See “Using the
DB2 for OS/390 and z/OS SQL procedure processor to prepare an SQL
procedure” on page 564.
To run an SQL procedure, you must call it from a client program, using the SQL
CALL statement. See the description of the CALL statement in Chapter 5 of DB2
SQL Reference for more information.
Using the DB2 for OS/390 and z/OS SQL procedure processor to
prepare an SQL procedure
The SQL procedure processor, DSNTPSMP, is a REXX stored procedure that you
can use to prepare an SQL procedure for execution. You can also use DSNTPSMP
to perform selected steps in the preparation process or delete an existing SQL
procedure.
Environment for calling and running DSNTPSMP: You can invoke DSNTPSMP
only through an SQL CALL statement in an application program or through IBM
DB2 Stored Procedure Builder.
Before you can run DSNTPSMP, you need to perform the following steps to set up
the DSNTPSMP environment:
1. Install the DB2 for OS/390 and z/OS REXX Language Support feature.
Contact your IBM service representative for more information.
2. If you plan to call DSNTPSMP directly, write and prepare an application program
that executes an SQL CALL statement for DSNTPSMP.
See “Writing and preparing an application that calls DSNTPSMP” on page 566
for more information.
If you plan to invoke DSNTPSMP through the IBM DB2 Stored Procedure
Builder, see the following URL for information on installing and using the IBM
DB2 Stored Procedure Builder.
http://www.ibm.com/software/data/db2/os390/spb
Figure 170 shows sample JCL for a startup procedure for the address space in
which DSNTPSMP runs.
Figure 170. Startup procedure for a WLM address space in which DSNTPSMP runs
1 NUMTCB specifies the number of programs that can run concurrently in the
address space. You should always set NUMTCB to 1 to ensure that executions of
DSNTPSMP occur serially.
2 STEPLIB specifies the Language Environment run-time library that DSNTPSMP
uses when it runs.
3 SYSEXEC specifies the library that contains DSNTPSMP.
4 DBRMLIB specifies the library into which DSNTPSMP puts the DBRM that it
generates when it precompiles your SQL procedure.
5 SQLCSRC specifies the library into which DSNTPSMP puts the C source code that
it generates from the SQL procedure source code. This data set should have a
logical record length of 80.
6 SQLLMOD specifies the library into which DSNTPSMP puts the load module that it
generates when it compiles and link-edits your SQL procedure.
7 SQLLIBC specifies the library that contains standard C header files. This library is
used during compilation of the generated C program.
8 SQLLIBL specifies the following libraries, which DSNTPSMP uses when it link-edits
the SQL procedure:
v Language Environment run-time library
v DB2 application load library
v DB2 exit library
v DB2 load library
9 SYSMSGS specifies the library that contains messages that are used by the C
prelink-edit utility.
10 The DD statements that follow describe work file data sets that are used by
DSNTPSMP.
DSNTPSMP parameters:
function
A VARCHAR(20) input parameter that identifies the task that you want
DSNTPSMP to perform. The tasks are:
BUILD
Creates the following objects for an SQL procedure:
v A DBRM, in the data set that DD name SQLDBRM points to
v A load module, in the data set that DD name SQLLMOD points to
v The C language source code for the SQL procedure, in the data set that
DD name SQLCSRC points to
v The stored procedure package
v The stored procedure definition
If you choose the BUILD function, and an SQL procedure with name
SQL-procedure-name already exists, DSNTPSMP issues a warning
message and terminates.
DESTROY
Deletes the following objects for an SQL procedure:
v The DBRM, from the data set that DD name SQLDBRM points to
v The load module, from the data set that DD name SQLLMOD points to
v The C language source code for the SQL procedure, from the data set
that DD name SQLCSRC points to
v The stored procedure package
v The stored procedure definition
Result sets that DSNTPSMP returns: When errors occur during DSNTPSMP
execution, DB2 returns a result set that contains messages and listings from each
step that DSNTPSMP performs. To obtain the information from the result set, you
can write your client program to retrieve information from one result set with known
contents. However, for greater flexibility, you might want to write your client program
to retrieve data from an unknown number of result sets with unknown contents.
Both techniques are shown in “Writing a DB2 for OS/390 and z/OS client program
or SQL procedure to receive result sets” on page 602.
Rows in the message result set are ordered by processing step, ddname, and
sequence number.
Function BUILD
Source location String in variable procsrc
Bind options SQLERROR(NOPACKAGE), VALIDATE(RUN), ISOLATION(RR),
RELEASE(COMMIT)
Compiler options SOURCE, LIST, MAR(1,80), LONGNAME, RENT
Precompiler options HOST(SQL), SOURCE, XREF, MAR(1,72), STDSQL(NO)
Prelink-edit options None specified
Link-edit options AMODE=31, RMODE=ANY, MAP, RENT
Run-time options MSGFILE(OUTFILE), RPTSTG(ON), RPTOPTS(ON)
Build schema MYSCHEMA
Build name WLM2PSMP
Function DESTROY
SQL procedure name OLDPROC
Function REBUILD
Source location Member PROCSRC of partitioned data set DSN710.SDSNSAMP
Bind options SQLERROR(NOPACKAGE), VALIDATE(RUN), ISOLATION(RR),
RELEASE(COMMIT)
Compiler options SOURCE, LIST, MAR(1,80), LONGNAME, RENT
Precompiler options HOST(SQL), SOURCE, XREF, MAR(1,72), STDSQL(NO)
Prelink-edit options MAP
Link-edit options AMODE=31, RMODE=ANY, MAP, RENT
Run-time options MSGFILE(OUTFILE), RPTSTG(ON), RPTOPTS(ON)
Function REBIND
SQL procedure name SQLPROC
Bind options VALIDATE(BIND), ISOLATION(RR), RELEASE(DEALLOCATE)
Preparing a program that invokes DSNTPSMP: To prepare the program that calls
DSNTPSMP for execution, you need to perform the following steps:
1. Precompile, compile, and link-edit the application program.
2. Bind a package for the application program.
3. Bind the package for DB2 REXX support, DSNTRXCS.DSNTREXX, and the
package for the application program into a plan.
where :EMP, :PRJ, :ACT, :EMT, :EMS, :EME, :TYPE, and :CODE are host variables
that you have declared earlier in your application program. Your CALL statement
might vary from the above statement in the following ways:
v Instead of passing each of the employee and project parameters separately, you
could pass them together as a host structure. For example, if you define a host
structure like this:
struct {
char EMP[7];
char PRJ[7];
short ACT;
short EMT;
char EMS[11];
char EME[11];
} empstruc;
where :IEMP, :IPRJ, :IACT, :IEMT, :IEMS, :IEME, :ITYPE, and :ICODE are
indicator variables for the parameters.
v You might pass integer or character string constants or the null value to the
stored procedure, as in this example:
EXEC SQL CALL A ('000130', 'IF1000', 90, 1.0, NULL, '1982-10-01',
:TYPE, :CODE);
v You might use a host variable for the name of the stored procedure:
EXEC SQL CALL :procnm (:EMP, :PRJ, :ACT, :EMT, :EMS, :EME,
:TYPE, :CODE);
Assume that the stored procedure name is 'A'. The host variable procnm is a
character variable of length 255 or less that contains the value 'A'. You should
use this technique if you do not know in advance the name of the stored
procedure, but you do know the parameter list convention.
v If you prefer to pass your parameters in a single structure, rather than as
separate host variables, you might use this form:
EXEC SQL CALL A USING DESCRIPTOR :sqlda;
One advantage of using this form is that you can change the encoding scheme
of the stored procedure parameter values. For example, if the subsystem on
which the stored procedure runs has an EBCDIC encoding scheme, and you
want to retrieve data in ASCII CCSID 437, you can specify the desired CCSIDs
for the output parameters in the SQLVAR fields of the SQLDA. The technique for
overriding the CCSIDs of parameters is the same as the technique for overriding
the CCSIDs of variables, which is described in “Changing the CCSID for
retrieved data” on page 519. When you use this technique, the defined encoding
scheme of the parameter must be different from the encoding scheme that you
specify in the SQLDA. Otherwise, no conversion occurs. The defined encoding
scheme for the parameter is the encoding scheme that you specify in the
CREATE PROCEDURE statement, or the default encoding scheme for the
subsystem, if you do not specify an encoding scheme in the CREATE
PROCEDURE statement.
v You might execute the CALL statement by using a host variable name for the
stored procedure with an SQLDA:
EXEC SQL CALL :procnm USING DESCRIPTOR :sqlda;
This form gives you extra flexibility because you can use the same CALL
statement to call different stored procedures with different parameter lists.
Your client program must assign a stored procedure name to the host variable
procnm and load the SQLDA with the parameter information before making the
SQL CALL.
The authorizations you need depend on whether the form of the CALL statement is
CALL literal or CALL :host-variable.
For more information, see the description of the CALL statement in Chapter 5 of
DB2 SQL Reference.
Linkage conventions
When an application executes the CALL statement, DB2 builds a parameter list for
the stored procedure, using the parameters and values provided in the statement.
DB2 obtains information about parameters from the stored procedure definition you
create when you execute CREATE PROCEDURE. Parameters are defined as one
of these types:
IN Input-only parameters, which provide values to the stored procedure
OUT Output-only parameters, which return values from the stored procedure to
the calling program
INOUT
Input/output parameters, which provide values to or return values from the
stored procedure.
If a stored procedure fails to set one or more of the output-only parameters, DB2
does not detect the error in the stored procedure. Instead, DB2 returns the output
parameters to the calling program, with the values established on entry to the
stored procedure.
Initializing output parameters: For a stored procedure that runs locally, you do not
need to initialize the values of output parameters before you call the stored
procedure. However, when you call a stored procedure at a remote location, the
local DB2 cannot determine whether the parameters are input (IN) or output (OUT
or INOUT) parameters. Therefore, you must initialize the values of all output
parameters before you call a stored procedure at a remote location.
It is recommended that you initialize the length of LOB output parameters to zero.
Doing so can improve your performance.
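For example, with a LOB output parameter declared using the C structure form shown in Table 64 on page 598, the initialization amounts to setting the length field to zero before the CALL. The structure size and helper name below are illustrative, not part of DB2:

```c
#include <assert.h>

/* A CLOB output parameter in the C structure form of Table 64. */
struct clob_parm {
    unsigned long length;   /* number of bytes currently in data */
    char data[200];         /* illustrative size */
};

/* Initialize a LOB output parameter before a remote CALL, as
   recommended above: a zero length means no data is shipped on input. */
static void init_lob_out(struct clob_parm *p)
{
    p->length = 0;
}
```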
DB2 supports three parameter list conventions. DB2 chooses the parameter list
convention based on the value of the PARAMETER STYLE parameter in the stored
procedure definition: GENERAL, GENERAL WITH NULLS, or DB2SQL.
v GENERAL WITH NULLS: Use GENERAL WITH NULLS to allow the calling
program to supply a null value for any parameter passed to the stored procedure.
For the GENERAL WITH NULLS linkage convention, the stored procedure must
do the following:
– Declare a variable for each parameter passed in the CALL statement.
– Declare a null indicator structure containing an indicator variable for each
parameter.
– On entry, examine all indicator variables associated with input parameters to
determine which parameters contain null values.
– On exit, assign values to all indicator variables associated with output
variables. An indicator variable for an output variable that returns a null value
to the caller must be assigned a negative number. Otherwise, the indicator
variable must be assigned the value 0.
In the CALL statement, follow each parameter with its indicator variable, using
one of the forms below:
host-variable :indicator-variable
or
host-variable INDICATOR :indicator-variable.
Figure 172 on page 576 shows the structure of the parameter list for
PARAMETER STYLE GENERAL WITH NULLS.
v DB2SQL: Like GENERAL WITH NULLS, option DB2SQL lets you supply a null
value for any parameter that is passed to the stored procedure. In addition, DB2
passes input and output parameters to the stored procedure that contain this
information:
– The SQLSTATE that is to be returned to DB2. This is a CHAR(5) parameter
that can have the same values as those that are returned from a user-defined
function. See “Passing parameter values to and from a user-defined function”
on page 251 for valid SQLSTATE values.
– The qualified name of the stored procedure. This is a VARCHAR(27) value.
– The specific name of the stored procedure. The specific name is a
VARCHAR(18) value that is the same as the unqualified name.
– The SQL diagnostic string that is to be returned to DB2. This is a
VARCHAR(70) value. Use this area to pass descriptive information about an
error or warning to the caller.
Figure 173 on page 577 shows the structure of the parameter list for
PARAMETER STYLE DB2SQL.
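The DB2SQL parameter list just described has a predictable size: 2n+4 entries for n parameters, plus one more if DBINFO is passed. The following C helper (hypothetical, for illustration; the name and signature are not part of DB2) computes the argc value that a C main-program stored procedure receives, counting argv[0]:

```c
#include <assert.h>

/* argv entries under PARAMETER STYLE DB2SQL: the program name, n
   parameters, n indicator variables, the SQLSTATE, the qualified and
   specific procedure names, the diagnostic string, and optionally
   the DBINFO structure. */
static int db2sql_argc(int nparms, int has_dbinfo)
{
    return 1 + 2 * nparms + 4 + (has_dbinfo ? 1 : 0);
}
```

With three parameters and DBINFO, this yields the count of 12 shown in the tally of the later C example.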
For these examples, assume that a COBOL application has the following parameter
declarations and CALL statement:
************************************************************
* PARAMETERS FOR THE SQL STATEMENT CALL *
************************************************************
01 V1 PIC S9(9) USAGE COMP.
01 V2 PIC X(9).
.
.
.
In the CREATE PROCEDURE statement, the parameters are defined like this:
IN V1 INT, OUT V2 CHAR(9)
Figure 174, Figure 175, Figure 176, and Figure 177 show how a stored procedure in
each language receives these parameters.
*******************************************************************
* GET THE PASSED PARAMETER VALUES. THE GENERAL LINKAGE CONVENTION*
* FOLLOWS THE STANDARD ASSEMBLER LINKAGE CONVENTION: *
* ON ENTRY, REGISTER 1 POINTS TO A LIST OF POINTERS TO THE *
* PARAMETERS. *
*******************************************************************
L R7,0(R1) GET POINTER TO V1
MVC LOCV1(4),0(R7) MOVE VALUE INTO LOCAL COPY OF V1
.
.
.
CEETERM RC=0
*******************************************************************
* VARIABLE DECLARATIONS AND EQUATES *
*******************************************************************
R1 EQU 1 REGISTER 1
R7 EQU 7 REGISTER 7
PPA CEEPPA , CONSTANTS DESCRIBING THE CODE BLOCK
LTORG , PLACE LITERAL POOL HERE
PROGAREA DSECT
ORG *+CEEDSASZ LEAVE SPACE FOR DSA FIXED PART
LOCV1 DS F LOCAL COPY OF PARAMETER V1
LOCV2 DS CL9 LOCAL COPY OF PARAMETER V2
.
.
.
/***************************************************************/
/* Get the passed parameters. The GENERAL linkage convention */
/* follows the standard C language parameter passing */
/* conventions: */
/* - argc contains the number of parameters passed */
/* - argv[0] is a pointer to the stored procedure name */
/* - argv[1] to argv[n] are pointers to the n parameters */
/* in the SQL statement CALL. */
/***************************************************************/
if(argc==3) /* Should get 3 parameters: */
{ /* procname, V1, V2 */
locv1 = *(int *) argv[1];
/* Get local copy of V1 */
.
.
.
strcpy(argv[2],locv2);
/* Assign a value to V2 */
.
.
.
}
}
DATA DIVISION.
.
.
.
LINKAGE SECTION.
************************************************************
* DECLARE THE PARAMETERS PASSED BY THE SQL STATEMENT *
* CALL HERE. *
************************************************************
01 V1 PIC S9(9) USAGE COMP.
01 V2 PIC X(9).
.
.
.
****************************************
* ASSIGN A VALUE TO OUTPUT VARIABLE V2 *
****************************************
MOVE '123456789' TO V2.
*PROCESS SYSTEM(MVS);
A: PROC(V1, V2) OPTIONS(MAIN NOEXECOPS REENTRANT);
/***************************************************************/
/* Code for a PL/I language stored procedure that uses the */
/* GENERAL linkage convention. */
/***************************************************************/
/***************************************************************/
/* Indicate on the PROCEDURE statement that two parameters */
/* were passed by the SQL statement CALL. Then declare the */
/* parameters below. */
/***************************************************************/
DCL V1 BIN FIXED(31),
V2 CHAR(9);
.
.
.
For these examples, assume that a C application has the following parameter
declarations and CALL statement:
/************************************************************/
/* Parameters for the SQL statement CALL */
/************************************************************/
long int v1;
char v2[10]; /* Allow an extra byte for */
/* the null terminator */
/************************************************************/
/* Indicator structure */
/************************************************************/
struct indicators {
short int ind1;
short int ind2;
} indstruc;
.
.
.
In the CREATE PROCEDURE statement, the parameters are defined like this:
IN V1 INT, OUT V2 CHAR(9)
Figure 178, Figure 179, Figure 180, and Figure 181 show how a stored procedure in
each language receives these parameters.
*******************************************************************
* GET THE PASSED PARAMETER VALUES. THE GENERAL WITH NULLS LINKAGE*
* CONVENTION IS AS FOLLOWS: *
* ON ENTRY, REGISTER 1 POINTS TO A LIST OF POINTERS. IF N *
* PARAMETERS ARE PASSED, THERE ARE N+1 POINTERS. THE FIRST *
* N POINTERS ARE THE ADDRESSES OF THE N PARAMETERS, JUST AS *
* WITH THE GENERAL LINKAGE CONVENTION. THE N+1ST POINTER IS *
* THE ADDRESS OF A LIST CONTAINING THE N INDICATOR VARIABLE *
* VALUES. *
*******************************************************************
L R7,0(R1) GET POINTER TO V1
MVC LOCV1(4),0(R7) MOVE VALUE INTO LOCAL COPY OF V1
L R7,8(R1) GET POINTER TO INDICATOR ARRAY
MVC LOCIND(2*2),0(R7) MOVE VALUES INTO LOCAL STORAGE
LH R7,LOCIND GET INDICATOR VARIABLE FOR V1
LTR R7,R7 CHECK IF IT IS NEGATIVE
BM NULLIN IF SO, V1 IS NULL
.
.
.
CEETERM RC=0
*******************************************************************
* VARIABLE DECLARATIONS AND EQUATES *
*******************************************************************
R1 EQU 1 REGISTER 1
R7 EQU 7 REGISTER 7
PPA CEEPPA , CONSTANTS DESCRIBING THE CODE BLOCK
LTORG , PLACE LITERAL POOL HERE
PROGAREA DSECT
ORG *+CEEDSASZ LEAVE SPACE FOR DSA FIXED PART
LOCV1 DS F LOCAL COPY OF PARAMETER V1
LOCV2 DS CL9 LOCAL COPY OF PARAMETER V2
LOCIND DS 2H LOCAL COPY OF INDICATOR ARRAY
.
.
.
/***************************************************************/
/* Get the passed parameters. The GENERAL WITH NULLS linkage */
/* convention is as follows: */
/* - argc contains the number of parameters passed */
/* - argv[0] is a pointer to the stored procedure name */
/* - argv[1] to argv[n] are pointers to the n parameters */
/* in the SQL statement CALL. */
/* - argv[n+1] is a pointer to the indicator variable array */
/***************************************************************/
if(argc==4) /* Should get 4 parameters: */
{ /* procname, V1, V2, */
/* indicator variable array */
locv1 = *(int *) argv[1];
/* Get local copy of V1 */
tempint = argv[3]; /* Get pointer to indicator */
/* variable array */
locind[0] = *tempint;
/* Get 1st indicator variable */
locind[1] = *(++tempint);
/* Get 2nd indicator variable */
if(locind[0]<0) /* If 1st indicator variable */
{ /* is negative, V1 is null */
.
.
.
}
.
.
.
strcpy(argv[2],locv2);
/* Assign a value to V2 */
*(++tempint) = 0; /* Assign 0 to V2's indicator */
/* variable */
}
}
DATA DIVISION.
.
.
.
LINKAGE SECTION.
************************************************************
* DECLARE THE PARAMETERS AND THE INDICATOR ARRAY THAT *
* WERE PASSED BY THE SQL STATEMENT CALL HERE. *
************************************************************
01 V1 PIC S9(9) USAGE COMP.
01 V2 PIC X(9).
*
01 INDARRAY.
10 INDVAR PIC S9(4) USAGE COMP OCCURS 2 TIMES.
.
.
.
***************************
* TEST WHETHER V1 IS NULL *
***************************
IF INDARRAY(1) < 0
PERFORM NULL-PROCESSING.
.
.
.
****************************************
* ASSIGN A VALUE TO OUTPUT VARIABLE V2 *
* AND ITS INDICATOR VARIABLE *
****************************************
MOVE '123456789' TO V2.
MOVE ZERO TO INDARRAY(2).
For these examples, assume that a C application has the following parameter
declarations and CALL statement:
/************************************************************/
/* Parameters for the SQL statement CALL */
/************************************************************/
long int v1;
char v2[10]; /* Allow an extra byte for */
/* the null terminator */
/************************************************************/
/* Indicator variables */
/************************************************************/
short int ind1;
short int ind2;
.
.
.
In the CREATE PROCEDURE statement, the parameters are defined like this:
IN V1 INT, OUT V2 CHAR(9)
Figure 182, Figure 183, Figure 184, Figure 185, and Figure 186 show how a stored
procedure in each language receives these parameters.
*******************************************************************
* CODE FOR AN ASSEMBLER LANGUAGE STORED PROCEDURE THAT USES *
* THE DB2SQL LINKAGE CONVENTION. *
*******************************************************************
B CEEENTRY AUTO=PROGSIZE,MAIN=YES,PLIST=OS
USING PROGAREA,R13
*******************************************************************
* BRING UP THE LANGUAGE ENVIRONMENT. *
*******************************************************************
.
.
.
*******************************************************************
* GET THE PASSED PARAMETER VALUES. THE DB2SQL LINKAGE *
* CONVENTION IS AS FOLLOWS: *
* ON ENTRY, REGISTER 1 POINTS TO A LIST OF POINTERS. IF N *
* PARAMETERS ARE PASSED, THERE ARE 2N+4 POINTERS. THE FIRST *
* N POINTERS ARE THE ADDRESSES OF THE N PARAMETERS, JUST AS *
* WITH THE GENERAL LINKAGE CONVENTION. THE NEXT N POINTERS ARE *
* THE ADDRESSES OF THE INDICATOR VARIABLE VALUES. THE LAST *
* 4 POINTERS (5, IF DBINFO IS PASSED) ARE THE ADDRESSES OF *
* INFORMATION ABOUT THE STORED PROCEDURE ENVIRONMENT AND *
* EXECUTION RESULTS. *
*******************************************************************
L R7,0(R1) GET POINTER TO V1
MVC LOCV1(4),0(R7) MOVE VALUE INTO LOCAL COPY OF V1
L R7,8(R1) GET POINTER TO 1ST INDICATOR VARIABLE
MVC LOCI1(2),0(R7) MOVE VALUE INTO LOCAL STORAGE
L R7,20(R1) GET POINTER TO STORED PROCEDURE NAME
MVC LOCSPNM(20),0(R7) MOVE VALUE INTO LOCAL STORAGE
L R7,24(R1) GET POINTER TO DBINFO
MVC LOCDBINF(DBINFLN),0(R7)
* MOVE VALUE INTO LOCAL STORAGE
LH R7,LOCI1 GET INDICATOR VARIABLE FOR V1
LTR R7,R7 CHECK IF IT IS NEGATIVE
BM NULLIN IF SO, V1 IS NULL
.
.
.
CEETERM RC=0
main(argc,argv)
int argc;
char *argv[];
{
int parm1;
short int ind1;
char p_proc[28];
char p_spec[19];
/***************************************************/
/* Assume that the SQL CALL statment included */
/* 3 input/output parameters in the parameter list.*/
/* The argv vector will contain these entries: */
/* argv[0] 1 contains load module */
/* argv[1-3] 3 input/output parms */
/* argv[4-6] 3 null indicators */
/* argv[7] 1 SQLSTATE variable */
/* argv[8] 1 qualified proc name */
/* argv[9] 1 specific proc name */
/* argv[10] 1 diagnostic string */
/* argv[11] + 1 dbinfo */
/* ------ */
/* 12 for the argc variable */
/***************************************************/
if (argc != 12) {
.
.
.
Figure 183. An example of DB2SQL linkage for a C stored procedure written as a main
program (Part 1 of 2)
Figure 183. An example of DB2SQL linkage for a C stored procedure written as a main
program (Part 2 of 2)
/***************************************************/
/* Copy each of the parameters in the parameter */
/* list into a local variable, just to demonstrate */
/* how the parameters can be referenced. */
/***************************************************/
l_p1 = *parm1;
strcpy(l_p2,parm2);
l_ind1 = *p_ind1;
l_ind2 = *p_ind2;
strcpy(l_sqlstate,p_sqlstate);
strcpy(l_proc,p_proc);
strcpy(l_spec,p_spec);
strcpy(l_diag,p_diag);
memcpy(&lsp_dbinfo,sp_dbinfo,sizeof(lsp_dbinfo));
.
.
.
DATA DIVISION.
.
.
LINKAGE SECTION.
* Declare each of the input parameters
01 PARM1 ...
01 PARM2 ...
.
.
.
in your source code. This option is not applicable to other platforms, however. If you
plan to use a C stored procedure on other platforms besides MVS, use conditional
compilation, as shown in Figure 187, to include this option only when you compile
on MVS.
#ifdef MVS
#pragma runopts(PLIST(OS))
#endif
-- or --
#ifndef WKSTN
#pragma runopts(PLIST(OS))
#endif
For information on specifying PL/I compile-time and run-time options, see IBM
Enterprise PL/I for z/OS and OS/390 Programming Guide.
For example, suppose that a stored procedure that is defined with the GENERAL
linkage convention takes one integer input parameter and one character output
parameter of length 6000. You do not want to pass the 6000 byte storage area to
the stored procedure. A PL/I program containing these statements passes only two
bytes to the stored procedure for the output variable and receives all 6000 bytes
from the stored procedure:
DCL INTVAR BIN FIXED(31); /* This is the input variable */
DCL BIGVAR(6000); /* This is the output variable */
DCL I1 BIN FIXED(15); /* This is an indicator variable */
.
.
For languages other than REXX: For all data types except LOBs, ROWIDs, and
locators, see the tables listed in Table 62 for the host data types that are compatible
with the data types in the stored procedure definition. For LOBs, ROWIDs, and
locators, see Table 63, Table 64 on page 598, Table 65 on page 599, and
Table 66 on page 600.
For REXX: See “Calling a stored procedure from a REXX Procedure” on page 608
for information on DB2 data types and corresponding parameter formats.
Table 62. Listing of tables of compatible data types
Language Compatible data types table
Assembler Table 8 on page 115
C Table 10 on page 133
COBOL Table 13 on page 156
PL/I Table 17 on page 184
Table 63. Compatible assembler language declarations for LOBs, ROWIDs, and locators
SQL data type in definition Assembler declaration
TABLE LOCATOR, BLOB LOCATOR,
CLOB LOCATOR, DBCLOB LOCATOR DS FL4
Table 64. Compatible C language declarations for LOBs, ROWIDs, and locators
SQL data type in definition C declaration
TABLE LOCATOR, BLOB LOCATOR,
CLOB LOCATOR, DBCLOB LOCATOR unsigned long
BLOB(n) struct
{unsigned long length;
char data[n];
} var;
CLOB(n) struct
{unsigned long length;
char var_data[n];
} var;
DBCLOB(n) struct
{unsigned long length;
wchar_t data[n];
} var;
ROWID struct {
short int length;
char data[40];
} var;
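The C declarations in Table 64 are ordinary C structures and can be exercised outside DB2. The following sketch models a CLOB(200) host variable; the struct tag, helper name, and the fixed length 200 are assumptions for illustration only, not part of any DB2 interface.

```c
#include <string.h>

/* Assumed fixed length for the sketch; in Table 64, n is the
   declared CLOB length. */
enum { CLOB_N = 200 };

/* Mirrors the CLOB(n) layout in Table 64: a 4-byte length prefix
   followed by the data bytes. */
struct clob200 {
    unsigned long length;
    char var_data[CLOB_N];
};

/* Copy a C string into the host variable, truncating if it does not
   fit, and record the number of bytes used in the length prefix. */
unsigned long clob200_set(struct clob200 *v, const char *s) {
    size_t len = strlen(s);
    if (len > CLOB_N)
        len = CLOB_N;
    memcpy(v->var_data, s, len);
    v->length = (unsigned long)len;
    return v->length;
}
```

DB2 reads the length prefix to determine how many of the data bytes are meaningful, which is why the helper updates the length and the data together.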
49 FILLER
PIC X(mod(n,32767)).
CLOB(n) If n <= 32767:
01 var.
49 var-LENGTH PIC 9(9)
USAGE COMP.
49 var-DATA PIC X(n).
If n > 32767:
01 var.
02 var-LENGTH PIC S9(9)
USAGE COMP.
02 var-DATA.
49 FILLER
PIC X(32767).
49 FILLER
PIC X(32767).
.
.
.
49 FILLER
PIC X(mod(n,32767)).
49 FILLER
PIC G(mod(n,32767))
USAGE DISPLAY-1.
ROWID 01 var.
49 var-LEN PIC 9(4)
USAGE COMP.
49 var-DATA PIC X(40).
Table 66. Compatible PL/I declarations for LOBs, ROWIDs, and locators
SQL data type in definition PL/I
TABLE LOCATOR, BLOB LOCATOR,
CLOB LOCATOR, DBCLOB LOCATOR BIN FIXED(31)
BLOB(n) If n <= 32767:
01 var,
03 var_LENGTH
BIN FIXED(31),
03 var_DATA
CHAR(n);
If n > 32767:
01 var,
02 var_LENGTH
BIN FIXED(31),
02 var_DATA,
03 var_DATA1(n)
CHAR(32767),
03 var_DATA2
CHAR(mod(n,32767));
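In both the COBOL and PL/I declarations, a LOB longer than 32767 bytes is carried as a series of full 32767-byte pieces plus a final piece of mod(n,32767) bytes. A minimal sketch of that arithmetic (the helper names are assumptions for illustration):

```c
/* An elementary item in these declarations is limited to 32767
   bytes, so a LOB of n bytes is declared as full-size pieces plus
   one remainder piece. */
enum { PIECE = 32767 };

long lob_full_pieces(long n) { return n / PIECE; }   /* 32767-byte pieces  */
long lob_remainder(long n)   { return n % PIECE; }   /* mod(n,32767) bytes */
```

For n = 100000 this gives three full pieces and a 1699-byte remainder, matching the repeated FILLER PIC X(32767) items followed by PIC X(mod(n,32767)) in Table 65.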
Writing a DB2 for OS/390 and z/OS client program or SQL procedure to
receive result sets
You can write a program to receive result sets for either of the following
alternatives:
v For a fixed number of result sets, for which you know the contents
This is the only alternative in which you can write an SQL procedure to return
result sets.
v For a variable number of result sets, for which you do not know the contents
The first alternative is simpler to write, but if you use the second alternative, you do
not need to make major modifications to your client program if the stored procedure
changes.
Fetching rows from a result set is the same as fetching rows from a table.
You do not need to connect to the remote location when you execute these
statements:
v DESCRIBE PROCEDURE
v ASSOCIATE LOCATORS
v ALLOCATE CURSOR
v DESCRIBE CURSOR
v FETCH
For the syntax of result set locators in each host language, see “Chapter 9.
Embedding SQL statements in host languages” on page 107. For the syntax of
result set locators in SQL procedures, see Chapter 6 of DB2 SQL Reference. For
the syntax of the ASSOCIATE LOCATORS, DESCRIBE PROCEDURE, ALLOCATE
CURSOR, and DESCRIBE CURSOR statements, see Chapter 5 of DB2 SQL
Reference.
Figure 188 on page 605 and Figure 189 on page 606 show C language code that
accomplishes each of these steps. Coding for other languages is similar. For a
more complete example of a C language program that receives result sets, see
“Examples of using stored procedures” on page 894.
Figure 188 on page 605 demonstrates how you receive result sets when you know
how many result sets are returned and what is in each result set.
/*************************************************************/
/* Call stored procedure P1. */
/* Check for SQLCODE +466, which indicates that result sets */
/* were returned. */
/*************************************************************/
EXEC SQL CALL P1(:parm1, :parm2, ...);
if(SQLCODE==+466)
{
/*************************************************************/
/* Establish a link between each result set and its */
/* locator by using the ASSOCIATE LOCATORS statement.       */
/*************************************************************/
EXEC SQL ASSOCIATE LOCATORS (:loc1, :loc2) WITH PROCEDURE P1;
.
.
.
/*************************************************************/
/* Associate a cursor with each result set. */
/*************************************************************/
EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :loc1;
EXEC SQL ALLOCATE C2 CURSOR FOR RESULT SET :loc2;
/*************************************************************/
/* Fetch the result set rows into host variables. */
/*************************************************************/
while(SQLCODE==0)
{
EXEC SQL FETCH C1 INTO :order_no, :cust_no;
.
.
.
}
while(SQLCODE==0)
{
EXEC SQL FETCH C2 INTO :order_no, :item_no, :quantity;
.
.
.
}
}
Figure 189 on page 606 demonstrates how you receive result sets when you do not
know how many result sets are returned or what is in each result set.
/*************************************************************/
/* Call stored procedure P2. */
/* Check for SQLCODE +466, which indicates that result sets */
/* were returned. */
/*************************************************************/
EXEC SQL CALL P2(:parm1, :parm2, ...);
if(SQLCODE==+466)
{
/*************************************************************/
/* Determine how many result sets P2 returned, using the */
/* statement DESCRIBE PROCEDURE. :proc_da is an SQLDA */
/* with enough storage to accommodate up to three SQLVAR */
/* entries. */
/*************************************************************/
EXEC SQL DESCRIBE PROCEDURE P2 INTO :proc_da;
.
.
.
/*************************************************************/
/* Now that you know how many result sets were returned, */
/* establish a link between each result set and its */
/* locator by using the ASSOCIATE LOCATORS statement.        */
/* For this example, we assume that three result sets are    */
/* returned.                                                 */
/*************************************************************/
EXEC SQL ASSOCIATE LOCATORS (:loc1, :loc2, :loc3) WITH PROCEDURE P2;
.
.
.
/*************************************************************/
/* Associate a cursor with each result set. */
/*************************************************************/
EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :loc1;
EXEC SQL ALLOCATE C2 CURSOR FOR RESULT SET :loc2;
EXEC SQL ALLOCATE C3 CURSOR FOR RESULT SET :loc3;
/*************************************************************/
/* Assign values to the SQLDATA and SQLIND fields of the */
/* SQLDAs that you used in the DESCRIBE CURSOR statements. */
/* These values are the addresses of the host variables and */
/* indicator variables into which DB2 will put result set */
/* rows. */
/*************************************************************/
.
.
.
/*************************************************************/
/* Fetch the result set rows into the storage areas */
/* that the SQLDAs point to. */
/*************************************************************/
while(SQLCODE==0)
{
EXEC SQL FETCH C1 USING :res_da1;
.
.
.
}
while(SQLCODE==0)
{
EXEC SQL FETCH C2 USING :res_da2;
.
.
.
}
while(SQLCODE==0)
{
EXEC SQL FETCH C3 USING :res_da3;
.
.
.
}
}
Figure 190 on page 608 demonstrates how you can use an SQL procedure to
receive result sets.
CALL TARGETPROCEDURE();
ASSOCIATE RESULT SET LOCATORS(RESULT1,RESULT2)
WITH PROCEDURE TARGETPROCEDURE;
ALLOCATE RSCUR1 CURSOR FOR RESULT1;
ALLOCATE RSCUR2 CURSOR FOR RESULT2;
WHILE AT_END = 0 DO
FETCH RSCUR1 INTO VAR1;
SET TOTAL1 = TOTAL1 + VAR1;
END WHILE;
WHILE AT_END = 0 DO
FETCH RSCUR2 INTO VAR2;
SET TOTAL2 = TOTAL2 + VAR2;
END WHILE;
.
.
.
Figure 191 on page 610 demonstrates how a REXX procedure calls the stored
procedure in Figure 167 on page 552. The REXX procedure performs the following
actions:
v Connects to the DB2 subsystem that was specified by the REXX procedure
invoker.
v Calls the stored procedure to execute a DB2 command that was specified by the
REXX procedure invoker.
v Retrieves rows from a result set that contains the command output messages.
PROC = 'COMMAND'
RESULTSIZE = 32703
RESULT = LEFT(' ',RESULTSIZE,' ')
/****************************************************************/
/* Call the stored procedure that executes the DB2 command. */
/* The input variable (COMMAND) contains the DB2 command. */
/* The output variable (RESULT) will contain the return area */
/* from the IFI COMMAND call after the stored procedure */
/* executes. */
/****************************************************************/
ADDRESS DSNREXX "EXECSQL" ,
"CALL" PROC "(:COMMAND, :RESULT)"
IF SQLCODE < 0 THEN CALL SQLCA
Figure 191. Example of a REXX procedure that calls a stored procedure (Part 1 of 3)
DO I = 1 TO SQLDA.SQLD
SAY "SQLDA."I".SQLNAME ="SQLDA.I.SQLNAME";"
SAY "SQLDA."I".SQLTYPE ="SQLDA.I.SQLTYPE";"
SAY "SQLDA."I".SQLLOCATOR ="SQLDA.I.SQLLOCATOR";"
SAY "SQLDA."I".SQLESTIMATE="SQLDA.I.SQLESTIMATE";"
END I
/****************************************************************/
/* Set up a cursor to retrieve the rows from the result */
/* set. */
/****************************************************************/
ADDRESS DSNREXX "EXECSQL ASSOCIATE LOCATOR (:RESULT) WITH PROCEDURE :PROC"
IF SQLCODE ¬= 0 THEN CALL SQLCA
SAY RESULT
ADDRESS DSNREXX "EXECSQL ALLOCATE C101 CURSOR FOR RESULT SET :RESULT"
IF SQLCODE ¬= 0 THEN CALL SQLCA
CURSOR = 'C101'
ADDRESS DSNREXX "EXECSQL DESCRIBE CURSOR :CURSOR INTO :SQLDA"
IF SQLCODE ¬= 0 THEN CALL SQLCA
/****************************************************************/
/* Retrieve and display the rows from the result set, which */
/* contain the command output message text. */
/****************************************************************/
DO UNTIL(SQLCODE ¬= 0)
ADDRESS DSNREXX "EXECSQL FETCH C101 INTO :SEQNO, :TEXT"
IF SQLCODE = 0 THEN
DO
SAY TEXT
END
END
IF SQLCODE ¬= 0 THEN CALL SQLCA
Figure 191. Example of a REXX procedure that calls a stored procedure (Part 2 of 3)
RETURN
/****************************************************************/
/* Routine to display the SQLCA */
/****************************************************************/
SQLCA:
TRACE O
SAY 'SQLCODE ='SQLCODE
SAY 'SQLERRMC ='SQLERRMC
SAY 'SQLERRP ='SQLERRP
SAY 'SQLERRD ='SQLERRD.1',',
|| SQLERRD.2',',
|| SQLERRD.3',',
|| SQLERRD.4',',
|| SQLERRD.5',',
|| SQLERRD.6
Figure 191. Example of a REXX procedure that calls a stored procedure (Part 3 of 3)
Before you can call a stored procedure from your embedded SQL application, you
must bind a package for the client program on the remote system. You can use the
remote DRDA bind capability on your DRDA client system to bind the package to
the remote system.
If you have packages that contain SQL CALL statements that you bound before
DB2 Version 6, you can get better performance from those packages if you rebind
them in DB2 Version 6 or later. Rebinding lets DB2 obtain some information from
the catalog at bind time that it obtained at run time before Version 6. Therefore,
after you rebind your packages, they run more efficiently because DB2 can do
fewer catalog searches at run time.
An MVS client can bind the DBRM to a remote server by specifying a location
name on the command BIND PACKAGE. For example, suppose you want a client
program to call a stored procedure at location LOCA. You precompile the program
to produce DBRM A. Then you can use the command
BIND PACKAGE (LOCA.COLLA) MEMBER(A)
The plan for the package resides only at the client system.
DB2 runs stored procedures under the DB2 thread of the calling application, making
the stored procedures part of the caller’s unit of work.
If both the client and server application environments support two-phase commit,
the coordinator controls updates between the application, the server, and the stored
procedures. If either side does not support two-phase commit, updates will fail.
DB2 uses schema names from the CURRENT PATH special register.
2. When DB2 finds a stored procedure definition, DB2 executes that stored
procedure if the following conditions are true:
v The caller is authorized to execute the stored procedure.
v The stored procedure has the same number of parameters as in the CALL
statement.
If both conditions are not true, DB2 continues to go through the list of schemas
until it finds a stored procedure that meets both conditions or reaches the end of
the list.
3. If DB2 cannot find a suitable stored procedure, it returns an SQL error code for
the CALL statement.
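The resolution steps above amount to a first-match search down the schema list in CURRENT PATH. The sketch below restates them in C; the structure and function names are assumptions for illustration, not DB2 internals.

```c
#include <stddef.h>
#include <string.h>

/* Assumed catalog entry: a schema-qualified procedure plus the two
   facts DB2 checks in step 2. */
struct proc_def {
    const char *schema;
    const char *name;
    int num_params;
    int caller_authorized;
};

/* Walk the CURRENT PATH schema list; return the first definition the
   caller may execute whose parameter count matches, or NULL. */
const struct proc_def *resolve(const char *const *path, size_t npath,
                               const struct proc_def *defs, size_t ndefs,
                               const char *call_name, int call_params) {
    for (size_t i = 0; i < npath; i++)
        for (size_t j = 0; j < ndefs; j++)
            if (strcmp(defs[j].schema, path[i]) == 0 &&
                strcmp(defs[j].name, call_name) == 0 &&
                defs[j].caller_authorized &&
                defs[j].num_params == call_params)
                return &defs[j];
    return NULL;  /* step 3: no suitable procedure, SQL error */
}
```

With a path of (TEST, PROD), TEST.PROCX is found before PROD.PROCX, which is exactly the mechanism the PROGY example below relies on.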
For example, suppose that you want to write one program, PROGY, that calls one
of two versions of a stored procedure named PROCX. The load module for both
stored procedures is named SUMMOD. Each version of SUMMOD is in a different
load library. The stored procedures run in different WLM environments, and the
startup JCL for each WLM environment includes a STEPLIB concatenation that
specifies the correct load library for the stored procedure module.
First, define the two stored procedures in different schemas and different WLM
environments:
CREATE PROCEDURE TEST.PROCX(IN V1 INTEGER, OUT CHAR(9))
LANGUAGE C
EXTERNAL NAME SUMMOD
WLM ENVIRONMENT TESTENV;
CREATE PROCEDURE PROD.PROCX(IN V1 INTEGER, OUT CHAR(9))
LANGUAGE C
EXTERNAL NAME SUMMOD
WLM ENVIRONMENT PRODENV;
When you write CALL statements for PROCX in program PROGY, use the
unqualified form of the stored procedure name:
CALL PROCX(V1,V2);
Bind two plans for PROGY. In one BIND statement, specify PATH(TEST). In the
other BIND statement, specify PATH(PROD).
To call TEST.PROCX, execute PROGY with the plan that you bound with
PATH(TEST). To call PROD.PROCX, execute PROGY with the plan that you bound
with PATH(PROD).
To maximize the number of stored procedures that can run concurrently, use the
following guidelines:
v Set REGION size to 0 in startup procedures for the stored procedures address
spaces to obtain the largest possible amount of storage below the 16MB line.
v Limit storage required by application programs below the 16MB line by:
– Link editing programs above the line with AMODE(31) and RMODE(ANY)
attributes
– Using the RENT and DATA(31) compiler options for COBOL programs.
v Limit storage required by IBM Language Environment by using these run-time
options:
– HEAP(,,ANY) to allocate program heap storage above the 16MB line
– STACK(,,ANY,) to allocate program stack storage above the 16MB line
– STORAGE(,,,4K) to reduce reserve storage area below the line to 4KB
– BELOWHEAP(4K,,) to reduce the heap storage below the line to 4KB
– LIBSTACK(4K,,) to reduce the library stack below the line to 4KB
– ALL31(ON) to indicate all programs contained in the stored procedure run with
AMODE(31) and RMODE(ANY).
You can list these options in the RUN OPTIONS parameter of the CREATE
PROCEDURE or ALTER PROCEDURE statement, if they are not Language
Environment installation defaults. For example, the RUN OPTIONS parameter
could specify:
H(,,ANY),STAC(,,ANY,),STO(,,,4K),BE(4K,,),LIBS(4K,,),ALL31(ON)
Consider the following when you develop stored procedures that access non-DB2
resources:
IMS
If your system is not running a release of IMS that uses OS/390 RRS, you can
use one of the following methods to access DL/I data from your stored
procedure:
v Use the CICS EXCI interface to run a CICS transaction synchronously. That
CICS transaction can, in turn, access DL/I data.
v Invoke IMS transactions asynchronously using the MQI.
v Use APPC through the CPI Communications application programming
interface
After you write your COBOL stored procedure and set up the WLM environment,
follow these steps to test the stored procedure with the Debug Tool:
1. When you compile the stored procedure, specify the TEST and SOURCE
options.
Ensure that the source listing is stored in a permanent data set. VisualAge
COBOL displays that source listing during the debug session.
2. When you define the stored procedure, include run-time option TEST with the
suboption VADTCPIP&ipaddr in your RUN OPTIONS argument.
VADTCPIP& tells the Debug Tool that it is interfacing with a workstation that
runs VisualAge COBOL and is configured for TCP/IP communication with your
OS/390 system. ipaddr is the IP address of the workstation on which you
display your debug information. For example, the RUN OPTIONS value in this
stored procedure definition indicates that debug information should go to the
workstation with IP address 9.63.51.17:
CREATE PROCEDURE WLMCOB
(IN INTEGER, INOUT VARCHAR(3000), INOUT INTEGER)
MODIFIES SQL DATA
LANGUAGE COBOL EXTERNAL
PROGRAM TYPE MAIN
WLM ENVIRONMENT WLMENV1
RUN OPTIONS 'POSIX(ON),TEST(,,,VADTCPIP&9.63.51.17:*)'
3. In the JCL startup procedure for WLM-established stored procedures address
space, add the data set name of the Debug Tool load library to the STEPLIB
concatenation. For example, suppose that ENV1PROC is the JCL procedure for
application environment WLMENV1. The modified JCL for ENV1PROC might
look like this:
//DSNWLM PROC RGN=0K,APPLENV=WLMENV1,DB2SSN=DSN,NUMTCB=8
//IEFPROC EXEC PGM=DSNX9WLM,REGION=&RGN,TIME=NOLIMIT,
// PARM='&DB2SSN,&NUMTCB,&APPLENV'
//STEPLIB DD DISP=SHR,DSN=DSN710.RUNLIB.LOAD
// DD DISP=SHR,DSN=CEE.SCEERUN
// DD DISP=SHR,DSN=DSN710.SDSNLOAD
// DD DISP=SHR,DSN=EQAW.SEQAMOD <== DEBUG TOOL
4. On the workstation, start the VisualAge Remote Debugger daemon.
This daemon waits for incoming requests from TCP/IP.
5. Call the stored procedure.
When the stored procedure starts, a window that contains the debug session is
displayed on the workstation. You can then execute Debug Tool commands to
debug the stored procedure.
After you write your C++ stored procedure or SQL procedure and set up the WLM
environment, follow these steps to test the stored procedure with the Distributed
Debugger feature of the C/C++ Productivity Tools for OS/390 and the Debug Tool:
1. When you define the stored procedure, include run-time option TEST with the
suboption VADTCPIP&ipaddr in your RUN OPTIONS argument.
VADTCPIP& tells the Debug Tool that it is interfacing with a workstation that
runs VisualAge C++ and is configured for TCP/IP communication with your
OS/390 system. ipaddr is the IP address of the workstation on which you
display your debug information. For example, this RUN OPTIONS value in a
stored procedure definition indicates that debug information should go to the
workstation with IP address 9.63.51.17:
RUN OPTIONS 'POSIX(ON),TEST(,,,VADTCPIP&9.63.51.17:*)'
2. Precompile the stored procedure.
Ensure that the modified source program that is the output from the precompile
step is in a permanent, catalogued data set. For an SQL procedure, the
modified C source program that is the output from the second precompile step
must be in a permanent, catalogued data set.
3. Compile the output from the precompile step. Specify the TEST, SOURCE, and
OPT(0) compiler options.
4. In the JCL startup procedure for the stored procedures address space, add the
data set name of the Debug Tool load library to the STEPLIB concatenation. For
example, suppose that ENV1PROC is the JCL procedure for application
environment WLMENV1. The modified JCL for ENV1PROC might look like this:
//DSNWLM PROC RGN=0K,APPLENV=WLMENV1,DB2SSN=DSN,NUMTCB=8
//IEFPROC EXEC PGM=DSNX9WLM,REGION=&RGN,TIME=NOLIMIT,
// PARM='&DB2SSN,&NUMTCB,&APPLENV'
//STEPLIB DD DISP=SHR,DSN=DSN710.RUNLIB.LOAD
// DD DISP=SHR,DSN=CEE.SCEERUN
// DD DISP=SHR,DSN=DSN710.SDSNLOAD
// DD DISP=SHR,DSN=EQAW.SEQAMOD <== DEBUG TOOL
5. On the workstation, start the Distributed Debugger daemon.
This daemon waits for incoming requests from TCP/IP.
6. Call the stored procedure.
When the stored procedure starts, a window that contains the debug session is
displayed on the workstation. You can then execute Debug Tool commands to
debug the stored procedure.
starts logging to a file on the workstation called dbgtool.log. This should be the
first command that you enter from the terminal or include in your commands file.
Using CODE/370 in batch mode: To test your stored procedure in batch mode,
you must have the CODE/370 MFI Debug Tool installed on the MVS system where
the stored procedure runs. To debug your stored procedure in batch mode using the
MFI Debug Tool, do the following:
v If you plan to use the Language Environment run-time option TEST to invoke
CODE/370, compile the stored procedure with option TEST. This places
information in the program that the Debug Tool uses during a debugging session.
v Allocate a log data set to receive the output from CODE/370. Put a DD statement
for the log data set in the start-up procedure for the stored procedures address
space.
v Enter commands in a data set that you want CODE/370 to execute. Put a DD
statement for that data set in the start-up procedure for the stored procedures
address space. To define the commands data set to CODE/370, specify the
commands data set name or DD name in the TEST run-time option. For
example,
TEST(ALL,TESTDD,PROMPT,*)
tells CODE/370 to look for the commands in the data set associated with DD
name TESTDD.
If you still have performance problems after you have tried the suggestions in these
sections, there are other, more risky techniques you can use. See “Special
techniques to influence access path selection” on page 660 for information.
Declared lengths of host variables: Make sure that the declared length of any
host variable is no greater than the length attribute of the data column it is
compared to. If the declared length is greater, the predicate is stage 2 and cannot
be a matching predicate for an index scan.
For example, assume that a host variable and an SQL column are defined as
follows:
When n is used, the precision of the host variable is 2n-1. If n = 4 and value =
'123.123', then a predicate such as WHERE COL1 = :MYHOSTV is not a matching
predicate for an index scan because the precisions are different. One way to avoid
an inefficient predicate using decimal host variables is to declare the host variable
without the Ln option:
MYHOSTV DS P'123.123'
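The 2n-1 rule comes from packed-decimal storage: each of the n bytes holds two digits, except the last, which holds one digit and the sign. A one-line sketch (the helper name is an assumption for illustration):

```c
/* Precision, in decimal digits, of an n-byte packed-decimal field:
   two digits per byte, minus one digit displaced by the sign nibble. */
int packed_precision(int n_bytes) { return 2 * n_bytes - 1; }
```

So an L4 host variable has precision 7, which cannot match a column whose precision is 6, such as one holding the value 123.123.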
Consider the following illustration. Assume that there are 1000 rows in
MAIN_TABLE.
SELECT * FROM MAIN_TABLE
WHERE TYPE IN (subquery 1)
AND
PARTS IN (subquery 2);
Assuming that subquery 1 and subquery 2 are the same type of subquery (either
correlated or noncorrelated), DB2 evaluates the subquery predicates in the order
they appear in the WHERE clause. Subquery 1 rejects 10% of the total rows, and
subquery 2 rejects 80% of the total rows.
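Counting predicate evaluations under a simple cost model (an assumption for illustration, not DB2's actual costing) shows why placing the more selective subquery first helps:

```c
/* Hypothetical cost model: every row is tested against the first
   subquery predicate, and only the survivors are tested against the
   second. The more rows the first predicate rejects, the fewer
   evaluations the second predicate costs. */
long evaluations(long rows, double first_reject_fraction) {
    long survivors = (long)(rows * (1.0 - first_reject_fraction) + 0.5);
    return rows + survivors;
}
```

With 1000 rows, writing the 10%-rejecting subquery first costs 1000 + 900 = 1900 subquery evaluations, while writing the 80%-rejecting subquery first costs only 1000 + 200 = 1200.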
If you are in doubt, run EXPLAIN on the query with both a correlated and a
noncorrelated subquery. By examining the EXPLAIN output and understanding your
data distribution and SQL statements, you should be able to determine which form
is more efficient.
This general principle can apply to all types of predicates. However, because
subquery predicates can potentially be thousands of times more processor- and
I/O-intensive than all other predicates, it is most important to make sure they are
coded in the correct order.
Refer to “DB2 predicate manipulation” on page 642 to see in what order DB2 will
evaluate predicates and when you can control the evaluation order.
For column functions to be evaluated during data retrieval, the following conditions
must be met for all column functions in the query:
v There must be no sort needed for GROUP BY. Check this in the EXPLAIN
output.
v There must be no stage 2 (residual) predicates. Check this in your application.
v There must be no distinct set functions such as COUNT(DISTINCT C1).
v If the query is a join, all set functions must be on the last table joined. Check this
by looking at the EXPLAIN output.
v All column functions must be on single columns with no arithmetic expressions.
v The column function is not one of the following column functions:
– STDDEV
– STDDEV_SAMP
– VAR
– VAR_SAMP
If your query involves the functions MAX or MIN, refer to “One-fetch access
(ACCESSTYPE=I1)” on page 692 to see whether your query could take advantage
of that method.
See “Using host variables efficiently” on page 648 for more information.
DB2 might not determine the best access path when your queries include correlated
columns. If you think you have a problem with column correlation, see “Column
correlation” on page 645 for ideas on what to do about it.
If you rewrite the predicate in the following way, DB2 can evaluate it more
efficiently:
In the second form, the column is by itself on one side of the operator, and all the
other values are on the other side of the operator. The expression on the right is
called a noncolumn expression. DB2 can evaluate many predicates with noncolumn
expressions at an earlier stage of processing called stage 1, so the queries take
less time to run.
Example: The query below has three predicates: an equal predicate on C1, a
BETWEEN predicate on C2, and a LIKE predicate on C3.
SELECT * FROM T1
WHERE C1 = 10 AND
C2 BETWEEN 10 AND 20 AND
C3 NOT LIKE 'A%'
Effect on access paths: This section explains the effect of predicates on access
paths. Because SQL allows you to express the same query in different ways,
knowing how predicates affect path selection helps you write queries that access
data efficiently.
Properties of predicates
Predicates in a HAVING clause are not used when selecting access paths; hence,
in this section the term 'predicate' means a predicate after WHERE or ON.
There are special considerations for “Predicates in the ON clause” on page 631.
Predicate types
The type of a predicate depends on its operator or syntax, as listed below. The type
determines what type of processing and filtering occurs when the predicate is
evaluated.
Type Definition
Subquery
Any predicate that includes another SELECT statement. Example: C1 IN
(SELECT C10 FROM TABLE1)
Equal Any predicate that is not a subquery predicate and has an equal operator
and no NOT operator. Also included are predicates of the form C1 IS NULL.
Example: C1=100
Range
Any predicate that is not a subquery predicate and has an operator in the
following list: >, >=, <, <=, LIKE, or BETWEEN. Example: C1>100
IN-list A predicate of the form column IN (list of values). Example: C1 IN (5,10,15)
NOT Any predicate that is not a subquery predicate and contains a NOT
operator. Example: COL1 <> 5 or COL1 NOT BETWEEN 10 AND 20.
Example: Influence of type on access paths: The following two examples show
how the predicate type can influence DB2’s choice of an access path. In each one,
assume that a unique index I1 (C1) exists on table T1 (C1, C2), and that all values
of C1 are positive integers.
The query,
SELECT C1, C2 FROM T1 WHERE C1 >= 0;
has a range predicate. However, the predicate does not eliminate any rows of T1.
Therefore, it could be determined during bind that a table space scan is more
efficient than the index scan.
The query,
SELECT * FROM T1 WHERE C1 = 0;
has an equal predicate. DB2 chooses the index access in this case, because only
one scan is needed to return the result.
Examples: If the employee table has an index on the column LASTNAME, the
following predicate can be a matching predicate:
SELECT * FROM DSN8710.EMP WHERE LASTNAME = 'SMITH';
The first predicate is not stage 1 because the length of the column is shorter
than the length of the constant. The second predicate is not stage 1 because the
data types of the column and constant are not the same.
v Whether DB2 evaluates the predicate before or after a join operation. A predicate
that is evaluated after a join operation is always a stage 2 predicate.
Examples: All indexable predicates are stage 1. The predicate C1 LIKE '%BC' is
also stage 1, but is not indexable.
Effect on access paths: In single index processing, only Boolean term predicates
are chosen for matching predicates. Hence, only indexable Boolean term predicates
are candidates for matching index scans. To match index columns by predicates
that are not Boolean terms, DB2 considers multiple index access.
In join operations, Boolean term predicates can reject rows at an earlier stage than
can non-Boolean term predicates.
For left and right outer joins, and for inner joins, join predicates in the ON clause
are treated the same as other stage 1 and stage 2 predicates. A stage 2 predicate
in the ON clause is treated as a stage 2 predicate of the inner table.
For full outer join, the ON clause is evaluated during the join operation like a stage
2 predicate.
In an outer join, predicates that are evaluated after the join are stage 2 predicates.
Predicates in a table expression can be evaluated before the join and can therefore
be stage 1 predicates.
the predicate “EDLEVEL > 100” is evaluated before the full join and is a stage 1
predicate. For more information on join methods, see “Interpreting access to two or
more tables (join)” on page 693.
The second set of rules describes the order of predicate evaluation within each of
the above stages:
1. All equal predicates (including column IN list, where the list has only one
element).
2. All range predicates and predicates of the form column IS NOT NULL
3. All other predicate types are evaluated.
After both sets of rules are applied, predicates are evaluated in the order in which
they appear in the query. Because you specify that order, you have some control
over the order of evaluation.
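The two rule sets behave like a stable sort on predicate type: equal before range before everything else, with the original query order as the tiebreaker. A sketch (the type codes and names are assumptions for illustration):

```c
#include <stdlib.h>

/* Assumed type codes: lower value = evaluated earlier within a stage. */
enum pred_type { PRED_EQUAL = 0, PRED_RANGE = 1, PRED_OTHER = 2 };

struct pred { enum pred_type type; int query_position; };

/* Order predicates by type; ties keep the query order, mirroring the
   rule that same-type predicates run in the order you wrote them. */
static int pred_cmp(const void *a, const void *b) {
    const struct pred *pa = a, *pb = b;
    if (pa->type != pb->type)
        return (int)pa->type - (int)pb->type;
    return pa->query_position - pb->query_position;
}

void order_predicates(struct pred *p, size_t n) {
    qsort(p, n, sizeof *p, pred_cmp);
}
```

qsort is not stable, so the query position serves as an explicit tiebreaker; that is the part of the ordering you control by how you write the query.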
By using correlation names, the query treats one table as if it were two
separate tables. Therefore, indexes on columns C1 and C2 are considered for
access.
5. If the subquery has already been evaluated for a given correlation value, then
the subquery might not have to be reevaluated.
6. Not indexable or stage 1 if a field procedure exists on that column.
7. Under any of the following circumstances, the predicate is stage 1 and
indexable:
v COL is of type INTEGER or SMALLINT, and expression is of the form:
integer-constant1 arithmetic-operator integer-constant2
v COL is of type DATE, TIME, or TIMESTAMP, and:
– expression is of any of these forms:
datetime-scalar-function(character-constant)
datetime-scalar-function(character-constant) + labeled-duration
datetime-scalar-function(character-constant) - labeled-duration
– The type of datetime-scalar-function(character-constant) matches the
type of COL.
– The numeric part of labeled-duration is an integer.
– character-constant is:
- Greater than 7 characters long for the DATE scalar function; for
example, '1995-11-30'.
- Greater than 14 characters long for the TIMESTAMP scalar function;
for example, '1995-11-30-08.00.00'.
- Any length for the TIME scalar function.
8. The processing for WHERE NOT COL = value is like that for WHERE COL <>
value, and so on.
9. If noncol expr, noncol expr1, or noncol expr2 is a noncolumn expression of
one of these forms, then the predicate is not indexable:
v noncol expr + 0
v noncol expr - 0
v noncol expr * 1
v noncol expr / 1
v noncol expr CONCAT empty string
10. COL, COL1, and COL2 can be the same column or different columns. The
columns can be in the same table or different tables.
11. To ensure that the predicate is indexable and stage 1, make the data type and
length of the column and the data type and length of the result of the
noncolumn expression the same. For example, if the predicate is:
and the scalar function is HEX, SUBSTR, DIGITS, CHAR, or CONCAT, then
the type and length of the result of the scalar function and the type and length
of the column must be the same for the predicate to be indexable and stage 1.
12. Under these circumstances, the predicate is stage 2:
v noncol expr is a case expression.
v noncol expr is the product or the quotient of two noncolumn expressions,
that product or quotient is an integer value, and COL is a FLOAT or a
DECIMAL column.
13. If COL has the ROWID data type, DB2 tries to use direct row access instead
of index access or a table space scan.
14. If COL has the ROWID data type, and an index is defined on COL, DB2 tries
to use direct row access instead of index access.
15. Not indexable and not stage 1 if COL is not null and the noncorrelated
subquery SELECT clause entry can be null.
16. If the columns are numeric columns, they must have the same data type,
length, and precision to be stage 1 and indexable. For character columns, the
columns can be of different types and lengths. For example, predicates with
the following column types and lengths are stage 1 and indexable:
v CHAR(5) and CHAR(20)
v VARCHAR(5) and CHAR(5)
v VARCHAR(5) and CHAR(20)
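Two of the notes above can be sketched in SQL (table and column names here are hypothetical). For note 9, an expression of the form noncol expr + 0 makes the predicate non-indexable, so write the bare form instead; for note 11, keeping the result type and length of the scalar function identical to the column's keeps the predicate stage 1 and indexable:

```sql
-- Note 9: not indexable because of the superfluous + 0
SELECT * FROM T1 WHERE C1 = :HV1 + 0;
-- Indexable equivalent (assuming an index on C1)
SELECT * FROM T1 WHERE C1 = :HV1;

-- Note 11: C1 is assumed to be CHAR(6); SUBSTR(:HV1,1,6) also yields a
-- 6-byte character result, so type and length match on both sides
SELECT * FROM T1 WHERE C1 = SUBSTR(:HV1,1,6);
```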
The following examples of predicates illustrate the general rules shown in Table 68
on page 633. In each case, assume that there is an index on columns
(C1,C2,C3,C4) of the table and that 0 is the lowest value in each column.
v WHERE C1=5 AND C2=7
Both predicates are stage 1 and the compound predicate is indexable. A
matching index scan could be used with C1 and C2 as matching columns.
v WHERE C1=5 AND C2>7
Both predicates are stage 1 and the compound predicate is indexable. A
matching index scan could be used with C1 and C2 as matching columns.
v WHERE C1>5 AND C2=7
Both predicates are stage 1, but only the first matches the index. A matching
index scan could be used with C1 as a matching column.
v WHERE C1=5 OR C2=7
Both predicates are stage 1 but not Boolean terms. The compound is indexable.
When DB2 considers multiple index access for the compound predicate, C1 and
C2 can be matching columns. For single index access, C1 and C2 can be only
index screening columns.
v WHERE C1=5 OR C2<>7
The first predicate is indexable and stage 1, and the second predicate is stage 1
but not indexable. The compound predicate is stage 1 and not indexable.
v WHERE C1>5 OR C2=7
Example: Suppose that DB2 can determine that column C1 of table T contains only
five distinct values: A, D, Q, W and X. In the absence of other information, DB2
estimates that one-fifth of the rows have the value D in column C1. Then the
predicate C1='D' has the filter factor 0.2 for table T.
How DB2 uses filter factors: Filter factors affect the choice of access paths by
estimating the number of rows qualified by a set of predicates.
Values of the third variable, statistics on the column, are kept in the DB2 catalog.
You can update many of those values, either by running the utility RUNSTATS or by
executing UPDATE for a catalog table. For information about using RUNSTATS,
see the discussion of maintaining statistics in the catalog in Part 4 (Volume 1) of
DB2 Administration Guide. For information on updating the catalog manually, see
“Updating catalog statistics” on page 668.
If you intend to update the catalog with statistics of your own choice, you should
understand how DB2 uses:
v “Default filter factors for simple predicates”
v “Filter factors for uniform distributions”
v “Interpolation formulas” on page 639
v “Filter factors for all distributions” on page 640
Example: The default filter factor for the predicate C1 = 'D' is 1/25 (0.04). If D is
actually one of only five distinct values in column C1, the default probably does not
lead to an optimal access path.
Table 69. DB2 default filter factors by predicate type
Predicate Type Filter Factor
Col = literal 1/25
Col IS NULL 1/25
Col IN (literal list) (number of literals)/25
Col Op literal 1/3
Col LIKE literal 1/10
Col BETWEEN literal1 and literal2 1/10
Note:
Op is one of these operators: <, <=, >, >=.
Literal is any constant value that is known at bind time.
Example: If D is one of only five values in column C1, using RUNSTATS will put
the value 5 in column COLCARDF of SYSCOLUMNS. If there are no additional
statistics available, the filter factor for the predicate C1 = 'D' is 1/5 (0.2).
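You can check the value that RUNSTATS recorded by querying the catalog directly (SYSIBM.SYSCOLUMNS and its columns are the real catalog names; the qualifier MYID and the names T1 and C1 are placeholders):

```sql
-- COLCARDF holds the number of distinct values that RUNSTATS found
SELECT COLCARDF
  FROM SYSIBM.SYSCOLUMNS
  WHERE TBCREATOR = 'MYID'
    AND TBNAME = 'T1'
    AND NAME = 'C1';
```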
Table 70. DB2 uniform filter factors by predicate type
Predicate Type Filter Factor
Col = literal 1/COLCARDF
Col IS NULL 1/COLCARDF
Filter factors for other predicate types: The examples selected in Table 69 on
page 638 and Table 70 on page 638 represent only the most common types of
predicates. If P1 is a predicate and F is its filter factor, then the filter factor of the
predicate NOT P1 is (1 - F). But, filter factor calculation is dependent on many
things, so a specific filter factor cannot be given for all predicate types.
Interpolation formulas
Definition: For a predicate that uses a range of values, DB2 calculates the filter
factor by an interpolation formula. The formula is based on an estimate of the ratio
of the number of values in the range to the number of values in the entire column of
the table.
The formulas: The formulas that follow are rough estimates, subject to further
modification by DB2. They apply to a predicate of the form col op literal. The
value of (Total Entries) in each formula is estimated from the values in columns
HIGH2KEY and LOW2KEY in catalog table SYSIBM.SYSCOLUMNS for column col:
Total Entries = (HIGH2KEY value - LOW2KEY value).
v For the operators < and <=, where the literal is not a host variable:
(Literal value - LOW2KEY value) / (Total Entries)
v For the operators > and >=, where the literal is not a host variable:
(HIGH2KEY value - Literal value) / (Total Entries)
v For LIKE or BETWEEN:
(High literal value - Low literal value) / (Total Entries)
For the predicate C1 BETWEEN 800 AND 1100, DB2 calculates the filter factor F as:
F = (1100 - 800)/1200 = 1/4 = 0.25
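Filling in the numbers, the example assumes that the HIGH2KEY and LOW2KEY values for C1 differ by 1200 (an assumption; the catalog values are not shown here):

```
Total Entries = HIGH2KEY value - LOW2KEY value = 1200
F = (High literal value - Low literal value) / (Total Entries)
  = (1100 - 800) / 1200
  = 0.25
```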
Defaults for interpolation: DB2 might not interpolate in some cases; instead, it
can use a default filter factor. Defaults for interpolation are:
When they are used: Table 72 lists the types of predicates on which these
statistics are used.
Table 72. Predicates for which distribution statistics are used
Type of statistic     Single column or concatenated columns     Predicates
Frequency Single COL=literal
COL IS NULL
COL IN (literal-list)
COL op literal
COL BETWEEN literal AND literal
Frequency Concatenated COL=literal
Suppose that columns C1 and C2 are correlated and are concatenated columns of
an index. Suppose also that the predicate is C1='3' AND C2='5' and that
SYSCOLDIST contains these values for columns C1 and C2:
COLVALUE FREQUENCYF
'1' '1' .1176
'2' '2' .0588
'3' '3' .0588
'3' '5' .1176
'4' '4' .0588
'5' '3' .1764
'5' '5' .3529
'6' '6' .0588
Therefore, to understand your PLAN_TABLE results, you must understand how DB2
manipulates predicates. The information in Table 68 on page 633 is also helpful.
A set of simple, Boolean term, equal predicates on the same column that are
connected by OR predicates can be converted into an IN-list predicate. For
example: C1=5 or C1=10 or C1=15 converts to C1 IN (5,10,15).
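Written out (T1 and C1 are placeholder names):

```sql
-- As written: Boolean term equal predicates connected by OR
SELECT * FROM T1
  WHERE C1 = 5 OR C1 = 10 OR C1 = 15;

-- As DB2 evaluates it after conversion to an IN-list predicate
SELECT * FROM T1
  WHERE C1 IN (5, 10, 15);
```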
The outer join operation gives you these result table rows:
v The rows with matching values of C1 in tables T1 and T2 (the inner join result)
v The rows from T1 where C1 has no corresponding value in T2
v The rows from T2 where C1 has no corresponding value in T1
However, when you apply the predicate, you remove all rows in the result table that
came from T2 where C1 has no corresponding value in T1. DB2 transforms the full
join into a left join, which is more efficient:
SELECT * FROM T1 X LEFT JOIN T2 Y
ON X.C1=Y.C1
WHERE X.C2 > 12;
In the following example, the predicate, X.C2>12, filters out all null values that result
from the right join:
SELECT * FROM T1 X RIGHT JOIN T2 Y
ON X.C1=Y.C1
WHERE X.C2>12;
Therefore, DB2 can transform the right join into a more efficient inner join without
changing the result:
SELECT * FROM T1 X INNER JOIN T2 Y
ON X.C1=Y.C1
WHERE X.C2>12;
These predicates are examples of predicates that can cause DB2 to simplify join
operations:
v T1.C1 > 10
v T1.C1 IS NOT NULL
v T1.C1 > 10 OR T1.C2 > 15
v T1.C1 > T2.C1
v T1.C1 IN (1,2,4)
v T1.C1 LIKE 'ABC%'
v T1.C1 BETWEEN 10 AND 100
v 12 BETWEEN T1.C1 AND 100
The following example shows how DB2 can simplify a join operation because the
query contains an ON clause that eliminates rows with unmatched values:
SELECT * FROM T1 X LEFT JOIN T2 Y
FULL JOIN T3 Z ON Y.C1=Z.C1
ON X.C1=Y.C1;
Because the last ON clause eliminates any rows from the result table for which
column values that come from T1 or T2 are null, DB2 can replace the full join with a
more efficient left join to achieve the same result:
SELECT * FROM T1 X LEFT JOIN T2 Y
LEFT JOIN T3 Z ON Y.C1=Z.C1
ON X.C1=Y.C1;
There is one case in which DB2 transforms a full outer join into a left join when you
cannot write code to do it. This is the case where a view specifies a full outer join,
but a subsequent query on that view requires only a left outer join. For example,
consider this view:
CREATE VIEW V1 (C1,T1C2,T2C2) AS
SELECT COALESCE(T1.C1, T2.C1), T1.C2, T2.C2
FROM T1 FULL JOIN T2
ON T1.C1=T2.C1;
This view contains rows for which values of C2 that come from T1 are null.
However, if you execute the following query, you eliminate the rows with null values
for C2 that come from T1:
SELECT * FROM V1
WHERE T1C2 > 10;
Therefore, for this query, a left join between T1 and T2 would have been adequate.
DB2 can execute this query as if the view V1 was generated with a left outer join so
that the query runs more efficiently.
Rules for generating predicates: For single-table or inner join queries, DB2
generates predicates for transitive closure if:
For outer join queries, DB2 generates predicates for transitive closure if the query
has an ON clause of the form COL1=COL2 and a before join predicate that has
one of the following formats:
v COL1 op value
op is =, <>, >, >=, <, or <=
v COL1 (NOT) BETWEEN value1 AND value2
DB2 generates a transitive closure predicate for an outer join query only if the
generated predicate does not reference the table with unmatched rows. That is, the
generated predicate cannot reference the left table for a left outer join or the right
table for a right outer join.
When a predicate meets the transitive closure conditions, DB2 generates a new
predicate, whether or not it already exists in the WHERE clause.
Example of transitive closure for an inner join: Suppose that you have written
this query, which meets the conditions for transitive closure:
SELECT * FROM T1, T2
WHERE T1.C1=T2.C1 AND
T1.C1>10;
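Because T1.C1 = T2.C1 and T1.C1 > 10 together imply T2.C1 > 10, DB2 can generate that predicate; conceptually, the query is evaluated as if it were written (a sketch):

```sql
SELECT * FROM T1, T2
  WHERE T1.C1 = T2.C1
    AND T1.C1 > 10
    AND T2.C1 > 10;  -- generated by predicate transitive closure
```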
Example of transitive closure for an outer join: Suppose that you have written
this outer join query:
SELECT * FROM (SELECT * FROM T1 WHERE T1.C1>10) X
LEFT JOIN T2
ON X.C1 = T2.C1;
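The before join predicate T1.C1 > 10 and the ON clause let DB2 generate a predicate on T2, the right table of this left join; conceptually (a sketch):

```sql
SELECT * FROM (SELECT * FROM T1 WHERE T1.C1 > 10) X
  LEFT JOIN (SELECT * FROM T2 WHERE T2.C1 > 10) Y  -- generated predicate on T2
  ON X.C1 = Y.C1;
```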
Adding extra predicates: DB2 performs predicate transitive closure only on equal
and range predicates. Other types of predicates, such as IN or LIKE predicates,
might be needed in the following case:
SELECT * FROM T1,T2
WHERE T1.C1=T2.C1
AND T1.C1 LIKE 'A%';
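Because DB2 performs transitive closure only on equal and range predicates, it does not generate a LIKE predicate on T2.C1. If such a predicate would help the access path, add it yourself (a sketch, assuming C1 is a character column in both tables):

```sql
SELECT * FROM T1, T2
  WHERE T1.C1 = T2.C1
    AND T1.C1 LIKE 'A%'
    AND T2.C1 LIKE 'A%';  -- added manually; DB2 does not generate it
```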
Column correlation
Two columns of data, A and B of a single table, are correlated if the values in
column A do not vary independently of the values in column B.
The following is an excerpt from a large single table. Columns CITY and STATE are
highly correlated, and columns DEPTNO and SEX are entirely independent.
TABLE CREWINFO
In this simple example, for every value of column CITY that equals 'FRESNO', there
is the same value in column STATE ('CA').
Compare the result of the above count (ANSWER2) with ANSWER1. If ANSWER2
is less than ANSWER1, then the suspected columns are correlated.
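One way to obtain such counts (a sketch; the exact queries are not reproduced in this excerpt) is to compare the number of distinct value pairs that actually occur with the count implied by independent columns:

```sql
-- ANSWER1: combinations implied if CITY and STATE were independent
SELECT COUNT(DISTINCT CITY) * COUNT(DISTINCT STATE) AS ANSWER1
  FROM CREWINFO;

-- ANSWER2: combinations that actually occur
SELECT COUNT(*) AS ANSWER2
  FROM (SELECT DISTINCT CITY, STATE FROM CREWINFO) AS PAIRS;
```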
The problem is that the combined filtering of columns CITY and STATE looks
better to DB2 than it really is: column STATE does almost no filtering beyond
CITY. Since columns DEPTNO and SEX do a better job of filtering out rows, DB2
should favor Index 2 over Index 1.
In the case of Index 3, because the columns CITY and STATE of Predicate 1 are
correlated, the index access is not improved as much as estimated by the
screening predicates and therefore Index 4 might be a better choice. (Note that
index screening also occurs for indexes with matching columns greater than zero.)
Multiple table joins: In Query 2, an additional table is added to the original query
(see Query 1 on page 646) to show the impact of column correlation on join
queries.
TABLE DEPTINFO
Query 2
SELECT ... FROM CREWINFO T1,DEPTINFO T2
WHERE T1.CITY = 'FRESNO' AND T1.STATE='CA' (PREDICATE 1)
AND T1.DEPTNO = T2.DEPT AND T2.DEPTNAME = 'LEGAL';
The order in which tables are accessed in a join statement affects performance. The
estimated combined filtering of Predicate 1 is lower than its actual filtering, so
table CREWINFO looks like a better first table to access than it really is.
Also, due to the smaller estimated size for table CREWINFO, a nested loop join
might be chosen for the join method. But, if many rows are selected from table
CREWINFO because Predicate1 does not filter as many rows as estimated, then
another join method might be better.
The last two techniques are discussed in “Special techniques to influence access
path selection” on page 660.
The utility RUNSTATS collects the statistics DB2 needs to make proper choices
about queries. With RUNSTATS, you can collect statistics on the concatenated key
columns of an index and the number of distinct values for those concatenated
columns. This gives DB2 accurate information to calculate the filter factor for the
query.
For example, RUNSTATS collects statistics that benefit queries like this:
SELECT * FROM T1
WHERE C1 = 'a' AND C2 = 'b' AND C3 = 'c' ;
where:
v The first three index keys are used (MATCHCOLS = 3).
v An index exists on C1, C2, C3, C4, C5.
v Some or all of the columns in the index are correlated in some way.
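A RUNSTATS control statement that collects such concatenated-key statistics might look like this (a sketch; IX1 is an assumed index on C1, C2, C3, C4, C5, and KEYCARD and FREQVAL are the relevant RUNSTATS INDEX options):

```
RUNSTATS INDEX (MYID.IX1)
  KEYCARD
  FREQVAL NUMCOLS 3 COUNT 10
```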
DB2 often chooses an access path that performs well for a query with several host
variables. However, in a new release or after maintenance has been applied, DB2
might choose a new access path that does not perform as well as the old access
path. In most cases, the change in access paths is due to the default filter factors,
which might lead DB2 to optimize the query in a different way.
There are two ways to change the access path for a query that contains host
variables:
v Bind the package or plan that contains the query with the option REOPT(VARS).
v Rewrite the query.
Because there is a performance cost to reoptimizing the access path at run time,
you should use the bind option REOPT(VARS) only on packages or plans
containing statements that perform poorly.
To use REOPT(VARS) most efficiently, first determine which SQL statements in your
applications perform poorly. Separate the code containing those statements into
units that you bind into packages with the option REOPT(VARS). Bind the rest of
the code into packages using NOREOPT(VARS). Then bind the plan with the option
NOREOPT(VARS). Only statements in the packages bound with REOPT(VARS) are
candidates for reoptimization at run time.
To determine which queries in plans and packages bound with REOPT(VARS) will
be reoptimized at run time, execute the following SELECT statements:
SELECT PLNAME,
CASE WHEN STMTNOI <> 0
THEN STMTNOI
ELSE STMTNO
END AS STMTNUM,
SEQNO, TEXT
FROM SYSIBM.SYSSTMT
WHERE STATUS IN ('B','F','G','J')
ORDER BY PLNAME, STMTNUM, SEQNO;
SELECT COLLID, NAME, VERSION,
CASE WHEN STMTNOI <> 0
THEN STMTNOI
ELSE STMTNO
END AS STMTNUM,
SEQNO, STMT
FROM SYSIBM.SYSPACKSTMT
WHERE STATUS IN ('B','F','G','J')
ORDER BY COLLID, NAME, VERSION, STMTNUM, SEQNO;
If you specify the bind option VALIDATE(RUN), and a statement in the plan or
package is not bound successfully, that statement is incrementally bound at run
time. If you also specify the bind option REOPT(VARS), DB2 reoptimizes the
access path during the incremental bind.
To determine which plans and packages have statements that will be incrementally
bound, execute the following SELECT statements:
SELECT DISTINCT NAME
FROM SYSIBM.SYSSTMT
WHERE STATUS = 'F' OR STATUS = 'H';
SELECT DISTINCT COLLID, NAME, VERSION
FROM SYSIBM.SYSPACKSTMT
WHERE STATUS = 'F' OR STATUS = 'H';
An equal predicate has a default filter factor of 1/COLCARDF. The actual filter factor
might be quite different.
Assumptions: Because there are only two different values in column SEX, 'M' and
'F', the value COLCARDF for SEX is 2. If the numbers of male and female
employees are not equal, the actual filter factor is larger or smaller than the
default of 1/2, depending on whether :HV1 is set to 'M' or 'F'.
Recommendation: One of these two actions can improve the access path:
v Bind the package or plan that contains the query with the option REOPT(VARS).
This action causes DB2 to reoptimize the query at run time, using the input
values you provide.
v Write predicates to influence DB2's selection of an access path, based on your
knowledge of actual filter factors. For example, you can break the query above
into three different queries, two of which use constants. DB2 can then determine
the exact filter factor for most cases when it binds the plan.
SELECT (HV1);
WHEN ('M')
DO;
EXEC SQL SELECT * FROM DSN8710.EMP
WHERE SEX = 'M';
END;
WHEN ('F')
DO;
EXEC SQL SELECT * FROM DSN8710.EMP
WHERE SEX = 'F';
END;
OTHERWISE
DO;
EXEC SQL SELECT * FROM DSN8710.EMP
WHERE SEX = :HV1;
END;
END;
Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2.
Query:
SELECT * FROM T1
WHERE C1 BETWEEN :HV1 AND :HV2
AND C2 BETWEEN :HV3 AND :HV4;
Recommendation: If DB2 does not choose T1X1, rewrite the query as follows, so
that DB2 does not choose index T1X2 on C2:
SELECT * FROM T1
WHERE C1 BETWEEN :HV1 AND :HV2
AND (C2 BETWEEN :HV3 AND :HV4 OR 0=1);
Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2.
Assumptions: You know that the application provides both narrow and wide ranges
on C1 and C2. Hence, default filter factors do not allow DB2 to choose the best
access path in all cases. For example, a small range on C1 favors index T1X1 on
C1, a small range on C2 favors index T1X2 on C2, and wide ranges on both C1
and C2 favor a table space scan.
Recommendation: If DB2 does not choose the best access path, try either of the
following changes to your application:
v Use a dynamic SQL statement and embed the ranges of C1 and C2 in the
statement. With access to the actual range values, DB2 can estimate the actual
filter factors for the query. Preparing the statement each time it is executed
requires an extra step, but it can be worthwhile if the query accesses a large
amount of data.
v Include some simple logic to check the ranges of C1 and C2, and then execute
one of these static SQL statements, based on the ranges of C1 and C2:
SELECT * FROM T1 WHERE C1 BETWEEN :HV1 AND :HV2
AND (C2 BETWEEN :HV3 AND :HV4 OR 0=1);
Example 4: ORDER BY
Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2.
Query:
SELECT * FROM T1
WHERE C1 BETWEEN :HV1 AND :HV2
ORDER BY C2;
If the actual number of rows that satisfy the range predicate is significantly different
from the estimate, DB2 might not choose the best access path.
Tables A, B, and C each have indexes on columns C1, C2, C3, and C4.
Query:
SELECT * FROM A, B, C
WHERE A.C1 = B.C1
AND A.C2 = C.C2
AND A.C2 BETWEEN :HV1 AND :HV2
AND A.C3 BETWEEN :HV3 AND :HV4
AND A.C4 < :HV5
AND B.C2 BETWEEN :HV6 AND :HV7
AND B.C3 < :HV8
AND C.C2 < :HV9;
Assumptions: The actual filter factors on table A are much larger than the default
factors. Hence, DB2 underestimates the number of rows selected from table A and
wrongly chooses that as the first table in the join.
The result of making the join predicate between A and B a nonindexable predicate
(which cannot be used in single index access) disfavors the use of the index on
column C1. This, in turn, might lead DB2 to access table A or B first. Or, it might
lead DB2 to change the access type of table A or B, thereby influencing the join
sequence of the other tables.
Decision needed: You can often write two or more SQL statements that achieve
identical results, particularly if you use subqueries. The statements have different
access paths, however, and probably perform differently.
Topic overview: The topics that follow describe different methods to achieve the
results intended by a subquery and tell what DB2 does for each method. The
information should help you estimate what method performs best for your query.
Correlated subqueries
Definition: A correlated subquery refers to at least one column of the outer query.
Example: In the following query, the correlation name, X, illustrates the subquery’s
reference to the outer query block.
SELECT * FROM DSN8710.EMP X
WHERE JOB = 'DESIGNER'
AND EXISTS (SELECT 1
FROM DSN8710.PROJ
WHERE DEPTNO = X.WORKDEPT
AND MAJPROJ = 'MA2100');
What DB2 does: A correlated subquery is evaluated for each qualified row of the
outer query that is referred to. In executing the example, DB2:
1. Reads a row from table EMP where JOB=’DESIGNER’.
2. Searches for the value of WORKDEPT from that row, in a table stored in
memory.
The in-memory table saves executions of the subquery. If the subquery has
already been executed with the value of WORKDEPT, the result of the subquery
is in the table and DB2 does not execute it again for the current row. Instead,
DB2 can skip to step 5.
3. Executes the subquery, if the value of WORKDEPT is not in memory. That
requires searching the PROJ table to check whether there is any project, where
MAJPROJ is ’MA2100’, for which the current WORKDEPT is responsible.
4. Stores the value of WORKDEPT and the result of the subquery in memory.
5. Returns the values of the current row of EMP to the application.
DB2 repeats this whole process for each qualified row of the EMP table.
Notes on the in-memory table: The in-memory table is applicable if the operator
of the predicate that contains the subquery is one of the following operators:
Example:
SELECT * FROM DSN8710.EMP
WHERE JOB = 'DESIGNER'
AND WORKDEPT IN (SELECT DEPTNO
FROM DSN8710.PROJ
WHERE MAJPROJ = 'MA2100');
What DB2 does: A noncorrelated subquery is executed once when the cursor is
opened for the query. What DB2 does to process it depends on whether it returns a
single value or more than one value. The query in the example above can return
more than one value.
Single-value subqueries
When the subquery is contained in a predicate with a simple operator, the subquery
is required to return 1 or 0 rows. The simple operator can be one of the following
operators:
What DB2 does: When the cursor is opened, the subquery executes. If it returns
more than one row, DB2 issues an error. The predicate that contains the subquery
is treated like a simple predicate with a constant specified, for example,
WORKDEPT <= ’value’.
Stage 1 and stage 2 processing: The rules for determining whether a predicate
with a noncorrelated subquery that returns a single value is stage 1 or stage 2 are
generally the same as for the same predicate with a single variable. However, the
predicate is stage 2 if:
v The value returned by the subquery is nullable and the column of the outer query
is not nullable.
v The data type of the subquery is higher than that of the column of the outer
query. For example, the following predicate is stage 2:
WHERE SMALLINT_COL < (SELECT INTEGER_COL FROM ...
Multiple-value subqueries
A subquery can return more than one value if the operator is one of the following:
op ANY, op ALL, op SOME, IN, EXISTS
where op is any of the operators >, >=, <, or <=.
What DB2 does: If possible, DB2 reduces a subquery that returns more than one
row to one that returns only a single row. That occurs when there is a range
comparison along with ANY, ALL, or SOME. The following query is an example:
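Such a query might look like this (a sketch using the sample tables; the <= ANY comparison is an assumed illustration, chosen to be consistent with the maximum-value transformation described next):

```sql
SELECT * FROM DSN8710.EMP
  WHERE WORKDEPT <= ANY (SELECT DEPTNO FROM DSN8710.PROJ);
```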
DB2 calculates the maximum value for DEPTNO from table DSN8710.PROJ and
removes the ANY keyword from the query. After this transformation, the subquery is
treated like a single-value subquery.
That transformation can be made with a maximum value if the range operator is:
v > or >= with the quantifier ALL
v < or <= with the quantifier ANY or SOME
The transformation can be made with a minimum value if the range operator is:
v < or <= with the quantifier ALL
v > or >= with the quantifier ANY or SOME
When the subquery result is a character data type and the left side of the predicate
is a datetime data type, then the result is placed in a work file without sorting. For
some noncorrelated subqueries using the above comparison operators, DB2 can
more accurately pinpoint an entry point into the work file, thus further reducing the
amount of scanning that is done.
Results from EXPLAIN: For information about the result in a plan table for a
subquery that is sorted, see “When are column functions evaluated?
(COLUMN_FN_EVAL)” on page 687.
For a SELECT statement, DB2 does the transformation if the following conditions
are true:
v The transformation does not introduce redundancy.
v The subquery appears in a WHERE clause.
v The subquery does not contain GROUP BY, HAVING, or column functions.
v The subquery has only one table in the FROM clause.
v The transformation results in 15 or fewer tables in the join.
v The subquery select list has only one column, guaranteed by a unique index to
have unique values.
v The comparison operator of the predicate containing the subquery is IN, = ANY,
or = SOME.
| For an UPDATE or DELETE statement, or a SELECT statement that does not meet
| the previous conditions for transformation, DB2 does the transformation of a
| correlated subquery into a join if the following conditions are true:
| v The transformation does not introduce redundancy.
| v The subquery is correlated to its immediate outer query.
| v The FROM clause of the subquery contains only one table, and the outer query
| (for SELECT), UPDATE, or DELETE references only one table.
| v If the outer predicate is a quantified predicate with an operator of =ANY or an IN
| predicate, the following conditions are true:
| – The left side of the outer predicate is a single column.
| – The right side of the outer predicate is a subquery that references a single
| column.
| – The two columns have the same data type and length.
| v The subquery does not contain the GROUP BY or DISTINCT clauses.
| v The subquery does not contain column functions.
| v The SELECT clause of the subquery does not contain a user-defined function
| with an external action or a user-defined function that modifies data.
| v The subquery predicate is a Boolean term predicate.
| v The predicates in the subquery that provide correlation are stage 1 predicates.
| v The subquery does not contain nested subqueries.
| v The subquery does not contain a self-referencing UPDATE or DELETE.
| v For a SELECT statement, the query does not contain the FOR UPDATE OF
| clause.
| v For an UPDATE or DELETE statement, the statement is a searched UPDATE or
| DELETE.
| v For a SELECT statement, parallelism is not enabled.
| For a statement with multiple subqueries, DB2 does the transformation only on the
| last subquery in the statement that qualifies for transformation.
Example: The following subquery can be transformed into a join because it meets
the first set of conditions for transformation:
SELECT * FROM EMP
WHERE DEPTNO IN
(SELECT DEPTNO FROM DEPT
WHERE LOCATION IN ('SAN JOSE', 'SAN FRANCISCO')
AND DIVISION = 'MARKETING');
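Conceptually, the transformed statement is a join like the following (a sketch; DB2 performs the transformation internally):

```sql
SELECT E.* FROM EMP E, DEPT D
  WHERE E.DEPTNO = D.DEPTNO
    AND D.LOCATION IN ('SAN JOSE', 'SAN FRANCISCO')
    AND D.DIVISION = 'MARKETING';
```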
If there is a department in the marketing division that has branches in both San
Jose and San Francisco, the result of the above SQL statement is not the same as
if a join were done. The join makes each employee in this department appear twice,
because the department matches once for the location San Jose and again for the
location San Francisco, although it is the same department. Therefore, it is clear
that to transform a subquery into a join, the uniqueness of the subquery select list
must be guaranteed. For this example, a unique index on any of the following sets
of columns would guarantee uniqueness:
v (DEPTNO)
v (DIVISION, DEPTNO)
| Example: The following subquery can be transformed into a join because it meets
| the second set of conditions for transformation:
| UPDATE T1 SET T1.C1 = 1
| WHERE T1.C1 =ANY
| (SELECT T2.C1 FROM T2
| WHERE T2.C2 = T1.C2);
| Results from EXPLAIN: For information about the result in a plan table for a
subquery that is transformed into a join operation, see “Is a subquery transformed
into a join?” on page 687.
Subquery tuning
The following three queries all retrieve the same rows. All three retrieve data about
all designers in departments that are responsible for projects that are part of major
project MA2100. These three queries show that there are several ways to retrieve a
desired result.
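The three queries themselves are not reproduced in this excerpt; based on the discussion that follows, they take roughly these forms (a sketch using the sample tables; query A is an assumed reconstruction):

```sql
-- Query A: a join
SELECT DISTINCT E.*
  FROM DSN8710.EMP E, DSN8710.PROJ P
  WHERE E.JOB = 'DESIGNER'
    AND E.WORKDEPT = P.DEPTNO
    AND P.MAJPROJ = 'MA2100';

-- Query B: a correlated subquery
SELECT * FROM DSN8710.EMP X
  WHERE JOB = 'DESIGNER'
    AND EXISTS (SELECT 1 FROM DSN8710.PROJ
                  WHERE DEPTNO = X.WORKDEPT
                    AND MAJPROJ = 'MA2100');

-- Query C: a noncorrelated IN subquery
SELECT * FROM DSN8710.EMP
  WHERE JOB = 'DESIGNER'
    AND WORKDEPT IN (SELECT DEPTNO FROM DSN8710.PROJ
                       WHERE MAJPROJ = 'MA2100');
```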
If you need columns from both tables EMP and PROJ in the output, you must use a
join.
In general, query A might be the one that performs best. However, if there is no
| index on DEPTNO in table PROJ, then query C might perform best. The
| IN-subquery predicate in query C is indexable. Therefore, if an index on
| WORKDEPT exists, DB2 might do IN-list access on table EMP. If you decide that a
join cannot be used and there is an available index on DEPTNO in table PROJ,
then query B might perform best.
It is also important to know the sequence of evaluation, for the different subquery
predicates as well as for all other predicates in the query. If the subquery predicate
is costly, perhaps another predicate could be evaluated before that predicate so that
the rows would be rejected before even evaluating the problem subquery predicate.
|
| Figure 192. Example of a view with UNION ALL operators and efficient predicates
|
v A view provides a global picture of similar information from unlike tables.
For example, suppose that you want income information about all the employees
in a company. This information is stored in separate tables for executives,
salespeople, and contractors. In addition, the sources of income are different for
each category of employee. Executives receive a salary plus a bonus,
salespeople receive a salary plus a commission, and contractors receive an
hourly wage. You might use a view like this to get combined income information:
CREATE VIEW EMPLOYEEPAY (EMPNO, FIRSTNAME, LASTNAME, DEPTNO, YEARMONTH, TOTALPAY) AS
(SELECT EMPNO, FIRSTNAME, LASTNAME, DEPTNO, YEARMONTH, SALARY/12 + BONUS
FROM EXECUTIVES
UNION ALL
SELECT S.EMPNO, FIRSTNAME, LASTNAME, DEPTNO, COM.YEARMONTH, BASEMONSALARY+TOTALCOM
FROM SALESPERSON S, (SELECT EMPNO, YEARMONTH, SUM(COMMISSION) TOTALCOM
FROM COMMISSION C
GROUP BY EMPNO, YEARMONTH) COM
WHERE S.EMPNO = COM.EMPNO
UNION ALL
SELECT C.EMPNO, FIRSTNAME, LASTNAME, DEPTNO, PAY.YEARMONTH, TOTALPAY
FROM CONTRACTOR C, (SELECT EMPNO, YEARMONTH, SUM(PAY) TOTALPAY
FROM CONTRACTPAY
GROUP BY EMPNO, YEARMONTH) PAY
WHERE C.EMPNO = PAY.EMPNO);
The following techniques can help queries on these types of views perform better.
In these suggestions, S1 through Sn represent small tables that are combined using
UNION or UNION ALL operators to form view V.
v Create a clustering index on each of S1 through Sn.
In a typical data warehouse model, partitions in a table are in time sequence, but
the data is stored in another key sequence, such as the customer number within
each partition. You can simulate partitions on view V by creating cluster indexes
on S1 through Sn.
DB2 can eliminate a subselect from a view only if it contains one of these
predicates. Therefore, for better performance of queries that use the view, you
should provide a predicate for each subselect in the view, even if a subselect is
not needed to evaluate the query. For example, in Figure 192 on page 659, each
table contains data for only a single month, so the BETWEEN predicate is
redundant. However, when you use the UNION ALL operator and a BETWEEN
predicate for every SELECT clause, DB2 can optimize queries that use the view
more efficiently.
v Avoid view materialization.
See Table 78 on page 712 for conditions under which DB2 materializes views.
ATTENTION
This section describes tactics for rewriting queries and modifying catalog
statistics to influence DB2’s method of selecting access paths. In a later
release of DB2, the selection method might change, causing your changes to
degrade performance. Save the old catalog statistics or SQL before you
consider making any changes to control the choice of access path. Before and
after you make any changes, take performance measurements. When you
migrate to a new release, examine the performance again. Be prepared to
back out any changes that have degraded performance.
This section contains the following information about determining and changing
access paths:
v Obtaining information about access paths
This section discusses the use of OPTIMIZE FOR n ROWS to affect the
performance of interactive SQL applications. Unless otherwise noted, this
information pertains to local applications. For more information on using OPTIMIZE
FOR n ROWS in distributed applications, see “Specifying OPTIMIZE FOR n ROWS”
on page 388.
DB2 uses the OPTIMIZE FOR n ROWS clause to choose access paths that
minimize the response time for retrieving the first few rows. For distributed queries,
the value of n determines the number of rows that DB2 sends to the client on each
DRDA network transmission. See “Specifying OPTIMIZE FOR n ROWS” on
page 388 for more information on using OPTIMIZE FOR n ROWS in the distributed
environment.
Use OPTIMIZE FOR 1 ROW to avoid sorts: You can influence the access path
most by using OPTIMIZE FOR 1 ROW. OPTIMIZE FOR 1 ROW tells DB2 to select
an access path that returns the first qualifying row quickly. This means that
whenever possible, DB2 avoids any access path that involves a sort. If you specify
a value for n that is anything but 1, DB2 chooses an access path based on cost,
and you won’t necessarily avoid sorts.
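For example, the following sketch (the table and column names come from the sample EMP table; the predicate is illustrative) asks DB2 to choose an access path that avoids a sort for the ORDER BY if one exists:

```sql
-- OPTIMIZE FOR 1 ROW discourages any access path that involves a sort
SELECT LASTNAME, FIRSTNAME, EMPNO
  FROM EMP
  WHERE WORKDEPT = 'A00'
  ORDER BY LASTNAME
  OPTIMIZE FOR 1 ROW;
```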
How to specify OPTIMIZE FOR n ROWS for a CLI application: For a Call Level
Interface (CLI) application, you can specify that DB2 uses OPTIMIZE FOR n ROWS
for all queries. To do that, specify the keyword OPTIMIZEFORNROWS in the
initialization file. For more information, see Chapter 3 of DB2 ODBC Guide and
Reference.
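A minimal sketch of such an initialization file entry follows. The data source section name is hypothetical; see DB2 ODBC Guide and Reference for the exact placement of the keyword:

```
; DB2 ODBC initialization file fragment (data source name is hypothetical)
[SAMPLEDB]
OPTIMIZEFORNROWS=20
```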
How many rows you can retrieve with OPTIMIZE FOR n ROWS: The OPTIMIZE
FOR n ROWS clause does not prevent you from retrieving all the qualifying rows.
However, if you use OPTIMIZE FOR n ROWS, the total elapsed time to retrieve all
the qualifying rows might be significantly greater than if DB2 had optimized for the
entire result set.
Example: Suppose you query the employee table regularly to determine the
employees with the highest salaries. You might use a query like this:
SELECT LASTNAME, FIRSTNAME, EMPNO, SALARY
FROM EMP
ORDER BY SALARY DESC
OPTIMIZE FOR 20 ROWS;
DB2 would most likely use the SALARY index directly because you have indicated
that you will probably retrieve the salaries of only the 20 most highly paid
employees. This choice avoids a costly sort operation.
When you specify OPTIMIZE FOR n ROWS for a remote query, a small value of n
can help limit the number of rows that flow across the network on any given
transmission.
You can improve the performance for receiving a large result set through a remote
query by specifying a large value of n in OPTIMIZE FOR n ROWS. When you
specify a large value, DB2 attempts to send the n rows in multiple transmissions.
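For example, a query that is expected to return a large remote result set might be coded as follows (the table and column names and the value of n are illustrative):

```sql
-- A large value of n allows DB2 to group many rows into each
-- network transmission when the result set is fetched remotely
SELECT ORDERNO, AMOUNT
  FROM BIGTABLE
  OPTIMIZE FOR 1000 ROWS;
```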
For better performance when retrieving a large result set, in addition to specifying
OPTIMIZE FOR n ROWS with a large value of n in your query, do not execute
other SQL statements until the entire result set for the query is processed. If
retrieval of data for several queries overlaps, DB2 might need to buffer result set
data in the DDF address space. See "Block fetching result sets" in Part 5 (Volume
2) of DB2 Administration Guide for more information.
For local or remote queries, to influence the access path most, specify OPTIMIZE
FOR 1 ROW. This value does not have a detrimental effect on distributed queries.
DB2 picks IX2 to access the data, but IX1 would be roughly 10 times quicker. The
problem is that 50% of all parts from center number 3 are still in Center 3; they
have not moved. Assume that there are no statistics on the correlated columns in
catalog table SYSCOLDIST. Therefore, DB2 assumes that the parts from center
number 3 are evenly distributed among the 50 centers.
You can get the desired access path by changing the query. To discourage the use
of IX2 for this particular query, you can change the third predicate to be
nonindexable.
SELECT * FROM PART_HISTORY
WHERE
PART_TYPE = 'BB'
AND W_FROM = 3
AND (W_NOW = 3 + 0) <-- PREDICATE IS MADE NONINDEXABLE
Now index IX2 is not picked, because it has only one matching column. The
preferred index, IX1, is picked. The third predicate is a nonindexable predicate, so
an index is not used for the compound predicate.
There are many ways to make a predicate nonindexable. The recommended ways
are to add 0 to a predicate that evaluates to a numeric value, or to concatenate an
empty string to a predicate that evaluates to a character value:
Indexable            Nonindexable
T1.C3=T2.C4          (T1.C3=T2.C4 CONCAT '')
T1.C1=5              T1.C1=5+0
These techniques do not affect the result of the query and cause only a small
amount of overhead.
The preferred technique for improving the access path when a table has correlated
columns is to generate catalog statistics on the correlated columns. You can do that
either by running RUNSTATS or by updating catalog table SYSCOLDIST or
SYSCOLDISTSTATS manually.
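As a sketch, assuming that W_FROM and W_NOW are the first two key columns of index IX2 (the index name and authorization ID are illustrative), a RUNSTATS control statement such as the following collects statistics on the correlated columns:

```
-- Utility control statement, not SQL: KEYCARD collects cardinalities of
-- concatenated key columns; FREQVAL collects frequent-value statistics
-- for the first two key columns
RUNSTATS INDEX (USRT001.IX2)
  KEYCARD
  FREQVAL NUMCOLS 2 COUNT 10
```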
Q1:
SELECT * FROM PART_HISTORY -- SELECT ALL PARTS
WHERE PART_TYPE = 'BB' P1 -- THAT ARE 'BB' TYPES
AND W_FROM = 3 P2 -- THAT WERE MADE IN CENTER 3
AND W_NOW = 3 P3 -- AND ARE STILL IN CENTER 3
+------------------------------------------------------------------------------+
| Filter factor of these predicates. |
| P1 = 1/1000= .001 |
| P2 = 1/50 = .02 |
| P3 = 1/50 = .02 |
|------------------------------------------------------------------------------|
| ESTIMATED VALUES | WHAT REALLY HAPPENS |
| filter data | filter data |
| index matchcols factor rows | index matchcols factor rows |
| ix2 2 .02*.02 40 | ix2 2 .02*.50 1000 |
| ix1 1 .001 100 | ix1 1 .001 100 |
+------------------------------------------------------------------------------+
This does not change the result of the query. It is valid for a column of any data
type, and causes a minimal amount of overhead. However, DB2 uses only the best
filter factor for any particular column. So, if TX.CX already has another equal
predicate on it, adding this extra predicate has no effect. You should add the extra
local predicate to a column that is not involved in a predicate already. If index-only
access is possible for a table, it is generally not a good idea to add a predicate that
would prevent index-only access.
To access the data in a star schema, you write SELECT statements that include
join operations between the fact table and the dimension tables, but no join
operations between dimension tables.
DB2 uses a special join type called a star join if the conditions that are described in
“Star schema (star join)” on page 701 are true.
You can improve the performance of star joins by your use of indexes. This section
gives suggestions for choosing indexes that might improve star join performance.
Follow these steps to derive a fact table index for a star join that joins n columns of
fact table F to n dimension tables D1 through Dn:
1. Define the set of columns whose index key order is to be determined as the n
columns of fact table F that correspond to dimension tables. That is,
S={C1,...Cn} and L=n.
2. Calculate the density of all sets of L-1 columns in S.
3. Find the lowest density. Determine which column is not in the set of columns
with the lowest density. That is, find column Cm in S, such that for every Ci in
S, density(S-{Cm})<density(S-{Ci}).
4. Make Cm the Lth column of the index.
5. Remove Cm from S.
6. Decrement L by 1.
7. Repeat steps 2 through 6 n-2 times. The remaining column after iteration n-2 is
the first column of the index.
Example of determining column order for a fact table index: Suppose that a
star schema has three dimension tables whose cardinalities are:
cardD1=2000
cardD2=500
cardD3=100
Now suppose that the cardinalities of single columns and pairs of columns in the
fact table are:
cardC1=2000
cardC2=433
cardC3=100
cardC12=625000
cardC13=196000
cardC23=994
Step 1: Calculate the density of all pairs of columns in the fact table:
density(C1,C2)=625000/(2000*500)=0.625
density(C1,C3)=196000/(2000*100)=0.98
density(C2,C3)=994/(500*100)=0.01988
Step 2: Find the pair of columns with the lowest density. That pair is (C2,C3).
Determine which column of the fact table is not in that pair. That column is C1.
Step 3: Make C1 the third (last) column of the index.
Step 4: Repeat steps 1 through 3 to determine the second and first columns of the
index key:
density(C2)=433/500=0.866
density(C3)=100/100=1.0
The column with the lowest density is C2. Therefore, C3 is the second column of
the index. The remaining column, C2, is the first column of the index. That is, the
best order for the multi-column index is C2, C3, C1.
The example shown in Figure 193 on page 665 involves a problem with data
correlation: DB2 does not know that 50% of the parts that were made in Center 3
are still in Center 3. The problem was circumvented by making a predicate
nonindexable. But suppose that hundreds of users are writing queries similar to
that query; it would not be possible to have all of them change their queries. In
this type of situation, the best solution is to change the catalog statistics.
For the query in Figure 193 on page 665, where the correlated columns are
concatenated key columns of an index, you can update the catalog statistics in one
of two ways:
v Run the RUNSTATS utility, and request statistics on the correlated columns
W_FROM and W_NOW. This is the preferred method. See the discussion of
maintaining statistics in the catalog in Part 5 (Volume 2) of DB2 Administration
Guide and Part 2 of DB2 Utility Guide and Reference for more information.
v Update the catalog statistics manually.
Updating the catalog to adjust for correlated columns: One catalog table you
can update is SYSIBM.SYSCOLDIST, which gives information about the first key
column or concatenated columns of an index key. Assume that because columns
W_NOW and W_FROM are correlated, there are only 100 distinct values for the
combination of the two columns, rather than 2500 (50 for W_FROM * 50 for
W_NOW). Insert a row like this to indicate the new cardinality:
INSERT INTO SYSIBM.SYSCOLDIST
(FREQUENCY, FREQUENCYF, IBMREQD,
TBOWNER, TBNAME, NAME, COLVALUE,
TYPE, CARDF, COLGROUPCOLNO, NUMCOLUMNS)
VALUES(0, -1, 'N',
'USRT001','PART_HISTORY','W_FROM',' ',
'C',100,X'00040003',2);
Because W_FROM and W_NOW are concatenated key columns of an index, you
can also put this information in SYSCOLDIST using the RUNSTATS utility. See DB2
Utility Guide and Reference for more information.
You can also tell DB2 about the frequency of a certain combination of column
values by updating SYSIBM.SYSCOLDIST. For example, you can indicate that 1%
of the rows in PART_HISTORY contain the values 3 for W_FROM and 3 for
W_NOW by inserting this row into SYSCOLDIST:
INSERT INTO SYSIBM.SYSCOLDIST
(FREQUENCY, FREQUENCYF, STATSTIME, IBMREQD,
TBOWNER, TBNAME, NAME, COLVALUE,
TYPE, CARDF, COLGROUPCOLNO, NUMCOLUMNS)
VALUES(0, .0100, '1996-12-01-12.00.00.000000','N',
'USRT001','PART_HISTORY','W_FROM',X'00800000030080000003',
'F',-1,X'00040003',2);
Updating the catalog for joins with table functions: Updating catalog statistics
might cause extreme performance problems if the statistics are not updated
correctly. Monitor performance, and be prepared to reset the statistics to their
original values if performance problems arise.
The best solution to the problem is to run RUNSTATS again after the table is
populated. However, if it is not possible to do that, you can use subsystem
parameter NPGTHRSH to cause DB2 to favor matching index access over a table
space scan and over nonmatching index access.
However, these actions might not improve performance for some outer joins.
Other tools: The following tools can help you tune SQL queries:
v DB2 Visual Explain
Visual Explain is a graphical workstation feature of DB2 that provides:
– An easy-to-understand display of a selected access path
– Suggestions for changing an SQL statement
– An ability to invoke EXPLAIN for dynamic SQL statements
– An ability to provide DB2 catalog statistics for referenced objects of an access
path
– A subsystem parameter browser with keyword 'Find' capabilities
For each access to a single table, EXPLAIN tells you if an index access or table
space scan is used. If indexes are used, EXPLAIN tells you how many indexes and
index columns are used and what I/O methods are used to read the pages. For
joins of tables, EXPLAIN tells you which join method and type are used, the order
in which DB2 joins the tables, and when and why it sorts any rows.
The primary use of EXPLAIN is to observe the access paths for the SELECT parts
of your statements. For UPDATE and DELETE WHERE CURRENT OF, and for
INSERT, you receive somewhat less information in your plan table. And some
accesses EXPLAIN does not describe: for example, the access to LOB values,
which are stored separately from the base table, and access to parent or dependent
tables needed to enforce referential constraints.
The access paths shown for the example queries in this chapter are intended only
to illustrate those examples. If you execute the queries in this chapter on your
system, the access paths chosen can be different.
Creating PLAN_TABLE
Before you can use EXPLAIN, you must create a table called PLAN_TABLE to hold
the results of EXPLAIN. A copy of the statements that are needed to create the
table is in the DB2 sample library, under the member name DSNTESC. (Unless you
need the information they provide, you do not need to create a function table or
statement table to use EXPLAIN.)
Figure 194 on page 673 shows the format of a plan table. Table 74 on page 673
shows the content of each column.
QUERYNO INTEGER NOT NULL PREFETCH CHAR(1) NOT NULL WITH DEFAULT
QBLOCKNO SMALLINT NOT NULL COLUMN_FN_EVAL CHAR(1) NOT NULL WITH DEFAULT
APPLNAME CHAR(8) NOT NULL MIXOPSEQ SMALLINT NOT NULL WITH DEFAULT
PROGNAME CHAR(8) NOT NULL ---------28 column format ---------
PLANNO SMALLINT NOT NULL VERSION VARCHAR(64) NOT NULL WITH DEFAULT
METHOD SMALLINT NOT NULL COLLID CHAR(18) NOT NULL WITH DEFAULT
CREATOR CHAR(8) NOT NULL ---------30 column format ---------
TNAME CHAR(18) NOT NULL ACCESS_DEGREE SMALLINT
TABNO SMALLINT NOT NULL ACCESS_PGROUP_ID SMALLINT
ACCESSTYPE CHAR(2) NOT NULL JOIN_DEGREE SMALLINT
MATCHCOLS SMALLINT NOT NULL JOIN_PGROUP_ID SMALLINT
ACCESSCREATOR CHAR(8) NOT NULL ---------34 column format ---------
ACCESSNAME CHAR(18) NOT NULL SORTC_PGROUP_ID SMALLINT
INDEXONLY CHAR(1) NOT NULL SORTN_PGROUP_ID SMALLINT
SORTN_UNIQ CHAR(1) NOT NULL PARALLELISM_MODE CHAR(1)
SORTN_JOIN CHAR(1) NOT NULL MERGE_JOIN_COLS SMALLINT
SORTN_ORDERBY CHAR(1) NOT NULL CORRELATION_NAME CHAR(18)
SORTN_GROUPBY CHAR(1) NOT NULL PAGE_RANGE CHAR(1) NOT NULL WITH DEFAULT
SORTC_UNIQ CHAR(1) NOT NULL JOIN_TYPE CHAR(1) NOT NULL WITH DEFAULT
SORTC_JOIN CHAR(1) NOT NULL GROUP_MEMBER CHAR(8) NOT NULL WITH DEFAULT
SORTC_ORDERBY CHAR(1) NOT NULL IBM_SERVICE_DATA VARCHAR(254) NOT NULL WITH DEFAULT
SORTC_GROUPBY CHAR(1) NOT NULL --------43 column format ----------
TSLOCKMODE CHAR(3) NOT NULL WHEN_OPTIMIZE CHAR(1) NOT NULL WITH DEFAULT
TIMESTAMP CHAR(16) NOT NULL QBLOCK_TYPE CHAR(6) NOT NULL WITH DEFAULT
REMARKS VARCHAR(254) NOT NULL BIND_TIME TIMESTAMP NOT NULL WITH DEFAULT
---------25 column format --------- ------46 column format ------------
OPTHINT CHAR(8) NOT NULL WITH DEFAULT
HINT_USED CHAR(8) NOT NULL WITH DEFAULT
PRIMARY_ACCESSTYPE CHAR(1) NOT NULL WITH DEFAULT
-------49 column format------------
PARENT_QBLOCKNO SMALLINT NOT NULL WITH DEFAULT
TABLE_TYPE CHAR(1)
-------51 column format-----------
When the values of QUERYNO are based on the statement number in the source
program, values greater than 32767 are reported as 0. Hence, in a very long program,
the value is not guaranteed to be unique. If QUERYNO is not unique, the value of
TIMESTAMP is unique.
QBLOCKNO The position of the query in the statement being explained (1 for the outermost query,
2 for the next query, and so forth). For better performance, DB2 might merge a query
block into another query block. When that happens, the position number of the
merged query block will not be in QBLOCKNO.
TSLOCKMODE The lock mode that is acquired for the table space. The data in this column is right
justified; for example, IX appears as a blank followed by I followed by X. If the
column contains a blank, no lock is acquired.
TIMESTAMP Usually, the time at which the row is processed, to the last .01 second. If necessary,
DB2 adds .01 second to the value to ensure that rows for two successive queries
have different values.
REMARKS A field into which you can insert any character string of 254 or fewer characters.
PREFETCH Whether data pages are to be read in advance by prefetch. S = pure sequential
prefetch; L = prefetch through a page list; blank = unknown or no prefetch.
COLUMN_FN_EVAL When an SQL column function is evaluated. R = while the data is being read from the
table or index; S = while performing a sort to satisfy a GROUP BY clause; blank =
after data retrieval and after any sorts.
MIXOPSEQ The sequence number of a step in a multiple index operation.
1, 2, ... n For the steps of the multiple index procedure (ACCESSTYPE is MX,
MI, or MU.)
0 For any other rows (ACCESSTYPE is I, I1, M, N, R, or blank.)
VERSION The version identifier for the package. Applies only to an embedded EXPLAIN
statement executed from a package or to a statement that is explained when binding a
package. Blank if not applicable.
COLLID The collection ID for the package. Applies only to an embedded EXPLAIN statement
executed from a package or to a statement that is explained when binding a package.
Blank if not applicable.
Note: The following nine columns, from ACCESS_DEGREE through CORRELATION_NAME, contain the null value if
the plan or package was bound using a plan table with fewer than 43 columns. Otherwise, each of them can contain
null if the method it refers to does not apply.
For tips on maintaining a growing plan table, see “Maintaining a plan table” on
page 678.
EXPLAIN for remote binds: A remote requester that accesses DB2 can specify
EXPLAIN(YES) when binding a package at the DB2 server. The information
appears in a plan table at the server, not at the requester. If the requester does not
support the propagation of the option EXPLAIN(YES), rebind the package at the
requester with that option to obtain access path information. You cannot get
information about access paths for SQL statements that use private protocol.
All rows with the same non-zero value for QBLOCKNO and the same value for
QUERYNO relate to a step within the query. QBLOCKNOs are not necessarily
executed in the order shown in PLAN_TABLE. But within a QBLOCKNO, the
PLANNO column gives the substeps in the order they execute.
What if QUERYNO=0? In a program with more than 32767 lines, all values of
QUERYNO greater than 32767 are reported as 0. For entries containing
QUERYNO=0, use the timestamp, which is guaranteed to be unique, to distinguish
individual statements.
COLLID gives the COLLECTION name, and PROGNAME gives the PACKAGE_ID.
The following query of a plan table returns the rows for all the explainable
statements in a package in their logical order:
SELECT * FROM JOE.PLAN_TABLE
WHERE PROGNAME = 'PACK1' AND COLLID = 'COLL1' AND VERSION = 'PROD1'
ORDER BY QUERYNO, QBLOCKNO, PLANNO, MIXOPSEQ;
The examples in Figure 195 and Figure 196 have these indexes: IX1 on T(C1) and
IX2 on T(C2). DB2 processes the query in the following steps:
1. Retrieve all the qualifying record identifiers (RIDs) where C1=1, using index IX1.
2. Retrieve all the qualifying RIDs where C2=1, using index IX2. The intersection
of those lists is the final set of RIDs.
3. Access the data pages needed to retrieve the qualified rows using the final RID
list.
SELECT * FROM T
WHERE C1 = 1 AND C2 = 1;
Figure 195. PLAN_TABLE output for example with intersection (AND) operator
The same index can be used more than once in a multiple index access, because
more than one predicate could be matching, as in Figure 196.
SELECT * FROM T
WHERE C1 BETWEEN 100 AND 199 OR
C1 BETWEEN 500 AND 599;
Figure 196. PLAN_TABLE output for example with union (OR) operator
In general, the matching predicates on the leading index columns are equal or IN
predicates. The predicate that matches the final index column can be an equal, IN,
or range predicate (<, <=, >, >=, LIKE, or BETWEEN).
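For example, with a hypothetical index IX on T(C1,C2,C3,C4), the following query has three matching columns:

```sql
-- C1 and C2 match with an equal and an IN predicate; the range predicate
-- on C3 is the last matching column, so the predicate on C4 is applied
-- as index screening rather than matching (MATCHCOLS = 3)
SELECT * FROM T
  WHERE C1 = 1
    AND C2 IN (1,2)
    AND C3 > 5
    AND C4 = 6;
```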
The index XEMP5 is the chosen access path for this query, with MATCHCOLS = 3.
Two equal predicates are on the first two columns, and a range predicate is on the
third column. Although the index has four columns, only three of them can be
considered matching columns.
Index-only access to data is not possible for any step that uses list prefetch, which
is described under “What kind of prefetching is done? (PREFETCH = L, S, or
blank)” on page 686. Index-only access is also not possible when varying-length
data is returned in the result set, or when a VARCHAR column has a LIKE
predicate, unless the VARCHAR FROM INDEX field of installation panel DSNTIP4
is set to YES and the plans or packages have been rebound to pick up the change.
See Part 2 of DB2 Installation Guide for more information.
If access is by more than one index, INDEXONLY is Y for a step with access type
MX, because the data pages are not actually accessed until all the steps for
intersection (MI) or union (MU) take place.
When an SQL application uses index-only access for a ROWID column, the
application claims the table space or table space partition. As a result, contention
may occur between the SQL application and a utility that drains the table space or
partition. Index-only access to a table for a ROWID column is not possible if the
associated table space or partition is in an incompatible restrictive state. For
example, an SQL application can make a read claim on the table space only if the
restrictive state allows readers.
Direct row access is very fast, because DB2 does not need to use the index or a
table space scan to find the row. Direct row access can be used on any table that
has a ROWID column.
Searching for propagated rows: If rows are propagated from one table to another,
do not expect to use the same row ID value from the source table to search for the
same row in the target table, or vice versa. This does not work when direct row
access is the access path chosen. For example, assume that the host variable
below contains a row ID from SOURCE:
SELECT * FROM TARGET
WHERE ID = :hv_rowid
Because the row ID location is not the same as in the source table, DB2 will most
likely not find that row. Search on another column to retrieve the row you want.
Reverting to ACCESSTYPE
Although DB2 might plan to use direct row access, circumstances can cause DB2
to not use direct row access at run time. DB2 remembers the location of the row as
of the time it is accessed. However, that row can change locations (such as after a
REORG) between the first and second time it is accessed, which means that DB2
cannot use direct row access to find the row on the second access attempt. Instead
of using direct row access, DB2 uses the access path that is shown in the
ACCESSTYPE column of PLAN_TABLE.
If the predicate you are using to do direct row access is not indexable and if DB2 is
unable to use direct row access, then DB2 uses a table space scan to find the row.
This can have a profound impact on the performance of applications that rely on
direct row access. Write your applications to handle the possibility that direct row
access might not be used. One option is to provide an additional predicate on an
indexed column: if an index exists on EMPNO, DB2 can use index access when
direct row access fails, and the additional predicate ensures that DB2 does not
revert to a table space scan.
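The technique above can be sketched as follows (the host variable names are illustrative):

```sql
-- If direct row access through EMP_ROWID fails, the predicate on the
-- indexed column EMPNO gives DB2 an index access path to fall back on
SELECT * FROM EMPDATA
  WHERE EMP_ROWID = :hv_emp_rowid OR EMPNO = :hv_empno;
```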
RID list processing: Direct row access and RID list processing are mutually
exclusive. If a query qualifies for both direct row access and RID list processing,
direct row access is used. If direct row access fails, DB2 does not revert to RID list
processing; instead it reverts to the backup access type.
/**********************************************************/
/* Retrieve the picture and resume from the PIC_RES table */
/**********************************************************/
strcpy(hv_name, "Jones");
EXEC SQL SELECT PR.PICTURE, PR.RESUME INTO :hv_picture, :hv_resume
FROM PIC_RES PR
WHERE PR.Name = :hv_name;
Figure 197. Example of using a row ID value for direct row access (Part 1 of 4)
/**********************************************************/
/* Insert a row into the EMPDATA table that contains the */
/* picture and resume you obtained from the PIC_RES table */
/**********************************************************/
EXEC SQL INSERT INTO EMPDATA
VALUES (DEFAULT,9999,'Jones', 35000.00, 99,
:hv_picture, :hv_resume);
/**********************************************************/
/* Now retrieve some information about that row, */
/* including the ROWID value. */
/**********************************************************/
hv_dept = 99;
EXEC SQL SELECT E.SALARY, E.EMP_ROWID
INTO :hv_salary, :hv_emp_rowid
FROM EMPDATA E
WHERE E.DEPTNUM = :hv_dept AND E.NAME = :hv_name;
Figure 197. Example of using a row ID value for direct row access (Part 2 of 4)
/**********************************************************/
/* Update columns SALARY, PICTURE, and RESUME. Use the */
/* ROWID value you obtained in the previous statement */
/* to access the row you want to update. */
/* smiley_face and update_resume are */
/* user-defined functions that are not shown here. */
/**********************************************************/
EXEC SQL UPDATE EMPDATA
SET SALARY = :hv_salary + 1200,
PICTURE = smiley_face(:hv_picture),
RESUME = update_resume(:hv_resume)
WHERE EMP_ROWID = :hv_emp_rowid;
Figure 197. Example of using a row ID value for direct row access (Part 3 of 4)
/**********************************************************/
/* Use the ROWID value to delete the employee record */
/* from the table. */
/**********************************************************/
EXEC SQL DELETE FROM EMPDATA
WHERE EMP_ROWID = :hv_emp_rowid;
Figure 197. Example of using a row ID value for direct row access (Part 4 of 4)
A limited partition scan can be combined with other access methods. For example,
consider the following query:
SELECT .. FROM T
WHERE (C1 BETWEEN '2002' AND '3280'
OR C1 BETWEEN '6000' AND '8000')
AND C2 = '6';
Assume that table T has a partitioned index on column C1 and that values of C1
between 2002 and 3280 all appear in partitions 3 and 4 and the values between
6000 and 8000 appear in partitions 8 and 9. Assume also that T has another index
on column C2. DB2 could choose any of these access methods:
v A matching index scan on column C1. The scan reads index values and data
only from partitions 3, 4, 8, and 9. (PAGE_RANGE=N)
v A matching index scan on column C2. (DB2 might choose that if few rows have
C2=6.) The matching index scan reads all RIDs for C2=6 from the index on C2
and corresponding data pages from partitions 3, 4, 8, and 9. (PAGE_RANGE=Y)
v A table space scan on T. DB2 avoids reading data pages from any partitions
except 3, 4, 8 and 9. (PAGE_RANGE=Y)
Joins: Limited partition scan can be used for each table accessed in a join.
If you have predicates using an OR operator and one of the predicates refers to a
column of the partitioning index that is not the first key column of the index, then
DB2 does not use limited partition scan.
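For example, assuming that the partitioning index of table T is on columns (C1, C2), DB2 cannot use a limited partition scan for the following query, because one side of the OR refers to C2, which is not the first key column:

```sql
-- The OR predicate on non-leading key column C2 disables limited
-- partition scan, so DB2 must consider every partition of T
SELECT * FROM T
  WHERE C1 = '2002' OR C2 = '6';
```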
METHOD 3 sorts: These are used for ORDER BY, GROUP BY, SELECT
DISTINCT, UNION, or a quantified predicate. A quantified predicate is
'col = ANY (fullselect)' or 'col = SOME (fullselect)'.
Generally, values of R and S are considered better for performance than a blank.
Use variance and standard deviation with care: The VARIANCE and STDDEV
functions are always evaluated late (that is, COLUMN_FN_EVAL is blank). This
causes other functions in the same query block to be evaluated late as well. For
example, in the following query, the sum function is evaluated later than it would be
if the variance function was not present:
SELECT SUM(C1), VARIANCE(C1) FROM T1;
Assume that table T has no index on C1. The following is an example that uses a
table space scan:
SELECT * FROM T WHERE C1 = VALUE;
In this case, every row in T must be examined to determine whether the
value of C1 matches the given value.
In the general case, the rules for determining the number of matching columns are
simple, although there are a few exceptions.
v Look at the index columns from leading to trailing. For each index column,
search for an indexable boolean term predicate on that column. (See “Properties
of predicates” on page 628 for a definition of boolean term.) If such a predicate is
found, then it can be used as a matching predicate.
Column MATCHCOLS in a plan table shows how many of the index columns are
matched by predicates.
v If no matching predicate is found for a column, the search for matching
predicates stops.
v If a matching predicate is a range predicate, then there can be no more matching
columns. For example, in the matching index scan example that follows, the
range predicate C2>1 prevents the search for additional matching columns.
v For star joins, a missing key predicate does not cause termination of matching
columns that are to be used on the fact table index.
Index screening
In index screening, predicates are specified on index key columns but are not part
of the matching columns. Those predicates improve the index access by reducing
the number of rows that qualify while searching the index. For example, with an
index on T(C1,C2,C3,C4) in the following SQL statement, C3>0 and C4=2 are index
screening predicates.
SELECT * FROM T
WHERE C1 = 1
AND C3 > 0 AND C4 = 2
AND C5 = 8;
The predicates can be applied on the index, but they are not matching predicates.
C5=8 is not an index screening predicate, and it must be evaluated when data is
retrieved. The value of MATCHCOLS in the plan table is 1.
You can regard the IN-list index scan as a series of matching index scans with the
values in the IN predicate being used for each matching index scan. The following
example has an index on (C1,C2,C3,C4) and might use an IN-list index scan:
SELECT * FROM T
WHERE C1=1 AND C2 IN (1,2,3)
AND C3>0 AND C4<100;
The plan table shows MATCHCOLS = 3 and ACCESSTYPE = N. The IN-list scan is
performed as the following three matching index scans:
(C1=1,C2=1,C3>0), (C1=1,C2=2,C3>0), (C1=1,C2=3,C3>0)
Parallelism is supported for queries that involve IN-list index access. These queries
used to run sequentially in previous releases of DB2, although parallelism could
have been used when the IN-list access was for the inner table of a parallel group.
RID lists are constructed for each of the indexes involved. The unions or
intersections of the RID lists produce a final list of qualified RIDs that is used to
retrieve the result rows, using list prefetch. You can consider multiple index access
as an extension to list prefetch with more complex RID retrieval operations in its
first phase. The complex operators are union and intersection.
The plan table contains a sequence of rows describing the access. For this query,
ACCESSTYPE uses the following values:
Value Meaning
M Start of multiple index access processing
MX Indexes are to be scanned for later union or intersection
MI An intersection (AND) is performed
MU A union (OR) is performed
The following steps relate to the previous query and the values shown for the plan
table in Figure 198 on page 692:
1. Index EMPX1, with matching predicate AGE = 34, provides a set of candidates
for the result of the query. The value of MIXOPSEQ is 1.
2. Index EMPX1, with matching predicate AGE = 40, also provides a set of
candidates for the result of the query. The value of MIXOPSEQ is 2.
3. Index EMPX2, with matching predicate JOB=’MANAGER’, also provides a set of
candidates for the result of the query. The value of MIXOPSEQ is 3.
4. The first intersection (AND) is done, and the value of MIXOPSEQ is 4. This MI
removes the two previous candidate lists (produced by MIXOPSEQs 2 and 3)
by intersecting them to form an intermediate candidate list, IR1, which is not
shown in PLAN_TABLE.
5. The last step, where the value MIXOPSEQ is 5, is a union (OR) of the two
remaining candidate lists, which are IR1 and the candidate list produced by
MIXOPSEQ 1. This final union gives the result for the query.
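The RID-list processing in these five steps can be sketched with Python sets standing in for RID lists (the RID values are invented for illustration; this is not DB2 internals):

```python
# Stand-in RID lists for the three matching index scans.
rids_age34 = {1, 2, 3}          # MIXOPSEQ 1: EMPX1, AGE = 34
rids_age40 = {4, 5, 6}          # MIXOPSEQ 2: EMPX1, AGE = 40
rids_mgr   = {2, 4, 6, 8}       # MIXOPSEQ 3: EMPX2, JOB = 'MANAGER'

ir1 = rids_age40 & rids_mgr     # MIXOPSEQ 4: MI, intersection (AND) -> IR1
final = rids_age34 | ir1        # MIXOPSEQ 5: MU, union (OR) -> final RID list

print(sorted(final))            # rows are then fetched with list prefetch
```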
Figure 198. Plan table output for a query that uses multiple indexes. Depending on the filter
factors of the predicates, the access steps can appear in a different order.
In this example, the steps in the multiple index access follow the physical sequence
of the predicates in the query. This is not always the case. The multiple index steps
are arranged in an order that uses RID pool storage most efficiently and for the
least amount of time.
Sometimes DB2 can determine that an index that is not fully matching is actually
an equal unique index. Assume the following case:
Unique Index1: (C1, C2)
Unique Index2: (C2, C1, C3)
SELECT C3 FROM T
WHERE C1 = 1 AND
C2 = 5;
Index1 is a fully matching equal unique index. However, Index2 is also an equal
unique index even though it is not fully matching. Index2 is the better choice
because, in addition to being equal and unique, it also provides index-only access.
To use a matching index scan to update an index in which its key columns are
being updated, the following conditions must be met:
v Each updated key column must have a corresponding predicate of the form
″index_key_column = constant″ or ″index_key_column IS NULL″.
v If a view is involved, WITH CHECK OPTION must not be specified.
With list prefetch or multiple index access, any index or indexes can be used in an
UPDATE operation. Of course, to be chosen, those access paths must provide
efficient access to the data.
This section begins with “Definitions and examples” on page 694, below, and
continues with descriptions of the methods of joining that can be indicated in a plan
table:
v “Nested loop join (METHOD=1)” on page 696
v “Merge scan join (METHOD=2)” on page 697
v “Hybrid join (METHOD=4)” on page 699
v “Star schema (star join)” on page 701
(Join diagram: the composite table TJ is joined to the new table TK by a nested
loop join (Method 1); the resulting composite, through a sorted work file, is
joined to the new table TL by a merge scan join (Method 2) to produce the result.)
A join operation can involve more than two tables. But the operation is carried out in
a series of steps. Each step joins only two tables.
Definitions: The composite table (or outer table) in a join operation is the table
remaining from the previous step, or it is the first table accessed in the first step. (In
the first step, then, the composite table is composed of only one table.) The new
table (or inner table) in a join operation is the table newly accessed in the step.
Example: Figure 199 shows a subset of columns in a plan table. In four steps,
DB2:
1. Accesses the first table (METHOD=0), named TJ (TNAME), which becomes the
composite table in step 2.
2. Joins the new table TK to TJ, forming a new composite table.
3. Sorts the new table TL (SORTN_JOIN=Y) and the composite table
(SORTC_JOIN=Y), and then joins the two sorted tables.
4. Sorts the final composite table (TNAME is blank) into the desired order
(SORTC_ORDERBY=Y).
Two kinds of joins differ in what they do with rows in one table that do not match on
the join condition with any row in the other table:
v An inner join discards rows of either table that do not match any row of the other
table.
v An outer join keeps unmatched rows of one or the other table, or of both. A row
in the composite table that results from an unmatched row is filled out with null
values. Outer joins are distinguished by which unmatched rows they keep.
Table 76. Join types and kept unmatched rows
This outer join: Keeps unmatched rows from:
Left outer join The composite (outer) table
Right outer join The new (inner) table
Full outer join Both tables
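The join types in Table 76 can be sketched in Python, with None standing in for SQL null values (an illustrative sketch, not DB2's join implementation; right outer join is omitted because, as noted below, DB2 converts it to a left outer join):

```python
def join(composite, new, join_type="INNER"):
    """Join rows of (key, payload) pairs on equal keys."""
    out = []
    for ck, cv in composite:
        matched = False
        for nk, nv in new:
            if ck == nk:
                out.append((ck, cv, nk, nv))
                matched = True
        if not matched and join_type in ("LEFT", "FULL"):
            out.append((ck, cv, None, None))      # keep unmatched composite row
    if join_type == "FULL":
        composite_keys = {ck for ck, _ in composite}
        for nk, nv in new:
            if nk not in composite_keys:
                out.append((None, None, nk, nv))  # keep unmatched new row
    return out

comp = [(1, "a"), (2, "b")]
new = [(2, "x"), (3, "y")]
print(join(comp, new))          # inner: unmatched rows discarded
print(join(comp, new, "LEFT"))  # keeps (1, "a", None, None)
print(join(comp, new, "FULL"))  # also keeps (None, None, 3, "y")
```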
Example: Figure 200 shows an outer join with a subset of the values it produces in
a plan table for the applicable rows. Column JOIN_TYPE identifies the type of outer
join with one of these values:
v F for FULL OUTER JOIN
v L for LEFT OUTER JOIN
v Blank for INNER JOIN or no join
At execution, DB2 converts every right outer join to a left outer join; thus
JOIN_TYPE never identifies a right outer join specifically.
Figure 200. Plan table output for an example with outer joins
Materialization with outer join: Sometimes DB2 has to materialize a result table
when an outer join is used in conjunction with other joins, views, or nested table
expressions. You can tell when this happens by looking at the TABLE_TYPE and
TNAME columns of the plan table. When materialization occurs, TABLE_TYPE
contains a W.
SELECT A, B, X, Y
FROM (SELECT A, B FROM OUTERT WHERE A=10)
LEFT JOIN INNERT ON B=X;
Stage 1 and stage 2 predicates eliminate unqualified rows during the join. (For an
explanation of those types of predicate, see “Stage 1 and stage 2 predicates” on
page 630.) DB2 can scan either table using any of the available access methods,
including table space scan.
Performance considerations
The nested loop join repetitively scans the inner table. That is, DB2 scans the outer
table once, and scans the inner table as many times as the number of qualifying
rows in the outer table. Hence, the nested loop join is usually the most efficient join
method when the values of the join column passed to the inner table are in
sequence and the index on the join column of the inner table is clustered, or the
number of rows retrieved in the inner table through the index is small.
When it is used
Nested loop join is often used if:
v The outer table is small.
v Predicates with small filter factors reduce the number of qualifying rows in the
outer table.
Example: left outer join: Figure 201 on page 696 illustrates a nested loop for a
left outer join. The outer join preserves the unmatched row in OUTERT with values
A=10 and B=6. The same join method for an inner join differs only in discarding that
row.
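A minimal sketch of the nested loop method, assuming rows shaped like the left outer join example (the values are invented for illustration; the counter makes the repeated inner scans visible):

```python
outer = [(10, 1), (10, 2), (10, 3), (10, 6)]         # OUTERT rows (A, B)
inner = [(1, "Davis"), (2, "Jones"), (3, "Brown")]   # INNERT rows (X, Y)

inner_scans = 0
result = []
for a, b in outer:                  # outer table scanned once
    inner_scans += 1                # inner table scanned per qualifying outer row
    matches = [(x, y) for x, y in inner if x == b]
    if matches:
        for x, y in matches:
            result.append((a, b, x, y))
    else:
        result.append((a, b, None, None))   # left outer join keeps the row

print(inner_scans)   # 4 inner scans
print(result[-1])    # (10, 6, None, None): the preserved unmatched row
```

An inner join would differ only in skipping the `else` branch that preserves the unmatched row.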
Example: one-row table priority: For a case like the example below, with a
unique index on T1.C2, DB2 detects that T1 has only one row that satisfies the
search condition. DB2 makes T1 the first table in a nested loop join.
SELECT * FROM T1, T2
WHERE T1.C1 = T2.C1 AND
T1.C2 = 5;
Example: Cartesian join with small tables first: A Cartesian join is a form of
nested loop join in which there are no join predicates between the two tables. DB2
usually avoids a Cartesian join, but sometimes it is the most efficient method, as in
the example below. The query uses three tables: T1 has 2 rows, T2 has 3 rows,
and T3 has 10 million rows.
SELECT * FROM T1, T2, T3
WHERE T1.C1 = T3.C1 AND
T2.C2 = T3.C2 AND
T3.C3 = 5;
Join predicates are between T1 and T3 and between T2 and T3. There is no join
predicate between T1 and T2.
Assume that 5 million rows of T3 have the value C3=5. Processing time is large if
T3 is the outer table of the join and tables T1 and T2 are accessed for each of 5
million rows.
But if all rows from T1 and T2 are joined, without a join predicate, the 5 million rows
are accessed only six times, once for each row in the Cartesian join of T1 and T2. It
is difficult to say which access path is the most efficient. DB2 evaluates the different
options and could decide to access the tables in the sequence T1, T2, T3.
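The access-count arithmetic above can be made concrete with a short sketch (illustrative arithmetic only, not the optimizer's actual cost model):

```python
t1_rows, t2_rows = 2, 3
t3_qualifying = 5_000_000            # rows of T3 that satisfy C3 = 5

# T3 as the outer table: T1 and T2 are each probed once per qualifying T3 row.
probes_t3_outer = t3_qualifying * 2

# Cartesian join of T1 and T2 first: 6 composite rows, and the qualifying
# T3 rows are accessed once per composite row.
cartesian_rows = t1_rows * t2_rows   # 6
t3_accesses = cartesian_rows         # six passes over the qualifying T3 rows

print(probes_t3_outer)   # 10000000 probes
print(t3_accesses)       # 6 accesses
```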
Sorting the composite table: Your plan table could show a nested loop join that
includes a sort on the composite table. DB2 might sort the composite table (the
outer table in Figure 201) if the following conditions exist:
v The join columns in the composite table and the new table are not in the same
sequence.
v The join column of the composite table has no index.
v The index is poorly clustered.
Nested loop join with a sorted composite table uses sequential detection efficiently
to prefetch data pages of the new table, reducing the number of synchronous I/O
operations and the elapsed time.
SELECT A, B, X, Y
FROM OUTER, INNER
WHERE A=10 AND B=X;
DB2 scans both tables in the order of the join columns. If no efficient indexes on
the join columns provide the order, DB2 might sort the outer table, the inner table,
or both. The inner table is put into a work file; the outer table is put into a work file
only if it must be sorted. When a row of the outer table matches a row of the inner
table, DB2 returns the combined rows.
DB2 then reads another row of the inner table that might match the same row of
the outer table and continues reading rows of the inner table as long as there is a
match. When there is no longer a match, DB2 reads another row of the outer table.
v If that row has the same value in the join column, DB2 reads again the matching
group of records from the inner table. Thus, a group of duplicate records in the
inner table is scanned as many times as there are matching records in the outer
table.
v If the outer row has a new value in the join column, DB2 searches ahead in the
inner table. It can find any of the following rows:
– Unmatched rows in the inner table, with lower values in the join column.
– A new matching inner row. DB2 then starts the process again.
– An inner row with a higher value of the join column. Now the row of the outer
table is unmatched. DB2 searches ahead in the outer table, and can find any
of the following rows:
- Unmatched rows in the outer table.
- A new matching outer row. DB2 then starts the process again.
- An outer row with a higher value of the join column. Now the row of the
inner table is unmatched, and DB2 resumes searching the inner table.
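The scanning behavior described above can be sketched as a merge join over two inputs sorted on the join column (illustrative Python, not DB2 internals; note how a group of duplicate inner rows is re-read for each matching outer row):

```python
def merge_scan_join(outer, inner):
    """Merge scan over (key, payload) rows, both inputs sorted on key."""
    result = []
    i = 0
    for ok, ov in outer:
        # search ahead past unmatched inner rows with lower join-column values
        while i < len(inner) and inner[i][0] < ok:
            i += 1
        j = i
        while j < len(inner) and inner[j][0] == ok:   # read the matching group
            result.append((ok, ov, inner[j][1]))
            j += 1
        # i stays at the start of the group, so an outer row with the same
        # key re-reads the same group of inner records
    return result

outer = [(1, "a"), (2, "b"), (2, "c"), (4, "d")]
inner = [(2, "x"), (2, "y"), (3, "z")]
print(merge_scan_join(outer, inner))
```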
Performance considerations
A full outer join by this method uses all predicates in the ON clause to match the
two tables and reads every row at the time of the join. Inner and left outer joins use
only stage 1 predicates in the ON clause to match the tables. If your tables match
on more than one column, it is generally more efficient to put all the predicates for
the matches in the ON clause, rather than to leave some of them in the WHERE
clause.
For an inner join, DB2 can derive extra predicates for the inner table at bind time
and apply them to the sorted outer table to be used at run time. The predicates can
reduce the size of the work file needed for the inner table.
If DB2 has used an efficient index on the join columns to retrieve the rows of the
inner table, those rows are already in sequence. DB2 puts the data directly into the
work file without sorting the inner table, which reduces the elapsed time.
When it is used
A merge scan join is often used if:
v The qualifying rows of the inner and outer table are large, and the join predicate
does not provide much filtering; that is, in a many-to-many join.
v The tables are large and have no indexes with matching columns.
v Few columns are selected on inner tables. This is the case when a DB2 sort is
used. The fewer the columns to be sorted, the more efficient the sort is.
(Figure 203, schematic: a hybrid join of OUTER (columns A, B; index 1 on B) with
INNER (columns X, Y; index 2 on X). Step 1 scans OUTER through index 1. Step 2
joins the outer data with RIDs obtained from index 2 on X=B, forming the phase-1
intermediate table of outer data and inner RIDs. Step 3 sorts the RID list into
page-number order, giving the phase-2 intermediate table. Step 4 retrieves the
inner rows with list prefetch, and step 5 produces the composite table.)
Method of joining
The method requires obtaining RIDs in the order needed to use list prefetch. The
steps are shown in Figure 203. In that example, both the outer table (OUTER) and
the inner table (INNER) have indexes on the join columns.
Performance considerations
Hybrid join uses list prefetch more efficiently than nested loop join, especially if
there are indexes on the join predicate with low cluster ratios. It also processes
duplicates more efficiently because the inner table is scanned only once for each
set of duplicate values in the join column of the outer table.
If the index on the inner table is highly clustered, there is no need to sort the
intermediate table (SORTN_JOIN=N). The intermediate table is placed in a table in
memory rather than in a work file.
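A simplified sketch of the phases that Figure 203 illustrates (the row values and RIDs are assumptions for illustration, not DB2 internals):

```python
outer = [(10, 1), (10, 1), (10, 2), (10, 3)]            # OUTER rows (A, B)
inner_index = {1: ["P5"], 2: ["P2", "P7"], 3: ["P4"]}   # index on X: X -> RIDs
inner_pages = {"P2": "Jones", "P4": "Brown", "P5": "Davis", "P7": "Smith"}

# Phase 1: join the outer data with RIDs from the inner index (X = B).
intermediate = [(a, b, rid) for a, b in outer for rid in inner_index.get(b, [])]

# Sort the intermediate table into RID (page number) order (SORTN_JOIN = 'Y').
intermediate.sort(key=lambda t: t[2])

# Phase 2: fetch the inner rows in RID order, as list prefetch would.
composite = [(a, b, inner_pages[rid]) for a, b, rid in intermediate]
print(composite)
```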
When it is used
Hybrid join is often used if:
v A nonclustered index or indexes are used on the join columns of the inner table.
v The outer table has duplicate qualifying rows.
You can think of the fact table, which is much larger than the dimension tables, as
being in the center surrounded by dimension tables; the result resembles a star
formation. The following diagram illustrates the star formation:
(Diagram: the fact table in the center, surrounded by four dimension tables.)
Figure 204. Star schema with a fact table and dimension tables
Example
For an example of a star schema, consider the following scenario. A star schema is
composed of a fact table for sales, with dimension tables connected to it for time,
products, and geographic locations. The time table has an ID for each month, its
quarter, and the year. The product table has an ID for each product item and its
class and its inventory. The geographic location table has an ID for each city and its
country.
In this scenario, the sales table contains three columns with IDs from the dimension
tables for time, product, and location instead of three columns for time, three
columns for products, and two columns for location. Thus, the size of the fact table
is greatly reduced. In addition, if you needed to change an item, you would do it
once in a dimension table instead of several times for each instance of the item in
the fact table.
You can create even more complex star schemas by breaking a dimension table
into a fact table with its own dimension tables. The fact table would be connected to
the main fact table.
# Star join, which can reduce bind time significantly, does not provide optimal
# performance in all cases. Performance of star join depends on a number of factors.
# For recommendations on indexes for star schemas, see “Creating indexes for
# efficient star schemas” on page 666.
Examples: query with three dimension tables: Suppose you have a store in
San Jose and want information about sales of audio equipment from that store in
2000. For this example, you want to join the following tables:
v A fact table for SALES (S)
v A dimension table for TIME (T) with columns for an ID, month, quarter, and year
v A dimension table for geographic LOCATION (L) with columns for an ID, city,
region, and country
v A dimension table for PRODUCT (P) with columns for an ID, product item, class,
and inventory
Figure 206. Plan table output for a star join example with PRODUCT and LOCATION
If DB2 does not choose prefetch at bind time, it can sometimes use it at execution
time nevertheless. The method is described in “Sequential detection at execution
time” on page 707.
For certain utilities (LOAD, REORG, RECOVER), the prefetch quantity can be twice
as much.
When it is used: Sequential prefetch is generally used for a table space scan.
For an index scan that accesses 8 or more consecutive data pages, DB2 requests
sequential prefetch at bind time. The index must have a cluster ratio of 80% or
higher. Both data pages and index pages are prefetched.
List prefetch can be used in conjunction with either single or multiple index access.
List prefetch does not preserve the data ordering given by the index. Because the
RIDs are sorted in page number order before accessing the data, the data is not
retrieved in order by any column. If the data must be ordered for an ORDER BY
clause or any other reason, it requires an additional sort.
In a hybrid join, if the index is highly clustered, the page numbers might not be
sorted before accessing the data.
When it is used
List prefetch is used:
v Usually with a single index that has a cluster ratio lower than 80%
v Sometimes on indexes with a high cluster ratio, if the estimated amount of data
to be accessed is too small to make sequential prefetch efficient, but large
enough to require more than one regular read
v Always to access data by multiple index access
v Always to access data from the inner table during a hybrid join
During execution, DB2 ends list prefetching if more than 25% of the rows in the
table (with a minimum of 4075) must be accessed. Record IFCID 0125 in the
performance trace, mapped by macro DSNDQW01, indicates whether list prefetch
ended.
When list prefetch ends, the query continues processing by a method that depends
on the current access path.
v For access through a single index or through the union of RID lists from two
indexes, processing continues by a table space scan.
v For index access before forming an intersection of RID lists, processing
continues with the next step of multiple index access. If no step remains and no
RID list has been accumulated, processing continues by a table space scan.
While forming an intersection of RID lists, if any list has 32 or fewer RIDs,
intersection stops and the list of 32 or fewer RIDs is used to access the data.
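The thresholds above can be sketched as simple predicates (the values 25%, 4075, and 32 come from the text; the actual decision logic inside DB2 is more involved):

```python
def list_prefetch_ends(rids_accessed, table_rows):
    """List prefetch ends if more than 25% of the table's rows
    (with a minimum of 4075) must be accessed."""
    return rids_accessed > max(table_rows // 4, 4075)

def stop_intersection(rid_list):
    """While intersecting RID lists, stop if any list has 32 or fewer RIDs,
    and use that list to access the data."""
    return len(rid_list) <= 32

print(list_prefetch_ends(5000, 10_000))   # True: 5000 > max(2500, 4075)
print(list_prefetch_ends(3000, 100_000))  # False: 3000 <= max(25000, 4075)
print(stop_intersection(range(20)))       # True: 20 <= 32
```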
When it is used
DB2 can use sequential detection for both index leaf pages and data pages. It is
most commonly used on the inner table of a nested loop join, if the data is
accessed sequentially.
If a table is accessed repeatedly using the same statement (for example, DELETE
in a do-while loop), the data or index leaf pages of the table can be accessed
sequentially. This is common in a batch processing environment. Sequential
detection can then be used if access is through:
v SELECT or FETCH statements
v UPDATE and DELETE statements
v INSERT statements when existing data pages are accessed sequentially
Sequential detection is not used for an SQL statement that is subject to referential
constraints.
When data access sequential is first declared, which is called initial data access
sequential, three page ranges are calculated as follows:
v Let A be the page being requested. RUN1 is defined as the page range of length
P/2 pages starting at A.
v Let B be page A + P/2. RUN2 is defined as the page range of length P/2 pages
starting at B.
v Let C be page B + P/2. RUN3 is defined as the page range of length P pages
starting at C.
For example, assume that page A is page 10 and that the prefetch quantity P is 32
pages. DB2 calculates the following page ranges:
Range Starting page Length in pages
RUN1 10 (A) 16 (P/2)
RUN2 26 (B) 16 (P/2)
RUN3 42 (C) 32 (P)
For initial data access sequential, prefetch is requested starting at page A for P
pages (RUN1 and RUN2). The prefetch quantity is always P pages.
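The RUN1/RUN2/RUN3 calculation can be sketched directly, using the example's values of A = 10 and P = 32:

```python
def initial_runs(a, p):
    """Page ranges calculated when initial data access sequential is declared.
    Each range is (starting page, length in pages)."""
    run1 = (a, p // 2)              # RUN1: starts at A, length P/2
    run2 = (a + p // 2, p // 2)     # RUN2: starts at B = A + P/2, length P/2
    run3 = (a + p, p)               # RUN3: starts at C = B + P/2, length P
    return run1, run2, run3

print(initial_runs(10, 32))   # ((10, 16), (26, 16), (42, 32))
```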
For subsequent page requests where the page is 1) page sequential and 2) data
access sequential is still in effect, prefetch is requested as follows:
v If the desired page is in RUN1, then no prefetch is triggered because it was
already triggered when data access sequential was first declared.
If a data access pattern develops such that data access sequential is no longer in
effect and, thereafter, a new pattern develops that is sequential as described above,
then initial data access sequential is declared again and handled accordingly.
Because, at bind time, the number of pages to be accessed can only be estimated,
sequential detection acts as a safety net and is employed when the data is being
accessed sequentially.
In extreme situations, when certain buffer pool thresholds are reached, sequential
prefetch can be disabled. For a description of buffer pools and thresholds, see Part
5 (Volume 2) of DB2 Administration Guide.
Sorts of data
After you run EXPLAIN, DB2 sorts are indicated in PLAN_TABLE. The sorts can be
either sorts of the composite table or the new table. If a single row of PLAN_TABLE
has a ’Y’ in more than one of the sort composite columns, then one sort
accomplishes two things. (DB2 will not perform two sorts when two ’Y’s are in the
same row.) For instance, if both SORTC_ORDERBY and SORTC_UNIQ are ’Y’ in
one row of PLAN_TABLE, then a single sort puts the rows in order and removes
any duplicate rows as well.
The only reason DB2 sorts the new table is for join processing, which is indicated
by SORTN_JOIN.
The performance of the sort by the GROUP BY clause is improved when the query
accesses a single table and when the GROUP BY column has no index.
Sorts of RIDs
To perform list prefetch, DB2 sorts RIDs into ascending page number order. This
sort is very fast and is done totally in memory. A RID sort is usually not indicated in
the PLAN_TABLE, but a RID sort normally is performed whenever list prefetch is
used. The only exception to this rule is when a hybrid join is performed and a
single, highly clustered index is used on the inner table. In this case SORTN_JOIN
is ’N’, indicating that the RID list for the inner table was not sorted.
Without parallelism:
v If no sorts are required, then OPEN CURSOR does not access any data. It is at
the first fetch that data is returned.
v If a sort is required, then the OPEN CURSOR causes the materialized result
table to be produced. Control returns to the application after the result table is
materialized. If a cursor that requires a sort is closed and reopened, the sort is
performed again.
v If there is a RID sort, but no data sort, then it is not until the first row is fetched
that the RID list is built from the index and the first data record is returned.
Subsequent fetches access the RID pool to access the next data record.
With parallelism:
v At OPEN CURSOR, parallelism is asynchronously started, regardless of whether
a sort is required. Control returns to the application immediately after the
parallelism work is started.
v If there is a RID sort, but no data sort, then parallelism is not started until the first
fetch. This works the same way as with no parallelism.
Merge
The merge process is more efficient than materialization, as described in
“Performance of merge versus materialization” on page 716. In the merge process,
the statement that references the view or table expression is combined with the
fullselect that defined the view or table expression. This combination creates a
logically equivalent statement. This equivalent statement is executed against the
database.
Consider the following statements, one of which defines a view, the other of which
references the view:
View-defining statement: View referencing statement:
Here is another example of when a view and table expression can be merged:
SELECT * FROM V1 X
LEFT JOIN
(SELECT * FROM T2) Y ON X.C1=Y.C1
LEFT JOIN T3 Z ON X.C1=Z.C1;
| Merged statement:
|
| SELECT * FROM V1 X
| LEFT JOIN
| T2 ON X.C1 = T2.C1
| LEFT JOIN T3 Z ON X.C1 = Z.C1;
Materialization
Views and table expressions cannot always be merged. Look at the following
statements:
View defining statement: View referencing statement:
Column VC1 occurs as the argument of a column function in the view referencing
statement. The values of VC1, as defined by the view-defining fullselect, are the
result of applying the column function SUM(C1) to groups after grouping the base
table T1 by column C2. No equivalent single SQL SELECT statement can be
executed against the base table T1 to achieve the intended result. There is no way
to specify that column functions should be applied successively.
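The need for two successive aggregation steps can be illustrated in Python; the particular functions used here (SUM per group, then MAX over the results) are assumptions for illustration, since the defining and referencing statements are not reproduced in the text:

```python
from collections import defaultdict

t1 = [("A", 1), ("A", 2), ("B", 10)]       # base table rows as (C2, C1)

# Step 1, the view-defining fullselect: SUM(C1) grouped by C2.
sums = defaultdict(int)
for c2, c1 in t1:
    sums[c2] += c1                         # VC1 values: {"A": 3, "B": 10}

# Step 2, the referencing statement: a column function applied over VC1.
# No single grouping pass over t1 computes this; the first aggregation
# must be materialized before the second can run.
print(max(sums.values()))                  # 10
```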
Table 78 indicates some cases in which materialization occurs. DB2 can also use
materialization in statements that contain multiple outer joins, outer joins that
combine with inner joins, or merges that cause a join of greater than 15 tables.
Table 78. Cases when DB2 performs view or table expression materialization. The ″X″ indicates a case of
materialization. Notes follow the table.
A SELECT FROM a view        View definition or table expression uses...(2)
or a table expression       GROUP BY  DISTINCT  Column    Column    UNION(4)  UNION
uses...(1)                                      function  function            ALL(4)
                                                          DISTINCT
Joins (3)                      X         X         X         X         X        -
GROUP BY                       X         X         X         X         X        -
DISTINCT                       -         X         -         X         X        -
Column function                X         X         X         X         X        X
(without GROUP BY)
Column function DISTINCT       X         X         X         X         X        -
SELECT subset of view or       -         X         -         -         X        -
table expression columns
# When DB2 chooses materialization, TNAME contains the name of the view or table
# expression and TABLE_TYPE contains a W. A value of Q in TABLE_TYPE for the
# name of a view or nested table expression indicates that the materialization was
# virtual and not actual. (Materialization can be virtual when the view or nested table
# expression definition contains a UNION ALL that is not distributed.) When DB2
chooses merge, EXPLAIN data for the merged statement appears in PLAN_TABLE;
only the names of the base tables on which the view or table expression is defined
appear.
# Examples: Consider the following statements, which define a view and reference
# the view. Figure 208 on page 714 shows a subset of columns in a plan table for the
# query. Notice how TNAME contains the name of the view and TABLE_TYPE
# contains W to indicate that DB2 chooses materialization for the reference to the
# view because of the use of SELECT DISTINCT in the view definition.
#
## QBLOCKNO PLANNO QBLOCK_TYPE TNAME TABLE_TYPE METHOD
#  1        1      SELECT      DEPT  T          0
#  2        1      NOCOSUB     V1DIS W          0
#  2        2      NOCOSUB           ?          3
#  3        1      NOCOSUB     EMP   T          0
#  3        2      NOCOSUB           ?          3
#
# Figure 208. Plan table output for an example with view materialization
#
# As the following statements and sample plan table output show, had the VIEW been
# defined without DISTINCT, DB2 would choose merge instead of materialization. In
# the sample output, the name of the view does not appear in the plan table, but the
# table name on which the view is based does appear.
# View defining statement:
#
# CREATE VIEW V1NODIS (SALARY, WORKDEPT) as
# (SELECT SALARY, WORKDEPT FROM DSN8810.EMP)
#
# View referencing statement:
#
# SELECT * FROM DSN8810.DEPT
# WHERE DEPTNO IN (SELECT WORKDEPT FROM V1NODIS)
#
## QBLOCKNO PLANNO QBLOCK_TYPE TNAME TABLE_TYPE METHOD
#  1        1      SELECT      DEPT  T          0
#  2        1      NOCOSUB     EMP   T          0
#  2        2      NOCOSUB           ?          3
#
# Figure 209. Plan table output for an example with view merge
#
# For an example of when a view definition contains a UNION ALL and DB2 can
# distribute joins and aggregations and avoid materialization, see “Using EXPLAIN to
# determine UNION activity and query rewrite” on page 715. When DB2 avoids
# materialization in such cases, TABLE_TYPE contains a Q to indicate that DB2 uses
# an intermediate result that is not materialized and TNAME shows the name of this
# intermediate result as DSNWFQB(xx), where xx is the number of the query block
# that produced the result.
| The QBLOCK_TYPE column in the plan table indicates union activity. For a UNION
| ALL, the column contains ’UNIONA’. For UNION, the column contains ’UNION’.
| When QBLOCK_TYPE=’UNION’, the METHOD column on the same row is set to 3
| and the SORTC_UNIQ column is set to ’Y’ to indicate that a sort is necessary to
| remove duplicates. As with other views and table expressions, the plan table also
| shows when DB2 uses materialization instead of merge.
| Example: Consider the following statements, which define a view, reference the
| view, and show how DB2 rewrites the referencing statement. Figure 210 on
| page 716 shows a subset of columns in a plan table for the query. Notice how DB2
| eliminates the second subselect of the view definition from the rewritten query and
| how the plan table indicates this removal by showing a UNION ALL for only the first
# and third subselect in the view definition. The Q in the TABLE_TYPE column
# indicates that DB2 does not materialize the view.
| View defining statement: View is created on three tables that contain weekly data
|
| CREATE VIEW V1 (CUSTNO, CHARGES, DATE) as
| SELECT CUSTNO, CHARGES, DATE
| FROM WEEK1
| WHERE DATE BETWEEN '01/01/2000' And '01/07/2000'
| UNION ALL
| SELECT CUSTNO, CHARGES, DATE
| FROM WEEK2
| WHERE DATE BETWEEN '01/08/2000' And '01/14/2000'
| UNION ALL
| SELECT CUSTNO, CHARGES, DATE
| FROM WEEK3
| WHERE DATE BETWEEN '01/15/2000' And '01/21/2000';
|
| View referencing statement: For each customer in California, find the average charges
| during the first and third Friday of January 2000
|
| SELECT V1.CUSTNO, AVG(V1.CHARGES)
| FROM CUST, V1
| WHERE CUST.CUSTNO=V1.CUSTNO
| AND CUST.STATE='CA'
| AND DATE IN ('01/07/2000','01/21/2000')
| GROUP BY V1.CUSTNO;
|
| Rewritten statement (assuming that CHARGES is defined as NOT NULL):
|
| SELECT CUSTNO_U, SUM(SUM_U)/SUM(CNT_U)
| FROM
| ( SELECT WEEK1.CUSTNO, SUM(CHARGES), COUNT(CHARGES)
| FROM CUST, WEEK1
| Where CUST.CUSTNO=WEEK1.CUSTNO AND CUST.STATE='CA'
| AND DATE BETWEEN '01/01/2000' And '01/07/2000'
| AND DATE IN ('01/07/2000','01/21/2000')
| GROUP BY WEEK1.CUSTNO
| UNION ALL
|
|| QBLOCKNO PLANNO TNAME       TABLE_TYPE METHOD QBLOCK_TYPE PARENT_QBLOCKNO
|# 1        1      DSNWFQB(02) Q          0                  0
|# 1        2                  ?          3                  0
|# 2        1                  ?          0      UNIONA      1
|  3        1      CUST        T          0                  2
|  3        2      WEEK1       T          1                  2
|  4        1      CUST        T          0                  2
|  4        2      WEEK3       T          2                  2
|
| Figure 210. Plan table output for an example with a view with UNION ALLs
|
| Performance of merge versus materialization
Merge performs better than materialization. For materialization, DB2 uses a table
space scan to access the materialized temporary result. DB2 materializes a view or
table expression only if it cannot merge.
Note: Where ″op″ is =, <>, >, <, <=, or >=, and literal is either a host variable, constant, or
special register. The literals in the BETWEEN predicate need not be identical.
Table 80 shows the content of each column. The first five columns of the
DSN_STATEMNT_TABLE are the same as PLAN_TABLE.
Table 80. Descriptions of columns in DSN_STATEMNT_TABLE
Column Name Description
QUERYNO A number that identifies the statement being explained. See the description of the
QUERYNO column in Table 74 on page 673 for more information. If QUERYNO is not
unique, the value of EXPLAIN_TIME is unique.
APPLNAME The name of the application plan for the row, or blank. See the description of the
APPLNAME column in Table 74 on page 673 for more information.
PROGNAME The name of the program or package containing the statement being explained, or
blank. See the description of the PROGNAME column in Table 74 on page 673 for
more information.
COLLID The collection ID for the package, or blank. See the description of the COLLID column
in Table 74 on page 673 for more information.
GROUP_MEMBER The member name of the DB2 that executed EXPLAIN, or blank. See the description
of the GROUP_MEMBER column in Table 74 on page 673 for more information.
EXPLAIN_TIME The time at which the statement is processed. This time is the same as the
BIND_TIME column in PLAN_TABLE.
STMT_TYPE The type of statement being explained. Possible values are:
SELECT SELECT
INSERT INSERT
UPDATE UPDATE
DELETE DELETE
SELUPD SELECT with FOR UPDATE OF
DELCUR DELETE WHERE CURRENT OF CURSOR
UPDCUR UPDATE WHERE CURRENT OF CURSOR
COST_CATEGORY Indicates if DB2 was forced to use default values when making its estimates. Possible
values:
A Indicates that DB2 had enough information to make a cost estimate without
using default values.
B Indicates that some condition exists for which DB2 was forced to use default
values. See the values in REASON to determine why DB2 was unable to put
this estimate in cost category A.
Just as with the plan table, DB2 just adds rows to the statement table; it does not
automatically delete rows. INSERT triggers are not activated unless you insert rows
yourself using an SQL INSERT statement.
To clear the table of obsolete rows, use DELETE, just as you would for deleting
rows from any table. You can also use DROP TABLE to drop a statement table
completely.
Similarly, if system administrators use these estimates as input into the resource
limit specification table for governing (either predictive or reactive), they probably
would want to give much greater latitude for statements in cost category B than for
those in cost category A.
What goes into cost category B? DB2 puts a statement’s estimate into cost
category B when any of the following conditions exist:
v The statement has UDFs.
v Triggers are defined for the target table:
– The statement is INSERT, and insert triggers are defined on the target table.
– The statement is UPDATE, and update triggers are defined on the target
table.
– The statement is DELETE, and delete triggers are defined on the target table.
v The target table of a delete statement has referential constraints defined on it as
the parent table, and the delete rules are either CASCADE or SET NULL.
v The WHERE clause predicate has one of the following forms:
– COL op literal, and the literal is a host variable, parameter marker, or special
register. The operator can be >, >=, <, <=, LIKE, or NOT LIKE.
– COL BETWEEN literal AND literal where either literal is a host variable,
parameter marker, or special register.
– LIKE with an escape clause that contains a host variable.
v The cardinality statistics are missing for one or more tables that are used in the
statement.
| v A subselect in the SQL statement contains a HAVING clause.
What goes into cost category A? DB2 puts everything that doesn’t fall into
category B into category A.
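As an illustration, a statement like the following falls into cost category B, because the comparison value is a host variable rather than a literal (the host variable name here is illustrative):

```sql
SELECT COUNT(*)
  FROM ACCOUNTS
  WHERE BALANCE > :MINBAL;
```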
Query I/O parallelism manages concurrent I/O requests for a single query, fetching
pages into the buffer pool in parallel. This processing can significantly improve the
performance of I/O-bound queries. I/O parallelism is used only when one of the
other parallelism modes cannot be used.
Query CP parallelism enables true multi-tasking within a query. A large query can
be broken into multiple smaller queries. These smaller queries run simultaneously
on multiple processors accessing data in parallel. This reduces the elapsed time for
a query.
Parallel operations usually involve at least one table in a partitioned table space.
Scans of large partitioned table spaces have the greatest performance
improvements where both I/O and central processor (CP) operations can be carried
out in parallel.
Figure 212 shows sequential processing. With sequential processing, DB2 takes
the 3 partitions in order, completing partition 1 before starting to process partition 2,
and completing 2 before starting 3. Sequential prefetch allows overlap of CP
processing with I/O operations, but I/O operations do not overlap with each other. In
the example in Figure 212, a prefetch request takes longer than the time to process
it. The processor is frequently waiting for I/O.
Figure 212. Sequential processing (timeline of CP processing and I/O for partitions
P1, P2, and P3; the I/O operations for the partitions do not overlap one another).
Figure 213 shows parallel I/O operations. With parallel I/O, DB2 prefetches data
from the 3 partitions at one time. The processor processes the first request from
each partition, then the second request from each partition, and so on. The
processor is not waiting for I/O, but there is still only one processing task.
Figure 213. Parallel I/O processing (a single CP task processes requests while the
I/O for partitions P1, P2, and P3 proceeds in parallel).
Figure 214 on page 723 shows parallel CP processing. With CP parallelism, DB2
can use multiple parallel tasks to process the query. Three tasks working
concurrently can greatly reduce the overall elapsed time for data-intensive and
processor-intensive queries. The same principle applies for Sysplex query
parallelism, except that the work can cross the boundaries of a single CPC.
Figure 214. CP and I/O processing techniques. Query processing using CP parallelism. The
tasks can be contained within a single CPC or can be spread out among the members of a
data sharing group.
Queries that are most likely to take advantage of parallel operations: Queries
that can take advantage of parallel processing are:
v Those in which DB2 spends most of the time fetching pages—an I/O-intensive
query
A typical I/O-intensive query is something like the following query, assuming that
a table space scan is used on many pages:
SELECT COUNT(*) FROM ACCOUNTS
WHERE BALANCE > 0 AND
DAYS_OVERDUE > 30;
v Those in which DB2 spends a lot of processor time and also, perhaps, I/O time,
to process rows. Those include:
– Queries with intensive data scans and high selectivity. Those queries involve
large volumes of data to be scanned but relatively few rows that meet the
search criteria.
– Queries containing aggregate functions. Column functions (such as MIN,
MAX, SUM, AVG, and COUNT) usually involve large amounts of data to be
scanned but return only a single aggregate result.
– Queries accessing long data rows. Those queries access tables with long
data rows, and the ratio of rows per page is very low (one row per page, for
example).
– Queries requiring large amounts of central processor time. Those queries
might be read-only queries that are complex, data-intensive, or that involve a
sort.
A typical processor-intensive query is something like:
SELECT MAX(QTY_ON_HAND) AS MAX_ON_HAND,
AVG(PRICE) AS AVG_PRICE,
AVG(DISCOUNTED_PRICE) AS DISC_PRICE,
SUM(TAX) AS SUM_TAX,
SUM(QTY_SOLD) AS SUM_QTY_SOLD,
SUM(QTY_ON_HAND - QTY_BROKEN) AS QTY_GOOD,
AVG(DISCOUNT) AS AVG_DISCOUNT,
ORDERSTATUS,
COUNT(*) AS COUNT_ORDERS
Terminology: When the term task is used with information on parallel processing,
the context should be considered. For parallel query CP processing or Sysplex
query parallelism, task is an actual MVS execution unit used to process a query.
For parallel I/O processing, a task simply refers to the processing of one of the
concurrent I/O streams.
A parallel group is the term used to name a particular set of parallel operations
(parallel tasks or parallel I/O operations). A query can have more than one parallel
group, but each parallel group within the query is identified by its own unique ID
number.
The degree of parallelism is the number of parallel tasks or I/O operations that
DB2 determines can be used for the operations on the parallel group.
It is also possible to change the special register default from 1 to ANY for the
entire DB2 subsystem by modifying the CURRENT DEGREE field on installation
panel DSNTIP4.
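An individual application can also set the special register dynamically; for example:

```sql
SET CURRENT DEGREE = 'ANY';  -- let DB2 consider parallelism
SET CURRENT DEGREE = '1';    -- disable parallelism again
```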
v If you bind with isolation CS, also choose the option CURRENTDATA(NO), if
possible. This option can improve performance in general, but it also ensures
that DB2 will consider parallelism for ambiguous cursors. If you bind with
CURRENTDATA(YES) and DB2 cannot tell whether the cursor is read-only, DB2 does
not consider parallelism. It is best to always identify a read-only cursor by
specifying FOR FETCH ONLY or FOR READ ONLY on the DECLARE CURSOR
statement.
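For example, using the ACCOUNTS table from the earlier query, a read-only cursor might be declared as in this sketch (the ACCTNO column is illustrative):

```sql
EXEC SQL
  DECLARE C1 CURSOR FOR
    SELECT ACCTNO, BALANCE
      FROM ACCOUNTS
      WHERE DAYS_OVERDUE > 30
    FOR READ ONLY
END-EXEC.
```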
v The virtual buffer pool parallel sequential threshold (VPPSEQT) value must be
large enough to provide adequate buffer pool space for parallel processing. For a
description of buffer pools and thresholds, see Part 5 (Volume 2) of DB2
Administration Guide.
If you enable parallel processing and DB2 estimates that a given query’s I/O and
central processor cost is high, DB2 can activate multiple parallel tasks if it
estimates that doing so reduces the elapsed time.
For complex queries, run the query in parallel within a member of a data sharing
group. With Sysplex query parallelism, use the power of the data sharing group to
process individual complex queries on many members of the data sharing group.
For more information on how you can use the power of the data sharing group to
run complex queries, see Chapter 6 of DB2 Data Sharing: Planning and
Administration.
Limiting the degree of parallelism: If you want to limit the maximum number of
parallel tasks that DB2 generates, you can use the installation parameter MAX
DEGREE in the DSNTIP4 panel. Changing MAX DEGREE, however, is not the way
to turn parallelism off. You use the DEGREE bind parameter or CURRENT
DEGREE special register to turn parallelism off.
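For example, to turn parallelism off for a plan at bind time, you might specify DEGREE(1) on the bind (the plan and member names here are placeholders):

```
BIND PLAN(MYPLAN) MEMBER(MYPROG) ACTION(REPLACE) DEGREE(1)
```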
DB2 avoids certain hybrid joins when parallelism is enabled: To ensure that
you can take advantage of parallelism, DB2 does not pick one type of hybrid join
(SORTN_JOIN=Y) when the plan or package is bound with CURRENT
DEGREE=ANY or if the CURRENT DEGREE special register is set to ’ANY’.
It is possible for a parallel group to run at a parallel degree less than that shown in
the PLAN_TABLE output. The following factors can cause a reduced degree of parallelism:
v Buffer pool availability
v Logical contention.
The default value for CURRENT DEGREE is 1 unless your installation has
changed the default for the CURRENT DEGREE special register.
System controls can be used to disable parallelism, as well. These are described in
Part 5 (Volume 2) of DB2 Administration Guide.
The following sections discuss scenarios for interaction among your program, DB2,
and ISPF. Each has advantages and disadvantages in terms of efficiency, ease of
coding, ease of maintenance, and overall flexibility.
The DSN command processor (see “DSN command processor” on page 424)
permits only single task control block (TCB) connections. Take care not to change
the TCB after the first SQL statement. ISPF SELECT services change the TCB if
you started DSN under ISPF, so you cannot use these to pass control from load
module to load module. Instead, use LINK, XCTL, or LOAD.
Figure 215 on page 730 shows the task control blocks that result from attaching the
DSN command processor below TSO or ISPF.
If you are in ISPF and running under DSN, you can perform an ISPLINK to another
program, which calls a CLIST. In turn, the CLIST uses DSN and another
application. Each such use of DSN creates a separate unit of recovery (process or
transaction) in DB2.
All such initiated DSN work units are unrelated, with regard to isolation (locking)
and recovery (commit). It is possible to deadlock with yourself; that is, one unit
(DSN) can request a serialized resource (a data page, for example) that another
unit (DSN) holds incompatibly.
A COMMIT in one program applies only to that process. There is no facility for
coordinating the processes.
Advantages: The application has one large load module and one plan.
Disadvantages: For large programs of this type, you want a more modular design,
making the plan more flexible and easier to maintain. If you have one large plan,
you must rebind the entire plan whenever you change a module that includes SQL
statements. 2 You cannot pass control to another load module that makes SQL calls
by using ISPLINK; rather, you must use LINK, XCTL, or LOAD and BALR.
If you want to use ISPLINK, then call ISPF to run under DSN:
DSN
RUN PROGRAM(ISPF) PLAN(MYPLAN)
END
You then have to leave ISPF before you can start your application.
2. To achieve a more modular construction when all parts of the program use SQL, consider using packages. See “Chapter 16.
Planning for DB2 program preparation” on page 315.
When you use the ISPF SELECT service, you can specify whether ISPF should
create a new ISPF variable pool before calling the function. You can also break a
large application into several independent parts, each with its own ISPF variable
pool.
You can call different parts of the program in different ways. For example, you can
use the PGM option of ISPF SELECT:
PGM(program-name) PARM(parameters)
For a part that accesses DB2, the command can name a CLIST that starts DSN:
DSN
RUN PROGRAM(PART1) PLAN(PLAN1) PARM(input from panel)
END
Breaking the application into separate modules makes it more flexible and easier to
maintain. Furthermore, some of the application might be independent of DB2;
portions of the application that do not call DB2 can run, even if DB2 is not running.
A stopped DB2 database does not interfere with parts of the program that refer only
to other databases.
With the same modular structure as in the previous example, using CAF is likely to
provide greater efficiency by reducing the number of CLISTs. This does not mean,
however, that any DB2 function executes more quickly.
Disadvantages: Compared to the modular structure using DSN, the structure using
CAF is likely to require a more complex program, which in turn might require
assembler language subroutines. For more information, see “Chapter 29.
Programming for the call attachment facility (CAF)” on page 733.
Chapter 28. Programming for the Interactive System Productivity Facility (ISPF) 731
732 Application Programming and SQL Guide
Chapter 29. Programming for the call attachment facility (CAF)
An attachment facility is a part of the DB2 code that allows other programs to
connect to and use DB2 to process SQL statements, commands, or instrumentation
facility interface (IFI) calls. With the call attachment facility (CAF), your application
program can establish and control its own connection to DB2. Programs that run in
MVS batch, TSO foreground, and TSO background can use CAF.
It is also possible for IMS batch applications to access DB2 databases through
CAF, though that method does not coordinate the commitment of work between the
IMS and DB2 systems. We highly recommend that you use the DB2 DL/I batch
support for IMS batch applications.
CICS application programs must use the CICS attachment facility; IMS application
programs, the IMS attachment facility. Programs running in TSO foreground or TSO
background can use either the DSN command processor or CAF; each has
advantages and disadvantages.
Prerequisite knowledge: Analysts and programmers who consider using CAF must
be familiar with MVS concepts and facilities in the following areas:
v The CALL macro and standard module linkage conventions
v Program addressing and residency options (AMODE and RMODE)
v Creating and controlling tasks; multitasking
v Functional recovery facilities such as ESTAE, ESTAI, and FRRs
v Asynchronous events and TSO attention exits (STAX)
v Synchronization techniques such as WAIT/POST.
Task capabilities
Any task in an address space can establish a connection to DB2 through CAF.
There can be only one connection for each task control block (TCB). A DB2 service
request issued by a program running under a given task is associated with that
task’s connection to DB2. The service request operates independently of any DB2
activity under any other task.
Each connected task can run a plan. Multiple tasks in a single address space can
specify the same plan, but each instance of a plan runs independently from the
others. A task can terminate its plan and run a different plan without fully breaking
its connection to DB2.
CAF does not generate task structures, nor does it provide attention processing
exits or functional recovery routines. You can provide whatever attention handling
and functional recovery your application needs, but you must use ESTAE/ESTAI
type recovery routines and not Enabled Unlocked Task (EUT) FRR routines.
Programming language
You can write CAF applications in assembler language, C, COBOL, FORTRAN, and
PL/I. When choosing a language to code your application in, consider these
restrictions:
v If you need to use MVS macros (ATTACH, WAIT, POST, and so on), you must
choose a programming language that supports them or else embed them in
modules written in assembler language.
v The CAF TRANSLATE function is not available from FORTRAN. To use the
function, code it in a routine written in another language, and then call that
routine from FORTRAN.
You can find a sample assembler program (DSN8CA) and a sample COBOL
program (DSN8CC) that use the call attachment facility in library prefix.SDSNSAMP.
A PL/I application (DSN8SPM) calls DSN8CA, and a COBOL application
(DSN8SCM) calls DSN8CC. For more information on the sample applications and
on accessing the source code, see “Appendix B. Sample applications” on page 833.
Tracing facility
A tracing facility provides diagnostic messages that aid in debugging programs and
diagnosing errors in the CAF code. In particular, attempts to use CAF incorrectly
cause error messages in the trace stream.
Program preparation
Preparing your application program to run in CAF is similar to preparing it to run in
other environments, such as CICS, IMS, and TSO. You can prepare a CAF
application either in the batch environment or by using the DB2 program
preparation process. You can use the program preparation system either through
DB2I or through the DSNH CLIST. For examples and guidance in program
preparation, see “Chapter 20. Preparing an application program to run” on
page 397.
CAF requirements
When you write programs that use CAF, be aware of the following characteristics.
Use of LOAD
CAF uses MVS SVC LOAD to load two modules as part of the initialization
following your first service request. Both modules are loaded into fetch-protected
storage that has the job-step protection key. If your local environment intercepts and
replaces the LOAD SVC, then you must ensure that your version of LOAD
manages the load list element (LLE) and contents directory entry (CDE) chains like
the standard MVS LOAD macro.
Run environment
Applications requesting DB2 services must adhere to several run environment
characteristics. Those characteristics must be in effect regardless of the attachment
facility you use. They are not unique to CAF.
v The application must be running in TCB mode. SRB mode is not supported.
v An application task cannot have any EUT FRRs active when requesting DB2
services. If an EUT FRR is active, DB2’s functional recovery can fail, and your
application can receive some unpredictable abends.
v Different attachment facilities cannot be active concurrently within the same
address space. Therefore:
– An application must not use CAF in a CICS or IMS address space.
– An application that runs in an address space that has a CAF connection to
DB2 cannot connect to DB2 using RRSAF.
– An application that runs in an address space that has an RRSAF connection
to DB2 cannot connect to DB2 using CAF.
– An application cannot invoke the MVS AXSET macro after executing the CAF
CONNECT call and before executing the CAF DISCONNECT call.
v One attachment facility cannot start another. This means that your CAF
application cannot use DSN, and a DSN RUN subcommand cannot call your
CAF application.
v The language interface module for CAF, DSNALI, is shipped with the linkage
attributes AMODE(31) and RMODE(ANY). If your applications load CAF below
the 16MB line, you must link-edit DSNALI again.
There is no significant advantage to running DSN applications with CAF, and the
loss of DSN services can affect how well your program runs. We do not recommend
that you run DSN applications with CAF unless you provide an application controller
to manage the DSN application and replace any needed DSN functions. Even then,
you could have to change the application to communicate connection failures to the
controller correctly.
When the language interface is available, your program can make use of CAF
in two ways:
v Implicitly, by including SQL statements or IFI calls in your program just as you
would in any program. CAF establishes the connections to DB2 using
default values for the pertinent parameters described under “Implicit connections”
on page 738.
v Explicitly, by writing CALL DSNALI statements, providing the appropriate options.
For the general form of the statements, see “CAF function descriptions” on
page 741.
The first element of each option list is a function, which describes the action you
want CAF to take. For the available values of function and an approximation of their
effects, see “Summary of connection functions” on page 737. The effect of any
function depends in part on what functions the program has already run. Before
using any function, be sure to read the description of its usage. Also read
“Summary of CAF behavior” on page 753, which describes the influence of previous
functions.
OPEN
Allocates a DB2 plan. You must allocate a plan before DB2 can process SQL
statements. If you did not request the CONNECT function, OPEN implicitly
establishes the task, and optionally the address space, as a user of DB2. See
“OPEN: Syntax and usage” on page 747.
CLOSE
Optionally commits or abends any database changes and deallocates the plan.
If OPEN implicitly requests the CONNECT function, CLOSE removes the task,
and possibly the address space, as a user of DB2. See “CLOSE: Syntax and
usage” on page 749.
DISCONNECT
Removes the task as a user of DB2 and, if this is the last or only task in the
address space with a DB2 connection, terminates the address space
connection to DB2. See “DISCONNECT: Syntax and usage” on page 750.
TRANSLATE
Returns an SQLCODE and printable text in the SQLCA that describes a DB2
hexadecimal error reason code. See “TRANSLATE: Syntax and usage” on
page 751. You cannot call the TRANSLATE function from the FORTRAN
language.
Implicit connections
If you do not explicitly specify executable SQL statements in a CALL DSNALI
statement of your CAF application, CAF initiates implicit CONNECT and OPEN
requests to DB2. Although CAF performs these connection requests using the
default values defined below, the requests are subject to the same DB2 return
codes and reason codes as explicitly specified requests.
There are different types of implicit connections. The simplest is for the application
to run neither CONNECT nor OPEN. You can also use CONNECT only or OPEN only.
Each of these implicitly connects your application to DB2. To terminate an implicit
connection, you must use the proper calls. See Table 87 on page 753 for details.
If the implicit connection was successful, the application can examine the
SQLCODE for the first, and subsequent, SQL statements.
You can access the DSNALI module by either explicitly issuing LOAD requests
when your program runs, or by including the module in your load module when you
link-edit your program. There are advantages and disadvantages to each approach.
By explicitly loading the DSNALI module, you isolate the maintenance of
your application from future IBM service to the language interface. If the language
interface changes, the change will probably not affect your load module.
You must indicate to DB2 which entry point to use. You can do this in one of two
ways:
v Specify the precompiler option ATTACH(CAF).
This causes DB2 to generate calls that specify entry point DSNHLI2. You cannot
use this option if your application is written in FORTRAN.
v Code a dummy entry point named DSNHLI within your load module.
If you do not specify the precompiler option ATTACH, the DB2 precompiler
generates calls to entry point DSNHLI for each SQL request. The precompiler
does not know and is independent of the different DB2 attachment facilities.
When the calls generated by the DB2 precompiler pass control to DSNHLI, your
code corresponding to the dummy entry point must preserve the option list
passed in R1 and call DSNHLI2 specifying the same option list. For a coding
example of a dummy DSNHLI entry point, see “Using dummy entry point
DSNHLI” on page 763.
Link-editing DSNALI
You can include the CAF language interface module DSNALI in your load module
during a link-edit step. The module must be in a load module library, which is
included either in the SYSLIB concatenation or another INCLUDE library defined in
the linkage editor JCL. Because all language interface modules contain an entry
point declaration for DSNHLI, the linkage editor JCL must contain an INCLUDE
linkage editor control statement for DSNALI; for example, INCLUDE
DB2LIB(DSNALI). By coding these options, you avoid inadvertently picking up the
wrong language interface module.
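As a sketch, the link-edit step’s control statements might look like the following, assuming that the DB2LIB DD statement points to the load module library containing DSNALI and that MYPROG is your program’s entry point:

```
//LKED.SYSIN DD *
  INCLUDE DB2LIB(DSNALI)
  ENTRY   MYPROG
  NAME    MYPROG(R)
/*
```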
If you do not need explicit calls to DSNALI for CAF functions, including DSNALI in
your load module has some advantages. When you include DSNALI during the
link-edit, you need not code the previously described dummy DSNHLI entry point in
your program or specify the precompiler option ATTACH. Module DSNALI contains
an entry point for DSNHLI, which is identical to DSNHLI2, and an entry point
DSNWLI, which is identical to DSNWLI2.
A disadvantage to link-editing DSNALI into your load module is that any IBM service
to DSNALI requires a new link-edit of your load module.
Task termination
If a connected task terminates normally before the CLOSE function deallocates the
plan, then DB2 commits any database changes that the thread made since the last
commit point. If a connected task abends before the CLOSE function deallocates
the plan, then DB2 rolls back any database changes since the last commit point.
In either case, DB2 deallocates the plan, if necessary, and terminates the task’s
connection before it allows the task to terminate.
DB2 abend
If DB2 abends while an application is running, the application is rolled back to the
last commit point. If DB2 terminates while processing a commit request, DB2 either
A description of the call attachment register and parameter list conventions for assembler
language follows. After it, the syntax descriptions of specific functions describe
the parameters for those particular functions.
Register conventions
If you do not specify the return code and reason code parameters in your CAF
calls, CAF puts a return code in register 15 and a reason code in register 0. CAF
also supports high-level languages that cannot interrogate individual registers. See
Figure 217 on page 742 and the discussion following it for more information. The
contents of registers 2 through 14 are preserved across calls. You must conform to
the following standard calling conventions:
Register Usage
R1 Parameter list pointer (for details, see “Call DSNALI parameter list”)
R13 Address of caller’s save area
R14 Caller’s return address
R15 CAF entry point address
When you code CALL DSNALI statements, you must specify all parameters that
come before Return Code. You cannot omit any of those parameters by coding
zeros or blanks. There are no defaults for those parameters for explicit connection
service requests. Defaults are provided only for implicit connections.
For all languages except assembler language, code zero for a parameter in the
CALL DSNALI statement when you want to use the default value for that parameter
but specify subsequent parameters. For example, suppose you are coding a
CONNECT call in a COBOL program. You want to specify all parameters except
Return Code. Write the call in this way:
CALL 'DSNALI' USING FUNCTN SSID TECB SECB RIBPTR
BY CONTENT ZERO BY REFERENCE REASCODE SRDURA EIBPTR.
For an assembler language call, code a comma for a parameter in the CALL
DSNALI statement when you want to use the default value for that parameter but
specify subsequent parameters. For example, code a CONNECT call like this to
specify all optional parameters except Return Code:
| CALL DSNALI,(FUNCTN,SSID,TERMECB,STARTECB,RIBPTR,,REASCODE,SRDURA,EIBPTR,GROUPOVERRIDE)
Figure 217. The parameter list for a CONNECT call
Figure 217 illustrates how you can use the indicator 'end of parameter list' to control
the return code and reason code fields following a CAF CONNECT call. Each of
the illustrated termination points applies to all CAF parameter lists:
1. Terminates the parameter list without specifying the parameters retcode,
reascode, and srdura, and places the return code in register 15 and the reason
code in register 0.
Terminating at this point ensures compatibility with CAF programs that require a
return code in register 15 and a reason code in register 0.
2. Terminates the parameter list after the return code field, and places the return
code in the parameter list and the reason code in register 0.
Terminating at this point permits the application program to take action, based
on the return code, without further examination of the associated reason code.
3. Terminates the parameter list after the reason code field and places the return
code and the reason code in the parameter list.
Terminating at this point provides support to high-level languages that are
unable to examine the contents of individual registers.
If you code your CAF application in assembler language, you can specify this
parameter and omit the return code parameter. To do this, specify a comma as
a place-holder for the omitted return code parameter.
4. Terminates the parameter list after the parameter srdura.
Even if you specify that the return code be placed in the parameter list, it is also
placed in register 15 to accommodate high-level languages that support special
return code processing.
(The optional trailing parameters of the CONNECT call, in order, are retcode,
reascode, srdura, eibptr, and groupoverride.)
POST code Termination type
8 QUIESCE
12 FORCE
16 ABTERM
Before you check termecb in your CAF application program, first check the
return code and reason code from the CONNECT call to ensure that the call
completed successfully. See “Checking return codes and reason codes” on
page 760 for more information.
startecb
The application’s start-up ECB. If DB2 has not yet started when the application
issues the call, DB2 posts the ECB when it successfully completes its startup
processing. DB2 posts at most one startup ECB per address space. The ECB is
the one associated with the most recent CONNECT call from that address
space. Your application program must examine any nonzero CAF/DB2 reason
codes before issuing a WAIT on this ECB.
| If ssnm is a group attachment name, the first DB2 subsystem that starts on the
| local OS/390 system and matches the specified group attachment name posts
| the ECB.
ribptr
A 4-byte area in which CAF places the address of the release information block
(RIB) after the call. You can determine what release level of DB2 you are
currently running by examining field RIBREL. You can determine the
modification level within the release level by examining fields RIBCNUMB and
RIBCINFO. If the value in RIBCNUMB is greater than zero, check RIBCINFO
for modification levels.
If the RIB is not available (for example, if you name a subsystem that does not
exist), DB2 sets the 4-byte area to zeros.
The area to which ribptr points is below the 16-megabyte line.
Your program does not have to use the release information block, but it cannot
omit the ribptr parameter.
Macro DSNDRIB maps the release information block (RIB). It can be found in
prefix.SDSNMACS(DSNDRIB).
retcode
A 4-byte area in which CAF places the return code.
This field is optional. If not specified, CAF places the return code in register 15
and the reason code in register 0.
reascode
A 4-byte area in which CAF places a reason code. If not specified, CAF places
the reason code in register 0.
This field is optional. If specified, you must also specify retcode.
srdura
A 10-byte area containing the string ’SRDURA(CD)’. This field is optional. If it is
provided, the value in the CURRENT DEGREE special register stays in effect
from CONNECT until DISCONNECT. If it is not provided, the value in the
CURRENT DEGREE special register stays in effect from OPEN until CLOSE. If
you specify this parameter in any language except assembler, you must also
specify the return code and reason code parameters.
Using a CONNECT call is optional. The first request from a task, either OPEN, or
an SQL or IFI call, causes CAF to issue an implicit CONNECT request. If a task is
connected implicitly, the connection to DB2 is terminated either when you execute
CLOSE or when the task terminates.
The explicit connection minimizes the overhead by ensuring that the
connection to DB2 remains after CLOSE deallocates a plan.
You can run CONNECT from any or all tasks in the address space, but the address
space level is initialized only once when the first task connects.
If a task does not issue an explicit CONNECT or OPEN, the implicit connection
from the first SQL or IFI call specifies a default DB2 subsystem name. A systems
programmer or administrator determines the default subsystem name when
installing DB2. Be certain that you know what the default name is and that it names
the specific DB2 subsystem you want to use.
Practically speaking, you must not mix explicit CONNECT and OPEN requests with
implicitly established connections in the same address space. Either explicitly
specify which DB2 subsystem you want to use or allow all requests to use the
default subsystem.
Do not issue CONNECT requests from a TCB that already has an active DB2
connection. (See “Summary of CAF behavior” on page 753 and “Error messages
and DSNTRACE” on page 756 for more information on CAF errors.)
This field is optional. If specified, you must also specify retcode.
| groupoverride
| An 8-byte area that the application provides. This field is optional. If this field is
| provided, it contains the string 'NOGROUP'. This string indicates that the
| subsystem name that is specified by ssnm is to be used as a DB2 subsystem
| name, even if ssnm matches a group attachment name. If groupoverride is not
| provided, ssnm is used as the group attachment name if it matches a group
| attachment name. If you specify this parameter in any language except
| assembler, you must also specify the return code and reason code parameters.
| In assembler language, you can omit the return code and reason code
| parameters by specifying commas as place-holders.
Usage: OPEN allocates DB2 resources needed to run the plan or issue IFI
requests. If the requesting task does not already have a connection to the named
DB2 subsystem, then OPEN establishes it.
OPEN allocates the plan to the DB2 subsystem named in ssnm. The ssnm
parameter, like the others, is required, even if the task issues a CONNECT call. If a
task issues CONNECT followed by OPEN, then the subsystem names for both calls
must be the same.
The use of OPEN is optional. If you do not use OPEN, the action of OPEN occurs
on the first SQL or IFI call from the task, using the defaults listed under “Implicit
connections” on page 738.
Usage: CLOSE deallocates the plan that was created either explicitly by OPEN or
implicitly at the first SQL or IFI call.
If you did not issue a CONNECT for the task, CLOSE also deletes the task’s
connection to DB2. If no other task in the address space has an active connection
to DB2, DB2 also deletes the control block structures created for the address space
and removes the cross memory authorization.
Do not use CLOSE when your current task does not have a plan allocated.
Using CLOSE is optional. If you omit it, DB2 performs the same actions when your
task terminates, using the SYNC parameter if termination is normal and the ABRT
parameter if termination is abnormal. (The function is an implicit CLOSE.) If the
objective is to shut down your application, you can improve shut down performance
by using CLOSE explicitly before the task terminates.
If you want to use a new plan, you must issue an explicit CLOSE, followed by an
OPEN, specifying the new plan name.
If DB2 terminates, a task that did not issue CONNECT should explicitly issue
CLOSE, so that CAF can reset its control blocks to allow for future connections.
This CLOSE returns the reset accomplished return code (+004) and reason code
X'00C10824'. If you omit CLOSE, then when DB2 is back on line, the task’s next
connection request fails. You get either the message Your TCB does not have a
connection, with X'00F30018' in register 0, or CAF error message DSNA201I or
DSNA202I, depending on what your application tried to do. The task must then
issue CLOSE before it can reconnect to DB2.
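The recovery rule in the preceding paragraphs can be sketched as a small state model in C. Everything here (the struct and function names) is invented for illustration; it only encodes the rule that a CLOSE reporting +004 with reason code X'00C10824' must happen before a new connection request can succeed:

```c
/* Sketch of the rule above: after DB2 terminates, an implicitly
   connected task must issue CLOSE, which reports "reset accomplished"
   (+004, X'00C10824'), before it can reconnect. Illustrative only. */
enum { RC_RESET = 4 };
#define RS_RESET 0x00C10824u

struct caf_task { int needs_close; };   /* set when DB2 terminated under us */

int caf_close(struct caf_task *t, int *retcode, unsigned *reascode) {
    if (t->needs_close) {               /* control blocks need resetting */
        t->needs_close = 0;
        *retcode = RC_RESET;            /* reset accomplished */
        *reascode = RS_RESET;
        return 0;
    }
    *retcode = 0; *reascode = 0;        /* ordinary CLOSE */
    return 0;
}

int caf_can_reconnect(const struct caf_task *t) {
    return !t->needs_close;             /* requests fail until CLOSE ran */
}
```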
A task that issued CONNECT explicitly should issue DISCONNECT to cause CAF
to reset its control blocks when DB2 terminates. In this case, CLOSE is not
necessary.
Only those tasks that issued CONNECT explicitly can issue DISCONNECT. If
CONNECT was not used, then DISCONNECT causes an error.
Using DISCONNECT is optional. Without it, DB2 performs the same functions when
the task terminates. (The function is an implicit DISCONNECT.) If the objective is to
shut down your application, you can improve shut down performance if you request
DISCONNECT explicitly before the task terminates.
If DB2 terminates, a task that issued CONNECT must issue DISCONNECT to reset
the CAF control blocks. The function returns the reset accomplished return codes
and reason codes (+004 and X'00C10824'), and ensures that future connection
requests from the task work when DB2 is back on line.
A task that did not issue CONNECT explicitly must issue CLOSE to reset the CAF
control blocks when DB2 terminates.
TRANSLATE is useful only after an OPEN fails, and then only if you used an
explicit CONNECT before the OPEN request. For errors that occur during SQL or
IFI requests, the TRANSLATE function performs automatically.
CALL DSNALI (function, sqlca[, retcode[, reascode]])
Usage: Use TRANSLATE to get a corresponding SQL error code and message text
for the DB2 error reason codes that CAF returns in register 0 following an OPEN
service request. DB2 places the information into the SQLCODE and SQLSTATE
host variables or related fields of the SQLCA.
The TRANSLATE function can translate those codes beginning with X'00F3', but it
does not translate CAF reason codes beginning with X'00C1'. If you receive error
reason code X'00F30040' (resource unavailable) after an OPEN request,
TRANSLATE returns the name of the unavailable database object in the last 44
characters of field SQLERRM. If the DB2 TRANSLATE function does not recognize
the error reason code, it returns SQLCODE -924 (SQLSTATE '58006') and places a
printable copy of the original DB2 function code and the return and error reason
codes in the SQLERRM field. The contents of registers 0 and 15 do not change
unless TRANSLATE fails, in which case register 0 is set to X'00C10205' and
register 15 to 200.
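As a sketch of the unrecognized-code case, the following C fragment shows the SQLCA fields that the text says TRANSLATE fills in (SQLCODE -924, SQLSTATE '58006', and a printable copy of the codes in SQLERRM). The struct is a simplified stand-in, not DB2's real SQLCA layout:

```c
#include <stdio.h>
#include <string.h>

/* Simplified stand-in for the SQLCA; field sizes are illustrative. */
struct sqlca_sketch {
    int  sqlcode;
    char sqlstate[6];
    char sqlerrm[70];
};

/* What TRANSLATE leaves behind for a reason code it does not recognize,
   per the text above. Names are invented for this sketch. */
void translate_unrecognized(struct sqlca_sketch *ca,
                            const char *function, int retcode, unsigned reason) {
    ca->sqlcode = -924;
    strcpy(ca->sqlstate, "58006");
    /* printable copy of the original function, return, and reason codes */
    snprintf(ca->sqlerrm, sizeof ca->sqlerrm,
             "%s %d X'%08X'", function, retcode, reason);
}
```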
In the table, an error shows as Error nnn. The corresponding reason code is
X'00C10nnn'; the message number is DSNAnnnI or DSNAnnnE. For a list of reason
codes, see “CAF return codes and reason codes” on page 756.
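The Error nnn convention can be captured in a small helper. The mapping (for example, Error 201 to reason code X'00C10201' and message DSNA201I) follows directly from the text; the helper functions themselves are invented for illustration:

```c
#include <stdio.h>

/* The three digits of nnn appear literally as the low three hex digits
   of the reason code: Error 201 -> X'00C10201'. Illustrative helper. */
unsigned caf_reason_code(int nnn) {
    unsigned d1 = (unsigned)(nnn / 100) % 10;
    unsigned d2 = (unsigned)(nnn / 10) % 10;
    unsigned d3 = (unsigned)nnn % 10;
    return 0x00C10000u | (d1 << 8) | (d2 << 4) | d3;
}

/* Message number DSNAnnnI, e.g. Error 201 -> DSNA201I. */
void caf_message_id(int nnn, char out[9]) {
    snprintf(out, 9, "DSNA%03dI", nnn);
}
```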
Table 87. Effects of CAF calls, as dependent on connection history

Previous function      Next function
                       CONNECT     OPEN        SQL or IFI call    CLOSE       DISCONNECT   TRANSLATE
Empty: first call      CONNECT     OPEN        CONNECT, OPEN,     Error 203   Error 204    Error 205
                                               then the call
CONNECT                Error 201   OPEN        OPEN, then the     Error 203   DISCONNECT   TRANSLATE
                                               call
CONNECT followed       Error 201   Error 202   The call           CLOSE (1)   DISCONNECT   TRANSLATE
by OPEN
CONNECT followed       Error 201   Error 202   The call           CLOSE (1)   DISCONNECT   TRANSLATE
by SQL or IFI call
OPEN                   Error 201   Error 202   The call           CLOSE (2)   Error 204    TRANSLATE
SQL or IFI call        Error 201   Error 202   The call           CLOSE (2)   Error 204    TRANSLATE (3)

Notes:
1. The task and address space connections remain active. If CLOSE fails because DB2 was down, then the CAF
control blocks are reset, the function produces return code 4 and reason code X'00C10824', and CAF is ready for
more connection requests when DB2 is again on line.
2. The connection for the task is terminated. If there are no other connected tasks in the address space, the address
space level connection terminates also.
3. A TRANSLATE request is accepted, but in this case it is redundant. CAF automatically issues a TRANSLATE
request when an SQL or IFI request fails.
Table 87 on page 753 uses the following conventions:
v The top row lists the possible CAF functions that programs can use as their call.
v The first column lists the task’s most recent history of connection requests. For
example, CONNECT followed by OPEN means that the task issued CONNECT
and then OPEN with no other CAF calls in between.
v The intersection of a row and column shows the effect of the next call if it follows
the corresponding connection history. For example, if the call is OPEN and the
connection history is CONNECT, the effect is OPEN: the OPEN function is
performed. If the call is SQL and the connection history is empty (meaning that
the SQL call is the first CAF function that the program issues), the effect is that an implicit
CONNECT and OPEN function is performed, followed by the SQL function.
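Table 87 can also be read as a simple lookup. The sketch below encodes the table in C so that a program could check what effect its next call would have; the enum and function names are invented for illustration, and the strings abbreviate the table's entries:

```c
/* States are the "Previous function" rows; calls are the columns. */
enum history { H_EMPTY, H_CONNECT, H_CONNECT_OPEN, H_CONNECT_SQL, H_OPEN, H_SQL };
enum call    { C_CONNECT, C_OPEN, C_SQL, C_CLOSE, C_DISCONNECT, C_TRANSLATE };

static const char *effects[6][6] = {
    /* Empty: first call */
    { "CONNECT", "OPEN", "CONNECT, OPEN, then the call",
      "Error 203", "Error 204", "Error 205" },
    /* CONNECT */
    { "Error 201", "OPEN", "OPEN, then the call",
      "Error 203", "DISCONNECT", "TRANSLATE" },
    /* CONNECT followed by OPEN */
    { "Error 201", "Error 202", "The call", "CLOSE", "DISCONNECT", "TRANSLATE" },
    /* CONNECT followed by SQL or IFI call */
    { "Error 201", "Error 202", "The call", "CLOSE", "DISCONNECT", "TRANSLATE" },
    /* OPEN */
    { "Error 201", "Error 202", "The call", "CLOSE", "Error 204", "TRANSLATE" },
    /* SQL or IFI call */
    { "Error 201", "Error 202", "The call", "CLOSE", "Error 204", "TRANSLATE" },
};

const char *caf_effect(enum history h, enum call c) { return effects[h][c]; }
```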
Sample scenarios
This section shows sample scenarios for connecting tasks to DB2.
A task can have a connection to one and only one DB2 subsystem at any point in
time. A CAF error occurs if the subsystem name on OPEN does not match the one
on CONNECT. To switch to a different subsystem, the application must disconnect
from the current subsystem, then issue a connect request specifying a new
subsystem name.
Several tasks
In this scenario, multiple tasks within the address space are using DB2 services.
Each task must explicitly specify the same subsystem name on either the
CONNECT or OPEN function request. In the following sketch of the scenario, task 1
establishes the address space connection with CONNECT and later ends it with
DISCONNECT, while tasks 2 through 4 each allocate, use, and deallocate plans:

   Task 1        Task 2   Task 3   Task 4
   CONNECT
                 OPEN     OPEN     OPEN
                 SQL      SQL      SQL
                 ...      ...      ...
                 CLOSE    CLOSE    CLOSE
                 OPEN     OPEN     OPEN
                 SQL      SQL      SQL
                 ...      ...      ...
                 CLOSE    CLOSE    CLOSE
   DISCONNECT
Attention exits
An attention exit enables you to regain control from DB2, during long-running or
erroneous requests, by detaching the TCB currently waiting on an SQL or IFI
request to complete. DB2 detects the abend caused by DETACH and performs
termination processing (including ROLLBACK) for that task.
The call attachment facility has no attention exits. You can provide your own if
necessary. However, DB2 uses enabled unlocked task (EUT) functional recovery
routines (FRRs), so if you request attention while DB2 code is running, your routine
may not get control.
Recovery routines
The call attachment facility has no abend recovery routines.
Your program can provide an abend exit routine. It must use tracking indicators to
determine if an abend occurred during DB2 processing. If an abend occurs while
DB2 has control, you have these choices:
v Allow task termination to complete. Do not retry the program. DB2 detects task
termination and terminates the thread with the ABRT parameter. You lose all
database changes back to the last SYNC or COMMIT point.
This is the only action that you can take for abends that CANCEL or DETACH
cause. You cannot use additional SQL statements at this point. If you attempt to
execute another SQL statement from the application program or its recovery
routine, you receive a return code of +256 and a reason code of X'00F30083'.
v In an ESTAE routine, issue CLOSE with the ABRT parameter followed by
DISCONNECT. The ESTAE exit routine can retry so that you do not need to
re-instate the application task.
Standard MVS functional recovery routines (FRRs) can cover only code running in
service request block (SRB) mode. Because DB2 does not support calls from SRB
mode routines, you can use only enabled unlocked task (EUT) FRRs in your
routines that call DB2.
Do not have an EUT FRR active when using CAF, processing SQL requests, or
calling IFI.
An EUT FRR can be active, but it cannot retry failing DB2 requests. An EUT FRR
retry bypasses DB2’s ESTAE routines. The next DB2 request of any type, including
DISCONNECT, fails with a return code of +256 and a reason code of X'00F30050'.
With MVS, if you have an active EUT FRR, all DB2 requests fail, including the initial
CONNECT or OPEN. The requests fail because DB2 always creates an ARR-type
ESTAE, and MVS/ESA does not allow the creation of ARR-type ESTAEs when an
FRR is active.
When the reason code begins with X’00F3’ (except for X’00F30006’), you can use
the CAF TRANSLATE function to obtain error message text that can be printed and
displayed.
For SQL calls, CAF returns standard SQLCODEs in the SQLCA. See Part 1 of DB2
Messages and Codes for a list of those return codes and their meanings. CAF
returns IFI return codes and reason codes in the instrumentation facility
communication area (IFCA).
Table 88. CAF return codes and reason codes

Return code   Reason code   Explanation
0             X'00000000'   Successful completion.
4             X'00C10823'   Release level mismatch between DB2 and the call attachment facility code.
4             X'00C10824'   CAF reset complete. Ready to make a new connection.
200 (note 1)  X'00C10201'   Received a second CONNECT from the same TCB. The first CONNECT could have been implicit or explicit.
200 (note 1)  X'00C10202'   Received a second OPEN from the same TCB. The first OPEN could have been implicit or explicit.
200 (note 1)  X'00C10203'   CLOSE issued when there was no active OPEN.
200 (note 1)  X'00C10204'   DISCONNECT issued when there was no active CONNECT, or the AXSET macro was issued between CONNECT and DISCONNECT.
200 (note 1)  X'00C10205'   TRANSLATE issued when there was no connection to DB2.
200 (note 1)  X'00C10206'   Wrong number of parameters, or the end-of-list bit was off.
Program examples
The following pages contain sample JCL and assembler programs that access the
call attachment facility (CAF).
//SYSPRINT DD SYSOUT=*
//DSNTRACE DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
These code segments assume the existence of a WRITE macro. Anywhere you find
this macro in the code is a good place for you to substitute code of your own. You
must decide what you want your application to do in those situations; you probably
do not want to write the error messages shown.
Loading and deleting the CAF language interface
The following code segment shows how an application can load entry points
DSNALI and DSNHLI2 for the call attachment language interface. Storing the entry
points in variables LIALI and LISQL ensures that the application has to load the
entry points only once.
When the module is done with DB2, you should delete the entries.
****************************** GET LANGUAGE INTERFACE ENTRY ADDRESSES
LOAD EP=DSNALI Load the CAF service request EP
ST R0,LIALI Save this for CAF service requests
LOAD EP=DSNHLI2 Load the CAF SQL call Entry Point
ST R0,LISQL Save this for SQL calls
* .
* . Insert connection service requests and SQL calls here
* .
DELETE EP=DSNALI Correctly maintain use count
DELETE EP=DSNHLI2 Correctly maintain use count
The code does not show a task that waits on the DB2 termination ECB. If you like,
you can code such a task and use the MVS WAIT macro to monitor the ECB. You
probably want this task to detach the sample code if the termination ECB is posted.
That task can also wait on the DB2 startup ECB. This sample waits on the startup
ECB at its own task level.
On entry, the code assumes that certain variables are already set:
Variable Usage
LIALI The entry point that handles DB2 connection service requests.
LISQL The entry point that handles SQL calls.
SSID The DB2 subsystem identifier.
TECB The address of the DB2 termination ECB.
SECB The address of the DB2 start-up ECB.
RIBPTR A fullword that CAF sets to contain the RIB address.
PLAN The plan name to use on the OPEN call.
CONTROL Used to shut down processing because of unsatisfactory return or
reason codes. Subroutine CHEKCODE sets CONTROL.
CAFCALL List-form parameter area for the CALL macro.
Figure 224. Subroutine to check return codes from CAF and DB2, in assembler (Part 1 of 3)
***********************************************************************
* Subroutine CHEKCODE checks return codes from DB2 and Call Attach.
* When CHEKCODE receives control, R13 should point to the caller's
* save area.
***********************************************************************
CHEKCODE DS 0H
STM R14,R12,12(R13) Prolog
ST R15,RETCODE Save the return code
ST R0,REASCODE Save the reason code
LA R15,SAVEAREA Get save area address
ST R13,4(,R15) Chain the save areas
ST R15,8(,R13) Chain the save areas
LR R13,R15 Put save area address in R13
* ********************* HUNT FOR FORCE OR ABTERM ***************
TM TECB,POSTBIT See if TECB was POSTed
BZ DOCHECKS Branch if TECB was not POSTed
CLC TECBCODE(3),QUIESCE Is the termination code QUIESCE?
BE DOCHECKS If QUIESCE, continue the checks
MVC CONTROL,SHUTDOWN Not QUIESCE: FORCE or ABTERM
WRITE 'Found FORCE or ABTERM, shutting down'
B ENDCCODE Go to the end of CHEKCODE
DOCHECKS DS 0H Examine RETCODE and REASCODE
* ********************* HUNT FOR 0 *****************************
CLC RETCODE,ZERO Was it a zero?
BE ENDCCODE Nothing to do in CHEKCODE for zero
* ********************* HUNT FOR 4 *****************************
CLC RETCODE,FOUR Was it a 4?
BNE HUNT8 If not a 4, hunt eights
CLC REASCODE,C10823 Was it a release level mismatch?
BNE HUNT824 Branch if not an 823
WRITE 'Found a mismatch between DB2 and CAF release levels'
B ENDCCODE We are done. Go to end of CHEKCODE
HUNT824 DS 0H Now look for 'CAF reset' reason code
CLC REASCODE,C10824 Was it an 824? Are we ready to restart?
BNE UNRECOG If not 824, got unknown code
WRITE 'CAF is now ready for more input'
MVC CONTROL,RESTART Indicate that we should re-CONNECT
B ENDCCODE We are done. Go to end of CHEKCODE
UNRECOG DS 0H
WRITE 'Got RETCODE = 4 and an unrecognized reason code'
MVC CONTROL,SHUTDOWN Shutdown, serious problem
B ENDCCODE We are done. Go to end of CHEKCODE
* ********************* HUNT FOR 8 *****************************
HUNT8 DS 0H
CLC RETCODE,EIGHT Hunt return code of 8
BE GOT8OR12
CLC RETCODE,TWELVE Hunt return code of 12
BNE HUNT200
GOT8OR12 DS 0H Found return code of 8 or 12
WRITE 'Found RETCODE of 8 or 12'
CLC REASCODE,F30002 Hunt for X'00F30002'
BE DB2DOWN
Figure 224. Subroutine to check return codes from CAF and DB2, in assembler (Part 2 of 3)
Figure 224. Subroutine to check return codes from CAF and DB2, in assembler (Part 3 of 3)
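The decision flow that CHEKCODE implements can be sketched portably in C. This is an illustration of the branches visible in parts 1 and 2 of Figure 224, not a replacement for the assembler subroutine; the later logic (for example, the X'00F30002' DB2-down path) is collapsed into the shutdown outcome here, and all names are invented:

```c
/* Outcome the subroutine records in CONTROL (illustrative names). */
enum control { CTL_CONTINUE, CTL_RESTART, CTL_SHUTDOWN };

/* Mirrors the branches in Figure 224: zero means nothing to do;
   4 with X'00C10824' means CAF was reset and we should re-CONNECT;
   anything else is treated as a reason to shut down in this sketch. */
enum control chekcode(int retcode, unsigned reascode) {
    if (retcode == 0)
        return CTL_CONTINUE;             /* hunt for 0: nothing to do */
    if (retcode == 4) {                  /* hunt for 4 */
        if (reascode == 0x00C10823u)
            return CTL_CONTINUE;         /* release-level mismatch: note it */
        if (reascode == 0x00C10824u)
            return CTL_RESTART;          /* CAF ready for more input */
        return CTL_SHUTDOWN;             /* unrecognized reason code */
    }
    /* hunt for 8 or 12 (and anything else): serious problem */
    return CTL_SHUTDOWN;
}
```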
DSNALI as DSNHLI. DSNALI uses 31-bit addressing. If the application that calls
this intermediate subroutine uses 24-bit addressing, this subroutine should account
for the difference.
In the example that follows, LISQL is addressable because the calling CSECT used
the same register 12 as CSECT DSNHLI. Your application must also establish
addressability to LISQL.
***********************************************************************
* Subroutine DSNHLI intercepts calls to LI EP=DSNHLI
***********************************************************************
DS 0D
DSNHLI CSECT Begin CSECT
STM R14,R12,12(R13) Prologue
LA R15,SAVEHLI Get save area address
ST R13,4(,R15) Chain the save areas
ST R15,8(,R13) Chain the save areas
LR R13,R15 Put save area address in R13
L R15,LISQL Get the address of real DSNHLI
BASSM R14,R15 Branch to DSNALI to do an SQL call
* DSNALI is in 31-bit mode, so use
* BASSM to assure that the addressing
* mode is preserved.
L R13,4(,R13) Restore R13 (caller's save area addr)
L R14,12(,R13) Restore R14 (return address)
RETURN (1,12) Restore R1-12, NOT R0 and R15 (codes)
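The pass-through above can be modeled with a function pointer in C. The one-argument signature is invented for this sketch; the point is only the indirection: the dummy DSNHLI does no work of its own and simply branches to the entry point saved in LISQL:

```c
/* Model of the intercept: lisql holds the address of the real SQL entry
   point (loaded once, as with LOAD EP=DSNHLI2); the local dsnhli stub
   forwards each call unchanged. All names here are illustrative. */
typedef int (*sql_entry)(void *option_list);

static sql_entry lisql;                          /* set once at startup */

static int real_sql_entry(void *option_list) {   /* stands in for DSNALI */
    (void)option_list;
    return 0;                                    /* pretend SQLCODE 0 */
}

/* The dummy DSNHLI: preserve the parameter list, branch to LISQL. */
int dsnhli(void *option_list) {
    return lisql(option_list);
}
```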
Variable declarations
Figure 225 on page 765 shows declarations for some of the variables used in the
previous subroutines.
Chapter 30. Programming for the Recoverable Resource
Manager Services attachment facility (RRSAF)
An application program can use the Recoverable Resource Manager Services
attachment facility (RRSAF) to connect to and use DB2 to process SQL statements,
commands, or instrumentation facility interface (IFI) calls. Programs that run in MVS
batch, TSO foreground, and TSO background can use RRSAF.
Prerequisite knowledge: Before you consider using RRSAF, you must be familiar
with the following MVS topics:
v The CALL macro and standard module linkage conventions
v Program addressing and residency options (AMODE and RMODE)
v Creating and controlling tasks; multitasking
v Functional recovery facilities such as ESTAE, ESTAI, and FRRs
v Synchronization techniques such as WAIT/POST
v OS/390 RRS functions, such as SRRCMIT and SRRBACK
Task capabilities
Any task in an address space can establish a connection to DB2 through RRSAF.
Specifying a plan for a task: Each connected task can run a plan. Tasks within a
single address space can specify the same plan, but each instance of a plan runs
independently from the others. A task can terminate its plan and run a different plan
without completely breaking its connection to DB2.
Providing attention processing exits and recovery routines: RRSAF does not
generate task structures, and it does not provide attention processing exits or
functional recovery routines. You can provide whatever attention handling and
functional recovery your application needs, but you must use ESTAE/ESTAI type
recovery routines only.
Programming language
You can write RRSAF applications in assembler language, C, COBOL, FORTRAN,
and PL/I. When choosing a language to code your application in, consider these
restrictions:
v If you use MVS macros (ATTACH, WAIT, POST, and so on), you must choose a
programming language that supports them.
v The RRSAF TRANSLATE function is not available from FORTRAN. To use the
function, code it in a routine written in another language, and then call that
routine from FORTRAN.
Tracing facility
A tracing facility provides diagnostic messages that help you debug programs and
diagnose errors in the RRSAF code. The trace information is available only in a
SYSABEND or SYSUDUMP dump.
Program preparation
Preparing your application program to run in RRSAF is similar to preparing it to run
in other environments, such as CICS, IMS, and TSO. You can prepare an RRSAF
application either in the batch environment or by using the DB2 program
preparation process. You can use the program preparation system either through
DB2I or through the DSNH CLIST. For examples and guidance in program
preparation, see “Chapter 20. Preparing an application program to run” on
page 397.
RRSAF requirements
When you write an application to use RRSAF, be aware of the following
characteristics.
Program size
The RRSAF code requires about 10KB of virtual storage per address space and an
additional 10KB for each TCB that uses RRSAF.
Use of LOAD
RRSAF uses MVS SVC LOAD to load a module as part of the initialization following
your first service request. The module is loaded into fetch-protected storage that
has the job-step protection key. If your local environment intercepts and replaces
Follow these guidelines for choosing the DB2 statements or the CPIC functions for
commit and rollback operations:
v Use DB2 COMMIT and ROLLBACK statements when you know that the following
conditions are true:
– The only recoverable resource accessed by your application is DB2 data
managed by a single DB2 instance.
– The address space from which syncpoint processing is initiated is the same
as the address space that is connected to DB2.
v If your application accesses other recoverable resources, or syncpoint processing
and DB2 access are initiated from different address spaces, use SRRCMIT and
SRRBACK.
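The guideline above reduces to a two-condition test, sketched here in C. The enum and function are invented for illustration; SRRCMIT and SRRBACK are the OS/390 RRS calls named in the text:

```c
/* DB2 COMMIT/ROLLBACK are appropriate only when DB2 data in a single
   DB2 is the only recoverable resource AND syncpoint processing runs
   in the address space connected to DB2; otherwise use SRRCMIT and
   SRRBACK. Illustrative sketch of that rule. */
typedef enum { USE_DB2_COMMIT, USE_SRRCMIT_SRRBACK } syncpoint_api;

syncpoint_api choose_syncpoint(int db2_only_resource, int same_address_space) {
    if (db2_only_resource && same_address_space)
        return USE_DB2_COMMIT;
    return USE_SRRCMIT_SRRBACK;
}
```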
Run environment
Applications that request DB2 services must adhere to several run environment
requirements. Those requirements must be met regardless of the attachment facility
you use. They are not unique to RRSAF.
v The application must be running in TCB mode.
v No EUT FRRs can be active when the application requests DB2 services. If an
EUT FRR is active, DB2’s functional recovery can fail, and your application can
receive unpredictable abends.
v Different attachment facilities cannot be active concurrently within the same
address space. For example:
– An application should not use RRSAF in CICS or IMS address spaces.
– An application running in an address space that has a CAF connection to DB2
cannot connect to DB2 using RRSAF.
– An application running in an address space that has an RRSAF connection to
DB2 cannot connect to DB2 using CAF.
v One attachment facility cannot start another. This means your RRSAF application
cannot use DSN, and a DSN RUN subcommand cannot call your RRSAF
application.
v The language interface module for RRSAF, DSNRLI, is shipped with the linkage
attributes AMODE(31) and RMODE(ANY). If your applications load RRSAF below
the 16MB line, you must link-edit DSNRLI again.
Chapter 30. Programming for the Recoverable Resource Manager Services attachment facility (RRSAF) 769
Your program uses RRSAF by issuing CALL DSNRLI statements with the
appropriate options. For the general form of the statements, see “RRSAF function
descriptions” on page 774.
The first element of each option list is a function, which describes the action you
want RRSAF to take. For a list of available functions and what they do, see
“Summary of connection functions” on page 773. The effect of any function depends
in part on what functions the program has already performed. Before using any
function, be sure to read the description of its usage. Also read “Summary of
connection functions” on page 773, which describes the influence of previously
invoked functions.
Part of RRSAF is a DB2 load module, DSNRLI, the RRSAF language interface
module. DSNRLI has the alias names DSNHLIR and DSNWLIR. The module has
five entry points: DSNRLI, DSNHLI, DSNHLIR, DSNWLI, and DSNWLIR:
v Entry point DSNRLI handles explicit DB2 connection service requests.
v DSNHLI and DSNHLIR handle SQL calls. Use DSNHLI if your application
program link-edits RRSAF; use DSNHLIR if your application program loads
RRSAF.
v DSNWLI and DSNWLIR handle IFI calls. Use DSNWLI if your application
program link-edits RRSAF; use DSNWLIR if your application program loads
RRSAF.
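The entry-point choice can be summarized as a lookup. The entry-point names come from the list above; the helper itself is invented for illustration:

```c
/* DSNHLI/DSNWLI when RRSAF is link-edited with the application;
   DSNHLIR/DSNWLIR when the application loads RRSAF. */
const char *rrsaf_entry_point(int is_ifi_call, int rrsaf_is_loaded) {
    if (is_ifi_call)
        return rrsaf_is_loaded ? "DSNWLIR" : "DSNWLI";
    return rrsaf_is_loaded ? "DSNHLIR" : "DSNHLI";
}
```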
You can access the DSNRLI module by explicitly issuing LOAD requests when your
program runs, or by including the DSNRLI module in your load module when you
link-edit your program. There are advantages and disadvantages to each approach.
By explicitly loading the DSNRLI module, you can isolate the maintenance of your
application from future IBM service to the language interface. If the language
interface changes, the change will probably not affect your load module.
You must indicate to DB2 which entry point to use. You can do this in one of two
ways:
v Specify the precompiler option ATTACH(RRSAF).
This causes DB2 to generate calls that specify entry point DSNHLIR. You cannot
use this option if your application is written in FORTRAN.
v Code a dummy entry point named DSNHLI within your load module.
If you do not specify the precompiler option ATTACH, the DB2 precompiler
generates calls to entry point DSNHLI for each SQL request. The precompiler
does not know and is independent of the different DB2 attachment facilities.
When the calls generated by the DB2 precompiler pass control to DSNHLI, your
code corresponding to the dummy entry point must preserve the option list
passed in R1 and call DSNHLIR specifying the same option list. For a coding
example of a dummy DSNHLI entry point, see “Using dummy entry point
DSNHLI” on page 800.
Link-editing DSNRLI
You can include DSNRLI when you link-edit your load module. For example, you
can use a linkage editor control statement like this in your JCL:
INCLUDE DB2LIB(DSNRLI)
By coding this statement, you avoid linking the wrong language interface module.
When you include DSNRLI during the link-edit, you do not include a dummy
DSNHLI entry point in your program or specify the precompiler option ATTACH.
Module DSNRLI contains an entry point for DSNHLI, which is identical to DSNHLIR,
and an entry point DSNWLI, which is identical to DSNWLIR.
A disadvantage of link-editing DSNRLI into your load module is that if IBM makes a
change to DSNRLI, you must link-edit your program again.
Connection name and connection type: The connection name and connection
type are RRSAF. You can use the DISPLAY THREAD command to list RRSAF
applications that have the connection name RRSAF.
RRSAF relies on the MVS System Authorization Facility (SAF) and a security
product, such as RACF, to verify and authorize the authorization IDs. An application
that connects to DB2 through RRSAF must pass those identifiers to SAF for
verification and authorization checking. RRSAF retrieves the identifiers from SAF.
A location can provide an authorization exit routine for a DB2 connection to change
the authorization IDs and to indicate whether the connection is allowed. The actual
values assigned to the primary and secondary authorization IDs can differ from the
values provided by a SIGNON or AUTH SIGNON request. A site's DB2 signon exit
routine can access the primary and secondary authorization IDs and can modify the
IDs to satisfy the site's security requirements. The exit can also indicate whether the
signon request should be accepted.
For information about authorization IDs and the connection and signon exit routines,
see Appendix B (Volume 2) of DB2 Administration Guide.
Do not mix RRSAF connections with other connection types in a single address
space. The first connection to DB2 made from an address space determines the
type of connection allowed.
Task termination
If an application that is connected to DB2 through RRSAF terminates normally
before the TERMINATE THREAD or TERMINATE IDENTIFY functions deallocate
the plan, then OS/390 RRS commits any changes made after the last commit point.
If the application terminates abnormally before those functions deallocate the plan,
then OS/390 RRS rolls back any changes made after the last commit point. In
either case, DB2 deallocates the plan, if necessary, and terminates the
application's connection.
DB2 abend
If DB2 abends while an application is running, DB2 rolls back changes to the last
commit point. If DB2 terminates while processing a commit request, DB2 either
commits or rolls back any changes at the next restart. The action taken depends on
the state of the commit request when DB2 terminates.
SWITCH TO
Directs RRSAF, SQL or IFI requests to a specified DB2 subsystem. See
“SWITCH TO: Syntax and usage” on page 778.
SIGNON
Provides to DB2 a user ID and, optionally, one or more secondary authorization
IDs that are associated with the connection. See “SIGNON: Syntax and usage”
on page 780.
AUTH SIGNON
Provides to DB2 a user ID, an Accessor Environment Element (ACEE) and,
optionally, one or more secondary authorization IDs that are associated with the
connection. See “AUTH SIGNON: Syntax and usage” on page 783.
CONTEXT SIGNON
Provides to DB2 a user ID and, optionally, one or more secondary authorization
IDs that are associated with the connection. You can execute CONTEXT
SIGNON from an unauthorized program. See “CONTEXT SIGNON: Syntax and
usage” on page 786.
CREATE THREAD
Allocates a DB2 plan or package. CREATE THREAD must complete before the
application can execute SQL statements. See “CREATE THREAD: Syntax and
usage” on page 790.
TERMINATE THREAD
Deallocates the plan. See “TERMINATE THREAD: Syntax and usage” on
page 792.
TERMINATE IDENTIFY
Removes the task as a user of DB2 and, if this is the last or only task in the
address space that has a DB2 connection, terminates the address space
connection to DB2. See “TERMINATE IDENTIFY: Syntax and usage” on
page 793.
TRANSLATE
Returns an SQL code and printable text, in the SQLCA, that describes a DB2
error reason code. You cannot call the TRANSLATE function from the
FORTRAN language. See “Translate: Syntax and usage” on page 794.
If you do not specify the return code and reason code parameters in your RRSAF
calls, RRSAF puts a return code in register 15 and a reason code in register 0. If
you specify the return code and reason code parameters, RRSAF places the return
code in register 15 and in the return code parameter to accommodate high-level
languages that support special return code processing. RRSAF preserves the
contents of registers 2 through 14.
Table 89. Register conventions for RRSAF calls
Register Usage
R1 Parameter list pointer
R13 Address of caller’s save area
R14 Caller’s return address
R15 RRSAF entry point address
In an assembler language call, code a comma for a parameter in the CALL DSNRLI
statement when you want to use the default value for that parameter and specify
subsequent parameters. For example, code an IDENTIFY call like this to specify all
parameters except Return Code:
CALL DSNRLI,(IDFYFN,SSNM,RIBPTR,EIBPTR,TERMECB,STARTECB,,REASCODE)
For all languages: When you code CALL DSNRLI statements in any language,
specify all parameters that come before Return Code. You cannot omit any of those
parameters by coding zeros or blanks. There are no defaults for those parameters.
For all languages except assembler language: Code 0 for an optional parameter
in the CALL DSNRLI statement when you want to use the default value for that
parameter but specify subsequent parameters. For example, suppose you are
coding an IDENTIFY call in a COBOL program. You want to specify all parameters
except Return Code. Write the call in this way:
CALL 'DSNRLI' USING IDFYFN SSNM RIBPTR EIBPTR TERMECB STARTECB
BY CONTENT ZERO BY REFERENCE REASCODE.
Chapter 30. Programming for the Recoverable Resource Manager Services attachment facility (RRSAF) 775
CALL DSNRLI (function, ssnm, ribptr, eibptr, termecb, startecb[, retcode[, reascode[, groupoverride]]])
startecb
The address of the application's startup ECB. If DB2 has not started when the
application issues the IDENTIFY call, DB2 posts the ECB when DB2 startup
has completed. Enter a value of zero if you do not want to use a startup ECB.
DB2 posts a maximum of one startup ECB per address space. The ECB posted
is associated with the most recent IDENTIFY call from that address space. The
application program must examine any nonzero RRSAF or DB2 reason codes
before issuing a WAIT on this ECB.
retcode
A 4-byte area in which RRSAF places the return code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the return code in register 15 and the reason code in register 0.
reascode
A 4-byte area in which RRSAF places a reason code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the reason code in register 0.
If you specify this parameter, you must also specify retcode or its default (by
specifying a comma or zero, depending on the language).
| groupoverride
| An 8-byte area that the application provides. This field is optional. If this field is
| provided, it contains the string 'NOGROUP'. This string indicates that the
| subsystem name that is specified by ssnm is to be used as a DB2 subsystem
| name, even if ssnm matches a group attachment name. If groupoverride is not
| provided, ssnm is used as the group attachment name if it matches a group
| attachment name. If you specify this parameter in any language except
| assembler, you must also specify the return code and reason code parameters.
| In assembler language, you can omit the return code and reason code
| parameters by specifying commas as place-holders.
During IDENTIFY processing, DB2 determines whether the user address space is
authorized to connect to DB2. DB2 invokes the MVS SAF and passes a primary
authorization ID to SAF. That authorization ID is the 7-byte user ID associated with
the address space, unless an authorized function has built an ACEE for the address
space. If an authorized function has built an ACEE, DB2 passes the 8-byte user ID
from the ACEE. SAF calls an external security product, such as RACF, to determine
if the task is authorized to use:
v The DB2 resource class (CLASS=DSNR)
v The DB2 subsystem (SUBSYS=ssnm)
v Connection type RRSAF
If that check is successful, DB2 calls the DB2 connection exit to perform additional
verification and possibly change the authorization ID. DB2 then sets the connection
name to RRSAF and the connection type to RRSAF.
SWITCH TO is useful only after a successful IDENTIFY call. After you establish a
connection to one DB2 subsystem, you must make a SWITCH TO call before you
make an IDENTIFY call to another DB2 subsystem. If you do not, DB2 returns
return code X'200' and reason code X'00C12201'.
This example shows how you can use SWITCH TO to interact with three DB2
subsystems.
RRSAF calls for subsystem db21:
IDENTIFY
SIGNON
CREATE THREAD
Execute SQL on subsystem db21
SWITCH TO db22
RRSAF calls on subsystem db22:
IDENTIFY
SIGNON
CREATE THREAD
Execute SQL on subsystem db22
SWITCH TO db23
RRSAF calls on subsystem db23:
IDENTIFY
SIGNON
CREATE THREAD
Execute SQL on subsystem db23
SWITCH TO db21
Execute SQL on subsystem db21
SWITCH TO db22
Execute SQL on subsystem db22
SWITCH TO db21
Execute SQL on subsystem db21
SRRCMIT (to commit the UR)
SWITCH TO db23
Execute SQL on subsystem db23
SWITCH TO db22
Execute SQL on subsystem db22
SWITCH TO db21
Execute SQL on subsystem db21
SRRCMIT (to commit the UR)
Optional parameters (in order): retcode, reascode, user, appl, ws, xid
accounting and monitoring purposes. DB2 displays the workstation name in the
DISPLAY THREAD output and in DB2 accounting and statistics trace records. If
ws is less than 18 characters long, you must pad it on the right with blanks to a
length of 18 characters.
This field is optional. If specified, you must also specify retcode, reascode, user,
and appl. If not specified, no workstation name is associated with the
connection.
xid
A 4-byte area into which you put one of the following values:
0 Indicates that the thread is not part of a global transaction.
1 Indicates that the thread is part of a global transaction and that
DB2 should retrieve the global transaction ID from RRS. If a
global transaction ID already exists for the task, the thread
becomes part of the associated global transaction. Otherwise,
RRS generates a new global transaction ID.
address The 4-byte address of an area into which you enter a global
transaction ID for the thread. If the global transaction ID already
exists, the thread becomes part of the associated global
transaction. Otherwise, RRS creates a new global transaction
with the ID that you specify. The global transaction ID has the
format shown in Table 93.
A DB2 thread that is part of a global transaction can share locks with other DB2
threads that are part of the same global transaction and can access and modify
the same data. A global transaction exists until one of the threads that is part of
the global transaction is committed or rolled back.
Table 93. Format of a user-created global transaction ID
Field description Length in bytes Data type
Format ID 4 Character
Global transaction ID length 4 Integer
Branch qualifier length 4 Integer
Global transaction ID 1 to 64 Character
Branch qualifier 1 to 64 Character
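The layout in Table 93 can be sketched as a C structure, with the two variable-length fields shown at their 64-byte maximums. The structure and field names are illustrative assumptions; RRSAF is passed only the address of such an area:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of a user-created global transaction ID (Table 93), with the
   variable-length fields at their maximum lengths. The length fields
   give the number of bytes actually used in gtrid and bqual. */
struct global_xid {
    char    format_id[4];   /* Format ID, 4 characters             */
    int32_t gtrid_length;   /* Global transaction ID length        */
    int32_t bqual_length;   /* Branch qualifier length             */
    char    gtrid[64];      /* Global transaction ID, 1 to 64 bytes */
    char    bqual[64];      /* Branch qualifier, 1 to 64 bytes      */
};
```

With these maximum lengths the area is 140 bytes; shorter IDs simply use fewer bytes of gtrid and bqual, as recorded in the two length fields.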
See OS/390 Security Server (RACF) Macros and Interfaces for more information on
the RACROUTE macro.
Generally, you issue a SIGNON call after an IDENTIFY call and before a CREATE
THREAD call. You can also issue a SIGNON call if the application is at a point of
consistency, and
v The value of reuse in the CREATE THREAD call was RESET, or
v The value of reuse in the CREATE THREAD call was INITIAL, no held cursors
are open, the package or plan is bound with KEEPDYNAMIC(NO), and all
special registers are at their initial state. If there are open held cursors or the
package or plan is bound with KEEPDYNAMIC(YES), a SIGNON call is permitted
only if the primary authorization ID has not changed.
CALL DSNRLI (function, correlation-id, accounting-token, ... [, retcode] [, reascode] [, user] [, appl] [, ws] [, xid])
A DB2 thread that is part of a global transaction can share locks with other DB2
threads that are part of the same global transaction and can access and modify
the same data. A global transaction exists until one of the threads that is part of
the global transaction is committed or rolled back.
Generally, you issue an AUTH SIGNON call after an IDENTIFY call and before a
CREATE THREAD call. You can also issue an AUTH SIGNON call if the application
is at a point of consistency, and
v The value of reuse in the CREATE THREAD call was RESET, or
v The value of reuse in the CREATE THREAD call was INITIAL, no held cursors
are open, the package or plan is bound with KEEPDYNAMIC(NO), and all
special registers are at their initial state. If there are open held cursors or the
package or plan is bound with KEEPDYNAMIC(YES), a SIGNON call is permitted
only if the primary authorization ID has not changed.
Optional parameters (in order): retcode, reascode, user, appl, ws, xid
and in DB2 accounting and statistics trace records. If user is less than 16
characters long, you must pad it on the right with blanks to a length of 16
characters.
This field is optional. If specified, you must also specify retcode and reascode. If
not specified, no user ID is associated with the connection. You can omit this
parameter by specifying a value of 0.
appl
A 32-byte area that contains the application or transaction name of the end
user's application. You can use this parameter to provide the identity of the
client end user for accounting and monitoring purposes. DB2 displays the
application name in the DISPLAY THREAD output and in DB2 accounting and
statistics trace records. If appl is less than 32 characters long, you must pad it
on the right with blanks to a length of 32 characters.
This field is optional. If specified, you must also specify retcode, reascode, and
user. If not specified, no application or transaction is associated with the
connection. You can omit this parameter by specifying a value of 0.
ws An 18-byte area that contains the workstation name of the client end user. You
can use this parameter to provide the identity of the client end user for
accounting and monitoring purposes. DB2 displays the workstation name in the
DISPLAY THREAD output and in DB2 accounting and statistics trace records. If
ws is less than 18 characters long, you must pad it on the right with blanks to a
length of 18 characters.
This field is optional. If specified, you must also specify retcode, reascode, user,
and appl. If not specified, no workstation name is associated with the
connection.
xid
A 4-byte area into which you put one of the following values:
0 Indicates that the thread is not part of a global transaction.
1 Indicates that the thread is part of a global transaction and that
DB2 should retrieve the global transaction ID from RRS. If a
global transaction ID already exists for the task, the thread
becomes part of the associated global transaction. Otherwise,
RRS generates a new global transaction ID.
address The 4-byte address of an area into which you enter a global
transaction ID for the thread. If the global transaction ID already
exists, the thread becomes part of the associated global
transaction. Otherwise, RRS creates a new global transaction
with the ID that you specify. The global transaction ID has the
format shown in Table 93 on page 782.
A DB2 thread that is part of a global transaction can share locks with other DB2
threads that are part of the same global transaction and can access and modify
the same data. A global transaction exists until one of the threads that is part of
the global transaction is committed or rolled back.
Usage: CONTEXT SIGNON relies on the RRS context services functions Set
Context Data (CTXSDTA) and Retrieve Context Data (CTXRDTA). Before you
invoke CONTEXT SIGNON, you must have called CTXSDTA to store a primary
authorization ID and optionally, the address of an ACEE in the context data whose
context key you supply as input to CONTEXT SIGNON.
If the new primary authorization ID is not different from the current primary
authorization ID (established at IDENTIFY time or at a previous SIGNON
invocation), DB2 invokes only the signon exit. If the value has changed, DB2
establishes a new primary authorization ID and a new SQL authorization ID and
then invokes the signon exit.
If you pass an ACEE address, then CONTEXT SIGNON uses the value in
ACEEGRPN as the secondary authorization ID if the length of the group name
(ACEEGRPL) is not 0.
Generally, you issue a CONTEXT SIGNON call after an IDENTIFY call and before a
CREATE THREAD call. You can also issue a CONTEXT SIGNON call if the
application is at a point of consistency, and
v The value of reuse in the CREATE THREAD call was RESET, or
v The value of reuse in the CREATE THREAD call was INITIAL, no held cursors
are open, the package or plan is bound with KEEPDYNAMIC(NO), and all
special registers are at their initial state. If there are open held cursors or the
package or plan is bound with KEEPDYNAMIC(YES), a SIGNON call is permitted
only if the primary authorization ID has not changed.
Table 96. Examples of RRSAF CONTEXT SIGNON calls (continued)
Language Call example
FORTRAN CALL DSNRLI(CSGNONFN,CORRID,ACCTTKN,ACCTINT,CTXTKEY, RETCODE,REASCODE,
USERID,APPLNAME,WSNAME)
PL/I CALL DSNRLI(CSGNONFN,CORRID,ACCTTKN,ACCTINT,CTXTKEY,
RETCODE,REASCODE,USERID,APPLNAME,WSNAME);
Note: DSNRLI is an assembler language program; therefore, you must include the following compiler directives in
your C, C++, and PL/I applications:
C #pragma linkage(dsnrli, OS)
C++ extern "OS" {
int DSNRLI(
char * functn,
...); }
PL/I DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
Optional parameters (in order): retcode, reascode, pklistptr
Usage: CREATE THREAD allocates the DB2 resources required to issue SQL or
IFI requests. If you specify a plan name, RRSAF allocates the named plan.
If you specify ? in the first byte of the plan name and provide a collection name,
DB2 allocates a special plan named ?RRSAF and a package list that contains the
following entries:
v The collection name
v An entry that contains * for the location, collection ID, and package name
# If you specify ? in the first byte of the plan name and specify pklistptr, DB2 allocates
# a special plan named ?RRSAF and a package list that contains the following
# entries:
# v The collection names that you specify in the data area to which pklistptr points
# v An entry that contains * for the location, collection ID, and package name
The collection names are used to locate a package associated with the first SQL
statement in the program. The entry that contains *.*.* lets the application access
remote locations and access packages in collections other than the default
collection that is specified at create thread time.
The application can use the SQL statement SET CURRENT PACKAGESET to
change the collection ID that DB2 uses to locate a package.
When DB2 allocates a plan named ?RRSAF, DB2 checks authorization to execute
the package in the same way as it checks authorization to execute a package from
a requester other than DB2 for OS/390 and z/OS. See Part 3 (Volume 1) of DB2
Administration Guide for more information on authorization checking for package
execution.
If you specify this parameter, you must also specify retcode.
If the application allocated a plan, and you issue TERMINATE IDENTIFY without
first issuing TERMINATE THREAD, DB2 deallocates the plan before terminating the
connection.
Issuing TERMINATE IDENTIFY is optional. If you do not, DB2 performs the same
functions when the task terminates.
If DB2 terminates, the application must issue TERMINATE IDENTIFY to reset the
RRSAF control blocks. This ensures that future connection requests from the task
are successful when DB2 restarts.
Issue TRANSLATE only after a successful IDENTIFY operation. For errors that
occur during SQL or IFI requests, the TRANSLATE function is performed automatically.
Usage: Use TRANSLATE to get a corresponding SQL error code and message
text for the DB2 error reason codes that RRSAF returns in register 0 following a
CREATE THREAD service request. DB2 places this information in the SQLCODE
and SQLSTATE host variables or related fields of the SQLCA.
The TRANSLATE function translates codes that begin with X'00F3', but it does not
translate RRSAF reason codes that begin with X'00C1'. If you receive error reason
code X'00F30040' (resource unavailable) after an OPEN request, TRANSLATE
returns the name of the unavailable database object in the last 44 characters of
field SQLERRM. If the DB2 TRANSLATE function does not recognize the error
reason code, it returns SQLCODE -924 (SQLSTATE '58006') and places a printable
copy of the original DB2 function code and the return and error reason codes in the
SQLERRM field. The contents of registers 0 and 15 do not change unless
TRANSLATE fails, in which case register 0 is set to X'00C12204' and register 15 is
set to 200.
Table 100. Examples of RRSAF TRANSLATE calls (continued)
Language Call example
Note: DSNRLI is an assembler language program; therefore, you must include the following compiler directives in
your C, C++, and PL/I applications:
C #pragma linkage(dsnrli, OS)
C++ extern "OS" {
int DSNRLI(
char * functn,
...); }
PL/I DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
In these tables, the first column lists the most recent RRSAF or DB2 function
executed. The first row lists the next function executed. The contents of the
intersection of a row and column indicate the result of calling the function in the first
column followed by the function in the first row. For example, if you issue
TERMINATE THREAD, then you execute SQL or issue an IFI call, RRSAF returns
reason code X'00C12219'.
Table 101. Effect of call order when next call is IDENTIFY, SWITCH TO, SIGNON, or CREATE THREAD

Previous function                       | IDENTIFY    | SWITCH TO      | SIGNON, AUTH SIGNON, or CONTEXT SIGNON | CREATE THREAD
Empty: first call                       | IDENTIFY    | X'00C12205'    | X'00C12204'                            | X'00C12204'
IDENTIFY                                | X'00C12201' | Switch to ssnm | Signon                                 | X'00C12217'
SWITCH TO                               | IDENTIFY    | Switch to ssnm | Signon                                 | CREATE THREAD
SIGNON, AUTH SIGNON, or CONTEXT SIGNON  | X'00C12201' | Switch to ssnm | Signon                                 | CREATE THREAD
CREATE THREAD                           | X'00C12201' | Switch to ssnm | Signon                                 | X'00C12202'
TERMINATE THREAD                        | X'00C12201' | Switch to ssnm | Signon                                 | CREATE THREAD
IFI                                     | X'00C12201' | Switch to ssnm | Signon                                 | X'00C12202'
SQL                                     | X'00C12201' | Switch to ssnm | X'00F30092'                            | X'00C12202'
SRRCMIT or SRRBACK                      | X'00C12201' | Switch to ssnm | Signon                                 | X'00C12202'
Table 102. Effect of call order when next call is SQL or IFI, TERMINATE THREAD, TERMINATE IDENTIFY, or
TRANSLATE

Previous function                       | SQL or IFI      | TERMINATE THREAD     | TERMINATE IDENTIFY   | TRANSLATE
Empty: first call                       | X'00C12204'     | X'00C12204'          | X'00C12204'          | X'00C12204'
IDENTIFY                                | X'00C12218'     | X'00C12203'          | TERMINATE IDENTIFY   | TRANSLATE
SWITCH TO                               | SQL or IFI call | TERMINATE THREAD     | TERMINATE IDENTIFY   | TRANSLATE
SIGNON, AUTH SIGNON, or CONTEXT SIGNON  | X'00C12219'     | TERMINATE THREAD     | TERMINATE IDENTIFY   | TRANSLATE
CREATE THREAD                           | SQL or IFI call | TERMINATE THREAD     | TERMINATE IDENTIFY   | TRANSLATE
TERMINATE THREAD                        | X'00C12219'     | X'00C12203'          | TERMINATE IDENTIFY   | TRANSLATE
IFI                                     | SQL or IFI call | TERMINATE THREAD     | TERMINATE IDENTIFY   | TRANSLATE
SQL                                     | SQL or IFI call | X'00F30093' (note 1) | X'00F30093' (note 2) | TRANSLATE
SRRCMIT or SRRBACK                      | SQL or IFI call | TERMINATE THREAD     | TERMINATE IDENTIFY   | TRANSLATE

Notes:
1. TERMINATE THREAD is not allowed if any SQL operations are requested after CREATE THREAD or after the last
SRRCMIT or SRRBACK request.
2. TERMINATE IDENTIFY is not allowed if any SQL operations are requested after CREATE THREAD or after the
last SRRCMIT or SRRBACK request.
Sample scenarios
This section shows sample scenarios for connecting tasks to DB2.
A single task
This example shows a single task running in an address space. OS/390 RRS
controls commit processing when the task terminates normally.
IDENTIFY
SIGNON
CREATE THREAD
SQL or IFI
.
.
.
TERMINATE IDENTIFY
Multiple tasks
This example shows multiple tasks in an address space. Task 1 executes no SQL
statements and makes no IFI calls. Its purpose is to monitor DB2 termination and
startup ECBs and to check the DB2 release level.
[Figure: RRSAF call sequences for TASK 1 through TASK n]
When the reason code begins with X'00F3' (except for X'00F30006'), you can use
the RRSAF TRANSLATE function to obtain error message text that can be printed
and displayed.
For SQL calls, RRSAF returns standard SQL return codes in the SQLCA. See Part
1 of DB2 Messages and Codes for a list of those return codes and their meanings.
RRSAF returns IFI return codes and reason codes in the instrumentation facility
communication area (IFCA). See Part 3 of DB2 Messages and Codes for a list of
those return codes and their meanings.
Table 103. RRSAF return codes
Return code Explanation
0 Successful completion.
4 Status information. See the reason code for details.
>4 The call failed. See the reason code for details.
Program examples
This section contains sample JCL for running an RRSAF application and assembler
code for accessing RRSAF.
//SYSPRINT DD SYSOUT=*
//DSNRRSAF DD DUMMY
//SYSUDUMP DD SYSOUT=*
Delete the loaded modules when the application no longer needs to access DB2.
****************************** GET LANGUAGE INTERFACE ENTRY ADDRESSES
LOAD EP=DSNRLI Load the RRSAF service request EP
ST R0,LIRLI Save this for RRSAF service requests
LOAD EP=DSNHLIR Load the RRSAF SQL call Entry Point
ST R0,LISQL Save this for SQL calls
* .
* . Insert connection service requests and SQL calls here
* .
DELETE EP=DSNRLI Correctly maintain use count
DELETE EP=DSNHLIR Correctly maintain use count
In the example that follows, LISQL is addressable because the calling CSECT used
the same register 12 as CSECT DSNHLI. Your application must also establish
addressability to LISQL.
***********************************************************************
* Subroutine DSNHLI intercepts calls to LI EP=DSNHLI
***********************************************************************
DS 0D
DSNHLI CSECT Begin CSECT
The code in Figure 236 does not show a task that waits on the DB2 termination
ECB. You can code such a task and use the MVS WAIT macro to monitor the ECB.
The task that waits on the termination ECB should detach the sample code if the
termination ECB is posted. That task can also wait on the DB2 startup ECB. The
task in Figure 236 waits on the startup ECB at its own task level.
Figure 237 shows declarations for some of the variables used in Figure 236.
Figure 237. Declarations for variables used in the RRSAF connection routine
When you use this method, the attachment facility uses the default RCT. The
default RCT name is DSN2CT concatenated with a one- or two-character suffix.
The system administrator specifies this suffix in the DSN2STRT subparameter of
the INITPARM parameter in the CICS startup procedure. If no suffix is specified,
CICS uses an RCT name of DSN2CT00.
One of the most important things you can do to maximize thread reuse is to close
all cursors that you declared WITH HOLD before each sync point, because DB2
does not automatically close them. A thread for an application that contains an open
cursor cannot be reused. It is a good programming practice to close all cursors
immediately after you finish using them. For more information on the effects of
declaring cursors WITH HOLD in CICS applications, see “Held and non-held
cursors” on page 91.
MVC ENTNAME,=CL8'DSNCSQL'
MVC EXITPROG,=CL8'DSN2EXT1'
EXEC CICS INQUIRE EXITPROGRAM(EXITPROG) X
ENTRYNAME(ENTNAME) STARTSTATUS(STST) NOHANDLE
CLC EIBRESP,DFHRESP(NORMAL)
BNE NOTREADY
CLC STST,DFHVALUE(CONNECTED)
BNE NOTREADY
Attention
When both of the following conditions are true, the stormdrain effect can
occur:
v The CICS attachment facility is down.
v You are using INQUIRE EXITPROGRAM to avoid AEY9 abends.
For more information on the stormdrain effect and how to avoid it, see Chapter
2 of DB2 Data Sharing: Planning and Administration.
If you are using a release of CICS after CICS Version 4, and you have specified
STANDBY=SQLCODE and STRTWT=AUTO in the DSNCRCT TYPE=INIT macro,
you do not need to test whether the CICS attachment facility is up before executing
SQL. When an SQL statement is executed, and the CICS attachment facility is not
available, DB2 issues SQLCODE -923 with a reason code that indicates that the
attachment facility is not available. See Part 2 of DB2 Installation Guide for
information about the DSNCRCT macro and DB2 Messages and Codes for an
explanation of SQLCODE -923.
Answer: Add a column with the data type ROWID or an identity column. ROWID
columns and identity columns contain a unique value for each row in the table. You
can define the column as GENERATED ALWAYS, which means that you cannot
insert values into the column, or GENERATED BY DEFAULT, which means that
DB2 generates a value if you do not specify one. If you define the ROWID or
identity column as GENERATED BY DEFAULT, you need to define a unique index
that includes only that column to guarantee uniqueness.
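For example, a hypothetical table with a GENERATED BY DEFAULT identity column and the unique index that guarantees uniqueness might be defined as follows (the table and index names are illustrative):

```sql
-- Sketch only: MYTABLE and MYTABIX are hypothetical names.
CREATE TABLE MYTABLE
  (ROW_ID  INTEGER GENERATED BY DEFAULT AS IDENTITY,
   DATAVAL CHAR(10));

CREATE UNIQUE INDEX MYTABIX ON MYTABLE (ROW_ID);
```

With GENERATED ALWAYS instead, the unique index is not needed to guarantee uniqueness, because DB2 always supplies the value.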
| Answer: Declare your cursor as scrollable. When you select rows from the table,
| you can use the various forms of the FETCH statement to move to an absolute row
| number, move forward or backward a specified number of rows, move to the first or
| last row, position before the first row or after the last row, or fetch the next or
| previous row. You can use any combination of these FETCH statements to change
| direction repeatedly.
| For example, you can use code like the following to move forward in the
| department table by 10 records, backward five records, and forward again by three
| records:
| /**************************/
| /* Declare host variables */
| /**************************/
| EXEC SQL BEGIN DECLARE SECTION;
| char hv_deptname[37];
| EXEC SQL END DECLARE SECTION;
| /**********************************************************/
| /* Declare scrollable cursor to retrieve department names */
| /**********************************************************/
| EXEC SQL DECLARE C1 SCROLL CURSOR FOR
|   SELECT DEPTNAME FROM DSN8710.DEPT;
| .
| .
| .
| /**********************************************************/
| /* Open the cursor and position it before the start of */
| /* the result table. */
| /**********************************************************/
| EXEC SQL OPEN C1;
| EXEC SQL FETCH BEFORE FROM C1;
| /**********************************************************/
| /* Fetch first 10 rows */
| /**********************************************************/
| for(i=0;i<10;i++)
| {
| EXEC SQL FETCH NEXT FROM C1 INTO :hv_deptname;
| }
| /**********************************************************/
| /* Save the value in the tenth row */
| /**********************************************************/
| strcpy(tenth_row, hv_deptname);
| /**********************************************************/
| Answer: On the SELECT statement, use the FOR UPDATE clause without a
| column list, or the FOR UPDATE OF clause with a column list. For a more efficient
| program, specify a column list with only those columns that you intend to update.
| Then use the positioned UPDATE statement. The clause WHERE CURRENT OF
| identifies the cursor that points to the row you want to update.
| Answer: Use a scrollable cursor that is declared with the FOR UPDATE OF clause.
| Using a scrollable cursor to update backward involves these basic steps:
| 1. Declare the cursor with the SENSITIVE STATIC SCROLL parameters.
| 2. Open the cursor.
| 3. Execute a FETCH statement to position the cursor at the end of the result table.
| 4. Execute FETCH statements that move the cursor backward until you reach the
| row that you want to update.
| 5. Execute the UPDATE WHERE CURRENT OF statement to update the current
| row.
| 6. Repeat steps 4 and 5 until you have updated all the rows that you need to.
| 7. When you have retrieved and updated all the data, close the cursor.
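The steps above can be sketched in embedded SQL, in the style of the earlier department-table example. The cursor name C2 and host variable hv_deptname are illustrative; this fragment requires the DB2 precompiler and is not standalone, compilable C:

```c
EXEC SQL DECLARE C2 SENSITIVE STATIC SCROLL CURSOR FOR  /* step 1 */
  SELECT DEPTNAME FROM DSN8710.DEPT
  FOR UPDATE OF DEPTNAME;
EXEC SQL OPEN C2;                                       /* step 2 */
EXEC SQL FETCH LAST FROM C2 INTO :hv_deptname;          /* step 3 */
EXEC SQL FETCH PRIOR FROM C2 INTO :hv_deptname;         /* step 4 */
EXEC SQL UPDATE DSN8710.DEPT                            /* step 5 */
  SET DEPTNAME = :hv_deptname
  WHERE CURRENT OF C2;
/* repeat FETCH PRIOR and UPDATE as needed (step 6) */
EXEC SQL CLOSE C2;                                      /* step 7 */
```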
|
Updating thousands of rows
Question: Are there any special techniques for updating large volumes of data?
Answer: There are no special techniques; but for large numbers of rows, efficiency
can become very important. In particular, you need to be aware of locking
considerations, including the possibilities of lock escalation.
If your program allows input from a terminal before it commits the data and thereby
releases locks, it is possible that a significant loss of concurrency results. Review
the description of locks in “The ISOLATION option” on page 343 while designing
your program. Then review the expected use of tables to predict whether you could
have locking problems.
Using SELECT *
Question: What are the implications of using SELECT * ?
Answer: Generally, you should select only the columns you need because DB2 is
sensitive to the number of columns selected. Use SELECT * only when you are
sure you want to select all columns. One alternative is to use views defined with
only the necessary columns, and use SELECT * to access the views. Avoid
SELECT * if all the selected columns participate in a sort operation (SELECT
DISTINCT and SELECT...UNION, for example).
| DB2 usually optimizes queries to retrieve all rows that qualify. But sometimes you
| want to retrieve only the first few rows. For example, to retrieve the first row that is
| greater than or equal to a known value, code:
| SELECT column list FROM table
| WHERE key >= value
| ORDER BY key ASC
| Even with the ORDER BY clause, DB2 might fetch all the data first and sort it
| afterwards, which could be wasteful. Instead, you can write the query in one of the
| following ways:
| SELECT * FROM table
| WHERE key >= value
| ORDER BY key ASC
| OPTIMIZE FOR 1 ROW
| SELECT * FROM table
| WHERE key >= value
| ORDER BY key ASC
| FETCH FIRST n ROWS ONLY
| Use FETCH FIRST n ROWS ONLY to limit the number of rows in the result table to
| n rows. FETCH FIRST n ROWS ONLY has the following benefits:
| v When you use FETCH statements to retrieve data from a result table, FETCH
| FIRST n ROWS ONLY causes DB2 to retrieve only the number of rows that you
| need. This can have performance benefits, especially in distributed applications.
| If you try to execute a FETCH statement to retrieve the n+1st row, DB2 returns a
| +100 SQLCODE.
| v When you use FETCH FIRST ROW ONLY in a SELECT INTO statement, you
| never retrieve more than one row. Using FETCH FIRST ROW ONLY in a
| SELECT INTO statement can prevent SQL errors that are caused by
| inadvertently selecting more than one value into a host variable.
| When you specify FETCH FIRST n ROWS ONLY but not OPTIMIZE FOR n ROWS,
| OPTIMIZE FOR n ROWS is implied. When you specify FETCH FIRST n ROWS
| ONLY and OPTIMIZE FOR m ROWS, and m is less than n, DB2 optimizes the
| query for m rows. If m is greater than n, DB2 optimizes the query for n rows.
To get the effect of adding data to the “end” of a table, define a unique index on a
TIMESTAMP column in the table definition. Then, when you retrieve data from the
table, use an ORDER BY clause naming that column. The newest insert appears
last.
Answer: You can save the corresponding SQL statements in a table with a column
having a data type of VARCHAR(n), where n is the maximum length of any SQL
statement. You must save the source SQL statements, not the prepared versions.
That means that you must retrieve and then prepare each statement before
executing the version stored in the table. In essence, your program prepares an
SQL statement from a character string and executes it dynamically. (For a
description of dynamic SQL, see “Chapter 23. Coding dynamic SQL in application
programs” on page 497.)
For a description of dynamic SQL execution, see “Chapter 23. Coding dynamic SQL
in application programs” on page 497.
| Answer: You can store the data in a table in a VARCHAR column or a LOB
| column.
Answer: When you receive an SQL error because of a constraint violation, print out
the SQLCA. You can use the DSNTIAR routine described in “Handling SQL error
return codes” on page 76 to format the SQLCA for you. Check the SQL error
message insertion text (SQLERRM) for the name of the constraint. For information
on possible violations, see SQLCODEs -530 through -548 in Part 1 of DB2
Messages and Codes.
Authorization on all sample objects is given to PUBLIC in order to make the sample
programs easier to run. The contents of any table can easily be reviewed by
executing an SQL statement, for example SELECT * FROM DSN8710.PROJ. For
convenience in interpreting the examples, the department and employee tables are
listed here in full.
Content
Table 104 shows the content of the columns.
Table 104. Columns of the activity table
Column Column Name Description
1 ACTNO Activity ID (the primary key)
2 ACTKWD Activity keyword (up to six characters)
3 ACTDESC Activity description
The table, shown in Table 108 on page 817, resides in table space
DSN8D71A.DSN8S71D and is created with:
CREATE TABLE DSN8710.DEPT
(DEPTNO CHAR(3) NOT NULL,
DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6) ,
ADMRDEPT CHAR(3) NOT NULL,
LOCATION CHAR(16) ,
PRIMARY KEY (DEPTNO) )
IN DSN8D71A.DSN8S71D
CCSID EBCDIC;
Content
Table 106 shows the content of the columns.
Table 106. Columns of the department table
Column Column Name Description
1 DEPTNO Department ID, the primary key
2 DEPTNAME A name describing the general activities of the department
3 MGRNO Employee number (EMPNO) of the department manager
4 ADMRDEPT ID of the department to which this department reports; the
department at the highest level reports to itself
5 LOCATION The remote location name
It is a dependent of the employee table, through its foreign key on column MGRNO.
Table 108. DSN8710.DEPT: department table
DEPTNO DEPTNAME MGRNO ADMRDEPT LOCATION
A00 SPIFFY COMPUTER SERVICE DIV. 000010 A00 ----------------
B01 PLANNING 000020 A00 ----------------
C01 INFORMATION CENTER 000030 A00 ----------------
D01 DEVELOPMENT CENTER ------ A00 ----------------
E01 SUPPORT SERVICES 000050 A00 ----------------
D11 MANUFACTURING SYSTEMS 000060 D01 ----------------
D21 ADMINISTRATION SYSTEMS 000070 D01 ----------------
E11 OPERATIONS 000090 E01 ----------------
E21 SOFTWARE SUPPORT 000100 E01 ----------------
F22 BRANCH OFFICE F2 ------ E01 ----------------
G22 BRANCH OFFICE G2 ------ E01 ----------------
H22 BRANCH OFFICE H2 ------ E01 ----------------
I22 BRANCH OFFICE I2 ------ E01 ----------------
J22 BRANCH OFFICE J2 ------ E01 ----------------
The LOCATION column contains nulls until sample job DSNTEJ6 updates this
column with the location name.
The table shown in Table 111 on page 819 and Table 112 on page 820 resides in the
partitioned table space DSN8D71A.DSN8S71E. Because it has a foreign key
referencing DEPT, that table and the index on its primary key must be created first.
Then EMP is created with:
CREATE TABLE DSN8710.EMP
(EMPNO CHAR(6) NOT NULL,
FIRSTNME VARCHAR(12) NOT NULL,
MIDINIT CHAR(1) NOT NULL,
LASTNAME VARCHAR(15) NOT NULL,
WORKDEPT CHAR(3) ,
PHONENO CHAR(4) CONSTRAINT NUMBER CHECK
(PHONENO >= '0000' AND
PHONENO <= '9999') ,
HIREDATE DATE ,
JOB CHAR(8) ,
EDLEVEL SMALLINT ,
SEX CHAR(1) ,
BIRTHDATE DATE ,
SALARY DECIMAL(9,2) ,
BONUS DECIMAL(9,2) ,
COMM DECIMAL(9,2) ,
PRIMARY KEY (EMPNO) ,
FOREIGN KEY RED (WORKDEPT) REFERENCES DSN8710.DEPT
ON DELETE SET NULL )
EDITPROC DSN8EAE1
IN DSN8D71A.DSN8S71E
CCSID EBCDIC;
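The NUMBER check constraint in this definition rejects any PHONENO value outside the range '0000' to '9999'. For example (hypothetical employee values):

```sql
-- Fails with SQLCODE -545 because 'ABCD' violates the
-- NUMBER check constraint:
INSERT INTO DSN8710.EMP
   (EMPNO, FIRSTNME, MIDINIT, LASTNAME, PHONENO)
 VALUES ('200999', 'ANNE', 'X', 'EXAMPLE', 'ABCD');
```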
DB2 requires an auxiliary table for each LOB column in a table. The following
statement defines the auxiliary table for the BMP_PHOTO column of
DSN8710.EMP_PHOTO_RESUME; similar statements define the auxiliary tables for
the PSEG_PHOTO and RESUME columns:
CREATE AUX TABLE DSN8710.AUX_BMP_PHOTO
IN DSN8D71L.DSN8S71M
STORES DSN8710.EMP_PHOTO_RESUME
COLUMN BMP_PHOTO;
Content
Table 113 shows the content of the columns.
Table 113. Columns of the employee photo and resume table
Column Column Name Description
1 EMPNO Employee ID (the primary key)
2 EMP_ROWID Row ID to uniquely identify each row of the table.
DB2 supplies the values of this column.
3 PSEG_PHOTO Employee photo, in PSEG format
4 BMP_PHOTO Employee photo, in BMP format
5 RESUME Employee resume
The auxiliary tables for the employee photo and resume table have these indexes:
Table 115. Indexes of the auxiliary tables for the employee photo and resume table
Name On Table Type of Index
DSN8710.XAUX_BMP_PHOTO DSN8710.AUX_BMP_PHOTO Unique
DSN8710.XAUX_PSEG_PHOTO DSN8710.AUX_PSEG_PHOTO Unique
DSN8710.XAUX_EMP_RESUME DSN8710.AUX_EMP_RESUME Unique
The table resides in database DSN8D71A. Because it has foreign keys referencing
DEPT and EMP, those tables and the indexes on their primary keys must be
created first. Then PROJ is created with:
CREATE TABLE DSN8710.PROJ
(PROJNO CHAR(6) PRIMARY KEY NOT NULL,
PROJNAME VARCHAR(24) NOT NULL WITH DEFAULT
'PROJECT NAME UNDEFINED',
DEPTNO CHAR(3) NOT NULL REFERENCES
DSN8710.DEPT ON DELETE RESTRICT,
RESPEMP CHAR(6) NOT NULL REFERENCES
DSN8710.EMP ON DELETE RESTRICT,
PRSTAFF DECIMAL(5, 2) ,
PRSTDATE DATE ,
PRENDATE DATE ,
MAJPROJ CHAR(6))
IN DSN8D71A.DSN8S71P
CCSID EBCDIC;
Because the table is self-referencing, the foreign key for that constraint must be
added later with:
ALTER TABLE DSN8710.PROJ
FOREIGN KEY RPP (MAJPROJ) REFERENCES DSN8710.PROJ
ON DELETE CASCADE;
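Because constraint RPP specifies ON DELETE CASCADE, deleting a project also deletes its subprojects. For example, assuming 'MA2100' is the project number of a major project:

```sql
-- Also deletes every project whose MAJPROJ value is 'MA2100'
-- and, recursively, their subprojects:
DELETE FROM DSN8710.PROJ
  WHERE PROJNO = 'MA2100';
```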
Content
Table 116 shows the content of the columns.
Table 116. Columns of the project table
Column Column Name Description
1 PROJNO Project ID (the primary key)
2 PROJNAME Project name
3 DEPTNO ID of department responsible for the project
4 RESPEMP ID of employee responsible for the project
5 PRSTAFF Estimated mean number of persons needed
between PRSTDATE and PRENDATE to achieve
the whole project, including any subprojects
6 PRSTDATE Estimated project start date
7 PRENDATE Estimated project end date
8 MAJPROJ ID of any project of which this project is a part
Content
Table 118 shows the content of the columns.
Table 118. Columns of the project activity table
Column Column Name Description
1 PROJNO Project ID
2 ACTNO Activity ID
3 ACSTAFF Estimated mean number of employees needed to
staff the activity
4 ACSTDATE Estimated activity start date
5 ACENDATE Estimated activity completion date
The table resides in database DSN8D71A. Because it has foreign keys referencing
EMP and PROJACT, those tables and the indexes on their primary keys must be
created first. Then EMPPROJACT is created with:
CREATE TABLE DSN8710.EMPPROJACT
(EMPNO CHAR(6) NOT NULL,
PROJNO CHAR(6) NOT NULL,
ACTNO SMALLINT NOT NULL,
EMPTIME DECIMAL(5,2) ,
EMSTDATE DATE ,
EMENDATE DATE ,
FOREIGN KEY REPAPA (PROJNO, ACTNO, EMSTDATE)
REFERENCES DSN8710.PROJACT
ON DELETE RESTRICT,
FOREIGN KEY REPAE (EMPNO) REFERENCES DSN8710.EMP
ON DELETE RESTRICT)
IN DSN8D71A.DSN8S71P
CCSID EBCDIC;
Content
Table 120 shows the content of the columns.
Table 120. Columns of the employee to project activity table
Column Column Name Description
1 EMPNO Employee ID number
2 PROJNO Project ID of the project
3 ACTNO ID of the activity within the project
4 EMPTIME A proportion of the employee’s full time (between
0.00 and 1.00) to be spent on the activity
5 EMSTDATE Date the activity starts
6 EMENDATE Date the activity ends
[Figure 238 is a diagram of the referential structure: arrows connect DEPT, EMP,
EMP_PHOTO_RESUME, ACT, PROJ, PROJACT, and EMPPROJACT, and each arrow is
labeled with its delete rule (CASCADE, SET NULL, or RESTRICT).]
Figure 238. Relationships among tables in the sample application. Arrows point from parent
tables to dependent tables.
The SQL statements that create the sample views are shown below.
CREATE VIEW DSN8710.VDEPT
AS SELECT ALL DEPTNO ,
DEPTNAME,
MGRNO ,
ADMRDEPT
FROM DSN8710.DEPT;
CREATE VIEW DSN8710.VHDEPT
AS SELECT ALL DEPTNO ,
DEPTNAME,
MGRNO ,
ADMRDEPT,
LOCATION
FROM DSN8710.DEPT;
CREATE VIEW DSN8710.VEMP
AS SELECT ALL EMPNO ,
FIRSTNME,
MIDINIT ,
LASTNAME,
WORKDEPT
FROM DSN8710.EMP;
CREATE VIEW DSN8710.VPROJ
AS SELECT ALL
PROJNO, PROJNAME, DEPTNO, RESPEMP, PRSTAFF,
PRSTDATE, PRENDATE, MAJPROJ
FROM DSN8710.PROJ ;
CREATE VIEW DSN8710.VACT
AS SELECT ALL ACTNO ,
ACTKWD ,
ACTDESC
FROM DSN8710.ACT ;
CREATE VIEW DSN8710.VPROJACT
AS SELECT ALL
PROJNO,ACTNO, ACSTAFF, ACSTDATE, ACENDATE
FROM DSN8710.PROJACT ;
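The views can be queried like tables. For example, to list the departments that report directly to department A00:

```sql
SELECT DEPTNO, DEPTNAME, MGRNO
  FROM DSN8710.VDEPT
  WHERE ADMRDEPT = 'A00';
```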
[Figure 239 is a diagram of the sample table spaces: DSN8SvrD for the department
table, DSN8SvrE for the employee table, separate LOB table spaces for the
employee photo and resume table, DSN8SvrP for common programming tables, and
table spaces for the other application tables.]
In addition to the storage group and databases shown in Figure 239, the storage
group DSN8G71U and database DSN8D71U are created when you run DSNTEJ2A.
Storage group
The default storage group, SYSDEFLT, created when DB2 is installed, is not used
to store sample application data. The storage group used to store sample
application data is defined by this statement:
CREATE STOGROUP DSN8G710
VOLUMES (DSNV01)
VCAT DSNC710;
Databases
The default database, created when DB2 is installed, is not used to store the
sample application data. Two databases are used: one for tables related to
applications, the other for tables related to programs. The application database
is defined by the following statement; a similar statement defines the program
database:
CREATE DATABASE DSN8D71A
STOGROUP DSN8G710
BUFFERPOOL BP0
CCSID EBCDIC;
Table spaces
The following table spaces are explicitly defined by the statements shown below.
The table spaces not explicitly defined are created implicitly in the DSN8D71A
database, using the default space attributes.
Several sample applications come with DB2 to help you with DB2 programming
techniques and coding practices within each of the four environments: batch, TSO,
IMS, and CICS. The sample applications contain various applications that might
apply to managing a company.
You can examine the source code for the sample application programs in the online
sample library included with the DB2 product. The name of this sample library is
prefix.SDSNSAMP.
Phone application: The phone application lets you view or update individual
employee phone numbers. There are different versions of the application for
ISPF/TSO, CICS, IMS, and batch:
v ISPF/TSO applications use COBOL and PL/I.
v CICS and IMS applications use PL/I.
v Batch applications use C, C++, COBOL, FORTRAN, and PL/I.
LOB application: The LOB application demonstrates how to perform the following
tasks:
v Define DB2 objects to hold LOB data
v Populate DB2 tables with LOB data using the LOAD utility, or using INSERT and
UPDATE statements when the data is too large for use with the LOAD utility
v Manipulate the LOB data using LOB locators
Application programs: Tables 124 through 126 on pages 836 through 838 provide
the program names, JCL member names, and a brief description of some of the
programs included for each of the three environments: TSO, IMS, and CICS.
CICS
Table 126. Sample DB2 applications for CICS
Application Program names JCL member Description
Organization DSN8CC0, DSN8CC1, DSN8CC2 DSNTEJ5C CICS COBOL Organization Application
Organization DSN8CP0, DSN8CP1, DSN8CP2 DSNTEJ5P CICS PL/I Organization Application
Project DSN8CP6, DSN8CP7, DSN8CP8 DSNTEJ5P CICS PL/I Project Application
Phone DSN8CP3 DSNTEJ5P CICS PL/I Phone Application. This program lists
employee telephone numbers and updates them if requested.
Because these three programs also accept the static SQL statements CONNECT,
SET CONNECTION, and RELEASE, you can use the programs to access DB2
tables at remote locations.
DSNTIAUL and DSNTIAD are shipped only as source code, so you must
precompile, assemble, link, and bind them before you can use them. If you want to
use the source code version of DSNTEP2, you must precompile, compile, link, and
bind it. You need to bind the object code version of DSNTEP2 before you can use
it. Usually, your system administrator prepares the programs as part of the
installation process. Table 127 indicates which installation job prepares each sample
program. All installation jobs are in data set DSN710.SDSNSAMP.
Table 127. Jobs that prepare DSNTIAUL, DSNTIAD, and DSNTEP2
Program name Program preparation job
DSNTIAUL DSNTEJ2A
DSNTIAD DSNTIJTM
DSNTEP2 (source) DSNTEJ1P
DSNTEP2 (object) DSNTEJ1L
To run the sample programs, use the DSN RUN command, which is described in
detail in Chapter 2 of DB2 Command Reference. Table 128 on page 840 lists the
load module name and plan name you must specify, and the parameters you can
specify when you run each program. See the following sections for the meaning of
each parameter.
The remainder of this appendix contains the following information about running
each program:
v Descriptions of the input parameters
v Data sets you must allocate before you run the program
v Return codes from the program
v Examples of invocation
See the sample jobs listed in Table 127 on page 839 for a working example of each
program.
Running DSNTIAUL
This section contains information that you need when you run DSNTIAUL, including
parameters, data sets, return codes, and invocation examples.
If you do not specify the SQL parameter, your input data set must contain one or
more single-line statements (without a semicolon) that use the following syntax:
table or view name [WHERE conditions] [ORDER BY columns]
Each input statement must be a valid SQL SELECT statement with the clause
SELECT * FROM omitted and with no ending semicolon. DSNTIAUL generates a
SELECT statement for each input statement by appending your input line to
SELECT * FROM, then uses the result to determine which tables to unload. For this
input format, the text for each table specification can be a maximum of 72 bytes
and must not span multiple lines.
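For example, each of the following is a valid input line for this format (sample table names shown; the WHERE and ORDER BY clauses are optional):

```sql
DSN8710.DEPT
DSN8710.EMP WHERE WORKDEPT='D11' ORDER BY EMPNO
```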
For both input formats, you can specify SELECT statements that join two or more
tables or select specific columns from a table. If you specify columns, you will need
to modify the LOAD statement that DSNTIAUL generates.
Define all data sets as sequential data sets. You can specify the record length and
block size of the SYSPUNCH and SYSRECnn data sets. The maximum record
length for the SYSPUNCH and SYSRECnn data sets is 32760 bytes.
Examples of DSNTIAUL invocation: Suppose you want to unload the rows for
department D01 from the project table. You can fit the table specification on one
line, and you do not want to execute any non-SELECT statements, so you do not
need the SQL parameter. Your invocation looks like this:
Appendix C. How to run sample programs DSNTIAUL, DSNTIAD, and DSNTEP2 841
//UNLOAD EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(DSN)
RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB71) -
LIB('DSN710.RUNLIB.LOAD')
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSREC00 DD DSN=DSN8UNLD.SYSREC00,
// UNIT=SYSDA,SPACE=(32760,(1000,500)),DISP=(,CATLG),
// VOL=SER=SCR03
//SYSPUNCH DD DSN=DSN8UNLD.SYSPUNCH,
// UNIT=SYSDA,SPACE=(800,(15,15)),DISP=(,CATLG),
// VOL=SER=SCR03,RECFM=FB,LRECL=120,BLKSIZE=1200
//SYSIN DD *
DSN8710.PROJ WHERE DEPTNO='D01'
If you want to obtain the LOAD utility control statements for loading rows into a
table, but you do not want to unload the rows, you can set the data set names for
the SYSRECnn data sets to DUMMY. For example, to obtain the utility control
statements for loading rows into the department table, you invoke DSNTIAUL like
this:
Now suppose that you also want to use DSNTIAUL to do these things:
v Unload all rows from the project table
v Unload only rows from the employee table for employees in departments with
department numbers that begin with D, and order the unloaded rows by
employee number
v Lock both tables in share mode before you unload them
For these activities, you must specify the SQL parameter when you run DSNTIAUL.
Your DSNTIAUL invocation looks like this:
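A sketch of the SYSIN input for such an invocation; the JCL is like the earlier example, with the SQL parameter passed on the RUN subcommand and one SYSRECnn data set allocated per SELECT statement:

```sql
LOCK TABLE DSN8710.PROJ IN SHARE MODE;
LOCK TABLE DSN8710.EMP IN SHARE MODE;
SELECT * FROM DSN8710.PROJ;
SELECT * FROM DSN8710.EMP
  WHERE WORKDEPT LIKE 'D%'
  ORDER BY EMPNO;
```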
Running DSNTIAD
This section contains information that you need when you run DSNTIAD, including
parameters, data sets, return codes, and invocation examples.
DSNTIAD parameters:
RC0
If you specify this parameter, DSNTIAD ends with return code 0, even if the
program encounters SQL errors. If you do not specify RC0, DSNTIAD ends with
a return code that reflects the severity of the errors that occur. Without RC0,
DSNTIAD terminates if more than 10 SQL errors occur during a single
execution.
SQLTERM(termchar)
Specify this parameter to indicate the character that you use to end each SQL
statement. You can use any special character except one of those listed in
Table 129. SQLTERM(;) is the default.
Table 129. Invalid special characters for the SQL terminator
Name Character Hexadecimal representation
blank X'40'
comma , X'6B'
double quote " X'7F'
left parenthesis ( X'4D'
right parenthesis ) X'5D'
single quote ' X'7D'
underscore _ X'6D'
Use a character other than a semicolon if you plan to execute a statement that
contains embedded semicolons. For example, suppose you specify the
parameter SQLTERM(#) to indicate that the character # is the statement
terminator. Then a CREATE TRIGGER statement with embedded semicolons
looks like this:
CREATE TRIGGER NEW_HIRE
AFTER INSERT ON EMP
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
END#
Be careful to choose a character for the statement terminator that is not used
within the statement.
Running DSNTEP2
This section contains information that you need when you run DSNTEP2, including
parameters, data sets, return codes, and invocation examples.
DSNTEP2 parameters:
Parameter
Description
ALIGN(MID) or ALIGN(LHS)
If you want your DSNTEP2 output centered, specify ALIGN(MID). If you want
the output left-aligned, choose ALIGN(LHS). The default is ALIGN(MID).
NOMIXED or MIXED
If your input to DSNTEP2 contains any DBCS characters, specify MIXED. If
your input contains no DBCS characters, specify NOMIXED. The default is
NOMIXED.
SQLTERM(termchar)
Specify this parameter to indicate the character that you use to end each SQL
statement. You can use any character except one of those listed in Table 129
on page 843. SQLTERM(;) is the default.
Use a character other than a semicolon if you plan to execute a statement that
contains embedded semicolons. For example, suppose you specify the
parameter SQLTERM(#) to indicate that the character # is the statement
terminator. Then a CREATE TRIGGER statement with embedded semicolons
looks like this:
CREATE TRIGGER NEW_HIRE
AFTER INSERT ON EMP
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
END#
Be careful to choose a character for the statement terminator that is not used
within the statement.
If you want to change the SQL terminator within a series of SQL statements,
you can use the --#SET TERMINATOR control statement. For example,
suppose that you have an existing set of SQL statements to which you want to
add a CREATE TRIGGER statement that has embedded semicolons. You can
use the default SQLTERM value, which is a semicolon, for all of the existing
SQL statements. Before you execute the CREATE TRIGGER statement, include
the --#SET TERMINATOR # control statement to change the SQL terminator to
the character #:
SELECT * FROM DEPT;
SELECT * FROM ACT;
SELECT * FROM EMPPROJACT;
SELECT * FROM PROJ;
SELECT * FROM PROJACT;
--#SET TERMINATOR #
CREATE TRIGGER NEW_HIRE
AFTER INSERT ON EMP
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
END#
See the discussion of the SYSIN data set for more information on the --#SET
control statement.
Figure 244. DSNTEP2 Invocation with the ALIGN(LHS) and MIXED parameters
848 Application Programming and SQL Guide
Appendix D. Programming examples
This appendix contains the following programming examples:
v Sample COBOL dynamic SQL program
v “Sample dynamic and static SQL in a C program” on page 863
v “Example DB2 REXX application” on page 866
v “Sample COBOL program using DRDA access” on page 880
v “Sample COBOL program using DB2 private protocol access” on page 888
v “Examples of using stored procedures” on page 894
This example program does not support BLOB, CLOB, or DBCLOB data types.
The SET statement sets a pointer from the address of an area in the linkage
section or another pointer; the statement can also set the address of an area in the
linkage section. Figure 246 on page 853 illustrates these uses of the SET statement.
The SET statement does not permit the use of an address in the
WORKING-STORAGE section.
Storage allocation
COBOL does not provide a means to allocate main storage within a program. You
can achieve the same end by having an initial program which allocates the storage,
and then calls a second program that manipulates the pointer. (COBOL does not
permit you to directly manipulate the pointer because errors and abends are likely
to occur.)
The initial program is extremely simple. It includes a working storage section that
allocates the maximum amount of storage needed. This program then calls the
second program, passing the allocated storage to it.
If you need to allocate parts of storage, the best method is to use indexes or
subscripts. You can use subscripts for arithmetic and comparison operations.
Example
Figure 245 on page 851 shows an example of the initial program DSN8BCU1 that
allocates the storage and calls the second program DSN8BCU2 shown in
Figure 246 on page 853. DSN8BCU2 then defines the passed storage areas in its
linkage section and includes the USING clause on its PROCEDURE DIVISION
statement.
Defining the pointers, then redefining them as numeric, permits some manipulation
of the pointers that you cannot perform directly. For example, you cannot add the
column length to the record pointer, but you can add the column length to the
numeric value that redefines the pointer.
Figure 246. Called program that does pointer manipulation (Part 1 of 10)
Figure 246. Called program that does pointer manipulation (Part 2 of 10)
Figure 246. Called program that does pointer manipulation (Part 3 of 10)
Figure 246. Called program that does pointer manipulation (Part 4 of 10)
Figure 246. Called program that does pointer manipulation (Part 5 of 10)
Figure 246. Called program that does pointer manipulation (Part 6 of 10)
Figure 246. Called program that does pointer manipulation (Part 7 of 10)
Figure 246. Called program that does pointer manipulation (Part 8 of 10)
Figure 246. Called program that does pointer manipulation (Part 9 of 10)
Figure 246. Called program that does pointer manipulation (Part 10 of 10)
Sample dynamic and static SQL in a C program
Figure 247 illustrates dynamic SQL and static SQL embedded in a C program. Each
section of the program is identified with a comment. Section 1 of the program
shows static SQL; sections 2, 3, and 4 show dynamic SQL. The function of each
section is explained in detail in the prologue to the program.
/**********************************************************************/
/* Descriptive name = Dynamic SQL sample using C language */
/* */
/* Function = To show examples of the use of dynamic and static */
/* SQL. */
/* */
/* Notes = This example assumes that the EMP and DEPT tables are */
/* defined. They need not be the same as the DB2 Sample */
/* tables. */
/* */
/* Module type = C program */
/* Processor = DB2 precompiler, C compiler */
/* Module size = see link edit */
/* Attributes = not reentrant or reusable */
/* */
/* Input = */
/* */
/* symbolic label/name = DEPT */
/* description = arbitrary table */
/* symbolic label/name = EMP */
/* description = arbitrary table */
/* */
/* Output = */
/* */
/* symbolic label/name = SYSPRINT */
/* description = print results via printf */
/* */
/* Exit-normal = return code 0 normal completion */
/* */
/* Exit-error = */
/* */
/* Return code = SQLCA */
/* */
/* Abend codes = none */
/* */
/* External references = none */
/* */
/* Control-blocks = */
/* SQLCA - sql communication area */
/* */
#include "stdio.h"
#include "stdefs.h"
EXEC SQL INCLUDE SQLCA;
EXEC SQL INCLUDE SQLDA;
EXEC SQL BEGIN DECLARE SECTION;
short edlevel;
struct { short len;
char x1[56];
} stmtbf1, stmtbf2, inpstr;
struct { short len;
char x1[15];
} lname;
short hv1;
struct { char deptno[4];
struct { short len;
char x[36];
} deptname;
char mgrno[7];
char admrdept[4];
} hv2;
short ind[4];
EXEC SQL END DECLARE SECTION;
EXEC SQL DECLARE EMP TABLE
(EMPNO CHAR(6) ,
FIRSTNAME VARCHAR(12) ,
MIDINIT CHAR(1) ,
LASTNAME VARCHAR(15) ,
WORKDEPT CHAR(3) ,
PHONENO CHAR(4) ,
HIREDATE DECIMAL(6) ,
JOBCODE DECIMAL(3) ,
EDLEVEL SMALLINT ,
SEX CHAR(1) ,
BIRTHDATE DECIMAL(6) ,
SALARY DECIMAL(8,2) ,
FORFNAME VARGRAPHIC(12) ,
FORMNAME GRAPHIC(1) ,
FORLNAME VARGRAPHIC(15) ,
FORADDR VARGRAPHIC(256) ) ;
%DRAW object-name (SSID=ssid TYPE={SELECT | INSERT | UPDATE | LOAD}
DRAW parameters:
object-name
The name of the table or view for which DRAW builds an SQL statement or
utility control statement. The name can be a one-, two-, or three-part name. The
table or view to which object-name refers must exist before DRAW can run.
object-name is a required parameter.
SSID=ssid
Specifies the name of the local DB2 subsystem.
S can be used as an abbreviation for SSID.
If you invoke DRAW from the command line of the edit session in SPUFI,
SSID=ssid is an optional parameter. DRAW uses the subsystem ID from the
DB2I Defaults panel.
TYPE=operation-type
The type of statement that DRAW builds.
T can be used as an abbreviation for TYPE.
operation-type has one of the following values:
SELECT Builds a SELECT statement in which the result table contains
all columns of object-name.
S can be used as an abbreviation for SELECT.
INSERT Builds a template for an INSERT statement that inserts values
into all columns of object-name. The template contains
comments that indicate where the user can place column
values.
I can be used as an abbreviation for INSERT.
UPDATE Builds a template for an UPDATE statement that updates
columns of object-name. The template contains comments that
indicate where the user can place column values and qualify
the update operation for selected rows.
U can be used as an abbreviation for UPDATE.
LOAD Builds a template for a LOAD utility control statement for
object-name.
L can be used as an abbreviation for LOAD.
Generate a template for an INSERT statement that inserts values into table
DSN8710.EMP at location SAN_JOSE. The local subsystem ID is DSN.
Generate a LOAD control statement to load values into table DSN8710.EMP. The
local subsystem ID is DSN.
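The corresponding invocations might look like this (a sketch, entered on the command line of the edit session; SSID can be omitted under SPUFI):

```
%DRAW SAN_JOSE.DSN8710.EMP (TYPE=INSERT SSID=DSN
%DRAW DSN8710.EMP (TYPE=LOAD SSID=DSN
```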
>>--DRAW-----tablename-----|---------------------------|-------><
|-(-|-Ssid=subsystem-name-|-|
| +-Select-+ |
|-Type=-|-Insert-|----|
|-Update-|
+--Load--+
Ssid=subsystem-name
subsystem-name specifies the name of a DB2 subsystem.
Select
Composes a basic query for selecting data from the columns of a
table or view. If TYPE is not specified, SELECT is assumed.
Using SELECT with the DRAW command produces a query that would
retrieve all rows and all columns from the specified table. You
can then modify the query as needed.
Insert
Composes a basic query to insert data into the columns of a table
or view.
Update
Composes a basic query to change the data in a table or view.
To use this UPDATE query, type the changes you want to make to
the right of the column names, and delete the lines you don't
need. Be sure to complete the WHERE clause. For information on
writing queries to update data, refer to DB2 SQL Reference.
Load
Composes a load statement to load the data in a table.
*/
L2 = WHEREAMI()
/**********************************************************************/
/* TRACE ?R */
/**********************************************************************/
Address ISPEXEC
"ISREDIT MACRO (ARGS) NOPROCESS"
If ARGS = "" Then
Do
Do I = L1+2 To L2-2;Say SourceLine(I);End
Exit (20)
End
Parse Upper Var Args Table "(" Parms
Parms = Translate(Parms," ",",")
Type = "SELECT" /* Default */
SSID = "" /* Default */
"VGET (DSNEOV01)"
If RC = 0 Then SSID = DSNEOV01
If (Parms <> "") Then
Do Until(Parms = "")
Parse Var Parms Var "=" Value Parms
If Var = "T" | Var = "TYPE" Then Type = Value
Else
If Var = "S" | Var = "SSID" Then SSID = Value
Else
Exit (20)
End
"CONTROL ERRORS RETURN"
"ISREDIT (LEFTBND,RIGHTBND) = BOUNDS"
"ISREDIT (LRECL) = DATA_WIDTH" /*LRECL*/
BndSize = RightBnd - LeftBnd + 1
If BndSize > 72 Then BndSize = 72
"ISREDIT PROCESS DEST"
Select
When rc = 0 Then
'ISREDIT (ZDEST) = LINENUM .ZDEST'
When rc <= 8 Then /* No A or B entered */
Do
zedsmsg = 'Enter "A"/"B" line cmd'
zedlmsg = 'DRAW requires an "A" or "B" line command'
'SETMSG MSG(ISRZ001)'
Exit 12
End
When rc < 20 Then /* Conflicting line commands - edit sets message */
Exit 12
When rc = 20 Then
zdest = 0
Otherwise
Exit 12
End
Select
When (Left(Type,1) = "S") Then
Call DrawSelect
When (Left(Type,1) = "I") Then
Call DrawInsert
When (Left(Type,1) = "U") Then
Call DrawUpdate
When (Left(Type,1) = "L") Then
Call DrawLoad
Otherwise EXIT (20)
End
Do I = LINE.0 To 1 By -1
LINE = COPIES(" ",LEFTBND-1)||LINE.I
'ISREDIT LINE_AFTER 'zdest' = DATALINE (Line)'
End
line1 = zdest + 1
'ISREDIT CURSOR = 'line1 0
Exit
IDENTIFICATION DIVISION.
PROGRAM-ID. TWOPHASE.
AUTHOR.
REMARKS.
*****************************************************************
* *
* MODULE NAME = TWOPHASE *
* *
* DESCRIPTIVE NAME = DB2 SAMPLE APPLICATION USING *
* TWO PHASE COMMIT AND THE DRDA DISTRIBUTED *
* ACCESS METHOD *
* *
* COPYRIGHT = 5665-DB2 (C) COPYRIGHT IBM CORP 1982, 1989 *
* REFER TO COPYRIGHT INSTRUCTIONS FORM NUMBER G120-2083 *
* *
* STATUS = VERSION 5 *
* *
* FUNCTION = THIS MODULE DEMONSTRATES DISTRIBUTED DATA ACCESS *
* USING 2 PHASE COMMIT BY TRANSFERRING AN EMPLOYEE *
* FROM ONE LOCATION TO ANOTHER. *
* *
* NOTE: THIS PROGRAM ASSUMES THE EXISTENCE OF THE *
* TABLE SYSADM.EMP AT LOCATIONS STLEC1 AND *
* STLEC2. *
* *
* MODULE TYPE = COBOL PROGRAM *
* PROCESSOR = DB2 PRECOMPILER, VS COBOL II *
* MODULE SIZE = SEE LINK EDIT *
* ATTRIBUTES = NOT REENTRANT OR REUSABLE *
* *
* ENTRY POINT = *
* PURPOSE = TO ILLUSTRATE 2 PHASE COMMIT *
* LINKAGE = INVOKE FROM DSN RUN *
* INPUT = NONE *
* OUTPUT = *
* SYMBOLIC LABEL/NAME = SYSPRINT *
* DESCRIPTION = PRINT OUT THE DESCRIPTION OF EACH *
* STEP AND THE RESULTANT SQLCA *
* *
* EXIT NORMAL = RETURN CODE 0 FROM NORMAL COMPLETION *
* *
* EXIT ERROR = NONE *
* *
* EXTERNAL REFERENCES = *
* ROUTINE SERVICES = NONE *
* DATA-AREAS = NONE *
* CONTROL-BLOCKS = *
* SQLCA - SQL COMMUNICATION AREA *
* *
* TABLES = NONE *
* *
* CHANGE-ACTIVITY = NONE *
* *
* *
* *
Figure 249. Sample COBOL two-phase commit application for DRDA access (Part 1 of 8)
Figure 249. Sample COBOL two-phase commit application for DRDA access (Part 2 of 8)
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
SELECT PRINTER, ASSIGN TO S-OUT1.
DATA DIVISION.
FILE SECTION.
FD PRINTER
RECORD CONTAINS 120 CHARACTERS
DATA RECORD IS PRT-TC-RESULTS
LABEL RECORD IS OMITTED.
01 PRT-TC-RESULTS.
03 PRT-BLANK PIC X(120).
Figure 249. Sample COBOL two-phase commit application for DRDA access (Part 3 of 8)
*****************************************************************
* Variable declarations *
*****************************************************************
01 H-EMPTBL.
05 H-EMPNO PIC X(6).
05 H-NAME.
49 H-NAME-LN PIC S9(4) COMP-4.
49 H-NAME-DA PIC X(32).
05 H-ADDRESS.
49 H-ADDRESS-LN PIC S9(4) COMP-4.
49 H-ADDRESS-DA PIC X(36).
05 H-CITY.
49 H-CITY-LN PIC S9(4) COMP-4.
49 H-CITY-DA PIC X(36).
05 H-EMPLOC PIC X(4).
05 H-SSNO PIC X(11).
05 H-BORN PIC X(10).
05 H-SEX PIC X(1).
05 H-HIRED PIC X(10).
05 H-DEPTNO PIC X(3).
05 H-JOBCODE PIC S9(3)V COMP-3.
05 H-SRATE PIC S9(5) COMP.
05 H-EDUC PIC S9(5) COMP.
05 H-SAL PIC S9(6)V9(2) COMP-3.
05 H-VALIDCHK PIC S9(6)V COMP-3.
01 H-EMPTBL-IND-TABLE.
02 H-EMPTBL-IND PIC S9(4) COMP OCCURS 15 TIMES.
*****************************************************************
* Includes for the variables used in the COBOL standard *
* language procedures and the SQLCA. *
*****************************************************************
*****************************************************************
* Declaration for the table that contains employee information *
*****************************************************************
Figure 249. Sample COBOL two-phase commit application for DRDA access (Part 4 of 8)
*****************************************************************
* Constants *
*****************************************************************
*****************************************************************
* Declaration of the cursor that will be used to retrieve *
* information about a transferring employee *
*****************************************************************
PROCEDURE DIVISION.
A101-HOUSE-KEEPING.
OPEN OUTPUT PRINTER.
*****************************************************************
* An employee is transferring from location STLEC1 to STLEC2. *
* Retrieve information about the employee from STLEC1, delete *
* the employee from STLEC1 and insert the employee at STLEC2 *
* using the information obtained from STLEC1. *
*****************************************************************
MAINLINE.
PERFORM CONNECT-TO-SITE-1
IF SQLCODE IS EQUAL TO 0
PERFORM PROCESS-CURSOR-SITE-1
IF SQLCODE IS EQUAL TO 0
PERFORM UPDATE-ADDRESS
PERFORM CONNECT-TO-SITE-2
IF SQLCODE IS EQUAL TO 0
PERFORM PROCESS-SITE-2.
PERFORM COMMIT-WORK.
Figure 249. Sample COBOL two-phase commit application for DRDA access (Part 5 of 8)
*****************************************************************
* Establish a connection to STLEC1 *
*****************************************************************
CONNECT-TO-SITE-1.
*****************************************************************
* Once a connection has been established successfully at STLEC1,*
* open the cursor that will be used to retrieve information *
* about the transferring employee. *
*****************************************************************
PROCESS-CURSOR-SITE-1.
*****************************************************************
* Retrieve information about the transferring employee. *
* Provided that the employee exists, perform DELETE-SITE-1 to *
* delete the employee from STLEC1. *
*****************************************************************
FETCH-DELETE-SITE-1.
Figure 249. Sample COBOL two-phase commit application for DRDA access (Part 6 of 8)
DELETE-SITE-1.
*****************************************************************
* Close the cursor used to retrieve information about the *
* transferring employee. *
*****************************************************************
CLOSE-CURSOR-SITE-1.
*****************************************************************
* Update certain employee information in order to make it *
* current. *
*****************************************************************
UPDATE-ADDRESS.
MOVE TEMP-ADDRESS-LN TO H-ADDRESS-LN.
MOVE '1500 NEW STREET' TO H-ADDRESS-DA.
MOVE TEMP-CITY-LN TO H-CITY-LN.
MOVE 'NEW CITY, CA 97804' TO H-CITY-DA.
MOVE 'SJCA' TO H-EMPLOC.
*****************************************************************
* Establish a connection to STLEC2 *
*****************************************************************
CONNECT-TO-SITE-2.
Figure 249. Sample COBOL two-phase commit application for DRDA access (Part 7 of 8)
PROCESS-SITE-2.
*****************************************************************
* COMMIT any changes that were made at STLEC1 and STLEC2. *
*****************************************************************
COMMIT-WORK.
*****************************************************************
* Include COBOL standard language procedures *
*****************************************************************
INCLUDE-SUBS.
EXEC SQL INCLUDE COBSSUB END-EXEC.
Figure 249. Sample COBOL two-phase commit application for DRDA access (Part 8 of 8)
IDENTIFICATION DIVISION.
PROGRAM-ID. TWOPHASE.
AUTHOR.
REMARKS.
*****************************************************************
* *
* MODULE NAME = TWOPHASE *
* *
* DESCRIPTIVE NAME = DB2 SAMPLE APPLICATION USING *
* TWO PHASE COMMIT AND DB2 PRIVATE PROTOCOL *
* DISTRIBUTED ACCESS METHOD *
* *
* COPYRIGHT = 5665-DB2 (C) COPYRIGHT IBM CORP 1982, 1989 *
* REFER TO COPYRIGHT INSTRUCTIONS FORM NUMBER G120-2083 *
* *
* STATUS = VERSION 5 *
* *
* FUNCTION = THIS MODULE DEMONSTRATES DISTRIBUTED DATA ACCESS *
* USING 2 PHASE COMMIT BY TRANSFERRING AN EMPLOYEE *
* FROM ONE LOCATION TO ANOTHER. *
* *
* NOTE: THIS PROGRAM ASSUMES THE EXISTENCE OF THE *
* TABLE SYSADM.EMP AT LOCATIONS STLEC1 AND *
* STLEC2. *
* *
* MODULE TYPE = COBOL PROGRAM *
* PROCESSOR = DB2 PRECOMPILER, VS COBOL II *
* MODULE SIZE = SEE LINK EDIT *
* ATTRIBUTES = NOT REENTRANT OR REUSABLE *
* *
* ENTRY POINT = *
* PURPOSE = TO ILLUSTRATE 2 PHASE COMMIT *
* LINKAGE = INVOKE FROM DSN RUN *
* INPUT = NONE *
* OUTPUT = *
* SYMBOLIC LABEL/NAME = SYSPRINT *
* DESCRIPTION = PRINT OUT THE DESCRIPTION OF EACH *
* STEP AND THE RESULTANT SQLCA *
* *
* EXIT NORMAL = RETURN CODE 0 FROM NORMAL COMPLETION *
* *
* EXIT ERROR = NONE *
* *
* EXTERNAL REFERENCES = *
* ROUTINE SERVICES = NONE *
* DATA-AREAS = NONE *
* CONTROL-BLOCKS = *
* SQLCA - SQL COMMUNICATION AREA *
* *
* TABLES = NONE *
* *
* CHANGE-ACTIVITY = NONE *
* *
* *
Figure 250. Sample COBOL two-phase commit application for DB2 private protocol access
(Part 1 of 7)
Figure 250. Sample COBOL two-phase commit application for DB2 private protocol access
(Part 2 of 7)
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
SELECT PRINTER, ASSIGN TO S-OUT1.
DATA DIVISION.
FILE SECTION.
FD PRINTER
RECORD CONTAINS 120 CHARACTERS
DATA RECORD IS PRT-TC-RESULTS
LABEL RECORD IS OMITTED.
01 PRT-TC-RESULTS.
03 PRT-BLANK PIC X(120).
WORKING-STORAGE SECTION.
*****************************************************************
* Variable declarations *
*****************************************************************
01 H-EMPTBL.
05 H-EMPNO PIC X(6).
05 H-NAME.
49 H-NAME-LN PIC S9(4) COMP-4.
49 H-NAME-DA PIC X(32).
05 H-ADDRESS.
49 H-ADDRESS-LN PIC S9(4) COMP-4.
49 H-ADDRESS-DA PIC X(36).
05 H-CITY.
49 H-CITY-LN PIC S9(4) COMP-4.
49 H-CITY-DA PIC X(36).
05 H-EMPLOC PIC X(4).
05 H-SSNO PIC X(11).
05 H-BORN PIC X(10).
05 H-SEX PIC X(1).
05 H-HIRED PIC X(10).
05 H-DEPTNO PIC X(3).
05 H-JOBCODE PIC S9(3)V COMP-3.
05 H-SRATE PIC S9(5) COMP.
05 H-EDUC PIC S9(5) COMP.
05 H-SAL PIC S9(6)V9(2) COMP-3.
05 H-VALIDCHK PIC S9(6)V COMP-3.
Figure 250. Sample COBOL two-phase commit application for DB2 private protocol access
(Part 3 of 7)
*****************************************************************
* Includes for the variables used in the COBOL standard *
* language procedures and the SQLCA. *
*****************************************************************
*****************************************************************
* Declaration for the table that contains employee information *
*****************************************************************
*****************************************************************
* Constants *
*****************************************************************
*****************************************************************
* Declaration of the cursor that will be used to retrieve *
* information about a transferring employee *
*****************************************************************
Figure 250. Sample COBOL two-phase commit application for DB2 private protocol access
(Part 4 of 7)
*****************************************************************
* An employee is transferring from location STLEC1 to STLEC2. *
* Retrieve information about the employee from STLEC1, delete *
* the employee from STLEC1 and insert the employee at STLEC2 *
* using the information obtained from STLEC1. *
*****************************************************************
MAINLINE.
PERFORM PROCESS-CURSOR-SITE-1
IF SQLCODE IS EQUAL TO 0
PERFORM UPDATE-ADDRESS
PERFORM PROCESS-SITE-2.
PERFORM COMMIT-WORK.
PROG-END.
CLOSE PRINTER.
GOBACK.
*****************************************************************
* Open the cursor that will be used to retrieve information *
* about the transferring employee. *
*****************************************************************
PROCESS-CURSOR-SITE-1.
*****************************************************************
* Retrieve information about the transferring employee. *
* Provided that the employee exists, perform DELETE-SITE-1 to *
* delete the employee from STLEC1. *
*****************************************************************
FETCH-DELETE-SITE-1.
Figure 250. Sample COBOL two-phase commit application for DB2 private protocol access
(Part 5 of 7)
*****************************************************************
* Delete the employee from STLEC1. *
*****************************************************************
DELETE-SITE-1.
*****************************************************************
* Close the cursor used to retrieve information about the *
* transferring employee. *
*****************************************************************
CLOSE-CURSOR-SITE-1.
*****************************************************************
* Update certain employee information in order to make it *
* current. *
*****************************************************************
UPDATE-ADDRESS.
MOVE TEMP-ADDRESS-LN TO H-ADDRESS-LN.
MOVE '1500 NEW STREET' TO H-ADDRESS-DA.
MOVE TEMP-CITY-LN TO H-CITY-LN.
MOVE 'NEW CITY, CA 97804' TO H-CITY-DA.
MOVE 'SJCA' TO H-EMPLOC.
Figure 250. Sample COBOL two-phase commit application for DB2 private protocol access
(Part 6 of 7)
PROCESS-SITE-2.
*****************************************************************
* COMMIT any changes that were made at STLEC1 and STLEC2. *
*****************************************************************
COMMIT-WORK.
*****************************************************************
* Include COBOL standard language procedures *
*****************************************************************
INCLUDE-SUBS.
EXEC SQL INCLUDE COBSSUB END-EXEC.
Figure 250. Sample COBOL two-phase commit application for DB2 private protocol access
(Part 7 of 7)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
main()
{
/************************************************************/
/* Include the SQLCA and SQLDA */
/************************************************************/
EXEC SQL INCLUDE SQLCA;
EXEC SQL INCLUDE SQLDA;
/************************************************************/
/* Declare variables that are not SQL-related. */
/************************************************************/
short int i; /* Loop counter */
/************************************************************/
/* Declare the following: */
/* - Parameters used to call stored procedure GETPRML */
/* - An SQLDA for DESCRIBE PROCEDURE */
/* - An SQLDA for DESCRIBE CURSOR */
/* - Result set variable locators for up to three result */
/* sets */
/************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
char procnm[19]; /* INPUT parm -- PROCEDURE name */
char schema[9]; /* INPUT parm -- User's schema */
long int out_code; /* OUTPUT -- SQLCODE from the */
/* SELECT operation. */
struct {
short int parmlen;
char parmtxt[254];
} parmlst; /* OUTPUT -- RUNOPTS values */
/* for the matching row in */
/* catalog table SYSROUTINES */
struct indicators {
short int procnm_ind;
short int schema_ind;
short int out_code_ind;
short int parmlst_ind;
} parmind;
/* Indicator variable structure */
/************************************************************/
/* Call the GETPRML stored procedure to retrieve the */
/* RUNOPTS values for the stored procedure. In this */
/* example, we request the PARMLIST definition for the */
/* stored procedure named DSN8EP2. */
/* */
/* The call should complete with SQLCODE +466 because */
/* GETPRML returns result sets. */
/************************************************************/
strcpy(procnm,"dsn8ep2 ");
/* Input parameter -- PROCEDURE to be found */
strcpy(schema," ");
/* Input parameter -- Schema name for proc */
parmind.procnm_ind=0;
parmind.schema_ind=0;
parmind.out_code_ind=0;
/* Indicate that none of the input parameters */
/* have null values */
parmind.parmlst_ind=-1;
/* The parmlst parameter is an output parm. */
/* Mark PARMLST parameter as null, so the DB2 */
/* requester doesn't have to send the entire */
/* PARMLST variable to the server. This */
/* helps reduce network I/O time, because */
/* PARMLST is fairly large. */
EXEC SQL
CALL GETPRML(:procnm INDICATOR :parmind.procnm_ind,
:schema INDICATOR :parmind.schema_ind,
:out_code INDICATOR :parmind.out_code_ind,
:parmlst INDICATOR :parmind.parmlst_ind);
if(SQLCODE!=+466) /* If SQL CALL failed, */
{
/* print the SQLCODE and any */
/* message tokens */
printf("SQL CALL failed due to SQLCODE = %d\n",SQLCODE);
printf("sqlca.sqlerrmc = ");
for(i=0;i<sqlca.sqlerrml;i++)
printf("%c",sqlca.sqlerrmc[i]);
printf("\n");
}
/********************************************************/
/* Use the statement DESCRIBE PROCEDURE to */
/* return information about the result sets in the */
/* SQLDA pointed to by proc_da: */
/* - SQLD contains the number of result sets that were */
/* returned by the stored procedure. */
/* - Each SQLVAR entry has the following information */
/* about a result set: */
/* - SQLNAME contains the name of the cursor that */
/* the stored procedure uses to return the result */
/* set. */
/* - SQLIND contains an estimate of the number of */
/* rows in the result set. */
/* - SQLDATA contains the result locator value for */
/* the result set. */
/********************************************************/
EXEC SQL DESCRIBE PROCEDURE INTO :*proc_da;
/********************************************************/
/* Assume that you have examined SQLD and determined */
/* that there is one result set. Use the statement */
/* ASSOCIATE LOCATORS to establish a result set locator */
/* for the result set. */
/********************************************************/
EXEC SQL ASSOCIATE LOCATORS (:loc1) WITH PROCEDURE GETPRML;
/********************************************************/
/* Use the statement ALLOCATE CURSOR to associate a */
/* cursor for the result set. */
/********************************************************/
EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :loc1;
/********************************************************/
/* Use the statement DESCRIBE CURSOR to determine the */
/* columns in the result set. */
/********************************************************/
EXEC SQL DESCRIBE CURSOR C1 INTO :*res_da;
/********************************************************/
/* Fetch the data from the result table. */
/********************************************************/
while(SQLCODE==0)
{
EXEC SQL FETCH C1 USING DESCRIPTOR :*res_da;
}
return;
}
ENVIRONMENT DIVISION.
CONFIGURATION SECTION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
SELECT REPOUT
ASSIGN TO UT-S-SYSPRINT.
DATA DIVISION.
FILE SECTION.
FD REPOUT
RECORD CONTAINS 127 CHARACTERS
LABEL RECORDS ARE OMITTED
DATA RECORD IS REPREC.
01 REPREC PIC X(127).
WORKING-STORAGE SECTION.
*****************************************************
* MESSAGES FOR SQL CALL *
*****************************************************
01 SQLREC.
02 BADMSG PIC X(34) VALUE
' SQL CALL FAILED DUE TO SQLCODE = '.
02 BADCODE PIC +9(5) USAGE DISPLAY.
02 FILLER PIC X(80) VALUE SPACES.
01 ERRMREC.
02 ERRMMSG PIC X(12) VALUE ' SQLERRMC = '.
02 ERRMCODE PIC X(70).
02 FILLER PIC X(38) VALUE SPACES.
01 CALLREC.
02 CALLMSG PIC X(28) VALUE
' GETPRML FAILED DUE TO RC = '.
02 CALLCODE PIC +9(5) USAGE DISPLAY.
02 FILLER PIC X(42) VALUE SPACES.
01 RSLTREC.
02 RSLTMSG PIC X(15) VALUE
' TABLE NAME IS '.
02 TBLNAME PIC X(18) VALUE SPACES.
02 FILLER PIC X(87) VALUE SPACES.
*****************************************************
* SQL INCLUDE FOR SQLCA *
*****************************************************
EXEC SQL INCLUDE SQLCA END-EXEC.
PROCEDURE DIVISION.
*------------------
PROG-START.
OPEN OUTPUT REPOUT.
* OPEN OUTPUT FILE
MOVE 'DSN8EP2 ' TO PROCNM.
* INPUT PARAMETER -- PROCEDURE TO BE FOUND
MOVE SPACES TO SCHEMA.
* INPUT PARAMETER -- SCHEMA IN SYSROUTINES
MOVE -1 TO PARMIND.
* THE PARMLST PARAMETER IS AN OUTPUT PARM.
* MARK PARMLST PARAMETER AS NULL, SO THE DB2
* REQUESTER DOESN'T HAVE TO SEND THE ENTIRE
* PARMLST VARIABLE TO THE SERVER. THIS
* HELPS REDUCE NETWORK I/O TIME, BECAUSE
* PARMLST IS FAIRLY LARGE.
EXEC SQL
CALL GETPRML(:PROCNM,
:SCHEMA,
:OUT-CODE,
:PARMLST INDICATOR :PARMIND)
END-EXEC.
/************************************************************/
/* Declare the parameters used to call the GETPRML */
/* stored procedure. */
/************************************************************/
DECLARE PROCNM CHAR(18), /* INPUT parm -- PROCEDURE name */
SCHEMA CHAR(8), /* INPUT parm -- User's schema */
OUT_CODE FIXED BIN(31),
/* OUTPUT -- SQLCODE from the */
/* SELECT operation. */
PARMLST CHAR(254) /* OUTPUT -- RUNOPTS for */
VARYING, /* the matching row in the */
/* catalog table SYSROUTINES */
PARMIND FIXED BIN(15);
/* PARMLST indicator variable */
/************************************************************/
/* Include the SQLCA */
/************************************************************/
EXEC SQL INCLUDE SQLCA;
/************************************************************/
/* Call the GETPRML stored procedure to retrieve the */
/* RUNOPTS values for the stored procedure. In this */
/* example, we request the RUNOPTS values for the */
/* stored procedure named DSN8EP2. */
/************************************************************/
PROCNM = 'DSN8EP2';
/* Input parameter -- PROCEDURE to be found */
SCHEMA = ' ';
/* Input parameter -- SCHEMA in SYSROUTINES */
PARMIND = -1; /* The PARMLST parameter is an output parm. */
/* Mark PARMLST parameter as null, so the DB2 */
/* requester doesn't have to send the entire */
/* PARMLST variable to the server. This */
/* helps reduce network I/O time, because */
/* PARMLST is fairly large. */
EXEC SQL
CALL GETPRML(:PROCNM,
:SCHEMA,
:OUT_CODE,
:PARMLST INDICATOR :PARMIND);
The output parameters from this stored procedure contain the SQLCODE from the
SELECT statement and the value of the RUNOPTS column from SYSROUTINES.
The CREATE PROCEDURE statement for this stored procedure might look like this:
CREATE PROCEDURE GETPRML(PROCNM CHAR(18) IN, SCHEMA CHAR(8) IN,
OUTCODE INTEGER OUT, PARMLST VARCHAR(254) OUT)
LANGUAGE C
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME 'GETPRML'
COLLID GETPRML
ASUTIME NO LIMIT
PARAMETER STYLE GENERAL
STAY RESIDENT NO
RUN OPTIONS 'MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)'
WLM ENVIRONMENT SAMPPROG
PROGRAM TYPE MAIN
SECURITY DB2
RESULT SETS 2
COMMIT ON RETURN NO;
/***************************************************************/
/* Declare C variables for SQL operations on the parameters. */
/* These are local variables to the C program, which you must */
/* copy to and from the parameter list provided to the stored */
/* procedure. */
/***************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
char PROCNM[19];
char SCHEMA[9];
char PARMLST[255];
EXEC SQL END DECLARE SECTION;
/***************************************************************/
/* Declare cursors for returning result sets to the caller. */
/***************************************************************/
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
SELECT NAME
FROM SYSIBM.SYSTABLES
WHERE CREATOR=:SCHEMA;
main(argc,argv)
int argc;
char *argv[];
{
/********************************************************/
/* Copy the input parameters into the area reserved in */
/* the program for SQL processing. */
/********************************************************/
strcpy(PROCNM, argv[1]);
strcpy(SCHEMA, argv[2]);
/********************************************************/
/* Issue the SQL SELECT against the SYSROUTINES */
/* DB2 catalog table. */
/********************************************************/
strcpy(PARMLST, ""); /* Clear PARMLST */
EXEC SQL
SELECT RUNOPTS INTO :PARMLST
FROM SYSIBM.SYSROUTINES
WHERE NAME=:PROCNM AND
SCHEMA=:SCHEMA;
/********************************************************/
/* Copy the PARMLST value returned by the SELECT back to*/
/* the parameter list provided to this stored procedure.*/
/********************************************************/
strcpy(argv[4], PARMLST);
/********************************************************/
/* Open cursor C1 to cause DB2 to return a result set */
/* to the caller. */
/********************************************************/
EXEC SQL OPEN C1;
}
The linkage convention for this stored procedure is GENERAL WITH NULLS.
The output parameters from this stored procedure contain the SQLCODE from the
SELECT operation, and the value of the RUNOPTS column retrieved from the
SYSROUTINES table.
The CREATE PROCEDURE statement for this stored procedure might look like this:
CREATE PROCEDURE GETPRML(PROCNM CHAR(18) IN, SCHEMA CHAR(8) IN,
OUTCODE INTEGER OUT, PARMLST VARCHAR(254) OUT)
LANGUAGE C
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME 'GETPRML'
COLLID GETPRML
ASUTIME NO LIMIT
PARAMETER STYLE GENERAL WITH NULLS
STAY RESIDENT NO
RUN OPTIONS 'MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)'
WLM ENVIRONMENT SAMPPROG
PROGRAM TYPE MAIN
SECURITY DB2
RESULT SETS 2
COMMIT ON RETURN NO;
/***************************************************************/
/* Declare C variables used for SQL operations on the */
/* parameters. These are local variables to the C program, */
/* which you must copy to and from the parameter list provided */
/* to the stored procedure. */
/***************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
char PROCNM[19];
char SCHEMA[9];
char PARMLST[255];
struct INDICATORS {
short int PROCNM_IND;
short int SCHEMA_IND;
short int OUT_CODE_IND;
short int PARMLST_IND;
} PARM_IND;
EXEC SQL END DECLARE SECTION;
/***************************************************************/
/* Declare cursors for returning result sets to the caller. */
/***************************************************************/
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
SELECT NAME
FROM SYSIBM.SYSTABLES
WHERE CREATOR=:SCHEMA;
main(argc,argv)
int argc;
char *argv[];
{
/********************************************************/
/* Copy the input parameters into the area reserved in */
/* the local program for SQL processing. */
/********************************************************/
strcpy(PROCNM, argv[1]);
strcpy(SCHEMA, argv[2]);
/********************************************************/
/* Copy null indicator values for the parameter list. */
/********************************************************/
memcpy(&PARM_IND,(struct INDICATORS *) argv[5],
sizeof(PARM_IND));
Figure 255. A C stored procedure with linkage convention GENERAL WITH NULLS (Part 1 of
2)
else {
/********************************************************/
/* If the input parameters are not NULL, issue the SQL */
/* SELECT against the SYSIBM.SYSROUTINES catalog */
/* table. */
/********************************************************/
strcpy(PARMLST, ""); /* Clear PARMLST */
EXEC SQL
SELECT RUNOPTS INTO :PARMLST
FROM SYSIBM.SYSROUTINES
WHERE NAME=:PROCNM AND
SCHEMA=:SCHEMA;
/********************************************************/
/* Copy SQLCODE to the output parameter list. */
/********************************************************/
*(int *) argv[3] = SQLCODE;
PARM_IND.OUT_CODE_IND = 0; /* OUT_CODE is not NULL */
}
/********************************************************/
/* Copy the RUNOPTS value back to the output parameter */
/* area. */
/********************************************************/
strcpy(argv[4], PARMLST);
/********************************************************/
/* Copy the null indicators back to the output parameter*/
/* area. */
/********************************************************/
memcpy((struct INDICATORS *) argv[5],&PARM_IND,
sizeof(PARM_IND));
/********************************************************/
/* Open cursor C1 to cause DB2 to return a result set */
/* to the caller. */
/********************************************************/
EXEC SQL OPEN C1;
}
Figure 255. A C stored procedure with linkage convention GENERAL WITH NULLS (Part 2 of
2)
The output parameters from this stored procedure contain the SQLCODE from the
SELECT operation, and the value of the RUNOPTS column retrieved from the
SYSROUTINES table.
The CREATE PROCEDURE statement for this stored procedure might look like this:
CREATE PROCEDURE GETPRML(PROCNM CHAR(18) IN, SCHEMA CHAR(8) IN,
OUTCODE INTEGER OUT, PARMLST VARCHAR(254) OUT)
LANGUAGE COBOL
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME 'GETPRML'
COLLID GETPRML
ASUTIME NO LIMIT
PARAMETER STYLE GENERAL
STAY RESIDENT NO
RUN OPTIONS 'MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)'
WLM ENVIRONMENT SAMPPROG
PROGRAM TYPE MAIN
SECURITY DB2
RESULT SETS 2
COMMIT ON RETURN NO;
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
DATA DIVISION.
FILE SECTION.
WORKING-STORAGE SECTION.
***************************************************
* DECLARE CURSOR FOR RETURNING RESULT SETS
***************************************************
*
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
SELECT NAME FROM SYSIBM.SYSTABLES WHERE CREATOR=:INSCHEMA
END-EXEC.
*
LINKAGE SECTION.
***************************************************
* DECLARE THE INPUT PARAMETERS FOR THE PROCEDURE
***************************************************
01 PROCNM PIC X(18).
01 SCHEMA PIC X(8).
*******************************************************
* DECLARE THE OUTPUT PARAMETERS FOR THE PROCEDURE
*******************************************************
01 OUT-CODE PIC S9(9) USAGE BINARY.
01 PARMLST.
49 PARMLST-LEN PIC S9(4) USAGE BINARY.
49 PARMLST-TEXT PIC X(254).
Figure 256. A COBOL stored procedure with linkage convention GENERAL (Part 1 of 2)
*******************************************************
* COPY SQLCODE INTO THE OUTPUT PARAMETER AREA
*******************************************************
MOVE SQLCODE TO OUT-CODE.
*******************************************************
* OPEN CURSOR C1 TO CAUSE DB2 TO RETURN A RESULT SET
* TO THE CALLER.
*******************************************************
EXEC SQL OPEN C1
END-EXEC.
PROG-END.
GOBACK.
Figure 256. A COBOL stored procedure with linkage convention GENERAL (Part 2 of 2)
The linkage convention for this stored procedure is GENERAL WITH NULLS.
The output parameters from this stored procedure contain the SQLCODE from the
SELECT operation, and the value of the RUNOPTS column retrieved from the
SYSIBM.SYSROUTINES table.
The CREATE PROCEDURE statement for this stored procedure might look like this:
CREATE PROCEDURE GETPRML(PROCNM CHAR(18) IN, SCHEMA CHAR(8) IN,
OUTCODE INTEGER OUT, PARMLST VARCHAR(254) OUT)
LANGUAGE COBOL
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME 'GETPRML'
COLLID GETPRML
ASUTIME NO LIMIT
PARAMETER STYLE GENERAL WITH NULLS
STAY RESIDENT NO
RUN OPTIONS 'MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)'
WLM ENVIRONMENT SAMPPROG
PROGRAM TYPE MAIN
SECURITY DB2
RESULT SETS 2
COMMIT ON RETURN NO;
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
DATA DIVISION.
FILE SECTION.
*
WORKING-STORAGE SECTION.
*
EXEC SQL INCLUDE SQLCA END-EXEC.
*
***************************************************
* DECLARE A HOST VARIABLE TO HOLD INPUT SCHEMA
***************************************************
01 INSCHEMA PIC X(8).
***************************************************
* DECLARE CURSOR FOR RETURNING RESULT SETS
***************************************************
*
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
SELECT NAME FROM SYSIBM.SYSTABLES WHERE CREATOR=:INSCHEMA
END-EXEC.
*
LINKAGE SECTION.
***************************************************
* DECLARE THE INPUT PARAMETERS FOR THE PROCEDURE
***************************************************
01 PROCNM PIC X(18).
01 SCHEMA PIC X(8).
***************************************************
* DECLARE THE OUTPUT PARAMETERS FOR THE PROCEDURE
***************************************************
01 OUT-CODE PIC S9(9) USAGE BINARY.
01 PARMLST.
49 PARMLST-LEN PIC S9(4) USAGE BINARY.
49 PARMLST-TEXT PIC X(254).
***************************************************
* DECLARE THE STRUCTURE CONTAINING THE NULL
* INDICATORS FOR THE INPUT AND OUTPUT PARAMETERS.
***************************************************
01 IND-PARM.
03 PROCNM-IND PIC S9(4) USAGE BINARY.
03 SCHEMA-IND PIC S9(4) USAGE BINARY.
03 OUT-CODE-IND PIC S9(4) USAGE BINARY.
03 PARMLST-IND PIC S9(4) USAGE BINARY.
Figure 257. A COBOL stored procedure with linkage convention GENERAL WITH NULLS
(Part 1 of 2)
Figure 257. A COBOL stored procedure with linkage convention GENERAL WITH NULLS
(Part 2 of 2)
The output parameters from this stored procedure contain the SQLCODE from the
SELECT operation, and the value of the RUNOPTS column retrieved from the
SYSIBM.SYSROUTINES table.
The CREATE PROCEDURE statement for this stored procedure might look like this:
CREATE PROCEDURE GETPRML(PROCNM CHAR(18) IN, SCHEMA CHAR(8) IN,
OUTCODE INTEGER OUT, PARMLST VARCHAR(254) OUT)
LANGUAGE PLI
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME 'GETPRML'
COLLID GETPRML
*PROCESS SYSTEM(MVS);
GETPRML:
PROC(PROCNM, SCHEMA, OUT_CODE, PARMLST)
OPTIONS(MAIN NOEXECOPS REENTRANT);
/************************************************************/
/* Execute SELECT from SYSIBM.SYSROUTINES in the catalog. */
/************************************************************/
EXEC SQL
SELECT RUNOPTS INTO :PARMLST
FROM SYSIBM.SYSROUTINES
WHERE NAME=:PROCNM AND
SCHEMA=:SCHEMA;
The linkage convention for this stored procedure is GENERAL WITH NULLS.
The output parameters from this stored procedure contain the SQLCODE from the
SELECT operation, and the value of the RUNOPTS column retrieved from the
SYSIBM.SYSROUTINES table.
The CREATE PROCEDURE statement for this stored procedure might look like this:
CREATE PROCEDURE GETPRML(PROCNM CHAR(18) IN, SCHEMA CHAR(8) IN,
OUTCODE INTEGER OUT, PARMLST VARCHAR(254) OUT)
LANGUAGE PLI
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME 'GETPRML'
COLLID GETPRML
*PROCESS SYSTEM(MVS);
GETPRML:
PROC(PROCNM, SCHEMA, OUT_CODE, PARMLST, INDICATORS)
OPTIONS(MAIN NOEXECOPS REENTRANT);
IF PROCNM_IND<0 |
SCHEMA_IND<0 THEN
DO; /* If any input parm is NULL, */
OUT_CODE = 9999; /* Set output return code. */
OUT_CODE_IND = 0;
/* Output return code is not NULL.*/
PARMLST_IND = -1; /* Assign NULL value to PARMLST. */
END;
ELSE /* If input parms are not NULL, */
DO; /* */
/************************************************************/
/* Issue the SQL SELECT against the SYSIBM.SYSROUTINES */
/* DB2 catalog table. */
/************************************************************/
EXEC SQL
SELECT RUNOPTS INTO :PARMLST
FROM SYSIBM.SYSROUTINES
WHERE NAME=:PROCNM AND
SCHEMA=:SCHEMA;
PARMLST_IND = 0; /* Mark PARMLST as not NULL. */
END GETPRML;
Figure 259. A PL/I stored procedure with linkage convention GENERAL WITH NULLS
One situation in which this technique might be useful is when a resource becomes
unavailable during a rebind of many plans or packages. DB2 normally terminates
the rebind and does not rebind the remaining plans or packages. Later, however,
you might want to rebind only the objects that remain to be rebound. You can use
DSNTIAUL to select the remaining plans or packages from the DB2 catalog and to
build the REBIND subcommands for them. You can then submit the subcommands
through the DSN command processor, as usual.
You might first need to edit the output from DSNTIAUL so that DSN can accept it as
input. The CLIST DSNTEDIT can perform much of that task for you.
For both REBIND PLAN and REBIND PACKAGE subcommands, use TSO edit
commands to add the DSN command as the first line of the sequential data set and
to add END as the last line. When you have edited the sequential data set, you can
run it to rebind the selected plans or packages.
If the SELECT statement returns no qualifying rows, then DSNTIAUL does not
generate REBIND subcommands.
The examples in this section generate REBIND subcommands that work in DB2 for
OS/390 and z/OS Version 7. You might need to modify the examples for prior
releases of DB2 that do not allow all of the same syntax.
Example 1:
REBIND all plans without terminating because of unavailable resources.
SELECT SUBSTR('REBIND PLAN('CONCAT NAME
CONCAT') ',1,45)
FROM SYSIBM.SYSPLAN;
Example 2:
REBIND all versions of all packages without terminating because of
unavailable resources.
SELECT SUBSTR('REBIND PACKAGE('CONCAT COLLID CONCAT'.'
CONCAT NAME CONCAT'.(*)) ',1,55)
FROM SYSIBM.SYSPACKAGE;
Example 3:
REBIND all plans bound before a given date and time.
SELECT SUBSTR('REBIND PLAN('CONCAT NAME
CONCAT') ',1,45)
FROM SYSIBM.SYSPLAN
WHERE BINDDATE <= 'yyyymmdd' AND
BINDTIME <= 'hhmmssth';
where yyyymmdd represents the date portion and hhmmssth represents the
time portion of the timestamp string.
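For instance, to select all plans bound on or before noon on July 1, 2001, you might substitute values like the following (the date and time values are illustrative only):

```sql
SELECT SUBSTR('REBIND PLAN('CONCAT NAME
       CONCAT') ',1,45)
  FROM SYSIBM.SYSPLAN
  WHERE BINDDATE <= '20010701' AND
        BINDTIME <= '12000000';
```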
Example 4:
REBIND all versions of all packages bound before a given date and time.
SELECT SUBSTR('REBIND PACKAGE('CONCAT COLLID CONCAT'.'
CONCAT NAME CONCAT'.(*)) ',1,55)
FROM SYSIBM.SYSPACKAGE
WHERE BINDTIME <= 'timestamp';
where timestamp represents a timestamp string of the form
yyyy-mm-dd-hh.mm.ss.nnnnnn.
Example 6:
REBIND all versions of all packages bound since a given date and time.
SELECT SUBSTR('REBIND PACKAGE('CONCAT COLLID
CONCAT'.'CONCAT NAME
CONCAT'.(*)) ',1,55)
FROM SYSIBM.SYSPACKAGE
WHERE BINDTIME >= 'timestamp';
where timestamp represents a timestamp string of the form
yyyy-mm-dd-hh.mm.ss.nnnnnn.
Example 8:
REBIND all versions of all packages bound within a given date and time
range.
SELECT SUBSTR('REBIND PACKAGE('CONCAT COLLID CONCAT'.'
CONCAT NAME CONCAT'.(*)) ',1,55)
FROM SYSIBM.SYSPACKAGE
WHERE BINDTIME >= 'timestamp1' AND
BINDTIME <= 'timestamp2';
Figure 262 on page 920 shows some sample JCL for rebinding, with the option
DEGREE(ANY), all plans that were bound without specifying the DEGREE keyword on BIND.
Figure 261. Example JCL: Rebind all packages bound in 1994.
Figure 262. Example JCL: Rebind selected plans with a different bind option
IBM SQL has additional reserved words that DB2 for OS/390 and z/OS does not
enforce. Therefore, we suggest that you do not use these additional reserved words
as ordinary identifiers in names that have a continuing use. See IBM SQL
Reference for a list of the words.
Appendix G. Characteristics of SQL statements in DB2 for OS/390 and z/OS 925
Table 131. Actions allowed on SQL statements in DB2 for OS/390 and z/OS (continued)
SQL statement | Executable | Interactively or dynamically prepared | Processed by: Requesting system | Server | Precompiler
Notes:
1. The statement can be dynamically prepared. It cannot be prepared interactively.
2. The statement can be dynamically prepared only if DYNAMICRULES run behavior is implicitly or explicitly
specified.
3. The statement can be dynamically prepared, but only from an ODBC or CLI driver that supports dynamic CALL
statements.
4. The requesting system processes the PREPARE statement when the statement being prepared is ALLOCATE
CURSOR or ASSOCIATE LOCATORS.
5. The value to which special register CURRENT SQLID is set is used as the SQL authorization ID and the implicit
qualifier for dynamic SQL statements only when DYNAMICRULES run behavior is in effect. The CURRENT SQLID
value is ignored for the other DYNAMICRULES behaviors.
6. This statement can only be used in the triggered action of a trigger.
| 7. Local special registers can be referenced in a VALUES INTO statement if it results in the assignment of a single
| host-variable, not if it results in setting more than one value.
SAVEPOINT (6) Y
SELECT Y Y
SELECT INTO Y Y
SET CONNECTION Y Y Y
SET host-variable Assignment (5) Y Y Y
SET special register Y Y Y
Table 132. SQL statements in external user-defined functions and stored
procedures (continued)
SQL statement | Level of SQL access: NO SQL | CONTAINS SQL | READS SQL DATA | MODIFIES SQL DATA
SET transition-variable Assignment Y(5) Y Y
SIGNAL SQLSTATE Y Y Y
UPDATE Y
VALUES Y Y
VALUES INTO (5) Y Y Y
WHENEVER (1) Y Y Y Y
Notes:
1. Although the NO SQL option implies that no SQL statements can be specified,
non-executable statements are not restricted.
2. The stored procedure that is called must have the same or more restrictive level of SQL
data access than the current level in effect. For example, a routine defined as MODIFIES
SQL DATA can call a stored procedure defined as MODIFIES SQL DATA, READS SQL
DATA, or CONTAINS SQL. A routine defined as CONTAINS SQL can only call a
procedure defined as CONTAINS SQL.
| 3. The COMMIT statement cannot be executed in a user-defined function. The COMMIT
| statement cannot be executed in a stored procedure if the procedure is in the calling
| chain of a user-defined function or trigger.
4. The statement specified for the EXECUTE statement must be a statement that is allowed
for the particular level of SQL data access in effect. For example, if the level in effect is
READS SQL DATA, the statement must not be an INSERT, UPDATE, or DELETE.
5. The statement is supported only if it does not contain a subquery or query-expression.
6. RELEASE SAVEPOINT, SAVEPOINT, and ROLLBACK (with the TO SAVEPOINT clause)
cannot be executed from a user-defined function.
| 7. If the ROLLBACK statement (without the TO SAVEPOINT clause) is executed in a
| user-defined function, an error is returned to the calling program, and the application is
| placed in a must rollback state.
| 8. The ROLLBACK statement (without the TO SAVEPOINT clause) cannot be executed in a
| stored procedure if the procedure is in the calling chain of a user-defined function or
| trigger.
Table 133. Valid SQL statements in an SQL procedure body (continued)
SQL statement | The only statement in the procedure | Nested in a compound statement
DROP Y Y
END DECLARE SECTION
EXECUTE Y
EXECUTE IMMEDIATE Y Y
EXPLAIN
FETCH Y
FREE LOCATOR
GRANT Y Y
HOLD LOCATOR
INCLUDE
INSERT Y Y
LABEL ON Y Y
LOCK TABLE Y Y
OPEN Y
PREPARE FROM Y
RELEASE connection Y Y
RELEASE SAVEPOINT Y Y
RENAME Y Y
REVOKE Y Y
ROLLBACK (1) Y Y
ROLLBACK TO SAVEPOINT (1) Y Y
SAVEPOINT Y Y
SELECT
SELECT INTO Y Y
SET CONNECTION Y Y
SET host-variable Assignment (3)
SET special register (3) Y Y
SET transition-variable Assignment (3)
SIGNAL SQLSTATE
UPDATE Y Y
VALUES
| VALUES INTO Y Y
WHENEVER
Appendix H. Program preparation options for remote
packages
The table that follows gives generic descriptions of program preparation options,
lists the equivalent DB2 option for each one, and indicates, where appropriate,
whether it is a bind package (B) or a precompiler (P) option. In addition, the table
indicates whether a DB2 server supports the option.
Table 134. Program preparation options for packages
Generic option description | Equivalent for requesting DB2 | Bind (B) or precompile (P) option | DB2 server support
Package replacement: protect existing packages | ACTION(ADD) | B | Supported
Package replacement: replace existing packages | ACTION(REPLACE) | B | Supported
Package replacement: version name | ACTION(REPLACE REPLVER(version-id)) | B | Supported
Statement string delimiter | APOSTSQL/QUOTESQL | P | Supported
DRDA access: SQL CONNECT (Type 1) | CONNECT(1) | P | Supported
DRDA access: SQL CONNECT (Type 2) | CONNECT(2) | P | Supported
Block protocol: Do not block data for an ambiguous cursor | CURRENTDATA(YES) | B | Supported
Block protocol: Block data when possible | CURRENTDATA(NO) | B | Supported
Block protocol: Never block data | (Not available) | | Not supported
Name of remote database | CURRENTSERVER(location name) | B | Supported as a BIND PLAN option
Date format of statement | DATE | P | Supported
Protocol for remote access | DBPROTOCOL | B | Not supported
Maximum decimal precision: 15 | DEC(15) | P | Supported
Maximum decimal precision: 31 | DEC(31) | P | Supported
Defer preparation of dynamic SQL | DEFER(PREPARE) | B | Supported
Do not defer preparation of dynamic SQL | NODEFER(PREPARE) | B | Supported
Dynamic SQL authorization | DYNAMICRULES | B | Supported
Encoding scheme for static SQL statements | ENCODING | B | Not supported
Explain option | EXPLAIN | B | Supported
Immediately write group bufferpool-dependent page sets or partitions in a data sharing environment | IMMEDWRITE | B | Supported
Package isolation level: CS | ISOLATION(CS) | B | Supported
Package isolation level: RR | ISOLATION(RR) | B | Supported
Package isolation level: RS | ISOLATION(RS) | B | Supported
Environment
WLM_REFRESH runs in a WLM-established stored procedures address space. The
load module for WLM_REFRESH, DSNTWR, must reside in an APF-authorized
library.
Authorization required
To execute the CALL statement, the SQL authorization ID of the process must have
READ access or higher to the OS/390 Security Server System Authorization Facility
(SAF) resource profile ssid.WLM_REFRESH.WLM-environment-name in resource
class DSNR. See Part 3 (Volume 1) of DB2 Administration Guide for information on
authorizing access to SAF resource profiles.
© Copyright IBM Corp. 1983, 2001 935
WLM_REFRESH syntax diagram
# The following syntax diagram shows the SQL CALL statement for invoking
# WLM_REFRESH. The linkage convention for WLM_REFRESH is GENERAL WITH
# NULLS.
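A minimal sketch of such a CALL from an embedded SQL program follows. It assumes a four-parameter form (WLM environment name, DB2 subsystem ID, an output message area, and a return code) and hypothetical host-variable names; because the linkage convention is GENERAL WITH NULLS, each parameter is paired with an indicator variable:

```sql
EXEC SQL
  CALL SYSPROC.WLM_REFRESH(:WLMENV  :IND1,
                           :SSID    :IND2,
                           :MSGTEXT :IND3,
                           :RC      :IND4);
```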
# Environment
# DSNACICS runs in a WLM-established stored procedure address space and uses
# the Recoverable Resource Manager Services attachment facility to connect to DB2.
# If you use CICS Transaction Server for OS/390 Version 1 Release 3 or later, you
# can register your CICS system as a resource manager with recoverable resource
# management services (RRMS). When you do that, changes to DB2 databases that
# are made by the program that calls DSNACICS and the CICS server program that
# DSNACICS invokes are in the same two-phase commit scope. This means that
# when the calling program performs an SQL COMMIT or ROLLBACK, DB2 and RRS
# inform CICS about the COMMIT or ROLLBACK.
# If the CICS server program that DSNACICS invokes accesses DB2 resources, the
# server program runs under a separate unit of work from the original unit of work
# that calls the stored procedure. This means that the CICS server program might
# deadlock with locks that the client program acquires.
# Authorization required
# To execute the CALL statement, the owner of the package or plan that contains the
# CALL statement must have one or more of the following privileges:
# v The EXECUTE privilege on stored procedure DSNACICS
# v Ownership of the stored procedure
# v SYSADM authority
# The CICS server program that DSNACICS calls runs under the same user ID as
# DSNACICS. That user ID depends on the SECURITY parameter that you specify
# when you define DSNACICS. See Part 2 of DB2 Installation Guide.
# When CICS has been set up to be an RRS resource manager, the client
# application can control commit processing using SQL COMMIT requests. DB2
# for OS/390 and z/OS ensures that CICS is notified to commit any resources
# that the CICS server program modifies during two-phase commit processing.
# When CICS has not been set up to be an RRS resource manager, CICS forces
# syncpoint processing of all CICS resources at completion of the CICS server
# program. This commit processing is not coordinated with the commit processing
# of the client program.
# General considerations
# The DSNACICX exit must follow these rules:
# v It can be written in assembler, COBOL, PL/I, or C.
# v It must follow the Language Environment calling linkage when the caller is an
# assembler language program.
# v The load module for DSNACICX must reside in an authorized program library
# that is in the STEPLIB concatenation of the stored procedure address space
# startup procedure.
# You can replace the default DSNACICX in the prefix.SDSNLOAD library, or you
# can put the DSNACICX load module in a library that is ahead of
# prefix.SDSNLOAD in the STEPLIB concatenation. It is recommended that you
# put DSNACICX in the prefix.SDSNEXIT library. Sample installation job DSNTIJEX
# contains JCL for assembling and link-editing the sample source code for
# DSNACICX into prefix.SDSNEXIT. You need to modify the JCL for the libraries
# and the compiler that you are using.
# v The load module must be named DSNACICX.
# v The exit must save and restore the caller’s registers. Only the contents of
# register 15 can be modified.
# v It must be written to be reentrant and link-edited as reentrant.
# v It must be written and link-edited to execute as AMODE(31),RMODE(ANY).
# v DSNACICX can contain SQL statements. However, if it does, you need to
# change the DSNACICS procedure definition to reflect the appropriate SQL
# access level for the types of SQL statements that you use in the user exit.
# Table 136 shows the contents of the DSNACICX exit parameter list, XPL. Member
# DSNDXPL in data set prefix.SDSNMACS contains an assembler language mapping
# macro for XPL. Sample exit DSNASCIO in data set prefix.SDSNSAMP includes a
# COBOL mapping macro for XPL.
# Table 136. Contents of the XPL exit parameter list
# Name | Hex offset | Data type | Description | Corresponding DSNACICS parameter
# XPL_EYEC | 0 | Character, 4 bytes | Eye-catcher: 'XPL ' |
# XPL_LEN | 4 | Character, 4 bytes | Length of the exit parameter list |
# XPL_LEVEL | 8 | 4-byte integer | Level of the parameter list | parm-level
# XPL_PGMNAME | C | Character, 8 bytes | Name of the CICS server program | pgm-name
# XPL_CICSAPPLID | 14 | Character, 8 bytes | CICS VTAM applid | CICS-applid
# XPL_CICSLEVEL | 1C | 4-byte integer | Level of CICS code | CICS-level
# XPL_CONNECTTYPE | 20 | Character, 8 bytes | Specific or generic connection to CICS | connect-type
# XPL_NETNAME | 28 | Character, 8 bytes | Name of the specific connection to CICS | netname
# XPL_MIRRORTRAN | 30 | Character, 8 bytes | Name of the mirror transaction that invokes the CICS server program | mirror-trans
# XPL_COMMAREAPTR (1) | 38 | Address, 4 bytes | Address of the COMMAREA |
# XPL_COMMINLEN (2) | 3C | 4-byte integer | Length of the COMMAREA that is passed to the server program |
# XPL_COMMTOTLEN | 40 | 4-byte integer | Total length of the COMMAREA that is returned to the caller | commarea-total-len
# XPL_SYNCOPTS | 44 | 4-byte integer | Syncpoint control option | sync-opts
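As an illustrative sketch only, a CALL to DSNACICS might pass the parameters that correspond to the XPL fields in Table 136, followed by the return-code and message-area parameters; the host-variable names here are hypothetical:

```sql
EXEC SQL
  CALL DSNACICS(:PARMLEVEL, :PGMNAME, :CICSAPPLID, :CICSLEVEL,
                :CONNECTTYPE, :NETNAME, :MIRRORTRANS,
                :COMMAREA, :COMMTOTLEN, :SYNCOPTS,
                :RETCODE, :MSGAREA);
```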
# DSNACICS output
# DSNACICS places the return code from DSNACICS execution in the return-code
# parameter. If the value of the return code is non-zero, DSNACICS puts its own error
# messages and any error messages that are generated by CICS and the DSNACICX
# user exit in the msg-area parameter.
# The COMMAREA parameter contains the COMMAREA for the CICS server
# program that DSNACICS calls. The COMMAREA parameter has a VARCHAR type.
# DSNACICS restrictions
# Because DSNACICS uses the distributed program link (DPL) function to invoke
# CICS server programs, server programs that you invoke through DSNACICS can
# contain only the CICS API commands that the DPL function supports. The list of
# supported commands is documented in CICS for MVS/ESA Application
# Programming Reference.
# DSNACICS debugging
# If you receive errors when you call DSNACICS, ask your system administrator to
# add a DSNDUMP DD statement in the startup procedure for the address space in
# which DSNACICS runs. The DSNDUMP DD statement causes DB2 to generate an
# SVC dump whenever DSNACICS issues an error message.
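For example, the added DD statement might be as simple as the following sketch (the SYSOUT class is an assumption; your installation might route the dump output elsewhere):

```
//DSNDUMP  DD  SYSOUT=*
```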
In Version 7, some utility functions are available as optional products; you must
separately order and purchase a license to such utilities. Discussion of utility
functions in this publication is not intended to otherwise imply that you have a
license to them. See DB2 Utility Guide and Reference for more information about
utilities products.
Improved connectivity
Version 7 offers improved connectivity:
v Support for COMMIT and ROLLBACK in stored procedures lets you commit or
roll back an entire unit of work, including uncommitted changes that are made
from the calling application before the stored procedure call is made.
Appendix J. Summary of changes to DB2 for OS/390 and z/OS Version 7 947
v Support for Windows Kerberos security lets you more easily manage workstation
clients who seek access to data and services from heterogeneous environments.
v Global transaction support for distributed applications lets independent DB2
agents participate in a global transaction that is coordinated by an XA-compliant
transaction manager on a workstation or a gateway server (Microsoft Transaction
Server or Encina, for example).
v Support for a DB2 Connect Version 7 enhancement lets remote workstation
clients quickly determine the amount of time that DB2 takes to process a request
(the server elapsed time).
v Additional enhancements include:
– Support for connection pooling and transaction pooling for IBM DB2 Connect
– Support for DB2 Call Level Interface (DB2 CLI) bookmarks on DB2 UDB for
UNIX, Windows, OS/2
Migration considerations
# Migration with full fallback protection is available when you have either DB2 for
# OS/390 Version 5 or Version 6 installed. You should ensure that you are fully
# operational on DB2 for OS/390 Version 5, or later, before migrating to DB2 for
# OS/390 and z/OS Version 7.
To learn about all of the migration considerations from Version 5 to Version 7, read
the DB2 Release Planning Guide for Version 6 and Version 7; for information about
the content of each release, also read appendixes A through F in both books.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you any
license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106-0032, Japan
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply to
you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those
Web sites. The materials at those Web sites are not part of the materials for this
IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes
appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose of
enabling: (i) the exchange of information between independently created programs
The licensed program described in this information and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
Trademarks
The following terms are trademarks of International Business Machines Corporation
in the United States, other countries, or both.
3090
APL2
AD/Cycle
AS/400
BookManager
C/370
CICS
CICS/ESA
CICS/MVS
COBOL/370
DATABASE 2
DataHub
DataPropagator
DB2
DB2 Connect
DB2 Universal Database
DFSMS/MVS
DFSMSdfp
DFSMSdss
DFSMShsm
DFSORT
Distributed Relational Database Architecture
DRDA
Enterprise Storage Server
Enterprise System/3090
Enterprise System/9000
ES/3090
ESCON
GDDM
IBM
IBM Registry
IMS
IMS/ESA
Language Environment
MVS/DFP
MVS/ESA
Net.Data
OpenEdition
Operating System/390
OS/2
OS/390
OS/400
Parallel Sysplex
PR/SM
QMF
RACF
RAMAC
RETAIN
RMF
SAA
SecureWay
SQL/DS
System/370
System/390
VisualAge
VTAM
Tivoli and NetView are trademarks of Tivoli Systems Inc. in the United States, other
countries, or both.
Java, JDBC, and all Java-based trademarks and logos are trademarks or registered
trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.
Notices 951
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Other company, product, and service names may be trademarks or service marks
of others.
abend reason code. A 4-byte hexadecimal code that uniquely identifies a problem with DB2. A complete list of DB2 abend reason codes and their explanations is contained in DB2 Messages and Codes.

abnormal end of task (abend). Termination of a task, job, or subsystem because of an error condition that recovery facilities cannot resolve during execution.

access path. The path that is used to locate data that is specified in SQL statements. An access path can be indexed or sequential.

address space. A range of virtual storage pages that is identified by a number (ASID) and a collection of segment and page tables that map the virtual pages to real pages of the computer’s memory.

address space connection. The result of connecting an allied address space to DB2. Each address space that contains a task that is connected to DB2 has exactly one address space connection, even though more than one task control block (TCB) can be present. See also allied address space and task control block.

after trigger. A trigger that is defined with the trigger activation time AFTER.

agent. As used in DB2, the structure that associates all processes that are involved in a DB2 unit of work. An allied agent is generally synonymous with an allied thread. System agents are units of work that process independently of the allied agent, such as prefetch processing, deferred writes, and service tasks.

alias. An alternative name that can be used in SQL statements to refer to a table or view in the same or a remote DB2 subsystem.

allied address space. An area of storage that is external to DB2 and that is connected to DB2. An allied address space is capable of requesting DB2 services.

allied thread. A thread that originates at the local DB2 subsystem and that can access data at a remote DB2 subsystem.

ambiguous cursor. A database cursor that is not defined with the FOR FETCH ONLY clause or the FOR UPDATE OF clause, is not defined on a read-only result table, is not the target of a WHERE CURRENT clause on an SQL UPDATE or DELETE statement, and is in a plan or package that contains either PREPARE or EXECUTE IMMEDIATE SQL statements.

application-directed connection. A connection that an application manages using the SQL CONNECT statement.

application plan. The control structure that is produced during the bind process. DB2 uses the application plan to process SQL statements that it encounters during statement execution.

application process. The unit to which resources and locks are allocated. An application process involves the execution of one or more programs.

application programming interface (API). A functional interface that is supplied by the operating system or by a separately orderable licensed program that allows an application program that is written in a high-level language to use specific data or functions of the operating system or licensed program.

application requester. The component on a remote system that generates DRDA requests for data on behalf of an application. An application requester accesses a DB2 database server using the DRDA application-directed protocol.

application server. The target of a request from a remote application. In the DB2 environment, the application server function is provided by the distributed data facility and is used to access DB2 data from remote applications.

ASCII. An encoding scheme that is used to represent strings in many environments, typically on PCs and workstations. Contrast with EBCDIC and Unicode.

attribute. A characteristic of an entity. For example, in database design, the phone number of an employee is one of that employee’s attributes.

authorization ID. A string that can be verified for connection to DB2 and to which a set of privileges is allowed. It can represent an individual, an organizational group, or a function, but DB2 does not determine this representation.

auxiliary index. An index on an auxiliary table in which each index entry refers to a LOB.

auxiliary table. A table that stores columns outside the table in which they are defined. Contrast with base table.
Glossary 955
concurrency. The shared use of resources by more than one application process at the same time.

connection. In SNA, the existence of a communication path between two partner LUs that allows information to be exchanged (for example, two DB2 subsystems that are connected and communicating by way of a conversation).

consistency token. A timestamp that is used to generate the version identifier for an application. See also version.

constant. A language element that specifies an unchanging value. Constants are classified as string constants or numeric constants. Contrast with variable.

constraint. A rule that limits the values that can be inserted, deleted, or updated in a table. See referential constraint, table check constraint, and uniqueness constraint.

correlated columns. A relationship between the value of one column and the value of another column.

correlated subquery. A subquery (part of a WHERE or HAVING clause) that is applied to a row or group of rows of a table or view that is named in an outer subselect statement.

correlation name. An identifier that designates a table, a view, or individual rows of a table or view within a single SQL statement. It can be defined in any FROM clause or in the first clause of an UPDATE or DELETE statement.

CP. See central processor (CP).

created temporary table. A table that holds temporary data and is defined with the SQL statement CREATE GLOBAL TEMPORARY TABLE. Information about created temporary tables is stored in the DB2 catalog, so this kind of table is persistent and can be shared across application processes. Contrast with declared temporary table. See also temporary table.

CS. Cursor stability.

current data. Data within a host structure that is current with (identical to) the data within the base table.

current SQL ID. An ID that, at a single point in time, holds the privileges that are exercised when certain dynamic SQL statements run. The current SQL ID can be a primary authorization ID or a secondary authorization ID.

cursor sensitivity. The degree to which database updates are visible to the subsequent FETCH statements in a cursor. A cursor can be sensitive to changes that are made with positioned update and delete statements specifying the name of that cursor. A cursor can also be sensitive to changes that are made with searched update or delete statements, or with cursors other than this cursor. These changes can be made by this application process or by another application process.

cursor stability (CS). The isolation level that provides maximum concurrency without the ability to read uncommitted data. With cursor stability, a unit of work holds locks only on its uncommitted changes and on the current row of each of its cursors.

D

DASD. Direct access storage device.

database. A collection of tables, or a collection of table spaces and index spaces.

database access thread. A thread that accesses data at the local subsystem on behalf of a remote subsystem.

database administrator (DBA). An individual who is responsible for designing, developing, operating, safeguarding, maintaining, and using a database.

database descriptor (DBD). An internal representation of a DB2 database definition, which reflects the data definition that is in the DB2 catalog. The objects that are defined in a database descriptor are table spaces, tables, indexes, index spaces, and relationships.

database management system (DBMS). A software system that controls the creation, organization, and modification of a database and the access to the data stored within it.

database request module (DBRM). A data set member that is created by the DB2 precompiler and that contains information about SQL statements. DBRMs are used in the bind process.

database server. The target of a request from a local application or an intermediate database server. In the DB2 environment, the database server function is provided by the distributed data facility to access DB2 data from local applications, or from a remote database server that acts as an intermediate database server.

DATABASE 2 Interactive (DB2I). The DB2 facility that provides for the execution of SQL statements, DB2 (operator) commands, programmer commands, and utility invocation.

data currency. The state in which data that is retrieved into a host variable in your program is a copy of data in the base table.

data definition name (ddname). The name of a data definition (DD) statement that corresponds to a data control block containing the same name.
data partition. A VSAM data set that is contained within a partitioned table space.

data sharing. The ability of two or more DB2 subsystems to directly access and change a single set of data.

data sharing group. A collection of one or more DB2 subsystems that directly access and change the same data while maintaining data integrity.

data sharing member. A DB2 subsystem that is assigned by XCF services to a data sharing group.

data space. A range of up to 2 GB of contiguous virtual storage addresses that a program can directly manipulate. Unlike an address space, a data space can hold only data; it does not contain common areas, system data, or programs.

data type. An attribute of columns, literals, host variables, special registers, and the results of functions and expressions.

date. A three-part value that designates a day, month, and year.

date duration. A decimal integer that represents a number of years, months, and days.

datetime value. A value of the data type DATE, TIME, or TIMESTAMP.

DBMS. Database management system.

DBRM. Database request module.

DB2 catalog. Tables that are maintained by DB2 and contain descriptions of DB2 objects, such as tables, views, and indexes.

DB2 command. An instruction to the DB2 subsystem allowing a user to start or stop DB2, to display information on current users, to start or stop databases, to display information on the status of databases, and so on.

DB2 for VSE & VM. The IBM DB2 relational database management system for the VSE and VM operating systems.

DCLGEN. Declarations generator.

DDF. Distributed data facility.

ddname. Data definition name.

deadlock. Unresolvable contention for the use of a resource such as a table or an index.

declarations generator (DCLGEN). A subcomponent of DB2 that generates SQL table declarations and COBOL, C, or PL/I data structure declarations that conform to the table. The declarations are generated from DB2 system catalog information. DCLGEN is also a DSN subcommand.

declared temporary table. A table that holds temporary data and is defined with the SQL statement DECLARE GLOBAL TEMPORARY TABLE. Information about declared temporary tables is not stored in the DB2 catalog, so this kind of table is not persistent and can only be used by the application process that issued the DECLARE statement. Contrast with created temporary table. See also temporary table.

default value. A predetermined value, attribute, or option that is assumed when no other is explicitly specified.

degree of parallelism. The number of concurrently executed operations that are initiated to process a query.

delete trigger. A trigger that is defined with the triggering SQL operation DELETE.

delimited identifier. A sequence of characters that are enclosed within double quotation marks ("). The sequence must consist of a letter followed by zero or more characters, each of which is a letter, digit, or the underscore character (_).

delimiter token. A string constant, a delimited identifier, an operator symbol, or any of the special characters that are shown in syntax diagrams.

dependent. An object (row, table, or table space) that has at least one parent. The object is also said to be a dependent (row, table, or table space) of its parent. See parent row, parent table, parent table space.
Glossary 957
arguments. That is, successive invocations with the same input values produce the same answer. Sometimes referred to as a not-variant function. Contrast this with a not-deterministic function (sometimes called a variant function), which might not always produce the same result for the same inputs.

dimension. A data category such as time, products, or markets. The elements of a dimension are referred to as members. Dimensions offer a very concise, intuitive way of organizing and selecting data for retrieval, exploration, and analysis. See also dimension table.

dimension table. The representation of a dimension in a star schema. Each row in a dimension table represents all of the attributes for a particular member of the dimension. See also dimension, star schema, and star join.

direct access storage device (DASD). A device in which access time is independent of the location of the data.

distinct type. A user-defined data type that is internally represented as an existing type (its source type), but is considered to be a separate and incompatible type for semantic purposes.

distributed data facility (DDF). A set of DB2 components through which DB2 communicates with another RDBMS.

Distributed Relational Database Architecture (DRDA). A connection protocol for distributed relational database processing that is used by IBM’s relational database products. DRDA includes protocols for communication between an application and a remote relational database management system, and for communication between relational database management systems.

DL/I. Data Language/I.

double-byte character large object (DBCLOB). A sequence of bytes representing double-byte characters where the size of the values can be up to 2 GB. In general, double-byte character large object values are used whenever a double-byte character string might exceed the limits of the VARGRAPHIC type.

double-byte character set (DBCS). A set of characters, which are used by national languages such as Japanese and Chinese, that have more symbols than can be represented by a single byte. Each character is 2 bytes in length. Contrast with single-byte character set and multibyte character set.

drain. The act of acquiring a locked resource by quiescing access to that object.

drain lock. A lock on a claim class that prevents a claim from occurring.

DRDA. Distributed Relational Database Architecture.

DRDA access. An open method of accessing distributed data that you can use to connect to another database server to execute packages that were previously bound at the server location. You use the SQL CONNECT statement or an SQL statement with a three-part name to identify the server. Contrast with private protocol access.

DSN. (1) The default DB2 subsystem name. (2) The name of the TSO command processor of DB2. (3) The first three characters of DB2 module and macro names.

duration. A number that represents an interval of time. See date duration, labeled duration, and time duration.

dynamic SQL. SQL statements that are prepared and executed within an application program while the program is executing. In dynamic SQL, the SQL source is contained in host language variables rather than being coded into the application program. The SQL statement can change several times during the application program’s execution.

E

EBCDIC. Extended binary coded decimal interchange code. An encoding scheme that is used to represent character data in the OS/390, MVS, VM, VSE, and OS/400® environments. Contrast with ASCII and Unicode.

embedded SQL. SQL statements that are coded within an application program. See static SQL.

equijoin. A join operation in which the join-condition has the form expression = expression.

escape character. The symbol that is used to enclose an SQL delimited identifier. The escape character is the double quotation mark ("), except in COBOL applications, where the user assigns the symbol, which is either a double quotation mark or an apostrophe (').

EUR. IBM European Standards.

explicit hierarchical locking. Locking that is used to make the parent-child relationship between resources known to IRLM. This kind of locking avoids global locking overhead when no inter-DB2 interest exists on a resource.

expression. An operand or a collection of operators and operands that yields a single value.

external function. A function for which the body is written in a programming language that takes scalar argument values and produces a scalar result for each invocation. Contrast with sourced function, built-in function, and SQL function.
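The equijoin entry above can be made concrete with a small example. This sketch uses SQLite from Python (not DB2), and the EMP/DEPT rows are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept (deptno TEXT, deptname TEXT);
    CREATE TABLE emp  (empno TEXT, deptno TEXT);
    INSERT INTO dept VALUES ('A00', 'PLANNING'), ('B01', 'SALES');
    INSERT INTO emp  VALUES ('000010', 'A00'), ('000020', 'B01');
""")

# An equijoin: the join condition has the form expression = expression.
rows = conn.execute("""
    SELECT e.empno, d.deptname
    FROM emp e, dept d
    WHERE e.deptno = d.deptno
    ORDER BY e.empno
""").fetchall()
```

Only the rows whose deptno values are equal on both sides appear in the result table.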
foreign key. A column or set of columns in a dependent table of a constraint relationship. The key must have the same number of columns, with the same descriptions, as the primary key of the parent table. Each foreign key value must either match a parent key value in the related parent table or be null.

full outer join. The result of a join operation that includes the matched rows of both tables that are being joined and preserves the unmatched rows of both tables. See also join.

fullselect. A subselect, a values-clause, or a number of both that are combined by set operators. Fullselect specifies a result table. If UNION is not used, the result of the fullselect is the result of the specified subselect.

function. A mapping, embodied as a program (the function body), invocable by means of zero or more input values (arguments), to a single value (the result). See also column function and scalar function. Functions can be user-defined, built-in, or generated by DB2. (See built-in function, cast function, external function, sourced function, SQL function, and user-defined function.)

function definer. The authorization ID of the owner of the schema of the function that is specified in the CREATE FUNCTION statement.

function implementer. The authorization ID of the owner of the function program and function package.

function package. A package that results from binding the DBRM for a function program.

gross lock. The shared, update, or exclusive mode locks on a table, partition, or table space.

group name. The MVS XCF identifier for a data sharing group.

group restart. A restart of at least one member of a data sharing group after the loss of either locks or the shared communications area.

H

help panel. A screen of information presenting tutorial text to assist a user at the terminal.

hole. A row of the result set that cannot be accessed because of a delete or update that has been performed on the row. See also delete hole and update hole.

host identifier. A name that is declared in the host program.

host language. A programming language in which you can embed SQL statements.

host program. An application program that is written in a host language and that contains embedded SQL statements.

host structure. In an application program, a structure that is referenced by embedded SQL statements.

host variable. In an application program, an application variable that is referenced by embedded SQL statements.
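The fullselect entry above mentions combining subselects with set operators. A small sketch of UNION, using SQLite from Python (not DB2) with invented tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (c INTEGER);
    CREATE TABLE t2 (c INTEGER);
    INSERT INTO t1 VALUES (1), (2);
    INSERT INTO t2 VALUES (2), (3);
""")

# A fullselect: two subselects combined by the UNION set operator.
# UNION eliminates duplicate rows from the result table.
rows = conn.execute(
    "SELECT c FROM t1 UNION SELECT c FROM t2 ORDER BY c"
).fetchall()
```

The duplicate value 2 appears once in the result, because UNION (without ALL) removes duplicates.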
I

IDENTITY clause. Uniqueness of values can be ensured by defining a single-column unique index using the identity column. A table can have no more than one identity column.

IFP. IMS Fast Path.

IMS. Information Management System.

IMS attachment facility. A DB2 subcomponent that uses MVS subsystem interface (SSI) protocols and cross-memory linkage to process requests from IMS to DB2 and to coordinate resource commitment.

index. A set of pointers that are logically ordered by the values of a key. Indexes can provide faster access to data and can enforce uniqueness on the rows in a table.

index key. The set of columns in a table that is used to determine the order of index entries.

index partition. A VSAM data set that is contained within a partitioning index space.

index space. A page set that is used to store the entries of one index.

indicator column. A 4-byte value that is stored in a base table in place of a LOB column.

indicator variable. A variable that is used to represent the null value in an application program. If the value for the selected column is null, a negative value is placed in the indicator variable.

indoubt. A status of a unit of recovery. If DB2 fails after it has finished its phase 1 commit processing and before it has started phase 2, only the commit coordinator knows if an individual unit of recovery is to be committed or rolled back. At emergency restart, if DB2 lacks the information it needs to make this decision, the status of the unit of recovery is indoubt until DB2 obtains this information from the coordinator. More than one unit of recovery can be indoubt at restart.

indoubt resolution. The process of resolving the status of an indoubt logical unit of work to either the committed or the rollback state.

inheritance. The passing of class resources or attributes from a parent class downstream in the class hierarchy to a child class.

inner join. The result of a join operation that includes only the matched rows of both tables being joined. See also join.

inoperative package. A package that cannot be used because one or more user-defined functions or procedures that the package depends on were dropped. Such a package must be explicitly rebound. Contrast with invalid package.

insensitive cursor. A cursor that is not sensitive to inserts, updates, or deletes that are made to the underlying rows of a result table after the result table has materialized.

insert trigger. A trigger that is defined with the triggering SQL operation INSERT.

Interactive System Productivity Facility (ISPF). An IBM licensed program that provides interactive dialog services.

inter-DB2 R/W interest. A property of data in a table space, index, or partition that has been opened by more than one member of a data sharing group and that has been opened for writing by at least one of those members.

intermediate database server. The target of a request from a local application or a remote application requester that is forwarded to another database server. In the DB2 environment, the remote request is forwarded transparently to another database server if the object that is referenced by a three-part name does not reference the local location.

internal resource lock manager (IRLM). An MVS subsystem that DB2 uses to control communication and database locking.

invalid package. A package that depends on an object (other than a user-defined function) that is dropped. Such a package is implicitly rebound on invocation. Contrast with inoperative package.

IRLM. Internal resource lock manager.

ISO. International Standards Organization.

isolation level. The degree to which a unit of work is isolated from the updating operations of other units of work. See also cursor stability, read stability, repeatable read, and uncommitted read.

ISPF. Interactive System Productivity Facility.

ISPF/PDF. Interactive System Productivity Facility/Program Development Facility.

J

Japanese Industrial Standards Committee (JISC). An organization that issues standards for coding character sets.

JCL. Job control language.

JIS. Japanese Industrial Standard.

job control language (JCL). A control language that is used to identify a job to an operating system and to describe the job’s requirements.
K

KB. Kilobyte (1024 bytes).

key. A column or an ordered collection of columns identified in the description of a table, index, or referential constraint.

L

labeled duration. A number that represents a duration of years, months, days, hours, minutes, seconds, or microseconds.

large object (LOB). A sequence of bytes representing bit data, single-byte characters, double-byte characters, or a mixture of single- and double-byte characters. A LOB can be up to 2 GB−1 byte in length. See also BLOB, CLOB, and DBCLOB.

left outer join. The result of a join operation that includes the matched rows of both tables that are being joined, and that preserves the unmatched rows of the first table. See also join.

linkage editor. A computer program for creating load modules from one or more object modules or load modules by resolving cross references among the modules and, if necessary, adjusting addresses.

link-edit. The action of creating a loadable computer program using a linkage editor.

L-lock. Logical lock.

load module. A program unit that is suitable for loading into main storage for execution. The output of a linkage editor.

LOB. Large object.

LOB locator. A mechanism that allows an application program to manipulate a large object value in the database system. A LOB locator is a fullword integer value that represents a single LOB value. An application program retrieves a LOB locator into a host variable and can then apply SQL operations to the associated LOB value using the locator.

LOB table space. A table space that contains all the data for a particular LOB column in the related base table.

local. A way of referring to any object that the local DB2 subsystem maintains. A local table, for example, is a table that is maintained by the local DB2 subsystem. Contrast with remote.

location. The unique name of a database server. An application uses the location name to access a DB2 database server.

lock. A means of controlling concurrent events or access to data. DB2 locking is performed by the IRLM.

lock duration. The interval over which a DB2 lock is held.

lock escalation. The promotion of a lock from a row, page, or LOB lock to a table space lock because the number of page locks that are concurrently held on a given resource exceeds a preset limit.

locking. The process by which the integrity of data is ensured. Locking prevents concurrent users from accessing inconsistent data.

lock mode. A representation for the type of access that concurrently running programs can have to a resource that a DB2 lock is holding.

lock object. The resource that is controlled by a DB2 lock.

lock parent. For explicit hierarchical locking, a lock that is held on a resource that has child locks that are lower in the hierarchy; usually the table space or partition intent locks are the parent locks.

lock promotion. The process of changing the size or mode of a DB2 lock to a higher level.

lock size. The amount of data controlled by a DB2 lock on table data; the value can be a row, a page, a LOB, a partition, a table, or a table space.

lock structure. A coupling facility data structure that is composed of a series of lock entries to support shared and exclusive locking for logical resources.

logical index partition. The set of all keys that reference the same data partition.

logical lock (L-lock). The lock type that transactions use to control intra- and inter-DB2 data concurrency between transactions. Contrast with physical lock (P-lock).

logical unit. An access point through which an application program accesses the SNA network in order to communicate with another application program.

logical unit of work (LUW). The processing that a program performs between synchronization points.
LU name. Logical unit name, which is the name by which VTAM® refers to a node in a network. Contrast with location name.

LUW. Logical unit of work.

M

mass delete. The deletion of all rows of a table.

materialize. (1) The process of putting rows from a view or nested table expression into a work file for additional processing by a query. (2) The placement of a LOB value into contiguous storage. Because LOB values can be very large, DB2 avoids materializing LOB data until doing so becomes absolutely necessary.

MBCS. Multibyte character set. UTF-8 is an example of an MBCS. Characters in UTF-8 can range from 1 to 4 bytes in DB2.

menu. A displayed list of available functions for selection by the operator. A menu is sometimes called a menu panel.

mixed data string. A character string that can contain both single-byte and double-byte characters.

modify locks. An L-lock or P-lock with a MODIFY attribute. A list of these active locks is kept at all times in the coupling facility lock structure. If the requesting DB2 fails, that DB2 subsystem’s modify locks are converted to retained locks.

MPP. Message processing program (in IMS).

multibyte character set (MBCS). A character set that represents single characters with more than a single byte. Contrast with single-byte character set and double-byte character set. See also Unicode.

multisite update. Distributed relational database processing in which data is updated in more than one location within a single unit of work.

MVS. Multiple Virtual Storage.

MVS/ESA™. Multiple Virtual Storage/Enterprise Systems Architecture.

N

negotiable lock. A lock whose mode can be downgraded, by agreement among contending users, to be compatible to all. A physical lock is an example of a negotiable lock.

nested table expression. A fullselect in a FROM clause (surrounded by parentheses).

nonpartitioning index. Any index that is not a partitioning index.

nonscrollable cursor. A cursor that can be moved only in a forward direction. Nonscrollable cursors are sometimes called forward-only cursors or serial cursors.

not-deterministic function. A user-defined function whose result is not solely dependent on the values of the input arguments. That is, successive invocations with the same argument values can produce a different answer. This type of function is sometimes called a variant function. Contrast this with a deterministic function (sometimes called a not-variant function), which always produces the same result for the same inputs.

not-variant function. See deterministic function.

NUL. In C, a single character that denotes the end of the string.

null. A special value that indicates the absence of information.

NUL-terminated host variable. A varying-length host variable in which the end of the data is indicated by the presence of a NUL terminator.

NUL terminator. In C, the value that indicates the end of a string. For character strings, the NUL terminator is X'00'.

O

ordinary identifier. An uppercase letter followed by zero or more characters, each of which is an uppercase letter, a digit, or the underscore character. An ordinary identifier must not be a reserved word.

ordinary token. A numeric constant, an ordinary identifier, a host identifier, or a keyword.

originating task. In a parallel group, the primary agent that receives data from other execution units (referred to as parallel tasks) that are executing portions of the query in parallel.

OS/390. Operating System/390®.

outer join. The result of a join operation that includes the matched rows of both tables that are being joined and preserves some or all of the unmatched rows of the tables that are being joined. See also join.

overloaded function. A function name for which multiple function instances exist.

P

package. An object containing a set of SQL statements that have been statically bound and that is available for processing.
page. A unit of storage within a table space (4 KB, 8 KB, 16 KB, or 32 KB) or index space (4 KB). In a table space, a page contains one or more rows of a table. In a LOB table space, a LOB value can span more than one page, but no more than one LOB value is stored on a page.

page set. Another way to refer to a table space or index space. Each page set consists of a collection of VSAM data sets.

panel. A predefined display image that defines the locations and characteristics of display fields on a display surface (for example, a menu panel).

parallel task. The execution unit that is dynamically created to process a query in parallel. It is implemented by an MVS service request block.

parameter marker. A question mark (?) that appears in a statement string of a dynamic SQL statement. The question mark can appear where a host variable could appear if the statement string were a static SQL statement.

parent row. A row whose primary key value is the foreign key value of a dependent row.

parent table. A table whose primary key is referenced by the foreign key of a dependent table.

parent table space. A table space that contains a parent table. A table space containing a dependent of that table is a dependent table space.

partitioned page set. A partitioned table space or an index space. Header pages, space map pages, data pages, and index pages reference data only within the scope of the partition.

partitioned table space. A table space that is subdivided into parts (based on index key range), each of which can be processed independently by utilities.

partner logical unit. An access point in the SNA network that is connected to the local DB2 subsystem by way of a VTAM conversation.

path. See SQL path.

PCT. Program control table (in CICS).

piece. A data set of a nonpartitioned page set.

physical consistency. The state of a page that is not in a partially changed state.

physical lock (P-lock). A lock type that DB2 acquires to provide consistency of data that is cached in different DB2 subsystems. Physical locks are used only in data sharing environments. Contrast with logical lock (L-lock).

plan. See application plan.

plan allocation. The process of allocating DB2 resources to a plan in preparation for execution.

plan member. The bound copy of a DBRM that is identified in the member clause.

plan name. The name of an application plan.

P-lock. Physical lock.

point of consistency. A time when all recoverable data that an application accesses is consistent with other data. The term point of consistency is synonymous with sync point or commit point.

PPT. (1) Processing program table (in CICS). (2) Program properties table (in MVS).

precision. In SQL, the total number of digits in a decimal number (called the size in the C language). In the C language, the number of digits to the right of the decimal point (called the scale in SQL). The DB2 library uses the SQL definitions.

precompilation. A processing of application programs containing SQL statements that takes place before compilation. SQL statements are replaced with statements that are recognized by the host language compiler. Output from this precompilation includes source code that can be submitted to the compiler and the database request module (DBRM) that is input to the bind process.

predicate. An element of a search condition that expresses or implies a comparison operation.

prepared SQL statement. A named object that is the executable form of an SQL statement that has been processed by the PREPARE statement.

primary index. An index that enforces the uniqueness of a primary key.

primary key. In a relational database, a unique, nonnull key that is part of the definition of a table. A table cannot be defined as a parent unless it has a unique key or primary key.

private connection. A communications connection that is specific to DB2.

private protocol access. A method of accessing distributed data by which you can direct a query to another DB2 system. Contrast with DRDA access.

private protocol connection. A DB2 private connection of the application process. See also private connection.
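The parameter marker entry above carries over to other SQL APIs. Python's sqlite3 module happens to use the same question-mark notation, so a sketch (illustrative, not DB2; the table and values are invented) looks like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno TEXT, salary REAL)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [("000010", 52750.0), ("000020", 41250.0)])

# The statement string contains a parameter marker (?) where a host
# variable could appear; the value is supplied at execution time.
stmt = "SELECT empno FROM emp WHERE salary > ?"
rows = conn.execute(stmt, (50000.0,)).fetchall()
```

The marker keeps the statement string constant while the comparison value varies from execution to execution.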
retained lock. A MODIFY lock that a DB2 subsystem was holding at the time of a subsystem failure. The lock is retained in the coupling facility lock structure across a DB2 failure.

right outer join. The result of a join operation that includes the matched rows of both tables that are being joined and preserves the unmatched rows of the second join operand. See also join.

RLF. Resource limit facility.

rollback. The process of restoring data changed by SQL statements to the state at its last commit point. All locks are freed. Contrast with commit.

row. The horizontal component of a table. A row consists of a sequence of values, one for each column of the table.

ROWID. Row identifier.

row identifier (ROWID). A value that uniquely identifies a row. This value is stored with the row and never changes.

row lock. A lock on a single row of data.

row trigger. A trigger that is defined with the trigger granularity FOR EACH ROW.

schema. A logical grouping for user-defined functions, distinct types, triggers, and stored procedures. When an object of one of these types is created, it is assigned to one schema, which is determined by the name of the object. For example, the following statement creates a distinct type T in schema C:

   CREATE DISTINCT TYPE C.T ...

scrollability. The ability to use a cursor to fetch in either a forward or backward direction. The FETCH statement supports multiple fetch orientations to indicate the new position of the cursor. See also fetch orientation.

search condition. A criterion for selecting rows from a table. A search condition consists of one or more predicates.

sensitive cursor. A cursor that is sensitive to changes made to the database after the result table has materialized.

sequential data set. A non-DB2 data set whose records are organized on the basis of their successive physical positions, such as on magnetic tape. Several of the DB2 database utilities require sequential data sets.

serial cursor. A cursor that can be moved only in a forward direction.
shift-in character. A special control character (X'0F') that is used in EBCDIC systems to denote that the subsequent bytes represent SBCS characters. See also shift-out character.

shift-out character. A special control character (X'0E') that is used in EBCDIC systems to denote that the subsequent bytes, up to the next shift-in control character, represent DBCS characters. See also shift-in character.

single-byte character set (SBCS). A set of characters in which each character is represented by a single byte. Contrast with double-byte character set or multibyte character set.

single-precision floating point number. A 32-bit approximate representation of a real number.

size. In the C language, the total number of digits in a decimal number (called the precision in SQL). The DB2 library uses the SQL definition.

sourced function. A function that is implemented by another built-in or user-defined function that is already known to the database manager. This function can be a scalar function or a column (aggregating) function; it returns a single value from a set of values (for example, MAX or AVG). Contrast with built-in function, external function, and SQL function.

source program. A set of host language statements and SQL statements that is processed by an SQL precompiler.

source type. An existing type that is used to internally represent a distinct type.

space. A sequence of one or more blank characters.

specific function name. A particular user-defined function that is known to the database manager by its specific name. Many specific user-defined functions can have the same function name. When a user-defined function is defined to the database, every function is assigned a specific name that is unique within its schema. Either the user can provide this name, or a default name is used.

SPUFI. SQL Processor Using File Input.

SQL. Structured Query Language.

SQL authorization ID (SQL ID). The authorization ID that is used for checking dynamic SQL statements in some situations.

SQLCA. SQL communication area.

SQL communication area (SQLCA). A structure that is used to provide an application program with information about the execution of its SQL statements.

SQL descriptor area (SQLDA). A structure that describes input variables, output variables, or the columns of a result table.

SQL/DS. Structured Query Language/Data System. This product is now obsolete and has been replaced by DB2 for VSE & VM.

SQL escape character. The symbol that is used to enclose an SQL delimited identifier. This symbol is the double quotation mark ("). See also escape character.

SQL function. A user-defined function in which the CREATE FUNCTION statement contains the source code. The source code is a single SQL expression that evaluates to a single value. The SQL user-defined function can return only one parameter.

SQL ID. SQL authorization ID.

SQL path. An ordered list of schema names that are used in the resolution of unqualified references to user-defined functions, distinct types, and stored procedures. In dynamic SQL, the current path is found in the CURRENT PATH special register. In static SQL, it is defined in the PATH bind option.

SQL Processor Using File Input (SPUFI). A facility of the TSO attachment subcomponent that enables the DB2I user to execute SQL statements without embedding them in an application program.

SQL return code. Either SQLCODE or SQLSTATE.

SQL statement coprocessor. An alternative to the DB2 precompiler that lets the user process SQL statements at compile time. The user invokes an SQL statement coprocessor by specifying a compiler option.

star join. A method of joining a dimension column of a fact table to the key column of the corresponding dimension table. See also join, dimension, and star schema.

star schema. The combination of a fact table (which contains most of the data) and a number of dimension tables. See also star join, dimension, and dimension table.

statement string. For a dynamic SQL statement, the character string form of the statement.

statement trigger. A trigger that is defined with the trigger granularity FOR EACH STATEMENT.

static SQL. SQL statements, embedded within a program, that are prepared during the program preparation process (before the program is executed). After being prepared, the SQL statement does not change (although values of host variables that are specified by the statement might change).
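The statement string and static SQL entries above contrast with dynamic SQL, where the statement lives in a program variable. A sketch in Python with SQLite (not DB2); the table, column, and rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (empno TEXT, workdept TEXT);
    INSERT INTO emp VALUES ('000010', 'A00'), ('000020', 'B01');
""")

# The statement string is built at run time (dynamic SQL in spirit);
# a parameter marker still supplies the comparison value.
column = "workdept"                     # chosen while the program runs
stmt = f"SELECT empno FROM emp WHERE {column} = ?"
rows = conn.execute(stmt, ("A00",)).fetchall()
```

A static statement would be fixed at program preparation time; here the statement string can change on every execution.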
string. See character string or graphic string. table check constraint. A user-defined constraint that
specifies the values that specific columns of a base
strong typing. A process that guarantees that only table can contain.
user-defined functions and operations that are defined
on a distinct type can be applied to that type. For table function. A function that receives a set of
example, you cannot directly compare two currency types, such as Canadian dollars and U.S. dollars. But you can provide a user-defined function to convert one currency to the other and then do the comparison.

Structured Query Language (SQL). A standardized language for defining and manipulating data in a relational database.

subject table. The table for which a trigger is created. When the defined triggering event occurs on this table, the trigger is activated.

subquery. A SELECT statement within the WHERE or HAVING clause of another SQL statement; a nested SQL statement.

subselect. That form of a query that does not include an ORDER BY clause, UPDATE clause, or UNION operators.

substitution character. A unique character that is substituted during character conversion for any characters in the source program that do not have a match in the target coding representation.

subsystem. A distinct instance of a relational database management system (RDBMS).

sync point. See commit point.

synonym. In SQL, an alternative name for a table or view. Synonyms can be used only to refer to objects at the subsystem in which the synonym is defined.

Sysplex query parallelism. Parallel execution of a single query that is accomplished by using multiple tasks on more than one DB2 subsystem. See also query CP parallelism.

system administrator. The person at a computer installation who designs, controls, and manages the use of the computer system.

system conversation. The conversation that two DB2 subsystems must establish to process system messages before any distributed processing can begin.

system-directed connection. A connection that an RDBMS manages by processing SQL statements with three-part names.

arguments and returns a table to the SQL statement that references the function. A table function can be referenced only in the FROM clause of a subselect.

table locator. A mechanism that allows access to trigger transition tables in the FROM clause of SELECT statements, the subselect of INSERT statements, or from within user-defined functions. A table locator is a fullword integer value that represents a transition table.

table space. A page set that is used to store the records in one or more tables.

task control block (TCB). A control block that is used to communicate information about tasks within an address space that are connected to DB2. An address space can support many task connections (as many as one per task), but only one address space connection. See also address space connection.

TCB. Task control block (in MVS).

temporary table. A table that holds temporary data; for example, temporary tables are useful for holding or sorting intermediate results from queries that contain a large number of rows. The two kinds of temporary table, which are created by different SQL statements, are the created temporary table and the declared temporary table. Contrast with result table. See also created temporary table and declared temporary table.

thread. The DB2 structure that describes an application's connection, traces its progress, processes resource functions, and delimits its accessibility to DB2 resources and services. Most DB2 functions execute under a thread structure. See also allied thread and database access thread.

three-part name. The full name of a table, view, or alias. It consists of a location name, authorization ID, and an object name, separated by a period.

time. A three-part value that designates a time of day in hours, minutes, and seconds.

time duration. A decimal integer that represents a number of hours, minutes, and seconds.

Time-Sharing Option (TSO). An option in MVS that provides interactive time sharing from remote terminals.
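The subquery and subselect definitions above can be illustrated with a small runnable sketch. This example uses Python's sqlite3 module as a stand-in for DB2, so the table and column names (EMP, DEPTNO, SALARY) are illustrative assumptions, not objects from this guide.

```python
import sqlite3

# In-memory database standing in for a DB2 subsystem (illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP (EMPNO INTEGER, DEPTNO TEXT, SALARY REAL)")
conn.executemany(
    "INSERT INTO EMP VALUES (?, ?, ?)",
    [(1, "A00", 52000.0), (2, "A00", 41000.0), (3, "B01", 38000.0)],
)

# A subquery: a SELECT statement nested in the WHERE clause of another
# SQL statement. The inner query is a subselect -- it has no ORDER BY
# clause, UPDATE clause, or UNION operator.
rows = conn.execute(
    """SELECT EMPNO FROM EMP
       WHERE SALARY > (SELECT AVG(SALARY) FROM EMP)"""
).fetchall()
print(rows)  # employees earning more than the average salary
```

The same nesting works in DB2, where the outer statement can also be an UPDATE or DELETE whose WHERE clause contains the subquery.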
Glossary 967
timestamp. A seven-part value that consists of a date and time. The timestamp is expressed in years, months, days, hours, minutes, seconds, and microseconds.

TMP. Terminal Monitor Program.

transaction lock. A lock that is used to control concurrent execution of SQL statements.

transition table. A temporary table that contains all the affected rows of the subject table in their state before or after the triggering event occurs. Triggered SQL statements in the trigger definition can reference the table of changed rows in the old state or the new state.

transition variable. A variable that contains a column value of the affected row of the subject table in its state before or after the triggering event occurs. Triggered SQL statements in the trigger definition can reference the set of old values or the set of new values.

trigger. A set of SQL statements that are stored in a DB2 database and executed when a certain event occurs in a DB2 table.

trigger activation. The process that occurs when the trigger event that is defined in a trigger definition is executed. Trigger activation consists of the evaluation of the triggered action condition and conditional execution of the triggered SQL statements.

trigger activation time. An indication in the trigger definition of whether the trigger should be activated before or after the triggered event.

trigger body. The set of SQL statements that is executed when a trigger is activated and its triggered action condition evaluates to true.

trigger cascading. The process that occurs when the triggered action of a trigger causes the activation of another trigger.

triggered action. The SQL logic that is performed when a trigger is activated. The triggered action consists of an optional triggered action condition and a set of triggered SQL statements that are executed only if the condition evaluates to true.

triggered action condition. An optional part of the triggered action. This Boolean condition appears as a WHEN clause and specifies a condition that DB2 evaluates to determine whether the triggered SQL statements should be executed.

triggered SQL statements. The set of SQL statements that is executed when a trigger is activated and its triggered action condition evaluates to true. Triggered SQL statements are also called the trigger body.

trigger granularity. A characteristic of a trigger that determines whether the trigger is activated:
   Only once for the triggering SQL statement
   Once for each row that the SQL statement modifies

triggering event. The specified operation in a trigger definition that causes the activation of that trigger. The triggering event consists of a triggering operation (INSERT, UPDATE, or DELETE) and a subject table on which the operation is performed.

triggering SQL operation. The SQL operation that causes a trigger to be activated when performed on the subject table.

trigger package. A package that is created when a CREATE TRIGGER statement is executed. The package is executed when the trigger is activated.

TSO. Time-Sharing Option.

TSO attachment facility. A DB2 facility consisting of the DSN command processor and DB2I. Applications that are not written for the CICS or IMS environments can run under the TSO attachment facility.

typed parameter marker. A parameter marker that is specified along with its target data type. It has the general form:
   CAST(? AS data-type)

type 1 indexes. Indexes that were created by a release of DB2 before DB2 Version 4 or that are specified as type 1 indexes in Version 4. Contrast with type 2 indexes. As of Version 7, type 1 indexes are no longer supported.

type 2 indexes. Indexes that are created by a release of DB2 after Version 6 or that are specified as type 2 indexes in Version 4 or later.

U

UCS-2. Universal Character Set, coded in 2 octets, which means that characters are represented in 16 bits per character.

UDF. User-defined function.

UDT. User-defined data type. In DB2 for OS/390 and z/OS, the term distinct type is used instead of user-defined data type. See distinct type.

Unicode. A standard that parallels the ISO-10646 standard. Several implementations of the Unicode standard exist, all of which have the ability to represent a large percentage of the characters that are contained in the many scripts that are used throughout the world.

union. An SQL operation that combines the results of two select statements. Unions are often used to merge lists of values that are obtained from several tables.

update trigger. A trigger that is defined with the triggering SQL operation UPDATE.

user-defined data type (UDT). See distinct type.

user-defined function (UDF). A function that is defined to DB2 by using the CREATE FUNCTION statement and that can be referenced thereafter in SQL statements. A user-defined function can be an external function, a sourced function, or an SQL function. Contrast with built-in function.

UTF-8. Unicode Transformation Format, 8-bit encoding form, which is designed for ease of use with existing ASCII-based systems. The CCSID value for data in UTF-8 format is 1208. DB2 for OS/390 and z/OS supports UTF-8 in mixed data fields.

V

Virtual Storage Access Method (VSAM). An access method for direct or sequential processing of fixed- and varying-length records on direct access devices. The records in a VSAM data set or file can be organized in logical sequence by a key field (key sequence), in the physical sequence in which they are written on the data set or file (entry-sequence), or by relative-record number.

Virtual Telecommunications Access Method (VTAM). An IBM licensed program that controls communication and the flow of data in an SNA network.

VSAM. Virtual storage access method.

VTAM. Virtual Telecommunication Access Method (in MVS).
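Several of the trigger terms defined above — subject table, triggering event, trigger granularity, triggered action condition, and transition variables — can be seen together in one small trigger. The sketch below uses Python's sqlite3 module rather than DB2, and the ACCOUNT/AUDIT tables are invented for illustration; DB2 CREATE TRIGGER syntax differs in details (for example, REFERENCING clauses for transition variables).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ACCOUNT (ID INTEGER, BALANCE REAL)")
conn.execute("CREATE TABLE AUDIT (ID INTEGER, OLD_BAL REAL, NEW_BAL REAL)")
conn.execute("INSERT INTO ACCOUNT VALUES (1, 100.0)")

# ACCOUNT is the subject table; UPDATE is the triggering event.
# FOR EACH ROW gives row granularity; the WHEN clause is the triggered
# action condition; OLD and NEW play the role of transition variables.
conn.execute("""
    CREATE TRIGGER AUDIT_BAL AFTER UPDATE ON ACCOUNT
    FOR EACH ROW
    WHEN NEW.BALANCE <> OLD.BALANCE
    BEGIN
        INSERT INTO AUDIT VALUES (OLD.ID, OLD.BALANCE, NEW.BALANCE);
    END""")

conn.execute("UPDATE ACCOUNT SET BALANCE = 75.0 WHERE ID = 1")
audit_rows = conn.execute("SELECT * FROM AUDIT").fetchall()
print(audit_rows)  # one audit row per modified ACCOUNT row
```

The INSERT inside BEGIN...END is the trigger body; because the trigger fires once per modified row, updating three rows would produce three audit rows.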
970 Application Programming and SQL Guide
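The glossary's union and typed parameter marker entries pair naturally in one example. As before, this sketch substitutes Python's sqlite3 module for DB2, and the PARTS/PRODUCTS tables are hypothetical; in DB2 the CAST(? AS data-type) form matters most in dynamic SQL, where it tells the server the marker's data type before a value is bound.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE PARTS (NAME TEXT)")
conn.execute("CREATE TABLE PRODUCTS (NAME TEXT)")
conn.executemany("INSERT INTO PARTS VALUES (?)", [("bolt",), ("washer",)])
conn.executemany("INSERT INTO PRODUCTS VALUES (?)", [("washer",), ("motor",)])

# A union combines the results of two select statements and, by default,
# removes duplicate rows from the merged list of values.
names = conn.execute(
    "SELECT NAME FROM PARTS UNION SELECT NAME FROM PRODUCTS"
).fetchall()

# A typed parameter marker: CAST(? AS data-type) declares the target
# data type of the marker; the value is supplied when the statement runs.
count = conn.execute(
    "SELECT COUNT(*) FROM PARTS WHERE NAME = CAST(? AS TEXT)", ("bolt",)
).fetchone()[0]
print(names, count)
```

"washer" appears in both tables but only once in the union result; UNION ALL would keep the duplicate.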
Bibliography
DB2 Universal Database Server for OS/390 and z/OS Version 7 product libraries:

DB2 for OS/390 and z/OS
v DB2 Administration Guide, SC26-9931
v DB2 Application Programming and SQL Guide, SC26-9933
v DB2 Application Programming Guide and Reference for Java, SC26-9932
v DB2 Command Reference, SC26-9934
v DB2 Data Sharing: Planning and Administration, SC26-9935
v DB2 Data Sharing Quick Reference Card, SX26-3846
v DB2 Diagnosis Guide and Reference, LY37-3740
v DB2 Diagnostic Quick Reference Card, LY37-3741
v DB2 Image, Audio, and Video Extenders Administration and Programming, SC26-9947
v DB2 Installation Guide, GC26-9936
v DB2 Licensed Program Specifications, GC26-9938
v DB2 Master Index, SC26-9939
v DB2 Messages and Codes, GC26-9940
v DB2 ODBC Guide and Reference, SC26-9941
v DB2 Reference for Remote DRDA Requesters and Servers, SC26-9942
v DB2 Reference Summary, SX26-3847
v DB2 Release Planning Guide, SC26-9943
v DB2 SQL Reference, SC26-9944
v DB2 Text Extender Administration and Programming, SC26-9948
v DB2 Utility Guide and Reference, SC26-9945
v DB2 What's New? GC26-9946
v DB2 XML Extender for OS/390 and z/OS Administration and Programming, SC27-9949
v DB2 Program Directory, GI10-8182

DB2 Administration Tool
v DB2 Administration Tool for OS/390 and z/OS User’s Guide, SC26-9847

DB2 Buffer Pool Tool
v DB2 Buffer Pool Tool for OS/390 and z/OS User’s Guide and Reference, SC26-9306

DB2 DataPropagator™
v DB2 UDB Replication Guide and Reference, SC26-9920

Net.Data®
The following books are available at this Web site: http://www.ibm.com/software/net.data/library.html
v Net.Data Library: Administration and Programming Guide for OS/390 and z/OS
v Net.Data Library: Language Environment Interface Reference
v Net.Data Library: Messages and Codes
v Net.Data Library: Reference

DB2 PM for OS/390
v DB2 PM for OS/390 Batch User's Guide, SC27-0857
v DB2 PM for OS/390 Command Reference, SC27-0855
v DB2 PM for OS/390 Data Collector Application Programming Interface Guide, SC27-0861
v DB2 PM for OS/390 General Information, GC27-0852
v DB2 PM for OS/390 Installation and Customization, SC27-0860
v DB2 PM for OS/390 Messages, SC27-0856
v DB2 PM for OS/390 Online Monitor User's Guide, SC27-0858
v DB2 PM for OS/390 Report Reference Volume 1, SC27-0853
v DB2 PM for OS/390 Report Reference Volume 2, SC27-0854
v DB2 PM for OS/390 Using the Workstation Online Monitor, SC27-0859
v DB2 PM for OS/390 Program Directory, GI10-8223

Query Management Facility (QMF)
v Query Management Facility: Developing QMF Applications, SC26-9579
v Query Management Facility: Getting Started with QMF on Windows, SC26-9582
v Query Management Facility: High Performance Option User’s Guide for OS/390 and z/OS, SC26-9581
v Query Management Facility: Installing and Managing QMF on OS/390 and z/OS, GC26-9575
Bibliography 973
v DFSMS/MVS: Utilities, SC26-4926
v MVS/DFP: Using Data Sets, SC26-4749

DFSORT™
v DFSORT Application Programming: Guide, SC33-4035

Distributed Relational Database Architecture™
v Data Stream and OPA Reference, SC31-6806
v IBM SQL Reference, SC26-8416
v Open Group Technical Standard
  The Open Group presently makes the following DRDA books available through its Web site at: www.opengroup.org
  – DRDA Version 2 Vol. 1: Distributed Relational Database Architecture (DRDA)
  – DRDA Version 2 Vol. 2: Formatted Data Object Content Architecture
  – DRDA Version 2 Vol. 3: Distributed Data Management Architecture

Domain Name System
v DNS and BIND, Third Edition, Paul Albitz and Cricket Liu, O’Reilly, ISBN 1-56592-512-2

Education
v IBM Dictionary of Computing, McGraw-Hill, ISBN 0-07031-489-6
v 1999 IBM All-in-One Education and Training Catalog, GR23-8105

Enterprise System/9000® and Enterprise System/3090™
v Enterprise System/9000 and Enterprise System/3090 Processor Resource/System Manager Planning Guide, GA22-7123

High Level Assembler
v High Level Assembler for MVS and VM and VSE Language Reference, SC26-4940
v High Level Assembler for MVS and VM and VSE Programmer's Guide, SC26-4941

Parallel Sysplex® Library
v OS/390 Parallel Sysplex Application Migration, GC28-1863
v System/390 MVS Sysplex Hardware and Software Migration, GC28-1862
v OS/390 Parallel Sysplex Overview: An Introduction to Data Sharing and Parallelism, GC28-1860
v OS/390 Parallel Sysplex Systems Management, GC28-1861
v OS/390 Parallel Sysplex Test Report, GC28-1963
v System/390 9672/9674 System Overview, GA22-7148

ICSF/MVS
v ICSF/MVS General Information, GC23-0093

IMS
v IMS Batch Terminal Simulator General Information, GH20-5522
v IMS Administration Guide: System, SC26-9420
v IMS Administration Guide: Transaction Manager, SC26-9421
v IMS Application Programming: Database Manager, SC26-9422
v IMS Application Programming: Design Guide, SC26-9423
v IMS Application Programming: Transaction Manager, SC26-9425
v IMS Command Reference, SC26-9436
v IMS Customization Guide, SC26-9427
v IMS Install Volume 1: Installation and Verification, GC26-9429
v IMS Install Volume 2: System Definition and Tailoring, GC26-9430
v IMS Messages and Codes, GC27-1120
v IMS Utilities Reference: System, SC26-9441

ISPF
v ISPF V4 Dialog Developer's Guide and Reference, SC34-4486
v ISPF V4 Messages and Codes, SC34-4450
v ISPF V4 Planning and Customizing, SC34-4443
v ISPF V4 User's Guide, SC34-4484

Language Environment
v Debug Tool User's Guide and Reference, SC09-2137

National Language Support
v IBM National Language Support Reference Manual Volume 2, SE09-8002

NetView®
v NetView Installation and Administration Guide, SC31-8043
v NetView User's Guide, SC31-8056

Microsoft® ODBC
v Microsoft ODBC 3.0 Software Development Kit and Programmer's Reference, Microsoft Press, ISBN 1-57231-516-4

OS/390
v OS/390 C/C++ Programming Guide, SC09-2362
v OS/390 C/C++ Run-Time Library Reference, SC28-1663
v OS/390 DCE Administration Guide, SC28-1584
v OS/390 DCE Introduction, GC28-1581
v OS/390 DCE Messages and Codes, SC28-1591
v OS/390 UNIX System Services Command Reference, SC28-1892
v OS/390 UNIX System Services Messages and Codes, SC28-1908
v OS/390 UNIX System Services Planning, SC28-1890
v OS/390 UNIX System Services User's Guide, SC28-1891
v OS/390 UNIX System Services Programming: Assembler Callable Services Reference, SC28-1899

System/370™ and System/390
v ESA/370 Principles of Operation, SA22-7200
v ESA/390 Principles of Operation, SA22-7201
v System/390 MVS Sysplex Hardware and Software Migration, GC28-1210

System Network Architecture (SNA)
v SNA Formats, GA27-3136
v SNA LU 6.2 Peer Protocols Reference, SC31-6808
v SNA Transaction Programmer's Reference Manual for LU Type 6.2, GC30-3084
v SNA/Management Services Alert Implementation Guide, GC31-6809
Index X-3
CICS (continued)
  DSNTIAC subroutine (continued)
    PL/I 189
  facilities
    command language translator 410
    control areas 463
    EDF (execution diagnostic facility) 468
  language interface module (DSNCLI)
    use in link-editing an application 412
  logical unit of work 360
  operating
    running a program 463
    system failure 360
  planning
    environment 428
  programming
    DFHEIENT macro 110
  sample applications 835, 838
  SYNCPOINT command 360
  storage handling
    assembler 121
    C 140
    COBOL 163
    PL/I 189
  thread
    reuse 803
  unit of work 360
CICS attachment facility 803
claim
  effect of cursor WITH HOLD 351
CLOSE
  connection function of CAF
    description 737
    program example 758
    syntax 749
    usage 749
  statement
    description 85
    WHENEVER NOT FOUND clause 513, 523
cluster ratio
  effects
    table space scan 688
    with list prefetch 707
COALESCE function 36
COBOL application program
  character host variables
    fixed-length strings 148
    varying-length strings 148
  coding SQL statements 65, 141
  compiling 411
  data declarations 95
  data type compatibility 159
  DB2 precompiler option defaults 409
  DECLARE statement 143
  declaring a variable 158
  dynamic SQL 526
  FILLER entry name 158
  host structure 72
  host variable
    use of hyphens 145
  indicator variables 160
  naming convention 144
  null values 71
  options 144, 145
  preparation 411
  record description from DCLGEN 99
  resetting SQL-INIT-FLAG 146
  sample program 849
  variables in SQL 67
  WHENEVER statement 144
  with classes, preparing 433
  with object-oriented extensions 163
coding
  SQL statements
    assembler 107
    C 121
    C++ 121
    COBOL 141
    dynamic 497
    FORTRAN 164
    PL/I 174
    REXX 189
collection, package
  identifying 416
  SET CURRENT PACKAGESET statement 416
colon
  assembler host variable 111
  C host variable 124
  COBOL host variable 146
  FORTRAN host variable 167
  PL/I host variable 178
  preceding a host variable 68
column
  data types 4
  default value
    system-defined 18
    user-defined 18
  displaying
    list of columns 15
  heading created by SPUFI 59
  labels
    DCLGEN 97
    usage 521
  name
    as a host variable 98
    UPDATE statement 30
  retrieving
    by SELECT 5
  specified in CREATE TABLE 17
  width of results 55, 58
COMMA
  option of precompiler 403
commit
  rollback coordination 365
  using RRSAF 769
commit point
  description 359
  IMS unit of work 361
  lock releasing 362
COMMIT statement
  description 53
cursor (continued)
  opening
    OPEN statement 83
  retrieving a row of data 83
  scrollable 85
  updating a current row 84
  WITH HOLD
    claims 351
    description 91
    locks 350
cycle restrictions 207

D
data
  adding to the end of a table 810
  associated with WHERE clause 9
  currency 392
  effect of locks on integrity 326
  improving access 671
  indoubt state 362
  retrieval using SELECT * 809
  retrieving a set of rows 83
  retrieving large volumes 809
  scrolling backward through 805
  security and integrity 359
  understanding access 671
  updating during retrieval 808
  updating previously retrieved data 808
data security and integrity 359
data space
  LOB materialization 236
data type
  compatibility
    assembler and SQL 117
    assembler application program 118
    C and SQL 131
    COBOL and SQL 155, 159
    FORTRAN 172
    FORTRAN and SQL 171
    PL/I and SQL 186
    REXX and SQL 194
  equivalent
    FORTRAN 169
    PL/I 182
  result set locator 603
database
  sample application 829
DATE
  option of precompiler 404
DB2 abend 740, 773
DB2 commit point 362
DB2 private protocol access
  coding an application 371
  compared to DRDA access 370
  example 370
  mixed environment 923
  planning 369, 370
  sample program 888
DB2I (DB2 Interactive)
  background processing
    run-time libraries 442
  EDITJCL processing
    run-time libraries 442
  interrupting 57
  menu 51
  panels
    BIND PACKAGE 446
    BIND PLAN 450
    Compile, Link, and Run 460
    Current SPUFI Defaults 54
    DB2I Primary Option Menu 51, 435
    DCLGEN 95, 103
    Defaults for BIND PLAN 454
    Precompile 443
    Program Preparation 436
    System Connection Types 458
  preparing programs 434
  program preparation example 436
  selecting
    DCLGEN (declarations generator) 99
    SPUFI 51
  SPUFI 51
DBCS (double-byte character set)
  constants 176
  table names 95
  translation in CICS 410
  use of labels with DCLGEN 97
DBINFO
  user-defined function 261
DBPROTOCOL(DRDA)
  improves distributed performance 383
DBRM (database request module)
  deciding how to bind 317
  description 400
DCLGEN subcommand of DSN
  building data declarations 95
  example 101
  forming host variable names 98
  identifying tables 95
  INCLUDE statement 99
  including declarations in a program 99
  indicator variable array declaration 98
  starting 95
  using 95
DDITV02 input data set 482
DDOTV02 output data set 484
deadlock
  description 327
  example 327
  indications
    in CICS 329
    in IMS 329
    in TSO 328
  recommendation for avoiding 330
  with RELEASE(DEALLOCATE) 332
  X'00C90088' reason code in SQLCA 328
debugging application programs 466
DEC15 precompiler option 404
DRDA access (continued)
  programming hints 374
  releasing connections 373
  sample program 880
  using 373
dropping
  tables 23
DSN applications, running with CAF 735
DSN command of TSO
  command processor
    services lost under CAF 735
  return code processing 425
  subcommands
    RUN 424
DSN_FUNCTION_TABLE table
  description 295
DSN_STATEMNT_TABLE table
  column descriptions 717
DSN8BC3 sample program 162
DSN8BD3 sample program 140
DSN8BE3 sample program 140
DSN8BF3 sample program 173
DSN8BP3 sample program 189
DSNACICS stored procedure
  debugging 944
  description 937
  invocation example 942
  invocation syntax 938
  output 943
  parameter descriptions 938
  restrictions 944
DSNACICX user exit
  description 940
  parameter list 941
  rules for writing 940
DSNALI (CAF language interface module)
  deleting 757
  loading 757
DSNCLI (CICS language interface module)
  include in link-edit 412
DSNELI (TSO language interface module) 735
DSNH
  command of TSO
    obtaining SYSTERM output 473
DSNHASM procedure 429
DSNHC procedure 429
DSNHCOB procedure 429
DSNHCOB2 procedure 429
DSNHCPP procedure 429
DSNHCPP2 procedure 429
DSNHDECP
  implicit CAF connection 738
DSNHFOR procedure 429
DSNHICB2 procedure 429
DSNHICOB procedure 429
DSNHLI entry point to DSNALI
  implicit calls 738
  program example 763
DSNHLI entry point to DSNRLI
  program example 800
DSNHLI2 entry point to DSNALI 763
DSNHPLI procedure 429
DSNMTV01 module 485
DSNRLI (RRSAF language interface module)
  deleting 800
  loading 800
DSNTEDIT CLIST 915
DSNTEP2 sample program
  how to run 839
  parameters 839
  program preparation 839
DSNTIAC subroutine
  assembler 121
  C 140
  COBOL 163
  PL/I 189
DSNTIAD sample program
  calls DSNTIAR subroutine 120
  how to run 839
  parameters 839
  program preparation 839
  specifying SQL terminator 843
DSNTIAR subroutine
  assembler 119
  C 139
  COBOL 161
  description 76
  FORTRAN 173
  PL/I 188
DSNTIAUL sample program
  how to run 839
  parameters 839
  program preparation 839
DSNTIR subroutine 173
DSNTPSMP stored procedure
  authorization required 566
DSNTRACE data set 756
duration of locks
  controlling 339
  description 335
DYNAM option of COBOL 144
dynamic plan selection
  restrictions with CURRENT PACKAGESET special register 423
  using packages with 423
dynamic SQL
  advantages and disadvantages 498
  assembler program 514
  C program 514
  caching
    effect of RELEASE bind option 340
  caching prepared statements 500
  COBOL program 143, 526
  description 497
  effect of bind option REOPT(VARS) 525
  effect of WITH HOLD cursor 509
  EXECUTE IMMEDIATE statement 507
  fixed-list SELECT statements 511, 513
  FORTRAN program 166
  host languages 506
  non-SELECT statements 507, 510
  PL/I 514
FORTRAN application program (continued)
  declaring (continued)
    views 166
  description of SQLCA 164
  host variable 167
  including code 166
  indicator variables 172
  margins for SQL statements 166
  naming convention 166
  parallel option 167
  precompiler option defaults 409
  sequence numbers 166
  SQL INCLUDE statement 167
  statement labels 166
FROM clause
  joining tables 33
  SELECT statement 5
FRR (functional recovery routine) 755, 756
FULL OUTER JOIN
  example 35
function
  column
    when evaluated 687
function resolution
  user-defined function (UDF) 290
functional recovery routine (FRR) 756

G
GET DIAGNOSTICS statement
  SQL procedure 557
global transaction
  RRSAF support 782, 785, 788
GO TO clause of WHENEVER statement 75
GOTO statement
  SQL procedure 557
governor (resource limit facility) 505
GRANT statement
  authority 464
GRAPHIC
  option of precompiler 404
graphic host variables
  assembler 112
  C 126
  PL/I 179
GROUP BY clause
  effect on OPTIMIZE clause 662
  subselect
    examples 10

H
handler
  SQL procedure 559
handling errors
  SQL procedure 559
HAVING clause
  selecting groups subject to conditions 11
HOST
  FOLD value for C and CPP 405
  option of precompiler 405
host language
  declarations in DB2I (DB2 Interactive) 95
  dynamic SQL 506
  embedding SQL statements in 65
host structure
  C 129
  COBOL 72, 152
  description 72
  PL/I 181
host variable
  assembler 111
  C 124, 125
  changing CCSID 72
  character
    assembler 112
    C 125
    COBOL 148
    FORTRAN 168
    PL/I 179
  COBOL 146
  description 67
  example of use in COBOL program 68
  example query 648
  EXECUTE IMMEDIATE statement 508
  FETCH statement 512
  FORTRAN 167
  graphic
    assembler 112
    C 126
    PL/I 179
  impact on access path selection 648
  in equal predicate 649
  inserting into tables 69
  naming a structure
    C program 129
    PL/I program 181
  PL/I 178
  PREPARE statement 512
  REXX 194
  SELECT
    clause of COBOL program 69
  static SQL flexibility 498
  tuning queries 648
  WHERE clause in COBOL program 70
hybrid join
  description 699

I
I/O processing
  parallel
    queries 723
IDENTIFY, RRSAF
  program example 801
  syntax 775
  usage 775
identity column
  inserting in table 805
  inserting values into 27
  use in a trigger 213
KEEPDYNAMIC option
  BIND PACKAGE subcommand 503
  BIND PLAN subcommand 503
key
  column 203
  composite 204
  foreign
    defining 206, 209
  primary
    choosing 203
    defining 204, 205
    recommendations for defining 205
  timestamp 203
  unique 805
keywords, reserved 921

L
label, column 521
language interface modules
  DSNCLI
    AMODE link-edit option 412
large object (LOB)
  data space 236
  declaring host variables 232
  declaring LOB locators 232
  description 229
  locator 236
  materialization 236
  with indicator variables 240
LEAVE statement
  SQL procedure 557
LEFT OUTER JOIN
  example 36
level of a lock 333
LEVEL option of precompiler 405
limited partition scan 685
LINECOUNT option
  precompiler 405
link-editing
  AMODE option 462
  application program 411
  RMODE option 462
list prefetch
  description 706
  thresholds 707
load module structure of CAF (call attachment facility) 736
load module structure of RRSAF 770
LOAD MVS macro used by CAF 735
LOAD MVS macro used by RRSAF 768
loading
  data
    DSNTIAUL 465
LOB
  lock
    concurrency with UR readers 347
    description 355
LOB (large object)
  lock duration 357
  LOCK TABLE statement 358
  locking 355
  modes of LOB locks 357
  modes of table space locks 357
lock
  avoidance 348
  benefits 326
  class
    transaction 325
  compatibility 337
  description 325
  duration
    controlling 339
    description 335
    LOBs 357
  effect of cursor WITH HOLD 350
  effects
    deadlock 327
    suspension 326
    timeout 326
  escalation
    when retrieving large numbers of rows 809
  hierarchy
    description 333
  LOB locks 355
  mode 336
  object
    description 338
    indexes 339
  options affecting
    access path 353
    bind 339
    cursor stability 344
    program 339
    read stability 343
    repeatable read 343
    uncommitted read 346
  page locks
    CS, RS, and RR compared 343
    description 333
  recommendations for concurrency 329
  size
    page 333
    partition 333
    table 333
    table space 333
  summary 354
  unit of work 359, 360
LOCK TABLE statement
  effect on auxiliary tables 358
  effect on locks 352
LOCKPART clause of CREATE and ALTER TABLESPACE
  effect on locking 334
LOCKSIZE clause
  recommendations 330
logical unit of work
  CICS description 360
LOOP statement
  SQL procedure 557
P
package
  advantages 318
  binding
    DBRM to a package 412
    EXPLAIN option for remote 678
    PLAN_TABLE 673
    remote 413
    to plans 415
  deciding how to use 317
  identifying at run time 415
  invalidated
    conditions for 322
  list
    plan binding 415
  location 416
  rebinding with pattern-matching characters 320
  selecting 415, 416
  version, identifying 417
page
  locks
    description 333
PAGE_RANGE column of PLAN_TABLE 685
panel
  Current SPUFI Defaults 54
  DB2I Primary Option Menu 51
  DCLGEN 95, 102
  DSNEDP01 95, 102
  DSNEPRI 51
  DSNESP01 51
  DSNESP02 54
  EDIT (for SPUFI input data set) 56
  SPUFI 51
parallel processing
  description 721
  enabling 724
  related PLAN_TABLE columns 686
  tuning 727
parameter marker
  casting 296
  dynamic SQL 508
  more than one 510
  values provided by OPEN 512
  with arbitrary statements 524, 525
PARMS option
  running in foreground 426
partition scan, limited 685
partitioned table space
  locking 334
PDS (partitioned data set) 95
performance
  affected by
    application structure 731
    DEFER(PREPARE) 383
    lock size 335
    NODEFER (PREPARE) 383
    remote queries 381, 383
  monitoring
    with EXPLAIN 671
performance considerations
  scrollable cursor 658
PERIOD option
  precompiler 406
phone application
  description 833
PL/I application program
  character host variables 179
  coding SQL statements 174
  comments 175
  considerations 177
  data types 182, 186
  declaring tables 175
  declaring views 175
  graphic host variables 179
  host variable
    declaring 178
    numeric 178
    using 177
  indicator variables 187
  naming convention 176
  sequence numbers 176
  SQLCA, defining 174
  SQLDA, defining 174
  statement labels 176
  variable, declaration 185
  WHENEVER statement 176
PLAN_TABLE table
  column descriptions 673
  report of outer join 695
planning
  accessing distributed data 369, 392
  binding 317, 323
  concurrency 323, 359
  precompiling 316
  recovery 359
precompiler
  binding on another system 400
  description 398
  diagnostics 400
  escape character 403
  functions 398
  input 399
  maximum size of input 399
  option descriptions 402
  options
    CONNECT 376
    defaults 408
    DRDA access 376
    SQL 376
  output 399
  planning for 316
  precompiling programs 398
  starting
    dynamically 430
    JCL for procedures 428
  submitting jobs
    DB2I panels 443
    ISPF panels 435, 436
predicate
  description 628
  evaluation rules 631
  filter factor 637
RELEASE
  option of BIND PLAN subcommand
    combining with other options 339
  statement 373
release information block (RIB) 741
RELEASE LOCKS field of panel DSNTIP4
  effect on page and row locks 350
reoptimizing access path 648
REPEAT statement
  SQL procedure 557
reserved keywords 921
resetting control blocks 750, 793
resource limit facility (governor)
  description 505
  writing an application for predictive governing 505
resource unavailable condition 751, 794
restart
  DL/I batch programs using JCL 487
result column
  naming with AS clause 7
result set locator
  assembler 113
  C 128
  COBOL 151
  example 603
  FORTRAN 168
  how to use 603
  PL/I 180
result table
  example 3
retrieving
  data, changing the CCSID 519
  data in ASCII from DB2 for OS/390 and z/OS 519
  data in Unicode from DB2 for OS/390 and z/OS 519
  data using SELECT * 809
  large volumes of data 809
return code
  DSN command 425
  SQL 749
REXX procedure
  coding SQL statements 189
  error handling 193
  indicator variables 197
  isolation level 198
  naming convention 193
  running 428
  specifying input data type 196
  statement label 193
RIB (release information block)
  address in CALL DSNALI parameter list 741
  CONNECT, RRSAF 775
  CONNECT connection function of CAF 743
  program example 758
RID (record identifier) pool
  use in list prefetch 706
RIGHT OUTER JOIN
  example 37
RMODE link-edit option 462
ROLB call, IMS
  advantages over ROLL 366
  ends unit of work 361
  in batch programs 366
ROLL call, IMS
  ends unit of work 361
  in batch programs 366
rollback
  option of CICS SYNCPOINT statement 360
  using RRSAF 769
ROLLBACK statement
  description 53
  error in IMS 479
  in a stored procedure 544
  unit of work in TSO 359
row
  selecting with WHERE clause 8
  updating 29
  updating current 84
  updating large volumes 808
ROWID
  coding example 683
  index-only access 681
  inserting in table 805
rowset parameter
  DB2 for OS/390 and z/OS support for 391
RR (repeatable read)
  how locks are held (figure) 343
  page and row locking 343
RRS global transaction
  RRSAF support 782, 785, 788
RRSAF
  application program
    examples 800
    preparation 768
  connecting to DB2 801
  description 767
  function descriptions 775
  load module structure 770
  programming language 768
  register conventions 775
  restrictions 767
  return codes
    AUTH SIGNON 783
    CONNECT 775
    SIGNON 780
    TERMINATE IDENTIFY 793
    TERMINATE THREAD 792
    TRANSLATE 794
  run environment 769
RRSAF (Recoverable Resource Manager Services attachment facility)
  transactions
    using global transactions 332
RS (read stability)
  page and row locking (figure) 343
RUN
  subcommand of DSN
    CICS restriction 411
    return code processing 425
    running a program in TSO foreground 424
SHARE (continued)
  lock mode (continued)
    page 336
    row 336
    table, partition, and table space 336
SIGNON, RRSAF
  program example 801
  syntax 780
  usage 780
simple table space
  locking 334
single-mode IMS programs 364
SOME quantified predicate 45
sort
  program
    RIDs (record identifiers) 710
    when performed 710
  removing duplicates 709
  shown in PLAN_TABLE 709
sort key
  ordering 9
SOURCE
  option of precompiler 407
special register
  behavior in stored procedures 544
  behavior in user-defined functions 276
  CURRENT DEGREE 724
  CURRENT PACKAGESET 30
  CURRENT RULES 421
  CURRENT SERVER 30
  CURRENT SQLID 30
  CURRENT TIME 30
  CURRENT TIMESTAMP 30
  CURRENT TIMEZONE 30
  USER 30
SPUFI
  browsing output 58
  changed column widths 58
  CONNECT LOCATION field 53
  created column heading 59
  default values 54
  panels
    allocates RESULT data set 52
    filling in 52
    format and display output 58
    previous values displayed on panel 51
    selecting on DB2I menu 51
  processing SQL statements 51, 57
  specifying SQL statement terminator 54
  SQLCODE returned 58
SQL
  option of precompiler 407
SQL (Structured Query Language)
  coding
    assembler 107
    basics 65
    C 121
    C++ 121
    COBOL 141
    dynamic 526
    FORTRAN program 165
    object extensions 227
    PL/I 174
    REXX 189
  cursors 81
  dynamic
    coding 497
    sample C program 863
    statements allowed 923
  escape character 403
  host variables 67
  keywords, reserved 921
  return codes
    checking 74
    handling 76
  static
    sample C program 863
  string delimiter 442
  structures 67
  syntax checking 374
  varying-list 513, 525
SQL communication area (SQLCA) 74, 76
SQL-INIT-FLAG, resetting 146
SQL procedure
  preparation using DSNTPSMP procedure 564
  program preparation 563
  referencing SQLCODE and SQLSTATE 560
  SQL variable 557
  statements allowed 928
SQL procedure statement
  CALL statement 556
  CASE statement 557
  compound statement 557
  CONTINUE handler 559
  EXIT handler 560
  GET DIAGNOSTICS statement 557
  GOTO statement 557
  handler 559
  handling errors 559
  IF statement 557
  LEAVE statement 557
  LOOP statement 557
  REPEAT statement 557
  SQL statement 557
  WHILE statement 557
SQL statement
  SQL procedure 557
SQL statement coprocessor
  processing SQL statements 398
SQL statement nesting
  restrictions 297
  stored procedures 297
  triggers 297
  user-defined functions 297
SQL statement terminator
  modifying in DSNTEP2 for CREATE TRIGGER 845
  modifying in DSNTIAD for CREATE TRIGGER 843
  modifying in SPUFI for CREATE TRIGGER 54
  specifying in SPUFI 54
SQLWARNING clause
   WHENEVER statement in COBOL program 75
SSID (subsystem identifier), specifying 441
SSN (subsystem name)
   CALL DSNALI parameter list 741
   parameter in CAF CONNECT function 743
   parameter in CAF OPEN function 747
   parameter in RRSAF CONNECT function 775
   SQL calls to CAF (call attachment facility) 738
standard, SQL (ANSI/ISO)
   UNIQUE clause of CREATE TABLE 204
star schema 701
   defining indexes for 666
state
   of a lock 336
statement
   labels
      FORTRAN 166
      PL/I 176
statement table
   column descriptions 717
static SQL
   description 497
   host variables 498
   sample C program 863
status
   incomplete definition 205
STDDEV function
   when evaluation occurs 687
STDSQL option
   precompiler 407
STOP DATABASE command
   timeout 327
storage
   acquiring
      retrieved row 517
      SQLDA 516
   addresses in SQLDA 519
storage group, DB2
   sample application 829
stored procedure
   accessing transition tables 279, 608
   binding 550
   CALL statement 572
   calling from a REXX procedure 608
   defining parameter lists 578
   defining to DB2 533
   DSNACICS 937
   example 528
   invoking from a trigger 217
   languages supported 540
   linkage conventions 574
   returning non-relational data 548
   returning result set 547
   running as authorized program 549
   statements allowed 926
   testing 618
   usage 527
   use of special registers 544
   using COMMIT in 544
   using host variables with 531
   using ROLLBACK in 544
   using temporary tables in 548
   WLM_REFRESH 935
   writing 539
   writing in REXX 551
stormdrain effect 804
string
   delimiter
      apostrophe 403
   fixed-length
      assembler 112
      C 137
      COBOL 148
      PL/I 187
   value in CREATE TABLE statement 18
   varying-length
      assembler 112
      C 137
      COBOL 148
      PL/I 187
string host variables in C 135
subquery
   correlated
      DELETE statement 49
      example 47
      subquery 47
      tuning 653
      UPDATE statement 49
   DELETE statement 49
   description 43
   join transformation 655
   noncorrelated 654
   referential constraints 50
   restrictions with DELETE 50
   tuning 652
   tuning examples 657
   UPDATE statement 49
   use with UPDATE, DELETE, and INSERT 45
subsystem
   identifier (SSID), specifying 441
subsystem name (SSN) 738
summarizing group values 10
SYNC call 361
SYNC call, IMS 361
SYNC parameter of CAF (call attachment facility) 749, 758
synchronization call abends 482
SYNCPOINT statement of CICS 360
syntax diagrams, how to read xix
SYSLIB data sets 429
Sysplex query parallelism
   splitting large queries across DB2 members 721
SYSPRINT
   precompiler output
      options section 474
      source statements section, example 475
      summary section, example 477
      symbol cross-reference section 476
   used to analyze errors 474
SYSTERM output to analyze errors 473
TSO
   CLISTs
      calling application programs 427
      running in foreground 427
   DSNALI language interface module 735
   TEST command 466
   unit of work, completion 360
tuning
   DB2
      queries containing host variables 648
two-phase commit
   coordinating updates 379
TWOPASS
   option of precompiler 408

U
Unicode
   data, retrieving from DB2 for OS/390 and z/OS 519
UNION clause
   effect on OPTIMIZE clause 662
   removing duplicates with sort 709
   SELECT statement 12
UNIQUE clause
   CREATE TABLE statement 204
unit of recovery
   indoubt
      recovering CICS 361
      recovering IMS 362
unit of work
   beginning 359
   CICS description 360
   completion
      commit 360
      open cursors 91
      rollback 360
      TSO 359, 360
   description 359
   DL/I batch 365
   duration 359
   IMS
      batch 365
      commit point 361
      ending 361
      starting point 361
   prevention of data access by other users 359
   TSO
      completion 359
      ROLLBACK statement 359
UPDATE
   lock mode
      page 336
      row 336
      table, partition, and table space 336
   statement
      correlated subqueries 49
      description 29
      SET clause 30
      subqueries 45
      WHERE CURRENT clause 84
updating
   during retrieval 808
   large volumes 808
   values from host variables 69
UR (uncommitted read)
   concurrent access restrictions 347
   effect on reading LOBs 356
   page and row locking 346
   recommendation 332
USER
   special register 30
   value in UPDATE statement 30
user-defined function
   DBINFO structure 261
   invoking from a trigger 217
   scratchpad 259
   statements allowed 926
user-defined function (UDF)
   abnormal termination 297
   accessing transition tables 279
   assembler parameter conventions 264
   assembler table locators 280
   C or C++ table locators 282
   C parameter conventions 264
   casting arguments 296
   COBOL parameter conventions 271
   COBOL table locators 282
   data type promotion 293
   definer 242
   defining 244
   description 241
   DSN_FUNCTION_TABLE 295
   example 242
   example of definition 246
   function resolution 290
   host data types 251
   how to implement 248
   how to invoke 289
   implementer 242
   invocation
      syntax 289
   invoker 242
   invoking from a predicate 299
   main program 249
   nesting SQL statements 297
   overview 242
   parallelism considerations 249
   parameter conventions 251
   PL/I parameter conventions 274
   PL/I table locators 283
   preparing 284
   restrictions 248, 249
   setting result values 256
   simplifying function resolution 294
   subprogram 249
   testing 286
   use of scratchpad 277
   use of special registers 276
   with scrollable cursor 300
USING DESCRIPTOR clause
   EXECUTE statement 525
W
WHENEVER statement
assembler 110
C 124
COBOL 144
FORTRAN 166
PL/I 176
SQL error codes 75
Readers’ Comments — We’d Like to Hear from You
DB2 Universal Database for OS/390 and z/OS
Application Programming
and SQL Guide
Version 7