DB2 Universal Database Server for OS/390
Application Programming
and SQL Guide
Version 6
SC26-9004-01
Note!
Before using this information and the product it supports, be sure to read the general information under
Appendix I, “Notices” on page 955.
This edition applies to Version 6 of DB2 Universal Database Server for OS/390, 5645-DB2, and to any subsequent releases until
otherwise indicated in new editions. Make sure you are using the correct edition for the level of the product.
This softcopy version is based on the printed edition of the book and includes the changes indicated in the printed version by vertical
bars. Additional changes made to this softcopy version of the manual since the hardcopy manual was published are indicated by the
hash (#) symbol in the left-hand margin. Editorial changes that have no technical significance are not noted.
Copyright International Business Machines Corporation 1983, 1999. All rights reserved.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Section 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Chapter 3-3. Generating declarations for your tables using DCLGEN . . . 115
Invoking DCLGEN through DB2I . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Including the data declarations in your program . . . . . . . . . . . . . . . . . . 119
DCLGEN support of C, COBOL, and PL/I languages . . . . . . . . . . . . . . . 120
Example: Adding a table declaration and host-variable structure to a library . . 121
Section 7. Additional programming techniques . . . . . . . . . . . . . . . . . . . . . . . 497
Chapter 7-7. Programming for the call attachment facility (CAF) . . . . . . 745
Call attachment facility capabilities and restrictions . . . . . . . . . . . . . . . . . 745
How to use CAF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 748
Sample scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 767
Exits from your application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 768
Error messages and DSNTRACE . . . . . . . . . . . . . . . . . . . . . . . . . . . 769
CAF return codes and reason codes . . . . . . . . . . . . . . . . . . . . . . . . . 769
Program examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 770
Appendixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827
Appendix B. Sample applications . . . . . . . . . . . . . . . . . . . . . . . . . 849
Types of sample applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 849
Using the applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 851
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 959
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 975
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I-1
For more advanced topics on using SELECT statements, see “Chapter 2-4. Using
subqueries” on page 71, “Chapter 5-4. Planning to access distributed data” on
page 379, and Chapter 5 of DB2 SQL Reference.
Examples of SQL statements illustrate the concepts that this chapter discusses.
Consider developing SQL statements similar to these examples and then executing
them dynamically using SPUFI or Query Management Facility (QMF).
Result tables
The data retrieved through SQL is always in the form of a table, which is called a
result table. Like the tables from which you retrieve the data, a result table has
rows and columns. A program fetches this data one row at a time.
Example: SELECT statement: This SELECT statement retrieves the last name,
first name, and phone number of employees in department D11 from the sample
employee table:
SELECT LASTNAME, FIRSTNME, PHONENO
FROM DSN8610.EMP
WHERE WORKDEPT = 'D11'
ORDER BY LASTNAME;
Data types
When you create a DB2 table, you define each column to have a specific data type.
The data type can be a built-in data type or a distinct type. This section discusses
built-in data types. For information on distinct types, see “Chapter 4-4. Creating
and using distinct types” on page 309. The data type of a column determines what
you can and cannot do with it. When you perform operations on columns, the data
types of the operands must be compatible.
To better understand the concepts presented in this chapter, you must know the
data types of the columns to which an example refers. As shown in Figure 1, the
| data types have four general categories: string, datetime, numeric, and ROWID.
For more detailed information on each data type, see Chapter 3 of DB2 SQL
Reference.
Table 1 on page 7 shows whether operands of any two data types are compatible
(Yes) or incompatible (No).
Example: SELECT *: This SQL statement selects all columns from the department
table:
SELECT *
FROM DSN8610.DEPT;
The result table looks like this:
DEPTNO DEPTNAME MGRNO ADMRDEPT LOCATION
====== ==================================== ====== ======== ========
A00    SPIFFY COMPUTER SERVICE DIV.         000010 A00      --------
B01    PLANNING                             000020 A00      --------
C01    INFORMATION CENTER                   000030 A00      --------
D01    DEVELOPMENT CENTER                   ------ A00      --------
D11    MANUFACTURING SYSTEMS                000060 D01      --------
D21    ADMINISTRATION SYSTEMS               000070 D01      --------
E01    SUPPORT SERVICES                     000050 A00      --------
E11    OPERATIONS                           000090 E01      --------
E21    SOFTWARE SUPPORT                     000100 E01      --------
F22    BRANCH OFFICE F2                     ------ E01      --------
G22    BRANCH OFFICE G2                     ------ E01      --------
H22    BRANCH OFFICE H2                     ------ E01      --------
I22    BRANCH OFFICE I2                     ------ E01      --------
J22    BRANCH OFFICE J2                     ------ E01      --------
Because the example does not specify a WHERE clause, the statement retrieves
data from all rows.
The dashes for MGRNO and LOCATION in the result table indicate null values.
“Selecting rows that have null values” on page 12 describes null values.
SELECT * is recommended mostly for use with dynamic SQL and view definitions.
You can use SELECT * in static SQL, but this is not recommended; if you add a
column to the table to which SELECT * refers, the program might reference
columns for which you have not defined receiving host variables. For more
information on host variables, see “Accessing data using host variables and host
structures” on page 96.
If you list the column names in a static SELECT statement instead of using an
asterisk, you can avoid the problem just mentioned. You can also see the
relationship between the receiving host variables and the columns in the result
table.
Example: SELECT column-name: This SQL statement selects only the MGRNO
and DEPTNO columns from the department table:
SELECT MGRNO, DEPTNO
FROM DSN8610.DEPT;
With a single SELECT statement, you can select data from one column or as many
as 750 columns.
For example, if you want to execute a DB2 built-in function on a host variable, you
can use an SQL statement like this:
SELECT RAND(:HRAND)
FROM SYSIBM.SYSDUMMY1;
If you want to order the rows of data in the result table, use the ORDER BY clause
described in “Putting the rows in order: ORDER BY” on page 32.
Example: CREATE VIEW with AS clause: You can specify result column names
in the select-clause of a CREATE VIEW statement. You do not need to supply the
column list of CREATE VIEW, because the AS keyword names the derived column.
The columns in the view EMP_SAL are EMPNO and TOTAL_SAL.
CREATE VIEW EMP_SAL AS
SELECT EMPNO, SALARY+BONUS+COMM AS TOTAL_SAL
FROM DSN8610.EMP;
Example: UNION ALL with AS clause: You can use the AS clause to give the
same name to corresponding columns of tables in a union. The third result column
from the union of the two tables has the name TOTAL_VALUE, even though it
contains data derived from columns with different names:
SELECT 'On hand' AS STATUS, PARTNO, QOH * COST AS TOTAL_VALUE
FROM PART_ON_HAND
UNION ALL
SELECT 'Ordered' AS STATUS, PARTNO, QORDER * COST AS TOTAL_VALUE
FROM ORDER_PART
ORDER BY PARTNO, TOTAL_VALUE;
The column STATUS and the derived column TOTAL_VALUE have the same name
in the first and second result tables, and are combined in the union of the two result
tables:
STATUS PARTNO TOTAL_VALUE
----------- ------ -----------
On hand 557 345.6
Ordered 557 15.5
...
For information on unions, see “Merging lists of values: UNION” on page 36.
DB2 does not necessarily process the clauses in this order internally, but the
results you get always look as if they had been processed in this order.
You can specify the ORDER BY clause only in the outermost SELECT statement.
If you use an AS clause to define a name in the outermost SELECT clause, only
the ORDER BY clause can refer to that name. If you use an AS clause in a
subselect, you can refer to the name it defines outside of the subselect.
DB2 evaluates a predicate for a row as true, false, or unknown. Results are
unknown only if an operand is null.
The next sections illustrate different comparison operators that you can use in a
predicate in a WHERE clause. The following table lists the comparison operators.
You can also search for rows that do not satisfy one of the above conditions, by
using the NOT keyword before the specified condition. See “Using the not keyword
with comparison operators” on page 13 for more information about using the NOT
keyword.
You can use a WHERE clause to retrieve rows that contain a null value in some
column. Specify:
WHERE column-name IS NULL
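A search condition can also use a comparison operator. For example, the following
statement (a sketch against the sample employee table) uses the equal operator:
SELECT FIRSTNME, LASTNAME
FROM DSN8610.EMP
WHERE WORKDEPT = 'A00';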
| The statement retrieves the first and last name of each employee in department
| A00.
To select all employees hired before January 1, 1960, you can use:
SELECT HIREDATE, FIRSTNME, LASTNAME
FROM DSN8610.EMP
WHERE HIREDATE < '1960-01-01';
The example retrieves the date hired and the name for each employee hired before
1960.
When strings are compared, DB2 uses the collating sequence of the encoding
scheme for the table. That is, if the table is defined with CCSID EBCDIC, DB2 uses
an EBCDIC collating sequence. If the table is defined with CCSID ASCII, DB2 uses
an ASCII collating sequence. The EBCDIC collating sequence is different from the
ASCII collating sequence. For example, letters sort before digits in EBCDIC, and
after digits in ASCII.
You cannot use the NOT keyword directly with the comparison operators. The
following WHERE clause results in an error:
WHERE DEPT NOT = 'A00'
You can precede other SQL keywords with NOT: NOT LIKE, NOT IN, and NOT
BETWEEN are all acceptable. For example, the following two clauses are
equivalent:
WHERE MGRNO NOT IN ('000010', '000020')
WHERE NOT (MGRNO IN ('000010', '000020'))
The following SQL statement selects data from each row for employees with the
initials E H.
SELECT FIRSTNME, LASTNAME, WORKDEPT
FROM DSN8610.EMP
WHERE FIRSTNME LIKE 'E%' AND LASTNAME LIKE 'H%';
The following SQL statement selects data from each row of the department table
where the department name contains “CENTER” anywhere in its name.
SELECT DEPTNO, DEPTNAME
FROM DSN8610.DEPT
WHERE DEPTNAME LIKE '%CENTER%';
Assume the DEPTNO column is a three-character column of fixed length. You can
use this search condition to return rows with department numbers that begin with E
and end with 1:
...WHERE DEPTNO LIKE 'E%1';
If E1 is a department number, its third character is a blank and does not match the
search condition. If you define the DEPTNO column as a three-character column of
varying length, department E1 matches the search condition, because a
varying-length column contains no trailing blanks.
The following SQL statement selects data from each row of the department table
where the department number starts with an E and contains a 1.
SELECT DEPTNO, DEPTNAME
FROM DSN8610.DEPT
WHERE DEPTNO LIKE 'E%1%';
The following SQL statement selects data from each row whose four-digit phone
number has the first three digits of 378.
SELECT LASTNAME, PHONENO
FROM DSN8610.EMP
WHERE PHONENO LIKE '378_';
Example: AND operator: This example retrieves the employee number, date hired,
and salary for each employee hired before 1965 and having an annual salary of
less than $16000.
SELECT EMPNO, HIREDATE, SALARY
FROM DSN8610.EMP
WHERE HIREDATE < '1965-01-01' AND SALARY < 16000;
Example: Rows that meet at least one condition: Determine which employees
satisfy at least one of the following conditions:
• The employee's hire date is before 1965 AND salary is less than $20,000.
• The employee's education level is less than 13.
Example: Rows that meet multiple conditions: To select the row of each
employee that satisfies both of the following conditions:
• The employee's hire date is before 1965.
• The employee's annual salary is less than $20,000 OR the employee's
education level is less than 13.
The SELECT statement looks like this:
SELECT EMPNO
FROM DSN8610.EMP
WHERE HIREDATE < '1965-01-01' AND (SALARY < 20000 OR EDLEVEL < 13);
The result table looks like this:
EMPNO
======
000310
200310
Example: Rows that meet one condition: The following SQL statement selects
the employee number of each employee that satisfies one of the following
conditions:
• The employee's hire date is before 1965 and the salary is less than $20,000
• The employee's hire date is after January 1, 1965, and the salary is greater
than $40,000.
The SELECT statement looks like this:
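(The following is a sketch; it codes the two conditions, connected by OR.)
SELECT EMPNO
FROM DSN8610.EMP
WHERE (HIREDATE < '1965-01-01' AND SALARY < 20000)
OR (HIREDATE > '1965-01-01' AND SALARY > 40000);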
Example: NOT with a single condition: In this example, NOT affects only the first
search condition (SALARY >= 50000):
SELECT EMPNO, EDLEVEL, JOB
FROM DSN8610.EMP
WHERE NOT (SALARY >= 50000) AND (EDLEVEL < 18);
This SQL statement retrieves the employee number, education level, and job title of
each employee who satisfies both of the following conditions:
• The employee's annual salary is less than $50,000.
• The employee's education level is less than 18.
Specify the lower boundary of the BETWEEN predicate first, then the upper
boundary. The limits are inclusive. For example, suppose you specify
WHERE column-name BETWEEN 6 AND 8
where the value of the column-name column is an integer. DB2 selects all rows
whose column-name value is 6, 7, or 8. If you specify a range from a larger number
to a smaller number (for example, BETWEEN 8 AND 6), the predicate is always
false.
Example 1:
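(The following is a sketch; it retrieves the employee numbers and salaries of
employees who earn between $40,000 and $50,000, inclusive.)
SELECT EMPNO, SALARY
FROM DSN8610.EMP
WHERE SALARY BETWEEN 40000 AND 50000;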
Example 2:
SELECT EMPNO, SALARY
FROM DSN8610.EMP
WHERE SALARY NOT BETWEEN 40000 AND 50000;
The example retrieves the employee numbers and the salaries for all employees
who either earn less than $40,000 or more than $50,000. You can use the
BETWEEN predicate to define a tolerance factor to use when comparing
floating-point values. Floating-point numbers are approximations of real numbers.
As a result, a simple comparison might not evaluate to true, even if the same value
was stored in both the COL1 and COL2 columns:
...WHERE COL1 = COL2
The following example uses a host variable named FUZZ as a tolerance factor:
...WHERE COL1 BETWEEN (COL2 - :FUZZ) AND (COL2 + :FUZZ)
In the values list after IN, the order of the items is not important and does not affect
the ordering of the result. Enclose the entire list in parentheses, and separate items
by commas; the blanks are optional.
SELECT DEPTNO, MGRNO
FROM DSN8610.DEPT
WHERE DEPTNO IN ('B01', 'C01', 'D01');
The example retrieves the department number and manager number for
departments B01, C01, and D01.
Using the IN predicate gives the same results as a much longer set of conditions
separated by the OR keyword. For example, you could code the WHERE clause in
the SELECT statement above as:
WHERE DEPTNO = 'B01' OR DEPTNO = 'C01' OR DEPTNO = 'D01'
The SQL statement below finds any sex code not properly entered.
SELECT EMPNO, SEX
FROM DSN8610.EMP
WHERE SEX NOT IN ('F', 'M');
The SELECT statement example displays the monthly and weekly salaries of
employees in department A00.
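A statement of the following form (a sketch; the expressions divide the annual
salary by 12 and by 52) produces that display:
SELECT EMPNO, SALARY / 12, SALARY / 52
FROM DSN8610.EMP
WHERE WORKDEPT = 'A00';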
What you can do: For static SQL statements, the simplest fix is to override DEC31
rules by specifying the precompiler option DEC(15). That reduces the probability of
errors for statements embedded in the program.
For a dynamic statement, or for a single static statement, use the scalar function
DECIMAL to specify values of the precision and scale for a result that causes no
errors.
For a dynamic statement, before you execute the statement, set the value of
special register CURRENT PRECISION to DEC15.
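For example, the following statement (a sketch) sets the register before the
dynamic statement executes:
SET CURRENT PRECISION = 'DEC15';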
Suppose that in creating the table YEMP (described in “Creating a new department
table” on page 43), you assign data type DECIMAL(8,0) to the BIRTHDATE column
and then fill it with dates of the form yyyymmdd. You then execute the following
query to determine who is 27 years old or older:
SELECT EMPNO, FIRSTNME, LASTNAME
FROM YEMP
WHERE YEAR(CURRENT DATE - BIRTHDATE) > 26;
Suppose now that, at the time the query executes, one person represented in
YEMP is 27 years, 0 months, and 29 days old but does not show in the results.
What happens is this:
If you have stored date data in columns with types other than DATE or
TIMESTAMP, you can use scalar functions to convert the stored data. The
following examples illustrate a few conversion techniques:
• For data stored as yyyymmdd in a DECIMAL(8,0) column named C2, use:
– The DIGITS function to convert a numeric value to character format
– The SUBSTR function to isolate pieces of the value
– CONCAT to reassemble the pieces in ISO format (with hyphens)
– The DATE function to have DB2 interpret the resulting character string
value ('yyyy-mm-dd') as a date.
For example:
DATE(SUBSTR(DIGITS(C2),1,4) CONCAT
'-' CONCAT
SUBSTR(DIGITS(C2),5,2) CONCAT
'-' CONCAT
SUBSTR(DIGITS(C2),7,2))
The following SQL statement calculates for department D11, the sum of employee
salaries, the minimum, average, standard deviation, and maximum salary, and the
count of employees in the department:
SELECT SUM(SALARY) AS SUMSAL,
MIN(SALARY) AS MINSAL,
AVG(SALARY) AS AVGSAL,
STDDEV(SALARY) AS SDSAL,
MAX(SALARY) AS MAXSAL,
COUNT(*) AS CNTSAL
FROM DSN8610.EMP
WHERE WORKDEPT = 'D11';
The result is a single row that contains the six computed values.
You can use DISTINCT with the SUM, AVG, and COUNT functions. DISTINCT
means that the selected function operates on only the unique values in a column.
Using DISTINCT with the MAX and MIN functions has no effect on the result and is
not advised.
The following SQL statement counts the number of employees described in the
table.
SELECT COUNT(*)
FROM DSN8610.EMP;
This SQL statement calculates the average education level of employees in a set of
departments.
SELECT AVG(EDLEVEL)
FROM DSN8610.EMP
WHERE WORKDEPT LIKE '_0_';
The SQL statement below counts the different jobs in the DSN8610.EMP table.
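A statement like the following (a sketch) uses COUNT with DISTINCT to do that:
SELECT COUNT(DISTINCT JOB)
FROM DSN8610.EMP;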
Table 4 shows the scalar functions that you can use. For complete details on using
these functions see Chapter 4 of DB2 SQL Reference.
Examples
CHAR
The CHAR function returns a string representation of a datetime value
or a decimal number. This can be useful when the precision of the
number is greater than the maximum precision supported by the host
language. For example, if you have a number with a precision greater
than 18, you can retrieve it into a host variable by using the CHAR
function. Specifically, if BIGDECIMAL is a DECIMAL(33) column, you
can define a fixed-length string, BIGSTRING CHAR(33), and execute
the following statement:
SELECT CHAR(MAX(BIGDECIMAL))
INTO :BIGSTRING
FROM T;
CHAR also returns a character string representation of a datetime value
in a specified format. For example:
SELECT CHAR(HIREDATE,USA)
FROM DSN8610.EMP
WHERE EMPNO = '000010';
returns 01/01/1965.
DECIMAL
The DECIMAL function returns a decimal representation of a numeric or
character value. For example, DECIMAL can transform an integer value
so that you can use it as a duration. Assume that the host variable
PERIOD is of type INTEGER. The following example selects all of the
starting dates (PRSTDATE) from the DSN8610.PROJ table and adds to
them a duration specified in a host variable (PERIOD). To use the
integer value in PERIOD as a duration, you must first make sure that
DB2 interprets it as DECIMAL(8,0):
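(The following is a sketch; DECIMAL(:PERIOD,8) casts the host variable so that
DB2 treats it as a date duration of the form yyyymmdd.)
SELECT PRSTDATE + DECIMAL(:PERIOD,8)
FROM DSN8610.PROJ;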
| If one of DB2's built-in scalar functions does not meet your needs, you can define
| and write a user-defined function to perform that operation. You can use a
| user-defined function wherever you use a built-in function. For example, suppose
| you have defined and written a function called REVERSE that reverses the
| characters in a string. The definition looks like this:
| CREATE FUNCTION REVERSE(CHAR)
| RETURNS CHAR
| EXTERNAL NAME 'REVERSE'
| LANGUAGE C;
| You can then use this function in an SQL statement, wherever you would use any
| built-in function that accepts a character argument. For example:
| SELECT REVERSE(:CHARSTR)
| FROM SYSIBM.SYSDUMMY1;
| Although you cannot write user-defined column functions, you can define
| user-defined column functions based on built-in column functions. For example,
| suppose you have defined a table called EUROEMP with a column named
| EUROSAL that has a distinct type of EURO, which is based on DECIMAL(9,2). You
| cannot use the built-in AVG function to find the average value of EUROSAL
| because AVG takes numeric arguments. You can, however, define an AVG function
| that is sourced on the built-in AVG function and accepts arguments of type EURO:
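| (The following definition is a sketch; the SOURCE clause names the built-in
| function on which the new function is based.)
| CREATE FUNCTION AVG(EURO)
| RETURNS EURO
| SOURCE SYSIBM.AVG(DECIMAL);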
| You can define and write a user-defined table function that users can invoke in the
| FROM clause of a SELECT statement. For example, suppose you have defined
| and written a function called BOOKS that returns a table of information about books
| on a given subject. The definition looks like this:
| CREATE FUNCTION BOOKS(SUBJECT)
| RETURNS TABLE (TITLE VARCHAR(25),
| AUTHOR VARCHAR(25),
| PUBLISHER VARCHAR(25),
| ISBNNUM VARCHAR(20),
| PRICE DECIMAL(5,2),
| CHAP1 CLOB(5K))
| LANGUAGE COBOL
| EXTERNAL NAME BOOKS;
| You can then include this function in the FROM clause of a SELECT statement to
| retrieve the book information. For example:
| SELECT B.TITLE, B.AUTHOR, B.PUBLISHER, B.ISBNNUM
| FROM TABLE(BOOKS('Computers')) AS B
| WHERE B.TITLE LIKE '%COBOL%';
| See “Chapter 4-3. Creating and using user-defined functions” on page 249 for
| information on defining and writing user-defined functions and “Chapter 4-4.
| Creating and using distinct types” on page 309 for information on defining distinct
| types.
One use of a CASE expression is to replace the values in a result table with more
meaningful values.
Example: Suppose you want to display the employee number, name, and
education level of all clerks in the employee table. Education levels are stored in
the EDLEVEL column as small integers, but you would like to replace the values in
this column with more descriptive phrases. An SQL statement like this
accomplishes the task:
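(The following is a sketch; the education-level boundaries and the descriptive
phrases are illustrative.)
SELECT EMPNO, FIRSTNME, LASTNAME,
  CASE
    WHEN EDLEVEL < 15 THEN 'SECONDARY SCHOOL'
    WHEN EDLEVEL < 19 THEN 'COLLEGE'
    WHEN EDLEVEL >= 19 THEN 'POST GRADUATE'
    ELSE 'UNKNOWN'
  END AS EDUCATION
FROM DSN8610.EMP
WHERE JOB = 'CLERK';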
The CASE expression replaces each small integer value of EDLEVEL with a
description of the amount of schooling each clerk received. If the value of
EDLEVEL is null, then the CASE expression substitutes the word UNKNOWN.
The statement has a problem, however. If an employee has not earned any salary,
a division-by-zero error occurs. By modifying the SELECT statement with a CASE
expression, you can avoid division by zero:
SELECT EMPNO, WORKDEPT,
(CASE WHEN SALARY = 0 THEN NULL
ELSE COMM/SALARY
END) AS "COMMISSION/SALARY"
FROM DSN8610.EMP;
The CASE expression determines the ratio of commission to salary only if the
salary is not zero. Otherwise, DB2 sets the ratio to null.
You can list the rows in ascending or descending order. Null values appear last in
an ascending sort and first in a descending sort.
DB2 sorts strings in the collating sequence associated with the encoding scheme of
the table. DB2 sorts numbers algebraically and sorts datetime values
chronologically.
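For example, the following statement (a sketch against the sample employee table)
lists employees by hire date:
SELECT LASTNAME, HIREDATE
FROM DSN8610.EMP
ORDER BY HIREDATE ASC;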
The example retrieves data showing the seniority of employees. ASC is the default
sorting order.
When several rows have the same first ordering column value, those rows are in
order of the second column you identify in the ORDER BY clause, and then on the
third ordering column, and so on. For example, there is a difference between the
results of the following two SELECT statements. The first one orders selected rows
by job and next by education level. The second SELECT statement orders selected
rows by education level and next by job.
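The two statements might look like this (a sketch; both retrieve the same rows):
SELECT WORKDEPT, JOB, EDLEVEL, LASTNAME
FROM DSN8610.EMP
WHERE WORKDEPT = 'A00'
ORDER BY JOB, EDLEVEL;

SELECT WORKDEPT, JOB, EDLEVEL, LASTNAME
FROM DSN8610.EMP
WHERE WORKDEPT = 'A00'
ORDER BY EDLEVEL, JOB;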
You can also use a field procedure to change the normal collating sequence. See
DB2 SQL Reference for more detailed information about sorting (string
comparisons) and DB2 Administration Guide for more detailed information about
field procedures.
Under the following conditions, the ORDER BY clause can reference columns that
are not in the SELECT clause:
• There is no UNION or UNION ALL in the query
• There is no GROUP BY clause
• There is no column function in the SELECT list
• There is no DISTINCT in the SELECT list
If any of the previous conditions are not true, the ORDER BY clause can reference
only columns that are in the SELECT clause.
Example 3: In this SQL statement, the rows are ordered by EDLEVEL, JOB, and
SALARY, but SALARY is not in the SELECT list:
SELECT JOB, EDLEVEL, LASTNAME
FROM DSN8610.EMP
WHERE WORKDEPT = 'E21'
ORDER BY EDLEVEL, JOB, SALARY;
The result table looks like this:
JOB EDLEVEL LASTNAME
======== ======= ===============
FIELDREP 14 WONG
FIELDREP 14 LEE
MANAGER 14 SPENSER
FIELDREP 16 MEHTA
FIELDREP 16 GOUNOT
FIELDREP 16 ALONZO
Except for the columns named in the GROUP BY clause, the SELECT statement
must specify any other selected columns as an operand of one of the column
functions.
The following SQL statement lists, for each department, the lowest and highest
education level within that department.
SELECT WORKDEPT, MIN(EDLEVEL), MAX(EDLEVEL)
FROM DSN8610.EMP
GROUP BY WORKDEPT;
If a column you specify in the GROUP BY clause contains null values, DB2
considers those null values to be equal. Thus, all nulls form a single group.
When it is used, the GROUP BY clause follows the FROM clause and any WHERE
clause, and precedes the ORDER BY clause.
You can also group the rows by the values of more than one column. For example,
the following statement finds the average salary for men and women in
departments A00 and C01:
SELECT WORKDEPT, SEX, AVG(SALARY) AS AVG_SALARY
FROM DSN8610.EMP
WHERE WORKDEPT IN ('A00', 'C01')
GROUP BY WORKDEPT, SEX;
DB2 groups the rows first by department number and next (within each department)
by sex before DB2 derives the average SALARY value for each group.
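For example, a statement along these lines (a sketch) lists the average salary for
each department that has more than one member:
SELECT WORKDEPT, AVG(SALARY) AS AVG_SALARY
FROM DSN8610.EMP
GROUP BY WORKDEPT
HAVING COUNT(*) > 1
ORDER BY WORKDEPT;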
Compare the preceding example with the second example shown in “Summarizing
group values: GROUP BY” on page 35. The HAVING COUNT(*) > 1 clause
ensures that only departments with more than one member display. (In this case,
departments B01 and E01 do not display.)
The HAVING clause tests a property of the group. For example, you could use it to
retrieve the average salary and minimum education level of women in each
department in which all female employees have an education level greater than or
equal to 16. Assuming you only want results from departments A00 and D11, the
following SQL statement tests the group property, MIN(EDLEVEL):
SELECT WORKDEPT, AVG(SALARY) AS AVG_SALARY,
MIN(EDLEVEL) AS MIN_EDLEVEL
FROM DSN8610.EMP
WHERE SEX = 'F' AND WORKDEPT IN ('A00', 'D11')
GROUP BY WORKDEPT
HAVING MIN(EDLEVEL) >= 16;
The SQL statement above gives this result:
WORKDEPT AVG_SALARY MIN_EDLEVEL
======== =============== ============
A00            49625.00          18
D11            25817.50          17
When you specify both GROUP BY and HAVING, the HAVING clause must follow
the GROUP BY clause. A function in a HAVING clause can include DISTINCT if
you have not used DISTINCT anywhere else in the same SELECT statement. You
can also connect multiple predicates in a HAVING clause with AND and OR, and
you can use NOT for any predicate of a search condition. For more information,
see “Selecting rows that meet more than one condition” on page 15.
When you use the UNION statement, the SQLNAME field of the SQLDA contains
the column names of the first operand.
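For example, the following statement (a sketch) combines employee numbers from
two sample tables:
SELECT EMPNO
FROM DSN8610.EMP
WHERE WORKDEPT = 'D11'
UNION
SELECT EMPNO
FROM DSN8610.EMPPROJACT
WHERE PROJNO = 'MA2112' OR PROJNO = 'MA2113'
ORDER BY 1;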
If you have an ORDER BY clause, it must appear after the last SELECT statement
that is part of the union. In this example, the first column of the final result table
determines the final order of the rows.
Special registers
A special register is a storage area that DB2 defines for a process. You can use
the SET statement to change the current value of a register. Where the register's
name appears in other SQL statements, the current value of the register replaces
the name when the statement executes.
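For example, the following statement (a sketch; the value must be an authorization
ID of the process) changes the CURRENT SQLID special register:
SET CURRENT SQLID = 'DSN8610';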
You can specify certain special registers in SQL statements. See Chapter 3 of DB2
SQL Reference for additional information about special registers.
If you want to see the value in a special register, you can use the SET
host-variable statement to assign the value of a special register to a variable in
your program. For details, see the SET host-variable statement in Chapter 6 of DB2
SQL Reference.
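For example, this embedded statement (a sketch; :HVSQLID is an assumed host
variable) retrieves the current value of the CURRENT SQLID special register:
EXEC SQL SET :HVSQLID = CURRENT SQLID;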
The contents of the DB2 system catalog tables can be a useful reference tool when
you begin to develop an SQL statement or an application program.
If your DB2 subsystem uses an exit routine for access control authorization,
you cannot rely on catalog queries to tell you what tables you can access. When
such an exit routine is installed, RACF as well as DB2 control table access.
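For example, a query like the following (a sketch) lists the tables and views that a
particular authorization ID owns:
SELECT NAME, TYPE
FROM SYSIBM.SYSTABLES
WHERE CREATOR = 'SMITH';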
Identifying defaults
If you want to constrain the inputs or identify the defaults, you can describe the
columns using:
• NOT NULL, when the column cannot contain null values.
• UNIQUE, when the value for each row must be unique, and the column cannot
contain null values.
• DEFAULT, when the column has one of the following DB2-assigned defaults:
– For numeric fields, zero is the default value.
– For fixed-length strings, blank is the default value.
| – For variable-length strings, including LOB strings, the empty string (string of
| zero-length) is the default value.
– For datetime fields, the current value of the associated special register is
the default value.
• DEFAULT value, when you want to identify one of the following as the default
value:
– A constant
– USER, which uses the run-time value of the USER special register
– CURRENT SQLID, which uses the SQL authorization ID of the process
– NULL
| – The name of a cast function, to cast a default value to the distinct type of a
| column
You must separate each column description from the next with a comma, and
enclose the entire list of column descriptions in parentheses.
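For example, a CREATE TABLE statement along these lines (a sketch; the table
and its columns are illustrative) uses several of these clauses:
CREATE TABLE YCONTACT
  (CONTACTNO CHAR(6)     NOT NULL UNIQUE,
   LASTNAME  VARCHAR(15) NOT NULL,
   WORKDEPT  CHAR(3)     DEFAULT,
   ENTRYDATE DATE        DEFAULT,
   NOTES     VARCHAR(40) DEFAULT 'NONE');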
Each example shown in this chapter assumes you logged on using your own
authorization ID. The authorization ID qualifies the name of each object you create.
For example, if your authorization ID is SMITH, and you create table YDEPT, the
name of the table is SMITH.YDEPT. If you want to access table DSN8610.DEPT,
you must refer to it by its complete name. If you want to access your own table
YDEPT, you need only to refer to it as “YDEPT”.
You can use an INSERT statement with a SELECT clause to copy rows from one
table to another. The following statement copies all of the rows from
DSN8610.DEPT to your own YDEPT work table.
INSERT INTO YDEPT
SELECT *
FROM DSN8610.DEPT;
For information on the INSERT statement, see “Modifying DB2 data” on page 52.
This statement also creates a referential constraint between the foreign key in
YEMP (WORKDEPT) and the primary key in YDEPT (DEPTNO). It also restricts all
phone numbers to unique numbers.
If you want to change a table definition after you create it, use the statement
ALTER TABLE.
If you want to change a table name after you create it, use the statement RENAME
TABLE. For details on the ALTER TABLE and RENAME TABLE statements, see
Chapter 6 of DB2 SQL Reference. You cannot drop a column from a table or
change a column definition. However, you can add and drop constraints on
columns in a table.
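For example, these statements (a sketch; the constraint name and the new table
name are illustrative) add a named check constraint to the YEMP table and then
rename the table:
ALTER TABLE YEMP
  ADD CONSTRAINT MINSAL CHECK (SALARY >= 10000);

RENAME TABLE YEMP TO YEMPLOYEE;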
You can define primary keys, unique keys, or foreign keys when you use the
CREATE TABLE statement to create a new table. Use the keyword REFERENCES
and the optional clause FOREIGN KEY (for named referential constraints) to define
a foreign key involving one or more columns. Defining a foreign key establishes a
referential constraint between the columns of the foreign key of a table and the
columns of the parent key (primary or unique) of that table or another table. The
parent table of a referential constraint must have a primary key and a primary index
or a unique key and a unique index. Nonnull values in a foreign key column must
be equal to values in the associated column of the parent key of the parent table.
For an example of a CREATE TABLE statement that defines both a parent key and
a foreign key on single columns, see “Creating a new employee table” on page 43.
You can also use separate PRIMARY KEY, UNIQUE, or FOREIGN KEY clauses in
the table definition to define parent and foreign keys that consist of multiple
columns. (The columns for the parent keys cannot allow nulls.)
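For example, a table definition might include clauses like these (a sketch; the table
YPARTSUPP is hypothetical):
CREATE TABLE YPARTSUPP
  (PARTNO   CHAR(6) NOT NULL,
   SUPPNO   CHAR(6) NOT NULL,
   WORKDEPT CHAR(3),
   PRIMARY KEY (PARTNO, SUPPNO),
   FOREIGN KEY (WORKDEPT) REFERENCES YDEPT);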
You cannot define parent keys or foreign keys on created temporary tables or
declared temporary tables. See “Working with temporary tables” on page 46 for
more information on temporary tables.
If you are using the schema processor, DB2 creates a unique index for you when
you define a primary or unique key in a CREATE TABLE statement. Otherwise,
you must create a unique index before you can use a table that contains a primary
or unique key. The unique index enforces the uniqueness of the parent key. For
information on the schema processor, see Section 2 of DB2 Administration Guide.
Specifying a foreign key defines a referential constraint with a delete rule. For
information on delete rules, see “Deleting from tables with referential and check
constraints” on page 59. For examples of creating tables with referential
constraints, see Appendix A, “DB2 sample tables” on page 829.
When you define the referential constraint, DB2 enforces the constraint on every
SQL INSERT, DELETE, and UPDATE, and use of the LOAD utility. After you
create a table, you can control the referential constraints on the table by adding or
dropping the constraints.
When each row of a table conforms to the check constraints defined on that table,
the table has check integrity. If DB2 cannot guarantee check integrity, then it places
the table space or partition that contains the table in a check pending status, which
prevents some utilities and some SQL statements from using the table.
You use the clause CHECK and the optional clause CONSTRAINT (for named
check constraints) to define a check constraint on one or more columns of a table.
A check constraint can have a single predicate, or multiple predicates joined by
AND or OR. The first operand of each predicate must be a column name, the
second operand can be a column name or a constant, and the two operands must
have compatible data types.
The CREATE TABLE or ALTER TABLE statements with the CHECK clause can
specify a check constraint on the base table. Table check constraints help you
control the integrity of your data by defining the values that the columns in your
table can contain.
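For example, a CREATE TABLE statement might include check constraints like
these (a sketch; only the relevant columns of a hypothetical table are shown):
CREATE TABLE YEMPSAL
  (EMPNO  CHAR(6)      NOT NULL,
   SALARY DECIMAL(9,2) CHECK (SALARY >= 15000),
   BONUS  DECIMAL(9,2),
   CONSTRAINT BONUSCHK CHECK (BONUS <= SALARY));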
BONUSCHK is a named check constraint, which allows you to drop the constraint
later, if you wish.
| To create a trigger on a table, use the CREATE TRIGGER statement. For example,
| to notify the payroll department that an employee's salary has dropped below
| $15000, you might create a trigger like this:
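| (The following definition is a sketch; the table PAYROLL_ALERT and its columns
| are illustrative, not part of the sample application.)
| CREATE TRIGGER LOWPAY
|   AFTER UPDATE OF SALARY ON YEMP
|   REFERENCING NEW AS N
|   FOR EACH ROW MODE DB2SQL
|   WHEN (N.SALARY < 15000)
|   BEGIN ATOMIC
|     INSERT INTO PAYROLL_ALERT
|       VALUES (N.EMPNO, N.SALARY, CURRENT TIMESTAMP);
|   END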
| If you decide later that the minimum salary is $20000, rather than $15000, you can
| drop this trigger with this statement:
| DROP TRIGGER LOWPAY;
| Then you can create a new trigger that specifies a minimum salary of $20000.
| For more information on triggers, see “Chapter 3-5. Using triggers for active data”
| on page 217 and Section 2 (Volume 1) of DB2 Administration Guide.
# Temporary tables can also return result sets from stored procedures. For more
# information, see “Writing a stored procedure to return result sets to a DRDA client”
# on page 556. The following sections provide more details on created temporary
# tables and declared temporary tables.
# Example:
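# (The following statement is a sketch; only DESCRIPTION and CURDATE are
# named in the surrounding text, and the other columns are illustrative.)
# CREATE GLOBAL TEMPORARY TABLE TEMPPROD
#   (SERIAL      CHAR(8)     NOT NULL,
#    DESCRIPTION VARCHAR(60) NOT NULL,
#    MFGCOST     DECIMAL(8,2),
#    MFGDEPT     CHAR(3),
#    MARKUP      SMALLINT,
#    SALESDEPT   CHAR(3),
#    CURDATE     DATE        NOT NULL);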
# You can also create a definition by copying the definition of a base table:
# CREATE GLOBAL TEMPORARY TABLE TEMPPROD LIKE PROD;
# The SQL statements in the previous examples create identical definitions, even
# though table PROD contains two columns, DESCRIPTION and CURDATE, that are
# defined as NOT NULL WITH DEFAULT. Because created temporary tables do not
# support WITH DEFAULT, DB2 changes the definitions of DESCRIPTION and
# CURDATE to NOT NULL when you use the second method to define TEMPPROD.
# After you execute one of the two CREATE statements, the definition of
# TEMPPROD exists, but no instances of the table exist. To drop the definition of
# TEMPPROD, you must execute this statement:
# DROP TABLE TEMPPROD;
# An instance of a created temporary table exists at the current server until one of
# the following actions occurs:
# • The remote server connection under which the instance was created
# terminates.
# • The unit of work under which the instance was created completes.
# When you execute a ROLLBACK statement, DB2 deletes the instance of the
# created temporary table. When you execute a COMMIT statement, DB2 deletes
# the instance of the created temporary table unless a cursor for accessing the
# created temporary table is defined WITH HOLD and is open.
# • The application process ends.
# For example, suppose that you create a definition of TEMPPROD and then run an
# application that contains these statements:
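# (The following is a sketch; only the statements that affect TEMPPROD are
# shown.)
# EXEC SQL DECLARE C1 CURSOR FOR SELECT * FROM TEMPPROD;
#
# EXEC SQL INSERT INTO TEMPPROD SELECT * FROM PROD;
#
# EXEC SQL OPEN C1;
#
# EXEC SQL COMMIT;
#
# EXEC SQL CLOSE C1;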
# When you execute the INSERT statement, DB2 creates an instance of TEMPPROD
# and populates that instance with rows from table PROD. When the COMMIT
# statement is executed, DB2 deletes all rows from TEMPPROD. If, however, you
# change the declaration of C1 to:
# EXEC SQL DECLARE C1 CURSOR WITH HOLD
# FOR SELECT * FROM TEMPPROD;
# DB2 does not delete the contents of TEMPPROD until the application ends
# because C1, a cursor defined WITH HOLD, is open when the COMMIT statement
# is executed. In either case, DB2 drops the instance of TEMPPROD when the
# application ends.
# Before you can define declared temporary tables, you must create a special
# database and table spaces for them. You do that by executing the CREATE
# DATABASE statement with the AS TEMP clause, and then creating segmented
# table spaces in that database. A DB2 subsystem can have only one database for
# declared temporary tables, but that database can contain more than one table
# space.
# Example: These statements create a database and table space for declared
# temporary tables:
# CREATE DATABASE DTTDB AS TEMP;
# CREATE TABLESPACE DTTTS IN DTTDB
# SEGSIZE 4;
# You can define a declared temporary table in any of the following ways:
# • Specify all the columns in the table.
# Unlike columns of created temporary tables, columns of declared temporary
# tables can include the WITH DEFAULT clause.
# • Use a LIKE clause to copy the definition of a base table, created temporary
# table, or view.
# If the base table or created temporary table that you copy has identity columns,
# you can specify that the corresponding columns in the declared temporary table
# are also identity columns. Do that by specifying the INCLUDING IDENTITY
# COLUMN ATTRIBUTES clause when you define the declared temporary table.
# • Use a subselect to choose specific columns from a base table, created
# temporary table, or view.
# DB2 creates an empty instance of a declared temporary table when it executes the
# DECLARE GLOBAL TEMPORARY TABLE statement. You can populate the
# declared temporary table using INSERT statements, modify the table using
# searched or positioned UPDATE or DELETE statements, and query the table using
# SELECT statements. You can also create indexes on the declared temporary table.
# For example, suppose that you execute these statements in an application program:
# DECLARE GLOBAL TEMPORARY TABLE TEMPPROD
# AS (SELECT * FROM BASEPROD)
# DEFINITION ONLY
# INCLUDING IDENTITY COLUMN ATTRIBUTES
# INCLUDING COLUMN DEFAULTS
# ON COMMIT PRESERVE ROWS;
# EXEC SQL INSERT INTO SESSION.TEMPPROD SELECT * FROM BASEPROD;
#
# ...
#
# EXEC SQL COMMIT;
#
# ...
#
Use the DROP TABLE statement with care: Dropping a table is NOT equivalent
to deleting all its rows. When you drop a table, you lose more than both its data
and its definition. You lose all synonyms, views, indexes, and referential and check
constraints associated with that table. You also lose all authorities granted on the
table.
For more information on the DROP statement, see Chapter 6 of DB2 SQL
Reference.
Use the CREATE VIEW statement to define a view and give the view a name, just
as you do for a table.
CREATE VIEW VDEPTM AS
SELECT DEPTNO, MGRNO, LASTNAME, ADMRDEPT
FROM DSN8610.DEPT, DSN8610.EMP
WHERE DSN8610.EMP.EMPNO = DSN8610.DEPT.MGRNO;
This view shows each department manager's name with the department data in the
DSN8610.DEPT table.
When a program accesses the data defined by a view, DB2 uses the view definition
to return a set of rows the program can access with SQL statements. Now that the
view VDEPTM exists, you can retrieve data using the view. To see the departments
administered by department D01 and the managers of those departments, execute
the following statement:
SELECT DEPTNO, LASTNAME
FROM VDEPTM
WHERE ADMRDEPT = 'D01';
When you create a view, you can reference the USER and CURRENT SQLID
special registers in the CREATE VIEW statement. When referencing the view, DB2
uses the value of the USER or CURRENT SQLID that belongs to the user of the
SQL statement (SELECT, UPDATE, INSERT, or DELETE) rather than the creator
of the view. In other words, a reference to a special register in a view definition
refers to its run-time value.
You can use views to limit access to certain kinds of data, such as salary
information. You can also use views to do the following:
• Make a subset of a table's data available to an application. For example, a view
based on the employee table might contain rows for a particular department
only.
• Combine data from two or more tables and make the combined data available
to an application. By using a SELECT statement that matches values in one
table with those in another table, you can create a view that presents data from
both tables. However, you can only select data from this type of view. You
cannot update, delete, or insert data using a view that joins two or more tables.
• Present computed data, and make the resulting data available to an
application. You can compute such data using any function or operation that
you can use in a SELECT statement.
| In either case, for every row you insert, you must provide a value for any column
| that does not have a default value. For a column that meets one of these
| conditions, you can specify DEFAULT to tell DB2 to insert the default value for that
| column:
| • Is nullable.
| • Is defined with a default value.
| • Has data type ROWID. ROWID columns always have default values.
# • Is an identity column. Identity columns always have default values.
# The values that you can insert into a ROWID column or identity column depend on
# whether the column is defined with GENERATED ALWAYS or GENERATED BY
# DEFAULT. See “Inserting data into a ROWID column” on page 55 and “Inserting
# data into an identity column” on page 56 for more information.
For static insert statements, it is a good idea to name all columns for which you are
providing values because:
• Your insert statement is independent of the table format. (For example, you do
not have to change the statement when a column is added to the table.)
• You can verify that you are giving the values in order.
• Your source statements are more self-descriptive.
If you do not name the columns in a static insert statement, and a column is added
to the table being inserted into, an error can occur if the insert statement is
rebound. An error will occur after any rebind of the insert statement unless you
change the insert statement to include a value for the new column. This is true,
even if the new column has a default value.
When you list the column names, you must specify their corresponding values in
the same order as in the list of column names.
For example,
INSERT INTO YDEPT (DEPTNO, DEPTNAME, MGRNO, ADMRDEPT, LOCATION)
VALUES ('E31', 'DOCUMENTATION', '000010', 'E01', ' ');
After inserting a new department row into your YDEPT table, you can use a
SELECT statement to see what you have loaded into the table. This SQL
statement:
SELECT *
FROM YDEPT
WHERE DEPTNO LIKE 'E%'
ORDER BY DEPTNO;
shows you all the new department rows that you have inserted:
DEPTNO DEPTNAME MGRNO ADMRDEPT LOCATION
====== ==================================== ====== ======== ===========
E01    SUPPORT SERVICES                     000050 A00      -----------
E11    OPERATIONS                           000090 E01      -----------
E21    SOFTWARE SUPPORT                     000100 E01      -----------
E31    DOCUMENTATION                        000010 E01      -----------
For example, the sample application project table (PROJ) has foreign keys on the
department number (DEPTNO), referencing the department table, and the
employee number (RESPEMP), referencing the employee table. Every row inserted
into the project table must have a value of RESPEMP that is either equal to some
value of EMPNO in the employee table or is null. The row must also have a value
of DEPTNO that is equal to some value of DEPTNO in the department table. (You
cannot use the null value, because DEPTNO in the project table must be NOT
NULL.)
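The following two statements (a sketch; the column names and lengths in the
CREATE TABLE statement are illustrative) create a telephone-list table and fill it
from the sample employee table:
CREATE TABLE TELE
  (NAME2 VARCHAR(15) NOT NULL,
   NAME1 VARCHAR(12) NOT NULL,
   PHONE CHAR(4));

INSERT INTO TELE
  SELECT LASTNAME, FIRSTNME, PHONENO
  FROM DSN8610.EMP
  WHERE WORKDEPT = 'D21';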
The two previous statements create and fill a table, TELE, that looks like this:
NAME2 NAME1 PHONE
=============== ============ =====
PULASKI         EVA          7831
JEFFERSON       JAMES        2094
MARINO          SALVATORE    3780
SMITH           DANIEL       0961
JOHNSON         SYBIL        8953
PEREZ           MARIA        9001
MONTEVERDE      ROBERT       3780
The CREATE TABLE statement example creates a table which, at first, is empty.
The table has columns for last names, first names, and phone numbers, but does
not have any rows.
The INSERT statement fills the newly created table with data selected from the
DSN8610.EMP table: the names and phone numbers of employees in Department
D21.
# Before you insert data into an identity column, you must know whether the column
# is defined as GENERATED ALWAYS or GENERATED BY DEFAULT. If you try to
# insert a value into an identity column that is defined as GENERATED ALWAYS, the
# insert operation fails.
# For example, suppose that tables T1 and T2 have two columns: a character column
# and an integer column that is defined as an identity column. For the following
# statement to execute successfully, IDENTCOL2 must be defined as GENERATED
# BY DEFAULT.
# INSERT INTO T2 (CHARCOL2,IDENTCOL2)
# SELECT CHARCOL1, IDENTCOL1 FROM T1;
Examples: This statement inserts information about a new employee into the
YEMP table. Because YEMP has a foreign key WORKDEPT referencing the
primary key DEPTNO in YDEPT, the value inserted for WORKDEPT (E31) must be
a value of DEPTNO in YDEPT or null.
INSERT INTO YEMP
VALUES ('000400', 'RUTHERFORD', 'B', 'HAYES', 'E31',
'5678', '1983-01-01', 'MANAGER', 16, 'M', '1943-07-10', 24000,
500, 1900);
The following statement also inserts a row into the YEMP table. However, the
statement does not specify a value for every column. Because the unspecified
columns allow nulls, DB2 inserts null values into the columns not specified.
Because YEMP has a foreign key WORKDEPT referencing the primary key
DEPTNO in YDEPT, the value inserted for WORKDEPT (D11) must be a value of
DEPTNO in YDEPT or null.
INSERT INTO YEMP
(EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, JOB)
VALUES ('000410', 'MILLARD', 'K', 'FILLMORE', 'D11', '4888', 'MANAGER');
You cannot update rows in a created temporary table, but you can update rows in a
declared temporary table.
The SET clause names the columns that you want to update and provides the
values you want to assign to those columns. You can replace a column value with
any of the following items:
• A null value
The column to which you assign the null value must not be defined as NOT
NULL.
# • An expression
# An expression can be any of the following items:
# – A column
# – A constant
If you omit the WHERE clause, DB2 updates every row in the table or view with the
values you supply.
If DB2 finds an error while executing your UPDATE statement (for instance, an
update value that is too large for the column), it stops updating and returns error
codes in the SQLCODE and SQLSTATE host variables or related fields in the
SQLCA. No rows in the table change (rows already changed, if any, are restored to
their previous values). If the UPDATE statement is successful, SQLERRD(3) is set
to the number of rows updated.
Examples: The following statement supplies a missing middle initial and changes
the job for employee 000200.
UPDATE YEMP
SET MIDINIT = 'H', JOB = 'FIELDREP'
WHERE EMPNO = '000200';
The following statement gives everyone in department D11 a $400 raise. The
statement can update several rows.
UPDATE YEMP
SET SALARY = SALARY + 400
WHERE WORKDEPT = 'D11';
# The following statement sets the salary and bonus for employee 000190 to the
# average salary and minimum bonus for all employees.
# UPDATE YEMP
# SET (SALARY, BONUS) =
# (SELECT AVG(SALARY), MIN(BONUS)
# FROM EMP)
# WHERE EMPNO = '000190';
If you are updating a dependent table, any nonnull foreign key values that you
enter must match the parent key for each relationship in which the table is a
dependent. For example, department numbers in the employee table depend on the
department numbers in the department table; you can assign an employee to no
department, but you cannot assign an employee to a department that does not
exist.
If an update to a table with a referential constraint fails, DB2 rolls back all changes
made during the update.
For table YEMP defined in “Creating a new employee table” on page 43, this
UPDATE statement satisfies all constraints and succeeds:
UPDATE YEMP
SET JOB = 'TECHNICAL'
WHERE FIRSTNME = 'MARY' AND LASTNAME= 'SMITH';
You can use DELETE to remove all rows from a created temporary table or
declared temporary table. However, you can use DELETE with a WHERE clause to
remove selected rows only from a declared temporary table.
This DELETE statement deletes each row in the YEMP table that has an employee
number 000060.
DELETE FROM YEMP
WHERE EMPNO = '000060';
When this statement executes, DB2 deletes any row from the YEMP table that
meets the search condition.
If DB2 finds an error while executing your DELETE statement, it stops deleting data
and returns error codes in the SQLCODE and SQLSTATE host variables or related
fields in the SQLCA. The data in the table does not change.
SQLERRD(3) in the SQLCA contains the number of deleted rows. This number
includes only the number of rows deleted in the table specified in the DELETE
statement. It does not include those rows deleted according to the CASCADE rule.
Be sure that check constraints do not affect the DELETE indirectly. For example,
suppose you delete a row in a parent table, and the delete rule sets a column in a
dependent table to a value that violates a check constraint on that column; in that
case, the delete operation fails.
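This DELETE statement:
DELETE FROM YDEPT;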
deletes every row in the YDEPT table. If the statement executes, the table
continues to exist (that is, you can insert rows into it) but it is empty. All existing
views and authorizations on the table remain intact when using DELETE. By
comparison, using DROP TABLE drops all views and authorizations, which can
invalidate plans and packages. For information on the DROP statement, see
“Dropping tables: DROP TABLE” on page 50.
DB2 supports these types of joins: inner join, left outer join, right outer join, and full
outer join.
You can specify joins in the FROM clause of a query: Figure 2 below shows the
ways to combine tables using outer join functions.
The result table contains data joined from all of the tables, for rows that satisfy the
search conditions.
The result columns of a join have names if the outermost SELECT list refers to
base columns. But, if you use a function (such as COALESCE or VALUE) to build a
column of the result, then that column does not have a name unless you use the
AS clause in the SELECT list.
To distinguish the different types of joins, the examples in this section use the
following two tables:
Inner join
| To request an inner join, execute a SELECT statement in which you specify the
| tables that you want to join in the FROM clause, and specify a WHERE clause or
| an ON clause to indicate the join condition. The join condition can be any simple or
| compound search condition that does not contain a subquery reference. See
| Chapter 5 of DB2 SQL Reference for the complete syntax of a join condition.
In the simplest type of inner join, the join condition is column1=column2.
For example, you can join the PARTS and PRODUCTS tables on the PROD#
column to get a table of parts with their suppliers and the products that use the
parts.
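For example, the following statement (a sketch using the INNER JOIN keywords
and an ON clause) performs that join:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
FROM PARTS INNER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#;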
| You can specify more complicated join conditions to obtain different sets of results.
| For example, to eliminate the suppliers that begin with the letter A from the table of
| parts, suppliers, product numbers and products, write a query like this:
| SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
| FROM PARTS INNER JOIN PRODUCTS
| ON PARTS.PROD# = PRODUCTS.PROD#
| AND SUPPLIER NOT LIKE 'A%';
| The result of the query is all rows that do not have a supplier that begins with A:
| PART SUPPLIER PROD# PRODUCT
| ======= ============ ===== ==========
| MAGNETS BATEMAN      10    GENERATOR
| PLASTIC PLASTIK_CORP 30    RELAY
Example of joining a table to itself using an inner join: The following example
joins table DSN8610.PROJ to itself and returns the number and name of each
“major” project followed by the number and name of the project that is part of it. In
this example, A indicates the first instance of table DSN8610.PROJ and B indicates
a second instance of this table. The join condition is such that the value in column
PROJNO in table DSN8610.PROJ A must be equal to a value in column MAJPROJ
in table DSN8610.PROJ B.
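The statement might look like this (a sketch):
SELECT A.PROJNO, A.PROJNAME, B.PROJNO, B.PROJNAME
FROM DSN8610.PROJ A, DSN8610.PROJ B
WHERE A.PROJNO = B.MAJPROJ;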
In this example, the comma in the FROM clause implicitly specifies an inner join,
and acts the same as if the INNER JOIN keywords had been used. When you use
the comma for an inner join, you must specify the join condition on the WHERE
clause. When you use the INNER JOIN keywords, you must specify the join
condition on the ON clause.
| The join condition for a full outer join must be a simple search condition that
| compares two columns or cast functions that contain columns.
For example, the following query performs a full outer join of the PARTS and
PRODUCTS tables:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
FROM PARTS FULL OUTER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#;
The result table from the query is:
PART SUPPLIER PROD# PRODUCT
======= ============ ===== ==========
WIRE    ACWF         10    GENERATOR
MAGNETS BATEMAN      10    GENERATOR
PLASTIC PLASTIK_CORP 30    RELAY
BLADES  ACE_STEEL    205   SAW
OIL     WESTERN_CHEM 160   -----------
------- ------------ ---   SCREWDRIVER
You probably noticed that the result of the example for “Full outer join” is null for
SCREWDRIVER, even though the PRODUCTS table contains a product number for
SCREWDRIVER. If you select PRODUCTS.PROD# instead, PROD# is null for OIL.
If you select both PRODUCTS.PROD# and PARTS.PROD#, the result contains two
columns, and both columns contain some null values. You can merge the data
from both columns into a single column, eliminating the null values, by using the
COALESCE function.
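A query along these lines (a sketch) does that:
SELECT PART, SUPPLIER,
       COALESCE(PARTS.PROD#, PRODUCTS.PROD#) AS PRODNUM, PRODUCT
FROM PARTS FULL OUTER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#;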
The AS clause (AS PRODNUM) provides a name for the result of the COALESCE
function.
| As in an inner join, the join condition can be any simple or compound search
| condition that does not contain a subquery reference.
For example, to include rows from the PARTS table that have no matching values
in the PRODUCTS table and include only prices of greater than 10.00, execute this
query:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
FROM PARTS LEFT OUTER JOIN PRODUCTS
ON PARTS.PROD#=PRODUCTS.PROD#
AND PRODUCTS.PRICE > 10.00;
The result of the query is:
PART SUPPLIER PROD# PRODUCT PRICE
======= ============ ===== ========== =====
WIRE    ACWF         10    GENERATOR  45.75
MAGNETS BATEMAN      10    GENERATOR  45.75
PLASTIC PLASTIK_CORP 30    ---------- -------
BLADES  ACE_STEEL    205   SAW        18.90
OIL     WESTERN_CHEM 160   ---------- -------
Because the PARTS table can have nonmatching rows, and the PRICE column is
not in the PARTS table, rows in which PRICE is less than 10.00 are included in the
result of the join, but PRICE is set to null.
| As in an inner join, the join condition can be any simple or compound search
| condition that does not contain a subquery reference.
For example, to include rows from the PRODUCTS table that have no matching
values in the PARTS table and include only prices of greater than 10.00, execute
this query:
SELECT PART, SUPPLIER, PRODUCTS.PROD#, PRODUCT
FROM PARTS RIGHT OUTER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#
AND PRODUCTS.PRICE > 10.00;
gives this result:
PART SUPPLIER PROD# PRODUCT PRICE
======= ============ ===== ========== =====
WIRE    ACWF         10    GENERATOR  45.75
MAGNETS BATEMAN      10    GENERATOR  45.75
BLADES  ACE_STEEL    205   SAW        18.90
A join operation is part of a FROM clause; therefore, for the purpose of predicting
which rows will be returned from a SELECT statement containing a join operation,
assume that the join operation is performed first.
For example, suppose that you want to obtain a list of part names, supplier names,
product numbers, and product names from the PARTS and PRODUCTS tables.
These categories correspond to the PART, SUPPLIER, PROD#, and PRODUCT
columns. You want to include rows from either table where the PROD# value does
not match a PROD# value in the other table, which means that you need to do a
full outer join. You also want to exclude rows for product number 10. If you code a
SELECT statement like this:
SELECT PART, SUPPLIER,
       VALUE(PARTS.PROD#,PRODUCTS.PROD#) AS PRODNUM, PRODUCT
  FROM PARTS FULL OUTER JOIN PRODUCTS
    ON PARTS.PROD# = PRODUCTS.PROD#
  WHERE PARTS.PROD# <> '10' AND PRODUCTS.PROD# <> '10';
you get this table:
PART     SUPPLIER      PRODNUM  PRODUCT
=======  ============  =======  ===========
PLASTIC  PLASTIK_CORP     30    RELAY
BLADES   ACE_STEEL       205    SAW
which is not the desired result. DB2 performs the join operation first, then applies
the WHERE clause. The WHERE clause excludes rows where PROD# has a null
value, so the result is the same as if you had specified an inner join.
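One way to rewrite the query so that the exclusion is applied before the join is to
move it into nested table expressions. A sketch of that rewrite (the correlation
names simply reuse the table names):
SELECT PART, SUPPLIER,
       VALUE(PARTS.PROD#, PRODUCTS.PROD#) AS PRODNUM, PRODUCT
  FROM (SELECT PART, SUPPLIER, PROD#
          FROM PARTS
          WHERE PROD# <> '10') AS PARTS
       FULL OUTER JOIN
       (SELECT PROD#, PRODUCT
          FROM PRODUCTS
          WHERE PROD# <> '10') AS PRODUCTS
    ON PARTS.PROD# = PRODUCTS.PROD#;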
With a rewrite of that kind, DB2 applies the exclusion to each table separately, so
that no rows are eliminated because PROD# is null. DB2 then performs the full
outer join operation and returns the desired result.
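The next paragraph discusses a query that uses a nested table expression with the
correlation name CHEAP_PARTS; a minimal sketch consistent with that discussion
(the PRICE threshold is an assumption) is:
SELECT CHEAP_PARTS.PROD#, CHEAP_PARTS.PRODUCT
  FROM (SELECT PROD#, PRODUCT
          FROM PRODUCTS
          WHERE PRICE < 10.00) AS CHEAP_PARTS;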
In the example, the correlation name is CHEAP_PARTS. There are two correlated
references to CHEAP_PARTS: CHEAP_PARTS.PROD# and
CHEAP_PARTS.PRODUCT. Those references are valid because they do not occur
in the same FROM clause where CHEAP_PARTS is defined.
Conceptual overview
Suppose you want a list of the employee numbers, names, and commissions of all
employees working on a particular project, say project number MA2111. The first
part of the SELECT statement is easy to write:
SELECT EMPNO, LASTNAME, COMM
  FROM DSN8610.EMP
  WHERE EMPNO ...
But you cannot go further because the DSN8610.EMP table does not include
project number data. You do not know which employees are working on project
MA2111 without issuing another SELECT statement against the
DSN8610.EMPPROJACT table.
You can use a subselect to solve this problem. A subselect in a WHERE clause is
called a subquery. The SELECT statement surrounding the subquery is called the
outer SELECT.
SELECT EMPNO, LASTNAME, COMM
FROM DSN8610.EMP
WHERE EMPNO IN
(SELECT EMPNO
FROM DSN8610.EMPPROJACT
WHERE PROJNO = 'MA2111');
To better understand what results from this SQL statement, imagine that DB2 goes
through the following process:
1. DB2 evaluates the subquery to obtain a list of EMPNO values:
(SELECT EMPNO
   FROM DSN8610.EMPPROJACT
   WHERE PROJNO = 'MA2111');
which results in an interim result table of EMPNO values.
2. The interim result table then serves as a list in the search condition of the outer
SELECT. Effectively, DB2 executes this statement:
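For example, with hypothetical employee numbers in the interim result table, the
effective statement looks like this:
SELECT EMPNO, LASTNAME, COMM
  FROM DSN8610.EMP
  WHERE EMPNO IN ('000200', '000220');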
This kind of subquery is uncorrelated. In the previous query, for example, the
content of the subquery is the same for every row of the table DSN8610.EMP.
Subqueries that vary in content from row to row or group to group are correlated
subqueries. For information on correlated subqueries, see “Using correlated
subqueries” on page 74. All of the information preceding that section applies to
both correlated and uncorrelated subqueries.
Subqueries can also appear in the predicates of other subqueries. Such subqueries
are nested subqueries at some level of nesting. For example, a subquery within a
subquery within an outer SELECT has a level of nesting of 2. DB2 allows nesting
down to a level of 15, but few queries require a nesting level greater than 1.
The relationship of a subquery to its outer SELECT is the same as the relationship
of a nested subquery to a subquery, and the same rules apply, except where
otherwise noted.
Except for a subquery of a basic predicate, the result table of a subquery can
contain more than one value.
Basic predicate
You can use a subquery immediately after any of the comparison operators. If you
do, the subquery can return at most one value. DB2 compares that value with the
value to the left of the comparison operator.
For example, the following SQL statement returns the employee numbers, names,
and salaries for employees whose education level is higher than the average
company-wide education level.
SELECT EMPNO, LASTNAME, SALARY
FROM DSN8610.EMP
WHERE EDLEVEL >
(SELECT AVG(EDLEVEL)
FROM DSN8610.EMP);
If a subquery that returns one or more null values gives you unexpected results,
see the description of quantified predicates in Chapter 3 of DB2 SQL Reference.
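The EXISTS example that the next paragraph discusses has this general shape (a
sketch; the outer select list and the use of the PRSTDATE column of
DSN8610.PROJ as the estimated start date are assumptions):
SELECT EMPNO, LASTNAME
  FROM DSN8610.EMP
  WHERE EXISTS
    (SELECT *
       FROM DSN8610.PROJ
       WHERE PRSTDATE > '1986-01-01');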
In the example, the search condition is true if any project represented in the
DSN8610.PROJ table has an estimated start date which is later than 1 January
1986. This example does not show the full power of EXISTS, because the result is
always the same for every row examined for the outer SELECT. As a
consequence, either every row appears in the results, or none appear. A correlated
subquery is more powerful, because the subquery would change from row to row.
As shown in the example, you do not need to specify column names in the
subquery of an EXISTS clause. Instead, you can code SELECT *. You can also use
the EXISTS keyword with the NOT keyword in order to select rows when the data
or condition you specify does not exist; that is, you can code
WHERE NOT EXISTS (SELECT ...);
In the subquery, you tell DB2 to compute the average education level for the
department number in the current row. A query that does this follows:
SELECT EMPNO, LASTNAME, WORKDEPT, EDLEVEL
FROM DSN8610.EMP X
WHERE EDLEVEL >
(SELECT AVG(EDLEVEL)
FROM DSN8610.EMP
WHERE WORKDEPT = X.WORKDEPT);
A correlated subquery looks like an uncorrelated one, except for the presence of
one or more correlated references. In the example, the single correlated reference
is the occurrence of X.WORKDEPT in the WHERE clause of the subselect. In this
clause, the qualifier X is the correlation name defined in the FROM clause of the
outer SELECT statement. X designates rows of the first instance of DSN8610.EMP.
At any time during the execution of the query, X designates the row of
DSN8610.EMP to which the WHERE clause is being applied.
Consider what happens when the subquery executes for a given row of
DSN8610.EMP. Before it executes, X.WORKDEPT receives the value of the
WORKDEPT column for that row. Suppose, for example, that the row is for
CHRISTINE HAAS. Her work department is A00, which is the value of
WORKDEPT for that row. The subquery executed for that row is therefore:
(SELECT AVG(EDLEVEL)
FROM DSN8610.EMP
   WHERE WORKDEPT = 'A00');
The subquery produces the average education level of Christine's department. The
outer subselect then compares this to Christine's own education level. For some
other row for which WORKDEPT has a different value, that value appears in the
subquery in place of A00. For example, in the row for MICHAEL L THOMPSON,
this value is B01, and the subquery for his row delivers the average education level
for department B01.
The result table produced by the query has the following values:
The correlation name appears in the FROM clause of some query. This query could
be the outer-level SELECT, or any of the subqueries that contain the reference.
Suppose, for example, that a query contains subqueries A, B, and C, and that A
contains B and B contains C. Then C could use a correlation name defined in B, A,
or the outer SELECT.
You can define a correlation name for each table name appearing in a FROM
clause. Simply append the correlation name after its table name. Leave one or
more blanks between a table name and its correlation name. You can include the
word AS between the table name and the correlation name to increase the
readability of the SQL statement. For example:
SELECT EMPNO, LASTNAME, WORKDEPT, EDLEVEL
FROM DSN8610.EMP AS X
WHERE EDLEVEL >
(SELECT AVG(EDLEVEL)
FROM DSN8610.EMP
WHERE WORKDEPT = X.WORKDEPT);
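The DELETE statement that the next paragraph analyzes follows this general shape
(a sketch; the use of the ACSTAFF column of DSN8610.PROJACT, which records
the estimated mean staffing for each activity, is an assumption):
DELETE FROM DSN8610.PROJ X
  WHERE .5 >
    (SELECT SUM(ACSTAFF)
       FROM DSN8610.PROJACT
       WHERE PROJNO = X.PROJNO);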
To process this statement, DB2 determines for each project (represented by a row
in the DSN8610.PROJ table) whether or not the combined staffing for that project is
less than 0.5. If it is, DB2 deletes that row from the DSN8610.PROJ table.
To continue this example, suppose DB2 deletes a row in the DSN8610.PROJ table.
You must also delete rows related to the deleted project in the DSN8610.PROJACT
table. To do this, use:
DELETE FROM DSN8610.PROJACT X
WHERE NOT EXISTS
(SELECT *
FROM DSN8610.PROJ
WHERE PROJNO = X.PROJNO);
DB2 determines, for each row in the DSN8610.PROJACT table, whether a row with
the same project number exists in the DSN8610.PROJ table. If not, DB2 deletes
the row in DSN8610.PROJACT.
A subquery of a DELETE statement must not reference the same table from which
rows are deleted. In the sample application, some departments administer other
departments. Consider the following statement, which seems to delete every
department that does not administer another one:
With the referential constraints defined for the sample tables, the statement causes
an error. The deletion involves the table referred to in the subquery (DSN8610.EMP
is a dependent table of DSN8610.DEPT) and the last delete rule in the path to
EMP is SET NULL, not RESTRICT or NO ACTION. If the statement could execute,
its results would again depend on the order in which DB2 accesses the rows.
To use SPUFI, select SPUFI from the DB2I Primary Option Menu as shown in
Figure 3.
DSNEPRI DB2I PRIMARY OPTION MENU SSID: DSN
COMMAND ===> 1
Figure 3. The DB2I primary option menu with option 1 selected
From then on, when the SPUFI panel displays, the data entry fields on the panel
contain the values that you previously entered. You can specify data set names
and processing options each time the SPUFI panel displays, as needed. Values
you do not change remain in effect.
Enter the output data set name: (Must be a sequential data set)
4 DATA SET NAME..... ===> RESULT
DSNESP02                   CURRENT SPUFI DEFAULTS                SSID: DSN
===>
Enter the following to control your SPUFI session:
 1  SQL TERMINATOR   ===> ;      (SQL Statement Terminator)
 2  ISOLATION LEVEL  ===> RR     (RR=Repeatable Read, CS=Cursor Stability)
 3  MAX SELECT LINES ===> 250    (Maximum number of lines to be
                                  returned from a SELECT)
Output data set characteristics:
 4  RECORD LENGTH ... ===> 4092  (LRECL= logical record length)
 5  BLOCKSIZE ....... ===> 4096  (Size of one block)
 6  RECORD FORMAT.... ===> VB    (RECFM= F, FB, FBA, V, VB, or VBA)
 7  DEVICE TYPE...... ===> SYSDA (Must be a DASD unit name)
Specify values for the following options on the CURRENT SPUFI DEFAULTS
panel. All fields must contain a value.
| 1 SQL TERMINATOR
| Allows you to specify the character that you use to end each SQL statement.
| You can specify any character except one of those listed in Table 5. A
| semicolon is the default.
| Table 5 (Page 1 of 2). Invalid special characters for the SQL terminator
|
|   Name       Character   Hexadecimal Representation
|   blank                  X'40'
|   comma         ,        X'6B'
When you have entered your SPUFI options, press the ENTER key to continue.
SPUFI then processes the next processing option for which you specified YES. If
all other processing options are NO, SPUFI displays the SPUFI panel.
If you press the END key, you return to the SPUFI panel, but you lose all the
changes you made on the SPUFI Defaults panel. If you press ENTER, SPUFI
saves your changes.
On the panel, use the ISPF EDIT program to enter SQL statements that you want
to execute, as shown in Figure 6 on page 85.
Move the cursor to the first input line and enter the first part of an SQL statement.
You can enter the rest of the SQL statement on subsequent lines, as shown in
Figure 6 on page 85. Indenting your lines and entering your statements on several
lines make your statements easier to read and do not change how DB2 processes
your statements.
In your SPUFI input data set, end each SQL statement with the statement
terminator that you specified in the CURRENT SPUFI DEFAULTS panel.
When you have entered your SQL statements, press the END PF key to save the
file and to execute the SQL statements.
EDIT --------userid.EXAMPLES(XMP1) --------------------- COLUMNS 1 72
COMMAND INPUT ===> SAVE SCROLL ===> PAGE
********************************** TOP OF DATA ***********************
000100  SELECT LASTNAME, FIRSTNME, PHONENO
000200    FROM DSN8610.EMP
000300    WHERE WORKDEPT= 'D11'
000400    ORDER BY LASTNAME;
********************************* BOTTOM OF DATA *********************
Figure 6. The edit panel: After entering an SQL statement
Pressing the END PF key saves the data set. You can save the data set and
continue editing it by entering the SAVE command. In fact, it is a good practice to
save the data set after every 10 minutes or so of editing.
Figure 6 shows what the panel looks like if you enter the sample SQL statement,
followed by a SAVE command.
You can bypass the editing step by resetting the EDIT INPUT processing option:
EDIT INPUT ... ===> NO
You can put comments about SQL statements either on separate lines or on the
same line. In either case, use two hyphens (--) to begin a comment. Specify any
text other than #SET TERMINATOR after the comment. DB2 ignores everything to
the right of the two hyphens.
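For example (an illustrative statement; the comment text is arbitrary):
-- List the phone numbers for department D11
SELECT LASTNAME, PHONENO      -- name and phone number
  FROM DSN8610.EMP
  WHERE WORKDEPT = 'D11';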
# Use the text --#SET TERMINATOR character in a SPUFI input data set as an
# instruction to SPUFI to interpret character as a statement terminator. You can
# specify any single-byte character except one of the characters that are listed in
# Table 5 on page 82. The terminator that you specify overrides a terminator that
# you specified in option 1 of the CURRENT SPUFI DEFAULTS panel or in a
# previous --#SET TERMINATOR statement.
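For example, a sketch that switches the terminator to an ampersand for one
statement and then restores the semicolon (the choice of & is arbitrary):
--#SET TERMINATOR &
SELECT LASTNAME, FIRSTNME
  FROM DSN8610.EMP
  WHERE WORKDEPT = 'D11' &
--#SET TERMINATOR ;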
Your SQL statement might take a long time to execute, depending on how large a
table DB2 has to search, or on how many rows DB2 has to process. To interrupt
DB2's processing, press the PA1 key and respond to the prompting message that
asks you if you really want to stop processing. This cancels the executing SQL
statement and returns you to the ISPF-PDF menu.
What happens to the output data set? This depends on how much of the input data
set DB2 was able to process before you interrupted its processing. DB2 might not
have opened the output data set yet, or the output data set might contain all or part
of the results data produced so far.
At the end of the data set are summary statistics that describe the processing of
the input data set as a whole.
For all other types of SQL statements executed with SPUFI, the message
“SQLCODE IS 0” indicates an error-free result.
Other messages that you could receive from the processing of SQL statements
include:
) The number of rows that DB2 processed; that is, the number of rows that:
– Your SELECT statement retrieved
– Your UPDATE statement modified
– Your INSERT statement added to a table
– Your DELETE statement deleted from a table
) Which columns display truncated data because the data was too wide
| If you are writing your applications in Java, you can use JDBC application
| support to access DB2. JDBC is similar to ODBC but is designed specifically
| for use with Java and is therefore a better choice than ODBC for making DB2
| calls from Java applications.
| For more information on using JDBC, see DB2 ODBC Guide and Reference.
) Delimit SQL statements, as described in “Delimiting an SQL statement” on
page 95.
) Declare the tables you use, as described in “Declaring table and view
definitions” on page 95. (This is optional.)
) Declare the data items used to pass data between DB2 and a host language,
as described in “Accessing data using host variables and host structures” on
page 96.
) Code SQL statements to access DB2 data. See “Accessing data using host
variables and host structures” on page 96.
For information about using the SQL language, see “Section 2. Using SQL
queries” on page 3 and in DB2 SQL Reference. Details about how to use SQL
statements within an application program are described in “Chapter 3-4.
Embedding SQL statements in host languages” on page 127.
) Declare a communications area (SQLCA), or handle exceptional conditions that
DB2 indicates with return codes, in the SQLCA. See “Checking the execution of
SQL statements” on page 101 for more information.
This section includes information about using SQL in application programs written
# in assembler, C, COBOL, FORTRAN, PL/I, and REXX. You can also use SQL in
application programs written in Ada, APL2, BASIC, and Prolog. See the following
publications for more information about these languages:
Ada IBM Ada/370 SQL Module Processor for DB2 Database Manager
User's Guide
APL2 APL2 Programming: Using Structured Query Language (SQL)
BASIC IBM BASIC/MVS Language Reference
Prolog/MVS & VM IBM SAA AD/Cycle Prolog/MVS & VM Programmer's Guide
Some of the examples vary from these conventions. Exceptions are noted where
they occur.
# For REXX, precede the statement with EXECSQL. If the statement is in a literal
# string, enclose it in single or double quotation marks.
For example, use EXEC SQL and END-EXEC to delimit an SQL statement in a
COBOL program:
EXEC SQL
an SQL statement
END-EXEC.
You do not have to declare tables or views, but there are advantages if you do.
One advantage is documentation. For example, the DECLARE statement specifies
the structure of the table or view you are working with, and the data type of each
column. You can refer to the DECLARE statement for the column names and data
types in the table or view. Another advantage is that the DB2 precompiler uses
your declarations to make sure you have used correct column names and data
types in your SQL statements. The DB2 precompiler issues a warning message
when the column names and data types do not correspond to the SQL DECLARE
statements in your program.
For example, the DECLARE TABLE statement for the DSN8610.DEPT table looks
like this:
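(The declaration below is a sketch based on the standard DSN8610.DEPT sample
table definition.)
EXEC SQL
  DECLARE DSN8610.DEPT TABLE
    (DEPTNO    CHAR(3)      NOT NULL,
     DEPTNAME  VARCHAR(36)  NOT NULL,
     MGRNO     CHAR(6)              ,
     ADMRDEPT  CHAR(3)      NOT NULL,
     LOCATION  CHAR(16)             )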
END-EXEC.
| When you declare a table or view that contains a column with a distinct type, it is
| best to declare that column with the source type of the distinct type, rather than the
| distinct type itself. When you declare the column with the source type, DB2 can
| check embedded SQL statements that reference that column at precompile time.
A host variable is a data item declared in the host language for use within an SQL
statement. Using host variables, you can:
) Retrieve data into the host variable for your application program's use
) Place data into the host variable to insert into a table or to change the contents
of a row
) Use the data in the host variable when evaluating a WHERE or HAVING clause
) Assign the value in the host variable to a special register, such as CURRENT
SQLID and CURRENT DEGREE
) Insert null values in columns using a host indicator variable that contains a
negative value
) Use the data in the host variable in statements that process dynamic SQL,
such as EXECUTE, PREPARE, and OPEN
A host structure is a group of host variables that an SQL statement can refer to
# using a single name. You can use host structures in all languages except REXX.
Use host language statements to define the host structures.
To optimize performance, make sure the host language declaration maps as closely
as possible to the data type of the associated data in the database; see “Chapter
3-4. Embedding SQL statements in host languages” on page 127.
You can use a host variable to represent a data value, but you cannot use it to
represent a table, view, or column name. (You can specify table, view, or column
names at run time using dynamic SQL. See “Chapter 7-1. Coding dynamic SQL in
application programs” on page 503 for more information.)
Host variables follow the naming conventions of the host language. A colon (:) must
precede host variables used in SQL to tell DB2 that the variable is not a column
name. A colon must not precede host variables outside of SQL statements.
For more information about declaring host variables, see the appropriate language
section:
) Assembler: “Using host variables” on page 131
) C: “Using host variables” on page 144
) COBOL: “Using host variables” on page 166
) FORTRAN: “Using host variables” on page 185
) PL/I: “Using host variables” on page 196.
# ) REXX: “Using REXX host variables and data types” on page 211.
Retrieving a single row of data: The INTO clause of the SELECT statement
names one or more host variables to contain the column values returned. The
named variables correspond one-to-one with the list of column names in the
SELECT list.
For example, suppose you are retrieving the EMPNO, LASTNAME, and
WORKDEPT column values from rows in the DSN8610.EMP table. You can define
a data area in your program to hold each column, then name the data areas with
an INTO clause, as in the following example. (Notice that a colon precedes each
host variable):
EXEC SQL
SELECT EMPNO, LASTNAME, WORKDEPT
INTO :CBLEMPNO, :CBLNAME, :CBLDEPT
FROM DSN8610.EMP
WHERE EMPNO = :EMPID
END-EXEC.
In the DATA DIVISION of the program, you must declare the host variables
CBLEMPNO, CBLNAME, and CBLDEPT to be compatible with the data types in
the columns EMPNO, LASTNAME, and WORKDEPT of the DSN8610.EMP table.
If the SELECT statement returns more than one row, this is an error, and any data
returned is undefined and unpredictable.
Retrieving multiple rows of data: If you do not know how many rows DB2 will
return, or if you expect more than one row to return, then you must use an
alternative to the SELECT ... INTO statement.
Specifying a list of items in a select clause: When you specify a list of items in
the SELECT clause, you can use more than the column names of tables and
views. You can request a set of column values mixed with host variable values and
constants. For example:
MOVE 4476 TO RAISE.
MOVE '000220' TO PERSON.
EXEC SQL
SELECT EMPNO, LASTNAME, SALARY, :RAISE, SALARY + :RAISE
INTO :EMP-NUM, :PERSON-NAME, :EMP-SAL, :EMP-RAISE, :EMP-TTL
FROM DSN8610.EMP
WHERE EMPNO = :PERSON
END-EXEC.
The results shown below have column headings that represent the names of the
host variables:
EMP-NUM PERSON-NAME EMP-SAL EMP-RAISE EMP-TTL
======= =========== ======= ========= =======
000220  LUTZ        29840   4476      34316
Searching data
You can use a host variable to specify a value in the predicate of a search
condition or to replace a constant in an expression. For example, if you have
defined a field called EMPID that contains an employee number, you can retrieve
the name of the employee whose number is 000110 with:
MOVE '000110' TO EMPID.
EXEC SQL
SELECT LASTNAME
INTO :PGM-LASTNAME
FROM DSN8610.EMP
WHERE EMPNO = :EMPID
END-EXEC.
Retrieving data into host variables: If the value for the column you retrieve is
null, DB2 puts a negative value in the indicator variable. If it is null because of a
numeric or character conversion error, or an arithmetic expression error, DB2 sets
the indicator variable to -2. See “Handling arithmetic or conversion errors” on
page 103 for more information.
If you do not use an indicator variable and DB2 retrieves a null value, an error
results.
When DB2 retrieves the value of a column, you can test the indicator variable. If
the indicator variable's value is less than zero, the column value is null. When the
column value is null, the value of the host variable does not change from its
previous value.
You can also use an indicator variable to verify that a retrieved character string
value is not truncated. If the indicator variable contains a positive integer, the
integer is the original length of the string.
You can specify an indicator variable, preceded by a colon, immediately after the
host variable. Optionally, you can use the word INDICATOR between the host
variable and its indicator variable. Thus, the following two examples are equivalent:
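Sketches consistent with the host variables that the following text tests (CBLPHONE
for the phone number and INDNULL as its indicator variable; the use of EMPID as
the search value is an assumption):
EXEC SQL
  SELECT PHONENO
    INTO :CBLPHONE:INDNULL
    FROM DSN8610.EMP
    WHERE EMPNO = :EMPID
END-EXEC.

EXEC SQL
  SELECT PHONENO
    INTO :CBLPHONE INDICATOR :INDNULL
    FROM DSN8610.EMP
    WHERE EMPNO = :EMPID
END-EXEC.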
You can then test INDNULL for a negative value. If it is negative, the corresponding
value of PHONENO is null, and you can disregard the contents of CBLPHONE.
When you use a cursor to fetch a column value, you can use the same technique
to determine whether the column value is null.
Inserting null values into columns using host variables: You can use an
indicator variable to insert a null value from a host variable into a column. When
DB2 processes INSERT and UPDATE statements, it checks the indicator variable
(if it exists). If the indicator variable is negative, the column value is null. If the
indicator variable is greater than -1, the associated host variable contains a value
for the column.
For example, suppose your program reads an employee ID and a new phone
number, and must update the employee table with the new number. The new
number could be missing if the old number is incorrect, but a new number is not yet
available. If it is possible that the new value for column PHONENO might be null,
you can code:
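A sketch consistent with the surrounding discussion (NEWPHONE holds the new
value, PHONEIND is its indicator variable, and EMPID identifies the employee):
MOVE -1 TO PHONEIND.
EXEC SQL
  UPDATE DSN8610.EMP
    SET PHONENO = :NEWPHONE:PHONEIND
    WHERE EMPNO = :EMPID
END-EXEC.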
When NEWPHONE contains other than a null value, set PHONEIND to zero by
preceding the statement with:
MOVE 0 TO PHONEIND.
If you want to avoid listing host variables, you can substitute the name of a
structure, say :PEMP, that contains :EMPNO, :FIRSTNME, :MIDINIT, :LASTNAME,
and :WORKDEPT. The example then reads:
EXEC SQL
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT
INTO :PEMP
FROM DSN8610.VEMP
WHERE EMPNO = :EMPID
END-EXEC.
You can declare a host structure yourself, or you can use DCLGEN to generate a
COBOL record description, PL/I structure declaration, or C structure declaration that
corresponds to the columns of a table. For more details about coding a host
structure in your program, see “Chapter 3-4. Embedding SQL statements in host
languages” on page 127. For more information on using DCLGEN and the
restrictions that apply to the C language, see “Chapter 3-3. Generating
declarations for your tables using DCLGEN” on page 115.
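The example that the next paragraph refers to selects six columns of DSN8610.EMP
into a host structure with an indicator array. A sketch (the host structure name
:PEMP-ROW and the use of EMPID are assumptions):
EXEC SQL
  SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, BIRTHDATE
    INTO :PEMP-ROW:EMP-IND
    FROM DSN8610.EMP
    WHERE EMPNO = :EMPID
END-EXEC.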
In this example, EMP-IND is an array containing six values, which you can test for
negative values. If, for example, EMP-IND(6) contains a negative value, the
corresponding host variable in the host structure (EMP-BIRTHDATE) contains a null
value.
Because this example selects rows from the table DSN8610.EMP, some of the
values in EMP-IND are always zero. The first four columns of each row are defined
NOT NULL. In the above example, DB2 selects the values for a row of data into a
host structure. You must use a corresponding structure for the indicator variables to
determine which (if any) selected column values are null. For information on using
the IS NULL keyword phrase in WHERE clauses, see “Chapter 2-1. Retrieving
data” on page 5.
The meaning of SQLCODEs other than 0 and 100 varies with the particular product
implementing SQL.
An advantage to using the SQLCODE field is that it can provide more specific
information than the SQLSTATE. Many of the SQLCODEs have associated tokens
in the SQLCA that indicate, for example, which object incurred an SQL error.
To conform to the SQL standard, you can declare SQLCODE and SQLSTATE
(SQLCOD and SQLSTA in FORTRAN) as stand-alone host variables. If you specify
the STDSQL(YES) precompiler option, these host variables receive the return
codes, and you should not include an SQLCA in your program.
# The WHENEVER statement is not supported for REXX. For information on REXX
# error handling, see “Embedding SQL statements in a REXX procedure” on
# page 209.
The WHENEVER statement must precede the first SQL statement it is to affect.
However, if your program checks SQLCODE directly, it must check SQLCODE after
the SQL statement executes.
For rows in which the error does occur, one or more selected items have no
meaningful value. The indicator variable flags this error with a -2 for the affected
host variable, and an SQLCODE of +802 (SQLSTATE '01519') in the SQLCA.
You can find the programming language specific syntax and details for calling
DSNTIAR on the following pages:
For assembler programs, see page 139
For C programs, see page 158
DSNTIAR takes data from the SQLCA, formats it into a message, and places the
result in a message output area that you provide in your application program. Each
time you use DSNTIAR, it overwrites any previous messages in the message
output area. You should move or print the messages before using DSNTIAR again,
and before the contents of the SQLCA change, to get an accurate view of the
SQLCA.
You must define the message output area in VARCHAR format. In this varying
character format, a two-byte length field precedes the data. The length field tells
DSNTIAR how many total bytes are in the output message area; its minimum value
is 240.
Figure 8 shows the format of the message output area, where length is the
two-byte total length field, and the length of each line matches the logical record
length (lrecl) you specify to DSNTIAR.
When you call DSNTIAR, you must name an SQLCA and an output message area
in its parameters. You must also provide the logical record length (lrecl) as a value
between 72 and 240 bytes. DSNTIAR assumes the message area contains
fixed-length records of length lrecl.
When loading DSNTIAR from another program, be careful how you branch to
DSNTIAR. For example, if the calling program is in 24-bit addressing mode and
DSNTIAR is loaded above the 16-megabyte line, you cannot use the assembler
BALR instruction or CALL macro to call DSNTIAR, because they assume that
DSNTIAR is in 24-bit mode. Instead, you must use an instruction that is capable of
branching into 31-bit mode, such as BASSM.
You can dynamically link (load) and call DSNTIAR directly from a language that
does not handle 31-bit addressing (OS/VS COBOL, for example). To do this, link a
second version of DSNTIAR with the attributes AMODE(24) and RMODE(24) into
another load module library. Or you can write an intermediate assembler language
program that calls DSNTIAR in 31-bit mode; then call that intermediate
program in 24-bit mode from your application.
For more information on the allowed and default AMODE and RMODE settings for
a particular language, see the application programming guide for that language. For
details on how the attributes AMODE and RMODE of an application are
determined, see the linkage editor and loader user's guide for the language in
which you have written the application.
You can use DSNTIAR in the error routine to generate the complete message text
associated with the negative SQLCODEs.
1. Choose a logical record length (lrecl) of the output lines. For this example,
assume lrecl is 72, to fit on a terminal screen, and is stored in the variable
named ERROR-TEXT-LEN.
2. Define a message area in your COBOL application. Assuming you want an
area for up to 10 lines of length 72, you should define an area of 720 bytes,
plus a 2-byte area that specifies the length of the message output area.
01 ERROR-MESSAGE.
   02 ERROR-LEN   PIC S9(4)  COMP VALUE +720.
   02 ERROR-TEXT  PIC X(72)  OCCURS 10 TIMES
                  INDEXED BY ERROR-INDEX.
77 ERROR-TEXT-LEN PIC S9(9)  COMP VALUE +72.
For this example, the name of the message area is ERROR-MESSAGE.
3. Make sure you have an SQLCA. For this example, assume the name of the
SQLCA is SQLCA.
To display the contents of the SQLCA when SQLCODE is 0 or -501, you should
first format the message by calling DSNTIAR after the SQL statement that
produces SQLCODE 0 or -501:
CALL 'DSNTIAR' USING SQLCA ERROR-MESSAGE ERROR-TEXT-LEN.
You can then print the message output area just as you would any other variable.
Your message might look like the following:
DSNT408I SQLCODE = -501, ERROR:  THE CURSOR IDENTIFIED IN A FETCH OR
         CLOSE STATEMENT IS NOT OPEN
DSNT418I SQLSTATE   = 24501 SQLSTATE RETURN CODE
DSNT415I SQLERRP    = DSNXERT SQL PROCEDURE DETECTING ERROR
DSNT416I SQLERRD    = -315  0  0  -1  0  0 SQL DIAGNOSTIC INFORMATION
DSNT416I SQLERRD    = X'FFFFFEC5'  X'00000000'  X'00000000'
         X'FFFFFFFF'  X'00000000'  X'00000000' SQL DIAGNOSTIC
         INFORMATION
Cursor functions
You can retrieve and process a set of rows that satisfy the search conditions of an
SQL statement. However, when you use a program to select the rows, the program
cannot process all the rows at once. The program must process the rows one at a
time.
To illustrate the concept of a cursor, assume that DB2 builds a result table1 to hold
all the rows specified by the SELECT statement. DB2 uses a cursor to make rows
from the result table available to your program. A cursor identifies the current row
of the result table specified by a SELECT statement. When you use a cursor, your
program can retrieve each row sequentially from the result table until it reaches an
end-of-data (that is, the not found condition, SQLCODE=100 and SQLSTATE =
'02000'). The set of rows obtained as a result of executing the SELECT statement
can consist of zero, one, or many rows, depending on the number of rows that
satisfy the SELECT statement search condition.
You process the result table of a cursor much like a sequential data set. You must
open the cursor (with an OPEN statement) before you retrieve any rows. You use a
FETCH statement to retrieve the cursor's current row. You can use FETCH
repeatedly until you have retrieved all the rows. When the end-of-data condition
occurs, you must close the cursor with a CLOSE statement (similar to end-of-file
processing).
Your program can have several cursors. Each cursor requires its own:
) DECLARE CURSOR statement to define the cursor
) OPEN and CLOSE statements to open and close the cursor
) FETCH statement to retrieve rows from the cursor's result table.
You must declare host variables before you refer to them in a DECLARE CURSOR
statement. Refer to Chapter 6 of DB2 SQL Reference for further information.
You can use cursors to fetch, update, or delete a row of a table, but you cannot
use them to insert a row into a table.
1 DB2 produces result tables in different ways, depending on the complexity of the SELECT statement. However, they are the same
regardless of the way DB2 produces them.
Table 6. SQL statements required to define and use a cursor in a COBOL program

"Step 1: Define the cursor" on page 109:
    EXEC SQL
      DECLARE THISEMP CURSOR FOR
        SELECT EMPNO, LASTNAME,
               WORKDEPT, JOB
          FROM DSN8610.EMP
          WHERE WORKDEPT = 'D11'
          FOR UPDATE OF JOB
    END-EXEC.

"Step 2: Open the cursor" on page 110:
    EXEC SQL
      OPEN THISEMP
    END-EXEC.

"Step 3: Specify what to do at end-of-data" on page 110:
    EXEC SQL
      WHENEVER NOT FOUND
        GO TO CLOSE-THISEMP
    END-EXEC.

"Step 4: Retrieve a row using the cursor" on page 111:
    EXEC SQL
      FETCH THISEMP
        INTO :EMP-NUM, :NAME2,
             :DEPT, :JOB-NAME
    END-EXEC.

"Step 5a: Update the current row" on page 111:
    ... for specific employees in Department D11, update the JOB value:
    EXEC SQL
      UPDATE DSN8610.EMP
        SET JOB = :NEW-JOB
        WHERE CURRENT OF THISEMP
    END-EXEC.

    EXEC SQL
      DELETE FROM DSN8610.EMP
        WHERE CURRENT OF THISEMP
    END-EXEC.

    Branch back to fetch and process the next row.

"Step 6: Close the cursor" on page 113:
    CLOSE-THISEMP.
    EXEC SQL
      CLOSE THISEMP
    END-EXEC.
The SELECT statement shown here is quite simple. You can use other clauses of
the SELECT statement within DECLARE CURSOR. Chapter 5 of DB2 SQL
Reference illustrates several more clauses that you can use within a SELECT
statement.
Updating a column: If you intend to update a column in any (or all) of the rows of
the identified table, include the FOR UPDATE OF clause, which names each
column you intend to update. The precompiler options NOFOR and STDSQL affect
the use of the FOR UPDATE OF clause. For information on these options, see
Table 47 on page 410. If you do not specify the names of columns you intend to
update, and you do not specify the STDSQL(YES) option or the NOFOR
precompiler options, you receive an error.
You can update a column of the identified table even though it is not part of the
result table. In this case, you do not need to name the column in the SELECT
statement (but do not forget to name it in the FOR UPDATE OF clause). When the
cursor retrieves a row (using FETCH) that contains a column value you want to
update, you can use UPDATE ... WHERE CURRENT OF to update the row.
For example, assume that each row of the result table includes the EMPNO,
LASTNAME, and WORKDEPT columns from the DSN8610.EMP table. If you want
to update the JOB column (one of the columns in the DSN8610.EMP table), the
DECLARE CURSOR statement must include FOR UPDATE OF JOB even if you
omit JOB from the SELECT clause.
You can also use FOR UPDATE OF to update columns of one table, using
information from another table. For example, suppose you want to give a raise to
the employees responsible for certain projects. To do that, define a cursor like this:
EXEC SQL
DECLARE C1 CURSOR FOR
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY
FROM DSN8610.EMP X
WHERE EXISTS
(SELECT *
FROM DSN8610.PROJ Y
WHERE X.EMPNO=Y.RESPEMP
AND Y.PROJNO=:GOODPROJ)
FOR UPDATE OF SALARY;
Read-only result table: Some result tables cannot be updated—for example, the
result of joining two or more tables. Read-only result table specifications are
described in greater detail in Chapter 6 of DB2 SQL Reference.
When used with cursors, DB2 evaluates CURRENT DATE, CURRENT TIME, and
CURRENT TIMESTAMP special registers once when the OPEN statement
executes. DB2 uses the values returned in the registers on all subsequent FETCH
statements.
Two factors that influence the amount of time that DB2 requires to process the
OPEN statement are:
) Whether DB2 must perform any sorts before it can retrieve rows from the result
table
) Whether DB2 uses parallelism to process the SELECT statement associated
with the cursor
For more information, see “The effect of sorts on OPEN CURSOR” on page 721.
When your program issues the FETCH statement, DB2 uses the cursor to point to
the next row in the result table, making it the current row. DB2 then moves the
current row contents into the program host variables that you specified on the INTO
clause of FETCH. This sequence repeats each time you issue FETCH, until you
have processed all rows in the result table.
When you query a remote subsystem with FETCH, it is possible to have reduced
efficiency. To combat this problem, you can use block fetch. For more information
see “Use block fetch” on page 396. Block fetch processes rows ahead of the
application’s current row. You cannot use a block fetch when you use a cursor for
update or delete.
When used with a cursor, the UPDATE statement must meet these conditions:
) You update only one row—the current row.
) The WHERE clause identifies the cursor that points to the row to update.
) You must name each column to update in a FOR UPDATE OF clause of the
SELECT statement in DECLARE CURSOR before you use the UPDATE
statement. 2
2 If you do not specify the names of columns you intend to update, you receive an error code in the SQLCODE and SQLSTATE
host variables or related fields of the SQLCA when you try to update the columns. This is true only if you have not specified the
STDSQL(YES) option or the NOFOR precompile options.
“Updating current values: UPDATE” on page 57 showed you how to use the
UPDATE statement repeatedly when you update all rows that meet a specific
search condition. Alternatively, you can use the UPDATE...WHERE CURRENT OF
statement repeatedly when you want to obtain a copy of the row, examine it, and
then update it.
When used with a cursor, the DELETE statement differs from the one you learned
in “Chapter 2-2. Working with tables and modifying data” on page 41.
) You delete only one row—the current row.
) The WHERE clause identifies the cursor that points to the row to delete.
# You cannot use a DELETE statement with a cursor to delete rows from a created
# temporary table. However, you can use a DELETE statement with a cursor to
# delete rows from a declared temporary table.
After you have deleted a row, you cannot update or delete another row using that
cursor until you issue a FETCH statement to position the cursor on the next row.
“Deleting rows: DELETE” on page 59 showed you how to use the DELETE
statement to delete all rows that meet a specific search condition. Alternatively, you
can use the DELETE...WHERE CURRENT OF statement repeatedly when you
want to obtain a copy of the row, examine it, and then delete it.
If you finish processing the rows of the “result table” and you do not want to use
the cursor, you can let DB2 automatically close the cursor when your program
terminates.
An open cursor defined WITH HOLD remains open after a commit operation. The
cursor is positioned after the last row retrieved and before the next logical row of
the result table to be returned.
The following cursor declaration causes the cursor to maintain its position in the
DSN8610.EMP table after a commit point:
EXEC SQL
DECLARE EMPLUPDT CURSOR WITH HOLD FOR
SELECT EMPNO, LASTNAME, PHONENO, JOB, SALARY, WORKDEPT
FROM DSN8610.EMP
WHERE WORKDEPT < 'D11'
ORDER BY EMPNO
END-EXEC.
If the program abends, the cursor position is lost; to prepare for restart, your
program must reposition the cursor.
You must use DCLGEN before you precompile your program. Supply DCLGEN with
the table or view name before you precompile your program. To use the
declarations generated by DCLGEN in your program, use the SQL INCLUDE
statement.
DB2 must be active before you can use DCLGEN. You can start DCLGEN in
several different ways:
) From ISPF through DB2I. Select the DCLGEN option on the DB2I Primary
Option Menu panel. Next, fill in the DCLGEN panel with the information it
needs to build the declarations. Then press ENTER.
) Directly from TSO. To do this, sign on to TSO, issue the TSO command DSN,
and then issue the subcommand DCLGEN.
) From a CLIST, running in TSO foreground or background, that issues DSN and
then DCLGEN.
) With JCL. Supply the required information, using JCL, and run DCLGEN in
batch.
If you wish to start DCLGEN in the foreground, and your table names include
DBCS characters, you must input and display double-byte characters. If you do
not have a terminal that displays DBCS characters, you can enter DBCS
characters using the hex mode of ISPF edit.
Figure 9. DCLGEN panel
alphabetic, then enclose the name in apostrophes. If you use special
characters, be careful to avoid name conflicts.
If you leave this field blank, DCLGEN generates a name that contains the
table or view name with a prefix of DCL. If the language is COBOL or PL/I,
and the table or view name consists of a DBCS string, the prefix consists of
DBCS characters.
For C, characters you enter in this field do not fold to uppercase.
9 FIELD NAME PREFIX
Specifies a prefix that DCLGEN uses to form field names in the output. For
example, if you choose ABCDE, the field names generated are ABCDE1,
ABCDE2, and so on.
DCLGEN accepts a field name prefix of up to 28 bytes that can include
special and double-byte characters. If you specify a single-byte or
mixed-string prefix and the first character is not alphabetic, apostrophes must
enclose the prefix. If you use special characters, be careful to avoid name
conflicts.
For COBOL and PL/I, if the name is a DBCS string, DCLGEN generates
DBCS equivalents of the suffix numbers. For C, characters you enter in this
field do not fold to uppercase.
If you leave this field blank, the field names are the same as the column
names in the table or view.
10 DELIMIT DBCS
Tells DCLGEN whether to delimit DBCS table names and column names in
the table declaration. Use:
YES to enclose the DBCS table and column names with SQL delimiters.
NO to not delimit the DBCS table and column names.
11 COLUMN SUFFIX
Tells DCLGEN whether to form field names by attaching the column name as
a suffix to the value you specify in FIELD NAME PREFIX. For example, if you
specify YES, the field name prefix is NEW, and the column name is EMPNO,
then the field name is NEWEMPNO.
If you specify YES, you must also enter a value in FIELD NAME PREFIX. If
you do not enter a field name prefix, DCLGEN issues a warning message and
uses the column names as the field names.
The default is NO, which does not use the column name as a suffix, and
allows the value in FIELD NAME PREFIX to control the field names, if
specified.
12 INDICATOR VARS
Tells DCLGEN whether to generate an array of indicator variables for the host
variable structure.
If you specify YES, the array name is the table name with a prefix of “I” (or
DBCS letter “<I>” if the table name consists solely of double-byte characters).
The form of the data declaration depends on the language:
For a C program: short int Itable-name[n];
For a COBOL program: 01 Itable-name PIC S9(4) USAGE COMP OCCURS n
TIMES.
For a PL/I program: DCL Itable-name(n) BIN FIXED(15);
If you are using an SQL reserved word as an identifier, you must edit the DCLGEN
output in order to add the appropriate SQL delimiters.
DCLGEN produces output that is intended to meet the needs of most users, but
occasionally, you will need to edit the DCLGEN output to work in your specific
case. For example, DCLGEN is unable to determine whether a column defined as
NOT NULL also contains the DEFAULT clause, so you must edit the DCLGEN
output to add the DEFAULT clause to the appropriate column definitions.
DCLGEN support of C, COBOL, and PL/I languages
DCLGEN derives variable names from the source in the database. In Table 7, var
represents variable names that DCLGEN provides when it is necessary to clarify
the host language declaration.
For further details about the DCLGEN subcommand, see Chapter 2 of DB2
Command Reference.
Fill in the COBOL defaults panel as necessary. Press Enter to save the new
defaults, if any, and return to the DB2I Primary Option menu.
DSNEOP1 DB2I DEFAULTS
COMMAND ===>_
Fill in the fields as shown in Figure 12 on page 123, and then press Enter.
If the operation succeeds, a message displays at the top of your screen as shown
in Figure 13.
DSNE905I EXECUTION COMPLETE, MEMBER VPHONEC ADDED
***
DB2 then displays the screen as shown in Figure 14 on page 124. Press Enter to
return to the DB2I Primary Option menu.
DSNEDP1 DCLGEN SSID: DSN
===>
DSNE294I SYSTEM RETCODE=000 USER OR DSN RETCODE=0
Enter table name for which declarations are required:
1 SOURCE TABLE NAME ===> DSN8610.VPHONE
2 TABLE OWNER ===>
3 AT LOCATION ..... ===> (Location of table, optional)
Assembler
This chapter does not contain information on inter-language calls and calls to
stored procedures. “Writing and preparing an application to use stored procedures”
on page 582 discusses information needed to pass parameters to stored
procedures, including compatible language data types and SQL data types.
DB2 sets the SQLCODE and SQLSTATE values after each SQL statement
executes. An application can check these variables values to determine whether
the last SQL statement was successful. All SQL statements in the program must be
within the scope of the declaration of the SQLCODE and SQLSTATE variables.
If your program is reentrant, you must include the SQLCA within a unique data area
acquired for your task (a DSECT). For example, at the beginning of your program,
specify:
PROGAREA DSECT
EXEC SQL INCLUDE SQLCA
As an alternative, you can create a separate storage area for the SQLCA and
provide addressability to that area.
See Chapter 6 of DB2 SQL Reference for more information about the INCLUDE
statement and Appendix C of DB2 SQL Reference for a complete description of
SQLCA fields.
You must place SQLDA declarations before the first SQL statement that references
the data descriptor unless you use the precompiler option TWOPASS. See
Chapter 6 of DB2 SQL Reference for more information about the INCLUDE statement.
Each SQL statement in an assembler program must begin with EXEC SQL. The
EXEC and SQL keywords must appear on one line, but the remainder of the
statement can appear on subsequent lines.
Continuation for SQL statements: The line continuation rules for SQL statements
are the same as those for assembler statements, except that you must specify
EXEC SQL within one line. Any part of the statement that does not fit on one line
can appear on subsequent lines, beginning at the continuation margin (column 16,
the default). Every line of the statement, except the last, must have a continuation
character (a non-blank character) immediately after the right margin in column 72.
Declaring tables and views: Your assembler program should include a DECLARE
statement to describe each table and view the program accesses.
Margins: The precompiler option MARGINS allows you to set a left margin, a right
margin, and a continuation margin. The default values for these margins are
columns 1, 71, and 16, respectively. If EXEC SQL starts before the specified left
margin, the DB2 precompiler does not recognize the SQL statement. If you use the
default margins, you can place an SQL statement anywhere between columns 2
and 71.
Names: You can use any valid assembler name for a host variable. However, do
not use external entry names or access plan names that begin with 'DSN' or host
variable names that begin with 'SQL'. These names are reserved for DB2.
Statement labels: You can prefix an SQL statement with a label. The first line of
an SQL statement can use a label beginning in the left margin (column 1). If you do
not use a label, leave column 1 blank.
WHENEVER statement: The target for the GOTO clause in an SQL WHENEVER
statement must be a label in the assembler source code and must be within the
scope of the SQL statements that WHENEVER affects.
CICS
An example of code to support reentrant programs, running under CICS,
follows:
DFHEISTG DSECT
DFHEISTG
EXEC SQL INCLUDE SQLCA
*
DS  0F
SQDWSREG EQU R7
SQDWSTOR DS (SQLDLEN)C RESERVE STORAGE TO BE USED FOR SQLDSECT
..
.
TSO
The sample program in prefix.SDSNSAMP(DSNTIAD) contains an example
of how to acquire storage for the SQLDSECT in a program that runs in a
TSO environment.
CICS
A CICS application program uses the DFHEIENT macro to generate the
entry point code. When using this macro, consider the following:
) If you use the default DATAREG in the DFHEIENT macro, register 13
points to the save area.
) If you use any other DATAREG in the DFHEIENT macro, you must
provide addressability to a save area.
For example, to use SAVED, you can code instructions to save, load,
and restore register 13 around each SQL statement as in the following
example.
ST 13,SAVER13 SAVE REGISTER 13
LA 13,SAVED POINT TO SAVE AREA
EXEC SQL . . .
L 13,SAVER13 RESTORE REGISTER 13
You can precede the assembler statements that define host variables with the
statement BEGIN DECLARE SECTION, and follow the assembler statements with
the statement END DECLARE SECTION. You must use the statements BEGIN
DECLARE SECTION and END DECLARE SECTION when you use the precompiler
option STDSQL(YES).
You can declare host variables in normal assembler style (DC or DS), depending
on the data type and the limitations on that data type. You can specify a value on
DC or DS declarations (for example, DC H'5'). The DB2 precompiler examines
only packed decimal declarations.
An SQL statement that uses a host variable must be within the scope of the
statement that declares the variable.
Numeric host variables: The following figure shows the syntax for valid numeric
host variable declarations. The numeric value specifies the scale of the packed
decimal variable. If value does not include a decimal point, the scale is 0.
| For floating point data types (E, EH, EB, D, DH, and DB), DB2 uses the FLOAT
| precompiler option to determine whether the host variable is in IEEE floating point
| or System/390 floating point format. If the precompiler option is FLOAT(S390),
| you need to define your floating point host variables as E, EH, D, or DH. If the
| precompiler option is FLOAT(IEEE), you need to define your floating point host
| variables as EB or DB. DB2 converts all floating point input data to System/390
| floating point before storing it.
|
| [[──variable-name──┬─DC─┬──┬───┬──┬─H──┬────┬──────────┬─────────────────────────────────────────────[^
| └─DS─┘ └─1─┘ │ └─L2─┘ │
| ├─F──┬────┬──────────┤
| │ └─L4─┘ │
| ├─P──┬────┬──'value'─┤
| │ └─Ln─┘ │
| ├─E──┬────┬──────────┤
| │ └─L4─┘ │
| ├─EH──┬────┬─────────┤
| │ └─L4─┘ │
| ├─EB──┬────┬─────────┤
| │ └─L4─┘ │
| ├─D──┬────┬──────────┤
| │ └─L8─┘ │
| ├─DH──┬────┬─────────┤
| │ └─L8─┘ │
| └─DB──┬────┬─────────┘
| └─L8─┘
Character host variables: There are three valid forms for character host variables:
) Fixed-length strings
| ) Varying-length strings
| ) CLOBs
The following figures show the syntax for forms other than CLOBs. See Figure 23
on page 134 for the syntax of CLOBs.
[[──variable-name──┬─DC─┬──┬───┬──C──┬────┬──────────────────────────────────────────────────────────[^
└─DS─┘ └─1─┘ └─Ln─┘
[[──variable-name──┬─DC─┬──┬───┬──H──┬────┬──,──┬───┬──CLn───────────────────────────────────────────[^
└─DS─┘ └─1─┘ └─L2─┘ └─1─┘
Graphic host variables: There are three valid forms for graphic host variables:
) Fixed-length strings
| ) Varying-length strings
| ) DBCLOBs
The following figures show the syntax for forms other than DBCLOBs. See
Figure 23 on page 134 for the syntax of DBCLOBs. In the syntax diagrams, value
denotes one or more DBCS characters, and the symbols < and > represent
shift-out and shift-in characters.
[[──┬─DC─┬──G──┬─────────────┬───────────────────────────────────────────────────────────────────────[^
└─DS─┘ ├─Ln──────────┤
├─'<value>'───┤
└─Ln'<value>'─┘
[[──┬─DS─┬──H──┬────┬──┬─────┬──,──GLn──┬───────────┬────────────────────────────────────────────────[^
└─DC─┘ └─L2─┘ └─'m'─┘ └─'<value>'─┘
Result set locators: The following figure shows the syntax for declarations of
result set locators. See “Chapter 7-2. Using stored procedures for client/server
processing” on page 535 for a discussion of how to use these host variables.
[[──variable-name──┬─DC─┬──┬───┬──F──┬────┬──────────────────────────────────────────────────────────[^
└─DS─┘ └─1─┘ └─L4─┘
| Table Locators: The following figure shows the syntax for declarations of table
| locators. See “Accessing transition tables in a user-defined function” on page 287
| for a discussion of how to use these host variables.
| LOB variables and locators: The following figure shows the syntax for
| declarations of BLOB, CLOB, and DBCLOB host variables and locators.
| If you specify the length of the LOB in terms of KB, MB, or GB, you must leave no
| spaces between the length and K, M, or B.
See “Chapter 4-2. Programming for large objects (LOBs)” on page 237 for a
discussion of how to use these host variables.
| ROWIDs: The following figure shows the syntax for declarations of ROWID
| variables. See “Chapter 4-2. Programming for large objects (LOBs)” on page 237
| for a discussion of how to use these host variables.
Table 8. SQL data types the precompiler uses for assembler declarations

  Assembler Data Type             SQLTYPE of      SQLLEN of       SQL Data Type
                                  Host Variable   Host Variable
  DS HL2                          500             2               SMALLINT
  DS FL4                          496             4               INTEGER
  DS P'value'                     484             p in byte 1,    DECIMAL(p,s)
  DS PLn'value' or                                s in byte 2     See the description for
  DS PLn                                                          DECIMAL(p,s) in Table 9
    1<=n<=16                                                      on page 135.
| DS EL4                          480             4               REAL or FLOAT (n)
| DS EHL4                                                           1<=n<=21
| DS EBL4
| DS DL8                          480             8               DOUBLE PRECISION,
| DS DHL8                                                         or FLOAT (n)
| DS DBL8                                                           22<=n<=53
  DS CLn   1<=n<=255              452             n               CHAR(n)
  DS HL2,CLn   1<=n<=255          448             n               VARCHAR(n)
  DS HL2,CLn   n>255              456             n               VARCHAR(n)
  DS GLm   2<=m<=254 (note 1)     468             n               GRAPHIC(n) (note 2)
  DS HL2,GLm   2<=m<=254 (note 1) 464             n               VARGRAPHIC(n) (note 2)
  DS HL2,GLm   m>254 (note 1)     472             n               VARGRAPHIC(n) (note 2)
  DS FL4                          972             4               Result set locator (note 2)
| SQL TYPE IS                     976             4               Table locator (note 2)
|   TABLE LIKE table-name
|   AS LOCATOR
| SQL TYPE IS                     960             4               BLOB locator (note 2)
|   BLOB_LOCATOR
| SQL TYPE IS                     964             4               CLOB locator (note 3)
|   CLOB_LOCATOR
| SQL TYPE IS                     968             4               DBCLOB locator (note 3)
|   DBCLOB_LOCATOR
| SQL TYPE IS                     404             n               BLOB(n)
|   BLOB(n)
|   1≤n≤2147483647
| SQL TYPE IS                     408             n               CLOB(n)
|   CLOB(n)
|   1≤n≤2147483647
| SQL TYPE IS                     412             n               DBCLOB(n) (note 2)
|   DBCLOB(n)
|   1≤n≤1073741823
| SQL TYPE IS ROWID               904             40              ROWID
Table 9 helps you define host variables that receive output from the database. You
can use Table 9 to determine the assembler data type that is equivalent to a given
SQL data type. For example, if you retrieve TIMESTAMP data, you can use the
table to define a suitable host variable in the program that receives the data value.
Table 9 (Page 1 of 3). SQL data types mapped to typical assembler declarations
SQL Data Type Assembler Equivalent Notes
SMALLINT DS HL2
INTEGER DS F
DECIMAL(p,s) or     DS P'value'            p is precision; s is scale. 1<=p<=31 and 0<=s<=p.
NUMERIC(p,s)        DS PLn'value'          1<=n<=16. value is a literal value that includes a
                    DS PLn                 decimal point. You must use Ln, value, or both. It is
                                           recommended that you use only value.
                                           Precision: If you use Ln, it is 2n-1; otherwise, it is the
                                           number of digits in value. Scale: If you use value, it
                                           is the number of digits to the right of the decimal
                                           point; otherwise, it is 0.
                                           For efficient use of indexes: Use value. If p is even,
                                           do not use Ln and be sure the precision of value is p
                                           and the scale of value is s. If p is odd, you can use
                                           Ln (although it is not advised), but you must choose n
                                           so that 2n-1=p, and value so that the scale is s.
                                           Include a decimal point in value, even when the scale
                                           of value is 0.
| REAL or FLOAT(n) DS EL4 1<=n<=21
| DS EHL4
| DS EBL41
| DOUBLE PRECISION, DS DL8 22<=n<=53
| DOUBLE, or FLOAT(n) DS DHL8
| DS DBL81
CHAR(n) DS CLn 1<=n<=255
VARCHAR(n) DS HL2,CLn
GRAPHIC(n) DS GLm m is expressed in bytes. n is the number of
double-byte characters. 1<=n<=127
VARGRAPHIC(n) DS HL2,GLx DS x and m are expressed in bytes. n is the number of
HL2'm',GLx'<value>' double-byte characters. < and > represent shift-out
and shift-in characters.
DATE DS,CLn If you are using a date exit routine, n is determined by
that routine; otherwise, n must be at least 10.
TIME DS,CLn If you are using a time exit routine, n is determined by
that routine. Otherwise, n must be at least 6; to
include seconds, n must be at least 8.
TIMESTAMP DS,CLn n must be at least 19. To include microseconds, n
must be 26; if n is less than 26, truncation occurs on
the microseconds part.
Result set locator DS F Use this data type only to receive result sets. Do not
use this data type as a column type.
| Table locator SQL TYPE IS Use this data type only in a user-defined function or
| TABLE LIKE stored procedure to receive rows of a transition table.
| table-name Do not use this data type as a column type.
| AS LOCATOR
| BLOB locator SQL TYPE IS Use this data type only to manipulate data in BLOB
| BLOB_LOCATOR columns. Do not use this data type as a column type.
| CLOB locator SQL TYPE IS Use this data type only to manipulate data in CLOB
| CLOB_LOCATOR columns. Do not use this data type as a column type.
| DBCLOB locator SQL TYPE IS Use this data type only to manipulate data in
| DBCLOB_LOCATOR DBCLOB columns. Do not use this data type as a
| column type.
Table 9 (Page 3 of 3). SQL data types mapped to typical assembler declarations
SQL Data Type Assembler Equivalent Notes
| BLOB(n) SQL TYPE IS 1≤n≤2147483647
| BLOB(n)
| CLOB(n) SQL TYPE IS 1≤n≤2147483647
| CLOB(n)
| DBCLOB(n) SQL TYPE IS n is the number of double-byte characters.
| DBCLOB(n) 1≤n≤1073741823
| ROWID SQL TYPE IS ROWID
Host graphic data type: You can use the assembler data type “host graphic” in
SQL statements when the precompiler option GRAPHIC is in effect. However, you
cannot use assembler DBCS literals in SQL statements, even when GRAPHIC is in
effect.
| Floating point host variables: All floating point data is stored in DB2 in
| System/390 floating point format. However, your host variable data can be in
| System/390 floating point format or IEEE floating point format. DB2 uses the
| FLOAT(S390|IEEE) precompiler option to determine whether your floating point
| host variables are in IEEE floating point format or System/390 floating point format.
| DB2 does no checking to determine whether the host variable declarations or
| format of the host variable contents match the precompiler option. Therefore, you
| need to ensure that your floating point host variable types and contents match the
| precompiler option.
| Special Purpose Assembler Data Types: The locator data types are assembler
| language data types as well as SQL data types. You cannot use locators as
| column types. For information on how to use these data types, see the following
| sections:
| Table locator “Accessing transition tables in a user-defined function” on
| page 287
| LOB locators “Chapter 4-2. Programming for large objects (LOBs)” on page 237
When your program uses X to assign a null value to a column, the program should
set the indicator variable to a negative number. DB2 then assigns a null value to
the column and ignores any value in X.
You declare indicator variables in the same way as host variables. You can mix the
declarations of the two types of variables in any way that seems appropriate. For
more information on indicator variables, see “Using indicator variables with host
variables” on page 98 or Chapter 3 of DB2 SQL Reference.
Example:
The following figure shows the syntax for a valid indicator variable.
[[──variable-name──┬─DC─┬──┬───┬──H──┬────┬──────────────────────────────────────────────────────────[^
└─DS─┘ └─1─┘ └─L2─┘
DSNTIAR syntax
sqlca
An SQL communication area.
message
An output area, defined as a varying length string, in which DSNTIAR places
the message text. The first halfword contains the length of the remaining area;
its minimum value is 240.
The output lines of text, each line being the length specified in lrecl, are put into
this area. For example, you could specify the format of the output area as:
LINES EQU 1
LRECL EQU 132
..
.
MESSAGE DS H,CL(LINES*LRECL)
ORG MESSAGE
MESSAGEL DC AL2(LINES*LRECL)
MESSAGE1 DS CL(LRECL) text line 1
MESSAGE2 DS CL(LRECL) text line 2
..
.
MESSAGEn DS CL(LRECL) text line n
..
.
CALL DSNTIAR,(SQLCA, MESSAGE, LRECL),MF=(E,PARM)
where MESSAGE is the name of the message output area, LINES is the
number of lines in the message output area, and LRECL is the length of
each line.
lrecl
A fullword containing the logical record length of output messages, between 72
and 240.
CICS
If your CICS application requires CICS storage handling, you must use the
subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
CALL DSNTIAC,(eib,commarea,sqlca,msg,lrecl),MF=(E,PARM)
DSNTIAC has extra parameters, which you must use for calls to routines that
use CICS commands.
eib EXEC interface block
commarea communication area
For more information on these new parameters, see the appropriate application
programming guide for CICS. The remaining parameter descriptions are the
same as those for DSNTIAR. Both DSNTIAC and DSNTIAR format the SQLCA
in the same way.
You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you
must also define them in the CSD. For an example of CSD entry generation
statements for use with DSNTIAC, see member DSN8FRDO in the data set
prefix.SDSNSAMP.
The assembler source code for DSNTIAC and job DSNTEJ5A, which
assembles and link-edits DSNTIAC, are also in the data set prefix.SDSNSAMP.
DB2 sets the SQLCODE and SQLSTATE values after each SQL statement
executes. An application can check these variable values to determine whether the
last SQL statement was successful. All SQL statements in the program must be
within the scope of the declaration of the SQLCODE and SQLSTATE variables.
A standard declaration includes both a structure definition and a static data area
named 'sqlca'. See Chapter 6 of DB2 SQL Reference for more information about
the INCLUDE statement and Appendix C of DB2 SQL Reference for a complete
description of SQLCA fields.
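For example, a C program might request the standard SQLCA declaration and then
test the SQLCODE field after a statement executes. The following is a minimal
sketch; the table MYTABLE, its columns, and the host variable names are
hypothetical.
#include <stdio.h>

EXEC SQL INCLUDE SQLCA;                   /* standard SQLCA declaration       */

EXEC SQL BEGIN DECLARE SECTION;
  long new_value;                         /* INTEGER host variable            */
  char row_key[11];                       /* NUL-terminated string: up to     */
                                          /* 10 characters plus the NUL       */
EXEC SQL END DECLARE SECTION;

void update_row(void)
{
  EXEC SQL UPDATE MYTABLE                 /* hypothetical table and columns   */
             SET MYCOL = :new_value
             WHERE MYKEY = :row_key;
  if (sqlca.sqlcode != 0)                 /* a nonzero SQLCODE means the      */
    printf("SQLCODE = %ld\n",             /* statement ended with an error    */
           sqlca.sqlcode);                /* or a warning                     */
}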
Unlike the SQLCA, more than one SQLDA can exist in a program, and an SQLDA
can have any valid name. You can code an SQLDA in a C program either directly
or by using the SQL INCLUDE statement. The SQL INCLUDE statement requests a
standard SQLDA declaration:
EXEC SQL INCLUDE SQLDA;
A standard declaration includes only a structure definition with the name 'sqlda'.
See Chapter 6 of DB2 SQL Reference for more information about the INCLUDE
statement and Appendix C of DB2 SQL Reference for a complete description of
SQLDA fields.
You must place SQLDA declarations before the first SQL statement that references
the data descriptor, unless you use the precompiler option TWOPASS. You can
place an SQLDA declaration wherever C allows a structure definition.
Each SQL statement in a C program must begin with EXEC SQL and end with a
semi-colon (;). The EXEC and SQL keywords must appear all on one line, but the
remainder of the statement can appear on subsequent lines.
Because C is case sensitive, you must use uppercase letters to enter all SQL
words. You must also keep the case of host variable names consistent throughout
the program. For example, if a host variable name is lowercase in its declaration, it
must be lowercase in all SQL statements.
Comments: You can include C comments (/* ... */) within SQL statements
wherever you can use a blank, except between the keywords EXEC and SQL. You
can use single-line comments (starting with //) in C language statements, but not in
embedded SQL. You cannot nest comments.
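For example, a comment can appear inside the statement wherever a blank can;
the table MYTABLE and the host variable row_count in this sketch are
hypothetical.
EXEC SQL BEGIN DECLARE SECTION;
  long row_count;                         /* receives the count               */
EXEC SQL END DECLARE SECTION;

void count_rows(void)
{
  EXEC SQL
    SELECT COUNT(*)                       /* a C comment is allowed here,     */
      INTO :row_count                     /* but not between EXEC and SQL     */
      FROM MYTABLE;                       /* MYTABLE is hypothetical          */
}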
Declaring tables and views: Your C program should use the statement DECLARE
TABLE to describe each table and view the program accesses. You can use the
DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE
statements. For details, see “Chapter 3-3. Generating declarations for your tables
using DCLGEN” on page 115.
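For example, a generated or hand-coded declaration might look like the following
sketch; the table MYDEPT and its columns are hypothetical, and DCLGEN derives
the actual declaration from the table definition in the catalog.
EXEC SQL DECLARE MYDEPT TABLE
  ( DEPTNO    CHAR(3)      NOT NULL,      /* column names, data types, and    */
    DEPTNAME  VARCHAR(36)  NOT NULL,      /* NOT NULL attributes mirror the   */
    MGRNO     CHAR(6)                     /* actual table definition          */
  );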
You cannot nest SQL INCLUDE statements. Do not use C #include statements to
include SQL statements or C host variable declarations.
Margins: Code SQL statements in columns 1 through 72, unless you specify other
margins to the DB2 precompiler. If EXEC SQL is not within the specified margins,
the DB2 precompiler does not recognize the SQL statement.
Names: You can use any valid C name for a host variable, subject to the following
restrictions:
) Do not use DBCS characters.
) Do not use external entry names or access plan names that begin with 'DSN'
and host variable names that begin with 'SQL' (in any combination of
uppercase or lowercase letters). These names are reserved for DB2.
Nulls and NULs: C and SQL differ in the way they use the word null. The C
language has a null character (NUL), a null pointer (NULL), and a null statement
(just a semicolon). The C NUL is a single character which compares equal to 0.
The C NULL is a special reserved pointer value that does not point to any valid
data object. The SQL null value is a special value that is distinct from all nonnull
values and denotes the absence of a (nonnull) value. In this chapter, NUL is the
null character in C and NULL is the SQL null value.
Sequence numbers: The source statements that the DB2 precompiler generates
do not include sequence numbers.
Statement labels: You can precede SQL statements with a label, if you wish.
Trigraphs: Some characters from the C character set are not available on all
keyboards. You can enter these characters into a C source program using a
sequence of three characters called a trigraph. The trigraphs that DB2 supports are
the same as those that the C/370 compiler supports.
WHENEVER statement: The target for the GOTO clause in an SQL WHENEVER
statement must be within the scope of any SQL statements that the statement
WHENEVER affects.
Special C considerations:
) Use of the C/370 multi-tasking facility, where multiple tasks execute SQL
statements, causes unpredictable results.
) You must run the DB2 precompiler before running the C preprocessor.
) The DB2 precompiler does not support C preprocessor directives.
) If you use conditional compiler directives that contain C code, either place them
after the first C token in your application program, or include them in the C
program using the #include preprocessor directive.
Please refer to the appropriate C documentation for further information on C
preprocessor directives.
Precede C statements that define the host variables with the statement BEGIN
DECLARE SECTION, and follow the C statements with the statement END
DECLARE SECTION. You can have more than one host variable declaration
section in your program.
The names of host variables must be unique within the program, even if the host
variables are in different blocks, classes, or procedures. You can qualify the host
variable names with a structure name to make them unique.
An SQL statement that uses a host variable must be within the scope of that
variable.
Host variables must be scalar variables or host structures; they cannot be elements
of vectors or arrays (subscripted variables) unless you use character arrays to
hold strings. You can use an array of indicator variables when you associate the
array with a host structure.
Numeric host variables: The following figure shows the syntax for valid numeric
host variable declarations.
[[──┬────────┬──┬──────────┬──┬─float─────────────────────────────────┬───────────────────────────────[
├─auto───┤ ├─const────┤ ├─double────────────────────────────────┤
├─extern─┤ └─volatile─┘ │ ┌─int─┐ │
└─static─┘ ├─┬─long──┬──┴─────┴────────────────────┤
│ └─short─┘ │
└─decimal──(──integer──┬───────────┬──)─┘
└─, integer─┘
┌─,──────────────────────────────┐
[───g─variable-name──┬─────────────┬─┴── ; ───────────────────────────────────────────────────────────[^
└─=expression─┘
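For example, the following declarations (with hypothetical names) match the
syntax shown above:
EXEC SQL BEGIN DECLARE SECTION;
  short  dept_count;                      /* maps to SMALLINT                 */
  long   emp_total;                       /* maps to INTEGER                  */
  float  bonus_rate;                      /* maps to FLOAT (single precision) */
  double avg_salary;                      /* maps to FLOAT (double precision) */
EXEC SQL END DECLARE SECTION;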
Character host variables: There are four valid forms for character host variables:
) Single-character form
) NUL-terminated character form
) VARCHAR structured form
) CLOBs
The following figures show the syntax for forms other than CLOBs. See Figure 35
on page 149 for the syntax of CLOBs.
┌─,──────────────────────────────┐
[[──┬────────┬──┬──────────┬──┬──────────┬──char───g─variable-name──┬─────────────┬─┴── ; ────────────[^
├─auto───┤ ├─const────┤ └─unsigned─┘ └─=expression─┘
├─extern─┤ └─volatile─┘
└─static─┘
┌─,────────────────────────────────────────────┐
[[──┬────────┬──┬──────────┬──┬──────────┬──char───g─variable-name──[──length──]──┬─────────────┬─┴────[
├─auto───┤ ├─const────┤ └─unsigned─┘ └─=expression─┘
├─extern─┤ └─volatile─┘
└─static─┘
[── ; ───────────────────────────────────────────────────────────────────────────────────────────────[^
Notes:
1. On input, the string contained by the variable must be NUL-terminated.
2. On output, the string is NUL-terminated.
3. A NUL-terminated character host variable maps to a varying length character
string (except for the NUL).
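For example, the following sketch (with hypothetical names) declares one
variable in each of these two forms:
EXEC SQL BEGIN DECLARE SECTION;
  char middle_initial;                    /* single-character form            */
  char last_name[16];                     /* NUL-terminated form; holds up to */
                                          /* 15 characters plus the NUL and   */
                                          /* maps to VARCHAR(15)              */
EXEC SQL END DECLARE SECTION;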
┌─int─┐
[[──┬────────┬──┬──────────┬──struct──┬─────┬── { ──short──┴─────┴──var-1── ; ────────────────────────[
├─auto───┤ ├─const────┤ └─tag─┘
├─extern─┤ └─volatile─┘
└─static─┘
[──┬──────────┬──char──var-2──[──length──]── ; ── } ──────────────────────────────────────────────────[
└─unsigned─┘
┌─,───────────────────────────────────────────────────┐
[───g─variable-name──┬─────────────────────────────┬── ; ─┴───────────────────────────────────────────[^
└─={ expression, expression }─┘
Notes:
) var-1 and var-2 must be simple variable references. You cannot use them as
host variables.
) You can use the struct tag to define other data areas, which you cannot use as
host variables.
Example:
EXEC SQL BEGIN DECLARE SECTION;
struct VARCHAR {
short len;
char s[1];
} vstring;
Graphic host variables: There are four valid forms for graphic host variables:
) Single-graphic form
) NUL-terminated graphic form
) VARGRAPHIC structured form.
) DBCLOBs
You can use the C data type wchar_t to define a host variable that inserts, updates,
deletes, and selects data from GRAPHIC or VARGRAPHIC columns.
The following figures show the syntax for forms other than DBCLOBs. See
Figure 35 on page 149 for the syntax of DBCLOBs.
┌─,──────────────────────────────┐
[[──┬────────┬──┬──────────┬──wchar_t───g─variable-name──┬─────────────┬─┴── ; ───────────────────────[^
├─auto───┤ ├─const────┤ └─=expression─┘
├─extern─┤ └─volatile─┘
└─static─┘
┌─,────────────────────────────────────────────┐
[[──┬────────┬──┬──────────┬──wchar_t───g─variable-name──[──length──]──┬─────────────┬─┴── ; ─────────[^
├─auto───┤ ├─const────┤ └─=expression─┘
├─extern─┤ └─volatile─┘
└─static─┘
Notes:
1. length must be a decimal integer constant greater than 1 and not greater than
16352.
2. On input, the string in variable-name must be NUL-terminated.
3. On output, the string is NUL-terminated.
4. The NUL-terminated graphic form does not accept single byte characters into
variable-name.
┌─int─┐
[[──┬────────┬──┬──────────┬──struct──┬─────┬── { ──short──┴─────┴──var-1── ; ────────────────────────[
├─auto───┤ ├─const────┤ └─tag─┘
├─extern─┤ └─volatile─┘
└─static─┘
┌─,─────────────────────────────────────────────┐
[──wchar_t──var-2──[──length──]── ; ── } ───g─variable-name──┬────────────────────────────┬─┴── ; ────[^
└─={ expression,expression }─┘
Notes:
) length must be a decimal integer constant greater than 1 and not greater than
16352.
) var-1 must be less than or equal to length.
) var-1 and var-2 must be simple variable references. You cannot use them as
host variables.
) You can use the struct tag to define other data areas, which you cannot use as
host variables.
Example:
EXEC SQL BEGIN DECLARE SECTION;
struct VARGRAPH {
short len;
wchar_t d[1];
} vgraph;
Result set locators: The following figure shows the syntax for declarations of
result set locators. See “Chapter 7-2. Using stored procedures for client/server
processing” on page 535 for a discussion of how to use these host variables.
| Table Locators: The following figure shows the syntax for declarations of table
| locators. See “Accessing transition tables in a user-defined function” on page 287
| for a discussion of how to use these host variables.
| LOB Variables and Locators: The following figure shows the syntax for
| declarations of BLOB, CLOB, and DBCLOB host variables and locators. See
| “Chapter 4-2. Programming for large objects (LOBs)” on page 237 for a discussion
| of how to use these host variables.
[[──┬──────────┬──┬──────────┬──SQL──TYPE──IS─────────────────────────────────────────────────────────[
├─auto─────┤ ├─const────┤
├─extern───┤ └─volatile─┘
├─static───┤
└─register─┘
┌─,─────────────────────────────┐
[──┬─┬─┬─BINARY LARGE OBJECT─┬────┬──(──length──┬───┬──)─┬───g─variable-name──┬────────────┬─┴──;─────[^
│ │ └─BLOB────────────────┘ │ ├─K─┤ │ └─init-value─┘
│ ├─┬─CHARACTER LARGE OBJECT─┬─┤ ├─M─┤ │
│ │ ├─CHAR LARGE OBJECT──────┤ │ └─G─┘ │
│ │ └─CLOB───────────────────┘ │ │
│ └─DBCLOB─────────────────────┘ │
└─┬─BLOB_LOCATOR───┬──────────────────────────────────┘
├─CLOB_LOCATOR───┤
└─DBCLOB_LOCATOR─┘
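For example, the following sketch (with hypothetical names) declares a CLOB
host variable and a BLOB locator; no space appears between the length and K:
EXEC SQL BEGIN DECLARE SECTION;
  SQL TYPE IS CLOB(128K)   resume_text;   /* CLOB host variable, 128KB max    */
  SQL TYPE IS BLOB_LOCATOR photo_loc;     /* locator for a BLOB value         */
EXEC SQL END DECLARE SECTION;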
| ROWIDs: The following figure shows the syntax for declarations of ROWID
| variables. See “Chapter 4-2. Programming for large objects (LOBs)” on page 237
| for a discussion of how to use these host variables.
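The declaration that the following paragraph refers to is not shown above. The
following is a minimal sketch of such a host structure; the field lengths are
hypothetical.
EXEC SQL BEGIN DECLARE SECTION;
  struct {
    char c1[3];                           /* character array                  */
    struct {
      short len;                          /* VARCHAR structured form:         */
      char  data[5];                      /* length halfword plus data        */
    } c2;
    char c3[2];                           /* character array                  */
  } target;
EXEC SQL END DECLARE SECTION;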
In this example, target is the name of a host structure consisting of the c1, c2, and
c3 fields. c1 and c3 are character arrays, and c2 is the host variable equivalent to
the SQL VARCHAR data type. The target host structure can be part of another host
structure but must be the deepest level of the nested structure.
The following figure shows the syntax for valid host structures.
[[──┬────────┬──┬──────────┬──┬────────┬──struct──┬─────┬──{──────────────────────────────────────────[
├─auto───┤ ├─const────┤ └─packed─┘ └─tag─┘
├─extern─┤ └─volatile─┘
└─static─┘
┌──
───────────────────────────────────────────────────────┐
[───g┬─┬─float─────────────────────────────────┬──var-1──;─┬┴──}───────────────────────────────────────[
│ ├─double────────────────────────────────┤ │
│ │ ┌─int─┐ │ │
│ ├─┬─long──┬──┴─────┴────────────────────┤ │
│ │ └─short─┘ │ │
│ ├─decimal──(──integer──┬───────────┬──)─┤ │
│ │ └─, integer─┘ │ │
│ ├─varchar structure─────────────────────┤ │
│ ├─vargraphic structure──────────────────┤ │
│ ├─SQL TYPE IS ROWID─────────────────────┤ │
│ └─LOB data type─────────────────────────┘ │
├─┬──────────┬──char──var-2──┬──────────────┬──;──────┤
│ └─unsigned─┘ └─[──length──]─┘ │
└─wchar_t──var-5──┬──────────────┬──;─────────────────┘
└─[──length──]─┘
[──variable-name──┬──────────────┬──;────────────────────────────────────────────────────────────────[^
└─= expression─┘
┌─int─┐
[[──struct──┬─────┬──{──┬────────┬──short──┴─────┴──var-3──;──────────────────────────────────────────[
└─tag─┘ └─signed─┘
[──┬──────────┬──char──var-4──[──length──]──;──}─────────────────────────────────────────────────────[^
└─unsigned─┘
┌─int─┐
[[──struct──┬─────┬──{──┬────────┬──short──┴─────┴──var-6──;──wchar_t──var-7──[──length──]──;──}─────[^
└─tag─┘ └─signed─┘
Table 10 (Page 1 of 2). SQL data types the precompiler uses for C declarations
SQLTYPE of SQLLEN of
Host Host
C Data Type Variable Variable SQL Data Type
short int 500 2 SMALLINT
long int 496 4 INTEGER
decimal(p,s)1 484 p in byte 1, DECIMAL(p,s)1
s in byte 2
float 480 4 FLOAT (single precision)
double 480 8 FLOAT (double precision)
Single-character 452 1 CHAR(1)
form
NUL-terminated 460 n VARCHAR (n-1)
character form
VARCHAR structured 448 n VARCHAR(n)
form
1<=n<=255
VARCHAR structured 456 n VARCHAR(n)
form
n>255
Single-graphic 468 1 GRAPHIC(1)
form
NUL-terminated 400 n VARGRAPHIC (n-1)
graphic form
(wchar_t)
VARGRAPHIC 464 n VARGRAPHIC(n)
structured form
1<=n<128
VARGRAPHIC 472 n VARGRAPHIC(n)
structured form
n>127
SQL TYPE IS 972 4 Result set locator2
RESULT_SET_LOCATOR
| SQL TYPE IS 976 4 Table locator2
| TABLE LIKE
| table-name
| AS LOCATOR
| SQL TYPE IS 960 4 BLOB locator2
| BLOB_LOCATOR
| SQL TYPE IS 964 4 CLOB locator2
| CLOB_LOCATOR
| SQL TYPE IS 968 4 DBCLOB locator2
| DBCLOB_LOCATOR
Table 10 (Page 2 of 2). SQL data types the precompiler uses for C declarations
SQLTYPE of SQLLEN of
Host Host
C Data Type Variable Variable SQL Data Type
| SQL TYPE IS 404 n BLOB(n)
| BLOB(n)
| 1≤n≤2147483647
| SQL TYPE IS 408 n CLOB(n)
| CLOB(n)
| 1≤n≤2147483647
| SQL TYPE IS DBCLOB(n) 412 n DBCLOB(n)3
| 1≤n≤1073741823
| SQL TYPE IS ROWID 904 40 ROWID
Notes:
1. p is the precision in SQL terminology which is the total number of digits. In C this is called the size.
s is the scale in SQL terminology which is the number of digits to the right of the decimal point. In C, this
is called the precision.
2. Do not use this data type as a column type.
3. n is the number of double-byte characters.
Table 11 helps you define host variables that receive output from the database.
You can use the table to determine the C data type that is equivalent to a given
SQL data type. For example, if you retrieve TIMESTAMP data, you can use the
table to define a suitable host variable in the program that receives the data value.
C data types with no SQL equivalent: C supports some data types and storage
classes with no SQL equivalents, for example, register storage class, typedef, and
the pointer.
SQL data types with no C equivalent: If your C compiler does not have a decimal
data type, then there is no exact equivalent for the SQL DECIMAL data type. In this
case, to hold the value of such a variable, you can use:
) An integer or floating-point variable, which converts the value. If you choose
integer, you will lose the fractional part of the number. If the decimal number
can exceed the maximum value for an integer, or if you want to preserve a
fractional value, you can use floating-point numbers. Floating-point numbers
are approximations of real numbers. Hence, when you assign a decimal
number to a floating point variable, the result could be different from the original
number.
) A character string host variable. Use the CHAR function to get a string
representation of a decimal number.
) The DECIMAL function to explicitly convert a value to a decimal data type, as
in this example:
long duration=11; /* 1 year and 1 month */
char result_dt[11];
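The SQL statement that completes this example is not shown above. The following
sketch shows one way the DECIMAL function might be used with these variables;
the table MYPROJ and the columns START_DATE and PROJNO are hypothetical. When
DB2 adds a DECIMAL(8,0) value to a date, it interprets the value as a date
duration.
EXEC SQL SELECT START_DATE + DECIMAL(:duration,8,0)
           INTO :result_dt                /* receives the DATE result         */
           FROM MYPROJ                    /* MYPROJ, START_DATE, and PROJNO   */
           WHERE PROJNO = 'MA21';         /* are hypothetical                 */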
| Floating point host variables: All floating point data is stored in DB2 in
| System/390 floating point format. However, your host variable data can be in
| System/390 floating point format or IEEE floating point format. DB2 uses the
| FLOAT(S390|IEEE) precompiler option to determine whether your floating point
| host variables are in IEEE floating point or System/390 floating point format. DB2
| does no checking to determine whether the contents of a host variable match the
| precompiler option. Therefore, you need to ensure that your floating point data
| format matches the precompiler option.
| Special Purpose C Data Types: The locator data types are C data types as well
| as SQL data types. You cannot use locators as column types. For information on
| how to use these data types, see the following sections:
| Result set locator “Chapter 7-2. Using stored procedures for client/server
| processing” on page 535
| Table locator “Accessing transition tables in a user-defined function” on
| page 287
| LOB locators “Chapter 4-2. Programming for large objects (LOBs)” on page 237
If you assign a string of length n to a NUL-terminated variable with a length that is:
) less than or equal to n, then DB2 inserts the characters into the host variable
as long as the characters fit up to length (n-1) and appends a NUL at the end
of the string. DB2 sets SQLWARN[1] to W and any indicator variable you
provide to the original length of the source string.
) equal to n+1, then DB2 inserts the characters into the host variable and
appends a NUL at the end of the string.
) greater than n+1, then the rules depend on whether the source string is a value
of a fixed-length string column or a varying-length string column. See Chapter 3
of DB2 SQL Reference for more information.
Truncation: Be careful of truncation. Ensure the host variable you declare can
contain the data and a NUL terminator, if needed. Retrieving a floating-point or
decimal column value into a long integer host variable removes any fractional part
of the value.
In SQL, you can use quotes to delimit identifiers and apostrophes to delimit string
constants. The following examples illustrate the use of apostrophes and quotes in
SQL.
Quotes
SELECT "COL#1" FROM TBL1;
Apostrophes
SELECT COL1 FROM TBL1 WHERE COL2 = 'BELL';
Varying-length strings: For varying-length BIT data, use the VARCHAR structured
form. Some C string manipulation functions process NUL-terminated strings and
others process strings that are not NUL-terminated. The C string manipulation
functions that process NUL-terminated strings cannot handle bit data; the functions
might misinterpret a NUL character to be a NUL-terminator.
When your program uses X to assign a null value to a column, the program should
set the indicator variable to a negative number. DB2 then assigns a null value to
the column and ignores any value in X.
You declare indicator variables in the same way as host variables. You can mix the
declarations of the two types of variables in any way that seems appropriate. For
more information about indicator variables, see “Using indicator variables with host
variables” on page 98.
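For example, the following sketch (hypothetical table, column, and variable
names) assigns a null value to a column by setting the indicator variable to a
negative number:
EXEC SQL BEGIN DECLARE SECTION;
  char  phone_no[16];                     /* host variable                    */
  short phone_ind;                        /* indicator variable               */
EXEC SQL END DECLARE SECTION;

void clear_phone(void)
{
  phone_ind = -1;                         /* negative: store a null value;    */
                                          /* DB2 ignores the contents of      */
                                          /* :phone_no                        */
  EXEC SQL UPDATE MYEMP
             SET PHONENO = :phone_no :phone_ind
             WHERE EMPNO = '000010';
}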
Example:
The following figure shows the syntax for a valid indicator variable.
┌─int─┐ ┌─,─────────────┐
[[──┬────────┬──┬──────────┬──┬────────┬──short──┴─────┴───g─variable-name─┴──;───────────────────────[^
├─auto───┤ ├─const────┤ └─signed─┘
├─extern─┤ └─volatile─┘
└─static─┘
The following figure shows the syntax for a valid indicator array.
┌─int─┐
[[──┬────────┬──┬──────────┬──┬────────┬──short──┴─────┴──────────────────────────────────────────────[
├─auto───┤ ├─const────┤ └─signed─┘
├─extern─┤ └─volatile─┘
└─static─┘
┌─,────────────────────────────────────────────────────┐
[───g─variable-name──[──dimension──]──┬───────────────┬──;─┴──────────────────────────────────────────[^
└─=──expression─┘
Note:
DSNTIAR syntax
&message
An output area, in VARCHAR format, in which DSNTIAR places the
message text. The first halfword contains the length of the remaining area;
its minimum value is 240.
The output lines of text, each line being the length specified in &lrecl, are
put into this area. For example, you could specify the format of the output
area as:
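The layout of the output area is not shown above. The following is a minimal
sketch of one way to define it in C; the names data_dim and data_len are
hypothetical.
#define data_dim 10                       /* number of message lines          */
#define data_len 132                      /* length of each line (lrecl)      */

struct error_struct {                     /* varying-length output area:      */
  short int error_len;                    /* halfword length field followed   */
  char      error_text[data_dim][data_len];  /* by the text lines             */
} error_message = {data_dim * data_len};  /* initialize the length field      */
You could then pass &error_message as the message parameter and a fullword that
contains data_len as the lrecl parameter.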
For C, include:
#pragma linkage (dsntiar,OS)
CICS
If your CICS application requires CICS storage handling, you must use the
subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
DSNTIAC has extra parameters, which you must use for calls to routines that
use CICS commands.
&eib EXEC interface block
&commarea communication area
For more information on these new parameters, see the appropriate application
programming guide for CICS. The remaining parameter descriptions are the
same as those for DSNTIAR. Both DSNTIAC and DSNTIAR format the SQLCA
in the same way.
You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you
must also define them in the CSD. For an example of CSD entry generation
statements for use with DSNTIAC, see job DSNTEJ5A.
The assembler source code for DSNTIAC and job DSNTEJ5A, which
assembles and link-edits DSNTIAC, are in the data set prefix.SDSNSAMP.
Using C++ data types as host variables: You can use class members as host
variables. Class members used as host variables are accessible to any SQL
statement within the class.
Except where noted otherwise, this information pertains to all COBOL compilers
supported by DB2 for OS/390.
DB2 sets the SQLCODE and SQLSTATE values after each SQL statement
executes. An application can check these variable values to determine whether the
last SQL statement was successful. All SQL statements in the program must be
within the scope of the declaration of the SQLCODE and SQLSTATE variables.
When you use the precompiler option STDSQL(YES), you must declare an
SQLCODE variable. DB2 declares an SQLCA area for you in the
WORKING-STORAGE SECTION. DB2 controls that SQLCA, so your application
programs should not make assumptions about its structure or location.
You can specify INCLUDE SQLCA or a declaration for SQLCODE wherever you
can specify a 77 level or a record description entry in the WORKING-STORAGE
SECTION. You can declare a stand-alone SQLCODE variable in either the
WORKING-STORAGE SECTION or LINKAGE SECTION.
See Chapter 6 of DB2 SQL Reference for more information about the INCLUDE
statement and Appendix C of DB2 SQL Reference for a complete description of
SQLCA fields.
Unlike the SQLCA, there can be more than one SQLDA in a program, and an
SQLDA can have any valid name. The DB2 SQL INCLUDE statement does not
provide an SQLDA mapping for COBOL. You can define the SQLDA using one of
the following two methods:
) For COBOL programs compiled with any compiler except the OS/VS COBOL
compiler, you can code the SQLDA declarations in your program. For more
information, see “Using dynamic SQL in COBOL” on page 533. You must place
SQLDA declarations in the WORKING-STORAGE SECTION or LINKAGE
SECTION of your program, wherever you can specify a record description entry
in that section.
) For COBOL programs compiled with any COBOL compiler, you can call a
subroutine (written in C, PL/I, or assembler language) that uses the DB2
INCLUDE SQLDA statement to define the SQLDA. The subroutine can also
include SQL statements for any dynamic SQL functions you need. You must
use this method if you compile your program using OS/VS COBOL. The
SQLDA definition includes the POINTER data type, which OS/VS COBOL does
not support. For more information on using dynamic SQL, see “Chapter 7-1.
Coding dynamic SQL in application programs” on page 503.
You must place SQLDA declarations before the first SQL statement that references
the data descriptor. An SQL statement that uses a host variable must be within the
scope of the statement that declares the variable.
Each SQL statement in a COBOL program must begin with EXEC SQL and end
with END-EXEC. If the SQL statement appears between two COBOL statements,
the period is optional and might not be appropriate. If the statement appears in an
IF...THEN set of COBOL statements, leave off the ending period to avoid
inadvertently ending the IF statement. The EXEC and SQL keywords must appear
on one line, but the remainder of the statement can appear on subsequent lines.
EXEC SQL
UPDATE DSN8610.DEPT
SET MGRNO = :MGR-NUM
WHERE DEPTNO = :INT-DEPT
END-EXEC.
In addition, you can include SQL comments in any embedded SQL statement if you
specify the precompiler option STDSQL(YES).
Continuation for SQL statements: The rules for continuing a character string
constant from one line to the next in an SQL statement embedded in a COBOL
program are the same as those for continuing a non-numeric literal in COBOL.
However, you can use either a quotation mark or an apostrophe as the first
nonblank character in area B of the continuation line. The same rule applies for the
continuation of delimited identifiers and does not depend on the string delimiter
option.
Declaring tables and views: Your COBOL program should include the statement
DECLARE TABLE to describe each table and view the program accesses. You can
use the DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE
statements. You should include the DCLGEN members in the DATA DIVISION. For
details, see “Chapter 3-3. Generating declarations for your tables using DCLGEN”
on page 115.
You cannot nest SQL INCLUDE statements. Do not use COBOL verbs to include
SQL statements or COBOL host variable declarations, or use the SQL INCLUDE
statement to include CICS preprocessor related code. In general, use the SQL
INCLUDE only for SQL-related coding.
Margins: Code SQL statements in columns 12 through 72. If EXEC SQL starts
before column 12, the DB2 precompiler does not recognize the SQL statement.
The precompiler option MARGINS allows you to set new left and right margins
between 1 and 80. However, you must not code the statement EXEC SQL before
column 12.
Names: You can use any valid COBOL name for a host variable. Do not use
external entry names or access plan names that begin with 'DSN' and host
variable names that begin with 'SQL'. These names are reserved for DB2.
Sequence numbers: The source statements that the DB2 precompiler generates
do not include sequence numbers.
WHENEVER statement: The target for the GOTO clause in an SQL statement
WHENEVER must be a section name or unqualified paragraph name in the
PROCEDURE DIVISION.
– TRUNC(OPT) if you are certain that the data being moved to each binary
variable by the application does not have a larger precision than defined in
the PICTURE clause of the binary variable.
– TRUNC(BIN) if the precision of data being moved to each binary variable
might exceed the value in the PICTURE clause.
DB2 assigns values to COBOL binary integer host variables as if you had
specified the COBOL compiler option TRUNC(BIN).
) If a COBOL program contains several entry points or is called several times,
the USING clause of the entry statement that executes before the first SQL
statement executes must contain the SQLCA and all linkage section entries
that any SQL statement uses as host variables.
) The REPLACE statement has no effect on SQL statements. It affects only the
COBOL statements that the precompiler generates.
) Do not use COBOL figurative constants (such as ZERO and SPACE), symbolic
characters, reference modification, and subscripts within SQL statements.
) Observe the rules in Chapter 3 of DB2 SQL Reference when you name SQL
identifiers.
) Observe these rules for hyphens:
– Surround hyphens used as subtraction operators with spaces. DB2 usually
interprets a hyphen with no spaces around it as part of a host variable
name.
| – You can use hyphens in SQL identifiers under either of the following
| circumstances:
| - The application program is a local application that runs on DB2 UDB for
| OS/390 Version 6 or later.
| - The application program accesses remote sites, and the local site and
| remote sites are DB2 UDB for OS/390 Version 6 or later.
) If you include an SQL statement in a COBOL PERFORM ... THRU paragraph and
also specify the SQL statement WHENEVER ... GO, then the COBOL compiler
returns the warning message IGYOP3094. That message might indicate a
problem, depending on the intention behind the code. The usage is not
advised.
) If you are using VS COBOL II or COBOL/370 with the option NOCMPR2, then
the following additional restrictions apply:
– All SQL statements and any host variables they reference must be within
the first program when using nested programs or batch compilation.
– DB2 COBOL programs must have a DATA DIVISION and a PROCEDURE
DIVISION. Both divisions and the WORKING-STORAGE section must be
present in programs that use the DB2 precompiler.
If you pass host variables with address changes into a program more than once,
then the called program must reset SQL-INIT-FLAG. Resetting this flag indicates
that the storage must initialize when the next SQL statement executes. To reset the
flag, insert the statement MOVE ZERO TO SQL-INIT-FLAG in the called program's
PROCEDURE DIVISION, ahead of any executable SQL statements that use the
host variables.
You can precede COBOL statements that define the host variables with the
statement BEGIN DECLARE SECTION, and follow the statements with the
statement END DECLARE SECTION. You must use the statements BEGIN
DECLARE SECTION and END DECLARE SECTION when you use the precompiler
option STDSQL(YES).
The names of host variables should be unique within the source data set or
member, even if the host variables are in different blocks, classes, or procedures.
You can qualify the host variable names with a structure name to make them
unique.
An SQL statement that uses a host variable must be within the scope of the
statement that declares the variable.
You cannot define host variables, other than indicator variables, as arrays. You can
specify OCCURS only when defining an indicator structure. You cannot specify
OCCURS for any other type of host variable.
Numeric host variables: The following figures show the syntax for valid numeric
host variable declarations.
[[──┬─1──────┬──variable-name──┬───────────────┬──┬─COMPUTATIONAL-1─┬────────────────────────────────[
├─77──────┤ │ ┌─IS─┐ │ ├─COMP-1──────────┤
└─level-1─┘ └─USAGE──┴────┴─┘ ├─COMPUTATIONAL-2─┤
└─COMP-2──────────┘
[──┬─────────────────────────────────┬── . ──────────────────────────────────────────────────────────[^
│ ┌─IS─┐ │
└─VALUE──┴────┴──numeric-constant─┘
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. COMPUTATIONAL-1 and COMP-1 are equivalent.
┌─IS─┐
[[──┬─1──────┬──variable-name──┬─PICTURE─┬──┴────┴──┬─S9(4)──────┬──┬───────────────┬────────────────[
├─77──────┤ └─PIC─────┘ ├─S9999──────┤ │ ┌─IS─┐ │
└─level-1─┘ ├─S9(9)──────┤ └─USAGE──┴────┴─┘
└─S999999999─┘
[──┬─BINARY──────────┬──┬─────────────────────────────────┬── . ─────────────────────────────────────[^
├─COMPUTATIONAL-4─┤ │ ┌─IS─┐ │
├─COMP-4──────────┤ └─VALUE──┴────┴──numeric-constant─┘
├─COMPUTATIONAL───┤
└─COMP────────────┘
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. BINARY, COMP, COMPUTATIONAL, COMPUTATIONAL-4, COMP-4 are
equivalent.
3. Any specification for scale is ignored.
┌─IS─┐
[[──┬─1──────┬──variable-name──┬─PICTURE─┬──┴────┴──picture-string──┬───────────────┬────────────────[
├─77──────┤ └─PIC─────┘ │ ┌─IS─┐ │
└─level-1─┘ └─USAGE──┴────┴─┘
[──┬─┬─PACKED-DECIMAL──┬───────────────────────────────────┬──┬─────────────────────────────────┬─────[
│ ├─COMPUTATIONAL-3─┤ │ │ ┌─IS─┐ │
│ └─COMP-3──────────┘ │ └─VALUE──┴────┴──numeric-constant─┘
│ ┌─IS─┐ ┌─CHARACTER─┐ │
└─DISPLAY SIGN──┴────┴──LEADING SEPARATE──┴───────────┴─┘
[── . ───────────────────────────────────────────────────────────────────────────────────────────────[^
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. PACKED-DECIMAL, COMPUTATIONAL-3, and COMP-3 are equivalent. The
picture-string associated with these types must have the form S9(i)V9(d) (or
S9...9V9...9, with i and d instances of 9) or S9(i)V.
3. The picture-string associated with SIGN LEADING SEPARATE must have the
form S9(i)V9(d) (or S9...9V9...9, with i and d instances of 9 or S9...9V with i
instances of 9).
Character host variables: There are three valid forms of character host variables:
) Fixed-length strings
) Varying-length strings
| ) CLOBs
The following figures show the syntax for forms other than CLOBs. See Figure 52
on page 170 for the syntax of CLOBs.
┌─IS─┐
[[──┬─1──────┬──variable-name──┬─PICTURE─┬──┴────┴──picture-string──┬────────────────────────────┬───[
├─77──────┤ └─PIC─────┘ └─┬───────────────┬──DISPLAY─┘
└─level-1─┘ │ ┌─IS─┐ │
└─USAGE──┴────┴─┘
[──┬───────────────────────────────────┬── . ────────────────────────────────────────────────────────[^
│ ┌─IS─┐ │
└─VALUE──┴────┴──character-constant─┘
Note:
[[──┬─1──────┬──variable-name── . ──────────────────────────────────────────────────────────────────[^
└─level-1─┘
┌─IS─┐
[[──49──var-1──┬─PICTURE─┬──┴────┴──┬─S9(4)─┬──┬───────────────┬──┬─BINARY──────────┬─────────────────[
└─PIC─────┘ └─S9999─┘ │ ┌─IS─┐ │ ├─COMPUTATIONAL-4─┤
└─USAGE──┴────┴─┘ └─COMP-4──────────┘
[──┬─────────────────────────────────┬── . ──────────────────────────────────────────────────────────[^
│ ┌─IS─┐ │
└─VALUE──┴────┴──numeric-constant─┘
┌─IS─┐
[[──49──var-2──┬─PICTURE─┬──┴────┴──picture-string──┬────────────────────────────┬────────────────────[
└─PIC─────┘ └─┬───────────────┬──DISPLAY─┘
│ ┌─IS─┐ │
└─USAGE──┴────┴─┘
[──┬───────────────────────────────────┬── . ────────────────────────────────────────────────────────[^
│ ┌─IS─┐ │
└─VALUE──┴────┴──character-constant─┘
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. The picture-string associated with these forms must be X(m) (or XX...X, with m
instances of X), with 1 <= m <= 255 for fixed-length strings; for other strings, m
cannot be greater than the maximum size of a varying-length character string.
DB2 uses the full length of the S9(4) variable even though IBM COBOL for
MVS and VM only recognizes values up to 9999. This can cause data
truncation errors when COBOL statements execute and might effectively limit
the maximum length of variable-length character strings to 9999. Consider
using the TRUNC(OPT) or NOTRUNC COBOL compiler option (whichever is
appropriate) to avoid data truncation.
3. You cannot directly reference var-1 and var-2 as host variables.
Graphic character host variables: There are three valid forms for graphic
character host variables:
) Fixed-length strings
) Varying-length strings
| ) DBCLOBs
The following figures show the syntax for forms other than DBCLOBs. See
Figure 52 on page 170 for the syntax of DBCLOBs.
┌─IS─┐
[[──┬─1──────┬──variable-name──┬─PICTURE─┬──┴────┴──picture-string──────────────────────────────────[^
├─level-1─┤ └─PIC─────┘
└─77──────┘
┌─IS─┐
[[──USAGE──┴────┴──DISPLAY-1─────────────────────────────────────────────────────────────────────────[^
[[──┬─────────────────────────────────┬── . ─────────────────────────────────────────────────────────[^
│ ┌─IS─┐ │
└─VALUE──┴────┴──graphic-constant─┘
Note:
┌─IS─┐
[[──┬─1──────┬──variable-name── . ──49──var-1──┬─PICTURE─┬──┴────┴──┬─S9(4)─┬──┬───────────────┬─────[
└─level-1─┘ └─PIC─────┘ └─S9999─┘ │ ┌─IS─┐ │
└─USAGE──┴────┴─┘
[──┬─BINARY──────────┬──┬─────────────────────────────────┬── . ──49──var-2──┬─PICTURE─┬──────────────[
├─COMPUTATIONAL-4─┤ │ ┌─IS─┐ │ └─PIC─────┘
└─COMP-4──────────┘ └─VALUE──┴────┴──numeric-constant─┘
┌─IS─┐ ┌─IS─┐
[──┴────┴──picture-string──USAGE──┴────┴──DISPLAY-1──┬─────────────────────────────────┬── . ────────[^
│ ┌─IS─┐ │
└─VALUE──┴────┴──graphic-constant─┘
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. The picture-string associated with these forms must be G(m) (or GG...G, with m
instances of G), with 1 <= m <= 127 for fixed-length strings. You can use N in
place of G for COBOL graphic variable declarations. If you use N for graphic
variable declarations, USAGE DISPLAY-1 is optional. For strings other than
fixed-length, m cannot be greater than the maximum size of a varying-length
graphic string.
DB2 uses the full size of the S9(4) variable even though some COBOL
implementations restrict the maximum length of varying-length graphic strings to
9999. This can cause data truncation errors when COBOL statements execute
and might effectively limit the maximum length of variable-length graphic strings
to 9999. Consider using the TRUNC(OPT) or NOTRUNC COBOL compiler
option (whichever is appropriate) to avoid data truncation.
3. You cannot directly reference var-1 and var-2 as host variables.
Result set locators: The following figure shows the syntax for declarations of
result set locators. See “Chapter 7-2. Using stored procedures for client/server
processing” on page 535 for a discussion of how to use these host variables.
| Table Locators: The following figure shows the syntax for declarations of table
| locators. See “Accessing transition tables in a user-defined function” on page 287
| for a discussion of how to use these host variables.
Note:
| LOB Variables and Locators: The following figure shows the syntax for
| declarations of BLOB, CLOB, and DBCLOB host variables and locators. See
| “Chapter 4-2. Programming for large objects (LOBs)” on page 237 for a discussion
| of how to use these host variables.
[[──┬─1──────┬──variable-name──┬───────────────┬──SQL──TYPE──IS──────────────────────────────────────[
└─level-1─┘ └─USAGE──┬────┬─┘
└─IS─┘
[──┬─┬─┬─BINARY LARGE OBJECT─┬────┬──(──length──┬───┬──)─┬───────────────────────────────────────────[^
│ │ └─BLOB────────────────┘ │ ├─K─┤ │
│ ├─┬─CHARACTER LARGE OBJECT─┬─┤ ├─M─┤ │
│ │ ├─CHAR LARGE OBJECT──────┤ │ └─G─┘ │
│ │ └─CLOB───────────────────┘ │ │
│ └─DBCLOB─────────────────────┘ │
└─┬─BLOB-LOCATOR───┬──────────────────────────────────┘
├─CLOB-LOCATOR───┤
└─DBCLOB-LOCATOR─┘
Note:
| ROWIDs: The following figure shows the syntax for declarations of ROWID
| variables. See “Chapter 4-2. Programming for large objects (LOBs)” on page 237
| for a discussion of how to use these host variables.
Note:
A host structure name can be a group name whose subordinate levels name
elementary data items. In the following example, B is the name of a host structure
consisting of the elementary items C1 and C2.
1 A
2 B
3 C1 PICTURE ...
3 C2 PICTURE ...
When you write an SQL statement using a qualified host variable name (perhaps to
identify a field within a structure), use the name of the structure followed by a
period and the name of the field. For example, specify B.C1 rather than C1 OF B or
C1 IN B.
The precompiler does not recognize host variables or host structures on any
subordinate levels after one of these items:
) A COBOL item that must begin in area A
) Any SQL statement (except SQL INCLUDE)
) Any SQL statement within an included member
When the precompiler encounters one of the above items in a host structure, it
therefore considers the structure to be complete.
Figure 54 on page 172 shows the syntax for valid host structures.
[[──level-1──variable-name──.─────────────────────────────────────────────────────────────────────────[
┌──
───────────────────────────────────────────────────────────────────────────────────────────┐
[───g─level-2──var-1──┬─┬───────────────┬────┬─COMPUTATIONAL-1─┬────.─────────────────────────┬─┴─────[^
│ │ ┌─IS─┐ │ ├─COMP-1──────────┤ │
│ └─USAGE──┴────┴─┘ ├─COMPUTATIONAL-2─┤ │
│ └─COMP-2──────────┘ │
│ ┌─IS─┐ │
├─┬─PICTURE─┬──┴────┴──┬────────────────┬──usage-clause──.──────────────┤
│ └─PIC─────┘ └─picture-string─┘ │
├─char-inner-variable──.────────────────────────────────────────────────┤
├─varchar-inner-variables───────────────────────────────────────────────┤
├─vargraphic-inner-variables────────────────────────────────────────────┤
├─┬───────────────┬──SQL TYPE IS ROWID──.───────────────────────────────┤
│ └─USAGE──┬────┬─┘ │
│ └─IS─┘ │
├─┬───────────────┬──SQL TYPE IS──TABLE LIKE──table-name──AS LOCATOR──.─┤
│ └─USAGE──┬────┬─┘ │
│ └─IS─┘ │
└─┬───────────────┬──LOB data type──.───────────────────────────────────┘
└─USAGE──┬────┬─┘
└─IS─┘
[[──┬───────────────┬──┬─┬─BINARY──────────┬───────────────────────────────────┬──────────────────────[
│ ┌─IS─┐ │ │ ├─COMPUTATIONAL-4─┤ │
└─USAGE──┴────┴─┘ │ ├─COMP-4──────────┤ │
│ ├─COMPUTATIONAL───┤ │
│ └─COMP────────────┘ │
├─┬─PACKED-DECIMAL──┬───────────────────────────────────┤
│ ├─COMPUTATIONAL-3─┤ │
│ └─COMP-3──────────┘ │
│ ┌─IS─┐ │
└─DISPLAY SIGN──┴────┴──LEADING SEPARATE──┬───────────┬─┘
└─CHARACTER─┘
[──┬─────────────────────────┬───────────────────────────────────────────────────────────────────────[^
│ ┌─IS─┐ │
└─VALUE──┴────┴──constant─┘
┌─IS─┐
[[──┬─PICTURE─┬──┴────┴──picture-string──┬────────────────────────────┬───────────────────────────────[
└─PIC─────┘ └─┬───────────────┬──DISPLAY─┘
│ ┌─IS─┐ │
└─USAGE──┴────┴─┘
[──┬─────────────────────────┬───────────────────────────────────────────────────────────────────────[^
│ ┌─IS─┐ │
└─VALUE──┴────┴──constant─┘
┌─IS─┐
[[──49──var-2──┬─PICTURE─┬──┴────┴──┬─S9(4)─┬──┬───────────────┬──┬─BINARY──────────┬─────────────────[
└─PIC─────┘ └─S9999─┘ │ ┌─IS─┐ │ ├─COMPUTATIONAL-4─┤
└─USAGE──┴────┴─┘ ├─COMP-4──────────┤
├─COMPUTATIONAL───┤
└─COMP────────────┘
┌─IS─┐
[──┬─────────────────────────────────┬──.──49──var-3──┬─PICTURE─┬──┴────┴──picture-string─────────────[
│ ┌─IS─┐ │ └─PIC─────┘
└─VALUE──┴────┴──numeric-constant─┘
[──┬────────────────────────────┬──┬─────────────────────────┬──.────────────────────────────────────[^
└─┬───────────────┬──DISPLAY─┘ │ ┌─IS─┐ │
│ ┌─IS─┐ │ └─VALUE──┴────┴──constant─┘
└─USAGE──┴────┴─┘
┌─IS─┐
[[──49──var-4──┬─PICTURE─┬──┴────┴──┬─S9(4)─┬──┬───────────────┬──┬─BINARY──────────┬─────────────────[
└─PIC─────┘ └─S9999─┘ │ ┌─IS─┐ │ ├─COMPUTATIONAL-4─┤
└─USAGE──┴────┴─┘ ├─COMP-4──────────┤
├─COMPUTATIONAL───┤
└─COMP────────────┘
┌─IS─┐
[──┬─────────────────────────────────┬──.──49──var-5──┬─PICTURE─┬──┴────┴──picture-string─────────────[
│ ┌─IS─┐ │ └─PIC─────┘
└─VALUE──┴────┴──numeric-constant─┘
[──┬──────────────────────────────┬──┬─────────────────────────────────┬──.──────────────────────────[^
└─┬───────────────┬──DISPLAY-1─┘ │ ┌─IS─┐ │
│ ┌─IS─┐ │ └─VALUE──┴────┴──graphic-constant─┘
└─USAGE──┴────┴─┘
|
| [[──SQL──TYPE──IS──┬─┬─┬─BINARY LARGE OBJECT─┬────┬──(──length──┬───┬──)─┬───────────────────────────[^
| │ │ └─BLOB────────────────┘ │ ├─K─┤ │
| │ ├─┬─CHARACTER LARGE OBJECT─┬─┤ ├─M─┤ │
| │ │ ├─CHAR LARGE OBJECT──────┤ │ └─G─┘ │
| │ │ └─CLOB───────────────────┘ │ │
| │ └─DBCLOB─────────────────────┘ │
| └─┬─BLOB-LOCATOR───┬──────────────────────────────────┘
| ├─CLOB-LOCATOR───┤
| └─DBCLOB-LOCATOR─┘
Notes:
1. level-1 indicates a COBOL level between 1 and 47.
2. level-2 indicates a COBOL level between 2 and 48.
3. For elements within a structure use any level 02 through 48 (rather than 01 or
77), up to a maximum of two levels.
4. Using a FILLER or optional FILLER item within a host structure declaration can
invalidate the whole structure.
5. You cannot use picture-string for floating point elements but must use it for
other data types.
Table 13 (Page 1 of 2). SQL data types the precompiler uses for COBOL declarations
SQLTYPE of SQLLEN of Host
COBOL Data Type Host Variable Variable SQL Data Type
COMP-1 480 4 REAL or FLOAT(n)
1<=n<=21
COMP-2 480 8 DOUBLE PRECISION,
or FLOAT(n)
22<=n<=53
S9(i)V9(d) COMP-3 or 484 i+d in byte 1, d in byte DECIMAL(i+d,d) or
S9(i)V9(d) PACKED-DECIMAL 2 NUMERIC(i+d,d)
S9(i)V9(d) DISPLAY SIGN 504 i+d in byte 1, d in byte No exact equivalent. Use
LEADING SEPARATE 2 DECIMAL(i+d,d) or
NUMERIC(i+d,d)
S9(4) COMP-4 or BINARY 500 2 SMALLINT
S9(9) COMP-4 or BINARY 496 4 INTEGER
Fixed-length character data 452 m CHAR(m)
Varying-length character data 448 m VARCHAR(m)
1<=m<=255
Varying-length character data 456 m VARCHAR(m)
m>255
Fixed-length graphic data 468 m GRAPHIC(m)
Varying-length graphic data 464 m VARGRAPHIC(m)
1<=m<=127
Varying-length graphic data 472 m VARGRAPHIC(m)
m>127
SQL TYPE IS 972 4 Result set locator1
RESULT-SET-LOCATOR
| SQL TYPE IS 976 4 Table locator1
| TABLE LIKE table-name
| AS LOCATOR
| SQL TYPE IS 960 4 BLOB locator1
| BLOB-LOCATOR
| SQL TYPE IS 964 4 CLOB locator1
| CLOB-LOCATOR
| USAGE IS 968 4 DBCLOB locator1
| SQL TYPE IS
| DBCLOB-LOCATOR
| USAGE IS SQL TYPE IS 404 n BLOB(n)
| BLOB(n) 1≤n≤2147483647
Table 13 (Page 2 of 2). SQL data types the precompiler uses for COBOL declarations
SQLTYPE of SQLLEN of Host
COBOL Data Type Host Variable Variable SQL Data Type
| USAGE IS SQL TYPE IS 408 n CLOB(n)
| CLOB(n) 1≤n≤2147483647
| USAGE IS SQL TYPE IS 412 n DBCLOB(m)2
| DBCLOB(m) 1≤m≤1073741823
| SQL TYPE IS ROWID 904 40 ROWID
Notes:
1. Do not use this data type as a column type.
2. m is the number of double-byte characters.
Table 14 helps you define host variables that receive output from the database.
You can use the table to determine the COBOL data type that is equivalent to a
given SQL data type. For example, if you retrieve TIMESTAMP data, you can use
the table to define a suitable host variable in the program that receives the data
value.
Table 14 (Page 1 of 2). SQL data types mapped to typical COBOL declarations
SQL Data Type COBOL Data Type Notes
SMALLINT S9(4) COMP-4 or BINARY
INTEGER S9(9) COMP-4 or BINARY
# DECIMAL(p,s) or S9(p-s)V9(s) COMP-3 or p is precision; s is scale. 0<=s<=p<=31. If
# NUMERIC(p,s) S9(p-s)V9(s) s=0, use S9(p)V or S9(p). If s=p, use
# PACKED-DECIMAL SV9(s). If the COBOL compiler does not
# DISPLAY SIGN support 31–digit decimal numbers, there
# LEADING SEPARATE is no exact equivalent. Use COMP-2.
REAL or FLOAT (n) COMP-1 1<=n<=21
DOUBLE PRECISION, COMP-2 22<=n<=53
DOUBLE
or FLOAT (n)
CHAR(n) fixed-length character string 1<=n<=255
VARCHAR(n) varying-length character string
GRAPHIC(n) fixed-length graphic string n refers to the number
of double-byte characters, not
to the number of bytes.
1<=n<=127
VARGRAPHIC(n) varying-length graphic string n refers to the number
of double-byte characters, not
to the number of bytes.
DATE fixed-length character string If you are using a date exit routine, n is
of length n determined by that routine. Otherwise, n
must be at least 10.
TIME fixed-length character string If you are using a time exit routine, n is
of length n determined by that routine. Otherwise, n
must be at least 6; to include seconds, n
must be at least 8.
Table 14 (Page 2 of 2). SQL data types mapped to typical COBOL declarations
SQL Data Type COBOL Data Type Notes
TIMESTAMP fixed-length character string n must be at least 19. To include
microseconds, n must be 26; if n is less
than 26, truncation occurs on the
microseconds part.
Result set locator SQL TYPE IS Use this data type only for
RESULT-SET-LOCATOR receiving result sets.
Do not use this data type as a
column type.
| Table locator SQL TYPE IS Use this data type only in a user-defined
| TABLE LIKE table-name function or stored procedure to receive
| AS LOCATOR rows of a transition table. Do not use this
| data type as a column type.
| BLOB locator USAGE IS Use this data type only to manipulate data
| SQL TYPE IS in BLOB columns. Do not use this data
| BLOB-LOCATOR type as a column type.
| CLOB locator USAGE IS Use this data type only to manipulate data
| SQL TYPE IS in CLOB columns. Do not use this data
| CLOB-LOCATOR type as a column type.
| DBCLOB locator USAGE IS Use this data type only to manipulate data
| SQL TYPE IS in DBCLOB columns. Do not use this data
| DBCLOB-LOCATOR type as a column type.
| BLOB(n) USAGE IS 1≤n≤2147483647
| SQL TYPE IS
| BLOB(n)
| CLOB(n) USAGE IS SQL TYPE IS CLOB(n) 1≤n≤2147483647
| DBCLOB(n) USAGE IS n is the number of double-byte
| SQL TYPE IS characters. 1≤n≤1073741823
| DBCLOB(n)
| ROWID SQL TYPE IS ROWID
# SQL data types with no COBOL equivalent: If you are using a COBOL compiler
# that does not support decimal numbers of more than 18 digits, use one of the
# following data types to hold values of greater than 18 digits:
) A decimal variable with a precision less than or equal to 18, if the actual data
values fit. If you retrieve a decimal value into a decimal variable with a scale
that is less than the source column in the database, then the fractional part of
the value could be truncated.
) An integer or a floating-point variable, which converts the value. If you choose
integer, you lose the fractional part of the number. If the decimal number could
exceed the maximum value for an integer or, if you want to preserve a
fractional value, you can use floating point numbers. Floating-point numbers
are approximations of real numbers. Hence, when you assign a decimal
number to a floating point variable, the result could be different from the original
number.
) A character string host variable. Use the CHAR function to retrieve a decimal
value into it.
| Special Purpose COBOL Data Types: The locator data types are COBOL data
| types as well as SQL data types. You cannot use locators as column types. For
| information on how to use these data types, see the following sections:
| Result set locator “Chapter 7-2. Using stored procedures for client/server
| processing” on page 535
| Table locator “Accessing transition tables in a user-defined function” on
| page 287
| LOB locators “Chapter 4-2. Programming for large objects (LOBs)” on page 237
Level 77 data description entries: One or more REDEFINES entries can follow
any level 77 data description entry. However, you cannot use the names in these
entries in SQL statements. Entries with the name FILLER are ignored.
SMALLINT and INTEGER data types: In COBOL, you declare the SMALLINT and
INTEGER data types as a number of decimal digits. DB2 uses the full size of the
integers (in a way that is similar to processing with the COBOL options
TRUNC(OPT) or NOTRUNC) and can place larger values in the host variable than
would be allowed in the specified number of digits in the COBOL declaration.
However, this can cause data truncation when COBOL statements execute. Ensure
that the size of numbers in your application is within the declared number of digits.
For small integers that can exceed 9999, use S9(5) COMP. For large integers that
can exceed 999,999,999, use S9(10) COMP-3 to obtain the decimal data type. If
you use COBOL for integers that exceed the COBOL PICTURE, then specify the
column as decimal to ensure that the data types match and perform well.
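For example, if a count column must hold values that can exceed 999,999,999, you
might define the column as DECIMAL so that it matches a packed-decimal COBOL host
variable; the table and column names in this sketch are illustrative only:

   CREATE TABLE PART_COUNTS
     (PART_NO  CHAR(8)        NOT NULL,
      ON_HAND  DECIMAL(10,0)  NOT NULL)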
When your program uses X to assign a null value to a column, the program should
set the indicator variable to a negative number. DB2 then assigns a null value to
the column and ignores any value in X.
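For example, in the following sketch the host variable and indicator variable
names are illustrative only. If the program sets :PHONE-IND to a negative value
before the statement executes, DB2 assigns a null value to PHONENO and ignores
the contents of :PHONE-HV:

   UPDATE EMP
     SET PHONENO = :PHONE-HV :PHONE-IND
     WHERE EMPNO = :EMPNO-HV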
You declare indicator variables in the same way as host variables. You can mix the
declarations of the two types of variables in any way that seems appropriate.
Example
The following figure shows the syntax for a valid indicator variable.
┌─IS─┐
[[──┬─1─┬──variable-name──┬─PICTURE─┬──┴────┴──┬─S9(4)─┬──┬───────────────┬──┬─BINARY──────────┬─────[
└─77─┘ └─PIC─────┘ └─S9999─┘ │ ┌─IS─┐ │ ├─COMPUTATIONAL-4─┤
└─USAGE──┴────┴─┘ ├─COMP-4──────────┤
├─COMPUTATIONAL───┤
└─COMP────────────┘
[──┬─────────────────────────┬──.────────────────────────────────────────────────────────────────────[^
│ ┌─IS─┐ │
└─VALUE──┴────┴──constant─┘
The following figure shows the syntax for valid indicator array declarations.
┌─IS─┐
[[──level-1──variable-name──┬─PICTURE─┬──┴────┴──┬─S9(4)─┬──┬───────────────┬──┬─BINARY──────────┬────[
└─PIC─────┘ └─S9999─┘ │ ┌─IS─┐ │ ├─COMPUTATIONAL-4─┤
└─USAGE──┴────┴─┘ ├─COMP-4──────────┤
├─COMPUTATIONAL───┤
└─COMP────────────┘
[──OCCURS──dimension──┬───────┬──┬─────────────────────────┬──.──────────────────────────────────────[^
└─TIMES─┘ │ ┌─IS─┐ │
└─VALUE──┴────┴──constant─┘
DSNTIAR syntax
sqlca
An SQL communication area.
message
An output area, in VARCHAR format, in which DSNTIAR places the message
text. The first halfword contains the length of the remaining area; its minimum
value is 240.
The output lines of text, each line being the length specified in lrecl, are put into
this area. For example, you could specify the format of the output area as:
01 ERROR-MESSAGE.
   02 ERROR-LEN PIC S9(4) COMP VALUE +1320.
   02 ERROR-TEXT PIC X(132) OCCURS 10 TIMES
      INDEXED BY ERROR-INDEX.
77 ERROR-TEXT-LEN PIC S9(9) COMP VALUE +132.
..
.
CALL 'DSNTIAR' USING SQLCA ERROR-MESSAGE ERROR-TEXT-LEN.
where ERROR-MESSAGE is the name of the message output area containing
10 lines of length 132 each, and ERROR-TEXT-LEN is the length of each line.
lrecl
A fullword containing the logical record length of output messages, between 72
and 240.
CICS
If your CICS application requires CICS storage handling, you must use the
subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
DSNTIAC has extra parameters, which you must use for calls to routines that
use CICS commands.
eib EXEC interface block
commarea communication area
For more information on these new parameters, see the appropriate application
programming guide for CICS. The remaining parameter descriptions are the
same as those for DSNTIAR. Both DSNTIAC and DSNTIAR format the SQLCA
in the same way.
You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you
must also define them in the CSD. For an example of CSD entry generation
statements for use with DSNTIAC, see job DSNTEJ5A.
The assembler source code for DSNTIAC and job DSNTEJ5A, which
assembles and link-edits DSNTIAC, are in the data set prefix.SDSNSAMP.
Where to Place SQL Statements in Your Application: An IBM COBOL for MVS
& VM source data set or member can contain the following elements:
) Multiple programs
) Multiple class definitions, each of which contains multiple methods
You can put SQL statements in only the first program or class in the source data
set or member. However, you can put SQL statements in multiple methods within a
class. If an application consists of multiple data sets or members, each of the data
sets or members can contain SQL statements.
Where to Place the SQLCA, SQLDA, and Host Variable Declarations: You can
put the SQLCA, SQLDA, and SQL host variable declarations in the
WORKING-STORAGE SECTION of a program, class, or method. An SQLCA or
SQLDA in a class WORKING-STORAGE SECTION is global for all the methods of
the class. An SQLCA or SQLDA in a method WORKING-STORAGE SECTION is
local to that method only.
If a class and a method within the class both contain an SQLCA or SQLDA, the
method uses the SQLCA or SQLDA that is local.
Rules for Host Variables: You can declare COBOL variables that are used as
host variables in the WORKING-STORAGE SECTION or LINKAGE SECTION of a
program, class, or method. You can also declare host variables in the
LOCAL-STORAGE SECTION of a method. The scope of a host variable is the
method, class, or program within which it is defined.
DB2 sets the SQLCOD and SQLSTA (or SQLSTATE) values after each SQL
statement executes. An application can check these variable values to determine
whether the last SQL statement was successful. All SQL statements in the program
must be within the scope of the declaration of the SQLCOD and SQLSTA (or
SQLSTATE) variables.
See Chapter 6 of DB2 SQL Reference for more information about the INCLUDE
statement and Appendix C of DB2 SQL Reference for a complete description of
SQLCA fields.
Unlike the SQLCA, there can be more than one SQLDA in a program, and an
SQLDA can have any valid name. DB2 does not support the INCLUDE SQLDA
statement for FORTRAN programs. If present, an error message results.
You must place SQLDA declarations before the first SQL statement that references
the data descriptor.
You can code SQL statements in a FORTRAN program wherever you can place
executable statements. If the SQL statement is within an IF statement, the
precompiler generates any necessary THEN and END IF statements.
Each SQL statement in a FORTRAN program must begin with EXEC SQL. The
EXEC and SQL keywords must appear on one line, but the remainder of the
statement can appear on subsequent lines.
You cannot follow an SQL statement with another SQL statement or FORTRAN
statement on the same line.
FORTRAN does not require blanks to delimit words within a statement, but the SQL
language requires blanks. The rules for embedded SQL follow the rules for SQL
syntax, which require you to use one or more blanks as a delimiter.
Comments: You can include FORTRAN comment lines within embedded SQL
statements wherever you can use a blank, except between the keywords EXEC
and SQL. You can include SQL comments in any embedded SQL statement if you
specify the precompiler option STDSQL(YES).
The DB2 precompiler does not support the exclamation point (!) as a comment
recognition character in FORTRAN programs.
Continuation for SQL statements: The line continuation rules for SQL statements
are the same as those for FORTRAN statements, except that you must specify
EXEC SQL on one line. The SQL examples in this section have Cs in the sixth
column to indicate that they are continuations of the statement EXEC SQL.
Declaring tables and views: Your FORTRAN program should also include the
statement DECLARE TABLE to describe each table and view the program
accesses.
You can use a FORTRAN character variable in the statements PREPARE and
EXECUTE IMMEDIATE, even if it is fixed-length.
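For example, if DSTRING is a fixed-length FORTRAN character variable that already
contains the text of a DELETE statement (the variable and statement names here
are illustrative only), either of the following SQL statements can process it:

   EXECUTE IMMEDIATE :DSTRING

   PREPARE STMT1 FROM :DSTRING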
You cannot nest SQL INCLUDE statements. You cannot use the FORTRAN
INCLUDE compiler directive to include SQL statements or FORTRAN host variable
declarations.
Margins: Code the SQL statements in columns 7 through 72, inclusive. If
EXEC SQL starts before the specified left margin, the DB2 precompiler does not
recognize the SQL statement.
Names: You can use any valid FORTRAN name for a host variable. Do not use
external entry names that begin with 'DSN' and host variable names that begin
with 'SQL'. These names are reserved for DB2.
Do not use the word DEBUG, except when defining a FORTRAN DEBUG packet.
Do not use the words FUNCTION, IMPLICIT, PROGRAM, and SUBROUTINE to
define variables.
Sequence numbers: The source statements that the DB2 precompiler generates
do not include sequence numbers.
Statement labels: You can specify statement numbers for SQL statements in
columns 1 to 5. However, during program preparation, a labelled SQL statement
generates a FORTRAN statement CONTINUE with that label before it generates
the code that executes the SQL statement. Therefore, a labelled SQL statement
should never be the last statement in a DO loop. In addition, you should not label
SQL statements (such as INCLUDE and BEGIN DECLARE SECTION) that occur
before the first executable SQL statement because an error might occur.
WHENEVER statement: The target for the GOTO clause in the SQL statement
WHENEVER must be a label in the FORTRAN source and must refer to a
statement in the same subprogram. The statement WHENEVER only applies to
SQL statements in the same subprogram.
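For example, assuming that statement label 300 identifies error-handling code in
the same subprogram (the label is illustrative only), the following statement
transfers control to that label whenever an SQL statement fails:

   WHENEVER SQLERROR GOTO 300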
You can precede FORTRAN statements that define the host variables with a
BEGIN DECLARE SECTION statement and follow the statements with an END
DECLARE SECTION statement. You must use the statements BEGIN DECLARE
SECTION and END DECLARE SECTION when you use the precompiler option
STDSQL(YES).
The names of host variables should be unique within the program, even if the host
variables are in different blocks, functions, or subroutines.
When you declare a character host variable, you must not use an expression to
define the length of the character variable. You can use a character host variable
with an undefined length (for example, CHARACTER *(*)). The length of any such
variable is determined when its associated SQL statement executes.
An SQL statement that uses a host variable must be within the scope of the
statement that declares the variable.
You must be careful when calling subroutines that might change the attributes of a
host variable. Such alteration can cause an error while the program is running. See
Appendix C of DB2 SQL Reference for more information.
Numeric host variables: The following figure shows the syntax for valid numeric
host variable declarations.
┌─,─────────────────────────────────────────┐
[[──┬─INTEGER*2────────┬───g─variable-name──┬────────────────────────┬─┴──────────────────────────────[^
│ ┌─*4─┐ │ └─/──numeric-constant──/─┘
├─INTEGER──┴────┴──┤
│ ┌─*4─┐ │
├─REAL──┴────┴─────┤
├─REAL*8───────────┤
└─DOUBLE PRECISION─┘
Character host variables: The following figure shows the syntax for valid
character host variable declarations other than CLOBs. See Figure 65 on
page 187 for the syntax of CLOBs.
┌─,─────────────────────┐
[[──CHARACTER──┬────┬───g─variable-name──┬────┬─┴──┬──────────────────────────┬───────────────────────[^
└─*n─┘ └─*n─┘ └─/──character-constant──/─┘
Result set locators: The following figure shows the syntax for declarations of
result set locators. See “Chapter 7-2. Using stored procedures for client/server
processing” on page 535 for a discussion of how to use these host variables.
┌─,─────────────┐
[[──SQL TYPE IS RESULT_SET_LOCATOR VARYING───g─variable-name─┴────────────────────────────────────────[^
| LOB Variables and Locators: The following figure shows the syntax for
| declarations of BLOB and CLOB host variables and locators. See “Chapter 4-2.
| Programming for large objects (LOBs)” on page 237 for a discussion of how to use
| these host variables.
| ROWIDs: The following figure shows the syntax for declarations of ROWID
| variables. See “Chapter 4-2. Programming for large objects (LOBs)” on page 237
| for a discussion of how to use these host variables.
Table 15. SQL data types the precompiler uses for FORTRAN declarations
SQLTYPE of SQLLEN of Host
FORTRAN Data Type Host Variable Variable SQL Data Type
INTEGER*2 500 2 SMALLINT
INTEGER*4 496 4 INTEGER
REAL*4 480 4 FLOAT (single precision)
REAL*8 480 8 FLOAT (double precision)
CHARACTER*n 452 n CHAR(n)
SQL TYPE IS 972 4 Result set locator.
RESULT_SET_LOCATOR Do not use this data
type as a column type.
| SQL TYPE IS 960 4 BLOB locator.
| BLOB_LOCATOR Do not use this data
| type as a column type.
| SQL TYPE IS 964 4 CLOB locator.
| CLOB_LOCATOR Do not use this data
| type as a column type.
| SQL TYPE IS 404 n BLOB(n)
| BLOB(n)
| 1≤n≤2147483647
| SQL TYPE IS 408 n CLOB(n)
| CLOB(n)
| 1≤n≤2147483647
| SQL TYPE IS ROWID 904 40 ROWID
Table 16 on page 188 helps you define host variables that receive output from the
database. You can use the table to determine the FORTRAN data type that is
equivalent to a given SQL data type. For example, if you retrieve TIMESTAMP
data, you can use the table to define a suitable host variable in the program that
receives the data value.
FORTRAN data types with no SQL equivalent: FORTRAN supports some data
types with no SQL equivalent (for example, REAL*16 and COMPLEX). In most
cases, you can use FORTRAN statements to convert between the unsupported
data types and the data types that SQL allows.
SQL data types with no FORTRAN equivalent: FORTRAN does not provide an
equivalent for the decimal data type. To hold the value of such a variable, you can
use:
) An integer or a floating-point variable, which converts the value. If you choose
integer, however, you lose the fractional part of the number. If the decimal
number can exceed the maximum value for an integer or you want to preserve
a fractional value, you can use floating point numbers. Floating-point numbers
are approximations of real numbers. When you assign a decimal number to a
floating point variable, the result could be different from the original number.
) A character string host variable. Use the CHAR function to retrieve a decimal
value into it.
| Special Purpose FORTRAN Data Types: The locator data types are FORTRAN
| data types as well as SQL data types. You cannot use locators as column types.
| For information on how to use these data types, see the following sections:
| Result set locator “Chapter 7-2. Using stored procedures for client/server
| processing” on page 535
| LOB locators “Chapter 4-2. Programming for large objects (LOBs)” on page 237
When your program uses X to assign a null value to a column, the program should
set the indicator variable to a negative number. DB2 then assigns a null value to
the column and ignores any value in X.
You declare indicator variables in the same way as host variables. You can mix the
declarations of the two types of variables in any way that seems appropriate. For
more information about indicator variables, see “Using indicator variables with host
variables” on page 98.
The following figure shows the syntax for a valid indicator variable.
[[──INTEGER*2──variable-name──┬────────────────────────┬─────────────────────────────────────────────[^
└─/──numeric-constant──/─┘
DSNTIR syntax
error-length
The total length of the message output area.
message
An output area, in VARCHAR format, in which DSNTIAR places the
message text. The first halfword contains the length of the remaining area;
its minimum value is 240.
The output lines of text are put into this area. For example, you could
specify the format of the output area as:
INTEGER ERRLEN /1320/
CHARACTER*132 ERRTXT(10)
INTEGER ICODE
..
.
CALL DSNTIR ( ERRLEN, ERRTXT, ICODE )
where ERRLEN is the total length of the message output area, ERRTXT is
the name of the message output area, and ICODE is the return code.
return-code
Accepts a return code from DSNTIAR.
DB2 sets the SQLCODE and SQLSTATE values after each SQL statement
executes. An application can check these variable values to determine whether the
last SQL statement was successful. All SQL statements in the program must be
within the scope of the declaration of the SQLCODE and SQLSTATE variables.
See Chapter 6 of DB2 SQL Reference for more information about the INCLUDE
statement and Appendix C of DB2 SQL Reference for a complete description of
SQLCA fields.
You must declare an SQLDA before the first SQL statement that references that
data descriptor, unless you use the precompiler option TWOPASS. See Chapter 6
of DB2 SQL Reference for more information about the INCLUDE statement and
Appendix C of DB2 SQL Reference for a complete description of SQLDA fields.
You can code SQL statements in a PL/I program wherever you can use executable
statements.
Each SQL statement in a PL/I program must begin with EXEC SQL and end with a
semicolon (;). The EXEC and SQL keywords must appear all on one line, but the
remainder of the statement can appear on subsequent lines.
Continuation for SQL statements: The line continuation rules for SQL statements
are the same as those for other PL/I statements, except that you must specify
EXEC SQL on one line.
Declaring tables and views: Your PL/I program should also include a DECLARE
TABLE statement to describe each table and view the program accesses. You can
use the DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE
statements. For details, see “Chapter 3-3. Generating declarations for your tables
using DCLGEN” on page 115.
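For example, a DECLARE TABLE statement for a department table might look like the
following sketch; the column list here is abbreviated and is for illustration
only, because DCLGEN generates the complete declaration for you. In a PL/I
program, you code the statement between EXEC SQL and a closing semicolon:

   DECLARE DEPT TABLE
     (DEPTNO    CHAR(3)      NOT NULL,
      DEPTNAME  VARCHAR(36)  NOT NULL,
      MGRNO     CHAR(6),
      ADMRDEPT  CHAR(3)      NOT NULL)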
Including code: You can use SQL statements or PL/I host variable declarations
from a member of a partitioned data set by using the following SQL statement in
the source code where you want to include the statements:
EXEC SQL INCLUDE member-name;
You cannot nest SQL INCLUDE statements. Do not use the statement PL/I
%INCLUDE to include SQL statements or host variable DCL statements. You must
use the PL/I preprocessor to resolve any %INCLUDE statements before you use
the DB2 precompiler. Do not use PL/I preprocessor directives within SQL
statements.
Margins: Code SQL statements in columns 2 through 72, unless you have
specified other margins to the DB2 precompiler. If EXEC SQL starts before the
specified left margin, the DB2 precompiler does not recognize the SQL statement.
Names: You can use any valid PL/I name for a host variable. Do not use external
entry names or access plan names that begin with 'DSN' and host variable names
that begin with 'SQL'. These names are reserved for DB2.
Sequence numbers: The source statements that the DB2 precompiler generates
do not include sequence numbers. IEL0378 messages from the PL/I compiler
identify lines of code without sequence numbers. You can ignore these messages.
Statement labels: You can specify a statement label for executable SQL
statements. However, the statements INCLUDE text-file-name and END DECLARE
SECTION cannot have statement labels.
Whenever statement: The target for the GOTO clause in an SQL statement
WHENEVER must be a label in the PL/I source code and must be within the scope
of any SQL statements that WHENEVER affects.
For example:
SQLSTMT = 'SELECT <dbdb> FROM table-name'M;
EXEC SQL PREPARE STMT FROM :SQLSTMT;
For instructions on preparing SQL statements dynamically, see “Chapter 7-1.
Coding dynamic SQL in application programs” on page 503.
) If you want a DBCS identifier to resemble a PL/I graphic string, you must use a
delimited identifier.
) If you include DBCS characters in comments, you must delimit the characters
with a shift-out and shift-in control character. The first shift-in character signals
the end of the DBCS string.
) You can declare host variable names that use DBCS characters in PL/I
application programs. The rules for using DBCS variable names in PL/I follow
existing rules for DBCS SQL Ordinary Identifiers, except for length. The
maximum length for a host variable is 64 single-byte characters in DB2. Please
see Chapter 3 of DB2 SQL Reference for the rules for DBCS SQL Ordinary
Identifiers.
Restrictions:
– DBCS variable names must contain DBCS characters only. Mixing
single-byte character set (SBCS) characters with DBCS characters in a
DBCS variable name produces unpredictable results.
– A DBCS variable name cannot continue to the next line.
) The PL/I preprocessor changes non-Kanji DBCS characters into extended
binary coded decimal interchange code (EBCDIC) SBCS characters. To avoid
this change, use Kanji DBCS characters for DBCS variable names, or run the
PL/I compiler without the PL/I preprocessor.
) Use of the PL/I multitasking facility, where multiple tasks execute SQL
statements, causes unpredictable results. See the RUN(DSN) command in
Chapter 2 of DB2 Command Reference.
You can precede PL/I statements that define the host variables with the statement
BEGIN DECLARE SECTION, and follow the statements with the statement END
DECLARE SECTION. You must use the statements BEGIN DECLARE SECTION
and END DECLARE SECTION when you use the precompiler option
STDSQL(YES).
The names of host variables should be unique within the program, even if the host
variables are in different blocks or procedures. You can qualify the host variable
names with a structure name to make them unique.
An SQL statement that uses a host variable must be within the scope of the
statement that declares the variable.
Host variables must be scalar variables or structures of scalars. You cannot declare
host variables as arrays, although you can use an array of indicator variables when
you associate the array with a host structure.
The precompiler uses only the names and data attributes of the variables; it ignores
the alignment, scope, and storage attributes. Even though the precompiler ignores
alignment, scope, and storage, if you ignore the restrictions on their use, you might
have problems compiling the PL/I source code that the precompiler generates.
These restrictions are as follows:
) A declaration with the EXTERNAL scope attribute and the STATIC storage
attribute must also have the INITIAL storage attribute.
) If you use the BASED storage attribute, you must follow it with a PL/I
element-locator-expression.
) Host variables can be STATIC, CONTROLLED, BASED, or AUTOMATIC
storage class, or options. However, CICS requires that programs be reentrant.
Numeric host variables: The following figure shows the syntax for valid numeric
host variable declarations.
[[──┬─DECLARE─┬──┬─variable-name───────────┬──────────────────────────────────────────────────────────[
└─DCL─────┘ │ ┌─,─────────────┐ │
└─(───g─variable-name─┴──)─┘
[────┬─┬─BINARY─┬──┬──┬─FIXED──┬─────────────────────────────┬─┬──────────────────────────────────────[
│ └─BIN────┘ │ │ └─(──precision──┬────────┬──)─┘ │
└─┬─DECIMAL─┬─┘ │ └─,scale─┘ │
└─DEC─────┘ └─FLOAT──(──precision──)─────────────────┘
[──┬───────────────────────────────────────┬─────────────────────────────────────────────────────────[^
└─Alignment and/or Scope and/or Storage─┘
Notes:
1. You can specify host variable attributes in any order acceptable to PL/I. For
example, BIN FIXED(31), BINARY FIXED(31), BIN(31) FIXED, and FIXED
BIN(31) are all acceptable.
2. You can specify a scale for only DECIMAL FIXED.
Character host variables: The following figure shows the syntax for valid
character host variable declarations, other than CLOBs. See Figure 73 on
page 198 for the syntax of CLOBs.
[[──┬─DECLARE─┬──┬─variable-name───────────┬──┬─CHARACTER─┬──(──length──)──┬─────────┬────────────────[
└─DCL─────┘ │ ┌─,─────────────┐ │ └─CHAR──────┘ ├─VARYING─┤
└─(───g─variable-name─┴──)─┘ └─VAR─────┘
[──┬───────────────────────────────────────┬─────────────────────────────────────────────────────────[^
└─Alignment and/or Scope and/or Storage─┘
Graphic host variables: The following figure shows the syntax for valid graphic
host variable declarations, other than DBCLOBs. See Figure 73 on page 198 for
the syntax of DBCLOBs.
[[──┬─DECLARE─┬──┬─variable-name───────────┬──GRAPHIC──(──length──)──┬─────────┬──────────────────────[
└─DCL─────┘ │ ┌─,─────────────┐ │ ├─VARYING─┤
└─(───g─variable-name─┴──)─┘ └─VAR─────┘
[──┬───────────────────────────────────────┬─────────────────────────────────────────────────────────[^
└─Alignment and/or Scope and/or Storage─┘
Result set locators: The following figure shows the syntax for valid result set
locator declarations. See “Chapter 7-2. Using stored procedures for client/server
processing” on page 535 for a discussion of how to use these host variables.
| Table Locators: The following figure shows the syntax for declarations of table
| locators. See “Accessing transition tables in a user-defined function” on page 287
| for a discussion of how to use these host variables.
| LOB Variables and Locators: The following figure shows the syntax for
| declarations of BLOB, CLOB, and DBCLOB host variables and locators. See
| “Chapter 4-2. Programming for large objects (LOBs)” on page 237 for a discussion
| of how to use these host variables.
[[──┬─DCL─────┬──┬─variable-name───────────┬──SQL──TYPE──IS───────────────────────────────────────────[
└─DECLARE─┘ │ ┌─,───────────────────┐ │
└──g─(──variable-name──)─┴─┘
[──┬─┬─┬─BINARY LARGE OBJECT─┬────┬──(──length──┬───┬──)─┬───────────────────────────────────────────[^
│ │ └─BLOB────────────────┘ │ ├─K─┤ │
│ ├─┬─CHARACTER LARGE OBJECT─┬─┤ ├─M─┤ │
│ │ ├─CHAR LARGE OBJECT──────┤ │ └─G─┘ │
│ │ └─CLOB───────────────────┘ │ │
│ └─DBCLOB─────────────────────┘ │
└─┬─BLOB_LOCATOR───┬──────────────────────────────────┘
├─CLOB_LOCATOR───┤
└─DBCLOB_LOCATOR─┘
| ROWIDs: The following figure shows the syntax for declarations of ROWID
| variables. See “Chapter 4-2. Programming for large objects (LOBs)” on page 237
| for a discussion of how to use these host variables.
In this example, B is the name of a host structure consisting of the scalars C1 and
C2.
You can use the structure name as shorthand notation for a list of scalars. You
can qualify a host variable with a structure name (for example,
STRUCTURE.FIELD). Host structures are limited to two levels. You can think of a
host structure for DB2 data as a named group of host variables.
You must terminate the host structure variable by ending the declaration with a
semicolon. For example:
DCL 1 A,
2 B CHAR,
2 (C, D) CHAR;
DCL (E, F) CHAR;
You can specify host variable attributes in any order acceptable to PL/I. For
example, BIN FIXED(31), BIN(31) FIXED, and FIXED BIN(31) are all acceptable.
The following figure shows the syntax for valid host structures.
[[──┬─DECLARE─┬──level-1──variable-name──┬──────────────────────┬──,──────────────────────────────────[
└─DCL─────┘ └─Scope and/or storage─┘
┌─,─────────────────────────────────────────────────────┐
[───g─level-2──┬─var-1───────────┬──data-type-specification─┴──;──────────────────────────────────────[^
│ ┌─,─────┐ │
└─(───g─var-2─┴──)─┘
[[──────┬─┬─┬─BINARY─┬──┬──┬─FIXED──┬─────────────────────────────┬─┬─┬──────────────────────────────[^
│ │ └─BIN────┘ │ │ └─(──precision──┬────────┬──)─┘ │ │
│ └─┬─DECIMAL─┬─┘ │ └─,scale─┘ │ │
│ └─DEC─────┘ └─FLOAT──┬─────────────────┬─────────────┘ │
│ └─(──precision──)─┘ │
├─┬─CHARACTER─┬──┬───────────────┬──┬─────────┬───────────────┤
│ └─CHAR──────┘ └─(──integer──)─┘ ├─VARYING─┤ │
│ └─VARY────┘ │
├─GRAPHIC──┬───────────────┬──┬─────────┬─────────────────────┤
│ └─(──integer──)─┘ ├─VARYING─┤ │
│ └─VARY────┘ │
├─SQL TYPE IS ROWID───────────────────────────────────────────┤
└─LOB data type───────────────────────────────────────────────┘
|
| [[──SQL──TYPE──IS──┬─┬─┬─BINARY LARGE OBJECT─┬────┬──(──length──┬───┬──)─┬───────────────────────────[^
| │ │ └─BLOB────────────────┘ │ ├─K─┤ │
| │ ├─┬─CHARACTER LARGE OBJECT─┬─┤ ├─M─┤ │
| │ │ ├─CHAR LARGE OBJECT──────┤ │ └─G─┘ │
| │ │ └─CLOB───────────────────┘ │ │
| │ └─DBCLOB─────────────────────┘ │
| └─┬─BLOB_LOCATOR───┬──────────────────────────────────┘
| ├─CLOB_LOCATOR───┤
| └─DBCLOB_LOCATOR─┘
Table 17 (Page 1 of 2). SQL data types the precompiler uses for PL/I declarations
SQLTYPE of SQLLEN of Host
PL/I Data Type Host Variable Variable SQL Data Type
BIN FIXED(n) 1<=n<=15 500 2 SMALLINT
BIN FIXED(n) 16<=n<=31 496 4 INTEGER
DEC FIXED(p,s) 484 p in byte 1, s in byte 2 DECIMAL(p,s)
0<=p<=15 and
0<=s<=p1
BIN FLOAT(p) 480 4 REAL or FLOAT(n)
1<=p<=21 1<=n<=21
BIN FLOAT(p) 480 8 DOUBLE PRECISION or
22<=p<=53 FLOAT(n)
22<=n<=53
DEC FLOAT(m) 480 4 FLOAT (single precision)
1<=m<=6
DEC FLOAT(m) 480 8 FLOAT (double precision)
7<=m<=16
CHAR(n) 452 n CHAR(n)
CHAR(n) VARYING 448 n VARCHAR(n)
1<=n<=255
CHAR(n) VARYING 456 n VARCHAR(n)
n>255
GRAPHIC(n) 468 n GRAPHIC(n)
GRAPHIC(n) VARYING 464 n VARGRAPHIC(n)
1<=n<=127
GRAPHIC(n) VARYING n>127 472 n VARGRAPHIC(n)
SQL TYPE IS 972 4 Result set locator2
RESULT_SET_LOCATOR
Table 17 (Page 2 of 2). SQL data types the precompiler uses for PL/I declarations
SQLTYPE of SQLLEN of Host
PL/I Data Type Host Variable Variable SQL Data Type
| SQL TYPE IS 976 4 Table locator2
| TABLE LIKE table-name
| AS LOCATOR
| SQL TYPE IS 960 4 BLOB locator2
| BLOB_LOCATOR
| SQL TYPE IS 964 4 CLOB locator2
| CLOB_LOCATOR
| SQL TYPE IS 968 4 DBCLOB locator2
| DBCLOB_LOCATOR
| SQL TYPE IS BLOB(n) 404 n BLOB(n)
| 1≤n≤2147483647
| SQL TYPE IS CLOB(n) 408 n CLOB(n)
| 1≤n≤2147483647
| SQL TYPE IS DBCLOB(n) 412 n DBCLOB(n)3
| 1≤n≤10737418233
| SQL TYPE IS ROWID 904 40 ROWID
Note:
1. If p=0, DB2 interprets it as DECIMAL(15). For example, DB2 interprets a PL/I data type of DEC
FIXED(0,0) to be DECIMAL(15,0), which equates to the SQL data type of DECIMAL(15,0).
2. Do not use this data type as a column type.
3. n is the number of double-byte characters.
Table 18 helps you define host variables that receive output from the database.
You can use the table to determine the PL/I data type that is equivalent to a given
SQL data type. For example, if you retrieve TIMESTAMP data, you can use the
table to define a suitable host variable in the program that receives the data value.
Table 18 (Page 1 of 2). SQL data types mapped to typical PL/I declarations
SQL Data Type PL/I Equivalent Notes
SMALLINT BIN FIXED(n) 1<=n<=15
INTEGER BIN FIXED(n) 16<=n<=31
DECIMAL(p,s) or       If p<16:            p is precision; s is scale.
NUMERIC(p,s)          DEC FIXED(p) or     1<=p<=31 and 0<=s<=p
                      DEC FIXED(p,s)      There is no exact equivalent for p>15.
                                          (See page 202 for more information.)
REAL or               BIN FLOAT(p) or     1<=n<=21, 1<=p<=21, and
FLOAT(n)              DEC FLOAT(m)        1<=m<=6
DOUBLE PRECISION,     BIN FLOAT(p) or     22<=n<=53, 22<=p<=53, and
DOUBLE, or            DEC FLOAT(m)        7<=m<=16
FLOAT(n)
CHAR(n) CHAR(n) 1<=n<=255
VARCHAR(n) CHAR(n) VAR
Table 18 (Page 2 of 2). SQL data types mapped to typical PL/I declarations
SQL Data Type PL/I Equivalent Notes
GRAPHIC(n) GRAPHIC(n) n refers to the number of double-byte
characters, not to the number of bytes.
1<=n<=127
VARGRAPHIC(n) GRAPHIC(n) VAR n refers to the number of double-byte
characters, not to the number of bytes.
DATE CHAR(n) If you are using a date exit routine, that
routine determines n; otherwise, n must
be at least 10.
TIME CHAR(n) If you are using a time exit routine, that
routine determines n. Otherwise, n must
be at least 6; to include seconds, n must
be at least 8.
TIMESTAMP CHAR(n) n must be at least 19. To include
microseconds, n must be 26; if n is less
than 26, the microseconds part is
truncated.
Result set locator SQL TYPE IS Use this data type only for receiving result
RESULT_SET_LOCATOR sets. Do not use this data type as a
column type.
| Table locator SQL TYPE IS Use this data type only in a user-defined
| TABLE LIKE table-name function or stored procedure to receive
| AS LOCATOR rows of a transition table. Do not use this
| data type as a column type.
| BLOB locator SQL TYPE IS Use this data type only to manipulate data
| BLOB_LOCATOR in BLOB columns. Do not use this data
| type as a column type.
| CLOB locator SQL TYPE IS Use this data type only to manipulate data
| CLOB_LOCATOR in CLOB columns. Do not use this data
| type as a column type.
| DBCLOB locator SQL TYPE IS Use this data type only to manipulate data
| DBCLOB_LOCATOR in DBCLOB columns. Do not use this data
| type as a column type.
| BLOB(n) SQL TYPE IS 1≤n≤2147483647
| BLOB(n)
| CLOB(n) SQL TYPE IS 1≤n≤2147483647
| CLOB(n)
| DBCLOB(n) SQL TYPE IS n is the number of double-byte
| DBCLOB(n) characters. 1≤n≤1073741823
| ROWID SQL TYPE IS ROWID
PL/I Data Types with No SQL Equivalent: PL/I supports some data types with no
SQL equivalent (COMPLEX and BIT variables, for example). In most cases, you
can use PL/I statements to convert between the unsupported PL/I data types and
the data types that SQL supports.
SQL data types with no PL/I equivalent: PL/I does not provide an equivalent for
the decimal data type when the precision is greater than 15. To hold the value of
such a variable, you can use:
) Decimal variables with precision less than or equal to 15, if the actual data
values fit. If you retrieve a decimal value into a decimal variable with a scale
that is less than the source column in the database, then the fractional part of
the value could truncate.
) An integer or a floating-point variable, which converts the value. If you choose
integer, you lose the fractional part of the number. If the decimal number can
exceed the maximum value for an integer or you want to preserve a fractional
value, you can use floating point numbers. Floating-point numbers are
approximations of real numbers. When you assign a decimal number to a
floating point variable, the result could be different from the original number.
) A character string host variable. Use the CHAR function to retrieve a decimal
value into it.
| Special Purpose PL/I Data Types: The locator data types are PL/I data types as
| well as SQL data types. You cannot use locators as column types. For information
| on how to use these data types, see the following sections:
| Result set locator “Chapter 7-2. Using stored procedures for client/server
| processing” on page 535
| Table locator “Accessing transition tables in a user-defined function” on
| page 287
| LOB locators “Chapter 4-2. Programming for large objects (LOBs)” on page 237
PL/I scoping rules: The precompiler does not support PL/I scoping rules.
Similarly, retrieving a DB2 DECIMAL number into a PL/I equivalent variable could
truncate the value. This happens because a DB2 DECIMAL value can have up to
31 digits, but a PL/I decimal number can have up to only 15 digits.
| ) Character data types are compatible with each other. A CHAR, VARCHAR, or
| CLOB column is compatible with a fixed-length or varying-length PL/I character
| host variable.
| ) Character data types are partially compatible with CLOB locators. A value in a
| CLOB locator can be assigned to a CHAR or VARCHAR column, but a value in
| a CHAR or VARCHAR column cannot be assigned to a CLOB locator host
| variable.
| ) Graphic data types are compatible with each other. A GRAPHIC,
| VARGRAPHIC, or DBCLOB column is compatible with a fixed-length or
| varying-length PL/I graphic character host variable.
| ) Graphic data types are partially compatible with DBCLOB locators. A value in a
| DBCLOB locator can be assigned to a GRAPHIC or VARGRAPHIC column, but
| a value in a GRAPHIC or VARGRAPHIC column cannot be assigned to a
| DBCLOB locator host variable.
) Datetime data types are compatible with character host variables. A DATE,
TIME, or TIMESTAMP column is compatible with a fixed-length or
varying-length PL/I character host variable.
| ) A BLOB column is compatible only with a BLOB host variable.
| ) The ROWID column is compatible only with a ROWID host variable.
| ) A host variable is compatible with a distinct type if the host variable type is
| compatible with the source type of the distinct type. For information on
| assigning and comparing distinct types, see “Chapter 4-4. Creating and using
| distinct types” on page 309.
When your program uses X to assign a null value to a column, the program should
set the indicator variable to a negative number. DB2 then assigns a null value to
the column and ignores any value in X.
You declare indicator variables in the same way as host variables. You can mix the
declarations of the two types of variables in any way that seems appropriate. For
more information about indicator variables, see “Using indicator variables with host
variables” on page 98.
Example:
You can specify host variable attributes in any order acceptable to PL/I. For
example, BIN FIXED(31), BIN(31) FIXED, and FIXED BIN(31) are all acceptable.
The following figure shows the syntax for a valid indicator variable.
[[──┬─DECLARE─┬──variable-name──┬─BINARY─┬──FIXED(15)──;─────────────────────────────────────────────[^
└─DCL─────┘ └─BIN────┘
The following figure shows the syntax for a valid indicator array.
[[──┬─DECLARE─┬──┬─variable-name──(──dimension──)───────────┬──┬─BINARY─┬─────────────────────────────[
└─DCL─────┘ │ ┌─,──────────────────────────────┐ │ └─BIN────┘
└─(───g─variable-name──(──dimension──)─┴──)─┘
[──FIXED(15)──┬───────────────────────────────────────┬──;───────────────────────────────────────────[^
└─Alignment and/or Scope and/or Storage─┘
DSNTIAR syntax
message
An output area, in VARCHAR format, in which DSNTIAR places the
message text. The first halfword contains the length of the remaining area;
its minimum value is 240.
The output lines of text, each line being the length specified in lrecl, are
put into this area. For example, you could specify the format of the output
area as:
CICS
If your CICS application requires CICS storage handling, you must use the
subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
DSNTIAC has extra parameters, which you must use for calls to routines that
use CICS commands.
eib EXEC interface block
commarea communication area
For more information on these new parameters, see the appropriate application
programming guide for CICS. The remaining parameter descriptions are the
same as those for DSNTIAR. Both DSNTIAC and DSNTIAR format the SQLCA
in the same way.
You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you
must also define them in the CSD. For an example of CSD entry generation
statements for use with DSNTIAC, see job DSNTEJ5A.
The assembler source code for DSNTIAC and job DSNTEJ5A, which
assembles and link-edits DSNTIAC, are in the data set prefix.SDSNSAMP.
# DB2 sets the SQLCODE and SQLSTATE values after each SQL statement
# executes. An application can check these variable values to determine whether the
# last SQL statement was successful.
# See Appendix C of DB2 SQL Reference for information on the fields in the REXX
# SQLCA.
# A REXX procedure can contain more than one SQLDA. Each SQLDA consists of a
# set of REXX variables with a common stem. The stem must be a REXX variable
# name that contains no periods and is the same as the value of descriptor-name
# that you specify when you use the SQLDA in an SQL statement. DB2 does not
# support the INCLUDE SQLDA statement in REXX.
# See Appendix C of DB2 SQL Reference for information on the fields in a REXX
# SQLDA.
# CONNECT
# Connects the REXX procedure to a DB2 subsystem. You must execute
# CONNECT before you can execute SQL statements. The syntax of CONNECT
# is:
#
#                            (1)
# [[──┬─────────────────┬────'CONNECT'───┬─'subsystem-ID'─┬──────────────────────────────────────────[^
#     └─Address DSNREXX─┘                └─REXX-variable──┘
# Note:
# 1 CALL SQLDBS 'ATTACH TO' ssid is equivalent to ADDRESS DSNREXX 'CONNECT' ssid.
# EXECSQL
# Executes SQL statements in REXX procedures. The syntax of EXECSQL is:
#
#                            (1)
# [[──┬─────────────────┬────"EXECSQL"───┬─"SQL-statement"─┬─────────────────────────────────────────[^
#     └─Address DSNREXX─┘                └─REXX-variable───┘
# Note:
# 1 CALL SQLEXEC is equivalent to EXECSQL.
# See “Embedding SQL statements in a REXX procedure” on page 209 for more
# information.
# DISCONNECT
# Disconnects the REXX procedure from a DB2 subsystem. You should execute
# DISCONNECT to release resources that are held by DB2. The syntax of
# DISCONNECT is:
#
#                            (1)
# [[──┬─────────────────┬────'DISCONNECT'────────────────────────────────────────────────────────────[^
#     └─Address DSNREXX─┘
# Note:
# 1 CALL SQLDBS 'DETACH' is equivalent to DISCONNECT.
# These application programming interfaces are available through the DSNREXX host
# command environment. To make DSNREXX available to the application, invoke the
# RXSUBCOM function. The syntax is:
#
# [[──RXSUBCOM──(──┬─'ADD'────┬──,──'DSNREXX'──,──'DSNREXX'──)─────────────────────────────────────────[^
# └─'DELETE'─┘
# The ADD function adds DSNREXX to the REXX host command environment table.
# The DELETE function deletes DSNREXX from the REXX host command
# environment table.
# S_RC = RXSUBCOM('DELETE','DSNREXX','DSNREXX')
# /* WHEN DONE WITH */
# /* DSNREXX, REMOVE IT. */
# Figure 80. Making DSNREXX available to an application
# Each SQL statement in a REXX procedure must begin with EXECSQL, in either
# upper-, lower-, or mixed-case. One of the following items must follow EXECSQL:
# ) An SQL statement enclosed in single or double quotation marks.
# ) A REXX variable that contains an SQL statement. The REXX variable must not
# be preceded by a colon.
# For example, you can use either of the following methods to execute the COMMIT
# statement in a REXX procedure:
# EXECSQL "COMMIT"
# rexxvar="COMMIT"
# EXECSQL rexxvar
# An SQL statement follows rules that apply to REXX commands. The SQL
# statement can optionally end with a semicolon and can be enclosed in single or
# double quotation marks, as in the following example:
# 'EXECSQL COMMIT';
# Comments: You cannot include REXX comments (/* ... */) or SQL comments (--)
# within SQL statements. However, you can include REXX comments anywhere else
# in the procedure.
# Continuation for SQL statements: SQL statements that span lines follow REXX
# rules for statement continuation. You can break the statement into several strings,
# each of which fits on a line, and separate the strings with commas or with
# concatenation operators followed by commas. For example, either of the following
# statements is valid:
# EXECSQL ,
# "UPDATE DSN861.DEPT" ,
# "SET MGRNO = '1'" ,
# "WHERE DEPTNO = 'D11'"
# "EXECSQL " || ,
# " UPDATE DSN861.DEPT " || ,
# " SET MGRNO = '1'" || ,
# " WHERE DEPTNO = 'D11'"
# Including code: The EXECSQL INCLUDE statement is not valid for REXX. You
# therefore cannot include externally defined SQL statements in a procedure.
# Margins: Like REXX commands, SQL statements can begin and end anywhere on
# a line.
# Names: You can use any valid REXX name that does not end with a period as a
# host variable. However, host variable names should not begin with 'SQL', 'RDI',
# 'DSN', 'RXSQL', or 'QRW'. Variable names can be at most 64 bytes.
# Nulls: A REXX null value and an SQL null value are different. The REXX language
# has a null string (a string of length 0) and a null clause (a clause that contains only
# blanks and comments). The SQL null value is a special value that is distinct from
# all nonnull values and denotes the absence of a value. Assigning a REXX null
# value to a DB2 column does not make the column value null.
# Statement labels: You can precede an SQL statement with a label, in the same
# way that you label REXX commands.
# Handling errors and warnings: DB2 does not support the SQL WHENEVER
# statement in a REXX procedure. To handle SQL errors and warnings, use the
# following methods:
# ) To test for SQL errors or warnings, test the SQLCODE or SQLSTATE value
# and the SQLWARN. values after each EXECSQL call. This method does not
# detect errors in the REXX interface to DB2.
# ) To test for SQL errors or warnings or errors or warnings from the REXX
# interface to DB2, test the REXX RC variable after each EXECSQL call.
# Table 19 on page 211 lists the values of the RC variable.
# You can also use the REXX SIGNAL ON ERROR and SIGNAL ON FAILURE
# keyword instructions to detect negative values of the RC variable and transfer
# control to an error routine.
# c1 to c100
# Cursor names for DECLARE CURSOR, OPEN, CLOSE, and FETCH
# statements. Use c1 to c50 for cursors that are defined without the WITH HOLD
# option. Use c51 to c100 for cursors that are defined with the WITH HOLD
# option. All cursors are defined with the WITH RETURN option, so any cursor
# name can be used to return result sets from a REXX stored procedure.
# c101 to c200
# Cursor names for ALLOCATE, DESCRIBE, FETCH, and CLOSE statements
# that are used to retrieve result sets in a program that calls a stored procedure.
# s1 to s100
# Prepared statement names for DECLARE STATEMENT, PREPARE,
# DESCRIBE, and EXECUTE statements.
# Use only the predefined names for cursors and statements. When you associate a
# cursor name with a statement name in a DECLARE CURSOR statement, the
# cursor name and the statement must have the same number. For example, if you
# declare cursor c1, you need to declare it for statement s1:
# EXECSQL 'DECLARE C1 CURSOR FOR S1'
# Do not use any of the predefined names as host variable names.
# a=1
# b=2
# EXECSQL 'OPEN C1 USING :x.a.b'
# When you assign input data to a DB2 table column, you can either let DB2
# determine the type that your input data represents, or you can use an SQLDA to
# tell DB2 the intended type of the input data.
# If you do not assign a value to a host variable before you assign the host variable
# to a column, DB2 returns an error code.
# Table 20 (Page 1 of 2). SQL input data types and REXX data formats
# SQL data type SQLTYPE for data
# assigned by DB2 type REXX input data format
# INTEGER 496/497 A string of numerics that does not contain a decimal point or
# exponent identifier. The first character can be a plus (+) or minus (−)
# sign. The number that is represented must be between -2147483647
# and 2147483647, inclusive.
# DECIMAL(p,s) 484/485 One of the following formats:
# ) A string of numerics that contains a decimal point but no
# exponent identifier. p represents the precision and s represents
# the scale of the decimal number that the string represents. The
# first character can be a plus (+) or minus (−) sign.
# ) A string of numerics that does not contain a decimal point or an
# exponent identifier. The first character can be a plus (+) or minus
# (−) sign. The number that is represented is less than
# -2147483647 or greater than 2147483647.
# FLOAT 480/481 A string that represents a number in scientific notation. The string
# consists of a series of numerics followed by an exponent identifier
# (an E or e followed by an optional plus (+) or minus (−) sign and a
# series of numerics). The string can begin with a plus (+) or minus (−)
# sign.
# Table 20 (Page 2 of 2). SQL input data types and REXX data formats
# SQL data type SQLTYPE for data
# assigned by DB2 type REXX input data format
# VARCHAR(n) 448/449 One of the following formats:
# ) A string of length n, enclosed in single or double quotation
# marks.
# ) The character X or x, followed by a string enclosed in single or
# double quotation marks. The string within the quotation marks
# has a length of 2*n bytes and is the hexadecimal representation
# of a string of n characters.
# ) A string of length n that does not have a numeric or graphic
# format, and does not satisfy either of the previous conditions.
# VARGRAPHIC(n) 464/465 One of the following formats:
# ) The character G, g, N, or n, followed by a string enclosed in
# single or double quotation marks. The string within the quotation
# marks begins with a shift-out character (X'0E') and ends with a
# shift-in character (X'0F'). Between the shift-out character and
# shift-in character are n double-byte characters.
# ) The characters GX, Gx, gX, or gx, followed by a string enclosed
# in single or double quotation marks. The string within the
# quotation marks has a length of 4*n bytes and is the
# hexadecimal representation of a string of n double-byte
# characters.
# For example, when DB2 executes the following statements to update the MIDINIT
# column of the EMP table, DB2 must determine a data type for HVMIDINIT:
# SQLSTMT="UPDATE EMP" ,
# "SET MIDINIT = ?" ,
# "WHERE EMPNO = '2'"
# "EXECSQL PREPARE S1 FROM :SQLSTMT"
# HVMIDINIT='H'
# "EXECSQL EXECUTE S1 USING" ,
# ":HVMIDINIT"
# Because the data that is assigned to HVMIDINIT has a format that fits a character
# data type, DB2 REXX Language Support assigns a VARCHAR type to the input
# data.
# Enclosing the string in apostrophes is not adequate because REXX removes the
# apostrophes when it assigns a literal to a variable. For example, suppose that you
# want to pass the value in host variable stringvar to DB2. The value that you want to
# pass is the string '100'. The first thing that you need to do is to assign the string to
# the host variable. You might write a REXX command like this:
# stringvar = '100'
# After the command executes, stringvar contains the characters 100 (without the
# apostrophes). DB2 REXX Language Support then passes the numeric value 100 to
# DB2, which is not what you intended.
# To indicate the data type of input data to DB2, use an SQLDA. For example,
# suppose you want to tell DB2 that the data with which you update the MIDINIT
# column of the EMP table is of type CHAR, rather than VARCHAR. You need to set
# up an SQLDA that contains a description of a CHAR column, and then prepare and
# execute the UPDATE statement using that SQLDA:
# INSQLDA.SQLD = 1 /* SQLDA contains one variable */
# INSQLDA.1.SQLTYPE = 453 /* Type of the variable is CHAR, */
# /* and the value can be null */
# INSQLDA.1.SQLLEN = 1 /* Length of the variable is 1 */
# INSQLDA.1.SQLDATA = 'H' /* Value in variable is H */
# INSQLDA.1.SQLIND = 0 /* Input variable is not null */
# SQLSTMT="UPDATE EMP" ,
# "SET MIDINIT = ?" ,
# "WHERE EMPNO = '2'"
# "EXECSQL PREPARE S1 FROM :SQLSTMT"
# "EXECSQL EXECUTE S1 USING" ,
# "DESCRIPTOR :INSQLDA"
# Table 21 (Page 1 of 2). SQL output data types and REXX data formats
# SQL data type REXX output data format
# SMALLINT or INTEGER   A string of numerics that does not contain leading zeroes, a decimal point, or an
#                       exponent identifier. If the string represents a negative number, it begins with a minus
#                       (−) sign. The numeric value is between -2147483647 and 2147483647, inclusive.
# DECIMAL(p,s) A string of numerics with one of the following formats:
# ) Contains a decimal point but not an exponent identifier. The string is padded with
# zeroes to match the scale of the corresponding table column. If the value
# represents a negative number, it begins with a minus (−) sign.
# ) Does not contain a decimal point or an exponent identifier. The numeric value is
# less than -2147483647 or greater than 2147483647. If the value is negative, it
# begins with a minus (−) sign.
# Table 21 (Page 2 of 2). SQL output data types and REXX data formats
# SQL data type REXX output data format
# FLOAT(n), REAL,       A string that represents a number in scientific notation. The string consists of a
# or DOUBLE             numeric, a decimal point, a series of numerics, and an exponent identifier. The
#                       exponent identifier is an E followed by a minus (−) sign and a series of numerics if the
#                       number is between -1 and 1. Otherwise, the exponent identifier is an E followed by a
#                       series of numerics. If the string represents a negative number, it begins with a minus
#                       (−) sign.
# CHAR(n) or            A character string of length n bytes. The string is not enclosed in single or double
# VARCHAR(n)            quotation marks.
# GRAPHIC(n) or         A string of length 2*n bytes. Each pair of bytes represents a double-byte character.
# VARGRAPHIC(n)         This string does not contain a leading G, is not enclosed in quotation marks, and does
#                       not contain shift-out or shift-in characters.
# Because you cannot use the SELECT INTO statement in a REXX procedure, to
# retrieve data from a DB2 table you must prepare a SELECT statement, open a
# cursor for the prepared statement, and then fetch rows into host variables or an
# SQLDA using the cursor. The following example demonstrates how you can
# retrieve data from a DB2 table using an SQLDA:
# SQLSTMT= ,
# 'SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME,' ,
# ' WORKDEPT, PHONENO, HIREDATE, JOB,' ,
# ' EDLEVEL, SEX, BIRTHDATE, SALARY,' ,
# ' BONUS, COMM' ,
# ' FROM EMP'
# EXECSQL DECLARE C1 CURSOR FOR S1
# EXECSQL PREPARE S1 INTO :OUTSQLDA FROM :SQLSTMT
# EXECSQL OPEN C1
# Do Until(SQLCODE ¬= 0)
# EXECSQL FETCH C1 USING DESCRIPTOR :OUTSQLDA
# If SQLCODE = 0 Then Do
# Line = ''
# Do I = 1 To OUTSQLDA.SQLD
# Line = Line OUTSQLDA.I.SQLDATA
# End I
# Say Line
# End
# End
# The way that you use indicator variables for input host variables in REXX
# procedures is slightly different from the way that you use indicator variables in other
# languages. When you want to pass a null value to a DB2 column, in addition to
# putting a negative value in an indicator variable, you also need to put a valid value
# in the corresponding host variable. For example, to set a value of WORKDEPT in
# table EMP to null, use statements like these:
# SQLSTMT="UPDATE EMP" ,
# "SET WORKDEPT = ?"
# HVWORKDEPT=''
# INDWORKDEPT=-1
# "EXECSQL PREPARE S1 FROM :SQLSTMT"
# "EXECSQL EXECUTE S1 USING :HVWORKDEPT :INDWORKDEPT"
# After you retrieve data from a column that can contain null values, you should
# always check the indicator variable that corresponds to the output host variable for
# that column. If the indicator variable value is negative, the retrieved value is null, so
# you can disregard the value in the host variable.
# In the following example, the phone number for employee Haas is selected into
# variable HVPhone. After the SELECT statement executes, if no phone number for
# employee Haas is found, indicator variable INDPhone contains -1.
# SQLSTMT = ,
# "SELECT PHONENO WHERE LASTNAME='HAAS'"
# "EXECSQL PREPARE S1 FROM :SQLSTMT"
# "EXECSQL DECLARE C1 CURSOR FOR S1"
# "EXECSQL OPEN C1"
# "EXECSQL FETCH C1 INTO :HVPhone :INDPhone"
# If INDPhone < 0 Then ,
# Say 'Phone number for Haas is null.'
| Triggers also move application logic into DB2, which can result in faster application
| development and easier maintenance. For example, you can write applications to
| control salary changes in the employee table, but each application program that
| changes the salary column must include logic to check those changes. A better
| method is to define a trigger that controls changes to the salary column. Then DB2
| does the checking for any application that modifies salaries.
| When you execute this CREATE TRIGGER statement, DB2 creates a trigger
| package called REORDER and associates the trigger package with table PARTS.
| DB2 records the timestamp when it creates the trigger. If you define other triggers
| on the PARTS table, DB2 uses this timestamp to determine which trigger to
| activate first. The trigger is now ready to use.
| When you no longer want to use trigger REORDER, you can delete the trigger by
| executing the statement:
| DROP TRIGGER REORDER RESTRICT;
| Executing this statement drops trigger REORDER and its associated trigger
| package named REORDER.
| If you drop table PARTS, DB2 also drops trigger REORDER and its trigger
| package.
| Trigger name: Use a short, ordinary identifier to name your trigger. You can use a
| qualifier or let DB2 determine the qualifier. When DB2 creates a trigger package for
| the trigger, it uses the qualifier for the collection ID of the trigger package. DB2
| uses these rules to determine the qualifier:
| ) If you use static SQL to execute the CREATE TRIGGER statement, DB2 uses
| the authorization ID in the bind option QUALIFIER for the plan or package that
| contains the CREATE TRIGGER statement. If the bind command does not
| include the QUALIFIER option, DB2 uses the owner of the package or plan.
| ) If you use dynamic SQL to execute the CREATE TRIGGER statement, DB2
| uses the authorization ID in special register CURRENT SQLID.
| Triggering table: When you perform an insert, update, or delete operation on this
| table, the trigger is activated. You must name a local table in the CREATE
| TRIGGER statement. You cannot define a trigger on a catalog table or on a view.
| Trigger activation time: The two choices for trigger activation time are NO
| CASCADE BEFORE and AFTER. NO CASCADE BEFORE means that the trigger
| is activated before DB2 makes any changes to the triggering table, and that the
| triggered action does not activate any other triggers. AFTER means that the trigger
| is activated after DB2 makes changes to the triggering table and can activate other
| triggers. Triggers with an activation time of NO CASCADE BEFORE are known as
| before triggers. Triggers with an activation time of AFTER are known as after
| triggers.
| A triggering event can also be an update or delete operation that occurs as the
| result of a referential constraint with ON DELETE SET NULL or ON DELETE
| CASCADE.
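| For example, in the following sketch (the table and column names are
| illustrative only), deleting a row from DEPT also deletes the dependent rows of
| PROJ, and those cascaded deletes activate any delete triggers that are defined
| on PROJ:
|
|   ALTER TABLE PROJ
|     ADD FOREIGN KEY (DEPTNO)
|       REFERENCES DEPT ON DELETE CASCADE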
| Triggers are not activated as the result of updates made to tables by DB2 utilities.
| When the triggering event for a trigger is an update operation, the trigger is called
| an update trigger. Similarly, triggers for insert operations are called insert triggers,
| and triggers for delete operations are called delete triggers.
| The following example shows a trigger that is defined with an INSERT triggering
| event:
| CREATE TRIGGER NEW_HIRE
| AFTER INSERT ON EMP
| FOR EACH ROW MODE DB2SQL
| BEGIN ATOMIC
| UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
| END
| Each triggering event is associated with one triggering table and one SQL
| operation. If the triggering SQL operation is an update operation, the event can be
| associated with specific columns of the triggering table. In this case, the trigger is
| activated only if the update operation updates any of the specified columns. For
| example, the following trigger, PAYROLL1, is activated only if an update operation
| is performed on columns SALARY or BONUS of table PAYROLL:
| CREATE TRIGGER PAYROLL1
| AFTER UPDATE OF SALARY, BONUS ON PAYROLL
| FOR EACH STATEMENT MODE DB2SQL
| BEGIN ATOMIC
| VALUES(PAYROLL_LOG(USER, 'UPDATE', CURRENT TIME, CURRENT DATE));
| END
| Granularity: The triggering SQL statement might modify multiple rows in the table.
| The granularity of the trigger determines whether the trigger is activated only once
| for the triggering SQL statement or once for every row that the SQL statement
| modifies. The granularity values are:
| ) FOR EACH ROW
| The trigger is activated once for each row that DB2 modifies in the triggering
| table. If the triggering SQL statement modifies no rows, the trigger is not
| activated. However, if the triggering SQL statement updates a value in a row to
| the same value, the trigger is activated. For example, if an UPDATE trigger is
| defined on table COMPANY_STATS, the following SQL statement will activate
| the trigger.
| UPDATE COMPANY_STATS SET NBEMP = NBEMP;
| ) FOR EACH STATEMENT
| The trigger is activated once when the triggering SQL statement executes. The
| trigger is activated even if the triggering SQL statement modifies no rows.
| Triggers with a granularity of FOR EACH ROW are known as row triggers.
| Triggers with a granularity of FOR EACH STATEMENT are known as statement
| triggers. Statement triggers can only be after triggers.
| Transition variables: When you code a row trigger, you might need to refer to the
| values of columns in each updated row of the triggering table. To do this, specify
| transition variables in the REFERENCING clause of your CREATE TRIGGER
| statement. The two types of transition variables are:
| ) Old transition variables, specified with the OLD transition-variable clause,
| capture the values of columns before the triggering SQL statement updates
| them. You can define old transition variables for update and delete triggers.
| ) New transition variables, specified with the NEW transition-variable clause,
| capture the values of columns after the triggering SQL statement updates them.
| You can define new transition variables for update and insert triggers.
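For example, a row trigger along these lines records each salary change by using both old and new transition variables; the trigger name SAL_HIST and the audit table SALARY_AUDIT are illustrative:

   CREATE TRIGGER SAL_HIST
     AFTER UPDATE OF SALARY ON EMP
     REFERENCING OLD AS OLD_ROW
                 NEW AS NEW_ROW
     FOR EACH ROW MODE DB2SQL
     BEGIN ATOMIC
       INSERT INTO SALARY_AUDIT
         VALUES (NEW_ROW.EMPNO, OLD_ROW.SALARY, NEW_ROW.SALARY, CURRENT TIMESTAMP);
     END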
| Transition tables: If you want to refer to the entire set of rows that a triggering
| SQL statement modifies, rather than to individual rows, use a transition table. Like
| transition variables, transition tables can appear in the REFERENCING clause of a
| CREATE TRIGGER statement. Transition tables are valid for both row triggers and
| statement triggers. The two types of transition tables are:
| ) Old transition tables, specified with the OLD TABLE transition-table clause,
| capture the values of columns before the triggering SQL statement updates
| them. You can define old transition tables for update and delete triggers.
| ) New transition tables, specified with the NEW TABLE transition-table clause,
| capture the values of columns after the triggering SQL statement updates them.
| You can define new transition tables for update and insert triggers.
| The scope of old and new transition table names is the trigger body. If another
| table exists that has the same name as a transition table, any unqualified reference
| to that name in the trigger body points to the transition table. To reference the other
| table in the trigger body, you must use the fully qualified table name.
| The following example uses a new transition table to capture the set of rows that
| are inserted into the INVOICE table:
| CREATE TRIGGER LRG_ORDR
| AFTER INSERT ON INVOICE
| REFERENCING NEW TABLE AS N_TABLE
| FOR EACH STATEMENT MODE DB2SQL
| BEGIN ATOMIC
| SELECT LARGE_ORDER_ALERT(CUST_NO,
| TOTAL_PRICE, DELIVERY_DATE)
| FROM N_TABLE WHERE TOTAL_PRICE > 1;
| END
| Trigger condition: If you want the triggered action to occur only when certain
| conditions are true, code a trigger condition. A trigger condition is similar to a
| predicate in a SELECT, except that the trigger condition begins with WHEN, rather
| than WHERE. If you do not include a trigger condition in your triggered action, the
| trigger body executes every time the trigger is activated.
| For a row trigger, DB2 evaluates the trigger condition once for each modified row of
| the triggering table. For a statement trigger, DB2 evaluates the trigger condition
| once for each execution of the triggering SQL statement.
# If the trigger condition of a before trigger has a subselect, the subselect cannot
# reference the triggering table.
| The following example shows a trigger condition that causes the trigger body to
| execute only when the number of ordered items is greater than the number of
| available items:
| CREATE TRIGGER CK_AVAIL
| NO CASCADE BEFORE INSERT ON ORDERS
| REFERENCING NEW AS NEW_ORDER
| FOR EACH ROW MODE DB2SQL
| WHEN (NEW_ORDER.QUANTITY >
| (SELECT ON_HAND FROM PARTS
| WHERE NEW_ORDER.PARTNO=PARTS.PARTNO))
| BEGIN ATOMIC
| VALUES(ORDER_ERROR(NEW_ORDER.PARTNO,
| NEW_ORDER.QUANTITY));
| END
| Trigger body: In the trigger body, you code the SQL statements that you want to
| execute whenever the trigger condition is true. The trigger body begins with BEGIN
| ATOMIC and ends with END. You cannot include host variables or parameter
| markers in your trigger body. If the trigger body contains a WHERE clause that
# references transition variables, the comparison operator cannot be LIKE.
| The statements you can use in a trigger body depend on the activation time of the
| trigger. Table 22 summarizes which SQL statements you can use in which types of
| triggers.
| Table 22 (Page 1 of 2). Valid SQL statements for triggers and trigger activation times
|
|                          Valid for Activation Time
| SQL Statement            Before          After
| SELECT                   Yes             Yes
| VALUES                   Yes             Yes
| CALL                     Yes             Yes
| SIGNAL SQLSTATE          Yes             Yes
| The following list provides more detailed information about SQL statements that are
| valid in triggers:
| ) SELECT, VALUES, and CALL
| Use the SELECT or VALUES statement in a trigger body to conditionally or
| unconditionally invoke a user-defined function. Use the CALL statement to
| invoke a stored procedure. See “Invoking stored procedures and user-defined
| functions from triggers” on page 224 for more information on invoking
| user-defined functions and stored procedures from triggers.
# A SELECT statement in the trigger body of a before trigger cannot reference
# the triggering table.
| ) SET transition-variable
| Because before triggers operate on rows of a table before those rows are
| modified, you cannot perform operations in the body of a before trigger that
| directly modify the triggering table. You can, however, use the SET
| transition-variable statement to modify the values in a row before those values
| go into the table. For example, this trigger uses a new transition variable to fill
| in today's date for the new employee's hire date:
| CREATE TRIGGER HIREDATE
| NO CASCADE BEFORE INSERT ON EMP
| REFERENCING NEW AS NEW_VAR
| FOR EACH ROW MODE DB2SQL
| BEGIN ATOMIC
| SET NEW_VAR.HIRE_DATE = CURRENT_DATE;
| END
| ) SIGNAL SQLSTATE
| Use the SIGNAL SQLSTATE statement in the trigger body to report an error
| condition and back out any changes that are made by the trigger, as well as
| actions that result from referential constraints on the triggering table. When
| DB2 executes the SIGNAL SQLSTATE statement, it returns an SQLCA to the
| application with SQLCODE -438. The SQLCA also includes the following
| values, which you supply in the SIGNAL SQLSTATE statement:
| – A five-character value that DB2 uses as the SQLSTATE
| – An error message that DB2 places in the SQLERRMC field
| In the following example, the SIGNAL SQLSTATE statement causes DB2 to
| return an SQLCA with SQLSTATE 75001 and terminate the salary update
| operation if an employee's salary increase is over 20%:
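A trigger along the following lines produces that behavior; the trigger name SAL_ADJ and the message text are illustrative:

   CREATE TRIGGER SAL_ADJ
     AFTER UPDATE OF SALARY ON EMP
     REFERENCING OLD AS OLD_EMP
                 NEW AS NEW_EMP
     FOR EACH ROW MODE DB2SQL
     WHEN (NEW_EMP.SALARY > (OLD_EMP.SALARY * 1.20))
     BEGIN ATOMIC
       SIGNAL SQLSTATE '75001' ('Invalid salary increase - exceeds 20%');
     END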
| If any SQL statement in the trigger body fails during trigger execution, DB2 rolls
| back all changes that are made by the triggering SQL statement and the triggered
| SQL statements. However, if the trigger body executes actions that are outside of
| DB2's control or are not under the same commit coordination as the DB2
| subsystem in which the trigger executes, DB2 cannot undo those actions.
| Examples of external actions that are not under DB2's control are:
| ) Performing updates that are not under RRS commit control
| ) Sending an electronic mail message
| If the trigger executes external actions that are under the same commit coordination
| as the DB2 subsystem under which the trigger executes, and an error occurs
| during trigger execution, DB2 places the application process that issued the
| triggering statement in a must-rollback state. The application must then execute a
| rollback operation to roll back those external actions. Examples of external actions
| that are under the same commit coordination as the triggering SQL operation are:
| ) Executing a distributed update operation
| ) From a user-defined function or stored procedure, executing an external action
| that affects an external resource manager that is under RRS commit control.
| Because a before trigger must not modify any table, functions and procedures that
| you invoke from a trigger cannot include INSERT, UPDATE, or DELETE statements
| that modify the triggering table.
| Most of the code for using a table locator is in the function or stored procedure that
| receives the locator. “Accessing transition tables in a user-defined function” on
| page 287 explains how a function defines a table locator and uses it to receive a
| transition table. To pass the transition table from a trigger, specify the parameter
| TABLE transition-table when you invoke the function or stored procedure. For
| example, this trigger passes a table locator for a new transition table to stored
| procedure CHECKEMP:
| CREATE TRIGGER EMPRAISE
| AFTER UPDATE ON EMP
| REFERENCING NEW TABLE AS NEWEMPS
| FOR EACH STATEMENT MODE DB2SQL
| BEGIN ATOMIC
| CALL CHECKEMP(TABLE NEWEMPS);
| END
| For example, in these cases, trigger A and trigger B are activated at the same
| level:
| ) Table X has two triggers that are defined on it, A and B. A is a before trigger
| and B is an after trigger. An update to table X causes both trigger A and trigger
| B to activate.
| ) Trigger A updates table X, which has a referential constraint with table Y, which
| has trigger B defined on it. The referential constraint causes table Y to be
| updated, which activates trigger B.
| In these cases, trigger A and trigger B are activated at different levels:
| ) Trigger A is defined on table X, and trigger B is defined on table Y. Trigger B
| is an update trigger. An update to table X activates trigger A, which contains an
| UPDATE statement on table Y in its trigger body. This UPDATE statement
| activates trigger B.
| ) Trigger A calls a stored procedure. The stored procedure contains an INSERT
| statement for table X, which has insert trigger B defined on it. When the
| INSERT statement on table X executes, trigger B is activated.
| When triggers are activated at different levels, it is called trigger cascading. Trigger
| cascading can occur only for after triggers because DB2 does not support
| cascading of before triggers.
| To prevent the possibility of endless trigger cascading, DB2 supports only 16 levels
| of cascading of triggers, stored procedures, and user-defined functions. If a trigger,
| user-defined function, or stored procedure at the 17th level is activated, DB2
| returns SQLCODE -724 and backs out all SQL changes in the 16 levels of
| cascading. However, as with any other SQL error that occurs during trigger
| execution, if any action occurs that is outside the control of DB2, that action is not
| backed out.
| You can write a monitor program that issues IFI READS requests to collect DB2
| trace information about the levels of cascading of triggers, user-defined functions,
| and stored procedures in your programs. See Appendixes (Volume 2) of DB2
| Administration Guide for information on how to write a monitor program.
| In this example, triggers NEWHIRE1 and NEWHIRE2 have the same triggering
| event (INSERT), the same triggering table (EMP), and the same activation time
| (AFTER). Suppose that the CREATE TRIGGER statement for NEWHIRE1 is run
| before the CREATE TRIGGER statement for NEWHIRE2:
| CREATE TRIGGER NEWHIRE1
| AFTER INSERT ON EMP
| FOR EACH ROW MODE DB2SQL
| BEGIN ATOMIC
| UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
| END
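The CREATE TRIGGER statement for NEWHIRE2 has the same triggering event, triggering table, granularity, and activation time; the body shown here, and the table that it updates, are illustrative:

   CREATE TRIGGER NEWHIRE2
     AFTER INSERT ON EMP
     FOR EACH ROW MODE DB2SQL
     BEGIN ATOMIC
       UPDATE DEPT_STATS SET NBHIRES = NBHIRES + 1;
     END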
| When an insert operation occurs on table EMP, DB2 activates NEWHIRE1 first
| because NEWHIRE1 was created first. Now suppose that someone drops and
| recreates NEWHIRE1. NEWHIRE1 now has a later timestamp than NEWHIRE2, so
| the next time an insert operation occurs on EMP, NEWHIRE2 is activated before
| NEWHIRE1.
| If two row triggers are defined for the same action, the trigger that was created
| earlier is activated first for all affected rows. Then the second trigger is activated for
| all affected rows. In the previous example, suppose that an INSERT statement with
| a subselect inserts 10 rows into table EMP. NEWHIRE1 is activated for all 10
| rows, then NEWHIRE2 is activated for all 10 rows.
| In general, the following steps occur when triggering SQL statement S1 performs
| an insert, update, or delete operation on table T1:
| 1. DB2 determines the rows of T1 to modify. Call that set of rows M1. The
| contents of M1 depend on the SQL operation:
| ) For a delete operation, all rows that satisfy the search condition of the
| statement for a searched delete operation, or the current row for a
| positioned delete operation
| ) For an insert operation, the row identified by the VALUES statement, or the
| rows identified by a SELECT clause
| If any triggered actions contain SQL insert, update, or delete operations, DB2
| repeats steps 1 through 5 for each operation.
| For example, table DEPT is a parent table of EMP, with these conditions:
| ) The DEPTNO column of DEPT is the primary key.
| ) The WORKDEPT column of EMP is the foreign key.
| ) The constraint is ON DELETE SET NULL.
| Suppose the following trigger is defined on EMP:
| CREATE TRIGGER EMPRAISE
| AFTER UPDATE ON EMP
| REFERENCING NEW TABLE AS NEWEMPS
| FOR EACH STATEMENT MODE DB2SQL
| BEGIN ATOMIC
| VALUES(CHECKEMP(TABLE NEWEMPS));
| END
| When DB2 executes the FETCH statement that positions cursor C1 for the first
| time, DB2 evaluates the subselect, SELECT B1 FROM T2, to produce a result
| table that contains the two values of column B1 from T2:
| 1
| 2
| When DB2 executes the positioned UPDATE statement for the first time, trigger
| TR1 is activated. When the body of trigger TR1 executes, the row with value 2 is
| deleted from T2. However, because SELECT B1 FROM T2 is evaluated only once,
| when the FETCH statement is executed again, DB2 finds the second row of T1,
| even though the second row of T2 was deleted. The FETCH statement positions
| the cursor to the second row of T1, and the second row of T1 is updated. The
| update operation causes the trigger to be activated again, which causes DB2 to
| attempt to delete the second row of T2, even though that row was already deleted.
| To avoid processing of the second row after it should have been deleted, use a
| correlated subquery in the cursor declaration:
| DCL C1 CURSOR FOR
| SELECT A1 FROM T1 X
| WHERE EXISTS (SELECT B1 FROM T2 WHERE X.A1 = B1)
| FOR UPDATE OF A1;
| In this case, the subquery, SELECT B1 FROM T2 WHERE X.A1 = B1, is evaluated
| for each FETCH statement. The first time that the FETCH statement executes, it
| positions the cursor to the first row of T1. The positioned UPDATE operation
| activates the trigger, which deletes the second row of T2. Therefore, when the
| FETCH statement executes again, no row is selected, so no update operation or
| triggered action occurs.
| The following example shows how the order of processing rows can change the
| outcome of an after row trigger.
| The contents of tables T2 and T3 after the UPDATE statement executes depend on
| the order in which DB2 updates the rows of T1.
| If DB2 updates the first row of T1 first, after the UPDATE statement and the trigger
| execute for the first time, the values in the three tables are:
| Table T1 Table T2 Table T3
| A1 B1 C1
| == == ==
| 2 2 2
| 2
| After the second row of T1 is updated, the values in the three tables are:
| Table T1 Table T2 Table T3
| A1 B1 C1
| == == ==
| 2 2 2
| 3 3 2
| 3
| However, if DB2 updates the second row of T1 first, after the UPDATE statement
| and the trigger execute for the first time, the values in the three tables are:
| Table T1 Table T2 Table T3
| A1 B1 C1
| == == ==
| 1 3 3
| 3
| After the first row of T1 is updated, the values in the three tables are:
| Table T1 Table T2 Table T3
| A1 B1 C1
| == == ==
| 2 3 3
| 3 2 3
| 2
| Introduction to LOBs
| Working with LOBs involves defining the LOBs to DB2, moving the LOB data into
| DB2 tables, then using SQL operations to manipulate the data. This chapter
| concentrates on manipulating LOB data using SQL statements. For information on
| defining LOBs to DB2, see Chapter 6 of DB2 SQL Reference and Section 2 of DB2
| Administration Guide. For information on how DB2 utilities manipulate LOB data,
| see Section 2 of DB2 Utility Guide and Reference.
| These are the basic steps for defining LOBs and moving the data into DB2:
| 1. Define a column of the appropriate LOB type and a row identifier (ROWID)
| column in a DB2 table. Define only one ROWID column, even if there are
| multiple LOB columns in the table.
| The LOB column holds information about the LOB, not the LOB data itself.
| The table that contains the LOB information is called the base table. DB2 uses
| the ROWID column to locate your LOB data. You need only one ROWID
| column in a table that contains one or more LOB columns. You can define the
| LOB column and the ROWID column in a CREATE TABLE or ALTER TABLE
| statement. If you are adding a LOB column and a ROWID column to an
| existing table, you must use two ALTER TABLE statements. Add the ROWID
| with the first ALTER TABLE statement and the LOB column with the second.
| 2. Create a table space and table to hold the LOB data.
| The table space and table are called a LOB table space and an auxiliary table.
| If your base table is nonpartitioned, you must create one LOB table space and
| one auxiliary table for each LOB column. If your base table is partitioned, for
| each LOB column, you must create one LOB table space and one auxiliary
| table for each partition. For example, if your base table has three partitions, you
| must create three LOB table spaces and three auxiliary tables for each LOB column.
| For example, suppose you want to add a resume for each employee to the
| employee table. Employee resumes are no more than 5 MB in size. The employee
| resumes contain single-byte characters, so you can define the resumes to DB2 as
| CLOBs. You therefore need to add a column of data type CLOB with a length of 5
| MB to the employee table. If a ROWID column has not been defined in the table,
| you need to add the ROWID column before you add the CLOB column. Execute
| an ALTER TABLE statement to add the ROWID column, and then execute another
| ALTER TABLE statement to add the CLOB column. You might use statements like
| this:
# ALTER TABLE EMP
# ADD ROW_ID ROWID NOT NULL GENERATED ALWAYS;
# COMMIT;
# ALTER TABLE EMP
# ADD EMP_RESUME CLOB(5M);
# COMMIT;
| Next, you need to define a LOB table space and an auxiliary table to hold the
| employee resumes. You also need to define an index on the auxiliary table. You
| must define the LOB table space in the same database as the associated base
| table. You can use statements like this:
| CREATE LOB TABLESPACE RESUMETS
| IN DSN8D61A
| LOG NO;
| COMMIT;
| CREATE AUXILIARY TABLE EMP_RESUME_TAB
| IN DSN8D61A.RESUMETS
| STORES DSN8610.EMP
| COLUMN EMP_RESUME;
| CREATE UNIQUE INDEX XEMP_RESUME
| ON EMP_RESUME_TAB;
| COMMIT;
| If the value of bind option SQLRULES is STD, or if special register CURRENT
| RULES has been set in the program and has the value STD, DB2 creates the LOB
| table space, auxiliary table, and auxiliary index for you when you execute the
| ALTER statement to add the LOB column.
| Now that your DB2 objects for the LOB data are defined, you can load your
| employee resumes into DB2. To do this in an SQL application, you can define a
| host variable of SQL type CLOB to hold the resume and then use an INSERT or
| UPDATE statement to move the data into the table.
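For example, if the resume text is already in a CLOB host variable (the variable name resumeHv and the employee number are illustrative), an UPDATE statement such as this sketch stores it for one employee:

   EXEC SQL UPDATE EMP
     SET EMP_RESUME = :resumeHv
     WHERE EMPNO = '000130';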
| After your LOB data is in DB2, you can write SQL applications to manipulate the
| data. You can use most SQL statements with LOBs. For example, you can use
| statements like these to extract information about an employee's department from
| the resume:
| EXEC SQL BEGIN DECLARE SECTION;
| long deptInfoBeginLoc;
| long deptInfoEndLoc;
| SQL TYPE IS CLOB_LOCATOR resume;
| SQL TYPE IS CLOB_LOCATOR deptBuffer;
| EXEC SQL END DECLARE SECTION;
|| ..
| .
| EXEC SQL DECLARE C1 CURSOR FOR
| SELECT EMPNO, EMP_RESUME FROM EMP;
|| ..
| .
| EXEC SQL FETCH C1 INTO :employeenum, :resume;
||| ...
| EXEC SQL SET :deptInfoBeginLoc =
| POSSTR(:resumedata, 'Department Information');
| Sample LOB applications: Table 23 on page 240 lists the sample programs that
| DB2 provides to assist you in writing applications to manipulate LOB data. All
| programs reside in data set DSN610.SDSNSAMP.
| For instructions on how to prepare and run the sample LOB applications, see
| Section 2 of DB2 Installation Guide.
| You can declare LOB host variables and LOB locators in assembler, C, C++,
| COBOL, FORTRAN, and PL/I. For each host variable or locator of SQL type BLOB,
| CLOB, or DBCLOB that you declare, DB2 generates an equivalent declaration that
| uses host language data types. When you refer to a LOB host variable or locator in
| an SQL statement, you must use the variable you specified in the SQL type
| declaration. When you refer to the host variable in a host language statement, you
| must use the variable that DB2 generates. See “Section 3. Coding SQL in your
| host application program” on page 89 for the syntax of LOB declarations in each
| language and for host language equivalents for each LOB type.
| The following examples show you how to declare LOB host variables in each
| supported language. In each table, the left column contains the declaration that you
| code in your application program. The right column contains the declaration that
| DB2 generates.
| Declarations of LOB host variables in PL/I: Table 28 on page 244 shows PL/I
| declarations for some typical LOB types.
| Data spaces for LOB materialization: The amount of storage that is used in data
| spaces for LOB materialization depends on a number of factors including:
| ) The size of the LOBs
| ) The number of LOBs that need to be materialized in a statement
| DB2 allocates a certain number of data spaces for LOB materialization. If there is
| insufficient space available in a data space for LOB materialization, your application
| receives SQLCODE -904.
| Although you cannot completely avoid LOB materialization, you can minimize it by
| using LOB locators, rather than LOB host variables in your application programs.
| See “Using LOB locators to save storage” for information on how to use LOB
| locators.
| A LOB locator is associated with a LOB value or expression, not with a row in a
| DB2 table or a physical storage location in a table space. Therefore, after you
| select a LOB value using a locator, the value in the locator normally does not
| change until the current unit of work ends. However the value of the LOB itself can
| change.
| If you want to remove the association between a LOB locator and its value before a
| unit of work ends, execute the FREE LOCATOR statement. To keep the
| association between a LOB locator and its value after the unit of work ends,
| execute the HOLD LOCATOR statement. After you execute a HOLD LOCATOR
| statement, the locator keeps the association with the corresponding value until you
| execute a FREE LOCATOR statement or the program ends.
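For example, the following statements keep a locator association past the end of the unit of work and later release it explicitly; the locator name HV_DOC_LOCATOR1 is illustrative:

   EXEC SQL HOLD LOCATOR :HV_DOC_LOCATOR1;
   EXEC SQL FREE LOCATOR :HV_DOC_LOCATOR1;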
| Because the program uses LOB locators, rather than placing the LOB data into
| host variables, no LOB data is moved until the INSERT statement executes. In
| addition, no LOB data moves between the client and the server.
| /**************************/
| /* Declare host variables */ 1
| /**************************/
| EXEC SQL BEGIN DECLARE SECTION;
| char userid[9];
| char passwd[19];
| long HV_START_DEPTINFO;
| long HV_START_EDUC;
| long HV_RETURN_CODE;
| SQL TYPE IS CLOB_LOCATOR HV_NEW_SECTION_LOCATOR;
| SQL TYPE IS CLOB_LOCATOR HV_DOC_LOCATOR1;
| SQL TYPE IS CLOB_LOCATOR HV_DOC_LOCATOR2;
| SQL TYPE IS CLOB_LOCATOR HV_DOC_LOCATOR3;
| EXEC SQL END DECLARE SECTION;
| Figure 82 (Part 1 of 3). Example of deferring evaluation of LOB expressions
| /*************************************************/
| /* Use a single row select to get the document */ 2
| /*************************************************/
| EXEC SQL SELECT RESUME
| INTO :HV_DOC_LOCATOR1
| FROM EMP_RESUME
| WHERE EMPNO = '13'
| AND RESUME_FORMAT = 'ascii';
| /*****************************************************/
| /* Use the POSSTR function to locate the start of */
| /* sections "Department Information" and "Education" */ 3
| /*****************************************************/
| EXEC SQL SET :HV_START_DEPTINFO =
| POSSTR(:HV_DOC_LOCATOR1, 'Department Information');
| EXEC SQL SET :HV_START_EDUC =
| POSSTR(:HV_DOC_LOCATOR1, 'Education');
| /*******************************************************/
| /* Replace Department Information section with nothing */
| /*******************************************************/
| EXEC SQL SET :HV_DOC_LOCATOR2 =
| SUBSTR(:HV_DOC_LOCATOR1, 1, :HV_START_DEPTINFO -1)
| || SUBSTR (:HV_DOC_LOCATOR1, :HV_START_EDUC);
| /*******************************************************/
| /* Associate a new locator with the Department */
| /* Information section */
| /*******************************************************/
| EXEC SQL SET :HV_NEW_SECTION_LOCATOR =
| SUBSTR(:HV_DOC_LOCATOR1, :HV_START_DEPTINFO,
| :HV_START_EDUC -:HV_START_DEPTINFO);
| /*******************************************************/
| /* Append the Department Information to the end */
| /* of the resume */
| /*******************************************************/
| EXEC SQL SET :HV_DOC_LOCATOR3 =
| :HV_DOC_LOCATOR2 || :HV_NEW_SECTION_LOCATOR;
| Figure 82 (Part 2 of 3). Example of deferring evaluation of LOB expressions
| /*********************/
| /* Free the locators */ 5
| /*********************/
| EXEC SQL FREE LOCATOR :HV_DOC_LOCATOR1, :HV_DOC_LOCATOR2, :HV_DOC_LOCATOR3;
| Figure 82 (Part 3 of 3). Example of deferring evaluation of LOB expressions
| When you use LOB locators to retrieve data from columns that can contain null
| values, define indicator variables for the LOB locators, and check the indicator
| variables after you fetch data into the LOB locators. If an indicator variable is null
| after a fetch operation, you cannot use the value in the LOB locator.
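For example, a fetch like the following sketch retrieves a locator and its indicator variable; the host variable names are illustrative:

   EXEC SQL FETCH C1 INTO :HV_RESUME_LOC :HV_RESUME_IND;

If HV_RESUME_IND is negative, the column value is null and HV_RESUME_LOC contains no usable value.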
| The user-defined function's definer and invoker determine that this new
| user-defined function should have these characteristics, which the sketch after this list reflects:
| ) The user-defined function name is CALC_BONUS.
| ) The two input fields are of type DECIMAL(9,2).
| ) The output field is of type DECIMAL(9,2).
| ) The program for the user-defined function is written in COBOL and has a load
| module name of CBONUS.
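A CREATE FUNCTION statement that registers a function with these characteristics might look like the following sketch; options that the list does not mention, such as PARAMETER STYLE DB2SQL, are assumptions:

   CREATE FUNCTION CALC_BONUS(DECIMAL(9,2), DECIMAL(9,2))
     RETURNS DECIMAL(9,2)
     EXTERNAL NAME 'CBONUS'
     LANGUAGE COBOL
     PARAMETER STYLE DB2SQL;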
| User-defined function invokers write and prepare application programs that invoke
| CALC_BONUS. An invoker might write a statement like this, which uses the
| user-defined function to update the BONUS field in the employee table:
| UPDATE EMP
| SET BONUS = CALC_BONUS(SALARY,COMM);
| An invoker can execute this statement either statically or dynamically.
| Member DSN8DUWC contains a client program that shows you how to invoke the
| WEATHER user-defined table function.
| Member DSNTEJ2U shows you how to define and prepare the sample user-defined
| functions and the client program.
| The user-defined function takes two integer values as input. The output from the
| user-defined function is of type integer. The user-defined function is in the MATH
| schema, is written in assembler, and contains no SQL statements. This CREATE
| FUNCTION statement defines the user-defined function:
| CREATE FUNCTION MATH."/" (INT, INT)
| RETURNS INTEGER
| SPECIFIC DIVIDE
| EXTERNAL NAME 'DIVIDE'
| LANGUAGE ASSEMBLE
| PARAMETER STYLE DB2SQL
| NO SQL
| DETERMINISTIC
| NO EXTERNAL ACTION
| FENCED;
| Suppose you want the FINDSTRING user-defined function to work on BLOB data
| types, as well as CLOB types. You can define another instance of the user-defined
| function that specifies a BLOB type as input:
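A definition along these lines adds a BLOB version of FINDSTRING; the parameter lengths, specific name, and external options shown here are assumptions:

   CREATE FUNCTION FINDSTRING (BLOB(500K), VARCHAR(200))
     RETURNS INTEGER
     SPECIFIC FINDSTRINBLOB
     EXTERNAL NAME 'FINDSTR'
     LANGUAGE C
     PARAMETER STYLE DB2SQL
     NO SQL
     DETERMINISTIC
     NO EXTERNAL ACTION
     FENCED;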
| The user-defined function is written in COBOL, uses SQL only to perform queries,
| always produces the same output for given input, and should not execute as a
| parallel task. The program is reentrant, and successive invocations of the
| user-defined function share information. You expect an invocation of the
| user-defined function to return about 20 rows.
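Those characteristics map to options of the CREATE FUNCTION statement, as in this sketch of a user-defined table function; the function name, parameter, and result columns are illustrative:

   CREATE FUNCTION DOCLIST(VARCHAR(30))
     RETURNS TABLE (DOC_ID CHAR(16), DOC_TITLE VARCHAR(30))
     EXTERNAL NAME 'DOCLIST'
     LANGUAGE COBOL
     PARAMETER STYLE DB2SQL
     READS SQL DATA
     DETERMINISTIC
     DISALLOW PARALLEL
     SCRATCHPAD
     FENCED
     CARDINALITY 20;

READS SQL DATA corresponds to using SQL only for queries, DETERMINISTIC to producing the same output for given input, DISALLOW PARALLEL to not running as a parallel task, SCRATCHPAD to sharing information between invocations, and CARDINALITY 20 to the expected number of result rows.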
| Your user-defined function can also access remote data using the following
| methods:
| ) DB2 private protocol access using three-part names or aliases for three-part
| names
| ) DRDA access using three-part names or aliases for three-part names
| ) DRDA access using CONNECT or SET CONNECTION statements
| The user-defined function and the application that calls it can access the same
| remote site if both use the same protocol.
| The following sections include additional information that you need when you write
| a user-defined function:
| ) “Restrictions on user-defined function programs” on page 257
| ) “Coding your user-defined function as a main program or as a subprogram” on
| page 257
| ) “Parallelism considerations” on page 257
| ) “Passing parameter values to and from a user-defined function” on page 259
| ) “Examples of passing parameters in a user-defined function” on page 272
| ) “Using special registers in a user-defined function” on page 284
| ) “Using a scratchpad in a user-defined function” on page 286
| ) “Accessing transition tables in a user-defined function” on page 287
| If you code your user-defined function as a subprogram and manage the storage
| and files yourself, you can get better performance. The user-defined function should
| always free any allocated storage before it exits. To keep data between invocations
| of the user-defined function, use a scratchpad.
| You must code a user-defined table function that accesses external resources as a
| subprogram. Also ensure that the definer specifies the EXTERNAL ACTION
| parameter in the CREATE FUNCTION or ALTER FUNCTION statement. Program
| variables for a subprogram persist between invocations of the user-defined function,
| and use of the EXTERNAL ACTION parameter ensures that the user-defined
| function stays in the same address space from one invocation to another.
| Parallelism considerations
| If the definer specifies the parameter ALLOW PARALLEL in the definition of a
| user-defined scalar function, and the invoking SQL statement runs in parallel, the
| function can run under a parallel task. DB2 executes a separate instance of the
| user-defined function for each parallel task. When you write your function program,
| you need to understand how the following parameter values interact with ALLOW
| PARALLEL so that you can avoid unexpected results:
| ) SCRATCHPAD
| Figure 84 on page 260 shows the structure of the parameter list that DB2 passes
| to a user-defined function. An explanation of each parameter follows.
| Input parameter values: DB2 obtains the input parameters from the invoker's
| parameter list, and your user-defined function receives those parameters according
| to the rules of the host language in which the user-defined function is written. The
| number of input parameters is the same as the number of parameters in the
| user-defined function invocation. If one of the parameters in the function invocation
| is an expression, DB2 evaluates the expression and assigns the result of the
| expression to the parameter.
| Table 32. Compatible assembler language declarations for LOBs, ROWIDs, and locators
SQL data type in definition Assembler declaration
| TABLE LOCATOR DS FL4
| BLOB LOCATOR
| CLOB LOCATOR
| DBCLOB LOCATOR
| BLOB(n) If n <= 65535:
| var DS 0FL4
| var_length DS FL4
| var_data DS CLn
| If n > 65535:
| var DS 0FL4
| var_length DS FL4
| var_data DS CL65535
| ORG var_data+(n-65535)
| CLOB(n) If n <= 65535:
| var DS 0FL4
| var_length DS FL4
| var_data DS CLn
| If n > 65535:
| var DS 0FL4
| var_length DS FL4
| var_data DS CL65535
| ORG var_data+(n-65535)
| DBCLOB(n) If m (=2*n) <= 65534:
| var DS 0FL4
| var_length DS FL4
| var_data DS CLm
| If m > 65534:
| var DS 0FL4
| var_length DS FL4
| var_data DS CL65534
| ORG var_data+(m-65534)
| ROWID DS HL2,CL40
| Table 34 (Page 1 of 2). Compatible COBOL declarations for LOBs, ROWIDs, and locators
SQL data type in definition COBOL declaration
| TABLE LOCATOR 01 var PIC S9(9) USAGE IS BINARY.
| BLOB LOCATOR
| CLOB LOCATOR
| DBCLOB LOCATOR
| BLOB(n) If n <= 32767:
| 01 var.
| 49 var-LENGTH PIC 9(9)
| USAGE COMP.
| 49 var-DATA PIC X(n).
| If n > 32767:
| 01 var.
| 02 var-LENGTH PIC S9(9)
| USAGE COMP.
| 02 var-DATA.
| 49 FILLER
| PIC X(32767).
| 49 FILLER
| PIC X(32767).
|| ..
| .
| 49 FILLER
| PIC X(mod(n,32767)).
| Table 35 (Page 1 of 2). Compatible PL/I declarations for LOBs, ROWIDs, and locators
SQL data type in definition PL/I
| TABLE LOCATOR BIN FIXED(31)
| BLOB LOCATOR
| CLOB LOCATOR
| DBCLOB LOCATOR
| Result parameters: Set these values in your user-defined function before exiting.
| For a user-defined scalar function, you return one result parameter. For a
| user-defined table function, you return the same number of parameters as columns
| in the RETURNS TABLE clause of the CREATE FUNCTION statement. DB2
| allocates a buffer for each result parameter value and passes the buffer address to
| the user-defined function. Your user-defined function places each result parameter
| value in its buffer. You must ensure that the length of the value you place in each
| result buffer does not exceed the length of that buffer.
| See “Passing parameter values to and from a user-defined function” on page 259
| to determine the host data type to use for each result parameter value. If the
| CREATE FUNCTION statement contains a CAST FROM clause, use a data type
| that corresponds to the SQL data type in the CAST FROM clause. Otherwise, use
| a data type that corresponds to the SQL data type in the RETURNS or RETURNS
| TABLE clause.
| To improve performance for user-defined table functions that return many columns,
| you can pass values for a subset of columns to the invoker. For example, a
| user-defined table function might be defined to return 100 columns, but the invoker
| needs values for only two columns. Use the DBINFO parameter to indicate to DB2
| the columns for which you will return values. Then return values for only those
| columns. See the explanation of DBINFO below for information on how to indicate
| the columns of interest.
| Input parameter indicators: These are SMALLINT values, which DB2 sets before
| it passes control to the user-defined function. You use the indicators to determine
| whether the corresponding input parameters are null. The number and order of the
| indicators are the same as the number and order of the input parameters. On entry
| to the user-defined function, each indicator contains one of these values:
| 0 The input parameter value is not null.
| negative The input parameter value is null.
| Code the user-defined function to check all indicators for null values unless the
| user-defined function is defined with RETURNS NULL ON NULL INPUT. A
| user-defined function defined with RETURNS NULL ON NULL INPUT executes
| only if all input parameters are not null.
| Result indicators: These are SMALLINT values, which you must set before the
| user-defined function ends to indicate to the invoking program whether each result
| parameter value is null. A user-defined scalar function has one result indicator. A
| user-defined table function has the same number of result indicators as the number
| of result parameters. The order of the result indicators is the same as the order of
| the result parameters. Set each result indicator to one of these values:
| 0 or positive The result parameter is not null.
| negative The result parameter is null.
| SQLSTATE value: This is a CHAR(5) value, which you must set before the
| user-defined function ends. The user-defined function can return one of these
| SQLSTATE values:
| 00000 Use this value to indicate that the user-defined function executed
| without any warnings or errors.
| 01Hxx Use these values to indicate that the user-defined function detected
| a warning condition. xx can be any two single-byte alphanumeric
| characters. DB2 returns SQLCODE +462 if the user-defined
| function sets the SQLSTATE to 01Hxx.
| 02000 Use this value to indicate that no more rows are to be
| returned from a user-defined table function.
| When your user-defined function returns an SQLSTATE of 38yxx other than one of
| the four listed above, DB2 returns SQLCODE -443.
| If both the user-defined function and DB2 set an SQLSTATE value, DB2 returns its
| SQLSTATE value to the invoker.
| User-defined function name: DB2 sets this value in the parameter list before the
| user-defined function executes. This value is VARCHAR(137): 8 bytes for the
| schema name, 1 byte for a period, and 128 bytes for the user-defined function
| name. If you use the same code to implement multiple versions of a user-defined
| function, you can use this parameter to determine which version of the function the
| invoker wants to execute.
| Specific name: DB2 sets this value in the parameter list before the user-defined
| function executes. This value is VARCHAR(128) and is either the specific name
| from the CREATE FUNCTION statement or a specific name that DB2 generated. If
| you use the same code to implement multiple versions of a user-defined function,
| you can use this parameter to determine which version of the function the invoker
| wants to execute.
| Diagnostic message: Your user-defined function can set this value to pass
| descriptive error or warning text back to the invoker. DB2 allocates a 70-byte
| buffer for this area and passes you the buffer address in
| the parameter list. Ensure that you do not write more than 70 bytes to the buffer. At
| least the first 17 bytes of the value you put in the buffer appear in the SQLERRMC
| field of the SQLCA that is returned to the invoker. The exact number of bytes
| depends on the number of other tokens in SQLERRMC. Do not use X'FF' in your
| diagnostic message. DB2 uses this value to delimit tokens.
| Scratchpad: If the definer specified SCRATCHPAD in the CREATE FUNCTION
| statement, DB2 passes a scratchpad area to the user-defined function. You must
| ensure that your user-defined function does not write more bytes to the
| scratchpad than the scratchpad length.
| Call type: For a user-defined scalar function, if the definer specified FINAL CALL in
| the CREATE FUNCTION statement, DB2 passes this parameter to the user-defined
| function. For a user-defined table function, DB2 always passes this parameter to
| the user-defined function.
| On entry to a user-defined scalar function, the call type parameter has one of the
| following values:
| -1 This is the first call to the user-defined function for the SQL statement. For
| a first call, all input parameters are passed to the user-defined function. In
| addition, the scratchpad, if allocated, is set to binary zeros.
| 0 This is a normal call. For a normal call, all the input parameters are passed
| to the user-defined function. If a scratchpad is also passed, DB2 does not
| modify it.
| 1 This is a final call. For a final call, no input parameters are passed to the
| user-defined function. If a scratchpad is also passed, DB2 does not modify
| it.
| This type of final call occurs when the invoking application explicitly closes
| a cursor. When a value of 1 is passed to a user-defined function, the
| user-defined function can execute SQL statements.
| 255 This is a final call. For a final call, no input parameters are passed to the
| user-defined function. If a scratchpad is also passed, DB2 does not modify
| it.
| This type of final call occurs when the invoking application executes a
| COMMIT or ROLLBACK statement, or when the invoking application
| abnormally terminates. When a value of 255 is passed to the user-defined
| function, the user-defined function cannot execute any SQL statements,
| except for CLOSE CURSOR. If the user-defined function executes any
| close cursor statements during this type of final call, the user-defined
| During the first call, your user-defined scalar function should acquire any system
| resources it needs. During the final call, the user-defined scalar function should
| release any resources it acquired during the first call. The user-defined scalar
| function should return a result value only during normal calls. DB2 ignores any
| results that are returned during a final call. However, the user-defined scalar
| function can set the SQLSTATE and diagnostic message area during the final call.
| If an invoking SQL statement contains more than one user-defined scalar function,
| and one of those user-defined functions returns an error SQLSTATE, DB2 invokes
| all of the user-defined functions for a final call, and the invoking SQL statement
| receives the SQLSTATE of the first user-defined function with an error.
| On entry to a user-defined table function, the call type parameter has one of the
| following values:
| -2 This is the first call to the user-defined function for the SQL statement. A
| first call occurs only if the FINAL CALL keyword is specified in the
| user-defined function definition. For a first call, all input parameters are
| passed to the user-defined function. In addition, the scratchpad, if allocated,
| is set to binary zeros.
| -1 This is the open call to the user-defined function by an SQL statement. If
| FINAL CALL is not specified in the user-defined function definition, all input
| parameters are passed to the user-defined function, and the scratchpad, if
| allocated, is set to binary zeros during the open call. If FINAL CALL is
| specified for the user-defined function, DB2 does not modify the
| scratchpad.
| 0 This is a fetch call to the user-defined function by an SQL statement. For a
| fetch call, all input parameters are passed to the user-defined function. If a
| scratchpad is also passed, DB2 does not modify it.
| 1 This is a close call. For a close call, no input parameters are passed to the
| user-defined function. If a scratchpad is also passed, DB2 does not modify
| it.
| 2 This is a final call. This type of final call occurs only if FINAL CALL is
| specified in the user-defined function definition. For a final call, no input
| parameters are passed to the user-defined function. If a scratchpad is also
| passed, DB2 does not modify it.
| This type of final call occurs when the invoking application executes a
| CLOSE CURSOR statement.
| 255 This is a final call. For a final call, no input parameters are passed to the
| user-defined function. If a scratchpad is also passed, DB2 does not modify
| it.
| This type of final call occurs when the invoking application executes a
| COMMIT or ROLLBACK statement, or when the invoking application
| abnormally terminates. When a value of 255 is passed to the user-defined
| function, the user-defined function cannot execute any SQL statements,
| except for CLOSE CURSOR. If the user-defined function executes any
| close cursor statements during this type of final call, the user-defined
| During a fetch call, the user-defined table function should return a row. If the
| user-defined function has no more rows to return, it should set the SQLSTATE to
| 02000.
| During the close call, a user-defined table function can set the SQLSTATE and
| diagnostic message area.
| Location name
| A 128-byte character field. It contains the name of the location to which the
| invoker is currently connected.
| Authorization ID length
| An unsigned 2-byte integer field. It contains the length of the authorization ID in
| the next field.
| Authorization ID
| A 128-byte character field. It contains the authorization ID of the application
| from which the user-defined function is invoked, padded on the right with
| blanks. If this user-defined function is nested within other user-defined
| functions, this value is the authorization ID of the application that invoked the
| highest-level user-defined function.
| Table qualifier
| A 128-byte character field. It contains the qualifier of the table that is specified
| in the table name field.
| Table name
| A 128-byte character field. This field contains the name of the table that the
| UPDATE or INSERT modifies if the reference to the user-defined function in
| the invoking SQL statement is in one of the following places:
| Column name
| A 128-byte character field. This field contains the name of the column that the
| UPDATE or INSERT modifies if the reference to the user-defined function in
| the invoking SQL statement is in one of the following places:
| Product information
| An 8-byte character field that identifies the product on which the user-defined
| function executes. This field has the form pppvvrrm, where:
| 0 Unknown
| 1 OS/2
| 3 Windows
| 4 AIX
| 5 Windows NT
| 6 HP-UX
| 7 Solaris
| 8 OS/390
| 13 Siemens Nixdorf
| 15 Windows 95
| 16 SCO Unix
| Reserved area
| 24 bytes.
| Reserved area
| 20 bytes.
| These examples assume that the user-defined function is defined with the
| SCRATCHPAD, FINAL CALL, and DBINFO parameters.
||| ...
| L R7,8(R1) GET ADDRESS OF AREA FOR RESULT
| NULLIN MVC 0(9,R7),RESULT MOVE A VALUE INTO RESULT AREA
| L R7,20(R1) GET ADDRESS OF AREA FOR RESULT IND
| MVC 0(2,R7),=H'0' MOVE A VALUE INTO INDICATOR AREA
|| ..
| .
| CEETERM RC=0
| *******************************************************************
| * VARIABLE DECLARATIONS AND EQUATES *
| *******************************************************************
| R1 EQU 1 REGISTER 1
| R7 EQU 7 REGISTER 7
| PPA CEEPPA , CONSTANTS DESCRIBING THE CODE BLOCK
| LTORG , PLACE LITERAL POOL HERE
| PROGAREA DSECT
| ORG *+CEEDSASZ LEAVE SPACE FOR DSA FIXED PART
| PARM1 DS F PARAMETER 1
| PARM2 DS F PARAMETER 2
| RESULT DS CL9 RESULT
| F_IND1 DS H INDICATOR FOR PARAMETER 1
| F_IND2 DS H INDICATOR FOR PARAMETER 2
| F_INDR DS H INDICATOR FOR RESULT
| C or C++:
| For C or C++ user-defined functions, the conventions for passing parameters are
| different for main programs and subprograms.
| For subprograms, you pass the parameters directly. For main programs, you use
| the standard argc and argv variables to access the input and output parameters:
| Figure 86 shows the parameter conventions for a user-defined scalar function that
| is written as a main program that receives two parameters and returns one result.
| #include <stdlib.h>
| #include <stdio.h>
| main(argc,argv)
| int argc;
| char *argv[];
| {
| /***************************************************/
| /* Assume that the user-defined function invocation*/
| /* included 2 input parameters in the parameter */
| /* list. Also assume that the definition includes */
| /* the SCRATCHPAD, FINAL CALL, and DBINFO options, */
| /* so DB2 passes the scratchpad, calltype, and */
| /* dbinfo parameters. */
| /* The argv vector contains these entries: */
| /* argv[0] 1 load module name */
| /* argv[1-2] 2 input parms */
| /* argv[3] 1 result parm */
| /* argv[4-5] 2 null indicators */
| /* argv[6] 1 result null indicator */
| /* argv[7] 1 SQLSTATE variable */
| /* argv[8] 1 qualified func name */
| /* argv[9] 1 specific func name */
| /* argv[10] 1 diagnostic string */
| /* argv[11] 1 scratchpad */
| /* argv[12] 1 call type */
| /* argv[13] + 1 dbinfo */
| /* ------ */
| /* 14 for the argc variable */
| /***************************************************/
| if (argc != 14)
| {
|| ..
| .
| /**********************************************************/
| /* This section would contain the code executed if the */
| /* user-defined function is invoked with the wrong number */
| /* of parameters. */
| /**********************************************************/
| }
| Figure 86 (Part 1 of 2). How a C or C++ user-defined function that is written as a main
| program receives parameters
| /***************************************************/
| /* Access the null indicator for the first */
| /* parameter on the invoked user-defined function */
| /* as follows: */
| /***************************************************/
| short int ind1;
| ind1 = *(short int *) argv[4];
| /***************************************************/
| /* Use the expression below to assign */
| /* 'xxxxx' to the SQLSTATE returned to caller on */
| /* the SQL statement that contains the invoked */
| /* user-defined function. */
| /***************************************************/
| strcpy(argv[7],"xxxxx");
| /***************************************************/
| /* Obtain the value of the qualified function */
| /* name with this expression. */
| /***************************************************/
| char f_func[28];
| strcpy(f_func,argv[8]);
| /***************************************************/
| /* Obtain the value of the specific function */
| /* name with this expression. */
| /***************************************************/
| char f_spec[19];
| strcpy(f_spec,argv[9]);
| /***************************************************/
| /* Use the expression below to assign */
| /* 'yyyyyyyy' to the diagnostic string returned */
| /* in the SQLCA associated with the invoked */
| /* user-defined function. */
| /***************************************************/
| strcpy(argv[10],"yyyyyyyy");
| /***************************************************/
| /* Use the expression below to assign the */
| /* result of the function. */
| /***************************************************/
| char l_result[11];
| strcpy(argv[3],l_result);
||| ...
| }
| Figure 86 (Part 2 of 2). How a C or C++ user-defined function that is written as a main
| program receives parameters
| #pragma runopts(plist(os))
| #include <stdlib.h>
| #include <stdio.h>
| #include <string.h>
| struct sqludf_scratchpad
| {
| unsigned long length; /* length of scratchpad data */
| char data[SQLUDF_SCRATCHPAD_LEN]; /* scratchpad data */
| };
| struct sqludf_dbinfo
| {
| unsigned short dbnamelen; /* database name length */
| unsigned char dbname[128]; /* database name */
| unsigned short authidlen; /* appl auth id length */
| unsigned char authid[128]; /* appl authorization ID */
| unsigned long ascii_sbcs; /* ASCII SBCS CCSID */
| unsigned long ascii_dbcs; /* ASCII MIXED CCSID */
| unsigned long ascii_mixed; /* ASCII DBCS CCSID */
| unsigned long ebcdic_sbcs; /* EBCDIC SBCS CCSID */
| unsigned long ebcdic_dbcs; /* EBCDIC MIXED CCSID */
| unsigned long ebcdic_mixed; /* EBCDIC DBCS CCSID */
| unsigned long encode; /* UDF encode scheme */
| unsigned char reserv[2]; /* reserved for later use*/
| unsigned short tbqualiflen; /* table qualifier length */
| unsigned char tbqualif[128]; /* table qualifier name */
| unsigned short tbnamelen; /* table name length */
| unsigned char tbname[128]; /* table name */
| unsigned short colnamelen; /* column name length */
| unsigned char colname[128]; /* column name */
| unsigned char relver[8]; /* Database release & version */
| unsigned long platform; /* Database platform */
| unsigned short numtfcol; /* # of Tab Fun columns used */
| unsigned char reserv1[24]; /* reserved */
| unsigned short *tfcolnum; /* table fn column list */
| unsigned short *appl_id; /* LUWID for DB2 connection */
| unsigned char reserv2[2]; /* reserved */
| };
| l_p1 = *parm1;
| strcpy(l_p2,parm2);
| l_ind1 = *f_ind1;
| l_ind2 = *f_ind2;
| strcpy(ludf_sqlstate,udf_sqlstate);
| strcpy(ludf_fname,udf_fname);
| strcpy(ludf_specname,udf_specname);
| l_udf_call_type = *udf_call_type;
| strcpy(ludf_msgtext,udf_msgtext);
| memcpy(&ludf_scratchpad,udf_scratchpad,sizeof(ludf_scratchpad));
| memcpy(&ludf_dbinfo,udf_dbinfo,sizeof(ludf_dbinfo));
|| ..
| .
| }
| Figure 87 (Part 2 of 2). How a C language user-defined function that is written as a
| subprogram receives parameters
| Figure 88 on page 278 shows the parameter conventions for a user-defined scalar
| function that is written as a C++ subprogram that receives two parameters and
| returns one result. This example demonstrates that you must use an extern "C"
| modifier to indicate that you want the C++ subprogram to receive parameters
| according to the C linkage convention. This modifier is necessary because the
| CEEPIPI CALL_SUB interface, which DB2 uses to call the user-defined function,
| passes parameters using the C linkage convention.
| Figure 88 (Part 1 of 2). How a C++ user-defined function that is written as a subprogram
| receives parameters
||| ...
| *********************************************************
| * Declare these variables for result parameters *
| *********************************************************
| 01 UDFRESULT1 PIC X(1).
| 01 UDFRESULT2 PIC X(1).
|| ..
| .
| *********************************************************
| * Declare a null indicator for each parameter *
| *********************************************************
| 01 UDF-IND1 PIC S9(4) USAGE COMP.
| 01 UDF-IND2 PIC S9(4) USAGE COMP.
|| ..
| .
| *********************************************************
| * Declare a null indicator for result parameter *
| *********************************************************
| 01 UDF-RIND1 PIC S9(4) USAGE COMP.
| 01 UDF-RIND2 PIC S9(4) USAGE COMP.
|| ..
| .
| *********************************************************
| * Declare the SQLSTATE that can be set by the *
| * user-defined function *
| *********************************************************
| 01 UDF-SQLSTATE PIC X(5).
| *********************************************************
| * Declare the qualified function name *
| *********************************************************
| 01 UDF-FUNC.
| 49 UDF-FUNC-LEN PIC 9(4) USAGE BINARY.
| 49 UDF-FUNC-TEXT PIC X(137).
| *********************************************************
| * Declare the specific function name *
| *********************************************************
| 01 UDF-SPEC.
| 49 UDF-SPEC-LEN PIC 9(4) USAGE BINARY.
| 49 UDF-SPEC-TEXT PIC X(128).
| Figure 89 (Part 1 of 3). How a COBOL user-defined function receives parameters
| PL/I: Figure 90 on page 283 shows the parameter conventions for a user-defined
| scalar function that is written as a main program that receives two parameters and
| returns one result. For a PL/I user-defined function that is a subprogram, the
| conventions are the same.
| Table 36 on page 285 shows information you need when you use special registers
| in a user-defined function.
| The scratchpad consists of a 4-byte length field, followed by the scratchpad area.
| The definer can specify the length of the scratchpad area in the CREATE
| FUNCTION statement. The specified length does not include the length field. The
| default size is 100 bytes. DB2 initializes the scratchpad for each function to binary
| zeros at the beginning of execution for each subquery of an SQL statement and
| does not examine or change the content thereafter. On each invocation of the
| user-defined function, DB2 passes the scratchpad to the user-defined function. You
| can therefore use the scratchpad to preserve information between invocations of a
| reentrant user-defined function.
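For example, a definition such as the following sketch requests a scratchpad for a scalar function; the function name COUNTER and its external options are assumptions:

   CREATE FUNCTION COUNTER()
     RETURNS INTEGER
     EXTERNAL NAME 'CNTR'
     LANGUAGE C
     PARAMETER STYLE DB2SQL
     NO SQL
     NOT DETERMINISTIC
     NO EXTERNAL ACTION
     SCRATCHPAD
     FENCED;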
| The scratchpad length is not specified, so the scratchpad has the default length of
| 100 bytes, plus 4 bytes for the length field. The user-defined function increments an
| integer value and stores it in the scratchpad on each execution.
| The five basic steps to accessing transition tables in a user-defined function are:
| 1. Declare input parameters to receive table locators. You must define each
| parameter that receives a table locator as an unsigned 4-byte integer.
| 2. Declare table locators. You can declare table locators in assembler, C, C++,
| COBOL, and PL/I. The syntax for declaring table locators in each language is
| described in “Chapter 3-4. Embedding SQL statements in host languages” on
| page 127.
| 3. Declare a cursor to access the rows in each transition table.
| 4. Assign the input parameter values to the table locators.
| 5. Access the rows of each transition table by using the cursor that you declared for it.
| The following examples show how a user-defined function that is written in each of
| the supported host languages accesses a transition table for a trigger. The
| transition table, NEWEMP, contains modified rows of the employee sample table.
| The trigger is defined like this:
| CREATE TRIGGER EMPRAISE
| AFTER UPDATE ON EMP
| REFERENCING NEW TABLE AS NEWEMPS
| FOR EACH STATEMENT MODE DB2SQL
| BEGIN ATOMIC
| VALUES (CHECKEMP(TABLE NEWEMPS));
| END;
| The user-defined function definition looks like this:
| CREATE FUNCTION CHECKEMP(TABLE LIKE EMP AS LOCATOR)
| RETURNS INTEGER
| EXTERNAL NAME 'CHECKEMP'
| PARAMETER STYLE DB2SQL
| LANGUAGE language;
| COBOL: Figure 94 on page 291 shows how a COBOL program accesses rows of
| transition table NEWEMPS.
| PL/I: Figure 95 on page 292 shows how a PL/I program accesses rows of
| transition table NEWEMPS.
| ...
| EXEC SQL CLOSE C1;
| ...
| END CHECK_EMP;
| Figure 95. How a PL/I user-defined function accesses a transition table
| 1. Precompile the user-defined function program and bind the DBRM into a
| package.
| You need to do this only if your user-defined function contains SQL statements.
| You do not need to bind a plan for the user-defined function.
| 2. Compile the user-defined function program and link-edit it with Language
| Environment and RRSAF.
| You must compile the program with a compiler that supports Language
| Environment and link-edit the appropriate Language Environment components
| with the user-defined function. You must also link-edit the user-defined function
| with RRSAF.
| For the minimum compiler and Language Environment requirements for
| user-defined functions, see DB2 Release Guide.
| The program preparation JCL samples DSNHASM, DSNHC, DSNHCPP,
| DSNHICOB, and DSNHPLI show you how to precompile, compile, and link-edit
| assembler, C, C++, COBOL, and PL/I DB2 programs. If your DB2 subsystem
| has been installed to work with Language Environment, you can use this
| sample JCL when you prepare your user-defined functions. For object-oriented
| programs in C++ or COBOL, see JCL samples DSNHCPP2 and DSNHICB2 for
| program preparation hints.
| When the primary program of a user-defined function calls another program, DB2
| uses the CURRENT PACKAGESET special register to determine the collection to
| search for the called program's package. The primary program can change this
| collection ID by executing the statement SET CURRENT PACKAGESET. If the
| value of CURRENT PACKAGESET is blank, DB2 uses the method described in
| “The order of search” on page 421 to search for the package.
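For example, the primary program could switch collections with a statement like this (the collection ID is an assumption):

EXEC SQL SET CURRENT PACKAGESET = 'UDFCOLL1';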
| To maximize the number of user-defined functions and stored procedures that can
| run concurrently, follow these preparation recommendations:
| ) Ask the system administrator to set the region size parameter in the startup
| procedures for the WLM-established stored procedures address spaces to
| REGION=0. This lets an address space obtain the largest possible amount of
| storage below the 16-MB line.
| ) Limit storage required by application programs below the 16-MB line by:
| – Link-editing programs with the AMODE(31) and RMODE(ANY) attributes
| – Compiling COBOL programs with the RES and DATA(31) options
| ) Limit storage that is required by Language Environment by using these run-time
| options:
| HEAP(,,ANY) Allocates program heap storage above the
| 16-MB line
| STACK(,,ANY,) Allocates program stack storage above the
| 16-MB line
| STORAGE(,,,4K) Reduces reserve storage area below the line to
| 4 KB
| BELOWHEAP(4K,,) Reduces the heap storage below the line to 4
| KB
| LIBSTACK(4K,,) Reduces the library stack below the line to 4
| KB
| ALL31(ON) Causes all programs contained in the external
| user-defined function to execute with
| AMODE(31) and RMODE(ANY)
| The definer can list these options as values of the RUN OPTIONS parameter of
| CREATE FUNCTION, or the system administrator can establish these options
| as defaults during Language Environment installation.
| For example, the RUN OPTIONS parameter could contain the following string (a
| CREATE FUNCTION sketch that uses it appears after this list):
| H(,,ANY),STAC(,,ANY,),STO(,,,4K),BE(4K,,),LIBS(4K,,),ALL31(ON)
| ) Ask the system administrator to set the NUMTCB parameter for
| WLM-established stored procedures address spaces to a value greater than 1.
| This lets more than one TCB run in an address space. Be aware that setting
| NUMTCB to a value greater than 1 also reduces your level of application
| program isolation. For example, a bad pointer in one application can overwrite
| memory that is allocated by another application.
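As referenced in the list above, a sketch of a CREATE FUNCTION statement that supplies these Language Environment options through the RUN OPTIONS parameter (the function name, external name, and language are assumptions):

CREATE FUNCTION CONVERT_AMT(DECIMAL(9,2))
  RETURNS DECIMAL(9,2)
  EXTERNAL NAME 'CVTAMT'
  LANGUAGE COBOL
  PARAMETER STYLE DB2SQL
  NO SQL
  RUN OPTIONS 'H(,,ANY),STAC(,,ANY,),STO(,,,4K),BE(4K,,),LIBS(4K,,),ALL31(ON)';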
| ALL
| The Debug Tool gains control when an attention interrupt, abend, or
| program or Language Environment condition of Severity 1 and above
| occurs.
| PROMPT
| The Debug Tool is invoked immediately after Language Environment
| initialization.
| JBJONES%SESSNA:
| CODE/370 initiates a session on a workstation identified to APPC/MVS as
| JBJONES with a session ID of SESSNA.
| 3. If you want to save the output from your debugging session, issue a command
| that names a log file. For example, the following command starts logging to a
| file on the workstation called dbgtool.log.
| SET LOG ON FILE dbgtool.log;
| This should be the first command that you enter from the terminal or include in
| your commands file.
| Using CODE/370 in batch mode: To test your user-defined function in batch mode,
| you must have the CODE/370 Mainframe Interface (MFI) Debug Tool installed on
| the OS/390 system where the user-defined function runs. To debug your
| user-defined function in batch mode using the MFI Debug Tool:
| 1. If you plan to use the Language Environment run-time TEST option to invoke
| CODE/370, compile the user-defined function with the TEST option. This places
| information in the program that the Debug Tool uses during a debugging
| session.
| 2. Allocate a log data set to receive the output from CODE/370. Put a DD
| statement for the log data set in the start-up procedure for the stored
| procedures address space.
| Driver applications: You can write a small driver application that calls the
| user-defined function as a subprogram and passes the parameter list for the
| user-defined function. You can then test and debug the user-defined function as a
| normal DB2 application under TSO. You can then use TSO TEST and other
| commonly used debugging tools.
| SQL INSERT: You can use SQL to insert debugging information into a DB2 table.
| This allows other machines in the network (such as a workstation) to easily access
| the data in the table using DRDA access.
| See the following sections for details you should know before you invoke a
| user-defined function:
| ) “Syntax for user-defined function invocation”
| ) “Ensuring that DB2 executes the intended user-defined function” on page 298
| ) “Casting of user-defined function arguments” on page 303
| ) “What happens when a user-defined function abnormally terminates” on
| page 304
|
| >>──function-name──(──┬──────────────┬──┬──────────────────────────────────────┬──)──────────────────><
|                       └─┬─ALL──────┬─┘  │ ┌─,────────────────────────────────┐ │
|                         └─DISTINCT─┘    └─V─┬─expression───────────────────┬─┴─┘
|                                             └─TABLE──transition-table-name─┘
| Use the syntax shown in Figure 97 when you invoke a table function:
|
| >>──TABLE──(──function-name──(──┬──────────────────────────────────────┬──)──)──correlation-clause───><
|                                 │ ┌─,────────────────────────────────┐ │
|                                 └─V─┬─expression───────────────────┬─┴─┘
|                                     └─TABLE──transition-table-name─┘
| correlation-clause:
|     ┌─AS─┐
| >>──┴────┴──correlation-name──┬───────────────────────┬──────────────────────────────────────────────><
|                               │    ┌─,───────────┐    │
|                               └─(──V─column-name─┴──)─┘
| See Chapter 3 of DB2 SQL Reference for more information about the syntax of
| user-defined function invocation.
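For example, assuming a hypothetical table function BOOKS that returns TITLE and PRICE columns, an invocation that uses the correlation clause might look like this:

SELECT B.TITLE, B.PRICE
  FROM TABLE(BOOKS('FICTION')) AS B
  WHERE B.PRICE < 25.00;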
| The remainder of this section discusses details of the function resolution process
| and gives suggestions on how you can ensure that DB2 picks the right function.
| If the data types of all parameters in a function instance are the same as those in
| the function invocation, that function instance is a best fit. If no exact match exists,
| DB2 compares data types in the parameter lists from left to right, using this
| method:
| 1. DB2 compares the data types of the first parameter in the function invocation to
| the data type of the first parameter in each function instance.
| 2. For the first parameter, if one function instance has a data type that fits the
| function invocation better than the data types in the other instances, that
| function is a best fit. Table 37 on page 299 shows the possible fits for each
| data type, in best-to-worst order.
| 3. If the data types of the first parameter are the same for all function instances,
| DB2 repeats this process for the next parameter. DB2 continues this process
| for each parameter until it finds a best fit.
| Candidate 2:
| CREATE FUNCTION FUNC(VARCHAR(2),REAL,DOUBLE)
| RETURNS DECIMAL(9,2)
| EXTERNAL NAME 'FUNC2'
| PARAMETER STYLE DB2SQL
| LANGUAGE COBOL;
| DB2 compares the data type of the first parameter in the user-defined function
| invocation to the data types of the first parameters in the candidate functions.
| Because the first parameter in the invocation has data type VARCHAR, and both
| candidate functions also have data type VARCHAR, DB2 cannot determine the
| better candidate based on the first parameter. Therefore, DB2 compares the data
| types of the second parameters.
| The data type of the second parameter in the invocation is SMALLINT. INTEGER,
| which is the data type of candidate 1, is a better fit to SMALLINT than REAL, which
| is the data type of candidate 2. Therefore, candidate 1 is DB2's choice for
| execution.
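If you instead intend DB2 to choose candidate 2, one approach (shown here as a sketch; the column names in the invocation are assumptions) is to cast the argument so that it exactly matches that candidate's parameter type:

SELECT FUNC(VCHARCOL, CAST(SMINTCOL AS REAL), DOUBLECOL)
  FROM T1;

After the cast, the second argument has data type REAL, so candidate 2 is an exact match for that parameter and candidate 1 is no longer eligible.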
| Before you use EXPLAIN to obtain information about function resolution, create
| DSN_FUNCTION_TABLE. The table definition looks like this:
| CREATE TABLE DSN_FUNCTION_TABLE
| (QUERYNO INTEGER NOT NULL WITH DEFAULT,
| QBLOCKNO INTEGER NOT NULL WITH DEFAULT,
| APPLNAME CHAR(8) NOT NULL WITH DEFAULT,
| PROGNAME CHAR(8) NOT NULL WITH DEFAULT,
| COLLID CHAR(18) NOT NULL WITH DEFAULT,
| GROUP_MEMBER CHAR(8) NOT NULL WITH DEFAULT,
| EXPLAIN_TIME TIMESTAMP NOT NULL WITH DEFAULT,
| SCHEMA_NAME CHAR(8) NOT NULL WITH DEFAULT,
| FUNCTION_NAME CHAR(18) NOT NULL WITH DEFAULT,
| SPEC_FUNC_NAME CHAR(18) NOT NULL WITH DEFAULT,
| FUNCTION_TYPE CHAR(2) NOT NULL WITH DEFAULT,
| VIEW_CREATOR CHAR(8) NOT NULL WITH DEFAULT,
| VIEW_NAME CHAR(18) NOT NULL WITH DEFAULT,
| PATH VARCHAR(254) NOT NULL WITH DEFAULT,
| FUNCTION_TEXT VARCHAR(254) NOT NULL WITH DEFAULT);
| Columns QUERYNO, QBLOCKNO, APPLNAME, PROGNAME, COLLID, and
| GROUP_MEMBER have the same meanings as in the PLAN_TABLE. See
| “Chapter 7-4. Using EXPLAIN to improve SQL performance” on page 679 for
| explanations of those columns. The meanings of the other columns are:
| EXPLAIN_TIME
| Timestamp when the EXPLAIN statement was executed.
| FUNCTION_NAME
| Name of the function that is invoked in the explained statement.
| SPEC_FUNC_NAME
| Specific name of the function that is invoked in the explained statement.
| FUNCTION_TYPE
| The type of function that is invoked in the explained statement. Possible values
| are:
| SU Scalar function
| TU Table function
| VIEW_CREATOR
| The creator of the view, if the function that is specified in the
| FUNCTION_NAME column is referenced in a view definition. Otherwise, this
| field is blank.
| VIEW_NAME
| The name of the view, if the function that is specified in the FUNCTION_NAME
| column is referenced in a view definition. Otherwise, this field is blank.
| PATH
| The value of the SQL path when DB2 resolved the function reference.
| FUNCTION_TEXT
| The text of the function reference (the function name and parameters). If the
| function reference exceeds 100 bytes, this column contains the first 100 bytes.
| For a function specified in infix notation, FUNCTION_TEXT contains only the
| function name. For example, suppose a user-defined function named / is in the
| function reference A/B. Then FUNCTION_TEXT contains only /, not A/B.
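For example, a sketch of how you might populate and then examine DSN_FUNCTION_TABLE (the function name MYFUNC, table T1, and the query number are assumptions):

EXPLAIN PLAN SET QUERYNO = 100 FOR
  SELECT MYFUNC(C1) FROM T1;

SELECT QUERYNO, SCHEMA_NAME, FUNCTION_NAME, SPEC_FUNC_NAME,
       FUNCTION_TYPE, PATH
  FROM DSN_FUNCTION_TABLE
  WHERE QUERYNO = 100;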
| When you invoke a user-defined function that is sourced on another function, DB2
| casts your parameters to the data types and lengths of the sourced function.
| The following example demonstrates what happens when the parameter definitions
| of a sourced function differ from those of the function on which it is sourced.
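The definitions for this example are not reproduced here; a sketch that is consistent with the data types described below (the external name, language, and the table and column names in the UPDATE statement are assumptions) might look like this:

CREATE FUNCTION TAXFN1(DECIMAL(6,0))
  RETURNS DECIMAL(5,2)
  EXTERNAL NAME 'TAXPROG'
  LANGUAGE C
  PARAMETER STYLE DB2SQL
  NO SQL;

CREATE FUNCTION TAXFN2(DECIMAL(8,2))
  RETURNS DECIMAL(5,0)
  SOURCE TAXFN1(DECIMAL(6,0));

UPDATE SALES SET SALESTAX2 = TAXFN2(PRICE2);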
| Now suppose that PRICE2 has the DECIMAL(9,2) value 0001234.56. DB2 must
| first assign this value to the data type of the input parameter in the definition of
| TAXFN2, which is DECIMAL(8,2). The input parameter value then becomes
| 001234.56. Next, DB2 casts the parameter value to a source function parameter,
| which is DECIMAL(6,0). The parameter value then becomes 001234. (When you
| cast a value, that value is truncated, rather than rounded.)
| Now, if TAXFN1 returns the DECIMAL(5,2) value 123.45, DB2 casts the value to
| DECIMAL(5,0), which is the result type for TAXFN2, and the value becomes 00123.
| This is the value that DB2 assigns to column SALESTAX2 in the UPDATE
| statement.
| Although trigger activations count in the levels of SQL statement nesting, the
| previous restrictions on SQL statements do not apply to SQL statements that are
| executed in the trigger body. For example, suppose that trigger TR1 is defined on
| table T1:
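A sketch of such a trigger (the column name and the action in the trigger body are assumptions) might be:

CREATE TRIGGER TR1
  AFTER INSERT ON T1
  FOR EACH STATEMENT MODE DB2SQL
  BEGIN ATOMIC
    UPDATE T1 SET C1 = C1 + 1;
  END;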
| Now suppose that you execute this SQL statement at level 1 of nesting:
| INSERT INTO T1 VALUES(...);
| Although the UPDATE statement in the trigger body is at level 2 of nesting and
| modifies the same table that the triggering statement updates, DB2 can execute the
| INSERT statement successfully.
| The results can differ even more, depending on the order in which DB2 retrieves
| the rows from the table. Suppose that an ascending index is defined on column C2.
| After you define distinct types and columns of those types, you can use those data
| types in the same way you use built-in types. You can use the data types in
| assignments, comparisons, function invocations, and stored procedure calls.
| However, when you assign one column value to another or compare two column
| values, those values must be of the same distinct type. For example, you must
| assign a column value of type VIDEO to a column of type VIDEO, and you can
| compare a column value of type AUDIO only to a column of type AUDIO. When
| you assign a host variable value to a column with a distinct type, you can use any
| host data type that is compatible with the source data type of the distinct type. For
| example, to receive an AUDIO or VIDEO value, you can define a host variable like
| this:
| SQL TYPE IS BLOB (1M) HVAV;
| For example, if you have defined a user-defined function to convert U.S. dollars to
| euro currency, you do not want anyone to use this same user-defined function to
| convert Japanese Yen to euros because the U.S. dollars to euros function returns
| the wrong amount. Suppose you define three distinct types:
| CREATE DISTINCT TYPE US_DOLLAR AS DECIMAL(9,2) WITH COMPARISONS;
| CREATE DISTINCT TYPE EURO AS DECIMAL(9,2) WITH COMPARISONS;
| CREATE DISTINCT TYPE JAPANESE_YEN AS DECIMAL(9,2) WITH COMPARISONS;
| If a conversion function is defined that takes a parameter of type US_DOLLAR as
| input, DB2 returns an error if you try to execute the function with
| an input parameter of type JAPANESE_YEN.
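For instance, a sketch of such a conversion function (its name, external name, and language are assumptions):

CREATE FUNCTION DOLLAR_TO_EURO(US_DOLLAR)
  RETURNS EURO
  EXTERNAL NAME 'USD2EUR'
  LANGUAGE C
  PARAMETER STYLE DB2SQL
  NO SQL
  DETERMINISTIC
  NO EXTERNAL ACTION;

An invocation such as DOLLAR_TO_EURO(YEN_COLUMN), where YEN_COLUMN has distinct type JAPANESE_YEN, fails function resolution because no instance of DOLLAR_TO_EURO accepts a JAPANESE_YEN argument.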
| DB2 does not let you compare data of a distinct type directly to data of its source
| type. However, you can compare a distinct type to its source type by using a cast
| function.
| For example, suppose you want to know which products sold more than US
| $100 000.00 in the US in the month of July in 1992 (7/92). Because you cannot
| compare data of type US_DOLLAR with instances of data of the source type of
| US_DOLLAR (DECIMAL) directly, you must use a cast function to cast data from
| DECIMAL to US_DOLLAR or from US_DOLLAR to DECIMAL. Whenever you
| create a distinct type, DB2 creates two cast functions, one to cast from the source
| type to the distinct type and the other to cast from the distinct type to the source
| type. For distinct type US_DOLLAR, DB2 creates a cast function called DECIMAL
| and a cast function called US_DOLLAR. When you compare an object of type
| US_DOLLAR to an object of type DECIMAL, you can use one of those cast
| functions to make the data types identical for the comparison. Suppose table
| US_SALES is defined like this:
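The definition is not reproduced here; a sketch that fits the description (the column names are assumptions), together with a query that uses the US_DOLLAR cast function, might look like this:

CREATE TABLE US_SALES
  (PRODUCT_ITEM INTEGER,
   MONTH        SMALLINT,
   YEAR         SMALLINT,
   TOTAL        US_DOLLAR);

SELECT PRODUCT_ITEM
  FROM US_SALES
  WHERE TOTAL > US_DOLLAR(100000.00)
    AND MONTH = 7
    AND YEAR = 1992;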
| You cannot use host variables in statements that you prepare for dynamic
| execution. As explained in “Using parameter markers” on page 515, you can
| substitute parameter markers for host variables when you prepare a statement, and
| then use host variables when you execute the statement.
| If you use a parameter marker in a predicate of a query, and the column to which
| you compare the value represented by the parameter marker is of a distinct type,
| you must cast the parameter marker to the distinct type, or cast the column to its
| source type.
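For example (a sketch that reuses the hypothetical US_SALES table above), you can cast the parameter marker to the distinct type:

SELECT PRODUCT_ITEM
  FROM US_SALES
  WHERE TOTAL > CAST(? AS US_DOLLAR);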
| If you need to assign a value of one distinct type to a column of another distinct
| type, a function must exist that converts the value from one type to another.
| Because DB2 provides cast functions only between distinct types and their source
| types, you must write the function to convert from one distinct type to another.
| You can assign a column value of a distinct type to a host variable if you can
| assign a column value of the distinct type's source type to the host variable. In the
| following example, you can assign SIZECOL1 and SIZECOL2, which have distinct
| type SIZE, to host variables of type double and short because the source type of
| SIZE, which is INTEGER, can be assigned to host variables of type double or
| short.
| EXEC SQL BEGIN DECLARE SECTION;
| double hv1;
| short hv2;
| EXEC SQL END DECLARE SECTION;
| CREATE DISTINCT TYPE SIZE AS INTEGER;
| CREATE TABLE TABLE1 (SIZECOL1 SIZE, SIZECOL2 SIZE);
| ...
| SELECT SIZECOL1, SIZECOL2
| INTO :hv1, :hv2
| FROM TABLE1;
| In this example, values of host variable hv2 can be assigned to columns SIZECOL1
| and SIZECOL2, because C data type short is equivalent to DB2 data type
| SMALLINT, and SMALLINT is promotable to data type INTEGER. However, values
| Example: Using an infix operator with distinct type arguments: Suppose you
| want to add two values of type US_DOLLAR. Before you can do this, you must
| define a version of the + function that accepts values of type US_DOLLAR as
| operands:
| CREATE FUNCTION "+"(US_DOLLAR,US_DOLLAR)
| RETURNS US_DOLLAR
| SOURCE SYSIBM."+"(DECIMAL(9,2),DECIMAL(9,2));
| Because the US_DOLLAR type is based on the DECIMAL(9,2) type, the source
| function must be the version of + with arguments of type DECIMAL(9,2).
| Suppose you keep electronic mail documents that are sent to your company in a
| DB2 table. The DB2 data type of an electronic mail document is a CLOB, but you
| define it as a distinct type so that you can control the types of operations that are
| performed on the electronic mail. The distinct type is defined like this:
| CREATE DISTINCT TYPE E_MAIL AS CLOB(5M);
| You have also defined and written user-defined functions to search for and return
| the following information about an electronic mail document:
| ) Subject
| ) Sender
| ) Date sent
| ) Message content
| ) Indicator of whether the document contains a user-specified string
| The user-defined function definitions look like this:
| CREATE FUNCTION SUBJECT(E_MAIL)
| RETURNS VARCHAR(2)
| EXTERNAL NAME 'SUBJECT'
| LANGUAGE C
| PARAMETER STYLE DB2SQL
| NO SQL
| DETERMINISTIC
| NO EXTERNAL ACTION;
| The table that contains the electronic mail documents is defined like this:
# CREATE TABLE DOCUMENTS
# (LAST_UPDATE_TIME TIMESTAMP,
# DOC_ROWID ROWID NOT NULL GENERATED ALWAYS,
# A_DOCUMENT E_MAIL);
| Because the table contains a column with a source data type of CLOB, the table
| requires a ROWID column and an associated LOB table space, auxiliary table, and
| index on the auxiliary table. Use statements like this to define the LOB table space,
| the auxiliary table, and the index:
| CREATE LOB TABLESPACE DOCTSLOB
| LOG YES
| GBPCACHE SYSTEM;
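The auxiliary table and index statements are not reproduced here; a sketch (the object names are assumptions) might be:

CREATE AUX TABLE DOC_AUX_TABLE
  IN DOCTSLOB
  STORES DOCUMENTS
  COLUMN A_DOCUMENT;

CREATE UNIQUE INDEX XDOC_AUX
  ON DOC_AUX_TABLE;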
| To populate the document table, you write code that executes an INSERT
| statement to put the first part of a document in the table, and then executes
| Now that the data is in the table, you can execute queries to learn more about the
| documents. For example, you can execute this query to determine which
| documents contain the word 'performance':
| SELECT SENDER(A_DOCUMENT), SENDING_DATE(A_DOCUMENT),
| SUBJECT(A_DOCUMENT)
| FROM DOCUMENTS
| WHERE CONTAINS(A_DOCUMENT,'performance') = 1;
| Because the electronic mail documents can be very large, you might want to use
| LOB locators to manipulate the document data instead of fetching all of a document
| into a host variable. You can use a LOB locator on any distinct type that is defined
| on one of the LOB types. The following example shows how you can cast a LOB
| locator as a distinct type, and then use the result in a user-defined function that
| takes a distinct type as an argument:
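The example itself is not reproduced here; a sketch in C form (the host variable names, the selection predicate, and the use of SYSIBM.SYSDUMMY1 are assumptions) might look like this:

EXEC SQL BEGIN DECLARE SECTION;
  SQL TYPE IS CLOB_LOCATOR doc_loc;   /* locator for the CLOB source type */
  char hv_timestamp[27];              /* timestamp used to select a row   */
  char hv_subject[201];               /* receives the document subject    */
EXEC SQL END DECLARE SECTION;

/* Obtain a locator instead of fetching the whole document */
EXEC SQL SELECT A_DOCUMENT INTO :doc_loc
  FROM DOCUMENTS
  WHERE LAST_UPDATE_TIME = :hv_timestamp;

/* Cast the locator to the distinct type and pass it to the UDF */
EXEC SQL SELECT SUBJECT(CAST(:doc_loc AS E_MAIL)) INTO :hv_subject
  FROM SYSIBM.SYSDUMMY1;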
After you have precompiled your source program, you create a load module,
possibly one or more packages, and an application plan. It does not matter which
you do first. Creating a load module is similar to compiling and link-editing an
application containing no SQL statements. Creating a package or an application
plan, a process unique to DB2, involves binding one or more DBRMs.
Figure 99. Program preparation. Two processes are needed: (1) Compile and link-edit, and (2) bind.
A few options, however, can affect the way you code. For example, you need to
know if you are using NOFOR or STDSQL(YES) before you begin coding.
Before you begin coding, please review the list of options in Table 47 on
page 410.
Planning to bind
Depending upon how you design your DB2 application, you might bind all your
DBRMs in one operation, creating only a single application plan. Or, you might bind
some or all of your DBRMs into separate packages in separate operations. After
that, you must still bind the entire application as a single plan, listing the included
packages or collections and binding any DBRMs not already bound into packages.
Regardless of what the plan contains, you must bind a plan before the application
can run.
Binding or rebinding a package or plan in use: Packages and plans are locked
when you bind or run them. Packages that run under a plan are not locked until the
plan uses them. If you run a plan and some packages in the package list never run,
those packages are never locked.
You cannot bind or rebind a package or a plan while it is running. However, you
can bind a different version of a package that is running.
Options for binding and rebinding: Several of the options of BIND PACKAGE
and BIND PLAN can affect your program design. For example, you can use a bind
option to ensure that a package or plan can run only from a particular CICS
connection or a particular IMS region—you do not have to enforce this in your
code. Several other options are discussed at length in later chapters, particularly
the ones that affect your program's use of locks, such as the option ISOLATION.
Before you finish reading this chapter, you might want to review those options in
Chapter 2 of DB2 Command Reference.
Input to binding the plan can include DBRMs only, a package list only, or a
combination of the two. When choosing one of those alternatives for your
application, consider the impact of rebinding and see “Planning for changes to your
application” on page 324.
Binding a plan that includes only a package list makes maintenance easier when
the application changes significantly over time.
Binding all DBRMs to a plan is suitable for small applications that are unlikely to
change or that require all resources to be acquired when the plan is allocated
rather than when your program first uses them.
Advantages of packages
You must decide how to use packages based on your application design and your
operational objectives. Keep in mind the following:
Ease of maintenance: When you use packages, you do not need to bind the
entire plan again when you change one SQL statement. You need to bind only the
package associated with the changed SQL statement.
Flexibility in using bind options: The options of BIND PLAN apply to all DBRMs
bound directly to the plan. The options of BIND PACKAGE apply only to the single
DBRM bound to that package. The package options need not all be the same as
the plan options, and they need not be the same as the options for other packages
used by the same plan.
Flexibility in using name qualifiers: You can use a bind option to name a
qualifier for the unqualified object names in SQL statements in a plan or package.
By using packages, you can use different qualifiers for SQL statements in different
parts of your application. By rebinding, you can redirect your SQL statements, for
example, from a test table to a production table.
CICS
With packages, you probably do not need dynamic plan selection and its
accompanying exit routine. A package listed within a plan is not accessed until
it is executed. However, it is possible to use dynamic plan selection and
packages together. Doing so can reduce the number of plans in an application,
and hence the effort needed to maintain the dynamic plan exit routine. See “Using
packages with dynamic plan selection” on page 428 for information on using
packages with dynamic plan selection.
A change to your program probably invalidates one or more of your packages and
perhaps your entire plan. For some changes, you must bind a new object; for
others, rebinding is sufficient.
| ) To bind a new plan or package, other than a trigger package, use the
| subcommand BIND PLAN or BIND PACKAGE with the option
| ACTION(REPLACE).
| To bind a new trigger package, recreate the trigger associated with the trigger
| package.
| ) To rebind an existing plan or package, other than a trigger package, use the
| REBIND subcommand.
| To rebind a trigger package, use the REBIND TRIGGER PACKAGE
| subcommand.
Table 38 on page 325 tells which action particular types of change require. For
more information on trigger packages, see “Working with trigger packages” on
page 327.
If you want to change the bind options in effect when the plan or package runs,
review the descriptions of those options in Chapter 2 of DB2 Command Reference.
Not all options of BIND are also available on REBIND.
Dropping objects
If you drop an object that a package depends on, the following occurs:
) If the package is not appended to any running plan, the package becomes
invalid.
) If the package is appended to a running plan, and the drop occurs outside of
that plan, the object is not dropped, and the package does not become invalid.
) If the package is appended to a running plan, and the drop occurs within that
plan, the package becomes invalid.
In all cases, the plan does not become invalid unless it has a DBRM referencing
the dropped object. If the package or plan becomes invalid, automatic rebind
occurs the next time the package or plan is allocated.
Rebinding a package
Table 39 on page 326 clarifies which packages are bound, depending on how you
specify collection-id (coll-id), package-id (pkg-id), and version-id (ver-id) on the
REBIND PACKAGE subcommand. For syntax and descriptions of this
subcommand, see Chapter 2 of DB2 Command Reference.
REBIND PACKAGE does not apply to packages for which you do not have the
BIND privilege. An asterisk (*) used as an identifier for collections, packages, or
versions does not apply to packages at remote sites.
The following example shows the options for rebinding a package at the remote
location, SNTERSA. The collection is GROUP1, the package ID is PROGA, and the
version ID is V1. The connection types shown in the REBIND subcommand replace
connection types specified on the original BIND subcommand. For information on
the REBIND subcommand options, see DB2 Command Reference.
REBIND PACKAGE(SNTERSA.GROUP1.PROGA.(V1)) ENABLE(CICS,REMOTE)
You can use the asterisk on the REBIND subcommand for local packages, but not
for packages at remote sites. Any of the following commands rebinds all versions of
all packages in all collections, at the local DB2 system, for which you have the
BIND privilege.
Either of the following commands rebinds all versions of all packages in the local
collection LEDGER for which you have the BIND privilege.
Rebinding a plan
Using the PKLIST keyword replaces any previously specified package list. Omitting
the PKLIST keyword allows the use of the previous package list for rebinding.
Using the NOPKLIST keyword deletes any package list specified when the plan
was previously bound.
The following example rebinds PLANA and changes the package list.
The following example rebinds the plan and drops the entire package list.
For a description of the technique and several examples of its use, see
Appendix E, “REBIND subcommands for lists of plans or packages” on page 935.
| As with any other package, DB2 marks a trigger package invalid when you drop a
| table, index, or view on which the trigger package depends. DB2 executes an
| automatic rebind the next time the trigger activates. However, if the automatic
| rebind fails, DB2 does not mark the trigger package inoperative.
| Unlike other packages, a trigger package is freed if you drop the table on which the
| trigger is defined, so you can recreate the trigger package only by recreating the
| table and the trigger.
Automatic rebinding
Automatic rebind might occur if an authorized user invokes a plan or package when
the attributes of the data on which the plan or package depends change, or if the
environment in which the package executes changes. Whether the automatic rebind
occurs depends on the value of the field AUTO BIND on installation panel
DSNTIPO. The options used for an automatic rebind are the options used during
the most recent bind process.
| In the following cases, DB2 might automatically rebind a plan or package that has
| not been marked as invalid:
| ) A plan or package is bound in a different release of DB2 from the release in
| which it was first used.
| ) A plan or package has a location dependency and runs at a location other than
| the one at which it was bound. This can happen when members of a data
| sharing group are defined with location names, and a package runs on a
| different member from the one on which it was bound.
Whether EXPLAIN runs during automatic rebind depends on the value of the field
EXPLAIN PROCESSING on installation panel DSNTIPO, and on whether you
specified EXPLAIN(YES). Automatic rebind fails for all EXPLAIN errors except
“PLAN_TABLE not found.”
The SQLCA is not available during automatic rebind. Therefore, if you encounter
lock contention during an automatic rebind, DSNT501I messages cannot
accompany any DSNT376I messages that you receive. To see the matching
DSNT501I messages, you must issue the subcommand REBIND PLAN or REBIND
PACKAGE.
After the basic recommendations, the chapter tells what you can do about a major
technique that DB2 uses to control concurrency.
) Transaction locks mainly control access by SQL statements. Those locks are
the ones over which you have the most control.
– “Aspects of transaction locks” on page 339 describes the various types of
transaction locks that DB2 uses and how they interact.
– “Lock tuning” on page 345 describes what you can change to control
locking. Your choices include:
- “Bind options” on page 346
- “Isolation overriding with SQL statements” on page 357
- “The statement LOCK TABLE” on page 358
Under those headings, lock (with no qualifier) refers to transaction lock.
The final section of this chapter describes locking activity for LOBs. See “LOB
locks” on page 361.
To prevent those situations from occurring unless they are specifically allowed, DB2
might use locks to control concurrency.
What do locks do? A lock associates a DB2 resource with an application process
in a way that affects how other processes can access the same resource. The
process associated with the resource is said to “hold” or “own” the lock. DB2 uses
locks to ensure that no process accesses data that has been changed, but not yet
committed, by another process.
What do you do about locks? To preserve data integrity, your application process
| acquires locks implicitly, that is, under DB2 control. It is not necessary for a process
to request a lock explicitly to conceal uncommitted data. Therefore, sometimes you
need not do anything about DB2 locks. Nevertheless, processes acquire, or avoid
acquiring, locks based on certain general parameters. You can make better use of
your resources and improve concurrency by understanding the effects of those
parameters.
Suspension
Definition: An application process is suspended when it requests a lock that is
already held by another application process and cannot be shared. The suspended
process temporarily stops running.
Order of precedence for lock requests: Incoming lock requests are queued.
Requests for lock promotion, and requests for a lock by an application process that
already holds a lock on the same object, precede requests for locks by new
applications. Within those groups, the request order is “first in, first out.”
Example: Using an application for inventory control, two users attempt to reduce
the quantity on hand of the same item at the same time. The two lock requests are
queued. The second request in the queue is suspended and waits until the first
request releases its lock.
Timeout
Definition: An application process is said to time out when it is terminated because
it has been suspended for longer than a preset interval.
Effects: DB2 terminates the process, issues two messages to the console, and
returns SQLCODE -911 or -913 to the process (SQLSTATEs '40001' or '57033').
Reason code 00C9008E is returned in the SQLERRD(3) field of the SQLCA. If
statistics trace class 3 is active, DB2 writes a trace record with IFCID 0196.
IMS
If you are using IMS, and a timeout occurs, the following actions take place:
) In a DL/I batch application, the application process abnormally terminates
with a completion code of 04E and a reason code of 00D44033 or
00D44050.
) In any IMS environment except DL/I batch:
– DB2 performs a rollback operation on behalf of your application process
to undo all DB2 updates that occurred during the current unit of work.
– For a non-message driven BMP, IMS issues a rollback operation on
behalf of your application. If this operation is successful, IMS returns
control to your application, and the application receives SQLCODE
-911. If the operation is unsuccessful, IMS issues user abend code
0777, and the application does not receive an SQLCODE.
– For an MPP, IFP, or message driven BMP, IMS issues user abend
code 0777, rolls back all uncommitted changes, and reschedules the
transaction. The application does not receive an SQLCODE.
COMMIT and ROLLBACK operations do not time out. The command STOP
DATABASE, however, may time out and send messages to the console, but it will
retry up to 15 times.
Deadlock
Definition: A deadlock occurs when two or more application processes each hold
locks on resources that the others need and without which they cannot proceed.
(Figure 100, summarized in the notes below, shows job EMPLJCHG holding an exclusive
lock on page B of table M and job PROJNCHG holding an exclusive lock on page A of
table N, with each job suspended waiting for the lock that the other holds.)
Notes:
1. Jobs EMPLJCHG and PROJNCHG are two transactions. Job EMPLJCHG accesses
table M, and acquires an exclusive lock for page B, which contains record
000300.
2. Job PROJNCHG accesses table N, and acquires an exclusive lock for page A,
which contains record 000010.
3. Job EMPLJCHG requests a lock for page A of table N while still holding the
lock on page B of table M. The job is suspended, because job PROJNCHG is
holding an exclusive lock on page A.
4. Job PROJNCHG requests a lock for page B of table M while still holding the
lock on page A of table N. The job is suspended, because job EMPLJCHG is
holding an exclusive lock on page B. The situation is a deadlock.
Figure 100. A deadlock example
Effects: After a preset time interval (the value of DEADLOCK TIME), DB2 can roll
back the current unit of work for one of the processes or request a process to
terminate. That frees the locks and allows the remaining processes to continue. If
statistics trace class 3 is active, DB2 writes a trace record with IFCID 0172. Reason
code 00C90088 is returned in the SQLERRD(3) field of the SQLCA.
If you are using IMS, and a deadlock occurs, the following actions take place:
) In a DL/I batch application, the application process abnormally terminates
with a completion code of 04E and a reason code of 00D44033 or
00D44050.
) In any IMS environment except DL/I batch:
– DB2 performs a rollback operation on behalf of your application process
to undo all DB2 updates that occurred during the current unit of work.
– For a non-message driven BMP, IMS issues a rollback operation on
behalf of your application. If this operation is successful, IMS returns
control to your application, and the application receives SQLCODE
-911. If the operation is unsuccessful, IMS issues user abend code
0777, and the application does not receive an SQLCODE.
– For an MPP, IFP, or message driven BMP, IMS issues user abend
code 0777, rolls back all uncommitted changes, and reschedules the
transaction. The application does not receive an SQLCODE.
CICS
If you are using CICS and a deadlock occurs, the CICS attachment facility
decides whether or not to roll back one of the application processes, based on
the value of the ROLBE or ROLBI parameter. If your application process is
chosen for rollback, it receives one of two SQLCODEs in the SQLCA:
-911 A SYNCPOINT command with the ROLLBACK option was
issued on behalf of your application process. All updates (CICS
commands and DL/I calls, as well as SQL statements) that
occurred during the current unit of work have been undone.
(SQLSTATE '40001')
-913 A SYNCPOINT command with the ROLLBACK option was not
issued. DB2 rolls back only the incomplete SQL statement that
encountered the deadlock or timed out. CICS does not roll back
any resources. Your application process should either issue a
SYNCPOINT command with the ROLLBACK option itself or
terminate. (SQLSTATE '57033')
Consider using the DSNTIAC subroutine to check the SQLCODE and display
the SQLCA. Your application must take appropriate actions before resuming.
Keep unlike things apart: Give users different authorization IDs for work with
different databases; for example, one ID for work with a shared database and
another for work with a private database. This effectively adds to the number of
possible (but not concurrent) application processes while minimizing the number of
databases each application process can access.
Plan for batch inserts: If your application does sequential batch insertions,
excessive contention on the space map pages for the table space can occur. This
problem is especially apparent in data sharing, where contention on the space map
means the added overhead of page P-lock negotiation. For these types of
applications, consider using the MEMBER CLUSTER option of CREATE
TABLESPACE. This option causes DB2 to disregard the clustering index (or implicit
clustering index) when assigning space for the SQL INSERT statement. For more
information about using this option in data sharing, see Chapter 7 of DB2 Data
Sharing: Planning and Administration. For the syntax, see Chapter 6 of DB2 SQL
Reference.
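For example, a sketch of a table space defined with this option (the database, table space, and storage group names are assumptions):

CREATE TABLESPACE BATCHTS
  IN BATCHDB
  USING STOGROUP DSN8G610
  MEMBER CLUSTER
  LOCKSIZE ANY;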
Use LOCKSIZE ANY until you have reason not to: LOCKSIZE ANY is the
default for CREATE TABLESPACE. It allows DB2 to choose the lock size, and DB2
| usually chooses LOCKSIZE PAGE and LOCKMAX SYSTEM for non-LOB table
| spaces. For LOB table spaces, it chooses LOCKSIZE LOB and LOCKMAX
| SYSTEM. Before you use LOCKSIZE TABLESPACE or LOCKSIZE TABLE, you
should know why you do not need concurrent access to the object. Before you
choose LOCKSIZE ROW, you should estimate whether there will be an increase in
overhead for locking and weigh that against the increase in concurrency.
Examine small tables: For small tables with high concurrency requirements,
estimate the number of pages in the data and in the index. If the index entries are
short or they have many duplicates, then the entire index can be one root page and
| a few leaf pages. In this case, spread out your data to improve concurrency, or
| consider it a reason to use row locks.
Partition the data: Online queries typically make few data changes, but they occur
often. Batch jobs are just the opposite; they run for a long time and change many
rows, but occur infrequently. The two do not run well together. You might be able to
separate online applications from batch, or two batch jobs from each other. To
separate online and batch applications, provide separate partitions. Partitioning can
also effectively separate batch jobs from each other.
Fewer rows of data per page: By using the MAXROWS clause of CREATE or
ALTER TABLESPACE, you can specify the maximum number of rows that can be
on a page. For example, if you use MAXROWS 1, each row occupies a whole
page, and you confine a page lock to a single row. Consider this option if you have
a reason to avoid using row locking, such as in a data sharing environment where
row locking overhead can be excessive.
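For example, a sketch (the database, table space, and storage group names are assumptions) that confines each page to a single row:

CREATE TABLESPACE HOTSPOTS
  IN DSN8D61A
  USING STOGROUP DSN8G610
  MAXROWS 1
  LOCKSIZE PAGE;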
# Taking commit points frequently in a long running unit of recovery (UR) has the
# following benefits:
# ) Reduces lock contention
# ) Improves the effectiveness of lock avoidance, especially in a data sharing
# environment
# ) Reduces the elapsed time for DB2 system restart following a system failure
# ) Reduces the elapsed time for a unit of recovery to rollback following an
# application failure or an explicit rollback request by the application
# ) Provides more opportunity for utilities, such as online REORG, to break in
# Consider using the UR CHECK FREQ field of the installation panel DSNTIPN to
# help you identify those applications that are not committing frequently. The setting
# of UR CHECK FREQ should conform to your installation standards for applications
# taking commit points.
Close cursors: If you define a cursor using the WITH HOLD option, the locks it
needs can be held past a commit point. Use the CLOSE CURSOR statement as soon as possible in your program to release those locks.
Bind plans with ACQUIRE(USE): That choice is best for concurrency. Packages
are always bound with ACQUIRE(USE), by default. ACQUIRE(ALLOCATE) gives
better protection against deadlocks for a high-priority job; if you need that option,
you might want to bind all DBRMs directly to the plan.
For information on how to make an agent part of a global transaction for RRSAF
applications, see “Chapter 7-8. Programming for the Recoverable Resource
Manager Services attachment facility (RRSAF)” on page 779.
Knowing the aspects helps you understand why a process suspends or times out or
why two processes deadlock.
Definition
| The size (sometimes scope or level) of a lock on data in a table describes the
| amount of data controlled. The possible sizes of locks are table space, table,
| partition, page, and row. This section contains information about locking for
| non-LOB data. See “LOB locks” on page 361 for information on locking for LOBs.
As Figure 101 on page 340 suggests, row locks and page locks occupy an equal
place in the hierarchy of lock sizes.
(Figure 101 shows three lock hierarchies: for simple and partitioned table spaces, a
table space lock over row or page locks; for segmented table spaces, a table space
lock over table locks, which in turn are over row or page locks; and for LOB table
spaces, a LOB table space lock over LOB locks.)
Definition
The duration of a lock is the length of time the lock is held. It varies according to
when the lock is acquired and when it is released.
Effects
For maximum concurrency, locks on a small amount of data held for a short
duration are better than locks on a large amount of data held for a long duration.
However, acquiring a lock requires processor time, and holding a lock requires
storage; thus, acquiring and holding one table space lock is more economical than
acquiring and holding many page locks. Consider that trade-off to meet your
performance and concurrency objectives.
| On the other hand, LOB table space locks are always acquired when needed and
| released at a commit or held until the program terminates. See “LOB locks” on
| page 361 for information about locking LOBs and LOB table spaces.
| Duration of page, row, and LOB locks: If a page or row is locked, DB2 acquires
the lock only when it is needed. When the lock is released depends on many
factors, but it is rarely held beyond the next commit point.
For information about controlling the duration of locks, see “Bind options” on
page 346.
Definition
The mode (sometimes state) of a lock tells what access to the locked object is
permitted to the lock owner and to any concurrent processes.
The possible modes for page and row locks and the modes for partition, table, and
| table space locks are listed below. See “LOB locks” on page 361 for more
| information about modes for LOB locks and locks on LOB table spaces.
When a page or row is locked, the table, partition, or table space containing it is
also locked. In that case, the table, partition, or table space lock has one of the
intent modes: IS, IX, or SIX. The modes S, U, and X of table, partition, and table
space locks are sometimes called gross modes. In the context of reading, SIX is a
gross mode lock because you don't get page or row locks; in this sense, it is like
an S lock.
Example: An SQL statement locates John Smith in a table of customer data and
changes his address. The statement locks the entire table space in mode IX and
the specific row that it changes in mode X.
S (SHARE) The lock owner and any concurrent processes can read, but not
change, the locked page or row. Concurrent processes can acquire
S or U locks on the page or row or might read data without
acquiring a page or row lock.
U (UPDATE) The lock owner can read, but not change, the locked page or row.
| Concurrent processes with the U lock can acquire S locks or might
| read data without acquiring a page or row lock, but no concurrent
process can acquire a U lock.
U locks reduce the chance of deadlocks when the lock owner is
reading a page or row to determine whether to change it, because
the owner can start with the U lock and then promote the lock to an
X lock to change the page or row.
Definition: Locks of some modes do not shut out all other users. Assume that
application process A holds a lock on a table space that process B also wants to
access. DB2 requests, on behalf of B, a lock of some particular mode. If the mode
of A's lock permits B's request, the two locks (or modes) are said to be compatible.
Effects of incompatibility: If the two locks are not compatible, B cannot proceed.
It must wait until A releases its lock. (And, in fact, it must wait until all existing
incompatible locks are released.)
Compatible lock modes: Compatibility for page and row locks is easy to define.
Table 40 shows whether page locks of any two modes, or row locks of any two
modes, are compatible (Yes) or not (No). No question of compatibility of a page
lock with a row lock can arise, because a table space cannot use both page and
row locks.
Compatibility for table space locks is slightly more complex. Table 41 shows
whether or not table space locks of any two modes are compatible.
Table 41. Compatibility of table and table space (or partition) lock modes
Lock Mode IS IX S U SIX X
IS Yes Yes Yes Yes Yes No
IX Yes Yes No No No No
S Yes No Yes Yes No No
U Yes No Yes No No No
SIX Yes No No No No No
X No No No No No No
| The underlying data page or row locks are acquired to serialize the reading and
| updating of index entries to ensure the data is logically consistent, meaning that the
| data is committed and not subject to rollback or abort. The data locks can be held
| for a long duration such as until commit. However, the page latches are only held
| for a short duration while the transaction is accessing the page. Because the index
| pages are not locked, hot spot insert scenarios (which involve several transactions
| trying to insert different entries into the same index page at the same time) do not
| cause contention problems in the index.
| A query that uses index-only access might lock the data page or row, and that lock
| can contend with other processes that lock the data. However, using lock
| avoidance techniques can reduce the contention. See “Lock avoidance” on
| page 354 for more information about lock avoidance.
Lock tuning
This section describes what you can change to affect how a particular application
uses transaction locks, under:
) “Bind options” on page 346
) “Isolation overriding with SQL statements” on page 357
) “The statement LOCK TABLE” on page 358
Effect of LOCKPART YES: Partition locks follow the same rules as table space
locks, and all partitions are held for the same duration. Thus, if one package is
using RELEASE(COMMIT) and another is using RELEASE(DEALLOCATE), all
partitions use RELEASE(DEALLOCATE).
| For table spaces defined as LOCKPART YES, lock demotion occurs as with other
| table spaces; that is, the lock is demoted at the table space level, not the partition
| level.
Restriction: This combination is not allowed for BIND PACKAGE. Use this
combination if processing efficiency is more important than concurrency. It is a
good choice for batch jobs that would release table and table space locks only to
reacquire them almost immediately. It might even improve concurrency, by allowing
batch jobs to finish sooner. Generally, do not use this combination if your
application contains many SQL statements that are often not executed.
IMS
A CHKP or SYNC call (for single-mode transactions), a GU call to the I/O
PCB, or a ROLL or ROLB call is completed
CICS
A SYNCPOINT command is issued.
| Exception: If the cursor is defined WITH HOLD, table or table space locks
| necessary to maintain cursor position are held past the commit point. (See “The
| effect of WITH HOLD for a cursor” on page 356 for more information.)
) The least restrictive lock needed to execute each SQL statement is used
except when a more restrictive lock remains from a previous statement. In that
case, that lock is used without change.
Figure 102. How an application using RR isolation acquires locks. All locks are held until
the application commits.
Applications using repeatable read can leave rows or pages locked for
longer periods, especially in a distributed environment, and they can claim
more logical partitions than similar applications using cursor stability.
They are also subject to being drained more often by utility operations.
Because so many locks can be taken, lock escalation might take place.
Frequent commits release the locks and can help avoid lock escalation.
| With repeatable read, lock promotion occurs for a table space scan to
| prevent the insertion of rows that might qualify for the predicate. (If access
| is via index, DB2 locks the key range. If access is via table space scans,
| DB2 locks the table, partition, or table space.)
ISOLATION (RS) Allows the application to read the same pages or rows more than
once without allowing qualifying rows to be updated or deleted by another
process. It offers possibly greater concurrency than repeatable read,
because although other applications cannot change rows that are returned
to the original application, they can insert new rows or update rows that did
not satisfy the original application's search condition. Only those rows or
pages that satisfy the stage 1 predicate are locked until the application
commits. Figure 103 illustrates this. In the example, the rows held by locks
L2 and L4 satisfy the predicate.
Figure 103. How an application using RS isolation acquires locks. Locks L2 and L4 are
held until the application commits. The other locks aren't held.
Applications using read stability can leave rows or pages locked for long
periods, especially in a distributed environment.
If you do use read stability, plan for frequent commit points.
Local access: Locally, CURRENTDATA(YES) means that the data upon which
the cursor is positioned cannot change while the cursor is positioned on it. If the
cursor is positioned on data in a local base table or index, then the data returned
with the cursor is current with the contents of that table or index. If the cursor is
positioned on data in a work file, the data returned with the cursor is current only
with the contents of the work file; it is not necessarily current with the contents of
the underlying table or index.
As with work files, if a cursor uses query parallelism, data is not necessarily current
with the contents of the table or index, regardless of whether a work file is used.
Therefore, for work file access or for parallelism on read-only queries, the
CURRENTDATA option has no effect.
To take the best advantage of this method of avoiding locks, make sure all
applications that are accessing data concurrently issue COMMITs frequently.
Figure 105 shows how DB2 can avoid taking locks and Table 42 on page 355
summarizes the factors that influence lock avoidance.
Figure 105. Best case of avoiding locks using CS isolation with CURRENTDATA(NO). This
figure shows access to the base table. If DB2 must take a lock, then locks are released
when DB2 moves to the next row or page, or when the application commits (the same as
CURRENTDATA(YES)).
For example, the plan value for CURRENTDATA has no effect on the packages
executing under that plan. If you do not specify a CURRENTDATA option explicitly
when you bind a package, the default is CURRENTDATA(YES).
Table 43 shows how conflicts between isolation levels are resolved. The first
column is the existing isolation level, and the remaining columns show what
happens when another isolation level is requested by a new application process.
For locks and claims needed for cursor position, the rules described above differ
as follows:
| Page and row locks: If your installation specifies NO on the RELEASE LOCKS
| field of installation panel DSNTIP4, as described in Section 5 (Volume 2) of DB2
| Administration Guide, a page or row lock is held past the commit point. This page
| or row lock is not necessary for cursor position, but the NO option is provided for
| compatibility with applications that might rely on this lock. However, an X or U lock is demoted to an
| S lock at that time. (Because changes have been committed, exclusive control is no
| longer needed.) After the commit point, the lock is released at the next commit
| point, provided that no cursor is still positioned on that page or row.
A YES for RELEASE LOCKS means that no data page or row locks are held past
commit.
Table, table space, and DBD locks: All necessary locks are held past the
commit point. After that, they are released according to the RELEASE option under
which they were acquired: for COMMIT, at the next commit point after the cursor is
closed; for DEALLOCATE, when the application is deallocated.
Claims: All claims, for any claim class, are held past the commit point. They are
released at the next commit point after all held cursors have moved off the object
or have been closed.
Using KEEP UPDATE LOCKS on the WITH clause: You can use the KEEP
UPDATE LOCKS clause when you specify a SELECT with FOR UPDATE
OF. This option is valid only when you use WITH RR or WITH RS. By using this
clause, you tell DB2 to acquire an X lock instead of a U or S lock on all the
qualified pages or rows.
Here is an example:
SELECT ...
FOR UPDATE OF WITH RS KEEP UPDATE LOCKS;
With read stability (RS) isolation, a row or page rejected during stage 2 processing
still has the X lock held on it, even though it is not returned to the application.
With repeatable read (RR) isolation, DB2 acquires the X locks on all pages or rows
that fall within the range of the selection expression.
All X locks are held until the application commits. Although this option can reduce
concurrency, it can prevent some types of deadlocks and can better serialize
access to data.
| You can use LOCK TABLE on any table, including auxiliary tables of LOB table
| spaces. See “The LOCK TABLE statement” on page 364 for information about
| locking auxiliary tables.
| Table 44. Modes of locks acquired by LOCK TABLE. LOCK TABLE on a partition behaves the same as
| LOCK TABLE on a nonsegmented table space.
                        Nonsegmented            Segmented Table Space
  LOCK TABLE IN         Table Space             Table            Table Space
  EXCLUSIVE MODE        X                       X                IX
  SHARE MODE            S or SIX                S or SIX         IS
Note: The SIX lock is acquired if the process already holds an IX lock. SHARE MODE has no effect if the process
already has a lock of mode SIX, U, or X.
| Caution when using LOCK TABLE with simple table spaces: The statement
| locks all tables in a simple table space, even though you name only one table. No
| other process can update the table space for the duration of the lock. If the lock is
| in exclusive mode, no other process can read the table space, unless that process
| is running with UR isolation.
Additional examples of LOCK TABLE: You might want to lock a table or partition
that is normally shared for any of the following reasons:
Taking a “snapshot” If you want to access an entire table throughout a unit of
work as it was at a particular moment, you must lock out
concurrent changes. If other processes can access the table, use
LOCK TABLE IN SHARE MODE. (RR isolation is not enough; it
locks out changes only from rows or pages you have already
accessed.)
Avoiding overhead If you want to update a large part of a table, it can be more
efficient to prevent concurrent access than to lock each page as it
is updated and unlock it when it is committed. Use LOCK TABLE
IN EXCLUSIVE MODE.
Preventing timeouts Your application has a high priority and must not risk
timeouts from contention with other application processes.
Depending on whether your application updates or not, use either
LOCK TABLE IN EXCLUSIVE MODE or LOCK TABLE IN SHARE MODE.
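For example, assuming the sample employee table, the following statements
illustrate the two modes; they are sketches only:
EXEC SQL LOCK TABLE DSN8610.EMP IN SHARE MODE;
EXEC SQL LOCK TABLE DSN8610.EMP IN EXCLUSIVE MODE;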
Access paths
The access path used can affect the mode, size, and even the object of a lock. For
example, an UPDATE statement using a table space scan might need an X lock on
the entire table space. If rows to be updated are located through an index, the
same statement might need only an IX lock on the table space and X locks on
individual pages or rows.
If you use the EXPLAIN statement to investigate the access path chosen for an
SQL statement, then check the lock mode in column TSLOCKMODE of the
resulting PLAN_TABLE. If the table resides in a nonsegmented table space, or is
defined with LOCKSIZE TABLESPACE, the mode shown is that of the table space
lock. Otherwise, the mode is that of the table lock.
IMS
A CHKP or SYNC call, or (for single-mode transactions) a GU call to the
I/O PCB
CICS
A SYNCPOINT command.
| LOB locks
| The locking activity for LOBs is described separately from transaction locks
| because the purpose of LOB locks is different than that of regular transaction locks.
| Terminology: A lock that is taken on a LOB value in a LOB table space is called a
| LOB lock.
| DB2 also obtains locks on the LOB table space and the LOB values stored in that
| LOB table space, but those locks have the following primary purposes:
| ) To determine whether space from a deleted LOB can be reused by an inserted
| or updated LOB
| Storage for a deleted LOB is not reused until no more readers (including held
| locators) are on the LOB and the delete operation has been committed.
| ) To prevent deallocating space for a LOB that is currently being read
| A LOB can be deleted from one application's point-of-view while a reader from
| another application is reading the LOB. The reader continues reading the LOB
| because its storage is not deallocated until no readers remain on the LOB.
| In summary, the main purpose of LOB locks is to manage the space used by
| LOBs and to ensure that LOB readers do not read partially updated LOBs.
| Applications need to free held locators so that the space can be reused.
| Table 45 shows the relationship between the action that is occurring on the LOB
| value and the associated LOB table space and LOB locks that are acquired.
| Table 45. Locks that are acquired for operations on LOBs. This table does not account for
| gross locks that can be taken because of LOCKSIZE TABLESPACE, the LOCK TABLE
| statement, or lock escalation.
|
|  Action on LOB value      LOB table space lock  LOB lock     Comment
|  Read (including UR)      IS                    S            Prevents storage from being reused while
|                                                              the LOB is being read or while locators
|                                                              are referencing the LOB
|  Insert                   IX                    X            Prevents other processes from seeing a
|                                                              partial LOB
|  Delete                   IS                    S            To hold space in case the delete is rolled
|                                                              back. (The X is on the base table row or
|                                                              page.) Storage is not reusable until the
|                                                              delete is committed and no other readers
|                                                              of the LOB exist.
|  Update                   IS->IX                Two LOB      Operation is a delete followed by an
|                                                 locks: an    insert.
|                                                 S-lock for
|                                                 the delete
|                                                 and an
|                                                 X-lock for
|                                                 the insert
|  Update the LOB to null   IS                    S            No insert, just a delete.
|  or zero-length
|  Update a null or         IX                    X            No delete, just an insert.
|  zero-length LOB to a
|  value
| S (SHARE) The lock owner and any concurrent processes can read, update, or
| delete the locked LOB. Concurrent processes can acquire an S
| lock on the LOB. The purpose of the S lock is to reserve the space
| used by the LOB.
| X (EXCLUSIVE) The lock owner can read or change the locked LOB. Concurrent
| processes cannot access the LOB.
| Duration of locks
| If the application uses HOLD LOCATOR, the locator (and the LOB lock) is not freed
| until the first commit operation after a FREE LOCATOR statement is issued, or until
| the thread is deallocated.
| A note about held cursors: If a cursor is defined WITH HOLD, LOB locks are
| held through commit operations.
| A note about INSERT with subselect: Because LOB locks are held until commit
| and because locks are put on each LOB column in both a source table and a target
| table, it is possible that a statement such as an INSERT with a subselect that
| involves LOB columns can accumulate many more locks than a similar statement
| that does not involve LOB columns. To prevent system problems caused by too
| many locks, you can:
| ) Ensure that you have lock escalation enabled for the LOB table spaces that are
| involved in the INSERT. In other words, make sure that LOCKMAX is non-zero
| for those LOB table spaces.
| ) Alter the LOB table space to change the LOCKSIZE to TABLESPACE before
| executing the INSERT with subselect.
| ) Use the LOCK TABLE statement to lock the LOB table space.
| ) Increase the LOCKMAX value on the table spaces involved and ensure that the
| user lock limit is sufficient.
| ) Use LOCK TABLE statements for the auxiliary tables involved.
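As an illustration of the ALTER and LOCK TABLE approaches in this list, here is a
sketch only; the database, LOB table space, and auxiliary table names
(DSN8D61A.PHOTOLTS and DSN8610.AUX_PHOTO) are hypothetical:
ALTER TABLESPACE DSN8D61A.PHOTOLTS LOCKSIZE TABLESPACE;

EXEC SQL LOCK TABLE DSN8610.AUX_PHOTO IN EXCLUSIVE MODE;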
If your application intercepts abends, DB2 commits work because it is unaware that
an abend has occurred. If you want DB2 to roll back work automatically when an
abend occurs in your program, do not let the program or runtime environment
intercept the abend. For example, if your program uses Language Environment,
and you want DB2 to roll back work automatically when an abend occurs in the
program, specify the runtime options ABTERMENC(ABEND) and TRAP(ON).
A unit of work is a logically distinct procedure containing steps that change the
data. If all the steps complete successfully, you want the data changes to become
permanent. But if any of the steps fail, you want all modified data to return to the
values it had before the procedure began.
When a unit of work completes, all locks implicitly acquired by that unit of work
after it begins are released, allowing a new unit of work to begin.
The amount of processing time used by a unit of work in your program determines
the length of time DB2 prevents other users from accessing that locked data. When
several programs try to use the same data concurrently, each program's unit of
work must be as short as possible to minimize the interference between the
programs. The remainder of this chapter describes the way a unit of work functions
in various environments. For more information on unit of work, see Chapter 2 of
DB2 SQL Reference or Section 4 (Volume 1) of DB2 Administration Guide.
A commit point occurs when you issue a COMMIT statement or your program
terminates normally. You should issue a COMMIT statement only when you are
sure the data is in a consistent state. For example, a bank transaction might
transfer funds from account A to account B. The transaction first subtracts the
amount of the transfer from account A, and then adds the amount to account B.
Both events, taken together, are a unit of work. When both events complete (and
not before), the data in the two accounts is consistent. The program can then
issue a COMMIT statement. A ROLLBACK statement causes any data changes,
made since the last commit point, to be backed out.
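For example, in a TSO or batch program the funds-transfer unit of work might look
like the following sketch; the table, columns, and host variables (ACCOUNTS,
BALANCE, ACCTNO, :XFERAMT, :ACCTA, :ACCTB) are hypothetical names used only
for illustration:
EXEC SQL UPDATE ACCOUNTS
           SET BALANCE = BALANCE - :XFERAMT
           WHERE ACCTNO = :ACCTA;

EXEC SQL UPDATE ACCOUNTS
           SET BALANCE = BALANCE + :XFERAMT
           WHERE ACCTNO = :ACCTB;

EXEC SQL COMMIT;
If either UPDATE fails, the program issues ROLLBACK instead, and both accounts
return to their values at the last commit point.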
Before you can connect to another DBMS you must issue a COMMIT statement. If
the system fails at this point, DB2 cannot know that your transaction is complete. In
this case, as in the case of a failure during a one-phase commit operation for a
single subsystem, you must make your own provision for maintaining data integrity.
If your program abends or the system fails, DB2 backs out uncommitted data
changes. Changed data returns to its original condition without interfering with other
system activities.
Consider the inventory example, in which the quantity of items sold is subtracted
from the inventory file and then added to the reorder file. When both transactions
complete (and not before) and the data in the two files is consistent, the program
can then issue a DL/I TERM call or a SYNCPOINT command. If one of the steps
fails, you want the data to return to the value it had before the unit of work began.
That is, you want it rolled back to a previous point of consistency. You can achieve
this by using the SYNCPOINT command with the ROLLBACK option.
The SQL COMMIT and ROLLBACK statements are not valid in a CICS
environment. You can coordinate DB2 with CICS functions used in programs, so
that DB2 and non-DB2 data are consistent.
If the system fails, DB2 backs out uncommitted changes to data. Changed data
returns to its original condition without interfering with other system activities.
Sometimes, DB2 data does not return to a consistent state immediately. DB2 does
not process indoubt data (data that is neither uncommitted nor committed) until the
CICS attachment facility is also restarted. To ensure that DB2 and CICS are
synchronized, restart both DB2 and the CICS attachment facility.
A commit point can occur in a program as the result of any one of the following four
events:
) The program terminates normally. Normal program termination is always a
commit point.
) The program issues a checkpoint call. Checkpoint calls are a program's means
of explicitly indicating to IMS that it has reached a commit point in its
processing.
) The program issues a SYNC call. The SYNC call is a Fast Path system service
call to request commit point processing. You can use a SYNC call only in a
nonmessage-driven Fast Path program.
) For a program that processes messages as its input, a commit point can occur
when the program retrieves a new message. IMS considers a new message
the start of a new unit of work in the program. Unless you define the
transaction as single-mode on the TRANSACT statement of the
APPLCTN macro for the program, retrieving a new message does not signal a
commit point.
DB2 does some processing with single- and multiple-mode programs that IMS does
not. When a multiple-mode program issues a call to retrieve a new message, DB2
performs an authorization check and closes all open cursors in the program.
If the program processes messages, IMS sends the output messages that the
application program produces to their final destinations. Until the program reaches
a commit point, IMS holds the program's output messages at a temporary
destination. If the program abends, people at terminals, and other application
programs do not receive inaccurate information from the terminating application
program.
The SQL COMMIT and ROLLBACK statements are not valid in an IMS
environment.
If the system fails, DB2 backs out uncommitted changes to data. Changed data
returns to its original state without interfering with other system activities.
There are two calls available to IMS programs to simplify program recovery: the
symbolic checkpoint call and the restart call.
Programs that issue symbolic checkpoint calls can specify as many as seven data
areas in the program to be restored at restart. Symbolic checkpoint calls do not
support OS/VS files; if your program accesses OS/VS files, you can convert those
files to GSAM and use symbolic checkpoints. DB2 always recovers to the last
checkpoint. You must restart the program from that point.
However, message-driven BMPs must issue checkpoint calls rather than get-unique
calls to establish commit points, because they can restart from a checkpoint only. If
a program abends after issuing a get-unique call, IMS backs out the database
updates to the most recent commit point—the get-unique call.
Checkpoints also close all open cursors, which means you must reopen the cursors
you want and re-establish positioning.
If a batch-oriented BMP does not issue checkpoints frequently enough, IMS can
abend that BMP or another application program for one of these reasons:
) If a BMP retrieves and updates many database records between checkpoint
calls, it can monopolize large portions of the databases and cause long waits
for other programs needing those segments. (The exception to this is a BMP
with a processing option of GO. IMS does not enqueue segments for programs
with this processing option.) Issuing checkpoint calls releases the segments
that the BMP has enqueued and makes them available to other programs.
) If IMS is using program isolation enqueuing, the space needed to enqueue
information about the segments that the program has read and updated must
not exceed the amount defined for the IMS system. If a BMP enqueues too
many segments, the amount of storage needed for the enqueued segments
can exceed the amount of storage available. If that happens, IMS terminates
the program abnormally with an abend code of U0775. You then have to
increase the program's checkpoint frequency before rerunning the program.
The amount of storage available is specified during IMS system definition. For
more information, see IMS/ESA Installation Volume 2: System Definition and
Tailoring.
When you issue a DL/I CHKP call from an application program using DB2
databases, IMS processes the CHKP call for all DL/I databases, and DB2 commits
all the DB2 database resources. No checkpoint information is recorded for DB2
databases in the IMS log or the DB2 log. The application program must record
relevant information about DB2 databases for a checkpoint, if necessary.
One way to do this is to put such information in a data area included in the DL/I
CHKP call. There can be undesirable performance implications of re-establishing
position within a DB2 database as a result of the commit processing that takes
place because of a DL/I CHKP call. The fastest way to re-establish a position in a
DB2 database is to use an index on the target table, with a key that matches
one-to-one with every column in the SQL predicate.
Using ROLL
Issuing a ROLL call causes IMS to terminate the program with a user abend code
U0778. This terminates the program without a storage dump.
When you issue a ROLL call, the only option you supply is the call function, ROLL.
Using ROLB
The advantage of using ROLB is that IMS returns control to the program after
executing ROLB, thus the program can continue processing. The options for ROLB
are:
) The call function, ROLB
) The name of the I/O PCB.
In batch programs
If your IMS system log is on direct access storage, and if you specify the run option
BKO=Y to request dynamic backout, you can use the ROLB call in a batch program. The
ROLB call backs out the database updates since the last commit point and returns
control to your program. You cannot specify the address of an I/O area as one of
the options on the call; if you do, your program receives an AD status code. You
must, however, have an I/O PCB for your program. Specify CMPAT=YES on the
CMPAT keyword in the PSBGEN statement for your program's PSB. For more
information on using the CMPAT keyword, see IMS/ESA Utilities Reference:
System.
# Example: Rolling back to the most recently created savepoint: When the
# ROLLBACK TO SAVEPOINT statement is executed in the following code, DB2 rolls
# back work to savepoint B.
# EXEC SQL SAVEPOINT A;
#   ...
# EXEC SQL SAVEPOINT B;
#   ...
# EXEC SQL ROLLBACK TO SAVEPOINT;
# When savepoints are active, you cannot access remote sites using three-part
# names or aliases for three-part names. You can, however, use DRDA access with
# explicit CONNECT statements when savepoints are active. If you set a savepoint
# before you execute a CONNECT statement, the scope of that savepoint is the local
# site. If you set a savepoint after you execute the CONNECT statement, the scope
# of that savepoint is the site to which you are connected.
# You can set a savepoint with the same name multiple times within a unit of work.
# Each time that you set the savepoint, the new value of the savepoint replaces the
# old value.
# Example: Setting a savepoint multiple times: Suppose that the following actions
# take place within a unit of work:
# 1. Application A sets savepoint S.
# 2. Application A calls stored procedure P.
# 3. Stored procedure P sets savepoint S.
# 4. Stored procedure P executes ROLLBACK TO SAVEPOINT S.
# When DB2 executes ROLLBACK TO SAVEPOINT S, DB2 rolls back work to the
# savepoint that was set in the stored procedure because that value is the most
# recent value of savepoint S.
# Savepoints are automatically released at the end of a unit of work. However, if you
# no longer need a savepoint before the end of a transaction, you should execute the
# SQL RELEASE SAVEPOINT statement. Releasing savepoints is essential if you
# need to use three-part names to access remote locations.
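For example, to release savepoint A from the earlier example before accessing a
remote location with a three-part name, the application could issue the following
statement (a sketch only); releasing A also releases any savepoints set after it,
such as B:
EXEC SQL RELEASE SAVEPOINT A;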
In this chapter, we assume that you are requesting services from a remote DBMS.
That DBMS is a server in that situation, and your local system is a requester or
client.
Your application can be connected to many DBMSs at one time; the one currently
performing work is the current server. When the local system is performing work, it
also is called the current server.
A remote server can be truly remote in the physical sense: thousands of miles
away. But that is not necessary; it could even be another subsystem of the same
operating system your local DBMS runs under. We assume that your local DBMS is
an instance of DB2 for OS/390. A remote server could be an instance of DB2 for
OS/390 also, or an instance of one of many other products.
A DBMS, whether local or remote, is known to your DB2 system by its location
name. The location name of a remote DBMS is recorded in the communications
database. (If you need more information about location names or the
communications database, see Section 3 of DB2 Installation Guide.)
Example 1: You can write a query like this to access data at a remote server:
SELECT * FROM CHICAGO.DSN8610.EMP
WHERE EMPNO = '1';
| The mode of access depends on whether you bind your DBRMs into packages and
| on the value of field DATABASE PROTOCOL in installation panel DSNTIP5 or the
| value of bind option DBPROTOCOL. Bind option DBPROTOCOL overrides the
| installation setting.
Before you can execute the query at location CHICAGO, you must bind a package
at the CHICAGO server.
Example 3: You can call a stored procedure, which is a subroutine that can contain
many SQL statements. Your program executes this:
EXEC SQL
CONNECT TO ATLANTA;
EXEC SQL
CALL procedure_name (parameter_list);
The parameter list is a list of host variables that is passed to the stored procedure
and into which it returns the results of its execution. The stored procedure must
already exist at location ATLANTA.
Two methods of access: The examples above show two different methods for
accessing distributed data.
) Example 1 shows a statement that can be executed with DB2 private protocol
access or DRDA access.
| If you bind the DBRM that contains the statement into a plan at the local DB2
| and specify the bind option DBPROTOCOL(PRIVATE), you access the server
| using DB2 private protocol access.
| If you bind the DBRM that contains the statement using one of these methods,
| you access the server using DRDA access.
| Method 1:
| – Bind the DBRM into a package at the local DB2 using the bind option
| DBPROTOCOL(DRDA).
| – Bind the DBRM into a package at the remote location (CHICAGO).
| – Bind the packages into a plan using bind option DBPROTOCOL(DRDA).
| Method 2:
| – Bind the DBRM into a package at the remote location.
| – Bind the remote package and the DBRM into a plan using the bind option
| DBPROTOCOL(DRDA).
| ) Examples 2 and 3 show statements that are executed with DRDA access only.
| When you use these methods for DRDA access, your application must include
| an explicit CONNECT statement to switch your connection from one system to
| another.
If you update two or more DBMSs you must consider how updates can be
coordinated, so that units of work at the two DBMSs are either both committed or
both rolled back. Be sure to read “Coordinating updates to two or more data
sources” on page 389.
| You can use the resource limit facility at the server to govern distributed SQL
| statements. Governing is by plan for DB2 private protocol access and by package
| for DRDA access. See “Considerations for moving from DB2 private protocol
| access to DRDA access” on page 401 for information on changes you need to
| make to your resource limit facility tables when you move from DB2 private protocol
| access to DRDA access.
| Because platforms other than DB2 for OS/390 might not support the three-part
| name syntax, you should not code applications with three-part names if you plan to
| port those applications to other platforms.
| In a three-part table name, the first part denotes the location. The local DB2 makes
| and breaks an implicit connection to a remote server as needed.
| The following overview shows how the application uses three-part names:
| Read input values
| Do for all locations
| Read location name
| Set up statement to prepare
| Prepare statement
| Execute statement
| End loop
| Commit
| After the application obtains a location name, for example 'SAN_JOSE', it next
| creates the following character string:
| INSERT INTO SAN_JOSE.DSN8610.PROJ VALUES (?,?,?,?,?,?,?,?)
| The application assigns the character string to the variable INSERTX and then
| executes these statements:
| EXEC SQL
|   PREPARE STMT1 FROM :INSERTX;
| EXEC SQL
|   EXECUTE STMT1 USING :PROJNO, :PROJNAME, :DEPTNO, :RESPEMP,
|                       :PRSTAFF, :PRSTDATE, :PRENDATE, :MAJPROJ;
| The host variables for Spiffy's project table match the declaration for the sample
| project table in “Project table (DSN8610.PROJ)” on page 837.
| To keep the data consistent at all locations, the application commits the work only
| when the loop has executed for all locations. Either every location has committed
| the INSERT or, if a failure has prevented any location from inserting, all other
| locations have rolled back the INSERT. (If a failure occurs during the commit
| process, the entire unit of work can be indoubt.)
| Programming hint: You might find it convenient to use aliases when creating
| character strings that become prepared statements, instead of using full three-part
| names like SAN_JOSE.DSN8610.PROJ. For information on aliases, see the section
| on CREATE ALIAS in DB2 SQL Reference.
| In this example, Spiffy's application executes CONNECT for each server in turn and
| the server executes INSERT. In this case the tables to be updated each have the
| same name, though each is defined at a different server. The application executes
| the statements in a loop, with one iteration for each server.
| The application connects to each new server by means of a host variable in the
| CONNECT statement. CONNECT changes the special register CURRENT
| SERVER to show the location of the new server. The values to insert in the table
| are transmitted to a location as input host variables.
| The following overview shows how the application uses explicit CONNECTs:
| Read input values
| Do for all locations
| Read location name
| Connect to location
| Execute insert statement
| End loop
| Commit
| Release all
| The application inserts a new location name into the variable LOCATION_NAME,
| and executes the following statements:
| EXEC SQL
| CONNECT TO :LOCATION_NAME;
| EXEC SQL
| INSERT INTO DSN8610.PROJ VALUES (:PROJNO, :PROJNAME, :DEPTNO, :RESPEMP,
| :PRSTAFF, :PRSTDATE, :PRENDATE, :MAJPROJ);
| The host variables for Spiffy's project table match the declaration for the sample
| project table in “Project table (DSN8610.PROJ)” on page 837. LOCATION_NAME
| is a character-string variable of length 16.
| Releasing connections
| When you connect to remote locations explicitly, you must also break those
| connections explicitly. You have considerable flexibility in determining how long
| connections remain open, so the RELEASE statement differs significantly from
| CONNECT.
| Examples: Using the RELEASE statement, you can place any of the following in
| the release-pending state.
| ) A specific connection that the next unit of work does not use:
| EXEC SQL RELEASE SPIFFY1;
| ) The current SQL connection, whatever its location name:
| EXEC SQL RELEASE CURRENT;
| ) All connections except the local connection:
| EXEC SQL RELEASE ALL;
| ) All DB2 private protocol connections. If the first phase of your application
| program uses DB2 private protocol access and the second phase uses DRDA
| access, then open DB2 private protocol connections from the first phase could
| cause a CONNECT operation to fail in the second phase. To prevent that error,
| execute the following statement before the commit operation that separates the
| two phases:
| EXEC SQL RELEASE ALL PRIVATE;
| PRIVATE refers to DB2 private protocol connections, which exist only between
| instances of DB2 for OS/390.
| Three-part names and multiple servers: If you use a three-part name, or an alias
| that resolves to one, in a statement executed at a remote server by DRDA access,
| and if the location name is not that of the server, then the method by which the
| remote server accesses data at the named location depends on the value of
| DBPROTOCOL. If the package at the first remote server is bound with
| DBPROTOCOL(PRIVATE), DB2 uses DB2 private protocol access to access the
| second remote server. If the package at the first remote server is bound with
| DBPROTOCOL(DRDA), DB2 uses DRDA access to access the second remote
| server. We recommend that you follow these steps so that access to the second
| remote server is by DRDA access:
| ) Rebind the package at the first remote server with DBPROTOCOL(DRDA).
| ) Bind the package that contains the three-part name at the second server.
# Accessing declared temporary tables using three-part names: You can access
# a remote declared temporary table using a three-part name only if you use DRDA access.
# However, you cannot perform the following series of actions, which includes a
# backward reference to the declared temporary table:
# EXEC SQL
# DECLARE GLOBAL TEMPORARY TABLE TEMPPROD /* Define the temporary table */
# (CHARCOL CHAR(6) NOT NULL); /* at the local site (ATLANTA)*/
# EXEC SQL CONNECT TO CHICAGO; /* Connect to the remote site */
# EXEC SQL INSERT INTO ATLANTA.SESSION.TEMPPROD
# (VALUES 'ABCDEF'); /* Cannot access temp table */
# /* from the remote site (backward reference)*/
# Savepoints: In a distributed environment, you can set savepoints only if you use
# DRDA access with explicit CONNECT statements. If you set a savepoint and then
# execute an SQL statement with a three-part name, an SQL error occurs.
Precompiler options
The following precompiler options are relevant to preparing a package to be run
using DRDA access:
CONNECT
Use CONNECT(2), explicitly or by default.
CONNECT(1) causes your CONNECT statements to allow only the restricted
function known as "remote unit of work."
SQL
Use SQL(ALL) explicitly for a package that runs on a server that is not DB2 for
OS/390. The precompiler then accepts any statement that obeys DRDA rules.
Use SQL(DB2), explicitly or by default, if the server is DB2 for OS/390 only.
The precompiler then rejects any statement that does not obey the rules of
DB2 for OS/390.
BIND PACKAGE options
The following options of the BIND PACKAGE subcommand are relevant to binding a
package to be run using DRDA access:
location-name
Name the location of the server at which the package runs.
The privileges needed to run the package must be granted to the owner of the
package at the server. If you are not the owner, you must also have SYSCTRL
authority or the BINDAGENT privilege granted locally.
SQLERROR
Use SQLERROR(CONTINUE) if you used SQL(ALL) when precompiling. That
creates a package even if the bind process finds SQL errors, such as
statements that are valid on the remote server but that the precompiler did not
recognize. Otherwise, use SQLERROR(NOPACKAGE), explicitly or by default.
CURRENTDATA
Use CURRENTDATA(NO) to force block fetch for ambiguous cursors. See
“Use block fetch” on page 396 for more information.
OPTIONS
When you make a remote copy of a package using BIND PACKAGE with the
COPY option, use this option to control the default bind options that DB2 uses.
Specify:
COMPOSITE to cause DB2 to use any options you specify in the BIND
PACKAGE command. For all other options, DB2 uses the options of the
copied package. This is the default.
COMMAND to cause DB2 to use the options you specify in the BIND
PACKAGE command. For all other options, DB2 uses the defaults for the
server on which the package is bound. This helps ensure that the server
supports the options with which the package is bound.
| DBPROTOCOL
| Use DBPROTOCOL(PRIVATE) if you want DB2 to use DB2 private protocol
| access for accessing remote data that is specified with three-part names.
| Use DBPROTOCOL(DRDA) if you want DB2 to use DRDA access to access
| remote data that is specified with three-part names. You must bind a package
| at all locations whose names are specified in three-part names.
| These values override the value of DATABASE PROTOCOL on installation
| panel DSNTIP5. Therefore, if the setting of DATABASE PROTOCOL at the
| requester site specifies the type of remote access you want to use for
| three-part names, you do not need to specify the DBPROTOCOL bind option.
BIND PLAN options
The following options of the BIND PLAN subcommand are relevant when the plan
uses DRDA access:
DISCONNECT
For most flexibility, use DISCONNECT(EXPLICIT), explicitly or by default. That
requires you to use RELEASE statements in your program to explicitly end
connections.
But the other values of the option are also useful.
SQLRULES
Use SQLRULES(DB2), explicitly or by default.
SQLRULES(STD) applies the rules of the SQL standard to your CONNECT
statements, so that CONNECT TO x is an error if you are already connected to
x. Use STD only if you want that statement to return an error code.
| If your program selects LOB data from a remote location, and you bind the plan
| for the program with SQLRULES(DB2), the format in which you retrieve the
| LOB data with a cursor is restricted. After you open the cursor to retrieve the
| LOB data, you must retrieve all of the data using a LOB variable, or retrieve all
| of the data using a LOB locator variable. If the value of SQLRULES is STD,
| this restriction does not exist.
| If you intend to switch between LOB variables and LOB locators to retrieve
| data from a cursor, execute the SET CURRENT RULES = 'STD' statement before you
| connect to the remote location.
CURRENTDATA
Use CURRENTDATA(NO) to force block fetch for ambiguous cursors. See
“Use block fetch” on page 396 for more information.
| DBPROTOCOL
| Use DBPROTOCOL(PRIVATE) if you want DB2 to use DB2 private protocol
| access for accessing remote data that is specified with three-part names.
| Use DBPROTOCOL(DRDA) if you want DB2 to use DRDA access to access
| remote data that is specified with three-part names. You must bind a package
| at all locations whose names are specified in three-part names.
| The package value for the DBPROTOCOL option overrides the plan option. For
| example, if you specify DBPROTOCOL(DRDA) for a remote package and
| DBPROTOCOL(PRIVATE) for the plan, DB2 uses DRDA access when it
| accesses data at that location using a three-part name. If you do not specify
| any value for DBPROTOCOL, DB2 uses the value of DATABASE PROTOCOL
| on installation panel DSNTIP5.
DB2 and IMS, and DB2 and CICS, jointly implement a two-phase commit process.
You can update an IMS database and a DB2 table in the same unit of work. If a
system or communication failure occurs between committing the work on IMS and
on DB2, then the two programs restore the two systems to a consistent point when
activity resumes.
Details of the two-phase commit process are not important to the rest of this
description. You can read them in Section 4 (Volume 1) of DB2 Administration
Guide.
Versions 3 and later of DB2 for OS/390 implement two-phase commit. For other
types of DBMS, check the product specifications.
To achieve the effect of coordinated updates with a restricted system, you must first
update one system and commit that work, and then update the second system and
commit its work. If a failure occurs after the first update is committed and before the
second is committed, there is no automatic provision for bringing the two systems
back to a consistent point. Your program must assume that task.
If these conditions are not met, then you are restricted to read-only operations.
For more information about CONNECT (Type 1) and about managing connections
to other systems, see Chapter 2 of DB2 SQL Reference.
| Use LOB locators instead of LOB host variables: If you need to store only a
| portion of a LOB value at the client, or if your client program manipulates the LOB
| data but does not need a copy of it, LOB locators are a good choice. When a client
| program retrieves a LOB column from a server into a locator, DB2 transfers only
| the 4 byte locator value to the client, not the entire LOB value. For information on
| how to use LOB locators in an application, see “Using LOB locators to save
| storage” on page 245.
| Use stored procedure result sets: When you return LOB data to a client program
| from a stored procedure, use result sets, rather than passing the LOB data to the
| client in parameters. Using result sets to return data causes less LOB
| materialization and less movement of data among address spaces. For information
| on how to write a stored procedure to return result sets, see “Writing a stored
| procedure to return result sets to a DRDA client” on page 556. For information on
| how to write a client program to receive result sets, see “Writing a DB2 for OS/390
| client program to receive result sets” on page 611.
| Set the CURRENT RULES special register to DB2: When a DB2 for OS/390
| server receives an OPEN request for a cursor, the server uses the value in the
| CURRENT RULES special register to determine the type of host variables the
| associated statement uses to retrieve LOB values. If you specify a value of DB2 for
| CURRENT RULES, and the first FETCH for the cursor uses a LOB locator to
| retrieve LOB column values, DB2 lets you use only LOB locators for all subsequent
| FETCH statements for that column until you close the cursor. If the first FETCH
| uses a host variable, DB2 lets you use only host variables for all subsequent
| FETCH statements for that column until you close the cursor. However, if you set
| the value of CURRENT RULES to STD, DB2 lets you use the same open cursor to
| fetch a LOB column into either a LOB locator or a host variable.
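The following statement shows the general form for assigning the special register;
where you execute it (and whether the SQLRULES bind option establishes the value
instead) depends on your application design, so treat this as a sketch only:
EXEC SQL SET CURRENT RULES = 'DB2';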
| For example, an end user might want to browse through a large set of employee
| records but want to look at pictures of only a few of those employees. At the
| server, you set the CURRENT RULES special register to DB2. In the application,
| you declare and open a cursor to select employee records. The application then
| fetches all picture data into 4 byte LOB locators. Because DB2 knows that 4 bytes
| of LOB data is returned for each FETCH, DB2 can fill the network buffers with
| locators for many pictures. When a user wants to see a picture for a particular
| person, the application can retrieve the picture from the server by assigning the
| value referenced by the LOB locator to a LOB host variable:
| SQL TYPE IS BLOB my_blob[1M];
| SQL TYPE IS BLOB AS LOCATOR my_loc;
|   ...
| FETCH C1 INTO :my_loc; /* Fetch BLOB into LOB locator */
|   ...
| SET :my_blob = :my_loc; /* Assign BLOB to host variable */
DEFER(PREPARE)
To improve performance for both static and dynamic SQL used in DB2 private
protocol access, and for dynamic SQL in DRDA access, consider specifying the
option DEFER(PREPARE) when you bind or rebind your plans or packages.
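For example, the option can be specified when you rebind an existing plan; the plan
name PLANX is hypothetical:
REBIND PLAN(PLANX) DEFER(PREPARE)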
Remember that statically bound SQL statements in DB2 private protocol access are
processed dynamically. When a dynamic SQL statement accesses remote data, the
PREPARE and EXECUTE statements can be transmitted over the network together
and processed at the remote location, and responses to both statements can be
sent together back to the local subsystem, thus reducing traffic on the network.
DB2 does not prepare the dynamic SQL statement until the statement executes.
(The exception to this is dynamic SELECT, which combines PREPARE and
DESCRIBE, whether or not the DEFER(PREPARE) option is in effect.)
All PREPARE messages for dynamic SQL statements that refer to a remote object
will be deferred until either:
) The statement executes
) The application requests a description of the results of the statement.
| When you use predictive governing, the SQL code returned to the requester if the
| server exceeds a predictive governing warning threshold depends on the level of
| DRDA at the requester. See “Writing an application to handle predictive governing”
| on page 512 for more information.
For DB2 private protocol access, when a static SQL statement refers to a remote
object, the transparent PREPARE statement and the EXECUTE statements are
automatically combined and transmitted across the network together. The
PREPARE statement is deferred only if you specify the bind option
DEFER(PREPARE).
PKLIST
The order in which you specify package collections in a package list can affect the
performance of your application program. When a local instance of DB2 attempts to
execute an SQL statement at a remote server, the local DB2 subsystem must
determine which package collection the SQL statement is in. DB2 must send a
message to the server, requesting that the server check each collection ID for the
SQL statement, until the statement is found or there are no more collection IDs in
the package list. You can reduce the amount of network traffic, and thereby
improve performance, by reducing the number of package collections that each
server must search. These examples show ways to reduce the collections to
search:
) Reduce the number of packages per collection that must be searched. The
following example specifies only 1 package in each collection:
PKLIST(S1.COLLA.PGM1, S1.COLLB.PGM2)
) Reduce the number of package collections at each location that must be
searched. The following example specifies only 1 package collection at each
location:
PKLIST(S1.COLLA.*, S2.COLLB.*)
) Reduce the number of collections used for each application. The following
example specifies only 1 collection to search:
PKLIST(*.COLLA.*)
You can also specify the package collection associated with an SQL statement in
your application program. Execute the SQL statement SET CURRENT
PACKAGESET before you execute an SQL statement to tell DB2 which package
collection to search for the statement.
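For example, before executing statements that are bound in collection COLLA (one of
the collections in the examples above), the program might issue this sketch of the
statement:
EXEC SQL SET CURRENT PACKAGESET = 'COLLA';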
When you use DEFER(PREPARE) with DRDA access, the package containing the
statements whose preparation you want to defer must be the first qualifying entry in
DB2's package search sequence. (See “Identifying packages at run time” on
page 420 for more information.) For example, assume that the package list for a
plan contains two entries:
PKLIST(LOCB.COLLA.*, LOCB.COLLB.*)
For NODEFER(PREPARE), the collections in the package list can be in any order,
but if the package is not found in the first qualifying PKLIST entry, there is
significant network overhead for searching through the list.
REOPT(VARS)
When you specify REOPT(VARS), DB2 determines access paths at both bind time
and run time for statements that contain one or more of the following variables:
) Host variables
) Parameter markers
) Special registers
At run time, DB2 uses the values in those variables to determine the access paths.
If you specify the bind option REOPT(VARS), DB2 sets the bind option
DEFER(PREPARE) automatically.
Because there are performance costs when DB2 reoptimizes the access path at
run time, we recommend that you do the following:
) Use the bind option REOPT(VARS) only on packages or plans that contain
statements that perform poorly because of a bad access path.
) Use the option NOREOPT(VARS) when you bind a plan or package that
contains statements that use DB2 private protocol access.
If you specify REOPT(VARS) when you bind a plan that contains statements
that use DB2 private protocol access to access remote data, DB2 prepares
those statements twice. See “How bind option REOPT(VARS) affects dynamic
SQL” on page 533 for more information on REOPT(VARS).
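For example, to limit reoptimization to a single package that performs poorly, you
might rebind only that package; the collection and package names are hypothetical:
REBIND PACKAGE(COLLA.PROGA) REOPT(VARS)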
CURRENTDATA(NO)
Use this bind option to force block fetch for ambiguous queries. See “Use block
fetch” on page 396 for more information on block fetch.
KEEPDYNAMIC(YES)
Use this bind option to improve performance for queries that use cursors defined
WITH HOLD. With KEEPDYNAMIC(YES), DB2 automatically closes the cursor
when there is no more data to retrieve. The client does not need to send a network
message to tell DB2 to close the cursor. For more information on
KEEPDYNAMIC(YES), see “Keeping prepared statements after commit points” on
page 509.
| DBPROTOCOL(DRDA)
| If the value of installation default DATABASE PROTOCOL is not DRDA, use this
| bind option to cause DB2 to use DRDA access to execute SQL statements with
| three-part names. Statements that use DRDA access perform better at execution
| time because:
| ) Binding occurs when the package is bound, not during program execution.
How to Ensure Block Fetching: To use either type of block fetch, DB2 must
determine that the cursor is not used for update or delete. Indicate that in your
program by adding FOR FETCH ONLY or FOR READ ONLY to the query in the
DECLARE CURSOR statement. If you do not use FOR FETCH ONLY or FOR
READ ONLY, DB2 still uses block fetch for the query if:
) The result table of the cursor is read-only. (See Chapter 6 of DB2 SQL
Reference for a description of read-only tables.)
) The result table of the cursor is not read-only, but the cursor is ambiguous, and
the BIND option CURRENTDATA is NO. A cursor is ambiguous when:
– It is not defined with either the clauses FOR FETCH ONLY, FOR READ
ONLY, or FOR UPDATE OF.
– It is not defined on a read-only result table.
– It is not the target of a WHERE CURRENT clause on an SQL UPDATE or
DELETE statement.
– It is in a plan or package that contains the SQL statements PREPARE or
EXECUTE IMMEDIATE.
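For example, a read-only cursor against the sample employee table can be declared
as in this sketch; the cursor name and column list are illustrative:
EXEC SQL DECLARE C2 CURSOR FOR
  SELECT EMPNO, LASTNAME
    FROM DSN8610.EMP
    FOR FETCH ONLY;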
Table 46 summarizes the conditions under which DB2 uses block fetch.
Table 46 (Page 1 of 2). Effect of CURRENTDATA and isolation level on block fetch

 Isolation     CURRENTDATA     Cursor Type     Block Fetch
 CS or RR      YES             Read-only       Yes
                               Updatable       No
                               Ambiguous       No
               NO              Read-only       Yes
                               Updatable       No
                               Ambiguous       Yes
The number of rows that DB2 transmits on each network transmission depends on
the following factors:
) If n rows of the SQL result set fit within a single DRDA query block, a DB2
server can send n rows to any DRDA client. In this case, DB2 sends n rows in
each network transmission, until the entire query result set is exhausted.
) If n rows of the SQL result set exceed a single DRDA query block, the number
of rows that are contained in each network transmission depends on the client's
DRDA software level and configuration:
– If the client does not support DRDA level 3, the DB2 server automatically
reduces the value of n to match the number of rows that fit within a DRDA
query block.
– If the client does support DRDA level 3, the DRDA client can choose to
accept multiple DRDA query blocks in a single data transmission. DRDA
allows the client to establish an upper limit on the number of DRDA query
blocks in each network transmission.
The number of rows that a DB2 server sends is the smaller of n rows and
the number of rows that fit within the lesser of these two limitations:
- The value of EXTRA BLOCKS SRV in install panel DSNTIP5 at the
DB2 server
This is the maximum number of extra DRDA query blocks that the DB2
server returns to a client in a single network transmission.
- The client's extra query block limit, which is obtained from the DDM
MAXBLKEXT parameter received from the client
When DB2 acts as a DRDA client, the DDM MAXBLKEXT parameter is
set to the value that is specified on the EXTRA BLOCKS REQ install
option of the DSNTIP5 install panel.
Specifying a large value for n in OPTIMIZE FOR n ROWS can increase the number
of DRDA query blocks that a DB2 server returns in each network transmission. This
function can improve performance significantly for applications that use DRDA
access to download large amounts of data. However, this same function can
degrade performance if you do not use it properly. The examples below
demonstrate the performance problems that can occur when you do not use
OPTIMIZE FOR n ROWS judiciously.
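For example, an application that really intends to download a large result set might
code the clause as in this sketch; the cursor name, column list, and the value 1000
are illustrative:
EXEC SQL DECLARE C3 CURSOR FOR
  SELECT EMPNO, LASTNAME
    FROM DSN8610.EMP
    OPTIMIZE FOR 1000 ROWS;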
In Figure 107, the DRDA client opens a cursor and fetches rows from the cursor.
At some point before all rows in the query result set are returned, the application
issues an SQL INSERT. DB2 uses normal DRDA blocking, which has two
advantages over the blocking that is used for OPTIMIZE FOR n ROWS:
) If the application issues an SQL statement other than FETCH (the example
shows an INSERT statement), the DRDA client can transmit the SQL statement
immediately, because the DRDA connection is not in use after the SQL OPEN.
) If the SQL application closes the cursor before fetching all the rows in the
query result set, the server fetches only the number of rows that fit in one
query block, which is 100 rows of the result set. Basically, the DRDA query
block size places an upper limit on the number of rows that are fetched
unnecessarily.
In Figure 108 on page 399, the DRDA client opens a cursor and fetches rows from
the cursor using OPTIMIZE FOR n ROWS. Both the DRDA client and the DB2
server are configured to support multiple DRDA query blocks. At some time before
the end of the query result set, the application issues an SQL INSERT. Because
OPTIMIZE FOR n ROWS is being used, the DRDA connection is not available to
transmit the INSERT statement until the server finishes sending the blocks of rows it
has queued, and rows that the application might never fetch are transmitted anyway.
How to prevent block fetching: If your application requires data currency for a
cursor, you want to prevent block fetching for the data it points to. To prevent block
fetching for a distributed cursor, declare the cursor with the clause FOR UPDATE
OF, naming some column of the SELECT list.
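For example, this sketch of a declaration against the sample employee table prevents
block fetch because the cursor is explicitly updatable:
EXEC SQL DECLARE C4 CURSOR FOR
  SELECT EMPNO, SALARY
    FROM DSN8610.EMP
    FOR UPDATE OF SALARY;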
When ASCII MIXED data is converted to EBCDIC MIXED, the converted string is
longer than the source string. An error occurs if that conversion is done to a
fixed-length input host variable. The remedy is to use a varying-length string
variable with a maximum length that is sufficient to contain the expansion.
The encoding scheme in which the results display depends on two factors:
) Whether the requesting system is ASCII or EBCDIC
If the requester is ASCII, the data returned displays as ASCII. If the requester
is EBCDIC, the returned data displays as EBCDIC, even though it is stored at
the server as ASCII. However, if the SELECT statement used to retrieve the
data contains an ORDER BY clause, the data displays in ASCII order.
Before you can run DB2 applications of the first type, you must precompile,
compile, link-edit, and bind them.
Productivity hint: To avoid rework, first test your SQL statements using SPUFI,
then compile your program without SQL statements and resolve all compiler errors.
Then proceed with the preparation and the DB2 precompile and bind steps.
Because most compilers do not recognize SQL statements, you must use the DB2
precompiler before you compile the program to prevent compiler errors. The
precompiler scans the program and returns a modified source code, which you can
then compile and link edit. The precompiler also produces a DBRM (database
request module). Bind this DBRM to a package or plan using the BIND
subcommand. (For information on packages and plans, see “Chapter 5-1. Planning
to precompile and bind” on page 321.) When you complete these steps, you can
run your DB2 application.
This chapter details the steps to prepare your application program to run. It
includes instructions for the main steps for producing an application program,
additional steps you might need, and steps for rebinding.
There are several ways to control the steps in program preparation. We describe
them under “Using JCL procedures to prepare applications” on page 435.
Attention
The size of a source program that DB2 can precompile is limited by the region
size and the virtual memory available to the precompiler. The maximum region
size and memory available to the precompiler is usually around 8 MB, but it
varies with each system installation.
CICS
If the application contains CICS commands, you must translate the program
before you compile it. (See “Translating command-level statements in a CICS
program” on page 416.)
Precompile methods
To start the precompile process, use one of the following methods:
) DB2I panels. Use the Precompile panel or the DB2 Program Preparation
panels.
) The DSNH command procedure (a TSO CLIST). For a description of that
CLIST, see Chapter 2 of DB2 Command Reference.
) JCL procedures supplied with DB2. See page 435 for more information on this
method.
Input to and output from the precompiler are the same regardless of which of these
methods you choose.
You can use the precompiler at any time to process a program with embedded
SQL statements. DB2 does not have to be active, because the precompiler does
not refer to DB2 catalog tables. For this reason, DB2 does not validate the names
of tables and columns used in SQL statements against current DB2 databases,
though the precompiler checks them against any SQL DECLARE TABLE
statements present in the program. Therefore, you should use DCLGEN to obtain
accurate SQL DECLARE TABLE statements.
You might precompile and compile program source statements several times before
they are error-free and ready to link-edit. During that time, you can get complete
diagnostic output from the precompiler by specifying the SOURCE and XREF
precompiler options.
You can use the SQL INCLUDE statement to get secondary input from the include
library, SYSLIB. The SQL INCLUDE statement reads input from the specified
member of SYSLIB until it reaches the end of the member. Input from the
INCLUDE library cannot contain other precompiler INCLUDE statements, but can
contain both host language and SQL statements. SYSLIB must be a partitioned
data set, with attributes RECFM F or FB, LRECL 80.
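For example, in a COBOL program the secondary input might be requested as in this
sketch; SQLCA is the standard SQL communication area, and DECEMP is a hypothetical
SYSLIB member, such as one produced by DCLGEN:
    EXEC SQL INCLUDE SQLCA END-EXEC.
    EXEC SQL INCLUDE DECEMP END-EXEC.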
Another preprocessor, such as the PL/I macro preprocessor, can generate source
statements for the DB2 precompiler. Any preprocessor run before the DB2
precompiler must be able to pass on SQL statements.
Similarly, other preprocessors can process the source code, after you precompile
and before you compile or assemble. There are limits on the forms of source
statements that can pass through the precompiler. For example, constants,
comments, and other source syntax not accepted by the host compilers (such as a
missing right brace in C) can interfere with precompiler source scanning and cause
errors. You might want to run the host compiler before the precompiler to find the
source statements that are unacceptable to the host compiler. At this point you can
ignore the compiler error messages on SQL statements. After the source
statements are free of unacceptable compiler errors, you can then perform the
normal DB2 program preparation process for that host language.
Listing output: The output data set, SYSPRINT, used to print output from the DB2
precompiler, has an LRECL of 133 and a RECFM of FBA. Statement numbers in
the output of the precompiler listing always display as they appear in the listing.
However, DB2 stores statement numbers greater than 32,767 as 0 in the DBRM.
Database request modules: The major output from the precompiler is a database
request module (DBRM). That data set contains the SQL statements and host
variable information extracted from the source program, along with information that
identifies the program and ties the DBRM to the translated source statements. It
becomes the input to the bind process.
The data set requires space to hold all the SQL statements plus space for each
host variable name and some header information. The header information alone
requires approximately two records for each DBRM, 20 bytes for each SQL record,
and 6 bytes for each host variable. For an exact format of the DBRM, see the
DBRM mapping macro, DSNXDBRM in library prefix.SDSNMACS. The DCB
attributes of the data set are RECFM FB, LRECL 80. The precompiler sets the
characteristics. You can use IEBCOPY, IEHPROGM, TSO commands COPY and
DELETE, or other PDS management tools for maintaining these data sets.
The language preparation procedures in job DSNTIJMV (an install job used to
define DB2 to MVS) use the DISP=OLD parameter to enforce data integrity.
However, when the installation CLIST executes, the DISP=OLD parameter for the
DBRM library data set converts to DISP=SHR, which can cause data integrity
problems when you run multiple precompiler jobs. If you plan to run multiple
precompiler jobs and are not using DFSMSdfp's partitioned data set extended
(PDSE), you must change the language preparation procedures (DSNHCOB,
# DSNHCOB2, DSNHFOR, DSNHC, DSNHPLI, DSNHASM, DSNHSQL) to specify
the DISP=OLD parameter instead of the DISP=SHR parameter.
Precompiler options
You can control the behavior of the precompiler by specifying options when you
use it. The options specify how the precompiler interprets or processes its input,
and how it presents its output.
You can specify DB2 precompiler options with DSNH operands or with the
PARM.PC option of the EXEC JCL statement. You can also specify them from the
appropriate DB2I panels.
Table of precompiler options: Table 47 on page 410 shows the options you can
specify when you use the precompiler, and abbreviations for those options if they
are available. The table uses a vertical bar (|) to separate mutually exclusive options.
Defaults for options of the DB2 precompiler: Some precompiler options have
defaults based on values specified on the Application Programming Defaults
panels. Table 48 shows these precompiler options and defaults:
Table 48. IBM-supplied installation default precompiler options. The installer can change these defaults.

 Install Option (DSNTIPF)   Install Default      Equivalent Precompiler Option   Available Precompiler Options
 STRING DELIMITER           quotation mark (")   QUOTE                           APOST, QUOTE
 SQL STRING DELIMITER       quotation mark (")   QUOTESQL                        APOSTSQL, QUOTESQL
 DECIMAL POINT IS           PERIOD               PERIOD                          COMMA, PERIOD
 DATE FORMAT                ISO                  DATE(ISO)                       DATE(ISO|USA|EUR|JIS|LOCAL)
 DECIMAL ARITHMETIC         DEC15                DEC(15)                         DEC(15|31)
 MIXED DATA                 NO                   NOGRAPHIC                       GRAPHIC, NOGRAPHIC
 LANGUAGE DEFAULT           COBOL                HOST(COBOL)                     HOST(ASM|C[(FOLD)]|CPP[(FOLD)]|
                                                                                 COBOL|COB2|IBMCOB|FORTRAN|PLI)
 STD SQL LANGUAGE           NO                   STDSQL(NO)                      STDSQL(YES|NO|86)
 TIME FORMAT                ISO                  TIME(ISO)                       TIME(ISO|USA|EUR|JIS|LOCAL)
For dynamic SQL statements, another application programming default, USE FOR
DYNAMICRULES, determines whether DB2 uses the application programming
default or the precompiler option for the following install options:
) STRING DELIMITER
3 You can use STDSQL(86) as in prior releases of DB2. The precompiler treats it the same as STDSQL(YES).
Some precompiler options have default values based on the host language. Some
options do not apply to some languages. Table 49 shows the language-dependent
options and defaults.
Notes to Table 49
1. Forced for this language; no alternative allowed.
2. The default is chosen on Application Programming Defaults Panel 1 when DB2
is installed. The IBM-supplied installation defaults for string delimiters are
QUOTE (host language delimiter) and QUOTESQL (SQL escape character).
The installer can replace the IBM-supplied defaults with other defaults. The
precompiler options you specify override any defaults in effect.
If your source program is in COBOL, you must specify a string delimiter that is
the same for the DB2 precompiler, COBOL compiler, and CICS translator. The
defaults for the DB2 precompiler and COBOL compiler are not compatible with
the default for the CICS translator.
If the SQL statements in your source program refer to host variables that a
pointer stored in the CICS TWA addresses, you must make the host variables
addressable to the TWA before you execute those statements. For example, a
COBOL application can issue the following statement to establish addressability
to the TWA:
EXEC CICS ADDRESS
TWA (address-of-twa-area)
END-EXEC
You can run CICS applications only from CICS address spaces. This restriction
applies to the RUN option on the second program DSN command processor. All
of those possibilities occur in TSO.
You can append JCL from a job created by the DB2 Program Preparation
panels to the CICS translator JCL to prepare an application program. To run the
prepared program under CICS, you might need to update the RCT and define
programs and transactions to CICS. Your system programmer must make the
appropriate resource control table (RCT) and CICS resource or table entries.
For information on the required resource entries, see Section 2 of DB2
Installation Guide and CICS for MVS/ESA Resource Definition Guide.
Exception
You do not have to bind a DBRM whose SQL statements all come from this list:
CONNECT
COMMIT
ROLLBACK
DESCRIBE TABLE
RELEASE
SET CONNECTION
SET CURRENT PACKAGESET
SET host-variable = CURRENT PACKAGESET
SET host-variable = CURRENT SERVER
You must bind plans locally, whether or not they reference packages that run
remotely. However, you must bind the packages that run at remote locations at
those remote locations.
To bind a package at a remote DB2 system, you must have all the privileges or
authority there that you would need to bind the package on your local system. To
bind a package at another type of system, such as SQL/DS, you need any
privileges that system requires to execute its SQL statements and use its data
objects.
The bind process for a remote package is the same as for a local package, except
that the local communications database must be able to recognize the location
name you use as resolving to a remote location. To bind the DBRM PROGA at the
location PARIS, in the collection GROUP1, use:
BIND PACKAGE(PARIS.GROUP1)
MEMBER(PROGA)
Then, include the remote package in the package list of a local plan, say PLANB,
by using:
BIND PLAN (PLANB)
PKLIST(PARIS.GROUP1.PROGA)
When you bind or rebind, DB2 checks authorizations, reads and updates the
catalog, and creates the package in the directory at the remote site. DB2 does not
read or update catalogs or check authorizations at the local site.
| If you specify the option EXPLAIN(YES) and you do not specify the option
| SQLERROR(CONTINUE), then PLAN_TABLE must exist at the location specified
| on the BIND or REBIND subcommand. This location could also be the default
| location.
If you bind with the option COPY, the COPY privilege must exist locally. DB2
performs authorization checking, reads and updates the catalog, and creates the
package in the directory at the remote site. DB2 reads the catalog records related
to the copied package at the local site. If the local site is installed with time or date
format LOCAL, and a package is created at a remote site using the COPY option,
the COPY option causes DB2 at the remote site to convert values returned from
the remote site in ISO format, unless an SQL statement specifies a different format.
Once you bind a package, you can rebind, free, or bind it with the REPLACE option
using either a local or a remote bind.
When you now run the existing application at your local DB2, using the new
application plan, these things happen:
) You connect immediately to the remote location named in the
CURRENTSERVER option.
) When about to run a package, DB2 searches for it in the collection REMOTE1
at the remote location.
) Any UPDATE, DELETE, or INSERT statements in your application affect tables
at the remote location.
) Any results from SELECT statements return to your existing application
program, which processes them as though they came from your local DB2.
You can include as many DBRMs in a plan as you wish. However, if you use a
large number of DBRMs in a plan (more than 500, for example), you could have
trouble maintaining the plan. To ease maintenance, you can bind each DBRM
separately as a package, specifying the same collection for all packages bound,
and then bind a plan specifying that collection in the plan's package list. If the
design of the application prevents this method, see if your system administrator can
increase the size of the EDM pool to be at least 10 times the size of either the
largest database descriptor (DBD) or the plan, whichever is greater.
To bind DBRMs directly to the plan, and also include packages in the package list,
use both MEMBER and PKLIST. The example below includes:
) The DBRMs PROG1 and PROG2
) All the packages in a collection called TEST2
) The packages PROGA and PROGC in the collection GROUP1
MEMBER(PROG1,PROG2)
PKLIST(TEST2.*,GROUP1.PROGA,GROUP1.PROGC)
You must specify MEMBER, PKLIST, or both options. The plan that results consists
of one of the following:
) Programs associated with DBRMs in the MEMBER list only
) Programs associated with packages and collections identified in PKLIST only
) A combination of the specifications on MEMBER and PKLIST
(Usually, the consistency token is in an internal DB2 format. You can override that
token if you wish: see “Setting the program level” on page 423.)
But you need other identifiers also. The consistency token alone uniquely identifies
a DBRM bound directly to a plan, but it does not necessarily identify a unique
package. When you bind DBRMs directly to a particular plan, you bind each one
only once. But you can bind the same DBRM to many packages, at different
locations and in different collections, and then you can include all those packages
in the package list of the same plan. All those packages will have the same
consistency token.
You can change the value of CURRENT SERVER by using the SQL CONNECT
statement in your program. If you do not use CONNECT, the value of CURRENT
SERVER is the location name of your local DB2 (or blank, if your DB2 has no
location name).
If you do not use SET CURRENT PACKAGESET, the value in the register is blank
when your application begins to run and remains blank. In that case, the order in
which DB2 searches available collections can be important.
When you call a stored procedure, the special register CURRENT PACKAGESET
contains the value of the COLLID column of the catalog table SYSPROCEDURES.
When the stored procedure returns control to the calling program, DB2 restores
CURRENT PACKAGESET to the value it contained before the call.
For example, if you perform the following bind: BIND PLAN (PLAN1) PKLIST
(COL1.*, COL2.*, COL3.*, COL4.*) and you then execute program PROG1, DB2
does the following:
1. Checks to see if there is a PROG1 program bound as part of the plan
2. Searches for COL1.PROG1.timestamp
3. If it does not find COL1.PROG1.timestamp, searches for
COL2.PROG1.timestamp
4. If it does not find COL2.PROG1.timestamp, searches for
COL3.PROG1.timestamp
5. If it does not find COL3.PROG1.timestamp, searches for
COL4.PROG1.timestamp.
If the special register CURRENT PACKAGESET is blank, DB2 searches for a DBRM or
a package in one of these sequences:
If you set the special register CURRENT PACKAGESET, DB2 skips the check
for programs that are part of the plan and uses the value of CURRENT
PACKAGESET as the collection. For example, if CURRENT PACKAGESET
contains COL5, then DB2 uses COL5.PROG1.timestamp for the search.
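For example, an application that wants DB2 to take its packages only from collection COL5 can issue the following statement before running the programs involved (the collection name is only illustrative):
EXEC SQL SET CURRENT PACKAGESET = 'COL5';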
If the order of search is not important: In many cases, DB2's order of search is
not important to you and does not affect performance. For an application that runs
only at your local DB2, you can name every package differently and include them
all in the same collection. The package list on your BIND PLAN subcommand can
read:
PKLIST (collection.*)
You can add packages to the collection even after binding the plan. DB2 lets you
bind packages having the same package name into the same collection only if their
version IDs are different.
If your application uses DRDA access, you must bind some packages at remote
locations. Use the same collection name at each location, and identify your
package list as:
PKLIST (*.collection.*)
If you use an asterisk for part of a name in a package list, DB2 checks the
authorization for the package to which the name resolves at run time. To avoid the
checking at run time in the example above, you can grant EXECUTE authority for
the entire collection to the owner of the plan before you bind the plan.
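For example, if the plan owner has the illustrative authorization ID PLANOWNR, a statement like the following, issued at each location that holds the collection, grants that authority (the collection name is a placeholder):
GRANT EXECUTE ON PACKAGE collection.* TO PLANOWNR;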
You can do that with many versions of the program, without having to rebind the
application plan. Neither do you have to rename the plan or change any RUN
subcommands that use it.
| Table 50. How DYNAMICRULES and the run-time environment determine dynamic SQL statement behavior
|
|                      Behavior of dynamic SQL statements
| DYNAMICRULES value   Stand-alone program environment   User-defined function or stored procedure environment
| BIND                 Bind behavior                     Bind behavior
| RUN                  Run behavior                      Run behavior
| DEFINEBIND           Bind behavior                     Define behavior
| DEFINERUN            Run behavior                      Define behavior
| INVOKEBIND           Bind behavior                     Invoke behavior
| INVOKERUN            Run behavior                      Invoke behavior
| The BIND and RUN values can be specified for packages and plans. The other
| values can be specified only for packages.
Determining the authorization cache size for plans: The CACHESIZE option
(optional) allows you to specify the size of the cache to acquire for the plan. DB2
uses this cache for caching the authorization IDs of those users running a plan.
The size of the cache you specify depends on the number of individual
authorization IDs actively using the plan. Required overhead takes 32 bytes, and
each authorization ID takes up 8 bytes of storage. The minimum cache size is 256
bytes (enough for 28 entries and overhead information) and the maximum is 4096
bytes (enough for 508 entries and overhead information). You should specify size in
multiples of 256 bytes; otherwise, the specified value rounds up to the next highest
value that is a multiple of 256.
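For example, a CACHESIZE of 1024 bytes provides the 32 bytes of overhead plus room for (1024 - 32) / 8 = 124 authorization IDs.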
If you run the plan infrequently, or if authority to run the plan is granted to PUBLIC,
you might want to turn off caching for the plan so that DB2 does not use
unnecessary storage. To do this, specify a value of 0 for the CACHESIZE option.
Any plan that you run repeatedly is a good candidate for tuning using the
CACHESIZE option. Also, if you have a plan that a large number of users run
concurrently, you might want to use a larger CACHESIZE.
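For example, the following BIND PLAN sketches (the plan and DBRM names are illustrative) turn caching off for a rarely used plan and enlarge the cache for a plan that many users run concurrently:
BIND PLAN(RPTPLAN) MEMBER(RPTPROG) CACHESIZE(0)
BIND PLAN(OLTPLAN) MEMBER(OLTPROG) CACHESIZE(1024)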
Determining the authorization cache size for packages: DB2 provides a single
package authorization cache for an entire DB2 subsystem. The DB2 installer sets
the size of the package authorization cache by entering a size in field PACKAGE
AUTH CACHE of DB2 installation panel DSNTIPP. A 32KB authorization cache is
large enough to hold authorization information for about 375 package collections.
See DB2 Installation Guide for more information on setting the size of the package
authorization cache.
See DB2 Installation Guide for more information on setting the size of the routine
authorization cache.
CURRENT RULES determines the SQL rules, DB2 or SQL standard, that apply to
SQL behavior at run time. For example, the value in CURRENT RULES affects the
behavior of defining check constraints using the statement ALTER TABLE on a
populated table:
You can use the statement SET CURRENT RULES to control the action that the
statement ALTER TABLE takes. Assuming that the value of CURRENT RULES is
initially STD, the following SQL statements change the SQL rules to DB2, add a
check constraint, defer validation of that constraint and place the table in check
pending status, and restore the rules to STD.
EXEC SQL
SET CURRENT RULES = 'DB2';
EXEC SQL
ALTER TABLE DSN8610.EMP
ADD CONSTRAINT C1 CHECK (BONUS <= 1000.0);
EXEC SQL
SET CURRENT RULES = 'STD';
These statements add the check constraint to the populated table, defer validation
of the constraint, place the table in check-pending status, and then restore the
standard option. See
“Creating tables with check constraints” on page 44 for information on check
constraints.
You can also use CURRENT RULES in host variable assignments, for
example:
SET :XRULE = CURRENT RULES;
and as the argument of a search-condition, for example:
SELECT * FROM SAMPTBL WHERE COL1 = CURRENT RULES;
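The value of CURRENT RULES is a character string of length 3 ('DB2' or 'STD'), so the receiving host variable must be a character variable of at least that length. In COBOL, for example, the host variable XRULE used above might be declared as:
01 XRULE PIC X(3).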
You can use packages and dynamic plan selection together, but when you
dynamically switch plans, the following conditions must exist:
) All special registers, including CURRENT PACKAGESET, must contain their
initial values.
) The value in the CURRENT DEGREE special register cannot have changed
during the current transaction.
The benefit of using dynamic plan selection and packages together is that you
can convert individual programs in an application containing many programs
and plans, one at a time, to use a combination of plans and packages. This
reduces the number of plans per application, and having fewer plans reduces
the effort needed to maintain the dynamic plan exit.
The following scenario illustrates thread association for a task that runs program
MAIN:
Sequence of SQL Statements Events
1. EXEC CICS START TRANSID(MAIN) TRANSID(MAIN) executes program
MAIN.
2. EXEC SQL SELECT... Program MAIN issues an SQL
SELECT statement. The default
dynamic plan exit selects plan MAIN.
3. EXEC CICS LINK PROGRAM(PROGA)
4. EXEC SQL SELECT... DB2 does not call the default
dynamic plan exit, because the
program does not issue a sync
point. The plan is MAIN.
Include the DB2 TSO attachment facility language interface module (DSNELI) or
DB2 call attachment facility language interface module (DSNALI).
IMS
Include the DB2 IMS (Version 1 Release 3 or later) language interface module
(DFSLI000). Also, the IMS RESLIB must precede the SDSNLOAD library in the
link list, JOBLIB, or STEPLIB concatenations.
CICS
You can link-edit DSNCLI with your program in either 24-bit or 31-bit addressing
mode. If your application program runs in 31-bit addressing mode (AMODE=31),
you should link-edit the DSNCLI stub to your application with the attributes
AMODE=31 and RMODE=ANY so that your application can run above the 16M
line. For more information on compiling and link-editing CICS application
programs, see the appropriate CICS manual.
You also need the CICS EXEC interface module appropriate for the
programming language. CICS requires that this module be the first control
section (CSECT) in the final load module.
The size of the executable load module produced by the link-edit step may vary
depending on the values that the DB2 precompiler inserts into the source code of
the program.
For more information on compiling and link-editing, see “Using JCL procedures to
prepare applications” on page 435.
For more information on link-editing attributes, see the appropriate MVS manuals.
For details on DSNH, see Chapter 2 of DB2 Command Reference.
You can use the DSN command processor implicitly during program development
for functions such as:
) Using the declarations generator (DCLGEN)
) Running the BIND, REBIND, and FREE subcommands on DB2 plans and
packages for your program
) Using SPUFI (SQL Processor Using File Input) to test some of the SQL
functions in the program
The DSN command processor runs with the TSO terminal monitor program (TMP).
Because the TMP runs in either foreground or background, DSN applications run
interactively or as batch jobs.
The DSN command processor can provide these services to a program that runs
under it:
) Automatic connection to DB2
) Attention key support
) Translation of return codes into error messages
If these limitations are too severe, consider having your application use the call
attachment facility or Recoverable Resource Manager Services attachment facility.
For more information on these attachment facilities, see “Chapter 7-7.
Programming for the call attachment facility (CAF)” on page 745 and “Chapter 7-8.
Programming for the Recoverable Resource Manager Services attachment facility
(RRSAF)” on page 779.
The following example shows how to start a TSO foreground application. The name
of the application is SAMPPGM, and ssid is the system ID:
TSO Prompt: READY
Enter: DSN SYSTEM(ssid)
DSN Prompt: DSN
Enter: RUN PROGRAM(SAMPPGM) -
PLAN(SAMPLAN) -
LIB(SAMPPROJ.SAMPLIB) -
PARMS('/D1 D2 D3')
..
.
(Here the program runs and might prompt you for input)
DSN Prompt: DSN
Enter: END
TSO Prompt: READY
This sequence also works in ISPF option 6. You can package this sequence in a
CLIST. DB2 does not support access to multiple DB2 subsystems from a single
address space.
The PARMS keyword of the RUN subcommand allows you to pass parameters to
the run-time processor and to your application program:
PARMS ('/D1, D2, D3')
The slash (/) is important, indicating where you want to pass the parameters.
Please check your host language publications for more details.
) The JOB option identifies this as a job card. The USER option specifies the
DB2 authorization ID of the user.
) The EXEC statement calls the TSO Terminal Monitor Program (TMP).
) The STEPLIB statement specifies the library in which the DSN Command
Processor load modules and the default application programming defaults
module, DSNHDECP, reside. It can also reference the libraries in which user
applications, exit routines, and the customized DSNHDECP module reside. The
customized DSNHDECP module is created during installation. If you do not
specify a library containing the customized DSNHDECP, DB2 uses the default
DSNHDECP.
) Subsequent DD statements define additional files needed by your program.
) The DSN command connects the application to a particular DB2 subsystem.
) The RUN subcommand specifies the name of the application program to run.
) The PLAN keyword specifies the plan name.
) The LIB keyword specifies the library the application should access.
) The PARMS keyword passes parameters to the run-time processor and the
application program.
) END ends the DSN command processor.
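The JCL listing that these statements describe does not appear in this copy of the book; the following sketch shows the general pattern, with illustrative job, user ID, and data set names:
//RUNSAMP  JOB (ACCT),'RUN SAMPPGM',USER=MYAUTHID
//TMP      EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB  DD DISP=SHR,DSN=prefix.SDSNEXIT
//         DD DISP=SHR,DSN=prefix.SDSNLOAD
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 DSN SYSTEM(ssid)
 RUN PROGRAM(SAMPPGM) PLAN(SAMPLAN) LIB('SAMPPROJ.SAMPLIB')
 END
/*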
Usage notes:
) Keep DSN job steps short.
) We recommend that you not use DSN to call the EXEC command processor to
run CLISTs that contain ISPEXEC statements; results are unpredictable.
) If your program abends or gives you a non-zero return code, DSN terminates.
) You can use a group attachment name instead of a specific ssid to connect to
a member of a data sharing group. For more information, see DB2 Data
Sharing: Planning and Administration.
For more information on using the TSO TMP in batch mode, see OS/390 TSO/E
User's Guide.
The following CLIST calls a DB2 application program named MYPROG. The DB2
subsystem name or group attachment name should replace ssid.
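The CLIST itself does not appear in this copy of the book; a minimal sketch, with an illustrative plan name and load library, might look like this:
PROC 0
DSN SYSTEM(ssid)
RUN PROGRAM(MYPROG) PLAN(MYPLAN) LIB('MYPROJ.RUNLIB.LOAD')
END
EXIT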
IMS
First, be sure you can respond to the program's interactive requests for data
and that you can recognize the expected results. Then, enter the transaction
code associated with the program. Users of the transaction code must be
authorized to run the program.
CICS
To Run a Program
First, ensure that the corresponding entries in the RCT, SNT, and RACF*
control areas allow run authorization for your application. The system
administrator is responsible for these functions; see Section 3 (Volume 1) of
DB2 Administration Guide for more information.
Also, be sure to define to CICS the transaction code assigned to your program
and the program itself.
Issue the NEWCOPY command if CICS has not been reinitialized since the
program was last bound and compiled.
# In a batch environment, you might use statements like these to invoke procedure
# REXXPROG:
# //RUNREXX EXEC PGM=IKJEFT01,DYNAMNBR=20
# //SYSEXEC DD DISP=SHR,DSN=SYSADM.REXX.EXEC
# //SYSTSPRT DD SYSOUT=*
# //SYSTSIN DD *
# %REXXPROG parameters
# The SYSEXEC data set contains your REXX application, and the SYSTSIN data
# set contains the command that you use to invoke the application.
This section describes how to use JCL procedures to prepare a program. For
information on using the DSNH CLIST, the TSO DSN command processor, or JCL
procedures added to your SYS1.PROCLIB, see Chapter 2 of DB2 Command
Reference. For a general overview of the DB2 program preparation process that
the DSNH CLIST performs, see Figure 109 on page 406.
If you use the PL/I macro processor, you must not use the PL/I *PROCESS
statement in the source to pass options to the PL/I compiler. You can specify the
needed options on the PARM.PLI= parameter of the EXEC statement in the DSNHPLI
procedure.
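For example, one way to pass those options is to override the parameter when you invoke the procedure; in this sketch the option values, and any other parameters that the procedure requires, are only illustrative:
//PREPPLI EXEC DSNHPLI,PARM.PLI='MACRO,SOURCE,LIST'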
//LKED.SYSIN DD *
INCLUDE SYSLIB(member)
/*
member must be DSNELI, except for FORTRAN, in which case member must
be DSNHFT.
//LKED.SYSIN DD *
INCLUDE SYSLIB(DFSLI000)
ENTRY (specification)
/*
CICS
//LKED.SYSIN DD *
INCLUDE SYSLIB(DSNCLI)
/*
For more information on required CICS modules, see “Step 3: Compile (or
assemble) and link-edit the application” on page 429.
To call the precompiler, specify DSNHPC as the entry point name. You can pass
three address options to the precompiler; the following sections describe their
formats. The options are addresses of:
) A precompiler option list
) A list of alternate ddnames for the data sets that the precompiler uses
) A page number to use for the first page of the compiler listing on SYSPRINT.
The precompiler adds 1 to the last page number used in the precompiler listing and
puts this value into the page-number field before returning control to the calling
routine. Thus, if you call the precompiler again, page numbering is continuous.
Instead of using the DB2 Program Preparation panels to prepare your CICS program, you can tailor
CICS-supplied JCL procedures to do that. To tailor a CICS procedure, you need to add some steps
and change some DD statements. Make changes as needed to do the following:
) Process the program with the DB2 precompiler.
) Bind the application plan. You can do this any time after you precompile the program. You can
bind the program either online, using the DB2I panels, or as a batch step in this or another MVS job.
) Include a DD statement in the linkage editor step to access the DB2 load library.
) Be sure the linkage editor control statements contain an INCLUDE statement for the DB2 language
interface module.
The following example illustrates the necessary changes. This example assumes the use of a VS
COBOL II or COBOL/370 program. For any other programming language, change the CICS procedure
name and the DB2 precompiler options.
//TESTC1 JOB
//*
//*********************************************************
//* DB2 PRECOMPILE THE COBOL PROGRAM
//*********************************************************
(1) //PC EXEC PGM=DSNHPC,
(1) // PARM='HOST(COB2),XREF,SOURCE,FLAG(I),APOST'
(1) //STEPLIB DD DISP=SHR,DSN=prefix.SDSNEXIT
(1) // DD DISP=SHR,DSN=prefix.SDSNLOAD
(1) //DBRMLIB DD DISP=OLD,DSN=USER.DBRMLIB.DATA(TESTC1)
(1) //SYSCIN DD DSN=&&DSNHOUT,DISP=(MOD,PASS),UNIT=SYSDA,
(1) // SPACE=(8,(5,5))
(1) //SYSLIB DD DISP=SHR,DSN=USER.SRCLIB.DATA
(1) //SYSPRINT DD SYSOUT=*
(1) //SYSTERM DD SYSOUT=*
(1) //SYSUDUMP DD SYSOUT=*
(1) //SYSUT1 DD SPACE=(8,(5,5),,,ROUND),UNIT=SYSDA
(1) //SYSUT2 DD SPACE=(8,(5,5),,,ROUND),UNIT=SYSDA
(1) //SYSIN DD DISP=SHR,DSN=USER.SRCLIB.DATA(TESTC1)
(1) //*
For more information about the procedure DFHEITVL, other CICS procedures, or CICS requirements
for application programs, please see the appropriate CICS manual.
If you are preparing a particularly large or complex application, you can use one of
the last two techniques mentioned above. For example, if your program requires
four of your own link-edit include libraries, you cannot prepare the program with
DB2I, because DB2I limits the number of include libraries to three plus language,
IMS or CICS, and DB2 libraries. Therefore, you would need another preparation
method. Programs using the call attachment facility can use either of the last two
techniques mentioned above. Be careful to use the correct language interface.
You must precompile the contents of each data set or member separately, but the
prelinker must receive all of the compiler output together.
This section describes the options you can specify on the program preparation
panels. For the purposes of describing the process, the program preparation
examples assume that you are using COBOL programs that run under TSO.
Attention: If your C++ or IBM COBOL for MVS & VM program satisfies both of
these conditions, you need to use a JCL procedure to prepare it:
) The program consists of more than one data set or member.
) More than one data set or member contains SQL statements.
See “Using JCL to prepare a program with object-oriented extensions” for more
information.
DB2I help
The online help facility enables you to select information in an online DB2 book
from a DB2I panel.
For instructions on setting up DB2 online help, see the discussion of setting up DB2
online help in Section 2 of DB2 Installation Guide.
If your site makes use of CD-ROM updates, you can make the updated books
accessible from DB2I. Select Option 10 on the DB2I Defaults Panel and enter the
new book data set names. You must have write access to prefix.SDSNCLST to
perform this function.
Figure 111. Initiating program preparation through DB2I. Specify Program Preparation
by entering 3 on the COMMAND line of the DB2I Primary Option Menu (panel DSNEPRI).
The following explains the functions on the DB2I Primary Option Menu.
1 SPUFI
Lets you develop and execute one or more SQL statements interactively. For
further information, see “Chapter 2-5. Executing SQL from your terminal
using SPUFI” on page 79.
2 DCLGEN
Lets you generate C, COBOL, or PL/I data declarations of tables. For further
information, see “Chapter 3-3. Generating declarations for your tables using
DCLGEN” on page 115.
3 PROGRAM PREPARATION
Lets you prepare and run an application program. For more
information, see “The DB2 Program Preparation panel” on page 443.
4 PRECOMPILE
Lets you convert embedded SQL statements into statements that your host
language can process. For further information, see “The Precompile panel” on
page 450.
5 BIND/REBIND/FREE
Lets you bind, rebind, or free a package or application plan.
6 RUN
Lets you run an application program in a TSO or batch environment.
7 DB2 COMMANDS
Lets you issue DB2 commands. For more information about DB2 commands,
see Chapter 2 of DB2 Command Reference.
The Program Preparation panel also lets you change the DB2I default values (see
page 448) and perform other precompile and prelink functions.
On the DB2 Program Preparation panel, shown in Figure 112, enter the name of
the source program data set (this example uses SAMPLEPG.COBOL) and specify
the other options you want to include. When finished, press ENTER to view the
next panel.
Figure 112. The DB2 program preparation panel. Enter the source program data set name
and other options.
The following explains the functions on the DB2 Program Preparation panel and
how to fill in the necessary fields in order to start program preparation.
Fields 7 through 15, described below, let you select the functions to perform and
choose whether to show the DB2I panels for the functions you select. Use Y for
YES, or N for NO.
If you are willing to accept default values for all the steps, enter N under DISPLAY
PANEL for all the other preparation panels listed.
To make changes to the default values, enter Y under DISPLAY PANEL for any
panel you want to see. DB2I then displays each of the panels that you request.
After all the panels display, DB2 proceeds with the steps involved in preparing your
program to run.
Variables for all functions used during program preparation are maintained
separately from variables entered from the DB2I Primary Option Menu. For
example, the bind plan variables you enter on the program preparation panel are
saved separately from those on any bind plan panel that you reach from the
Primary Option Menu.
6 CHANGE DEFAULTS
Lets you specify whether to change the DB2I defaults. Enter Y in the Display
Panel field next to this option; otherwise enter N. Minimally, you should
specify your subsystem identifier and programming language on the defaults
panel. For more information, see “DB2I Defaults Panel 1” on page 448.
7 PL/I MACRO PHASE
Lets you specify whether to display the “Program Preparation: Compile, Link,
and Run” panel to control the PL/I macro phase by entering PL/I options in
the OPTIONS field of that panel. That panel also displays for options
COMPILE OR ASSEMBLE, LINK, and RUN.
CICS
If you are using CICS and have precompiled your program, you must
translate your program using the CICS command translator.
There is no separate DB2I panel for the command translator. You can
specify translation options on the Other Options field of the DB2 Program
Preparation panel, or in your source program if it is not an assembler
program.
Because you specified a CICS run-time environment, the Perform function
column defaults to Y. Command translation takes place automatically after
you precompile the program.
10 BIND PACKAGE
Lets you specify whether to display the BIND PACKAGE panel. To see it,
enter Y in the Display panel field next to this option; otherwise, enter N. For
information on the panel, see “The Bind Package panel” on page 453.
11 BIND PLAN
Lets you specify whether to display the BIND PLAN panel. To see it, enter Y
in the Display panel field next to this option; otherwise, enter N. For
information on the panel, see “The Bind Plan panel” on page 456.
12 COMPILE OR ASSEMBLE
Lets you specify whether to display the “Program Preparation: Compile, Link,
and Run” panel. To see this panel enter Y in the Display Panel field next to
this option; otherwise, enter N.
For information on the panel, see “The Program Preparation: Compile, Link,
and Run panel” on page 466.
13 PRELINK
Lets you use the prelink utility to make your C, C++, or IBM COBOL for MVS &
VM program reentrant. This utility concatenates compile-time initialization
information from one or more text decks into a single initialization unit. To use
the utility, enter Y in the Display Panel field next to this option; otherwise, enter N.
For more information on the prelink utility, see OS/390 Language Environment
for OS/390 & VM Programming Guide.
14 LINK
Lets you specify whether to display the “Program Preparation: Compile, Link,
and Run” panel. To see it, enter Y in the Display Panel field next to this
option; otherwise, enter N. If you specify Y in the Display Panel field for the
COMPILE OR ASSEMBLE option, you do not need to make any changes to
this field; the panel displayed for COMPILE OR ASSEMBLE is the same as
the panel displayed for LINK. You can make the changes you want to affect
the link-edit step at the same time you make the changes to the compile step.
For information on the panel, see “The Program Preparation: Compile, Link,
and Run panel” on page 466.
15 RUN
Lets you specify whether to run your program. The RUN option is available
only if you specify TSO or CAF for RUN TIME ENVIRONMENT.
If you specify Y in the Display Panel field for the COMPILE OR ASSEMBLE
or LINK option, you can specify N in this field, because the panel displayed
for COMPILE OR ASSEMBLE and for LINK is the same as the panel
displayed for RUN.
IMS and CICS
IMS and CICS programs cannot run using DB2I. If you are using IMS or
CICS, use N in these fields.
Pressing ENTER takes you to the first panel in the series you specified, in this
example to the DB2I Defaults panel. If, at any point in your progress from panel to
panel, you press the END key, you return to this first panel, from which you can
change your processing specifications. Asterisks (*) in the Display Panel column of
rows 7 through 14 indicate which panels you have already examined. You can see
a panel again by writing a Y over an asterisk.
Suppose that the default programming language is PL/I and the default number of
lines per page of program listing is 60. Your program is in COBOL, so you want to
change field 3, APPLICATION LANGUAGE. You also want to print 80 lines to the
page, so you need to change field 4, LINES/PAGE OF LISTING, as well.
Figure 113 on page 448 shows the entries that you make in DB2I Defaults panel 1
to make these changes. In this case, pressing ENTER takes you to DB2
DEFAULTS panel 2.
Pressing ENTER takes you to the next panel you specified on the DB2 Program
Preparation panel, in this case, to the Precompile panel.
Figure 115. The precompile panel. Specify the include library, if any, that your program
should use, and any other options you need.
The following explains the functions on the Precompile panel and how to enter the
fields for preparing to precompile.
1 INPUT DATA SET
Lets you specify the data set name of the source program and SQL
statements to precompile.
If you reached this panel through the DB2 Program Preparation panel, this
field contains the data set name specified there. You can override it on this
panel if you wish.
If you reached this panel directly from the DB2I Primary Option Menu, you
must enter the data set name of the program you want to precompile. The
data set name can include a member name. If you do not enclose the data
set name with apostrophes, a standard TSO prefix (user ID) qualifies the data
set name.
2 INCLUDE LIBRARY
Lets you enter the name of a library containing members that the precompiler
should include. These members can contain output from DCLGEN. If you do
not enclose the name in apostrophes, a standard TSO prefix (user ID)
qualifies the name.
You can request additional INCLUDE libraries by entering DSNH CLIST
parameters of the form PnLIB(dsname), where n is 2, 3, or 4, on the OTHER
OPTIONS field of this panel or on the OTHER DSNH OPTIONS field of the
Program Preparation panel.
3 DSNAME QUALIFIER
Lets you specify a character string that qualifies temporary data set names
during precompile. Use any character string from 1 to 8 characters in length
that conforms to normal TSO naming conventions.
If you reached this panel through the DB2 Program Preparation panel, this
field contains the data set name qualifier specified there. You can override it
on this panel if you wish.
If you reached this panel from the DB2I Primary Option Menu, you can either
specify a DSNAME QUALIFIER or let the field take its default value, TEMP.
CICS
For CICS programs, the data set tsoprefix.qualifier.suffix receives the
precompiled source statements in preparation for CICS command
translation.
If you do not plan to do CICS command translation, the source statements
in tsoprefix.qualifier.suffix are ready to compile. The data set
tsoprefix.qualifier.PCLIST contains the precompiler print listing.
When the precompiler completes its work, control passes to the CICS
command translator. Because there is no panel for the translator,
translation takes place automatically. The data set
tsoprefix.qualifier.CXLIST contains the output from the command
translator.
The following information explains the functions on the BIND PACKAGE panel and
how to fill the necessary fields in order to bind your program. For more information,
see the BIND PACKAGE command in Chapter 2 of DB2 Command Reference.
1 LOCATION NAME
Lets you specify the system at which to bind the package. You can use from
1 to 16 characters to specify the location name. The location name must be
defined in the catalog table SYSIBM.LOCATIONS. The default is the local
DBMS.
2 COLLECTION-ID
Lets you specify the collection the package is in. You can use from 1 to 18
characters to specify the collection, and the first character must be alphabetic.
3 DBRM: COPY:
Lets you specify whether you are creating a new package (DBRM) or making
a copy of a package that already exists (COPY). Use:
DBRM
To create a new package. You must specify values in the LIBRARY,
PASSWORD, and MEMBER fields.
COPY
To copy an existing package. You must specify values in the
COLLECTION-ID and PACKAGE-ID fields. (The VERSION field is
optional.)
4 MEMBER OR COLLECTION-ID
MEMBER (for new packages): If you are creating a new package, this option
lets you specify the DBRM to bind. You can specify a member name from 1
to 8 characters. The default name depends on the input data set name.
) If the input data set is partitioned, the default name is the member name
of the input data set specified in the INPUT DATA SET NAME field of the
DB2 Program Preparation panel.
If you enter the BIND PLAN panel from the Program Preparation panel, many of
the BIND PLAN entries contain values from the Primary and Precompile panels.
See Figure 117 on page 457.
The following explains the functions on the BIND PLAN panel and how to fill the
necessary fields in order to bind your program. For more information, see the BIND
PLAN command in Chapter 2 of DB2 Command Reference.
1 MEMBER
Lets you specify the DBRMs to include in the plan. You can specify a name
from 1 to 8 characters. You must specify MEMBER or INCLUDE PACKAGE
LIST, or both. If you do not specify MEMBER, fields 2, 3, and 4 are ignored.
The default member name depends on the input data set.
) If the input data set is partitioned, the default name is the member name
of the input data set specified in field 1 of the DB2 Program Preparation
panel.
) If the input data set is sequential, the default name is the second qualifier
of this input data set.
If you reached this panel directly from the DB2I Primary Option Menu, you
must provide values for the MEMBER and LIBRARY fields.
If you plan to use more than one DBRM, you can include the library name
and member name of each DBRM in the MEMBER and LIBRARY fields,
separating entries with commas. You can also specify more DBRMs by using
the ADDITIONAL DBRMS? field on this panel.
2 PASSWORD
Lets you enter passwords for the libraries you list in the LIBRARY field. You
can use this field only if you reached the BIND PLAN panel directly from the
DB2I Primary Option Menu.
3 LIBRARY
Lets you specify the name of the library or libraries that contain the DBRMs to
use for the bind process. You can specify a name up to 44 characters long.
4 ADDITIONAL DBRMS?
Lets you specify more DBRM entries if you need more room. Or, if you
reached this panel as part of the program preparation process, you can
include more DBRMs by entering YES in this field. A separate panel then displays,
on which you can enter the additional DBRM and library names.
When you finish making changes to this panel, press ENTER to go to the second
of the program preparation panels, Program Prep: Compile, Link, and Run.
(The DEFAULTS FOR BIND PACKAGE panel, DSNEBP1, displays here.)
This panel lets you change your defaults for BIND PACKAGE options. With a few
minor exceptions, the options on this panel are the same as the options for the
defaults for rebinding a package. However, the defaults for REBIND PACKAGE are
different from those shown in the above figure, and you can specify SAME in any
field to specify the values used the last time the package was bound. For rebinding,
the default value for all fields is SAME.
(The DEFAULTS FOR BIND PLAN panel, DSNEBP1, displays here.)
This panel lets you change your defaults for options of BIND PLAN. The options on
this panel are mostly the same as the options for the defaults for rebinding a plan.
For packages:
7 SQLERROR PROCESSING
Lets you specify CONTINUE to continue to create a package after finding
SQL errors, or NOPACKAGE to avoid creating a package after finding SQL
errors.
For plans:
7 RESOURCE ACQUISITION TIME
Lets you specify when to acquire locks on resources. Use:
USE (default) to open table spaces and acquire locks only when the
program bound to the plan first uses them.
ALLOCATE to open all table spaces and acquire all locks when you
allocate the plan. This value has no effect on dynamic SQL.
For a description of the effects of those values, see “The ACQUIRE and
RELEASE options” on page 346.
14 SQLRULES
Lets you specify whether a CONNECT (Type 2) statement executes according
to DB2 rules (DB2) or the SQL standard (STD). For information, see
“Specifying the SQL rules” on page 426.
15 DISCONNECT
Lets you specify which remote connections end during a commit or a rollback.
Regardless of what you specify, all connections in the release-pending state
end during commit.
Use:
EXPLICIT to end connections in the release-pending state only at
COMMIT or ROLLBACK
AUTOMATIC to end all remote connections
CONDITIONAL to end remote connections that have no open cursors
WITH HOLD associated with them.
To enable or disable connection types (that is, allow or prevent the connection from
running the package or plan), enter the information shown below.
1 ENABLE ALL CONNECTION TYPES?
Lets you enter an asterisk (*) to enable all connections. After that entry, you
can ignore the rest of the panel.
2 ENABLE/DISABLE SPECIFIC CONNECTION TYPES
Lets you specify a list of types to enable or disable; you cannot enable some
types and disable others in the same operation. If you list types to enable,
enter E; that disables all other connection types. If you list types to disable,
enter D; that enables all other connection types. For more information about
this option, see the bind options ENABLE and DISABLE in Chapter 2 of DB2
Command Reference.
For each connection type that follows, enter Y (yes) if it is on your list, N (no)
if it is not. The connection types are:
) BATCH for a TSO connection
) DB2CALL for a CAF connection
) RRSAF for an RRSAF connection
) CICS for a CICS connection
) IMS for all IMS connections: DLIBATCH, IMSBMP, and IMSMPP
) DLIBATCH for a DL/I Batch Support Facility connection
) IMSBMP for an IMS connection to a BMP region
) IMSMPP for an IMS connection to an MPP or IFP region
) REMOTE for remote location names and LU names
For each connection type that has a second arrow, under SPECIFY
CONNECTION NAMES?, enter Y if you want to list specific connection names
of that type. Leave N (the default) if you do not. If you use Y in any of those
fields, you see another panel on which you can enter the connection names.
For more information, see “Panels for entering lists of values” on page 465.
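You can make the same choices directly on a BIND subcommand with the ENABLE and DISABLE options. For example, the following sketch (the plan and DBRM names are illustrative) allows only TSO batch and CAF connections to use the plan:
BIND PLAN(MYPLAN) MEMBER(MYPROG) ENABLE(BATCH,DB2CALL)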
If you use the DISPLAY command under TSO on this panel, you can determine
what you have currently defined as “enabled” or “disabled” in your ISPF DSNSPFT
library (member DSNCONNS). The information does not reflect the current state of
the DB2 Catalog.
CONNECTION SUBSYSTEM
CICS1 ENABLED
CICS2 ENABLED
CICS3 ENABLED
CICS4 ENABLED
DLI1 ENABLED
DLI2 ENABLED
DLI3 ENABLED
DLI4 ENABLED
DLI5 ENABLED
The format of each list panel varies, depending on the content and purpose for the
panel. Figure 121 is a generic sample of a list panel:
panelid Specific subcommand function SSID: DSN
COMMAND ===>_ SCROLL ===>
CMD
"""" value ...
"""" value ...
""""
""""
""""
""""
For the syntax of specifying names on a list panel, see Chapter 2 of DB2
Command Reference for the type of name you need to specify.
All of the list panels let you enter limited commands in two places:
) On the system command line, prefixed by ====>
) In a special command area, identified by """"
When you finish with a list panel, specify END to save the current panel values
and continue processing.
Figure 122. The Program Preparation: Compile, Prelink, Link, and Run panel (DSNEPP2)
Your application could need other data sets besides SYSIN and SYSPRINT. If so,
remember to catalog and allocate them before you run your program.
When you press ENTER after entering values in this panel, DB2 compiles and
link-edits the application. If you specified in the DB2 PROGRAM PREPARATION
panel that you want to run the application, DB2 also runs the application.
CICS
Before you run an application, ensure that the corresponding entries in the
RCT, SNT, and RACF control areas authorize your application to run. The
system administrator is responsible for these functions; see Section 3 (Volume
1) of DB2 Administration Guide for more information on the functions.
In addition, ensure that the transaction code assigned to your program, and the
program itself, are defined to the CICS CSD.
If your location has a separate DB2 system for testing, you can create the test
tables and views on the test system, then test your program thoroughly on that
system. This chapter assumes that you do all testing on a separate system, and
that the person who created the test tables and views has an authorization ID of
TEST. The table names are TEST.EMP, TEST.PROJ and TEST.DEPT.
2. Determine the test tables and views you need to test your application.
Include a test table on your list when either:
) The application modifies data in the table
) You need to create a view based on a test table because your application
modifies the view's data.
To continue the example, create these test tables:
) TEST.EMP, with the following format:
DEPTNO MGRNO
... ...
Obtaining authorization
Before you can create a table, you need to be authorized to create tables and to
use the table space in which the table is to reside. You must also have authority to
bind and run programs you want to test. Your DBA can grant you the authorization
needed to create and access tables and to bind and run programs.
If you intend to use existing tables and views (either directly or as the basis for a
view), you need privileges to access those tables and views. Your DBA can grant
those privileges.
For details about each CREATE statement, see DB2 SQL Reference or Section 2
(Volume 1) of DB2 Administration Guide.
SQL statements executed under SPUFI operate on actual tables (in this case, the
tables you have created for testing). Consequently, before you access DB2 data:
) Make sure that all tables and views your SQL statements refer to exist
) If the tables or views do not exist, create them (or have your database
administrator create them). You can use SPUFI to issue the CREATE
statements used to create the tables and views you need for testing.
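For example, a CREATE statement for a simplified TEST.EMP table might look like the following sketch; only the two columns shown earlier are included, and the column data types are assumptions:
CREATE TABLE TEST.EMP
  (DEPTNO CHAR(3) NOT NULL,
   MGRNO  CHAR(6)         );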
For more details about how to use SPUFI, see “Chapter 2-5. Executing SQL from
your terminal using SPUFI” on page 79.
For more information about the TEST command, see OS/390 TSO/E Command
Reference.
ISPF Dialog Test is another option to help you in the task of debugging.
When your program encounters an error, it can pass all the required error
information to a standard error routine. Online programs can also send an error
message to the originating logical terminal.
Some sites run a BMP at the end of the day to list all the errors that occurred
during the day. If your location does this, you can send a message using an
express PCB that has its destination set for that BMP.
Batch Terminal Simulator (BTS): The Batch Terminal Simulator (BTS) allows you
to test IMS application programs. BTS traces application program DL/I calls and
SQL statements, and simulates data communication functions. It can make a TSO
terminal appear as an IMS terminal to the terminal operator, allowing the end user
to interact with the application as though it were online. The user can use any
application program under the user's control to access any database (whether DL/I
or DB2) under the user's control. Access to DB2 databases requires BTS to
operate in batch BMP or TSO BMP mode. For more information on the Batch
Terminal Simulator, see IMS Batch Terminal Simulator General Information.
Using CICS facilities, you can have a printed error record; you can also print the
SQLCA (and SQLDA) contents.
For more details about each of these topics, see CICS for MVS/ESA Application
Programming Reference.
EDF intercepts the running application program at various points and displays
helpful information about the statement type, input and output variables, and any
error conditions after the statement executes. It also displays any screens that the
application program sends, making it possible to converse with the application
program during testing just as a user would on a production system.
EDF displays essential information before and after an SQL statement, while the
task is in EDF mode. This can be a significant aid in debugging CICS transaction
programs containing SQL statements. The SQL information that EDF displays is
helpful for debugging programs and for error analysis after an SQL error or
warning. Using this facility reduces the amount of work you need to do to write
special error handlers.
EDF before execution: Figure 123 on page 476 is an example of an EDF screen
before it executes an SQL statement. The names of the key information fields on
this panel are in boldface.
Figure 123. EDF screen before a DB2 SQL statement
SQL statements containing input host variables: The IVAR (input host
variables) section and its attendant fields only appear when the executing
statement contains input host variables.
The host variables section includes the variables from predicates, the values used
for inserting or updating, and the text of dynamic SQL statements being prepared.
The address of the input variable is AT 'nnnnnnnn'.
EDF after execution: Figure 124 shows an example of the first EDF screen
displayed after the execution of an SQL statement. The names of the key information
fields on this panel are in boldface.
Figure 124. EDF screen after a DB2 SQL statement. For transaction XC5 running
program TESTC5, the screen reports COMMAND EXECUTION COMPLETE for the call to
resource manager DSNCSQL, identifies the plan, DBRM, statement number, and section
of the EXEC SQL FETCH, and lists the SQLCA fields and the first output host
variable (OVAR 1).
Plus signs (+) on the left of the screen indicate that you can see additional EDF
output by using PF keys to scroll the screen forward or back.
The OVAR (output host variables) section and its attendant fields only appear when
the executing statement returns output host variables.
Figure 125 contains the rest of the EDF output for our example.
Figure 125. EDF screen after a DB2 SQL statement, continued. The continuation
screen lists the remaining output host variables (OVAR 2 and OVAR 3, both character
data) for the same statement.
The attachment facility automatically displays SQL information while in the EDF
mode. (You can start EDF as outlined in the appropriate CICS application
programmer's reference manual.) If this is not the case, contact your installer and
see Section 2 of DB2 Installation Guide.
– Have you included the region size parameter in the EXEC statement? Does
it specify a region size large enough for the storage required for the DB2
interface, the TSO, IMS, or CICS system, and your program?
– Have you included the names of all data sets (DB2 and non-DB2) that the
program requires?
) Your program.
You can also use dumps to help localize problems in your program. For
example, one of the more common error situations occurs when your program
is running and you receive a message that it abended. In this instance, your
test procedure might be to capture a TSO dump. To do so, you must allocate a
SYSUDUMP or SYSABEND dump data set before calling DB2. When you
press the ENTER key (after the error message and READY message), the
system requests a dump. You then need to FREE the dump data set.
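Under TSO, the allocation and cleanup might look like this (SYSOUT class A is an assumption; you could instead allocate a cataloged data set):
ALLOCATE FILE(SYSUDUMP) SYSOUT(A)
  (call DB2 and run the program under DSN)
FREE FILE(SYSUDUMP)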
The SYSTERM output provides a brief summary of the results from the
precompiler, all error messages that the precompiler generated, and the statement
in error, when possible. Sometimes, the error messages by themselves are not
enough. In such cases, you can use the line number provided in each error
message to locate the failing source statement.
When you use the Program Preparation panels to prepare and run your program,
DB2 allocates SYSPRINT according to the TERM option you specify (on line 12 of the
PROGRAM PREPARATION: COMPILE, PRELINK, LINK, AND RUN panel).
The SYSPRINT output can provide information about your precompiled source
module if you specify the options SOURCE and XREF when you start the DB2
precompiler.
DB2 SQL PRECOMPILER SYMBOL CROSS-REFERENCE LISTING PAGE 29
...
DEFN
Is the number of the line that the precompiler generates to define the
name. **** means that the object was not defined or the precompiler did not
recognize the declarations.
REFERENCE
Contains two kinds of information: what the source program defines the
symbolic name to be, and which lines refer to the symbolic name. If the
symbolic name refers to a valid host variable, the list also identifies the
data type or STRUCTURE.
) A summary (Figure 130) of the errors detected by the DB2 precompiler and a
list of the error messages generated by the precompiler.
SOURCE STATISTICS
SOURCE LINES READ: 15231
NUMBER OF SYMBOLS: 1282
SYMBOL TABLE BYTES EXCLUDING ATTRIBUTES: 64323
The restart capabilities for DB2 and IMS databases, as well as for sequential data
sets accessed through GSAM, are available through the IMS Checkpoint and
Restart facility.
DB2 allows access to both DB2 and DL/I data through the use of the following DB2
and IMS facilities:
) IMS synchronization calls, which commit and abend units of recovery
) The DB2 IMS attachment facility, which handles the two-phase commit protocol
and allows both systems to synchronize a unit of recovery during a restart after
a failure
Authorization
When the batch application tries to run the first SQL statement, DB2 checks
whether the authorization ID has the EXECUTE privilege for the plan. DB2 uses
the same ID for later authorization checks, and that ID also identifies records from the
accounting and performance traces.
The primary authorization ID is the value of the USER parameter on the job
statement, if that is available. It is the TSO logon name if the job is submitted.
Otherwise, it is the IMS PSB name. In that case, however, the ID must not begin
with the string “SYSADM”, because that value causes the job to abend. The batch job is
rejected if you try to change the authorization ID in an exit routine.
Address spaces
A DL/I batch region is independent of both the IMS control region and the CICS
address space. The DL/I batch region loads the DL/I code into the application
region along with the application program.
Checkpoint calls
Write your program with SQL statements and DL/I calls, and use checkpoint calls.
All checkpoints issued by a batch application program must be unique. The
frequency of checkpoints depends on the application design. At a checkpoint, DL/I
positioning is lost, DB2 cursors are closed (with the possible exception of cursors
defined as WITH HOLD), commit duration locks are freed (again with some
exceptions), and database changes are considered permanent to both IMS and
DB2.
It is also possible to have IMS dynamically back out the updates within the same
job. You must specify the BKO parameter as 'Y' and allocate the IMS log to
DASD.
You could have a problem if the system fails after the program terminates, but
before the job step ends. If you do not have a checkpoint call before the program
ends, DB2 commits the unit of work without involving IMS. If the system fails before
DL/I commits the data, then the DB2 data is out of synchronization with the DL/I
changes. If the system fails during DB2 commit processing, the DB2 data could be
indoubt.
It is recommended that you always issue a symbolic checkpoint at the end of any
update job to coordinate the commit of the outstanding unit of work for IMS and
DB2. When you restart the application program, you must use the XRST call to
obtain checkpoint information and resolve any DB2 indoubt work units.
If you do not use an XRST call, then DB2 assumes that any checkpoint call issued
is a basic checkpoint.
# You can specify values for the following parameters using a DDITV02 data set or a
# subsystem member:
# SSN,LIT,ESMT,RTT,REO,CRC
# You can specify values for the following parameters only in a DDITV02 data set:
# CONNECTION_NAME,PLAN,PROG
If you use the DDITV02 data set and specify a subsystem member, the values in
the DDITV02 DD statement override the values in the specified subsystem
member. If you provide neither, DB2 abends the application program with system
abend code X'04E' and a unique reason code in register 15.
DDITV02 is the DD name for a data set that has DCB options of LRECL=80 and
RECFM=F or FB.
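For example, a DL/I batch job might supply a single positional record like the one in this sketch; all of the values shown (subsystem name, module name, connection, plan, and program names) are illustrative:
//DDITV02  DD *
DSN,SYS1,DSNMIN10,,R,-,DB2BATCH,MYPLAN,MYPROG
/*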
You might want to save and print the data set, as the information is useful for
diagnostic purposes. You can use the IMS module, DFSERA10, to print the
variable-length data set records in both hexadecimal and character format.
Precompiling
When you add SQL statements to an application program, you must precompile the
application program and bind the resulting DBRM into a plan or package, as
described in “Chapter 6-1. Preparing an application program to run” on page 405.
Binding
The owner of the plan or package must have all the privileges required to execute
the SQL statements embedded in it. Before a batch program can issue SQL
statements, a DB2 plan must exist.
You can specify the plan name to DB2 in one of the following ways:
) In the DDITV02 input data set.
) In subsystem member specification.
) By default; the plan name is then the application load module name specified in
DDITV02.
DB2 passes the plan name to the IMS attachment facility. If you do not specify a plan
name in DDITV02, and a resource translation table (RTT) does not exist or the
name is not in the RTT, then DB2 uses the passed name as the plan name. If the
name exists in the RTT, then the name translates to the plan specified for the RTT.
Link-editing
DB2 has language interface routines for each unique supported environment. DB2
requires the IMS language interface routine for DL/I batch; you must also
link-edit DFSLI000 with the application program.
You cannot restart a BMP application program in a DB2 DL/I batch environment.
The symbolic checkpoint records are not accessed, causing an IMS user abend
U0102.
DB2 performs one of two actions automatically when restarted, if the failure occurs
outside the indoubt period: it either backs out the work unit to the prior checkpoint,
or it commits the data without any assistance. If the operator then issues the
command
-DISPLAY THREAD(*) TYPE(INDOUBT)
no work unit information displays.
For most DB2 users, static SQL—embedded in a host language program and
bound before the program runs—provides a straightforward, efficient path to DB2
data. You can use static SQL when you know before run time what SQL
statements your application needs to execute.
Dynamic SQL prepares and executes the SQL statements within a program, while
the program is running. There are four types of dynamic SQL:
) Embedded dynamic SQL
Your application puts the SQL source in host variables and includes PREPARE
and EXECUTE statements that tell DB2 to prepare and run the contents of
those host variables at run time. You must precompile and bind programs that
include embedded dynamic SQL.
) Interactive SQL
A user enters SQL statements through SPUFI. DB2 prepares and executes
those statements as dynamic SQL statements.
) Deferred embedded SQL
Deferred embedded SQL statements are neither fully static nor fully dynamic.
Like static statements, deferred embedded SQL statements are embedded
within applications, but like dynamic statements, they are prepared at run time.
DB2 processes deferred embedded SQL statements with bind-time rules. For
example, DB2 uses the authorization ID and qualifier determined at bind time
as the plan or package owner. Deferred embedded SQL statements are used
for DB2 private protocol access to remote data.
) Dynamic SQL executed through ODBC functions
Your application contains ODBC function calls that pass dynamic SQL
statements as arguments. You do not need to precompile and bind programs
that use ODBC function calls. See DB2 ODBC Guide and Reference for
information on ODBC.
“Choosing between static and dynamic SQL” on page 504 suggests some reasons
for choosing either static or dynamic SQL.
The rest of this chapter shows you how to code dynamic SQL in applications that
contain three types of SQL statements:
) “Dynamic SQL for non-SELECT statements” on page 513. Those statements
include DELETE, INSERT, and UPDATE.
) “Dynamic SQL for fixed-list SELECT statements” on page 517. A SELECT
statement is fixed-list if you know in advance the number and type of data
items in each row of the result.
) “Dynamic SQL for varying-list SELECT statements” on page 519. A SELECT
statement is varying-list if you cannot know in advance how many data items to
allow for or what their data types are.
In the example below, the UPDATE statement can update the salary of any
employee. At bind time, you know that salaries must be updated, but you do not
know until run time whose salaries should be updated, and by how much.
01  IOAREA.
    02  EMPID       PIC X(6).
    02  NEW-SALARY  PIC S9(7)V9(2)  COMP-3.
    ...
    (Other declarations)
READ CARDIN RECORD INTO IOAREA
    AT END MOVE 'N' TO INPUT-SWITCH.
    ...
    (Other COBOL statements)
EXEC SQL
    UPDATE DSN8610.EMP
    SET SALARY = :NEW-SALARY
    WHERE EMPNO = :EMPID
END-EXEC.
The statement (UPDATE) does not change, nor does its basic structure, but the
input can change the results of the UPDATE statement.
One example of such a program is the Query Management Facility (QMF), which
provides an alternative interface to DB2 that accepts almost any SQL statement.
SPUFI is another example; it accepts SQL statements from an input data set, and
then processes and executes them dynamically.
The time at which DB2 determines the access path depends on these factors:
) Whether the statement is executed statically or dynamically
) Whether the statement contains input host variables
If you specify NOREOPT(VARS), DB2 determines the access path at bind time, just
as it does when there are no input variables.
| If you use predictive governing, and a dynamic SQL statement bound with
| REOPT(VARS) exceeds a predictive governing warning threshold, your application
| does not receive a warning SQLCODE. However, if the statement exceeds a
| predictive governing error threshold, the application receives an error SQLCODE
| from the OPEN or EXECUTE statement.
DB2 can save prepared dynamic statements in a cache. The cache is a DB2-wide
cache in the EDM pool that all application processes can use to store and retrieve
prepared dynamic statements. After an SQL statement has been prepared and is
automatically stored in the cache, subsequent prepare requests for that same SQL
statement can avoid the costly preparation process by using the statement in the
cache. Cached statements can be shared among different threads, plans, or
packages.
For example:
PREPARE STMT1 FROM ... Statement is prepared and the prepared
EXECUTE STMT1 statement is put in the cache.
COMMIT
...
PREPARE STMT1 FROM ... Identical statement. DB2 uses the prepared
EXECUTE STMT1 statement from the cache.
COMMIT
...
Eligible Statements: The following SQL statements are eligible for caching:
SELECT
UPDATE
INSERT
DELETE
Distributed and local SQL statements are eligible. Prepared, dynamic statements
using DB2 private protocol access are eligible.
Restrictions: Even though static statements that use DB2 private protocol access
are dynamic at the remote site, those statements are not eligible for caching.
Statements in plans or packages bound with REOPT(VARS) are not eligible for
caching. See “How bind option REOPT(VARS) affects dynamic SQL” on page 533
for more information about REOPT(VARS).
The following conditions must be met before DB2 can use statement P1 instead of
preparing statement S2:
When the dynamic statement cache is active, and you run an application bound
with KEEPDYNAMIC(YES), DB2 retains a copy of both the prepared statement and
the statement string. The prepared statement is cached locally for the application
process. It is likely that the statement is globally cached in the EDM pool, to benefit
other application processes. If the application issues an OPEN, EXECUTE, or
DESCRIBE after a commit operation, the application process uses its local copy of
the prepared statement to avoid a prepare and a search of the cache. Figure 133
on page 510 illustrates this process.
The local instance of the prepared SQL statement is kept in ssnmDBM1 storage
until one of the following occurs:
) The application process ends.
) A rollback operation occurs.
) The application issues an explicit PREPARE statement with the same
statement name.
If the application does issue a PREPARE for the same SQL statement name
that has a kept dynamic statement associated with it, the kept statement is
discarded and DB2 prepares the new statement.
) The statement is removed from memory because the statement has not been
used recently, and the number of kept dynamic SQL statements reaches a limit
set at installation time.
The KEEPDYNAMIC option has performance implications for DRDA clients that
specify WITH HOLD on their cursors:
) If KEEPDYNAMIC(NO) is specified, a separate network message is required
when the DRDA client issues the SQL CLOSE for the cursor.
) If KEEPDYNAMIC(YES) is specified, the DB2 for OS/390 server automatically
closes the cursor when SQLCODE +100 is detected, which means that the
client does not have to send a separate message to close the held cursor. This
reduces network traffic for DRDA applications that use held cursors. It also
reduces the duration of locks that are associated with the held cursor.
Considerations for data sharing: If one member of a data sharing group has
enabled the cache but another has not, and an application is bound with
KEEPDYNAMIC(YES), DB2 must implicitly prepare the statement again if the
statement is assigned to a member without the cache. This can mean a slight
reduction in performance.
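For reference, the following sketch shows a bind subcommand that requests this
behavior; the collection and member names are examples only:
BIND PACKAGE(MYCOLL) MEMBER(MYPROG) KEEPDYNAMIC(YES)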
The governor controls only the dynamic SQL manipulative statements SELECT,
UPDATE, DELETE, and INSERT. Each dynamic SQL statement used in a program
is subject to the same limits. The limit can be a reactive governing limit or a
predictive governing limit. If the statement exceeds a reactive governing limit, the
statement receives an error SQL code. If the statement exceeds a predictive
governing limit, it receives a warning or error SQL code. “Writing an application to
handle predictive governing” on page 512 explains more about predictive governing
SQL codes.
Your system administrator can establish the limits for individual plans or packages,
for individual users, or for all users who do not have personal limits.
Follow the procedures defined by your location for adding, dropping, or modifying
entries in the resource limit specification table. For more information on the
resource limit specification tables, see Section 5 (Volume 2) of DB2 Administration
Guide.
| If the failed statement involves an SQL cursor, the cursor's position remains
| unchanged. The application can then close that cursor. All other operations with the
| cursor do not run and the same SQL error code occurs.
| If the failed SQL statement does not involve a cursor, then all changes that the
| statement made are undone before the error code returns to the application. The
| application can either issue another SQL statement or commit all work done so far.
| For information about setting up the resource limit facility for predictive governing,
| see Section 5 (Volume 2) of DB2 Administration Guide.
| Normally with deferred prepare, the PREPARE, OPEN, and first FETCH of the data
| are returned to the requester. For a predictive governor warning of +495, you would
| ideally like to have the option to choose beforehand whether you want the OPEN
| and FETCH of the data to occur. For downlevel requesters, you do not have this
| option. The level of DRDA that fully supports predictive governing is DRDA level 4.
| The products that include DRDA support for predictive governing are DB2 for
| OS/390 Version 6 and DB2 Connect Version 5.2 with appropriate maintenance.
| All other requesters are considered downlevel with regards to predictive governing
| support through DRDA.
If your application does defer prepare processing, the application receives the +495
at its usual time (OPEN or PREPARE). If you have parameter markers with
deferred prepare, you receive the +495 at OPEN time as you normally do.
However, an additional message is exchanged.
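A sketch of one way an application can test for the predictive governing warning
after OPEN, in the PL/I style of the examples in this chapter; the cursor name
C1 is an example only:
EXEC SQL OPEN C1;
IF SQLCODE = 495 THEN
   DO;
      /* Warn the user; decide whether to FETCH anyway or to CLOSE C1 */
   END;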
# All SQL in REXX programs is dynamic SQL. For information on how to write SQL
# REXX applications, see “Coding SQL statements in a REXX application” on page
# 207.
Most of the examples in this section are in PL/I. “Using dynamic SQL in COBOL”
on page 533 shows techniques for using COBOL. Longer examples in the form of
complete programs are available in the sample applications:
DSNTEP2 Processes both SELECT and non-SELECT statements dynamically.
(PL/I).
DSNTIAD Processes only non-SELECT statements dynamically. (Assembler).
DSNTIAUL Processes SELECT statements dynamically. (Assembler).
Library prefix.SDSNSAMP contains the sample programs. You can view the
programs online, or you can print them using ISPF, IEBPTPCH, or your own
printing program.
Recall that you must prepare (precompile and bind) static SQL statements before
you can use them. You cannot prepare dynamic SQL statements in advance. The
SQL statement EXECUTE IMMEDIATE causes an SQL statement to prepare and
execute, dynamically, at run time.
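For example, assuming that the statement text has been read into the host
variable DSTRING, the following statement prepares and executes it in one step:
EXEC SQL EXECUTE IMMEDIATE :DSTRING;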
If you know in advance that you will use only the DELETE statement and only the
table DSN8610.EMP, then you can use the more efficient static SQL. Suppose
further that there are several different tables with rows identified by employee
numbers, and that users enter a table name as well as a list of employee numbers.
In that case, the table name is not known until run time, so the statement must be
prepared and executed dynamically.
| You can indicate to DB2 that a parameter marker represents a host variable of a
| certain data type by specifying the parameter marker as the argument of a CAST
| function. When the statement executes, DB2 converts the host variable to the data
| type in the CAST function. A parameter marker that you include in a CAST function
| is called a typed parameter marker. A parameter marker without a CAST function is
| called an untyped parameter marker.
| Because DB2 can evaluate an SQL statement with typed parameter markers more
| efficiently than a statement with untyped parameter markers, we recommend that
| you use typed parameter markers whenever possible. Under certain circumstances
| you must use typed parameter markers. See Chapter 6 of DB2 SQL Reference for
| rules for using untyped or typed parameter markers.
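For example, the following sketch adds a typed parameter marker to the DELETE
statement used in this chapter; EMPNO in the sample table is CHAR(6):
DELETE FROM DSN8610.EMP
   WHERE EMPNO = CAST(? AS CHAR(6))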
You associate host variable :EMP with the parameter marker when you execute the
prepared statement. Suppose S1 is the prepared statement. Then the EXECUTE
statement looks like this:
EXECUTE S1 USING :EMP;
For example, let the variable :DSTRING have the value “DELETE FROM
DSN8610.EMP WHERE EMPNO = ?”. To prepare an SQL statement from that
string and assign it the name S1, write:
EXEC SQL PREPARE S1 FROM :DSTRING;
The prepared statement still contains a parameter marker, for which you must
supply a value when the statement executes. After the statement is prepared, the
table name is fixed, but the parameter marker allows you to execute the same
statement many times with different values of the employee number.
After you prepare a statement, you can execute it many times within the same unit
of work. In most cases, COMMIT or ROLLBACK destroys statements prepared in a
unit of work. Then, you must prepare them again before you can execute them
again. However, if you declare a cursor for a dynamic statement and use the option
WITH HOLD, a commit operation does not destroy the prepared statement if the
cursor is still open. You can execute the statement in the next unit of work without
preparing it again.
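A sketch of such a declaration, assuming a dynamic SELECT statement prepared
under the name STMT (as in the fixed-list examples later in this chapter); the
cursor name C1 is an example only:
EXEC SQL DECLARE C1 CURSOR WITH HOLD FOR STMT;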
You can now write an equivalent example for a dynamic SQL statement:
< Read a statement containing parameter markers into DSTRING.>
EXEC SQL PREPARE S1 FROM :DSTRING;
< Read a value for EMP from the list. >
DO UNTIL (EMP = 0);
EXEC SQL EXECUTE S1 USING :EMP;
< Read a value for EMP from the list. >
END;
The PREPARE statement prepares the SQL statement and calls it S1. The
EXECUTE statement executes S1 repeatedly, using different values for EMP.
| After you execute DESCRIBE INPUT, you code the application in the same way as
| any other application in which you execute a prepared statement using an SQLDA.
| First, you obtain the addresses of the input host variables and their indicator
| variables and insert those addresses into the SQLDATA and SQLIND fields. Then
| you execute the prepared SQL statement.
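A sketch of the DESCRIBE INPUT step, assuming a prepared statement named S1 and
an input SQLDA named DPARM; both names are examples only:
EXEC SQL DESCRIBE INPUT S1 INTO :DPARM;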
The term “fixed-list” does not imply that you must know in advance how many rows
of data will return; however, you must know the number of columns and the data
types of those columns. A fixed-list SELECT statement returns a result table that
can contain any number of rows; your program looks at those rows one at a time,
using the FETCH statement. Each successive fetch returns the same number of
values as the last, and the values have the same data types each time. Therefore,
you can specify host variables as you do for static SQL.
An advantage of the fixed-list SELECT is that you can write it in any of the
programming languages that DB2 supports. Varying-list dynamic SELECT
statements require assembler, C, PL/I, and versions of COBOL other than OS/VS
COBOL.
The preceding two steps are exactly the same as described under “Dynamic SQL
for non-SELECT statements” on page 513.
3. Declare a cursor for the statement name as described in “Declare a cursor for
the statement name.”
4. Prepare the statement, as described in “Prepare the statement.”
5. Open the cursor, as described in “Open the cursor” on page 519.
6. Fetch rows from the result table, as described in “Fetch rows from the result
table” on page 519.
7. Close the cursor, as described in “Close the cursor” on page 519.
8. Handle any resulting errors. This step is the same as for static SQL, except for
the number and types of errors that can result.
Suppose that your program retrieves last names and phone numbers by
dynamically executing SELECT statements of this form:
SELECT LASTNAME, PHONENO FROM DSN8610.EMP
WHERE ... ;
The program reads the statements from a terminal, and the user determines the
WHERE clause.
To execute STMT, your program must open the cursor, fetch rows from the result
table, and close the cursor. The following sections describe how to do those steps.
If STMT contains parameter markers, then you must use the USING clause of
OPEN to provide values for all of the parameter markers in STMT. If there are four
parameter markers in STMT, you need:
EXEC SQL OPEN C1 USING :PARM1, :PARM2, :PARM3, :PARM4;
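The FETCH example that the next paragraph refers to is not reproduced here; a
sketch of its likely form, assuming host variables NAME and PHONE whose lengths
match the LASTNAME and PHONENO columns:
EXEC SQL FETCH C1 INTO :NAME, :PHONE;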
It is possible to use this list in the FETCH statement only because you planned the
program to use only fixed-list SELECTs. Every row that cursor C1 points to must
contain exactly two character values of appropriate length. If the program is to
handle anything else, it must use the techniques described under Dynamic SQL for
varying-list SELECT statements.
Now there is a new wrinkle. The program must find out whether the statement is a
SELECT. If it is, the program must also find out how many values are in each row,
and what their data types are. The information comes from an SQL descriptor area
(SQLDA).
For a complete layout of the SQLDA and the descriptions given by INCLUDE
statements, see Appendix C of DB2 SQL Reference.
| Table 55. Minimum number of SQLVARs for a result table with n columns
| Type of DESCRIBE and contents of result set     Not USING BOTH    USING BOTH
| No distinct types or LOBs                       n                 2*n
| Distinct types but no LOBs                      2*n               3*n
| LOBs but no distinct types                      2*n               2*n
| LOBs and distinct types                         2*n               3*n
A program that admits SQL statements of every kind for dynamic execution has two
choices:
) Provide the largest SQLDA that it could ever need. The maximum number of
columns in a result table is 750, so an SQLDA for 750 columns occupies 33,016
bytes for a single SQLDA, 66,016 bytes for a double SQLDA, or 99,016 bytes for a
triple SQLDA. Most SELECTs do not retrieve 750 columns, so the
program does not usually use most of that space.
) Provide a smaller SQLDA, with fewer occurrences of SQLVAR. From this the
program can find out whether the statement was a SELECT and, if it was, how
many columns are in its result table. If there are more columns in the result
than the SQLDA can hold, DB2 returns no descriptions. When this happens,
the program must acquire storage for a second SQLDA that is long enough to
hold the column descriptions, and ask DB2 for the descriptions again. Although
this technique is more complicated to program than the first, it is more general.
How many columns should you allow? You must choose a number that is large
enough for most of your SELECT statements, but not too wasteful of space; 40
is a good compromise. To illustrate what you must do for statements that return
more columns than you allowed for, the examples that follow begin with a
minimum SQLDA, called MINSQLDA, that has only a few occurrences of SQLVAR.
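The DESCRIBE example that the next sentence refers to is not reproduced here; a
sketch of its likely form, using the statement name STMT and the minimum SQLDA
MINSQLDA:
EXEC SQL
  DESCRIBE STMT INTO :MINSQLDA;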
Equivalently, you can use the INTO clause in the PREPARE statement:
EXEC SQL
PREPARE STMT INTO :MINSQLDA FROM :DSTRING;
Do not use the USING clause in either of these examples. At the moment, only the
minimum SQLDA is in use. Figure 134 shows the contents of the minimum SQLDA
in use.
| Whether or not your SQLDA is big enough, whenever you execute DESCRIBE,
| DB2 returns the following values, which you can use to build an SQLDA of the
| correct size:
| ) SQLD, which contains the number of columns in the result table
Figure 137, Figure 138, and Figure 139 on page 526 show the SQL descriptor
area after you take certain actions. Table 56 on page 526 describes the values in
the descriptor area. In Figure 137, the DESCRIBE statement inserted all the values
except the first occurrence of the number 200. The program inserted the number
200 before it executed DESCRIBE to tell how many occurrences of SQLVAR to
allow. If the result table of the SELECT has more columns than this, the SQLVAR
fields describe nothing.
The next set of five values, the first SQLVAR, pertains to the first column of the
result table (the WORKDEPT column). SQLVAR element 1 shows that the column
contains fixed-length character strings and does not allow null values (SQLTYPE=452); the length
attribute is 3. For information on SQLTYPE values, see Appendix C of DB2 SQL
Reference.
Figure 138. SQL descriptor area after analyzing descriptions and acquiring storage
Figure 138 on page 525 shows the content of the descriptor area before the
program obtains any rows of the result table. Addresses of fields and indicator
variables are already in the SQLVAR.
When you use an SQLDA to select data from a table dynamically, you can change
the encoding scheme for the retrieved data. You can use this capability to retrieve
data in ASCII from a table defined as ASCII.
To change the encoding scheme of retrieved data, set up the SQLDA as you would
for any other varying-list SELECT statement. Then make these additional changes
to the SQLDA:
1. Put the character + in the sixth byte of field SQLDAID.
2. For each SQLVAR entry:
) Set the length field of SQLNAME to 8.
) Set the first two bytes of the data field of SQLNAME to X'0000'.
) Set the third and fourth bytes of the data field of SQLNAME to the CCSID,
in hexadecimal, in which you want the results to display. You can specify
any CCSID that meets either of the following conditions:
# – There is a row in catalog table SYSSTRINGS that has a matching
# value for OUTCCSID.
# – Language Environment supports conversion to that CCSID. See
# OS/390 C/C++ Programming Guide for information on the conversions
# that Language Environment supports.
If you are modifying the CCSID to display the contents of an ASCII table in
ASCII on a DB2 for OS/390 system, and you previously executed a
DESCRIBE statement on the SELECT statement you are using to display
the ASCII table, the SQLDATA fields in the SQLDA used for the
DESCRIBE contain the ASCII CCSID for that table. To set the data portion
of the SQLNAME fields for the SELECT, move the contents of each
SQLDATA field in the SQLDA from the DESCRIBE to each SQLNAME field
in the SQLDA for the SELECT. If you are using the same SQLDA for the
DESCRIBE and the SELECT, be sure to move the contents of the
SQLDATA field to SQLNAME before you modify the SQLDATA field for the
SELECT.
For REXX, you set the CCSID in the stem.n.SQLCCSID field instead of setting
the SQLDAID and SQLNAME fields.
For example, suppose the table that contains WORKDEPT and PHONENO is
defined with CCSID ASCII. To retrieve data for columns WORKDEPT and
PHONENO in ASCII CCSID 437 (X'01B5'), change the SQLDA as shown in
Figure 140 on page 528.
In this case, SQLNAME contains nothing for a column with no label. If you prefer to
use labels wherever they exist, but column names where there are no labels, write
USING ANY. (Some columns, such as those derived from functions or expressions,
have neither name nor label; SQLNAME contains nothing for those columns.
However, if the column is the result of a UNION, SQLNAME contains the names of
the columns of the first operand of the UNION.)
You can also write USING BOTH to obtain the name and the label when both exist.
However, to obtain both, you need a second set of occurrences of SQLVAR in
FULSQLDA. The first set contains descriptions of all the columns using names; the
second set contains descriptions using labels. This means that you must allocate a
longer SQLDA for the second DESCRIBE statement ((16 + SQLD * 88 bytes)
instead of (16 + SQLD * 44)). You must also put double the number of columns
(SQLD * 2) in the SQLN field of the second SQLDA. Otherwise, if there is not
enough space available, DESCRIBE does not enter descriptions of any of the
columns.
| The result table for this statement has two columns, but you need four SQLVAR
| occurrences in your SQLDA because the result table contains a LOB type and a
| distinct type. Suppose that you prepare and describe this statement into FULSQLDA, as Figure 141 shows.
| Figure 141. SQL descriptor area after describing a CLOB and distinct type
| The next steps are the same as for result tables without LOBs or distinct types:
| 1. Analyze each SQLVAR description to determine the maximum amount of space
| you need for the column value.
| For a LOB type, retrieve the length from the SQLLONGL field instead of the
| SQLLEN field.
| 2. Derive the address of some storage area of the required size.
| For a LOB data type, you also need a 4-byte storage area for the length of the
| LOB data. You can allocate this 4-byte area at the beginning of the LOB data
| or in a different location.
| 3. Put this address in the SQLDATA field.
| For a LOB data type, if you allocated a separate area to hold the length of the
| LOB data, put the address of the length field in SQLDATAL. If the length field is
| at the beginning of the LOB data area, put 0 in SQLDATAL.
| 4. If the SQLTYPE field indicates that the value can be null, the program must
| also put the address of an indicator variable in the SQLIND field.
| Figure 142 and Figure 143 on page 530 show the contents of FULSQLDA after
| you fill in pointers to storage locations and execute FETCH.
| Figure 142. SQL descriptor area after analyzing CLOB and distinct type descriptions and acquiring storage
The key feature of this statement is the clause USING DESCRIPTOR :FULSQLDA.
That clause names an SQL descriptor area in which the occurrences of SQLVAR
point to other areas. Those other areas receive the values that FETCH returns. It is
possible to use that clause only because you previously set up FULSQLDA to look
like Figure 137 on page 525.
Figure 139 on page 526 shows the result of the FETCH. The data areas identified
in the SQLVAR fields receive the values from a single row of the result table.
Successive executions of the same FETCH statement put values from successive
rows of the result table into these same areas.
When COMMIT ends the unit of work containing OPEN, the statement in STMT
reverts to the unprepared state. Unless you defined the cursor using the WITH
HOLD option, you must prepare the statement again before you can reopen the
cursor.
In both cases, the number and types of host variables named must agree with the
number of parameter markers in STMT and the types of parameter they represent.
The first variable (VAR1 in the examples) must have the type expected for the first
parameter marker in the statement, the second variable must have the type
expected for the second marker, and so on. There must be at least as many
variables as parameter markers.
The structure of DPARM is the same as that of any other SQLDA. The number of
occurrences of SQLVAR can vary, as in previous examples. In this case, there
must be one for every parameter marker. Each occurrence of SQLVAR describes
one host variable that replaces one parameter marker at run time. This happens
either when a non-SELECT statement executes or when a cursor is opened for a
SELECT statement.
You must fill in certain fields in DPARM before using EXECUTE or OPEN; you can
ignore the other fields.
Field Use When Describing Host Variables for Parameter Markers
| SQLDAID The seventh byte indicates whether more than one SQLVAR entry
| is used for each parameter marker. If this byte is not blank, at least
| one parameter marker represents a distinct type or LOB value, so
| the SQLDA has more than one set of SQLVAR entries.
# You do not set this field for a REXX SQLDA.
# SQLDABC The length of the SQLDA, equal to SQLN * 44 + 16. You do not set
# this field for a REXX SQLDA.
# SQLN The number of occurrences of SQLVAR allocated for DPARM. You
# do not set this field for a REXX SQLDA.
SQLD The number of occurrences of SQLVAR actually used. This must
not be less than the number of parameter markers. In each
occurrence of SQLVAR, put the following information, in the same
format that the DESCRIBE statement uses:
SQLTYPE The code for the type of variable, and whether it allows nulls
SQLLEN The length of the host variable
SQLDATA The address of the host variable.
# For REXX, this field contains the value of the host variable.
SQLIND The address of an indicator variable, if needed.
# For REXX, this field contains a negative number if the value in
# SQLDATA is null.
SQLNAME Ignored
# This chapter contains information that applies to all stored procedures and specific
# information about Assembler, C, COBOL, REXX, and PL/I stored procedures. For
# information on writing, preparing, and running Java stored procedures, see DB2
# Application Programming Guide and Reference for Java.
Consider using stored procedures for a client/server application that does at least
one of the following things:
) Executes many remote SQL statements.
Remote SQL statements can create many network send and receive
operations, which results in increased processor costs.
Stored procedures can encapsulate many of your application's SQL statements
into a single message to the DB2 server, reducing network traffic to a single
send and receive operation for a series of SQL statements.
) Accesses host variables for which you want to guarantee security and integrity.
Stored procedures remove SQL applications from the workstation, which
prevents workstation users from manipulating the contents of sensitive SQL
statements and host variables.
Figure 144 on page 536 and Figure 145 on page 536 illustrate the difference
between using stored procedures and not using stored procedures.
Figure 145. Processing with stored procedures. The same series of SQL statements uses a single send or receive
operation.
Perform these tasks to prepare the DB2 subsystem to run stored procedures:
) Decide whether to use WLM-established address spaces or DB2-established
address spaces for stored procedures.
See Section 5 (Volume 2) of DB2 Administration Guide for a comparison of the
two environments.
If you are currently using DB2-established address spaces and want to convert
to WLM-established address spaces, see “Moving stored procedures to a
WLM-established environment (for system administrators)” on page 547 for
information on what you need to do.
) Define JCL procedures for the stored procedures address spaces
Member DSNTIJMV of data set DSN610.SDSNSAMP contains sample JCL
procedures for starting WLM-established and DB2-established address spaces.
If you enter a WLM procedure name or a DB2 procedure name in installation
panel DSNTIPX, DB2 customizes a JCL procedure for you. See Section 2 of
DB2 Installation Guide for details.
) For WLM-established address spaces, define WLM application environments for
groups of stored procedures and associate a JCL startup procedure with each
application environment.
See Section 5 (Volume 2) of DB2 Administration Guide for information on how
to do this.
) If you plan to execute stored procedures that use the ODBA interface to access
IMS databases, modify the startup procedures for the address spaces in which
those stored procedures will run in the following way:
– Add the data set name of the IMS data set that contains the ODBA callable
interface code (usually IMS.RESLIB) to the end of the STEPLIB
concatenation.
– After the STEPLIB DD statement, add a DFSRESLB DD statement that
names the IMS data set that contains the ODBA callable interface code.
) Install Language Environment and the appropriate compilers.
See OS/390 Language Environment for OS/390 & VM Customization for
information on installing Language Environment.
See “Language requirements for the stored procedure and its caller” on
page 548 for minimum compiler and Language Environment requirements.
| Location name
| A 128-byte character field. It contains the name of the location to which the
| invoker is currently connected.
| Authorization ID length
| An unsigned 2-byte integer field. It contains the length of the authorization ID in
| the next field.
| Authorization ID
| A 128-byte character field. It contains the authorization ID of the application
| from which the stored procedure is invoked, padded on the right with blanks. If
| this stored procedure is nested within other routines (user-defined functions or
| stored procedures), this value is the authorization ID of the application that
| invoked the highest-level routine.
| Table qualifier
| A 128-byte character field. This field is not used for stored procedures.
| Table name
| A 128-byte character field. This field is not used for stored procedures.
| Column name
| A 128-byte character field. This field is not used for stored procedures.
| Product information
| An 8-byte character field that identifies the product on which the stored
| procedure executes. This field has the form pppvvrrm, where ppp is a
| 3-character product code (DSN for DB2), vv is a 2-character version identifier,
| rr is a 2-character release identifier, and m is a 1-character maintenance level.
| Operating system
| A 4-byte integer field. It identifies the operating system on which the program
| that invokes the user-defined function runs. The value is one of these:
| 0 Unknown
| 1 OS/2
| 3 Windows
| 4 AIX
| Reserved area
| 24 bytes.
| Reserved area
| 20 bytes.
| See “Linkage conventions” on page 584 for an example of coding the DBINFO
| parameter list in a stored procedure.
| Later, you need to make the following changes to the stored procedure definition:
| ) It selects data from DB2 tables but does not modify DB2 data.
| ) The parameters can have null values, and the stored procedure can return a
| diagnostic string.
| ) The length of time the stored procedure runs should not be limited.
| ) If the stored procedure is called by another stored procedure or a user-defined
| function, the stored procedure uses the WLM environment of the caller.
| Execute this ALTER PROCEDURE statement to make the changes:
| ALTER PROCEDURE B
| READS SQL DATA
| ASUTIME NO LIMIT
| PARAMETER STYLE DB2SQL
| WLM ENVIRONMENT (PAYROLL,*);
The method that you use to perform these tasks depends on whether you are using
WLM-established or DB2-established address spaces.
| Then use CREATE PROCEDURE to create definitions for all stored procedures
| that are identified by the SELECT statement. You cannot specify AUTHID or
| LUNAME using CREATE PROCEDURE. In SYSIBM.SYSPROCEDURES, AUTHID and
| LUNAME let you define several versions of a stored procedure, such as a test
| version and a production version. With CREATE PROCEDURE, you can accomplish
| the same task by specifying a unique schema name for each stored procedure
| with the same name. For example, for stored procedure INVENTORY, you might
| define TEST.INVENTORY and PRODTN.INVENTORY.
# There are two types of stored procedures: external stored procedures and SQL
# procedures. External stored procedures are written in a host language. The source
# code for an external stored procedure is separate from the definition for the stored
# procedure. SQL procedures are written using SQL procedure statements, which
# are part of a CREATE PROCEDURE statement. This section discusses writing and
# preparing external stored procedures. “Writing and preparing an SQL procedure” on
# page 564 discusses writing and preparing SQL procedures.
An external stored procedure is much like any other SQL application. It can include
static or dynamic SQL statements, IFI calls, and DB2 commands issued through
IFI. This section contains the following topics:
) Language requirements for the stored procedure and its caller
) “Calling other programs” on page 549
) “Using reentrant code” on page 549
) “Writing a stored procedure as a main program or subprogram” on page 550
) “Accessing other sites in a stored procedure” on page 555
) “Writing a stored procedure to return result sets to a DRDA client” on page 556
) “Preparing a stored procedure” on page 557
) “Binding the stored procedure” on page 558
) “Writing a REXX stored procedure” on page 559
If the stored procedure calls other programs that contain SQL statements, each of
those called programs must have a DB2 package. The owner of the package or
plan that contains the CALL statement must have EXECUTE authority for all
packages that the other programs use.
When a stored procedure calls another program, DB2 determines which collection
the called program's package belongs to in one of the following ways:
) If the stored procedure executes SET CURRENT PACKAGESET, the called
program's package comes from the collection specified in SET CURRENT
PACKAGESET.
) If the stored procedure does not execute SET CURRENT PACKAGESET,
– If the stored procedure definition contains NO COLLID, DB2 uses the
collection ID of the package that contains the SQL statement CALL.
– If the stored procedure definition contains COLLID collection-id, DB2 uses
collection-id.
When control returns from the stored procedure, DB2 restores the value of the
special register CURRENT PACKAGESET to the value it contained before the
client program executed the SQL statement CALL.
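For reference, a sketch of switching collections from within the stored
procedure; the collection ID is an example only:
EXEC SQL SET CURRENT PACKAGESET = 'COLL2';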
Figure 149 on page 552 shows an example of coding a C++ stored procedure as a
subprogram.
| Table 59 on page 554 shows information you need to use special registers in a
| stored procedure.
When a local DB2 application calls a stored procedure, the stored procedure
cannot have DB2 private protocol access to any DB2 sites already connected to the
calling program by DRDA access.
The local DB2 application cannot use DRDA access to connect to any location that
the stored procedure has already accessed using DB2 private protocol access.
Before making the DRDA connection, the local DB2 application must first execute
the RELEASE statement to terminate the DB2 private protocol connection, and then
commit the unit of work.
ODBA support uses OS/390 RRS for syncpoint control of DB2 and IMS resources.
Therefore, stored procedures that use ODBA can run only in WLM-established
stored procedures address spaces.
When you write a stored procedure that uses ODBA, follow the rules for writing an
IMS application program that issues DL/I calls. See IMS/ESA Application
Programming: Database Manager and IMS/ESA Application Programming:
Transaction Manager for information on writing DL/I applications.
A stored procedure that uses ODBA must issue a DPSB PREP call to deallocate a
PSB when all IMS work under that PSB is complete. The PREP keyword tells IMS
to move inflight work to an indoubt state. When work is in the indoubt state, IMS
does not require activation of syncpoint processing when the DPSB call is
executed. IMS commits or backs out the work as part of RRS two-phase commit
when the stored procedure caller executes COMMIT or ROLLBACK.
A sample COBOL stored procedure and client program demonstrate accessing IMS
data using the ODBA interface. The stored procedure source code is in member
DSN8EC1 and is prepared by job DSNTEJ61. The calling program source code is
in member DSN8EC1 and is prepared and executed by job DSNTEJ62. All code is
in data set DSN610.SDSNSAMP.
The startup procedure for a stored procedures address space in which stored
procedures that use ODBA run must include a DFSRESLB DD statement and an
extra data set in the STEPLIB concatenation. See “Setting up the stored
procedures environment” on page 540 for more information.
For each result set you want returned, your stored procedure must:
) Declare a cursor with the option WITH RETURN.
) Open the cursor.
) Leave the cursor open.
When the stored procedure ends, DB2 returns the rows in the query result set to
the client.
DB2 does not return result sets for cursors that are closed before the stored
procedure terminates. The stored procedure must execute a CLOSE statement for
each cursor associated with a result set that should not be returned to the DRDA
client.
Example: Suppose you want to return a result set that contains entries for all
employees in department D11. First, declare a cursor that describes this subset of
employees:
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
SELECT * FROM DSN8610.EMP
WHERE WORKDEPT='D11';
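Then open the cursor and leave it open when the stored procedure ends, so that
DB2 returns the result set to the client:
EXEC SQL OPEN C1;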
Use meaningful cursor names for returning result sets: The name of the cursor
that is used to return result sets is made available to the client application through
extensions to the DESCRIBE statement. See “Writing a DB2 for OS/390 client
program to receive result sets” on page 611 for more information.
Use cursor names that are meaningful to the DRDA client application, especially
when the stored procedure returns multiple result sets.
Objects from which you can return result sets: You can use any of these
objects in the SELECT statement associated with the cursor for a result set:
# ) Tables, synonyms, views, created temporary tables, declared temporary tables,
# and aliases defined at the local DB2 system
# ) Tables, synonyms, views, created temporary tables, and aliases defined at
# remote DB2 for OS/390 systems that are accessible through DB2 private
# protocol access
Returning a subset of rows to the client: If you execute FETCH statements with
a result set cursor, DB2 does not return the fetched rows to the client program. For
example, if you declare a cursor WITH RETURN and then execute the statements
OPEN, FETCH, FETCH, the client receives data beginning with the third row in the
result set.
# Using a temporary table to return result sets: You can use a created temporary
# table or declared temporary table to return result sets from a stored procedure. This
capability can be used to return nonrelational data to a DRDA client.
For example, you can access IMS data from a stored procedure in the following
way (a sketch of the DB2 portion follows this list):
) Use MVS/APPC to issue an IMS transaction.
) Receive the IMS reply message, which contains data that should be returned to
the client.
) Insert the data from the reply message into a temporary table.
) Open a cursor against the temporary table. When the stored procedure ends,
the rows from the temporary table are returned to the client.
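A sketch of the DB2 portion of this technique; the declared temporary table
SESSION.IMSREPLY and its columns are examples only:
EXEC SQL DECLARE GLOBAL TEMPORARY TABLE SESSION.IMSREPLY
     (SEQNO INTEGER, MSGTEXT VARCHAR(255));
< Insert the rows from the IMS reply message into SESSION.IMSREPLY. >
EXEC SQL DECLARE C2 CURSOR WITH RETURN FOR
     SELECT SEQNO, MSGTEXT FROM SESSION.IMSREPLY;
EXEC SQL OPEN C2;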
The calling application can use a DB2 package or plan to execute the CALL
statement. The stored procedure must use a DB2 package as Figure 150 on
page 559 shows.
The server program might use more than one package. These packages come
from two sources:
) A DBRM that you bind several times into several versions of the same
package, all with the same package name, which can then reside in different
collections. Your stored procedure can switch from one version to another by
using the statement SET CURRENT PACKAGESET.
) A package associated with another program that contains SQL statements that
the stored procedure calls.
# Unlike other stored procedures, you do not prepare REXX stored procedures for
# execution. REXX stored procedures run using one of four packages that are bound
# during the installation of DB2 REXX Language Support. The package that DB2
# uses when the stored procedure runs depends on the current isolation level at
# which the stored procedure runs:
# Package name Isolation level
# DSNREXRR Repeatable read (RR)
# DSNREXRS Read stability (RS)
# DSNREXCS Cursor stability (CS)
# DSNREXUR Uncommitted read (UR)
# Figure 152 on page 561 shows an example of a REXX stored procedure that
# executes DB2 commands. The stored procedure performs the following actions:
# ) Receives one input parameter, which contains a DB2 command.
# ) Calls the IFI COMMAND function to execute the command.
# ) Extracts the command result messages from the IFI return area and places the
# messages in a created temporary table. Each row of the temporary table
# contains a sequence number and the text of one message.
# ) Opens a cursor to return a result set that contains the command result
# messages.
# ) Returns the unformatted contents of the IFI return area in an output parameter.
# EXIT SUBSTR(RTRNAREA,1,TOTLEN+4)
# Figure 152 (Part 2 of 3). Example of a REXX stored procedure: COMMAND
# [The remainder of the figure, not reproduced here, builds a diagnostic string
# from the SQLCA fields, including the SQLWARN flags and SQLSTATE.]
# Figure 152 (Part 3 of 3). Example of a REXX stored procedure: COMMAND
# Creating an SQL procedure involves writing the source statements for the SQL
# procedure, creating the executable form of the SQL procedure, and defining the
# SQL procedure to DB2. There are two ways to create an SQL procedure:
# ) Use the IBM DB2 Stored Procedure Builder product to specify the source
# statements for the SQL procedure, define the SQL procedure to DB2, and
# prepare the SQL procedure for execution.
# ) Write a CREATE PROCEDURE statement for the SQL procedure. Then use
# one of the methods in “Preparing an SQL procedure” on page 572 to define
# the SQL procedure to DB2 and create an executable procedure.
# This section discusses how to write and prepare an SQL procedure. The
# following topics are included:
# ) “Comparison of an SQL procedure and an external procedure”
# ) “Statements that you can include in a procedure body” on page 566
# ) “Terminating statements in an SQL procedure” on page 568
# ) “Handling errors in an SQL procedure” on page 568
# ) “Examples of SQL procedures” on page 570
# ) “Preparing an SQL procedure” on page 572
# For information on the syntax of the CREATE PROCEDURE statement and the
# procedure body, see DB2 SQL Reference.
# An external stored procedure definition and an SQL procedure definition specify the
# following common information:
# ) The procedure name.
# ) Input and output parameter attributes.
# ) The language in which the procedure is written. For an SQL procedure, the
# language is SQL.
# ) Information that will be used when the procedure is called, such as run-time
# options, length of time that the procedure can run, and whether the procedure
# returns result sets.
# An external stored procedure and an SQL procedure differ in the way that they
# specify the code for the stored procedure. An external stored procedure definition
# specifies the name of the stored procedure program. An SQL procedure definition
# contains the source code for the stored procedure.
# For an external stored procedure, you define the stored procedure to DB2 by
# executing the CREATE PROCEDURE statement. You change the definition of the
# stored procedure by executing the ALTER PROCEDURE statement.
# Figure 153 shows a definition for an external stored procedure that is written in
# COBOL. The stored procedure program, which updates employee salaries, is called
# UPDSAL.
# Assignment statement
# Assigns a value to an output parameter or to an SQL variable, which is a
# variable that is defined and used only within a procedure body. The right side
# of an assignment statement can include SQL built-in functions.
# CALL statement
# Calls another stored procedure. This statement is similar to the CALL
# statement described in Chapter 6 of DB2 SQL Reference, except that the
# parameters must be SQL variables, parameters for the SQL procedure, or
# constants.
# CASE statement
# Selects an execution path based on the evaluation of one or more conditions.
# This statement is similar to the CASE expression, which is described in Chapter
# 3 of DB2 SQL Reference.
# GOTO statement
# Transfers program control to a labelled statement.
# IF statement
# Selects an execution path based on the evaluation of a condition.
# LEAVE statement
# Transfers program control out of a loop or a block of code.
# LOOP statement
# Executes a statement or group of statements multiple times.
# REPEAT statement
# Executes a statement or group of statements until a search condition is true.
# WHILE statement
# Repeats the execution of a statement or group of statements while a specified
# condition is true.
# Compound statement
# Can contain one or more of any of the other types of statements in this list. In
# addition, a compound statement can contain SQL variable declarations,
# condition handlers, or cursor declarations.
# The order of statements in a compound statement must be: SQL variable and
# condition declarations, followed by cursor declarations, followed by handler
# declarations, followed by the procedure body statements.
# SQL statement
# A subset of the SQL statements that are described in Chapter 6 of DB2 SQL
# Reference. Certain SQL statements are valid in a compound statement, but not
# valid if the SQL statement is the only statement in the procedure body.
# See the discussion of the procedure body in DB2 SQL Reference for detailed
# descriptions and syntax of each of these statements.
# You can perform any operations on SQL variables that you can perform on host
# variables in SQL statements.
# Qualifying SQL variable names and other object names is a good way to avoid
# ambiguity. Use the following guidelines to determine when to qualify variable
# names:
# ) When you use an SQL procedure parameter in the procedure body, qualify the
# parameter name with the procedure name.
# Important
# The way that DB2 determines the qualifier for unqualified names might change
# in the future. To avoid changing your code later, qualify all SQL variable names.
# In general, the way that a handler works is that when an error occurs that matches
# condition, SQL-procedure-statement executes. When SQL-procedure-statement
# completes, DB2 performs the action that is indicated by handler-type.
# CONTINUE
# Specifies that after SQL-procedure-statement completes, execution continues
# with the statement after the statement that caused the error.
# Example: CONTINUE handler: This handler sets flag at_end when no more rows
# satisfy a query. The handler then causes execution to continue after the statement
# that returned no rows.
# DECLARE CONTINUE HANDLER FOR NOT FOUND SET at_end=1;
# Example: EXIT handler: This handler places the string 'Table does not exist' into
# output parameter OUT_BUFFER when condition NO_TABLE occurs. NO_TABLE is
# previously declared as SQLSTATE 42704 (name is an undefined name). The
# handler then causes the SQL procedure to exit the compound statement in which
# the handler is declared.
# DECLARE NO_TABLE CONDITION FOR '42704';
#
# ...
# DECLARE EXIT HANDLER FOR NO_TABLE
# SET OUT_BUFFER='Table does not exist';
# If any SQL statement in the procedure body receives a negative SQLCODE, the
# SQLEXCEPTION handler receives control. This handler sets output parameter
# DEPTSALARY to NULL and ends execution of the SQL procedure. When this
# handler is invoked, the SQLCODE and SQLSTATE are set to 0.
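The handler from the example that this paragraph describes is not reproduced
here; a sketch of its likely form, using the output parameter DEPTSALARY:
DECLARE EXIT HANDLER FOR SQLEXCEPTION
   SET DEPTSALARY = NULL;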
# There are three methods available for preparing an SQL procedure to run:
# ) Using IBM DB2 Stored Procedure Builder, which runs on Windows NT,
# Windows 95, or Windows 98.
# ) Using JCL. See “Using JCL to prepare an SQL procedure” on page 573.
# ) Using the DB2 for OS/390 SQL procedure processor. See “Using the DB2 for
# OS/390 SQL procedure processor to prepare an SQL procedure” on page 573.
# Environment for calling and running DSNTPSMP: You can invoke DSNTPSMP
# only through an SQL CALL statement in an application program or through IBM
# DB2 Stored Procedure Builder.
# Before you can run DSNTPSMP, you need to perform the following steps to set up
# the DSNTPSMP environment:
# 1. Install the PTFs for DB2 APARs PQ24199 and PQ29706.
# 2. Install DB2 for OS/390 REXX Language Support feature.
# Figure 155 on page 575 shows sample JCL for a startup procedure for the
# address space in which DSNTPSMP runs.
# Creating tables that are used by DSNTPSMP: DSNTPSMP uses two permanent
# DB2 tables, one created temporary table, and three indexes:
# ) Table SYSIBM.SYSPSM holds the source code for SQL procedures that
# DSNTPSMP prepares.
# ) Table SYSIBM.SYSPSMOPTS holds information about the program preparation
# options that you specify when you invoke DSNTPSMP.
# ) Created temporary table SYSIBM.SYSPSMOUT holds information about errors
# that occur during the execution of DSNTPSMP. This information is returned to
# the client program in a result set. See “Result sets that DSNTPSMP returns” on
# page 579 for more information about the result set.
# ) Index SYSIBM.DSNPSMX1 is an index on SYSIBM.SYSPSM.
# ) Index SYSIBM.DSNPSMX2 is a unique index on SYSIBM.SYSPSM.
# ) Index SYSIBM.DSNPSMOX1 is a unique index on SYSIBM.SYSPSMOPTS.
# Before you can run DSNTPSMP, SYSIBM.SYSPSM, SYSIBM.SYSPSMOPTS,
# SYSIBM.SYSPSMOUT, and SYSIBM.DSNPSMOX1 must exist on your DB2
# subsystem. To create the objects, customize job DSNTIJSQ according to the
# instructions in its prolog, then execute DSNTIJSQ.
# DSNTPSMP Syntax
#
# CALL DSNTPSMP ( function, SQL-procedure-name, SQL-procedure-source or empty-string,
#      bind-options or empty-string, compiler-options or empty-string,
#      precompiler-options or empty-string, prelink-edit-options or empty-string,
#      link-edit-options or empty-string, run-time-options or empty-string,
#      source-data-set-name or empty-string, build-schema-name or empty-string,
#      build-name or empty-string, return-codes )
# DSNTPSMP parameters
# function
# A VARCHAR(20) input parameter that identifies the task that you want
# DSNTPSMP to perform. The tasks are:
# BUILD
# Creates the objects for an SQL procedure. If you choose the BUILD function,
# and an SQL procedure with name SQL-procedure-name already exists,
# DSNTPSMP issues a warning message and terminates.
# DESTROY
# Deletes the following objects for an existing SQL procedure:
# ) The DBRM, from the data set that DD name SQLDBRM points to
# ) The load module, from the data set that DD name SQLLMOD points to
# ) The C language source code for the SQL procedure, from the data set
# that DD name SQLCSRC points to
# ) The stored procedure package
# ) The stored procedure definition
# Before the DESTROY function can execute successfully, you must execute
# DROP PROCEDURE on the SQL procedure.
# REBUILD
# Replaces all objects that were created by the BUILD function.
# REBIND
# Rebinds an SQL procedure package.
# SQL-procedure-name
# A VARCHAR(18) input parameter that performs the following functions:
# ) Specifies the SQL procedure name for the DESTROY or REBIND function
# ) Specifies the name of the SQL procedure load module for the BUILD or
# REBUILD function
# SQL-procedure-source
# A VARCHAR(32672) input parameter that contains the source code for the
# SQL procedure. If you specify an empty string for this parameter, you need to
# specify the name of a data set that contains the SQL procedure source code, in
# source-data-set-name.
# bind-options
# A VARCHAR(1024) input parameter that contains the options that you want to
# specify for binding the SQL procedure package. For a list of valid bind options,
# see Chapter 2 of DB2 Command Reference.
# You must specify the PACKAGE bind option for the BUILD, REBUILD, and
# REBIND functions.
# compiler-options
# A VARCHAR(255) input parameter that contains the options that you want to
# specify for compiling the C language program that DB2 generates for the SQL
# procedure. For a list of valid compiler options, see OS/390 C/C++ User's
# Guide.
# precompiler-options
# A VARCHAR(255) input parameter that contains the options that you want to
# specify for precompiling the C language program that DB2 generates for the
# SQL procedure. For a list of valid precompiler options, see Section 6 of DB2
# Application Programming and SQL Guide.
# prelink-edit-options
# A VARCHAR(255) input parameter that contains the options that you want to
# specify for prelink-editing the C language program that DB2 generates for the
# SQL procedure.
# link-edit-options
# A VARCHAR(255) input parameter that contains the options that you want to
# specify for link-editing the C language program that DB2 generates for the SQL
# procedure. For a list of valid link-edit options, see DFSMS/MVS: Program
# Management.
# run-time-options
# A VARCHAR(254) input parameter that contains the Language Environment
# run-time options that you want to specify for the SQL procedure. For a list of
# valid Language Environment run-time options, see OS/390 Language
# Environment for OS/390 & VM Programming Reference.
# source-data-set-name
# A VARCHAR(80) input parameter that contains the name of an MVS sequential
# data set or partitioned data set member that contains the source code for the
# SQL procedure. If you specify an empty string for this parameter, you need to
# provide the SQL procedure source code in SQL-procedure-source.
# build-schema-name
# A VARCHAR(8) input parameter that contains the schema name for the
# procedure name that you specify for the build-name parameter.
# build-name
# A VARCHAR(18) input parameter that contains the procedure name that you
# use when you call DSNTPSMP. You might create several stored procedure
# definitions for DSNTPSMP, each of which specifies a different WLM
# environment. When you call DSNTPSMP using the name in this parameter,
# DB2 runs DSNTPSMP in the WLM environment that is associated with the
# procedure name.
# return-codes
# A VARCHAR(255) output parameter in which DB2 puts the return codes from
# all steps of the DSNTPSMP invocation.
# Result sets that DSNTPSMP returns: When errors occur during DSNTPSMP
# execution, DB2 returns a result set that contains messages and listings from each
# step that DSNTPSMP performs. To obtain the information from the result set, you
# can write your client program to retrieve information from one result set with known
# contents. However, for greater flexibility, you might want to write your client
# program to retrieve data from an unknown number of result sets with unknown
# contents. Both techniques are shown in “Writing a DB2 for OS/390 client program
# to receive result sets” on page 611.
# Each message row in the result set includes the following information:
# Processing step
# The step in the function process to which the message applies.
# ddname
# The ddname of the data set that contains the message.
# Sequence number
# The sequence number of a line of message text within a message.
# Rows in the message result set are ordered by processing step, ddname, and
# sequence number.
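For illustration, the sketch below shows how a DB2 for OS/390 client program written in C might retrieve the message result set with a result set locator. The cursor, locator, and host variable names, and the column list and lengths in the FETCH, are assumptions for illustration only; the complete techniques are described in the section referenced above.

#include <stdio.h>

EXEC SQL INCLUDE SQLCA;

EXEC SQL BEGIN DECLARE SECTION;
  static volatile SQL TYPE IS RESULT_SET_LOCATOR *psmloc;
  char     procstep[9];           /* processing step                      */
  char     msgddn[9];             /* ddname of the data set               */
  long int seqno;                 /* sequence number of the message line  */
  char     msgtext[256];          /* one line of message text             */
EXEC SQL END DECLARE SECTION;

void print_dsntpsmp_messages(void)
{
  /* Run this after CALL DSNTPSMP returns SQLCODE +466, which indicates   */
  /* that result sets exist.                                              */
  EXEC SQL ASSOCIATE LOCATOR (:psmloc) WITH PROCEDURE DSNTPSMP;
  EXEC SQL ALLOCATE MSGCUR CURSOR FOR RESULT SET :psmloc;

  for (;;) {
    EXEC SQL FETCH MSGCUR INTO :procstep, :msgddn, :seqno, :msgtext;
    if (SQLCODE != 0) break;              /* end of result set, or error  */
    printf("%s %s %ld %s\n", procstep, msgddn, seqno, msgtext);
  }
  EXEC SQL CLOSE MSGCUR;
}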
The following examples show the parameter values for four sample DSNTPSMP
invocations:

Function              BUILD
Source location       String in variable procsrc
Bind options          SQLERROR(NOPACKAGE), VALIDATE(RUN), ISOLATION(RR),
                      RELEASE(COMMIT)
Compiler options      SOURCE, LIST, MAR(1,80), LONGNAME, RENT
Precompiler options   HOST(SQL), SOURCE, XREF, MAR(1,72), STDSQL(NO)
Prelink-edit options  None specified
Link-edit options     AMODE=31, RMODE=ANY, MAP, RENT
Run-time options      MSGFILE(OUTFILE), RPTSTG(ON), RPTOPTS(ON)
Build schema          MYSCHEMA
Build name            WLM2PSMP

Function              DESTROY
SQL procedure name    OLDPROC

Function              REBUILD
Source location       Member PROCSRC of partitioned data set DSN610.SDSNSAMP
Bind options          SQLERROR(NOPACKAGE), VALIDATE(RUN), ISOLATION(RR),
                      RELEASE(COMMIT)
Compiler options      SOURCE, LIST, MAR(1,80), LONGNAME, RENT
Precompiler options   HOST(SQL), SOURCE, XREF, MAR(1,72), STDSQL(NO)
Prelink-edit options  MAP
Link-edit options     AMODE=31, RMODE=ANY, MAP, RENT

Function              REBIND
SQL procedure name    SQLPROC
Bind options          VALIDATE(BIND), ISOLATION(RR), RELEASE(DEALLOCATE)
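For illustration only, the following sketch shows an embedded SQL CALL to DSNTPSMP from a C program that passes the parameters as host variables. The host variable names, the procedure name SQLPROC, and the option values are illustrative; the declared lengths follow the parameter descriptions above.

#include <string.h>

EXEC SQL INCLUDE SQLCA;

EXEC SQL BEGIN DECLARE SECTION;
  char function[21];       /* function: BUILD, DESTROY, REBUILD, REBIND   */
  char procname[19];       /* SQL-procedure-name                          */
  char procsrc[32673];     /* SQL-procedure-source, or empty string       */
  char bindopts[1025];     /* bind-options                                */
  char compopts[256];      /* compiler-options                            */
  char precompopts[256];   /* precompiler-options                         */
  char prelinkopts[256];   /* prelink-edit-options                        */
  char linkopts[256];      /* link-edit-options                           */
  char runopts[255];       /* run-time-options                            */
  char srcdsn[81];         /* source-data-set-name, or empty string       */
  char bldschema[9];       /* build-schema-name                           */
  char bldname[19];        /* build-name                                  */
  char retcodes[256];      /* return-codes (output)                       */
EXEC SQL END DECLARE SECTION;

void build_sql_procedure(void)
{
  strcpy(function, "BUILD");
  strcpy(procname, "SQLPROC");                    /* illustrative name    */
  procsrc[0] = '\0';     /* in a real BUILD, put the SQL procedure source */
                         /* here, or pass the source data set name below  */
  strcpy(bindopts, "PACKAGE(COLLA) VALIDATE(RUN)");  /* illustrative      */
  compopts[0] = precompopts[0] = prelinkopts[0] = '\0';
  linkopts[0] = runopts[0] = srcdsn[0] = '\0';    /* empty strings        */
  strcpy(bldschema, "MYSCHEMA");
  strcpy(bldname, "WLM2PSMP");

  EXEC SQL CALL DSNTPSMP(:function, :procname, :procsrc,
                         :bindopts, :compopts, :precompopts,
                         :prelinkopts, :linkopts, :runopts,
                         :srcdsn, :bldschema, :bldname,
                         :retcodes);
}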
# Preparing a program that invokes DSNTPSMP: To prepare the program that calls
# DSNTPSMP for execution, you need to perform the following steps:
# 1. Precompile, compile, and link-edit the application program.
# 2. Bind a package for the application program.
# 3. Bind the package for DB2 REXX support, DSNTRXCS.DSNTREXX, and the
# package for the application program into a plan.
Each of the above CALL statement examples uses an SQLDA. If you do not
explicitly provide an SQLDA, the precompiler generates the SQLDA based on the
variables in the parameter list.
The authorizations you need depend on whether the form of the CALL statement is
CALL literal or CALL :host-variable.
For more information, see the description of the CALL statement in Chapter 6 of
DB2 SQL Reference.
Linkage conventions
When an application executes the CALL statement, DB2 builds a parameter list for
the stored procedure, using the parameters and values provided in the statement.
DB2 obtains information about parameters from the stored procedure definition you
create when you execute CREATE PROCEDURE. Parameters are defined as one
of these types:
IN Input-only parameters, which provide values to the stored procedure
OUT Output-only parameters, which return values from the stored procedure to
the calling program
INOUT Input/output parameters, which provide values to and return values from
the stored procedure
| Initializing output parameters: For a stored procedure that runs locally, you do
| not need to initialize the values of output parameters before you call the stored
| procedure. However, when you call a stored procedure at a remote location, the
| local DB2 cannot determine whether the parameters are input (IN) or output (OUT
| or INOUT) parameters. Therefore, you must initialize the values of all output
| parameters before you call a stored procedure at a remote location.
| It is recommended that you initialize the length of LOB output parameters to zero.
| Doing so can improve your performance.
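For example, the following minimal sketch in C initializes output parameters before a remote CALL. The location name REMLOC, the procedure MYSCHEMA.MYPROC, and the host variables are illustrative and are not from this chapter.

EXEC SQL INCLUDE SQLCA;

EXEC SQL BEGIN DECLARE SECTION;
  long int status;                /* OUT parameter                        */
  char     message[71];           /* OUT parameter                        */
EXEC SQL END DECLARE SECTION;

void call_remote_procedure(void)
{
  /* Initialize every output parameter before the remote CALL, because    */
  /* the local DB2 cannot tell which parameters are IN, OUT, or INOUT.    */
  status = 0;
  message[0] = '\0';

  EXEC SQL CONNECT TO REMLOC;
  EXEC SQL CALL MYSCHEMA.MYPROC(:status, :message);
}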
DB2 supports three parameter list conventions. DB2 chooses the parameter list
convention based on the value of the PARAMETER STYLE parameter in the stored
procedure definition: GENERAL, GENERAL WITH NULLS, or DB2SQL.
) GENERAL: Use GENERAL when you do not want the calling program to pass
null values for input parameters (IN or INOUT) to the stored procedure. The
stored procedure must contain a variable declaration for each parameter
passed in the CALL statement.
Figure 156 shows the structure of the parameter list for PARAMETER STYLE
GENERAL.
) GENERAL WITH NULLS: Use GENERAL WITH NULLS to allow the calling
program to supply a null value for any parameter passed to the stored
procedure. For the GENERAL WITH NULLS linkage convention, the stored
procedure must do the following:
– Declare a variable for each parameter passed in the CALL statement.
– Declare a null indicator structure containing an indicator variable for each
parameter.
– On entry, examine all indicator variables associated with input parameters
to determine which parameters contain null values.
Figure 157. Parameter convention GENERAL WITH NULLS for a stored procedure
) DB2SQL: Like GENERAL WITH NULLS, option DB2SQL lets you supply a null
value for any parameter that is passed to the stored procedure. In addition,
DB2 passes input and output parameters to the stored procedure that contain
this information:
– The SQLSTATE that is to be returned to DB2. This is a CHAR(5)
parameter that can have the same values as those that are returned from a
user-defined function. See “Passing parameter values to and from a
user-defined function” on page 259 for valid SQLSTATE values.
– The qualified name of the stored procedure. This is a VARCHAR(27) value.
– The specific name of the stored procedure. The specific name is a
VARCHAR(18) value that is the same as the unqualified name.
– The SQL diagnostic string that is to be returned to DB2. This is a
VARCHAR(70) value. Use this area to pass descriptive information about
an error or warning to the caller.
# DB2SQL is not a valid linkage convention for a REXX language stored
# procedure.
Figure 158 on page 587 shows the structure of the parameter list for
PARAMETER STYLE DB2SQL.
For these examples, assume that a COBOL application has the following parameter
declarations and CALL statement:
************************************************************
* PARAMETERS FOR THE SQL STATEMENT CALL *
************************************************************
01 V1 PIC S9(9) USAGE COMP.
01 V2 PIC X(9).
..
.
EXEC SQL CALL A (:V1, :V2) END-EXEC.
In the CREATE PROCEDURE statement, the parameters are defined like this:
V1 INT IN, V2 CHAR(9) OUT
Figure 159, Figure 160, Figure 161, and Figure 162 show how a stored procedure
in each language receives these parameters.
*PROCESS SYSTEM(MVS);
A: PROC(V1, V2) OPTIONS(MAIN NOEXECOPS REENTRANT);
/***************************************************************/
/* Code for a PL/I language stored procedure that uses the */
/* GENERAL linkage convention. */
/***************************************************************/
/***************************************************************/
/* Indicate on the PROCEDURE statement that two parameters */
/* were passed by the SQL statement CALL. Then declare the */
/* parameters below. */
/***************************************************************/
DCL V1 BIN FIXED(31),
V2 CHAR(9);
..
.
V2 = '123456789'; /* Assign a value to output variable V2 */
For these examples, assume that a C application has the following parameter
declarations and CALL statement:
/************************************************************/
/* Parameters for the SQL statement CALL */
/************************************************************/
long int v1;
char v2[10]; /* Allow an extra byte for */
/* the null terminator */
/************************************************************/
/* Indicator structure */
/************************************************************/
struct indicators {
short int ind1;
short int ind2;
} indstruc;
..
.
indstruc.ind1 = 0; /* Remember to initialize the */
/* input parameter's indicator*/
/* variable before executing */
/* the CALL statement */
EXEC SQL CALL B (:v1 :indstruc.ind1, :v2 :indstruc.ind2);
..
.
In the CREATE PROCEDURE statement, the parameters are defined like this:
V1 INT IN, V2 CHAR(9) OUT
Figure 163, Figure 164, Figure 165, and Figure 166 show how a stored procedure
in each language receives these parameters.
For these examples, assume that a C application has the following parameter
declarations and CALL statement:
In the CREATE PROCEDURE statement, the parameters are defined like this:
V1 INT IN, V2 CHAR(9) OUT
Figure 167, Figure 168, Figure 169, Figure 170, and Figure 171 show how a
stored procedure in each language receives these parameters.
main(argc,argv)
int argc;
char *argv[];
{
int parm1;
short int ind1;
char p_proc[28];
char p_spec[19];
/***************************************************/
/* Assume that the SQL CALL statment included */
/* 3 input/output parameters in the parameter list.*/
/* The argv vector will contain these entries: */
/* argv[0] 1 contains load module */
/* argv[1-3] 3 input/output parms */
/* argv[4-6] 3 null indicators */
/* argv[7] 1 SQLSTATE variable */
/* argv[8] 1 qualified proc name */
/* argv[9] 1 specific proc name */
/* argv[10] 1 diagnostic string */
/* argv[11] + 1 dbinfo */
/* ------ */
/* 12 for the argc variable */
/***************************************************/
if (argc != 12) {
..
.
/* We end up here when invoked with wrong number of parms */
}
Figure 168 (Part 1 of 2). An example of DB2SQL linkage for a C stored procedure written
as a main program
Figure 168 (Part 2 of 2). An example of DB2SQL linkage for a C stored procedure written
as a main program
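The body of Figure 168 is not reproduced in this excerpt. The following sketch shows statements that such a main program might use to pick up its arguments; the positions follow the argv layout in the comments above, and the casts and string copies are assumptions for illustration only.

/* Inside main, in the normal path (argc equals 12):                      */
parm1 = *(int *) argv[1];            /* first input/output parameter      */
ind1  = *(short int *) argv[4];      /* null indicator for that parameter */
if (ind1 < 0) {
  /* the first parameter was passed as a null value                       */
}
strcpy(p_proc, argv[8]);             /* qualified procedure name          */
strcpy(p_spec, argv[9]);             /* specific procedure name           */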
Figure 169 (Part 1 of 2). An example of DB2SQL linkage for a C stored procedure written
as a subprogram
strcpy(l_p2,parm2);
l_ind1 = *p_ind1;
l_ind2 = *p_ind2;
strcpy(l_sqlstate,p_sqlstate);
strcpy(l_proc,p_proc);
strcpy(l_spec,p_spec);
strcpy(l_diag,p_diag);
memcpy(&lsp_dbinfo,sp_dbinfo,sizeof(lsp_dbinfo));
..
.
}
Figure 169 (Part 2 of 2). An example of DB2SQL linkage for a C stored procedure written
as a subprogram
#ifdef MVS
#pragma runopts(PLIST(OS))
#endif
-- or --
#ifndef WKSTN
#pragma runopts(PLIST(OS))
#endif
For information on specifying PL/I compile-time and run-time options, see IBM PL/I
MVS & VM Programming Guide.
For example, suppose that a stored procedure that is defined with the GENERAL
linkage convention takes one integer input parameter and one character output
parameter of length 6000. You do not want to pass the 6000 byte storage area to
the stored procedure. A PL/I program containing these statements passes only two
bytes to the stored procedure for the output variable and receives all 6000 bytes
from the stored procedure:
For languages other than REXX: For all data types except LOBs, ROWIDs, and
locators, see the tables listed in Table 61 for the host data types that are
compatible with the data types in the stored procedure definition. For LOBs,
ROWIDs, and locators, see tables Table 62, Table 63 on page 608, Table 64 on
page 608, and Table 65 on page 610.
# For REXX: See “Calling a stored procedure from a REXX Procedure” on page 617
# for information on DB2 data types and corresponding parameter formats.
Table 62 (Page 1 of 2). Compatible assembler language declarations for LOBs, ROWIDs,
and locators
SQL data type in definition Assembler declaration
TABLE LOCATOR DS FL4
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
BLOB(n) If n <= 65535:
var DS 0FL4
var_length DS FL4
var_data DS CLn
If n > 65535:
var DS 0FL4
var_length DS FL4
var_data DS CL65535
ORG var_data+(n-65535)
Table 63. Compatible C language declarations for LOBs, ROWIDs, and locators
SQL data type in definition C declaration
TABLE LOCATOR unsigned long
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
BLOB(n) struct
{unsigned long length;
char data[n];
} var;
| CLOB(n) struct
| {unsigned long length;
| char var_data[n];
| } var;
| DBCLOB(n) struct
| {unsigned long length;
| wchar_t data[n];
| } var;
| ROWID struct {
| short int length;
| char data[40];
| } var;
Table 64 (Page 1 of 3). Compatible COBOL declarations for LOBs, ROWIDs, and locators
SQL data type in definition COBOL declaration
TABLE LOCATOR 01 var PIC S9(9) USAGE IS BINARY.
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
Table 65 (Page 1 of 2). Compatible PL/I declarations for LOBs, ROWIDs, and locators
SQL data type in definition PL/I
TABLE LOCATOR BIN FIXED(31)
BLOB LOCATOR
CLOB LOCATOR
DBCLOB LOCATOR
BLOB(n) If n <= 32767:
01 var,
03 var_LENGTH
BIN FIXED(31),
03 var_DATA
CHAR(n);
If n > 32767:
01 var,
02 var_LENGTH
BIN FIXED(31),
02 var_DATA,
03 var_DATA1(n)
CHAR(32767),
03 var_DATA2
CHAR(mod(n,32767));
| CLOB(n) If n <= 32767:
| 01 var,
| 03 var_LENGTH
| BIN FIXED(31),
| 03 var_DATA
| CHAR(n);
| If n > 32767:
| 01 var,
| 02 var_LENGTH
| BIN FIXED(31),
| 02 var_DATA,
| 03 var_DATA1(n)
| CHAR(32767),
| 03 var_DATA2
| CHAR(mod(n,32767));
Figure 174 on page 616 demonstrates how you receive result sets when you
do not know how many result sets are returned or what is in each result set.
# Table 66 (Page 1 of 2). Parameter formats for a CALL statement in a REXX procedure
# SQL data type            REXX format
# SMALLINT, INTEGER        A string of numerics that does not contain a decimal point
#                          or exponent identifier. The first character can be a plus
#                          or minus sign. This format also applies to indicator
#                          variables that are passed as parameters.
# Figure 175 on page 619 demonstrates how a REXX procedure calls the stored
# procedure in Figure 152 on page 561. The REXX procedure performs the following
# actions:
# ) Connects to the DB2 subsystem that was specified by the REXX procedure
# invoker.
# ) Calls the stored procedure to execute a DB2 command that was specified by
# the REXX procedure invoker.
# ) Retrieves rows from a result set that contains the command output messages.
# PROC = 'COMMAND'
# RESULTSIZE = 3273
# RESULT = LEFT(' ',RESULTSIZE,' ')
# /****************************************************************/
# /* Call the stored procedure that executes the DB2 command. */
# /* The input variable (COMMAND) contains the DB2 command. */
# /* The output variable (RESULT) will contain the return area */
# /* from the IFI COMMAND call after the stored procedure */
# /* executes. */
# /****************************************************************/
# ADDRESS DSNREXX "EXECSQL" ,
# "CALL" PROC "(:COMMAND, :RESULT)"
# IF SQLCODE < 0 THEN CALL SQLCA
# DO I = 1 TO SQLDA.SQLD
# SAY "SQLDA."I".SQLNAME ="SQLDA.I.SQLNAME";"
# SAY "SQLDA."I".SQLTYPE ="SQLDA.I.SQLTYPE";"
# SAY "SQLDA."I".SQLLOCATOR ="SQLDA.I.SQLLOCATOR";"
# SAY "SQLDA."I".SQLESTIMATE="SQLDA.I.SQLESTIMATE";"
# END I
# /****************************************************************/
# /* Set up a cursor to retrieve the rows from the result */
# /* set. */
# /****************************************************************/
# ADDRESS DSNREXX "EXECSQL ASSOCIATE LOCATOR (:RESULT) WITH PROCEDURE :PROC"
# IF SQLCODE ¬= 0 THEN CALL SQLCA
# SAY RESULT
# ADDRESS DSNREXX "EXECSQL ALLOCATE C11 CURSOR FOR RESULT SET :RESULT"
# IF SQLCODE ¬= 0 THEN CALL SQLCA
# CURSOR = 'C11'
# ADDRESS DSNREXX "EXECSQL DESCRIBE CURSOR :CURSOR INTO :SQLDA"
# IF SQLCODE ¬= 0 THEN CALL SQLCA
# /****************************************************************/
# /* Retrieve and display the rows from the result set, which */
# /* contain the command output message text. */
# /****************************************************************/
# DO UNTIL(SQLCODE ¬= 0)
# ADDRESS DSNREXX "EXECSQL FETCH C11 INTO :SEQNO, :TEXT"
# IF SQLCODE = 0 THEN
# DO
# SAY TEXT
# END
# END
# IF SQLCODE ¬= 0 THEN CALL SQLCA
# RETURN
# /****************************************************************/
# /* Routine to display the SQLCA */
# /****************************************************************/
# SQLCA:
# TRACE O
# SAY 'SQLCODE ='SQLCODE
# SAY 'SQLERRMC ='SQLERRMC
# SAY 'SQLERRP ='SQLERRP
# SAY 'SQLERRD ='SQLERRD.1',',
# || SQLERRD.2',',
# || SQLERRD.3',',
# || SQLERRD.4',',
# || SQLERRD.5',',
# || SQLERRD.6
Before you can call a stored procedure from your embedded SQL application, you
must bind a package for the client program on the remote system. You can use the
remote DRDA bind capability on your DRDA client system to bind the package to
the remote system.
| If you have packages that contain SQL CALL statements that you bound before
| DB2 Version 6, you can get better performance from those packages if you rebind
| them in DB2 Version 6 or later. Rebinding lets DB2 obtain some information from
| the catalog at bind time that it obtained at run time before Version 6.
For an ODBC or CLI application, the DB2 packages and plan associated with the
ODBC driver must be bound to DB2 before you can run your application. For
information on building client applications on platforms other than DB2 for OS/390
to access stored procedures, see one of these documents:
) DB2 UDB Application Building Guide
) DB2 for OS/400 SQL Programming
An MVS client can bind the DBRM to a remote server by specifying a location
name on the command BIND PACKAGE. For example, suppose you want a client
program to call a stored procedure at location LOCA. You precompile the program
to produce DBRM A. Then you can use the command
BIND PACKAGE (LOCA.COLLA) MEMBER(A)
to bind DBRM A into package collection COLLA at location LOCA.
The plan for the package resides only at the client system.
DB2 runs stored procedures under the DB2 thread of the calling application,
making the stored procedures part of the caller's unit of work.
If both the client and server application environments support two-phase commit,
the coordinator controls updates between the application, the server, and the stored
procedures. If either side does not support two-phase commit, updates will fail.
| For example, suppose that you want to write one program, PROGY, that calls one
| of two versions of a stored procedure named PROCX. The load module for both
| stored procedures is named SUMMOD. Each version of SUMMOD is in a different
| load library. The stored procedures run in different WLM environments, and the
| startup JCL for each WLM environment includes a STEPLIB concatenation that
| specifies the correct load library for the stored procedure module.
| First, define the two stored procedures in different schemas and different WLM
| environments:
| CREATE PROCEDURE TEST.PROCX(V1 INTEGER IN, CHAR(9) OUT)
| LANGUAGE C
| EXTERNAL NAME SUMMOD
| WLM ENVIRONMENT TESTENV;
| CREATE PROCEDURE PROD.PROCX(V1 INTEGER IN, CHAR(9) OUT)
| LANGUAGE C
| EXTERNAL NAME SUMMOD
| WLM ENVIRONMENT PRODENV;
| When you write CALL statements for PROCX in program PROGY, use the
| unqualified form of the stored procedure name:
| CALL PROCX(V1,V2);
| Bind two plans for PROGY. In one BIND statement, specify PATH(TEST). In the
| other BIND statement, specify PATH(PROD).
| To call TEST.PROCX, execute PROGY with the plan that you bound with
| PATH(TEST). To call PROD.PROCX, execute PROGY with the plan that you
| bound with PATH(PROD).
To maximize the number of stored procedures that can run concurrently, use the
following guidelines:
) Set REGION size to 0 in startup procedures for the stored procedures address
spaces to obtain the largest possible amount of storage below the 16MB line.
) Limit storage required by application programs below the 16MB line by:
– Link editing programs above the line with AMODE(31) and RMODE(ANY)
attributes
– Using the RENT and DATA(31) compiler options for COBOL programs.
) Limit storage required by IBM Language Environment by using these run-time
options:
– HEAP(,,ANY) to allocate program heap storage above the 16MB line
– STACK(,,ANY,) to allocate program stack storage above the 16MB line
– STORAGE(,,,4K) to reduce reserve storage area below the line to 4KB
– BELOWHEAP(4K,,) to reduce the heap storage below the line to 4KB
– LIBSTACK(4K,,) to reduce the library stack below the line to 4KB
– ALL31(ON) to indicate all programs contained in the stored procedure run
with AMODE(31) and RMODE(ANY).
You can list these options in the RUN OPTIONS parameter of the CREATE
PROCEDURE or ALTER PROCEDURE statement, if they are not
Language Environment installation defaults. For example, the RUN
OPTIONS parameter could specify:
H(,,ANY),STAC(,,ANY,),STO(,,,4K),BE(4K,,),LIBS(4K,,),ALL31(ON)
For more information on creating a stored procedure definition, see
“Defining your stored procedure to DB2” on page 541.
) If you use WLM-established address spaces for your stored procedures, assign
stored procedures that behave similarly to the same WLM application
environment. When the stored procedures within a WLM environment have
substantially different performance characteristics, WLM can have trouble
characterizing the workload in the WLM environment. As a result, WLM might not
manage the address spaces for that environment efficiently.
Consider the following when you develop stored procedures that access non-DB2
resources:
) When a stored procedure runs in a DB2-established stored procedures address
space, DB2 does not coordinate commit and rollback activity on recoverable
resources such as IMS or CICS transactions, and MQI messages. DB2 has no
knowledge of, and therefore cannot control, the dependency between a stored
procedure and a recoverable resource.
) When a stored procedure runs in a WLM-established stored procedures
address space, the stored procedure uses the OS/390 Transaction
Management and Recoverable Resource Manager Services (OS/390 RRS) for
commitment control. When DB2 commits or rolls back work in this
environment, DB2 coordinates all updates made to recoverable resources by
other OS/390 RRS compliant resource managers in the MVS system.
) When a stored procedure runs in a DB2-established stored procedures address
space, MVS is not aware that the stored procedures address space is
processing work for DB2. One consequence of this is that MVS accesses
RACF-protected resources using the user ID associated with the MVS task
(ssnmSPAS) for stored procedures, not the user ID of the client.
) When a stored procedure runs in a WLM-established stored procedures
address space, DB2 can establish a RACF environment for accessing non-DB2
resources. The authority used when the stored procedure accesses protected
MVS resources depends on the value of SECURITY in the stored procedure
definition:
– If the value of SECURITY is DB2, the authorization ID associated with the
stored procedures address space is used.
– If the value of SECURITY is USER, the authorization ID under which the
CALL statement is executed is used.
– If the value of SECURITY is DEFINER, the authorization ID under which
the CREATE PROCEDURE statement was executed is used.
) Not all non-DB2 resources can tolerate concurrent access by multiple TCBs in
the same address space. You might need to serialize the access within your
application.
IMS
If your system is not running a release of IMS that uses OS/390 RRS, you can
use one of the following methods to access DL/I data from your stored
procedure:
) Use the CICS EXCI interface to run a CICS transaction synchronously. That
CICS transaction can, in turn, access DL/I data.
) Invoke IMS transactions asynchronously using the MQI.
) Use APPC through the CPI Communications application programming
interface
| After you write your COBOL stored procedure and set up the WLM environment,
| follow these steps to test the stored procedure with the Debug Tool:
| 1. When you compile the stored procedure, specify the TEST and SOURCE
| options.
| Ensure that the source listing is stored in a permanent data set. VisualAge
| COBOL displays that source listing during the debug session.
| 2. When you define the stored procedure, include run-time option TEST with the
| suboption VADTCPIP&ipaddr in your RUN OPTIONS argument.
| VADTCPIP& tells the Debug Tool that it is interfacing with a workstation that
| runs VisualAge COBOL and is configured for TCP/IP communication with your
| OS/390 system. ipaddr is the IP address of the workstation on which you
| display your debug information. For example, the RUN OPTIONS value in this
| stored procedure definition indicates that debug information should go to the
| workstation with IP address 9.63.51.17:
| CREATE PROCEDURE WLMCOB
| (IN INTEGER, INOUT VARCHAR(3), INOUT INTEGER)
| MODIFIES SQL DATA
| LANGUAGE COBOL EXTERNAL
| PROGRAM TYPE MAIN
| WLM ENVIRONMENT WLMENV1
| RUN OPTIONS 'POSIX(ON),TEST(,,,VADTCPIP&9.63.51.17:*)'
| 3. In the JCL startup procedure for WLM-established stored procedures address
| space, add the data set name of the Debug Tool load library to the STEPLIB
| concatenation. For example, suppose that ENV1PROC is the JCL procedure
| for application environment WLMENV1. The modified JCL for ENV1PROC
| might look like this:
| //DSNWLM PROC RGN=0K,APPLENV=WLMENV1,DB2SSN=DSN,NUMTCB=8
| //IEFPROC EXEC PGM=DSNX9WLM,REGION=&RGN,TIME=NOLIMIT,
| // PARM='&DB2SSN,&NUMTCB,&APPLENV'
| //STEPLIB DD DISP=SHR,DSN=DSN610.RUNLIB.LOAD
| // DD DISP=SHR,DSN=CEE.SCEERUN
| // DD DISP=SHR,DSN=DSN610.SDSNLOAD
| // DD DISP=SHR,DSN=EQAW.SEQAMOD <== DEBUG TOOL
| 4. On the workstation, start the VisualAge Remote Debugger daemon.
| This daemon waits for incoming requests from TCP/IP.
| 5. Call the stored procedure.
| When the stored procedure starts, a window that contains the debug session is
| displayed on the workstation. You can then execute Debug Tool commands to
| debug the stored procedure.
# After you write your C++ stored procedure or SQL procedure and set up the WLM
# environment, follow these steps to test the stored procedure with the Distributed
# Debugger feature of the C/C++ Productivity Tools for OS/390 and the Debug Tool:
# 1. When you define the stored procedure, include run-time option TEST with the
# suboption VADTCPIP&ipaddr in your RUN OPTIONS argument.
# VADTCPIP& tells the Debug Tool that it is interfacing with a workstation that
# runs VisualAge C++ and is configured for TCP/IP communication with your
# OS/390 system. ipaddr is the IP address of the workstation on which you
# display your debug information. For example, this RUN OPTIONS value in a
# stored procedure definition indicates that debug information should go to the
# workstation with IP address 9.63.51.17:
# RUN OPTIONS 'POSIX(ON),TEST(,,,VADTCPIP&9.63.51.17:*)'
# 2. Precompile the stored procedure.
# Ensure that the modified source program that is the output from the precompile
# step is in a permanent, catalogued data set. For an SQL procedure, the
# modified C source program that is the output from the second precompile step
# must be in a permanent, catalogued data set.
# 3. Compile the output from the precompile step. Specify the TEST, SOURCE, and
# OPT(0) compiler options.
# 4. In the JCL startup procedure for the stored procedures address space, add the
# data set name of the Debug Tool load library to the STEPLIB concatenation.
# For example, suppose that ENV1PROC is the JCL procedure for application
# environment WLMENV1. The modified JCL for ENV1PROC might look like this:
Using CODE/370 in batch mode: To test your stored procedure in batch mode,
you must have the CODE/370 MFI Debug Tool installed on the MVS system where
the stored procedure runs. To debug your stored procedure in batch mode using
the MFI Debug Tool, do the following:
) If you plan to use the Language Environment run-time option TEST to invoke
CODE/370, compile the stored procedure with option TEST. This places
information in the program that the Debug Tool uses during a debugging
session.
DB2 discards the debugging information if the application executes the ROLLBACK
statement. To prevent the loss of the debugging data, code the calling application
so that it retrieves the diagnostic data before executing the ROLLBACK statement.
If you still have performance problems after you have tried the suggestions in these
sections, there are other, more risky techniques you can use. See “Special
techniques to influence access path selection” on page 668 for information.
Declared lengths of host variables: Make sure that the declared length of any
host variable is no greater than the length attribute of the data column it is
compared to. If the declared length is greater, the predicate is stage 2 and cannot
be a matching predicate for an index scan.
For example, assume that a host variable and an SQL column are defined as
follows:
Assembler declaration SQL definition
MYHOSTV DS PLn 'value' COL1 DECIMAL(6,3)
When 'n' is used, the precision of the host variable is '2n-1'. If n = 4 and value =
'123.123', then a predicate such as WHERE COL1 = :MYHOSTV is not a matching
predicate for an index scan because the precisions are different. One way to avoid
an inefficient predicate using decimal host variables is to declare the host variable
without the 'Ln' option:
MYHOSTV DS P'123.123'
Consider the following illustration. Assume that there are 1000 rows in
MAIN_TABLE.
SELECT * FROM MAIN_TABLE
WHERE TYPE IN (subquery 1)
AND
PARTS IN (subquery 2);
Assuming that subquery 1 and subquery 2 are the same type of subquery (either
correlated or noncorrelated), DB2 evaluates the subquery predicates in the order
they appear in the WHERE clause. Subquery 1 rejects 10% of the total rows, and
subquery 2 rejects 80% of the total rows. In the order shown, subquery 1 is
evaluated for all 1000 rows, and subquery 2 is then evaluated for the 900 rows that
subquery 1 passes. If you reverse the order, subquery 2 is evaluated for the 1000
rows, but subquery 1 is evaluated for only the 200 rows that subquery 2 passes.
Therefore, coding the subquery that rejects the most rows first reduces the total
number of subquery evaluations.
If you are in doubt, run EXPLAIN on the query with both a correlated and a
noncorrelated subquery. By examining the EXPLAIN output and understanding your
data distribution and SQL statements, you should be able to determine which form
is more efficient.
This general principle can apply to all types of predicates. However, because
subquery predicates can potentially be thousands of times more processor- and
I/O-intensive than all other predicates, it is most important to make sure they are
coded in the correct order.
For column functions to be evaluated during data retrieval, the following conditions
must be met for all column functions in the query:
) There must be no sort needed for GROUP BY. Check this in the EXPLAIN
output.
) There must be no stage 2 (residual) predicates. Check this in your application.
) There must be no distinct set functions such as COUNT(DISTINCT C1).
) If the query is a join, all set functions must be on the last table joined. Check
this by looking at the EXPLAIN output.
) All column functions must be on single columns with no arithmetic expressions.
If your query involves the functions MAX or MIN, refer to “One-fetch access
(ACCESSTYPE=I1)” on page 701 to see whether your query could take advantage
of that method.
See “Using host variables efficiently” on page 658 for more information.
DB2 might not determine the best access path when your queries include
correlated columns. If you think you have a problem with column correlation, see
“Column correlation” on page 654 for ideas on what to do about it.
Example: The query below has three predicates: an equal predicate on C1, a
BETWEEN predicate on C2, and a LIKE predicate on C3.
SELECT * FROM T1
WHERE C1 = 1 AND
C2 BETWEEN 1 AND 2 AND
C3 NOT LIKE 'A%'
Effect on access paths: This section explains the effect of predicates on access
paths. Because SQL allows you to express the same query in different ways,
knowing how predicates affect path selection helps you write queries that access
data efficiently.
Properties of predicates
Predicates in a HAVING clause are not used when selecting access paths; hence,
in this section the term 'predicate' means a predicate after WHERE or ON.
There are special considerations for “Predicates in the ON clause” on page 639.
Predicate types
The type of a predicate depends on its operator or syntax, as listed below. The
type determines what type of processing and filtering occurs when the predicate is
evaluated.
Type Definition
Subquery Any predicate that includes another SELECT statement. Example:
C1 IN (SELECT C10 FROM TABLE1)
Equal Any predicate that is not a subquery predicate and has an equal
operator and no NOT operator. Also included are predicates of the
form C1 IS NULL. Example: C1=100
Range Any predicate that is not a subquery predicate and has an operator
in the following list: >, >=, <, <=, LIKE, or BETWEEN. Example:
C1>100
IN-list A predicate of the form column IN (list of values). Example: C1 IN
(5,10,15)
NOT Any predicate that is not a subquery predicate and contains a NOT
operator. Example: COL1 <> 5 or COL1 NOT BETWEEN 10 AND
20.
Example: Influence of type on access paths: The following two examples show
how the predicate type can influence DB2's choice of an access path. In each one,
assume that a unique index I1 (C1) exists on table T1 (C1, C2), and that all values
of C1 are positive integers.
The query,
SELECT C1, C2 FROM T1 WHERE C1 >= 0;
has a range predicate. However, the predicate does not eliminate any rows of T1.
Therefore, it could be determined during bind that a table space scan is more
efficient than the index scan.
The query,
SELECT * FROM T1 WHERE C1 = 0;
Examples: If the employee table has an index on the column LASTNAME, the
following predicate can be a matching predicate:
SELECT * FROM DSN8610.EMP WHERE LASTNAME = 'SMITH';
Examples: All indexable predicates are stage 1. The predicate C1 LIKE '%BC' is
also stage 1, but is not indexable.
Effect on access paths: In single index processing, only Boolean term predicates
are chosen for matching predicates. Hence, only indexable Boolean term predicates
are candidates for matching index scans. To match index columns by predicates
that are not Boolean terms, DB2 considers multiple index access.
In join operations, Boolean term predicates can reject rows at an earlier stage than
can non-Boolean term predicates.
For left and right outer joins, and for inner joins, join predicates in the ON clause
are treated the same as other stage 1 and stage 2 predicates. A stage 2 predicate
in the ON clause is treated as a stage 2 predicate of the inner table.
For full outer join, the ON clause is evaluated during the join operation like a stage
2 predicate.
In an outer join, predicates that are evaluated after the join are stage 2 predicates.
Predicates in a table expression can be evaluated before the join and can therefore
be stage 1 predicates.
The second set of rules describes the order of predicate evaluation within each of
the above stages:
1. All equal predicates (including column IN list, where list has only one
element).
2. All range predicates and predicates of the form column IS NOT NULL
3. All other predicate types are evaluated.
After both sets of rules are applied, predicates are evaluated in the order in which
they appear in the query. Because you specify that order, you have some control
over the order of evaluation.
The following examples of predicates illustrate the general rules shown in Table 67
on page 641. In each case, assume that there is an index on columns
(C1,C2,C3,C4) of the table and that 0 is the lowest value in each column.
) WHERE C1=5 AND C2=7
Both predicates are stage 1 and the compound predicate is indexable. A
matching index scan could be used with C1 and C2 as matching columns.
) WHERE C1=5 AND C2>7
Both predicates are stage 1 and the compound predicate is indexable. A
matching index scan could be used with C1 and C2 as matching columns.
) WHERE C1>5 AND C2=7
Both predicates are stage 1, but only the first matches the index. A matching
index scan could be used with C1 as a matching column.
) WHERE C1=5 OR C2=7
| Both predicates are stage 1 but not Boolean terms. The compound is
| indexable. When DB2 considers multiple index access for the compound
| predicate, C1 and C2 can be matching columns. For single index access, C1
| and C2 can be only index screening columns.
) WHERE C1=5 OR C2<>7
The first predicate is indexable and stage 1, and the second predicate is stage
1 but not indexable. The compound predicate is stage 1 and not indexable.
) WHERE C1>5 OR C2=7
| Both predicates are stage 1 but not Boolean terms. The compound is
| indexable. When DB2 considers multiple index access for the compound
| predicate, C1 and C2 can be matching columns. For single index access, C1
| and C2 can be only index screening columns.
) WHERE C1 IN (subquery) AND C2=C1
Both predicates are stage 2 and not indexable. The index is not considered for
matching index access, and both predicates are evaluated at stage 2.
) WHERE C1=5 AND C2=7 AND (C3 + 5) IN (7,8)
Only the first two predicates are stage 1 and indexable. The index is
considered for matching index access, and all rows satisfying those two
predicates are passed to stage 2 to evaluate the third predicate.
) WHERE C1=5 OR C2=7 OR (C3 + 5) IN (7,8)
Example: Suppose that DB2 can determine that column C1 of table T contains
only five distinct values: A, D, Q, W and X. In the absence of other information,
DB2 estimates that one-fifth of the rows have the value D in column C1. Then the
predicate C1='D' has the filter factor 0.2 for table T.
How DB2 uses filter factors: Filter factors affect the choice of access paths by
estimating the number of rows qualified by a set of predicates.
Recommendation: You control the first two of those variables when you write a
predicate. Your understanding of DB2's use of filter factors should help you write
more efficient predicates.
Values of the third variable, statistics on the column, are kept in the DB2 catalog.
You can update many of those values, either by running the utility RUNSTATS or
by executing UPDATE for a catalog table. For information about using RUNSTATS,
see the discussion of maintaining statistics in the catalog in Section 4 (Volume 1).
If you intend to update the catalog with statistics of your own choice, you should
understand how DB2 uses:
) “Default filter factors for simple predicates”
) “Filter factors for uniform distributions”
) “Interpolation formulas” on page 648
) “Filter factors for all distributions” on page 649
Example: The default filter factor for the predicate C1 = 'D' is 1/25 (0.04). If D is
actually one of only five distinct values in column C1, the default probably does not
lead to an optimal access path.
Example: If D is one of only five values in column C1, using RUNSTATS will put
the value 5 in column COLCARDF of SYSCOLUMNS. If there are no additional
statistics available, the filter factor for the predicate C1 = 'D' is 1/5 (0.2).
Filter factors for other predicate types: The examples selected in Table 68 on
page 647 and Table 69 on page 647 represent only the most common types of
predicates. If P1 is a predicate and F is its filter factor, then the filter factor of the
predicate NOT P1 is (1 - F). But, filter factor calculation is dependent on many
things, so a specific filter factor cannot be given for all predicate types.
Interpolation formulas
Definition: For a predicate that uses a range of values, DB2 calculates the filter
factor by an interpolation formula. The formula is based on an estimate of the ratio
of the number of values in the range to the number of values in the entire column
of the table.
The formulas: The formulas that follow are rough estimates, subject to further
modification by DB2. They apply to a predicate of the form col op literal. The
value of (Total Entries) in each formula is estimated from the values in columns
HIGH2KEY and LOW2KEY in catalog table SYSIBM.SYSCOLUMNS for column
col: Total Entries = (HIGH2KEY value - LOW2KEY value).
) For the operators < and <=, where the literal is not a host variable:
(Literal value - LOW2KEY value) / (Total Entries)
) For the operators > and >=, where the literal is not a host variable:
(HIGH2KEY value - Literal value) / (Total Entries)
) For LIKE or BETWEEN:
(High literal value - Low literal value) / (Total Entries)
For the predicate C1 BETWEEN 8 AND 11, DB2 calculates the filter factor F as:
F = (11 - 8)/12 = 1/4 = .25
Defaults for interpolation: DB2 might not interpolate in some cases; instead, it
can use a default filter factor. Defaults for interpolation are:
When they are used: Table 71 lists the types of predicates on which these
statistics are used.
Table 71 (Page 1 of 2). Predicates for which distribution statistics are used
Type of statistic   Single column or          Predicates
                    concatenated columns
Frequency           Single                    COL=literal
                                              COL IS NULL
                                              COL IN (literal-list)
                                              COL op literal
                                              COL BETWEEN literal AND literal
Frequency           Concatenated              COL=literal
Suppose that columns C1 and C2 are correlated and are concatenated columns of
an index. Suppose also that the predicate is C1='3' AND C2='5' and that
SYSCOLDIST contains these values for columns C1 and C2:
COLVALUE     FREQUENCYF
'1' '1'      .1176
'2' '2'      .0588
'3' '3'      .0588
'3' '5'      .1176
'4' '4'      .0588
'5' '3'      .1764
'5' '5'      .3529
'6' '6'      .0588
A set of simple, Boolean term, equal predicates on the same column that are
connected by OR predicates can be converted into an IN-list predicate. For
example: C1=5 or C1=10 or C1=15 converts to C1 IN (5,10,15).
| The outer join operation gives you these result table rows:
| ) The rows with matching values of C1 in tables T1 and T2 (the inner join result)
| ) The rows from T1 where C1 has no corresponding value in T2
| ) The rows from T2 where C1 has no corresponding value in T1
| However, when you apply the predicate, you remove all rows in the result table that
| came from T2 where C1 has no corresponding value in T1. DB2 transforms the full
| join into a left join, which is more efficient:
| SELECT * FROM T1 X LEFT JOIN T2 Y
| ON X.C1=Y.C1
| WHERE X.C2 > 12;
| In the following example, the predicate, X.C2>12, filters out all null values that
| result from the right join:
| SELECT * FROM T1 X RIGHT JOIN T2 Y
| ON X.C1=Y.C1
| WHERE X.C2>12;
| The predicate that follows a join operation must have the following characteristics
| before DB2 transforms an outer join into a simpler outer join or into an inner join:
| ) The predicate is a Boolean term predicate.
| ) The predicate is false if one table in the join operation supplies a null value for
| all of its columns.
| These predicates are examples of predicates that can cause DB2 to simplify join
| operations:
| ) T1.C1 > 10
| ) T1.C1 IS NOT NULL
| ) T1.C1 > 10 OR T1.C2 > 15
| ) T1.C1 > T2.C1
| ) T1.C1 IN (1,2,4)
| ) T1.C1 LIKE 'ABC%'
| ) T1.C1 BETWEEN 10 AND 100
| ) 12 BETWEEN T1.C1 AND 100
| The following example shows how DB2 can simplify a join operation because the
| query contains an ON clause that eliminates rows with unmatched values:
| SELECT * FROM T1 X LEFT JOIN T2 Y
| FULL JOIN T3 Z ON Y.C1=Z.C1
| ON X.C1=Y.C1;
| Because the last ON clause eliminates any rows from the result table for which
| column values that come from T1 or T2 are null, DB2 can replace the full join with
| a more efficient left join to achieve the same result:
| SELECT * FROM T1 X LEFT JOIN T2 Y
| LEFT JOIN T3 Z ON Y.C1=Z.C1
| ON X.C1=Y.C1;
| There is one case in which DB2 transforms a full outer join into a left join when you
| cannot write code to do it. This is the case where a view specifies a full outer join,
| but a subsequent query on that view requires only a left outer join. For example,
| consider this view:
| CREATE VIEW V1 (C1,T1C2,T2C2) AS
| SELECT COALESCE(T1.C1, T2.C1), T1.C2, T2.C2
| FROM T1 FULL JOIN T2
| ON T1.C1=T2.C1;
| This view contains rows for which values of C2 that come from T1 are null.
| However, if you execute the following query, you eliminate the rows with null values
| for C2 that come from T1:
| SELECT * FROM V1
| WHERE T1C2 > 1;
Rules for generating predicates: For single-table or inner join queries, DB2
generates predicates for transitive closure if:
) The query has an equal type predicate: COL1=COL2. This could be:
– A local predicate
– A join predicate
| ) The query also has a Boolean term predicate on one of the columns in the first
| predicate with one of the following formats:
| – COL1 op value
| op is =, <>, >, >=, <, or <=.
| value is a constant, host variable, or special register.
| – COL1 (NOT) BETWEEN value1 AND value2
| – COL1=COL3
For outer join queries, DB2 generates predicates for transitive closure if the query
has an ON clause of the form COL1=COL2 and a before join predicate that has
one of the following formats:
) COL1 op value
op is =, <>, >, >=, <, or <=
) COL1 (NOT) BETWEEN value1 AND value2
| DB2 generates a transitive closure predicate for an outer join query only if the
| generated predicate does not reference the table with unmatched rows. That is,
| the generated predicate cannot reference the left table for a left outer join or the
| right table for a right outer join.
When a predicate meets the transitive closure conditions, DB2 generates a new
predicate, whether or not it already exists in the WHERE clause.
Example of transitive closure for an inner join: Suppose that you have written
this query, which meets the conditions for transitive closure:
Example of transitive closure for an outer join: Suppose that you have written
this outer join query:
SELECT * FROM (SELECT * FROM T1 WHERE T1.C1>10) X
LEFT JOIN T2
ON X.C1 = T2.C1;
The before join predicate, T1.C1>10, meets the conditions for transitive closure, so
DB2 generates this query:
SELECT * FROM
(SELECT * FROM T1 WHERE T1.C1>10 AND T2.C1>10) X
LEFT JOIN T2
ON X.C1 = T2.C1;
Adding extra predicates: DB2 performs predicate transitive closure only on equal
and range predicates. Other types of predicates, such as IN or LIKE predicates,
might be needed in the following case:
SELECT * FROM T1,T2
WHERE T1.C1=T2.C1
AND T1.C1 LIKE 'A%';
In this case, add the predicate T2.C1 LIKE 'A%'.
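That is, the query with the suggested predicate added (a restatement of the example above) becomes:

SELECT * FROM T1,T2
  WHERE T1.C1=T2.C1
  AND T1.C1 LIKE 'A%'
  AND T2.C1 LIKE 'A%';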
Column correlation
Two columns of data, A and B of a single table, are correlated if the values in
column A do not vary independently of the values in column B.
The following is an excerpt from a large single table. Columns CITY and STATE
are highly correlated, and columns DEPTNO and SEX are entirely independent.
In this simple example, for every value of column CITY that equals 'FRESNO',
there is the same value in column STATE ('CA').
DB2 chooses an index that returns the fewest rows, partly determined by the
smallest filter factor of the matching columns. Assume that filter factor is the only
influence on the access path. The combined filtering of columns CITY and STATE
seems very good, whereas the matching columns for the second index do not
seem to filter as much. Based on those calculations, DB2 chooses Index 1 as an
access path for Query 1.
The problem is that the filtering of columns CITY and STATE should not look good.
Column STATE does almost no filtering. Since columns DEPTNO and SEX do a
better job of filtering out rows, DB2 should favor Index 2 over Index 1.
Query 2
SELECT ... FROM CREWINFO T1,DEPTINFO T2
WHERE T1.CITY = 'FRESNO' AND T1.STATE='CA' (PREDICATE 1)
AND T1.DEPTNO = T2.DEPT AND T2.DEPTNAME = 'LEGAL';
The order that tables are accessed in a join statement affects performance. The
estimated combined filtering of Predicate1 is lower than its actual filtering. So table
CREWINFO might look better as the first table accessed than it should.
Also, due to the smaller estimated size for table CREWINFO, a nested loop join
might be chosen for the join method. But, if many rows are selected from table
CREWINFO because Predicate1 does not filter as many rows as estimated, then
another join method might be better.
The last two techniques are discussed in “Special techniques to influence access
path selection” on page 668.
The utility RUNSTATS collects the statistics DB2 needs to make proper choices
about queries. With RUNSTATS, you can collect statistics on the concatenated key
columns of an index and the number of distinct values for those concatenated
columns. This gives DB2 accurate information to calculate the filter factor for the
query.
For example, RUNSTATS collects statistics that benefit queries like this:
SELECT * FROM T1
WHERE C1 = 'a' AND C2 = 'b' AND C3 = 'c' ;
where:
) The first three index keys are used (MATCHCOLS = 3).
) An index exists on C1, C2, C3, C4, C5.
) Some or all of the columns in the index are correlated in some way.
DB2 often chooses an access path that performs well for a query with several host
variables. However, in a new release or after maintenance has been applied, DB2
might choose a new access path that does not perform as well as the old access
path. In most cases, the change in access paths is due to the default filter factors,
which might lead DB2 to optimize the query in a different way.
There are two ways to change the access path for a query that contains host
variables:
) Bind the package or plan that contains the query with the option
REOPT(VARS).
) Rewrite the query.
Because there is a performance cost to reoptimizing the access path at run time,
you should use the bind option REOPT(VARS) only on packages or plans
containing statements that perform poorly.
| To determine which queries in plans and packages bound with REOPT(VARS) will
| be reoptimized at run time, execute the following SELECT statements:
If you specify the bind option VALIDATE(RUN), and a statement in the plan or
package is not bound successfully, that statement is incrementally bound at run
time. If you also specify the bind option REOPT(VARS), DB2 reoptimizes the
access path during the incremental bind.
To determine which plans and packages have statements that will be incrementally
bound, execute the following SELECT statements:
SELECT DISTINCT NAME
FROM SYSIBM.SYSSTMT
WHERE STATUS = 'F' OR STATUS = 'H';
SELECT DISTINCT COLLID, NAME, VERSION
FROM SYSIBM.SYSPACKSTMT
WHERE STATUS = 'F' OR STATUS = 'H';
An equal predicate has a default filter factor of 1/COLCARDF. The actual filter
factor might be quite different.
Query:
SELECT * FROM DSN8610.EMP
WHERE SEX = :HV1;
Assumptions: Because there are only two different values in column SEX, 'M' and
'F', the value COLCARDF for SEX is 2. If the numbers of male and female
employees are not equal, the actual filter factor is larger or smaller than the
default of 1/2, depending on whether :HV1 is set to 'M' or 'F'.
Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2.
Query:
SELECT * FROM T1
WHERE C1 BETWEEN :HV1 AND :HV2
AND C2 BETWEEN :HV3 AND :HV4;
Recommendation: If DB2 does not choose T1X1, rewrite the query as follows, so
that DB2 does not choose index T1X2 on C2:
SELECT * FROM T1
WHERE C1 BETWEEN :HV1 AND :HV2
AND (C2 BETWEEN :HV3 AND :HV4 OR 0=1);
Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2.
Assumptions: You know that the application provides both narrow and wide
ranges on C1 and C2. Hence, default filter factors do not allow DB2 to choose the
best access path in all cases. For example, a small range on C1 favors index T1X1
on C1, a small range on C2 favors index T1X2 on C2, and wide ranges on both C1
and C2 favor a table space scan.
Recommendation: If DB2 does not choose the best access path, try either of the
following changes to your application:
) Use a dynamic SQL statement and embed the ranges of C1 and C2 in the
statement. With access to the actual range values, DB2 can estimate the actual
filter factors for the query. Preparing the statement each time it is executed
requires an extra step, but it can be worthwhile if the query accesses a large
amount of data. (A sketch of this technique follows this list.)
) Include some simple logic to check the ranges of C1 and C2, and then execute
one of these static SQL statements, based on the ranges of C1 and C2:
SELECT * FROM T1 WHERE C1 BETWEEN :HV1 AND :HV2
AND (C2 BETWEEN :HV3 AND :HV4 OR 0=1);
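The following is a minimal sketch in C of the first technique. The statement text length, the statement and cursor names, and the host variable names are illustrative; hv1 through hv4 are assumed to be set by the application before this function runs.

#include <stdio.h>

EXEC SQL INCLUDE SQLCA;

EXEC SQL BEGIN DECLARE SECTION;
  char stmttxt[200];
  long int hv1, hv2, hv3, hv4;
EXEC SQL END DECLARE SECTION;

void open_range_cursor(void)
{
  EXEC SQL DECLARE DYNCUR CURSOR FOR RANGESTMT;

  /* Build the statement with the actual range values as literals so that */
  /* DB2 can estimate the real filter factors when it prepares the        */
  /* statement.                                                           */
  sprintf(stmttxt,
          "SELECT * FROM T1 WHERE C1 BETWEEN %ld AND %ld"
          " AND C2 BETWEEN %ld AND %ld",
          hv1, hv2, hv3, hv4);

  EXEC SQL PREPARE RANGESTMT FROM :stmttxt;
  EXEC SQL OPEN DYNCUR;
}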
Example 4: ORDER BY
Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2.
Query:
SELECT * FROM T1
WHERE C1 BETWEEN :HV1 AND :HV2
ORDER BY C2;
In this example, DB2 could choose one of the following actions:
) Scan index T1X1 and then sort the results by column C2
) Scan the table space in which T1 resides and then sort the results by column
C2
) Scan index T1X2 and then apply the predicate to each row of data, thereby
avoiding the sort
Which choice is best depends on the following factors:
) The number of rows that satisfy the range predicate
) Which index has the higher cluster ratio
If the actual number of rows that satisfy the range predicate is significantly different
from the estimate, DB2 might not choose the best access path.
Tables A, B, and C each have indexes on columns C1, C2, C3, and C4.
Query:
SELECT * FROM A, B, C
WHERE A.C1 = B.C1
AND A.C2 = C.C2
AND A.C2 BETWEEN :HV1 AND :HV2
AND A.C3 BETWEEN :HV3 AND :HV4
AND A.C4 < :HV5
AND B.C2 BETWEEN :HV6 AND :HV7
AND B.C3 < :HV8
AND C.C2 < :HV9;
Assumptions: The actual filter factors on table A are much larger than the default
factors. Hence, DB2 underestimates the number of rows selected from table A and
wrongly chooses that as the first table in the join.
The result of making the join predicate between A and B a nonindexable predicate
(which cannot be used in single index access) disfavors the use of the index on
column C1. This, in turn, might lead DB2 to access table A or B first. Or, it might
lead DB2 to change the access type of table A or B, thereby influencing the join
sequence of the other tables.
Topic overview: The topics that follow describe different methods to achieve the
results intended by a subquery and tell what DB2 does for each method. The
information should help you estimate what method performs best for your query.
Finally, for a comparison of the three methods as applied to a single task, see:
) “Subquery tuning” on page 667
Correlated subqueries
Definition: A correlated subquery refers to at least one column of the outer query.
Example: In the following query, the correlation name, X, illustrates the subquery's
reference to the outer query block.
SELECT * FROM DSN8610.EMP X
WHERE JOB = 'DESIGNER'
AND EXISTS (SELECT 1
FROM DSN8610.PROJ
WHERE DEPTNO = X.WORKDEPT
AND MAJPROJ = 'MA2100');
What DB2 does: A correlated subquery is evaluated for each qualified row of the
outer query that is referred to. In executing the example, DB2:
1. Reads a row from table EMP where JOB='DESIGNER'.
2. Searches for the value of WORKDEPT from that row, in a table stored in
memory.
The in-memory table saves executions of the subquery. If the subquery has
already been executed with the value of WORKDEPT, the result of the
subquery is in the table and DB2 does not execute it again for the current row.
Instead, DB2 can skip to step 5.
3. Executes the subquery, if the value of WORKDEPT is not in memory. That
requires searching the PROJ table to check whether there is any project, where
MAJPROJ is 'MA2100', for which the current WORKDEPT is responsible.
4. Stores the value of WORKDEPT and the result of the subquery in memory.
5. Returns the values of the current row of EMP to the application.
DB2 repeats this whole process for each qualified row of the EMP table.
Noncorrelated subqueries
Definition: A noncorrelated subquery makes no reference to outer queries.
Example:
SELECT * FROM DSN8610.EMP
WHERE JOB = 'DESIGNER'
AND WORKDEPT IN (SELECT DEPTNO
FROM DSN8610.PROJ
WHERE MAJPROJ = 'MA2100');
What DB2 does: A noncorrelated subquery is executed once when the cursor is
opened for the query. What DB2 does to process it depends on whether it returns a
single value or more than one value. The query in the example above can return
more than one value.
Single-value subqueries
When the subquery is contained in a predicate with a simple operator, the subquery
is required to return 1 or 0 rows. The simple operator can be one of the following
comparison operators:
   =   <>   <   <=   >   >=
What DB2 does: When the cursor is opened, the subquery executes. If it returns
more than one row, DB2 issues an error. The predicate that contains the subquery
is treated like a simple predicate with a constant specified, for example,
WORKDEPT <= 'value'.
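For example, the following query contains a single-value subquery; the sample DEPT
table is assumed here, and at most one department is expected to have the given name
(a sketch, not taken from the original text). If more than one row qualifies, DB2 issues
an error at run time:
SELECT LASTNAME, FIRSTNAME
  FROM DSN8610.EMP
  WHERE WORKDEPT = (SELECT DEPTNO
                      FROM DSN8610.DEPT
                      WHERE DEPTNAME = 'PLANNING');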
Stage 1 and stage 2 processing: The rules for determining whether a predicate
with a noncorrelated subquery that returns a single value is stage 1 or stage 2 are
generally the same as for the same predicate that contains a single variable; in
certain cases, however, the predicate is stage 2.
Multiple-value subqueries
A subquery can return more than one value if the operator is one of the following:
op ANY op ALL op SOME IN EXISTS
where op is any of the operators >, >=, <, or <=.
What DB2 does: If possible, DB2 reduces a subquery that returns more than one
row to one that returns only a single row. That occurs when there is a range
comparison along with ANY, ALL, or SOME. The following query is an example:
SELECT * FROM DSN8610.EMP
WHERE JOB = 'DESIGNER'
AND WORKDEPT <= ANY (SELECT DEPTNO
FROM DSN8610.PROJ
WHERE MAJPROJ = 'MA2100');
DB2 calculates the maximum value for DEPTNO from table DSN8610.PROJ and
removes the ANY keyword from the query. After this transformation, the subquery
is treated like a single-value subquery.
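Conceptually, after the transformation the query behaves as though it had been coded
with a single-value subquery, roughly as follows (a sketch; DB2 performs this rewrite
internally, so you do not recode the query):
SELECT * FROM DSN8610.EMP
  WHERE JOB = 'DESIGNER'
    AND WORKDEPT <= (SELECT MAX(DEPTNO)
                       FROM DSN8610.PROJ
                       WHERE MAJPROJ = 'MA2100');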
That transformation can be made with a maximum value if the range operator is:
) > or >= with the quantifier ALL
) < or <= with the quantifier ANY or SOME
The transformation can be made with a minimum value if the range operator is:
) < or <= with the quantifier ALL
) > or >= with the quantifier ANY or SOME
When the subquery result is a character data type and the left side of the predicate
is a datetime data type, then the result is placed in a work file without sorting. For
some noncorrelated subqueries using the above comparison operators, DB2 can
more accurately pinpoint an entry point into the work file, thus further reducing the
amount of scanning that is done.
Results from EXPLAIN: For information about the result in a plan table for a
subquery that is sorted, see “When are column functions evaluated?
(COLUMN_FN_EVAL)” on page 696.
If there is a department in the marketing division which has branches in both San
Jose and San Francisco, the result of the above SQL statement is not the same as
if a join were done. The join makes each employee in this department appear twice,
because the employee's row matches once for the department location San Jose and
again for the department location San Francisco, although it is the same
department. Therefore, it is clear
that to transform a subquery into a join, the uniqueness of the subquery select list
must be guaranteed. For this example, a unique index on any of the following sets
of columns would guarantee uniqueness:
) (DEPTNO)
) (DIVISION, DEPTNO)
) (DEPTNO, DIVISION).
Results from EXPLAIN: For information about the result in a plan table for a
subquery that is transformed into a join operation, see “Is a subquery transformed
into a join?” on page 695.
If you need columns from both tables EMP and PROJ in the output, you must use
a join.
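For example, if you also need project numbers from PROJ, a join of the two sample
tables returns columns from both. The following sketch is one possibility; DISTINCT
guards against duplicate employee rows when a department is responsible for several
qualifying projects:
SELECT DISTINCT E.EMPNO, E.LASTNAME, P.PROJNO
  FROM DSN8610.EMP E, DSN8610.PROJ P
  WHERE E.JOB = 'DESIGNER'
    AND E.WORKDEPT = P.DEPTNO
    AND P.MAJPROJ = 'MA2100';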
In general, query A might be the one that performs best. However, if there is no
index on DEPTNO in table PROJ, then query C might perform best. If you decide
that a join cannot be used and there is an available index on DEPTNO in table
PROJ, then query B might perform best.
When looking at a problem subquery, see if the query can be rewritten into another
format or see if there is an index that you can create to help improve the
performance of the subquery.
It is also important to know the sequence of evaluation for the different subquery
predicates and for all other predicates in the query. If the subquery predicate is
costly, perhaps another predicate could be evaluated before it, so that rows are
rejected before the problem subquery predicate is ever evaluated.
This section describes tactics for rewriting queries and modifying catalog
statistics to influence DB2's method of selecting access paths. In a later release
of DB2, the selection method might change, causing your changes to degrade
performance. Save the old catalog statistics or SQL before you consider making
any changes to control the choice of access path. Before and after you make
any changes, take performance measurements. When you migrate to a new
release, examine the performance again. Be prepared to back out any changes
that have degraded performance.
This section contains the following information about determining and changing
access paths:
) Obtaining information about access paths
) “Minimizing overhead for retrieving few rows: OPTIMIZE FOR n ROWS” on
page 669
) “Reducing the number of matching columns” on page 671
) “Adding extra local predicates” on page 673
) “Rearranging the order of tables in a FROM clause” on page 673
) “Updating catalog statistics” on page 676
) “Using a subsystem parameter” on page 678
This section discusses the use of OPTIMIZE FOR n ROWS to affect the
performance of interactive SQL applications. Unless otherwise noted, this
information pertains to local applications. For more information on using OPTIMIZE
FOR n ROWS in distributed applications, see “Specifying OPTIMIZE FOR n
ROWS” on page 397.
What OPTIMIZE FOR n ROWS does: The OPTIMIZE FOR n ROWS clause lets
an application declare its intent to do either of these things:
) Retrieve only a subset of the result set
) Give priority to the retrieval of the first few rows
DB2 uses the OPTIMIZE FOR n ROWS clause to choose access paths that
minimize the response time for retrieving the first few rows. For distributed queries,
the value of n determines the number of rows that DB2 sends to the client on each
DRDA network transmission. See “Specifying OPTIMIZE FOR n ROWS” on
page 397 for more information on using OPTIMIZE FOR n ROWS in the distributed
environment.
Use OPTIMIZE FOR 1 ROW to avoid sorts: You can influence the access path
most by using OPTIMIZE FOR 1 ROW. OPTIMIZE FOR 1 ROW tells DB2 to
select an access path that returns the first qualifying row quickly. This means that
whenever possible, DB2 avoids any access path that involves a sort. If you specify
a value for n that is anything but 1, DB2 chooses an access path based on cost,
and you won't necessarily avoid sorts.
How to specify OPTIMIZE FOR n ROWS for a CLI application: For a Call Level
Interface (CLI) application, you can specify that DB2 uses OPTIMIZE FOR n
ROWS for all queries. To do that, specify the keyword OPTIMIZEFORNROWS in
the initialization file. For more information, see Chapter 4 of DB2 ODBC Guide and
Reference.
How many rows you can retrieve with OPTIMIZE FOR n ROWS: The OPTIMIZE
FOR n ROWS clause does not prevent you from retrieving all the qualifying rows.
However, if you use OPTIMIZE FOR n ROWS, the total elapsed time to retrieve all
the qualifying rows might be significantly greater than if DB2 had optimized for the
entire result set.
Example: Suppose you query the employee table regularly to determine the
employees with the highest salaries. You might use a query like this:
SELECT LASTNAME, FIRSTNAME, EMPNO, SALARY
FROM EMPLOYEE
ORDER BY SALARY DESC;
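If you are generally interested in seeing only, say, the 20 highest-paid employees,
you might add the clause to the query like this (the value 20 is only an
illustration):
SELECT LASTNAME, FIRSTNAME, EMPNO, SALARY
  FROM EMPLOYEE
  ORDER BY SALARY DESC
  OPTIMIZE FOR 20 ROWS;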
# When you specify OPTIMIZE FOR n ROWS for a remote query, a small value of n
# can help limit the number of rows that flow across the network on any given
# transmission.
# You can improve the performance for receiving a large result set through a remote
# query by specifying a large value of n in OPTIMIZE FOR n ROWS. When you
# specify a large value, DB2 attempts to send the n rows in multiple transmissions.
# For better performance when retrieving a large result set, in addition to specifying
# OPTIMIZE FOR n ROWS with a large value of n in your query, do not execute
# other SQL statements until the entire result set for the query is processed. If
# retrieval of data for several queries overlaps, DB2 might need to buffer result set
# data in the DDF address space. See "Block fetching result sets" in Section 5
# (Volume 2) of DB2 Administration Guide for more information.
# For local or remote queries, to influence the access path most, specify OPTIMIZE
# FOR 1 ROW. This value does not have a detrimental effect on distributed queries.
DB2 picks IX2 to access the data, but IX1 would be roughly 10 times quicker. The
problem is that 50% of all parts from center number 3 are still in Center 3; they
have not moved. Assume that there are no statistics on the correlated columns in
catalog table SYSCOLDIST. Therefore, DB2 assumes that the parts from center
number 3 are evenly distributed among the 50 centers.
You can get the desired access path by changing the query. To discourage the use
of IX2 for this particular query, you can change the third predicate to be
nonindexable.
SELECT * FROM PART_HISTORY
WHERE
PART_TYPE = 'BB'
AND W_FROM = 3
AND (W_NOW = 3 + 0)   <-- PREDICATE IS MADE NONINDEXABLE
Now index IX2 is not picked, because it has only one matching column. The preferred
index, IX1, is picked. The third predicate is a nonindexable predicate, so an index is
not used for the compound predicate.
There are many ways to make a predicate nonindexable. The recommended way is to
add 0 to a predicate that evaluates to a numeric value, or to concatenate an empty
string to a predicate that evaluates to a character value:
Indexable                  Nonindexable
T1.C3=T2.C4                (T1.C3=T2.C4 CONCAT '')
T1.C1=5                    T1.C1=5+0
The preferred technique for improving the access path when a table has correlated
columns is to generate catalog statistics on the correlated columns. You can do
that either by running RUNSTATS or by updating catalog table SYSCOLDIST or
SYSCOLDISTSTATS manually.
Q1:
SELECT * FROM PART_HISTORY -- SELECT ALL PARTS
WHERE PART_TYPE = 'BB' P1 -- THAT ARE 'BB' TYPES
AND W_FROM = 3 P2 -- THAT WERE MADE IN CENTER 3
AND W_NOW = 3 P3 -- AND ARE STILL IN CENTER 3
+------------------------------------------------------------------------------+
| Filter factor of these predicates.                                            |
| P1 = 1/1000 = .001                                                            |
| P2 = 1/50   = .02                                                             |
| P3 = 1/50   = .02                                                             |
|------------------------------------------------------------------------------|
|          ESTIMATED VALUES             |        WHAT REALLY HAPPENS            |
|                  filter      data     |                  filter      data     |
| index  matchcols factor      rows     | index  matchcols factor      rows     |
|  ix2      2      .02*.02      40      |  ix2      2      .02*.50     1000     |
|  ix1      1      .001        100      |  ix1      1      .001         100     |
+------------------------------------------------------------------------------+
Figure 176. Reducing the number of MATCHCOLS
This does not change the result of the query. It is valid for a column of any data
type, and causes a minimal amount of overhead. However, DB2 uses only the best
filter factor for any particular column. So, if TX.CX already has another equal
predicate on it, adding this extra predicate has no effect. You should add the extra
local predicate to a column that is not involved in a predicate already. If index-only
access is possible for a table, it is generally not a good idea to add a predicate that
would prevent index-only access.
To access the data in a star schema, you write SELECT statements that include
join operations between the fact table and the dimension tables; no join operations
exist between dimension tables. When the query meets the following conditions,
that query is a candidate for a type of join called star join:
) The query references at least two dimensions.
) All join predicates are between the fact table and the dimension tables, or
within tables of the same dimension.
) All join predicates between the fact table and dimension tables are equi-join
predicates.
) All join predicates between the fact table and dimension tables are Boolean
term predicates. For more information on Boolean term predicates, see “Boolean
term (BT) predicates” on page 639.
See “Interpreting access to two or more tables” on page 703 for more information
on and examples of star schemas and star joins.
You can improve the performance of star joins by your use of indexes. This section
gives suggestions for choosing indexes that give the best star join performance.
F A fact table.
D1...Dn
Dimension tables.
cardD1...cardDn
Cardinality of columns C1...Cn in dimension tables D1...Dn.
cardC1...cardCn
Cardinality of key columns C1...Cn in fact table F.
cardCij
Cardinality of pairs of column values from key columns Ci and Cj in fact table
F.
cardCijk
Cardinality of triplets of column values from key columns Ci, Cj, and Ck in fact
table F.
Density
A measure of the correlation of key columns in the fact table. The density of a
set of key columns is the cardinality of the combination of those columns in fact
table F divided by the product of the cardinalities of the corresponding columns
in the dimension tables.
S The current set of columns whose order in the index is not yet determined.
S-{Cm}
The current set of columns, excluding column Cm
Follow these steps to derive a fact table index for a star join that joins n columns of
fact table F to n dimension tables D1 through Dn:
1. Define the set of columns whose index key order is to be determined as the n
columns of fact table F that correspond to dimension tables. That is,
S={C1,...Cn} and L=n.
2. Calculate the density of all sets of L-1 columns in S.
3. Find the lowest density. Determine which column is not in the set of columns
with the lowest density. That is, find column Cm in S, such that for every Ci in
S, density(S-{Cm})<density(S-{Ci}).
4. Make Cm the Lth column of the index.
5. Remove Cm from S.
6. Decrement L by 1.
7. Repeat steps 2 through 6 n-2 times. The remaining column after iteration n-2 is
the first column of the index.
Example of determining column order for a fact table index: Suppose that a
star schema has a fact table F whose key columns C1, C2, and C3 correspond to three
dimension tables.
Step 1: Calculate the density of all pairs of columns in the fact table:
density(C1,C2) = .625
density(C1,C3) = .98
density(C2,C3) = .1988
Step 2: Find the pair of columns with the lowest density. That pair is (C2,C3).
Determine which column of the fact table is not in that pair. That column is C1.
Step 3: Make C1 the third (last) column of the index, and remove C1 from the set of
columns whose order is still to be determined.
Step 4: Repeat steps 1 through 3 to determine the second and first columns of the
index key:
density(C2) = .866
density(C3) = 1.0
The remaining set with the lowest density is (C2). Therefore, C3, the column that is
not in that set, is the second column of the index. The remaining column, C2, is the
first column of the index. That is, the best order for the multi-column index is C2,
C3, C1.
The example shown in Figure 176 on page 672, involving this query:
SELECT * FROM PART_HISTORY -- SELECT ALL PARTS
WHERE PART_TYPE = 'BB' P1 -- THAT ARE 'BB' TYPES
AND W_FROM = 3 P2 -- THAT WERE MADE IN CENTER 3
AND W_NOW = 3 P3 -- AND ARE STILL IN CENTER 3
is a problem with data correlation. DB2 does not know that 50% of the parts that
were made in Center 3 are still in Center 3. The problem was circumvented earlier by
making a predicate nonindexable. But suppose there are hundreds of users writing queries
similar to that query. It would not be possible to have all users change their
queries. In this type of situation, the best solution is to change the catalog statistics.
Updating the catalog to adjust for correlated columns: One catalog table you
can update is SYSIBM.SYSCOLDIST, which gives information about the first key
column or concatenated columns of an index key. Assume that because columns
W_NOW and W_FROM are correlated, there are only 100 distinct values for the
combination of the two columns, rather than 2500 (50 for W_FROM * 50 for
W_NOW). Insert a row like this to indicate the new cardinality:
INSERT INTO SYSIBM.SYSCOLDIST
(FREQUENCY, FREQUENCYF, IBMREQD,
TBOWNER, TBNAME, NAME, COLVALUE,
TYPE, CARDF, COLGROUPCOLNO, NUMCOLUMNS)
VALUES(0, -1, 'N',
'USRT1','PART_HISTORY','W_FROM',' ',
'C',100,X'43',2);
Because W_FROM and W_NOW are concatenated key columns of an index, you
can also put this information in SYSCOLDIST using the RUNSTATS utility. See
DB2 Utility Guide and Reference for more information.
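For example, a RUNSTATS control statement of roughly the following form collects
cardinality and frequency statistics for the first two key columns of the index; the
index name used here is only an assumption for illustration:
RUNSTATS INDEX (USRT1.IX2)
  KEYCARD
  FREQVAL NUMCOLS 2 COUNT 10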
You can also tell DB2 about the frequency of a certain combination of column
values by updating SYSIBM.SYSCOLDIST. For example, you can indicate that 1%
of the rows in PART_HISTORY contain the values 3 for W_FROM and 3 for
W_NOW by inserting this row into SYSCOLDIST:
INSERT INTO SYSIBM.SYSCOLDIST
(FREQUENCY, FREQUENCYF, STATSTIME, IBMREQD,
TBOWNER, TBNAME, NAME, COLVALUE,
TYPE, CARDF, COLGROUPCOLNO, NUMCOLUMNS)
VALUES(0, .0100, '1996-12-01-12.00.00.000000','N',
'USRT1','PART_HISTORY','W_FROM',X'8383',
'F',-1,X'43',2);
# The best solution to the problem is to run RUNSTATS again after the table is
# populated. However, if it is not possible to do that, you can use subsystem
# parameter NPGTHRSH to cause DB2 to favor matching index access over a table
# space scan and over nonmatching index access.
# To set NPGTHRSH, which is in macro DSN6SPRM, modify and run installation job
# DSNTIJUZ. See Section 2 of DB2 Installation Guide for information on how to set
# subsystem parameters. The value of NPGTHRSH is an integer that indicates the
# tables for which DB2 favors matching index access. Values of NPGTHRSH and
# their meanings are:
# −1 DB2 favors matching index access for all tables.
# 0 DB2 selects the access path based on cost, and no tables qualify
# for special handling. This is the default.
# n>=1 If data access statistics have been collected for all tables, DB2
# favors matching index access for tables for which the total number
# of pages on which rows of the table appear (NPAGES) is less than
# n.
# If data access statistics have not been collected for some tables
# (NPAGES=-1 for those tables), DB2 favors matching index access
# for tables for which NPAGES=-1 or NPAGES<n.
Other tools: The following tools can help you tune SQL queries:
) DB2 Visual Explain
Visual Explain is a graphical workstation feature of DB2 that provides:
– An easy-to-understand display of a selected access path
– Suggestions for changing an SQL statement
– An ability to invoke EXPLAIN for dynamic SQL statements
– An ability to provide DB2 catalog statistics for referenced objects of an
access path
– A subsystem parameter browser with keyword 'Find' capabilities
For information on using DB2 Visual Explain, which is a separately packaged
CD-ROM provided with your DB2 Version 6 license, see DB2 Visual Explain
online help.
) DB2 Performance Monitor (PM)
DB2 PM is a performance monitoring tool that formats performance data. DB2
PM combines information from EXPLAIN and from the DB2 catalog. It displays
access paths, indexes, tables, table spaces, plans, packages, DBRMs, host
variable definitions, ordering, table access and join sequences, and lock types.
Output is presented in a dialog rather than as a table, making the information
easy to read and understand.
) DB2 Estimator
| DB2 Estimator for Windows is an easy-to-use, stand-alone tool for estimating
| the performance of DB2 for OS/390 applications. You can use it to predict the
| performance and cost of running the applications, transactions, and SQL
| statements that you describe to it.
For each access to a single table, EXPLAIN tells you if an index access or table
space scan is used. If indexes are used, EXPLAIN tells you how many indexes and
index columns are used and what I/O methods are used to read the pages. For
joins of tables, EXPLAIN tells you which join method and type are used, the order
in which DB2 joins the tables, and when and why it sorts any rows.
The primary use of EXPLAIN is to observe the access paths for the SELECT parts
of your statements. For UPDATE and DELETE WHERE CURRENT OF, and for
INSERT, you receive somewhat less information in your plan table. And some
| accesses EXPLAIN does not describe: for example, the access to LOB values,
| which are stored separately from the base table, and access to parent or
dependent tables needed to enforce referential constraints.
The access paths shown for the example queries in this chapter are intended only
to illustrate those examples. If you execute the queries in this chapter on your
system, the access paths chosen can be different.
Figure 177 shows the format of a plan table. Table 73 on page 682 shows the
content of each column.
Your plan table can use many formats, but use the 49-column format because it
gives you the most information. If you alter an existing plan table to add new
columns, specify the columns as NOT NULL WITH DEFAULT, so that default
values are included for the rows already in the table. However, as you can see in
Figure 177, certain columns do allow nulls. Do not specify those columns as NOT
NULL WITH DEFAULT.
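For example, if your existing plan table is at the 46-column format, statements along
the following lines bring it to the 49-column format ('userid' is a placeholder for the
plan table owner):
ALTER TABLE userid.PLAN_TABLE
  ADD OPTHINT CHAR(8) NOT NULL WITH DEFAULT;
ALTER TABLE userid.PLAN_TABLE
  ADD HINT_USED CHAR(8) NOT NULL WITH DEFAULT;
ALTER TABLE userid.PLAN_TABLE
  ADD PRIMARY_ACCESSTYPE CHAR(1) NOT NULL WITH DEFAULT;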
QUERYNO            INTEGER      NOT NULL
QBLOCKNO           SMALLINT     NOT NULL
APPLNAME           CHAR(8)      NOT NULL
PROGNAME           CHAR(8)      NOT NULL
PLANNO             SMALLINT     NOT NULL
METHOD             SMALLINT     NOT NULL
CREATOR            CHAR(8)      NOT NULL
TNAME              CHAR(18)     NOT NULL
TABNO              SMALLINT     NOT NULL
ACCESSTYPE         CHAR(2)      NOT NULL
MATCHCOLS          SMALLINT     NOT NULL
ACCESSCREATOR      CHAR(8)      NOT NULL
ACCESSNAME         CHAR(18)     NOT NULL
INDEXONLY          CHAR(1)      NOT NULL
SORTN_UNIQ         CHAR(1)      NOT NULL
SORTN_JOIN         CHAR(1)      NOT NULL
SORTN_ORDERBY      CHAR(1)      NOT NULL
SORTN_GROUPBY      CHAR(1)      NOT NULL
SORTC_UNIQ         CHAR(1)      NOT NULL
SORTC_JOIN         CHAR(1)      NOT NULL
SORTC_ORDERBY      CHAR(1)      NOT NULL
SORTC_GROUPBY      CHAR(1)      NOT NULL
TSLOCKMODE         CHAR(3)      NOT NULL
TIMESTAMP          CHAR(16)     NOT NULL
REMARKS            VARCHAR(254) NOT NULL
-------- 25 column format --------
PREFETCH           CHAR(1)      NOT NULL WITH DEFAULT
COLUMN_FN_EVAL     CHAR(1)      NOT NULL WITH DEFAULT
MIXOPSEQ           SMALLINT     NOT NULL WITH DEFAULT
-------- 28 column format --------
VERSION            VARCHAR(64)  NOT NULL WITH DEFAULT
COLLID             CHAR(18)     NOT NULL WITH DEFAULT
-------- 30 column format --------
ACCESS_DEGREE      SMALLINT
ACCESS_PGROUP_ID   SMALLINT
JOIN_DEGREE        SMALLINT
JOIN_PGROUP_ID     SMALLINT
-------- 34 column format --------
SORTC_PGROUP_ID    SMALLINT
SORTN_PGROUP_ID    SMALLINT
PARALLELISM_MODE   CHAR(1)
MERGE_JOIN_COLS    SMALLINT
CORRELATION_NAME   CHAR(18)
PAGE_RANGE         CHAR(1)      NOT NULL WITH DEFAULT
JOIN_TYPE          CHAR(1)      NOT NULL WITH DEFAULT
GROUP_MEMBER       CHAR(8)      NOT NULL WITH DEFAULT
IBM_SERVICE_DATA   VARCHAR(254) NOT NULL WITH DEFAULT
-------- 43 column format --------
WHEN_OPTIMIZE      CHAR(1)      NOT NULL WITH DEFAULT
QBLOCK_TYPE        CHAR(6)      NOT NULL WITH DEFAULT
BIND_TIME          TIMESTAMP    NOT NULL WITH DEFAULT
-------- 46 column format --------
OPTHINT            CHAR(8)      NOT NULL WITH DEFAULT
HINT_USED          CHAR(8)      NOT NULL WITH DEFAULT
PRIMARY_ACCESSTYPE CHAR(1)      NOT NULL WITH DEFAULT
-------- 49 column format --------
| Figure 177. Format of PLAN_TABLE
For tips on maintaining a growing plan table, see “Maintaining a plan table.”
EXPLAIN for remote binds: A remote requester that accesses DB2 can specify
EXPLAIN(YES) when binding a package at the DB2 server. The information
appears in a plan table at the server, not at the requester. If the requester does not
support the propagation of the option EXPLAIN(YES), rebind the package at the
requester with that option to obtain access path information. You cannot get
information about access paths for SQL statements that use private protocol.
All rows with the same non-zero value for QBLOCKNO and the same value for
QUERYNO relate to a step within the query. QBLOCKNOs are not necessarily
executed in the order shown in PLAN_TABLE. But within a QBLOCKNO, the
PLANNO column gives the substeps in the order they execute.
For each substep, the TNAME column identifies the table accessed. Sorts can be
shown as part of a table access or as a separate step.
What if QUERYNO=0? In a program with more than 32767 lines, all values of
QUERYNO greater than 32767 are reported as 0. For entries containing
QUERYNO=0, use the timestamp, which is guaranteed to be unique, to distinguish
individual statements.
COLLID gives the COLLECTION name, and PROGNAME gives the PACKAGE_ID.
The following query of a plan table returns the rows for all the explainable
statements in a package in their logical order:
SELECT * FROM JOE.PLAN_TABLE
WHERE PROGNAME = 'PACK1' AND COLLID = 'COLL1' AND VERSION = 'PROD1'
ORDER BY QUERYNO, QBLOCKNO, PLANNO, MIXOPSEQ;
The examples in Figure 178 and Figure 179 on page 689 have these indexes: IX1
on T(C1) and IX2 on T(C2). DB2 processes the query in the following steps:
| 1. Retrieve all the qualifying record identifiers (RIDs) where C1=1, using index
| IX1.
2. Retrieve all the qualifying RIDs where C2=1, using index IX2. The intersection
of those lists is the final set of RIDs.
3. Access the data pages needed to retrieve the qualified rows using the final RID
list.
SELECT * FROM T
WHERE C1 = 1 AND C2 = 1;
Figure 178. PLAN_TABLE output for example with intersection (AND) operator
SELECT * FROM T
WHERE C1 BETWEEN 100 AND 199 OR
C1 BETWEEN 500 AND 599;
Figure 179. PLAN_TABLE output for example with Union (OR) Operator
In general, the matching predicates on the leading index columns are equal or IN
predicates. The predicate that matches the final index column can be an equal, IN,
or range predicate (<, <=, >, >=, LIKE, or BETWEEN).
The index XEMP5 is the chosen access path for this query, with MATCHCOLS = 3.
Two equal predicates are on the first two columns, and a range predicate is on the
third column. Although the index has four columns, only three of them can be
considered matching columns.
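The query itself is not reproduced above, but a query of the following general shape,
with a four-column index on (C1, C2, C3, C4), yields MATCHCOLS = 3 in the same way
(a hypothetical sketch, not the XEMP5 example):
SELECT * FROM T
  WHERE C1 = 1
    AND C2 = 1
    AND C3 > 1;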
If access is by more than one index, INDEXONLY is Y for a step with access type
MX, because the data pages are not actually accessed until all the steps for
intersection (MI) or union (MU) take place.
| When an SQL application uses index-only access for a ROWID column, the
| application claims the table space or table space partition. As a result, contention
| may occur between the SQL application and a utility that drains the table space or
| partition. Index-only access to a table for a ROWID column is not possible if the
| associated table space or partition is in an incompatible restrictive state. For
| example, an SQL application can make a read claim on the table space only if the
| restrictive state allows readers.
| Direct row access is very fast, because DB2 does not need to use the index or a
| table space scan to find the row. Direct row access can be used on any table that
| has a ROWID column.
| To use direct row access, you first select the values of a row into host variables.
| The value that is selected from the ROWID column contains the location of that
| row. Later, when you perform queries that access that row, you include the row ID
| value in the search condition. If DB2 determines that it can use direct row access, it
| uses the row ID value to navigate directly to the row. See “Example: Coding with
| row IDs for direct row access” on page 692 for a coding example.
| Searching for propagated rows: If rows are propagated from one table to
| another, do not expect to use the same row ID value from the source table to
| search for the same row in the target table, or vice versa. This does not work
| when direct row access is the access path chosen. For example, assume that the
| host variable below contains a row ID from SOURCE:
| SELECT * FROM TARGET
| WHERE ID = :hv_rowid
| Because the row ID location is not the same as in the source table, DB2 will most
| likely not find that row. Search on another column to retrieve the row you want.
| Reverting to ACCESSTYPE
| Although DB2 might plan to use direct row access, circumstances can cause DB2
| to not use direct row access at run time. DB2 remembers the location of the row as
| of the time it is accessed. However, that row can change locations (such as after a
| REORG) between the first and second time it is accessed, which means that DB2
| cannot use direct row access to find the row on the second access attempt. Instead
| of using direct row access, DB2 uses the access path that is shown in the
| ACCESSTYPE column of PLAN_TABLE.
| If the predicate you are using to do direct row access is not indexable and if DB2 is
| unable to use direct row access, then DB2 uses a table space scan to find the row.
| This can have a profound impact on the performance of applications that rely on
| direct row access. Write your applications to handle the possibility that direct row
| access might not be used. Some options are to:
| ) Ensure that your application does not try to remember ROWID columns across
| reorganizations of the table space.
| When your application commits, it releases its claim on the table space; it is
| possible that a REORG can run and move the row, which disables direct row
| access. Plan your commit processing accordingly; use the returned row ID
| value before committing, or re-select the row ID value after a commit is issued.
| If you are storing ROWID columns from another table, update those values
| after the table with the ROWID column is reorganized.
| ) Create an index on the ROWID column, so that DB2 can use the index if direct
| row access is disabled.
| ) Supplement the ROWID column predicate with another predicate that enables
| DB2 to use an existing index on the table. For example, after reading a row, an
| application might perform the following update:
| EXEC SQL UPDATE EMP
| SET SALARY = :hv_salary + 12
| WHERE EMP_ROWID = :hv_emp_rowid
| AND EMPNO = :hv_empno;
| RID list processing: Direct row access and RID list processing are mutually
| exclusive. If a query qualifies for both direct row access and RID list processing,
| direct row access is used. If direct row access fails, DB2 does not revert to RID list
| processing; instead it reverts to the backup access type.
| /**************************/
| /* Declare host variables */
| /**************************/
| EXEC SQL BEGIN DECLARE SECTION;
| SQL TYPE IS BLOB_LOCATOR hv_picture;
| SQL TYPE IS CLOB_LOCATOR hv_resume;
| SQL TYPE IS ROWID hv_emp_rowid;
| short hv_dept, hv_id;
| char hv_name[30];
| decimal(5,2) hv_salary;
| EXEC SQL END DECLARE SECTION;
| /**********************************************************/
| /* Retrieve the picture and resume from the PIC_RES table */
| /**********************************************************/
| strcpy(hv_name, "Jones");
| EXEC SQL SELECT PR.PICTURE, PR.RESUME INTO :hv_picture, :hv_resume
| FROM PIC_RES PR
| WHERE PR.Name = :hv_name;
| Figure 180 (Part 1 of 2). Example of using a row ID value for direct row access
| /**********************************************************/
| /* Now retrieve some information about that row, */
| /* including the ROWID value. */
| /**********************************************************/
| hv_dept = 99;
| EXEC SQL SELECT E.SALARY, E.EMP_ROWID
| INTO :hv_salary, :hv_emp_rowid
| FROM EMPDATA E
| WHERE E.DEPTNUM = :hv_dept AND E.NAME = :hv_name;
| /**********************************************************/
| /* Update columns SALARY, PICTURE, and RESUME. Use the */
| /* ROWID value you obtained in the previous statement */
| /* to access the row you want to update. */
| /* smiley_face and update_resume are */
| /* user-defined functions that are not shown here. */
| /**********************************************************/
| EXEC SQL UPDATE EMPDATA
| SET SALARY = :hv_salary + 12,
| PICTURE = smiley_face(:hv_picture),
| RESUME = update_resume(:hv_resume)
| WHERE EMP_ROWID = :hv_emp_rowid;
| /**********************************************************/
| /* Use the ROWID value to obtain the employee ID from the */
| /* same record. */
| /**********************************************************/
| EXEC SQL SELECT E.ID INTO :hv_id
| FROM EMPDATA E
| WHERE E.EMP_ROWID = :hv_emp_rowid;
| /**********************************************************/
| /* Use the ROWID value to delete the employee record */
| /* from the table. */
| /**********************************************************/
| EXEC SQL DELETE FROM EMPDATA
| WHERE EMP_ROWID = :hv_emp_rowid;
| Figure 180 (Part 2 of 2). Example of using a row ID value for direct row access
A limited partition scan can be combined with other access methods. For example,
consider the following query:
| SELECT .. FROM T
| WHERE (C1 BETWEEN '2002' AND '3280'
| OR C1 BETWEEN '6000' AND '8000')
| AND C2 = '6';
Assume that table T has a partitioned index on column C1 and that values of C1
between 2002 and 3280 all appear in partitions 3 and 4 and the values between
6000 and 8000 appear in partitions 8 and 9. Assume also that T has another index
on column C2. DB2 could choose any of these access methods:
) A matching index scan on column C1. The scan reads index values and data
only from partitions 3, 4, 8, and 9. (PAGE_RANGE=N)
) A matching index scan on column C2. (DB2 might choose that if few rows have
C2=6.) The matching index scan reads all RIDs for C2=6 from the index on C2
| and corresponding data pages from partitions 3, 4, 8, and 9.
| (PAGE_RANGE=Y)
) A table space scan on T. DB2 avoids reading data pages from any partitions
except 3, 4, 8 and 9. (PAGE_RANGE=Y)
Joins: Limited partition scan can be used for each table accessed in a join.
If you have predicates using an OR operator and one of the predicates refers to a
column of the partitioning index that is not the first key column of the index, then
DB2 does not use limited partition scan.
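For example, if C1 is the first key column of the partitioning index and C2 is not, a
compound predicate like the following one rules out limited partition scan (a sketch
based on the preceding query):
SELECT .. FROM T
  WHERE C1 BETWEEN '2002' AND '3280'
     OR C2 = '6';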
METHOD 3 Sorts: These are used for ORDER BY, GROUP BY, SELECT
| DISTINCT, UNION, or a quantified predicate. (A quantified predicate is 'col = ANY
| (subselect)' or 'col = SOME (subselect)' ). They are indicated on a separate row. A
single row of the plan table can indicate two sorts of a composite table, but only
one sort is actually done.
Generally, values of R and S are considered better for performance than a blank.
| Use variance and standard deviation with care: The VARIANCE and STDDEV
| functions are always evaluated late (that is, COLUMN_FN_EVAL is blank). This
| causes other functions in the same query block to be evaluated late as well. For
| example, in the following query, the sum function is evaluated later than it would be
| if the variance function was not present:
| SELECT SUM(C1), VARIANCE(C1) FROM T1;
Assume that table T has no index on C1. The following is an example that uses a
table space scan:
SELECT * FROM T WHERE C1 = VALUE;
If you do not want to use sequential prefetch for a particular query, consider adding
to it the clause OPTIMIZE FOR 1 ROW.
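For example, to discourage sequential prefetch for the preceding statement, you might
code it as follows (a sketch):
SELECT * FROM T
  WHERE C1 = VALUE
  OPTIMIZE FOR 1 ROW;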
In the general case, the rules for determining the number of matching columns are
simple, although there are a few exceptions.
| ) Look at the index columns from leading to trailing. For each index column,
| search for an indexable boolean term predicate on that column. (See
| “Properties of predicates” on page 636 for a definition of boolean term.) If such
| a predicate is found, then it can be used as a matching predicate.
| Column MATCHCOLS in a plan table shows how many of the index columns
| are matched by predicates.
) If no matching predicate is found for a column, the search for matching
predicates stops.
) If a matching predicate is a range predicate, then there can be no more
matching columns. For example, in the matching index scan example that
follows, the range predicate C2>1 prevents the search for additional matching
columns.
# ) For star joins, a missing key predicate does not cause termination of matching
# columns that are to be used on the fact table index.
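The matching index scan example is not reproduced above; a query of roughly the
following form, with the index on (C1, C2, C3, C4), matches the description in the
next paragraph:
SELECT * FROM T
  WHERE C1 = 1
    AND C2 > 1
    AND C3 = 1;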
Two matching columns occur in this example. The first one comes from the
predicate C1=1, and the second one comes from C2>1. The range predicate on C2
prevents C3 from becoming a matching column.
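The index screening example that the next paragraph describes is likewise not
reproduced; it has roughly this form, again assuming the index on (C1, C2, C3, C4):
SELECT * FROM T
  WHERE C1 = 1
    AND C3 > 0
    AND C4 = 2
    AND C5 = 8;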
The predicates can be applied on the index, but they are not matching predicates.
C5=8 is not an index screening predicate, and it must be evaluated when data is
retrieved. The value of MATCHCOLS in the plan table is 1.
You can regard the IN-list index scan as a series of matching index scans with the
values in the IN predicate being used for each matching index scan. The following
example has an index on (C1,C2,C3,C4) and might use an IN-list index scan:
SELECT * FROM T
WHERE C1=1 AND C2 IN (1,2,3)
AND C3>0 AND C4<1;
The plan table shows MATCHCOLS = 3 and ACCESSTYPE = N. The IN-list scan
is performed as the following three matching index scans:
(C1=1,C2=1,C3>0), (C1=1,C2=2,C3>0), (C1=1,C2=3,C3>0)
RID lists are constructed for each of the indexes involved. The unions or
intersections of the RID lists produce a final list of qualified RIDs that is used to
retrieve the result rows, using list prefetch. You can consider multiple index access
as an extension to list prefetch with more complex RID retrieval operations in its
first phase. The complex operators are union and intersection.
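The query that this discussion refers to is not shown above. Based on the steps listed
below, it has roughly the following form, with index EMPX1 on AGE and index EMPX2
on JOB (the table name EMP is an assumption):
SELECT * FROM EMP
  WHERE (AGE = 34)
     OR (AGE = 40 AND JOB = 'MANAGER');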
The plan table contains a sequence of rows describing the access. For this query,
ACCESSTYPE uses the following values:
Value Meaning
M Start of multiple index access processing
MX Indexes are to be scanned for later union or intersection
MI An intersection (AND) is performed
MU A union (OR) is performed
The following steps relate to the previous query and the values shown for the plan
table in Figure 181 on page 701:
1. Index EMPX1, with matching predicate AGE= 34, provides a set of candidates
for the result of the query. The value of MIXOPSEQ is 1.
2. Index EMPX1, with matching predicate AGE = 40, also provides a set of
candidates for the result of the query. The value of MIXOPSEQ is 2.
3. Index EMPX2, with matching predicate JOB='MANAGER', also provides a set
of candidates for the result of the query. The value of MIXOPSEQ is 3.
4. The first intersection (AND) is done, and the value of MIXOPSEQ is 4. This MI
removes the two previous candidate lists (produced by MIXOPSEQs 2 and 3)
by intersecting them to form an intermediate candidate list, IR1, which is not
shown in PLAN_TABLE.
5. The last step, where the value MIXOPSEQ is 5, is a union (OR) of the two
remaining candidate lists, which are IR1 and the candidate list produced by
MIXOPSEQ 1. This final union gives the result for the query.
Figure 181. Plan table output for a query that uses multiple indexes. Depending on the
filter factors of the predicates, the access steps can appear in a different order.
In this example, the steps in the multiple index access follow the physical sequence
of the predicates in the query. This is not always the case. The multiple index steps
are arranged in an order that uses RID pool storage most efficiently and for the
least amount of time.
Queries using one-fetch index access: The following queries use one-fetch
index scan with an index existing on T(C1,C2 DESC,C3):
SELECT MIN(C1) FROM T;
SELECT MIN(C1) FROM T WHERE C1>5;
SELECT MIN(C1) FROM T WHERE C1>5 AND C1<10;
SELECT MAX(C2) FROM T WHERE C1=5;
SELECT MAX(C2) FROM T WHERE C1=5 AND C2>5;
SELECT MAX(C2) FROM T WHERE C1=5 AND C2>5 AND C2<10;
SELECT MAX(C2) FROM T WHERE C1=5 AND C2 BETWEEN 5 AND 10;
With an index on T(C1,C2), the following queries can use index-only access:
SELECT C1, C2 FROM T WHERE C1 > 0;
SELECT C1, C2 FROM T;
SELECT COUNT(*) FROM T WHERE C1 = 1;
| Sometimes DB2 can determine that an index that is not fully matching is actually an
| equal unique index case. Assume the following case:
| Unique Index1: (C1, C2)
| Unique Index2: (C2, C1, C3)
| SELECT C3 FROM T
| WHERE C1 = 1 AND
| C2 = 5;
Index1 is a fully matching equal unique index. However, Index2 is also an equal
unique index even though it is not fully matching. Index2 is the better choice
because, in addition to being equal and unique, it also provides index-only access.
To use a matching index scan on an index whose key columns are being updated, the
following conditions must be met (see the sketch after this list):
) Each updated key column must have a corresponding predicate of the form
"index_key_column = constant" or "index_key_column IS NULL".
) If a view is involved, WITH CHECK OPTION must not be specified.
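For example, with an index on (C1, C2), DB2 can use a matching index scan on that
index for an update of the following form, because the updated key column C2 has an
equality predicate (a sketch):
UPDATE T
  SET C2 = 100
  WHERE C1 = 5
    AND C2 = 50;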
With list prefetch or multiple index access, any index or indexes can be used in an
UPDATE operation. Of course, to be chosen, those access paths must provide
efficient access to the data.
This section begins with “Definitions and examples,” below, and continues with
descriptions of the methods of joining that can be indicated in a plan table:
) “Nested loop join (METHOD=1)” on page 705
) “Merge scan join (METHOD=2)” on page 707
) “Hybrid join (METHOD=4)” on page 709
) “Star schema (star join)” on page 711
Figure 182. Join methods. The composite table (table TJ) is joined with the new table
TK by a nested loop join (method 1); the resulting composite table is sorted into a
work file and joined with the new table TL by a merge scan join (method 2) to produce
the final result.
A join operation can involve more than two tables. But the operation is carried out
in a series of steps. Each step joins only two tables.
Definitions: The composite table (or outer table) in a join operation is the table
remaining from the previous step, or it is the first table accessed in the first step. (In
the first step, then, the composite table is composed of only one table.) The new
table (or inner table) in a join operation is the table newly accessed in the step.
Definitions: A join operation typically matches a row of one table with a row of
another on the basis of a join condition. For example, the condition might specify
that the value in column A of one table equals the value of column X in the other
table (WHERE T1.A = T2.X).
Two kinds of joins differ in what they do with rows in one table that do not match
on the join condition with any row in the other table:
) An inner join discards rows of either table that do not match any row of the
other table.
) An outer join keeps unmatched rows of one or the other table, or of both. A row
in the composite table that results from an unmatched row is filled out with null
values. Outer joins are distinguished by which unmatched rows they keep.
Example: Figure 183 on page 705 shows an outer join with a subset of the values
it produces in a plan table for the applicable rows. Column JOIN_TYPE identifies
the type of outer join with one of these values:
) F for FULL OUTER JOIN
) L for LEFT OUTER JOIN
) Blank for INNER JOIN or no join
At execution, DB2 converts every right outer join to a left outer join; thus
JOIN_TYPE never identifies a right outer join specifically.
Figure 183. Plan table output for an example with outer joins
| Materialization with outer join: Sometimes DB2 has to materialize a result set
| when an outer join is used in conjunction with other joins, views, or nested table
| expressions. You can tell when this happens by looking at the TNAME column of
| the plan table, where the materialized table is named DSNWFQB(xx), where xx is
| the number of the query block (QBLOCKNO) that produced the work file.
SELECT A, B, X, Y
FROM (SELECT A, B FROM OUTERT WHERE A=10)
LEFT JOIN INNERT ON B=X;
Stage 1 and stage 2 predicates eliminate unqualified rows during the join. (For an
explanation of those types of predicate, see “Stage 1 and stage 2 predicates” on
page 638.) DB2 can scan either table using any of the available access methods,
including table space scan.
Performance considerations
The nested loop join repetitively scans the inner table. That is, DB2 scans the outer
table once, and scans the inner table as many times as the number of qualifying
rows in the outer table. Hence, the nested loop join is usually the most efficient join
method when the values of the join column passed to the inner table are in
sequence and the index on the join column of the inner table is clustered, or the
number of rows retrieved in the inner table through the index is small.
When it is used
Nested loop join is often used if:
) The outer table is small.
) Predicates with small filter factors reduce the number of qualifying rows in the
outer table.
) An efficient, highly clustered index exists on the join columns of the inner table.
) The number of data pages accessed in the inner table is small.
Example: left outer join: Figure 184 on page 705 illustrates a nested loop for a
left outer join. The outer join preserves the unmatched row in OUTERT with values
A=10 and B=6. The same join method for an inner join differs only in discarding
that row.
Example: one-row table priority: For a case like the example below, with a
unique index on T1.C2, DB2 detects that T1 has only one row that satisfies the
search condition. DB2 makes T1 the first table in a nested loop join.
SELECT * FROM T1, T2
WHERE T1.C1 = T2.C1 AND
T1.C2 = 5;
Example: Cartesian Join with Small Tables First: A Cartesian join is a form of
nested loop join in which there are no join predicates between the two tables. DB2
usually avoids a Cartesian join, but sometimes it is the most efficient method, as in
the example below. The query uses three tables: T1 has 2 rows, T2 has 3 rows,
and T3 has 10 million rows.
SELECT * FROM T1, T2, T3
WHERE T1.C1 = T3.C1 AND
T2.C2 = T3.C2 AND
T3.C3 = 5;
Assume that 5 million rows of T3 have the value C3=5. Processing time is large if
T3 is the outer table of the join and tables T1 and T2 are accessed for each of 5
million rows.
But if all rows from T1 and T2 are joined, without a join predicate, the 5 million
rows are accessed only six times, once for each row in the Cartesian join of T1 and
T2. It is difficult to say which access path is the most efficient. DB2 evaluates the
different options and could decide to access the tables in the sequence T1, T2, T3.
Sorting the composite table: Your plan table could show a nested loop join that
includes a sort on the composite table. DB2 might sort the composite table (the
outer table in Figure 184) if the following conditions exist:
) The join columns in the composite table and the new table are not in the same
sequence.
) The join column of the composite table has no index.
) The index is poorly clustered.
Nested loop join with a sorted composite table uses sequential detection efficiently
to prefetch data pages of the new table, reducing the number of synchronous I/O
operations and the elapsed time.
Method of joining
Figure 185 on page 708 illustrates a merge scan join.
DB2 scans both tables in the order of the join columns. If no efficient indexes on
the join columns provide the order, DB2 might sort the outer table, the inner table,
or both. The inner table is put into a work file; the outer table is put into a work file
only if it must be sorted. When a row of the outer table matches a row of the inner
table, DB2 returns the combined rows.
DB2 then reads another row of the inner table that might match the same row of
the outer table and continues reading rows of the inner table as long as there is a
match. When there is no longer a match, DB2 reads another row of the outer table.
) If that row has the same value in the join column, DB2 reads again the
matching group of records from the inner table. Thus, a group of duplicate
records in the inner table is scanned as many times as there are matching
records in the outer table.
) If the outer row has a new value in the join column, DB2 searches ahead in the
inner table. It can find any of the following rows:
– Unmatched rows in the inner table, with lower values in the join column.
– A new matching inner row. DB2 then starts the process again.
– An inner row with a higher value of the join column. Now the row of the
outer table is unmatched. DB2 searches ahead in the outer table, and can
find any of the following rows:
- Unmatched rows in the outer table.
- A new matching outer row. DB2 then starts the process again.
- An outer row with a higher value of the join column. Now the row of the
inner table is unmatched, and DB2 resumes searching the inner table.
Performance considerations
A full outer join by this method uses all predicates in the ON clause to match the
two tables and reads every row at the time of the join. Inner and left outer joins use
only stage 1 predicates in the ON clause to match the tables. If your tables match
on more than one column, it is generally more efficient to put all the predicates for
the matches in the ON clause, rather than to leave some of them in the WHERE
clause.
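For example, when T1 and T2 match on both C1 and C2, coding both predicates in the
ON clause, as in the following sketch, is generally more efficient than moving one of
them to the WHERE clause:
SELECT * FROM T1 INNER JOIN T2
  ON T1.C1 = T2.C1
  AND T1.C2 = T2.C2;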
For an inner join, DB2 can derive extra predicates for the inner table at bind time
and apply them to the sorted outer table to be used at run time. The predicates can
reduce the size of the work file needed for the inner table.
If DB2 has used an efficient index on the join columns, to retrieve the rows of the
inner table, those rows are already in sequence. DB2 puts the data directly into the
work file without sorting the inner table, which reduces the elapsed time.
When it is used
A merge scan join is often used if:
) The qualifying rows of the inner and outer table are large, and the join
predicate does not provide much filtering; that is, in a many-to-many join.
) The tables are large and have no indexes with matching columns.
) Few columns are selected on inner tables. This is the case when a DB2 sort is
used. The fewer the columns to be sorted, the more efficient the sort is.
Figure 186. Hybrid join. The outer table (OUTER) is scanned through the index on its
join column; index entries of the inner table (INNER) that match each outer row (X=B)
supply RIDs for an intermediate table (phase 1). The RID list is sorted into page
number order to form the intermediate table (phase 2), list prefetch retrieves the
corresponding inner rows, and the matched rows form the composite table.
Method of joining
The method requires obtaining RIDs in the order needed to use list prefetch. The
steps are shown in Figure 186. In that example, both the outer table (OUTER) and
the inner table (INNER) have indexes on the join columns.
Performance considerations
Hybrid join uses list prefetch more efficiently than nested loop join, especially if
there are indexes on the join predicate with low cluster ratios. It also processes
duplicates more efficiently because the inner table is scanned only once for each
set of duplicate values in the join column of the outer table.
If the index on the inner table is highly clustered, there is no need to sort the
intermediate table (SORTN_JOIN=N). The intermediate table is placed in a table in
memory rather than in a work file.
When it is used
Hybrid join is often used if:
) A nonclustered index or indexes are used on the join columns of the inner
table.
) The outer table has duplicate qualifying rows.
# You can think of the fact table, which is much larger than the dimension tables, as
# being in the center surrounded by dimension tables; the result resembles a star
# formation. The following diagram illustrates the star formation:
# Figure 187. Star schema with a fact table and dimension tables
# Example
# For an example of a star schema, consider the following scenario. A star schema is
# composed of a fact table for sales, with dimension tables connected to it for time,
# products, and geographic locations. The time table has an ID for each month, its
# quarter, and the year. The product table has an ID for each product item and its
# class and its inventory. The geographic location table has an ID for each city and
# its country.
# In this scenario, the sales table contains three columns with IDs from the dimension
# tables for time, product, and location instead of three columns for time, three
# columns for products, and two columns for location. Thus, the size of the fact table
# is greatly reduced. In addition, if you needed to change an item, you would do it
# once in a dimension table instead of several times for each instance of the item in
# the fact table.
# You can create even more complex star schemas by breaking a dimension table
# into a fact table with its own dimension tables. The fact table would be connected
# to the main fact table.
# Examples: query with three dimension tables: Suppose you have a store in
# San Jose and want information about sales of audio equipment from that store in
# 2000. For this example, you want to join the following tables:
# ) A fact table for SALES (S)
# ) A dimension table for TIME (T) with columns for an ID, month, quarter, and
# year
# ) A dimension table for geographic LOCATION (L) with columns for an ID, city,
# region, and country
# ) A dimension table for PRODUCT (P) with columns for an ID, product item,
# class, and inventory
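# A query along the following lines joins the fact table to all three dimension tables;
# the column names used here are assumptions made for illustration:
# SELECT *
#   FROM SALES S, TIME T, PRODUCT P, LOCATION L
#   WHERE S.TIME_ID = T.ID
#     AND S.PRODUCT_ID = P.ID
#     AND S.LOCATION_ID = L.ID
#     AND T.YEAR = 2000
#     AND P.CLASS = 'AUDIO'
#     AND L.CITY = 'SAN JOSE';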
# Figure 188. Plan table output for a star join example with TIME, PRODUCT, and
# LOCATION
# For another example, suppose you want to use the same SALES (S), TIME (T),
# PRODUCT (P), and LOCATION (L) tables for a similar query and index; however,
# for this example the index does not include the TIME dimension. A query doesn't
# have to involve all dimensions. In this example, the star join is performed on one
# query block at stage 1 and a star join is performed on another query block at stage
# 2.
If DB2 does not choose prefetch at bind time, it can sometimes use it at execution
time nevertheless. The method is described in “Sequential detection at execution
time” on page 718.
) For 4-KB buffer pools, the number of pages read by prefetch is shown in
Table 76.
Table 76. For 4-KB buffer pools, the number of pages read by prefetch
Buffer pool size Pages read by prefetch
<=223 buffers 8 pages for each asynchronous I/O
224-999 buffers 16 pages for each asynchronous I/O
1000+ buffers 32 pages for each asynchronous I/O
| ) For 8-KB buffer pools, the number of pages read by prefetch is shown in
| Table 77.
| Table 77. For 8-KB buffer pools, the number of pages read by prefetch
| Buffer pool size Pages read by prefetch
| <=112 buffers 4 pages for each asynchronous I/O
| 113-499 buffers 8 pages for each asynchronous I/O
| 500+ buffers 16 pages for each asynchronous I/O
| ) For 16-KB buffer pools, the number of pages read by prefetch is shown in
| Table 78.
| Table 78. For 16-KB buffer pools, the number of pages read by prefetch
| Buffer pool size Pages read by prefetch
| <=56 buffers 2 pages for each asynchronous I/O
| 57-249 buffers 4 pages for each asynchronous I/O
| 250+ buffers 8 pages for each asynchronous I/O
) For 32-KB buffer pools, the number of pages read by prefetch is shown in
Table 79.
Table 79. For 32-KB buffer pools, the number of pages read by prefetch
Buffer pool size Pages read by prefetch
<=16 buffers 0 pages (prefetch disabled)
17-99 buffers 2 pages for each asynchronous I/O
100+ buffers 4 pages for each asynchronous I/O
For certain utilities (LOAD, REORG, RECOVER), the prefetch quantity can be twice
as much.
When it is used: Sequential prefetch is generally used for a table space scan.
For an index scan that accesses 8 or more consecutive data pages, DB2 requests
sequential prefetch at bind time. The index must have a cluster ratio of 80% or
higher. Both data pages and index pages are prefetched.
List prefetch can be used in conjunction with either single or multiple index access.
List prefetch does not preserve the data ordering given by the index. Because the
RIDs are sorted in page number order before accessing the data, the data is not
retrieved in order by any column. If the data must be ordered for an ORDER BY
clause or any other reason, it requires an additional sort.
In a hybrid join, if the index is highly clustered, the page numbers might not be
sorted before accessing the data.
List prefetch can be used with most matching predicates for an index scan. IN-list
predicates are the exception; they cannot be the matching predicates when list
prefetch is used.
When it is used
List prefetch is used:
) Usually with a single index that has a cluster ratio lower than 80%
) Sometimes on indexes with a high cluster ratio, if the estimated amount of data
to be accessed is too small to make sequential prefetch efficient, but large
enough to require more than one regular read
) Always to access data by multiple index access
) Always to access data from the inner table during a hybrid join
During execution, DB2 ends list prefetching if more than 25% of the rows in the
table (with a minimum of 4075) must be accessed. Record IFCID 0125 in the
performance trace, mapped by macro DSNDQW01, indicates whether list prefetch
ended.
While forming an intersection of RID lists, if any list has 32 or fewer RIDs,
intersection stops and the list of 32 or fewer RIDs is used to access the data.
When it is used
DB2 can use sequential detection for both index leaf pages and data pages. It is
most commonly used on the inner table of a nested loop join, if the data is
accessed sequentially.
If a table is accessed repeatedly using the same statement (for example, DELETE
in a do-while loop), the data or index leaf pages of the table can be accessed
sequentially. This is common in a batch processing environment. Sequential
detection can then be used if access is through:
) SELECT or FETCH statements
) UPDATE and DELETE statements
) INSERT statements when existing data pages are accessed sequentially
DB2 can use sequential detection if it did not choose sequential prefetch at bind
time because of an inaccurate estimate of the number of pages to be accessed.
Sequential detection is not used for an SQL statement that is subject to referential
constraints.
For example, assume that page A is page 10. The following figure illustrates the page
ranges that DB2 calculates.
              A          B          C
              RUN1       RUN2       RUN3
Page #        10         26         42
P=32 pages    16         16         32
For initial data access sequential, prefetch is requested starting at page A for P
pages (RUN1 and RUN2). The prefetch quantity is always P pages.
For subsequent page requests, where the requested page is (1) page sequential and (2) data
access sequential is still in effect, prefetch is requested as follows:
) If the desired page is in RUN1, then no prefetch is triggered because it was
already triggered when data access sequential was first declared.
) If the desired page is in RUN2, then prefetch for RUN3 is triggered. RUN2
becomes RUN1, RUN3 becomes RUN2, and the new RUN3 becomes the page range
starting at C+P for a length of P pages.
If a data access pattern develops such that data access sequential is no longer in
effect and, thereafter, a new pattern develops that is sequential as described
above, then initial data access sequential is declared again and handled
accordingly.
Because, at bind time, the number of pages to be accessed can only be estimated,
sequential detection acts as a safety net and is employed when the data is being
accessed sequentially.
In extreme situations, when certain buffer pool thresholds are reached, sequential
prefetch can be disabled. For a description of buffer pools and thresholds, see
Section 5 (Volume 2) of DB2 Administration Guide .
The only reason DB2 sorts the new table is for join processing, which is indicated
by SORTN_JOIN.
The performance of the sort by the GROUP BY clause is improved when the query
accesses a single table and when the GROUP BY column has no index.
Without parallelism:
) If no sorts are required, then OPEN CURSOR does not access any data. It is
at the first fetch that data is returned.
) If a sort is required, then the OPEN CURSOR causes the materialized result
table to be produced. Control returns to the application after the result table is
materialized. If a cursor that requires a sort is closed and reopened, the sort is
performed again.
) If there is a RID sort, but no data sort, then it is not until the first row is fetched
that the RID list is built from the index and the first data record is returned.
Subsequent fetches use the RID pool to access the next data record.
With parallelism:
) At OPEN CURSOR, parallelism is asynchronously started, regardless of
whether a sort is required. Control returns to the application immediately after
the parallelism work is started.
) If there is a RID sort, but no data sort, then parallelism is not started until the
first fetch. This works the same way as with no parallelism.
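For example, a cursor like the following sketch (the names are hypothetical) requires a
sort if no index provides the ordering; the result table is then materialized at OPEN
CURSOR, and closing and reopening the cursor repeats the sort:
   DECLARE CSR1 CURSOR FOR
     SELECT C1, C2
       FROM T1
       WHERE C3 > 100
       ORDER BY C2;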
Consider the following statements, one of which defines a view, the other of which
references the view:
View-defining statement: View referencing statement:
The subselect of the view-defining statement can be merged with the view
referencing statement to yield the following logically equivalent statement:
Merged statement:
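As an illustrative sketch (the view, table, and column names are hypothetical), a merge
of this kind folds the view-defining subselect into the referencing statement:
   View-defining statement:
     CREATE VIEW VMRG (VC1, VC2) AS
       SELECT C1, C2 FROM T1 WHERE C3 > 0;
   View referencing statement:
     SELECT VC1 FROM VMRG WHERE VC2 = 10;
   Merged statement:
     SELECT C1 FROM T1 WHERE C3 > 0 AND C2 = 10;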
| Here is another example of when a view and table expression can be merged:
| SELECT * FROM V1 X
| LEFT JOIN
| (SELECT * FROM T2) Y ON X.C1=Y.C1
| LEFT JOIN T3 Z ON X.C1=Z.C1;
Materialization
Views and table expressions cannot always be merged. Look at the following
statements:
View defining statement: View referencing statement:
Column VC1 occurs as the argument of a column function in the view referencing
statement. The values of VC1, as defined by the view-defining subselect, are the
result of applying the column function SUM(C1) to groups after grouping the base
table T1 by column C2. No equivalent single SQL SELECT statement can be
executed against the base table T1 to achieve the intended result. There is no way
to specify that column functions should be applied successively.
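A sketch consistent with that description (VMAT is a stand-in view name; the table and
column names are those used in the text) is:
   View-defining statement:
     CREATE VIEW VMAT (VC1) AS
       SELECT SUM(C1) FROM T1 GROUP BY C2;
   View referencing statement:
     SELECT MAX(VC1) FROM VMAT;
DB2 must materialize the view, because MAX must be applied to the already grouped
result of SUM(C1).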
| Table 80. Cases when DB2 performs view or table expression materialization. The "X" indicates a case of
materialization. Notes follow the table.
A SELECT FROM a view or            View definition or table expression uses...(2)
a table expression uses...(1)      GROUP BY    DISTINCT    Column      Column function
                                                           function    DISTINCT
Joins (3)                             X           X           X              X
GROUP BY                              X           X           X              X
DISTINCT                              -           X           -              X
Column function                       X           X           X              X
Column function DISTINCT              X           X           X              X
SELECT subset of view or              -           X           -              -
table expression columns
An SQL statement can reference a particular view multiple times where some
of the references can be merged and some must be materialized.
2. If a SELECT list contains a host variable in a table expression, then
materialization occurs. For example:
SELECT C1 FROM
(SELECT :HV1 AS C1 FROM T1) X;
) If the outer join is a full outer join and the SELECT list of the view or
nested table expression does not contain a standalone column for the
column that is used in the outer join ON clause, then materialization occurs.
For example:
SELECT X.C1 FROM
(SELECT C1+1 AS C2 FROM T1) X FULL JOIN T2 Y
ON X.C2=Y.C2;
| Another indication that DB2 chose view materialization is that the view name or
| table expression name appears as a TNAME attribute for rows describing the
| access path for the referencing query block. When DB2 chooses merge, EXPLAIN
| data for the merged statement appears in PLAN_TABLE; only the names of the
| base tables on which the view or table expression is defined appear.
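For example, a query like the following (QUERYNO 100 is illustrative) lists the access path
rows for one statement; a view or table expression name in TNAME indicates materialization,
while only base table names indicate merge:
   SELECT QBLOCKNO, PLANNO, TNAME, METHOD, ACCESSTYPE
     FROM PLAN_TABLE
     WHERE QUERYNO = 100
     ORDER BY QBLOCKNO, PLANNO;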
Note: Where "op" is =, <>, >, <, <=, or >=, and literal is either a host variable, constant, or
special register. The literals in the BETWEEN predicate need not be identical.
| Table 82 shows the content of each column. The first five columns of the
| DSN_STATEMNT_TABLE are the same as PLAN_TABLE.
| Just as with the plan table, DB2 adds rows to the statement table; it does not
| automatically delete rows. INSERT triggers are not activated unless you insert rows
| yourself using an SQL INSERT statement.
| To clear the table of obsolete rows, use DELETE, just as you would for deleting
| rows from any table. You can also use DROP TABLE to drop a statement table
| completely.
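For example, the following statements (the program name 'MYPROG' is illustrative;
PROGNAME is one of the columns shared with PLAN_TABLE) remove obsolete estimate rows
for one program, or drop the statement table entirely:
   DELETE FROM DSN_STATEMNT_TABLE
     WHERE PROGNAME = 'MYPROG';
   DROP TABLE DSN_STATEMNT_TABLE;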
| Similarly, if system administrators use these estimates as input into the resource
| limit specification table for governing (either predictive or reactive), they probably
| would want to give much greater latitude for statements in cost category B than for
| those in cost category A.
| What goes into cost category B? DB2 puts a statement's estimate into cost
| category B when any of the following conditions exist:
| ) The statement has UDFs.
| ) Triggers are defined for the target table:
| – The statement is INSERT, and insert triggers are defined on the target
| table.
| – The statement is UPDATE, and update triggers are defined on the target
| table.
| – The statement is DELETE, and delete triggers are defined on the target
| table.
| ) The target table of a delete statement has referential constraints defined on it
| as the parent table, and the delete rules are either CASCADE or SET NULL.
| ) The WHERE clause predicate has one of the following forms:
| – COL op literal, and the literal is a host variable, parameter marker, or
| special register. The operator can be >, >=, <, <=, LIKE, or NOT LIKE.
| – COL BETWEEN literal AND literal where either literal is a host variable,
| parameter marker, or special register.
| – LIKE with an escape clause that contains a host variable.
| ) The cardinality statistics are missing for one or more tables that are used in the
| statement.
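For example, the following statement (hypothetical names, with :HV as a host variable)
falls into cost category B, because its predicate compares a column with a host variable
using the > operator:
   SELECT C1, C2
     FROM T1
     WHERE C3 > :HV;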
Query I/O parallelism manages concurrent I/O requests for a single query, fetching
pages into the buffer pool in parallel. This processing can significantly improve the
performance of I/O-bound queries. I/O parallelism is used only when one of the
other parallelism modes cannot be used.
Query CP parallelism enables true multi-tasking within a query. A large query can
be broken into multiple smaller queries. These smaller queries run simultaneously
on multiple processors accessing data in parallel. This reduces the elapsed time for
a query.
Parallel operations usually involve at least one table in a partitioned table space.
Scans of large partitioned table spaces have the greatest performance
improvements where both I/O and central processor (CP) operations can be carried
out in parallel.
Figure 192 shows sequential processing. With sequential processing, DB2 takes
the 3 partitions in order, completing partition 1 before starting to process partition 2,
and completing 2 before starting 3. Sequential prefetch allows overlap of CP
processing with I/O operations, but I/O operations do not overlap with each other.
In the example in Figure 192, a prefetch request takes longer than the time to
process it. The processor is frequently waiting for I/O.
Figure 192. CP and I/O processing techniques. Query processing using sequential processing
of partitions P1, P2, and P3.
Figure 193 shows parallel I/O operations. With parallel I/O, DB2 prefetches data
from the 3 partitions at one time. The processor processes the first request from
each partition, then the second request from each partition, and so on. The
processor is not waiting for I/O, but there is still only one processing task.
Figure 193. CP and I/O processing techniques. Query processing using parallel I/O processing
of partitions P1, P2, and P3.
Figure 194 on page 733 shows parallel CP processing. With CP parallelism, DB2
can use multiple parallel tasks to process the query. Three tasks working
concurrently can greatly reduce the overall elapsed time for data-intensive and
processor-intensive queries. The same principle applies for Sysplex query
parallelism, except that the work can cross the boundaries of a single CPC.
Figure 194. CP and I/O processing techniques. Query processing using CP parallelism.
The tasks can be contained within a single CPC or can be spread out among the members
of a data sharing group.
Queries that are most likely to take advantage of parallel operations: Queries
that can take advantage of parallel processing are:
) Those in which DB2 spends most of the time fetching pages—an I/O-intensive
query
A typical I/O-intensive query is something like the following query, assuming
that a table space scan is used on many pages:
SELECT COUNT(*) FROM ACCOUNTS
WHERE BALANCE > 0 AND
DAYS_OVERDUE > 3;
| ) Those in which DB2 spends a lot of processor time, and perhaps also I/O time,
to process rows. Those include:
– Queries with intensive data scans and high selectivity. Those queries
involve large volumes of data to be scanned but relatively few rows that
meet the search criteria.
– Queries containing aggregate functions. Column functions (such as MIN,
MAX, SUM, AVG, and COUNT) usually involve large amounts of data to be
scanned but return only a single aggregate result.
– Queries accessing long data rows. Those queries access tables with long
data rows, and the ratio of rows per page is very low (one row per page,
for example).
– Queries requiring large amounts of central processor time. Those queries
might be read-only queries that are complex, data-intensive, or that involve
a sort.
A typical processor-intensive query is something like:
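The following sketch (the table and column names are hypothetical) shows the kind of
complex, read-only query that tends to be processor intensive, because it joins tables,
applies column functions, and sorts the result:
   SELECT A.ACCT_ID, SUM(T.AMOUNT), COUNT(*)
     FROM ACCOUNTS A, TRANSACTIONS T
     WHERE A.ACCT_ID = T.ACCT_ID
       AND T.POST_DATE > '1998-12-31'
     GROUP BY A.ACCT_ID
     ORDER BY 2 DESC;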
Terminology: When the term task is used with information on parallel processing,
the context should be considered. For parallel query CP processing or Sysplex
query parallelism, task is an actual MVS execution unit used to process a query.
For parallel I/O processing, a task simply refers to the processing of one of the
concurrent I/O streams.
A parallel group is the term used to name a particular set of parallel operations
(parallel tasks or parallel I/O operations). A query can have more than one parallel
group, but each parallel group within the query is identified by its own unique ID
number.
The degree of parallelism is the number of parallel tasks or I/O operations that
DB2 determines can be used for the operations on the parallel group.
It is also possible to change the special register default from 1 to ANY for the
entire DB2 subsystem by modifying the CURRENT DEGREE field on
installation panel DSNTIP4.
) If you bind with isolation CS, also choose the option CURRENTDATA(NO), if
possible. This option can improve performance in general, but it also ensures
that DB2 will consider parallelism for ambiguous cursors. If you bind with
CURRENTDATA(YES) and DB2 cannot tell whether the cursor is read-only, DB2 does
not consider parallelism. It is best to always indicate that a cursor is read-only
by specifying FOR FETCH ONLY or FOR READ ONLY on the DECLARE
CURSOR statement.
DB2 also considers only parallel I/O operations if you declare a cursor WITH HOLD
and bind with isolation RR or RS. For further restrictions on parallelism, see
Table 83.
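For example, for dynamic SQL you might enable parallelism and mark a cursor read-only as
follows (the names are hypothetical; for static SQL, the DEGREE(ANY) bind option is the
equivalent of the special register setting):
   SET CURRENT DEGREE = 'ANY';
   DECLARE CSR2 CURSOR FOR
     SELECT C1, C2
       FROM T1
     FOR FETCH ONLY;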
| For complex queries, run the query in parallel within a member of a data sharing
| group. With Sysplex query parallelism, use the power of the data sharing group to
| process individual complex queries on many members of the data sharing group.
| For more information on how you can use the power of the data sharing group to
| run complex queries, see Chapter 7 of DB2 Data Sharing: Planning and
| Administration.
# Limiting the degree of parallelism: If you want to limit the maximum number of
# parallel tasks that DB2 generates, you can use the installation parameter MAX
# DEGREE on installation panel DSNTIP4. Changing MAX DEGREE, however, is not the
# way to turn parallelism off. Use the DEGREE bind parameter or the CURRENT
# DEGREE special register to turn parallelism off.
DB2 avoids certain hybrid joins when parallelism is enabled: To ensure that
you can take advantage of parallelism, DB2 does not pick one type of hybrid join
(SORTN_JOIN=Y) when the plan or package is bound with CURRENT
DEGREE=ANY or if the CURRENT DEGREE special register is set to 'ANY'.
| IN-list access clarification: DB2 can use parallelism only when IN-list access is
| for the inner table of a parallel group. For example, assume that the following query
| uses a nested loop join to join T1 to T2. The IN list access for T2 can use
| parallelism:
| SELECT COUNT(*)
| FROM T1, T2
| WHERE T1.C1 = T2.C1 AND
| T1.C2 > 5 AND
| T2.C2 IN ( 6 , 7 , 9) ;
It is possible for a parallel group to run at a parallel degree less than that shown in
the PLAN_TABLE output. The following factors can cause a reduced degree of parallelism:
) Buffer pool availability
) Logical contention.
Consider a nested loop join. The inner table could be in a partitioned or
nonpartitioned table space, but DB2 is more likely to use a parallel join
operation when the outer table is partitioned.
) Physical contention
) Run time host variables
A host variable can determine the qualifying partitions of a table for a given
query. In such cases, DB2 defers the determination of the planned degree of
parallelism until run time, when the host variable value is known.
) Updatable cursor
At run time, DB2 might determine that an ambiguous cursor is updatable.
| ) A change in the configuration of online processors
| If fewer processors are online at run time, DB2 might need to reformulate the
| parallel degree.
The default value for CURRENT DEGREE is 1 unless your installation has
changed the default for the CURRENT DEGREE special register.
System controls can be used to disable parallelism, as well. These are described in
Section 5 (Volume 2) of DB2 Administration Guide.
The following sections discuss scenarios for interaction among your program, DB2,
and ISPF. Each has advantages and disadvantages in terms of efficiency, ease of
coding, ease of maintenance, and overall flexibility.
The DSN command processor (see “DSN command processor” on page 431)
permits only single task control block (TCB) connections. Take care not to change
the TCB after the first SQL statement. ISPF SELECT services change the TCB if
you started DSN under ISPF, so you cannot use these to pass control from load
module to load module. Instead, use LINK, XCTL, or LOAD.
Figure 195 on page 742 shows the task control blocks that result from attaching
the DSN command processor below TSO or ISPF.
If you are in ISPF and running under DSN, you can perform an ISPLINK to another
program, which calls a CLIST. In turn, the CLIST uses DSN and another
application. Each such use of DSN creates a separate unit of recovery (process or
transaction) in DB2.
All such initiated DSN work units are unrelated, with regard to isolation (locking)
and recovery (commit). It is possible to deadlock with yourself; that is, one unit
(DSN) can request a serialized resource (a data page, for example) that another
unit (DSN) holds incompatibly.
A COMMIT in one program applies only to that process. There is no facility for
coordinating the processes.
Disadvantages: For large programs of this type, you want a more modular design,
making the plan more flexible and easier to maintain. If you have one large plan,
you must rebind the entire plan whenever you change a module that includes SQL
statements. 5 You cannot pass control to another load module that makes SQL
calls by using ISPLINK; rather, you must use LINK, XCTL, or LOAD and BALR.
If you want to use ISPLINK, then call ISPF to run under DSN:
DSN
RUN PROGRAM(ISPF) PLAN(MYPLAN)
END
5 To achieve a more modular construction when all parts of the program use SQL, consider using packages. See “Chapter 5-1.
Planning to precompile and bind” on page 321.
When you use the ISPF SELECT service, you can specify whether ISPF should
create a new ISPF variable pool before calling the function. You can also break a
large application into several independent parts, each with its own ISPF variable
pool.
You can call different parts of the program in different ways. For example, you can
use the PGM option of ISPF SELECT:
PGM(program-name) PARM(parameters)
Or, you can use the CMD option:
CMD(command)
For a part that accesses DB2, the command can name a CLIST that starts DSN:
DSN
RUN PROGRAM(PART1) PLAN(PLAN1) PARM(input from panel)
END
Breaking the application into separate modules makes it more flexible and easier to
maintain. Furthermore, some of the application might be independent of DB2;
portions of the application that do not call DB2 can run, even if DB2 is not running.
A stopped DB2 database does not interfere with parts of the program that refer only
to other databases.
With the same modular structure as in the previous example, using CAF is likely to
provide greater efficiency by reducing the number of CLISTs. This does not mean,
however, that any DB2 function executes more quickly.
Disadvantages: Compared to the modular structure using DSN, the structure using
CAF is likely to require a more complex program, which in turn might require
assembler language subroutines. For more information, see “Chapter 7-7.
Programming for the call attachment facility (CAF)” on page 745.
It is also possible for IMS batch applications to access DB2 databases through
CAF, though that method does not coordinate the commitment of work between the
IMS and DB2 systems. We highly recommend that you use the DB2 DL/I batch
support for IMS batch applications.
CICS application programs must use the CICS attachment facility; IMS application
programs, the IMS attachment facility. Programs running in TSO foreground or TSO
background can use either the DSN command processor or CAF; each has
advantages and disadvantages.
Task capabilities
Any task in an address space can establish a connection to DB2 through CAF.
There can be only one connection for each task control block (TCB). A DB2 service
request issued by a program running under a given task is associated with that
task's connection to DB2. The service request operates independently of any DB2
activity under any other task.
Each connected task can run a plan. Multiple tasks in a single address space can
specify the same plan, but each instance of a plan runs independently from the
others. A task can terminate its plan and run a different plan without fully breaking
its connection to DB2.
CAF does not generate task structures, nor does it provide attention processing
exits or functional recovery routines. You can provide whatever attention handling
and functional recovery your application needs, but you must use ESTAE/ESTAI
type recovery routines and not Enabled Unlocked Task (EUT) FRR routines.
Programming language
You can write CAF applications in assembler language, C, COBOL, FORTRAN,
and PL/I. When choosing a language to code your application in, consider these
restrictions:
) If you need to use MVS macros (ATTACH, WAIT, POST, and so on), you must
choose a programming language that supports them or else embed them in
modules written in assembler language.
) The CAF TRANSLATE function is not available from FORTRAN. To use the
function, code it in a routine written in another language, and then call that
routine from FORTRAN.
You can find a sample assembler program (DSN8CA) and a sample COBOL
program (DSN8CC) that use the call attachment facility in library
prefix.SDSNSAMP. A PL/I application (DSN8SPM) calls DSN8CA, and a COBOL
application (DSN8SCM) calls DSN8CC. For more information on the sample
applications and on accessing the source code, see Appendix B, “Sample
applications” on page 849.
Program preparation
Preparing your application program to run in CAF is similar to preparing it to run in
other environments, such as CICS, IMS, and TSO. You can prepare a CAF
application either in the batch environment or by using the DB2 program
preparation process. You can use the program preparation system either through
DB2I or through the DSNH CLIST. For examples and guidance in program
preparation, see “Chapter 6-1. Preparing an application program to run” on
page 405.
CAF requirements
When you write programs that use CAF, be aware of the following characteristics.
Program size
The CAF code requires about 16K of virtual storage per address space and an
additional 10K for each TCB using CAF.
Use of LOAD
CAF uses MVS SVC LOAD to load two modules as part of the initialization
following your first service request. Both modules are loaded into fetch-protected
storage that has the job-step protection key. If your local environment intercepts
and replaces the LOAD SVC, then you must ensure that your version of LOAD
manages the load list element (LLE) and contents directory entry (CDE) chains like
the standard MVS LOAD macro.
Run environment
Applications requesting DB2 services must adhere to several run environment
characteristics. Those characteristics must be in effect regardless of the attachment
facility you use. They are not unique to CAF.
) The application must be running in TCB mode. SRB mode is not supported.
) An application task cannot have any EUT FRRs active when requesting DB2
services. If an EUT FRR is active, DB2's functional recovery can fail, and your
application can receive some unpredictable abends.
) Different attachment facilities cannot be active concurrently within the same
address space. Therefore:
– An application must not use CAF in a CICS or IMS address space.
– An application that runs in an address space that has a CAF connection to
DB2 cannot connect to DB2 using RRSAF.
– An application that runs in an address space that has an RRSAF
connection to DB2 cannot connect to DB2 using CAF.
# – An application cannot invoke the MVS AXSET macro after executing the
# CAF CONNECT call and before executing the CAF DISCONNECT call.
) One attachment facility cannot start another. This means that your CAF
application cannot use DSN, and a DSN RUN subcommand cannot call your
CAF application.
) The language interface module for CAF, DSNALI, is shipped with the linkage
attributes AMODE(31) and RMODE(ANY). If your applications load CAF below
the 16MB line, you must link-edit DSNALI again.
There is no significant advantage to running DSN applications with CAF, and the
loss of DSN services can affect how well your program runs. We do not
recommend that you run DSN applications with CAF unless you provide an
application controller to manage the DSN application and replace any needed DSN
functions. Even then, you might have to change the application to communicate
connection failures to the controller correctly.
When the language interface is available, your program can make use of the CAF
in two ways:
) Implicitly, by including SQL statements or IFI calls in your program just as you
would in any program. The CAF facility establishes the connections to DB2
using default values for the pertinent parameters described under “Implicit
connections” on page 750.
) Explicitly, by writing CALL DSNALI statements, providing the appropriate
options. For the general form of the statements, see “CAF function
descriptions” on page 753.
The first element of each option list is a function, which describes the action you
want CAF to take. For the available values of function and an approximation of their
effects, see "Summary of connection functions" on page 750. The effect of any
function depends in part on what functions the program has already run. Before
using any function, be sure to read the description of its usage. Also read
“Summary of CAF behavior” on page 766, which describes the influence of
previous functions.
Summary of connection functions
You can use the following functions with CALL DSNALI:
CONNECT
Establishes the task (TCB) as a user of the named DB2 subsystem. When the
first task within an address space issues a connection request, the address
space is also initialized as a user of DB2. See “CONNECT: Syntax and usage”
on page 756.
OPEN
Allocates a DB2 plan. You must allocate a plan before DB2 can process SQL
statements. If you did not request the CONNECT function, OPEN implicitly
establishes the task, and optionally the address space, as a user of DB2. See
“OPEN: Syntax and usage” on page 760.
CLOSE
Optionally commits or abandons (rolls back) database changes and deallocates the plan.
If OPEN implicitly requests the CONNECT function, CLOSE removes the task,
and possibly the address space, as a user of DB2. See “CLOSE: Syntax and
usage” on page 761.
DISCONNECT
Removes the task as a user of DB2 and, if this is the last or only task in the
address space with a DB2 connection, terminates the address space
connection to DB2. See “DISCONNECT: Syntax and usage” on page 763.
TRANSLATE
Returns an SQLCODE and printable text in the SQLCA that describes a DB2
hexadecimal error reason code. See “TRANSLATE: Syntax and usage” on
page 764. You cannot call the TRANSLATE function from the FORTRAN
language.
Implicit connections
If your CAF application does not issue explicit CONNECT and OPEN requests through
CALL DSNALI statements before its first SQL statement or IFI call, CAF initiates
implicit CONNECT and OPEN requests to DB2. Although CAF performs these connection requests using the
default values defined below, the requests are subject to the same DB2 return
codes and reason codes as explicitly specified requests.
Subsystem name
The default name specified in the module DSNHDECP. CAF uses the
installation default DSNHDECP, unless your own DSNHDECP is in a library in
a STEPLIB or JOBLIB concatenation, or in the link list. In a data sharing group,
the default subsystem name is the group attachment name.
Plan name
The member name of the database request module (DBRM) that DB2
produced when you precompiled the source program that contains the first SQL
call. If your program can make its first SQL call from different modules with
different DBRMs, then you cannot use a default plan name; you must use an
explicit call using the OPEN function.
There are different types of implicit connections. The simplest is for an application
to run neither CONNECT nor OPEN. You can also use CONNECT only or OPEN
only. Each of these implicitly connects your application to DB2. To terminate an
implicit connection, you must use the proper calls. See Table 89 on page 766 for
details.
For implicit connection requests, register 15 contains the return code and register 0
contains the reason code. The return code and reason code are also in the
message text for SQLCODE -991. The application program should examine the
return and reason codes immediately after the first executable SQL statement
within the application program. There are two ways to do this:
) Examine registers 0 and 15 directly.
) Examine the SQLCA, and if the SQLCODE is -991, obtain the return and
reason code from the message text. The return code is the first token, and the
reason code is the second token.
If the implicit connection was successful, the application can examine the
SQLCODE for the first, and subsequent, SQL statements.
You can access the DSNALI module by either explicitly issuing LOAD requests
when your program runs, or by including the module in your load module when you
link-edit your program. There are advantages and disadvantages to each approach.
By explicitly loading the DSNALI module, you beneficially isolate the maintenance
of your application from future IBM service to the language interface. If the
language interface changes, the change will probably not affect your load module.
You must indicate to DB2 which entry point to use. You can do this in one of two
ways:
) Specify the precompiler option ATTACH(CAF).
This causes DB2 to generate calls that specify entry point DSNHLI2. You
cannot use this option if your application is written in FORTRAN.
) Code a dummy entry point named DSNHLI within your load module.
If you do not specify the precompiler option ATTACH, the DB2 precompiler
generates calls to entry point DSNHLI for each SQL request. The precompiler
does not know about, and is independent of, the different DB2 attachment facilities.
When the calls generated by the DB2 precompiler pass control to DSNHLI,
your code corresponding to the dummy entry point must preserve the option list
passed in R1 and call DSNHLI2 specifying the same option list. For a coding
example of a dummy DSNHLI entry point, see “Using dummy entry point
DSNHLI” on page 776.
Link-editing DSNALI
You can include the CAF language interface module DSNALI in your load module
during a link-edit step. The module must be in a load module library, which is
included either in the SYSLIB concatenation or another INCLUDE library defined in
the linkage editor JCL. Because all language interface modules contain an entry
point declaration for DSNHLI, the linkage editor JCL must contain an INCLUDE
linkage editor control statement for DSNALI; for example, INCLUDE
DB2LIB(DSNALI). By coding these options, you avoid inadvertently picking up the
wrong language interface module.
If you do not need explicit calls to DSNALI for CAF functions, including DSNALI in
your load module has some advantages. When you include DSNALI during the
link-edit, you need not code the previously described dummy DSNHLI entry point in
your program or specify the precompiler option ATTACH. Module DSNALI contains
an entry point for DSNHLI, which is identical to DSNHLI2, and an entry point
DSNWLI, which is identical to DSNWLI2.
A disadvantage to link-editing DSNALI into your load module is that any IBM
service to DSNALI requires a new link-edit of your load module.
Task termination
If a connected task terminates normally before the CLOSE function deallocates the
plan, then DB2 commits any database changes that the thread made since the last
commit point. If a connected task abends before the CLOSE function deallocates
the plan, then DB2 rolls back any database changes since the last commit point.
In either case, DB2 deallocates the plan, if necessary, and terminates the task's
connection before it allows the task to terminate.
DB2 abend
If DB2 abends while an application is running, the application is rolled back to the
last commit point. If DB2 terminates while processing a commit request, DB2 either
commits or rolls back any changes at the next restart. The action taken depends on
the state of the commit request when DB2 terminates.
A description of the call attachment facility register and parameter list conventions for
assembler language follows. After that, the syntax descriptions of the specific functions
describe the parameters for those particular functions.
Register conventions
If you do not specify the return code and reason code parameters in your CAF
calls, CAF puts a return code in register 15 and a reason code in register 0. CAF
also supports high-level languages that cannot interrogate individual registers. See
Figure 197 on page 755 and the discussion following it for more information. The
contents of registers 2 through 14 are preserved across calls. You must conform to
the following standard calling conventions:
Register Usage
R1 Parameter list pointer (for details, see “Call DSNALI parameter list” on
page 754)
R13 Address of caller's save area
R14 Caller's return address
R15 CAF entry point address
Call DSNALI parameter list
Use a standard MVS CALL parameter list. Register 1 points to a list of fullword
addresses that point to the actual parameters. The last address must contain a 1 in
the high-order bit. Figure 197 on page 755 shows a sample parameter list
structure for the CONNECT function.
When you code CALL DSNALI statements, you must specify all parameters that
come before Return Code. You cannot omit any of those parameters by coding
zeros or blanks. There are no defaults for those parameters for explicit connection
service requests. Defaults are provided only for implicit connections.
For all languages except assembler language, code zero for a parameter in the
CALL DSNALI statement when you want to use the default value for that parameter
but specify subsequent parameters. For example, suppose you are coding a
CONNECT call in a COBOL program. You want to specify all parameters except
Return Code. Write the call in this way:
CALL 'DSNALI' USING FUNCTN SSID TECB SECB RIBPTR
BY CONTENT ZERO BY REFERENCE REASCODE SRDURA EIBPTR.
For an assembler language call, code a comma for a parameter in the CALL
DSNALI statement when you want to use the default value for that parameter but
specify subsequent parameters. For example, code a CONNECT call like this to
specify all optional parameters except Return Code:
CALL DSNALI,(FUNCTN,SSID,TERMECB,STARTECB,RIBPTR,,REASCODE,SRDURA,EIBPTR)
Figure 197 illustrates how you can use the indicator 'end of parameter list' to
control the return code and reason code fields following a CAF CONNECT call.
The first three termination points apply to all CAF parameter lists; termination points 4
and 5 apply only to CONNECT, which has the additional srdura and eibptr parameters:
1. Terminates the parameter list without specifying the parameters retcode,
reascode, and srdura, and places the return code in register 15 and the reason
code in register 0.
Terminating at this point ensures compatibility with CAF programs that require a
return code in register 15 and a reason code in register 0.
2. Terminates the parameter list after the return code field, and places the return
code in the parameter list and the reason code in register 0.
Terminating at this point permits the application program to take action, based
on the return code, without further examination of the associated reason code.
3. Terminates the parameter list after the reason code field and places the return
code and the reason code in the parameter list.
Terminating at this point provides support to high-level languages that are
unable to examine the contents of individual registers.
If you code your CAF application in assembler language, you can specify this
parameter and omit the return code parameter. To do this, specify a comma as
a place-holder for the omitted return code parameter.
4. Terminates the parameter list after the parameter srdura.
If you code your CAF application in assembler language, you can specify this
parameter and omit the return code and reason code parameters. To do this,
specify commas as place-holders for the omitted parameters.
5. Terminates the parameter list after the parameter eibptr.
If you code your CAF application in assembler language, you can specify this
parameter and omit the return code and reason code parameters. To do this,
specify commas as place-holders for the omitted parameters.
Even if you specify that the return code be placed in the parameter list, it is also
placed in register 15 to accommodate high-level languages that support special
return code processing.
function
12-byte area containing CONNECT followed by five blanks.
ssnm
4-byte DB2 subsystem name or group attachment name (if used in a data
sharing group) to which the connection is made.
If you specify the group attachment name, the program connects to the DB2 on
the MVS system on which the program is running. When you specify a group
attachment name and a start-up ECB, DB2 ignores the start-up ECB. If you
need to use a start-up ECB, specify a subsystem name, rather than a group
attachment name. That subsystem name must be different from the group
attachment name.
If your ssnm is less than four characters long, pad it on the right with blanks to
a length of four characters.
termecb
The application's event control block (ECB) for DB2 termination. DB2 posts this
ECB when the operator enters the STOP DB2 command or when DB2 is
undergoing abend. It indicates the type of termination by a POST code, as
follows:
Before you check termecb in your CAF application program, first check the
return code and reason code from the CONNECT call to ensure that the call
completed successfully. See “Checking return codes and reason codes” on
page 773 for more information.
startecb
The application's start-up ECB. If DB2 has not yet started when the application
issues the call, DB2 posts the ECB when it successfully completes its startup
processing. DB2 posts at most one startup ECB per address space. The ECB
is the one associated with the most recent CONNECT call from that address
space. Your application program must examine any nonzero CAF/DB2 reason
codes before issuing a WAIT on this ECB.
If ssnm is a group attachment name, DB2 ignores the startup ECB.
ribptr
A 4-byte area in which CAF places the address of the release information block
(RIB) after the call. You can determine what release level of DB2 you are
# currently running by examining field RIBREL. You can determine the
# modification level within the release level by examining fields RIBCNUMB and
# RIBCINFO. If the value in RIBCNUMB is greater than zero, check RIBCINFO
# for modification levels.
If the RIB is not available (for example, if you name a subsystem that does not
exist), DB2 sets the 4-byte area to zeros.
The area to which ribptr points is below the 16-megabyte line.
Your program does not have to use the release information block, but it cannot
omit the ribptr parameter.
Macro DSNDRIB maps the release information block (RIB). It can be found in
prefix.SDSNMACS(DSNDRIB).
retcode
A 4-byte area in which CAF places the return code.
This field is optional. If not specified, CAF places the return code in register 15
and the reason code in register 0.
reascode
A 4-byte area in which CAF places a reason code. If not specified, CAF places
the reason code in register 0.
This field is optional. If specified, you must also specify retcode.
srdura
A 10-byte area containing the string 'SRDURA(CD)'. This field is optional. If it is
provided, the value in the CURRENT DEGREE special register stays in effect
from CONNECT until DISCONNECT. If it is not provided, the value in the
CURRENT DEGREE special register stays in effect from OPEN until CLOSE. If
you specify this parameter in any language except assembler, you must also
specify the return code and reason code parameters. In assembler language,
you can omit the return code and reason code parameters by specifying
commas as place-holders.
eibptr
A 4-byte area in which CAF puts the address of the environment information
block (EIB). The EIB contains information that you can use if you are
connecting to a DB2 subsystem that is part of a data sharing group. For
example, you can determine the name of the data sharing group and member
to which you are connecting. If the DB2 subsystem that you connect to is not
part of a data sharing group, then the fields in the EIB that are related to data
sharing are blank. If the EIB is not available (for example, if you name a
subsystem that does not exist), DB2 sets the 4-byte area to zeros.
The area to which eibptr points is below the 16-megabyte line.
You can omit this parameter when you make a CONNECT call.
If you specify this parameter in any language except assembler, you must also
specify the return code, reason code, and srdura parameters. In assembler
language, you can omit the return code, reason code, and srdura parameters
by specifying commas as place-holders.
Macro DSNDEIB maps the EIB. It can be found in
prefix.SDSNMACS(DSNDEIB).
Using a CONNECT call is optional. The first request from a task, either OPEN, or
an SQL or IFI call, causes CAF to issue an implicit CONNECT request. If a task is
connected implicitly, the connection to DB2 is terminated either when you execute
CLOSE or when the task terminates.
You can run CONNECT from any or all tasks in the address space, but the address
space level is initialized only once when the first task connects.
If a task does not issue an explicit CONNECT or OPEN, the implicit connection
from the first SQL or IFI call specifies a default DB2 subsystem name. A systems
programmer or administrator determines the default subsystem name when
installing DB2. Be certain that you know what the default name is and that it names
the specific DB2 subsystem you want to use.
Practically speaking, you must not mix explicit CONNECT and OPEN requests with
implicitly established connections in the same address space. Either explicitly
specify which DB2 subsystem you want to use or allow all requests to use the
default subsystem.
Do not issue CONNECT requests from a TCB that already has an active DB2
connection. (See “Summary of CAF behavior” on page 766 and “Error messages
and dsntrace” on page 769 for more information on CAF errors.)
Table 84. Examples of CAF CONNECT calls
Language Call example
Assembler CALL DSNALI,(FUNCTN,SSID,TERMECB,STARTECB,
RIBPTR,RETCODE,REASCODE,SRDURA,EIBPTR)
C fnret=dsnali(&functn[0],&ssid[0], &tecb, &secb,&ribptr,&retcode, &reascode, &srdura[0], &eibptr);
COBOL CALL 'DSNALI' USING FUNCTN SSID TERMECB STARTECB RIBPTR RETCODE REASCODE
SRDURA EIBPTR.
FORTRAN CALL DSNALI(FUNCTN,SSID,TERMECB,STARTECB,RIBPTR,
RETCODE,REASCODE,SRDURA,EIBPTR)
PL/I CALL DSNALI(FUNCTN,SSID,TERMECB,STARTECB,RIBPTR,RETCODE,
REASCODE,SRDURA,EIBPTR);
Note: DSNALI is an assembler language program; therefore, the following compiler directives must be included in
your C and PL/I applications:
C #pragma linkage(dsnali, OS)
C++ extern "OS" {
int DSNALI(
char * functn,
...); }
PL/I DCL DSNALI ENTRY OPTIONS(ASM,INTER,RETCODE);
function
A 12-byte area containing the word OPEN followed by eight blanks.
ssnm
A 4-byte DB2 subsystem name or group attachment name (if used in a data
sharing group). If the task does not already have a connection to the named DB2
subsystem, OPEN also establishes one. If your ssnm is less than four characters long, pad it
on the right with blanks to a length of four characters.
plan
An 8-byte DB2 plan name.
retcode
A 4-byte area in which CAF places the return code.
This field is optional. If not specified, CAF places the return code in register 15
and the reason code in register 0.
Usage: OPEN allocates DB2 resources needed to run the plan or issue IFI
requests. If the requesting task does not already have a connection to the named
DB2 subsystem, then OPEN establishes it.
OPEN allocates the plan to the DB2 subsystem named in ssnm. The ssnm
parameter, like the others, is required, even if the task issues a CONNECT call. If a
task issues CONNECT followed by OPEN, then the subsystem names for both calls
must be the same.
The use of OPEN is optional. If you do not use OPEN, the action of OPEN occurs
on the first SQL or IFI call from the task, using the defaults listed under “Implicit
connections” on page 750.
function
A 12-byte area containing the word CLOSE followed by seven blanks.
termop
A 4-byte terminate option, with one of these values:
SYNC    Commit any modified data.
ABRT    Roll back any modified data to the previous commit point.
retcode
A 4-byte area in which CAF should place the return code.
This field is optional. If not specified, CAF places the return code in register 15
and the reason code in register 0.
reascode
A 4-byte area in which CAF places a reason code. If not specified, CAF places
the reason code in register 0.
This field is optional. If specified, you must also specify retcode.
Usage: CLOSE deallocates the plan that was allocated either explicitly by OPEN or
implicitly at the first SQL call.
If you did not issue a CONNECT for the task, CLOSE also deletes the task's
connection to DB2. If no other task in the address space has an active connection
to DB2, DB2 also deletes the control block structures created for the address space
and removes the cross memory authorization.
Do not use CLOSE when your current task does not have a plan allocated.
Using CLOSE is optional. If you omit it, DB2 performs the same actions when your
task terminates, using the SYNC parameter if termination is normal and the ABRT
parameter if termination is abnormal. (The function is an implicit CLOSE.) If the
objective is to shut down your application, you can improve shut down performance
by using CLOSE explicitly before the task terminates.
If you want to use a new plan, you must issue an explicit CLOSE, followed by an
OPEN, specifying the new plan name.
If DB2 terminates, a task that did not issue CONNECT should explicitly issue
CLOSE, so that CAF can reset its control blocks to allow for future connections.
This CLOSE returns the reset accomplished return code (+004) and reason code
X'00C10824'. If you omit CLOSE, then when DB2 is back on line, the task's next
connection request fails. You get either the message Your TCB does not have a
connection, with X'00F30018' in register 0, or CAF error message DSNA201I or
DSNA202I, depending on what your application tried to do. The task must then
issue CLOSE before it can reconnect to DB2.
A task that issued CONNECT explicitly should issue DISCONNECT to cause CAF
to reset its control blocks when DB2 terminates. In this case, CLOSE is not
necessary.
►►──CALL DSNALI──(──function──┬─────────────────────────────┬──)─────────────────────────────────────►◄
                              └─,──retcode──┬─────────────┬─┘
                                            └─,──reascode─┘
function
A 12-byte area containing the word DISCONNECT followed by two blanks.
retcode
A 4-byte area in which CAF places the return code.
This field is optional. If not specified, CAF places the return code in register 15
and the reason code in register 0.
reascode
A 4-byte area in which CAF places a reason code. If not specified, CAF places
the reason code in register 0.
This field is optional. If specified, you must also specify retcode.
Only those tasks that issued CONNECT explicitly can issue DISCONNECT. If
CONNECT was not used, then DISCONNECT causes an error.
If an OPEN is in effect when the DISCONNECT is issued (that is, a plan is
allocated), CAF issues an implicit CLOSE with the SYNC parameter.
Using DISCONNECT is optional. Without it, DB2 performs the same functions when
the task terminates. (The function is an implicit DISCONNECT.) If the objective is to
shut down your application, you can improve shut down performance if you request
DISCONNECT explicitly before the task terminates.
If DB2 terminates, a task that issued CONNECT must issue DISCONNECT to reset
the CAF control blocks. The function returns the reset accomplished return codes
and reason codes (+004 and X'00C10824'), and ensures that future connection
requests from the task work when DB2 is back on line.
A task that did not issue CONNECT explicitly must issue CLOSE to reset the CAF
control blocks when DB2 terminates.
TRANSLATE is useful only after an OPEN fails, and then only if you used an
explicit CONNECT before the OPEN request. For errors that occur during SQL or
IFI requests, the TRANSLATE function is performed automatically.
function
A 12-byte area containing the word TRANSLATE followed by three blanks.
sqlca
The program's SQL communication area (SQLCA).
retcode
A 4-byte area in which CAF places the return code.
This field is optional. If not specified, CAF places the return code in register 15
and the reason code in register 0.
reascode
A 4-byte area in which CAF places a reason code. If not specified, CAF places
the reason code in register 0.
This field is optional. If specified, you must also specify retcode.
Usage: Use TRANSLATE to get a corresponding SQL error code and message
text for the DB2 error reason codes that CAF returns in register 0 following an
OPEN service request. DB2 places the information into the SQLCODE and
SQLSTATE host variables or related fields of the SQLCA.
The TRANSLATE function can translate those codes beginning with X'00F3', but it
does not translate CAF reason codes beginning with X'00C1'. If you receive error
reason code X'00F30040' (resource unavailable) after an OPEN request,
TRANSLATE returns the name of the unavailable database object in the last 44
characters of field SQLERRM. If the DB2 TRANSLATE function does not recognize
the error reason code, it returns SQLCODE -924 (SQLSTATE '58006') and places
a printable copy of the original DB2 function code and the return and error reason
codes in the SQLERRM field. The contents of registers 0 and 15 do not change
unless TRANSLATE fails, in which case register 0 is set to X'00C10205' and
register 15 to 200.
Summary of CAF behavior
Table 89 summarizes CAF behavior after various inputs from application programs.
Use it to help plan the calls your program makes, and to help understand where
CAF errors can occur. Careful use of this table can avoid major structural problems
in your application.
Sample scenarios
This section shows sample scenarios for connecting tasks to DB2.
A task can have a connection to one and only one DB2 subsystem at any point in
time. A CAF error occurs if the subsystem name on OPEN does not match the one
on CONNECT. To switch to a different subsystem, the application must disconnect
from the current subsystem, then issue a connect request specifying a new
subsystem name.
Several tasks
In this scenario, multiple tasks within the address space are using DB2 services.
Each task must explicitly specify the same subsystem name on either the
CONNECT or OPEN function request. Task 1 makes no SQL or IFI calls. Its
purpose is to monitor the DB2 termination and start-up ECBs, and to check the
DB2 release level.
TASK 1         TASK 2         TASK 3         TASK n
CONNECT
               OPEN           OPEN           OPEN
               SQL            SQL            SQL
               ...            ...            ...
               CLOSE          CLOSE          CLOSE
               OPEN           OPEN           OPEN
               SQL            SQL            SQL
               ...            ...            ...
               CLOSE          CLOSE          CLOSE
DISCONNECT
Attention exits
An attention exit enables you to regain control from DB2, during long-running or
erroneous requests, by detaching the TCB currently waiting on an SQL or IFI
request to complete. DB2 detects the abend caused by DETACH and performs
termination processing (including ROLLBACK) for that task.
The call attachment facility has no attention exits. You can provide your own if
necessary. However, DB2 uses enabled unlocked task (EUT) functional recovery
routines (FRRs), so if you request attention while DB2 code is running, your routine
may not get control.
Recovery routines
The call attachment facility has no abend recovery routines.
Your program can provide an abend exit routine. It must use tracking indicators to
determine if an abend occurred during DB2 processing. If an abend occurs while
DB2 has control, you have these choices:
) Allow task termination to complete. Do not retry the program. DB2 detects task
termination and terminates the thread with the ABRT parameter. You lose all
database changes back to the last SYNC or COMMIT point.
This is the only action that you can take for abends that CANCEL or DETACH
cause. You cannot use additional SQL statements at this point. If you attempt
to execute another SQL statement from the application program or its recovery
routine, a return code of +256 and a reason code of X'00F30083' occurs.
) In an ESTAE routine, issue CLOSE with the ABRT parameter followed by
DISCONNECT. The ESTAE exit routine can retry so that you do not need to
re-instate the application task.
Standard MVS functional recovery routines (FRRs) can cover only code running in
service request block (SRB) mode. Because DB2 does not support calls from SRB
mode routines, you can use only enabled unlocked task (EUT) FRRs in your
routines that call DB2.
Do not have an EUT FRR active when using CAF, processing SQL requests, or
calling IFI.
With MVS, if you have an active EUT FRR, all DB2 requests fail, including the
initial CONNECT or OPEN. The requests fail because DB2 always creates an
ARR-type ESTAE, and MVS/ESA does not allow the creation of ARR-type ESTAEs
when an FRR is active.
When the reason code begins with X'00F3' (except for X'00F30006'), you can use
the CAF TRANSLATE function to obtain error message text that can be printed and
displayed.
For SQL calls, CAF returns standard SQLCODEs in the SQLCA. See Section 2 of
DB2 Messages and Codes for a list of those return codes and their meanings. CAF
returns IFI return codes and reason codes in the instrumentation facility
communication area (IFCA).
Table 90 (Page 2 of 2). CAF return codes and reason codes
Return code     Reason code     Explanation
200 (note 1)    X'00C10206'     Wrong number of parameters or the end-of-list bit was off.
200 (note 1)    X'00C10207'     Unrecognized function parameter.
200 (note 1)    X'00C10208'     Received requests to access two different DB2 subsystems from the same TCB.
204 (note 2)                    CAF system error. Probable error in the attach or DB2.
Notes:
1. A CAF error probably caused by errors in the parameter lists coming from application programs. CAF
errors do not change the current state of your connection to DB2; you can continue processing with a
corrected request.
2. System errors cause abends. For an explanation of the abend reason codes, see Section 4 of DB2
Messages and Codes. If tracing is on, a descriptive message is written to the DSNTRACE data set just
before the abend.
Program examples
The following pages contain sample JCL and assembler programs that access the
call attachment facility (CAF).
//SYSPRINT DD SYSOUT=*
//DSNTRACE DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
These code segments assume the existence of a WRITE macro. Anywhere you
find this macro in the code is a good place for you to substitute code of your own.
You must decide what you want your application to do in those situations; you
probably do not want to write the error messages shown.
When the module is done with DB2, you should delete the entries.
****************************** GET LANGUAGE INTERFACE ENTRY ADDRESSES
LOAD EP=DSNALI Load the CAF service request EP
ST R0,LIALI Save this for CAF service requests
LOAD EP=DSNHLI2 Load the CAF SQL call Entry Point
ST R0,LISQL Save this for SQL calls
* .
* . Insert connection service requests and SQL calls here
* .
DELETE EP=DSNALI Correctly maintain use count
DELETE EP=DSNHLI2 Correctly maintain use count
****************************** CONNECT ********************************
L R15,LIALI Get the Language Interface address
MVC FUNCTN,CONNECT Get the function to call
CALL (15),(FUNCTN,SSID,TECB,SECB,RIBPTR),VL,MF=(E,CAFCALL)
BAL R14,CHEKCODE Check the return and reason codes
CLC CONTROL,CONTINUE Is everything still OK
BNE EXIT If CONTROL not 'CONTINUE', stop loop
USING RIB,R8 Prepare to access the RIB
L R8,RIBPTR Access RIB to get DB2 release level
WRITE 'The current DB2 release level is' RIBREL
The code does not show a task that waits on the DB2 termination ECB. If you like,
you can code such a task and use the MVS WAIT macro to monitor the ECB. You
probably want this task to detach the sample code if the termination ECB is posted.
That task can also wait on the DB2 startup ECB. This sample waits on the startup
ECB at its own task level.
On entry, the code assumes that certain variables are already set:
Variable Usage
LIALI The entry point that handles DB2 connection service requests.
LISQL The entry point that handles SQL calls.
SSID The DB2 subsystem identifier.
***********************************************************************
* CHEKCODE PSEUDOCODE *
***********************************************************************
*IF TECB is POSTed with the ABTERM or FORCE codes
* THEN
* CONTROL = 'SHUTDOWN'
* WRITE 'DB2 found FORCE or ABTERM, shutting down'
* ELSE /* Termination ECB was not POSTed */
* SELECT (RETCODE) /* Look at the return code */
* WHEN (0) ; /* Do nothing; everything is OK */
* WHEN (4) ; /* Warning */
* SELECT (REASCODE) /* Look at the reason code */
* WHEN ('00C10823'X) /* DB2 / CAF release level mismatch*/
* WRITE 'Found a mismatch between DB2 and CAF release levels'
* WHEN ('00C10824'X) /* Ready for another CAF call */
* CONTROL = 'RESTART' /* Start over, from the top */
* OTHERWISE
* WRITE 'Found unexpected R0 when R15 was 4'
* CONTROL = 'SHUTDOWN'
* END INNER-SELECT
* WHEN (8,12) /* Connection failure */
* SELECT (REASCODE) /* Look at the reason code */
* WHEN ('00F30002'X, /* These mean that DB2 is down but */
* '00F30012'X) /* will POST SECB when up again */
* DO
* WRITE 'DB2 is unavailable. I'll tell you when it's up.'
* WAIT SECB /* Wait for DB2 to come up */
* WRITE 'DB2 is now available.'
* END
* /**********************************************************/
* /* Insert tests for other DB2 connection failures here. */
* /* CAF Externals Specification lists other codes you can */
* /* receive. Handle them in whatever way is appropriate */
* /* for your application. */
* /**********************************************************/
* OTHERWISE /* Found a code we're not ready for*/
* WRITE 'Warning: DB2 connection failure. Cause unknown'
* CALL DSNALI ('TRANSLATE',SQLCA) /* Fill in SQLCA */
* WRITE SQLCODE and SQLERRM
* END INNER-SELECT
* WHEN (200)
* WRITE 'CAF found user error. See DSNTRACE data set'
* WHEN (204)
* WRITE 'CAF system error. See DSNTRACE data set'
* OTHERWISE
* CONTROL = 'SHUTDOWN'
* WRITE 'Got an unrecognized return code'
* END MAIN SELECT
* IF (RETCODE > 4) THEN /* Was there a connection problem?*/
* CONTROL = 'SHUTDOWN'
* END CHEKCODE
Figure 204 (Part 1 of 3). Subroutine to check return codes from CAF and DB2, in
assembler
CLC REASCODE,F30012 Hunt for X'00F30012'
BE DB2DOWN
WRITE 'DB2 connection failure with an unrecognized REASCODE'
CLC SQLCODE,ZERO See if we need TRANSLATE
BNE A4TRANS If not zero, skip TRANSLATE
* ********************* TRANSLATE unrecognized RETCODEs ********
WRITE 'SQLCODE 0 but R15 not, so TRANSLATE to get SQLCODE'
L R15,LIALI Get the Language Interface address
CALL (15),(TRANSLAT,SQLCA),VL,MF=(E,CAFCALL)
C R0,C10205 Did the TRANSLATE work?
BNE A4TRANS If not C10205, SQLERRM now filled in
WRITE 'Not able to TRANSLATE the connection failure'
B ENDCCODE Go to end of CHEKCODE
A4TRANS DS 0H SQLERRM must be filled in to get here
* Note: your code should probably remove the X'FF'
* separators and format the SQLERRM feedback area.
* Alternatively, use DB2 Sample Application DSNTIAR
* to format a message.
WRITE 'SQLERRM is:' SQLERRM
B ENDCCODE We are done. Go to end of CHEKCODE
DB2DOWN DS 0H Here when DB2 is down
WRITE 'DB2 is down and I will tell you when it comes up'
WAIT ECB=SECB Wait for DB2 to come up
WRITE 'DB2 is now available'
MVC CONTROL,RESTART Indicate that we should re-CONNECT
B ENDCCODE
* ********************* HUNT FOR 200 ***************************
HUNT200 DS 0H Hunt return code of 200
CLC RETCODE,NUM200 Hunt 200
BNE HUNT204
WRITE 'CAF found user error, see DSNTRACE data set'
B ENDCCODE We are done. Go to end of CHEKCODE
* ********************* HUNT FOR 204 ***************************
HUNT204 DS 0H Hunt return code of 204
CLC RETCODE,NUM204 Hunt 204
BNE WASSAT If not 204, got strange code
WRITE 'CAF found system error, see DSNTRACE data set'
B ENDCCODE We are done. Go to end of CHEKCODE
* ********************* UNRECOGNIZED RETCODE *******************
WASSAT DS 0H
WRITE 'Got an unrecognized RETCODE'
MVC CONTROL,SHUTDOWN Shutdown
B ENDCCODE We are done. Go to end of CHEKCODE
ENDCCODE DS 0H Should we shut down?
L R4,RETCODE Get a copy of the RETCODE
C R4,FOUR Have a look at the RETCODE
BNH BYEBYE If RETCODE <= 4 then leave CHEKCODE
MVC CONTROL,SHUTDOWN Shutdown
BYEBYE DS 0H Wrap up and leave CHEKCODE
L R13,4(,R13) Point to caller's save area
RETURN (14,12) Return to the caller
Figure 204 (Part 3 of 3). Subroutine to check return codes from CAF and DB2, in
assembler
In the example that follows, LISQL is addressable because the calling CSECT used
the same register 12 as CSECT DSNHLI. Your application must also establish
addressability to LISQL.
***********************************************************************
* Subroutine DSNHLI intercepts calls to LI EP=DSNHLI
***********************************************************************
DS 0D
DSNHLI CSECT Begin CSECT
STM R14,R12,12(R13) Prologue
LA R15,SAVEHLI Get save area address
ST R13,4(,R15) Chain the save areas
ST R15,8(,R13) Chain the save areas
LR R13,R15 Put save area address in R13
L R15,LISQL Get the address of real DSNHLI
BASSM R14,R15 Branch to DSNALI to do an SQL call
* DSNALI is in 31-bit mode, so use
* BASSM to assure that the addressing
* mode is preserved.
L R13,4(,R13) Restore R13 (caller's save area addr)
L R14,12(,R13) Restore R14 (return address)
RETURN (1,12) Restore R1-R12, NOT R0 and R15 (codes)
Variable declarations
Figure 205 on page 778 shows declarations for some of the variables used in the
previous subroutines.
****************************** VARIABLES ******************************
SECB DS F DB2 Start-up ECB
TECB DS F DB2 Termination ECB
LIALI DS F DSNALI Entry Point address
LISQL DS F DSNHLI2 Entry Point address
SSID DS CL4 DB2 Subsystem ID. CONNECT parameter
PLAN DS CL8 DB2 Plan name. OPEN parameter
TRMOP DS CL4 CLOSE termination option (SYNC|ABRT)
FUNCTN DS CL12 CAF function to be called
RIBPTR DS F DB2 puts Release Info Block addr here
RETCODE DS F Chekcode saves R15 here
REASCODE DS F Chekcode saves R0 here
CONTROL DS CL8 GO, SHUTDOWN, or RESTART
SAVEAREA DS 18F Save area for CHEKCODE
****************************** CONSTANTS ******************************
SHUTDOWN DC CL8'SHUTDOWN' CONTROL value: Shutdown execution
RESTART DC CL8'RESTART ' CONTROL value: Restart execution
CONTINUE DC CL8'CONTINUE' CONTROL value: Everything OK, cont
CODE0 DC F'0' SQLCODE of 0
CODE100 DC F'100' SQLCODE of 100
QUIESCE DC XL3'000008' TECB postcode: STOP DB2 MODE=QUIESCE
CONNECT DC CL12'CONNECT ' Name of a CAF service. Must be CL12!
OPEN DC CL12'OPEN ' Name of a CAF service. Must be CL12!
CLOSE DC CL12'CLOSE ' Name of a CAF service. Must be CL12!
DISCON DC CL12'DISCONNECT ' Name of a CAF service. Must be CL12!
TRANSLAT DC CL12'TRANSLATE ' Name of a CAF service. Must be CL12!
SYNC DC CL4'SYNC' Termination option (COMMIT)
ABRT DC CL4'ABRT' Termination option (ROLLBACK)
****************************** RETURN CODES (R15) FROM CALL ATTACH ****
ZERO DC F'0'
FOUR DC F'4' 4
EIGHT DC F'8' 8
TWELVE DC F'12' 12 (Call Attach return code in R15)
NUM200 DC F'200' 200 (User error)
NUM204 DC F'204' 204 (Call Attach system error)
****************************** REASON CODES (R0) FROM CALL ATTACH ****
C10205 DC XL4'00C10205' Call attach could not TRANSLATE
C10823 DC XL4'00C10823' Call attach found a release mismatch
C10824 DC XL4'00C10824' Call attach ready for more input
F30002 DC XL4'00F30002' DB2 subsystem not up
F30011 DC XL4'00F30011' DB2 subsystem not up
F30012 DC XL4'00F30012' DB2 subsystem not up
F30025 DC XL4'00F30025' DB2 is stopping (REASCODE)
*
* Insert more codes here as necessary for your application
*
****************************** SQLCA and RIB **************************
EXEC SQL INCLUDE SQLCA
DSNDRIB Get the DB2 Release Information Block
****************************** CALL macro parm list *******************
CAFCALL CALL ,(*,*,*,*,*,*,*,*,*),VL,MF=L
Figure 205. Declarations for variables used in the previous subroutines
Prerequisite knowledge: Before you consider using RRSAF, you must be familiar
with the following MVS topics:
) The CALL macro and standard module linkage conventions
) Program addressing and residency options (AMODE and RMODE)
) Creating and controlling tasks; multitasking
) Functional recovery facilities such as ESTAE, ESTAI, and FRRs
) Synchronization techniques such as WAIT/POST.
) OS/390 RRS functions, such as SRRCMIT and SRRBACK.
Number of connections to DB2: Each task control block (TCB) can have only one
connection to DB2. A DB2 service request issued by a program that runs under a
given task is associated with that task's connection to DB2. The service request
operates independently of any DB2 activity under any other task.
Specifying a plan for a task: Each connected task can run a plan. Tasks within a
single address space can specify the same plan, but each instance of a plan runs
independently from the others. A task can terminate its plan and run a different plan
without completely breaking its connection to DB2.
Providing attention processing exits and recovery routines: RRSAF does not
generate task structures, and it does not provide attention processing exits or
functional recovery routines. You can provide whatever attention handling and
functional recovery your application needs, but you must use ESTAE/ESTAI type
recovery routines only.
Programming language
You can write RRSAF applications in assembler language, C, COBOL, FORTRAN,
and PL/I. When choosing a language to code your application in, consider these
restrictions:
) If you use MVS macros (ATTACH, WAIT, POST, and so on), you must choose
a programming language that supports them.
) The RRSAF TRANSLATE function is not available from FORTRAN. To use the
function, code it in a routine written in another language, and then call that
routine from FORTRAN.
Tracing facility
A tracing facility provides diagnostic messages that help you debug programs and
diagnose errors in the RRSAF code. The trace information is available only in a
SYSABEND or SYSUDUMP dump.
Program preparation
Preparing your application program to run in RRSAF is similar to preparing it to run
in other environments, such as CICS, IMS, and TSO. You can prepare an RRSAF
application either in the batch environment or by using the DB2 program
preparation process. You can use the program preparation system either through
DB2I or through the DSNH CLIST. For examples and guidance in program
preparation, see “Chapter 6-1. Preparing an application program to run” on
page 405.
Program size
The RRSAF code requires about 10KB of virtual storage per address space and an
additional 10KB for each TCB that uses RRSAF.
Use of LOAD
RRSAF uses MVS SVC LOAD to load a module as part of the initialization
following your first service request. The module is loaded into fetch-protected
storage that has the job-step protection key. If your local environment intercepts
and replaces the LOAD SVC, then you must ensure that your version of LOAD
manages the load list element (LLE) and contents directory entry (CDE) chains like
the standard MVS LOAD macro.
Follow these guidelines for choosing the DB2 statements or the CPIC functions for
commit and rollback operations:
) Use DB2 COMMIT and ROLLBACK statements when you know that the
following conditions are true:
– The only recoverable resource accessed by your application is DB2 data
managed by a single DB2 instance.
– The address space from which syncpoint processing is initiated is the same
as the address space that is connected to DB2.
) If your application accesses other recoverable resources, or syncpoint
processing and DB2 access are initiated from different address spaces, use
SRRCMIT and SRRBACK.
Run environment
Applications that request DB2 services must adhere to several run environment
requirements. Those requirements must be met regardless of the attachment facility
you use. They are not unique to RRSAF.
) The application must be running in TCB mode.
) No EUT FRRs can be active when the application requests DB2 services. If an
EUT FRR is active, DB2's functional recovery can fail, and your application can
receive unpredictable abends.
) Different attachment facilities cannot be active concurrently within the same
address space. For example:
– An application should not use RRSAF in CICS or IMS address spaces.
– An application running in an address space that has a CAF connection to
DB2 cannot connect to DB2 using RRSAF.
– An application running in an address space that has an RRSAF connection
to DB2 cannot connect to DB2 using CAF.
) One attachment facility cannot start another. This means your RRSAF
application cannot use DSN, and a DSN RUN subcommand cannot call your
RRSAF application.
) The language interface module for RRSAF, DSNRLI, is shipped with the
linkage attributes AMODE(31) and RMODE(ANY). If your applications load
RRSAF below the 16MB line, you must link-edit DSNRLI again.
Your program uses RRSAF by issuing CALL DSNRLI statements with the
appropriate options. For the general form of the statements, see “RRSAF function
descriptions” on page 786.
The first element of each option list is a function, which describes the action you
want RRSAF to take. For a list of available functions and what they do, see
“Summary of connection functions” on page 786. The effect of any function
depends in part on what functions the program has already performed. Before
using any function, be sure to read the description of its usage. Also read
“Summary of connection functions” on page 786, which describes the influence of
previously invoked functions.
Part of RRSAF is a DB2 load module, DSNRLI, the RRSAF language interface
module. DSNRLI has the alias names DSNHLIR and DSNWLIR. The module has
five entry points: DSNRLI, DSNHLI, DSNHLIR, DSNWLI, and DSNWLIR:
) Entry point DSNRLI handles explicit DB2 connection service requests.
) DSNHLI and DSNHLIR handle SQL calls. Use DSNHLI if your application
program link-edits RRSAF; use DSNHLIR if your application program loads
RRSAF.
) DSNWLI and DSNWLIR handle IFI calls. Use DSNWLI if your application
program link-edits RRSAF; use DSNWLIR if your application program loads
RRSAF.
You can access the DSNRLI module by explicitly issuing LOAD requests when your
program runs, or by including the DSNRLI module in your load module when you
link-edit your program. There are advantages and disadvantages to each approach.
By explicitly loading the DSNRLI module, you can isolate the maintenance of your
application from future IBM service to the language interface. If the language
interface changes, the change will probably not affect your load module.
You must indicate to DB2 which entry point to use. You can do this in one of two
ways:
) Specify the precompiler option ATTACH(RRSAF).
This causes DB2 to generate calls that specify entry point DSNHLIR. You
cannot use this option if your application is written in FORTRAN.
) Code a dummy entry point named DSNHLI within your load module.
If you do not specify the precompiler option ATTACH, the DB2 precompiler
generates calls to entry point DSNHLI for each SQL request. The precompiler
does not know about, and is independent of, the different DB2 attachment facilities.
When the calls generated by the DB2 precompiler pass control to DSNHLI,
your code corresponding to the dummy entry point must preserve the option list
passed in R1 and call DSNHLIR specifying the same option list. For a coding
example of a dummy DSNHLI entry point, see “Using dummy entry point
DSNHLI” on page 812.
Link-editing DSNRLI
You can include DSNRLI when you link-edit your load module. For example, you
can use a linkage editor control statement like this in your JCL:
INCLUDE DB2LIB(DSNRLI).
By coding this statement, you avoid linking the wrong language interface module.
When you include DSNRLI during the link-edit, you do not include a dummy
DSNHLI entry point in your program or specify the precompiler option ATTACH.
Module DSNRLI contains an entry point for DSNHLI, which is identical to
DSNHLIR, and an entry point DSNWLI, which is identical to DSNWLIR.
A disadvantage of link-editing DSNRLI into your load module is that if IBM makes a
change to DSNRLI, you must link-edit your program again.
Connection name and connection type: The connection name and connection
type are RRSAF. You can use the DISPLAY THREAD command to list RRSAF
applications that have the connection name RRSAF.
RRSAF relies on the MVS System Authorization Facility (SAF) and a security
product, such as RACF, to verify and authorize the authorization IDs. An application
that connects to DB2 through RRSAF must pass those identifiers to SAF for
verification and authorization checking. RRSAF retrieves the identifiers from SAF.
A location can provide an authorization exit routine for a DB2 connection to change
the authorization IDs and to indicate whether the connection is allowed. The actual
values assigned to the primary and secondary authorization IDs can differ from the
values provided by a SIGNON or AUTH SIGNON request. A site's DB2 signon exit
routine can access the primary and secondary authorization IDs and can modify the
IDs to satisfy the site's security requirements. The exit can also indicate whether
the signon request should be accepted.
For information about authorization IDs and the connection and signon exit
routines, see Appendix B (Volume 2) of DB2 Administration Guide.
Do not mix RRSAF connections with other connection types in a single address
space. The first connection to DB2 made from an address space determines the
type of connection allowed.
Task termination
If an application that is connected to DB2 through RRSAF terminates normally
before the TERMINATE THREAD or TERMINATE IDENTIFY functions deallocate
the plan, then OS/390 RRS commits any changes made after the last commit point.
If the application terminates abnormally before the plan is deallocated, then
OS/390 RRS rolls back any changes made after the last commit point.
In either case, DB2 deallocates the plan, if necessary, and terminates the
application's connection.
DB2 abend
If DB2 abends while an application is running, DB2 rolls back changes to the last
commit point. If DB2 terminates while processing a commit request, DB2 either
commits or rolls back any changes at the next restart. The action taken depends on
the state of the commit request when DB2 terminates.
Summary of connection functions
You can use the following functions with CALL DSNRLI:
IDENTIFY
Establishes the task as a user of the named DB2 subsystem. When the first
task within an address space issues a connection request, the address space
is initialized as a user of DB2. See “IDENTIFY: Syntax and usage” on
page 788.
| SWITCH TO
| Directs RRSAF, SQL or IFI requests to a specified DB2 subsystem. See
| “SWITCH TO: Syntax and usage” on page 790.
SIGNON
Provides to DB2 a user ID and, optionally, one or more secondary authorization
IDs that are associated with the connection. See “SIGNON: Syntax and usage”
on page 792.
AUTH SIGNON
Provides to DB2 a user ID, an Accessor Environment Element (ACEE) and,
optionally, one or more secondary authorization IDs that are associated with
the connection. See “AUTH SIGNON: Syntax and usage” on page 795.
| CONTEXT SIGNON
| Provides to DB2 a user ID and, optionally, one or more secondary authorization
| IDs that are associated with the connection. You can execute CONTEXT
| SIGNON from an unauthorized program. See “CONTEXT SIGNON: Syntax
| and usage” on page 798.
CREATE THREAD
Allocates a DB2 plan or package. CREATE THREAD must complete before the
application can execute SQL statements. See “CREATE THREAD: Syntax and
usage” on page 802.
TERMINATE THREAD
Deallocates the plan. See “TERMINATE THREAD: Syntax and usage” on
page 803.
TERMINATE IDENTIFY
Removes the task as a user of DB2 and, if this is the last or only task in the
address space that has a DB2 connection, terminates the address space
connection to DB2. See “TERMINATE IDENTIFY: Syntax and usage” on
page 804.
TRANSLATE
Returns an SQL code and printable text, in the SQLCA, that describes a DB2
error reason code. You cannot call the TRANSLATE function from the
FORTRAN language. See “Translate: Syntax and usage” on page 806.
Register conventions
Table 91 summarizes the register conventions for RRSAF calls.
If you do not specify the return code and reason code parameters in your RRSAF
calls, RRSAF puts a return code in register 15 and a reason code in register 0. If
you specify the return code and reason code parameters, RRSAF places the return
code in register 15 and in the return code parameter to accommodate high-level
languages that support special return code processing. RRSAF preserves the
contents of registers 2 through 14.
For all languages: When you code CALL DSNRLI statements in any language,
specify all parameters that come before Return Code. You cannot omit any of
those parameters by coding zeros or blanks. There are no defaults for those
parameters.
For all languages except assembler language: Code 0 for an optional parameter
in the CALL DSNRLI statement when you want to use the default value for that
parameter but specify subsequent parameters. For example, suppose you are
coding an IDENTIFY call in a COBOL program. You want to specify all parameters
except Return Code. Write the call in this way:
CALL 'DSNRLI' USING IDFYFN SSNM RIBPTR EIBPTR TERMCB STARTECB
BY CONTENT ZERO BY REFERENCE REASCODE.
function
An 18-byte area containing IDENTIFY followed by 10 blanks.
ssnm
A 4-byte DB2 subsystem name or group attachment name (if used in a data
sharing group) to which the connection is made. If ssnm is less than four
characters long, pad it on the right with blanks to a length of four characters.
ribptr
A 4-byte area in which RRSAF places the address of the release information
block (RIB) after the call. This can be used to determine the release level of the
# DB2 subsystem to which the application is connected. You can determine the
# modification level within the release level by examining fields RIBCNUMB and
# RIBCINFO. If the value in RIBCNUMB is greater than zero, check RIBCINFO
# for modification levels.
If the RIB is not available (for example, if you name a subsystem that does not
exist), DB2 sets the 4-byte area to zeros.
The area to which ribptr points is below the 16-megabyte line.
This parameter is required, although the application does not need to refer to
the returned information.
eibptr
A 4-byte area in which RRSAF places the address of the environment
information block (EIB) after the call. The EIB contains environment information,
such as the data sharing group and member name for the DB2 to which the
IDENTIFY request was issued. If the DB2 subsystem is not in a data sharing
group, then RRSAF sets the data sharing group and member names to blanks.
If the EIB is not available (for example, if ssnm names a subsystem that does
not exist), RRSAF sets the 4-byte area to zeros.
The area to which eibptr points is above the 16-megabyte line.
This parameter is required, although the application does not need to refer to
the returned information.
termecb
The address of the application's event control block (ECB) used for DB2
termination. DB2 posts this ECB when the system operator enters the
command STOP DB2 or when DB2 is terminating abnormally. Specify a value
of 0 if you do not want to use a termination ECB.
startecb
The address of the application's startup ECB. If DB2 has not started when the
application issues the IDENTIFY call, DB2 posts the ECB when DB2 startup
has completed. Enter a value of zero if you do not want to use a startup ECB.
DB2 posts a maximum of one startup ECB per address space. The ECB
posted is associated with the most recent IDENTIFY call from that address
space. The application program must examine any nonzero RRSAF or DB2
reason codes before issuing a WAIT on this ECB.
If ssnm is a group attachment name, DB2 ignores the startup ECB.
retcode
A 4-byte area in which RRSAF places the return code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the return code in register 15 and the reason code in register 0.
reascode
A 4-byte area in which RRSAF places a reason code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the reason code in register 0.
If you specify this parameter, you must also specify retcode.
During IDENTIFY processing, DB2 determines whether the user address space is
authorized to connect to DB2. DB2 invokes the MVS SAF and passes a primary
authorization ID to SAF. That authorization ID is the 7-byte user ID associated with
the address space, unless an authorized function has built an ACEE for the
address space. If an authorized function has built an ACEE, DB2 passes the 8-byte
user ID from the ACEE. SAF calls an external security product, such as RACF, to
determine if the task is authorized to use:
) The DB2 resource class (CLASS=DSNR)
) The DB2 subsystem (SUBSYS=ssnm)
) Connection type RRSAF
If that check is successful, DB2 calls the DB2 connection exit to perform additional
verification and possibly change the authorization ID. DB2 then sets the connection
name to RRSAF and the connection type to RRSAF.
Table 93 on page 790 shows an IDENTIFY call in each language.
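The following C fragment is an illustrative sketch of such a call rather than one of
the shipped samples; the function name identify_to_db2, the variable names, and the
subsystem name 'DB2A' are assumptions. It uses the #pragma linkage directive and
the parameter order described in this section.

#pragma linkage(dsnrli, OS)                /* DSNRLI uses OS linkage            */
int dsnrli();                              /* RRSAF language interface entry    */

int identify_to_db2(void)
{
   char idfyfn[19] = "IDENTIFY          "; /* IDENTIFY followed by 10 blanks    */
   char ssnm[5]    = "DB2A";               /* illustrative DB2 subsystem name   */
   int  ribptr, eibptr;                    /* RRSAF returns RIB and EIB addrs   */
   int  tecb = 0, secb = 0;                /* termination and startup ECBs      */
   int  retcode, reascode;                 /* RRSAF return and reason codes     */
   int  fnret;

   fnret = dsnrli(&idfyfn[0], &ssnm[0], &ribptr, &eibptr,
                  &tecb, &secb, &retcode, &reascode);
   return retcode;                         /* 0 means IDENTIFY succeeded        */
}

Check retcode (or register 15) and reascode after the call before you issue any
other RRSAF request.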
function
An 18-byte area containing SWITCH TO followed by nine blanks.
ssnm
A 4-byte DB2 subsystem name or group attachment name (if used in a data
sharing group) to which the connection is made. If ssnm is less than four
characters long, pad it on the right with blanks to a length of four characters.
retcode
A 4-byte area in which RRSAF places the return code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the return code in register 15 and the reason code in register 0.
After you establish a connection to a DB2 subsystem, you must make a SWITCH
TO call before you identify to another DB2 subsystem. If you do not make a
SWITCH TO call before you make an IDENTIFY call to another DB2 subsystem,
then DB2 returns return code X'200' and reason code X'00C12201'.
This example shows how you can use SWITCH TO to interact with three DB2
subsystems.
RRSAF calls for subsystem db21:
IDENTIFY
SIGNON
CREATE THREAD
Execute SQL on subsystem db21
SWITCH TO db22
RRSAF calls on subsystem db22:
IDENTIFY
SIGNON
CREATE THREAD
Execute SQL on subsystem db22
SWITCH TO db23
RRSAF calls on subsystem db23:
IDENTIFY
SIGNON
CREATE THREAD
Execute SQL on subsystem db23
SWITCH TO db21
Execute SQL on subsystem db21
SWITCH TO db22
Execute SQL on subsystem db22
SWITCH TO db21
Execute SQL on subsystem db21
SRRCMIT (to commit the UR)
SWITCH TO db23
Execute SQL on subsystem db23
SWITCH TO db22
Execute SQL on subsystem db22
SWITCH TO db21
Execute SQL on subsystem db21
SRRCMIT (to commit the UR)
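In C, the SWITCH TO portion of this flow might be sketched as follows. This is an
illustration only: the function and variable names are assumptions, the subsystem
names mirror db21 and db22 above, and error checking is omitted.

#pragma linkage(dsnrli, OS)                   /* DSNRLI uses OS linkage          */
int dsnrli();

void switch_between_subsystems(void)
{
   char switchfn[19] = "SWITCH TO         ";  /* SWITCH TO followed by 9 blanks  */
   char db21[5] = "DB21";                     /* illustrative subsystem names,   */
   char db22[5] = "DB22";                     /* each padded to 4 bytes          */
   int  retcode, reascode;

   /* ... IDENTIFY, SIGNON, CREATE THREAD, and SQL on db21 ...                   */
   dsnrli(&switchfn[0], &db22[0], &retcode, &reascode);  /* direct work to db22  */
   /* ... IDENTIFY, SIGNON, CREATE THREAD, and SQL on db22 ...                   */
   dsnrli(&switchfn[0], &db21[0], &retcode, &reascode);  /* back to db21         */
   /* ... more SQL, then SRRCMIT to commit the unit of recovery ...              */
}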
Table 94. Examples of RRSAF SWITCH TO calls
Language Call example
Assembler CALL DSNRLI,(SWITCHFN,SSNM,RETCODE,REASCODE)
C fnret=dsnrli(&switchfn[0], &ssnm[0], &retcode, &reascode);
COBOL CALL 'DSNRLI' USING SWITCHFN SSNM RETCODE REASCODE.
FORTRAN CALL DSNRLI(SWITCHFN,SSNM,RETCODE,REASCODE)
PL/I CALL DSNRLI(SWITCHFN,SSNM,RETCODE,REASCODE);
Note: DSNRLI is an assembler language program; therefore, you must include the following compiler directives in
your C, C++, and PL/I applications:
C #pragma linkage(dsnrli, OS)
C++ extern "OS" {
int DSNRLI(
char * functn,
...); }
PL/I DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
function
An 18-byte area containing SIGNON followed by twelve blanks.
correlation-id
A 12-byte area in which you can put a DB2 correlation ID. The correlation ID is
displayed in DB2 accounting and statistics trace records. You can use the
correlation ID to correlate work units. This token appears in output from the
command -DISPLAY THREAD. If you do not want to specify a correlation ID, fill
the 12-byte area with blanks.
accounting-token
A 22-byte area in which you can put a value for a DB2 accounting token. This
value is displayed in DB2 accounting and statistics trace records. If you do not
want to specify an accounting token, fill the 22-byte area with blanks.
accounting-interval
A 6-byte area with which you can control when DB2 writes an accounting
record. If you specify COMMIT in that area, then DB2 writes an accounting
record each time the application issues SRRCMIT. If you specify any other
value, DB2 writes an accounting record when the application terminates or
when you call SIGNON with a new authorization ID.
retcode
A 4-byte area in which RRSAF places the return code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the return code in register 15 and the reason code in register 0.
reascode
A 4-byte area in which RRSAF places the reason code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the reason code in register 0.
If you specify this parameter, you must also specify retcode.
user
A 16-byte area that contains the user ID of the client end user. You can use
this parameter to provide the identity of the client end user for accounting and
monitoring purposes. DB2 displays this user ID in DISPLAY THREAD output
and in DB2 accounting and statistics trace records. If user is less than 16
characters long, you must pad it on the right with blanks to a length of 16
characters.
This field is optional. If specified, you must also specify retcode and reascode.
If not specified, no user ID is associated with the connection. You can omit this
parameter by specifying a value of 0.
appl
A 32-byte area that contains the application or transaction name of the end
user's application. You can use this parameter to provide the identity of the
client end user for accounting and monitoring purposes. DB2 displays the
application name in the DISPLAY THREAD output and in DB2 accounting and
statistics trace records. If appl is less than 32 characters long, you must pad it
on the right with blanks to a length of 32 characters.
This field is optional. If specified, you must also specify retcode, reascode, and
user. If not specified, no application or transaction is associated with the
connection. You can omit this parameter by specifying a value of 0.
ws An 18-byte area that contains the workstation name of the client end user. You
can use this parameter to provide the identity of the client end user for
accounting and monitoring purposes. DB2 displays the workstation name in the
DISPLAY THREAD output and in DB2 accounting and statistics trace records.
If ws is less than 18 characters long, you must pad it on the right with blanks to
a length of 18 characters.
This field is optional. If specified, you must also specify retcode, reascode,
user, and appl. If not specified, no workstation name is associated with the
connection.
| xid A 4-byte area into which you put one of the following values:
|     0        Indicates that the thread is not part of a global transaction.
|     1        Indicates that the thread is part of a global transaction. If a
|              global transaction ID already exists for the task, the thread
|              becomes part of the associated global transaction. Otherwise,
|              RRS generates a new global transaction ID.
|     address  The 4-byte address of an area into which you enter a global
|              transaction ID for the thread. If the global transaction ID
|              already exists, the thread becomes part of the associated
|              global transaction. Otherwise, RRS creates a new global
|              transaction with the ID that you specify. The global
|              transaction ID has the format shown in Table 95.
| A DB2 thread that is part of a global transaction can share locks with other
| DB2 threads that are part of the same global transaction and can access and
| modify the same data. A global transaction exists until one of the threads that
| is part of the global transaction is committed or rolled back.
See OS/390 Security Server (RACF) Macros and Interfaces for more information on
the RACROUTE macro.
Generally, you issue a SIGNON call after an IDENTIFY call and before a CREATE
THREAD call. You can also issue a SIGNON call if the application is at a point of
consistency, and
) The value of reuse in the CREATE THREAD call was RESET, or
) The value of reuse in the CREATE THREAD call was INITIAL, no held cursors
are open, the package or plan is bound with KEEPDYNAMIC(NO), and all
special registers are at their initial state. If there are open held cursors or the
package or plan is bound with KEEPDYNAMIC(YES), a SIGNON call is
permitted only if the primary authorization ID has not changed.
function
An 18-byte area containing AUTH SIGNON followed by seven blanks.
correlation-id
A 12-byte area in which you can put a DB2 correlation ID. The correlation ID is
displayed in DB2 accounting and statistics trace records. You can use the
correlation ID to correlate work units. This token appears in output from the
command -DISPLAY THREAD. If you do not want to specify a correlation ID, fill
the 12-byte area with blanks.
accounting-token
A 22-byte area in which you can put a value for a DB2 accounting token. This
value is displayed in DB2 accounting and statistics trace records. If you do not
want to specify an accounting token, fill the 22-byte area with blanks.
accounting-interval
A 6-byte area with which you can control when DB2 writes an accounting
record. If you specify COMMIT in that area, then DB2 writes an accounting
record each time the application issues SRRCMIT. If you specify any other
value, DB2 writes an accounting record when the application terminates or
when you call SIGNON with a new authorization ID.
primary-authid
An 8-byte area in which you can put a primary authorization ID. If you are not
passing the authorization ID to DB2 explicitly, put X'00' or a blank in the first
byte of the area.
ACEE-address
The 4-byte address of an ACEE that you pass to DB2. If you do not want to
provide an ACEE, specify 0 in this field.
secondary-authid
An 8-byte area in which you can put a secondary authorization ID. If you do not
pass the authorization ID to DB2 explicitly, put X'00' or a blank in the first byte
of the area. If you enter a secondary authorization ID, you must also enter a
primary authorization ID.
retcode
A 4-byte area in which RRSAF places the return code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the return code in register 15 and the reason code in register 0.
reascode
A 4-byte area in which RRSAF places the reason code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the reason code in register 0.
If you specify this parameter, you must also specify retcode.
user
A 16-byte area that contains the user ID of the client end user. You can use
this parameter to provide the identity of the client end user for accounting and
monitoring purposes. DB2 displays this user ID in DISPLAY THREAD output
and in DB2 accounting and statistics trace records. If user is less than 16
characters long, you must pad it on the right with blanks to a length of 16
characters.
This field is optional. If specified, you must also specify retcode and reascode.
If not specified, no user ID is associated with the connection. You can omit this
parameter by specifying a value of 0.
appl
A 32-byte area that contains the application or transaction name of the end
user's application. You can use this parameter to provide the identity of the
client end user for accounting and monitoring purposes. DB2 displays the
application name in the DISPLAY THREAD output and in DB2 accounting and
statistics trace records. If appl is less than 32 characters long, you must pad it
on the right with blanks to a length of 32 characters.
This field is optional. If specified, you must also specify retcode, reascode, and
user. If not specified, no application or transaction is associated with the
connection. You can omit this parameter by specifying a value of 0.
ws An 18-byte area that contains the workstation name of the client end user. You
can use this parameter to provide the identity of the client end user for
accounting and monitoring purposes. DB2 displays the workstation name in the
DISPLAY THREAD output and in DB2 accounting and statistics trace records.
If ws is less than 18 characters long, you must pad it on the right with blanks to
a length of 18 characters.
This field is optional. If specified, you must also specify retcode, reascode,
user, and appl. If not specified, no workstation name is associated with the
connection.
| xid A 4-byte area into which you put one of the values that are described for
|     the xid parameter of SIGNON.
| A DB2 thread that is part of a global transaction can share locks with other
| DB2 threads that are part of the same global transaction and can access and
| modify the same data. A global transaction exists until one of the threads that
| is part of the global transaction is committed or rolled back.
Generally, you issue an AUTH SIGNON call after an IDENTIFY call and before a
CREATE THREAD call. You can also issue an AUTH SIGNON call if the
application is at a point of consistency, and
) The value of reuse in the CREATE THREAD call was RESET, or
) The value of reuse in the CREATE THREAD call was INITIAL, no held cursors
are open, the package or plan is bound with KEEPDYNAMIC(NO), and all
special registers are at their initial state. If there are open held cursors or the
package or plan is bound with KEEPDYNAMIC(YES), a SIGNON call is
permitted only if the primary authorization ID has not changed.
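An AUTH SIGNON call adds the primary-authid, ACEE-address, and secondary-authid
parameters after accounting-interval. The following C sketch is illustrative only:
the authorization ID 'SYSADM' and the other names are assumptions, the ACEE-address
field is set to 0 (no ACEE), and the first byte of the secondary authorization ID is
left blank (none passed).

#pragma linkage(dsnrli, OS)                       /* DSNRLI uses OS linkage       */
int dsnrli();

int auth_signon_to_db2(void)
{
   char asgnonfn[19] = "AUTH SIGNON       ";      /* AUTH SIGNON + 7 blanks       */
   char corrid[13]   = "TESTCORR    ";            /* 12-byte correlation ID       */
   char accttkn[23]  = "                      ";  /* 22 blanks: no accounting token */
   char acctint[7]   = "COMMIT";
   char primid[9]    = "SYSADM  ";                /* illustrative primary auth ID */
   char secid[9]     = "        ";                /* blank first byte: no secondary ID */
   int  acee = 0;                                 /* 4-byte field: no ACEE passed */
   int  retcode, reascode;

   dsnrli(&asgnonfn[0], &corrid[0], &accttkn[0], &acctint[0],
          &primid[0], &acee, &secid[0], &retcode, &reascode);
   return retcode;
}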
|
| >>──CALL DSNRLI──(──function, correlation-id, accounting-token,──accounting-interval, context-key─────>
| >──┬──────────────────────────────────────────────────────────────────────────────┬──)───────────────><
| └─,──retcode──┬──────────────────────────────────────────────────────────────┬─┘
| └─,──reascode──┬─────────────────────────────────────────────┬─┘
| └─,──user──┬────────────────────────────────┬─┘
| └─,──appl──┬───────────────────┬─┘
| └─,──ws──┬────────┬─┘
| └─,──xid─┘
| function
| An 18-byte area containing CONTEXT SIGNON followed by four blanks.
| correlation-id
| A 12-byte area in which you can put a DB2 correlation ID. The correlation ID is
| displayed in DB2 accounting and statistics trace records. You can use the
| correlation ID to correlate work units. This token appears in output from the
| command -DISPLAY THREAD. If you do not want to specify a correlation ID, fill
| the 12-byte area with blanks.
| accounting-token
| A 22-byte area in which you can put a value for a DB2 accounting token. This
| value is displayed in DB2 accounting and statistics trace records. If you do not
| want to specify an accounting token, fill the 22-byte area with blanks.
| accounting-interval
| A 6-byte area with which you can control when DB2 writes an accounting
| record. If you specify COMMIT in that area, then DB2 writes an accounting
| record each time the application issues SRRCMIT. If you specify any other
| value, DB2 writes an accounting record when the application terminates or
| when you call SIGNON with a new authorization ID.
| context-key
| A 32-byte area in which you put the context key that you specified when you
| called the RRS Set Context Data (CTXSDTA) service to save the primary
| authorization ID and an optional ACEE address.
| retcode
| A 4-byte area in which RRSAF places the return code.
| This parameter is optional. If you do not specify this parameter, RRSAF places
| the return code in register 15 and the reason code in register 0.
| reascode
| A 4-byte area in which RRSAF places the reason code.
| This parameter is optional. If you do not specify this parameter, RRSAF places
| the reason code in register 0.
| If you specify this parameter, you must also specify retcode.
| user
| A 16-byte area that contains the user ID of the client end user. You can use
| this parameter to provide the identity of the client end user for accounting and
| monitoring purposes. DB2 displays this user ID in DISPLAY THREAD output
| and in DB2 accounting and statistics trace records. If user is less than 16
| characters long, you must pad it on the right with blanks to a length of 16
| characters.
| This field is optional. If specified, you must also specify retcode and reascode.
| If not specified, no user ID is associated with the connection. You can omit this
| parameter by specifying a value of 0.
| appl
| A 32-byte area that contains the application or transaction name of the end
| user's application. You can use this parameter to provide the identity of the
| client end user for accounting and monitoring purposes. DB2 displays the
| application name in the DISPLAY THREAD output and in DB2 accounting and
| statistics trace records. If appl is less than 32 characters long, you must pad it
| on the right with blanks to a length of 32 characters.
| This field is optional. If specified, you must also specify retcode, reascode, and
| user. If not specified, no application or transaction is associated with the
| connection. You can omit this parameter by specifying a value of 0.
| ws An 18-byte area that contains the workstation name of the client end user. You
| can use this parameter to provide the identity of the client end user for
| accounting and monitoring purposes. DB2 displays the workstation name in the
| DISPLAY THREAD output and in DB2 accounting and statistics trace records.
| If ws is less than 18 characters long, you must pad it on the right with blanks to
| a length of 18 characters.
| This field is optional. If specified, you must also specify retcode, reascode,
| user, and appl. If not specified, no workstation name is associated with the
| connection.
| xid A 4-byte area into which you put one of the values that are described for
|     the xid parameter of SIGNON.
| A DB2 thread that is part of a global transaction can share locks with other
| DB2 threads that are part of the same global transaction and can access and
| modify the same data. A global transaction exists until one of the threads that
| is part of the global transaction is committed or rolled back.
| Usage: CONTEXT SIGNON relies on the RRS context services functions Set
| Context Data (CTXSDTA) and Retrieve Context Data (CTXRDTA). Before you
| invoke CONTEXT SIGNON, you must have called CTXSDTA to store a primary
| authorization ID and optionally, the address of an ACEE in the context data whose
| context key you supply as input to CONTEXT SIGNON.
| If the new primary authorization ID is the same as the current primary
| authorization ID (established at IDENTIFY time or at a previous SIGNON
| invocation), then DB2 invokes only the signon exit. If the value has changed, then
| DB2 establishes a new primary authorization ID and new SQL authorization ID and
| then invokes the signon exit.
| If you pass an ACEE address, then CONTEXT SIGNON uses the value in
| ACEEGRPN as the secondary authorization ID if the length of the group name
| (ACEEGRPL) is not 0.
| Generally, you issue a CONTEXT SIGNON call after an IDENTIFY call and before
| a CREATE THREAD call. You can also issue a CONTEXT SIGNON call if the
| application is at a point of consistency, and
| ) The value of reuse in the CREATE THREAD call was RESET, or
| ) The value of reuse in the CREATE THREAD call was INITIAL, no held cursors
| are open, the package or plan is bound with KEEPDYNAMIC(NO), and all
| special registers are at their initial state. If there are open held cursors or the
| package or plan is bound with KEEPDYNAMIC(YES), a SIGNON call is
| permitted only if the primary authorization ID has not changed.
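For example, a CONTEXT SIGNON call might be coded in C as in the following sketch;
the names are assumptions, and context_key is assumed to be the 32-byte key that
was previously passed to the RRS CTXSDTA service.

#pragma linkage(dsnrli, OS)                       /* DSNRLI uses OS linkage       */
int dsnrli();

int context_signon_to_db2(char *context_key)      /* 32-byte key saved with CTXSDTA */
{
   char ctxsgnfn[19] = "CONTEXT SIGNON    ";      /* CONTEXT SIGNON + 4 blanks    */
   char corrid[13]   = "TESTCORR    ";            /* 12-byte correlation ID       */
   char accttkn[23]  = "                      ";  /* 22 blanks: no accounting token */
   char acctint[7]   = "COMMIT";
   int  retcode, reascode;

   dsnrli(&ctxsgnfn[0], &corrid[0], &accttkn[0], &acctint[0],
          context_key, &retcode, &reascode);
   return retcode;
}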
CREATE THREAD: Syntax and usage
CREATE THREAD allocates DB2 resources for the application.
function
An 18-byte area containing CREATE THREAD followed by five blanks.
plan
An 8-byte DB2 plan name. If you provide a collection name instead of a plan
name, specify the character ? in the first byte of this field. DB2 then allocates a
special plan named ?RRSAF and uses the collection parameter. If you do not
provide a collection name in the collection field, you must enter a valid plan
name in this field.
collection
An 18-byte area in which you enter a collection name. When you provide a
collection name and put the character ? in the plan field, DB2 allocates a plan
named ?RRSAF and a package list that contains two entries:
If you provide a plan name in the plan field, DB2 ignores the value in this field.
reuse
An 8-byte area that controls the action DB2 takes if a SIGNON call is issued
after a CREATE THREAD call. Specify either of these values in this field:
) RESET - to release any held cursors and reinitialize the special registers
) INITIAL - to disallow the SIGNON
This parameter is required. If the 8-byte area does not contain either RESET or
INITIAL, then the default value is INITIAL.
retcode
A 4-byte area in which RRSAF places the return code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the return code in register 15 and the reason code in register 0.
reascode
A 4-byte area in which RRSAF places the reason code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the reason code in register 0.
If you specify this parameter, you must also specify retcode.
Usage: CREATE THREAD allocates the DB2 resources required to issue SQL or
IFI requests. If you specify a plan name, RRSAF allocates the named plan. If you
specify ? in the first byte of the plan name and provide a collection name, DB2
allocates a special plan named ?RRSAF and uses the collection name that you
provide.
The application can use the SQL statement SET CURRENT PACKAGESET to
change the collection ID that DB2 uses to locate a package.
When DB2 allocates a plan named ?RRSAF, DB2 checks authorization to execute
the package in the same way as it checks authorization to execute a package from
a requester other than DB2 for OS/390. See Section 3 (Volume 1) of DB2
Administration Guide for more information on authorization checking for package
execution.
>>──CALL DSNRLI──(──function,──┬─────────────────────────────┬──)────────────────────────────────────><
└─,──retcode──┬─────────────┬─┘
└─,──reascode─┘
function
An 18-byte area containing TERMINATE THREAD followed by two blanks.
retcode
A 4-byte area in which RRSAF places the return code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the return code in register 15 and the reason code in register 0.
reascode
A 4-byte area in which RRSAF places the reason code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the reason code in register 0.
If you specify this parameter, you must also specify retcode.
>>──CALL DSNRLI──(──function──┬─────────────────────────────┬──)─────────────────────────────────────><
└─,──retcode──┬─────────────┬─┘
└─,──reascode─┘
function
An 18-byte area containing TERMINATE IDENTIFY.
retcode
A 4-byte area in which RRSAF places the return code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the return code in register 15 and the reason code in register 0.
reascode
A 4-byte area in which RRSAF places the reason code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the reason code in register 0.
If you specify this parameter, you must also specify retcode.
If the application allocated a plan, and you issue TERMINATE IDENTIFY without
first issuing TERMINATE THREAD, DB2 deallocates the plan before terminating the
connection.
Issuing TERMINATE IDENTIFY is optional. If you do not, DB2 performs the same
functions when the task terminates.
If DB2 terminates, the application must issue TERMINATE IDENTIFY to reset the
RRSAF control blocks. This ensures that future connection requests from the task
are successful when DB2 restarts.
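As an illustrative C sketch (the function name is an assumption), an application can
deallocate its plan and then end its connection like this:

#pragma linkage(dsnrli, OS)                       /* DSNRLI uses OS linkage       */
int dsnrli();

void disconnect_from_db2(void)
{
   char trmthdfn[19] = "TERMINATE THREAD  ";      /* TERMINATE THREAD + 2 blanks  */
   char tmidfyfn[19] = "TERMINATE IDENTIFY";      /* exactly 18 bytes             */
   int  retcode, reascode;

   dsnrli(&trmthdfn[0], &retcode, &reascode);     /* deallocate the plan          */
   dsnrli(&tmidfyfn[0], &retcode, &reascode);     /* end the task's connection    */
}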
Table 101 on page 806 shows a TERMINATE IDENTIFY call in each language.
Table 101. Examples of RRSAF TERMINATE IDENTIFY calls
Language Call example
Assembler CALL DSNRLI,(TMIDFYFN,RETCODE,REASCODE)
C fnret=dsnrli(&tmidfyfn[0], &retcode, &reascode);
COBOL CALL 'DSNRLI' USING TMIDFYFN RETCODE REASCODE.
FORTRAN CALL DSNRLI(TMIDFYFN,RETCODE,REASCODE)
PL/I CALL DSNRLI(TMIDFYFN,RETCODE,REASCODE);
Note: DSNRLI is an assembler language program; therefore, you must include the following compiler directives in
your C, C++, and PL/I applications:
C #pragma linkage(dsnrli, OS)
C++ extern "OS" {
int DSNRLI(
char * functn,
...); }
PL/I DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
Issue TRANSLATE only after a successful IDENTIFY operation. For errors that
occur during SQL or IFI requests, the TRANSLATE function is performed automatically.
function
An 18-byte area containing the word TRANSLATE followed by nine blanks.
sqlca
The program's SQL communication area (SQLCA).
retcode
A 4-byte area in which RRSAF places the return code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the return code in register 15 and the reason code in register 0.
reascode
A 4-byte area in which RRSAF places the reason code.
This parameter is optional. If you do not specify this parameter, RRSAF places
the reason code in register 0.
If you specify this parameter, you must also specify retcode.
The TRANSLATE function translates codes that begin with X'00F3', but it does not
translate RRSAF reason codes that begin with X'00C1'. If you receive error
reason code X'00F30040' (resource unavailable) after an OPEN request,
TRANSLATE returns the name of the unavailable database object in the last 44
characters of field SQLERRM. If the DB2 TRANSLATE function does not recognize
the error reason code, it returns SQLCODE -924 (SQLSTATE '58006') and places
a printable copy of the original DB2 function code and the return and error reason
codes in the SQLERRM field. The contents of registers 0 and 15 do not change,
unless TRANSLATE fails; in which case, register 0 is set to X'00C12204' and
register 15 is set to 200.
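The following C sketch shows a TRANSLATE call. It assumes that the fragment is
processed by the DB2 precompiler (so that EXEC SQL INCLUDE SQLCA declares the
SQLCA); the function name is an assumption.

#pragma linkage(dsnrli, OS)                       /* DSNRLI uses OS linkage       */
int dsnrli();

EXEC SQL INCLUDE SQLCA;                           /* declares the sqlca structure */

long translate_reason_code(void)
{
   char translfn[19] = "TRANSLATE         ";      /* TRANSLATE followed by 9 blanks */
   int  retcode, reascode;

   dsnrli(&translfn[0], &sqlca, &retcode, &reascode);
   return sqlca.sqlcode;                          /* SQLCODE that describes the error */
}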
In these tables, the first column lists the most recent RRSAF or DB2 function
executed. The first row lists the next function executed. The contents of the
intersection of a row and column indicate the result of calling the function in the first
column followed by the function in the first row. For example, if you issue
TERMINATE THREAD, then you execute SQL or issue an IFI call, RRSAF returns
reason code X'00C12219'.
| Table 103. Effect of call order when next call is IDENTIFY, SWITCH TO, SIGNON,
| CREATE THREAD, SQL, or IFI
| Next Call ===>       IDENTIFY      SWITCH TO    SIGNON,           CREATE       SQL or IFI
| Last function                                   AUTH SIGNON, or   THREAD
|                                                 CONTEXT SIGNON
| Empty: first call    IDENTIFY      X'00C12205'  X'00C12204'       X'00C12204'  X'00C12204'
| IDENTIFY             X'00C12201'   Switch to    Signon (note 1)   X'00C12217'  X'00C12218'
|                                    ssnm
| SWITCH TO            IDENTIFY      Switch to    Signon (note 1)   CREATE       SQL or IFI
|                                    ssnm                           THREAD       call
| SIGNON, AUTH         X'00C12201'   Switch to    Signon (note 1)   CREATE       X'00C12219'
| SIGNON, or                         ssnm                           THREAD
| CONTEXT SIGNON
| CREATE THREAD        X'00C12201'   Switch to    Signon (note 1)   X'00C12202'  SQL or IFI
|                                    ssnm                                        call
| TERMINATE THREAD     X'00C12201'   Switch to    Signon (note 1)   CREATE       X'00C12219'
|                                    ssnm                           THREAD
| IFI                  X'00C12201'   Switch to    Signon (note 1)   X'00C12202'  SQL or IFI
|                                    ssnm                                        call
| SQL                  X'00C12201'   Switch to    X'00F30092'       X'00C12202'  SQL or IFI
|                                    ssnm         (note 2)                       call
| SRRCMIT or           X'00C12201'   Switch to    Signon (note 1)   X'00C12202'  SQL or IFI
| SRRBACK                            ssnm                                        call
| Notes:
| 1. Signon means the signon to DB2 through either SIGNON, AUTH SIGNON, or CONTEXT
|    SIGNON.
| 2. SIGNON, AUTH SIGNON, or CONTEXT SIGNON are not allowed if any SQL operations are
|    requested after CREATE THREAD or after the last SRRCMIT or SRRBACK request.
Sample scenarios
This section shows sample scenarios for connecting tasks to DB2.
A single task
This example shows a single task running in an address space. OS/390 RRS
controls commit processing when the task terminates normally.
IDENTIFY
SIGNON
CREATE THREAD
SQL or IFI
..
.
TERMINATE IDENTIFY
Multiple tasks
This example shows multiple tasks in an address space. Task 1 executes no SQL
statements and makes no IFI calls. Its purpose is to monitor DB2 termination and
startup ECBs and to check the DB2 release level.
When the reason code begins with X'00F3' (except for X'00F30006'), you can
use the RRSAF TRANSLATE function to obtain error message text that can be
printed and displayed.
For SQL calls, RRSAF returns standard SQL return codes in the SQLCA. See
Section 2 of DB2 Messages and Codes for a list of those return codes and their
meanings. RRSAF returns IFI return codes and reason codes in the instrumentation
facility communication area (IFCA). See Section 4 of DB2 Messages and Codes for
a list of those return codes and their meanings.
Table 105. RRSAF return codes
Return code Explanation
0 Successful completion.
4 Status information. See the reason code for details.
# >4 The call failed. See the reason code for details.
Program examples
This section contains sample JCL for running an RRSAF application and assembler
code for accessing RRSAF. For example, the job step that runs an RRSAF application
can include output DD statements such as these:
//SYSPRINT DD SYSOUT=*
//DSNRRSAF DD DUMMY
//SYSUDUMP DD SYSOUT=*
Delete the loaded modules when the application no longer needs to access DB2.
****************************** GET LANGUAGE INTERFACE ENTRY ADDRESSES
LOAD EP=DSNRLI Load the RRSAF service request EP
ST R0,LIRLI Save this for RRSAF service requests
LOAD EP=DSNHLIR Load the RRSAF SQL call Entry Point
ST R0,LISQL Save this for SQL calls
* .
* . Insert connection service requests and SQL calls here
* .
DELETE EP=DSNRLI Correctly maintain use count
DELETE EP=DSNHLIR Correctly maintain use count
In the example that follows, LISQL is addressable because the calling CSECT used
the same register 12 as CSECT DSNHLI. Your application must also establish
addressability to LISQL.
***********************************************************************
* Subroutine DSNHLI intercepts calls to LI EP=DSNHLI
***********************************************************************
DS 0D
DSNHLI CSECT Begin CSECT
STM R14,R12,12(R13) Prologue
LA R15,SAVEHLI Get save area address
ST R13,4(,R15) Chain the save areas
ST R15,8(,R13) Chain the save areas
LR R13,R15 Put save area address in R13
L R15,LISQL Get the address of real DSNHLI
BASSM R14,R15 Branch to DSNRLI to do an SQL call
* DSNRLI is in 31-bit mode, so use
* BASSM to assure that the addressing
* mode is preserved.
L R13,4(,R13) Restore R13 (caller's save area addr)
L R14,12(,R13) Restore R14 (return address)
RETURN (1,12) Restore R1-R12, NOT R0 and R15 (codes)
The code in Figure 216 does not show a task that waits on the DB2 termination
ECB. You can code such a task and use the MVS WAIT macro to monitor the
ECB. The task that waits on the termination ECB should detach the sample code if
the termination ECB is posted. That task can also wait on the DB2 startup ECB.
The task in Figure 216 waits on the startup ECB at its own task level.
***************************** IDENTIFY ********************************
L R15,LIRLI Get the Language Interface address
CALL (15),(IDFYFN,SSNM,RIBPTR,EIBPTR,TERMECB,STARTECB),VL,MF=X
(E,RRSAFCLL)
BAL R14,CHEKCODE Call a routine (not shown) to check
* return and reason codes
CLC CONTROL,CONTINUE Is everything still OK
BNE EXIT If CONTROL not 'CONTINUE', stop loop
USING RIB,R8 Prepare to access the RIB
L R8,RIBPTR Access RIB to get DB2 release level
WRITE 'The current DB2 release level is' RIBREL
***************************** SIGNON **********************************
L R15,LIRLI Get the Language Interface address
CALL (15),(SGNONFN,CORRID,ACCTTKN,ACCTINT),VL,MF=(E,RRSAFCLL)
BAL R14,CHEKCODE Check the return and reason codes
*************************** CREATE THREAD *****************************
L R15,LIRLI Get the Language Interface address
CALL (15),(CRTHRDFN,PLAN,COLLID,REUSE),VL,MF=(E,RRSAFCLL)
BAL R14,CHEKCODE Check the return and reason codes
****************************** SQL ************************************
* Insert your SQL calls here. The DB2 Precompiler
* generates calls to entry point DSNHLI. You should
* code a dummy entry point of that name to intercept
* all SQL calls. A dummy DSNHLI is shown below.
************************ TERMINATE THREAD *****************************
CLC CONTROL,CONTINUE Is everything still OK?
BNE EXIT If CONTROL not 'CONTINUE', shut down
L R15,LIRLI Get the Language Interface address
CALL (15),(TRMTHDFN),VL,MF=(E,RRSAFCLL)
BAL R14,CHEKCODE Check the return and reason codes
************************ TERMINATE IDENTIFY ***************************
CLC CONTROL,CONTINUE Is everything still OK
BNE EXIT If CONTROL not 'CONTINUE', stop loop
L R15,LIRLI Get the Language Interface address
CALL (15),(TMIDFYFN),VL,MF=(E,RRSAFCLL)
BAL R14,CHEKCODE Check the return and reason codes
Figure 216. Using RRSAF to connect to DB2
Figure 217 on page 815 shows declarations for some of the variables used in
Figure 216.
Chapter 7-8. Programming for the Recoverable Resource Manager Services attachment facility (RRSAF) 815
Chapter 7-9. Programming considerations for CICS
This section discusses some special topics of importance to CICS application
programmers:
) Controlling the CICS attachment facility from an application
) Improving thread reuse
) Detecting whether the CICS attachment facility is operational
When you use this method, the attachment facility uses the default RCT. The
default RCT name is DSN2CT concatenated with a one- or two-character suffix.
The system administrator specifies this suffix in the DSN2STRT subparameter of
the INITPARM parameter in the CICS startup procedure. If no suffix is specified,
CICS uses an RCT name of DSN2CT00.
One of the most important things you can do to maximize thread reuse is to close
all cursors that you declared WITH HOLD before each sync point, because DB2
does not automatically close them. A thread for an application that contains an
open cursor cannot be reused. It is a good programming practice to close all
cursors immediately after you finish using them. For more information on the effects
of declaring cursors WITH HOLD in CICS applications, see “Declaring a cursor with
hold” on page 113.
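For example, a CICS program might declare a held cursor and close it before each sync point; the cursor name and the host variable shown here are arbitrary illustrations:

   EXEC SQL DECLARE C1 CURSOR WITH HOLD FOR
     SELECT EMPNO, PHONENO
       FROM DSN8610.EMP
       WHERE WORKDEPT = :DEPT-NO;

Before the program issues its sync point, it should issue EXEC SQL CLOSE C1 so that the thread is free to be reused.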
Attention
When both of the following conditions are true, the stormdrain effect can occur:
) The CICS attachment facility is down.
) You are using INQUIRE EXITPROGRAM to avoid AEY9 abends.
For more information on the stormdrain effect and how to avoid it, see Chapter
3 of DB2 Data Sharing: Planning and Administration.
If you are using a release of CICS after CICS Version 4, and you have specified
STANDBY=SQLCODE and STRTWT=AUTO in the DSNCRCT TYPE=INIT macro,
you do not need to test whether the CICS attachment facility is up before executing
SQL. When an SQL statement is executed, and the CICS attachment facility is not
available, DB2 issues SQLCODE -923 with a reason code that indicates that the
attachment facility is not available. See Section 2 of DB2 Installation Guide for
information about the DSNCRCT macro and DB2 Messages and Codes for an
explanation of SQLCODE -923.
# Answer: Add a column with the data type ROWID or an identity column. ROWID
# columns and identity columns contain a unique value for each row in the table. You
# can define the column as GENERATED ALWAYS, which means that you cannot
# insert values into the column, or GENERATED BY DEFAULT, which means that
# DB2 generates a value if you do not specify one. If you define the ROWID or
# identity column as GENERATED BY DEFAULT, you need to define a unique index
# that includes only that column to guarantee uniqueness.
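For example, the following sketch defines a hypothetical table with a ROWID column that is GENERATED BY DEFAULT, together with the unique index that such a column requires; the table and index names are arbitrary:

   CREATE TABLE MYORDERS
     (ORDER_RID  ROWID NOT NULL GENERATED BY DEFAULT,
      ORDER_DESC VARCHAR(40));

   CREATE UNIQUE INDEX XORDER_RID
     ON MYORDERS (ORDER_RID);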
One effect of this approach is that, scrolling backward, you always see exactly the
same fetched data, even if the data in the database changed in the meantime. That
can be an advantage if your users need to see a consistent set of data. It is a
disadvantage if your users need to see updates as soon as other users commit
their data.
If you use this technique, and if you do not commit your work after fetching the
data, your program could hold locks that prevent other users from accessing the
data. (For locking considerations in your program, see “Chapter 5-2. Planning for
concurrency” on page 331.)
Suppose also that you now want to return to the rows that start with DEPTNO =
‘M95’, and fetch sequentially from that point. Run:
SELECT * FROM DSN8610.DEPT
WHERE LOCATION = 'CALIFORNIA'
AND DEPTNO >= 'M95'
ORDER BY DEPTNO;
That statement positions the cursor where you want it.
But again, unless the program still has a lock on the data, other users can insert or
delete rows. The row with DEPTNO = ‘M95’ might no longer exist. Or there could
now be 20 rows with DEPTNO between M95 and M99, where before there were
only 16.
The order of rows in the second result table: The rows of the second result
table might not appear in the same order. DB2 does not consider the order of rows
as significant, unless the SELECT statement uses ORDER BY. Hence, if there are
several rows with the same DEPTNO value, the second SELECT statement could
retrieve them in a different order from the first SELECT statement. The only
guarantee is that the rows are in order by department number, as demanded by the
clause ORDER BY DEPTNO.
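If your program depends on retrieving the rows in exactly the same order each time, include enough columns in the ORDER BY clause to define a unique ordering. For example, the following sketch (the column choice is only an illustration) orders employee rows by department and then by the primary key EMPNO, so the combined ordering is unique and therefore reproducible:

   SELECT EMPNO, LASTNAME, WORKDEPT
     FROM DSN8610.EMP
     ORDER BY WORKDEPT, EMPNO;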
The order among rows with the same value of DEPTNO can change, even if
you run the same SQL statement with the same host variables a second time. For
example, the statistics in the catalog could be updated between executions. Or
indexes could be created or dropped, and you could execute PREPARE for the
SELECT statement again. (For a description of the PREPARE statement, see
“Chapter 7-1. Coding dynamic SQL in application programs” on page 503.)
The ordering is more likely to change if the second SELECT has a predicate that
the first did not. DB2 could choose to use an index on the new predicate. For
example, DB2 could choose an index on LOCATION for the first statement in our
example, and an index on DEPTNO for the second. Because rows are fetched in
the order indicated by the index key, the second order need not be the same as the
first.
Again, executing PREPARE for two similar SELECT statements can produce a
different ordering of rows, even if no statistics change and no indexes are created
or dropped. In the example, if there are many different values of LOCATION, DB2
A cursor on the second statement retrieves rows in the opposite order from a
cursor on the first statement. If the first statement specifies unique ordering, the
second statement retrieves rows in exactly the opposite order.
For retrieving rows in reverse order, it can be useful to have two indexes on the
DEPTNO column, one in ascending order and one in descending order.
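For example, you might define the two indexes like this; the index names here are arbitrary:

   CREATE INDEX XDEPTNO_ASC
     ON DSN8610.DEPT (DEPTNO ASC);

   CREATE INDEX XDEPTNO_DESC
     ON DSN8610.DEPT (DEPTNO DESC);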
# Retrieving rows from a table with a ROWID or identity column: If your table
# contains a ROWID column or an identity column, you can use that column to
# rapidly retrieve the rows in reverse order. When you perform the original SELECT,
# you can store the ROWID or identity column value for each row you retrieve. Then,
# to retrieve the values in reverse order, you can execute SELECT statements with a
# WHERE clause that compares the ROWID or identity column value to each stored
# value.
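For example, if your program saved the ROWID value of each fetched row in a host variable, a later statement of the following general form retrieves one of those rows directly; the table, column, and host-variable names are only illustrations:

   SELECT COLA, COLB
     FROM MYTABLE
     WHERE MYROWID = :SAVED-ROWID;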
Answer: Scrolling and updating at the same time can cause unpredictable results.
Issuing INSERT, UPDATE and DELETE statements from the same application
process while a cursor is open can affect the result table.
For example, suppose you are fetching rows from table T using cursor C, which is
defined like this:
EXEC SQL DECLARE C CURSOR FOR SELECT * FROM T;
After you have fetched several rows, cursor C is positioned to a row within the
result table. If you insert a row into T, the effect on the result table is unpredictable
because the rows in the result table are unordered. A later FETCH C might or
might not retrieve the new row of T.
Answer: On the SELECT statement, use FOR UPDATE OF, followed by a list of
columns that can be updated (for efficiency, specify only those columns you intend
to update). Then use the positioned UPDATE statement. The clause WHERE
CURRENT OF names the cursor that points to the row you want to update.
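For example, a program that increases the salary of the employee on which cursor C1 is positioned might contain statements like these (the cursor name and the 5% increase are arbitrary); the UPDATE is issued after the cursor is opened and a row is fetched:

   EXEC SQL DECLARE C1 CURSOR FOR
     SELECT EMPNO, SALARY
       FROM DSN8610.EMP
       FOR UPDATE OF SALARY;

   EXEC SQL UPDATE DSN8610.EMP
     SET SALARY = SALARY * 1.05
     WHERE CURRENT OF C1;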
Answer: Yes. When updating large volumes of data using a cursor, you can
minimize the amount of time that you hold locks on the data by declaring the cursor
with the HOLD option and by issuing commits frequently.
Answer: There are no special techniques; but for large numbers of rows, efficiency
can become very important. In particular, you need to be aware of locking
considerations, including the possibilities of lock escalation.
If your program allows input from a terminal before it commits the data (and thereby
releases locks), a significant loss of concurrency can result. Review
the description of locks in “The ISOLATION option” on page 349 while designing
your program. Then review the expected use of tables to predict whether you could
have locking problems.
Answer: Generally, you should select only the columns you need because DB2 is
sensitive to the number of columns selected. Use SELECT * only when you are
sure you want to select all columns. One alternative is to use views defined with
only the necessary columns, and use SELECT * to access the views. Avoid
SELECT * if all the selected columns participate in a sort operation (SELECT
DISTINCT and SELECT...UNION, for example).
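For example, instead of coding SELECT * against the employee table, you might define a view that contains only the columns your program uses; the view name here is arbitrary:

   CREATE VIEW MYEMPV AS
     SELECT EMPNO, LASTNAME, WORKDEPT
       FROM DSN8610.EMP;

   SELECT * FROM MYEMPV;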
DB2 usually optimizes queries to retrieve all rows that qualify. But sometimes you
want to retrieve only the first few rows. For example, to retrieve the first row that is
greater than or equal to a known value, code:
SELECT column list FROM table
WHERE key >= value
ORDER BY key ASC
Even with the ORDER BY clause, DB2 might fetch all the data first and sort it
afterwards, which could be wasteful. Instead, code:
SELECT * FROM table
WHERE key >= value
ORDER BY key ASC
OPTIMIZE FOR 1 ROW
Use OPTIMIZE FOR 1 ROW to influence the access path. OPTIMIZE FOR 1 ROW
tells DB2 to select an access path that returns the first qualifying row quickly.
For more information on the OPTIMIZE FOR clause, see “Minimizing overhead for
retrieving few rows: OPTIMIZE FOR n ROWS” on page 669.
To get the effect of adding data to the “end” of a table, define a unique index on a
TIMESTAMP column in the table definition. Then, when you retrieve data from the
table, use an ORDER BY clause naming that column. The newest insert appears
last.
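For example, assuming a hypothetical table MYLOG that contains a TIMESTAMP column ENTRY_TS, the index definition and the retrieval might look like this; the most recently inserted row is returned last:

   CREATE UNIQUE INDEX XMYLOG_TS
     ON MYLOG (ENTRY_TS);

   SELECT * FROM MYLOG
     ORDER BY ENTRY_TS;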
Answer: You can save the corresponding SQL statements in a table with a column
having a data type of VARCHAR(n), where n is the maximum length of any SQL
statement. You must save the source SQL statements, not the prepared versions.
That means that you must retrieve and then prepare each statement before
executing the version stored in the table. In essence, your program prepares an
SQL statement from a character string and executes it dynamically. (For a
description of dynamic SQL, see “Chapter 7-1. Coding dynamic SQL in application
programs” on page 503.)
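For example, you might keep the statements in a table like the following one (the names and the VARCHAR length are arbitrary), and then prepare and execute each retrieved string, where :STMT-TEXT is a host variable that holds the text:

   CREATE TABLE STMTLIB
     (STMTNO   INTEGER       NOT NULL,
      STMTTEXT VARCHAR(3000) NOT NULL);

   EXEC SQL PREPARE S1 FROM :STMT-TEXT;
   EXEC SQL EXECUTE S1;

EXECUTE applies to statements other than SELECT; to run a stored SELECT statement, declare and open a cursor for the prepared statement instead.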
Answer: Your program can dynamically execute CREATE TABLE and ALTER
TABLE statements entered by users to create new tables, add columns to existing
| tables, or increase the length of VARCHAR columns. Added columns initially
contain either the null value or a default value. Both statements, like any data
definition statement, are relatively expensive to execute; consider the effects of
locks.
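For example, a statement that a user enters to add a column might look like the following sketch (the column name is arbitrary); your program can execute the statement string with EXECUTE IMMEDIATE or with PREPARE and EXECUTE:

   ALTER TABLE DSN8610.DEPT
     ADD BUDGET DECIMAL(9,2);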
For a description of dynamic SQL execution, see “Chapter 7-1. Coding dynamic
SQL in application programs” on page 503.
Answer: You can store the data as a single VARCHAR column in the database.
Answer: When you receive an SQL error because of a constraint violation, print
out the SQLCA. You can use the DSNTIAR routine described in “Handling SQL
error return codes” on page 103 to format the SQLCA for you. Check the SQL
error message insertion text (SQLERRM) for the name of the constraint.
Authorization on all sample objects is given to PUBLIC in order to make the sample
programs easier to run. The contents of any table can easily be reviewed by
executing an SQL statement, for example SELECT * FROM DSN8610.PROJ. For
convenience in interpreting the examples, the department and employee tables are
listed here in full.
Content
Table 106 shows the content of the columns.
The table, shown in Table 110 on page 831, resides in table space
DSN8D61A.DSN8S61D and is created with:
CREATE TABLE DSN8610.DEPT
(DEPTNO CHAR(3) NOT NULL,
DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6) ,
ADMRDEPT CHAR(3) NOT NULL,
LOCATION CHAR(16) ,
PRIMARY KEY (DEPTNO) )
IN DSN8D61A.DSN8S61D
CCSID EBCDIC;
Because the table is self-referencing, and also is part of a cycle of dependencies,
its foreign keys must be added later with these statements:
ALTER TABLE DSN8610.DEPT
FOREIGN KEY RDD (ADMRDEPT) REFERENCES DSN8610.DEPT
ON DELETE CASCADE;
Content
Table 108 shows the content of the columns.
It is a dependent of the employee table, through its foreign key on column MGRNO.
The LOCATION column contains nulls until sample job DSNTEJ6 updates this
column with the location name.
The table shown in Table 113 on page 834 and Table 114 on page 835 resides in
the partitioned table space DSN8D61A.DSN8S61E. Because it has a foreign key
referencing DEPT, that table and the index on its primary key must be created first.
Then EMP is created with:
CREATE TABLE DSN8610.EMP
(EMPNO CHAR(6) NOT NULL,
FIRSTNME VARCHAR(12) NOT NULL,
MIDINIT CHAR(1) NOT NULL,
LASTNAME VARCHAR(15) NOT NULL,
WORKDEPT CHAR(3) ,
PHONENO CHAR(4) CONSTRAINT NUMBER CHECK
(PHONENO >= '0000' AND
PHONENO <= '9999') ,
HIREDATE DATE ,
JOB CHAR(8) ,
EDLEVEL SMALLINT ,
SEX CHAR(1) ,
BIRTHDATE DATE ,
SALARY DECIMAL(9,2) ,
BONUS DECIMAL(9,2) ,
COMM DECIMAL(9,2) ,
PRIMARY KEY (EMPNO) ,
FOREIGN KEY RED (WORKDEPT) REFERENCES DSN8610.DEPT
ON DELETE SET NULL )
EDITPROC DSN8EAE1
IN DSN8D61A.DSN8S61E
CCSID EBCDIC;
Content
Table 111 on page 833 shows the content of the columns. The table has a check
constraint, NUMBER, which checks that the phone number is in the numeric range
0000 to 9999.
| DB2 requires an auxiliary table for each LOB column in a table. These statements
| define the auxiliary tables for the three LOB columns in
| DSN8610.EMP_PHOTO_RESUME:
| CREATE AUX TABLE DSN8610.AUX_BMP_PHOTO
| IN DSN8D61L.DSN8S61M
| STORES DSN8610.EMP_PHOTO_RESUME
| COLUMN BMP_PHOTO;
| Content
| Table 115 shows the content of the columns.
| The auxiliary tables for the employee photo and resume table have these indexes:
| Table 117. Indexes of the auxiliary tables for the employee photo and resume table
| Name On Table Type of Index
| DSN8610.XAUX_BMP_PHOTO DSN8610.AUX_BMP_PHOTO Unique
| DSN8610.XAUX_PSEG_PHOTO DSN8610.AUX_PSEG_PHOTO Unique
| DSN8610.XAUX_EMP_RESUME DSN8610.AUX_EMP_RESUME Unique
The table resides in database DSN8D61A. Because it has foreign keys referencing
DEPT and EMP, those tables and the indexes on their primary keys must be
created first. Then PROJ is created with:
CREATE TABLE DSN8610.PROJ
(PROJNO CHAR(6) PRIMARY KEY NOT NULL,
PROJNAME VARCHAR(24) NOT NULL WITH DEFAULT
'PROJECT NAME UNDEFINED',
DEPTNO CHAR(3) NOT NULL REFERENCES
DSN8610.DEPT ON DELETE RESTRICT,
RESPEMP CHAR(6) NOT NULL REFERENCES
DSN8610.EMP ON DELETE RESTRICT,
PRSTAFF DECIMAL(5, 2) ,
PRSTDATE DATE ,
PRENDATE DATE ,
MAJPROJ CHAR(6))
IN DSN8D61A.DSN8S61P
CCSID EBCDIC;
Because the table is self-referencing, the foreign key for that constraint must be
added later with:
ALTER TABLE DSN8610.PROJ
FOREIGN KEY RPP (MAJPROJ) REFERENCES DSN8610.PROJ
ON DELETE CASCADE;
Content
Table 120 shows the content of the columns.
The table resides in database DSN8D61A. Because it has foreign keys referencing
EMP and PROJACT, those tables and the indexes on their primary keys must be
created first. Then EMPPROJACT is created with:
Content
Table 122 shows the content of the columns.
(Figure 218 is a diagram. It shows the delete rules CASCADE, SET NULL, and RESTRICT on the relationships among the DEPT, EMP, EMP_PHOTO_RESUME, ACT, PROJ, PROJACT, and EMPPROJACT tables.)
Figure 218. Relationships among tables in the sample application. Arrows point from
parent tables to dependent tables.
The SQL statements that create the sample views are shown below.
CREATE VIEW DSN8610.VDEPT
AS SELECT ALL DEPTNO ,
DEPTNAME,
MGRNO ,
ADMRDEPT
FROM DSN8610.DEPT;
CREATE VIEW DSN8610.VHDEPT
AS SELECT ALL DEPTNO ,
DEPTNAME,
MGRNO ,
ADMRDEPT,
LOCATION
FROM DSN8610.DEPT;
(Figure 219 is a diagram of the storage structure for the sample application. Its labels show table spaces DSN8SvrD for the department table, DSN8SvrE for the employee table, DSN8SvrP, separate LOB table spaces for the employee photo and resume table, and table spaces for the other application tables and the common programming tables.)
In addition to the storage group and databases shown in Figure 219, the storage
group DSN8G61U and database DSN8D61U are created when you run
DSNTEJ2A.
Storage group
The default storage group, SYSDEFLT, created when DB2 is installed, is not used
to store sample application data. The storage group used to store sample
application data is defined by this statement:
CREATE STOGROUP DSN8G610
VOLUMES (DSNV01)
VCAT DSNC610;
Databases
The default database, created when DB2 is installed, is not used to store the
sample application data. Two databases are used: one for tables related to
applications, the other for tables related to programs. They are defined by the
following statements:
CREATE DATABASE DSN8D61A
STOGROUP DSN8G610
BUFFERPOOL BP0
CCSID EBCDIC;
Several sample applications come with DB2 to help you with DB2 programming
techniques and coding practices within each of the four environments: batch, TSO,
IMS, and CICS. The sample applications contain various programs that might
apply to managing a company.
You can examine the source code for the sample application programs in the online
sample library included with the DB2 product. The name of this sample library is
prefix.SDSNSAMP.
Phone application: The phone application lets you view or update individual
employee phone numbers. There are different versions of the application for
ISPF/TSO, CICS, IMS, and batch:
) ISPF/TSO applications use COBOL and PL/I.
) CICS and IMS applications use PL/I.
) Batch applications use C, C++, COBOL, FORTRAN, and PL/I.
| LOB Application: The LOB application demonstrates how to perform the following
| tasks:
| ) Define DB2 objects to hold LOB data
| ) Populate DB2 tables with LOB data using the LOAD utility, or using INSERT
| and UPDATE statements when the data is too large for use with the LOAD
| utility
| ) Manipulate the LOB data using LOB locators
Application programs: Tables 126 through 128 on pages 852 through 854 provide
the program names, JCL member names, and a brief description of some of the
programs included for each of the three environments: TSO, IMS, and CICS.
IMS
Table 127 (Page 1 of 2). Sample DB2 applications for IMS
Application    Program name   JCL member name   Description
Organization DSN8IC0 DSNTEJ4C IMS COBOL
DSN8IC1 Organization
DSN8IC2 Application
Organization DSN8IP0 DSNTEJ4P IMS PL/I
DSN8IP1 Organization
DSN8IP2 Application
CICS
Table 128. Sample DB2 applications for CICS
Application    Program name   JCL member name   Description
Organization DSN8CC0 DSNTEJ5C CICS COBOL
DSN8CC1 Organization
DSN8CC2 Application
Organization DSN8CP0 DSNTEJ5P CICS PL/I
DSN8CP1 Organization
DSN8CP2 Application
Project DSN8CP6 DSNTEJ5P CICS PL/I
DSN8CP7 Project
DSN8CP8 Application
Phone DSN8CP3 DSNTEJ5P CICS PL/I Phone
Application.
This program
lists employee
telephone
numbers and
updates them
if requested.
| Because these three programs also accept the static SQL statements CONNECT,
| SET CONNECTION, and RELEASE, you can use the programs to access DB2
| tables at remote locations.
| DSNTIAUL and DSNTIAD are shipped only as source code, so you must
| precompile, assemble, link, and bind them before you can use them. If you want to
| use the source code version of DSNTEP2, you must precompile, compile, link and
| bind it. You need to bind the object code version of DSNTEP2 before you can use
| it. Usually, your system administrator prepares the programs as part of the
| installation process. Table 129 indicates which installation job prepares each
| sample program. All installation jobs are in data set DSN610.SDSNSAMP.
| To run the sample programs, use the DSN RUN command, which is described in
| detail in Chapter 2 of DB2 Command Reference. Table 130 on page 856 lists the
| load module name and plan name you must specify, and the parameters you can
| specify when you run each program. See the following sections for the meaning of
| each parameter.
| The remainder of this appendix contains the following information about running
| each program:
| ) Descriptions of the input parameters
| ) Data sets you must allocate before you run the program
| ) Return codes from the program
| ) Examples of invocation
| See the sample jobs listed in Table 129 on page 855 for a working example of
| each program.
| Running DSNTIAUL
| This section contains information that you need when you run DSNTIAUL, including
| parameters, data sets, return codes, and invocation examples.
| If you do not specify the SQL parameter, your input data set must contain one or
| more single-line statements (without a semi-colon) that use the following syntax:
| table or view name [WHERE conditions] [ORDER BY columns]
| Each input statement must be a valid SQL SELECT statement with the clause
| SELECT * FROM omitted and with no ending semi-colon. DSNTIAUL generates a
| SELECT statement for each input statement by appending your input line to
| SELECT * FROM, then uses the result to determine which tables to unload. For
| this input format, the text for each table specification can be a maximum of 72
| bytes and must not span multiple lines.
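For example, the following input statement, which fits on a single line, unloads the rows for the employees in department D11 in employee-number order:

   DSN8610.EMP WHERE WORKDEPT='D11' ORDER BY EMPNO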
| For both input formats, you can specify SELECT statements that join two or more
| tables or select specific columns from a table. If you specify columns, you will need
| to modify the LOAD statement that DSNTIAUL generates.
| Define all data sets as sequential data sets. You can specify the record length and
| block size of the SYSPUNCH and SYSRECnn data sets. The maximum record
| length for the SYSPUNCH and SYSRECnn data sets is 32760 bytes.
| Examples of DSNTIAUL invocation: Suppose you want to unload the rows for
| department D01 from the project table. You can fit the table specification on one
| line, and you do not want to execute any non-SELECT statements, so you do not
| need the SQL parameter. Your invocation looks like this:
| //UNLOAD EXEC PGM=IKJEFT01,DYNAMNBR=20
| //SYSTSPRT DD SYSOUT=*
| //SYSTSIN DD *
| DSN SYSTEM(DSN)
| RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB61) -
| LIB('DSN610.RUNLIB.LOAD')
| //SYSPRINT DD SYSOUT=*
| //SYSUDUMP DD SYSOUT=*
| //SYSREC00 DD DSN=DSN8UNLD.SYSREC00,
| // UNIT=SYSDA,SPACE=(3276,(1,5)),DISP=(,CATLG),
| // VOL=SER=SCR3
| //SYSPUNCH DD DSN=DSN8UNLD.SYSPUNCH,
| // UNIT=SYSDA,SPACE=(8,(15,15)),DISP=(,CATLG),
| // VOL=SER=SCR3,RECFM=FB,LRECL=12,BLKSIZE=12
| //SYSIN DD *
| DSN8610.PROJ WHERE DEPTNO='D01'
| Figure 220. DSNTIAUL Invocation without the SQL parameter
| If you want to obtain the LOAD utility control statements for loading rows into a
| table, but you do not want to unload the rows, you can set the data set names for
| the SYSRECnn data sets to DUMMY. For example, to obtain the utility control
| statements for loading rows into the department table, you invoke DSNTIAUL like
| this:
| Now suppose that you also want to use DSNTIAUL to do these things:
| ) Unload all rows from the project table
| ) Unload only rows from the employee table for employees in departments with
| department numbers that begin with D, and order the unloaded rows by
| employee number
| ) Lock both tables in share mode before you unload them
| For these activities, you must specify the SQL parameter when you run DSNTIAUL.
| Your DSNTIAUL invocation looks like this:
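Although the complete job is not reproduced here, the SYSIN input for such a run might contain SQL statements like the following sketch:

   LOCK TABLE DSN8610.PROJ IN SHARE MODE;
   LOCK TABLE DSN8610.EMP IN SHARE MODE;
   SELECT * FROM DSN8610.PROJ;
   SELECT * FROM DSN8610.EMP
     WHERE WORKDEPT LIKE 'D%'
     ORDER BY EMPNO;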
| Running DSNTIAD
| This section contains information that you need when you run DSNTIAD, including
| parameters, data sets, return codes, and invocation examples.
| DSNTIAD parameters:
| RC0
| If you specify this parameter, DSNTIAD ends with return code 0, even if the
| program encounters SQL errors. If you do not specify RC0, DSNTIAD ends
| with a return code that reflects the severity of the errors that occur. Without
| RC0, DSNTIAD terminates if more than 10 SQL errors occur during a single
| execution.
| SQLTERM(termchar)
| Specify this parameter to indicate the character that you use to end each SQL
| statement. You can use any special character except one of those listed in
| Table 131. SQLTERM(;) is the default.
| Table 131. Invalid special characters for the SQL terminator
| Name                Character   Hexadecimal representation
| blank                           X'40'
| comma               ,           X'6B'
| double quote " X'7F'
| left parenthesis ( X'4D'
| right parenthesis ) X'5D'
| single quote ' X'7D'
| underscore _ X'6D'
| Use a character other than a semicolon if you plan to execute a statement that
| contains embedded semicolons. For example, suppose you specify the
| parameter SQLTERM(#) to indicate that the character # is the statement
| terminator. Then a CREATE TRIGGER statement with embedded semicolons
| looks like this:
| CREATE TRIGGER NEW_HIRE
| AFTER INSERT ON EMP
| FOR EACH ROW MODE DB2SQL
| BEGIN ATOMIC
| UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
| END#
| Be careful to choose a character for the statement terminator that is not used
| within the statement.
| Running DSNTEP2
| This section contains information that you need when you run DSNTEP2, including
| parameters, data sets, return codes, and invocation examples.
| DSNTEP2 parameters:
| Parameter
| Description
| ALIGN(MID) or ALIGN(LHS)
| If you want your DSNTEP2 output centered, specify ALIGN(MID). If you want
| the output left-aligned, choose ALIGN(LHS). The default is ALIGN(MID).
| MAXSEL(n)
| Specify MAXSEL(n) to limit the number of rows that DSNTEP2 returns from a
| SELECT statement. n is an integer between 0 and 32768. If you do not specify
| MAXSEL(n), DSNTEP2 returns all rows in the result table.
| NOMIXED or MIXED
| If your input to DSNTEP2 contains any DBCS characters, specify MIXED. If
| your input contains no DBCS characters, specify NOMIXED. The default is
| NOMIXED.
| SQLTERM(termchar)
| Specify this parameter to indicate the character that you use to end each SQL
| statement. You can use any character except one of those listed in Table 131
| on page 859. SQLTERM(;) is the default.
| Use a character other than a semicolon if you plan to execute a statement that
| contains embedded semicolons. For example, suppose you specify the
| parameter SQLTERM(#) to indicate that the character # is the statement
| terminator. Then a CREATE TRIGGER statement with embedded semicolons
| looks like this:
| CREATE TRIGGER NEW_HIRE
| AFTER INSERT ON EMP
| FOR EACH ROW MODE DB2SQL
| BEGIN ATOMIC
| UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
| END#
| Be careful to choose a character for the statement terminator that is not used
| within the statement.
| If you want to change the SQL terminator within a series of SQL statements,
| you can use the --#SET TERMINATOR control statement. For example,
| suppose that you have an existing set of SQL statements to which you want to
| add a CREATE TRIGGER statement that has embedded semicolons. You can
| use the default SQLTERM value, which is a semicolon, for all of the existing
| SQL statements. Before you execute the CREATE TRIGGER statement,
| include the --#SET TERMINATOR # control statement to change the SQL
| terminator to the character #:
| SELECT * FROM DEPT;
| SELECT * FROM ACT;
| SELECT * FROM EMPPROJACT;
| SELECT * FROM PROJ;
| SELECT * FROM PROJACT;
| --#SET TERMINATOR #
| CREATE TRIGGER NEW_HIRE
| AFTER INSERT ON EMP
| FOR EACH ROW MODE DB2SQL
| BEGIN ATOMIC
| UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
| END#
| See the discussion of the SYSIN data set for more information on the --#SET
| control statement.
| TERMINATOR
| The SQL statement terminator. value is any single-byte
| character other than one of those listed in Table 131 on
| page 859. The default is the value of the SQLTERM
| parameter.
| ROWS_FETCH
| The number of rows to be fetched from the result table. value is
| a numeric literal between -1 and the number of rows in the result
| table. -1 means that all rows are to be fetched. The default is -1.
| ROWS_OUT
| The number of fetched rows to be sent to the output data set.
| value is a numeric literal between -1 and the number of fetched
| rows. -1 means that all fetched rows are to be sent to the
| output data set. The default is -1.
Appendix D. Programming examples
This appendix contains the following programming examples:
) Sample COBOL dynamic SQL program
) “Sample dynamic and static SQL in a C program” on page 879
# ) “Example DB2 REXX application” on page 882
) “Sample COBOL program using DRDA access” on page 897
) “Sample COBOL program using DB2 private protocol access” on page 905
) “Examples of using stored procedures” on page 912
| This example program does not support BLOB, CLOB, or DBCLOB data types.
The SET statement sets a pointer from the address of an area in the linkage
section or another pointer; the statement can also set the address of an area in the
linkage section. Figure 226 on page 869 provides these uses of the SET
statement. The SET statement does not permit the use of an address in the
WORKING-STORAGE section.
The initial program is extremely simple. It includes a working storage section that
allocates the maximum amount of storage needed. This program then calls the
second program, passing the area or areas on the CALL statement. The second
program defines the area in the linkage section and can then use pointers within
the area.
If you need to allocate parts of storage, the best method is to use indexes or
subscripts. You can use subscripts for arithmetic and comparison operations.
Example
Figure 225 on page 867 shows an example of the initial program DSN8BCU1 that
allocates the storage and calls the second program DSN8BCU2 shown in
Figure 226 on page 869. DSN8BCU2 then defines the passed storage areas in its
linkage section and includes the USING clause on its PROCEDURE DIVISION
statement.
Defining the pointers, then redefining them as numeric, permits some manipulation
of the pointers that you cannot perform directly. For example, you cannot add the
column length to the record pointer, but you can add the column length to the
numeric value that redefines the pointer.
/**********************************************************************/
/* Descriptive name = Dynamic SQL sample using C language */
/* */
/* Function = To show examples of the use of dynamic and static */
/* SQL. */
/* */
/* Notes = This example assumes that the EMP and DEPT tables are */
/* defined. They need not be the same as the DB2 Sample */
/* tables. */
/* */
/* Module type = C program */
/* Processor = DB2 precompiler, C compiler */
/* Module size = see link edit */
/* Attributes = not reentrant or reusable */
/* */
/* Input = */
/* */
/* symbolic label/name = DEPT */
/* description = arbitrary table */
/* symbolic label/name = EMP */
/* description = arbitrary table */
/* */
/* Output = */
/* */
/* symbolic label/name = SYSPRINT */
/* description = print results via printf */
/* */
/* Exit-normal = return code normal completion */
/* */
/* Exit-error = */
/* */
/* Return code = SQLCA */
/* */
/* Abend codes = none */
/* */
/* External references = none */
/* */
/* Control-blocks = */
/* SQLCA - sql communication area */
/* */
Figure 227 (Part 1 of 4). Sample SQL in a C program
#include "stdio.h"
#include "stdefs.h"
EXEC SQL INCLUDE SQLCA;
EXEC SQL INCLUDE SQLDA;
EXEC SQL BEGIN DECLARE SECTION;
short edlevel;
struct { short len;
char x1[56];
} stmtbf1, stmtbf2, inpstr;
struct { short len;
char x1[15];
} lname;
short hv1;
struct { char deptno[4];
struct { short len;
char x[36];
} deptname;
char mgrno[7];
char admrdept[4];
} hv2;
short ind[4];
EXEC SQL END DECLARE SECTION;
EXEC SQL DECLARE EMP TABLE
(EMPNO CHAR(6) ,
FIRSTNAME VARCHAR(12) ,
MIDINIT CHAR(1) ,
LASTNAME VARCHAR(15) ,
WORKDEPT CHAR(3) ,
PHONENO CHAR(4) ,
HIREDATE DECIMAL(6) ,
JOBCODE DECIMAL(3) ,
EDLEVEL SMALLINT ,
SEX CHAR(1) ,
BIRTHDATE DECIMAL(6) ,
SALARY DECIMAL(8,2) ,
FORFNAME VARGRAPHIC(12) ,
FORMNAME GRAPHIC(1) ,
FORLNAME VARGRAPHIC(15) ,
FORADDR VARGRAPHIC(256) ) ;
Figure 227 (Part 2 of 4). Sample SQL in a C program
# DRAW syntax:
# DRAW parameters:
# object-name
# The name of the table or view for which DRAW builds an SQL statement or
# utility control statement. The name can be a one-, two-, or three-part name.
# The table or view to which object-name refers must exist before DRAW can
# run.
# object-name is a required parameter.
# SSID=ssid
# Specifies the name of the local DB2 subsystem.
# S can be used as an abbreviation for SSID.
# If you invoke DRAW from the command line of the edit session in SPUFI,
# SSID=ssid is an optional parameter. DRAW uses the subsystem ID from the
# DB2I Defaults panel.
# TYPE=operation-type
# The type of statement that DRAW builds.
# T can be used as an abbreviation for TYPE.
# operation-type has one of the following values:
# Generate a template for an INSERT statement that inserts values into table
# DSN8610.EMP at location SAN_JOSE. The local subsystem ID is DSN.
# Generate a LOAD control statement to load values into table DSN8610.EMP. The
# local subsystem ID is DSN.
# >>--DRAW-----tablename-----|---------------------------|-------><
#                            |-(-|-Ssid=subsystem-name-|-|
#                                |       +-Select-+      |
#                                |-Type=-|-Insert-|------|
#                                        |-Update-|
#                                        +--Load--+
# Ssid=subsystem-name
# subsystem-name specifies the name of a DB2 subsystem.
# Select
# Composes a basic query for selecting data from the columns of a
# table or view. If TYPE is not specified, SELECT is assumed.
# Using SELECT with the DRAW command produces a query that would
# retrieve all rows and all columns from the specified table. You
# can then modify the query as needed.
# Insert
# Composes a basic query to insert data into the columns of a table
# or view.
# Update
# Composes a basic query to change the data in a table or view.
# To use this UPDATE query, type the changes you want to make to
# the right of the column names, and delete the lines you don't
# need. Be sure to complete the WHERE clause. For information on
# writing queries to update data, refer to DB2 SQL Reference.
# Load
# Composes a load statement to load the data in a table.
# */
# L2 = WHEREAMI()
# /**********************************************************************/
# /* TRACE ?R */
# /**********************************************************************/
# Address ISPEXEC
# "ISREDIT MACRO (ARGS) NOPROCESS"
# If ARGS = "" Then
# Do
# Do I = L1+2 To L2-2;Say SourceLine(I);End
# Exit (2)
# End
# Parse Upper Var Args Table "(" Parms
# Parms = Translate(Parms," ",",")
# Type = "SELECT" /* Default */
# SSID = "" /* Default */
# "VGET (DSNEOV1)"
# If RC = 0 Then SSID = DSNEOV1
# If (Parms <> "") Then
# Do Until(Parms = "")
# Parse Var Parms Var "=" Value Parms
# If Var = "T" | Var = "TYPE" Then Type = Value
# Else
# If Var = "S" | Var = "SSID" Then SSID = Value
# Else
# Exit (2)
# End
# "CONTROL ERRORS RETURN"
# "ISREDIT (LEFTBND,RIGHTBND) = BOUNDS"
# "ISREDIT (LRECL) = DATA_WIDTH" /*LRECL*/
# BndSize = RightBnd - LeftBnd + 1
# If BndSize > 72 Then BndSize = 72
# "ISREDIT PROCESS DEST"
# Select
# When rc = 0 Then
# 'ISREDIT (ZDEST) = LINENUM .ZDEST'
# When rc <= 8 Then /* No A or B entered */
# Do
# zedsmsg = 'Enter "A"/"B" line cmd'
# zedlmsg = 'DRAW requires an "A" or "B" line command'
# 'SETMSG MSG(ISRZ001)'
# Exit 12
# End
# When rc < 20 Then /* Conflicting line commands - edit sets message */
# Exit 12
# When rc = 20 Then
# zdest = 0
# Otherwise
# Exit 12
# End
# Figure 228 (Part 4 of 10). REXX sample program DRAW
# Select
# When (Left(Type,1) = "S") Then
# Call DrawSelect
# When (Left(Type,1) = "I") Then
# Call DrawInsert
# When (Left(Type,1) = "U") Then
# Call DrawUpdate
# When (Left(Type,1) = "L") Then
# Call DrawLoad
# Otherwise EXIT (2)
# End
# Do I = LINE.0 To 1 By -1
# LINE = COPIES(" ",LEFTBND-1)||LINE.I
# 'ISREDIT LINE_AFTER 'zdest' = DATALINE (Line)'
# End
# line1 = zdest + 1
# 'ISREDIT CURSOR = 'line1
# Exit
# Figure 228 (Part 5 of 10). REXX sample program DRAW
IDENTIFICATION DIVISION.
PROGRAM-ID. TWOPHASE.
AUTHOR.
REMARKS.
*****************************************************************
* *
* MODULE NAME = TWOPHASE *
* *
* DESCRIPTIVE NAME = DB2 SAMPLE APPLICATION USING *
* TWO PHASE COMMIT AND THE DRDA DISTRIBUTED *
* ACCESS METHOD *
* *
* COPYRIGHT = 5665-DB2 (C) COPYRIGHT IBM CORP 1982, 1989 *
* REFER TO COPYRIGHT INSTRUCTIONS FORM NUMBER G12-283 *
* *
* STATUS = VERSION 5 *
* *
* FUNCTION = THIS MODULE DEMONSTRATES DISTRIBUTED DATA ACCESS *
* USING 2 PHASE COMMIT BY TRANSFERRING AN EMPLOYEE *
* FROM ONE LOCATION TO ANOTHER. *
* *
* NOTE: THIS PROGRAM ASSUMES THE EXISTENCE OF THE *
* TABLE SYSADM.EMP AT LOCATIONS STLEC1 AND *
* STLEC2. *
* *
* MODULE TYPE = COBOL PROGRAM *
* PROCESSOR = DB2 PRECOMPILER, VS COBOL II *
* MODULE SIZE = SEE LINK EDIT *
* ATTRIBUTES = NOT REENTRANT OR REUSABLE *
* *
* ENTRY POINT = *
* PURPOSE = TO ILLUSTRATE 2 PHASE COMMIT *
* LINKAGE = INVOKE FROM DSN RUN *
* INPUT = NONE *
* OUTPUT = *
* SYMBOLIC LABEL/NAME = SYSPRINT *
* DESCRIPTION = PRINT OUT THE DESCRIPTION OF EACH *
* STEP AND THE RESULTANT SQLCA *
* *
* EXIT NORMAL = RETURN CODE FROM NORMAL COMPLETION *
* *
* EXIT ERROR = NONE *
* *
* EXTERNAL REFERENCES = *
* ROUTINE SERVICES = NONE *
* DATA-AREAS = NONE *
* CONTROL-BLOCKS = *
* SQLCA - SQL COMMUNICATION AREA *
* *
* TABLES = NONE *
* *
* CHANGE-ACTIVITY = NONE *
* *
* *
* *
Figure 229 (Part 1 of 8). Sample COBOL two-phase commit application for DRDA access
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
SELECT PRINTER, ASSIGN TO S-OUT1.
DATA DIVISION.
FILE SECTION.
FD PRINTER
RECORD CONTAINS 12 CHARACTERS
DATA RECORD IS PRT-TC-RESULTS
LABEL RECORD IS OMITTED.
1 PRT-TC-RESULTS.
3 PRT-BLANK PIC X(12).
Figure 229 (Part 3 of 8). Sample COBOL two-phase commit application for DRDA access
*****************************************************************
* Variable declarations *
*****************************************************************
1 H-EMPTBL.
5 H-EMPNO PIC X(6).
5 H-NAME.
49 H-NAME-LN PIC S9(4) COMP-4.
49 H-NAME-DA PIC X(32).
5 H-ADDRESS.
49 H-ADDRESS-LN PIC S9(4) COMP-4.
49 H-ADDRESS-DA PIC X(36).
5 H-CITY.
49 H-CITY-LN PIC S9(4) COMP-4.
49 H-CITY-DA PIC X(36).
5 H-EMPLOC PIC X(4).
5 H-SSNO PIC X(11).
5 H-BORN PIC X(1).
5 H-SEX PIC X(1).
5 H-HIRED PIC X(1).
5 H-DEPTNO PIC X(3).
5 H-JOBCODE PIC S9(3)V COMP-3.
5 H-SRATE PIC S9(5) COMP.
5 H-EDUC PIC S9(5) COMP.
5 H-SAL PIC S9(6)V9(2) COMP-3.
5 H-VALIDCHK PIC S9(6)V COMP-3.
1 H-EMPTBL-IND-TABLE.
2 H-EMPTBL-IND PIC S9(4) COMP OCCURS 15 TIMES.
*****************************************************************
* Includes for the variables used in the COBOL standard *
* language procedures and the SQLCA. *
*****************************************************************
*****************************************************************
* Declaration for the table that contains employee information *
*****************************************************************
*****************************************************************
* Constants *
*****************************************************************
*****************************************************************
* Declaration of the cursor that will be used to retrieve *
* information about a transferring employee *
*****************************************************************
PROCEDURE DIVISION.
A11-HOUSE-KEEPING.
OPEN OUTPUT PRINTER.
*****************************************************************
* An employee is transferring from location STLEC1 to STLEC2. *
* Retrieve information about the employee from STLEC1, delete *
* the employee from STLEC1 and insert the employee at STLEC2 *
* using the information obtained from STLEC1. *
*****************************************************************
MAINLINE.
PERFORM CONNECT-TO-SITE-1
IF SQLCODE IS EQUAL TO 0
PERFORM PROCESS-CURSOR-SITE-1
IF SQLCODE IS EQUAL TO 0
PERFORM UPDATE-ADDRESS
PERFORM CONNECT-TO-SITE-2
IF SQLCODE IS EQUAL TO 0
PERFORM PROCESS-SITE-2.
PERFORM COMMIT-WORK.
Figure 229 (Part 5 of 8). Sample COBOL two-phase commit application for DRDA access
*****************************************************************
* Establish a connection to STLEC1 *
*****************************************************************
CONNECT-TO-SITE-1.
*****************************************************************
* Once a connection has been established successfully at STLEC1,*
* open the cursor that will be used to retrieve information *
* about the transferring employee. *
*****************************************************************
PROCESS-CURSOR-SITE-1.
*****************************************************************
* Retrieve information about the transferring employee. *
* Provided that the employee exists, perform DELETE-SITE-1 to *
* delete the employee from STLEC1. *
*****************************************************************
FETCH-DELETE-SITE-1.
Figure 229 (Part 6 of 8). Sample COBOL two-phase commit application for DRDA access
DELETE-SITE-1.
*****************************************************************
* Close the cursor used to retrieve information about the *
* transferring employee. *
*****************************************************************
CLOSE-CURSOR-SITE-1.
*****************************************************************
* Update certain employee information in order to make it *
* current. *
*****************************************************************
UPDATE-ADDRESS.
MOVE TEMP-ADDRESS-LN TO H-ADDRESS-LN.
MOVE '15 NEW STREET' TO H-ADDRESS-DA.
MOVE TEMP-CITY-LN TO H-CITY-LN.
MOVE 'NEW CITY, CA 9784' TO H-CITY-DA.
MOVE 'SJCA' TO H-EMPLOC.
*****************************************************************
* Establish a connection to STLEC2 *
*****************************************************************
CONNECT-TO-SITE-2.
Figure 229 (Part 7 of 8). Sample COBOL two-phase commit application for DRDA access
PROCESS-SITE-2.
*****************************************************************
* COMMIT any changes that were made at STLEC1 and STLEC2. *
*****************************************************************
COMMIT-WORK.
*****************************************************************
* Include COBOL standard language procedures *
*****************************************************************
INCLUDE-SUBS.
EXEC SQL INCLUDE COBSSUB END-EXEC.
Figure 229 (Part 8 of 8). Sample COBOL two-phase commit application for DRDA access
IDENTIFICATION DIVISION.
PROGRAM-ID. TWOPHASE.
AUTHOR.
REMARKS.
*****************************************************************
* *
* MODULE NAME = TWOPHASE *
* *
* DESCRIPTIVE NAME = DB2 SAMPLE APPLICATION USING *
* TWO PHASE COMMIT AND DB2 PRIVATE PROTOCOL *
* DISTRIBUTED ACCESS METHOD *
* *
* COPYRIGHT = 5665-DB2 (C) COPYRIGHT IBM CORP 1982, 1989 *
* REFER TO COPYRIGHT INSTRUCTIONS FORM NUMBER G12-283 *
* *
* STATUS = VERSION 5 *
* *
* FUNCTION = THIS MODULE DEMONSTRATES DISTRIBUTED DATA ACCESS *
* USING 2 PHASE COMMIT BY TRANSFERRING AN EMPLOYEE *
* FROM ONE LOCATION TO ANOTHER. *
* *
* NOTE: THIS PROGRAM ASSUMES THE EXISTENCE OF THE *
* TABLE SYSADM.EMP AT LOCATIONS STLEC1 AND *
* STLEC2. *
* *
* MODULE TYPE = COBOL PROGRAM *
* PROCESSOR = DB2 PRECOMPILER, VS COBOL II *
* MODULE SIZE = SEE LINK EDIT *
* ATTRIBUTES = NOT REENTRANT OR REUSABLE *
* *
* ENTRY POINT = *
* PURPOSE = TO ILLUSTRATE 2 PHASE COMMIT *
* LINKAGE = INVOKE FROM DSN RUN *
* INPUT = NONE *
* OUTPUT = *
* SYMBOLIC LABEL/NAME = SYSPRINT *
* DESCRIPTION = PRINT OUT THE DESCRIPTION OF EACH *
* STEP AND THE RESULTANT SQLCA *
* *
* EXIT NORMAL = RETURN CODE FROM NORMAL COMPLETION *
* *
* EXIT ERROR = NONE *
* *
* EXTERNAL REFERENCES = *
* ROUTINE SERVICES = NONE *
* DATA-AREAS = NONE *
* CONTROL-BLOCKS = *
* SQLCA - SQL COMMUNICATION AREA *
* *
* TABLES = NONE *
* *
* CHANGE-ACTIVITY = NONE *
* *
* *
Figure 230 (Part 1 of 7). Sample COBOL two-phase commit application for DB2 private
protocol access
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
SELECT PRINTER, ASSIGN TO S-OUT1.
DATA DIVISION.
FILE SECTION.
FD PRINTER
RECORD CONTAINS 12 CHARACTERS
DATA RECORD IS PRT-TC-RESULTS
LABEL RECORD IS OMITTED.
1 PRT-TC-RESULTS.
3 PRT-BLANK PIC X(12).
WORKING-STORAGE SECTION.
*****************************************************************
* Variable declarations *
*****************************************************************
1 H-EMPTBL.
5 H-EMPNO PIC X(6).
5 H-NAME.
49 H-NAME-LN PIC S9(4) COMP-4.
49 H-NAME-DA PIC X(32).
5 H-ADDRESS.
49 H-ADDRESS-LN PIC S9(4) COMP-4.
49 H-ADDRESS-DA PIC X(36).
5 H-CITY.
49 H-CITY-LN PIC S9(4) COMP-4.
49 H-CITY-DA PIC X(36).
5 H-EMPLOC PIC X(4).
5 H-SSNO PIC X(11).
5 H-BORN PIC X(1).
5 H-SEX PIC X(1).
5 H-HIRED PIC X(1).
5 H-DEPTNO PIC X(3).
5 H-JOBCODE PIC S9(3)V COMP-3.
5 H-SRATE PIC S9(5) COMP.
5 H-EDUC PIC S9(5) COMP.
5 H-SAL PIC S9(6)V9(2) COMP-3.
5 H-VALIDCHK PIC S9(6)V COMP-3.
Figure 230 (Part 3 of 7). Sample COBOL two-phase commit application for DB2 private
protocol access
*****************************************************************
* Includes for the variables used in the COBOL standard *
* language procedures and the SQLCA. *
*****************************************************************
*****************************************************************
* Declaration for the table that contains employee information *
*****************************************************************
*****************************************************************
* Constants *
*****************************************************************
*****************************************************************
* Declaration of the cursor that will be used to retrieve *
* information about a transferring employee *
*****************************************************************
*****************************************************************
* An employee is transferring from location STLEC1 to STLEC2. *
* Retrieve information about the employee from STLEC1, delete *
* the employee from STLEC1 and insert the employee at STLEC2 *
* using the information obtained from STLEC1. *
*****************************************************************
MAINLINE.
PERFORM PROCESS-CURSOR-SITE-1
IF SQLCODE IS EQUAL TO 0
PERFORM UPDATE-ADDRESS
PERFORM PROCESS-SITE-2.
PERFORM COMMIT-WORK.
PROG-END.
CLOSE PRINTER.
GOBACK.
*****************************************************************
* Open the cursor that will be used to retrieve information *
* about the transferring employee. *
*****************************************************************
PROCESS-CURSOR-SITE-1.
*****************************************************************
* Retrieve information about the transferring employee. *
* Provided that the employee exists, perform DELETE-SITE-1 to *
* delete the employee from STLEC1. *
*****************************************************************
FETCH-DELETE-SITE-1.
*****************************************************************
* Delete the employee from STLEC1. *
*****************************************************************
DELETE-SITE-1.
*****************************************************************
* Close the cursor used to retrieve information about the *
* transferring employee. *
*****************************************************************
CLOSE-CURSOR-SITE-1.
*****************************************************************
* Update certain employee information in order to make it *
* current. *
*****************************************************************
UPDATE-ADDRESS.
MOVE TEMP-ADDRESS-LN TO H-ADDRESS-LN.
MOVE '15 NEW STREET' TO H-ADDRESS-DA.
MOVE TEMP-CITY-LN TO H-CITY-LN.
MOVE 'NEW CITY, CA 9784' TO H-CITY-DA.
MOVE 'SJCA' TO H-EMPLOC.
Figure 230 (Part 6 of 7). Sample COBOL two-phase commit application for DB2 private
protocol access
PROCESS-SITE-2.
*****************************************************************
* COMMIT any changes that were made at STLEC1 and STLEC2. *
*****************************************************************
COMMIT-WORK.
*****************************************************************
* Include COBOL standard language procedures *
*****************************************************************
INCLUDE-SUBS.
EXEC SQL INCLUDE COBSSUB END-EXEC.
Figure 230 (Part 7 of 7). Sample COBOL two-phase commit application for DB2 private
protocol access
/************************************************************/
/* Call the GETPRML stored procedure to retrieve the */
/* RUNOPTS values for the stored procedure. In this */
/* example, we request the PARMLIST definition for the */
/* stored procedure named DSN8EP2. */
/* */
/* The call should complete with SQLCODE +466 because */
/* GETPRML returns result sets. */
/************************************************************/
strcpy(procnm,"dsn8ep2 ");
/* Input parameter -- PROCEDURE to be found */
strcpy(schema," ");
/* Input parameter -- Schema name for proc */
parmind.procnm_ind=0;
parmind.schema_ind=0;
parmind.out_code_ind=0;
/* Indicate that none of the input parameters */
/* have null values */
parmind.parmlst_ind=-1;
/* The parmlst parameter is an output parm. */
/* Mark PARMLST parameter as null, so the DB2 */
/* requester doesn't have to send the entire */
/* PARMLST variable to the server. This */
/* helps reduce network I/O time, because */
/* PARMLST is fairly large. */
EXEC SQL
CALL GETPRML(:procnm INDICATOR :parmind.procnm_ind,
:schema INDICATOR :parmind.schema_ind,
:out_code INDICATOR :parmind.out_code_ind,
:parmlst INDICATOR :parmind.parmlst_ind);
if(SQLCODE!=+466) /* If SQL CALL failed, */
{
/* print the SQLCODE and any */
/* message tokens */
printf("SQL CALL failed due to SQLCODE = %d\n",SQLCODE);
printf("sqlca.sqlerrmc = ");
for(i=0;i<sqlca.sqlerrml;i++)
printf("%c",sqlca.sqlerrmc[i]);
printf("\n");
}
Figure 231 (Part 2 of 4). Calling a stored procedure from a C program
/********************************************************/
/* Use the statement DESCRIBE PROCEDURE to */
/* return information about the result sets in the */
/* SQLDA pointed to by proc_da: */
/* - SQLD contains the number of result sets that were */
/* returned by the stored procedure. */
/* - Each SQLVAR entry has the following information */
/* about a result set: */
/* - SQLNAME contains the name of the cursor that */
/* the stored procedure uses to return the result */
/* set. */
/* - SQLIND contains an estimate of the number of */
/* rows in the result set. */
/* - SQLDATA contains the result locator value for */
/* the result set. */
/********************************************************/
EXEC SQL DESCRIBE PROCEDURE INTO :*proc_da;
/********************************************************/
/* Assume that you have examined SQLD and determined */
/* that there is one result set. Use the statement */
/* ASSOCIATE LOCATORS to establish a result set locator */
/* for the result set. */
/********************************************************/
EXEC SQL ASSOCIATE LOCATORS (:loc1) WITH PROCEDURE GETPRML;
/********************************************************/
/* Use the statement ALLOCATE CURSOR to associate a */
/* cursor for the result set. */
/********************************************************/
EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :loc1;
/********************************************************/
/* Use the statement DESCRIBE CURSOR to determine the */
/* columns in the result set. */
/********************************************************/
EXEC SQL DESCRIBE CURSOR C1 INTO :*res_da;
Figure 231 (Part 3 of 4). Calling a stored procedure from a C program
/********************************************************/
/* Fetch the data from the result table. */
/********************************************************/
while(SQLCODE==0)
{
EXEC SQL FETCH C1 USING DESCRIPTOR :*res_da;
}
return;
}
Figure 231 (Part 4 of 4). Calling a stored procedure from a C program
IDENTIFICATION DIVISION.
PROGRAM-ID. CALPRML.
ENVIRONMENT DIVISION.
CONFIGURATION SECTION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
SELECT REPOUT
ASSIGN TO UT-S-SYSPRINT.
DATA DIVISION.
FILE SECTION.
FD REPOUT
RECORD CONTAINS 127 CHARACTERS
LABEL RECORDS ARE OMITTED
DATA RECORD IS REPREC.
1 REPREC PIC X(127).
WORKING-STORAGE SECTION.
*****************************************************
* MESSAGES FOR SQL CALL *
*****************************************************
1 SQLREC.
2 BADMSG PIC X(34) VALUE
' SQL CALL FAILED DUE TO SQLCODE = '.
2 BADCODE PIC +9(5) USAGE DISPLAY.
2 FILLER PIC X(8) VALUE SPACES.
1 ERRMREC.
2 ERRMMSG PIC X(12) VALUE ' SQLERRMC = '.
2 ERRMCODE PIC X(7).
2 FILLER PIC X(38) VALUE SPACES.
1 CALLREC.
2 CALLMSG PIC X(28) VALUE
' GETPRML FAILED DUE TO RC = '.
2 CALLCODE PIC +9(5) USAGE DISPLAY.
2 FILLER PIC X(42) VALUE SPACES.
1 RSLTREC.
2 RSLTMSG PIC X(15) VALUE
' TABLE NAME IS '.
2 TBLNAME PIC X(18) VALUE SPACES.
2 FILLER PIC X(87) VALUE SPACES.
Figure 232 (Part 1 of 3). Calling a stored procedure from a COBOL program
*****************************************************
* SQL INCLUDE FOR SQLCA *
*****************************************************
EXEC SQL INCLUDE SQLCA END-EXEC.
PROCEDURE DIVISION.
*------------------
PROG-START.
OPEN OUTPUT REPOUT.
* OPEN OUTPUT FILE
MOVE 'DSN8EP2 ' TO PROCNM.
* INPUT PARAMETER -- PROCEDURE TO BE FOUND
MOVE SPACES TO SCHEMA.
* INPUT PARAMETER -- SCHEMA IN SYSROUTINES
MOVE -1 TO PARMIND.
* THE PARMLST PARAMETER IS AN OUTPUT PARM.
* MARK PARMLST PARAMETER AS NULL, SO THE DB2
* REQUESTER DOESN'T HAVE TO SEND THE ENTIRE
* PARMLST VARIABLE TO THE SERVER. THIS
* HELPS REDUCE NETWORK I/O TIME, BECAUSE
* PARMLST IS FAIRLY LARGE.
EXEC SQL
CALL GETPRML(:PROCNM,
:SCHEMA,
:OUT-CODE,
:PARMLST INDICATOR :PARMIND)
END-EXEC.
Figure 232 (Part 2 of 3). Calling a stored procedure from a COBOL program
*PROCESS SYSTEM(MVS);
CALPRML:
PROC OPTIONS(MAIN);
/************************************************************/
/* Declare the parameters used to call the GETPRML */
/* stored procedure. */
/************************************************************/
DECLARE PROCNM CHAR(18), /* INPUT parm -- PROCEDURE name */
SCHEMA CHAR(8), /* INPUT parm -- User's schema */
OUT_CODE FIXED BIN(31),
/* OUTPUT -- SQLCODE from the */
/* SELECT operation. */
PARMLST CHAR(254) /* OUTPUT -- RUNOPTS for */
VARYING, /* the matching row in the */
/* catalog table SYSROUTINES */
PARMIND FIXED BIN(15);
/* PARMLST indicator variable */
/************************************************************/
/* Include the SQLCA */
/************************************************************/
EXEC SQL INCLUDE SQLCA;
/************************************************************/
/* Call the GETPRML stored procedure to retrieve the */
/* RUNOPTS values for the stored procedure. In this */
/* example, we request the RUNOPTS values for the */
/* stored procedure named DSN8EP2. */
/************************************************************/
PROCNM = 'DSN8EP2';
/* Input parameter -- PROCEDURE to be found */
SCHEMA = ' ';
/* Input parameter -- SCHEMA in SYSROUTINES */
PARMIND = -1; /* The PARMLST parameter is an output parm. */
/* Mark PARMLST parameter as null, so the DB2 */
/* requester doesn't have to send the entire */
/* PARMLST variable to the server. This */
/* helps reduce network I/O time, because */
/* PARMLST is fairly large. */
EXEC SQL
CALL GETPRML(:PROCNM,
:SCHEMA,
:OUT_CODE,
:PARMLST INDICATOR :PARMIND);
Figure 233 (Part 1 of 2). Calling a stored procedure from a PL/I program
The output parameters from this stored procedure contain the SQLCODE from the
SELECT statement and the value of the RUNOPTS column from SYSROUTINES.
The CREATE PROCEDURE statement for this stored procedure might look like
this:
CREATE PROCEDURE GETPRML(PROCNM CHAR(18) IN, SCHEMA CHAR(8) IN,
OUTCODE INTEGER OUT, PARMLST VARCHAR(254) OUT)
LANGUAGE C
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME 'GETPRML'
COLLID GETPRML
ASUTIME NO LIMIT
PARAMETER STYLE GENERAL
STAY RESIDENT NO
RUN OPTIONS 'MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)'
WLM ENVIRONMENT SAMPPROG
PROGRAM TYPE MAIN
EXTERNAL SECURITY DB2
RESULT SETS 2
COMMIT ON RETURN NO;
/***************************************************************/
/* Declare C variables for SQL operations on the parameters. */
/* These are local variables to the C program, which you must */
/* copy to and from the parameter list provided to the stored */
/* procedure. */
/***************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
char PROCNM[19];
char SCHEMA[9];
char PARMLST[255];
EXEC SQL END DECLARE SECTION;
/***************************************************************/
/* Declare cursors for returning result sets to the caller. */
/***************************************************************/
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
SELECT NAME
FROM SYSIBM.SYSTABLES
WHERE CREATOR=:SCHEMA;
main(argc,argv)
int argc;
char *argv[];
{
/********************************************************/
/* Copy the input parameters into the area reserved in */
/* the program for SQL processing. */
/********************************************************/
strcpy(PROCNM, argv[1]);
strcpy(SCHEMA, argv[2]);
/********************************************************/
/* Issue the SQL SELECT against the SYSROUTINES */
/* DB2 catalog table. */
/********************************************************/
strcpy(PARMLST, ""); /* Clear PARMLST */
EXEC SQL
SELECT RUNOPTS INTO :PARMLST
FROM SYSIBM.SYSROUTINES
WHERE NAME=:PROCNM AND
SCHEMA=:SCHEMA;
Figure 234 (Part 1 of 2). A C stored procedure with linkage convention GENERAL
/********************************************************/
/* Copy the PARMLST value returned by the SELECT back to*/
/* the parameter list provided to this stored procedure.*/
/********************************************************/
strcpy(argv[4], PARMLST);
/********************************************************/
/* Open cursor C1 to cause DB2 to return a result set */
/* to the caller. */
/********************************************************/
EXEC SQL OPEN C1;
}
Figure 234 (Part 2 of 2). A C stored procedure with linkage convention GENERAL
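Because GETPRML declares cursor C1 with the WITH RETURN clause and leaves it open when it returns, the calling program can fetch the rows of that result set after the CALL statement completes. The statements below are a minimal sketch of that retrieval; the locator variable :RSLOC, the cursor name C2, and the host variable :TBLNAME are hypothetical names that you would declare in the calling program with data types that fit your host language.
  EXEC SQL ASSOCIATE LOCATORS (:RSLOC) WITH PROCEDURE GETPRML;
  EXEC SQL ALLOCATE C2 CURSOR FOR RESULT SET :RSLOC;
  EXEC SQL FETCH C2 INTO :TBLNAME;
  EXEC SQL CLOSE C2;
In practice, you issue the FETCH repeatedly until SQLCODE 100 indicates that no more rows exist in the result set.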
The linkage convention for this stored procedure is GENERAL WITH NULLS.
The output parameters from this stored procedure contain the SQLCODE from the
SELECT operation, and the value of the RUNOPTS column retrieved from the
SYSROUTINES table.
The CREATE PROCEDURE statement for this stored procedure might look like
this:
CREATE PROCEDURE GETPRML(IN PROCNM CHAR(18), IN SCHEMA CHAR(8),
OUT OUTCODE INTEGER, OUT PARMLST VARCHAR(254))
LANGUAGE C
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME 'GETPRML'
COLLID GETPRML
ASUTIME NO LIMIT
PARAMETER STYLE GENERAL WITH NULLS
STAY RESIDENT NO
RUN OPTIONS 'MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)'
WLM ENVIRONMENT SAMPPROG
PROGRAM TYPE MAIN
EXTERNAL SECURITY DB2
RESULT SETS 2
COMMIT ON RETURN NO;
/***************************************************************/
/* Declare C variables used for SQL operations on the */
/* parameters. These are local variables to the C program, */
/* which you must copy to and from the parameter list provided */
/* to the stored procedure. */
/***************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
char PROCNM[19];
char SCHEMA[9];
char PARMLST[255];
struct INDICATORS {
short int PROCNM_IND;
short int SCHEMA_IND;
short int OUT_CODE_IND;
short int PARMLST_IND;
} PARM_IND;
EXEC SQL END DECLARE SECTION;
/***************************************************************/
/* Declare cursors for returning result sets to the caller. */
/***************************************************************/
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
SELECT NAME
FROM SYSIBM.SYSTABLES
WHERE CREATOR=:SCHEMA;
main(argc,argv)
int argc;
char *argv[];
{
/********************************************************/
/* Copy the input parameters into the area reserved in */
/* the local program for SQL processing. */
/********************************************************/
strcpy(PROCNM, argv[1]);
strcpy(SCHEMA, argv[2]);
/********************************************************/
/* Copy null indicator values for the parameter list. */
/********************************************************/
memcpy(&PARM_IND,(struct INDICATORS *) argv[5],
sizeof(PARM_IND));
Figure 235 (Part 1 of 2). A C stored procedure with linkage convention GENERAL WITH
NULLS
/********************************************************/
/* If any input parameter is NULL, set the output       */
/* return code and mark the PARMLST output as NULL.     */
/********************************************************/
if (PARM_IND.PROCNM_IND < 0 ||
    PARM_IND.SCHEMA_IND < 0) {
  *(int *) argv[3] = 9999;    /* Set output return code.       */
  PARM_IND.OUT_CODE_IND = 0;  /* OUT_CODE is not NULL.         */
  PARM_IND.PARMLST_IND = -1;  /* Assign NULL value to PARMLST. */
}
else {
/********************************************************/
/* If the input parameters are not NULL, issue the SQL */
/* SELECT against the SYSIBM.SYSROUTINES catalog */
/* table. */
/********************************************************/
strcpy(PARMLST, ""); /* Clear PARMLST */
EXEC SQL
SELECT RUNOPTS INTO :PARMLST
FROM SYSIBM.SYSROUTINES
WHERE NAME=:PROCNM AND
SCHEMA=:SCHEMA;
/********************************************************/
/* Copy SQLCODE to the output parameter list. */
/********************************************************/
*(int *) argv[3] = SQLCODE;
PARM_IND.OUT_CODE_IND = 0; /* OUT_CODE is not NULL */
}
/********************************************************/
/* Copy the RUNOPTS value back to the output parameter */
/* area. */
/********************************************************/
strcpy(argv[4], PARMLST);
/********************************************************/
/* Copy the null indicators back to the output parameter*/
/* area. */
/********************************************************/
memcpy((struct INDICATORS *) argv[5],&PARM_IND,
sizeof(PARM_IND));
/********************************************************/
/* Open cursor C1 to cause DB2 to return a result set */
/* to the caller. */
/********************************************************/
EXEC SQL OPEN C1;
}
Figure 235 (Part 2 of 2). A C stored procedure with linkage convention GENERAL WITH
NULLS
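Because this version of the stored procedure uses linkage convention GENERAL WITH NULLS, the calling program should supply an indicator variable for each parameter on the CALL statement; DB2 gathers those values into the indicator structure that the procedure copies with memcpy. The CALL statement below is a sketch of such an invocation; the host variables and indicator variables are hypothetical names that the caller declares.
  EXEC SQL
    CALL GETPRML(:HVPROC   INDICATOR :PROCIND,
                 :HVSCHEMA INDICATOR :SCHEMAIND,
                 :HVCODE   INDICATOR :CODEIND,
                 :HVPARM   INDICATOR :PARMIND);
After the CALL returns, the caller examines the indicator variables to determine which output parameters contain non-null values.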
The output parameters from this stored procedure contain the SQLCODE from the
SELECT operation, and the value of the RUNOPTS column retrieved from the
SYSROUTINES table.
The CREATE PROCEDURE statement for this stored procedure might look like
this:
CREATE PROCEDURE GETPRML(IN PROCNM CHAR(18), IN SCHEMA CHAR(8),
OUT OUTCODE INTEGER, OUT PARMLST VARCHAR(254))
LANGUAGE COBOL
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME 'GETPRML'
COLLID GETPRML
ASUTIME NO LIMIT
PARAMETER STYLE GENERAL
STAY RESIDENT NO
RUN OPTIONS 'MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)'
WLM ENVIRONMENT SAMPPROG
PROGRAM TYPE MAIN
EXTERNAL SECURITY DB2
RESULT SETS 2
COMMIT ON RETURN NO;
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
DATA DIVISION.
FILE SECTION.
WORKING-STORAGE SECTION.
***************************************************
* DECLARE CURSOR FOR RETURNING RESULT SETS
***************************************************
*
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
SELECT NAME FROM SYSIBM.SYSTABLES WHERE CREATOR=:INSCHEMA
END-EXEC.
*
LINKAGE SECTION.
***************************************************
* DECLARE THE INPUT PARAMETERS FOR THE PROCEDURE
***************************************************
01 PROCNM PIC X(18).
01 SCHEMA PIC X(8).
*******************************************************
* DECLARE THE OUTPUT PARAMETERS FOR THE PROCEDURE
*******************************************************
01 OUT-CODE PIC S9(9) USAGE BINARY.
01 PARMLST.
49 PARMLST-LEN PIC S9(4) USAGE BINARY.
49 PARMLST-TEXT PIC X(254).
*******************************************************
* COPY SQLCODE INTO THE OUTPUT PARAMETER AREA
*******************************************************
MOVE SQLCODE TO OUT-CODE.
*******************************************************
* OPEN CURSOR C1 TO CAUSE DB2 TO RETURN A RESULT SET
* TO THE CALLER.
*******************************************************
EXEC SQL OPEN C1
END-EXEC.
PROG-END.
GOBACK.
Figure 236 (Part 2 of 2). A COBOL stored procedure with linkage convention GENERAL
The linkage convention for this stored procedure is GENERAL WITH NULLS.
The output parameters from this stored procedure contain the SQLCODE from the
SELECT operation, and the value of the RUNOPTS column retrieved from the
SYSIBM.SYSROUTINES table.
The CREATE PROCEDURE statement for this stored procedure might look like
this:
CREATE PROCEDURE GETPRML(IN PROCNM CHAR(18), IN SCHEMA CHAR(8),
OUT OUTCODE INTEGER, OUT PARMLST VARCHAR(254))
LANGUAGE COBOL
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME 'GETPRML'
COLLID GETPRML
ASUTIME NO LIMIT
PARAMETER STYLE GENERAL WITH NULLS
STAY RESIDENT NO
RUN OPTIONS 'MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)'
WLM ENVIRONMENT SAMPPROG
PROGRAM TYPE MAIN
EXTERNAL SECURITY DB2
RESULT SETS 2
COMMIT ON RETURN NO;
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
DATA DIVISION.
FILE SECTION.
*
WORKING-STORAGE SECTION.
*
EXEC SQL INCLUDE SQLCA END-EXEC.
*
***************************************************
* DECLARE A HOST VARIABLE TO HOLD INPUT SCHEMA
***************************************************
01 INSCHEMA PIC X(8).
***************************************************
* DECLARE CURSOR FOR RETURNING RESULT SETS
***************************************************
*
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
SELECT NAME FROM SYSIBM.SYSTABLES WHERE CREATOR=:INSCHEMA
END-EXEC.
*
LINKAGE SECTION.
***************************************************
* DECLARE THE INPUT PARAMETERS FOR THE PROCEDURE
***************************************************
01 PROCNM PIC X(18).
01 SCHEMA PIC X(8).
***************************************************
* DECLARE THE OUTPUT PARAMETERS FOR THE PROCEDURE
***************************************************
01 OUT-CODE PIC S9(9) USAGE BINARY.
01 PARMLST.
49 PARMLST-LEN PIC S9(4) USAGE BINARY.
49 PARMLST-TEXT PIC X(254).
***************************************************
* DECLARE THE STRUCTURE CONTAINING THE NULL
* INDICATORS FOR THE INPUT AND OUTPUT PARAMETERS.
***************************************************
01 IND-PARM.
   03 PROCNM-IND PIC S9(4) USAGE BINARY.
   03 SCHEMA-IND PIC S9(4) USAGE BINARY.
   03 OUT-CODE-IND PIC S9(4) USAGE BINARY.
   03 PARMLST-IND PIC S9(4) USAGE BINARY.
Figure 237 (Part 1 of 2). A COBOL stored procedure with linkage convention GENERAL
WITH NULLS
The output parameters from this stored procedure contain the SQLCODE from the
SELECT operation, and the value of the RUNOPTS column retrieved from the
SYSIBM.SYSROUTINES table.
The CREATE PROCEDURE statement for this stored procedure might look like
this:
CREATE PROCEDURE GETPRML(IN PROCNM CHAR(18), IN SCHEMA CHAR(8),
OUT OUTCODE INTEGER, OUT PARMLST VARCHAR(254))
LANGUAGE PLI
DETERMINISTIC
READS SQL DATA
EXTERNAL NAME 'GETPRML'
COLLID GETPRML
ASUTIME NO LIMIT
PARAMETER STYLE GENERAL
STAY RESIDENT NO
RUN OPTIONS 'MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)'
WLM ENVIRONMENT SAMPPROG
PROGRAM TYPE MAIN
EXTERNAL SECURITY DB2
RESULT SETS 2
COMMIT ON RETURN NO;
*PROCESS SYSTEM(MVS);
GETPRML:
PROC(PROCNM, SCHEMA, OUT_CODE, PARMLST)
OPTIONS(MAIN NOEXECOPS REENTRANT);
/************************************************************/
/* Execute SELECT from SYSIBM.SYSROUTINES in the catalog. */
/************************************************************/
EXEC SQL
SELECT RUNOPTS INTO :PARMLST
FROM SYSIBM.SYSROUTINES
WHERE NAME=:PROCNM AND
SCHEMA=:SCHEMA;
The linkage convention for this stored procedure is GENERAL WITH NULLS.
The output parameters from this stored procedure contain the SQLCODE from the
SELECT operation, and the value of the RUNOPTS column retrieved from the
SYSIBM.SYSROUTINES table.
GETPRML:
PROC(PROCNM, SCHEMA, OUT_CODE, PARMLST, INDICATORS)
OPTIONS(MAIN NOEXECOPS REENTRANT);
IF PROCNM_IND < 0 |
   SCHEMA_IND < 0 THEN
  DO;                  /* If any input parm is NULL,     */
    OUT_CODE = 9999;   /* Set output return code.        */
    OUT_CODE_IND = 0;
/* Output return code is not NULL.*/
PARMLST_IND = -1; /* Assign NULL value to PARMLST. */
END;
ELSE /* If input parms are not NULL, */
DO; /* */
/************************************************************/
/* Issue the SQL SELECT against the SYSIBM.SYSROUTINES */
/* DB2 catalog table. */
/************************************************************/
EXEC SQL
SELECT RUNOPTS INTO :PARMLST
FROM SYSIBM.SYSROUTINES
WHERE NAME=:PROCNM AND
SCHEMA=:SCHEMA;
    PARMLST_IND = 0;   /* Mark PARMLST as not NULL.      */
  END;                 /* End of the ELSE DO group.      */
END GETPRML;
Figure 239. A PL/I stored procedure with linkage convention GENERAL WITH NULLS
One situation in which this technique might be useful is when a resource becomes
unavailable during a rebind of many plans or packages. DB2 normally terminates
the rebind and does not rebind the remaining plans or packages. Later, however,
you might want to rebind only the objects that remain to be rebound. You can build
REBIND subcommands for the remaining plans or packages by using DSNTIAUL to
select the plans or packages from the DB2 catalog and to create the REBIND
subcommands. You can then submit the subcommands through the DSN command
processor, as usual.
You might first need to edit the output from DSNTIAUL so that DSN can accept it
as input. The CLIST DSNTEDIT can perform much of that task for you.
For both REBIND PLAN and REBIND PACKAGE subcommands, use TSO edit commands
to add the DSN command that the subcommands need as the first line in the
sequential data set and to add END as the last line. When you have edited the
sequential data set, you can run it to rebind the selected plans or packages.
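For example, after you add the DSN command and the END statement, the edited data set might contain records like the following; the subsystem name (DSN) and the plan names are placeholders that you replace with the names that DSNTIAUL selected:
  DSN SYSTEM(DSN)
  REBIND PLAN(PLANA)
  REBIND PLAN(PLANB)
  END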
If the SELECT statement returns no qualifying rows, then DSNTIAUL does not
generate REBIND subcommands.
The examples in this section generate REBIND subcommands that work in DB2 for
OS/390 Version 6. You might need to modify the examples for prior releases of
DB2 that do not allow all of the same syntax.
Example 1: REBIND all plans without terminating because of unavailable
resources.
SELECT SUBSTR('REBIND PLAN('CONCAT NAME
CONCAT') ',1,45)
FROM SYSIBM.SYSPLAN;
Example 2: REBIND all versions of all packages without terminating because of
unavailable resources.
SELECT SUBSTR('REBIND PACKAGE('CONCAT COLLID CONCAT'.'
CONCAT NAME CONCAT'.(*)) ',1,55)
FROM SYSIBM.SYSPACKAGE;
Example 3: REBIND all plans bound before a given date and time.
SELECT SUBSTR('REBIND PLAN('CONCAT NAME
CONCAT') ',1,45)
FROM SYSIBM.SYSPLAN
WHERE BINDDATE <= 'yyyymmdd' AND
BINDTIME <= 'hhmmssth';
where yyyymmdd represents the date portion and hhmmssth
represents the time portion of the timestamp string.
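For example, assuming that format, the following WHERE clause selects only those plans that were bound at or before noon on June 1, 1999; the date and time values are illustrative only:
  WHERE BINDDATE <= '19990601' AND
        BINDTIME <= '12000000'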
Figure 242 on page 941 shows sample JCL that uses the DEGREE(ANY) bind option to
rebind all plans that were originally bound without specifying the DEGREE keyword.
Figure 241. Example JCL: Rebind all packages bound in 1994.
Figure 242. Example JCL: Rebind selected plans with a different bind option
IBM SQL has additional reserved words that DB2 for OS/390 does not enforce.
Therefore, we suggest that you do not use these additional reserved words as
ordinary identifiers in names that have a continuing use. See IBM SQL Reference
for a list of the words.
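If an existing object already has a name that matches one of these reserved words, you can usually still refer to it by enclosing the name in double quotation marks as a delimited identifier. The following hypothetical example assumes a column named POSITION in a table named MYSCHEMA.PROJECT:
  SELECT "POSITION"
    FROM MYSCHEMA.PROJECT;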
Table 133 (Page 1 of 3). Actions allowed on SQL statements in DB2 for OS/390
                                       Interactively or      Processed by:
SQL statement             Executable   dynamically           Requesting
                                       prepared              system          Server     Precompiler
ALLOCATE CURSOR1 Y Y Y
ALTER2 Y Y Y
ASSOCIATE LOCATORS1 Y Y Y
BEGIN DECLARE SECTION Y
CALL1 Y Y
CLOSE Y Y
COMMENT ON Y Y Y
COMMIT Y Y Y
CONNECT (Type 1 and Type 2) Y Y
CREATE2 Y Y Y
DECLARE CURSOR Y
# DECLARE GLOBAL Y Y Y
# TEMPORARY TABLE
DECLARE STATEMENT Y
DECLARE TABLE Y
DELETE Y Y Y
DESCRIBE Y Y
DESCRIBE CURSOR Y Y
| DESCRIBE INPUT Y Y
DESCRIBE PROCEDURE Y Y
DROP2 Y Y Y
END DECLARE SECTION Y
EXECUTE Y Y
EXECUTE IMMEDIATE Y Y
EXPLAIN Y Y Y
| Table 134 (Page 1 of 3). SQL statements in external user-defined functions and stored
| procedures
|                                              Level of SQL access
| SQL statement             NO SQL      CONTAINS SQL      READS SQL DATA      MODIFIES SQL DATA
| ALLOCATE CURSOR Y Y
| ALTER Y
| ASSOCIATE LOCATORS Y Y
| BEGIN DECLARE SECTION Y1 Y Y Y
| CALL Y2 Y2 Y2
| CLOSE Y Y
| COMMENT ON Y
| COMMIT
| CONNECT (Type 1 and Type
| 2)
| CREATE Y
| DECLARE CURSOR Y1 Y Y Y
| DECLARE GLOBAL Y Y Y
| TEMPORARY TABLE
| DECLARE STATEMENT Y1 Y Y Y
| DECLARE TABLE Y1 Y Y Y
| DELETE Y
| DESCRIBE Y Y
| DESCRIBE CURSOR Y Y
| DESCRIBE INPUT Y Y
| DESCRIBE PROCEDURE Y Y
| DROP Y
| END DECLARE SECTION Y1 Y Y Y
| EXECUTE Y3 Y3 Y
| EXECUTE IMMEDIATE Y3 Y3 Y
| EXPLAIN Y
| Table 135 (Page 1 of 3). Valid SQL statements in an SQL procedure body
|                                          SQL statement is...
| SQL statement               The only statement           Nested in a
|                             in the procedure             compound statement
| ALLOCATE CURSOR
| ALTER DATABASE Y Y
| ALTER FUNCTION
| ALTER INDEX Y Y
| ALTER PROCEDURE
| ALTER STOGROUP Y Y
| ALTER TABLE Y Y
| ALTER TABLESPACE Y Y
| ASSOCIATE LOCATORS
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you any
license to these patents. You can send license inquiries, in writing, to:
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply to
you.
Any references in this publication to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those
Web sites. The materials at those Web sites are not part of the materials for this
IBM product and use of those Web sites is at your own risk.
Licensees of this program who wish to have information about it for the purpose of
enabling: (i) the exchange of information between independently created programs
and other programs (including this one) and (ii) the mutual use of the information
which has been exchanged, should contact:
IBM Corporation
J74/G4
555 Bailey Avenue
P.O. Box 49023
San Jose, CA 95161-9023
U.S.A.
The licensed program described in this information and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
Trademarks
The following terms are trademarks of the International Business Machines
Corporation in the United States, or other countries, or both:
Tivoli and NetView are trademarks of Tivoli Systems Inc. in the United States,
or other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Sun Microsystems, Inc. in the United States and/or other countries.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks or
registered trademarks of Microsoft Corporation in the United States and/or other
countries.
Other company, product, and service names may be trademarks or service marks
of others.
Glossary
The following terms and abbreviations are defined as they are used in the DB2 library. If you do not find the term you are looking for, refer to the index or to IBM Dictionary of Computing.
A
abend. Abnormal end of task.
abend reason code. A 4-byte hexadecimal code that uniquely identifies a problem with DB2. A complete list of DB2 abend reason codes and their explanations is contained in DB2 Messages and Codes.
abnormal end of task (abend). Termination of a task, job, or subsystem because of an error condition that recovery facilities cannot resolve during execution.
access path. The path that is used to locate data that is specified in SQL statements. An access path can be indexed or sequential.
address space. A range of virtual storage pages that is identified by a number (ASID) and a collection of segment and page tables that map the virtual pages to real pages of the computer's memory.
address space connection. The result of connecting an allied address space to DB2. Each address space that contains a task that is connected to DB2 has exactly one address space connection, even though more than one task control block (TCB) can be present. See also allied address space and task control block.
after trigger. A trigger that is defined with the trigger activation time AFTER.
agent. As used in DB2, the structure that associates all processes that are involved in a DB2 unit of work. An allied agent is generally synonymous with an allied thread. System agents are units of work that process independently of the allied agent, such as prefetch processing, deferred writes, and service tasks.
alias. An alternative name that can be used in SQL statements to refer to a table or view in the same or a remote DB2 subsystem.
allied address space. An area of storage that is external to DB2 and that is connected to DB2. An allied address space is capable of requesting DB2 services.
ambiguous cursor. A database cursor that is not defined with the FOR FETCH ONLY clause or the FOR UPDATE OF clause, is not defined on a read-only result table, is not the target of a WHERE CURRENT clause on an SQL UPDATE or DELETE statement, and is in a plan or package that contains either PREPARE or EXECUTE IMMEDIATE SQL statements.
API. Application programming interface.
application. A program or set of programs that performs a task; for example, a payroll application.
application plan. The control structure that is produced during the bind process. DB2 uses the application plan to process SQL statements that it encounters during statement execution.
application process. The unit to which resources and locks are allocated. An application process involves the execution of one or more programs.
application programming interface (API). A functional interface that is supplied by the operating system or by a separately orderable licensed program that allows an application program that is written in a high-level language to use specific data or functions of the operating system or licensed program.
application requester (AR). See requester.
application server (AS). See server.
AR. Application requester. See requester.
AS. Application server. See server.
ASCII. An encoding scheme that is used to represent strings in many environments, typically on PCs and workstations. Contrast with EBCDIC.
attribute. A characteristic of an entity. For example, in database design, the phone number of an employee is one of that employee's attributes.
authorization ID. A string that can be verified for connection to DB2 and to which a set of privileges are allowed. It can represent an individual, an organizational group, or a function, but DB2 does not determine this representation.
auxiliary index. An index on an auxiliary table in which each index entry refers to a LOB.
auxiliary table. A table that stores columns outside BLOB. Binary large object.
the table in which they are defined. Contrast with base
table. BMP. Batch Message Processing (IMS).
bind. The process by which the output from the DB2 CDRA. Character data representation architecture.
precompiler is converted to a usable control structure
(which is called a package or an application plan). central processor (CP). The part of the computer that
During the process, access paths to the data are contains the sequencing and processing facilities for
selected and some authorization checking is performed. instruction execution, initial program load, and other
automatic bind. (More correctly automatic rebind). machine operations.
A process by which SQL statements are bound
Character Data Representation Architecture
automatically (without a user issuing a BIND
(CDRA). An architecture that is used to achieve
command) when an application process begins
consistent representation, processing, and interchange
execution and the bound application plan or
of string data.
package it requires is not valid.
dynamic bind. A process by which SQL statements
character large object (CLOB). A sequence of bytes
are bound as they are entered.
representing single-byte characters or a mixture of
incremental bind. A process by which SQL
single- and double-byte characters where the size of the
statements are bound during the execution of an
value can be up to 2 GB - 1. In general, character
application process, because they could not be
large object values are used whenever a character
bound during the bind process, and
string might exceed the limits of the VARCHAR type.
VALIDATE(RUN) was specified.
static bind. A process by which SQL statements
character set. A defined set of characters.
are bound after they have been precompiled. All
static SQL statements are prepared for execution at character string. A sequence of bytes that represent
the same time. bit data, single-byte characters, or a mixture of single-
and double-byte characters.
bit data. Data that is character type CHAR or
VARCHAR and is not associated with a coded
character set.
CHECK clause. An extension to the SQL CREATE coded character set. A set of unambiguous rules that
TABLE and SQL ALTER TABLE statements that establish a character set and the one-to-one
specifies a table check constraint. See also table check relationships between the characters of the set and their
constraint. coded representations.
check constraint. See table check constraint. coded character set identifier (CCSID). A 16-bit
number that uniquely identifies a coded representation
check integrity. The condition that exists when each of graphic characters. It designates an encoding
row in a table conforms to the table check constraints scheme identifier and one or more pairs consisting of a
that are defined on that table. Maintaining check character set identifier and an associated code page
integrity requires DB2 to enforce table check constraints identifier.
on operations that add or change data.
code page. A set of assignments of characters to
check pending. A state of a table space or partition code points.
that prevents its use by some utilities and some SQL
statements because of rows that violate referential code point. In CDRA, a unique bit pattern that
constraints, table check constraints, or both. represents a character in a code page.
CICS. Represents (in this publication) one of the collection. A group of packages that have the same
following products: qualifier.
CICS Transaction Server for OS/390: Customer
column function. An SQL operation that derives its
Information Control Center Transaction Server for
result from a collection of values across one or more
OS/390
rows. Contrast with scalar function.
CICS/ESA: Customer Information Control
System/Enterprise Systems Architecture command. A DB2 operator command or a DSN
CICS/MVS: Customer Information Control subcommand. A command is distinct from an SQL
System/Multiple Virtual Storage statement.
CICS attachment facility. A DB2 subcomponent that commit. The operation that ends a unit of work by
uses the MVS subsystem interface (SSI) and cross releasing locks so that the database changes that are
storage linkage to process requests from CICS to DB2 made by that unit of work can be perceived by other
and to coordinate resource commitment. processes.
claim. A notification to DB2 that an object is being commit point. A point in time when data is considered
accessed. Claims prevent drains from occurring until the consistent.
claim is released, which usually occurs at a commit
point. Contrast with drain. committed phase. The second phase of the multi-site
update process that requests all participants to commit
claim class. A specific type of object access that can the effects of the logical unit of work.
be one of the following:
Cursor stability (CS) communications database (CDB). A set of tables in
Repeatable read (RR) the DB2 catalog that are used to establish
Write conversations with remote database management
systems.
claim count. A count of the number of agents that are
accessing an object. comparison operator. A token (such as =, >, <) that
is used to specify a relationship between two values.
clause. In SQL, a distinct part of a statement, such as
a SELECT clause or a WHERE clause. composite key. An ordered set of key columns of the
same table.
client. See requester.
concurrency. The shared use of resources by more
CLIST. Command list. A language for performing TSO than one application process at the same time.
tasks.
connection. In SNA, the existence of a
CLOB. Character large object. communication path between two partner LUs that
allows information to be exchanged (for example, two
clustering index. An index that determines how rows DB2 subsystems that are connected and
are physically ordered in a table space. communicating by way of a conversation).
consistency token. A timestamp that is used to database administrator (DBA). An individual who is
generate the version identifier for an application. See responsible for designing, developing, operating,
also version. safeguarding, maintaining, and using a database.
correlated subquery. A subquery (part of a WHERE database request module (DBRM). A data set
or HAVING clause) that is applied to a row or group of member that is created by the DB2 precompiler and
rows of a table or view that is named in an outer that contains information about SQL statements.
subselect statement. DBRMs are used in the bind process.
correlation name. An identifier that designates a DATABASE 2 Interactive (DB2I). The DB2 facility that
table, a view, or individual rows of a table or view within provides for the execution of SQL statements, DB2
a single SQL statement. It can be defined in any FROM (operator) commands, programmer commands, and
clause or in the first clause of an UPDATE or DELETE utility invocation.
statement.
data currency. The state in which data that is
CP. See central processor (CP). retrieved into a host variable in your program is a copy
of data in the base table.
# created temporary table. A table that holds temporary
# data and is defined with the SQL statement CREATE data definition name (ddname). The name of a data
# GLOBAL TEMPORARY TABLE. Information about definition (DD) statement that corresponds to a data
# created temporary tables is stored in the DB2 catalog, control block containing the same name.
# so this kind of table is persistent and can be shared
# across application processes. Contrast with declared Data Language/I (DL/I). The IMS data manipulation
# temporary table. See also temporary table. language; a common high-level interface between a
user application and IMS.
CS. Cursor stability.
data partition. A VSAM data set that is contained
current data. Data within a host structure that is within a partitioned table space.
current with (identical to) the data within the base table.
data sharing. The ability of two or more DB2
cursor stability (CS). The isolation level that provides subsystems to directly access and change a single set
maximum concurrency without the ability to read of data.
uncommitted data. With cursor stability, a unit of work
holds locks only on its uncommitted changes and on the data sharing group. A collection of one or more DB2
current row of each of its cursors. subsystems that directly access and change the same
data while maintaining data integrity.
date. A three-part value that designates a day, month, # DB2 catalog, so this kind of table is not persistent and
and year. # can only be used by the application process that issued
# the DECLARE statement. Contrast with created
date duration. A decimal integer that represents a # temporary table. See also temporary table.
number of years, months, and days.
default value. A predetermined value, attribute, or
datetime value. A value of the data type DATE, TIME, option that is assumed when no other is explicitly
or TIMESTAMP. specified.
DBCS. Double-byte character set. delete trigger. A trigger that is defined with the
triggering SQL operation DELETE.
DBD. Database descriptor.
delimited identifier. A sequence of characters that
DBMS. Database management system. are enclosed within double quotation marks ("). The
sequence must consist of a letter followed by zero or
DBRM. Database request module.
more characters, each of which is a letter, digit, or the
underscore character (_).
DB2 catalog. Tables that are maintained by DB2 and
that contain descriptions of DB2 objects, such as tables,
delimiter token. A string constant, a delimited
views, and indexes.
identifier, an operator symbol, or any of the special
characters that are shown in syntax diagrams.
DB2 command. An instruction to the DB2 subsystem
allowing a user to start or stop DB2, to display
dependent. An object (row, table, or table space) that
information on current users, to start or stop databases,
has at least one parent. The object is also said to be a
to display information on the status of databases, and
dependent (row, table, or table space) of its parent. See
so on.
parent row, parent table, parent table space.
DB2 for VSE & VM. The IBM DB2 relational database
deterministic function. A user-defined function whose
management system for the VSE and VM operating
result is dependent on the values of the input
systems.
arguments. That is, successive invocations with the
same input values produce the same answer.
DB2I. DATABASE 2 Interactive.
Sometimes referred to as a not-variant function.
Contrast this with an not-deterministic function
DB2I Kanji Feature. The tape that contains the panels
(sometimes called a variant function), which might not
and jobs that allow a site to display DB2I panels in
always produce the same result for the same inputs.
Kanji.
deadlock. Unresolvable contention for the use of a # dimension table. The representation of a dimension in
resource such as a table or an index. # a star schema. Each row in a dimension table
# represents all of the attributes for a particular member
declarations generator (DCLGEN). A subcomponent # of the dimension. See also dimension, star schema, and
of DB2 that generates SQL table declarations and # star join.
COBOL, C, or PL/I data structure declarations that
conform to the table. The declarations are generated direct access storage device (DASD). A device in
from DB2 system catalog information. DCLGEN is also which access time is independent of the location of the
a DSN subcommand. data.
# declared temporary table. A table that holds distinct type. A user-defined data type that is
# temporary data and is defined with the SQL statement internally represented as an existing type (its source
# DECLARE GLOBAL TEMPORARY TABLE. Information type), but is considered to be a separate and
# about declared temporary tables is not stored in the incompatible type for semantic purposes.
distributed data facility (DDF). A set of DB2 statement can change several times during the
components through which DB2 communicates with application program's execution.
another RDBMS.
the selected column is null, a negative value is placed work. See also cursor stability, read stability, repeatable
in the indicator variable. read, and uncommitted read.
indoubt. A status of a unit of recovery. If DB2 fails ISPF. Interactive System Productivity Facility.
after it has finished its phase 1 commit processing and
before it has started phase 2, only the commit ISPF/PDF. Interactive System Productivity
coordinator knows if an individual unit of recovery is to Facility/Program Development Facility.
be committed or rolled back. At emergency restart, if
DB2 lacks the information it needs to make this
decision, the status of the unit of recovery is indoubt J
until DB2 obtains this information from the coordinator.
Japanese Industrial Standards Committee (JISC).
More than one unit of recovery can be indoubt at
An organization that issues standards for coding
restart.
character sets.
indoubt resolution. The process of resolving the
JCL. Job control language.
status of an indoubt logical unit of work to either the
committed or the rollback state.
JIS. Japanese Industrial Standard.
inheritance. The passing of class resources or
job control language (JCL). A control language that
attributes from a parent class downstream in the class
is used to identify a job to an operating system and to
hierarchy to a child class.
describe the job's requirements.
inner join. The result of a join operation that includes
join. A relational operation that allows retrieval of data
only the matched rows of both tables being joined. See
from two or more tables based on matching column
also join.
values. See also equi-join, full outer join, inner join, left
outer join, outer join, and right outer join.
inoperative package. A package that cannot be used
because one or more user-defined functions that the
package depends on were dropped. Such a package
must be explicitly rebound. Contrast with invalid
K
package. KB. Kilobyte (1024 bytes).
insert trigger. A trigger that is defined with the key. A column or an ordered collection of columns
triggering SQL operation INSERT. identified in the description of a table, index, or
referential constraint.
Interactive System Productivity Facility (ISPF). An
IBM licensed program that provides interactive dialog
services. L
internal resource lock manager (IRLM). An MVS labeled duration. A number that represents a duration
subsystem that DB2 uses to control communication and of years, months, days, hours, minutes, seconds, or
database locking. microseconds.
inter-DB2 R/W interest. A property of data in a table large object (LOB). A sequence of bytes representing
space, index, or partition that has been opened by more bit data, single-byte characters, double-byte characters,
than one member of a data sharing group and that has or a mixture of single- and double-byte characters. A
been opened for writing by at least one of those LOB can be up to 2 GB - 1 byte in length. See also
members. BLOB, CLOB, and DBCLOB.
invalid package. A package that depends on an left outer join. The result of a join operation that
object (other than a user-defined function) that is includes the matched rows of both tables that are being
dropped. Such a package is implicitly rebound on joined, and that preserves the unmatched rows of the
invocation. Contrast with inoperative package. first table. See also join.
IRLM. Internal resource lock manager. linkage editor. A computer program for creating load
modules from one or more object modules or load
ISO. International Standards Organization. modules by resolving cross references among the
modules and, if necessary, adjusting addresses.
isolation level. The degree to which a unit of work is
isolated from the updating operations of other units of link-edit. The action of creating a loadable computer
program using a linkage editor.
L-lock. Logical lock. lower in the hierarchy; usually the table space or
partition intent locks are the parent locks.
load module. A program unit that is suitable for
loading into main storage for execution. The output of a lock promotion. The process of changing the size or
linkage editor. mode of a DB2 lock to a higher level.
LOB. Large object. lock size. The amount of data controlled by a DB2
lock on table data; the value can be a row, a page, a
LOB locator. A mechanism that allows an application LOB, a partition, a table, or a table space.
program to manipulate a large object value in the
database system. A LOB locator is a fullword integer lock structure. A coupling facility data structure that is
value that represents a single LOB value. An application composed of a series of lock entries to support shared
program retrieves a LOB locator into a host variable and exclusive locking for logical resources.
and can then apply SQL operations to the associated
LOB value using the locator. logical index partition. The set of all keys that
reference the same data partition.
LOB table space. A table space that contains all the
data for a particular LOB column in the related base logical lock (L-lock). The lock type that transactions
table. use to control intra- and inter-DB2 data concurrency
between transactions. Contrast with P-lock.
local. A way of referring to any object that the local
DB2 subsystem maintains. A local table, for example, is logical unit. An access point through which an
a table that is maintained by the local DB2 subsystem. application program accesses the SNA network in order
Contrast with remote. to communicate with another application program.
local lock. A lock that provides intra-DB2 concurrency logical unit of work (LUW). The processing that a
control, but not inter-DB2 concurrency control; that is, program performs between synchronization points.
its scope is a single DB2.
LU name. Logical unit name, which is the name by
local subsystem. The unique RDBMS to which the which VTAM refers to a node in a network. Contrast
user or application program is directly connected (in the with location name.
case of DB2, by one of the DB2 attachment facilities).
LUW. Logical unit of work.
location name. The name by which DB2 refers to a
particular DB2 subsystem in a network of subsystems.
Contrast with LU name. M
lock. A means of controlling concurrent events or materialize. (1) The process of putting rows from a
access to data. DB2 locking is performed by the IRLM. view or nested table expression into a work file for
additional processing by a query.
lock duration. The interval over which a DB2 lock is (2) The placement of a LOB value into contiguous
held. storage. Because LOB values can be very large, DB2
avoids materializing LOB data until doing so becomes
lock escalation. The promotion of a lock from a row, absolutely necessary.
page, or LOB lock to a table space lock because the
number of page locks that are concurrently held on a menu. A displayed list of available functions for
given resource exceeds a preset limit. selection by the operator. A menu is sometimes called a
menu panel.
locking. The process by which the integrity of data is
ensured. Locking prevents concurrent users from mixed data string. A character string that can contain
accessing inconsistent data. both single-byte and double-byte characters.
lock mode. A representation for the type of access modify locks. An L-lock or P-lock with a MODIFY
that concurrently running programs can have to a attribute. A list of these active locks is kept at all times
resource that a DB2 lock is holding. in the coupling facility lock structure. If the requesting
DB2 fails, that DB2 subsystem's modify locks are
lock object. The resource that is controlled by a DB2 converted to retained locks.
lock.
MPP. Message processing program (IMS).
lock parent. For explicit hierarchical locking, a lock
that is held on a resource that has child locks that are
multi-site update. Distributed relational database to as parallel tasks) that are executing portions of the
processing in which data is updated in more than one query in parallel.
location within a single unit of work.
OS/390. Operating System/390.
MVS. Multiple Virtual Storage.
outer join. The result of a join operation that includes
MVS/ESA. Multiple Virtual Storage/Enterprise Systems the matched rows of both tables that are being joined
Architecture. and preserves some or all of the unmatched rows of the
tables that are being joined. See also join.
not-variant function. See deterministic function. panel. A predefined display image that defines the
locations and characteristics of display fields on a
NUL. In C, a single character that denotes the end of display surface (for example, a menu panel).
the string.
parallel task. The execution unit that is dynamically
null. A special value that indicates the absence of created to process a query in parallel. It is implemented
information. by an MVS service request block.
NUL-terminated host variable. A varying-length host parameter marker. A question mark (?) that appears
variable in which the end of the data is indicated by the in a statement string of a dynamic SQL statement. The
presence of a NUL terminator. question mark can appear where a host variable could
appear if the statement string were a static SQL
NUL terminator. In C, the value that indicates the end statement.
of a string. For character strings, the NUL terminator is
X'00'. parent row. A row whose primary key value is the
foreign key value of a dependent row.
ordinary token. A numeric constant, an ordinary partitioned page set. A partitioned table space or an
identifier, a host identifier, or a keyword. index space. Header pages, space map pages, data
pages, and index pages reference data only within the
originating task. In a parallel group, the primary agent scope of the partition.
that receives data from other execution units (referred
partitioned table space. A table space that is predicate. An element of a search condition that
subdivided into parts (based on index key range), each expresses or implies a comparison operation.
of which can be processed independently by utilities.
prepared SQL statement. A named object that is the
partner logical unit. An access point in the SNA executable form of an SQL statement that has been
network that is connected to the local DB2 subsystem processed by the PREPARE statement.
by way of a VTAM conversation.
primary index. An index that enforces the uniqueness
path. See SQL path. of a primary key.
PCT. Program control table (CICS). primary key. In a relational database, a unique,
nonnull key that is part of the definition of a table. A
piece. A data set of a nonpartitioned page set. table cannot be defined as a parent unless it has a
unique key or primary key.
physical consistency. The state of a page that is not
in a partially changed state. private connection. A communications connection
that is specific to DB2.
physical lock (P-lock). A lock type that DB2 acquires
to provide consistency of data that is cached in different private protocol access. A method of accessing
DB2 subsystems. Physical locks are used only in data distributed data by which you can direct a query to
sharing environments. Contrast with logical lock another DB2 system. Contrast with DRDA access.
(L-lock).
private protocol connection. A DB2 private
physical lock contention. Conflicting states of the connection of the application process. See also private
requesters for a physical lock. See negotiable lock. connection.
plan member. The bound copy of a DBRM that is query block. The part of a query that is represented
identified in the member clause. by one of the FROM clauses. Each FROM clause can
have multiple query blocks, depending on DB2's internal
plan name. The name of an application plan. processing of the query.
that issues the same query more than once might read repeatable read (RR). The isolation level that provides
additional rows that were inserted and committed by a maximum protection from other executing application
concurrently executing application process. programs. When an application program executes with
repeatable read protection, rows referenced by the
rebind. The creation of a new application plan for an program cannot be changed by other programs until the
application program that has been bound previously. If, program reaches a commit point.
for example, you have added an index for a table that
your application accesses, you must rebind the request commit. The vote that is submitted to the
application in order to take advantage of that index. prepare phase if the participant has modified data and
is prepared to commit or roll back.
record. The storage representation of a row or other
data. requester. The source of a request to a remote
RDBMS, the system that requests the data. A requester
recovery. The process of rebuilding databases after a is sometimes called an application requester (AR).
system failure.
resource control table (RCT). A construct of the
referential constraint. The requirement that nonnull CICS attachment facility, created by site-provided macro
values of a designated foreign key are valid only if they parameters, that defines authorization and access
equal values of the primary key of a designated table. attributes for transactions or transaction groups.
referential integrity. The condition that exists when all resource definition online. A CICS feature that you
intended references from data in one column of a table use to define CICS resources online without assembling
to data in another column of the same or a different tables.
table are valid. Maintaining referential integrity requires
that DB2 enforce referential constraints on all LOAD, resource limit facility (RLF). A portion of DB2 code
RECOVER, INSERT, UPDATE, and DELETE that prevents dynamic manipulative SQL statements
operations. from exceeding specified time limits. The resource limit
facility is sometimes called the governor.
relational database (RDB). A database that can be
perceived as a set of tables and manipulated in result set. The set of rows that a stored procedure
accordance with the relational model of data. returns to a client application.
relational database management system (RDBMS). result set locator. A 4-byte value that DB2 uses to
A collection of hardware and software that organizes uniquely identify a query result set that a stored
and provides access to a relational database. procedure returns.
relational database name (RDBNAM). A unique result table. The set of rows that are specified by a
identifier for an RDBMS within a network. In DB2, this SELECT statement.
must be the value in the LOCATION column of table
SYSIBM.LOCATIONS in the CDB. DB2 publications retained lock. A MODIFY lock that a DB2 subsystem
refer to the name of another RDBMS as a LOCATION was holding at the time of a subsystem failure. The lock
value or a location name. is retained in the coupling facility lock structure across a
DB2 failure.
remote. Any object that is maintained by a remote
DB2 subsystem (that is, by a DB2 subsystem other than right outer join. The result of a join operation that
the local one). A remote view, for example, is a view includes the matched rows of both tables that are being
that is maintained by a remote DB2 subsystem. joined and preserves the unmatched rows of the second
Contrast with local. join operand. See also join.
remote subsystem. Any RDBMS, except the local RLF. Resource limit facility.
subsystem, with which the user or application can
communicate. The subsystem need not be remote in rollback. The process of restoring data changed by
any physical sense, and might even operate on the SQL statements to the state at its last commit point. All
same processor under the same MVS system. locks are freed. Contrast with commit.
reoptimization. The DB2 process of reconsidering the row. The horizontal component of a table. A row
access path of an SQL statement at run time; during consists of a sequence of values, one for each column
reoptimization, DB2 uses the values of host variables, of the table.
parameter markers, or special registers.
ROWID. Row identifier.
row identifier (ROWID). A value that uniquely a server is the target for a request from a remote
identifies a row. This value is stored with the row and RDBMS and is the RDBMS that provides the data. A
never changes. server is sometimes also called an application server
(AS).
row lock. A lock on a single row of data.
shared lock. A lock that prevents concurrently
row trigger. A trigger that is defined with the trigger executing application processes from changing data, but
granularity FOR EACH ROW. not from reading data. Contrast with exclusive lock.
server. A functional unit that provides services to one SQL authorization ID (SQL ID). The authorization ID
or more clients over a network. In the DB2 environment, that is used for checking dynamic SQL statements in
some situations.
SQL communication area (SQLCA). A structure that change (although values of host variables that are
is used to provide an application program with specified by the statement might change).
information about the execution of its SQL statements.
storage group. A named set of DASD volumes on
SQL descriptor area (SQLDA). A structure that which DB2 data can be stored.
describes input variables, output variables, or the
columns of a result table. stored procedure. A user-written application program,
that can be invoked through the use of the SQL CALL
SQL escape character. The symbol that is used to statement.
enclose an SQL delimited identifier. This symbol is the
double quotation mark ("). See also escape character. string. See character string or graphic string.
SQL ID. SQL authorization ID. strong typing. A process that guarantees that only
user-defined functions and operations that are defined
SQL path. An ordered list of schema names that are on a distinct type can be applied to that type. For
used in the resolution of unqualified references to example, you cannot directly compare two currency
user-defined functions, distinct types, and stored types, such as Canadian dollars and US dollars. But
procedures. In dynamic SQL, the current path is found you can provide a user-defined function to convert one
in the CURRENT PATH special register. In static SQL, currency to the other and then do the comparison.
it is defined in the PATH bind option.
Structured Query Language (SQL). A standardized
SQL Processor Using File Input (SPUFI). SQL language for defining and manipulating data in a
Processor Using File Input. A facility of the TSO relational database.
attachment subcomponent that enables the DB2I user
to execute SQL statements without embedding them in subquery. A SELECT statement within the WHERE or
an application program. HAVING clause of another SQL statement; a nested
SQL statement.
SQL return code. Either SQLCODE or SQLSTATE.
subselect. That form of a query that does not include
SQLCA. SQL communication area. ORDER BY clause, UPDATE clause, or UNION
operators.
SQLDA. SQL descriptor area.
substitution character. A unique character that is
SQL/DS. Structured Query Language/Data System. substituted during character conversion for any
This product is now obsolete and has been replaced by characters in the source program that do not have a
DB2 for VSE & VM. match in the target coding representation.
# star join. A method of joining a dimension column of a subsystem. A distinct instance of a relational
# fact table to the key column of the corresponding database management system (RDBMS).
# dimension table. See also join, dimension, and star
# schema. sync point. See commit point.
# star schema. The combination of a fact table (which synonym. In SQL, an alternative name for a table or
# contains most of the data) and a number of dimension view. Synonyms can only be used to refer to objects at
# tables. See also star join, dimension, and dimension the subsystem in which the synonym is defined.
# table.
Sysplex query parallelism. Parallel execution of a
statement string. For a dynamic SQL statement, the single query that is accomplished by using multiple
character string form of the statement. tasks on more than one DB2 subsystem. See also
query CP parallelism.
statement trigger. A trigger that is defined with the
trigger granularity FOR EACH STATEMENT. system administrator. The person at a computer
installation who designs, controls, and manages the use
static SQL. SQL statements, embedded within a of the computer system.
program, that are prepared during the program
preparation process (before the program is executed). system conversation. The conversation that two DB2
After being prepared, the SQL statement does not subsystems must establish to process system
messages before any distributed processing can begin.
thread. The DB2 structure that describes an trigger cascading. The process that occurs when the
application's connection, traces its progress, processes triggered action of a trigger causes the activation of
resource functions, and delimits its accessibility to DB2 another trigger.
resources and services. Most DB2 functions execute
under a thread structure. See also allied thread and triggered action. The SQL logic that is performed
database access thread. when a trigger is activated. The triggered action
consists of an optional triggered action condition and a
three-part name. The full name of a table, view, or set of triggered SQL statements that are executed only
alias. It consists of a location name, authorization ID, if the condition evaluates to true.
and an object name, separated by a period.
triggered action condition. An optional part of the
time. A three-part value that designates a time of day triggered action. This Boolean condition appears as a
in hours, minutes, and seconds. WHEN clause and specifies a condition that DB2
evaluates to determine if the triggered SQL statements
time duration. A decimal integer that represents a should be executed.
number of hours, minutes, and seconds.
triggered SQL statements. The set of SQL
statements that is executed when a trigger is activated
trigger package. A package that is created when a CREATE TRIGGER statement is executed. The package is executed when the trigger is activated.

triggering event. The specified operation in a trigger definition that causes the activation of that trigger. The triggering event is comprised of a triggering operation (INSERT, UPDATE, or DELETE) and a triggering table on which the operation is performed.

triggering SQL operation. The SQL operation that causes a trigger to be activated when performed on the triggering table.

triggering table. The table for which a trigger is created. When the defined triggering event occurs on this table, the trigger is activated.

TSO. Time-Sharing Option.

TSO attachment facility. A DB2 facility consisting of the DSN command processor and DB2I. Applications that are not written for the CICS or IMS environments can run under the TSO attachment facility.

typed parameter marker. A parameter marker that is specified along with its target data type. It has the general form:

   CAST(? AS data-type)
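For example, a program might prepare a statement string such as the following one (the table and columns are assumed to match the sample employee table); the first parameter marker is typed and the second is untyped:

   UPDATE EMP
     SET SALARY = CAST(? AS DECIMAL(9,2))   -- typed parameter marker
     WHERE EMPNO = ?                        -- untyped parameter marker

The CAST specification gives DB2 the data type of the first marker before the statement executes; DB2 determines the type of the second marker from the column with which it is compared.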
type 1 indexes. Indexes that were created by a release of DB2 before DB2 Version 4 or that are specified as type 1 indexes in Version 4. Contrast with type 2 indexes. As of Version 6, type 1 indexes are no longer supported.

type 2 indexes. Indexes that are created on a release of DB2 after Version 5 or that are specified as type 2 indexes in Version 4 or Version 5.

variant function. See not-deterministic function.

varying-length string. A character or graphic string whose length varies within set limits. Contrast with fixed-length string.

version. A member of a set of similar programs, DBRMs, packages, or LOBs.

   A version of a program is the source code that is produced by precompiling the program. The program version is identified by the program name and a timestamp (consistency token).

   A version of a DBRM is the DBRM that is produced by precompiling a program. The DBRM version is identified by the same program name and timestamp as a corresponding program version.

   A version of a package is the result of binding a DBRM within a particular database system. The package version is identified by the same program name and consistency token as the DBRM.

   A version of a LOB is a copy of a LOB value at a point in time. The version number for a LOB is stored in the auxiliary index entry for the LOB.

view. An alternative representation of data from one or more tables. A view can include all or some of the columns that are contained in tables on which it is defined.

Virtual Storage Access Method (VSAM). An access method for direct or sequential processing of fixed- and varying-length records on direct access devices. The records in a VSAM data set or file can be organized in logical sequence by a key field (key sequence), in the physical sequence in which they are written on the data set or file (entry-sequence), or by relative-record number.

Virtual Telecommunications Access Method (VTAM). An IBM licensed program that controls communication and the flow of data in an SNA network.
• DB2 Messages and Codes, GC26-9011
• DB2 Master Index, SC26-9010
• DB2 Reference for Remote DRDA Requesters and Servers, SC26-9012
• DB2 Reference Summary, SX26-3844
• DB2 Release Planning Guide, SC26-9013
• DB2 SQL Reference, SC26-9014
• DB2 Text Extender Administration and Programming, SC26-9651
• DB2 Utility Guide and Reference, SC26-9015
• DB2 What's New? GC26-9017
• DB2 Program Directory, GI10-8182

DB2 Administration Tool
• DB2 Administration Tool for OS/390 User's Guide, SC26-9847

DB2 Buffer Pool Tool
• DB2 Buffer Pool Tool for OS/390 User's Guide and Reference, SC26-9306

DB2 DataPropagator
• DB2 Replication Guide and Reference, SC26-9642

• DB2 PM for OS/390 Report Reference Volume 1, SC26-9164
• DB2 PM for OS/390 Report Reference Volume 2, SC26-9165
• DB2 PM for OS/390 Using the Workstation Online Monitor, SC26-9170
• DB2 PM for OS/390 Program Directory, GI10-8183

Query Management Facility
• Query Management Facility: Developing QMF Applications, SC26-9579
• Query Management Facility: Getting Started with QMF on Windows, SC26-9582
• Query Management Facility: High Performance Option User's Guide for OS/390, SC26-9581
• Query Management Facility: Installing and Managing QMF on OS/390, GC26-9575
• Query Management Facility: Installing and Managing QMF on Windows, GC26-9583
• Query Management Facility: Introducing QMF, GC26-9576
• Query Management Facility: Messages and Codes, GC26-9580
• Query Management Facility: Reference, SC26-9577
• Query Management Facility: Using QMF, SC26-9578

AS/400
• DB2 for OS/400 SQL Programming, SC41-4611
• DB2 for OS/400 SQL Reference, SC41-4612

BASIC
• IBM BASIC/MVS Language Reference, GC26-4026
• IBM BASIC/MVS Programming Guide, SC26-4027

BookManager READ/MVS
• BookManager READ/MVS V1R3: Installation Planning & Customization, SC38-2035

C/370
• IBM SAA AD/Cycle C/370 Programming Guide, SC09-1841
• IBM SAA AD/Cycle C/370 Programming Guide for Language Environment/370, SC09-1840
• IBM SAA AD/Cycle C/370 User's Guide, SC09-1763
• SAA CPI C Reference, SC09-1308

Character Data Representation Architecture
• Character Data Representation Architecture Overview, GC09-2207
• Character Data Representation Architecture Reference and Registry, SC09-2190

CICS/MVS
• CICS/MVS Application Programmer's Reference, SC33-0512
• CICS/MVS Facilities and Planning Guide, SC33-0504
• CICS/MVS Installation Guide, SC33-0506
• CICS/MVS Operations Guide, SC33-0510
• CICS/MVS Problem Determination Guide, SC33-0516
• CICS/MVS Resource Definition (Macro), SC33-0509
• CICS/MVS Resource Definition (Online), SC33-0508

IBM C/C++ for MVS/ESA
• IBM C/C++ for MVS/ESA Library Reference, SC09-1995
• IBM C/C++ for MVS/ESA Programming Guide, SC09-1994

IBM COBOL
• IBM COBOL Language Reference, SC26-4769
• IBM COBOL for MVS & VM Programming Guide, SC26-4767

Conversion Guide
• IMS-DB and DB2 Migration and Coexistence Guide, GH21-1083
Parallel Sysplex Library
• OS/390 Parallel Sysplex Application Migration, GC28-1863
• System/390 MVS Sysplex Hardware and Software Migration, GC28-1862
• OS/390 Parallel Sysplex Overview: An Introduction to Data Sharing and Parallelism, GC28-1860
• OS/390 Parallel Sysplex Systems Management, GC28-1861
• OS/390 Parallel Sysplex Test Report, GC28-1963
• System/390 9672/9674 System Overview, GA22-7148

ICSF/MVS
• ICSF/MVS General Information, GC23-0093

IMS/ESA
• IMS Batch Terminal Simulator General Information, GH20-5522
• IMS/ESA Administration Guide: System, SC26-8013
• IMS/ESA Administration Guide: Transaction Manager, SC26-8731
• IMS/ESA Application Programming: Database Manager, SC26-8727
• IMS/ESA Application Programming: Design Guide, SC26-8016
• IMS/ESA Application Programming: Transaction Manager, SC26-8729
• IMS/ESA Customization Guide, SC26-8020
• IMS/ESA Installation Volume 1: Installation and Verification, SC26-8023
• IMS/ESA Installation Volume 2: System Definition and Tailoring, SC26-8024
• IMS/ESA Messages and Codes, SC26-8028
• IMS/ESA Operator's Reference, SC26-8030
• IMS/ESA Utilities Reference: System, SC26-8035

ISPF
• ISPF V4 Dialog Developer's Guide and Reference, SC34-4486
• ISPF V4 Messages and Codes, SC34-4450
• ISPF V4 Planning and Customizing, SC34-4443
• ISPF V4 User's Guide, SC34-4484

Language Environment
• Debug Tool User's Guide and Reference, SC09-2137

National Language Support
• National Language Support Reference Volume 2, SE09-8002

NetView
• NetView Installation and Administration Guide, SC31-8043
• NetView User's Guide, SC31-8056

ODBC
• Microsoft ODBC 3.0 Programmer's Reference and SDK Guide, Microsoft Press, ISBN 1-55615-658-8

OS/390
• OS/390 C/C++ Programming Guide, SC09-2362
• OS/390 C/C++ Run-Time Library Reference, SC28-1663
• OS/390 C/C++ User's Guide, SC09-2361
• OS/390 eNetwork Communications Server: IP Configuration, SC31-8513
• OS/390 Hardware Configuration Definition Planning, GC28-1750
• OS/390 Information Roadmap, GC28-1727
• OS/390 Introduction and Release Guide, GC28-1725
• OS/390 JES2 Initialization and Tuning Guide, SC28-1791
• OS/390 JES3 Initialization and Tuning Guide, SC28-1802
• OS/390 Language Environment for OS/390 & VM Concepts Guide, GC28-1945
• OS/390 Language Environment for OS/390 & VM Customization, SC28-1941
• OS/390 Language Environment for OS/390 & VM Debugging Guide, SC28-1942
• OS/390 Language Environment for OS/390 & VM Programming Guide, SC28-1939
• OS/390 Language Environment for OS/390 & VM Programming Reference, SC28-1940
• OS/390 MVS Diagnosis: Procedures, LY28-1082
• OS/390 MVS Diagnosis: Reference, SY28-1084
• OS/390 MVS Diagnosis: Tools and Service Aids, LY28-1085
• OS/390 MVS Initialization and Tuning Guide, SC28-1751
• OS/390 MVS Initialization and Tuning Reference, SC28-1752
• OS/390 MVS Installation Exits, SC28-1753
• OS/390 MVS JCL Reference, GC28-1757
• OS/390 MVS JCL User's Guide, GC28-1758
• OS/390 MVS Planning: Global Resource Serialization, GC28-1759
• OS/390 MVS Planning: Operations, GC28-1760
• OS/390 MVS Planning: Workload Management, GC28-1761
• OS/390 MVS Programming: Assembler Services Guide, GC28-1762
• OS/390 MVS Programming: Assembler Services Reference, GC28-1910
• OS/390 MVS Programming: Authorized Assembler Services Guide, GC28-1763
• OS/390 MVS Programming: Authorized Assembler Services Reference, Volumes 1-4, GC28-1764, GC28-1765, GC28-1766, GC28-1767
• OS/390 MVS Programming: Callable Services for High-Level Languages, GC28-1768
• OS/390 MVS Programming: Extended Addressability Guide, GC28-1769
• IBM TCP/IP for MVS: Planning and Migration Guide, SC31-7189

VS COBOL II
• VS COBOL II Application Programming Guide for MVS and CMS, SC26-4045
• VS COBOL II Application Programming: Language Reference, GC26-4047
• VS COBOL II Installation and Customization for MVS, SC26-4048

VS FORTRAN
• VS FORTRAN Version 2: Language and Library Reference, SC26-4221
• VS FORTRAN Version 2: Programming Guide for CMS and MVS, SC26-4222

VTAM
• Planning for NetView, NCP, and VTAM, SC31-8063
• VTAM for MVS/ESA Diagnosis, LY43-0069
• VTAM for MVS/ESA Messages and Codes, SC31-6546
• VTAM for MVS/ESA Network Implementation Guide, SC31-6548
• VTAM for MVS/ESA Operation, SC31-6549
• VTAM for MVS/ESA Programming, SC31-6550
• VTAM for MVS/ESA Programming for LU 6.2, SC31-6551
• VTAM for MVS/ESA Resource Definition Reference, SC31-6552
call attachment facility (CAF) 745 CICS (continued)
See also CAF (call attachment facility) logical unit of work 368
CALL DSNALI statement 753, 767 operating
CALL DSNRLI statement 786 running a program 469
CALL statement system failure 369
SQL procedure 566 planning
Cartesian join 706 environment 434
CASE expression 30 programming
CASE statement DFHEIENT macro 130
SQL procedure 566 sample applications 851, 854
casting SYNCPOINT command 368
in user-defined function invocation 303 storage handling
catalog statistics assembler 141
influencing access paths 676 C 160
catalog tables COBOL 181
accessing 38 PL/I 206
CCSID (coded character set identifier) thread
SQLDA 526 reuse 817
CDSSRDEF subsystem parameter 734 unit of work 368
character host variables CICS attachment facility 817
assembler 132 See also CICS
C 145 claim
COBOL 167 effect of cursor WITH HOLD 356
FORTRAN 186 CLOSE
PL/I 197 connection function of CAF
character string description 750
comparative operators 13 program example 771
LIKE predicate of WHERE clause 14 syntax 761
literals 94 usage 761
mixed data 6 statement
width of column in results 84, 87 description 113
check WHENEVER NOT FOUND clause 519, 531
effects on DELETE 59 cluster ratio
check data integrity effects
CREATE statement 45 table space scan 696
INSERT statement 54 with list prefetch 717
UPDATE statement 59 COALESCE function 64
checkpoint COBOL application program
calls 369, 372 character host variables
frequency 374 fixed-length strings 167
CHKP call, IMS 369 varying-length strings 168
CICS coding SQL statements 93, 160
attachment facility compiling 429
controlling from applications 817 data declarations 115
programming considerations 817 data type compatibility 178
DSNTIAC subroutine DB2 precompiler option defaults 415
assembler 141 DECLARE statement 163
C 160 declaring a variable 176
COBOL 181 dynamic SQL 533
PL/I 206 FILLER entry name 177
facilities host structure 100
command language translator 416 host variable
control areas 469 use of hyphens 165
EDF (execution diagnostic facility) 475 indicator variables 178
language interface module (DSNCLI) naming convention 164
use in link-editing an application 430 null values 99
connection (continued) CURRENT SQLID special register
function of RRSAF description 58
AUTH SIGNON 795 use in test 469
CREATE THREAD 813 cursor
description 782 ambiguous 355
IDENTIFY 788, 813 closing
sample scenarios 809 CLOSE statement 113
SIGNON 792, 813 declaring 109
summary of behavior 807 deleting a current row 112
TERMINATE IDENTIFY 804, 813 description 107
TERMINATE THREAD 803, 813 effect of abend on position 113
TRANSLATE 806 end of data 110
constant example 108
assembler 130 maintaining position 113
COBOL 165 open state 113
syntax opening
C 155 OPEN statement 110
FORTRAN 189 retrieving a row of data 111
constraint 44 updating a current row 111
See also table check constraint WITH HOLD 113
CONTINUE claims 356
clause of WHENEVER statement 103 locks 356
CONTINUE handler
SQL procedure 568
copying D
tables from remote locations 400 data
correlated reference adding to the end of a table 824
correlation name associated with WHERE clause 11
example 76 currency 400
correlated subqueries 663 effect of locks on integrity 332
See also subquery improving access 679
COUNT function 22 indoubt state 371
CREATE TABLE statement nontabular storing 825
use 41 retrieval using SELECT * 824
CREATE THREAD retrieving a set of rows 111
connection function of RRSAF retrieving large volumes 823
program example 813 scrolling backward through 819
created temporary table 46 security and integrity 367
created temporary tables understanding access 679
table space scan 696 updating during retrieval 823
CS (cursor stability) updating previously retrieved data 823
page and row locking 351 data security and integrity 367
CURRENDATA option of BIND data space
plan and package options differ 355 LOB materialization 245
CURRENT DEGREE field of panel DSNTIP4 734 data type
CURRENT DEGREE special register compatibility
changing subsystem default 734 assembler and SQL 137
CURRENT PACKAGESET special register assembler application program 138
dynamic plan switching 428 C and SQL 151
identify package collection 421 COBOL and SQL 174, 178
CURRENT RULES special register FORTRAN 190
usage 426 FORTRAN and SQL 189
CURRENT SERVER special register PL/I and SQL 203
description 421 REXX and SQL 212
in application program 400 equivalent
FORTRAN 187
PL/I 200
DELETE (continued) distributed data (continued)
statement (continued) moving from DB2 private protocol access to DRDA
correlated subqueries 77 access 401
description 59 performance considerations 393
rules 59 planning
subqueries 73 access by a program 379, 400
when to avoid 60 program preparation 389
WHERE CURRENT clause 112 programming
deleting coding with DB2 private protocol access 382
current rows 112 coding with DRDA access 382
data 59 retrieving from DB2 for OS/390 ASCII tables 400
every row from a table 60 terminology 379
parent key 59 transmitting mixed data 400
rows from a table 59 division by zero 103
delimiter DL/I
SQL statements 95 batch
string 410 application programming 486
department sample table checkpoint ID 494
creating 43 DB2 requirements 486
description 830 DDITV02 input data set 488
dependent DSNMTV01 module 491
table 58 features 485
DESCRIBE CURSOR statement SSM= parameter 491
usage 613 submitting an application 491
DESCRIBE INPUT statement TERM call 368
usage 516 DRDA access
DESCRIBE PROCEDURE statement bind options 387, 388
usage 612 coding an application 382
DESCRIBE statement compared to DB2 private protocol access 380
column labels 528 example 380, 383
INTO clauses 522, 524 mixed environment 945
DFHEIENT macro 130 planning 379, 380
DFSLI000 (IMS language interface module) 430 precompiler options 386
direct row access 690 preparing programs 386
DISCONNECT programming hints 385
connection function of CAF releasing connections 384
description 750 sample program 897
program example 771 using 383
syntax 763 dropping
usage 763 tables 50
displaying DSN applications, running with CAF 748
calculated values 19 DSN command of TSO
lists command processor
table columns 38 services lost under CAF 748
tables 38 return code processing 432
DISTINCT subcommands
clause of SELECT statement 9 See also individual subcommands
distinct type RUN 431
description 309 DSN_FUNCTION_TABLE 302
distributed data DSN_STATEMNT_TABLE table
choosing an access method 380 column descriptions 726
copying a remote table 400 DSN8BC3 sample program 180
identifying server at run time 400 DSN8BD3 sample program 159
improving efficiency 391 DSN8BE3 sample program 159
LOB performance 392 DSN8BF3 sample program 191
maintaining data currency 400
EDIT panel, SPUFI (continued)
SQL statements 85 F
employee photo and resume sample table 836 FETCH statement
employee sample table 832 host variables 519
employee to project activity sample table 839 scrolling through data 819
end of cursors 110 USING DESCRIPTOR clause 530
end of data 110 field procedure
END-EXEC delimiter 95 changing collating sequence 34
error filter factor
arithmetic expression 103 predicate 646
handling 102 fixed-length character string
messages generated by precompiler 479, 480 assembler 132
return codes 101 C 157
run 478 value in CREATE TABLE statement 42
escape character FLAG option
SQL 410 precompiler 410
ESTAE routine in CAF (call attachment facility) 768 flags, resetting 165
exceptional condition handling 102 FLOAT
EXCLUSIVE option of precompiler 411
lock mode FLOAT option
effect on resources 343 precompiler 411
LOB 363 FOLD
page 342 value for C and CPP 411
row 342 value of precompiler option HOST 411
table, partition, and table space 342 FOR FETCH ONLY clause 396
EXEC SQL delimiter 95 FOR READ ONLY clause 396
EXECUTE IMMEDIATE statement See also FOR FETCH ONLY clause
dynamic execution 514 FOR UPDATE OF clause
EXECUTE statement example 109
dynamic execution 516 used to update a column 109
parameter types 531 format
USING DESCRIPTOR clause 532 SELECT statement results 87
EXISTS predicate 74 SQL in input data set 84
EXIT handler FORTRAN application program
SQL procedure 569 @PROCESS statement 185
exit routine assignment rules, numeric 189
abend recovery with CAF 768 byte data type 185
attention processing with CAF 768 character host variable 185, 186
EXPLAIN coding SQL statements 182
option comment lines 184
use during automatic rebind 329 constants, syntax differences 189
report of outer join 704 data types 187
statement declaring
description 679 tables 184
index scans 689 variables 189
interpreting output 687 views 184
investigating SQL processing 679 description of SQLCA 182
EXPLAIN PROCESSING field of panel DSNTIPO host variable 185
overhead 686 including code 184
expression indicator variables 190
columns 19 margins for SQL statements 184
results 19 naming convention 184
parallel option 185
precompiler option defaults 415
sequence numbers 184
SQL INCLUDE statement 185
indicator variable (continued)
I REXX 215
I/O processing setting null values in a COBOL program 99
parallel structures in a COBOL program 101
queries 733 INNER JOIN 62
IDENTIFY See also join operation
connection function of RRSAF (Recoverable example 62
Resource Manager Services attachment facility) input data set DDITV02 488
program example 813 INSERT processing
syntax 788 effect of MEMBER CLUSTER option of CREATE
usage 788 TABLESPACE 336
identity column INSERT statement
inserting in table 819 description 52
IF statement several rows 54
SQL procedure 566 subqueries 73
IKJEFT01 terminal monitor program in TSO 432 VALUES clause 52
IMS with identity column 56
application programs 372 with ROWID column 55
batch 375 INTENT EXCLUSIVE lock mode 343, 363
checkpoint calls 369 INTENT SHARE lock mode 343, 363
CHKP call 369 LOB table space 363
commit point 370 Interactive System Productivity Facility (ISPF) 79
error handling 371 See also ISPF (Interactive System Productivity
language interface module DFSLI000 Facility)
link-editing 430 internal resource lock manager (IRLM) 491
planning See also IRLM (internal resource lock manager)
environment 434 IRLM (internal resource lock manager)
recovery 369 description 491
restrictions 370 ISOLATION
ROLB call 369, 374 option of BIND PLAN subcommand
ROLL call 369, 374 effects on locks 349
SYNC call 369 isolation level
unit of work 369 control by SQL statement
IN example 357
clause in subqueries 74 recommendations 338
predicate 18 REXX 216
INCLUDE statement ISPF (Interactive System Productivity Facility)
DCLGEN output 119 browse 81, 86
index DB2 uses dialog management 79
access methods DB2I Menu 442
access path selection 697 precompiling under 441
by nonmatching index 699 preparation
IN-list index scan 699 Program Preparation panel 443
matching index columns 689 programming 741, 744
matching index description 698 scroll command 87
multiple 700 ISPLINK SELECT services 743
one-fetch index scan 701
locking 345
indicator variable J
array declaration in DCLGEN 118 JCL (job control language)
assembler application program 138 batch backout example 493
C 157 precompilation procedures 435
COBOL 178 starting a TSO batch application 432
description 99 join operation
FORTRAN 190 Cartesian 706
PL/I 141, 204 description 703
lock (continued) message (continued)
options affecting (continued) RRSAF errors 807
read stability 350 MIN function 22
repeatable read 349 mixed data
uncommitted read 351 description 6
page locks transmitting to remote location 400
CS, RS, and RR compared 349 mode of a lock 342
description 339 multiple-mode IMS programs 372
recommendations for concurrency 335 MVS
size 31-bit addressing 430, 467
page 339
partition 339
table 339 N
table space 339 naming convention
summary 360 assembler 129
unit of work 367, 368, 369 C 144
LOCK TABLE statement COBOL 164
effect on auxiliary tables 364 FORTRAN 184
effect on locks 358 PL/I 194
LOCKPART clause of CREATE and ALTER REXX 210
TABLESPACE tables you create 42
effect on locking 340 nested table expression 67
LOCKSIZE clause processing 721
recommendations 336 NODYNAM option of COBOL 164
logical unit of work NOFOR option
CICS description 368 precompiler 412
LOOP statement NOGRAPHIC option of precompiler 412
SQL procedure 566 noncorrelated subqueries 664
See also subquery
nonsegmented table space
M scan 697
mapping macro nontabular data storage 825
assembler applications 141 NOOPTIONS option
MARGINS option of precompiler 411 precompiler 412
mass delete NOSOURCE option of precompiler 412
contends with UR process 352 NOT FOUND clause of WHENEVER statement 103
mass insert 54 NOT NULL clause
materialization CREATE TABLE statement
LOBs 245 using 42
outer join 705 NOT operator of WHERE clause 13
views and nested table expressions 722 notices, legal 955
MAX function 22 NOXREF option of precompiler 412
MEMBER CLUSTER option of CREATE NUL character in C 144
TABLESPACE 336 NULL
merge processing attribute of UPDATE statement 58
views or nested table expressions 722 in REXX 210
message option of WHERE clause 12
analyzing 479 pointer in C 144
CAF errors 766 null value
obtaining text COBOL programs 99
assembler 139 description 12
C 158 numeric
COBOL 180 assignments 189
description 103 data
FORTRAN 191 do not use with LIKE 14
PL/I 205 width of column in results 84, 87
PL/I application program (continued) predicate (continued)
coding SQL statements 192 generation 651
comments 193 impact on access paths 636, 667
considerations 195 indexable 638
data types 200, 203 join 637
declaring tables 193 local 637
declaring views 193 modification 651
graphic host variables 197 properties 636
host variable quantified 71
declaring 196 stage 1 (sargable) 638
numeric 196 stage 2
using 196 evaluated 638
indicator variables 204 influencing creation 672
naming convention 194 subquery 637
sequence numbers 194 predictive governing
SQLCA, defining 192 in a distributed environment 512
SQLDA, defining 192 with DEFER(PREPARE) 512
statement labels 194 writing an application for 512
variable, declaration 202 PRELINK utility 446
WHENEVER statement 194 PREPARE statement
PLAN_TABLE table dynamic execution 515
column descriptions 681 host variable 518
report of outer join 704 INTO clause 522
planning prepared SQL statement
accessing distributed data 379, 400 caching 509
binding 322, 329 statements allowed 945
concurrency 329, 365 PRIMARY_ACCESSTYPE column of
precompiling 322 PLAN_TABLE 690
recovery 367 problem determination
precompiler guidelines 478
binding on another system 409 procedure, stored 535
description 407 See also stored procedure
diagnostics 408 processing
escape character 410 SQL statements 85
functions 407 program preparation 405
input 407 See also application program
maximum input to 407 program problems checklist
option descriptions 409 documenting error situations 472
options error messages 473
CONNECT 386 project activity sample table 838
defaults 414 project application 849
DRDA access 386 description 849
SQL 386 project sample table 837
output 408
planning for 322
precompiling programs 407 Q
starting query parallelism 731
dynamically 437 QUOTE
JCL for procedures 435 option of precompiler 412
submitting jobs QUOTESQL option
DB2I panels 450 precompiler 412
ISPF panels 442, 443
predicate
description 636
R
range of values, retrieving 17
filter factor 646
general rules 640
WHERE clause 11
REXX application RRSAF (Recoverable Resource Manager Services
running 435 attachment facility) (continued)
REXX procedure function descriptions 787
coding SQL statements 207 load module structure 782
error handling 210 programming language 780
indicator variables 215 register conventions 787
isolation level 216 restrictions 779
naming convention 210 return codes
specifying input data type 213 AUTH SIGNON 795
statement label 210 CONNECT 788
RIB (release information block) SIGNON 792
address in CALL DSNALI parameter list 753 TERMINATE IDENTIFY 804
CONNECT connection function of CAF 756 TERMINATE THREAD 803
CONNECT connection function of RRSAF 788 TRANSLATE 806
program example 771 run environment 781
RID (record identifier) pool transactions
use in list prefetch 717 using global transactions 338
RIGHT OUTER JOIN 65 RS (read stability)
See also join operation page and row locking (figure) 350
example 65 RUN
RMODE link-edit option 467 subcommand of DSN
ROLB call, IMS CICS restriction 417
advantages over ROLL 375 return code processing 432
ends unit of work 369 running a program in TSO foreground 431
in batch programs 374 run-time libraries, DB2I
ROLL call, IMS background processing 449
ends unit of work 369 EDITJCL processing 449
in batch programs 374 running application program
rollback errors 478
option of CICS SYNCPOINT statement 368 running application programs
using RRSAF 781 CICS 434
ROLLBACK statement IMS 434
description 81
error in IMS 485
unit of work in TSO 367 S
row sample application
selecting with WHERE clause 11 call attachment facility 746
updating 57 DB2 private protocol access 905
updating current 111 DRDA access 897
updating large volumes 823 dynamic SQL 879
ROWID environments 851
coding example 692 languages 851
index-only access 690 LOB 850
inserting in table 819 organization 849
RR (repeatable read) phone 849
how locks are held (figure) 349 programs 851
page and row locking 349 project 849
RRS global transaction Recoverable Resource Manager Services attachment
RRSAF support 793, 797, 800 facility 780
RRSAF (Recoverable Resource Manager Services static SQL 879
attachment facility) stored procedure 850
application program structure of 845
examples 812 use 851
preparation 780 user-defined function 850
connecting to DB2 813 sample program
description 779 DSN8BC3 180
special register SQL (Structured Query Language) (continued)
behavior in stored procedures 553 string delimiter 449
behavior in user-defined functions 284 structures 96
CURRENT DEGREE 734 syntax checking 385
CURRENT PACKAGESET 58 varying-list 519, 532
CURRENT RULES 426 SQL communication area (SQLCA) 101, 103
CURRENT SERVER 58 See also SQLCA (SQL communication area)
CURRENT SQLID 58 SQL procedure
CURRENT TIME 58 preparation using DSNTPSMP procedure 573
CURRENT TIMESTAMP 58 program preparation 572
CURRENT TIMEZONE 58 referencing SQLCODE and SQLSTATE 569
definition 37 SQL variable 567
USER 58 statements allowed 950
SPUFI SQL procedure statement
browsing output 86 CALL statement 566
changed column widths 87 CASE statement 566
CONNECT LOCATION field 82 compound statement 566
created column heading 87 CONTINUE handler 568
default values 82 EXIT handler 569
panels GET DIAGNOSTICS statement 566
allocates RESULT data set 80 GOTO statement 566
filling in 80 handler 568
format and display output 86 handling errors 568
previous values displayed on panel 79 IF statement 566
selecting on DB2I menu 79 LEAVE statement 566
processing SQL statements 79, 85 LOOP statement 566
Specifying SQL statement terminator 82 REPEAT statement 566
SQLCODE returned 86 SQL statement 566
SQL WHILE statement 566
option of precompiler 413 SQL statement
SQL (Structured Query Language) SQL procedure 566
case expression 30 SQL statement nesting
coding restrictions 304
assembler 127 stored procedures 304
basics 93 triggers 304
C 141 user-defined functions 304
C++ 141 SQL statement terminator
COBOL 160 Specifying in SPUFI 82
dynamic 533 SQL statements
FORTRAN program 183 ALLOCATE CURSOR 613
object extensions 235 ASSOCIATE LOCATORS 612
PL/I 192 CALL
REXX 207 restrictions on 553
cursors 107 CLOSE 113, 519
dynamic CONNECT (Type 1) 391
coding 503 CONNECT (Type 2) 391
sample C program 879 continuation
statements allowed 945 assembler 129
escape character 410 C language 143
host variables 96 COBOL 163
keywords, reserved 943 FORTRAN 184
return codes PL/I 193
checking 101 REXX language 210
handling 103 DECLARE CURSOR
static description 109
sample C program 879 example 518, 522
SQLSTATE (continued) stored procedure (continued)
referencing in SQL procedure 569 testing 627
SQLVAR field of SQLDA 525 usage 535
SQLWARNING clause use of special registers 553
WHENEVER statement in COBOL program 103 using host variables with 539
SSID (subsystem identifier), specifying 448 using temporary tables in 557
SSN (subsystem name) writing 548
CALL DSNALI parameter list 753 writing in REXX 559
parameter in CAF CONNECT function 756 stormdrain effect 818
parameter in CAF OPEN function 760 string
parameter in RRSAF CONNECT function 788 delimiter
SQL calls to CAF (call attachment facility) 750 apostrophe 410
star schema 711 fixed-length
defining indexes for 673 assembler 132
state C 157
of a lock 342 COBOL 167
statement PL/I 204
labels value in CREATE TABLE statement 42
FORTRAN 185 varying-length
PL/I 194 assembler 132
statement table C 157
column descriptions 726 COBOL 168
static SQL PL/I 204
description 503 string host variables in C 155
host variables 504 subquery
sample C program 879 correlated
STDDEV function DELETE statement 77
when evaluation occurs 696 example 74
STDSQL option subquery 74
precompiler 413 tuning 663
STOP DATABASE command UPDATE statement 76
timeout 333 DELETE statement 77
storage description 71
acquiring join transformation 666
retrieved row 525 noncorrelated 664
SQLDA 523 referential constraints 77
addresses in SQLDA 526 restrictions with DELETE 77
storage group, DB2 tuning 662
sample application 846 tuning examples 667
stored procedure 297, 632 UPDATE statement 76
binding 558 use with UPDATE, DELETE, and INSERT 73
CALL statement subselect
description 582 INSERT statement 57
restrictions on 553 subsystem
calling from a REXX procedure 617 identifier (SSID), specifying 448
defining parameter lists 587 subsystem name (SSN) 750
defining to DB2 541 See also SSN (subsystem name)
example 537 SUM function 22
invoking from a trigger 224 summarizing group values 35
languages supported 548 SYNC call 369
linkage conventions 585 SYNC call, IMS 369
restricted SQL statements 553 SYNC parameter of CAF (call attachment facility) 761,
returning non-relational data 557 771
returning result set 556 synchronization call abends 488
running as authorized program 558 SYNCPOINT statement of CICS 368
statements allowed 948
TRANSLATE function of CAF (continued) unit of work (continued)
usage 764 DL/I batch 374
TRANSLATE function of RRSAF duration 367
syntax 806 IMS
usage 806 batch 374
translating commit point 369
requests from end users into SQL Statements 825 ending 369
trigger starting point 369
activation order 226 prevention of data access by other users 367
cascading 226 TSO
coding 219 completion 367
description 45, 217 ROLLBACK statement 367
example 217 unknown characters 14
interaction with constraints 227 UPDATE
overview 217 lock mode
parts of 219 page 342
transition table 221 row 342
transition variable 221 table, partition, and table space 342
truncation statement
SQL variable assignment 569 correlated subqueries 76
TSO description 57
CLISTs SET clause 57
calling application programs 434 subqueries 73
running in foreground 434 WHERE CURRENT clause 111
DSNALI language interface module 748 updating
TEST command 473 during retrieval 823
unit of work, completion 369 large volumes 823
tuning values from host variables 98
DB2 UR (uncommitted read)
queries containing host variables 658 concurrent access restrictions 352
two-phase commit effect on reading LOBs 362
coordinating updates 389 page and row locking 351
TWOPASS recommendation 338
option of precompiler 413 USER
special register 58
value in UPDATE statement 58
U user-defined function
UNION clause DBINFO structure 269
effect on OPTIMIZE clause 670 invoking from a trigger 224
removing duplicates with sort 720 scratchpad 267
SELECT statement 36 statements allowed 948
unique index user-defined function (UDF)
creating using timestamp 819 abnormal termination 304
unit of recovery accessing transition tables 287
indoubt Assembler parameter conventions 272
recovering CICS 369 Assembler table locators 288
recovering IMS 371 C or C++ table locators 290
unit of work C parameter conventions 273
beginning 367 casting arguments 303
CICS description 368 COBOL parameter conventions 279
completion COBOL table locators 290
commit 368 data type promotion 300
open cursors 113 definer 250
rollback 368 defining 252
TSO 367, 369 description 249
description 367 DSN_FUNCTION_TABLE 302
WITH HOLD clause of DECLARE CURSOR
statement 113
WITH HOLD cursor 516
effect on locks and claims 356
X
XREF option
precompiler 414
XRST call, IMS application program 371
Your feedback helps IBM to provide quality information. Please send any comments that
you have about this book or other DB2 for OS/390 documentation. You can use any of the
following methods to provide comments.
• Send your comments by e-mail to db2pubs@vnet.ibm.com and include the name of the product, the version number of the product, and the number of the book. If you are commenting on specific text, please list the location of the text (for example, a chapter and section title, page number, or a help topic title).
• Send your comments from the Web. Visit the DB2 for OS/390 Web site at:
  http://www.ibm.com/software/db2os390
  The Web site has a feedback page that you can use to send comments.
• Complete the readers' comment form at the back of the book and return it by mail, by fax (800-426-7773 for the United States and Canada), or by giving it to an IBM representative.
Readers' Comments
DB2 Universal Database for OS/390
Application Programming
and SQL Guide
Version 6
Publication No. SC26-9004-01
For each of the following items, rate your satisfaction as Very Satisfied, Satisfied, Neutral, Dissatisfied, or Very Dissatisfied:

   Technically accurate
   Complete
   Easy to find
   Easy to understand
   Well organized
   Applicable to your tasks
   Grammatically correct and consistent
   Graphically well designed
   Overall satisfaction
Name
Address
Company or Organization
Phone No.