
AS/400e series IBM

Distributed Database Programming


Version 4

SC41-5702-01
Note

Before using this information and the product it supports, be sure to read the information in “Notices” on page X-5.

Second Edition (February 1998)

This edition applies to version 4, release 2, modification 0 of Operating System/400 (product number 5769-SS1) and to all
subsequent releases and modifications until otherwise indicated in new editions. This edition applies only to reduced
instruction set computer (RISC) systems.

This edition replaces SC41-5702-00.

© Copyright International Business Machines Corporation 1997, 1998. All rights reserved.
Note to U.S. Government Users — Documentation related to restricted rights — Use, duplication or disclosure is subject to
restrictions set forth in GSA ADP Schedule Contract with IBM Corp.
Contents
About Distributed Database Programming (SC41-5702) . . . . . . . . . . . . xi
Who should read this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
AS/400 Operations Navigator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Prerequisite and related information . . . . . . . . . . . . . . . . . . . . . . . . . xii
Information available on the World Wide Web . . . . . . . . . . . . . . . . . . . xii
How to send your comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii

Chapter 1. Distributed Relational Database and the AS/400 System . . . 1-1


Distributed Relational Database Processing . . . . . . . . . . . . . . . . . . . . 1-1
Remote Unit of Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
Distributed Unit of Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
Other Distributed Relational Database Terms and Concepts . . . . . . . . . 1-5
Distributed Relational Database Architecture Support . . . . . . . . . . . . . . . 1-6
DRDA and CDRA Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
Character Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
Application Requester Driver Programs . . . . . . . . . . . . . . . . . . . . . . . 1-8
Distributed Relational Database on the AS/400 System . . . . . . . . . . . . . 1-8
Managing an AS/400 Distributed Relational Database . . . . . . . . . . . . 1-10
Spiffy Corporation Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-13
Spiffy Organization and System Profile . . . . . . . . . . . . . . . . . . . . . 1-13
Business Processes of the Spiffy Corporation Automobile Service . . . . . 1-15
Distributed Relational Database Administration for the Spiffy Corporation . 1-15

Chapter 2. Planning and Design for Distributed Relational Database . . . 2-1


Identifying Your Needs and Expectations . . . . . . . . . . . . . . . . . . . . . . 2-1
Data Needs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-1
Distributed Relational Database Capabilities . . . . . . . . . . . . . . . . . . 2-2
Goals and Directions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Designing the Application, Network, and Data . . . . . . . . . . . . . . . . . . . 2-4
Designing Applications — Tips . . . . . . . . . . . . . . . . . . . . . . . . . . 2-4
Network Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-5
Data Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-6
Developing a Management Strategy . . . . . . . . . . . . . . . . . . . . . . . . . 2-6
General Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-6
Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-8
Accounting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-9
Problem Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-9
Backup and Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-10

Chapter 3. Communications for an AS/400 Distributed Relational Database . . . . 3-1

Communications Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1
Systems Network Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1
APPC/APPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
Using DDM and Distributed Relational Database . . . . . . . . . . . . . . . . 3-2
Alert Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-3
Distributed Relational Database Network Considerations . . . . . . . . . . . . . 3-4
Configuring Communications for a Distributed Relational Database . . . . . . . 3-4
| Configuring a Communications Network for APPC . . . . . . . . . . . . . . . 3-5
| Configuring a Communications Network for TCP/IP . . . . . . . . . . . . . . 3-7



APPN Configuration Example . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
Configuring Alert Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-16
Example Configuration for Alert Support . . . . . . . . . . . . . . . . . . . . 3-17

Chapter 4. Security for an AS/400 Distributed Relational Database . . . . 4-1


Elements of Distributed Relational Database Security . . . . . . . . . . . . . . . 4-1
| Session Level and Location Security for APPC Connections . . . . . . . . . 4-2
| Conversation Level Security for APPC Connections . . . . . . . . . . . . . . 4-4
| DRDA Security using TCP/IP . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-8
Object Related Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-9
Authority to Distributed Relational Database Objects . . . . . . . . . . . . . 4-10
Programs That Run Under Adopted Authority . . . . . . . . . . . . . . . . . 4-12
Protection Strategies in a Distributed Relational Database . . . . . . . . . . . 4-13

Chapter 5. Setting Up an AS/400 Distributed Relational Database . . . . . 5-1


Work Management on the AS/400 System . . . . . . . . . . . . . . . . . . . . . 5-1
Setting Up Your Work Management Environment . . . . . . . . . . . . . . . 5-2
| Work Management for DRDA Use with TCP/IP . . . . . . . . . . . . . . . . . 5-2
Considerations for Setting Up Subsystems for APPC . . . . . . . . . . . . . 5-3
Using the Relational Database Directory . . . . . . . . . . . . . . . . . . . . . . 5-5
Working with the Relational Database Directory . . . . . . . . . . . . . . . . 5-5
Relational Database Directory Setup Example . . . . . . . . . . . . . . . . . 5-9
| Setting Up Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-12
| Setting Up the TCP/IP Server . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-12
| Setting Up SQL Packages for Interactive SQL on a RUW Server . . . . . . . 5-13
Setting Up DDM Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-13
Loading Data into Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
Loading New Data into Tables . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
Moving Data from One AS/400 System to Another . . . . . . . . . . . . . . 5-16
Moving a Database to an AS/400 System from a Non-AS/400 System . . 5-22

Chapter 6. Distributed Relational Database Administration and Operation Tasks . . 6-1

Monitoring Relational Database Activity . . . . . . . . . . . . . . . . . . . . . . . 6-1
Working with Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-1
Working with User Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-2
Working with Active Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
Working with Commitment Definitions . . . . . . . . . . . . . . . . . . . . . . 6-4
Using the Job Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-5
Locating Distributed Relational Database Jobs . . . . . . . . . . . . . . . . . 6-6
Operating Remote AS/400 Systems . . . . . . . . . . . . . . . . . . . . . . . . . 6-8
Starting and Stopping Other Systems Remotely . . . . . . . . . . . . . . . . 6-8
Submit Remote Command (SBMRMTCMD) Command . . . . . . . . . . . . 6-8
Controlling DDM Conversations . . . . . . . . . . . . . . . . . . . . . . . . . . 6-10
Reclaiming DDM Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-12
Displaying Objects Used by Programs . . . . . . . . . . . . . . . . . . . . . . 6-12
Display Program Reference Example . . . . . . . . . . . . . . . . . . . . . 6-14
Dropping a Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15
Job Accounting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-16
| Managing the TCP/IP Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-17
| Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-17
| TCP/IP Communication Support Concepts . . . . . . . . . . . . . . . . . . 6-18
| QSYSWRK Subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-20
| Identifying Server Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-23



Canceling Distributed Relational Database Work . . . . . . . . . . . . . . . . 6-26
End Job (ENDJOB) Command . . . . . . . . . . . . . . . . . . . . . . . . . 6-26
End Request (ENDRQS) Command . . . . . . . . . . . . . . . . . . . . . . 6-26
Auditing the Relational Database Directory . . . . . . . . . . . . . . . . . . . . 6-27

Chapter 7. Data Availability and Protection . . . . . . . . . . . . . . . . . . . 7-1


Recovery Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-1
Uninterruptible Power Supply . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-2
Data Recovery after Disk Failures . . . . . . . . . . . . . . . . . . . . . . . . 7-2
Journal Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
Transaction Recovery through Commitment Control . . . . . . . . . . . . . . 7-6
Writing Data to Auxiliary Storage . . . . . . . . . . . . . . . . . . . . . . . . . 7-9
Save and Restore Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-9
Ensuring Data Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-14
Network Redundancy Issues . . . . . . . . . . . . . . . . . . . . . . . . . . 7-14
Data Redundancy in Your Network . . . . . . . . . . . . . . . . . . . . . . . 7-16

Chapter 8. Distributed Relational Database Performance . . . . . . . . . . 8-1


Improving Performance Through the Network . . . . . . . . . . . . . . . . . . . 8-1
Unprotected Conversations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-1
Improving Performance Through the System . . . . . . . . . . . . . . . . . . . . 8-1
Observing System Performance . . . . . . . . . . . . . . . . . . . . . . . . . . 8-2
Improving Performance Through the Database . . . . . . . . . . . . . . . . . . 8-2
Deciding Data Location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-3
Factors that Affect Blocking for DRDA . . . . . . . . . . . . . . . . . . . . . . 8-3
Factors That Affect the Size of Query Blocks . . . . . . . . . . . . . . . . . . 8-6

Chapter 9. Handling Distributed Relational Database Problems . . . . . . 9-1


AS/400 Problem Handling Overview . . . . . . . . . . . . . . . . . . . . . . . . . 9-1
Isolating Distributed Relational Database Problems . . . . . . . . . . . . . . . . 9-2
Incorrect Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-3
Application Does Not Complete in the Expected Time . . . . . . . . . . . . . 9-3
Working with Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-6
Copy Screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-6
Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-7
| Handling Program Start Request Failures for APPC . . . . . . . . . . . . . 9-13
| Handling Connection Request Failures for TCP/IP . . . . . . . . . . . . . . 9-13
Application Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-15
Listings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-15
SQLCODEs and SQLSTATEs . . . . . . . . . . . . . . . . . . . . . . . . . . 9-17
System and Communications Problems . . . . . . . . . . . . . . . . . . . . . . 9-21
AS/400 Problem Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-21
Alerts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-23
Getting Data to Report a Failure . . . . . . . . . . . . . . . . . . . . . . . . . . 9-25
Printing a Job Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-25
| Finding Joblogs from TCP/IP Server Prestart Jobs . . . . . . . . . . . . . . 9-25
| Printing the Product Activity Log . . . . . . . . . . . . . . . . . . . . . . . . 9-26
Trace Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-26
Communications Trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-27
Finding First-Failure Data Capture (FFDC) Data . . . . . . . . . . . . . . . . . 9-29
Interpreting FFDC Data from the Error Log . . . . . . . . . . . . . . . . . . 9-30
Starting a Service Job to Diagnose Application Server Problems . . . . . . . 9-30
| Starting a Service Job for an APPC Server . . . . . . . . . . . . . . . . . . 9-30
| Starting a Service Job for a TCP/IP Server . . . . . . . . . . . . . . . . . . 9-31

Setting QCNTSRVC as a TPN on a DB2/400 Application Requester . . . 9-31
| Creating Your Own TPN for Debugging a DB2 for AS/400 AS Job . . . . 9-31
| Setting QCNTSRVC as a TPN on a DB2 for VM Application Requester . 9-32
| Setting QCNTSRVC as a TPN on a DB2 for OS/390 Application Requester 9-32
| Setting QCNTSRVC as a TPN on a DB2 Connect Application Requester 9-32

Chapter 10. Writing Distributed Relational Database Applications . . . 10-1


Programming Considerations for a Distributed Relational Database
Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-2
Naming Distributed Relational Database Objects . . . . . . . . . . . . . . . 10-2
Connecting to a Distributed Relational Database . . . . . . . . . . . . . . . 10-3
| SQL Specific to Distributed Relational Database and SQL CALL . . . . . 10-13
Ending Units of Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-16
Coded Character Set Identifier (CCSID) . . . . . . . . . . . . . . . . . . . 10-16
Other Data Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-19
DDM Files and SQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-20
Preparing Distributed Relational Database Programs . . . . . . . . . . . . . 10-21
Precompiling Programs with SQL Statements . . . . . . . . . . . . . . . . 10-22
Compiling an Application Program . . . . . . . . . . . . . . . . . . . . . . 10-24
Binding an Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-24
Testing and Debugging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-25
Working With SQL Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-28
SQL Package Management . . . . . . . . . . . . . . . . . . . . . . . . . . 10-28
Create SQL Package (CRTSQLPKG) Command . . . . . . . . . . . . . . 10-28
Delete SQL Package (DLTSQLPKG) Command . . . . . . . . . . . . . . 10-32
SQL DROP PACKAGE Statement . . . . . . . . . . . . . . . . . . . . . . 10-33

Appendix A. Application Programming Examples . . . . . . . . . . . . . . A-1


Business Requirement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1
Technical Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1
Creating a Collection and Tables . . . . . . . . . . . . . . . . . . . . . . . . . . A-1
Inserting Data into the Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . A-3
RPG Program Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-6
COBOL Program Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-13
C Program Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-18
Program Output Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-22

Appendix B. Cross-Platform Access Using DRDA . . . . . . . . . . . . . B-1


CCSID Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-1
AS/400 System Value QCCSID . . . . . . . . . . . . . . . . . . . . . . . . . B-1
CCSID Conversion Considerations for DB2 Connect Connections . . . . . B-2
CCSID Conversion Considerations for DB2 and DB2 Server for VM
Database Managers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-2
Interactive SQL and Query Management Setup on Unlike Application Servers B-3
Creating Interactive SQL Packages on DB2 Server for VM . . . . . . . . . . B-4
| FAQs from Users of DB2 Connect . . . . . . . . . . . . . . . . . . . . . . . . . B-4
Do AS/400 Files Have to Be Journaled? . . . . . . . . . . . . . . . . . . . . B-4
When Will Query Data Be Blocked for Better Performance? . . . . . . . . B-4
Is the DB2/400 Query Manager and SQL Development Kit Product Needed
for Collection and Table Creation? . . . . . . . . . . . . . . . . . . . . . . B-5
How Do You Interpret an SQLCODE and the Associated Tokens Reported
in a DBM SQL0969N Error Message? . . . . . . . . . . . . . . . . . . . . B-6
| Other Tips for Interoperating with Workstations Using DB2 Connect and DB2
| UDB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-7



| DB2 Connect versus DB2 UDB . . . . . . . . . . . . . . . . . . . . . . . . . B-7
| Proper Configuration and Maintenance Level . . . . . . . . . . . . . . . . . B-7
| Table and Collection Naming . . . . . . . . . . . . . . . . . . . . . . . . . . B-8
| Granting Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-8
| APPC Communications Setup . . . . . . . . . . . . . . . . . . . . . . . . . . B-8
| Setting Up the RDB Directory . . . . . . . . . . . . . . . . . . . . . . . . . . B-8
| Setting Up the SQL Package for DB2 Connect . . . . . . . . . . . . . . . . B-9
| Using Interactive SQL to DB2 UDB . . . . . . . . . . . . . . . . . . . . . . . B-9

Appendix C. Interpreting Trace Job and FFDC Data . . . . . . . . . . . . C-1


Interpreting Data Entries for the RW Component of Trace Job . . . . . . . . C-1
Analyzing the RW Trace Data Example . . . . . . . . . . . . . . . . . . . . C-2
Description of RW Trace Points . . . . . . . . . . . . . . . . . . . . . . . . . C-3
First-Failure Data Capture (FFDC) . . . . . . . . . . . . . . . . . . . . . . . . . C-5
An FFDC Dump . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C-5
FFDC Dump Output Description . . . . . . . . . . . . . . . . . . . . . . . . C-8
DDM Error Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C-14

Appendix D. DDM Architecture Command Support . . . . . . . . . . . . . D-1

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-1
AS/400 System Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-1
Distributed Relational Database Library . . . . . . . . . . . . . . . . . . . . . . X-2
Other IBM Distributed Relational Database Platform Libraries . . . . . . . . . X-3
DB2 Connect and Universal Database . . . . . . . . . . . . . . . . . . . . . X-3
DB2 for OS/390 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-3
DB2 Server for VSE and VM . . . . . . . . . . . . . . . . . . . . . . . . . . X-4
Architecture Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-4
Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-4

Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-5
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-6

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-7

Figures
0-1. AS/400 Operations Navigator Display . . . . . . . . . . . . . . . . . . . . xi
1-1. A Typical Relational Table . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
1-2. Relationship of SQL Terms to System Terms . . . . . . . . . . . . . . 1-2
1-3. A Distributed Relational Database . . . . . . . . . . . . . . . . . . . . . 1-3
1-4. Unit of Work in a Local Relational Database . . . . . . . . . . . . . . . 1-3
1-5. Remote Unit of Work in a Distributed Relational Database . . . . . . 1-4
1-6. Distributed Unit of Work in a Distributed Relational Database . . . . . 1-5
1-7. The Spiffy Corporation System Organization . . . . . . . . . . . . . . 1-14
2-1. Alternative Solutions to Distributed Relational Database . . . . . . . . 2-3
3-1. The Spiffy Corporation Network Organization . . . . . . . . . . . . . . 3-8
3-2. Spiffy Corporation Example Network Configuration . . . . . . . . . . 3-18
4-1. Remote Access to a Distributed Relational Database . . . . . . . . . . 4-7
5-1. Relational Database Directory Setup for Two Systems . . . . . . . . 5-10
5-2. Relational Database Directory Setup for Multiple Systems . . . . . . 5-11
6-1. How High-level Languages Save Information About Objects . . . . 6-13
| 6-2. DRDA/DDM TCP/IP Server . . . . . . . . . . . . . . . . . . . . . . . . 6-18
7-1. Record Lock Duration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-8
7-2. Alternative Network Paths . . . . . . . . . . . . . . . . . . . . . . . . 7-15
7-3. Alternate Application Server . . . . . . . . . . . . . . . . . . . . . . . 7-16
7-4. Data Redundancy Example . . . . . . . . . . . . . . . . . . . . . . . . 7-17
9-1. Resolving Incorrect Output Problem . . . . . . . . . . . . . . . . . . . . 9-3
9-2. Resolving Wait, Loop, or Performance Problems on the Application
Requester . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-4
9-3. Resolving Wait, Loop, or Performance Problems on the Application
Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-5
9-4. Message Categories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-8
9-5. Message Severity Codes . . . . . . . . . . . . . . . . . . . . . . . . . . 9-9
| 9-6. Distributed Relational Database Messages . . . . . . . . . . . . . . . 9-11
9-7. Listing From a Precompiler . . . . . . . . . . . . . . . . . . . . . . . . 9-16
9-8. Listing from CRTSQLPKG . . . . . . . . . . . . . . . . . . . . . . . . 9-17
9-9. SQLCODEs and SQLSTATEs . . . . . . . . . . . . . . . . . . . . . . 9-18
9-10. Distributed Relational Database Messages that Create Alerts . . . . 9-24
9-11. Communications Trace Messages . . . . . . . . . . . . . . . . . . . . 9-27
10-1. Remote Unit of Work Activation Group Connection State Transition 10-4
10-2. Application-Directed Distributed Unit of Work Connection and
Activation Group Connection State Transitions . . . . . . . . . . . . 10-6
10-3. Coded Character Set Identifier (CCSID) . . . . . . . . . . . . . . . 10-17
A-1. Creating a Collection and Tables . . . . . . . . . . . . . . . . . . . . A-2
A-2. Inserting Data into the Tables . . . . . . . . . . . . . . . . . . . . . . A-3
A-3. RPG Program Example . . . . . . . . . . . . . . . . . . . . . . . . . . A-7
A-4. COBOL Program Example . . . . . . . . . . . . . . . . . . . . . . . . A-13
A-5. C Program Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-19
A-6. Program Output Example . . . . . . . . . . . . . . . . . . . . . . . . . A-22
C-1. An Example of Job Trace RW Component Information . . . . . . . . C-1
D-1. ACCRDB Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-2
D-2. ACCRDBRM Reply for ACCRDB command . . . . . . . . . . . . . . D-2
| D-3. ACCSEC Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-2
| D-4. ACCSECRD Reply for ACCSEC command . . . . . . . . . . . . . . D-3
D-5. BGNBND Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-3
D-6. Reply Objects for BGNBND command . . . . . . . . . . . . . . . . . D-4



D-7. BNDSQLSTT Command . . . . . . . . . . . . . . . . . . . . . . . . . D-4
D-8. Reply Objects for BNDSQLSTT command . . . . . . . . . . . . . . . D-5
D-9. CLSQRY Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-5
D-10. Reply Objects for CLSQRY command . . . . . . . . . . . . . . . . . D-6
D-11. CNTQRY Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-6
D-12. Reply Objects for CNTQRY command . . . . . . . . . . . . . . . . . D-6
D-13. DRPPKG Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-6
D-14. Reply Objects for DRPPKG command . . . . . . . . . . . . . . . . . D-6
D-15. DSCRDBTBL Command . . . . . . . . . . . . . . . . . . . . . . . . . D-7
D-16. Reply Objects for DSCRDBTBL command . . . . . . . . . . . . . . . D-7
D-17. DSCSQLSTT Command . . . . . . . . . . . . . . . . . . . . . . . . . D-7
D-18. Reply Objects for DSCSQLSTT command . . . . . . . . . . . . . . . D-7
D-19. ENDBND Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-7
D-20. Reply Objects for ENDBND command . . . . . . . . . . . . . . . . . D-8
D-21. EXCSAT Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-8
D-22. EXCSATRD Reply for EXCSAT command . . . . . . . . . . . . . . . D-8
D-23. EXCSQLIMM Command . . . . . . . . . . . . . . . . . . . . . . . . . D-8
D-24. Reply Objects for EXCSQLIMM command . . . . . . . . . . . . . . . D-9
D-25. EXCSQLSTT Command . . . . . . . . . . . . . . . . . . . . . . . . . D-9
D-26. Reply Objects for EXCSQLSTT command . . . . . . . . . . . . . . . D-9
D-27. OPNQRY Command . . . . . . . . . . . . . . . . . . . . . . . . . . . D-9
D-28. Reply Message and Reply Objects for OPNQRY command . . . . . D-10
D-29. PRPSQLSTT Command . . . . . . . . . . . . . . . . . . . . . . . . . D-10
D-30. Reply Objects for PRPSQLSTT command . . . . . . . . . . . . . . . D-11
D-31. RDBCMM Command . . . . . . . . . . . . . . . . . . . . . . . . . . . D-11
D-32. Reply Objects for RDBCMM command . . . . . . . . . . . . . . . . . D-11
D-33. RDBRLLBCK Command . . . . . . . . . . . . . . . . . . . . . . . . . D-11
D-34. Reply Objects for RDBRLLBCK command . . . . . . . . . . . . . . . D-11
D-35. REBIND Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-11
| D-36. SECCHK Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-11
| D-37. SECCHKRM Reply message for SECCHK command . . . . . . . . D-12



About Distributed Database Programming (SC41-5702)
This book describes the distributed relational database management portion of the
Operating System/400 (OS/400) licensed program. Distributed relational database
management provides applications with access to data that is external to the appli-
cation and typically located across a network of computers.

Who should read this book


This guide is intended primarily for system programmers responsible for the devel-
opment, administration, and support of a distributed relational database on one or
more AS/400 systems. System programmers who are not familiar with the AS/400
database can also get a view of the total range of database support provided by
the OS/400 operating system. Application programmers may use this guide to see
the system context in which distributed relational database applications run.

Before using this guide, you should be familiar with general programming concepts
and terminology, and have a general understanding of the AS/400 system and
OS/400 operating system.

AS/400 Operations Navigator


AS/400 Operations Navigator is a powerful graphical interface for Windows 95/NT
clients. With AS/400 Operations Navigator, you can use your Windows 95/NT skills
to manage and administer your AS/400 systems. You can work with database
administration, file systems, Internet network administration, users, and user
groups. You can even schedule regular system backups and display your hardware
and software inventory. Figure 0-1 shows an example of the display.

Figure 0-1. AS/400 Operations Navigator Display

IBM recommends that you use this new interface. It is simple to use and has great
online information to guide you.

You can access the AS/400 Operations Navigator from the Client Access folder by
double-clicking the AS/400 Operations Navigator icon. You can also drag this icon
to your desktop for even quicker access.



While we develop this interface, you will still need to use the familiar AS/400 “green
screens” to do some of your tasks. You can find information to help you in this
book and online.

Prerequisite and related information


For information about Advanced 36 publications, see the Advanced 36 Information
Directory, SC21-8292, in the AS/400 Softcopy Library.

For information about other AS/400 publications (except Advanced 36), see either
of the following:
• The Publications Reference, SC41-5003, in the AS/400 Softcopy Library.
• The AS/400 online library is available on the World Wide Web at the following
uniform resource locator (URL) address:
http://as400bks.rochester.ibm.com/

For a list of related publications, see the “Bibliography” on page X-1.

Information available on the World Wide Web


In addition to the AS/400 online library on the World Wide Web, you can access
other information from the AS/400 Technical Studio at the following URL address:
http://www.as400.ibm.com/techstudio

How to send your comments


Your feedback is important in helping to provide the most accurate and high-quality
information. If you have any comments about this book or any other AS/400 doc-
umentation, fill out the readers' comment form at the back of this book.
• If you prefer to send comments by mail, use the readers' comment form with
the address that is printed on the back. If you are mailing a readers' comment
form from a country other than the United States, you can give the form to the
local IBM branch office or IBM representative for postage-paid mailing.
• If you prefer to send comments by FAX, use either of the following numbers:
– United States and Canada: 1-800-937-3430
– Other countries: 1-507-253-5192
• If you prefer to send comments electronically, use this network ID:
– IBMMAIL, to IBMMAIL(USIB56RZ)
– [email protected]
Be sure to include the following:
• The name of the book.
• The publication number of the book.
• The page number or topic to which your comment applies.



Chapter 1. Distributed Relational Database and the AS/400
System
Distributed relational database support on the AS/400* system consists of an imple-
mentation of IBM* Distributed Relational Database Architecture* (DRDA*) and inte-
gration of other SQL clients by use of Application Requester Driver (ARD)
programs. The Operating System/400 (OS/400) and the DB2 for AS/400 Query
Manager and SQL Development Kit combine to provide this support.

This chapter describes distributed relational database and how it is used on the
AS/400 system. It defines some general concepts of distributed relational database,
outlines the IBM DRDA implementation, and provides an overview of the current
DRDA implementation on the AS/400 system. It defines some terms and directs
you to other parts of this manual for more detail. Finally, an example corporation
named Spiffy is described. This fictional company uses AS/400 systems in a distrib-
uted relational database application program. This sample of the Spiffy Corporation
forms the background for all examples used in this manual.

Distributed Relational Database Processing


A relational database is a set of data stored in one or more tables in a computer.
A table is a two-dimensional arrangement of data consisting of horizontal rows and
vertical columns as shown in Figure 1-1. Each row contains a sequence of values,
one for each column of the table. A column has a name and contains a particular
data type (for example, character, decimal, or integer).

Figure 1-1. A Typical Relational Table


Item    Name          Supplier    Quantity
78476   Baseball      ACME        650
78477   Football      Imperial    228
78478   Basketball    ACME        105
78479   Soccer ball   ACME        307

Tables can be defined and accessed in several ways on the AS/400 system. One
way to describe and access tables on the system is to use a language like Struc-
tured Query Language (SQL). SQL is the standard IBM database language and
provides the necessary consistency to enable distributed data processing across
different system operating environments. Another way to describe and access
tables on the AS/400 system is to describe physical and logical files using data
description specifications (DDS) and access tables using file interfaces (for
example, read and write high-level language statements).
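
For illustration, a table like the one in Figure 1-1 might be created and queried with
SQL statements similar to the following sketch. The collection, table, and column
names here are assumed for illustration only and are not taken from the examples
in this manual; the sketch also assumes the SQL naming convention, in which a
table is qualified as collection.table:

   CREATE COLLECTION STOCK

   CREATE TABLE STOCK.INVENTORY
         (ITEM     CHAR(5)   NOT NULL,
          NAME     CHAR(20)  NOT NULL,
          SUPPLIER CHAR(15)  NOT NULL,
          QUANTITY INTEGER)

   SELECT NAME, QUANTITY
     FROM STOCK.INVENTORY
     WHERE SUPPLIER = 'ACME'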

SQL uses different terminology from that used on the AS/400 system. For most
SQL objects there is a corresponding system object on the AS/400 system.
Figure 1-2 shows the relationship between SQL relational database terms and
AS/400 system terms.



Figure 1-2. Relationship of SQL Terms to System Terms

SQL Term: Collection. Consists of a library, a journal, a journal receiver, an
SQL catalog, and an optional data dictionary. A collection groups related
objects and allows you to find the objects by name.
System Term: Library. Groups related objects and allows you to find the
objects by name.

SQL Term: Table. A set of columns and rows.
System Term: Physical file. A set of records.

SQL Term: Row. The horizontal part of a table containing a serial set of
columns.
System Term: Record. A set of fields.

SQL Term: Column. The vertical part of a table of one data type.
System Term: Field. One or more bytes of related information of one data type.

SQL Term: View. A subset of columns and rows of one or more tables.
System Term: Logical file. A subset of fields and/or records of up to 32
physical files.

SQL Term: Index. A collection of data in the columns of a table, logically
arranged in ascending or descending order.
System Term: A type of logical file.

SQL Term: Package. An object that contains control structures for SQL
statements to be used by an application server.
System Term: SQL package. Has the same meaning as the SQL term.

SQL Term: Catalog. A set of tables and views that contain information about
tables, packages, views, indexes, and constraints. The catalog views in QSYS2
contain information about all tables, packages, views, indexes, and constraints
on the AS/400 system. Additionally, an SQL collection will contain a set of
these views that only contains information about tables, packages, views,
indexes, and constraints in the collection.
System Term: No similar object. However, the Display File Description (DSPFD)
and Display File Field Description (DSPFFD) commands provide some of the same
information that querying an SQL catalog provides.

A distributed relational database exists when the application programs that use
the data and the data itself are located on different systems. The simplest form of a
distributed relational database is shown in Figure 1-3 on page 1-3 where the appli-
cation program runs on one system, and the data is located on another system.

When using a distributed relational database, the system on which the application
program is run is called the application requester (AR), and the system on which
the remote data resides is called the application server (AS).



Application Requester Application Server
┌───────────────────┐ ┌───────────────────┐
│ │ │ │
│ │ │ │
│ ┌──────────────┐ │ │ │
│ │ Application │──┼─────┐ │ │
│ │ │ │ │ │ ┌──────────────┐ │
│ └──────────────┘ │ └─────┼─>│ Data │ │
│ │ │ │ │ │
│ │ │ └──────────────┘ │
│ │ │ │
│ │ │ │
└───────────────────┘ └───────────────────┘

Figure 1-3. A Distributed Relational Database

A unit of work is one or more database requests and the associated processing
that make up a completed piece of work as shown in Figure 1-4. A simple example
is taking a part from stock in an inventory control application program. An inventory
program can tentatively remove an item from a shop inventory account table and
then add that item to a parts reorder table at the same location. The term trans-
action is another expression used to describe the unit of work concept.

In the above example, the unit of work is not complete until the part is both
removed from the shop inventory account table and added to a reorder table. When
the requests are complete, the application program can commit the unit of work.
This means that any database changes associated with the unit of work are made
permanent.

With unit of work support, the application program can also roll back changes to a
unit of work. If a unit of work is rolled back, the changes made since the last
commit or rollback operation are not applied. Thus, the application program treats
the set of requests to a database as a unit.
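
As a sketch of the inventory example above (the table and column names are
assumed here for illustration only), the unit of work could consist of the
following SQL statements; COMMIT makes both changes permanent, while ROLLBACK
would discard both:

   UPDATE SHOP_INVENTORY
      SET QUANTITY_ON_HAND = QUANTITY_ON_HAND - 1
      WHERE ITEM = '78476'

   INSERT INTO PARTS_REORDER (ITEM, QUANTITY)
      VALUES ('78476', 1)

   COMMIT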

┌───────────────────┐ ┌───────────────────┐
┌─── │ Request 1 ├───────────> │ │
│ ├───────────────────┤ │ │
Unit of Work 1 │ │ Request 2 ├───────────> │ │
│ ├───────────────────┤ │ │
└─── │ Request 3 ├───────────> │ │
├───────────────────┤ │ Local │
┌─── │ Request 4 ├───────────> │ Relational │
Unit of Work 2 │ ├───────────────────┤ │ Database │
│ │ Request 5 ├───────────> │ │
│ ├───────────────────┤ │ │
└─── │ Request 6 ├───────────> │ │
├───────────────────┤ │ │
┌─── │ Request 7 ├───────────> │ │
Unit of Work 3 │ ├───────────────────┤ │ │
└─── │ Request 8 ├───────────> │ │
└───────────────────┘ └───────────────────┘

Figure 1-4. Unit of Work in a Local Relational Database



Remote Unit of Work
Remote unit of work (RUW) is a form of distributed relational database processing
in which an application program can access data on a remote database within a
unit of work. A remote unit of work can include more than one relational database
request, but all requests must be made to the same remote database. All requests
to a relational database must be completed (either committed or rolled back) before
requests can be sent to another relational database. This is shown in Figure 1-5.

┌───────────────────┐ ┌───────────────────┐
┌─── │ Request 1 ├───────────> │ │
│ ├───────────────────┤ │ Relational │
Unit of Work 1 │ │ Request 2 ├───────────> │ Database 1 │
│ ├───────────────────┤ ┌───> │ │
└─── │ Request 3 ├───────┘ └───────────────────┘
├───────────────────┤ ┌───────────────────┐
┌─── │ Request 4 ├───────────> │ │
Unit of Work 2 │ ├───────────────────┤ │ Relational │
│ │ Request 5 ├───────────> │ Database 2 │
│ ├───────────────────┤ │ │
└─── │ Request 6 ├───────────> │ │
├───────────────────┤ └───────────────────┘
┌─── │ Request 7 ├───────┐ ┌───────────────────┐
Unit of Work 3 │ ├───────────────────┤ └───> │ Relational │
└─── │ Request 8 ├───────────> │ Database 3 │
└───────────────────┘ └───────────────────┘

Figure 1-5. Remote Unit of Work in a Distributed Relational Database

Remote unit of work is application-directed distribution because the application
program must connect to the correct relational database system before issuing the
requests. However, the application program only needs to know the name of the
remote database to make the correct connection.

Remote unit of work support enables an application program to read or update data
at more than one location. However, all the data that the program accesses within
a unit of work must be managed by the same relational database management
system. For example, the shop inventory application program must commit its
inventory and accounts receivable unit of work before it can read or update tables
that are in another location.

In remote unit of work processing, each computer has an associated relational
database management system and an associated application requester program
that help process distributed relational data requests. This allows you or your appli-
cation program to request remote relational data in much the same way as you
request local relational data.
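
A minimal sketch of remote unit of work processing in SQL follows. The relational
database names KC000 and MP000, and the table names, are illustrative assumptions;
each unit of work is directed to a single database and must be committed before
work is sent to another database:

   CONNECT TO KC000

   UPDATE SHOP_INVENTORY
      SET QUANTITY_ON_HAND = QUANTITY_ON_HAND - 1
      WHERE ITEM = '78476'

   COMMIT

   CONNECT TO MP000

   SELECT ITEM, QUANTITY
     FROM PARTS_REORDER
     WHERE ITEM = '78476'

   COMMIT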

Distributed Unit of Work


Distributed unit of work (DUW) enables a user or application program to read or
update data at multiple locations within a unit of work, as shown in Figure 1-6.
Within one unit of work, an application running in one system can direct SQL
requests to multiple remote database management systems using the SQL sup-
ported by those systems. For example, the shop inventory program can perform
updates to the inventory table on one system and the accounts receivable table on
another system within one unit of work.



┌───────────────────┐ ┌───────────────────┐
┌─── │ Request 1 ├───────────> │ │
│ ├───────────────────┤ │ Relational │
Unit of Work 1 │ │ Request 2 ├─────┐ ┌> │ Database 1 │
│ ├───────────────────┤ │ ┌──┼> │ │
└─── │ Request 3 ├─────┼─┼──┘ └───────────────────┘
├───────────────────┤ │ │ ┌───────────────────┐
┌─── │ Request 4 ├─────┼─┘ │ │
Unit of Work 2 │ ├───────────────────┤ └─────> │ Relational │
│ │ Request 5 ├───────────> │ Database 2 │
│ ├───────────────────┤ ┌─> │ │
└─── │ Request 6 ├─────────┼┐ │ │
├───────────────────┤ ││ └───────────────────┘
┌─── │ Request 7 ├─────────┘│ ┌───────────────────┐
Unit of Work 3 │ ├───────────────────┤ └> │ Relational │
└─── │ Request 8 ├───────────> │ Database 3 │
└───────────────────┘ └───────────────────┘

Figure 1-6. Distributed Unit of Work in a Distributed Relational Database

The target of the requests is controlled by the user or application with SQL state-
ments such as CONNECT TO and SET CONNECTION. Each SQL statement must
refer to data at a single location.

When the application is ready to commit the work, it initiates the commit; commit-
ment coordination is performed by a synchronization-point manager.

Distributed unit of work allows:

• Update access to multiple database management systems in one unit of work,
or
• Update access to one or more database management systems with read
access to other database management systems in one unit of work.

Whether an application can update a given database management system in a unit
of work is dependent on the level of DRDA (if DRDA is used to access the remote
relational database) and the order in which the connections and updates are made.
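
The following sketch shows a distributed unit of work using the same illustrative
database and table names as before (they are assumptions, not names from this
manual). Both connections exist at the same time, SET CONNECTION directs each
statement to one database, and the single COMMIT is coordinated across both
databases by the synchronization-point manager:

   CONNECT TO KC000
   CONNECT TO MP000

   SET CONNECTION KC000
   UPDATE SHOP_INVENTORY
      SET QUANTITY_ON_HAND = QUANTITY_ON_HAND - 1
      WHERE ITEM = '78476'

   SET CONNECTION MP000
   UPDATE ACCOUNTS_RECEIVABLE
      SET AMOUNT_DUE = AMOUNT_DUE + 50.00
      WHERE CUSTOMER_NUMBER = 'C1001'

   COMMIT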

Other Distributed Relational Database Terms and Concepts


| The following discussion provides an overview of additional distributed relational
| database concepts. On IBM systems, some of this support is provided by DataHub,
| DataJoiner, and DataPropagator Relational products. In addition, you can use some
| of these concepts as part of AS/400 application programs.

| DB2 for AS/400 supports both the remote unit of work and distributed unit of work
| with APPC communications. Remote unit of work is also supported with TCP/IP
| communications. A degree of processing sophistication beyond the distributed unit
| of work is a distributed request. This type of distributed relational database
| access enables a user or application program to issue a single SQL statement that
| can read or update data at multiple locations.

Tables in a distributed relational database do not have to differ from one another.
Some tables can be exact or partial copies of one another. Extracts, snapshots,
and replication are terms that describe types of copies using distributed processing.



Extracts are user-requested copies of tables. The copies are extracted from one
database and loaded into another specified by the user. The unloading and loading
process may be repeated periodically to obtain updated data. Extracts are most
useful for one-time or infrequent occurrences, such as read-only copies of data that
rarely changes.

Snapshots are read-only copies of tables that are automatically made by a system.
The system refreshes these copies from the source table on a periodic basis speci-
fied by the user—perhaps daily, weekly, or monthly. Snapshots are most useful for
locations that seek an automatic process for receiving updated information on a
periodic basis.

Data replication means the system automatically updates copies of a table. It is
similar to snapshots because copies of a table are stored at multiple locations. Data
replication is most effective for situations that require high reliability and quick data
retrieval with few updates.

Tables can also be split across computer systems in the network. Such a table is
called a distributed table. Distributed tables are split either horizontally by rows or
vertically by columns to provide easier local reference and storage. The columns of
a vertically distributed table reside at various locations, as do the rows of a horizon-
tally distributed table. At any location, the user still sees the table as if it were kept
in a single location. Distributing tables is most effective when the requests to access
and update certain portions of the table come from the same location as those
portions of the table.

Distributed Relational Database Architecture Support


DRDA support for distributed relational database processing is used by IBM rela-
tional database products. DRDA support defines protocols for communication
between an application program and a remote relational database.

DRDA support provides distributed relational database management in both IBM
and non-IBM environments. In IBM environments, relational data is managed with
the following programs:
| • DB2 for OS/390
| • DB2 for VSE and VM
| • DB2 Connect Personal Edition
| • DB2 Connect Enterprise Edition
| • DB2 Universal Database Workgroup Edition
| • DB2 Universal Database Enterprise Edition
| • DB2 Universal Database Extended Enterprise Edition
| • DB2 for AS/400 support in the OS/400 licensed program on AS/400

DRDA support provides the structure for access to database information for rela-
tional database managers operating in like and unlike environments. For example,
access to relational data between two or more AS/400 systems is distribution in a
like environment, and access to relational data between an AS/400 system and
systems using the DB2 database manager is distribution in an unlike
environment.



SQL is the standard IBM database language. It provides the necessary consistency
to enable distributed data processing across like and unlike operating environ-
ments. Within DRDA support, SQL allows users to define, retrieve, and manipulate
data across environments that support a DRDA implementation.

DRDA and CDRA Support


One of the interesting possibilities in a distributed relational database is that the
database may not only span different types of computers, but those computers may
be in different countries or regions. The same systems, such as AS/400 systems,
can encode data differently depending on the language used on the system. Dif-
ferent types of systems encode data differently. For instance, a System/390*
system, an AS/400 system, and a PS/2* system encode numeric data in their own
unique formats. In addition, a System/390 system and an AS/400 system use the
EBCDIC encoding scheme to encode character data, while a PS/2 system uses an
ASCII encoding scheme.

For numeric data, these differences do not matter. Unlike systems that provide
DRDA support automatically convert any differences between the way a number is
represented in one computer system to the way it is represented in another. For
example, if an AS/400 application program reads numeric data from a DB2 data-
base, DB2 sends the numeric data in System/390 format and the OS/400 database
management system converts it to AS/400 numeric format.

However, the handling of character data is more complex, but this too can be
handled within a distributed relational database.

Character Conversion
Not only can there be differences in encoding schemes (such as EBCDIC versus
ASCII), but there can also be differences related to language. For instance,
systems configured for different languages can assign different characters to the
same code, or different codes to the same character. For example, a system con-
figured for U.S. English can assign the same code to the character } that a system
configured for the Danish language assigns to å. But those two systems can assign
different codes to the same character such as $.

If data is to be shared across different systems, character data needs to be seen
by users and applications the same way. In other words, a PS/2 user in New York
and an AS/400 user in Copenhagen both need to see a $ as a $, even though $
may be encoded differently in each system. Furthermore, the user in Copenhagen
needs to see a }, if that is the character that was stored at New York, even though
the code may be the same as a Danish å. In order for this to happen, the $ must
be converted to the proper character encoding for a PS/2 system (that is, U.S.
English character set, ASCII), and converted back to Danish encoding when it goes
from New York to Copenhagen (that is, Danish character set, EBCDIC). This sort of
character conversion is provided for by AS/400 as well as the other IBM distributed
relational database managers. This conversion is done in a coherent way in accord-
ance with the Character Data Representation Architecture (CDRA).

CDRA specifies the way to identify the attributes of character data so that the data
can be understood across systems, even if the systems use different character sets
and encoding schemes. For conversion to happen across systems, each system
must understand the attributes of the character data it is receiving from the other
system. CDRA specifies that these attributes be identified through a coded char-
acter set identifier (CCSID). All character data in DB2, SQL/DS, and the OS/400
database management systems have a CCSID, which indicates a specific combina-
tion of encoding scheme, character set, and code page. All character data in an
Extended Services environment has a code page only (but the other database
managers treat that code page identification as a CCSID). A code page is a spe-
cific set of assignments between characters and internal codes.

For example, CCSID 37 means encoding scheme 4352 (EBCDIC), character set
697 (Latin, single-byte characters), and code page 37 (USA/Canada country
extended code page). CCSID 5026 means encoding scheme 4865 (extended
EBCDIC), character set 1172 with code page 290 (single-byte character set for
Katakana/ Kanji), and character set 370 with code page 300 (double-byte character
set for Katakana/Kanji).

DB2, SQL/DS, the OS/400 system, and DDCS/2 include mechanisms to convert
character data between a wide range of CCSID-to-CCSID pairs and CCSID-to-code
page pairs. Character conversion for many CCSIDs and code pages is already built
into these products. For a complete list and description of all CCSIDs registered in
CDRA, see the Character Data Representation Architecture - Level 1 Registry
book. For a description of the use of CCSIDs on the AS/400 system, see “Coded
Character Set Identifier (CCSID)” on page 10-16.

Application Requester Driver Programs


An application requester driver (ARD) program is a type of exit program that
enables SQL applications to access data managed by a database management
system other than DB2 for AS/400. AS/400 calls the ARD program during the fol-
lowing operations:
• The package creation step of SQL precompiling, performed using the
CRTSQLPKG or CRTSQLxxx commands, when the relational database (RDB)
parameter matches the RDB name corresponding to the ARD program.
• Processing of SQL statements when the current connection is to an RDB name
corresponding to the ARD program.

These calls allow the ARD program to pass the SQL statements and information
about the statements to a remote relational database and return results back to the
system. The system then returns the results to the application or the user. Access
to relational databases accessed by ARD programs appear like access to DRDA
application servers in the unlike environment.

For more information about application requester driver programs, see the System
API Reference.

Distributed Relational Database on the AS/400 System


All data on the AS/400 system is stored in a single relational database. DB2 for
AS/400 provides all the database management functions for this AS/400 relational
database. Distributed relational database support on the system is an integral part
of the OS/400 program, just as is support for communications, work management,
security functions and other functions.



The AS/400 system can be part of a distributed relational database network with
other systems that support a DRDA implementation. The AS/400 system can be an
AR or an AS in either like or unlike environments. Distributed relational database
implementation on the AS/400 system supports remote unit of work (RUW) and dis-
| tributed unit of work (DUW).1 RUW allows you to submit multiple requests to a
single database within a single unit of work, and DUW allows requests to multiple
databases to be included within a single unit of work. For example, using DUW
support you can decrement the inventory count of a part on one system and incre-
ment the inventory count of a part on another system within a unit of work, and
then commit changes to these remote databases at the conclusion of a single unit
of work using a two-phase commit process. DB2 for AS/400 does not support dis-
tributed requests, so you can only access one database with each SQL statement.
The level of support provided in an application program depends on the level of
support available on the target systems and the order in which connections and
updates are made. See “Connecting to a Distributed Relational Database” on
page 10-3 for more information.

In addition to DRDA access, ARD programs can be used to access databases that
do not support DRDA. Connections to relational databases accessed through ARD
programs are treated like connections to unlike systems. Such connections can
coexist with connections to DRDA application servers, connections to the local rela-
tional database, and connections which access other ARD programs.

| On AS/400, the distribution functions of snapshots and replication, introduced in
| “Other Distributed Relational Database Terms and Concepts” on page 1-5, are not
| automatically performed by the system. You can install and configure the
| DataPropagator Relational Capture and Apply product on AS/400 to perform these
| functions. Also, you can use these functions in user-written application programs.
| More information about how you can organize these functions in a distributed rela-
| tional database is discussed in Chapter 7, Data Availability and Protection.

| On AS/400, the distributed request function that is discussed in “Other Distributed
| Relational Database Terms and Concepts” on page 1-5 is not directly supported.
| However, the DataJoiner product can perform distributed queries, joining tables
| from a variety of data sources. DataJoiner works synergistically with DataGuide, a
| comprehensive information catalog in the IBM Information Warehouse family of pro-
| ducts. DataGuide provides a graphical user interface to complete information
| listings about a company's data resources.

The OS/400 program includes run-time support for SQL. You do not need the DB2
for AS/400 Query Manager and SQL Development Kit licensed program installed on
a DB2 for AS/400 application requester or application server to process distributed
relational database requests or to create an SQL collection on an AS/400.
However, you do need the DB2 for AS/400 Query Manager and SQL Development
Kit program to precompile programs with SQL statements, run interactive SQL, or
run DB2 for AS/400 Query Manager.

| 1 The TCP/IP protocol currently does not support DUW. However, in a program compiled with RDBCNNMTH(*DUW), you can
| access an RUW server and do updates, under certain conditions which include the restriction that all other connections are read-
| only.



Managing an AS/400 Distributed Relational Database
Managing a distributed relational database on the AS/400 system requires broad
knowledge of the resources and tools within the OS/400 licensed program. This
book provides an overview of the various functions available with the operating
system that can help you administer a distributed relational database on AS/400
systems. This guide explains distributed relational database functions and tasks in
a network of AS/400 systems (a like environment). Differences between AS/400
distributed relational database functions in a like and unlike environment are pre-
sented only in a general discussion in this guide. Considerations for different distrib-
uted relational database platforms working with AS/400 distributed relational
database are discussed in Appendix B, “Cross-Platform Access Using DRDA” on
page B-1. If you want more information about another IBM system that supports
DRDA, see the information provided with that system or the books listed in Distrib-
uted Relational Database Library and Other IBM Distributed Relational Database
Platform Libraries in the Bibliography.

Planning and Design


The first requirement for the successful operation of a distributed relational data-
base is thorough planning. The needs and goals of your enterprise must be consid-
ered when making the decision to use a distributed relational database. How you
code an application program, where it resides in relation to the data, and the
network design that connects application programs to data are all important design
considerations.

| Database design in a distributed relational database is more critical than when
| dealing with just one AS/400 relational database. With more than one AS/400
| system to consider, you must develop a consistent management strategy across
| the network. Operations that require particular attention when forming your strategy
| are: general operations, networking protocol, system security, accounting, problem
| analysis, and backup and recovery processes. Chapter 2, Planning and Design for
| Distributed Relational Database discusses some things to consider when planning
| for and designing a distributed database.

Communications
| The communications support for the DRDA implementation on the AS/400 is based
| on the AS/400 Distributed Data Management (DDM) Architecture. This support
| includes both native TCP/IP connectivity as well as the IBM Systems Network
| Architecture (SNA) through advanced program-to-program communications (APPC),
| with or without Advanced Peer-to-Peer Networking* (APPN*), and High-
| Performance Routing (HPR). In addition, OS/400 provides for APPC, and therefore
| DDM and distributed relational database access, over TCP/IP using AnyNet*
| support. AnyNet is not required for DRDA remote unit of work support over TCP/IP,
| but might be useful for distributed unit of work function over TCP/IP. See
| Chapter 3, Communications for an AS/400 Distributed Relational Database for
| more information about these functions and configuration samples.

| There are no restrictions on what communications transports an ARD exit program
| can use to access a relational database.



Security
The AS/400 system has security functions built into the operating system to limit
access to the data resources of an application server. Security options range from
simple physical security to full password security coupled with authorization to com-
mands and data objects. Users must be properly authorized to have access to the
database whether it is local or remote. They must also have proper authorization to
collections, tables, and other relational database objects necessary to run their
application programs. This typically means that distributed database users must
have valid user profiles for the databases they use throughout the network. Security
planning must consider user and application program needs across the network.
Chapter 4, Security for an AS/400 Distributed Relational Database provides infor-
mation on the security considerations for an AS/400 distributed relational database.

Set Up
The run-time support for an AS/400 distributed relational database is provided by
the OS/400 program. Therefore, when the operating system is installed, distributed
relational database support is installed. However, some setup work is required to
make the application requesters and application servers ready to send and receive
work. One or more subsystems can be used to control interactive, batch, spooled,
and communications jobs. All the systems in the network must also have their rela-
tional database directory set up with connection information. Finally, you may wish
to put data into the tables of the application servers throughout the network.

The relational database directory contains database names and values that are
translated into communications network parameters. You add an entry for each
database in the network, including the local database. Each directory entry con-
sists of a unique relational database name and corresponding communications path
information. For access provided by ARD programs, the ARD program name must
be added to the relational database directory entry.

There are a number of ways to enter data into a database. You can use an SQL
application program, some other high-level language application program, or one of
these methods:
Ÿ Interactive SQL
Ÿ OS/400 query management
Ÿ Data file utility (DFU)
Ÿ Copy File (CPYF) command
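
For example, using interactive SQL you could add a row to a table with a simple
INSERT statement like the following (the SPIFFY collection, the INVENTORY table,
and its columns are hypothetical):

   INSERT INTO SPIFFY.INVENTORY (PARTNO, DESCRIPTION, ONHAND)
          VALUES ('10023', 'Oil filter', 150)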

For more information on ways to enter data into a distributed database, along with
a discussion of subsystems and relational database directories on the AS/400
system, see Chapter 5, Setting Up an AS/400 Distributed Relational Database.

Administration
As a database administrator in a distributed relational database network, you may
need to locate and monitor work being done on any one of several systems. Work
management functions on the AS/400 system provide effective ways to track this
work by allowing you to do the following:
Ÿ Work with jobs in the network
Ÿ Work with the communications networks, controllers, devices, modes, and ses-
sions
| Ÿ Manage the DDM TCP/IP server
Ÿ Enable DDM conversations
Ÿ Grant and remove authority for users
Ÿ Stop work on a remote system

You can read more about how to do these tasks in Chapter 6, Distributed Rela-
tional Database Administration and Operation Tasks.

Data Protection and Availability


The AS/400 system provides a wide array of functions to ensure that data on
systems in a distributed relational database network is available for use. These
include save/restore functions of the system, journal management and access path
journaling, commitment control, auxiliary storage pools, checksum protection, mir-
rored protection and the uninterruptible power supply. While the system operator
for each AS/400 system is typically responsible for backup and recovery of that
system’s data, you should address provisions for network redundancy and data
redundancy to provide optimum data availability. Chapter 7, Data Availability and
Protection discusses these topics in more detail.

Performance
| No matter what kind of application programs you are running on a system, perform-
| ance can always be a concern. For a distributed relational database, network,
| system, and application performance are all crucial. System performance can be
| affected by the size and organization of main and auxiliary storage. There can also
| be performance gains if you know the strengths and weaknesses of SQL programs.
| Performance considerations for a distributed relational database are discussed in
| more detail later in this book.

Problems
When a problem occurs within a distributed relational database, it is necessary to
first identify where the problem originates. The problem may be on the application
server or application requester. When the database problem is located properly in
the network, you may further isolate the problem as a user problem, a problem with
an application program, an AS/400 system problem, or a communications problem in
order to correct the error. Chapter 9, Handling Distributed Relational Database
Problems describes the methods you can use to isolate and solve distributed data-
base problems.

Application Programming
Programmers can write high-level language programs that use SQL statements for
AS/400 distributed application programs. The main differences from programs
written for local processing only are the ability to connect to remote databases and
to create SQL packages. The CONNECT SQL statement can be used to explicitly
connect an application requester to an application server, or the name of the rela-
tional database can be specified when the program is created to allow an implicit
connection to occur. Also, the SET CONNECTION, RELEASE, and DISCONNECT
statements can be used to manage connections for applications that use distributed
unit of work.
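
As a simple sketch, a distributed unit of work application might manage its
connections with these statements (SYSTEMA and SYSTEMB are hypothetical
relational database names):

   CONNECT TO SYSTEMA
      (SQL statements that run at SYSTEMA)
   CONNECT TO SYSTEMB
      (SQL statements that run at SYSTEMB)
   SET CONNECTION SYSTEMA
      (more SQL statements that run at SYSTEMA)
   RELEASE SYSTEMB
   COMMIT

The RELEASE statement marks the connection to SYSTEMB to be ended at the
next successful commit, and the COMMIT commits the work done at both systems.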

An SQL package is an AS/400 object used only for distributed relational data-
bases. It can be created as a result of the precompile process of SQL or can be
created from a compiled program object. An SQL package resides on the applica-
tion server. It contains SQL statements, host variable attributes, and access plans
which the application server uses to process an application requester’s request.

Because application programs can connect to many different systems, program-
mers may need to pay more attention to data conversion between systems. The
AS/400 system provides for conversion of various types of data, including coded
character set identifier (CCSID) support for the management of character informa-
tion.

See Chapter 10, Writing Distributed Relational Database Applications for an over-
view of distributed relational database topics for the application programmer.

Spiffy Corporation Example


The Spiffy Corporation is used in several IBM manuals to describe distributed rela-
tional database support. In this manual, this fictional company has been changed
somewhat to illustrate AS/400 support for DRDA in an AS/400 network. Examples
used throughout this manual illustrate particular functions, connections, and proc-
esses. These may not correspond exactly to the examples used in other distributed
relational database publications but an attempt has been made to make them look
familiar.

Though the Spiffy Corporation is a fictional enterprise, the business practices
described here are modeled after those in use in several companies of similar con-
struction. However, this example does not attempt to describe all that can be done
using a distributed relational database, even by this example company.

Spiffy Organization and System Profile


Spiffy Corporation is a national product distributor that sells and services automo-
biles, among other products, to retail customers through a network of regional
offices and local dealerships. Given the high competitiveness of today's automobile
industry, the success of an operation like the Spiffy Corporation depends on high-
quality servicing and timely delivery of spare parts to the customer. To meet this
competition, Spiffy has established a vast service network incorporated within its
dealership organization.

The dealership organization is headed by a central vehicle distributor that is located
in Chicago, Illinois. There are several regional distribution centers across North
America. Two of these are located in Minneapolis, Minnesota and Kansas City,
Missouri. These centers minimize the distribution costs of vehicles and spare parts
by setting up regional inventories. The Minneapolis regional center serves approxi-
mately 15 dealerships while the Kansas City center serves as many as 30 dealer-
ships.

Figure 1-7 on page 1-14 illustrates a system organization chart for Spiffy Corpo-
ration.



Figure 1-7. The Spiffy Corporation System Organization. The chart shows the
Chicago central distributor at the top, the regional distribution centers MP000 and
KC000 below it, and dealerships D1 through D15 under MP000 and D1 through
D30 under KC000.

Spiffy is in the process of building up a nationwide integrated telecommunications
network. For the automobile division they are setting up a network of AS/400
systems for the regional distribution centers and the dealerships. These are con-
nected to an IBM 3090* at the central vehicle distributor. This network is considered
a vital business asset for maintaining the competitive edge.

| The central distributor runs OS/390 on its IBM 3090 system with DB2 and relevant
| decision support software. This system is used because of the large amounts of
| data that must be handled at any one time in a variety of application programs. The
| central vehicle distributor system is not dedicated to automobile division data proc-
| essing. It must handle work and processes for the corporation that do not yet
| operate in a distributed database environment. The regional centers are running
| AS/400 Model 650 e-systems. They use APPC/APPN with SNADS and 5250
| Display Station Pass-through using an SDLC protocol.

All of the dealerships use AS/400 systems but they may range in size from a Model
600 e-system in the smaller enterprises up to a Model 640 e-system in the largest.
These systems are connected to the regional office using SDLC protocol. The
largest dealerships have a part-time programmer and a system operator to tend to
the data processing functions of the enterprise. Most of the installations do not
employ anyone with programming expertise and some of the smaller locations do
not employ anyone with more than a very general knowledge of computers.



Business Processes of the Spiffy Corporation Automobile Service
The Spiffy Corporation automobile division has business practices that are auto-
mated in this distributed relational database environment. To keep the examples
from becoming more complicated than necessary, consider just those functions in
the company that pertain to vehicle servicing.

Dealerships can have from 2,000 to 20,000 customers. This translates to 5
service orders per day for a small dealership and up to 50 per day for a large deal-
ership. These service orders include scheduled maintenance, warranty repairs,
regular repairs, and parts ordering.

The dealers stock only frequently needed spare parts and maintain their own inven-
tory databases. Both regional centers provide parts when requested. Dealer inven-
tories are also stocked on a periodic basis by a forecast-model-controlled batch
process.

Distributed Relational Database Administration for the Spiffy Corporation
Each dealership manages its data processing resources and procedures as a
stand-alone enterprise. Spiffy Corporation requires that each dealership have one
or more AS/400 systems and that those systems must be available to the network
at certain times. However, the size of the system and the number of business proc-
esses that are automated on it are determined by each dealership's needs and the
resources available to it.

The Spiffy Corporation requires all dealerships to be active in the inventory distrib-
uted relational database. Since the corporation operates its own dealerships, it has
a full complement of dealership software that may or may not access the distributed
relational database environment. The Spiffy dealerships use the full set of software
tools. Most of the private franchises use them also since they are tailored specif-
ically to the Spiffy Corporation way of doing business.

The regional distribution centers manage the inventory for their region. They also
function as the database administrator for all distributed database resources used
in the region. The responsibilities involved vary depending on the level of data proc-
essing competency at each dealership. The regional center is always the first
contact for help for any dealership in the region.

The Minneapolis regional distribution center has a staff of AS/400 programmers
with a wide range of experience and knowledge about the systems and the
network. The dealership load is about one half that of other regional centers to
allow this center to focus on network-wide AS/400 support functions. These func-
tions include application program development, program maintenance, and problem
handling.

The following are the database responsibilities for each level of activity in the
network:
Dealerships
Ÿ Perform basic operation and administration of system
Ÿ Enroll local users
Regional distribution centers
Ÿ Set up data processing for new dealerships
Ÿ Disperse database resources for discontinued dealerships
Ÿ Enroll network users in region
Ÿ Maintain inventory for region
Ÿ Develop service plans for dealerships
Ÿ Operate help desk for dealerships
In addition to the regional distribution center activities above, the Minneapolis
AS/400 competency center does the following activities:
Ÿ Develop applications for AS/400 network
Ÿ Operate help desk for regional centers
Ÿ Tune database performance
Ÿ Alert focal point
Ÿ Resolve database problems

Examples used throughout this manual are associated with one or more of these
activities. Many examples show the process of obtaining a part from inventory in
order to schedule customer service or repairs. Others show distributed relational
database administration tasks used to set up, secure, monitor, and resolve prob-
lems for systems in the Spiffy Corporation distributed relational database network.



Chapter 2. Planning and Design for Distributed Relational
Database
To prepare for a distributed relational database, you must understand both the
needs of the business and relational database technology.

Because the planning and design of a distributed relational database are closely
linked to each other, this chapter combines these topics when discussing the fol-
lowing related tasks:
Ÿ Identifying your needs and expectations
Ÿ Designing the application, data, and network
Ÿ Putting together a management strategy

Additional information about planning for distributed relational databases can be
found in the Planning for Distributed Data book.

Identifying Your Needs and Expectations


When analyzing your needs and expectations of a distributed relational database,
consider the following:
1. Data needs. What data is pertinent to your plans, who will need it, for what
reason, and how often?
2. Distributed relational database capabilities. Do the requirements lend them-
selves to a distributed relational database solution?
3. Goals and directions. If a distributed relational database appears to be a viable
solution, what short-term and long-term goals can be met?

Data Needs
The first step in your analysis is to determine which factors affect your data and
how they affect it. Ask yourself the following questions:
Ÿ What locations are involved?
Ÿ What kind of transactions do you envision?
Ÿ What data is needed for each transaction?
Ÿ What dependencies do items of data have on each other, especially referential
limitations? For example, will information in one table need to be checked
against the information in another table? (If so, both tables must be kept at the
same location.)
Ÿ Does the data currently exist? If so, where is it located? Who "owns" it (that is,
who is responsible for maintaining the accuracy of the data)?
Ÿ What priority do you place on the availability of the needed data? Integrity of
the data across locations? Protection of the data from unauthorized access?
Ÿ What access patterns do you envision for the data? For instance, will the data
be read, updated, or both? How frequently? Will a typical access return a lot of
data or a little data?
Ÿ What level of performance do you expect from each transaction? What
response time is acceptable?

Distributed Relational Database Capabilities


The second step in your analysis is to decide whether or not your data needs lend
themselves to a distributed relational database solution.

Applications where most database processing is done locally and access to remote
data is needed only occasionally are typically good candidates for a distributed rela-
tional database.

Applications with the following requirements are usually poor candidates for a dis-
tributed relational database:
Ÿ The data is kept at a central site and most of the work that a remote user
needs to do is at the central site.
Ÿ Consistently high performance, especially consistently fast response time, is
needed. It takes longer to move data across a network.
Ÿ Consistently high availability, especially twenty-four hour, seven-day-a-week
availability, is needed. Networks involve more systems and more in-between
components, such as communications lines and communications controllers,
which increases the chance of breakdowns.
Ÿ A distributed relational database function that you need is not currently avail-
able or announced.

Goals and Directions


The third step in your analysis is to assess your short-term and long-term goals.

SQL is the standard IBM database language. If your goals and directions include
portability or remote data access on unlike systems, you should use distributed
relational database on the AS/400 system.

The distributed database function of distributed unit of work, as well as the addi-
tional data copying function provided by DataPropagator Relational Capture and
Apply, broadens the range of activities you can perform on AS/400. However, if
your distributed database application requires a function that is not currently avail-
able on the AS/400, other options are available until the function is made available
on the operating system. For example, you may do one of the following:
Ÿ Provide the needed function yourself
Ÿ Stage your plans for distributed relational database to allow for the new func-
tion to become available
Ÿ Reassess your goals and requirements to see if you can satisfy them with a
currently available or announced function. Some alternative solutions are listed
in Figure 2-1. These alternatives can be used to supplement or replace avail-
able function.



Figure 2-1. Alternative Solutions to Distributed Relational Database

Distributed Data Management (DDM)
   Description:    A function of the operating system that allows an application
                   program or user on one system to use database files stored on
                   a remote system. The system must be connected by a
                   communications network, and the remote system must also use
                   DDM.
   Advantages:     Ÿ For simple read and update accesses, the performance is
                     better than for SQL.
                   Ÿ Existing applications do not need to be rewritten.
                   Ÿ Can be used to access S/38, S/36, and CICS*.
   Disadvantages:  Ÿ SQL is more efficient for complex functions.
                   Ÿ May not be able to access other distributed relational
                     database platforms.
                   Ÿ Does not perform CCSID and numeric data conversions.

Intersystem Communications Function/Common Programming Interface
(ICF/CPI Communications)
   Description:    ICF is a function of the operating system that allows a
                   program to communicate interactively with another program or
                   system. CPI Communications is a call-level interface that
                   provides a consistent application interface for applications
                   that use program-to-program communications. These interfaces
                   make use of SNA's logical unit (LU) 6.2 architecture to
                   establish a conversation with a program on a remote system,
                   to send and receive data, to exchange control information,
                   to end a conversation, and to notify a partner program of
                   errors.
   Advantages:     Ÿ Allows you to customize your application to meet your
                     needs.
                   Ÿ Can provide better performance.
   Disadvantages:  Compared to distributed relational database and DDM, a more
                   complicated program is needed to support communications and
                   data conversion requirements.

Display station pass-through
   Description:    A communications function that allows a user to sign on to
                   one AS/400 system from another AS/400 system and use that
                   system's programs and data.
   Advantages:     Ÿ Applications and data on remote systems are accessible
                     from the local system.
                   Ÿ Allows for quick access when data is volatile and a large
                     amount of data on one system is needed by users on
                     several systems.
   Disadvantages:  Response time on screen updates is slower than for locally
                   attached devices.

A distributed relational database usually evolves from simple to complex as busi-
ness needs change and new products are made available. Remember to consider
this when analyzing your needs and expectations.



Designing the Application, Network, and Data
Designing a distributed relational database involves making choices about the appli-
cations, network, and data.

Designing Applications — Tips


Distributed relational database applications have different requirements from appli-
cations developed solely for use on a local database. To properly plan for these
differences, design your applications with the following in mind:
Ÿ Take advantage of the distributed unit of work (DUW) function where appro-
priate.
| Note: The current AS/400 implementation of DRDA limits you to one-phase
| commit capability where TCP/IP is used. However, you can still interop-
| erate with one-phase commit servers in an application compiled with the
| *DUW connection management option under certain restrictions. For
| example, if the first connection made from the application is to an
| AS/400 over TCP/IP, updates can be made to that system, but any sub-
| sequent connections will be read-only.
Ÿ Code programs using common interfaces.
Ÿ Consider dividing a complex application into smaller parts and placing each
piece of the application in the location best suited to process it. One good way
to distribute processing in an application is to make use of the SQL CALL
statement to run a stored procedure at a remote location where the data to be
processed resides. The stored procedure is not limited to SQL operations when
it runs on a DB2/400 application server; it can use integrated database
input/output or perform other types of processing. (A sketch of this approach
follows this list.)
Ÿ Investigate how the initial database applications will be prepared, tested, and
used.
Ÿ Take advantage, when possible, of SQL set-processing capabilities. This will
minimize communication with the application servers. For example, update mul-
tiple rows with one SQL statement whenever you can.
| Ÿ Be aware that database updates within a unit of work must be done at a single
| site if the RUW connection method is used when the programs are prepared, or
| if the other nodes in the distributed application do not support DUW.
Ÿ Keep in mind that the DUW connection method restricts you from directing a
single statement to more than one relational database.
Ÿ Performance is affected by the choice of connection management methods.
Use of the RUW connection management method might be preferable if you do
not have the need to switch back and forth among different remote relational
databases. This is because more overhead is associated with the two-phase
commit protocols used with DUW connection management.
However, if you have to switch frequently among multiple remote database
management systems, use DUW connection management. When running with
DUW connection management, communication conversations to one database
management system do not have to be ended when you switch the connection
to another database management system. In the like environment, this is not as
big a factor as in the unlike environment, since conversations in the like envi-
ronment can be kept active by use of the default DDMCNV(*KEEP) job defi-
nition attribute. Even in the like environment, however, a performance
advantage can be gained by using DUW to avoid the cost of closing cursors
and sending the communication flow to establish a new connection.
Ÿ The connection management method determines the semantics of the
CONNECT statement. With the RUW connection management method, the
CONNECT statement ends any existing connections prior to establishing a new
connection to the relational database. With the DUW connection management
method, the CONNECT statement does not end existing connections.
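
To illustrate the stored procedure and set-processing suggestions in the preceding
list, the following sketch directs processing to the system where the data resides
and changes many rows with a single statement. The relational database name
KC000 is taken from the Spiffy example; the PARTS_REORDER procedure, the
INVENTORY table, and its columns are hypothetical:

   CONNECT TO KC000
   CALL PARTS_REORDER ('10023', 25)
   UPDATE INVENTORY
      SET REORDER_POINT = REORDER_POINT * 1.10
      WHERE SUPPLIER = 'ACME'

The CALL statement runs the stored procedure at the application server, so only
the parameters and results flow across the network, and the UPDATE statement
changes every qualifying row with one request rather than one request for each
row.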

Network Considerations
The design of a network directly affects the performance of a distributed relational
database. To properly design a distributed relational database that works well with
a particular network, do the following:
Ÿ Because the line speed can be very important to application performance,
provide sufficient capacity at the appropriate places in the network to achieve
efficient performance to the main distributed relational database applications.
| See the Communications Management book for more information.
Ÿ Evaluate the available communication hardware and software and, if necessary,
your ability to upgrade.
| Ÿ For APPC connections, consider the session limits and conversation limits
| specified when the network is defined.
Ÿ Identify the hardware, software, and communication equipment needed (for
both test and production environments), and the best configuration of the equip-
ment for a distributed relational database network.
| Ÿ Consider the skills that are necessary to support TCP/IP as opposed to those
| that are necessary to support APPC. Also consider the additional functionality
| offered by APPC (that is, two-phase commit).
Ÿ Take into consideration the initial service level agreements with end user
groups (such as what response time to expect for a given distributed relational
database application), and strategies for monitoring and tuning the actual
service provided.
Ÿ Develop a naming strategy for database objects in the distributed relational
database and for each location in the distributed relational database system. A
location is a specific relational database management system in an intercon-
nected network of relational database management systems that participate in
distributed relational database. Consider the following when developing this
strategy:
– The fully qualified name of an object in a distributed database system has
three (rather than two) parts, and the highest-level qualifier identifies the
location of the object (see the example following this list).
– Each location in a distributed relational database system should be given a
unique identification; each object in the system should also have a unique
identification. Duplicate identifications can cause serious problems. For
example, duplicate locations and object names may cause an application to
connect to an unintended remote database, and once connected, access
an unintended object. Pay particular attention to naming when networks are
coupled.
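
For example, a fully qualified table name has the form shown below, where the
highest-level qualifier is the relational database (location) name. The specific
names are hypothetical:

   KC000.SPIFFY.INVENTORY

Here KC000 identifies the location, SPIFFY the collection, and INVENTORY the
table.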



Data Considerations
The placement of data in respect to the applications that need it is an important
consideration when designing a distributed relational database. When making such
placement decisions, consider the following:
Ÿ The level of performance needed from the applications
Ÿ Requirements for the security, currency, consistency, and availability of the
data across locations
Ÿ The amount of data needed and the predicted patterns of data access
Ÿ If the distributed relational database functions needed are available
Ÿ The skills needed to support the system and the skills that are actually avail-
able
Ÿ Who "owns" the data (that is, who is responsible for maintaining the accuracy
of the data)
Ÿ Management strategy for cross-system security, accounting, monitoring and
tuning, problem handling, data backup and recovery, and change control
Ÿ Distributed database design decisions, such as where to locate data in the
network and whether to maintain single or multiple copies of the data

Developing a Management Strategy


This section discusses strategies for managing a distributed relational database.

General Operations
To plan for the general operation of a distributed relational database, consider both
performance and availability. The following design considerations can help you
improve both the performance and availability of a distributed relational database:
Ÿ If an application involves transactions that run frequently or that send or receive
a lot of data, you should try to keep it in the same location as the data.
Ÿ For data that needs to be shared by applications in different locations, put the
data in the location with the most activity.
Ÿ If the applications in one location need the data as much as the applications in
another location, consider keeping copies of the data at both locations. When
keeping copies at multiple locations, ask yourself the following questions about
your management strategy:
– Will users be allowed to make updates to the copies?
– How and when will the copies be refreshed with current data?
– Will all copies have to be backed up or will backing up one copy be suffi-
cient?
– How will general administration activities be performed consistently for all
copies?
– When is it permissible to delete one of the copies?
Ÿ Consider whether the distributed databases will be administered from a central
location or from each database location.

Performance may also be improved by doing the following:
Ÿ If data and applications must be kept at different locations, do the following to
keep the performance within acceptable limits:
– Keep data traffic across the network as low as possible by only retrieving
the data columns that will be used by the application; that is, avoid using *
in place of a list of column names as part of a SELECT statement.
– Discourage programmers from coding statements that send large amounts
of data to or receive large amounts of data from a remote location; that is,
encourage the use of the WHERE clause of the SELECT statement to limit
the number of rows of data.
– Use referential integrity, triggers, and stored procedures (an SQL CALL
statement after a CONNECT to a remote relational database management
system); this improves performance by distributing processing to the AS,
which can substantially reduce line traffic.
– Use read-only queries where appropriate by specifying the FOR FETCH
ONLY clause (see the example following this list).
– Be aware of rules for blocking of queries. For example, in
AS/400-to-AS/400 queries, blocking of read-only data is done only for
COMMIT(*NONE), or for COMMIT(*CHG) and COMMIT(*CS) when
ALWBLK(*ALLREAD) is specified.
– Keep the number of accesses to remote data low by using local data in
place of remote data whenever possible.
– Use SQL set operations to process multiple rows at the application
requester with a single SQL request.
– Try to avoid dropping of connections by using DDMCNV(*KEEP) when
running with RUW connection management, or by running with DUW con-
nection management.
Ÿ Provide sufficient network capacity by doing the following:
– Increase the capacity of the network by installing high-speed, high-
bandwidth lines or by adding lines at appropriate points in the network.
– Reduce the contention or improve the contention balance on certain
processors. For example, move existing applications from a host system to
a departmental system or group some distributed relational database work
into batch.
Ÿ Encourage good table design. At the distributed relational database locations,
encourage appropriate use of primary keys, table indexes, and normalization
techniques.
Ÿ Ensure data types of host variables used in WHERE clauses are consistent
with the data types of the associated key column data types. For example, a
floating-point host variable has been known to disqualify the use of an index
built over a column of a different data type.
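
For instance, a remote query written with these suggestions in mind might look
like the following; the SERVICE_ORDERS table and its columns are hypothetical:

   SELECT ORDERNO, CUSTNO, PARTNO, QUANTITY
     FROM SERVICE_ORDERS
     WHERE STATUS = 'OPEN' AND DEALER = 'MP115'
     FOR FETCH ONLY

Naming only the needed columns and limiting the rows with the WHERE clause
keeps data traffic low, and the FOR FETCH ONLY clause allows the query to be
treated as read-only so that blocking can be used.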

Availability may also be improved by doing the following:


Ÿ In general, try to limit the amount of data traffic across the network.
Ÿ If data and applications must be kept at different locations, do the following to
keep the availability within acceptable limits:
– Establish alternate network routes.
– Consider the effect of time zone differences on availability:
- Will qualified people be available to bring up the system?
- Will off-hours batch work interfere with processing?
– Ensure good backup and recovery features.
– Ensure people are skilled in backup and recovery.

Security
Part of planning for a distributed relational database involves the decisions you
must make about securing distributed data. These decisions include:
Ÿ What systems should be made accessible to users in other locations and which
users in other locations should have access to those systems.
Ÿ How tightly controlled access to those systems should be. For example, should
a user password be required when a conversation is started by a remote user?
| Ÿ Is it required that passwords flow over the wire in encrypted form?
| Ÿ Is it required that a user profile under which a client job runs be mapped to a
| different user identification or password based on the name of the relational
| database to which you are connecting?
Ÿ What data should be made accessible to users in other locations and which
users in other locations should have access to that data.
Ÿ What actions those users should be allowed to take on the data.
Ÿ Whether authorization to data should be centrally controlled or locally con-
trolled.
Ÿ If special precautions should be taken because multiple systems are being
linked. For example, should name translation be used?

When making the previous decisions, consider the following when choosing
locations:
Ÿ Physical protection. For example, a location may offer a room with restricted
access.
Ÿ Level of system security. The level of system security often differs between
locations. The security level of the distributed database is no greater than the
lowest level of security used in the network.
| All systems connected by APPC can do the following:
| – If both systems are AS/400 systems, communicate passwords in encrypted
| form.
| – Verify that when one system receives a request to communicate with
| another system in the network, the requesting system is actually "who it
| says it is" and that it is authorized to communicate with the receiving
| system.
| All systems can do the following:
| – Pass a user's identification and password from the local system to the
| remote system for verification before any remote data access is allowed.
| – Grant and revoke privileges to access and manipulate SQL objects such as
| tables and views.



The AS/400 system includes security audit functions that allow you to track
unauthorized attempts to access data, as well track other events pertinent to
security. The system also provides a function that can prevent all distributed
database access from remote systems.
– Security-related costs. When considering the cost of security, consider both
the cost of buying security-related products and the price of your informa-
tion staff's time to perform the following activities:
- Maintain system identification of remote-data-accessing users at both
local and remote systems.
- Coordinate auditing functions between sites.

Accounting
You need to be able to account and charge for the use of distributed data. Con-
sider the following:
Ÿ Accounting for the use of distributed data involves the use of resources in one
or more remote systems, the use of resources on the local system, and the use
of network resources that connect the systems.
Ÿ Accounting information is accumulated by each system independently. Network
accounting information is accumulated independent of the data accumulated by
the systems.
Ÿ The time zones of various systems may have to be taken into account when
trying to correlate accounting information. Each system clock may not be syn-
chronized with the remote system clock.
Ÿ Differences may exist between each system's permitted accounting codes
(numbers). For example, the AS/400 system restricts accounting codes to a
maximum of 15 characters.

The following functions are available to account for the use of distributed data:
Ÿ AS/400 job accounting journal. The AS/400 system writes job accounting infor-
mation into the job accounting journal for each distributed relational database
application. The Display Journal (DSPJRN) command can be used to write the
accumulated journal entries into a database file. Then, either a user-written
program or query functions can be used to analyze the accounting data. For
more information, see “Job Accounting” on page 6-16.
Ÿ NetView* accounting data. The NetView licensed program can be used to
record accounting data about the use of network resources.

Problem Analysis
Problem analysis needs to be managed in a distributed database environment.
Problem analysis involves both identifying and resolving problems for applications
that are processed across a network of systems. Consider the following:
Ÿ Distributed database processing problems manifest themselves in various ways.
For example, an error return code may be passed to a distributed database
application by the system that detects the problem. In addition, responses may
be slow, wrong, or nonexistent.
Ÿ Tools are available to diagnose distributed database processing problems. For
example, each distributed relational database product provides trace functions
that can assist in diagnosing distributed data processing problems.
Ÿ When system failures are detected by an AS/400 system, the system does the
following:
– Logs information about program status immediately after the failure is
detected.
– Produces an alert message. All the alerts can be directed to a single
control point in the network. This can be either the NetView licensed
program or an AS/400 system.
| Note: Alerts flow only over APPC; they do not flow over TCP/IP.
– If a correction to an IBM program is required and if you have a
System/390* with Network Distribution Manager (NDM) installed in the
network, you can use the NDM and the Distributed System Node Executive
products to receive and transmit updates and replacements to appropriate
systems in the network.

Backup and Recovery


In a single-system environment, backup and recovery takes place locally. But in a
distributed database, backup and recovery also affects remote locations.

The AS/400 system allows individual tables, collections, or groups of collections to
be backed up and recovered. Although backup and recovery can only be done
locally, you may want to plan to have less critical data on a system that does not
have adequate backup support. Backup and recovery procedures must be con-
sistent with data that may exist on more than one application server. Because you
have more than one system in the network, you may want to save such data to a
second system so that it is always available to the network in some form. Strate-
gies such as these need to be planned and laid out specifically before a database
is distributed across the network.



Chapter 3. Communications for an AS/400 Distributed
Relational Database
This chapter describes which communications functions to use when you are
setting up a network or changing an existing network to work with a distributed rela-
tional database. This guide does not contain all the information you need. It is
intended to help you ask the right questions and determine your own answers,
ensuring maximum use of your resources based on the needs of your business.

This chapter discusses distributed relational database supported communications
functions, including communications types and lines, and AS/400 functions, such as
alert support for problem notification. Steps for configuring a network and config-
uring alerts are provided with an accompanying configuration example.

Additional information about how to connect unlike systems in a network for distrib-
uted relational database work can be found in the Distributed Relational Database
Architecture Connectivity Guide, SC26-4783.

Communications Tools
| Communications support for the DRDA implementation on the AS/400 system was
| initially provided only under the IBM Systems Network Architecture (SNA) through
| the Advanced Program-to-Program Communications (APPC) protocol, with or
| without Advanced Peer-to-Peer Networking (APPN).

AnyNet support on AS/400 allows APPC applications to run over Transmission
Control Protocol/Internet Protocol (TCP/IP) networks. Examples in the sections that
follow include DDM and Alerts. These applications along with DRDA can run
unchanged over TCP/IP networks with some additional configuration.

| Native TCP/IP support for DRDA, introduced most recently, is limited to remote unit
| of work, single-phase commit protocols.

| The examples and specifications in this chapter are specific to SNA configurations
| and native TCP/IP only. For more information on APPC over TCP/IP, refer to the
| Communications Configuration book. For more information on setting up native
| TCP/IP support, see the TCP/IP Configuration and Reference book.

Systems Network Architecture


SNA is an architecture made up of several logical unit (LU) types. These logical
units are architectural definitions of how to communicate with systems, controllers,
and terminals that also support the same LU types. All of the SNA support neces-
sary for distributed relational database on the AS/400 system is part of the OS/400
licensed program.

APPC/APPN
APPC is the AS/400 system implementation of SNA LU 6.2 and physical unit (PU)
T2.1 architectures. It allows applications that reside on different processors to com-
municate and exchange data in a peer relationship with one another.

APPN support is an enhancement to the PU T2.1 architecture that provides net-
working functions such as:
Ÿ Dynamically locating LUs in the network by searching distributed directories
Ÿ Dynamically selecting routes to LUs based on selection characteristics when an
application requests a session
Ÿ Intermediate routing of LU 6.2 session traffic through the node for sessions
between other LU 6.2 partners
Ÿ Routing session data based on transmission priorities
Ÿ Dynamically creating and starting remote location partner definitions
| Ÿ High-Performance Routing (HPR), which is an addition to the APPN architec-
| ture that enhances APPN routing performance and reliability, especially when
| using high-speed links.
APPC and APPN also support these IBM-supplied functions:
Ÿ SNA distribution services (SNADS)
Ÿ Display station pass-through to the AS/400 system
Ÿ Alert support to help you manage problems from a central location

| AS/400 APPN and HPR are documented in the APPN Support book.

Using DDM and Distributed Relational Database


The DRDA implementation on the AS/400 system uses Distributed Data Manage-
ment (DDM) architecture commands to communicate with other systems. However,
distributed relational database and DDM support handle some functions differently.

Using distributed relational database processing, the application connects to a
remote system using a relational database directory on the local system. The rela-
tional database directory provides the necessary links between a relational data-
base name and the communications path to that database. An application running
under distributed relational database only has to identify the database name and
run the SQL statements needed for processing.

Using DDM support, the remote file is identified and the communications path is
provided by means of a DDM file on the local system.

| If you have an SNA network, you can use DDM to support distributed relational
| database processing for administrative tasks such as submitting remote commands,
| copying files, and moving data from one system to another. To use DDM support, a
| DDM file must be created. This is discussed in “Setting Up DDM Files” on
| page 5-13. Using a DDM file with the AS/400 copy file commands is discussed in
| “Using Copy File Commands Between Systems” on page 5-19. Using DDM files to
| submit a remote command is discussed in “Submit Remote Command
| (SBMRMTCMD) Command” on page 6-8.



| If you have an IP network, there are similar functions available for some of the
| DDM-related things that are discussed in this section. For example, you can use
| FTP and the Run Remote Command (RUNRMTCMD) command.

Alert Support
Alert support on the AS/400 system allows you to manage problems from a central
location. Alert support is useful for managing systems that do not have an operator,
managing systems where the operator is not skilled in problem management, and
maintaining control of system resources and expenses.

On the AS/400 system, alerts are created based on messages that are sent to the
local system operator. These messages are used to inform the operator of prob-
lems with hardware resources, such as local devices or controllers, communication
lines, or remote controllers or devices. These messages can also report software
errors detected by the system or application programs.

Any message with the alert option field (located in the message description) set to
a value other than *NO can generate an alert. Alerts are generated from several
types of messages:
Ÿ OS/400 messages defined as alerts.
OS/400 support sends alerts for problems related to distributed relational data-
base functions. For more information about distributed relational database
related alerts, see “Alerts” on page 9-23.
Ÿ IBM-supplied messages where the value in the alert option field is specified as
*YES by the Change Message Description (CHGMSGD) command. In this way,
you can select the messages for which you want alerts sent to the distributed
relational database administrator.
Ÿ Messages that you create and define as alerts, or that you create with the
QALGENA application program interface (API).

In a distributed relational database, a system is part of a communications network,
and local system messages cause alerts to be created and sent through the
network to a central problem management site called a focal point. An alert focal
point is a system in a network that receives and processes (logs, displays, and
optionally sends) alerts. This allows you to centralize management of the network
at the focal point.

A focal point’s sphere of control is a collection of network node control points or
systems within an APPN network from which the focal point system receives alerts.
The focal point maintains connectivity with other network nodes in the sphere of
control, accepts alerts received from systems in the sphere of control, and forwards
alerts to a higher level focal point, if one exists.

An AS/400 system can be defined to be a primary focal point or a default focal
point. As a primary focal point, the system receives alerts from all systems explic-
itly defined in its sphere of control. As a default focal point, the system receives
alerts from all systems that do not already have a primary focal point.

The AS/400 system also provides the capability to nest focal points. You can define
a high level focal point, which accepts all of the alerts collected by lower level focal
points.



See “Configuring Alert Support” on page 3-16 for an example configuration for alert
handling.

Distributed Relational Database Network Considerations


Communications usage increases in a distributed relational database and there are
some things you should consider for your communications network when you
depend on it for database processing.

Because of the increased usage that comes with distributed relational database
processing, you may want to increase the maximum number of sessions parameter
(MAXSSN) and the maximum number of conversations parameter (MAXCNV) for
the MODE description created for both the local and remote location.

In addition to increasing capacity through the MODE descriptions, you may want to
consider increasing the line speed for various lines within the network or selecting a
better quality line to improve performance of the network for your distributed rela-
tional database processing.

Another consideration for your distributed relational database network is the ques-
tion of data accessibility and availability. The more critical a certain database is to
daily or special enterprise operations, the more you need to consider how users
can access that database. This means examining paths and alternative paths
through the network to provide availability of the data as it is needed. More about
this topic is discussed in Chapter 7, Data Availability and Protection.

Line speed and how you configure your communications line can significantly affect
network performance. However, it is important to ask a few questions about the
nature of the information being transferred in relation to both line speed and type of
use. For example:
Ÿ How much information must be moved?
Ÿ What is a typical transaction and unit of work for batch applications?
Ÿ What is a typical transaction and unit of work for an interactive application and
how much data is sent and received for each transaction?
Ÿ How many application programs or users will be using the line at the same
time?

| For more information about network planning and performance considerations for
| APPC, see the Communications Management book.

Configuring Communications for a Distributed Relational Database


The following section briefly describes how to configure APPC communications for
a distributed relational database and how to set up alerts for systems in the distrib-
uted relational database network. Both of these topics are complex; for example,
communications can be configured with many variations. The following discussions
use basic configuration examples to illustrate the steps needed to configure
systems in a network and handle alerts at a central location.



| Configuring a Communications Network for APPC
Configuring communications for a distributed relational database requires that the
local and remote systems are defined in the network. Once the systems in the
network are defined, you can use DDM functions or SNADS to distribute informa-
tion throughout the network, establish your alert handling systems, use display
station pass-through to connect to a target system from a workstation on a local
system, and setup a relational database directory for systems in the distributed
relational database network. A relational database directory associates communi-
cations configuration values with the names of relational databases in the distrib-
uted relational database network. See Chapter 5, Setting Up an AS/400 Distributed
Relational Database for information about setting up the relational database direc-
tory.

Each AS/400 system in the network must be defined so that each system can iden-
tify itself and the remote systems in the network. To define a system in the network
you must:
1. Define the network attributes.
| 2. Create network interfaces and network server descriptions, if necessary.
3. Create the appropriate line descriptions.
4. Create a controller description.
| 5. Create a class-of-service description for APPC connections.
| 6. Create a mode description for APPC connections.
7. Create device descriptions automatically or manually.

Defining Network Attributes


| To define the network attributes, use the Change Network Attributes (CHGNETA)
| command. The network attributes contain the local system name, the default local
| location name, the default control point name, the local network identifier, and the
| network node type. If the machine is an end-node, the attributes also contain the
| names of the network servers used by this AS/400 system. Network attributes also
| determine whether or not the system will use HPR.

| Defining a Network Interface Description


| Create a network interface description, if the type of line you are using requires
| one. Use the following commands to create network interfaces (a brief sketch
| follows the list):
| • Create Network Interface (ATM) (CRTNWIATM)
| • Create Network Interface (Frame-Relay Network) (CRTNWIFR)
| • Create Network Interface (ISDN) (CRTNWIISDN)
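As a rough sketch only, a frame-relay network interface might be created as shown
below. The interface name and resource name are hypothetical, and you should
verify the exact parameters for your release in the online command help or the
Communications Configuration book.

CRTNWIFR   NWID(SPIFFYFR) RSRCNAME(LIN041)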

Defining a Line Description


Create a line description to describe the physical line connection and the data link
protocol to be used between the AS/400 system and the network. Use the following
commands to create line descriptions (an example follows the list):

| • Create Line Description (Ethernet) (CRTLINETH)
| • Create Line Description (DDI) (CRTLINDDI)
| • Create Line Description (Frame-Relay) (CRTLINFR)
| • Create Line Description (IDLC) (CRTLINIDLC)
| • Create Line Description (SDLC) (CRTLINSDLC)
| • Create Line Description (Token-ring) (CRTLINTRN)
| • Create Line Description (Wireless) (CRTLINWLS)
| • Create Line Description (X.25) (CRTLINX25)
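For example, an Ethernet line attached to the communications port LIN041 could be
described as follows; the line name and resource name are illustrative:

CRTLINETH  LIND(SPIFFYETH) RSRCNAME(LIN041)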

Defining a Controller Description


A controller description describes the adjacent systems in the network. The use of
APPN support is indicated by specifying APPN(*YES) when creating the controller
description. Use the following commands to create controller descriptions:
• Create Controller Description (APPC) (CRTCTLAPPC)
• Create Controller Description (SNA HOST) (CRTCTLHOST)

| If the AUTOCRTCTL parameter on a token-ring, Ethernet, wireless, or DDI line
| description is set to *YES, then a controller description is automatically created
| when the system receives a session start request over the line.

To specify AnyNet support, you specify *ANYNW on the LINKTYPE parameter of
the CRTCTLAPPC command.

Other Configuration Considerations


If additional local locations or special characteristics of remote locations for APPN
are required, APPN location lists must be created. One local location name is the
control point name specified in the network attributes. If additional locations are
needed for the AS/400 system, an APPN local location list is required. Special
characteristics of remote locations include whether the remote location is in a dif-
ferent network from the local location and security requirements. If special charac-
teristics of remote locations exist, an APPN remote location list is required. APPN
location lists can be created using the Create Configuration List (CRTCFGL)
command. See “Session Level and Location Security for APPC Connections” on
page 4-2 for more information about APPN configuration lists and security require-
ments.
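As a sketch, additional local location names could be defined with a local location
list like the one below; the location names are hypothetical, and the APPNLCLE
element format should be verified in the APPN Support book (a remote location list
is created similarly with TYPE(*APPNRMT)):

CRTCFGL    TYPE(*APPNLCL) APPNLCLE((MPWHS01) (MPSVC01))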

The communications descriptions can be varied on (activated) by using the Vary
Configuration (VRYCFG) command or the Work with Configuration Status
(WRKCFGSTS) command. If the nonswitched line descriptions are varied on, the
appropriate controllers and devices attached to that line are also varied on. The
WRKCFGSTS command also gives the status of each connection. For more infor-
mation about working with communications configuration status, see Chapter 6, Dis-
tributed Relational Database Administration and Operation Tasks.
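For example, assuming the nonswitched line KC000L that is created in the example
later in this chapter, the line and its attached configuration objects could be
activated and checked like this:

VRYCFG     CFGOBJ(KC000L) CFGTYPE(*LIN) STATUS(*ON)
WRKCFGSTS  CFGTYPE(*LIN) CFGD(KC000L)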

Notes:
1. The controller description is equivalent to the IBM Network Control Program
and Virtual Telecommunications Access Method (NCP/VTAM*) PU macros.
The information in a controller description is found in the Extended Services
Communication Manager Partner LU profile.
2. The device description is equivalent to the NCP/VTAM logical unit (LU) macro.
The information in a device description is found in Extended Services Commu-
nications Manager Partner LU and LU profiles.

3-6 OS/400 Distributed Database Programming V4R2


3. The mode description is equivalent to the NCP/VTAM mode tables. The infor-
mation in a mode description is found in Extended Services Communications
Manager Transmission Service Mode profile and Initial Session Limits profile.

The Communications Configuration and the APPN Support books contain more
information about configuring for networking support and working with location lists.

| Configuring a Communications Network for TCP/IP


| The following steps provide a high-level overview of setting up a TCP/IP network;
| a sketch of the corresponding commands follows the list. For details, see the
| Getting Your AS/400 Working for You and TCP/IP Configuration and Reference books.
| 1. Identify your AS/400 to the local network (the network that your AS/400 is
| directly connected to).
| a. Determine if a line description already exists.
| b. If a line description does not already exist, create one.
| c. Define a TCP/IP interface to give your AS/400 an IP address.
| 2. Define a TCP/IP route. This allows your AS/400 to communicate with systems
| on remote TCP/IP networks (networks that your AS/400 is not directly con-
| nected to).
| 3. Define a local domain name and host name. This assigns a name to your
| system.
| 4. Identify the names of the systems in your network.
| a. Build a local host table.
| b. Identify a remote name server.
| 5. Start TCP/IP.
| 6. Verify that TCP/IP works.
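| The following commands sketch these steps for a hypothetical system; the line
| name, internet addresses, and domain names shown are examples only and must
| be replaced with values appropriate for your own network:

| ADDTCPIFC  INTNETADR('192.168.5.1') LIND(SPIFFYETH) +
|            SUBNETMASK('255.255.255.0')
| ADDTCPRTE  RTEDEST(*DFTROUTE) SUBNETMASK(*NONE) +
|            NEXTHOP('192.168.5.254')
| CHGTCPDMN  HOSTNAME('MP000') DMNNAME('SPIFFY.COM')
| ADDTCPHTE  INTNETADR('192.168.5.2') HOSTNAME(('KC000.SPIFFY.COM'))
| STRTCP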

APPN Configuration Example


To help illustrate a basic configuration example, consider the Spiffy Corporation
network as illustrated in the following example.



Minneapolis: network node MP000, with MP101, MP110, and MP201
Kansas City: network node KC000, with KC101, KC105, KC201, and KC310
(RV2W735-0)

Figure 3-1. The Spiffy Corporation Network Organization

In this network organization, two of Spiffy Corporation's regional offices are the
network node systems named MP000 and KC000. The MP000 system in
Minneapolis and the KC000 system in Kansas City communicate with each other
over an SDLC nonswitched line with an SDLC switched line as a backup line. The
MP000 AS/400 system serves as a development and problem handling center for
the KC000 system and the other regional network nodes.

The following example programs and explanations describe how to configure the
Minneapolis and Kansas City AS/400 systems as network nodes in the network,
and also show how Minneapolis configures its network to one of its area dealer-
ships. This example is intended to describe only a portion of the tasks needed to
configure the network shown in Figure 3-1, and is not a complete configuration for
that network.

Configuring Network Node MP000


The following example program shows the control language (CL) commands used
to define the configuration for the system identified as MP000 (network node 1).
The example shows the commands as used within a CL program; the configuration
can also be performed using the configuration menus.



/*********************************************************************/
/*                                                                   */
/*    MODULE: MP000              LIBRARY: PUBSCFGS                   */
/*                                                                   */
/*    LANGUAGE: CL                                                   */
/*                                                                   */
/*    FUNCTION: CONFIGURES APPN NETWORK:                             */
/*                                                                   */
/*    THIS IS:  MP000 TO KC000 (nonswitched)                         */
/*              MP000 TO KC000 (switched)                            */
/*              MP000 TO MP101 - MP299 (nonswitched)                 */
/*                                                                   */
/*********************************************************************/
PGM
/* Change network attributes for MP000 */                           .1/
CHGNETA    LCLNETID(APPN) LCLCPNAME(MP000) +
           LCLLOCNAME(MP000) NODETYPE(*NETNODE)
/*********************************************************************/
/* MP000 to KC000 (nonswitched)                                      */
/*********************************************************************/
/* Create nonswitched line description for MP000 to KC000 */
CRTLINSDLC LIND(KC000L) RSRCNAME(LIN021)                             .2/
/* Create controller description for MP000 to KC000 */
CRTCTLAPPC CTLD(KC000L) LINKTYPE(*SDLC) +                            .3/
           LINE(KC000L) RMTNETID(APPN) +
           RMTCPNAME(KC000) STNADR(01) +
           NODETYPE(*NETNODE)
/*********************************************************************/
/* MP000 TO KC000 (switched)                                         */
/*********************************************************************/
/* Create switched line description for MP000 to KC000 */
CRTLINSDLC LIND(KC000S) RSRCNAME(LIN022) +                           .4/
           CNN(*SWTPP) AUTOANS(*NO) STNADR(01)
/* Create controller description for MP000 to KC000 */
CRTCTLAPPC CTLD(KC000S) LINKTYPE(*SDLC) +                            .5/
           SWITCHED(*YES) SWTLINLST(KC000S) +
           RMTNETID(APPN) RMTCPNAME(KC000) +
           INLCNN(*DIAL) CNNNBR(8165551111) +
           STNADR(01) TMSGRPNBR(3) NODETYPE(*NETNODE)

/*********************************************************************/

/*********************************************************************/
/* MP000 to MP101 (nonswitched)                                      */
/*********************************************************************/
/* Create nonswitched line description for MP000 to MP101 */
CRTLINSDLC LIND(MP101L) RSRCNAME(LIN031)                             .6/
/* Create controller description for MP000 to MP101 */
CRTCTLAPPC CTLD(MP101L) LINKTYPE(*SDLC) +
           LINE(MP101L) RMTNETID(APPN) +
           RMTCPNAME(MP101) STNADR(01) +
           NODETYPE(*ENDNODE)
/*********************************************************************/
ENDPGM



.1/ Changing the Network Attributes (MP000)
The Change Network Attributes (CHGNETA) command is used to set
the attributes for the system within the network. The following attributes
are defined for the MP000 regional system, and these attributes apply to
all connections in the network for this network node.

LCLNETID(APPN)
The name of the local network is APPN. The remote system (KC000
in the example program) must specify this name as the remote
network identifier (RMTNETID) on the CRTCTLAPPC command. In
this example, it defaults to the network attribute.

LCLCPNAME(MP000)
The name assigned to the Minneapolis regional system local control
point is MP000. The remote systems specify this name as the
remote control point name (RMTCPNAME) on the CRTCTLAPPC
command.

LCLLOCNAME(MP000)
The default local location name is MP000. This name will be used
for the device description that is created by the APPN support.

NODETYPE(*NETNODE)
The local system (MP000) is an APPN network node.

.2/ Creating the Line Description (MP000 to KC000, Nonswitched)


The line used in this example is an SDLC nonswitched line. The
command used to create the line is CRTLINSDLC. The parameters
specified are:

LIND(KC000L)
The name assigned to the line description is KC000L.

RSRCNAME(LIN021)
The physical communications port named LIN021 is defined.

.3/ Creating the Controller Description (MP000 to KC000, Nonswitched)


Because this is an APPN environment (AS/400 system to AS/400
system), the controller is an APPC controller, and the CRTCTLAPPC
command is used to define the attributes of the controller. The following
attributes are defined by the example command:

CTLD(KC000L)
The name assigned to the controller description is KC000L.

LINKTYPE(*SDLC)
Because this controller is attached through an SDLC communi-
cations line, the value specified is *SDLC. This value must corre-
spond to the type of line defined by a Create Line Description
command (CRTLINxxx).

LINE(KC000L)
The name of the line description to which this controller is attached
is KC000L. This value must match a name specified by the LIND
parameter in a line description.



RMTNETID(APPN)
The name of the network in which the remote control point resides is
APPN.

RMTCPNAME(KC000)
The remote control-point name is KC000. The name specified here
must match the name specified at the remote system for the local
control-point name. In the example, the name is specified at the
remote system (KC000) by the LCLCPNAME parameter on the
Change Network Attributes (CHGNETA) command.

STNADR(01)
The address assigned to the remote controller is hex 01.

NODETYPE(*NETNODE)
The remote system (KC000) is an APPN network node.

.4/ Creating the Line Description (MP000 to KC000, Switched)


The line used in this example is an SDLC switched line. The command
used to create the line is CRTLINSDLC. The parameters specified are:

LIND(KC000S)
The name assigned to the line description is KC000S.

RSRCNAME(LIN022)
The physical communications port named LIN022 is defined.

CNN(*SWTPP)
This is a switched line connection.

AUTOANS(*NO)
This system will not automatically answer an incoming call.

STNADR(01)
The address assigned to the local system is hex 01.

.5/ Creating the Controller Description (MP000 to KC000, Switched)


Because this is an APPN environment (AS/400 system to AS/400
system), the controller is an APPC controller, and the CRTCTLAPPC
command is used to define the attributes of the controller. The following
attributes are defined by the example command:

CTLD(KC000S)
The name assigned to the controller description is KC000S.

LINKTYPE(*SDLC)
Because this controller is attached through an SDLC communi-
cations line, the value specified is *SDLC. This value must corre-
spond to the type of line defined by a Create Line Description
command (CRTLINxxx).

SWITCHED(*YES)
This controller is attached to a switched SDLC line.

SWTLINLST(KC000S)
The name of the line description (for switched lines) to which this
controller can be attached is KC000S. In the example, there is only

one line (KC000S). This value must match a name specified by the
LIND parameter in a line description.

RMTNETID(APPN)
The name of the network in which the remote control point resides is
APPN.

RMTCPNAME(KC000)
The remote control-point name is KC000. The name specified here
must match the name specified at the remote system for the local
control-point name. In the example, the name is specified at the
remote system by the LCLCPNAME parameter on the CHGNETA
(Change Network Attributes) command.

INLCNN(*DIAL)
The initial connection is made by the AS/400 system either
answering an incoming call or placing a call.

CNNNBR(8165551111)
The connection (telephone) number for the remote Kansas City con-
troller is 8165551111.

STNADR(01)
The address assigned to the remote Kansas City controller is hex
01.

TMSGRPNBR(3)
The value (3) is to be used by the APPN support for transmission
group negotiation with the remote system.
The remote system must specify the same value for the trans-
mission group.

NODETYPE(*NETNODE)
The remote system (KC000) is an APPN network node.

.6/ Creating a Line and Controller for MP101


This portion of the example shows a line and controller configuration for
MP000 to MP101, a dealership end node. A similar configuration must
be made from MP000 to each of its dealership end nodes. Also, to com-
plete the configuration for Minneapolis, each of the dealerships must use
configuration commands or a program similar to this one to create lines
and controllers for each system that they will communicate with.
Likewise, to complete the network configuration shown in Figure 3-1 on
page 3-8, the KC000 system must configure to each of its dealership
end nodes, and each end node must configure a line and controller to
communicate with the KC000 system.
These connections are not shown in the example.

Configuring Network Node KC000


The following example program shows the CL commands used to define the config-
uration for the regional system identified as KC000. The example shows these
commands as used within a CL program; the configuration can also be performed
using the configuration menus.



/*********************************************************************/
/*                                                                   */
/*    MODULE: KC000              LIBRARY: PUBSCFGS                   */
/*                                                                   */
/*    LANGUAGE: CL                                                   */
/*                                                                   */
/*    FUNCTION: CONFIGURES APPN NETWORK:                             */
/*                                                                   */
/*    THIS IS:  KC000 TO MP000 (nonswitched)                         */
/*              KC000 TO MP000 (switched)                            */
/*                                                                   */
/*********************************************************************/
PGM
/* Change network attributes for KC000 */
CHGNETA    LCLNETID(APPN) LCLCPNAME(KC000) +                         .7/
           LCLLOCNAME(KC000) NODETYPE(*NETNODE)
/*********************************************************************/
/* KC000 TO MP000 (nonswitched)                                      */
/*********************************************************************/
/* Create line description for KC000 to MP000 */
CRTLINSDLC LIND(MP000L) RSRCNAME(LIN022)                             .8/
/* Create controller description for KC000 to MP000 */
CRTCTLAPPC CTLD(MP000L) LINKTYPE(*SDLC) +                            .9/
           LINE(MP000L) RMTNETID(APPN) +
           RMTCPNAME(MP000) STNADR(01) +
           NODETYPE(*NETNODE)
/*********************************************************************/
/* KC000 TO MP000 (switched)                                         */
/*********************************************************************/
/* Create switched line description for KC000 to MP000 */
CRTLINSDLC LIND(MP000S) RSRCNAME(LIN031) +                           .10/
           CNN(*SWTPP) AUTOANS(*NO) STNADR(01)
/* Create controller description for KC000 to MP000 */
CRTCTLAPPC CTLD(MP000S) LINKTYPE(*SDLC) +                            .11/
           SWITCHED(*YES) SWTLINLST(MP000S) +
           RMTNETID(APPN) RMTCPNAME(MP000) +
           INLCNN(*ANS) CNNNBR(6125551111) +
           STNADR(01) TMSGRPNBR(3) NODETYPE(*NETNODE)
ENDPGM
.7/ Changing the Network Attributes (KC000)
The Change Network Attributes (CHGNETA) command is used to set
the attributes for the system within the network. The following attributes
are defined for the regional system named KC000, and these attributes
apply to all connections in the network for this network node:

LCLNETID(APPN)
The name of the local network is APPN. The remote systems (the
Minneapolis network node in this example) must specify this name
as the remote network identifier (RMTNETID) on the CRTCTLAPPC
command.

LCLCPNAME(KC000)
The name assigned to the local control point is KC000. The remote
system specifies this name as the remote control point name
(RMTCPNAME) on the CRTCTLAPPC command.



LCLLOCNAME(KC000)
The default local location name is KC000. This name will be used
for the device description that is created by the APPN support.

NODETYPE(*NETNODE)
The local system (KC000) is an APPN network node.

.8/ Creating the Line Description (KC000 to MP000, Nonswitched)


The line used in this example is an SDLC nonswitched line. The
command used to create the line is CRTLINSDLC. The parameters
specified are:

LIND(MP000L)
The name assigned to the line description is MP000L.

RSRCNAME(LIN022)
The physical communications port named LIN022 is defined.

.9/ Creating the Controller Description (KC000 to MP000, Nonswitched)


Because this is an APPN environment (AS/400 system to AS/400
system), the controller is an APPC controller, and the CRTCTLAPPC
command is used to define the attributes of the controller. The following
attributes are defined by the example command:

CTLD(MP000L)
The name assigned to the controller description is MP000L.

LINKTYPE(*SDLC)
Because this controller is attached through an SDLC communi-
cations line, the value specified is *SDLC. This value must corre-
spond to the type of line defined by a Create Line Description
command (CRTLINxxx).

LINE(MP000L)
The name of the line description to which this controller is attached
is MP000L. This value must match a name specified by the LIND
parameter in a line description.

RMTNETID(APPN)
The name of the network in which the remote system resides is
APPN.

RMTCPNAME(MP000)
The remote control-point name is MP000. The name specified here
must match the name specified at the remote system for the local
control-point name. In the example, the name is specified at the
Minneapolis region remote system (MP000) by the LCLCPNAME
parameter on the Change Network Attributes (CHGNETA)
command.

STNADR(01)
The address assigned to the remote controller is hex 01.

NODETYPE(*NETNODE)
The remote system (MP000) is an APPN network node.



.10/ Creating the Line Description (KC000 to MP000, Switched)
The line used in this example is an SDLC switched line. The command
used to create the line is CRTLINSDLC. The parameters specified are:

LIND(MP000S)
The name assigned to the line description is MP000S.

RSRCNAME(LIN031)
The physical communications port named LIN031 is defined.

CNN(*SWTPP)
This is a switched line connection.

AUTOANS(*NO)
This system will not automatically answer an incoming call.

STNADR(01)
The address assigned to the local system is hex 01.

.11/ Creating the Controller Description (KC000 to MP000, Switched)


Because this is an APPN environment (AS/400 system to AS/400
system), the controller is an APPC controller, and the CRTCTLAPPC
command is used to define the attributes of the controller. The following
attributes are defined by the example command:

CTLD(MP000S)
The name assigned to the controller description is MP000S.

LINKTYPE(*SDLC)
Because this controller is attached through an SDLC communi-
cations line, the value specified is *SDLC. This value must corre-
spond to the type of line defined by a Create Line Description
command (CRTLINxxx).

SWITCHED(*YES)
This controller is attached to a switched SDLC line.

SWTLINLST(MP000S)
The name of the line description (for switched lines) to which this
controller can be attached is MP000S. In the example, there is only
one line (MP000S). This value must match a name specified by the
LIND parameter in a line description.

RMTNETID(APPN)
The name of the network in which the remote control point resides is
APPN.

RMTCPNAME(MP000)
The remote control-point name is MP000. The name specified here
must match the name specified at the remote regional system for
the local control-point name. In the example, the name is specified
at the remote Minneapolis regional system (MP000) by the
LCLCPNAME parameter on the Change Network Attributes
(CHGNETA) command.

INLCNN(*ANS)
The initial connection is made by the AS/400 system answering an
incoming call.



CNNNBR(6125551111)
The connection (telephone) number for the remote Minneapolis con-
troller is 6125551111.

STNADR(01)
The address assigned to the remote Minneapolis controller is hex
01.

TMSGRPNBR(3)
The value (3) is to be used by the APPN support for transmission
group negotiation with the remote system. The remote system must
specify the same value for the transmission group.

NODETYPE(*NETNODE)
The remote system (MP000) is an APPN network node.

Configuring Alert Support


Depending on what role your system assumes in the network, several actions help
establish alert support for your system.

In an APPN network, an end node sends its alerts either to its serving network
node or to another system as specified by the alerts controller name (ALRCTLD)
parameter of the Change Network Attributes (CHGNETA) command. When your
system is an end node in the network and you turn on the alert status (ALRSTS)
parameter of the CHGNETA command, alerts are forwarded to a serving network
node.

You can define your system as a default focal point using the alert default focal
point (ALRDFTFP) parameter of the CHGNETA command. When your system is
defined to be a default focal point, the AS/400 system automatically adds network
node control points to the sphere of control using the APPN network topology data-
base. When the AS/400 system detects that a network node system has entered
the network, the system sends management services capabilities to the new control
point so that the control point sends alerts to your system (if no other focal point is
specified for the new network node system). The alert status (ALRSTS) parameter
of the CHGNETA command should be turned off so your system does not forward
alerts because it is the default focal point.

You can define your system as a primary focal point using the alert primary focal
point (ALRPRIFP) parameter of the CHGNETA command. When your system is
defined to be a primary focal point, you must explicitly define the control points that
are to be in your sphere of control. This set of control points is defined using the
Work with Sphere of Control (WRKSOC) command.

The WRKSOC command allows you to add network node control point systems to
the sphere of control and to delete existing control points.



                                                          System:   MP000
 Position to . . . . . . . .  ________   Control Point
 Network ID  . . . . . . . .  ________

 Type options, press Enter.
   1=Add   4=Remove

      Control
 Opt  Point     Network ID  Current Status
 __   ________  *NETATR
 __   CH000     APPN        Delete pending
 __   KC000     APPN        Active - in sphere of control
 __   SL000     APPN        Add pending - in sphere of control
 __   NY000     APPN        Active - in sphere of control

Select option 1 (Add) on the Work with Sphere of Control (SOC) display, or use the
Add Sphere of Control Entry (ADDSOCE) command to add a system to your
sphere of control. To add a system to the sphere of control, type the control point
name and network ID of the new system.

Select option 4 (Remove) from the Work with Sphere of Control (SOC) display, or
use the Remove Sphere of Control Entry (RMVSOCE) command to delete systems
from the alert sphere of control. The systems are specified by network ID and
control point name.

Unless a default focal point is established for your network, a control point in the
sphere of control should not be removed from the sphere of control until another
focal point has started focal point services to that system.

The Display Sphere of Control Status (DSPSOCSTS) command shows the current
status of all systems in your sphere of control. This includes both systems that you
have defined using the WRKSOC command, if your system is defined to be a
primary focal point, and systems that the AS/400 system has added for you, if your
system is defined to be a default focal point.

Example Configuration for Alert Support


You can establish alert support for the two network nodes MP000 and KC000 and
their associated end nodes as shown in the following figure:



Network nodes: MP000 and KC000
End nodes: MP101, MP110, MP201 (Minneapolis); KC101, KC105, KC201, KC310 (Kansas City)
(RV2W736-0)

Figure 3-2. Spiffy Corporation Example Network Configuration

This example configuration shows how to:
• Begin creating alerts at a network node
• Set up a network node to forward alerts to a primary focal point
• Begin creating alerts at an end node

The CL command examples that follow are used to establish the system named
MP000 as a primary focal point for alerts handling. While this system may serve as
the primary focal point for several systems, this example only illustrates how one
other network node (KC000) is configured to forward alerts to the MP000 system
and how MP000 is set up to be the primary focal point that does not pass the alerts
on to another system. To configure alerts for this example, the database adminis-
trator would:
1. Create alerts at a network node.
2. Define a network node as the primary focal point.
3. Add network nodes to the primary focal point’s sphere of control.
4. Create alerts at end node systems.

Create Alerts at a Network Node


The system configured as KC000 begins to create alerts and forward them to a
primary focal point when the database administrator turns on the local alerts param-
eter (ALRSTS) on the Change Network Attributes (CHGNETA) command as shown
in the example below. This system is also set up to log alerts it creates locally.
CHGNETA ALRSTS(*ON) ALRLOGSTS(*LOCAL)

Because a primary focal point is not active and has not included KC000 in its
sphere of control, any alerts created at the KC000 system are logged at the KC000
system and not forwarded to another system yet.



Define a Primary Focal Point
To define a network node as a primary focal point, the database administrator
needs to specify *YES for the alert primary focal point (ALRPRIFP) parameter on
the Change Network Attributes (CHGNETA) command for the selected system.

This system creates alerts locally by specifying *ON for the alert status (ALRSTS)
parameter. Also, this system logs alerts created locally and alerts received from
other systems when *ALL is specified for the alert logging (ALRLOGSTS) param-
eter on the CHGNETA command.

An example is shown below.


CHGNETA ALRPRIFP(*YES) ALRSTS(*ON) ALRLOGSTS(*ALL)

Update the Primary Focal Point’s Sphere of Control


The system named MP000 does not receive alerts from other systems until the
database administrator adds the names of systems that should forward alerts to the
MP000 system’s sphere of control.

The Work with Sphere of Control (WRKSOC) command identifies the network
nodes from which MP000 receives alerts. In the example below, the KC000 system
is included in the MP000 system’s sphere of control using the Add Sphere of
Control Entry (ADDSOCE) command. The network identifier is specified as
*NETATR, and the control point name for KC000 is specified for the entry.
ADDSOCE ENTRY((*NETATR KC000))

Create Alerts for End Nodes


End nodes may participate in an APPN network by using the services of an
attached network node (the serving network node). In the above figure, KC000 is
the serving network node for KC105.

The end node must begin creating alerts by specifying ALRSTS(*ON) for the
Change Network Attributes (CHGNETA) command. However, after its network node
is set up to forward alerts, the alerts sent by KC105 are forwarded by KC000 to the
focal point at MP000 without a database administrator having to specify how KC105
system alerts are handled.



Chapter 4. Security for an AS/400 Distributed Relational
Database
A distributed relational database administrator is faced with two security issues to
resolve:
• System to system protection
• Identification of users at remote sites

| When two or more systems are set up to access each other’s databases, it may be
| important to make sure that the other side of the communications line is the
| intended location and not an intruder. For DRDA access to a remote relational
| database, the AS/400 system's advanced program-to-program communications
| (APPC) and Advanced Peer-to-Peer Networking (APPN) configuration capabilities
| provide options for this kind of network-level security.

| The second concern for the distributed relational database administrator is that data
| security is maintained by the system that stores the data. In a distributed relational
| database, the user has to be properly authorized to have access to the database
| (according to the security level of the system) whether the database is local or
| remote. Distributed relational database network users must be properly identified
| with a user ID on the application server (AS) for any jobs they run on the AS.
| DRDA support using both APPC/APPN and TCP/IP communications protocols pro-
| vides for the sending of user IDs and passwords along with connection requests.

| This chapter discusses security topics that are related to communications and
| DRDA access to remote relational databases. It discusses the significant differ-
| ences between conversation-level security in an APPC network connection and the
| corresponding level of security for a TCP/IP connection initiated by a DRDA appli-
| cation. In the remaining security discussions, the term user also includes remote users
| starting communications jobs.

For a description of general AS/400 security concepts, see the Security - Basic
book.

Elements of Distributed Relational Database Security


A distributed relational database administrator needs to protect the resources of the
application servers in the network without unnecessarily restricting access to data
by ARs in the network.

An AR secures its objects and relational database to ensure only authorized users
have access to distributed relational database programs. This is done using normal
AS/400 object authorization to identify users and specify what each user (or group
of users) is allowed to do with an object. Alternatively, authority to tables, views,
and SQL packages can be granted or revoked using the SQL GRANT and
REVOKE statements. Providing levels of authority to SQL objects on the AR helps
ensure that only authorized users have access to an SQL application that accesses
data on another system.

The level of system security in effect on the AS determines whether a request from
an AR is accepted and whether the remote user is authorized to objects on the AS.

 Copyright IBM Corp. 1997, 1998 4-1


Some aspects of security planning for AS/400 systems in a distributed relational
database network include:
• Physical security such as locked doors or secured buildings that surround the
systems, modems, communication lines and terminals that can be configured in
the line description and used in the route selection process
• Location security that verifies the identity of other systems in the network
• User-related security to verify the identity and rights of users on the local
system and remote systems
• Object-related security to control user access to particular resources such as
confidential tables, programs, and packages
Location, user-related, and object-related security are only possible if the system
security level is set at level 20 or above.

For APPC conversations, when the system is using level 10 security, an AS/400
system connects to the network as a nonsecure system. The AS/400 system does
not validate the identity of a remote system during session establishment and does
not require conversation security on incoming program start requests. For level 10,
security information configured for the APPC remote location is ignored and is not
used during session or conversation establishment. If a user profile does not exist
on the AS/400 system, one is created.

| When the system is using security level 20 or above, an AS/400 system connects
| to the network as a secure system. The AS/400 system can then provide both
| session (except for TCP/IP connections) and conversation-level security functions.

Having system security set at the same level across the systems in your network
makes the task of security administration easier. An AS controls whether the
session and conversation can be established by specifying what is expected from
the AR to establish a session. For example, if the security level on the AR is set at
10 and the security level on the AS is above 10, the appropriate information may
not be sent and the session might not be established without changing security ele-
ments on one of the systems.

For more information on security levels, see the Security - Reference book and
security consideration topics in the APPC Programming or the APPN Support
books.

| Session Level and Location Security for APPC Connections


Communications security occurs when a Systems Network Architecture (SNA) bind
occurs between two locations, and involves session level and location security.

Session level security verifies the identity of the two systems attempting to establish
a communications session. Session level security is established during communi-
cations configuration in one of two ways, depending on whether the network uses
APPN.
• If you specify APPN(*NO) on the Create Controller Description (CRTCTLAPPC)
command, communications devices are created manually. APPC devices are
created by using the Create Device Description (CRTDEVAPPC) command.
The LOCPWD parameter on the CRTDEVAPPC command specifies if a pass-
word is used to verify the remote location.



– If you specify a password for LOCPWD on a device description the AS/400
system uses that password to validate the identity of the remote system
during session establishment. The password must match the password
specified on the remote system or the connection is not allowed.
– If you specify *NONE on the LOCPWD parameter during configuration, the
AS/400 system does not validate the identity of the remote system when a
session is established.
• If you specify APPN(*YES) on the CRTCTLAPPC command, communications
devices are created automatically using APPN. For APPN, a location-password
on the remote location list specifies a password the two locations use to verify
identities. Use the Create Configuration List (CRTCFGL) command to create a
remote location list type (*APPNRMT).
– If you specify a location password on an APPN remote location list, the
AS/400 system uses that password to validate the identity of the local and
remote system pair. The location password must match the location pass-
word specified on the remote system’s remote location list or the con-
nection is not allowed.
– If you specify *NONE for the location password, the AS/400 system does
not validate the identity of the remote system when a session is estab-
lished. The remote system must also specify *NONE for the location pass-
word.

Location security establishes what security information each location requires from
the other location for each remotely initiated APPC conversation. Location security
is established during communication configuration in one of two ways, depending
on whether APPN is used.
• In an APPC network, the SECURELOC parameter on the CRTDEVAPPC
command specifies whether the local system allows the remote system to verify
security. Specifying *YES for SECURELOC means that the local system allows
the remote system to verify user security information. If you specify *NO on the
SECURELOC parameter, the local system verifies security information for the
incoming request.
• In an APPN network, the secure-location value on the remote location list veri-
fies security. Specifying *YES for secure-location on an APPN remote config-
uration list means that the local system allows the remote system to verify user
security information. If you specify *NO, the local system verifies security infor-
mation for the incoming request.
Note: APPN creates location information based on the first device description
that is varied on for the remote network ID, remote location name, and
local location name pair. To avoid using security information that cannot
be predicted, you must ensure that all of the device descriptions with
the same remote network ID, remote location name, and local location
name pair contain exactly the same security information.
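To show how these values fit together, a manually created APPC device description
(the APPN(*NO) case) might look like the sketch below. The device, location, and
controller names are illustrative, the password value is only a placeholder, and the
full set of required parameters should be checked in the APPC Programming book.

CRTDEVAPPC DEVD(KC000DEV) RMTLOCNAME(KC000) CTL(KC000L) +
           LOCPWD(SECRET01) SECURELOC(*YES)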

For more information on session and location security issues, see the consider-
ations chapter in the APPC Programming book.



APPN Configuration Lists
In an APPN network, location passwords are specified for those pairs of locations
that are going to have end-to-end sessions between them. Location passwords
need not be specified for those locations that are intermediate nodes.

The remote location list is created with the CRTCFGL command, and it contains a
list of all remote locations, their location password, and whether the remote location
is secure. There is one system-wide remote location configuration list on an AS/400
system. A central site AS/400 system can create location lists for remote AS/400
systems by sending them a control language (CL) program.

Changes can be made to a remote configuration list using the Change Configura-
tion List (CHGCFGL) command; however, they do not take effect until all devices
for that location are in a varied-off state.

When the Display Configuration List (DSPCFGL) command is used, there is no
indication that a password exists. The CHGCFGL command indicates a password
exists by placing *PASSWORD in the field if a password has been entered. There
is no way to display the password. If you have problems setting up location security,
you may have to enter the password again on both systems to be sure the pass-
words match.

For more information on configuration lists, see the APPN Support book.

| Conversation Level Security for APPC Connections


Systems Network Architecture (SNA) logical unit (LU) 6.2 architecture identifies
three conversation security designations that various types of systems in an SNA
network can use to provide consistent conversation security across a network of
unlike systems. The SNA security levels are:
SECURITY(NONE) No user ID or password is sent to establish communications.
SECURITY(SAME) A user ID is required, but no password is sent for communi-
cations.
SECURITY(PGM) Both a user ID and a password are sent for communications.

The AS/400 system supports all three SNA levels of conversation security. The AS
controls the SNA conversation levels used for the conversation. The SECURELOC
parameter on the APPC device description or the secure location value on the
APPN remote location list determines what is accepted from the AR for the conver-
sation.

For the SECURITY(NONE) level, an AS does not expect a user ID or password.
The conversation is allowed using a default user profile on the AS. Whether a
default user profile can be used for the conversation depends on the value speci-
fied on the DFTUSR parameter of the Add Communications Entry (ADDCMNE)
command or the Change Communications Entry (CHGCMNE) command for a given
subsystem. A value of *NONE for the DFTUSR parameter means the AS does not
allow a conversation using a default user profile on the AS. SECURITY(NONE) is
sent when no password or user ID is supplied and the AS has SECURELOC(*NO)
specified.

For the SECURITY(SAME) level, an AS expects a user ID and an Already Verified
indicator from the AR. A value of *YES for SECURELOC or secure location means
SECURITY(SAME) conversation level security is used. A value of *NO for
SECURELOC means the AS does not allow SNA SECURITY(SAME). This causes
SECURITY(NONE) to be used if there is a default user ID specified in the commu-
nications subsystem.

For the SECURITY(PGM) level, an AS expects both a user ID and password from
the AR for the conversation. To allow a conversation only if both a user ID and
password are sent, the DB2/400 AS must be set up so the SECURELOC param-
eter or the secure location value is *NO and no default user profile is specified for
the communications subsystem. The password is validated when the conversation
is established and is ignored for any following uses of that conversation.

A DB2/400 DRDA AR sends a password if the USER and USING optional
keywords and their associated values are coded on the SQL CONNECT statement.
For example:
EXEC SQL CONNECT TO :locn USER :userid USING :pw;

Using Passwords
For DRDA access to remote relational databases, once a conversation is estab-
lished at the SECURITY(PGM) level, you do not need to enter a password again. If
you end a connection with a RELEASE, DISCONNECT, or CONNECT statement
when running with the RUW connection management method, your conver-
sation with the first AS may or may not be dropped, depending on the kind of AS
you are connected to and your AR job attributes (for the specific rules, see “Con-
trolling DDM Conversations” on page 6-10). If the conversation to the first AS is not
dropped, it remains unused while you are connected to the second AS. If you
connect again to the first AS and the conversation is unused, the conversation
becomes active again without you needing to enter your user ID and password. On
this second use of the conversation, your password is also not validated again.

The OS/400, DB2, and SQL/DS licensed programs only accept passwords whose
alphanumeric characters are in upper case. If you enter lowercase characters in
your password when you connect, your connection is rejected.

| Connecting to a Secure Distributed Relational Database with APPC
A valid user profile must exist on the AS to process distributed relational database
work. You can specify a default user profile for a subsystem that handles communi-
cations jobs on an AS/400 system. The name of the default user profile is specified
on the DFTUSR parameter of the Add Communications Entry (ADDCMNE)
command on the AS. The ADDCMNE command adds a communications entry to a
subsystem description used for communications jobs.

If a default user profile is specified in a communications subsystem, whether the AS
is a secure location or not determines if the default user profile is used for this
request. The SECURELOC parameter on the CRTDEVAPPC command, or the
secure location designation on an APPN remote location list, specifies whether the
AS is a secure location.
• If *YES is specified for SECURELOC or secure location on the AS, the AS con-
siders the AR a secure location. A user ID and an Already Verified indicator are
expected from the AR with its request. If a user profile exists on the AS that
matches the user ID sent by the requester, the request is allowed. If not, the
request is rejected.



• If *NO is specified for the SECURELOC parameter on the AS, the AS does not
consider the AR a secure location. Although the AR still sends a user ID, the
AS does not use this for the request. Instead, a default user profile on the AS
is used for the request, if one is available. If no default user profile exists on
the AS, the request is rejected.

Figure 4-1 shows all of the possible combinations of the elements that control SNA
SECURITY(PGM) on the AS/400 system. A “Y” in any of the columns indicates that
the element is present or the condition is met. An “M” in the PWD column indicates
that the security manager retrieves the user's password and sends a protected
(encrypted) password if password protection is active. If a protected password is
not sent, no password is sent. A protected password is a character string that
APPC substitutes for a user password when it starts a conversation. Protected
passwords can be used only when the systems of both partners support password
protection and when the password is created on a system that runs OS/400
Version 2 Release 2 or later.



Figure 4-1. Remote Access to a Distributed Relational Database
Row UID PWD1 AVI SEC(Y) DFT Valid Access
1 Y Y Y Y Y Use UID
2 Y Y Y Y Reject
3 Y Y Y Y Use UID
4 Y Y Y Reject
5 Y Y Y Y Use UID
6 Y Y Y Reject
7 Y Y Y Use UID
8 Y Y Reject
9 Y Y Y Y Y Use UID
10 Y Y Y Y Reject
11 Y Y Y Y Use UID
12 Y Y Y Reject
13 Y M3 Y Y Use DFT or UID2
14 Y M3 Y Use DFT or UID2
15 Y M3 Y Reject or UID2
16 Y M3 Reject or UID2
17 Y Y Use DFT
18 Y Reject
19 Y Use DFT
20 Reject
Key:
UID User ID sent
PWD Password sent
AVI Already Verified Indicator set
SEC(Y) SECURELOC(YES) specified
DFT Default user ID specified in communication subsystem
Valid User ID and password are valid
Use UID Connection made with supplied user ID
Use DFT Connection made with default user ID
Reject Connection not made
Notes:
1. If password protection is active, a protected password is sent.
2. Use UID when password protection is active.
3. If password protection is active, the password for the user is retrieved by the
security manager, and a protected password is sent; otherwise, no password is
sent.

To avoid having to use default user profiles, create a user profile on the AS for
every AR user that needs access to the distributed relational database objects. If
you decide to use a default user profile, however, make sure that users are not

allowed on the system without proper authorization. For example, the following
command specifies the default user parameter as DFTUSR(QUSER); this allows
the system to accept job start requests without a user ID or password from a com-
munications request. The communications job is signed on using the QUSER user
profile.
ADDCMNE SBSD(SAMPLE) DEV(*ALL) DFTUSR(QUSER)

| DRDA Security using TCP/IP


| DRDA over native TCP/IP does not use OS/400 communications security services
| and concepts such as communications devices, modes, secure location attributes,
| and conversation security levels which are associated with APPC communications.
| Therefore, security setup for TCP/IP is quite different.

| Two types of security mechanisms are supported by the current DB2 for AS/400
| implementation of DRDA over TCP/IP: user ID only, and user ID with password.
| These mechanisms are roughly equivalent to the APPC conversation security types
| of SECURITY(SAME) and SECURITY(PGM). There is nothing that corresponds to
| SECURITY(NONE) for DRDA over TCP/IP.

| At the application server, the default security is user ID with password. This means
| that, as the system is installed, inbound TCP/IP connect requests must have a
| password accompanying the user ID under which the server job is to run. The
| CHGDDMTCPA command can be used to specify that the password is not
| required. To make this change, enter the following command:
| CHGDDMTCPA PWDRQD(*NO)
| You must have *IOSYSCFG special authority to use this command.

| On the application requester (client) side, there are two methods that can be used
| to send a password along with the user ID on TCP/IP connect requests. In the
| absence of both of these methods, only a user ID will be sent. In that case, if the
| AS is set to require a password, the error SQ30082 (A connection attempt failed
| with reason code 17) will be posted in the job log.

| The first way to send a password is to use the USER/USING form of the SQL
| CONNECT statement. The syntax is: CONNECT TO rdbname USER userid USING
| 'password', where the lowercase words represent the appropriate connect parame-
| ters. In a program using embedded SQL, the userid and password values can be
| contained in host variables, as in the following example:
| EXEC SQL CONNECT TO :locn USER :userid USING :pw;

| The other way that a password can be provided to send on a connect request over
| TCP/IP is by use of a server authorization entry. Associated with every user profile
| on the system is a server authorization list. By default the list is empty, but with the
| ADDSVRAUTE command, entries can be added. When a DRDA connection over
| TCP/IP is attempted, DB2 for AS/400 checks the server authorization list for the
| user profile under which the AR job is running. If a match is found between the
| RDB name on the CONNECT statement and the SERVER name in an authori-
| zation entry, the associated USRID parameter in the entry is used for the con-
| nection user ID, and if a PASSWORD parameter is stored in the entry, that
| password is also sent on the connect request.



| In order for a password to be stored using the ADDSVRAUTE command, the
| QRETSVRSEC system value must be set to '1'. By default, the value is '0'. Enter
| the following command to make the change:
| CHGSYSVAL QRETSVRSEC VALUE('1')

| The syntax of the ADDSVRAUTE command is:


| ADDSVRAUTE USRPRF(user-profile) SERVER(rdbname) USRID(userid) PASSWORD(password)

| The USRPRF parameter specifies the user profile under which the application
| requester job runs. The SERVER parameter specifies the remote RDB name. It is
| very important to note that for use with DRDA, the value of the SERVER parameter
| must be uppercase. The USRID parameter specifies the user profile under which
| the server job will run. The PASSWORD parameter specifies the password for the
| user profile at the server.

| If the USRPRF parameter is omitted, it will default to the user profile under which
| the ADDSVRAUTE command is being run. If the USRID parameter is omitted, it will
| default to the value of the USRPRF parameter. If the PASSWORD parameter is
| omitted, or if the QRETSVRSEC value is 0, no password will be stored in the entry;
| when a connect attempt is made using the entry, the security mechanism used will
| be user ID only.
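| As an illustration, the entry below lets an AR job running under the hypothetical
| profile MPUSER connect to the relational database KC000 as KCUSER without
| coding USER/USING on the CONNECT statement; all of the values shown are
| examples only:
| ADDSVRAUTE USRPRF(MPUSER) SERVER(KC000) USRID(KCUSER) PASSWORD(KCPWD)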

| A server authorization entry can be removed by use of the RMVSVRAUTE


| command, and can be changed by use of CHGSVRAUTE. See the CL Reference
| for a complete description of these commands.

| If a server authorization entry exists for an RDB, and the USER/USING form of the
| CONNECT statement is also used, user ID and password provided with the
| CONNECT statement will be used.

Object Related Security


If the AS/400 system is an AS, there are two object-related levels at which security
can be enforced to control access to its relational database tables.

The DDMACC parameter is used on the Change Network Attributes (CHGNETA)
command to indicate whether the tables on this AS/400 system can be accessed at
all by another system and, if so, at which level of security the incoming DDM
requests are to be checked.
• If *REJECT is specified on the DDMACC parameter, all distributed relational
database requests received by the AS are rejected. However, this system (as
an AR) can still use SQL requests to access tables on other systems that allow
it. No remote system can access a database on any AS/400 system that speci-
fies *REJECT.
If *REJECT is specified while an SQL request is already in use, all new jobs
from any system requesting access to this system’s database are rejected and
an error message is returned to those jobs; existing jobs are not affected.
• If *OBJAUT is specified on the DDMACC parameter, normal object-level secu-
rity is used on the AS.
The DDMACC parameter is initially set to *OBJAUT. A value of *OBJAUT
allows all remote requests, but they are controlled by the object authorizations
on this AS. If the DDMACC value is *OBJAUT, the user profile used for the job

must have appropriate object authorizations through private, public, group, or
adopted authorities, or the profile must be on an authorization list for objects
needed by the AR job. For each SQL object on the system, all users, no users,
or only specific users (by user ID) can be authorized to access the object.
The user ID that must be authorized to objects is the user ID of the AS job.
See “Connecting to a Secure Distributed Relational Database with APPC” on
page 4-5 for a discussion on what user profile the AS job runs under.
| In the case of a TCP/IP connection, the server job initially starts running under
| QUSER. After the user ID is validated, an exchange occurs so that the job then
| runs under the user profile specified on the connect request. The job inherits
| the attributes (for example, the library list) of that user profile.
When the value *OBJAUT is specified, it indicates that no further verification
(beyond AS/400 object level security) is needed.
• For DDM jobs, if the name of an optional, user-supplied user exit program (or
access control program) is specified on the DDMACC parameter, an additional
level of security is used. The user exit program can be used to control whether
a user of a source system can use a specific command to access a specific file
on the target system.
A qualified-program-name entry is only valid when using DDM files. If a user-
written exit program name is specified for this parameter when handling a dis-
tributed relational database job, the system treats the entry as though
*OBJAUT is specified.

The DDMACC parameter, initially set to *OBJAUT, can be changed to one of the
previously described values by using the Change Network Attributes (CHGNETA)
command, and its current value can be displayed by the Display Network Attributes
(DSPNETA) command. You can also get the value in a CL program by using the
Retrieve Network Attributes (RTVNETA) command.
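For example, to set (or reset) the server to normal object-level checking and then
verify the current network attribute values, you could enter:

CHGNETA    DDMACC(*OBJAUT)
DSPNETA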

If the DDMACC parameter value is changed, although it takes effect immediately, it
affects only new distributed relational database jobs started on this system (as the
AS). Jobs running on this AS before the change was made continue to use the old
value.

For a description of the DDMACC parameter, see the description of the Change
Network Attributes (CHGNETA) command in the Communications Management
book.

Authority to Distributed Relational Database Objects


You can use either the SQL GRANT and REVOKE statements or the control lan-
guage (CL) Grant Object Authority (GRTOBJAUT) and Revoke Object Authority
(RVKOBJAUT) commands to grant and revoke a user’s authority to relational data-
base objects. The SQL GRANT and REVOKE statements only operate on pack-
ages, tables, and views. In some cases, it is necessary to use GRTOBJAUT and
RVKOBJAUT to authorize users to other objects, such as commands and pro-
grams.

The authority checked for SQL statements depends on whether the statement is
static, dynamic, or being run interactively.



| In the following discussion, the USRPRF value refers to the USRPRF parameter on
| the CRTSQLxxx command that was used to create the application program.

For static SQL statements, if the USRPRF value is:
• *USER, the authority to run the SQL statement is checked using the user
profile of the user running the program. *USER is the default for system (*SYS)
naming.
• *OWNER, the authority to run the SQL statement is checked using the user
profiles of the user running the program and of the owner of the program. The
higher authority is the authority that is used. *OWNER is the default for SQL
(*SQL) naming.

For dynamic SQL statements:


• If the USRPRF value is *USER, the authority to run the SQL statement is
checked using the user profile of the person running the program.
• If the USRPRF value is *OWNER and DYNUSRPRF is *USER, the authority to
run the SQL statement is checked using the user profile of the person running
the program.
• If the USRPRF value is *OWNER and DYNUSRPRF is *OWNER, the authority to
run the SQL statement is checked using the user profile of the person running
the program and the owner of the program. The higher authority is the authority
that is used. Because of security concerns, you should use the *OWNER
parameter value for DYNUSRPRF carefully. This option gives the access
authority of the owner of the program to those who run the program.
| • If the SQL package in use is associated with a program created on a system
| other than AS/400, and that package was created with the DRDA PKGATHRUL
| value of OWNER (such as when the OWNER keyword is specified on the BIND
| command of DB2 for OS/390), the authority to run the SQL statement is
| checked using the user profile of the person running the program and the
| owner of the program or package. The higher authority is the authority that is
| used.
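
As a sketch of how these precompiler options fit together (the program, source file,
and relational database names are examples only), the following command creates
an ILE RPG SQL program whose static statements run under the owner's authority
while its dynamic statements run under the authority of the user running the
program:
CRTSQLRPGI OBJ(SPIFFY/INVENTPGM) SRCFILE(SPIFFY/QRPGLESRC) +
           RDB(KC000) USRPRF(*OWNER) DYNUSRPRF(*USER)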

For interactive SQL statements, authority is checked against the authority of the
person processing the statement. Adopted authority is not used for interactive SQL
statements.

Users running a distributed relational database application need authority to run the
SQL package on the AS. The GRANT EXECUTE ON PACKAGE statement allows
the owner of an SQL package, or any user with administrative privileges to it, to
grant specified users the privilege to run the statements in an SQL package. You
can use this statement to give all users authorized to the AS, or a list of one or
more user profiles on the AS, the privilege to run statements in an SQL package.

Normally, users have processing privileges on a package if they are authorized to
the distributed application program created using the CRTSQLxxx command. If the
package is created using the CRTSQLPKG command you may have to grant proc-
essing privileges on the package to users. You can issue this statement in an SQL
program or using interactive SQL. The following shows a sample statement:
GRANT EXECUTE
ON PACKAGE SPIFFY.PARTS1
TO PUBLIC

The REVOKE EXECUTE ON PACKAGE statement allows the owner of an SQL
package, or any user with administrative privileges to it, to remove the privilege to
run statements in an SQL package from specified users. You can remove the
EXECUTE privilege to all users authorized to the AS or to a list of one or more
user profiles on the AS.

If you granted the same privilege to the same user more than once, revoking that
privilege from that user nullifies all those grants. If you revoke an EXECUTE privi-
lege on an SQL package you previously granted to a user, it nullifies any grant of
the EXECUTE privilege on that SQL package, regardless of who granted it. The
following shows a sample statement:
REVOKE EXECUTE
ON PACKAGE SPIFFY.PARTS1
FROM PUBLIC

You can also grant authority to an SQL package using the GRTOBJAUT command
or revoke authority to an SQL package using the RVKOBJAUT command.
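
For example, the following command is a roughly equivalent CL form of the GRANT
statement shown earlier. The AUT(*USE) value is shown as an illustration only; see
the DB2 for AS/400 SQL Reference for the object authorities that correspond to the
EXECUTE privilege:
GRTOBJAUT OBJ(SPIFFY/PARTS1) OBJTYPE(*SQLPKG) USER(*PUBLIC) AUT(*USE)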

Programs That Run Under Adopted Authority


A distributed relational database program can run under adopted authority, which
means the user adopts the program owner’s authority to objects used by the
program while running the program. When a program is created using the *SQL
precompiler option for naming, the program runs under the program owner’s user
profile.

An SQL package from an unlike system always adopts the package owner’s
authority for all static SQL statements in the package. An SQL package created on
an AS/400 system using the CRTSQLxxx command with OPTION(*SQL) specified,
also adopts the package owner’s authority for all static SQL statements in the
package.

A distributed relational database administrator can check security exposure on
application servers by using the Display Program Adopt (DSPPGMADP) command.
The DSPPGMADP command displays the programs and SQL packages that use a
specified user profile, as shown below. You may also send the results of the
command to a printer or to an output file.
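
For example, either of the following commands reports on the user profile MPSUP
shown in the display that follows; the second form directs the output to a spooled
printer file:
DSPPGMADP USRPRF(MPSUP)
DSPPGMADP USRPRF(MPSUP) OUTPUT(*PRINT)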

 User profile  . . . . . . . :   MPSUP

 Object      Library     Type       Attribute   Text
 INVENT      SPIFFY      *PGM                   Adopting program
 CLIENT1     SPIFFY      *PGM                   Adopting program
 TESTINV     TEST        *PGM       CLP         Test inventory pgm
 INVENT1     SPIFFY      *SQLPKG                SQL package
 CLIENT1     SPIFFY      *SQLPKG                SQL package
 TESTINV     SPIFFY      *SQLPKG                SQL package
                                                                  Bottom
 Press Enter to continue

 F3=Exit   F12=Cancel   F17=Top   F18=Bottom

 (C) COPYRIGHT IBM CORP. 1980, 1991.

Protection Strategies in a Distributed Relational Database


Network security in an AS/400 distributed relational database must be planned to
protect critical data on any AS from unauthorized access. But because of the dis-
tributed nature of the relational database, security planning must ensure that avail-
ability of data in the network is not unnecessarily restricted.

One of the decisions that a distributed relational database administrator needs to
make is the system security level in place for each system in the network. A system
security level of 10 provides no security for application servers other than physical
security at the system site. A system security level of 20 provides some protection
to application servers because network security checking is done to ensure the
local and remote system are correctly identified. However, this level does not
provide the object authorization necessary to protect critical database elements
from unauthorized access. An AS/400 system security level of 30 and above is the
recommended choice for systems in a network that want to protect specific system
objects.

The distributed relational database administrator must also consider how communi-
cations are established between ARs on the network and the application servers.
Some questions that need to be resolved might include:
• Should a default user profile exist on an AS?
Maintaining many user profiles throughout a network can be difficult. However,
creating a default user profile in a communications subsystem entry opens the
AS to incoming communications requests if the AS is not a secure location. In
some cases this might be an acceptable situation, in other cases a default user
profile might reduce the system protection capabilities too far to satisfy security
requirements.
For example, systems that serve many ARs need a high level of security. If
their databases were lost or damaged, the entire network could be affected.
Since it is possible to create user profiles or group profiles on an AS that identi-
fies all potential users needing access, it is unnecessary for the database
administrator to consider creating a default user profile for the communications
subsystem or subsystems managing distributed relational database work.
In contrast, an AS/400 system that rarely acts as an AS to other systems in the
network and does not contain sensitive or critical data might use a default user
profile for the communications subsystem managing distributed relational data-
base work. This might prove particularly effective if the same application is
used by all the other systems in the network to process work on this database.
| Strictly speaking, the concept of a default user applies only to the use of APPC.
| However, a similar technique can be used with systems that are using TCP/IP.
| A single userid could be established under which the server jobs could run. The
| ADDSVRAUTE command could be used on all ARs to specify that that user ID
| should be used for all users to connect with. The server authorization entries
| could have a password specified on them, or they could specify *NONE for the
| password, depending on the setting of the PWDRQD parameter on the
| CHGDDMTCPA command at the AS. The default value of this attribute is that
| passwords are required.
• How should access to database objects be handled?
Authority to objects can be granted through private authority, group authority,
public authority, adopted authority, and authorization lists. While a user profile
(or default profile) has to exist on the AS for the communications request to be
accepted, how the user is authorized to objects can affect performance.
Whenever possible, use group authority or authorization lists to grant access to
a distributed relational database object. It takes less time and system resources
to check these than to review all private authorities.
| For TCP/IP connections, you do not need a private user ID for each user that
| can connect to an AS, because you can map user IDs.
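
As a sketch of the common user profile technique described in the first question
above, the administrator might run a command like the following on each AR. All of
the names and the password are examples only; for DRDA connections the
SERVER value is normally the relational database name of the AS, and the system
must be set to retain server security data before a password can be stored in the
entry:
ADDSVRAUTE USRPRF(JONES) SERVER(KC000) USRID(DRDAUSER) PASSWORD(OAKST1)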

Chapter 5. Setting Up an AS/400 Distributed Relational
Database
To set up an AS/400 distributed relational database, you need to work with three
functions of the AS/400 system:
• Work management
• The relational database directory
• Basic data management
Work management is the set of system functions that processes work in the
system. The relational database directory is used by the AS/400 system to direct
work to a particular relational database in the network. After you have your systems
ready for work and have your relational database directories set up, you can add
data to tables on any application server (AS) in your network. This chapter intro-
duces these three topics and helps you to set up AS/400 systems for distributed
relational database work.

Connection and set up information for a distributed relational database network of
unlike systems can be found in the Distributed Relational Database Cross-Platform
Connectivity book (SG24-4311-02) or the Distributed Relational Database Architec-
ture Connectivity Guide book (SC26-4783-03).

Work Management on the AS/400 System


All of the work done on the AS/400 system is submitted through the work manage-
ment function. On an AS/400 system, you can design specialized operating envi-
ronments to handle different types of work to satisfy the requirements of your
system. However, when the operating system is installed, it includes a work man-
agement environment that supports interactive and batch processing, communi-
cations, and spool processing.

On the AS/400 system, all user jobs operate in an environment called a sub-
system, defined by a subsystem description, where the system coordinates proc-
essing and resources. Users can control a group of jobs with common
characteristics independently of other jobs if the jobs are placed in the same sub-
system. You can easily start and end subsystems as needed to support the work
being done and to maintain the performance characteristics you desire.

The basic types of jobs that run on the system are interactive, communications,
batch, spooled, autostart, and prestart.

An interactive job starts when you sign on a work station and ends when you sign
off. A communications batch job is a job started from a program start request from
another system. A non-communications batch job is started from a job queue. Job
queues are not used when starting a communications batch job. Spooling functions
are available for both input and output. Autostart jobs perform repetitive work or
one-time initialization work. Autostart jobs are associated with a particular sub-
system, and each time the subsystem is started, the autostart jobs associated with
it are started. Prestart jobs are jobs that start running before the remote program
sends a program start request.

Setting Up Your Work Management Environment
One subsystem, called a controlling subsystem, starts automatically when you
load the system. Two controlling subsystem configurations are supplied by IBM,
and you can use them without change. The first configuration includes the following
subsystems:
• QBASE, the controlling subsystem, supports interactive, batch, and communi-
cations jobs.
• QSPL supports processing of spooling readers and writers.
| • QSYSWRK supports various system functions such as TCP/IP.

QBASE automatically starts when the system is started. An automatically started
job in QBASE starts QSPL.

| The second controlling subsystem configuration supplied is more complex. This
| configuration includes the following subsystems:
| • QCTL, the controlling subsystem, supports interactive jobs started at the
| console.
| • QINTER supports interactive jobs started at other work stations.
| • QCMN supports communications jobs.
| • QBATCH supports batch jobs.
| • QSPL supports processing of spooling readers and writers.
| • QSYSWRK supports various system functions such as TCP/IP.

If you change your configuration to use the QCTL controlling subsystem, it starts
automatically when the system is started. An automatically started job in QCTL
starts the other subsystems.

You can change your subsystem configuration from QBASE to QCTL by changing
the system value QCTLSBSD (controlling subsystem) to QCTL on the Change
System Value (CHGSYSVAL) command and starting the system again.
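
For example (the value is shown without library qualification; see the Work
Management book for the exact format of the QCTLSBSD system value):
CHGSYSVAL SYSVAL(QCTLSBSD) VALUE(QCTL)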

You can change the IBM-supplied subsystem descriptions or any user-created sub-
system descriptions by using the Change Subsystem Description (CHGSBSD)
command. You can use this command to change the storage pool size, storage
pool activity level, and the maximum number of jobs for the subsystem description
of an active subsystem.
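
For example, the following sketch raises the maximum number of jobs allowed in
the QCMN subsystem; the value 50 is purely illustrative:
CHGSBSD SBSD(QCMN) MAXJOBS(50)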

For more information about work management, subsystems, and jobs on the
AS/400 system, see the Work Management book. For more information about work
management for communications and communications subsystems, see the Com-
munications Management book.

| Work Management for DRDA Use with TCP/IP


| The DDM TCP/IP server used for DRDA TCP/IP connections runs in the
| QSYSWRK subsystem. See “Managing the TCP/IP Server” on page 6-17 for
| details on setting up and administering the TCP/IP server.

Considerations for Setting Up Subsystems for APPC
In a distributed relational database, communications jobs and interactive jobs are
the main types of work an administrator must plan to manage on each system.
Systems in the network start communications jobs to handle requests from an
application requester (AR); an AR’s communications requests to other systems
normally originate from interactive or batch jobs on the local system. Setting up an
efficient work management environment for the distributed relational database
network systems can enhance your overall network performance by allocating
system resources to the specific needs of each AS and AR in the network.

When the OS/400 licensed program is first installed, QBASE is the default control-
ling subsystem. As the controlling subsystem, QBASE allocates system resources
between the two subsystems QBASE and QSPL. Interactive jobs, communications
jobs, batch jobs, and so on, allocate resources within the QBASE subsystem. Only
spooled jobs are managed under a different subsystem, QSPL. This means you
have less control of system resources for handling communications jobs versus
interactive jobs than you would using the QCTL controlling subsystem.

Using the QCTL subsystem configuration, you have control of four additional sub-
systems for which the system has allocated storage pools and other system
resources. Changing the QCTL subsystems, or creating your own subsystems
gives you even more flexibility and control of your processing resources.

Different system requirements for some of the systems in the Spiffy Corporation
distributed relational database network may require different work management
environments for best network efficiency. The following discussions show how the
distributed relational database administrator can plan a work management sub-
system to meet the needs of each AS/400 system in the Spiffy distributed relational
database network.

In the Spiffy Corporation system organization, a small dealership may be satisfied
with a QBASE level of control for the various jobs its users have on the system. For
example, requests to a small dealership’s relational database from the regional AR
(to update dealer inventory levels for a shipment) are handled as communications
jobs. Requests from a dealership user to the regional AS, to request a part not
currently in stock locally, is handled as an interactive job on the dealership system.
Both activities are relatively small jobs because the dealership is smaller and
handles fewer service orders, parts sales and so on. The system coordination of
resources in the QBASE subsystem provides the level of control this enterprise
requires for their interactive and communications needs.

A large dealership, on the other hand, probably manages its work through the
QCTL subsystem, because of the different work loads associated with the different
types of jobs.

The number of service orders booked each day can be high, requiring a query to
the local relational database for parts or to the regional center AS for parts not in
stock at the dealership. This type of activity starts interactive jobs on their system.
The dealership also starts a number of interactive jobs that are not distributed rela-
tional database related jobs, such as enterprise personnel record keeping, mar-
keting and sales planning and reporting, and so on. Requests to this dealership
from the regional center for performance information or to update inventory or work
plans are communications jobs that the dealership wants to manage in a separate
environment. The large dealership can also receive a request from another dealer-
ship for a part that is out of stock at the regional center.

For a large dealership, the QCTL configuration with separate subsystem manage-
ment for QINTER and QCMN provides more flexibility and control for managing its
system work environment. In this example, interactive and communications jobs at
the dealership system can be allocated more of the system resources than other
types of jobs. Additionally, if communications jobs are typically fewer than interac-
tive jobs for this system, resources can be targeted toward interactive jobs, by
changing the subsystem descriptions for both QINTER and QCMN.

A work management environment tailored to a Spiffy Corporation regional center
perspective is also important. In the Spiffy network, the regional center is an AR to
each dealership when it updates the dealership inventory table with periodic parts
shipment data, or updates the service plan table with new or updated service plans
for specific repair jobs. Some of these jobs can be run as interactive jobs (on the
regional system) in early morning or late afternoon when system usage is typically
less, or run as batch jobs (on the regional system) after regular business hours.
The administrator can tailor the QINTER and QBATCH subsystems to accommo-
date specific processing times and resource needs.

The regional center is also an AS for each dealership when a dealership needs to
query the regional relational database for a part not in stock at the dealership, a
service plan for a specific service job (such as rebuilding a steering rack), or for
technical bulletins or recall notifications since the last update to the dealership rela-
tional database. These communications jobs can all be managed in QCMN.

However, a closer examination of some specific aspects of distributed relational
database network use by the KC000 (Kansas City) regional center and the dealer-
ships it serves suggests other alternatives to the distributed relational database
administrator at Kansas City.

The KC000 system serves several very large dealerships that handle hundreds of
service orders daily, and a few small dealerships that handle fewer than 20 service
orders each day. The remaining medium-sized dealerships each handle about 100
service orders daily. One problem that presents itself to the distributed relational
database administrator is how to fairly handle all the communications requests to
the KC000 system from other systems. A large dealership could control QCMN
resources with its requests so that response times and costs to other systems in
the network are unsatisfactory.

The distributed relational database administrator can create additional communi-
cations subsystems so each class of dealerships (small, medium, or large) can
request support from the AS and generally receive better response. By tailoring the
subsystem attributes, prestart job entries, communications work entries, and routing
entries for each subsystem description, the administrator controls how many jobs
can be active on a subsystem and how jobs are processed in the subsystem.

The administrator can add a routing entry to change the class (and therefore the
priority) of a DRDA/DDM job by specifying the class that controls the priority of the
job and by specifying QCNTEDDM on the CMPVAL parameter, as in the following
example:
ADDRTGE SBSD(QCMN) SEQNBR(280) CLS(QINTER) CMPVAL('QCNTEDDM' 37)

The administrator can also add a prestart job entry for DRDA/DDM jobs by specifying
QCNTEDDM as the prestart job program, as in the following example:
ADDPJE SBSD(QCMN) PGM(QCNTEDDM)

For more information on work management topics for the AS/400 system, see the
Work Management book. For more information about changing attributes, work
entries and routing entries for communications, see the Communications Manage-
ment book.

Using the Relational Database Directory


| The OS/400 program uses the relational database directory to define the relational
| database names that can be accessed by applications running on an AS/400
| system, to specify if the connection uses SNA or IP, and to associate these rela-
| tional database names to their corresponding network parameters. The relational
| database directory allows an AR to accept a relational database name from the
| application and translate this name into the appropriate Internet Protocol (IP)
| address or host name and port, or the appropriate Systems Network Architecture
| (SNA) network identifier and logical unit (LU) name values for communications
| processing. The relational database directory also allows associating an ARD
| program with a relational database name.

Each AS/400 system in the distributed relational database network must have a
relational database directory configured. There is only one relational database
directory on an AS/400 system. Each AR in the distributed relational database
network must have an entry in its relational database directory for its local relational
database and one for each remote relational database the AR accesses. Any
AS/400 system in the distributed relational database network that acts only as an
AS must have an entry in its relational database directory for the local relational
database, but does not need to include the relational database names of other
remote relational databases in its directory.

| The relational database name assigned to the local relational database must be
| unique. That is, it should be different from any other relational database in the
| network. Names assigned to other relational databases in the directory identify
| remote relational databases, and must match the name an AS uses to identify its
| local relational database. If the local RDB name entry at an AS does not exist when
| it is needed, one will be created automatically in the directory. The name used will
| be the current system name displayed by the DSPNETA command.

Working with the Relational Database Directory


The following commands let you work with the relational database directory on your
system:
ADDRDBDIRE Add Relational Database Directory Entry
CHGRDBDIRE Change Relational Database Directory Entry
DSPRDBDIRE Display Relational Database Directory Entry
RMVRDBDIRE Remove Relational Database Directory Entry
WRKRDBDIRE Work with Relational Database Directory Entry

The Add RDB Directory Entry (ADDRDBDIRE) display is shown below. You can
use the prompts in this display or the ADDRDBDIRE command to add an entry to
the relational database directory.

|                      Add RDB Directory Entry (ADDRDBDIRE)
|
| Type choices, press Enter.
|
| Relational database . . . . . .   MP311           Name
|
| Remote location:
|   Name or address . . . . . . .   MP311           Name, *LOCAL, *ARDPGM
|   Type  . . . . . . . . . . . .   *SNA            *SNA, *IP
| Text . . . . . . . . . . . . . .  'Oak Street Dealership'

| In this example, an entry is made to add a relational database named MP311 for a
| system with a remote location name of MP311 to the relational database directory
| on the local system. The remote location name does not have to be defined before
| a relational database directory entry using it is created. However, the remote
| location name must be defined before the relational database directory entry is
| used in an application. The relational database name (RDB) parameter and the
| remote location name (RMTLOCNAME) parameter are required for the
| ADDRDBDIRE command. The second element of the RMTLOCNAME parameter
| defaults to *SNA. The descriptive text (TEXT) parameter is optional. As shown in
| this example, it is a good idea to make the relational database name the same as
| the system name or location name specified for this system in your network config-
| uration. This can help you identify a database name and correlate it to a particular
| system in your distributed relational database network, especially if your network is
| complex.

To see the other optional parameters on this command, press F10 on the Add RDB
Directory Entry (ADDRDBDIRE) display. These optional parameters are shown
below.

|                      Add RDB Directory Entry (ADDRDBDIRE)
|
| Type choices, press Enter.
|
| Relational database . . . . . .   MP311
|
| Remote location:
|   Name or address . . . . . . .   MP311
|   Type  . . . . . . . . . . . .   *SNA            *SNA, *IP
| Text . . . . . . . . . . . . . .  'Oak Street Dealership'
|
| Device:
|   APPC device description . . .   *LOC            Name, *LOC
| Local location  . . . . . . . .   *LOC            Name, *LOC, *NETATR
| Remote network identifier . . .   *LOC            Name, *LOC, *NETATR, *NONE
| Mode  . . . . . . . . . . . . .   *NETATR         Name, *NETATR
| Transaction program . . . . . .   *DRDA           Character value, *DRDA

The system provides default values for the additional ADDRDBDIRE command
parameters:
• Device (DEV)
• Local location (LCLLOCNAME)
• Remote network identifier (RMTNETID)
• Mode (MODE)
• Transaction program (TNSPGM)
Note: The transaction program name parameter in the AS/400 system is
TNSPGM and in SNA, it is TPN.
• If you use the defaults with advanced program-to-program communications
(APPC), the system determines the device, the local location, and the remote
network identifier that will be used. The mode name defined in the network
attributes is used and the transaction program name for Distributed Relational
Database Architecture (DRDA) support is used.
• If you use the defaults with Advanced Peer-to-Peer Networking (APPN), the
system ignores the device (DEV) parameter, and uses the local location name,
remote network identifier, and mode name defined in the network attributes.

You can change any of these default values on the ADDRDBDIRE command. For
example, you may have to change the TNSPGM parameter to communicate with
an SQL/DS system. By default for SQL/DS support, the TNSPGM is the name of
the SQL/DS database to which you want to connect. The default TNSPGM param-
eter value for DRDA (*DRDA) is X'07F6C4C2'. For more information on trans-
action program name, see:
| • “Setting QCNTSRVC as a TPN on a DB2/400 Application Requester” on
| page 9-31.
| • “Setting QCNTSRVC as a TPN on a DB2 for VM Application Requester” on
| page 9-32.
| • “Setting QCNTSRVC as a TPN on a DB2 for OS/390 Application Requester” on
| page 9-32.
| • “Setting QCNTSRVC as a TPN on a DB2 Connect Application Requester” on
| page 9-32.

| To specify communication information and an ARD program on the ADDRDBDIRE
| command prompt, press F9 and page down. When the ARD program will not use
| the communication information specified on the ADDRDBDIRE command (which is
| normally the case), use the special value *ARDPGM on the RMTLOCNAME param-
| eter.

| The Add RDB Directory Entry (ADDRDBDIRE) display shown below demonstrates
| how the panel changes if you enter *IP as the second element of the
| RMTLOCNAME parameter, and what typical entries would look like for an RDB that
| uses TCP/IP.

|                      Add RDB Directory Entry (ADDRDBDIRE)
|
| Type choices, press Enter.
|
| Relational database . . . . . . > MP311
|
| Remote location:
|   Name or address . . . . . . . > MP311.spiffy.com
|
|   Type  . . . . . . . . . . . . > *IP             *SNA, *IP
|
| Text . . . . . . . . . . . . . . > 'Oak Street Dealership'
|
| Port number or service program    *DRDA

| Note that instead of specifying MP311.spiffy.com for the RMTLOCNAME, you could
| have specified the IP address (for example, '9.5.25.176'). For IP connections to
| another AS/400, leave the PORT parameter value set at the default, *DRDA. For
| connections to an IBM Universal Database (UDB) server, for example, you might
| need to set the port to a number such as 30000. Refer to the product documenta-
| tion for the server you are using. If you have a valid service name defined for a
| DRDA port at some location, you can also use that instead of a number. Do not
| use the 'drda' service name, however, since it will be slower on connects than
| using *DRDA, which accomplishes the same thing.
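
The same entry can also be added with the command form rather than through the
prompt display. The following sketch assumes the parameter keywords shown here;
prompt the command to confirm them on your system:
ADDRDBDIRE RDB(MP311) RMTLOCNAME('MP311.spiffy.com' *IP) +
           PORT(*DRDA) TEXT('Oak Street Dealership')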

The Work with RDB Directory Entries display provides options that allow you to
add, change, display, or remove a relational database directory entry.

|                         Work with RDB Directory Entries
|
| Position to . . . . . .
|
| Type options, press Enter.
|   1=Add   2=Change   4=Remove   5=Display details   6=Print details
|
|          Relational    Remote
| Option   Database      Location    Text
|   __     KC000         KC000       Kansas City region database
|   __     MP000         *LOCAL      Minneapolis region database
|   __     MP101         MP101       Dealer database MP101
|   __     MP102         MP102       Dealer database MP102
|   __     MP211         MP211       Dealer database MP211
|   __     MP215         MP215       Dealer database MP215
|   4_     MP311         MP311       Dealer database MP311

As shown on the display, option 4 can be used to remove an entry from the rela-
tional database directory on the local system. If you remove an entry, you receive
another display that allows you to confirm the remove request for the specified
entry or select a different relational database directory entry. If you use the Remove
Relational Database Directory (RMVRDBDIRE) command, you have the option of
specifying a specific relational database name, generic names, all directory entries,
or just the remote entries.

You have the option on the Work with RDB Directory Entries display to display the
details of an entry. Output from the Work with RDB Entries display is to a display.
However, if you use the Display RDB Directory Entries (DSPRDBDIRE) command,
you can send the output to a printer or an output file. The relational database direc-
tory is not an AS/400 object, so using an output file provides a means of backup for
the relational database directory. For more information about using the
DSPRDBDIRE command with an output file for backing up the relational database
directory, see “Saving and Restoring Relational Database Directories” on
page 7-11.
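
For example, a command like the following (the output file name and library are
arbitrary) places the directory entries in a database file that can later be used to
recreate the directory:
DSPRDBDIRE OUTPUT(*OUTFILE) OUTFILE(QGPL/RDBDIRE)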

You have the option on the Work with RDB Directory Entries display to change an
entry in the relational database directory. You can also use the Change Relational
Database Directory Entries (CHGRDBDIRE) command to make changes to an
entry in the directory. You can change any of the optional command parameters
and the remote location name of the system. You cannot change a relational data-
base name for a directory entry. To change the name of a relational database in
the directory, remove the entry for the relational database and add an entry for the
new database name.
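
For example, if the database now known as MP311 had to be renamed, the
sequence might look like the following; the new name MP311A is made up for this
illustration:
RMVRDBDIRE RDB(MP311)
ADDRDBDIRE RDB(MP311A) RMTLOCNAME(MP311) TEXT('Dealer database MP311')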

| You should not make it a practice to remove the local RDB directory entry or
| change its name. Normally, it is never necessary. However, if you must change
| the name of the local RDB entry, the procedure includes doing the remove and add
| as explained in the previous paragraph. But there are special considerations
| involved with removing the local entry, because that entry contains some system-
| wide DRDA attribute information. If you try to remove the entry, you will get
| message CPA3E01 (Removing or changing *LOCAL directory entry may cause loss
| of configuration data (C G)), and you will be given the opportunity to cancel the
| operation or continue. The message text goes on to tell you that the entry is used
| to store configuration data entered with the CHGDDMTCPA command. If the
| *LOCAL entry is removed, configuration data may be destroyed, and the default
| configuration values will be in effect. If the default values are not satisfactory, con-
| figuration data will have to be re-entered with the CHGDDMTCPA command.
| Before removing the entry, you may want to record the values specified in the
| CHGDDMTCPA command so that they can be restored after the *LOCAL entry is
| deleted and added with the correct local RDB name.

Relational Database Directory Setup Example


| The Spiffy Corporation network provides an example to illustrate how the relational
| database directory is used on systems in a distributed relational database network
| and show how each is set up. The example assumes the use of APPC for commu-
| nications, as opposed to TCP/IP, which would be simpler to set up. However,
| some elements of the example are protocol-independent. The RDB directory entries
| needed for APPC use would be needed in a TCP/IP network also, but the parame-
| ters would differ. Host names or IP addresses and port identifications would replace
| LU names, device descriptions, modes, TPNs, and so forth.

A simple relationship to consider is the one between two regional offices as shown
below:

MP000 KC000


Figure 5-1. Relational Database Directory Setup for Two Systems

The relational database directory for each regional office must contain an entry for
the local relational database and an entry for the remote relational database
because each system is both an AR and an AS. The commands to create the rela-
tional database directory for the MP000 system are:
ADDRDBDIRE RDB(MP000) RMTLOCNAME(*LOCAL) TEXT('Minneapolis region database')

ADDRDBDIRE RDB(KC000) RMTLOCNAME(KC000) TEXT('Kansas City region database')

| In the above example, the MP000 system identifies itself as the local relational
| database by specifying *LOCAL for the RMTLOCNAME parameter. There is only
| one relational database on an AS/400 system. You can simplify identification of
| your network relational databases if you make the relational database names in the
| directory the same as the system name and the local location name for the local
| system, and the same as the remote location name for the remote system.
| Note: The system name is specified on the SYSNAME parameter of the Change
| Network Attributes (CHGNETA) command. The local system is identified on
| the LCLLOCNAME parameter of the CHGNETA command during communi-
| cations configuration, as shown in the example on page 3-10. Remote
| locations using SNA (APPC) are identified with the RMTCPNAME param-
| eter on the Create Controller (CRTCTLAPPC) command during communi-
| cations configuration as shown on page 3-11. Using the same names for
| system names, network locations, and database names can help avoid con-
| fusion, particularly in complex networks.

The corresponding entries for the KC000 system relational database directory are:
ADDRDBDIRE RDB(KC000) RMTLOCNAME(*LOCAL) TEXT('Kansas City region database')

ADDRDBDIRE RDB(MP000) RMTLOCNAME(MP000) TEXT('Minneapolis region database')

A more complex example to consider is that of a regional office to its dealerships.


For example, to access relational databases in the network shown below, the rela-
tional database directory for MP000 system must be expanded to include an entry
for each of its dealerships.

MP000

MP101 MP110 MP201


Figure 5-2. Relational Database Directory Setup for Multiple Systems

A sample of the commands used to complete the MP000 relational database direc-
tory to include all its dealer databases is as follows:
PGM
ADDRDBDIRE RDB(MP000) RMTLOCNAME(*LOCAL) +
             TEXT('Minneapolis region database')
ADDRDBDIRE RDB(KC000) RMTLOCNAME(KC000) +
             TEXT('Kansas City region database')
ADDRDBDIRE RDB(MP101) RMTLOCNAME(MP101) +
             TEXT('Dealer database MP101')
ADDRDBDIRE RDB(MP002) RMTLOCNAME(MP110) +
             TEXT('Dealer database MP110')
.
.
.
ADDRDBDIRE RDB(MP215) RMTLOCNAME(MP201) +
             TEXT('Dealer database MP201')
ENDPGM

In the above example, each of the region dealerships is included in the Minneapolis
relational database directory as a remote relational database.

Since each dealership can serve as an AR to MP000 and to other dealership appli-
cation servers, each dealership must have a relational database directory that has
an entry for itself as the local relational database and the regional office and all
other dealers as remote relational databases. The database administrator has
several options to create a relational database directory at each dealership system.

The method that uses the most time and is most prone to error is to create a rela-
tional database directory at each system by using the ADDRDBDIRE command to
create each directory entry on all systems that are part of the MP000 distributed
relational database network.

A better alternative is to create a control language (CL) program like the one shown
in the above example for the MP000. The distributed relational database adminis-
trator can copy this CL program for each of the dealership systems. To customize
this program for each dealership, the database administrator changes the remote
location name of the MP000 system to MP000, and changes the remote location
name of the local dealership to *LOCAL. The distributed relational database admin-
istrator can distribute the customized CL program to each dealership to be run on
that system to build its unique relational database directory.
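
For example, the copy of the program run at dealership MP101 might contain
entries like the following (only the two changed entries are shown):
ADDRDBDIRE RDB(MP101) RMTLOCNAME(*LOCAL) TEXT('Dealer database MP101')
ADDRDBDIRE RDB(MP000) RMTLOCNAME(MP000) TEXT('Minneapolis region database')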

A third method is to write a program that reads the relational database directory
information sent to an output file as a result of using the Display Relational Data-
base Directory Entry (DSPRDBDIRE) command. This program can be distributed to
the dealerships, along with the output file containing the relational database direc-
tory entries for the MP000 system. Each system could read the MP000 output file
to create a local relational database directory. The Change Relational Database
Directory Entry (CHGRDBDIRE) command can then be used to customize the
MP000 system directory for the local system. For more information about using an
output file to create relational database directory entries, see “Saving and Restoring
Relational Database Directories” on page 7-11.

| Setting Up Security
| DRDA security is covered in Chapter 4, “Security for an AS/400 Distributed Rela-
| tional Database” on page 4-1, but for the sake of completeness, it is mentioned
| here as something that should be considered before using DRDA, or in converting
| your network from the use of APPC to TCP/IP. Security set up for TCP/IP is quite
| different from what is required for APPC. One thing to be aware of is the lack of the
| 'secure location' concept that APPC has. Because a TCP/IP server cannot fully
| trust that a client system is who it says it is, the use of passwords on connect
| requests is more important. To make it easier to send passwords on connect
| requests, the use of server authorization lists associated with specific user profiles
| has been introduced with TCP/IP support. Entries in server authorization lists can
| be maintained by use of the xxxSVRAUTE commands described in Chapter 4,
| “Security for an AS/400 Distributed Relational Database” on page 4-1 and in the
| CL Reference (where xxx represents ADD, CHG, and RMV). An alternative to the
| use of server authorization entries is to use the USER/USING form of the SQL
| CONNECT statement to send passwords on connect requests.
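
For example, in an embedded SQL program the connect request might be coded as
follows, with the user ID and password supplied through host variables (the variable
names are arbitrary):
EXEC SQL CONNECT TO KC000 USER :USERID USING :PASSWORD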

| Setup at the server side includes deciding if passwords are required for inbound
| connect requests or not. The default setting is that they are. That can be changed
| by use of the CHGDDMTCPA CL command.
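
For example, the following sketch changes the server so that passwords are not
required on inbound connect requests; the value shown is an assumption, so
prompt the command to see the choices available on your release:
CHGDDMTCPA PWDRQD(*NO)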

| Setting Up the TCP/IP Server


| If you own a DRDA AS that will be using the TCP/IP protocol, you will need to set
| up the DDM TCP/IP server. This can be as simple as insuring that it is started
| when it is needed, which can be done by running the following command if you
| want it to remain active at all times:
| CHGDDMTCPA AUTOSTART(\YES)

| But there are other parameters that you may want to adjust to tune the server for
| your environment. These include the initial number of prestart jobs to start, the
| maximum number of jobs, threshold when to start more, and so forth. See “Man-
| aging the TCP/IP Server” on page 6-17 for more information on this subject.

| You may want to set up a common user profile for all clients to use when con-
| necting, or a set of different user profiles with different levels of security for different
| classes of remote users. You can then use the ADDSVRAUTE command at the AR
| to map each user's profile name at the AR to what user profile he or she will run
| under at the AS. See “DRDA Security using TCP/IP” on page 4-8 for more informa-
| tion.

| Setting Up SQL Packages for Interactive SQL on a RUW Server


| If you have the DB2 Query Manager and SQL Development Kit and plan to use the
| Interactive SQL (STRSQL) function of that product, and if you plan to connect to
| DRDA servers that do not have two-phase commit capability, then you need to take
| some special action to insure that SQL packages are set up at the servers that
| have the limited capability. DB2 for AS/400 servers using TCP/IP are examples of
| such systems that currently have only RUW capability. Conversations with such
| systems are said to be unprotected.

| Normally, SQL packages are created automatically at the AS for users of STRSQL.
| However, a problem can occur because the initial connection for STRSQL is to the
| local system, and that connection is protected by two-phase commit protocols. If a
| subsequent connection is made to a system that is only one-phase commit
| capable, then that connection is read-only. When an attempt is made to automat-
| ically create a package over such a connection, it fails because the creation of a
| package is considered an update, and cannot be done over a read-only connection.

| The solution to this is to get rid of the connection to the local database before con-
| necting to the remote AS. This can be done by doing a RELEASE ALL command
| followed by a COMMIT. Then the connection to the remote system can be made
| and since it is the first connection, updates can be made over it.
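
For example, before connecting from interactive SQL to a server named KC105 that
is only one-phase commit capable, you might enter:
RELEASE ALL
COMMIT
CONNECT TO KC105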

Setting Up DDM Files


The implementation of DRDA support on the AS/400 system uses Distributed Data
Management (DDM) conversations for communications. Because of this, you can
use DDM in conjunction with distributed relational database processing. You can
use DDM to submit remote commands to a target system, copy tables from one
AS/400 system to another, and process nondistributed relational database work on
another system.

With distributed relational database, information the AR needs to connect to a data-
base is provided in the relational database directory. When you use DDM, you must
create a separate DDM file for each file you want to work with on the target system.
The DDM file is used by the application on the source system to identify a remote
file on the target system and the communications path to the target system.

| DRDA support over TCP/IP does not support DDM source (client) access over
| TCP/IP. That is, you cannot create a DDM file on AS/400 that uses TCP/IP. This
| also means that you cannot run a SBMRMTCMD on an AS/400 over a TCP/IP con-
| nection. However, the new TCP/IP support does allow PC clients using DDM to
| access DB2 for AS/400 as a DDM server over TCP/IP, and the RUNRMTCMD can
| possibly be used as a substitute for SBMRMTCMD over TCP/IP.

Some database administration tasks discussed in Chapter 6, Distributed Relational
Database Administration and Operation Tasks use DDM to access remote files. A
DDM file is created using the Create DDM File (CRTDDMF) command. You can
create a DDM file before the file and communication path named in the file have
been created. However, the file named in the DDM file and the communications
information must be created before the DDM file is used by an application.

The following example shows how a DDM file is created:


CRTDDMF FILE(TEST/KC105TST) RMTLOCNAME(KC105) +
        RMTFILE(SPIFFY/INVENT)

This command creates a DDM file named KC105TST and stores it in the TEST
library on the source system. This DDM file uses the remote location KC105 to
access a remote file named INVENT stored in the SPIFFY library on the target
AS/400 system.

You can use options on the Work with DDM Files display to change, delete, display
or create DDM files. For more information about using DDM files, see the Distrib-
uted Data Management book.

Loading Data into Tables


Applications in the distributed relational database environment operate on data
stored in tables. In general, applications are used to query a table for information,
to insert, update, or delete rows of a table or tables, or to create a new table. Other
situations occur where data on one system must be moved to another system. This
section discusses many of the methods available to load new data into a table,
move data from one AS/400 system to another, or move data to an AS/400 system
from a non-AS/400 system.

Loading New Data into Tables


You load data into a table by entering each data item into the table. On the
AS/400 system, you can use SQL, the Query Management/400 function, or the
data file utility portion of AS/400 Application Development Tools to create applica-
tions that insert data into a table.

Using SQL to Load Data into a Table


A simple method of loading data into a table is to use an SQL application and the
SQL INSERT operation.

Consider a situation in which a Spiffy regional center needs to add inventory items
to a dealership’s inventory table on a periodic basis as regular inventory shipments
are made from the regional center to the dealership.
INSERT INTO SPIFFY.INVENT
(PART,
DESC,
QTY,
PRICE)
VALUES
('1234567',
'LUG NUT',
25,
1.15 )

The statement above inserts one row of data into a table called INVENT in an SQL
collection named SPIFFY.

For each item on the regular shipment, an SQL INSERT statement places a row in
the inventory table for the dealership. In the above example, if 15 different items
were shipped to the dealership, the application at the regional office could include
15 SQL INSERT statements or a single SQL INSERT statement using host vari-
ables.

In this example, the regional center is using an SQL application to load data into a
table at an AS. Run-time support for SQL is provided in the OS/400 licensed
program, so the AS does not need the DB2/400 Query Manager and SQL Develop-
ment Kit licensed program. However, the DB2/400 Query Manager and SQL Devel-
opment Kit licensed program is required to write the application. For more
information on the SQL programming language, see the DB2 for AS/400 SQL Pro-
gramming and the DB2 for AS/400 SQL Reference books.

Using the Query Management/400 Function


The OS/400 licensed program provides a query management function that allows
you to manipulate data in tables and files. A query is created using an SQL query
statement. You can run the query through CL commands or through a query call-
able interface in your application program. Using the query management function,
you can insert a row of data into a table for the inventory updates described in the
previous section as follows.

Create a source member INVLOAD in the source physical file INVLOAD and the
SQL statement:
INSERT INTO SPIFFY/INVENT
(PART, DESC, QTY, PRICE)
VALUES
(&PARTVALUE, &DESCVALUE, &QTYVALUE, &PRICEVALUE)

Use a CL command to create a query management query object:


CRTQMQRY QMQRY(INVLOAD) SRCFILE(INVLOAD) SRCMBR(INVLOAD)

The following CL command places the INSERT SQL statement results into the
INVENT table in the SPIFFY collection. Use of variables in the query
(&PARTVALUE, &DESCVALUE, and so on) allows you to enter the desired values
as part of the STRQMQRY call, rather than requiring that you create the query
management query again for each row.
STRQMQRY QMQRY(INVLOAD) RDB(KC000) +
  SETVAR((PARTVALUE '''1134567''') (DESCVALUE '''Lug Nut''') +
  (QTYVALUE 25) (PRICEVALUE 1.15))

The query management function is dynamic, which means its access paths are built
at run time instead of when a program is compiled. For this reason the Query
Management/400 function is not as efficient for loading data into a table as an SQL
application. However, you need the DB2/400 Query Manager and SQL Develop-
ment Kit product to write an application; run-time support for SQL and query man-
agement is part of the OS/400 licensed program.

For more information on the query management function, see the DB2 for AS/400
Query Management Programming book.

Using Data File Utility
The data file utility (DFU), which is part of the AS/400 Applications Development
Tools package available from IBM, is a program builder that helps you create pro-
grams to enter data, update tables, and make inquiries. You do not need a pro-
gramming language to use DFU. Your data entry, maintenance, or inquiry program
is created when you respond to a series of displays. An advantage in using DFU is
that its generic nature allows you to create a database update program to load data
to a table faster than you could by using programming languages such as SQL.
You can work with data on a remote system using DFU with DDM files, or by using
display station pass-through to run DFU at the target system.

For more information on the DFU program generator, see the ADTS/400: Data File
Utility book.

Moving Data from One AS/400 System to Another


A number of situations occur in enterprise operations that could require moving
data from one AS/400 system to another. For example, a new dealership might
open in a region, and some clients from one or two other dealerships might be
transferred to the new dealership as determined by client address. Perhaps a deal-
ership closed or no longer represents Spiffy Corporation sales and service. That
dealer’s inventories and required service information must be allocated to either the
regional office or other area dealerships. Perhaps a dealership has grown to the
extent that it needs to upgrade its AS/400 system, and the entire database must be
moved to the new system.

Some alternatives for moving data from one AS/400 system to another are:
• User-written application programs
• Interactive SQL
• Query Management/400 functions
• Copy to and from tape or diskette devices
• Copy file commands with DDM
• The network file commands
• AS/400 system save and restore commands

Creating a User-Written Application Program


A program compiled with DUW connection management can connect to a remote
database and a local database and FETCH from one to INSERT into the other to
move the data. By using multi-row FETCH and multi-row INSERT, blocks of records
can be processed at one time. Commitment control can be used to allow check-
points to be performed at points during the movement of the data to avoid having to
start the copy over in case of a failure.
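
The following outline shows the general flow of such a program. Only the SQL
statements are shown; the host language declarations, looping logic, and error
handling are omitted, the table and column names simply reuse the Spiffy inventory
example, and the single-row FETCH and INSERT shown here would be replaced by
their multi-row forms for better performance:
CONNECT TO KC000
CONNECT TO MP000
SET CONNECTION KC000
DECLARE C1 CURSOR FOR SELECT PART, DESC, QTY, PRICE FROM SPIFFY.INVENT
OPEN C1
FETCH C1 INTO :PART, :DESC, :QTY, :PRICE
SET CONNECTION MP000
INSERT INTO SPIFFY.INVENT (PART, DESC, QTY, PRICE)
   VALUES (:PART, :DESC, :QTY, :PRICE)
COMMIT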

Using Interactive SQL


Using the SQL SELECT statement and interactive SQL, you can query a database
on another AS/400 system for data you need to create or update a table on the
local system. The SELECT statement allows you to specify the table name and
columns containing the desired data, and selection criteria or filters that determine
which rows of data are retrieved. If the SELECT statement is successful, the result
is one or more rows of the specified table.

5-16 OS/400 Distributed Database Programming V4R2


In addition to getting data from one table, SQL allows you to get information from
columns contained in two or more tables in the same database by using a join
operation. If the SELECT statement is successful, the result is one or more rows of
the specified tables. The data values in the columns of the rows returned represent
a composite of the data values contained in specified tables.

Using an interactive SQL query, the results of a query can be placed in a database
file on the local system. If a commitment control level is specified for the interactive
SQL process, it applies to the AS; the database file on the local system is under a
commitment control level of *NONE.

Interactive SQL allows you to do the following:


• Create a new file for the results of a select.
• Replace an existing file.
• Create a new member in a file.
• Replace a member.
• Append the results to an existing member.

Consider the situation in which the KC105 dealership is transferring its entire stock
of part number ‘1234567’ to KC110. KC110 queries the KC105 database for the
part they acquire from KC105. The result of this inventory query is returned to a
database file that already exists on the KC110 system. This is the process you can
use to complete this task:

Use the Start SQL (STRSQL) command to get the interactive SQL display. Before
you enter any SQL statement (other than a CONNECT) for the new database,
specify that the results of this operation are sent to a database file on the local
system by doing the following steps:
1. Select the Services option from the Enter SQL Statements display.
2. Select the Change Session Attributes option from the Services display.
3. Enter the Select Output Device option from the Session Attributes Display.
4. Type a 3 for a database file in the Output device field and press Enter. The
following display is shown:

 Type choices, press Enter.

   File . . . . . . . . .   QSQLSELECT     Name
     Library  . . . . . .     QGPL         Name
   Member . . . . . . . .   *FILE          Name, *FILE, *FIRST

   Option . . . . . . . .   1              1=Create new file
                                           2=Replace file
                                           3=Create new member
                                           4=Replace member
                                           5=Add to member

   For a new file:
     Authority  . . . . .   *LIBCRTAUT     *LIBCRTAUT, *CHANGE, *ALL
                                           *EXCLUDE, *USE
                                           authorization list name

   Text . . . . . . . .

 F3=Exit   F5=Refresh   F12=Cancel
5. Specify the name of the database file that is to receive the results.
When the database name is specified, you can begin your interactive SQL proc-
essing as shown in the example below.

 Type SQL statement, press Enter.
   Current connection is to relational database KC000.
   CONNECT TO KC105____________________________________________________
   Current connection is to relational database KC105.
 ====> SELECT * FROM INVENTORY_____________________________________________
       WHERE PART = '1234567'___________________________________________
 __________________________________________________________________________
 __________________________________________________________________________
 __________________________________________________________________________
 __________________________________________________________________________
 __________________________________________________________________________
 __________________________________________________________________________
 __________________________________________________________________________
 __________________________________________________________________________
 __________________________________________________________________________
 __________________________________________________________________________
 __________________________________________________________________________
 __________________________________________________________________________
                                                                      Bottom
 F3=Exit   F4=Prompt   F6=Insert line   F9=Retrieve   F10=Copy line
 F12=Cancel   F13=Services   F24=More keys
For more information on the SQL programming language and interactive SQL, see
the DB2 for AS/400 SQL Programming and the DB2 for AS/400 SQL Reference
books.

Using the Query Management/400 Function


The Query Management/400 function provides almost the same support as interac-
tive SQL for querying a remote system and returning the results in an output file to
the local system.

Both interactive SQL and the query management function can perform data manip-
ulation operations (INSERT, DELETE, SELECT, and so on) for files or tables
without the requirement that the table (or file) already exist in a collection (it can
exist in a library). Also, query management uses SQL CREATE TABLE statements
to provide data definition when a new table is created on the system as a result of
the query. Tables created from a query management function follow the same
guidelines and restrictions that apply to a table created using SQL.

However, the query management function does not allow you to specify a member
when you want to add the results to a file or table. The results of a query function
are placed in the first file member unless you use the OVRDBF command to
specify a different member before starting the query management function.
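
For example, to place the results in member JUNE of an existing output file before starting a query management query, commands similar to the following might be used. The file, library, member, and query names are illustrative, and the STRQMQRY parameters you need may differ on your system:
OVRDBF FILE(INVENTOUT) MBR(JUNE)
STRQMQRY QMQRY(SPIFFY/INVQRY) OUTPUT(*OUTFILE)
         OUTFILE(SPIFFY/INVENTOUT)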

For more information on the query management function, see the DB2 for AS/400
Query Management Programming book.

Copying Files to and from Tape or Diskette


You can copy a table or file to tape or diskette using the Copy to Tape
(CPYTOTAP) and Copy to Diskette (CPYTODKT) commands on the AS/400
system. Data on tape or diskette can be loaded on another AS/400 system using
the Copy From Tape (CPYFRMTAP) and Copy From Diskette (CPYFRMDKT) com-
mands. For more information about using these commands, see the Tape and
Diskette Device Programming book.

| You can also use the CL command CPYF to load data on tape into DB2 for
| AS/400. This is especially useful when loading data that was unloaded from DB2
| for OS/390, or DB2 Server for VM (SQL/DS). Nullable data can be unloaded from
| these systems in such a way that a single-byte flag can be associated with each
| nullable field. CPYF with the *NULLFLAGS option specified for the FMTOPT
| parameter can recognize the null flags and ignore the data in the adjacent field on
| the tape and make the field null in DB2 for AS/400. Another useful FMTOPT
| parameter value for importing data from IBM mainframes is the *CVTFLOAT value.
| It allows floating point data stored on tape in System/390 format to be converted to
| the IEEE format used by DB2 for AS/400.
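
| For example, a load of this kind might look like the following; the file, library, and
| device names are illustrative, and the tape file attributes must be adjusted to match
| the way the data was actually unloaded (the Override with Tape File (OVRTAPF)
| command, described later in this chapter, can also be used to supply tape-specific
| attributes):
| CRTTAPF FILE(QGPL/PARTIN) DEV(TAP01) RCDLEN(100) BLKLEN(8000)
| CPYF FROMFILE(QGPL/PARTIN) TOFILE(SPIFFY/INVENTORY) MBROPT(*ADD)
|      FMTOPT(*NULLFLAGS *CVTFLOAT)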

Using Copy File Commands Between Systems


| The following discussion on the use of copy file commands applies only to the SNA
| environment. In the TCP/IP environment, you can use File Transfer Protocol (FTP)
| to do some of the same type of things for simple tables. Note, however, that FTP
| has some limitations in the handling of null values and with Coded Character Set
| Identifiers (CCSIDs).

Another way to move data from one AS/400 system to another is to copy the data
using the copy file commands with DDM. You can use the Copy File (CPYF), Copy
Source File (CPYSRCF), and Copy from Query File (CPYFRMQRYF) commands to
copy data between files on source and target systems. You can copy local rela-
tional database or device files from (or to) remote database files, and remote files
can also be copied to remote files.

For example, if a dealership closes, the distributed relational database administrator
can copy the client and inventory tables from the remote system to the local
regional system. The administrator needs a properly authorized user profile on the
target system to access and copy the tables and must create a DDM file on the
source system for each table or file that is copied. The following example shows
the command the database administrator would use to copy a table called INVENT
in a collection called SPIFFY from a system with a remote location name of KC105
to a regional center system called KC000. A DDM file called INCOPY in a library

called TEST on the source system KC000 is used for the file access. These com-
mands are run on the KC000 system:
CRTDDMF FILE(TEST/INCOPY) RMTFILE(SPIFFY/INVENT)
        RMTLOCNAME(KC105)
CPYF FROMFILE(TEST/INCOPY) TOFILE(TEST/INVENTDDM)
        MBROPT(*ADD)

In this example, the administrator runs the commands on the KC000 system. If the
administrator is not on the KC000 system, then pass-through must be used to run
these commands on the KC000 system. The SBMRMTCMD command cannot be
used to run the above commands because the AS/400 system cannot be a source
system and a target system for the same job.

Consider the following items when using this command with DDM:
Ÿ A DDM file can be specified on the FROMFILE and the TOFILE parameters for
the CPYF and CPYSRCF commands.
Note: For the Copy from Query File (CPYFRMQRYF), Copy from Diskette
(CPYFRMDKT), and Copy from Tape (CPYFRMTAP) commands, a
DDM file name can be specified only on the TOFILE parameter; for the
Copy to Diskette (CPYTODKT) and Copy to Tape (CPYTOTAP) com-
mands, a DDM file name can be specified only on the FROMFILE
parameter.
Ÿ When a delete-capable file is copied to a non-delete capable file, you must
specify COMPRESS(*YES), or an error message is sent and the job ends.
Ÿ If the remote file name on a DDM file specifies a member name, the member
name specified for that file on the CPYF command must be the same as the
member name on the remote file name on the DDM file. In addition, the Over-
ride Database File (OVRDBF) command cannot specify a member name that is
different from the member name on the remote file name on the DDM file.
Ÿ If a DDM file does not specify a member name and if the OVRDBF command
specifies a member name for the file, the CPYF command uses the member
name specified on the OVRDBF command.
Ÿ If the TOFILE parameter is a DDM file that refers to a file that does not exist,
CPYF creates the file. Following are special considerations for remote files
created with the CPYF command:
– The user profile for the target DDM job must be authorized to the CRTPF
command on the target system.
– For an AS/400 system target, the TOFILE parameter has all the attributes
of the FROMFILE parameter except those described in the Data Manage-
ment book.

For more information about using the Copy File commands to copy between
systems, see the Distributed Data Management book.



Using Network File Commands
Data can be transferred over network protocols that support SNA distribution ser-
vices (SNADS). In addition to APPC and APPN protocols used with distributed rela-
tional database processing, SNADS can be used with binary synchronous
equivalence link (BSCEL) and SNA Upline Facility (SNUF) protocols. An AS/400
system supported by SNADS can send data to another system with the Send
Network File (SNDNETF) command and receive a network file from another AS/400
system with the Receive Network File (RCVNETF) and Work with Network File
(WRKNETF) commands.
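
For example, to send the inventory table to a user on another system, a command similar to the following might be used; the file, user ID, and address values are illustrative only:
SNDNETF FILE(SPIFFY/INVENTORY) TOUSRID((KCCLERK KC105))
On the receiving system, the user can use the WRKNETF command to see the arrived network file, or receive it directly into a database file:
RCVNETF FROMFILE(INVENTORY) TOFILE(SPIFFY/INVENTORY)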

Using System Save and Restore Commands


You can move a table from another AS/400 system using the Save Object
(SAVOBJ) and Restore Object (RSTOBJ) commands. The save commands save
database files on tape, diskette, or a save file. The save file can be distributed to
another system through communications.

The save and restore commands used to save and restore tables or files include:
Ÿ Save Library (SAVLIB) saves one or more collections or libraries
Ÿ Save Object (SAVOBJ) saves one or more objects (including database tables
and views)
Ÿ Save Changed Object (SAVCHGOBJ) saves any objects that have changed
since either the last time the collection or library was saved or from a specified
date
Ÿ Restore Library (RSTLIB) restores a collection or library
Ÿ Restore Object (RSTOBJ) restores one or more objects (including database
tables and views)

For example, if two dealerships were merging, the save and restore commands
could be used to save collections and tables for one relational database, which are
then restored on the remaining system's relational database. To accomplish this, an
administrator would do the following (example commands are shown after the list):
1. Use the SAVLIB command on System A to save a collection or use the
SAVOBJ command on system A to save a table.
2. Specify whether the data is saved to a save file, which can be distributed using
SNADS, or saved on tape or diskette.
3. Distribute the save file to System B or send the tape or diskette to System B.
4. Use the RSTLIB command on System B to restore a collection or use the
RSTOBJ command on System B to restore a table.
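
For example, assuming the table INVENT in collection SPIFFY is moved by way of a save file (the object, library, and save file names are illustrative only), the administrator might enter the following commands on System A:
CRTSAVF FILE(QGPL/INVSAVF)
SAVOBJ OBJ(INVENT) LIB(SPIFFY) DEV(*SAVF) SAVF(QGPL/INVSAVF)
After the save file is distributed to System B, the table can be restored there:
RSTOBJ OBJ(INVENT) SAVLIB(SPIFFY) DEV(*SAVF) SAVF(QGPL/INVSAVF)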

A consideration when using the save and restore commands is the ownership and
authorizations to the restored object. A valid user profile for the current object
owner should exist on the system where the object is restored. If the current
owner’s profile does not exist on this system, the object is restored under the
QDFTOWN default user profile. User authorizations to the object are limited by the
default user profile parameters. A user with QSECOFR authority must either create
the original owner’s profile on this system and make changes to the restored object
ownership, or specify new authorizations to this object for both local and remote
users.



For more information about the save and restore commands, see the Backup and
Recovery book.

Moving a Database to an AS/400 System from a Non-AS/400 System


You may need to move a file from another IBM system to an AS/400 system or
from a non-IBM system to the AS/400 system. This section lists alternatives for
moving data to an AS/400 system from a non-AS/400 system. However, you must
refer to manuals supplied with the other system or identified for the application for
specific instructions on their use.

Moving Data from Another IBM System


There are a number of methods you can use to move data from another IBM
system to an AS/400 system. These methods include the following:
Ÿ A high-level language program can be written to extract data from another
system. A corresponding program for the AS/400 system can be used to load
data to the system.
Ÿ For systems supporting other DRDA implementations, you can use SQL func-
tions to move data. For example, with distributed unit of work, you can open a
query against the source of the data and, in the same unit of work, insert the
data into a table on the AS/400. For best performance, blocking should be used
in the query and a multirow insert should be done at the AS/400. For additional
information, see “Designing Applications — Tips” on page 2-4.
Ÿ Data can be extracted from tables and files on the other system and sent to the
AS/400 system on tape or diskette or over communications lines.
– From a DB2 database, a sample program called DSNTIAUL, supplied with
the database manager, can be used to extract data from files or tables.
– From a DB2 Server for VM (SQL/DS) database, the Database Services
Utility portion of the database manager can be used to extract data.
– From both DB2 for OS/390 and DB2 Server for VM databases, Data Extract
(DXT*) can be used to extract data. However, DXT handling of null data is
not compatible with the Copy File handling of null data described below.
Therefore, DXT is not recommended for use in unloading relational data for
migration to an AS/400 system.
– From IMS/DB hierarchical databases, DXT can be used to extract data.
| Ÿ You can use standard tape management techniques to copy data to tape or
| diskette from DB2 for OS/390 or DB2 Server for VM databases. The AS/400
| system uses the Copy From Tape (CPYFRMTAP) command to load data from
| tape. The Copy File (CPYF) command, however, provides special support for
| migrating data from IBM mainframe computers. CPYF can be used with tape
| data by the use of the Override with Tape File (OVRTAPF) command. The
| OVRTAPF command lets you specify special tape-specific parameters which
| may be necessary when you import data from a system other than AS/400.
| The special CPYF support lets you import nullable data and floating point data.
| Nullable data can be unloaded from mainframes in such a way that a single-
| byte flag can be associated with each nullable field. With the *NULLFLAGS
| option specified for the FMTOPT parameter, the CPYF command can recognize
| the null flags and ignore the data in the adjacent field on the tape and make
| the field null in DB2 for AS/400. The other useful FMTOPT parameter value for
| importing data from IBM mainframes is the *CVTFLOAT value. It allows floating
| point data stored on tape in System/390 format to be converted to the IEEE
| format used by DB2 for AS/400.
| For more information on using tape and diskette devices with the AS/400
| system, see the Tape and Diskette Device Programming book. For more infor-
| mation about using the Copy File commands to copy between systems, see the
| Distributed Data Management book and the CL Reference book.
Ÿ Data sent over communications lines can be handled through SNADS support
on the AS/400 system. SNADS support transfers network files for BSCEL and
SNUF protocols in addition to the APPC or APPN protocols used for distributed
relational database processing.
– From an MVS system, data can be sent to the AS/400 system using TSO
XMIT functions. The AS/400 system uses the WRKNETF or RCVNETF
command to receive a network file.
– From a VM system, data can be sent to the AS/400 system using
SENDFILE functions. The AS/400 system uses the WRKNETF or
RCVNETF command to receive a network file.
Ÿ From an OS/2 system, data can be sent to the AS/400 system using DB2/2
functions and utilities or programs such as Client Access/400, a separately
orderable IBM product. Also, the DB2/2 IMPORT and EXPORT utilities can be
used to copy data to and from an AS/400 system.
Ÿ From an RS/6000* system, you can use the DB2/6000 IMPORT and EXPORT
utilities to copy data to and from an AS/400 system.
Ÿ The DataHub product provides function to easily copy data from any IBM
system to any other IBM system.
Ÿ Data can also be sent over communications lines that do not support SNADS,
such as asynchronous communications. File transfer support (FTS), a utility that
is part of the OS/400 licensed program, can be used to send and receive data.
For more information about working with communications and communications
files see the ICF Programming book.

Moving Data from a Non-IBM System


You can copy files or tables to tape or diskette from the other system and load
these files on an AS/400 system if the data and media formats meet AS/400
requirements.

Vendor independent communications functions are also supported through two sep-
arately licensed AS/400 programs.

Peer-to-peer connectivity functions for both local and wide area networks are provided
by the Transmission Control Protocol/Internet Protocol (TCP/IP). The File
Transfer Protocol (FTP) function of the AS/400 TCP/IP Connectivity Utilities/400
licensed program allows you to receive many types of files, depending on the capa-
bilities of the remote system. For more information, see the TCP/IP Configuration
and Reference book.

The OSI File Services/400 licensed program (OSIFS/400) provides file manage-
ment and transfer services for open systems interconnection (OSI) networks.
OSIFS/400, with the prerequisite licensed program OSI Communications
Subsystem/400, connects the AS/400 system to remote IBM or non-IBM systems
that conform to OSI file transfer, access, and management (FTAM) standards.



OSIFS/400 provides either an interactive interface or an application programming
interface (API) to copy or move files from a remote system to a local AS/400
system. For more information, see the OSI Communications Subsystem Program-
ming and Concepts Guide.



Chapter 6. Distributed Relational Database Administration
and Operation Tasks
As an administrator for a distributed relational database, you are responsible for
work being done on several systems. Work that originates on your local system as
an application requester (AR) can be monitored in the same way that any other
work is monitored on an AS/400 system. When you are tracking units of work being
done on the local system as an application server (AS), you use the same tools but
look for different kinds of information.

This chapter discusses ways that you can administer the distributed relational data-
base work being done across a network. Most of the commands, processes, and
other resources discussed here do not exist just for distributed relational database
use; they are tools provided for the operation of any AS/400 system. All
administration commands, processes, and resources discussed here are included with the
OS/400 program, along with all of the DB2 for AS/400 functions.

Monitoring Relational Database Activity


You can rely on several control language (CL) commands to give you a view of
work on an AS/400 system. They are the Work with Job (WRKJOB), Work with
User Jobs (WRKUSRJOB), Work with Active Jobs (WRKACTJOB), and Work with
Commitment Definitions (WRKCMTDFN) commands. WRKJOB, WRKUSRJOB, and
WRKACTJOB provide similar information but in different ways. The WRKJOB
command gives you information specific to a job if you know the job name or the
job from which you enter the WRKJOB command. The WRKUSRJOB command
provides you with more detailed information on a job if you know the user profile
under which the job is running. The WRKACTJOB command provides the most
general look at work being done on the system. It shows all jobs that are currently
running on the system and some statistics about each one. The WRKCMTDFN
command displays commitment definitions, which are used to store information
about commitment control when commitment control is started by the Start Commit-
ment Control (STRCMTCTL) command.

Working with Jobs


The WRKJOB command presents the Work with Job menu. This menu allows you
to select options to work with or to change information related to a specified job.
Enter the command without any parameters to get information about the job you
are currently using. Specify a job to get the same information pertaining to it by
entering its name in the command like this:
WRKJOB JOB(job-number/user-ID/job-name)

You can get the information provided by the options on the menu whether the job is
on a job queue, output queue, or active. However, a job is not considered to be in
the system until all of its input has been completely read in. Only then is an entry
placed on the job queue. The options for the job information are:
Ÿ Job status attributes
Ÿ Job definition attributes
Ÿ Spooled file information



Information about the following options can be shown only when the job is active:
Ÿ Job run attributes
Ÿ Job log information
Ÿ Program stack information
Ÿ Job lock information
Ÿ Library list information
Ÿ Open file information
Ÿ File override information
Ÿ Commitment control status
Ÿ Communications status
Ÿ Activation groups
Ÿ Mutexes

Option 10 (Display job log) gives you information about an active job or a job on a
job queue. For jobs that have ended you can usually find the same information by
using option 4 (Work with spooled files). This presents the Work with Spooled Files
display, where you can use option 5 to display the file named QPJOBLOG if it is on
the list.

Working with User Jobs


If you know the user profile (user name) being used by a job, you can use the
WRKUSRJOB command to display or change job information. Enter the command
without any parameters to get a list of the jobs in the system with your user profile.
You can specify any user and the job status to shorten the list of jobs by entering
its name in the command like this:
WRKUSRJOB USER(KCDBA)

The Work with User Jobs display appears with names and status information of
user jobs running in the system (*ACTIVE), on job queues (*JOBQ), or on an
output queue (*OUTQ). The following display shows the active and ended jobs for
the user named KCDBA:



à ð3/29/92 16:15:33
ð
Type options, press Enter.
2=Change 3=Hold 4=End 5=Work with 6=Release 7=Display message
8=Work with spooled files 13=Disconnect

Opt Job User Type -----Status------ Function


__ KCððð KCDBA CMNEVK OUTQ
__ KCððð KCDBA CMNEVK OUTQ
__ KCððð KCDBA CMNEVK OUTQ
__ KCððð KCDBA CMNEVK OUTQ
__ KCððð KCDBA CMNEVK ACTIVE
__ KCððð1 KCDBA CMNEVK ACTIVE \ -PASSTHRU
__ KCððð1 KCDBA INTER ACTIVE CMD-WRKUSRJOB

Bottom
Parameters or command
===>
F3=Exit F4=Prompt F5=Refresh F9=Retrieve F11=Display schedule data
F12=Cancel F21=Select assistance level
á ñ
This display lists all the jobs in the system for the user, shows the status specified
(*ALL in this case), and shows the type of job. It also provides you with eight
options (2 through 8 and 13) to enter commands for a selected job. Option 5 pre-
sents the Work with Job display described above.

| The WRKUSRJOB command is useful when you want to look at the status of the
| DDM TCP/IP server jobs if your system is using TCP/IP. Run the following
| command:
| WRKUSRJOB QUSER *ACTIVE
| Page down until you see the jobs starting with the characters QRWT. If the server is
| active, you should see one job named QRWTLSTN, and one or more named QRWTSRVR
| (unless prestart DRDA jobs are not run on the system). The QRWTSRVR jobs are
| prestart jobs. If you do not see the QRWTLSTN job, run the following command to
| start it:
| STRTCPSVR *DDM
| If you see the QRWTLSTN job and not the QRWTSRVR jobs, and the use of
| DRDA prestart jobs has not been disabled, run the following command to start the
| prestart jobs:
| STRPJ QSYSWRK QRWTSRVR

Working with Active Jobs


Use the WRKACTJOB command if you want to monitor the jobs running for several
users or if you are looking for a job and you do not know the job name or the user
ID. When you enter this command, the Work with Active Jobs display appears. It
shows the performance and status information for jobs that are currently active on
the system. All information is gathered on a job basis and grouped by subsystem.

The display below shows the Work with Active Jobs display on a typical day at the
KC105 system:



à ð3/29/92 16:17:45
ð
CPU %: 41.7 Elapsed time: ð4:37:55 Active jobs: 42

Type options, press Enter.


2=Change 3=Hold 4=End 5=Work with 6=Release 7=Display message
8=Work with spooled files 13=Disconnect ...

Opt Subsystem/Job User Type CPU % Function Status


__ QBATCH QSYS SBS .ð DEQW
__ QCMN QSYS SBS .ð DEQW
__ QINTER QSYS SBS .ð DEQW
__ DSPð1 CLERK1 INT .ð CMD-STRSQL DSPW
__ DSPð2 CLERK2 INT .ð \ -CMDENT DSPW

More...
Parameters or command
===>
F3=Exit F5=Refresh F1ð=Restart statistics F11=Display elapsed data
F12=Cancel F23=More options F24=More keys

á ñ
When you press F11 (Display elapsed data), the following display is provided to
give you detailed status information.

à ð3/29/92 16:17:45
ð
CPU %: 41.7 Elapsed time: ð4:37:55 Active jobs: 42

Type options, press Enter.


2=Change 3=Hold 4=End 5=Work with 6=Release 7=Display message
8=Work with spooled files 13=Disconnect ...
--------Elapsed---------
Opt Subsystem/Job Type Pool Pty CPU Int Rsp AuxIO CPU %
__ QBATCH SBS 2 ð 4.4 1ð8 .ð
__ QCMN SBS 2 ð 2ð.7 668 .ð
__ KCððð EVK 2 5ð .1 9 .ð
__ KCððð1 EVK 2 5ð .1 9 .ð
__ MPððð EVK 2 5ð .1 14 .ð
__ QINTER SBS 2 ð 7.3 4 .ð
__ DSPð1 INT 2 2ð .1 ð .ð
__ DSPð2 INT 2 2ð .1 ð .ð

More...
Parameters or command
===>
F3=Exit F5=Refresh F1ð=Restart statistics F11=Display status
F12=Cancel F23=More options F24=More keys

á ñ
The Work with Active Jobs display gives you information about job priority and
system usage as well as the user and type information you get from the Work with
User Jobs display. You also can use any of 11 options on a job (2 through 11 and
13), including option 5, which presents you with the Work with Job display for the
selected job.

Working with Commitment Definitions


Use the WRKCMTDFN command if you want to work with the commitment defi-
nitions on the system. A commitment definition is used to store information about
commitment control when commitment control is started by the Start Commitment
Control (STRCMTCTL) command. These commitment definitions may or may not
be associated with an active job. Those not associated with an active job have
been ended, but one or more of their logical units of work has not yet been completed.

The WRKCMTDFN command can be used to work with commitment definitions
based on the job name, status, or logical unit of work identifier of the commitment
definition.

On the STATUS parameter, you can specify all jobs or only those that have a
status value of *RESYNC or *UNDECIDED. *RESYNC shows only the jobs that are
involved with resynchronizing their resources in an effort to reestablish a synchroni-
zation point; a synchronization point is the point where all resources are in a
consistent state.

*UNDECIDED shows only those jobs for which the decision to commit or roll back
resources is unknown.
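
For example, to work with only the commitment definitions whose commit or rollback decision is unknown, a command similar to the following might be used; the JOB(*ALL) value is shown here on the assumption that all jobs are to be included:
WRKCMTDFN JOB(*ALL) STATUS(*UNDECIDED)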

| On the LUWID parameter, you can display commitment definitions that are working
| with a commitment definition on another system. Jobs containing these commitment
| definitions are communicating using an APPC protected conversation. An LUWID
| can be found by displaying the commitment definition on one system and then
| using it as input to the WRKCMTDFN command to find the corresponding commit-
| ment definition.

You can use the WRKCMTDFN command to free local resources in jobs that are
undecided, but only if the commitment definitions are in a Prepare in Progress (PIP)
or Last Agent Pending (LAP) state. You can force the commitment definition to
either commit or roll back, and thus free up held resources; control does not return
to the program that issued the original commit until the initiator learns of the action
taken on the commitment definition.

You can also use the WRKCMTDFN command to end resynchronization in cases
where it is determined that resynchronization with another system will never complete.

For additional information on the WRKCMTDFN command, see the CL Reference book.

Using the Job Log


Every job on the AS/400 system has a job log that contains information related to
requests entered for a job. The information in a job log includes:
Ÿ Commands that were used by a job
Ÿ Messages that were sent and not removed from the program message queues
Ÿ Commands in a CL program if the program was created with
LOG(*JOB) and the job specifies LOGCLPGM(*YES), or if the CL
program was created with LOG(*YES)
At the end of the job, the job log can be written to a spooled file named
QPJOBLOG and the original job log is deleted. You can control what information is
written in the job log by specifying the LOG parameter of a job description.
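
For example, to have jobs that use a particular job description log all messages with second-level text, a command similar to the following might be used; the job description name is illustrative, and the LOG values (message level, severity, and text level) should be chosen to fit your own logging needs:
CHGJOBD JOBD(QGPL/KCASJOBD) LOG(4 00 *SECLVL)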

The way to display a job log depends on the status of the job. If the job has ended
and the job log is not yet printed, find the job using the WRKUSRJOB command,
then select option 8 (Work with spooled files). Find the spooled file named QPJOBLOG
and select option 5 (Display job log). You can also display a job log by using the
WRKJOB command and other options on the Work with Job display.

If the batch or interactive job is still active, or is on a job queue and has not yet
started, use the WRKUSRJOB command to find the job. The WRKACTJOB
command is used to display the job log of active jobs and does not show jobs on
job queues. Select option 5 (Work with job) and then select option 10 (Display job
log).

To display the job log of your own interactive job, do one of the following:
Ÿ Enter the Display Job Log (DSPJOBLOG) command.
Ÿ Enter the WRKJOB command and select option 10 (Display job log) from the
Work with Job display.
Ÿ Press F10 (Display detailed messages) from the Command Entry display to
display messages that are shown in the job log.

When you use the DSPJOBLOG command, you see the Job Log display. This
display shows program names with special symbols, as follows:
>> The running command or the next command to be run. For example, if a
CL or high-level language program was called, the call to the program is
shown.
> The command has completed processing.
. . The command has not yet been processed.
? Reply message. This symbol marks both those messages needing a
reply and those that have been answered.

Locating Distributed Relational Database Jobs


| When you are looking for information about a distributed relational database job on
| an AR and you know the user profile that is used, you can find that job by using the
| WRKUSRJOB command. You can also use this command on the AS, but be aware
| that the user profile on the AS may be different from that used by the AR. For
| TCP/IP servers, the user profile that qualifies the job name will always be QUSER,
| and the job name will always be QRWTSRVR. The DSPLOG command can be
| used to help find the complete server job name. The message will be in the fol-
| lowing form:
| DDM job 031233/QUSER/QRWTSRVR servicing user XY on 10/02/97 at 22:06
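
| For example, you can display the history log and page through it looking for this
| message; the following form of the command assumes the default history log QHST:
| DSPLOG LOG(QHST)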

| If there are several jobs listed for the specified user profile and the relational data-
| base is accessed using DRDA, enter option 5 (Work with job) to get the Work with
| Job display. From this display, enter option 10 (Display job log) to see the job log.
| The job log shows you whether this is a distributed relational database job and, if it
| is, to which remote system the job is connected. Page through the job log looking
| for one of the following messages (depending on whether the connection is using
| APPC or TCP/IP):
| CPI9150 DDM job started.
| CPI9160 Database connection started over TCP/IP or a local socket.



| The second level text for message CPI9150 and CPI9160 contains the job name
| for the AS job.

| If you are on the AS and you do not know the job name,1 but you know the user
| name, use the WRKUSRJOB command. If you do not specify a user, the command
| returns a list of the jobs under the user profile2 you are using. On the Work with
| User Jobs display, use these columns to help you identify the AS jobs that are
| servicing APPC connections.
| .1/ The job type column shows jobs with the type that is listed as CMNEVK
| for APPC communications jobs.
.2/ The status column shows if the job is active or completed. Depending
on how the system is set up to log jobs, you may see only active jobs.
.3/ The job column provides the job name. The job name on the AS is the
same as the device being used.

à ð
Work with User Jobs KC1ð5
ð3/29/92 16:15:33
Type options, press Enter.
2=Change 3=Hold 4=End 5=Work with 6=Release 7=Display message
8=Work with spooled files 13=Disconnect

Opt Job User Type -----Status------ Function


__ KCððð KCDBA CMNEVK OUTQ
__ MPððð KCDBA CMNEVK OUTQ
__ MPððð KCDBA CMNEVK OUTQ
__ KCððð KCDBA CMNEVK OUTQ
__ KCððð KCDBA CMNEVK ACTIVE
__ KCððð1 KCDBA INTER ACTIVE CMD-WRKUSRJOB
.3/ .1/ .2/

If you are looking for an active AS job and do not know the user name, the
WRKACTJOB command gives you a list of those jobs for the subsystems active on
the system. The following example shows you some items to look for:

à ð
Work with Active Jobs KC1ð5
ð3/29/92 16:17:45
CPU %: 41.7 Elapsed time: ð4:37:55 Active jobs: 1ð2

Type options, press Enter.


2=Change 3=Hold 4=End 5=Work with 6=Release 7=Display message
8=Work with spooled files 13=Disconnect

Opt Subsystem/Job User Type CPU % Function Status


__ QBATCH QSYS SBS .ð DEQW
.4/ QCMN QSYS SBS .ð WDEQ
__ KCððð1 KCCLERK EVK .ð \ EVTW
.5/ .6/

.4/ Search the subsystem3 that is set up to handle the AS jobs. In this
example, the subsystem for AS jobs is QCMN.

| 1 If you are using the DDM TCP/IP server, you can find the job name with the DSPLOG command as explained above.
| 2 For TCP/IP, the user profile in the job name will always be QUSER.
| 3 The subsystem for TCP/IP server jobs is QSYSWRK.



| .5/ For APPC AS jobs, the job name is the device name of the device that
| is created for AS use.
.6/ The job type4 listed is normally EVK, started by a program start request.

When you have located a job that looks like a candidate, enter option 5 to work
with that job. Then select option 10 from the Work with Job Menu to display the job
log. Distributed database job logs for jobs that are accessing the AS from a
DB2/400 application requester contain a statement near the top that reads:
CPI3E01 Local relational database accessed by (system name).

| After you locate a job working on the AS, you can also trace it back to the AR if the
| AR is an AS/400 system. One of the following messages will appear in your job log;
| place the cursor on the message you received:
| CPI9152 Target DDM job started by source system.
| CPI9162 Target job assigned to handle DDM connection started by source
| system over TCP/IP.
| When you press the help key, the detailed message for the statement appears. The
| source system job named is the job on the AR that caused this job.

Operating Remote AS/400 Systems


As an administrator in a distributed relational database you may sometimes have to
operate a remote AS/400 system. For example, you may have to start or stop a
remote system. The AS/400 system provides several ways to do this either in real
time or at previously scheduled times. More often, you may need to perform certain
tasks on a remote system as it is operating. The three primary ways that you can
do this is by using display station pass-through, the Submit Remote Command
(SBMRMTCMD) command, or stored procedures.

Starting and Stopping Other Systems Remotely


The AS/400 system provides options that help you ensure that a remote system is
operating when it needs to be. Of course, the simplest way to ensure that a remote
system is operating is to allow the remote location to power on their system to meet
the distributed relational database requirements. But, this is not always possible.
You can set up an automatic power-on and power-off schedule or enable a remote
power on to a remote system. See the System Operation book for more information
on system values that control IPL, power- on and power-off schedules, and remote
IPLs.

Submit Remote Command (SBMRMTCMD) Command


| Note: The following discussion makes the assumption that you are in an SNA
| environment. If that is not the case, and you are using TCP/IP, the
| RUNRMTCMD command may meet your requirements for running
| CL commands on a remote system. For example, you might run the
| command in the following manner:

| 4 For TCP/IP AS jobs, the job type is PJ (unless DRDA prestart jobs are not active on the system, in which case the job type is
| BCI).



| RUNRMTCMD CMD('CALL MYLIB/MYPGM') RMTLOCNAME(CHICAGO *IP)
| RMTUSER(CHARLES) RMTPWD(PASSWORD)
| This requires that the REXEC server has been started on the remote
| system (STRTCPSVR *REXEC).

The Submit Remote Command (SBMRMTCMD) command submits a CL command
using Distributed Data Management (DDM) support to run on the target system.
You first need to create a DDM file. The remote location information of the DDM file
is used to determine the communications line to be used. Thus, it identifies the
target system that is to receive the submitted command. The remote file associated
with the DDM file is not involved when the DDM file is used for submitting com-
mands to run on the target system. See “Setting Up DDM Files” on page 5-13 for
information on creating DDM files.

The SBMRMTCMD command can submit any CL command that can run in both
the batch environment and via the QCAEXEC system program; that is, the
command has values of *BPGM and *EXEC specified for the ALLOW attribute. You
can display the ALLOW attributes by using the Display Command (DSPCMD)
command.

The primary purpose of the SBMRMTCMD command is to allow a source system
user or program to perform file management operations and file authorization activ-
ities on objects located on a target system. A secondary purpose of this command
is to allow a user to perform nonfile operations (such as creating a message queue)
or to submit user-written commands to run on the target system. The CMD param-
eter allows you to specify a character string of up to 2000 characters that repres-
ents a command to be run on the target system.

You must have the proper authority on the target system for the CL command
being submitted and for the objects that the command is to operate on. If the
source system user has the correct authority to do so (as determined in a target
system user profile), the following actions are examples of what can be performed
on remote files using the SBMRMTCMD command:
Ÿ Grant or revoke object authority to remote tables
Ÿ Verify tables or other objects
Ÿ Save or restore tables or other objects

Although the command can be used to do many things with tables or other objects
on the remote system, using this command for some tasks is not as efficient as
other methods on the AS/400 system. For example, you could use this command to
display the file descriptions or field attributes of remote files, or to dump files or
other objects, but the output remains at the target system. To display remote file
descriptions and field attributes at the source system, a better method is to use the
Display File Description (DSPFD) and Display File Field Description (DSPFFD)
commands with SYSTEM(*RMT) specified, and specify the names of the DDM files
associated with the remote files.
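
For example, assuming a DDM file TEST/INCOPY that is associated with a remote file (the names are illustrative), the following commands display the remote file description and field descriptions at the source system:
DSPFD FILE(TEST/INCOPY) SYSTEM(*RMT)
DSPFFD FILE(TEST/INCOPY) SYSTEM(*RMT)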

See the Distributed Data Management book for lists of CL commands you can
submit and restrictions for the use of this command. In addition, see “Controlling
DDM Conversations” on page 6-10 for information about how DDM shares conver-
sations.



SBMRMTCMD Example
You can use the SBMRMTCMD command to grant temporary authority to an
object. For example, a programmer on one system would like to call a program on
another system and watch its operation for debugging purposes. The distributed
relational database administrator could get authority to the program to be checked
for the local programmer by entering the following command:
SBMRMTCMD CMD('GRTOBJAUT OBJ(SPIFFY/PARTS1) OBJTYPE(*PGM)
   USER(MPSUP) AUT(*USE)') DDMFILE(TEST/KC105TST)

This submitted command grants *USE authority to the user MPSUP to the object
PARTS1. PARTS1 is a program that exists on the system identified by the DDM file
named KC105TST on the local system. The authority is granted if the distributed
relational database administrator has authority to use the GRTOBJAUT command
on the remote system named in the KC105TST DDM file.

Controlling DDM Conversations


| Note: The term conversation has a specific, technical meaning in SNA APPC ter-
| minology. It does not extend to TCP/IP terminology in a formal sense.
| However, there is a similar concept in TCP/IP (a 'network connection' in
| other books on the subject). In this book, the word is used with the under-
| standing that it applies to TCP/IP network connections as well. In other
| sections of this book, the term retains its specific APPC meaning, but it is
| expected that the reader can discern that meaning from the context.

| The term connection in this section of this book refers to the concept of an
| SQL connection. An SQL connection lasts from the time an explicit or
| implicit SQL CONNECT is done until the logical SQL connection is termi-
| nated by such means as an SQL DISCONNECT, or a RELEASE followed
| by a COMMIT. Multiple SQL connections can occur serially over a single
| network connection or conversation. In other words, when a connection is
| ended, the conversation that carried it is not necessarily ended.

When an AR uses DRDA to connect to an AS, it uses a DDM conversation. The
conversation is established with the SQL CONNECT statement from the AR, but
only if:
Ÿ A conversation using the same remote location values does not already exist
for the AR job.
Ÿ A conversation uses the same activation group.
Ÿ If started from DDM, a conversation has the file scoped to the activation group.
Ÿ A conversation has the same conversation type5 (protected or unprotected).
DDM conversations can be in one of three states: active, unused, or dropped. A
DDM conversation used by distributed relational database is active while the AR is
connected to the AS.

The SQL DISCONNECT and RELEASE statements are used to end connections.
Connections can also be ended implicitly by the system. In addition, when running
with RUW connection management, previous connections are ended when a

5 Currently, TCP/IP conversations can be only unprotected.



CONNECT is performed. See “Explicit CONNECT” on page 10-10 for more infor-
mation on when connections are ended. After a connection ends, the DDM conver-
sations then either become unused or are dropped. If a DDM conversation is
unused, the conversation to the remote database management system is main-
tained by the DDM communications manager and marked as unused. If a DDM
conversation is dropped, the DDM communications manager ends the conversation.
The DDMCNV job attribute determines whether DDM conversations for connections
that are no longer active become unused or dropped. If the job attribute value is
*KEEP and the connection is to another AS/400, the conversation becomes
unused. If the job attribute value is *DROP or the connection is not to another
AS/400, the conversation is dropped.

Using a DDMCNV job attribute of *KEEP is desirable when connections to remote
relational databases are frequently changed.

A value of *DROP is desirable in the following situations:


Ÿ When the cost of maintaining the conversation is high and the conversation will
not be used relatively soon.
Ÿ When running with a mixture of programs compiled with RUW connection man-
agement and programs compiled with DUW connection management. Attempts
to run programs compiled with RUW connection management to remote
locations will fail when protected conversations exist.
Ÿ When running with protected conversations either with DDM or DRDA. Addi-
tional overhead is incurred on commits and rollbacks for unused protected con-
versations.

If a DDM conversation is also being used to operate on remote files through DDM,
the conversation will remain active until the following conditions are met:
Ÿ All the files used in the conversation are closed and unlocked
Ÿ No other DDM-related functions are being performed
Ÿ No DDM-related function has been interrupted (by a break program, for
example)
Ÿ For protected conversations, a commit or rollback was performed after ending
all SQL programs and after all DDM-related functions were completed.
Ÿ An AR job is no longer connected to the AS

Regardless of the value of the DDMCNV job attribute, conversations are dropped at
the end of a job routing step, at the end of the job, or when the job initiates a
Reroute Job (RRTJOB) command. Unused conversations within an active job can
also be dropped by the Reclaim DDM Conversations (RCLDDMCNV) or Reclaim
Resources (RCLRSC) command. Errors, such as communications line failures, can
also cause conversations to drop.

The DDMCNV parameter is changed by the Change Job (CHGJOB) command and
is displayed by the Display Job (DSPJOB) command with OPTION(*DFNA). Also, you
can use the Retrieve Job Attributes (RTVJOBA) command to get the value of this
parameter and use it within a CL program.
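
For example, a CL program fragment such as the following retrieves the current DDMCNV value and, if conversations are being kept, changes the job to drop them; this is only a sketch, and the variable name is illustrative:
DCL VAR(&DDMCNV) TYPE(*CHAR) LEN(5)
RTVJOBA DDMCNV(&DDMCNV)
IF COND(&DDMCNV *EQ '*KEEP') THEN(CHGJOB DDMCNV(*DROP))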



Reclaiming DDM Resources
The Reclaim DDM Conversations (RCLDDMCNV) command reclaims all application
conversations that are not currently being used by a source job, even if the
DDMCNV attribute value for the job is *KEEP. The command allows you to reclaim
unused DDM conversations without closing all open files or doing any of the other
functions performed by the Reclaim Resources (RCLRSC) command.

The RCLDDMCNV command applies to the DDM conversations for the job on the
AR in which the command is entered. There is an associated AS job for the DDM
conversation used by the AR job. The AS job ends6 automatically when the associ-
ated DDM conversation ends.

Although this command applies to all DDM conversations used by a job, using it
does not mean that all of them will be reclaimed. A conversation is reclaimed only if
it is not being actively used. If commitment control is used, a COMMIT or
ROLLBACK operation may have to be done before a DDM conversation can be
reclaimed.
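
The command has no parameters; to reclaim the unused DDM conversations for your own job, simply enter:
RCLDDMCNV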

Displaying Objects Used by Programs


You can use the Display Program References (DSPPGMREF) command to deter-
mine which tables, data areas, and other programs are used by a program or SQL
package. This information is available for SQL packages and compiled programs
only. The information can be displayed, printed, or written to a database output file.

When a program or package is created, the information about certain objects used
in the program or package is stored. This information is then available for use with
the Display Program References (DSPPGMREF) command. Information retrieved
can include:
Ÿ The name of the program or package and its text description
Ÿ The name of the library or collection containing the program or package
Ÿ The number of objects referred to by the program or package
Ÿ The qualified name of the system object
Ÿ The information retrieval dates
Ÿ The object type of the referenced object

For files and tables, the record contains the following additional fields:
Ÿ The name of the file or table in the program or package (possibly different from
the system object name if an override was in effect when the program or
package was created)
Note: Any overrides apply only on the AR.
Ÿ The program or package use of the file or table (input, output, update, unspeci-
fied, or a combination of these four)
Ÿ The number of record formats referenced, if any

| 6 For TCP/IP conversations that end, the AS job is normally a prestart job and is usually recycled rather than ended.



Ÿ The name of the record format used by the file or table and its record format
level identifier
Ÿ The number of fields referenced for each format

Before the objects used by a program can be shown, the user must have *USE authority
for the program. Also, of the libraries specified by the library qualifier, only the
libraries for which the user has read authority are searched for the programs.

Figure 6-1 shows the objects for which the high-level languages and utilities save
information.

Figure 6-1. How High-level Languages Save Information About Objects


Language             Files   Programs   Data Areas   See Note
CL                   Yes     Yes        Yes          1
COBOL/400* Language  Yes     Yes        No           2
PL/I                 Yes     Yes        N/A          2
RPG/400* Language    Yes     No         Yes          3
DB2/400 SQL          Yes     N/A        N/A          4
Notes:
1. All system commands that refer to files, programs, or data areas specify in the command
definition that the information should be stored when the command is compiled in a CL
program. If a variable is used, the name of the variable is used as the object name (for
example, &FILE); If an expression is used, the name of the object is stored as *EXPR.
User-defined commands can also store the information for files, programs, or data areas
specified on the command. See the description of the FILE, PGM, and DTAARA parame-
ters on the PARM or ELEM command statements in the CL Programming book.
2. The program name is stored only when a literal is used for the program name (this is a
static call, for example, CALL 'PGM1'), not when a COBOL/400 identifier is used for the
program name (this is a dynamic call, for example, CALL PGM1).
3. The use of the local data area is not stored.
4. Information about SQL packages.

The stored file information contains an entry (a number) for the type of use. In the
database file output of the Display Program References (DSPPGMREF) command
(built when using the OUTFILE parameter), this entry is a representation of one or
more codes listed below. There can only be one entry per object, so combinations
are used. For example, a file coded as a 7 would be used for input, output, and
update.
Code Meaning
1 Input
2 Output
3 Input and Output
4 Update
8 Unspecified



Display Program Reference Example
To see what objects are used by an AR program, you can enter a command such
as the following:
DSPPGMREF PGM(SPIFFY/PARTS1) OBJTYPE(*PGM)

On the requester you can get a list of all the collections and tables used by a
program, but you are not able to see on which relational database they are located.
They may be located in multiple relational databases. The output from the
command can go to a database file or to a displayed spooled file. The output looks
like this:
File . . . . . : QPDSPPGM Page/Line 1/1
Control . . . . . Columns 1 - 78
Find . . . . . .

3/29/92 Display Program References


DSPPGMREF Command Input
Program . . . . . . . . . . . . . . . . . . : PARTS1
Library . . . . . . . . . . . . . . . . . : SPIFFY
Output . . . . . . . . . . . . . . . . . . : \
Include SQL packages . . . . . . . . . . . : \YES
Program . . . . . . . . . . . . . . . . . . . : PARTS1
Library . . . . . . . . . . . . . . . . . . : SPIFFY
Text 'description'. . . . . . . . . . . . . : Check inventory for parts
Number of objects referenced . . . . . . . : 3
Object . . . . . . . . . . . . . . . . . . : PARTS1
Library . . . . . . . . . . . . . . . . . : SPIFFY
Object type . . . . . . . . . . . . . . . : \PGM
Object . . . . . . . . . . . . . . . . . . : QSQROUTE
Library . . . . . . . . . . . . . . . . . : \LIBL
Object type . . . . . . . . . . . . . . . : \PGM
Object . . . . . . . . . . . . . . . . . . : INVENT
Library . . . . . . . . . . . . . . . . . : SPIFFY
Object type . . . . . . . . . . . . . . . : \FILE
File name in program . . . . . . . . . . :
File usage . . . . . . . . . . . . . . . : Input

To see what objects are used by an AS SQL package, you can enter a command
such as the following:
DSPPGMREF PGM(SPIFFY/PARTS1) OBJTYPE(*SQLPKG)

The output from the command can go to a database file or to a displayed spooled
file. The output looks like this:



File . . . . . : QPDSPPGM Page/Line 1/1
Control . . . . . Columns 1 - 78
Find . . . . . .

3/29/92 Display Program References


DSPPGMREF Command Input
Program . . . . . . . . . . . . . . . . . . : PARTS1
Library . . . . . . . . . . . . . . . . . : SPIFFY
Output . . . . . . . . . . . . . . . . . . : \
Include SQL packages . . . . . . . . . . . : \YES
SQL package . . . . . . . . . . . . . . . . . : PARTS1
Library . . . . . . . . . . . . . . . . . . : SPIFFY
Text 'description'. . . . . . . . . . . . . : Check inventory for parts
Number of objects referenced . . . . . . . : 1
Object . . . . . . . . . . . . . . . . . . : INVENT
Library . . . . . . . . . . . . . . . . . : SPIFFY
Object type . . . . . . . . . . . . . . . : \FILE
File name in program . . . . . . . . . . :
File usage . . . . . . . . . . . . . . . : Input

Dropping a Collection
Attempting to delete a collection that contains journal receivers may cause an
inquiry message to be sent to the QSYSOPR message queue for the AS job. The
AS and AR jobs wait until this inquiry is answered.

The message that appears on the message queue is:


CPA7025 Receiver (name) in (library) never fully saved. (I C)

When the AR job is waiting, it may appear as if the application is hung. Consider
the following when your AR job has been waiting for a time longer than anticipated:
Ÿ Be aware that an inquiry message is sent to QSYSOPR message queue and
needs an answer to proceed.
Ÿ Move the journal receivers to a library other than the one that is being
dropped.
Ÿ Have the AS reply to the message using its system reply list.

This last consideration can be accomplished by changing the job that appears to be
currently hung, or by changing the job description for all AS jobs running on the
system. However, you must first add an entry to the AS system reply list for
message CPA7025 using the Add Reply List Entry (ADDRPYLE) command:
ADDRPYLE SEQNBR(...) MSGID(CPA7025) RPY(I)

To change the job description for the job that is currently running on the AS, use
the SBMRMTCMD command. The following example shows how the database
administrator on one system in the Kansas City region changes the job description
on the KC105 system (the system addressed by the TEST/KC105TST DDM file):
SBMRMTCMD CMD('CHGJOB JOB(KC105ASJOB) INQMSGRPY(*SYSRPYL)')
   DDMFILE(TEST/KC105TST)

You can prevent this situation from happening on the AS more permanently by
using the Change Job Description (CHGJOBD) command so that any job that uses
that job description uses the system reply list. The following example shows how
this command is entered on the same AS:
CHGJOBD JOBD(KC105ASJOB) INQMSGRPY(*SYSRPYL)

This method should be used with caution. Adding CPA7025 to the system reply list
affects all jobs that use the system reply list. Also, changing the job description
affects all jobs that use a particular job description. You may want to create a sepa-
rate job description for AS jobs. For additional information on creating job
descriptions, see the Work Management book.

Job Accounting
The job accounting function on the AS/400 system gathers data so you can deter-
mine who is using the system and what system resources they are using. Typical
job accounting details the jobs running on a system and the resources they use,
such as the processing unit, printers, display stations, and database and
communications functions.

Job accounting is optional and must be set up on the system. To set up resource
accounting on the system, you must do the following (example commands are shown after this list):
1. Create a journal receiver by using the Create Journal Receiver (CRTJRNRCV)
command.
2. Create the journal named QSYS/QACGJRN by using the Create Journal
(CRTJRN) command. You must use the name QSYS/QACGJRN and you must
have authority to add items to QSYS to create this journal. Specify the names
of the journal receivers you created in the previous step on this command.
3. Change the accounting level system value QACGLVL using the Work with
System Value (WRKSYSVAL) or Change System Value (CHGSYSVAL)
command.
The VALUE parameter on the CHGSYSVAL command determines when job
accounting journal entries are produced. A value of *NONE means the system
does not produce any entries in the job accounting journal. A value of *JOB
means the system produces a job (JB) journal entry. A value of *PRINT
produces a direct print (DP) or spooled print (SP) journal entry for each file
printed.
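
For example, the following commands carry out these steps; the journal receiver name and library are illustrative only, and a value such as '*JOB *PRINT' can be specified for QACGLVL if both job and print entries are wanted:
CRTJRNRCV JRNRCV(QGPL/ACGRCV0001)
CRTJRN JRN(QSYS/QACGJRN) JRNRCV(QGPL/ACGRCV0001)
CHGSYSVAL SYSVAL(QACGLVL) VALUE(*JOB)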

When a job is started, a job description is assigned to the job. The job description
object contains a value for the accounting code (ACGCDE) parameter, which may
be an accounting code or the default value *USRPRF. If *USRPRF is specified, the
accounting code in the job’s user profile is used.

You can add accounting codes to user profiles using the accounting code param-
eter ACGCDE on the Create User Profile (CRTUSRPRF) command or the Change
User Profile (CHGUSRPRF) command. You can change accounting codes for spe-
cific job descriptions by specifying the desired accounting code for the ACGCDE
parameter on the Create Job Description (CRTJOBD) command or the Change Job
Description (CHGJOBD) command.

When a job accounting journal is set up, job accounting entries are placed in the
journal receiver starting with the next job that enters the system after the
CHGSYSVAL command takes effect.



You can use the OUTFILE parameter on the Display Journal (DSPJRN) command
to write the accounting entries to a database file that you can process.
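
For example, a command similar to the following (the output file name QGPL/ACGENTRIES
is arbitrary) copies the accounting journal entries to a database file:

DSPJRN JRN(QSYS/QACGJRN) OUTPUT(*OUTFILE) OUTFILE(QGPL/ACGENTRIES)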

For more information about job accounting, see the Work Management book.

| Managing the TCP/IP Server


| This section describes how to manage the DRDA/DDM server jobs that communi-
| cate using sockets over TCP. It describes the subsystem in which the server runs,
| the objects that affect the server and how to manage those resources.

| The DRDA/DDM TCP/IP server that is shipped with the OS/400 program does not
| typically require any changes to your existing system configuration in order to work
| correctly. It is set up and configured when you install OS/400. At some time, you
| may want to change the way the system manages the server jobs to better meet
| your needs, solve a problem, improve the system's performance, or simply look at
| the jobs on the system. To make such changes and meet your processing require-
| ments, you need to know which objects affect which pieces of the system and how
| to change those objects.

| This section describes, at a high level, some of the work management concepts
| that need to be understood in order to work with the server jobs and how the con-
| cepts and objects relate to the server. In order to fully understand how to manage
| your AS/400 system, it is recommended that you carefully review the Work Man-
| agement book before you continue with this section. This section then shows you
| how the servers can be managed and how they fit in with the rest of the system.

| Terminology
| The same server is used for both DDM and DRDA TCP/IP access to DB2 for
| AS/400. For brevity, we will use the term DRDA server rather than DRDA/DDM
| server in the following discussion. Sometimes, however, it may be referred to as
| the TCP/IP server, the DDM server, or simply the server when the context makes
| the use of a qualifier unnecessary.

| The DRDA server consists of two or more jobs, one of which is what is called the
| DRDA listener, because it listens for connection requests and dispatches work to
| the other jobs. The other job or jobs, as initially configured, are prestart jobs which
| service requests from the DRDA or DDM client after the initial connection is made.
| The set of all associated jobs, the listener and the server jobs, are collectively
| referred to as the DRDA server.

| The term client will be used interchangeably with DRDA Application Requester (or
| AR) in the DRDA application environment. The term client will be used
| interchangeably with DDM source system in the DDM (distributed file management)
| application environment.

| The acronym DDM appears at times in the context of DRDA usage, because DRDA
| is implemented using DDM protocols, and the two access methods are closely
| related. Examples of this are the use of the special value *DDM for the SERVER
| parameter on the STRTCPSVR CL command, and the use of DDM in the name of
| the CHGDDMTCPA CL command.



| TCP/IP Communication Support Concepts
| There are several concepts that pertain specifically to the TCP/IP communications
| support used by DRDA and DDM. These concepts are described here in detail.

| Establishing a DRDA or DDM connection over TCP/IP

[Figure: The DRDA Application Requester or DDM client connects to the DRDA/DDM
listener, which listens on port 446 .1/. The listener attaches the client connection
to a waiting server job .2/. The client then sends a start-server request .3/, and
the server job validates the user profile and password and swaps to that user
profile .4/.]

| Figure 6-2. DRDA/DDM TCP/IP Server

| To initiate a DRDA server job that uses TCP/IP communications support, the DRDA
| Application Requester or DDM source system will connect to the DRDA well-known
| port number, 446 .1/. The DRDA listener program must have been started (by
| using the STRTCPSVR SERVER(*DDM) command, for example) to listen for and
| accept the client's connection request. The DRDA listener, upon accepting this con-
| nection request, will issue an internal request to attach the client's connection to a
| DRDA server job .2/. This server job may be a prestarted job or, if the user has
| removed the QRWTSRVR prestart job entry from the QSYSWRK subsystem (in
| which case prestart jobs are not used), a batch job that is submitted when the client
| connection request is processed. The server job will handle any further communi-
| cations with the client.

| The initial data exchange that occurs includes a request that identifies the user
| profile under which the server job is to run .3/. Once the user profile and password
| (if it is sent with the user profile id) have been validated, the server job will swap to
| this user profile as well as change the job to use the attributes, such as CCSID,
| defined for the user profile .4/.

| The functions of connecting to the listener program, attaching the client connection
| to a server job and exchanging data and validating the user profile and password
| are comparable to those performed when an APPC program start request is proc-
| essed.



| DRDA Listener Program
| The DRDA listener runs in a batch job. There is a one-to-many relationship
| between it and the actual server jobs; there is one listener and potentially many
| DRDA server jobs. The server jobs are normally prestart jobs. Both the listener job
| and the associated server jobs run in the QSYSWRK subsystem.

| The DRDA listener allows client applications to establish TCP/IP connections with
| an associated server job by handling and routing inbound connection requests.
| Once the client has established communications with the server job, there is no
| further association between the client and the listener for the duration of that con-
| nection.

| The DRDA listener is started using the STRTCPSVR command, using a value of
| *DDM or *ALL for the SERVER parameter. This program must be active in order for
| DRDA Application Requesters and DDM source systems to establish connections
| with the DRDA TCP/IP server. You can request that the DRDA listener be started
| automatically by using the CHGDDMTCPA command and specifying
| AUTOSTART(*YES). This will cause the listener to be started when TCP/IP is
| started. Once started, the listener will remain active until it is ended using the
| ENDTCPSVR command or an error occurs. To start the DRDA listener, both
| the QSYSWRK subsystem and TCP/IP must be active.
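
For example, the following commands set the listener to start automatically whenever
TCP/IP is started and then start it immediately:

CHGDDMTCPA AUTOSTART(*YES)
STRTCPSVR  SERVER(*DDM)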

| Start TCP/IP Server (STRTCPSVR) CL Command


| The Start TCP/IP Server (STRTCPSVR) command, with a SERVER parameter
| value of *DDM or *ALL, is used to start the DRDA listener. In addition, the
| STRTCPSVR command will also attempt to start the prestart jobs that are associ-
| ated with the DRDA server. If for some reason the listener is started but the pre-
| start jobs were not, the prestart jobs for DRDA can be started manually with the
| command STRPJ QSYSWRK QRWTSRVR.

| An example of when the prestart jobs may need to be started manually is if the
| commands ENDTCPSVR *DDM and STRTCPSVR *DDM are run in rapid suc-
| cession, to restart the server. It takes a while for the prestart jobs to become com-
| pletely ended. If the STRTCPSVR command's request to start the prestart jobs
| comes too soon, they will not be started automatically, and the manual use of
| STRPJ QSYSWRK QRWTSRVR will be necessary.
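
A sketch of such a restart sequence follows; if the prestart jobs do not start with
the listener, the final STRPJ command starts them manually:

ENDTCPSVR SERVER(*DDM)
STRTCPSVR SERVER(*DDM)
STRPJ     SBS(QSYSWRK) PGM(QSYS/QRWTSRVR)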

| Note that the DRDA TCP/IP server can also be administered using the AS/400
| Operations Navigator, which is part of the Client Access product. The DRDA server
| is referred to as the DDM server in this context.

| Restriction: Only one DRDA listener can be active at one time. Requests to start
| the listener when it is already active will result in an informational message to the
| command issuer.
| Note: The DRDA server will not start if the QUSER password has expired. It is
| recommended that the password expiration interval be set to *NOMAX for
| the QUSER profile. With this value the password will not expire.

| Examples: Example 1: Starting all TCP/IP servers


| STRTCPSVR SERVER(*ALL)

| The command starts all of the TCP/IP servers, including the DRDA server.



| Example 2: Starting just the DRDA TCP/IP server
| STRTCPSVR *DDM

| This command starts only the DRDA TCP/IP server.

| End TCP/IP Server (ENDTCPSVR) CL Command


| The End TCP/IP Server (ENDTCPSVR) command ends the DRDA server.

| If the DRDA listener is ended, and there are associated server jobs that have active
| connections to client applications, the server jobs will remain active until communi-
| cation with the client application is ended. Subsequent connection requests from
| the client application will fail, however, until the listener is started again.

| Restrictions: If the End TCP/IP Server command is used to end the DRDA lis-
| tener when it is not active, a diagnostic message will be issued. This same diag-
| nostic message will not be sent if the listener is not active when an ENDTCPSVR
| SERVER(*ALL) command is issued.

| Examples: Example 1: Ending all TCP/IP servers


| ENDTCPSVR *ALL

| The command ends all active TCP/IP servers.

| Example 2: Ending just the DRDA server


| ENDTCPSVR SERVER(*DDM)

| This command ends the DRDA TCP/IP server.

| QSYSWRK Subsystem
| The DRDA server jobs and their associated listener job run in this subsystem. The
| QSYSWRK subsystem will start automatically when you perform an IPL on your
| AS/400 system, regardless of the value specified for the controlling subsystem.

| Subsystem Descriptions and Prestart Job Entries


| A subsystem description defines how, where, and how much work enters a sub-
| system, and which resources the subsystem uses to perform the work. The fol-
| lowing paragraphs describe how the prestart job entries in the QSYSWRK
| subsystem description affect the DRDA server.

| A prestart job is a batch job that starts running before a program on a remote
| system initiates communications with the server. Prestart jobs use prestart job
| entries in the subsystem description to determine which program, class, and
| storage pool to use when the jobs are started. Within a prestart job entry, you must
| specify attributes that the subsystem uses to create and manage a pool of prestart
| jobs.

| Prestart jobs provide increased performance when initiating a connection to a
| server. Prestart job entries are defined within a subsystem. Prestart jobs become
| active when that subsystem is started, or they can be controlled with the Start Pre-
| start Job (STRPJ) and End Prestart Job (ENDPJ) commands.



| Use of Prestart Jobs
| System information that pertains to prestart jobs (such as DSPACTPJ) will use the
| term 'program start request' exclusively to indicate requests made to start prestart
| jobs, even though the information may pertain to a prestart job that was started as
| a result of a TCP/IP connection request.

| The following list contains the prestart job entry attributes with the initial configured
| value for the DRDA TCP/IP server. They can be changed with the Change Prestart
| Job Entry (CHGPJE) command.
| Ÿ Subsystem Description. The subsystem that contains the prestart job entries is
| QSYSWRK.
| Ÿ Program library and name. The program that is called when the prestart job is
| started is QSYS/QRWTSRVR.
| Ÿ User profile. The user profile that the job runs under is QUSER. This is what
| the job shows as the user profile. When a request to connect to the server is
| received from a client, the prestart job function swaps to the user profile that is
| received in that request.
| Ÿ Job name. The name of the job when it is started is QRWTSRVR.
| Ÿ Job description. The job description used for the prestart job is *USRPRF.
| Note that the user profile is QUSER so this will be whatever QUSER's job
| description is. However, the attributes of the job are changed to correspond to
| the requesting user's job description after the userid and password (if present)
| are verified.
| Ÿ Start jobs. This indicates whether prestart jobs are to automatically start when
| the subsystem is started. These prestart job entries are shipped with a start
| jobs value of *NO to prevent unnecessary jobs starting when a system IPL is
| performed. You can change these to *YES if the DRDA TCP/IP communi-
| cations support is to be used. However, the STRTCPSVR *DDM command will
| also attempt to start the prestart jobs as part of its processing.
| Ÿ Initial number of jobs. As initially configured, the number of jobs that are started
| when the subsystem is started is 1. This value can be adjusted to suit your
| particular environment and needs.
| Ÿ Threshold. The minimum number of available prestart jobs for a prestart job
| entry is set to 1. When this threshold is reached, additional prestart jobs are
| automatically started. This is used to maintain a certain number of jobs in the
| pool.
| Ÿ Additional number of jobs. The number of additional prestart jobs that are
| started when the threshold is reached is initially configured at 2.
| Ÿ Maximum number of jobs. The maximum number of prestart jobs that can be
| active for this entry is *NOMAX.
| Ÿ Maximum number of uses. The maximum number of uses of the job is set to
| 200. This value indicates that the prestart job will end after 200 requests to
| start the server have been processed.
| Ÿ Wait for job. The *YES setting for DRDA causes a client connection request to
| wait for an available server job if the maximum number of jobs is reached.
| Ÿ Pool identifier. The subsystem pool identifier in which this prestart job runs is
| set to 1.



| Ÿ Class. The name and library of the class the DRDA prestart jobs will run under
| is set to QSYS/QSYSCLS20.

| When the start jobs value for the prestart job entry has been set to *YES, and the
| remaining values are as provided with their initial settings, the following happens for
| each prestart job entry:
| Ÿ When the subsystem is started, one prestart job is started.
| Ÿ When the first client connection request is processed for the DRDA server, the
| initial job is used and the threshold is exceeded.
| Ÿ Additional jobs are started for the server based on the number defined in the
| prestart job entry.
| Ÿ The number of available jobs will not reach below 1.
| Ÿ The subsystem periodically checks the number of prestart jobs in a pool that
| are ready to process requests, and it starts ending excess jobs. It always
| leaves at least the number of prestart jobs specified in the initial jobs param-
| eter.

| Monitoring Prestart Jobs: Prestart jobs can be monitored by using the Display
| Active Prestart Jobs (DSPACTPJ) command. Use the command DSPACTPJ
| QSYSWRK QRWTSRVR to monitor the DRDA prestart jobs.

| The DSPACTPJ command provides the following information:


| Ÿ Current number of prestart jobs
| Ÿ Average number of prestart jobs
| Ÿ Peak number of prestart jobs
| Ÿ Current number of prestart jobs in use
| Ÿ Average number of prestart jobs in use
| Ÿ Peak number of prestart jobs in use
| Ÿ Current number of waiting connect requests
| Ÿ Average number of waiting connect requests
| Ÿ Peak number of waiting connect requests
| Ÿ Average wait time
| Ÿ Number of connect requests accepted
| Ÿ Number of connect requests rejected

| Managing Prestart Jobs: The information presented for an active prestart job can
| be refreshed by pressing the F5 key while on the Display Active Prestart Jobs
| display. Of particular interest is the information about program start requests. This
| information can indicate to you whether or not you need to change the available
| number of prestart jobs. If you have information indicating that program start
| requests are waiting for an available prestart job, you can change prestart jobs
| using the Change Prestart Job Entry (CHGPJE) command.

| If the program start requests were not being acted on fast enough, you could do
| any combination of the following:
| Ÿ Increase the threshold.



| Ÿ Increase the Initial number of jobs (INLJOBS) parameter value.
| Ÿ Increase the Additional number of jobs (ADLJOBS) parameter value.

| The key is to ensure that there is an available prestart job for every request that is
| sent that starts a server job.
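
For example, a change such as the following raises those values for the DRDA prestart
job entry; the numbers shown are illustrative and should be tuned to your own
workload:

CHGPJE SBSD(QSYSWRK) PGM(QSYS/QRWTSRVR) INLJOBS(5) THRESHOLD(3) ADLJOBS(3)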

| Removing Prestart Job Entries: If you decide that you do not want the servers
| to use the prestart job function, you must do the following:
| 1. End the prestarted jobs using the End Prestart Job (ENDPJ) command.
| Prestarted jobs ended with the ENDPJ command will be started the next time
| the subsystem is started if start jobs *YES is specified in the prestart job entry
| or when the STRTCPSVR *DDM command is issued. If
| you only end the prestart job and do not perform the next step, any requests to
| start the particular server will fail.
| 2. Remove the prestart job entries in the subsystem description using the Remove
| Prestart Job Entry (RMVPJE) command.
| The prestart job entries removed with the RMVPJE command are permanently
| removed from the subsystem description. Once the entry is removed, new
| requests for the server will be successful, but will incur the performance over-
| head of job initiation.
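
For example, the two steps might look like the following:

ENDPJ  SBS(QSYSWRK) PGM(QSYS/QRWTSRVR) OPTION(*IMMED)
RMVPJE SBSD(QSYSWRK) PGM(QSYS/QRWTSRVR)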

| Routing Entries: When an OS/400 job is routed to a subsystem, this is done
| using the routing entries in the subsystem description. The routing entry for the
| DRDA listener job in the QSYSWRK subsystem is present after OS/400 is installed.
| This job is started under the QUSER user profile, and the QSYSNOMAX job queue
| is used.

| The server jobs run in the QSYSWRK subsystem also. The characteristics of the
| server jobs are taken from their prestart job entry which also comes automatically
| configured with OS/400. If this entry is removed so that prestart jobs are not used
| for the servers, then the server jobs are started using the characteristics of their
| corresponding listener job.

| The following provides the initial configuration in the QSYSWRK subsystem for the
| DRDA listener job.

| Subsystem QSYSWRK
| Job Queue QSYSNOMAX
| User QUSER
| Routing Data QRWTLSTN
| Job Name QRWTLSTN
| Class QSYSCLS20

| Identifying Server Jobs


| If you look at the server jobs started on the system, you may find it difficult to relate
| a server job to a certain application requester job or to a particular PC client. Being
| able to identify a particular job is a prerequisite to investigating problems and gath-
| ering performance data. This section provides information on how to identify server
| jobs before starting debug or performance investigation.



| AS/400 Job Names
| The job name used on the AS/400 consists of three parts:
| Ÿ The simple job name
| Ÿ User ID
| Ÿ Job number (ascending order)

| The DRDA server jobs follow the following conventions:


| Ÿ Job name is QRWTSRVR.
| Ÿ User ID
| – Will always be QUSER, whether prestart jobs are used or not.
| – The job log will show which user is currently using the job.
| Ÿ The job number is created by work management.

| Displaying Server Jobs


| There are three methods that can be used to aid in identifying server jobs. One
| method is to use the WRKACTJOB command. Another method is to use the
| WRKUSRJOB command. A third method is to display the history log to determine
| which job is being used by which client user.

| Displaying Active Jobs Using WRKACTJOB: The WRKACTJOB command
| shows all active jobs. All server jobs are displayed, as well as the DRDA listener.

| The following figures show a sample status using the WRKACTJOB command.
| Only jobs related to the DRDA server are shown in the figures. You must press F14
| to see the available prestart jobs.

| The following types of jobs are shown in the figures.

| Ÿ .1/ - DRDA listener


| Ÿ .2/ - Prestarted server jobs



|                             Work with Active Jobs                      AS400597
|                                                            04/25/97  10:25:40
| CPU %:    3.1      Elapsed time:   21:38:40      Active jobs:   77
|
| Type options, press Enter.
|   2=Change   3=Hold   4=End   5=Work with   6=Release   7=Display message
|   8=Work with spooled files   13=Disconnect ...
|
| Opt  Subsystem/Job    User       Type   CPU %   Function      Status
|  .
| ___  QSYSWRK          QSYS       SBS       .0                 DEQW
|  .
| .1/
| ___  QRWTLSTN         QUSER      BCH       .0                 SELW
|  .
|  .
| .2/
| ___  QRWTSRVR         QUSER      PJ        .0                 TIMW
| ___  QRWTSRVR         QUSER      PJ        .0                 TIMW
| ___  QRWTSRVR         QUSER      PJ        .0                 TIMW
| ___  QRWTSRVR         QUSER      PJ        .0                 TIMW
| ___  QRWTSRVR         QUSER      PJ        .0                 TIMW
|  .                                                                More...

| The following types of jobs are shown:


| PJ The prestarted server jobs.
| SBS The subsystem monitor jobs.
| BCH The DRDA listener job.

| Displaying Active User Jobs Using WRKUSRJOB: The command
| WRKUSRJOB USER(QUSER) STATUS(*ACTIVE) will display all active server jobs
| running under QUSER. This includes the DRDA listener and all DRDA server jobs.
| This command may be preferable, in that it will list fewer jobs for you to look
| through to find the DRDA-related ones.

| Displaying the History Log


| Each time a client user establishes a successful connection with a server job, that
| job is swapped to run under the profile of that client user. To determine which job is
| associated with a particular client user, you can display the history log using the
| DSPLOG command. An example of the information provided is shown in the fol-
| lowing figure.



|                         Display History Log Contents
|  .
|  .
| DDM job 036995/QUSER/QRWTSRVR servicing user MEL on 08/18/97 at 15:26:43.
|  .
| DDM job 036995/QUSER/QRWTSRVR servicing user REBECCA on 08/18/97 at 15:45:08.
|  .
| DDM job 036995/QUSER/QRWTSRVR servicing user NANCY on 08/18/97 at 15:56:21.
|  .
| DDM job 036995/QUSER/QRWTSRVR servicing user ROD on 08/18/97 at 16:02:59.
|  .
| DDM job 036995/QUSER/QRWTSRVR servicing user SMITH on 08/18/97 at 16:48:13.
|  .
| DDM job 036995/QUSER/QRWTSRVR servicing user DAVID on 08/18/97 at 17:10:27.
|  .
|  .
|  .
|
| Press Enter to continue.
|
| F3=Exit   F10=Display all   F12=Cancel
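
For example, a command similar to the following displays history log entries for the
current day; the PERIOD parameter can be adjusted or omitted to change the time
period shown:

DSPLOG LOG(QHST) PERIOD((*AVAIL *CURRENT))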

Canceling Distributed Relational Database Work


Whether you are testing an application, handling a user problem, or monitoring a
particular device, there are times when you may want to end work that is being
done on a system. When you are using an interactive job, you normally end the job
by signing off the system. There are other ways that you can cancel or discontinue
jobs on the system. They depend on what kind of a job it is and what system it is
on. The ways are:
Ÿ End job
Ÿ End request

End Job (ENDJOB) Command


The End Job (ENDJOB) command ends any job. The job can be active, on a job
queue, or already ended. You can end a job immediately or by specifying a time
interval so that end of job processing can occur.

Ending an AR job ends the job on both the AR and the AS. If the application is
under commitment control, all uncommitted changes are rolled back.

End Request (ENDRQS) Command


The End Request (ENDRQS) command cancels a local operation (request) that is
currently stopped at a breakpoint. This means the command cancels an AR opera-
tion or request. You can cancel a request by entering ENDRQS on a command line
or you can select option 2 from the System Request menu.

If the request cannot be canceled immediately because a system function that
cannot be interrupted is currently running, the ENDRQS command waits until
interruption is allowed.

When a request is ended, an escape message is sent to the request processing
program that is currently called at the request level being canceled. Request proc-
essing programs can monitor for the escape message so that cleanup processing
can be done when a request is canceled. The static storage and open files are
reclaimed for any program that was called by the request processing program.
None of the programs called by the request processing program are notified of the
cancel, so they have no opportunity to stop processing.

Attention: Using the ENDRQS command on an AR job may produce unpredictable
results and can cause the loss of the connection to the AS.

Auditing the Relational Database Directory


Accesses to the relational database directory are recorded in the security auditing
journal when either:
Ÿ The value of the QAUDLVL system value includes *SYSMGT.
Ÿ The value of the AUDLVL parameter in the user profile includes *SYSMGT.

With the *SYSMGT value, the system audits all accesses made with the following
commands:
Ÿ Add Directory Entry (ADDRDBDIRE)
Ÿ Change Directory Entry (CHGRDBDIRE)
Ÿ Display Directory Entry (DSPRDBDIRE)
Ÿ Remove Directory Entry (RMVRDBDIRE)
Ÿ Work Directory Entry (WRKRDBDIRE)

The relational database directory is a database file (QSYS/QADBXRDBD) that can
be read directly without the directory entry commands. To audit direct accesses to
this file, set auditing on with the Change Object Auditing (CHGOBJAUD) command.
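
For example, assuming security auditing is already active on the system, a command
like the following sets object auditing on for the directory file:

CHGOBJAUD OBJ(QSYS/QADBXRDBD) OBJTYPE(*FILE) OBJAUD(*ALL)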



Chapter 7. Data Availability and Protection
In a distributed relational database environment, data availability not only involves
protecting data on an individual system in the network, but also ensuring that users
have access to the data across the network.

This chapter discusses tools and techniques to protect programs and data on an
AS/400 system and reduce recovery time in the event of a problem. It also provides
information about alternatives that ensure your network users have access to the
relational databases and tables across the network when it is needed.

Recovery Support
Failures that can occur on a computer system are a system failure (when the entire
system is not operating); a loss of the site due to fire, flood or similar catastrophe;
or the damage or loss of an object. For a distributed relational database, a failure
on one system in the network prevents users across the entire network from
accessing the relational database on that system. If the relational database is crit-
ical to daily business activities at other locations, enterprise operations across the
entire network can be disrupted for the duration of one system’s recovery time.
Clearly, planning for data protection and recovery after a failure is particularly
important in a distributed relational database.

Each system in a distributed relational database is responsible for backing up and
recovering its own data. Each system in the network also handles recovery proce-
dures after an abnormal system end. However, backup and recovery procedures
can be done by the distributed relational database administrator using display
station pass-through for those systems with an inexperienced operator or no oper-
ator at all.

The most common type of loss is the loss of an object or group of objects. An
object can be lost or damaged due to several factors, including power failure, hard-
ware failures, system program errors, application program errors, or operator errors.
The AS/400 system provides several methods for protecting the system programs,
application programs, and data from being permanently lost. Depending on the type
of failure and the level of protection chosen, most of the programs and data can be
protected, and the recovery time can be significantly reduced. These protection
methods include:
Ÿ Physical protection of the system from power failure
Ÿ System save and restore functions to ensure Structured Query Language
(SQL) objects such as tables, collections, packages and relational database
directories can be saved and restored
Ÿ Protection from disk related failures such as auxiliary storage pools to control
where objects are stored, checksum protection for auxiliary storage pools, and
mirrored protection for disk-related hardware components
Ÿ Journal management for keeping auxiliary records of relational database changes
and for journaling indexes to the data
Ÿ Commitment control to ensure relational database transactions can be applied
or removed in a uniform manner



Uninterruptible Power Supply
Making sure your system is protected from sudden power loss is an important part
of ensuring that your application server (AS) is available to an application requester
(AR). An uninterruptible power supply, which can be ordered separately, protects the
system from loss because of power failure, power interruptions, or drops in voltage,
by supplying power to the system devices until power can be restored. Normally, an
uninterruptible power supply does not provide power to all work stations. With the
AS/400 system, the uninterruptible power supply allows the system to:
Ÿ Continue operations during brief power interruptions or momentary drops in
voltage.
Ÿ End operations normally by closing files and maintaining object integrity.

Data Recovery after Disk Failures


Recovery is not possible for recently entered data if a disk failure occurs and all
objects are not saved on tape or disk immediately before the failure. After previ-
ously saved objects are restored, the system is operational, but the database is not
current.

Auxiliary storage pools (ASPs), checksum protection, and mirrored protection are
OS/400 disk recovery functions that provide methods to recover recently entered
data after a disk related failure. These functions use additional system resources,
but provide a high level of protection for systems in a distributed relational data-
base. Since some systems may be more critical as application servers than others,
the distributed relational database administrator should review how these disk data
protection methods can best be used by individual systems within the network.

Auxiliary Storage Pools


An ASP is one or more physical disk units assigned to the same storage area.
ASPs allow you to isolate certain types of objects on specified physical disk units.

The system ASP isolates system programs and the temporary objects that are
created as a result of processing by system programs. User ASPs can be used to
isolate some objects such as libraries, SQL objects, journals, journal receivers,
applications, and data. The AS/400 system supports up to 15 user ASPs. Isolating
libraries or objects in a user ASP protects them from disk failures in other ASPs
and reduces recovery time.

In addition to reduced recovery time and isolation of objects, placing objects in an
ASP can improve performance. If a journal receiver is isolated in a user ASP, the
disks associated with that ASP are dedicated to that receiver. In an environment
that requires many read and write operations to the database files, this can reduce
arm contention on the disks in that ASP, and can improve journaling performance.

Checksum Protection
Checksum protection guards against losing the data on any disk in an ASP. The
checksum software maintains a coded copy of ASP data in special checksum data
areas within that ASP. Any changes made to permanent objects in a checksum
protected ASP are automatically maintained in the checksum data of the checksum
set. If any single disk unit in a checksum set is lost, the system reconstructs the
contents of the lost device using the checksum and the data on the remaining func-
tional units of the set. In this way, if any one of the units fails, its contents may be
recovered. This reconstructed data reflects the most up-to-date information that was



on the disk at the time of the failure. Checksum protection can affect system per-
formance significantly. In a distributed relational database this may be a concern.

Mirrored Protection
Mirrored protection increases the availability of a system by duplicating different
disk-related hardware components such as a disk controller, a disk I/O processor,
or a bus. The system can remain available after a failure, and service for the failed
hardware components can be scheduled at a convenient time.

Different levels of mirrored protection provide different levels of system availability.


For example, if only the disk units on a system are mirrored, all disk units have disk
unit-level protection, so the system is protected against the failure of a single disk
unit. In this situation, if a controller, I/O processor, or bus failure occurs, the system
cannot run until the failing part is repaired or replaced. All mirrored units on the
system must have identical disk unit-level protection and reside in the same ASP.
The units in an ASP are automatically paired by the system when mirrored pro-
tection is started.

Journal Management
Journal management can be used as a part of the backup and recovery strategy for
relational databases and indexes. AS/400 journal support provides an audit trail and
forward and backward recovery. Forward recovery can be used to take an older
version of a table and apply changes logged in the journal to the table. Backward
recovery can be used to remove changes logged in the journal from the table.

When a collection is created, a journal and an object called a journal receiver are
created in the collection. Because placing journal receivers on ASPs can improve
performance, the distributed relational database administrator may wish to create
the collection on a user ASP.

When a table is created, it is automatically journaled to the journal that SQL created
in the collection. After this point, you are responsible for using the journal functions to
manage the journal, journal receivers, and the journaling of tables to the journal.
For example, if a table is moved into a collection, no automatic change to the jour-
naling status occurs. If a table is restored, the normal journal rules apply. That is, if
a table is journaled when it is saved, it is journaled to the same journal when it is
restored on that system. If the table is not journaled at the time of the save, it is
not journaled at restore time. You can stop journaling on any table using the journal
functions, but doing so prevents SQL operations from running under commitment
control. SQL operations can still be performed if you have specified
COMMIT(*NONE), but this does not provide the same level of integrity that jour-
naling and commitment control provide.

With journaling active, when changes are made to the database, the changes are
journaled in a journal receiver before the changes are made to the database. The
journal receiver always has the latest database information. All activity is journaled
for a database table regardless of how the change was made.

Journal receiver entries record activity for a specific row (added, changed, or
deleted), and for a table (opened, table or member saved, and so on). Each entry
includes additional control information identifying the source of the activity, the user,
job, program, time, and date.



The system journals some file-level changes, including moving a table and
renaming a table. The system also journals member-level changes, such as initial-
izing a physical file member, and system-level changes, such as initial program
load (IPL). You can add entries to a journal receiver to identify significant events
(such as the checkpoint at which information about the status of the job and the
system can be journaled so that the job step can be restarted later) or to help in
the recovery of applications.

For changes that affect a single row, row images are included following the control
information. The image of the row after a change is made is always included.
Optionally, the row image before the change is made can also be included. You
control whether to journal both before and after row images or just after row images
by specifying the IMAGES parameter on the Start Journaling Physical File
(STRJRNPF) command.
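
For example, a command similar to the following journals both before and after row
images; the collection and table names are illustrative, and QSQJRN is assumed to be
the journal created in the collection:

STRJRNPF FILE(SPIFFY/INVENTORY) JRN(SPIFFY/QSQJRN) IMAGES(*BOTH)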

All journaled database files are automatically synchronized with the journal when
the system is started (IPL time). If the system ended abnormally, some database
changes may be in the journal, but not yet reflected in the database itself. If that is
the case, the system automatically updates the database from the journal to bring
the tables up to date.

Journaling can make saving database tables easier and faster. For example,
instead of saving entire tables every day, you can simply save the journal receivers
that contain the changes to the tables. You might still save the entire tables on a
regular basis. This method can reduce the amount of time it takes to perform your
daily save operations.

The Display Journal (DSPJRN) command can be used to convert journal receiver
entries to a database file. Such a file can be used for activity reports, audit trails,
security, and program debugging.

Index Recovery
An index describes the order in which rows are read from a table. When indexes
are recorded in the journal, the system can recover the index to avoid spending a
significant amount of time rebuilding indexes during the IPL following an abnormal
system end.

When you journal tables, images of changes to the rows in the table are written to
the journal. These row images are used to recover the table should the system end
abnormally. However, after an abnormal end, the system may find that indexes built
over the table are not synchronized with the data in the table. If an access path and
its data are not synchronized, the system must rebuild the index to ensure that the
two are synchronized and usable.

When indexes are journaled, the system records images of the index in the journal
to provide known synchronization points between the index and its data. By having
that information in the journal, the system can recover both the data and the index,
and ensure that the two are synchronized. In such cases, the lengthy time to
rebuild the indexes can be avoided.

The AS/400 system provides several functions to assist with index recovery. All
indexes on the system have a maintenance option that specifies when the index is
maintained. SQL indexes are created with an attribute of *IMMED maintenance.



In the event of a power failure or abnormal system failure, indexes that are in the
process of change may need to be rebuilt to make sure they agree with the data.
All indexes on the system have a recovery option that specifies when the index
should be rebuilt if necessary. All SQL indexes with an attribute of UNIQUE are
created with a recovery attribute of *IPL, which means these indexes are rebuilt
before the OS/400 licensed program has been started. All other SQL indexes are
created with the *AFTIPL recovery attribute, which means they are rebuilt after the
operating system has been started. During an IPL, you can see a display showing
indexes needing to be rebuilt and their recovery option, and you may override these
recovery options.

SQL indexes are not journaled automatically. You can use the Start Journal Access
Path (STRJRNAP) command to journal any index created by SQL operations. The
system save and restore functions allow you to save indexes when a table is saved
by using ACCPTH(*YES) on the Save Object (SAVOBJ) or Save Library (SAVLIB)
commands. If you must restore a table, there is no need to rebuild the indexes. Any
indexes not previously saved and restored are automatically and asynchronously
rebuilt by the database.

Before journaling indexes, you must start journaling for the tables associated with
the index. In addition, you must use the same journal for the index and its associ-
ated table.
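
As a sketch (the object names are illustrative), the first command journals an SQL
index to the same journal as its associated table, and the second saves the table
along with its access paths:

STRJRNAP FILE(SPIFFY/INVENTIX) JRN(SPIFFY/QSQJRN)
SAVOBJ   OBJ(INVENTORY) LIB(SPIFFY) DEV(TAP01) ACCPTH(*YES)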

Index journaling is designed to minimize additional output operations. For example,
the system writes the journal data for the changed row and the changed index in
the same output operation. However, you should seriously consider isolating your
journal receivers in user ASPs when you start journaling your indexes. Placing
journal receivers in their own user ASP provides the best journal management per-
formance, while helping to protect them from a disk failure.

Designing Tables to Reduce Index Rebuilding Time


Table design can also help reduce index recovery time. For example, you can
divide a large master table into a history table and a transaction table. The trans-
action table is then used for adding new data, the history table is used for inquiry
only. Each day, you can merge the transaction data into the history table, then
clear the transaction file for the next day’s data. With this design, the time to rebuild
indexes can be shortened, because if the system abnormally ends during the day,
the index to the smaller transaction table might need to be rebuilt. However,
because the index to the large history table is read-only for most of the day, it
would probably not be out of synchronization with its data, and would not have to
be rebuilt.

Consider the trade-off between using table design to reduce index rebuilding time
and using system-supplied functions like access path journaling. The table design
described above may require a more complex application design. After evaluating
your situation, you may decide to use system-supplied functions like access path
journaling rather than design more complex applications.



System-Managed Access-Path Protection (SMAPP)
System-managed access-path protection (SMAPP) provides automatic protection
for access paths. Using the SMAPP support, you do not have to use the journaling
commands, such as STRJRNAP, to get the benefits of access path journaling.
SMAPP support recovers access paths after an abnormal system end rather than
rebuilding them during IPL.

The SMAPP support is turned on with the shipped system.

The system determines which access paths to protect based on target access path
recovery times provided by the user or by using a system-provided default time.
The target access path recovery times can be specified as a system-wide value or
on an ASP basis. Access paths that are being journaled to a user-defined journal
are not eligible for SMAPP protection because they are already protected. See the
Backup and Recovery book for more information about SMAPP.

Transaction Recovery through Commitment Control


Commitment control is an extension of the journal management function on the
AS/400 system. The system can identify and process a group of relational database
changes as a single unit of work (transaction).

An SQL COMMIT statement guarantees that the group of operations is completed.
An SQL ROLLBACK statement guarantees that the group of operations is backed
out. The only SQL statements that cannot be committed or rolled back are:
Ÿ DROP COLLECTION
Ÿ GRANT or REVOKE if an authority holder exists for the specified object

Under commitment control, tables and rows used during a transaction are locked
from other jobs. This ensures that other jobs do not use the data until the trans-
action is complete. At the end of the transaction, the program issues an SQL
COMMIT or ROLLBACK statement, freeing the rows. If the system or job ends
abnormally before the commit operation is performed, all changes for that job since
the last time a commit or rollback operation occurred are rolled back. Any affected
rows that are still locked are then unlocked. The lock levels are as follows:
*NONE Commitment control is not used. Uncommitted changes in other jobs
can be seen.
*CHG Objects referred to in SQL ALTER, COMMENT ON, CREATE, DROP,
GRANT, LABEL ON, and REVOKE statements and the rows updated,
deleted, and inserted are locked until the unit of work (transaction) is
completed. Uncommitted changes in other jobs can be seen.
*CS Objects referred to in SQL ALTER, COMMENT ON, CREATE, DROP,
GRANT, LABEL ON, and REVOKE statements and the rows updated,
deleted, and inserted are locked until the unit of work (transaction) is
completed. A row that is selected, but not updated, is locked until the
next row is selected. Uncommitted changes in other jobs cannot be
seen.
*ALL Objects referred to in SQL ALTER, COMMENT ON, CREATE, DROP,
GRANT, LABEL ON, and REVOKE statements and the rows read,
updated, deleted, and inserted are locked until the end of the unit of
work (transaction). Uncommitted changes in other jobs cannot be seen.



Figure 7-1 on page 7-8 shows the record lock duration for each of these lock level
values.

If you request COMMIT (*CHG), COMMIT (*CS), or COMMIT (*ALL) when the
program is precompiled or when interactive SQL is started, then SQL sets up the
commitment control environment by implicitly calling the Start Commitment Control
(STRCMTCTL) command. The LCKLVL parameter specified when SQL starts com-
mitment control is the lock level specified on the COMMIT parameter on the
CRTSQLxxx commands. NFYOBJ(*NONE) is specified when SQL starts commit-
ment control. To specify a different NFYOBJ parameter, issue a STRCMTCTL
command before starting SQL.
Note: When running with commitment control, the tables referred to in the applica-
tion program by data manipulation language statements must be journaled.
The tables do not have to be journaled at precompile time, but they must be
journaled when you run the application.
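
For example, the following commands (the message queue name is illustrative) start
commitment control with a notify object before starting interactive SQL:

STRCMTCTL LCKLVL(*CHG) NFYOBJ(SPIFFY/CMTNFYQ *MSGQ)
STRSQL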

If a remote relational database is accessing data on the AS/400 system and
requesting commit level repeatable read (*RR), the tables will be locked until the
query is closed. If the cursor is read only, the table will be locked (*SHRNUP). If
the cursor is in update mode, the table will be locked (*EXCLRD).

The journal created in the SQL collection is normally the journal used for logging all
changes to SQL tables. You can, however, use the system journal functions to
journal SQL tables to a different journal.

Commitment control can handle up to 131 072 distinct row changes in a unit of
work. If COMMIT(*ALL) is specified, all rows read are also included in the 131 072
limit. (If a row is changed or read more than once in a unit of work, it is only
counted once toward the 131 072 limit.) Maintaining a large number of locks
adversely affects system performance and does not allow concurrent users to
access rows locked in the unit of work until the unit of work is completed. It is,
therefore, more efficient to keep the number of rows processed in a unit of work
small. Commitment control allows up to 512 tables either open under commitment
control or closed with pending changes in a unit of work.

The HOLD value on COMMIT and ROLLBACK statements allows you to keep the
cursor open and start another unit of work without issuing an OPEN again. The
HOLD value is not available when there are non-AS/400 connections that are not
released for a program and SQL is still in the call stack. If ALWBLK(*ALLREAD)
and either COMMIT(*CHG) or COMMIT(*CS) are specified when the program is
precompiled, all read-only cursors will allow blocking of rows and a ROLLBACK
HOLD statement will not roll the cursor position back.

If there are locked rows (records) pending from running an SQL precompiled
program or an interactive SQL session, a COMMIT or ROLLBACK statement can
be issued from the system Command Entry display. Otherwise, an implicit
ROLLBACK operation occurs when the job is ended.

You can use the Work with Commitment Definitions (WRKCMTDFN) command to
monitor the status of commitment definitions and free up locks and held resources
involved with commitment control activities across systems. For more information,
see “Working with Commitment Definitions” on page 6-4.



For more information on commitment control, see the DB2 for AS/400 SQL Pro-
gramming book.

Figure 7-1 (Page 1 of 2). Record Lock Duration


SQL Statement          COMMIT Parameter    Duration of Record Locks                  Lock Type
SELECT INTO            *NONE               No locks
                       *CHG                No locks
                       *CS                 Row locked when read and released         READ
                       *ALL (see note 2)   From read until ROLLBACK or COMMIT        READ
FETCH (read-only       *NONE               No locks
cursor)                *CHG                No locks
                       *CS                 From read until the next FETCH            READ
                       *ALL (see note 2)   From read until ROLLBACK or COMMIT        READ
FETCH (update or       *NONE               When record not updated or deleted,       UPDATE
delete capable                             from read until next FETCH; when
cursor) (see note 1)                       record is updated or deleted, from
                                           read until UPDATE or DELETE
                       *CHG                When record not updated or deleted,       UPDATE
                                           from read until next FETCH; when
                                           record is updated or deleted, from
                                           read until UPDATE or DELETE
                       *CS                 When record not updated or deleted,       UPDATE
                                           from read until next FETCH; when
                                           record is updated or deleted, from
                                           read until UPDATE or DELETE
                       *ALL                From read until ROLLBACK or COMMIT        UPDATE (see note 3)
INSERT (target         *NONE               No locks
table)                 *CHG                From insert until ROLLBACK or COMMIT      UPDATE
                       *CS                 From insert until ROLLBACK or COMMIT      UPDATE
                       *ALL                From insert until ROLLBACK or COMMIT      UPDATE (see note 4)
INSERT (tables in      *NONE               No locks
subselect)             *CHG                No locks
                       *CS                 Each record locked while being read       READ
                       *ALL                From read until ROLLBACK or COMMIT        READ
UPDATE (non-cursor)    *NONE               Each record locked while being updated    UPDATE
                       *CHG                From read until ROLLBACK or COMMIT        UPDATE
                       *CS                 From read until ROLLBACK or COMMIT        UPDATE
                       *ALL                From read until ROLLBACK or COMMIT        UPDATE
DELETE (non-cursor)    *NONE               Each record locked while being deleted    UPDATE
                       *CHG                From read until ROLLBACK or COMMIT        UPDATE
                       *CS                 From read until ROLLBACK or COMMIT        UPDATE
                       *ALL                From read until ROLLBACK or COMMIT        UPDATE
UPDATE (with cursor)   *NONE               Lock released when record updated         UPDATE
                       *CHG                From read until ROLLBACK or COMMIT        UPDATE
                       *CS                 From read until ROLLBACK or COMMIT        UPDATE
                       *ALL                From read until ROLLBACK or COMMIT        UPDATE
DELETE (with cursor)   *NONE               Lock released when record deleted         UPDATE
                       *CHG                From read until ROLLBACK or COMMIT        UPDATE
                       *CS                 From read until ROLLBACK or COMMIT        UPDATE
                       *ALL                From read until ROLLBACK or COMMIT        UPDATE



Figure 7-1 (Page 2 of 2). Record Lock Duration
SQL Statement          COMMIT Parameter    Duration of Record Locks                  Lock Type
Subqueries (update     *NONE               From read until next FETCH                READ
or delete capable      *CHG                From read until next FETCH                READ
cursor or UPDATE or    *CS                 From read until next FETCH                READ
DELETE non-cursor)     *ALL (see note 2)   From read until ROLLBACK or COMMIT        READ
Subqueries (read-only  *NONE               No locks
cursor or SELECT       *CHG                No locks
INTO)                  *CS                 Each record locked while being read       READ
                       *ALL                From read until ROLLBACK or COMMIT        READ
Notes:
1. A cursor is open with UPDATE or DELETE capabilities if the result table is not read-only (see
description of DECLARE CURSOR in the DB2 for AS/400 SQL Reference book) and if one of the
following is true:
Ÿ The cursor is defined with a FOR UPDATE clause.
Ÿ The cursor is defined without a FOR UPDATE, FOR FETCH ONLY, or ORDER BY clause and the
program contains at least one of the following:
– Cursor UPDATE referring to the same cursor-name
– Cursor DELETE referring to the same cursor-name
– An EXECUTE or EXECUTE IMMEDIATE statement with ALWBLK(*READ) or
ALWBLK(*NONE) specified on the CRTSQLxxx command
2. A table or view can be locked exclusively in order to satisfy COMMIT(*ALL). If a subselect is proc-
essed that includes a group by or union, or if the processing of the query requires the use of a tempo-
rary result, an exclusive lock is acquired to protect you from seeing uncommitted changes.
3. If the row is not updated or deleted, the lock is reduced to *READ.
4. An UPDATE lock on rows of the target table and a READ lock on the rows of the subselect table.
5. A table or view can be locked exclusively in order to satisfy repeatable read. Row locking is still done
under repeatable read. The locks acquired and their duration are identical to *ALL.

Writing Data to Auxiliary Storage


The Force-Write Ratio (FRCRATIO) parameter on the Create File command can be
used to force data to be written to auxiliary storage. A force-write ratio of one
causes every add, update, and delete request to be written to auxiliary storage
immediately for the table in question. However, choosing this option can reduce
system performance. Therefore, saving your tables and journaling tables should be
considered the primary methods for protecting the database.
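
For example, a command such as the following (the file name is illustrative) sets a
force-write ratio of one for an existing table:

CHGPF FILE(SPIFFY/INVENTORY) FRCRATIO(1)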

Save and Restore Processing


Saving and restoring data and programs allows recovery from a program or system
failure, exchange of information between systems, or storage of objects or data off-
line. A sound backup policy at each system in the distributed relational database
network ensures a system can be restored and made available to network users
quickly in the event of a problem.



Saving the system on external media such as tape, protects system programs and
data from disasters, such as fire or flood. However, information can also be saved
to a disk file called a save file. A save file is a disk-resident file used to store data
until it is used in input and output operations or for transmission to another AS/400
system over communication lines. Using a save file allows unattended save oper-
ations because an operator does not need to load diskettes or tapes. In a distrib-
uted relational database, save files can be sent to another system as a protection
method.
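
For example, the following sketch (the names are illustrative) creates a save file and
saves a collection into it; the save file can then be sent to another system:

CRTSAVF FILE(QGPL/SPIFFYSAVF)
SAVLIB  LIB(SPIFFY) DEV(*SAVF) SAVF(QGPL/SPIFFYSAVF)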

When information is restored, the information is written from diskette, tape, or a
save file into auxiliary storage, where it can be accessed by system users.

The AS/400 system has a full set of commands to save and restore your database
tables and SQL objects:
Ÿ Save Library (SAVLIB) saves one or more collections
Ÿ Save Object (SAVOBJ) saves one or more objects such as SQL tables, views
and indexes
Ÿ Save Changed Object (SAVCHGOBJ) saves any objects that have changed
since either the last time the collection was saved or from a specified date
Ÿ Save Save File Data (SAVSAVFDTA) saves the contents of a save file
Ÿ Save System (SAVSYS) saves the operating system, security information,
device configurations, and system values
Ÿ Restore Library (RSTLIB) restores a collection
Ÿ Restore Object (RSTOBJ) restores one or more objects such as SQL tables,
views and indexes
Ÿ Restore User Profiles (RSTUSRPRF), Restore Authority (RSTAUT) and
Restore Configuration (RSTCFG) restore user profiles, authorities, and config-
urations saved by a SAVSYS command

See the Backup and Recovery book for more information about these functions and
commands.

Saving and Restoring Indexes


Restoring an SQL index can be faster than rebuilding it. Although times vary
depending on a number of factors, rebuilding a database index takes approximately
1 minute for every 10,000 rows.

After restoring the index, the table may need to be brought up to date by applying
the latest journal changes (depending on whether journaling is active). Normally,
the system can apply approximately 80,000 to 100,000 journal entries per hour.
(This assumes that each of the tables to which entries are being applied has only
one index or view built over it.) Even with this additional recovery time, you will
usually find it is faster to restore indexes rather than to rebuild them.

The system ensures the integrity of an index before you can use it. If the system
determines that the index is unusable, the system attempts to recover it. You can
control when an index will be recovered. If the system ends abnormally, during the
next IPL the system automatically lists those tables requiring index or view
recovery. You can decide whether to rebuild the index or to attempt to recover it at
one of the following times:



Ÿ During the IPL
Ÿ After the IPL
Ÿ When the table is first used

For more information, see the Backup and Recovery book topics about saving and
restoring access paths.

Saving and Restoring Security Information


If you make frequent changes to your system security environment by updating
user profiles and updating authorities for users in the distributed relational database
network, you can save security information to media or a save file without a com-
plete Save System (SAVSYS) command, a long-running process that uses a dedi-
cated system. With the Save Security Data (SAVSECDTA) command you can save
security data in a shorter time without using a dedicated system. Data saved using
the SAVSECDTA command can be restored using the Restore User Profile
(RSTUSRPRF) or Restore Authority (RSTAUT) commands.

| Included in the security information that the SAVSECDTA and RSTUSRPRF com-
| mands can save and restore are the server authorization entries that the DRDA
| TCP/IP support uses to store and retrieve remote system user ID and password
| information.
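
For example, the following sequence (the tape device name is illustrative) saves the
security data and later restores the user profiles and their authorities:

SAVSECDTA DEV(TAP01)
RSTUSRPRF DEV(TAP01) USRPRF(*ALL)
RSTAUT    USRPRF(*ALL)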

Saving and Restoring SQL Packages


When an application program that refers to a relational database on a remote
system is precompiled and bound, an SQL package is created on the AS to contain
the control structures necessary to process any SQL statements in the application.

An SQL package is an AS/400 object, so it can be saved to media or a save file


using the Save Object (SAVOBJ) command and restored using the Restore Object
(RSTOBJ) command.

An SQL package must be restored to a collection having the same name as the
collection from which it was saved, and it cannot be renamed.
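
For example, an SQL package named PARTSPKG in collection SPIFFY (both names, and the
save file name, are examples only) could be saved and restored as follows:

  SAVOBJ OBJ(PARTSPKG) LIB(SPIFFY) OBJTYPE(*SQLPKG) +
         DEV(*SAVF) SAVF(BACKUP/PKGSAVF)
  RSTOBJ OBJ(PARTSPKG) SAVLIB(SPIFFY) OBJTYPE(*SQLPKG) +
         DEV(*SAVF) SAVF(BACKUP/PKGSAVF)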

Saving and Restoring Relational Database Directories


The relational database directory is not an AS/400 object. The relational database
directory is made up of files that are opened by the system at IPL time, so the
SAVOBJ command cannot be used to save these files directly. You can save the rela-
tional database directory by creating an output file from the relational database
directory data. This output file can then be used to add entries to the directory
again if it is damaged.

When entries have been added and you want to save the relational database direc-
tory, specify the OUTFILE parameter on the Display Relational Database Directory
Entry (DSPRDBDIRE) command to send the results of the command to an output
file. The output file can be saved to tape, diskette, or a save file and restored to the
system. If your relational database directory is damaged or your system needs to
be recovered, you can restore the output file containing relational database entry
data using a control language (CL) program. The CL program reads data from the
restored output file and creates the CL commands that add entries to a new rela-
tional database directory.

For example, the relational database directory for the Spiffy Corporation MP000
system is sent to an output file named RDBDIRM as follows:
DSPRDBDIRE OUTPUT(*OUTFILE) OUTFILE(RDBDIRM)

The sample CL program that follows reads the contents of the output file RDBDIRM
and recreates the relational database directory using the Add Relational Database
Directory Entry (ADDRDBDIRE) command. In this example (for systems running
versions prior to Version 4 Release 2), the old directory is cleared before the new
entries are made.
| /*----------------------------------------------------------------*/
| /*  Restore RDB Entries from output file created with:            */
| /*  DSPRDBDIRE OUTPUT(*OUTFILE) OUTFILE(RDBDIRM)                   */
| /*  for OS/400 V3R1 through V4R1                                   */
| /*----------------------------------------------------------------*/
| PGM
| DCLF FILE(RDBDIRM)
| RMVRDBDIRE RDB(*ALL)
| NEXTENT:
| RCVF
| MONMSG MSGID(CPF0864) EXEC(DO)
|   QSYS/RCVMSG PGMQ(*SAME (*)) MSGTYPE(*EXCP) RMV(*YES) MSGQ(*PGMQ)
|   GOTO CMDLBL(LASTENT)
| ENDDO
| IF (&RWRLOC = '*LOCAL') DO
|   QSYS/ADDRDBDIRE RDB(&RWRDB) RMTLOCNAME(&RWRLOC) TEXT(&RWTEXT)
| ENDDO
| ELSE IF (&RWRLOC = '*ARDPGM') DO
|   QSYS/ADDRDBDIRE RDB(&RWRDB) RMTLOCNAME(&RWRLOC) TEXT(&RWTEXT) +
|     ARDPGM(&RWDLIB/&RWDPGM)
| ENDDO
| ELSE IF (&RWDPGM *NE ' ') DO
|   QSYS/ADDRDBDIRE RDB(&RWRDB) RMTLOCNAME(&RWRLOC) TEXT(&RWTEXT) +
|     DEV(&RWDEV) LCLLOCNAME(&RWLLOC) RMTNETID(&RWNTID) +
|     MODE(&RWMODE) TNSPGM(&RWTPN) +
|     ARDPGM(&RWDLIB/&RWDPGM)
| ENDDO
| ELSE DO
|   QSYS/ADDRDBDIRE RDB(&RWRDB) RMTLOCNAME(&RWRLOC) TEXT(&RWTEXT) +
|     DEV(&RWDEV) LCLLOCNAME(&RWLLOC) RMTNETID(&RWNTID) +
|     MODE(&RWMODE) TNSPGM(&RWTPN)
| ENDDO
| GOTO CMDLBL(NEXTENT)
| LASTENT:
| RETURN
| ENDPGM

| The following example shows the same program for systems running Version 4
| Release 2 or later:

| /******************************************************************/
| /*  Restore RDB Entries from output file created with:            */
| /*  DSPRDBDIRE OUTPUT(*OUTFILE) OUTFILE(RDBDIRM)                   */
| /*  from a V4R2 or later level of OS/400                           */
| /******************************************************************/
| PGM
| DCLF FILE(RDBDIRM)             /* See prolog concerning this  */

| /* Declare Entry Type Variables to Compare with &RWTYPE */

| DCL &LOCAL  *CHAR 1
| DCL &SNA    *CHAR 1
| DCL &IP     *CHAR 1
| DCL &ARD    *CHAR 1
| DCL &ARDSNA *CHAR 1
| DCL &ARDIP  *CHAR 1

| /* Initialize Entry Type Variables to Assigned Values */

| CHGVAR &LOCAL  '0'             /* Local RDB (one per system)  */
| CHGVAR &SNA    '1'             /* APPC entry (no ARD pgm)     */
| CHGVAR &IP     '2'             /* TCP/IP entry (no ARD pgm)   */
| CHGVAR &ARD    '3'             /* ARD pgm w/o comm parms      */
| CHGVAR &ARDSNA '4'             /* ARD pgm with APPC parms     */
| CHGVAR &ARDIP  '5'             /* ARD pgm with TCP/IP parms   */

| RMVRDBDIRE RDB(*ALL)           /* Clear out directory         */

| NEXTENT:                       /* Start of processing loop    */

| RCVF                           /* Get a directory entry       */
| MONMSG MSGID(CPF0864) EXEC(DO) /* End of file processing      */
|   QSYS/RCVMSG PGMQ(*SAME (*)) MSGTYPE(*EXCP) RMV(*YES) MSGQ(*PGMQ)
|   GOTO CMDLBL(LASTENT)
| ENDDO

| /* Process entry based on type code */

| IF (&RWTYPE = &LOCAL) THEN( +
|   QSYS/ADDRDBDIRE RDB(&RWRDB) RMTLOCNAME(&RWRLOC) TEXT(&RWTEXT) )

| ELSE IF (&RWTYPE = &SNA) THEN( +
|   QSYS/ADDRDBDIRE RDB(&RWRDB) RMTLOCNAME(&RWRLOC) TEXT(&RWTEXT) +
|     DEV(&RWDEV) LCLLOCNAME(&RWLLOC) +
|     RMTNETID(&RWNTID) MODE(&RWMODE) TNSPGM(&RWTPN) )

| ELSE IF (&RWTYPE = &IP) THEN( +
|   QSYS/ADDRDBDIRE RDB(&RWRDB) RMTLOCNAME(&RWSLOC *IP) +
|     TEXT(&RWTEXT) PORT(&RWPORT) )

| ELSE IF (&RWTYPE = &ARD) THEN( +
|   QSYS/ADDRDBDIRE RDB(&RWRDB) RMTLOCNAME(&RWRLOC) TEXT(&RWTEXT) +
|     ARDPGM(&RWDLIB/&RWDPGM) )

| ELSE IF (&RWTYPE = &ARDSNA) THEN( +
|   QSYS/ADDRDBDIRE RDB(&RWRDB) RMTLOCNAME(&RWRLOC) TEXT(&RWTEXT) +
|     DEV(&RWDEV) LCLLOCNAME(&RWLLOC) +
|     RMTNETID(&RWNTID) MODE(&RWMODE) TNSPGM(&RWTPN) +
|     ARDPGM(&RWDLIB/&RWDPGM) )

| ELSE IF (&RWTYPE = &ARDIP) THEN( +
|   QSYS/ADDRDBDIRE RDB(&RWRDB) RMTLOCNAME(&RWSLOC *IP) +
|     TEXT(&RWTEXT) PORT(&RWPORT) +
|     ARDPGM(&RWDLIB/&RWDPGM) )

| GOTO CMDLBL(NEXTENT)

| LASTENT:
| RETURN
| ENDPGM

The files that make up the relational database directory are saved when a SAVSYS
command is run. The physical file that contains the relational database directory
can be restored from the save media to your library with the following Restore
Object (RSTOBJ) command:
RSTOBJ OBJ(QADBXRDBD) SAVLIB(QSYS)
       DEV(TAP01) OBJTYPE(*FILE)
       LABEL(Qpppppppvrmxx0003)
       RSTLIB(your lib)

| In this example, the relational database directory is restored from tape. The char-
| acters ppppppp in the LABEL parameter represent the product code of Operating
| System/400 (for example, 5769SS1 for Version 4 Release 2). The vrm in the
| LABEL parameter is the version, release, and modification level of OS/400. The xx
| in the LABEL parameter is the last two digits of the current system language value.
| For example, 2924 is for the English language; therefore, the value of xx is 24.

After you restore this file to your library, you can use the information in the file to
re-create the relational database directory.

Ensuring Data Availability


The more critical certain data is to your enterprise, the more ways you should have
for accessing that data. This means you might also consider aspects of network
redundancy as well as data redundancy when planning your strategy to ensure the
optimum availability of data across your network.

Network Redundancy Issues


Network redundancy provides different ways for users on the distributed relational
database network to access a relational database on the network. If there is only
one communications path from an AR to an AS, when the communications line is
down, users on the AR do not have access to the AS relational database. For this
reason network redundancy issues are important to the distributed relational data-
base administrator for the Spiffy Corporation.

For example, consider service booking or customer parts purchasing issues for a
dealership. When a customer is waiting for service or to purchase a part, the
service clerk needs access to all authorized tables of enterprise information to
schedule work or sell parts.

If the local system is down, no work can be done. If the local system is running but
a request to a remote system is needed to process work and the remote system is
down, the request cannot be handled. In the Spiffy Corporation example, this might
mean a dealership cannot request parts information from a regional inventory
center. Also, if an AS that handles many AR jobs is down, none of the ARs can
complete their requests. In the case of the Spiffy Corporation network, if a regional
center is down, none of the application servers it supports can order parts.

Providing the region’s dealerships with access to regional inventory data is impor-
tant to the Spiffy Corporation distributed relational database administrator. Providing
paths through the network to data can be addressed several ways. The original
network configuration for the Spiffy Corporation linked the end node dealerships to
their respective network node regional centers.

Figure 7-2. Alternative Network Paths (network diagram: regional centers MP000 and
KC000 with switched connections to dealership systems MP101, MP110, MP201, KC101,
KC105, KC201, and KC310)

An alternative for some dealerships might be a switched-line connection to a dif-
ferent regional center. If the local regional center is unavailable to the network,
access to another AS allows the requesting dealership to obtain information needed
to do their work. In Figure 7-2, some ARs served by the MP000 system establish
links to the KC000 system, which can be used whenever the MP000 system is una-
vailable. The Vary Configuration (VRYCFG) or Work With Configuration Status
(WRKCFGSTS) commands can be used by a system operator or distributed rela-
tional database administrator to vary the line on when needed and vary the line off
when the primary AS is available.
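
For example, an operator could vary the switched line to the alternate regional
center on and off with commands like the following; the line description name
KC000LIN is an example only:

  VRYCFG CFGOBJ(KC000LIN) CFGTYPE(*LIN) STATUS(*ON)
  VRYCFG CFGOBJ(KC000LIN) CFGTYPE(*LIN) STATUS(*OFF)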

Another alternative could be if one of the larger area dealerships also acted as an
AS for other dealerships. As shown in Figure 7-3 on page 7-16, an end node is
only an AS to other end nodes through its network node. In Figure 7-2, if the link to
Minneapolis is down, none of the dealerships could query another (end node) for
inventory. The configuration illustrated above could be changed so that one of the
dealerships is configured as an APPN network node, and lines to that dealership
from other area dealerships are set up.

Figure 7-3. Alternate Application Server (diagram: the MP000 network node with
dealership systems MP101, MP110, and MP201)

Data Redundancy in Your Network


Data redundancy in a distributed relational database also provides different ways
for users on the distributed relational database network to access a database on
the network. The issues a distributed relational database administrator examines to
create a data redundancy strategy are more complex than ensuring communi-
cations paths are available to the data. Tables can be replicated across systems in
the network, or a snapshot of data can be used to provide data availability. The
DataPropagator Relational Capture and Apply/400 product can provide this capa-
bility.

The figure below shows that a copy of the MP000 system distributed relational
database can be stored on the KC000 system, and a copy of the KC000 system
distributed relational database can be stored on the MP000 system. The ARs from
one region can link to the other AS to query or to update a replicated copy of their
relational database.

Figure 7-4. Data Redundancy Example (diagram: a copy of the KC000 database is kept
on the MP000 system and a copy of the MP000 database is kept on the KC000 system;
dealerships from each region can connect to either system)

The administrator must decide what is the most efficient, effective strategy to allow
distributed relational database processing. Alternative strategies might include these
scenarios.

One alternative may be that when MP000 is unavailable, its ARs connect to the
KC000 system to query a read-only snapshot of the MP000 distributed relational
database so service work can be scheduled.

DataPropagator Relational/400 can provide a read-only copy (or snapshot) of the
tables to a remote system on a regular basis. For the Spiffy Corporation, this might
be at the end or the beginning of each business day. In this example, the MP000
database snapshot provides a 24-hour-old, last-point-in-time picture for dealerships
to use for scheduling only. When the MP000 system is back on line, its ARs query
the MP000 distributed relational database to completely process inventory requests
or other work queried on the snapshot.

Another alternative may be that Spiffy Corporation wants dealership users to be
able to update a replicated table at another AS when their regional AS is unavail-
able.

For example, an AR that normally connects to the MP000 database could connect
to a replicated MP000 database on the KC000 system to process work. When the
MP000 system is available again, the MP000 relational database can be updated
by applying journal entries from activity originating in its replicated tables at the
KC000 location. When these journal entries have been applied to the original
MP000 tables, distributed relational database users can access the MP000 as an
AS again.

Journal management processes on each regional system update all relational data-
bases. The amount of journal management copy activity in this situation should be
examined because of potential adverse performance effects at these systems.

Chapter 8. Distributed Relational Database Performance
The performance of your distributed relational database depends on the design of
your network, the system, and your database.

Improving Performance Through the Network


| You can improve the performance of your network in various ways. Among them
| are the following:
| Ÿ Line speed
| Ÿ Connection type (nonswitched versus switched)
| Ÿ Frame size
| Ÿ RU sizing
| Ÿ Pacing

| For details, see the Communications Management book. See the APPN
| Support book for information about RU sizing and pacing. For a discussion of other
| communications-related performance considerations, see the TCP/IP Configuration
| and Reference book.

Unprotected Conversations
| Unprotected conversations are used for DRDA connections when the connection is
| performed from a program using RUW connection management or if the program
| making the connection is not running under commitment control, or if the database
| to which the connection is made does not support two-phase commit for the pro-
| tocol that is being used. If the characteristics of the data are such that the trans-
| action only affects one database management system, establishing the connection
| from a program using RUW connection management or from a program running
| without commitment control can avoid the overhead associated with two-phase
| commit flows. Additionally, when conversations are kept active with
| DDMCNV(*KEEP) and those conversations are protected conversations, two-phase
| commit flows are sent regardless of whether the conversation was used for DRDA
| or DDM processing during the unit of work. Therefore, when running with
| DDMCNV(*KEEP), it is better to run with unprotected conversations if possible. If
| running with protected conversations, you should run with DDMCNV(*DROP) and
| use the RELEASE statement to end the connection and the conversation at the
| next commit when the conversation will not be used in future units of work.
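
For example, a program running with DDMCNV(*DROP) that will not use its connection
to the relational database in later units of work could end both the connection and
its conversation at the next commit with the following SQL statements; the relational
database name KC000 is an example only:

  RELEASE KC000
  COMMIT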

Improving Performance Through the System


Achieving efficient system performance requires a proper balance among system
resources. Overusing any resource adversely affects performance.

Observing System Performance
This section describes the system commands that are available to help you
observe the performance of your system.

| You can use the AS/400 Performance Tools licensed program to help analyze your
| performance. In addition, there are some system commands available to help you
| observe the performance of your system: WRKSYSSTS, WRKDSKSTS, and
| WRKACTJOB. In using them, you should observe system performance during
| typical levels of activity. For example, statistics gathered when no jobs are running
| on the system are of little value in assessing system performance. To observe the
| system performance, complete the following steps:
| 1. Enter the WRKSYSSTS, WRKDSKSTS, or WRKACTJOB command.
| 2. Allow the system to collect data for a minimum of 5 minutes.
| 3. Press F5 (Refresh) to refresh the display and present the performance data.
| 4. Tune your system based on the new data.

Press F10 (Restart) to restart the elapsed time counter.

| See the chapter on performance tuning in the Work Management book for details
| on how to work with system status and disk status.

| One of the functions of the WRKACTJOB command discussed earlier is to measure
| system performance. The Work with Active Jobs display is shown in “Working with
| Active Jobs” on page 6-3.

| Use both the WRKSYSSTS and the WRKACTJOB commands when observing the
| performance of your system. With each observation period, you should examine
| and evaluate the measures of system performance against the goals you have set.

| Some of the typical measures include:


| Ÿ Interactive throughput and response time, available from the WRKACTJOB
| display.
| Ÿ Batch throughput. Observe the AuxIO and CPU% values for active batch jobs.
| Ÿ Spool throughput. Observe the AuxIO and CPU% values for active writers.
| Each time you make tuning adjustments, you should measure and compare all of
| your main performance measures. Make and evaluate adjustments one at a time.

Improving Performance Through the Database


Distributed relational database performance is affected by the overall design of the
database as mentioned in Chapter 2, “Planning and Design for Distributed Rela-
tional Database” on page 2-1. Where you locate distributed data, the level of com-
mitment control you use, and the design of your SQL indexes all affect
performance.

Deciding Data Location
Because putting a network between an application and the data it needs will prob-
ably slow performance, consider the following when deciding where to put data:
Ÿ Transactions that use the data
Ÿ How often the transactions are performed
Ÿ How much data the transactions send or receive

If an application involves transactions that run frequently or that send or receive a
lot of data, you should try to keep it in the same location as the data. For example,
an application that runs many times a second or that receives hundreds of rows of
data at a time will have better performance if the application and data are on the
same system. Conversely, consider placing data in a different location than the
application that needs it if the application includes low-use transactions or trans-
actions that send or receive only moderate amounts of data at a time.

Factors that Affect Blocking for DRDA


A very important performance factor is whether blocking occurs when data is trans-
ferred between the application requestor (AR) and the application server (AS). A
group of rows transmitted as a block of data requires much less communications
overhead than the same data sent one row at a time. One way to control blocking
when connected to another AS/400 system is to use the SQL multiple-row INSERT
and multiple-row FETCH statements in Version 2 Release 2 and later versions of
the OS/400 operating system. The multiple-row FETCH forces the blocking of the
number of rows specified in the FOR n ROWS clause, unless a hard error or end of
data is encountered. The following discussion gives rules for determining if blocking
will occur for single-row FETCHs.

Conditions that inhibit the blocking of query data between the AR and the AS are
also listed in the following discussion. These conditions do not apply to the use of
the multiple-row FETCH statement. Any condition listed under each of the following
cases is sufficient to prevent blocking from occurring.

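As noted above, a multiple-row FETCH forces blocking regardless of these conditions.
For example, embedded SQL similar to the following (shown without the EXEC SQL
delimiters required by the host language, and using example table and host structure
array names) returns rows in blocks of 20:

  DECLARE C1 CURSOR FOR
    SELECT PARTNO, PRICE
      FROM SPIFFY.INVENTORY
  OPEN C1
  FETCH C1 FOR 20 ROWS INTO :PARTLIST
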
Case 1: DB2 for AS/400 to DB2 for AS/400


Blocking will not occur if:
Ÿ The cursor is updatable (see Note 1).
Ÿ The cursor is potentially updatable (see Note 2).
Ÿ The ALWBLK(*NONE) precompile option was used.
Ÿ The commitment control level is *CS and the level of OS/400 is earlier than
Version 3 Release 1.
Ÿ The commitment control level is *ALL and the outer subselect does not contain
one of the following:
– The DISTINCT keyword
– The UNION operator
– An ORDER BY clause and the sum of the lengths of the fields in the clause
requires a sort
– A reference to a system database file (system database files are those in
library QSYS named QADBxxxx, and any views built over those files)

Ÿ The row size is greater than approximately 2K or, if the SBMRMTCMD
command or a stored procedure was used to extend the size of the default AS
database buffer, the row size is greater than approximately half of the size of
the database buffer resulting from specification of the OVRDBF SEQONLY
number-of-records parameter. (Note that for the OVRDBF command to work
remotely, OVRSCOPE(*JOB) must be specified.)
Ÿ The cursor is declared to be scrollable (DECLARE...SCROLL CURSOR...) and
a scroll option specified in a FETCH statement is one of the following: RELA-
TIVE, PRIOR, or CURRENT (unless a multiple-row FETCH was done, as men-
tioned above.)

Case 2: DB2 for AS/400 to Non-DB2 for AS/400


Blocking will not occur if:
Ÿ The cursor is updatable (see Note 1).
Ÿ The cursor is potentially updatable (see Note 2).
Ÿ The ALWBLK(*NONE) precompile option is used.
Ÿ The row size is greater than approximately 16K.

Case 3: Non-DB2 for AS/400 to DB2 for AS/400


Blocking will not occur if:
Ÿ The cursor is updatable (see Note 1).
Ÿ The cursor is potentially updatable (see Note 2).
Ÿ A precompile or bind option is used that caused the package default value to
be force-single-row protocol.
– For DB2, there is no option to do this.
– For SQL/DS, this is the NOBLOCK keyword on SQLPREP (the default).
– For DB2/2, this is /K=NO on SQLPREP or SQLBIND.
Ÿ The row size is greater than approximately 0.5*QRYBLKSIZ. (The default
QRYBLKSIZ values for DB2, SQL/DS, and DB2/2 are 32K, 8K, and 4K, respec-
tively.)

Summarization of rules
In summary, what these rules (including the notes) say is that in the absence of
certain special or unusual conditions, blocking will occur in both of the following
cases:
Ÿ It will occur if the cursor is read-only (see Note 3) and if:
– Either the application requester or application server is a non-DB2/400.
– Both the application requester and application server are DB2/400s and
ALWBLK(*ALLREAD) is specified and COMMIT(*ALL) is not specified.
Ÿ It will occur if COMMIT(*ALL) was not specified and all of the following are also
true:
– There is no FOR UPDATE OF clause in the SELECT, and
– There are no UPDATE or DELETE WHERE CURRENT OF statements
against the cursor in the program, and

– Either the program does not contain dynamic SQL statements or a
precompile/bind option was used to request limited-block protocol (/K=ALL
with DB2/2 or DB2/6000, ALWBLK(*ALLREAD) with DB2/400,
CURRENTDATA(NO) with DB2, SBLOCK with SQL/DS).

Notes:
1. A cursor is updatable if it is not read-only (see Note 3), and one of the following
is true:
Ÿ The select statement contained the FOR UPDATE OF clause, or
Ÿ There exists in the program an UPDATE or DELETE WHERE CURRENT
OF against the cursor.
2. A cursor is potentially updatable if it is not read-only (see Note 3), and if the
program includes an EXECUTE or EXECUTE IMMEDIATE statement (or when
connected to a non-AS/400 system, any dynamic statement), and a precompile
or bind option is used that caused the package default value to be single-row
protocol.
Ÿ For DB2/400, this is the ALWBLK(*READ) precompile option (the default).
Ÿ For DB2, this is CURRENTDATA(YES) on BIND PACKAGE (the default).
Ÿ For SQL/DS, this is the SBLOCK keyword on SQLPREP.
Ÿ For DB2/2, this is /K=UNAMBIG on SQLPREP or SQLBIND (the default).
3. A cursor is read-only if one or more of the following conditions are true:
Ÿ The DECLARE CURSOR statement specified an ORDER BY clause but
did not specify a FOR UPDATE OF clause.
Ÿ The DECLARE CURSOR statement specified a FOR FETCH ONLY clause.
Ÿ The DECLARE CURSOR statement specified the SCROLL keyword without
DYNAMIC (OS/400 only).
Ÿ One or more of the following conditions are true for the cursor or a view or
logical file referenced in the outer subselect to which the cursor refers:
– The outer subselect contains a DISTINCT keyword, GROUP BY clause,
HAVING clause, or a column function in the outer subselect.
– The select contains a join function.
– The select contains a UNION operator.
– The select contains a subquery that refers to the same table as the
table of the outer-most subselect.
– The select contains a complex logical file that had to be copied to a
temporary file.
– All of the selected columns are expressions, scalar functions, or con-
stants.
– All of the columns of a referenced logical file are input only (OS/400
only).

Factors That Affect the Size of Query Blocks
If a large amount of data is being returned on a query, performance may be
improved by increasing the size of the block of query data. The way that this is
done depends upon the types of systems participating in the query. In an unlike
environment, the size of the query block is determined at the application requester
by a parameter sent with the Open Query command. When an AS/400 system is
the AR, it always requests a query block size of 32K. Other types of ARs give the
user a choice of what block size to use. The default query block sizes for DB2,
SQL/DS, and DB2/2 are 32K, 8K, and 4K, respectively. See the product documen-
tation for the platform being used as an AR when a DB2 for AS/400 server is con-
nected to an unlike AR.

In the DB2 for AS/400 to DB2 for AS/400 environment, the query block size is
determined by the size of the buffer used by the database manager. The default
size is 4K. This can be changed on application servers that are at the Version 2,
Release 3 or higher level. In order to do this, use the SBMRMTCMD CL command
to send and execute an OVRDBF command on the AS. Besides the name of the
file being overridden, the OVRDBF command should contain OVRSCOPE(*JOB)
and SEQONLY(*YES nnn). The number of records desired per block replaces nnn
in the SEQONLY parameter. Increasing the size of the database buffer not only can
reduce communications overhead, but can also reduce the number of calls to the
database manager to retrieve the rows.
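
For example, assuming a DDM file named DDMINV in library SPIFFY that points to the
AS, and a file named INVENTORY being queried there (all of these names and the
record count are examples only), the following command requests blocks of 100
records:

  SBMRMTCMD CMD('OVRDBF FILE(INVENTORY) OVRSCOPE(*JOB) SEQONLY(*YES 100)') +
            DDMFILE(SPIFFY/DDMINV)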

You can also change the query block size using an SQL CALL statement (a stored
procedure) from non-AS/400 systems or between AS/400 systems.

Chapter 9. Handling Distributed Relational Database
Problems
When a problem occurs accessing a distributed relational database, it is the job of
the administrator to:
Ÿ Determine the nature of the problem, and
Ÿ Determine if it is a problem with the application or a problem with the local or
remote system.

You must then resolve the problem or obtain customer support assistance to
resolve the problem. To do this, you need:
Ÿ An understanding of the OS/400 program support.
Ÿ A good idea of how to decide if a problem is on an application requester (AR)
or an application server (AS).
Ÿ Familiarity with using OS/400 problem management functions.

This chapter provides a distributed relational database problem handling overview.


It describes how to isolate distributed relational database problems, how to work
with users to look at displays and messages, and how to look at problems for an
application that has failed. It also presents information on using the job log and
AS/400 system alerts to manage problems. Finally, this chapter provides informa-
tion on obtaining data to report a failure and help diagnose a problem.

For more information about diagnosing problems in a distributed relational database,
see the Distributed Relational Database Problem Determination Guide.

AS/400 Problem Handling Overview


The OS/400 program helps you manage problems for both user- and system-
detected problems that occur on local and remote AS/400 systems. Problem han-
dling support includes:
Ÿ Messages with initial problem handling information
Ÿ Automatic alerting of system-detected problems
Ÿ Alert management focal point capability
Ÿ Integrated problem logging and tracking
Ÿ First failure data capture (FFDC) support
Ÿ Electronic customer support service requisition
Ÿ Electronic customer support, program temporary fix (PTF) requisition

The AS/400 system and its attached devices are able to detect some types of prob-
lems. These are called system-detected problems. When a problem is detected,
several operations take place:
Ÿ An entry in the Product Activity Log is created
Ÿ A problem record is created
Ÿ A message is sent to the QSYSOPR message queue

Ÿ An alert may be created

Information is recorded in the error log and the problem record. The alert is then
sent to the service provider if the service provider is either an alert focal point or
the network node server for the system with the problem. When some alerts are
sent, a spooled file of FFDC information is also created. The error log and the
problem record may contain the following information:
Ÿ Vital product data
Ÿ Configuration information
Ÿ Reference code
Ÿ The name of the associated device
Ÿ Additional failure information

User-detected problems are usually related to program errors that can cause any
of the following problems to occur:
Ÿ Job problems
Ÿ Incorrect output
Ÿ Messages indicating a program failure
Ÿ Device failure not detected by the system
Ÿ Poor performance

When a user detects a problem, no information is gathered by the system until
problem analysis is run or you select the option to save information to help resolve
a problem from the Operational Assistant* USERHELP menu.

The AS/400 system tracks both user- and system-detected problems using the
problem log and problem manager. A problem state is maintained from when a
problem is detected (OPENED) to when it is resolved (CLOSED) to assist you with
tracking. Alert and alert management capabilities extend the problem management
support to include problems occurring on other AS/400 systems in a distributed
relational database network. For more information, see “AS/400 Problem Log” on
page 9-21.

Isolating Distributed Relational Database Problems


A problem you encounter when running a distributed relational database application
can exhibit two general symptoms:
Ÿ The user receives incorrect output
Ÿ The application does not complete in the expected time
The diagrams and procedures below show generally how you can classify problems
as application program problems, performance related problems, and system
related problems, so you can use standard AS/400 system problem analysis
methods to resolve the problem.

Incorrect Output
If you receive an error message, use the error message, SQLCODE, or SQLSTATE
to determine the cause of the problem. See Figure 9-1. The message description
indicates what the problem is and provides corrective actions. If you do not receive
an error message, you must determine whether distributed relational database is
causing the failure. To do this, run the failing statement locally on the AS or use
interactive Structured Query Language (SQL) to run the statement on the AS. If you
can create the problem locally, the problem is not with distributed relational data-
base support. Use AS/400 problem analysis methods to provide specific information
for your support staff depending on the results of this operation.

Figure 9-1. Resolving Incorrect Output Problem (flowchart summarizing the procedure
described above)

Application Does Not Complete in the Expected Time


If the request takes longer than expected to complete, the first place to check is at
the AR. Check the job log for message SQL7969 which indicates that a connect to
a relational database is complete. This tells you the application is a distributed rela-
tional database application. Check the AR for a loop by using the Work with Job
(WRKJOB) command to display the program stack, and check the program stack to
determine whether the system is looping. See Figure 9-2 on page 9-4. If the appli-
cation itself is looping, contact the application programmer for resolution. If you see
QAPDEQUE and QCNSRCV on the stack, the AR is waiting for the AS. See
Figure 9-3 on page 9-5. If the system is not in a communications wait state, use
problem analysis procedures to show whether there is a performance problem or a
wait state somewhere else.

Figure 9-2. Resolving Wait, Loop, or Performance Problems on the Application
Requester (flowchart summarizing the procedure described above)

| You can find the AR job name by looking at the job log on the AS. For more infor-
| mation about finding jobs on the AS, see “Locating Distributed Relational Database
| Jobs” on page 6-6. When you need to check the AS job, use the Work with Job
| (WRKJOB), Work with Active Job (WRKACTJOB), or Work with User Job
| (WRKUSRJOB) commands to locate the job on the AS. For information on using
| these commands, see “Working with Jobs” on page 6-1, “Working with User Jobs”
| on page 6-2, and “Working with Active Jobs” on page 6-3. From one of these job
| displays, look at the program stack to see if the AS is looping. If it is looping, use
| problem analysis to handle the problem. If it is not looping, check the program
| stack for WAIT with QCNTRCV, which means the AS is waiting for the AR. If both
| systems are in this communications wait state, there is a problem with your
| network. If the AS is not in a wait state, there is a performance issue that may have
| to be addressed.
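
For example, commands like the following could be used to locate the AS job for a
given user and then look at its program stack; the user name and the qualified job
name (a DDM TCP/IP server job in this case) are examples only:

  WRKUSRJOB USER(KCCLERK) STATUS(*ACTIVE)
  WRKJOB JOB(123456/KCCLERK/QRWTSRVR) OPTION(*PGMSTK)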

Two common sources of slow query performance are:


Ÿ An accessed table does not have an index. If this is the case, create an index
using an appropriate field or fields as the key.
Ÿ The rows returned on a query request are not blocked. Whether the rows are
blocked can cause a significant difference in query performance. It is important
to understand the factors that affect blocking, and tune the application to take
advantage of it. For more information, see “Factors that Affect Blocking for
DRDA” on page 8-3.

| The first time you connect to DB2 for AS/400 from a PC using a product like DB2
| Connect, if you have not already created the SQL packages for the product in DB2
| for AS/400, the packages will be created automatically, and the NULLID collection
| may need to be created automatically as well. This can take a long time and give
| the appearance of a performance problem. However, it should be just a one-time
| occurrence.

| A long delay will occur if the system to which you are trying to connect over TCP/IP
| is not available. A several-minute timeout delay will precede the message A remote
| host did not respond within the timeout period. An incorrect IP address in the
| RDB directory will cause this behavior as well.

Figure 9-3. Resolving Wait, Loop, or Performance Problems on the Application Server
(flowchart summarizing the procedure described above)

Working with Users
Investigating a problem usually begins with the user. Users may not be getting the
results they expect when running a program or they may get a message indicating
a problem. Sometimes the best way to diagnose and solve a problem is to step
through the procedure with a user. The copy screen function allows you to do this
either in real time with the user or in examining a file of the displays the user saw
previously.

You can also gather more information from a message than just the line of text that
appears at the bottom of a display. This section discusses how you can copy dis-
plays being viewed by another user and how you can obtain more information
about messages you or a user receive when doing distributed relational database
work.

Copy Screen
The Start Copy Screen (STRCPYSCN) command allows you to be signed on to
your work station and see the same displays being viewed by someone else at
another work station. You must be signed on to the same AS/400 system as the
user. If that user is on a remote system, you can use display station pass-through
to sign on that system and then enter the STRCPYSCN command to see the other
displays. Screen images can be copied to a database file at the same time they are
copied to another work station or when another work station cannot be used. This
allows you to process this data later and prepares an audit trail for the operations
that occur during a problem situation.

To copy the display image to another display station the following requirements
must be met:
Ÿ Both displays are defined to the system
Ÿ Both displays are color or both are monochrome, but not one color and the
other monochrome
Ÿ Both displays have the same number of character positions horizontally and
vertically

If you type your own display station ID as the sending device, the receiving display
station must have the sign on display shown when you start copying screen
images. Graphics are copied as blanks.

If not already signed on to the same system, use the following process to see the
displays that another user sees on a remote system:
1. Enter the Start Pass-Through (STRPASTHR) command.
STRPASTHR RMTLOCNAME(KC105)
2. Log on to the target system.
3. Enter the STRCPYSCN command.
STRCPYSCN SRCDEV(KC105)
          OUTDEV(*REQUESTER)
          OUTFILE(KCHELP/TEST)
Ÿ SRCDEV specifies the name of the source device, the display station that
is sending the display image. To send your own display to another device,
enter the *REQUESTER value for this parameter.

Ÿ OUTDEV specifies the name of the output device to which the display
image is sent. In this example the display image is sent to the display
station of the person who enters the command (*REQUESTER). You can
also name another display station, another device (where a third user is
viewing), or to no other device (*NONE). When the *NONE value is used,
specify an output file for the display images.
Ÿ OUTFILE specifies the name of the output file that will contain an image of
all the displays viewed while the command is active.
4. An inquiry message is sent to the source device to notify the user of that
device that the displays will be copied to another device or file. Type a g (Go)
to start sending the images to the requesting device.

The sending display station’s screens are copied to the other display station. The
image shown at the receiving display station trails the sending display station by
one screen. If the user at the sending display station presses a key that is not
active (such as the Home key), both display stations will show the same display.

While you are copying screens, the operator of the receiving display station cannot
do any other work at that display station until the copying of screens is ended.

To end the copy screen function from the sending display station, enter the End
Copy Screen (ENDCPYSCN) command from any command line and press the
Enter key.
ENDCPYSCN

The display you viewed when you started the copy screen function is shown.

Messages
The AS/400 system sends a variety of system messages that indicate conditions
ranging from simple typing errors to problems with system devices or programs.
The message may be one of the following:
Ÿ An error message on your current display.
These messages can interrupt your job or sound an alarm. You can display
these messages by typing DSPMSG on any command line.
Ÿ A message regarding a system problem that is sent to the system operator
message queue and displayed on a separate Work with Messages display.
To see these messages, type DSPMSG QSYSOPR on any system command line.
Ÿ A message regarding a system problem that is sent to the message queue
specified in a device description.
To see these messages, type DSPMSG message-queue-name on any system
command line.
Ÿ A message regarding a system problem that is sent to another system in the
network.
These messages are called alerts. See “Alerts” on page 9-23 for how to view
and work with alerts.

The system sends informational or inquiry messages for certain system events.
Informational messages give you status on what the system is doing. Inquiry
messages give you information about the system, but also request a reply.

In some message displays a message is accompanied by a letter and number code
such as:
CPF0083
The first two or three letters indicate the message category. Some message cate-
gories for distributed relational database are:

Figure 9-4. Message Categories

Category          Description                                 Library
CPA through CPZ   Messages from the operating system          QSYS/QCPFMSG
MCH               Licensed internal code messages             QSYS/QCPFMSG
SQ and SQL        Structured Query Language (SQL) messages    QSYS/QSQLMSG
TCP               TCP/IP messages                             QTCP/QTCPMSGF

The remaining four digits (five digits if the prefix is SQ) indicate the sequence
number of the message. The example message ID shown indicates this is a
message from the operating system, number 0083.

To obtain more information about a message on the message line of a display or in
a message queue, do the following:
1. Move the cursor to the same line as the message.
2. Press the Help key. The Additional Message Information display is shown.

 Message ID . . . . . . :   CPD6A64       Severity . . . . . . :   30
 Message type . . . . . :   DIAGNOSTIC
 Date sent  . . . . . . :   03/29/92      Time sent  . . . . . :   13:49:06
 From program . . . . . :   QUIACT        Instruction  . . . . :   080D
 To program . . . . . . :   QUIMGFLW      Instruction  . . . . :   03C5

 Message . . . . :   Specified menu selection is not correct.
 Cause . . . . . :   The selection that you have specified is not correct for
   one of the following reasons:
   -- The number selected was not valid.
   -- Something other than a menu option was entered on the option line.
 Recovery  . . . :   Select a valid option and press the Enter or Help key
   again.

                                                                        Bottom
 Press Enter to continue.

 F3=Exit   F6=Print   F9=Display message details
 F10=Display messages in job log   F12=Cancel   F21=Select assistance level
You can get more information about a message that is not showing on your display
if you know the message identifier and the library in which it is located. To get this
information enter the Display Message Description (DSPMSGD) command:
DSPMSGD RANGE(SQL0204) MSGF(QSYS/QSQLMSG)

This command produces a display that allows you to select the following informa-
tion about a message:

Ÿ Message text
Ÿ Field data
Ÿ Message attributes
Ÿ All of the above

The text is the same message and message help text that you see on the Addi-
tional Message Information display. The field data is a list of all the substitution
variables defined for the message and their attributes. The message attributes are
the values (when defined) for severity, logging, level of message, alert, default
program, default reply, and dump parameters. You can use this information to help
you determine what the user was doing when the message appeared.

Message Types
On the Additional Message Information display you see the message type and
severity code for the message. Figure 9-5 shows the different message types for
AS/400 messages and their associated severity codes:

Figure 9-5. Message Severity Codes

Severity Code   Message Type
00              Informational messages. For informational purposes only; no reply
                is needed. The message can indicate that a function is in progress
                or that a function has completed successfully.
10              Warning. A potential error condition exists. The program may have
                taken a default, such as supplying missing data. The results of the
                operation are assumed to be successful.
20              Error. An error has been found, but it is one for which automatic
                recovery procedures probably were applied; processing has continued.
                A default may have been taken to replace the wrong data. The results
                of the operation may not be correct. The function may not have
                completed; for example, some items in a list ran correctly, while
                other items did not.
30              Severe error. The error found is too severe for automatic recovery
                procedures and no defaults are possible. If the error was in the
                source data, the entire data record was skipped. If the error
                occurred during a program, it leads to an abnormal end of program
                (severity 40). The results of the operation are not correct.
40              Severe error: abnormal end of program or function. The operation
                has ended, possibly because the program was not able to handle data
                that was not correct or because the user canceled it.
50              Abnormal end of job or program. The job was not started or failed
                to start, a job-level function may not have been done as required,
                or the job may have been canceled.
60              System status. Issued only to the system operator message queue.
                It gives either the status of or a warning about a device, a
                subsystem, or the system.
70              Device integrity. Issued only to the system operator message queue,
                indicating that a device is not working correctly or is in some way
                no longer operational.
80              System alert and user messages. A condition exists that, although
                not severe enough to stop the system now, could become more severe
                unless preventive measures are taken.
90              System integrity. Issued only to the system operator message queue.
                Describes a condition where either a subsystem or system cannot
                operate.
99              Action. Some manual action is required, such as entering a reply
                or changing printer forms.

Distributed Relational Database Messages


If an error message occurs at either an AS or an AR, the system message is
logged on the job log to indicate the reason for the failure. See “Using the Job
Log” on page 6-5 for information on how to use a job log and locate one on an AS.

A system message exists for each SQLCODE returned from an SQL statement
supported by the DB2/400 program. The message is made available in precompiler
listings, on interactive SQL, or in the job log when running in the debug mode.
However, when you are working with an AS that is not an AS/400 system, there
may not be a specific message for every error condition in the following cases:
Ÿ The error is associated with a function not used by the AS/400 system.
For example, the special register CURRENT SQLID is not supported by
DB2/400, so SQLCODE -411 (SQLSTATE 56040) “CURRENT SQLID cannot
be used in a statement that references remote objects” does not exist.
Ÿ The error is product-specific and will never occur when using DB2/400.
DB2/400 will never have SQLCODE -925 (SQLSTATE 56021), “SQL commit or
rollback is invalid in an IMS or CICS environment.”

For SQLCODEs that do not have corresponding messages, a generic message is
returned that identifies the unrecognized SQLCODE, SQLSTATE, and tokens,
along with the relational database name of the AS which generated the message.
To determine the specific condition and how to interpret the tokens, consult the
product documentation corresponding to the particular release of the connected AS.

For more information on SQLCODEs, see “SQLCODEs and SQLSTATEs” on
page 9-17.

Messages in the ranges CPx3E00 through CPx3EFF and CPI9100 through
CPI91FF are used for distributed relational database system messages. The fol-
lowing list is not inclusive, but shows more common system messages you may
see in a distributed database job log on an AS/400 system. See the DB2 for
AS/400 SQL Programming book for a list of SQL messages for distributed relational
database.

| Figure 9-6. Distributed Relational Database Messages

| MSG ID Description
| CPA3E01 Attempt to delete *LOCAL RDB directory entry
| CPC3EC5 Some parameters for RDB directory entry ignored
| CPD3E30 Conflicting remote network ID specified
| CPD3E35 Structure of remote location name not valid for ...
| CPD3E36 Port identification is not valid
| CPD3E38 Type conflict for remote location
| CPD3E39 Value &3 for parameter &2 not allowed
| CPD3E3B Error occurred retrieving server authorization information for ...
| CPD3ECA RDB directory operation may not have completed
| CPD3E01 DBCS or MBCS CCSID not supported.
| CPD3E03 Local RDB name not in RDB directory
| CPD3E05 DDM conversation path not found
| CPD3E31 DDM TCP/IP server is not active
| CPD3E32 Error occurred ending DDM TCP/IP server
| CPD3E33 DDM TCP/IP server error occurred with reason code ...
| CPD3E34 DDM TCP/IP server communications error occurred
| CPD3E37 DDM TCP/IP get host by name failure
| CPF3E30 Errors occurred starting DDM TCP/IP server
| CPF3E31 * Unable to start DDM TCP/IP server
| CPF3EC6 Change DDM TCP/IP attributes failed
| CPF3EC9 Scope message for interrupt RDB
| CPF3E0A Resource limits error
| CPF3E0B Query not open
| CPF3E0C FDOCA LID limit reached
| CPF3E0D Interrupt not supported
| CPF3E01 DDM parameter value not supported
| CPF3E02 AR cannot support operations
| CPF3E04 SBCS CCSID not supported
| CPF3E05 Package binding not active
| CPF3E06 RDB not found
| CPF3E07 Package binding process active

| CPF3E08 Open query failure
| CPF3E09 Begin bind error
| CPF3E10 AS does not support DBCS or MC
| CPF3E12 Commit/rollback HOLD not supported
| CPF3E13 Commitment control operation failed
| CPF3E14 End RDB Request failed
| CPF3E16 Not authorized to RDB
| CPF3E17 End RDB request is in progress
| CPF3E18 COMMIT/ROLLBACK with SQLCA
| CPF3E19 Commitment control operation failed
| CPF3E20 DDM conversation path not found
| CPF3E21 RDB interrupt fails
| CPF3E22 Commit resulted in a rollback at the application server
| CPF3E23 * DDM data stream violates conversation capabilities
| CPF3E30 * Errors occurred starting DDM TCP/IP server
| CPF3E32 * Server error occurred processing client request
| CPF3E80 * Data stream syntax error
| CPF3E81 * Invalid FDOCA descriptor
| CPF3E82 * ACCRDB sent twice
| CPF3E83 * Data mismatch error
| CPF3E84 * DDM conversational protocol error
| CPF3E85 * RDB not accessed
| CPF3E86 * Unexpected condition
| CPF3E87 * Permanent agent error
| CPF3E88 * Query already open
| CPF3E89 * Query not open
| CPF3E99 End RDB request has occurred
| CPI9150 DDM job started
| CPI9152 Target DDM job started by source system
| CPI9160 DDM connection started over TCP/IP
| CPI9161 DDM TCP/IP connection ended
| CPI9162 Target job assigned to handle DDM connection started
| CPI9190 Authorization failure on distributed database
| CPI3E01 Local RDB accessed successfully
| CPI3E02 Local RDB disconnected successfully
| CPI3E04 Connection to relational database &1; ended
| CPI3E30 DDM TCP/IP server already active
| CPI3E31 DDM TCP/IP server does not support security mechanism
| CPI3E32 DDM server successfully started

| CPI3E33 DDM server successfully ended
| CPI3E34 DDM job xxxx servicing user yyy on mm/dd/yy at hh:mm:ss
| CPI3E35 No DDM server prestart job entry
| CPI3E36 Connection to relational database xxxx ended
| SQ30082 A connection attempt failed with reason code...
| SQL7992 Connect completed over TCP/IP
| SQL7993 Already connected
| Note: An asterisk (*) means an alert is associated with the error condition.

| Handling Program Start Request Failures for APPC


When a program start request is received by an OS/400 subsystem on the AS, the
system attempts to start a job based on information sent with the program start
request. The AR user’s authority to the AS system, existence of the requested
database, and many other items are checked.

If the AS subsystem determines that it cannot start the job (for example, the user
profile does not exist on the AS, the user profile exists but is disabled, or the user
is not properly authorized to the requested objects on the AS), the subsystem
sends a message, CPF1269, to the QSYSMSG message queue (or QSYSOPR
when QSYSMSG does not exist). The CPF1269 message contains two reason
codes (one of the reason codes may be zero, which can be ignored).

The nonzero reason code gives the reason the program start request was rejected.
Because the remote job was to have started on the AS, the message and reason
codes are provided on the AS system, and not the AR system. The user at the AR
only knows that the program start request failed, not why it failed. The user on the
AR must either talk to the system operator at the AS system, or use display station
pass-through to the AS to determine the reason why the request failed.

For a complete description of the reason codes and their meanings, refer to the ICF
Programming book.

| Handling Connection Request Failures for TCP/IP


| The main causes of failed connection requests at a DRDA server configured for
| TCP/IP use are that the DDM TCP/IP server is not started, an authorization error
| occurred, or the machine is not running.

| Server Is Not Started or the Port ID Is Not Valid


| The error message given if the DDM TCP/IP server is not started is CPE3425:
| A remote host refused an attempted connect operation.
| You can also get this message if you specify the wrong port on the Add or Change
| RDB Directory Entry command. For a DB2 for AS/400 server, the port should
| always be *DRDA (the DRDA well-known port of 446). To start the DDM server on
| the remote system, run the STRTCPSVR *DDM command. You can request that it
| be started whenever TCP/IP is started by running the CHGDDMTCPA
| AUTOSTART(*YES) command.

| DRDA Connect Authorization Failure
| The error message given for an authorization failure is SQ30082:
| Authorization failure on distributed database connection attempt.
| The cause section of the message gives a reason code and a list of meanings for
| the possible reason codes. Reason code 17 means that there was an unsupported
| security mechanism (SECMEC).

| There are two DRDA SECMECs implemented by DB2 for AS/400: user ID only,
| and user ID with password. The default for an AS/400 server is user ID with pass-
| word. If the AR sends only a user ID to a server with the default SECMEC, the
| above error message with reason code 17 is given.

| The solution for the unsupported SECMEC failure is either to allow the user ID only
| SECMEC at the server by running the CHGDDMTCPA PWDRQD(*NO) command,
| or by sending a password on the connect request. A password can be sent by
| either using the USER/USING form of the SQL CONNECT statement, or by using
| the ADDSVRAUTE command to add the remote user ID and password in a server
| authorization entry for the user profile under which the connection attempt is to be
| made.

| Note that you have to have system value QRETSVRSEC (retain server security
| data) set to '1' to be able to store the remote password in the server authorization
| entry.

| Attention: You must enter the RDB name on the ADDSVRAUTE command in
| upper case for use with DRDA or the name will not be recognized during connect
| processing and the information in the authorization entry will not be used.
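For example, from interactive SQL you could send a password on the connect request
(all names and the password shown are only illustrative):

   CONNECT TO KC000 USER SPIFFY USING 'mypass'

or you could store the remote user ID and password in a server authorization entry
before connecting; note that the SERVER value (the RDB name) is entered in uppercase,
as the attention notice above requires:

   CHGSYSVAL SYSVAL(QRETSVRSEC) VALUE('1')
   ADDSVRAUTE USRPRF(LOCALUSR) SERVER(KC000) USRID(REMUSER) PASSWORD(REMPW)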

| Server Not Available


| If a remote system is not up and running, or if you specify an incorrect IP address
| in the RDB directory entry for the target system, you will get message CPE3447:
| A remote host did not respond within the timeout period.
| There is normally a delay of several minutes before this message occurs. During
| that time, it may appear that something is hung or looping.

| Connection Failures Specific to Interactive SQL


| Sometimes when you are running a CONNECT statement from interactive SQL, a
| general SQ30080 message, Communication error occurred during distributed
| database processing, is given. In order to get the details of the error, you should
| exit from interactive SQL and look at the joblog.

| If you get message SQL7020, SQL package creation failed, when connecting for
| the first time (for any given level of commitment control) to a system that has only
| single-phase commit capabilities, the likely cause is that the connection to the remote
| system is read-only, while an updatable connection is needed to create the SQL package.

| You can verify that by looking at the messages in the joblog. The solution is to do a
| RELEASE ALL and COMMIT to get rid of all connections before connecting, so that
| the connection will be updatable.
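For example, from interactive SQL (the relational database name KC000 is illustrative):

   RELEASE ALL
   COMMIT
   CONNECT TO KC000

After the RELEASE ALL and COMMIT, the new connection is updatable and the SQL
package can be created.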

| Not Enough Prestart Jobs at Server
| If the number of prestart jobs associated with the TCP/IP server is limited by the
| QRWTSRVR prestart job entry of the QSYSWRK subsystem, and all prestart jobs
| are being used for a connection, an attempt at a new connection will fail with the
| following messages:
| CPE3426 A connection with a remote socket was reset by that socket.
| CPD3E34 DDM TCP/IP communications error occurred on recv() — MSG_PEEK.
| You can avoid this problem at the server by setting the MAXJOBS parameter of the
| CHGPJE command for the QRWTSRVR entry to a higher number or to *NOMAX,
| and by setting the ADLJOBS parameter to something other than 0.
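For example, the following command raises the limits on the prestart job entry; the
values shown are only an illustration and should be tuned for your environment:

   CHGPJE SBSD(QSYS/QSYSWRK) PGM(QSYS/QRWTSRVR) MAXJOBS(*NOMAX) ADLJOBS(2)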

Application Problems
The best time to handle a problem with an application is before it goes into pro-
duction. However, it is impossible to anticipate all the conditions that will exist for
an application when it gets into general use. The job log of either the AR or the AS
can tell you that a package failed; the listing of the program or the package can tell
you why it failed. The SQL compilers provide diagnostic tests that show the
SQLCODEs generated by the precompile process on the diagnostic listing. For
Integrated Language Environment* (ILE*) precompiles, you can optionally specify
OPTION(*XREF) and OUTPUT(*PRINT) to print a precompile source and cross-
reference listing. For non-ILE precompiles, you can optionally specify *SOURCE
and *XREF on the OPTIONS parameter of the Create SQL Program (CRTSQLxxx)
commands to print a precompile source and cross-reference listings.
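For example, an ILE precompile that produces a source and cross-reference listing
might look like the following; the library, source file, and member names are
illustrative:

   CRTSQLCI OBJ(TST/UPDATEPGM) SRCFILE(TST/QCSRC) SRCMBR(UPDATEPGM)
            OPTION(*XREF) OUTPUT(*PRINT) RDB(KC000)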

Listings
The listing from the Create SQL program (CRTSQLxxx) command shown in
Figure 9-7 on page 9-16 provides the following kinds of information:
Ÿ The values supplied for the parameters of the precompile command
Ÿ The program source
Ÿ The identifier cross-references
Ÿ The messages resulting from the precompile

Precompiler Listing

5763ST1 V3R1M0 940909          Create SQL ILE C Object  UPDATEPGM       04/19/94 14:30:10   Page 1
 Source type...............C
 Object name...............TST/UPDATEPGM
 Source file...............*LIBL/QCSRC
 Member....................*OBJ
 Options...................*XREF
 Listing option............*PRINT
 Target release............*CURRENT
 INCLUDE file..............*LIBL/*SRCFILE
 Commit....................*CHG
 Allow copy of data........*YES
 Close SQL cursor..........*ENDACTGRP
 Allow blocking............*READ
 Delay PREPARE.............*NO
 Generation level..........10
 Margins...................*SRCFILE
 Printer file..............*LIBL/QSYSPRT
 Date format...............*JOB
 Date separator............*JOB
 Time format...............*HMS
 Time separator ...........*JOB
 Replace...................*YES
 Relational database.......RCHASLKM
 User .....................*CURRENT
 RDB connect method........*DUW
 Default Collection........*NONE
 Package name..............*OBJLIB/*OBJ
 Created object type.......*PGM
 Debugging view............*NONE
 Dynamic User Profile......*USER
 Sort Sequence.............*JOB
 Language ID...............*JOB
 IBM SQL flagging..........*NOFLAG
 ANS flagging..............*NONE
 Text......................*SRCMBRTXT
 Source file CCSID.........37
 Job CCSID.................65535
 Source member changed on 04/19/94 14:25:33
5763ST1 V3R1M0 940909          Create SQL ILE C Object  UPDATEPGM       04/19/94 14:30:10   Page 2
 Record  *...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8  SEQNBR  Last change
      1  /********************************************************************/              100
      2  /* This program is called to update the DEPTCODE of file RWDS/DPT1  */              200
      3  /* to NULL. This is run once a month to clear out the old           */              300
      4  /* data.                                                            */              400
      5  /*                                                                  */              500
      6  /* NOTE: Because this program was compiled with an RDB name, it is  */              600
      7  /* not necessary to do a connect, as an implicit connect will take  */              700
      8  /* place when the program is called.                                */              800
      9  /********************************************************************/              900
     10  #include <stdio.h>                                                                  1000
     11  #include <stdlib.h>                                                                 1100
     12  exec sql include sqlca;                                                             1200
     13                                                                                      1300
     14  main()                                                                              1400
     15  {                                                                                   1500
     16  /* Just update RWDS/DPT1, setting deptcode = NULL */                                1600
     17  exec sql update RWDS/DPT1                                                           1700
     18      set deptcode = NULL;                                                            1800
     19  }                                                                                   1900
                        * * * * *  E N D  O F  S O U R C E  * * * * *

Figure 9-7 (Part 1 of 2). Listing From a Precompiler

5763ST1 V3R1M0 940909          Create SQL ILE C Object  UPDATEPGM       04/19/94 14:30:10   Page 3
                                CROSS REFERENCE
 Data Names        Define     Reference
 DEPTCODE            ****     COLUMN
                                18
 DPT1                ****     TABLE IN RWDS
                                17
 RWDS                ****     COLLECTION
                                17
5763ST1 V3R1M0 940909          Create SQL ILE C Object  UPDATEPGM       04/19/94 14:30:10   Page 4
                                DIAGNOSTIC MESSAGES
 MSG ID   SEV  RECORD  TEXT
 SQL0088    0      17  Position 15 UPDATE applies to entire table.
 SQL1103   10      17  Field definitions for file DPT1 in RWDS not found.
                                Message Summary
          Total     Info  Warning    Error   Severe  Terminal
              2        1        1        0        0         0
 10 level severity errors found in source
 19 Source records processed
                        * * * * *  E N D  O F  L I S T I N G  * * * * *

Figure 9-7 (Part 2 of 2). Listing From a Precompiler

CRTSQLPKG Listing
The listing from the Create SQL Package (CRTSQLPKG) command shown in
Figure 9-8 provides the following kinds of information:
Ÿ The values used on the parameters of the command
Ÿ The statement in error, if any
Ÿ The messages resulting from running the CRTSQLPKG command

5763SS1 V3R1M0 940909                    Create SQL package                04/19/94 14:30:31   Page 1


 Record  *...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8  SEQNBR  Last change
 Program name..............TST/UPDATEPGM
 Relational database.......*PGM
 User .....................*CURRENT
 Replace...................*YES
 Default Collection........*PGM
 Generation level..........10
 Printer file..............*LIBL/QSYSPRT
 Object type...............*PGM
 Module list...............*ALL
 Text......................*PGMTXT
 Source file...............TST/QCSRC
 Member....................UPDATEPGM

5763SS1 V3R1M0 940909                    Create SQL package                04/19/94 14:30:31   Page 2


 Record  *...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8  SEQNBR  Last change
     17  UPDATE RWDS / DPT1 SET deptcode = NULL
                                DIAGNOSTIC MESSAGES
 MSG ID   SEV  RECORD  TEXT
 SQL0204   10      17  Position 17 DPT1 in RWDS type *FILE not found.
 SQL5057               SQL Package UPDATEPGM in TST created at KC000 from
                       module UPDATEPGM.
                                Message Summary
          Total     Info  Warning    Error   Severe  Terminal
              1        0        1        0        0         0
 10 level severity errors found in source
                        * * * * *  E N D  O F  L I S T I N G  * * * * *

Figure 9-8. Listing from CRTSQLPKG

SQLCODEs and SQLSTATEs


SQL returns error codes to the application program when an error occurs.
SQLCODEs and their corresponding SQLSTATEs are returned in the SQL commu-
nication area (SQLCA) structure. An SQLCA is a collection of variables that is
updated with information about the SQL statement most recently run.

When an SQL error is detected, a return code called an SQLCODE is returned. If
SQL encounters a hard error while processing a statement, the SQLCODE is a
negative number (for example, SQLCODE −204). If SQL encounters an exceptional
but valid condition (warning) while processing a statement, the SQLCODE is a posi-
tive number (for example, SQLCODE +100). If SQL encounters no error or excep-
tional condition while processing a statement, the SQLCODE is 0. Every DB2/400
SQLCODE has a corresponding message in message file QSQLMSG in library
QSYS. For example, SQLCODE −204 is logged as message ID SQL0204.
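For example, to see the full text of the message that corresponds to SQLCODE -204,
you could enter:

   DSPMSGD RANGE(SQL0204) MSGF(QSYS/QSQLMSG)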

SQLSTATE is an additional return code provided in the SQLCA. SQLSTATE pro-
vides application programs with return codes for common error conditions.
SQLCODE does not return the same return code for the same error condition
among the current four IBM relational database products. SQLSTATE has been
designed so that application programs can test for specific error conditions or
classes of errors regardless of whether the application program is connected to a
DB2, SQL/DS, or DB2/400 AS.

Because the SQLCA is a valuable problem-diagnosis tool, it is a good idea to
include in your application programs the instructions necessary to display some of
the information contained in the SQLCA. Especially important are the following
SQLCA fields:
SQLCODE Return code.
SQLERRD(3) The number of rows updated, inserted, or deleted by SQL.
SQLSTATE Return code.
SQLWARN0 If set to W, at least one of the SQL warning flags (SQLWARN1
through SQLWARNA) is set.
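The following fragment shows one way to do this in ILE C with embedded SQL.
It is only a sketch, reusing the table from the earlier listing example:

   exec sql include sqlca;

   exec sql update RWDS/DPT1 set deptcode = NULL;
   if (sqlca.sqlcode != 0)                       /* error or warning occurred */
      printf("SQLCODE = %ld  SQLSTATE = %.5s  rows = %ld\n",
             sqlca.sqlcode, sqlca.sqlstate, sqlca.sqlerrd[2]);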

For more information about the SQLCA, see the information on SQLCA and
SQLDA control blocks in the DB2 for AS/400 SQL Reference book.

The DB2 for AS/400 SQL Programming book lists each SQLCODE, the associated
message ID, the associated SQLSTATE, and the text of the message. The com-
plete message can be viewed online by using the Display Message Description
(DSPMSGD) CL command.

Distributed Relational Database SQLCODEs and SQLSTATEs


The following list provides some of the more common SQLCODEs and
SQLSTATEs associated with distributed relational database processing. See the
DB2 for AS/400 SQL Programming book for all SQLCODEs and SQLSTATEs. In
these brief descriptions of the SQLCODEs (and their associated SQLSTATEs),
message data fields are identified by an ampersand (&) and a number (for
example, &1). The replacement text for these fields is stored in SQLERRM in the
SQLCA. More detailed cause and recovery information for any SQLCODE can be
found by using the Display Message Description (DSPMSGD) CL command.

Figure 9-9. SQLCODEs and SQLSTATEs

SQLCODE  SQLSTATE                  Description
+114     0A001                     Relational database name &1 not the same as current server &2.
+331     01520                     Character conversion cannot be performed.
+335     01517                     Character conversion has resulted in substitution characters.
+551     01548                     Not authorized to object &1 in &2 type &3.
+552     01542                     Not authorized to &1.
+595     01526                     Commit level &1 has been escalated to &2 lock.
+863     01539                     Only SBCS characters allowed to relational database &1.
-114     42961                     Relational database &1 not the same as current server &2.
-144     58003                     Section number &1 not valid. Current high section number is &3. Reason &2.
-145     55005                     Recursion not supported for heterogeneous application server.
-175     58028                     The commit operation failed.
-189     22522                     Coded Character Set identifier &1 is not valid.
-250     42718                     Local relational database not defined in the directory.
-251     2E000 42602               Character in relational database name &1 is not valid.
-302     22001 22003 22024 22502   Conversion error on input host variable &2.
-330     22021                     Character conversion cannot be performed.
-331     22021                     Character conversion cannot be performed.
-332     57017                     Character conversion between CCSID &1 and CCSID &2 not valid.
-334     22524                     Character conversion resulted in truncation.
-525     51015                     Statement is in error.
-551     42501                     Not authorized to object &1 in &2 type *&3.
-552     42502                     Not authorized to &1.
-683     42842                     FOR DATA clause or CCSID clause not valid for specified type.
-752     0A001                     Application process is not in a connectable state. Reason code &1.
-805     51002                     SQL package &1 in &2 not found.
-818     51003                     Consistency tokens do not match.
-842     08002                     The connection already exists.
-862     55029                     Local program attempted to connect to remote relational database.
-871     54019                     Too many CCSID values specified.
-900     08003                     The connection does not exist.
-950     42705                     Relational database &1 not in relational directory.
-952     57014                     Processing of the SQL statement was ended by ENDRDBRQS command.
-969     58033                     Error occurred when passing request to application requester driver program.
-7017    42971                     Commitment control is already active to a DDM target.
-7018    42970                     COMMIT HOLD or ROLLBACK HOLD is not allowed.
-7021    57043                     Local program attempting to run on application server.
-30000   58008                     Distributed Relational Database Architecture (DRDA) protocol error.
-30001   57042                     Call to distributed SQL program not allowed.
-30020   58009                     Distributed Relational Database Architecture (DRDA) protocol error.
-30021   58010                     Distributed relational database not supported by remote system.
-30040   57012                     DDM resource &2 at relational database &1 unavailable.
-30041   57013                     DDM resources at relational database &1 unavailable.
-30050   58011                     DDM command &1 is not valid while bind process in progress.
-30051   58012                     Bind process with specified package name and consistency token not active.
-30052   42932                     Program preparation assumptions are incorrect.
-30053   42506                     Not authorized to create package for owner &1.
-30060   08004                     User not authorized to relational database &1.
-30061   08004                     Relational database &1 not found.
-30070   58014                     Distributed Data Management (DDM) command &1 not supported.
-30071   58015                     Distributed Data Management (DDM) object &1 not supported.
-30072   58016                     Distributed Data Management (DDM) parameter &1 not supported.
-30073   58017                     Distributed Data Management (DDM) parameter value &1 not supported.
-30074   58018                     Distributed Data Management (DDM) reply message &1 not supported.
-30080   08001                     Communication error occurred during distributed database processing.
-30082   08001                     Authorization failure on distributed database connection attempt.
-30090   25000 2D528 2D529         Change request not valid for read-only application server.
         58020                     SQLSTATE value not defined for the error or warning.

System and Communications Problems

AS/400 Problem Log


System-detected problems are automatically entered into the problem log. You can
also enter a user-detected problem in the problem log. You can run problem anal-
ysis on logged problems at any time by entering the Analyze Problem (ANZPRB)
command from any system command line. This command takes you through an
analysis procedure and stores additional problem-related information in the problem
log.

Use the Work with Problems (WRKPRB) command to view the problem log. The
following displays show the two views of the problem log:

                                                               System:   KC000
 Position to . . . . . . .            Problem ID

 Type options, press Enter.
   2=Change   4=Delete   5=Display details   6=Print details
   8=Work with problem   9=Work with alerts   12=Enter notes

 Opt  Problem ID  Status    Problem Description
 __   9114350131  READY     User detected a hardware problem on a differen
 __   9114326436  OPENED    System cannot call controller . No lines avail
 __   9114326281  OPENED    Line failed during insertion into the token-r
 __   9114324416  OPENED    Device failed, recovery stopped.
 __   9114324241  OPENED    System cannot call controller . No lines avail
 __   9114324238  OPENED    System cannot call controller . No lines avail
 __   9114324234  OPENED    System cannot call controller . No lines avail
 __   9114324231  OPENED    System cannot call controller . No lines avail
 __   9114324227  OPENED    System cannot call controller . No lines avail
 __   9114324224  OPENED    System cannot call controller . No lines avail
 __   9114324218  OPENED    System cannot call controller . No lines avail
                                                                      More...
 F3=Exit   F5=Refresh   F6=Print list   F11=Display dates and times
 F12=Cancel   F16=Report prepared problems   F24=More keys
Press F11 on the first view to see the following display:

                                                               System:   KC000
 Position to . . . . . . .            Problem ID

 Type options, press Enter.
   2=Change   4=Delete   5=Display details   6=Print details
   8=Work with problem   9=Work with alerts   12=Enter notes

 Opt  Problem ID  Date      Time      Origin
 __   9114350131  03/29/92  14:36:05  APPN.KC000
 __   9114326436  03/29/92  07:41:59  APPN.KC000
 __   9114326281  03/29/92  07:39:17  APPN.KC000
 __   9114324416  03/29/92  07:06:42  APPN.KC000
 __   9114324241  03/29/92  07:03:38  APPN.KC000
 __   9114324238  03/29/92  07:03:35  APPN.KC000
 __   9114324234  03/29/92  07:03:31  APPN.KC000
 __   9114324231  03/29/92  07:03:27  APPN.KC000
 __   9114324227  03/29/92  07:03:24  APPN.KC000
 __   9114324224  03/29/92  07:03:20  APPN.KC000
 __   9114324218  03/29/92  07:03:14  APPN.KC000
                                                                      More...
 F3=Exit   F5=Refresh   F6=Print list   F11=Display descriptions   F12=Cancel
 F14=Analyze new problem   F16=Report prepared problems   F18=Work with alerts
AS/400 problem log support allows you to display a list of all the problems that
have been recorded on the local system. You can also display detailed information
about a specific problem such as the following:
Ÿ Product type and serial number of device with a problem
Ÿ Date and time of the problem
Ÿ Part that failed and where it is located
Ÿ Problem status

From the problem log you can also analyze a problem, report a problem, or deter-
mine any service activity that has been done. For more information about handling
AS/400 problems, see the chapter on problem handling in the System Operation
book.

Alerts
Alert support on the AS/400 system is based on the AS/400 message support,
which is built into the operating system. Any message sent to the system operator
message queue or to the history log can be defined as an alert.

For full support1 in handling distributed relational database problems, alerts and
alert logging can be enabled using the Change Network Attributes (CHGNETA)
command. OS/400 alert support is discussed in “Alert Support” on page 3-3, and
procedures and examples for setting up alerts are provided in “Configuring Alert
Support” on page 3-16.

Whenever an alert occurs, a system informational message (alert created) is dis-
played interactively or put on the job log. You can display an alert with the Work
with Alerts (WRKALR) command. When you enter WRKALR, the following display
appears:

                                                         03/28/92   15:44:34
 Type options, press Enter.
   2=Change   4=Delete   5=Display recommended actions   6=Print details
   8=Display alert detail   9=Work with problem

      Resource
 Opt  Name     Type  Date   Time   Alert Description: Probable Cause
      KC000*   UNK   05/28  15:19  Resource unavailable: Printer
      AS       SRV   05/27  21:31  Distributed process failed: Command not re
      KC000*   LU    05/23  08:29  Operator intervention required: Printer
      KC000*   UNK   05/23  08:27  Resource unavailable: Printer
      AS       SRV   05/20  11:49  Distributed process failed: Command not re
      KC000*   UNK   05/20  11:26  Resource unavailable: Printer
      AS       SRV   05/20  10:47  Distributed process failed: Relational dat
      AS       SRV   05/20  10:31  Distributed process failed: Command not re
      KC000*   CTL   05/20  09:46  Unable to communicate with remote node: Co
      KC000*   CTL   05/20  03:23  Unable to communicate with remote node: Co
      KC000*   UNK   05/19  15:32  Resource unavailable: Printer
      AS       SRV   05/19  14:37  Distributed process failed: Invalid data s
                                                                      More...
 F3=Exit   F10=Show new alerts   F11=Display user/group   F12=Cancel
 F13=Change attributes   F20=Right   F21=Automatic refresh   F24=More keys
Alert message descriptions are contained in the QHST log. Use the Display Log
(DSPLOG) command and specify QHST, or the Display Message (DSPMSG)
command and specify QSYSOPR to see the alert message description.

The AS/400 system enables a subset of the DRDB messages listed in Figure 9-6
on page 9-11 to trigger alerts for distributed relational database support. If an error
is detected at the AS, a DDM message is sent to the AR. The AR generates an
alert based on that DDM message.

Distributed relational database alerts contain the following information:


Ÿ Identification number (alert ID)
Ÿ Type
Ÿ Description

| 1 Alerts can still be generated and logged locally for applications that use DRDA over TCP/IP, but the alert messages do not flow
| over TCP/IP.

Ÿ Probable causes
Ÿ Failure causes
Ÿ Recommended action
These alerts also contain additional information, such as:
Ÿ Product set identifier.
Ÿ Product identifier (IBM product number, version, release, modification, and
product common name).
Ÿ Hierarchy Name List and Associated Resources List. These two fields show the
resource name and type that detected the error condition; for example, the
resource name of SQL/DS and the type of AS. If the detecting resource is not
known, the identifier of the system that sent the alert is displayed as the lowest
hierarchical entry.
Ÿ Local date and time from either the AS or AR.
Ÿ Other details such as relational database name and logical unit of work identi-
fier (LUWID).

The following alerts are generated for AS/400 distributed relational database
support:

Figure 9-10. Distributed Relational Database Messages that Create Alerts

Message ID   Alert ID     Text
CPF3E23      4821 F0B5    DDM data stream violates conversation capabilities.
CPF3E80      C299 284E    Syntax error detected in DDM data stream.
CPF3E81      2257 C33F    The data descriptor received is not valid.
CPF3E82      36B0 632B    Relational database already accessed.
CPF3E83      2257 C33F    FD:OCA1. Data descriptor does not match data received.
CPF3E84      DA23 E856    DDM conversational protocol error was detected.
CPF3E85      36B0 632B    Relational database (RDBNAME) not accessed.
CPF3E86      D67E 885A    Error occurred during distributed database processing.
CPF3E87      2E0A A333    Permanent error condition detected.
CPF3E88      3AED 0327    The SQL cursor had been previously opened at the remote location.
CPF3E89      3AED 0327    Query is not opened within this unit of work.
1 Formatted Data: Object Content Architecture (FD:OCA) used by the AS/400
system to describe the data format of the columns of a database table.

When alerts are sent from some modules that support a distributed relational data-
base, a spooled file that contains extensive diagnostic information is also created.
This data is called first-failure data capture (FFDC) information.

For more information about AS/400 alerts, see the DSNX Support book.

Getting Data to Report a Failure


The following sections describe the kinds of data that you can print to help you
diagnose a problem in a distributed relational database on AS/400 systems. This
data is produced by the OS/400 program. You can also use system operator mes-
sages and the application program (along with its data) to diagnose problems.

Printing a Job Log


Every job on the AS/400 system has a job log that contains information related to
requests entered for that job. When a user is having a problem at an AR, the infor-
mation in the job log may be helpful in diagnosing the problem. One easy way to
get this information is to have the user sign off with the command:
SIGNOFF *LIST

This command prints a copy of the user's job log, or places it in an output queue
for printing.

Another way to print the job log is by specifying LOG(4 00 *SECLVL) on the appli-
cation job description. After the job is finished, all messages are logged to the job
log for that specific job. You can print the job log by locating it on an output queue
and running a print procedure. See “Using the Job Log” on page 6-5 for information
on how to locate jobs and job logs on the system.
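For example, the following change to a job description (the name is illustrative)
causes all messages to be written to the job log for jobs that use it:

   CHGJOBD JOBD(SPIFFY/DDBJOBD) LOG(4 00 *SECLVL)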

The job log for the AS may also be helpful in diagnosing problems. See “Locating
Distributed Relational Database Jobs” on page 6-6 for information on how to find
the job name for the AS job.

| Finding Joblogs from TCP/IP Server Prestart Jobs


| When the connection ends that is serviced by one of the QRWTSRVR prestart jobs
| associated with the DDM TCP/IP server, the prestart job is recycled for use by
| another connection. When this happens, the joblog associated with the ended con-
| nection is normally discarded.

| There are two conditions under which the joblog will be saved:
| Ÿ If the program QCNTEDDM detects that a serious error occurred in processing
| the request that ended the connection
| Ÿ If the prestart job was being serviced (by use of the STRSRVJOB command)
| Keeping the joblog for the first condition is designed to retain information useful for
| diagnosing serious unexpected errors. Keeping it for serviced jobs is to provide a
| way for someone to force the joblog to be kept. For example, if you want to get
| SQL optimizer data that is emitted when running under debug, you can start a
| service job, run the STRDBG command, and the joblog will be retained in a
| spooled file.

| The joblogs will not be stored under the prestart job ID. To find them, run the fol-
| lowing command:
| WRKJOB userid/QPRTJOB
| where userid is the user ID used on the CONNECT to the AS. If you do not know
| that user ID, you can find it with the DSPLOG command on the AS. Look for the
| following message:
| DDM job xxxx servicing user yyy on ddd at ttt.
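For example, the following command limits the history log display to the message
that reports the server job name (CPI3E34 is the message ID mentioned later in this
chapter for this purpose):

   DSPLOG LOG(QHST) MSGID(CPI3E34)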

| Printing the Product Activity Log


| The Product Activity Log on the AS/400 system is a record of machine checks,
| device errors, and tape and diskette statistics. It also contains FFDC information
| including the first 1000 bytes of each FFDC dump. By reviewing these errors you
| may be able to determine the nature of a problem.

| To print the product activity log for a system on which you are signed on, do the
| following:
| 1. Type the Print Error Log (PRTERRLOG) command on any command line and
| press F4 (Prompt). The Print Error Log display is shown.
| 2. Type the parameter value for the kind of log information you want to print and
| press the Enter key. The log information is sent to the output queue identified
| for your job.
| 3. Enter the Work with Job (WRKJOB) command. The Work with Job display is
| shown.
| 4. Select the option to work with spooled files. The Work with Job Spooled Files
| display is shown.
| 5. Look for the log file you just created at or near the bottom of the spooled file
| list.
| 6. Type the work with printing status option in the Opt column next to the log file.
| The Work with Printing Status display is shown.
| 7. On the Work with Printing Status display, use the change status option to
| change the status of the file and specify the printer to print the file.

Trace Job
Sometimes a problem cannot be tracked to a specific program.

You can trace module flow, OS/400 data acquisition (including CL commands), or
both using the Trace Job (TRCJOB) command. TRCJOB logs all of the called pro-
grams. As the trace records are generated, the records are stored in an internal
trace storage area. When the trace is ended, the trace records can be written to a
spooled printer file (QPSRVTRC) or directed to a database output file.

The TRCJOB command should be used when the problem analysis procedures do
not supply sufficient information about the problem. For distributed database appli-
cations, the command is useful for capturing distributed database request and
response data streams.

A sample trace scenario is as follows:
TRCJOB SET(*ON) TRCTYPE(*ALL) MAXSTG(2000)
       TRCFULL(*WRAP) EXITPGM($SCFTRC)
CALL QCMD
TRCJOB SET(*OFF) OUTPUT(*PRINT)
WRKOUTQ output-queue-name

You will see a spooled file with a name of QPSRVTRC. The spooled file contains
your trace. For more information on the use of trace job, see Appendix C, “Inter-
preting Trace Job and FFDC Data” on page C-1.

Communications Trace
If you get a message in the CPF3Exx range or the CPF91xx range when using
DRDA to access a distributed relational database, you should run a communi-
cations trace. The following list shows common messages you might see in these
ranges.

Figure 9-11. Communications Trace Messages


MSG ID Description
CPF3E80 DDM data stream syntax error.
CPF91xx DDM protocol error.
CPF3E83 Invalid FD:OCA descriptor.
CPF3E84 Data mismatch error.

The communications trace function lets you start or stop a trace of data on commu-
nications configuration objects. After you have run a trace of data, the data can be
formatted for printing or viewing. You can view the printer file only in the output
queue.

Communication trace options run under system service tools (SST). SST lets you
use the configuration objects while communications trace is active. Data can be
traced and formatted for any communications type you can use in a distributed
database network.

The AS/400 communications trace can run from any display connected to the
system. Anyone with special authority (SPCAUT) of *SERVICE can run the trace
on an AS/400 system. Communications trace supports all line speeds. See the
Communications Management book for the maximum aggregate line speeds on the
protocols that are available on the communications controllers.

Communications trace should be used in the following situations:


Ÿ The problem analysis procedures do not supply sufficient information about the
problem.
Ÿ You suspect a protocol violation is the problem.
Ÿ You suspect line noise to be the problem.
Ÿ The error messages indicate there is a Systems Network Architecture (SNA)
BIND problem.
You must have detailed knowledge of the line protocols being used to correctly
interpret the data generated by a communications trace. For information on inter-
preting DRDA data streams see “Analyzing the RW Trace Data Example” on
page C-2.

Whenever possible, start the communications trace before varying on the lines.
This gives you the most accurate sample of your line as it is varied on.

| To run an APPC trace and to work with its output, you have to know on what line,
| controller, and device you are running. If you do not have this information, refer to
| “Finding Your Line, Controller and Device Descriptions.”

| To format the output of a TCP/IP trace, you should know the IP addresses of the
| source and target systems to avoid getting unwanted data in the trace.

The following commands start, stop, print, and delete communications traces:

Start Communications Trace (STRCMNTRC)


Starts a communications trace for a specified line or network interface
description. A communications trace continues until you run the End Communi-
cations Trace (ENDCMNTRC) command.

End Communications Trace (ENDCMNTRC)


Ends the communications trace running on the specified line or network inter-
face description.

Print Communications Trace (PRTCMNTRC)


Moves the communications trace data for the specified line or network interface
description to a spooled file or an output file. Specify *YES for the format SNA
data only parameter.

Delete Communications Trace (DLTCMNTRC)


Deletes the communications trace for a specified line or network interface
description.
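A typical sequence might look like the following; the line description name TRNLINE
is only an example, and the values should be adjusted for your configuration. When
you print the trace, specify *YES for the format SNA data only parameter, as noted
above.

   STRCMNTRC CFGOBJ(TRNLINE) CFGTYPE(*LIN)

After re-creating the failure, end, print, and then delete the trace:

   ENDCMNTRC CFGOBJ(TRNLINE) CFGTYPE(*LIN)
   PRTCMNTRC CFGOBJ(TRNLINE) CFGTYPE(*LIN)
   DLTCMNTRC CFGOBJ(TRNLINE) CFGTYPE(*LIN)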

If you are running on a Version 2, Release 1.1 or earlier system, the preceding
commands are not available. Instead, you have to use the System Service Tools
(SST). Start SST with the Start System Service Tools (STRSST) command. For
more information about the STRSST command and details on communication
traces see the AS/400 Licensed Internal Code Diagnostic Aids - Volume 1 book.

Finding Your Line, Controller and Device Descriptions


Use the Work with Configuration Status (WRKCFGSTS) command to find what con-
troller and device your application server job is being started under. For example:
WRKCFGSTS CFGTYPE(*DEV)
          CFGD(*LOC)
          RMTLOCNAME(DB2ESYS)

The value for the RMTLOCNAME keyword is the application server's system name.

The WRKCFGSTS command displays all devices that have the specified system
name as the remote location name. You can tell which device is in use because
you can vary on only one device at a time. Use option 8 to work with the device
description and then option 5 to display it. The attached controller field gives the
name of your controller. You can use the WRKCFGSTS command to work with the
controller and device descriptions. For example:

WRKCFGSTS CFGTYPE(*CTL)
          CFGD(PCXZZ1205)    /* workstation */
WRKCFGSTS CFGTYPE(*CTL)
          CFGD(LANSLKM)      /* AS/400 on token ring */

The CFGD values are the controller names acquired from the device descriptions in
the first example in this section.

The output from this command also includes the name of the line description that
you need when working with communications traces. If you select option 8 and then
option 5 to display the controller description, the active switched line parameter dis-
plays the name of the line description. The LAN remote adapter address gives the
token-ring address of the remote system.

| Another way to find the line name is to use the WRKLIND command, which lists all
| of the line descriptions for the system.

Finding First-Failure Data Capture (FFDC) Data


Note: No FFDC data is produced unless the QSFWERRLOG system value is set
to *LOG.

The following are tips on how to locate FFDC data on an AS/400 system. This
information is most useful if the failure causing the FFDC data output occurred on
the application server (AS). The FFDC data for an application requester (AR) can
usually be found in one of the spooled files associated with the job running the
application program.
1. Execute a DSPMSG QSYSOPR command and look for a Software problem
detected in Qccxyyyy message in the QSYSOPR message log. (cc in the
program name is usually RW, but could be CN or SQ.) The presence of this
message indicates that FFDC data was produced. You can use the help key to
get details on the message. The message help gives you the problem ID,
which you can use to identify the problem in the list presented by the WRKPRB
command. You may be able to skip this step because the problem record, if it
exists, may be at or near the top of the list.
2. Enter the WRKPRB command and specify the program name (Qccxyyyy) from
the Software problem detected in Qccxyyyy message. Use the program name
to filter out unwanted list items. When a list of problems is presented, specify
option 5 on the line containing the problem ID to get more problem details,
such as symptom string and error log ID.
3. When you have the error log ID, enter the STRSST command. On the first
screen, select Start a service tool. On the next screen, enter 1 to select
Error log utility. On the next screen, enter 2 to select Display or print by
error log ID. In the next screen, you can:
Ÿ Enter the error log ID.
Ÿ Enter Y to get the hexadecimal display.
Ÿ Select the Print or Display option.

The Display option gives 16 bytes per line instead of 32. This can be useful for
on-line viewing and printing screens on an 80-character workstation printer. If you
choose the Display option, use F6 to see the hexadecimal data after you press
Enter.

The hexadecimal data contains the first 1K bytes of the FFDC dump data, pre-
ceded by some other data. The start of the FFDC data is identified by the FFDC
data index. The name of the target job (if this is on the application server) is before
the data index. If the FFDC dump spool file has not been deleted, use this fully
qualified job name to find the spool file. If the spool file is missing, either:
Ÿ Use the first 1K of the dump stored in the error log.
Ÿ Recreate the problem if the 1K of FFDC data is insufficient.

Interpreting FFDC Data from the Error Log


The FFDC data in the error log is not formatted for reading as well as the data in
the spooled files. Each section of the FFDC dump in the error log is prefixed by a
4-byte header. The first two bytes of the header are the length of the following
section (not counting the prefix). The second two bytes, which are the section
number, correspond to the section number in the index (see “FFDC Dump Output
Description” on page C-8).

Starting a Service Job to Diagnose Application Server Problems


When an application uses DRDA, the SQL statements are run in the application
server job. Because of this, you may need to start debug or a job trace for the
application server job that is running on the OS/400 operating system.

| Starting a Service Job for an APPC Server


| When the DB2 for AS/400 application server recognizes a special transaction
| program name (TPN), it causes the application server to send a message to the
| system operator and then wait for a reply (see 1). This allows you to issue a Start
| Service Job (STRSRVJOB) command that allows job trace or debug to be started
| for the application server job. The following steps allow you to stop the DB2/400
| application server job and restart it in debug mode.
| 1. Specify QCNTSRVC as the transaction program name (TPN) at the application
| requester. There is a different method of doing this for each platform. The fol-
| lowing sections describe the different methods.
| 2. When the OS/400 application receives a TPN of QCNTSRVC, it sends a
| CPF9188 message to QSYSOPR and waits for a G (for go) reply.
| 3. Before entering the G reply, use the STRSRVJOB command to start a service
| job for the application server job and put it into debug mode. (Request help on
| the CPF9188 message to display the jobname.)
| 4. Enter the Start Debug (STRDBG) command.
| 5. After starting debug for the application server job, reply to the QSYSOPR
| message with a G.
| 6. After receiving the G reply, the application server continues with normal DRDA
| processing.
| 7. After the application runs, you can look at the application server joblog to see
| the SQL debug messages.
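For example, after the CPF9188 message identifies the application server job, you
might enter the following before replying G; the qualified job name shown is
hypothetical:

   STRSRVJOB JOB(123456/KCUSER/KC000JOB)
   STRDBG UPDPROD(*YES)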

| Starting a Service Job for a TCP/IP Server
| The DDM TCP/IP server does not use TPNs as the APPC server does. However,
| the use of prestart jobs by the TCP/IP server provides a way to start a service job
| in that environment. If you do not need to trace the actions of the server during the
| connect operation, and you have the ability to delay execution of the AR job till you
| can do some setup on the server, such as from interactive SQL, you can use the
| DSPLOG command to find the CPI3E34 message reporting the name of the server
| job being used for a given connection. You can then use the STRSRVJOB
| command as described in the previous section.

| If you do need to trace the connect statement, or do not have time to do manual
| setup on the server after the connect, you will need to anticipate what prestart job
| will be used for the connection before it happens. One way to do that is to prevent
| other users from connecting during the time of your test, if possible, and end all of
| the prestart jobs except one.

| You can force the number of prestart jobs to be 1 by setting the following parame-
| ters on the CHGPJE command for QRWTSRVR running in QSYSWRK to the
| values specified below:
| Ÿ Initial number of jobs: 1
| Ÿ Threshold: 1
| Ÿ Additional number of jobs: 0
| Ÿ Maximum number of jobs: 1
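A sketch of that change follows; the prestart job entry is the QRWTSRVR entry in
subsystem QSYSWRK, as described earlier, and the parameter keywords shown are
the usual ones for these values:

   CHGPJE SBSD(QSYS/QSYSWRK) PGM(QSYS/QRWTSRVR)
          INLJOBS(1) THRESHOLD(1) ADLJOBS(0) MAXJOBS(1)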

| If you use this technique, be sure to change the parameters back to values that are
| reasonable for your environment; otherwise, users will get the message that 'A
| connection with a remote socket was reset by that socket' when trying to
| connect when the one prestart job is busy.

Setting QCNTSRVC as a TPN on a DB2/400 Application Requester


Specify QCNTSRVC on the TNSPGM parameter of the Add RDB Directory
Entry (ADDRDBDIRE) or Change RDB Directory Entry (CHGRDBDIRE) com-
mands.

| It can be helpful to make a note of the special TPN in the text of the RDB directory
| entry as a reminder to change it back when you are finished with debugging.

| Creating Your Own TPN for Debugging a DB2 for AS/400 AS Job
| It is possible for you to create your own TPN by compiling a CL program containing
| debug statements and a TFRCTL QSYS/QCNTEDDM statement at the end. The
| advantage of this is that you do not need any manual intervention when doing the
| connect. An example of such a program follows:
| PGM
| MONMSG CPF0000
| STRDBG UPDPROD(*YES) PGM(CALL/QRWTEXEC) MAXTRC(9999)
| ADDBKP STMT(CKUPDATE) PGMVAR((*CHAR (SQLDA@))) OUTFMT(*HEX) +
|        LEN(1400)
| ADDTRC PGMVAR((DSLENGTH ()) (LNTH ()) (FDODTA_LNTH ()))
| TRCJOB *ON TRCTYPE(*DATA) MAXSTG(2048) TRCFULL(*STOPTRC)
| TFRCTL QSYS/QCNTEDDM
| ENDPGM

| The TPN name in the RDB directory entry of the AR is the name that you supply.
| Use the text field to provide a warning that the special TPN is in use, and be sure
| to change the TPN name back when done debugging.

| Be aware that when you change the TPN of an RDB, all connections from that AR
| will use the new TPN until you change it back. This could cause surprises for
| unsuspecting users, such as poor performance, long waits for operator responses,
| and the filling up of storage with debug data.

| Setting QCNTSRVC as a TPN on a DB2 for VM Application Requester


Change the UCOMDIR NAMES file to specify QCNTSRVC in the TPN tag.

For example:
:nick.RCHASLAI :tpn.QCNTSRVC
:luname.VM4GATE RCHASLAI
:modename.MODE645
:security.NONE

Then issue SET COMDIR RELOAD USER.

| Setting QCNTSRVC as a TPN on a DB2 for OS/390 Application Requester
| Update the SYSIBM.LOCATIONS table to specify QCNTSRVC in the TPN column
| for the row that contains the RDB-NAME of the DB2 for AS/400 application server.
| For systems running versions earlier than release 5, substitute the
| SYSIBM.SYSLOCATIONS table and the LINKATTR column in the above instruc-
| tion.

| Setting QCNTSRVC as a TPN on a DB2 Connect Application Requester


| See the DB2 Connect Quick Beginnings book for instructions on how to set up the
| TPN on this family of products.

Chapter 10. Writing Distributed Relational Database
Applications
You can create and maintain programs for a distributed relational database on the
AS/400 system using the SQL language the same way you use it for local-
processing applications. You can embed static and dynamic Structured Query Lan-
guage (SQL) statements with any one or more of the following high-level
languages:
Ÿ AS/400 PL/I
Ÿ ILE C/400*
Ÿ COBOL/400
Ÿ ILE COBOL/400
Ÿ FORTRAN/400*
Ÿ RPG/400
Ÿ ILE RPG/400

The process for developing distributed applications is similar to that of developing
SQL applications for local processing. The difference is that the application for dis-
tributed processing must specify the name of the relational database to which it
connects. This may be done when you precompile the program or within the appli-
cation.

The same SQL objects are used for both local and distributed applications, except
that one object, the SQL package, is used exclusively for distributed relational data-
base support. You create the program using the Create SQL program
(CRTSQLxxx) command. The xxx in this command refers to the host language CI,
CBL, CBLI, FTN, PLI, RPG, or RPGI. The SQL package may be a product of the
precompile in this process. The Create SQL Package (CRTSQLPKG) command
creates SQL packages for existing distributed SQL programs.

You must have the DB2/400 Query Manager and SQL Development Kit licensed
program installed to precompile programs with SQL statements. However, you can
create SQL packages from existing distributed SQL programs with only the com-
piled program installed on your system. The DB2/400 Query Manager and SQL
Development Kit licensed program also allows you to use interactive SQL to access
a distributed relational database. This is helpful when you are debugging programs
because it allows you to test SQL statements without having to precompile and
compile a program.

This chapter provides an overview of programming issues for a distributed relational
database. More detailed information on these topics is in the DB2 for AS/400 SQL
Programming book and the Distributed Relational Database Application Program-
ming Guide for IBM relational database management systems.

Programming Considerations for a Distributed Relational Database
Application
Programming considerations for a distributed relational database application on an
AS/400 system fall into two main categories: those that deal with a function that is
supported on the local system and those that are a result of having to connect to
other systems. This section addresses both of these categories as it discusses the
following:
Ÿ Naming conventions
Ÿ Connecting to other systems
Ÿ Distributed SQL statements and coexistence
Ÿ Coded character set identifiers (CCSIDs)
Ÿ Data translation
Ÿ Distributed Data Management (DDM) files and SQL programs

“Designing Applications — Tips” on page 2-4 provides additional information that
you should take into consideration when designing distributed relational database
applications.

Naming Distributed Relational Database Objects


SQL objects are created and maintained as AS/400 system objects.

You can use either of two naming conventions in DB2/400 programming: system
(*SYS) and SQL (*SQL). The naming convention you use affects the method for
qualifying file and table names. It also affects security and the terms used on the
interactive SQL displays. Distributed relational database applications can access
objects on another AS/400 system using either naming convention. However, if
your program accesses a relational database on a non-AS/400 system, only SQL
names can be used. Select the naming convention using the NAMING parameter
on the Start SQL (STRSQL) command or the OPTION parameter on one of the
CRTSQLxxx commands.

System (*SYS) Naming Convention


When you use the system naming convention, files are qualified by library name in
the form: library/file. Tables created using this naming convention assume the
public authority of the library in which they are created. If the table name is not
explicitly qualified and a default collection name is used in the DFTRDBCOL
parameter of the CRTSQLxxx or CRTSQLPKG commands, the default collection
name is used for static SQL statements. If the file name is not explicitly qualified
and the default collection name is not specified, the following rules apply:
Ÿ All SQL statements except certain CREATE statements cause SQL to search
the library list (*LIBL) for the unqualified file.
Ÿ The CREATE statements resolve to unqualified objects as follows:
– CREATE TABLE: The table name must be explicitly qualified.
– CREATE VIEW: The view is created in the first library referred to in the
subselect.
– CREATE INDEX: The index is created in the collection or library that con-
tains the table on which the index is being built.

SQL (*SQL) Naming Convention
When you use the SQL naming convention, tables are qualified by the collection
name in the form: collection.table. If the table name is not explicitly qualified and
the default collection name is specified in the default relational database collection
(DFTRDBCOL) parameter of the CRTSQLxxx or CRTSQLPKG commands, the
default collection name is used. If the table name is not explicitly qualified and the
default collection name is not specified, the following rules apply:
Ÿ For static SQL, the default qualifier is the user profile of the program owner.
Ÿ For dynamic SQL or interactive SQL, the default qualifier is the user profile of
the job running the statement.
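For example, the same unqualified query could be written as follows under each
convention; the library or collection name SPIFFY and the table name INVENTORY
are only illustrations:

   System (*SYS) naming:  SELECT * FROM SPIFFY/INVENTORY
   SQL (*SQL) naming:     SELECT * FROM SPIFFY.INVENTORY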

Default Collection Name


You can specify a default collection name to be used by an SQL program by sup-
plying this name for the DFTRDBCOL parameter on the CRTSQLxxx command
when you precompile the program. The DFTRDBCOL parameter provides the
program with the collection name as the library for an unqualified file if the *SYS
naming convention is used, or as the collection for an unqualified table if the *SQL
naming convention is used. If you do not specify a default collection name when
you precompile the program, the rules for unqualified names apply, as stated
above, for each naming convention. The default relational database collection name
only applies to static SQL statements.

You can also use the DFTRDBCOL parameter on the CRTSQLPKG command to
change the default collection of a package. After an SQL program is compiled you
can create a new SQL package to change the default collection. See “Create SQL
Package (CRTSQLPKG) Command” on page 10-28 for a discussion of all the
parameters of the CRTSQLPKG command.
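For example, the following precompile supplies SPIFFY as the default collection and
names the relational database at which the SQL package is created; all object names
shown are illustrative:

   CRTSQLCBL PGM(SPIFFY/PARTS1) SRCFILE(SPIFFY/QLBLSRC)
             RDB(KC000) DFTRDBCOL(SPIFFY)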

Connecting to a Distributed Relational Database


What makes a distributed relational database application distributed is its ability to
connect to a relational database on another system.

There are two types of CONNECT statements with the same syntax but different
semantics:
Ÿ CONNECT (Type 1) is used for remote unit of work.
Ÿ CONNECT (Type 2) is used for distributed unit of work.

The type of CONNECT that a program uses is indicated by the RDBCNNMTH
parameter on the CRTSQLxxx commands.
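For example (the object and relational database names are illustrative):

   CRTSQLRPGI OBJ(SPIFFY/PARTS1) RDB(KC000) RDBCNNMTH(*RUW)

specifies remote unit of work (type 1) connection management, while
RDBCNNMTH(*DUW) specifies distributed unit of work (type 2).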

Remote Unit of Work


The remote unit of work facility provides for the remote preparation and execution
of SQL statements. An activation group at computer system A can connect to an
application server at computer system B. Then, within one or more units of work,
that activation group can execute any number of static or dynamic SQL statements
that reference objects at B. After ending a unit of work at B, the activation group
can connect to an application server at computer system C, and so on.

Most SQL statements can be remotely prepared and executed with the following
restrictions:

Ÿ All objects referenced in a single SQL statement must be managed by the
same application server.
Ÿ All of the SQL statements in a unit of work must be executed by the same
application server.

Remote Unit of Work Connection Management: An activation group is in one of
three states at any time:
Connectable and connected
Unconnectable and connected
Connectable and unconnected

The following diagram shows the state transitions:


Begin process



6
┌───────────────┐ ┌───────────────┐
┌──┤ │ Successful CONNECT │ │
│ │ Connectable │%────────────────────────────────┤ Connectable │
CONNECT │ │ and │ │ and │
└─5│ Connected ├────────────────────────────────5│ Unconnected │
│ │ │ │
└────────────┬──┘ CONNECT with system failure or └───────────────┘
& │ COMMIT after the connection is &
│ │ released. │
│ │ │
│ │ │
ROLLBACK or │ │ │ System failure
successful │ │ SQL statement other than CONNECT, │ with rollback
COMMIT │ │ COMMIT, or ROLLBACK │
│ │ │
│ │ ┌─────────────────┐ │
│ │ │ │ │
│ └──────────5│ Unconnectable │ │
│ │ and ├───────────────┘
└─────────────────────┤ Connected │
│ │
└─────────────────┘

Figure 10-1. Remote Unit of Work Activation Group Connection State Transition

The initial state of an activation group is connectable and connected. The applica-
tion server to which the activation group is connected is determined by the RDB
parameter on the CRTSQLxxx and STRSQL commands and may involve an implicit
CONNECT operation. An implicit CONNECT operation cannot occur if an implicit or
explicit CONNECT operation has already successfully or unsuccessfully occurred.
Thus, an activation group cannot be implicitly connected to an application server
more than once.

The connectable and connected state: An activation group is connected to an
application server and CONNECT statements can be executed. The activation
group enters this state when it completes a rollback or successful commit from the
unconnectable and connected state, or a CONNECT statement is successfully exe-
cuted from the connectable and unconnected state.

The unconnectable and connected state: An activation group is connected to an
application server, but a CONNECT statement cannot be successfully executed to
change application servers. The activation group enters this state from the
connectable and connected state when it executes any SQL statement other than
CONNECT, COMMIT, or ROLLBACK.

The connectable and unconnected state: An activation group is not connected to
an application server. The only SQL statement that can be executed is CONNECT.

The activation group enters this state when:


Ÿ The connection was previously released and a successful COMMIT is exe-
cuted.
Ÿ The connection is disconnected using the SQL DISCONNECT statement.
Ÿ The connection was in a connectable state, but the CONNECT statement was
unsuccessful.

Consecutive CONNECT statements can be executed successfully because
CONNECT does not remove the activation group from the connectable state. A
CONNECT to the application server to which the activation group is currently con-
nected is executed like any other CONNECT statement. CONNECT cannot execute
successfully when it is preceded by any SQL statement other than CONNECT,
COMMIT, DISCONNECT, SET CONNECTION, RELEASE, or ROLLBACK (unless
running with COMMIT(*NONE)). To avoid an error, execute a commit or rollback
operation before a CONNECT statement is executed.
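For example, in a remote unit of work program the following sequence avoids the
error; the relational database name is illustrative:

   EXEC SQL COMMIT;
   EXEC SQL CONNECT TO KC105;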

Application-Directed Distributed Unit of Work


The application-directed distributed unit of work facility also provides for the remote
preparation and execution of SQL statements in the same fashion as remote unit of
work. Like remote unit of work, an activation group at computer system A can
connect to an application server at computer system B and execute any number of
static or dynamic SQL statements that reference objects at B before ending the unit
of work. All objects referenced in a single SQL statement must be managed by the
same application server. However, unlike remote unit of work, any number of appli-
cation servers can participate in the same unit of work. A commit or rollback opera-
tion ends the unit of work.

Application-Directed Distributed Unit of Work Connection Management: At
any time:
Ÿ An activation group is always in the connected or unconnected state and has a
set of zero or more connections. Each connection of an activation group is
uniquely identified by the name of the application server of the connection.
Ÿ An SQL connection is always in one of the following states:
– Current and held
– Current and released
– Dormant and held
– Dormant and released

Initial state of an activation group: An activation group is initially in the connected
state and has exactly one connection. The initial state of a connection is current
and held.

The following diagram shows the state transitions:

Begin process

│ ┌──────────────────────── SQL Connection States ────────────────────────┐
│ │ │
│ │ Successful CONNECT │
│ │ or SET CONNECTION specifying │
│ │ ┌─────────────┐ another SQL connection ┌─────────────┐ │
│ │ │ ├───────────────────────────5│ │ │
├──────┼──────5│ Current │ │ Dormant │ │
│ │ │ │%───────────────────────────┤ │ │
│ │ └─────────────┘ Successful CONNECT or └─────────────┘ │
│ │ SET CONNECTION specifying │
│ │ an existing dormant connection │
│ │ │
│ │ │
│ │ ┌─────────────┐ ┌─────────────┐ │
│ │ │ │ RELEASE │ │ │
├──────┼──────5│ Held ├───────────────────────────5│ Released │ │
│ │ │ │ │ │ │
│ │ └─────────────┘ └─────────────┘ │
│ │ │
│ └───────────────────────────────────────────────────────────────────────┘


│ ┌──────────────── Activation Group Connection States ───────────────┐
│ │ │
│ │ The current connection │
│ │ is intentionally ended, or │
│ │ a failure occurs causing the │
│ │ ┌─────────────┐ loss of the connection ┌─────────────┐ │
│ │ │ ├───────────────────────────>│ │ │
└──────┼──────>│ Connected │ │ Unconnected │ │
│ │ │<───────────────────────────┤ │ │
│ └─────────────┘ Successful CONNECT or └─────────────┘ │
│ SET CONNECTION │
│ │
└───────────────────────────────────────────────────────────────────────┘

Figure 10-2. Application-Directed Distributed Unit of Work Connection and Activation Group
Connection State Transitions

Connection States: If an application executes a CONNECT statement and the
server name is known to the application requester and is not in the set of existing
connections of the activation group, then:
Ÿ The current connection is placed in the dormant state and held state.
Ÿ The server name is added to the set of connections and the new connection is
placed in the current and held state.
If the server name is already in the set of existing connections of the activation
group, an error occurs.

A connection in the dormant state is placed in the current state using the SET
CONNECTION statement. When a connection is placed in the current state, the
previous current connection, if any, is placed in the dormant state. No more than
one connection in the set of existing connections of an activation group can be
current at any time. Changing the state of a connection from current to dormant or
from dormant to current has no effect on its held or released state.

A connection is placed in the released state by the RELEASE statement. When an
activation group executes a commit operation, every released connection of the
activation group is ended. Changing the state of a connection from held to released
has no effect on its current or dormant state. Thus, a connection in the released
state can still be used until the next commit operation. There is no way to change
the state of a connection from released to held.
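The following sketch shows how an activation group compiled with DUW connection
management might use these statements to work with two servers in one unit of
work (the relational database names are taken from other examples in this chapter;
the table, column, and host variable names are only illustrations):

EXEC SQL CONNECT TO KC000;
EXEC SQL
UPDATE SPIFFY.INVENTORY
SET QUANTITY = QUANTITY - 1
WHERE ITEM = :PARTNO;
EXEC SQL CONNECT TO MPLS03;
EXEC SQL
UPDATE SPIFFY.INVENTORY
SET QUANTITY = QUANTITY + 1
WHERE ITEM = :PARTNO;
EXEC SQL SET CONNECTION KC000;
EXEC SQL RELEASE MPLS03;
EXEC SQL COMMIT;

The second CONNECT places the KC000 connection in the dormant state, SET
CONNECTION makes it current again, and RELEASE marks the MPLS03 connection
so that it is ended at the commit operation.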



Activation Group Connection States: A different application server can be
established by the explicit or implicit execution of a CONNECT statement. The fol-
lowing rules apply:
Ÿ An activation group cannot have more than one connection to the same appli-
cation server at the same time.
Ÿ When an activation group executes a SET CONNECTION statement, the speci-
fied location name must be an existing connection in the set of connections of
the activation group.
Ÿ When an activation group executes a CONNECT statement, the specified
server name must not be an existing connection in the set of connections of the
activation group.

If an activation group has a current connection, the activation group is in the
connected state. The CURRENT SERVER special register contains the name of
the application server of the current connection. The activation group can execute
SQL statements that refer to objects managed by that application server.

An activation group in the unconnected state enters the connected state when it
successfully executes a CONNECT or SET CONNECTION statement.

If an activation group does not have a current connection, the activation group
is in the unconnected state. The CURRENT SERVER special register contents are
equal to blanks. The only SQL statements that can be executed are CONNECT,
DISCONNECT, SET CONNECTION, RELEASE, COMMIT, and ROLLBACK.

An activation group in the connected state enters the unconnected state when its
current connection is intentionally ended or the execution of an SQL statement is
unsuccessful because of a failure that causes a rollback operation at the application
server and loss of the connection. Connections are intentionally ended when an
activation group successfully executes a commit operation and the connection is in
the released state, or when an application process successfully executes the DIS-
CONNECT statement.

When a Connection is Ended: When a connection is ended, all resources that
were acquired by the activation group through the connection and all resources that
were used to create and maintain the connection are deallocated. For example,
when the activation group executes a RELEASE statement, any open cursors will
be closed when the connection is ended during the next commit operation.

A connection can also be ended as a result of a communications failure in which
case the activation group is placed in the unconnected state. All connections of an
activation group are ended when the activation group ends.

Running with both RUW and DUW connection management: Programs com-
piled with RUW connection management can be called by programs compiled with
DUW connection management. SET CONNECTION, RELEASE, and DISCON-
NECT statements can be used by the program compiled with RUW connection
management to work with any of the active connections. However, when a program
compiled with DUW connection management calls a program compiled with RUW
connection management, CONNECTs that are performed in the program compiled
with RUW connection management will attempt to end all active connections for the
activation group as part of the CONNECT. Such CONNECTs will fail if the conver-
sation used by active connections uses protected conversations. Furthermore,
when protected conversations were used for inactive connections and the
DDMCNV job attribute is *KEEP, these unused DDM conversations will also cause
the connections in programs compiled with RUW connection management to fail.
To avoid this situation, run with DDMCNV(*DROP) and perform a RELEASE and
COMMIT prior to calling any programs compiled with RUW connection manage-
ment that perform CONNECTs.
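The following sketch shows one way to do this (the program name RUWPGM is only
an illustration for a program compiled with RUW connection management). Before
the application runs, set the job attribute:

CHGJOB DDMCNV(*DROP)

Then, in the program compiled with DUW connection management, release and
commit the connections before the call:

EXEC SQL RELEASE ALL;
EXEC SQL COMMIT;
CALL RUWPGM;

The RELEASE and COMMIT end the active connections in the calling program so
that the CONNECT performed in the called program does not fail.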

Likewise, when creating packages for programs compiled with DUW connection
management after creating a package for a program compiled with RUW con-
nection management, either run with DDMCNV(*DROP) or perform a RCLDDMCNV
after creating the package for the programs compiled with DUW connection man-
agement.

Programs compiled with DUW connection management can also be called by pro-
grams compiled with RUW connection management. When the program compiled
with DUW connection management performs a CONNECT, the connection per-
formed by the program compiled with RUW connection management is not discon-
nected. This connection can be used by the program compiled with DUW
connection management.

Implicit Connection Management for the Default Activation Group
The application requester can implicitly connect to an application server. Implicit
connection occurs when the application requester detects the first SQL statement is
being issued by the first active SQL program for the default activation group and
the following items are true:
Ÿ The SQL statement being issued is not a CONNECT statement with parame-
ters.
Ÿ SQL is not active in the default activation group.

For a distributed program, the implicit connection is to the relational database spec-
ified on the RDB parameter. For a nondistributed program, the implicit connection is
to the local relational database.

SQL will end any active connections in the default activation group when SQL
becomes not active. SQL becomes not active when:
Ÿ The application requester detects the first active SQL program for the process
has ended and the following are all true:
– There are no pending SQL changes
– There are no connections using protected conversations
– A SET TRANSACTION statement is not active
– No programs that were precompiled with CLOSQLCSR(*ENDJOB) were
run.
If there are pending changes, protected conversations, or an active SET
TRANSACTION statement, then SQL is placed in the exited state. If programs
precompiled with CLOSQLCSR(*ENDJOB) were run, then SQL will remain
active for the default activation group until the job ends.
Ÿ At the end of a unit of work if SQL is in the exited state. This occurs when
you issue a COMMIT or ROLLBACK command outside of an SQL program.
Ÿ At the end of a job.

Implicit Connection Management for Nondefault Activation Groups
The application requester can implicitly connect to an application server. Implicit
connection occurs when the application requester detects the first SQL statement
issued for the activation group and it is not a CONNECT statement with parame-
ters.

For a distributed program, the implicit connection is made to the relational database
specified on the RDB parameter. For a nondistributed program, the implicit con-
nection is made to the local relational database.

Implicit disconnect can occur at the following parts of a process:


Ÿ When the activation group ends, if commitment control is not active, activation
group level commitment control is active, or the job level commitment definition
is at a unit of work boundary.
If the job level commitment definition is active and not at a unit of work
boundary then SQL is placed in the exited state.
Ÿ If SQL is in the exited state, when the job level commitment definition is com-
mitted or rolled back.
Ÿ At the end of a job.

The following example program is not distributed (no connection is required). It is a
program run at a Spiffy Corporation regional office to gather local repair information
into a report.
CRTSQLxxx PGM(SPIFFY/FIXTOTAL) COMMIT(*CHG) RDB(*NONE)

PROC: FIXTOTAL;
.
.
.
EXEC SQL
SELECT * INTO :SERVICE .A/
FROM REPAIRTOT;
EXEC SQL
COMMIT;
.
.
.
END FIXTOTAL;
.A/ Statement run on the local relational database

Another program, such as the following example, could gather the same information
from Spiffy dealerships in the Kansas City region. This is an example of a distrib-
uted program that is implicitly connected and disconnected:



CRTSQLxxx PGM(SPIFFY/FIXES) COMMIT(*CHG) RDB(KC101) RDBCNNMTH(*RUW)

PROC: FIXES;
.
.
.
EXEC SQL
SELECT * INTO :SERVICE .B/
FROM SPIFFY.REPAIR1;

EXEC SQL .C/
COMMIT;
.
.
.
END FIXES; .D/
.B/ Implicit connection to AS. The statement runs on the AS.
.C/ End of unit of work. The AR is placed in a connectable and connected
state if the COMMIT is successful.
.D/ Implicit disconnect at the end of the SQL program.

Explicit CONNECT
The CONNECT statement is used to explicitly connect an AR to an identified AS.
This SQL statement can be embedded within an application program or you can
issue it using interactive SQL. The CONNECT statement is used with a TO or
RESET clause. A CONNECT statement with a TO clause allows you to specify
connection to a particular AS relational database. The CONNECT statement with a
RESET clause specifies connection to the local relational database.

When you issue (or the program issues) a CONNECT statement with a TO or
RESET clause, the AS identified must be described in the relational database direc-
tory. See “Using the Relational Database Directory” on page 5-5 for more informa-
tion on how to work with this directory. The AR must also be in a connectable state
for the CONNECT statement to be successful.

The CONNECT statement has different effects depending on the connection man-
agement method you use. For RUW connection management, the CONNECT
statement has the following effects:
Ÿ When a CONNECT statement with a TO or RESET clause is successful, the
following occurs:
– Any open cursors are closed, any prepared statements are discarded, and
any held resources are released from the previous AS if the application
process was placed in the connectable state through the use of COMMIT
HOLD or ROLLBACK HOLD SQL statements, or if the application process
is running COMMIT(*NONE).
– The application process is disconnected from its previous AS, if any, and
connected to the identified AS.
– The name of the AS is placed in the Current Server special register.
– Information that identifies the type of AS is placed in the SQLERRP field of
the SQL communication area (SQLCA).
Ÿ If the CONNECT statement is unsuccessful for any reason, the application
remains in the connectable but unconnected state. An application in the
connectable but unconnected state can only run the CONNECT statement.
Ÿ Consecutive CONNECT statements can be run successfully because
CONNECT does not remove the AR from the connectable state. A CONNECT
to the AS to which the AR is currently connected is run like any other
CONNECT statement.
Ÿ If running with commitment control, the CONNECT statement cannot run suc-
cessfully when it is preceded by any SQL statement other than CONNECT,
SET CONNECTION, COMMIT, ROLLBACK, DISCONNECT, or RELEASE. To
avoid an error, perform a COMMIT or ROLLBACK operation before a
CONNECT statement is run. If running without commitment control, the
CONNECT statement is always allowed.

For DUW connection management, the CONNECT statement has the following
effects:
Ÿ When a CONNECT statement with a TO or RESET clause is successful, the
following occurs:
– The name of the AS is placed in the Current Server special register.
– Information that identifies the type of AS is placed in the SQLERRP field of
the SQL communication area (SQLCA).
– Information on the type of connection is put into the SQLERRD(4) field of
the SQLCA. Encoded in this field is the following information:
- Whether the connection is to the local relational database or a remote
relational database.
- Whether or not the connection uses a protected conversation.
- Whether the connection is always read-only, always capable of
updates, or whether the ability to update can change between each unit
of work.
See the DB2 for AS/400 SQL Programming book for more information on
SQLERRD(4).
Ÿ If the CONNECT statement with a TO or RESET clause is unsuccessful
because the AR is not in the connectable state or the server-name is not listed
in the local relational database directory, the connection state of the AR is
unchanged.
Ÿ A connect to a currently connected AS results in an error.
Ÿ A connection without a TO or RESET clause can be used to obtain information
about the current connection. This includes the following information:
– Information that identifies the type of AS is placed in the SQLERRP field of
the SQL communications area.
– Information on whether an update is allowed to the relational database is
encoded in the SQLERRD(3) field. A value of 1 indicates that an update
can be performed. A value of 2 indicates that an update can not be per-
formed over the connection. See the DB2 for AS/400 SQL Programming
book for more information on SQLERRD(3).



It is a good practice for the first SQL statement run by an application process to be
the CONNECT statement. However, when you have CONNECT statements
embedded in your program you may want to dynamically change the AS name if
the program connects to more than one AS. If you are going to run the application
at multiple systems, you can specify the CONNECT statement with a host variable
as shown below, so that the program can be passed the relational database name.
CONNECT TO :host-variable

Without CONNECT statements, all you need to do when you change the AS is to
recompile the program with the new relational database name.

The following example shows two forms of the CONNECT statement (.1/ and .2/)
in an application program:
CRTSQLxxx PGM(SPIFFY/FIXTOTAL) COMMIT(*CHG) RDB(KC105)

PROC: FIXTOTAL;
EXEC SQL CONNECT TO KC105; .1/
..
.
EXEC SQL
SELECT * INTO :SERVICE
FROM REPAIRTOT;
..
.
EXEC SQL COMMIT;
..
.
EXEC SQL CONNECT TO MPLS03 USER :USERID USING :PW; .2/
..
.
EXEC SQL SELECT ...
..
.
EXEC SQL COMMIT;
..
.
END FIXTOTAL;

The example (.2/) shows the CONNECT statement DB2/400 extension (Version 2
Release 2 and later). This extension provides a way that applications can use the
SECURITY=PGM form of SNA LU 6.2 conversation security. The user ID and pass-
word stored in the host USERID and PW variables are transmitted in the ALLO-
CATE verb SECURITY parameter. In this example the ALLOCATE verb flows when
the connection is established to MPLS03. You must specify the user ID and pass-
word with host variables when the CONNECT statement is embedded in a
program.

The following example shows both CONNECT statement forms in interactive SQL.
Note that the password must be enclosed in single quotes.



Type SQL statement, press Enter.
Current connection is to relational database (RDB) KC105.
CONNECT TO KC000_________________________________________________________
..
.
COMMIT___________________________________________________________________
===> CONNECT TO MPLS03 USER JOE USING 'X47K'__________________________________
_________________________________________________________________________
_________________________________________________________________________

| SQL Specific to Distributed Relational Database and SQL CALL


During the precompile process of a distributed DB2/400 application, the OS/400
program may build SQL packages to be run on an AS. After it is compiled, a dis-
tributed SQL program and package must be compatible with the systems that are
being used as application requesters and application servers. "Preparing Distributed
Relational Database Programs” on page 10-21 gives you more information about
the changes to the precompile process and the addition of SQL packages.

This section gives an overview of the SQL statements that are used with distributed
relational database support and some things for you to consider about coexistence
with other systems. For more detail on these subjects, see the DB2 for AS/400
SQL Reference book and the DB2 for AS/400 SQL Programming book.

Distributed Relational Database Statements


The following statements included with the SQL language specifically support a dis-
tributed relational database:
Ÿ CONNECT
Ÿ SET CONNECTION
Ÿ RELEASE
Ÿ DISCONNECT
Ÿ DROP PACKAGE
Ÿ GRANT EXECUTE ON PACKAGE
Ÿ REVOKE EXECUTE ON PACKAGE

| The SQL CALL statement can be used locally, but its primary purpose is to allow a
| procedure to be called on a remote system.

“Connecting to a Distributed Relational Database” on page 10-3 describes using


the CONNECT, SET CONNECTION, RELEASE, and DISCONNECT statements to
manage connections between an AR and an AS. Using the SQL GRANT
EXECUTE ON PACKAGE and REVOKE EXECUTE ON PACKAGE statements to
grant or revoke user authority to SQL packages is described in “Authority to Distrib-
uted Relational Database Objects” on page 4-10. The SQL DROP PACKAGE
statement, as it is used to drop an SQL package, is discussed in “Working With
SQL Packages” on page 10-28.

| The SQL CALL statement is discussed in "SQL CALL Statement (Stored
| Procedures)" on page 10-14.



| SQL CALL Statement (Stored Procedures)
| The SQL CALL statement is not actually specific to distributed relational databases,
| but a discussion of it is included here because its main value is in distributing appli-
| cation logic and processing. The CALL statement provides a capability in a DRDA
| environment much like the Remote Procedure Call (RPC) mechanism does in the
| Open Software Foundation** (OSF**) Distributed Computing Environment (DCE). In
| fact, an SQL CALL to a program on a remote relational database actually is a
| remote procedure call. This type of RPC has certain advantages; for instance, it
| does not require the compilation of interface definitions, nor does it require the cre-
| ation of stub programs.

| You might want to use SQL CALL, or stored procedures, as the technique is some-
| times called, for the following reasons:
| Ÿ To reduce the number of message flows between the AR and AS to perform a
| given function. If a set of SQL operations are to be run, it is more efficient for a
| program at the server to contain the statements and interconnecting logic.
| Ÿ To allow native database operations to be performed at the remote location.
| Ÿ To perform nondatabase operations (for example, sending messages or per-
| forming data queue operations) using SQL.
| Note: Unlike database operations, these operations are not protected by com-
| mitment control by the system.
| Ÿ To access system Application Programming Interfaces (APIs) on a remote
| system.

| A stored procedure and application program can run in the same or different acti-
| vation groups. It is recommended that the stored procedure be compiled with
| ACTGRP(*CALLER) specified to achieve consistency between the application
| program at the AR and the stored procedure at the AS.
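For example, an application might run a set of SQL statements at the regional
center by calling a stored procedure there (a minimal sketch; the procedure name
PARTUPDATE and its parameters are only illustrations):

EXEC SQL CONNECT TO KC000;
EXEC SQL
CALL PARTUPDATE (:PARTNO, :QTY);

The procedure runs in the AS job at KC000, so the SQL statements and other logic
it contains do not require additional message flows between the AR and the AS.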

| When a stored procedure is called that issues an inquiry message, the message is
| sent to the QSYSOPR message queue. The stored procedure waits for a response
| to the inquiry message. To have the stored procedure respond to the inquiry
| message, use the ADDRPYLE command and specify *SYSRPYL on the
| INQMSGRPY parameter of the CHGJOB command in the stored procedure.

| You cannot perform a COMMIT or ROLLBACK in a stored procedure if it runs in an
| AS job in the default activation group. When a stored procedure and an application
| program run under different commitment definitions, the COMMIT and ROLLBACK
| statements in the application program only affect its own commitment definition.
| You must commit the changes in the stored procedure by other means.

| For more information on SQL CALL, see the DB2 for AS/400 SQL Reference book.

| Using SQL CALL to the DB2 Universal Database (UDB): If you will be using the
| SQL CALL statement to call stored procedures from an older release of DB2 for
| AS/400 to DB2 UDB, you may need one of these PTFs:
| V3R1 SF36701
| V3R2 SF36699
| V3R6 SF36700
| V3R7 SF33877



| The error that these PTFs fix is reported at DB2 for AS/400 as a 'Distributed Rela-
| tional Database Architecture (DRDA) protocol error' where the 'Data descriptor did
| not match data' at the application requestor (SQL code -30000, SQL state 58008).

| From AS/400 systems running OS/400 V3R1 or V3R6, you will need to call DB2
| UDB procedures with the procedure name in a host variable, as in the following
| example:
| CALL :host-procedure-name(...
| For V3R2 and V3R7, there are PTFs that allow you to embed the procedure name
| in the SQL statement without putting it into a host variable.
| V3R2 SF36535
| V3R7 SF35932
| Stored procedures written in C that are invoked on a platform running DB2 UDB
| cannot use argc and argv as parameters (that is, they cannot be of type main()).
| This differs from AS/400 stored procedures which must use argc and argv. For
| examples of stored procedures for DB2 UDB platforms, see the \SQLLIB\SAMPLES
| (or /sqllib/samples) subdirectory. Look for outsrv.sqc and outcli.sqc in the C subdi-
| rectory.

| For UDB stored procedures called by AS/400, make sure that the procedure name
| is in upper case letters. AS/400 currently folds procedure names to upper case.
| This means that a procedure on the UDB server, having the same procedure name
| but in lower case, will not be found. For stored procedures on AS/400, the proce-
| dure names are in upper case.

| Stored procedures on the AS/400 cannot have a COMMIT in them when they are
| created to run in the same activation group as the calling program (the proper way
| to create them). In UDB, a stored procedure is allowed to have a COMMIT, but the
| application designer should be aware that there is no knowledge on the part of DB2
| for AS/400 that the commit occurred.

DB2 for AS/400 Coexistence


When you write and maintain programs for a distributed relational database using
the SQL language, you need to consider the other systems in the
distributed relational database network. The program you are writing or maintaining
may have to be compatible with the following:
Ÿ Other AS/400 systems
Ÿ Previous AS/400 releases
Ÿ Systems that are not AS/400 systems

Remember that the SQL statements in a distributed SQL program run on the AS.
Even though the program runs on the AR, the SQL statements are in the SQL
package to be run on the AS. Those statements must be supported by the AS and
be compatible with the collections, tables, and views that exist on the AS. Also, the
users who run the program on the AR must be authorized to the SQL package and
other SQL objects on the AS.
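For example, authority to run an SQL package can be granted at the AS with a
statement like the following (a sketch; the package and user names are only
illustrations):

GRANT EXECUTE ON PACKAGE SPIFFY.FIXTOTAL TO JOE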

Releases of the AS/400 system before Version 2 Release 1 Modification 1 do not
support distributed relational database. If you are writing applications for an earlier
version of the AS/400 system, you need to use DDM support that is at the level
required for the target system. You can convert an SQL program from a previous
release to a distributed SQL program by creating the program again using the
CRTSQLxxx command and specifying the relational database name (RDB param-
eter) for an AS. This compiles the program again using the distributed relational
database support in DB2/400 and creates the SQL package needed on the AS.

You can write DB2/400 programs that run on application servers that are not
AS/400 systems and these other platforms may support more or less SQL func-
tions. Statements that are not supported on the DB2/400 AR can be used and com-
piled on the AS/400 system when the AS supports the function. SQL programs
written to run on an AS/400 AS only provide the level of support described in this
guide. See the support documentation for the other systems to determine the level
of function they provide.

Ending Units of Work


You should be careful about ending SQL programs with uncommitted work. When a
program ends with uncommitted work, the connection to the relational database
remains active. (In some cases involving programs running in system-named acti-
vation groups, however, the system performs an automatic commit when the
program ends.)

This behavior differs from that of other systems because in the OS/400 operating
system, COMMITs and ROLLBACKs can be used as commands from the
command line or in a CL program. However, the preceding scenario can lead to
unexpected results in the next SQL program run, unless you plan for the situation.
For example, if you run interactive SQL next (STRSQL command), the interactive
session starts up in the state of being connected to the previous AS with uncom-
mitted work. As another example, if following the preceding scenario, you start a
second SQL program that does an implicit connect, an attempt is made to find and
run a package for it on the AS that was last used. This may not be the AS that you
intended. To avoid these surprises, always commit or roll back the last unit of work
before ending any application program.

Coded Character Set Identifier (CCSID)


Support for the national language of any country requires the proper handling of a
minimum set of characters. A cross-system support for the management of char-
acter information is provided with the IBM Character Data Representation Architec-
ture (CDRA). CDRA defines the coded character set identifier (CCSID) values to
identify the code points used to represent characters, and to convert these codes
(character data), as needed to preserve their meanings.

The use of an architecture such as CDRA and associated conversion protocols is
important in the following situations:
Ÿ More than one national language version is installed on the AS/400 system.
Ÿ Multiple AS/400 systems are sharing data between systems in different coun-
tries with different primary national language versions.
Ÿ AS/400 systems and non-AS/400 systems are sharing data between systems in
different countries with different primary national language versions.

Tagging is the primary means to assign meaning to coded graphic characters. The
tag may be in a data structure that is associated with the data object (explicit
tagging), or it may be inherited from objects such as the job or the system itself
(implicit tagging).

DB2/400 tags character columns with CCSIDs. A CCSID is a 16-bit number identi-
fying a specific set of encoding scheme identifiers, character set identifiers, code
page identifiers, and additional coding-related information that uniquely identifies
the coded graphic character representation used. When running applications, data
is not converted when it is sent to another system; it is sent as tagged along with
its CCSID. The receiving job automatically converts the data to its own CCSID if it
is different from the way the data is tagged.

The CDRA has defined the following range of values for CCSIDs.
00000 Use next hierarchical CCSID
00001 through 28671 IBM-registered CCSIDs
28672 through 65533 Reserved
65534 Refer to lower hierarchical CCSID
65535 No conversion done

See the National Language Support book for a list of the OS/400 CCSIDs and the
Character Data Representation Architecture - Level 1, Registry for a complete list
of the CDRA CCSIDs. For more information on handling CCSIDs, see the DB2 for
AS/400 SQL Reference and the DB2 for AS/400 SQL Programming book.

The following illustration shows the parts of a CCSID.

┌────────────────────┐ ┌────────────────────┐
│ │ │ Additional │
│ Character Set │ │ Coding-Related │
│ Code Page │ ┌───────────┐ │ Required │
│ ├───────┤ CCSID ├────────┤ Information │
└────────────────────┘ └───┬────┬──┘ └────────────────────┘
│ │
│ │
┌──────────┘ └──────────┐
│ │
┌─────────┴──────────┐ ┌──────────┴─────────┐
│ │ │ │
│ Encoding │ │ Character │
│ Scheme │ │ Size │
│ │ │ │
└────────────────────┘ └────────────────────┘

Figure 10-3. Coded Character Set Identifier (CCSID)

AS/400 Support
The default CCSID for a job on the AS/400 system is specified using the Change
Job (CHGJOB) command. If a CCSID is not specified in this way, the job CCSID is
obtained from the CCSID attribute of the user profile. If a CCSID is not specified on
the user profile, the system gets it from the QCCSID system value. This QCCSID
value is initially set to 65535. If your AS/400 system is in a distributed relational
database with unlike systems, it may not be able to use CCSID 65535. See
Appendix B, “Cross-Platform Access Using DRDA” on page B-1 for things to con-
sider when operating in an unlike environment.



All control information that flows between the AR and AS is in CCSID 500 (a DRDA
standard). This is information such as collection names, table names, and some
descriptive text. Using variant characters for control information causes these
names to be converted, which can affect performance. Package names are also
sent in CCSID 500. Using variant characters in a package name causes the
package name to be converted. This means the package is not found at run time.

After a job has been initiated, you can change the job CCSID by using the
CHGJOB command. To do this:
1. Enter the Work with Job (WRKJOB) command to get the Work with Jobs
display.
2. Select option 2 (Display job definition attributes).
This locates the current CCSID value so you can reset the job to its original
CCSID value later.
3. Enter the CHGJOB command with the new CCSID value.
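For example (the CCSID value shown is only an illustration):

CHGJOB CCSID(937)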

The new CCSID value is reflected in the job immediately. However, if the job
CCSID you change is an AR job, the new CCSID does not affect the work being
done until the next CONNECT.

Attention: If you change the CCSID of an AS job, the results cannot be predicted.

Source files are tagged with the job CCSID if a CCSID is not explicitly specified on
the Create Source Physical File (CRTSRCPF) or Create Physical File (CRTPF)
command for source files. Externally described database files and tables are
tagged with the job CCSID if a CCSID is not explicitly specified in data description
specification (DDS), in interactive data definition utility (IDDU), or in the CREATE
TABLE SQL statement. For source and externally described files, if the job CCSID
is 65535, the default CCSID based on the language of the operating system is
used. Program described files are tagged with CCSID 65535. Views are tagged
with the CCSID of the corresponding table tag or column-level tags. If a view is
defined over several tables, it is tagged at the column level and assumes the tags
of the underlying columns. Views cannot be explicitly tagged with a CCSID. The
system automatically converts data between the job and the table if the CCSIDs
are not equal and neither of the CCSIDs is equal to 65535.

You can change the CCSID of a tagged table only if it is not tagged at the column
level and does not have views defined on it. To change the CCSID of a tagged table, use the
Change Physical File (CHGPF) command. To change a table with column-level
tagging, you must create it again and copy the data to a new table using
FMT(*MAP) on the Copy File (CPYF) command. When a table has one or more
views defined, you must do the following to change the table:
1. Save the view and table along with their access paths.
2. Delete the views.
3. Change the table.
4. Restore the views and their access paths over the created table.
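For example, the following command changes the CCSID of a table that is not
tagged at the column level and has no views defined on it (the file name and
CCSID value are only illustrations):

CHGPF FILE(SPIFFY/INVENTORY) CCSID(937)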

Source files and externally described files migrated to DB2/400 that are not tagged
or are implicitly tagged with CCSID 65535 will be tagged with the default CCSID
based on the language of the operating system installed. This includes files that are
on the system when you install a new release and files that are restored to
DB2/400.

All data that is sent between an AR and an AS is not converted when it is sent. In addition,
the CCSID is also sent. The receiving job automatically converts the data to its own
CCSID if it is different from the way the data is tagged. For example, consider the
following application that is run on a dealership system, KC105.
CRTSQLxxx PGM(PARTS1) COMMIT(*CHG) RDB(KC000)

PROC: PARTS1;
.
.
EXEC SQL
SELECT * INTO :PARTAVAIL
FROM INVENTORY
WHERE ITEM = :PARTNO;
.
.
END PARTS1;

In the above example, the local system (KC105) has the QCCSID system value set
at CCSID 37. The remote regional center (KC000) uses CCSID 937 and all its
tables are tagged with CCSID 937. CCSID processing takes place as follows:
Ÿ The KC105 system sends an input host variable (:PARTNO) in CCSID 37.
(The DECLARE VARIABLE SQL statement can be used if the CCSID of the job
is not appropriate for the host variable; an example follows this list.)
Ÿ The KC000 system converts :PARTNO to CCSID 937, selects the required
data, and sends the data back to KC105 in CCSID 937.
Ÿ When KC105 gets the data, it converts it to CCSID 37 and places it in
:PARTAVAIL for local use.
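The DECLARE VARIABLE statement mentioned above might be used as in the
following example (the CCSID value shown is only an illustration):

EXEC SQL
DECLARE :PARTNO VARIABLE CCSID 37;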

Other Data Conversion


Sometimes, when you are doing processing on a remote system, your program
may need to convert the data from one system so that it can be used on the other.
DRDA support on the AS/400 system converts the data automatically between
other systems that use DRDA support. When a DB2/400 AR connects to an AS, it
sends information that identifies its type. Likewise, the AS sends back information
to the AS/400 system that identifies its processor type (for example, S/390* host or
AS/400 system). The two systems then automatically convert the data between
them as defined for this connection. This means that you do not need to program
for architectural differences between systems.

Data conversion between IBM systems with DRDA support includes data types
such as:
Ÿ Floating point representations
Ÿ Zoned decimal representations
Ÿ Byte reversal
Ÿ Mixed data types
Ÿ AS/400 specific data types such as:
– DBCS-only
– DBCS-either
– Integer with precision and scale

DDM Files and SQL


You can use AS/400 DDM support to help you do some distributed relational data-
base tasks within a program that also uses SQL distributed relational database
support. It may be faster, for example, for you to use DDM and the Copy File
(CPYF) command to get a large number of records rather than an SQL FETCH
statement. Also, DDM can be used to get external file descriptions of the remote
system data brought in during compile for use with the distributed relational data-
base application. To do this you need to use DDM as described in Chapter 3,
Communications for an AS/400 Distributed Relational Database, and Chapter 5,
Setting Up an AS/400 Distributed Relational Database.

The following example shows how you can add a relational database directory
entry and create a DDM file so that the same job can be used on the AS and target
system.
Note: Either both connections must be protected or both connections must be
unprotected for the conversation to be shared.

Relational Database Directory:

ADDRDBDIRE RDB(KC000) +
RMTLOCNAME(KC000)
TEXT('Kansas City regional database')

DDM File:

CRTDDMF FILE(SPIFFY/UPDATE)
RMTFILE(SPIFFY/INVENTORY)
RMTLOCNAME(KC000)
TEXT('DDM file to update local orders')

The following is a sample program that uses both the relational database directory
entry and the DDM file in the same job on the remote system:



CRTSQLxxx PGM(PARTS1) COMMIT(*CHG) RDB(KC000) RDBCNNMTH(*RUW)

PROC :PARTS1;
OPEN SPIFFY/UPDATE;
.
.
.
CLOSE SPIFFY/UPDATE;
.
.
.
EXEC SQL
SELECT * INTO :PARTAVAIL
FROM INVENTORY
WHERE ITEM = :PARTNO;
EXEC SQL
COMMIT;
.
.
.
END PARTS1;

See the Distributed Data Management book for more information on how to use
AS/400 DDM support.

Preparing Distributed Relational Database Programs


When you write a program using the SQL language, you normally embed the SQL
statements in a host program. The host program is the program that contains the
SQL statements, written in one of the host languages: the AS/400 PL/I, ILE C/400,
COBOL/400, ILE COBOL/400, FORTRAN/400, RPG/400, or ILE RPG/400 program-
ming languages. In a host program you use variables referred to as host
variables. These are variables used in SQL statements that are identifiable to the
host program. In RPG, this is called a field name; in FORTRAN, PL/I, and C, this is
known as a variable; in COBOL, this is called a data item.

You can code your distributed DB2/400 programs in a way similar to the coding for
a DB2/400 program that is not distributed. You use the host language to embed the
SQL statements with the host variables. Also, like a DB2/400 program that is not
distributed, a distributed DB2/400 program is prepared using the following
processes:
Ÿ Precompiling
Ÿ Compiling
Ÿ Binding the application
Ÿ Testing and debugging

However, a distributed DB2/400 program also requires that an SQL package is
created on the AS to access data.

This section discusses these steps in the process, outlining the differences for a
distributed DB2/400 program.



Precompiling Programs with SQL Statements
You must precompile and compile an application program containing embedded
SQL statements before you can run it. Precompiling such programs is done by an
SQL precompiler. The SQL precompiler scans each statement of the application
program source and does the following:
Ÿ Looks for SQL statements and for the definition of host variable names
Ÿ Verifies that each SQL statement is valid and free of syntax errors
Ÿ Validates the SQL statements using the description in the database
Ÿ Prepares each SQL statement for compilation in the host language
Ÿ Produces information about each precompiled SQL statement

Application programming statements and embedded SQL statements are the
primary input to the SQL precompiler. The SQL precompiler assumes that the host
language statements are syntactically correct. If the host language statements are
not syntactically correct, the precompiler may not correctly identify SQL statements
and host variable declarations.

The SQL precompile process produces a listing and a temporary source file
member. It can also produce the SQL package depending on what is specified for
the OPTION and RDB parameters of the precompiler command. See “Compiling an
Application Program” on page 10-24 for more information about this parameter.

Listing
The output listing is sent to the printer file specified by the PRTFILE parameter of
the CRTSQLxxx command. The following items are written to the printer file:
Ÿ Precompiler options
This is a list of all the options specified with the CRTSQLxxx command and the
date the source member was last changed.
Ÿ Precompiler source
This output is produced if the *SOURCE option is used for non-ILE precompiles
or if the OUTPUT(*PRINT) parameter is specified for ILE precompiles. It shows
each precompiler source statement with its record number assigned by the pre-
compiler, the sequence number (SEQNBR) you see when using the source
entry utility (SEU), and the date the record was last changed.
Ÿ Precompiler cross-reference
This output is produced if *XREF was specified in the OPTION parameter. It
shows the name of the host variable or SQL entity (such as tables and
columns), the record number where the name is defined, what the name is
defined as, and the record numbers where the name occurs.
Ÿ Precompiler diagnostic list
This output supplies diagnostic messages, showing the precompiler record
numbers of statements in error.



Temporary Source File Member
Source statements processed by the precompiler are written to QSQLTEMP in the
QTEMP library (QSQLTEMP1 in the QTEMP library for programs created using
CRTSQLRPGI). In your precompiler-changed source code, SQL statements have
been converted to comments and calls to the SQL interface modules: QSQROUTE,
QSQLOPEN, QSQLCLSE, and QSQLCMIT. The name of the temporary source file
member is the same as the name specified in the PGM parameter of CRTSQLxxx.
This member cannot be changed before being used as input to the compiler.

QSQLTEMP or QSQLTEMP1 can be moved to a permanent library after the
precompile, if you want to compile at a later time. If you change the records of the
temporary source file member, the compile attempted later will fail.

SQL Package Creation


An object called an SQL package can be created as part of the precompile process
when the CRTSQLxxx command is compiled. See “Compiling an Application
Program” on page 10-24 and “Binding an Application” on page 10-24 for informa-
tion on situations that affect package creation as part of these processes. See
“Working With SQL Packages” on page 10-28 for more information on the SQL
package and commands that you can use to work with a package.

Precompiler Commands
The DB2/400 Query Manager and SQL Development Kit program has seven pre-
compiler commands, one for each of the host languages.

Host Language                              Command
AS/400 PL/I                                CRTSQLPLI
ILE C/400 language                         CRTSQLCI
COBOL/400 language                         CRTSQLCBL
ILE COBOL/400 language                     CRTSQLCBLI
FORTRAN/400 language                       CRTSQLFTN
RPG III (part of RPG/400 language)         CRTSQLRPG
ILE RPG/400 language                       CRTSQLRPGI

A separate command for each language exists so each language can have param-
eters that apply only to that language. For example, the options *APOST and
*QUOTE are unique to COBOL. They are not included in the commands for the
other languages. The precompiler is controlled by parameters specified when it is
called by one of the SQL precompiler commands. The parameters specify how the
input is processed and how the output is presented.

You can precompile a program without specifying anything more than the name of
the member containing the program source statements as the PGM parameter (for
non-ILE precompiles) or the OBJ parameter (for ILE precompiles) of the
CRTSQLxxx command. SQL assigns default values for all precompiler parameters
(which may, however, be overridden by any that you explicitly specify).

The following briefly describes parameters common to all the CRTSQLxxx com-
mands that are used to support distributed relational database. To see the syntax
and full description of the parameters and supported values, see the DB2 for
AS/400 SQL Programming book.



RDB
Specifies the name of the relational database where the SQL package is
to be created. If *NONE is specified, then the program or module is not a dis-
tributed object and the CRTSQLPKG command cannot be used. The relational
database name can be the name of the local database.

RDBCNNMTH
Specifies the type of semantics to be used for CONNECT statements: remote
unit of work (RUW) or distributed unit of work (DUW) semantics.

SQLPKG
Specifies the name and library of the SQL package.

USER
Specifies the user name sent to the remote system when starting the conversa-
tion. This parameter is used only if a conversation is started as part of the pre-
compile process.

PASSWORD
Specifies the password to be used on the remote system when starting the
conversation. This parameter is used only if a conversation is started as part of
the precompile process.

REPLACE
Specifies if any objects created as part of the precompile process should be
able to replace an existing object.

The following example creates a COBOL program named INVENT and stores it in
a library named SPIFFY. The SQL naming convention is selected, and every row
selected from a specified table is locked until the end of the unit of recovery. An
SQL package with the same name as the program is created on the remote rela-
tional database named KC000.

CRTSQLCBL PGM(SPIFFY/INVENT) OPTION(\SRC \XREF \SQL)


COMMIT(\ALL) RDB(KCððð)

Compiling an Application Program


The DB2/400 precompiler automatically calls the host language compiler after the
successful completion of a precompile, unless the *NOGEN precompiler option is
specified. The compiler command is run specifying the program name, source file
name, precompiler created source member name, text, and user profile. Other
parameters are also passed to the compiler, depending on the host language. For
more information on these parameters, see the DB2 for AS/400 SQL Programming
book.

Binding an Application
Before you can run your application program, a relationship between the program
and any referred-to tables and views must be established. This process is called
binding. The result of binding is an access plan. The access plan is a control
structure that describes the actions necessary to satisfy each SQL request. An
access plan contains information about the program and about the data the
program intends to use. For distributed relational database work, the access plan
is stored in the SQL package and managed by the system along with the SQL
package. See “Working With SQL Packages” on page 10-28 for more information
about SQL packages.

SQL automatically attempts to bind and create access plans when the result of a
successful compile is a program or service program object. If the compile is not
successful or the result of a compile is a module object, access plans are not
created. If, at run time, the database manager detects that an access plan is not
valid or that changes have occurred to the database that may improve performance
(for example, the addition of indexes), a new access plan is automatically created.
If the AS is not an AS/400 system, then a bind must be done again using the
CRTSQLPKG command. Binding does three things:
Ÿ Revalidates the SQL statements using the description in the database.
During the bind process, the SQL statements are checked for valid table, view,
and column names. If a referred to table or view does not exist at the time of
the precompile or compile, the validation is done at run time. If the table or
view does not exist at run time, a negative SQLCODE is returned.
Ÿ Selects the access paths needed to access the data your program wants to
process.
In selecting an access path, indexes, table sizes, and other factors are consid-
ered when SQL builds an access plan. The bind process considers all indexes
available to access the data and decides which ones (if any) to use when
selecting a path to the data.
Ÿ Attempts to build access plans.
If all the SQL statements are valid, the bind process builds and stores access
plans in the program.

If the characteristics of a table or view your program accesses have changed, the
access plan may no longer be valid. When you attempt to use an access plan that
is not valid, the system automatically attempts to rebuild the access plan. If the
access plan cannot be rebuilt, a negative SQLCODE is returned. In this case, you
might have to change the program's SQL statements and reissue the CRTSQLxxx
command to correct the situation.

For example, if a program contains an SQL statement that refers to COLUMNA in
TABLEA and the user deletes and recreates TABLEA so that COLUMNA no longer
exists, when you call the program, the automatic rebind is unsuccessful because
COLUMNA no longer exists. You must change the program source and reissue the
CRTSQLxxx command.

Testing and Debugging


Testing and debugging distributed SQL programs is similar to testing and debug-
ging local SQL programs, but certain aspects of the process are different.

More than one system will eventually be required for testing. If applications are
coded so that the relational database names can easily be changed by recompiling
the program, changing the input parameters to the program, or making minor mod-
ifications to the program source, most testing can be accomplished using a single
system.

After the program has been tested against local data, the program is then made
available for final testing on the distributed relational database network. Consider
testing the application locally on the system that will be the AS when the application
is tested over a remote connection, so that only the program will need to be moved
when the testing moves into a distributed environment.

Debugging a distributed SQL program uses the same techniques as debugging a
local SQL program. You use the Start Debug (STRDBG) command to start the
debugger and to put the application in debug mode. You can add breakpoints, trace
statements, and display the contents of variables.

However, to debug a distributed SQL program, you must specify the value of *YES
for the UPDPROD parameter. This is because OS/400 distributed relational data-
base support uses files in library QSYS and QSYS is a production library. This
allows data in production libraries to be changed on the AR. Issuing the STRDBG
command on the AR only puts the AR job into debug mode, so your ability to
manipulate data on the AS is not changed.
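For example (the program name is taken from an earlier example in this chapter
and is only an illustration):

STRDBG PGM(SPIFFY/FIXES) UPDPROD(*YES)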

While in debug mode on the AR, informational messages are entered in the job log
for each SQL statement run. These messages give information about the result of
each SQL statement. A list of SQL return codes and a list of error messages for
distributed relational database are provided in Chapter 9, Handling Distributed
Relational Database Problems.

Informational messages about how the system maximizes processing efficiency of
SQL statements are also issued as a result of being in debug mode. Since this
optimization occurs at the AS, these types of messages will not appear in the AR
job log. To get this information, the AS job must be put into debug mode.

| If both the AR and AS are AS/400 systems, and they are connected with APPC,
| you can use the Submit Remote Command (SBMRMTCMD) command to start the
| debug mode in an AS job. Create a DDM file as described in “Setting Up DDM
| Files” on page 5-13. The communications information in the DDM file must match
| the information in the relational database directory entry for the relational database
| being accessed. Then issue the command:
| SBMRMTCMD CMD('STRDBG UPDPROD(*YES)') DDMFILE(ddmfile name)

The SBMRMTCMD command starts the AS job if it does not already exist and
starts the debug mode in that job. Use the methods described in “Monitoring Rela-
tional Database Activity” on page 6-1 to examine the AS job log to find the job.

You can also use an SQL CALL statement (stored procedure) from either a
non-AS/400 or another AS/400 to start the debug mode in an AS job. See “SQL
CALL Statement (Stored Procedures)” on page 10-14 for more information.

The following method for putting the AS job into debug mode works with any AR
and a DB2/400 AS.
Ÿ Sign on to the AS and find the AS job.
Ÿ Issue the Start Service Job (STRSRVJOB) command from your interactive
job (the job you are using to find the AS job) as shown:
STRSRVJOB (job-number/user-ID/job-name)
The job name for the STRSRVJOB command is the name of the AS job.
Issuing this command lets you issue certain commands from your interactive
job that affect the AS job. One of these commands is the Start Debug
(STRDBG) command.
Ÿ Issue the STRDBG command using a value of *YES for the UPDPROD param-
eter in the interactive job. This puts the AS job into debug mode to produce
debug messages on the AS job log.

To end this debug session, either end your interactive job by signing off or use the
End Debug (ENDDBG) command followed by the End Service Job (ENDSRVJOB)
command.

Since the AS job must be put into debug before the SQL statements are run, the
application may need to be changed to allow you time to set up debug on the AS.
The AS job starts as a result of the application connecting to the AS. Your applica-
tion could be coded to enter a wait state after connecting to the AS until debug is
started on the AS.

| If you can anticipate the prestart job that will be used for a TCP/IP connection
| before it occurs, such as when there is only one waiting for work and there is no
| interference from other clients, you do not need to introduce a delay.

Program References
When a program is created, the OS/400 licensed program stores information about
all collections, tables, views, SQL packages, and indexes referred to in SQL state-
ments in an SQL program.

You can use the Display Program References (DSPPGMREF) command to display
all object references in the program. If the SQL naming convention is used, the
library name is stored in one of three ways:
Ÿ If the SQL name is fully qualified, the collection name is stored as the name
qualifier.
Ÿ If the SQL name is not fully qualified, and the DFTRDBCOL parameter is not
specified, the authorization ID of the statement is stored as the name qualifier.
Ÿ If the SQL name is not fully qualified, and the DFTRDBCOL parameter is speci-
fied, the collection name specified on the DFTRDBCOL parameter is stored as
the name qualifier.
If the system naming convention is used, the library name is stored in one of three
ways:
Ÿ If the object name is fully qualified, the library name is stored as the name
qualifier.
Ÿ If the object is not fully qualified, and the DFTRDBCOL parameter is not speci-
fied, *LIBL is stored.
Ÿ If the SQL name is not fully qualified, and the DFTRDBCOL parameter is speci-
fied, the collection name specified on the DFTRDBCOL parameter is stored as
the name qualifier.
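For example, the following command displays the object references in one of the
programs created earlier in this chapter (the output option shown is only an
illustration):

DSPPGMREF PGM(SPIFFY/INVENT) OUTPUT(*PRINT)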



Working With SQL Packages
An SQL package is an SQL object used specifically by distributed relational data-
base applications. It contains control structures for each SQL statement that
accesses data on an AS. These control structures are used by the AS at run time
when the application program requests data using the SQL statement.

You must use a control language (CL) command to create an SQL package
because there is no SQL statement for SQL package creation. You can create an
SQL package in two ways:
Ÿ Using the CRTSQLxxx command with a relational database name specified in
the RDB parameter. See “Precompiling Programs with SQL Statements” on
page 10-22.
Ÿ Using the CRTSQLPKG command.

SQL Package Management


After an SQL package is created, you can manage it the same way you manage
other objects on the AS/400 system, with some restrictions. You can save and
restore it, send it to other systems, and grant and revoke a user’s authority to the
package. You can also delete it by entering the Delete SQL Package
(DLTSQLPKG) command or the DROP PACKAGE SQL statement.
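For example, either of the following removes an SQL package named INVENT in
the SPIFFY collection (a sketch using names from earlier examples in this chapter):

DLTSQLPKG SQLPKG(SPIFFY/INVENT)

DROP PACKAGE SPIFFY.INVENT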

When a distributed SQL program is created, the name of the SQL package and an
internal consistency token are saved in the program. These are used at run time to
find the SQL package and verify that the SQL package is correct for this program.
Because the name of the SQL package is critical for running distributed SQL pro-
grams, an SQL package cannot be moved, renamed, duplicated, or restored to a
different library.

Create SQL Package (CRTSQLPKG) Command


You do not need the DB2/400 Query Manager and SQL Development Kit licensed
program to create an SQL package on an AS. You can enter the CRTSQLPKG
command to create an SQL package from a compiled distributed relational data-
base program. You can also use this command to replace an SQL package that
was created previously. A new SQL package is created on the relational database
defined by the RDB parameter. The new SQL package has the same name and is
placed in the same library as specified on the PKG parameter of the CRTSQLxxx
command.



Job: B,I Pgm: B,I REXX: B,I Exec
┌─\LIBL/────────┐
55──CRTSQLPKG──PGM(──┼───────────────┼──program-name──)────────────────5
├─\CURLIB/──────┤
└─library-name/─┘
5──┬───────────────────────────────────────┬───────────────────────────5
│ ┌─\PGM─────────────────────┐ │
└─RDB(──┴─relational-database-name─┴──)─┘
5──┬─────────────────────────┬──┬────────────────────────────┬─────────5
│ ┌─\CURRENT──┐ │ │ ┌─\NONE────┐ │
└─USER(──┴─user-name─┴──)─┘ └─PASSWORD(──┴─password─┴──)─┘
5──┬────────────────────────────────┬──┬───────────────────────┬───────5
│ ┌─10─────────────┐ │ │ ┌─\YES─┐ │
└─GENLVL(──┴─severity-level─┴──)─┘ └─REPLACE(──┴─\NO──┴──)─┘
5──┬────────────────────────────────────┬──────────────────────────────5
│ ┌─\PGM────────────┐ │
└─DFTRDBCOL(──┼─\NONE───────────┼──)─┘
└─collection-name─┘
5──┬───────────────────────────────────────────────────────┬───────────5
│ ┌─\LIBL/────────┐ ┌─QSYSPRT───────────┐ │
└─PRTFILE(──┼───────────────┼──┴─printer-file-name─┴──)─┘
├─\CURLIB/──────┤
└─library-name/─┘
5──┬──────────────────────────┬────────────────────────────────────────5
│ ┌─\PGM────┐ │
└─OBJTYPE(──┴─\SRVPGM─┴──)─┘
5──┬───────────────────────────────────┬───────────────────────────────5
│ ┌─\ALL──────────────┐ │
│ │ ┌─.───────────┐ │ │
└─MODULE(──┴──6─module-name─┴──(1)──)─┘
5──┬─────────────────────────────┬────────────────────────────────────5%
│ ┌─\PGMTXT───────┐ │
└─TEXT(──┼─\BLANK────────┼──)─┘
└─'description'─┘
Note:
1 A maximum of 256 modules may be specified.

PGM
Specifies the qualified name of the program for which the SQL package is
being created.
*LIBL: Specifies that the library list is used to locate the program.
*CURLIB: Specifies that the current library for the job is used to locate the
program. If a current library entry does not exist in the library list, the QGPL
library is used.
library-name: Specifies the library where the program is located.
program-name: Specifies the name of the distributed program for which the
SQL package is being created.

RDB
Specifies the relational database name that identifies the remote database
where the SQL package is being created.



*PGM: Specifies that the relational database name to be used is the same as
the value specified on the RDB parameter of the CRTSQLxxx command used
when the program was created.
relational-database-name: Specifies the name of the relational database where
the SQL package is to be created.

USER
Specifies the user name sent to the remote system when starting the conversa-
tion.
*CURRENT: The user name associated with the current job is used.
user-name: Specifies the user name to be used for the remote job.
PASSWORD
Specifies the password to be used on the remote system.
*NONE: No password is sent. This value is not valid if a user name is specified
on the USER parameter.
password: Specifies the password of the user name specified on the USER
parameter.

GENLVL
Controls the generation of the SQL package. If error messages are returned
with a severity greater than the GENLVL value, the SQL package is not
created.

10: If a severity level value is not specified, the default severity level is 10.
severity-level: Specify a number from 0 through 40. Some suggested values
are listed below:

10 warnings
20 general error messages
30 serious error messages
40 system detected error messages

Note: There are some errors that cannot be controlled by GENLVL. When
those errors occur, the SQL package is not created.

REPLACE
Specifies whether or not to replace an existing SQL package of the same name
with a newly created SQL package.
*YES: Specifies that if the SQL package already exists, it will be replaced with
the new SQL package.
*NO: Specifies that the create SQL package operation will end if an SQL
package already exists.

DFTRDBCOL
Identifies the default collection name to be used for unqualified names of
tables, views, indexes and SQL packages with static SQL statements.
*PGM: Specifies that the collection name to be used is the same as the
DFTRDBCOL parameter value used when the program was created.



*NONE: Specifies that unqualified names for tables, indexes, views, and SQL
packages will use the search conventions defined for the *SQL and *SYS
options in the SQL precompiler commands.
collection-name: Specify the collection name that is to be used for unqualified
tables, views, indexes, and SQL packages.

PRTFILE
Specifies the qualified name of the printer device file to which the precompiler
listing is directed. The file should have a minimum length of 132 characters. If a
file with a record length of less than 132 characters is specified, information is
lost.
*LIBL: Specifies the library list used to locate the printer file.
*CURLIB: Specifies that the current library for the job is used to locate the
printer file. If no library entry exists in the library list, QGPL is used.
library-name: Specify the library where the printer file is located.
QSYSPRT: If a file name is not specified, the precompiler listing is directed to
the IBM-supplied printer file QSYSPRT.
printer-file-name: Specify the name of the printer device file to which the pre-
compiler listing is directed.

OBJTYPE
Specifies the type of program for which an SQL package is created.
*PGM: Create an SQL package from the program specified on the PGM param-
eter.
*SRVPGM: Create an SQL package from the service program specified on the
PGM parameter.

MODULE
Specifies a list of modules in a bound program.
*ALL: An SQL package is created for each module in the program. An error
message is sent if none of the modules in the program contain SQL statements
or none of the modules is a distributed module.
Note: CRTSQLPKG cannot process a program that contains more than
1024 modules.
module-name: Specify the names of up to 256 modules in the program for
which an SQL package is to be created. If more than 256 modules exist that
need to have an SQL package created, multiple CRTSQLPKG commands must
be used.
Duplicate module names in the same program are allowed. This command
looks at each module in the program and if *ALL or the module name is speci-
fied on the MODULE parameter, processing continues to determine whether an
SQL package should be created. If the module is created using SQL and the
RDB parameter is specified on the precompile command, an SQL package is
created for the module. The SQL package is associated with the module of the
bound program.
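
For example, the following command (a sketch; the names are only illustrative)
creates SQL packages for two specific modules of a bound service program:

CRTSQLPKG PGM(SPIFFY/PARTSRV) RDB(KC000) OBJTYPE(*SRVPGM) +
          MODULE(PARTMOD1 PARTMOD2)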

TEXT
Specifies text that briefly describes the program and its function.
*PGMTXT: Specifies that the text is taken from the program.



*BLANK: Specifies no text.
'description': Specify no more than 50 characters of text enclosed in apostro-
phes (').

The following sample command creates an SQL package from the distributed SQL
program INVENT on relational database KC000.

CRTSQLPKG INVENT RDB(KC000) TEXT('Inventory Check')

The new SQL package is created with the same options that were specified on the
CRTSQLxxx command.

If errors are encountered while creating the SQL package, the SQL statement being
processed when the error occurred and the message text for the error are written to
the file identified by the PRTFILE parameter. A listing is not generated if no errors
were found during the create SQL package process.

If the CRTSQLxxx command failed to create an SQL package (for example, the
communications line failed during the precompile) but the program was created, you
can use the CRTSQLPKG command to create the SQL package without running the
CRTSQLxxx command again.

Delete SQL Package (DLTSQLPKG) Command


You can use the Delete SQL Package (DLTSQLPKG) command to delete one or
more SQL packages. You must enter the DLTSQLPKG command on the AS/400
system where the SQL package being deleted is located.

Job: B,I Pgm: B,I REXX: B,I Exec


55──DLTSQLPKG──────────────────────────────────────────────────────────5
┌─\LIBL/────────┐
5──SQLPKG(──┼───────────────┼──┬─SQL-package-name──────────┬──)───────5%
├─\CURLIB/──────┤ └─generic\-SQL-package name─┘
├─\USRLIBL/─────┤
├─\ALL/─────────┤
├─\ALLUSR/──────┤
└─library-name/─┘

SQLPKG
Specifies the qualified name of the SQL package being deleted. A specific or
generic SQL package name can be specified.
The possible library values are:

*LIBL: All libraries in the user and system portions of the job's library list
are searched.
*CURLIB: The current library is searched. If no library is specified as the
current library for the job, the QGPL library is used.
*USRLIBL: Only the libraries listed in the user portion of the library list are
searched.
*ALL: All libraries in the system, including QSYS, are searched.
*ALLUSR: All nonsystem libraries are searched, including all user-defined
libraries and the QGPL library, not just the libraries in the job's library
list. Libraries whose names start with the letter Q, other than QGPL, are not
searched.



library-name: Specifies the name of the library to be searched.
SQL-package-name: Specifies the name of the SQL package being deleted.
generic*-SQL-package-name: Specifies the generic name of the SQL package
to be deleted. A generic name is a character string of one or more characters
followed by an asterisk (*); for example, ABC*. If a generic name is specified,
all SQL packages with names that begin with the generic name, and for which
the user has authority, are deleted. If an asterisk is not included with the
generic (prefix) name, the system assumes it to be the complete SQL package
name.

You must have *OBJEXIST authority for the SQL package and at least *EXECUTE
authority for the collection where it is located.

There are also several SQL methods to drop packages:


Ÿ If you have the DB2/400 Query Manager and SQL Development Kit licensed
program installed, use interactive SQL to connect to the AS and then drop the
package using the SQL DROP PACKAGE statement.
Ÿ Run an SQL program that connects and then drops the package.
Ÿ Use Query Management to connect and drop the package.
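
For example, from an interactive SQL session you could enter the following
statements to connect to the AS and then drop a package (the relational database,
collection, and package names are only illustrative):

CONNECT TO KC000
DROP PACKAGE SPIFFY.PARTS1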

The following command deletes the SQL package PARTS1 in the SPIFFY
collection:
DLTSQLPKG SQLPKG(SPIFFY/PARTS1)

To delete an SQL package on a remote AS/400 system, use the Submit Remote
Command (SBMRMTCMD) command to run the DLTSQLPKG command on the
remote system. See “Submit Remote Command (SBMRMTCMD) Command” on
page 6-8 for how to use the SBMRMTCMD command. You can also use display
station pass-through to sign on to the remote system and delete the SQL package. If
the remote system is not an AS/400 system, pass through to that system using a
remote work station program and then submit the delete SQL package command
locally on that system.
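
For example, the following command runs the DLTSQLPKG command on the remote
system identified by a DDM file. This is only a sketch; the DDM file name is
illustrative and must refer to a DDM file that points to the remote system:

SBMRMTCMD CMD('DLTSQLPKG SQLPKG(SPIFFY/PARTS1)') DDMFILE(SPIFFY/KC000DDM)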

SQL DROP PACKAGE Statement


The DROP PACKAGE statement includes the PACKAGE parameter for distributed
relational database. You can issue the DROP PACKAGE statement embedded in a
program or using interactive SQL. When you issue a DROP PACKAGE statement,
the SQL package and its description are deleted from the AS. This has the same
result as a Delete SQL Package (DLTSQLPKG) command entered on a local
system. No other objects dependent on the SQL package are deleted as a result of
this statement.

You must have the following privileges on the SQL package to successfully delete
it:
Ÿ The system authority *EXECUTE on the referenced collection
Ÿ The system authority *OBJEXIST on the SQL package

The following example shows how the DROP PACKAGE statement is issued:

DROP PACKAGE SPIFFY.PARTS1



A program cannot issue a DROP PACKAGE statement for the SQL package it is
currently using.



Appendix A. Application Programming Examples
This appendix contains an example remote unit of work (RUW) application for a
distributed relational database, written in the RPG/400, COBOL/400, and ILE C/400
programming languages. The example shows how the tasks described in the following
business requirement can be handled in a distributed relational database.

Business Requirement
The application for the distributed relational database in this example is parts stock
management in an automobile dealer or distributor network.

This program checks the level of stock for each part in the local part stock table. If
the level is below the re-order point, the program checks the central tables to see
whether any orders are already outstanding and what quantity has been shipped
against each order.

If the net quantity (local stock, plus orders, minus shipments) is still below the re-
order point, an order is placed for the part by inserting rows in the appropriate
tables on the central system. A report is printed on the local system.

Technical Notes
Commitment control
This program uses the concept of Local and Remote Logical Units of
Work (LUW). Since this program uses remote unit of work, it is neces-
sary to close the current LUW on one system (COMMIT) before begin-
ning a new unit of work on another system.
Cursor repositioning
When a LUW is committed and the application connects to another
database, all cursors are closed. This application requires the cursor
reading the part stock file to be re-opened at the next part number. To
achieve this, the cursor is defined to begin where the part number is
greater than the current value of part number, and to be ordered by part
number.
Note: This technique will not work if there are duplicate rows for the
same part number.
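
The following sketch, using the cursor declaration from the COBOL example later
in this appendix, shows how the cursor is defined so that it can be reopened
past the last part number processed:

DECLARE NEXT_PART CURSOR FOR
    SELECT PART_NUM, PART_QUANT, PART_ROP, PART_EOQ
      FROM PART_STOCK
      WHERE PART_ROP > PART_QUANT
        AND PART_NUM > :PART-TABLE
      ORDER BY PART_NUM ASC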

Creating a Collection and Tables



5738PW1 V2R1M1 92ð327 SEU SOURCE LISTING ð3/29/92 17:16:5ð PAGE 1
SOURCE FILE . . . . . . . DRDA/QLBLSRC
MEMBER . . . . . . . . . CRTDB
SEQNBR\...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 ...+... 9 ...+... ð
1ðð IDENTIFICATION DIVISION. ð3/29/92
2ðð PROGRAM-ID. CRTDB. ð3/29/92
3ðð ENVIRONMENT DIVISION. ð3/29/92
4ðð DATA DIVISION. ð3/29/92
5ðð WORKING-STORAGE SECTION. ð3/29/92
6ðð EXEC SQL INCLUDE SQLCA END-EXEC. ð3/29/92
7ðð PROCEDURE DIVISION. ð3/29/92
8ðð MAIN. ð3/29/92
9ðð \ ------------------------------------------------------------- ð3/29/92
1ððð \ LOCATION TABLE ð3/29/92
11ðð \ ------------------------------------------------------------\/-- ð3/29/92
12ðð EXEC SQL ð3/29/92
13ðð CREATE COLLECTION DRDA ð3/29/92
14ðð END-EXEC. ð3/29/92
15ðð EXEC SQL ð3/29/92
16ðð CREATE TABLE DRDA/PART_STOCK ð3/29/92
17ðð (PART_NUM CHAR(5) NOT NULL,
18ðð PART_UM CHAR(2) NOT NULL,
19ðð PART_QUANT INTEGER NOT NULL WITH DEFAULT, ð3/29/92
2ððð PART_ROP INTEGER NOT NULL, ð3/29/92
21ðð PART_EOQ INTEGER NOT NULL, ð3/29/92
22ðð PART_BIN CHAR(6) NOT NULL WITH DEFAULT ð3/29/92
23ðð ) END-EXEC. ð3/29/92
24ðð EXEC SQL ð3/29/92
25ðð CREATE UNIQUE INDEX DRDA/PART_STOCI ð3/29/92
26ðð ON DRDA/PART_STOCK ð3/29/92
27ðð (PART_NUM ASC) END-EXEC. ð3/29/92
28ðð EXEC SQL ð3/29/92
29ðð CREATE TABLE DRDA/PART_ORDER ð3/29/92
3ððð (ORDER_NUM SMALLINT NOT NULL,
31ðð ORIGIN_LOC CHAR(4) NOT NULL,
32ðð ORDER_TYPE CHAR(1) NOT NULL,
33ðð ORDER_STAT CHAR(1) NOT NULL,
34ðð NUM_ALLOC SMALLINT NOT NULL WITH DEFAULT,
35ðð URG_REASON CHAR(1) NOT NULL WITH DEFAULT,
36ðð CREAT_TIME TIMESTAMP NOT NULL,
37ðð ALLOC_TIME TIMESTAMP,
38ðð CLOSE_TIME TIMESTAMP,
39ðð REV_REASON CHAR(1) ð3/29/92
4ððð ) END-EXEC. ð3/29/92
41ðð EXEC SQL ð3/29/92
42ðð CREATE UNIQUE INDEX DRDA/PART_ORDEI ð3/29/92
43ðð ON DRDA/PART_ORDER ð3/29/92
44ðð (ORDER_NUM ASC) END-EXEC. ð3/29/92
45ðð EXEC SQL ð3/29/92
46ðð CREATE TABLE DRDA/PART_ORDLN ð3/29/92
47ðð (ORDER_NUM SMALLINT NOT NULL,
48ðð ORDER_LINE SMALLINT NOT NULL,
49ðð PART_NUM CHAR(5) NOT NULL,
5ððð QUANT_REQ INTEGER NOT NULL, ð3/29/92
51ðð LINE_STAT CHAR(1) NOT NULL ð3/29/92
52ðð ) END-EXEC. ð3/29/92
53ðð EXEC SQL ð3/29/92
54ðð CREATE UNIQUE INDEX PART_ORDLI ð3/29/92
55ðð ON DRDA/PART_ORDLN ð3/29/92
56ðð (ORDER_NUM ASC, ð3/29/92
57ðð ORDER_LINE ASC) END-EXEC. ð3/29/92
58ðð EXEC SQL ð3/29/92
59ðð CREATE TABLE DRDA/SHIPMENTLN ð3/29/92
6ððð (SHIP_NUM SMALLINT NOT NULL,
61ðð SHIP_LINE SMALLINT NOT NULL,
62ðð ORDER_LOC CHAR(4) NOT NULL,
63ðð ORDER_NUM SMALLINT NOT NULL,
64ðð ORDER_LINE SMALLINT NOT NULL,
65ðð PART_NUM CHAR(5) NOT NULL,
66ðð QUANT_SHIP INTEGER NOT NULL, ð3/29/92
67ðð QUANT_RECV INTEGER NOT NULL WITH DEFAULT ð3/29/92
68ðð ) END-EXEC. ð3/29/92

Figure A-1 (Part 1 of 2). Creating a Collection and Tables


5738PW1 V2R1M1 92ð327 SEU SOURCE LISTING ð3/29/92 17:16:5ð PAGE 2
SOURCE FILE . . . . . . . DRDA/QLBLSRC
MEMBER . . . . . . . . . CRTDB
SEQNBR\...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 ...+... 9 ...+... ð
69ðð EXEC SQL ð3/29/92
7ððð CREATE UNIQUE INDEX SHIPMENTLI ð3/29/92
71ðð ON DRDA/SHIPMENTLN ð3/29/92
72ðð (SHIP_NUM ASC, ð3/29/92
73ðð SHIP_LINE ASC) END-EXEC. ð3/29/92
74ðð EXEC SQL ð3/29/92
75ðð COMMIT END-EXEC. ð3/29/92
76ðð STOP RUN. ð3/29/92
\ \ \ \ E N D O F S O U R C E \ \ \ \

Figure A-1 (Part 2 of 2). Creating a Collection and Tables



Inserting Data into the Tables
5738PW1 V2R1M1 92ð327 SEU SOURCE LISTING ð3/29/92 17:16:54 PAGE 1
SOURCE FILE . . . . . . . DRDA/QLBLSRC
MEMBER . . . . . . . . . INSDB
SEQNBR\...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 ...+... 9 ...+... ð
1ðð IDENTIFICATION DIVISION. ð3/29/92
2ðð PROGRAM-ID. INSDB. ð3/29/92
3ðð ENVIRONMENT DIVISION. ð3/29/92
4ðð DATA DIVISION. ð3/29/92
5ðð WORKING-STORAGE SECTION. ð3/29/92
6ðð EXEC SQL INCLUDE SQLCA END-EXEC. ð3/29/92
7ðð PROCEDURE DIVISION. ð3/29/92
8ðð MAIN. ð3/29/92
9ðð ð3/29/92
1ððð ð3/29/92
11ðð \------------------------------------------------------------------ ð3/29/92
12ðð \ PART_STOCK TABLE ð3/29/92
13ðð \--------------------------------------------------------------\/-- ð3/29/92
14ðð ð3/29/92
15ðð ð3/29/92
16ðð EXEC SQL ð3/29/92
17ðð INSERT INTO PART_STOCK ð3/29/92
18ðð VALUES ð3/29/92
19ðð ('14ð2ð','EA',ð38,ð5ð,1ðð,' ') END-EXEC. ð3/29/92
2ððð EXEC SQL ð3/29/92
21ðð INSERT INTO PART_STOCK ð3/29/92
22ðð VALUES ð3/29/92
23ðð ('14ð3ð','EA',ð43,ð5ð,ð5ð,' ') END-EXEC. ð3/29/92
24ðð EXEC SQL ð3/29/92
25ðð INSERT INTO PART_STOCK ð3/29/92
26ðð VALUES ð3/29/92
27ðð ('14ð4ð','EA',ð3ð,ð2ð,ð3ð,' ') END-EXEC. ð3/29/92
28ðð EXEC SQL ð3/29/92
29ðð INSERT INTO PART_STOCK ð3/29/92
3ððð VALUES ð3/29/92
31ðð ('14ð5ð','EA',ð1ð,ðð5,ð15,' ') END-EXEC. ð3/29/92
32ðð EXEC SQL ð3/29/92
33ðð INSERT INTO PART_STOCK ð3/29/92
34ðð VALUES ð3/29/92
35ðð ('14ð6ð','EA',11ð,ð45,ð9ð,' ') END-EXEC. ð3/29/92
36ðð EXEC SQL ð3/29/92
37ðð INSERT INTO PART_STOCK ð3/29/92
38ðð VALUES ð3/29/92
39ðð ('14ð7ð','EA',13ð,ð8ð,16ð,' ') END-EXEC. ð3/29/92
4ððð EXEC SQL ð3/29/92
41ðð INSERT INTO PART_STOCK ð3/29/92
42ðð VALUES ð3/29/92
43ðð ('18ð2ð','EA',ð13,ð25,ð5ð,' ') END-EXEC. ð3/29/92
44ðð EXEC SQL ð3/29/92
45ðð INSERT INTO PART_STOCK ð3/29/92
46ðð VALUES ð3/29/92
47ðð ('18ð3ð','EA',ð15,ðð5,ð1ð,' ') END-EXEC. ð3/29/92
48ðð EXEC SQL ð3/29/92
49ðð INSERT INTO PART_STOCK ð3/29/92
5ððð VALUES ð3/29/92
51ðð ('21ð1ð','EA',ð29,ð3ð,ð5ð,' ') END-EXEC. ð3/29/92
52ðð EXEC SQL ð3/29/92
53ðð INSERT INTO PART_STOCK ð3/29/92
54ðð VALUES ð3/29/92
55ðð ('24ð1ð','EA',ð25,ð2ð,ð4ð,' ') END-EXEC. ð3/29/92
56ðð EXEC SQL ð3/29/92
57ðð INSERT INTO PART_STOCK ð3/29/92
58ðð VALUES ð3/29/92
59ðð ('24ð8ð','EA',ð54,ð5ð,ð5ð,' ') END-EXEC. ð3/29/92
6ððð EXEC SQL ð3/29/92
61ðð INSERT INTO PART_STOCK ð3/29/92
62ðð VALUES ð3/29/92
63ðð ('24ð9ð','EA',ð3ð,ð25,ð5ð,' ') END-EXEC. ð3/29/92
64ðð EXEC SQL ð3/29/92
65ðð INSERT INTO PART_STOCK ð3/29/92
66ðð VALUES ð3/29/92
67ðð ('241ðð','EA',ð2ð,ð15,ð3ð,' ') END-EXEC. ð3/29/92

Figure A-2 (Part 1 of 4). Inserting Data into the Tables



5738PW1 V2R1M1 92ð327 SEU SOURCE LISTING ð3/29/92 17:16:54 PAGE 2
SOURCE FILE . . . . . . . DRDA/QLBLSRC
MEMBER . . . . . . . . . INSDB
SEQNBR\...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 ...+... 9 ...+... ð
68ðð EXEC SQL ð3/29/92
69ðð INSERT INTO PART_STOCK ð3/29/92
7ððð VALUES ð3/29/92
71ðð ('2411ð','EA',ð52,ð5ð,ð8ð,' ') END-EXEC. ð3/29/92
72ðð EXEC SQL ð3/29/92
73ðð INSERT INTO PART_STOCK ð3/29/92
74ðð VALUES ð3/29/92
75ðð ('25ð1ð','EA',511,3ðð,6ðð,' ') END-EXEC. ð3/29/92
76ðð EXEC SQL ð3/29/92
77ðð INSERT INTO PART_STOCK ð3/29/92
78ðð VALUES ð3/29/92
79ðð ('36ð1ð','EA',ð13,ðð5,ð1ð,' ') END-EXEC. ð3/29/92
8ððð EXEC SQL ð3/29/92
81ðð INSERT INTO PART_STOCK ð3/29/92
82ðð VALUES ð3/29/92
83ðð ('36ð2ð','EA',11ð,ð3ð,ð6ð,' ') END-EXEC. ð3/29/92
84ðð EXEC SQL ð3/29/92
85ðð INSERT INTO PART_STOCK ð3/29/92
86ðð VALUES ð3/29/92
87ðð ('37ð1ð','EA',415,1ðð,2ðð,' ') END-EXEC. ð3/29/92
88ðð EXEC SQL ð3/29/92
89ðð INSERT INTO PART_STOCK ð3/29/92
9ððð VALUES ð3/29/92
91ðð ('37ð2ð','EA',ð1ð,ð2ð,ð4ð,' ') END-EXEC. ð3/29/92
92ðð EXEC SQL ð3/29/92
93ðð INSERT INTO PART_STOCK ð3/29/92
94ðð VALUES ð3/29/92
95ðð ('37ð3ð','EA',154,ð55,ð6ð,' ') END-EXEC. ð3/29/92
96ðð EXEC SQL ð3/29/92
97ðð INSERT INTO PART_STOCK ð3/29/92
98ðð VALUES ð3/29/92
99ðð ('37ð4ð','EA',223,12ð,12ð,' ') END-EXEC. ð3/29/92
1ðððð EXEC SQL ð3/29/92
1ð1ðð INSERT INTO PART_STOCK ð3/29/92
1ð2ðð VALUES ð3/29/92
1ð3ðð ('43ð1ð','EA',11ð,ð2ð,ð4ð,' ') END-EXEC. ð3/29/92
1ð4ðð EXEC SQL ð3/29/92
1ð5ðð INSERT INTO PART_STOCK ð3/29/92
1ð6ðð VALUES ð3/29/92
1ð7ðð ('43ð2ð','EA',ð67,ð5ð,ð5ð,' ') END-EXEC. ð3/29/92
1ð8ðð EXEC SQL ð3/29/92
1ð9ðð INSERT INTO PART_STOCK ð3/29/92
11ððð VALUES ð3/29/92
111ðð ('48ð1ð','EA',ð32,ð3ð,ð6ð,' ') END-EXEC. ð3/29/92
112ðð ð3/29/92
113ðð \--------------------------------------------------------------- -- ð3/29/92
114ðð \ PART_ORDER TABLE ð3/29/92
115ðð \--------------------------------------------------------------\/-- ð3/29/92
116ðð ð3/29/92
117ðð ð3/29/92
118ðð ð3/29/92
119ðð EXEC SQL ð3/29/92
12ððð INSERT INTO PART_ORDER ð3/29/92
121ðð VALUES ð3/29/92
122ðð (1,'DB2B','U','O',ð,' ','1991-ð3-12-17.ðð.ðð',NULL,NULL,NULL) ð3/29/92
123ðð END-EXEC. ð3/29/92
124ðð EXEC SQL ð3/29/92
125ðð INSERT INTO PART_ORDER ð3/29/92
126ðð VALUES ð3/29/92
127ðð (2,'SQLA','U','O',ð,' ','1991-ð3-12-17.ð1.ðð', ð3/29/92
128ðð NULL,NULL,NULL) ð3/29/92
129ðð END-EXEC. ð3/29/92
13ððð EXEC SQL ð3/29/92
131ðð INSERT INTO PART_ORDER ð3/29/92
132ðð VALUES ð3/29/92
133ðð (3,'SQLA','U','O',ð,' ','1991-ð3-12-17.ð2.ðð', ð3/29/92
134ðð NULL,NULL,NULL) ð3/29/92
135ðð END-EXEC. ð3/29/92

Figure A-2 (Part 2 of 4). Inserting Data into the Tables



5738PW1 V2R1M1 92ð327 SEU SOURCE LISTING ð3/29/92 17:16:54 PAGE 3
SOURCE FILE . . . . . . . DRDA/QLBLSRC
MEMBER . . . . . . . . . INSDB
SEQNBR\...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 ...+... 9 ...+... ð
136ðð EXEC SQL ð3/29/92
137ðð INSERT INTO PART_ORDER ð3/29/92
138ðð VALUES ð3/29/92
139ðð (4,'SQLA','U','O',ð,' ','1991-ð3-12-17.ð3.ðð', ð3/29/92
14ððð NULL,NULL,NULL) ð3/29/92
141ðð END-EXEC. ð3/29/92
142ðð EXEC SQL ð3/29/92
143ðð INSERT INTO PART_ORDER ð3/29/92
144ðð VALUES ð3/29/92
145ðð (5,'DB2B','U','O',ð,' ','1991-ð3-12-17.ð4.ðð', ð3/29/92
146ðð NULL,NULL,NULL) ð3/29/92
147ðð END-EXEC. ð3/29/92
148ðð ð3/29/92
149ðð \------------------------------------------------------------------ ð3/29/92
15ððð \ PART_ORDLN TABLE ð3/29/92
151ðð \--------------------------------------------------------------\/-- ð3/29/92
152ðð ð3/29/92
153ðð ð3/29/92
154ðð EXEC SQL ð3/29/92
155ðð INSERT INTO PART_ORDLN ð3/29/92
156ðð VALUES ð3/29/92
157ðð (1,1,'2411ð',ðð5,'O') END-EXEC. ð3/29/92
158ðð EXEC SQL ð3/29/92
159ðð INSERT INTO PART_ORDLN ð3/29/92
16ððð VALUES ð3/29/92
161ðð (1,2,'241ðð',ð21,'O') END-EXEC. ð3/29/92
162ðð EXEC SQL ð3/29/92
163ðð INSERT INTO PART_ORDLN ð3/29/92
164ðð VALUES ð3/29/92
165ðð (1,3,'24ð9ð',ð18,'O') END-EXEC. ð3/29/92
166ðð EXEC SQL ð3/29/92
167ðð INSERT INTO PART_ORDLN ð3/29/92
168ðð VALUES ð3/29/92
169ðð (2,1,'14ð7ð',ðð4,'O') END-EXEC. ð3/29/92
17ððð EXEC SQL ð3/29/92
171ðð INSERT INTO PART_ORDLN ð3/29/92
172ðð VALUES ð3/29/92
173ðð (2,2,'37ð4ð',ð43,'O') END-EXEC. ð3/29/92
173ð1 EXEC SQL ð3/29/92
173ð2 INSERT INTO PART_ORDLN ð3/29/92
173ð3 VALUES ð3/29/92
173ð4 (2,3,'14ð3ð',ð15,'O') END-EXEC. ð3/29/92
173ð5 EXEC SQL ð3/29/92
173ð6 INSERT INTO PART_ORDLN ð3/29/92
173ð7 VALUES ð3/29/92
173ð8 (3,2,'14ð3ð',ð25,'O') END-EXEC. ð3/29/92
174ðð EXEC SQL ð3/29/92
175ðð INSERT INTO PART_ORDLN ð3/29/92
176ðð VALUES ð3/29/92
177ðð (3,1,'43ð1ð',ðð3,'O') END-EXEC. ð3/29/92
178ðð EXEC SQL ð3/29/92
179ðð INSERT INTO PART_ORDLN ð3/29/92
18ððð VALUES ð3/29/92
181ðð (4,1,'36ð1ð',ð13,'O') END-EXEC. ð3/29/92
182ðð EXEC SQL ð3/29/92
183ðð INSERT INTO PART_ORDLN ð3/29/92
184ðð VALUES ð3/29/92
185ðð (5,1,'18ð3ð',ðð5,'O') END-EXEC. ð3/29/92
186ðð ð3/29/92
187ðð ð3/29/92
188ðð \------------------------------------------------------------------ ð3/29/92
189ðð \ SHIPMENTLN TABLE ð3/29/92
19ððð \--------------------------------------------------------------\/-- ð3/29/92
191ðð ð3/29/92
192ðð ð3/29/92
193ðð EXEC SQL ð3/29/92
194ðð INSERT INTO SHIPMENTLN ð3/29/92
195ðð VALUES ð3/29/92
196ðð (1,1,'DB2B',1,1,'2411ð',5,5) END-EXEC. ð3/29/92
197ðð EXEC SQL ð3/29/92
198ðð INSERT INTO SHIPMENTLN ð3/29/92
199ðð VALUES ð3/29/92
2ðððð (1,2,'DB2B',1,2,'241ðð',1ð,1) END-EXEC. ð3/29/92
2ð1ðð EXEC SQL ð3/29/92
2ð2ðð INSERT INTO SHIPMENTLN ð3/29/92
2ð3ðð VALUES ð3/29/92
2ð4ðð (2,1,'SQLA',2,1,'14ð7ð',4,4) END-EXEC. ð3/29/92

Figure A-2 (Part 3 of 4). Inserting Data into the Tables



5738PW1 V2R1M1 92ð327 SEU SOURCE LISTING ð3/29/92 17:16:54 PAGE 5
SOURCE FILE . . . . . . . DRDA/QLBLSRC
MEMBER . . . . . . . . . INSDB
SEQNBR\...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 ...+... 9 ...+... ð
2ð5ðð EXEC SQL ð3/29/92
2ð6ðð INSERT INTO SHIPMENTLN ð3/29/92
2ð7ðð VALUES ð3/29/92
2ð8ðð (2,2,'SQLA',2,2,'37ð4ð',45,25) END-EXEC. ð3/29/92
2ð8ð1 EXEC SQL ð3/29/92
2ð8ð2 INSERT INTO SHIPMENTLN ð3/29/92
2ð8ð3 VALUES ð3/29/92
2ð8ð4 (2,3,'SQLA',2,3,'14ð3ð', 5,5) END-EXEC. ð3/29/92
2ð8ð5 EXEC SQL ð3/29/92
2ð8ð6 INSERT INTO SHIPMENTLN ð3/29/92
2ð8ð7 VALUES ð3/29/92
2ð8ð8 (3,1,'SQLA',2,3,'14ð3ð', 5,5) END-EXEC. ð3/29/92
2ð9ðð ð3/29/92
21ððð EXEC SQL COMMIT END-EXEC. ð3/29/92
211ðð STOP RUN. ð3/29/92
\ \ \ \ E N D O F S O U R C E \ \ \ \

Figure A-2 (Part 4 of 4). Inserting Data into the Tables

RPG Program Example



5738PW1 V2R1M1 92ð327 SEU SOURCE LISTING ð3/29/92 17:12:48 PAGE 1
SOURCE FILE . . . . . . . DRDA/QRPGSRC
MEMBER . . . . . . . . . DDBPT6RG
SEQNBR\...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 ...+... 9 ...+... ð
1ðð \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
2ðð \ \ ð3/29/92
3ðð \ DESCRIPTIVE NAME = D-DB SAMPLE APPLICATION \ ð3/29/92
4ðð \ REORDER POINT PROCESSING \ ð3/29/92
5ðð \ AS/4ðð \ ð3/29/92
6ðð \ \ ð3/29/92
7ðð \ FUNCTION = THIS MODULE PROCESS THE PART_STOCK TABLE AND \ ð3/29/92
8ðð \ FOR EACH PART BELOW THE ROP (REORDER POINT) \ ð3/29/92
9ðð \ CREATES A SUPPLY ORDER AND PRINTS A REPORT. \ ð3/29/92
1ððð \ \ ð3/29/92
11ðð \ \ ð3/29/92
12ðð \ INPUT = PARAMETERS EXPLICITLY PASSED TO THIS FUNCTION: \ ð3/29/92
13ðð \ \ ð3/29/92
14ðð \ LOCADB LOCAL DB NAME \ ð3/29/92
15ðð \ REMODB REMOTE DB NAME \ ð3/29/92
16ðð \ \ ð3/29/92
17ðð \ TABLES = PART-STOCK - LOCAL \ ð3/29/92
18ðð \ PART_ORDER - REMOTE \ ð3/29/92
19ðð \ PART_ORDLN - REMOTE \ ð3/29/92
2ððð \ SHIPMENTLN - REMOTE \ ð3/29/92
21ðð \ \ ð3/29/92
22ðð \ INDICATORS = \IN89 - 'ð' ORDER HEADER NOT DONE \ ð3/29/92
23ðð \ '1' ORDER HEADER IS DONE \ ð3/29/92
24ðð \ \IN99 - '1' ABNORMAL END (SQLCOD<ð) \ ð3/29/92
25ðð \ \ ð3/29/92
26ðð \ TO BE COMPILED WITH COMMIT(\CHG) RDB(remotedbname) \ ð3/29/92
27ðð \ \ ð3/29/92
28ðð \ INVOKE BY : CALL DDBPT6RG PARM(localdbname remotedbname) \ ð3/29/92
29ðð \ \ ð3/29/92
3ððð \ CURSORS WILL BE CLOSED IMPLICITLY (BY CONNECT) BECAUSE \ ð3/29/92
31ðð \ THERE IS NO REASON TO DO IT EXPLICITLY \ ð3/29/92
32ðð \ \ ð3/29/92
33ðð \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
34ðð \ ð3/29/92
35ðð FQPRINT O F 33 OF PRINTER ð3/29/92
36ðð F\ ð3/29/92
37ðð I\ ð3/29/92
38ðð IMISC DS ð3/29/92
39ðð I B 1 2ðSHORTB ð3/29/92
4ððð I B 3 6ðLONGB ð3/29/92
41ðð I B 7 8ðINDNUL ð3/29/92
42ðð I 9 13 PRTTBL ð3/29/92
43ðð I 14 29 LOCTBL ð3/29/92
44ðð I I 'SQLA' 3ð 33 LOC ð3/29/92
45ðð I\ ð3/29/92
46ðð I\ ð3/29/92
47ðð C\ ð3/29/92
48ðð C \LIKE DEFN SHORTB NXTORD NEW ORDER NR ð3/29/92
49ðð C \LIKE DEFN SHORTB NXTORL ORDER LINE NR ð3/29/92
5ððð C \LIKE DEFN SHORTB RTCOD1 RTCOD NEXT_PART ð3/29/92
51ðð C \LIKE DEFN SHORTB RTCOD2 RTCOD NEXT_ORD_ ð3/29/92
52ðð C \LIKE DEFN SHORTB CURORD ORDER NUMBER ð3/29/92
53ðð C \LIKE DEFN SHORTB CURORL ORDER LINE ð3/29/92
54ðð C \LIKE DEFN LONGB QUANTI FOR COUNTING ð3/29/92
55ðð C \LIKE DEFN LONGB QTYSTC QTY ON STOCK ð3/29/92
56ðð C \LIKE DEFN LONGB QTYORD REORDER QTY ð3/29/92
57ðð C \LIKE DEFN LONGB QTYROP REORDER POINT ð3/29/92
58ðð C \LIKE DEFN LONGB QTYREQ QTY ORDERED ð3/29/92
59ðð C \LIKE DEFN LONGB QTYREC QTY RECEIVED ð3/29/92
6ððð C\ ð3/29/92
61ðð C\ ð3/29/92
62ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
63ðð C\ PARAMETERS \ ð3/29/92
64ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
65ðð C\ ð3/29/92
66ðð C \ENTRY PLIST ð3/29/92
67ðð C PARM LOCADB 18 LOCAL DATABASE ð3/29/92
68ðð C PARM REMODB 18 REMOTE DATABASE ð3/29/92
69ðð C\ ð3/29/92
7ððð C\ ð3/29/92

Figure A-3 (Part 1 of 7). RPG Program Example



5738PW1 V2R1M1 92ð327 SEU SOURCE LISTING ð3/29/92 17:12:48 PAGE 2
SOURCE FILE . . . . . . . DRDA/QRPGSRC
MEMBER . . . . . . . . . DDBPT6RG
SEQNBR\...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 ...+... 9 ...+... ð
71ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
72ðð C\ SQL CURSOR DECLARATIONS \ ð3/29/92
73ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
74ðð C\ ð3/29/92
75ðð C\ NEXT PART WHICH STOCK QUANTITY IS UNDER REORDER POINTS QTY ð3/29/92
76ðð C/EXEC SQL ð3/29/92
77ðð C+ DECLARE NEXT_PART CURSOR FOR ð3/29/92
78ðð C+ SELECT PART_NUM, ð3/29/92
79ðð C+ PART_QUANT, ð3/29/92
8ððð C+ PART_ROP, ð3/29/92
81ðð C+ PART_EOQ ð3/29/92
82ðð C+ FROM PART_STOCK ð3/29/92
83ðð C+ WHERE PART_ROP > PART_QUANT ð3/29/92
84ðð C+ AND PART_NUM > :PRTTBL ð3/29/92
85ðð C+ ORDER BY PART_NUM ASC ð3/29/92
86ðð C/END-EXEC ð3/29/92
87ðð C\ ð3/29/92
88ðð C\ ORDERS WHICH ARE ALREADY MADE FOR CURRENT PART ð3/29/92
89ðð C/EXEC SQL ð3/29/92
9ððð C+ DECLARE NEXT_ORDER_LINE CURSOR FOR ð3/29/92
91ðð C+ SELECT A.ORDER_NUM, ð3/29/92
92ðð C+ ORDER_LINE, ð3/29/92
93ðð C+ QUANT_REQ ð3/29/92
94ðð C+ FROM PART_ORDLN A, ð3/29/92
95ðð C+ PART_ORDER B ð3/29/92
96ðð C+ WHERE PART_NUM = :PRTTBL ð3/29/92
97ðð C+ AND LINE_STAT <> 'C' ð3/29/92
98ðð C+ AND A.ORDER_NUM = B.ORDER_NUM ð3/29/92
99ðð C+ AND ORDER_TYPE = 'R' ð3/29/92
1ðððð C/END-EXEC ð3/29/92
1ð1ðð C\ ð3/29/92
1ð2ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
1ð3ðð C\ SQL RETURN CODE HANDLING \ ð3/29/92
1ð4ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
1ð5ðð C/EXEC SQL ð3/29/92
1ð6ðð C+ WHENEVER SQLERROR GO TO DBERRO ð3/29/92
1ð7ðð C/END-EXEC ð3/29/92
1ð8ðð C/EXEC SQL ð3/29/92
1ð9ðð C+ WHENEVER SQLWARNING CONTINUE ð3/29/92
11ððð C/END-EXEC ð3/29/92
111ðð C\ ð3/29/92
112ðð C\ ð3/29/92
113ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
114ðð C\ PROCESS - MAIN PROGRAM LOGIC \ ð3/29/92
115ðð C\ MAIN PROCEDURE WORKS WITH LOCAL DATABASE \ ð3/29/92
116ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
117ðð C\ ð3/29/92
118ðð C\CLEAN UP TO PERMIT RE-RUNNING OF TEST DATA ð3/29/92
119ðð C EXSR CLEANU ð3/29/92
12ððð C\ ð3/29/92
121ðð C\ ð3/29/92
122ðð C RTCOD1 DOUEQ1ðð ð3/29/92
123ðð C\ ð3/29/92
124ðð C/EXEC SQL ð3/29/92
125ðð C+ CONNECT TO :LOCADB ð3/29/92
126ðð C/END-EXEC ð3/29/92
127ðð C/EXEC SQL ð3/29/92
128ðð C+ OPEN NEXT_PART ð3/29/92
129ðð C/END-EXEC ð3/29/92
13ððð C/EXEC SQL ð3/29/92
131ðð C+ FETCH NEXT_PART ð3/29/92
132ðð C+ INTO :PRTTBL, ð3/29/92
133ðð C+ :QTYSTC, ð3/29/92
134ðð C+ :QTYROP, ð3/29/92
135ðð C+ :QTYORD ð3/29/92
136ðð C/END-EXEC ð3/29/92

Figure A-3 (Part 2 of 7). RPG Program Example



5738PW1 V2R1M1 92ð327 SEU SOURCE LISTING ð3/29/92 17:12:48 PAGE 3
SOURCE FILE . . . . . . . DRDA/QRPGSRC
MEMBER . . . . . . . . . DDBPT6RG
SEQNBR\...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 ...+... 9 ...+... ð
137ðð C MOVE SQLCOD RTCOD1 ð3/29/92
138ðð C/EXEC SQL ð3/29/92
139ðð C+ COMMIT ð3/29/92
14ððð C/END-EXEC ð3/29/92
141ðð C RTCOD1 IFNE 1ðð ð3/29/92
142ðð C EXSR CHECKO ð3/29/92
143ðð C ENDIF ð3/29/92
144ðð C\ ð3/29/92
145ðð C ENDDO ð3/29/92
146ðð C\ ð3/29/92
147ðð C GOTO SETLR ð3/29/92
148ðð C\ ð3/29/92
149ðð C\ ð3/29/92
15ððð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
151ðð C\ SQL RETURN CODE HANDLING ON ERROR SITUATIONS \ ð3/29/92
152ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
153ðð C\ ð3/29/92
154ðð C DBERRO TAG ð3/29/92
155ðð C\ \-------------\ ð3/29/92
156ðð C EXCPTERRLIN ð3/29/92
157ðð C MOVE \ON \IN99 ð3/29/92
158ðð C/EXEC SQL ð3/29/92
159ðð C+ WHENEVER SQLERROR CONTINUE ð3/29/92
16ððð C/END-EXEC ð3/29/92
161ðð C/EXEC SQL ð3/29/92
162ðð C+ ROLLBACK ð3/29/92
163ðð C/END-EXEC ð3/29/92
164ðð C/EXEC SQL ð3/29/92
165ðð C+ WHENEVER SQLERROR GO TO DBERRO ð3/29/92
166ðð C/END-EXEC ð3/29/92
167ðð C\ ð3/29/92
168ðð C\ ð3/29/92
169ðð C SETLR TAG ð3/29/92
17ððð C\ \-------------\ ð3/29/92
171ðð C/EXEC SQL ð3/29/92
172ðð C+ CONNECT RESET ð3/29/92
173ðð C/END-EXEC ð3/29/92
174ðð C MOVE \ON \INLR ð3/29/92
175ðð C\ ð3/29/92
176ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
177ðð C\ THE END OF THE PROGRAM \ ð3/29/92
178ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
179ðð C\ ð3/29/92
18ððð C\ ð3/29/92
181ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
182ðð C\ SUBROUTINES TO WORK WITH REMOTE DATABASES \ ð3/29/92
183ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
184ðð C\ ð3/29/92
185ðð C\ ð3/29/92
186ðð C CHECKO BEGSR ð3/29/92
187ðð C\ \---------------\ ð3/29/92
188ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
189ðð C\ CHECKS WHAT IS CURRENT ORDER AND SHIPMENT STATUS FOR THE PART \ ð3/29/92
19ððð C\ IF ORDERED AND SHIPPED IS LESS THAN REORDER POINT OF PART, \ ð3/29/92
191ðð C\ PERFORMS A SUBROUTINE WHICH MAKES AN ORDER. \ ð3/29/92
192ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
193ðð C\ ð3/29/92
194ðð C MOVE ð RTCOD2 ð3/29/92
195ðð C MOVE ð QTYREQ ð3/29/92
196ðð C MOVE ð QTYREC ð3/29/92
197ðð C\ ð3/29/92
198ðð C/EXEC SQL ð3/29/92
199ðð C+ CONNECT TO :REMODB ð3/29/92
2ðððð C/END-EXEC ð3/29/92
2ð1ðð C/EXEC SQL ð3/29/92
2ð2ðð C+ OPEN NEXT_ORDER_LINE ð3/29/92
2ð3ðð C/END-EXEC ð3/29/92

Figure A-3 (Part 3 of 7). RPG Program Example



5738PW1 V2R1M1 92ð327 SEU SOURCE LISTING ð3/29/92 17:12:48 PAGE 4
SOURCE FILE . . . . . . . DRDA/QRPGSRC
MEMBER . . . . . . . . . DDBPT6RG
SEQNBR\...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 ...+... 9 ...+... ð
2ð4ðð C\ ð3/29/92
2ð5ðð C RTCOD2 DOWNE1ðð ð3/29/92
2ð6ðð C\ ð3/29/92
2ð7ðð C/EXEC SQL ð3/29/92
2ð8ðð C+ FETCH NEXT_ORDER_LINE ð3/29/92
2ð9ðð C+ INTO :CURORD, ð3/29/92
21ððð C+ :CURORL, ð3/29/92
211ðð C+ :QUANTI ð3/29/92
212ðð C/END-EXEC ð3/29/92
213ðð C\ ð3/29/92
214ðð C SQLCOD IFEQ 1ðð ð3/29/92
215ðð C MOVE 1ðð RTCOD2 ð3/29/92
216ðð C ELSE ð3/29/92
217ðð C ADD QUANTI QTYREQ ð3/29/92
218ðð C\ ð3/29/92
219ðð C/EXEC SQL ð3/29/92
22ððð C+ SELECT SUM(QUANT_RECV) ð3/29/92
221ðð C+ INTO :QUANTI:INDNUL
222ðð C+ FROM SHIPMENTLN ð3/29/92
223ðð C+ WHERE ORDER_LOC = :LOC ð3/29/92
224ðð C+ AND ORDER_NUM = :CURORD ð3/29/92
225ðð C+ AND ORDER_LINE = :CURORL ð3/29/92
226ðð C/END-EXEC ð3/29/92
227ðð C\ ð3/29/92
228ðð C INDNUL IFGE ð ð3/29/92
229ðð C ADD QUANTI QTYREC ð3/29/92
23ððð C ENDIF ð3/29/92
231ðð C\ ð3/29/92
232ðð C ENDIF ð3/29/92
233ðð C ENDDO ð3/29/92
234ðð C\ ð3/29/92
235ðð C/EXEC SQL ð3/29/92
236ðð C+ CLOSE NEXT_ORDER_LINE ð3/29/92
237ðð C/END-EXEC ð3/29/92
238ðð C\ ð3/29/92
239ðð C QTYSTC ADD QTYREQ QUANTI ð3/29/92
24ððð C SUB QUANTI QTYREC ð3/29/92
241ðð C\ ð3/29/92
242ðð C QTYROP IFGT QUANTI ð3/29/92
243ðð C EXSR ORDERP ð3/29/92
244ðð C ENDIF ð3/29/92
245ðð C\ ð3/29/92
246ðð C/EXEC SQL ð3/29/92
247ðð C+ COMMIT ð3/29/92
248ðð C/END-EXEC ð3/29/92
249ðð C\ ð3/29/92
25ððð C ENDSR CHECKO ð3/29/92
251ðð C\ ð3/29/92
252ðð C\ ð3/29/92
253ðð C ORDERP BEGSR ð3/29/92
254ðð C\ \---------------\ ð3/29/92
255ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
256ðð C\ MAKES AN ORDER. IF FIRST TIME, PERFORMS THE SUBROUTINE, WHICH \ ð3/29/92
257ðð C\ SEARCHES FOR NEW ORDER NUMBER AND MAKES THE ORDER HEADER. \ ð3/29/92
258ðð C\ AFTER THAT MAKES ORDER LINES USING REORDER QUANTITY FOR THE \ ð3/29/92
259ðð C\ PART. FOR EVERY ORDERED PART WRITES A LINE ON REPORT. \ ð3/29/92
26ððð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
261ðð C\ ð3/29/92
262ðð C \IN89 IFEQ \OFF FIRST ORDER ? ð3/29/92
263ðð C EXSR STRORD ð3/29/92
264ðð C MOVE \ON \IN89 ORD.HEAD.DONE ð3/29/92
265ðð C EXCPTHEADER WRITE HEADERS ð3/29/92
266ðð C ENDIF ð3/29/92

Figure A-3 (Part 4 of 7). RPG Program Example



5738PW1 V2R1M1 92ð327 SEU SOURCE LISTING ð3/29/92 17:12:48 PAGE 5
SOURCE FILE . . . . . . . DRDA/QRPGSRC
MEMBER . . . . . . . . . DDBPT6RG
SEQNBR\...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 ...+... 9 ...+... ð
267ðð C\ ð3/29/92
268ðð C ADD 1 NXTORL NEXT ORD.LIN ð3/29/92
269ðð C/EXEC SQL ð3/29/92
27ððð C+ INSERT ð3/29/92
271ðð C+ INTO PART_ORDLN ð3/29/92
272ðð C+ (ORDER_NUM, ð3/29/92
273ðð C+ ORDER_LINE, ð3/29/92
274ðð C+ PART_NUM, ð3/29/92
275ðð C+ QUANT_REQ, ð3/29/92
276ðð C+ LINE_STAT) ð3/29/92
277ðð C+ VALUES (:NXTORD, ð3/29/92
278ðð C+ :NXTORL, ð3/29/92
279ðð C+ :PRTTBL, ð3/29/92
28ððð C+ :QTYORD, ð3/29/92
281ðð C+ 'O') ð3/29/92
282ðð C/END-EXEC ð3/29/92
283ðð C\ ð3/29/92
284ðð C \INOF IFEQ \ON ð3/29/92
285ðð C EXCPTHEADER ð3/29/92
286ðð C END ð3/29/92
287ðð C EXCPTDETAIL ð3/29/92
288ðð C\ ð3/29/92
289ðð C ENDSR ORDERP ð3/29/92
29ððð C\ ð3/29/92
291ðð C\ ð3/29/92
292ðð C STRORD BEGSR ð3/29/92
293ðð C\ \---------------\ ð3/29/92
294ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
295ðð C\ SEARCHES FOR NEXT ORDER NUMBER AND MAKES AN ORDER HEADER \ ð3/29/92
296ðð C\ USING THAT NUMBER. WRITES ALSO HEADERS ON REPORT. \ ð3/29/92
297ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
298ðð C\ ð3/29/92
299ðð C/EXEC SQL ð3/29/92
3ðððð C+ SELECT (MAX(ORDER_NUM) + 1) ð3/29/92
3ð1ðð C+ INTO :NXTORD ð3/29/92
3ð2ðð C+ FROM PART_ORDER ð3/29/92
3ð3ðð C/END-EXEC ð3/29/92
3ð4ðð C/EXEC SQL ð3/29/92
3ð5ðð C+ INSERT ð3/29/92
3ð6ðð C+ INTO PART_ORDER ð3/29/92
3ð7ðð C+ (ORDER_NUM, ð3/29/92
3ð8ðð C+ ORIGIN_LOC, ð3/29/92
3ð9ðð C+ ORDER_TYPE, ð3/29/92
31ððð C+ ORDER_STAT, ð3/29/92
311ðð C+ CREAT_TIME) ð3/29/92
312ðð C+ VALUES (:NXTORD, ð3/29/92
313ðð C+ :LOC, ð3/29/92
314ðð C+ 'R', ð3/29/92
315ðð C+ 'O', ð3/29/92
316ðð C+ CURRENT TIMESTAMP) ð3/29/92
317ðð C/END-EXEC ð3/29/92
318ðð C ENDSR STRORD ð3/29/92
319ðð C\ ð3/29/92
32ððð C\ ð3/29/92
321ðð C CLEANU BEGSR ð3/29/92
322ðð C\ \---------------\ ð3/29/92
323ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
324ðð C\ THIS SUBROUTINE IS ONLY REQUIRED IN A TEST ENVIRONMENT ð3/29/92
325ðð C\ TO RESET THE DATA TO PERMIT RE-RUNNING OF THE TEST ð3/29/92
326ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
327ðð C\ ð3/29/92
328ðð C/EXEC SQL ð3/29/92
329ðð C+ CONNECT TO :REMODB ð3/29/92
33ððð C/END-EXEC ð3/29/92
331ðð C/EXEC SQL ð3/29/92
332ðð C+ DELETE ð3/29/92
333ðð C+ FROM PART_ORDLN ð3/29/92
334ðð C+ WHERE ORDER_NUM IN ð3/29/92
335ðð C+ (SELECT ORDER_NUM ð3/29/92
336ðð C+ FROM PART_ORDER ð3/29/92
337ðð C+ WHERE ORDER_TYPE = 'R') ð3/29/92
338ðð C/END-EXEC ð3/29/92

Figure A-3 (Part 5 of 7). RPG Program Example



5738PW1 V2R1M1 92ð327 SEU SOURCE LISTING ð3/29/92 17:12:48 PAGE 7
SOURCE FILE . . . . . . . DRDA/QRPGSRC
MEMBER . . . . . . . . . DDBPT6RG
SEQNBR\...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 ...+... 9 ...+... ð
339ðð C/EXEC SQL ð3/29/92
34ððð C+ DELETE ð3/29/92
341ðð C+ FROM PART_ORDER ð3/29/92
342ðð C+ WHERE ORDER_TYPE = 'R' ð3/29/92
343ðð C/END-EXEC ð3/29/92
344ðð C/EXEC SQL ð3/29/92
345ðð C+ COMMIT ð3/29/92
346ðð C/END-EXEC ð3/29/92
347ðð C\ ð3/29/92
348ðð C ENDSR CLEANU ð3/29/92
349ðð C\ ð3/29/92
35ððð C\ ð3/29/92
351ðð C\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
352ðð O\ OUTPUTLINES FOR THE REPORT \ ð3/29/92
353ðð O\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
354ðð O\ ð3/29/92
355ðð OQPRINT E 2 HEADER ð3/29/92
356ðð O + ð '\\\\\ ROP PROCESSING' ð3/29/92
357ðð O + 1 'REPORT \\\\\' ð3/29/92
358ðð O\ ð3/29/92
359ðð OQPRINT E 2 HEADER ð3/29/92
36ððð O + ð ' ORDER NUMBER = ' ð3/29/92
361ðð O NXTORDZ + ð ð3/29/92
362ðð O\ ð3/29/92
363ðð OQPRINT E 1 HEADER ð3/29/92
364ðð O + ð '------------------------' ð3/29/92
365ðð O + ð '---------' ð3/29/92
366ðð O\ ð3/29/92
367ðð OQPRINT E 1 HEADER ð3/29/92
368ðð O + ð ' LINE ' ð3/29/92
369ðð O + ð 'PART ' ð3/29/92
37ððð O + ð 'QTY ' ð3/29/92
371ðð O\ ð3/29/92
372ðð OQPRINT E 1 HEADER ð3/29/92
373ðð O + ð ' NUMBER ' ð3/29/92
374ðð O + ð 'NUMBER ' ð3/29/92
375ðð O + ð 'REQUESTED ' ð3/29/92
376ðð O\ ð3/29/92
377ðð OQPRINT E 11 HEADER ð3/29/92
378ðð O + ð '------------------------' ð3/29/92
379ðð O + ð '---------' ð3/29/92
38ððð O\ ð3/29/92
381ðð OQPRINT EF1 DETAIL ð3/29/92
382ðð O NXTORLZ + 4 ð3/29/92
383ðð O PRTTBL + 4 ð3/29/92
384ðð O QTYORD1 + 4 ð3/29/92
385ðð O\ ð3/29/92
386ðð OQPRINT T 2 LRN99 ð3/29/92
387ðð O + ð '------------------------' ð3/29/92
388ðð O + ð '---------' ð3/29/92
389ðð OQPRINT T 1 LRN99 ð3/29/92
39ððð O + ð 'NUMBER OF LINES ' ð3/29/92
391ðð O + ð 'CREATED = ' ð3/29/92
392ðð O NXTORLZ + ð ð3/29/92
393ðð O\ ð3/29/92
394ðð OQPRINT T 1 LRN99 ð3/29/92
395ðð O + ð '------------------------' ð3/29/92
396ðð O + ð '---------' ð3/29/92
397ðð O\ ð3/29/92
398ðð OQPRINT T 2 LRN99 ð3/29/92
399ðð O + ð '\\\\\\\\\' ð3/29/92
4ðððð O + ð ' END OF PROGRAM ' ð3/29/92
4ð1ðð O + ð '\\\\\\\\' ð3/29/92
4ð2ðð O\ ð3/29/92

Figure A-3 (Part 6 of 7). RPG Program Example


5738PW1 V2R1M1 92ð327 SEU SOURCE LISTING ð3/29/92 17:12:48 PAGE 8
SOURCE FILE . . . . . . . DRDA/QRPGSRC
MEMBER . . . . . . . . . DDBPT6RG
SEQNBR\...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 ...+... 9 ...+... ð
4ð3ðð OQPRINT E 2 ERRLIN ð3/29/92
4ð4ðð O + ð '\\ ERROR \\' ð3/29/92
4ð5ðð O + ð '\\ ERROR \\' ð3/29/92
4ð6ðð O + ð '\\ ERROR \\' ð3/29/92
4ð7ðð OQPRINT E 1 ERRLIN ð3/29/92
4ð8ðð O + ð '\ SQLCOD:' ð3/29/92
4ð9ðð O SQLCODM + ð ð3/29/92
41ððð O 33 '\' ð3/29/92
411ðð OQPRINT E 1 ERRLIN ð3/29/92
412ðð O + ð '\ SQLSTATE:' ð3/29/92
413ðð O SQLSTT + 2 ð3/29/92
414ðð O 33 '\' ð3/29/92
415ðð OQPRINT E 1 ERRLIN ð3/29/92
416ðð O + ð '\\ ERROR \\' ð3/29/92
417ðð O + ð '\\ ERROR \\' ð3/29/92
418ðð O + ð '\\ ERROR \\' ð3/29/92

Figure A-3 (Part 7 of 7). RPG Program Example



COBOL Program Example
5738PW1 V2R1M1 92ð327 SEU SOURCE LISTING ð3/29/92 17:12:35 PAGE 1
SOURCE FILE . . . . . . . DRDA/QLBLSRC
MEMBER . . . . . . . . . DDBPT6CB
SEQNBR\...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 ...+... 9 ...+... ð
1ðð IDENTIFICATION DIVISION.
2ðð \------------------------
3ðð PROGRAM-ID. DDBPT6CB. ð3/29/92
4ðð \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
5ðð \ MODULE NAME = DDBPT6CB ð3/29/92
6ðð \
7ðð \ DESCRIPTIVE NAME = D-DB SAMPLE APPLICATION
8ðð \ REORDER POINT PROCESSING
9ðð \ AS/4ðð ð3/29/92
1ððð \ COBOL
11ðð \
12ðð \ FUNCTION = THIS MODULE PROCESS THE PART_STOCK TABLE AND
13ðð \ FOR EACH PART BELOW THE ROP (REORDER POINT)
14ðð \ CHECKS THE EXISTING ORDERS AND SHIPMENTS, ð3/29/92
15ðð \ CREATES A SUPPLY ORDER AND PRINTS A REPORT. ð3/29/92
16ðð \
17ðð \ DEPENDENCIES = NONE ð3/29/92
18ðð \
19ðð \ INPUT = PARAMETERS EXPLICITLY PASSED TO THIS FUNCTION:
2ððð \
21ðð \ LOCAL-DB LOCAL DB NAME ð3/29/92
22ðð \ REMOTE-DB REMOTE DB NAME ð3/29/92
23ðð \
24ðð \ TABLES = PART-STOCK - LOCAL ð3/29/92
25ðð \ PART_ORDER - REMOTE ð3/29/92
26ðð \ PART_ORDLN - REMOTE ð3/29/92
27ðð \ SHIPMENTLN - REMOTE ð3/29/92
28ðð \ ð3/29/92
29ðð \ CRTSQLCBL SPECIAL PARAMETERS ð3/29/92
3ððð \ PGM(DDBPT6CB) RDB(remotedbname) OPTION(\APOST \APOSTSQL) ð3/29/92
31ðð \ ð3/29/92
32ðð \ INVOKE BY : CALL DDBPT6CB PARM(localdbname remotedbname) ð3/29/92
33ðð \ ð3/29/92
34ðð \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
35ðð ENVIRONMENT DIVISION.
36ðð \---------------------
37ðð INPUT-OUTPUT SECTION.
38ðð FILE-CONTROL.
39ðð SELECT RELAT ASSIGN TO PRINTER-QPRINT. ð3/29/92
4ððð DATA DIVISION.
41ðð \--------------
42ðð FILE SECTION.
43ðð \------------- ð3/29/92
44ðð FD RELAT
45ðð RECORD CONTAINS 33 CHARACTERS
46ðð LABEL RECORDS ARE OMITTED
47ðð DATA RECORD IS REPREC.
48ðð ð1 REPREC PIC X(33).
49ðð WORKING-STORAGE SECTION.
5ððð \------------------------ ð3/29/92
51ðð \ PRINT LINE DEFINITIONS ð3/29/92
52ðð ð1 LINEð PIC X(33) VALUE SPACES.
53ðð ð1 LINE1 PIC X(33) VALUE
54ðð '\\\\\ ROP PROCESSING REPORT \\\\\'.
55ðð ð1 LINE2.
56ðð ð5 FILLER PIC X(18) VALUE ' ORDER NUMBER = '.
57ðð ð5 MASKð PIC ZZZ9.
58ðð ð5 FILLER PIC X(11) VALUE SPACES.
59ðð ð1 LINE3 PIC X(33) VALUE
6ððð '---------------------------------'.
61ðð ð1 LINE4 PIC X(33) VALUE
62ðð ' LINE PART QTY '.
63ðð ð1 LINE5 PIC X(33) VALUE
64ðð ' NUMBER NUMBER REQUESTED '.

Figure A-4 (Part 1 of 6). COBOL Program Example



5738PW1 V2R1M1 92ð327 SEU SOURCE LISTING ð3/29/92 17:12:35 PAGE 2
SOURCE FILE . . . . . . . DRDA/QLBLSRC
MEMBER . . . . . . . . . DDBPT6CB
SEQNBR\...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 ...+... 9 ...+... ð
65ðð ð1 LINE6.
66ðð ð5 FILLER PIC XXXX VALUE SPACES.
67ðð ð5 MASK1 PIC ZZZ9.
68ðð ð5 FILLER PIC XXXX VALUE SPACES.
69ðð ð5 PART-TABLE PIC XXXXX.
7ððð ð5 FILLER PIC XXXX VALUE SPACES.
71ðð ð5 MASK2 PIC Z,ZZZ,ZZZ.ZZ.
72ðð ð1 LINE7.
73ðð ð5 FILLER PIC X(26) VALUE
74ðð 'NUMBER OF LINES CREATED = '.
75ðð ð5 MASK3 PIC ZZZ9.
76ðð ð5 FILLER PIC XXX VALUE SPACES.
77ðð ð1 LINE8 PIC X(33) VALUE
78ðð '\\\\\\\\\ END OF PROGRAM \\\\\\\\'.
79ðð \ MISCELLANEOUS DEFINITIONS ð3/29/92
8ððð ð1 WHAT-TIME PIC X VALUE '1'.
81ðð 88 FIRST-TIME VALUE '1'.
82ðð ð1 CONTL PIC S9999 COMP-4 VALUE ZEROS. ð3/29/92
83ðð ð1 CONTD PIC S9999 COMP-4 VALUE ZEROS. ð3/29/92
84ðð ð1 RTCODE1 PIC S9999 COMP-4 VALUE ZEROS. ð3/29/92
85ðð ð1 RTCODE2 PIC S9999 COMP-4. ð3/29/92
86ðð ð1 NEXT-NUM PIC S9999 COMP-4. ð3/29/92
87ðð ð1 IND-NULL PIC S9999 COMP-4. ð3/29/92
88ðð ð1 LOC-TABLE PIC X(16).
89ðð ð1 ORD-TABLE PIC S9999 COMP-4. ð3/29/92
9ððð ð1 ORL-TABLE PIC S9999 COMP-4. ð3/29/92
91ðð ð1 QUANT-TABLE PIC S9(9) COMP-4. ð3/29/92
92ðð ð1 QTY-TABLE PIC S9(9) COMP-4. ð3/29/92
93ðð ð1 ROP-TABLE PIC S9(9) COMP-4. ð3/29/92
94ðð ð1 EOQ-TABLE PIC S9(9) COMP-4. ð3/29/92
95ðð ð1 QTY-REQ PIC S9(9) COMP-4. ð3/29/92
96ðð ð1 QTY-REC PIC S9(9) COMP-4. ð3/29/92
97ðð \ CONSTANT FOR LOCATION NUMBER ð3/29/92
98ðð ð1 XPARM. ð3/29/92
99ðð ð5 LOC PIC X(4) VALUE 'SQLA'. ð3/29/92
1ðððð \ DEFINITIONS FOR ERROR MESSAGE HANDLING ð3/29/92
1ð1ðð ð1 ERROR-MESSAGE. ð3/29/92
1ð2ðð ð5 MSG-ID. ð3/29/92
1ð3ðð 1ð MSG-ID-1 PIC X(2) ð3/29/92
1ð4ðð VALUE 'SQ'. ð3/29/92
1ð5ðð 1ð MSG-ID-2 PIC 99999. ð3/29/92
1ð6ðð \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
1ð7ðð \ SQLCA INCLUDE \ ð3/29/92
1ð8ðð \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
1ð9ðð EXEC SQL INCLUDE SQLCA END-EXEC.
11ððð ð3/29/92
111ðð LINKAGE SECTION. ð3/29/92
112ðð \---------------- ð3/29/92
113ðð ð1 LOCAL-DB PIC X(18). ð3/29/92
114ðð ð1 REMOTE-DB PIC X(18). ð3/29/92
115ðð ð3/29/92
116ðð PROCEDURE DIVISION USING LOCAL-DB REMOTE-DB. ð3/29/92
117ðð \------------------ ð3/29/92
118ðð \\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
119ðð \ SQL CURSOR DECLARATION \ ð3/29/92
12ððð \\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
121ðð \ RE-POSITIONABLE CURSOR : POSITION AFTER LAST PART_NUM ð3/29/92
122ðð EXEC SQL DECLARE NEXT_PART CURSOR FOR
123ðð SELECT PART_NUM,
124ðð PART_QUANT,
125ðð PART_ROP,
126ðð PART_EOQ
127ðð FROM PART_STOCK
128ðð WHERE PART_ROP > PART_QUANT
129ðð AND PART_NUM > :PART-TABLE ð3/29/92
13ððð ORDER BY PART_NUM ASC ð3/29/92
131ðð END-EXEC.

Figure A-4 (Part 2 of 6). COBOL Program Example



5738PW1 V2R1M1 92ð327 SEU SOURCE LISTING ð3/29/92 17:12:35 PAGE 3
SOURCE FILE . . . . . . . DRDA/QLBLSRC
MEMBER . . . . . . . . . DDBPT6CB
SEQNBR\...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 ...+... 9 ...+... ð
132ðð \ CURSOR FOR ORDER LINES ð3/29/92
133ðð EXEC SQL DECLARE NEXT_ORDER_LINE CURSOR FOR
134ðð SELECT A.ORDER_NUM,
135ðð ORDER_LINE,
136ðð QUANT_REQ
137ðð FROM PART_ORDLN A, ð3/29/92
138ðð PART_ORDER B
139ðð WHERE PART_NUM = :PART-TABLE
14ððð AND LINE_STAT <> 'C' ð3/29/92
141ðð AND A.ORDER_NUM = B.ORDER_NUM
142ðð AND ORDER_TYPE = 'R'
143ðð END-EXEC.
144ðð \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
145ðð \ SQL RETURN CODE HANDLING\ ð3/29/92
146ðð \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
147ðð EXEC SQL WHENEVER SQLERROR GO TO DB-ERROR END-EXEC.
148ðð EXEC SQL WHENEVER SQLWARNING CONTINUE END-EXEC. ð3/29/92
149ðð ð3/29/92
15ððð MAIN-PROGRAM-PROC. ð3/29/92
151ðð \------------------ ð3/29/92
152ðð PERFORM START-UP THRU START-UP-EXIT. ð3/29/92
153ðð PERFORM MAIN-PROC THRU MAIN-EXIT UNTIL RTCODE1 = 1ðð. ð3/29/92
154ðð END-OF-PROGRAM. ð3/29/92
155ðð \--------------- ð3/29/92
156ðð \\\\ ð3/29/92
157ðð EXEC SQL CONNECT RESET END-EXEC. ð3/29/92
158ðð \\\\
159ðð CLOSE RELAT.
16ððð GOBACK.
161ðð MAIN-PROGRAM-EXIT. EXIT. ð3/29/92
162ðð \------------------ ð3/29/92
163ðð ð3/29/92
164ðð START-UP. ð3/29/92
165ðð \---------- ð3/29/92
166ðð OPEN OUTPUT RELAT. ð3/29/92
167ðð \\\\ ð3/29/92
168ðð EXEC SQL COMMIT END-EXEC. ð3/29/92
169ðð \\\\ ð3/29/92
17ððð PERFORM CLEAN-UP THRU CLEAN-UP-EXIT. ð3/29/92
171ðð \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
172ðð \ CONNECT TO LOCAL DATABASE \ ð3/29/92
173ðð \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
174ðð \\\\ ð3/29/92
175ðð EXEC SQL CONNECT TO :LOCAL-DB END-EXEC. ð3/29/92
176ðð \\\\ ð3/29/92
177ðð START-UP-EXIT. EXIT. ð3/29/92
178ðð \------------ ð3/29/92
179ðð EJECT
18ððð MAIN-PROC.
181ðð \---------
182ðð EXEC SQL OPEN NEXT_PART END-EXEC. ð3/29/92
183ðð EXEC SQL
184ðð FETCH NEXT_PART
185ðð INTO :PART-TABLE,
186ðð :QUANT-TABLE,
187ðð :ROP-TABLE,
188ðð :EOQ-TABLE
189ðð END-EXEC.
19ððð IF SQLCODE = 1ðð
191ðð MOVE 1ðð TO RTCODE1 ð3/29/92
192ðð PERFORM TRAILER-PROC THRU TRAILER-EXIT ð3/29/92
193ðð ELSE
194ðð MOVE ð TO RTCODE2
195ðð MOVE ð TO QTY-REQ
196ðð MOVE ð TO QTY-REC
197ðð \ --- IMPLICIT "CLOSE" CAUSED BY COMMIT --- ð3/29/92
198ðð \\\\ ð3/29/92
199ðð EXEC SQL COMMIT END-EXEC ð3/29/92
2ðððð \\\\ ð3/29/92

Figure A-4 (Part 3 of 6). COBOL Program Example



5738PW1 V2R1M1 92ð327 SEU SOURCE LISTING ð3/29/92 17:12:35 PAGE 4
SOURCE FILE . . . . . . . DRDA/QLBLSRC
MEMBER . . . . . . . . . DDBPT6CB
SEQNBR\...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 ...+... 9 ...+... ð
2ð1ðð \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
2ð2ðð \ CONNECT TO REMOTE DATABASE \ ð3/29/92
2ð3ðð \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
2ð4ðð \\\\ ð3/29/92
2ð5ðð EXEC SQL CONNECT TO :REMOTE-DB END-EXEC ð3/29/92
2ð6ðð \\\\ ð3/29/92
2ð7ðð EXEC SQL OPEN NEXT_ORDER_LINE END-EXEC ð3/29/92
2ð8ðð PERFORM UNTIL RTCODE2 = 1ðð
2ð9ðð EXEC SQL ð3/29/92
21ððð FETCH NEXT_ORDER_LINE
211ðð INTO :ORD-TABLE,
212ðð :ORL-TABLE,
213ðð :QTY-TABLE
214ðð END-EXEC
215ðð IF SQLCODE = 1ðð
216ðð MOVE 1ðð TO RTCODE2
217ðð EXEC SQL CLOSE NEXT_ORDER_LINE END-EXEC
218ðð ELSE
219ðð ADD QTY-TABLE TO QTY-REQ
22ððð EXEC SQL
221ðð SELECT SUM(QUANT_RECV) ð3/29/92
222ðð INTO :QTY-TABLE:IND-NULL
223ðð FROM SHIPMENTLN ð3/29/92
224ðð WHERE ORDER_LOC = :LOC
225ðð AND ORDER_NUM = :ORD-TABLE
226ðð AND ORDER_LINE = :ORL-TABLE
227ðð END-EXEC
228ðð IF IND-NULL NOT < ð
229ðð ADD QTY-TABLE TO QTY-REC
23ððð END-IF
231ðð END-IF
232ðð END-PERFORM
233ðð IF ROP-TABLE > QUANT-TABLE + QTY-REQ - QTY-REC
234ðð PERFORM ORDER-PROC THRU ORDER-EXIT
235ðð END-IF
236ðð END-IF.
237ðð \\\\ ð3/29/92
238ðð EXEC SQL COMMIT END-EXEC. ð3/29/92
239ðð \\\\ ð3/29/92
24ððð \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
241ðð \ RECONNECT TO LOCAL DATABASE \ ð3/29/92
242ðð \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
243ðð \\\\ ð3/29/92
244ðð EXEC SQL CONNECT TO :LOCAL-DB END-EXEC. ð3/29/92
245ðð \\\\ ð3/29/92
246ðð MAIN-EXIT. EXIT.
247ðð \---------------
248ðð ORDER-PROC.
249ðð \----------
25ððð IF FIRST-TIME
251ðð MOVE '2' TO WHAT-TIME
252ðð PERFORM CREATE-ORDER-PROC THRU CREATE-ORDER-EXIT. ð3/29/92
253ðð ADD 1 TO CONTL.
254ðð EXEC SQL
255ðð INSERT
256ðð INTO PART_ORDLN ð3/29/92
257ðð (ORDER_NUM,
258ðð ORDER_LINE,
259ðð PART_NUM,
26ððð QUANT_REQ,
261ðð LINE_STAT)
262ðð VALUES (:NEXT-NUM,
263ðð :CONTL,
264ðð :PART-TABLE,
265ðð :EOQ-TABLE,
266ðð 'O')
267ðð END-EXEC.
268ðð PERFORM DETAIL-PROC THRU DETAIL-EXIT.
269ðð ORDER-EXIT. EXIT.
27ððð \----------------

Figure A-4 (Part 4 of 6). COBOL Program Example



5738PW1 V2R1M1 92ð327 SEU SOURCE LISTING ð3/29/92 17:12:35 PAGE 5
SOURCE FILE . . . . . . . DRDA/QLBLSRC
MEMBER . . . . . . . . . DDBPT6CB
SEQNBR\...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 ...+... 9 ...+... ð
271ðð ð3/29/92
272ðð CREATE-ORDER-PROC. ð3/29/92
273ðð \------------------ ð3/29/92
274ðð \GET NEXT ORDER NUMBER ð3/29/92
275ðð EXEC SQL ð3/29/92
276ðð SELECT (MAX(ORDER_NUM) + 1) ð3/29/92
277ðð INTO :NEXT-NUM:IND-NULL ð3/29/92
278ðð FROM PART_ORDER ð3/29/92
279ðð END-EXEC. ð3/29/92
28ððð IF IND-NULL < ð ð3/29/92
281ðð MOVE 1 TO NEXT-NUM. ð3/29/92
282ðð EXEC SQL ð3/29/92
283ðð INSERT ð3/29/92
284ðð INTO PART_ORDER ð3/29/92
285ðð (ORDER_NUM, ð3/29/92
286ðð ORIGIN_LOC, ð3/29/92
287ðð ORDER_TYPE, ð3/29/92
288ðð ORDER_STAT, ð3/29/92
289ðð CREAT_TIME) ð3/29/92
29ððð VALUES (:NEXT-NUM, ð3/29/92
291ðð :LOC, 'R', 'O', ð3/29/92
292ðð CURRENT TIMESTAMP) ð3/29/92
293ðð END-EXEC. ð3/29/92
294ðð MOVE NEXT-NUM TO MASKð. ð3/29/92
295ðð PERFORM HEADER-PROC THRU HEADER-EXIT. ð3/29/92
296ðð CREATE-ORDER-EXIT. EXIT. ð3/29/92
297ðð \------------------ ð3/29/92
298ðð ð3/29/92
299ðð DB-ERROR. ð3/29/92
3ðððð \-------- ð3/29/92
3ð1ðð PERFORM ERROR-MSG-PROC THRU ERROR-MSG-EXIT. ð3/29/92
3ð2ðð \\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
3ð3ðð \ ROLLBACK THE LUW \ ð3/29/92
3ð4ðð \\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
3ð5ðð EXEC SQL WHENEVER SQLERROR CONTINUE END-EXEC. ð3/29/92
3ð6ðð \\\\ ð3/29/92
3ð7ðð EXEC SQL ROLLBACK WORK END-EXEC. ð3/29/92
3ð8ðð \\\\ ð3/29/92
3ð9ðð PERFORM END-OF-PROGRAM THRU MAIN-PROGRAM-EXIT. ð3/29/92
31ððð \ -- NEXT LINE INCLUDED TO RESET THE "GO TO" DEFAULT -- ð3/29/92
311ðð EXEC SQL WHENEVER SQLERROR GO TO DB-ERROR END-EXEC. ð3/29/92
312ðð ð3/29/92
313ðð ERROR-MSG-PROC. ð3/29/92
314ðð \---------- ð3/29/92
315ðð MOVE SQLCODE TO MSG-ID-2. ð3/29/92
316ðð DISPLAY 'SQL STATE =' SQLSTATE ' SQLCODE =' MSG-ID-2. ð3/29/92
317ðð \ -- ADD HERE ANY ADDITIONAL ERROR MESSAGE HANDLING -- ð3/29/92
318ðð ERROR-MSG-EXIT. EXIT. ð3/29/92
319ðð \---------------- ð3/29/92
32ððð ð3/29/92
321ðð \\\\\\\\\\\\\\\\\\\ ð3/29/92
322ðð \ REPORT PRINTING \ ð3/29/92
323ðð \\\\\\\\\\\\\\\\\\\ ð3/29/92
324ðð HEADER-PROC. ð3/29/92
325ðð \----------- ð3/29/92
326ðð WRITE REPREC FROM LINE1 AFTER ADVANCING PAGE.
327ðð WRITE REPREC FROM LINE2 AFTER ADVANCING 3 LINES.
328ðð WRITE REPREC FROM LINE3 AFTER ADVANCING 2 LINES.
329ðð WRITE REPREC FROM LINE4 AFTER ADVANCING 1 LINES.
33ððð WRITE REPREC FROM LINE5 AFTER ADVANCING 1 LINES.
331ðð WRITE REPREC FROM LINE3 AFTER ADVANCING 1 LINES.
332ðð WRITE REPREC FROM LINEð AFTER ADVANCING 1 LINES.
333ðð HEADER-EXIT. EXIT.
334ðð \-----------------

Figure A-4 (Part 5 of 6). COBOL Program Example



5738PW1 V2R1M1 92ð327 SEU SOURCE LISTING ð3/29/92 17:12:35 PAGE 7
SOURCE FILE . . . . . . . DRDA/QLBLSRC
MEMBER . . . . . . . . . DDBPT6CB
SEQNBR\...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+... 8 ...+... 9 ...+... ð
335ðð DETAIL-PROC.
336ðð \-----------
337ðð ADD 1 TO CONTD.
338ðð IF CONTD > 5ð
339ðð MOVE 1 TO CONTD
34ððð PERFORM HEADER-PROC THRU HEADER-EXIT
341ðð END-IF
342ðð MOVE CONTL TO MASK1.
343ðð MOVE EOQ-TABLE TO MASK2.
344ðð WRITE REPREC FROM LINE6 AFTER ADVANCING 1 LINES.
345ðð DETAIL-EXIT. EXIT.
346ðð \-----------------
347ðð TRAILER-PROC.
348ðð \------------
349ðð MOVE CONTL TO MASK3.
35ððð WRITE REPREC FROM LINE3 AFTER ADVANCING 2 LINES.
351ðð WRITE REPREC FROM LINE7 AFTER ADVANCING 2 LINES.
352ðð WRITE REPREC FROM LINE3 AFTER ADVANCING 2 LINES.
353ðð WRITE REPREC FROM LINE8 AFTER ADVANCING 1 LINES.
354ðð TRAILER-EXIT. EXIT.
355ðð \------------------
356ðð \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
357ðð \ THIS PARAGRAPH IS ONLY REQUIRED IN A TEST ENVIRONMENT\ ð3/29/92
358ðð \ TO RESET THE DATA TO PERMIT RE-RUNNING OF THE TEST \ ð3/29/92
359ðð \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
36ððð CLEAN-UP. ð3/29/92
361ðð \--------- ð3/29/92
362ðð \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
363ðð \ CONNECT TO REMOTE DATABASE \ ð3/29/92
364ðð \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ ð3/29/92
365ðð \\\\ ð3/29/92
366ðð EXEC SQL CONNECT TO :REMOTE-DB END-EXEC. ð3/29/92
367ðð \\\\ ð3/29/92
368ðð \---------------------DELETE ORDER ROWS FOR RERUNABILITY ð3/29/92
369ðð EXEC SQL ð3/29/92
37ððð DELETE ð3/29/92
371ðð FROM PART_ORDLN ð3/29/92
372ðð WHERE ORDER_NUM IN ð3/29/92
373ðð (SELECT ORDER_NUM ð3/29/92
374ðð FROM PART_ORDER ð3/29/92
375ðð WHERE ORDER_TYPE = 'R') ð3/29/92
376ðð END-EXEC. ð3/29/92
377ðð EXEC SQL ð3/29/92
378ðð DELETE ð3/29/92
379ðð FROM PART_ORDER ð3/29/92
38ððð WHERE ORDER_TYPE = 'R' ð3/29/92
381ðð END-EXEC. ð3/29/92
382ðð \\\\ ð3/29/92
383ðð EXEC SQL COMMIT END-EXEC. ð3/29/92
384ðð \\\\ ð3/29/92
385ðð CLEAN-UP-EXIT. EXIT. ð3/29/92
386ðð \------------- ð3/29/92
\ \ \ \ E N D O F S O U R C E \ \ \ \

Figure A-4 (Part 6 of 6). COBOL Program Example

C Program Example

1ðð /\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\/ ð3/29/92
2ðð /\ MODULE NAME = DDBPT6C \/ ð3/29/92
3ðð /\ \/ ð3/29/92
4ðð /\ DESCRIPTIVE NAME: D-DB SAMPLE APPLICATION \/ ð3/29/92
5ðð /\ REORDER POINT PROCESSING \/ ð3/29/92
6ðð /\ AS/4ðð \/ ð3/29/92
7ðð /\ C/4ðð \/ ð3/29/92
8ðð /\ \/ ð3/29/92
9ðð /\ FUNCTION: THIS MODULE PROCESS THE PART_STOCK TABLE AND \/ ð3/29/92
1ððð /\ FOR EACH PART BELOW THE ROP (REORDER POINT) \/ ð3/29/92
11ðð /\ CREATES A SUPPLY ORDER. \/ ð3/29/92
12ðð /\ \/ ð3/29/92
13ðð /\ OUTPUT: BATCH : SPOOLFILE \/ ð3/29/92
14ðð /\ INTER : DISPLY \/ ð3/29/92
15ðð /\ \/ ð3/29/92
16ðð /\ LOCAL TABLES: PART_STOCK \/ ð3/29/92
17ðð /\ \/ ð3/29/92
18ðð /\ REMOTE TABLES: PART_ORDER, PART_ORDLN, SHIPMENTLN \/ ð3/29/92
19ðð /\ \/ ð3/29/92
2ððð /\ COMPILE OPTIONS: \/ ð3/29/92
21ðð /\ CRTSQLC PGM(DDBPT6C) COMMIT(\CHG) RDB(rdbname) \/ ð3/29/92
22ðð /\ \/ ð3/29/92
23ðð /\ INVOKED BY: CALL PGM(DDBPT6C) PARM('lcldbname' 'rmtdbname') \/ ð3/29/92
24ðð /\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\/ ð3/29/92
25ðð ð3/29/92
26ðð #include <stdlib.h>
27ðð #include <string.h> ð3/29/92
28ðð #include <stdio.h>
29ðð ð3/29/92
3ððð EXEC SQL BEGIN DECLARE SECTION; ð3/29/92
31ðð ð3/29/92
32ðð char loc [4] = "SQLA"; /\ dealer's database name \/
33ðð char remote_db [18] = " "; /\ sample remote database \/
34ðð char local_db [18] = " "; /\ sample local database \/
35ðð ð3/29/92
36ðð char part_table [5] = " "; /\ part number in table part_stock \/
37ðð long quant_table; /\ quantity in stock, tbl part_stock \/ ð3/29/92
38ðð long rop_table; /\ reorder point , tbl part_stock \/ ð3/29/92
39ðð long eoq_table; /\ reorder quantity , tbl part_stock \/ ð3/29/92
4ððð ð3/29/92
41ðð short next_num; /\ next order nbr,table part_order \/ ð3/29/92
42ðð ð3/29/92
43ðð short ord_table; /\ order nbr. , tbl order_line \/ ð3/29/92
44ðð short orl_table; /\ order line , tbl order_line \/ ð3/29/92
45ðð long qty_table; /\ ordered quantity , tbl order_line \/ ð3/29/92
46ðð long line_count = ð; /\ total number of order lines \/ ð3/29/92
47ðð short ind_null; /\ null indicator for qty_table \/ ð3/29/92
48ðð short contl = ð; /\ continuation line, tbl order_line \/ ð3/29/92
49ðð ð3/29/92
5ððð EXEC SQL END DECLARE SECTION; ð3/29/92
51ðð EXEC SQL INCLUDE SQLCA; ð3/29/92
52ðð EXEC SQL WHENEVER SQLERROR go to error_tag; ð3/29/92
53ðð EXEC SQL WHENEVER SQLWARNING CONTINUE; ð3/29/92
54ðð ð3/29/92
55ðð /\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\/ ð3/29/92
56ðð /\ Other Variables \/ ð3/29/92
57ðð /\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\/ ð3/29/92
58ðð ð3/29/92
59ðð char first_time, what_time; ð3/29/92
6ððð long qty_rec = ð, qty_req = ð; ð3/29/92
61ðð ð3/29/92

Figure A-5 (Part 1 of 4). C Program Example

62ðð /\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\/ ð3/29/92
63ðð /\ Function Declaration \/ ð3/29/92
64ðð /\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\/ ð3/29/92
65ðð ð3/29/92
66ðð declare_cursor () { ð3/29/92
67ðð ð3/29/92
68ðð /\ SQL Cursor declaration and reposition for local UW \/ ð3/29/92
69ðð ð3/29/92
7ððð EXEC SQL DECLARE NEXT_PART CURSOR FOR ð3/29/92
71ðð SELECT PART_NUM, PART_QUANT, PART_ROP, PART_EOQ ð3/29/92
72ðð FROM PART_STOCK ð3/29/92
73ðð WHERE PART_ROP > PART_QUANT ð3/29/92
74ðð AND PART_NUM > :part_table ð3/29/92
75ðð ORDER BY PART_NUM; ð3/29/92
76ðð /\ SQL Cursor declaration and connect for RUW \/ ð3/29/92
77ðð ð3/29/92
78ðð EXEC SQL DECLARE NEXT_OLINE CURSOR FOR ð3/29/92
79ðð SELECT A.ORDER_NUM, ORDER_LINE, QUANT_REQ ð3/29/92
8ððð FROM PART_ORDLN A, ð3/29/92
81ðð PART_ORDER B ð3/29/92
82ðð WHERE PART_NUM = :part_table ð3/29/92
83ðð AND LINE_STAT <> 'C' ð3/29/92
84ðð AND A.ORDER_NUM = B.ORDER_NUM ð3/29/92
85ðð AND ORDER_TYPE = 'R'; ð3/29/92
86ðð ð3/29/92
87ðð /\ upline exit function in connectable state \/ ð3/29/92
88ðð ð3/29/92
89ðð EXEC SQL COMMIT; ð3/29/92
9ððð goto function_exit; ð3/29/92
91ðð error_tag: ð3/29/92
92ðð error_function(); ð3/29/92
93ðð function_exit: ; ð3/29/92
94ðð } /\ function declare_cursor \/ ð3/29/92
95ðð ð3/29/92
96ðð delete_for_rerun () { ð3/29/92
97ðð ð3/29/92
98ðð /\ Clean up for rerunability in test environment \/ ð3/29/92
99ðð EXEC SQL CONNECT TO :remote_db; ð3/29/92
1ðððð EXEC SQL DELETE ð3/29/92
1ð1ðð FROM PART_ORDLN ð3/29/92
1ð2ðð WHERE ORDER_NUM IN ð3/29/92
1ð3ðð (SELECT ORDER_NUM ð3/29/92
1ð4ðð FROM PART_ORDER ð3/29/92
1ð5ðð WHERE ORDER_TYPE = 'R'); ð3/29/92
1ð6ðð EXEC SQL DELETE ð3/29/92
1ð7ðð FROM PART_ORDER ð3/29/92
1ð8ðð WHERE ORDER_TYPE = 'R'; ð3/29/92
1ð9ðð /\ upline exit function in connectable state \/ ð3/29/92
11ððð EXEC SQL COMMIT; ð3/29/92
111ðð EXEC SQL CONNECT TO :local_db; ð3/29/92
112ðð goto function_exit; ð3/29/92
113ðð error_tag: ð3/29/92
114ðð error_function(); ð3/29/92
115ðð function_exit: ; ð3/29/92
116ðð } /\ function delete_for_rerun \/ ð3/29/92
117ðð ð3/29/92
118ðð calculate_order_quantity () { ð3/29/92
119ðð ð3/29/92
12ððð /\ available qty = Stock qty + qty in order - qty received \/ ð3/29/92
121ðð ð3/29/92
122ðð EXEC SQL OPEN NEXT_PART; ð3/29/92
123ðð EXEC SQL FETCH NEXT_PART ð3/29/92
124ðð INTO :part_table, :quant_table, :rop_table, :eoq_table; ð3/29/92
125ðð ð3/29/92
126ðð if (sqlca.sqlcode == 1ðð) { ð3/29/92
127ðð printf("--------------------------------\n"); ð3/29/92
128ðð printf("NUMBER OF LINES CREATED = %d\n",line_count); ð3/29/92
129ðð printf("--------------------------------\n"); ð3/29/92
13ððð printf("\\\\\ END OF PROGRAM \\\\\\\\\\n"); ð3/29/92
131ðð rop_table = ð; /\ no (more) orders to process \/ ð3/29/92
132ðð } ð3/29/92

Figure A-5 (Part 2 of 4). C Program Example

133ðð else { qty_rec = ð; ð3/29/92
134ðð qty_req = ð; ð3/29/92
135ðð /\ \/ ð3/29/92
136ðð EXEC SQL COMMIT; ð3/29/92
137ðð EXEC SQL CONNECT TO :remote_db; ð3/29/92
138ðð EXEC SQL OPEN NEXT_OLINE; ð3/29/92
139ðð do { ð3/29/92
14ððð EXEC SQL FETCH NEXT_OLINE ð3/29/92
141ðð INTO :ord_table, :orl_table, :qty_table; ð3/29/92
142ðð qty_rec = qty_rec + qty_table; ð3/29/92
143ðð } while(sqlca.sqlcode != 1ðð); ð3/29/92
144ðð EXEC SQL CLOSE NEXT_OLINE; ð3/29/92
145ðð EXEC SQL SELECT SUM(QUANT_RECV) ð3/29/92
146ðð INTO :qty_table:ind_null ð3/29/92
147ðð FROM SHIPMENTLN ð3/29/92
148ðð WHERE ORDER_LOC = :loc ð3/29/92
149ðð AND ORDER_NUM = :ord_table ð3/29/92
15ððð AND ORDER_LINE = :orl_table; ð3/29/92
151ðð if (ind_null != ð) ð3/29/92
152ðð qty_rec = qty_rec + qty_table; ð3/29/92
153ðð } /\ end of else branch \/ ð3/29/92
154ðð goto function_exit; ð3/29/92
155ðð error_tag: ð3/29/92
156ðð error_function(); ð3/29/92
157ðð function_exit: ; ð3/29/92
158ðð } /\ end of calculate_order_quantity \/ ð3/29/92
159ðð ð3/29/92
16ððð process_order () { ð3/29/92
161ðð ð3/29/92
162ðð /\ insert order and order_line in remote database \/ ð3/29/92
163ðð ð3/29/92
164ðð if (contl == ð) { ð3/29/92
165ðð ð3/29/92
166ðð EXEC SQL SELECT (MAX(ORDER_NUM) + 1) ð3/29/92
167ðð INTO :next_num ð3/29/92
168ðð FROM PART_ORDER; ð3/29/92
169ðð EXEC SQL INSERT INTO PART_ORDER ð3/29/92
17ððð (ORDER_NUM, ORIGIN_LOC, ORDER_TYPE, ORDER_STAT, CREAT_TIME) ð3/29/92
171ðð VALUES (:next_num, :loc, 'R', 'O', CURRENT TIMESTAMP); ð3/29/92
172ðð printf("\\\\\ ROP PROCESSING \\\\\\\\\\n"); ð3/29/92
173ðð printf("ORDER NUMBER = %d \n\n",next_num); ð3/29/92
174ðð printf("--------------------------------\n"); ð3/29/92
175ðð printf(" LINE PART QTY \n"); ð3/29/92
176ðð printf(" NBR NBR REQUESTED\n"); ð3/29/92
177ðð printf("--------------------------------\n"); ð3/29/92
178ðð contl = contl + 1; ð3/29/92
179ðð } /\ if contl == ð \/ ð3/29/92
18ððð ð3/29/92
181ðð EXEC SQL INSERT INTO PART_ORDLN ð3/29/92
182ðð (ORDER_NUM, ORDER_LINE, PART_NUM, QUANT_REQ, LINE_STAT) ð3/29/92
183ðð VALUES (:next_num, :contl, :part_table, :eoq_table, 'O'); ð3/29/92
184ðð line_count = line_count + 1; ð3/29/92
185ðð printf(" %d %.5s %d\n", ð3/29/92
186ðð line_count,part_table,eoq_table); ð3/29/92
187ðð contl = contl + 1; ð3/29/92
188ðð ð3/29/92
189ðð /\ upline exit function in connectable state \/ ð3/29/92
19ððð EXEC SQL COMMIT; ð3/29/92
191ðð /\ RECONNECT TO LOCAL DATABASE \/ ð3/29/92
192ðð EXEC SQL CONNECT TO :local_db; ð3/29/92
193ðð ð3/29/92
194ðð goto function_exit; ð3/29/92
195ðð error_tag: ð3/29/92
196ðð error_function(); ð3/29/92
197ðð function_exit: ; ð3/29/92
198ðð } /\ end of function process_order \/ ð3/29/92
199ðð ð3/29/92

Figure A-5 (Part 3 of 4). C Program Example

2ðððð error_function () { ð3/29/92
2ð1ðð /\ \/ ð3/29/92
2ð2ðð printf("\\\\\\\\\\\\\\\\\\\\\\\\\n"); ð3/29/92
2ð3ðð printf("\ SQL ERROR \\n"); ð3/29/92
2ð4ðð printf("\\\\\\\\\\\\\\\\\\\\\\\\\n"); ð3/29/92
2ð5ðð printf("SQLCODE = %d\n",sqlca.sqlcode); ð3/29/92
2ð6ðð printf("SQLSTATE = %5s",sqlca.sqlstate); ð3/29/92
2ð7ðð printf("\n\\\\\\\\\\\\\\\\\\\\\\\n"); ð3/29/92
2ð8ðð EXEC SQL WHENEVER SQLERROR CONTINUE; ð3/29/92
2ð9ðð EXEC SQL ROLLBACK; ð3/29/92
21ððð EXEC SQL CONNECT RESET; ð3/29/92
211ðð exit (999); ð3/29/92
212ðð } ð3/29/92
213ðð ð3/29/92
214ðð main(int argc, char \argv[]) {
215ðð memcpy(local_db,argv[1],strlen(argv[1]));
216ðð memcpy(remote_db,argv[2],strlen(argv[2]));
217ðð /\ clean up \/ ð3/29/92
218ðð declare_cursor(); ð3/29/92
219ðð delete_for_rerun(); ð3/29/92
22ððð ð3/29/92
221ðð /\ main-line, state is connectable \/ ð3/29/92
222ðð ð3/29/92
223ðð do { ð3/29/92
224ðð calculate_order_quantity (); ð3/29/92
225ðð if (rop_table > quant_table + qty_req - qty_rec) { ð3/29/92
226ðð process_order(); ð3/29/92
227ðð quant_table = qty_req = qty_rec = ð; ð3/29/92
228ðð } ð3/29/92
229ðð } while (sqlca.sqlcode == ð); ð3/29/92
23ððð /\ RECONNECT TO APPLICATION SERVER \/ ð3/29/92
231ðð EXEC SQL CONNECT RESET; ð3/29/92
232ðð exit(ð); ð3/29/92
233ðð ð3/29/92
234ðð ð3/29/92
235ðð } /\ end of main \/ ð3/29/92
\ \ \ \ E N D O F S O U R C E \ \ \ \

Figure A-5 (Part 4 of 4). C Program Example

Program Output Example


***** ROP PROCESSING *********
ORDER NUMBER = 6
--------------------------------
 LINE   PART      QTY
 NBR    NBR       REQUESTED
--------------------------------
   1    14020     100
   2    14030     50
   3    18020     50
   4    21010     50
   5    37020     40
--------------------------------
NUMBER OF LINES CREATED = 5
--------------------------------
***** END OF PROGRAM *********
Figure A-6. Program Output Example



Appendix B. Cross-Platform Access Using DRDA
This book concentrates on describing AS/400 support for distributed relational data-
bases in a network of AS/400 systems (a like environment). Many distributed rela-
tional database implementations exist in a network of different DRDA-supporting
platforms. This appendix provides a list of tips and techniques you may need to
consider when using the AS/400 system in an unlike DRDA environment.

| This appendix describes some conditions you need to consider when working with
| another specific IBM product. It is not intended to be a comprehensive list. Many
| problems or conditions like the ones described here depend significantly on your
| application. You can get more information on the differences between the various
| IBM platforms from the IBM SQL Reference Volume 2, SC26-8416, or the DRDA
| Application Programming Guide, SC26-4773.

CCSID Considerations
When you work with a distributed relational database in an unlike environment,
coded character set identifiers (CCSIDs) need to be set up and used properly. The
AS/400 system is shipped with a default value that may need to be changed to
work in an unlike environment. Also, the AS/400 system supports some CCSIDs for
DBCS that are not supported by the DB2 and DB2 Server for VM database man-
agers. This section discusses these two conditions and provides you with a way to
work around them.

AS/400 System Value QCCSID


The AS/400 system is shipped with a QCCSID value set to 65535. Data tagged
with this CCSID is not to be converted by the receiving system. You may not be
able to connect to an unlike system when your AS/400 AR is using this CCSID.
Also, you may not be able to use source files that are tagged with this CCSID to
create applications on unlike systems.

As stated in “Coded Character Set Identifier (CCSID)” on page 10-16, the CCSID
used at connection time is determined by the job CCSID. When a job begins, its
CCSID is determined by the user profile the job is running under. The user profile
can, and as a default does, use the system value QCCSID.

| If you are connecting to a system that does not support the system default CCSID,
| you need to change your job CCSID. You can change the job CCSID by using the
| Change Job (CHGJOB) command. However, this solution applies only to the job you
| are currently working with; the next time you start a job, you must change the job
| CCSID again.
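
For example, the following command changes the job CCSID; the value 37 (US English
EBCDIC) is only an illustration, and you should substitute a CCSID appropriate for
your data and the system you are connecting to:

CHGJOB CCSID(37)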

A more permanent solution is to change the CCSID designated by the user profiles
used in the distributed relational database. When you change the user profiles you
affect only those users that need to have their data converted. If you are working
with a DB2/400 AS, you need to change the user profile that the AS uses.

The default CCSID value in a user profile is *SYSVAL. This references the
QCCSID system value. You can change this system value, and therefore the
default value used by all user profiles, with the Change System Value
(CHGSYSVAL) command. If you do this, you would want to select a CCSID that
represents most (if not all) of the users on your system. For a list of CCSIDs
available and the languages they represent, see the National Language Support book.
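
The following commands are a minimal sketch of these two approaches. The user
profile name SPIFFY and the CCSID value 37 are placeholders only; substitute values
that fit your users and national language:

CHGUSRPRF USRPRF(SPIFFY) CCSID(37)   /* Change one user's default CCSID  */
CHGSYSVAL SYSVAL(QCCSID) VALUE(37)   /* Change the system-wide default   */

Either change takes effect for jobs started after the change is made.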

If you suspect that you are working with a system that does not support a CCSID
used by your job or your system, look for the following indicators in a job log or
SQLCA:
Message    SQ30073
SQLCODE    -30073
SQLSTATE   58017
Text       Distributed Data Management (DDM) parameter X'0035' not supported.

Message    SQL0332
SQLCODE    -332
SQLSTATE   57017
Text       Total conversion between CCSID &1 and CCSID &2 not valid.

CCSID Conversion Considerations for DB2 Connect Connections


| When you connect from DB2 Connect to a DB2 for AS/400 AS, columns tagged
| with CCSID 65535 are not converted from EBCDIC to ASCII. If the files that contain
| these columns do not contain any columns that have a CCSID explicitly identified,
| the CCSID of all character columns can be changed to another CCSID value. To
| change the CCSID, use the Change Physical File (CHGPF) command. If you have
| logical files built over the physical file, follow the directions given in the recovery
| section of the error message (CPD322D) that you get.
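
A sketch of such a change follows; the library, file, and CCSID values shown are
placeholders for your own names:

CHGPF FILE(SPIFFY/INVENT) CCSID(37)

If logical files are built over the physical file, message CPD322D describes the
additional steps that are required.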

CCSID Conversion Considerations for DB2 and DB2 Server for VM Database Managers
One of the differences between a DB2/400 database and other DB2* databases is
that the AS/400 system supports a larger set of CCSIDs. This can lead to errors
when the systems attempt to perform character conversion on the data (SQLCODE
–332 and SQLSTATE 57017).

Certain fields in the DB2/400 SQL catalog tables may be defined to have a
DBCS-open data type. This is a data type that allows both double-byte character
set (DBCS) and single-byte character set (SBCS) characters. The CCSID for these
field types is based on the default CCSID shipped with the system.

When these fields are selected from a DB2 or DB2 Server for VM AR, the SELECT
statement may fail because the DB2 and DB2 Server for VM databases may not
support the conversion to this CCSID.

To avoid this error, you must change the DB2 database or the DB2 Server for VM
AR to run with either:
Ÿ The same mixed-byte CCSID as the DBCS-OPEN fields in the AS/400 SQL
catalog tables.
Ÿ A CCSID that the system allows conversion of data to when the data is from
the mixed-byte CCSID of the DBCS-OPEN fields in the AS/400 SQL catalog
tables. This CCSID may be a single-byte CCSID if the data in the AS/400 SQL
catalog tables DBCS-OPEN fields is all single-byte data.



This requires some analysis of the CCSID conversions supported on the DB2 or
DB2 Server for VM system so you can make the correct changes to your system.
See the DB2 for OS/390 Administration Guide for specific information on how to
handle this error.

Interactive SQL and Query Management Setup on Unlike Application Servers
Interactive SQL and Query Manager/400 create packages on unlike application
servers based on the user's run options (date format, commitment control, and so
on) as they are needed. These packages are created in a collection called
QSQL400 on the application server. The package name is QSQLabcd, where ‘abcd’
corresponds to numbers that refer to the specific options used for that
package. Values for ‘abcd’ correspond to options as follows:

Position  Option                       Value

a         Date Format                  0 = ISO, JIS date format
                                       1 = USA date format
                                       2 = EUR date format

b         Time Format                  0 = JIS time format
                                       1 = USA time format
                                       2 = EUR, ISO time format

c         Commitment Control,          0 = *CS commitment control, period decimal delimiter
          Decimal Delimiter            1 = *CS commitment control, comma decimal delimiter
                                       2 = *RR commitment control, period decimal delimiter
                                       3 = *RR commitment control, comma decimal delimiter

d         String Delimiter,            0 = apostrophe string delimiter, single byte character subtype
          Default Character Subtype    1 = apostrophe string delimiter, double byte character subtype
                                       2 = double quote string delimiter, single byte character subtype
                                       3 = double quote string delimiter, double byte character subtype

For example, a package created from interactive SQL to an unlike application
server with the following options: USA date format, USA time format, commitment
control level of *CS, a period for the decimal delimiter, an apostrophe for the string
delimiter, and a default character subtype of single byte would have the name
'QSQL1100'. Once a package is created with a particular set of options, all
subsequent interactive SQL or Query Manager/400 users running with those same
options against that application server will use that package.

| As has been pointed out elsewhere, you need to have an updatable connection to
| the AS when the package is created. You may need to do a RELEASE ALL and
| COMMIT before connecting to the AS to have the package created.



Creating Interactive SQL Packages on DB2 Server for VM
On DB2 Server for VM, a collection name is synonymous with a user ID. To create
packages to be used with interactive SQL or Query Manager/400 on a DB2 Server
for VM application server, create a user ID of QSQL400 on the OS/400 system.
This user ID can be used to create all the necessary packages on the DB2 Server
for VM application server. Users can then use their own user IDs to access DB2
Server for VM through interactive SQL or Query Manager/400 on the OS/400.

| FAQs from Users of DB2 Connect


This section answers the following frequently asked questions from users of work-
stations who want to access AS/400 data through DB2 Connect:
Ÿ Do AS/400 files have to be journaled?
Ÿ When will query data be blocked for better performance?
Ÿ Is the DB2/400 Query Manager and SQL Development Kit product required on
an AS/400 system to create collections and tables?
Ÿ How do you interpret an SQLCODE and the associated tokens reported in a
DBM SQL0969N error message?
Ÿ How can host variable type in WHERE clauses affect performance?
Ÿ What considerations must be given to CCSIDs? (See “CCSID Considerations”
on page B-1.)
Ÿ Why are no rows returned when I perform a query? One potential cause of this
problem is a failure to add an entry for the AS/400 in the DB2 Connect Data-
base Communication Services Directory.

Do AS/400 Files Have to Be Journaled?


The answer to this question is closely related to the question in “When Will Query
Data Be Blocked for Better Performance?” Journaling is not required if the client
application is using an isolation level of no-commit (NC) or uncommitted read (UR),
and if the DB2/400 SQL function determines that the query data can be blocked. In
that case commitment control is not enabled, which makes journaling unnecessary.

The DB2/2 precompiler parameter that specifies uncommitted read is /I=UR. When
using the DB2/2 command line processor, the command DBM CHANGE SQLISL
TO UR sets the isolation level to uncommitted read.

The command 'export SQLJSETP="-i=n"' can be used with DB2/6000 before per-
forming a program precompile or bind to request the no-commit (NC - do not use
commitment control) isolation level.

When Will Query Data Be Blocked for Better Performance?


The query data will be blocked if none of the following conditions are true:
Ÿ The cursor is updatable (see Note 1).
Ÿ The cursor is potentially updatable (see Note 2).
Ÿ The /K=NO precompile or bind option was used on SQLPREP or SQLBIND.



Unless you force single-row protocol with the /K=NO precompile/bind option,
blocking will occur in both of the following cases:
Ÿ The cursor is read-only (see Note 3).
Ÿ All of the following are true:
– There is no FOR UPDATE OF clause in the SELECT, and
– There are no UPDATE or DELETE WHERE CURRENT OF statements
against the cursor in the program, and
– Either the program does not contain dynamic SQL statements or /K=ALL
was used.

Notes:
1. A cursor is updatable if it is not read-only (see Note 3), and one of the following
is true:
Ÿ The select statement contained the FOR UPDATE OF clause, or
Ÿ There exists in the program an UPDATE or DELETE WHERE CURRENT
OF against the cursor.
2. A cursor is potentially updatable if it is not read-only (see Note 3), and if the
program includes any dynamic statement, and the /K=UNAMBIG precompile or
bind option was used on SQLPREP or SQLBIND.
3. A cursor is read-only if one or more of the following conditions is true:
Ÿ The DECLARE CURSOR statement specified an ORDER BY clause but
did not specify a FOR UPDATE OF clause.
Ÿ The DECLARE CURSOR statement specified a FOR FETCH ONLY clause.
Ÿ One or more of the following conditions are true for the cursor or a view or
logical file referenced in the outer subselect to which the cursor refers:
– The outer subselect contains a DISTINCT keyword, GROUP BY clause,
HAVING clause, or a column function in the outer subselect.
– The select contains a join function.
– The select contains a UNION operator.
– The select contains a subquery that refers to the same table as the
table of the outer-most subselect.
– The select contains a complex logical file that had to be copied to a
temporary file.
– All of the selected columns are expressions, scalar functions, or con-
stants.
– All of the columns of a referenced logical file are input only.

Is the DB2/400 Query Manager and SQL Development Kit Product Needed for Collection and Table Creation?
Working locally on an AS/400 system, it is possible to create an SQL collection
without having the DB2/400 Query Manager and SQL Development Kit product
installed. For example, to create the NULLID collection needed for part of the DB2
Connect installation you can:
1. Create a source file member containing the line:

CREATE COLLECTION NULLID
2. Perform a CRTQMQRY command referencing the above source file member.
3. Execute the CREATE statement using the STRQMQRY command.
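
For example, steps 2 and 3 might look like the following; the library name MYLIB and
the source file name QMQRYSRC are placeholders only:

CRTQMQRY QMQRY(MYLIB/NULLID) SRCFILE(MYLIB/QMQRYSRC) SRCMBR(NULLID)
STRQMQRY QMQRY(MYLIB/NULLID)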

It is also possible to create tables and execute other SQL statements with the
above approach as well. A REXX or Control Language program can improve the
usability of this approach. The following CL program is a simple example of the
type of thing that can be done.
PGM
MONMSG MSGID(CPF0000)
DLTQMQRY MYLIB/QMTEMP
STRSEU MYLIB/SRC QMTEMP
CRTQMQRY MYLIB/QMTEMP MYLIB/SRC
STRQMQRY MYLIB/QMTEMP
ENDPGM

When the SEU program involved in the preceding series of commands displays an
edit screen, enter an SQL statement and save the file. The program then attempts
to process and execute the statement.

How Do You Interpret an SQLCODE and the Associated Tokens Reported in a DBM SQL0969N Error Message?
The client support used with DB2 Connect returns message SQL0969N when
reporting host SQLCODEs and tokens for which it has no equivalent code. The
following is an example of message SQL0969N:
SQL0969N There is no message text corresponding to SQL error
"-7008" in the Database Manager message file on this workstation.
The error was returned from module "QSQOPEN" with original
tokens "TABLE1 PRODLIB1 3".

Use the AS/400 DSPMSGD command to interpret the code and tokens:
DSPMSGD SQL7008 MSGF(QSQLMSG)

Select option 1 (Display message text) and the system presents the Display For-
matted Message Text display. The three tokens in the message are represented by
&1, &2, and &3 in the display. The reason code in the example message is 3,
which points to Code 3 in the list at the bottom of the display.



                          Display Formatted Message Text
                                                           System:   RCHASLAI
 Message ID . . . . . . . . . :   SQL7008
 Message file . . . . . . . . :   QSQLMSG
   Library  . . . . . . . . . :     QSYS

 Message . . . . :   &1 in &2 not valid for operation.

 Cause . . . . . :   The reason code is &3. A list of reason codes follows:
   -- Code 1 indicates that the table has no members.
   -- Code 2 indicates that the table has been saved with storage free.
   -- Code 3 indicates that the table is not journaled, the table is
   journaled to a different journal than other tables being processed under
   commitment control, or that you do not have authority to the journal.
   -- Code 4 indicates that the table is in a production library but the user
   is in debug mode with UPDPROD(*NO); therefore, production tables may not be
   updated.
   -- Code 5 indicates that a table, view, or index is being created into a
   production library but the user is in debug mode with UPDPROD(*NO);
   therefore, tables, views, or indexes may not be created.
                                                                       More...
 Press Enter to Continue.

 F3=Exit   F11=Display unformatted message text   F12=Cancel

| Other Tips for Interoperating with Workstations Using DB2 Connect and DB2 UDB
| The following sections provide additional information for using DB2 for AS/400 with
| DB2 Connect and DB2 UDB. These tips were developed from experiences testing
| with the products on an OS/2 platform, but it is believed that they apply to all envi-
| ronments to which they have been ported.

| DB2 Connect versus DB2 UDB


| Users are sometimes confused over what products are needed to perform the
| DRDA Application Server function versus the Application Requester (client) func-
| tion. Here is how the function is distributed among the products:
| Ÿ DRDA-AR only:
| – DB2 Connect Personal Edition
| – DB2 Connect Enterprise Edition
| Ÿ DRDA-AS only:
| – DB2 Universal Database Workgroup Edition
| Ÿ Both DRDA-AS and DRDA-AR:
| – DB2 Universal Database Enterprise Edition
| – DB2 Universal Database Extended Enterprise Edition

| Proper Configuration and Maintenance Level


| Be sure to follow the installation and configuration instructions given in the product
| manuals carefully. Make sure that you have the most current level of the products.
| Apply the appropriate fix packs if not.



| Table and Collection Naming
| SQL tables accessed by DRDA applications have three-part names: the first part is
| the database name, the second part is a collection ID, and the third part is the base
| table name. The first two parts are optional. DB2 for AS/400 qualifies table names
| at the second level by a collection (or library) name. Tables reside in the DB2 for
| AS/400 database.

| There is only one database (see footnote 1) for each AS/400. However, in DB2 UDB, tables are
| qualified by a user ID (that of the creator of the table), and reside in one of possibly
| multiple databases on the platform. DB2 Connect has the same notion of using the
| user ID for the collection ID.

| A dynamic query from DB2 Connect to DB2 for AS/400 will use the user ID of the
| target side job (on the AS/400) for the default collection name, if the name of the
| queried table was specified without a collection name. This may not be what the
| user expects, and it can cause the table not to be found.

| A dynamic query from DB2 for AS/400 to DB2 UDB would have an implied table
| qualifier if it is not specified in the query in the form qualifier.table-name. The
| second-level UDB table qualifier defaults to the user ID of the user making the
| query.

| You may want to create the DB2 UDB databases and tables with a common user
| ID. Remember, for UDB there are no physical collections as there are in DB2 for
| AS/400; there is only a table qualifier, which is the user ID of the creator.

| Granting Privileges
| For any programs created on an AS/400 that access a UDB database,
| remember to issue the following UDB commands (perhaps from the command line
| processor):
| 1. GRANT ALL PRIVILEGES ON TABLE table-name TO user (possibly 'PUBLIC'
| for user)
| 2. GRANT EXECUTE ON PACKAGE package-name (usually the AS/400 program
| name) TO user (possibly 'PUBLIC' for user)

| APPC Communications Setup


| OS/400 communications must be configured properly, with a controller and device
| created for the workstation, when using APPC with either DB2 Connect as an AR,
| or UDB as an AS.

| Setting Up the RDB Directory


| Add an entry in the RDB directory for each UDB database an AS/400 will connect
| to. Use the ADDRDBDIRE command. The RDB name is the UDB database name.

| When using APPC communications, the remote location name is the name of the
| workstation.

| 1 The fact that DATABASE is a synonym for COLLECTION in the CREATE COLLECTION SQL statement in DB2 for AS/400 has
| created some confusion about there being only one database per AS/400.



| When using TCP/IP, the remote location name is the domain name of the work-
| station, or its IP address. The port used by the UDB DRDA server is typically not
| 446, the well-known DRDA port that the AS/400 server uses (*DRDA).

| Consult the UDB product documentation to determine the port number. A common
| value used is 30000. An example DSPRDBDIRE screen showing a properly
| configured RDB entry for a UDB server follows.
| Display Relational Database Detail
| Relational database . . . . . . : SAMPLE
| Remote location:
| Remote location . . . . . . . : 9.5.36.17
| Type . . . . . . . . . . . . : *IP
| Port number or service name . : 30000
| Text . . . . . . . . . . . . . . : My UDB server
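
An entry like the one shown above could be added with a command similar to the
following sketch. The database name, IP address, and port are taken from the example
display; prompt the ADDRDBDIRE command (F4) to verify the exact parameter syntax
on your release:

ADDRDBDIRE RDB(SAMPLE) RMTLOCNAME('9.5.36.17' *IP) PORT(30000)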

| Setting Up the SQL Package for DB2 Connect


| Before using DB2 Connect to access data on DB2 for AS/400, you must create
| SQL packages on the AS/400 for application programs and for DB2 Connect utili-
| ties.

| The DB2 PREP command can be used to process an application program source
| file with embedded SQL. This processing will create a modified source file con-
| taining host language calls for the SQL statements and it will, by default, create an
| SQL package in the database you're currently connected to.

| To bind DB2 Connect to a DB2 for AS/400 server:


| 1. CONNECT TO rdbname
| 2. BIND path@ddcs400.lst BLOCKING ALL SQLERROR CONTINUE MESSAGES
| DDCS400.MGS GRANT PUBLIC
| Replace 'path' in the path@ddcs400.lst parameter above with the default path
| C:\SQLLIB\BND\ (c:/sqllib/bin/ on non-INTEL platforms), or with your value if
| you did not install to the default directory.
| Note: PTF SF23624 is needed for OS/400 V3R1 to avoid a -901 SQL code
| from the OS/400 database on the third bind file in the list.
| 3. CONNECT RESET

| Using Interactive SQL to DB2 UDB


| To use interactive SQL, you need the DB2 Query Manager and SQL Development
| Kit product installed on OS/400. To access data on UDB:
| 1. When starting a session with STRSQL, use session attributes of
| NAMING(*SQL), DATFMT(*ISO), and TIMFMT(*ISO). Other formats besides
| *ISO work, but not all, and what is used for the date format (DATFMT) must
| also be used for the time format (TIMFMT).
| 2. Note the correspondence between COLLECTIONs on the AS/400, and table
| qualifier (the creator's user ID) for UDB.
| 3. For the first interactive session, you must do this sequence of SQL statements
| to get a package created on UDB: (1) RELEASE ALL, (2) COMMIT, and (3)
| CONNECT TO rdbname (where 'rdbname' is replaced with a particular data-
| base).
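
For example, the session attributes in item 1 can be specified directly on the Start SQL
(STRSQL) command. This is only a sketch of one valid combination:

STRSQL NAMING(*SQL) DATFMT(*ISO) TIMFMT(*ISO)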



| As part of your setup for the use of interactive SQL, you may also want to GRANT
| EXECUTE ON PACKAGE QSQL400.QSQLabcd TO PUBLIC (or to specific users),
| so that others can use the SQL PKG created on the PC for interactive SQL. The
| actual value for 'abcd' in the above GRANT statement can be determined from the
| table presented in “Interactive SQL and Query Management Setup on Unlike Appli-
| cation Servers” on page B-3, which gives the package names for various sets of
| options in effect when the package is created. For example, you would GRANT
| EXECUTE ON PACKAGE QSQL400.QSQL0200 TO some-user if the following
| options were in use when you created the package: *ISO for date, *ISO for time,
| *CS for commitment control, apostrophe for string delimiter, and single byte for
| character subtype.



Appendix C. Interpreting Trace Job and FFDC Data
This appendix provides additional problem-analysis information. It is useful to spe-
cialists responsible for problem determination. It is also for suppliers of software
products designed to conform to the Distributed Relational Database Architecture
who want to test connectivity to an AS/400 system.

This appendix contains an example of the RW component trace data from a job
trace with an explanation of the trace data output. Some of this information is
helpful with interpreting communications trace data. This appendix also shows an
example of a first-failure data capture printout of storage, with explanations of the
output.

Interpreting Data Entries for the RW Component of Trace Job


It is the RW component of the OS/400 licensed program that includes most of the
DRDA support. This component produces certain types of diagnostic information
when the Trace Job (TRCJOB) command is issued with TRCTYPE(*ALL) or
TRCTYPE(*DATA). RW trace points are of the type shown in Figure C-1.
RW trace points can be located easily by doing a find operation using the string ‘>>’
as the search argument. The end of the data dumped at each trace point can be
determined by looking for the ‘<<<...’ delimiter characters. There are one or more of
the ‘<’ delimiter characters at the end of the data, enough to fill out the last line.

DATA FF 6E6ED9E6D8E84ðD9C37Aðð16Dð52ððð1ðð1ð22ð5ððð61149ðððð \>>RWQY RC: } \


DATA FF ððð621ð22417ðð25Dð53ððð1ðð1F241AðC76Dðð5ððð231ððð3ðA \ } } \
DATA FF ððð8ð971Eð54ððð1Dðððð1ð671FðEððððððð2CDð53ððð1ðð2624 \ \ } ð\ } \
DATA FF 1BFFððððððð1ððF1F1F1411ðððððððððððððFFððððððð2ððF2F2 \ 111 22 \
DATA FF F2412ððððððððððððððð26Dð52ððð1ðð2ð22ðBððð61149ððð4ðð \2 } \
DATA FF 16211ðC4C2F2C5E2E8E24ð4ð4ð4ð4ð4ð4ð4ð4ð4ð4ððð56Dðð3ðð \ DB2ESYS } \
DATA FF ð1ðð5ð24ð8ðððððððð64FðF2FðFðFðC4E2D5E7D9C6D54ðððC4C2 \ & ð2ðððDSNXRFN DB \
DATA FF F2C5E2E8E24ð4ð4ð4ð4ð4ð4ð4ð4ð4ð4ðFFFFFF92ðððððððððððð \2ESYS k \
DATA FF ððððFFFFFFFFðððððððððððððððð4ð4ð4ð4ð4ð4ð4ð4ð4ð4ð4ððð \ \
DATA FF ðððððð4C4C4C4C4C4C4C4C4C4C4C4C4C4C4C4C4C4C4C4C4C4C4C \ <<<<<<<<<<<<<<<<<<<<<< \

Figure C-1. An Example of Job Trace RW Component Information
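
The trace shown in Figure C-1 was produced with the Trace Job (TRCJOB) command.
A minimal sequence for collecting similar data in your own job is sketched below; the
trace records are typically written to a spooled file when the trace is turned off:

TRCJOB SET(*ON) TRCTYPE(*ALL)    /* Start tracing the current job        */
                                 /* ... run the failing DRDA request ... */
TRCJOB SET(*OFF)                 /* End the trace and produce the output */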

Note: There is an exception to the use of the ‘<’ delimiters to determine the end of
data. In certain rare circumstances where a received data stream is being
dumped, the module that writes the trace data is unable to determine where
the end of the data stream is. In that case, the program dumps the entire
receive buffer, and as a warning that the length of the data dumped is
greater than that of the data stream, it replaces the ‘<<<...’ delimiter with a
string of ‘(’ characters.

Following the ‘>>’ prefix is a 7-character string that identifies the trace point. The
first 2 characters, ‘RW’, identify the component. The second 2 characters identify
the RW function being performed. The ‘QY’ indicates the query function which cor-
responds to the DDM commands OPNQRY, CNTQRY, and CLSQRY. The ‘EX’
indicates the EXECUTE function which corresponds to the DDM commands
EXCSQLSTT, EXCSQLIMM, and PRPSQLSTT.

Which program module corresponds to each of these functions depends on
whether the job trace was taken at the application requester (AR) end of the
distributed SQL access operation, or at the application server (AS) end. The
modules performing the process and query functions at the AR are QRWSEXEC and
QRWSQRY. The modules at the AS are QRWTEXEC and QRWTQRY.

The last 2 characters of the 7-byte trace point identifier indicate the nature of the
dumped data or the point at which the dump is taken. For example, SN corre-
sponds to the data stream sent from an AR or an AS, and RC corresponds to the
data stream received by an AR.

Analyzing the RW Trace Data Example


The example in Figure C-1 on page C-1 shows the data stream received during a
distributed SQL query function. This particular trace was run at the AR end of the
connection. Therefore, the associated program module that produced the data is
QRWSQRY.

The following discussion examines the elements that make up the data stream in
the example. For more information on the interpretation of DRDA data streams, see
the Distributed Relational Database Architecture Reference and the Distributed
Data Management Level 4.0 Architecture Reference books.

The trace data follows the ‘:’ marking the end of the trace point identifier. In this
example, the first 6 bytes of the data stream contain the DDM data stream structure
(DSS) header. The first 2 bytes of this DSS header are a length field. The third
byte, X'D0', is the registered SNA architecture identifier for all DDM data. The
fourth byte is the format identifier (explained in more detail later). The fifth and sixth
bytes contain the DDM request correlation identifier.

The next 2 bytes, X'0010' (decimal 16), give the length of the next DDM object,
which in this case is identified by the X'2205' that follows it and is the code point
for the OPNQRYRM reply message.

Following the 16-byte reply message is a 6-byte DSS header for the reply objects
that follow the reply message. The first reply object is identified by the X'241A'
code point. It is a QRYDSC object. The second reply object in the example is a
QRYDTA structure identified by the X'241B' code point (split between two lines in
the trace output). As with the OPNQRYRM code point, the preceding 2 bytes give
the length of the object.

Looking more closely at the QRYDTA object, you can see a X'FF' following the
X'241B' code point. This represents a null SQLCAGRP (the form of an SQLCA
that flows on the wire). The null form of the SQLCAGRP indicates that it contains
no error or warning information about the associated data. In this case, the associ-
ated data is the row of data from an SQL SELECT operation. It follows the null
SQLCAGRP. Because rows of data as well as SQLCAGRPs are nullable, however,
the first byte that follows the null SQLCAGRP is an indicator containing X'00' that
indicates that the row of data is not null. The meaning of the null indicator byte is
determined by the first bit. A ‘1’ in this position indicates ‘null’. However, all 8 bits
are usually set on when an indicator represents a null object.

The format of the row of data is indicated by the preceding QRYDSC object. In this
case, the QRYDSC indicates that the row contains a nullable SMALLINT value, a
nullable CHAR(3) value, and a non-nullable double precision floating point value.
The second byte past the null SQLCAGRP is the null indicator associated with the
SMALLINT field. It indicates the field is not null, and the X'0001' following it is the
field data. The nullable CHAR(3) that follows is present and contains '111'. The
floating point value that follows next does not have an X'00' byte following it, since
it is defined to be not nullable.

A second row of data with a null SQLCAGRP follows the first, which in turn is fol-
lowed by another 6-byte DSS header. The second half of the format byte (X'2')
contained in that header indicates that the corresponding DSS is a REPLY. The
format byte of the previous DSS (X'53') indicated that it was an OBJECT DSS.
The ENDQRYRM reply message carried by the third DSS requires that it be con-
tained in a REPLY DSS. The ENDQRYRM code point is X'220B'. This reply
message contains a severity code of X'0004', and the name of the RDB that
returned the query data (‘DB2ESYS’).

Following the third DSS in this example is a fourth and final one. The format byte of
it is X'03'. The 3 indicates that it is an OBJECT DSS, and the 0 that precedes it
indicates that it is the last DSS of the chain (the chaining bits are turned off).

The object in this DSS is an SQLCARD containing a non-null SQLCAGRP. The first
byte following the X'2408' SQLCARD code point is the indicator telling us that the
SQLCAGRP is not null. The next 4 bytes, X'00000064', represent the +100
SQLCODE, which means that the query ended because a 'row not found' condition
was encountered. The rest of the fields correspond to other fields in an SQLCA. The
mapping of SQLCAGRP fields to SQLCA fields can be found in the Distributed
Relational Database Architecture Reference book.

Description of RW Trace Points

RWxx RC—Receive Data Stream Trace Point


This data stream contains a DDM response from an AS program. The DSS
headers are present in this data stream. This is the trace point shown in the above
example.

RWxx SN—Send Data Stream Trace Point


This data stream contains either a DDM request from an AR program, or a DDM
response from an AS program, as they exist before they are given to the lower
level CN component for addition of headers and transmission across the wire.
Besides content, the main difference between the trace information for receive data
streams and send data streams is that for the latter, the 6-byte DSS header infor-
mation is missing. For the first DSS in a send data stream trace area, the header is
omitted entirely, and for subsequent ones, 6 bytes of zeros are present which will
be overlaid by the header when it is constructed later by a CN component module.

RWQY S1—Partial Send Data Stream Trace Point 1


This trace point occurs in the NEWBLOCK routine of the QRWTQRY module, when
a new query block is needed in the building of QRYDTA in the like environment. In
the like environment a query block need not be filled up before it is transmitted, and
it is always put on the wire at this point so that the buffer space can be reused.
DSS headers are absent as in other send data streams.



RWQY S2—Partial Send Data Stream Trace Point 2
This trace point occurs in the NEWBLOCK routine of the QRWTQRY module, when
a new query block is needed in the building of QRYDTA in the unlike environment.
In the unlike environment all query blocks except the last one must be filled up
before construction of a new one can be started, and they are not transmitted until
all are built.

RWQY BP—Successful Fetch Trace Point


This trace point occurs in the FETCH routine of the QRWTQRY module, when a
call to the SQFCHCRS macro results in a non-null pointer to a BPCA structure,
implying that one or more records were returned in the BPCA buffer. The data
dumped is the BPCA structure (not the associated buffer), which among other
things indicates how many records were returned.

RWQY NB—Unsuccessful Fetch Trace Point


This trace point occurs in the FETCH routine of the QRWTQRY module, when a
call to the SQFCHCRS macro results in a null pointer to a BPCA structure, implying
that no records were returned in the BPCA buffer. The data dumped is the
SQLSTATE in the associated SQLCA area.

| RWAC RQ—Access RDB Request Trace Point


| This trace point occurs on entry to either the QRWSARDB module at a DRDA AR,
| or the QRWTARDB module at an AS. The content varies accordingly. If the trace is
| taken at an AS, the content of the data is a two-byte DDM code point identifying
| the DDM command to be executed by QRWTARDB, followed by the English name
| of the command, which can be SXXDSCT for disconnect, SXXCLNUP for cleanup,
| or ACCRDB for a connect. If the trace is taken at the AR, the content of the data is
| as follows:
| OFFSET  TYPE      CONTENT
| ------  --------  --------------------------------------------
|   0     BIN(8)    FUNCTION CODE
|   1     CHAR(8)   INTERPRETATION OF FUNCTION CODE
|   9     BIT(8)    BIT FLAGS
|  10     CHAR(1)   COMMIT SCOPE
|  11     CHAR(1)   SQLHOLD value
|  12     CHAR(1)   CMTFAIL value
|  13     BIN(15)   Index of last AFT entry processed by RWRDBCMT

| The function codes are:


| 0 'CONNECT ' ==> CONNECT
| 1 'DISCONNE' ==> DISCONNECT
| 2 'CLEANUP ' ==> CLEANUP
| 3 'RELEASE ' ==> RELEASE
| 4 'EXIT ' ==> EXIT
| 5 'PRECMT ' ==> PRE-COMMIT
| 6 'POSTCMT ' ==> POST-COMMIT
| 7 'PREROLLB' ==> PRE-ROLLBACK
| 8 'POSTROLL' ==> POST-ROLLBACK
| 9 'FORCED D' ==> FORCED DISCONNECT



| RW_ff_m—Application Requester Driver (ARD) Control Block
| Trace Point
| This trace point displays the contents of the ARD control blocks for the different
| types of ARD calls that can be made. It displays three different types of control
| blocks: input formats, output formats, and SQLCAs. The type of call and type of
| control block being displayed is encoded in the trace point ID. The form of the ID is
| RW_ff_m, where ff is the call-type ID, and m is the control block type code. The
| call-type IDs (ff) and control block type codes (m) are as follows:
| ff Call Type m Ctl Blk Type
| -- ---------------------- - ------------
| CN Connect I Input Format
| DI Disconnect O Output Format
| BB Begin Bind C SQLCA
| BS Bind Statement
| EB End Bind
| PS Prepare Statement
| PD Prepare and Describe Statement
| XD Execute Bound Statement with Data
| XB Execute Bound Statement without Data
| XP Execute Prepared Statement
| XI Execute Immediate
| OC Open Cursor
| FC Fetch from Cursor
| CC Close Cursor
| DS Describe a Statement
| DT Describe an Object

First-Failure Data Capture (FFDC)


The AS/400 system provides a way for you to capture and report error information
for the distributed relational database. This function is called first-failure data
capture (FFDC). The primary purpose of FFDC support is to provide extensive
information on errors detected in the DDM components of the OS/400 system from
which an Authorized Program Analysis Report (APAR) can be created.

You can also use this function to help you diagnose some system-related applica-
tion problems. By means of this function, key structures and the DDM data stream
are automatically dumped to the spooled file. This automatic dumping of error infor-
mation on the first occurrence of an error means that you do not have to create the
failure again to report it for service support. FFDC is active in both the application
requester and the application server.

One thing you should keep in mind is that not all negative SQLCODEs result in
dumps; only those that may indicate an APAR situation are dumped.

An FFDC Dump
The processing of alerts triggers FFDC data to be dumped. However, the FFDC
data is produced even when alerts or alert logging is disabled (using the CHGNETA
command). FFDC output can be disabled by setting the QSFWERRLOG system
value to *NOLOG, but it is strongly recommended that you do not disable the FFDC
dump process. If an FFDC dump has occurred, the informational message, "*Software
problem detected in Qxxxxxxx." (where Qxxxxxxx is an OS/400 module identifier),
is logged in the QSYSOPR message queue.
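
To verify that FFDC dumping is enabled and to check for the informational message,
commands such as the following can be used:

DSPSYSVAL SYSVAL(QSFWERRLOG)   /* Should show *LOG for FFDC dumps to occur      */
DSPMSG MSGQ(QSYSOPR)           /* Look for 'Software problem detected' messages */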



To see output from an FFDC dump operation, use the Work with Spooled Files
(WRKSPLF) command and view QPSRVDMP. The information contained in the
dump output includes:
Ÿ DDM function
Ÿ Specific information on the failing DDM module
Ÿ DDM source or target main control block
Ÿ DDM internal control structures
Ÿ DDM communication control blocks
Ÿ Input and output parameter list for the failing DDM module if at the application
requester
Ÿ The request and reply data stream
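
To locate and display a dump after it has been produced, you might use commands
like the following. The qualified job name on DSPSPLF is a placeholder for the job
that detected the error:

WRKSPLF SELECT(*ALL)
DSPSPLF FILE(QPSRVDMP) JOB(nnnnnn/USER/JOBNAME) SPLNBR(*LAST)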

The first 1K bytes of data are put in the error log. However, the data put in the
spooled file is always complete and easier to work with. If multiple DDM conversa-
tions have been established, the dump output may be contained in more than one
spooled file because of a limit of only 32 entries per spooled file. In this case,
there will be multiple “Software Problem” messages in the QSYSOPR message
queue that are prefixed with an asterisk (*).



Work With Error Log ð2/27/91 13:33:ð5 Page . . . : 1
.A/ .B/
5738SS1 V2R1M1 AS/4ðð DUMP ð9ð454/SRR/SRRS1 ð2/27/91 15:12:52 PAGE 1
DUMP TAKEN FOR DETECTED ERROR
.C/
.SUSPECTED- QRWSQRY LIBRARY- S
..LICENSED PROGRAM- 5738SS1 V2R1M1
..FUNCTION- 5ðð1
..LOAD- ðððð
..PTF-
.D/
.DETECTOR- QRWSQRY LIBRARY- S
..LICENSED PROGRAM- 5738SS1 V2R1M1
..FUNCTION- 5ðð1
..LOAD- ðððð
..PTF-
.SYMPTOM STRING-
.E/ .F/ .G/
5738 MSGCPF3E86 F/QRWSQRY RC1ðððððð2
.H/
.SPACE- ð1 .I/
ðððððð FðF17EC9 D5C4E74ð FðF27EC6 C3E34E4ð FðF37EC5 D4E2C74ð FðF47ED7 D9D4E24ð \ð1=INDX ð2=FCT+ ð3=EMSG ð4=PRMS \
ðððð2ð FðF57EE2 D5C4C24ð FðF67ED9 C3E5C24ð FðF77EC1 D9C4C24ð FðF87ED8 C4E3C14ð \ð5=SNDB ð6=RCVB ð7=ARDB ð8=QDTA \
ðððð4ð FðF97EC9 D5C4C14ð F1Fð7EE2 D8C3C14ð F1F17EE6 D9C3C14ð F1F27ED9 C6D4E34ð \ð9=INDA 1ð=SQCA 11=WRCA 12=RFMT \
ðððð6ð F1F37EC1 C6E34ð4ð F1F47EE2 D4C3C24ð F1F57EE3 E2D3D24ð F1F67EE5 C1D9E24ð \13=AFT 14=SMCB 15=TSLK 16=VARS \
ðððð8ð 4DD9C5E2 E34ðC9E2 4ðC3C3C2 6BD7C3C2 E26BE2C1 E36BD7D4 C1D76BD9 C3E5C24ð \(REST IS CCB,PCBS,SAT,PMAP,RCVB \
ððððAð D7C5D94ð C3C3C25D \PER CCB) \
.SPACE- .L/ ð2
ðððððð 2ððC1254 ð1ð2F5F8 FðFðF9 \ 58ðð9 \
.SPACE- ð4 .J/
ðððððð D8D7C1D9 D4E2ðððð D67FCð1D A6ðð65Að ðððððððð FðF1ðððð ððððð434 ðððððððð \QPARMS O" ð1 \
ðððð2ð D9C3C8C1 E2F2F6F6 4ð4ð4ð4ð 4ð4ð4ð4ð 4ð4ðE2D9 D94ð4ð4ð 4ð4ð4ð4ð 4ð4ð4ð4ð \RCHAS266 SRR \
ðððð4ð 4ð4ð4ð4ð D7E3F14ð 4ð4ð4ð4ð 4ð4ð4ð4ð 4ð4ð4ð4ð 4ð4ð7ððF 7ðDB33Cð ððBBððð5 \ PT1 \
.SPACE- ð5
ðððððð ðððððððð ðð56Dð51 ððð1ðð5ð 2ððCðð44 2113D9C3 C8C1E2F2 F6F64ð4ð 4ð4ð4ð4ð \ & RCHAS266 \
ðððð2ð 4ð4ð4ð4ð E2D9D94ð 4ð4ð4ð4ð 4ð4ð4ð4ð 4ð4ð4ð4ð 4ð4ðD7E3 F14ð4ð4ð 4ð4ð4ð4ð \ SRR PT1 \
ðððð4ð 4ð4ð4ð4ð 4ð4ð4ð4ð 7ððF7ðDB 33CðððBB ððð5ððð8 2114ðððð 7FFFðð21 Dðð3ððð1 \ \
ðððð6ð ðð1B2412 ðð1ððð1ð ð676Dðð4 ððððð671 E4Dðððð1 ððð7147A ððððð2 \ \
.SPACE- ð6
ðððððð ðð16Dð52 ððð1ðð1ð 22ð5ððð6 1149ðððð ððð621ð2 2417ðð52 Dð53ððð1 ðð22241A \ \
ðððð2ð ðF76Dðð4 ðððð26ðð ð3ð2ðððð ðAððððð9 71Eð54ðð ð1Dðððð1 ð671FðEð ðððððð2A \ \
ðððð4ð 241BFFðð ððð1FðFð F1ðððððð ð13FFððð ðððððððð ððFFðððð ð2FðFðF2 ððððððð2 \ ðð1 ð ðð2 \
ðððð6ð 4ððððððð ðððððððð ðð1ðDð52 ððð1ðððA 22ðBððð6 1149ððð4 ðð69Dðð3 ððð1ðð63 \ \
ððððEð FF \ \
.SPACE- .K/ ð7
ðððððð D9C3C8C1 E2F2F6F6 4ð4ð4ð4ð 4ð4ð4ð4ð 4ð4ðD9C3 C8C1E2F2 F6F64ð4ð 4ð4ð4ð4ð \RCHAS266 RCHAS266 \
ðððð2ð 4ð4ð4ð4ð E2D9D94ð 4ð4ð4ð4ð 4ð4ð4ð4ð 4ð4ð4ð4ð 4ð4ðD7E3 F14ð4ð4ð 4ð4ð4ð4ð \ SRR PT1 \
ðððð4ð 4ð4ð4ð4ð 4ð4ð4ð4ð 7ððF7ðDB 33CðððBB D8E3C4E2 D8D3F4Fð FðD8E2D8 FðF2FðF1 \ QTDSQL4ððQSQð2ð1\
ðððð6ð F1ðð25ðð ðððððððð 25ðððððð ðððð1ðFð F4F5F1F7 F461E2D9 D961C4E2 F3F7F84ð \1 ð45174/SRR/DS378 \
ðððð8ð 4ð4ð4ð4ð 4ð4ð4ð4ð 4ð4ð4ð4ð 4ð4ð4ð4ð 4ð4ð4ð4ð 4ð4ð4ð4ð 4ð4ð4ð4ð 4ð4ð4ð4ð \ \
LINES ððððAð TO ððð15F SAME AS ABOVE
ððð16ð 4ð4ð4ð4ð 4ð4ð4ð4ð 4ð4ð4ð4ð 4ð4ðAððð 2434E2D9 D94ð4ð4ð 4ð4ð4ð4ð ðððððððð \ SRR \
ððð18ð C1D7D7D5 4BD9C3C8 C1E2F3F7 F8A7CCA7 541372ðð 4ð4ð4ððð ðððððððð ðððððððð \APPN.RCHAS378x x \
ððð1Að ðððððððð ðððððððð \ \
.SPACE- ð9
ðððððð E2D8D3C4 C14ð4ð4ð ðððððð6ð ððð1ððð1 ð1F4ððð2 ððððð4ðð ðððððð4ð 4ð4ð4ð4ð \SQLDA 4 \
ðððð2ð 8ððððððð ðððððððð ðð7FCð1E 11ððð334 ðððððððð ðððððððð ðððððððð ðððððððð \ \
ðððð4ð ððð8ðððð ðð25ðððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð \ \
.SPACE- 1ð
ðððððð E2D8D3C3 C14ð4ð4ð ðððððð88 FFFF8ABC ððð41254 ð1ð2ðððð ðððððððð ðððððððð \SQLCA \
ðððð2ð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð \ \
ðððð4ð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð \ \
ðððð6ð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð 4ð4ð4ð4ð 4ð4ð4ð4ð \ \
ðððð8ð 4ð4ð4ðF5 F8FðFðF9 \ 58ðð9 \
.SPACE- 11
ðððððð E2D8D3C3 C14ð4ð48 ðððððð88 ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð \SQLCA \
ðððð2ð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð \ \
ðððð4ð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð \ \
ðððð6ð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð 4ð4ð4ð4ð 4ð4ð4ð4ð \ \
ðððð8ð 4ð4ð4ðFð FðFðFðFð \ ððððð \



.SPACE- 13
ðððððð ðððð1BBð ðð31ððð1 FðFðFðFð FðFðFðFð ðððððððð ðððððððð ðððððððð ðððððððð \ ðððððððð \
ðððð2ð ððððð47ð ððððð2Cð 7ð23C382 57ðððð48 8ððððððð ðððððððð ðð7FAð83 A3ððð82ð \ \
ðððð4ð 8ððððððð ðððððððð ðð7FAð83 E7ððð1ðð D9C3C8C1 E2F2F6F6 4ð4ð4ð4ð 4ð4ð4ð4ð \ RCHAS266 \
ðððð6ð 4ð4ð5CD3 D6C34ð4ð 4ð4ð4ð4ð 5CD5C5E3 C1E3D94ð D9C3C8C1 E2F2F6F6 5CD3D6C3 \ \LOC \NETATR RCHAS266\LOC\
LINES ððððAð TO ðð1B9F SAME AS ABOVE
ðð1BAð ðððððððð ðððððððð ðððððððð ðððððððð \ \
.SPACE- 14
ðððððð E2D4C3C2 2ðððð1ðð ðððððð1ð FðF9FðF4 F5F461E2 D9D961E2 D9D9E2F1 ðððððððð \SMCB ð9ð454/SRR/SRRS1 \
ðððð2ð ðððððððð ðððððððð E5FðF2D9 FðF1D4Fð F1D9C3C8 C1E2F3F7 F8ðððððð ðð8ððððð \ Vð2Rð1Mð1RCHAS378 \
ðððð4ð ð3ð2C3D5 E2E2D5D9 C3E5D8D3 F7F9F7F1 8ððððððð ðððððððð ðð7FAð83 E9ððð1ð6 \ CNSSNRCVQL7971 \
ðððð6ð F1ðððððð ðð71ðððð ðððððððð ðððððððð ððððð47ð ððððð2Cð 7ð23C382 57ðððð48 \1 \
.SPACE- 15
ðððððð ðððððððð ðððððððð ðð7FAð83 E6ðð19FF ðððððððð ðððððððð ðððððððð ðððððððð \ \
ðððð2ð ðððððððð ðð4ððððð \ \
.SPACE- 16
ðððððð ðððððððð ðððððððð ðððððððð ððððððð2 ðððððð17 ððððððE1 ðððððððð ðððððð71 \ \
ðððð2ð ðððððððð ðððð7FFF ððððððð3 ðð17ðððð ðð1Bðððð FFðððððð ðððð241ð ððFðFð6ð \ \
ðððð4ð E7ð4ðð \X \
.SPACE- 17
ðððððð E2C3C3C2 5CD3D6C3 4ð4ð4ð4ð 4ð4ð5CD5 C5E3C1E3 D94ð5CD3 D6C34ð4ð 4ð4ðD9C3 \SCCB\LOC \NETATR \LOC RC\
ðððð2ð C8C1E2F2 F6F65CD3 D6C34ð4ð 4ð4ðð7F6 C4C24ð4ð 4ð4ð5CC4 D9C4C14ð 4ð4ð4ð4ð \HAS266\LOC 6DB \DRDA \
ðððð4ð 4ð4ð4ð4ð 4ð4ð4ð4ð 4ððððð1E ðð11ðððð ðððððððð ðððððððð ðððððððð ðððððððð \ \
ðððð6ð ðððððððð ðððððððð ðððððððð ðððððððð \ \
.SPACE- 18
ðððððð E2D7C3C2 ðððððððð ðð7FAð83 A3ððð81ð ððððð47ð ððððð2Cð 7ð23C382 57ðððð48 \SPCB \
ðððð2ð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð \ \
ðððð4ð ðððððððð ðððððððð ðððððððð ðððððððð \ \
.SPACE- 19
ðððððð C5E7C3C2 ðððððð76 ððððððð3 ðððððð79 ððððððð9 ðððððð82 ðððððð1ð ðððððð92 \EXCB \
ðððð2ð ððððððð8 ðððððððð ðððððð18 ðð2ðððð3 ððð3ððð3 ððð3ððð3 ððð3ððð1 ððð3ððð3 \ \
ðððð4ð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ððððC4C4 D4E5FðF2 D9FðF1D4 \ DDMVð2Rð1M\
ðððð6ð FðF1FðF4 F5F1F7F4 61E2D9D9 61C4E2F3 F7F8D9C3 C8C1E2F2 F6F6 \ð1ð45174/SRR/DS378RCHAS266 \
.SPACE- 2ð
ðððððð ðððððð3ð ððððð2B6 ððððð43ð ððððð43E ððð1ðððð ðððððððð ðððððððð ðððððððð \ \
ðððð2ð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð \ \
ðððð4ð 8ððððððð ðððððððð ðð7FAð83 D2ððð1ðð ðððððððð ððððð29A ðððððð5C 22ð5ðððð \ \
ðððð6ð ððð6ðððð ð2B6ðððð ððBððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð \ \
LINES ððððEð TO ððð17F SAME AS ABOVE
.SPACE- 21
ðððððð ðð16Dð52 ððð1ðð1ð 22ð5ððð6 1149ðððð ððð621ð2 2417ðð52 Dð53ððð1 ðð22241A \ \
ðððð2ð ðF76Dðð4 ðððð26ðð ð3ð2ðððð ðAððððð9 71Eð54ðð ð1Dðððð1 ð671FðEð ðððððð2A \ \
ðððð4ð 241BFFðð ððð1FðFð F1ðððððð ð13FFððð ðððððððð ððFFðððð ð2FðFðF2 ððððððð2 \ ðð1 ð ðð2 \
ðððð6ð 4ððððððð ðððððððð ðð1ðDð52 ððð1ðððA 22ðBððð6 1149ððð4 ðð69Dðð3 ððð1ðð63 \ \
ðððð8ð 24ð8ðððð ðððð64Fð F2FðFðFð D8E2D8C6 C5E3C3C8 ððD9C3C8 C1E2F2F6 F64ð4ð4ð \ ð2ðððQSQFETCH RCHAS266 \
.SPACE- 22
ðððððð E2C3C3C2 5CD3D6C3 4ð4ð4ð4ð 4ð4ð5CD5 C5E3C1E3 D94ð5CD3 D6C34ð4ð 4ð4ðD9C3 \SCCB\LOC \NETATR \LOC RC\
ðððð2ð C8C1E2F2 F6F65CD3 D6C34ð4ð 4ð4ðð7Fð FðF14ð4ð 4ð4ðE77D FðF7C6Fð C6FðC6F1 \HAS266\LOC ðð1 X'ð7FðFðF1\
ðððð4ð 7D4ð4ð4ð 4ð4ð4ð4ð 4ððððð14 ðð11ðððð ðððððððð ðððððððð ðððððððð ðððððððð \ \
ðððð6ð ðððððððð ðððððððð ðððððððð ðððððððð ðððð8Fðð ððððð7ðð FðFðF1ðð ðððððððð \ ðð1 \
.SPACE- 23
ðððððð C5E7C3C2 ðððððð76 ððððððð3 ðððððð79 ððððððð9 ðððððð82 ðððððð1ð ðððððð92 \EXCB b k\
ðððð2ð ððððððð8 ðððððððð ðððððð18 ðð2ðððð3 ððð3ððð3 ððð3ððð3 ððð3ððð1 ððð3ððð3 \ \
ðððð4ð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ððððC4C4 D4E5FðF2 D9FðF1D4 \ DDMVð2Rð1M\
ðððð6ð FðF1FðF4 F5F1F7F2 61E2D9D9 61C4E2F3 F7F8D9C3 C8C1E2F2 F6F6 \ð1ð45172/SRR/DS378RCHAS266 \
.SPACE- 24
ðððððð ðððððð3ð ðððððð5C ðððððððð ððððððCC ððð1ðððð ðððððððð ðððððððð ðððððððð \ \ \
ðððð2ð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð ðððððððð \ \
ðððð4ð 8ððððððð ðððððððð ðð7FAð83 A4ððð1ðð ðððððððð ðððððððð ðððððð5C D2ð1ðððð \ \K \
.SPACE- 25
ðððððð ðð1ðDðð2 ððð1ðððA D2ð1ððð6 1149ðððð E2ðððD11 5AE5FðF2 D9FðF1D4 FðF1ðððC \ Vð2Rð1Mð1 \
ðððð2ð 116DD9C3 C8C1E2F2 F6F6ðð14 115EFðF4 F5F1F7F2 61E2D9D9 61C4E2F3 F7F8ðð64 \ RCHAS266 ð45172/SRR/DS378 \
ðððð4ð 14ð414ð3 ððð31423 ððð314ð5 ððð314ð6 ððð314ð7 ððð31444 ððð31458 ððð11457 \ \
ðððð6ð ððð314ðC ððð31419 ððð3141E ððð31422 ððð324ðF ððð314Að ððð41432 ððð31433 \
END OF DUMP
\ \ \ \ \ E N D O F L I S T I N G \ \ \ \ \

FFDC Dump Output Description


The following information describes the data areas and types of information avail-
able in an FFDC dump output like the one in the preceding figure.



Notes:
1. Each FFDC dump output will differ in content, but the format is generally the
same. An index (.I/) is provided to help you understand the content and
location of each section of data.
2. Each section of data is identified by “SPACE-” and a number; for example:
SPACE- ... 01. The sections of data present in your dump output are
dependent on the operation and its progress at the time of failure.
3. Each section of data is given a name; for example SQCA. SQCA is the section
name for data from the DB2/400 Query Manager and SQL Development Kit
SQLCA. To locate the SQLCA data, find SQCA in the index (.I/). In the
sample dump index, SQCA is shown to be in data section 10 (10=SQCA). To
view the SQLCA data, go to SPACE- 10.
4. There are two basic classes of modules that can be dumped:
Ÿ Application requester (AR) modules
Ÿ Application server (AS) modules
The sample dump output is typical of a dump from an AR module. AR dump
outputs typically have a fixed number of data sections identified in the index. (For
example, in the sample dump output SPACE- 01 through 16 are listed.) In addition,
they have a variable number of other data sections. These sections are not
included in the index. (For example, in the sample dump output, SPACE- 17
through 25 are not listed in the index.)
Application server dump outputs are usually simpler because they consist only of a
fixed number of data sections, all of which are identified in the index.
5. There are index entries for all data sections whether or not the data section
actually exists in the current dump output. For example, in the sample dump
output, there is no SPACE- 08. In the index, 08 equals QDTA (query data).
The absence of SPACE- 08 means that no query data was returned, so none
could be dumped.
6. In the sample dump output, the last entry in the index is “(REST IS CCB,
PCBS, SAT, PMAP, RCVB, PER CCB).” This entry means that SPACE- 17 and
upward contain one or more communications control blocks (CCB), each
containing:

Ÿ Zero, one, or more path control blocks (SPCB); there is normally just one.
Ÿ Exchange server attributes control block (EXCB)
Ÿ Parser map space
Ÿ Receive buffer for the communications control block
The data section number is incremented by one from 17 onward as each control
block is dumped. For example, in the sample dump output, data sections SPACE-
17 through SPACE- 21 are for the first control block dumped (CCB 1), while
data sections SPACE- 22 through SPACE- 25 are for the second control block
dumped (CCB 2), as shown below:
17 CCB (Eyecatcher is ‘SCCB’. For an application server module, the
eyecatcher is ‘TCCB’.)
18 PCB for CCB 1 (Eyecatcher is ‘SPCB’.)
19 SAT for CCB 1 (Eyecatcher is ‘EXCB’.)

20 PMAP for CCB 1 (No eyecatcher.)
21 RCVB for CCB 1 (No eyecatcher.)
22 CCB 2 (Eyecatcher is ‘SCCB’.)
-- (No PCB for CCB 2 because the conversation is not active.)
23 SAT for CCB 2 (Eyecatcher is ‘EXCB’.)
24 PMAP for CCB 2 (No eyecatcher.)
25 RCVB for CCB 2 (No eyecatcher.)
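
The numbering rule above can be expressed as a small calculation. The following fragment is an illustrative sketch only (written in Python, which is not part of the product); the function name and the input layout are invented for this example. Given the first variable data section number (17 in the sample) and, for each conversation, whether a path control block was dumped, it predicts which SPACE- number holds each control block.

    # Illustrative sketch only: predict the SPACE- numbers of the variable
    # (per-CCB) data sections, following the numbering rule described above.
    def number_ccb_sections(first_space, conversations):
        """conversations: list of dicts like {"name": "CCB 1", "has_pcb": True}."""
        space = first_space
        layout = []
        for conv in conversations:
            parts = ["CCB"]                        # the CCB itself is always dumped
            if conv["has_pcb"]:
                parts.append("PCB")                # omitted when the conversation is not active
            parts.extend(["SAT", "PMAP", "RCVB"])  # EXCB, parser map, receive buffer
            for part in parts:
                layout.append((space, conv["name"], part))
                space += 1
        return layout

    # Matches the sample dump: CCB 1 has an active conversation (PCB present),
    # CCB 2 does not, so SPACE- 17 through 21 belong to CCB 1 and
    # SPACE- 22 through 25 belong to CCB 2.
    for space, ccb, part in number_ccb_sections(
            17,
            [{"name": "CCB 1", "has_pcb": True},
             {"name": "CCB 2", "has_pcb": False}]):
        print(space, ccb, part)
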
.A/ Name and release information of the system on which the dump was
taken.
.B/ Name of job that created the dump output.
.C/ Name of module in the operating system suspected of failure.
.D/ Name of module that detected the failure.

Symptom String contents:


.E/ Message identifier.
.F/ Name of module suspected of causing the FFDC dump.
.G/ Return code (RC), identifying the point of failure.

The first digit after RC indicates the number of dump files associated with this
failure. There can be multiple dump files depending on the number of conversations
that were allocated. In the sample dump output, the digit is “1,” indicating that this
is the first (and possibly the only) dump file associated with this failure.

You may have four digits (not zeros) at the rightmost end of the return code that
indicate the type of error; a small decoding sketch follows the two code lists below.
Ÿ The possible codes for errors detected by the AR are:
0001 Failure occurred in connecting to the remote database
0002 More-to-receive indicator was on when it should not have been
0003 AR detected an unrecognized object in the data stream received
from the AS
0097 Error detected by the AR DDM communications manager
0098 Conversation protocol error detected by the DDM component of the
AR
0099 Function check
Ÿ The possible codes for errors detected by the AS are:
0099 Function check
4415 Conversational protocol error
4458 Agent permanent error
4459 Resource limit reached
4684 Data stream syntax not valid
4688 Command not supported

4689 Parameter not supported
4690 Value not supported
4691 Object not supported
4692 Command check
8706 Query not open
8708 Remote database not accessed
8711 Remote database previously accessed
8713 Package bind process active
8714 FDO:CA descriptor is not valid
8717 Abnormal end of unit of work
8718 Data and/or descriptor does not match
8719 Query previously opened
8722 Open query failure
8730 Remote database not available
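
The return code interpretation described under .G/ and the two code lists above can be combined into a simple lookup. The fragment below is only an informal sketch in Python (not part of the product); the sample return code value is hypothetical, and only the listed codes are recognized.

    # Illustrative sketch: decode the RC field of the FFDC symptom string.
    # First digit = number of this dump file; the last four digits (when
    # present and nonzero) = error type, with separate tables for errors
    # detected by the AR and by the AS.
    AR_CODES = {
        "0001": "Failure occurred in connecting to the remote database",
        "0002": "More-to-receive indicator was on when it should not have been",
        "0003": "AR detected an unrecognized object in the received data stream",
        "0097": "Error detected by the AR DDM communications manager",
        "0098": "Conversation protocol error detected by the DDM component of the AR",
        "0099": "Function check",
    }
    AS_CODES = {
        "0099": "Function check",
        "4415": "Conversational protocol error",
        "4458": "Agent permanent error",
        "4459": "Resource limit reached",
        "4684": "Data stream syntax not valid",
        "4688": "Command not supported",
        "4689": "Parameter not supported",
        "4690": "Value not supported",
        "4691": "Object not supported",
        "4692": "Command check",
        "8706": "Query not open",
        "8708": "Remote database not accessed",
        "8711": "Remote database previously accessed",
        "8713": "Package bind process active",
        "8714": "FDO:CA descriptor is not valid",
        "8717": "Abnormal end of unit of work",
        "8718": "Data and/or descriptor does not match",
        "8719": "Query previously opened",
        "8722": "Open query failure",
        "8730": "Remote database not available",
    }

    def describe_rc(rc, detected_by="AR"):
        dump_file_number = rc[0]          # "1" = first (and possibly only) dump file
        error_type = rc[-4:]
        table = AR_CODES if detected_by == "AR" else AS_CODES
        return dump_file_number, table.get(error_type,
                                           "unrecognized error type " + error_type)

    print(describe_rc("10099", "AR"))     # hypothetical RC value, for illustration only
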
.H/ SPACE- number identifying a section of data. The number is related to a
data section name by the index. Data section names are defined under
.I/ below.
.I/ An index and definition of SPACE- numbers (defined in .H/) to help you
understand the content and location of each section of data. The order
of the different data sections may vary between dump outputs from different
modules. The meanings of the data section names are:
Ÿ AFT: DDM active file table, containing all conversation information.
Ÿ ARDB: Access remote database control block, containing the AR
and AS connection information. The structure contains the LUWID
token used to correlate the dump with any related alert data.
| Ÿ ARDP: ARD program parameters at start of user space.
Ÿ BDTA: Buffer processing communications area (BPCA) and associ-
ated data record from SELECT INTO statement.
Ÿ BIND: SQL bind template
Ÿ BPCA: BPCA structure (without data records)
Ÿ DATA: Data records associated with the BPCA. It is possible that
the records in this section do not reflect the total BPCA buffer con-
tents. Already-processed records may not be included.
Ÿ DOFF: Offset within query data stream (QRYDTA) where the error
was detected.
Ÿ EICB: Error information control block
Ÿ EMSG: Error message associated with a function check or DDM
communications manager error.
Ÿ FCT: DDM function code point (2 bytes)

Ÿ FCT+: Same as FCT, plus message tokens and the SQLSTATE
logged in the SQLCA. See “DDM Error Codes” on page C-14 for
more information on how to interpret FCT+.
– DDM function code point (2 bytes)
– DDM reply code point (2 bytes)
– DDM reply reason code (2 bytes)
- Location at which the error was detected (1 byte; 01 = appli-
cation requester; 02 = application server)
- Error reason code (1 byte; see the DDM Error Codes on
page C-14.)
– SQLSTATE (5 bytes)
Ÿ FDOB: FDO:CA descriptor input to the parser in an execute opera-
tion.
Ÿ FDTA: FDO:CA data structure consisting of:
– A 4-byte field defining the length of the FDO:CA data stream
(FDODTA)
– The FDODTA
Ÿ HDRS: Communications manager command header stack.
| Ÿ IFMT: ARD program input format.
Ÿ INDA: Input SQLDA containing user-defined SQLDA for insert,
select, delete, update, open, and execute operations.
Ÿ INDX: The index that maps the data section name to the data
section SPACE- code. Not all of the entries in the index have a cor-
responding data section. The dump data is based on the error that
occurs and the progress of the operation at the time of the error. A
maximum of 32 entries can be dumped in one spooled file.
Ÿ INST: SQL statement
Ÿ ITKN: Interrupt token.
| Ÿ OFMT: ARD program output format.
Ÿ PKGN: Input package name, consistency token and section number.
Ÿ PMAP: Parser map in an AS dump output.
Ÿ PRMS: DDM module input or output parameter structure.
Ÿ PSOP: Input parser options.
Ÿ QDTA: Query data structure consisting of:
– A 4-byte field defining the length of the query data stream
(QRYDTA)
– The QRYDTA
Ÿ RCVB: Received data stream. The contents depend on the
following:
– If the dump occurs on the application server, the section con-
tains the DDM request data that was sent from the application
requester.

– If the dump occurs on the application requester, the section con-
tains the DDM reply data that was sent from the application
server. If this section is not present, it is possible the received
data may be found in the receive buffer in the variable part of
the dump.
Ÿ RDBD: Relational database directory.
Ÿ RFMT: Record format structure.
Ÿ RMTI: Remote location information in the commitment control block.
| Ÿ RTDA: Returned SQLDA (from ARD program).
Ÿ SMCB: DDM source master control block, containing pointers to
other DDM connection control blocks and internal DDM control
blocks.
Ÿ SNDB: Send data stream. The contents depend on the following:
– If the dump occurs on the application requester, the buffer con-
tains the DDM request that was sent to the application server or
that was being prepared to send.
Note the four bytes of zeros that are at the beginning of
SPACE- 05 in the example. When zeros are present, they are
not part of the data stream. They represent space in the buffer
that is used only in the case that a DDM large object has to be
sent with a DDM request. The DDM request stream is shifted
left four bytes in that case.
– If the dump occurs on the application server, the buffer contains
the DDM reply data that was being prepared to send to the
application requester.
Ÿ SQCA: Output SQLCA being returned to the user.
Ÿ SQDA: SQLDA built by the FDO:CA parser.
Ÿ TBNM: Input remote database table name.
Ÿ TMCB: Target main control block.
Ÿ TSLK: Target or source connection control block, containing pointers
to the DDM active file table and other internal DDM control blocks.
Ÿ VARS: Local variables for the module being dumped.
Ÿ WRCA: Warning SQLCA returned only for an open operation
(OPNQRYRM).
Ÿ XSAT: Exchange server attributes control block.
Ÿ Remainder: Multiple conversation control blocks for all the DDM con-
versations for the job at the time of the error. Each conversation
control block contains the following:
– Path control blocks, containing information about an established
conversation. There can be multiple path control blocks for one
conversation control block.
– One exchange server information control block, containing infor-
mation about the application requester and application server.

– One DDM parser map area, containing the locations and values
for all the DDM commands, objects, and replies.
– One receive buffer, containing the requested data stream
received by the application server. See also 6 on page C-9.
The data section number is incremented by one as each control
block is dumped.
.J/ The eyecatcher area. Information identifying the type of data in some of
the areas that were dumped.
.K/ The logical unit of work identifier (LUWID) for the conversation in
progress at the time of the failure can be found in the access RDB
control block. This data area is identified by the string ‘ARDB’ in the
FFDC index. In this example, it is in SPACE- 07. The LUWID begins at
offset 180. The network identifier (NETID) is APPN. A period separates
it from the logical unit (LU) name, RCHAS378, which follows. Following
the LU name is the 6-byte LUW instance number X'A7CCA7541372'.
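
As a small illustration of how the LUWID pieces described in .K/ fit together, the following Python fragment (a sketch only; the helper name is invented for this example) formats the values found in the sample dump:

    # Illustrative sketch: compose the logical unit of work identifier (LUWID)
    # from the pieces found in the ARDB data section of the sample dump.
    def format_luwid(netid, lu_name, instance_bytes):
        # NETID and LU name are separated by a period; the LUW instance
        # number is a 6-byte value, shown here in hexadecimal.
        return "%s.%s X'%s'" % (netid, lu_name, instance_bytes.hex().upper())

    # Values taken from the sample dump (SPACE- 07, offset 180).
    print(format_luwid("APPN", "RCHAS378", bytes.fromhex("A7CCA7541372")))
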

DDM Error Codes


These error codes are included in the FFDC dumps (.L/ in the sample dump
output) that identify DDM error conditions. These conditions may or may not be
defined by the DDM architecture.

Command Check Codes


If FCT+ (SPACE- 02) contains 1254 in bytes 3 and 4, look for one of these codes
in byte 6:
01 Failure to connect to the relational database (RDB).
02 State of the DDM data stream is incorrect.
03 Unrecognized object in the data stream.
04 Statement CCSID received from SQL not recognized.
05 EXCSQLSTT OUTEXP value is inconsistent with the SQL statement
being executed.
06 DDM command or object sent to AS violates OS/400 extension to
DRDA2 architecture.
07 DDM reply or object received from AS violates DRDA2 architecture.
| 08 SQLDA data pointer is NULL when it should not be.
| 09 Product data structure not valid.
97 DDM communications manager detected an error.
98 Conversation protocol error detected by the DDM module.
99 Function check. Look for EMSG section, normally in SPACE- 03.
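
The FCT+ layout given earlier (function code point, reply code point, then the location and reason bytes, followed by the SQLSTATE) lends itself to a simple decoder. The Python fragment below is only an informal sketch of that interpretation for the command check (1254) case; the input bytes are hypothetical, the reason texts are shortened from the list above, and a real dump holds the SQLSTATE in EBCDIC rather than the ASCII assumed here.

    # Illustrative sketch: interpret the first bytes of the FCT+ data section.
    # Bytes 1-2: DDM function code point; bytes 3-4: DDM reply code point;
    # byte 5: location (01 = application requester, 02 = application server);
    # byte 6: reason code; then the 5-character SQLSTATE.
    CMDCHK_REASONS = {   # reason codes used with reply code point 1254
        0x01: "Failure to connect to the relational database",
        0x02: "State of the DDM data stream is incorrect",
        0x03: "Unrecognized object in the data stream",
        0x97: "DDM communications manager detected an error",
        0x98: "Conversation protocol error detected by the DDM module",
        0x99: "Function check (see the EMSG section)",
    }

    def decode_fct_plus(raw):
        function_cp = raw[0:2].hex().upper()
        reply_cp = raw[2:4].hex().upper()
        location = {1: "application requester",
                    2: "application server"}.get(raw[4], "?")
        reason = raw[5]
        sqlstate = raw[6:11].decode("ascii", errors="replace")   # EBCDIC in a real dump
        if reply_cp == "1254":
            text = CMDCHK_REASONS.get(reason, "reason %02X" % reason)
        else:
            text = "see the tables for reply code point " + reply_cp
        return function_cp, reply_cp, location, text, sqlstate

    # Hypothetical FCT+ contents, for illustration only.
    print(decode_fct_plus(bytes.fromhex("200C12540103") + b"58009"))
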

Conversational Protocol Error Code Descriptions


If FCT+ (SPACE- 02) contains 1245 in bytes 3 and 4, look for one of these codes
in byte 6:
01 RPYDSS received by target communications manager.
02 Multiple DSSs sent without chaining, or multiple DSS chains sent.

03 OBJDSS sent when not allowed.
04 Request correlation identifier of an RQSDSS is less than or equal to the
previous RQSDSS request correlation identifier in the chain.
If two RQSDSSs have the same request correlation identifier, the
PRECCNVRM must be sent in RPYDSS with a request correlation iden-
tifier of minus 1.
05 Request correlation identifier of an OBJDSS does not equal the request
correlation identifier of the preceding RQSDSS.
06 EXCSAT was not the first command after the connection was estab-
lished.
DF FDODSC was received but no accompanying FDODTA.
E0 No OPNQRY (open query) reply message.
E1 RDBNAM on ENDQRYRM (end query reply message) is not valid.
E2 An OPEN got QRYDTA (query answer set data) without a QRYDSC
(query answer set description).
E3 Unexpected OPNQRY reply object.
E4 Unexpected CXXQRY reply object.
E5 QRYDTA on OPEN, single row.
E6 RM after OPNQRYRM is not valid.
E7 No interrupt reply message.
FD Null SQLCARD (SQLCA reply data) following error RM.
FE Null QRYDTA row follows null SQLCA.
FF Expected SQLCARD missing.

DDM Syntax Error Code Descriptions


If FCT+ (SPACE- 02) contains 124C in bytes 3 and 4, look for one of these codes
in byte 6:
01 DSS header length less than 6.
02 DSS header length does not match the number of bytes of data found.
03 DSS head C-byte not X'D0'.
04 DSS header F-bytes either not recognized or not supported.
05 DSS continuation specified, but not found. For example, DSS continua-
tion is specified on the last DSS, and the SEND indicator has been
returned by the SNA LU 6.2 communications program.
06 DSS chaining specified, but no DSS found. For example, DSS chaining
is specified on the last DSS, and the SEND indicator has been returned
by the SNA LU 6.2 communications program.
07 Object length less than 4. For example, a command parameter length is
specified as 2, or a command length is specified as 3.
08 Object length does not match the number of bytes of data found. For
example, an RQSDSS with a length 150 contains a command whose
length is 125, or an SRVDGN (server diagnostic information) parameter
specifies a length of 200, but there are only 50 bytes left in the DSS.

09 Object length greater than maximum allowed. For example, the
RECCNT parameter specifies a length of 5, but this indicates that only
half of the hours field is present instead of the complete hours field.
0A Object length less than minimum required. For example, the SVRCOD
parameter specifies a length of 5, but the parameter is defined to have a
fixed length of 6.
0B Object length not allowed. For example, the FILEXPDT parameter is
specified with a length of 11, but this indicates that only half of the hours
field is present instead of the complete hours field.
0C Incorrect large object extended length field (see the description of DSS).
For example, an extended length field is present, but it is only 3 bytes
long. It is defined as being a multiple of 2 bytes long.
0D Object code point index not supported. For example, a code point of
X'8032' is encountered, but X'8' is a reserved code point index.
0E Required object not found. For example, a CLRFIL command does not
have an FILNAM parameter present, or an MODREC command is not
followed by a RECORD command data object.
0F Too many command data objects sent. For example, an MODREC
command is followed by two RECORD command data objects, or a
DELREC command is followed by a RECORD object.
10 Mutually exclusive objects present. For example, a CRTDIRF command
specifies both a DCLNAM and a FILNAM parameter.
11 Too few command data objects sent. For example, an INSRECEF
command that specified RECCNT(5) is followed by only four RECORD
command data objects.
12 Duplicate object present. For example, a LSTFAT command has two
FILNAM parameters specified.
13 Specified request correlation identifier not valid. Use PRCCNVRM with a
PRCCNVCD of X'04' or X'05' instead of this error code. This error
code is being maintained for compatibility with Level 1 architecture.
14 Required value not found.
15 Reserved value not allowed. For example, an INSRECEF command
specified an RECCNT(0) parameter.
16 DSS continuation less than or equal to 2. For example, the length bytes
for the DSS continuation have a value of 1.
17 Objects not in required order. For example, a RECAL object contains a
RECORD object followed by a RECNBR object that is not in the speci-
fied order.
18 DSS chaining bit not a binary 1, but DSSFMT bit 3 is set to a binary 1.
19 Previous DSS indicated current DSS has the same request correlation,
but the request correlation identifiers are not the same.
1A DSS chaining bit not a binary 1, but error continuation is requested.
1B Mutually exclusive parameter values specified. For example, an OPEN
command specified PRPSHD(TRUE) and FILSHR(READER).

1D Code point not a valid command. For example, the first code point in
RQSDSS either is not in the dictionary or is not a code point for a
command.
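
Several of the conditions above amount to simple structural checks on a DSS header and on the length/code point objects it carries. The Python fragment below is an informal sketch of a few of those checks (it is not a complete or authoritative parser, and the function name and test buffer are invented for this example); the layout it assumes follows the codes above: a 2-byte DSS length, the X'D0' C-byte in byte 3, and objects beginning at offset 6, each with a 2-byte length and a 2-byte code point.

    # Illustrative sketch: a few of the DSS/object structure checks that the
    # syntax error codes above describe.
    def check_dss(buffer):
        errors = []
        if len(buffer) < 6:
            errors.append("01: DSS header length less than 6")
            return errors
        dss_length = int.from_bytes(buffer[0:2], "big")
        if dss_length != len(buffer):
            errors.append("02: DSS header length does not match the data found")
        if buffer[2] != 0xD0:
            errors.append("03: DSS header C-byte not X'D0'")
        # Walk the objects (2-byte length, 2-byte code point) carried in the DSS.
        pos = 6
        while pos < len(buffer):
            obj_length = int.from_bytes(buffer[pos:pos + 2], "big")
            if obj_length < 4:
                errors.append("07: object length less than 4")
                break
            if pos + obj_length > len(buffer):
                errors.append("08: object length does not match the data found")
                break
            pos += obj_length
        return errors

    # A deliberately malformed buffer: the header claims 32 bytes (X'0020')
    # but only 12 bytes are present, and the first object has a zero length.
    print(check_dss(bytes.fromhex("0020D0010001") + bytes(6)))
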

Appendix D. DDM Architecture Command Support
The OS/400 licensed program supports all DDM commands and parameters that
the Remote Unit of Work and Distributed Unit of Work portions of the DRDA archi-
tecture require. The following tables show how OS/400 supports each of these
DDM codepoints and parameters.

The meanings of the columns for commands are:


Ÿ The R column indicates how the DB2/400 application requester handles the
codepoint or parameter:
Y The OS/400 program flows it to the application server.
N The OS/400 program does not flow it to the application server.
Ÿ The S column indicates how the DB2/400 application server supports the
codepoint or parameter:
Y The OS/400 program recognizes and processes it.
S The OS/400 program allows the parameter, depending on its value.
The supported values are defined after the table.
I The OS/400 program ignores it if received.

The meanings of the columns for reply objects (in the indented tables) are:
Ÿ The S column indicates how the DB2/400 application server handles the
codepoint or parameter:
Y The OS/400 program flows it to the application requester.
N The OS/400 program does not flow it to the application requester.
Ÿ The R column indicates how the DB2/400 application requester supports the
codepoint or parameter:
Y The OS/400 program recognizes and processes it.
I The OS/400 program ignores it.

Note that each DDM command can have associated with it:
Ÿ Parameters (instance variables)
Ÿ Command data objects
Ÿ Reply messages
Ÿ Reply data objects

In the following tables, commands and associated object types can be distinguished
by the following means:
Ÿ Command and reply message names are in uppercase and contained in the
top row of a table.
Ÿ Parameter names are in lowercase.
Ÿ Command object names are in uppercase and if present are in the bottom rows
of a table.
Ÿ Reply data object names are in mixed case (first letter capitalized).

Figure D-1. ACCRDB Command
DDM Codepoint Optional R S AR Value
ACCRDB Y Y
rdbnam (name of relational database) Y Y
rdbacccl (access manager class) Y Y
typdefnam (data type definition name) Y Y QTDSQL400
typdefovr (data type definition override) Y Y
prdid (product specific identifier) Y Y QSQvvrrm
rdbalwupd (relational database to allow updates) Y Y Y
prddta (product specific data) Y N I
sttstrdel (string delimiter) Y Y Y
sttdecdel (decimal delimiter) Y Y Y
crrtkn (correlation token) N Y Y
trgdftrt (target default value return) Y Y Y
Notes:
1. On the STTSTRDEL parameter, the application requester always sends DFTPKG to application server for
dynamic SQL.
2. On the STTDECDEL parameter, the application requester always sends DFTPKG to application server for
dynamic SQL.
3. For the PRDID parameter, the string ’vvrrm’ contains the requester’s version, release, and modification
level.
4. The CRRTKN parameter is optional for DRDA1 support.

Figure D-2. ACCRDBRM Reply for ACCRDB command


DDM Codepoint Optional R S AS Value
ACCRDBRM Y Y
svrcod (severity code) Y Y
prdid (product specific identifier) Y Y QSQvvrrm
typdefnam (data type definition name) Y Y QTDSQL400
typdefovr (data type definition override) Y Y
srvdgn (server diagnostic information) Y I N
rdbinttkn (relational database interrupt Y Y Y
token)
crrtkn (correlation token) N Y Y
pkgdftcst (package default character Y Y Y
subtype)
usrid (user id at the target system) Y Y Y
Notes:
1. For the PRDID parameter, the string ‘vvrrm’ contains the application server’s
version, release, and modification level.
2. The CRRTKN parameter is optional for DRDA1 support.

| Figure D-3. ACCSEC Command


| DDM Codepoint Optional R S AR Value
| ACCSEC Y Y
| secmec (security mechanism) N Y S 3 or 4
| rdbnam (name of relational database) Y N I
| secmgrnm (security manager name) Y N I
| Note: The SECMECs supported by DB2 for AS/400 are USRIDPWD(3) and USRIDONL(4).



| Figure D-4. ACCSECRD Reply for ACCSEC command
| DDM Codepoint Optional R S AS Value
| ACCSECRD N Y Y
| secmec (security mechanism) N Y S 3 or 4
| Note: The SECMEC parameter of ACCSECRD is repeatable. There can be multiple occur-
| rences to reflect what the server does support, if it does not support the value sent
| on ACCSEC by the AR. DB2 for AS/400 currently does not send multiple occur-
| rences, however.

Figure D-5. BGNBND Command


CRTSQLxxx
Precompile
DDM Codepoint Optional R S Options
BGNBND Y Y
rdbnam (name of relational database as in ACCRDB) Y Y Y RDB
pkgnamct (package name and consistency token) N Y Y RDB, SQLPKG,
DFTRDBCOL
Ÿ rdbnam (application server database name).
Ÿ rdbcolid (relational database collection identifier).
Ÿ pkgid (package identifier).
Ÿ pkgcnstkn (package consistency token).
vrsnam (package version name) Y N S
pkgrplvrs (replaced package version name) Y N S
bndchkexs (bind existence checking) Y Y Y
pkgrplopt (package replacement option) Y Y Y REPLACE
pkgathopt (package authorization option) Y N Y
sttstrdel (statement string delimiter) Y Y Y OPTION –
*QUOTE –
*APOST
sttdecdel (statement decimal delimiter) Y Y Y OPTION –
*SYSVAL –
*PERIOD –
*COMMA
sttdatfmt (date format of statement) Y Y Y DATFMT
stttimfmt (time format of statement) Y Y Y TIMFMT
pkgisolvl (package isolation level) N Y Y COMMIT
bndcrtctl (bind creation control) Y Y Y GENLVL
bndexpopt (bind explain option) Y N S
pkgownid (package owner identifier) Y N S
rdbrlsopt (relational database release option) Y N S
dftrdbcol (default relational database collection identifier) Y Y S DFTRDBCOL
title (brief description of package) Y Y S TEXT
qryblkctl (query block protocol control) Y N Y
pkgdftcst (default character subtype) Y Y Y See note 8.
pkgdftcc (package default CCSID) Y Y Y See note 8.
decprc (decimal precision) Y N S
dgrioprl (degree of I/O parallelism) Y N S

Notes:
1. For the PKGRPLVRS parameter, the application requester never sends a value. The application server
allows a null value, otherwise VALNSPRM is returned.
2. For the PKGISOLVL parameter, the application requester user specifies:
Ÿ COMMIT(*NONE), mapped to zero. (*NONE applies to AS/400 system to AS/400 system only.)
Ÿ COMMIT(*CHG), mapped to ISOLVLCHG.
Ÿ COMMIT(*CS), mapped to ISOLVLCS.
Ÿ COMMIT(*ALL), mapped to ISOLVLALL.
The application server maps the request of ISOLVLRR to COMMIT(*CHG) with full table locking.
3. For the BNDEXPOPT parameter, only the DDM default value (EXPNON) is supported. Other values are rejected,
and the application server returns a “Value not Supported” reply message.
4. For the PKGOWNID parameter, if a package owner id longer than 10 characters is received, the application server
returns a “Value not Supported” reply message.
5. For the RDBRLSOPT parameter, only the DDM default value (RDBRLSCMM) is supported; otherwise the request
is rejected.
6. Using the DFTRDBCOL parameter, the application requester user can specify the default collection ID. The default
value of the default collection ID is the application requester end user ID.
If the application server receives a collection name longer than 10 characters, it returns a “Value not Supported”
reply message.
7. For the TITLE parameter, a length greater than fifty characters is truncated by the application server.
8. For the PKGDFTCST and PKGDFTCC parameters, values are based on the job attributes determined by the
system values QIGC and QCCSID, as well as any job or user profile overrides.
9. For the DECPRC parameter, if a decimal precision of 15 is received, the application server returns a “Value not
Supported” reply message.
10. For the VRSNAM parameter, if a value other than null is received, the AS sends a “Value not supported” reply
message.
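
Note 2 in the list above amounts to a straightforward mapping between the CRTSQLxxx COMMIT option and the DDM package isolation level. The following Python fragment is purely an illustration of that mapping (the DDM names are written out as strings, and 0 stands for the AS/400-to-AS/400-only COMMIT(*NONE) case); it is not product code.

    # Illustrative sketch of the COMMIT-option to PKGISOLVL mapping in note 2.
    COMMIT_TO_ISOLVL = {
        "*NONE": 0,             # AS/400 system to AS/400 system only
        "*CHG": "ISOLVLCHG",
        "*CS": "ISOLVLCS",
        "*ALL": "ISOLVLALL",
    }

    def isolvl_for_commit(commit_option):
        return COMMIT_TO_ISOLVL[commit_option]

    def commit_for_isolvl(isolvl):
        # The application server maps a request of ISOLVLRR to COMMIT(*CHG)
        # with full table locking; the other values map back directly.
        if isolvl == "ISOLVLRR":
            return ("*CHG", "full table locking")
        reverse = {v: k for k, v in COMMIT_TO_ISOLVL.items()}
        return (reverse[isolvl], None)

    print(isolvl_for_commit("*CS"))        # -> ISOLVLCS
    print(commit_for_isolvl("ISOLVLRR"))   # -> ('*CHG', 'full table locking')
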

Figure D-6. Reply Objects for BGNBND command


DDM Codepoint Optional R S
Typdefnam (data type definition name) Y Y N
Typdefovr (TYPDEF override) Y Y Y
Sqlcard (SQLCA reply data) N Y Y

Figure D-7. BNDSQLSTT Command


DDM Codepoint Optional R S
BNDSQLSTT Y Y Part of the
package creation
process.
rdbnam (name of relational database as in ACCRDB) Y Y Y

pkgnamcsn (package name, consistency token and Y Y
section #)
Ÿ rdbnam (application server database name).
Ÿ rdbcolid (relational database collection identifier).
Ÿ pkgid (package identifier).
Ÿ pkgcnstkn (package consistency token).
Ÿ pkgsn (package section number).
sqlsttnbr (source application statement number) Y Y Y
bndsttasm (bind statement assumptions) Y Y Y
TYPDEFNAM (data type definition name) Y N Y
TYPDEFOVR (TYPDEF override) Y Y Y
SQLSTT (SQL statement to be bound in the AS Y Y
package)
SQLSTTVRB (description of each variable) Y Y Y
Ÿ SQLPRECISION (precision of fixed decimal field, or
zero).
Ÿ SQLSCALE (scale of fixed/zoned decimal, or zero).
Ÿ SQLLENGTH (length of field - not counting length
field).
Ÿ SQLTYPE (SQL data type associated with field).
Ÿ SQLCCSID (0 or CCSID for the column).
Ÿ SQLNAME (name of program variable in statement).
Ÿ SQLDIAGNAME (fully qualified name of program var-
iable).

Figure D-8. Reply Objects for BNDSQLSTT command


DDM Codepoint Optional R S
Typdefnam (data type definition name) Y Y N
Typdefovr (TYPDEF override) Y Y N
Sqlcard (SQLCA reply data) Y Y

Figure D-9. CLSQRY Command


DDM Codepoint Optional R S SQL Statement
CLSQRY Y Y CLOSE
rdbnam (name of relational database as in ACCRDB) Y N Y
pkgnamcsn (package name, consistency token and Y Y
section #)
Ÿ rdbnam (application server database name).
Ÿ rdbcolid (relational database collection identifier).
Ÿ pkgid (package identifier).
Ÿ pkgcnstkn (package consistency token).
Ÿ pkgsn (package section number).



Figure D-10. Reply Objects for CLSQRY command
DDM Codepoint Optional R S
Typdefnam (data type definition name) Y Y N
Typdefovr (TYPDEF override) Y Y N
Sqlcard (SQLCA reply data) Y Y

Figure D-11. CNTQRY Command


DDM Codepoint Optional R S SQL Statement
CNTQRY Y Y FETCH
rdbnam (name of relational database as in ACCRDB) Y N Y
pkgnamcsn (package name, consistency token and Y Y
section #)
Ÿ rdbnam (application server database name).
Ÿ rdbcolid (relational database collection identifier).
Ÿ pkgid (package identifier).
Ÿ pkgcnstkn (package consistency token).
Ÿ pkgsn (package section number).
qryblksz (query block size) Y Y

Figure D-12. Reply Objects for CNTQRY command


DDM Codepoint Optional R S
Typdefnam (data type definition name) Y Y N
Typdefovr (TYPDEF override) Y Y N
Sqlcard (SQLCA reply data) Y Y
Qrydta (query answer set data) Y Y

Figure D-13. DRPPKG Command


DDM Codepoint Optional R S SQL Statement
DRPPKG N Y DROP
PACKAGE
rdbnam (name of relational database as in ACCRDB) Y Y
pkgnam (package grouping name and identifier) Y
Ÿ rdbnam (application server database name).
Ÿ rdbcolid (relational database collection identifier).
Ÿ pkgid (package identifier).
vrsnam (version name) Y S
DRPPKG Parameters Clarification:
Ÿ VRSNAM
– Application server rejects this parameter if it is not the DDM default value (NULL).

Figure D-14. Reply Objects for DRPPKG command


DDM Codepoint Optional R S
Typdefnam (data type definition name) Y N
Typdefovr (TYPDEF override) Y N
Sqlcard (SQLCA reply data) Y



Figure D-15. DSCRDBTBL Command
DDM Codepoint Optional R S SQL Statement
DSCRDBTBL Y Y DESCRIBE
TABLE
rdbnam (name of relational database as in ACCRDB) Y N Y
TYPDEFNAM (data type definition name) Y N I
TYPDEFOVR (TYPDEF override) Y Y Y
SQLOBJNAM (SQL object name) Y Y

Figure D-16. Reply Objects for DSCRDBTBL command


DDM Codepoint Optional R S
Typdefnam (data type definition name) Y Y N
Typdefovr (TYPDEF override) Y Y N
Sqlcard (SQLCA reply data) Y Y
Sqldard (SQLDA reply data) Y Y

Figure D-17. DSCSQLSTT Command


DDM Codepoint Optional R S SQL Statement
DSCSQLSTT Y Y DESCRIBE
rdbnam (name of relational database as in ACCRDB) Y N Y
pkgnamcsn (package name, consistency token and Y Y
section #)
Ÿ rdbnam (application server database name).
Ÿ rdbcolid (relational database collection identifier).
Ÿ pkgid (package identifier).
Ÿ pkgcnstkn (package consistency token).
Ÿ pkgsn (package section number).

Figure D-18. Reply Objects for DSCSQLSTT command


DDM Codepoint Optional R S
Typdefnam (data type definition name) Y Y N
Typdefovr (TYPDEF override) Y Y N
Sqlcard (SQLCA reply data) Y Y
Sqldard (SQLDA reply data) Y Y

Figure D-19. ENDBND Command


DDM Codepoint Optional R S
ENDBND Y Y Part of the
package creation
process.
rdbnam (name of relational database as in ACCRDB) Y Y Y
pkgnamct (package name and consistency token) Y Y
Ÿ rdbnam (application server database name).
Ÿ rdbcolid (relational database collection identifier).
Ÿ pkgid (package identifier).
Ÿ pkgcnstkn (package consistency token).
maxsctnbr (maximum section number) Y Y Y



Figure D-20. Reply Objects for ENDBND command
DDM Codepoint Optional R S
Typdefnam (data type definition name) Y Y N
Typdefovr (TYPDEF override) Y Y N
Sqlcard (SQLCA reply data) Y Y

Figure D-21. EXCSAT Command


Requester
DDM Codepoint Optional R S Values
EXCSAT Y Y
extnam (external name) Y Y Y Qualified name
at the application
requester.
mgrlvlls (manager level list) Y Y Y Always fixed.
spvnam (supervisor name) Y N I
srvclsnm (server class name) Y Y Y QAS
srvnam (server – can be ignored) Y Y Y System name.
srvrlslv (server release level – can be ignored) Y Y Y VvvRrrMmm
Note: For the SRVRLSLV parameter, vv, rr, and mm designate the version, release, and modification level.

Figure D-22. EXCSATRD Reply for EXCSAT command


Server
DDM Codepoint Optional R S Values
EXCSATRD Y Y
extnam (external name) Y Y Y Fully quali-
fied job
name at the
application
server.
mgrlvlls (manager level list) Y Y Y From list
from the
application
requester.
srvclsnm (server class name) Y Y Y QAS
srvnam (server name) Y Y Y System
name.
srvrlslv (server release level) Y Y Y VvvRrrMmm
Note: For the SRVRLSLV parameter, vv, rr, and mm designate the version, release, and
modification level.
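
The SRVRLSLV value exchanged on EXCSAT and EXCSATRD is a packed version string of the form VvvRrrMmm. As a small illustration (Python; the sample value is hypothetical and not taken from the tables), it can be split apart like this:

    # Illustrative sketch: split a server release level of the form VvvRrrMmm
    # into its version, release, and modification numbers.
    def parse_srvrlslv(value):
        version = int(value[1:3])
        release = int(value[4:6])
        modification = int(value[7:9])
        return version, release, modification

    print(parse_srvrlslv("V04R02M00"))   # hypothetical value -> (4, 2, 0)
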

Figure D-23. EXCSQLIMM Command


DDM Codepoint Optional R S SQL Statement
EXCSQLIMM Y Y EXECUTE
IMMEDIATE
rdbnam (name of relational database as in ACCRDB) Y N Y
pkgnamcsn (package name, consistency token and Y Y
section #)
Ÿ rdbnam (application server database name).
Ÿ rdbcolid (relational database collection identifier).
Ÿ pkgid (package identifier).
Ÿ pkgcnstkn (package consistency token).
Ÿ pkgsn (package section number).

TYPDEFNAM (data type definition name) Y N Y
TYPDEFOVR (TYPDEF override) Y Y Y
SQLSTT (SQL statement - cannot contain host vari- Y Y
ables).

Figure D-24. Reply Objects for EXCSQLIMM command


DDM Codepoint Optional R S
Typdefnam (data type definition name) Y Y N
Typdefovr (TYPDEF override) Y Y N
Sqlcard (SQLCA reply data) Y Y Y

Figure D-25. EXCSQLSTT Command


DDM Codepoint Optional R S SQL Statement
EXCSQLSTT Y Y EXECUTE
rdbnam (name of relational database as in ACCRDB) Y N Y
pkgnamcsn (package name, consistency token and Y Y
section #)
Ÿ rdbnam (application server database name).
Ÿ rdbcolid (relational database collection identifier).
Ÿ pkgid (package identifier).
Ÿ pkgcnstkn (package consistency token).
Ÿ pkgsn (package section number).
outexp (output expected-specifies non-cursor SELECT) Y Y Y For non-cursor
SELECT only.
TYPDEFNAM (data type definition name) Y N Y
TYPDEFOVR (TYPDEF override) Y Y Y
SQLDTA (SQL program variable data) Y Y Y Input SQLDA
and host vari-
ables.

Figure D-26. Reply Objects for EXCSQLSTT command


DDM Codepoint Optional R S
Typdefnam (data type definition name) Y Y N
Typdefovr (TYPDEF override) Y Y N
Sqlcard (SQLCA reply data) Y Y
Sqldtard (SQL data reply data) Y Y

Figure D-27. OPNQRY Command


DDM Codepoint Optional R S SQL Statement
OPNQRY Y Y OPEN
rdbnam (name of relational database as in ACCRDB) Y N Y

pkgnamcsn (package name, consistency token and Y Y
section #)
Ÿ rdbnam (application server database name).
Ÿ rdbcolid (relational database collection identifier).
Ÿ pkgid (package identifier).
Ÿ pkgcnstkn (package consistency token).
Ÿ pkgsn (package section number).
qryblksz (query block size) Y Y
qryblkctl (query block protocol control) Y Y Y
TYPDEFNAM (data type definition name) Y N Y
TYPDEFOVR (TYPDEF override) Y Y Y
SQLDTA (input variable data) Y Y Y Input SQLDA
and host vari-
ables.

Figure D-28. Reply Message and Reply Objects for OPNQRY command
DDM Codepoint Optional R S
OPNQRYRM (open query reply message) Y Y
svrcod (severity code) Y Y
qryprctyp (protocol type) Y Y
sqlcsrhld (cursor hold flag) Y Y N
srvdgn (server diagnostic information) Y I N
Typdefnam (data type definition name) Y Y N
Typdefovr (TYPDEF override) Y Y N
Sqlcard (SQLCA reply data) Y Y Y
Qrydsc (query answer set description) Y Y Y
Qrydta (query answer set data) Y Y Y

Figure D-29. PRPSQLSTT Command


DDM Codepoint Optional R S SQL Statement
PRPSQLSTT Y Y Dynamic
PREPARE
rdbnam (name of remote database as in ACCRDB) Y N Y
pkgnamcsn (package name, consistency token and Y Y
section #)
Ÿ rdbnam (application server database name).
Ÿ rdbcolid (relational database collection identifier).
Ÿ pkgid (package identifier).
Ÿ pkgcnstkn (package consistency token).
Ÿ pkgsn (package section number).
rtnsqlda (specifies if SQLDA should be returned) Y Y Y USING clause
TYPDEFNAM (data type definition name) Y N Y
TYPDEFOVR (TYPDEF override) Y Y Y
SQLSTT (SQL Statement) Y Y



Figure D-30. Reply Objects for PRPSQLSTT command
DDM Codepoint Optional R S
Typdefnam (data type definition name) Y Y N
Typdefovr (TYPDEF override) Y Y N
Sqlcard (SQLCA reply data) Y Y Y
Sqldard (SQLDA reply data) Y Y Y

Figure D-31. RDBCMM Command


DDM Codepoint Optional R S SQL Statement
RDBCMM Y Y COMMIT
rdbnam (name of relational database as in ACCRDB) Y N Y

Figure D-32. Reply Objects for RDBCMM command


DDM Codepoint Optional R S
Typdefnam (data type definition name) Y Y N
Typdefovr (TYPDEF override) Y Y N
Sqlcard (SQLCA reply data) Y Y Y

Figure D-33. RDBRLLBCK Command


DDM Codepoint Optional R S SQL Statement
RDBRLLBCK Y Y ROLLBACK
rdbnam (name of relational database as in ACCRDB) Y N Y

Figure D-34. Reply Objects for RDBRLLBCK command


DDM Codepoint Optional R S
Typdefnam (data type definition name) Y Y N
Typdefovr (TYPDEF override) Y Y N
Sqlcard (SQLCA reply data) Y Y Y

Figure D-35. REBIND Command


DDM Codepoint Optional R S
REBIND N N N
Note: Use of REPLACE(*YES) with CRTSQLPKG provides a function similar to that architected for REBIND.
However, since the function is not an exact match, the REBIND command is not supported by the OS/400
licensed program.

| Figure D-36. SECCHK Command


| DDM Codepoint Optional R S AR Value
| SECCHK N Y Y
| secmec (security mechanism) N Y S 3 or 4
| usrid (user ID on remote RDB) Y Y Y
| password (password for remote RDB) Y Y Y
| newpassword (new password for password change) Y N N
| rdbnam (name of relational database) Y N I
| secmgrnm (security manager name) Y N I

| Notes:
| 1. While RDBNAM is optional, it is required by some servers, so DB2 for AS/400 always sends it.
| 2. PASSWORD is not sent for SECMEC of 4.
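
As a small illustration of notes 1 and 2 above (and of the SECMEC values listed for ACCSEC), the following Python fragment sketches which parameters flow on SECCHK for the two security mechanisms DB2 for AS/400 supports. The function name and sample values are invented for this example; this is not product code.

    # Illustrative sketch: SECCHK parameters for the supported SECMEC values.
    USRIDPWD, USRIDONL = 3, 4     # user ID and password / user ID only

    def secchk_parameters(secmec, rdb_name, user_id, password=None):
        params = {"SECMEC": secmec, "USRID": user_id}
        # RDBNAM is optional in the architecture, but DB2 for AS/400
        # always sends it because some servers require it (note 1).
        params["RDBNAM"] = rdb_name
        if secmec == USRIDPWD:
            # PASSWORD is not sent for a SECMEC of USRIDONL(4) (note 2).
            params["PASSWORD"] = password
        return params

    print(secchk_parameters(USRIDONL, "TESTRDB", "DSSUSER"))   # hypothetical values
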

| Figure D-37. SECCHKRM Reply message for SECCHK command


| DDM Codepoint Optional R S AS Value
| SECCHKRM N Y Y
| svrcod (severity code) N Y Y
| secchkcd (security check return code) N Y Y
| svcerrno (error number from called service) Y N I
| svrdgn (server diagnostic information) Y N I
| Note: SECCHKCDs map to SQLCODE -30082 reason codes as follows:
| 1 to 17
| 8 to OK
| 9 to 15
| 10 to 15
| 14 to 1
| 15 to 2
| 16 to 3
| 18 to 5
| 19 to 6
| 20 to 7
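
The note above is a direct lookup from SECCHKCD to the reason code reported with SQLCODE -30082. Shown below only as a convenience, the same table expressed as a small Python mapping ('OK' stands for the successful case, as in the note):

    # Illustrative sketch: SECCHKCD values and the SQLCODE -30082 reason codes
    # they map to, as listed in the note for SECCHKRM.
    SECCHKCD_TO_REASON = {
        1: 17,
        8: "OK",     # security processing completed without error
        9: 15,
        10: 15,
        14: 1,
        15: 2,
        16: 3,
        18: 5,
        19: 6,
        20: 7,
    }

    def reason_for_secchkcd(secchkcd):
        return SECCHKCD_TO_REASON.get(secchkcd, "not listed in the table above")

    print(reason_for_secchkcd(14))   # -> 1
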

D-12 OS/400 Distributed Database Programming V4R2


Bibliography
This bibliography lists four classifications of books avail- user-defined commands and menus, and application
able from IBM that are related to this book. These clas- testing, including debug mode, breakpoints, traces,
sifications are: and display functions.
Ÿ AS/400 system library Ÿ CL Reference, SC41-5722, provides a description of
the control language (CL) and its commands. Each
Ÿ Distributed Relational Database Library
command is defined including its syntax diagram,
Ÿ Other IBM Distributed Relational Database Platform parameters, default values, and keywords.
Libraries
Ÿ Communications Management, SC41-5406, con-
Ÿ Architecture books tains information on working with communications
Ÿ IBM Redbooks status, communications-related work management
topics, communications errors, performance, line
speed and subsystem storage.

AS/400 System Information Ÿ Distributed Data Management, SC41-5307, provides


the application programmer or system programmer
The following AS/400 books contain information you with information about remote file processing. It
may need. The order number is provided for ordering describes how to define a remote file to OS/400 dis-
and referencing purposes. tributed data management (DDM), how to create a
DDM file, which file utilities are supported through
Ÿ DSNX Support, SC41-5409, provides information for DDM, and the requirements of OS/400 DDM as
configuring an AS/400 system to use the remote related to other systems.
management support (distributed host command
facility), the change management support (distrib- Ÿ Local Device Configuration, SC41-5121, provides
uted systems node executive) and the problem the system operator or system administrator with
management support (alerts). information on how to do an initial local hardware
configuration and how to change that configuration.
Ÿ APPC Programming, SC41-5443, provides informa- It also contains conceptual information for device
tion for developing communication application pro- configuration, and planning information for device
grams that use APPC and for defining the configuration on the 9406, 9404, and 9402 System
communications environment for APPC communi- Units.
cations.
Ÿ ADTS/400: Data File Utility, SC09-1773, provides
Ÿ APPN Support, SC41-5407, provides the system the application programmer, programmer or help
programmer with information about the Advanced desk aide with information about the Application
Peer-to-Peer Networking (APPN) support provided Development Tools data file utility (DFU) to create
by the AS/400 system. It describes APPN concepts, programs to enter data into files, update files,
functions and features as carried out on the AS/400 inquire into files and run DFU programs. This guide
system and also presents considerations when also provides the work station operator with activ-
using APPN. ities and material to learn about DFU.
Ÿ Backup and Recovery, SC41-5304, provides the Ÿ SNA Distribution Services, SC41-5410, provides the
system programmer with information about the dif- system programmer or network administrator with
ferent media available to save and restore system information about configuring a communications
data, as well as a description of how to record network for distribution services (SNADS) and the
changes made to database files and how that infor- Virtual Machine/Multiple Virtual Storage (VM/MVS)
mation can be used for system recovery and activity bridge. In addition, object distribution functions, doc-
report information. ument library services and system distribution direc-
Ÿ CL Programming, SC41-5721, provides a wide- tory services are also discussed.
ranging discussion of programming topics, including Ÿ ICF Programming, SC41-5442, provides the appli-
a general discussion of objects and libraries, control cation programmer with the information needed to
language (CL) programming, controlling flow and write application programs that use AS/400 commu-
communicating between programs, working with nications and ICF files. It also contains information
objects in CL programs, and creating CL programs. on data description specifications (DDS) keywords,
Other topics include predefined and immediate mes- system-supplied formats, return codes, file transfer
sages and message handling, defining and creating support, and programming examples.

 Copyright IBM Corp. 1997, 1998 X-1


Ÿ ISDN Support, SC41-5403, contains information on (PTFs), and process and manage jobs on the
using an AS/400 system in an Integrated Services system.
Digital Network (ISDN) network environment.
Ÿ TCP/IP Configuration and Reference, SC41-5420,
Ÿ LAN, Frame-Relay and ATM Support, SC41-5404, provides the application programmer or end user
contains information on using an AS/400 system in with information about how the AS/400 system
a token-ring network, an Ethernet network, or carries out TCP/IP. This guide describes how to use
bridged network environment. and configure TCP/IP applications of FTP, SMTP
and TELNET. It also provides information about the
Ÿ National Language Support, SC41-5101, provides
relationship of TCP/IP to other AS/400 communi-
information required to understand and use the
cations protocols.
national language support function on the AS/400
system. This book prepares the AS/400 user for Ÿ Work Management, SC41-5306, provides the
planning and using the national language support system programmer with information about how to
(NLS) and the multilingual support of the AS/400 create and change a work management environ-
system. It also provides an explanation of the data- ment. It also includes a description of tuning the
base management of multilingual data and applica- system, collecting performance data, working with
tion considerations for a multilingual system. system values to control or change the overall oper-
ation of the system, and a description of how to
Ÿ Communications Configuration, SC41-5401, pro-
gather data to determine who is using the system
vides information for the application programmer or
and what resources are being used.
system programmer about configuration commands
and defining lines, controllers, and devices. Ÿ X.25 Network Support, SC41-5405, contains infor-
mation on using AS/400 systems in an X.25
Ÿ DB2 for AS/400 Query Management Programming,
network.
SC41-5703, provides the application programmer
with information on how to determine the database Ÿ System API Reference, SC41-5801, provides infor-
files to be queried for a report, define a structured mation on how to create, use, and delete objects
query language (SQL) query definition, and use and that help manage system performance, use
write procedures that use query management com- spooling, and maintain database files efficiently.
mands. This book also includes information on how This book also includes information on creating and
to use the query global variable support and under- maintaining the programs for system objects and
standing the relationship between the OS/400 query retrieving OS/400 information by working with
management and the AS/400 Query licensed objects, database files, jobs, and spooling.
program.
Ÿ Remote Work Station Support, SC41-5402, provides
information on how to set up and use remote work Distributed Relational Database
station support, such as display station pass- Library
through, distributed host command facility, and 3270
remote attachment. The following books provide background and general
Ÿ Security - Reference, SC41-5302, provides the support information for IBM Distributed Relational Data-
system programmer (or someone who is assigned base Architecture implementations.
the responsibilities of a security officer) with infor- Ÿ DRDA: Every Manager's Guide, GC26-3195, pro-
mation about system security concepts, planning for vides concise, high-level education on distributed
security, and setting up security on the system. relational database and distributed file. This book
Ÿ DB2 for AS/400 SQL Programming, SC41-5611, describes how IBM supports the development of
provides the application programmer, programmer, distributed data systems, and discusses some
or database administrator with an overview of how current IBM products and announced support for
to design, write, test and run SQL statements. It distributed data. The information in this book is
also describes interactive Structured Query Lan- intended to help executives, managers, and tech-
guage (SQL). nical personnel understand the concepts of distrib-
uted data.
Ÿ DB2 for AS/400 SQL Reference, SC41-5612, pro-
vides the application programmer, programmer, or Ÿ DRDA: Planning for Distributed Relational
database administrator with detailed information Database, SC26-4650, helps you plan for distrib-
about SQL statements and their parameters. uted relational data. It describes the steps to take,
the decisions to make, and the options from which
Ÿ System Operation, SC41-4203, provides information to choose in making those decisions. The book also
about how to use the system unit control panel and covers the distributed relational database products
console, send and receive messages, respond to and capabilities that are now available or that have
error messages, start and stop the system, use been announced, and it discusses IBM's stated
control devices, work with program temporary fixes

X-2 OS/400 Distributed Database Programming V4R2


direction for supporting distributed relational data in
the future. The information in this book is intended Other IBM Distributed Relational
for planners. Database Platform Libraries
Ÿ DRDA: Connectivity Guide SC26-4783, describes
how to interconnect IBM products that support Dis- The following lists of books are available for IBM distrib-
tributed Relational Database Architecture. It uted relational databases that are supported with the
explains concepts and terminology associated with AS/400 distributed relational database at this time.
distributed relational database and network
systems. This book tells you how to connect unlike
systems in a distributed environment. The informa-
DB2 Connect and Universal
tion in the Connectivity Guide is not included in any Database
product documentation. The information in this book Ÿ DB2 Connect Enterprise Edition Quick Beginning,
is intended for system administrators, database S10J-7888
administrators, communication administrators, and
system programmers. Ÿ DB2 Connect Personal Edition Quick Beginning,
S10J-8162
Ÿ DRDA: Application Programming Guide,
SC26-4773, describes how to design, build, and Ÿ DB2 Connect User's Guide V5, S10J-8163
modify application programs that access IBM's rela- Ÿ DB2 UDB for OS/2 Quick Beginnings V5,
tional database management systems. This manual S10J-8147
focuses on what a programmer should do differently
Ÿ DB2 UDB for UNIX Quick Beginnings V5,
when writing distributed relational database applica-
S10J-8148
tions for unlike environments. Topics include
program design, preparation, and execution, as well Ÿ DB2 UDB for Windows NT Quick Beginnings V5,
as performance considerations. Programming exam- S10J-8149
ples written in IBM C are included. The information
Ÿ DB2 UDB Personal Edition Quick Beginnings V5,
in this manual is designed for application program-
S10J-8150
mers who work with at least one of IBM's high-level
languages and with Structured Query Language Ÿ DB2 UDB Administration Getting Started V5,
(SQL). S10J-8154
Ÿ DRDA: Problem Determination Guide, SC26-4782, Ÿ DB2 UDB SQL Getting Started V5, S10J-8156
helps you define the source of problems in a distrib- Ÿ DB2 UDB Administration Guide V5, S10J-8157
uted relational database environment. This manual
contains introductory material on each product, for Ÿ DB2 UDB Embedded SQL Programming Guide V5,
people not familiar with those products, and gives S10J-8158
detailed information on how to diagnose and report Ÿ DB2 UDB SQL Reference V5, S10J-8165
problems with each product. The guide describes
procedures and tools unique to each host system Ÿ DB2 UDB Command Reference V5, S10J-8166
and those common among the different systems. Ÿ DB2 UDB Messages Reference V5, S10J-8168
The information in this book is intended for the
people who report distributed relational database Ÿ DB2 UDB Troubleshooting Guide V5, S10J-8169
problems to the IBM Support Center.
| Ÿ IBM SQL Reference, Volume 2, SC26-8416, makes DB2 for OS/390
| references to DRDA and compares the facilities of: Ÿ DB2 for OS/390 V5 Command Reference,
| – IBM SQL relational database products SC26-8960

| – IBM SQL Ÿ DB2 for OS/390 V5 Reference for Remote DRDA,


SC26-8964
| – ISO-ANSI SQL (SQL92E)
Ÿ DB2 for OS/390 V5 SQL Reference, SC26-8966
| – X/Open SQL (XPG4-SQL)
Ÿ DB2 for OS/390 V5 Utility Guide and Reference,
| – ISO-ANSI SQL Call Level Interface (CLI) SC26-8967
| – X/Open CLI Ÿ DB2 for OS/390 V5 Messages and Codes,
| – Microsoft Open Database Connectivity (ODBC) GC26-8979
| Version 2.0 Ÿ WOW! DRDA Supports TCP/IP: DB2 Server for
OS/390, SG24-2212
Ÿ DB2 for OS/390 V5 Server Online Library,
SK2T-9102

Bibliography X-3
DB2 Server for VSE and VM
Ÿ SBOF for DB2 Server for VM V5R1, SBOF-8917
Ÿ DB2 and Data Tools for VSE and VM, GC09-2350
Ÿ DB2 Server for VM V5R1 Database Administration, GC09-2388
Ÿ DB2 Server for VM V5R1 Application Programming, SC09-2392
Ÿ DB2 Server for VM V5R1 Database Services Utilities, SC09-2394
Ÿ DB2 Server for VM V5R1 Messages and Codes, SC09-2396
Ÿ DB2 Server for VM V5R1 Master Index and Glossary, SC09-2398
Ÿ DB2 Server for VM V5R1 Operation, SC09-2400
Ÿ DB2 Server for VSE & VM V5R1 Quick Reference, SC09-2403
Ÿ DB2 Server for VSE & VM V5R1 SQL Reference, SC09-2404
Ÿ DB2 Server for VM V5R1 System Administration, GC09-2405
Ÿ DB2 Server for VM V5R1 Diagnosis Guide, SC09-2407
Ÿ DB2 Server for VM V5R1 Interactive SQL Guide, SC09-2409
Ÿ DB2 Server V5R1 Data Spaces Support for VM/ESA, SC09-2411
Ÿ DB2 Server for VSE & VM V5R1 LPS, GC09-2413
Ÿ DB2 Server for VSE & VM V5R1 Data Restore, SC09-2499
Ÿ DB2 for VM V5R1 Control Center Installation, SC09-2501
Ÿ DB2 Server for VM/VSE Training Brochure, GC09-2561
Ÿ DB2 Server for VM V5R1 Online Product Libraries, SK2T-2792

Architecture Books
Ÿ Character Data Representation Architecture: Overview, GC09-2207
Ÿ Character Data Representation Architecture: Details, SC09-2190
  This manual includes a CD-ROM, which contains the two CDRA publications in
  online BOOK format, conversion tables in binary form, mapping source for many
  of the conversion binaries, a collection of code page and character set
  resources, and character naming information as used in IBM. The CD also
  includes a viewing utility to be used with the provided material. The viewer
  works with OS/2, Windows 3.1, and Windows 95.
Ÿ Distributed Relational Database Architecture Reference, SC26-4651
Ÿ DDM Architecture Reference Guide, SC21-9526. The SC21-9526-05 version of this
  book describes Level 4 of the DDM Architecture, which does not include the new
  DDM protocols for TCP/IP support.

Redbooks
Ÿ DRDA DDCS/6000 Connection to DB2 and DB2/400, GG24-4155
Ÿ Setup and Usage of SQL/DS in a DRDA Environment, GG24-3733
Ÿ DRDA Client/Server Application Scenarios, GG24-4193
Ÿ DRDA Client/Server for VM and VSE Setup, GG24-4275
Ÿ DATABASE 2/400 Advanced Database Functions, GG24-4249
Ÿ Distributed Relational Database Cross Platform Connectivity and Application, GG24-4311
Ÿ Getting Started with DB2 Stored Procedures: Give Them a Call through the Network, GG24-4693



Notices
This information was developed for products and services offered in the U.S.A. IBM
may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify
the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you any
license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
500 Columbus Avenue
Thornwood, NY 10594
U.S.A.

For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:

IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106, Japan

The following paragraph does not apply to the United Kingdom or any other country
where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS
MACHINES CORPORATION PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE. Some states do not allow disclaimer of express or implied warranties in
certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will be
incorporated in new editions of the publication. IBM may make improvements and/or
changes in the product(s) and/or the program(s) described in this publication at
any time without notice.

Licensees of this program who wish to have information about it for the purpose of
enabling: (i) the exchange of information between independently created programs
and other programs (including this one) and (ii) the mutual use of the information
which has been exchanged, should contact:

IBM Corporation
Software Interoperability Coordinator
3605 Highway 52 N
Rochester, MN 55901-7829
U.S.A.

Such information may be available, subject to appropriate terms and conditions,
including in some cases, payment of a fee.

The licensed program described in this information and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement or
any equivalent agreement between us.

This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which
illustrates programming techniques on various operating platforms. You may copy,
modify, and distribute these sample programs in any form without payment to IBM,
for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating
platform for which the sample programs are written. These examples have not been
thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply
reliability, serviceability, or function of these programs. You may copy, modify,
and distribute these sample programs in any form without payment to IBM for the
purposes of developing, using, marketing, or distributing application programs
conforming to IBM's application programming interfaces.



Trademarks

The following terms are trademarks of International Business Machines Corporation
in the United States, or other countries, or both:

Advanced Peer-to-Peer Networking
Advanced 36
AnyNet
Application System/400
APPN
AS/400
C/400
CICS
Client Access
Client Access/400
COBOL/400
Current
DATABASE 2
DataGuide
DataHub
DataJoiner
DataPropagator
DB2
DB2 Connect
DB2 Universal Database
Distributed Relational Database Architecture
DRDA
DXT
Extended Services
FORTRAN/400
IBM
IMS
Information Warehouse
Integrated Language Environment
MVS
NetView
Operating System/400
Operational Assistant
OS/2
OS/390
OS/400
PS/2
RPG/400
RS/6000
S/390
SQL/DS
System/390
VM/ESA
VTAM
3090
400

C-bus is a trademark of Corollary, Inc.

Microsoft, Windows, Windows NT, and the Windows 95 logo are registered trademarks
of Microsoft Corporation.

Java and HotJava are trademarks of Sun Microsystems, Inc.

UNIX is a registered trademark in the United States and other countries licensed
exclusively through X/Open Company Limited.

PC Direct is a registered trademark of Ziff Communications Company and is used by
IBM Corporation under license.

Other company, product, and service names may be trademarks or service marks of
others.



Index
Advanced Peer-to-Peer Networking (APPN)
Special Characters configuration example 3-7
(FFDC) first-failure data capture 9-29 configuration list 4-4
conversation level security 4-4
DRDA support 3-2
A location security 4-3
access path 7-3
remote location list 3-6, 4-4
See also index
session level security 4-2
protection, system-managed 7-6
SNA conversation security 4-4
access plan
Advanced Program-to-Program Communications
definition 10-24
(APPC)
SQL package 10-24
DRDA support 3-2
accessing AS/400 data via DB2 Connect B-4
alert
accounting
definition 3-3
planning for 2-9
for distributed relational database 9-24
active job
problem handling 9-23
working with 8-2
set up 3-16
active jobs
types 3-3
working with 6-3, 8-2
working with 9-23
Add Communications Entry (ADDCMNE)
alert default focal point (ALRDFTFP) parameter 3-16
command 4-5
alert primary focal point (ALRPRIFP) parameter 3-16
Add Relational Database Directory Entry
alert support
(ADDRDBDIRE) command 5-6, 6-27, 7-12, 9-31
focal point
Add Sphere of Control Entry (ADDSOCE)
definition 3-3
command 3-17
set up 3-19
ADDCMNE (Add Communications Entry)
overview 3-3
command 4-5
sphere of control
adding
definition 3-3
communications entry 4-5
set up 3-19
relational database directory entry 5-6, 7-12, 9-31
analyzing
sphere of control entry 3-17
RW trace data C-2
ADDRDBDIRE (Add Relational Database Directory
APPC (Advanced Program-to-Program Communi-
Entry) command 5-6, 6-27, 7-12, 9-31
cations)
ADDSOCE (Add Sphere of Control Entry)
DRDA support 3-2
command 3-17
application
ADDSVRAUTE command 4-8
considerations 2-4
administration and operation 6-1
designing 2-4
administration task
requirements 2-4
canceling work 6-26
Application Development Tools (ADT) 5-16
displaying job log 6-5
application program
finding a distributed relational database job 6-6
binding 10-24
job accounting 6-16
compiling 10-24
operating systems remotely 6-8
creating an SQL package 10-28
starting and stopping remote systems 6-8
deleting an SQL package 10-32
submitting remote commands 6-9
handling problems
working with active jobs 6-3
SQLCODE 9-15
working with commitment definitions 6-4
SQLSTATE 9-15
working with jobs 6-1
host program 10-21
working with user jobs 6-2
host variable 10-21
adopted authority
precompiler commands 10-23
displaying objects using 4-12
precompiling 10-22
running jobs under 4-12
program references 10-27



application program (continued) auditing (continued)
SQL naming convention 10-3 DSPRDBDIRE (Display Relational Database Direc-
SQLCODE 9-17 tory Entry) 6-27
SQLSTATE 9-17 relational database directory 6-27
system naming convention 10-2 Remove Relational Database Directory Entry
temporary source file member 10-23 (RMVRDBDIRE) 6-27
testing and debugging 10-25 RMVRDBDIRE (Remove Relational Database Direc-
application programming examples A-1 tory Entry) 6-27
application requester Work with Relational Database Directory Entries
commitment control for DDM jobs 10-20 (WRKRDBDIRE) 6-27
definition 1-2 WRKRDBDIRE (Work with Relational Database
problem diagnosis 9-3 Directory Entries) 6-27
program start request failure message 9-13 authority
relational database directory 5-5 restoring 7-10, 7-11
application requester driver (ARD) programs 1-8 saving 7-11
application server 9-4 authorization method
commitment control for DDM jobs 10-20 group profile 4-14
definition 1-2 autostart job 5-1
object security 4-9 auxiliary storage pool (ASP) 7-2
problem diagnosis 9-4 availability
program start request failure message 9-13 data 7-14
relational database directory 5-5
starting a service job 9-30
using a default user profile 4-5 B
applications backup
writing Distributed Relational Database 10-1 planning for 2-10
APPN (Advanced Peer-to-Peer Networking) batch job 5-1
configuration example 3-7 binding an application 10-24
configuration list 4-4 blocking
conversation level security 4-4 factors that affect 8-3
DRDA support 3-2 blocks
location security 4-3 size factors 8-6
remote location list 3-6, 4-4
session level security 4-2
SNA conversation security 4-4
C
C/400
APPN location list 3-6, 4-4
programming
ARD (application requester driver) programs 1-8
examples A-18
AS/400 data
capabilities
accessing via DB2 Connect B-4
distributed relational database 2-2
AS/400 Distributed Relational Database
catalog
managing an 1-10
definition 1-1
AS/400 files
CCSID
journaling B-4
conversion considerations B-2
AS/400 QCCSID value B-1
DB2 B-2
ASP (auxiliary storage pool) 7-2
DB2 Server for VM database managers B-2
auditing
DB2/2 licensed program B-2
Add Relational Database Directory Entry
CCSID (coded character set identifier) 1-7
(ADDRDBDIRE) 6-27
allowed values 10-17
ADDRDBDIRE (Add Relational Database Directory
changing 10-18
Entry) 6-27
DB2 considerations B-2
Change Relational Database Directory Entry
DB2 Server for VM considerations B-2
(CHGRDBDIRE) 6-27
how data is translated 10-19
CHGRDBDIRE (Change Relational Database Direc-
in user profile 10-17
tory Entry) 6-27
overview 10-16
Display Relational Database Directory Entry
tagging 10-16, 10-18
(DSPRDBDIRE) 6-27



CCSID considerations B-1 command support
CDRA (Character Data Representation DDM D-1
Architecture) 1-7, 10-16 command, CL
Change Job (CHGJOB) command 6-11 Add Communications Entry (ADDCMNE) 4-5
Change Network Attributes (CHGNETA) Add Relational Database Directory Entry
command 3-5, 3-16 (ADDRDBDIRE) 5-6, 6-27, 7-12, 9-31
DDMACC parameter 4-9 Add Sphere of Control Entry (ADDSOCE) 3-17
Change Object Auditing Value (CHGOBJAUD) ADDCMNE (Add Communications Entry) 4-5
command 6-27 ADDRDBDIRE (Add Relational Database Directory
Change Relational Database Directory Entry Entry) 5-6, 6-27, 7-12, 9-31
(CHGRDBDIRE) command 5-9, 6-27, 9-31 ADDSOCE (Add Sphere of Control Entry) 3-17
Change Subsystem Description (CHGSBSD) Change Job (CHGJOB) 6-11
command 5-2 Change Network Attributes (CHGNETA) 3-5, 3-16
changed object DDMACC parameter 4-9
saving 7-10 Change Object Auditing Value (CHGOBJAUD) 6-27
changing Change Relational Database Directory Entry
job 6-11 (CHGRDBDIRE) 5-9, 6-27, 9-31
network attributes 3-5, 3-16, 4-9 Change Subsystem Description (CHGSBSD) 5-2
object auditing value 6-27 CHGJOB (Change Job) 6-11
relational database directory entry 5-9, 9-31 CHGNETA (Change Network Attributes) 3-5, 3-16
subsystem description 5-2 DDMACC parameter 4-9
character conversion 1-7 CHGOBJAUD (Change Object Auditing Value) 6-27
Character Data Representation Architecture CHGRDBDIRE (Change Relational Database Direc-
(CDRA) 1-7, 10-16 tory Entry) 5-9, 6-27, 9-31
with DRDA 1-7 CHGSBSD (Change Subsystem Description) 5-2
checksum protection 7-2 Create Configuration List (CRTCFGL)
CHGDDMTCPA command 4-8, 4-14, 5-12 secure-location entry 4-3
CHGJOB (Change Job) command 6-11 Create Controller Description (APPC)
CHGNETA (Change Network Attributes) (CRTCTLAPPC) 4-2, 4-3
command 3-5, 3-16 Create Device Description (APPC)
DDMACC parameter 4-9 (CRTDEVAPPC) 4-2
CHGOBJAUD (Change Object Auditing Value) Create Structured Query Language C
command 6-27 (CRTSQLC) 10-23
CHGRDBDIRE (Change Relational Database Directory Create Structured Query Language C ILE
Entry) command 5-9, 6-27, 9-31 (CRTSQLCI) 10-23
CHGSBSD (Change Subsystem Description) Create Structured Query Language COBOL
command 5-2 (CRTSQLCBL) 10-23
COBOL/400 Create Structured Query Language COBOL ILE
programming (CRTSQLCBLI) 10-23
examples A-13 Create Structured Query Language FORTRAN
code page 1-8 (CRTSQLFTN) 10-23
coded character set identifier (CCSID) 1-7 Create Structured Query Language Package
allowed values 10-17 (CRTSQLPKG) 10-28
changing 10-18 Create Structured Query Language PL/I
DB2 considerations B-2 (CRTSQLPLI) 10-23
DB2 Server for VM considerations B-2 Create Structured Query Language RPG
how data is translated 10-19 (CRTSQLRPG) 10-23
in user profile 10-17 Create Structured Query Language RPG ILE
overview 10-16 (CRTSQLRPGI) 10-23
tagging 10-16, 10-18 CRTCFGL (Create Configuration List)
collection secure-location entry 4-3
definition 1-1 CRTCTLAPPC (Create Controller Description
in SQL naming convention 10-3 (APPC)) 4-2, 4-3
SQL communication area (SQLCA) 9-17 CRTDEVAPPC (Create Device Description
collection and table creation (APPC)) 4-2
DB2/400 Query Manager and SQL Development Kit CRTSQLC (Create Structured Query Language
needed for B-5 C) 10-23

command, CL (continued) command, CL (continued)
CRTSQLCBL (Create Structured Query Language Restore User Profiles (RSTUSRPRF) 7-10, 7-11
COBOL) 10-23 Revoke Object Authority (RVKOBJAUT) 4-10
CRTSQLCBLI (Create Structured Query Language RMVRDBDIRE (Remove Relational Database Direc-
COBOL ILE) 10-23 tory Entry) 5-8, 6-27
CRTSQLCI (Create Structured Query Language C RMVSOCE (Remove Sphere of Control Entry) 3-17
ILE) 10-23 RSTAUT (Restore Authority) 7-10, 7-11
CRTSQLFTN (Create Structured Query Language RSTCFG (Restore Configuration) 7-10
FORTRAN) 10-23 RSTLIB (Restore Library) 7-10
CRTSQLPKG (Create Structured Query Language RSTOBJ (Restore Object) 7-10, 7-11, 7-14
Package) 10-28 RSTUSRPRF (Restore User Profiles) 7-10, 7-11
CRTSQLPLI (Create Structured Query Language RVKOBJAUT (Revoke Object Authority) 4-10
PL/I) 10-23 SAVCHGOBJ (Save Changed Object) 7-10
CRTSQLRPG (Create Structured Query Language Save Changed Object (SAVCHGOBJ) 7-10
RPG) 10-23 Save Library (SAVLIB) 7-5, 7-10
CRTSQLRPGI (Create Structured Query Language Save Object (SAVOBJ) 7-5, 7-10, 7-11
RPG ILE) 10-23 Save Save File Data (SAVSAVFDTA) 7-10
Display Job Log (DSPJOBLOG) 6-5 Save Security Data (SAVSECDTA) 7-11
Display Journal (DSPJRN) 2-9, 7-4 Save System (SAVSYS) 7-10, 7-11
Display Message Descriptions (DSPMSGD) 9-8 SAVLIB (Save Library) 7-5, 7-10
Display Program References (DSPPGMREF) 6-12, SAVOBJ (Save Object) 7-5, 7-10, 7-11
10-27 SAVSAVFDTA (Save Save File Data) 7-10
Display Programs that Adopt (DSPPGMADP) 4-12 SAVSECDTA (Save Security Data) 7-11
Display Relational Database Directory Entry SAVSYS (Save System) 7-10, 7-11
(DSPRDBDIRE) 5-8, 6-27, 7-11 SBMRMTCMD (Submit Remote Command) 6-9,
Display Sphere of Control Status 6-10, 6-15
(DSPSOCSTS) 3-17 authority restrictions 6-9
DSPJOBLOG (Display Job Log) 6-5 Start Commitment Control (STRCMTCTL) 7-7
DSPJRN (Display Journal) 2-9, 7-4 Start Copy Screen (STRCPYSCRN) 9-6
DSPMSGD (Display Message Descriptions) 9-8 Start Debug (STRDBG) 9-30
DSPPGMADP (Display Programs that Adopt) 4-12 Start Journal Access Path (STRJRNAP) 7-5
DSPPGMREF (Display Program References) 6-12, Start Pass-Through (STRPASTHR) 9-6
10-27 Start Service Job (STRSRVJOB) 9-30
DSPRDBDIRE (Display Relational Database Direc- STRCMTCTL (Start Commitment Control) 7-7
tory Entry) 5-8, 6-27, 7-11 STRCPYSCRN (Start Copy Screen) 9-6
DSPSOCSTS (Display Sphere of Control STRDBG (Start Debug) 9-30
Status) 3-17 STRJRNAP (Start Journal Access Path) 7-5
End Job (ENDJOB) 6-26 STRPASTHR (Start Pass-Through) 9-6
End Request (ENDRQS) 6-26 STRSRVJOB (Start Service Job) 9-30
ENDJOB (End Job) 6-26 Submit Remote Command (SBMRMTCMD) 6-9,
ENDRQS (End Request) 6-26 6-10, 6-15
Grant Object Authority (GRTOBJAUT) 4-10 authority restrictions 6-9
GRTOBJAUT (Grant Object Authority) 4-10 Vary Configuration (VRYCFG) 3-6, 7-15
RCLDDMCNV (Reclaim Distributed Data Manage- VRYCFG (Vary Configuration) 3-6, 7-15
ment Conversations) 6-11, 6-12 Work with Active Jobs (WRKACTJOB) 6-3, 8-2
RCLRSC (Reclaim Resources) 6-11, 6-12 Work with Configuration Status
Reclaim Distributed Data Management Conversa- (WRKCFGSTS) 3-6, 7-15
tions (RCLDDMCNV) 6-11, 6-12 Work with Disk Status (WRKDSKSTS) 8-2
Reclaim Resources (RCLRSC) 6-11, 6-12 Work with Job (WRKJOB) 6-1
Remove Relational Database Directory Entry Work with Relational Database Directory Entries
(RMVRDBDIRE) 5-8, 6-27 (WRKRDBDIRE) 5-8, 6-27
Remove Sphere of Control Entry (RMVSOCE) 3-17 Work with Sphere of Control (WRKSOC) 3-16
Restore Authority (RSTAUT) 7-10, 7-11 Work with System Status (WRKSYSSTS) 8-2
Restore Configuration (RSTCFG) 7-10 Work with User Jobs (WRKUSRJOB) 6-2
Restore Library (RSTLIB) 7-10 WRKACTJOB (Work with Active Jobs) 6-3, 8-2
Restore Object (RSTOBJ) 7-10, 7-11, 7-14 WRKCFGSTS (Work with Configuration
Status) 3-6, 7-15



command, CL (continued) communications (continued)
WRKDSKSTS (Work with Disk Status) 8-2 configuration (continued)
WRKJOB (Work with Job) 6-1 line description 3-5
WRKRDBDIRE (Work with Relational Database location list 3-6
Directory Entries) 5-8, 6-27 network interface description 3-5
WRKSOC (Work with Sphere of Control) 3-16 steps 3-5
WRKSYSSTS (Work with System Status) 8-2 varying on or off 3-6
WRKUSRJOB (Work with User Jobs) 6-2 DDM and DRDA coexistence 3-2, 5-13
command, precompiler DDM conversations 6-10
Create Structured Query Language C ILE job 5-1, 5-3
(CRTSQLCI) 10-23 network considerations
Create Structured Query Language COBOL for DRDA support 3-4
(CRTSQLCBL) 10-23 overview 3-1
Create Structured Query Language COBOL ILE communications entry
(CRTSQLCBLI) 10-23 adding 4-5
Create Structured Query Language FORTRAN communications tools 3-1
(CRTSQLFTN) 10-23 communications trace
Create Structured Query Language PL/I messages 9-27
(CRTSQLPLI) 10-23 system service tools (SST) 9-27
Create Structured Query Language RPG compiling programs 10-24
(CRTSQLRPG) 10-23 concepts and terms 1-5
Create Structured Query Language RPG ILE configuration
(CRTSQLRPGI) 10-23 alerts 3-16
CRTSQLCBL (Create Structured Query Language restoring 7-10
COBOL) 10-23 varying 3-6, 7-15
CRTSQLCBLI (Create Structured Query Language configuration list
COBOL ILE) 10-23 creating 4-3
CRTSQLCI (Create Structured Query Language C configuration status
ILE) 10-23 working with 3-6, 7-15
CRTSQLFTN (Create Structured Query Language configuring
FORTRAN) 10-23 alerts 3-17
CRTSQLPLI (Create Structured Query Language APPN network nodes 3-7
PL/I) 10-23 communications
CRTSQLRPG (Create Structured Query Language controller description 3-6
RPG) 10-23 line description 3-5
CRTSQLRPGI (Create Structured Query Language network interface description 3-5
RPG ILE) 10-23 steps 3-5
commitment control configuring communications
journal management 7-3 network attributes 3-5
lock levels 7-6 connected state 10-7
notify object (NFYOBJ) parameter 7-7 connecting to a secure database 4-5
overview 1-3 connection
record lock durations 7-8 SQL 10-5
starting 7-7 SQL versus network 6-10
transaction recovery 7-6 connection management 2-4
with distributed relational database and DDM connection problems 9-13
jobs 10-20 connection state
commitment definitions, defined 6-4 CONNECT (Type 2) statement 10-5
committed work connection states
definition 1-3 activation group 10-7
communications distributed unit of work 10-5
alert support 3-3 remote unit of work 10-4
APPC support 3-2 considerations
APPN support 3-2 application programming 10-2
configuration CCSID B-1
alerts 3-16
controller description 3-6

controller description (APPC) CRTCTLAPPC (Create Controller Description (APPC))
creating 4-2, 4-3 command (continued)
controlling subsystem SECURELOC parameter 4-3
definition 5-2 specifying an APPN location password 4-3
QBASE 5-2 CRTDEVAPPC (Create Device Description (APPC))
QCTL 5-2 command
controlling which ID a job runs under 4-5 LOCPWD parameter 4-2
conversations specifying a location password 4-2
SNA versus TCP/IP 6-10 CRTSQLCBL (Create Structured Query Language
conversations, unprotected 8-1 COBOL) command 10-23
conversion considerations CRTSQLCBLI (Create Structured Query Language
CCSID B-2 COBOL ILE) command 10-23
DB2 B-2 CRTSQLCI (Create Structured Query Language C ILE)
DB2 Server for VM database managers B-2 command 10-23
DB2/2 licensed program B-2 CRTSQLFTN (Create Structured Query Language
copying displays 9-6 FORTRAN) command 10-23
Create Configuration List (CRTCFGL) command CRTSQLPKG (Create Structured Query Language
secure-location entry 4-3 Package) command 10-28
Create Controller Description (APPC) (CRTCTLAPPC) CRTSQLPLI (Create Structured Query Language PL/I)
command 4-2 command 10-23
location-password entry 4-3 CRTSQLRPG (Create Structured Query Language
SECURELOC parameter 4-3 RPG) command 10-23
specifying an APPN location password 4-3 CRTSQLRPGI (Create Structured Query Language
Create Device Description (APPC) (CRTDEVAPPC) RPG ILE) command 10-23
command current connection state 10-6
LOCPWD parameter 4-2
specifying a location password 4-2
Create Structured Query Language C ILE (CRTSQLCI) D
command 10-23 data
Create Structured Query Language COBOL accessing via DB2 Connect B-4
(CRTSQLCBL) command 10-23 availability 7-14
Create Structured Query Language COBOL ILE blocked for better performance B-4
(CRTSQLCBLI) command 10-23 character conversion 1-7
Create Structured Query Language FORTRAN considerations 2-6
(CRTSQLFTN) command 10-23 designing 2-4
Create Structured Query Language Package failure 9-25
(CRTSQLPKG) command 10-28 requirements 2-6
Create Structured Query Language PL/I (CRTSQLPLI) data availability and protection 7-1
command 10-23 data capture
Create Structured Query Language RPG FFDC 9-29
(CRTSQLRPG) command 10-23 data conversion
Create Structured Query Language RPG ILE noncharacter data 10-19
(CRTSQLRPGI) command 10-23 data entries
creating interpreting C-1
configuration list 4-3 data file utility (DFU) 5-16
controller description (APPC) 4-2, 4-3 data location
device description (APPC) 4-2 deciding 8-3
interactive SQL packages on DB2 Server for data needs
VM B-4 determining 2-1
structured query language package 10-28 data redundancy 7-16
cross-platform DRDB notes B-1 data translation
CRTCFGL (Create Configuration List) command CCSID 10-16
secure-location entry 4-3 noncharacter data 10-19
CRTCTLAPPC (Create Controller Description (APPC)) database
command 4-2 security 4-1
location-password entry 4-3



database administration DDM (distributed data management) (continued)
canceling work 6-26 unused conversations 6-10
displaying job log 6-5 using copy file commands 5-19
finding a distributed relational database job 6-6 DDM architecture command support D-1
job accounting 6-16 DDM error codes on FFDC C-14
operating systems remotely 6-8 DDM file
starting and stopping remote systems 6-8 setting up 5-13
submitting remote commands 6-9 DDM file access 5-13
working with DDM files
active jobs 6-3 SQL commitment control 10-20
commitment definitions 6-4 DDM job start message 6-6
jobs 6-1 DDMCNV (DDM conversations) job attribute 6-10,
user jobs 6-2 6-11
database recovery debug
auxiliary storage pool (ASP) 7-2 starting 9-30
checksum protection 7-2 debugging and testing
converting journal receiver entries 7-4 application program 10-25
disk failures 7-2 default collection name 10-3
failure types 7-1 default focal point
force-write ratio 7-9 definition 3-3
journal management 7-3 default user (DFTUSR) parameter 4-8
methods 7-1 defining
mirrored protection 7-3 controller description 3-6
rebuilding indexes 7-4 line description 3-5
reducing index rebuilding time 7-5 network interface description 3-5
uninterruptible power supply 7-2 description
database, improving performance through 8-2 FFDC dump output C-8
DB2 1-6 RW trace points C-3
CCSID B-2 design
conversion considerations B-2 application 2-4
DB2 Connect 1-6 data 2-4
accessing AS/400 data B-4 network 2-4
DB2 for VSE and VM 1-6 design for distributed relational database 2-1
DB2 Server for VM developing
creating interactive SQL packages B-4 management strategy 2-6
DB2 Server for VM database managers device description (APPC)
CCSID B-2 creating 4-2
conversion considerations B-2 DFTUSR (default user) parameter 4-8
DB2/2 licensed program DFU (data file utility) 5-16
CCSID B-2 DISCONNECT 6-10
conversion considerations B-2 disk failure
DB2/400 Query Manager and SQL Development Kit auxiliary storage pool (ASP) 7-2
coexistence across DRDA platforms 10-15 checksum protection 7-2
collection and table creation B-5 mirrored protection 7-3
distributed relational database statements 10-13 disk status
DDM (distributed data management) working with 8-2
CHGJOB command 6-11 Display Job Log (DSPJOBLOG) command 6-5
coexistence with DRDA support 3-2 Display Journal (DSPJRN) command 2-9, 7-4
DDMCNV job attribute 6-10, 6-11 Display Message Descriptions (DSPMSGD)
dropped conversations 6-10 command 9-8
dropping conversations 6-10, 6-11 Display Program References (DSPPGMREF)
keeping conversations 6-10 command 6-12, 10-27
keeping conversations active 6-10, 6-11 Display Programs that Adopt (DSPPGMADP)
moving data between AS/400 systems 5-19 command 4-12
reclaiming Display Relational Database Directory Entry
conversations 6-12 (DSPRDBDIRE) command 5-8, 6-27, 7-11
resources 6-12

Display Sphere of Control Status (DSPSOCSTS) DRDA (Distributed Relational Database Architecture)
command 3-17 Level 2 support 1-4
display, copying 9-6 DRDA (Distributed Relational Database Architecture)
displaying support
job log 6-5 coexistence with DDM 3-2
journal 2-9, 7-4 current AS/400 support 1-9
message descriptions 9-8 level 1 support 1-1
objects 6-12 level 2 support 1-1
program references 6-12, 10-27 overview 1-6
programs that adopt 4-12 with CDRA 1-7
relational database directory entry 5-8, 7-11 DRDA listener program 6-19
sphere of control status 3-17 DRDB
distributed data management (DDM) cross-platform B-1
CHGJOB command 6-11 DROP PACKAGE statement 10-33
coexistence with DRDA support 3-2 dropping a collection 6-15
DDMCNV job attribute 6-10, 6-11 DSPJOBLOG (Display Job Log) command 6-5
dropped conversations 6-10 DSPJRN (Display Journal) command 2-9, 7-4
dropping conversations 6-10, 6-11 DSPMSGD (Display Message Descriptions)
keeping conversations 6-10 command 9-8
keeping conversations active 6-10, 6-11 DSPPGMADP (Display Programs that Adopt)
moving data between AS/400 systems 5-19 command 4-12
reclaiming DSPPGMREF (Display Program References)
conversations 6-12 command 6-12, 10-27
resources 6-12 DSPRDBDIRE (Display Relational Database Directory
unused conversations 6-10 Entry) command 5-8, 6-27, 7-11
using copy file commands 5-19 DSPSOCSTS (Display Sphere of Control Status)
distributed data management conversations command 3-17
reclaiming 6-11, 6-12 dump, FFDC C-5
Distributed Relational Database DUW (distributed unit of work)
administration and operation 6-1 definition 1-4
managing 1-10
remote unit of work 10-3
set up 5-1 E
SQL specific to 10-13 EBCDIC 1-8
Distributed Relational Database application encoding, character conversion 1-7
considerations for a Distributed Relational End Job (ENDJOB) command 6-26
Database 10-2 End Request (ENDRQS) command 6-26
programming considerations 10-2 End TCP/IP Server CL command 6-20
Distributed Relational Database Architecture (DRDA) ending
support job 6-26
coexistence with DDM 3-2 request 6-26
current AS/400 support 1-9 ending SQL programs 10-16
overview 1-6 ENDJOB (End Job) command 6-26
with CDRA 1-7 ENDRQS (End Request) command 6-26
distributed relational database capabilities 2-2 ensuring data availability 7-14
distributed relational database problems environments
incorrect output 9-3 like 1-6
waiting, looping, performance unlike 1-6
at the application requester 9-3 error log 9-26
at the application server 9-4 FFDC data 9-30
distributed relational database security 4-1 error recovery
distributed unit of work (DUW) relational database 7-1
application design tips 2-4 error reporting
definition 1-4 alerts 9-23
dormant connection state 10-6 communications trace 9-27
definition C-5

X-14 OS/400 Distributed Database Programming V4R2


error reporting (continued) focal point
DRDA supported alerts 9-24 default 3-3
first-failure data capture C-5 definition 3-3
printing a job log 9-25 primary 3-3
printing an error log 9-26 sphere of control 3-3
trace jobs 9-26 force-write ratio 7-9
example
alerts configuration
adding a sphere of control entry 3-19 G
at end nodes 3-19 getting data to report a failure 9-25
creating 3-18 GRANT EXECUTE statement 4-11
analyzing the RW trace data C-2 Grant Object Authority (GRTOBJAUT) command 4-10
APPN configuration granting
controller description, nonswitched 3-10, 3-14 object authority 4-10
controller description, switched 3-11, 3-15 GRTOBJAUT (Grant Object Authority) command 4-10
network attributes 3-10, 3-12, 3-13
network node to network node 3-7
nonswitched line description 3-10, 3-14
H
handling DRDB problems 9-1
switched line description 3-11, 3-15
held connection state 10-6
configuring alert support 3-17
history log, displaying 6-25
displaying program references 6-14
host program
displaying SQL package references 6-14
definition 10-21
FFDC dump C-6
host variables
programming
definition 10-21
C/400 language A-18
hung job 6-15
COBOL/400 language A-13
database setup A-1
inserting data A-3
RPG/400 language A-6
I
IBM-supplied subsystem
spiffy corporation 1-13 QBASE 5-2
submitting a remote command 6-10 QBATCH 5-2
examples 5-9 QCMN 5-2
application programming A-1 QCTL 5-2
expectations and needs QINTER 5-2
identifying 2-1 QSPL 5-2
explicit connection 10-10 QSYSWRK 5-2
identifying your needs and expectations 2-1
F implicit connect 10-8
index
factors that affect blocking 8-3
definition 1-1
factors that affect query block size 8-6
journaling 7-4
failure data 9-25
journaling restrictions 7-5
FFDC (first-failure data capture) 9-25
rebuilding 7-4
interpreting C-1
recovering 7-5
FFDC data
saving and restoring 7-10
interpreting 9-30
starting journaling 7-5
FFDC dump C-5
table design considerations 7-5
files
informational messages 9-7
journaling B-4
interactive job 5-1, 5-3
finding first-failure data capture data 9-29
interactive SQL
first-failure data capture (FFDC) 9-25, 9-29
moving data between systems 5-16
DDM error codes C-14
starting commitment control 7-7
dump output description C-8
interactive SQL and query management setup B-3
first-failure data capture data (FFDC)
interpreting
interpreting C-1
data entries
for the RW component of trace job C-1

interpreting (continued) location, definition 2-5
FFDC data C-1 loop problem
FFDC data from the error log 9-30 application requester 9-3
trace job C-1 application server 9-4

J M
job management strategy
accounting 6-16 developing 2-6
canceling 6-26 managing an AS/400 Distributed Relational
changing 6-11 Database 1-10
ending 6-26 message
types 5-1 Additional Message Information display 9-8
working with 6-1 category descriptions 9-8
job log database accessed 6-8
alerts 9-25 DDM job start 6-6
displaying 6-5 distributed relational database 9-10
finding a job 6-6 handling problems 9-7
job trace 9-26 informational 9-7
jobs inquiry 9-7
working with active 8-2 program start request failure 9-13
journal severity code 9-9
displaying 2-9, 7-4 target DDM job started 6-8
journal access path types 9-9
starting 7-5 message category 9-8
journal management message descriptions
commitment control 7-3 displaying 9-8
indexes 7-4 migration of data from mainframes 5-19, 5-23
journal receiver 7-3 mirrored protection 7-3
overview 7-3 monitoring
starting index journaling 7-5 relational database activity 6-1
stopping 7-3 moving data
journal receiver 7-3 between AS/400 systems 5-16
journaling between unlike systems
AS/400 files B-4 using communications 5-22
using File Transfer Protocol 5-23
using OSI File Services/400 licensed
L program 5-23
LCKLVL parameter 7-7 using SQL functions 5-22
library using tape or diskette 5-22
restoring 7-10 using TCP/IP Connectivity Utilities/400 licensed
saving 7-5, 7-10 program 5-23
like environment copying files with DDM 5-19
definition 1-6 using copy file commands 5-19
load data using interactive SQL 5-16
into tables 5-14 using Query Management/400 5-18
using Query Management/400 5-15 using save and restore 5-21
using DFU (data file utility) 5-16
using SQL 5-14
location security N
APPC network 4-3 naming convention
APPN network 4-3 default collection name 10-3
secure-location entry 4-3 SQL 10-3
SECURELOC parameter 4-3 system 10-2
system verifies 4-3 naming distributed relational database objects 10-2
which system verifies security 4-3



national language support 10-16 performance (continued)
needs and expectations distributed relational database 8-1
identifying 2-1 factors affecting 8-3
network improving through database 8-2
considerations 2-5 improving through the network 8-1
designing 2-4 improving through the system 8-1
improving performance through 8-1 observing system 8-2
requirements 2-5 unprotected conversations 8-1
network attributes performance problems
changing 3-5, 3-16, 4-9 application server 9-4
network configuration example 3-7 planning
network considerations backup 2-10
for DRDA support 3-4 general operations 2-6
network redundancy 7-14 recovery 2-10
NFYOBJ (notify object) parameter 7-7 security 2-8
notes planning for distributed relational database 2-1
cross-platform DRDB B-1 precompile process
Notices X-5 commands 10-23
notify object (NFYOBJ) parameter 7-7 output listing 10-22
overview 10-22
SQL package 10-23
O temporary source file member 10-23
object precompiler command
restoring 7-10, 7-11, 7-14 Create Structured Query Language C ILE
saving 7-5, 7-10, 7-11 (CRTSQLCI) 10-23
object auditing value Create Structured Query Language COBOL
changing 6-27 (CRTSQLCBL) 10-23
object authority Create Structured Query Language COBOL ILE
granting 4-10 (CRTSQLCBLI) 10-23
revoking 4-10 Create Structured Query Language FORTRAN
object-related security (CRTSQLFTN) 10-23
DDMACC parameter 4-9 Create Structured Query Language PL/I
objects (CRTSQLPLI) 10-23
naming distributed relational database 10-2 Create Structured Query Language RPG
operation and administration 6-1 (CRTSQLRPG) 10-23
operations, general Create Structured Query Language RPG ILE
planning for 2-6 (CRTSQLRPGI) 10-23
CRTSQLCBL (Create Structured Query Language
COBOL) 10-23
P CRTSQLCBLI (Create Structured Query Language
package management
COBOL ILE) 10-23
SQL 10-28
CRTSQLCI (Create Structured Query Language C
packages
ILE) 10-23
working with 10-28
CRTSQLFTN (Create Structured Query Language
pass-through
FORTRAN) 10-23
starting 9-6
CRTSQLPLI (Create Structured Query Language
password
PL/I) 10-23
in CONNECT statement 4-5, 10-12
CRTSQLRPG (Create Structured Query Language
in interactive SQL 10-13
RPG) 10-23
sending 4-5, 10-12, 10-13
CRTSQLRPGI (Create Structured Query Language
using 4-5
RPG ILE) 10-23
performance
prestart job 5-1
blocked query data B-4
prestart jobs, using 6-21
blocking 8-3
primary focal point
deciding data location 8-3
definition 3-3
delays on connect 9-5
sphere of control setup 3-16

problem QCTL controlling subsystem 5-2
system-detected 9-1 QCTLSBSD system value 5-2
user-detected 9-2 QPSRVDMP FFDC spooled file C-5
problem analysis, planning for 2-9 QRETSVRSEC system value 4-8
problem handling C-5 query block size
Additional Message Information display 9-8 factors that affect the 8-6
alerts 9-23 query data
Analyze Problem (ANZPRB) command 9-21 blocked for better performance B-4
application problems 9-15 query management and interactive SQL setup B-3
communications trace 9-27 Query Management/400 function
copying displays 9-6 loading data into tables 5-15
displaying message description 9-8 moving data between AS/400 systems 5-18
distributed relational database messages 9-10
DRDA supported alerts 9-24
error log 9-26 R
isolating distributed relational database RCLDDMCNV (Reclaim Distributed Data Management
problems 9-2 Conversations) command 6-11, 6-12
job log 9-25 RCLRSC (Reclaim Resources) command 6-11, 6-12
job trace 9-26 RDB (relational database) parameter
message category 9-8 implicit CONNECT 10-8
message severity 9-9 in CRTSQLPKG command 10-29
overview 9-1 in relational database directory 5-6
problem log 9-21 Reclaim Distributed Data Management Conversations
program start request failure 9-13 (RCLDDMCNV) command 6-11, 6-12
system messages 9-7 Reclaim Resources (RCLRSC) command 6-11, 6-12
system-detected problems 9-1 reclaiming
user-detected problems 9-2 distributed data management conversations 6-11,
using display station pass-through 9-6 6-12
wait, loop, performance problems resources 6-11, 6-12
application requester 9-3 recovery
application server 9-4 auxiliary storage pool (ASP) 7-2
working with users 9-6 checksum protection 7-2
problem log 9-21 disk failures 7-2
problems failure types 7-1
handling 9-1 force-write ratio 7-9
program references journal management 7-3
displaying 6-12, 10-27 methods 7-1
program start request failure 9-13 mirrored protection 7-3
programming considerations planning for 2-10
for a Distributed Relational Database uninterruptible power supply 7-2
application 10-2 redundancy
programming examples communications network 7-14
application A-1 data 7-16
programs that adopt relational database
displaying 4-12 definition 1-1
protection relational database (RDB) parameter
system-managed access-path 7-6 implicit CONNECT 10-8
protection strategies for distributed databases 4-13 in CRTSQLPKG command 10-29
in relational database directory 5-6
relational database activity
Q monitoring 6-1
QBASE controlling subsystem 5-2 relational database directory
QCCSID auditing 6-27
system value B-1 changing entries 5-9
QCNTSRVC 9-31, 9-32 commands 5-6
creating an output file 7-11



relational database directory (continued) restoring (continued)
definition 1-11 from save file 7-10
displaying entries 5-8 from tape or diskette 7-10
local entry 5-5 indexes 7-10
optional parameters 5-6 library 7-10
RDB (relational database) parameter 5-6 object 7-10, 7-11, 7-14
removing entries 5-8 relational database directory 7-12
restoring 7-12 security data 7-11
RMTLOCNAME parameter 5-6 SQL packages 7-11
saving 7-11 user profiles 7-10, 7-11
setting up 5-5 REVOKE EXECUTE statement 4-12
setup example 5-9 Revoke Object Authority (RVKOBJAUT)
using CL programs 5-11 command 4-10
working with entries 5-8 revoking
relational database directory entries object authority 4-10
working with 5-8 RMVRDBDIRE (Remove Relational Database Directory
relational database directory entry Entry) command 5-8, 6-27
adding 5-6, 7-12, 9-31 RMVSOCE (Remove Sphere of Control Entry)
changing 5-9, 9-31 command 3-17
displaying 5-8, 7-11 rollback
removing 5-8 definition 1-3
relational database name RPG/400
implicit CONNECT 10-8 programming
in CRTSQLPKG command 10-29 examples A-6
in relational database directory 5-6 RSTAUT (Restore Authority) command 7-10, 7-11
RELEASE 6-10 RSTCFG (Restore Configuration) command 7-10
released connection state 10-6 RSTLIB (Restore Library) command 7-10
remote command RSTOBJ (Restore Object) command 7-10, 7-11, 7-14
submitting 6-9, 6-10, 6-15 RSTUSRPRF (Restore User Profiles) command 7-10,
remote procedure call 10-14 7-11
remote system operation RUNRMTCMD command 5-13, 6-9
starting and stopping 6-8 RUW (remote unit of work)
submitting remote commands 6-9 definition 1-4
remote unit of work (RUW) 10-3 RVKOBJAUT (Revoke Object Authority)
definition 1-4 command 4-10
Remove Relational Database Directory Entry RW trace data
(RMVRDBDIRE) command 5-8, 6-27 analyzing C-2
Remove Sphere of Control Entry (RMVSOCE)
command 3-17
removing S
relational database directory entry 5-8 SAVCHGOBJ (Save Changed Object) command 7-10
sphere of control entry 3-17 Save Changed Object (SAVCHGOBJ) command 7-10
request save file 7-10
ending 6-26 save file data
resources saving 7-10
reclaiming 6-11, 6-12 Save Library (SAVLIB) command 7-5, 7-10
Restore Authority (RSTAUT) command 7-10, 7-11 Save Object (SAVOBJ) command 7-5, 7-10, 7-11
Restore Configuration (RSTCFG) command 7-10 moving data between AS/400 systems 5-21
Restore Library (RSTLIB) command 7-10 Save Save File Data (SAVSAVFDTA) command 7-10
Restore Object (RSTOBJ) command 7-10, 7-11, 7-14 Save Security Data (SAVSECDTA) command 7-11
moving data between AS/400 systems 5-21 Save System (SAVSYS) command 7-10, 7-11
Restore User Profiles (RSTUSRPRF) command 7-10, saving
7-11 changed object 7-10
restoring indexes 7-10
authority 7-10, 7-11 journal receivers 7-4
configuration 7-10 library 7-5, 7-10

saving (continued) setting QCNTSRVC as a TPN
object 7-5, 7-10, 7-11 on a DB2 Connect application requester 9-32
relational database directory 7-11 on a DB2 for OS/390 application requester 9-32
save file data 7-10 on a DB2 for VM application requester 9-32
security data 7-11 on a DB2/400 application requester 9-31
SQL packages 7-11 setting up a distributed relational database 5-1
system 7-10, 7-11 setup
to save file 7-10 interactive SQL B-3
to tape or diskette 7-10 query management B-3
SAVLIB (Save Library) command 7-5, 7-10 size of query blocks
SAVOBJ (Save Object) command 7-5, 7-10, 7-11 factors that affect the 8-6
SAVSAVFDTA (Save Save File Data) command 7-10 SMAPP (system-managed access-path protection) 7-6
SAVSECDTA (Save Security Data) command 7-11 SNA (Systems Network Architecture) 3-1
SAVSYS (Save System) command 7-10, 7-11 special TPN for debugging APPC server jobs 9-32
SBMRMTCMD (Submit Remote Command) sphere of control
command 6-9, 6-10, 6-15 definition 3-3
SBMRMTCMD command 5-13 working with 3-16
security sphere of control entry
application adding 3-17
requester 4-1, 4-4, 4-5 removing 3-17
server 4-1, 4-4, 4-5 sphere of control status
assigning authority to users 4-14 displaying 3-17
auditing 6-27 spiffy corporation example 1-13
consistent system levels across network 4-2 spooled job 5-1
controlling access to objects 4-9 SQL CALL 10-14
controlling which ID a job runs under 4-5 SQL collection
conversation level 4-4 definition 1-1
default user profile 4-5, 4-13 SQL naming convention 10-3
distributed database overview 4-1 SQL package
for an AS/400 distributed relational database 4-1 access plan 10-24
location 4-3 adopted authority 4-12
object security 4-13 creating with CRTSQLPKG 10-28
password 4-5, 10-12, 10-13 creating with CRTSQLxxx 10-28
planning for 2-8 creation as a result of precompile 10-23
protection strategies 4-13 definition 1-12
restoring profiles and authorities 7-11 deleting 10-32
saving profiles and authorities 7-11 displaying objects used 6-14
session level 4-2 for interactive SQL 5-13
security data restoring 7-11
saving 7-11 saving 7-11
server SQL package management 10-28
application SQL packages
starting a service job 9-30 working with 10-28
server authorization entries 4-8, 5-13 SQL program
service job adopted authority 4-12
on the application server 9-30 compiling 10-24
starting 9-30 displaying objects used 6-14
session level security example listing
APPC network 4-2 CRTSQLPKG 9-17
APPN network 4-3 precompiler 9-15
creating a remote location list 4-3 SQLCODE 9-15
LOCPWD (location password) parameter 4-2 SQLSTATE 9-15
specifying a location password during device config- handling problems
uration 4-2 SQLCODE 9-15
specifying a location-password 4-3 SQLSTATE 9-15
starting commitment control 7-7



SQL programs, ending 10-16 subsystem (continued)
SQL specific to distributed relational database 10-13 definition 5-1
SQL statement IBM-supplied 5-2
CALL 10-14 QBASE 5-2, 5-3
CONNECT QBATCH 5-2
explicit 10-10 QCMN 5-2, 5-4
implicit 10-8 QCTL 5-2, 5-3
DISCONNECT 6-10 QCTLSBSD system value 5-2
DROP PACKAGE 10-33 QINTER 5-2, 5-4
GRANT EXECUTE 4-11 QSPL 5-2
precompiling 10-22 QSYSWRK 5-2
RELEASE 6-10 setup considerations 5-3
REVOKE EXECUTE 4-12 subsystem description
SQL terms changing 5-2
corresponding system terms 1-1 supported connections 1-10
definition list 1-1 supported products
SQLCODE error code DB2 1-6
error handling 9-17 DB2 Connect 1-6
for distributed relational database 9-17 DB2 for VSE and VM 1-6
SQLSTATE error code system
error handling 9-17 improving performance through 8-1
for distributed relational database 9-18 naming convention 10-2
SST (system service tools) 9-27 saving 7-10, 7-11
Start Commitment Control (STRCMTCTL) terms 1-1
command 7-7 system message
Start Copy Screen (STRCPYSCRN) command 9-6 Additional Message Information display 9-8
Start Debug (STRDBG) command 9-30 category 9-8
Start Journal Access Path (STRJRNAP) command 7-5 displaying message description 9-8
Start Pass-Through (STRPASTHR) command 9-6 for distributed relational database 9-10
Start Service Job (STRSRVJOB) command 9-30 informational 9-7
Start TCP/IP Server CL command 6-19 inquiry 9-7
starting returned SQLCODE 9-10
commitment control 7-7 severity code 9-9
debug 9-30 types 9-7, 9-9
journal access path 7-5 system performance
pass-through 9-6 applicator requester problem 9-4
service job 9-30 force-write ratio 7-9
starting a service job 9-30 system service tools (SST) 9-27
states system status
SQL connection 10-6 working with 8-2
stored procedure 10-14 system value
STRCMTCTL (Start Commitment Control) QCCSID B-1
command 7-7 system-detected problem 9-1
STRCPYSCRN (Start Copy Screen) command 9-6 system-managed access-path protection (SMAPP) 7-6
STRDBG (Start Debug) command 9-30 Systems Network Architecture (SNA) 3-1
STRJRNAP (Start Journal Access Path) command 7-5
STRPASTHR (Start Pass-Through) command 9-6
STRSRVJOB (Start Service Job) command 9-30 T
structured query language package table
creating 10-28 definition 1-1
Submit Remote Command (SBMRMTCMD) table creation
command 6-9, 6-10, 6-15 DB2/400 Query Manager and SQL Development Kit
submitting needed for B-5
remote command 6-9, 6-10, 6-15 target system
subsystem submit remote commands 6-9
controlling 5-2

TCP/IP 1-10, 2-4 user profiles
finding joblogs 9-26 restoring 7-10, 7-11
finding server jobs 6-6 user-detected problem 9-2
forcing joblogs to be saved 9-26 using passwords 4-5
security 4-8, 5-12
server job attributes 4-10
service jobs 9-31 V
user profile 4-14 Vary Configuration (VRYCFG) command 3-6, 7-15
user profiles 4-8, 4-10 varying
working with server jobs 6-3 configuration 3-6, 7-15
TCP/IP Communication Support Concepts 6-18 view
TCP/IP communications, establishing 6-18 definition 1-1
temporary source file member 10-23 recovering 7-5
terminology 6-17 VRYCFG (Vary Configuration) command 3-6, 7-15
terms and concepts 1-5
testing and debugging
application program 10-25
W
wait problem
tools
application requester 9-3
communications 3-1
application server 9-4
TPN
work management
setting QCNTSRVC 9-31, 9-32
job types 5-1
trace
subsystem setup 5-3
communications 9-27
subsystems 5-1
job 9-26
Work with Active Jobs (WRKACTJOB) command 6-3,
trace data
8-2
analyzing C-2
Work with Configuration Status (WRKCFGSTS)
trace job data
command 3-6, 7-15
interpreting C-1
Work with Disk Status (WRKDSKSTS) command 8-2
trace point
Work with Job (WRKJOB) command 6-1
description C-3
Work with Relational Database Directory Entries
partial send data stream C-3, C-4
(WRKRDBDIRE) command 5-8, 6-27
receive data stream C-3
Work with Sphere of Control (WRKSOC)
send data stream C-3
command 3-16
successful fetch C-4
Work with System Status (WRKSYSSTS)
unsuccessful fetch C-4
command 8-2
transaction program name parameter
Work with User Jobs (WRKUSRJOB) command 6-2
in AS/400 (TNSPGM) 5-7
working with
in SNA (TPN) 5-7
active jobs 6-3, 8-2
configuration status 3-6, 7-15
U disk status 8-2
unconnected state 10-7 job 6-1
uninterruptible power supply 7-2 relational database directory entries 5-8
unit of work sphere of control 3-16
definition 1-3 system status 8-2
unlike environment user jobs 6-2
definition 1-6 working with SQL packages 10-28
unprotected conversations 8-1 writing Distributed Relational Database
user jobs applications 10-1
working with 6-2 WRKACTJOB (Work with Active Jobs) command 6-3,
user profile 8-2
CCSID 10-17 WRKCFGSTS (Work with Configuration Status)
default 4-5, 4-13 command 3-6, 7-15
on application server 4-13 WRKDSKSTS (Work with Disk Status) command 8-2
restoring 7-11 WRKJOB (Work with Job) command 6-1
saving 7-11



WRKRDBDIRE (Work with Relational Database Direc-
tory Entries) command 5-8, 6-27
WRKSOC (Work with Sphere of Control)
command 3-16
WRKSYSSTS (Work with System Status)
command 8-2
WRKUSRJOB (Work with User Jobs) command 6-2

Communicating Your Comments to IBM
AS/400e series
Distributed Database Programming
Version 4
Publication No. SC41-5702-01

If you especially like or dislike anything about this book, please use one of the methods
listed below to send your comments to IBM. Whichever method you choose, make sure you
send your name, address, and telephone number if you would like a reply.

Feel free to comment on specific errors or omissions, accuracy, organization, subject matter,
or completeness of this book. However, the comments you send should pertain to only the
information in this manual and the way in which the information is presented. To request
additional publications, or to ask questions or make comments about the functions of IBM
products or systems, you should talk to your IBM representative or to your IBM authorized
remarketer.

When you send comments to IBM, you grant IBM a nonexclusive right to use or distribute
your comments in any way it believes appropriate without incurring any obligation to you.

If you are mailing a readers' comment form (RCF) from a country other than the United
States, you can give the RCF to the local IBM branch office or IBM representative for
postage-paid mailing.
Ÿ If you prefer to send comments by mail, use the RCF at the back of this book.
Ÿ If you prefer to send comments by FAX, use this number:
– United States and Canada: 1-800-937-3430
– Other countries: 1-507-253-5192
Ÿ If you prefer to send comments electronically, use this network ID:
– IBMMAIL(USIB56RZ)
[email protected]

Make sure to include the following in your note:


Ÿ Title and publication number of this book
Ÿ Page number or topic to which your comment applies.
Readers' Comments — We'd Like to Hear from You
AS/400e series
Distributed Database Programming
Version 4
Publication No. SC41-5702-01

Overall, how satisfied are you with the information in this book?

                            Very                                            Very
                            Satisfied   Satisfied   Neutral   Dissatisfied  Dissatisfied
Overall satisfaction           Ø           Ø           Ø           Ø            Ø

How satisfied are you that the information in this book is:

                            Very                                            Very
                            Satisfied   Satisfied   Neutral   Dissatisfied  Dissatisfied
Accurate                       Ø           Ø           Ø           Ø            Ø
Complete                       Ø           Ø           Ø           Ø            Ø
Easy to find                   Ø           Ø           Ø           Ø            Ø
Easy to understand             Ø           Ø           Ø           Ø            Ø
Well organized                 Ø           Ø           Ø           Ø            Ø
Applicable to your tasks       Ø           Ø           Ø           Ø            Ø

Please tell us how we can improve this book:

Thank you for your responses. May we contact you? Ø Yes Ø No

When you send comments to IBM, you grant IBM a nonexclusive right to use or distribute your comments
in any way it believes appropriate without incurring any obligation to you.

Name Address

Company or Organization

Phone No.