AS/400e series
Distributed Database Programming
Version 4
SC41-5702-01
Note
Before using this information and the product it supports, be sure to read the information in “Notices” on page X-5.
This edition applies to Version 4, Release 2, Modification 0 of Operating System/400 (product number 5769-SS1) and to all subsequent releases and modifications until otherwise indicated in new editions. This edition applies only to reduced instruction set computer (RISC) systems. This edition replaces SC41-5702-00.
Copyright International Business Machines Corporation 1997, 1998. All rights reserved.
Note to U.S. Government Users — Documentation related to restricted rights — Use, duplication or disclosure is subject to
restrictions set forth in GSA ADP Schedule Contract with IBM Corp.
Contents
About Distributed Database Programming (SC41-5702) . . . . . . . . . . . . xi
Who should read this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
AS/400 Operations Navigator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Prerequisite and related information . . . . . . . . . . . . . . . . . . . . . . . . . xii
Information available on the World Wide Web . . . . . . . . . . . . . . . . . . . xii
How to send your comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Setting QCNTSRVC as a TPN on a DB2/400 Application Requester . . . 9-31
| Creating Your Own TPN for Debugging a DB2 for AS/400 AS Job . . . . 9-31
| Setting QCNTSRVC as a TPN on a DB2 for VM Application Requester . 9-32
| Setting QCNTSRVC as a TPN on a DB2 for OS/390 Application Requester 9-32
| Setting QCNTSRVC as a TPN on a DB2 Connect Application Requester 9-32
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-1
AS/400 System Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-1
Distributed Relational Database Library . . . . . . . . . . . . . . . . . . . . . . X-2
Other IBM Distributed Relational Database Platform Libraries . . . . . . . . . X-3
DB2 Connect and Universal Database . . . . . . . . . . . . . . . . . . . . . X-3
DB2 for OS/390 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-3
DB2 Server for VSE and VM . . . . . . . . . . . . . . . . . . . . . . . . . . X-4
Architecture Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-4
Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-4
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-5
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-6
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-7
Figures
0-1. AS/400 Operations Navigator Display . . . . . . . . . . . . . . . . . . . . xi
1-1. A Typical Relational Table . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
1-2. Relationship of SQL Terms to System Terms . . . . . . . . . . . . . . 1-2
1-3. A Distributed Relational Database . . . . . . . . . . . . . . . . . . . . . 1-3
1-4. Unit of Work in a Local Relational Database . . . . . . . . . . . . . . . 1-3
1-5. Remote Unit of Work in a Distributed Relational Database . . . . . . 1-4
1-6. Distributed Unit of Work in a Distributed Relational Database . . . . . 1-5
1-7. The Spiffy Corporation System Organization . . . . . . . . . . . . . . 1-14
2-1. Alternative Solutions to Distributed Relational Database . . . . . . . . 2-3
3-1. The Spiffy Corporation Network Organization . . . . . . . . . . . . . . 3-8
3-2. Spiffy Corporation Example Network Configuration . . . . . . . . . . 3-18
4-1. Remote Access to a Distributed Relational Database . . . . . . . . . . 4-7
5-1. Relational Database Directory Setup for Two Systems . . . . . . . . 5-10
5-2. Relational Database Directory Setup for Multiple Systems . . . . . . 5-11
6-1. How High-level Languages Save Information About Objects . . . . 6-13
| 6-2. DRDA/DDM TCP/IP Server . . . . . . . . . . . . . . . . . . . . . . . . 6-18
7-1. Record Lock Duration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-8
7-2. Alternative Network Paths . . . . . . . . . . . . . . . . . . . . . . . . 7-15
7-3. Alternate Application Server . . . . . . . . . . . . . . . . . . . . . . . 7-16
7-4. Data Redundancy Example . . . . . . . . . . . . . . . . . . . . . . . . 7-17
9-1. Resolving Incorrect Output Problem . . . . . . . . . . . . . . . . . . . . 9-3
9-2. Resolving Wait, Loop, or Performance Problems on the Application
Requester . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-4
9-3. Resolving Wait, Loop, or Performance Problems on the Application
Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-5
9-4. Message Categories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-8
9-5. Message Severity Codes . . . . . . . . . . . . . . . . . . . . . . . . . . 9-9
| 9-6. Distributed Relational Database Messages . . . . . . . . . . . . . . . 9-11
9-7. Listing From a Precompiler . . . . . . . . . . . . . . . . . . . . . . . . 9-16
9-8. Listing from CRTSQLPKG . . . . . . . . . . . . . . . . . . . . . . . . 9-17
9-9. SQLCODEs and SQLSTATEs . . . . . . . . . . . . . . . . . . . . . . 9-18
9-10. Distributed Relational Database Messages that Create Alerts . . . . 9-24
9-11. Communications Trace Messages . . . . . . . . . . . . . . . . . . . . 9-27
10-1. Remote Unit of Work Activation Group Connection State Transition 10-4
10-2. Application-Directed Distributed Unit of Work Connection and
Activation Group Connection State Transitions . . . . . . . . . . . . 10-6
10-3. Coded Character Set Identifier (CCSID) . . . . . . . . . . . . . . . 10-17
A-1. Creating a Collection and Tables . . . . . . . . . . . . . . . . . . . . A-2
A-2. Inserting Data into the Tables . . . . . . . . . . . . . . . . . . . . . . A-3
A-3. RPG Program Example . . . . . . . . . . . . . . . . . . . . . . . . . . A-7
A-4. COBOL Program Example . . . . . . . . . . . . . . . . . . . . . . . . A-13
A-5. C Program Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-19
A-6. Program Output Example . . . . . . . . . . . . . . . . . . . . . . . . . A-22
C-1. An Example of Job Trace RW Component Information . . . . . . . . C-1
D-1. ACCRDB Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-2
D-2. ACCRDBRM Reply for ACCRDB command . . . . . . . . . . . . . . D-2
| D-3. ACCSEC Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-2
| D-4. ACCSECRD Reply for ACCSEC command . . . . . . . . . . . . . . D-3
D-5. BGNBND Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-3
D-6. Reply Objects for BGNBND command . . . . . . . . . . . . . . . . . D-4
Before using this guide, you should be familiar with general programming concepts
and terminology, and have a general understanding of the AS/400 system and
OS/400 operating system.
IBM recommends that you use this new interface. It is simple to use and has great
online information to guide you.
You can access the AS/400 Operations Navigator from the Client Access folder by
double-clicking the AS/400 Operations Navigator icon. You can also drag this icon
to your desktop for even quicker access.
For information about other AS/400 publications (except Advanced 36), see either
of the following:
The Publications Reference, SC41-5003, in the AS/400 Softcopy Library.
The AS/400 online library is available on the World Wide Web at the following
uniform resource locator (URL) address:
http://as4ððbks.rochester.ibm.com/
This chapter describes distributed relational database and how it is used on the
AS/400 system. It defines some general concepts of distributed relational database,
outlines the IBM DRDA implementation, and provides an overview of the current
DRDA implementation on the AS/400 system. It defines some terms and directs
you to other parts of this manual for more detail. Finally, an example corporation
named Spiffy is described. This fictional company uses AS/400 systems in a distributed relational database application program. This sample of the Spiffy Corporation
forms the background for all examples used in this manual.
Tables can be defined and accessed in several ways on the AS/400 system. One
way to describe and access tables on the system is to use a language like Structured Query Language (SQL). SQL is the standard IBM database language and
provides the necessary consistency to enable distributed data processing across
different system operating environments. Another way to describe and access
tables on the AS/400 system is to describe physical and logical files using data
description specifications (DDS) and access tables using file interfaces (for
example, read and write high-level language statements).
SQL uses different terminology from that used on the AS/400 system. For most
SQL objects there is a corresponding system object on the AS/400 system.
Figure 1-2 shows the relationship between SQL relational database terms and
AS/400 system terms.
A distributed relational database exists when the application programs that use
the data and the data itself are located on different systems. The simplest form of a
distributed relational database is shown in Figure 1-3 on page 1-3 where the application program runs on one system, and the data is located on another system.
When using a distributed relational database, the system on which the application
program is run is called the application requester (AR), and the system on which
the remote data resides is called the application server (AS).
A unit of work is one or more database requests and the associated processing
that make up a completed piece of work as shown in Figure 1-4. A simple example
is taking a part from stock in an inventory control application program. An inventory
program can tentatively remove an item from a shop inventory account table and
then add that item to a parts reorder table at the same location. The term transaction is another expression used to describe the unit of work concept.
In the above example, the unit of work is not complete until the part is both
removed from the shop inventory account table and added to a reorder table. When
the requests are complete, the application program can commit the unit of work.
This means that any database changes associated with the unit of work are made
permanent.
With unit of work support, the application program can also roll back changes to a
unit of work. If a unit of work is rolled back, the changes made since the last
commit or rollback operation are not applied. Thus, the application program treats
the set of requests to a database as a unit.
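The commit and rollback behavior just described can be sketched with any transactional database. The following Python example uses the built-in sqlite3 module as a stand-in for DB2 for AS/400; the table and part names are illustrative assumptions, not part of the manual's example:

```python
import sqlite3

# In-memory database standing in for the shop's local relational database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (part TEXT, qty INTEGER)")
conn.execute("CREATE TABLE reorder (part TEXT, qty INTEGER)")
conn.execute("INSERT INTO inventory VALUES ('widget', 10)")
conn.commit()

# One unit of work: remove a part from inventory AND add it to the reorder
# table. Neither change is permanent until the commit.
conn.execute("UPDATE inventory SET qty = qty - 1 WHERE part = 'widget'")
conn.execute("INSERT INTO reorder VALUES ('widget', 1)")
conn.commit()  # both changes are made permanent together

# A second unit of work that is rolled back: changes made since the last
# commit are not applied.
conn.execute("UPDATE inventory SET qty = qty - 5 WHERE part = 'widget'")
conn.rollback()

qty = conn.execute("SELECT qty FROM inventory WHERE part = 'widget'").fetchone()[0]
print(qty)  # 9: the committed removal stands; the rolled-back one does not
```

If the program fails partway through a unit of work, rolling back returns the database to its state at the last commit, which is exactly the treat-the-set-of-requests-as-a-unit behavior described above.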
┌───────────────────┐ ┌───────────────────┐
┌─── │ Request 1 ├───────────▶ │ │
│ ├───────────────────┤ │ │
Unit of Work 1 │ │ Request 2 ├───────────▶ │ │
│ ├───────────────────┤ │ │
└─── │ Request 3 ├───────────▶ │ │
├───────────────────┤ │ Local │
┌─── │ Request 4 ├───────────▶ │ Relational │
Unit of Work 2 │ ├───────────────────┤ │ Database │
│ │ Request 5 ├───────────▶ │ │
│ ├───────────────────┤ │ │
└─── │ Request 6 ├───────────▶ │ │
├───────────────────┤ │ │
┌─── │ Request 7 ├───────────▶ │ │
Unit of Work 3 │ ├───────────────────┤ │ │
└─── │ Request 8 ├───────────▶ │ │
└───────────────────┘ └───────────────────┘
Figure 1-4. Unit of Work in a Local Relational Database
┌───────────────────┐ ┌───────────────────┐
┌─── │ Request 1 ├───────────▶ │ │
│ ├───────────────────┤ │ Relational │
Unit of Work 1 │ │ Request 2 ├───────────▶ │ Database 1 │
│ ├───────────────────┤ ┌───▶ │ │
└─── │ Request 3 ├───────┘ └───────────────────┘
├───────────────────┤ ┌───────────────────┐
┌─── │ Request 4 ├───────────▶ │ │
Unit of Work 2 │ ├───────────────────┤ │ Relational │
│ │ Request 5 ├───────────▶ │ Database 2 │
│ ├───────────────────┤ │ │
└─── │ Request 6 ├───────────▶ │ │
├───────────────────┤ └───────────────────┘
┌─── │ Request 7 ├───────┐ ┌───────────────────┐
Unit of Work 3 │ ├───────────────────┤ └───▶ │ Relational │
└─── │ Request 8 ├───────────▶ │ Database 3 │
└───────────────────┘ └───────────────────┘
Figure 1-5. Remote Unit of Work in a Distributed Relational Database
Remote unit of work support enables an application program to read or update data
at more than one location. However, all the data that the program accesses within
a unit of work must be managed by the same relational database management
system. For example, the shop inventory application program must commit its
inventory and accounts receivable unit of work before it can read or update tables
that are in another location.
The target of the requests is controlled by the user or application with SQL statements such as CONNECT TO and SET CONNECTION. Each SQL statement must
refer to data at a single location.
When the application is ready to commit the work, it initiates the commit; commitment coordination is performed by a synchronization-point manager.
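Conceptually, a remote-unit-of-work application keeps one connection per location and commits its work before switching locations. The sketch below models that rule in Python with two local SQLite databases standing in for relational databases at two sites; the location names and table are assumptions for illustration, not the embedded SQL CONNECT syntax itself:

```python
import sqlite3

# Two in-memory SQLite databases stand in for relational databases at two
# locations; the names are invented for this sketch.
databases = {
    "CHICAGO": sqlite3.connect(":memory:"),
    "KANSAS": sqlite3.connect(":memory:"),
}
for db in databases.values():
    db.execute("CREATE TABLE parts (part TEXT)")

# Analogous to CONNECT TO CHICAGO: all statements now target one location.
current = databases["CHICAGO"]
current.execute("INSERT INTO parts VALUES ('widget')")
current.commit()  # the unit of work must be committed before switching

# Analogous to SET CONNECTION KANSAS: a new current location.
current = databases["KANSAS"]
current.execute("INSERT INTO parts VALUES ('gasket')")
current.commit()

for name, db in sorted(databases.items()):
    print(name, db.execute("SELECT part FROM parts").fetchall())
```

Each statement runs against exactly one location, and the commit before switching mirrors the remote unit of work rule stated above.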
| DB2 for AS/400 supports both remote unit of work and distributed unit of work
| with APPC communications. Remote unit of work is also supported with TCP/IP
| communications. A degree of processing sophistication beyond the distributed unit
| of work is the distributed request. This type of distributed relational database
| access enables a user or application program to issue a single SQL statement that
| can read or update data at multiple locations.
Tables in a distributed relational database do not have to differ from one another.
Some tables can be exact or partial copies of one another. Extracts, snapshots,
and replication are terms that describe types of copies using distributed processing.
Snapshots are read-only copies of tables that are automatically made by a system.
The system refreshes these copies from the source table on a periodic basis specified by the user—perhaps daily, weekly, or monthly. Snapshots are most useful for
locations that seek an automatic process for receiving updated information on a
periodic basis.
Tables can also be split across computer systems in the network. Such a table is
called a distributed table. Distributed tables are split either horizontally by rows or
vertically by columns to provide easier local reference and storage. The columns of a vertically distributed table reside at various locations, as do the rows of a horizontally distributed table. At any location, the user still sees the table as if it were kept in a single location. Distributing tables is most effective when the requests to access and update certain portions of the table come from the same location as those portions of the table.
DRDA support provides the structure for access to database information for rela-
tional database managers operating in like and unlike environments. For example,
access to relational data between two or more AS/400 systems is distribution in a
like environment, and access to relational data between an AS/400 system and
systems using the DB2 database manager is distribution in an unlike
environment.
For numeric data, these differences do not matter. Unlike systems that provide
DRDA support automatically convert any differences between the way a number is
represented in one computer system to the way it is represented in another. For
example, if an AS/400 application program reads numeric data from a DB2 database, DB2 sends the numeric data in System/390 format and the OS/400 database management system converts it to AS/400 numeric format.
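To make the idea of differing numeric representations concrete, the sketch below decodes IBM packed-decimal data, one numeric format used on both System/390 and AS/400, into a Python integer. The decoder is purely illustrative; in DRDA the database managers perform any needed numeric conversion transparently:

```python
def unpack_decimal(data: bytes) -> int:
    """Decode IBM packed-decimal bytes: two decimal digits per byte, with the
    sign in the low nibble of the final byte (0xD negative, 0xC or 0xF positive)."""
    digits = ""
    sign = 1
    for i, byte in enumerate(data):
        hi, lo = byte >> 4, byte & 0x0F
        if i < len(data) - 1:
            digits += f"{hi}{lo}"
        else:
            digits += str(hi)  # final byte carries one digit plus the sign nibble
            if lo == 0x0D:
                sign = -1
    return sign * int(digits)

print(unpack_decimal(bytes([0x12, 0x3C])))  # 123
print(unpack_decimal(bytes([0x04, 0x5D])))  # -45
```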
The handling of character data is more complex, but this too can be managed within a distributed relational database.
Character Conversion
Not only can there be differences in encoding schemes (such as EBCDIC versus
ASCII), but there can also be differences related to language. For instance,
systems configured for different languages can assign different characters to the
same code, or different codes to the same character. For example, a system configured for U.S. English can assign the same code to the character } that a system
configured for the Danish language assigns to å. But those two systems can assign
different codes to the same character such as $.
CDRA specifies the way to identify the attributes of character data so that the data
can be understood across systems, even if the systems use different character sets
and encoding schemes. For conversion to happen across systems, each system
must understand the attributes of the character data it is receiving from the other system.
For example, CCSID 37 means encoding scheme 4352 (EBCDIC), character set
697 (Latin, single-byte characters), and code page 37 (USA/Canada country
extended code page). CCSID 5026 means encoding scheme 4865 (extended
EBCDIC), character set 1172 with code page 290 (single-byte character set for
Katakana/Kanji), and character set 370 with code page 300 (double-byte character
set for Katakana/Kanji).
DB2, SQL/DS, the OS/400 system, and DDCS/2 include mechanisms to convert
character data between a wide range of CCSID-to-CCSID pairs and CCSID-to-code
page pairs. Character conversion for many CCSIDs and code pages is already built
into these products. For a complete list and description of all CCSIDs registered in
CDRA, see the Character Data Representation Architecture - Level 1 Registry
book. For a description of the use of CCSIDs on the AS/400 system, see “Coded
Character Set Identifier (CCSID)” on page 10-16.
These calls allow the ARD program to pass the SQL statements and information about the statements to a remote relational database and return the results to the system. The system then returns the results to the application or the user. Access to relational databases through ARD programs appears like access to DRDA application servers in the unlike environment.
For more information about application requester driver programs, see the System
API Reference.
In addition to DRDA access, ARD programs can be used to access databases that
do not support DRDA. Connections to relational databases accessed through ARD
programs are treated like connections to unlike systems. Such connections can coexist with connections to DRDA application servers, connections to the local relational database, and connections that access other ARD programs.
The OS/400 program includes run-time support for SQL. You do not need the DB2
for AS/400 Query Manager and SQL Development Kit licensed program installed on
a DB2 for AS/400 application requester or application server to process distributed
relational database requests or to create an SQL collection on an AS/400.
However, you do need the DB2 for AS/400 Query Manager and SQL Development
Kit program to precompile programs with SQL statements, run interactive SQL, or
run DB2 for AS/400 Query Manager.
| 1 The TCP/IP protocol currently does not support DUW. However, in a program compiled with RDBCNNMTH(*DUW), you can
| access an RUW server and do updates, under certain conditions which include the restriction that all other connections are
| read-only.
Communications
| The communications support for the DRDA implementation on the AS/400 is based
| on the AS/400 Distributed Data Management (DDM) architecture. This support
| includes both native TCP/IP connectivity and IBM Systems Network Architecture
| (SNA) connectivity through advanced program-to-program communications (APPC),
| with or without Advanced Peer-to-Peer Networking* (APPN*) and High-Performance
| Routing (HPR). In addition, OS/400 provides for APPC, and therefore DDM and
| distributed relational database access, over TCP/IP using AnyNet* support.
| AnyNet is not required for DRDA remote unit of work support over TCP/IP, but
| might be useful for distributed unit of work function over TCP/IP. See
| Chapter 3, Communications for an AS/400 Distributed Relational Database for
| more information about these functions and configuration samples.
Set Up
The run-time support for an AS/400 distributed relational database is provided by
the OS/400 program. Therefore, when the operating system is installed, distributed
relational database support is installed. However, some setup work is required to
make the application requesters and application servers ready to send and receive
work. One or more subsystems can be used to control interactive, batch, spooled,
and communications jobs. All the systems in the network must also have their relational database directory set up with connection information. Finally, you may wish
to put data into the tables of the application servers throughout the network.
The relational database directory contains database names and values that are
translated into communications network parameters. You add an entry for each
database in the network, including the local database. Each directory entry consists of a unique relational database name and corresponding communications path
information. For access provided by ARD programs, the ARD program name must
be added to the relational database directory entry.
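The directory's role can be pictured as a simple lookup that translates a relational database name into connection information. The sketch below is a conceptual Python model only; on OS/400 the real directory is maintained with CL commands such as ADDRDBDIRE and WRKRDBDIRE, and the entry names, host names, and fields shown here are invented for illustration (446 is the well-known DRDA port):

```python
# Conceptual model of a relational database directory: each entry maps a
# relational database name to the communications path used to reach it.
# Entry names, hosts, and fields are invented for this sketch.
rdb_directory = {
    "SPIFFYLCL": {"location": "*LOCAL"},  # the local database
    "SPIFFYMP": {"location": "remote", "host": "mp000.example.com", "port": 446},
    "SPIFFYKC": {"location": "remote", "host": "kc000.example.com", "port": 446},
}

def resolve(rdb_name: str) -> dict:
    """Translate a relational database name into its connection information."""
    try:
        return rdb_directory[rdb_name]
    except KeyError:
        raise LookupError(f"no relational database directory entry for {rdb_name}") from None

print(resolve("SPIFFYMP"))
```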
There are a number of ways to enter data into a database. You can use an SQL
application program, some other high-level language application program, or one of
these methods:
Interactive SQL
OS/400 query management
Data file utility (DFU)
Copy File (CPYF) command
For more information on ways to enter data into a distributed database, along with
a discussion of subsystems and relational database directories on the AS/400
system, see Chapter 5, Setting Up an AS/400 Distributed Relational Database.
Administration
As a database administrator in a distributed relational database network, you may
need to locate and monitor work being done on any one of several systems. Work
management functions on the AS/400 system provide effective ways to track this
work by allowing you to do the following:
Work with jobs in the network
Work with the communications networks, controllers, devices, modes, and sessions
You can read more about how to do these tasks in Chapter 6, Distributed Rela-
tional Database Administration and Operation Tasks.
Performance
| No matter what kind of application programs you are running on a system,
| performance can always be a concern. For a distributed relational database,
| network, system, and application performance are all crucial. System performance
| can be affected by the size and organization of main and auxiliary storage. There
| can also be performance gains if you know the strengths and weaknesses of SQL
| programs.
Problems
When a problem occurs within a distributed relational database, it is necessary to first identify where the problem originates. The problem may be on the application server or the application requester. When the database problem is located properly in the network, you may further isolate it as a user problem, a problem with an application program, an AS/400 system problem, or a communications problem in order to correct the error. Chapter 9, Handling Distributed Relational Database Problems describes the methods you can use to isolate and solve distributed database problems.
Application Programming
Programmers can write high-level language programs that use SQL statements for
AS/400 distributed application programs. The main differences from programs
written for local processing only are the ability to connect to remote databases and
to create SQL packages. The CONNECT SQL statement can be used to explicitly connect an application requester to an application server, or the name of the relational database can be specified when the program is created to allow an implicit connection to occur. Also, the SET CONNECTION, RELEASE, and DISCONNECT
statements can be used to manage connections for applications that use distributed
unit of work.
An SQL package is an AS/400 object used only for distributed relational databases. It can be created as a result of the SQL precompile process or from a compiled program object. An SQL package resides on the application server.
See Chapter 10, Writing Distributed Relational Database Applications for an overview of distributed relational database topics for the application programmer.
Figure 1-7 on page 1-14 illustrates a system organization chart for Spiffy Corporation.
| The central distributor runs OS/390 on its IBM 3090 system with DB2 and relevant
| decision support software. This system is used because of the large amounts of
| data that must be handled at any one time in a variety of application programs. The
| central vehicle distributor system is not dedicated to automobile division data
| processing. It must handle work and processes for the corporation that do not yet
| operate in a distributed database environment. The regional centers are running
| AS/400 Model 650 e-systems. They use APPC/APPN with SNADS and 5250
| Display Station Pass-through using an SDLC protocol.
All of the dealerships use AS/400 systems but they may range in size from a Model
600 e-system in the smaller enterprises up to a Model 640 e-system in the largest.
These systems are connected to the regional office using the SDLC protocol. The largest dealerships have a part-time programmer and a system operator to tend to the data processing needs of the enterprise. Most of the installations do not employ anyone with programming expertise, and some of the smaller locations do not employ anyone with more than a very general knowledge of computers.
Dealerships can have a list of from 2,000 to 20,000 customers. This translates to 5 service orders per day for a small dealership and up to 50 per day for a large dealership. These service orders include scheduled maintenance, warranty repairs,
regular repairs, and parts ordering.
The dealers stock only frequently needed spare parts and maintain their own inventory databases. Both regional centers provide parts when requested. Dealer inventories are also stocked on a periodic basis by a forecast-model-controlled batch
process.
The Spiffy Corporation requires all dealerships to be active in the inventory distributed relational database. Since the corporation operates its own dealerships, it has a full complement of dealership software that may or may not access the distributed relational database environment. The Spiffy dealerships use the full set of software tools. Most of the private franchises also use them, since they are tailored specifically to the Spiffy Corporation way of doing business.
The regional distribution centers manage the inventory for their region. They also
function as the database administrator for all distributed database resources used
in the region. The responsibilities involved vary depending on the level of data processing competency at each dealership. The regional center is always the first
contact for help for any dealership in the region.
The following are the database responsibilities for each level of activity in the
network:
Dealerships
Perform basic operation and administration of system
Enroll local users
Regional distribution centers
Examples used throughout this manual are associated with one or more of these
activities. Many examples show the process of obtaining a part from inventory in
order to schedule customer service or repairs. Others show distributed relational database administration tasks used to set up, secure, monitor, and resolve problems for systems in the Spiffy Corporation distributed relational database network.
Because the planning and design of a distributed relational database are closely linked, this chapter combines these topics when discussing the following related tasks:
Identifying your needs and expectations
Designing the application, data, and network
Putting together a management strategy
Data Needs
The first step in your analysis is to determine which factors affect your data and
how they affect it. Ask yourself the following questions:
What locations are involved?
What kind of transactions do you envision?
What data is needed for each transaction?
What dependencies do items of data have on each other, especially referential
limitations? For example, will information in one table need to be checked
against the information in another table? (If so, both tables must be kept at the
same location.)
Does the data currently exist? If so, where is it located? Who "owns" it (that is,
who is responsible for maintaining the accuracy of the data)?
What priority do you place on the availability of the needed data? Integrity of
the data across locations? Protection of the data from unauthorized access?
What access patterns do you envision for the data? For instance, will the data
be read, updated, or both? How frequently? Will a typical access return a lot of
data or a little data?
Applications where most database processing is done locally and access to remote data is needed only occasionally are typically good candidates for a distributed relational database.
Applications with the following requirements are usually poor candidates for a distributed relational database:
The data is kept at a central site and most of the work that a remote user
needs to do is at the central site.
Consistently high performance, especially consistently fast response time, is
needed. It takes longer to move data across a network.
Consistently high availability, especially twenty-four hour, seven-day-a-week
availability, is needed. Networks involve more systems and more in-between
components, such as communications lines and communications controllers,
which increases the chance of breakdowns.
A distributed relational database function that you need is not currently available or announced.
SQL is the standard IBM database language. If your goals and directions include
portability or remote data access on unlike systems, you should use distributed
relational database on the AS/400 system.
The distributed database function of distributed unit of work, as well as the additional data copying function provided by DataPropagator Relational Capture and Apply, broadens the range of activities you can perform on the AS/400. However, if your distributed database application requires a function that is not currently available on the AS/400, other options are available until the function is made available on the operating system. For example, you may do one of the following:
Provide the needed function yourself
Stage your plans for distributed relational database to allow for the new function to become available
Reassess your goals and requirements to see if you can satisfy them with a currently available or announced function. Some alternative solutions are listed in Figure 2-1. These alternatives can be used to supplement or replace available function.
Network Considerations
The design of a network directly affects the performance of a distributed relational
database. To properly design a distributed relational database that works well with
a particular network, do the following:
Because line speed can be very important to application performance, provide sufficient capacity at the appropriate places in the network to achieve efficient performance for the main distributed relational database applications.
| See the Communications Management book for more information.
Evaluate the available communication hardware and software and, if necessary,
your ability to upgrade.
| For APPC connections, consider the session limits and conversation limits
| specified when the network is defined.
Identify the hardware, software, and communication equipment needed (for both test and production environments), and the best configuration of the equipment for a distributed relational database network.
| Consider the skills that are necessary to support TCP/IP as opposed to those
| that are necessary to support APPC. Also consider the additional functionality
| offered by APPC (that is, two-phase commit).
Take into consideration the initial service level agreements with end user
groups (such as what response time to expect for a given distributed relational
database application), and strategies for monitoring and tuning the actual
service provided.
Develop a naming strategy for database objects in the distributed relational
database and for each location in the distributed relational database system. A
location is a specific relational database management system in an intercon-
nected network of relational database management systems that participate in
distributed relational database. Consider the following when developing this
strategy:
– The fully qualified name of an object in a distributed database system has
three (rather than two) parts, and the highest-level qualifier identifies the
location of the object.
– Each location in a distributed relational database system should be given a
unique identification; each object in the system should also have a unique
identification. Duplicate identifications can cause serious problems. For
example, duplicate locations and object names may cause an application to
connect to an unintended remote database, and once connected, access
an unintended object. Pay particular attention to naming when networks are
coupled.
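On the AS/400 system, the mapping from a relational database name to its location
is recorded in the relational database directory, so the naming strategy carries
through to directory entries. The following sketch uses the Add Relational Database
Directory Entry (ADDRDBDIRE) command; the database and location names are
illustrative only:

   /* Add an entry for the local relational database             */
   ADDRDBDIRE RDB(MP000) RMTLOCNAME(*LOCAL)
   /* Add an entry for a remote relational database; the RDB     */
   /* name is the high-level qualifier used in three-part names  */
   ADDRDBDIRE RDB(KC000) RMTLOCNAME(KC000)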
General Operations
To plan for the general operation of a distributed relational database, consider both
performance and availability. The following design considerations can help you
improve both the performance and availability of a distributed relational database:
If an application involves transactions that run frequently or that send or receive
a lot of data, you should try to keep it in the same location as the data.
For data that needs to be shared by applications in different locations, put the
data in the location with the most activity.
If the applications in one location need the data as much as the applications in
another location, consider keeping copies of the data at both locations. When
keeping copies at multiple locations, ask yourself the following questions about
your management strategy:
– Will users be allowed to make updates to the copies?
– How and when will the copies be refreshed with current data?
– Will all copies have to be backed up or will backing up one copy be suffi-
cient?
– How will general administration activities be performed consistently for all
copies?
– When is it permissible to delete one of the copies?
Consider whether the distributed databases will be administered from a central
location or from each database location.
Security
Part of planning for a distributed relational database involves the decisions you
must make about securing distributed data. These decisions include:
What systems should be made accessible to users in other locations and which
users in other locations should have access to those systems.
How tightly controlled access to those systems should be. For example, should
a user password be required when a conversation is started by a remote user?
| Is it required that passwords flow over the wire in encrypted form?
| Is it required that a user profile under which a client job runs be mapped to a
| different user identification or password based on the name of the relational
| database to which you are connecting?
What data should be made accessible to users in other locations and which
users in other locations should have access to that data.
What actions those users should be allowed to take on the data.
Whether authorization to data should be centrally controlled or locally con-
trolled.
If special precautions should be taken because multiple systems are being
linked. For example, should name translation be used?
When making the previous decisions, consider the following characteristics of
each location:
Physical protection. For example, a location may offer a room with restricted
access.
Level of system security. The level of system security often differs between
locations. The security level of the distributed database is no greater than the
lowest level of security used in the network.
| All systems connected by APPC can do the following:
| – If both systems are AS/400 systems, communicate passwords in encrypted
| form.
| – Verify that when one system receives a request to communicate with
| another system in the network, the requesting system is actually "who it
| says it is" and that it is authorized to communicate with the receiving
| system.
| All systems can do the following:
| – Pass a user's identification and password from the local system to the
| remote system for verification before any remote data access is allowed.
| – Grant and revoke privileges to access and manipulate SQL objects such as
| tables and views.
Accounting
You need to be able to account and charge for the use of distributed data. Con-
sider the following:
Accounting for the use of distributed data involves the use of resources in one
or more remote systems, the use of resources on the local system, and the use
of network resources that connect the systems.
Accounting information is accumulated by each system independently. Network
accounting information is accumulated independent of the data accumulated by
the systems.
The time zones of various systems may have to be taken into account when
trying to correlate accounting information. Each system clock may not be syn-
chronized with the remote system clock.
Differences may exist between each system's permitted accounting codes
(numbers). For example, the AS/400 system restricts accounting codes to a
maximum of 15 characters.
The following functions are available to account for the use of distributed data:
AS/400 job accounting journal. The AS/400 system writes job accounting infor-
mation into the job accounting journal for each distributed relational database
application. The Display Journal (DSPJRN) command can be used to write the
accumulated journal entries into a database file. Then, either a user-written
program or query functions can be used to analyze the accounting data. For
more information, see “Job Accounting” on page 6-16.
NetView* accounting data. The NetView licensed program can be used to
record accounting data about the use of network resources.
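As a sketch of the job accounting approach described above, the accumulated
entries can be written to a database file and then analyzed; the library and output
file names here are illustrative only:

   /* Write the accumulated entries from the job accounting      */
   /* journal QACGJRN to a database file for analysis            */
   DSPJRN JRN(QSYS/QACGJRN) OUTPUT(*OUTFILE) +
          OUTFILE(MYLIB/ACGDTA)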
Problem Analysis
Problem analysis needs to be managed in a distributed database environment.
Problem analysis involves both identifying and resolving problems for applications
that are processed across a network of systems. Consider the following:
Distributed database processing problems manifest themselves in various ways.
For example, an error return code may be passed to a distributed database
application by the system that detects the problem. In addition, responses may
be slow, wrong, or nonexistent.
Tools are available to diagnose distributed database processing problems. For
example, each distributed relational database product provides trace functions
that can assist in diagnosing distributed data processing problems.
Additional information about how to connect unlike systems in a network for distrib-
uted relational database work can be found in the Distributed Relational Database
Architecture Connectivity Guide, SC26-4783.
Communications Tools
| Communications support for the DRDA implementation on the AS/400 system was
| initially provided only under the IBM Systems Network Architecture (SNA) through
| the Advanced Program-to-Program Communications (APPC) protocol, with or
| without Advanced Peer-to-Peer Networking (APPN).
| Native TCP/IP support for DRDA, introduced more recently, is limited to remote
| unit of work (single-phase commit) protocols.
| The examples and specifications in this chapter are specific to SNA configurations
| and native TCP/IP only. For more information on APPC over TCP/IP, refer to the
| Communications Configuration book. For more information on setting up native
| TCP/IP support, see the TCP/IP Configuration and Reference book.
| AS/400 APPN and HPR are documented in the APPN Support book.
Using DDM support, the remote file is identified and the communications path is
provided by means of a DDM file on the local system.
| If you have an SNA network, you can use DDM to support distributed relational
| database processing for administrative tasks such as submitting remote commands,
| copying files, and moving data from one system to another. To use DDM support, a
| DDM file must be created. This is discussed in “Setting Up DDM Files” on
| page 5-13. Using a DDM file with the AS/400 copy file commands is discussed in
| “Using Copy File Commands Between Systems” on page 5-19. Using DDM files to
| submit a remote command is discussed in “Submit Remote Command
| (SBMRMTCMD) Command” on page 6-8.
Alert Support
Alert support on the AS/400 system allows you to manage problems from a central
location. Alert support is useful for managing systems that do not have an operator,
managing systems where the operator is not skilled in problem management, and
maintaining control of system resources and expenses.
On the AS/400 system, alerts are created based on messages that are sent to the
local system operator. These messages are used to inform the operator of prob-
lems with hardware resources, such as local devices or controllers, communication
lines, or remote controllers or devices. These messages can also report software
errors detected by the system or application programs.
Any message with the alert option field (located in the message description) set to
a value other than *NO can generate an alert. Alerts are generated from several
types of messages:
OS/400 messages defined as alerts.
OS/400 support sends alerts for problems related to distributed relational data-
base functions. For more information about distributed relational database
related alerts, see “Alerts” on page 9-23.
IBM-supplied messages where the value in the alert option field is specified as
*YES by the Change Message Description (CHGMSGD) command. In this way,
you can select the messages for which you want alerts sent to the distributed
relational database administrator.
Messages that you create and define as alerts, or that you create with the
QALGENA application program interface (API).
The AS/400 system also provides the capability to nest focal points. You can define
a high level focal point, which accepts all of the alerts collected by lower level focal
points.
Because of the increased usage that comes with distributed relational database
processing, you may want to increase the maximum number of sessions parameter
(MAXSSN) and the maximum number of conversations parameter (MAXCNV) for
the MODE description created for both the local and remote location.
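For example, assuming a mode description named BLANK is used for the
distributed relational database sessions (the mode name and the limits shown are
illustrative, not recommendations):

   /* Raise the session and conversation limits for the mode     */
   CHGMODD MODD(BLANK) MAXSSN(16) MAXCNV(16)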
In addition to increasing capacity through the MODE descriptions, you may want to
consider increasing the line speed for various lines within the network or selecting a
better quality line to improve performance of the network for your distributed rela-
tional database processing.
Another consideration for your distributed relational database network is the ques-
tion of data accessibility and availability. The more critical a certain database is to
daily or special enterprise operations, the more you need to consider how users
can access that database. This means examining paths and alternative paths
through the network to provide availability of the data as it is needed. More about
this topic is discussed in Chapter 7, Data Availability and Protection.
Line speed and how you configure your communications line can significantly affect
network performance. However, it is important to ask a few questions about the
nature of the information being transferred in relation to both line speed and type of
use. For example:
How much information must be moved?
What is a typical transaction and unit of work for batch applications?
What is a typical transaction and unit of work for an interactive application and
how much data is sent and received for each transaction?
How many application programs or users will be using the line at the same
time?
| For more information about network planning and performance considerations for
| APPC, see the Communications Management book.
Each AS/400 system in the network must be defined so that each system can iden-
tify itself and the remote systems in the network. To define a system in the network
you must:
1. Define the network attributes.
| 2. Create network interfaces and network server descriptions, if necessary.
3. Create the appropriate line descriptions.
4. Create a controller description.
| 5. Create a class-of-service description for APPC connections.
| 6. Create a mode description for APPC connections.
7. Create device descriptions automatically or manually.
Notes:
1. The controller description is equivalent to the IBM Network Control Program
and Virtual Telecommunications Access Method (NCP/VTAM*) PU macros.
The information in a controller description is found in the Extended Services
Communication Manager Partner LU profile.
2. The device description is equivalent to the NCP/VTAM logical unit (LU) macro.
The information in a device description is found in Extended Services Commu-
nications Manager Partner LU and LU profiles.
The Communications Configuration and the APPN Support books contain more
information about configuring for networking support and working with location lists.
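The definition steps listed above might be sketched as the following CL sequence.
This is an outline only: the object names are placeholders, steps that apply only to
particular hardware (network interfaces, class-of-service descriptions) are omitted,
and parameters not shown would be chosen for your own network:

   /* 1. Define the network attributes                           */
   CHGNETA LCLNETID(APPN) LCLCPNAME(SYSA) +
           LCLLOCNAME(SYSA) NODETYPE(*NETNODE)
   /* 3. Create a line description                               */
   CRTLINSDLC LIND(SYSBL) RSRCNAME(LIN011)
   /* 4. Create a controller description                         */
   CRTCTLAPPC CTLD(SYSBL) LINKTYPE(*SDLC) LINE(SYSBL) +
              RMTNETID(APPN) RMTCPNAME(SYSB) STNADR(01)
   /* 6. Create a mode description for APPC connections          */
   CRTMODD MODD(DRDAMOD) MAXSSN(8) MAXCNV(8)
   /* 7. Device descriptions can be created automatically        */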
Figure 3-1 (RV2W735-0). Spiffy Corporation example network: network nodes
MP000 and KC000, with end nodes MP101, MP110, MP201, KC101, KC105,
KC201, and KC310.
In this network organization, two of Spiffy Corporation's regional offices are the
network nodes systems named MP000 and KC000. The MP000 system in
Minneapolis and the KC000 system in Kansas City communicate with each other
over an SDLC nonswitched line with an SDLC switched line as a backup line. The
MP000 AS/400 system serves as a development and problem handling center for
the KC000 system and the other regional network nodes.
The following example programs and explanations describe how to configure the
Minneapolis and Kansas City AS/400 systems as network nodes in the network,
and also shows how Minneapolis configures its network to one of its area dealer-
ships. This example is intended to describe only a portion of the tasks needed to
configure the network shown in Figure 3-1, and is not a complete configuration for
that network.
/*********************************************************************/
/*   MP000 to MP101 (nonswitched)                                    */
/*********************************************************************/
/* Create nonswitched line description for MP000 to MP101            */
             CRTLINSDLC LIND(MP101L) RSRCNAME(LIN031)
/* Create controller description for MP000 to MP101                  */
             CRTCTLAPPC CTLD(MP101L) LINKTYPE(*SDLC)     +
                          LINE(MP101L) RMTNETID(APPN)    +
                          RMTCPNAME(MP101) STNADR(01)    +
                          NODETYPE(*ENDNODE)
/*********************************************************************/
             ENDPGM
LCLNETID(APPN)
The name of the local network is APPN. The remote system (KC000
in the example program) must specify this name as the remote
network identifier (RMTNETID) on the CRTCTLAPPC command. In
this example, it defaults to the network attribute.
LCLCPNAME(MP000)
The name assigned to the Minneapolis regional system local control
point is MP000. The remote systems specify this name as the
remote control point name (RMTCPNAME) on the CRTCTLAPPC
command.
LCLLOCNAME(MP000)
The default local location name is MP000. This name will be used
for the device description that is created by the APPN support.
NODETYPE(*NETNODE)
The local system (MP000) is an APPN network node.
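Taken together, the network attributes described above for the Minneapolis system
might be set with a single command, as in this sketch (in the example the local
network ID is said to default to the network attribute, so LCLNETID need not be
specified explicitly):

   /* Network attributes for the Minneapolis regional system     */
   CHGNETA LCLNETID(APPN) LCLCPNAME(MP000) +
           LCLLOCNAME(MP000) NODETYPE(*NETNODE)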
LIND(KC000L)
The name assigned to the line description is KC000L.
RSRCNAME(LIN021)
The physical communications port named LIN021 is defined.
CTLD(KC000L)
The name assigned to the controller description is KC000L.
LINKTYPE(*SDLC)
Because this controller is attached through an SDLC communi-
cations line, the value specified is *SDLC. This value must corre-
spond to the type of line defined by a Create Line Description
command (CRTLINxxx).
LINE(KC000L)
The name of the line description to which this controller is attached
is KC000L. This value must match a name specified by the LIND
parameter in a line description.
RMTCPNAME(KC000)
The remote control-point name is KC000. The name specified here
must match the name specified at the remote system for the local
control-point name. In the example, the name is specified at the
remote system (KC000) by the LCLCPNAME parameter on the
Change Network Attributes (CHGNETA) command.
STNADR(01)
The address assigned to the remote controller is hex 01.
NODETYPE(*NETNODE)
The remote system (KC000) is an APPN network node.
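The line and controller parameters described above for the nonswitched connection
to KC000 correspond to commands of this form (a sketch reconstructed from the
parameter descriptions; any parameters not described above are left at their
defaults):

   /* Nonswitched line to KC000                                  */
   CRTLINSDLC LIND(KC000L) RSRCNAME(LIN021)
   /* Controller for KC000 over the nonswitched line             */
   CRTCTLAPPC CTLD(KC000L) LINKTYPE(*SDLC)   +
              LINE(KC000L) RMTCPNAME(KC000)  +
              STNADR(01) NODETYPE(*NETNODE)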
LIND(KC000S)
The name assigned to the line description is KC000S.
RSRCNAME(LIN022)
The physical communications port named LIN022 is defined.
CNN(*SWTPP)
This is a switched line connection.
AUTOANS(*NO)
This system will not automatically answer an incoming call.
STNADR(01)
The address assigned to the local system is hex 01.
CTLD(KC000S)
The name assigned to the controller description is KC000S.
LINKTYPE(*SDLC)
Because this controller is attached through an SDLC communi-
cations line, the value specified is *SDLC. This value must corre-
spond to the type of line defined by a Create Line Description
command (CRTLINxxx).
SWITCHED(*YES)
This controller is attached to a switched SDLC line.
SWTLINLST(KC000S)
The name of the line description (for switched lines) to which this
controller can be attached is KC000S. In the example, there is only
one line (KC000S). This value must match a name specified by the
LIND parameter in a line description.
RMTNETID(APPN)
The name of the network in which the remote control point resides is
APPN.
RMTCPNAME(KC000)
The remote control-point name is KC000. The name specified here
must match the name specified at the remote system for the local
control-point name. In the example, the name is specified at the
remote system by the LCLCPNAME parameter on the CHGNETA
(Change Network Attributes) command.
INLCNN(*DIAL)
The initial connection is made by the AS/400 system either
answering an incoming call or placing a call.
CNNNBR(8165551111)
The connection (telephone) number for the remote Kansas City con-
troller is 8165551111.
STNADR(01)
The address assigned to the remote Kansas City controller is hex
01.
TMSGRPNBR(3)
The value (3) is to be used by the APPN support for transmission
group negotiation with the remote system.
The remote system must specify the same value for the trans-
mission group.
NODETYPE(*NETNODE)
The remote system (KC000) is an APPN network node.
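The switched backup connection described above corresponds to commands of
this form (a sketch reconstructed from the parameter descriptions; any parameters
not described above are left at their defaults):

   /* Switched backup line to KC000                              */
   CRTLINSDLC LIND(KC000S) RSRCNAME(LIN022) +
              CNN(*SWTPP) AUTOANS(*NO) STNADR(01)
   /* Controller for KC000 over the switched line                */
   CRTCTLAPPC CTLD(KC000S) LINKTYPE(*SDLC)      +
              SWITCHED(*YES) SWTLINLST(KC000S)  +
              RMTNETID(APPN) RMTCPNAME(KC000)   +
              INLCNN(*DIAL) CNNNBR(8165551111)  +
              STNADR(01) TMSGRPNBR(3)           +
              NODETYPE(*NETNODE)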
LCLNETID(APPN)
The name of the local network is APPN. The remote systems (the
Minneapolis network node in this example) must specify this name
as the remote network identifier (RMTNETID) on the CRTCTLAPPC
command.
LCLCPNAME(KC000)
The name assigned to the local control point is KC000. The remote
system specifies this name as the remote control point name
(RMTCPNAME) on the CRTCTLAPPC command.
NODETYPE(*NETNODE)
The local system (KC000) is an APPN network node.
LIND(MP000L)
The name assigned to the line description is MP000L.
RSRCNAME(LIN022)
The physical communications port named LIN022 is defined.
CTLD(MP000L)
The name assigned to the controller description is MP000L.
LINKTYPE(*SDLC)
Because this controller is attached through an SDLC communi-
cations line, the value specified is *SDLC. This value must corre-
spond to the type of line defined by a Create Line Description
command (CRTLINxxx).
LINE(MP000L)
The name of the line description to which this controller is attached
is MP000L. This value must match a name specified by the LIND
parameter in a line description.
RMTNETID(APPN)
The name of the network in which the remote system resides is
APPN.
RMTCPNAME(MP000)
The remote control-point name is MP000. The name specified here
must match the name specified at the remote system for the local
control-point name. In the example, the name is specified at the
Minneapolis region remote system (MP000) by the LCLCPNAME
parameter on the Change Network Attributes (CHGNETA)
command.
STNADR(01)
The address assigned to the remote controller is hex 01.
NODETYPE(*NETNODE)
The remote system (MP000) is an APPN network node.
LIND(MP000S)
The name assigned to the line description is MP000S.
RSRCNAME(LIN031)
The physical communications port named LIN031 is defined.
CNN(*SWTPP)
This is a switched line connection.
AUTOANS(*NO)
This system will not automatically answer an incoming call.
STNADR(01)
The address assigned to the local system is hex 01.
CTLD(MP000S)
The name assigned to the controller description is MP000S.
LINKTYPE(*SDLC)
Because this controller is attached through an SDLC communi-
cations line, the value specified is *SDLC. This value must corre-
spond to the type of line defined by a Create Line Description
command (CRTLINxxx).
SWITCHED(*YES)
This controller is attached to a switched SDLC line.
SWTLINLST(MP000S)
The name of the line description (for switched lines) to which this
controller can be attached is MP000S. In the example, there is only
one line (MP000S). This value must match a name specified by the
LIND parameter in a line description.
RMTNETID(APPN)
The name of the network in which the remote control point resides is
APPN.
RMTCPNAME(MP000)
The remote control-point name is MP000. The name specified here
must match the name specified at the remote regional system for
the local control-point name. In the example, the name is specified
at the remote Minneapolis regional system (MP000) by the
LCLCPNAME parameter on the Change Network Attributes
(CHGNETA) command.
INLCNN(*ANS)
The initial connection is made by the AS/400 system answering an
incoming call.
STNADR(01)
The address assigned to the remote Minneapolis controller is hex
01.
TMSGRPNBR(3)
The value (3) is to be used by the APPN support for transmission
group negotiation with the remote system. The remote system must
specify the same value for the transmission group.
NODETYPE(*NETNODE)
The remote system (MP000) is an APPN network node.
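On the Kansas City side, the parameters described above might be combined as
follows (a sketch; the network attributes and the switched connection back to
MP000 are shown, and the nonswitched counterpart follows the same pattern as
the Minneapolis example):

   /* Network attributes for the Kansas City regional system     */
   CHGNETA LCLNETID(APPN) LCLCPNAME(KC000) NODETYPE(*NETNODE)
   /* Switched backup line and controller to MP000               */
   CRTLINSDLC LIND(MP000S) RSRCNAME(LIN031) +
              CNN(*SWTPP) AUTOANS(*NO) STNADR(01)
   CRTCTLAPPC CTLD(MP000S) LINKTYPE(*SDLC)      +
              SWITCHED(*YES) SWTLINLST(MP000S)  +
              RMTNETID(APPN) RMTCPNAME(MP000)   +
              INLCNN(*ANS) STNADR(01)           +
              TMSGRPNBR(3) NODETYPE(*NETNODE)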
In an APPN network, an end node sends its alerts either to its serving network
node or to another system as specified by the alerts controller name (ALRCTLD)
parameter of the Change Network Attributes (CHGNETA) command. When your
system is an end node in the network and you turn on the alert status (ALRSTS)
parameter of the CHGNETA command, alerts are forwarded to a serving network
node.
You can define your system as a default focal point using the alert default focal
point (ALRDFTFP) parameter of the CHGNETA command. When your system is
defined to be a default focal point, the AS/400 system automatically adds network
node control points to the sphere of control using the APPN network topology data-
base. When the AS/400 system detects that a network node system has entered
the network, the system sends management services capabilities to the new control
point so that the control point sends alerts to your system (if no other focal point is
specified for the new network node system). The alert status (ALRSTS) parameter
of the CHGNETA command should be turned off so your system does not forward
alerts because it is the default focal point.
You can define your system as a primary focal point using the alert primary focal
point (ALRPRIFP) parameter of the CHGNETA command. When your system is
defined to be a primary focal point, you must explicitly define the control points that
are to be in your sphere of control. This set of control points is defined using the
Work with Sphere of Control (WRKSOC) command.
The WRKSOC command allows you to add network node control point systems to
the sphere of control and to delete existing control points.
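The two focal point choices described above might be set as follows (a sketch; a
given system would normally use only one of the two CHGNETA variations):

   /* Default focal point: network node control points are added */
   /* to the sphere of control automatically; turn off ALRSTS so */
   /* this system does not forward its own alerts                */
   CHGNETA ALRDFTFP(*YES) ALRSTS(*OFF)
   /* Primary focal point: the sphere of control must be defined */
   /* explicitly with the WRKSOC command                         */
   CHGNETA ALRPRIFP(*YES)
   WRKSOC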
          Control
     Opt  Point     Network ID   Current Status
     __   ________  *NETATR
     __   CH000     APPN         Delete pending
     __   KC000     APPN         Active - in sphere of control
     __   SL000     APPN         Add pending - in sphere of control
     __   NY000     APPN         Active - in sphere of control
Select option 1 (Add) on the Work with Sphere of Control (SOC) display, or use the
Add Sphere of Control Entry (ADDSOCE) command to add a system to your
sphere of control. To add a system to the sphere of control, type the control point
name and network ID of the new system.
Select option 4 (Remove) from the Work with Sphere of Control (SOC) display, or
use the Remove Sphere of Control Entry (RMVSOCE) command to delete systems
from the alert sphere of control. The systems are specified by network ID and
control point name.
Unless a default focal point is established for your network, a control point in the
sphere of control should not be removed from the sphere of control until another
focal point has started focal point services to that system.
The Display Sphere of Control Status (DSPSOCSTS) command shows the current
status of all systems in your sphere of control. This includes both systems that you
have defined using the WRKSOC command, if your system is defined to be a
primary focal point, and systems that the AS/400 system has added for you, if your
system is defined to be a default focal point.
Figure RV2W736-0 illustrates the alert configuration described below.
The CL command examples that follow are used to establish the system named
MP000 as a primary focal point for alerts handling. While this system may serve as
the primary focal point for several systems, this example only illustrates how one
other network node (KC000) is configured to forward alerts to the MP000 system
and how MP000 is set up to be the primary focal point that does not pass the alerts
on to another system. To configure alerts for this example, the database adminis-
trator would:
1. Create alerts at a network node.
2. Define a network node as the primary focal point.
3. Add network nodes to the primary focal point’s sphere of control.
4. Create alerts at end node systems.
Because a primary focal point is not active and has not included KC000 in its
sphere of control, any alerts created at the KC000 system are logged at the KC000
system and not forwarded to another system yet.
This system creates alerts locally by specifying *ON for the alert status (ALRSTS)
parameter. Also, this system logs alerts created locally and alerts received from
other systems when *ALL is specified for the alert logging (ALRLOGSTS) param-
eter on the CHGNETA command.
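For example, the attributes described above can be set in one command:

   /* Create alerts locally and log both local alerts and alerts */
   /* received from other systems                                */
   CHGNETA ALRSTS(*ON) ALRLOGSTS(*ALL)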
The Work with Sphere of Control (WRKSOC) command identifies the network
nodes from which MP000 receives alerts. In the example below, the KC000 system
is included in the MP000 system’s sphere of control using the Add Sphere of
Control Entry (ADDSOCE) command. The network identifier is specified as
*NETATR, and the control point name for KC000 is specified for the entry.
ADDSOCE ENTRY((*NETATR KC000))
The end node must begin creating alerts by specifying ALRSTS(*ON) for the
Change Network Attributes (CHGNETA) command. However, after its network node
is set up to forward alerts, the alerts sent by KC105 are forwarded by KC000 to the
focal point at MP000 without a database administrator having to specify how KC105
system alerts are handled.
| When two or more systems are set up to access each other’s databases, it may be
| important to make sure that the other side of the communications line is the
| intended location and not an intruder. For DRDA access to a remote relational
| database, the AS/400 system’s advanced program-to-program communications
| (APPC) and Advanced Peer-to-Peer Networking (APPN) configuration capabilities
| provide options for this network-level security.
| The second concern for the distributed relational database administrator is that data
| security is maintained by the system that stores the data. In a distributed relational
| database, the user has to be properly authorized to have access to the database
| (according to the security level of the system) whether the database is local or
| remote. Distributed relational database network users must be properly identified
| with a user ID on the application server (AS) for any jobs they run on the AS.
| DRDA support using both APPC/APPN and TCP/IP communications protocols pro-
| vides for the sending of user IDs and passwords along with connection requests.
| This chapter discusses security topics that are related to communications and
| DRDA access to remote relational databases. It discusses the significant differ-
| ences between conversation-level security in an APPC network connection and the
| corresponding level of security for a TCP/IP connection initiated by a DRDA appli-
| cation. In remaining security discussions, the term user also includes remote users
| starting communications jobs.
For a description of general AS/400 security concepts, see the Security - Basic
book.
An AR secures its objects and relational database to ensure only authorized users
have access to distributed relational database programs. This is done using normal
AS/400 object authorization to identify users and specify what each user (or group
of users) is allowed to do with an object. Alternatively, authority to tables, views,
and SQL packages can be granted or revoked using the SQL GRANT and
REVOKE statements. Providing levels of authority to SQL objects on the AR helps
ensure that only authorized users have access to an SQL application that accesses
data on another system.
The level of system security in effect on the AS determines whether a request from
an AR is accepted and whether the remote user is authorized to objects on the AS.
For APPC conversations, when the system is using level 10 security, an AS/400
system connects to the network as a nonsecure system. The AS/400 system does
not validate the identity of a remote system during session establishment and does
not require conversation security on incoming program start requests. For level 10,
security information configured for the APPC remote location is ignored and is not
used during session or conversation establishment. If a user profile does not exist
on the AS/400 system, one is created.
| When the system is using security level 20 or above, an AS/400 system connects
| to the network as a secure system. The AS/400 system can then provide both
| session (except for TCP/IP connections) and conversation-level security functions.
Having system security set at the same level across the systems in your network
makes the task of security administration easier. An AS controls whether the
session and conversation can be established by specifying what is expected from
the AR to establish a session. For example, if the security level on the AR is set at
10 and the security level on the AS is above 10, the appropriate information may
not be sent and the session might not be established without changing security ele-
ments on one of the systems.
For more information on security levels, see the Security - Reference book and
security consideration topics in the APPC Programming or the APPN Support
books.
Session level security verifies the identity of the two systems attempting to establish
a communications session. Session level security is established during communi-
cations configuration in one of two ways, depending on whether the network uses
APPN.
If you specify APPN(*NO) on the Create Controller Description (CRTCTLAPPC)
command, communications devices are created manually. APPC devices are
created by using the Create Device Description (CRTDEVAPPC) command.
The LOCPWD parameter on the CRTDEVAPPC command specifies if a pass-
word is used to verify the remote location.
Location security establishes what security information each location requires from
the other location for each remotely initiated APPC conversation. Location security
is established during communication configuration in one of two ways, depending
on whether APPN is used.
In an APPC network, the SECURELOC parameter on the CRTDEVAPPC
command specifies whether the local system allows the remote system to verify
security. Specifying *YES for SECURELOC means that the local system allows
the remote system to verify user security information. If you specify *NO on the
SECURELOC parameter, the local system verifies security information for the
incoming request.
In an APPN network, the secure-location value on the remote location list veri-
fies security. Specifying *YES for secure-location on an APPN remote config-
uration list means that the local system allows the remote system to verify user
security information. If you specify *NO, the local system verifies security infor-
mation for the incoming request.
Note: APPN creates location information based on the first device description
that is varied on for the remote network ID, remote location name, and
local location name pair. To avoid using security information that cannot
be predicted, you must ensure that all of the device descriptions with
the same remote network ID, remote location name, and local location
name pair contain exactly the same security information.
For more information on session and location security issues, see the consider-
ations chapter in the APPC Programming book.
The remote location list is created with the CRTCFGL command, and it contains a
list of all remote locations, their location password, and whether the remote location
is secure. There is one system-wide remote location configuration list on an AS/400
system. A central site AS/400 system can create location lists for remote AS/400
systems by sending them a control language (CL) program.
Changes can be made to a remote configuration list by using the Change Configura-
tion List (CHGCFGL) command; however, they do not take effect until all devices
for that location are varied off.
For more information on configuration lists, see the APPN Support book.
The AS/400 system supports all three SNA levels of conversation security. The AS
controls the SNA conversation levels used for the conversation. The SECURELOC
parameter on the APPC device description or the secure location value on the
APPN remote location list determines what is accepted from the AR for the conver-
sation.
For the SECURITY(PGM) level, an AS expects both a user ID and password from
the AR for the conversation. To allow a conversation only if both a user ID and
password are sent, the DB2/400 AS must be set up so the SECURELOC param-
eter or the secure location value is *NO and no default user profile is specified for
the communications subsystem. The password is validated when the conversation
is established and is ignored for any following uses of that conversation.
Using Passwords
For DRDA access to remote relational databases, once a conversation is estab-
lished at the SECURITY(PGM) level, you do not need to enter a password again. If
you end a connection with a RELEASE, DISCONNECT, or CONNECT statement
when running with the RUW connection management method, your conversation
with the first AS may or may not be dropped, depending on the kind of AS
you are connected to and your AR job attributes (for the specific rules, see “Con-
trolling DDM Conversations” on page 6-10). If the conversation to the first AS is not
dropped, it remains unused while you are connected to the second AS. If you
connect again to the first AS and the conversation is unused, the conversation
becomes active again without you needing to enter your user ID and password. On
this second use of the conversation, your password is also not validated again.
The OS/400, DB2, and SQL/DS licensed programs only accept passwords whose
alphanumeric characters are in upper case. If you enter lowercase characters in
your password when you connect, your connection is rejected.
Figure 4-1 shows all of the possible combinations of the elements that control SNA
SECURITY(PGM) on the AS/400 system. A “Y” in any of the columns indicates that
the element is present or the condition is met. An “M” in the PWD column indicates
that the security manager retrieves the user's password and sends a protected
(encrypted) password if password protection is active. If a protected password is
not sent, no password is sent. A protected password is a character string that
APPC substitutes for a user password when it starts a conversation. Protected
passwords can be used only when the systems of both partners support password
protection and when the password is created on a system that runs OS/400
Version 2 Release 2 or later.
To avoid having to use default user profiles, create a user profile on the AS for
every AR user that needs access to the distributed relational database objects. If
you decide to use a default user profile, however, make sure that users are not
| Two types of security mechanisms are supported by the current DB2 for AS/400
| implementation of DRDA over TCP/IP: user ID only, and user ID with password.
| These mechanisms are roughly equivalent to the APPC conversation security types
| of SECURITY(SAME) and SECURITY(PGM). There is nothing that corresponds to
| SECURITY(NONE) for DRDA over TCP/IP.
| At the application server, the default security is user ID with password. This means
| that, as the system is installed, inbound TCP/IP connect requests must have a
| password accompanying the user ID under which the server job is to run. The
| CHGDDMTCPA command can be used to specify that the password is not
| required. To make this change, enter the following command:
| CHGDDMTCPA PWDRQD(*NO)
| You must have *IOSYSCFG special authority to use this command.
| On the application requester (client) side, there are two methods that can be used
| to send a password along with the user ID on TCP/IP connect requests. In the
| absence of both of these methods, only a user ID will be sent. In that case, if the
| AS is set to require a password, the error SQ30082 (A connection attempt failed
| with reason code 17) will be posted in the job log.
| The first way to send a password is to use the USER/USING form of the SQL
| CONNECT statement. The syntax is: CONNECT TO rdbname USER userid USING
| 'password', where the lowercase words represent the appropriate connect parame-
| ters. In a program using embedded SQL, the userid and password values can be
| contained in host variables, as in the following example:
| EXEC SQL CONNECT TO :locn USER :userid USING :pw;
| The other way that a password can be provided to send on a connect request over
| TCP/IP is by use of a server authorization entry. Associated with every user profile
| on the system is a server authorization list. By default the list is empty, but with the
| ADDSVRAUTE command, entries can be added. When a DRDA connection over
| TCP/IP is attempted, DB2 for AS/400 checks the server authorization list for the
| user profile under which the AR job is running. If a match is found between the
| RDB name on the CONNECT statement and the SERVER name in an authori-
| zation entry, the associated USRID parameter in the entry is used for the con-
| nection user ID, and if a PASSWORD parameter is stored in the entry, that
| password is also sent on the connect request.
| The USRPRF parameter specifies the user profile under which the application
| requester job runs. The SERVER parameter specifies the remote RDB name. It is
| very important to note that for use with DRDA, the value of the SERVER parameter
| must be uppercase. The USRID parameter specifies the user profile under which
| the server job will run. The PASSWORD parameter specifies the password for the
| user profile at the server.
| If the USRPRF parameter is omitted, it will default to the user profile under which
| the ADDSVRAUTE command is being run. If the USRID parameter is omitted, it will
| default to the value of the USRPRF parameter. If the PASSWORD parameter is
| omitted, or if the QRETSVRSEC value is 0, no password will be stored in the entry;
| when a connect attempt is made using the entry, the security mechanism used will
| be user ID only.
| If a server authorization entry exists for an RDB, and the USER/USING form of the
| CONNECT statement is also used, the user ID and password provided with the
| CONNECT statement are used.
The DDMACC parameter, initially set to *OBJAUT, can be changed to one of the
previously described values by using the Change Network Attributes (CHGNETA)
command, and its current value can be displayed by the Display Network Attributes
(DSPNETA) command. You can also get the value in a CL program by using the
Retrieve Network Attributes (RTVNETA) command.
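For example, the following commands set the attribute back to its shipped default and then display the current network attributes (a sketch of typical usage):

```
CHGNETA DDMACC(*OBJAUT)    /* Set the DDM access attribute        */
DSPNETA                    /* Display current network attributes  */
```

In a CL program, RTVNETA can return the current DDMACC value into a declared *CHAR 10 variable instead of displaying it.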
For a description of the DDMACC parameter, see the description of the Change
Network Attributes (CHGNETA) command in the Communications Management
book.
The authority checked for SQL statements depends on whether the statement is
static, dynamic, or being run interactively.
For interactive SQL statements, authority is checked against the authority of the
person processing the statement. Adopted authority is not used for interactive SQL
statements.
Users running a distributed relational database application need authority to run the
SQL package on the AS. The GRANT EXECUTE ON PACKAGE statement allows
the owner of an SQL package, or any user with administrative privileges to it, to
grant specified users the privilege to run the statements in an SQL package. You
can use this statement to give all users authorized to the AS, or a list of one or
more user profiles on the AS, the privilege to run statements in an SQL package.
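For example, the following statement grants two users (the user profile names JONES and SMITH are hypothetical) the privilege to run the statements in the SPIFFY.PARTS1 package used elsewhere in this chapter:

```
GRANT EXECUTE
  ON PACKAGE SPIFFY.PARTS1
  TO JONES, SMITH
```

Specifying TO PUBLIC instead grants the privilege to all users authorized to the AS.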
If you grant the same privilege to the same user more than once, revoking that
privilege from that user nullifies all of those grants. If you revoke an EXECUTE privi-
lege on an SQL package that you previously granted to a user, it nullifies any grant
of the EXECUTE privilege on that SQL package to that user, regardless of who granted it. The
following shows a sample statement:
REVOKE EXECUTE
ON PACKAGE SPIFFY.PARTS1
FROM PUBLIC
You can also grant authority to an SQL package using the GRTOBJAUT command
or revoke authority to an SQL package using the RVKOBJAUT command.
An SQL package from an unlike system always adopts the package owner’s
authority for all static SQL statements in the package. An SQL package created on
an AS/400 system using the CRTSQLxxx command with OPTION(*SQL) specified,
also adopts the package owner’s authority for all static SQL statements in the
package.
The distributed relational database administrator must also consider how communi-
cations are established between ARs on the network and the application servers.
Some questions that need to be resolved might include:
Should a default user profile exist on an AS?
Maintaining many user profiles throughout a network can be difficult. However,
creating a default user profile in a communications subsystem entry opens the
AS to incoming communications requests if the AS is not a secure location. In
some cases this might be an acceptable situation; in other cases a default user
profile might reduce the system protection capabilities too far to satisfy security
requirements.
For example, systems that serve many ARs need a high level of security. If
their databases were lost or damaged, the entire network could be affected.
Since it is possible to create user profiles or group profiles on an AS that identify
all potential users needing access, it is unnecessary for the database
On the AS/400 system, all user jobs operate in an environment called a sub-
system, defined by a subsystem description, where the system coordinates proc-
essing and resources. Users can control a group of jobs with common
characteristics independently of other jobs if the jobs are placed in the same sub-
system. You can easily start and end subsystems as needed to support the work
being done and to maintain the performance characteristics you desire.
The basic types of jobs that run on the system are interactive, communications,
batch, spooled, autostart, and prestart.
An interactive job starts when you sign on a work station and ends when you sign
off. A communications batch job is a job started from a program start request from
another system. A non-communications batch job is started from a job queue. Job
queues are not used when starting a communications batch job. Spooling functions
are available for both input and output. Autostart jobs perform repetitive work or
one-time initialization work. Autostart jobs are associated with a particular sub-
system, and each time the subsystem is started, the autostart jobs associated with
it are started. Prestart jobs are jobs that start running before the remote program
sends a program start request.
If you change your configuration to use the QCTL controlling subsystem, it starts
automatically when the system is started. An automatically started job in QCTL
starts the other subsystems.
You can change your subsystem configuration from QBASE to QCTL by changing
the system value QCTLSBSD (controlling subsystem) to QCTL on the Change
System Value (CHGSYSVAL) command and starting the system again.
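For example, the change might be made as follows (a sketch; QCTL is the IBM-supplied controlling subsystem description):

```
CHGSYSVAL SYSVAL(QCTLSBSD) VALUE(QCTL)
```

The new controlling subsystem takes effect the next time the system is started.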
You can change the IBM-supplied subsystem descriptions or any user-created sub-
system descriptions by using the Change Subsystem Description (CHGSBSD)
command. You can use this command to change the storage pool size, storage
pool activity level, and the maximum number of jobs for the subsystem description
of an active subsystem.
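For example, the following command changes storage pool 2 and the maximum number of jobs for the QCMN subsystem description (the pool size in kilobytes, activity level, and job limit shown are illustrative values only, not recommendations):

```
CHGSBSD SBSD(QSYS/QCMN) POOLS((2 1500 10)) MAXJOBS(50)
```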
For more information about work management, subsystems, and jobs on the
AS/400 system, see the Work Management book. For more information about work
management for communications and communications subsystems, see the Com-
munications Management book.
When the OS/400 licensed program is first installed, QBASE is the default control-
ling subsystem. As the controlling subsystem, QBASE allocates system resources
between the two subsystems QBASE and QSPL. Interactive jobs, communications
jobs, batch jobs, and so on, allocate resources within the QBASE subsystem. Only
spooled jobs are managed under a different subsystem, QSPL. This means you
have less control of system resources for handling communications jobs versus
interactive jobs than you would using the QCTL controlling subsystem.
Using the QCTL subsystem configuration, you have control of four additional sub-
systems for which the system has allocated storage pools and other system
resources. Changing the QCTL subsystems or creating your own subsystems
gives you even more flexibility and control of your processing resources.
Different system requirements for some of the systems in the Spiffy Corporation
distributed relational database network may require different work management
environments for best network efficiency. The following discussions show how the
distributed relational database administrator can plan a work management sub-
system to meet the needs of each AS/400 system in the Spiffy distributed relational
database network.
A large dealership, on the other hand, probably manages its work through the
QCTL subsystem, because of the different work loads associated with the different
types of jobs.
The number of service orders booked each day can be high, requiring a query to
the local relational database for parts or to the regional center AS for parts not in
stock at the dealership. This type of activity starts interactive jobs on their system.
The dealership also starts a number of interactive jobs that are not distributed rela-
tional database related jobs, such as enterprise personnel record keeping, mar-
keting and sales planning and reporting, and so on. Requests to this dealership
from the regional center for performance information or to update inventory or work
plans are communications jobs that the dealership wants to manage in a separate
For a large dealership, the QCTL configuration with separate subsystem manage-
ment for QINTER and QCMN provides more flexibility and control for managing its
system work environment. In this example, interactive and communications jobs at
the dealership system can be allocated more of the system resources than other
types of jobs. Additionally, if communications jobs are typically fewer than interac-
tive jobs for this system, resources can be targeted toward interactive jobs, by
changing the subsystem descriptions for both QINTER and QCMN.
The regional center is also an AS for each dealership when a dealership needs to
query the regional relational database for a part not in stock at the dealership, a
service plan for a specific service job (such as rebuilding a steering rack), or for
technical bulletins or recall notifications since the last update to the dealership rela-
tional database. These communications jobs can all be managed in QCMN.
The KC000 system serves several very large dealerships that handle hundreds of
service orders daily, and a few small dealerships that handle fewer than 20 service
orders each day. The remaining medium-sized dealerships each handle about 100
service orders daily. One problem that presents itself to the distributed relational
database administrator is how to fairly handle all the communications requests to
the KC000 system from other systems. A large dealership could control QCMN
resources with its requests so that response times and costs to other systems in
the network are unsatisfactory.
The administrator can add a routing entry to change the class (and therefore the
priority) of a DRDA/DDM job by specifying the class that controls the priority of the
job and by specifying QCNTEDDM on the CMPVAL parameter, as in the following
example:
ADDRTGE SBSD(QCMN) SEQNBR(280) CLS(QINTER) CMPVAL('QCNTEDDM' 37)
For more information on work management topics for the AS/400 system, see the
Work Management book. For more information about changing attributes, work
entries and routing entries for communications, see the Communications Manage-
ment book.
Each AS/400 system in the distributed relational database network must have a
relational database directory configured. There is only one relational database
directory on an AS/400 system. Each AR in the distributed relational database
network must have an entry in its relational database directory for its local relational
database and one for each remote relational database the AR accesses. Any
AS/400 system in the distributed relational database network that acts only as an
AS must have an entry in its relational database directory for the local relational
database, but does not need to include the relational database names of other
remote relational databases in its directory.
| The relational database name assigned to the local relational database must be
| unique. That is, it should be different from any other relational database in the
| network. Names assigned to other relational databases in the directory identify
| remote relational databases, and must match the name an AS uses to identify its
| local relational database. If the local RDB name entry at an AS does not exist when
| it is needed, one will be created automatically in the directory. The name used will
| be the current system name displayed by the DSPNETA command.
| Add RDB Directory Entry (ADDRDBDIRE)
| In this example, an entry is made to add a relational database named MP311 for a
| system with a remote location name of MP311 to the relational database directory
| on the local system. The remote location name does not have to be defined before
| a relational database directory entry using it is created. However, the remote
| location name must be defined before the relational database directory entry is
| used in an application. The relational database name (RDB) parameter and the
| remote location name (RMTLOCNAME) parameter are required for the
| ADDRDBDIRE command. The second element of the RMTLOCNAME parameter
| defaults to *SNA. The descriptive text (TEXT) parameter is optional. As shown in
| this example, it is a good idea to make the relational database name the same as
| the system name or location name specified for this system in your network config-
| uration. This can help you identify a database name and correlate it to a particular
| system in your distributed relational database network, especially if your network is
| complex.
To see the other optional parameters on this command, press F10 on the Add RDB
Directory Entry (ADDRDBDIRE) display. These optional parameters are shown
below.
|                    Add RDB Directory Entry (ADDRDBDIRE)
|
| Device:
|   APPC device description . . .   *LOC        Name, *LOC
|   Local location  . . . . . . . . *LOC        Name, *LOC, *NETATR
|   Remote network identifier . . . *LOC        Name, *LOC, *NETATR, *NONE
|   Mode  . . . . . . . . . . . . . *NETATR     Name, *NETATR
|   Transaction program . . . . . . *DRDA       Character value, *DRDA
The system provides default values for the additional ADDRDBDIRE command
parameters:
Device (DEV)
Local location (LCLLOCNAME)
Remote network identifier (RMTNETID)
Mode (MODE)
Transaction program (TNSPGM)
You can change any of these default values on the ADDRDBDIRE command. For
example, you may have to change the TNSPGM parameter to communicate with
an SQL/DS system. By default for SQL/DS support, the TNSPGM is the name of
the SQL/DS database to which you want to connect. The default TNSPGM param-
eter value for DRDA (*DRDA) is X'07F6C4C2'. For more information on transaction
program names, see:
| “Setting QCNTSRVC as a TPN on a DB2/400 Application Requester” on
| page 9-31.
| “Setting QCNTSRVC as a TPN on a DB2 for VM Application Requester” on
| page 9-32.
| “Setting QCNTSRVC as a TPN on a DB2 for OS/390 Application Requester” on
| page 9-32.
| “Setting QCNTSRVC as a TPN on a DB2 Connect Application Requester” on
| page 9-32.
| The Add RDB Directory Entry (ADDRDBDIRE) display shown below demonstrates
| how the panel changes if you enter *IP as the second element of the
| RMTLOCNAME parameter, and what typical entries would look like for an RDB that
| uses TCP/IP.
| Note that instead of specifying MP311.spiffy.com for the RMTLOCNAME, you could
| have specified the IP address (for example, '9.5.25.176'). For IP connections to
| another AS/400, leave the PORT parameter value set at the default, *DRDA. For
| connections to an IBM Universal Database (UDB) server, for example, you might
| need to set the port to a number such as 30000. Refer to the product documenta-
| tion for the server you are using. If you have a valid service name defined for a
| DRDA port at some location, you can also use that instead of a number. Do not
| use the 'drda' service name, however, since it will be slower on connects than
| using *DRDA, which accomplishes the same thing.
The Work with RDB Directory Entries display provides options that allow you to
add, change, display, or remove a relational database directory entry.
|                      Work with RDB Directory Entries
|
| Position to . . . . . .
|
|          Relational   Remote
| Option   Database     Location   Text
| __       KC000        KC000      Kansas City region database
| __       MP000        *LOCAL     Minneapolis region database
| __       MP101        MP101      Dealer database MP101
| __       MP102        MP102      Dealer database MP102
| __       MP211        MP211      Dealer database MP211
| __       MP215        MP215      Dealer database MP215
| 4_       MP311        MP311      Dealer database MP311
As shown on the display, option 4 can be used to remove an entry from the rela-
tional database directory on the local system. If you remove an entry, you receive
another display that allows you to confirm the remove request for the specified
entry or select a different relational database directory entry. If you use the Remove
Relational Database Directory Entry (RMVRDBDIRE) command, you have the option of
specifying a specific relational database name, generic names, all directory entries,
or just the remote entries.
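For example, the MP311 entry shown on the display could be removed with the following command:

```
RMVRDBDIRE RDB(MP311)
```

The RDB parameter also accepts generic names and special values that remove all entries or only the remote entries; see the command prompt for the exact values on your release.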
You have the option on the Work with RDB Directory Entries display to display the
details of an entry. Output from the Work with RDB Directory Entries display goes
only to the display. However, if you use the Display RDB Directory Entries
(DSPRDBDIRE) command,
You have the option on the Work with RDB Directory Entries display to change an
entry in the relational database directory. You can also use the Change Relational
Database Directory Entries (CHGRDBDIRE) command to make changes to an
entry in the directory. You can change any of the optional command parameters
and the remote location name of the system. You cannot change a relational data-
base name for a directory entry. To change the name of a relational database in
the directory, remove the entry for the relational database and add an entry for the
new database name.
| You should not make it a practice to remove the local RDB directory entry or
| change its name. Normally, it is never necessary. However, if you must change
| the name of the local RDB entry, the procedure includes doing the remove and add
| as explained in the previous paragraph. But there are special considerations
| involved with removing the local entry, because that entry contains some system-
| wide DRDA attribute information. If you try to remove the entry, you will get
| message CPA3E01 (Removing or changing *LOCAL directory entry may cause loss
| of configuration data (C G)), and you will be given the opportunity to cancel the
| operation or continue. The message text goes on to tell you that the entry is used
| to store configuration data entered with the CHGDDMTCPA command. If the
| *LOCAL entry is removed, configuration data may be destroyed, and the default
| configuration values will be in effect. If the default values are not satisfactory, con-
| figuration data will have to be re-entered with the CHGDDMTCPA command.
| Before removing the entry, you may want to record the values specified in the
| CHGDDMTCPA command so that they can be restored after the *LOCAL entry is
| deleted and added with the correct local RDB name.
A simple relationship to consider is the one between two regional offices as shown
below:
[Figure RV2W737-0]
The relational database directory for each regional office must contain an entry for
the local relational database and an entry for the remote relational database
because each system is both an AR and an AS. The commands to create the rela-
tional database directory for the MP000 system are:
ADDRDBDIRE RDB(MP000) RMTLOCNAME(*LOCAL) TEXT('Minneapolis region database')
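The companion command adds the remote entry for the Kansas City database; the same entry appears in the CL program later in this section:

```
ADDRDBDIRE RDB(KC000) RMTLOCNAME(KC000) +
             TEXT('Kansas City region database')
```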
| In the above example, the MP000 system identifies itself as the local relational
| database by specifying *LOCAL for the RMTLOCNAME parameter. There is only
| one relational database on an AS/400 system. You can simplify identification of
| your network relational databases if you make the relational database names in the
| directory the same as the system name and the local location name for the local
| system, and the same as the remote location name for the remote system.
| Note: The system name is specified on the SYSNAME parameter of the Change
| Network Attributes (CHGNETA) command. The local system is identified on
| the LCLLOCNAME parameter of the CHGNETA command during communi-
| cations configuration, as shown in the example on page 3-10. Remote
| locations using SNA (APPC) are identified with the RMTCPNAME param-
| eter on the Create Controller (CRTCTLAPPC) command during communi-
| cations configuration as shown on page 3-11. Using the same names for
| system names, network locations, and database names can help avoid con-
| fusion, particularly in complex networks.
The corresponding entries for the KC000 system relational database directory are:
ADDRDBDIRE RDB(KC000) RMTLOCNAME(*LOCAL) TEXT('Kansas City region database')
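By symmetry, the KC000 system also needs a remote entry for the Minneapolis database. It would plausibly look like the following, assuming (as this chapter recommends) that the remote location name matches the database name:

```
ADDRDBDIRE RDB(MP000) RMTLOCNAME(MP000) +
             TEXT('Minneapolis region database')
```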
[Figure RV2W738-0]
A sample of the commands used to complete the MP000 relational database direc-
tory to include all its dealer databases is as follows:
PGM
ADDRDBDIRE RDB(MP000) RMTLOCNAME(*LOCAL) +
             TEXT('Minneapolis region database')
ADDRDBDIRE RDB(KC000) RMTLOCNAME(KC000) +
             TEXT('Kansas City region database')
ADDRDBDIRE RDB(MP101) RMTLOCNAME(MP101) +
             TEXT('Dealer database MP101')
ADDRDBDIRE RDB(MP102) RMTLOCNAME(MP102) +
             TEXT('Dealer database MP102')
.
.
.
ADDRDBDIRE RDB(MP215) RMTLOCNAME(MP215) +
             TEXT('Dealer database MP215')
ENDPGM
In the above example, each of the region dealerships is included in the Minneapolis
relational database directory as a remote relational database.
Since each dealership can serve as an AR to MP000 and to other dealership appli-
cation servers, each dealership must have a relational database directory that has
an entry for itself as the local relational database and the regional office and all
other dealers as remote relational databases. The database administrator has
several options to create a relational database directory at each dealership system.
The method that uses the most time and is most prone to error is to create a rela-
tional database directory at each system by using the ADDRDBDIRE command to
create each directory entry on all systems that are part of the MP000 distributed
relational database network.
A better alternative is to create a control language (CL) program like the one shown
in the above example for the MP000 system. The distributed relational database
administrator can send that CL program to each dealership system, where it can be
tailored and run to build that system's directory.
A third method is to write a program that reads the relational database directory
information sent to an output file as a result of using the Display Relational Data-
base Directory Entry (DSPRDBDIRE) command. This program can be distributed to
the dealerships, along with the output file containing the relational database direc-
tory entries for the MP000 system. Each system could read the MP000 output file
to create a local relational database directory. The Change Relational Database
Directory Entry (CHGRDBDIRE) command can then be used to customize the
MP000 system directory for the local system. For more information about using an
output file to create relational database directory entries, see “Saving and Restoring
Relational Database Directories” on page 7-11.
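For example, the directory entries can be directed to an output file with a command like the following (the library and file names are illustrative):

```
DSPRDBDIRE OUTPUT(*OUTFILE) OUTFILE(MYLIB/RDBOUT)
```

The program distributed to the dealerships can then read MYLIB/RDBOUT and issue an ADDRDBDIRE command for each record.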
| Setting Up Security
| DRDA security is covered in Chapter 4, “Security for an AS/400 Distributed Rela-
| tional Database” on page 4-1, but for the sake of completeness, it is mentioned
| here as something that should be considered before using DRDA, or in converting
| your network from the use of APPC to TCP/IP. Security set up for TCP/IP is quite
| different from what is required for APPC. One thing to be aware of is the lack of the
| 'secure location' concept that APPC has. Because a TCP/IP server cannot fully
| trust that a client system is who it says it is, the use of passwords on connect
| requests is more important. To make it easier to send passwords on connect
| requests, the use of server authorization lists associated with specific user profiles
| has been introduced with TCP/IP support. Entries in server authorization lists can
| be maintained by use of the xxxSVRAUTHE commands described in Chapter 4,
| “Security for an AS/400 Distributed Relational Database” on page 4-1 and in the
| CL Reference (where xxx represents ADD, CHG, and RMV). An alternative to the
| use of server authorization entries is to use the USER/USING form of the SQL
| CONNECT statement to send passwords on connect requests.
| Setup at the server side includes deciding if passwords are required for inbound
| connect requests or not. The default setting is that they are. That can be changed
| by use of the CHGDDMTCPA CL command.
| But there are other parameters that you may want to adjust to tune the server for
| your environment. These include the initial number of prestart jobs to start, the
| maximum number of jobs, threshold when to start more, and so forth. See “Man-
| aging the TCP/IP Server” on page 6-17 for more information on this subject.
| Normally, SQL packages are created automatically at the AS for users of STRSQL.
| However, a problem can occur because the initial connection for STRSQL is to the
| local system, and that connection is protected by two-phase commit protocols. If a
| subsequent connection is made to a system that is only one-phase commit
| capable, then that connection is read-only. When an attempt is made to automat-
| ically create a package over such a connection, it fails because the creation of a
| package is considered an update, and cannot be done over a read-only connection.
| The solution to this is to get rid of the connection to the local database before con-
| necting to the remote AS. This can be done by doing a RELEASE ALL command
| followed by a COMMIT. Then the connection to the remote system can be made
| and since it is the first connection, updates can be made over it.
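The sequence of SQL statements might look like this (KC105 stands for the remote relational database):

RELEASE ALL
COMMIT
CONNECT TO KC105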
| DRDA support over TCP/IP does not support DDM source (client) access over
| TCP/IP. That is, you cannot create a DDM file on AS/400 that uses TCP/IP. This
| also means that you cannot run a SBMRMTCMD on an AS/400 over a TCP/IP con-
| nection. However, the new TCP/IP support does allow PC clients using DDM to
| access DB2 for AS/400 as a DDM server over TCP/IP, and the RUNRMTCMD can
| possibly be used as a substitute for SBMRMTCMD over TCP/IP.
This command creates a DDM file named KC105TST and stores it in the TEST
library on the source system. This DDM file uses the remote location KC105 to
access a remote file named INVENT stored in the SPIFFY library on the target
AS/400 system.
You can use options on the Work with DDM Files display to change, delete, display
or create DDM files. For more information about using DDM files, see the Distrib-
uted Data Management book.
Consider a situation in which a Spiffy regional center needs to add inventory items
to a dealership’s inventory table on a periodic basis as regular inventory shipments
are made from the regional center to the dealership.
INSERT INTO SPIFFY.INVENT
(PART,
DESC,
QTY,
PRICE)
VALUES
('1234567',
'LUG NUT',
25,
1.15 )
For each item on the regular shipment, an SQL INSERT statement places a row in
the inventory table for the dealership. In the above example, if 15 different items
were shipped to the dealership, the application at the regional office could include
15 SQL INSERT statements or a single SQL INSERT statement using host vari-
ables.
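With host variables, the single INSERT statement might look similar to the following sketch (the host variable names are illustrative; the program sets them and runs the statement once for each item shipped):

INSERT INTO SPIFFY.INVENT
  (PART, DESC, QTY, PRICE)
VALUES
  (:PART, :DESC, :QTY, :PRICE)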
In this example, the regional center is using an SQL application to load data in to a
table at an AS. Run-time support for SQL is provided in the OS/400 licensed
program, so the AS does not need the DB2/400 Query Manager and SQL Develop-
ment Kit licensed program. However, the DB2/400 Query Manager and SQL Devel-
opment Kit licensed program is required to write the application. For more
information on the SQL programming language, see the DB2 for AS/400 SQL Pro-
gramming and the DB2 for AS/400 SQL Reference books.
Create a source member INVLOAD in the source physical file INVLOAD that contains
the following SQL statement:
INSERT INTO SPIFFY/INVENT
(PART, DESC, QTY, PRICE)
VALUES
(&PARTVALUE, &DESCVALUE, &QTYVALUE, &PRICEVALUE)
The following CL command places the INSERT SQL statement results into the
INVENT table in the SPIFFY collection. Use of variables in the query
(&PARTVALUE, &DESCVALUE, and so on) allows you to enter the desired values
as part of the STRQMQRY call, rather than requiring that you create the query
management query again for each row.
STRQMQRY QMQRY(INVLOAD) RDB(KC000)
  SETVAR((PARTVALUE '''1234567''') (DESCVALUE '''Lug Nut''')
  (QTYVALUE 25) (PRICEVALUE 1.15))
The query management function is dynamic, which means its access paths are built
at run time instead of when a program is compiled. For this reason the Query
Management/400 function is not as efficient for loading data into a table as an SQL
application. However, you need the DB2/400 Query Manager and SQL Develop-
ment Kit product to write an application; run-time support for SQL and query man-
agement is part of the OS/400 licensed program.
For more information on the query management function, see the DB2 for AS/400
Query Management Programming book.
For more information on the DFU program generator, see the ADTS/400: Data File
Utility book.
Some alternatives for moving data from one AS/400 system to another are:
User-written application programs
Interactive SQL
Query Management/400 functions
Copy to and from tape or diskette devices
Copy file commands with DDM
The network file commands
AS/400 system save and restore commands
Using an interactive SQL query, the results of a query can be placed in a database
file on the local system. If a commitment control level is specified for the interactive
SQL process, it applies to the AS; the database file on the local system is under a
commitment control level of *NONE.
Consider the situation in which the KC105 dealership is transferring its entire stock
of part number ‘1234567’ to KC110. KC110 queries the KC105 database for the
part they acquire from KC105. The result of this inventory query is returned to a
database file that already exists on the KC110 system. This is the process you can
use to complete this task:
Use the Start SQL (STRSQL) command to get the interactive SQL display. Before
you enter any SQL statement (other than a CONNECT) for the new database,
specify that the results of this operation are sent to a database file on the local
system by doing the following steps:
1. Select the Services option from the Enter SQL Statements display.
2. Select the Change Session Attributes option from the Services display.
3. Enter the Select Output Device option from the Session Attributes Display.
4. Type a 3 for a database file in the Output device field and press Enter. A
display is shown on which you can name the database file for the output.
5. Specify the name of the database file that is to receive the results.
When the database name is specified, you can begin your interactive SQL proc-
essing as shown in the example below.
Type SQL statement, press Enter.
Current connection is to relational database KC000.
CONNECT TO KC105
Current connection is to relational database KC105.
====> SELECT * FROM INVENTORY
        WHERE PART = '1234567'

                                                                  Bottom
F3=Exit  F4=Prompt  F6=Insert line  F9=Retrieve  F10=Copy line
F12=Cancel  F13=Services  F24=More keys
For more information on the SQL programming language and interactive SQL, see
the DB2 for AS/400 SQL Programming and the DB2 for AS/400 SQL Reference
books.
Both interactive SQL and the query management function can perform data manip-
ulation operations (INSERT, DELETE, SELECT, and so on) for files or tables
without the requirement that the table (or file) already exist in a collection (it can
be in a library).
However, the query management function does not allow you to specify a member
when you want to add the results to a file or table. The results of a query function
are placed in the first file member unless you use the OVRDBF command to
specify a different member before starting the query management function.
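For example, to place the results in member MBR2 rather than the first member, you might run a command similar to the following before starting the query management function (the file and member names are illustrative):

OVRDBF FILE(INVENT) MBR(MBR2)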
For more information on the query management function, see the DB2 for AS/400
Query Management Programming book.
| You can also use the CL command CPYF to load data on tape into DB2 for
| AS/400. This is especially useful when loading data that was unloaded from DB2
| for OS/390, or DB2 Server for VM (SQL/DS). Nullable data can be unloaded from
| these systems in such a way that a single-byte flag can be associated with each
| nullable field. CPYF with the *NULLFLAGS option specified for the FMTOPT
| parameter can recognize the null flags and ignore the data in the adjacent field on
| the tape and make the field null in DB2 for AS/400. Another useful FMTOPT
| parameter value for importing data from IBM mainframes is the *CVTFLOAT value.
| It allows floating point data stored on tape in System/390 format to be converted to
| the IEEE format used by DB2 for AS/400.
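A command to load such tape data might look similar to the following sketch (the tape device file name TAPEIN and the target file are assumptions for illustration):

CPYF FROMFILE(TAPEIN) TOFILE(SPIFFY/INVENT) MBROPT(*ADD)
     FMTOPT(*NULLFLAGS *CVTFLOAT)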
Another way to move data from one AS/400 system to another is to copy the data
using the copy file commands with DDM. You can use the Copy File (CPYF), Copy
Source File (CPYSRCF), and Copy from Query File (CPYFRMQRYF) commands to
copy data between files on source and target systems. You can copy local rela-
tional database or device files from (or to) remote database files, and remote files
can also be copied to remote files.
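For example, an administrator might copy a local file to a remote system through a DDM file with a command similar to the following sketch (TEST/KC105TST is the DDM file used in earlier examples; the member option shown is an assumption):

CPYF FROMFILE(SPIFFY/INVENT) TOFILE(TEST/KC105TST) MBROPT(*REPLACE)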
In this example, the administrator runs the commands on the KC000 system. If the
administrator is not on the KC000 system, then pass-through must be used to run
these commands on the KC000 system. The SBMRMTCMD command cannot be
used to run the above commands because the AS/400 system cannot be a source
system and a target system for the same job.
Consider the following items when using this command with DDM:
A DDM file can be specified on the FROMFILE and the TOFILE parameters for
the CPYF and CPYSRCF commands.
Note: For the Copy from Query File (CPYFRMQRYF), Copy from Diskette
(CPYFRMDKT), and Copy from Tape (CPYFRMTAP) commands, a
DDM file name can be specified only on the TOFILE parameter; for the
Copy to Diskette (CPYTODKT) and Copy to Tape (CPYTOTAP) com-
mands, a DDM file name can be specified only on the FROMFILE
parameter.
When a delete-capable file is copied to a non-delete capable file, you must
specify COMPRESS(*YES), or an error message is sent and the job ends.
If the remote file name on a DDM file specifies a member name, the member
name specified for that file on the CPYF command must be the same as the
member name on the remote file name on the DDM file. In addition, the Over-
ride Database File (OVRDBF) command cannot specify a member name that is
different from the member name on the remote file name on the DDM file.
If a DDM file does not specify a member name and if the OVRDBF command
specifies a member name for the file, the CPYF command uses the member
name specified on the OVRDBF command.
If the TOFILE parameter is a DDM file that refers to a file that does not exist,
CPYF creates the file. Following are special considerations for remote files
created with the CPYF command:
– The user profile for the target DDM job must be authorized to the CRTPF
command on the target system.
– For an AS/400 system target, the TOFILE parameter has all the attributes
of the FROMFILE parameter except those described in the Data Manage-
ment book.
For more information about using the Copy File commands to copy between
systems, see the Distributed Data Management book.
The save and restore commands used to save and restore tables or files include:
Save Library (SAVLIB) saves one or more collections or libraries
Save Object (SAVOBJ) saves one or more objects (including database tables
and views)
Save Changed Object (SAVCHGOBJ) saves any objects that have changed
since either the last time the collection or library was saved or from a specified
date
Restore Library (RSTLIB) restores a collection or library
Restore Object (RSTOBJ) restores one or more objects (including database
tables and views)
For example, if two dealerships were merging, the save and restore commands
could be used to save collections and tables for one relational database, which are
then restored on the remaining system’s relational database. To accomplish this an
administrator would:
1. Use the SAVLIB command on System A to save a collection or use the
SAVOBJ command on system A to save a table.
2. Specify whether the data is saved to a save file, which can be distributed using
SNADS, or saved on tape or diskette.
3. Distribute the save file to System B or send the tape or diskette to System B.
4. Use the RSTLIB command on System B to restore a collection or use the
RSTOBJ command on System B to restore a table.
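The save and restore steps might be entered as follows (an illustrative sketch; the collection and save file names are assumptions):

On System A:
SAVLIB LIB(SPIFFY) DEV(*SAVF) SAVF(QGPL/SPIFFYSAV)

On System B, after the save file has been distributed:
RSTLIB SAVLIB(SPIFFY) DEV(*SAVF) SAVF(QGPL/SPIFFYSAV)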
A consideration when using the save and restore commands is the ownership and
authorizations to the restored object. A valid user profile for the current object
owner should exist on the system where the object is restored. If the current
owner’s profile does not exist on this system, the object is restored under the
QDFTOWN default user profile. User authorizations to the object are limited by the
default user profile parameters. A user with QSECOFR authority must either create
the original owner’s profile on this system and make changes to the restored object
ownership, or specify new authorizations to this object for both local and remote
users.
Vendor-independent communications functions are also supported through two sep-
arately licensed AS/400 programs.
Peer-to-peer connectivity functions for both local and wide area networks are pro-
vided by the Transmission Control Protocol/Internet Protocol (TCP/IP). The File
Transfer Protocol (FTP) function of the AS/400 TCP/IP Connectivity Utilities/400
licensed program allows you to receive many types of files, depending on the capa-
bilities of the remote system. For more information, see the TCP/IP Configuration
and Reference book.
The OSI File Services/400 licensed program (OSIFS/400) provides file manage-
ment and transfer services for open systems interconnection (OSI) networks.
OSIFS/400, with the prerequisite licensed program OSI Communications
Subsystem/400, connects the AS/400 system to remote IBM or non-IBM systems
that conform to OSI file transfer, access, and management (FTAM) standards.
This chapter discusses ways that you can administer the distributed relational data-
base work being done across a network. Most of the commands, processes, and
other resources discussed here do not exist just for distributed relational database
use; they are tools provided for the operation of any AS/400 system. All adminis-
tration commands, processes and resources discussed here are included with the
OS/400 program, along with all of the DB2 for AS/400 functions.
You can get the information provided by the options on the menu whether the job is
on a job queue, output queue, or active. However, a job is not considered to be in
the system until all of its input has been completely read in. Only then is an entry
placed on the job queue. The options for the job information are:
Job status attributes
Job definition attributes
Spooled file information
Option 10 (Display job log) gives you information about an active job or a job on a
job queue. For jobs that have ended you can usually find the same information by
using option 4 (Work with spooled files). This presents the Work with Spooled Files
display, where you can use option 5 to display the file named QPJOBLOG if it is on
the list.
The Work with User Jobs display appears with names and status information of
user jobs running in the system (*ACTIVE), on job queues (*JOBQ), or on an
output queue (*OUTQ). For example, the display can show the active and ended
jobs for the user named KCDBA.
This display lists all the jobs in the system for the user, shows the status specified
(*ALL in this case), and shows the type of job. It also provides you with eight
options (2 through 8 and 13) to enter commands for a selected job. Option 5 pre-
sents the Work with Job display described above.
| The WRKUSRJOB command is useful when you want to look at the status of the
| DDM TCP/IP server jobs if your system is using TCP/IP. Run the following
| command:
| WRKUSRJOB USER(QUSER) STATUS(*ACTIVE)
| Page down until you see the jobs starting with the characters QRWT. If the server is
| active, you should see one job named QRWTLSTN, and one or more named QRWTSRVR
| (unless prestart DRDA jobs are not run on the system). The QRWTSRVR jobs are
| prestart jobs. If you do not see the QRWTLSTN job, run the following command to
| start it:
| STRTCPSVR SERVER(*DDM)
| If you see the QRWTLSTN job and not the QRWTSRVR jobs, and the use of
| DRDA prestart jobs has not been disabled, run the following command to start the
| prestart jobs:
| STRPJ QSYSWRK QRWTSRVR
The Work with Active Jobs display shows, for example, the jobs active on a typical
day at the KC105 system.
When you press F11 (Display elapsed data), the following display is provided to
give you detailed status information.
                                                     03/29/92  16:17:45
CPU %:  41.7     Elapsed time:  04:37:55     Active jobs:  42

F3=Exit  F5=Refresh  F10=Restart statistics  F11=Display status
F12=Cancel  F23=More options  F24=More keys
The Work with Active Jobs display gives you information about job priority and
system usage as well as the user and type information you get from the Work with
User Jobs display. You also can use any of 11 options on a job (2 through 11 and
13), including option 5, which presents you with the Work with Job display for the
selected job.
On the STATUS parameter, you can specify all jobs or only those that have a
status value of *RESYNC or *UNDECIDED. *RESYNC shows only the jobs that are
involved with resynchronizing their resources in an effort to reestablish a synchroni-
zation point; a synchronization point is the point where all resources are in a con-
sistent state.
*UNDECIDED shows only those jobs for which the decision to commit or roll back
resources is unknown.
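For example, to show only the undecided commitment definitions for all jobs, you might enter a command such as:

WRKCMTDFN JOB(*ALL) STATUS(*UNDECIDED)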
| On the LUWID parameter, you can display commitment definitions that are working
| with a commitment definition on another system. Jobs containing these commitment
| definitions are communicating using an APPC protected conversation. An LUWID
| can be found by displaying the commitment definition on one system and then
| using it as input to the WRKCMTDFN command to find the corresponding commit-
| ment definition.
You can use the WRKCMTDFN command to free local resources in jobs that are
undecided, but only if the commitment definitions are in a Prepare in Progress (PIP)
or Last Agent Pending (LAP) state. You can force the commitment definition to
either commit or roll back, and thus free up held resources; control does not return
to the program that issued the original commit until the initiator learns of the action
taken on the commitment definition.
You can also use the WRKCMTDFN command to end resynchronization in cases
where it is determined that resynchronization will not ever complete with another
system.
The way to display a job log depends on the status of the job. If the job has ended
and the job log is not yet printed, find the job using the WRKUSRJOB command,
then select option 8 (Work with spooled files) and find the spooled file named QPJOBLOG.
If the batch or interactive job is still active, or is on a job queue and has not yet
started, use the WRKUSRJOB command to find the job. The WRKACTJOB
command is used to display the job log of active jobs and does not show jobs on
job queues. Select option 5 (Work with job) and then select option 10 (Display job
log).
To display the job log of your own interactive job, do one of the following:
Enter the Display Job Log (DSPJOBLOG) command.
Enter the WRKJOB command and select option 10 (Display job log) from the
Work with Job display.
Press F10 (Display detailed messages) from the Command Entry display to
display messages that are shown in the job log.
When you use the DSPJOBLOG command, you see the Job Log display. This
display shows program names with special symbols, as follows:
>> The running command or the next command to be run. For example, if a
CL or high-level language program was called, the call to the program is
shown.
> The command has completed processing.
. . The command has not yet been processed.
? Reply message. This symbol marks both those messages needing a
reply and those that have been answered.
| If there are several jobs listed for the specified user profile and the relational data-
| base is accessed using DRDA, enter option 5 (Work with job) to get the Work with
| Job display. From this display, enter option 10 (Display job log) to see the job log.
| The job log shows you whether this is a distributed relational database job and, if it
| is, to which remote system the job is connected. Page through the job log looking
| for one of the following messages (depending on whether the connection is using
| APPC or TCP/IP):
| CPI9150 DDM job started.
| CPI9160 Database connection started over TCP/IP or a local socket.
| If you are on the AS and you do not know the job name,1 but you know the user
| name, use the WRKUSRJOB command. If you do not specify a user, the command
| returns a list of the jobs under the user profile2 you are using. On the Work with
| User Jobs display, use these columns to help you identify the AS jobs that are
| servicing APPC connections.
| (1) The job type column shows the type CMNEVK for APPC communications
| jobs.
(2) The status column shows if the job is active or completed. Depending
on how the system is set up to log jobs, you may see only active jobs.
(3) The job column provides the job name. The job name on the AS is the
same as the name of the device being used.
Work with User Jobs                                              KC105
                                                     03/29/92  16:15:33
Type options, press Enter.
2=Change  3=Hold  4=End  5=Work with  6=Release  7=Display message
8=Work with spooled files  13=Disconnect
If you are looking for an active AS job and do not know the user name, the
WRKACTJOB command gives you a list of those jobs for the subsystems active on
the system. The following example shows you some items to look for:
Work with Active Jobs                                            KC105
                                                     03/29/92  16:17:45
CPU %:  41.7     Elapsed time:  04:37:55     Active jobs:  102
(4) Search the subsystem3 that is set up to handle the AS jobs. In this
example, the subsystem for AS jobs is QCMN.
| 1 If you are using the DDM TCP/IP server, you can find the job name with the DSPLOG command as explained above.
| 2 For TCP/IP, the user profile in the job name will always be QUSER.
| 3 The subsystem for TCP/IP server jobs is QSYSWRK.
When you have located a job that looks like a candidate, enter option 5 to work
with that job. Then select option 10 from the Work with Job Menu to display the job
log. Distributed database job logs for jobs that are accessing the AS from a
DB2/400 application requester contain a statement near the top that reads:
CPI3E01 Local relational database accessed by (system name).
| After you locate a job working on the AS, you can also trace it back to the AR if the
| AR is an AS/400 system. One of the following messages will appear in your job log;
| place the cursor on the message you received:
| CPI9152 Target DDM job started by source system.
| CPI9162 Target job assigned to handle DDM connection started by source
| system over TCP/IP.
| When you press the help key, the detailed message for the statement appears. The
| source system job named is the job on the AR that caused this job.
| 4 For TCP/IP AS jobs, the job type is PJ (unless DRDA prestart jobs are not active on the system, in which case the job type is
| BCI).
The SBMRMTCMD command can submit any CL command that can run in the
batch environment and through the QCAEXEC system program; that is, any
command that has the values *BPGM and *EXEC specified for its ALLOW attribute. You
can display the ALLOW attributes by using the Display Command (DSPCMD)
command.
You must have the proper authority on the target system for the CL command
being submitted and for the objects that the command is to operate on. If the
source system user has the correct authority to do so (as determined in a target
system user profile), the following actions are examples of what can be performed
on remote files using the SBMRMTCMD command:
Grant or revoke object authority to remote tables
Verify tables or other objects
Save or restore tables or other objects
Although the command can be used to do many things with tables or other objects
on the remote system, using this command for some tasks is not as efficient as
other methods on the AS/400 system. For example, you could use this command to
display the file descriptions or field attributes of remote files, or to dump files or
other objects, but the output remains at the target system. To display remote file
descriptions and field attributes at the source system, a better method is to use the
Display File Description (DSPFD) and Display File Field Description (DSPFFD)
commands with SYSTEM(*RMT) specified, and specify the names of the DDM files
associated with the remote files.
See the Distributed Data Management book for lists of CL commands you can
submit and restrictions for the use of this command. In addition, see “Controlling
DDM Conversations” on page 6-10 for information about how DDM shares conver-
sations.
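For example, an administrator might submit a GRTOBJAUT command with a command similar to the following sketch:

SBMRMTCMD CMD('GRTOBJAUT OBJ(PARTS1) OBJTYPE(*PGM) USER(MPSUP)
           AUT(*USE)') DDMFILE(TEST/KC105TST)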
This submitted command grants *USE authority to the user MPSUP to the object
PARTS1. PARTS1 is a program that exists on the system identified by the DDM file
named KC105TST on the local system. The authority is granted if the distributed
relational database administrator has authority to use the GRTOBJAUT command
on the remote system named in the KC105TST DDM file.
| The term connection in this section of this book refers to the concept of an
| SQL connection. An SQL connection lasts from the time an explicit or
| implicit SQL CONNECT is done until the logical SQL connection is termi-
| nated by such means as an SQL DISCONNECT, or a RELEASE followed
| by a COMMIT. Multiple SQL connections can occur serially over a single
| network connection or conversation. In other words, when a connection is
| ended, the conversation that carried it is not necessarily ended.
The SQL DISCONNECT and RELEASE statements are used to end connections.
Connections can also be ended implicitly by the system. In addition, when running
with RUW connection management, previous connections are ended when a connection
to a different relational database is established.
If a DDM conversation is also being used to operate on remote files through DDM,
the conversation will remain active until the following conditions are met:
All the files used in the conversation are closed and unlocked
No other DDM-related functions are being performed
No DDM-related function has been interrupted (by a break program, for
example)
For protected conversations, a commit or rollback was performed after ending
all SQL programs and after all DDM-related functions were completed.
An AR job is no longer connected to the AS
Regardless of the value of the DDMCNV job attribute, conversations are dropped at
the end of a job routing step, at the end of the job, or when the job initiates a
Reroute Job (RRTJOB) command. Unused conversations within an active job can
also be dropped by the Reclaim DDM Conversations (RCLDDMCNV) or Reclaim
Resources (RCLRSC) command. Errors, such as communications line failures, can
also cause conversations to drop.
The DDMCNV parameter is changed by the Change Job (CHGJOB) command and
is displayed by Display Job (DSPJOB) command with OPTION(*DFNA). Also, you
can use the Retrieve Job Attributes (RTVJOBA) command to get the value of this
parameter and use it within a CL program.
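For example, a CL program might retrieve the value with statements similar to the following sketch (&DDMCNV is a locally declared variable):

DCL VAR(&DDMCNV) TYPE(*CHAR) LEN(5)
RTVJOBA DDMCNV(&DDMCNV)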
The RCLDDMCNV command applies to the DDM conversations for the job on the
AR in which the command is entered. There is an associated AS job for the DDM
conversation used by the AR job. The AS job ends6 automatically when the associ-
ated DDM conversation ends.
Although this command applies to all DDM conversations used by a job, using it
does not mean that all of them will be reclaimed. A conversation is reclaimed only if
it is not being actively used. If commitment control is used, a COMMIT or
ROLLBACK operation may have to be done before a DDM conversation can be
reclaimed.
When a program or package is created, the information about certain objects used
in the program or package is stored. This information is then available for use with
the Display Program References (DSPPGMREF) command. Information retrieved
can include:
The name of the program or package and its text description
The name of the library or collection containing the program or package
The number of objects referred to by the program or package
The qualified name of the system object
The information retrieval dates
The object type of the referenced object
For files and tables, the record contains the following additional fields:
The name of the file or table in the program or package (possibly different from
the system object name if an override was in effect when the program or
package was created)
Note: Any overrides apply only on the AR.
The program or package use of the file or table (input, output, update, unspeci-
fied, or a combination of these four)
The number of record formats referenced, if any
| 6 For TCP/IP conversations that end, the AS job is normally a prestart job and is usually recycled rather than ended.
Before the objects used in a program can be shown, the user must have *USE authority
for the program. Also, of the libraries specified by the library qualifier, only the
libraries for which the user has read authority are searched for the programs.
Figure 6-1 shows the objects for which the high-level languages and utilities save
information.
The stored file information contains an entry (a number) for the type of use. In the
database file output of the Display Program References (DSPPGMREF) command
(built when using the OUTFILE parameter), this entry is a representation of one or
more codes listed below. There can only be one entry per object, so combinations
are used. For example, a file coded as a 7 would be used for input, output, and
update.
Code Meaning
1 Input
2 Output
3 Input and Output
4 Update
8 Unspecified
On the requester you can get a list of all the collections and tables used by a
program, but you are not able to see on which relational database they are located.
They may be located in multiple relational databases. The output from the
command can go to a database file or to a displayed spooled file.
To see what objects are used by an AS SQL package, you can enter a command
such as the following:
DSPPGMREF PGM(SPIFFY/PARTS1) OBJTYPE(*SQLPKG)
The output from the command can go to a database file or to a displayed spooled
file.
Dropping a Collection
Attempting to delete a collection that contains journal receivers may cause an
inquiry message to be sent to the QSYSOPR message queue for the AS job. The
AS and AR job wait until this inquiry is answered.
When the AR job is waiting, it may appear as if the application is hung. Consider
the following when your AR job has been waiting for a time longer than anticipated:
Be aware that an inquiry message is sent to QSYSOPR message queue and
needs an answer to proceed.
Move the journal receivers to a library other than the one that is being
dropped.
Have the AS reply to the message using its system reply list.
This last consideration can be accomplished by changing the job that appears to be
currently hung, or by changing the job description for all AS jobs running on the
system. However, you must first add an entry to the AS system reply list for
message CPA7025 using the Add Reply List Entry (ADDRPYLE) command:
ADDRPYLE SEQNBR(...) MSGID(CPA7025) RPY(I)
To change the job description for the job that is currently running on the AS, use
the SBMRMTCMD command. The following example shows how the database
administrator on one system in the Kansas City region changes the job description
on the KC105 system (the system addressed by the TEST/KC105TST DDM file):
SBMRMTCMD CMD('CHGJOB JOB(KC105ASJOB) INQMSGRPY(*SYSRPYL)')
DDMFILE(TEST/KC105TST)
You can prevent this situation from happening on the AS more permanently by
using the Change Job Description (CHGJOBD) command so that any job that uses
that job description uses the system reply list.
This method should be used with caution. Adding CPA7025 to the system reply list
affects all jobs that use the system reply list. Also, changing the job description
affects all jobs that use that particular job description. You may want to create a
separate job description for AS jobs. For additional information on creating job
descriptions, see the Work Management book.
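For example, a separate job description for AS jobs could be created to use the
system reply list; the library and job description names here are illustrative:
CRTJOBD JOBD(DDMLIB/DDMASJOBD) INQMSGRPY(*SYSRPYL)
       TEXT('Job description for DDM/DRDA AS jobs')
Jobs started with this job description then answer inquiry messages such as
CPA7025 from the system reply list.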
Job Accounting
The job accounting function on the AS/400 system gathers data so you can
determine who is using the system and what system resources they are using.
Typical job accounting details the jobs running on a system and the resources
they use, such as the processing unit, printers, display stations, and database
and communications functions.
Job accounting is optional and must be set up on the system. To set up resource
accounting on the system you must:
1. Create a journal receiver by using the Create Journal Receiver (CRTJRNRCV)
command.
2. Create the journal named QSYS/QACGJRN by using the Create Journal
(CRTJRN) command. You must use the name QSYS/QACGJRN and you must
have authority to add items to QSYS to create this journal. Specify the names
of the journal receivers you created in the previous step on this command.
3. Change the accounting level system value QACGLVL using the Work with
System Value (WRKSYSVAL) or Change System Value (CHGSYSVAL)
command.
The VALUE parameter on the CHGSYSVAL command determines when job
accounting journal entries are produced. A value of *NONE means the system
does not produce any entries in the job accounting journal. A value of *JOB
means the system produces a job (JB) journal entry. A value of *PRINT
produces a direct print (DP) or spooled print (SP) journal entry for each file
printed.
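As a sketch of these steps, the following commands set up resource accounting;
the receiver library and name are examples only:
CRTJRNRCV JRNRCV(ACGLIB/ACGRCV0001)
CRTJRN JRN(QSYS/QACGJRN) JRNRCV(ACGLIB/ACGRCV0001)
CHGSYSVAL SYSVAL(QACGLVL) VALUE('*JOB')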
When a job is started, a job description is assigned to the job. The job description
object contains a value for the accounting code (ACGCDE) parameter, which may
be an accounting code or the default value *USRPRF. If *USRPRF is specified, the
accounting code in the job’s user profile is used.
You can add accounting codes to user profiles using the accounting code param-
eter ACGCDE on the Create User Profile (CRTUSRPRF) command or the Change
User Profile (CHGUSRPRF) command. You can change accounting codes for spe-
cific job descriptions by specifying the desired accounting code for the ACGCDE
parameter on the Create Job Description (CRTJOBD) command or the Change Job
Description (CHGJOBD) command.
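For example, the following commands assign a hypothetical accounting code of
DISTDB to a user profile and to a job description; the profile and job
description names are illustrative:
CHGUSRPRF USRPRF(KCDBA) ACGCDE(DISTDB)
CHGJOBD JOBD(KCLIB/KCJOBD) ACGCDE(DISTDB)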
When a job accounting journal is set up, job accounting entries are placed in the
journal receiver starting with the next job that enters the system after the
CHGSYSVAL command takes effect.
For more information about job accounting, see the Work Management book.
| The DRDA/DDM TCP/IP server that is shipped with the OS/400 program does not
| typically require any changes to your existing system configuration in order to work
| correctly. It is set up and configured when you install OS/400. At some time, you
| may want to change the way the system manages the server jobs to better meet
| your needs, solve a problem, improve the system's performance, or simply look at
| the jobs on the system. To make such changes and meet your processing require-
| ments, you need to know which objects affect which pieces of the system and how
| to change those objects.
| This section describes, at a high level, some of the work management concepts
| that need to be understood in order to work with the server jobs and how the con-
| cepts and objects relate to the server. In order to fully understand how to manage
| your AS/400 system, it is recommended that you carefully review the Work Man-
| agement book before you continue with this section. This section then shows you
| how the servers can be managed and how they fit in with the rest of the system.
| Terminology
| The same server is used for both DDM and DRDA TCP/IP access to DB2 for
| AS/400. For brevity, we will use the term DRDA server rather than DRDA/DDM
| server in the following discussion. Sometimes, however, it may be referred to as
| the TCP/IP server, the DDM server, or simply the server when the context makes
| the use of a qualifier unnecessary.
| The DRDA server consists of two or more jobs, one of which is what is called the
| DRDA listener, because it listens for connection requests and dispatches work to
| the other jobs. The other job or jobs, as initially configured, are prestart jobs which
| service requests from the DRDA or DDM client after the initial connection is made.
| The set of all associated jobs, the listener and the server jobs, is collectively
| referred to as the DRDA server.
| The term client will be used interchangeably with DRDA Application Requester (or
| AR) in the DRDA application environment. The term client will be used
| interchangeably with DDM source system in the DDM (Distributed Data
| Management) application environment.
| The acronym DDM appears at times in the context of DRDA usage, because DRDA
| is implemented using DDM protocols, and the two access methods are closely
| related. Examples of this are the use of the special value *DDM for the SERVER
| parameter on the STRTCPSVR CL command, and the use of DDM in the name of
| the CHGDDMTCPA CL command.
[Figure RV4W205-0: A DRDA Application Requester or DDM client connects to the
DRDA/DDM server listener, which attaches the connection to a server job.]
| To initiate a DRDA server job that uses TCP/IP communications support, the DRDA
| Application Requester or DDM source system will connect to the DRDA well-known
| port number, 446 (1). The DRDA listener program must have been started (by
| using the STRTCPSVR SERVER(*DDM) command, for example) to listen for and
| accept the client's connection request. The DRDA listener, upon accepting this con-
| nection request, will issue an internal request to attach the client's connection to a
| DRDA server job (2). This server job may be a prestarted job or, if the user has
| removed the QRWTSRVR prestart job entry from the QSYSWRK subsystem (in
| which case prestart jobs are not used), a batch job that is submitted when the client
| connection request is processed. The server job will handle any further communi-
| cations with the client.
| The initial data exchange that occurs includes a request that identifies the user
| profile under which the server job is to run (3). Once the user profile and password
| (if it is sent with the user profile ID) have been validated, the server job will swap to
| this user profile as well as change the job to use the attributes, such as CCSID,
| defined for the user profile (4).
| The functions of connecting to the listener program, attaching the client connection
| to a server job and exchanging data and validating the user profile and password
| are comparable to those performed when an APPC program start request is proc-
| essed.
| The DRDA listener allows client applications to establish TCP/IP connections with
| an associated server job by handling and routing inbound connection requests.
| Once the client has established communications with the server job, there is no
| further association between the client and the listener for the duration of that con-
| nection.
| The DRDA listener is started by using the STRTCPSVR command with a value of
| *DDM or *ALL for the SERVER parameter. This program must be active in order for
| DRDA Application Requesters and DDM source systems to establish connections
| with the DRDA TCP/IP server. You can request that the DRDA listener be started
| automatically by using the CHGDDMTCPA command and specifying
| AUTOSTART(*YES). This will cause the listener to be started when TCP/IP is
| started. Once started, the listener will remain active until it is ended using the
| ENDTCPSVR command or an error occurs. Both the QSYSWRK subsystem and
| TCP/IP must be active before the DRDA listener can be started.
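| For example, the following commands request that the listener be started
| automatically whenever TCP/IP is started, and then start it immediately:
| CHGDDMTCPA AUTOSTART(*YES)
| STRTCPSVR SERVER(*DDM)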
| An example of when the prestart jobs may need to be started manually is if the
| commands ENDTCPSVR *DDM and STRTCPSVR *DDM are run in rapid suc-
| cession, to restart the server. It takes a while for the prestart jobs to become com-
| pletely ended. If the STRTCPSVR command's request to start the prestart jobs
| comes too soon, they will not be started automatically, and the manual use of
| STRPJ QSYSWRK QRWTSRVR will be necessary.
| Note that the DRDA TCP/IP server can also be administered using the AS/400
| Operations Navigator, which is part of the Client Access product. The DRDA server
| is referred to as the DDM server in this context.
| Restriction: Only one DRDA listener can be active at one time. Requests to start
| the listener when it is already active will result in an informational message to the
| command issuer.
| Note: The DRDA server will not start if the QUSER password has expired. It is
| recommended that the password expiration interval be set to *NOMAX for
| the QUSER profile. With this value the password will not expire.
| The STRTCPSVR SERVER(*ALL) command starts all of the TCP/IP servers,
| including the DRDA server.
| If the DRDA listener is ended, and there are associated server jobs that have active
| connections to client applications, the server jobs will remain active until communi-
| cation with the client application is ended. Subsequent connection requests from
| the client application will fail, however, until the listener is started again.
| Restrictions: If the End TCP/IP Server command is used to end the DRDA lis-
| tener when it is not active, a diagnostic message will be issued. This same diag-
| nostic message will not be sent if the listener is not active when an ENDTCPSVR
| SERVER(*ALL) command is issued.
| QSYSWRK Subsystem
| The DRDA server jobs and their associated listener job run in this subsystem. The
| QSYSWRK subsystem will start automatically when you perform an IPL on your
| AS/400 system, regardless of the value specified for the controlling subsystem.
| A prestart job is a batch job that starts running before a program on a remote
| system initiates communications with the server. Prestart jobs use prestart job
| entries in the subsystem description to determine which program, class, and
| storage pool to use when the jobs are started. Within a prestart job entry, you must
| specify attributes that the subsystem uses to create and manage a pool of prestart
| jobs.
| The following list contains the prestart job entry attributes with the initial configured
| value for the DRDA TCP/IP server. They can be changed with the Change Prestart
| Job Entry (CHGPJE) command.
| Subsystem Description. The subsystem that contains the prestart job entries is
| QSYSWRK.
| Program library and name. The program that is called when the prestart job is
| started is QSYS/QRWTSRVR.
| User profile. The user profile that the job runs under is QUSER. This is what
| the job shows as the user profile. When a request to connect to the server is
| received from a client, the prestart job function swaps to the user profile that is
| received in that request.
| Job name. The name of the job when it is started is QRWTSRVR.
| Job description. The job description used for the prestart job is *USRPRF.
| Note that the user profile is QUSER so this will be whatever QUSER's job
| description is. However, the attributes of the job are changed to correspond to
| the requesting user's job description after the userid and password (if present)
| are verified.
| Start jobs. This indicates whether prestart jobs are to automatically start when
| the subsystem is started. These prestart job entries are shipped with a start
| jobs value of *NO to prevent unnecessary jobs starting when a system IPL is
| performed. You can change these to *YES if the DRDA TCP/IP communi-
| cations support is to be used. However, the STRTCPSVR *DDM command will
| also attempt to start the prestart jobs as part of its processing.
| Initial number of jobs. As initially configured, the number of jobs that are started
| when the subsystem is started is 1. This value can be adjusted to suit your
| particular environment and needs.
| Threshold. The minimum number of available prestart jobs for a prestart job
| entry is set to 1. When this threshold is reached, additional prestart jobs are
| automatically started. This is used to maintain a certain number of jobs in the
| pool.
| Additional number of jobs. The number of additional prestart jobs that are
| started when the threshold is reached is initially configured at 2.
| Maximum number of jobs. The maximum number of prestart jobs that can be
| active for this entry is *NOMAX.
| Maximum number of uses. The maximum number of uses of the job is set to
| 200. This value indicates that the prestart job will end after 200 requests to
| start the server have been processed.
| Wait for job. The *YES setting for DRDA causes a client connection request to
| wait for an available server job if the maximum number of jobs is reached.
| Pool identifier. The subsystem pool identifier in which this prestart job runs is
| set to 1.
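| As an illustration, the following command changes several of these attributes
| for the QRWTSRVR prestart job entry; the values shown are examples, not
| recommendations:
| CHGPJE SBSD(QSYS/QSYSWRK) PGM(QSYS/QRWTSRVR)
|        INLJOBS(5) THRESHOLD(2) ADLJOBS(3)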
| When the start jobs value for the prestart job entry has been set to *YES, and the
| remaining values are as provided with their initial settings, the following happens for
| each prestart job entry:
| When the subsystem is started, one prestart job is started.
| When the first client connection request is processed for the DRDA server, the
| initial job is used and the threshold is exceeded.
| Additional jobs are started for the server based on the number defined in the
| prestart job entry.
| The number of available jobs does not fall below 1.
| The subsystem periodically checks the number of prestart jobs in a pool that
| are ready to process requests, and it starts ending excess jobs. It always
| leaves at least the number of prestart jobs specified in the initial jobs param-
| eter.
| Monitoring Prestart Jobs: Prestart jobs can be monitored by using the Display
| Active Prestart Jobs (DSPACTPJ) command. Use the command DSPACTPJ
| QSYSWRK QRWTSRVR to monitor the DRDA prestart jobs.
| Managing Prestart Jobs: The information presented for an active prestart job can
| be refreshed by pressing the F5 key while on the Display Active Prestart Jobs
| display. Of particular interest is the information about program start requests. This
| information can indicate to you whether or not you need to change the available
| number of prestart jobs. If you have information indicating that program start
| requests are waiting for an available prestart job, you can change prestart jobs
| using the Change Prestart Job Entry (CHGPJE) command.
| If the program start requests were not being acted on fast enough, you could do
| any combination of the following:
| Increase the threshold.
| Increase the initial number of jobs.
| Increase the additional number of jobs.
| The key is to ensure that there is an available prestart job for every request that is
| sent that starts a server job.
| Removing Prestart Job Entries: If you decide that you do not want the servers
| to use the prestart job function, you must do the following:
| 1. End the prestarted jobs using the End Prestart Job (ENDPJ) command.
| Prestarted jobs ended with the ENDPJ command will be started the next time
| the subsystem is started if start jobs *YES is specified in the prestart job entry,
| or when the STRTCPSVR *DDM command is run. If
| you only end the prestart job and do not perform the next step, any requests to
| start the particular server will fail.
| 2. Remove the prestart job entries in the subsystem description using the Remove
| Prestart Job Entry (RMVPJE) command.
| The prestart job entries removed with the RMVPJE command are permanently
| removed from the subsystem description. Once the entry is removed, new
| requests for the server will be successful, but will incur the performance over-
| head of job initiation.
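| The two steps might look like this:
| ENDPJ SBS(QSYSWRK) PGM(QRWTSRVR) OPTION(*IMMED)
| RMVPJE SBSD(QSYS/QSYSWRK) PGM(QSYS/QRWTSRVR)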
| The server jobs run in the QSYSWRK subsystem also. The characteristics of the
| server jobs are taken from their prestart job entry which also comes automatically
| configured with OS/400. If this entry is removed so that prestart jobs are not used
| for the servers, then the server jobs are started using the characteristics of their
| corresponding listener job.
| The following provides the initial configuration in the QSYSWRK subsystem for the
| DRDA listener job.
| Subsystem QSYSWRK
| Job Queue QSYSNOMAX
| User QUSER
| Routing Data QRWTLSTN
| Job Name QRWTLSTN
| Class QSYSCLS20
| A sample of the job status can be displayed by using the WRKACTJOB command.
| Only jobs related to the DRDA server are of interest here. You must press F14
| to see the available prestart jobs.
Ending an AR job ends the job on both the AR and the AS. If the application is
under commitment control, all uncommitted changes are rolled back.
With the *SYSMGT value, the system audits all accesses made with the following
commands:
Add Relational Database Directory Entry (ADDRDBDIRE)
Change Relational Database Directory Entry (CHGRDBDIRE)
Display Relational Database Directory Entry (DSPRDBDIRE)
Remove Relational Database Directory Entry (RMVRDBDIRE)
Work with Relational Database Directory Entries (WRKRDBDIRE)
This chapter discusses tools and techniques to protect programs and data on an
AS/400 system and reduce recovery time in the event of a problem. It also provides
information about alternatives that ensure your network users have access to the
relational databases and tables across the network when it is needed.
Recovery Support
Failures that can occur on a computer system are a system failure (when the entire
system is not operating); a loss of the site due to fire, flood or similar catastrophe;
or the damage or loss of an object. For a distributed relational database, a failure
on one system in the network prevents users across the entire network from
accessing the relational database on that system. If the relational database is crit-
ical to daily business activities at other locations, enterprise operations across the
entire network can be disrupted for the duration of one system’s recovery time.
Clearly, planning for data protection and recovery after a failure is particularly
important in a distributed relational database.
The most common type of loss is the loss of an object or group of objects. An
object can be lost or damaged due to several factors, including power failure, hard-
ware failures, system program errors, application program errors, or operator errors.
The AS/400 system provides several methods for protecting the system programs,
application programs, and data from being permanently lost. Depending on the type
of failure and the level of protection chosen, most of the programs and data can be
protected, and the recovery time can be significantly reduced. These protection
methods include:
Physical protection of the system from power failure
System save and restore functions to ensure Structured Query Language
(SQL) objects such as tables, collections, packages and relational database
directories can be saved and restored
Protection from disk related failures such as auxiliary storage pools to control
where objects are stored, checksum protection for auxiliary storage pools, and
mirrored protection for disk-related hardware components
Journal management for keeping auxiliary records of relational database changes
and for journaling indexes to the data
Commitment control to ensure relational database transactions can be applied
or removed in a uniform manner
Auxiliary storage pools (ASPs), checksum protection, and mirrored protection are
OS/400 disk recovery functions that provide methods to recover recently entered
data after a disk related failure. These functions use additional system resources,
but provide a high level of protection for systems in a distributed relational data-
base. Since some systems may be more critical as application servers than others,
the distributed relational database administrator should review how these disk data
protection methods can best be used by individual systems within the network.
The system ASP isolates system programs and the temporary objects that are
created as a result of processing by system programs. User ASPs can be used to
isolate some objects such as libraries, SQL objects, journals, journal receivers,
applications, and data. The AS/400 system supports up to 15 user ASPs. Isolating
libraries or objects in a user ASP protects them from disk failures in other ASPs
and reduces recovery time.
Checksum Protection
Checksum protection guards against losing the data on any disk in an ASP. The
checksum software maintains a coded copy of ASP data in special checksum data
areas within that ASP. Any changes made to permanent objects in a checksum
protected ASP are automatically maintained in the checksum data of the checksum
set. If any single disk unit in a checksum set is lost, the system reconstructs the
contents of the lost device using the checksum and the data on the remaining func-
tional units of the set. In this way, if any one of the units fails, its contents may be
recovered. This reconstructed data reflects the most up-to-date information that
was on the disk at the time of the failure.
Mirrored Protection
Mirrored protection increases the availability of a system by duplicating different
disk-related hardware components such as a disk controller, a disk I/O processor,
or a bus. The system can remain available after a failure, and service for the failed
hardware components can be scheduled at a convenient time.
Journal Management
Journal management can be used as a part of the backup and recovery strategy for
relational databases and indexes. AS/400 journal support provides an audit trail and
forward and backward recovery. Forward recovery can be used to take an older
version of a table and apply changes logged in the journal to the table. Backward
recovery can be used to remove changes logged in the journal from the table.
When a collection is created, a journal and an object called a journal receiver are
created in the collection. Because placing journal receivers on ASPs can improve
performance, the distributed relational database administrator may wish to create
the collection on a user ASP.
With journaling active, when changes are made to the database, the changes are
journaled in a journal receiver before the changes are made to the database. The
journal receiver always has the latest database information. All activity is journaled
for a database table regardless of how the change was made.
Journal receiver entries record activity for a specific row (added, changed, or
deleted), and for a table (opened, table or member saved, and so on). Each entry
includes additional control information identifying the source of the activity, the user,
job, program, time, and date.
For changes that affect a single row, row images are included following the control
information. The image of the row after a change is made is always included.
Optionally, the row image before the change is made can also be included. You
control whether to journal both before and after row images or just after row images
by specifying the IMAGES parameter on the Start Journaling Physical File
(STRJRNPF) command.
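For example, to journal both before and after row images for a table in the
SPIFFY collection (the table name is illustrative; QSQJRN is the journal that
SQL creates in the collection):
STRJRNPF FILE(SPIFFY/INVENTORY) JRN(SPIFFY/QSQJRN) IMAGES(*BOTH)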
All journaled database files are automatically synchronized with the journal when
the system is started (IPL time). If the system ended abnormally, some database
changes may be in the journal, but not yet reflected in the database itself. If that is
the case, the system automatically updates the database from the journal to bring
the tables up to date.
Journaling can make saving database tables easier and faster. For example,
instead of saving entire tables every day, you can simply save the journal receivers
that contain the changes to the tables. You might still save the entire tables on a
regular basis. This method can reduce the amount of time it takes to perform your
daily save operations.
The Display Journal (DSPJRN) command can be used to convert journal receiver
entries to a database file. Such a file can be used for activity reports, audit trails,
security, and program debugging.
Index Recovery
An index describes the order in which rows are read from a table. When indexes
are recorded in the journal, the system can recover the index to avoid spending a
significant amount of time rebuilding indexes during the IPL following an abnormal
system end.
When you journal tables, images of changes to the rows in the table are written to
the journal. These row images are used to recover the table should the system end
abnormally. However, after an abnormal end, the system may find that indexes built
over the table are not synchronized with the data in the table. If an access path and
its data are not synchronized, the system must rebuild the index to ensure that the
two are synchronized and usable.
When indexes are journaled, the system records images of the index in the journal
to provide known synchronization points between the index and its data. By having
that information in the journal, the system can recover both the data and the index,
and ensure that the two are synchronized. In such cases, the lengthy time to
rebuild the indexes can be avoided.
The AS/400 system provides several functions to assist with index recovery. All
indexes on the system have a maintenance option that specifies when the index is
maintained. SQL indexes are created with an attribute of *IMMED maintenance.
SQL indexes are not journaled automatically. You can use the Start Journal Access
Path (STRJRNAP) command to journal any index created by SQL operations. The
system save and restore functions allow you to save indexes when a table is saved
by using ACCPTH(*YES) on the Save Object (SAVOBJ) or Save Library (SAVLIB)
commands. If you must restore a table, there is no need to rebuild the indexes. Any
indexes not previously saved and restored are automatically and asynchronously
rebuilt by the database.
Before journaling indexes, you must start journaling for the tables associated with
the index. In addition, you must use the same journal for the index and its associ-
ated table.
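As a sketch, the following commands journal an SQL index to the same journal as
its table and save the table along with its access paths; the index name and
device name are illustrative:
STRJRNAP FILE(SPIFFY/INVIX1) JRN(SPIFFY/QSQJRN)
SAVOBJ OBJ(INVENTORY) LIB(SPIFFY) DEV(TAP01) ACCPTH(*YES)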
Consider the trade-off between using table design to reduce index rebuilding time
and using system-supplied functions like access path journaling. The table design
described above may require a more complex application design. After evaluating
your situation, you may decide to use system-supplied functions like access path
journaling rather than design more complex applications.
The system determines which access paths to protect based on target access path
recovery times provided by the user or by using a system-provided default time.
The target access path recovery times can be specified as a system-wide value or
on an ASP basis. Access paths that are being journaled to a user-defined journal
are not eligible for system-managed access-path protection (SMAPP) because they
are already protected. See the Backup and Recovery book for more information
about SMAPP.
Under commitment control, tables and rows used during a transaction are locked
from other jobs. This ensures that other jobs do not use the data until the trans-
action is complete. At the end of the transaction, the program issues an SQL
COMMIT or ROLLBACK statement, freeing the rows. If the system or job ends
abnormally before the commit operation is performed, all changes for that job since
the last time a commit or rollback operation occurred are rolled back. Any affected
rows that are still locked are then unlocked. The lock levels are as follows:
*NONE Commitment control is not used. Uncommitted changes in other jobs
can be seen.
*CHG Objects referred to in SQL ALTER, COMMENT ON, CREATE, DROP,
GRANT, LABEL ON, and REVOKE statements and the rows updated,
deleted, and inserted are locked until the unit of work (transaction) is
completed. Uncommitted changes in other jobs can be seen.
*CS Objects referred to in SQL ALTER, COMMENT ON, CREATE, DROP,
GRANT, LABEL ON, and REVOKE statements and the rows updated,
deleted, and inserted are locked until the unit of work (transaction) is
completed. A row that is selected, but not updated, is locked until the
next row is selected. Uncommitted changes in other jobs cannot be
seen.
*ALL Objects referred to in SQL ALTER, COMMENT ON, CREATE, DROP,
GRANT, LABEL ON, and REVOKE statements and the rows read,
updated, deleted, and inserted are locked until the end of the unit of
work (transaction). Uncommitted changes in other jobs cannot be seen.
If you request COMMIT (*CHG), COMMIT (*CS), or COMMIT (*ALL) when the
program is precompiled or when interactive SQL is started, then SQL sets up the
commitment control environment by implicitly calling the Start Commitment Control
(STRCMTCTL) command. The LCKLVL parameter specified when SQL starts com-
mitment control is the lock level specified on the COMMIT parameter on the
CRTSQLxxx commands. NFYOBJ(*NONE) is specified when SQL starts commit-
ment control. To specify a different NFYOBJ parameter, issue a STRCMTCTL
command before starting SQL.
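For example, to start commitment control with a notify object before starting
SQL (the library and message queue names are hypothetical):
STRCMTCTL LCKLVL(*CHG) NFYOBJ(MYLIB/CMTNFYQ *MSGQ)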
Note: When running with commitment control, the tables referred to in the applica-
tion program by data manipulation language statements must be journaled.
The tables do not have to be journaled at precompile time, but they must be
journaled when you run the application.
The journal created in the SQL collection is normally the journal used for logging all
changes to SQL tables. You can, however, use the system journal functions to
journal SQL tables to a different journal.
Commitment control can handle up to 131 072 distinct row changes in a unit of
work. If COMMIT(*ALL) is specified, all rows read are also included in the 131 072
limit. (If a row is changed or read more than once in a unit of work, it is only
counted once toward the 131 072 limit.) Maintaining a large number of locks
adversely affects system performance and does not allow concurrent users to
access rows locked in the unit of work until the unit of work is completed. It is,
therefore, more efficient to keep the number of rows processed in a unit of work
small. Commitment control allows up to 512 tables either open under commitment
control or closed with pending changes in a unit of work.
The HOLD value on COMMIT and ROLLBACK statements allows you to keep the
cursor open and start another unit of work without issuing an OPEN again. The
HOLD value is not available when there are non-AS/400 connections that are not
released for a program and SQL is still in the call stack. If ALWBLK(*ALLREAD)
and either COMMIT(*CHG) or COMMIT(*CS) are specified when the program is
precompiled, all read-only cursors will allow blocking of rows and a ROLLBACK
HOLD statement will not roll the cursor position back.
If there are locked rows (records) pending from running an SQL precompiled
program or an interactive SQL session, a COMMIT or ROLLBACK statement can
be issued from the system Command Entry display. Otherwise, an implicit
ROLLBACK operation occurs when the job is ended.
You can use the Work with Commitment Definitions (WRKCMTDFN) command to
monitor the status of commitment definitions and free up locks and held resources
involved with commitment control activities across systems. For more information,
see “Working with Commitment Definitions” on page 6-4.
The AS/400 system has a full set of commands to save and restore your database
tables and SQL objects:
Save Library (SAVLIB) saves one or more collections
Save Object (SAVOBJ) saves one or more objects such as SQL tables, views
and indexes
Save Changed Object (SAVCHGOBJ) saves any objects that have changed
since either the last time the collection was saved or from a specified date
Save Save File Data (SAVSAVFDTA) saves the contents of a save file
Save System (SAVSYS) saves the operating system, security information,
device configurations, and system values
Restore Library (RSTLIB) restores a collection
Restore Object (RSTOBJ) restores one or more objects such as SQL tables,
views and indexes
Restore User Profiles (RSTUSRPRF), Restore Authority (RSTAUT) and
Restore Configuration (RSTCFG) restore user profiles, authorities, and config-
urations saved by a SAVSYS command
See the Backup and Recovery book for more information about these functions and
commands.
After restoring the index, the table may need to be brought up to date by applying
the latest journal changes (depending on whether journaling is active). Normally,
the system can apply approximately 80,000 to 100,000 journal entries per hour.
(This assumes that each of the tables to which entries are being applied has only
one index or view built over it.) Even with this additional recovery time, you will
usually find it is faster to restore indexes rather than to rebuild them.
The system ensures the integrity of an index before you can use it. If the system
determines that the index is unusable, the system attempts to recover it. You can
control when an index will be recovered. If the system ends abnormally, during the
next IPL the system automatically lists those tables requiring index or view
recovery. You can decide whether to rebuild the index or to attempt to recover it at
one of the following times:
For more information, see the Backup and Recovery book topics about saving and
restoring access paths.
| Included in the security information that the SAVSECDTA and RSTUSRPRF com-
| mands can save and restore are the server authorization entries that the DRDA
| TCP/IP support uses to store and retrieve remote system user ID and password
| information.
An SQL package must be restored to a collection having the same name as the
collection from which it was saved, and it cannot be renamed.
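For example, an SQL package saved from collection SPIFFY can be restored only back into SPIFFY (the package, collection, and device names here are illustrative):
RSTOBJ OBJ(PARTSPKG) SAVLIB(SPIFFY) DEV(TAP01) OBJTYPE(\*SQLPKG)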
When entries have been added and you want to save the relational database direc-
tory, specify the OUTFILE parameter on the Display Relational Database Directory
Entry (DSPRDBDIRE) command to send the results of the command to an output
file. The output file can be saved to tape, diskette, or a save file and restored to the
system. If your relational database directory is damaged or your system needs to
be recovered, you can restore the output file containing relational database entry
data using a control language (CL) program. The CL program reads data from the
restored output file and creates the CL commands that add entries to a new rela-
tional database directory.
The sample CL program that follows reads the contents of the output file RDBDIRM
and recreates the relational database directory using the Add Relational Database
Directory Entry (ADDRDBDIRE) command. In this example (for systems running
versions prior to Version 4 Release 2), the old directory is cleared before the new
entries are made.
| /* - - */
| /* - Restore RDB Entries from output file created with: - */
| /* - DSPRDBDIRE OUTPUT(*OUTFILE) OUTFILE(RDBDIRM) - */
| /* - for OS/400 V3R1 through V4R1 - */
| /* - - */
| PGM
| DCLF FILE(RDBDIRM)
| RMVRDBDIRE RDB(*ALL)
| NEXTENT:
| RCVF
| MONMSG MSGID(CPF0864) EXEC(DO)
| QSYS/RCVMSG PGMQ(*SAME (*)) MSGTYPE(*EXCP) RMV(*YES) MSGQ(*PGMQ)
| GOTO CMDLBL(LASTENT)
| ENDDO
| IF (&RWRLOC = '*LOCAL') DO
| QSYS/ADDRDBDIRE RDB(&RWRDB) RMTLOCNAME(&RWRLOC) TEXT(&RWTEXT)
| ENDDO
| ELSE IF (&RWRLOC = '*ARDPGM') DO
| QSYS/ADDRDBDIRE RDB(&RWRDB) RMTLOCNAME(&RWRLOC) TEXT(&RWTEXT) +
| ARDPGM(&RWDLIB/&RWDPGM)
| ENDDO
| ELSE IF (&RWDPGM *NE ' ') DO
| QSYS/ADDRDBDIRE RDB(&RWRDB) RMTLOCNAME(&RWRLOC) TEXT(&RWTEXT) +
| DEV(&RWDEV) LCLLOCNAME(&RWLLOC) RMTNETID(&RWNTID) +
| MODE(&RWMODE) TNSPGM(&RWTPN) +
| ARDPGM(&RWDLIB/&RWDPGM)
| ENDDO
| ELSE DO
| QSYS/ADDRDBDIRE RDB(&RWRDB) RMTLOCNAME(&RWRLOC) TEXT(&RWTEXT) +
| DEV(&RWDEV) LCLLOCNAME(&RWLLOC) RMTNETID(&RWNTID) +
| MODE(&RWMODE) TNSPGM(&RWTPN)
| ENDDO
| GOTO CMDLBL(NEXTENT)
| LASTENT:
| RETURN
| ENDPGM
| The following example shows the same program for systems running Version 4
| Release 2 or later:
| GOTO CMDLBL(NEXTENT)
| LASTENT:
| RETURN
| ENDPGM
The files that make up the relational database directory are saved when a SAVSYS
command is run. The physical file that contains the relational database directory
can be restored from the save media to your library with the following Restore
Object (RSTOBJ) command:
RSTOBJ OBJ(QADBXRDBD) SAVLIB(QSYS)
DEV(TAP01) OBJTYPE(*FILE)
LABEL(Qpppppppvrmxx0003)
RSTLIB(your lib)
| In this example, the relational database directory is restored from tape. The char-
| acters ppppppp in the LABEL parameter represent the product code of Operating
| System/400 (for example, 5769SS1 for Version 4 Release 2). The vrm in the
| LABEL parameter is the version, release, and modification level of OS/400. The xx
| in the LABEL parameter is the last two digits of the current system language value.
| For example, 2924 is for the English language; therefore, the value of xx is 24.
After you restore this file to your library, you can use the information in the file to
re-create the relational database directory.
For example, consider service booking or customer parts purchasing issues for a
dealership. When a customer is waiting for service or to purchase a part, the
service clerk needs access to all authorized tables of enterprise information to
schedule work or sell parts.
If the local system is down, no work can be done. If the local system is running but
a request to a remote system is needed to process work and the remote system is
down, the request cannot be handled. In the Spiffy Corporation example, this might
mean a dealership cannot request parts information from a regional inventory
center. Also, if an AS that handles many AR jobs is down, none of those ARs can process remote work.
Providing the region’s dealerships with access to regional inventory data is impor-
tant to the Spiffy Corporation distributed relational database administrator. Providing
paths through the network to data can be addressed several ways. The original
network configuration for the Spiffy Corporation linked the end node dealerships to
their respective network node regional centers.
[Figure RV2W739-1: the original network configuration, with the MP000 and KC000 network nodes connected by switched lines.]
Another alternative could be if one of the larger area dealerships also acted as an
AS for other dealerships. As shown in Figure 7-3 on page 7-16, an end node is
only an AS to other end nodes through its network node. In Figure 7-2, if the link to
Minneapolis is down, none of the dealerships could query another (end node) for
inventory. The configuration illustrated above could be changed so that one of the
dealerships is configured as an APPN network node, and lines to that dealership
from other area dealerships are set up.
[Figure RV2W744-1: the changed network configuration, with an area dealership configured as an APPN network node.]
The figure below shows that a copy of the MP000 system distributed relational
database can be stored on the KC000 system, and a copy of the KC000 system
distributed relational database can be stored on the MP000 system. The ARs from
one region can link to the other AS to query or to update a replicated copy of their
relational database.
The administrator must decide what is the most efficient, effective strategy to allow
distributed relational database processing. Alternative strategies might include these
scenarios.
One alternative may be that when MP000 is unavailable, its ARs connect to the
KC000 system to query a read-only snapshot of the MP000 distributed relational
database so service work can be scheduled.
For example, an AR that normally connects to the MP000 database could connect
to a replicated MP000 database on the KC000 system to process work. When the
MP000 system is available again, the MP000 relational database can be updated
by applying journal entries from activity originating in its replicated tables at the
KC000 location. When these journal entries have been applied to the original
MP000 tables, distributed relational database users can access the MP000 as an
AS again.
Journal management processes on each regional system update all relational data-
bases. The amount of journal management copy activity in this situation should be
examined because of potential adverse performance effects at these systems.
| For details, see the Communications Management book. See the APPN
| Support book for information about RU sizing and pacing. For a discussion of other
| communications-related performance considerations, see the TCP/IP Configuration
| and Reference book.
Unprotected Conversations
| Unprotected conversations are used for DRDA connections when the connection is
| performed from a program using RUW connection management or if the program
| making the connection is not running under commitment control, or if the database
| to which the connection is made does not support two-phase commit for the pro-
| tocol that is being used. If the characteristics of the data are such that the trans-
| action only affects one database management system, establishing the connection
| from a program using RUW connection management or from a program running
| without commitment control can avoid the overhead associated with two-phase
| commit flows. Additionally, when conversations are kept active with
| DDMCNV(*KEEP) and those conversations are protected conversations, two-phase
| commit flows are sent regardless of whether the conversation was used for DRDA
| or DDM processing during the unit of work. Therefore, when running with
| DDMCNV(*KEEP), it is better to run with unprotected conversations if possible. If
| running with protected conversations, you should run with DDMCNV(*DROP) and
| use the RELEASE statement to end the connection and the conversation at the
| next commit when the conversation will not be used in future units of work.
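For example, a program running with DDMCNV(*DROP) might end a protected conversation at the next commit with SQL statements such as the following (KC000 is an illustrative relational database name):
RELEASE KC000
COMMIT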
| You can use the AS/400 Performance Tools licensed program to help analyze your
| performance. In addition, there are some system commands available to help you
| observe the performance of your system: WRKSYSSTS, WRKDSKSTS, and
| WRKACTJOB. In using them, you should observe system performance during
| typical levels of activity. For example, statistics gathered when no jobs are running
| on the system are of little value in assessing system performance. To observe the
| system performance, complete the following steps:
| 1. Enter the WRKSYSSTS, WRKDSKSTS, or WRKACTJOB command.
| 2. Allow the system to collect data for a minimum of 5 minutes.
| 3. Press F5 (Refresh) to refresh the display and present the performance data.
| 4. Tune your system based on the new data.
| See the chapter on performance tuning in the Work Management book for details
| on how to work with system status and disk status.
| Use both the WRKSYSSTS and the WRKACTJOB commands when observing the
| performance of your system. With each observation period, you should examine
| and evaluate the measures of system performance against the goals you have set.
Conditions that inhibit the blocking of query data between the AR and the AS are
also listed in the following discussion. These conditions do not apply to the use of
the multiple-row FETCH statement. Any condition listed under each of the following
cases is sufficient to prevent blocking from occurring.
Summarization of rules
In summary, what these rules (including the notes) say is that in the absence of
certain special or unusual conditions, blocking will occur in both of the following
cases:
It will occur if the cursor is read-only (see Note 3) and if:
– Either the application requester or application server is a non-DB2/400.
– Both the application requester and application server are DB2/400s and
ALWBLK(*ALLREAD) is specified and COMMIT(*ALL) is not specified.
It will occur if COMMIT(*ALL) was not specified and all of the following are also
true:
– There is no FOR UPDATE OF clause in the SELECT, and
– There are no UPDATE or DELETE WHERE CURRENT OF statements
against the cursor in the program, and
Notes:
1. A cursor is updatable if it is not read-only (see Note 3), and one of the following
is true:
The select statement contained the FOR UPDATE OF clause, or
There exists in the program an UPDATE or DELETE WHERE CURRENT
OF against the cursor.
2. A cursor is potentially updatable if it is not read-only (see Note 3), and if the
program includes an EXECUTE or EXECUTE IMMEDIATE statement (or when
connected to a non-AS/400 system, any dynamic statement), and a precompile
or bind option is used that caused the package default value to be single-row
protocol.
For DB2/400, this is the ALWBLK(*READ) precompile option (the default).
For DB2, this is CURRENTDATA(YES) on BIND PACKAGE (the default).
For SQL/DS, this is the SBLOCK keyword on SQLPREP.
For DB2/2, this is /K=UNAMBIG on SQLPREP or SQLBIND (the default).
3. A cursor is read-only if one or more of the following conditions are true:
The DECLARE CURSOR statement specified an ORDER BY clause but
did not specify a FOR UPDATE OF clause.
The DECLARE CURSOR statement specified a FOR FETCH ONLY clause.
The DECLARE CURSOR statement specified the SCROLL keyword without
DYNAMIC (OS/400 only).
One or more of the following conditions are true for the cursor or a view or
logical file referenced in the outer subselect to which the cursor refers:
– The outer subselect contains a DISTINCT keyword, GROUP BY clause,
HAVING clause, or a column function in the outer subselect.
– The select contains a join function.
– The select contains a UNION operator.
– The select contains a subquery that refers to the same table as the
table of the outer-most subselect.
– The select contains a complex logical file that had to be copied to a
temporary file.
– All of the selected columns are expressions, scalar functions, or con-
stants.
– All of the columns of a referenced logical file are input only (OS/400
only).
In the DB2 for AS/400 to DB2 for AS/400 environment, the query block size is
determined by the size of the buffer used by the database manager. The default
size is 4K. This can be changed on application servers that are at the Version 2,
Release 3 or higher level. In order to do this, use the SBMRMTCMD CL command
to send and execute an OVRDBF command on the AS. Besides the name of the
file being overridden, the OVRDBF command should contain OVRSCOPE(*JOB)
and SEQONLY(*YES nnn). The number of records desired per block replaces nnn
in the SEQONLY parameter. Increasing the size of the database buffer not only can
reduce communications overhead, but can also reduce the number of calls to the
database manager to retrieve the rows.
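For example, to request blocks of 100 rows for a file, you might send the override to the AS as follows (the file and DDM file names here are hypothetical):
SBMRMTCMD CMD('OVRDBF FILE(INVENT) OVRSCOPE(\*JOB) +
SEQONLY(\*YES 100)') DDMFILE(SPIFFY/INVENTDDM)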
You can also change the query block size using an SQL CALL statement (a stored
procedure) from non-AS/400 systems or between AS/400 systems.
You must then resolve the problem or obtain customer support assistance to
resolve the problem. To do this, you need:
An understanding of the OS/400 program support.
A good idea of how to decide if a problem is on an application requester (AR)
or an application server (AS).
Familiarity with using OS/400 problem management functions.
The AS/400 system and its attached devices are able to detect some types of prob-
lems. These are called system-detected problems. When a problem is detected,
several operations take place:
An entry in the Product Activity Log is created
A problem record is created
A message is sent to the QSYSOPR message queue
Information is recorded in the error log and the problem record. The alert is then
sent to the service provider if the service provider is either an alert focal point or
the network node server for the system with the problem. When some alerts are
sent, a spooled file of FFDC information is also created. The error log and the
problem record may contain the following information:
Vital product data
Configuration information
Reference code
The name of the associated device
Additional failure information
User-detected problems are usually related to program errors that can cause any
of the following problems to occur:
Job problems
Incorrect output
Messages indicating a program failure
Device failure not detected by the system
Poor performance
The AS/400 system tracks both user- and system-detected problems using the
problem log and problem manager. A problem state is maintained from when a
problem is detected (OPENED) to when it is resolved (CLOSED) to assist you with
tracking. Alert and alert management capabilities extend the problem management
support to include problems occurring on other AS/400 systems in a distributed
relational database network. For more information, see “AS/400 Problem Log” on
page 9-21.
[Flowcharts RV2W731-1 and RV2W732-2: decision paths for bad-data and looping problems on the application requester. For bad data, locate the failing statement and run it on the application server locally or with interactive SQL; if it still fails, the problem is in the relational database on the AS, and you run ANZPRB. For a looping problem, determine whether the application requester or the application is looping; if the application is looping, contact the application programmer, otherwise run ANZPRB.]
Figure 9-2. Resolving Wait, Loop, or Performance Problems on the Application Requester
| You can find the AR job name by looking at the job log on the AS. For more infor-
| mation about finding jobs on the AS, see “Locating Distributed Relational Database
| Jobs” on page 6-6. When you need to check the AS job, use the Work with Job
| (WRKJOB), Work with Active Job (WRKACTJOB), or Work with User Job
| (WRKUSRJOB) commands to locate the job on the AS. For information on using
| these commands, see “Working with Jobs” on page 6-1, “Working with User Jobs”
| on page 6-2, and “Working with Active Jobs” on page 6-3. From one of these job
| displays, look at the program stack to see if the AS is looping. If it is looping, use
| problem analysis to handle the problem. If it is not looping, check the program
| stack for WAIT with QCNTRCV, which means the AS is waiting for the AR. If both
| systems are in this communications wait state, there is a problem with your
| network. If the AS is not in a wait state, there is a performance issue that may have
| to be addressed.
| The first time you connect to DB2 for AS/400 from a PC using a product like DB2
| Connect, if you have not already created the SQL packages for the product in DB2
| for AS/400, the packages will be created automatically, and the NULLID collection
| may need to be created automatically as well. This can take a long time and give
| the appearance of a performance problem. However, it should be just a one-time
| occurrence.
| A long delay will occur if the system to which you are trying to connect over TCP/IP
| is not available. A several-minute timeout delay will precede the message A remote
| host did not respond within the timeout period. An incorrect IP address in the
| RDB directory will cause this behavior as well.
[Flowchart RV2W733-1: find the application server job and display its program stack. If the application server is looping, run ANZPRB. If it is waiting on QCNTRCV, there is a network problem. Otherwise, there is a performance issue; use ANZPRB to determine it.]
Figure 9-3. Resolving Wait, Loop, or Performance Problems on the Application Server
You can also gather more information from a message than just the line of text that
appears at the bottom of a display. This section discusses how you can copy dis-
plays being viewed by another user and how you can obtain more information
about messages you or a user receive when doing distributed relational database
work.
Copy Screen
The Start Copy Screen (STRCPYSCN) command allows you to be signed on to
your work station and see the same displays being viewed by someone else at
another work station. You must be signed on to the same AS/400 system as the
user. If that user is on a remote system, you can use display station pass-through
to sign on to that system and then enter the STRCPYSCN command to see the other
displays. Screen images can be copied to a database file at the same time they are
copied to another work station or when another work station cannot be used. This
allows you to process this data later and prepares an audit trail for the operations
that occur during a problem situation.
To copy the display image to another display station the following requirements
must be met:
Both displays are defined to the system
Both displays are color or both are monochrome, but not one color and the
other monochrome
Both displays have the same number of character positions horizontally and
vertically
If you type your own display station ID as the sending device, the receiving display
station must have the sign-on display shown when you start copying screen
images. Graphics are copied as blanks.
If not already signed on to the same system, use the following process to see the
displays that another user sees on a remote system:
1. Enter the Start Pass-Through (STRPASTHR) command.
STRPASTHR RMTLOCNAME(KC105)
2. Log on to the target system.
3. Enter the STRCPYSCN command.
STRCPYSCN SRCDEV(KC105)
OUTDEV(*REQUESTER)
OUTFILE(KCHELP/TEST)
SRCDEV specifies the name of the source device, the display station that
is sending the display image. To send your display to another
device, enter the *REQUESTER value for this parameter.
The sending display station’s screens are copied to the other display station. The
image shown at the receiving display station trails the sending display station by
one screen. If the user at the sending display station presses a key that is not
active (such as the Home key), both display stations will show the same display.
While you are copying screens, the operator of the receiving display station cannot
do any other work at that display station until the copying of screens is ended.
To end the copy screen function from the sending display station, enter the End
Copy Screen (ENDCPYSCN) command from any command line and press the
Enter key.
ENDCPYSCN
The display you viewed when you started the copy screen function is shown.
Messages
The AS/400 system sends a variety of system messages that indicate conditions
ranging from simple typing errors to problems with system devices or programs.
The message may be one of the following:
An error message on your current display.
These messages can interrupt your job or sound an alarm. You can display
these messages by typing DSPMSG on any command line.
A message regarding a system problem that is sent to the system operator
message queue and displayed on a separate Work with Messages display.
To see these messages, type DSPMSG QSYSOPR on any system command line.
A message regarding a system problem that is sent to the message queue
specified in a device description.
To see these messages, type DSPMSG message-queue-name on any system
command line.
A message regarding a system problem that is sent to another system in the
network.
These messages are called alerts. See “Alerts” on page 9-23 for how to view
and work with alerts.
The system sends informational or inquiry messages for certain system events.
Informational messages give you status on what the system is doing. Inquiry
messages give you information about the system, but also request a reply.
The remaining four digits (five digits if the prefix is SQ) indicate the sequence
number of the message. The example message ID shown indicates this is a
message from the operating system, number 0083.
Message ID . . . . . . : CPD6A64 Severity . . . . . . : 30
Message type . . . . . : DIAGNOSTIC
Date sent . . . . . . . : 03/29/92 Time sent . . . . . : 13:49:06
From program . . . . . : QUIACT Instruction . . . . : 080D
To program . . . . . . : QUIMGFLW Instruction . . . . : 03C5
Bottom
Press Enter to continue.
You can get more information about a message that is not showing on your display
if you know the message identifier and the library in which it is located. To get this
information enter the Display Message Description (DSPMSGD) command:
DSPMSGD RANGE(SQL0204) MSGF(QSYS/QSQLMSG)
This command produces a display that allows you to select the following informa-
tion about a message:
The text is the same message and message help text that you see on the Addi-
tional Message Information display. The field data is a list of all the substitution
variables defined for the message and their attributes. The message attributes are
the values (when defined) for severity, logging, level of message, alert, default
program, default reply, and dump parameters. You can use this information to help
you determine what the user was doing when the message appeared.
Message Types
On the Additional Message Information display you see the message type and
severity code for the message. Figure 9-5 shows the different message types for
AS/400 messages and their associated severity codes:
A system message exists for each SQLCODE returned from an SQL statement
supported by the DB2/400 program. The message is made available in precompiler
listings, in interactive SQL, or in the job log when running in debug mode.
However, when you are working with an AS that is not an AS/400 system, there
may not be a specific message for every error condition in the following cases:
The error is associated with a function not used by the AS/400 system.
For example, the special register CURRENT SQLID is not supported by
DB2/400, so SQLCODE -411 (SQLSTATE 56040) “CURRENT SQLID cannot
be used in a statement that references remote objects” does not exist.
The error is product-specific and will never occur when using DB2/400.
DB2/400 will never have SQLCODE -925 (SQLSTATE 56021), “SQL commit or
rollback is invalid in an IMS or CICS environment.”
If the AS subsystem determines that it cannot start the job (for example, the user
profile does not exist on the AS, the user profile exists but is disabled, or the user
is not properly authorized to the requested objects on the AS), the subsystem
sends a message, CPF1269, to the QSYSMSG message queue (or QSYSOPR
when QSYSMSG does not exist). The CPF1269 message contains two reason
codes (one of the reason codes may be zero, which can be ignored).
The nonzero reason code gives the reason the program start request was rejected.
Because the remote job was to have started on the AS, the message and reason
codes are provided on the AS system, and not the AR system. The user at the AR
only knows that the program start request failed, not why it failed. The user on the
AR must either talk to the system operator at the AS system, or use display station
pass-through to the AS to determine the reason why the request failed.
For a complete description of the reason codes and their meanings, refer to the ICF
Programming book.
| There are two DRDA SECMECs implemented by DB2 for AS/400: user ID only,
| and user ID with password. The default for an AS/400 server is user ID with pass-
| word. If the AR sends only a user ID to a server with the default SECMEC, the
| above error message with reason code 17 is given.
| The solution for the unsupported SECMEC failure is either to allow the user ID only
| SECMEC at the server by running the CHGDDMTCPA PWDRQD(*NO) command,
| or by sending a password on the connect request. A password can be sent by
| either using the USER/USING form of the SQL CONNECT statement, or by using
| the ADDSVRAUTE command to add the remote user ID and password in a server
| authorization entry for the user profile under which the connection attempt is to be
| made.
| Note that you have to have system value QRETSVRSEC (retain server security
| data) set to '1' to be able to store the remote password in the server authorization
| entry.
| Attention: You must enter the RDB name on the ADDSVRAUTE command in
| upper case for use with DRDA or the name will not be recognized during connect
| processing and the information in the authorization entry will not be used.
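For example, assuming the user profile, server name, and password shown are illustrative, the system value could be set and a server authorization entry added as follows (note the uppercase server name):
CHGSYSVAL SYSVAL(QRETSVRSEC) VALUE('1')
ADDSVRAUTE USRPRF(JONES) SERVER(KC000) USRID(JONES) PASSWORD(SECRET)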
| If you get message SQL7020, SQL package creation failed, when connecting for
| the first time (for any given level of commitment control) to a system that has only
| single-phase commit capabilities, the likely cause is that you accessed the remote
| system as a read-only server and you need to update it to create the SQL package.
| You can verify that by looking at the messages in the joblog. The solution is to do a
| RELEASE ALL and COMMIT to get rid of all connections before connecting, so that
| the connection will be updatable.
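Putting those steps together, the existing connections can be cleared before connecting so that the new connection is updatable (KC000 is an illustrative relational database name):
RELEASE ALL
COMMIT
CONNECT TO KC000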
Application Problems
The best time to handle a problem with an application is before it goes into pro-
duction. However, it is impossible to anticipate all the conditions that will exist for
an application when it gets into general use. The job log of either the AR or the AS
can tell you that a package failed; the listing of the program or the package can tell
you why it failed. The SQL compilers provide diagnostic tests that show the
SQLCODEs generated by the precompile process on the diagnostic listing. For
Integrated Language Environment* (ILE*) precompiles, you can optionally specify
OPTION(*XREF) and OUTPUT(*PRINT) to print a precompile source and cross-
reference listing. For non-ILE precompiles, you can optionally specify *SOURCE
and *XREF on the OPTIONS parameter of the Create SQL Program (CRTSQLxxx)
commands to print a precompile source and cross-reference listings.
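For example, an ILE C precompile that requests a source and cross-reference listing might be entered as follows (the object and source file names are illustrative):
CRTSQLCI OBJ(MYLIB/UPDATEPGM) SRCFILE(MYLIB/QCSRC) +
OPTION(\*XREF) OUTPUT(\*PRINT)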
Listings
The listing from the Create SQL program (CRTSQLxxx) command shown in
Figure 9-7 on page 9-16 provides the following kinds of information:
The values supplied for the parameters of the precompile command
The program source
The identifier cross-references
The messages resulting from the precompile
Precompiler Listing
5763ST1 V3R1M0 940909 Create SQL ILE C Object UPDATEPGM 04/19/94 14:30:10 Page 3
CROSS REFERENCE
Data Names Define Reference
DEPTCODE **** COLUMN
18
DPT1 **** TABLE IN RWDS
17
RWDS **** COLLECTION
17
5763ST1 V3R1M0 940909 Create SQL ILE C Object UPDATEPGM 04/19/94 14:30:10 Page 4
DIAGNOSTIC MESSAGES
MSG ID SEV RECORD TEXT
SQL0088 0 17 Position 15 UPDATE applies to entire table.
SQL1103 10 17 Field definitions for file DPT1 in RWDS not found.
Message Summary
Total Info Warning Error Severe Terminal
2 1 1 0 0 0
10 level severity errors found in source
19 Source records processed
* * * * * E N D O F L I S T I N G * * * * *
For more information about the SQLCA, see the information on SQLCA and
SQLDA control blocks in the DB2 for AS/400 SQL Reference book.
The DB2 for AS/400 SQL Programming book lists each SQLCODE, the associated
message ID, the associated SQLSTATE, and the text of the message. The com-
plete message can be viewed online by using the Display Message Description
(DSPMSGD) CL command.
Use the Work with Problems (WRKPRB) command to view the problem log. The
following displays show the two views of the problem log:
Press F11 on the first view to see the following display:
System: KC000
Position to . . . . . . . Problem ID
AS/400 problem log support allows you to display a list of all the problems that
have been recorded on the local system. You can also display detailed information
about a specific problem such as the following:
Product type and serial number of device with a problem
Date and time of the problem
Part that failed and where it is located
Problem status
From the problem log you can also analyze a problem, report a problem, or deter-
mine any service activity that has been done. For more information about handling
AS/400 problems, see the chapter on problem handling in the System Operation
book.
For full support1 in handling distributed relational database problems, alerts and
alert logging can be enabled using the Change Network Attributes (CHGNETA)
command. OS/400 alert support is discussed in “Alert Support” on page 3-3, and
procedures and examples for setting up alerts are provided in “Configuring Alert
Support” on page 3-16.
03/28/92 15:44:34
Type options, press Enter.
2=Change 4=Delete 5=Display recommended actions 6=Print details
8=Display alert detail 9=Work with problem
Resource
Opt Name Type Date Time Alert Description: Probable Cause
KC000* UNK 05/28 15:19 Resource unavailable: Printer
AS SRV 05/27 21:31 Distributed process failed: Command not re
KC000* LU 05/23 08:29 Operator intervention required: Printer
KC000* UNK 05/23 08:27 Resource unavailable: Printer
AS SRV 05/20 11:49 Distributed process failed: Command not re
KC000* UNK 05/20 11:26 Resource unavailable: Printer
AS SRV 05/20 10:47 Distributed process failed: Relational dat
AS SRV 05/20 10:31 Distributed process failed: Command not re
KC000* CTL 05/20 09:46 Unable to communicate with remote node: Co
KC000* CTL 05/20 03:23 Unable to communicate with remote node: Co
KC000* UNK 05/19 15:32 Resource unavailable: Printer
AS SRV 05/19 14:37 Distributed process failed: Invalid data s
More...
F3=Exit F10=Show new alerts F11=Display user/group F12=Cancel
F13=Change attributes F20=Right F21=Automatic refresh F24=More keys
Alert message descriptions are contained in the QHST log. Use the Display Log
(DSPLOG) command and specify QHST, or the Display Message (DSPMSG)
command and specify QSYSOPR to see the alert message description.
The AS/400 system enables a subset of the DRDB messages listed in Figure 9-6
on page 9-11 to trigger alerts for distributed relational database support. If an error
is detected at the AS, a DDM message is sent to the AR. The AR generates an
alert based on that DDM message.
| 1 Alerts can still be generated and logged locally for applications that use DRDA over TCP/IP, but the alert messages do not flow
| over TCP/IP.
The following alerts are generated for AS/400 distributed relational database
support:
When alerts are sent from some modules that support a distributed relational data-
base, a spooled file that contains extensive diagnostic information is also created.
This data is called first-failure data capture (FFDC) information.
For more information about AS/400 alerts, see the DSNX Support book.
This command prints a copy of the user's job log, or places it in an output queue
for printing.
Another way to print the job log is by specifying LOG(4 00 *SECLVL) on the appli-
cation job description. After the job is finished, all messages are logged to the job
log for that specific job. You can print the job log by locating it on an output queue
and running a print procedure. See “Using the Job Log” on page 6-5 for information
on how to locate jobs and job logs on the system.
The job log for the AS may also be helpful in diagnosing problems. See “Locating
Distributed Relational Database Jobs” on page 6-6 for information on how to find
the job name for the AS job.
| There are two conditions under which the job log will be saved:
| If the program QCNTEDDM detects that a serious error occurred in processing
| the request that ended the connection
| If the prestart job was being serviced (by use of the STRSRVJOB command)
| Keeping the job log for the first condition is designed to retain information
| useful for diagnosing serious unexpected errors. Keeping it for serviced jobs
| provides a way to force the job log to be kept. For example, if you want the
| SQL optimizer data that is emitted when running under debug, you can start a
| service job against the prestart job before the connection is made.
| The job logs will not be stored under the prestart job ID. To find them, run
| the following command:
| WRKJOB userid/QPRTJOB
| where userid is the user ID used on the CONNECT to the AS. If you do not know
| that user ID, you can find it with the DSPLOG command on the AS. Look for the
| following message:
| DDM job xxxx servicing user yyy on ddd at ttt.
| To print the product activity log for a system on which you are signed on, do the
| following:
| 1. Type the Print Error Log (PRTERRLOG) command on any command line and
| press F4 (Prompt). The Print Error Log display is shown.
| 2. Type the parameter value for the kind of log information you want to print and
| press the Enter key. The log information is sent to the output queue identified
| for your job.
| 3. Enter the Work with Job (WRKJOB) command. The Work with Job display is
| shown.
| 4. Select the option to work with spooled files. The Work with Job Spooled Files
| display is shown.
| 5. Look for the log file you just created at or near the bottom of the spooled file
| list.
| 6. Type the work with printing status option in the Opt column next to the log file.
| The Work with Printing Status display is shown.
| 7. On the Work with Printing Status display, use the change status option to
| change the status of the file and specify the printer to print the file.
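In outline, the command sequence for these steps is as follows; PRTERRLOG is
prompted here rather than given parameters:

   PRTERRLOG    /* Press F4 to prompt; choose the log information to print */
   WRKJOB       /* Then work with your job's spooled files to find the log */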
Trace Job
Sometimes a problem cannot be tracked to a specific program.
You can trace module flow, OS/400 data acquisition (including CL commands), or
both using the Trace Job (TRCJOB) command. TRCJOB logs all of the called pro-
grams. As the trace records are generated, the records are stored in an internal
trace storage area. When the trace is ended, the trace records can be written to a
spooled printer file (QPSRVTRC) or directed to a database output file.
The TRCJOB command should be used when the problem analysis procedures do
not supply sufficient information about the problem. For distributed database appli-
cations, the command is useful for capturing distributed database request and
response data streams.
You will see a spooled file with a name of QPSRVTRC. The spooled file contains
your trace. For more information on the use of trace job, see Appendix C, “Inter-
preting Trace Job and FFDC Data” on page C-1.
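As a sketch, a minimal trace of a distributed database job might look like the
following. The TRCTYPE and MAXSTG values are illustrative, and the OUTPUT value
assumes the spooled-file option that produces QPSRVTRC:

   TRCJOB SET(*ON) TRCTYPE(*ALL) MAXSTG(2048)  /* Start tracing this job  */
   /* Run the failing distributed database request, then:                 */
   TRCJOB SET(*OFF) OUTPUT(*PRINT)             /* Write trace to QPSRVTRC */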
Communications Trace
If you get a message in the CPF3Exx range or the CPF91xx range when using
DRDA to access a distributed relational database, you should run a communi-
cations trace. The following list shows common messages you might see in these
ranges.
The communications trace function lets you start or stop a trace of data on commu-
nications configuration objects. After you have run a trace of data, the data can be
formatted for printing or viewing. You can view the printer file only in the output
queue.
Communication trace options run under system service tools (SST). SST lets you
use the configuration objects while communications trace is active. Data can be
traced and formatted for any communications type you can use in a distributed
database network.
The AS/400 communications trace can run from any display connected to the
system. Anyone with a special authority (SPCAUT) of *SERVICE can run the trace
on an AS/400 system. Communications trace supports all line speeds. See the
Communications Management book for the maximum aggregate line speeds on the
protocols that are available on the communications controllers.
Whenever possible, start the communications trace before varying on the lines.
This gives you the most accurate sample of your line as it is varied on.
| To run an APPC trace and to work with its output, you have to know on what line,
| controller, and device you are running. If you do not have this information, refer to
| “Finding Your Line, Controller and Device Descriptions.”
| To format the output of a TCP/IP trace, you should know the IP addresses of the
| source and target systems to avoid getting unwanted data in the trace.
The following commands start, stop, print, and delete communications traces:
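These are, on current releases, the Start, End, Print, and Delete Communications
Trace commands. The following sketch assumes a line description named KCLINE (a
placeholder):

   STRCMNTRC CFGOBJ(KCLINE) CFGTYPE(*LIN)   /* Start tracing the line     */
   /* Re-create the failure, then end, format, and clean up the trace:    */
   ENDCMNTRC CFGOBJ(KCLINE) CFGTYPE(*LIN)
   PRTCMNTRC CFGOBJ(KCLINE) CFGTYPE(*LIN)
   DLTCMNTRC CFGOBJ(KCLINE) CFGTYPE(*LIN)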
If you are running on a Version 2, Release 1.1 or earlier system, the preceding
commands are not available. Instead, you have to use the System Service Tools
(SST). Start SST with the Start System Service Tools (STRSST) command. For
more information about the STRSST command and details on communication
traces see the AS/400 Licensed Internal Code Diagnostic Aids - Volume 1 book.
The value for the RMTLOCNAME keyword is the application server's system name.
The WRKCFGSTS command displays all devices that have the specified system
name as the remote location name. You can tell which device is in use because
you can vary on only one device at a time. Use option 8 to work with the device
description and then option 5 to display it. The attached controller field gives the
name of your controller. You can use the WRKCFGSTS command to work with the
controller and device descriptions. For example:
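A command of the following form works with a controller description; the
controller name KCCTL01 is a placeholder:

   WRKCFGSTS CFGTYPE(*CTL) CFGD(KCCTL01)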
The CFGD values are the controller names acquired from the device descriptions in
the first example in this section.
The output from this command also includes the name of the line description that
you need when working with communications traces. If you select option 8 and then
option 5 to display the controller description, the active switched line parameter dis-
plays the name of the line description. The LAN remote adapter address gives the
token-ring address of the remote system.
| Another way to find the line name is to use the WRKLIND command, which lists all
| of the line descriptions for the system.
The following are tips on how to locate FFDC data on an AS/400 system. This
information is most useful if the failure causing the FFDC data output occurred on
the application server (AS). The FFDC data for an application requester (AR) can
usually be found in one of the spooled files associated with the job running the
application program.
1. Execute a DSPMSG QSYSOPR command and look for a Software problem
detected in Qccxyyyy message in the QSYSOPR message log. (cc in the
program name is usually RW, but could be CN or SQ.) The presence of this
message indicates that FFDC data was produced. You can use the help key to
get details on the message. The message help gives you the problem ID,
which you can use to identify the problem in the list presented by the WRKPRB
command. You may be able to skip this step because the problem record, if it
exists, may be at or near the top of the list.
2. Enter the WRKPRB command and specify the program name (Qccxyyyy) from
the Software problem detected in Qccxyyyy message. Use the program name
to filter out unwanted list items. When a list of problems is presented, specify
option 5 on the line containing the problem ID to get more problem details,
such as symptom string and error log ID.
3. When you have the error log ID, enter the STRSST command. On the first
screen, select Start a service tool. On the next screen, enter 1 to select
Error log utility. On the next screen, enter 2 to select Display or print by
error log ID. In the next screen, you can:
Enter the error log ID.
Enter Y to get the hexadecimal display.
Select the Print or Display option.
The Display option gives 16 bytes per line instead of 32, which can be useful
for on-line viewing and for printing screens on an 80-character workstation
printer.
The hexadecimal data contains the first 1K bytes of the FFDC dump data, pre-
ceded by some other data. The start of the FFDC data is identified by the FFDC
data index. The name of the target job (if this is on the application server) is before
the data index. If the FFDC dump spool file has not been deleted, use this fully
qualified job name to find the spool file. If the spool file is missing, either:
Use the first 1K of the dump stored in the error log.
Recreate the problem if the 1K of FFDC data is insufficient.
| If you do need to trace the connect statement, or do not have time to do manual
| setup on the server after the connect, you will need to anticipate what prestart job
| will be used for the connection before it happens. One way to do that is to prevent
| other users from connecting during the time of your test, if possible, and end all of
| the prestart jobs except one.
| You can force the number of prestart jobs to be 1 by setting the following parame-
| ters on the CHGPJE command for QRWTSRVR running in QSYSWRK to the
| values specified below:
| Initial number of jobs: 1
| Threshold: 1
| Additional number of jobs: 0
| Maximum number of jobs: 1
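| Assuming the standard keywords of the Change Prestart Job Entry command, these
| values can be set with a command like the following sketch:
|
|    CHGPJE SBSD(QSYSWRK) PGM(QSYS/QRWTSRVR) +
|           INLJOBS(1) THRESHOLD(1) ADLJOBS(0) MAXJOBS(1)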
| If you use this technique, be sure to change the parameters back to values that are
| reasonable for your environment; otherwise, users will get the message that 'A
| connection with a remote socket was reset by that socket' when trying to
| connect when the one prestart job is busy.
| It can be helpful to make a note of the special TPN in the text of the RDB directory
| entry as a reminder to change it back when you are finished with debugging.
| Creating Your Own TPN for Debugging a DB2 for AS/400 AS Job
| It is possible for you to create your own TPN by compiling a CL program containing
| debug statements and a TFRCTL QSYS/QCNTEDDM statement at the end. The
| advantage of this is that you do not need any manual intervention when doing the
| connect. An example of such a program follows:
| PGM
| MONMSG CPF0000
| STRDBG UPDPROD(*YES) PGM(CALL/QRWTEXEC) MAXTRC(9999)
| ADDBKP STMT(CKUPDATE) PGMVAR((*CHAR (SQLDA@))) OUTFMT(*HEX) +
|        LEN(1400)
| ADDTRC PGMVAR((DSLENGTH ()) (LNTH ()) (FDODTA_LNTH ()))
| TRCJOB *ON TRCTYPE(*DATA) MAXSTG(2048) TRCFULL(*STOPTRC)
| TFRCTL QSYS/QCNTEDDM
| ENDPGM
| Be aware that when you change the TPN of an RDB, all connections from that AR
| will use the new TPN until you change it back. This could cause surprises for
| unsuspecting users, such as poor performance, long waits for operator responses,
| and the filling up of storage with debug data.
For example:
:nick.RCHASLAI :tpn.QCNTSRVC
:luname.VM4GATE RCHASLAI
:modename.MODE645
:security.NONE
The same SQL objects are used for both local and distributed applications, except
that one object, the SQL package, is used exclusively for distributed relational
database support. You create the program using the Create SQL Program
(CRTSQLxxx) command. The xxx in this command refers to the host language: CI,
CBL, CBLI, FTN, PLI, RPG, or RPGI. The SQL package may be a product of the
precompile in this process. The Create SQL Package (CRTSQLPKG) command
creates SQL packages for existing distributed SQL programs.
You must have the DB2/400 Query Manager and SQL Development Kit licensed
program installed to precompile programs with SQL statements. However, you can
create SQL packages from existing distributed SQL programs with only the com-
piled program installed on your system. The DB2/400 Query Manager and SQL
Development Kit licensed program also allows you to use interactive SQL to access
a distributed relational database. This is helpful when you are debugging programs
because it allows you to test SQL statements without having to precompile and
compile a program.
You can use either of two naming conventions in DB2/400 programming: system
(*SYS) and SQL (*SQL). The naming convention you use affects the method for
qualifying file and table names. It also affects security and the terms used on the
interactive SQL displays. Distributed relational database applications can access
objects on another AS/400 system using either naming convention. However, if
your program accesses a relational database on a non-AS/400 system, only SQL
names can be used. Select the naming convention using the NAMING parameter
on the Start SQL (STRSQL) command or the OPTION parameter on one of the
CRTSQLxxx commands.
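For example, the same query is qualified differently under the two conventions;
the collection and table names are taken from the examples in this chapter:

   STRSQL NAMING(*SYS)
   SELECT * FROM SPIFFY/REPAIR1

   STRSQL NAMING(*SQL)
   SELECT * FROM SPIFFY.REPAIR1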
You can also use the DFTRDBCOL parameter on the CRTSQLPKG command to
change the default collection of a package. After an SQL program is compiled you
can create a new SQL package to change the default collection. See “Create SQL
Package (CRTSQLPKG) Command” on page 10-28 for a discussion of all the
parameters of the CRTSQLPKG command.
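For example, the following sketch re-creates a package so that unqualified names
in it resolve to a different default collection; the program, relational
database, and collection names reuse those from examples in this chapter:

   CRTSQLPKG PGM(SPIFFY/FIXTOTAL) RDB(KC105) DFTRDBCOL(SPIFFY)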
There are two types of CONNECT statements with the same syntax but different
semantics:
CONNECT (Type 1) is used for remote unit of work.
CONNECT (Type 2) is used for distributed unit of work.
Most SQL statements can be remotely prepared and executed with the following
restrictions:
The remote unit of work connection state transitions are as follows:
  A successful CONNECT moves an activation group from the connectable and
  unconnected state to the connectable and connected state; a CONNECT issued
  in the connected state leaves it connectable and connected.
  An SQL statement other than CONNECT, COMMIT, or ROLLBACK moves the activation
  group from the connectable and connected state to the unconnectable and
  connected state.
  A ROLLBACK or a successful COMMIT returns it from the unconnectable and
  connected state to the connectable and connected state.
  A CONNECT with a system failure, or a COMMIT after the connection is
  released, moves it from the connectable and connected state to the
  connectable and unconnected state.
  A system failure with rollback moves it from the unconnectable and connected
  state to the connectable and unconnected state.
Figure 10-1. Remote Unit of Work Activation Group Connection State Transition
The initial state of an activation group is connectable and connected. The applica-
tion server to which the activation group is connected is determined by the RDB
parameter on the CRTSQLxxx and STRSQL commands and may involve an implicit
CONNECT operation. An implicit CONNECT operation cannot occur if an implicit or
explicit CONNECT operation has already successfully or unsuccessfully occurred.
Thus, an activation group cannot be implicitly connected to an application server
more than once.
Figure 10-2. Application-Directed Distributed Unit of Work Connection and Activation Group
Connection State Transitions
A connection in the dormant state is placed in the current state using the SET
CONNECTION statement. When a connection is placed in the current state, the
previous current connection, if any, is placed in the dormant state. No more than
one connection in the set of existing connections of an activation group can be
current at any time. Changing the state of a connection from current to dormant or
from dormant to current has no effect on its held or released state.
An activation group in the unconnected state enters the connected state when it
successfully executes a CONNECT or SET CONNECTION statement.
If an activation group does not have a current connection, the activation group
is in the unconnected state. The CURRENT SERVER special register contents are
equal to blanks. The only SQL statements that can be executed are CONNECT,
DISCONNECT, SET CONNECTION, RELEASE, COMMIT, and ROLLBACK.
An activation group in the connected state enters the unconnected state when its
current connection is intentionally ended or the execution of an SQL statement is
unsuccessful because of a failure that causes a rollback operation at the application
server and loss of the connection. Connections are intentionally ended when an
activation group successfully executes a commit operation and the connection is in
the released state, or when an application process successfully executes the DIS-
CONNECT statement.
Running with both RUW and DUW connection management: Programs com-
piled with RUW connection management can be called by programs compiled with
DUW connection management. SET CONNECTION, RELEASE, and DISCON-
NECT statements can be used by the program compiled with RUW connection
management to work with any of the active connections. However, when a program
compiled with DUW connection management calls a program compiled with RUW
connection management, CONNECTs that are performed in the program compiled
with RUW connection management will attempt to end all active connections for the
activation group as part of the CONNECT. Such CONNECTs will fail if any of the
active connections use protected conversations. When creating packages for
programs compiled with DUW connection management after creating a package for a
program compiled with RUW connection management, either run with DDMCNV(*DROP)
or perform a RCLDDMCNV after creating the package for the programs compiled
with DUW connection management.
Programs compiled with DUW connection management can also be called by pro-
grams compiled with RUW connection management. When the program compiled
with DUW connection management performs a CONNECT, the connection per-
formed by the program compiled with RUW connection management is not discon-
nected. This connection can be used by the program compiled with DUW
connection management.
For a distributed program, the implicit connection is to the relational database spec-
ified on the RDB parameter. For a nondistributed program, the implicit connection is
to the local relational database.
SQL will end any active connections in the default activation group when SQL
becomes not active. SQL becomes not active when:
The application requester detects that the first active SQL program for the
process has ended and all of the following are true:
– There are no pending SQL changes
– There are no connections using protected conversations
– A SET TRANSACTION statement is not active
– No programs that were precompiled with CLOSQLCSR(*ENDJOB) were run
If there are pending changes, protected conversations, or an active SET
TRANSACTION statement, then SQL is placed in the exited state. If programs
precompiled with CLOSQLCSR(*ENDJOB) were run, then SQL remains active for
the default activation group until the job ends.
A unit of work ends while SQL is in the exited state. This occurs when you
issue a COMMIT or ROLLBACK command outside of an SQL program.
For a distributed program, the implicit connection is made to the relational database
specified on the RDB parameter. For a nondistributed program, the implicit con-
nection is made to the local relational database.
PROC: FIXTOTAL;
.
.
.
EXEC SQL
SELECT * INTO :SERVICE          (A)
FROM REPAIRTOT;
EXEC SQL
COMMIT;
.
.
.
END FIXTOTAL;
(A) Statement run on the local relational database
Another program, such as the following example, could gather the same information
from Spiffy dealerships in the Kansas City region. This is an example of a distrib-
uted program that is implicitly connected and disconnected:
PROC: FIXES;
.
.
.
EXEC SQL
SELECT * INTO :SERVICE          (B)
FROM SPIFFY.REPAIR1;
Explicit CONNECT
The CONNECT statement is used to explicitly connect an AR to an identified AS.
This SQL statement can be embedded within an application program or you can
issue it using interactive SQL. The CONNECT statement is used with a TO or
RESET clause. A CONNECT statement with a TO clause allows you to specify
connection to a particular AS relational database. The CONNECT statement with a
RESET clause specifies connection to the local relational database.
When you issue (or the program issues) a CONNECT statement with a TO or
RESET clause, the AS identified must be described in the relational database direc-
tory. See “Using the Relational Database Directory” on page 5-5 for more informa-
tion on how to work with this directory. The AR must also be in a connectable state
for the CONNECT statement to be successful.
The CONNECT statement has different effects depending on the connection man-
agement method you use. For RUW connection management, the CONNECT
statement has the following effects:
When a CONNECT statement with a TO or RESET clause is successful, the
following occurs:
– Any open cursors are closed, any prepared statements are discarded, and
any held resources are released from the previous AS if the application
process was placed in the connectable state through the use of COMMIT
HOLD or ROLLBACK HOLD SQL statements, or if the application process
is running COMMIT(*NONE).
– The application process is disconnected from its previous AS, if any, and
connected to the identified AS.
– The name of the AS is placed in the Current Server special register.
– Information that identifies the type of AS is placed in the SQLERRP field of
the SQL communication area (SQLCA).
For DUW connection management, the CONNECT statement has the following
effects:
When a CONNECT statement with a TO or RESET clause is successful, the
following occurs:
– The name of the AS is placed in the Current Server special register.
– Information that identifies the type of AS is placed in the SQLERRP field of
the SQL communication area (SQLCA).
– Information on the type of connection is put into the SQLERRD(4) field of
the SQLCA. Encoded in this field is the following information:
- Whether the connection is to the local relational database or a remote
relational database.
- Whether or not the connection uses a protected conversation.
- Whether the connection is always read-only, always capable of
updates, or whether the ability to update can change between each unit
of work.
See the DB2 for AS/400 SQL Programming book for more information on
SQLERRD(4).
If the CONNECT statement with a TO or RESET clause is unsuccessful
because the AR is not in the connectable state or the server-name is not listed
in the local relational database directory, the connection state of the AR is
unchanged.
A connect to a currently connected AS results in an error.
A connection without a TO or RESET clause can be used to obtain information
about the current connection. This includes the following information:
– Information that identifies the type of AS is placed in the SQLERRP field of
the SQL communications area.
– Information on whether an update is allowed to the relational database is
encoded in the SQLERRD(3) field. A value of 1 indicates that an update
can be performed. A value of 2 indicates that an update cannot be
performed over the connection. See the DB2 for AS/400 SQL Programming
book for more information on SQLERRD(3).
Without CONNECT statements, all you need to do when you change the AS is to
recompile the program with the new relational database name.
The following example shows two forms of the CONNECT statement ((1) and (2))
in an application program:

CRTSQLxxx PGM(SPIFFY/FIXTOTAL) COMMIT(*CHG) RDB(KC105)

PROC: FIXTOTAL;
EXEC SQL CONNECT TO KC105;                              (1)
...
EXEC SQL
SELECT * INTO :SERVICE
FROM REPAIRTOT;
...
EXEC SQL COMMIT;
...
EXEC SQL CONNECT TO MPLS03 USER :USERID USING :PW;      (2)
...
EXEC SQL SELECT ...
...
EXEC SQL COMMIT;
...
END FIXTOTAL;

The example (2) shows the CONNECT statement DB2/400 extension (Version 2
Release 2 and later). This extension provides a way for applications to use the
SECURITY=PGM form of SNA LU 6.2 conversation security. The user ID and password
stored in the host variables USERID and PW are transmitted in the SECURITY
parameter of the ALLOCATE verb. In this example, the ALLOCATE verb flows when
the connection is established to MPLS03. You must specify the user ID and
password with host variables when the CONNECT statement is embedded in a
program.
The following example shows both CONNECT statement forms in interactive SQL.
Note that the password must be enclosed in single quotes.
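For instance, using the relational database names from the previous example (the
user ID and password are placeholders):

   CONNECT TO KC105
   CONNECT TO MPLS03 USER JOE USING 'JOEPW'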
This section gives an overview of the SQL statements that are used with distributed
relational database support and some things for you to consider about coexistence
with other systems. For more detail on these subjects, see the DB2 for AS/400
SQL Reference book and the DB2 for AS/400 SQL Programming book.
| The SQL CALL statement can be used locally, but its primary purpose is to allow a
| procedure to be called on a remote system.
| You might want to use SQL CALL, or stored procedures, as the technique is some-
| times called, for the following reasons:
| To reduce the number of message flows between the AR and AS to perform a
| given function. If a set of SQL operations are to be run, it is more efficient for a
| program at the server to contain the statements and interconnecting logic.
| To allow native database operations to be performed at the remote location.
| To perform nondatabase operations (for example, sending messages or per-
| forming data queue operations) using SQL.
| Note: Unlike database operations, these operations are not protected by com-
| mitment control by the system.
| To access system Application Programming Interfaces (APIs) on a remote
| system.
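| As an illustration, an embedded call to a hypothetical server procedure; the
| procedure name FIXPROC and the host variable are made up for this sketch:
|
|    EXEC SQL CONNECT TO KC105;
|    EXEC SQL CALL SPIFFY.FIXPROC(:DEALERNO);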
| A stored procedure and application program can run in the same or different acti-
| vation groups. It is recommended that the stored procedure be compiled with
| ACTGRP(*CALLER) specified to achieve consistency between the application
| program at the AR and the stored procedure at the AS.
| When a stored procedure is called that issues an inquiry message, the message is
| sent to the QSYSOPR message queue. The stored procedure waits for a response
| to the inquiry message. To have the stored procedure respond to the inquiry
| message, use the ADDRPYLE command and specify *SYSRPYL on the
| INQMSGRPY parameter of the CHGJOB command in the stored procedure.
| For more information on SQL CALL, see the DB2 for AS/400 SQL Reference book.
| Using SQL CALL to the DB2 Universal Database (UDB): If you will be using the
| SQL CALL statement to call stored procedures from an older release of DB2 for
| AS/400 to DB2 UDB, you may need one of these PTFs:
| V3R1 SF36701
| V3R2 SF36699
| V3R6 SF36700
| V3R7 SF33877
| From AS/400 systems running OS/400 V3R1 or V3R6, you will need to call DB2
| UDB procedures with the procedure name in a host variable, as in the following
| example:
| CALL :host-procedure-name(...
| For V3R2 and V3R7, there are PTFs that allow you to embed the procedure name
| in the SQL statement without putting it into a host variable.
| V3R2 SF36535
| V3R7 SF35932
| Stored procedures written in C that are invoked on a platform running DB2 UDB
| cannot use argc and argv as parameters (that is, they cannot be of type main()).
| This differs from AS/400 stored procedures, which must use argc and argv. For
| examples of stored procedures for DB2 UDB platforms, see the \SQLLIB\SAMPLES
| (or /sqllib/samples) subdirectory. Look for outsrv.sqc and outcli.sqc in the C
| subdirectory.
| For UDB stored procedures called by AS/400, make sure that the procedure name
| is in upper case letters. AS/400 currently folds procedure names to upper case.
| This means that a procedure on the UDB server, having the same procedure name
| but in lower case, will not be found. For stored procedures on AS/400, the proce-
| dure names are in upper case.
| Stored procedures on the AS/400 cannot have a COMMIT in them when they are
| created to run in the same activation group as the calling program (the proper way
| to create them). In UDB, a stored procedure is allowed to have a COMMIT, but the
| application designer should be aware that there is no knowledge on the part of DB2
| for AS/400 that the commit occurred.
The program you are writing or maintaining
may have to be compatible with the following:
Other AS/400 systems
Previous AS/400 releases
Systems that are not AS/400 systems
Remember that the SQL statements in a distributed SQL program run on the AS.
Even though the program runs on the AR, the SQL statements are in the SQL
package to be run on the AS. Those statements must be supported by the AS and
be compatible with the collections, tables, and views that exist on the AS. Also, the
users who run the program on the AR must be authorized to the SQL package and
other SQL objects on the AS.
You can write DB2/400 programs that run on application servers that are not
AS/400 systems, and these other platforms may support more or fewer SQL
functions. Statements that are not supported on the DB2/400 AR can be used and
compiled on the AS/400 system when the AS supports the function. SQL programs
written to run on an AS/400 AS only provide the level of support described in this
guide. See the support documentation for the other systems to determine the level
of function they provide.
This behavior differs from that of other systems because in the OS/400 operating
system, COMMITs and ROLLBACKs can be used as commands from the
command line or in a CL program. However, the preceding scenario can lead to
unexpected results in the next SQL program run, unless you plan for the situation.
For example, if you run interactive SQL next (STRSQL command), the interactive
session starts up in the state of being connected to the previous AS with uncom-
mitted work. As another example, if following the preceding scenario, you start a
second SQL program that does an implicit connect, an attempt is made to find and
run a package for it on the AS that was last used. This may not be the AS that you
intended. To avoid these surprises, always commit or roll back the last unit of
work before ending any application program.
Tagging is the primary means to assign meaning to coded graphic characters. The
tag may be in a data structure that is associated with the data object (explicit
tagging). DB2/400 tags character columns with CCSIDs. A CCSID is a 16-bit number
identifying a specific set of encoding scheme identifiers, character set
identifiers, code
page identifiers, and additional coding-related information that uniquely identifies
the coded graphic character representation used. When running applications, data
is not converted when it is sent to another system; it is sent as tagged along with
its CCSID. The receiving job automatically converts the data to its own CCSID if it
is different from the way the data is tagged.
The CDRA defines the following ranges of values for CCSIDs:
00000 Use next hierarchical CCSID
00001 through 28671 IBM-registered CCSIDs
28672 through 65533 Reserved
65534 Refer to lower hierarchical CCSID
65535 No conversion done
See the National Language Support book for a list of the OS/400 CCSIDs and the
Character Data Representation Architecture - Level 1, Registry for a complete list
of the CDRA CCSIDs. For more information on handling CCSIDs, see the DB2 for
AS/400 SQL Reference and the DB2 for AS/400 SQL Programming book.
┌────────────────────┐ ┌────────────────────┐
│ │ │ Additional │
│ Character Set │ │ Coding-Related │
│ Code Page │ ┌───────────┐ │ Required │
│ ├───────┤ CCSID ├────────┤ Information │
└────────────────────┘ └───┬────┬──┘ └────────────────────┘
│ │
│ │
┌──────────┘ └──────────┐
│ │
┌─────────┴──────────┐ ┌──────────┴─────────┐
│ │ │ │
│ Encoding │ │ Character │
│ Scheme │ │ Size │
│ │ │ │
└────────────────────┘ └────────────────────┘
AS/400 Support
The default CCSID for a job on the AS/400 system is specified using the Change
Job (CHGJOB) command. If a CCSID is not specified in this way, the job CCSID is
obtained from the CCSID attribute of the user profile. If a CCSID is not specified on
the user profile, the system gets it from the QCCSID system value. This QCCSID
value is initially set to 65535. If your AS/400 system is in a distributed relational
database with unlike systems, it may not be able to use CCSID 65535. See
Appendix B, “Cross-Platform Access Using DRDA” on page B-1 for things to con-
sider when operating in an unlike environment.
After a job has been initiated, you can change the job CCSID by using the
CHGJOB command. To do this:
1. Enter the Work with Job (WRKJOB) command to get the Work with Jobs
display.
2. Select option 2 (Display job definition attributes).
This locates the current CCSID value so you can reset the job to its original
CCSID value later.
3. Enter the CHGJOB command with the new CCSID value.
The new CCSID value is reflected in the job immediately. However, if the job
CCSID you change is an AR job, the new CCSID does not affect the work being
done until the next CONNECT.
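For example, to change the CCSID of the current job to 37 (an illustrative value;
choose a CCSID appropriate for your data), you could enter:

   CHGJOB CCSID(37)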
Attention: If you change the CCSID of an AS job, the results cannot be predicted.
Source files are tagged with the job CCSID if a CCSID is not explicitly specified on
the Create Source Physical File (CRTSRCPF) or Create Physical File (CRTPF)
command. Externally described database files and tables are
tagged with the job CCSID if a CCSID is not explicitly specified in data description
specification (DDS), in interactive data definition utility (IDDU), or in the CREATE
TABLE SQL statement. For source and externally described files, if the job CCSID
is 65535, the default CCSID based on the language of the operating system is
used. Program described files are tagged with CCSID 65535. Views are tagged
with the CCSID of their corresponding table-level or column-level tags. If a view
is defined over several tables, it is tagged at the column level and assumes the
tags of the underlying columns. Views cannot be explicitly tagged with a CCSID. The
system automatically converts data between the job and the table if the CCSIDs
are not equal and neither of the CCSIDs is equal to 65535.
You cannot directly change the CCSID of a table that is tagged at the column
level or that has views defined on it. To change the CCSID of a tagged table, use the
Change Physical File (CHGPF) command. To change a table with column-level
tagging, you must create it again and copy the data to the new table using
FMTOPT(*MAP) on the Copy File (CPYF) command. When a table has one or more
views defined, you must do the following to change the table:
1. Save the view and table along with their access paths.
2. Delete the views.
3. Change the table.
4. Restore the views and their access paths over the created table.
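As an illustration (the file and library names here are hypothetical), a table
tagged at the table level can be changed directly, while the data of a table with
column-level tagging can be mapped into a newly created table:

   CHGPF FILE(SPIFFY/INVENTORY) CCSID(937)
   CPYF FROMFILE(SPIFFY/INVENTORY) TOFILE(SPIFFY/INVNEW) +
        MBROPT(*ADD) FMTOPT(*MAP)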
Source files and externally described files migrated to DB2/400 that are not tagged
or are implicitly tagged with CCSID 65535 will be tagged with the default CCSID
based on the language of the operating system installed.
All data that is sent between an AR and an AS is sent unconverted. In addition,
the CCSID is also sent. The receiving job automatically converts the data to its own
CCSID if it is different from the way the data is tagged. For example, consider the
following application that is run on a dealership system, KC105.
CRTSQLxxx PGM(PARTS1) COMMIT(*CHG) RDB(KC000)
PROC: PARTS1;
.
.
EXEC SQL
SELECT * INTO :PARTAVAIL
FROM INVENTORY
WHERE ITEM = :PARTNO;
.
.
END PARTS1;
In the above example, the local system (KC105) has the QCCSID system value set
at CCSID 37. The remote regional center (KC000) uses CCSID 937 and all its
tables are tagged with CCSID 937. CCSID processing takes place as follows:
The KC105 system sends an input host variable (:PARTNO) in CCSID 37.
(The DECLARE VARIABLE SQL statement can be used if the CCSID of the job
is not appropriate for the host variable.)
The KC000 system converts :PARTNO to CCSID 937, selects the required
data, and sends the data back to KC105 in CCSID 937.
When KC105 gets the data, it converts it to CCSID 37 and places it in
:PARTAVAIL for local use.
Data conversion between IBM systems with DRDA support includes data types
such as:
Floating point representations
Zoned decimal representations
Byte reversal
Mixed data types
AS/400 specific data types such as:
– DBCS-only
The following example shows how you can add a relational database directory
entry and create a DDM file so that the same job can be used on the AS and target
system.
Note: Either both connections must be protected or both connections must be
unprotected for the conversation to be shared.
Relational Database Directory Entry:
ADDRDBDIRE RDB(KC000) +
           RMTLOCNAME(KC000) +
           TEXT('Kansas City regional database')
DDM File:
CRTDDMF FILE(SPIFFY/UPDATE) +
        RMTFILE(SPIFFY/INVENTORY) +
        RMTLOCNAME(KC000) +
        TEXT('DDM file to update local orders')
The following is a sample program that uses both the relational database directory
entry and the DDM file in the same job on the remote system:
PROC: PARTS1;
OPEN SPIFFY/UPDATE;
.
.
.
CLOSE SPIFFY/UPDATE;
.
.
.
EXEC SQL
SELECT * INTO :PARTAVAIL
FROM INVENTORY
WHERE ITEM = :PARTNO;
EXEC SQL
COMMIT;
.
.
.
END PARTS1;
See the Distributed Data Management book for more information on how to use
AS/400 DDM support.
You can code your distributed DB2/400 programs in a way similar to the coding for
a DB2/400 program that is not distributed. You use the host language to embed the
SQL statements with the host variables. Also, like a DB2/400 program that is not
distributed, a distributed DB2/400 program is prepared using the following
processes:
Precompiling
Compiling
Binding the application
Testing and debugging
This section discusses these steps in the process, outlining the differences for a
distributed DB2/400 program.
The SQL precompile process produces a listing and a temporary source file
member. It can also produce the SQL package depending on what is specified for
the OPTION and RDB parameters of the precompiler command. See “Compiling an
Application Program” on page 10-24 for more information about these parameters.
Listing
The output listing is sent to the printer file specified by the PRTFILE parameter of
the CRTSQLxxx command. The following items are written to the printer file:
Precompiler options
This is a list of all the options specified with the CRTSQLxxx command and the
date the source member was last changed.
Precompiler source
This output is produced if the *SOURCE option is used for non-ILE precompiles
or if the OUTPUT(*PRINT) parameter is specified for ILE precompiles. It shows
each precompiler source statement with its record number assigned by the pre-
compiler, the sequence number (SEQNBR) you see when using the source
entry utility (SEU), and the date the record was last changed.
Precompiler cross-reference
This output is produced if *XREF was specified in the OPTION parameter. It
shows the name of the host variable or SQL entity (such as tables and
columns), the record number where the name is defined, how the name is
defined, and the record numbers where the name occurs.
Precompiler diagnostic list
This output supplies diagnostic messages, showing the precompiler record
numbers of statements in error.
Precompiler Commands
The DB2/400 Query Manager and SQL Development Kit program has seven pre-
compiler commands, one for each of the host languages.
A separate command for each language exists so each language can have param-
eters that apply only to that language. For example, the options *APOST and
*QUOTE are unique to COBOL. They are not included in the commands for the
other languages. The precompiler is controlled by parameters specified when it is
called by one of the SQL precompiler commands. The parameters specify how the
input is processed and how the output is presented.
You can precompile a program without specifying anything more than the name of
the member containing the program source statements as the PGM parameter (for
non-ILE precompiles) or the OBJ parameter (for ILE precompiles) of the
CRTSQLxxx command. SQL assigns default values for all precompiler parameters
(which may, however, be overridden by any that you explicitly specify).
The following briefly describes parameters common to all the CRTSQLxxx com-
mands that are used to support distributed relational database. To see the syntax
and full description of the parameters and supported values, see the DB2 for
AS/400 SQL Programming book.
RDBCNNMTH
Specifies the type of semantics to be used for CONNECT statements: remote
unit of work (RUW) or distributed unit of work (DUW) semantics.
SQLPKG
Specifies the name and library of the SQL package.
USER
Specifies the user name sent to the remote system when starting the conversa-
tion. This parameter is used only if a conversation is started as part of the pre-
compile process.
PASSWORD
Specifies the password to be used on the remote system when starting the
conversation. This parameter is used only if a conversation is started as part of
the precompile process.
REPLACE
Specifies if any objects created as part of the precompile process should be
able to replace an existing object.
The following example creates a COBOL program named INVENT and stores it in
a library named SPIFFY. The SQL naming convention is selected, and every row
selected from a specified table is locked until the end of the unit of recovery. An
SQL package with the same name as the program is created on the remote rela-
tional database named KC000.
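A command similar to the following could be used; the library name is
illustrative, and the exact parameters depend on your environment:

   CRTSQLCBL PGM(SPIFFY/INVENT) +
             OPTION(*SQL) +
             COMMIT(*ALL) +
             RDB(KC000)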
Binding an Application
Before you can run your application program, a relationship between the program
and any referred-to tables and views must be established. This process is called
binding. The result of binding is an access plan. The access plan is a control
structure that describes the actions necessary to satisfy each SQL request. An
access plan contains information about the program and about the data the
program intends to use. For distributed relational database work, the access plan
is stored in the SQL package and managed by the system along with the SQL
program.
SQL automatically attempts to bind and create access plans when the result of a
successful compile is a program or service program object. If the compile is not
successful or the result of a compile is a module object, access plans are not
created. If, at run time, the database manager detects that an access plan is not
valid or that changes have occurred to the database that may improve performance
(for example, the addition of indexes), a new access plan is automatically created.
If the AS is not an AS/400 system, then a bind must be done again using the
CRTSQLPKG command. Binding does three things:
Revalidates the SQL statements using the description in the database.
During the bind process, the SQL statements are checked for valid table, view,
and column names. If a referred-to table or view does not exist at the time of
the precompile or compile, the validation is done at run time. If the table or
view does not exist at run time, a negative SQLCODE is returned.
Selects the access paths needed to access the data your program wants to
process.
In selecting an access path, indexes, table sizes, and other factors are consid-
ered when SQL builds an access plan. The bind process considers all indexes
available to access the data and decides which ones (if any) to use when
selecting a path to the data.
Attempts to build access plans.
If all the SQL statements are valid, the bind process builds and stores access
plans in the program.
If the characteristics of a table or view your program accesses have changed, the
access plan may no longer be valid. When you attempt to use an access plan that
is not valid, the system automatically attempts to rebuild the access plan. If the
access plan cannot be rebuilt, a negative SQLCODE is returned. In this case, you
might have to change the program's SQL statements and reissue the CRTSQLxxx
command to correct the situation.
More than one system will eventually be required for testing. If applications are
coded so that the relational database names can easily be changed by recompiling
the program, changing the input parameters to the program, or making minor mod-
ifications to the program source, most testing can be accomplished using a single
system.
After the program has been tested against local data, the program is then made
available for final testing on the distributed relational database network.
However, to debug a distributed SQL program, you must specify UPDPROD(*YES)
on the Start Debug (STRDBG) command. This is because OS/400 distributed relational
database support uses files in library QSYS, and QSYS is a production library. This
allows data in production libraries to be changed on the AR. Issuing the STRDBG
command on the AR only puts the AR job into debug mode, so your ability to
manipulate data on the AS is not changed.
While in debug mode on the AR, informational messages are entered in the job log
for each SQL statement run. These messages give information about the result of
each SQL statement. A list of SQL return codes and a list of error messages for
distributed relational database are provided in Chapter 9, Handling Distributed
Relational Database Problems.
| If both the AR and AS are AS/400 systems, and they are connected with APPC,
| you can use the Submit Remote Command (SBMRMTCMD) command to start the
| debug mode in an AS job. Create a DDM file as described in “Setting Up DDM
| Files” on page 5-13. The communications information in the DDM file must match
| the information in the relational database directory entry for the relational database
| being accessed. Then issue the command:
| SBMRMTCMD CMD('STRDBG UPDPROD(*YES)') DDMFILE(ddmfile name)
The SBMRMTCMD command starts the AS job if it does not already exist and
starts the debug mode in that job. Use the methods described in “Monitoring Rela-
tional Database Activity” on page 6-1 to examine the AS job log to find the job.
You can also use an SQL CALL statement (stored procedure) from either a
non-AS/400 or another AS/400 to start the debug mode in an AS job. See “SQL
CALL Statement (Stored Procedures)” on page 10-14 for more information.
The following method for putting the AS job into debug mode works with any AR
and a DB2/400 AS.
Sign on to the AS and find the AS job.
Issue the Start Service Job (STRSRVJOB) command from your interactive
job (the job you are using to find the AS job) as shown:
STRSRVJOB (job-number/user-ID/job-name)
The job name for the STRSRVJOB command is the name of the AS job.
Issuing this command lets you issue certain commands from your interactive
job that affect the serviced AS job. Then issue the Start Debug (STRDBG)
command with UPDPROD(*YES) to put the AS job into debug mode.
To end this debug session, either end your interactive job by signing off or use the
End Debug (ENDDBG) command followed by the End Service Job (ENDSRVJOB)
command.
Since the AS job must be put into debug before the SQL statements are run, the
application may need to be changed to allow you time to set up debug on the AS.
The AS job starts as a result of the application connecting to the AS. Your applica-
tion could be coded to enter a wait state after connecting to the AS until debug is
started on the AS.
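For example, the application might call a simple CL program such as the
following sketch, which waits long enough for you to start debug on the AS:

   PGM
   DLYJOB DLY(120)  /* Allow time to start debug in the AS job */
   ENDPGM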
| If you can anticipate the prestart job that will be used for a TCP/IP connection
| before it occurs, such as when only one is waiting for work and there is no
| interference from other clients, you do not need to introduce a delay.
Program References
When a program is created, the OS/400 licensed program stores information about
all collections, tables, views, SQL packages, and indexes referred to in SQL state-
ments in an SQL program.
You can use the Display Program References (DSPPGMREF) command to display
all object references in the program. If the SQL naming convention is used, the
library name is stored in one of three ways:
If the SQL name is fully qualified, the collection name is stored as the name
qualifier.
If the SQL name is not fully qualified, and the DFTRDBCOL parameter is not
specified, the authorization ID of the statement is stored as the name qualifier.
If the SQL name is not fully qualified, and the DFTRDBCOL parameter is speci-
fied, the collection name specified on the DFTRDBCOL parameter is stored as
the name qualifier.
If the system naming convention is used, the library name is stored in one of three
ways:
If the object name is fully qualified, the library name is stored as the name
qualifier.
If the object is not fully qualified, and the DFTRDBCOL parameter is not speci-
fied, *LIBL is stored.
If the SQL name is not fully qualified, and the DFTRDBCOL parameter is speci-
fied, the collection name specified on the DFTRDBCOL parameter is stored as
the name qualifier.
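For example, to print the object references for a program (the program name
here is illustrative), enter:

   DSPPGMREF PGM(SPIFFY/INVENT) OUTPUT(*PRINT)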
You must use a control language (CL) command to create an SQL package
because there is no SQL statement for SQL package creation. You can create an
SQL package in two ways:
Using the CRTSQLxxx command with a relational database name specified in
the RDB parameter. See “Precompiling Programs with SQL Statements” on
page 10-22
Using the CRTSQLPKG command.
When a distributed SQL program is created, the name of the SQL package and an
internal consistency token are saved in the program. These are used at run time to
find the SQL package and verify that the SQL package is correct for this program.
Because the name of the SQL package is critical for running distributed SQL pro-
grams, an SQL package cannot be moved, renamed, duplicated, or restored to a
different library.
PGM
Specifies the qualified name of the program for which the SQL package is
being created.
*LIBL: Specifies that the library list is used to locate the program.
*CURLIB: Specifies that the current library for the job is used to locate the
program. If a current library entry does not exist in the library list, the
QGPL library is used.
library-name: Specifies the library where the program is located.
program-name: Specifies the name of the distributed program for which the
SQL package is being created.
RDB
Specifies the relational database name that identifies the remote database
where the SQL package is being created.
USER
Specifies the user name sent to the remote system when starting the conversa-
tion.
*CURRENT: The user name associated with the current job is used.
user-name: Specifies the user name to be used for the remote job.
PASSWORD
Specifies the password to be used on the remote system.
*NONE: No password is sent. If a user name is specified on the USER
parameter, this value is not valid.
password: Specifies the password of the user name specified on the USER
parameter.
GENLVL
Controls the generation of the SQL package. If error messages are returned
with a severity greater than the GENLVL value, the SQL package is not
created.
10: If a severity level value is not specified, the default severity level is 10.
severity-level: Specify a number from 0 through 40. Some suggested values
are listed below:
10 warnings
20 general error messages
30 serious error messages
40 system detected error messages
Note: There are some errors that cannot be controlled by GENLVL. When
those errors occur, the SQL package is not created.
REPLACE
Specifies whether or not to replace an existing SQL package of the same name
with a newly created SQL package.
*YES: Specifies that if the SQL package already exists, it will be replaced with
the new SQL package.
*NO: Specifies that the create SQL package operation will end if an SQL
package already exists.
DFTRDBCOL
Identifies the default collection name to be used for unqualified names of
tables, views, indexes and SQL packages with static SQL statements.
*PGM: Specifies that the collection name to be used is the same as the
DFTRDBCOL parameter value used when the program was created.
PRTFILE
Specifies the qualified name of the printer device file to which the precompiler
listing is directed. The file should have a minimum record length of 132
characters. If a file with a record length of less than 132 characters is
specified, information is lost.
*LIBL: Specifies the library list used to locate the printer file.
*CURLIB: Specifies that the current library for the job is used to locate the
printer file. If no library entry exists in the library list, QGPL is used.
library-name: Specify the library where the printer file is located.
QSYSPRT: If a file name is not specified, the precompiler listing is directed to
the IBM-supplied printer file QSYSPRT.
printer-file-name: Specify the name of the printer device file to which the pre-
compiler listing is directed.
OBJTYPE
Specifies the type of program for which an SQL package is created.
*PGM: Create an SQL package from the program specified on the PGM param-
eter.
*SRVPGM: Create an SQL package from the service program specified on the
PGM parameter.
MODULE
Specifies a list of modules in a bound program.
*ALL: An SQL package is created for each module in the program. An error
message is sent if none of the modules in the program contain SQL statements
or none of the modules is a distributed module.
Note: CRTSQLPKG can process programs that do not contain more than
1024 modules.
module-name: Specify the names of up to 256 modules in the program for
which an SQL package is to be created. If more than 256 modules exist that
need to have an SQL package created, multiple CRTSQLPKG commands must
be used.
Duplicate module names in the same program are allowed. This command
looks at each module in the program and if *ALL or the module name is speci-
fied on the MODULE parameter, processing continues to determine whether an
SQL package should be created. If the module is created using SQL and the
RDB parameter is specified on the precompile command, an SQL package is
created for the module. The SQL package is associated with the module of the
bound program.
TEXT
Specifies text that briefly describes the program and its function.
*PGMTXT: Specifies that the text is taken from the program.
The following sample command creates an SQL package from the distributed SQL
program INVENT on relational database KC000.
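   CRTSQLPKG PGM(SPIFFY/INVENT) +
             RDB(KC000)

The SPIFFY library qualifier is assumed here from the earlier examples.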
The new SQL package is created with the same options that were specified on the
CRTSQLxxx command.
If errors are encountered while creating the SQL package, the SQL statement being
processed when the error occurred and the message text for the error are written to
the file identified by the PRTFILE parameter. A listing is not generated if no errors
were found during the create SQL package process.
If the CRTSQLxxx command failed to create an SQL package (for example, the
communications line failed during the precompile) but the program was created, the
SQL package can be created without running the CRTSQLxxx command again.
SQLPKG
Specifies the qualified name of the SQL package being deleted. A specific or
generic SQL package name can be specified.
The possible library values are:
*LIBL: All libraries in the user and system portions of the job's library list
are searched.
*CURLIB: The current library is searched. If no library is specified as the
current library for the job, the QGPL library is used.
*USRLIBL: Only the libraries listed in the user portion of the library list are
searched.
*ALL: All libraries in the system, including QSYS, are searched.
*ALLUSR: All nonsystem libraries are searched, including all user-defined
libraries and the QGPL library, not just those in the job's library list.
Libraries whose names start with the letter Q, other than QGPL, are not
searched.
You must have *OBJEXIST authority for the SQL package and at least *EXECUTE
authority for the collection where it is located.
The following command deletes the SQL package PARTS1 in the SPIFFY
collection:
DLTSQLPKG SQLPKG(SPIFFY/PARTS1)
To delete an SQL package on a remote AS/400 system, use the Submit Remote
Command (SBMRMTCMD) command to run the DLTSQLPKG command on the
remote system. See “Submit Remote Command (SBMRMTCMD) Command” on
page 6-8 for how to use the SBMRMTCMD command. You can also use display
station pass-through to sign on to the remote system and delete the SQL package. If
the remote system is not an AS/400 system, pass through to that system using a
remote workstation program and then submit the delete SQL package command
locally on that system.
You must have the following privileges on the SQL package to successfully delete
it:
The system authority *EXECUTE on the referenced collection
The system authority *OBJEXIST on the SQL package
The following example shows how the DROP PACKAGE statement is issued:
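   DROP PACKAGE SPIFFY.PARTS1

The collection and package names here follow the earlier DLTSQLPKG example;
with the system naming convention, the package would be referred to as
SPIFFY/PARTS1.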
Business Requirement
The application for the distributed relational database in this example is parts stock
management in an automobile dealer or distributor network.
This program checks the level of stock for each part in the local part stock table. If
this is below the re-order point, the program then checks on the central tables to
see whether there are any existing orders outstanding and what quantity has been
shipped against each order.
If the net quantity (local stock, plus orders, minus shipments) is still below the
re-order point, an order is placed for the part by inserting rows in the appropriate
tables on the central system. A report is printed on the local system.
Technical Notes
Commitment control
This program uses the concept of Local and Remote Logical Units of
Work (LUW). Since this program uses remote unit of work, it is neces-
sary to close the current LUW on one system (COMMIT) before begin-
ning a new unit of work on another system.
Cursor repositioning
When a LUW is committed and the application connects to another
database, all cursors are closed. This application requires the cursor
reading the part stock file to be re-opened at the next part number. To
achieve this, the cursor is defined to begin where the part number is
greater than the current value of part number, and to be ordered by part
number.
Note: This technique will not work if there are duplicate rows for the
same part number.
C Program Example
| This appendix describes some conditions you need to consider when working with
| another specific IBM product. It is not intended to be a comprehensive list. Many
| problems or conditions like the ones described here depend significantly on your
| application. You can get more information on the differences between the various
| IBM platforms from the IBM SQL Reference Volume 2, SC26-8416, or the DRDA
| Application Programming Guide, SC26-4773.
CCSID Considerations
When you work with a distributed relational database in an unlike environment,
coded character set identifiers (CCSIDs) need to be set up and used properly. The
AS/400 system is shipped with a default value that may need to be changed to
work in an unlike environment. Also, the AS/400 system supports some CCSIDs for
DBCS that are not supported by the DB2 and DB2 Server for VM database man-
agers. This section discusses these two conditions and provides you with a way to
work around them.
As stated in “Coded Character Set Identifier (CCSID)” on page 10-16, the CCSID
used at connection time is determined by the job CCSID. When a job begins, its
CCSID is determined by the user profile the job is running under. The user profile
can, and as a default does, use the system value QCCSID.
| If you are connecting to a system that does not support the system default CCSID,
| you need to change your job CCSID. You can change the job CCSID by using the
| Change Job (CHGJOB) command. However, this solution lasts only for the job you
| are currently working with; for your next job, you will have to change the job
| CCSID again.
A more permanent solution is to change the CCSID designated by the user profiles
used in the distributed relational database. When you change the user profiles you
affect only those users that need to have their data converted. If you are working
with a DB2/400 AS, you need to change the user profile that the AS uses.
The default CCSID value in a user profile is *SYSVAL. This references the
QCCSID system value. You can change this system value, and therefore the
default value used by all user profiles, with the Change System Values
(CHGSYSVAL) command. If you do this, you would want to select a CCSID that
the unlike systems in your network support.
If you suspect that you are working with a system that does not support a CCSID
used by your job or your system, look for the following indicators in a job log or
SQLCA:
Message SQ30073
SQLCODE -30073
SQLSTATE 58017
Text Distributed Data Management (DDM) parameter X'0035' not supported.
Message SQL0332
SQLCODE -332
SQLSTATE 57017
Text Total conversion between CCSID &1 and CCSID &2 not valid.
Certain fields in the DB2/400 SQL catalog tables may be defined to have a
DBCS-open data type. This is a data type that allows both double-byte character
set (DBCS) and single-byte character set (SBCS) characters. The CCSID for these
field types is based on the default CCSID shipped with the system.
When these fields are selected from a DB2 or DB2 Server for VM AR, the SELECT
statement may fail because the DB2 and DB2 Server for VM databases may not
support the conversion to this CCSID.
To avoid this error, you must change the DB2 database or the DB2 Server for VM
AR to run with either:
The same mixed-byte CCSID as the DBCS-OPEN fields in the AS/400 SQL
catalog tables.
A CCSID that the system allows conversion of data to when the data is from
the mixed-byte CCSID of the DBCS-OPEN fields in the AS/400 SQL catalog
tables. This CCSID may be a single-byte CCSID if the data in the AS/400 SQL
catalog tables DBCS-OPEN fields is all single-byte data.
| As has been pointed out elsewhere, you need to have an updatable connection to
| the AS when the package is created. You may need to do a RELEASE ALL and
| COMMIT before connecting to the AS to have the package created.
The DB2/2 precompiler parameter that specifies uncommitted read is /I=UR. When
using the DB2/2 command line processor, the command DBM CHANGE SQLISL
TO UR sets the isolation level to uncommitted read.
The command 'export SQLJSETP="-i=n"' can be used with DB2/6000 before per-
forming a program precompile or bind to request the no-commit (NC - do not use
commitment control) isolation level.
Notes:
1. A cursor is updatable if it is not read-only (see Note 3), and one of the following
is true:
The select statement contained the FOR UPDATE OF clause, or
There exists in the program an UPDATE or DELETE WHERE CURRENT
OF against the cursor.
2. A cursor is potentially updatable if it is not read-only (see Note 3), and if the
program includes any dynamic statement, and the /K=UNAMBIG precompile or
bind option was used on SQLPREP or SQLBIND.
3. A cursor is read-only if one or more of the following conditions is true:
The DECLARE CURSOR statement specified an ORDER BY clause but
did not specify a FOR UPDATE OF clause.
The DECLARE CURSOR statement specified a FOR FETCH ONLY clause.
One or more of the following conditions are true for the cursor or a view or
logical file referenced in the outer subselect to which the cursor refers:
– The outer subselect contains a DISTINCT keyword, GROUP BY clause,
HAVING clause, or a column function.
– The select contains a join function.
– The select contains a UNION operator.
– The select contains a subquery that refers to the same table as the
table of the outer-most subselect.
– The select contains a complex logical file that had to be copied to a
temporary file.
– All of the selected columns are expressions, scalar functions, or con-
stants.
– All of the columns of a referenced logical file are input only.
Tables can also be created and other SQL statements executed with the preceding
approach. A REXX or control language (CL) program can improve the usability of
this approach. The following CL program is a simple example of the type of thing
that can be done.
PGM
MONMSG MSGID(CPF0000)
DLTQMQRY MYLIB/QMTEMP
STRSEU MYLIB/SRC QMTEMP
CRTQMQRY MYLIB/QMTEMP MYLIB/SRC
STRQMQRY MYLIB/QMTEMP
ENDPGM
When the SEU program involved in the preceding series of commands displays an
edit screen, enter an SQL statement and save the file. The program then attempts
to process and execute the statement.
Use the AS/400 DSPMSGD command to interpret the code and tokens:
DSPMSGD SQL7008 MSGF(QSQLMSG)
Select option 1 (Display message text) and the system presents the Display
Formatted Message Text display. The three tokens in the message are represented
by &1, &2, and &3 in the display. The reason code in the example message is 3,
which points to Code 3 in the list at the bottom of the display.
| There is only one database¹ for each AS/400. However, in DB2 UDB, tables are
| qualified by a user ID (that of the creator of the table) and reside in one of possibly
| multiple databases on the platform. DB2 Connect has the same notion of using the
| user ID for the collection ID.
| A dynamic query from DB2 Connect to DB2 for AS/400 uses the user ID of the
| target-side job (on the AS/400) as the default collection name if the name of the
| queried table was specified without a collection name. This may not be what the
| user expects and can cause the table not to be found.
| A dynamic query from DB2 for AS/400 to DB2 UDB would have an implied table
| qualifier if it is not specified in the query in the form qualifier.table-name. The
| second-level UDB table qualifier defaults to the user ID of the user making the
| query.
| You may want to create the DB2 UDB databases and tables with a common user
| ID. Remember, for UDB there are no physical collections as there are in DB2 for
| AS/400; there is only a table qualifier, which is the user ID of the creator.
| Granting Privileges
| For any program created on an AS/400 that accesses a UDB database,
| remember to issue the following UDB commands (perhaps from the command line
| processor):
| 1. GRANT ALL PRIVILEGES ON TABLE table-name TO user (possibly 'PUBLIC'
| for user)
| 2. GRANT EXECUTE ON PACKAGE package-name (usually the AS/400 program
| name) TO user (possibly 'PUBLIC' for user)
| When using APPC communications, the remote location name is the name of the
| workstation.
| ¹ The fact that DATABASE is a synonym for COLLECTION in the CREATE COLLECTION SQL statement in DB2 for AS/400 has
| created some confusion about there being only one database per AS/400.
| Consult the UDB product documentation to determine the port number. A common
| value used is 30000. An example DSPRDBDIRE screen showing a properly config-
| ured RDB entry for a UDB server follows.
| Display Relational Database Detail
|
|   Relational database . . . . . . :   SAMPLE
|   Remote location:
|     Remote location . . . . . . . :     9.5.36.17
|     Type  . . . . . . . . . . . . :     *IP
|     Port number or service name . :     30000
|   Text . . . . . . . . . . . . . . :   My UDB server
| The DB2 PREP command can be used to process an application program source
| file with embedded SQL. This processing creates a modified source file containing
| host language calls for the SQL statements and, by default, creates an SQL
| package in the database to which you are currently connected.
This appendix contains an example of the RW component trace data from a job
trace with an explanation of the trace data output. Some of this information is
helpful with interpreting communications trace data. This appendix also shows an
example of a first-failure data capture printout of storage, with explanations of the
output.
Note: There is an exception to the use of the ‘<’ delimiters to determine the end of
data. In certain rare circumstances where a received data stream is being
dumped, the module that writes the trace data is unable to determine where
the end of the data stream is. In that case, the program dumps the entire
receive buffer, and as a warning that the length of the data dumped is
greater than that of the data stream, it replaces the ‘<<<...’ delimiter with a
string of ‘(’ characters.
Following the ‘>>’ prefix is a 7-character string that identifies the trace point. The
first 2 characters, ‘RW’, identify the component. The second 2 characters identify
the RW function being performed. The ‘QY’ indicates the query function, which
corresponds to the DDM commands OPNQRY, CNTQRY, and CLSQRY. The ‘EX’
indicates the EXECUTE function, which corresponds to the DDM commands
EXCSQLSTT, EXCSQLIMM, and PRPSQLSTT.
The last 2 characters of the 7-byte trace point identifier indicate the nature of the
dumped data or the point at which the dump is taken. For example, SN corre-
sponds to the data stream sent from an AR or an AS, and RC corresponds to the
data stream received by an AR.
The following discussion examines the elements that make up the data stream in
the example. For more information on the interpretation of DRDA data streams, see
the Distributed Relational Database Architecture Reference and the Distributed
Data Management Level 4.0 Architecture Reference books.
The trace data follows the ‘:’ marking the end of the trace point identifier. In this
example, the first 6 bytes of the data stream contain the DDM data stream structure
(DSS) header. The first 2 bytes of this DSS header are a length field. The third
byte, X'D0', is the registered SNA architecture identifier for all DDM data. The
fourth byte is the format identifier (explained in more detail later). The fifth and sixth
bytes contain the DDM request correlation identifier.
The next 2 bytes, X'0010' (decimal 16) give the length of the next DDM object,
which in this case is identified by the X'2205' which follows it and is the code point
for the OPNQRYRM reply message.
Following the 16-byte reply message is a 6-byte DSS header for the reply objects
that follow the reply message. The first reply object is identified by the X'241A'
code point. It is a QRYDSC object. The second reply object in the example is a
QRYDTA structure identified by the X'241B' code point (split between two lines in
the trace output). As with the OPNQRYRM code point, the preceding 2 bytes give
the length of the object.
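The DSS headers and length-prefixed objects described above can be split apart mechanically. The following Python sketch is illustrative only and is not part of the product: the field layout is taken directly from the description above, and the sample bytes are a synthetic fragment modeled on the OPNQRYRM reply in the example.

```python
import struct

def parse_dss_header(buf):
    """Parse the 6-byte DDM data stream structure (DSS) header described
    above: 2-byte length, X'D0' SNA identifier, format byte, and a
    2-byte request correlation identifier."""
    length, magic, fmt, correlator = struct.unpack(">HBBH", buf[:6])
    assert magic == 0xD0, "all DDM data carries the X'D0' identifier"
    return {"length": length, "format": fmt, "correlator": correlator}

def parse_objects(buf):
    """Split a DSS payload into (code point, data) DDM objects, each
    prefixed by a 2-byte length and a 2-byte code point."""
    objects, i = [], 0
    while i < len(buf):
        ll, cp = struct.unpack(">HH", buf[i:i + 4])
        objects.append((cp, buf[i + 4:i + ll]))
        i += ll
    return objects

# Synthetic reply: a DSS header followed by a 16-byte object whose
# X'2205' code point identifies an OPNQRYRM reply message.
dss = bytes.fromhex("0016D0020001" "00102205" "000000000000000000000000")
header = parse_dss_header(dss)
objs = parse_objects(dss[6:])
```

In a real trace the payload would carry the reply message parameters rather than zeros; only the framing is shown here.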
Looking more closely at the QRYDTA object, you can see a X'FF' following the
X'241B' code point. This represents a null SQLCAGRP (the form of an SQLCA
that flows on the wire). The null form of the SQLCAGRP indicates that it contains
no error or warning information about the associated data. In this case, the associ-
ated data is the row of data from an SQL SELECT operation. It follows the null
SQLCAGRP. Because rows of data as well as SQLCAGRPs are nullable, however,
the first byte that follows the null SQLCAGRP is an indicator containing X'00' that
indicates that the row of data is not null. The meaning of the null indicator byte is
determined by the first bit. A ‘1’ in this position indicates ‘null’. However, all 8 bits
are usually set on when an indicator represents a null object.
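The rule that only the first bit of the indicator byte is significant can be expressed directly. This is a sketch based solely on the description above, not IBM-supplied code:

```python
def is_null(indicator_byte):
    """Per the description above, only the first (high-order) bit of a
    DRDA null indicator is significant: a '1' in that position means null."""
    return (indicator_byte & 0x80) != 0

# X'FF' (all bits on, the usual encoding) and X'80' both indicate null;
# X'00' indicates the value is present.
```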
The format of the row of data is indicated by the preceding QRYDSC object. In this
case, the QRYDSC indicates that the row contains a nullable SMALLINT value, a
nullable CHAR(3) value, and a non-nullable double precision floating point value.
The second byte past the null SQLCAGRP is the null indicator associated with the
SMALLINT field. It indicates the field is not null, and the X'0001' following it is the
field data. The nullable CHAR(3) that follows is present and contains ‘111’.
A second row of data with a null SQLCAGRP follows the first, which in turn is
followed by another 6-byte DSS header. The second half of the format byte (X'2')
contained in that header indicates that the corresponding DSS is a REPLY. The
format byte of the previous DSS (X'53') indicated that it was an OBJECT DSS.
The ENDQRYRM reply message carried by the third DSS requires that it be con-
tained in a REPLY DSS. The ENDQRYRM code point is X'220B'. This reply
message contains a severity code of X'0004', and the name of the RDB that
returned the query data (‘DB2ESYS’).
Following the third DSS in this example is a fourth and final one. Its format byte is
X'03'. The 3 indicates that it is an OBJECT DSS, and the 0 that precedes it
indicates that it is the last DSS of the chain (the chaining bits are turned off).
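The two halves of the format byte can be pulled apart the same way. This is a hedged sketch decoding only the fields discussed above; the complete bit assignments are defined in the DDM architecture reference:

```python
def decode_format_byte(fmt):
    """Split a DSS format byte into the DSS type (low half-byte) and
    the chaining bits (high half-byte) discussed above. Chaining bits
    of zero mean this is the last DSS of the chain."""
    dss_type = {1: "REQUEST", 2: "REPLY", 3: "OBJECT"}[fmt & 0x0F]
    chained = (fmt & 0xF0) != 0
    return dss_type, chained

# X'53' from the example: an OBJECT DSS with chaining bits on.
# X'03': an OBJECT DSS that ends the chain.
# X'02': a REPLY DSS that ends the chain.
```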
The object in this DSS is an SQLCARD containing a non-null SQLCAGRP. The first
byte following the X'2408' SQLCARD code point is the indicator telling us that the
SQLCAGRP is not null. The next 4 bytes, X'00000064', represent the +100
SQLCODE, which means that the query was ended by a ‘row not found’ condition.
The rest of the fields correspond to other fields in an SQLCA. The mapping of
SQLCAGRP fields to SQLCA fields can be found in the Distributed Relational
Database Architecture Reference book.
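The 4-byte SQLCODE carried in the SQLCAGRP is a signed big-endian integer, so X'00000064' decodes to +100. A quick check (illustrative only):

```python
import struct

def sqlcode(raw4):
    """Interpret the 4 SQLCODE bytes of an SQLCAGRP as a signed
    big-endian integer, as in the example above."""
    return struct.unpack(">i", raw4)[0]

# X'00000064' from the trace decodes to the +100 'row not found' SQLCODE.
code = sqlcode(bytes.fromhex("00000064"))
```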
You can also use this function to help you diagnose some system-related applica-
tion problems. By means of this function, key structures and the DDM data stream
are automatically dumped to the spooled file. This automatic dumping of error infor-
mation on the first occurrence of an error means that you do not have to create the
failure again to report it for service support. FFDC is active in both the application
requester and the application server.
Keep in mind that not all negative SQLCODEs result in dumps; only those that may
indicate an APAR situation are dumped.
An FFDC Dump
The processing of alerts triggers FFDC data to be dumped. However, the FFDC
data is produced even when alerts or alert logging is disabled (using the CHGNETA
command). FFDC output can be disabled by setting the QSFWERRLOG system
value to *NOLOG, but it is strongly recommended that you do not disable the FFDC
dump process. If an FFDC dump has occurred, the informational message
“*Software problem detected in Qxxxxxxx.” (where Qxxxxxxx is an OS/400 module
identifier) is logged in the QSYSOPR message queue.
The first 1K bytes of data are put in the error log. However, the data put in the
spooled file is always complete and easier to work with. If multiple DDM conversa-
tions have been established, the dump output may be contained in more than one
spooled file because of a limit of only 32 entries per spooled file. In this case,
there will be multiple “Software Problem” messages in the QSYSOPR message
queue that are prefixed with an asterisk (*).
Zero, one, or more path control blocks (SPCB); there is normally just one.
Exchange server attributes control block (EXCB)
Parser map space
Receive buffer for the communications control block
The data section number is incremented by one from 17 onward as each control
block is dumped. For example, in the sample dump output, data sections
SPACE-17 through SPACE-21 are for the first data control block dumped (CCB 1),
while data sections SPACE-22 through SPACE-25 are for the second data control
block dumped (CCB 2), as shown below:
17   CCB (Eyecatcher is ‘SCCB:’. For an application server module, the
     eyecatcher is ‘TCCB:’.)
18   PCB for CCB 1 (Eyecatcher is ‘SPBC:’.)
19   SAT for CCB 1 (Eyecatcher is ‘EXCB:’.)
The first digit after RC indicates the number of dump files associated with this
failure. There can be multiple dump files depending on the number of conversations
that were allocated. In the sample dump output, the digit is “1,” indicating that this
is the first (and possibly the only) dump file associated with this failure.
The four digits (when not zeros) at the rightmost end of the return code indicate
the type of error.
The possible codes for errors detected by the AR are:
0001 Failure occurred in connecting to the remote database
0002 More-to-receive indicator was on when it should not have been
0003 AR detected an unrecognized object in the data stream received
from the AS
0097 Error detected by the AR DDM communications manager
0098 Conversation protocol error detected by the DDM component of the
AR
0099 Function check
The possible codes for errors detected by the AS are:
0099 Function check
4415 Conversational protocol error
4458 Agent permanent error
4459 Resource limit reached
4684 Data stream syntax not valid
4688 Command not supported
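For quick reference while reading dump file names, the two code lists above can be kept in a small lookup table. This is a convenience sketch, not an IBM-supplied utility; the codes and descriptions are exactly those listed above:

```python
# Rightmost-four-digit return codes from an FFDC dump file name,
# split by which side (AR or AS) detected the error.
AR_CODES = {
    "0001": "Failure occurred in connecting to the remote database",
    "0002": "More-to-receive indicator was on when it should not have been",
    "0003": "AR detected an unrecognized object in the data stream received from the AS",
    "0097": "Error detected by the AR DDM communications manager",
    "0098": "Conversation protocol error detected by the DDM component of the AR",
    "0099": "Function check",
}
AS_CODES = {
    "0099": "Function check",
    "4415": "Conversational protocol error",
    "4458": "Agent permanent error",
    "4459": "Resource limit reached",
    "4684": "Data stream syntax not valid",
    "4688": "Command not supported",
}

def describe(code, detected_by="AR"):
    """Look up a return-code suffix in the AR or AS list above."""
    table = AR_CODES if detected_by == "AR" else AS_CODES
    return table.get(code, "unknown code")
```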
The meanings of the columns for reply objects (in the indented tables) are:
The S column indicates how the DB2/400 application server handles the
codepoint or parameter:
Y The OS/400 program flows it to the application requester.
N The OS/400 program does not flow it to the application requester.
The R column indicates how the DB2/400 application requester supports the
codepoint or parameter:
Y The OS/400 program recognizes and processes it.
I The OS/400 program ignores it.
Note that each DDM command can have associated with it:
Parameters (instance variables)
Command data objects
Reply messages
Reply data objects
In the following tables, commands and associated object types can be distinguished
by the following means:
Command and reply message names are in uppercase and contained in the
top row of a table.
Parameter names are in lowercase.
Command object names are in uppercase and if present are in the bottom rows
of a table.
Reply data object names are in mixed case (first letter capitalized).
Figure D-28. Reply Message and Reply Objects for OPNQRY command
DDM Codepoint Optional R S
OPNQRYRM (open query reply message) Y Y
svrcod (severity code) Y Y
qryprctyp (protocol type) Y Y
sqlcsrhld (cursor hold flag) Y Y N
srvdgn (server diagnostic information) Y I N
Typdefnam (data type definition name) Y Y N
Typdefovr (TYPDEF override) Y Y N
Sqlcard (SQLCA reply data) Y Y Y
Qrydsc (query answer set description) Y Y Y
Qrydta (query answer set data) Y Y Y
Bibliography X-3

DB2 Server for VSE and VM
SBOF for DB2 Server for VM V5R1, SBOF-8917
DB2 and Data Tools for VSE and VM, GC09-2350
DB2 Server for VM V5R1 Database Administration, GC09-2388
DB2 Server for VM V5R1 Application Programming, SC09-2392
DB2 Server for VM V5R1 Database Services Utilities, SC09-2394
DB2 Server for VM V5R1 Messages and Codes, SC09-2396
DB2 Server for VM V5R1 Master Index and Glossary, SC09-2398
DB2 Server for VM V5R1 Operation, SC09-2400
DB2 Server for VSE & VM V5R1 Quick Reference, SC09-2403
DB2 Server for VSE & VM V5R1 SQL Reference, SC09-2404
DB2 Server for VM V5R1 System Administration, GC09-2405
DB2 Server for VM V5R1 Diagnosis Guide, SC09-2407
DB2 Server for VM V5R1 Interactive SQL Guide, SC09-2409
DB2 Server V5R1 Data Spaces Support for VM/ESA, SC09-2411
DB2 Server for VSE & VM V5R1 LPS, GC09-2413
DB2 Server for VSE & VM V5R1 Data Restore, SC09-2499
DB2 for VM V5R1 Control Center Installation, SC09-2501
DB2 Server for VM/VSE Training Brochure, GC09-2561
DB2 Server for VM V5R1 Online Product Libraries, SK2T-2792

Architecture Books
| Character Data Representation Architecture: Overview, GC09-2207
| Character Data Representation Architecture: Details, SC09-2190
|   This manual includes a CD-ROM, which contains the two CDRA publications in
|   online BOOK format, conversion tables in binary form, mapping source for many
|   of the conversion binaries, a collection of code page and character set
|   resources, and character naming information as used in IBM. The CD also
|   includes a viewing utility to be used with the provided material. The viewer
|   works with OS/2, Windows 3.1, and Windows 95.
Distributed Relational Database Architecture Reference, SC26-4651
DDM Architecture Reference Guide, SC21-9526
  The SC21-9526-05 version of this book describes Level 4 of the DDM
  architecture, which does not include the new DDM protocols for TCP/IP support.

Redbooks
DRDA DDCS/6000 Connection to DB2 and DB2/400, GG24-4155
Setup and Usage of SQL/DS in a DRDA Environment, GG24-3733
DRDA Client/Server Application Scenarios, GG24-4193
DRDA Client/Server for VM and VSE Setup, GG24-4275
DATABASE 2/400 Advanced Database Functions, GG24-4249
Distributed Relational Database Cross Platform Connectivity and Application, GG24-4311
Getting Started with DB2 Stored Procedures: Give Them a Call through the Network, GG24-4693
Index X-9

command, CL (continued)
  CRTSQLCBL (Create Structured Query Language COBOL) 10-23
  CRTSQLCBLI (Create Structured Query Language COBOL ILE) 10-23
  CRTSQLCI (Create Structured Query Language C ILE) 10-23
  CRTSQLFTN (Create Structured Query Language FORTRAN) 10-23
  CRTSQLPKG (Create Structured Query Language Package) 10-28
  CRTSQLPLI (Create Structured Query Language PL/I) 10-23
  CRTSQLRPG (Create Structured Query Language RPG) 10-23
  CRTSQLRPGI (Create Structured Query Language RPG ILE) 10-23
  Display Job Log (DSPJOBLOG) 6-5
  Display Journal (DSPJRN) 2-9, 7-4
  Display Message Descriptions (DSPMSGD) 9-8
  Display Program References (DSPPGMREF) 6-12, 10-27
  Display Programs that Adopt (DSPPGMADP) 4-12
  Display Relational Database Directory Entry (DSPRDBDIRE) 5-8, 6-27, 7-11
  Display Sphere of Control Status (DSPSOCSTS) 3-17
  DSPJOBLOG (Display Job Log) 6-5
  DSPJRN (Display Journal) 2-9, 7-4
  DSPMSGD (Display Message Descriptions) 9-8
  DSPPGMADP (Display Programs that Adopt) 4-12
  DSPPGMREF (Display Program References) 6-12, 10-27
  DSPRDBDIRE (Display Relational Database Directory Entry) 5-8, 6-27, 7-11
  DSPSOCSTS (Display Sphere of Control Status) 3-17
  End Job (ENDJOB) 6-26
  End Request (ENDRQS) 6-26
  ENDJOB (End Job) 6-26
  ENDRQS (End Request) 6-26
  Grant Object Authority (GRTOBJAUT) 4-10
  GRTOBJAUT (Grant Object Authority) 4-10
  RCLDDMCNV (Reclaim Distributed Data Management Conversations) 6-11, 6-12
  RCLRSC (Reclaim Resources) 6-11, 6-12
  Reclaim Distributed Data Management Conversations (RCLDDMCNV) 6-11, 6-12
  Reclaim Resources (RCLRSC) 6-11, 6-12
  Remove Relational Database Directory Entry (RMVRDBDIRE) 5-8, 6-27
  Remove Sphere of Control Entry (RMVSOCE) 3-17
  Restore Authority (RSTAUT) 7-10, 7-11
  Restore Configuration (RSTCFG) 7-10
  Restore Library (RSTLIB) 7-10
  Restore Object (RSTOBJ) 7-10, 7-11, 7-14
  Restore User Profiles (RSTUSRPRF) 7-10, 7-11
  Revoke Object Authority (RVKOBJAUT) 4-10
  RMVRDBDIRE (Remove Relational Database Directory Entry) 5-8, 6-27
  RMVSOCE (Remove Sphere of Control Entry) 3-17
  RSTAUT (Restore Authority) 7-10, 7-11
  RSTCFG (Restore Configuration) 7-10
  RSTLIB (Restore Library) 7-10
  RSTOBJ (Restore Object) 7-10, 7-11, 7-14
  RSTUSRPRF (Restore User Profiles) 7-10, 7-11
  RVKOBJAUT (Revoke Object Authority) 4-10
  SAVCHGOBJ (Save Changed Object) 7-10
  Save Changed Object (SAVCHGOBJ) 7-10
  Save Library (SAVLIB) 7-5, 7-10
  Save Object (SAVOBJ) 7-5, 7-10, 7-11
  Save Save File Data (SAVSAVFDTA) 7-10
  Save Security Data (SAVSECDTA) 7-11
  Save System (SAVSYS) 7-10, 7-11
  SAVLIB (Save Library) 7-5, 7-10
  SAVOBJ (Save Object) 7-5, 7-10, 7-11
  SAVSAVFDTA (Save Save File Data) 7-10
  SAVSECDTA (Save Security Data) 7-11
  SAVSYS (Save System) 7-10, 7-11
  SBMRMTCMD (Submit Remote Command) 6-9, 6-10, 6-15
    authority restrictions 6-9
  Start Commitment Control (STRCMTCTL) 7-7
  Start Copy Screen (STRCPYSCRN) 9-6
  Start Debug (STRDBG) 9-30
  Start Journal Access Path (STRJRNAP) 7-5
  Start Pass-Through (STRPASTHR) 9-6
  Start Service Job (STRSRVJOB) 9-30
  STRCMTCTL (Start Commitment Control) 7-7
  STRCPYSCRN (Start Copy Screen) 9-6
  STRDBG (Start Debug) 9-30
  STRJRNAP (Start Journal Access Path) 7-5
  STRPASTHR (Start Pass-Through) 9-6
  STRSRVJOB (Start Service Job) 9-30
  Submit Remote Command (SBMRMTCMD) 6-9, 6-10, 6-15
    authority restrictions 6-9
  Vary Configuration (VRYCFG) 3-6, 7-15
  VRYCFG (Vary Configuration) 3-6, 7-15
  Work with Active Jobs (WRKACTJOB) 6-3, 8-2
  Work with Configuration Status (WRKCFGSTS) 3-6, 7-15
  Work with Disk Status (WRKDSKSTS) 8-2
  Work with Job (WRKJOB) 6-1
  Work with Relational Database Directory Entries (WRKRDBDIRE) 5-8, 6-27
  Work with Sphere of Control (WRKSOC) 3-16
  Work with System Status (WRKSYSSTS) 8-2
  Work with User Jobs (WRKUSRJOB) 6-2
  WRKACTJOB (Work with Active Jobs) 6-3, 8-2
  WRKCFGSTS (Work with Configuration Status) 3-6, 7-15
controller description (APPC)
  creating 4-2, 4-3
controlling subsystem
  definition 5-2
  QBASE 5-2
  QCTL 5-2
controlling which ID a job runs under 4-5
conversations
  SNA versus TCP/IP 6-10
conversations, unprotected 8-1
conversion considerations
  CCSID B-2
  DB2 B-2
  DB2 Server for VM database managers B-2
  DB2/2 licensed program B-2
copying displays 9-6
Create Configuration List (CRTCFGL) command
  secure-location entry 4-3
Create Controller Description (APPC) (CRTCTLAPPC) command 4-2
  location-password entry 4-3
  SECURELOC parameter 4-3
  specifying an APPN location password 4-3
Create Device Description (APPC) (CRTDEVAPPC) command
  LOCPWD parameter 4-2
  specifying a location password 4-2
Create Structured Query Language C ILE (CRTSQLCI) command 10-23
Create Structured Query Language COBOL (CRTSQLCBL) command 10-23
Create Structured Query Language COBOL ILE (CRTSQLCBLI) command 10-23
Create Structured Query Language FORTRAN (CRTSQLFTN) command 10-23
Create Structured Query Language Package (CRTSQLPKG) command 10-28
Create Structured Query Language PL/I (CRTSQLPLI) command 10-23
Create Structured Query Language RPG (CRTSQLRPG) command 10-23
Create Structured Query Language RPG ILE (CRTSQLRPGI) command 10-23
creating
  configuration list 4-3
  controller description (APPC) 4-2, 4-3
  device description (APPC) 4-2
  interactive SQL packages on DB2 Server for VM B-4
  structured query language package 10-28
cross-platform DRDB notes B-1
CRTCFGL (Create Configuration List) command
  secure-location entry 4-3
CRTCTLAPPC (Create Controller Description (APPC)) command 4-2
  location-password entry 4-3
  SECURELOC parameter 4-3
  specifying an APPN location password 4-3
CRTDEVAPPC (Create Device Description (APPC)) command
  LOCPWD parameter 4-2
  specifying a location password 4-2
CRTSQLCBL (Create Structured Query Language COBOL) command 10-23
CRTSQLCBLI (Create Structured Query Language COBOL ILE) command 10-23
CRTSQLCI (Create Structured Query Language C ILE) command 10-23
CRTSQLFTN (Create Structured Query Language FORTRAN) command 10-23
CRTSQLPKG (Create Structured Query Language Package) command 10-28
CRTSQLPLI (Create Structured Query Language PL/I) command 10-23
CRTSQLRPG (Create Structured Query Language RPG) command 10-23
CRTSQLRPGI (Create Structured Query Language RPG ILE) command 10-23
current connection state 10-6

D
data
  accessing via DB2 Connect B-4
  availability 7-14
  blocked for better performance B-4
  character conversion 1-7
  considerations 2-6
  designing 2-4
  failure 9-25
  requirements 2-6
data availability and protection 7-1
data capture
  FFDC 9-29
data conversion
  noncharacter data 10-19
data entries
  interpreting C-1
data file utility (DFU) 5-16
data location
  deciding 8-3
data needs
  determining 2-1
data redundancy 7-16
data translation
  CCSID 10-16
  noncharacter data 10-19
database
  security 4-1
Display Sphere of Control Status (DSPSOCSTS) command 3-17
display, copying 9-6
displaying
  job log 6-5
  journal 2-9, 7-4
  message descriptions 9-8
  objects 6-12
  program references 6-12, 10-27
  programs that adopt 4-12
  relational database directory entry 5-8, 7-11
  sphere of control status 3-17
distributed data management (DDM)
  CHGJOB command 6-11
  coexistence with DRDA support 3-2
  DDMCNV job attribute 6-10, 6-11
  dropped conversations 6-10
  dropping conversations 6-10, 6-11
  keeping conversations 6-10
  keeping conversations active 6-10, 6-11
  moving data between AS/400 systems 5-19
  reclaiming
    conversations 6-12
    resources 6-12
  unused conversations 6-10
  using copy file commands 5-19
distributed data management conversations
  reclaiming 6-11, 6-12
Distributed Relational Database
  administration and operation 6-1
  managing 1-10
  remote unit of work 10-3
  set up 5-1
  SQL specific to 10-13
Distributed Relational Database application
  considerations for a Distributed Relational Database 10-2
  programming considerations 10-2
Distributed Relational Database Architecture (DRDA) support
  coexistence with DDM 3-2
  current AS/400 support 1-9
  overview 1-6
  with CDRA 1-7
distributed relational database capabilities 2-2
distributed relational database problems
  incorrect output 9-3
  waiting, looping, performance
    at the application requester 9-3
    at the application server 9-4
distributed relational database security 4-1
distributed unit of work (DUW)
  application design tips 2-4
  definition 1-4
dormant connection state 10-6
DRDA (Distributed Relational Database Architecture)
  Level 2 support 1-4
DRDA (Distributed Relational Database Architecture) support
  coexistence with DDM 3-2
  current AS/400 support 1-9
  level 1 support 1-1
  level 2 support 1-1
  overview 1-6
  with CDRA 1-7
DRDA listener program 6-19
DRDB
  cross-platform B-1
DROP PACKAGE statement 10-33
dropping a collection 6-15
DSPJOBLOG (Display Job Log) command 6-5
DSPJRN (Display Journal) command 2-9, 7-4
DSPMSGD (Display Message Descriptions) command 9-8
DSPPGMADP (Display Programs that Adopt) command 4-12
DSPPGMREF (Display Program References) command 6-12, 10-27
DSPRDBDIRE (Display Relational Database Directory Entry) command 5-8, 6-27, 7-11
DSPSOCSTS (Display Sphere of Control Status) command 3-17
dump, FFDC C-5
DUW (distributed unit of work)
  definition 1-4

E
EBCDIC 1-8
encoding, character conversion 1-7
End Job (ENDJOB) command 6-26
End Request (ENDRQS) command 6-26
End TCP/IP Server CL command 6-20
ending
  job 6-26
  request 6-26
ending SQL programs 10-16
ENDJOB (End Job) command 6-26
ENDRQS (End Request) command 6-26
ensuring data availability 7-14
environments
  like 1-6
  unlike 1-6
error log 9-26
  FFDC data 9-30
error recovery
  relational database 7-1
error reporting
  alerts 9-23
  communications trace 9-27
  definition C-5
interpreting (continued)
  FFDC data C-1
  FFDC data from the error log 9-30
  trace job C-1

J
job
  accounting 6-16
  canceling 6-26
  changing 6-11
  ending 6-26
  types 5-1
  working with 6-1
job log
  alerts 9-25
  displaying 6-5
  finding a job 6-6
job trace 9-26
jobs
  working with active 8-2
journal
  displaying 2-9, 7-4
journal access path
  starting 7-5
journal management
  commitment control 7-3
  indexes 7-4
  journal receiver 7-3
  overview 7-3
  starting index journaling 7-5
  stopping 7-3
journal receiver 7-3
journaling
  AS/400 files B-4

L
LCKLVL parameter 7-7
library
  restoring 7-10
  saving 7-5, 7-10
like environment
  definition 1-6
load data
  into tables 5-14
  using Query Management/400 5-15
  using DFU (data file utility) 5-16
  using SQL 5-14
location security
  APPC network 4-3
  APPN network 4-3
  secure-location entry 4-3
  SECURELOC parameter 4-3
  system verifies 4-3
  which system verifies security 4-3
location, definition 2-5
loop problem
  application requester 9-3
  application server 9-4

M
management strategy
  developing 2-6
managing an AS/400 Distributed Relational Database 1-10
message
  Additional Message Information display 9-8
  category descriptions 9-8
  database accessed 6-8
  DDM job start 6-6
  distributed relational database 9-10
  handling problems 9-7
  informational 9-7
  inquiry 9-7
  program start request failure 9-13
  severity code 9-9
  target DDM job started 6-8
  types 9-9
message category 9-8
message descriptions
  displaying 9-8
migration of data from mainframes 5-19, 5-23
mirrored protection 7-3
monitoring
  relational database activity 6-1
moving data
  between AS/400 systems 5-16
  between unlike systems
    using communications 5-22
    using File Transfer Protocol 5-23
    using OSI File Services/400 licensed program 5-23
    using SQL functions 5-22
    using tape or diskette 5-22
    using TCP/IP Connectivity Utilities/400 licensed program 5-23
  copying files with DDM 5-19
  using copy file commands 5-19
  using interactive SQL 5-16
  using Query Management/400 5-18
  using save and restore 5-21

N
naming convention
  default collection name 10-3
  SQL 10-3
  system 10-2
naming distributed relational database objects 10-2
problem
  system-detected 9-1
  user-detected 9-2
problem analysis, planning for 2-9
problem handling C-5
  Additional Message Information display 9-8
  alerts 9-23
  Analyze Problem (ANZPRB) command 9-21
  application problems 9-15
  communications trace 9-27
  copying displays 9-6
  displaying message description 9-8
  distributed relational database messages 9-10
  DRDA supported alerts 9-24
  error log 9-26
  isolating distributed relational database problems 9-2
  job log 9-25
  job trace 9-26
  message category 9-8
  message severity 9-9
  overview 9-1
  problem log 9-21
  program start request failure 9-13
  system messages 9-7
  system-detected problems 9-1
  user-detected problems 9-2
  using display station pass-through 9-6
  wait, loop, performance problems
    application requester 9-3
    application server 9-4
  working with users 9-6
problem log 9-21
problems
  handling 9-1
program references
  displaying 6-12, 10-27
program start request failure 9-13
programming considerations
  for a Distributed Relational Database application 10-2
programming examples
  application A-1
programs that adopt
  displaying 4-12
protection
  system-managed access-path 7-6
protection strategies for distributed databases 4-13

Q
QBASE controlling subsystem 5-2
QCCSID
  system value B-1
QCNTSRVC 9-31, 9-32
QCTL controlling subsystem 5-2
QCTLSBSD system value 5-2
QPSRVDMP FFDC spooled file C-5
QRETSVRSEC system value 4-8
query block size
  factors that affect the 8-6
query data
  blocked for better performance B-4
query management and interactive SQL setup B-3
Query Management/400 function
  loading data into tables 5-15
  moving data between AS/400 systems 5-18

R
RCLDDMCNV (Reclaim Distributed Data Management Conversations) command 6-11, 6-12
RCLRSC (Reclaim Resources) command 6-11, 6-12
RDB (relational database) parameter
  implicit CONNECT 10-8
  in CRTSQLPKG command 10-29
  in relational database directory 5-6
Reclaim Distributed Data Management Conversations (RCLDDMCNV) command 6-11, 6-12
Reclaim Resources (RCLRSC) command 6-11, 6-12
reclaiming
  distributed data management conversations 6-11, 6-12
  resources 6-11, 6-12
recovery
  auxiliary storage pool (ASP) 7-2
  checksum protection 7-2
  disk failures 7-2
  failure types 7-1
  force-write ratio 7-9
  journal management 7-3
  methods 7-1
  mirrored protection 7-3
  planning for 2-10
  uninterruptible power supply 7-2
redundancy
  communications network 7-14
  data 7-16
relational database
  definition 1-1
relational database (RDB) parameter
  implicit CONNECT 10-8
  in CRTSQLPKG command 10-29
  in relational database directory 5-6
relational database activity
  monitoring 6-1
relational database directory
  auditing 6-27
  changing entries 5-9
  commands 5-6
  creating an output file 7-11
saving (continued)
  object 7-5, 7-10, 7-11
  relational database directory 7-11
  save file data 7-10
  security data 7-11
  SQL packages 7-11
  system 7-10, 7-11
  to save file 7-10
  to tape or diskette 7-10
SAVLIB (Save Library) command 7-5, 7-10
SAVOBJ (Save Object) command 7-5, 7-10, 7-11
SAVSAVFDTA (Save Save File Data) command 7-10
SAVSECDTA (Save Security Data) command 7-11
SAVSYS (Save System) command 7-10, 7-11
SBMRMTCMD (Submit Remote Command) command 6-9, 6-10, 6-15
SBMRMTCMD command 5-13
security
  application
    requester 4-1, 4-4, 4-5
    server 4-1, 4-4, 4-5
  assigning authority to users 4-14
  auditing 6-27
  consistent system levels across network 4-2
  controlling access to objects 4-9
  controlling which ID a job runs under 4-5
  conversation level 4-4
  default user profile 4-5, 4-13
  distributed database overview 4-1
  for an AS/400 distributed relational database 4-1
  location 4-3
  object security 4-13
  password 4-5, 10-12, 10-13
  planning for 2-8
  protection strategies 4-13
  restoring profiles and authorities 7-11
  saving profiles and authorities 7-11
  session level 4-2
security data
  saving 7-11
server
  application
    starting a service job 9-30
server authorization entries 4-8, 5-13
service job
  on the application server 9-30
  starting 9-30
session level security
  APPC network 4-2
  APPN network 4-3
  creating a remote location list 4-3
  LOCPWD (location password) parameter 4-2
  specifying a location password during device configuration 4-2
  specifying a location-password 4-3
setting QCNTSRVC as a TPN
  on a DB2 Connect application requester 9-32
  on a DB2 for OS/390 application requester 9-32
  on a DB2 for VM application requester 9-32
  on a DB2/400 application requester 9-31
setting up a distributed relational database 5-1
setup
  interactive SQL B-3
  query management B-3
size of query blocks
  factors that affect the 8-6
SMAPP (system-managed access-path protection) 7-6
SNA (Systems Network Architecture) 3-1
special TPN for debugging APPC server jobs 9-32
sphere of control
  definition 3-3
  working with 3-16
sphere of control entry
  adding 3-17
  removing 3-17
sphere of control status
  displaying 3-17
spiffy corporation example 1-13
spooled job 5-1
SQL CALL 10-14
SQL collection
  definition 1-1
SQL naming convention 10-3
SQL package
  access plan 10-24
  adopted authority 4-12
  creating with CRTSQLPKG 10-28
  creating with CRTSQLxxx 10-28
  creation as a result of precompile 10-23
  definition 1-12
  deleting 10-32
  displaying objects used 6-14
  for interactive SQL 5-13
  restoring 7-11
  saving 7-11
SQL package management 10-28
SQL packages
  working with 10-28
SQL program
  adopted authority 4-12
  compiling 10-24
  displaying objects used 6-14
  example listing
    CRTSQLPKG 9-17
    precompiler 9-15
    SQLCODE 9-15
    SQLSTATE 9-15
  handling problems
    SQLCODE 9-15
    SQLSTATE 9-15
starting commitment control 7-7
TCP/IP 1-10, 2-4
  finding joblogs 9-26
  finding server jobs 6-6
  forcing joblogs to be saved 9-26
  security 4-8, 5-12
  server job attributes 4-10
  service jobs 9-31
  user profile 4-14
  user profiles 4-8, 4-10
  working with server jobs 6-3
TCP/IP Communication Support Concepts 6-18
TCP/IP communications, establishing 6-18
temporary source file member 10-23
terminology 6-17
terms and concepts 1-5
testing and debugging
  application program 10-25
tools
  communications 3-1
TPN
  setting QCNTSRVC 9-31, 9-32
trace
  communications 9-27
  job 9-26
trace data
  analyzing C-2
trace job data
  interpreting C-1
trace point
  description C-3
  partial send data stream C-3, C-4
  receive data stream C-3
  send data stream C-3
  successful fetch C-4
  unsuccessful fetch C-4
transaction program name parameter
  in AS/400 (TNSPGM) 5-7
  in SNA (TPN) 5-7

U
unconnected state 10-7
uninterruptible power supply 7-2
unit of work
  definition 1-3
unlike environment
  definition 1-6
unprotected conversations 8-1
user jobs
  working with 6-2
user profile
  CCSID 10-17
  default 4-5, 4-13
  on application server 4-13
  restoring 7-11
  saving 7-11
user profiles
  restoring 7-10, 7-11
user-detected problem 9-2
using passwords 4-5

V
Vary Configuration (VRYCFG) command 3-6, 7-15
varying
  configuration 3-6, 7-15
view
  definition 1-1
  recovering 7-5
VRYCFG (Vary Configuration) command 3-6, 7-15

W
wait problem
  application requester 9-3
  application server 9-4
work management
  job types 5-1
  subsystem setup 5-3
  subsystems 5-1
Work with Active Jobs (WRKACTJOB) command 6-3, 8-2
Work with Configuration Status (WRKCFGSTS) command 3-6, 7-15
Work with Disk Status (WRKDSKSTS) command 8-2
Work with Job (WRKJOB) command 6-1
Work with Relational Database Directory Entries (WRKRDBDIRE) command 5-8, 6-27
Work with Sphere of Control (WRKSOC) command 3-16
Work with System Status (WRKSYSSTS) command 8-2
Work with User Jobs (WRKUSRJOB) command 6-2
working with
  active jobs 6-3, 8-2
  configuration status 3-6, 7-15
  disk status 8-2
  job 6-1
  relational database directory entries 5-8
  sphere of control 3-16
  system status 8-2
  user jobs 6-2
working with SQL packages 10-28
writing Distributed Relational Database applications 10-1
WRKACTJOB (Work with Active Jobs) command 6-3, 8-2
WRKCFGSTS (Work with Configuration Status) command 3-6, 7-15
WRKDSKSTS (Work with Disk Status) command 8-2
WRKJOB (Work with Job) command 6-1
Communicating Your Comments to IBM
AS/400e series
Distributed Database Programming
Version 4
Publication No. SC41-5702-01
If you especially like or dislike anything about this book, please use one of the methods
listed below to send your comments to IBM. Whichever method you choose, make sure you
send your name, address, and telephone number if you would like a reply.
Feel free to comment on specific errors or omissions, accuracy, organization, subject matter,
or completeness of this book. However, the comments you send should pertain only to the
information in this manual and the way in which it is presented. To request additional
publications, or to ask questions or make comments about the functions of IBM products or
systems, talk to your IBM representative or to your IBM authorized remarketer.
When you send comments to IBM, you grant IBM a nonexclusive right to use or distribute
your comments in any way it believes appropriate without incurring any obligation to you.
If you are mailing a readers' comment form (RCF) from a country other than the United
States, you can give the RCF to the local IBM branch office or IBM representative for
postage-paid mailing.
If you prefer to send comments by mail, use the RCF at the back of this book.
If you prefer to send comments by FAX, use this number:
– United States and Canada: 1-800-937-3430
– Other countries: 1-507-253-5192
If you prefer to send comments electronically, use this network ID:
– IBMMAIL(USIB56RZ)
– [email protected]
Overall, how satisfied are you with the information in this book?

                             Very                                   Very
                             Satisfied  Satisfied  Neutral  Dissatisfied  Dissatisfied
  Overall satisfaction           Ø          Ø         Ø          Ø             Ø

How satisfied are you that the information in this book is:

                             Very                                   Very
                             Satisfied  Satisfied  Neutral  Dissatisfied  Dissatisfied
  Accurate                       Ø          Ø         Ø          Ø             Ø
  Complete                       Ø          Ø         Ø          Ø             Ø
  Easy to find                   Ø          Ø         Ø          Ø             Ø
  Easy to understand             Ø          Ø         Ø          Ø             Ø
  Well organized                 Ø          Ø         Ø          Ø             Ø
  Applicable to your tasks       Ø          Ø         Ø          Ø             Ø
Name

Address

Company or Organization

Phone No.
Readers' Comments — We'd Like to Hear from You
SC41-5702-01

IBM CORPORATION
ATTN DEPT 245
3605 HWY 52 N
ROCHESTER MN 55901-7829
SC41-5702-01
Spine information: