IBM DB2 Universal Database
Replication Guide and Reference
Version 7
SC26-9920-00
Before using this information and the product it supports, be sure to read the general information under
“Appendix C. Notices” on page 403.
This document contains proprietary information of IBM. It is provided under a license agreement and is protected by
copyright law. The information contained in this publication does not include any product warranties, and any
statements provided in this manual should not be interpreted as such.
Order publications through your IBM representative or the IBM branch office serving your locality or by calling
1-800-879-2755 in the United States or 1-800-IBM-4YOU in Canada.
When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
© Copyright International Business Machines Corporation 1994, 2000. All rights reserved.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents
About this book . . . ix
Who should read this book . . . x
How this book is structured . . . x
Conventions . . . x
Terminology . . . xi
How to read syntax diagrams . . . xi
Road map . . . xii
How to send your comments . . . xiii

What’s new . . . xv
Compatibility . . . xv
What’s new for Version 7 . . . xv
DATALINK replication . . . xv
Replication for AS/400 . . . xv
Replication for UNIX, Windows, and OS/2 . . . xvi
What’s new for Version 6 . . . xvi
DB2 Satellite Edition . . . xvi
Database currency . . . xvii
Performance features . . . xviii
Integration with DB2 . . . xviii

Data distribution . . . 19
Data consolidation . . . 20
Update anywhere . . . 21
Occasionally connected . . . 22
Examples of replication configurations . . . 24
Archiving audit information . . . 24
Consolidating data from distributed databases . . . 24
Distributing data to remote sites . . . 25
Distributing IMS data to remote sites . . . 27
Accessing data continuously . . . 28
Replicating operational data to decision support systems . . . 29
Using target tables as sources of updates (update anywhere) . . . 30
Updating data on occasionally connected systems . . . 31
Retrieving data from a non-DB2 distributed data store . . . 32
Replicating operational data to a non-DB2 reports and query database . . . 33

Pruning the change data and unit-of-work tables and minimizing source server DASD usage . . . 204
Warm and cold starts . . . 205
How the Capture program processes journal entry types . . . 205
Operating Apply for AS/400 . . . 207
Creating packages to use with remote systems . . . 208
Before you start the Apply program . . . 210
Starting Apply for AS/400 . . . 210
Scheduling Apply for AS/400 . . . 217
Stopping Apply for AS/400 . . . 217
Additional Apply program operations . . . 220
Using the ASNDONE exit routine for AS/400 . . . 220
Refreshing target tables with the ASNLOAD exit routine for AS/400 . . . 221

Chapter 10. Capture and Apply for OS/390 . . . 225
Setting up the Capture and Apply programs . . . 225
Applying DB2 maintenance . . . 225
Installing Capture and Apply for OS/390 . . . 225
Configuring Capture and Apply for OS/390 after installing a new release of DB2 . . . 226
Operating Capture for OS/390 . . . 227
Restrictions for running the Capture program . . . 227
Starting Capture for OS/390 . . . 227
Scheduling Capture for OS/390 . . . 229
Stopping Capture for OS/390 . . . 230
Suspending Capture for OS/390 . . . 230
Resuming Capture for OS/390 . . . 230
Reinitializing Capture for OS/390 . . . 230
Pruning the change data and unit-of-work tables . . . 231
Displaying captured log progress . . . 231
Operating Apply for OS/390 . . . 231
Starting Apply for OS/390 . . . 232
Scheduling Apply for OS/390 . . . 234
Stopping Apply for OS/390 . . . 234
Rules for index types . . . 234
Using the DB2 ODBC Catalog . . . 235
Setting up the DB2 ODBC Catalog . . . 235
DB2 ODBC Catalog tables . . . 237

Chapter 11. Capture and Apply for UNIX platforms . . . 239
User ID requirements for running the Capture and Apply programs . . . 239
Setting up the Capture and Apply programs . . . 239
Configuring the Capture program for UNIX platforms . . . 239
Optional: Configuring the Apply program manually for UNIX platforms . . . 240
Other configuration considerations for UNIX-based components . . . 241
Setting up end-user authentication at the source server . . . 241
Operating Capture for UNIX platforms . . . 243
Restrictions for running the Capture program . . . 243
Scheduling Capture for UNIX platforms . . . 244
Setting environment variables for Capture for UNIX platforms . . . 244
Starting Capture for UNIX platforms . . . 244
Stopping Capture for UNIX platforms . . . 247
Suspending Capture for UNIX platforms . . . 247
Resuming Capture for UNIX platforms . . . 248
Reinitializing Capture for UNIX platforms . . . 248
Pruning the change data and unit-of-work tables . . . 249
Displaying captured log progress . . . 249
Operating Apply for UNIX platforms . . . 250
Before you start the Apply program . . . 250
Starting Apply for UNIX platforms . . . 251
Scheduling Apply for UNIX platforms . . . 254
Stopping Apply for UNIX platforms . . . 254

Chapter 12. Capture for VM and Capture for VSE . . . 255
Setting up the Capture program . . . 255
Operating Capture for VM and Capture for VSE . . . 255
Restrictions for running the Capture program . . . 255
Starting Capture for VM and VSE . . . 256
Stopping Capture for VM and VSE . . . 258
Suspending Capture for VM and VSE . . . 259
Resuming Capture for VM and VSE . . . 259
Reinitializing Capture for VM and VSE . . . 260
Pruning the change data and unit-of-work tables . . . 260
Displaying captured log progress . . . 261

Chapter 13. Capture and Apply for Windows and OS/2 . . . 263

Error messages table (Microsoft Jet specific) . . . 349
Error-side-information table (Microsoft Jet specific) . . . 349
Key string table (Microsoft Jet specific) . . . 349
Synchronization generations table (Microsoft Jet specific) . . . 350

Chapter 15. Capture and Apply messages . . . 353
Capture program messages . . . 353
Apply program messages . . . 368

Chapter 16. Replication messages for AS/400 . . . 381
Apply for AS/400 messages . . . 381
Capture for AS/400 messages . . . 386
Other replication messages for AS/400 . . . 390

Starting the Capture program using a routine . . . 397
Starting the Apply program using a routine . . . 398
Sample routine that starts the Capture and Apply programs . . . 398

Appendix B. Education and services for DB2 data replication . . . 401
Services . . . 401
Education . . . 401

Appendix C. Notices . . . 403
Programming interface information . . . 404
Trademarks . . . 404
Trademarks of other companies . . . 405

Glossary . . . 407
The DB2 DataPropagator™ product is the focus of the book. You can use it
with other products in the IBM® replication solution to tailor a replication
environment that suits your business needs.
You can replicate data from DB2 sources to DB2 targets. You can also replicate
data between DB2 and non-IBM sources and targets. Specifically, you can use
the following database management systems as sources, targets, or both:
Conventions
This book uses these highlighting conventions:
v Boldface type indicates commands or user interface controls such as names
of fields, folders, icons, or menu choices.
v Monospace type indicates examples of text that you enter exactly as shown.
v Italic type indicates variables that you should replace with a value. It is also
used to indicate book titles and for emphasis of words.
v If you can choose from two or more items, they appear vertically, in a stack.
If you must choose one of the items, one item of the stack appears on the
main path.
If choosing one of the items is optional, the entire stack appears below the
main path.
>>-required_item--+------------------+--------------------------------><
                  +-optional_choice1-+
                  '-optional_choice2-'
Road map
Compatibility
No release of DB2 DataPropagator Relational Version 1 (DPropR V1) is
compatible with this product. If you currently use DPropR V1, see “Planning
for migration” on page 89 for instructions on upgrading.
On Windows 32-bit operating systems, you can start the Capture and Apply
programs on demand using the ASNSAT command. For information about
this command, see “Replicating on demand (Windows 32–bit operating
systems only)” on page 279.
You can start both the Capture and Apply programs from within an
application by using the new asnCapture and asnApply application
programming interfaces. For information about these interfaces, see
“Appendix A. Starting the Capture and Apply programs from within an
application” on page 397.
With DB2 Satellite Edition, you can replicate data between DB2 servers and
some non-IBM source servers (Oracle, Sybase, Informix, Microsoft), and you
gain the following benefits:
v Centralized group administration and problem determination
v Ability to easily support thousands of occasionally connected clients
v Capture and Apply programs that start and stop automatically, as required
5. The Capture program AUTOSTOP option is not available on the Capture program for AS/400.
6. The Apply program COPYONCE option is not available on the Apply program for AS/400.
“Chapter 3. Data replication scenario” on page 35 lists steps that you can
follow to use the DB2 Control Center and the Capture and Apply programs to
perform a simple replication scenario on sample data in DB2 for Windows
NT.
“Chapter 4. Data replication tasks” on page 51 introduces the tasks that you
perform at various stages of the replication process.
A number of IBM products enable you to replicate data. The product that is
the focus of this book—DB2 DataPropagator—is a replication product for
relational data. You can use it to replicate changes between any DB2 relational
databases. You can also use it with other IBM products (such as DB2
DataJoiner and IMS DataPropagator) or non-IBM products (such as Microsoft
SQL Server and Sybase SQL Server) to replicate data between a growing
number of database products—both relational and nonrelational.
The replication environment that you need depends on when you want data
updated and how you want transactions handled. You have the flexibility to
choose the locations of the replication components to maximize the efficiency
of your replication environment.
This section describes the control tables that manage replication requests, the
logical servers that contain the replication components, and the main
replication components.
The Apply program uses the following control tables: Apply trail table, critical
section table, pruning control table, prune lock table, register table,
subscription set table, subscription statements table, subscription events table,
subscription-targets-member table, subscription columns table, unit-of-work
table, and change data tables.
Logical servers
All the replication components reside on a logical server. In this book, logical
servers refer to databases, not to servers in the client/server sense. For the
OS/390 operating system, logical servers are equivalent to subsystems or
data-sharing groups (that is, the domain of a single database catalog). There are
three types of logical servers:
Source server
The source server contains the change-capture mechanism, the source
tables that you want to replicate, and the control tables for the
Capture program that are also used by the Apply program.7
Target server
The target server contains the target tables.
Control server
The control server contains control tables for the Apply program.
The Apply program can reside on any of the logical servers in the network. It
uses distributed DB2 technology to connect to the control, source, and target
servers.
7. If you use the remote journal setup on DPROPR/400, the source server will not contain the source tables that you
want to replicate. For more information about the remote journal setup, see “The journal” on page 191.
You can use the Control Center to perform the following administration tasks
for replication:
v Define DB2 tables and DB2 views as replication sources.
v Define or remove subscription sets.
v Add subscription-set members to existing subscription sets.
v Remove subscription-set members from existing subscription sets.
v Remove replication sources.
v Clone subscription sets to other servers.
v Activate and deactivate subscription sets.
v Add or delete a call to a procedure or a SQL statement that runs before or
after data is replicated.
Capture program
When the source is a DB2 table, the Capture program captures changes that
are made to the source. The Capture program uses the database log8 to
capture changes made to the source database and stores them temporarily in
tables.
The Capture program runs at the source server. Typically it runs continuously,
but you can stop it while running utilities or modifying replication sources.
Capture triggers
When the source table is in a non-IBM database (other than Teradata,
Microsoft Access, or Microsoft Jet), Capture triggers capture changes that are
made to the source. Capture triggers are fired when a particular database
event (UPDATE, INSERT, or DELETE) occurs.
8. The Capture program retrieves changed and committed information from the active and archive logs on DB2 for
MVS 4.1 or higher and DB2 Universal Database. Capture for VSE and VM 5.1 can read only the active log on DB2
for VSE & VM.
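The trigger-based capture mechanism can be sketched in a few lines of code. This is an illustration only, using SQLite in place of a non-IBM source database; the table names (customer, customer_cd) and columns are hypothetical, and in a real environment the triggers are generated by DJRA when you define the replication source:

```python
import sqlite3

# Illustrative sketch only. SQLite stands in for the non-IBM source;
# "customer" and "customer_cd" are hypothetical names. One trigger is
# created per event type, as in trigger-based change capture.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);

-- Change-data (CD) table: one row per captured change.
CREATE TABLE customer_cd (op TEXT, id INTEGER, name TEXT);

CREATE TRIGGER cap_ins AFTER INSERT ON customer
BEGIN INSERT INTO customer_cd VALUES ('INSERT', NEW.id, NEW.name); END;

CREATE TRIGGER cap_upd AFTER UPDATE ON customer
BEGIN INSERT INTO customer_cd VALUES ('UPDATE', NEW.id, NEW.name); END;

CREATE TRIGGER cap_del AFTER DELETE ON customer
BEGIN INSERT INTO customer_cd VALUES ('DELETE', OLD.id, OLD.name); END;
""")

# Ordinary application activity on the source table...
conn.execute("INSERT INTO customer VALUES (1, 'Ann')")
conn.execute("UPDATE customer SET name = 'Anne' WHERE id = 1")
conn.execute("DELETE FROM customer WHERE id = 1")

# ...leaves a record of every change in the CD table, oldest first.
changes = conn.execute("SELECT op FROM customer_cd ORDER BY rowid").fetchall()
print(changes)  # [('INSERT',), ('UPDATE',), ('DELETE',)]
```

The Apply program later reads such change-data rows to copy only the committed changes to the targets.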
The Apply program generally runs at the target server, but it can run at any
server in your network that can connect to the source, control, and target
servers. Several Apply program instances can run on the same or different
servers. Each Apply program can run using the same authorization, different
authorization, or as part of a group of Apply programs where each Apply
program in the group runs using the same authorization (user ID).
Each Apply program is associated with one control server, which contains the
control tables that define the subscription sets. The control tables can be used
by more than one instance of the Apply program. For example, if you have
one source server and two target servers, you can have a separate Apply
program running at each target server. The two Apply instances can share the
control tables, which contain the information specific to each Apply instance.
Each time that the Apply program copies data to the target database, the
contents of the target database reflect the changes that were made to the
source database.
Log-based communication
The Capture program uses some of the control tables to indicate what changes
have been made to the source database, and the Apply program uses these
control-table values to detect what needs to be copied to the target database.
Important: The Capture program will not capture any information until the
Apply program signals it to do so, and the Apply program will not signal the
Capture program to start capturing changes until you define a replication
source and associated subscription sets. See “Performing the initial
replication” on page 54 for more information about the steps that you must
perform so that the components communicate with each other and replicate
changes.
The following process describes how the Apply and Capture programs
communicate in a typical replication scenario to ensure data integrity:
Trigger-based communication
DJRA, working through DB2 DataJoiner, creates Capture triggers on non-IBM
source tables when you define them as replication sources. Three types of
triggers are created on the source table: DELETE, UPDATE, and INSERT. Also,
UPDATE triggers are created on the pruning control table and the register
synchronization table. The Apply program uses these control tables to detect
what needs to be copied to the target database.
The following process describes how the Capture triggers and the Apply
program communicate in a typical replication scenario to ensure data integrity:
During full-refresh only copying, the Apply program performs these tasks:
1. Deletes all of the rows from the target table
2. Reads all of the rows from the source table
3. Copies the rows to the target table
During differential-refresh copying, the Apply program copies only the changed
data to the target table.
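The two copying modes can be sketched as follows. This is a simplified illustration only, with plain Python lists standing in for the source table, the target table, and the captured change data:

```python
# Sketch of the two Apply copying modes; lists stand in for tables.

def full_refresh(source, target):
    target.clear()           # 1. delete all of the rows from the target
    rows = list(source)      # 2. read all of the rows from the source
    target.extend(rows)      # 3. copy the rows to the target

def differential_refresh(changes, target):
    # Copy only the captured changes, not the whole source table.
    for op, row in changes:
        if op == "INSERT":
            target.append(row)
        elif op == "DELETE":
            target.remove(row)

source = [("A", 1), ("B", 2)]
target = [("stale", 0)]

full_refresh(source, target)
print(target)  # [('A', 1), ('B', 2)]

differential_refresh([("INSERT", ("C", 3)), ("DELETE", ("A", 1))], target)
print(target)  # [('B', 2), ('C', 3)]
```

After the initial full refresh, differential refresh keeps the target current by moving far less data.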
A subscription set must have one subscription-set member for each target table
or view. When you create a subscription-set member, you define the following
attributes:
v The source table or view and a target table or view
v The structure of the target table or view
Subscription sets ensure that all subscription-set members are treated alike
during replication: either changes are applied to all targets or to none of them.
The changed data for all the subscription-set members in a subscription set is
replicated to the specified target tables in a single transaction. Subscription
sets optimize performance because the target tables in a set are processed in
one transaction against the target server. Subscription sets also preserve
referential integrity.
Figure 1. Subscription sets and subscription-set members. Example of the relationship between a
subscription set and subscription-set members.
By using more than one Apply qualifier, you can run more than one instance
of the Apply program from a single user ID. The Apply qualifier is used to
identify records at the control server that define the work load of an instance
of the Apply program; whereas the user ID is for authorization purposes only.
For example, assume that you want to replicate data from two source
databases to the target tables on your computer. The data in source table A is
replicated using full-refresh copying to target table A, and the data in source
table B is replicated using differential-refresh copying to target table B. You
define two subscription sets (one for table A and one for table B), and you use
separate Apply qualifiers to allow two instances of the Apply program to
copy the data at different times. You can also define both subscription sets
using one Apply qualifier.
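The relationship between Apply qualifiers and subscription sets can be sketched like this. The control-table layout is deliberately simplified and the qualifier and set names are hypothetical:

```python
# Simplified sketch: subscription-set records at the control server,
# keyed by Apply qualifier. Each Apply instance is started with one
# qualifier and processes only the sets registered under it.
control_server = [
    {"apply_qualifier": "QUAL_A", "set_name": "SET_A", "mode": "full-refresh"},
    {"apply_qualifier": "QUAL_B", "set_name": "SET_B", "mode": "differential-refresh"},
]

def workload(apply_qualifier):
    """Return the subscription sets that one Apply instance processes."""
    return [r["set_name"] for r in control_server
            if r["apply_qualifier"] == apply_qualifier]

print(workload("QUAL_A"))  # ['SET_A']
print(workload("QUAL_B"))  # ['SET_B']
```

Because the workload is selected by qualifier rather than by user ID, two instances started under the same user ID with different qualifiers copy different sets on different schedules.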
Data manipulation
You might want to replicate only a subset of your source table, use a simple
view to restructure the data from the source table to the target table, or use
more complex joins and unions.
Use column subsetting if you want to replicate only a subset of all of the
columns from the source. This type of subsetting is appropriate, for example,
if some of the columns in the source are very large, such as large objects
(LOBs), or if the column data types are not supported by the intended target
table.
Use row subsetting if you want to replicate only some of the rows from the
source database. For example, when you are replicating data to more than one
regional office, you might want to replicate only records that are relevant to
that particular regional office. To subset rows, use the WHERE clause when
defining the subscription-set member.
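Column and row subsetting can be sketched together. The column names and the regional predicate below are hypothetical, and a plain function stands in for the WHERE clause of the subscription-set member:

```python
# Hypothetical source rows: "region" drives the row subset, and the
# large "blob" column is excluded by column subsetting.
source_rows = [
    {"id": 1, "region": "EAST", "blob": b"...large object..."},
    {"id": 2, "region": "WEST", "blob": b"...large object..."},
]

def subset(rows, columns, predicate):
    """Column subsetting plus a WHERE-clause-like row predicate."""
    return [{c: r[c] for c in columns} for r in rows if predicate(r)]

# Replicate only the EAST rows, and leave the LOB column behind.
east = subset(source_rows, ["id", "region"], lambda r: r["region"] == "EAST")
print(east)  # [{'id': 1, 'region': 'EAST'}]
```

Each regional target would use its own predicate, so every office receives only its own records.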
Views as sources
Simple views are useful in data warehouse scenarios if you want to
restructure copies so that data in target tables is easily queried.
10. The Apply qualifier appears in many control tables; therefore, do not attempt to change its value after you set it.
Views are also useful for introducing related columns in other tables. You can
reference the columns in other tables in the subscription-set member
predicates, which facilitates the routing of updates to the appropriate target
sites.
You can use joins and unions to manipulate data in the following ways:
v Joins of tables from a single DB2 source server (by defining a DB2 view as
the join of certain tables)
v Unions of tables from one source server (by using multiple subscription-set
members in a set where each member has the same target table)
v Unions of tables from multiple source servers, sometimes referred to as
multisite unions (by creating multiple subscription-set members in multiple
subscription sets because there are multiple source servers)
Target tables
When you define a subscription-set member, you must specify the type of
target table that you want to use. The following types of tables are available:
v User copy tables
v Point-in-time tables
v Aggregate tables
v Consistent-change-data (CCD) tables
v Replica or row-replica tables
v User tables
Point-in-time tables
These tables are read-only copies of the replication source with a timestamp
column added. The timestamp column is originally null. When changes are
replicated, values are added to indicate the time when updates are made. Use
these types of tables if you want to keep track of the time of changes.
Aggregate tables
These are read-only tables that use SQL column functions (such as SUM and
AVG) to compute summaries of the entire contents of the source tables or of
the recent changes made to the source table data. Rows are appended to
aggregate tables over time. There are two kinds of aggregate tables: base
aggregate tables and change aggregate tables.
Base aggregate tables summarize the contents of a source table. Use a base
aggregate table to track the state of a source table on a regular basis. For
example, assume that you want to know the average number of customers
that you have each month. If your source table has a row for each customer,
you would average the number of rows in your source table on a monthly
basis and store the results in a base aggregate table.
A base aggregate table does not track change information. For example,
assume you had an average of 500 customers in January and 500 in February;
however, in February you lost two existing customers and gained two new
ones. The base aggregate table shows you that you had the same average
number of customers in both months, but it does not show the changes that
were made during February. If you want to track change information, use a
change aggregate table.
Change aggregate tables work with the change data in the control tables, not
with the contents of the source table. Use a change aggregate table to track
the changes (UPDATE, INSERT, and DELETE operations) made over time. For
example, assume that you want to know how many new customers you
gained each month (INSERTS) and how many existing customers you lost
(DELETES). You would count the changes that are made to the rows in your
source table on a monthly basis and store that number in a change aggregate
table.
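The difference between the two aggregate types in the customer examples above can be sketched like this (the monthly figures are hypothetical):

```python
# Base aggregate: summarize the state of the source table each month.
# Change aggregate: summarize the captured changes for the month.
customers_in_january = 500 * [None]            # one entry per customer
changes_in_february = ["DELETE", "DELETE", "INSERT", "INSERT"]

# Base aggregate row for February: still 500 customers, so the
# churn during the month is invisible.
base_aggregate_feb = len(customers_in_january)

# Change aggregate rows for February: gains and losses are visible.
gained = changes_in_february.count("INSERT")   # new customers
lost = changes_in_february.count("DELETE")     # lost customers
print(base_aggregate_feb, gained, lost)        # 500 2 2
```

The base aggregate answers "how many customers do I have?"; the change aggregate answers "how many did I gain and lose?".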
User tables
You don’t actually specify a user table as a target; however, in
update-anywhere replication, a user table is automatically a target for the
replicas or row-replicas that are associated with it. The user table is the parent
of the replica, and its copies are dependent replicas. The parent of the replica
receives updates from a dependent replica and, if there are no conflicts
detected, it replicates the changes to the other dependent replicas. The parent
of the replica is the primary source of data. If there are any update conflicts
detected, the contents of the parent of the replica prevail. Typically your
applications access the dependent replica tables; however, they connect to the
server containing the user table when the replicas are not available.
Interval timing
This is the simplest method of controlling the timing of replication. To use
interval timing, you choose a date and time for the Apply program to start
replicating data to the target, and set a time interval that describes how
frequently you want the data replicated. When the Apply program stops, it
will not start again until the time interval passes. The time interval can be a
period of time (from one minute to one year), or it can be continuous. A
continuous time interval means that the Apply program starts replication cycles
one after the other, with only a few seconds delay in-between (you can control
the delay with the start parameter). The intervals that you provide are
approximate. The interval actually used by the Apply program depends on
the number of updates that the Apply program has to replicate and on the
availability of resources (that is, database table, table space).
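Interval timing can be sketched as a simple calculation of the earliest next start time. A continuous interval is modeled here as a short fixed delay; the actual delay is governed by a start parameter, and the real interval remains approximate for the reasons given above:

```python
from datetime import datetime, timedelta

def next_start(last_finish, interval, continuous_delay=timedelta(seconds=5)):
    """Earliest time the next replication cycle may begin.

    interval=None models a 'continuous' interval: the next cycle
    starts after only a few seconds of delay.
    """
    if interval is None:                  # continuous timing
        return last_finish + continuous_delay
    return last_finish + interval         # interval timing (approximate)

finish = datetime(2000, 1, 1, 12, 0, 0)
print(next_start(finish, timedelta(minutes=30)))  # 2000-01-01 12:30:00
print(next_start(finish, None))                   # 2000-01-01 12:00:05
```

In practice the Apply program may start later than this earliest time, depending on the volume of changes and the availability of resources.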
Event timing
This is the most precise method of controlling the timing of replication. To use
event timing, you specify the name for an event when you define the
subscription set. You then set the time when you want that event processed.
You or your application must provide information for event timing. This
information is stored in the subscription events table. The Apply program
searches the subscription events table for the event name and the associated
time and end-of-period information.
On-demand timing
You can replicate data on demand by using the ASNSAT command. This
command starts the Apply program and, if necessary, it also starts the
Capture program. Each program self-terminates after it completes its part of
one replication cycle. This command is supported on Windows 32–bit
operating systems, and its invocation parameters are described in “Replicating
on demand (Windows 32–bit operating systems only)” on page 279.
Figure 2. Data distribution. Changes made to a source table are replicated to read-only target
tables.
Data consolidation
In data consolidation configurations, a central data server is used as a
repository for data from many data sources (see Figure 3 on page 21).
Therefore, this configuration consists of many source tables or views and one
target table with multiple subset views. Changes made to each data source are
replicated to the central data server, which is read-only.
Restriction: If you consolidate data from more than one server into a CCD
target table, you must not use that CCD target table as a replication source for
other target tables. The original servers use separate log sequences that cannot
be distinguished in further replication.
Update anywhere
In update-anywhere configurations, a replication source has target tables that
are read/write copies. Changes made to a target table are applied to the
source table, which maintains the most up-to-date data. If a conflict occurs
between a source and target, the source wins. The source table then applies
the changes to all of its target tables. Unless you design your application
correctly, update conflicts can occur when the data is replicated (see Figure 4
on page 22). It is best to design your application so that a conflict can never
occur when data is replicated from the source to all the target tables (see
Figure 5 on page 22). You have the option of ignoring conflicts and rejecting
any conflicting updates. By rejecting conflicting updates, you risk losing some
information.
Figure 5. Update-anywhere replication with no risk of conflicts between target tables. Each
read/write target table has a unique set of rows that can be updated locally; the source table at the
source server maintains the most up-to-date data.
Occasionally connected
In occasionally connected configurations, you have the flexibility to connect to
and transfer data to and from a primary source on demand. These types of
configurations allow users to connect to the primary data source only long
enough to synchronize their local database. The data source doesn’t require a
continuous connection for replication administration (see Figure 6 on page 23).
Figure 6. Occasionally connected configuration. The target servers are not continuously connected
to the source server; changes that are made to the tables are replicated when the target server is
connected to the source server.
You can use DB2 Universal Database Satellite Edition (or any other DB2
server that participates in a satellite environment) to administer satellites,
which are occasionally connected DB2 servers. DB2 data replication enables
you to synchronize data between a central control site and many satellites. At
the home office you set up the replication environment, test it, and when it is
ready to be rolled out to the occasionally connected systems, you store it in
the Satellite Administration Center database. You don’t access any of the
occasionally connected systems and only need to set up the environment once.
For information about setting up data replication for satellites, enabling the
satellite environment for replication, and testing replication on a satellite, see
the DB2 Universal Database Administering Satellites Guide and Reference.
Figure 7. Audit information. Audit data is replicated to a target table that can be read by the
customer’s application.
Design highlights: Both the before-image and the after-image values of each
row are captured and stored. The authorization ID of the user who changed
the data is also stored in the audit tables. All of this information is captured
from the DB2 for OS/390 log.
Consolidating data from distributed databases
Requirements: A large retail chain has almost 500 stores around the country,
each of which gathers purchase details through an electronic point of sale
(EPOS) system. Each store keeps its data in local databases on DB2 for AIX.
The data is transferred nightly to a central DB2 for OS/390 site using a
pre-existing file-transfer process from the EPOS terminals. The company
wants to enhance the data at the central site.
Figure 8. Consolidating data from distributed databases. Data from three source servers is
replicated to two target tables on a target server.
Design highlights: The Apply program uses base aggregate and change
aggregate tables to summarize the consolidated store data. The base aggregate
tables summarize the contents of the source files. The change aggregate tables
summarize the results of the changes made between each refresh of the target
that is performed by the Apply program.
Distributing data to remote sites
Requirements: A small bank installed several new Windows NT client/server
applications in its 85 branches. A major source of data for the new
applications is the customer and financial reference data, which is derived and
held at a host site in two operational systems, one on DB2 for OS/390 and the
other on DB2 for AIX. If branches accessed the data directly from the host site,
network traffic would be congested and the availability of the production data
could be affected.
Figure 9. Distributing data to remote sites. Source data is consolidated on an AIX server and replicated to the
branches. Each branch gets all of the financial data and some of the customer data. WHERE clauses ensure that
each branch gets only the records that pertain to its own customers.
Design highlights: One Apply program resides on AIX and replicates from
DB2 for OS/390 and DB2 for AIX. There is one subscription set for replicating
from DB2 for OS/390 to DB2 for AIX and one for replicating from DB2 for
AIX to DB2 for AIX.
An Apply program also resides on the target servers at each branch. The
Apply program on the source server runs separately from the Apply programs
at the branches.
The Capture and Apply programs maintain complete, condensed CCD tables
in DB2 for AIX. The administrator chose a condensed CCD table because that
type of staging table contains only the most recent change made to a row, so
network traffic is reduced during replication.
When the subscription sets were created for each branch, the administrator
put the control server on the Windows NT server. If the administrator had put
the control server on DB2 for AIX, the Apply program from each Windows
NT server would need to connect to the host site over the network to read
and update the control information about the subscription set, and to detect
changes to its control information.
Distributing IMS data to remote sites
Requirements: A large financial institution wants to improve the flow of
information from two legacy operational systems to its OS/2-based branches.
It wants to provide more accurate and timely data to help loan-application
research and to detect credit-card fraud. The data for loan applications is in
DB2 for OS/390, and the credit card details are in an IMS system. Previous
attempts to copy the legacy data consisted of an unworkable mixture of
ad-hoc reports and file transfer techniques.
Design highlights: IMS DataPropagator captures changes from the IMS log
and creates a noncondensed CCD table in DB2 DataPropagator format on the
OS/390 source server. DB2 DataPropagator uses this CCD table as a
replication source. The Capture program on the OS/390 server captures
information from the local tables that contain the credit-card and
loan-application data. The Apply program on the OS/2 target server pulls the
change data to the target tables.
Accessing data continuously
Requirements: An international bank wants to keep its system online 24
hours a day. Currently the system is online 23 hours 45 minutes a day. Every
day the bank stops the system to quiesce it for a batch application, which
requires exactly one day’s worth of data. During the 15 minutes when the
system is down, the required tables are extracted. After the extraction, the
system is made available for the next financial day.
Replication solution: Data changes made during the day are captured and
replicated to CCD tables (see Figure 11 on page 29). The batch application was
modified to process the changes in the CCD tables instead of the table
extracts. The online system does not need to be stopped to provide consistent
data for the batch application.
Replication solution: Updates are captured from the key operational tables
and, on an hourly basis, replicated to CCD tables in the decision support
system (see Figure 12).
Figure 12. Replicating operational data to decision support systems. The noncondensed CCD
target table is used to record all changes made to the source database.
The Capture and Apply programs are given job priorities such that replication
does not impact production CPU resources. The decision support system
could be implemented just as easily on any of the supported target platforms
and could still be ported to other platforms if required.
Using target tables as sources of updates (update anywhere)
Requirements: A financial institution has hundreds of agents at several
branches who must fill in online forms to set up and modify client accounts.
The agents base the quotation rates on information that was generated at the
head office and sent to the branch. The agents send reports back to the head
office, and the accounts are finalized only after the information is verified at
the head office. The agents would be more productive if they had access to
up-to-date data without the network problems of accessing the central
database directly.
This type of replication works best when transaction conflicts between the
central database and the updatable copies can be avoided, such as when
copies can update only key ranges at specific sites, or when sites can make
updates only during certain time periods.
DB2 DataPropagator detects conflicts that occur when the same row is
updated on the host system and on an agent’s system and neither change has
been replicated. If an agent made updates that are in conflict, these updates
are discarded during replication to ensure data integrity. The transaction
containing the conflict, and any captured transactions that depend on the
conflicting transaction, are backed out.
Updating data on occasionally connected systems
Requirements: An insurance company wants to equip its sales agents, who
rarely visit the company’s home office, with a set of offers to attract both new
and existing customers—special introductory offers and personalized
packages. Much of the time, the agents’ computers will not be connected to
the network.
Replication solution: The Capture program places the updates that are made
to the operational data in DB2 for OS/390 tables. The subscription timing
interval is set to four hours to make sure that the query results and reports
are based on current operational data. Using DataJoiner nicknames, the Apply
program replicates updates from the DB2 tables to the query and reports
tables in the Informix database.
The steps in this chapter use the data in the DEPARTMENT table from the
SAMPLE database. The fully qualified name is userID.DEPARTMENT, where
userID is the user ID that created the table. Table 1 on page 36 shows the
DEPARTMENT table.
For the remainder of this exercise, use the user ID with which you created the
SAMPLE and COPYDB databases. Because you created the databases, you
have the authority (DBADM or SYSADM) to perform replication tasks.
You require a simple data distribution configuration, with changes from one
replication source being replicated to a single read-only copy. This section
describes the design and planning issues that you need to consider before you
perform any replication tasks.
Replication source
You already know that the replication source is the userID.DEPARTMENT
table in the SAMPLE database. Before you set up your environment, you must
decide what you want to replicate from that table. You decide to make all
columns available for replication, and you want to save before-image values
for each of them so that you can see what has changed.
Using existing target tables: When you use the Control Center, the target
table is created if it doesn’t exist. This method of automatically generating a
target table is preferred because it ensures correct mapping to the replication
source. You can use existing target tables if they were created by any DB2
product.
Assume that you want the target table in COPYDB to contain the following
columns of information:
DEPTNO
Information from the DEPTNO column in the replication source (this
column will be the primary key of the target table)
DEPTNAME
Information from the DEPTNAME column in the replication source
MGRNO
Information from the MGRNO column in the replication source
ADMRDEPT
Information from the ADMRDEPT column in the replication source
LOCATION
Information from the LOCATION column in the replication source
Because the columns in the target table simply reflect the data from the source
table, and there is to be only one record in the target table for each record in
the source table, you can use a user copy type of target table.
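For example, the generated user copy target table would be similar to the following definition. The data types shown are assumptions based on the SAMPLE DEPARTMENT table and might differ in your release; the Control Center generates the actual definition for you:

```sql
-- Sketch only: column data types are assumed from the SAMPLE database.
CREATE TABLE DEPTCOPY
  (DEPTNO    CHAR(3)     NOT NULL,
   DEPTNAME  VARCHAR(29) NOT NULL,
   MGRNO     CHAR(6),
   ADMRDEPT  CHAR(3)     NOT NULL,
   LOCATION  CHAR(16),
   PRIMARY KEY (DEPTNO));
```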
Replication options
For the purpose of this exercise, you decide to store the target table and the
replication control tables in the default table space, USERSPACE1.
Typically you will want to put the UOW table and the CD tables (and CCD
tables if you are using them) in their own table spaces, with table or table
space locking. You can put all other replication control tables together in one
table space with row-level locking.
For scheduling replication, assume that you want DB2 replication to check for
any changes from the source table every minute and replicate them to the
target table. Although a report-generating application doesn’t require that
kind of turnaround, you want to test the replication environment that you set
up to make sure that everything is working correctly.
Also, you decide that after each replication cycle, you want to delete any
records from the Apply audit trail table that are older than one week (seven
days). This pruning prevents the table from growing too large.
You won’t need to set constraints because you have a read-only target.
Constraints are needed only when applications are updating a target table. In
this scenario, the updates are committed at the replication source, and they
must satisfy the constraints defined on that system. There is no reason for you
to reevaluate the same constraints at the target.
Tip: Most of the time you will want to use the default. By saving the
SQL to a file, you can look at the SQL to understand what it will
do, make any modifications that you require, save the file, and
run it after you are confident that it will do what you expect it to
do.
b. The System name window opens. Click OK.
c. Use the File browser window to create a file in which to save the SQL:
1) In the Drives field, select C:.
2) In the Directories list, select scripts by double-clicking it. (To move
up one directory level, double-click the two dots (..) at the top of
the list.)
3) In the Path field, type replsrc.sql.
4) Click OK.
Tip: By default, the SQL file is saved in the sqllib directory. When you
work in your own replication environment, you will want to keep
all the files in a separate directory instead of storing them in
sqllib.
d. View the file that you created. Go to the C:\scripts directory and open
the replsrc.sql file using an editor. For the purpose of this exercise,
don’t change anything in the file. Close the file.
Tip: You might want to expand the window to view all of the
columns. Also, some rows have names beginning with the
letter X (for example, XDEPTNO). These rows store the
before-image column values that you requested.
3) On the Rows page, indicate that you want to replicate rows that
meet certain criteria by typing the following for the WHERE clause:
DEPTNO >= 'A00'
4) Click OK to save these settings and return to the Define replication
subscription window.
3. Define the SQL statements that will be processed when the subscription
set is run:
a. Click SQL to open the SQL window.
b. Click Add to open the Add SQL window.
c. Indicate that you want to delete any records in the Apply audit trail
table that are older than seven days by typing the following processing
statement in the SQL statement or Call procedure field:
DELETE FROM ASN.IBMSNAP_APPLYTRAIL
WHERE LASTRUN < (CURRENT TIMESTAMP - 7 DAYS)
d. Indicate that "row not found" is an acceptable SQL state by typing the
value 02000 in the SQLSTATE field and clicking Add. This value is
added to the Acceptable SQLSTATE values list box.
Tip: You can define up to ten SQL states that you want to ignore for
this subscription.
Tip: The value that you set for data blocking depends on how much
free space you have on the workstation that runs the Apply
program. Typically, you would use a value from 5 to 20 minutes. If
you want to be very conservative, use 1 minute.
d. Click OK to save these values, close the Subscription Timing notebook,
and return to the Define replication subscription window.
5. Submit the subscription set.
a. Click OK in the Define replication subscription window. The Run Now
or Save SQL window opens.
b. Specify the control server, which is the database that will contain the
subscription set control information, by selecting COPYDB. This server is
the database in which you want to store the subscription control
information.
c. On the Run Now or Save SQL window, accept the default option,
which is to save the SQL file and run it later, by clicking OK.
d. The System name window opens. Click OK.
e. Use the File browser window to create a file in which to save the SQL:
1) In the Drives field, select C:.
2) In the Directories list, select scripts by double-clicking it.
3) In the Path field, type replsub.sql.
Tip: If your source server were on another machine, you would need to log
on to the source server over the network. You would use a user ID that
has DBADM or SYSADM authority for the source server. However,
because the source server for this exercise is on your local machine, you
don’t need to log on again.
Tip: You must perform the backup action to make the database accessible.
The database was put in backup pending mode when you specified
that you want to retain log files for roll-forward recovery.
Step 5: Bind the Capture and Apply programs
Tip: For the purpose of this exercise, you will manually create and bind the
Capture and Apply program packages. However, DB2 DataPropagator
for all supported UNIX, Windows, and OS/2 operating systems can
automatically create and bind the packages for you.
The applyur.lst and applycs.lst files contain a list of the packages that were
created.
2. Connect to the target server:
DB2 CONNECT TO COPYDB
3. Create and bind the Apply package to the target server database by typing
both of the following commands:
DB2 BIND @APPLYUR.LST ISOLATION UR BLOCKING ALL
DB2 BIND @APPLYCS.LST ISOLATION CS BLOCKING ALL
The applyur.lst and applycs.lst files contain a list of the packages that were
created.
Step 6: Create a password file
For end-user authentication to occur at the source server, you must create a
password file with an AUTH=SERVER scheme. The Apply program uses this
file when connecting to the source server. Make sure that the user ID that will
run the Apply program can read the password file.
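Each line of the file has the following general form. (The exact keywords are shown here as an assumption; verify them against the documentation for your release.)

```
SERVER=server USER=userid PWD=password
```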
Where:
server
The name of the source, target, or control server, exactly as it appears
in the subscription set table. (In this example, SAMPLE and COPYDB.)
userid
The user ID that you plan to use to administer that particular server.
This value is case-sensitive on Windows NT and UNIX operating
systems.
password
The password that is associated with that user ID. This value is
case-sensitive on Windows NT and UNIX operating systems.
Password file format: Do not put blank lines or comment lines in this file.
Add only the server-name, user ID, and password information. This
information enables you to use different passwords or the same password
for each server.
4. Save and close the file as deptqual.pwd.
Password file naming convention:
The password file name is applyqual.pwd, where applyqual is a
case-sensitive string that must match the case and value of the Apply
qualifier (APPLY_QUAL) in the subscription set table. The file naming
convention from Version 5 of DB2 DataPropagator is also supported:
ApplyqualInstance_nameControl_server.pwd, which includes the
case-sensitive Apply qualifier, the instance name that the Apply program
runs under (the default name is DB2, in uppercase), and the name of the
control server in uppercase (for example, COPYDB).
For more information about authentication and security, refer to the IBM DB2
Administration Guide.
Step 7: Replicate the scenario data
After defining the replication source and the subscription set, you can submit
the copy request by starting the Capture and Apply programs.
The Capture program starts running but no new command prompt appears.
This action creates a *.ccp file. The Capture program is initialized but it does
not start capturing changes for the defined replication source until you start
the Apply program and it completes its initial full-refresh copy.
Tip: You must start the Apply program in the same directory in which
you stored the password file. If you try to start the Apply program in
another directory, you will get an error message.
2. Type the following command to start the Apply program:
ASNAPPLY DEPTQUAL COPYDB
Tip: You can use the LOADX invocation parameter to call the ASNLOAD
program. Type the LOADX parameter after the database name
(COPYDB) in the command statement above. The default ASNLOAD
program uses the EXPORT utility to export data from the source table
and uses the LOAD utility to fully refresh the target table. You can
modify ASNLOAD to call any IBM or vendor utility.
The Apply program starts running, but no new command prompt appears.
You can check the Apply trail table (ASN.IBMSNAP_APPLYTRAIL) in
COPYDB for status information.
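For example, from a DB2 command window you might issue a query such as the following. The column names shown are believed to be part of the ASN.IBMSNAP_APPLYTRAIL control table; verify them against the control table reference for your release:

```sql
-- Check the status of recent Apply cycles for this subscription set.
SELECT APPLY_QUAL, SET_NAME, STATUS, LASTRUN
FROM ASN.IBMSNAP_APPLYTRAIL
ORDER BY LASTRUN DESC;
```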
If you view the DEPTCOPY target table after one replication cycle, you should
see results that match the data shown in Table 2.
Table 2. DEPTCOPY table

DEPTNO  DEPTNAME                 MGRNO   ADMRDEPT  LOCATION
A00     SPIFFY COMPUTER SERVICE  000010  A00       -
B01     PLANNING                 000020  A00       -
C01     INFORMATION CENTER       000030  A00       -
Tip: The replication process does not occur immediately. You should wait
approximately five minutes before checking the table.
Tip: On Windows NT, you can use the Task Manager to determine if the
Capture program (ASNCCP) is running.
Perform the following steps in the DB2 command window that you opened in
the previous step.
You can run DB2 utilities on your database now that you stopped the Capture
and Apply programs. (Running the utilities is beyond the scope of this
exercise.)
You can use DB2 data replication to maintain data in more than one location
and keep the various copies of it synchronized. You must determine where
your source data will be coming from. You must decide whether you want all
or some of the source information copied, or whether you want only changes
copied, and how many copies (or targets) you need. You also need to
determine where the copies will be located.
Although you cannot update the source tables and target tables
synchronously, you can schedule the updates to meet the needs of your
applications and your replication environment. The frequency of replication
depends on how much lag time is acceptable between the time that the source
is updated and the time that the targets are updated. Therefore, you must
decide how synchronized the copies must be with the source and with each
other before you can come up with a replication model.
After you understand your application data requirements, you can design the
replication model that will help you meet those requirements. There are many
factors that you need to consider when you design your model. These are some
of the more important decisions that you need to make:
The replication configuration
Based on your data needs, you must decide whether you need a
consolidation, distribution, update-anywhere, or occasionally
connected configuration.
When you are ready to plan your replication environment, see “Chapter 5.
Planning for replication” on page 61 for detailed planning information.
To perform the initial replication, you must perform the following steps in the
exact order:
1. Make sure that at least one replication source is defined.
2. Start the Capture program. This step includes specifying invocation
parameters (such as NOPRUNE, which prevents automatic pruning of the
CD and UOW tables). Even after the Capture program is fully initialized, it
will not capture any changes until the Apply program signals it to do so.
3. If you haven’t already done so, define at least one subscription set and one
subscription set member.
4. Start one or more Apply programs. This step includes specifying
invocation parameters (such as LOADX, which calls ASNLOAD—an exit
routine to initialize target tables). Each Apply program will perform an
initial full refresh for each of its subscription sets.
11. If the Capture program and the Apply program are not on OS/390, they will automatically bind.
Tip: Use the WARMNS option in the Capture program if you want to be able
to repair any problems (such as unavailable databases or table spaces)
that might prevent a warm start from occurring.
Adding to your replication environment
You probably need to add replication sources and subscription sets to your
replication environment from time to time.
To add to your replication environment, you must perform the following steps
in the exact order:
1. Define the new replication source.
2. Run the Capture reinit command, or stop the Capture program and warm
start it.
3. Define the new subscription sets and subscription set members.
4. The Apply program will automatically recognize the new subscription set
if the Apply program is already running and it uses the Apply qualifier
that is associated with the new subscription set. Otherwise, you must start
a new Apply program using the appropriate Apply qualifier before the
Apply program can recognize the new subscription set.
Copying your replication environment
After you define your replication environment on one system (for example, a
test system), you can copy the replication environment to another system (for
example, a production system). You use the promote functions to
reverse-engineer your tables, replication sources, and subscription sets and to
create a script file with the appropriate data definition language (DDL) and
data manipulation language (DML). For more information about the promote
functions, see “Copying your replication configuration to another system” on
page 127 and the on-line help for the administration interface.
12. If you use non-IBM load utilities, it is recommended that you use the offline load feature in DJRA. For more
information on setting up the offline load feature with DJRA, see “Loading target tables offline using DJRA” on
page 127.
Capacity planning
The Capture program does not generally impact other applications and
requires a minimum of central processing unit (CPU) or central processing
complex (CPC) capacity. For example, you can schedule Capture for OS/390
at a lower priority than the application programs that update the source
tables. In this case, the Capture program lags behind when CPU resource is
constrained.
The Capture program does use CPU resources when it prunes the CD tables
and UOW table, but you can defer this activity to reduce system impact.
In general, the DB2 Control Center and DJRA do not require many local CPU
resources. However, when you generate the SQL for replication sources and
subscription-set definitions, DB2 DataPropagator extensively searches the
catalogs of the source server. For large sites, these searches can have a
noticeable CPU or database system impact.
Storage planning
In addition to the storage required for DB2, replication requires storage for:
Database log and journal data
The additional data logged to support the replication of data.
All of the sizes given in the following sections are estimates only. To prepare
and design a production-ready system, you must also account for such things
as failure prevention. For example, the holding period of data (discussed in
“Target tables and control tables” on page 63) might need to be increased to
account for potential line outage.
Although estimating the increase in the log or journal volume is not easy, in
general you will need an additional three times the current log volume for all
tables involved in replication.
To make a more accurate estimate, you must have detailed knowledge of the
updating application and the replication requirements. For example, if an
In addition to logging for the source database, there is also logging for the
target database, where the rows are applied. Because the Apply program does
not issue interim checkpoints, you should estimate the maximum amount of
data that the Apply program will process in one time interval and adjust the
log space (or the space for the current receiver for AS/400) to accommodate
that amount of data.
Active log file size for Capture for VSE and VM and current receiver size
for Capture for AS/400
For VM and VSE, when the active log is full, DB2 archives its contents. For
AS/400, when the current receiver is full, the system switches to a new one;
you can optionally save and delete old ones no longer needed for replication.
When a system handles a large number of transactions, the Capture program
can occasionally lag behind. If the log is too small, some of the log records
could be archived before they are captured. Capture for VSE and VM running
with DB2 for VSE & VM cannot recover archived log records. 13
For DB2 for VSE & VM, ensure that your log is large enough to handle at
least 24 hours of transaction data. For DB2 for AS/400, ensure that the current
receiver is large enough to handle at least 24 hours of data.
Target tables and control tables
The space required for a target table is usually no greater than that of the
source table (or tables), but can be much larger if the target table is
denormalized or includes before-images (in addition to after-images) or
history data. The following also affect the space required for a target table: the
number of columns replicated, the data type of columns replicated, any row
subsets defined for the subscription-set member, and data transformations
performed during replication.
The CD tables and the UOW table also affect the disk space required for a
source database. The space required for the replication control tables is
generally small because each requires only a few rows.
13. Capture for OS/390 running with DB2 for MVS/ESA V4 or higher and DB2 Universal Database V5 or later can
recover archived log records.
When calculating the number of bytes of data replicated, you need to include
21 bytes for overhead data added to the CD tables by the Capture program. In
the formula, determine the number of inserts, updates, and deletes to the
source table within the interval between capturing and pruning of data. The
exception factor allows for such things as network failures or other failures
that prevent the Apply program from replicating data. Use a value of 2
initially, then refine the value based on the performance of your replication
environment.
Example: If the Capture program prunes applied rows from the CD table once
daily, your interval is 24 hours. If the rows in the CD table are 100 bytes long
(plus the 21 bytes for overhead), and 100,000 updates are applied during a
24-hour period, the storage required for the CD table is about 12 MB.
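The arithmetic behind this example can be sketched as follows. The function name and the optional exception factor are illustrative only; the 21-byte overhead figure comes from the text above:

```python
def cd_table_bytes(row_bytes, changes, exception_factor=1, overhead=21):
    # Each CD-table row carries about 21 bytes of overhead added by the
    # Capture program; multiply by the number of changes captured in the
    # interval between pruning runs, then by the exception factor.
    return (row_bytes + overhead) * changes * exception_factor

# 100-byte rows, 100,000 updates in a 24-hour prune interval:
size = cd_table_bytes(100, 100_000)
print(f"{size / 1_000_000:.1f} MB")  # prints "12.1 MB" -- about 12 MB
```

Starting with an exception factor of 2, as suggested above, would double this estimate until you can refine it from observed behavior.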
The UOW table grows and shrinks based on the number of rows inserted in a
particular time interval (the number of commits issued within that interval by
transactions that update source tables or by Capture for AS/400). You should
initially overestimate the size required and monitor the space actually used to
determine if any space can be recovered. The size of each row in the UOW
The size of the spill file is equal to the size of the data selected for replication
during each replication interval. You can estimate the size of the spill file by
comparing the frequency interval (or data-blocking interval; see “Data
blocking for large volumes of changes” on page 68) planned for the Apply
program with the volume of changes in that same time period (or in a peak
period of change). The spill file’s row size is the target row size, including any
DB2 DataPropagator overhead columns. This row size is not in DB2 packed
internal format, but is in expanded, interpreted character format (as fetched
from the SELECT). The row also includes a row length and null terminators
on individual column strings.
Example: If change volume peaks at 12,000 updates per hour and the Apply
program frequency is planned for one-hour intervals, the spill file must hold
one-hour’s worth of updates, or 12,000 updates. If each update represents 100
bytes of data, the spill file will be about 1.2 MB.
Network planning
This section describes connectivity requirements, discusses where to run the
Apply program (using the push or pull configuration), and describes how
data blocking can improve performance.
Connectivity
Because data replication usually involves physically separate databases,
connectivity is important to consider during the planning stages. The
workstation that runs the DB2 Control Center or DJRA must be able to
14. If you are using the ASNLOAD utility, you have a load input file instead of a load spill file.
If you use password verification for DB2 for OS/390, use Data
Communication Service by adding DCS to the CATALOG DB statement. If
you connect using SNA, add SECURITY PGM to the CATALOG APPC NODE
statement. However, if you connect using TCP/IP, there is no equivalent
security keyword for the CATALOG TCPIP NODE statement.
Be sure to limit the layers of emulation, LAN bridges, and router links
required, because these can all affect replication performance.
Where to run the Apply program: push or pull configuration
You can run the Apply program at the source server or at the target server.
When the Apply program runs at the source server, you have a push
configuration: the Apply program pushes updates from the source server to the
target server. When the Apply program runs at the target server, you have a
pull configuration: the Apply program pulls updates from the source server to
the target server.
The Apply program can run in either or both configurations at the same time:
it can push updates for some subscription sets and pull updates for others.
Figure 16 shows the differences between the push and pull configurations.
In the push configuration, the Apply program connects to the local source
server (or to a DB2 DataJoiner source server for non-IBM sources) and
retrieves the data. Then, it connects to the remote target server and pushes the
updates to the target table. The Apply program pushes the updates row by
row, and cannot use DB2’s block-fetch capability to improve network
efficiency.
In the pull configuration, the Apply program connects to the remote source
server (or to a DB2 DataJoiner source server for non-IBM sources) to retrieve
the data. DB2 can use block fetch to retrieve the data across the network
efficiently. After all data is retrieved, the Apply program connects to the local
target server and applies the changes to the target table.
To set up a push or pull configuration you need only to decide where to run
the Apply program. DB2 DataPropagator, the DB2 Control Center, and DJRA
recognize both configurations.
Data blocking for large volumes of changes
Replication subscriptions that replicate large blocks of changes in one Apply
cycle can cause the spill files or log (for the target database) to overflow. For
example, batch-Apply scenarios can produce a large backlog of enqueued
transactions that need to be replicated. Or, an extended outage of the network
can cause a large block of data to accumulate in the CD tables, which can
cause spill-file overflows.
Use the Data Blocking page of the Subscription Timing notebook in the DB2
Control Center or the Blocking factor field of the Create Empty Subscription
Sets window in DJRA to specify how many minutes worth of change data the
Apply program can replicate during a subscription cycle. The number of
minutes that you specify determines the size of the data block. 15 This value is
stored in the MAX_SYNCH_MINUTES column of the Subscription set table. If
the accumulation of change data is greater than the size of the data block, the
Apply program converts a single subscription cycle into many mini-cycles,
reducing the backlog to manageable pieces. It also retries any unsuccessful
mini-cycles and will reduce the size of the data block to match available
system resources. If replication fails during a mini-cycle, the Apply program
retries the subscription set from the last successful mini-cycle. Figure 17 on
page 69 shows how the changed data is broken down into subsets of changes.
15. If your subscription set includes tables with DATALINK columns, this value also specifies the number of files
passed to the ASNDLCOPY exit routine.
By default, the Apply program uses no data blocking; that is, it copies all
available committed data that has been captured. If you set a data-blocking
value, the number of minutes that you set should be small enough so that all
transactions for the subscription set that occur during the interval can be
copied without causing the spill files or log to overflow. For AS/400, ensure
that the total amount of data to be replicated during the interval does not
exceed 4 MB.
Restrictions:
v You cannot split a unit of work.
v You cannot roll back previous mini-subscription cycles.
v You cannot use data blocking for full refreshes.
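The mini-cycle behavior described above can be sketched as follows. This is a simplified illustration, not the Apply program's actual algorithm: the real program also retries failed mini-cycles and can shrink the block to fit available resources.

```python
import math

def mini_cycles(backlog_minutes, max_synch_minutes):
    # Split the backlog of change data into blocks of at most
    # MAX_SYNCH_MINUTES worth of changes, one mini-cycle per block.
    if max_synch_minutes <= 0:
        return 1  # no data blocking: one full cycle copies everything
    return math.ceil(backlog_minutes / max_synch_minutes)

# A 6-hour network outage worked off with a 30-minute blocking factor:
print(mini_cycles(360, 30))  # prints 12
```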
The following sections describe the data manipulations that you can perform
using the Control Center or DJRA. This chapter also describes replicating
large-object (LOB) data, limits for column names for before-image data, and
data-type restrictions.
Subsetting columns and rows
IBM Replication supports both column (vertical) and row (horizontal)
subsetting of the source table. This means that you can specify that only a
subset of the source table columns and rows be replicated to the target table,
rather than all of the columns and rows:
Column subsetting
In some replication scenarios, you might not want to replicate all
columns to the target table, or the target table might not support all
data types defined for the source table. You can define a column
subset that has fewer columns than your source table. Column
subsetting is available for all tables except replica tables.
You can define column subsetting at one of two times:
- When you define a replication source table for differential refresh.
Select only those columns that you want to make available for
replication to a target table. Because CD tables must contain
sufficient key data for point-in-time copies, you must include
primary-key columns in your subset. The columns that you do not
select will not be available for replication to any target table.
- When you define a subscription set.
For the Control Center, use the advanced subscription options to
select only those columns that you want to replicate to the target
table. For DJRA, you can select the columns when you add
members to the subscription set. The columns that you do not select
are still available for other subscription sets, but are not included
for the current subscription set.
Recommendation: When you define a replication source, select all
columns (that is, do not subset any of them). Create your column
subsets when you define subscription sets. By defining your column
To define a join view as a replication source using the Control Center, first
define all tables that participate in the join as replication sources (you do not
need to define subscriptions for them). To define a join view as a replication
source using DJRA, you can use existing views or you can define a join view
that includes tables that are not defined as replication sources. You can use
the Control Center or DJRA to define the view as a replication source; see
“Defining views as replication sources” on page 108. If the replication sources
defined in the join have CD or CCD tables, the Control Center or DJRA
creates a CD view from the replication sources’ CD tables.
16. For example, knowing where to send a bank account update may require a join of the account table with the
customer table to determine which branch of the bank the customer deals with. Typically, production databases
are normalized so that the geographic details, such as branch-number, are not stored redundantly throughout the
database.
Tip: If you define a view that includes two or more source tables as a
replication source, also define a CCD table for one of the source tables in the
join. This CCD table should be condensed and non-complete (or it can be
complete) and should be located on the target server. A view that includes
two or more source tables can be subject to the problem of “double deletes,”
which DB2 DataPropagator cannot replicate.
For example, if you define a view that contains the CUSTOMERS table and
the CONTRACTS table, and if you delete a row from the CUSTOMERS table
and also delete the corresponding row (from the join point of view) from the
CONTRACTS table during the same replication cycle, you have a double
delete. The problem is that, because the row was deleted from the two source
tables of the join, the row does not appear in the views (neither the base
views nor the CD-table views), and thus the double delete cannot be replicated.
Defining a condensed and non-complete CCD table for one of the source
tables in the join solves this problem because you can use the
IBMSNAP_OPERATION column of this CCD table to detect the deletes. You
can add an SQL statement to the definition of the subscription set that should
run after the subscription cycle. This SQL statement removes all the rows from
the target table for which the IBMSNAP_OPERATION is equal to “D” in the
CCD table.
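Such an after-cycle SQL statement might look as follows (a sketch; the target-table, CCD-table, and key-column names are assumptions):

```sql
-- Run after the subscription cycle completes. TGT.CUSTCONTRACT_V is a
-- hypothetical target table and CCD.CUSTOMERS_CCD a hypothetical
-- condensed, noncomplete CCD table on the target server.
DELETE FROM TGT.CUSTCONTRACT_V T
 WHERE EXISTS (SELECT 1
                 FROM CCD.CUSTOMERS_CCD C
                WHERE C.CUST_ID = T.CUST_ID
                  AND C.IBMSNAP_OPERATION = 'D');
```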
Replicating before and after images
You can define both before and after images in your replication sources and
subscriptions. A before-image column is a copy of a column before it is
updated, and an after-image column is a copy of a column after it is updated.
DB2 logs both the before-image and after-image columns of a table for each
change to that table. Replicating before images is required for
17. The CCD tables for simple inner-joins must be complete and condensed. See “Staging data” on page 82.
The before and after images have different values for different actions
performed against the target tables, as follows:
Full refresh
All before-image columns have a NULL value.
Insert
The before-image column has a NULL value.
Update
Column values before the change are captured in the before-image
columns; values after the change are captured in the after-image
columns.
When you enable logical-partitioning-key support, the before-image
values appear in the row for the delete and the after-image values
appear in the row for the insert. See “Enabling replication
logical-partitioning-key support” on page 109 for more information.
Delete
Both the before-image and after-image columns contain the
before-image value.
Before images do not make sense for base aggregate target-table types (there
is no before image for computed columns). All other target-table types can
make use of before-image columns.
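In the CD table, a before-image column is typically the after-image column name with a one-character prefix (commonly X), which is why column names for before-image data are limited to one character less than the maximum length. A sketch of a CD table with one before-image column (all table and user-column names are assumptions):

```sql
-- Hypothetical CD table for a source table with columns ACCT_ID and
-- BALANCE. XBALANCE holds the before-image value of BALANCE; the
-- X prefix is an assumption about the configured before-image prefix.
CREATE TABLE REPL.CD_ACCOUNTS (
    IBMSNAP_UOWID     CHAR(10) FOR BIT DATA NOT NULL,
    IBMSNAP_INTENTSEQ CHAR(10) FOR BIT DATA NOT NULL,
    IBMSNAP_OPERATION CHAR(1)  NOT NULL,
    ACCT_ID           INTEGER  NOT NULL,
    BALANCE           DECIMAL(11,2),
    XBALANCE          DECIMAL(11,2)
);
```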
Stored procedures use the SQL CALL statement without parameters. The
procedure name must be 18 characters or less in length (for AS/400, the
maximum is 128). If the source table is in a non-IBM database, DB2 DataJoiner
processes the SQL statements. The run-time procedures of each type are
executed together as a single transaction. You can also define acceptable
SQLSTATEs for each statement.
Depending on the DB2 platform, the SQL before and after processing
statements can perform other processing, such as calling stored procedures.
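For example, a subscription set might pair a run-before cleanup statement with a run-after stored-procedure call (a sketch; the table name and procedure name are assumptions):

```sql
-- Hypothetical SQL statement to run at the target before the cycle:
DELETE FROM TGT.STAGING_WORK;

-- Hypothetical stored-procedure call to run after the cycle;
-- the CALL statement takes no parameters:
CALL POSTAPPLY;
```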
Replicating large objects
DB2 Universal Database supports large object (LOB) data types, which
include: binary LOB (BLOB), character LOB (CLOB), and double-byte
character LOB (DBCLOB). This section refers to all of these types as LOB data.
The Capture program reads the LOB descriptor to determine if any data in
the LOB column has changed and thus should be replicated, but does not
copy the LOB data to the CD tables. When a LOB column changes, the
Capture program sets an indicator in the CD tables. When the Apply program
reads this indicator, it then copies the entire LOB column (not just the
changed portions of LOB columns) directly from the source table to the target
table.
To allow the Capture program to detect changes to LOB data, you must
include the DATA CAPTURE CHANGES keywords when you create (or alter)
the source table.
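For example (the table name is an assumption):

```sql
-- Enable change capture on an existing source table that contains
-- LOB columns; MEDIA.CLIPS is a hypothetical table name.
ALTER TABLE MEDIA.CLIPS DATA CAPTURE CHANGES;
```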
Restrictions:
- The Apply program always copies the most current version of a LOB
column directly from the source table (not the CD table), even if that
column is more current than other columns in the target table. Thus, there
is a small period of time during which the target row that includes a LOB
column could be inconsistent with the rest of the row. To reduce this small
period of time, ensure that the interval between Apply cycles is as small as
practical for your application.
- To copy LOB data between DB2 for OS/390 V6 (or later) and DB2 Universal
Database (for any other operating system), you need DB2 Connect 5.2 or
later.
- You can copy LOB data only to read-only tables. Thus, you cannot replicate
LOB data to replica or row-replica tables.
- The primary key for the source table and the subscription-set definition
must match. Code-page differences that affect key values can inhibit the
Apply program’s ability to locate the source-table row that contains LOB
data.
- You cannot refer to LOB data using nicknames.
- Before-image values for LOB columns are not supported.
- For DB2 for OS/390, any table that contains LOB columns must also
contain a ROWID column.
- Replication is not supported for DB2 Extenders™ for Text, Audio, Video,
Image, or other extenders where additional control files associated with the
extender’s LOB column data are maintained outside of the database.
- DB2 can replicate only a full LOB; it cannot replicate parts of a LOB.
Replicating DATALINK values
Accessing large files (such as multimedia data) over a remote network can be
inefficient and costly. If these files do not change, or change infrequently, you
gain faster access to the files and reduced network traffic by replicating these
files to remote sites. DB2 Universal Database provides a DATALINK data type
that allows the database to control access, integrity, and recovery for these
kinds of files. DB2 Universal Database supports DATALINK values on all
platforms except OS/390.
DB2 replicates DATALINK columns and uses the ASNDLCOPY user exit
routine to replicate the external files to which the DATALINK columns point.
This routine transforms each source link reference to a target link reference
and copies the external files from the source system to the target system.
Because external files can be very large, you must ensure that you have
sufficient network bandwidth for both the Apply program and whatever
file-transfer mechanism you use to copy these files. Likewise, your target
system must have sufficient disk space to accommodate these files.
Recommendation:
- Use a separate subscription set for DATALINK columns because the Apply
program waits for the ASNDLCOPY routine to complete its replication
before the Apply program completes replication of the subscription set. Any
failures in copying the external files will cause replication of the entire
subscription set to fail.
Restrictions:
- Because of the way DB2 supports DATALINK values, you can replicate
DATALINK values between DB2 databases on the following operating
systems:
– AIX
– AS/400
– Windows NT
You cannot replicate DATALINK values to platforms that do not support
them.
- If you use update-anywhere replication with DATALINK columns, you
must specify None for the conflict-detection level. DB2 does not check
update conflicts for external files pointed to by DATALINK columns.
- Before-image values for DATALINK columns are not supported.
- DB2 always replicates the most current version of an external file pointed to
by a DATALINK column.
- Target tables that are base-aggregate or change-aggregate tables do not
support DATALINK columns.
Key-update restrictions
If you are replicating to condensed target tables (user copy, point-in-time,
condensed CCD, or replica tables), do not use the syntax SET
KEYCOL=KEYCOL + n for updates. Data cannot be replicated correctly with
this form of key update. Use a different column in the source table as the key
There are three triggers for each source table: DELETE, UPDATE, and INSERT.
How the Capture triggers capture the data changes
The Capture triggers work with the following objects: the CCD table, the
register control table, the pruning control table, and the register
synchronization control table.
DJRA generates SQL (when you define a table as a replication source) that,
when run:
- Creates Capture triggers on the source table.
- Creates the CCD table on the source server. There is one CCD table for each
source table.
- Inserts a row into the register control table (to represent the new source
table).
- Creates a nickname for the CCD table in the DB2 DataJoiner database.
The Apply program then reads the CCD table (through DB2 DataJoiner
nicknames), copies the changes to the target server, and applies the changes to
the target table. Figure 18 shows the relationship between the Capture
triggers, the source table, the register control table, and the CCD table.
Figure 18. Capture triggers at the source server. The Capture triggers monitor source changes,
capture the changed data, and write the changed data to the CCD table.
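The trigger half of this flow can be sketched as follows. This is a greatly simplified, hypothetical Oracle-style trigger; the SQL that DJRA actually generates is specific to each non-IBM source and also maintains the register, pruning, and register synchronization control tables, all omitted here. Every object name is an assumption:

```sql
-- Hypothetical UPDATE trigger on non-IBM source table SRC.ACCOUNTS,
-- recording the change in its CCD table. SRC.CCDSEQ is a hypothetical
-- sequence used to order the captured changes.
CREATE TRIGGER ACCOUNTS_U
AFTER UPDATE ON SRC.ACCOUNTS
FOR EACH ROW
BEGIN
    INSERT INTO SRC.CCD_ACCOUNTS
        (IBMSNAP_OPERATION, IBMSNAP_COMMITSEQ, IBMSNAP_INTENTSEQ,
         ACCT_ID, BALANCE)
    VALUES ('U', SRC.CCDSEQ.NEXTVAL, SRC.CCDSEQ.NEXTVAL,
            :NEW.ACCT_ID, :NEW.BALANCE);
END;
```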
Staging data
Typically during replication, changes to a source table are captured, the
changed rows are inserted into the CD table, and the related transaction
information is inserted into the UOW table. The CD table is joined with the
UOW table to determine which changes are committed and can therefore be
replicated to target tables. This joined output can be saved in a CCD table
from which changed data information can also be read. A CCD table contains
committed changes only. By using a CCD table, several subscription sets (and
their members) can refer to that information without each incurring overhead
for a join of the CD and UOW tables for each subscription cycle.18
There are other uses for CCD tables apart from eliminating the need to join
the CD and UOW tables. When you set up your replication environment, you
can choose the type of CCD table that is appropriate for your replication
environment. To help you determine whether or not you need to use CCD
tables, this section describes the attributes of CCD tables and the typical uses
of CCD tables.
Attributes of CCD tables
If you want to use a CCD table, you must decide where you want it located
and what change data it must contain.
A noncomplete CCD table contains only changes that were made to the source
table. Thus, a noncomplete CCD table is initially empty and is populated as
changes are made to the source table. When a noncomplete CCD table is
initially created, or when the Capture program is cold started, the Apply
program does not refresh a noncomplete CCD table with all the rows of the
source table.19 The Apply program records changes that are made to the
source table but does not replicate the original rows. The Apply program
cannot use a noncomplete CCD table to refresh other target tables.
Defining unique indexes: A condensed CCD table requires unique key values
for each row, but a noncondensed CCD table can have multiple rows with the
same key values. Because of the differences in key uniqueness, you must
define a unique index for a condensed CCD table, and you must not define
one for a noncondensed CCD table.
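For a condensed CCD table, the unique index goes on the key columns (a sketch; the table, column, and index names are assumptions):

```sql
-- Required for a condensed CCD table; must NOT be created for a
-- noncondensed one. CCD.CUSTOMERS_CCD and CUST_ID are hypothetical.
CREATE UNIQUE INDEX CCD.CUSTOMERS_CCDX
    ON CCD.CUSTOMERS_CCD (CUST_ID);
```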
19. If changes are made to a source table while the Capture program is cold started, those changes might not get into
a noncomplete CCD table. To ensure that such changes get replicated to the noncomplete CCD table, you must
stop all activity against the source table when you cold start the Capture program.
20. You can define CCD tables as targets for non-IBM databases, but not as sources. Also, if the CCD target is in a
non-IBM database, it can be neither an internal nor external CCD table.
External CCD tables: If you perform a full refresh on an external CCD table,
the Apply program performs a full refresh on all target tables that use this
external CCD table as a replication source. This process is referred to as a
cascade full refresh. You can define more than one external CCD table for a
replication source. An external CCD table can have any attributes you like
(local or remote, complete or noncomplete, condensed or noncondensed);
however, if you use it to stage data, you must use a complete CCD table
because the Apply program will use it to perform both full refresh and change
replication.
Internal CCD tables: Internal CCD tables are useful for staging changes. The
Apply program uses the original source table for full refreshes, and it uses the
internal CCD for change replication (instead of joining the CD and UOW table
each time changes are replicated).21
Use an internal CCD table as a local cache for committed changes to a source
table. The Apply program replicates changes from an internal CCD table, if
one exists, rather than from CD tables.
You can use an internal CCD table as an implicit source for replication
without explicitly defining it as a replication source. When you add a
subscription-set member, you can specify that you want an internal CCD table
if the table has the following attributes:
- It is a local CCD table. That is, the source server and the target server are
the same database.
- The CCD table is noncomplete.
- No other internal CCD table exists for this replication source.
21. If you define an internal CCD table, it is ignored by the Apply program when processing a subscription set with a
replica as a target.
Internal CCD tables do not support additional UOW table columns. If you
already defined a target CCD table that includes UOW columns as a
replication source, you cannot also define an internal CCD table for that
source.
If you want to use column subsetting in an internal CCD table, review all
previously defined target tables to make sure that the internal CCD table
definition includes all appropriate columns from the source tables. If you
define the subscription set for the internal CCD table before you define any of
the other subscription sets from this source, the other subscription sets are
restricted to the columns that are in the internal CCD table.
Some CCD tables can continue to grow in size, especially noncondensed CCD
tables. These tables are not pruned automatically; you must prune them
manually or use an application program. For some types of CCD tables, you
might want to archive them and define new ones, rather than prune them.
When the source table is a non-IBM table, the Capture triggers prune the CCD
table based on a synchpoint that the Apply program writes to the pruning
control table.
22. If you have target CCD tables defined for a replication source and you want to then define an internal CCD table,
note that internal CCD tables do not support additional UOW table columns. Do not use an internal CCD table if
you already defined a target CCD table that includes UOW columns.
Migrating from DB2 UDB V6 to V7 does not require any special migration for
replication.
The Version 5 Capture and Apply components can run alongside the Version 6
or Version 7 Capture and Apply programs; you do not need to migrate all
servers at the same time.
In addition:
- The Version 7 Capture and Apply programs are backward-compatible with
Version 5 or Version 6 sources and subscription sets. The Version 7
components will continue to use the critical section table in the same way
that the Version 5 or Version 6 Capture and Apply programs did if the new
prune lock control table is not present.
- The Version 7 Capture and Apply programs can use the invocation options
introduced for DB2 Universal Database Satellite Edition, even if the
administration component remains at the Version 5 or Version 6 level.
DB2 UDB supports the DB2 Universal Database Satellite Edition enabler
command ASNSAT. However, you cannot use the DB2 Universal Database
Satellite Edition SYNCH command in an existing replication environment
because the SYNCH command relies on centralized administration controlled
by a central control server. The central control server is not aware of any
existing replication environment administered without use of the SYNCH
command.
Migrating from DB2 for OS/390 V6 to V7 does not require any special
migration for replication.
The Version 5 Capture and Apply components can run alongside the Version 6
or Version 7 Capture and Apply programs; you do not need to migrate all
servers at the same time.
You can use either the DB2 Control Center or the DB2 DataJoiner Replication
Administration (DJRA) tool to define sources and targets for replication, to set
the schedule for updating the targets, to specify the enhancements to the
target data, and to define any triggers that initiate replication. You can use the
DB2 Control Center to administer replication only when your source and
target tables are in DB2 Universal Database databases (for any
operating-system environment), but you can use DJRA to administer
replication when your source and target tables are in DB2 Universal Database
databases (for any operating-system environment) or in supported non-IBM
databases.
After you create the control tables and define the replication sources and
targets, you need to configure and run the Capture and Apply programs to
begin replicating data.
You can access your replication sources and targets through the Control
Center. There are three containers in the Control Center for organizing the
objects that you use to set up and maintain your replication environment:
Tables folder
The folder containing DB2 tables.
Replication Sources folder
The folder containing tables that have been defined as replication
sources: DB2 tables, views, or target tables redefined as sources for
replication.
Replication Subscription folder
The folder containing subscription-set definitions for copying source
data or source-data changes to target tables.
Each object also has a menu for the actions that can be performed with the
object.
Configuring the Control Center for host RDBMSs
If you are connecting to a DB2 for MVS/ESA, DB2 for VSE, DB2 for VM, or
DB2 for AS/400 server from the Control Center, you must configure
connectivity to the remote database, catalog the remote databases, and bind
packages to the remote databases.
If the user ID and password are different from the local logon ID and
password for the Control Center workstation, you must explicitly connect
to the database server using the Connect menu choice from the pop-up
menu for your remote database object.
Setting replication preferences in the DB2 Tools Settings notebook
The Tools Settings notebook contains default preferences for the DB2
Universal Database administration tools. You can set replication default values
on the Replication page of the notebook, as shown in Figure 19. These default
values are used for all replication activities administered by the Control
Center.
Figure 19. The Replication Page of the Tools Settings Notebook. Use this page to specify default
preferences for replication.
You specify that you want to use this file from the Replication page of the
Tools Settings notebook. See Figure 19.
DJRA provides objects and actions that define and manage source and target
table definitions. Working through DB2 DataJoiner, DJRA creates:
- Capture triggers on the non-IBM source servers
- Nicknames in the DB2 DataJoiner database for the remote tables where the
changed data is to be captured
- Target tables (and their associated nicknames) in the non-IBM database for
the remote target tables
The Apply program reads from and writes to DB2 DataJoiner nicknames,
which eliminates the need to connect explicitly to non-IBM databases.
If the source database is a DB2 database, the Capture program for that
database captures the changes; therefore, the Capture triggers and DB2
DataJoiner are not involved. If the target database is a DB2 database, the
Apply program writes the changed data to the DB2 target database directly
and DB2 DataJoiner is not involved.
DJRA, working with DB2 DataJoiner, the Capture program, Capture triggers,
and the Apply program, replicates relational data from a variety of sources to
a variety of targets. The databases that DJRA supports as sources or targets
are:
- DB2 UDB (for UNIX, Windows, and OS/2) V5 or later
- DB2 UDB for AS/400 V5 or later
- DB2 UDB for OS/390 V5 or later
- DB2 DataJoiner V2 or later
- Oracle V7.3.4 or later
- Informix V7.2x or later
- Sybase V11.5 or later
- Sybase SQL Anywhere Version 6.0 or later
- Microsoft SQL Server V6.0 or later23
- NCR Teradata V2R4 or later (as target only)
23. For DataJoiner for AIX, replication from Microsoft SQL Server 6.x must use a DBLIB connection. For DataJoiner for
Windows NT, replication from Microsoft SQL Server 6.x is restricted to using the ODBC protocol.
DJRA provides a user interface that is divided into areas that deal with
control tables, replication sources, subscription sets, and the running or
editing of SQL (see Figure 20 on page 98).
Using this interface, you can perform the following administration tasks:
- Create replication control tables and put them on your source, target, or
control servers
- Define DB2 tables, non-IBM tables, and DB2 views as sources
- Remove replication sources
- Change the definitions for existing DB2 source tables to add new columns
- Promote table, registration, and subscription definitions
- Define subscription sets and subscription members
- Activate and deactivate subscription sets
- Change existing subscription-set members for DB2 target tables to add new
columns
- Remove subscription sets or subscription-set members that are no longer
needed
- Add or delete SQL statements or stored procedures that should run before
or after the target tables are replicated
- Run or edit SQL that is generated by DJRA
- Monitor replication
- Perform an offline load of a table
You can also customize the logic for most of the administration tasks listed
above.
Installing DJRA
When you install DB2 UDB on a Windows system, the DB2 setup program
copies the DJRA setup program (djra.exe) to the \sqllib\djra directory. DJRA
also comes with DB2 DataJoiner V2; when you install DataJoiner on Windows
NT, you can optionally install DJRA. In addition, you can download DJRA
from the Web.24 When you install DJRA, if you do not already have
Object Rexx installed, DJRA installs it; otherwise, DJRA uses your
existing copy.
To install DJRA:
1. From the Windows Explorer, go to the \sqllib\djra directory, then
double-click the djra.exe file. This starts the DJRA setup program.
2. Follow the online instructions. Online help is available to help you with
the remaining steps. When you complete setup, DJRA appears in the
Windows Start menu.
3. To start DJRA:
a. Click the Start icon.
b. Select the Programs menu.
24. http://www.ibm.com/software/data/dpropr
When you create customized control tables, you must customize the CREATE
TABLE statements in the DPCNTL files. There is one DPCNTL file for each
operating system environment; these files are located in the
sqllib\samples\repl\ directory. The file names are:
DPCNTL.UDB
Creates control tables for DB2 Universal Database (for UNIX,
Windows, or OS/2).
DPCNTL.MVS
Creates control tables for DB2 for MVS/ESA and DB2 for OS/390.
DPCNTL.VM
Creates control tables for DB2 for VSE & VM.
DPCNTL.400
Creates control tables for DB2 for AS/400.
DPCNTL.SAT
Creates and drops control tables for DB2 Universal Database Satellite
Edition.
If, after creating customized control tables, you need to drop them, you must
customize the DROP TABLE statements in the DPNCNTL files. There is a
DPNCNTL file for each operating system environment located in the
sqllib\samples\repl\ directory. The file names are:
DPNCNTL.UDB
Drops control tables for DB2 Universal Database (for UNIX, Windows,
or OS/2).
DPNCNTL.MVS
Drops control tables for DB2 for MVS/ESA and DB2 for OS/390.
DPNCNTL.VM
Drops control tables for DB2 for VSE & VM.
DPNCNTL.400
Drops control tables for DB2 for AS/400.
From the DJRA primary window, click the Create Replication Control Tables
push button.
The fields you complete to create a control table are:
Source, control, or target server
When you click the down arrow, DJRA checks to see what type of
server it is and then lists all databases and aliases that are cataloged
on the workstation from which you are running DJRA. If you select a
DataJoiner server from the list, the DataJoiner non-IBM source server
pull-down list becomes active. If you do not choose a DataJoiner
server, you will link directly to a DB2 database.
DataJoiner non-IBM source server
If you selected a DataJoiner alias from the Source, control, or target
server pull-down list and you have performed server mappings in
DataJoiner, then this list displays available remote server names.
Specify (None) if you want the control tables to be created in the
DataJoiner database rather than in the remote server database.
Edit Tablespace Logic
Click this push button to customize table space names for control
tables or for CREATE TABLESPACE options. The default table space
names for DB2 for OS/390 are:
- TS_UOW for the UOW table
- TS_CNTL for all other control tables
25. For DB2 Universal Database systems, you can use the DB2 Control Center or DJRA to perform this task; for other
systems, including DB2 for OS/390, DB2 for AS/400, and all non-IBM databases, you must use DJRA.
You might want to save and customize the SQL files to:
- Create multiple copies of the same replication action, customized for
multiple servers.
- Customize CD table names.
- Define the location for CD tables (DB2 for OS/390 databases, DB2 Universal
Database table spaces, DB2 for VSE & VM dbspaces).
- Set the size of the table spaces, databases, or dbspaces of the CD tables.
- Define site-specific standards.
- Combine definitions and run them as a batch job.
- Defer the replication action until a specified time.
- Create libraries of SQL files for backup, site-specific customization, or to
run stand-alone at distributed sites, such as for an occasionally connected
environment.
- Edit CREATE TABLE and CREATE INDEX statements to represent clusters
and other database objects.
- For Oracle and other remote servers, ensure that tables are created in the
existing table spaces that you want.
- For Microsoft SQL Server, create control tables on an existing segment.
When editing generated SQL, be careful not to change special markers that
DJRA places within the SQL. For example, :ENDOFTRIGGER: or
:ENDOFPROCEDURE: is part of a comment that is necessary for DJRA to run
successfully. Altering create trigger blocks can result in incorrect SQL that
ends in error when run. If you add lines to the end of the file, be sure to add
an extra newline (CRLF) to the end of the file.
The DJRA Run SQL push button is intended to be used for SQL generated by
DJRA. SQL that you generate outside DJRA might not run successfully if you
use DJRA to start it. Likewise, you might not be able to run SQL generated by
DJRA at a DB2 command line.
For VM and VSE, the user ID that runs the Capture program must have DBA
authority. For all other operating systems, the user ID that runs the Capture
program must have either DBADM or SYSADM authority.
Authorization requirements for running the Apply program
The user ID that runs the Apply program must be a valid logon ID for the
source, control, and target servers, and for the workstation where the Control
Center or DJRA is installed. The user ID that runs the Apply program must be
able to access the source tables, access and update all replication control
tables, and update the target tables. This user ID must also have execute
privileges on the Apply program packages. The user ID that runs the Apply
program can be the same as the administrator user ID, but this is not a
requirement. With the proper authorization, any user ID can run any Apply
program instance.
Click the Tables folder for the source database to show all tables. Right-click
on a table object to show the pop-up menu and select Define as replication
source.
You can define replication sources using the Quick or Custom choices. Quick
allows you to define a replication source using default values. Custom allows
you to customize the defaults, such as specifying that certain columns should
not be captured.
For information about Capture triggers, see “Capture triggers for non-IBM
sources” on page 80. For data restrictions when defining replication sources
and subscription sets, see “General restrictions for replication” on page 77.
The Capture program does not recognize new DB2 replication sources until
you either issue the reinit command or stop and restart the Capture program.
The Capture program does not begin capturing changes for a replication
source until a subscription set is created for that replication source and the
subscription-set members have been fully refreshed.
Defining replication sources for update-anywhere replication
To define a replication source for update-anywhere replication using the
DB2 Control Center:
Select the conflict detection level (described above) when you define a table as
a replication source, and select the replica target structure when you add the
member to the subscription set.
The Apply program detects update conflicts, after they occur, during the
subscription cycle. The source table is considered the primary table. That is, it
can receive updates from replica tables, but if there is a conflict, the source
table wins and the replica tables’ conflicting transactions are rejected. The
Apply program detects direct row conflicts by comparing the key values in
the CD tables at the source server and at the replicas.
The Apply program cannot detect read dependencies. If, for example, an
application reads information that is subsequently removed (by a DELETE
statement or by a rolled back transaction), the Apply program cannot detect
the dependency.
Use the rejection codes provided in the UOW table to identify the before and
after row values in the CD table for each rejected transaction. Because the
ASNDONE exit routine runs at the end of each subscription cycle, you can
add code to the routine to handle any rejected transactions. See “Using the
ASNDONE exit routine” on page 132 for more information on the ASNDONE
exit routine. Alternatively, because the change data rows and UOW control
table rows for rejected transactions are exempt from normal pruning (they are,
however, subject to RETENTION_LIMIT pruning), you could handle the
rejected transactions as a batch by using a program that scans the UOW table.
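Such a batch scan of the UOW table might look like the following sketch. It assumes the V7 control-table column names IBMSNAP_UOWID, IBMSNAP_AUTHID, IBMSNAP_LOGMARKER, and IBMSNAP_REJ_CODE (where '0' indicates a transaction that was not rejected); verify the names against your installed control tables.

```sql
-- List the units of work rejected during update-anywhere conflict
-- detection; join to the CD table on IBMSNAP_UOWID to see the before
-- and after row values for each rejected transaction.
SELECT IBMSNAP_UOWID, IBMSNAP_AUTHID, IBMSNAP_LOGMARKER, IBMSNAP_REJ_CODE
  FROM ASN.IBMSNAP_UOW
 WHERE IBMSNAP_REJ_CODE <> '0';
```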
Defining views as replication sources
You can define replication sources that are views of other tables. After
defining each replication source table included in the view, you can create a
view replication source. The view replication source is then available for
replication to a target table.
You cannot use the DB2 Control Center to define an existing view as a
replication source; use DJRA instead. You can use the DB2 Control Center to
define a new view as a replication source.
Do not type the words CREATE VIEW. This part of the statement is
automatically supplied during processing.
4. In the FROM field, type table names that define the join. For example:
TABLEA A, TABLEB B
Do not type the word FROM. This part of the statement is automatically
supplied during processing.
5. If you want to use a row predicate, type the WHERE clause SQL statement
in the WHERE field. For example:
A.COL1=B.COL1
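Taken together, the entries in this example produce a view definition like the following sketch. The view name and select list are hypothetical; the words CREATE VIEW and FROM are supplied automatically during processing, as noted above.

```sql
CREATE VIEW MYVIEW (COL1, COL2) AS
  SELECT A.COL1, B.COL2
    FROM TABLEA A, TABLEB B
   WHERE A.COL1 = B.COL1;
```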
Click Define DB2 Views as Replication Sources and fill in the required
information, such as source server, source view qualifier, and source view
name. You cannot define a join as a replication source using DJRA, but you
can define a view for the join and use DJRA to define the view as a
replication source.
Enabling replication logical-partitioning-key support
Generally, the Capture program captures an update to a source table as an
UPDATE statement. However, for the following conditions, you must instruct
the Capture program to capture updates as DELETE and INSERT statements
(that is, you must enable logical-partitioning-key support):
v Your source applications update one or more columns that are part of a
target table’s primary key.
Because the values for the target-table primary key come from the changes
captured at the source server, they reflect the new key values and cannot
be used to locate the existing target-table row (a row with the new key
does not exist yet). Converting the UPDATE to a DELETE and INSERT pair
ensures that the target table reflects the changes made at the source
server.
You can capture updates as updates or as delete/insert pairs for both DB2
and non-DB2 sources.
By default, when you update the primary keys of either the source or target
tables, the Capture program captures the changed row for the update. The
Apply program then attempts to update the row in the target table with the
new key value. This new key value is not found in the target table, so the
Apply program converts this update to an insert. In this case, the old row
with the old key value remains in the table (and is unnecessary). When you
enable replication logical-partitioning-key support, the Capture program
captures the change as separate DELETE and INSERT statements: delete the
old row and insert the new row.
Each captured UPDATE is converted to two rows in the CD table (a DELETE
row and an INSERT row), covering all columns, key and non-key alike. You
might need to adjust the space allocation for the CD table to accommodate
this increase in captured data.
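As an illustration, an update that changes a key column from 10 to 20 produces two CD-table rows instead of one. This sketch assumes that IBMSNAP_OPERATION is the CD-table column recording the operation type, and the key and data column names are hypothetical.

```sql
-- With logical-partitioning-key support enabled, one captured UPDATE
-- appears in the CD table as a DELETE of the old row followed by an
-- INSERT of the new row:
--
--   IBMSNAP_OPERATION   KEYCOL   DATACOL
--   'D'                 10       'old value'
--   'I'                 20       'new value'
```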
27. Version 4 or earlier only. This restriction does not apply to DB2 for OS/390 V5 or later.
When you use DJRA to define the source table, select the Updates as
delete/insert pairs radio button from either the Define One Table as a
Replication Source window or the Define Multiple Tables as Replication
Sources window.
To define a CCD table using DJRA, select CCD as the Target structure from
the Add Member to a Subscription Set window, then click the Setup
pushbutton. Select the type of CCD table you want from the Staging (CCD)
table property selection for target server window. This window prompts you
for all valid combinations of CCD tables.
For noncomplete CCD tables, you can include one or more of the UOW table
columns; these columns are useful for auditing and include Apply qualifiers,
authorization IDs, UOW ID, and so on.
If you are using a CCD table to stage replication (for example, in a three-tier
replication environment), complete the following steps:
1. Add the (complete and condensed) CCD table to a subscription set.
The Apply program that owns the subscription set populates the CCD
table based on the subscription-set definition.
2. If the CCD is defined as external, define it as a replication source.
From the DJRA Staging (CCD) table property selection for target server
window, select the Register as external replication source checkbox after
selecting a complete CCD table. See “Defining replication sources” on
page 105 for more information.
3. Create a new subscription set.
This new set is used by the Apply program that applies changes from the
CCD table to the target tables. Usually, you use a different Apply
qualifier than the one used to populate the CCD table, but you can use the
same one.
See “Defining replication subscription sets” on page 112.
4. Define the target tables within the subscription set.
Select the source table depending on the type of CCD table you are using:
For more information about CCD tables, see “Staging data” on page 82.
When you add members to a subscription set, you can specify which primary
key to use for the target table. You can specify that DJRA should generate the
target primary key from the source primary key and source table indexes, or
you can specify the particular columns for the key, or you can specify the
source primary key.
After you create subscription sets for a non-IBM source server, the Apply
program connects to the DB2 DataJoiner database that is associated with the
non-IBM server and accesses (through nicknames) the information in the
register control table and the staging table on the non-IBM source server (see
Figure 21 on page 114).
If you defined an event to start the Apply program, you must populate the
event table. See “Event timing” on page 124 for more information about this
task. To begin replicating data to the target tables, start the Capture program
at the source server, then start the Apply program using the name of the
control server that you specified in the Control Center Subscription
Information window or the DJRA Add a Member to Subscription Sets
window (or Add Multiple Members to Subscription Sets window).
Defining subscription sets for update-anywhere replication
To define a subscription set for update-anywhere replication using the DB2
Control Center, define a subscription set and use the following selections:
1. Select the replication sources that you want to be in the subscription set.
Include all sources affected by the replica tables being updated.
2. From the Subscription Definition window, select a target table to be
defined as a replica table.
3. Click Advanced to open the Advanced Subscription notebook. The
following selections are required on the Advanced Subscription notebook:
a. From the Target Type page, click on Target table is replica.
b. From the Target Columns page, repeat the following steps for each
target table:
1) Ensure that the Subscribe check boxes are selected for every
column. Do not create new columns for the replica table.
2) Specify a primary key for the replica table by clicking on the
Primary Key check boxes next to the key column names.
Restriction: Replica target tables must contain the same columns as the source
table: you cannot create subsets for them; you cannot add columns; and you
cannot rename columns.
If you want to create a new computed column or use aggregation for the
target table:
a. Click Create Column to open the Create Column window.
b. Type the name of the column in the Column name field. The name can
have up to 17 characters and can be an ordinary or delimited identifier.
c. Type the SQL expression defining the new column.
d. Click OK to close the window.
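For example, a computed column that carries only the first three characters of an existing source column might be defined with an SQL expression such as the following (COL1 is a hypothetical source column name):

```sql
SUBSTR(COL1, 1, 3)
```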
Click the Selected columns radio button from the Add a Member to
Subscription Sets window. Then select the columns you want replicated to the
target table.
28. If you use before-image columns or computed columns, for example, full refresh is no longer possible. You must
also modify the register control table.
Add a WHERE clause in the Where clause field of the Add a Member to
Subscription Sets window.
Defining a subscription set with a user-defined table
DB2 DataPropagator allows you to use a previously defined DB2 table as the
target table in a subscription set. That is, you can define a subscription-set
member to be a target table that is defined outside of the DB2 Control Center
or DJRA. This type of target table is known as a user-defined target table.
Restrictions:
v The subscription-set definition must contain the same number of columns
as exist in the user-defined target table.
v New columns in the subscription-set definition must allow nulls and must
not have defined default values.
DJRA tolerates existing target tables, and checks that the columns in the target
table match those defined for the subscription-set member.
To specify SQL statements or stored procedures for the subscription set using
the DB2 Control Center:
1. From the Define Subscription window, click SQL to open the SQL
window.
Use the SQL window to add or remove SQL statements or stored
procedures that run at the target or source server either before or after the
replication subscription is processed. The statements are processed in the
order that they appear in the list.
2. Click Add to open the Add SQL window.
3. Type the SQL statement or stored procedure name in the SQL statement
or Call procedure field. The stored procedure name must begin with CALL.
This field can contain ordinary or delimited identifiers.
4. If you know that the SQL statement or stored procedure will generate
SQLSTATEs that would otherwise terminate execution, specify these
SQLSTATEs so that the Apply program can bypass them and treat them as
successful execution. For example, a DELETE statement will generate a
SQLSTATE 02000 when attempting to delete nonexistent rows, but for new
tables you might not care about this error.
Enter valid 5-byte SQLSTATE values in the SQLSTATE field and click
Add. The value is added to the Acceptable SQLSTATE values box. You
can add up to 10 values.
5. Specify whether you want to run the SQL statement or stored procedure at
the source or target server before the subscription set is processed, or at
the target server after the subscription set is processed by clicking the
appropriate radio button in the Submit SQL statement field.
6. Click OK to add the statement to the box in the SQL window and close
the Add SQL window.
The Capture program can read the data-sharing logs for all supported
versions of DB2 for OS/390. That is, you can run different versions of DB2 in
a data-sharing environment, for example during version-to-version migration,
and have one Capture program continue to capture transaction-consistent
data. However, this mixed-version environment is not recommended for
long-term use, either for replication or for DB2. See the DB2 for OS/390
Administration Guide for information about data sharing with mixed versions
of DB2.
Specifying a data-blocking value
To specify how many minutes' worth of change data DB2 DataPropagator can
replicate during a subscription cycle, use the Data Blocking page of the
Subscription Timing notebook in the DB2 Control Center, or set the Blocking
factor in the Create Empty Subscription Sets window in DJRA. The number of
minutes that you specify determines the size of the data block. See “Data
blocking for large volumes of changes” on page 68 for more information about
how to determine this value.
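The parameter descriptions below refer to an UPDATE statement of roughly this form, which changes the blocking factor directly in the subscription set control table. This is a sketch: it assumes that MAX_SYNCH_MINUTES is the blocking-factor column, so verify the column name against your control tables before using it.

```sql
UPDATE ASN.IBMSNAP_SUBS_SET
   SET MAX_SYNCH_MINUTES = new_val
 WHERE APPLY_QUAL = 'ApplyQual'
   AND SET_NAME = 'name'
   AND WHOS_ON_FIRST = 'val';
```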
where new_val is the new blocking factor value, ApplyQual is the current
Apply qualifier, name is the current subscription-set name, and val is either F
or S.
Data currency requirements
How up to date do you want target tables to be? How out of date can they be
without disrupting the application programs that use them? The answers to
these questions reveal your data currency requirements. You can control how
often the Apply program processes subscriptions and thereby control the
currency of the data. You can set an interval (or relative timing) schedule for
the Apply program, or define an event trigger that the Apply program uses to
start processing a subscription set.
You define subscription timing with the Subscription Timing notebook in the
DB2 Control Center or from the Subscription set timing field on the Create
Empty Subscription Sets window in DJRA. You can control the timing using
time-based or event-based scheduling, or you can use these timing options
together. For example, you can set an interval of one day, and also specify an
event that triggers the subscription cycle. For update-anywhere replication,
you can also specify different timing for source-to-replica and replica-to-source
replication.
Event timing
To replicate data using event timing, specify an event name when you define
the subscription set in the DB2 Control Center or DJRA. You must also
populate (using an application program or the DB2 Command Center) the
subscription events table with a timestamp for the event name. When the
Apply program detects the event, it begins replication (either change-data
capture or full refresh).
EVENT_NAME is the name of the event that you specify while defining the
subscription set. EVENT_TIME is the timestamp for when the Apply program
begins processing the subscription set. END_OF_PERIOD is an optional value
that indicates that updates that occur after the specified time should be
deferred until a future time. Set EVENT_TIME using the clock at the control
server, and set END_OF_PERIOD using the clock at the source server. This
distinction is important if the two servers are in different time zones.
Your application programs must post events to the subscription events table
to tie your application programs to subscription activity. When you post an
entry using CURRENT TIMESTAMP plus one minute for EVENT_TIME, you
trigger the event named by EVENT_NAME. Any subscription set tied to this
event becomes eligible to run in one minute. You can post events in advance,
such as next week, next year, or every Saturday. If the Apply program is
running, it starts at approximately the time that you specify. If the Apply
program is stopped at the time that you specify, when it restarts, it checks the
subscription events table and begins processing the subscription set for the
posted event.
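The parameter descriptions below refer to an UPDATE statement of roughly this form, which changes the Apply interval directly in the subscription set control table. This is a sketch: it assumes that SLEEP_MINUTES is the interval column, so verify the column name against your control tables before using it.

```sql
UPDATE ASN.IBMSNAP_SUBS_SET
   SET SLEEP_MINUTES = new_val
 WHERE APPLY_QUAL = 'ApplyQual'
   AND SET_NAME = 'name'
   AND WHOS_ON_FIRST = 'val';
```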
where new_val is the new interval value, ApplyQual is the current Apply
qualifier, name is the current subscription-set name, and val is either F or S.
To change a subscription set to use event timing rather than interval timing,
execute the following SQL statements:
UPDATE ASN.IBMSNAP_SUBS_SET
SET REFRESH_TIMING='E', EVENT_NAME='END_OF_DAY'
WHERE APPLY_QUAL=ApplyQual AND SET_NAME=name
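Defining the event itself requires an INSERT into the subscription events table, sketched below. The timestamp placeholder stands for the time at which the Apply program should begin processing the subscription set; skip the INSERT if the event already exists.

```sql
INSERT INTO ASN.IBMSNAP_SUBS_EVENT (EVENT_NAME, EVENT_TIME)
VALUES ('END_OF_DAY', timestamp);
```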
where new_val is the new interval value, ApplyQual is the current Apply
qualifier, name is the current subscription-set name, val is either F or S, and
timestamp is the timestamp for when the Apply program should begin
processing the subscription set. If you already have an event named
END_OF_DAY, you do not need the INSERT statement shown above, but you
might need to modify the EVENT_TIME.
See “Subscription set table” on page 323 and “Subscription events table” on
page 335 for more information about these control tables.
Data consistency requirements
When planning and defining a subscription set, you need to be aware of the
following rules and constraints:
v If any member of the subscription set requires full-refresh copying for any
reason, the entire set is refreshed. For update-anywhere replication,
full-refresh copying occurs only from the replication source to the replica,
not from replica to source.
v A single synchpoint is maintained for the subscription set to indicate the
copy progress for the entire subscription set.
For example, use the promote functions to define subscription sets for remote
DB2 Personal Edition target databases. After you define a model target system
in your test environment, you can create subscription-set scripts (and modify
which Apply qualifier is used and so on) for your DB2 Personal Edition
systems, which are not otherwise supported from a central control point.
This function is fully supported for DB2 UDB V5 and later, but
for the IBM Common Server you can promote only tables, not
table spaces.
Promote subscription: This function promotes subscriptions (subscription
sets, subscription-set members, subscription columns, subscription prune
control, and subscription statements). It enables you to create a new
subscription set from an existing one.
29. Customizing DPCNTL.400 is not necessary if you already installed DataPropagator for AS/400.
If you need to change the values and refresh the tuning parameters while
the Capture program is running, enter the reinit command after changing
the table values. For AS/400, enter the INZDPRCAP command; for more
information about the INZDPRCAP command, see “Reinitializing Capture
for AS/400” on page 203.
You can use the ASNLOAD routine as shipped with the Apply program, or
you can modify it. As shipped, the routine uses the DB2 EXPORT utility to
export data from the source table and uses the DB2 LOAD utility to fully
refresh the target table. You can modify the ASNLOAD routine to call any
IBM or vendor utility. See the prolog section of the sample program
(ASNLOAD.SMP) in the \sqllib\samples\repl directory for information about
how to modify this exit routine.
You must use the ASNLOAD routine to fully refresh tables with referential
integrity constraints in order to bypass referential integrity checking.
If your source servers are password protected, you must modify the
ASNLOAD routine to provide the password file. However, if the password is
administered by DB2 Universal Database Satellite Edition, the ASNLOAD
routine does not require a password file, and you can use the IBM-supplied
routine.
If your source tables include DATALINK columns, the Apply program does
not call the ASNDLCOPY exit routine. If you want external files (pointed to
by the DATALINK values) to be copied during a full refresh, you must
modify the ASNLOAD routine to call the ASNDLCOPY routine for these
columns.
See “Refreshing target tables with the ASNLOAD exit routine for AS/400” on
page 221 for information about using the ASNLOAD routine in an AS/400
environment.
Error handling
If an error occurs while the Apply program calls the ASNLOAD routine, or if
the routine returns a nonzero return code, the Apply program issues a
message, stops processing the current subscription set, and processes the next
subscription set.
Restrictions
You can use the ASNLOAD routine only to refresh point-in-time and user
copy tables. Target tables have the following restrictions for the ASNLOAD
routine:
v The target-table columns must match both the order and data type of the
source tables.
v The target table cannot have a subset of the source columns nor extra
columns.
v The source table cannot be defined to include both before and after images
because the before images would add extra columns to the target table.
Using the ASNDONE exit routine
The Apply program can optionally call the ASNDONE exit routine after
subscription processing completes, regardless of success or failure. You can
modify this routine as necessary; for example, the routine can examine the
UOW table to discover rejected transactions and initiate further actions, such
as issuing a message or generating an alert. Another use for this exit routine is
to deactivate a subscription set that fails (status = -1), and thus avoid retry by
the Apply program until the failure is fixed.
See “Using the ASNDONE exit routine for AS/400” on page 220 for more
information about using the ASNDONE exit routine in an AS/400
environment.
The parameters that the Apply program passes to the ASNDONE exit routine
are:
v Set name
v Apply qualifier
v Value for the WHOS_ON_FIRST column in the subscription set control
table
v Control server name
v Trace option
v Status value
Using the ASNDLCOPY exit routine
If a subscription set contains DATALINK columns, the Apply program calls
the ASNDLCOPY exit routine during processing for a subscription-set
member to copy the external file. You can modify this routine as necessary, for
example, to change the file transfer protocol.
Restrictions: The Apply program does not call the ASNDLCOPY routine if the
target table is a CCD table. Also, if you want external files (pointed to by
DATALINK values) to be copied during a full refresh, you must modify the
ASNLOAD routine to call the ASNDLCOPY routine for these columns.
Because the Apply program calls the ASNDONE exit routine after
subscription processing completes, regardless of success or failure, you can
use the routine to perform any necessary clean up if the ASNDLCOPY routine
fails to replicate any external files.
The ASNDLCOPY routine creates two files: a log file and a trace file (if trace
is enabled). The log file has the following name:
ASNDLApplyQualSetNameSrcSrvrTgtSrvr.LOG
The trace file contains any trace information generated by the ASNDLCOPY
routine.
The input data file contains a list of link references captured from the source
table. The format for this file is:
length source_link_reference new_link_indicator
Use the newline character to indicate the end of the input line.
The result file contains transformed link references that are valid for the target
system. The format for this file is:
length target_link_reference
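As an illustration, a line in the input data file and its transformed counterpart in the result file might look like the following. The host names and path are hypothetical, and the leading field is assumed to be the length of the link reference.

```
24 HTTP://SRCHOST/IMG/A.GIF Y
24 HTTP://TGTHOST/IMG/A.GIF
```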
The trace option can be either yes or no, to specify whether you want tracing.
The ASNDLSRVMAP file contains the server mappings for link references and
an optional directory path map. If you don’t specify a directory path map, or
if the path mapping cannot be found, the same path name will be used.
All fields for a given entry must appear on the same line.
Using the ASNDLCOPYD file-copy daemon
The ASNDLCOPYD file-copy daemon extracts files for the ASNDLCOPY exit
routine. It is similar to a standard FTP daemon, but provides the following
functions for DATALINK replication:
v A command for extracting file information (such as file size and last
modification time)
v A command for retrieving the content of a particular file
You can configure the ASNDLCOPYD file-copy daemon to allow only certain
users to log in, and for each user, you can allow access to a subset of
directories. See the prolog section of the sample program
(ASNDLCOPYD.SMP) in the \sqllib\samples\repl directory for information
about how to set up and modify this program. For AS/400, you can find the
sample program in library QDPR, source file QCSRC, member
ASNDLCOPYD. If you need to add or change user logins, use the
ASNDLCOPYD_CMD tool.
The ASNDLCOPYD file-copy daemon creates a log file for all the messages
generated by the ASNDLCOPYD program. This log file has the following
name: ASNDLCOPYDYYYYMMDDHHMMSS.LOG, where YYYYMMDDHHMMSS is the
time that the daemon started running.
Cold start
When you cold start the Capture program, it deletes all rows from the CD
tables and the UOW table and begins reading from the end of the database log.
Specify a cold start by including the COLD keyword when you start the
Capture program. A warm start can also become a cold start in certain
circumstances; see “Automatic cold start”.
After a cold start, the Apply program performs a full refresh of the target
tables. You can specify the LOADX keyword when you start the Apply
program to improve the performance of the full refresh, or you can use the
technique described in “Loading target tables offline using DJRA” on
page 127.
Warm start
When you stop the Capture program or if it fails, it writes information in the
warm start control table to enable a warm start. There are cases when the
Capture program cannot save warm start information. For example, an
operator might cancel the Capture program or stop DB2. In this case, the
Capture program uses information in the CD, UOW, or register tables to
resynchronize to the time it stopped and thus allow a warm start.
When you restart the Capture program with the WARM or WARMNS
keywords, it looks in the warm start table (or in the CD, UOW, or register
tables) to determine if it can warm start or if it must cold start. If there is
sufficient warm start information, the Capture program warm starts,
otherwise it attempts a cold start; see “Automatic cold start”.
After a successful warm start, the Capture program deletes old rows in the
warm start table.
Starting the Capture and Apply programs for the first time
If you are starting the Capture program for the first time, or after stopping
both the Capture and Apply programs, use the following steps:
1. Define replication sources and subscription sets.
See “Defining replication sources” on page 105 and “Defining replication
subscription sets” on page 112.
2. Start the Capture program.
Wait for the initialization message that indicates that the Capture program
is running. The Capture program does not capture changes until the Apply
program starts and completes a full refresh.
3. Start the Apply program.
The Apply program performs a full refresh for all subscription-set
members. When the full refresh is complete, the Capture program begins
capturing changes to the source tables.
The package names change with each release and with each service
update, but this query retrieves names that are specific to your service
level.
2. If you have an apply_names.ini file (in the sqllib directory), replace the
package names with the ones that you retrieved in step 1. If you do not
have an apply_names.ini file, create one and list the package names. The
following lines show an example of an apply_names.ini file:
ASN6A001+
ASN6B001+
ASN6C001+
ASN6F001+
ASN6I001+
ASN6M001+
ASN6P001
3. Create server options for the Apply packet and buffer sizes. Sample server
options for Sybase are:
create server option apply_packet_size for server type sybase setting 16384;
create server option apply_buffer_size for server type sybase setting 16384;
You can set the packet and buffer size to any appropriate value less than
or equal to the maximum setting for Sybase or Microsoft SQL Server, and
adjust as necessary.
4. Set the following environment variable:
DJX_ASYNC_APPLY=TRUE
5. If you created or changed the apply_names.ini file, or if you changed the
DJX_ASYNC_APPLY variable, you must stop and restart DataJoiner before
these changes take effect. To stop and restart DataJoiner, issue the db2stop
and db2start commands.
These operational procedures do not coexist well with the Capture and
Apply programs, which issue dynamic SQL (implicitly locking catalog
tables) and access table spaces. To avoid possible contention, stop both
the Capture and Apply programs when running utilities and other similar
operational procedures.
To start the Replication Monitor, click Monitor replication on the main DJRA
window. From the Replication Administration Scheduler window, you can
schedule the monitor to run periodically or you can run it once.
When a gap exists, the Apply program attempts a full refresh unless the target
table is a noncomplete CCD table. If the Apply program cannot perform a full
refresh, data integrity could be lost. For noncomplete CCD tables, you can
avoid potential data-integrity loss that could result from a cold start of the
Capture program by using the following steps:
1. Ensure all changes are replicated to the noncomplete CCD tables.
2. Stop all update activity for the source tables.
3. Cold start the Capture program.
4. Restart update activity for the source tables.
If you plan to change the replication source definition, use the Capture
REINIT command. You can also stop or suspend the Capture program and
then warm start or reinitialize the Capture program to begin capturing
changes for the changed replication source. For information about the Capture
program for your operating system environment, see “Part 3. Operations” on
page 173.
30. You can change the set of columns available for replication only for DB2 sources, not for non-IBM sources.
Attention:
v Stop the Capture program before deleting a replication source. Do not
merely suspend it. You can restart the Capture program after you have
finished removing the replication source.
v Although the Control Center removes dependent subscriptions, you should
check whether a dependent subscription table is being used as a source for
another subscription and cancel any such dependent subscription before
you delete a replication source. DJRA does not remove dependent
subscriptions, so you must delete any subscriptions that use the source for
copying.
The Control Center and DJRA drop the table space for a DB2 replication
source if it is empty. DJRA does not drop non-IBM database containers (table
spaces, dbspaces, or segments). For the Control Center, you can ensure that
the table space is never dropped by changing the settings on the Replication
page of the Tools Settings notebook.
Activating and deactivating subscription sets
From the DB2 Control Center or DJRA, you can control the active status of a
subscription set. This feature is useful when you want to temporarily
deactivate a subscription set without removing it. When you deactivate a
subscription set, the Apply program completes its current processing cycle
and then suspends operation for that subscription set. In the Control Center,
when you deactivate a subscription set, the icon for the subscription set is
greyed out.
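Deactivation ultimately toggles a status flag in the subscription set control table. A direct SQL equivalent is sketched below; it assumes that ACTIVATE is the status column, with 1 for active and 0 for inactive, so verify the column name and values against your control tables.

```sql
-- Deactivate one subscription set; set ACTIVATE = 1 to reactivate it.
UPDATE ASN.IBMSNAP_SUBS_SET
   SET ACTIVATE = 0
 WHERE APPLY_QUAL = 'ApplyQual'
   AND SET_NAME = 'name';
```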
Cloning a subscription set to another server
Using the DB2 Control Center, you can clone a subscription set to another
server. Cloning creates a copy of an existing subscription set on a different
target server, using a different Apply qualifier. This copy includes only
subscription information; it does not include copy table, table space, or index
definitions. You can clone one or more subscription sets at a time. The Control
Center updates the control tables at the control server.
Using the DB2 Control Center, select one or more replication subscription
objects from the contents pane and select Remove from the pop-up menu.
Using DJRA, you must first remove all members from the subscription set,
then you can remove the empty subscription set.
This book also includes other resources that can help you in problem
determination:
v “Chapter 15. Capture and Apply messages” on page 353 describes error
messages for the Capture program and the Apply program. “Chapter 16.
Replication messages for AS/400” on page 381 describes the error messages
for replication in an AS/400 environment.
Suppose that after you define replication sources and subscription sets,
the SQL statements for your replication requests complete satisfactorily,
but the Apply program does not replicate the data successfully. To
determine the error, you can:
1. Examine any error messages returned directly to the terminal for the
Apply program job or process.
2. Run the Replication Analyzer to verify that the replication setup is correct.
3. Examine the Apply trail table (ASN.IBMSNAP_APPLYTRAIL) for any
indicators of the problem.
4. Examine the Capture program trace table (ASN.IBMSNAP_TRACE) for
indicators from the Capture program’s activity.
5. Examine the log files for the Capture and Apply programs for indicators
from their activities.
6. Examine the CD and UOW tables to verify that the Capture program is
capturing changes.
7. Rerun the Capture and Apply programs with the trace option and examine
the trace file for indicators of the problem.
The Capture and Apply programs can also encounter problems while
capturing and replicating changed data, even though the SQL that the DB2
Control Center or DJRA generated for defining replication sources and
subscriptions ran without error. You can determine the cause of the errors
with information from the following sections: “Problem determination for the
Apply program”, “Problem determination for the Capture program” on
page 156, and “Troubleshooting” on page 166.
For subscription sets that do not replicate successfully, the Apply trail table
records the SQL code and the SQLSTATE. Additional SQL codes and states
associated with the problem can be found in the Apply program trace file.
Use the Replication Analyzer to view information in this table; see “Using the
Replication Analyzer” on page 163. You can also query information in the
Apply trail table directly to gather problem-determination information for
failing subscription sets.
1. Ensure that the data in the Apply trail table is current:
v Temporarily disable subscription sets that are successful to ensure that
rows in the Apply trail table apply only to subscription sets that have
problems.
v Delete all unnecessary rows from the Apply trail table to remove
information from past replications.
2. To gather the problem-determination information, execute a query similar
to:
SELECT TARGET_TABLE, STATUS, SQLSTATE, SQLCODE, SQLERRM, APPERRM, LASTRUN, LASTSUCCESS
FROM ASN.IBMSNAP_APPLYTRAIL
WHERE STATUS <> 0
ORDER BY TARGET_TABLE, LASTRUN DESC, SQLCODE DESC, SQLSTATE ASC
This query returns the following columns from the Apply trail table:
TARGET_TABLE
The target table for the current subscription set.
STATUS
Contains -1 to indicate the failed subscription set.
SQLSTATE
Contains the error SQLSTATE for the failed subscription set.
SQLCODE
Contains the error SQLCODE for the failed subscription set.
SQLERRM
Contains the text of the error message that corresponds to the SQL
code.
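Step 1 above recommends deleting unnecessary rows from the Apply trail table before gathering problem-determination data. A cleanup statement might look like the following sketch; it keeps only the failing cycles, on the assumption (consistent with the query above, where -1 indicates failure) that STATUS = 0 marks a successful cycle:

```sql
-- Remove rows for subscription cycles that completed successfully,
-- leaving only failing cycles for analysis (assumes STATUS = 0 means success)
DELETE FROM ASN.IBMSNAP_APPLYTRAIL
WHERE STATUS = 0;
```

Adjust the predicate if you also want to keep recent successful cycles for comparison.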
While tracing its activity, the Apply program records the following kinds of
information in the trace file:
v Connections to the control server to obtain information for subscription sets
to be processed
v Connections to source servers to retrieve rows to be replicated from the CD
table to the target table
v Connections to the target servers to insert, update, and delete rows in the
target tables
The Apply program inserts error messages and indicators in the trace file at
the points where it encounters errors.
When you specify a trace parameter, you must also specify the name of a
trace file; for workstation systems, precede the trace file name with the
redirection symbol (>). In the start command, ApplyQual is the Apply
qualifier, CtlSrvr is the control server, and apply.trc is the trace file.
The trace file is created in the directory from which you start the Apply
program.
3. Let replication run until the problem recurs and the Apply program inserts
a row into the Apply trail table.
4. Stop the Apply program and restart it without the trace parameter. Be sure
to re-enable subscription sets that were successful.
5. View the trace file using any editor. You could also send the file to other
systems or print it.
Apply-program log file
For a summary of the Apply program’s activities, you can view the Apply
program log file. The log file is in the directory from which you start the
Apply program and contains messages issued by the Apply program. Because
the information in the Apply program log file is high level, it typically directs
you to the Apply trail table for more detailed information.
The name of the log file is ApplyQual.APP, where ApplyQual is the Apply
qualifier associated with the Apply-program instance.
Capture-program trace table
Use the Replication Analyzer to view information in this table; see “Using the
Replication Analyzer” on page 163. You can also query information in the
trace table directly to gather problem-determination information for the
Capture program.
1. Ensure that the data in the trace table is current. You might want to delete
rows from the table to remove information from past Capture operations.
The Capture program deletes all rows from this table when you cold start
it, so if you want to keep trace information, you should copy the table
before you cold start the Capture program.
2. To gather the problem-determination information for the Capture program,
execute a query similar to:
SELECT *
FROM ASN.IBMSNAP_TRACE
ORDER BY TRACE_TIME
This query returns the following columns from the trace table:
OPERATION
The type of Capture-program operation, such as initialization,
capture, or an error condition.
TRACE_TIME
The time the current row was inserted into this table.
DESCRIPTION
The message ID and message text.
For AS/400 systems, this table also includes the following columns:
JOB_NAME
The fully qualified name of the job that wrote the current entry.
JOB_STR_TIME
The starting time for the job listed in the JOB_NAME column.
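Because a cold start deletes every row in the trace table (as noted in step 1 above), you can preserve its contents beforehand with statements like the following sketch; the name TRACE_SAVE is illustrative:

```sql
-- Preserve the trace table before a cold start of the Capture program
CREATE TABLE ASN.TRACE_SAVE LIKE ASN.IBMSNAP_TRACE;
INSERT INTO ASN.TRACE_SAVE
  SELECT * FROM ASN.IBMSNAP_TRACE;
```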
Capture-program trace file
You can trace the operation of the Capture program to help isolate the causes
of certain kinds of replication errors. The Capture program creates a trace file
when you include a trace parameter in the Capture program start command.
The Capture program inserts error messages and indicators in the trace file at
the points where it encounters errors.
To trace problems on AS/400 systems, view the job logs of the control and
journal jobs. See “Problem determination for AS/400” on page 159 for more
information.
In the start command, SrcSrvr is the source server and capture.trc is the
trace file. Because
this command does not include parameters for the type of start (WARM,
WARMNS, or COLD) or for pruning (PRUNE or NOPRUNE), the Capture
program uses the defaults (WARM and PRUNE). The trace file is created
in the directory from which you start the Capture program.
2. Let replication run until the problem recurs.
3. Stop the Capture program and restart it without the trace parameter.
4. View the trace file using any editor. You could also send the file to other
systems or print it.
Capture-program log file
For a summary of the Capture program’s activities, you can view the
Capture-program log file. The log file is in the directory from which you start
the Capture program and contains messages issued by the Capture program.
The name of the log file is SrcSrvr.CPP, where SrcSrvr is the name of the
source server.
Other problem-determination facilities for the Capture program
For OS/390, VM, and VSE, the Capture program provides the following tools:
Trace buffer
The Capture program puts a small amount of critical diagnostic data
in a wraparound trace buffer during processing. Each trace-buffer
entry describes the current status of data capture. If a severe error
occurs, the Capture program prints the trace buffer before terminating.
The trace buffer supplements the Capture program error message.
Trace output
When an error occurs, you can run the Capture program with the
trace option to collect detailed diagnostic output.
Be sure to record the 6-digit job numbers because you might need them later
in the problem-determination process.
Every two minutes (or at the frequency that you specified for the WAIT
parameter of the STRDPRCAP command), the QZSNCTL5 job wakes up to
determine if any replication source exists that meets both conditions. If it
finds a replication source that is eligible for replication, it starts the journal
job.
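The wake-up frequency described above is set by the WAIT parameter when you start the Capture program. For example, the following sketch sets a two-minute wait; other STRDPRCAP parameters are left at their defaults:

```
STRDPRCAP WAIT(120)
```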
where SrcOwn is the library name, SrcTbl is the table name, and SrcVwQual
is the source-view qualifier for the replication source in question. Both
SrcTbl and SrcOwn are case-sensitive.
Collecting data for problem determination
The following items are needed for problem determination for the Capture
program. The items are listed in order of importance:
where sqlname is the SQL name of the CD table. Both sqlname and
lib are case sensitive.
4. A formatted dump of the user index QDPR/QZSNINDEX5. Issue the
following command before the Capture program ends:
DMPOBJ QDPR/QZSNINDEX5 *USRIDX
5. The contents of the rows in the register table and the register extension
table that correspond to replication sources that you want to know more
about. You can collect this information by executing the following SQL
statements:
SELECT A.*, HEX(CD_OLD_SYNCHPOINT), HEX(CD_NEW_SYNCHPOINT)
FROM ASN/IBMSNAP_REGISTER A
WHERE SOURCE_OWNER='SrcOwn' AND SOURCE_TABLE='SrcTbl'
SELECT *
FROM ASN/IBMSNAP_REG_EXT
WHERE SOURCE_OWNER='SrcOwn' AND SOURCE_TABLE='SrcTbl'
where jobnum is the job number for the job you want to investigate. The
DESCRIPTION column provides important information about the job.
Examples:
To gather trace table entries after 7 a.m., March 31, 2000, execute the
following query:
SELECT *
FROM ASN/IBMSNAP_TRACE
WHERE TRACE_TIME > '2000-03-31-07.00.00.000000'
ORDER BY TRACE_TIME
where JLib is the library name and JName is the name of the journal. Both
JLib and JName are case sensitive.
9. Data from the target server that can help determine how and why the
Apply program fails to replicate to the target table:
You can run the Replication Analyzer at any time after replication setup is
complete to analyze a failure by the Capture program or the Apply program,
or simply to verify your setup. You can find the Replication Analyzer in the
\sqllib\bin directory. The bind file for the Analyzer, analyze.bnd, is in the
\sqllib\bnd directory, but it is not necessary to bind the program because the
Analyzer is autobound for DB2 V6 and V7. If you run the Analyzer with DB2
V5, use the following command to bind the Analyzer:
bind analyze.bnd isolation UR
You run the Replication Analyzer from the Windows command line. Type the
name of the Replication Analyzer command file (analyze.exe) followed by a
list of DB2 alias names of source, target and control servers, separated by
blanks (each of these names must be eight or fewer characters in length).
Because the Replication Analyzer runs from the command line, you do not
need to have either DJRA or the DB2 Control Center running when you run
the Replication Analyzer.
IBM Software Support might ask you to run the Replication Analyzer and
send the generated file to IBM so that the output can be checked for
correctness.
Syntax:
analyze DB_alias_name [DB_alias_name ...] [deepcheck | lightcheck]
        [q=ApplyQual ...] [f=directory]
analyze ?
deepcheck
Specifies that the Analyzer perform a more complete analysis, including
CD and UOW table pruning information, DB2 for OS/390 tablespace
partitioning and compression detail, and analysis of target indexes with
respect to subscription keys. The analysis includes all servers. This
keyword is optional.
f Specifies the directory in which to store the output HTML file. If you do
not specify this keyword, the HTML file is created in the current directory.
No space is allowed between the keyword (f), the equal sign, and the
value. This keyword is optional.
lightcheck
Specifies that all column detail be excluded from the report, thus reducing
report generation time, saving resources, and producing a smaller HTML
file. This keyword is optional and is mutually exclusive with the
deepcheck keyword.
q Specifies the Apply qualifier to use as a filter to restrict analysis of
subscription sets. You can specify the q keyword more than once if you
want to analyze multiple Apply qualifiers. No space is allowed between
the keyword (q), the equal sign, and the value. This keyword is optional.
Examples:
analyze mydb1 mydb2
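A fuller invocation that combines the optional keywords described above might look like the following sketch; the alias names, Apply qualifiers, and directory are illustrative:

```
analyze srcdb1 trgtdb1 ctldb1 deepcheck q=AQUAL1 q=AQUAL2 f=c:\temp
```

Note that, as described above, no space is allowed between a keyword, the equal sign, and its value.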
Ensure that APF authorization was performed for all STEPLIB libraries as
specified in the RUN JCL.
Ensure that:
v Access was given for the database log and directory minidisks. Note that
the Capture program issues internal links to these minidisks.
v Access was given to the C Run Time Library.
v The ASNLMAIN package file was loaded to the database.
v For VM: *IDENT authorization was given to the Capture program virtual
machine.
Any of the following errors could prevent the Capture program from
capturing updates:
v Proper authorization was not granted to the user ID running the Capture
program.
v DATA CAPTURE CHANGES was not specified on the source tables to be
captured. At startup, the Capture program checks that registered tables
have DATA CAPTURE CHANGES specified. If tables are altered after the
Capture program is running, stop and restart the Capture program so that
it can find the tables altered to include DATA CAPTURE CHANGES.
v The proper order for starting the Capture and Apply programs was not
used:
1. Define replication sources and subscriptions before starting the Capture
program.
2. Start the Capture program and look for message number ASN0100I
(initialization completed) in the system console or in the trace table.
3. Start the Apply program so that it performs a full refresh before the
Capture program starts capturing updates.
Check the trace table for possible error messages.
The first time that you start the Capture and Apply programs, the Apply
program performs a full refresh to populate the target tables. Then the
Capture program writes message ASN0104I to the trace table, providing
information related to table owner name, table name, and starting log
sequence number value. This information provides a point from which the
Capture program starts to capture updates.
Updates captured from then on are placed in CD tables. They are eventually
applied to target tables and pruned from the CD tables. After the Capture
program runs for some time, you should see rows in the CD tables if changes
are made to the sources. Periodically, check the trace table to see the progress
made by the Capture program. If it encounters errors, it sends them to the
console and also logs them in the trace table. Similarly, the Apply program
logs its information in the Apply trail table.
Problem: Capture for OS/390 issued message ASN0000E instead of the proper
message number.
Problem: Capture for VM or Capture for VSE issued message ASN0000E instead of
the proper message number.
The Capture program terminates either because of a severe error or when you
issue the stop command. The Capture program terminates with a return code
that indicates successful or unsuccessful completion. Return codes are:
0 The stop command was issued
8 An error occurred during initialization
12 Any other severe error occurred
Problem: Capture for OS/390 failed while using the LE for OS/390 environment.
Error message 0509 occurs because multiple versions of DB2, or DB2 and
DataJoiner, are installed on the same system:
v 0509-0306 Cannot load program asnccp for the following errors:
v 0509-0222 Cannot load the library libdb2.a(shr.o)
v 0509-0026 System error: a file or directory does not exist
Ensure that the LIBPATH environment variable is set to the same environment
in which the Apply program starts.
Problem: Apply component for DB2 Universal Database stops with SQLCODE
-330, SQLSTATE 22517, "A string cannot be used, because its characters cannot be
translated".
When copying between DB2 for OS/390 and DB2 on another platform, the
CCSID translation can cause an INSERT to fail if a translated value is longer
than the DB2 column in which it will be inserted.
If you are running in a mixed environment, ensure that you have installed the
latest maintenance for the CCSID support of your DB2 for OS/390 program.
Problem: I received system error 1067 trying to start the Capture or Apply program
as a Windows NT Service.
Problem: The ASNSERV.LOG file in ASNPATH tells me that the Apply program
was started correctly, but the Apply process terminated.
Problem: I performed a successful bind, but when running the Apply program, I still
get SQLCODE -805, SQLSTATE 51002.
Make sure that the user ID has EXECUTE privilege on the Apply program
packages, and make sure to bind both the Apply program packages to the
control, source, and target server databases.
Problem: The DB2 log filled to capacity because I copied a very large table.
If the error occurred during a full refresh, you can use alternative methods to
load large tables. You can either use the ASNLOAD exit routine, or you can
perform your own load, as described in “Loading target tables offline using
DJRA” on page 127.
If the error occurred while applying changed data, then you can change the
data-blocking parameter to break down large blocks of changed data. See
“Specifying a data-blocking value” on page 122.
Problem: Capture was cold started, which caused the Apply program to perform a
full refresh, but I don’t want a full refresh.
If your target table is very large, and in cases where you decided to use only
your own load mechanism, you might want to suppress any future full
refreshes of the Apply program. Set the DISABLE_REFRESH flag to 1 in the
register table at the source server for the source table. In this case, the Apply
program issues message ASN1016E and copies nothing until you perform a
full refresh.
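Setting the flag described above is an UPDATE against the register table at the source server; the following is a sketch in which SRCOWN and SRCTBL are placeholders for your source owner and table names:

```sql
-- Suppress future full refreshes for one source table
UPDATE ASN.IBMSNAP_REGISTER
   SET DISABLE_REFRESH = 1
 WHERE SOURCE_OWNER = 'SRCOWN'
   AND SOURCE_TABLE = 'SRCTBL';
```

With the flag set, the Apply program issues ASN1016E and copies nothing until you perform a full refresh, as the text above notes.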
If you want to bypass full refresh and also start capturing data as soon as
possible, you can use the offline load function of DJRA. In this case, you do
not need to unload and reload the tables, but just run the SQL generated for
step 1 and step 4. In addition to disabling full refresh, offline load also
deactivates the subscription until you complete step 4. See “Loading target
tables offline using DJRA” on page 127.
Problem: A gap was detected, so the Apply program won’t perform a full refresh of
my target table.
Problem: I received a security violation message, and the Apply program is not
authorized to connect to the database.
The control server name, user ID, and password definitions are case sensitive
and must match exactly those specified in the password file. Check your
definitions again.
Apply for AS/400 does not use a password file, so it attempts to connect to
the database using the user ID specified in the user parameter of the
STRDPRAPY CL command. Ensure that you correctly set up your DRDA
connectivity definitions.
You must start the database manager before invoking the Apply program.
Problem: Apply receives SQLCODE -206 when fetching the source data.
If you use a CCD table to stage replication to multiple target tables, ensure
that the CD or CCD table includes all the columns expected in all target
tables. Subsetting columns or selecting UOW columns to be replicated to the
final targets can cause this problem when a CCD table is added to a
replication scenario after the target tables are defined.
This problem should only occur if you manually define target tables and do
not include all the columns when defining replication sources. You can avoid
this problem by defining all target tables (adding to subscription sets) after
defining any CD and internal CCD tables. DJRA does not allow columns in a
target table that are not in the predefined CD or internal CCD tables; DJRA
displays only a valid subset of the columns for the target table.
“Chapter 9. Capture and Apply for AS/400” on page 175 describes how to
operate the Capture and Apply programs on the AS/400 operating system.
“Chapter 10. Capture and Apply for OS/390” on page 225 describes how to
operate the Capture and Apply programs on the OS/390 operating system.
“Chapter 11. Capture and Apply for UNIX platforms” on page 239 describes
how to operate the Capture and Apply programs on UNIX operating systems.
“Chapter 12. Capture for VM and Capture for VSE” on page 255 describes
how to operate the Capture program on the VM and VSE operating systems.
“Chapter 13. Capture and Apply for Windows and OS/2” on page 263
describes how to operate the Capture and Apply programs on the Windows
and OS/2 operating systems.
Be sure to read the following sections before reading the sections on operating
the Capture and Apply programs for AS/400:
v “Coexistence considerations”
v “Setting up the Capture and Apply programs” on page 176
v “Authorization requirements for running the Capture and Apply programs”
on page 181
v “Restrictions for running the Capture program” on page 190
v “The journal” on page 191
v “Defining replication sources and subscription sets” on page 196
v “Using a relative record number (RRN) as a primary key” on page 196
Coexistence considerations
You cannot run Version 1 of DB2 DataPropagator for AS/400 concurrently
with Version 7. If you currently use Version 1, or if you use Version 1
replication components in a Version 5 DB2 DataPropagator for AS/400
environment, you must either:
v Migrate your Version 1 replication environment to Version 5. Instructions
are in the Migration Guide in the Library page of the DB2 DataPropagator
Web site (www.ibm.com/software/data/dpropr/).
v If your Version 1 replication environment is small (for example, if it
contains fewer than 50 source registrations and subscriptions), do not
migrate to Version 5. Instead, use the DataJoiner Replication Administration
(DJRA) tool to create your replication environment in Version 7.
You should use DJRA for all replication administration tasks. However, both
DJRA and the DB2 Control Center provide basic replication administration
functions for defining replication sources and subscription sets. Only DJRA
provides support for remote journals and the use of a relative record number
(RRN) as a primary key.
For example, use the following steps to connect to an AS/400 server from a
DB2 for Windows NT workstation:
1. Log on to the AS/400 server and locate the relational database:
a. Log on to the AS/400 server to which you want to connect.
b. Enter the DSPRDBDIRE command.
c. In the output, locate the entry whose remote location is *LOCAL; that
entry names the local relational database. For example, in the following
output, the database is called DB2400E:
MYDBOS2 9.112.14.67
RCHASDPD RCHASDPD
DB2400E *LOCAL
RCHASLJN RCHASLJN
2. Catalog the AS/400 database in DB2 for Windows NT:
a. From your Windows NT workstation, click Start->Programs->DB2 for
Windows NT->Command Window. The DB2 CLP command window
opens.
b. In the command window, type the following three commands in exact
order:
db2 catalog tcpip node server_name remote server_name server 446 system server_name ostype OS400
Where server_name is the TCP/IP host name of the AS/400 system, and
rdb_name is the name of the AS/400 relational database that you found
in Step 1 on page 176.
3. In the command window, issue the following command:
db2 terminate
4. Ensure that the AS/400 user profile that you will use to log on to your
AS/400 system uses CCSID 37:
a. Log on to the AS/400 system.
b. Type the following command, where user is the user profile:
CHGUSRPRF USRPRF (user) CCSID(37)
c. To make sure that DB2 for Windows NT and DB2 for AS/400 have
been connected, issue the following command:
db2 connect to rdb_name user user_name using password
5. Make sure that the DDM server is started on the AS/400 system by
typing:
STRTCPSVR SERVER(*DDM)
6. From your Windows NT workstation, use the Control Center or DJRA to
administer the AS/400 database.
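Step 2 above mentions three commands but only the node-catalog command survives in this copy. A typical full sequence for cataloging an AS/400 database over TCP/IP port 446 is sketched below; the DCS-catalog and database-catalog commands are reconstructed here as assumptions, so verify the options against your DB2 Connect documentation:

```
db2 catalog tcpip node server_name remote server_name server 446 system server_name ostype OS400
db2 catalog dcs database rdb_name
db2 catalog database rdb_name as rdb_name at node server_name authentication dcs
```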
Verifying and customizing your installation of DB2 DataPropagator for
AS/400
You should install DB2 DataPropagator for AS/400 before using the
replication administration tools, because the installation process issues the
CRTDPRTBL command to automatically create the replication control tables.
These tables are created in the DataPropagator Relational collection (named
ASN), if they do not already exist.
The installation program also creates an SQL journal, an SQL journal receiver
for this library, and work management objects. Table 8 lists the work
management objects that are created.
Table 8. Work management objects
Description Object type Name
Subsystem description *SBSD QDPR/QZSNDPR
Job queue *JOBQ QDPR/QZSNDPR
Job description *JOBD QDPR/QZSNDPR
A note on work management: You can alter the default definitions or provide
your own definitions. See OS/400 Work Management V4R3, SC41-5306 for more
information about changing these definitions.
Important: The CRTDPRTBL command is the only command that you should
use to create AS/400 control tables. Do not use DJRA to create the control
tables.
CRTDPRTBL DPRVSN( 7 | 5 )
You can see the current values of the Capture program attributes by issuing
the CHGDPRCAPA command and pressing F4 to prompt on the command.
CHGDPRCAPA RETAIN( *SAME | minutes ) LAG( *SAME | minutes )
FRCFRQ( *SAME | seconds ) CLNUPITV( *SAME | 1-100 )
Ensure that the Apply intervals are set to copy changed information
before the value on the RETAIN parameter is reached. This prevents
your tables from becoming inconsistent. If they become inconsistent,
the Apply program performs full refreshes.
*SAME (default)
Specifies that the value remains unchanged.
minutes
Specifies the number of minutes that CD table rows are retained.
The maximum value is 35 000 000. The default value is 10 080
minutes (7 days).
This is a global value, and is used for all defined source tables.
Setting this value at a lower number can affect processor usage.
*SAME (default)
Specifies that the value remains unchanged.
seconds
Specifies the number of seconds that the Capture program
keeps CD table and UOW table changes in buffer space before
making them available for use by the Apply program. This
value can range from 30 to 600 seconds. The default value is
180 seconds.
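For example, to lower the retention period to one day and make captured changes available every minute, a command like the following sketch could be used, combining the two parameters described above:

```
CHGDPRCAPA RETAIN(1440) FRCFRQ(60)
```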
DPRVSN( 7 | 5 ) APYQUAL( *ALL | *USER | apply-qualifier )
You cannot use the GRTDPRAUT command while the Capture or Apply
programs are running, or when applications that use the source tables are
active, because authorizations cannot be changed on files that are in use.
Examples
If the application server job on the source server used by the Apply
program runs under a different user profile (for example, QUSER), the
command is:
GRTDPRAUT USER(QUSER) AUT(*APPLY) DPRVSN(7) APYQUAL(A1)
The following table lists the authorities granted when you specify the
AUT(*REGISTRAR) parameter on the GRTDPRAUT command:
Table 12. Authorities granted with GRTDPRAUT AUT(*REGISTRAR)
Library Object Type Version Authorizations
QSYS ASN *LIB 5, 7 *USE, *ADD
ASN QSQJRN *JRN 5, 7 *OBJOPR,
*OBJMGT
ASN IBMSNAP_REGISTER *FILE 7 *OBJOPR,
*READ, *ADD,
*UPD, *DLT
ASN IBMSNAP_REGISTERX *FILE 7 *OBJOPR,
*READ, *ADD,
*UPD, *DLT
ASN IBMSNAP_REG_EXT *FILE 5, 7 *OBJOPR,
*OBJMGT,
*READ, *ADD,
*UPD, *DLT
The following table lists the authorities granted when you specify the
AUT(*SUBSCRIBER) parameter on the GRTDPRAUT command:
Table 13. Authorities granted with GRTDPRAUT AUT(*SUBSCRIBER)
Library Object Type Version Authorizations
QSYS ASN *LIB 7 *USE, *ADD
ASN IBMSNAP_SUBS_SET *FILE 7 *CHANGE
ASN IBMSNAP_APPLYTRAIL *FILE 7 *CHANGE
ASN IBMSNAP_SUBS_COLS *FILE 7 *CHANGE
ASN IBMSNAP_SUBS_EVENT *FILE 7 *CHANGE
ASN IBMSNAP_SUBS_STMTS *FILE 7 *CHANGE
The following table lists the authorities granted when you specify the
AUT(*CAPTURE) parameter on the GRTDPRAUT command:
Table 14. Authorities granted with GRTDPRAUT AUT(*CAPTURE)
Library Object Type Version Authorizations
QSYS ASN *LIB 5, 7 *USE, *OBJMGT
ASN IBMSNAP_REGISTER *FILE 5, 7 *USE, *UPD
ASN IBMSNAP_REG_EXT *FILE 5, 7 *USE, *UPD
QSYS Control library *LIB 5, 7 *USE
Control library CD table *FILE 5, 7 *OBJOPR,
*OBJMGT,
*READ, *UPD,
*DLT, *ADD
The following table lists the authorities granted when you specify the
AUT(*APPLY) parameter on the GRTDPRAUT command:
Table 15. Authorities granted with GRTDPRAUT AUT(*APPLY)
Library Object Type Version Authorizations
QSYS ASN *LIB 5, 7 *USE
ASN IBMSNAP_SUBS_SET *FILE 7 *CHANGE
ASN IBMSNAP_APPLYTRAIL *FILE 7 *CHANGE
ASN IBMSNAP_SUBS_COLS *FILE 7 *USE
ASN IBMSNAP_SUBS_EVENT *FILE 7 *USE
ASN IBMSNAP_SUBS_STMTS *FILE 7 *USE
ASN IBMSNAP_SUBS_MEMBR *FILE 7 *USE
ASN ASNA* *SQLPKG 7 *USE
ASN ASNU* *SQLPKG 7 *USE
ASN IBMSNAP_REGISTER *FILE 7 *USE, *UPD
ASN IBMSNAP_REG_EXT *FILE 5, 7 *USE, *UPD
ASN IBMSNAP_UOW *FILE 5, 7 *USE, *UPD
ASN IBMSNAP_PRUNCNTL *FILE 7 *USE, *UPD,
*ADD
Revoking authority
The Revoke DPR Authority (RVKDPRAUT) command revokes authority to
the replication control tables so that users can no longer define or modify
replication sources and subscriptions.
RVKDPRAUT USER( user-name... | *PUBLIC ) DPRVSN( 7 | 5 )
Example
To revoke authorities to the control tables:
RVKDPRAUT USER(user-name) DPRVSN(7)
When conditions make capturing data for a particular source table impossible,
the Capture program changes the state of the source table from capturing
changes to needing a full refresh. (See Table 20 on page 205 for a list of such
conditions.) Other conditions that prevent data capturing for a source table
are:
v An ALTER TABLE is performed on the source table or the CD table such
that either:
– A column in the CD table is no longer in the source table.
– The column in the CD table has different attributes (data type, length)
from its counterpart in the source table.
When you need to perform an ALTER TABLE on the source table, ensure
that you remove the subscription and define the source table again. Or you
can use the List or Change replication sources action in DJRA to fix the
changed data table. If you defined targets, you can also use the List
members or add a column to target table action in DJRA to alter the target
tables.
v A lock is placed on the source table or the CD table that prevents the
Capture program from accessing the needed information.
The journal
DB2 DataPropagator for AS/400 uses the information that it receives from the
journals about changes to the data to populate the CD and UOW tables for
replication.
DB2 DataPropagator for AS/400 runs under commitment control for most
operations and therefore requires journaling on the control tables. (The
QSQJRN journal is created when the CRTDPRTBL command creates a
collection.)
Administrators must manually create the QSQJRN journal in both the library
that contains the replication source control tables and the library that contains
the target tables. Administrators must also ensure that all the source tables are
journaled correctly.
Important: The intention of this type of setup is to have the replication source
definitions on the same AS/400 system as the replication target.
To define a replication source with remote journals, select Define One Table
as a Replication Source from the DJRA main window, select an AS/400
source table, then select the AS/400 policies tab. From this tab, select the
remote journal checkbox, and enter the remote journal library, remote journal
name, and Capture server.
For more information about the remote journal function, see AS/400 Remote
Journal Function for High Availability and Data Replication, SG24-5189.
Creating journals for source tables
To set up the source table journals, you must have the authority to create
journals and journal receivers for the source tables to be defined.
Important: Use a journal for the source tables that is different from the
QSQJRN journals created by DB2 DataPropagator for AS/400 in the ASN
library, the source library, the control library, or the target library.
1. Create a journal receiver using the Create Journal Receiver
(CRTJRNRCV) command. Be sure to place the journal receiver in a library
that is saved regularly.
2. Create a journal using the Create Journal (CRTJRN) command. Be sure to:
v Specify the name of the journal receiver that you created in the first
step.
v Use the Manage receiver (MNGRCV) parameter to have the system
change the journal receiver and attach a new one when the attached
receiver becomes too large. If you choose this option, you do not need
to use the CRTJRN command to detach receivers and create and attach
new receivers manually.
v Specify DLTRCV(*NO) only if you have overriding reasons to do so (for
example, if you need to save these journal receivers for recovery
reasons). If you specify DLTRCV(*YES), these receivers might be deleted
before you have a chance to save them.
You can use two values on the RCVSIZOPT parameter of the CRTJRN
command (*RMVINTENT and *MINFIXLEN) to optimize your storage
availability and system performance. See the AS/400 Programming:
Performance Tools Guide for more information.
3. Start journaling the source table using the Start Journal Physical File
(STRJRNPF) command, as in the following example:
STRJRNPF FILE(library/file)
JRN(JRNLIB/DJRN1)
OMTJRNE(*OPNCLO)
IMAGES(*BOTH)
Specify the name of the journal that you created in step 2. The Capture
program requires a value of *BOTH for the IMAGES parameter.
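Steps 1 and 2 above can be sketched as the following pair of CL commands;
the library and object names (RCVLIB, RCV0001, JRNLIB, DJRN1) are
placeholders, not names from this guide:

```
CRTJRNRCV JRNRCV(RCVLIB/RCV0001)
CRTJRN JRN(JRNLIB/DJRN1) JRNRCV(RCVLIB/RCV0001) +
       MNGRCV(*SYSTEM) DLTRCV(*NO) +
       RCVSIZOPT(*RMVINTENT *MINFIXLEN)
```

This sketch lets the system manage receiver changes (MNGRCV(*SYSTEM))
while keeping detached receivers until you save them (DLTRCV(*NO)).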
Managing journals and journal receivers
The Capture program uses the Receive Journal Entry (RCVJRNE) command
to receive journal entries.
If you use system change journal management support, you must
create a journal receiver that specifies the threshold at which you want the
system to change journal receivers. The threshold must be at least 5000 KB,
and should be based on the number of transactions on your system. The
system automatically detaches the receiver when it reaches the threshold size
and creates and attaches a new journal receiver, if it can.
Use the CHGJRN command to detach the old journal receiver and attach a
new one. This command prevents Entry not journaled error conditions and
limits the amount of storage space that the journal uses. To avoid affecting
performance, do this at a time when the system is not at maximum use.
You can switch journal receiver management back to the system by specifying
CHGJRN MNGRCV(*SYSTEM).
You should regularly detach the current journal receiver and attach a new one
for two reasons:
v Analyzing journal entries is easier if each journal receiver contains the
entries for a specific, manageable time period.
v Large journal receivers can affect system performance and take up valuable
space on auxiliary storage.
The default message queue for a journal is QSYSOPR. If you have a large
volume of messages in the QSYSOPR message queue, you might want to
associate a different message queue, such as DPRUSRMSG, with the journal.
You can use a message handling program to monitor the DPRUSRMSG
message queue. For an explanation of messages that can be sent to the journal
message queue, see OS/400 Backup and Recovery.
To take advantage of the delete journal receiver exit routine and leave journal
management to the system, specify DLTRCV(*YES) and MNGRCV(*SYSTEM)
on the CHGJRN or CRTJRN command.
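For example, assuming an existing journal JRNLIB/DJRN1 (a placeholder
name), a command of the following form switches the journal to
system-managed receivers with automatic receiver deletion:

```
CHGJRN JRN(JRNLIB/DJRN1) JRNRCV(*GEN) MNGRCV(*SYSTEM) DLTRCV(*YES)
```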
If the journal that the receiver is associated with has no association with any
of the source tables, this exit routine approves the deletion of the receiver.
If the journal receiver is used by one or more source tables, this exit routine
makes sure that the receiver being deleted does not contain entries that have
not been processed by the Capture program. The exit routine disapproves the
deletion of the receiver if the Capture program still needs to process entries
on that receiver.
If you must delete a journal receiver and the delete journal receiver exit
routine does not approve the deletion, specify DLTJRNRCV DLTOPT(*IGNEXITPGM)
to override the exit routine.
Removing the delete journal receiver exit routine: If you want to handle the
deletion of journal receivers manually, you can remove the delete journal
receiver exit routine by issuing the following command:
RMVEXITPGM EXITPNT(QIBM_QJO_DLT_JRNRCV)
FORMAT(DRCV0100)
PGMNBR(value)
Registering the delete journal receiver exit routine for upgraded systems: If
the 5769DP3 version of DB2 DataPropagator for AS/400 was installed on
V4R1 and the operating system was upgraded to V4R2 or V4R3 without
reinstalling the product, you must register the exit routine with this
command:
ADDEXITPGM EXITPNT(QIBM_QJO_DLT_JRNRCV)
FORMAT(DRCV0100)
PGMNBR(value *LOW)
CRTEXITPNT(*NO)
PGM(QDPR/QZSNDREP)
When you define tables as replication sources, the CCSID attributes of CHAR,
VARCHAR, GRAPHIC, and VARGRAPHIC columns in the CD table must be
the same as the CCSID column attributes of the source table.
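One way to verify the CCSID attributes before defining the source is to
query the QSYS2.SYSCOLUMNS catalog view; the library name SRCLIB and
table name SRCTBL below are placeholders:

```
SELECT COLUMN_NAME, DATA_TYPE, CCSID
  FROM QSYS2.SYSCOLUMNS
  WHERE TABLE_SCHEMA = 'SRCLIB'
    AND TABLE_NAME = 'SRCTBL'
```

Run the same query against the CD table and compare the CCSID values
column by column.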
Because the RRN of a source table row does not change unless the source
table is reorganized, the RRN value can be used as a primary key for the
source table row if a source table is not reorganized. Any time that you
reorganize a source table (to compress deleted rows, for example), DB2
DataPropagator for AS/400 performs a full refresh of all the target tables.
Important: Only the Apply program for AS/400 can be used to maintain
copies that contain RRN columns, whether these copies are on AS/400 or on
other DB2 target platforms.
To define a replication source with a RRN column, select Define One Table as
a Replication Source from the DJRA main window, select an AS/400 source
table, then select the AS/400 policies tab. From this tab, select the RRN
checkbox.
After you start the Capture program, it runs continuously until you stop it or
it detects an unrecoverable error.
STRDPRCAP
RESTART( *YES | *NO )
JOBD( *LIBL/QZSNDPR | library-name/job-description-name )
DPRVSN( 7 | 5 )
WAIT( 120 | value )
CLNUPITV( *DPRVSN | *GLOBAL | hours-to-wait  *IMMED | *DELAYED | *NO )
JRN( *ALL | library-name/journal-name )
GENCDROW( *ALLCHG | *REGCOLCHG )
JOBD Specifies the name of the job description to use when submitting
the Capture program.
*LIBL/QZSNDPR (default)
Specifies the default job description provided with DB2
DataPropagator for AS/400.
library-name/job-description-name
Represents the name of the job description used for the
Capture program.
You can run the STRDPRCAP command manually, or you can automatically
run the command as a part of the initial program load (IPL startup program).
For information about including the STRDPRCAP command in a startup
program, see OS/400 Work Management V4R3, SC41–5306.
If the job description specified with the JOBD parameter uses job queue
QDPR/QZSNDPR, and the DB2 DataPropagator for AS/400 subsystem is not
active, the STRDPRCAP command starts the subsystem. If the job description
is defined to use a different job queue and subsystem, you must start this
subsystem manually with the Start Subsystem (STRSBS) command either
before or after running the STRDPRCAP command:
STRSBS QDPR/QZSNDPR
You can set up the system to start the subsystem automatically by adding the
STRSBS command to the program that is referred to in the QSTRUPPGM
system value on your system.
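As a sketch, the startup program named by the QSTRUPPGM system value
might contain CL statements such as the following; the RESTART value
shown is illustrative:

```
/* Start the DPR subsystem, then the Capture program */
STRSBS SBSD(QDPR/QZSNDPR)
STRDPRCAP RESTART(*YES)
```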
Determining the progress of the Capture program
To determine the progress of the Capture program, you must either determine
how much work remains between the last Capture process that was
performed and the last Apply process, or use the DJRA Replication Monitor.
If the Capture program has ended, you can determine its progress by
inspecting the warm start table. There is one row for each journal used by the
source tables. The LOGMARKER column provides the timestamp of the last
journal entry processed successfully. The SEQNBR column provides the
journal entry sequence number of that entry.
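Assuming the warm start table uses the usual ASN qualifier (the qualified
name ASN.IBMSNAP_WARM_START is an assumption, not confirmed by this
guide), the progress columns can be inspected with a query such as:

```
SELECT LOGMARKER, SEQNBR
  FROM ASN.IBMSNAP_WARM_START
```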
If the Capture program is still running, you can determine its progress by
completing the following tasks:
Stopping Capture for AS/400
Use the End DPR Capture (ENDDPRCAP) command to end the Capture
program before shutting down the system. You might also want to end the
program during periods of peak system use to increase the performance of
other programs that run on the system.
ENDDPRCAP
OPTION( *CNTRLD | *IMMED )
DPRVSN( 7 | 5 )
If you use the ENDJOB command, temporary objects might be left in the
QDPR library. These objects have the types *DTAQ and *USRSPC, and are
named QDPRnnnnnn, where nnnnnn is the job number of the job that used
them. You can delete these objects when the job that used them (identified by
the job number in the object name) is not active.
If the job QDPRCTL5 does not end within a reasonable time after you issue
this command, use the ENDJOB command with the *IMMED option to end this job and all the journal
jobs running in the DB2 DataPropagator for AS/400 subsystem. Do not end
Apply jobs running in the same subsystem if you want to end only the
Capture program.
In rare cases when the job QDPRCTL5 ends abnormally, the journal jobs
created by QDPRCTL5 might still be left running. The only way to end these
jobs is to use the ENDJOB command with either the *IMMED or *CNTRLD
option.
Reinitializing Capture for AS/400
The Initialize DPR Capture (INZDPRCAP) command initializes the Capture
program by directing the Capture program to work with an updated list of
source tables.
Source tables under the control of the program can change while the Capture
program is running. Use the INZDPRCAP command to ensure that the
Capture program processes the most up-to-date replication sources.
If you change the values of the tuning parameters while the Capture program
is running, enter the INZDPRCAP command to reinitialize the program using
the new values.
The Capture program must be running before you run this command.
INZDPRCAP
DPRVSN( 7 | 5 )
JRN( *ALL | library-name/journal-name )
Pruning the change data and unit-of-work tables and minimizing source
server DASD usage
The CLNUPITV parameter on the STRDPRCAP command specifies the
maximum number of hours that the Capture program waits before pruning
old records from the CD tables and the UOW table. For more information
about the CLNUPITV parameter, see “Starting Capture for AS/400” on
page 197.
Pruning does not recover DASD for you. You have to frequently issue
RGZPFM (Reorganize Physical File Member) commands against the CD tables
and the UOW tables to recover the DASD. The RGZPFM command reclaims
deleted space by moving active rows forward. It requires an EXCLRD lock of
the file and member, and you must schedule it when Capture and Apply are
not running.
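For example, assuming a CD table named CDTABLE in library SRCLIB
(placeholder names), the reorganization can be run while the Capture and
Apply programs are stopped:

```
RGZPFM FILE(SRCLIB/CDTABLE)
```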
For more information about how the Capture program processes different
journal entry types, see Table 20.
Notes:
1. The R-UP image and the R-UB image form a single UPD record in the CD table if
the PARTITION_KEYS_CHG column in the register table is N. Otherwise, the R-UB
image inserts a DLT record in the CD table and the R-UP image inserts an ADD
record in the CD table.
2. The R-UR image and the R-BR image form a single UPD record in the CD table if
the PARTITION_KEYS_CHG column in the register table is N. Otherwise, the R-BR
image inserts a DLT record in the CD table and the R-UR image inserts an ADD
record in the CD table.
3. The following values are used for the journal codes:
C Commitment control operation
F Database file operation
J Journal or journal receiver operation
R Operation on specific record
All other journal entry types are ignored by the Capture program.
CRTDPRPKG
RDB( *ALL | rdb-name )
RDB Specifies the relational database where the packages are created. The
packages are not created in the following cases:
v The RDB is on an AS/400 system and the ASN library does not
exist on the remote system.
v The RDB is not on an AS/400 system and ASN is not defined as
an authorization ID on that RDB.
*ALL (default)
Specifies to create an SQL package on every RDB that is used as
a source server or a target server by DB2 DataPropagator for
AS/400.
rdb-name
Represents the name of the relational database. You can use the
Work with RDB Directory Entries (WRKRDBDIRE) command
to find this name.
When prompting on the CRTDPRPKG command, you can
press the F4 key to choose from the list of databases in the RDB
directory.
The packages are created using the ASN qualifier. They are created in the
ASN library for DB2 UDB for AS/400 platforms. For other platforms, the
authorization ID ASN is used.
After creating the DB2 DataPropagator for AS/400 packages, this command
grants *PUBLIC authority to the packages to allow them to be used by users
of DB2 DataPropagator for AS/400.
The system also produces a spool file that contains the SQL messages
associated with each attempt to create a package.
STRDPRAPY
USER( *CURRENT | *JOBD | user-name )
JOBD( *LIBL/QZSNDPR | *LIBL/job-description-name | library-name/job-description-name )
DPRVSN( 7 | 5 )
APYQUAL( *USER | apply-qualifier )
CTLSVR( *LOCAL | rdb-name )
TRACE( *NONE | *ERROR | *ALL | *PRF )
FULLREFPGM( *NONE | library-name/program-name )
SUBNFYPGM( *NONE | library-name/program-name )
INACTMSG( *YES | *NO )
ALWINACT( *YES | *NO )
DELAY( 6 | delay-time )
RTYWAIT( 300 | retry-wait-time )
USER Specifies the user profile under which this instance of the Apply
program runs. The control tables (in ASN) are located on the
relational database specified with the CTLSVR parameter. The same
control tables are used regardless of the value specified on the USER
parameter.
*CURRENT (default)
Specifies that the user ID associated with the current job is the
user ID associated with this instance of the Apply program.
*JOBD
Represents the user ID specified in the job description
associated with this instance of the Apply program. The job
description cannot specify USER(*RQD).
user-name
Specifies the user ID associated with this instance of the Apply
program. The following IBM-supplied objects are not valid on
this parameter: QDBSHR, QDFTOWN, QDOC, QLPAUTO,
QLPINSTALL, QRJE, QSECOFR, QSPL, QSYS, or QTSTRQS.
When prompting on the STRDPRAPY command, you can press
the F4 key to see a list of users who defined subscriptions.
TRACE Specifies whether the Apply program should generate a trace. If the
Apply program generates a trace, the trace is output to a spool file
called QPZSNATRC.
*NONE (default)
Specifies that no trace is generated.
*ERROR
Specifies that the trace should contain information for errors
only.
*ALL
Specifies that the trace should contain information for errors,
execution flow, and SQL statements issued by the Apply
program.
*PRF
Specifies that the trace should contain information that can be
used to analyze performance at different stages of the Apply
program execution.
For more information, see “Using the ASNDONE exit routine for
AS/400” on page 220.
*NONE (default)
Specifies that an exit routine is not used.
library-name/program-name
Represents the qualified name of the program to be called when
the Apply program completes processing a subscription set. For
example, to call program APPLYDONE in library DATAPROP,
the qualified name is DATAPROP/APPLYDONE.
DELAY Specifies the delay time (in seconds) at the end of each Apply
program cycle when continuous replication is used.
6 Specifies a delay time of 6 seconds.
delay-time
Specifies a delay time between 0 and 6 seconds inclusive.
RTYWAIT Specifies in seconds how long the Apply program should wait after
it encounters an error before it retries the operation that failed.
300
Specifies a retry wait time of 300 seconds.
retry-wait-time
Specifies a retry wait time between 0 and 35000000 seconds
inclusive.
You can set up the system to start the subsystem automatically by adding the
STRSBS command to the program that is referred to in the QSTRUPPGM
system value on your system. If you use the QDPR/QZSNDPR subsystem, it
is started as part of the STRDPRAPY command processing.
The Apply program is started for the local control server only.
Scheduling Apply for AS/400
Use the ADDJOBSCDE command to start the Apply program at a specific
time.
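A sketch of such a job schedule entry follows; the job name, apply
qualifier, days, and time are all placeholders:

```
ADDJOBSCDE JOB(DPRAPPLY) FRQ(*WEEKLY) +
           SCDDAY(*MON *TUE *WED *THU *FRI) SCDTIME(060000) +
           CMD(STRDPRAPY APYQUAL(AQ1))
```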
Stopping Apply for AS/400
The End DPR Apply (ENDDPRAPY) command ends an instance of the Apply
program on the local system.
You should end the Apply program before any planned system down time.
You might also want to end the Apply program during periods of peak
system activity.
ENDDPRAPY
USER( *CURRENT | user-name )
OPTION( *CNTRLD | *IMMED )
DPRVSN( 7 | 5 )
APYQUAL( *USER | apply-qualifier )
CTLSVR( *LOCAL | rdb-name )
CTLSVR Specifies the name of the relational database where the Version 7
control tables are located.
*LOCAL (default)
Specifies that the control tables are located on the local
relational database.
rdb-name
Specifies that the subscription control tables are located on this
relational database. You can use the Work with RDB Directory
Entries (WRKRDBDIRE) command to find this name.
When prompting on the ENDDPRAPY command, you can
press the F4 key to choose from the list of databases in the RDB
directory.
The ENDDPRAPY command uses the value of the APYQUAL and CTLSVR
parameters to search the Apply job table for the job name, job number, and
job user for the referenced Apply program, and ends that job.
The ENDDPRAPY command fails if the user ID running the command is not
authorized to end the Apply job.
If the program is created to run with a new activation group: the Apply
program and the subscription notify program will not share SQL resources,
such as RDB connections and open cursors. The activation handling code in
the AS/400 operating system frees any resources allocated by the subscription
notify program before control is returned to the Apply program. Additional
resources are consumed each time that the Apply program calls the
subscription notify program.
If the program is created to run in the caller’s activation group: it shares SQL
resources with the Apply program. Design the program so that you minimize
its impact on the Apply program. For example, the program might cause
unexpected Apply program processing if it changes the current relational
database (RDB) connection.
Refreshing target tables with the ASNLOAD exit routine for AS/400
The ASNLOAD full-refresh exit routine is called by the Apply program:
v When it determines that a full refresh of a target table is necessary.
v If you specify the name of a full refresh program on the FULLREFPGM
parameter when you start the Apply program.
When a full refresh of a subscription set is necessary, the Apply program calls
the exit routine. The program then performs a full refresh of the target table
(if necessary), or of each target table listed in the subscription set.
You can use an exit routine instead of the Apply program to perform a full
refresh more efficiently. For example, if you are copying every row and every
column from a source table to a target table, you can design a full-refresh exit
routine that uses a Distributed Data Management (DDM) file and the Copy
File (CPYF) CL command to copy the entire file from the source table to the
target table.
If the exit routine returns a non-zero return code, the current subscription set
being processed by the Apply program fails. Processing of the remainder of
the subscription set is discontinued until the next iteration.
You cannot direct the Apply program to use another program unless you end
the Apply program and start it again with another STRDPRAPY command.
Trace indicator
Specifies whether the Apply program generates trace data. The exit
routine can use the trace indicator to coordinate its internal trace with the
Apply trace.
When the Apply program generates a trace, it prints to a spool file. If the
exit routine is running in a separate activation group, the results print to a
separate spool file. If the exit routine runs in the caller’s activation group,
the results print to the same spool file as the Apply trace.
The values for trace indicator are:
YES
Trace data is being produced.
NO
No trace data is being produced.
Capture for OS/390 and Apply for OS/390 are packaged in SMP/E format.
The installation sequence for each program consists of:
1. Customizing invocation JCL to suit your environment
2. Using SMP/E to install
3. Providing APF authorization
4. Creating and loading the VSAM messages file
5. Binding to the DB2 subsystem and target or control subsystems that the
DB2 subsystem will connect to
See the DB2 Universal Database for OS/390 Version 7 Program Directory for
complete installation instructions for the Capture and Apply programs.
To set up your SMP/E DDDEF entries during installation (in step 2 on page 225),
refer to the Capture sample library SASNLBSE(ASNLDEF) and the Apply
sample library SASNABSE(ASNADEF).
When you installed the Capture or Apply program, each DDDEF entry was
set up to point to a corresponding DB2 DSN###.SDSNLOAD library, where
### referred to the product release (710 for Version 7.1, 610 for Version 6.1,
and 510 for Version 5.1). If any of the related releases of DB2 were not
installed, then the DDDEF entries were set up with the DB2 SDSNLOAD
library for "DDDEF(SDSNLD##)" to point to the highest installed level of DB2.
For example, if DB2 5.1 was not installed, then DDDEF entries were set up
with DDDEF(SDSNLD51) to point to the DSN710.SDSNLOAD library so that
the SMP/E link-edit completes with return code 4. In that case, the Capture
load module ASNLRP75 in run job ASNL2RN5 and the Apply load module
ASNAPV75 in run job ASNA2RN5 cannot be executed.
If you install a new release of DB2 after installing the Capture or Apply
program:
1. Use the Capture sample DDDEF job SASNLBSE(ASNLDEF), Apply sample
DDDEF job SASNABSE(ASNADEF), or both, to change the DB2
DSN###.SDSNLOAD library data set for DDDEF(SDSNLD##) to the new
installed level of DB2. Note that SDSNLD## refers to the new DB2 release
(51, 61, 71 ) and DSN###.SDSNLOAD refers to the new DB2 release (510,
610, or 710).
2. Run an SMP/E APPLY job to reapply any recent Capture V7 PTF, any
recent Apply V7 PTF, or both, using the SMP/E REDO operand.
You can submit the commands from TSO or the MVS console.
This section also lists restrictions for running the Capture program.
Restrictions for running the Capture program
Capture for OS/390 cannot replicate certain types of data. See “General
restrictions for replication” on page 77 for a list of restrictions.
REINIT also rereads the tuning parameters table for any changes made to the
tuning parameters.
Before you add a column to a replication source or CD table using the ALTER
TABLE statement, you must ensure that the Capture program has captured all
the changes for the table. After the ALTER TABLE statement, you must issue
the REINIT command.
Pruning the change data and unit-of-work tables
Use the PRUNE command to start pruning the CD and UOW tables.
The Capture program issues the message ASN0124I when the command is
successfully queued.
If you stop or suspend the Capture program during pruning, you must enter
the PRUNE command again to resume pruning. Pruning does not resume
after you enter the RESUME command.
Displaying captured log progress
Use the GETLSEQ command to get the timestamp and current log sequence
number. You can use this information to determine how far the Capture
program has read the DB2 log.
F jobname,GETLSEQ
The Capture program issues the message ASN0125I indicating when the
current log sequence number is successfully processed.
If you do not specify an index type, the index type is determined as follows:
v If the LOCKSIZE is ROW, the default index type is TYPE 2, regardless of
the type specified on the installation panel DSNTIPE.
v If the LOCKSIZE is not ROW, the default index type is the type specified in
the field DEFAULT INDEX TYPE on the installation panel DSNTIPE. The
default value for that field is TYPE 2.
You can eliminate data currency problems by using the DB2 ODBC Catalog
tables. DB2 DataPropagator for OS/390 Version 6 can keep data in the DB2
ODBC Catalog synchronized with the contents of the real DB2 catalog table.
The Capture program identifies log records that represent changes to the DB2
catalog and records these changed data records in a staging table. The Apply
program replicates the changed data records to the DB2 ODBC Catalog tables.
This section describes how to implement the DB2 ODBC Catalog using the
automatic mode. The automatic mode automatically replicates any DB2
Catalog changes to the DB2 ODBC Catalog tables.
Setting up the DB2 ODBC Catalog
The following section provides setup instructions needed to prepare your
client and server to run your ODBC queries.
[tstcli2x]
; Assuming dbalias2 is a database in DB2 for MVS
SchemaList="'OWNER1','OWNER2','CURRENT SQLID'"
[MyVeryLongDBALIASName]
dbalias=dbalias3
SysSchema=MYSCHEMA
[RDBD2205]
AUTOCOMMIT=1
LOBMAXCOLUMNSIZE=33554431
LONGDATACOMPAT=1
PWD=USRT006
UID=USRT006
DBALIAS=RDBD2205
CLISCHEMA=CLISCHEM
[RDBD2206]
AUTOCOMMIT=1
LOBMAXCOLUMNSIZE=33554431
LONGDATACOMPAT=1
PWD=USRT006
UID=USRT006
DBALIAS=RDBD2206
CLISCHEMA=MYSCHEMA
You must define views for all the DB2 ODBC Catalog tables when you use
your own schema. See Table 26 on page 237 for the list of the DB2 ODBC
Catalog tables for which you must define a view. Use a CREATE VIEW
statement of the following form to define each MYSCHEMA view over the
corresponding CLISCHEM.table_name ODBC table:
CREATE VIEW MYSCHEMA.table_name AS
SELECT * FROM CLISCHEM.table_name
WHERE TABLE_SCHEM = 'MYUSER'
Because the Apply program uses static SQL calls for the control tables, the
Apply bind process searches for the control tables at each server to which
the Apply program is bound, regardless of whether these control tables are
used at that server.
These commands create a list of packages, the names of which are in the
files APPLYCS.LST and APPLYUS.LST.
Other configuration considerations for UNIX-based components
Ensure that the user ID from which the Capture and Apply programs are
running has write privilege on the directories where you invoke the programs.
Write privilege is necessary because both the Capture and Apply programs
create files in the invocation directory.
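Before starting either program, a quick check of the invocation directory's
write permission can avoid a failed start; this is a minimal sketch, and
INVOCATION_DIR is a placeholder, not a variable that the products read:

```shell
#!/bin/sh
# Verify that the directory where Capture or Apply will be invoked is
# writable, since both programs create log and spill files there.
INVOCATION_DIR=${INVOCATION_DIR:-.}
if [ -w "$INVOCATION_DIR" ]; then
  echo "ok: $INVOCATION_DIR is writable"
else
  echo "error: no write access to $INVOCATION_DIR" >&2
  exit 1
fi
```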
The Capture program creates the following files in addition to the spill files:
instnameSRCSRVR.ccp
A log file for the messages issued by the Capture program. These
messages are also recorded in the trace table.
instnameSRCSRVR.tmp
A file that contains the process ID of this invocation of the Capture
program (to prevent multiple Capture programs from being started in
the same instance on the same source server).
Where:
server_name
The name of the source, target, or control server, exactly as it
appears in the subscription set table.
userid The user ID that you plan to use to administer that particular
server. This value is case-sensitive.
password
The password that is associated with userid. This value is
case-sensitive.
The Apply program for UNIX must be able to issue an SQL CONNECT
statement without specifying the user ID and password. If the Apply program
needs to connect to an OS/390 database with SNA connectivity, these settings
are necessary:
v The DB2 for OS/390 database must be cataloged as
AUTHENTICATION=CLIENT.
For more information about authentication and security, refer to the IBM DB2
Universal Database Administration Guide.
This section explains how to perform the following Capture program tasks:
v “Starting Capture for UNIX platforms” on page 244
v “Scheduling Capture for UNIX platforms” on page 244
v “Setting environment variables for Capture for UNIX platforms” on
page 244
v “Stopping Capture for UNIX platforms” on page 247
v “Suspending Capture for UNIX platforms” on page 247
v “Resuming Capture for UNIX platforms” on page 248
v “Reinitializing Capture for UNIX platforms” on page 248
v “Pruning the change data and unit-of-work tables” on page 249
v “Displaying captured log progress” on page 249
This section also lists restrictions for running the Capture program.
Restrictions for running the Capture program
Some actions cause the Capture program to terminate while it is running. Stop
the Capture program before you take any of the following actions:
v Remove an existing replication source.
v Drop a replication source table.
v Make changes that affect the structure of source tables, such as changes
resulting from data definition language or utilities. Structural changes can
compromise the data integrity of the copies. (Using ALTER ADD to add
new columns is an exception.)
HP-UX example:
export SHLIB_PATH=db2homedir/sqllib/lib:/usr/lib:/lib
export LANG=en_US
Linux example:
export LD_LIBRARY_PATH=db2homedir/sqllib/lib:/usr/lib:/lib:/db2/linux/lib
export LANG=en_US
NUMA-Q example:
export LD_LIBRARY_PATH=db2homedir/sqllib/lib:/opt/jse3.0/lib
export LANG=en_US
Solaris example:
export LD_LIBRARY_PATH=db2homedir/sqllib/lib:/usr/lib:/lib
export LANG=en_US
export NLSPATH=/usr/lib/locale/%L/%N:/db2homedir/sqllib/msg/en_US/%N
5. Enter the asnccp command and options:
asnccp src_server [warm | warmns | cold] [prune | noprune]
[notrace | trace] [trcfile | notrctbl] [autostop]
[logreuse] [logstdout] [allchg | chgonly]
Before you stop the Capture program, ensure that the environment variables
are set (see “Setting environment variables for Capture for UNIX platforms”
on page 244). To stop the Capture program, enter the command:
Before you suspend the Capture program, ensure that the environment
variables are set (see “Setting environment variables for Capture for UNIX
platforms” on page 244). To suspend the Capture program, enter the
command:
Before you resume the Capture program, ensure that the environment
variables are set (see “Setting environment variables for Capture for UNIX
platforms” on page 244). To resume the Capture program, enter the command:
reinit also rereads the tuning parameters table for any changes made to the
tuning parameters.
Before you reinitialize the Capture program, ensure that the environment
variables are set (see “Setting environment variables for Capture for UNIX
platforms” on page 244). To reinitialize the Capture program, enter the
command:
Important: Do not use the reinit command to reinitialize the Capture program
after canceling a replication source or dropping a replication source table
while the Capture program is running. Instead, stop the Capture program and
restart it using the WARM or WARMNS option.
Pruning the change data and unit-of-work tables
Use the prune command to start pruning the CD and UOW tables.
Before you begin pruning tables, ensure that the environment variables are set
(see “Setting environment variables for Capture for UNIX platforms” on
page 244). To begin pruning tables, enter the command:
The Capture program issues the message ASN0124I when the command is
successfully queued.
If you stop or suspend the Capture program during pruning, you must enter
the prune command again to resume pruning. Pruning does not resume after
you enter the resume command.
Displaying captured log progress
Use the getlseq command to get the timestamp and current log sequence
number. You can use this information to determine how far the Capture
program has read the DB2 log.
Tip: You can use the DB2 UDB Find Log Sequence Number command
(db2flsn) to identify the physical log file associated with the log
sequence number. You can use this number to delete or archive log files
no longer needed by the Capture program. For more information, see the
IBM DB2 Universal Database Command Reference.
32. Usually the Apply package is created automatically for you; however, if you configure the Apply program
manually, you must explicitly bind the Apply package.
HP-UX example:
export SHLIB_PATH=db2homedir/sqllib/lib:/usr/lib:/lib
export LANG=en_US
Linux example:
export LD_LIBRARY_PATH=db2homedir/sqllib/lib:/usr/lib:/lib:/db2/linux/lib
export LANG=en_US
NUMA-Q example:
export LD_LIBRARY_PATH=db2homedir/sqllib/lib
export LANG=en_US
Solaris example:
export LD_LIBRARY_PATH=db2homedir/sqllib/lib:/usr/lib:/lib
export NLSPATH=/usr/lib/locale/%L/%N:db2homedir/sqllib/msg/en_US/%N
export LANG=en_US
4. Enter the asnapply command and options:
asnapply apl_qual ctrl_serv [loadxit | noloadxit] [inamsg | noinamsg]
[notrc | trcerr | trcflow] [logreuse] [logstdout] [trlreuse]
To stop the Apply program, perform the following steps from a window
where the Apply program is not running:
1. Set environment variable DB2INSTANCE to the value set when the Apply
program was started.
2. Set environment variable DB2DBDFT to the source server specified when
the Apply program was started (or the DB2DBDFT value used when the
Apply program was started).
3. Enter the command:
asnastop apply_qualifier
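Putting the steps together in a shell session (the instance name db2inst1,
database SRCDB, and qualifier AQ1 are placeholders):

```
export DB2INSTANCE=db2inst1
export DB2DBDFT=SRCDB
asnastop AQ1
```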
See the Capture for VM Program Directory or the Capture for VSE Program
Directory for Capture program installation instructions.
This section explains how to perform the following Capture program tasks:
v Starting
v Stopping
v Suspending
v Resuming
v Reinitializing
v Pruning
v Displaying captured log progress
This section also lists restrictions for running the Capture program.
Restrictions for running the Capture program
Capture program restrictions are:
v Tables with field procedures for columns (FIELDPROC specified on
CREATE or ALTER TABLE) are not supported by Capture for VM or
Capture for VSE unless you create a new one-way FIELDPROC based on the
existing FIELDPROC. The existing FIELDPROC doesn’t require any
changes. If the corresponding column(s) on the CD table are defined with
the new one-way FIELDPROC, and if the FIELDPROC does not change the
length of the data, replication can be performed successfully.
v Because the Capture program identifies itself as an APPC/VM resource,
you must specify appropriate IUCV VM/ESA® System Directory control
statements.
Starting Capture for VM and VSE
The start command options are:
   [NOTRACE | TRACE]
   [ALLCHG | CHGONLY]
   [NOUSERID | USERID=auth_name/password]
   [Dbname=databasename]
Stopping Capture for VM and VSE
If you stop the Capture program, it shuts itself down and issues an
informational message. If it detects an error, the program shuts itself down
after cleaning up the data in the affected tables so that the data will not be
used. Staging tables are pruned when it is appropriate. In the case of
abnormal termination, you must initiate a cold start because the warm start
information could not be saved.
Suspending Capture for VM and VSE
Use the SUSPEND command to suspend the Capture program until you issue
the RESUME command. You can use this command to suspend the Capture
program to improve performance for operational transactions during peak
periods without destroying the Capture program run environment.
To suspend the Capture program for VM:
SUSPEND
To suspend the Capture program for VSE:
MSG partition,DATA=SUSPEND
where partition represents the partition that is running Capture for VSE.
Resuming Capture for VM and VSE
Use the RESUME command to resume the suspended Capture program.
To resume the Capture program for VM:
RESUME
To resume the Capture program for VSE:
MSG partition,DATA=RESUME
where partition represents the partition that is running Capture for VSE.
Reinitializing Capture for VM and VSE
Use the REINIT command to reinitialize the Capture program.
Use the REINIT command to begin to capture changes from new source
tables if you add a new replication source or ALTER ADD a column to a
replication source and CD table while the Capture program is running. The
REINIT command tells the Capture program to obtain newly added
replication sources from the register table.
REINIT also rereads the tuning parameters table for any changes made to the
tuning parameters.
To reinitialize the Capture program for VM:
REINIT
To reinitialize the Capture program for VSE:
MSG partition,DATA=REINIT
where partition represents the partition that is running Capture for VSE.
Pruning the change data and unit-of-work tables
Use the PRUNE command to start pruning the CD and UOW tables.
To start pruning for the Capture program for VM:
PRUNE
To start pruning for the Capture program for VSE:
MSG partition,DATA=PRUNE
where partition represents the partition that is running Capture for VSE.
During pruning, if you stop or suspend the Capture program, pruning does
not resume after you enter the RESUME command. You must enter the
PRUNE command again to resume pruning.
Displaying captured log progress
Use the GETLSEQ command to provide the timestamp and current log
sequence number. You can use this number to determine how far the Capture
program has read the DB2 log.
To display captured log progress for the Capture program for VM:
GETLSEQ
To display captured log progress for the Capture program for VSE:
MSG partition,DATA=GETLSEQ
where partition represents the partition that is running Capture for VSE.
Be sure to read the following sections before reading the sections on operating
the Capture and Apply programs:
v “User ID requirements for running the Capture and Apply programs”
v “Setting up the Capture and Apply programs”
or:
DB2 UPDATE DATABASE CONFIGURATION FOR database_alias USING USEREXIT ON
DB2 BACKUP DATABASE database_alias
Because the Apply program uses static SQL calls for the control tables,
the Apply bind process searches for the control tables at each server
that the Apply program is bound to, regardless of whether these
control tables are used at a server.
These commands create a list of packages, the names of which are in the
files APPLYCS.LST and APPLYUR.LST.
Setting up end-user authentication at the source server
For end-user authentication to occur at the source server, you must provide a
password file with an AUTH=SERVER scheme. The Apply program uses this
file when connecting to the source server. Give read access to only the user ID
that will run the Apply program.
The password file name is applyqual.PWD, where applyqual is a
case-sensitive string that must match the case and value of the Apply
qualifier (APPLY_QUAL) in the subscription set table exactly.
This naming convention is the same as the log file name (applyqual.app) and
the spill file name (applyqual.nnn), but with a file extension of .PWD.
The password file must:
v Reside in the directory from which you will start the Apply program.
v Include no blank lines or comment lines. Add only the server-name, user
ID, and password information. This information enables you to use
different passwords or the same password at each server.
v Have one or more records using the following format:
SERVER=server_name USER=userid PWD=password
Where:
server_name
The source, target, or control server, exactly as it appears in the
subscription set table.
Chapter 13. Capture and Apply for Windows and OS/2 265
userid The user ID that you plan to use to administer that particular
server. On Windows, this value is case-sensitive.
password
The password that is associated with the userid. On Windows, this
value is case-sensitive.
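As a sketch, a password file with the format described above can be generated like this. The server names, user IDs, and passwords are placeholders; the chmod restricts read access to the user ID that will run the Apply program:

```shell
# Sketch: build an AUTH=SERVER password file for a hypothetical Apply
# qualifier MYQUAL; entries follow SERVER=... USER=... PWD=... format.
APPLY_QUAL=MYQUAL
PWDFILE="${APPLY_QUAL}.PWD"
cat > "$PWDFILE" <<'EOF'
SERVER=SRCDB USER=repluser PWD=secret1
SERVER=TGTDB USER=repluser PWD=secret2
EOF
# Give read access only to the user ID that runs the Apply program
chmod 600 "$PWDFILE"
```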
For more information about authentication and security, refer to the IBM DB2
Universal Database Administration Guide.
Setting up the NT Service Control Manager
You can operate the Capture and Apply programs for Windows by using the
DB2 command processor or by using the NT Service Control Manager (SCM).
The SCM enables you to automatically start the Capture and Apply programs
as services from the NT Control Panel.
If you want to operate Capture and Apply as services, you must install the
replication service manually (installation is not automatic). The following
steps explain how to install the replication service and set it up as an NT
service.
In this section, x:\ refers to the drive and directory containing executable
programs. These programs are usually located in the \sqllib\bin directory.
a. Create an ASCII file containing one line for each program that you
want to run as a service, in the following format:
db_name x:\program parameters
where db_name specifies the name of the source database for the
Capture program and the name of the control database for the Apply
program, x:\ is the location of the programs, and parameters specifies
one or more invocation parameters (such as the Apply qualifier).
To use the Capture program and Apply program trace facilities, specify
the invocation parameters in the file. For example:
DBNAME1 C:\SQLLIB\BIN\ASNCCP COLD TRACE<CRLF>
DBNAME2 C:\SQLLIB\BIN\ASNAPPLY APPLYQUAL DBNAME2 TRCFLOW TRCFILE<CRLF>
Do not specify an output file name for traces. These will be written to
default locations, with default file names, as follows:
v For the Capture program:
x:\instancenamedbname.trc
v For the Apply program:
x:\APPLYtimestamp.trc
b. Save the file to the following location:
x:\ntserv.asn
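The two example entries above can be written with explicit <CRLF> line endings, for instance (the database names, paths, and qualifier are the examples from the text; on Windows the file would be saved as x:\ntserv.asn):

```shell
# Sketch: generate the ASCII service-definition file read by ASNSERV.
# printf '\r\n' emits the required CRLF at the end of each line.
printf 'DBNAME1 C:\\SQLLIB\\BIN\\ASNCCP COLD TRACE\r\n'  > ntserv.asn
printf 'DBNAME2 C:\\SQLLIB\\BIN\\ASNAPPLY APPLYQUAL DBNAME2 TRCFLOW TRCFILE\r\n' >> ntserv.asn
```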
The Replication Services program stores all messages in x:\asnserv.log. If you
encounter any problems, check this log file for error messages.
Important: After you start the service, the Capture and Apply programs run
independently of ASNSERV. Therefore, stopping ASNSERV does not stop the
Capture and Apply programs. Use the ASNCMD STOP command in a
command window to stop the Capture program. Use the ASNASTOP
command in a command window to stop the Apply program.
To remove Replication Services from the NT Control Panel, run the
ASNREMV program.
This section explains how to perform the following Capture program tasks:
v “Starting Capture for Windows and OS/2” on page 269
v “Scheduling Capture for Windows and OS/2” on page 271
v “Setting environment variables for Capture for Windows and OS/2”
v “Stopping Capture for Windows and OS/2” on page 271
v “Suspending Capture for Windows and OS/2” on page 272
v “Resuming Capture for Windows and OS/2” on page 273
v “Reinitializing Capture for Windows and OS/2” on page 273
v “Pruning the change data and unit-of-work tables” on page 274
v “Displaying captured log progress” on page 274
This section also lists restrictions for running the Capture program.
Restrictions for running the Capture program
Some actions cause the Capture program to terminate while it is running. Stop
the Capture program before you take any of the following actions:
v Remove an existing replication source.
v Drop a replication source table.
v Make changes that affect the structure of source tables, such as changes
resulting from data definition language or utilities. Structural changes can
compromise the data integrity of the copies. (ALTER ADDs of new columns
are an exception.)
The Capture program cannot capture any changes made by DB2 utilities,
because the utilities do not log changes that they make.
Setting environment variables for Capture for Windows and OS/2
You must set two environment variables before you start the Capture
program. These variables must also be set when you use any of the following
functions:
v Stop the Capture program
v Suspend the Capture program
v Resume the Capture program
v Reinitialize the Capture program
Additional Capture invocation parameters: NOTRCTBL, AUTOSTOP,
LOGREUSE, LOGSTDOUT, and ALLCHG | CHGONLY.
For OS/2: Use the Alarms program in the OS/2 Productivity set to start
Capture for OS/2 at a specific time.
Stopping Capture for Windows and OS/2
Use the STOP command or a key combination to stop the Capture program
in an orderly way and commit the log records that it processed up to that
point.
For Windows: If you started the Capture program as an NT service, the
Capture program runs independently of ASNSERV. By selecting Replication
from the NT Services window and clicking Stop, you can stop ASNSERV but
not the Capture program. Use the ASNCMD STOP command in a command
window to stop the Capture program.
For Windows and OS/2: Before you stop the Capture program, ensure that
the environment variables are set (see “Setting environment variables for
Capture for Windows and OS/2” on page 268). To stop the Capture program,
enter the command:
ASNCMD STOP
Before you suspend the Capture program, ensure that the environment
variables are set (see “Setting environment variables for Capture for Windows
and OS/2” on page 268). To suspend the Capture program, enter the
command:
ASNCMD SUSPEND
Before you resume the Capture program, ensure that the environment
variables are set (see “Setting environment variables for Capture for Windows
and OS/2” on page 268). To resume the Capture program, enter the command:
ASNCMD RESUME
REINIT also rereads the tuning parameters table for any changes made to the
tuning parameters.
Before you reinitialize the Capture program, ensure that the environment
variables are set (see “Setting environment variables for Capture for Windows
and OS/2” on page 268). To reinitialize the Capture program, enter the
command:
ASNCMD REINIT
table while the Capture program is running. Instead, stop the Capture
program and restart it using the WARM or WARMNS option.
Pruning the change data and unit-of-work tables
Use the PRUNE command to start pruning the CD and UOW tables.
Before you prune the change data and unit-of-work tables, ensure that the
environment variables are set (see “Setting environment variables for Capture
for Windows and OS/2” on page 268). To begin pruning tables, enter the
command:
ASNCMD PRUNE
The Capture program issues the message ASN0124I when the command is
successfully queued.
If you stop or suspend the Capture program during pruning, you must enter
the PRUNE command again to resume pruning. Pruning does not resume
after you enter the RESUME command.
Displaying captured log progress
Use the GETLSEQ command to get the timestamp and current log sequence
number. You can use this information to determine how far the Capture
program has read the DB2 log.
Before you display captured log progress, ensure that the environment
variables are set (see “Setting environment variables for Capture for Windows
and OS/2” on page 268). To display captured log progress, enter the
command:
ASNCMD GETLSEQ
Tip: You can use the DB2 UDB Find Log Sequence Number command
(DB2FLSN) to identify the physical log file associated with the log
sequence number. You can use this number to delete or archive log files
no longer needed by the Capture program. For more information, see the
IBM DB2 Universal Database Command Reference.
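A hedged sketch of the tip above: pass the log sequence number reported by GETLSEQ to db2flsn to find the physical log file. The LSN value and its input format are placeholders; check the Command Reference for the exact format on your platform. The call is guarded so the sketch is harmless without DB2 installed:

```shell
# Sketch: map an LSN to a physical log file name with db2flsn.
LSN=000000BF0030   # made-up placeholder, not real GETLSEQ output
OUT=$(db2flsn "$LSN" 2>/dev/null || echo "db2flsn not available here")
echo "$OUT"
```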
33. Usually the Apply package is created automatically for you; however, if you configure the Apply program
manually, you must explicitly bind the Apply package.
1. Select Replication from the NT Services window.
2. Click the Start push button. The Apply program starts according to the
ASCII file information you provided.
You can also start the replication service by typing STRTSERV on the
Windows NT command line.
Additional Apply invocation parameters:
   [TRCFILE]
   [NOTIFY | NONOTIFY]
   [SLEEP | NOSLEEP]
   [DELAY(n)] [ERRWAIT(n)]
   [COPYONCE]
   [LOGREUSE] [LOGSTDOUT] [TRLREUSE]
Table 31. ASNAPPLY invocation parameter definitions on Windows and OS/2
platforms (continued)
Parameter Definition
ERRWAIT(n) Specifies the number of seconds that the Apply program
waits before retrying after it encounters an error
condition, where n is the number of seconds. Do not
specify too small a number because the Apply program
will run almost continuously and generate many rows in
the Apply trail table. The default wait period is 300
seconds (5 minutes).
COPYONCE The Apply program executes one copy cycle for each
subscription set that is eligible at the time the Apply
program is invoked, and then the Apply program
terminates. An eligible subscription set is one for which:
v ACTIVATE > 0
v REFRESH_TIMING = R or B, or REFRESH_TIMING = E and the specified
event has occurred.
MAX_SYNCH_MINUTES and END_OF_PERIOD are
honored if specified.
LOGREUSE The Apply program reuses the log file (*.app) by first
deleting it and then re-creating it when the Apply
program is restarted. If you do not specify this option,
the Apply program appends messages to the log file,
even after the Apply program is restarted.
LOGSTDOUT The Apply program sends all messages to both the
standard output (stdout) and the log file.
TRLREUSE The Apply program empties the Apply trail table when
the Apply program is started.
For OS/2: Use the Alarms program in the OS/2 Productivity set to start the
Apply program at a specific time.
Stopping Apply for Windows and OS/2
Use the ASNASTOP command or a key combination to stop the Apply
program in an orderly way.
The optional Capture and Apply parameters for this command are optimized
for the satellite environment. For details on using the ASNSAT command in a
satellite environment, see Administering Satellites Guide and Reference. You can
override the optional parameters for the Capture and Apply programs if you
want to use the command in a non-satellite environment.
Table 32. ASNSAT options and invocation parameter definitions (only for Windows
32-bit operating systems)
Option Definition
-q apply_qual Specifies the Apply qualifier that the Apply program instance uses to
identify the subscriptions to be served. The Apply qualifier is case
sensitive and must match the value of the APPLY_QUAL column in
the subscription set table. This must be the first parameter.
-n cntl_serv Specifies the name of the server where the replication control tables
will reside. If you do not specify this parameter, the default is the
default database or the value of DB2DBDFT.
-t trgt_serv Specifies the name of the server where the target tables will reside.
-c Specifies the optional invocation parameters for the Capture program.
If you do not specify this option, the ASNSAT command uses the
following defaults: warm, prune, notrace, logreuse, logstdout, trcfile,
notrctbl, and autostop.
-a Specifies the optional invocation parameters for the Apply program. If
you do not specify this option, the ASNSAT command uses the
following defaults: noinam, notrc, nonotify, logreuse, logstdout, trcfile,
trlreuse, copyonce, loadx.
For more information about Capture and Apply parameters, see Table 30 on
page 270 and Table 31 on page 276 respectively.
DataPropagator for Microsoft Jet is a single executable that contains both the
Capture and Apply capability and a portion of the administration facility.
DataPropagator for Microsoft Jet runs on a client machine under Microsoft
Windows NT or Windows 95, and reaches source databases via DB2 Client
Application Enabler (CAE). DataPropagator for Microsoft Jet is packaged as
part of DB2 DataJoiner Version 2 Release 2.1.1 (although you do not need to
install a DB2 DataJoiner server to use this software) but also works with DB2
Universal Database (DB2 UDB), DB2 Common Server V2, and DB2 Connect.
DataPropagator for Microsoft Jet requires the DataJoiner Replication
Administration tool (DJRA) at the control point.
Figure 22. Microsoft Jet database replication. DataPropagator for Microsoft Jet extends IBM’s data
replication solution by supporting Microsoft Access and Microsoft Jet databases.
DataPropagator for Microsoft Jet can create the target database, tables, and
additional columns, and drop tables and old columns based on the current
state of the control information in the server. To deploy a Microsoft Jet
application, the application, database, and replication software must be
installed before you distribute the laptop computers. However, the Microsoft
Jet database does not need to be created in advance.
You can define or redefine replication source and subscription definitions for a
Microsoft Jet database at any time, using DJRA, before or after you distribute
the laptops for asynchronous processing by DataPropagator for Microsoft Jet.
If you have problems with your laptop, you can rebuild your Microsoft Jet
database, tables, and contents by deleting the Jet database and
resynchronizing using DataPropagator for Microsoft Jet. DataPropagator for
Microsoft Jet can automatically rebuild your database.
For more information about usage scenarios involving mobile replication, see
“Occasionally connected” on page 22.
Data integrity considerations
Within a network of DB2 databases, DB2 DataPropagator supports an
update-anywhere model that is able to detect transaction conflicts.
DataPropagator for Microsoft Jet supports an update-anywhere model, but
with weaker row-conflict detection (similar to the standard Microsoft Jet
model). If you choose to use DataPropagator for Microsoft Jet, you should be
both familiar and comfortable with the standard Microsoft Jet replication
model.
2. Configure the DB2 ODBC driver by using the DB2 Client Configuration
Assistant window.
3. Install one of the following:
v Microsoft Data Access Components (MDAC)
v Microsoft Access
4. Install the DAO component (downloadable from
http://www.nesbitt.com/bctech.html or available on the Microsoft Visual
C++ Version 5 CD-ROM).
5. Install DataPropagator for Microsoft Jet (during DB2 DataJoiner
installation).
v During installation you will be prompted to set the ASNJETPATH
environment variable to specify the directory where DataPropagator for
Microsoft Jet can create the log, trace, and password files. The file names
are:
– Apply_qual.LOG. Created by DataPropagator for Microsoft Jet.
– Apply_qual.TRC. Created by DataPropagator for Microsoft Jet.
– Apply_qual.PWD. Created by DataPropagator for Microsoft Jet.
DataPropagator for Microsoft Jet also creates the target database in this
directory if it does not already exist.
v Define the Microsoft Jet database source in the ODBC Data Source
Administration window, if it is not already defined.
To start DataPropagator for Microsoft Jet, use the ASNJET command. Enter
the ASNJET command from a command prompt.
ASNJET apply_qual ctrl_srvr
   [INAMSG | NOINAMSG]
   [NOTRC | TRCERR | TRCFLOW]
   [NOTIFY | NONOTIFY]
   [MOBILE | NOMOBILE]
Table 33. ASNJET command parameter definitions for DataPropagator for Microsoft
Jet (continued)
Parameter Definition
INAMSG Specifies that DataPropagator for Microsoft Jet issue an
inactivity message to the log whenever DataPropagator for
Microsoft Jet is going to sleep until the next copy cycle. This
option is ignored if you specify the MOBILE option.
NOINAMSG (default) Specifies that no inactivity message is issued.
NOTRC (default) Specifies that no trace file is created.
TRCERR Specifies that a trace file of minimal information is created.
TRCFLOW Specifies that a trace file of extensive information is created.
NOTIFY Specifies that DataPropagator for Microsoft Jet call the
ASNJDONE exit routine at the completion of each
subscription set, regardless of success or failure.
NONOTIFY (default) Specifies that DataPropagator for Microsoft Jet does not call
the ASNJDONE exit routine.
MOBILE Specifies that DataPropagator for Microsoft Jet run in mobile
mode (copy all active subscriptions only once, and then
terminate).
NOMOBILE (default) Specifies that DataPropagator for Microsoft Jet run
continuously until it is stopped with the ASNJSTOP
command.
Use the following command to stop DataPropagator for Microsoft Jet. Enter
the ASNJSTOP command from a command prompt.
ASNJSTOP apply_qual
Where apply_qual is the Apply qualifier that you used when you started
DataPropagator for Microsoft Jet with the ASNJET command.
You can also use one of the following key combinations from the window
where the program is running to stop DataPropagator for Microsoft Jet:
v Ctrl+C
v Ctrl+Break
For error message information, see “Chapter 15. Capture and Apply
messages” on page 353. For more information about troubleshooting, see
“Troubleshooting” on page 166.
Returning control to users with the ASNJDONE exit routine
If you specify the NOTIFY parameter when you start DataPropagator for
Microsoft Jet with the ASNJET command, DataPropagator for Microsoft Jet
calls the exit routine ASNJDONE at the completion of each subscription set,
regardless of success or failure. ASNJDONE.SMP is a sample program
shipped with the product. You can modify it to meet the requirements of your
installation. For example, the exit routine can examine the error table to
discover rejected updates and initiate further actions, such as issuing a
message or generating an alert.
See the prologue section in the sample exit routine ASNJDONE.SMP for
instructions on how to modify the sample.
Parameters
The parameters that DataPropagator for Microsoft Jet passes to ASNJDONE
are:
Control server
The control server alias.
Set name
The name of the set just processed.
Apply qualifier
The Apply qualifier of this DataPropagator for Microsoft Jet instance.
Trace option
The trace option specified when DataPropagator for Microsoft Jet was
started.
Status value
Set to a value of 0 for success, and -1 for failure.
Error recovery
If the status value that DataPropagator for Microsoft Jet passes to ASNJDONE
is -1, conflicts or errors might have been recorded. You can set the exit routine
to examine the error codes and messages in the error message table. (There
can be more than one row in the error message table.)
When DataPropagator for Microsoft Jet detects an update conflict between the
RDBMS source and row-replica target table, it saves additional information for
the ASNJDONE exit routine as follows:
v Inserts a row into the conflict table. (This is not the same conflict table that
Microsoft Jet might detect between the Design Master and its Microsoft Jet
Replicas.) The conflict table contains the row data that conflicted with the
RDBMS update.
v Places the names of the conflict tables in the side information table. Each
Microsoft Jet target table has its own conflict table. If a conflict is detected,
the row-replica update loses and the source server update wins.
The exit routine can use this information to take remedial action. When the
exit routine returns, the status is still -1 in the subscription set table.
DataPropagator for Microsoft Jet does not expect any output or return codes
from the exit routine.
290 DB2 Replication Guide and Reference
Part 4. Reference information
This part of the book contains the following chapters:
“Chapter 14. Table structures” on page 293 describes the source, control, and
target table structures.
“Chapter 15. Capture and Apply messages” on page 353 lists all of the
messages issued by all the Capture and Apply programs except those on the
AS/400 platform.
“Chapter 16. Replication messages for AS/400” on page 381 lists all of the
messages for data replication on the AS/400 platform.
Table 34 on page 296, Table 35 on page 298, and Table 36 on page 299 provide
brief descriptions of the tables listed in this chapter. When you become
familiar with the tables, you can use Figure 23 on page 294 and Figure 24 on
page 295 as a quick reference for the source and control server tables, table
keys, and their parameters.
Important: You must not use SQL to update some of the control tables (see
particular table descriptions for details). Altering control tables
inappropriately can cause problems such as unexpected results, loss of data,
and reduced replication performance.
Tables at a glance
Figure 23 on page 294 and Figure 24 on page 295 show the tables at the source
and control servers, table keys, and their parameters.
Capture enqueue table IBMSNAP_CCPENQ
(VM and VSE specific) Used to ensure that only one Capture program
is running per database.
Change data table CD 321
Row-replica table
(Microsoft Jet specific) A type of Microsoft Jet target table that can be
updated.
User copy table userid.target_table 341
Error message table
(Microsoft Jet specific) Contains error codes and error messages. There
can be more than one row in this table. Depending on the error code,
additional information will be available in the error information,
error-side-information, and conflict tables.
Error-side-information table IBMSNAP_SIDE_INFO 349
(Microsoft Jet specific) Contains the names of the conflict tables.
Key string table IBMSNAP_GUID_KEY 349
(Microsoft Jet specific) Maps the Microsoft Jet table identifiers and row
identifiers to primary key values when the following actions occur:
v Rows are deleted from Microsoft Jet database tables.
v Deletes are recorded in MSysTombstone with s_Generation,
TableGUID and s_GUID (row) identifiers, but without primary key
details.
v The primary key values are needed to propagate Microsoft Jet
database deletes to an RDBMS.
Synchronization generations table IBMSNAP_S_GENERATION 350
(Microsoft Jet specific) Prevents cyclic updates from propagating back
to the RDBMS from a Microsoft Jet database.
Register table
ASN.IBMSNAP_REGISTER
The register table contains information about replication sources, such as the
names of the replication source tables, their attributes, and their staging table
names. A row is automatically inserted into this table every time a new
replication source is defined at this server. You must update this table to
maintain an external CCD table.
The register table is the place to look if you need to know how you defined
your replication sources.
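For example, a quick way to review your registered sources is to query the register table. This is a sketch: the column names follow the register table layout, and the db2 call is guarded so the sketch does nothing harmful where the DB2 command line processor is not installed:

```shell
# Sketch: list registered replication sources and their staging (CD) tables.
SQL='SELECT SOURCE_OWNER, SOURCE_TABLE, CD_OWNER, CD_TABLE FROM ASN.IBMSNAP_REGISTER'
OUT=$(db2 "$SQL" 2>/dev/null || echo "db2 CLP not available; query shown above")
echo "$OUT"
```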
Register extension table
ASN.IBMSNAP_REG_EXT
Use this table to supplement the information in the register table and to
track where and how you defined your replication sources on an AS/400
server.
Pruning control table
ASN.IBMSNAP_PRUNCNTL
The pruning control table coordinates the pruning of the change data (CD)
tables, which have the potential for unlimited growth. For each new
subscription set, the Apply program first updates the pruning control table
and then it begins a full refresh of every member of the new subscription set.
After the full refresh, the Capture program begins capturing changes from the
replication source. When the Capture program begins to capture data, it
updates the pruning control table to notify the Apply program. During each
Apply cycle, the Apply program updates the pruning control table to indicate
the last change applied. The Capture program then uses the information to
prune the CD and UOW tables.
The rows in the pruning control table are not deleted during a cold start of
the Capture program. The administration tools use the values from the
pruning control table to provide a list of copies defined as source tables and
views.
There is one pruning control table at each source server and one row in the
pruning control table for each subscription-set member.
You can prune the CD and UOW tables manually by issuing the PRUNE
command, or have pruning performed automatically by updating the
PRUNE_INTERVAL column in the tuning parameters table. See “Tuning
parameters table” on page 312 for more information about using the tuning
parameters table.
Use this table to monitor the pruning status of your CD and UOW tables.
Tuning parameters table
ASN.IBMSNAP_CCPPARMS
This table contains parameters that you can modify to control the performance
of the Capture program. You can set these parameters to modify the length of
time that you retain data in the CD table, the amount of time that the Capture
program is allowed to lag in processing log records, how often data will be
committed, and how often your CD and UOW tables are pruned. These
modifications must be done manually because there are no DB2
DataPropagator processes that update this table after it is created. The
Capture program can read your modifications only during its start processing;
therefore, you should stop and start the Capture program if you want your
modifications to take effect.
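A sketch of such a manual modification follows. The PRUNE_INTERVAL value is a placeholder; remember to stop and start the Capture program afterward so the change is read during start processing. The db2 call is guarded for machines without the DB2 command line processor:

```shell
# Sketch: change the prune interval in the tuning parameters table.
SQL='UPDATE ASN.IBMSNAP_CCPPARMS SET PRUNE_INTERVAL = 60'
OUT=$(db2 "$SQL" 2>/dev/null || echo "db2 CLP not available; SQL shown above")
echo "$OUT"
```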
Capture enqueue table
ASN.IBMSNAP_CCPENQ
The Capture enqueue table is used in the VM and VSE environments only.
This table is used to ensure that there is only one Capture program running
per database.
Table 41 provides a list and a brief description of the Capture enqueue table
column.
Table 41. Capture enqueue table column
Column name Description
LOCKNAME Unique name of the resource for this database.
Warm start table
ASN.IBMSNAP_WARM_START
This table is created in the same database as the register table and contains
information that enables the Capture program to restart from the last log or
journal record read. Use the information in this table to avoid a full refresh of
your system.
You do not have to recover this table if it is damaged. Simply create an empty
table before warm starting the Capture program.
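Re-creating the empty table could be sketched as below. The column data types are assumptions inferred from the column descriptions (AUTHTKN is 12 characters, AUTHID is 18), not the product DDL; check the layout for your platform before using. The db2 call is guarded:

```shell
# Hedged sketch: re-create an empty warm start table; data types are guesses.
SQL='CREATE TABLE ASN.IBMSNAP_WARM_START (
       SEQ      CHAR(10) FOR BIT DATA,
       AUTHTKN  CHAR(12),
       AUTHID   CHAR(18),
       CAPTURED CHAR(1))'
OUT=$(db2 "$SQL" 2>/dev/null || echo "db2 CLP not available; DDL shown above")
echo "$OUT"
```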
The following three tables show platform-specific layouts of the warm start
table:
v Table 42 shows the layout for all platforms other than VM/VSE and
AS/400.
v Table 43 shows the VM/VSE layout.
v Table 44 on page 315 shows the AS/400 layout.
Table 42. Columns in the warm start table
Column name Description
SEQ The last captured sequence number from the log or journal
record. Used for quickly restarting following a shutdown or
failure.
AUTHTKN The DB2 token for the unit of work associated with the
SEQ log or journal record. AUTHTKN length is 12
characters. If you supply a longer value, it is truncated.
AUTHID The DB2 authorization ID for the unit of work associated
with the SEQ log or journal record. AUTHID length is 18
characters. If you supply a longer value, it is truncated.
CAPTURED A flag indicating whether or not this unit of work was
captured.
Y This unit of work was captured.
N This unit of work was not captured.
Table 43. Columns in the warm start table for VM and VSE platforms
Column name Description
SEQ The last captured sequence number from the log or journal
record. Used for quickly restarting following a shutdown or
failure.
UOWID The unit-of-recovery ID from the log record header for this
unit of work.
AUTHID The DB2 authorization ID for the unit of work associated
with the SEQ log or journal record.
CAPTURED A flag indicating whether or not this unit of work was
captured.
Y This unit of work was captured.
N This unit of work was not captured.
For AS/400, the warm start table is used to determine the starting time of the
RCVJRNE (Receive Journal Entry) command. A row is inserted into the warm
start table for each journal that is used by a replication source or a group of
replication sources.
Table 44 provides a brief description of the columns in the warm start table
for the AS/400 platform.
Table 44. Columns in the warm start table for AS/400 platform
Column name Description
JRN_LIB The library name of the journal.
JRN_NAME The name of the journal used by a source table. An asterisk
followed by nine blanks in this column means that the
source table is not currently in a journal. Therefore, it is not
possible to capture data for this source table.
JRN_JOB_NUMBER The job number of the current job for a particular journal. If
the journal is not active, this column contains the job
number of the last job that was processed.
LOGMARKER The timestamp of the last processed journal entry.
UID A unique number that is used as a prefix for the contents of
the IBMSNAP_UOWID column located in the UOW table.
SEQNBR The sequence number of the last processed journal entry.
ASN.IBMSNAP_CRITSEC
You do not have to recover this table if it is damaged. Simply create an empty
table.
The prune lock table is used to serialize the access of staging tables during a
cold start or retention limit pruning. (Retention limit pruning is pruning after
the retention limit is reached. The default retention limit is 10 080 minutes,
which equals 7 days.) There are no rows in this table. The Capture and Apply
programs use this table as a logical lock to serialize their operations during
these critical phases. If a prune lock table does not exist, as on DB2 UDB
Version 5 servers, the critical section (ASN.IBMSNAP_CRITSEC) table is
locked instead. If a prune lock table does not exist, you can create one to
increase the concurrency of update-anywhere subscriptions.
You do not have to recover this table if it is damaged. Simply create an empty
table.
Trace table
ASN.IBMSNAP_TRACE
This table contains audit trail information for the Capture program.
Everything that is done by the Capture program is recorded in this table,
which makes it one of the best places to look if a problem with the Capture
program occurs. If you cold start the Capture program, all of the trace table’s
entries are deleted, so you might want to save a copy of this table before you
issue a cold start command.
The following two tables show platform-specific layouts of the trace table.
Table 46 on page 317 shows the layout for all platforms other than AS/400,
ASN.IBMSNAP_AUTHTKN
ASN.IBMSNAP_REG_SYNCH
Unit-of-work table
Important: Do not use SQL to update this table. Altering this table
inappropriately can cause unexpected results and loss of data.
ASN.IBMSNAP_UOW
For AS/400: Capture for AS/400 can start data capturing for only a subset of
the replication sources. Therefore, Capture for AS/400 does not
delete all the rows in the UOW table if you do a partial cold
start.
The Capture program requires that there is one UOW table for each source
server. The Capture program inserts one new row into this table for every log
or journal record that commits changes to replication sources.
For AS/400: There are some user programs that do not use commitment
control. In such cases, the Capture program arbitrarily inserts a
new UOW row after a number of rows are written to the CD
table. This artificial commitment boundary helps reduce the size
of the UOW table.
The Capture program also prunes the UOW table based on information
inserted into the pruning control table by the Apply program.
For AS/400: The UOW table is pruned by retention limits, not by the pruning
control table information.
Table 50 on page 320 provides a brief description of the columns in the UOW
table.
Change data table
Change data (CD) tables record all changes made to a replication source.
Committed, uncommitted, and incomplete changes are inserted as rows into
the CD table. The CD table works with the UOW table to provide commit
information. (See “Staging data” on page 82 for more information.) Pruning of
the CD table rows is coordinated by the pruning control table. (See “Pruning
control table” on page 309 for more information.)
CD tables are automatically created when you define a replication source. For
each replication source that is enabled for data capture, there is one CD table.
If you cold start the Capture program, all of the CD table’s entries are deleted.
Table 51 provides a list and a brief description of each of the columns in the
CD table.
Table 51. Columns in the CD table
Column name Description
IBMSNAP_UOWID Unit-of-work ID for an update. The Apply program uses
this column to join the CD table with the UOW table so
that only committed changes are replicated.
IBMSNAP_INTENTSEQ Log or journal record sequence number that uniquely
identifies a change. This value is globally ascending.
IBMSNAP_OPERATION Character value of I, U, or D, indicating an insert, update,
or delete record.
DATA1 User column from source table specified when the
replication source was defined.
AFTER-IMAGE User column from source table selected when defining a
replication source. This column will have the same name,
data type, and null attributes as the source column. The
after-image column also contains the equivalent source
table column value after the change is made.
BEFORE-IMAGE User column from source table selected when defining a
replication source. This column will have the same name,
data type, and null attributes as the source column. The
name is the source column prefixed with the
BEFORE_IMG_PREFIX value from the register table. This
column contains the equivalent source table column value
before the change was made.
The subscription events and Apply trail tables are used by the Apply program
to control and audit your data.
Subscription set table
Important: Do not use SQL to update this table. Altering this table
inappropriately can cause unexpected results and loss of data.
ASN.IBMSNAP_SUBS_SET
The subscription set table lists all of the subscription sets defined at the
control server and identifies the source and target server pairs that are
processed as a group. Rows are inserted into this table when you create your
subscription set definition.
SOURCE_SERVER The database name of the source server where the source
tables and views are defined.
SOURCE_ALIAS The DB2 Universal Database alias corresponding to the
source server named in the SOURCE_SERVER column.
TARGET_SERVER The database name of the server where the target tables
and views are defined.
TARGET_ALIAS The DB2 Universal Database alias corresponding to the
target server named in the TARGET_SERVER column.
LASTRUN The estimated time that the last subscription set began. The
Apply program sets the LASTRUN value each time a
subscription set is processed. It is the approximate time at
the control server that the Apply program begins
processing the subscription set.
REFRESH_TIMING Sets the timing between statement executions.
R The Apply program uses the value in
SLEEP_MINUTES to determine replication timing.
E The Apply program checks the time value in the
subscription events table to determine replication
timing. Before any replication (change capture or
full refresh) can begin, an event must occur.
B Indicates that a subscription set has both relative
and event timing specifications. Therefore, this
subscription set can be eligible for a refresh based
on either the timer or event timing criteria.
Subscription-targets-member table
Important: Do not use SQL to update this table. Altering this table
inappropriately can cause unexpected results and loss of data.
ASN.IBMSNAP_SUBS_MEMBR
This table or view contains information about the individual source and target
table pairs defined for a subscription set. Rows are automatically inserted into
this table when you create a subscription set member.
Use this table or view to identify a specific source and target table pair within
a subscription set.
ASN.IBMSNAP_SUBS_COLS
Rows are automatically inserted or deleted from this table when information
contained in one or more columns of a source and target table pair is
changed.
TARGET_NAME The name of the target table or view column. It does not
need to match the source column name.
ASN.IBMSNAP_SUBS_STMTS
This table contains the user-defined SQL statements or stored procedure calls
that will be executed before or after each subscription-set processing cycle.
Execute immediately (EI) statements or stored procedures can be executed at
the source or target server only. This table is populated when you define a
subscription set that uses SQL statements or stored procedure calls.
ASN.IBMSNAP_SUBS_TGTS
This table is necessary to identify when a member has been deleted from a
subscription set for a Microsoft Jet database target, so that the row-replica
table can be deleted from the Microsoft Jet database. The
row-replica-target-list table allows DataPropagator for Microsoft Jet to
maintain a list of known row-replica tables in a stable DB2 or DataJoiner
database. DataPropagator for Microsoft Jet uses this information during
schema analysis to determine if any row-replica tables should be deleted
because the corresponding subscription-set member was dropped since the
last synchronization.
ASN.IBMSNAP_SCHEMA_CHG
ASN.IBMSNAP_SUBS_EVENT
This table contains information about the event triggers that are copied in a
subscription set. The subscription events table contains event names and
timestamps associated with the event names. You insert a row into this table
when you create a new event to start an Apply process. See “Event timing” on
page 124.
The Apply trail table contains audit trail information for the Apply program.
This table records a history of updates performed against subscriptions. It is a
repository of diagnostic and performance statistics. The Apply trail table is
one of the best places to look if a problem occurs with the Apply program.
Because this table is not automatically pruned, it is up to you to do so.
Table 59 provides a brief description of the columns in the Apply trail table.
Table 59. Columns in the Apply trail table
Column name Description
APPLY_QUAL Uniquely identifies a group of subscription sets that are
processed by the same Apply process. This user-specified
value must be unique for the control server where the
subscription set table is located. For update-anywhere, this
value must be unique at the control server and at the
source server. This value is case-sensitive. You must specify
this value when you define a subscription set.
LASTRUN The estimated time that the last subscription began. The
Apply program sets the LASTRUN value each time a
subscription set is processed. It is the approximate time at
the control server that the Apply program begins
processing the subscription set.
LASTSUCCESS The control server timestamp for the beginning of the last
successful processing of a subscription set.
SYNCHPOINT The Apply program uses the SYNCHPOINT value from the
global row of the register table at the source server if
GLOBAL_RECORD is Y. If data blocking is specified in the
subscription definition, then the SYNCHPOINT value is the
log or journal record sequence number of the last change
applied during the Apply process.
SYNCHTIME The Capture program or an external program, such as IMS
DataPropagator, updates this timestamp whether there are
changes to be processed or not.
IBMSNAP_APPLY_JOB
Table 60 provides a brief description of the columns in the Apply job table.
Table 60. Columns in the Apply job table
Column name Description
APPLY_QUAL A unique identifier for a group of subscription sets. This
value is supplied by the user when defining a subscription
set. Each instance of the Apply program is started with an
APPLY_QUAL. This value is used during update-anywhere
replication to prevent circular replication of the changes
made by the Apply program. See “Subscription set table” on page 323 for
more details.
CONTROL_SERVER Name of the database where the control tables and view are
defined.
USER_NAME Name of the user who started a new instance of the Apply
program.
JOB_NAME The fully qualified name of the job that wrote this trace
entry:
v position 1-10: APPLY_QUAL, truncated to 10 characters if
necessary
v position 11-20: The ID of the user who started the Apply
program
v position 21-26: The job number
JOB_NUMBER The job number of the current job for a particular journal. If
the journal is not active, this column contains the job
number of the last job that was processed.
userid.target_table
The user copy table is identical to the point-in-time target table with the
exception of the IBMSNAP_LOGMARKER column, which is not included in
the user copy table.
Except for subsetting and data enhancement, a user copy table reflects a valid
state of the source table, but not necessarily the most current state. References
to user copy tables (or any other type of target table) reduce the risk of
contention problems that result from a high volume of direct access to the
source tables. Accessing local user copy tables is much faster than using the
network to access remote source tables for each query.
Table 61 provides a brief description of the columns in the user copy table.
Table 61. Columns in the user copy table
Column name Description
user key columns The primary key of the target table, although it is not
necessarily a component of the primary key of the source
table. You can use predicates to prevent a NULL value from
being assigned to the key fields of any copies.
user nonkey columns The nonkey data columns from the source table or view.
The columns from the source table do not need to match
these columns, but the data types must match.
user computed columns User-defined columns that are derived from SQL
expressions. You can use computed columns with SQL
functions to convert source data types to different target
data types.
Point-in-time table
Important: If you use SQL to update this table, you run the risk of losing
your updates when a full refresh is performed by the Apply program.
userid.target_table
The point-in-time table is similar to the user copy table, but contains an
additional system column (IBMSNAP_LOGMARKER) containing the
approximate timestamp of when the particular row was inserted or updated
at the source system. Otherwise, a point-in-time table is much like a past
image of the source table. Point-in-time copies reflect a valid state of the
source table, but not necessarily the most current state.
Consistent-change-data table
This table contains information that you can update by using SQL.
userid.target_table
CCD tables are staging tables that contain committed change data (for details,
see “Staging data” on page 82). Maintaining CCD tables requires updating the
CCD_OLD_SYNCHPOINT and SYNCHPOINT columns of the register table.
Replica table
This table contains information that you can update by using SQL.
userid.target_table
The replica must have the same primary key as the source table. Because of
these similarities, the replica table can be used as a source table for further
subscription sets, making the target server a source server as well. Converting
a target table into a source table is done automatically when you define a
replica target type and specify the CHANGE DATA CAPTURE attribute. See
“Defining subscription sets for update-anywhere replication” on page 114 for
more information.
Base aggregate tables are target tables that contain data aggregated from a
source table. Functions are performed on data located at the source table, and
the result of the function is inserted as a row in the base aggregate table.
userid.target_table
Table 66 on page 347 provides a brief description of the columns in the change
aggregate table.
userid.target_table
IBMSNAP_<target name>_CONFLICT
This table is a conflict table for tracking synchronization conflicts and errors.
This Microsoft Jet database control table mimics Microsoft’s conflict tables.
This table contains the conflict loser’s row data. The columns are the same as
the corresponding row-replica table. This table can have more than one row.
The conflict table is created along with the row-replica table in the Microsoft
Jet database and dropped when the row-replica table is dropped.
IBMSNAP_ERROR_INFO
This table identifies the row-replica table and row that caused the error. This
table can have more than one row. The error information table is created
along with the Microsoft Jet database and never dropped.
IBMSNAP_ERROR_MESSAGE
This table identifies the nature of an error. It contains the error code and error
message. This table can have more than one row. The error messages table is
created along with the Microsoft Jet database and never dropped.
IBMSNAP_SIDE_INFO
This table is a conflict table for tracking synchronization conflicts and errors.
This Microsoft Jet database control table mimics Microsoft’s conflict tables.
This table contains the names of the conflict tables created by DataPropagator
for Microsoft Jet.
IBMSNAP_GUID_KEY
This table maps the Microsoft Jet table names and row identifiers to primary
key values when the following changes occur:
v Rows are deleted from Microsoft Jet database tables.
v Deletes are recorded in MSysTombstone with s_Generation, TableGUID, and
s_GUID (row) identifiers, but without primary key details.
v The primary key values are needed to replicate Microsoft Jet database
deletes to an RDBMS.
When DataPropagator for Microsoft Jet replicates delete actions to another
Microsoft Jet database, only the internal row identifier is sent. To replicate
delete actions outside of the Microsoft Jet environment, DataPropagator for
Microsoft Jet needs to replicate a searched delete, with predicates referencing
primary key values. The key string table allows DataPropagator for Microsoft
Jet to maintain the key values needed to replicate a delete to an RDBMS, even
after the row is physically deleted from the row-replica table.
Table 72 provides a brief description of the columns in the key string table.
Table 72. Columns in the key string table
Column name Description
RowReplicaname Identifies the row-replica table where the row was inserted.
s_GUID Identifies the row in the specific row-replica table.
key_string The string of “AND-ed” DB2 SQL predicates identifying the
key columns and their row values, with character constants
delimited by single quotation marks. The column names are
taken from the row-replica definition and can contain
uppercase letters, lowercase letters or both. The constant
values are taken from the rows themselves and the string
values can contain uppercase letters, lowercase letters,
numeric characters, or any combination of the three.
Microsoft Jet database supports ASCII, so the string
constants can contain single- or double-byte characters. For
example:
COL1=(character) AND COL2=(character)
IBMSNAP_S_GENERATION
This table is used to prevent cyclic updates from replicating back to the
RDBMS from a Microsoft Jet database. When DB2 is the target, this function is
accomplished in a different way, using the APPLY_QUAL column of the
critical section table, which results in the Capture Program posting to the
APPLY_QUAL column of the UOW table.
Due to the risk of partial failures during a DataPropagator for Microsoft Jet
synchronization cycle, and because the WHOS_ON_FIRST = S flow is handled
before the WHOS_ON_FIRST = F flow, multiple RDBMS-to-Jet generations can
be posted before any Microsoft Jet database changes replicate to the RDBMS.
In such a case, there is the possibility that a list of s_GENERATION values
will need to be skipped over when determining which s_GENERATION of
changes needs to be replicated to the RDBMS.
Unless otherwise stated, all error codes described in error messages are
internal error codes used by IBM Software Support. Also, unless otherwise
stated, error messages are issued with a return code of 8.
You can also obtain explanations for messages by typing the following
command at a DB2 command prompt:
db2 message_number
When you start DB2 with the cold start option, you must also start the
Capture program with the cold start option.
Note: For SQL errors, see the DB2 Message Reference for your platform.
For Capture program for OS/390, an MVS system dump is generated for this
message that is contained in MVS dump data set SYS1.DUMP.
For DB2 DataPropagator, the “<return_code>” value is for the Asynchronous
Read Log API.
For Capture for VSE, the “<return code>” is for the VSE/VSAM GET macro.
For Capture for VM, the “<return code>” is for Diagnose X’A4’.
User Response: See the DB2 Codes section in the messages and codes
publication of the DB2 database manager on your platform for the
appropriate reason code. For Capture program for OS/390, see the
Instrumentation Facility Interface (IFI) section in

For Capture program for OS/390, an MVS system dump is generated for this
message that is contained in MVS dump data set SYS1.DUMP.
User Response: Contact IBM Software Support.

ASN0007E The Capture program encountered an unexpected log error of
unimplemented data type. The routine name is “<routine>”.
Explanation: An unexpected log error not reported by either:
v the Instrumentation Facility Interface (IFI) for Capture program for OS/390, or
v the Asynchronous Read Log API for the Capture program
The Capture program started but could not find source tables that were:
v Enabled with the DATA CAPTURE CHANGES option of the CREATE or
ALTER TABLE statement.
v Defined as replication sources with the Data capture is full-refresh only
check box cleared on the Define as Source window.
User Response: Ensure that the register table is defined properly. For more
information about the register table, see “Register table” on page 301. Verify
that replication sources have been defined.

ASN0017E The Capture program encountered a severe internal error and
could not issue the correct error message. The routine name is
“<routine>”; the return code is “<return_code>”; the error
message number is “<error_message_num>”.
Explanation: The Capture program could not retrieve the message from the
Capture program error messages file.
User Response: Edit the Capture program error messages file. Locate the
ASNnnnn error message number to determine which error

1. Stop the Capture program.
2. Delete the replication source.
3. Redefine the replication source.
4. Start the Capture program.

ASN0019E The Capture program libraries are not authorized for the
Authorized Program Facility (APF).
Explanation: The Capture program cannot process the STOP, SUSPEND,
RESUME, or REINIT commands because the STEPLIB libraries are not
authorized for APF.
User Response: Authorize the Capture link library for APF.

ASN0020I Netview Generic Alerts Interface failure. The Netview return
code is “<return_code>”.
Explanation: The Network Major Vector Transport (NMVT) could not be sent
to Netview by the program because the program interface failed. This is a
secondary informational message.
User Response: See the Netview programming documentation for a
description of the return code to determine the interface error. The
User Response: Run the Capture program with the appropriate release of
DB2.

ASN0023I The Capture program successfully reinitialized the register
table. The table name is “<table_name>”; the routine name is
“<routine_name>”.
Explanation: A REINIT command was issued and the updates were
successfully made to the Capture program internal control information. This
message is for your information only.

User Response: Contact your system programmer to determine the method
of requesting the storage listed in the error message.
For Capture for VM, a request to obtain virtual storage could not be satisfied.
You might need to increase the size of the virtual machine in which the
Capture program runs.
For Capture for VSE, all available GETVIS storage has been exhausted. You
might need to restart the Capture program after allocating a larger partition.
ASN0036E DB2 was terminated abnormally. The routine name is
“<routine>”.
Explanation: DB2 was terminated while the Capture program was still active.
For OS/390, VSE/ESA, or VM/ESA, DB2 was terminated while the Capture
program was active and the user did not specify the NOTERM invocation
parameter.
User Response: Start DB2 and start the Capture program.

ASN0041E An error was returned while getting the instance name. The
reason code is “<reason_code>”.
Explanation: The SQLEGINS API of DB2 Universal Database returned an
error.
User Response: See the DB2 Universal Database API Reference for
information about the SQLEGINS API to determine the error, or contact IBM
Software Support.
ASN0043E A child process of ASNLMAIN died.
Explanation: The child process created by ASNLMAIN terminated. Possible
causes include:
v A user stopped the child process.
v There is an AIX system problem.
User Response: Check the system processes for conflicts, or contact your AIX
system programmer.

ASN0047E An error was returned from the FTOK function of
“<platform>”. The error is “<error_text>”.
Explanation: The AIX function FTOK returned an error. “<Error_text>”
describes the error.
User Response: See AIX Calls and Subroutines Reference for information
about the FTOK function, use the provided error text to determine the error,
or contact IBM Software Support.
ASN1013E The target table structure is invalid. The error code is
“<error_code>”.
Explanation: The target table structure in the subscription-targets-member
table was not valid.
User Response: Refer to “Subscription-targets-member table” on page 326 for
valid target table structure.

ASN1017E Apply could not find any target column names. The error code
is “<error_code>”.
Explanation: The Apply program could not find any columns in the
subscription columns table.
User Response: Redefine the subscription set and subscription-set members
(see “Chapter 6. Setting up your replication environment” on page 93 for
instructions).
ASN1019E The target table does not have any key columns. The error code
is “<error_code>”.
Explanation: The Apply program could not find key column names in one of
the columns requiring a primary key.
User Response: Redefine the subscription set and the subscription-set
members (see “Chapter 6. Setting up your replication environment” on
page 93 for instructions).

ASN1020S The Apply program could not reserve a storage block. The error
code is “<error_code>”.
Explanation: The Apply program could not obtain the required (memory)
storage.
User Response: Contact IBM Software Support.

ASN1021S The Apply program could not read the work file. The error code
is “<error_code>”.
Explanation: The Apply program could not read the work file due to a
system error.
User Response: Determine if the problem is caused by lack of space and
contact your system administrator to obtain what is needed.

ASN1023S The Apply program could not open the work file. The error
code is “<error_code>”.
Explanation: The Apply program could not open the work file.
User Response: Contact IBM Software Support.

ASN1024S The Apply program could not close the work file. The error
code is “<error_code>”.
Explanation: The Apply program could not close the work file.
User Response: Contact IBM Software Support.

ASN1025I The Apply program completed processing for subscription set
“<set_name>”(“<whos_on_first>”). The return code is
“<return_code>”.
Explanation: This message is for your information only.
User Response: No action is required.

ASN1026I The Apply program encountered an error while trying to bind.
SQLSTATE is “<sqlstate>”, SQLCODE is “<sqlcode>”.
Explanation: An error occurred during the execution of bind.
User Response: Refer to your OS/390 system library information.

Explanation: This message is for your information only.
User Response: No action is required unless this is not the intended
database.
ASN1077S The Apply program encountered an invalid DATALINK column
value while updating the target table. The error code is
“<error_code>”.
Explanation: The DATALINK column field of a row fetched from the source
table is not valid.
User Response: Contact IBM Software Support.

ASN1111I The Apply program converted Jet Database “<db_name>” to a
Design Master.
Explanation: The database that you specified is now a Design Master from
which all Microsoft Jet Replicas will be created.
User Response: No action is required.

ASN1130E Execution of DAO call failed. ERRCODE “<error_code>”, DAO
error number “<error_number>”, and DAO error message
“<error_message>”.
Explanation: An error occurred during a Microsoft Data Access Object
(DAO) execution.

ASN1221I Set “<set_name>” has been successfully refreshed with
“<number>” rows at “<time>”.
Explanation: This message is for your information only.
User Response: No action is required.
You can also obtain explanations for messages by typing the following
command at a DB2 command prompt:
db2 message_number
ASN1001 The Apply program encountered an SQL error.
Explanation: The SQL statement was not successful.
User Response: Check the messages in the job log to determine the cause of
the problem. Try the request again.

ASN1002 Critical section table not available.
Explanation: The Apply program could not lock the critical section table.
User Response: Try the request again when the critical section table is
available.
ASN1050 Refresh operation not valid.
Explanation: The Apply program encountered an operation during refresh
that is not valid. The operation is “<operation>”. Error code
“<error_code>”.
User Response: Record the message number, operation code and error code,
and contact your system administrator.

ASN1063 Maximum number of set members exceeded.
Explanation: The number of subscriptions has exceeded the maximum
allowed number of 200. The error code is “<error_code>”.
User Response: Remove excess members from the subscription set.
ASN1129 SQL statement was not successful.
Explanation: The SQL statement that the user specified to run as an
EXECUTE IMMEDIATE SQL statement was not successful. The error code is
“<error_code>”.
User Response: Refer to the previous messages in the job log and the apply
trail table (ASN.IBMSNAP_APPLYTRAIL) for detailed information.

ASN1148 Subscription was not successful.
Explanation: The subscription did not run successfully. The error code is
“<error_code>”.
User Response: Check the messages in the job log or the apply trail table
(ASN.IBMSNAP_APPLYTRAIL) to determine why the subscription failed.
Correct the error and try the request again.
ASN1151 Subscription was not successful.
Explanation: The Apply program determined
that a gap exists between the source table
″<src_tbl>″ and the target table. The error code is
″<error_code>″.
ASN3050 Change data table “<table_name>” not found.
Explanation: The change data table referred to in the Register table was not
found.
User Response: Remove the registration for that source table. Then register
the source table and try the STRDPRCAP command again.

ASN3053 Source table not found.
Explanation: There is a registration entry found for the named source table.
That source table is not found.
User Response: Delete the registration for that source table and register the
source table again when appropriate.

ASN4501 Register table not found.
Explanation: The table ASN/IBMSNAP_REGISTER is not found.
User Response: Restore the ASN library from the previous save volume, run
the CRTDPRTBL command, or restore the IBM DataPropagator Relational
Capture and Apply for OS/400 licensed program (5769-DP2).

ASN4502 Register table index not found.
Explanation: The index ASN/IBMSNAP_REGISTERX is not found.
User Response: Restore the ASN library from the previous save volume or
run the Create DPR Tables command (CRTDPRTBL).

ASN4503 Pruning control table not found.
Explanation: The table ASN/IBMSNAP_PRUNCNTL is not found.
User Response: Restore the ASN library from the previous save volume or
run the Create DPR Tables command (CRTDPRTBL).

ASN4504 Pruning Control index not found.
Explanation: The index ASN/IBMSNAP_PRUNCNTLX is not found.
User Response: Restore the ASN library from the previous save volume or
run the Create DPR Tables command (CRTDPRTBL).
Explanation: An attempt was made to read a matching row for the named
source table in table IBMSNAP_REG_EXT. The matching row was not found.
User Response: Ignore the error if you are trying to remove a registration. If
you are not trying to remove a registration, insert a row in table
IBMSNAP_REG_EXT with VERSION set to 5. SOURCE_OWNER and
SOURCE_TABLE are set to the appropriate values. SOURCE_NAME is set to
the system name of the source table. JRN_NAME and JRN_LIB should have
the journal name and the library name of the journal that the source table
uses. SOURCE_TABLE_RDB should be NULL if the source table is in the
same system; otherwise, it should have the RDB name of the system where
the source table is located. SOURCE_VIEW_QUAL is set to the corresponding
value in the registration.

User Response: Record the reason code and contact your system
administrator.

ASN2401 Internal error in trigger program.
Explanation: An error occurred in the trigger program for Pruning Control
table IBMSNAP_PRUNCNTL in library ASN.
User Response: Record the reason code and try to correct the problem. For
example, for reason code 60 (Could not find a matching row for source table
in index ASN/IBMSNAP_REG_EXTX) and reason code 90 (Could not find a
matching row for source table in table ASN/IBMSNAP_REGISTER) your
action could be to register the source table before trying to insert a row in the
Prune Control table. For some other reason codes, it is possible that the
referenced table is temporarily unavailable. Your action could be to simply
retry the task at a later time. If the problem recurs, contact your system
administrator.
This chapter describes the routines and the return codes, and it gives a sample
routine that starts the Capture and Apply programs.
struct asnParm
{
short byteCount;
char val[MAXASNPARMLENGTH];
};
struct asnParms
{
int parmCount;
struct asnParm **parms;
};
rc = asnCapture(&captureParms);
if( rc != 0 )
   printf("Capture failed with rc = %d\n", rc );
else
   printf("Capture completed successfully\n" );
Appendix A. Starting the Capture and Apply programs from within an application 399
currParm = (struct asnParm *)malloc(sizeof(struct asnParm));
strcpy( currParm->val, "CNTLSRV" );
currParm->byteCount = (short)strlen( currParm->val );
applyParms.parms[1] = currParm; /* second Apply parameter */
rc = asnApply(&applyParms);
if( rc != 0 )
   printf("Apply failed with rc = %d\n", rc );
else
   printf("Apply completed successfully\n" );
return(rc);
}
Services
IBM and IBM Business Partners offer consulting and services supporting the
DB2 data replication solution. Customized services are available in addition to
service offerings that help you:
- Plan and design your application.
- Install, configure, and integrate the products.
- Evaluate operational and tuning considerations.
- Evaluate application and data migration.
- Educate and train staff.
For additional information on IBM products and services, contact your IBM
software provider, or, in the U.S. and Canada, call 1-800-IBM-3333.
Education
The following classes are provided by IBM Education and Training:
v Data Replication: Basic Usage (DW140)
v Data Replication: Advanced Usage (DW150)
Details about these courses can be found at the following site on the World
Wide Web: http://www.ibm.com/software/data/dpropr/education.html
General Education Information on the Web
IBM Education and Training information is available on the Web. You
can access the entire curriculum of courses directly from the IBM
Global Campus Web site: http://www.training.ibm.com/ibmedu
Custom Classes
Replication courses can be tailored to address your unique
environment and needs. To find out more information, call
1-800-IBM-TEACH, Ext. CUSTOM (800-426-8322, Ext. CUSTOM).
IBM Employees: For complete course descriptions, see the
EDUCATION application on HONE or MSE.
IBM may have patents or pending patent applications covering subject matter
in this document. The furnishing of this document does not give you any
license to these patents. You can send license inquiries, in writing, to the
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
Licensees of this program who wish to have information about it for the
purpose of enabling: (i) the exchange of information between independently
created programs and other programs (including this one) and (ii) the mutual
use of the information which has been exchanged, should contact:
IBM Canada Limited
Office of the Lab Director
1150 Eglinton Ave. East
North York, Ontario
M3C 1H7
CANADA
This publication may contain examples of data and reports used in daily
business operations. To illustrate them as completely as possible, the examples
include the names of individuals, companies, brands, and products. All of
these names are fictitious and any similarity to the names and addresses used
by an actual business enterprise is entirely coincidental.
Trademarks
The following terms are trademarks or registered trademarks of the IBM
Corporation in the United States and/or other countries:
ACF/VTAM
ADSTAR
AISPO
AIX
AIXwindows
AnyNet
APPN
AS/400
CICS
C Set++
C/370
DATABASE 2
DataHub
DataJoiner
DataPropagator
DataRefresher
DB2
DB2 Connect
DB2 Universal Database
Distributed Relational Database Architecture
Extended Services
FFST
First Failure Support Technology
IBM
IMS
Lan Distance
MVS/ESA
MVS/XA
OS/400
OS/390
OS/2
PowerPC
QMF
RACF
RISC System/6000
SP
SQL/DS
SQL/400
S/370
System/370
System/390
SystemView
VisualAge
VM/ESA
VSE/ESA
VTAM
WIN-OS/2
Java, HotJava, Solaris, Solstice, and Sun are trademarks of Sun Microsystems,
Inc.
Microsoft, Windows, Windows NT, Visual Basic, and the Windows logo are
trademarks or registered trademarks of Microsoft Corporation in the United
States, other countries, or both.
F

full refresh. A process in which all of the data of interest in a user table is copied to the target table, replacing existing data. Contrast with differential refresh.

K

key. A column or an ordered collection of columns that are identified in the description of a table, index, or referential constraint.
Glossary 409
L

large object (LOB). A sequence of bytes, where the length can be up to 2 gigabytes. It can be any of three types: BLOB (binary), CLOB (single-byte character or mixed), or DBCLOB (double-byte character).

LOB. Large object.

local database. A database that is physically located on the workstation in use. Contrast with remote database.

lock. (1) A means of serializing events or access to data. (2) A means of preventing uncommitted changes made by one application process from being perceived by another application process and of preventing one application process from updating data that is being accessed by another process.

locking. The mechanism used by the database manager to ensure the integrity of data. Locking prevents concurrent users from accessing inconsistent data.

M

member. See subscription-set member.

N

nickname. A name that is defined in a DB2 DataJoiner database to represent a physical database object (such as a table or stored procedure) in a non-IBM database.

noncomplete CCD table. A CCD table that is empty when it is created and has rows appended to it as changes are made to the source. Contrast with complete CCD table.

…an absence of a value. For example, a field for a person's middle initial does not require a value.

null value. A parameter for which no value is specified.

O

object. (1) Anything that can be created or manipulated with SQL, for example, tables, views, indexes, or packages. (2) In object-oriented design or programming, an abstraction consisting of data and operations associated with that data.

ODBC. Open Database Connectivity.

ODBC driver. A driver that implements ODBC function calls and interacts with a data source.

on-demand timing. A method for controlling the timing of replication for occasionally connected systems. Requires that you use the ASNSAT program to operate the Capture and Apply programs. Contrast with event timing and interval timing.

Open Database Connectivity (ODBC). An API that allows access to database management systems using callable SQL, which does not require the use of an SQL preprocessor. The ODBC architecture allows users to add modules, called database drivers, that link the application to their choice of database management systems at run time. Applications do not need to be linked directly to the modules of all the supported database management systems.

ordinary identifier. In SQL, a name that is made up of a letter, which might be followed by zero or more characters, each of which is a letter (a-z and A-Z), a symbol, a number, or the underscore character.

P

primary key. A unique key that is part of the definition of a table. A primary key is the default parent key of a referential constraint definition.

R

row-replica. A type of update-anywhere replica maintained by DataPropagator for Microsoft Jet without transaction semantics.
standard conflict detection. Conflict detection in which the Apply program searches for conflicts in rows that are already captured in the change data tables of the replica or user table. See also conflict detection, enhanced conflict detection, and row-replica conflict detection.

subscription. See subscription set.

subscription cycle. A process in which the Apply program retrieves changed data for a given subscription set, replicates the changes to the target table, and updates the appropriate replication control tables to reflect the progress it made.

T

…action or result. Examples are the entry of a customer's deposit and the updating of the customer's balance.

trigger. In DB2, an object in a database that is invoked indirectly by the database manager when a particular SQL statement is run.

two-phase commit. A two-step process by which recoverable resources and an external subsystem are committed. During the first step, the database manager subsystems are polled to ensure that they are ready to commit. If all subsystems respond positively, the database manager instructs them to commit.
V
view. A logical table that consists of data that is
generated by a query.
W
warm start. A start of the Capture program that
allows reuse of previously initialized input and
output work queues. Contrast with cold start.
414 DB2 Replication Guide and Reference
Index
Apply program (continued) Apply program (continued)
Special Characters configuring 44, 142 post-installation tasks 142
-1032 170
connectivity 65 problem determination 153
-206 170
data blocking 68 processing cycle 124
-330 168
for AS/400 processor requirements 61
-741 82
installing 176 push and pull configurations 66
-805 169
invocation parameters 210 run-time processing
$TA JES2 command 234
operating 207 statements 74
Numerics scheduling 217 setting up 131
0509 168 setting up 175, 176 spill files, storage
1067 168 starting 210 requirements 62, 65
1108 170 stopping 217 starting
22517 168 using remotely 207 after Capture program cold
51002 169 for OS/390 start 142
57019 170 installing 225 example for Windows 45
invocation parameters 232 instructions 142
A operating 225, 231 overview 54
activating subscription sets 148 scheduling 234 starting with event 114
active log size 63 setting up 225 stopping example for
ADDEXITPGM command 195 starting 232 Windows 49
ADDJOBSCDE command 217 stopping 234 synchronization with Capture
administration for UNIX platforms triggers 88
authorization requirements 104 binding 240 trace files 155
timing recommendation 61 configuring 240 user ID 105
administration interfaces invocation parameters 252 Apply qualifier 13, 40
Control Center 5 operating 239, 250 Apply-qualifier-cross-reference
DJRA (DataJoiner Replication scheduling 254 tables 317
Administration) tool 5 setting up 239 Apply trail tables
overview 5 starting 251 description 336
after-image columns 10, 72 stopping 254 problem determination 153
aggregate tables 15 for Windows and OS/2 archived log restrictions 77
See also base aggregate tables, binding 44, 264 archiving information, example 24
change aggregate tables 15 configuring 264 AS/400 server
analyzing invocation parameters 276 connecting to 176
Apply program performance 56, operating 263, 275 ASCII tables 79
163 scheduling 278 ASN0000E message 167
Capture program Service Control Manager for
ASNAPPLY command
performance 56, 163 Windows 266
for UNIX platforms 251
ANZDPRJRN command 194 setting up 263
for Windows and OS/2 276
APF authorization 143, 166 starting 275
application data prerequisites 51 ASNARUN command 232
stopping 278
Applications full versus differential refresh 10 ASNCCP command
starting Capture program gap detection 147 for UNIX platforms 245
from 397 introduction 7 for VM and VSE 256
Apply job tables 340 log file 156 for Windows and OS/2 269
apply_names.ini file 144 messages 152, 368 ASNCMD command
Apply program mini-cycles 68 for UNIX platforms 247
Apply qualifier 13 operating 142 for Windows and OS/2 271
authorization requirements 105 performance 144 ASNDLCOPY exit routine
capacity planning 61 configuration files 136
columns (continued) commands (continued) commands (continued)
computed 74, 118 CRTJRN 193 SUSPEND
creating new in target table 118 CRTJRNRCV 192 for OS/390 230
defining in target table 116 CRTSQLPKG 207 for UNIX platforms 247
names, restrictions 73 DB2FLSN for VM and VSE 259
primary key, specifying 117 for UNIX platforms 249 for Windows and OS/2 272
relative record numbers on DBFLFSN WRKRDBDIRE 209, 219
AS/400 196 for Windows and OS/2 274 WRKREGINF 195
removing from target table 117 DSPJRN 202 WRKSBMJOB 159
renaming 73, 117 ENDDPRAPY 217 WRKSBSJOB 159
subsetting ENDDPRCAP 202 commit interval 129
DB2 Control Center 116 ENDJOB 203 communication
DJRA 118 GETLSEQ log-based 8
introduction 13 for OS/390 231 trigger-based 9
planning 70 for UNIX platforms 249 complete CCD tables 15, 83
commands for VM and VSE 261 components
for Windows and OS/2 274 See also Apply program, Capture
$TA JES2 GRTDPRAUT 181
Apply for OS/390 234 program, Capture triggers,
INZDPRCAP 203 Control Center, DJRA (DB2
Capture for OS/390 229 LOADX 45
ADDEXITPGM 195 DataJoiner Replication
PRUNE Administration) tool 3
ADDJOBSCDE 217
for OS/390 231 administration interfaces 5
ANZDPRJRN 194 for UNIX platforms 249
ASNAPPLY communication between 7
for VM and VSE 260 introduction 3
for UNIX platforms 251 for Windows and OS/2 274
for Windows and OS/2 276 computed columns 74, 118
RCVJRNE 193 concepts
ASNARUN 232 REINIT
ASNCCP after-image columns 10
for OS/390 230
for UNIX platforms 245 Apply qualifier 13
for UNIX platforms 248
for VM and VSE 256 before-image columns 10
for VM and VSE 260
for Windows and OS/2 269 change capture 6
for Windows and OS/2 273
ASNCMD column subsetting 13
replication sources,
for UNIX platforms 247 conflict detection 11
recognizing new 106
for Windows and OS/2 271 control tables 4
REORG 56
ASNJET 285 differential-refresh copying 10
RESUME
ASNJSTOP 286 full-refresh copying 10
for OS/390 230
ASNL2RNx 227 joins 14
for UNIX platforms 248
ASNSAT 279 logical servers 4
for VM and VSE 259
ASNSTOP replication 10
for Windows and OS/2 272
for UNIX platforms 254 replication sources 10
REVOKE 56
for Windows and OS/2 278 row subsetting 13
RGZPFM 56
AT subscription-set members 11
RMVEXITPGM 195
Apply for UNIX subscription sets 11
RUNSTATS 56
platforms 254 subsetting source tables 13
RVKDPRAUT 189
Apply for Windows 278 table partitioning 13
SBMJOB 202
Capture for UNIX target tables 14
STOP
platforms 244 unions 14
for OS/390 230
Capture for Windows 271 user tables 10
for UNIX platforms 247
AT NetView views as sources 13
for VM and VSE 258
Apply for OS/390 234 condensed CCD tables
for Windows and OS/2 271
Capture for OS/390 229 introduction 15
STRDPRAPY 210
BIND PACKAGE 56 overview 83
STRDPRCAP 197
CHGDPRCAPA 179 updating 343
STRJRNPF 193
CHGJRN 194 configuration, replication
STRSBS 201
CRTDPRPKG 207 changing 55
CRTDPRTBL 178, 210 copying 55
customizing DB2 DataJoiner diagnosing errors 151
DJRA 99 restrictions 79 differential-refresh copying 10
SQL files 103 setup 137 distinct data type 79
SQL for control tables 101 DB2 DataJoiner Replication distributing
table names 95 Administration (DJRA) tool data to remote sites 25
See DJRA (DB2 DataJoiner IMS data 27
D Replication Administration) DJRA (DB2 DataJoiner Replication
data tool 5 Administration) tool
accessing continuously 28 DB2 Extenders authorization requirements 104
consolidation configuration 20 restrictions 75 capacity planning 61
distributing to remote sites 25 DB2 File Manager 75 columns, defining 118
distribution configuration 19 DB2 for OS/390 connectivity 65
IMS, distributing 27 Apply program control tables, creating 102
manipulating source 13 operating 225 customizing 99
manipulating target 69 Capture program editing
prerequisites 51 operating 225 logic 99
data blocking 68 CCSID translation 168 SQL 103
data compression restrictions 77 data sharing 122 installing 98
data consistency 125 DB2 ODBC Catalog 235 introduction 5
data currency 123 index types 234 offline load 127
data encryption restrictions 78 password verification 66 overview 96
data integrity DB2 ODBC Catalog preferences 99
DataPropagator for Microsoft function calls 237 processor requirements 61
Jet 282 setting up server 236 promote functions 127
setting up workstation Replication Monitor 146
resolving gaps 147
client 235 replication sources
data manipulations 13, 69
tables 237 changing 147
data restrictions 77
Version 6 enhancements 235 defining 105
data sharing 122
DB2 Tools Settings notebook 95 removing 148
data types, restrictions 78 DB2FLSN command rows, defining 119
databases for UNIX platforms 249 running SQL 104
maintenance tasks 56, 145 for Windows and OS/2 274 setting up replication 96
non-IBM target tables 52 DB2INSTANCE SQL
DATALINK values starting Capture for UNIX 244 editing 103
ASNDLCOPY exit routine 133 starting Capture for Windows running 104
ASNDLCOPYD file-copy and OS/2 269 SQL statements and stored
daemon 136 DBCLOB (double-byte character procedures 122
link control level 112 large object) 74, 79 subscription sets
planning 75 DBLIB connections changing 149
DataPropagator for Microsoft Jet improving performance 144 defining 113
ASNJDONE parameters 288 Microsoft SQL Server 96 removing 149
ASNJET parameters 285 deactivating subscription sets 148 timing 123
control tables 289 decision support systems 29 target-table type, choosing 116
data integrity 282 defining targets supported 96
error recovery 288 replication source joins 108 using for AS/400
operating 285 replication sources 105 defining replication sources
setting up 283 subscription sets 40 and subscription sets 196
starting 285 delete journal receiver exit relative record numbers 175,
stopping 286 routine 194 196
terminology 282 Design Master 283 Replication Monitor 201
troubleshooting 287 designing DJX_ASYNC_APPLY variable 144
DataPropagator NonRelational overview 51 double-byte character large object
maintaining CCD tables 16 replication configurations 19 (DBCLOB) 74, 79
DB2 Control Center unsuitable configurations 19 DPCNTL files 101
See Control Center 5 detecting a gap 147 DPNCNTL files 101
journals (continued) messages occasionally connected
default message queue 194 Apply for AS/400 381 environments (continued)
entry types 205 Apply program 353, 368, 381 description 22
managing 193 Capture for AS/400 386 example configuration 31
problem determination 160 Capture program 353, 368 introduction 22
QSQJRN journal 191 delete receiver exit program 394 offline load 127
remote journal function 192 for problem determination 152 on-demand timing 18
starting 193 Trigger Program for Critical operating
use 191 Section Table 394 Apply program
Trigger Program for Prune example 54
K Control Table 393 for AS/400 207
KEEPDICTIONARY keyword 145 Trigger Program for Register for OS/390 231
key string tables 289, 349 Table 390 for UNIX platforms 250
key-update Microsoft Jet 280 for Windows and OS/2 275
restrictions 76 Microsoft SQL Server overview 142
DBLIB connections 96 Capture program
L improving performance 144 example 54
lag limit 129 restrictions 79 for AS/400 197
large object (LOB) 74, 79 migration for OS/390 227
large replication jobs 68 planning 89 for UNIX platforms 243
LE for OS/390 environment 168 services and consulting 401 for VM and VSE 255
legacy data sources 27 mini-cycles for Windows and OS/2 268
LOADX command Apply program 68 overview 139
ASNLOAD exit routine 131 defining for subscription set 122 DataPropagator for Microsoft
example 45 mobile replication Jet 285
LOB (large object) 74, 79 see also Microsoft Jet 280 overview 55
local cache for committed monitoring options, performance 128
changes 85 Capture program progress for ORA-04081 message 81
local CCD tables 83 AS/400 201 Oracle
log-based communication 8 introduction 56 restrictions 79
log file replication environment 146 overview 22
Apply program 156 multi-tier staging 85
Capture program 158 multiple source tables 24 P
log records, archived before multiple target tables 87 parameter definitions
captured 63 DataPropagator for Microsoft
log sequence number N Jet 285, 288
network connectivity 65 partitioning key 78
for OS/390 231
non-DB2 distributed data store 32 password files
for UNIX platforms 249
non-DB2 reports creating, example 44
for VM and VSE 261
query database 33 for Apply program
for Windows and OS/2 274
non-IBM data sources 87 for UNIX platforms 242
logging requirements 62
noncomplete CCD tables 15, 83 for Windows and OS/2 265
logic, editing DJRA 99
noncondensed CCD tables 15, 83 password verification, DB2 for
logical partitioning keys OS/390 66
nonrelational data sources 87
description 109 performance
NT Service Control Manager
row subsets 71, 81 improving 144
Apply for Windows 266
logical servers 4 Capture for Windows 266 options 128
LONG VARCHAR data type 79 NT Services troubleshooting, introduction 56
LONG VARGRAPHIC data type 78 starting Apply program 275 planning
starting Capture program 269 active log size 63
M capacity 61
maintenance O conflict detection 107
database 145 Object Rexx 98 migration 89
overview of tasks 56 occasionally connected environments multiple target tables 87
MAX_SYNCH_MINUTES 68, 122 See also satellite replication, network 65
members, subscription-set 11 Microsoft Jet 22 overview 51
replication sources (continued) resuming Capture program Service Control Manager
large objects 74 (continued) Apply for Windows 266
removing 148 for VM and VSE 259 Capture for Windows 266
setting up, overview 53 for Windows and OS/2 273 services and consulting 401
subsetting 13 retention limit 128 setting environment variables
viewing 147 REVOKE utility 56 Capture program
restrictions Rexx 98 for UNIX platforms 244
archived log 77 RGZPFM command 56 for Windows and OS/2 268
AS/400 78 RMVEXITPGM command 195
setting up
ASCII tables 79 row-replica tables
Apply program
Capture program See also replica tables 16
for AS/400 175
for AS/400 190 description 283, 347
for OS/390 225
for OS/390 227 introduction 16
for UNIX platforms 239
for UNIX platforms 243 row-replica-target-list tables 289,
for Windows and OS/2 263
for VM and VSE 255 333
Capture program
for Windows and OS/2 268 rows
for AS/400 175
general 130 defining in target table 118
for OS/390 225
CCD tables as sources 20 subsetting
for UNIX platforms 239
column names, limits 73 DB2 Control Center 118
for VM and VSE 255
data compression 77 DJRA 119
for Windows and OS/2 263
data encryption 78 introduction 13, 71
replication
data types 78 run-time processing 74, 121
DB2 Control Center 93
DB2 DataJoiner 79 RUNSTATS utility 56
DJRA 96
DB2 Enterprise - Extended RVKDPRAUT command 189
replication criteria 53
Edition 77
EDITPROC 77 S source servers
FIELDPROC 77, 255 satellite clients 22 introduction 4
for DB2 Extenders large satellite replication 22 password file for 44
objects 75 satellites 22 source tables
general 77 SBMJOB command 202 See replication sources 10
Informix 80 scenarios spill files 62, 65
key-update 76 problem determination 152 SQL
Microsoft SQL Server 79 typical 19 editing 103
MVS 77 using Control Center 35 errors 353
Oracle 79 scheduling files, customizing 39, 103
partitioning key 78 Apply program running DJRA 104
referential constraints 78 for AS/400 217 statements
remote journal 78 for OS/390 234 defining for subscription
stored procedures 78 for UNIX platforms 254 set 121
Sybase 79 for Windows and OS/2 278 run-time processing 74
table-name length 77 Capture program SQLCODEs
Unicode tables 79 for AS/400 202
utility program 78 -1032 170
for OS/390 229 -206 170
VALIDPROC 78 for UNIX platforms 244
views 72 -330 168
for Windows and OS/2 271 -741 82
VM & VSE 77 subscription sets 123
WHERE clause 118 -805 169
timing 1108 170
RESUME command event 124
for OS/390 230 SQLSTATEs
relative 123
for UNIX platforms 248 security 104 22517 168
for VM and VSE 259 servers 51002 169
for Windows and OS/2 272 57019 170
control 4
resuming Capture program logical 4 SRCESVR.REX file 100
for OS/390 230 source 4 staged replication 111
for UNIX platforms 248 target 4 staging data 82
tables (continued) target tables (continued) update-anywhere replication
key string 349 update anywhere, defining 106 CCD (consistent-change-data)
Microsoft Jet target server 289 user 16 tables 82
noncomplete, condensed CCD user copy 15, 341 conflict detection 107
tables 86 user defined 119 defining sources 106
noncomplete, noncondensed CCD TARGSVR.REX file 100 defining subscription sets 114
tables 86 tasks, overview 49 example configuration 30
point-in-time 342 TBLSPACE.REX file 99 fragmentation for 107
prune lock 316 terminology introduction 21
pruning control 309 DataPropagator for Microsoft updated primary key columns 109
register 301 Jet 282 updates
register extension 308 DB2 DataPropagator 1 as inserts and deletes 109
register synchronization 318 three-tier replication asynchronous 17
replica 345 configuration 111 conflicts 107
row-replica 347 timing event-based timing 17
row-replica-target-list 333 event-based 17, 124 interval-based timing 17
staging 82 interval-based 17 on-demand timing 18
structures 293 on-demand 18 scheduling 16
subscription columns 329 subscription sets synchronous 17
subscription events 335 changing 125 user copy tables
subscription-schema- setting 123 defining 115
changes 334 Tools Settings notebook 95 description 341
subscription set 323 trace files introduction 15
subscription statements 331 Apply program 155 user-defined data types 79
subscription-targets-member 326 Capture program 157 user-defined tables 119
synchronization generations 350 problem determination 157 user ID
target types 14 trace tables Apply program 105
trace 157, 316 description 316 Capture program 104
tuning parameters 128, 312 problem determination 157 requirements
unit-of-work (UOW) 319 transaction identification 89 for UNIX 239
user 16 trigger-based communication 9 Windows and OS/2 263
user copy 15, 341 troubleshooting user-oriented identification 89
warm start 313 Capture and Apply user tables
target servers, introduction 4 programs 166 as targets 16
DataPropagator for Microsoft introduction 10
target tables
Jet 287 utilities
aggregate 15 introduction 56 BIND PACKAGE 56
base aggregate 15, 345 services and consulting 401 REORG 56
CCD (consistent-change-data) tuning parameters REVOKE 56
description 82, 343 Capture for AS/400 178, 203 RUNSTATS 56
introduction 15 specifying 128 utility program restrictions 78
change aggregate 15, 346 tuning parameters tables 312
columns, defining 116 tutorial for Windows NT 35 V
fragmenting 70 VALIDPROC 78
in non-IBM databases 52 U vertical subsets 70
offline load 127 Unicode tables 79 views
point-in-time 15, 342 unions for targets 14 defining as sources 13, 108
replica 16, 345 unit-of-work (UOW) tables description 71
row-replica 16, 347 Capture triggers 80 double delete 72
rows, defining 118 description 319 restrictions 72
storage requirements 63 pruning 319
structure, specifying 116 storage requirements 65 W
table structures, quick UOW (unit-of-work) tables warm start, Capture program
reference 299 Capture triggers 80 for AS/400 198, 205
type, specifying 115 description 319 for OS/390 228
types of 14 storage requirements 65 for UNIX platforms 245
Contacting IBM
If you have a technical problem, please review and carry out the actions
suggested by the Troubleshooting Guide before contacting DB2 Customer
Support. This guide suggests information that you can gather to help DB2
Customer Support to serve you better.
If you live in the U.S.A., then you can call one of the following numbers:
- 1-800-237-5511 for customer support.
- 1-888-426-4343 to learn about available service options.
Product information
If you live in the U.S.A., then you can call one of the following numbers:
- 1-800-IBM-CALL (1-800-426-2255) or 1-800-3IBM-OS2 (1-800-342-6672) to order products or get general information.
- 1-800-879-2755 to order publications.
http://www.ibm.com/software/data/
The DB2 World Wide Web pages provide current DB2 information
about news, product descriptions, education schedules, and more.
http://www.ibm.com/software/data/db2/library/
The DB2 Product and Service Technical Library provides access to
frequently asked questions, fixes, books, and up-to-date DB2 technical
information.
For information on how to contact IBM outside of the United States, refer to
Appendix A of the IBM Software Support Handbook. To access this document,
go to the following Web page: http://www.ibm.com/support/, and then
select the IBM Software Support Handbook link near the bottom of the page.
SC26-9920-00
Spine information: IBM DB2 Universal Database, DB2 Replication Guide and Reference, Version 7