IBM Content Manager OnDemand and FileNet-4
Content Manager OnDemand provides data at each exit that can serve as input to the
user-written programs. By using these exits, you can perform functions, such as sending
emails based on events in the system, updating index values through a print request, and
cleaning up data as it is loaded into Content Manager OnDemand. Unlimited possibilities are
available with the Content Manager OnDemand exits. We provide samples to act as a guide
for creating customized user exit programs.
Important: Always recompile all of the customized user exits after you upgrade the
Content Manager OnDemand software because the header files might change with
different versions.
PDF Indexer: The PDF Indexer does not support any user exits.
In Multiplatforms, ACIF user exits must be written in C. In z/OS, ACIF user exits can be written
in C, COBOL, or assembler. For more information, see the “Special considerations for
APKACIF exits written in COBOL” section in the IBM Content Manager OnDemand for z/OS,
V9.0, Administration Guide, SC19-3364. ACIF exits do not exist in Content Manager
OnDemand for IBM i.
For detailed documentation about each exit point, see IBM Content Manager OnDemand for
Multiplatforms - Indexing Reference, SC19-3354, and IBM Content Manager OnDemand for
z/OS and OS/390 - Indexing Reference, SC27-1375.
The arsload program substitutes the correct path for the platforms.
This macro works for all four ACIF user exits. The macro is not supported if ACIF is run
outside of the arsload program.
The input exit can be used to insert indexing information. More common uses are to remove
null characters, truncate records, add carriage control, and change code pages. In general,
indexer parameters need to reflect what the input record looks like after the input exit runs.
The only exception is the FILEFORMAT indexer parameter, which needs to correspond to the
input record before it is passed to the input exit. For example, if the file contains ASCII data
and uses the ASCII stream delimiter x'0A', specify (NEWLINE=x'0A'), not (NEWLINE=x'25'),
even if you use the apka2e exit to convert ASCII to EBCDIC. Otherwise, ACIF does not pass
the correct record to the apka2e input exit.
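As a minimal sketch of one of these common uses, the following C fragment removes null characters from each record. The structure and field names here (record, recordln, request, eof) are illustrative stand-ins for the real ACIF input exit parameter structure, which is defined in the exits header file that ships with ACIF.

```c
/* Hypothetical subset of the ACIF input exit parameter structure;
 * the real layout is defined in the ACIF exits header file. */
typedef struct {
    char          *record;    /* current input record           */
    unsigned long  recordln;  /* length of the record           */
    int            request;   /* 0 = use record, 1 = delete it  */
    int            eof;       /* nonzero at end of file         */
} InputExitParms;

/* Remove embedded null characters from the record in place. */
long strip_nulls(InputExitParms *p)
{
    unsigned long in, out = 0;

    if (!p->eof) {
        for (in = 0; in < p->recordln; in++) {
            if (p->record[in] != '\0')
                p->record[out++] = p->record[in];
        }
        p->recordln = out;   /* record is now shorter        */
        p->request = 0;      /* keep the cleaned record      */
    }
    return 0;
}
```

Remember that after an exit like this runs, your other indexer parameters must describe the cleaned record, not the original one.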
You can either use these input record exits as samples to build from, or you can compile them
and run them as is. These programs are documented in IBM Content Manager OnDemand
for Multiplatforms - Indexing Reference, SC18-9235, and are described briefly in the following
sections.
When you use the apka2e exit, you must manually change your indexing parameters:
Change an ASCII CPGID to an EBCDIC CPGID; for example, change CPGID=850 to
CPGID=500.
Change the HEX codes for the triggers and index names from ASCII to EBCDIC. If you do
not, you receive ACIF return code 16, which states that it cannot find trigger1 or any fields.
We used a hex editor to determine the new EBCDIC values and typed them into the
parameter file. If you do not have a hex editor, you can find conversion tables on the
Internet.
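If you prefer to compute the EBCDIC values programmatically instead, the following C sketch shows the idea. The translate function covers only a handful of code page 500 characters; a real conversion uses a full 256-byte translate table.

```c
/* Partial ASCII-to-EBCDIC (code page 500) translation, covering only
 * the characters that this sketch needs. */
static unsigned char a2e(unsigned char c)
{
    switch (c) {
    case 'P': return 0xD7;
    case 'A': return 0xC1;
    case 'G': return 0xC7;
    case 'E': return 0xC5;
    case ' ': return 0x40;
    default:  return 0x3F;   /* EBCDIC substitute character */
    }
}

/* Convert an ASCII trigger string into the EBCDIC byte values that
 * belong in the indexing parameters after apka2e runs. */
static void trigger_to_ebcdic(const char *ascii, unsigned char *ebcdic)
{
    while (*ascii)
        *ebcdic++ = a2e((unsigned char)*ascii++);
}
```

For example, the ASCII trigger 'PAGE' becomes the EBCDIC bytes X'D7C1C7C5'.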
For more information about how to update indexing parameters, see 11.2.6, “Debugging input
user exit programs” on page 247.
Note: Because the asciinp exit inserts carriage control characters in byte 0 of your
document, and leaves X'0A', it changes the position of the triggers and fields. If you use
this exit, you must add 1 to the column offsets for the triggers and fields.
Another example is to modify the format of an existing index. Example 11-1 shows a sample
index exit C program to update the date format from mmddyy to mm/dd/yy.
Important: The ACIF index file is in AFP format. It is important to understand the structure
of AFP before you write or modify an index exit.
long
INDXEXIT( INDXEXIT_PARMS *exitstruc )
{
   if ( exitstruc->eof != IDX_EOFLAG )
   {
      /************************************************/
      /* Look for TLE with attribute name "mmddyy"    */
      /************************************************/
      if (
          (exitstruc->record[13] == 0x6D) &&
          (exitstruc->record[14] == 0x6D) &&
          (exitstruc->record[15] == 0x64) &&
          (exitstruc->record[16] == 0x64) &&
          (exitstruc->record[17] == 0x79) &&
          (exitstruc->record[18] == 0x79))
      {
         /************************************************/
         /* TLE length is now 40 (was 38)                */
         /************************************************/
         exitstruc->record[ 2] = 0x28;
         /************************************************/
         /* Attribute value count is now 12 (was 10)     */
         /************************************************/
         exitstruc->record[19] = 0x0C;
         /************************************************/
         /* Change mmddyy to mm/dd/yy                    */
         /************************************************/
         exitstruc->record[30] = exitstruc->record[28];
         exitstruc->record[29] = exitstruc->record[27];
         exitstruc->record[28] = 0x61;
         exitstruc->record[27] = exitstruc->record[26];
         exitstruc->record[26] = exitstruc->record[25];
         exitstruc->record[25] = 0x61;
         /**********************************************/
         /* record length has increased to 41 (was 39) */
         /**********************************************/
         exitstruc->recordln = 41;
         exitstruc->request = IDX_USE;
      }
   }
   return( 0 );
}
Example 11-2 shows a sample output exit program that deletes records from the output file.
This program checks each Structured Field to determine whether it is an AFP record. If the
record does not begin with Hex 5A, the exit program instructs ACIF not to use this record.
Important: The ACIF output file can be in either Line Data or AFP format. If the ACIF
output file is in AFP format, it is important to understand the structure of AFP before you
write or modify an output exit.
long
OUTEXIT( OUTEXIT_PARMS *exitstruc )
{
   /************************************************************************/
   /* Delete all records from the output that do not begin with Hex '5A'   */
   /************************************************************************/
   if ( exitstruc->eof != ACIF_EOF )
   {
      if ( exitstruc->record[0] == 0x5A )
         exitstruc->request = ACIF_USE;
      else
         exitstruc->request = ACIF_DELETE;
   }
   return( 0 );
}
The resource exit is best used to control resources at the file name level. For example, you
want to exclude particular fonts from the resource file. You can code this exit program to
contain a table of the fonts that you want to exclude and filter them from the resource file. The
program that is invoked at this exit is defined in the ACIF resexit parameter.
ACIF does not start the exit for the following resource types:
Page definitions: The pagedef is a required resource for converting line data to AFP and it
is never included in the resource file.
Form definitions: The formdef is a required resource for processing print files. If you do not
want the formdef to be included in the resource file, specify restype=none or explicitly
exclude the formdef from the restype list.
Coded fonts: If you specify MCF2REF=CF, ACIF writes coded fonts to the resource file if they
are included in the restype list. The default is MCF2REF=CPCS; therefore, ACIF does not
write coded fonts to the resource file.
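A table-lookup sketch of the font-filtering idea follows. The structure, field names, and font names are assumptions for illustration only; the authoritative resource exit parameter structure is defined in the ACIF exits header file.

```c
#include <string.h>

/* Hypothetical subset of the ACIF resource exit parameter structure. */
typedef struct {
    char resname[9];   /* resource (member) name               */
    int  request;      /* 0 = include resource, 1 = discard it */
} ResourceExitParms;

/* Illustrative table of fonts to exclude from the resource file. */
static const char *excluded_fonts[] = { "X0GT10", "X0GT12", NULL };

long filter_fonts(ResourceExitParms *p)
{
    int i;

    p->request = 0;                       /* include by default */
    for (i = 0; excluded_fonts[i] != NULL; i++) {
        if (strcmp(p->resname, excluded_fonts[i]) == 0) {
            p->request = 1;               /* discard this font  */
            break;
        }
    }
    return 0;
}
```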
This command starts ACIF with the user exit, runs the exit, and writes the output to the file
that is specified by the OUTPUTDD parameter. You can inspect the output file to ensure that the
exit did what you expected. You can also use this output file in the graphical indexer to index
your post-exit file because the exit routine might change the location of your triggers and
fields.
Important: Specify the complete path in the inpexit, indxexit, resexit, or outexit
parameters. Nothing is more frustrating than trying to debug an exit that is never called
because another exit with the same name is started because of the PATH environment
variable.
Another method is to run arsload with the -i option, which runs only the indexer and does not
load any files. In this case, it is not necessary to add INPUTDD or OUTPUTDD to the indexing
parameters in the application. Running arsload with the -i option creates the .ind and .out
files, and leaves them on the system for you to view.
The anystore batch capture exit can be used to provide all segment and index data to the
report capture program. This exit program is called from the report capture process.
The exit is called dynamically during the capture process. The capture program calls the exit
when the indexing instructions for the application include the ANYEXIT parameter. The report
administrator provides a program name for the anystore exit.
The report capture program expects the anystore exit to pass back all segment data and the
associated index information. The capture program performs only the data management
functions that are required for the capture process (document compression, document store,
index management and store, and so on).
No restrictions exist for the type of processing that can be performed in an input exit, except
that the exit must pass the standard parameter list back to the report capture program. Values
must be supplied for all parameters.
Beginning with Content Manager OnDemand for z/OS V8.4 or later, a line print file can have a
fixed record length that is greater than 512 or a variable record length. To support this
capability, a new parameter format is provided. The old parameter format is still supported for
compatibility with earlier versions.
To learn more about how to create an input exit, the details of the new parameter format, and
how Content Manager OnDemand determines whether to use the old or new parameter
format, see IBM Content Manager OnDemand Version 9 Release 5 - Indexing Reference,
SC19-3354.
No restrictions exist for the type of processing that can be performed in an index exit, except
that the exit must pass the standard parameter list back to the capture program. A sample
COBOL exit is provided in ARSEXNDX, with the COBOL copybook ARSINDBK. A sample C
exit is provided in ARSECNDX with the C header file ARSINDBH.
For more information about the OS/390 indexer, see IBM Content Manager OnDemand for
z/OS and OS/390 - Indexing Reference, SC27-1375.
In addition, you can configure Content Manager OnDemand to send these messages to the
arslog exit. The system log exit is supplied in the arslog file that is in the bin directory of the
Content Manager OnDemand installation root for each platform. If the arslog file is opened in
a text editor, it contains comments that provide a brief description of the exit and the order of
the parameters that Content Manager OnDemand supplies to this exit. By default, the system
log exit is not initialized within Content Manager OnDemand. Therefore, if you edit the arslog
file to capture information, the exit does not run automatically.
Tip: The arslog exit file is run by the same user that owns the arssockd process that
calls this exit. A common reason for receiving no response from this exit is access
permissions on either the arslog file itself or files and directories that are accessed
within arslog.
Content Manager OnDemand provides an exit for each of the four system logging event
points. Use these exits to filter the messages and act when a particular event occurs. For
example, you can provide a user exit program that sends a message to a security
administrator when an unsuccessful logon attempt occurs.
For simplicity, we do not demonstrate the system log exits across all supported platforms. We
recognize that the scripting languages between platforms vary, but the principles that we
describe here are uniform across all supported platforms; only the syntax differs.
exit 0
For the exit sample that is provided in Example 11-4, we also provided a small sample of what
the output of this exit might look like (Example 11-5). For example, you can see in the output
that several unsuccessful attempts were made from the same machine and different user IDs
were used for each attempt. In this example, by adding parameter 2 ($2) to the output and
resorting the file, we can further establish the times of these attempts.
Example 11-6 shows how the exit collects virtually all of the available information when it
receives message number 87 (a successful load). This information is then used as the input
for another script, which notifies the remote machine that the load is complete and the next
report file can be sent.
case $7 in
#
# msg num 87 is a successful load
#
87) echo "Instance : $1" >> /arsacif/companyx/arslog.out
echo "Time Stamp : $2" >> /arsacif/companyx/arslog.out
echo "Log Identifier : $3" >> /arsacif/companyx/arslog.out
echo "Userid : $4" >> /arsacif/companyx/arslog.out
echo "Account : $5" >> /arsacif/companyx/arslog.out
echo "Severity : $6" >> /arsacif/companyx/arslog.out
echo "Message Number : $7" >> /arsacif/companyx/arslog.out
echo "Message Text : $8" >> /arsacif/companyx/arslog.out
/arsacif/companyx/control_file.scr "$@" >> /arsacif/companyx/arslog.out ;;
*) ;;
esac
exit 0
Tips: For more information about the codes for each message type that is logged in the
system log, see Chapter 2, “Common Server Messages”, in IBM Content Manager
OnDemand - Messages and Codes, SC27-1379. For example, message number 87 is
listed as ARS0087I.
You configure the system log exit with the Administrator Client in the System Parameters
window (see Figure 11-2 on page 254).
Select the options for the system logging and set up the exit. The sample in Example 11-7
routes the messages to the system log with the write to operator (WTO) macro.
When the exit routine is assembled and link-edited to a library, it must be associated with the
exit in one of two ways:
Use the exit statement in the PROGXX parmlib member. For more information about the
PROGXX parmlib member, see z/OS MVS Initialization and Tuning Reference,
SA22-7592.
Use the SETPROG EXIT operator command. For more information about the SETPROG EXIT
command, see z/OS MVS System Commands, SA22-7627.
A server print device can be physically connected to the library server or attached to another
workstation in the network. Server print devices are managed by Infoprint.
Content Manager OnDemand provides a print exit for Multiplatforms that can be used only for
documents that are printed through a server printer.
Two print exits are available for Multiplatforms, which are in the bin directory of the Content
Manager OnDemand installation root for each platform:
arsprt: Content Manager OnDemand User Exit Printing Facility
arsrdprt: Content Manager OnDemand User Exit Printing Facility for Report Distribution
If you open either of the files in a text editor, you can see that they contain comments that
provide a brief description of the exit and the order of the parameters that Content Manager
OnDemand gives to this exit.
Example 11-8 shows an arsprt file, which updates application group indexes for a certain
document type each time it is sent to a server printer. This example is from an actual
customer where the requirement was for Content Manager OnDemand to keep a record of
when a document is reprinted. This file is created by using the print exit to update the indexes
of a document to show the last time that the document was reprinted and a counter is
incremented to log the number of times the document was reprinted. Comments are inserted
into the sample script in Example 11-8 that explain each part of the script. The customer
name and the IP addresses are either altered or removed.
##################
# 3 stmt's added #
# for debugging #
##################
#RANDOM=$$
#set -x
#exec 2> /usr/lpp/ars/bin/debug1.log.$RANDOM
RM=/bin/rm
SED=/bin/sed
OS=$(uname)
#
# $1 - Printer Queue Name
# $2 - Copies
# $3 - Userid
# $4 - Application Group Name
# $5 - Application Name
# $6 - Application Print Options
# $7 - Filename to Print
#
# NOTE: It is up to this script to make sure the file is deleted.
# example( -r option on /bin/enq )
#
FILE=$7
OPTS_FILE=${FILE}.opts
NOTES_FILE=${FILE}.notes
if [[ -f ${OPTS_FILE} ]] ; then
DEL=1
PRT_OPTIONS="-o PASSTHRU=fax_file-${FILE}-"
#
# Since I am faxing, make sure messages are not produced.
# If debugging is needed, then this parameter should be blank.
#
#EXTRA_OPTIONS="-o MSGCOUNT=0"
EXTRA_OPTIONS="-o MSGCOUNT=0"
else
DEL=0
PRT_OPTIONS=
EXTRA_OPTIONS=
fi
RC=$?
if [[ ${RC} = 0 ]] ; then
if [[ ${OS} != AIX ]] ; then
${RM} -f ${FILE}
else
####################################
# Test if filename ends up with .0 #
# If not, skip around code to update #
# index. This prevents update of #
# same index several times as only #
# one .cntl file is created #
# when server print is made for #
# multiple documents and this #
# script is called one time for #
# each doc to print. #
####################################
ext=$7
ext=${ext##*.}
if [[ ${ext} = 0 ]] ; then
####################################
# Compute .cntl filename from #
# supplied parameter $7 #
####################################
fil=$7
mine=${fil%.*}.cntl
####################################
# Double check if .cntl file exists #
####################################
if test ! -f $mine
then echo "File $mine not found"
exit 1
fi
####################################
# Set static variables #
####################################
host=9.99.99.99
nohit=no
applgrp1=ICAlog
folder1=ICAlog
applgrp2=applg2
folder2=folder2
applgrp3=applg3
folder3=folder3
####################################
fi
else
(
if [[ ${OS} = AIX ]] ; then
echo /bin/enq -r -P "$1" -N $2 -T "${TITLE}" $6 ${EXTRA_OPTIONS} ${PRT_OPTIONS} ${FILE}
else
echo ${BASE_DIR}/lprafp -p "$1" -s "${ARSPRT_HOSTNAME}" -o "COPIES=${2}" -o "JOBNAME=${TITLE}" -o "TITLE=${TITLE}" $6 ${EXTRA_OPTIONS} ${PRT_OPTIONS} ${FILE}
fi
#
# If there is an options file, wait until the file has been
# printed before removing it.
#
if [[ ${DEL} != 0 ]] ; then
while(( 1 ))
do
if [[ -f "${FILE}" ]] ; then
sleep 30
else
${RM} -f ${OPTS_FILE} ${NOTES_FILE}
break
fi
done
fi
exit 0
You can also use the sample exit source code to write your own exits. In this section, we
describe each sample exit that is provided in the standard Content Manager OnDemand
installation.
The sample source code for the Content Manager OnDemand user exits is provided for all of
the platforms. They are placed in the directories or libraries of Content Manager OnDemand
that are listed in Table 11-1. These sample user exit modules provide a skeleton for you to
program the exits.
The header file provides information about how to turn on the user exits. If the information is
not specified in the header file, place the compiled user exit program into the bin/exits
directory of the Content Manager OnDemand installation root.
The source code must be compiled before you use it. For UNIX platforms, you can compile
the source code by using the sample makefile that is provided. The makefile is in the same
exits directory as the sample exits source code.
Table 11-2 provides the functions and usage of the user exit modules.
arsutbl TBLSPCRT To customize the creation of table spaces, tables, and indexes
The first part of the header file is a declaration of all of the structures and variables that are
used. Example 11-9 shows several of the common structures that are used in the functions
declarations.
Example 11-9 Common structure that is defined in the arscsxit.h header file
/*********************************************************************/
/* COMMON STRUCTURES */
/*********************************************************************/
#define ARCCSXIT_MAX_SRVR_MESSAGE_SIZE 1024
#define ARCCSXIT_DOCNAME_SIZE 11
From the previous example, the ArcCSXitApplGroup structure consists of the application
group name, the application group identifier (agid), and the AGID name (agid_name). This
information is important because it indicates the input to the functions. Structures that are
specific to a function are also included in the header file.
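As a sketch, the application group structure and a helper of the kind an exit might use to log it could look like the following code. The field sizes shown here are illustrative assumptions; the authoritative definition is in the arscsxit.h header file.

```c
#include <stdio.h>
#include <string.h>

/* Sketch of the common application group structure; field sizes are
 * illustrative, not the arscsxit.h definitions. */
typedef struct {
    char name[128];      /* application group name       */
    int  agid;           /* application group identifier */
    char agid_name[11];  /* AGID (database table) name   */
} ApplGroupSketch;

/* Format a one-line description of the kind an exit might log. */
void describe_appl_group(const ApplGroupSketch *ag, char *buf, size_t len)
{
    snprintf(buf, len, "applgrp=%s agid=%d table=%s",
             ag->name, ag->agid, ag->agid_name);
}
```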
In the following sections, we examine each exit and describe its usage.
You can use the sample exits program to insert the action that you prefer. The input to the
program is in the structure ArsCSXitLoadExit. This structure contains the load information,
such as the load identifier and the application group name. Based on the load information,
you decide whether to send a notification, to whom to send the notification, and the type of
information you want to provide when loading is successful.
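The decision logic might be sketched as follows. The structure fields and the notification policy (notify on any failure, and on success only for one critical application group) are illustrative assumptions, not product behavior.

```c
#include <string.h>

/* Hypothetical subset of the load exit input structure; the real
 * ArsCSXitLoadExit fields are defined in arscsxit.h. */
typedef struct {
    char appl_grp_name[128];  /* application group that was loaded */
    char load_id[128];        /* load identifier                   */
    int  rc;                  /* load return code                  */
} LoadExitSketch;

/* Decide whether this load warrants an email notification: notify on
 * any failure, and on success only for one critical application group
 * (an illustrative policy). */
int should_notify(const LoadExitSketch *load)
{
    if (load->rc != 0)
        return 1;
    return strcmp(load->appl_grp_name, "INVOICES") == 0;
}
```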
You can use the client retrieval preview exit to add, remove, or reformat data before the
document is presented to the client, for example:
You can remove pages from the document, such as banner pages, title pages, or all pages
except the summary page.
You can remove specific words, columns of data, or other information from the document.
That is, you can omit (“white out”) sensitive information, such as salaries, social security
numbers, and birth dates.
You can add information to the document, for example, a summary page, data analysis
information, and Confidential or Copy statements.
You can reformat data that is contained in the document. For example, you can reorder the
columns of data.
The client retrieval preview exit point can be enabled for specific applications. To enable
the client retrieval preview exit for a specific application, ensure that the Use Preview
Exit option is selected on the Miscellaneous Options page of the application.
The input to the exit program is captured when the user tries to retrieve the document. Based
on the input, such as application group name and the indexes, you can then use your program
to create an output file with the name from pOutFileName.
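As a sketch of the "white out" use case that is described earlier, the following fragment blanks every occurrence of a sensitive word in a line-data buffer before the exit writes its output file. The function is an illustration; a real exit must also handle the document data type that it receives.

```c
#include <string.h>

/* Blank out ("white out") every occurrence of a sensitive word in a
 * line-data buffer. */
void redact(char *text, const char *word)
{
    size_t wlen = strlen(word);
    char *hit = text;

    while ((hit = strstr(hit, word)) != NULL) {
        memset(hit, '*', wlen);   /* overwrite the match */
        hit += wlen;              /* continue after it   */
    }
}
```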
Example 11-11 shows the header file of the client retrieval preview exit.
ArcI32
ARSCSXIT_EXPORT
ARSCSXIT_API
PREPEXIT( ArsCSXitPrepExit *prep );
For example, you can arrange it so that when a user retrieves a document from a particular
application group, you can check the name of the account number (the indexes from the Doc
handle) and place a watermark for that document. When the document is retrieved by the
user, the user sees the document with the watermark.
Any information that is specified in the Parameters field is passed to the user-written program.
Place the arsuprep program in the bin/exits directory.
The client retrieval preview user exit can be enabled for all data types, except for None.
For more information, see the IBM Content Manager OnDemand for Multiplatforms -
Installation and Configuration Guide, SC18-9232.
Example 11-12 Header file of the report specifications archive definition exit
/**********************************************************************/
/* UPDTEXIT - Report Definition Update Exit */
/* */
/* This exit is for specialized applications and is not normally */
/* used. */
/* */
/* INPUT: */
/* pFileName */
/* Function */
/* ApplGrpName */
/* ApplName */
/* ObjServer */
/* StorageNode */
/* pJES */
/* IndexerParms */
/* CCType */
/* LRECL */
/* RECFM */
/* Delim */
/* instance */
/* */
/* OUTPUT: */
/* ApplGrpName */
/* ApplName */
/* ObjServer */
/* StorageNode */
/* IndexerParms */
/* CCType */
/* LRECL */
/* RECFM */
/* UpdateAppl */
/* Delim */
/* DbFieldName */
/* DbFieldDateFormat */
/* */
/* RETURN_CODE: */
/* 0 -> Successful */
/* Otherwise -> Failed */
/* */
/**********************************************************************/
#if defined(OS390)
typedef struct _ArsCSXitUpdtExit_JES
{
void *JES_SSS2p; /* pointer to SSS2 (SAPI SSOB ext) */
char JES_DD[8]; /* DD name allocated to spool file */
} ArsCSXitUpdtExit_JES;
#endif
ArcI32
ARSCSXIT_EXPORT
ARSCSXIT_API
UPDTEXIT( ArsCSXitUpdtExit *updt );
The ARSUUPDT DLL invokes module ARSUUPDX. Module ARSUUPDX interfaces with the MVS
Dynamic Exit Facility to perform the following actions:
Define the logical exit point name: ARS.RSADUPDT
Route control to a set of exit routines that are associated with MVS and process the
results of their execution
Module ARSUUPDZ is implemented as a dynamic exit routine that is associated with MVS. An exit
routine is eligible for execution after it becomes associated with the logical exit point. The
MVS Dynamic Exit Facility provides several methods for performing this association.
Use the following command to activate the exit routine and associate ARSUUPDZ with the logical
exit point name. (The example assumes that ARSUUPDZ is in the link pack area (LPA) or a
LNKLST dataset.)
SETPROG EXIT,ADD,EXITNAME=ARS.RSADUPDT,MODNAME=ARSUUPDZ
For more information about the report specifications archive definition exit routines, see
Chapter 40, “Report specifications archive definition exit”, in the Content Manager
OnDemand for z/OS Configuration Guide, SC19-3363.
You can also use this exit to perform other actions during a table space creation. This exit is
useful if you must change default parameters for the table space, the table, or the indexes.
The changes affect only new creations. Example 11-13 shows the header file of the table
space creation exit.
Example 11-13 Header file for the table space creation exit
/**********************************************************************/
/* TBLSPCRT - table space Create Exit */
/* */
/* To activate the table space creation exit, set the following */
/* variable in the appropriate OnDemand instance ars.cfg file: */
/* */
/* ARS_DB_TABLESPACE_USEREXIT=<absolute_dll_path_name> */
/* */
/* INPUT: appl_grp */
/* tblsp_name */
/* table_name */
/* idx_name */
/* sql (allocated with 16384 bytes) */
/* action */
/* instance */
/* */
If you do not customize the action, Content Manager OnDemand uses the defaults.
Action 2
Is there a need to customize the creation of the table?
If yes
create the table
return( created = 1 )
Else
OnDemand creates the table
return( created = 0 )
Action 3
Is there a need to customize the creation of the indexes?
If yes
create the indexes
return( created = 1 )
Else
OnDemand creates the indexes
return( created = 0 )
Action 4
Final call: is there additional work, cleanup, or an update of parameters?
If yes
perform the additional action
return( created = not used )
Else
OnDemand does nothing
return( created = not used )
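As a sketch of the first kind of customization, the following function appends a clause to the generated SQL in the 16384-byte buffer and returns control to Content Manager OnDemand to run the statement (created = 0). The clause and the buffer-handling convention shown here are illustrative assumptions.

```c
#include <string.h>

#define SQL_BUF_SIZE 16384   /* size of the sql buffer per the header */

/* Append an illustrative default clause to the CREATE TABLESPACE
 * statement that is passed in the sql buffer. */
int customize_tablespace_sql(char *sql)
{
    const char *clause = " BUFFERPOOL BP2";   /* example clause */

    if (strlen(sql) + strlen(clause) + 1 > SQL_BUF_SIZE)
        return -1;               /* no room to extend the statement */
    strcat(sql, clause);
    return 0;                    /* let OnDemand run the statement */
}
```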
The following statement must exist in the ARS.CFG file that is associated with the instance so
that the ARSUTBL DLL can be invoked:
ARS_DB_TABLESPACE_USEREXIT=absolute path name
For more information about the table space creation exit, see the IBM Content Manager
OnDemand for Multiplatforms - Installation and Configuration Guide, SC18-9232.
ARSYSPIN creates an intermediate output file that contains one or more spool files from one or
more jobs. The intermediate output file is indexed and stored in Content Manager OnDemand
by using the ARSLOAD program. ARSYSPIN invokes ARSLOAD when sufficient data is captured in
the intermediate output file. ARSLOAD calls the indexer program (APKACIF) to extract the index
values from the data and store them in an index file. ARSLOAD adds these index values to the
database and stores the data object. If you want, you can use ARSYSPIN exits to augment the
data stream.
ARSSPVIN is a sample APKACIF input exit that is provided with ARSYSPIN to introduce additional
index values into the data stream, by using a “trailer” record. Trailer records are inserted at
the end of the JESMSGLG data. They reflect the highest severity condition (a step completion
code, an ABEND code, or another type of problem, such as a JCL error) that is observed in
messages that are contained within these spool files.
CEEUOPT CSECT must be assembled and link-edited with the COBOL object code. In
addition, you must ensure that the resulting module is link-edited as NOT RE-ENTRANT and
NOT REUSABLE. This task is required for the local variables within the COBOL exit code to
retain their values. This exit is invoked several times during an ACIF run. The sample source
code is in the SARSINST library member ARSSPVIN. Example 11-15 shows a sample
CEEUOPT CSECT.
2. Click Modify.
3. Click the Exit Information tab, as shown in Figure 11-4.
Figure 11-4 Specify Load Module Name in the Exit Information tab
If an application exists, edit your indexing parameters and add the following line, as shown
in Figure 11-5:
INPEXIT=ARSSPVIN
For more information about activating this exit, see the Content Manager OnDemand for z/OS
Version 9.0 Administration Guide, SC19-3364.
Scalability
Scalability is the ability of a Content Manager OnDemand system to handle a growing amount
of work with no performance degradation. A Content Manager OnDemand system whose
performance improves with the addition of hardware and network resources is, by definition,
scalable. Two types of scalability are defined:
Horizontal scalability (or scale out): This type of scalability is achieved by adding more
nodes, systems, or logical partitions (LPARs) to a Content Manager OnDemand instance.
An example of horizontal scalability is adding more object servers to a Content Manager
OnDemand instance.
Vertical scalability (or scale up): This type of scalability is achieved by adding more
resources to a single node in a Content Manager OnDemand instance. Typically, this type
of scalability involves faster processors, more processors, memory, disks, or networking
hardware.
Reliability
Reliability is the ability of Content Manager OnDemand to perform and maintain functionality
during regular workloads and during peak workloads. Peak workloads might occur regularly
(for example, when everyone signs on at 9:00 a.m.) or periodically (at the end of the month
when more processing than usual occurs). Or, peak workloads might occur sporadically (for
example, when a special event occurs, such as a sales drive that results in more users using
the system).
Availability
Availability is a measure of the time that a Content Manager OnDemand server or process
functions normally, and a measure of the time that the recovery process requires after a
component failure. It is the downtime (unavailability) that limits system availability:
availability is the amount of system uptime during which the system is fully functional and
accessible to all users.
Availability requires that the system provides a degree of redundancy to eliminate single
points of failure (SPOFs). The greater the redundancy that is provided, the higher the
availability of the system. A single physical machine is still a SPOF. For this reason, a high
availability system topology typically involves horizontal scaling and redundancy across
multiple machines.
High availability
High availability implies that no human intervention is needed to restore operation if a failure
or outage occurs. A highly available system has an availability level of at least 99%, which
allows an average of approximately 15 minutes of downtime each day for maintenance tasks
(during which period the system is inaccessible to users). The degree of high availability that is achieved is a
function of the amount of redundancy within the system and the degree to which this
redundancy is automatically enabled.
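The arithmetic behind that figure can be checked directly: at 99% availability, the daily downtime budget is 1% of 1440 minutes, or about 14.4 minutes. The following sketch computes the budget for any availability level.

```c
/* Downtime budget per day implied by an availability level:
 * (unavailable fraction) x 24 hours x 60 minutes. */
double daily_downtime_minutes(double availability_pct)
{
    return (100.0 - availability_pct) / 100.0 * 24.0 * 60.0;
}
```

At 99.9% availability, the budget shrinks to roughly 1.4 minutes per day.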
Systems typically become unavailable because of the lack of one or more of the following
activities:
Change control procedures (a failure to implement the appropriate procedures from
installation verification through performance testing before you place the system into
production)
Monitoring of production system components (including total system workload, hardware,
and network issues)
Implementing high availability solutions (redundant systems and network connections)
A comprehensive backup (and restore) process that is tested on a routine basis
A cost exists to implementing highly available high-performance systems. This cost must be
weighed against the cost of not implementing such systems.
The following sections provide more information about example system implementations that
allow high performance, scalability, reliability, and availability.
In both of these scenarios, the configuration results in a single Content Manager OnDemand
instance view from both the administrative and user perspectives.
With this flexibility and scalability, you can configure Content Manager OnDemand systems
so that they meet a wide range of both workload and operational requirements.
Figure 12-1 illustrates a single Content Manager OnDemand instance. In this figure, the
Content Manager OnDemand server supports the library server, one or more object servers,
and one or more load processes. The following sections provide examples of how the Content
Manager OnDemand server can be scaled both vertically and horizontally.
Figure 12-1 Single system: Object server with temporary storage (HFS or zFS), cache and
archive storage (TSM/OAM/VSAM), client, and load process
Vertical scalability is limited by the architectural hardware constraints of the system. For
example, if the system supports only 24 GB of memory, that memory limitation can be
overcome only by buying a larger system.
At the process level, the Content Manager OnDemand server runs multiple processes:
A library server
One or more object servers
One or more load jobs
The expiration process
On IBM i, when you access the ASM archives, connection pooling is not required for store
requests. When a store request is made, ASM opens a connection and keeps it open until the
data store request is complete. In addition, ASM allows the aggregation of objects, sending
fewer objects to storage media than otherwise are sent without aggregation.
On Multiplatforms and z/OS, you can aggregate documents that are loaded from Content
Manager OnDemand Web Enablement Kit (ODWEK) before you store them in the archive.
The document is stored to cache where it is appended to the storage object until the object
reaches 10 MB (defined storage object size), at which point it is migrated to a storage
manager, such as Tivoli Storage Manager. For more information about this topic, see the
following website:
http://www.ibm.com/support/docview.wss?uid=swg21587507
In the example that is shown in Figure 12-2, the Content Manager OnDemand system is
horizontally scaled by placing the library server, object servers, and load processes on
multiple systems.
In this figure, users (GUI client, web client, and APIs) access the system, while the stored
data and the load job processing are distributed across multiple independent object servers.
Figure 12-2 Horizontal scaling: Multiple object servers (z/OS and Multiplatforms)
This form of horizontal scalability provides better performance, reliability, and scalability by
distributing the storage and retrieval workload over multiple systems.
From a Content Manager OnDemand perspective, no limit exists to the number of object and
load process servers. Each of the servers can run to its maximum capacity. Operational
limitations are imposed by the TCP network bandwidth that connects all of the servers and by
the available data center floor space. Both of these constraints can be reduced by placing
multiple servers in a rack-mounted configuration.
An object server controls the storage and retrieval of the archived data. The archived data is
stored in a storage subsystem. The number and architecture of these subsystems can be
scaled to the limitations of the subsystem. Each object server can support one or more
storage subsystems, and each storage subsystem can consist of multiple storage devices, as
shown in Figure 12-3.
Figure 12-3 Archive scalability: Storage subsystems (cache, OAM, and Tivoli Storage Manager)
Both Tivoli Storage Manager and OAM provide hierarchical data management facilities. Data
can be stored on different devices based on the age or predicted frequency of data access.
For example, frequently accessed data might be placed on high-speed disk and infrequently
accessed data might be placed on tape. When the data is requested by a user, the location of
the data is transparent to the user. The only perceived difference from a user perspective is
the response time, which is mainly a factor of the type of device on which the data is stored. In
this example, tape access is slower than disk access.
In summary, better performance is achieved by distributing the storage and retrieval workload
over multiple systems and multiple devices.
The figure shows this configuration on two systems, System A and System B.
This scenario is common in organizations with large installed systems, such as AIX or z/OS,
that have enough available capacity to support the required Content Manager
OnDemand workload. One advantage of this configuration is that you can control the priority
of work and computer resource distribution to each of the LPARs, such as the number of
processors or the processing priority (depending on the computer system/operating system
architecture) that is allocated to each of the LPARs. So, for example, load jobs can be
assigned a low priority during the day when the focus is on data retrieval and a high priority
during the night when the focus is on data loading.
This setup supports horizontal scalability by using multiple technologies as appropriate. The
main constraint is that clients must have access to all systems through TCP/IP.
In this figure, LPAR-1 runs the library server (arssockd) with temporary storage. LPAR-2
through LPAR-N each run an object server (arsobjd-1 through arsobjd-n) with cache storage
(ars1.cache and ars2.cache), temporary storage, archive storage, and configuration files
(ars.ini, ars1.cfg, and ars2.cfg). Clients connect to all of the LPARs through the TCP/IP
network.
All of these techniques work well and provide various levels of near real-time high availability
based on the degree to which the redundant systems are created and are kept in
active-standby mode.
In this figure, the Content Manager OnDemand client connects to multiple z/OS systems that
share the OAM database (OAM DB) and other shared data.
Both of these systems (all LPARs and all instances of the Content Manager OnDemand
server) access a single set of Content Manager OnDemand database tables through DB2
data sharing. They also access a single OAM archive system through an OAMplex. Not
shown in the figure is the access to a single Job Entry Subsystem (JES) spool and a shared
file system (which consists of a set of hierarchical file systems (HFS) or z/OS file systems
(zFS)). The term “single” is used to imply that the same set of data is available to all systems
concurrently. Each of these single systems consists of highly redundant components and
therefore does not represent a SPOF.
From a client perspective, the “cluster” is a single IP address. Incoming client requests are
received by the sysplex distributor/Workload Manager (WLM). WLM monitors the various
systems in the Parallel Sysplex and selects the appropriate Content Manager OnDemand
server to forward the request to based on the current system workload and availability, so that
the system that is more available (less busy) receives the request.
The IBM Power Systems™ Capacity BackUp (CBU) offerings support disaster recovery and
high availability needs. The CBU offerings recognize that true high availability or disaster
recovery solutions require at least two systems. If one system is not available, the other
system takes over. The CBU offering provides flexible and economic options for deploying
business continuity operations.
In a high availability environment on IBM i, you might not want to replicate the following
directories because OnDemand places only temporary data in them, and this data might
occupy a large amount of space:
Do not replicate the temporary integrated file system (IFS) directories for your instances.
For example, do not replicate /QIBM/UserData/OnDemand/QUSROND/TMP or
/QIBM/UserData/OnDemand/QUSROND/PRTTMP, where QUSROND is your instance name.
Do not replicate the home directory for the user that is storing data. For example, if
JOHNDOE is the name of the user profile that stores data into Content Manager
OnDemand, do not replicate /home/JOHNDOE.
Do not replicate the /tmp directory.
A Content Manager OnDemand server can scale from a Windows server up to a cluster of
z/OS systems. It is important to initially select an installation that meets the following
requirements:
Appropriate for your current workload in terms of the following items:
– Performance
– Reliability
– Availability
– Scalability
Support for your future growth requirements if the following actions are necessary:
– Increase the number of users that access the system
– Increase the quantity of data that is stored in the system
– Change in the types of archived data
– Change in the preprocessing requirements
A high performance system, such as Content Manager OnDemand, provides both high
throughput and short response times.
The following sections describe the various components of a Content Manager OnDemand
system and its architecture. They provide guidance about the parameters and configurations
that you can change to improve performance.
The ability to separate the object server from the library server offers two main advantages:
The ability to share workload by dedicating machines to individual tasks
The ability to reduce the impact of retrieving a large piece of data over a network that is
either slow or overloaded
ARS_NUM_DBSRVR
The ARS_NUM_DBSRVR parameter is set in the ars.cfg file. This parameter is the maximum
number of threads that are concurrently open between the Content Manager OnDemand
library server and DB2. Typically, this value is set to a number between 4 - 30. This number
must be large enough to support all of the concurrent database requests from all users and
clients and Content Manager OnDemand commands and daemons, such as ARSLOAD, ARSDOC,
ARSDB, ARSMAINT, and ARSADMIN. This number must not exceed the number of DB2 batch
connections (MAXDBATS for z/OS and MAXAPPLS for Multiplatforms). The number of DB2
batch connections must be greater than the ARS_NUM_DBSRVR, plus all of the other connections
that are required by all DB2 applications that you defined in your DB2 configuration.
For systems that are running several large load jobs in parallel, or for systems with large
numbers of active users, increase this parameter from the default of 4.
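As a sketch, the parameter is a simple name=value line in the ars.cfg file. The value of 20 here is only an illustrative starting point for a system with several parallel load jobs; the right value depends on your workload and your DB2 connection limits:

```
# ars.cfg (fragment) - illustrative value only
ARS_NUM_DBSRVR=20
```

Remember that the DB2 batch connection limit (MAXDBATS or MAXAPPLS) must exceed this value plus the connections required by all other DB2 applications.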
For performance reasons, when the Content Manager OnDemand file systems are created,
the following components must not be on the same physical media:
Cache file system
Database file system
Primary logs file system
Secondary logs file system
Load/indexing file system
Content Manager OnDemand temporary space file system
For effective storage management, one key performance feature of Content Manager
OnDemand is its ability to load data to archive media, while simultaneously retaining a
temporary cached copy of the most recent archived data on fast access media (such as the
hard disk drive (HDD)). Content Manager OnDemand handles the expiration and
management of this cached copy of the data. After a certain predefined period elapses, the
data is removed from cache. The only remaining copy is held on the much slower archive
media that is managed by Tivoli Storage Manager, OAM, VSAM, or ASM, depending on
the platform.
If performance problems are encountered at the storage manager level, the issue is almost
always related to the inherent qualities of the slower media types (such as optical platters and
tape volumes) or how the archive media manager is configured.
ARS_NUM_OAMSRVR
This parameter specifies the maximum number of concurrently attached threads between the
Content Manager OnDemand object server and OAM for z/OS. Typically, this value is set to a
number between 4 - 30, depending on client access patterns and object storage locations
(disk versus tape). This parameter has a maximum value of 30. Any value larger than 30
results in a U0039 abend.
ARS_NUM_OAMSRVR_SLOW_RETRIEVE
This parameter determines the number of task control blocks (TCBs) that the Content
Manager OnDemand server starts to handle connections to OAM for retrievals from objects
with a slow retrieval time as defined by the ARS_OAM_SLOW_RETRIEVE_THRESHOLD parameter.
The ARS_NUM_OAMSRVR_SLOW_RETRIEVE parameter applies to all object servers. If the value that
is specified for this parameter is zero (0), no TCBs are dedicated for slow retrievals. All
retrievals are processed by the TCBs that are associated with the ARS_NUM_OAMSRVR
parameter. The default is zero (0). The ARS_NUM_OAMSRVR_SLOW_RETRIEVE TCBs are in addition
to the ARS_NUM_OAMSRVR TCBs, and they use additional DB2 connections.
ARS_OAM_SLOW_RETRIEVE_THRESHOLD
This parameter specifies the threshold at which OAM retrievals are processed by the TCBs
that are associated with the ARS_NUM_OAMSRVR_SLOW_RETRIEVE parameter. If the estimated
retrieval time for an object (as indicated by QELQERRT) is greater than or equal to the value
of the ARS_OAM_SLOW_RETRIEVE_THRESHOLD parameter, the OSREQ RETRIEVE is processed
by an ARS_NUM_OAMSRVR_SLOW_RETRIEVE TCB. The default value is 12000. For other valid
QELQERRT values, see the Object Access Method Application Programmer’s Reference,
SC35-0425-08. An ARS_OAM_SLOW_RETRIEVE_THRESHOLD value of zero (0) with a nonzero
ARS_NUM_OAMSRVR_SLOW_RETRIEVE value causes all OAM retrieve requests to be processed by
the ARS_NUM_OAMSRVR_SLOW_RETRIEVE TCBs, while the ARS_NUM_OAMSRVR TCBs process store,
query, and delete requests.
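As an illustrative sketch, the three OAM parameters that are described above might appear together in the ars.cfg file as follows. The thread counts are examples only; 12000 is the documented default threshold:

```
# ars.cfg (fragment) - illustrative values only
ARS_NUM_OAMSRVR=10
ARS_NUM_OAMSRVR_SLOW_RETRIEVE=4
ARS_OAM_SLOW_RETRIEVE_THRESHOLD=12000
```

With these values, the object server runs 10 TCBs for normal OAM requests plus 4 dedicated TCBs for retrievals whose estimated retrieval time is 12000 or greater.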
During the load process, in addition to any command-line parameters that are supplied, the
application group and application parameters are retrieved from the library server. Then,
based on the parameter definitions, the load process completes the following steps:
1. Selects the indexer to be used for indexing the report data and retrieves the indexing
parameters.
2. Reads in the report data from the identified source location. The input report data can be
of any data type.
3. Indexes the report data based on the defined indexing parameters.
4. Segments the report into “documents”.
5. Compresses the documents.
6. Stores the compressed documents in storage objects (10 MB by default).
7. Sends the storage objects to the object server where they are stored in the identified
archive (storage node).
8. Sends the index data for the stored objects to the library server where the indexes are
stored in the appropriate application group data table.
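The segmentation and packing in steps 5 and 6 can be pictured with a short Python sketch. This is not server code; zlib and the function name are stand-ins used only to illustrate how compressed documents accumulate in fixed-size storage objects:

```python
import zlib

STORAGE_OBJECT_SIZE = 10 * 1024 * 1024  # 10 MB, the default storage object size

def build_storage_objects(documents, max_size=STORAGE_OBJECT_SIZE):
    """Illustrate steps 5 and 6: compress each document and append it to a
    storage object until the object reaches max_size, then start a new one."""
    objects, current, current_size = [], [], 0
    for doc in documents:
        compressed = zlib.compress(doc)  # stand-in for the server's compression
        if current and current_size + len(compressed) > max_size:
            objects.append(b"".join(current))  # object is full; ship it
            current, current_size = [], 0
        current.append(compressed)
        current_size += len(compressed)
    if current:
        objects.append(b"".join(current))  # final, partially filled object
    return objects
```

Many small compressed documents are packed into one storage object before it is sent to the object server, which is why loads of small documents benefit from this aggregation.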
13.2.2 Recommendations
For optimal performance in loading, we recommend the following practices:
For Multiplatforms and z/OS, run parallel load jobs to take advantage of multiprocessors,
large memory pools, multiple data paths, and multiple disk drives.
Ensure that each parallel load is loading to a different application group.
Ensure that you set up a different temp directory for each of the parallel loads. The -c
indexDir indexer parameter (which specifies the directory in which the indexer stores
temporary data) must always be specified for ARSLOAD and must be unique for each
running ARSLOAD process.
For IBM i, start multiple output queue monitors over a single output queue to improve
throughput and take advantage of multiprocessors, large memory pools, and multiple disk
drives.
Each Content Manager OnDemand process is limited by the performance of a single
processor. For example, the OS/400 indexer uses only one processor when it indexes a
document. Using two or more processors in your system or LPAR does not improve the
performance of the OS/400 indexer. However, by using two or more processors in your
system or LPAR, you might be able to run multiple load jobs simultaneously. You can start
multiple output queue monitors over a single output queue to improve document load
performance.
For IBM i, the use of the Merge Spooled Files (MRGSPLFOND) command can provide
significant performance improvements when you load SCS spooled files.
Note: For most users, a single load process meets the ingestion throughput
requirements.
Data types and exits: The data type, and whether an exit is started during the load
process, affect the load throughput. Test samples of the different data types that represent
your typical loads.
TCP/IP considerations
A known Windows configuration setting might affect performance when you connect to a
Content Manager OnDemand server. During repeated searches and retrievals on a Content
Manager OnDemand server, many Windows sockets are opened and closed. Two default
Windows settings might affect heavy traffic between the client and the Content Manager
OnDemand server:
When an application closes a Windows socket, Windows places the socket’s port into
TIME_WAIT status for 240 seconds; during this time, the port cannot be reused.
Windows limits the number of ports that an application can use to 5000.
For more information about TcpTimedWaitDelay and MaxUserPort, see your Windows
documentation.
Verify with your network personnel that the values you set are appropriate for your
environment.
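For reference, both settings are DWORD values under the TCP/IP parameters key in the Windows registry. The following .reg fragment is a sketch only; the values shown (30 seconds and 65534 ports) are illustrative, not recommendations:

```
; Example values only - verify the appropriate values for your environment
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; Shorten TIME_WAIT from the 240-second default to 30 seconds
"TcpTimedWaitDelay"=dword:0000001e
; Raise the user port limit from the 5000 default to 65534
"MaxUserPort"=dword:0000fffe
```

Changing these values affects every application on the machine, which is why they should be reviewed with your network personnel first.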
Performance Tuning
The concepts that are shown in Figure 13-2 are described for your reference.
The retrieval performance is mostly limited by the resources that are available to the Content
Manager OnDemand server.
Note:
The performance tuning process demands great skill, knowledge, and experience, and
it cannot be performed by only analyzing statistics, graphs, and figures.
The goal is to tune the Content Manager OnDemand server. You can “see” the
bottlenecks in the server only if both the client and the network are clear of bottlenecks.
During PDF document creation, resources, such as images and custom fonts, are placed in
the data stream once and then referenced many times from within the PDF file. If a large
report is produced from many small documents, that report requires only one copy of the
resources.
However, when the PDF is indexed, the PDF Indexer creates many PDF documents from the
input file. Each of these documents requires a certain number of PDF structures, which define
a document. These documents are concatenated together in the .out file, and then loaded
into Content Manager OnDemand as separate documents. Because the resources are
extracted and placed into a separate resource file, they are not included in each document.
For an illustration of the process, see Figure 13-3.
Figure 13-3 shows one PDF file, with documents and resources, converted to many separate
PDF documents with the resources removed in the .out file.
If no resources are collected, the size of the .out file, which contains all of the individual
documents, might be larger than the original file. For tips about how to reduce the size of the
output file, see 7.3.5, “PDF indexing: Using internal indexes (Page Piece Dictionary)” on
page 173.
Create PDF data with the base 14 fonts, which do not need to be included in the PDF file.
Because they are not included in the PDF file, they are not extracted during resource
collection, which improves performance. For more information about the PDF data stream
and fonts, see 7.3.1, “PDF fonts and output file size” on page 166.
When you index transaction data, if each transaction number from each line of the report is
treated as a database index, such as date or customer name, the database becomes large
quickly. Content Manager OnDemand has a special type of field for transaction data, which is
illustrated in Figure 13-4 by the boxed data on the left of the window.
The transaction data field selects the first and last values from a group of pages and only
these group level values are inserted into the database. Content Manager OnDemand
queries the database by comparing the search value that is entered by the user to two
database fields, the beginning value and the ending value. If the value that is entered by the
user falls within the range of both database fields, Content Manager OnDemand adds the
item to the document list.
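The range comparison that is described above can be illustrated with a small sketch (illustrative only, not server code): a group of pages is indexed only by its first and last transaction values, and a search value matches if it falls between them:

```python
def matches_transaction_range(search_value, begin_value, end_value):
    # Group-level test: the item appears in the document list only if the
    # user's search value falls within the stored [begin, end] range.
    return begin_value <= search_value <= end_value

# A group covering transactions 100001-100999 matches a search for 100500
print(matches_transaction_range("100500", "100001", "100999"))  # True
```

Because only two values per group reach the database, the index stays small even when every line of the report carries a transaction number.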
It is a common misconception that if fonts are collected when the data is loaded, they are
available for viewing in the Windows client. However, Windows does not recognize AFP fonts.
It is not possible to use these fonts even if they are sent to the client as part of the resource.
Windows clients require a mapping from AFP fonts to Adobe Type Manager (ATM) fonts or
TrueType (TT) fonts. Content Manager OnDemand provides this mapping for most standard
fonts. For more information about mapping custom fonts, see IBM Content Manager -
Windows Client Customization Guide and Reference, SC27-0837.
One possibly useful implementation of storing fonts with the resource group is when server
reprint is necessary. If the fonts are stored with the resource group, they can be retrieved from
Content Manager OnDemand and used by AFP printers. However, if fonts are collected, they
are also sent to the client as part of the resources group and then discarded. Storing the fonts
with the resource group serves only to increase network traffic when transferring the resource
to the workstation. A more practical option for server printing is to store the font in a fontlib
and to keep only the reference (path) to the fontlib. Although the font is accessible on the
server, Print Services Facility (PSF) or InfoPrint does not need the font to be inline (stored in
the resource group). The use of this approach also allows all AFP data that references the
font to use the single instance of the font without redundant inline storage.
Figure 13-5 on page 311 shows the indexer information in the application where you can
select the resources to collect with the Restype= parameter. Unless reprinting to AFP printers
with 100% fidelity is a requirement, do not collect the fonts.
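As a sketch of that choice, an indexer parameter that collects form definitions, page segments, and overlays while deliberately omitting fonts might be coded as the following line; confirm the valid RESTYPE values for your release in the Indexing Reference:

```
RESTYPE=FDEF,PSEG,OVLY
```

Omitting FONT from the list keeps the fonts out of the resource group, so they are neither stored redundantly nor transferred to the client with each retrieval.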
The Content Manager OnDemand for i server does not collect the fonts and it does not give
the administrator that option. The Resource Information window (under Indexer Properties) is
not available to the Content Manager OnDemand for i administrator. If you are reprinting to an
AFP printer, the fonts must be available on the IBM i server, or font substitution is performed.
ODF can distribute reports that are stored in a Content Manager OnDemand server on any
platform that is supported by Content Manager OnDemand.
Both of these components had certain strengths and weaknesses. In V9.5, the
strengths of both of these components were merged into a single component named
OnDemand Distribution Facility (ODF), which offered the following advantages:
It runs on all Content Manager OnDemand platforms.
It can run on a separate platform from where the Content Manager OnDemand server is
installed.
Its operation can be monitored through a new graphical monitor, the OnDemand Monitor.
It includes transform support where Content Manager OnDemand can transform content
from one data type to another data type before the content is sent as part of an ODF
distribution.
This chapter describes ODF V9.5. For any new installations on Content Manager OnDemand
versions before 9.5 (on z/OS or AIX), we suggest that you install ODF.
Figure 14-1 shows the evolution and merger of ODF 9.5 from its predecessors, ODF 9.0 and
the Report Distribution Facility (RDF) 9.0.
When you load documents into Content Manager OnDemand, you might need to print these
documents or send them to various people in your organization.
Content Manager OnDemand automates the process of sending the documents that are
loaded into Content Manager OnDemand to print (or the JES spool), to a file (or a z/OS
data set), to a recipient as an email attachment, or to a recipient as an email notification.
Figure 14-2 is an overview of the OnDemand Distribution Facility and its interaction with the
Content Manager OnDemand server.
Figure 14-2 shows that the Content Manager OnDemand server and its operation did not
change. Reports and documents are loaded into the server, and system users continue to
view and print their documents normally. The only addition to the library server is a set of ODF
tables that define which documents are distributed to which users and when. The ODF
process reads the ODF tables, collects the required documents, and bundles them for
each recipient. ODF then sends out the “bundles” to the appropriate destinations (email, file,
and print). Alternatively, ODF can send each recipient (based on system definitions) an email
notification that the report and document were loaded and are available for viewing.
Different organizations have different report and document load and retrieval patterns. In
certain cases, documents are loaded and never retrieved. In other cases, a loaded document
is retrieved multiple times by multiple users. In other cases, it is known that when a specific
report or document is loaded, one or more copies must be distributed to one or more
destinations. What benefit does automating this distribution process provide?
The biggest benefit is that as reports are loaded into Content Manager OnDemand regularly,
they can be delivered automatically to one or more users as they are loaded. Also, after the
distribution is set up, no other changes are required, such as changing the document
selection criteria to identify the latest data that is loaded.
For example, suppose that your organization generates monthly statements for your
customers. You must store these documents in Content Manager OnDemand, and you must
print the statements and mail them to the customers. With ODF, you can set up a distribution
that automatically retrieves these documents as they are loaded into Content Manager
OnDemand and sends them to a spool file for printing.
The applications for using ODF are endless, but the basis for using it is the same. Documents
are loaded regularly and are needed by one or more users as they become available in
Content Manager OnDemand. Let us look at a specific example from our fictitious company
that was introduced in 1.2.1, “Background information of an example company” on page 6.
AFinancial Co generates monthly credit card statements for all its customers. These
customers can choose to receive a hardcopy of the statement or have the statement sent to
them as an email attachment.
In this example, even though separate customer statements are created each month, they are
loaded into the system at the same time, so only one load occurs each month. This
information is important when you are determining the best way to set up the distribution.
Before a distribution is set up, ask yourself the following questions:
What documents are needed?
Who receives the documents?
When are the documents retrieved and delivered?
Where are they delivered?
In general, you identify the documents by creating an SQL query that uses index fields and
values that uniquely identify the documents that you want to retrieve when they are loaded.
You can then define the distribution to include multiple report bundles with different SQL
queries for each bundle. If the SQL retrieves documents that are identical except for a
value that identifies the recipient, a single distribution can be used with a recipient list. In this
case, the SQL specifies a wildcard value. During processing, ODF fills in the recipient ID in the
SQL statement. For example, a recipient list contains recipients 100001, 100002, and 100003
and an SQL statement of “Where branch_id = '$ODF_RECIPIENT'”. When this recipient list is
processed, ODF creates a distribution for recipient 100001 with all reports where branch_id =
'100001', recipient 100002 will receive a distribution that contains all reports where branch_id
= '100002', and so on.
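The substitution that ODF performs can be pictured with a small sketch (illustrative only, not ODF code): the same SQL template is expanded once per recipient in the list:

```python
def expand_recipient_sql(template, recipients):
    # One query per recipient: the $ODF_RECIPIENT wildcard is replaced
    # by each recipient ID before the query selects that recipient's reports.
    return {r: template.replace("$ODF_RECIPIENT", r) for r in recipients}

queries = expand_recipient_sql(
    "Where branch_id = '$ODF_RECIPIENT'",
    ["100001", "100002", "100003"],
)
print(queries["100002"])  # Where branch_id = '100002'
```

Each expanded query drives one distribution, so every recipient receives only the reports for its own branch_id.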
Recipients who receive a printed copy of the distribution can choose to include a banner page
in the distribution by selecting Use Banner Page. You can specify up to eight header lines to
include in the banner page, as shown in Figure 14-5 on page 321.
Recipients are added to the list by selecting the ID on the left and clicking Add, as shown in
Figure 14-6 on page 322.
To create a report ID, specify the identifier and then choose the application group and
application from the drop-down selection.
Distribution Name
With the recipient or recipient list name, the distribution name uniquely identifies the
distribution. For our example, we name this distribution CREDIT CARD STATEMENTS.
Recipient/List
Choose your recipient. For our example, we add the newly created recipient from the
drop-down menu.
Status
Two values are valid for status:
Active indicates that the distribution is processed when the documents are loaded.
Inactive indicates that the distribution is not processed when the documents are loaded.
Location
The location specifies where the distribution is delivered. We select E-mail for our distribution.
Note: The “Notify by e-mail” check box is available for use with Location values of Print,
File, or None. The selection of the “Notify by e-mail” check box sends an email to the
recipient to notify them that the distribution is available.
Customer Variables
This field contains any information that you might need to pass to the customizable user exits.
For example, if this distribution requires special spool file allocation options, you can enter the
information in this field. The preallocation exit can then use the information to change the
spool file allocation parameters. For our example, we leave this field blank.
Account
This field is optional. This field specifies the name to use on the JCL job card. For our
example, we leave this field blank.
Distribution Method
The distribution method controls the scheduling and processing of the distribution. Because
we want the distribution to be processed while the documents are loaded, we select the
Loaded method.
Continue/Wait indicator
This option is valid only when the Distribution Method is Time of Day or Time of Print. From
the drop-down list, select either Continue to continue processing report bundles as they are
available after the Time is reached, or select Wait to wait until the next Time occurrence to
print any report bundles that become available.
Manifest Indicator
This value indicates whether a manifest page, which lists the report bundles that are included
in the distribution, must be created. The manifest defaults to a separate file. If you want the
manifest in the same file as the report bundles, specify Manifest in Sysout.
Sequence
This value identifies the sequence in which the report bundles are included in the distribution. The
default is 10, and each new report bundle increments the sequence by 10. This value
provides flexibility to add report bundles without the need to renumber any other report
bundles.
Report ID
This number identifies the report to include. For our example, we select the previously added
report ID from the drop-down menu.
Wait/Ignore Indicator
When more than one report bundle is specified in the distribution, this value tells ODF
whether this report bundle must be available before the distribution is processed. A value of
Wait indicates that you wait until this report is loaded before the distribution processing
begins. A value of Ignore indicates that the distribution is processed even if this report bundle
is not available. This function is useful if documents are loaded at different times but you want
them to be processed and included in a single distribution instance.
Report Build
This field indicates whether the distribution includes the full report or whether a query is
performed and only a portion of the report is included. When Query is selected, the SQL
source option is available to build the query. You can either type the query by using the
Keyboard option or build the SQL, as shown in Figure 14-10 on page 327. For our example,
we build a query to include only the statements for John Doe.
Additionally, users can specify a wildcard with a substring in the SQL statement. On
execution, ODF will substitute the correct portion of the recipient or recipient list name.
14.3.1 Recipient
Run the following command to add a recipient:
arsxml add -h myod -u myuser -p mypwd -v -i /recipientAdd.xml
Example 14-1 on page 328 shows the content of our example recipientAdd.xml file.
14.3.2 Report ID
Run the following command to add a report ID:
arsxml add -h myod -u myuser -p mypwd -v -i /reportIDAdd.xml