SG24-6915-02
Content Manager
OnDemand Guide
Administration, database structure,
and multiple instances
Wei-Dong Zhu
Carol Allen
Patrick Hofleitner
Doris Ming Ming Ngai
Paul I Harris
ibm.com/redbooks
International Technical Support Organization
December 2007
SG24-6915-02
Note: Before using this information and the product it supports, read the information in
“Notices” on page xxi.
This edition applies to Version 7, Release 1, IBM DB2 Content Manager OnDemand for
Multiplatforms (product number 5697-G34), Version 7, Release 1, IBM DB2 Content Manager
OnDemand for z/OS and OS/390 (product number 5655-H39), and Version 5, Release 3, IBM
DB2 Content Manager OnDemand for iSeries™ (product number 5722-RD1).
© Copyright International Business Machines Corporation 2003, 2006, 2007. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP
Schedule Contract with IBM Corp.
Contents
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
The team that wrote this IBM Redbooks publication . . . . . . . . . . . . . . . . . . . xxiv
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvi
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvi
Chapter 2. Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.1 Report administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.1.1 Storage sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.1.2 Application groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.1.3 Applications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.1.4 Folders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.1.5 The Report Wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.2 User and group administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.2.1 User types, authorities, and functions . . . . . . . . . . . . . . . . . . . . . . . . 48
2.2.2 Decentralized system administration . . . . . . . . . . . . . . . . . . . . . . . . 49
2.2.3 OnDemand XML Batch Administration . . . . . . . . . . . . . . . . . . . . . . . 53
9.1.2 When to convert data streams . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
9.1.3 How to convert the data: integrated solutions with OnDemand . . . 255
9.2 IBM AFP2WEB Services Offerings and OnDemand . . . . . . . . . . . . . . . . 256
9.2.1 AFP2HTML: converting AFP to HTML . . . . . . . . . . . . . . . . . . . . . . 257
9.2.2 AFP2PDF: converting AFP to PDF . . . . . . . . . . . . . . . . . . . . . . . . . 265
9.2.3 AFP2XML: converting AFP to XML . . . . . . . . . . . . . . . . . . . . . . . . . 276
9.2.4 AFPIndexer: indexing composed AFP files . . . . . . . . . . . . . . . . . . . 279
9.3 Xenos and OnDemand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
9.3.1 Converting AFP to XML through ODWEK. . . . . . . . . . . . . . . . . . . . 285
9.3.2 Using the AFP2PDF transform with ARSLOAD . . . . . . . . . . . . . . . 302
9.3.3 Job Supervisor program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
14.2.2 Database. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
14.2.3 Tivoli Storage Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
14.2.4 OnDemand client logon. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
14.2.5 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
14.2.6 ODWEK. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
14.2.7 OnDemand server hang or crash . . . . . . . . . . . . . . . . . . . . . . . . . 467
14.2.8 Exporting information to a local server . . . . . . . . . . . . . . . . . . . . . 467
14.3 OnDemand trace facility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
14.3.1 Enabling the trace facility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
14.3.2 Setting trace parameters in the OnDemand administrative client . 475
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633
Other resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633
Referenced Web sites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
IBM Redbooks collections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637
10-1 List of monthly sales reports for Acme Art Inc. . . . . . . . . . . . . . . . . . . 315
10-2 OnDemand Administrator Client with Report Distribution . . . . . . . . . . 317
10-3 Bundle contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
10-4 User with an e-mail address and server printer specified . . . . . . . . . . 320
10-5 Report window for the Northwest regional sales report . . . . . . . . . . . . 322
10-6 SQL Query window for the Northwest regional sales report . . . . . . . . 323
10-7 Add a Banner window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
10-8 Example of a separator banner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
10-9 Add a Schedule window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
10-10 General tab of the Add a Bundle window. . . . . . . . . . . . . . . . . . . . . . . 327
10-11 Bundle Contents tab of the Add a Bundle window . . . . . . . . . . . . . . . . 328
10-12 General tab of the Add a Distribution window . . . . . . . . . . . . . . . . . . . 329
10-13 Bundle tab of the Add a Distribution window . . . . . . . . . . . . . . . . . . . . 330
10-14 Schedule tab of the Add a Distribution window . . . . . . . . . . . . . . . . . . 331
10-15 Recipients tab of the Add a Distribution window . . . . . . . . . . . . . . . . . 332
10-16 Report Distribution Parameters window. . . . . . . . . . . . . . . . . . . . . . . . 333
10-17 Search for Distributions window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
11-1 Select OnDemand system parameters . . . . . . . . . . . . . . . . . . . . . . . . 347
11-2 System Parameters window for user exit select . . . . . . . . . . . . . . . . . 351
11-3 Setting the query restriction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
11-4 Indexer Properties window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
11-5 Specify the exit load module name . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
12-1 Defining the RTPS01 query . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
12-2 Specify File Selections display. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
12-3 Define Result Fields display. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
12-4 Select and Sequence Fields display . . . . . . . . . . . . . . . . . . . . . . . . . . 412
12-5 Select Records display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
12-6 Output type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
12-7 Database output. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
12-8 Running query RPTS01 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
12-9 PRTRPTS01 CL program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
12-10 Application version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
12-11 Update an Application display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
12-12 Update an Application Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
12-13 Adding an application ID value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
12-14 DLTRPTS01 CL program. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
12-15 Folder settings before making changes . . . . . . . . . . . . . . . . . . . . . . . . 423
12-16 Folder settings after making changes . . . . . . . . . . . . . . . . . . . . . . . . . 423
12-17 Default settings for updating the folder date . . . . . . . . . . . . . . . . . . . . 424
12-18 Adding a folder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
12-19 Updating the FINRPTS folder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
12-20 Update application group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
12-21 Folder search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
15-25 Accessing CD image from disk: opening a folder . . . . . . . . . . . . . . . . 506
15-26 Accessing CD image from CD-ROM: logging in . . . . . . . . . . . . . . . . . 506
15-27 Accessing CD image from CD-ROM: opening a folder . . . . . . . . . . . . 507
15-28 Customized About menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
15-29 Customized About window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
15-30 Registry setting modification: single-selection hit list . . . . . . . . . . . . . . 511
15-31 Registry setting modification: enhanced Folder List Filter option. . . . . 512
15-32 Registry setting modification: FILTER_AUTO_APPEND_WILDCARD 512
15-33 Registry setting modification: FILTER_AUTO_UPPERCASE . . . . . . . 513
15-34 Registry setting modification: Folder List Filter changes . . . . . . . . . . . 513
15-35 Line data background customization: Green Bar Alternating Stripes . 514
15-36 Editing colors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
15-37 Registry setting: GREEN_BAR_COLOR . . . . . . . . . . . . . . . . . . . . . . . 515
15-38 Output of registry setting for GREEN_BAR_COLOR. . . . . . . . . . . . . . 516
15-39 Registry setting: SHOW LOGO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
15-40 Negative number capture in graphical indexer . . . . . . . . . . . . . . . . . . 518
15-41 Negative number displayed in OnDemand client . . . . . . . . . . . . . . . . . 518
15-42 Message of the day . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
15-43 Windows registry for Message of the day . . . . . . . . . . . . . . . . . . . . . . 520
16-1 ODF Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
16-2 ODF Administration screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
16-3 Maintain Recipient screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
16-4 Maintain recipient list screen - Display recipient information . . . . . . . . 529
16-5 Display Recipient: Display recipients defined in ODF database . . . . . 529
16-6 Cross Reference Maintenance screen. . . . . . . . . . . . . . . . . . . . . . . . . 530
16-7 Maintain Distribution screen - With option 5 . . . . . . . . . . . . . . . . . . . . 531
16-8 Maintain Distribution screen - With option 6 . . . . . . . . . . . . . . . . . . . . 532
16-9 Display Distributions screen. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
16-10 Display Bundle Definition screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
16-11 Display Bundle Definition screen - with predefined fields . . . . . . . . . . 535
16-12 Maintain Bundle Definitions screen sample . . . . . . . . . . . . . . . . . . . . . 536
16-13 Bundle Definition Report Date Ranges screen . . . . . . . . . . . . . . . . . . 537
16-14 Maintain Bundle Definition: e-mail notification and delivery options . . 541
16-15 Defining a report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
16-16 Adding a report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
16-17 Adding a banner. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
16-18 Creating a bundle. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
16-19 Defining a bundle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
16-20 Defining a schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
16-21 Adding a schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
16-22 Defining a distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
16-23 Defining a distribution name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
16-24 Defining a distribution bundle. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
Tables
15-2 T format string examples with the Equal To search operator . . . . . . . 499
15-3 T format string examples with the Between search operator . . . . . . . . 499
16-1 ODF distribution tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such provisions
are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES
THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrates programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy,
modify, and distribute these sample programs in any form without payment to IBM for the purposes of
developing, using, marketing, or distributing application programs conforming to IBM's application
programming interfaces.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
Oracle, JD Edwards, PeopleSoft, Siebel, and TopLink are registered trademarks of Oracle Corporation
and/or its affiliates.
FileNet and the FileNet logo are registered trademarks of FileNet Corporation in the United States, other
countries, or both.
PostScript, Distiller, Adobe, Acrobat, and Portable Document Format (PDF) are either registered trademarks
or trademarks of Adobe Systems Incorporated in the United States, other countries, or both.
Java, JavaServer, JSP, JVM, J2EE, Solaris, Sun, Sun Fire, Sun Java, and all Java-based trademarks are
trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.
Excel, Expression, Internet Explorer, Microsoft, Visual Basic, Windows NT, Windows Server, Windows Vista,
Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other
countries, or both.
Intel, Pentium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
This IBM Redbooks publication provides helpful, practical advice, hints, and tips
for those involved in the design, installation, configuration, system administration,
and tuning of an OnDemand system. It covers key areas that are either not well
known to the OnDemand community or are misunderstood. We reviewed all
aspects of the OnDemand topics and decided to provide information about the
following areas:
Administration
Database structure
Multiple instances
Storage management
Performance
PDF indexing
OnDemand Web Enablement Kit
Data conversion
Report distribution
Exits
iSeries Common Server migration
Solution design and best practices
Troubleshooting
Did you know?
Option features
Enhancements
Because a number of other sources are available that address various subjects
on different platforms, this IBM Redbooks publication is not intended as a
comprehensive guide for OnDemand. We step beyond the existing OnDemand
documentation to provide insight into the issues that might be encountered in the
setup and use of OnDemand.
Note: IBM DB2 Content Manager OnDemand for Multiplatforms Version 8.3 is
also known as Version 7.1.2.5 or later. This IBM Redbooks publication covers
features and functions up to Version 7.1.2.5 of OnDemand for Multiplatforms.
The team that wrote this IBM Redbooks publication
This IBM Redbooks publication was produced by a team of specialists from
around the world working at the International Technical Support Organization
(ITSO), San Jose Center.
Wei-Dong Zhu is a Content Management Project Leader with the ITSO in San
Jose, California. She is a Certified Solution Designer for IBM DB2 Content
Manager. She has more than 10 years of software development experience in
accounting, image workflow processing, and digital media distribution (DMD).
Her development work in one of the DMD solutions contributed to a first time ever
win for IBM of an Emmy award in 2005. Jackie joined IBM in 1996. She holds a
Master of Science degree in computer science from the University of Southern
California.
Doris Ming Ming Ngai is an IT Availability Specialist for IBM Singapore. She has
been working in Integrated Technology Services for eight years, in the IBM
eServer pSeries® services and support team, spending most time in
implementing and supporting OnDemand on AIX® and Microsoft® Windows®.
Her areas of expertise include AIX, OnDemand, and Tivoli® Storage Manager.
Doris is a Certified Solution Expert for OnDemand Multiplatforms. She holds a
degree in electrical engineering from National University of Singapore.
Special thanks to the following people who co-authored the first version of this
IBM Redbooks publication:
Mike Adair
Stephanie Kiefer Jefferson
Henry Martens
Martin Pepper
Special thanks to the following people for their contributions to this IBM
Redbooks publication:
Debbie Wagner for her Report Distribution and XML Batch Administration
materials
Benjamin Boltz and Eric Hann for their solution design (for performance and
user satisfaction) materials
Sebastian Welter for his development of the free, open source OnDemand
Toolbox software, and the associated contributors Peter Seckinger and
Hans-Joachim Hein
Andy J. Smith and Simon E. Moore for their development of the Store
OnDemand application and the architects of the application, Martin Pepper
and Chris Corfield
Darrell Bryant for reviewing and correcting the IBM Redbooks content
Steve Henrickson for his continuous support of OnDemand IBM Redbooks
projects
Kevin Van Winkle
Timothy Yeh
IBM Software Group, Boulder, U.S.
Terry Brown
Manimala Kumaravel
Nancy O’Brien
Mike Reece
Yewen Tang
IBM Software Group and IBM Sales and Distribution, U.S.
Craig Brossman
Ronald Smith
Dave Tanigawa
IBM Systems and Technology Group, Boulder and Poughkeepsie, U.S.
Your efforts will help increase product acceptance and customer satisfaction. As
a bonus, you will develop a network of contacts in IBM development labs, and
increase your productivity and marketability.
Find out more about the residency program, browse the residency index, and
apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
Summary of changes
This section describes the technical changes made in this edition of the book and
in previous editions. This edition might also include minor corrections and
editorial changes that are not identified.
Summary of Changes
for SG24-6915-02
for Content Manager OnDemand Guide
as created or updated on January 2, 2008.
New chapters
Chapter 16, “Optional features” on page 523, covers the following topics:
– OnDemand Distribution Facility (ODF) on z/OS
– Report Distribution
– Content Manager OnDemand Toolbox
– E-mail Notification and Delivery for Multiplatforms
Chapter 17, “Enhancements” on page 571, covers the following topics:
– Web Administration Client
– Composite indexes
– Cluster indexes
– Cabinets
– File name Indexing
– LDAP security
– 64-bit support
– Tracing
Note: This edition is not a formal update of the book. We simply added the two
chapters that were created during the production of another book
(Implementing IBM Content Manager OnDemand Solutions with Case
Studies, SG24-7511). Some of the material in these two chapters might
overlap with the existing book. The original content of the book, as of the May
2006 Second Edition, has not been modified or updated for this edition. Our
purpose for the Third Edition is to make the information available to you as
early as possible rather than to wait for a formal update of the book.
New information
Chapter 1, “Overview and concepts” on page 1, covers the following new
topics:
– Supported environments
– Document access and distribution possibilities
Chapter 2, “Administration” on page 21, covers the following new topics:
– New function to export application
– New function to select the font at the graphical indexer
– New function to define the indexer parameter for Portable Document
Format (PDF) data in Unicode
– New function to do a full report browse in a folder
– New XML batch administration tool
Chapter 5, “Storage management” on page 105, introduces using IBM
System Storage™ Archive Manager with Tivoli Storage Manager, IBM
TotalStorage® DR550, and EMC Centera.
Chapter 8, “OnDemand Web Enablement Kit” on page 199, introduces the
OnDemand Portlets.
Chapter 9, “Data conversion” on page 253, discusses IBM AFP2WEB
Services Offerings.
Chapter 10, “Report Distribution” on page 313, is a new chapter that
discusses the separately priced report distribution feature in OnDemand for
Multiplatforms.
Chapter 11, “Exits” on page 337, includes additional explanation of the
following exits:
Changed information
Chapter 2, “Administration” on page 21, contains application group deletion
validation changes.
Chapter 4, “Multiple instances” on page 77, contains parameter changes
when creating an instance in OnDemand for iSeries and additional
information about Linux®.
Chapter 5, “Storage management” on page 105, includes more information
about using migration policies in OnDemand for iSeries and an introduction to
using the IBM System Storage Archive Manager.
Chapter 7, “PDF indexing” on page 185, covers the discontinuation of
Adobe® Acrobat® Approval.
Chapter 8, “OnDemand Web Enablement Kit” on page 199, includes
introduction updates and OnDemand Web Enablement Kit (ODWEK)
deployment changes.
OnDemand gives you the ability to view documents; print, send, and fax copies of
documents; and attach electronic notes to documents. OnDemand offers several
advantages, allowing you to:
Easily locate data without specifying the exact report
Retrieve the pages of the report that you require without processing the entire
report
View selected data from within a report
OnDemand provides you with an information management tool that can increase
your effectiveness when working with customers. It supports the following
capabilities:
Integrates data created by application programs into an online, electronic
information archive and retrieval system
Provides controlled and reliable access to all reports of an organization
Retrieves data that you need when you need it
Provides a standard, intuitive client with features such as thumbnails,
bookmarks, notes, and shortcuts
OnDemand client programs run on PCs and terminals attached to the network
and communicate with the OnDemand servers. The OnDemand library server
manages a database of information about the users of the system and the
reports stored on the system. The OnDemand object server manages the reports
on disk, optical, and tape storage devices. An OnDemand system has one library
server and one or more object servers. An object server can run on the same
server or node as the library server, or on a different one.
OnDemand servers manage control information and index data, store and
retrieve documents and resource group files, and process query requests from
OnDemand client programs. The documents can reside on disk, optical, and
tape storage volumes. New reports can be loaded into OnDemand every day.
This way, OnDemand can retrieve the latest information generated by application
programs.
1.2 Concepts
In this section, we examine some of the basic concepts of OnDemand:
Report and document
Application, application group, and folder
Documents are usually identified by date, with one or more other fields, such as
customer name, customer number, or transaction number. A date is optional but
highly recommended for optimizing document search performance. If there is no
date field, the load ID looks similar to this example: 5179-1-0-1FAA-0-0, where
the trailing 0-0 means that no date was used.
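As an illustration of that load ID format, the sketch below splits the identifier on hyphens and inspects the trailing date pair. Only the meaning of the trailing 0-0 is documented above; the names given to the other fields in this sketch are assumptions made for the example, not the official field definitions.

```python
def parse_load_id(load_id: str) -> dict:
    """Split an OnDemand load ID into its hyphen-separated fields.

    Only the trailing date pair is documented in the text above
    (0-0 means no date field was used); the leading fields are
    returned uninterpreted because their meanings are not given here.
    """
    parts = load_id.split("-")
    start_date, stop_date = parts[-2], parts[-1]
    return {
        "fields": parts[:-2],  # leading fields, uninterpreted in this sketch
        "start_date": start_date,
        "stop_date": stop_date,
        "has_date": not (start_date == "0" and stop_date == "0"),
    }

info = parse_load_id("5179-1-0-1FAA-0-0")
# info["has_date"] is False: the trailing 0-0 means no date field was used
```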
An administrator can define the TRANS application for a report that contains
lines of sorted transaction data. The TRANS application uses the report indexing
method to divide the report into documents. Each group of 100 pages in the
report becomes a document in OnDemand. Each group is indexed using the first
and last sorted transaction values that occur in the group. Users can retrieve the
group of pages that contains a specific transaction number by specifying the date
and the transaction number. OnDemand retrieves the group that contains the
value entered by the user.
Note: The administrator must first create an application group if one does not
exist.
Application
An application describes the physical characteristics of a report to OnDemand.
Typically, you define an application for each program that produces output to be
stored in OnDemand. The application includes information about the format of
the data, the orientation of data on the page, the paper size, the record length,
and the code page of the data. The application also includes parameters that the
indexing program uses to locate and extract index data, and processing
instructions that OnDemand uses to load index data into the database and store
documents on storage volumes.
Application group
An application group contains the storage management attributes of and index
fields for the data that you load into OnDemand. When you load a report into
OnDemand, you must identify the application group where OnDemand will load
the index data and store the documents.
Folder
A folder is the user’s way to query and retrieve data stored in OnDemand. A
folder provides users with a convenient way to find related information stored in
OnDemand, regardless of the source of the information or how the data was
prepared.
Document indexing
Document indexing is used for reports that contain logical items such as policies
and statements. Each of the items in a report can be individually indexed on
values such as account number, customer name, and balance. OnDemand
supports up to 32 index values per item. With document indexing, the user is not
Report indexing
Report indexing is used for reports that contain many pages of the same kind of
data, such as a transaction log. Each line in the report usually identifies a specific
transaction, and it is not cost effective to index each line. OnDemand stores the
report as groups of pages and indexes each group.
When reports include a sorted transaction value (for example, invoice number),
OnDemand can index the data on the transaction value. This is done by
extracting the beginning and ending transaction values for each group of pages
and storing the values in the database. This type of indexing lets users retrieve a
specific transaction value directly.
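Retrieval by transaction value amounts to a range lookup against the stored beginning and ending values. A minimal sketch, assuming the ranges are kept sorted by their beginning value (the structure here is illustrative, not the actual database layout):

```python
from bisect import bisect_right

def find_group(ranges, value):
    """Return the index of the group whose (begin, end) range contains
    value, or None if no group contains it.

    ranges: list of (begin, end) tuples, sorted by begin.
    """
    begins = [begin for begin, _ in ranges]
    i = bisect_right(begins, value) - 1   # last group starting at or before value
    if i >= 0 and ranges[i][0] <= value <= ranges[i][1]:
        return i
    return None
```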
Request manager
The request manager processes search requests from OnDemand client
programs. When a user enters a query, the client program sends a request over
the network to the request manager. The request manager works with the
database manager to compile a list of the items that match the query and returns
the list to the client program. When the user selects an item for viewing, the
request manager sends a retrieval request to the cache storage manager, if the
document resides in cache storage, or to the archive storage manager, if the
document resides in archive storage. The storage manager retrieves the
document and, optionally, the resources associated with the item. The
OnDemand client program decompresses and displays the document.
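The retrieval dispatch described above — cache storage first, archive storage otherwise — can be modeled as below. The class and method names are invented for illustration and do not correspond to real OnDemand components.

```python
class RequestManagerSketch:
    """Toy model of the request manager's retrieval routing."""

    def __init__(self, cache, archive):
        self.cache = cache       # dict: doc_id -> compressed document bytes
        self.archive = archive   # dict: doc_id -> compressed document bytes

    def retrieve(self, doc_id):
        # Documents still in cache storage are served from there;
        # otherwise the archive storage manager is asked.
        if doc_id in self.cache:
            return ("cache", self.cache[doc_id])
        if doc_id in self.archive:
            return ("archive", self.archive[doc_id])
        raise KeyError(doc_id)
```

In the real system the client then decompresses and displays the returned document; that step is omitted here.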
OnDemand also provides a system log user exit so that you can run a
user-defined program to process messages. For example, you can design a
user-defined program to send an alert to an administrator when certain
messages appear in the system log. The messages in the system log can also
be used to generate usage and billing reports.
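The user-exit idea can be sketched as a simple message filter: scan system-log records and flag those that should raise an administrator alert. The message numbers and function below are hypothetical, not actual OnDemand message IDs or the real exit interface.

```python
# Example message numbers to alert on -- placeholders only; a real
# installation would match on the actual OnDemand message IDs it cares about.
ALERT_MSG_IDS = {88, 196}

def messages_to_alert(log_records):
    """log_records: iterable of (msg_id, text) tuples from the system log.
    Returns the records whose message number is on the alert list."""
    return [(mid, text) for mid, text in log_records if mid in ALERT_MSG_IDS]
```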
Database manager
OnDemand uses a database management product, such as DB2, to maintain the
index data for the reports that you load into the system. The database manager
also maintains the OnDemand system tables that describe the applications,
application groups, storage sets, folders, groups, users, and printers that you
define to the system. You should periodically collect statistics on the tables in the
database to optimize the operation of the OnDemand database.
Storage manager
The OnDemand cache storage manager maintains a copy of documents, usually
temporarily, on disk. The cache storage manager uses a list of file systems to
determine the devices available to store and maintain documents. You typically
define a set of cache storage devices on each object server so that the data
loaded on the server can be placed on the fastest devices to provide the most
benefit to your users.
For multiplatforms and z/OS, the cache storage manager uses the arsmaint
command to migrate documents from cache storage to archive media and to
remove documents that have passed their life of data period. For iSeries, the
Start Disk Storage Management (STRDSMOND) command starts the Disk
Storage Management task, which manages the movement of data between disk
and the archive storage manager.
Download facility
The download facility is a licensed feature of PSF for OS/390. It provides the
automatic, high-speed download of JES spool files from an OS/390 system to an
OnDemand for Multiplatforms server. You can use the download facility to
transfer reports created on OS/390 systems to the server, where you can
configure OnDemand to automatically index the reports and store the reports
and index data on the system. The download facility operates as a JES functional
subsystem application (FSA) and can automatically route jobs based on a JES
class or destination, reducing the need to modify JCL. The download facility uses
TCP/IP protocols to stream data at high speed over a LAN or channel connection
from an OS/390 system to the OnDemand server.
OnDemand data loading programs read the index data generated by ACIF and
load it into the OnDemand database. The data loading programs obtain other
processing parameters from the OnDemand database, such as parameters used
to segment, compress, and store report data in cache storage and on archive
media. If you plan to index reports on an OnDemand server, you can define the
parameters with the administrative client. The administrative client includes a
Report Wizard that lets you create ACIF indexing parameters by visually marking
sample report data. OnDemand also provides indexing programs that can be
used to generate index data for Adobe Acrobat PDF files and other types of
source data, such as TIFF files.
For OS/400, the OS/400 Indexer can index a variety of data types for OS/400
spooled files. Refer to the following publications for details about the indexing
programs provided with OnDemand for various platforms:
IBM Content Manager OnDemand for Multiplatforms - Indexing Reference,
SC18-9235
IBM Content Manager OnDemand for iSeries Common Server - Indexing
Reference, SC27-1160
IBM Content Manager OnDemand for z/OS and OS/390 - Indexing
Reference, SC27-1375
The OnDemand data loading program first determines whether the report needs
to be indexed. If the report needs indexing, the data loading program calls the
appropriate indexing program. The indexing program uses the indexing
parameters from the OnDemand application to process the report data. The
indexing program can extract and generate index data, divide the report into
indexed groups, and collect the resources needed to view and reprint the report.
After indexing the report, the data loading program processes the index data, the
indexed groups, and the resources using other parameters from the application
and application group. The data loading program works with the database
manager to update the OnDemand database with index data extracted from the
report. Depending on the storage management attributes of the application
group, the data loading program might work with the cache storage manager to
segment, compress, and copy report data to cache storage and the archive
storage manager to copy report data to archive storage.
The archive storage manager deletes data from archive media when it reaches
its storage expiration date. An administrator defines management information to
the archive storage manager to support the OnDemand data it manages. The
management information includes the storage libraries and storage volumes that
can contain OnDemand data, the number of copies of a report to maintain, and
the amount of time to keep data in the archive management system.
OnDemand and the archive storage manager delete data independently of each
other. Each uses its own criteria to determine when to remove documents. Each
also uses its own utilities and schedules to remove documents. However, for final
removal of documents from the system, you should specify the same criteria to
OnDemand and the archive storage manager.
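Because the two components expire data independently, a document is only completely gone once both retention periods have elapsed. A back-of-envelope sketch of that point (the retention values are placeholders, not product defaults):

```python
from datetime import date, timedelta

def final_removal_date(load_date, ondemand_days, archive_days):
    """Earliest date on which a document loaded on load_date has been
    removed by BOTH OnDemand and the archive storage manager, assuming
    each applies its own retention period from the load date."""
    ondemand_expiry = load_date + timedelta(days=ondemand_days)
    archive_expiry = load_date + timedelta(days=archive_days)
    return max(ondemand_expiry, archive_expiry)
```

This is why the text recommends specifying the same criteria to both: otherwise the document lingers in one component after the other has deleted it.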
Note: Because many Linux versions are available, make sure to use only
the supported versions for OnDemand. If you use a nonsupported version,
you might run into problems.
Linux for zSeries® on any IBM eServer zSeries model that supports running
Linux
– Red Hat Enterprise Linux AS or ES
– SUSE LINUX Enterprise Server 8 SP 3
Content Manager OnDemand for z/OS includes the capability to mix and match
servers to support the architecture and geographic needs of the customers, while
allowing the data to be kept closer to the requesting users, minimizing network
traffic. For example, a Chicago-based company with a z/OS-based library
Note: The library server and the object server must be on the same iSeries
partition. It is not possible to mix servers as is possible with OnDemand for
Multiplatforms and OnDemand for z/OS.
Users might have to search for documents when a customer calls in for a
particular transaction. They might need to view special monthly reports after the
reports are generated. They might access information of different types and from
different sources, such as unstructured documents stored either in OnDemand or
in some third-party offerings and structured information stored either in DB2 or in
third-party database offerings.
OnDemand offers a large choice of solutions that cover many of the users’
needs. In this section, we introduce:
Document access methods for OnDemand
Document distribution possibility using OnDemand
Federated search: WebSphere® Information Integrator Content Edition
The server might be any of the OnDemand server types. The client can be the
OnDemand client, a browser client, or a custom client.
Disconnected mode
Data is extracted from an OnDemand server and is written to a media that can be
easily distributed. Users access the OnDemand data from the media in the same
way that they access data that is stored in an OnDemand server. For example,
you can store all of the invoices for a period on removable media and load them
onto a mobile computer. Sales representatives can then access the customer
invoices on their mobile computers without any network connection while they
are in the field.
Refer to Chapter 10, “Report Distribution” on page 313, for more information.
WebSphere Information Integrator Content Edition also provides a toolkit for you
to develop, configure, and deploy content connectors for additional commercial
and proprietary repositories. Sample connectors are provided.
Chapter 2. Administration
An extremely important aspect of an OnDemand system is the effective design
and implementation of a strategy regarding system administration from a report
administration perspective and from a user authority and responsibility
perspective. The focus of this strategy should be to ensure that the system is
planned in a manner that provides the greatest functionality and the best
performance as the system matures.
The system components that are required for creating, retrieving, and viewing an
OnDemand report are a storage set, an application group, an application, and a
folder. These elements, in combination, allow the OnDemand administrator to
define and create a report definition that can then be used to index and load data
into OnDemand. Figure 2-1 illustrates the relationship of these elements in a
typical OnDemand system.
Figure 2-1 Users, folders, application groups, and the OnDemand database
For a more in-depth look into storage management, see Chapter 5, “Storage
management” on page 105. Excerpts from that chapter are repeated here to
introduce the various report administration related topics.
Database information
The database information section of the application group definition process
(Figure 2-2) requires that decisions be made concerning the number of rows to
be stored in each database table and the number of report loads to be included
in each database table. These values are important to system performance and
maintenance.
Maximum rows
The maximum rows value, which determines how many data rows will be loaded
into each database table, is used for segmenting the index data and determining
when to close a database table and open a new one. We recommend that you
use the default value of 10,000,000 rows for balancing the performance of data
loads and queries. The number of rows specified should be large enough to
handle the largest possible input report file. You should decrease the value if
there is a small amount of data associated with the application group, thereby
increasing query performance without adversely affecting data load performance.
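The segmentation effect of the maximum rows value can be illustrated with simple arithmetic: once a table reaches the limit, it is closed and a new one is opened. The function below is a sketch of that calculation, not an OnDemand utility.

```python
import math

def tables_needed(total_rows, max_rows=10_000_000):
    """Number of database tables the index data spans, given the
    total rows loaded and the Maximum Rows setting (default shown is
    the recommended 10,000,000)."""
    return math.ceil(total_rows / max_rows)
```

For example, 25 million index rows at the default setting span three tables; lowering the maximum rows value for a small application group means each table (and therefore each query) covers less data.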
If single load is chosen, a new database table is used for each load of a report
into the application group. The maximum rows value is used to calculate the
space allocation for the single load tables. However, a single load per database
table is no longer supported. Existing application groups with this option
selected can still be used, but new application groups cannot use it.
Storage Management
The storage management settings (Figure 2-3) determine how long report data
and indexes will be kept in cache storage before being expired. There are also
options that determine how soon data will be migrated to archive storage after
the report load is completed.
Cache Data
The Cache Data setting determines if the report data will be stored in disk cache,
and if so, how long it will be kept in cache before it is expired. If the Cache Data
for nn Days parameter is selected, then Search Cache is always selected.
Search Cache determines whether OnDemand searches cache storage when
users retrieve documents from the application group. When you set the cache
data to No, you can configure OnDemand to retrieve existing documents from
cache storage while preventing new documents from being copied to cache
storage. If you choose not to store reports in cache, a storage set that supports
archive storage must be selected.
Note: Data that is retrieved often should generally remain in cache until it is no
longer needed by 90% of OnDemand users.
Expiration Type
The Expiration Type determines how report data, indexes, and resources are
expired. If the expiration type is Load, data is deleted from the application group
one input file (load) at a time. The latest date in the input data, and the life of data and
indexes, determine when OnDemand will delete the data. Data that has been
stored in archive storage is deleted by the storage manager based on the archive
expiration date. We recommend that you set Expiration Type to Load.
If the Expiration Type is Segment, data is deleted from the application group one
segment at a time; a segment is a database file that contains index values for an
application group. The segment must be closed, and the expiration date of every
record in the segment must have been reached. If a small amount of data is
loaded into the application group, and the maximum rows value is high, the
segment might be open for a long period of time, and the data will not expire for
the period.
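The Segment expiration rule above reduces to a two-part condition: the segment must be closed, and every record in it must have reached its expiration date. A sketch, with invented field names:

```python
from datetime import date

def segment_expired(segment, today):
    """segment: dict with a 'closed' flag and the per-record
    'expiry_dates'. Returns True only when the whole segment is
    eligible for deletion."""
    return (segment["closed"]
            and all(expiry <= today for expiry in segment["expiry_dates"]))
```

This makes the warning in the text concrete: a still-open segment, or a single unexpired record, keeps the entire segment's data on the system.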
The iSeries server does not use a cache upper threshold. If the cache duration
has passed and the Disk Storage Manager is run, the data is deleted from cache
immediately.
Field Information
The Field Information tab (Figure 2-4) is used to define the attributes of the
database fields that make up the OnDemand report index data. These attributes
determine the characteristics of the index data and control many aspects of
loading and processing data in the system. A database field must be added for
each index value that is required by the applications in the application group.
If multiple applications will be part of the application group, select the Application
ID Field to uniquely identify each application within the group. Select this field
even if it is only possible that more than one application will eventually be part of
the application group.
Note: Be sure that all of the database fields needed are included before the
application group is added to the OnDemand system. Database fields cannot
be added after the application group has been created.
Type
The Type attribute determines the manner in which the database field is used by
OnDemand. There are three main types of attributes: Index, Filter, and Not in
database.
A field should have a type of Filter if it does not uniquely identify a document in
the file and is usually used in conjunction with an index field during folder
queries.
Important: Folder queries using filter fields alone result in a sequential scan
through database tables. An index field should always be included in folder
queries. For more information about folders, see 2.1.4, “Folders” on page 35.
A field should have a type of Not in database if the field contains the same data
value for every document in the input data. A single value for that field is stored
in the segment table that points to the database index records, rather than
storing the same value repeatedly in each row of the database.
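The space saving behind the Not in database type can be illustrated by splitting out values that are identical across all rows of a load; the structures below are invented stand-ins for the segment table, not the real schema.

```python
def split_constant_fields(rows):
    """rows: list of dicts with identical keys (one dict per document).
    Returns (constants, slim_rows): fields whose value is the same in
    every row are stored once, and removed from the per-row data."""
    if not rows:
        return {}, []
    constants = {k: v for k, v in rows[0].items()
                 if all(r[k] == v for r in rows)}
    slim_rows = [{k: v for k, v in r.items() if k not in constants}
                 for r in rows]
    return constants, slim_rows
```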
A thorough understanding of the way that users will search for documents in the
system is required before making decisions about which fields should be indexes
and which should be filters.
Segment
Segment is the date or date and time field that is used to limit the number of
tables that are searched during a folder query. If the application group is defined
for multiple loads per database table, we highly recommend that you define a
segment date for the application group. By using a segment date to limit folder
queries to a single table or a limited set of tables, performance is significantly
improved. The segment date is especially important for application groups that
contain a large amount of data.
Note: The date field that is used for the segment date should always have a
type of filter. By default, an index is created for the segment date, and setting
the segment date to a type of index creates unnecessary overhead.
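How a segment date prunes the set of tables searched can be sketched as a date-range overlap test: each table covers a known date range, and only tables overlapping the query range are scanned. Names are illustrative.

```python
from datetime import date

def tables_to_search(tables, query_start, query_end):
    """tables: list of (name, start_date, end_date) tuples, one per
    segment table. Returns the names of tables whose date range
    overlaps the query range."""
    return [name for name, start, end in tables
            if start <= query_end and end >= query_start]
```

With no segment date, every table would have to be searched; with one, a query for a single month typically touches one table.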
Application ID Field
The Application ID Field is used to identify an application within an application
group when you create an application group that contains more than one
application. The database mapping fields are used to map the value to be stored
in the database as the label that is displayed for folder queries and in the
subsequent query hit list. A query can be made against a specific application in
an application group or against all of the applications in an application group.
A password window is displayed, prefilled with the user ID that is currently in
use. The server validates the password and deletes the application group only
after the validation succeeds. The user is then presented with the
Delete Application Group window, followed by the Confirm Delete Application
Groups window. Selecting OK causes the application group to be deleted. For
more details, refer to Technote #1213631 at the following Web address:
http://www.ibm.com/support/docview.wss?uid=swg21213631
2.1.3 Applications
An application defines the data that is to be indexed and loaded, associates the
data with an application group, and specifies the type of indexing process to be
performed on the data. It also defines any logical views to be put in place for the
end users and determines any special print options to be used with the data. In
this section, we consider some of the load information attributes.
Load Information
Load Information specifies the processing and resource information that the
OnDemand loader uses to load the input data onto storage volumes and to load
the associated index data into the OnDemand database. The File Format,
Preprocessor Parameters, and Postprocessor Parameters (Figure 2-5 on
page 31) are defined as part of load information:
File Format: Provides settings that control how the OnDemand system
compresses and stores documents and resources
Preprocessor: Specifies processing that is carried out on database fields
prior to indexing data
Postprocessor: Specifies a system command or exit program that will run
against an index file before the index records are loaded into the database
When a document is retrieved for viewing, only the first part of the document is
returned from the server to the client. Additional parts of the document are sent
from the server to the client as the user moves to different pages in the
document. Advanced Function Presentation (AFP) Conversion and Indexing
Facility (ACIF) and OS/400 are the two indexers that can be used to enable large
object support. Invoking Large Object support generates an INDEXOBJ=ALL entry
in the indexing parameters that enables the generation of large object indexing
information.
When Large Object is selected, the number of pages parameter must also be
specified. Number of pages determines how many pages will be included by
OnDemand in each large object segment.
If Large Object is not selected, the compressed object size parameter is included
in the load information. The compressed object size specifies the size in kilobytes
of each stored block of data. We recommend that you use the default size of
100 KB blocks.
Exporting an application
It is not possible to export an application to application groups that have different
database fields or attributes. However, it is possible to export applications to a
different server as long as the application group on the target server is identical
to the application group on the source server (the server on which the
applications are defined).
Note: For best results, select a monospacing font with the line data graphical
indexer.
To change the font setting back to the default font, right-click the document and
select Font → Reset.
Note: If the font is changed while using the administration client, the selected
font is also used by the Windows client the next time that the Windows client is
started and a line data document is viewed.
You can find more detail in Technote #1215957, which is available at the following
Web address:
http://www.ibm.com/support/docview.wss?uid=swg21215957
Figure 2-6 Indexer Properties for PDF graphical Indexer
You must select the Output Hexadecimal Strings check box prior to defining
trigger or field parameters. After you select the check box, click OK to confirm the
change. When a trigger or field is defined, the selected text is displayed as a
hexadecimal string in the window.
For more information, refer to Technote #1219572, which you can find at the
following Web address:
http://www.ibm.com/support/docview.wss?uid=swg21219572
2.1.4 Folders
A folder is the interface that allows a user to search for reports and documents
that have been stored in the OnDemand system. The user enters index search
criteria for an application group into the folder search fields and a document hit
list is constructed based on the results of the query. The folder can be
customized to provide the look and feel that is desired for the users of the
OnDemand system. The folder definition process allows the OnDemand
administrator to grant specific permissions for users of the folders.
Figure 2-7 Folder general information
Important: Use care when enabling this feature. The display document
location function can result in degraded search performance because the
storage location information for every document returned for the hit list must
be retrieved from the OnDemand object server.
Note Search
The Note Search setting determines when the user will be notified that a note
exists for a report document. If the annotation parameter in the application group
is set to “No”, the Note Search parameter determines when OnDemand
searches the database for annotations and notifies the user of the annotations.
The possible options are:
We recommend that you set the annotation parameter in the application group
advanced settings to handle annotation storage and display. When the
application group annotation parameter is set to “Yes”, an annotation flag is set in
the database when a user adds an annotation to a document. When an
annotation exists for a document, a note icon is displayed in the folder document
hit list.
Maximum Hits
Maximum Hits (Figure 2-8 on page 38) sets the maximum number of document
hit list entries to be returned by a folder query. Limiting the number of hits that
can be returned from a query prevents performance degradation that might be
experienced if an extremely large result is returned from a query. If a query
results in a large hit list that takes a long time to create, the cancel operation
function on the OnDemand client can be used to stop the creation of the hit list.
Note: OnDemand does not guarantee the order in which the hits are retrieved
from the database. If the hit list size is limited, it is possible that you might not
see the most recent documents. If the most recent documents in the
application group are required, the query must be qualified in a way that
results in a hit list that does not exceed the maximum hits parameter.
Furthermore, if Load Date is defined as one of the fields in the application
group, you can set a descending sort on the load date so that documents that
are loaded last are listed first.
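The interaction of Maximum Hits with sort order can be sketched as below: because the database does not guarantee retrieval order, sorting descending by load date before applying the cut is what keeps the most recent loads in the truncated hit list. Field names are invented.

```python
def build_hit_list(hits, max_hits):
    """hits: list of dicts, each with a 'load_date' (ISO date string).
    Sort newest-first, then truncate to the Maximum Hits limit."""
    ordered = sorted(hits, key=lambda h: h["load_date"], reverse=True)
    return ordered[:max_hits]
```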
Figure 2-8 Folder permissions
Secondary Folder
The Secondary Folder parameter (Figure 2-8) is used to manage the number of
folders that a user is presented with when they log on to the OnDemand system
and their list of folders is displayed. By default, OnDemand presents a list of the
primary folders that a user is authorized to access. Marking a folder as a
secondary folder reduces the size of the initial folder list. All folders that the user
is authorized to view can be displayed by selecting the Show All Folders option
in the OnDemand client.
Text Search
Text Search is used to search documents that contain a specific word or phrase
before the document hit list is built. Only documents that contain the specified
word or phrase are returned as part of the hit list. The search takes place on the
server.
Using Text Search allows a user to further qualify a search without adding the
overhead associated with adding and maintaining additional index fields to the
database. Text search is performed on the documents that match the criteria for
the other query fields. For example, if the other query fields are date and account
number, text search is performed on the documents that match the specified
date and account number.
A text search string can be a word or a phrase. Only one text search field can be
defined per folder. The only valid search operator is EQUAL. Wild card searches
and pattern searches are not allowed. Text search is not case sensitive.
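A minimal model of those constraints — a single word or phrase, EQUAL semantics only, case-insensitive, applied to the documents that already matched the other query fields — might look like this (an illustrative sketch, not the server's implementation):

```python
def text_search(documents, phrase):
    """documents: list of (doc_id, text) pairs that matched the other
    query fields. Returns the IDs of documents containing the phrase,
    matched case-insensitively with no wildcard or pattern support."""
    needle = phrase.lower()
    return [doc_id for doc_id, text in documents if needle in text.lower()]
```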
Other text search limitations depend on the type of the documents to be
searched and the platform on which OnDemand is running. For more information, refer to
the OnDemand Information Center at the following Web address:
http://publib.boulder.ibm.com/infocenter/cmod/v8r3m0
4. Select the Show Search String option in the Options menu. This causes the
system to highlight the searched text string when the document is opened.
If Autoview in the Options menu is set to First Document or Single Document,
the document automatically displays with searched string highlighted. The
Single Document option causes the document to be automatically displayed if
only one document meets the search criteria. The First Document option
causes the first document in the hit list to be displayed automatically with
highlighted searched string.
To use the text search field, open the folder with a predefined text search field
and perform a text search. When a document returned by a text search is opened
for viewing, the viewer is positioned to the first line in the document that contains
the text search string. You can use the Find Next option to move to other
occurrences of the string in the document.
Note: You can still perform a standard search with this folder. You do not have
to specify a text search every time.
One of the advantages of the text search function is that the search is performed
on the server. The speed of search is based on the power of the server that is
running OnDemand. The disadvantage is the performance hit that might be
incurred by the system. The larger the number of documents that match the
other query fields, the longer it takes for the text search to process that
document list and build the resulting hit list.
Users should always fully qualify their queries to bring back only the specific
documents that they need to view. Any sort of wild card search in conjunction with a
text search can severely impact performance.
Figure 2-10 Folder Permissions with Full Report Browse for Administrative Client
If the user has Full Report Browse authority for a specific folder, the Windows
client has a new View Full Report button, as shown in Figure 2-11. When the
user selects the button, OnDemand retrieves the entire report so that the user
can view it. If the user does not have the Full Report Browse authority, the button
is not visible for that folder in the Windows client.
If the View Full Report button is used, the entire report (with the same load ID)
associated with the selected document is viewed, rather than the individual
document. If a Full Report document is displayed and the entire document is
printed to a server printer, the entire report is printed as a single job.
Figure 2-11 Folder view with Full Report Browse authority on a Windows client
This section discusses briefly what the Report Wizard can do. For further
information, refer to the following Web address:
http://www.ibm.com/developerworks/db2/library/techarticle/0301wagner/0301wagner.html
To start the Report Wizard, you click the Report Wizard button located on the
main window of the Administrative Client, as shown in Figure 2-12, “Report
Wizard button on the OnDemand Administrator Client” on page 43.
After you go through all the report definitions, you are asked if an application
identifier is needed (see Figure 2-13).
To use Report Wizard to add a new application into an existing application group,
highlight the application group and click the Report Wizard button. If the
application group was originally added using the Report Wizard, update the
application group so that additional application identifiers can be added for each
new application.
Follow the same procedure and answer the questions as they are presented. Note that the
application group database fields and folder fields are already defined and are
not displayed.
The Name field is slightly different as well, because the application group and
folder are already defined to the application. From the Application Identifier field,
you can choose the application identifiers that were previously defined in the
application group but are not yet in use.
Figure 2-15 Names page of Report Wizard window when adding an application
Note: If AFP is selected as the data type, the report data is line data, which is
converted to AFP before it is loaded into OnDemand. The Report Wizard
cannot be used to define the report to OnDemand if it is already AFP data.
Summary
Defining a report to OnDemand using the Report Wizard is simple. You answer a
few questions and identify the locations of the index values visually in the sample
report data. The number of questions that you answer is kept to a minimum,
and the questions relate to the values that cannot be changed after the report is
defined. For any values that are not assigned based on the answers to the
questions or by using the graphical indexer, default values are used; these can
be changed later by updating the application, application group, or folder.
The answers to these questions depend on the size of the system, the degree of
centralization to be exercised over system administration, and the nature of the
data and the business needs of the users.
Centralized or decentralized?
In a system design that exercises centralized control, one or a few administrators
are granted system administrator authority. A centralized system is typically used
when the number of reports and users to be added to the system is small.
Centralized administration is also appropriate where resources are limited and
only one person might have the skills and knowledge to perform the system
administration tasks, or where one user group will perform all administration
tasks.
The skill level of the users might be a determining factor in the degree of
authority that is granted. It takes a more skilled user to define indexes and report
parameters than to set up users and groups. A decentralized system is typically
used when data from different sources is stored on the same OnDemand system
but must be maintained independent of other data. Decentralization also makes
sense when report loading and processing needs are limited to a specific group
of users for security purposes or when administrators that add users and groups
must be prevented from accessing report data.
Chapter 2. Administration 47
In this section, we discuss different types of users, followed by a discussion on a
decentralized administrative plan. We also introduce a new administrative tool,
the OnDemand XML Batch Administration, which is a command-line program
that is executed on the OnDemand server.
When the administrative tasks and levels of authorities are understood, decisions
must be made concerning the span of control in the system. Is it better to have
one user control all access and functions in the OnDemand system, or is it better
to spread the administrative tasks among several users to smooth the workload
based on system requirements? The answer to this question depends on the
factors that were discussed previously concerning centralized or decentralized
administrative control.
Figure: the object type model. The system administrator delegates to a report
administrator, who manages application groups, applications, and folders, and to a
user administrator, who manages users and groups.
The report administrator defines and creates the application groups,
applications, and folders that make up the OnDemand system. In this role, the
report administrator is responsible for determining the data fields in the report
data that make up the indexes for the reports, how the index data is extracted
from the report data, how the reports are loaded into the system, and how long
the report data and indexes are to be maintained in the system.
The report administrator also controls access to the reports that are created.
Access can be granted to individual users or can be granted to user groups. The
use of groups for access control simplifies the task. Simply adding a user to a
group with access to a specified report prevents the need to grant authority to
each user that needs access to a report.
The user administrator adds users to the system and creates groups to which
users might be added. Any user added to the group has access to all reports that
might be accessed by members of the group. The user administrator also adds
the report administrator to the groups that require access to report data. By
virtue of being added as a group member, the report administrator can see the
group in the permissions list for application groups and folders and can grant
access permission to the group. A report administrator does not automatically
have access to users and groups when accessing permission lists.
Figure: the object owner model. Under the system administrator, each department
(Department A and Department B) has its own report administrator, who manages
that department's application groups, applications, and folders, and its own user
administrator, who manages that department's users and groups.
Departments within the same company are good candidates for the object owner
model. Users from one department are completely isolated from the data and
users of another department. There are separate administrators for each
department that is being serviced.
The report administrator is defined with the authority to create and maintain
application groups and folders. The report administrator only has authority over
the application groups, applications, and folders that this person creates.
The user administrator is defined with the authority to create users and to create
user groups. The user administrator only has authority over the users and user
groups that this person creates.
The OnDemand system administrator must give the user administrator authority
over the report administrator. This authority allows the user administrator to add
the report administrator to the groups that have access to the reports that the
report administrator is creating. In turn, this allows the report administrator to view and
add members of the groups to the permissions list of the application groups and
folders that he or she creates.
The responsibilities of the user administrators and report administrators are the
same in both the object type model and the object owner model. The difference is
that, in the object type model, the administrators have authority over all users and
reports in the system, while in the object owner model (Table 2-2), the
administrators only have authority over the users and reports that they created.
If the number of reports and users to add to the OnDemand system is small (less
than 100), use centralized system administration. If resources are limited and only
one person performs system administrative tasks, use centralized system
administration.
The difference between the two programs is that for the administrative client, the
user has to provide input through the graphical user interface (GUI) while the
XML batch program receives input through the XML interface.
Using the XML Batch Administrative program
Special features of the XML batch program
Tips on using the arsxml command
Troubleshooting
The OnDemand Batch System Administration process uses the Xerces2 Java
Parser. You must download the parser code before using the Batch System
Administration.
In this line, <dir> indicates the full path of the directory that contains the Xerces2
Java Parser JAR files, xercesImpl.jar and xml-apis.jar.
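Although the exact line is not reproduced here, a CLASSPATH setting of the
following shape is typically what is meant; this is a sketch, with <dir> as the
placeholder from the text:

```
export CLASSPATH=<dir>/xercesImpl.jar:<dir>/xml-apis.jar:$CLASSPATH
```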
For details about installing the Xerces2 Java Parser, refer to the README.html or
README file found in the xml directory of the OnDemand installation root. For
AIX, the README files are in the directory /usr/lpp/ars/bin/xml. Follow the
instructions in the README file to download and install the parser.
The Batch Administration program is called arsxml. With this XML batch
program, you can export, add, delete, and update OnDemand objects.
To use the program, you must have the following files, ID, and password:
The schema file, ondemand.xsd
An input XML file (for example, exportusers.xml)
A user ID and password to access the OnDemand server
In XML, the definition and syntax of the markup language is defined in a schema
file. For the OnDemand XML batch program, the schema file is called
ondemand.xsd. It contains the definitions for the OnDemand objects: users,
groups, applications, application groups, storage sets, folders, and printers. Each
OnDemand object definition contains one or more child objects. For example, a
user object has a child object for permissions, and a group object has a child
object for users in the group. The schema file (ondemand.xsd) should not be
changed in any way by the user.
The input XML file for the XML batch program is parsed to ensure that it is valid
according to the schema file. Each object within the file is examined to ensure
that the attributes are valid according to the object type. The XML batch program
generates XML when OnDemand objects are exported. The XML that is
generated can be used as an input for the subsequent arsxml command.
Example 2-3 shows a sample of the file exportusers.xml from the xml samples
directory. You can change the user names to those of the users that you want
to export.
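The body of Example 2-3 is not reproduced in this extract. A sketch of such an
input file, assuming that the user object carries a name attribute in ondemand.xsd
(verify the exact element and attribute names for your release), might look like the
following:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<onDemand xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:noNamespaceSchemaLocation="ondemand.xsd">
    <!-- list one user element per user to export; REDBOOK is a placeholder -->
    <user name="REDBOOK" />
</onDemand>
```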
You can export objects using the arsxml export command. The following
command exports the users from the server odserver1 and generates the output
file users.xml:
arsxml export -u oduser1 -p odpasswd1 -h odserver1 -i exportusers.xml -o users.xml -v
You can import objects using the arsxml add command. The following command
imports the users to odserver2 using the input file users.xml, which can be the
output file that was generated by the previous command:
arsxml add -u oduser2 -p odpasswd2 -h odserver2 -i users.xml -v
You can delete objects using the arsxml delete command. The following
command deletes the users from odserver2, based on the users listed in the
users.xml file:
arsxml delete -u oduser2 -p odpasswd2 -h odserver2 -i users.xml -v
For deletion, you are prompted before each object in the XML is deleted, unless
the -x parameter is used.
You can update objects using the arsxml update command. For example, suppose
that you want to update the description of the user REDBOOK and add the
authority to create users. In this case, you construct the XML input file as
shown in Example 2-4 on page 57.
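The body of Example 2-4 is not reproduced in this extract. A sketch of such an
input file follows; the attribute names other than name are illustrative
assumptions, so check ondemand.xsd for the exact names that your release uses:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<onDemand xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:noNamespaceSchemaLocation="ondemand.xsd">
    <!-- description and the user-creation authority attribute are
         illustrative; verify against the schema file -->
    <user name="REDBOOK"
          description="New description" />
</onDemand>
```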
Some attributes cannot be updated, such as the data type of an application group
field or folder field. If such an attribute is specified, the XML batch program
produces a warning message, and the rest of the attributes continue to be
updated.
Note: Attributes that have default values are not included in the output XML
file. Also, when creating an input XML file, you do not have to specify every
attribute for each object.
Dependent objects, such as all users that belong to a group, can be exported
together when you choose to export the group rather than having to add a user
object to the XML file for every user in the group. You do this by specifying the
arsxml command option -r d on the command line.
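For example, a command of the following shape (the user ID, password, server,
and file names are placeholders modeled on the earlier examples) exports the
groups named in exportgroups.xml together with their dependent user objects:

```
arsxml export -u oduser1 -p odpasswd1 -h odserver1 -i exportgroups.xml -o groups.xml -r d -v
```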
When there is no network connection between two servers, the XML batch
program can be used to export OnDemand objects from one server to an XML file
and later import the file to the other server.
If an error occurs during processing of one of the objects in the input XML file, the
remainder of the XML file is not processed unless option -e c is used on the
arsxml command line.
Tip: For good performance, the ideal order in the XML file is printers, users,
groups, storage sets, application groups, applications, and folders. However,
this order is not required. The objects can appear in any order within the XML
file.
Tips on using the arsxml command
Before you begin to use the arsxml command, we recommend that you read IBM
Content Manager OnDemand for Multiplatforms - Administration Guide,
SC18-9237. Also read the article “OnDemand XML Batch Administration” on the
Web at the following address:
http://www.ibm.com/developerworks/db2/library/techarticle/dm-0510wagner/
If you are not familiar with the syntax, an easier way to begin is to perform an
export of the object. By doing so, you get a working XML input file that you can
modify to suit your needs. Make sure that the export completes without any
errors; otherwise, the output XML file might be incomplete.
Adding objects to the OnDemand server is straightforward. If you attempt more
advanced operations, such as updating the permissions of existing users for an
application group or folder, and you do not get the results that you expect, you
might have missed the task attribute. Include this attribute when you want to
update an existing object, for example, when removing a user's permission from
an application group or updating a user's permission to an application group. The
values for the task attribute are add, delete, and update.
For example, if you want to remove the permission of the user REDBOOK from
an application group, you must use the following line in the input XML file:
<permission user="REDBOOK" task="delete" />
Another example is when you want to update the query restriction of the user
REDBOOK for the application group RedbookAG. In this case, you must use the
following line in the input XML file, with the task attribute set to update.
<permission user="REDBOOK" task="update" queryRes="New Query" />
The previous line is incorporated in Example 2-5 for the input file updateag.xml.
Notice that *PUBLIC is part of the basic folder/application group structure and is
automatically added when a folder or application group is created. Therefore, the
permissions for *PUBLIC can only be updated; they cannot be deleted or added.
You should never specify the task attribute in the <permission> object for
*PUBLIC; the task attribute defaults to update.
Example 2-8 Delete, update, and add permission sample
<applicationGroup name="test07"
                  storageSet="test01"
                  description="">
  <field name="f1" appIDField="Yes" />
  <permission user="test01" task="delete" />
  <permission user="test02" task="delete" />
  <permission user="test03" task="delete" />
  <permission user="test06" task="update" docUpdatePerm="Yes" docPrintPerm="Yes"
      docCopyPerm="Yes" annotCopyPerm="Yes"
      queryRes="This is a New Query Restriction -6" authority="Logical Views" />
  <permission user="test07" task="update" docUpdatePerm="Yes" docPrintPerm="Yes"
      docCopyPerm="Yes" annotCopyPerm="Yes"
      queryRes="This is a New Query Restriction -7" authority="Administrator" />
  <permission user="test08" task="update" docUpdatePerm="Yes" docPrintPerm="Yes"
      docCopyPerm="Yes" annotCopyPerm="Yes"
      queryRes="This is a New Query Restriction -8" authority="Logical Views" />
  <permission user="test11" task="add" docUpdatePerm="Yes" docPrintPerm="Yes"
      docCopyPerm="Yes" annotCopyPerm="No"
      queryRes="This is a Query Restriction -1" authority="Logical Views" />
</applicationGroup>
The README file in the xml installation directory provides a list of technotes
that you should review. This README is kept up to date with the latest available
technotes. If you have questions or problems with arsxml, consult the technotes
to see whether a solution exists for the situation that you are experiencing.
Troubleshooting
While writing this IBM Redbooks publication, we tested the arsxml function by
exporting the users with the arsxml command to produce an XML file. We then
changed the user names in the output file and added it back to the server. In
doing so, we received the following error message:
A parsing error occurred in file oexportdoris.xml, Line 2, Column 111:
cvc-elt.1: Cannot find the declaration of element 'onDemand'.
This message indicates that the schema file ondemand.xsd cannot be found.
The default location for the schema file is in the current directory. Therefore,
when the output file is created, the schema file location is specified as
ondemand.xsd. If this is not where the schema file is located, then this line must
be changed in the output XML file to specify the full path name before the file can
be used as input.
In the previous example, we simply added the missing </onDemand> tag at the
end of the output file. Because we were testing, we were able to add the tag
ourselves.
This problem is fixed in version 7.1.2.6. In your environment, if you are still using
version 7.1.2.5, always check to make sure that the export process runs
successfully before you do any import based on the export data.
62 Content Manager OnDemand Guide
Chapter 3
Table 3-1 shows the OnDemand system control tables with their descriptions.
Note: The complete table name is composed of the owner name, which can
be the database name or the instance name, and the table name. For
example, the application group table ARSAG that belongs to the ODARCH
instance has a complete table name of ODARCH.ARSAG.
For the iSeries server, the complete table name is in the format of library/table,
where the library name is always the same as the instance name. For
example, the application group table ARSAG that belongs to the default
QUSROND instance has a complete table name of QUSROND/ARSAG.
ARSAG2FOL, the field mapping table: one row for each application group field
mapped to a folder field.
ARSAGFLD, the application group field table: one row for each field defined in
an application group.
ARSAGFLDALIAS, the application group field alias table: one row for each
database (internal) and displayed (external) value in an application group.
ARSAGPERMS, the application group permissions table: one row for every user
given specific permission to an application group.
ARSAPPUSR, the user logical views table: one row for each logical view defined
for a specific user.
ARSFOLFLD, the folder field table: one row for every folder field defined for a
folder.
ARSFOLFLDUSR, the folder user field table: one row for every field provided for
a user given specific field information for a folder.
ARSFOLDPERMS, the folder permission table: one row for every user given
specific permissions to a folder.
ARSNAMEQ, the named query table: one row for each private and public named
query defined to OnDemand.
ARSPRTOPTS, the printer options table: one row for each printer option.
ARSPRTUSR, the printer user table: one row for each user with access to this
printer.
ARSSET, the storage set table: one row for each storage set.
ARSUSRGRP, the users in group table: one row for each user assigned to an
OnDemand group.
Dynamic name, the application group data table: one row for each document that
is stored in the application group.
There are two important tables we must examine here, the segment table
(ARSSEG) and the application group data table (ROOT.ag_internal_id). The
segment table contains one row for each application group table. Table 3-2
shows the ARSSEG table structure.
The start_date and the stop_date are written in an internal OnDemand format.
Use the arsdate command to get the normal date format. For example, if you
get 11992 from the database and use the arsdate 11992 command, the
system returns the date 10/31/02.
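The conversion can be reproduced outside of OnDemand. The following Python
sketch assumes that the internal format is a day count in which day 1 is
January 1, 1970; that assumption is consistent with the arsdate example above,
but it is not confirmed by the product documentation in this extract:

```python
from datetime import date, timedelta

def od_date_to_string(day_number: int) -> str:
    """Convert an OnDemand internal date value to MM/DD/YY.

    Assumption: day 1 corresponds to 1970-01-01, which matches the
    arsdate example in the text (11992 -> 10/31/02).
    """
    d = date(1970, 1, 1) + timedelta(days=day_number - 1)
    return d.strftime("%m/%d/%y")

print(od_date_to_string(11992))  # 10/31/02
```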
The ARSSEG table points to the application group data table name (the second
column of the table, table_name). The application group data table is created or
updated during the arsload process. It contains a row for each item stored in the
application group.
The name of the application group data table is ag_internal_id, which is the
identifier that OnDemand assigns to the application group when it is created with
the administrative client. The three-digit application group identifier is listed in the
Storage Management panel of the administrative client as shown in the example
in Figure 3-1. In this case, the application group identifier is EAA.
The application group data table is indexed on one or more of the user-defined
fields, from field_1 to field_n.
Three more tables might grow rapidly during the lifetime of an OnDemand
system:
The annotation table (ARSANN), if a lot of annotations are added to the
documents
The system creates one row for every annotation, which means that every yellow
sticker and every graphical annotation adds one row to this table.
The resource table (ARSRES), if a lot of AFP data is archived in an
OnDemand system and the resources such as formdef, page segments, and
overlays are often changed
The resource table grows depending on the amount of resources that are
added and changed.
The load ID, as shown in the system log or in the arsload output, looks like the
following example:
6850-25-0-15FAA-9577-9577
After the application group and the application are defined, the application group
gets an application group identifier, agid, in the ARSAG table and an internal
application group identifier, agid_name. Figure 3-2 on page 70 displays the data
created in the ARSAG table; the agid is 5018, and the agid_name is HAA.
This application is defined to create a new application group data table every 10
million rows. During the data loading, OnDemand uses the agid and the
agid_name to create one row into the segment table (ARSSEG) for every 10
million rows. The important pointer in the ARSSEG table is the name of the
application group data table, table_name, where the index values (in this case,
the four defined index values) are stored. The table_name is composed of the agid_name followed by a sequence number.
Figure 3-2 displays the two rows created in the ARSSEG table: one row with
table_name HAA1, another HAA2. Both HAA1 and HAA2 are the actual names
of the application group data tables that are created. ARSSEG keeps track of the
maximum rows and the currently inserted rows; in this example, the maximum is
10000000, and the current count is 326098.
The application group data table contains the doc_name, which is the actual
container for the individual document. The offset and the document length are
also kept in this table. Figure 3-2 shows that the first row has an offset of 0, and
that the second row's offset equals the document length of the first row.
After you enter these values, OnDemand starts searching the folder table,
ARSFOL, to determine the folder identification, fid. Figure 3-4 on page 72 shows
that the fid is 5022.
After getting the folder information, OnDemand searches the ARSAG2FOL table
for the application groups associated with this folder. Figure 3-4 on page 72
shows the application group ID, agid, is 5020.
In general, any number of application groups can be connected with one folder.
In our example, there is only one application group associated with this folder.
To retrieve the individual document within this result list, the system goes back
into the application group data table and locates the document offset and data
set (object) and retrieves the object for display at the client.
Figure 3-4 shows the details of this search sequence from a folder.
Figure 3-4 detail: the ARSAG2FOL table maps the folder, FID 5022, to the
application group, AGID 5020.
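Under the table and column names mentioned in this section (fid, agid,
table_name), the search sequence can be sketched as the following queries; the
name column in ARSFOL and the agid column in ARSSEG are assumptions, so
verify the column names against your system catalog:

```sql
-- 1. Find the folder identifier (assumes a name column in ARSFOL)
SELECT fid FROM ARSFOL WHERE name = 'MyFolder';

-- 2. Find the application groups associated with the folder
SELECT agid FROM ARSAG2FOL WHERE fid = 5022;

-- 3. Find the application group data tables for the application group
SELECT table_name FROM ARSSEG WHERE agid = 5020;
```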
All system tables, as mentioned in 3.1, “System control tables” on page 64, are
created with the arsdb program. The tables are created in one tablespace. This
tablespace is created during the installation with the ARSTSPAC member in the
ODADMIN.V7R1M0.SARINST installation file. The size of the tablespace is set
there. Before you create the tablespace, you must create the storage group and
the database. It is important that the owner of the database (the submitter of the
job or the user ID that is set by SET CURRENT SQLID='username') matches the
entry SRVR_INSTANCE_OWNER in the ARS.INI file.
od.v7r1m0.sarinst(arstspac):
  SET CURRENT SQLID='ARSSERVR'
  CREATE TABLESPACE ARSTSPAC
    IN ARSDBASE
    USING STOGROUP ARSSGRP
    PRIQTY 800
    SECQTY 5000
    SEGSIZE 32
    BUFFERPOOL BP32K;

ars.cfg:
  ###################
  # DB2 Parameters. #
  ###################
  ARS_DB_TABLESPACE=ARSTSPAC

ars.ini:
  SRVR_INSTANCE=ARSDBASE
  SRVR_INSTANCE_OWNER=ARSSERVR

These settings tie the tablespace ARSTSPAC to the system tables (ARSAG,
ARSAG2FOL, ARSAPP, and so on).
Figure 3-5 Relation between configuration files, tablespace, and system tables
The arsdb program provides an interface between the database manager and
OnDemand. Several parameters are used in the creation and dropping of tables.
For example, to create initial tables, we use the arsdb -c command. Refer to IBM
Content Manager OnDemand for z/OS and OS/390 - Configuration Guide,
GC27-1373, for more details.
The arsdb program resides in the UNIX System Services file system path
/usr/lpp/ars/bin. Several parameters can be used with the arsdb program.
Always keep in mind that the ars.cfg file is required to determine the tablespace
name. Currently, there is no way to set the individual size of the system tables
because there is only one tablespace associated with them. The creation is done
automatically in the background. The only storage allocation size parameter that
can be changed is on the CREATE TABLESPACE statement in arstspac. The
arsdb command allows the creation of all tables or of specific tables.
Four major factors influence the amount of storage needed for the OnDemand
database:
The number of index and filter fields
The size of the index and filter fields
The number of indexed items per month
The number of months (years) OnDemand keeps the indexes in the database
Refer to IBM Content Manager OnDemand for z/OS and OS/390 - Introduction
and Planning Guide, GC27-1438, for information about space requirements.
OnDemand for z/OS and OS/390 allocates its tablespace during the creation of a
new table based on the following space allocation parameters:
DBSIZE / 2 for the primary allocation
Primary allocation / 4 for the secondary allocation
The allocation of the database is done in kilobytes. The allocation values depend
on the maximum row limit set when creating the application group. The DBSIZE
depends on the number of index fields defined in the application.
The Default Table Factor is set to 1.2 by the program. The records per page value
is a DB2 parameter. For more information about records per page, refer to
Chapter 8, “Estimating Disk Storage”, in IBM DB2 UDB for z/OS and OS/390 -
Administration Guide, SC26-9931.
Note: Based on this calculation, when you define the application group, make
sure that you lower the default of 10 million rows if you only want to store a
small amount of data. If you leave the 10-million-row default unchanged,
OnDemand allocates 6 million rows at the primary allocation.
Optionally, you can allow DB2 to automatically create this user when the
database instance is created. However, you might want to create the user
manually so that you can verify that the proper authorities are set before creating
the instance. The default user and the default instance are archive. In our
scenario, we created a user called ondtest that owns the ondtest instance. We
used smitty to create this user and added the user ID to the sysadm1 group.
We recommend that you use the same user name and group name for the
fenced UDFs. You will see a message warning against doing this; if OnDemand is
the only application running on the machine, we recommend that you bypass this
warning.
ARS.INI
The ARS.INI file contains a section for each instance; each section begins with a
header. It is created at installation time, and by default, is configured with
information for the archive instance. To update the file, simply copy the archive
section to a new section and update the newly copied section for the new
instance.
[@SRV@_ARCHIVE]
HOST=9.17.64.210
PROTOCOL=2
PORT=1450
SRVR_INSTANCE=ARCHIVE
SRVR_INSTANCE_OWNER=root
SRVR_OD_CFG=/usr/lpp/ars/config/ars.cfg
SRVR_DB_CFG=/usr/lpp/ars/config/ars.dbfs
SRVR_SM_CFG=/usr/lpp/ars/config/ars.cache
[@SRV@_ondtest]
HOST=9.17.64.210
PROTOCOL=2
PORT=1441
SRVR_INSTANCE=ondtest
SRVR_INSTANCE_OWNER=root
SRVR_OD_CFG=/usr/lpp/ars/config/ars_ondtest.cfg
SRVR_DB_CFG=/usr/lpp/ars/config/ars_ondtest.dbfs
SRVR_SM_CFG=/usr/lpp/ars/config/ars_ondtest.cache
Notice that we created a new section called ondtest, the name of our new
instance. Each instance should point to the HOST name or IP address of a
physical server. OnDemand differentiates instances by the PORT number. This
parameter identifies the TCP/IP port that OnDemand monitors for client requests;
each instance must use a unique and unused port. The default port is 0, which
points to port 1445. You must choose a different value for each additional
instance. We chose an unused port of 1441.
Finally, specify the location of the server configuration file, the tablespace file
system, and the cache file system for each instance. The parameters are
SRVR_OD_CFG, SRVR_DB_CFG, and SRVR_SM_CFG respectively. The
cache file system might be shared among instances. If you choose to do so, you
can define and use the same file for the SRVR_SM_CFG parameter. In this case,
you only have one cache parameter file to update.
ARS.CFG
When an instance is started, OnDemand reads the ARS.INI file to determine
where the server configuration file is located. Each instance must have its own
ARS.CFG file, whose location is determined by the SRVR_OD_CFG parameter in ARS.INI.
Copy the original ARS.CFG file and modify it appropriately. For our scenario, we
created a file named ars_ondtest.cfg.
Most of the parameters are the same for multiple instances. The only parameters
that must be changed are the database-related ones: change the DB2INSTANCE
parameter to your new instance name, and change the database path and the
primary and archive log file directories. We recommend that each database
reside in its own unique file directory.
The group to which the database instance owner belongs must have write
access to the database directories specified here. The database instance owner
is the user ID that you specified when you created the instance. Verify that the
entire database tree (/ondtest/arsdb* in our scenario) is in the sysadm1 group.
Note: We strongly recommend that each instance uses a different set of Tivoli
Storage Manager options files.
########################################
# DB2 Parameters (Library Server Only) #
########################################
#
DB2INSTANCE=ondtest
#
ARS_DB2_DATABASE_PATH=/ondtest/arsdb
ARS_DB2_PRIMARY_LOGPATH=/ondtest/arsdb_primarylog
ARS_DB2_ARCHIVE_LOGPATH=/ondtest/arsdb_archivelog
ARS_DB2_LOGFILE_SIZE=1000
ARS_DB2_LOG_NUMBER=20
ARS.CACHE
OnDemand supports cache storage for temporary storage and high-speed
retrieval of reports that are stored on the system. Each OnDemand instance can
have its own cache storage to allow for a complete differentiation between the
instances.
Alternatively, OnDemand instances can share the same cache storage. This is
possible because OnDemand separates the cache directories by prefixing each
one with the instance name. For the archive instance, however, the cache
directory is directly below the defined file system name; for all other instances,
the cache directories are separated by instance name. The SRVR_SM_CFG
parameter in the ARS.INI file identifies the cache file systems used by the
instance. This file can contain one or more file systems.
Important: The first line in the ARS.CACHE file identifies the base cache
storage file system where OnDemand stores the control information. After you
define this value, you cannot add or remove it from OnDemand or change it in
any way.
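A minimal ARS.CACHE file might look like the following sketch, in which the file
system names are hypothetical; the first line is the base cache storage file
system:

```
/ondtest/arscache1
/ondtest/arscache2
```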
The permissions on these file systems are important. On AIX servers, the cache
file system must be owned by the root user and the system group. On Linux,
HP-UX and Sun Solaris, these file systems must be owned by the root user and
the root group. You must ensure that no other permissions are set. On AIX, the
file system permissions should be similar to the following example:
drwx------ 4 root system 512 Oct 30 12:38 arscache
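The following sketch shows one way to create a directory with these permissions;
the directory name is hypothetical, and the chown to root and the system group
additionally requires root authority on AIX, so it is shown commented out:

```shell
# create a cache directory and restrict it to the owner only (drwx------)
mkdir -p arscache_demo
chmod 700 arscache_demo
# chown root:system arscache_demo   # requires root; AIX: root user, system group
ls -ld arscache_demo
```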
When using DB2 as the database, OnDemand supports the use of SMS
tablespace. Using SMS allows the operating system to increase the size of the
tablespace, as required, during a load process.
When creating a new instance that uses tablespace, you must create a new
ARS.DBFS file. We created ars_ondtest.dbfs in our scenario. Each line in this file
must contain the name of the file system and the type of tablespace to be stored.
These file systems must be owned by the database instance owner and the
group. In our scenario, it is owned by ondtest and belongs to the sysadm1 group.
See the following example for the correct permissions:
drwxrws--- 4 ondtest sysadm1 512 Dec 27 2001 /arsdb/db1/SMS
We include the SMS in the file system name to indicate the type of data that will be
stored.
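Based on that description, a one-line ars_ondtest.dbfs might look like the
following sketch; the file system name is taken from our scenario, and the exact
layout of the type keyword is an assumption, so verify it against your release:

```
/arsdb/db1/SMS SMS
```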
Sign on to the user account that you assigned as the owner of the OnDemand
instance (in the ARS.INI file). In our scenario, this is root. Run the arsdb
command with the following options:
arsdb -I ondtest -gcv
After this command completes, you should be able to log into DB2 and connect
to the new instance. List all the tables by typing the following command:
db2 list tables for all
If you list the tables, you should see the new ARS tables, owned by root. If this
command fails for any reason, it creates a db2uexit.err file in the ARS_TMP
directory specified in the ARS.INI file; by default, it is /tmp.
You must also initialize the system migration facility by entering the following
command:
arssyscr -I ondtest -m
The arssyscr program creates the application groups, applications, and folders
required by the system logging and system migration facilities.
If you have more than one instance running, you see more than one arssockd
process in the accepting state. Any instance other than the default archive
instance has -instancename appended to arssockd for identification:
root 33900 1 0 13:02:37 - 0:00 arssockd-ondtest: (accepting)
Be sure that when you stop an instance, you stop the correct one. You can stop
the instance by issuing a kill command on the process identifier (PID) of the
accepting process or by using the following command:
arssockd stop ondtest
Connecting to instances
To connect to a particular instance, the client must log on to the correct library
server. Add a new server in the administrative or user client by identifying the
name of the library server and the port number to use. The port number that you
specify must be the same port number that you specified in the ARS.INI file.
Running commands
In general, the -h or -I parameters are used to determine the name of the
OnDemand instance to process. You must specify the parameter and the
instance name if:
The name of the default instance is not ARCHIVE.
You are running more than one instance on the same system and you want to
process an instance other than the default instance.
You are running the program on a system other than where the library server
is running.
The programs locate the specified instance name in the ARS.INI file to determine
the TCP/IP address, host name alias, or fully-qualified host name of the system
on which the OnDemand library server is running and other information about the
instance. The ARSADM, ARSADMIN, ARSDOC, and ARSLOAD programs
support the -h parameter. The ARSDB, ARSLOAD, ARSMAINT and ARSTBLSP
programs support the -I parameter. For the ARSLOAD program, if both the -h
and -I parameters are specified, the value of the last parameter specified is
used, for example:
arsload -g applicationgroup -u userid -p password -I ondtest test.data
arsmaint -cmsv -I ondtest
In the next window, the Server panel, click Communications. Choose a port for
the OnDemand clients to use to communicate with the server. You must choose a
unique port for each new instance. The default is 0, which defaults to 1445. If you
do not change this port, you do not see an error message; instead, every client
trying to access the original archive instance through port 1445 tries to log
into this new instance instead. You will be unable to access the original
instance until you change this port to a unique number.
We recommend that you define unique file systems for each instance as you
define the file systems to control this instance (cache, temp, and database
directories). This is a way to keep the instance data and indexes separate from
one another. You must assign a unique database directory, primary and archive
log file directories, or you will see an error message when creating the database.
After you define the instance, you can click Create Database Now to create
the DB2 tables. This creates the ARS DB2 tables and initializes the system log
and migration facility. You can also choose to run these commands manually by
using the arsdb or arssyscr commands with the -I parameter.
Note: There are now additional services: a new library server, MVS™
download server, and load data server for the new instance.
Refer to 4.2.2, “Working with the second instance” on page 84, for instructions on
how to connect to the new instance and how to run the OnDemand programs.
In addition, each OnDemand for iSeries instance must run in a single Coded
Character Set ID.
The user profile that is used to create an instance must have *SECADM authority
and must have the correct locale and locale job attributes set in the profile. Any
user profile that is then used to load data into OnDemand also must have the
locale parameters set, but does not need *SECADM authority. A profile used to
load data should also specify group or supplemental group profiles QONDADM,
QRDARS400, and QRDARSADM.
Note: If you are also using the Spool File Archive feature of OnDemand,
you must change the port number for the QUSROND instance to
something other than 0, for example, 1450. That is because Spool File
Archive also uses port 1445.
[@SRV@_REDBOOK]
HOST=LOCALHOST
PROTOCOL=2
PORT=1470
SRVR_INSTANCE=REDBOOK
SRVR_INSTANCE_OWNER=QRDARS400
SRVR_FLAGS_SECURITY_EXIT=1
SRVR_OD_CFG=/QIBM/USERDATA/ONDEMAND/REDBOOK/ARS.CFG
SRVR_DB_CFG=/QIBM/USERDATA/ONDEMAND/REDBOOK/ARS.DBFS
SRVR_SM_CFG=/QIBM/USERDATA/ONDEMAND/REDBOOK/ARS.CACHE
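Stanza files of this shape are easy to query from the shell. A hedged sketch, where ini_get is a hypothetical helper (not part of OnDemand) and the stanza name and PORT value come from the example above:

```shell
# Sketch: read one parameter from a named [...] stanza in an
# ARS.INI-style file, stopping at the next stanza header.
ini_get() {
  file="$1"; stanza="$2"; key="$3"
  awk -F= -v s="[$stanza]" -v k="$key" '
    $0 == s         { in_s = 1; next }  # entered the wanted stanza
    /^\[/           { in_s = 0 }        # any other header ends it
    in_s && $1 == k { print $2; exit }
  ' "$file"
}

# Against a copy of the stanza shown above:
cat > /tmp/ars.ini.sample <<'EOF'
[@SRV@_REDBOOK]
HOST=LOCALHOST
PROTOCOL=2
PORT=1470
EOF
ini_get /tmp/ars.ini.sample '@SRV@_REDBOOK' PORT   # prints 1470
```

The same call with HOST or SRVR_OD_CFG as the key returns the other parameters of that instance.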
Note: You can also edit the ARS.INI and ARS.CFG files by mapping a drive
to the iSeries integrated file system root directory using Windows Explorer.
Open the file in an editor such as Notepad or WordPad, make the desired
changes, and save the file.
#
ARS_NUM_DBSRVR=5

# AUTOSTART SERVER WHEN USING STRTCPSVR COMMAND
# - SET TO 1 TO AUTOSTART THIS INSTANCE
#   SET TO 0 TO NOT AUTOSTART THIS INSTANCE
#
ARS_AUTOSTART_INSTANCE=1
#
Instances on z/OS
[Figure: Instance layout on z/OS — the HFS (/usr/lpp/ars with bin, config, locale, samples, and www, plus ars.ini), the DB2 program libraries, packages, and plans for the databases ARSDB1 and ARSDB2 of Instance1 and Instance2, and OAM storage management on disk, tape, and optical]
In UNIX System Services, there are many ways to create a copy of a file.
Sometimes there are problems with authorization. If you are not a superuser,
which you can verify by typing su in the OMVS shell, call your systems
programmer and RACF® administrator to get the right permissions.
Note: Usually it is sufficient to simply use the following command from the
OMVS command line:
cp ars.ini /u/ussdflt/arsini.back
Here, /u/ussdflt/ is the directory for the copied dataset. It can be any directory
with write permissions. For more information about any UNIX System
Services command, refer to UNIX System Services Command Reference,
SC28-1892.
UNIX files
Every permission for a UNIX file (read, write, and execute (rwx)) is maintained for
three different types of file users:
The file owner
The group that owns the file
All other users
To determine the permissions for a file, use the ls -l command from the
command line of the OMVS shell. The following information is returned:
-rwxrwxrwx 1 SYSADM1 USERID 203 Jun 28 14:02 ars.ini
In this case, the list file and directory attributes command is used for the ARS.INI
file (similar to a dir filename command in Windows). The -l parameter gives
you more detailed information about the file.
Note: This example is taken from the IBM Redbooks publication OS/390
Version 2 Release 6 UNIX System Services Implementation and
Customization, SG24-5178, which is a good reference if you are starting with
UNIX System Services.
The first character of the listing indicates the file type:
- Regular file
d Directory
l Symbolic link
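The nine permission characters map directly to the octal notation used with chmod. A small sketch of the mapping; mode_to_octal is an illustrative helper, not part of OnDemand, and the setuid/setgid/sticky markers s and t are counted as execute only:

```shell
# Sketch: convert the nine rwx characters from an ls -l listing into
# the equivalent octal mode (owner, group, other).
mode_to_octal() {
  echo "$1" | awk '{
    out = ""
    for (t = 0; t < 3; t++) {              # the three permission triads
      v = 0
      if (substr($0, t*3+1, 1) == "r") v += 4
      if (substr($0, t*3+2, 1) == "w") v += 2
      c = substr($0, t*3+3, 1)
      if (c == "x" || c == "s" || c == "t") v += 1
      out = out v
    }
    print out
  }'
}

mode_to_octal rwxrwxrwx   # prints 777, as in the ars.ini listing above
mode_to_octal rwxrws---   # prints 770, as in the /arsdb/db1/SMS listing
```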
HFS file data is byte-oriented, which differs from record-oriented MVS
datasets. I/O is done by using a data stream rather than by writing records.
A UNIX System Services file system on z/OS looks similar to a Windows file
system, except for the direction of the slashes. OnDemand expects certain files
to be in a specific directory. In UNIX System Services, the root file system is the
first file system that is mounted. You do not add application data to this file
system.
The path for the OnDemand system is /usr/lpp/ars/. From the ars directory, there
are several directories that contain the OnDemand files and executable files,
such as programs and procedures. The directories are created at the installation
time when running the ARSMKDIR REXX™ routine from the install library,
ODADMIN.V7R1M0.SARINST. The /usr/lpp/ars/ directory contains the
subdirectories listed in Table 4-2.
[Figure: The /usr/lpp/ars directory tree — the root directory (/) contains lpp among many other applications; /usr/lpp/ars contains bin (executables such as arsadm, arsdate, arsdb, arsdoc, arsexprt, arsload, arsmaint, arsobjd, arspdoci, arspdump, arssockd, arssyscr, and arsview, plus exits and fonts), config (ars.ini, ars.cfg, ars.cache, arsload.cfg, cli.ini, arsprt, arsprtjcl, and arssockd.stdenv), locale (enu, deu, jpn, and kor), samples (sample files and api), and www (ODWEK: applets, cgi-bin, images, plugins, samples, and servlets)]
[Figure: The USS root on z/OS disk with separate cache mount points /ars/cache, /ars2/cache, and /ars3/cache, each backed by its own HFS dataset (OD390.V710...HFS)]
For example, the third instance's cache file system is mounted with:
mount filesystem('OD390.V710.DBSRES.SERVER.CACHE3.HFS')
mountpoint('/ars3/cache') type(HFS)
Copy the original ARS.CFG file and modify it appropriately. Make sure that the
permission bits are set correctly. In our scenario, a new configuration file,
arsins1.cfg (Figure 4-10 on page 99), is created. The important
database-related parameters must be changed:
ARS_DB_TABLESPACE: The name of the tablespace created with the
ARSTSPAC member of the installation library for the new instance
DB2INSTANCE: The name of the database created with the ARSDB2
member of the installation library for the new instance
All other parameters remain the same, unless you want to try something else
with this new instance, such as using object access method (OAM) or a different
language.
Attention: The ARS.INI file is sensitive to the kind of square brackets that you
use as delimiters. Even if the brackets look the same in the ISHELL editor,
the hexadecimal value depends on the code page used by the machine and
might not represent the correct value, which can lead to unpredictable results.
Example 4-1 shows the correct hexadecimal values for the new instance name.
This might look like a DB2 error. Actually, the ARSDB program cannot read the
configuration file. Check the log for any RACF messages about writing to or
opening the file system.
Many installations run several DB2 systems on the z/OS logical partition (LPAR).
Sometimes, this can lead to errors if the link list contains only the DSNLOAD and
DSNEXIT library from a different DB2 subsystem. You can add your requested
DB2 library with the export command:
export STEPLIB=ICCDB2.SDSNEXIT:ICCDB2.SDSNLOAD
This command sets the environment.
Tip: If you exit the shell, the setting is gone. You can add the export
command to your OMVS login profile. Check your variables with the SET
command.
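The Tip above can be automated so that the setting survives new shell sessions; a minimal sketch, assuming the login profile is $HOME/.profile and using the library names from the example above:

```shell
# Sketch: add the STEPLIB export to the OMVS login profile once,
# so the setting survives new shell sessions.
PROFILE="$HOME/.profile"
LINE='export STEPLIB=ICCDB2.SDSNEXIT:ICCDB2.SDSNLOAD'
# -q quiet, -x whole-line match, -F fixed string (no regex)
grep -qxF "$LINE" "$PROFILE" 2>/dev/null || echo "$LINE" >> "$PROFILE"
```

Running the line twice does not duplicate the entry, because the grep test guards the append.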
After this procedure is started, log on to the new instance using the different port
number and create users, application groups, applications, and storage sets with
the normal procedures.
ARSDB2 (create the storage group and database for the new instance; the objects are created as ARSSERV1.ARSDBAS1):
SET CURRENT SQLID='ARSSERV1';
CREATE STOGROUP ARSSGRP1
  VOLUMES ('*','*')
  VCAT OD710;
CREATE DATABASE ARSDBAS1
  STOGROUP ARSSGRP1;

ARSTSPAC (create the tablespace ARSSERV1.ARSTSIN1 for the new instance):
SET CURRENT SQLID='ARSSERV1';
CREATE TABLESPACE ARSTSIN1
  IN ARSDBAS1
  USING STOGROUP ARSSGRP1
  PRIQTY 700
  SECQTY 7000
  SEGSIZE 32
  BUFFERPOOL BP32K;

ars.ini (the existing default stanza and the new stanza):
[@SRV@_ARSSOCKD]
HOST=wscmvs.washington.ibm.com
PROTOCOL=2
PORT=1444
SRVR_INSTANCE=ARSDBASE
SRVR_INSTANCE_OWNER=ARSSERVR
SRVR_OD_CFG=/usr/lpp/ars/config/ars.cfg
SRVR_SM_CFG=/usr/lpp/ars/config/ars.cache

[@SRV@_ARSSOCKT]
HOST=wscmvs.washington.ibm.com
PROTOCOL=2
PORT=1555
SRVR_INSTANCE=ARSDBAS1
SRVR_INSTANCE_OWNER=ARSSERV1
SRVR_OD_CFG=/usr/lpp/ars/config/arsins1.cfg
SRVR_SM_CFG=/usr/lpp/ars/config/arsins1.cache

arsins1.cfg (key parameters for the new instance):
ARS_NUM_LICENSE=10
ARS_LANGUAGE=ENU
ARS_SRVR=
ARS_LOCAL_SRVR=
ARS_NUM_DBSRVR=4
ARS_TMP=/tmp
ARS_PRINT_PATH=/tmp
DB_ENGINE=DB2
ARS_DB_TABLESPACE=ARSTSIN1
DB2INSTANCE=ARSDBAS1

OMVS (create the OnDemand system tables, such as ARSFOL and ARSSEG, in the new tablespace):
$ export STEPLIB=...   (the DB2 SDSNEXIT and SDSNLOAD libraries)
$ export DSNAOINI="/etc/ars/cli.ini"
$ arsdb -c -I ARSSOCKT

ARSSOCKT started task:
//ARSSOCKT PROC
//ARSSOCKT EXEC PGM=ARSSOCKD,PARM=('/ARSSOCKT ARSSOCKD'),
// REGION=0M,TIME=NOLIMIT
//*
//STEPLIB DD DISP=SHR,DSN=OD390.V710.DBS.SARSLOAD
// DD DISP=SHR,DSN=ICCDB2.SDSNEXIT
// DD DISP=SHR,DSN=ICCDB2.SDSNLOAD
//DSNAOINI DD PATH='/etc/ars/cli.ini'
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
Figure 4-13 Relationship between the steps for creating an instance on z/OS
5.1 Tivoli Storage Manager for Multiplatforms
OnDemand for Multiplatforms has a cache storage manager that we use to
maintain documents on disk storage. The cache storage manager uses a list of
file systems to determine the devices that are available for storing and
maintaining documents. Typically, each OnDemand object server in the system
has a defined set of cache storage devices on which you can maintain the report
data for a period of time to provide the fastest access times for system users.
Documents migrate from cache storage to archive storage based on the
migration policy that you define for the application group.
OnDemand for Multiplatforms also has an archive storage manager that you use
to store documents on archive media. The archive storage manager maintains
one or more copies of documents and acts as the interface between the object
server and archive storage media. Tivoli Storage Manager is included in
OnDemand as the archive storage manager. Documents can be archived on a variety
of media, such as disk, optical, and tape. The archive storage devices must be
configured and defined to Tivoli Storage Manager.
To store application group data to archive media, you must assign the application
group to a storage set that contains a storage node that is managed by the
archive storage manager. In an application group definition, you can specify that
the data is migrated to archive storage when the document is originally loaded
into the system, the next time that the migration maintenance process is run, or
after a certain number of days pass.
In this section, we consider the steps that you must follow to set up and
configure Tivoli Storage Manager as the archive manager for an OnDemand for
Multiplatforms system. We discuss configuration of IBM System Storage Archive
Manager, previously named IBM Tivoli Storage Manager for Data Retention, to
store OnDemand data. It provides data retention policies that help meet
regulatory requirements and uses storage devices such as IBM TotalStorage
DR450, IBM TotalStorage DR550, or EMC Centera.
This section provides an overview with emphasis on the parts of the process that
most directly affect the OnDemand to Tivoli Storage Manager interface. It is not
meant to provide exhaustive coverage of Tivoli Storage Manager.
[Figure: Tivoli Storage Manager storage policy concepts — a client node (the object server running the ARSLOAD program) is assigned to a policy domain, whose policy set contains a management class with an archive copy group; data flows to storage pool volumes through a device class that maps to a library with drives and media]
Storage policy
Storage policy consists of the following items:
Client node: Represents an object server that has the Tivoli Storage
Manager backup archive client installed and that has been assigned to a
policy domain
Policy domain: Contains the policy set, management class, and archive copy
group that is used by the client node
Policy set: Contains management classes, which contain the archive copy
groups
Management class: Determines where data is stored and how it is managed
Archive copy group: Used to copy data to Tivoli Storage Manager for
long-term storage
Refer to Tivoli Storage Manager for Windows Quick Start, GC32-0784, which is a
good reference to help with installing and configuring the Tivoli Storage Manager
system. Within this guide, follow the steps listed for installing the Tivoli Storage
Manager server, Tivoli Storage Manager licenses, Tivoli Storage Manager
backup archive client, and Tivoli Storage Manager device driver. When these
installations are complete, continue to the following section that covers the Tivoli
Storage Manager configuration.
If you use IBM TotalStorage DR450 or DR550 for archival, Tivoli Storage
Manager is already built into the hardware. No installation is required.
During the standard configuration process, wizards help you perform the
following tasks:
Analyze drive performance to determine the best location for the Tivoli
Storage Manager server
Initialize the Tivoli Storage Manager server
Apply the Tivoli Storage Manager licenses
Configure Tivoli Storage Manager to access storage devices
Prepare media for use by Tivoli Storage Manager
The standard initial configuration does not cover all Tivoli Storage Manager
functions, but results in a functional Tivoli Storage Manager system that can be
modified and enhanced further. The default settings used by the wizards are
appropriate in many cases.
Initial configuration
After you install Tivoli Storage Manager, perform the following initial
configuration:
1. Open the Tivoli Storage Manager management console and expand Tivoli
Storage Manager until you see the local machine name. Right-click the local
machine name and select Add a New TSM Server.
2. From the initial configuration task list, select Standard or Minimal
configuration. Refer to the Tivoli Storage Manager for Windows Quick Start,
GC32-0784, for information to help you with your decision concerning the
configuration type.
3. Select a Standalone or Network configuration:
– In a stand-alone environment, a Tivoli Storage Manager server and
backup archive client are installed on the same machine. There can be no
network-connected Tivoli Storage Manager clients.
– In a network environment, a Tivoli Storage Manager server is installed.
The backup archive client can be optionally installed on the same
machine. Network-connected clients can be installed on remote machines.
After you make the selections for server initialization, Tivoli Storage Manager
performs the following actions:
Initializes the server database and recovery log
Creates the database, recovery log, and disk storage pool initial volumes
Creates a daily and weekly schedule that can be used for automated Tivoli
Storage Manager functions
Registers a local administrative client with the server (The client is named
admin and the initial password is admin.)
License
Select and apply the number of licenses purchased for the different features of
Tivoli Storage Manager. The licensing for Tivoli Storage Manager has been
simplified to the following three components:
Base IBM Tivoli Storage Manager
Base IBM Tivoli Storage Manager Extended Edition
IBM Tivoli Storage Manager for Data Retention
Note: If you intend to use the IBM System Storage Archive Manager, then you
must obtain and install the license for IBM Tivoli Storage Manager for Data
Retention.
The license information provided is registered with the Tivoli Storage Manager
server.
Device configuration
The device configuration wizard automatically detects storage devices that are
attached to the Tivoli Storage Manager server and is used to select the devices
that you want to use with Tivoli Storage Manager. You define a device by
selecting the check box that is associated with that device. Undetected or virtual
devices can also be defined manually.
Device configuration is not needed if you are using IBM TotalStorage DR550. For
EMC Centera, there is also no library or drive. You must define only the
devclass with DEVTYPE=centera to point to the correct IP address.
By default, data that you store with client nodes associated with BACKUPPOOL
is transferred to DISKPOOL. The data can be stored in DISKPOOL indefinitely or
can be migrated to another storage device in the storage pool hierarchy.
To register new client nodes, you provide a client node name and password for
each node that is required (Figure 5-3). The new node defaults to the
STANDARD policy domain. BACKUPPOOL is the default storage pool for this
policy domain. Associate the client name with the storage pool that is set up to
maintain the archive data on the desired device type for the period of time that is
required. You can associate the new client node with a different storage pool by
selecting New to create a new policy domain.
Note: The node name and password that you create here are used when
creating OnDemand storage sets that use Tivoli Storage Manager archive.
COMMmethod TCPip
TCPPort 1500
TCPServeraddress 127.0.0.1
Note: The first backup of a file is always a full backup, regardless of the
backup type that you select.
Note: Tivoli Storage Manager backup and archive client is not supported if
data retention protection is turned on. The previous test does not apply if you
enabled data retention protection in Tivoli Storage Manager.
You select TSM, click TSM Options, and then enter the path to the Tivoli Storage
Manager program files and the path to the Tivoli Storage Manager options file.
######################################################
# Storage Manager Parameters (Library/Object Server) #
######################################################
#
# Storage Manager for OnDemand to use
#
ARS_STORAGE_MANAGER=TSM
#######################################
# TSM Parameters (Object Server Only) #
#######################################
DSMSERV_DIR=/usr/tivoli/tsm/server/bin
DSMSERV_CONFIG=/usr/tivoli/tsm/server/bin/dsmserv.opt
DSM_DIR=/usr/tivoli/tsm/client/ba/bin
DSM_CONFIG=/usr/tivoli/tsm/client/ba/bin/dsm.opt
DSM_LOG=/ondemand/arslog
DSMG_DIR=/usr/tivoli/tsm/client/api/bin
DSMG_CONFIG=/usr/tivoli/tsm/client/api/bin/dsm.opt
DSMG_LOG=/tmp
DSMI_DIR=/usr/tivoli/tsm/client/api/bin
DSMI_CONFIG=/usr/tivoli/tsm/client/api/bin/dsm.opt
DSMI_LOG=/tmp
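A wrong path in any of these entries is a common setup problem, so a quick check that each one exists can save debugging time. A sketch under assumptions: check_paths is a hypothetical helper, and the sample file stands in for your real ARS.CFG:

```shell
# Sketch: report any TSM-related path in an ARS.CFG file that does not
# exist on this system. Lines of the form DSM*=/path are inspected.
check_paths() {
  cfg="$1"; missing=0
  for path in $(grep -E '^DSM' "$cfg" | cut -d= -f2); do
    [ -e "$path" ] || { echo "missing: $path"; missing=1; }
  done
  return $missing
}

# Example with a two-line sample configuration:
cat > /tmp/ars.cfg.sample <<'EOF'
DSM_DIR=/tmp
DSM_LOG=/tmp
EOF
check_paths /tmp/ars.cfg.sample && echo "all paths present"
```

A nonzero return flags at least one missing path, which usually means the Tivoli Storage Manager client or API is installed in a different location than ARS.CFG assumes.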
For example, a storage set can be used to maintain data from different
application groups that must retain documents for the same length of time and
require the data to be kept on the same type of media. Different storage sets can
be created to handle different data retention requirements. One storage set can
be set up to maintain data on cache only hard disk drive storage. Another can be
set up to point to a Tivoli Storage Manager client node that will cause a copy of
the report to be stored in archive storage.
[Figure: An application group is assigned to a storage set; the storage set's storage node points to cache storage and archive storage]
If Tivoli Storage Manager is used as the archive storage manager, the same
storage management criteria should be specified for both OnDemand and Tivoli
Storage Manager. That is, the Life of Data and Indexes in OnDemand and the
retention period in Tivoli Storage Manager should be the same value.
If the expiration type for the application group is load, a command is issued
from OnDemand to Tivoli Storage Manager to delete data when the data is
being expired from OnDemand. If the expiration type is segment or document, a
delete command is not issued from OnDemand to Tivoli Storage Manager
when OnDemand expires the data, and the data remains in Tivoli Storage
Manager until the Tivoli Storage Manager retention period expires. This data is
not accessible from OnDemand because the indexes are expired in
OnDemand.
OnDemand systems can be set up to run as cache only hard disk drive systems
with no migration of the data or indexes, or with an archive system using Tivoli
Storage Manager to maintain and manage the archive of OnDemand documents
and indexes over a predesignated period of time. When OnDemand is installed
and the system is initialized, a default cache only storage set is created.
Additional cache storage sets can be defined. Storage sets associated with Tivoli
Storage Manager client nodes that are tied to specific management policies on
the Tivoli Storage Manager servers are used for long-term archive storage.
Load Type
The Load Type parameter determines where OnDemand stores data. There are
two possible values (Figure 5-9):
Fixed: OnDemand stores data in the primary storage node that has the load
data field selected. When Load Type is set to Fixed, you must select the load
data check box for one primary storage node. OnDemand loads data to only
one primary storage node regardless of the number of primary nodes that are
defined in the storage set.
Local: OnDemand stores data in a primary storage node on the server on
which the data loading program executes. When load type is Local, the load
data check box must be selected for a primary storage node on each of the
object servers that is identified in the storage set. A storage set can contain
one or more primary storage nodes that reside on one or more object servers.
Storage Node
The OnDemand storage node name can be from one to sixty characters in length
and can include embedded blanks. The case can be mixed.
Note: The OnDemand storage node name does not tie the storage set to the
Tivoli Storage Manager client node. This name is only a label in the
OnDemand system. The storage node name can be the same as the
associated client node name, but it is not required that they are the same.
Note: The Logon field must be a valid Tivoli Storage Manager client node
name. This is the client node that has been defined on the Tivoli Storage
Manager system through the wizard or command line. The password that
follows the logon must be the same as the password that you created for the
client node. OnDemand uses a Tivoli Storage Manager application
programming interface (API) to connect and log on to the Tivoli Storage
Manager server when data is being migrated to the Tivoli Storage Manager
client node.
Load Data
The Load Data parameter determines the primary storage node into which
OnDemand loads data. When the load type is fixed, one primary storage node
must have load data selected. When load type is local, load data must be
selected for one primary node for each object server that is associated with the
storage set.
Cache Only
The Cache Only parameter determines whether OnDemand uses the archive
manager for long-term storage of data.
Cache Data
The Cache Data setting determines if the report data is stored in a hard disk
drive cache and, if so, how long it is kept in cache before it is expired. You can
also choose whether to search cache when retrieving documents for viewing. If
you choose not to store reports in cache, you must select a storage set that
supports archive storage.
Note: Data that is retrieved often should generally remain in cache until it is no
longer needed by 90% of OnDemand users.
Expiration Type
The Expiration Type determines how report data, indexes, and resources are
expired. There are three expiration types:
Load: With this expiration type, an input file of data at a time is deleted from the
application group. The latest date in the input data and the life of data and
indexes determine when OnDemand deletes the data. OnDemand signals to
the storage manager that the data can be deleted. Load is the
recommended expiration type.
Segment: With this expiration type, a segment of data at a time is deleted
from the application group. The segment must be closed and the expiration
date of every record in the segment must have been reached. Data that is
stored in archive storage is deleted by the storage manager based on the
archive expiration date. If a small amount of data is loaded into the application
group and the maximum rows value is high, the segment might be open for a
long period of time, and the data is not expired during that period.
Document: With this expiration type, a document at a time is deleted from the
application group. Data that is stored in archive storage is deleted by the
storage manager based on the archive expiration date. Storing with an
expiration type of document causes the expiration process to search through
every document in the segment to determine whether the expiration date has been
reached, resulting in long processing times.
When the arsmaint expiration process is run, data is only deleted from the
application group if the upper threshold for the size of cache storage has been
reached. By default, the cache threshold is 80%. A lower threshold can be forced
by the expiration command parameters. Unless there is some reason that cache
must be cleared, leaving data in cache improves retrieval performance.
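The threshold logic described above can be sketched as a small helper; over_threshold is illustrative only (arsmaint itself makes this decision from the cache file system usage), and the 80% default follows the text:

```shell
# Sketch: would cache expiration run? Data is expired only when the
# cache file system usage meets or exceeds the threshold (default 80%).
over_threshold() {
  used_pct="$1"        # current percent used of the cache file system
  threshold="${2:-80}" # default cache threshold described above
  [ "$used_pct" -ge "$threshold" ]
}

over_threshold 85 && echo "expiration would delete expired data"
over_threshold 50 || echo "below threshold: data stays in cache"
over_threshold 50 40 && echo "a forced lower threshold (40%) triggers it"
```

The second parameter models forcing a lower threshold through the expiration command parameters, as the paragraph above describes.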
Object Size
The Object Size parameter determines the size of a storage object in kilobytes
(KB). OnDemand, by default, segments and compresses stored data into 10 MB
storage objects. The default 10 MB is the recommended object size value.
Attention: Use care when changing the value for Object Size. Setting the
value too small or too large can have an adverse effect on load performance.
Note: The object size, defined here, must be equal to or larger than the size of
the compressed storage objects defined in any application assigned to the
application group.
Because disk storage has become more affordable and technology has
advanced, disk storage devices that prevent data from being
erased or overwritten have become popular. These WORM disks can
be used to store data just as the WORM tapes or optical platters. IBM System
Storage Archive Manager allows critical data to be retained for a mandated
period of time without the possibility of being rewritten or erased.
In this section, we discuss the enhancements in OnDemand that use the WORM
disk. We also provide some setup recommendations and pointers when
configuring OnDemand to use such devices.
Previously known as IBM Tivoli Storage Manager for Data Retention, this
feature was available in Tivoli Storage Manager 5.2.2. It is used to prevent critical
data from being erased or rewritten. For more information about the IBM System
Storage Archive Manager, refer to the following Web address:
http://www.ibm.com/software/tivoli/products/storage-mgr-data-reten/
Table 5-1 Comparison of Tivoli Storage Manager expiration methods with data retention protection (DRP) OFF or ON

DRP OFF, TSM RETinit Creation:
- OnDemand unload: The Delete Object command is issued through the Tivoli Storage Manager API. Objects are deleted during the next Tivoli Storage Manager expiration.
- OnDemand delete application group: The Delete Filespace command is issued. Objects are immediately deleted along with the file space.

DRP ON, TSM RETinit Creation:
- OnDemand unload and delete application group: The objects are effectively orphaned by OnDemand and are expired by Tivoli Storage Manager based on their retention parameters. If the retention parameters are set to NOLIMIT, the objects never expire.

DRP ON, TSM RETinit Event:
- OnDemand unload: OnDemand issues an event trigger command through the Tivoli Storage Manager API. The status of the affected objects is changed from PENDING to STARTED, and they are expired by Tivoli Storage Manager based on their retention parameters. If the retention parameters are set to NOLIMIT, the objects never expire.
- OnDemand delete application group: The Delete Filespace command cannot be used with DRP ON, so the operation is treated the same as though a delete were indicated: the status of all the affected objects is changed from PENDING to STARTED, and they are expired by Tivoli Storage Manager based on their retention parameters. This unfortunately leaves the file space entries in Tivoli Storage Manager. These entries can be manually deleted after the file space is empty, even with DRP ON.
In the DR550, the Tivoli Storage Manager database, database volumes, recovery
log, recovery log volumes and primary storage pools and storage pool volumes
are all preconfigured. You are not required to define anything for DR550 if you
use the default setting.
The DISK device class is used by the primary storage pool called ARCHIVEPOOL.
There is also a DBBKUP device class with device type FILE that is used for
database backup.
You can attach a tape device for the purpose of backing up your primary storage
pools to copy storage pools. You can also use the tape device to back up the
Tivoli Storage Manager database. Tape devices are well-suited for this, because
the media can be transported off-site for disaster recovery purposes. A tape drive
or tape library is not included in the IBM TotalStorage DR550/DR550 Express.
However, you can attach tape devices that are supported by Tivoli Storage
Manager on the AIX platform and that best suit your data retention requirements.
The client node has the same options as a normal OnDemand client.
Note: Remember to set ARCHDELETE=yes for the client node on the Tivoli
Storage Manager server. If this is not set, you will experience errors when you
try to delete the application groups or unload data from OnDemand.
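From a Tivoli Storage Manager administrative command line (dsmadmc), the setting can be checked and corrected; a sketch, where ONDNODE is a placeholder for your client node name:

```
query node ONDNODE format=detailed
update node ONDNODE archdelete=yes
```

The detailed query output shows whether archive delete is allowed for the node; it must be Yes before OnDemand can delete application groups or unload data.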
For more information about DR550, refer to the IBM Redbooks publication IBM
System Storage DR550 Setup and Implementation, SG24-7091.
For use with Centera, the Tivoli Storage Manager database must be a new
database that has not previously stored any data. Nor should any data have been
previously loaded onto the server.
Configure the Tivoli Storage Manager server as normal; however, you are not
required to define a library or drive for the Centera storage device. To define
devclass, use the new command DEFINE DEVCLASS CENTERA.
To enable Centera support for data retention protection, use the new command
on the Tivoli Storage Manager server:
SET ARCHIVERETENTION PROTECTION
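After retention protection is enabled, the Centera definitions on the server look roughly like the following sketch; the device class name, storage pool name, and IP address are placeholders, and HLADDRESS is the parameter that carries the Centera address:

```
define devclass centeraclass devtype=centera hladdress=9.99.99.99
define stgpool centerapool centeraclass
```

A storage pool defined over this device class can then be targeted by the archive copy group used for OnDemand data.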
For more information about Tivoli Storage Manager support of Centera devices,
see the Tivoli Storage Manager for AIX Administrator’s Guide, GC32-0768, or
Tivoli Storage Manager for Windows Administrator’s Guide, GC32-0782.
The arsmaint command uses the application group expiration type to determine
how to delete index data from an application group. This command can expire a
table of application group data at a time (segment expiration type), an input file of
data at a time (load expiration type), or a document at a time (document
expiration type).
Note: When expiring cache data, by default, the data is not expired until the
cache storage file system has exceeded 80% of capacity. Keeping data in
cache as long as possible improves retrieval and viewing performance. You
can force the expiration of cache data before cache is 80% full by using the
minimum and maximum parameters to override the percentage full default.
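As an illustration, a hedged sketch of an arsmaint invocation that expires index and cache data and forces cache expiration regardless of how full the cache file system is (the flag letters are as we recall them, so verify them against the arsmaint reference for your platform):

```
# Sketch only: verify the flags against your platform's arsmaint reference.
# -d expires index data, -c expires cache data; -r and -m override the
# default 80% cache-full threshold (0 forces expiration immediately).
arsmaint -c -d -r 0 -m 0
```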
How to handle this data is left up to the application. OAM is designed to handle
an unlimited number of objects, which can be stored on magnetic disk, magnetic
tape, or optical storage. Objects are different from data sets, which are handled
by existing access methods. The following characteristics distinguish them from
traditional data sets:
Lack of record orientation: There is no concept of individual records within
an object.
Broad range of size: An object might contain less than one KB or up to 50
MB of data.
OAM components
The functions of OAM are performed by three components:
Object Storage and Retrieval (OSR) component
This component provides an API for OAM. All OAM API functions are
requested via the OSREQ assembler macro. Applications use this interface to
store, retrieve, query, and delete objects, as well as to change information
about objects. OSR stores the objects in the storage hierarchy and maintains
the information about these objects in DB2 databases. OSR functions invoked
through the application programming interface require the OAM Thread
Isolation Support (OTIS) application for administrative processing.
Library Control System (LCS) component
This component writes and reads objects on tape and optical disk storage. It
also manipulates the volumes on which the objects reside. The LCS
component controls the usage of optical hardware resources that are
attached to the system.
OAM Storage Management Component (OSMC)
This component determines where objects should be stored in the OAM
storage hierarchy. It manages object movement within the object storage
hierarchy and manages expiration attributes that are based on the installation
storage management policy that is defined through SMS. OSMC also creates
the requested backup copies of the objects and provides object and volume
recovery functions.
Usually, three storage classes are set up for OAM; the storage administrator names them according to the naming convention in the enterprise. These storage classes are:
OAMDASD: Objects are stored in a DB2 table on fast magnetic disk.
OAMTAPE: Objects are stored on magnetic tape including tape robots.
OAMOPTIC: Objects are stored on a 3995 optical device.
Note: The cache storage on a hierarchical file system (HFS) is not part of
these SMS constructs.
OAM collection
A collection is a group of objects that typically have similar performance,
availability, backup, retention, and class transition characteristics. A collection is
used to catalog a large number of objects, which, if cataloged separately, can
require an extremely large catalog. Every object must be assigned to a
collection. Object names within a collection must be unique; however, the same
object name can be used in multiple collections. Each collection belongs to one
and only one Object storage group. Each storage group can contain from one to
many collections.
The administrator must define values for the following fields to add a new storage
set:
Name: The name of the storage set
Description: The storage set description, up to 120 characters
Load Type: Where OnDemand stores data
There are two choices:
– Fixed: OnDemand stores data in the primary storage node that has the
load data field selected. When you set load type to Fixed, you must select
the Load Data check box for one primary storage node. A storage set can
contain one or more primary storage nodes. There can be several different
collection names.
– Local: OnDemand stores data in a primary node on the server on which
the data loading program executes. This applies to z/OS.
The object server is always OnDemand if the OD/390 Object Server check box is
selected. The load data check box indicates that the data is loaded to this
collection. You must select the OAM check box. The Logon and Password fields
are not used in a z/OS environment. These fields are for Tivoli Storage Manager
only.
(Figure: the OAM database for storage group GROUP00 and its ICF catalog entry. The Image1.collect catalog entry contains an OAMDATA section with DIRECTORYTOKEN GROUP00 and a required SMSDATA section with STORAGECLASS OAMDASD and MANAGEMENTCLASS OBJ365.)
The application group name is added, and an object name looks like this:
AAAAAAA.L1.FAAA
Figure 5-16 shows the window in which you can set up the expiration for Storage
Management when defining or updating an application group.
Tip: OnDemand and OAM can run in different DB2 subsystems (different DB2
subsystem identifiers (SSIDs)).
To create a storage set that stores to VSAM, the OnDemand administrator must
provide the first level qualifier for the defined cluster statement. In the example
shown in Figure 5-17, TEAM5 is the high (first) level qualifier.
Based on these parameters, OnDemand creates VSAM data sets during the
arsload program. A catalog entry is created as shown in Example 5-2.
This is done automatically by the OnDemand system. The only part that you create yourself is the first level qualifier.
Every load creates two VSAM data sets: one for the data and one for the index. Every DEFINE CLUSTER of a VSAM data set creates a catalog entry, so if you have several million loads with this storage set, your catalog can grow very large.
You can browse the VSAM data set, but if compression is on, you cannot see much. For test purposes, compression can be switched off so that the content of the VSAM data set is viewable. Compression is switched off in the Load Information settings of the application.
If you store Advanced Function Presentation (AFP) data to VSAM, the resources
are stored in a different VSAM data set.
The Cache Only Library Server storage set is no longer created automatically
with the installation of OnDemand Common Server. This change was made to
the program product code because of performance problems that customers
encountered when archiving a large amount of data and leaving it in the Cache
directory. Even though document retrieval is fast, the load process takes longer
as the size of the cache directory grows.
Also, the Cache Only storage set was limiting because you could not add any
storage levels to it. Even when this storage set was automatically created at
installation, many customers chose to define a disk pool and create a migration
policy instead. Then if they later decide to begin using an optical library, they can
easily add an optical storage level to the policy.
If you have been using the Cache Only storage set, you may decide to start using
a migration policy instead for greater flexibility and to avoid archival performance
problems. Note, however, that the OnDemand Administrator Client does not allow you to change the storage set in the application group.
When you create a migration policy, a storage set of the same name is
automatically created by OnDemand. If you plan to keep all your archives on
disk, the best approach is to create a disk pool and a migration policy that
specifies “No Maximum” for the duration level. Archive Storage Manager expires
data and indexes whenever the number of days is reached in the Life of Data and
Indexes in the application group, or whenever an expiration level in the migration
policy is encountered, whichever comes first. If there is no expiration level in the
migration policy, data is only expired according to the Life of Data and Indexes in the application group.
If you choose the default in the application group to migrate data from cache
when data is loaded, then a copy of the data is archived to the integrated file
system CACHE directory and to the integrated file system ASMREQUEST
directory. When you run Disk Storage Manager, the data is deleted from cache
after the Cache Data for Days duration has passed. When you run Archive
Storage Manager for the first time after loading data, the data is moved to the first
level of the migration policy, ASP01 in our example. The data remains in ASP01
until the number of days in the Life of Data and Indexes is reached or an
expiration level in the migration policy is encountered, whichever comes first.
Most administrative functions for an OnDemand for iSeries Common Server can
be carried out with the OnDemand administrator client. Creating the objects
necessary for OnDemand archive storage management on the iSeries must be
done through iSeries Access Navigator with the OnDemand plug-in (Figure 5-18
on page 145).
To create a migration policy, there must be storage devices defined for the types
of archive media required by the OnDemand system. For the purposes of our
scenario, we created a disk pool storage group and an optical storage group.
When defining the optical storage group (Figure 5-20), you provide:
Storage group name
Description of the storage group
Volume full reset when optical volumes are rewritable and you want to reuse
the storage space (only available with LAN-attached optical jukeboxes)
Free space threshold percent (the percent at which OnDemand starts storing
to rewritable volumes again if the volume full reset parameter is checked)
Storage group type, primary or backup
After you define the optical storage group, use iSeries Navigator to define the
optical volumes to the OnDemand system (Figure 5-21).
When defining optical volumes (Figure 5-21), you provide this information:
Volume name: Your volume name
Volume type: Primary or backup
Instance: OnDemand instance with which the optical volume is associated
Capacity in megabytes: Capacity of one side of the optical media, after it is
initialized
Optical media family: Rewritable (REWT), WORM, Universal Disk Format
single-sided (UDF1) used by DVD RAM drives, or Universal Disk Format
double-sided (UDF2)
Optical storage group: Your optical storage group
Optical library: Library name, which can be provided for documentation
Volume is full: Set when the optical volume reaches its capacity
Opposite side volume name: For the other side of the optical platter
Tape backup requested and media type: The Tape backup requested field
indicates whether a one-time tape backup should be made of the data before
it is archived. The Media type field indicates the type of tape to use for the
backup.
Instance: The value in this field indicates the OnDemand instance with which
the migration policy is associated.
Storage levels in this policy: This section determines the path that the
archived data follows through the different archive storage media. The order
of the levels determines the migration sequence. Storage levels are created
by placing the cursor on an existing storage level (if one exists) and clicking
the Add Before or Add After button. The New Policy Level window
(Figure 5-23 on page 150) opens.
In the New Policy Level window, you provide the following information for the new
storage level (Figure 5-23):
Level identifier: This field distinguishes the different storage levels within the
migration policy. The value must be unique within the storage levels of the
migration policy. Archive Storage Manager uses the level identifier to
determine current level of the migration hierarchy and to determine the next
level to which the data should be moved. The identifier can be numeric (for
example, 10, 20, and 30) or descriptive (for example, ASP or OPT).
Disabled: Specifying this option causes Archive Storage Manager to skip this
level in the storage hierarchy. The Disabled option can be used in a situation
where an optical unit is added to the system later, but the administrator wants
to add an optical policy level and disable it. This option can also be used when migration to a policy level, such as a tape unit, is to be discontinued. A policy level cannot be removed while data is archived to it, but it can be disabled so that no more data is migrated to that level.
Description of the policy level: Use this field to provide a description of the
policy level.
In our scenario, we created a policy level that stores data for 100 days on disk
using the disk pool storage group assigned to auxiliary storage pool 1. We also
created a policy level that stores data on optical indefinitely and uses the
REDBOOK optical storage group. We did not include an expire level, so the data
will always be expired according to the Life of Data and Indexes in the application
group. We can use this migration policy for all application groups if we choose.
Documents that are in application groups with Life of Data set to 100 days or
fewer are never migrated to optical because the disk pool storage level specifies
100 days. This approach is easy to manage. Figure 5-24 on page 152 shows the
final migration policy structure.
Cache Data
The Cache Data setting determines if the report data is stored in disk cache, and
if so, how long it is kept in cache before it is expired. If the Cache Data for n Days
option is selected, then the search cache is always selected.
Expiration Type
The Expiration Type determines how report data, indexes, and resources are
expired. There are three expiration types:
Load
If the expiration type is Load, data is deleted from the application group one input file at a time. The latest date in the input data and the Life of Data and Indexes value determine when OnDemand deletes the data. Data that is stored in archive storage is deleted by the storage manager based on the archive expiration date. Load is the recommended expiration type.
Segment
If the expiration type is Segment, data is deleted from the application group one segment at a time, where a segment is a database table that contains index values for an application group. The segment must be closed, and the expiration date of every record in the segment must have been reached. If small amounts of data are loaded into the application group and the maximum rows value is high, the segment might remain open for a long period of time, and the data is not expired during that period.
Document
If the expiration type is Document, data is deleted from the application group one document at a time. Storing with an expiration type of Document causes the expiration process to search through every document in the segment to determine whether the expiration date has been reached, resulting in long processing times.
Note: Expiration Type of Load is not allowed when using the arsdoc add API
or when using the workstation APIs such as those used by the OnDemand
Toolbox Store Component (see 15.4, “OnDemandToolbox” on page 493). If
you plan to use these APIs with an application group, specify the Expiration
Type as Document.
Object Size
The Object Size parameter determines the size of a storage object in kilobytes.
OnDemand, by default, segments and compresses stored data into 10 MB
storage objects. The default 10 MB is the recommended object size value.
Attention: Setting the value too small or too large can have an adverse effect
on load performance.
Note: The object size, defined here, must be equal to or larger than the size of
the compressed storage objects defined in any application assigned to the
application group.
However, if you use the Save Changed Objects (SAVCHGOBJ) command when
doing daily backups, you might prefer to keep the default database size. You only
save the most recent file instead of always saving one large file.
6.1 When performance tuning is necessary
Due to the way in which OnDemand integrates with a wide variety of products,
data types, and operating systems spanning over many different vendors, it
should come as no surprise to learn that a standard installation of OnDemand is
not optimized for all of these different environments. As part of the installation
process, decisions must be made and parameter values must be changed from
the default settings to best configure the product to fit the environment on which it
must operate. In many cases, it might not be possible to anticipate future demand
and workload which will be placed on the system. Therefore, as requirements
change over time, it might be necessary to fine-tune the system to maintain a high level of performance.
If you are experiencing poor performance during indexing, it is likely that one of
these two areas is the cause of the problem.
Notice that Table 6-1 specifically covers OnDemand operations and processes
that might require tuning to enhance performance. Other areas, such as the
underlying operating system, hardware including network, or even contention
with other software running on the same machine, might require tuning for better
performance. However, in this chapter, we only deal with the areas to tune within
OnDemand; it is by no means a definitive reference for all performance problems
experienced on the machine where the OnDemand system is installed.
In addition to the performance enhancements in the load process, there can also
be improvements in document retrieval times if the object servers are distributed
nationally or internationally. A common configuration for an OnDemand system
that must span large geographical areas is to place an object server in each of
the main computer centers. With this configuration, users can search their
documents from the central library server, and when they want to retrieve a
document, the document is sent from the object server that is local to them (in
network topology terms).
We should consider the possible disadvantages with this architecture. Aside from
the hardware administration overhead, there are few disadvantages to having
multiple object servers within the same data center; however, there are some
issues to consider when distributing object servers across geographies.
The access to documents by the users must be carefully considered before
deciding on geographically distributed object servers. If a significant number
of users require documents from object servers that are not local to them (in
networking terms), there are negative performance effects.
Wide area networks (WANs) are often less reliable than local area networks
(LANs). If object servers are inaccessible from the library server, then,
although documents might be physically located in the same building as the
users, the users will not be able to access the documents.
Four system logging event points can be selected to control the messages that
OnDemand saves in the system log. OnDemand can record a message when the
following events occur:
Login: A user logs on to a server.
Logoff: A user logs off a server.
Application Group Messages: A user queries or retrieves application group
data and other types of application group events.
Failed Login: An unsuccessful logon attempt is made.
The amount of logging that is done on a server is controlled in two places: in the
System Parameters and in the Message Logging tab of each of the application
groups on the system. By default, when you create a new application group, all of
the message logging is turned on (except for the database queries). Also, within
the system parameters, all four of the system logging event points listed earlier
are checked by default on a standard OnDemand installation. This means, by
default, an OnDemand server is in full logging mode. In some cases, this can
have significant effects on the system performance.
Unless you are running the system in a diagnostic mode, we recommend that
you turn off logging for Login and Logoff messages. On an active system with large numbers of users constantly logging on and off, this should improve logon performance.
6.2.3 Database
OnDemand creates a database as part of the installation process. Apart from the
ability to choose the location of the database logs, there is little opportunity to
change the default values used by the database manager. For example, in the
case of DB2 Universal Database, the default parameter values are oriented
toward small machines with small amounts of memory. It is common to alter
some of the default parameters to optimize performance in a specialized
environment.
Memory
It is possible to allocate system memory to a database application in order to
gain more control over the way in which system resources are allocated. In DB2,
this memory allocation is called a buffer pool, and each database has at least
one of these. All buffer pools reside in global memory, which is available to all
applications using the database. If the buffer pools are large enough to keep the
required data in memory, less disk activity will occur. Conversely, if the buffer
pools are not large enough, the overall performance of the database can be
severely curtailed and the database manager can become I/O-bound.
In DB2, the default buffer pool size is 1000 (specified in 4 KB pages), which is
4 MB. We recommend that you increase this value enough to reduce the risk of
becoming I/O-bound and give the database a decent buffer pool to work with.
Depending on the system resources, it is beneficial for a production environment
to increase the buffer pool to limit I/O contention.
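For example (a hedged sketch; the database name ARCHIVE and the page count of 25000 are illustrative values, not values from this book), the DB2 commands to enlarge the default buffer pool look like this:

```
# Sketch only: ARCHIVE and the size of 25000 pages are example values.
db2 "CONNECT TO ARCHIVE"
db2 "ALTER BUFFERPOOL IBMDEFAULTBP SIZE 25000"    # 25000 x 4 KB pages = ~100 MB
```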
Query optimization
Query optimization determines how much effort the database puts into
optimizing queries. A high value is often used in a data warehousing
environment. Lower values are useful in an online transaction processing (OLTP)
type environment that uses simple dynamic queries, which is the case with most
of the OnDemand systems. Lowering this parameter can significantly reduce
CPU activity and prepare time. In DB2, for example, the default query
optimization class (DFT_QUERYOPT) is set to a value of 5, which is regarded as
significant query optimization.
In DB2, the parameter for setting the maximum number of open database files
per application is called MAXFILOP and the default value is 64. If opening a file
causes this parameter to be exceeded, some files in use by this agent must be
closed. Increasing this parameter helps to alleviate the overhead of excessive
opening and closing files. The default tablespace type for an OnDemand installation is SMS. Generally, SMS tablespaces need a larger MAXFILOP value, because SMS tablespaces have at least one file per database table (usually more).
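Both settings are database configuration parameters and can be changed with the DB2 command line processor; in this hedged sketch, the database name ARCHIVE and the new values are examples only:

```
# Sketch only: the database name ARCHIVE and the values are examples.
db2 "UPDATE DB CFG FOR ARCHIVE USING DFT_QUERYOPT 2"
db2 "UPDATE DB CFG FOR ARCHIVE USING MAXFILOP 256"
```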
Transaction logs
The database transaction logs should be placed on a separate physical disk or
disks from the database to avoid contention with the data I/O. In addition,
physically separating the database transaction logs from the database means
that in the event of a media failure, the logs are not lost with the database and
recovery is possible. The log disks must be protected to ensure database
consistency, and due to the high write content, mirroring is typically preferred
over RAID-5. Notice that there is a performance hit to mirroring.
DB2 parallelism
Parallelism within DB2 is designed for best performance when running a small number of complex, number-crunching queries. OnDemand submits a very large
number of simple queries for user logon and folder searches. The overhead
generated by DB2 parallelism can impact server performance in an OnDemand
environment.
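A hedged sketch of how intrapartition parallelism is switched off at the instance level (the change takes effect after the instance is restarted):

```
# Sketch only: restart the instance for the change to take effect.
db2 "UPDATE DBM CFG USING INTRA_PARALLEL NO"
db2stop
db2start
```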
Our key recommendation with regard to running multiple arsload jobs is that,
unless the system has a distributed object server architecture, try not to start all
of the load jobs at the same time. If you start multiple arsload jobs simultaneously, you might still get a performance benefit over running them serially. However, if you stagger the load jobs, each of them will be at a different phase of the load process, which maximizes performance due to less I/O contention in the database and in the cache
storage volumes. To clarify this point further, see Figure 6-1 on page 160, which
illustrates the different phases in the load process.
For systems that are running several large load jobs in parallel, or for systems
that have large numbers of active users, we recommend that you increase this
parameter from the default of 4. For more detailed guidance, refer to Appendix A,
in IBM Content Manager OnDemand for Multiplatforms - Installation and
Configuration Guide, SC18-9232.
For performance reasons, when the OnDemand file systems are created, the
following components should not be on the same physical media:
The cache file system
The database file system
The primary logs file system
The secondary logs file system
The load / indexing file system
The OnDemand temporary space file system
If you are experiencing performance problems and you believe that you have
followed all of the guidance available for configuring OnDemand, make sure that
you check with the vendor of your operating system to confirm that you are at the
latest service pack or maintenance level. Also ensure that there are no known
problems with performance at this level.
Network
OnDemand has various features, such as compression and large object support,
which minimize the impact of retrieving large quantities of information from the
server over a network. However, network performance and topology can often be
the bottleneck in a system architecture, especially when the data retrieved is
large image files that cannot be compressed. For guidance in tuning the
OnDemand environment to cater for the network bandwidth that is in place, see
the following sections:
6.2.1, “OnDemand architecture” on page 162
6.2.2, “OnDemand configuration” on page 163
6.3, “Performance issues based on data type” on page 177
If performance problems are encountered at the storage manager level, the issue
is almost always related to the inherent qualities of the slower media types (such
as optical platters and tape volumes) or the way in which the archive media
manager is configured. To ensure that the OnDemand configuration done by the
administrator is not causing the slower performance from the storage manager,
see 6.2.2, “OnDemand configuration” on page 163.
This example is typical of a situation where data is loaded into OnDemand at the
same time that users are active on the system and retrieving documents from
OnDemand. The most common way to avoid this performance problem is to
schedule load jobs at times when the system is not in use by the user community.
This way the load can have full use of all of the drives in the library to load the
large quantities of data that are necessary during this process.
In Tivoli Storage Manager, the mount retention of optical or tape volumes is set
when defining the device class. Example 6-1 is an extract from the archive.mac
file, which is shipped with a standard installation of OnDemand for Multiplatforms.
The archive.mac file is an extremely useful tool to use in order to configure a
Tivoli Storage Manager environment with OnDemand. For more information
about storage management with Tivoli Storage Manager, OAM (for zSeries) and
Archive Storage Manager (for iSeries), see Chapter 5, “Storage management” on
page 105.
Folder search fields: one for each folder, for each user (file type FLD)
Line data document: one for each line data document (file type ASC)
It is impossible to estimate a generic size or growth rate for all ODWEK systems
because the numbers of users, data types, folders and Advanced Function
Presentation (AFP) resources vary enormously from one installation to another.
However, it is possible to offer guidance, given information about the individual
environment.
6. Figure 6-3 also shows an example of the overall size of the cache after doing
this operation (63.4 KB). If you are satisfied that this is a typical search, then
multiply the collective size of these files by the number of concurrent users
expected to access this ODWEK system.
7. We recommend that you also add 25% to the value from step 6 for
contingency.
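The arithmetic in steps 6 and 7 can be sketched in a few lines; this is a minimal illustration, and the function name and the figure of 200 concurrent users are our own assumptions:

```python
def estimate_odwek_cache_kb(typical_search_kb, concurrent_users, contingency=0.25):
    """Estimate the ODWEK cache size: the footprint of one typical search,
    multiplied by the expected concurrent users, plus a contingency margin."""
    return typical_search_kb * concurrent_users * (1 + contingency)

# 63.4 KB per typical search, 200 concurrent users, 25% contingency:
size_kb = estimate_odwek_cache_kb(63.4, 200)  # 15850.0 KB, about 15.5 MB
```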
Caching documents
If you intend to cache documents that are retrieved from OnDemand in
conjunction with the session information and resources, then it is far more difficult
to estimate an optimum cache size. You can follow the same process as in the
previous section; however, be aware of the following factors:
Documents are typically much larger than session information, and so when
sizing a cache to include documents, you will find that much more space is
required.
If the likelihood of many users frequently viewing the same documents is high,
then caching documents can be beneficial because ODWEK is not required to
retrieve the documents from OnDemand many times.
If the cache size is too large, then it might take ODWEK more time to check
the cache to see if a document is present than to retrieve the document from
OnDemand. To avoid this, we recommend that you set the cache size no
larger than 200 MB.
If the network is slow or overloaded between the OnDemand server and the
Web server, then caching documents might alleviate this problem.
The ODWEK cache is one of the main areas for tuning ODWEK to optimize
performance in your environment. Considered and deliberate sizing of the cache
can yield significant performance benefits.
When ODWEK is used as the viewing technology for OnDemand, the Web server
must be considered as one of those components within the architecture to
separate from the machine on which the OnDemand server is running. If the Web
server and the OnDemand server are separated onto dedicated machines, then
this should significantly improve overall system performance.
It is also possible to have multiple Web servers. This is necessary if you have an
OnDemand architecture with multiple distributed object servers, where typically
there is a Web server (with ODWEK installed on it) in close proximity to each of
the object servers. With this configuration, ODWEK users can benefit from the
fast retrieval of their documents from the object server in their location.
Tip: With ODWEK, the OnDemand object server must deliver the object via
the Web server. If you have multiple object servers that might be distributed
throughout your enterprise but only a single Web server, then all objects must
be delivered through that Web server, and you lose the strength of fast document retrieval from the distributed object servers.
Figure 6-4 describes the process that is followed within ODWEK when a user
with their own OnDemand user ID retrieves a document. In reality, this process is
followed each time the user issues a new logon; for subsequent searches after a
logon, there is ODWEK caching for folder lists and folders as described in “The
ODWEK cache” on page 170. Figure 6-4 shows a simple example of an
interaction between a user and OnDemand, which involves four requests
submitted by the user via ODWEK and four replies submitted by OnDemand via
ODWEK.
From a performance standpoint, the issue that commonly arises with the PDF
data type is illustrated in Figure 6-6 on page 178. As we have stated previously,
one of the main advantages of PDF data is its self-contained nature; however, when archiving this data, that is also one of its main disadvantages.
When a PDF is produced, such resources as images and custom fonts are
placed in the data stream once (usually at the top) and then referenced many
times from within the PDF file. If, for example, a large report is produced that is a collection of many small documents, the advantage is that only one copy of the resources is required. However, in order for OnDemand (or any archive product) to split this report into a collection of self-contained documents, the resources must be copied and placed within each of the smaller documents.
The effect of doing this is illustrated in Figure 6-6. It is common for the sum of the
individual documents to be many times larger than the original report.
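A back-of-the-envelope illustration of why the sum of the split documents dwarfs the original report; the resource and document sizes below are purely hypothetical numbers of our own:

```python
def pdf_split_sizes(resource_bytes, doc_body_bytes, num_docs):
    """Compare the size of one combined PDF (resources stored once) with the
    total size after splitting it into self-contained documents (resources
    copied into every document)."""
    combined = resource_bytes + num_docs * doc_body_bytes
    split_total = num_docs * (resource_bytes + doc_body_bytes)
    return combined, split_total

# 2 MB of embedded fonts shared by 1000 statements of 20 KB each:
combined, split_total = pdf_split_sizes(2_000_000, 20_000, 1000)
# combined is 22 MB; split_total is about 2 GB, roughly 90 times larger
```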
This issue can create a variety of problems during the load process:
If this increase in file size has not been anticipated, the temporary space used
during indexing can be too small and the load will fail.
The PDF indexer has a maximum file size, 4 GB, that it can load to
OnDemand. If the resulting PDF file that the indexer produces is larger than
this maximum file size, then the load fails.
The most common cause of dramatic increases in the size of the PDF file to be loaded into OnDemand is the inclusion of custom fonts in the PDF data.
The primary goal of this section is to increase awareness of the issues with
archiving PDF data. Our recommendation is that if PDF is the only format in which you can archive your data, then wherever possible, use the base 14 fonts,
which do not need to be included in the data. For more information about the
PDF data stream and the font issues, see Chapter 7, “PDF indexing” on
page 185.
When indexing transaction data, if each transaction number from each line of the
report is treated as a database index, such as date or customer name, then the
database grows to be extremely large in a short period of time. OnDemand has a
special type of field for transaction data, which is illustrated in Figure 6-7 on
page 180 by the boxed data on the left of the window.
The transaction data field selects the first and last values from a group of pages, and only these group-level values are inserted into the database. OnDemand
queries the database by comparing the search value entered by the user to two
database fields, the beginning value and the ending value. If the value entered by
the user falls within the range of both database fields, OnDemand adds the item
to the document list.
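The range test described above can be sketched in a few lines of Java. The class and method names here are our own, used only to illustrate the comparison; they are not part of any OnDemand API.

```java
// Sketch of the group-level range test for a transaction data field:
// a search value matches a group of pages when it falls between the
// stored beginning and ending index values. (Illustrative only; plain
// string comparison stands in for the database's collation.)
public class RangeMatch {
    static boolean matches(String searchValue, String beginValue, String endValue) {
        return searchValue.compareTo(beginValue) >= 0
            && searchValue.compareTo(endValue) <= 0;
    }

    public static void main(String[] args) {
        // A group of pages covering transaction numbers 000100 through 000199
        System.out.println(matches("000150", "000100", "000199")); // true: added to the document list
        System.out.println(matches("000250", "000100", "000199")); // false: outside the range
    }
}
```

Because only the first and last values of each group are stored, the database holds two index entries per group of pages instead of one entry per report line.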
From a performance perspective, using the transaction data field for transaction
style line data optimizes indexing performance by significantly reducing the
number of index values to be inserted into the database. This means that loading
and retrieving these extremely large reports is significantly faster and the
OnDemand database is many times smaller.
It is a common misconception that if fonts are collected when the data is loaded,
they are available for viewing in the Windows client. The fact is that Windows
does not recognize AFP fonts. It is not possible to use these fonts even if they are
sent to the client as part of the resource. Windows clients require a mapping from
AFP fonts to ATM or TT fonts. OnDemand provides this mapping for most
standard fonts. For more information about mapping custom fonts, refer to IBM
Content Manager OnDemand - Windows Client Customization Guide and
Reference, SC27-0837.
Figure 6-8 shows the panel in the application where you can select the resources
to collect. Unless reprinting to AFP printers with 100% fidelity is a requirement, we recommend that you do not collect fonts.
The iSeries server does not collect the fonts, nor does it give the administrator that option. The Resource Information window (under Indexer
Image data
To optimize performance with storing and retrieving image formats such as TIFF,
GIF, and JPEG, we recommend that you do not compress the data because the
file sizes might actually increase. To turn off compression, select the Disable
option from the Load Information tab within the application (see Figure 6-9).
© Copyright IBM Corp. 2003, 2006, 2007. All rights reserved. 185
7.1 Getting started
PDF is a standard specified by Adobe Systems, Incorporated, for the electronic
distribution of documents. PDF files are compact; can be distributed globally via
e-mail, the Web, intranets, or CD-ROM; and can be viewed with the Acrobat
Reader.
Type 1 fonts
Type 1 fonts, described in detail in Adobe Type 1 Font Format, by Adobe
Systems Incorporated, are special-purpose PostScript® language programs
used for defining fonts. They use a specialized subset of the PostScript language
for more compact representation and optimized performance.
Compared with the larger Type 3 fonts, Type 1 fonts can be defined more compactly because they use a special procedure for drawing the characters that results in higher-quality output at small sizes and low
resolution. They also have a built-in mechanism for specifying hints, which is data
that indicates basic features of the character shapes not directly expressible by
the basic PostScript language operators.
Because the only external resources for a PDF document are the base 14 fonts,
each PDF document embeds any other fonts that are not members of the base
14 fonts. In the event that a PDF file has a company logo or image on a set of
pages, that logo or image is also embedded within the document. Barcode fonts
are embedded within the PDF document as well.
Notes:
Adobe Acrobat Approval, an alternative to Adobe Acrobat, was previously
offered by Adobe. It is no longer sold or supported by Adobe as of
September 2004. For information about transitioning to Acrobat Reader
Extensions Server or Adobe Acrobat, refer to the following Web address:
http://www.adobe.com/products/acrapproval
Users access PDF documents through OnDemand clients depending on
the OnDemand parameters and software installed on the workstations.
OnDemand provides the ARSPDF32.API file to enable PDF viewing from the
client. If you install the client after you install Adobe Acrobat, then the installation
program copies the application programming interface (API) file to the Acrobat
plug-in directory. If you install the client before you install Adobe Acrobat, then
you must copy the API file to the Acrobat plug-in directory manually. Also, if you
upgrade to a new version of Acrobat, then you must copy the API file to the new
Acrobat plug-in directory.
Here is an example of why the initial PDF file size can grow to be multiple times
larger than the original. Suppose that OnDemand receives a PDF data stream
with the following characteristics:
A 100-page PDF file includes one non-base 14 font.
A company logo is displayed on every fifth page.
A barcode font is displayed on every fifth page that describes the customer
account number.
The file is 100 KB in size, and OnDemand is required to index the document
into twenty 5-page documents that represent twenty customer accounts with
five pages of customer information each.
The output PDF file is generated from the original PDF in the following manner:
Each of the 20 PDF documents includes any and all fonts that are not
members of the base 14 fonts. OnDemand has 20 copies of any non-base 14
font in the resultant indexed PDF file. The 10 KB of font becomes 200 KB
worth of fonts.
Each of the 20 PDF documents has a copy of the company logo instead of
one copy of the company logo for the non-indexed 100-page document. A
compressed image of 25 KB is now 500 KB.
Each of the 20 PDF documents has a copy of the barcode font required to
print the customer account information instead of one copy for the
non-indexed 100-page document. A barcode font of 5 KB is now 100 KB.
As you can see, the original 100 pages with one company logo, one barcode font,
and one non-base 14 font has become 20 wholly contained PDF documents with
one copy of the non-base 14 font each, one copy of the barcode font each, and
one copy of the company logo each.
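The growth in this example can be checked with a short calculation; the figures are the ones quoted in the example (60 KB of text, plus a 10 KB font, a 25 KB logo, and a 5 KB barcode font duplicated into 20 documents).

```java
// Worked arithmetic for the splitting example above: every embedded
// resource is copied into each of the 20 indexed documents, while the
// text is simply divided among them.
public class PdfSplitGrowth {
    // Indexed size = text (split, not duplicated) + resources duplicated per document
    static int indexedSizeKb(int documents, int textKb, int resourcesKb) {
        return textKb + documents * resourcesKb;
    }

    public static void main(String[] args) {
        int documents = 20;            // twenty 5-page customer documents
        int textKb = 60;               // text in the original 100 KB file
        int resourcesKb = 10 + 25 + 5; // font + logo + barcode font
        System.out.println("original: " + (textKb + resourcesKb) + " KB");                        // 100 KB
        System.out.println("indexed: " + indexedSizeKb(documents, textKb, resourcesKb) + " KB");  // 860 KB
    }
}
```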
The indexed file size is around 860 KB, 60 KB being the actual text in the 100 KB
original file, and 40 KB being the resources in the original file, which expanded to
800 KB through duplication. Considering that the logo, the barcode font, and
possibly the non-base 14 font are images, you can see how the file size
multiplies. Because the fonts, logos, barcodes, and pages are extracted into separate documents, extra hardware, disk, and RAM are required to accomplish this task. For a discussion of what this means in terms of
Fonts
The PDF indexer must be able to access fonts to insert appropriate information in
a PDF output file. If a font is referenced in an input file but is not available on the
system, the PDF indexer substitutes a font, usually Courier.
All installations should verify that the standard Adobe font files are installed in the
standard font directory. For installations that plan to use the PDF indexer to
access double-byte character set (DBCS) fonts, verify the locations of the DBCS
font files and export or add the ACRO_RES_DIR and PSRESOURCEPATH
environment variables.
Graphical indexer
Since Version 7.1 of the administrative client, it has been possible to define PDF
reports within the application component of OnDemand graphically in the same
way as line data and AFP. The principle of indexing PDF data is the same as all
of the other data types supported by OnDemand; therefore, triggers, fields, and
indexes must be defined. This section serves as an introduction to the PDF
graphical indexer by stepping through an example of indexing a PDF document.
The following process is an extract from “PDF Indexing”, in IBM Content Manager
OnDemand for Multiplatforms Release Notes for Version 7.1.0.10, which comes
with the Content Manager OnDemand for Multiplatforms software. The example
describes how to use the graphical indexer from the Report Wizard to create
indexing information for an input file. The indexing information consists of a
trigger that uniquely identifies the beginning of a document in the input file and
the fields and indexes for each document. Our intention here is to elaborate on
this example by clarifying some of the instructions, and throughout each step,
adding important hints, tips, and explanations.
7. Press the F1 key to open the main help topic for the report window.
8. The main help topic contains general information about the report window
and links to other topics that describe how to add triggers, fields, and indexes.
Under Options and Commands, click Indexer Information page to open the
Indexing Commands topic. (You can also use the content help tool to display
information about the icons on the toolbar.) Under Tasks, Indexer Information
page, click Adding a trigger (PDF).
9. Close any open help topics and return to the report window.
10.Define a trigger as follows:
a. Find a text string that uniquely identifies the beginning of a document, for
example, Account Number, Invoice Number, Customer Name.
b. Using the mouse, draw a box around the text string. Start just outside of
the upper left corner of the string. Click and then drag the mouse towards
the lower right corner of the string. As you drag the mouse, the graphical
indexer uses a dotted line to draw a box. When you have enclosed the text
string completely inside of a box, release the mouse. The graphical
indexer highlights the text string inside of a box.
Important: We recommend that the box that you create around the text string that you are trying to collect be as large as possible to ensure that the field is collected at load time.
Member T1PFA14 contains the statements as shown in Example 7-4. It can point
to any data set containing the ADOBE fonts.
Table 7-2 Data control block used for ADOBERES data set
Space LRECL RECFM BLKSIZE DSORG
ADOBEFNT DD is used by the OnDemand PDF indexer. It specifies the data set
that is used by the OnDemand PDF indexer at run time to hold the names of the
fonts that are located in the font path and in the runtime directory.
Important: This file normally does not exist if the PDF indexer has not been used before. The indexer does not create it, and if you run the arsload procedure with a non-cataloged data set, it fails with a JCL error. You must create this data set prior to running the arsload procedure. You cannot simply allocate an empty data set, because the indexer looks at the end of the data set and searches for the ASCII end-of-file tag, which is X’0A’. Edit the allocated data set in hex and put X’0A’ at the end of the file.
The DCB in Table 7-3 is used for the ADOBEFNT data set.
TEMPATTR DD is used by the OnDemand PDF indexer. It specifies the data set
that contains the attributes of the temporary working space for the PDF libraries.
The content of the data set is as shown in Example 7-5.
If the PDF data set is not transferred correctly to the z/OS system, you might see
ABENDS and DUMPS when trying to index it.
Limitations
There are some system limitations that you must consider when using the
OnDemand PDF indexer:
The PDF indexer cannot process data sets that are greater than 4 GB in size.
The PDF indexer supports DBCS languages. Since IBM does not provide any
DBCS fonts, you must purchase them from Adobe.
Input data delimited with PostScript pass-through markers cannot be indexed.
The Adobe Toolkit does not validate links, destinations, or bookmarks to other
pages in a document or to other documents. Links or bookmarks might or
might not resolve correctly, depending on how you segment your documents.
PostScript data generated by applications must be processed by a conversion
program such as Acrobat Distiller® before you run the PDF indexer. Acrobat
Distiller, however, is not available in a z/OS environment.
8.1 ODWEK architecture
The OnDemand Web Enablement Kit was designed as a toolkit for Web
developers to create browser-based interfaces to OnDemand servers. ODWEK
Version 8.3, with the latest level of maintenance applied, provides access to
OnDemand servers on all supported server platforms, including UNIX, Linux,
Microsoft Windows NT, iSeries, and zSeries.
When the ODWEK code is installed, four options are available for the
communication method back to an OnDemand server. In this section, we
introduce these access methods, which you should use in conjunction with the
following two manuals:
IBM Content Manager OnDemand - Web Enablement Kit Installation and
Configuration Guide, SC18-9231
IBM Content Manager OnDemand for Multiplatforms Release Notes for
Version 7.1.0.10 (available with the Content Manager OnDemand for
Multiplatforms software)
CGI
The CGI implementation of ODWEK is probably the simplest of the four
access points to configure. Refer to “Deploying the CGI program”, in Chapter 2,
“Installation and Configuration”, in IBM Content Manager OnDemand - Web
Enablement Kit Installation and Configuration Guide, SC18-9231, for details
about the files that must be placed within the HTTP Web server environment.
For CGI access to OnDemand, a Web server is required to execute the CGI
program, but an application server is not required. An instance of the arswww.cgi
program is launched every time a user performs an operation that requires
connection to the OnDemand server, such as search and retrieval. Refer to
Chapter 6, “Performance” on page 159, to learn about issues
associated with this.
The fundamental difference between the servlet and the CGI implementation of
ODWEK is that the servlet is managed by an application server, while the CGI is
executed by an HTTP Web server. This means that rather than executing many
instances of the code and therefore allocating memory for these multiple threads,
as in the case of CGI, the servlet runs as a single instance, and memory is
managed by the application server. To learn about the issues associated with
this, see Chapter 6, “Performance” on page 159.
Java API
The servlet that is supplied with ODWEK after a standard installation of the
product is a working example of a servlet that has been written using the Java
APIs in order to access OnDemand. When using these APIs with ODWEK, an
application server and an HTTP Web server are required if clients require access
to OnDemand via a browser. However, unless you are using the Java APIs to write a Web application, neither a Web server nor an application server is required.
For example, if a program is written using these Java APIs for bulk retrieval of
documents from OnDemand, you might want to send the objects to a file system
without using a Web server or an application server.
Portlets
A portal provides personalized access to a variety of applications and aggregates
disparate content sources and services. The OnDemand Portlets developed with
Java APIs aggregate OnDemand with other applications. This implementation
requires an HTTP Web server, a Java enabled application server, and a portal
server.
Assuming that you use a browser to view information from OnDemand, there are
two places where custom code must be written to integrate with the ODWEK
APIs. The first place is the HTML Web pages, which present the user interface,
such as logon, search, and hit-list pages; the second place is for a custom servlet
that is application specific. Figure 8-1 illustrates the various components within
ODWEK that make up the three access points into an OnDemand server and
shows the software that is required for each.
[Figure 8-1 residue: the diagram shows the HTTP server (CGI program), the application server (ODWEK servlet and custom servlet), and the OnDemand server]
The three HTML documents represent the three access points into the
OnDemand server. Figure 8-1 illustrates all three methods of linking to an
The HTML samples provide basic functions to search and retrieve documents
from OnDemand, but they do not show samples of all of the possible APIs that
are available for use. The HTML in Example 8-1 on page 204 is a sample of the
Uniform Resource Locator (URL) APIs that are used within an HTML document.
The HTML is a lengthy example, but demonstrates several uses for the URL
APIs.
The code shown in Example 8-1 is derived from a production application. The
server names, IP addresses, and various other sensitive data such as user IDs
and passwords have been removed or altered.
To complete this code sample, we include in Example 8-2 the source of the policy.htm file that is referenced by the HTML sample in Example 8-1 on page 204.
</BODY>
</NOFRAMES>
</HTML>
To compile and run these samples, some preliminary work must be done:
Ensure that the Java Development Kit is installed as a prerequisite.
Ensure that the ODWEK shared library (ARSWWWSL.dll on UNIX and
Windows) is accessible:
– Windows: The ODWEK installation directory must be in the system path.
– UNIX: The user running the Java program should have the ODWEK
installation directory as part of the PATH variable.
– OS/390: In UNIX System Services, the user running the Java program
should have the ODWEK installation directory as part of the PATH
variable.
Ensure that the directory that contains the ODAPI.jar file is in CLASSPATH.
The Java API uses the arswww.ini file, although it does not require a Web
server to run. Ensure that all referenced paths in the arswww.ini file exist.
if (argv.length < 4)
{
System.out.println("usage: java Search <server> <user> <pw> <config
dir> [<local server dir>]");
return;
}
try
{
program_start = new Date();
odServer = new ODServer ();
odServer.initialize(argv[3],
"Search.java");
Retrieving a single document from OnDemand requires a single search and retrieval from an OnDemand server. If several hundred or several thousand documents must be retrieved, then a separate search and retrieve for each document adversely affects performance. If bulk retrieval of documents is
required, the CALLBACK API must be used, which means that the documents to
be retrieved are collated at the server and then sent back to the custom program
as a single operation.
The code in Example 8-4 demonstrates the use of an extended version of the
CALLBACK class, which is called MyCallback, and it is supplied in Example 8-5
on page 215.
public
class SearchWithCallback
{
public static void main (String argv[])
{
int rc;
int numFolders;
byte[] data;
FileOutputStream file;
ODServer odServer;
ODFolder odFolder;
ODCriteria odCrit;
if (argv.length < 4)
{
System.out.println("usage: java test <server> <user> <pw> <config
dir> [<local server dir>]");
return;
}
try
{
odServer = new ODServer ();
odServer.initialize(argv[3],
"/servlets/TestServlet");
if (argv.length == 4)
odServer.logon(argv[0],
argv[1],
argv[2]);
else if (argv.length == 5)
odServer.logon(argv[0],
argv[1],
argv[2],
ODConstant.CONNECT_TYPE_LOCAL,
0,
argv[4]);
numFolders = odServer.getNumFolders("C%");
System.out.println("number of folders is: " + numFolders);
odServer.terminate();
}
catch (ODException e)
switch (op)
{
case ODConstant.OPEqual:
s = "Equal";
break;
case ODConstant.OPNotEqual:
s = "Not Equal";
break;
case ODConstant.OPLessThan:
s = "Less Than";
break;
case ODConstant.OPLessThanEqual:
s = "Less Than or Equal";
break;
case ODConstant.OPGreaterThan:
s = "Greater Than";
break;
case ODConstant.OPGreaterThanEqual:
s = "Greater Than or Equal";
break;
case ODConstant.OPIn:
s = "In";
break;
case ODConstant.OPNotIn:
s = "Not In";
break;
case ODConstant.OPLike:
s = "Like";
break;
case ODConstant.OPNotLike:
s = "Not Like";
break;
case ODConstant.OPBetween:
s = "Between";
break;
}
return s;
}
if (argv.length < 4)
{
System.out.println("usage: java Logon <server> <user> <pw> <config dir>
[<local server dir>]");
return;
}
try
{
odServer = new ODServer ();
System.out.println("calling initialize with "+argv[3]);
odServer.initialize(argv[3],
"Logon.java");
System.out.println("Did the Initialize");
odServer.setServer(argv[0]);
odServer.setUserId(argv[1]);
odServer.setPassword(argv[2]);
odServer.setPort(1445);
odServer.logon();
System.out.println("Did a Logon");
System.out.println("Information is "+info);
hshApprovalVals.put("Account", "100-000-000");
odhit.update(hshApprovalVals);
odServer.logoff();
System.out.println("Logged off");
odServer.terminate();
System.out.println("Terminated the server object");
}
catch (ODException e)
{
System.out.println("ODException: " + e);
System.out.println(" msg = " + e.getErrorMsg());
System.out.println(" msg = " + e.getErrorId());
}
catch (Exception e2)
{
System.out.println("exception: " + e2);
e2.printStackTrace();
}
}
}
The portlets are available in the IBM Workplace™ Solutions catalog at the
following Web address:
http://catalog.lotus.com/wps/portal/portlet/catalog
Two portlets are delivered in OnDemand Portlets: Main and Viewer portlets.
The Viewer portlet can interact with the Main portlet to display documents on the
same portal page instead of in new browser windows. Both the Main and Viewer
portlets must be on the same portal page. Several types of viewers can be used
to view the documents based on the ODWEK configuration.
Hardware requirements
The OnDemand Portlets are supported on Windows, AIX, Solaris, and Linux.
The portlets have been tested with WebSphere Portal 5.1 on the following
platforms:
Windows 2000 and Windows 2003
AIX 5.2
Solaris 9.0
Linux RHEL 3 and RHEL 4
Software requirements
The OnDemand Portlets require that the following software products be installed on the Portal Server machine where they will be deployed:
WebSphere Application Server Version 5.1.1 or later
WebSphere Business Integration Server Foundation Version 5.1.1
WebSphere Portal Enable for Multiplatform Version 5.1.0.1
IBM DB2 Content Manager OnDemand Web Enablement Kit 7.1.2.5
The OnDemand Portlets have been tested against OnDemand servers running
on the following platforms:
OnDemand V7.1 for Solaris
OnDemand V7.1 for z/OS
OnDemand for i5 Common Server
OnDemand V7.1 for AIX
OnDemand V7.1 for Windows
OnDemand V7.1 for Linux (Red Hat and SUSE)
There are several ways to view documents with the OnDemand Portlets based
on viewer configuration. Using the OnDemand Portlets specific configuration, the
viewer portlet can be used for viewing multiple documents in a tabbed pane, or
the main portlet can be used to launch new browser windows. The type of viewer
to be used for a given document type is completely controlled by the ODWEK
configuration. For more information about the ODWEK related viewing
configuration and viewing engines, refer to IBM Content Manager OnDemand -
Web Enablement Kit Installation and Configuration Guide, SC18-9231.
Logon panel
The Main Portlet provides a logon interface with the OnDemand server list
according to the arswww.ini file. See Figure 8-3.
It does not update Section 3, “Deploying the servlet using WebSphere Tools”, nor
Subsection 2, “Installing the Application”. For more information, refer to the IBM
WebSphere Information Center at the following Web address:
http://publib.boulder.ibm.com/infocenter/ws51help/index.jsp
When you reach this Web address, search the topic ID “trun_appl”.
We recommend that you use the WebSphere tools to deploy the servlet. The
WebSphere tools automatically configure the HTTP server and Web application
server configuration files. If you are an experienced Web server administrator,
you might choose not to use the WebSphere tools and deploy the servlet by
manually configuring the HTTP server and Web application server configuration
files.
To use the WebSphere tools to deploy the servlet, follow these steps:
1. Copy the files.
2. Deploy the servlet using the WebSphere tools.
3. For Windows systems, copy these files to the directory in which you copied
the shared library:
– ARSSCKNT.DLL
– ARSCT32.DLL
4. Copy the following files to the HTTP server directory. See Table 8-2.
– ARSWWW.INI
– AFP2HTML.INI
– AFP2PDF.INI
Table 8-2 Copy of the ODWEK INI files according to the operating system
Operating system HTTP server directory
There are two steps in deploying the servlet using the WebSphere tools:
1. Assemble the application with the WebSphere Application Assemble Tool.
2. Install the application from the WebSphere administration console. See
“Installing the application”, in IBM Content Manager OnDemand - Web
Enablement Kit Installation and Configuration Guide, SC18-9231.
AIX ASTK_install_root/astk
HP-UX ASTK_install_root/astk
Linux ASTK_install_root/astk
Solaris ASTK_install_root/astk
10.The system asks whether you want to switch to the J2EE perspective. See
Figure 8-17. Click Yes.
Figure 8-20 Importing class files from the local file system
Note: You must specify the URL pattern in the format /od/*, where od is
the user-defined name.
22.Select the ConfigDir parameter value. See Figure 8-27 on page 249.
Change it from the default to one that is appropriate for the runtime platform.
See Table 8-4 on page 249.
This value specifies the location of the ARSWWW.INI file. See step 4 on
page 231.
If you follow these steps completely, when you deploy the Web application, the
context-root becomes the Web project. When testing the URL to access the
servlet, you enter:
http://hostname/OnDemandWEKWeb/od/arswww
We describe two conversion solutions that integrate with OnDemand: the IBM
AFP2WEB Services Offerings from the team that created the AFP data stream, and
Xenos d2e from Xenos. We also explain how to index composed AFP documents
that are generated without the requisite tags for indexing.
9.1 Overview of data conversion
To work with data conversion, it is important that you understand which data
conversions are required, and when and how to convert the data. Perform
detailed planning before you begin building your solution to help you to achieve a
design that remains efficient for many years to come.
In this section, we discuss why to perform data conversion, when to perform it,
and how to make the conversion.
We briefly discuss two different data types and the factors that affect this
decision:
Xerox metacode
AFP to PDF
If you want to display Xerox metacode through an OnDemand client, you must
convert it to something else. If you intend to use a PC client, you must convert
the metacode to AFP or PDF at load time since the thick client does not support
retrieval transform. If you intend to use OnDemand to reprint the metacode
documents to the original metacode printer, then the documents must be stored
as metacode. When the documents are stored as metacode, the only way to view
them is to enable the retrieval transform through OnDemand Web Enablement
Kit (ODWEK) and convert them to either HTML, PDF, or XML.
AFP to PDF
If there is a requirement to present AFP documents in the PDF format over the
Web, the best way is to store the documents in their native format and then
convert them to PDF at retrieval time. This is because AFP documents are stored
much more efficiently than PDF. When multiple AFP documents refer to the same
resources, these resources are stored only one time and are shared among the
AFP documents.
PDF documents are the complete opposite. All the resources necessary to present
a PDF document are contained within the document. The PDF document is
larger than the original AFP, and the entire print stream, when it is divided into
separate customer statements, is much larger, because each individual
statement holds all the resources. See 7.2, “Indexing issues with PDF” on page 188, for an example of how a 100 KB PDF document can be indexed into twenty separate five-page PDF documents, with a total of 860 KB in the resulting indexed file size.
Timing is essential to the decision as well. The amount of time needed to convert
the document depends on how large it is and how many resources or fonts are
associated with the document.
The IBM AFP2WEB solution is a service offering from the team that created AFP, and it focuses mainly on AFP.
In the following two sections, we discuss the supported environments, the main
functions, and the way these solutions integrate with OnDemand. We also
provide some samples.
[default]
ScaleFactor=1.0
CreateGIF=TRUE
FontMapFile=fontmap.cfg
ImageMapFile=imagemap.cfg
The structure of the afp2html.ini file is similar to a Windows INI file. It contains
one section for each AFP application and a default section. The title line of the
section identifies the application group and application.
For example, the title line [CREDIT-CREDIT] identifies the CREDIT application
group and the CREDIT application. Use the - (dash) character to separate the
names in the title line. The names must match the application group and
application names defined to the OnDemand server. If the application group
contains more than one application, then create one section for each application.
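Following that naming convention, an afp2html.ini for the CREDIT example might look like the following sketch. The default section and parameter names come from the sample shown earlier; the values in the [CREDIT-CREDIT] section, including the file names creditfonts.cfg and creditimages.cfg, are hypothetical.

```ini
[default]
ScaleFactor=1.0
CreateGIF=TRUE
FontMapFile=fontmap.cfg
ImageMapFile=imagemap.cfg

; Overrides for documents in the CREDIT application group and
; CREDIT application (hypothetical values)
[CREDIT-CREDIT]
ScaleFactor=1.5
CreateGIF=TRUE
FontMapFile=creditfonts.cfg
ImageMapFile=creditimages.cfg
```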
One advantage of the applets is that users never have to install or upgrade
software on the PC to use them. When using the applets and viewers that are
provided by IBM, the documents that are retrieved from an OnDemand server
remain compressed until they reach the viewer.
The viewer uncompresses the documents and displays the pages in a Web
browser window. If a document is stored in OnDemand as a large object, the
viewer retrieves and uncompresses segments of the document, as needed,
when the user moves through the pages of the document.
Note: If you enable Large Object support for very large documents and
specify 1, then your users might experience a significant delay before they can
view the document.
The ScaleFactor parameter scales the output with the given scale factor. The
default value is 1.0. For example, specifying a value of ScaleFactor=2.0 scales
the output to be twice as large as the default size; specifying a value of
ScaleFactor=0.5 scales the output to one half of the default size. The default size
is derived from the Zoom setting on the Logical Views page in the OnDemand
application.
The SuppressFonts parameter determines whether the AFP text strings are
transformed. If you specify SuppressFonts=TRUE, any text that uses a font listed in
the Font Map file is not transformed. The default value is FALSE, which means
that all of the AFP text strings are transformed. The Font Map file is identified
with the FontMapFile option.
The FontMapFile parameter identifies the full path name of the Font Map file.
The Font Map file contains a list of fonts that require special processing. The
default Font Map file is named imagfont.cfg and resides in the directory that
The ImageMapFile parameter identifies the image mapping file. The image
mapping file can be used to remove images from the output, improve the look of
shaded images, and substitute existing images for images created by the
AFP2HTML transform. Mapping images that are common across your AFP
documents (for example, a company logo) reduces the time required to transform
documents. If specified, the image mapping file must exist in the directory that
contains the AFP2HTML programs. See the AFP2WEB transform documentation
for details about the image mapping file.
Control files
Three control files are used by AFP2HTML:
The font map file, by default imagfont.cfg, is listed in the afp2html.ini file. See
the AFP2WEB transform documentation for details about this file.
The image map file is listed in the afp2html.ini file. We present some samples
of how to use it in “Mapping AFP images” on page 261.
The transform profile file, by default afp2web.ini, contains parameters that
control settings for AFP2HTML:
– ResourceDataPath specifies the directories that the transform uses to
search for AFP resources.
– FontDataPath specifies the base directory that the transform uses to
search for the font configuration files.
– PageSegExt sets the file extension to be used when searching for a page
segment file in a resource directory. For example, if all of the page segment
resource files have the file extension .PSG, you can set:
PageSegExt=*.PSG
– OverlayExt sets the file extension to be used when searching for an
overlay file in a resource directory. For example, if all of the overlay
resource files have the file extension .OLY, you can set:
OverlayExt=*.OLY
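Taken together, a minimal transform profile (afp2web.ini) might contain entries such as the following sketch. The directory values are placeholders, not taken from a real installation:

```ini
ResourceDataPath=C:\afp2web\resources
FontDataPath=C:\afp2web\fonts
PageSegExt=*.PSG
OverlayExt=*.OLY
```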
The configuration file handles all the transform processing for the images. For
example, when the transform program is run against an AFP document and an
image is encountered, the program looks for a matching image entry in the
configuration file. If an entry is defined that matches the name, position, size, or a
combination of the three, the information in the configuration file is used for the
transform of the image.
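The matching rule can be sketched in a few lines of Python, purely for illustration: an entry can match an encountered image by name, position, size, or any combination of the three. The dictionary layout here is our own invention; the real entry syntax is documented with the AFP2WEB transform.

```python
# Illustrative model of the image-entry matching rule described above.
# An entry field left as None acts as a wildcard, so an entry can match
# on the name, the position, the size, or any combination of the three.

def entry_matches(entry, image):
    for key in ("name", "position", "size"):
        want = entry.get(key)                 # None acts as a wildcard
        if want is not None and want != image.get(key):
            return False
    return True

def find_entry(entries, image):
    """Return the first matching configuration entry, or None."""
    for entry in entries:
        if entry_matches(entry, image):
            return entry
    return None

entries = [
    {"name": "S1LOGO", "substitute": "logo.jpg"},   # match by name only
    {"position": (0.5, 0.5), "size": (2.0, 1.0)},   # match by position and size
]

print(find_entry(entries, {"name": "S1LOGO", "position": (1.0, 1.0)}))
# → {'name': 'S1LOGO', 'substitute': 'logo.jpg'}
```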
Removing images
If no other information is given as part of an entry in the image map configuration
file, such as extra lines between the image position and size definitions and the
IMAGE_END tag, the entry is considered empty. Then the image is removed
from the transformed GIF files. The image information that the transform program
generates is empty by default.
AFP2HTML command
As seen before, the conversion function is called automatically from OnDemand.
However, for other purposes, such as creating the image map file or testing the
conversion result, you can use the afp2web command.
See the AFP2WEB transform documentation for details about the afp2web
command parameters.
Example 9-8 shows sample lines from the afp2pdf.ini file, which is used when
AFP is set to be converted to PDF.
[default]
OptionsFile=
ImageMapFile=imagemap.cfg
AllObjects=0
The structure of the file is similar to a Windows INI file. It contains one section for
each AFP application and a default section. The title line of the section identifies
the application group and application.
For example, the title line [CREDIT-CREDIT] identifies the CREDIT application
group and the CREDIT application. Use the - (dash) character to separate the
names in the title line. The names must match the application group and
application names defined to the OnDemand server. If the application group
contains more than one application, create one section for each application.
The parameters that you specify in the [default] section are used by the
AFP2PDF transform to process documents for AFP applications that are not
identified in the afp2pdf.ini file. The default parameters are also used if an AFP
application section does not include one of the specified parameters.
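The fallback behavior can be modeled with Python's configparser. This is an illustrative sketch of the lookup rules described above, not product code; the sample content mirrors the [default] section shown earlier, and [CREDIT-CREDIT] is a hypothetical application section:

```python
import configparser

def transform_params(ini_text, app_group, application, keys):
    """Resolve transform parameters for an application, falling back to the
    [default] section for unlisted applications and for missing parameters."""
    cfg = configparser.ConfigParser()
    cfg.optionxform = str                    # keep parameter names as written
    cfg.read_string(ini_text)
    section = f"{app_group}-{application}"   # title line, e.g. [CREDIT-CREDIT]
    if section not in cfg:
        section = "default"                  # unlisted applications use [default]
    params = {}
    for key in keys:
        if key in cfg[section]:
            params[key] = cfg[section][key]
        elif key in cfg["default"]:
            params[key] = cfg["default"][key]   # per-parameter fallback
    return params

sample = """\
[default]
OptionsFile=
ImageMapFile=imagemap.cfg
AllObjects=0

[CREDIT-CREDIT]
ImageMapFile=credit.cfg
"""

print(transform_params(sample, "CREDIT", "CREDIT",
                       ["OptionsFile", "ImageMapFile", "AllObjects"]))
# → {'OptionsFile': '', 'ImageMapFile': 'credit.cfg', 'AllObjects': '0'}
```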
The OptionsFile parameter identifies the full path name of the file that contains
the transform options used by the AFP2PDF transform. The transform options
are used for AFP documents that require special processing. The ImageMapFile
parameter identifies the image mapping file.
The image mapping file can be used to remove images from the output, improve
the look of shaded images, and substitute existing images for images created by
Note: If you enable Large Object support for very large documents and
specify a value of 1, then your users might experience a significant delay
before they can view the document.
Control files
Two control files, listed in the afp2pdf.ini file, are used by AFP2PDF:
The image map file: We present some samples of how to use it in “Mapping AFP
images” on page 268.
The options file, by default a2pxopts.ini, contains parameters that control
settings for AFP2PDF. We present some of the most important parameters in
“Option file” on page 273.
Mapping images gives you the option of handling AFP images in different ways
during the transform process, including:
Removing images
You can remove all or some of the images from your transformed output.
Substituting existing images
You can substitute all or some of the images in the PDF output with
previously generated JPEG images.
The configuration file handles all the transform processing for the images. For
example, when the transform program is run against an AFP document and an
image is encountered, the program looks for a matching image entry in the
configuration file. If an entry is defined that matches the name, position, size, or a
combination of the three, the information in the configuration file is used for the
transform of the image.
The image information for each image is recorded in pairs of lines. The first line
contains the page-segment resource name (only if available), the position values
in inches, and the size values in inches. The second line ends the entry for the
image. The first value for the position and size gives the horizontal dimension,
and the second gives the vertical dimension. The position measurements are for
the upper left-hand corner of the image relative to the upper left-hand corner of
the page.
The copy of the lines in the output file (imagemap.out in this example) is used to
create the image-map configuration file (imagemap.cfg by default).
The image information in the configuration file is used to identify the images in
the AFP document during the transform process. Each IMAGE tag along with its
corresponding IMAGE_END tag defines a single image information entry in the
configuration file.
Removing images
If no other information is given as part of an entry in the image map configuration
file, such as extra lines between the image position and size definitions and the
IMAGE_END tag, the entry is considered empty and the image is removed from
the transformed PDF files. The image information that the transform program
generates is empty by default.
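As an illustration only, an image map configuration file might contain entries conceptually like the following. The resource name, inch values, and the substitution line are invented here; the exact entry syntax is described in the AFP2WEB transform documentation:

```text
IMAGE S1LOGO 0.50,0.50 2.00,1.00
IMAGE_END

IMAGE S1CHART 1.00,4.00 3.00,2.00
chart.jpg
IMAGE_END
```

The first entry is empty, so that image is removed from the output; the second carries an extra line between the definitions and the IMAGE_END tag, so that image is substituted.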
Option file
An options file can be associated with a specific OnDemand application group
and application, or the default options file can be used.
We present some of the options that are specific to the PDF flow. These options
either control information that is made available to the user through the PDF
viewer or set security restrictions that limit the actions a user can take with the
document. See the AFP2PDF transform documentation for details about the
option file parameters.
Security
Protecting the content of the PDF document is accomplished with encryption.
This PDF security feature is supported by the AFP2PDF transform and follows
the password features supported within the Adobe Acrobat product. A PDF
document can have two kinds of passwords, a document open password and a
Permissions password.
When a document open password (also known as a user password) is used, any
user who attempts to open the PDF document is required to supply the correct
password.
Encrypting the PDF in the transform also makes it possible to disable certain
operations when the document is displayed in Adobe Acrobat. Using letter
codes, any combination of the following operations can be disabled:
c Modifying the document's contents
s Copying text and graphics from the document
a Adding or modifying text annotations and interactive form fields
p Printing the document
Both types of passwords can be set for a document. If the PDF document has
both types of passwords, a user can open it with either password.
The Adobe products enforce the restrictions set by the permissions, owner, or
master password.
Note: Not all products that process PDF files from companies other than
Adobe fully support and respect these settings. Users of these programs
might be able to bypass some of the restrictions set.
See the AFP2PDF transform documentation for details about the afp2pdf
command parameters.
The AFP2XML GUI displays the document similarly to its print output. It allows
the user to select Triggers and Attributes and define parsing rules without
requiring any specific AFP knowledge.
You can extract the significant data from your AFP file and place it into an XML
format, which is a more interchangeable format.
The AFP document can be retrieved from OnDemand using one of the ODWEK
APIs and then converted to XML using the Java or C API. The XML file can be
manipulated for any use of the available information in this interchangeable
format.
The information is extracted from the archived AFP documents and placed in an
XML file so that it can be integrated into the electronic and bill presentation
application.
Figure 9-7 AFP document tagged with TLEs for OnDemand indexing
The Xenos transforms are batch application programs that let you process these
various input data types by converting the data, indexing on predefined
parameters, and collecting resources to provide proper viewing. The Xenos load
transform produces the index file, the output file, and the resource file, which
ARSLOAD uses to update the database and object server.
The Xenos transforms can be run either when loading the input files into the
system, or alternatively, dynamically when the documents are retrieved via the
OnDemand Web Enablement Kit.
If transforming the data at load time, the Xenos transforms listed in Table 9-1 are
available.
Table 9-1 Available Xenos transforms: transforming data at load time
From To
AFP PDF
Metacode AFP
Metacode PDF
Metacode Metacode
PCL PDF
If transforming the data dynamically when it is retrieved via ODWEK, the Xenos
transforms listed in Table 9-2 are available.
Table 9-2 Available Xenos transforms: transforming data dynamically through ODWEK
From To
AFP PDF
AFP HTML
AFP XML
Metacode AFP
Metacode HTML
Metacode PDF
Metacode XML
Figure 9-8 on page 284 shows, at a high level, how the Xenos transforms fit into
the OnDemand environment. It shows the resources and the AFP, metacode, or
PCL printstream being fed into the ARSLOAD program. When ARSLOAD runs, it
checks to see which indexer to use. If the application specifies Xenos, then the
Xenos transforms are called and run with the predefined parameter and script
files.
Figure 9-8 How the Xenos transforms fit into the OnDemand environment
AFP2XML example
For our example, we use an AFP customer billing statement that is stored in
OnDemand. We want our customers to be able to retrieve this document in a
Web-ready format without having to rely on the AFP Web Viewer. Our Web
developers have stated that if they can extract the pertinent pieces of information
in a standard XML format, they can use XSL or CSS to format the document. See
Figure 9-9 on page 286 to view the AFP statement as retrieved from the
OnDemand PC client.
The fields that we want to extract are highlighted with a box. We want to extract
the account ID, the amount due, the start and end dates, and the usage. We also
want to extract the current and previous bill sections and all of the charges that
are included in these sections.
The Xenos parameter file is a text file that contains required and optional parser
and generator parameters. Examples of these parameters are the names of the
input and output files and the locations of the document resources, such as
fonts, pagedefs, and formdefs. Also included in this parameter file is a list of the
fields to be pulled from the document and where these fields are located.
Five types of job-related parameters can be defined in the parameter file. They
are organized by type and begin with a section heading. The five sections are
Job Script, Parser, Generator, Display List, and Distributor. Each of these
sections contains many required and optional fields depending on the data type
that is being parsed and generated. Refer to Xenos d2e Platform User Guide,
which comes with the Xenos offering by Xenos Group Incorporated, for a full
description of this file.
Table 9-3 provides a parameter file summary, with a description for each
parameter section that is applicable to our AFP2XML conversion.
Job Script (JS:): This section indicates the names and locations of the Xenos
d2e Script Library. The Dmsl.lib library is required, and the conversion fails if
this library is not defined. This section also defines the variables to be used in
the d2e script.
Generator (TMerge:): This section controls how the new document is generated.
The XML generator uses the Template Merge Facility, which scans the
template for variable names and then replaces them with variable values from
the document. In our parameter file, the PREFIX and SUFFIX parameters tell
the template merger the characters that define the beginning and ending of a
variable in the template file.
Display List (DLIST:): This section controls how special features, such as
bookmarking and URL links, and fields are generated. Our display list
parameters tell d2e where to locate each field in the input AFP file.
AFPPARSER:
CC = YES
FDAFPFONTS = 'D:\d2eDevStudio\AFP2XML\Resources\%s.fnt'
FDFORMDEFS = 'D:\d2eDevStudio\AFP2XML\Resources\%s.fde'
FDMFCT = 'D:\d2eDevStudio\AFP2XML\AFP2XML.tab'
FDOVERLAYS = 'D:\d2eDevStudio\AFP2XML\Resources\%s.ovr'
FDPAGEDEFS = 'D:\d2eDevStudio\AFP2XML\Resources\%s.pde'
FDPAGESEGS = 'D:\d2eDevStudio\AFP2XML\Resources\%s.psg'
FORMDEF = 'f1mbibi0'
PAGEDEF = 'p1mbibi0'
POSITION = WORD
TMERGE:
PREFIX = '&&'
SUFFIX = '.'
DLIST:
PARMDPI = 100
PAGEFILTER = ALL
FIELDNAME = 'PAGE'
FIELDWORD = %30
FIELDPHRASE = %400
FIELDLOCATE = ('CurrentBill','BILLING INFORMATION')
FIELDLOCATE = ('EndCurrent','CURRENT GAS BILLING')
FIELDLOCATE = ('EndPrevious','PREVIOUS BALANCE')
FIELDNAME = 'ACCTID'
FIELDBOX = (367,78,519,98)
FIELDWORD = %20
FIELDPHRASE = %500
FIELDNAME = 'AMTDUE'
FIELDBOX = (714,530,816,578)
FIELDWORD = %20
FIELDPHRASE = %500
FIELDNAME = 'STARTDATE'
FIELDBOX = (585,81,641,99)
FIELDWORD = %20
FIELDPHRASE = %500
FIELDNAME = 'USAGE'
FIELDBOX = (730,130,769,148)
FIELDWORD = %20
FIELDPHRASE = %500
FIELDNAME = 'CURRBILL'
FIELDBASE = ('',41,'',220,'',800,'EndCurrent',30)
FIELDWORD = %20
FIELDPHRASE = %5000
FIELDTABS = (0,450)
FIELDNAME = 'PREVIOUSBILL'
FIELDBASE = ('EndCurrent',-40,'EndCurrent',40,'',800,'EndCurrent',200)
FIELDWORD = %20
FIELDPHRASE = %5000
FIELDTABS = (0,450)
FIELDNAME = 'Field'
FIELDBOX = (176,119,263,163)
FIELDWORD = %20
FIELDPHRASE = %500
The job parameter file can either be typed manually using any ASCII editor or
created graphically using the d2e Developer Studio. In most cases, it is a
combination of both. Developer Studio has a graphical AFP parser that can
render the input file as a bitmap. This allows for the graphical selection of field
locations for data extraction. We used the Developer Studio wizard to create the
parameter file and locate the fields, and then updated this file manually for the
resource locations and the script variables.
/* variables */
TRUE = 1
FALSE = 0
doc_open = FALSE
AFP_Parser_h = dm_StartParser("AFP")
TMerge1_h = dm_StartGenerator("TMerge")
/* get page */
dlpage_h = dm_GetDLPage(AFP_Parser_h)
/* This section parses through the current charges section detail lines */
/* and loads them into the XML template as the detail ITEM and detail NUMBER */
rc = dm_GetMultiText(dlpage_h,"currbill",curr_cnt)
DO i = 1 to curr_cnt.0
PARSE var curr_cnt.i.dm_textdata item '09'x amount
/* This section parses through the previous charges section detail lines */
/* and loads them into the XML template as the detail ITEM and detail NUMBER */
rc = dm_GetMultiText(dlpage_h,"previousbill",prev_cnt)
DO i = 1 to prev_cnt.0
PARSE var prev_cnt.i.dm_textdata item '09'x amount
rc = dm_TMergeWrite(TMERGE1_h, project_resource_path || "itemizedline.tpl")
END
END
/* get page */
END
rc = dm_TMergeClose(TMERGE1_h)
RETURN
The dms script in Example 9-18 on page 291 refers to several template (.tpl) files.
These are the files that are used to create the XML output file. See Example 9-19
for the bill.tpl file that is referenced. Each && begins a variable to be replaced with
the actual value from the AFP file before it is inserted into the output XML file.
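Example 9-19 itself is not reproduced here, but given PREFIX = '&&' and SUFFIX = '.' in the parameter file, a bill template conceptually contains fragments like the following reconstruction. The tag names echo the field names defined earlier; the actual bill.tpl may differ:

```xml
<BILL>
<ACCTID>&&ACCTID.</ACCTID>
<AMTDUE>&&AMTDUE.</AMTDUE>
<STARTDATE>&&STARTDATE.</STARTDATE>
<USAGE>&&USAGE.</USAGE>
</BILL>
```

The template merger replaces each &&NAME. variable with the value extracted from the AFP document before the lines are written to the output XML file.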
Configuring ODWEK
After the transform is set up on the Xenos side, you must configure ODWEK to
run the transform. You must make the changes explained in the following
sections.
You must also update the AfpViewing parameter in the [default browser] section.
When ODWEK retrieves an AFP document from the OnDemand server, the
value of this parameter determines what action, if any, that ODWEK takes before
sending the document to a Web browser. To convert AFP documents to HTML,
PDF, or XML output with the Xenos transform, specify AfpViewing=Xenos so that
ODWEK calls the Xenos transform to convert the AFP document before sending
it to a Web browser. The type of output that is generated is determined by the
value of the OUTPUTTYPE parameter in the ARSXENOS.INI file.
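For example, the relevant lines in the arswww.ini file reduce to the following (other parameters in the section are omitted):

```ini
[default browser]
AfpViewing=Xenos
```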
Note: This change affects all AFP data on the system; it is not limited to an
application group or folder. If you want to override this, you may specify the
_afp parameter of the Retrieve Document API.
Example 9-20 on page 295 shows our ARSXENOS.INI file. The first section
specifies the application group and the application, separated by a dash. Our
application group and application are both named xenos-xml.
[default]
ParmFile=D:\Xenos\afp2pdf\sample.par
ScriptFile=D:\Xenos\noindex.dms
LicenseFile=D:\Program Files\xenosd2e\licenses\dmlic.txt
OutputType=pdf
WarningLevel=8
AllObjects=1
The ParmFile parameter identifies the full path name of the file that contains the
parameters that are used by the Xenos transform to convert the document. This
points to the afp2xml_rb.par file discussed earlier.
The ScriptFile parameter identifies the full path name of the file that contains the
script statements that are used by the Xenos transform to create the output file.
This points to the afp2xml_rb.dms script discussed earlier.
The LicenseFile parameter identifies the full path name of a valid license file
obtained from Xenos.
The OutputType parameter determines the type of conversion that the Xenos
transform performs. If the input document is AFP, you can set this parameter to
HTML, PDF, or XML. If the input document is metacode, you can set this
parameter to AFP, HTML, PDF, or XML. Since we are converting from AFP to
XML, our parameter is XML.
Note: If you enable large object support for very large documents, then your
users might experience a significant delay before they can view the document.
The Xenos AFP2XML transform is then invoked and the XML document is sent to
the browser. You can see the XML output in Figure 9-10 on page 297.
Figure 9-11 AFP document displayed in its native format with AFP Web Viewer
This XML document is presented with standard tags and can be displayed using
a variety of XML display methods. We created a simple cascading style sheet
(CSS) definition file that takes each parameter from the XML tagged file and
presents it to a browser. This file is named disp_bill.css. To call this CSS
file and tell the browser to associate it with the XML file, we had to make a
change to the template file that is called from our dms script. To tell the browser to
use the disp_bill.css file to present the XML file, we added two lines to the
startfile.tpl template file as follows:
<?xml version='1.0'?>
<?xml-stylesheet type="text/css" href="D:\Xenos\disp_bill.css"?>
<BILLFILE>
<BILL>
Now when we click the document in the hit list, the browser sees the first two
lines in the XML file that tell the browser to present the document with the style
sheet. The document now appears as shown in Figure 9-12.
AMTDUE
{display: block;
background-image: url(amtdue.gif);
background-repeat: no-repeat;
margin-left: 100px;
margin-right:600px;
background-color: #CCCCCC;
font-size: 16pt;
font-weight: bold;
border: thin ridge;
padding-left: 200px
}
STARTDATE
{display: block;
background-image: url(strtdate.gif);
background-repeat: no-repeat;
margin-left: 100px;
margin-right:600px;
background-color: #CCCCCC;
font-size: 16pt;
font-weight: bold;
border: thin ridge;
padding-left: 220px
}
ENDDATE
{display: block;
background-image: url(enddate.gif);
background-repeat: no-repeat;
margin-left: 100px;
margin-right:600px;
background-color: #CCCCCC;
font-size: 16pt;
To enable the Xenos transforms on the load side, you must specify the indexer to
be Xenos in the OnDemand application. When ARSLOAD sees an indexer of
Xenos, it calls the Xenos transform with the parameter and script files that are
specified in the application. Xenos creates three files to be sent back to
ARSLOAD, the index file, the output file, and a resource file. ARSLOAD uses
these files to update the database and object server.
When using Xenos to parse the input printstream file, it is possible to allow the
transform to pull indexes from the file and to split the file into logical documents.
When doing this, it is important to define the same database fields in both Xenos
and OnDemand. Xenos provides an .ind file that contains each field and the
value. If OnDemand receives too many or too few indexes, or if the indexes are
of a different data type, the load fails.
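Conceptually, the .ind file pairs each database field name with its value in the style of the Generic Indexer parameter file. The following sketch is invented for illustration; the GROUP_OFFSET, GROUP_LENGTH, and GROUP_FILENAME lines match those written by the sample script later in this chapter, while the field name and value are placeholders:

```text
GROUP_FIELD_NAME:acctid
GROUP_FIELD_VALUE:000-000-0000
GROUP_OFFSET:0
GROUP_LENGTH:2048
GROUP_FILENAME:afp2pdf_sample.out
```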
When defining the Xenos parameter and script files in the application, there are
two methods: the detail method and the filename method. In our example
(Figure 9-13), we used the filename method, where we point to the full paths of
the files. Using the filename method is a way to reuse the same files between
multiple applications and is a better method of separating the OnDemand and
Xenos administration.
Optionally, you can paste the entire script and parameter file directly into the
indexing parameters screen. Using the detail method allows the OnDemand
administrator to view the Xenos parameters without the need to access any other
system. It is also a way to manage multiple versions of scripts and applications.
There is never any question about which application is using which script files. If
the printstream data changes, a new application can be created with a new script
and parameter file included. When using the detail method, the parameter file
details must be included between an OnDemandXenosParmBegin tag and an
OnDemandXenosParmEnd tag.
When calling the transform from ARSLOAD, be sure not to have any input or
output file names hardcoded in the script or parameter file. If you have an input
file listed in the fdinput or inputfile parameters, the Xenos transform runs with a
return code of 0, but it does not process the data that ARSLOAD is sending. If
you have the output files defined in the fdoutput, outputfile, indexfile, or
resourcefile parameters, Xenos also runs fine, but ARSLOAD shows the
message, “Output/Indexer file was not created Indexing Failed.”
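A quick way to catch this mistake before handing the files to ARSLOAD is to scan them for those parameters. The helper below is our own troubleshooting sketch, not part of the Xenos offering; only the parameter names come from the text:

```python
import re

# Scan a Xenos parameter or script file for hardcoded input and output
# file parameters, which must not be present when ARSLOAD calls the
# transform. Lines are checked after stripping trailing /* comments */.

HARDCODED = ("fdinput", "inputfile", "fdoutput", "outputfile",
             "indexfile", "resourcefile")

def hardcoded_files(parm_text):
    found = []
    for line in parm_text.splitlines():
        line = line.split("/*")[0]               # strip trailing comments
        m = re.match(r"\s*(\w+)\s*=", line)
        if m and m.group(1).lower() in HARDCODED:
            found.append(m.group(1))
    return found

parms = """\
CC = YES
inputfile = 'D:\\Xenos\\afp2pdf\\input\\sample.afp'
scaleby = 100
"""

print(hardcoded_files(parms))   # → ['inputfile']
```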
Any error messages that come from the Xenos transforms are written to the
system log and can be viewed in message number 87, failed load. All success
details that come from the Xenos transforms can be viewed in message number
88, successful load.
call dm_Initialize
par_h = dm_StartParser(Parser);
gen_h = dm_StartGenerator(Generator);
/* initialize */
do i = 1 to NumberOfFields
fieldvaluesave.i = ""
if Break.i \= "no" & Break.i \= "NO" then
do
Break.i = "yes"
end
end
file_open = FALSE
dlpage = dm_GetDLPage(par_h);
do while(dlpage \= 'EOF')
if file_open = FALSE then do
select
when generator = 'PDF' then
rc = dm_PDFGenOpen(gen_h, '{GROUPFILEENTRY}'outputfile);
when generator = 'AFP' then
rc = dm_AFPGenOpen(gen_h, '{GROUPFILEENTRY}'outputfile);
when generator = 'META' then
rc = dm_METAGenOpen(gen_h, '{GROUPFILEENTRY}'outputfile);
otherwise do
say 'Invalid generator'
return 12
end
end
if rc = 0 then do
file_open = TRUE
end
end
do i = 1 to NumberOfFields
fieldvalue.i = dm_GetText( dlpage, field.i, First )
end
docbreak = 0
do i = 1 to NumberOfFields
if fieldvalue.i \= "" then do
/* if there is no previous value, save the current value */
if fieldvaluesave.i = "" then do
fieldvaluesave.i = fieldvalue.i
end
else
/* if there is a previous value, see if the new value is different */
if fieldvaluesave.i \= fieldvalue.i then do
if Break.i = "yes" then
docbreak = 1
if docbreak = 1 then do
select
when generator = 'PDF' then rc = dm_PDFGenClose( gen_h )
when generator = 'AFP' then rc = dm_AFPGenClose( gen_h )
when generator = 'META' then rc = dm_METAGenClose( gen_h )
end
file_open = FALSE
rc = dm_DASDSize(dasd_h)
BytesWritten = dm_size
length = BytesWritten - save_BytesWritten
offset = BytesWritten - length
save_BytesWritten = BytesWritten
group_offset = "GROUP_OFFSET:"||offset
rc = dm_DASDWrite( index_h, group_offset )
group_length = "GROUP_LENGTH:"||length
rc = dm_DASDWrite( index_h, group_length )
group_filename = "GROUP_FILENAME:"||outputfile
rc = dm_DASDWrite( index_h, group_filename||crlf )
select
when generator = 'PDF' then
rc = dm_PDFGenOpen(gen_h, '{GROUPFILEENTRY}'outputfile);
when generator = 'AFP' then
rc = dm_AFPGenOpen(gen_h, '{GROUPFILEENTRY}'outputfile);
when generator = 'META' then
rc = dm_METAGenOpen(gen_h, '{GROUPFILEENTRY}'outputfile);
end
if rc = 0 then do
select
when generator = 'PDF' then rc = dm_PDFGenWrite(gen_h, dlpage );
when generator = 'AFP' then rc = dm_AFPGenWrite(gen_h, dlpage );
when generator = 'META' then rc = dm_METAGenWrite(gen_h, dlpage );
end
dlpage = dm_GetDLPage(par_h);
end
rc = dm_DASDSize(dasd_h)
BytesWritten = dm_size
length = BytesWritten - save_BytesWritten
offset = BytesWritten - length
save_BytesWritten = BytesWritten
group_offset = "GROUP_OFFSET:"||offset
rc = dm_DASDWrite( index_h, group_offset )
group_length = "GROUP_LENGTH:"||length
rc = dm_DASDWrite( index_h, group_length )
group_filename = "GROUP_FILENAME:"||outputfile
rc = dm_DASDWrite( index_h, group_filename )
rc = dm_DASDClose( dasd_h )
rc = dm_DASDClose( index_h )
return;
JS:
scriptvar=('Parser', 'AFP')
scriptvar=('Generator', 'PDF')
scriptvar=('NumberOfFields', 1)
scriptvar=('Field.1','Name')
AFPDL-AFPP:
/* AFP Parser Options */
formdef = f1a10111
pagedef = p1a06462
CC = on
trc = off
startpage = 0
stoppage = 0
native = no
position = word
/* File Defs */
FDpagesegs = 'D:\xenos\afp2pdf\Resources\%s.psg'
FDafpfonts = 'D:\xenos\afp2pdf\Resources\%s.fnt'
FDpagedefs = 'D:\xenos\afp2pdf\Resources\%s.pde'
FDformdefs = 'D:\xenos\afp2pdf\Resources\%s.fde'
FDoverlays = 'D:\xenos\afp2pdf\Resources\%s.ovr'
FDfontcor = 'D:\xenos\afp2pdf\Resources\master.tab'
FDResGrpOut = 'D:\xenos\afp2pdf\Output\sample.res'
ResGrpOption = (FormDefs, PageSegs, Overlays)
PDFGEN-PDFOUT:
/* PDF Out Generator Options */
/* output file name being set in the script */
offset = (0,0)
scaleby = 100
border = NONE
Compress = (NONE,NONE,NONE)
orient = AUTO
AFPDL-DLIST:
parmdpi = 300
pagefilter = all
resfilter = all
FieldName = (PAGE)
FieldWord = (20, and, %20)
FieldPhrase = (%100)
FieldPara = (%500)
/* extract name */
FieldLocate = ('InsName', 'Insured')
FieldName = ('Name')
FieldBase = ('InsName', +275,
'=', -35,
'=', +800,
'=', +30)
The second method allows the Web Enablement Kit to call the Job Supervisor
program when a document is requested from ODWEK. This calls Job Supervisor
with one specific document, allows the document to be transformed, and then
sends the transformed data back to the browser. This method is discussed in
9.3.1, “Converting AFP to XML through ODWEK” on page 285.
The third method of calling the Job Supervisor program is from a command line.
This might be a useful troubleshooting technique, because it runs Xenos without
any connection to OnDemand and allows you to pinpoint any problems. The Job
Supervisor program can also be used to print the locations of text strings found
in the document.
Developer Studio: Developer Studio allows you to define fields from the
document to be used for indexes. Two types of fields are of interest, the
absolute field and the relative field. The absolute field is defined by the x and
y coordinates of a box around the text of interest. The relative field is useful for
extracting data that has different positions in a page, but can be found in
relation to another piece of text on the page. For more information about using
Developer Studio, refer to Xenos d2e Developer Studio User Guide, which
comes with the Xenos offering by Xenos Group Incorporated.
Figure 9-14 shows the syntax when running Job Supervisor from a command
line to convert data.
Here, -parms is a file that contains the Job Supervisor parameter (.par) file and
the Job Supervisor script (.dms) file. All the parameters are required, but you
may place the inputfile, outputfile, and resourcefile parameters in the .par or the
.dms file instead of in the command line.
We ran the Job Supervisor program from the command line with our parameter
file and script file from the previous AFP2PDF example to ensure the index file
was being created correctly. We did this before setting up the PDF application
group to ensure that Xenos was working correctly before wrapping the
ARSLOAD around it. ARSLOAD runs the same command with the same syntax.
We ran the command as shown in Example 9-24.
Example 9-24 Running the Job Supervisor program from a command line
js -parms="D:\Xenos\afp2pdf\parms_afp2pdf"
-report="D:\Xenos\afp2pdf\output\sample.rep"
-scriptvar=inputfile="D:\Xenos\afp2pdf\input\afp2pdf_sample.afp"
-scriptvar=indexfile="D:\Xenos\afp2pdf\output\afp2pdf_sample.ind"
-scriptvar=outputfile="D:\Xenos\afp2pdf\output\afp2pdf_sample.out"
-scriptvar=resourcefile="D:\Xenos\afp2pdf\output\afp2pdf_sample.res"
-licensefile="D:\Program Files\xenosd2e\licenses\dmlic.txt"
This transform creates an .ind file in the format of the Generic Indexer parameter
file. This file can then be loaded into OnDemand with the ARSLOAD command.
10.1 Introduction to Report Distribution
Report Distribution provides many of the same functions as other parts of the
OnDemand system such as querying the database for documents, retrieving the
documents from various types of storage media, and providing the ability to print
them on a server printer. If these functions are already available in OnDemand,
why would you want to use Report Distribution? The answer is simple:
automation.
Normally, when you think of an archival and retrieval system, you think of large
numbers of documents being stored and only a small number of documents
being retrieved. So what benefit do automation and scheduling provide?
The biggest benefit is that as reports are loaded into OnDemand on a regular
basis, they can be delivered automatically to one or more users soon after they
are loaded. Also, after the distribution is set up, no other changes are required
such as changing the document selection criteria to identify the latest data that is
loaded.
For example, a company creates monthly sales reports and archives them in
OnDemand. The reports are needed by sales managers to analyze the results
and to plan for future sales and inventory. By using Report Distribution, the
delivery of the monthly sales report can be automated so that the sales
managers receive the report via e-mail once a month as soon as the report is
available in OnDemand. Other examples include auditing that is performed on a
periodic basis and workflow items such as processing overdue accounts for
credit cards, utility bills, or doctors’ bills.
The applications for using Report Distribution are endless, but the basis for
using it is the same: documents are loaded on a regular basis and are needed
by one or more users as they become available in OnDemand. Let us look at a
specific example.
Acme Art Inc. is a company that sells artwork and art supplies. Each month a
sales report is created for each region and is archived in OnDemand. After the
reports are archived, the regional sales managers need a copy of the reports for
planning purposes and for restocking inventory.
Figure 10-1 List of monthly sales reports for Acme Art Inc.
In this example, even though there are separate regional sales reports per
month, they are loaded at the same time so there is only one load per month.
This information is important when you are determining the best way to set up
the distribution. Before a distribution is set up, you should ask yourself the
following four W questions:
What documents are needed?
Who will receive the documents?
When will the documents be retrieved and delivered?
Where will they be delivered?
In general, you identify the documents by creating a database query using index
fields and values that uniquely identify the documents you want to retrieve. For
Report Distribution, another method can be used to identify the documents that
you want to retrieve. Instead of querying the database, you can simply retrieve all
or some of the documents as they are being loaded.
To illustrate this, let us look at the example again. Once a month, regional sales
reports are loaded into OnDemand. Since the load contains all the documents
that are needed, we can identify and retrieve all of the documents from the load.
Later, when we explain how to set up a distribution using the administrative client,
we go into more detail about how to define a set of documents that will be
delivered.
You might ask why this type of schedule is used rather than a monthly schedule
since the reports are loaded on a monthly basis. When a load-based schedule is
used, the extraction and delivery of the documents is triggered when the data is
loaded. Report Distribution periodically looks to see if data has been loaded. If it
has, then the documents are extracted and delivered. When a monthly schedule
is used, the extraction and delivery process are performed on a specified day of
the month. If, for some reason, the data is not loaded by the specified day, the
delivery fails.
This section assumes that you have already loaded data into OnDemand, so an
application, application group, and folder have already been defined. For the
example we mentioned earlier, we will include banner pages and multiple
reports; the reports will be sent to multiple users.
The logical order to define the distribution objects is to follow the order of the four
W questions (what, who, when, and where). Based on this, the order of defining
the distribution objects is:
1. Defining users/groups
2. Defining reports
3. Defining banners
4. Defining schedules
5. Defining bundles
6. Defining distributions
Some of these objects might already be defined, so they might also be available
for use. For example, users and groups are already used in OnDemand, so you
may not have to define these objects. Also, if this is not the first distribution that
you are defining, you can use existing banners, bundles, schedules, and reports
if appropriate.
For the example, we presume that the users have already been defined and that
all sales managers who will receive the monthly regional sales reports already
have e-mail addresses specified for their user IDs since they will receive the
reports via e-mail. If you do not have to create new user IDs, make sure that the
e-mail address or the server printer is specified for each user. Figure 10-4 shows
an example of a user that has an e-mail address and a server printer specified.
Figure 10-4 User with an e-mail address and server printer specified
Figure 10-5 Report window for the Northwest regional sales report
For the example, the load report type with an SQL query is used. Since the
regional sales reports are in the same load, an SQL query is not necessary.
However, by using the SQL query, the reports can be retrieved separately and
placed in the bundle in any order. Separator banner pages can also be added to
identify each report in the bundle. Another reason for creating separate reports is
that it gives you the flexibility to deliver the regional sales report to the
appropriate regional sales manager rather than sending both reports. This
requires creating a separate bundle and distribution for each report.
Along with selecting the report type, you must select an application group where
the data is loaded. The application group must be selected before the SQL query
can be defined.
In Figure 10-6, an SQL query string is defined for the Northwest region by
selecting application group database fields and operators to create the SQL
query string. A segment date field can also be specified so that the query is
limited to a smaller number of database tables.
Figure 10-6 SQL Query window for the Northwest regional sales report
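As a sketch, the SQL query string for the Northwest report might look like the following (the field name region and its value are hypothetical; use the database field names that are defined in your application group):

```sql
region = 'Northwest'
```

The second report would use a corresponding query string for the MidWest region.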
After all of the information is entered for the report, click OK to save the report.
The second report can be created in a similar way. In 10.2.4, “Adding a bundle”
on page 326, we explain how to create a bundle.
For the example, a header banner, a separator banner, and a trailer banner are
used. Figure 10-7 shows the banner window for a separator banner.
You must enter the banner name and a description, and then select a banner
type. For Banner Type, select the kind of banner that you are creating. After you
select the banner type, the list of information that can be included on the banner
page is displayed in the Available Information list. You can add one or more lines
of information to the banner page, and the lines can be arranged in any order.
For the example, a load-based schedule must be used since the reports are
defined using a report type of load. Figure 10-9 on page 326 shows the schedule
window for a load-based schedule.
As part of the schedule definition, a start date, end date, and delivery time are
specified. In the case of a load-based schedule, the Start Date field specifies the
first day that you want the Report Distribution to start looking for documents that
have been loaded into OnDemand beginning at the time specified in the Delivery
Time field. For example, starting on 19 January 2004 at 10:00 AM, Report
Distribution periodically queries the load identifier database table to see if any
regional sales reports have been loaded.
E-mail notification messages can be sent to one or more users during bundle
processing. The message types are error messages, warning messages,
progress messages, and completion messages. If you want to notify more than
one user, a group can be used. A progress message is sent after each report in
the bundle has been processed.
Figure 10-11 on page 328 shows the Bundle Contents tab of the Add a Bundle
window. In this window, you decide which reports to include in the bundle. If
banner pages are used, you also specify which banner pages to use.
For the example, the bundle contains a header banner, a separator banner, and a
trailer banner as well as the two reports that were created earlier. The first report
in the bundle is MidWest Monthly Sales Report and the second report is
NorthWest Monthly Sales Report. The reports can be included in the bundle in
any order; the order is determined by how you add them to the Bundle Contents
list. You can change the order of the reports by moving them up and down in the list.
The field titles on the banner pages and manifest can be created in many
different languages. The choices are Arabic, Chinese (Simplified), Chinese
(Traditional), Danish, Dutch, English, Finnish, French, French Canadian,
German, Italian, Japanese, Korean, Norwegian, Portuguese (Brazil), Spanish,
and Swedish. When selecting a banner language, you should consider that the
banner pages are converted to the code page of the data.
After you make all of the selections for the bundle, click OK to save it.
E-mail notification messages can be sent to one or more users during distribution
processing. The message types are error messages, warning messages,
progress messages, and completion messages. If you want to notify more than
one user, a group can be used. Progress messages are sent after each recipient
in the distribution has been processed. A completion message is sent after the
distribution has been processed for all of the recipients.
Figure 10-14 on page 331 shows the Schedule tab of the Add a Distribution
window. When you create a distribution, you do not have to select a schedule, or
you can select one without activating it. Of course, if the distribution does not
have an activated schedule, the reports in the selected bundle are not delivered.
For the example, we set the Distribution Schedule to Load Based Schedule that
was created earlier and activate it.
Figure 10-15 on page 332 shows the Recipients tab of the Add a Distribution
window. In this window, you select the recipients that are going to receive the
reports. For the example, the regional sales managers will receive the reports so
the user IDs of the managers have been added to the Selected Recipients list. If
there are several users that require the same set of reports, you can choose to
add the users to a group and add the group to the Selected Recipients list. The
two regional sales managers could have been added to a group and then the
group would have been used instead of the individual user IDs.
The check mark next to the user ID of each recipient in the list indicates that the
recipient is active for the distribution. If the check mark is removed from the box,
the recipient is deactivated and will not receive the reports. You can use this
feature to temporarily deactivate the recipient, if for example, the recipient is on
vacation and does not need the reports. If there is only one recipient for the
After all of the selections are made for the Add a Distribution window, click OK to
save the distribution. All of the steps required to set up the extraction, bundling,
and delivery of the regional sales reports are now completed.
Start the Report Distribution program on the server, and you are ready to begin
receiving the regional sales reports after they are loaded into OnDemand.
Other options that you can specify include how often Report Distribution looks for
distributions that are ready to be processed (Number of minutes between
schedule searches) and the number of times an operation should be retried.
Messages that are generated during the extraction, bundling, and delivery stages
of Report Distribution can optionally be logged. They are viewable using the
Windows client by opening one of the folders that was created during Report
Distribution installation.
If you use the e-mail delivery option or need to send e-mail notification
messages, you must specify a Simple Mail Transfer Protocol (SMTP) server
address that processes the e-mail messages that are generated by Report
Distribution. You can optionally specify address information that is specific to
your company such as a return e-mail address and an e-mail address to use for
correspondence. You can also use a file that contains company-specific
information or other types of information that you want to include with the delivery
of the documents. If a global attachment file is specified, it is attached as a
separate file to the e-mail message that contains the documents that were
extracted.
11.1 Introduction to user exits
A user exit is a point during processing that enables you to run a user-written
program and return the control of processing after your user-written program
ends. There are a few different kinds of exits. In this chapter, we discuss the exits
based on the following grouping:
ACIF Indexing
– Input record exit
– Index record exit
– Output record exit
– Resource exit
System administration
– System log exit
– Print exit
Customized functions
– Fax options exit
– Load exit
– Permissions exit
– Preview exit
– Security exit
– Storage management external cache exit
– Tablespace create exit
OnDemand provides data at each exit that can serve as input to the user-written
programs. Using these exits, it is possible to perform functions such as e-mailing
based on events in the system, updating index values via a print request,
cleaning up data as it is loaded into OnDemand, and accessing external security
managers. The possibilities of the OnDemand exits are endless. We provide
some samples here that act as a guide for creating customized user exit
programs.
Note: Always recompile all customized user exits after upgrading the
OnDemand software, because the header files might have changed between
versions.
Note: ACIF exits are called for each and every input, indexing, output, and
resource record. They are not limited to being called only once per file.
In Multiplatforms, ACIF user exits must be written in C. In z/OS, ACIF user exits
must be written in COBOL or Assembler. ACIF exits do not exist on iSeries.
The input exit can be used to insert indexing information. More common uses are
to remove null characters, truncate records, add carriage control, and change
code pages. In general, indexer parameters should reflect what the input record
looks like after the input exit is executed. The only exception is the FILEFORMAT
indexer parameter, which should correspond to the input record before it is
passed to the input exit. For example, if an ASCII stream type file is being loaded,
use the FILEFORMAT=STREAM,(NEWLINE=x’0A’) parameter, not
(NEWLINE=x’25’), an EBCDIC stream delimiter. Otherwise, ACIF does not pass
the correct record to the apka2e input exit.
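A parameter-file fragment for this case might look like the following (the exit path is hypothetical; FILEFORMAT describes the record before the exit converts it):

```
/* Load ASCII stream data and convert it to EBCDIC with the */
/* apka2e input exit (hypothetical path)                    */
inpexit=/usr/lpp/ars/bin/exits/apka2e
fileformat=stream,(newline=X'0A')
cpgid=500
```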
You can either use these as samples to build from, or you can compile them and
run them as is. These programs are documented in IBM Content Manager
OnDemand for Multiplatforms - Indexing Reference, SC18-9235, and are
described briefly in the following sections.
When using the apka2e exit, you must manually change your indexing
parameters:
Change CPGID=500.
Change the HEX codes for the triggers and fields from ASCII to EBCDIC. If
you do not do this, you receive ACIF return code 16, stating that it cannot find
trigger1 or any fields.
We used Hexedit to determine the new EBCDIC values and typed them into the
parameter file. If you do not have such a program, you can find ASCII-to-EBCDIC
conversion tables on the Internet.
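For example, a trigger that matches the ASCII string “PAGE 1” must be restated in EBCDIC hexadecimal after apka2e is in use (the trigger position and options shown here are hypothetical):

```
/* Before apka2e: "PAGE 1" in ASCII hex        */
TRIGGER1=*,1,X'504147452031',(TYPE=GROUP)
/* After apka2e: the same string in EBCDIC hex */
TRIGGER1=*,1,X'D7C1C7C540F1',(TYPE=GROUP)
```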
See 11.2.5, “Debugging user exit programs” on page 344, for further information
about how to update indexing parameters.
A good use of this program is for an application that needs to pull an index from a
source other than the document. The application group can be set up with a
default index; then the user exit program can grab the appropriate index from this
secondary source and replace the default value that was in the index record. The
record is then sent back to ACIF.
long
INDXEXIT( INDXEXIT_PARMS *exitstruc )
{
   if ( exitstruc->eof != IDX_EOFLAG )
   {
      /************************************************/
      /* Look for TLE with attribute name "mmddyy"    */
      /* (the remaining byte comparisons of the       */
      /* attribute name are omitted in this excerpt)  */
      /************************************************/
      if ( exitstruc->record[13] == 0x6D )
      {
         /************************************************/
         /* TLE length is now 40 (was 30)                */
         /************************************************/
         exitstruc->record[ 2] = 0x28;
         /************************************************/
         /* Attribute value count is now 12 (was 10)     */
         /************************************************/
         exitstruc->record[19] = 0x0C;
         /************************************************/
         /* Relocate attribute qualifier triplet X'80'   */
         /* and change mmddyy to mm/dd/yy                */
         /************************************************/
         exitstruc->record[30] = exitstruc->record[28];
         exitstruc->record[29] = exitstruc->record[27];
         exitstruc->record[28] = 0x61;
         exitstruc->record[27] = exitstruc->record[26];
         exitstruc->record[26] = exitstruc->record[25];
         exitstruc->record[25] = 0x61;
         /**********************************************/
         /* Record length has increased to 41 (was 39) */
         /**********************************************/
         exitstruc->recordln = 41;
      }
   }
   return( 0 );
}
Example 11-2 shows a sample output exit program that deletes records from the
output file. This program checks each structured field to determine whether it is
an AFP record. If the record does not begin with Hex 5A, the exit program tells
ACIF not to use this record.
long
ACCTOUT( OUTEXIT_PARMS *exitstruc )
{
   /************************************************************************/
   /* Delete all records from the output that do not begin with Hex '5A'  */
   /************************************************************************/
   if ( exitstruc->record[0] != 0x5A )
   {
      /* A request value of 1 tells ACIF to discard the record */
      exitstruc->request = 1;
   }
   return( 0 );
}
The resource exit is best used to control resources at the file name level. For
example, suppose that you intend to send the output of ACIF to PSF and you
only want to send those fonts that were not shipped with the PSF product. You
can code this exit program to contain a table of all fonts shipped with PSF and
filter those from the resource file. The program invoked at this exit is defined in
the ACIF resexit parameter.
ACIF does not invoke the exit for the following resource types:
Page definitions: The pagedef is a required resource for processing
line-mode application output and is never included in the resource file.
Form definitions: The formdef is a required resource for processing print files.
If you do not want the formdef to be included in the resource file, specify
restype=none or explicitly exclude it from the restype list.
Coded fonts: If you specify MCF2REF=CF, ACIF includes coded fonts. By default,
(MCF2REF=CPCS), ACIF processes coded fonts to determine the names of the
code pages and font character sets they reference. This is necessary in
creating Map Coded Font-2 (MCF-2) structured fields.
To set up ACIF to run in stand-alone mode, create an indexing parameter file with
no triggers, fields, or indexes defined. Include your input file and the exit routine in
the parameter file. Then, run arsacif from a command line, pointing to this
parameter file. You can also direct the output to a file. Example 11-3 on page 345
shows the command.
This command writes the output of ACIF, including the input exit processing, to
the output file where you can inspect it and make sure it did what you expected.
You can also use this output file in the graphical indexer to index your post-exit
file, because the exit routine might change the location of your triggers and fields.
Another method is to run arsload with the -i option, which runs indexing only.
This creates the .ind and .out files for you to view.
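A sketch of the two approaches (all paths, file names, and the application group name are hypothetical; verify the exact arsacif and arsload parameters in the Indexing Reference for your platform):

```
# Run ACIF stand-alone against the parameter file
arsacif parmdd=/tmp/exittest.parms

# Or run only the indexing step with arsload
arsload -i -g MySalesReports /tmp/report.afp
```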
Important: Specify the complete path in the inpexit, indexit, resexit, or outexit
parameter. There is nothing more frustrating than trying to debug an exit that
never gets called because another exit with the same name is being invoked
due to the PATH environment variable.
The system log exit is supplied in the arslog file that resides in the bin directory of
the OnDemand install root for each respective platform. If the arslog file is
opened in a text editor, notice that it simply contains comments that provide a
brief description of the exit and the order of the parameters that OnDemand
hands to this exit. By default, the system log exit is not initialized within
OnDemand. Therefore, if you edit the arslog file to capture information, the exit is
not executed automatically.
Tip: The arslog exit file is run by the same user that owns the arssockd
process that is calling this exit. A common reason for getting no response from
this exit is access permissions on either the arslog file itself or files and
directories that are being accessed within arslog.
OnDemand provides an exit for each of the four system logging event points.
These exits allow you to filter the messages and take action when a particular
event occurs. For example, you can provide a user exit program that sends a
message to a security administrator when an unsuccessful logon attempt occurs.
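As a minimal sketch, the filtering logic for such an exit can be written as a shell function (the message number 31 for a failed logon is an assumption; look up the actual number in Messages and Codes):

```shell
# Sketch of an arslog-style filter; positional parameters follow the
# system log exit order: $1 instance, $2 time stamp, $3 log identifier,
# $4 userid, $5 account, $6 severity, $7 message number, $8 message text.
log_filter() {
    # Message number 31 is assumed here to be a failed logon; verify it
    # against Messages and Codes for your release.
    if [ "$7" = "31" ]; then
        echo "ALERT: failed logon by $4 at $2"
    fi
}
```

In a real arslog script, the echo would be replaced by a call to a mail command that notifies the security administrator.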
For simplicity, we have not demonstrated the system log exits across all
supported platforms. We recognize that the scripting languages between
platforms do vary, but the principles that we describe here are uniform across all
supported platforms; only the syntax differs.
exit 0
For the exit sample provided in Example 11-4, we have also provided a small
sample of what the output of this exit might look like as in Example 11-5. For
instance, you can see in the output provided that several unsuccessful attempts
have been made from the same machine and that a different user ID has been
used at each attempt. In this example, by adding parameter 2 ($2) to the output
and re-sorting the file, we can further establish the time of these attempts.
# if [ $6 = "3" ]; then
# print $@ >> /home/archive/InfoMsg.log
# fi
case $7 in
#
# msg num 87 is a successful load
#
87) echo "Instance : $1" >> /arsacif/companyx/arslog.out
echo "Time Stamp : $2" >> /arsacif/companyx/arslog.out
echo "Log Identifier : $3" >> /arsacif/companyx/arslog.out
echo "Userid : $4" >> /arsacif/companyx/arslog.out
echo "Account : $5" >> /arsacif/companyx/arslog.out
echo "Severity: $6" >> /arsacif/companyx/arslog.out
echo "Message Number: $7" >> /arsacif/companyx/arslog.out
echo "Message Text : $8" >> /arsacif/companyx/arslog.out
/arsacif/companyx/control_file.scr "$@" >> /arsacif/companyx/arslog.out
;;
*) ;;
esac
exit 0
Important: For a guide about the codes for each of the message types logged
in the system log, refer to Chapter 2, “Common Server Messages”, in IBM
Content Manager OnDemand - Messages and Codes, SC27-1379. For
example, message number 88 is listed as ARS0088I.
The configuration of the system log exit is done with the administrator client in the
Systems Parameters window (see Figure 11-2).
The exit was link-edited to a normal library that is not AFP authorized.
Example 11-9 on page 357 shows an arsprt file, which updates application group
indexes for a certain document type each time it is sent to a server printer. This is
a real example from a customer where the requirement is for OnDemand to keep
a record of when a document is reprinted. This is achieved by using the
print exit to update the indexes of a document to show the last time that the
document was reprinted and a counter that is incremented to log the number of
times that the document has been reprinted. Comments are inserted into the
sample script in Example 11-9 on page 357 that explain what each part of the
script is doing. The customer name and the IP addresses have been either
altered or removed for reasons of confidentiality.
set -a
set -u
set -m
#set -x
###########################
# 3 stmt's added by #
# Hasse Ryden #
# for debugging #
###########################
#RANDOM=$$
#set -x
#exec 2> /usr/lpp/ars/bin/hasse.log.$RANDOM
RM=/bin/rm
SED=/bin/sed
OS=$(uname)
#
# $1 - Printer Queue Name
# $2 - Copies
# $3 - Userid
# $4 - Application Group Name
# $5 - Application Name
# $6 - Application Print Options
# $7 - Filename to Print
#
# NOTE: It is up to this script to make sure the file is deleted.
# example( -r option on /bin/enq )
#
FILE=$7
OPTS_FILE=${FILE}.opts
NOTES_FILE=${FILE}.notes
if [[ -f ${OPTS_FILE} ]] ; then
DEL=1
PRT_OPTIONS="-o PASSTHRU=fax_file-${FILE}-"
#
# Since I am faxing, make sure that messages are not produced.
# If debugging is needed, then this parameter should be blank.
#
#EXTRA_OPTIONS="-o MSGCOUNT=0"
EXTRA_OPTIONS="-o MSGCOUNT=0"
else
DEL=0
PRT_OPTIONS=
EXTRA_OPTIONS=
fi
RC=$?
if [[ ${RC} = 0 ]] ; then
if [[ ${OS} != AIX ]] ; then
else # HR1-a #
#################################### # HR1-a #
# Test if filename ends up with .0 # # HR1-a #
# If not,skip around code to update# # HR1-a #
# index. This prevents to update # # HR1-a #
# same index several times as only # # HR1-a #
# one .cntl file is created even # # HR1-a #
# when server print is made for # # HR1-a #
# multiple documents and this # # HR1-a #
# script is called one time for # # HR1-a #
# each doc to print. # # HR1-a #
#################################### # HR1-a #
ext=$7 # HR1-a #
ext=${ext##*.} # HR1-a #
if [[ ${ext} = 0 ]] ; then # HR1-a #
#################################### # HR1-a #
# Compute .cntl filname from # # HR1-a #
# supplied parameter $7 # # HR1-a #
#################################### # HR1-a #
fil=$7 # HR1-a #
mine=${fil%.*}.cntl # HR1-a #
#################################### # HR1-a #
# Double check if .cntl file exist # # HR1-a #
#################################### # HR1-a #
if test ! -f $mine # HR1-a #
then echo "File $mine not found" # HR1-a #
exit 1 # HR1-a #
fi # HR1-a #
#################################### # HR1-a #
# Set static variables # # HR1-a #
#################################### # HR1-a #
host=9.99.99.99 # HR1-a #
nohit=no # HR1-a #
applgrp1=ICAlog # HR1-a #
folder1=ICAlog # HR1-a #
applgrp2=applg2 # HR1-a #
folder2=folder2 # HR1-a #
applgrp3=applg3 # HR1-a #
folder3=folder3 # HR1-a #
#################################### # HR1-a #
# Read info from .cntl file # # HR1-a #
#################################### # HR1-a #
fi
else
(
if [[ ${OS} = AIX ]] ; then
echo /bin/enq -r -P "$1" -N $2 -T "${TITLE}" $6 ${EXTRA_OPTIONS} ${PRT_OPTIONS}
${FILE}
else
echo ${BASE_DIR}/lprafp -p "$1" -s "${ARSPRT_HOSTNAME}" -o "COPIES=${2}" -o
"JOBNAME=${TITLE}" -o "TITLE=${TITLE}" $6 ${EXTRA_OPTIONS} ${PRT_OPTIONS} ${FILE}
fi
The sample source code for the OnDemand user exits is provided for all the
platforms. They are placed in the exits directory of the OnDemand install root for
Multiplatforms. As listed in Table 11-1 on page 363, these sample user exit
modules provide a skeleton for you to program the exits.
The header file provides information about how to turn on the user exits. Unless a
different location is specified in the header file, place the compiled user exit
program into the bin/exits directory of the OnDemand installation root. For AIX,
the directory is
/usr/lpp/ars/bin/exits.
The source code must be compiled before use. For UNIX platforms, you can
compile the source code using the sample Makefile that is provided. The
Makefile is in the same exits directory as the sample exits source code.
The first part of the header file is a declaration of all the structures and variables
used. Example 11-10 shows some of the common structures used in the
functions declarations.
#define ARCCSXIT_DOCNAME_SIZE 11
In the following sections, we examine each exit and describe its usage and
functionality.
The fax options exit has a special structure in the header file. As shown in
Example 11-11, the ArsCSXitFaxOptions structure contains the values that you
can predefine for the specific document.
From the header file for fax options exit in Example 11-12, the input to the exit
program is in the structures ArcCSXitApplGroup and ArcCSXitDocFields,
which correspond to the application group information and the document fields.
With this information, you can write a program and provide output using the
structure of ArsCSXitFaxOptions. This allows you to customize the fax
information, such as the recipient company and the fax number, based on the
input such as the account number of the document. When a user faxes a
document, it can be prefilled with the necessary recipient and fax number
according to the document opened. Of course, the user is still free to modify the
fax information options.
int
ARSCSXIT_EXPORT
ARSCSXIT_API
LOADEXIT( ArsCSXitLoadExit *load );
int
ARSCSXIT_EXPORT
ARSCSXIT_API
PERMEXIT( char *userid, ArcCSXitPermExit *perm_exit, int *access );
The input of the exit program is the user ID and the information from the structure
field ArcCSXitPermExit. The output is the access values of the different actions.
The access values of the first two actions determine whether the user has the
right to access the folder and application group during logon. The exit program
can also change the SQL query and the SQL query restriction for the application
group in action 4. Finally, the access value of action 3 determines the permission
to retrieve the document into the hit list.
Action 2
Check Application Group permission using input from appl_grp_perm
Based on your SQL code, output the user access permission
If no access to application group
return (access = 0)
Elseif access defaults to *PUBLIC access
return (access = 1)
Elseif grants access
return (access = 2)
Action 4
Check the SQL Query String using input from sql_query_perm before searching
starts. Based on your SQL code, output the new SQL query string if needed
User will use the new sql string to perform query if it is available
If change in SQL query string is needed
set out_sql = new query string
Else if no change is needed (in_sql will be used)
set out_sql = null
If change in SQL query restriction string is needed
set out_sql_r = new query string
Else if no change is needed (in_sql_r will be used)
set out_sql_r = null
return (access = not used)
Action 3
Check the document access permission after using the SQL query to search
using the input from doc_perm and based on your SQL code
If no access to document (document will not be shown on hitlist)
return (access = 0)
Else grants access
return (access = 1)
To activate the different permission exits, set the following variables in the
ARS.INI file:
To activate the folder or the application permission
SRVR_FLAGS_FOLDER_APPLGRP_EXIT=1
To activate the SQL query exit
SRVR_FLAGS_SQL_QUERY_EXIT=1
To activate the document permission exit
SRVR_FLAGS_DOCUMENT_EXIT=1
You can use the client retrieval preview exit to add, remove, or reformat data
before the document is presented to the client, for example:
Remove pages from the document, for example, banner pages, title pages, or
all pages except the summary page.
Remove specific words, columns of data, or other information from the
document. That is, omit (“white out”) sensitive information such as salaries,
social security numbers, and birth dates.
Add information to the document, for example, a summary page, data
analysis information, and Confidential or Copy statements.
Reformat data contained in the document. For example, reorder the columns
of data.
The client retrieval user exit point can be enabled for more than one
application. However, all applications must be processed by the same
user-written program (only one user-written program is supported). The system
passes the name of the application that is associated with the document to the
user-written program.
The input to the exit program is captured when the user tries to retrieve the
document. Based on the input, such as application group name and the indexes,
you can then use your program to create an output file with the name from
pOutFileName.
Example 11-16 shows the header file of the client retrieval preview exit.
For example, you can program so that when a user retrieves a document from a
particular application group, you can check the name of the account number (the
indexes from the Doc handle) and place a watermark for that document. When
the document is retrieved by the user, the user sees the document with the
watermark.
The retrieval preview user exit can be enabled for all data types except None.
If any of the events or activities occurs, a user-written exit routine can interact
with a security system, such as RACF, to determine whether the given activity is
allowed.
ARSUPERM This C module provides the interface between the OnDemand system
and the ARSUSECX module.
ARSUSEC This C module provides the interface between the OnDemand system
and the ARSUSECX module.
ARSUSECJ This is a sample JCL stream to assemble and bind ARSUSECX and
ARSUSECZ.
ARSUSECX This is the interface module for the MVS Dynamic Exit Facility.
Note: The security exit is an enhancement that is not shipped with the base
code. It is available with PTFs UQ58458 and UQ59190.
All modules are found in the SARSINST library after applying the PTF. The
sequence of this exit, using the MVS Dynamic Exit Facility, is different from the
classical interface with exit modules or a security exit in a CICS® environment.
The kernel code was updated to allow external security. The OnDemand Kernel
code calls a dynamic link library (DLL) as an interface to the exit. Modules
ARSUSEC and ARSUPERM, provided as C source code modules and as
executables, fulfill this function. There is no need to change and recompile them.
The source is delivered mainly to help you understand the entire security exit.
If you want to change them, they have to be recompiled and bound as a C DLL.
These modules communicate with the ARSUSECX module, which is an interface
to the MVS Dynamic Exit Facility. The security exit module ARSUSECZ is the
sample that is delivered with the PTF. It shows how to perform security
checks through a System Authorization Facility (SAF) interface; RACF is one
product that uses SAF. ARSUSECH is a C source code module that declares the
data structure that is passed as input to every exit (such as ARSUSECZ);
ARSUSEA provides the same structure in assembler language.
Tip: The only module that you must change is the provided source code module
ARSUSECZ, to meet your requirements. It must be assembled and linked into a
library that is accessible to the MVS Dynamic Exit Facility.
For example, if your folder permissions are stored in an external security
system without a System Authorization Facility (SAF) interface, this part
must be updated to call that external security system. For demonstration
purposes, Example 11-19 shows a code sample that checks access to an
application group. This sample issues the RACROUTE macro. If a different
external security manager is used, this code must be updated to call that
system properly.
To enable the exit for these events, you must add the following statement to the
ARS.INI file:
SRVR_FLAGS_SECURITY_EXIT=1
For activation of the application group and folder permission exit, refer to
11.4.4, “Permission exit” on page 369.
Note: The sample is designed to process the feedback of the exit one at a
time, even if you are running more than one exit.
Important: The load module must be found in the LPA or an LNKLST data set.
The security exit can handle only the functions that we described earlier. If
you want to restrict access to folders and application groups based on index
values, you can do this with the internal OnDemand security. Access to an
application group is maintained by RACF; when a user has access to the
application group, there is no way to limit access within that application
group with any external security. To limit access to specific application
group data, add a Query Restriction to the application group, which creates an
SQL WHERE clause (for example, a hypothetical restriction such as
branch = 'NY01' limits a user to that branch's documents).
ARSYSPIN creates an intermediate output file that contains one or more spool
files from one or more jobs. The intermediate output file is indexed and stored in
OnDemand using the ARSLOAD program. ARSYSPIN invokes ARSLOAD when
sufficient data has been captured in the intermediate output file. ARSLOAD calls
the indexer program (APKACIF) to extract the index values from the data and
store them in an index file. ARSLOAD adds these index values to the database
and stores the data object.
In addition, you must be sure that the resulting module is link-edited as NOT
REENTRANT and NOT REUSABLE. This is required to allow the local variables
within the COBOL exit code to retain their values, because this exit is
invoked several times during an ACIF run. See Example 11-21, the sample JCL,
for details. The sample source code can be found in the SARSINST library
member ARSSPVIN.
Note: If you are running OnDemand on z/OS, the ACIF indexer runs in an
OS/390 environment. The parameter file that is normally provided in the JCL
for ACIF is instead supplied as the indexer information in the application
definition.
As the header file in Example 11-23 shows, the input is the application
group information, ArcCSXitApplGroup, and the document information,
ArcCSXitDoc. The output is the document data, if it is available.
Example 11-23 Header file for storage management external cache exit
/**********************************************************************/
/* SMEXTCAC - Storage Management External Cache Exit */
/* */
/* This exit is invoked only when data is to be retrieved from an */
/* Application Group that is defined with the External Cache setting */
/* checked. This exit is for specialized applications and is not */
/* normally used. */
/* */
/* 1) Don't return data, only validate whether the document exists. */
/* On Input: buf == NULL */
/* */
/* INPUT: appl_grp */
/* doc */
/* buf */
/* */
/* OUTPUT: */
/* *buf_len = 0 -> Data is not in external cache */
/* -> Otherwise data is in external cache */
/* */
/* 2) Return document data. */
/* On Input: buf != NULL */
/* */
/* INPUT: appl_grp */
/* doc */
/* *buf_len -> #of bytes to retrieve */
To use this exit, you must first load the index of the documents into OnDemand,
and select External Cache when the application group is created. When the
user retrieves a document from OnDemand based on the indexes, the exit is
activated to pull the document from the respective location.
You can also use this exit to perform other actions during a tablespace creation.
This is useful if you must change default parameters for the tablespace, the table,
or the indexes. The changes only affect new creations.
If you do not customize the action, OnDemand uses the defaults. Example 11-25
shows a sample program flow.
Action 2
Is there a need to customize the creation of the table?
If yes
create the table (in the tablespace)
return( created = 1 )
Else
OnDemand creates the table
return( created = 0 )
Action 3
Is there a need to customize the creation of the indexes?
If yes
create the indexes
return( created = 1 )
Else
OnDemand creates the indexes
return( created = 0 )
Action 4
Final call: is there additional work, cleanup, or an update to parameters?
If yes
perform the additional action.
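The program flow above can be reduced to a small C sketch. The action codes and the function name here are assumptions for illustration only; they are not the actual arsutbl interface.

```c
/* Sketch of the decision flow in Example 11-25 (assumed action codes). */
enum { ACTION_TABLE = 2, ACTION_INDEXES = 3, ACTION_FINAL = 4 };

/* Returns 1 when the exit created the object itself (created = 1) and
 * 0 when OnDemand should create it with default parameters. */
int tablespace_exit(int action, int customize)
{
    switch (action) {
    case ACTION_TABLE:
        if (customize)
            return 1;   /* exit creates the table in the tablespace */
        return 0;       /* OnDemand creates the table */
    case ACTION_INDEXES:
        if (customize)
            return 1;   /* exit creates the indexes */
        return 0;       /* OnDemand creates the indexes */
    case ACTION_FINAL:
        /* final call: optional cleanup or parameter updates */
        return 0;
    default:
        return 0;
    }
}
```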
The following statement must exist in the ARS.CFG file that is associated with
the instance so that the arsutbl DLL can be invoked:
ARS_DB_TABLESPACE_USEREXIT=absolute path name
For this example, you must place the arsutbl exit program in the
/usr/lpp/ars/bin/exits directory of the OnDemand installation root.
You can find more information about the tablespace creation exit in the manual
IBM Content Manager OnDemand for Multiplatforms - Installation and
Configuration Guide, SC18-9232.
© Copyright IBM Corp. 2003, 2006, 2007. All rights reserved. 399
12.1 Introduction to the migration tool
With Version 5 Release 3 of OnDemand for iSeries, a tool is available to help
customers migrate from Spool File Archive to the Common Server. Before the
tool was available, the only way to migrate was to re-spool all the archived
reports, archive them into the Common Server, and then delete them from the
Spool File Archive. With AnyStore, it was necessary to rescan or recreate the
original documents and then archive and delete them from the Spool File
Archive.
The migration tool makes the entire process much easier. You can use this tool to
migrate users and user groups, migration policies (including optical storage
groups), report definitions, and indexes. The compressed archived data itself is
not moved, but the Advanced Function Presentation (AFP) resources are moved
to the new integrated file system directories and the indexes and annotations are
moved to new database files. The OS/400 Indexer has been enhanced to
recognize migrated definitions. Also the Common Server programs have been
modified, so that they can locate archived data on optical volumes, tapes, and in
the /QIBM/UserData/RDARS/SpoolFile path in the integrated file system. The
end result is that new data can be archived into the Common Server and users
can still retrieve the migrated data.
In this chapter, we refer to document search fields as indexes, even though in the
Spool File Archive, we often refer to them as keys. The terminology has changed,
but it is easier for you if we are consistent in the terms we use here.
12.2 Preparation
There are two main preparation steps:
1. Set up the Common Server environment.
2. Make changes to the Spool File Archive environment.
Next you must create the instance QUSROND. We also recommend that you
create an instance called ONDTEST for testing purposes. Refer to the Planning
and Installation Guide, SC27-1158, for information about how to create an
instance.
If you want the servers to start automatically with the Start TCP/IP Server
(STRTCPSVR) command, edit the /QIBM/UserData/OnDemand/'instance
name'/ARS.CFG file for each instance and specify ARS_AUTOSTART_INSTANCE=1.
To download the latest level of the OnDemand Client, go to the following Web
address:
ftp://ftp.software.ibm.com/software/ondemand/fixes/
From this Web site, select the latest directory and the highest release level within
that directory. Download the odwin32.zip file, unzip it, and run the setup.exe
program.
Install iSeries Access and the latest service pack on your workstation. Then run
iSeries Access Selective Setup and install the OnDemand plug-in.
If you only use disk, you only need one migration policy in the Common Server,
since the Life of Data and Indexes is specified in each application group. If you
plan to use an optical library, you might still be able to use a single policy. For
example, you can create a migration policy with two levels, disk pool and optical.
Specify 100 days for the disk pool level and No Maximum days for the optical level.
Each application group with Life of Data equal to more than 100 days is migrated
to optical. Each application group with 100 or fewer Life of Data days remains in
the disk pool until it is deleted by the Archive Storage Manager.
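The rule described above can be reduced to a small sketch. The function and level names are invented, and 100 days is simply the example value from the text:

```c
/* Sketch of the two-level migration policy rule described above. */
enum Level { DISK_POOL, OPTICAL };

/* An application group's data migrates to optical only if its
 * Life of Data exceeds the days assigned to the disk pool level. */
enum Level final_level(int life_of_data_days, int disk_level_days)
{
    return life_of_data_days > disk_level_days ? OPTICAL : DISK_POOL;
}
```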
Are you using an optical library with Spool File Archive? You may decide to move
your archived data back to disk or to a new IBM 3996 optical library that supports
high density optical volumes (30 GB cartridges). Some business partners have
programs to move Spool File Archive data from optical to disk or high density
optical. It is a good idea to do this step before you migrate to the Common
Server. Remember, the migration tool moves the indexes and AFP resources, but
leaves the data in its current location.
To retrieve a report, the user must be authorized to that report. By default, each
report has an authorization list with the same name as the report definition. You
can grant *PUBLIC *USE to the report so that all users can retrieve documents.
Or you can leave the default *PUBLIC *EXCLUDE and then grant *USE authority
to individual users or group profiles. Refer to Chapter 1 in IBM Content Manager
OnDemand for iSeries - Administration Guide, SC41-5325, for information about
authorizing users to Spool File Archive and to Spool File Archive reports and
report groups.
Users can also be restricted by key security, which is handled in the migration
tool by adding Query Restrictions for *PUBLIC and individual users to migrated
application groups. For Spool File Archive reports that belong to report groups,
the authority is sometimes only specified at the report group level. After all the
report definitions in a report group are migrated, the migration tool migrates the
report group definition to a folder. If you specify security at a report group level,
you must update permissions manually in the Common Server application
groups after you migrate the report definitions within the group. Or you can set
report and key security at an individual report level prior to migration, and the
permissions should migrate.
If you specify *PUBLIC *EXCLUDE for the QRDARS400 authorization list, the
migration tool migrates only the users who are listed on the authorization
lists for the reports.
If you do not want to migrate all the users, make sure that the authorities in Spool
File Archive are adjusted to make the migration tool migrate the appropriate user
profiles. The QRDARS400 and QRDARSADM profiles are always migrated.
You may prefer to add selected users to the Common Server instead of using the
tool to migrate users. If so, you can either add users manually with the
OnDemand Administrator Client or write a program to add selected users.
Sample programs are included in the OnDemand for iSeries Bulletin Summary
from 2005 (refer to Chapter 15, “Did you know” on page 477, for more
information about the bulletins).
You must use the migration tool to migrate your migration policies from the Spool
File Archive to the Common Server. The tool creates two storage nodes for the
migrated policy, one for the Spool File Archive and the other for the Common
Server. You must have both nodes so that users can retrieve data archived in
both environments. Since the data itself is not moved from its original location in
the Spool File Archive, a storage node must be defined for OnDemand to find it.
Refer to the Read This First document. This document contains detailed
information about requirements when migrating migration policies from the Spool
File Archive to the Common Server. In particular, it addresses the requirements
for migrating Spool File Archive migration policies that have names that match
migration policies that already exist in the Common Server instance. You can find
the Read This First document on the Web at the following address:
http://www.ibm.com/software/data/ondemand/pubs/readmefirst_400v5.2.pdf
You may choose to use the no-charge tool to help migrate most of your archives,
but you may also re-spool and re-archive some of your reports. The analysis and
If you have a large number of report definitions, this might be a long-running job.
It is important to analyze the report definitions. After the job finishes, print the
report, review it, and look for warning messages. The report tells you if you must
review certain report definitions. For example, index exit programs do not
migrate, so if you are using them for any of your reports, you must modify the
definitions after they are migrated to the Common Server. Review the exit
programs so you know how they are used. You may be able to use the same
function in the Common Server without writing a post-processor program. Many
customers use index exit programs to remove the dashes (-) in a social security
number; you can accomplish this task easily in the Common Server by modifying
the application to remove the embedded “-” character.
Be sure to review the analysis report to see if you changed the number or order
of indexes in a report definition. For example, if you use Account Number for
index 2 in one version of a report and index 3 in another version, a user cannot
easily search for a document. In this situation, it is easier to re-spool the reports
and re-archive them in the Common Server rather than to migrate a problem
report.
In another situation, you might encounter a migration problem if you used DATE
as the name of an index in the Spool File Archive report definition. DATE is a
reserved word in the Common Server and cannot be used (in either upper or
lowercase) as the name of an index. You can change the name of an index in the
Spool File Archive, but not in the Common Server. If you use DATE as the name
of an index, be sure to change the name before migrating that report.
It is also a good idea to review your Spool File Archive environment and get
some statistics about the amount of data and types of data that must be
migrated. It is helpful to know this information so that you can estimate the effort
involved in the migration and track the progress.
An index recall program is included with the migration tool, but we recommend
that you recall all the indexes from optical or tape back to disk before you start
the migration. In fact, you can do this process at any time, even if you do not have
the Common Server installed. Just compile and use the RTVARCIDX or
RTVSPECIDX program to recall all indexes or selective indexes by optical
volume. The source for these programs is found in the QSAMPLES source file in
library QRDARS. Follow the instructions to compile and run the programs, but
ignore the statement in the documentation stating that no one should be using
OnDemand while the program is running. It is perfectly acceptable to archive
reports and have users access OnDemand while indexes are copied back to
disk.
As part of the analysis and planning phase, you must determine your migration
strategy. You might have some archived reports that you want to re-spool,
redefine, and re-archive in the Common Server instead of using the migration
tool. Here are a few examples of reports that you might want to re-define in the
Common Server instead of migrating as is:
Reports that should have longer index lengths
For DOC reports in Spool File Archive, the maximum lengths for the five
indexes are 25, 20, 20, 20, and 15 characters respectively. The maximum index length in
the Common Server is 254 characters. If you have some reports where a
customer name, for example, is truncated at 20 characters and you want to
use 40 characters, then you might want to re-spool those report occurrences
and archive them into a new definition in the Common Server.
Important: You cannot rename migrated application groups. If you do, you will
not be able to access the reports.
You cannot rename migrated application groups because the archived data is not
moved; it is located in the integrated file system or on optical media under
the application group name. This is an important fact to keep in mind when
considering how best to migrate.
You might decide to migrate a report definition and then create a new
application group with additional index fields, grouping the two application groups
within the same folder for searching. You must keep the original name for the old
application group and give a new name to the new application group. But that
means that you must change whatever value you are using to match the
application and application group name in the output queue monitor program (for
example, userdata or formtype).
You may change your mind as you progress through the migration, but it is a
good idea in the planning stage to think about which reports to migrate using the
tool and which reports to migrate manually. It is easier to use the migration tool
for all reports if possible. We suggest that you use the migration tool for:
Report definitions that have a satisfactory number, type, and length of index
fields now
These are the reports that you will be satisfied with when they are migrated to
the Common Server.
ANYS reports
If you do not use the migration tool, you must reprint and rescan documents,
or write your own programs to move the indexes and to retrieve and re-archive
the data. It is much easier to use the migration tool. If you are using Kofax
Ascent Capture, you must modify the document class definition to refer to the
Common Server Release Script.
Report definitions
If you want to redefine report ABC in the Common Server, we present some
steps that might make this process easier. The names and libraries of objects
you create can be changed; this is only an example, but we found that it is easier
to monitor the progress of these steps if you name the objects according to the
version of the report definition you are working with. Version 01 report definitions
are used by the RPTS01 query, the RPTS01 output queue, and so on.
We suggest that you use the following steps to redefine report ABC:
1. Create a query called RPTS01 in library QGPL.
The displays in Figure 12-1 on page 411 through Figure 12-7 on page 413
show how you can define the QARLRSRT file, the ODATE and OSEQ result
fields, and the selected fields CDTYPE, VERSION, ODATE, and OSEQ. The
output to database file RPTS01 is also included.
In these displays, the result fields are defined as ODATE = substr(ONAME,1,8)
and OSEQ = substr(ONAME,10,3), and record selection follows.
2. Run the query, which creates a database file called RPTS01. There is one
record in the file for each report occurrence of Version 01 for the ABC report
definition. See Figure 12-8.
Note: You cannot use the Report Wizard if you want to use the Document
Audit Facility, but it is easy to create the application group, application, and
folder separately.
The definition that you just created is now associated with application group
ID 01. See Figure 12-11.
Important: Do not use this technique if you plan to have a single folder that
contains multiple application groups and plan to allow different permission
levels for different users to application groups within the folder.
c. Copy the ABC-01 application into a new application called ABC, with
identifier 02.
Note: Be sure that you successfully archive all the files in the Common
Server before you delete them from Spool File Archive.
This process can be modified so that you can select several different reports
with the same version number and send all of the spooled files to the RPTS01
output queue. In doing so, be careful to separate the different versions of
the reports, because you probably created the different versions for a reason;
spooled files from different versions cannot use the same definition.
We suggest that you re-spool all of the occurrences for a report, and then set
up a definition using the oldest spooled file first. Start the output queue
monitor and see which spooled files go to the error queue; use those files to
set up new applications (with new application ID fields) within the application
group.
Also review the results of successful loads. The location of a field might have
changed and the report loads fine, but the beginning or ending location of the
index field is incorrect. In this case, the report can store successfully, but the
index values are wrong. To verify that the values that are stored are correct,
search the OnDemand client for the report to see that the values in the hit list
look correct.
For reports where you have continued to modify Version 01, it is easier to use
the migration tool.
Users
You may prefer to add selected users to the Common Server instead of using the
tool to migrate users. If so, you can either add users manually with the
OnDemand Administrator Client, or write a program to add the selected users.
Sample programs are included in the OnDemand for iSeries Bulletin Summary
from 2005 (refer to Chapter 15, “Did you know” on page 477, for more
information about the Bulletins).
If you do not use the migration tool to migrate users, you must still create an
ADMIN user ID for the local server instance used by the migration tool. This ID is
created automatically by the *MGRUSR (migrate users) option of the tool, so you
can create this ID in either of these two ways:
Run the *MGRUSR option to the ONDTEST instance. This creates the
ADMIN user ID in the local server used by the migration tool.
Create the USER.TBL file in the /OND_MIG_INST/TABLE directory in the
iSeries integrated file system, for example, with the command
EDTF STMF('/OND_MIG_INST/TABLE/USER.TBL')
and add these lines to the file:
[ADMIN]
UID=79999
PASSWD="ssjbENv1dbaoA"
ADMIN=4
PID=0
LAST_UPDATE=-1
TIMEOUT=0
It is easy to track the progress of the migration by using Query or SQL. There
is a record for each report definition in the QARLRACT file in the QUSRRDARS
library. The value for EXTRAFIELD1 in this file is changed as you successfully
complete the migration steps for a report. There is a record for every report
occurrence in the QARLRSRT file in library QUSRRDARS. The value for RESTIND in
this file is changed to M whenever all the indexes for that report occurrence
have been migrated to the Common Server. For example, an SQL statement such as
SELECT COUNT(*) FROM QUSRRDARS/QARLRSRT WHERE RESTIND = 'M' counts the report
occurrences that have been migrated.
Map the Report Date to F_01 (posting date from Spool File Archive) and
Report Name to F_00 (application identifier or version from Spool File
Archive). See Figure 12-19.
The displayed value can be the same for all Application ID Field values for the
application group. Now when users search for documents in the folder, they
can select a single report to display. See Figure 12-21.
Be sure to automate the OnDemand jobs. We recommend that you add the
following tasks to the iSeries Job Scheduler:
STRTCPSVR *ONDMD
STRMONOND
STRDSMOND
Also, before you back up the OnDemand integrated file system directories, you
must unmount the file system if you are using a disk pool. If you are using
ASMASP01 in instance QUSROND, you use the following command:
UNMOUNT TYPE(*UDFS) MFS('/dev/QASP01/ONDEMAND_QUSROND_PRIMARY_01.UDFS')
Two tasks cannot be automated. First, you must review the error output queue
each day to see whether any reports failed to archive; this step is also
necessary in Spool File Archive. Second, review the QPRLCASM1 report, which is
the status report that is created whenever Archive Storage Manager is run. The
default location for this Archive Storage Manager report is output queue
QRDARS400 in library QRDARS, but the QPRLCASM1 printer file can be modified so
that another output queue is used instead.
12.7 Summary
Both the Spool File Archive and Common Server are designed to help customers
archive and retrieve spooled files and other documents. However, the directory
and database structure of the two product features are so different that it is
remarkable that it is even possible to migrate from one environment to the other.
The migration tool is a complex set of programs that works well in making this
migration possible. However, the migration process cannot be fully automated;
an OnDemand administrator must review each step of the process to ensure
accurate results.
Since knowledge of both the Spool File Archive and Common Server is required
to understand the migration process, we recommend that you acquire formal
education about the Common Server from an experienced IBM Business
Partner. You may also choose to have an IBM Business Partner handle the
migration for you.
The Common Server offers a lot of advantages for both users and
administrators. Learn as much as you can about this product so you can make
enhancements to both migrated and new applications and take full advantage of
the new features in the Common Server.
Note: The contents of 13.1, “Designing a winning solution” on page 430, are
contributed by the OnDemand development and support lab personnel. The
various best practices tips were collected from both the OnDemand lab
personnel and OnDemand practitioners in the field.
13.1 Designing a winning solution
Your company has decided to purchase the IBM Content Manager OnDemand
(OnDemand) solution and has put you in charge of the project. You have gone to
OnDemand University; you have attended the workshops; you have spent a great
deal of time out on the OnDemand Web Support site; and you have read the first
part of this OnDemand IBM Redbooks publication.
When your supervisor asks you if you are prepared to design an OnDemand
solution that performs well and is easy to use, how will you answer?
We intend to help you answer that question positively and confidently in this
section. Although we do not discuss every possibility available for correctly
designing an OnDemand solution, we provide you with an idea of what you
should think about while creating your solution.
Most people who are new to OnDemand understand that OnDemand stores
documents and provides the ability to retrieve these documents on a PC or a
Web browser. They might also understand the basic functions of the individual
components or have the ability to create an application that stores a document
type.
However, most people who are new to OnDemand, and even some who have
been using OnDemand for a while, have no idea if the way they use their
OnDemand solution is easy to use or performing well. The OnDemand users
have grown accustomed to the way things are and nobody has suggested ways
to make improvements.
To help you learn how to design an OnDemand solution, you first must
understand what OnDemand is. This overlaps somewhat with the information
that we presented in the earlier chapters; however, it is an important concept
for us to review here from a slightly different angle.
At the library, when you want to research a specific chapter in a book, you go to
the card catalog cabinet, open a specific card catalog drawer, find a specific card
that tells you where the book is located, find the book, and then turn to the
chapter you want to see.
With OnDemand, you select a folder that is attached to certain database tables.
You then enter your search criteria. When you press Enter, you see a document
hit list, which is a copy of specific rows in a database table. When you select one
of these rows, you retrieve the document that is of interest to you.
Let us go back to your local library. You know which book you want to see, but
when you reach the card catalog cabinet, you see that four card catalog drawers
have the exact same label on the front. To determine where your book is located,
you must search all four of the drawers. This is obviously much slower than
searching a single card drawer.
The same is true for OnDemand. If your search criteria must go across multiple
application group tables, your query takes longer to complete. This means that
your system works harder to perform the query, and your user is waiting longer
for results.
Before you start designing your application, you must understand the four pieces
of an OnDemand application that are going to mean the most to your design:
Although many more variables are involved in the overall OnDemand solution, a
successful OnDemand solution design is most concerned with these four
variables.
A folder provides an interface that allows a user to query database tables. The
fields of a folder correspond to database index and filter fields. If a segment field
is available, this narrows the tables that OnDemand searches. A folder can also
have a server-based text search field. Using a text search field is the worst way to
search for data. Whenever possible, avoid providing text search fields for the
OnDemand users.
When a user requests a query, the following actions occur from a high-level
perspective:
1. If part of the query is a segment field, OnDemand selects only the tables that
meet this criteria.
2. If index fields are part of the query, OnDemand searches the indexes of the
tables selected by the segment search to choose the matching table rows.
3. If filter fields are chosen, OnDemand looks at the selected rows to narrow the
resulting rows to only those rows with matching filter fields.
4. OnDemand returns the matching query rows, in a hit list, to the user.
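The four steps above can be modeled with a small C sketch. The table layout, field names, and query function here are invented for illustration only; they do not reflect the internal OnDemand implementation.

```c
#include <string.h>

#define NUM_TABLES 4
#define ROWS_PER_TABLE 3

/* A made-up row with one segment field, one index field, one filter field. */
typedef struct {
    const char *segment;   /* for example, a load date such as "2007-01" */
    const char *account;   /* index field */
    const char *region;    /* filter field */
} Row;

/* Narrow the result set in the order described above; a NULL criterion
 * means "not part of the query". Returns the number of hits found. */
int query(Row tables[NUM_TABLES][ROWS_PER_TABLE],
          const char *segment, const char *account, const char *region,
          Row *hits[], int max_hits)
{
    int n = 0;
    for (int t = 0; t < NUM_TABLES; t++) {
        /* 1. Segment field: skip whole tables that cannot match. */
        if (segment && strcmp(tables[t][0].segment, segment) != 0)
            continue;
        for (int r = 0; r < ROWS_PER_TABLE; r++) {
            Row *row = &tables[t][r];
            /* 2. Index field: select matching rows. */
            if (account && strcmp(row->account, account) != 0)
                continue;
            /* 3. Filter field: narrow the selected rows further. */
            if (region && strcmp(row->region, region) != 0)
                continue;
            /* 4. Add the row to the hit list returned to the user. */
            if (n < max_hits)
                hits[n++] = row;
        }
    }
    return n;
}
```

Note how supplying a segment value lets whole tables be skipped before any row is examined, which is the performance point made in the text.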
Although it is tempting to make all fields index fields, you must understand
the trade-off. When you create an index, you take information that is already
stored in your database table and use database space to add that information
again in a separate structure, as well as using space for the overhead
involved in maintaining the index. If all of your table fields are indexes,
your database is unnecessarily large without additional benefit.
Having too many fields, index or filter, is also not a good idea. Each database
column that you tell the application group to create for a table is additional space
needed in your database. You must only create fields (database columns) for the
items that you need to search against. At the minimum, provide at least one
index field. This field should define the most unique value in the data, such as
customer number, Social Security Number, and phone number. It should also be
a field that a user normally searches on.
This is why it is imperative to understand how the users plan to search for
documents. If your users generally search on a single field, you obviously only
need that field. If 80% of your users query using three fields and another 20%
An application group table is limited in the number of rows that it can hold.
You set this limit within the application group properties. By default, an
application group table holds 10 million rows. When 10 million rows have been
stored, OnDemand closes the first table and opens a second table. Each
additional table that a query must search degrades performance. At the same
time, if you set your row limit too high, performance also degrades, because
it takes too long to search through the table to match your query.
This is where the segment fields come in. You should always specify a segment
value to improve performance, usually a report date or statement date. If one
does not exist in the report, you can always use a load date by specifying it in the
application. This value should be chronological to provide the best segmentation.
A segment field allows you to limit the number of tables you choose to search. If
your segment is “load date” and you fill a table four times per year, you can limit
the search to a single table simply by adding the month and year to the search.
At most, a month can carry over into a second table. However, this successfully
narrows the search simply by narrowing the tables you search across.
Cache storage is the fastest means to deliver data to your users. When the
demand for the data is no longer high, the data should be moved from the cache
to long-term storage; users can still retrieve the data from long-term storage.
Having a disk or tape placed in a drive and then having the drive spin up and
deliver the data to your user takes considerably longer than retrieving it from
cache. It gets worse if too many people retrieve the data from Tivoli Storage
Manager and a drive is not currently available to fulfill the retrieval requests.
There are some limitations on the amount of cache storage that you can use.
The general rule of thumb is to keep the data in cache for as long as possible.
Since you paid for those hard drives, you should use them. In general, you do not
want your users to wait for the data to be delivered from long-term storage.
In summary, when designing your solution, you want to accomplish the following
plan:
For applications: one to many data objects
For application groups: one to many applications
For folders: one to one, or one to few, application groups
Your company has the following six reports that they want to store in OnDemand:
Balance Sheet - AFP Data
Sales Detail Report - Line Data
Inventory Detail Report - AFP Data
Transaction Detail Report - Line Data
Income Statement - PDF
Payroll Ledger - AFP Data
Any time a user searches this single folder using a common index field,
OnDemand searches across all six application groups. Since there is no
segment field, OnDemand searches across all of the tables in all six of the
application groups. To make matters worse, because every field is an index field,
the database is quite a bit larger than it needs to be.
The Payroll Ledger is the only report that is used by the human resources
personnel; therefore, we design the following items into the solution:
Folder “Payroll Ledger”
Application group “payledge”
– Segment Field: Date
– Index Field: employeenum, lastname, ssn
– Filter Field: firstname, dept
Application “payledge”
The Inventory Detail Report is the only report viewed by the inventory control
personnel; therefore, we design the following items into the solution:
Folder “Inventory Detail Report”
Application Group “invreport”
– Segment Field: date
– Index Field: prodnum
– Filter Field: proddescr, transtype
Application “invreport”
Finally, we have three reports left: Balance Sheet, Sales Detail Report, and
Income Statement. These reports have different data types, but they have the
same query needs. The only thing we have to watch for is that the Sales Detail
Report is the only one that is used by the salespeople. With that restriction
handled, these reports are good candidates for a single application group. We
design the following items in the solution:
Folder “Executive Reports” “Sales Detail Report”
– This will be restricted by the application ID.
Application Group: “execreport”
– Segment Field: date
– Index Field: acctnum
– Filter Field: acctdescr, application ID
Application “balsheet” “salesrpt” “incomestmnt”
This solution requires six applications, four application groups, and five folders
with access controlled by five groups. Each query searches as few as a single
table. Most user searches will be index scans (via index fields), as opposed to
table scans (via filter fields). Because we have the date field listed as our
segment date, if we load the data in date order and require users to enter a date,
we have an excellent opportunity to restrict the queries to the fewest tables
possible.
This solution is not the only possibility for an excellent OnDemand solution
design. There are several things we are able to do, such as query restrictions,
user group restrictions, and application group permissions. The OnDemand
support Web site provides an excellent resource to assist you with other design
possibilities:
As the designer of the OnDemand solution, you will likely be presented with a
wide variety of reports to archive. They will not all be line data or Advanced
Function Presentation (AFP) data, and they will all have different query needs.
The best solution design that you can achieve requires an understanding of the
users of these reports and of how the reports will be queried. Detailed planning,
before you begin to build your solution, helps you to achieve a design that
remains efficient for many years to come.
For example, Invoice Number and Customer Number fields provide important
information. Without them, we cannot associate the right invoice with the right
customer. A date field, such as an Invoice Date, is also needed so that we know
when the invoice was generated. This information, as well as other date fields, such as
You should identify a date field that OnDemand can use to segment the
application group index data. The segment field enables the searching of specific
tables of application group data rather than all of the tables.
If you see a message like the example in Figure 13-2, select No and choose a
date field as a segment.
If no date is available in the document, use at least the Load Date as the
segment. See the next section for more information.
The Load Date of the document might be different from any of the application
dates. For example, the invoices can be loaded the day that they are printed or
some days later. In this case, the Load Date is different from the Invoice Date.
Accurate and easily accessible Load Date information helps to avoid any
misunderstandings.
In addition to helping keep track of archiving activity, the availability of a Load
Date index might be of great help in the case of an audit or compliance request.
Sometimes, a document does not have any date. In this case, it is useful to use
Load Date as a segment date as follows:
1. Define a LoadDate in the application group, in addition to other fields.
2. Within the application definition, click the Load Information tab.
You can also set this up, by using version support, as explained in the following
steps:
1. Whenever you add a new application group, define a Version field in addition
to the other fields that you need for the document. Figure 13-4 shows an
example of the Version field.
If several applications are linked to the same application group, the application
group name and application name must be specified for the load:
If you run arsload as a daemon or a service, the following applies:
– The input file name must identify the application to be used.
– The -A parameter of arsload must specify the part of the file name that
identifies the application to load: MVS, JOBNAME, DATASET, or FORM.
Input files follow this naming pattern:
MVS.JOBNAME.DATASET.FORM.YYYYDDD.HHMMSST.ARD
If you run arsload from the command line, the -A parameter must specify the
application name.
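As an illustration only (the file name below is hypothetical), the dot-separated parts of such an input file name can be picked apart with standard tools; with -A FORM, for example, arsload would take the fourth part as the application name:

```shell
# Hypothetical input file following the
# MVS.JOBNAME.DATASET.FORM.YYYYDDD.HHMMSST.ARD pattern.
file="MVS.PAYJOB01.PAYDATA.PAYLEDGE.2007123.1230001.ARD"

# The FORM part is the fourth dot-separated field; with -A FORM,
# arsload would use this value to identify the application.
form=$(printf '%s' "$file" | cut -d. -f4)
echo "$form"   # PAYLEDGE
```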
Users can have access to different document types through one folder. They can
limit their search to a specific document type, or they can see the document type
that each hit-list entry represents.
If you assign multiple applications to the same application group, you can
display the Application ID field; refer to 13.2.3, “Including an Application ID
Field in an application group” on page 442:
– Define a field in the folder.
– Map this field with the corresponding Application Group field. This is the
Version field in the example of the referenced section.
The information is shown the same way as for the Application Group field
named Document type. See Figure 13-7.
The information coming from the application groups is displayed to users through
a folder. Remember to use self-explanatory and user-friendly expressions for
them.
Note: An application group name can be updated after the application group is
added, as long as the Application ID field value has not been used as the
identifier in an application; otherwise, you can no longer update the application
group name. Figure 13-8 shows the error message that is displayed.
Before you decide which way you want to set up the OnDemand application for
PDF, ask yourself these questions:
Will the OnDemand clients be used to access PDF documents? Is seamless
integration mandatory for your business requirements?
Is full text search support mandatory?
If you can change the way that the PDF documents are generated, you can help
to solve these problems.
If you cannot change the layout of the PDF documents for business reasons, try
the following steps:
1. Add all the required information in a blank part of the first page of the
document by using a fixed font and clearly separating all the different pieces
of information.
2. Define the PDF indexer parameters using the graphical indexer.
3. Test and validate the indexing.
4. Turn the color of the added information to white so that it does not appear on
the printout.
14.1 Troubleshooting FAQ
This section contains frequently asked questions by the OnDemand
administrators. It also includes solutions to common problems encountered by
the OnDemand administrators and users.
Tip: For the UNIX platform, the console message might help to determine the
cause of the problem. However, if you use Telnet from your PC, you might miss
an important console message. For AIX, you can switch the console to your
current terminal by using the swcons `tty` command. To switch it back to the
console, simply use the swcons command.
In the following sections, we discuss some of these problems that you might have
encountered and we provide possible solutions to the problems.
Page 2
John Doe
...
...
In this example, since the string Page 2 does not match the TRIGGER, it is
ignored, and that page is included in report 1. Moreover, the report does not
break until the name John Smith is read, because it is different from the name
John Doe.
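This break behavior follows from how the TRIGGER, FIELD, and INDEX statements are defined in the ACIF indexer parameters. A minimal sketch of such definitions (record positions, lengths, and values here are illustrative, not taken from the original example):

```
TRIGGER1=*,1,'Page 1',(TYPE=GROUP)
FIELD1=0,10,20
INDEX1='Name',FIELD1,(TYPE=GROUP)
```

With a definition like this, a page beginning with the string Page 2 does not match the trigger, so it stays in the current group; a new group begins only when the trigger matches and the indexed Name value changes.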
Problem: You run OnDemand on HP/UX and encounter the error message
shown in Example 14-2 while attempting to index a PDF document.
If your PDF version is 1.5 or later, you must upgrade your OnDemand server
to the latest version to avoid the segmentation fault. This is because, starting
at version 7.1.2.5, OnDemand uses new libraries that support the newer PDF
versions.
Problem: The OnDemand Windows client hangs when performing a Full Text
Search against PDF version 1.5 or later.
Reason or resolution: Similar to the previous problem, you must upgrade
the OnDemand server to the latest version to resolve the problem.
Container ID = 0
Name = /arsdb/db1/SMS/ARCHIVE/root/CAA1.0.0
Type = Path
Container ID = 1
Name = /arsdb/db1/SMS/ARCHIVE/root/CAA1.1.0
Type = Path
Container ID = 2
Name = /arsdb/db1/SMS/ARCHIVE/root/CAA1.2.0
Type = Path
In this case, check the console message of the AIX server for an error message
similar to the one shown in Figure 14-2.
On the same terminal, try to start arssockd again. If it still fails to start, then as
the DB2 instance owner, enter the following command:
db2set DB2ENVLIST=EXTSHM
Tip: If your users have a problem with the OnDemand client and this problem
only happens with that particular client, you might save time by first reinstalling
the client on that computer. Then run a test to see if the problem still persists
after the reinstallation.
When you report a problem to the support center, you must first provide the
version of the software that you are using. For OnDemand, this might include the
operating system, DB2, Oracle, Tivoli Storage Manager, OnDemand, and
ODWEK. This information helps the support team to determine whether the
software version is still supported and whether there are known issues at that
software level.
Linux: Look for the highest version for the package name in the list. In the
example, the ODWEK version is 7.1.2.4.
# rpm -qa |grep odwek
The versions are:
odwek_license-7.1.2-0
odwek-7.1.2-0
odwek_icu-7.1.2-2
odwek_icu-7.1.2-4
odwek-7.1.2-1
odwek-7.1.1-0
odwek-7.1.2-2
odwek-7.1.2-4
odwek_icu-7.1.2-0
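A quick way to pick out the highest version from such a list is to version-sort it. A sketch, assuming GNU `sort -V` is available (in practice you would pipe `rpm -qa | grep odwek` instead of the literal list):

```shell
# Version-sort the odwek server packages from the listing above and
# keep the highest one.
highest=$(printf '%s\n' \
  odwek_license-7.1.2-0 odwek-7.1.2-0 odwek_icu-7.1.2-2 odwek_icu-7.1.2-4 \
  odwek-7.1.2-1 odwek-7.1.1-0 odwek-7.1.2-2 odwek-7.1.2-4 odwek_icu-7.1.2-0 \
  | grep '^odwek-' | sort -V | tail -n 1)
echo "$highest"   # odwek-7.1.2-4
```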
After you get the correct version number of the software that you are using, you
must collect the information specific to the problem.
There are several main areas where problems can occur. In this section, we
divide them as follows:
Indexing or loading
Database
Tivoli Storage Manager
OnDemand client logon
Performance
ODWEK
OnDemand server hang or crash
ARSSOCKD.ERR: This is the log file for the arssockd daemon process. The
process is instance dependent if multiple instances are running.
ARSLOAD error message: The ARSLOAD error message shows whether it
failed at the indexing or loading phase.
OnDemand System log: These are the OnDemand system logs in the System
Log folder. There are various message numbers regarding warnings or errors
at the time of failure.
Export of Folder, Application Group, and Application files and sample data:
The export files are used to import to the test server for problem replication.
Version or Level of DB2/Oracle/SQL server and OnDemand: This file name
contains the version or level of software that the server is currently using.
Sometimes a problem might be resolved by upgrading to the latest PTF.
Table 14-3 shows the information to collect when you have problems with AFP.
Export of Folder, Application Group, and Application files and sample data:
The export files are used to import to the test server for problem replication.
ACIF indexer error message: This file contains the error messages generated
by the ACIF indexer.
AFP sample data file: This should be a non-confidential data file that can be
viewed by the support team to verify AFP syntax.
AFP interim files used by AFP viewer within OnDemand Windows client: The
files are created in the OnDemand client directory under
C:\Program Files\IBM\OnDemand32\DATA. These files are deleted
automatically after the document is closed by the viewer. The files are useful
in determining whether it is a server or client issue.
AFP resource and font files: Sometimes these files are useful for various AFP
issues such as overlay, company logo, or national language support (NLS)
fonts.
Before you log a problem with the support team, use the information in Table 14-3
to look for clues to your problem. In particular, you can check the error codes
from the ACIF indexer in the manual IBM Content Manager OnDemand -
Messages and Codes, SC27-1379. You might find the solution right away. If you
have an AFP dump tool, you can also dump the AFP data file to check for an
invalid AFP data stream, which is a common problem.
Note: Because the AFP data stream can be printed by an AFP printer, it does
not necessarily have the correct AFP structure for loading into OnDemand.
The loading of AFP data requires more specific AFP structure than printing.
The manual IBM Content Manager OnDemand for Multiplatforms - Indexing
Reference, SC18-9235, provides information about the correct AFP data
stream structure.
CLI trace: This file contains the call level interface (CLI) trace for diagnosing
SQL statements. The CLI trace option must be turned on to collect the file.
Application Group Report: The Application Group ID is the name of the
respective DB2 tables.
The example shows the common options for the DB2 CLI trace. The support
team might have different options to collect information as appropriate to your
situation. Modify these options as advised.
Example 14-5 Turning the trace on via the DB2 command line
db2 UPDATE CLI CFG FOR SECTION COMMON USING Trace 1
db2 UPDATE CLI CFG FOR SECTION COMMON USING TraceRefreshInterval 5
db2 UPDATE CLI CFG FOR SECTION COMMON USING TraceFileName /tmp/db2trace.dmp
db2 UPDATE CLI CFG FOR SECTION COMMON USING TraceComm 1
db2 UPDATE CLI CFG FOR SECTION COMMON USING TraceFlush 1
2. Restart the application, in this case arssockd, for the changes to take effect.
3. Simulate the DB2 problem that you have encountered to capture the trace
information.
4. Run the following command to turn off the traces:
db2 UPDATE CLI CFG FOR SECTION COMMON USING Trace 0
5. Restart arssockd for the change to take effect.
Application Group Report: The Summary information for Storage
Management shows the Storage Set name, which is related to Tivoli Storage
Manager.
Storage Set Report: This information provides the node name at Tivoli
Storage Manager.
TSM activity log: This log shows the events in the Tivoli Storage Manager
server. You can retrieve the log by using the Query actlog command.
TSM error message: Tivoli Storage Manager error messages are prefixed
with ANS, ANR, and so on. These errors are generated by Tivoli Storage
Manager and can be used by Tivoli Storage Manager support for further
diagnosis.
You can gather the various object reports, such as the application group report
and storage set report, by right-clicking the object and choosing Summarize.
Collect the files listed in Table 14-6 for client problems such as logging into
OnDemand.
ARSSOCKD.ERR: This is the log file for the arssockd daemon process. The
process is instance dependent if multiple instances are running. This file is
located in the path defined for ARS_TMP.
Application group report: It is useful to check whether the fields in the report
are indexes or filters. Simply reviewing this report might resolve the issue.
Database reorganize information: This file is used to check whether the arsdb
command has been run to reorganize the OnDemand system and data
tables.
Memory information: This file contains the amount of physical memory and
the memory settings on the server, such as output from the ulimit command.
ARSSOCKD.ERR: This is the log file for the arssockd daemon process. The
process is instance dependent if multiple instances are running. This file is
located in the path defined for ARS_TMP.
Indexer information from application report: This file helps to determine
whether the report has a single index, which uses up memory if the report is
huge. Also, for a large report that does not use the large object option, the
client takes a long time to download the document.
14.2.6 ODWEK
For ODWEK problems, gather the information as shown in Table 14-8.
Depending on the environment and the specific failure, some of the information
might not be present in your environment.
arswww.log: This is the ODWEK log file. You have to turn on debug mode
and restart the Web server for changes to take effect.
OnDemand system log: This file contains the OnDemand system logs from
the System Log folder.
Screen shots of the problem: This file contains screen captures of the error
message or document. Sometimes it is useful for non-English error
messages and documents.
Plug-ins or applets information: This file helps to check the version in use.
Sometimes a problem is resolved by using the latest version.
Version or Level of ODWEK, DB2/Oracle/SQL server, and OnDemand: This
file indicates the version or level of software that the server is currently using.
Sometimes a problem might be resolved by simply upgrading to the latest
PTF.
Search this Web site using the keyword mustgather to find the following
Technotes:
MustGather: Content Manager OnDemand Server for Windows - Hang,
reference #1223907
MustGather: Content Manager OnDemand Server for Windows - Crash,
reference #1226443
MustGather: IBM DB2 Content Manager OnDemand server hang on AIX,
reference #1222374
MustGather: IBM DB2 Content Manager OnDemand server crash on AIX,
reference #1223109
Follow the instructions from the Technotes to gather information when the server
hangs or crashes.
2. The local server cannot be used until it is set up. Right-click the new server
ODlocal and select Setup as shown in Figure 14-5.
4. In the Export Application Groups window (Figure 14-7 on page 471) that
opens, complete these tasks:
a. From the Server list, select the server to be exported.
b. Click Export. The information of the application group that you have
chosen starts transferring to ODlocal.
c. Check the message at the end of the export to make sure that it is
successful.
5. When all the requested information has been exported to the local server, zip
the entire directory as defined from the Directory of the local server. In this
example, it is C:\ODlocal as shown in Figure 14-4 on page 469.
If arssockd cannot be started after you set up the trace, you must turn on
EXTSHM. Refer to 14.1.4, “OnDemand startup problem” on page 457, for the
steps to turn it on.
For the Windows platform, the trace facility is enabled via the OnDemand
configurator.
The parameters in the trace.settings file are read when the server starts. They
provide the server startup program with the trace options. The OnDemand
installation comes with a default trace.settings file for UNIX, as shown in
Example 14-6. You may modify the file for different options.
[TRACE]
COMPONENT_LEVEL=0000000000000000007000
TRACE_FILE=ARCHIVE.trace.log
TRACE_FORMAT=TEXT
APPEND=0
CPU_TIME=0
Using information from Example 14-6 on page 472, the nineteenth bit of
COMPONENT_LEVEL corresponds to SRVR, which is for server trace. The
value 0x07 is a summation of 01 + 02 + 04, which means that the message level
of the trace is ERROR + WARNING + FLOW.
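The message-level arithmetic can be checked directly; this small sketch just confirms that the ERROR, WARNING, and FLOW bits combine to 0x07:

```shell
# ERROR=0x01, WARNING=0x02, FLOW=0x04; OR-ing them gives the
# COMPONENT_LEVEL digit 7 used for SRVR in the example.
level=$(( 0x01 | 0x02 | 0x04 ))
printf '%X\n' "$level"   # 7
```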
For multiple instances, you may specify a different file name and path for
ARS_TRACE_SETTINGS in the ARS.CFG file of that instance. Then in the trace
settings file, you may specify a unique name for the TRACE_FILE.
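For example, a second instance's ARS.CFG might point at its own settings file (the path and file names below are illustrative):

```
ARS_TRACE_SETTINGS=/usr/lpp/ars/config/inst2.trace.settings
```

The TRACE_FILE line in that settings file would then name a unique log for the instance, such as TRACE_FILE=INST2.trace.log.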
Note: You must restart arssockd for OnDemand to read in the trace settings
from the configuration files.
The previous trace settings are useful when you cannot activate arssockd. Next
we look at traces that can be started by the OnDemand administrative client.
After you log on to the OnDemand administrative client, you can configure tracing
as explained in the following steps:
1. Right-click the server name and select Trace Parameters. Figure 14-8
shows how to enable tracing from the OnDemand administrative client.
2. In the System Trace Setting window (Figure 14-9 on page 476), complete the
following steps:
a. Select Activate System Trace to enable tracing on the server.
b. In the Components To Trace section, click the component name to select
the component that you want to trace.
c. In the Trace Level Reporting section, you can also set the message level
of the trace for each component. The values provided for the message
level are similar to the COMPONENT_LEVEL in the trace.settings file. For
problem determination, consult your IBM support team about the
appropriate trace to capture.
Note: You can stop or start the runtime trace from the administrative client
anytime without restarting arssockd.
After the trace is collected, you can send the trace file to the IBM Support team.
Important: Only use trace with the help of IBM Support since activating it
might severely impact the performance of the OnDemand system.
We also provide some program samples, which you can download from the IBM
Redbooks Web site. The samples offer practical ways to educate yourself on
using OnDemand APIs, or they can serve as a base for more advanced
development.
15.1 Using the Document Audit Facility
OnDemand has incorporated a feature called the Document Audit Facility (DAF).
This function allows for basic approval routing of a document. To use the DAF,
you must first define the reports to be audited to OnDemand and create an audit
control file. An administrator can define the default status for a document, and
users with the appropriate permissions have the ability to click a button on the
client to change the status of the document.
An administrator sets up an index to the invoices that can be one of four values:
Hold, Accept, Reject, or Escalate. When invoices are scanned, they are loaded
with a default status of Hold. The only users who have permission to view these
Hold invoices are the auditors or managers. After the auditor reviews an invoice,
they can click a button to set the document to Accept, Reject, or Escalate
status:
Accepted invoices should be paid.
Rejected invoices should not be paid due to problems with the invoice.
Escalated invoices should be reviewed by managers to determine if they
should be paid.
This action changes the value of that index. Permissions for the Accounts
Payable user group are set up in such a way that they can only view invoices that
have the Accept status. Purchasing can view invoices with the Reject status to
determine why they were rejected and contact the vendor to correct the problem.
Auditors and managers can view invoices that have the Escalate status.
[Invoices]
FOLDER=Invoices - Auditor
AUDIT_FIELD=Status
TEXT1=Accept
TEXT2=Reject
TEXT3=Escalate
VALUE1=A
VALUE2=R
VALUE3=E
AUDIT section
The AUDIT section contains one record, the FOLDERS record. The FOLDERS
record contains a comma-separated list of folder section names. You must create
an additional section in the DAF file for each folder section named in the
FOLDERS record.
Important: The total number of characters in the FOLDERS record must not
exceed 255.
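Matching the earlier Invoices example, the AUDIT section would look like the following sketch (a single folder section named Invoices is assumed):

```
[AUDIT]
FOLDERS=Invoices
```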
FOLDER section
Each FOLDER section contains the following records:
FOLDER specifies the name of the folder, exactly as it appears in
OnDemand. The FOLDER record is required.
AUDIT_FIELD specifies the name of the folder field used to audit documents,
exactly as it appears in OnDemand. The AUDIT_FIELD record is required.
TEXTx is the caption that appears on the command button used to change
the status of the document. Up to eight TEXT settings are permitted.
VALUEx is the value that is stored in the database when the corresponding
TEXTx button is clicked. This value is stored in the application group field and
must match one of the mapped field values. One VALUE record is required for
each TEXT record. Up to eight VALUE settings are permitted.
Note: You must restart the OnDemand client after you create this file.
When Accounts Payable users query the server, they are limited in what they can
see. They do not see the status buttons or the status of the documents. Accounts
Payable only sees the statements that are accepted by the auditor (Figure 15-7).
To take this example further, it is possible to set up multiple folders, each with its
own distinct status buttons. By doing so, it is possible to route a document
through a series of auditing steps. You can define various users, each with a
different auditing responsibility.
For example, a particular user is responsible for pulling up all failed documents
and placing them in some other status. To do this, each status must be mapped
in the application group, and each folder must be specified within the appropriate
user’s arsgui.cfg file. It is also important to define each user with the correct
search and update permissions in the application group.
On the application group's Message Logging tab, verify that Index Update is
selected. Selecting this option causes system log message 80 to be logged
every time an invoice index is updated. See Figure 15-8.
In our example, we log three field values so that we can uniquely identify the
invoice as accepted or rejected. We log the purchase order number, invoice
number, and status.
To check who is updating the invoice status, use the system log to review
message number 80, which contains the information about Application Group
document updates. System log message 80 includes the date and time the
update was made, the user ID making the update, and message text of the
update.
If no fields are selected for logging, the system log 80 message contains blanks,
as shown in the following example:
ApplGroup DocUpdate: Name(Invoices) Agid(5395) OrigFlds() UpdFlds()
If only the status field is selected for logging, the system log 80 message
contains only the before and after status values. This is not sufficient information
to identify the exact document rejected, as shown in the following example:
ApplGroup DocUpdate: Name(Invoices) Agid(5395) OrigFlds('H') UpdFlds('R')
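When the fields are logged, the before and after values can be pulled out of a message 80 line with standard text tools. A sketch, using the example line above:

```shell
# Extract the original and updated status values from the
# OrigFlds(...)/UpdFlds(...) portions of a system log message 80.
msg="ApplGroup DocUpdate: Name(Invoices) Agid(5395) OrigFlds('H') UpdFlds('R')"
orig=$(printf '%s\n' "$msg" | sed -n "s/.*OrigFlds('\([^']*\)').*/\1/p")
upd=$(printf '%s\n' "$msg" | sed -n "s/.*UpdFlds('\([^']*\)').*/\1/p")
echo "status changed from $orig to $upd"   # status changed from H to R
```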
For details about how to configure the Windows client for Related Documents,
refer to the “Related Documents” section, in the “Windows 32-bit GUI
Customization Guide” chapter, in IBM Content Manager OnDemand - Windows
Client Customization Guide and Reference, SC27-0837.
The following sections contain extracts from this guide along with important tips
and practical examples of how to configure this feature.
To use this feature, the user selects text from within a document and then clicks
the Related Documents icon, which is on the task bar when Related Documents
is configured for that folder. The selected text is then used as the search criteria
on another folder. The first document in the hit list from this search is displayed in
the client alongside the document that is already open.
Using the preceding example, the summary sheet must be a document type that
allows text to be selected within the OnDemand client; Related Documents does
not work if the summary sheet is either PDF or image data.
[HKEY_CURRENT_USER\Software\IBM\OnDemand32\Client\RelatedDocs]
"Related"="Letters,Baxter"
[HKEY_CURRENT_USER\Software\IBM\OnDemand32\Client\RelatedDocs\Baxter]
"MenuText"="Related Check"
"BitmapDll"="c:\\reldocs\\extaddll.dll"
"BitmapResid"="135"
"RelatedFolder"="Cheque Images"
"Fields"="Amount=eq\\%s"
"Arrange"="v"
"Folders"="Baxter*\\Credit*"
[HKEY_CURRENT_USER\Software\IBM\OnDemand32\Client\RelatedDocs\Letters]
"MenuText"="Financial Report"
"BitmapDLL"="c:\\reldocs\\extaddll.dll"
"BitmapResid"="135"
"Folders"="Letters"
"RelatedFolder"="Financial Report"
"Fields"="Reference Number=eq\\%s"
"Arrange"="v"
Typically, the only place that the arsload program can run is on the server.
Therefore, users might send files that require loading into OnDemand to an
administrator, who initiates a daily batch load of these documents. With the Store
OnDemand services offering, you no longer need to rely on a system
administrator to store a single document. The following section explains what the
service offering provides.
b. From the Application group and Application lists, select the application
group and application in which the document should be stored.
Note: Data verification functions within Store OnDemand ensure that data
entered is in the correct format and type to further reduce the possibility of
entering incorrect indexes. The Store Document button is unavailable until
these validation criteria are satisfied. Otherwise, storing is not possible.
Guidance regarding these criteria is provided in the Store OnDemand window.
15.4 OnDemandToolbox
IBM Germany developed a set of sample programs for use with OnDemand
Common Server. These programs are intended to provide customers and
partners examples of how to code client APIs while providing a useful as-is tool
to meet customer needs. The sample toolbox code and documentation are
available on the IBM Redbooks Web site. Refer to Appendix A, “Additional
material” on page 605, for download instructions.
OD Delete
The OD Delete application is used for deleting documents archived in
OnDemand.
OD Store
The OD Store application is used for archiving PC files directly into OnDemand.
The example in Figure 15-13 shows how to run the ARSDATE command as a
batch job. The executable file ARSDATE resides in the /usr/lpp/ars/bin directory
in the hierarchical file system (HFS) of the UNIX System Services. The output is
written to the STDOUT statement, which points to an output file in the HFS,
/tmp/arssockt.std. This file must be accessible, or the user running the
BPXBATCH program must have the proper authority to create this file.
The PAR DD file contains the parameters for the indexer (Example 15-3). You
can cut and paste this information from the indexer information on the
Application panel.
Type the letter T in the date search field and set the search operator to Equal To.
When you run the search, OnDemand retrieves the documents that contain
today’s date.
The T date search option might be used with the search operator set to Between
or set to Equal To. You can also use the following patterns when you use the T
date search option:
T { + or - } # { D or M or Y }
The braces denote groups of optional parameters for the T format string; choose
one of the symbols in the group. If you leave out the plus sign (+) or the minus
sign (-), OnDemand assumes a + sign. If you leave out D, M, or Y, OnDemand
assumes D. The T format string is case insensitive, meaning that you can type T
or t, D or d, M or m, or Y or y.
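The rules above can be sketched as a small resolver. This is an illustrative reimplementation of the stated pattern only (the function name and the month-clamping behavior are our assumptions), not OnDemand's actual date handling:

```python
import re
from datetime import date, timedelta

def resolve_t_string(t_string, today=None):
    """Resolve a T format string such as T, T-30D, or t+2m to a date.
    Sketch of the rules described in the text -- not the client's parser."""
    today = today or date.today()
    m = re.fullmatch(r"[Tt](?:([+-]?)(\d+)([DdMmYy]?))?", t_string.strip())
    if m is None:
        raise ValueError("not a T format string: %r" % t_string)
    sign, count, unit = m.groups()
    if count is None:                                 # bare "T" means today
        return today
    n = int(count) * (-1 if sign == "-" else 1)       # omitted sign defaults to +
    unit = (unit or "D").upper()                      # omitted unit defaults to D
    if unit == "D":
        return today + timedelta(days=n)
    months = n if unit == "M" else n * 12
    total = today.year * 12 + (today.month - 1) + months
    year, month = divmod(total, 12)
    month += 1
    # Clamp the day to the length of the target month (our assumption)
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(today.day, days_in_month[month - 1]))
```

For example, with today set to 2007-06-15, `T-30D` resolves to 2007-05-16, matching the "days prior to the current date" behavior in Table 15-3.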
Table 15-2 T format string examples with the Equal To search operator
T string Meaning
Table 15-3 lists examples of using the T format string with the search operator
set to Between.
Table 15-3 T format string examples with the Between search operator
T string Meaning
T-60D and T-30D Between 30 and 60 days prior to the current date
Figure 15-15 Client installation: selecting the Ad-Hoc CDROM Mastering option
Important:
To transfer the documents to a staging drive, you must keep the
document list on the screen. Only one folder can be staged at a time.
All items in the document list are placed on the CD-ROM.
6. The CD-ROM mastering process starts. You should be able to see a window
with five options in it (Figure 15-20 on page 503):
– Clean: Removes all files in the staging directory
– Setup: Creates the necessary directory structures in the staging directory
– Fetch: Retrieves the data and resources for the items in the hit list
– Index: Re-indexes the retrieved data for the CD-ROM
– Stage: Copies the CD-ROM installation files and the OnDemand client
(along with any installed languages) to the staging directory
7. After the CD-ROM mastering process finishes, you see a message like the
example in Figure 15-21. Click Yes to finish the process. Click No if you want
to add another folder or stop the CD-ROM mastering process.
8. If you click No in the previous step, you see the message shown in
Figure 15-22. Click No to finish the process. Click Yes to return to the
previous window to select another folder.
9. After you finish selecting the folders and staging documents you want, when
prompted by the message “Proceed with the mastering of volume
xxxnnnnnnnn?”, click Yes. The CD-ROM image is finalized and can now be
accessed with the OnDemand client or written to a CD-ROM.
COPIES 1
USER cdrom
PASSWORD cdrom
4. Log on with the user ID and password specified when the CD-ROM image
was created (Figure 15-24).
PDD is a services offering. For more information about Production Data Distribution, contact your local IBM marketing representative.
The information that is displayed in the About window is obtained from a text file
named product.inf. The file can be created with a text editor such as Notepad. The product.inf file must be located in the OnDemand installation directory. The default installation directory is C:\Program Files\IBM\OnDemand32.
Example 15-4 on page 509 shows the contents of a sample product.inf file.
The title bar of the OnDemand main window is customized with the name Baxter
Bay Bank Archive System from the NAME keyword in the product.inf file
(Figure 15-28).
You can also customize how long you want to display the About window through
the registry setting. Refer to 15.11.4, “Displaying the OnDemand splash screen
or About window” on page 516, for more details.
Solution: For fix pack 7.1.2.4, a registry entry has been added to support this
requirement.
Solution: For fix pack 7.1.2.3, several registry entries have been added to
support this requirement (Figure 15-31).
Figure 15-31 Registry setting modification: enhanced Folder List Filter option
Figure 15-35 Line data background customization: Green Bar Alternating Stripes
The amount of time to display the splash screen or the About window can be
changed to a longer or shorter time by adding an entry in the Windows Registry.
The display time is specified in seconds. A value of zero can be specified to
prevent the splash screen or the About window from being displayed.
After you create this registry entry, the OnDemand splash screen or the About
window is displayed for 5 seconds when the OnDemand Client is started.
OnDemand processes a leading negative sign without any special steps in the
definition of the indexer parameters. Simply define the length of the field long
enough to include the negative sign and the largest possible value the field can
contain. For example, if the largest possible value is -9,999,999.99, you define 13
as the field length. During the load, OnDemand removes leading and trailing blanks, embedded blanks, commas, and periods.
In Example 15-5, Field 3 (FIELD3) and Index 3 (INDEX3) contain the decimal
numbers.
In our example as shown in Figure 15-40, Field 3 contains the numeric portion,
and Field 4 contains the sign portion.
Index 3 contains the decimal amount with Field 4 (FIELD4), the sign portion first,
and Field 3 (FIELD3). See the resulting indexer parameters set up in
Example 15-6.
From an OnDemand client, the document list displays the amount with a leading
negative sign. See Figure 15-41.
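The two cases above can be sketched as follows, using hypothetical helper names: the cleanup that the load performs on a decimal field, and the recombination of a separate sign field (Field 4) with the numeric portion (Field 3):

```python
def normalize_decimal_field(raw):
    """Strip leading/trailing blanks, embedded blanks, commas, and periods,
    as described for the load step. Sketch only -- not the arsload code."""
    cleaned = raw.strip()
    for ch in (" ", ",", "."):
        cleaned = cleaned.replace(ch, "")
    return cleaned

def combine_sign_and_amount(sign_field, numeric_field):
    """Hypothetical Index 3 construction: sign portion (Field 4) first,
    then the cleaned numeric portion (Field 3)."""
    return sign_field.strip() + normalize_decimal_field(numeric_field)

# A field wide enough for -9,999,999.99 needs 13 positions, sign included
assert len("-9,999,999.99") == 13
```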
The content of the message file can contain a maximum of 1024 characters of
text. The administrative client and the user client show the message after users
log on to the server. To close the message box and continue, users click OK.
To set up the message of the day, choose one of the following options:
For all OnDemand server platforms except Windows, set the
ARS_MESSAGE_OF_THE_DAY parameter to the full path name of a file that
contains the message that you want the client to show, in the ARS.CFG file,
for example:
ARS_MESSAGE_OF_THE_DAY=/opt/ondemand/tmp/message.txt
Restart the server after you modify the message of the day information.
Go to the OnDemand for iSeries Support Web site at the following address:
http://www.ibm.com/software/data/ondemand/400/support.html
Search on the word bulletin. You can obtain summary bulletins from the last
several years. Review them to find such valuable information as:
Common problems and solutions
Indexing techniques
Client command line parameters
Enhancements to the end-user client
Enhancements to the administrator client
ODWEK enhancements
How to create an AFP overlay
Tips on migration from Spool File Archive to Common Server
OnDemand client upgrade considerations
How to set up Document Audit Facility
Tips on using query restrictions
How to use Expression® Find in the client
How to add your own messages to the System Log
How to display a “message of the day” to an OnDemand client user
How to use a public named query with arsdoc to make it easier to delete
individual documents or modify index values
How to use a folder list filter in the OnDemand client
And many more tips
If you want to subscribe to the bulletin, contact Darrell Bryant by sending e-mail
to [email protected].
16.1 OnDemand Distribution Facility (ODF) on z/OS
OnDemand Distribution Facility is the report distribution feature for IBM Content
Manager OnDemand for z/OS V8.4. ODF is designed to group archived report
pages or segments into print bundles for distribution.
ODF obtains information from DB2 tables that are set up and maintained through an online administration facility. This allows captured reports for each user to be organized into print bundles for distribution.
The print processor may write the requested report pages or report segments to
the JES spool with the appropriate delivery information, to an output z/OS data
set, or to an e-mail URL. The print processor may also send a notice to an e-mail
URL that a report has been output to a JES spool. For JES spool, multiple
reports for a single user at a specific destination are combined into one bundle.
For more information about installing the optional ODF feature, refer to the IBM
Content Manager OnDemand Distribution Facility Installation and Reference
Guide, SC27-1377.
ODF monitor
The ODF Monitor screen is an online, menu-driven monitor that enables you to
perform the following tasks within ODF:
Distribution inquiry
Report bundle inquiry
Recipient inquiry
Destination printer inquiry
Initiate distribution reprint
Reprint inquiry
Figure 16-1 on page 527 shows the main ODF Monitor screen with the Report
Distribution Monitor Options.
ODF administration
The ODF Administration screen presents the administrator with a variety of
options for administering report distribution. They are:
Display and maintain recipients and recipient lists.
Display and maintain distributions.
Maintain report cross references.
Figure 16-2 shows the main ODF Administration Report Distribution screen with
the above options.
To display a list of recipients from the Report Distribution screen, select Option 1. The list of recipients defined in the ODF database is displayed, as shown in Figure 16-5.
In the Maintain Distribution screen, you can update the following information:
Recipient and Distribution fields: Descriptions for both fields together make up
the Distribution Name.
Recipient/List field: Contains the user ID in the User Output Table (UOT) or
the Recipient List in the Recipient List Table (LIS).
Distribution Description field: Contains a description indicating characteristics
of the distribution, such as purpose, contents, or recipients.
Customer Variables field: Contains sysout parameters used to override
sysout parameters prior to dynamic allocation using the pre-allocation exit.
Account field: Contains job card accounting code information for this
distribution.
Select Option 6 to access Maintain Distribution within ODF; this takes you to the screen shown in Figure 16-8.
From the Display Distributions screen, administrators can perform the following tasks:
Display the list of distributions that are defined in the ODF database.
Perform maintenance against some of the distribution fields.
Perform the following actions:
– ExEdit: Edit the existing distribution information.
– Copy as a Model: Copy a distribution using the contents of the existing distribution, including all bundles.
– List of Bundles: Lists bundles for the distribution you selected.
– Add: Add a distribution.
– Delete: Delete a distribution.
– Rename: Rename a distribution with a new Recipient/List name and
Description.
– Update: Update Job Name, class, wait, continue, manifest, and banner
indicators.
Figure 16-11 on page 535 shows a bundle definition with predefined fields
containing information.
The Maintain Bundle Definition Report screen contains the following fields:
The Report ID and Report Version fields: Contain the OnDemand for z/OS report name and version to be included in the distribution, as defined in the CRT table (OnDemand V7 only). The version is always 01 for OnDemand V7; in OnDemand V2, the Report Version may be ** or the report version number.
Customer Variables field: Contains sysout parameters used to override
sysout parameters prior to dynamic allocation using the Pre-Allocation exit.
Destination field: Contains the output destination printer name for this report.
A system node name may be included.
Job Name field: Contains the Job Name used for the print processor created
for this report used for unique identification in JES.
Bundle Sequence field: Contains a unique sequence number assigned for a
particular occurrence of a report within a bundle.
Report Build field: Contains F if the entire report is to be distributed, or Q if the
report is to be distributed by page or data selection range.
Status field: Contains A to mark this bundle Active, or I to mark this bundle
Inactive.
Wait/Ignore field: Contains W to indicate distribution should wait on this report
if unavailable at distribution initiation time, or I to indicate distribution should
not wait on this report.
Distribution processing
The scheduler task begins the distribution process at distribution intervals using
an activation routine. The scheduler and distribution intervals are defined by the
DISTSLEEP and SCHDSLEEP system-wide parameters. The following are
available distribution methods:
ALLREADY
TODHH:MM
TOPHH:MM
TOSHH:MM
LOADED
EXTERNAL
Distribution processing checks whether distributions are ready to print. It can also monitor the availability of captured reports, notifying the main task when all of a distribution's reports are available and ready to print.
Continuation processing monitors the ODF for z/OS work queues for missing
reports and initiates a print when the reports are available.
If all reports defined to a distribution bundle are not available, the distribution
request will be reexamined at the next distribution interval. If Continuation is
defined for the distribution, the missing report(s) will be distributed when they are
available.
LOADED distributions may be automatically initiated even if all reports are not
available by using the C indicator rather than the W. Often there is only one
report for the LOADED distributions.
Print processing
Print processing creates print bundles that consist of:
A manifest page describing the contents of the print bundle, if requested
A banner page preceding each report, if requested
The entire report
Selected page ranges of a report
Selected documents (segments) of a report
Print processor sysout under main task, ARSODF
Reprint Facility
Reprint the original distribution:
Entire Distribution
By Report name
By Recipient/List name
By Destination
It overrides the original print parameters and manages the Print Processor
entries.
Figure 16-14 on page 541 shows the Maintain Bundle Definition screen with the option for e-mail notification and delivery set with E- or N-.
Distribution tables
The distribution tables in ODF and their descriptions are summarized in
Table 16-1.
Bundle Query Table (BQT): Defines queries used to build the bundle.
Print Query Table (PQT): Defines the report query and the date of the query for the distribution bundle.
Recipient List Table (LIS): Defines a list of recipients (user IDs) for print distributions.
Print Processor Table (PPT): Defines printed distribution bundles used to produce initial print and reprint distribution output.
User Output Table (UOT): Defines separator page and optional banner page header information for a print distribution recipient.
Cross Reference Table (CRT): Contains a list of report names that cross-reference the ODF report name to the OnDemand application group/application name.
ODF documentation
For more information about the OnDemand Distribution Facility, refer to the
Installation and Reference Guide V7.1, which can be found at the link below:
http://www.ibm.com/software/data/ondemand/390/library.html
APAR 039140
This PTF applies to all the OnDemand Distribution Facility installations.
This PTF implements the unified logon capability when accessing the
OnDemand Server. This eliminates the need to specify the user ID and password
in the System Default table. The unified logon feature uses the TSO ID of the
person who submitted the batch utility jobs or logged on to the CICS
Administration Client Region. In this case, the user ID must be defined to the
OnDemand Administration Client with the necessary authority.
Note: Do not remove the SDT entries that contain the user ID and password (entries 9, 24, and 25). These entries may be blanked out to utilize the unified logon feature of OnDemand.
APAR AD95924O
This PTF adds a new feature to the OnDemand Distribution Facility. It permits control over which designated reports will be distributed. Any given report may be generated by multiple jobs, all of which will be archived by OnDemand, but not all of them are necessarily valid for distribution. This is controlled by an assigned
index value. The index value can be selected by checking the Reference check
box found in the Application Group Field Information tab within the OnDemand
Administration Client. The value of this reference field index is passed to ODF as
a Reference field. A new field is then added to the ODF report Cross Reference
table. This field is also referred to as a Reference field. If the value of the
reference field sent to ODF matches the reference field of a table entry in the
Cross Reference table, this report will be distributed to all defined recipients. The
reference index field selected should be a single-value index for a given application group; otherwise, the value passed to ODF will be the first value found for that index when the document was loaded into OnDemand.
For example, the report TRIALBAL is generated ad hoc throughout the day. Only
a segment of the report is created each time. TRIALBAL has dozens of recipients
defined in distributions to ODF, and each one is to receive only the final run. Each
ad hoc generation is run using a Job Name of DAYBAL. The final job that is run is
called FINALBAL. OnDemand has an exit that can be applied to collect the Job
Name and insert an index value into the TRIALBAL report. The TRIALBAL report
has the JOBNAME index value defined as a Reference field. When the DAYBAL
ADD97728O
This technote introduces the new feature Reference Field Caching Option for
OnDemand Distribution Facility. This feature adds the capability to pre-process
documents for a distribution so that only documents that exist will be scheduled
for printing. There are two methods of performing this existence check. The first method requires the distribution method of
EXTERNAL and will create a document cache in the Hierarchical File System to
temporarily store the documents for fast retrieval. A set of criteria must be met
before a document cache will be built. When a predetermined number of hits
have been made, the document cache will be created. During print processing,
the document cache will be checked first; if it is not found, the server will be
queried. This method permits only simple queries to be used (queries that have a
single compare value or IN a list of values). However, if a distribution that is
intended to use the cache has a bundle entry that has a more complex query, the
query will be used to check the server for a match. So both complex and simple
queries can be specified. A Print request will only be inserted for documents that
are found.
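The retrieval logic described above can be sketched as follows; the query classification heuristic and the cache shape are our assumptions, not the ODF implementation:

```python
def is_simple_query(query):
    """Per the text, a 'simple' query has a single compare value or an IN
    list of values; anything compound goes to the server. Crude heuristic."""
    q = " %s " % query.strip().upper()
    if " AND " in q or " OR " in q:
        return False
    return q.count("=") == 1 or " IN " in q

def fetch_for_print(query, cache, query_server):
    """Check the document cache first for simple queries; otherwise (or on
    a cache miss) ask the server. `cache` and `query_server` are
    hypothetical stand-ins for the HFS cache and the OnDemand server."""
    if is_simple_query(query) and query in cache:
        return cache[query]
    return query_server(query)
```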
Using the document cache with simple queries will save significant processing,
since the full document will be retrieved once from the server and temporarily
stored for fast retrieval of the segments. This can have a huge benefit when there
is a high percentage of hits within a document search.
Both methods require the reference field to be set up on the CRT table and have
the reference check box marked in the OnDemand Administration Client. Also,
the cache keyword must be used as a load parameter, even though a document
cache may not be created. This is because the cache keyword instructs the load
to insert the request into the RIT table instead of the DST table.
Full document requests are supported. However, if you are using the batch scheduler, full-document print requests will not provide any improvement in throughput. It is feasible to combine full-document and segment-selection queries within a distribution that will be scheduled using the batch scheduler. The intent of
this option is to provide rapid distribution of large reports that are segmented into
small documents that have many recipients.
ODF may temporarily store all the documents of a report into a file cache. During
print processing, ODF will first check if the document exists in the cache and will
return it for printing if it exists. During load processing, a check will be made for
the Reference index field. Any index field can be defined as a reference field
using the Reference check box in the OnDemand Administration Client. When
this check box is selected, ARSLOAD will call the ODF interface with the first
value defined for this index. The ODF interface will check for the reference value
in the Cross Reference table (ARSCRT). A match of the reference field will cause
the interface to place an entry into the new table ARSRIT if the load has been
invoked with the keyword CACHE. Without the CACHE keyword, normal
scheduling of the defined distributions will be processed. The scheduling of the
distributions associated with the document will be done by the batch scheduling
process.
The batch scheduler will retrieve the ready ARSRIT entries. All distributions that
have been defined to use the report will be retrieved. When the distribution
method is EXTERNAL, those bundle entries that have been defined with a query
will be checked first for a simple query, then a match on the index name in the
CRT table will be checked first for a simple query, then a match on the index
name in the CRT table, then finally a scan of the index data for this report will be
done for the value defined in the query. A match will cause the cache to be built
for this report once a threshold setting is reached. The threshold value allows for
The -Z parameter now has three values that can be set: -Z ODServer, -Z ODServer,CACHE, or -Z ODServer,CACHE,TRACE. The TRACE keyword may be present as the second value if the CACHE keyword is not set.
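A sketch of splitting the -Z value into its components (the helper is ours, not part of the product):

```python
def parse_z_parameter(value):
    """Split a -Z value such as 'ODServer,CACHE,TRACE' into the server
    name plus CACHE/TRACE flags. TRACE may appear without CACHE."""
    parts = [p.strip() for p in value.split(",")]
    flags = {p.upper() for p in parts[1:]}
    return parts[0], "CACHE" in flags, "TRACE" in flags
```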
This PTF adds the SQL Tokenized query capability to ODF. ODF has been
modified to generate tokenized queries when editing or adding a distribution that
will require a report segment selection using an SQL query. This feature will
improve query performance on the OnDemand Server by using the DB2 prepare
cache. Once installed, ODF will build and use SQL queries in the tokenized
format. However, existing queries will not automatically be converted to the
tokenized format. A new transaction has been added to the ODF Batch Utility to
provide for selective or mass conversion of all the queries that are currently in the
ARSPQT table. The transaction is only valid on the SYSIN transaction
definitions.
The Batch Utility only reports whether the rebuild of the queries was successful. It is possible to rebuild all the queries in the ODF database. However, if the ODF database is large, we recommend that you convert smaller ranges of distributions at a time. The batch utility only performs a commit after the rebuild transaction completes, so a large range results in a very large unit of work for the commit. ODF supports both formats of the query during distribution processing, so it is not necessary to convert the queries on installation of this PTF.
Note: The new format will generate a larger query that is stored in the
ARSPQT tables query field. The maximum size of a query is 32000 bytes. If a
query will exceed the maximum length, the non-tokenized query will be built.
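The fallback rule in the note can be sketched as follows (constant and function names are ours):

```python
MAX_QUERY_BYTES = 32000  # maximum size of the ARSPQT query field

def choose_query_format(tokenized, non_tokenized):
    """Use the tokenized query unless it would exceed the ARSPQT field
    limit, in which case build the non-tokenized form instead."""
    if len(tokenized.encode("utf-8")) <= MAX_QUERY_BYTES:
        return tokenized
    return non_tokenized
```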
AD12309O
This PTF only applies to OnDemand Distribution Facility installations utilizing an
OnDemand V7 archive. This PTF provides a performance enhancement when
distributions have multiple bundle entries defined. The principle change was to
maintain the connection to the server for the life of the distribution and to provide
direct access to the OnDemand Archive database. This eliminates the open and
close that is done during normal distribution processing and improves the
database access by eliminating the call to the OnDemand ARSDOC command.
ODF distributions that have been defined with only a single bundle entry may not
see much improvement in performance with this PTF. This feature must be
enabled in order to be used by ODF. To enable it, activate the //*ARSNODOC DD DUMMY JCL statement by removing the asterisk (*) in column 3.
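Removing the asterisk in column 3 turns the comment into an active DD statement. As a one-line illustration of the manual edit (in practice you simply edit the JCL member):

```python
def activate_jcl_statement(line):
    """Drop the '*' in column 3 of a commented JCL statement ('//*...'),
    making it active ('//...'). Sketch of the edit described above."""
    if line.startswith("//*"):
        return "//" + line[3:]
    return line
```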
This feature must be enabled in order to be used by ODF. To enable it, edit the
ARSODFC1 member found in the SARSJCLS library and change the
VERIFYQUERY value from an N to a Y. A value of M can be used; this value will
apply this feature to the Continuation Processor only, and the Main distribution
processor will not use this feature. This feature will only work with OnDemand V7
Archive; if the ODF system defines a V2 or V2 and V7 system, this feature will be
disabled. The setting for the server is found in the SDT 23 definition; the first four
characters must be defined as 'V7 ' to use this feature.
A new batch facility is also provided with this PTF. It performs, in batch mode, a limited form of the processing done by the real-time distribution processor. This process will only insert print requests for
documents that exist on a V7 Archive Server. This batch process will perform
only on one specific distribution at a time. The batch distribution processor is
designed to work with an external process that inserts the Document Status
records (DSTs) for this batch facility to process. The external process would set
the status field to a value of 'Q'. The real-time distribution processor will ignore
this status code so that the batch distribution processor can act on it.
Set the DB2SSID and DB2PLAN names to the installation-standard subsystem and plan names used to execute this program. The REPORT=ON option will write a report line for every PPT inserted for the print processor. Turning the report option off eliminates the per-insert report lines, but the final totals are still produced. Set
the DIST-ID and DIST-NAME parameters to the distribution that is to be
processed by the batch distribution processor. The Batch Distribution processor
will only handle one distribution per invocation. You cannot specify more than one set of DIST-ID/DIST-NAME parameters. The REPORT=NOHITS option can be
specified to help diagnose why documents are not being retrieved. Additional
displays will provide meaningful information about the query requests, but should
not be used normally.
An external process will create DST records for the selected documents to be
processed by the print processor of ODF. This external process is user provided.
The DST record must be properly constructed with a valid Report ID, Application
Group, Application, and Load ID for a defined distribution with the DST_STATUS
field set to a value of 'Q'. The Batch Distribution processor would be invoked with
the Distribution ID and Distribution Name specified on the PARMIN dd statement.
When invoked, the Batch Distribution Processor will validate the distribution, then
retrieve the DST records, and find the matching bundle entries defined for the
distribution. If the bundle is defined with a query selection, the query will be
retrieved and passed to the OnDemand server to determine if the document
exists for the query. Only documents that exist will have a print request inserted
(PPT). If the bundle entry defines a full document print, a PPT will always be
inserted. Once all DST entries have been processed for every bundle entry, a
DRT record will be inserted to indicate to ODF that the distribution is ready for
printing. When ODF detects the DRT status Q record, it will invoke the print
processor immediately. No further bundling will take place. This feature assumes
that all the bundling for the distribution has been done, so it does not recognize
the Wait/Ignore indicator that is set in the bundle entry.
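The batch flow above amounts to the following sketch; the record shapes and helper names are hypothetical stand-ins for the DST, PPT, and DRT tables:

```python
def batch_distribute(dst_records, bundles, document_exists):
    """Sketch of the batch distribution processor: act only on DST records
    queued with status 'Q', insert a print request (PPT) when the bundle
    asks for a full-document print or the queried document exists, then
    emit a DRT 'Q' record so ODF invokes the print processor immediately."""
    ppts = []
    for dst in dst_records:
        if dst["status"] != "Q":   # the real-time processor ignores 'Q'; we act on it
            continue
        for bundle in bundles.get(dst["report_id"], []):
            full_print = bundle.get("query") is None
            if full_print or document_exists(bundle["query"], dst):
                ppts.append((dst["report_id"], bundle["seq"]))
    return ppts, {"status": "Q"}   # DRT record: distribution ready to print
```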
AD30814O
This PTF applies to OnDemand Distribution Facility installations utilizing an
OnDemand V7 archive. This PTF provides a performance enhancement for
distributions that have Job Name controls established by Report Administration.
Address Spaces dynamically created for Job Name controlled distribution
processes will remain persistent for as long as there is distribution work to be
handled. Contrast this to a process that initiated fresh controls for each new
distribution. Persistent distribution processing will capitalize on economies
gained through reusing resources.
Defining a report
To define a new report:
1. Start OnDemand Administration Client.
2. Expand Report Distribution.
3. Right-click Reports and select New Report from the pop-up menu. See
Figure 16-15.
Defining a banner
To define a banner:
1. Right-click Banner and select New Banner from the pop-up menu.
2. Specify the banner name and the banner type you want to use and the
header banner information. See Figure 16-17 on page 557.
3. Click OK to save the banner information.
Defining a bundle
To define a new bundle name:
1. Right-click Bundles and select New Bundle from the pop-up menu.
3. Under the General tab, type the bundle name and the banner type you wish to
use. See Figure 16-18.
4. Define the bundle contents you are going to use. More than one content type
can be added. For example, you could combine header banners and reports.
To add both a banner and a report, select the content you want to bundle
under bundle content type and select the Add Tab button so it appears under
the bundle contents. See Figure 16-18 on page 558.
5. Click OK.
2. Click the General tab. Define a unique name for your distribution and select
the delivery options for the distribution. If applicable, select who to notify. See
the sample setup in Figure 16-23 on page 563.
Tip: The Server Printer delivery option is specific to InfoPrint and does not
support a local or network defined printer.
4. Click the Schedule tab. Select the distribution schedule you are going to use.
See Figure 16-25.
OD Store
OD Store is a simple application that allows users to manually archive documents from their PC directly into Content Manager OnDemand. The user selects one or more files and enters the key fields by hand. Documents can only be stored into an existing application, application group, and folder. Also, an administrator has to define which types of files (for example, Word *.doc files or Excel *.xls files) users are allowed to store into which storage set within Content Manager OnDemand. In addition, the user can select one or more documents and modify their key fields.
OD Delete
OD Delete has the same interface as OD Update. It allows the user to search for
documents and review them. Instead of update functionality, it allows the user to
delete one or more documents.
OD Filesystem Monitor
The OD Filesystem Monitor is a server program that runs without any user
interaction.
To use the transform option, the appropriate transform software, such as Xenos or AFP2WEB, must be installed. To use the Metacode-to-PDF transform option, the Xenos transform must be used, and the system must be running OnDemand Version 7.1 or later.
Issue the following arsmail qry command to show the content of the database:
arsmail qry Recipient | Email | Addr | Transform | Delivery | App Group
| Folder |
The latest version of E-mail Notification V8.4 is written in Java. The source code is available when the E-mail Notification feature is purchased, so you can customize it any way you want. The other major change is that an e-mail can now link directly to a document. This works as long as the Web interface into OnDemand is WEBI. Upon opening the attachment in an e-mail, users are presented with the document they have been notified about, or with a login page if they are not already authenticated. Once authenticated, the document is displayed.
17.1 Web Administration Client
OnDemand V8.4 allows you to use a Web administrative client to administer
OnDemand. This client enables OnDemand administrators, and other individuals
who might not be full-time administrators, to perform certain administrative tasks
from a Web browser without having to install the OnDemand Administration
Client on their workstations.
The OnDemand Web administrative client allows you to add, view, update, and
delete users, groups, applications, application groups, folders, printers, and
storage sets.
You can create custom administrative forms by using the Web administrative
client and IBM Workplace Forms™ Designer. Individuals who are not full-time
administrators can use the custom forms to perform administrative tasks. The
Web administrative client also enables users who do not have in-depth
knowledge of OnDemand to complete administrative tasks.
From the existing user window, you can manage and view a specific user by
right-clicking the user and selecting Copy, Update, Delete, or Properties from
the pop-up menu. See Figure 17-6 on page 577.
Use the copy function to quickly create another user with similar settings.
At the time of writing, only Windows Server® 2003 R2 is supported as the
operating system for the mid-tier application. Windows 2000 and Windows Vista®
are not supported.
Always check the support Web site for the latest requirements and additional
enhancements for the OnDemand Web Administration Client.
A new tab has been added to the Application Group edit/view window; it appears
only if the server is at V8.4.0.0 or higher. Composite indexes that are
created are shown in the Multiple Field Indexes list. Any application group field
that has a type of Index appears in the Single Field Indexes list.
3. Right-click the specific application group and select Update. See Figure 17-8.
Figure 17-9 Composite index - Update Application Group, Advanced Index tab
7. Once you add the necessary fields for the composite index, click OK to exit
the window.
9. To add another composite index, click Add and repeat the previous steps. As
an example, we add branch and subkey as the second composite index. See
Figure 17-12 on page 583, which shows both indexes are added.
2. Define a name and description for your application group. See Figure 17-14
on page 585.
3. Select the Storage Management tab and select a Storage Set Name. See
Figure 17-15.
5. Select the Field Information tab and select a string value length for test1.
See Figure 17-17.
7. Select the Advanced Information tab and click Add. This will take you to the
Add an Index window.
9. Click OK and this will take you back to the Add an Application Group window.
See Figure 17-20 on page 589.
10. Now you have a cluster index defined within the Multiple Field Indexes list.
Notice that under Cluster, Index 1 is set to Yes. Save your application group.
Figure: A cabinet containing the Monthly Report and Client Report folders
You can view the Access and Administrator authority when viewing a specific
cabinet and selecting the Permissions tab. See Figure 17-23.
Figure 17-26 Specify the field information for the date field
4. Map the folder fields to the application group fields. See Figure 17-27 on
page 595.
5. Verify the input file format, which depends on the file name format that you
specify with the arsload -B command. Any of the following input files should
work for this test case:
– MVS.JOB.010199.test-CRD-LI2484.09244.00001.ARD
– 070907.JOB.DAT.test-CRD-LI2484.07191.1207.ARD
– MVS.070907.test-CRD-LI2484.test-CRD-LI2484.07191.1207.ARD
6. When you run the arsload command, the -b flag specifies the name of the
field/index (application group/application), and the -B flag identifies the part of
the file name that contains the index value. For example:
– To process the first file name:
arsload -d /arsload -u testuid -p TESTPW -b "whoDate" -B
"ign.ign.IDX.ag.ign.ign"
– To process the second file name:
arsload -d /arsload -u testuid -p TESTPW -b "whoDate" -B
"IDX.ign.ign.ag.ign.ign"
– To process the third file name:
arsload -d /arsload -u testuid -p TESTPW -b "whoDate" -B
"ign.IDX.app.ag.ign.ign"
7. Figure 17-28 shows a sample document hit list for a search within the
Content Manager OnDemand Windows Client.
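The mask-to-file-name mapping that steps 5 and 6 describe can be sketched in a few lines of Python. This is an illustration of the mapping only, not the arsload implementation, and it assumes the trailing .ARD extension falls outside the mask:

```python
# Illustrative sketch of how an arsload -B mask maps dot-separated parts
# of a load file name to values: "ign" parts are ignored, "IDX" supplies
# the index value, and "ag"/"app" name the application group/application.
# This mimics the mapping described in the text; it is not arsload itself,
# and it assumes the trailing .ARD extension is outside the mask.

def apply_mask(file_name, mask):
    parts = file_name.split(".")
    keys = mask.split(".")
    result = {}
    for key, part in zip(keys, parts):
        if key != "ign":
            result[key] = part
    return result

print(apply_mask("MVS.JOB.010199.test-CRD-LI2484.09244.00001.ARD",
                 "ign.ign.IDX.ag.ign.ign"))
# {'IDX': '010199', 'ag': 'test-CRD-LI2484'}
```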
3. Define the name and port of the LDAP server that you are going to authenticate
to in the ARS.CFG file, using the ARS_LDAP_SERVER and
ARS_LDAP_PORT parameters. See Figure 17-31.
###########################################
# LDAP Parameters (Library Server Only) #
###########################################
ARS_LDAP_SERVER=bluepages.ibm.com
ARS_LDAP_PORT=
ARS_LDAP_BASE_DN=ou=bluepages,o=ibm.com
ARS_LDAP_BIND_DN=
ARS_LDAP_BIND_DN_PWD=
ARS_LDAP_BIND_ATTRIBUTE=mail
ARS_LDAP_MAPPED_ATTRIBUTE=primaryuserid
ARS_LDAP_ALLOW_ANONYMOUS=TRUE
Figure 17-31 Sample ARS.CFG to configure LDAP authentication
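Because ARS.CFG is a simple KEY=VALUE file, a small script can check that the LDAP parameters are present before restarting the server. This is a minimal sketch, assuming only that the format is KEY=VALUE with # comment lines; the rule that the server and base DN must be non-empty is our assumption, not a documented product requirement:

```python
# Minimal sketch: parse ARS.CFG-style KEY=VALUE lines and check that the
# LDAP parameters needed for authentication are present. The parameter
# names come from the sample above; the validation rule (server and base
# DN must be non-empty) is an assumption, not a product requirement.

REQUIRED = ["ARS_LDAP_SERVER", "ARS_LDAP_BASE_DN"]

def parse_cfg(text):
    """Return a dict of KEY -> VALUE, skipping blanks and # comments."""
    cfg = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Split on the FIRST '=' only: values such as a base DN
        # (ou=bluepages,o=ibm.com) contain '=' themselves.
        key, _, value = line.partition("=")
        cfg[key.strip()] = value.strip()
    return cfg

def missing_ldap_params(cfg):
    """List the required LDAP parameters that are absent or empty."""
    return [k for k in REQUIRED if not cfg.get(k)]

sample = """\
ARS_LDAP_SERVER=bluepages.ibm.com
ARS_LDAP_PORT=
ARS_LDAP_BASE_DN=ou=bluepages,o=ibm.com
"""
print(missing_ldap_params(parse_cfg(sample)))  # []
```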
For a 64-bit Windows system, use the 64-bit version of Windows Server 2003 R2.
ODWEK supports both the 32-bit and 64-bit versions.
17.8 Tracing
Content Manager OnDemand has incorporated a trace facility into the code to
help the support team perform problem determination. The trace facility is
available in Content Manager OnDemand for Multiplatforms.
3. The default trace log name is ARCHIVE.trace.log. Add the following line to
the ars.cfg file to specify the directory path:
ARS_TRACE_SETTINGS=/usr/lpp/ars/config/trace.settings
Figure 17-34 shows a portion of the trace.settings file.
Select Additional materials and open the directory that corresponds with
the IBM Redbooks publication form number, SG246915.
Using the Web material
The additional Web material that accompanies this IBM Redbooks publication
includes the following files:
File name Description
SG246915_StoreOD.zip Store OnDemand
SG246915_ODToolbox.zip OnDemand Toolbox
Advanced Function Presentation data stream (AFP data stream). A presentation data stream that is processed in the AFP environment. MO:DCA-P is the strategic AFP interchange data stream. IPDS is the strategic AFP printer data stream.
AFP. Advanced Function Presentation
alphanumeric character. Consists of letters, numbers, and often other symbols, such as punctuation marks and mathematical symbols. See also alphabetic character.
alphanumeric string. A sequence of characters consisting solely of the letters a through z and the numerals 0 through 9.
archive media. Devices and volumes on which the long-term or backup copy of a report is stored. For example, an optical storage library is one type of archive media supported by OnDemand.
archive storage. The storage in which the long-term or backup copy of a report is maintained. Includes the devices and volumes on which the files are stored and the management policies that determine how long data is maintained in archive storage.
archive storage manager. The software product that manages archive media and maintains files in archive storage. See Tivoli Storage Manager.
backend. In the AIX operating system, the program that sends output to a particular device. Synonymous with backend program.
backend program. Synonym for backend.
Bar Code Object Content Architecture. An architected collection of control structures used to interchange and present barcode data.
BCOCA. Bar Code Object Content Architecture
bitmap. A file that contains a bit-mapped graphic.
BMP. Bitmap
Glossary 609
byte. The amount of storage required to represent 1 character; a byte is 8 bits.
C
cache storage. The storage in which the primary or short-term copy of a report is stored. Usually disk storage. Most customers configure the system to maintain the most recent and frequently used versions of reports in cache storage.
carriage control character. The first character of an output record (line) that is to be printed; it determines how many lines should be skipped before the next line is printed.
case-sensitive. The ability to distinguish between uppercase and lowercase letters.
CCITT. Consultative Committee on International Telegraphy and Telephone
CD-ROM. Compact disc read-only memory
channel. A device connecting the processor to input and output devices.
channel adapter. A communication controller hardware unit used to attach the controller to a System/370™ data channel.
channel-attached. (1) Pertaining to devices attached to a controlling unit by cables, rather than by telecommunication lines. (2) Synonymous with local.
character. A letter, digit, or other symbol that represents, organizes, or controls data.
character rotation. The alignment of a character with respect to its character baseline, measured in degrees in a clockwise direction. Examples are 0°, 90°, 180°, and 270°. Zero-degree character rotation exists when a character is in its customary alignment with the baseline.
character set. A group of characters used for a specific reason; for example, the set of characters a printer can print or a keyboard can support.
click. To press the left mouse button while pointing to an object such as a command button or a toolbar button.
client. (1) In a distributed file system environment, a system that is dependent on a server to provide it with programs or access to programs. (2) A personal computer connected to a network running OnDemand software that can log on and query the library server, retrieve documents from OnDemand, and view and print documents.
client domain. The set of optical drives and storage volumes used by Tivoli Storage Manager to store report files and resources belonging to an application group.
client node. An application group that has been registered to the Tivoli Storage Manager server.
COBOL. Common business-oriented language. A high-level programming language, based on English, that is used primarily for business applications.
Consultative Committee on International Telegraphy and Telephone (CCITT). A United Nations Specialized Standards group whose membership includes common carriers concerned with devising and proposing recommendations for international telecommunications representing alphabets, graphics, control information, and other fundamental information interchange issues.
Content Manager. A comprehensive set of Web-enabled, integrated software solutions from IBM for managing information and making it available to anyone, anywhere.
control character. A character that is not a graphic character such as a letter, number, or punctuation mark. Such characters are called control characters because they frequently act to control a peripheral device.
controller. A device that coordinates and controls the operation of one or more input/output devices, such as workstations, and synchronizes the operation of the system as a whole.
conversion. In programming languages, the transformation between values that represent the same data item but belong to different data types.
copies. See copy group.
copy group. In Tivoli Storage Manager, a policy object that contains attributes that control the generation, destination, and expiration of backup and archive files. There are two kinds of copy groups: backup and archive. Copy groups belong to management classes.
copy storage pool. A named collection of storage volumes that contains copies of files that reside in primary storage pools. Copy storage pools are used to back up the data stored in primary storage pools.
CPGID. Code Page Global Identifier
customization. The process of describing optional changes to defaults of a software program that is already installed on the system and configured so that it can be used. Contrast with configuration.
customize. To describe the system, the devices, programs, users, and user defaults for a particular data processing system or network. Contrast with configure.
D
daemon. In UNIX, a process begun by the root user or by the root shell that can be stopped only by the root user. Daemon processes generally provide services that must be available at all times, such as sending data to the printer. A daemon runs continuously, looking for work to do, performing that work, and waiting for more work. A daemon does not have a controlling terminal associated with it. The OnDemand data download program (ARSJESD) is an example of a daemon.
database. (1) The collection of information about all objects managed by OnDemand, including reports, groups, users, printers, application groups, storage sets, applications, and folders. (2) The collection of information about all objects managed by Tivoli Storage Manager, including policy management objects, administrators, and client nodes.
data set. Synonym for file.
data stream. A continuous stream of data elements being transmitted, or intended for transmission, in character or binary-digit form using a defined format.
data transfer. The movement, or copying, of data from one location and the storage of the data at another location.
data type. The type, format, or classification of a data object.
device class. A named group of Tivoli Storage Manager storage devices. Each device class has a unique name and represents a device type of disk, tape, or optical disk.
device driver. A program that operates a specific device, such as a printer, disk drive, or display.
device type. A type of Tivoli Storage Manager storage device. Each device class must be categorized with either the disk, tape, or optical disk device types.
document. (1) In OnDemand, a logical section of a larger file, such as an individual invoice within a report of thousands of invoices. A document can also represent an indexed group of pages from a report. (2) A file containing an AFP data stream document. An AFP data stream document is bounded by Begin Document and End Document structured fields and can be created using a text formatter such as Document Composition Facility (DCF).
Document Composition Facility (DCF). An IBM licensed program used to prepare printed documents.
domain. See policy domain or client domain.
DOS. Disk operating system
double-click. To rapidly press the left mouse button twice while pointing to an object.
download. To transfer data from one computer for use on another one. Typically, users download from a larger computer to a diskette or fixed disk on a smaller computer or from a system unit to an adapter.
drag. To hold down the left mouse button while moving the mouse.
driver. The end of a stream closest to an external interface. The principal functions of the driver are handling any associated device, and transforming data and information between the external device and stream.
E
EBCDIC. Extended Binary-Coded Decimal Interchange Code. This is the default type of data encoding in an MVS environment. Contrast with ASCII.
EIP. Enterprise Information Portal
enqueue. To place items in a queue.
enter. (1) An instruction to type specific information using the keyboard. (2) A keyboard key that, when pressed, confirms or initiates the selected command.
Enterprise Information Portal (EIP). An IBM software product that provides a coordinated, Web-enabled entry point to what is otherwise disconnected, incompatible data scattered across an enterprise.
environment variable. A variable that is included in the current software environment and is therefore available to any called program that requests it.
error condition. The state that results from an attempt to run instructions in a computer program that are not valid or that operate on data that is not valid.
error log. A file in a product or system where error information is stored for later access.
error log entry. In AIX, a record in the system error log describing a hardware or software failure and containing failure data captured at the time of the failure.
error message. An indication that an error has been detected.
error recovery. The process of correcting or bypassing the effects of a fault to restore a computer system to a prescribed condition.
error type. Identifies whether an error log entry is for a permanent failure, temporary failure, performance degradation, impending loss of availability, or undetermined failure.
exit program. A user-written program that is given control during operation of a system function.
exit routine. A routine that receives control when a specified event occurs, such as an error.
expiration. The process of deleting index data and reports based on storage management information. The OnDemand database manager and the storage managers run expiration processing to remove data that is no longer needed from storage volumes and reclaim the space.
Extended Binary-Coded Decimal Interchange Code (EBCDIC). A coded character set consisting of eight-bit coded characters.
external library resource (member). Objects that can be used by other program products while running print jobs; for example, coded fonts, code pages, font character sets, form definitions, page definitions, and page segments. Synonym for resource object.
file system. The collection of files and file management structures on a physical or logical mass storage device, such as a diskette or a minidisk.
file transfer. In remote communications, the transfer of a file or files from one system to another over a communications link.
File Transfer Protocol (FTP). In TCP/IP, the protocol that makes it possible to transfer data among hosts and to use foreign hosts indirectly.
fixed disk. A flat, circular, nonremovable plate with a magnetizable surface layer on which data can be stored by magnetic recording. A rigid magnetic disk.
fixed-disk drive. The mechanism used to read and write information on a fixed disk.
folder. In OnDemand, the end-user view of data stored in the system. Folders provide users a convenient way to find related information, regardless of the source of the information or where the data is stored.
font character set. Part of an AFP font that contains the raster patterns, identifiers, and descriptions of characters. Often synonymous with Character Set. See also coded font.
form definition (FORMDEF). A form definition is a resource used by OnDemand. A form definition specifies the number of copies to be printed, whether the sheet should be printed on both sides, the position of a page of data on the sheet, text suppression, and overlays to be used (if any). Synonymous with FORMDEF.
gigabyte. A unit of memory or space measurement equal to approximately one billion bytes. One gigabyte equals 1,000 megabytes.
GOCA. Graphic Object Content Architecture
graphic. A symbol produced by a process such as handwriting, drawing, or printing.
graphic character. A character that can be displayed or printed.
index object file. An index-information file created by ACIF that contains the Index Element (IEL) structured fields, which identify the location of tagged groups in the AFP file. The indexing tags are contained in the Tagged Logical Element (TLE) structured fields.
indexing. (1) A process of segmenting a print file into uniquely identifiable groups of pages (a named collection of sequential pages) for later retrieval. (2) In ACIF, a process of matching reference points within a file and creating structured field tags within the MO:DCA-P document and the separate index object file.
indexing with data values. Adding indexing tags to a MO:DCA-P document using data that is already in the document and that is consistently located in the same place in each group of pages.
indexing with literal values. Adding indexing tags to a MO:DCA-P document by assigning literal values as indexing tags, because the document is not organized such that common data is located consistently throughout the document.
Infoprint Manager. A sophisticated IBM print subsystem that drives AFP printers, PostScript printers, and PCL printers. Infoprint Manager is supported under AIX, OS/390, Windows NT, and Windows 2000. Infoprint Manager manages printer resources such as fonts, images, electronic forms, form definitions, and page definitions, and provides error recovery for print jobs. When printing line data, Infoprint Manager supports external formatting using page definitions and form definitions. This external formatting extends page printer functions such as electronic forms and use of typographic fonts without any change to applications that generate the data.
informational message. (1) A message that provides information to the end-user or system administrator but does not require a response. (2) A message that is not the result of an error condition.
input file. A file opened in order to allow records to be read.
install. (1) To add a program, program option, or software program to the system in a manner such that it might be executed and will interact properly with all affected programs in the system. (2) To connect a piece of hardware to the processor.
intelligent printer data stream (IPDS). An all-points-addressable data stream that allows users to position text, images, and graphics at any defined point on a printed page.
interface. Hardware, software, or both, that links systems, programs, or devices.
Internet. A wide area network connecting thousands of disparate networks in industry, education, government, and research. The Internet network uses TCP/IP as the protocol for transmitting information.
Internet Protocol (IP). In TCP/IP, a protocol that routes data from its source to its destination in an Internet environment.
IOCA. Image Object Content Architecture
IP. Internet Protocol
IPDS. Intelligent printer data stream
J
job. One or more related procedures or programs grouped into a procedure, identified by appropriate job control statements.
kernel. The part of an operating system that performs basic functions such as allocating hardware resources.
kernel extension. A program that modifies parts of the kernel that can be customized to provide additional services and calls. See kernel.
K-byte. Kilobyte
keyword. Part of a command operand that consists of a specific character string.
kilobyte (K-byte). 1024 bytes in decimal notation when referring to memory capacity; in all other cases, it is defined as 1000.
L
LAN. Local area network
LAN server. A data station that provides services to other data stations on a local area network; for example, file server, print server, mail server.
library resource name. A name by which an object might be called from a library by AFP as part of a print job. Includes the 2-character prefix for the type of object, such as P1 for page definitions, F1 for form definitions, or O1 for overlays (also known as resource name).
library server. In OnDemand, the workstation or node that users must go through to access the system. The library server controls the OnDemand database.
licensed program. A separately priced program and its associated materials that bear a copyright and are offered to customers under the terms and conditions of a licensing agreement.
line data. Data prepared for printing on a line printer, such as an IBM 3800 Model 1 Printing Subsystem. Line data is usually characterized by carriage-control characters and table reference characters.
line-data print file. A file that consists of line data, optionally supplemented by a limited set of structured fields.
line printer. A device that prints a line of characters as a unit. Contrast with page printer.
line printer daemon (LPD). In TCP/IP, the command responsible for sending data from the spooling directory to a printer.
line printer requestor (LPR). In TCP/IP, a client command that allows the local host to submit a file to be printed on a remote print server.
literal. (1) A symbol or a quantity in a source program that is itself data, rather than a reference to data. (2) A character string whose value is given by the characters themselves; for example, the numeric literal 7 has the value 7, and the character literal CHARACTERS has the value CHARACTERS.
loading. The logical process of archiving reports in OnDemand. During the loading process, OnDemand processes reports, creates index data, and copies report data and resources to cache storage and archive storage.
local. Pertaining to a device accessed directly without use of a telecommunication line.
local area network (LAN). (1) A computer network located on a user's premises within a limited geographical area. Communication within a local area network is not subject to external regulations; however, communication across the LAN boundary might be subject to some form of regulation. (2) A network in which a set of devices is connected to one another for communication and that can be connected to a larger network. See also token-ring network.
logical page. In the IMS™ message format service, a user-defined group of related message segment and field definitions.
logical volume. The combined space from all volumes defined to either the Tivoli Storage Manager database or recovery log. The database resides on one logical volume and the recovery log resides on a different logical volume.
log file. A fixed-length file used to record changes to a database.
LPD. Line printer daemon
LPR. Line printer requestor
LZW. See Lempel-Ziv-Welch
M
M byte. Megabyte
MB. Megabyte
machine carriage control character. A character that specifies that a write, space, or skip operation should be performed either immediately or after printing the line containing the carriage control.
mainframe. A large computer, particularly one to which other computers can be connected so that they can share facilities the mainframe provides. The term usually refers to hardware only.
migration. (1) The process of moving data from one computer system to another without converting the data. (2) The process of moving report files, resources, and index data from cache storage to long-term (optical or tape) storage.
NFS. Network File System
node. A workstation that operates as an OnDemand library server or object server and is connected to a TCP/IP network.
notes. Electronic comments, clarifications, and reminders that can be attached to an OnDemand document.
non-IPDS printer. In this publication, a printer that is not channel-attached and which does not accept the Intelligent Printer Data Stream.
numeric. Pertaining to any of the digits 0 through 9.
O
object. (1) A collection of structured fields. The first structured field provides a begin-object function and the last structured field provides an end-object function. The object might contain one or more other structured fields whose content consists of one or more data elements of a particular data type. An object might be assigned a name, which might be used to reference the object. Examples of objects are text, graphics, and image objects. (2) A resource or a sequence of structured fields contained within a larger entity, such as a page segment or a composed page. (3) A collection of data referred to by a single name.
object server. In OnDemand, a workstation or node controlled by a storage manager to maintain reports in cache storage, and optionally, archive storage.
offset. The number of measuring units from an arbitrary starting point in a record, area, or control block to some other point.
online. Being controlled directly by or directly communicating with the computer.
operating environment. (1) The physical environment; for example, temperature, humidity, and layout. (2) All of the basic functions and the user programs that can be executed by a store controller to enable the devices in the system to perform specific operations. (3) The collection of store controller data, user programs, lists, tables, control blocks, and files that reside in a subsystem store controller and control its operation.
operating system. Software that controls the running of programs and that also can provide such services as resource allocation, scheduling, input and output control, and data management.
optical library. A storage device that houses optical disk drives and optical disks, and contains a mechanism for moving optical disks between a storage area and optical disk drives.
optimize. To improve the speed of a program or to reduce the use of storage during processing.
outline font. (1) Font whose graphic character shapes are defined as mathematical equations rather than by raster patterns. (2) Font created in the format described in Adobe Type 1 Font Format, a publication available from Adobe Systems, Inc. Synonymous with Type 1 fonts.
overlay. A collection of predefined, constant data such as lines, shading, text, boxes, or logos, that is electronically composed and stored as an AFP resource file that can be merged with variable data on a page while printing or viewing.
pipe. To direct the data so that the output from one process becomes the input to another process. The standard output of one command can be connected to the standard input of another with the pipe operator (¦). Two commands connected in this way constitute a pipeline.
point. (1) To move the mouse pointer to a specific object. (2) A unit of typesetting measure equal to 0.01384 inch (0.35054 mm), or about one seventy-second of an inch. There are 12 points per pica.
point size. The height of a font in points. See also point.
policy domain. In Tivoli Storage Manager, a policy object that contains policy sets, management classes, and copy groups that is used by a group of client nodes. See policy set, management class, copy group, and client node.
policy set. In Tivoli Storage Manager, a policy object that contains a group of management class definitions that exist for a policy domain. At any one time, there can be many policy sets within a policy domain but only one policy set can be active. See management class and active policy set.
port. (1) A part of the system unit or remote controller to which cables for external devices (display stations, terminals, or printers) are attached. The port is an access point for data entry or exit. (2) A specific communications endpoint within a host. A port is identified by a port number.
Portable Document Format (PDF). A distilled version of PostScript data that adds structure and efficiency. PDF data has the same imaging model as PostScript but does not have its programmability. PDF also provides direct access to pages and allows hypertext links, bookmarks, and other navigational aids required for viewing. The text in a PDF file is usually compressed using LZW methods. The images in a PDF file are usually compressed using CCITT or JPEG methods.
PostScript. Adobe's page description language used for printing. PostScript is a flexible programming language and imaging model but is not as structured as AFP. PostScript cannot be parsed to determine page boundaries; it must be interpreted. Because of this limitation, PostScript is not practical for archiving and viewing. Adobe created PDF for archiving and viewing.
press. To touch a specific key on the keyboard.
primary log file. A set of one or more log files used to record changes to a database. Storage for these files is allocated in advance.
primary storage pool. A named collection of storage volumes in which Tivoli Storage Manager stores archive copies of files.
print file. (1) The output of a user-defined program that is to be indexed and loaded into the system. (2) A file that a user wants to print.
print job. A series of print files scheduled for printing. At print submission time, the user can request one or more files to be printed; therefore, a print job consists of one or more print files.
queue device. A logical device defining characteristics of a physical device attached to a queue.

R

radio button. Round option buttons grouped in windows; one is preselected. Like a radio in an automobile, select only one button (“station”) at a time.

RAM. Random access memory. Specifically, the memory used for system memory. Sometimes this memory is referred to as main storage.

raster. In Advanced Function Presentation, an on/off pattern of electrostatic images produced by the laser print head under control of the character generator.

raster font. A font in which the characters are defined directly by the raster bit map. See font. Contrast with outline font.

raster graphics. Computer graphics in which a display image is composed of an array of pixels arranged in rows and columns.

read access. In computer security, permission to read information.

record. (1) In programming languages, an aggregate that consists of data objects, possibly with different attributes, that usually have identifiers attached to them. (2) A set of data treated as a unit. (3) A collection of fields treated as a unit.

recovery log. In Tivoli Storage Manager, a log of updates that are about to be written to the database. The log can be used to recover from system and media failures.

recovery procedure. (1) An action performed by the operator when an error message appears on the display panel. This action usually permits the program to run the next job. (2) The method of returning the system to the point where a major system error occurred and running the recent critical jobs again.

register. To define a client node to Tivoli Storage Manager.

remote. Pertaining to a system or device that is accessed through a communications line. Contrast with local.

remote print. Issuing print jobs from one machine (client) to print on another machine (server) on a network.

remote system. A system that is connected to your system through a communication line.

report. A print data stream produced by a user-defined program or other software program that can contain hundreds or thousands of pages of related information. Most reports can be logically divided and indexed into single and multiple page objects called documents.

resolution. (1) In computer graphics, a measure of the sharpness of an image, expressed as the number of lines and columns on the display panel. (2) The number of pels per unit of linear measure.

resource. A collection of printing instructions, and sometimes data to be printed, that consists entirely of structured fields. A resource can be stored as a member of a directory and can be called for by the Print Services Facility when needed. The different resources are: coded font, character set, code page, page segment, overlay, and form definition.
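Sense (3) of record, a collection of fields treated as a unit, maps naturally onto a record type in most programming languages. A minimal Python sketch, with field names invented purely for illustration:

```python
from dataclasses import dataclass

# A record in sense (3): a collection of fields treated as a unit.
# The field names here are illustrative only.
@dataclass
class CustomerRecord:
    account: str    # identifier field
    name: str       # attribute field
    balance: float  # attribute field

rec = CustomerRecord(account="A-100", name="Example Corp", balance=125.50)
print(rec.account, rec.balance)  # the record is passed around as one unit
```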
serial device. A device that performs functions sequentially, such as a serial printer that prints one byte at a time. Contrast with parallel device.

server. (1) On a network, the computer that contains the data or provides the facilities to be accessed by other computers in the network. (2) A program that handles protocol, queuing, routing, and other tasks necessary for data transfer between devices in a computer system. (3) A workstation connected to a TCP/IP network that runs the OnDemand programs that store, retrieve, and maintain report files. OnDemand supports two types of servers: a library server and an object server.

server options file. The Tivoli Storage Manager file that specifies processing options for communication methods, tape handling, pool sizes, language, and date, time, and number formats.

shell. In UNIX environments, a software interface between a user and the operating system of a computer. Shell programs interpret commands and user interactions on devices such as keyboards and pointing devices and communicate them to the operating system.

skip-to-channel control. A line printer control appearing in line data. Allows space to be left between print lines. Compatible with page printers when the data is formatted by page definitions.

SMIT. System Management Interface Tool.

SMS. System Managed Space.

spool file. (1) A disk file containing output that has been saved for later printing. (2) Files used in the transmission of data among devices.

spooling. Simultaneous peripheral operation.

spooling subsystem. A synonym for the queuing system that pertains to its use for queuing print jobs.

stand-alone workstation. A workstation that can perform tasks without being connected to other resources such as servers or host systems.

standard input. The primary source of data going into a command. Standard input comes from the keyboard unless redirection or piping is used, in which case standard input can be from a file or the output from another command.

standard output. The primary destination of data coming from a command. Standard output goes to the display unless redirection or piping is used, in which case standard output can be to a file or another command.

status. (1) The current condition or state of a program or device. For example, the status of a printer. (2) The condition of the hardware or software, usually represented in a status code.

storage. (1) The location of saved information. (2) In contrast to memory, the saving of information on physical devices such as disk or tape.

storage device. A functional unit for storing and retrieving data.
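The standard input and standard output entries above describe how redirection and piping can reattach a command's input and output. A small Python sketch of the same idea, using in-memory streams in place of a real pipe; the filter itself is hypothetical:

```python
import io

def upper_filter(infile, outfile):
    # Reads from its "standard input" and writes to its "standard
    # output"; neither has to be the keyboard or the display.
    for line in infile:
        outfile.write(line.upper())

# Simulate piping: input comes from another object, not the keyboard.
src = io.StringIO("standard input can come from a pipe\n")
dst = io.StringIO()
upper_filter(src, dst)
print(dst.getvalue(), end="")  # prints the uppercased line
```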
system memory. Synonymous with main storage, but used in hardware to refer to semiconductor memory (modules).

system prompt. Synonym for command line. The system prompt is the symbol that appears at the command line of an operating system. The system prompt indicates that the operating system is ready for the user to enter a command.

T

table. A named collection of data consisting of rows and columns.

table reference character (TRC). (1) Usually, the second byte on a line in the user’s data. This byte contains a value (0–126) that is used to select a font to be used to print that line. (2) In the 3800 Printing Subsystem, a numeric character (0, 1, 2, or 3) corresponding to the order in which the character arrangement table names have been specified with the CHARS keyword. It is used for selection of a character arrangement table during printing.

tablespace. An abstraction of a collection of containers into which database objects are stored. A tablespace provides a level of indirection between a database and the tables stored within the database. A tablespace has space on media storage devices assigned to it and has tables created within it.

tag. (1) A type of structured field used for indexing in an AFP document. Tags associate an index attribute-value pair with a specific page or group of pages in a document. (2) In text formatting markup language, a name for a type of document element that is entered in the source document to identify it.

Tagged Image File Format (TIFF). A bit-mapped graphics format for scanned images with resolutions of up to 300 dpi. TIFF simulates gray scale shading.

TB. Terabyte.

TCP. Transmission Control Protocol.

TCP/IP. Transmission Control Protocol/Internet Protocol.

terabyte. A unit of memory or space measurement capacity equal to approximately one trillion bytes. One terabyte is equal to 1,000 gigabytes, or one million megabytes.

text. (1) A type of data consisting of a set of linguistic characters (letters, numbers, and symbols) and formatting controls. (2) In word processing, information intended for human viewing that is presented in a two-dimensional form, such as data printed on paper or displayed on a panel.

throughput. A measure of the amount of work performed by a computer system over a period of time, for example, the number of jobs per day.

TIFF. Tagged Image File Format.

Tivoli Storage Manager. An IBM software program that provides archive storage management of data stored in an OnDemand system.

token name. An eight-byte name that can be given to all data stream objects.

token-ring network. A ring network that allows unidirectional data transmission between data stations, by a token passing procedure, such that the transmitted data return to the transmitting station. (T)
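Sense (1) of table reference character above can be sketched in a few lines of Python. The font list and record layout here are assumptions for illustration, not taken from any indexing parameter described in this book:

```python
# Hypothetical line-data record: a carriage-control byte, then a TRC
# byte (value 0-126) that selects the font for the rest of the line.
def font_for_line(record: bytes, fonts: list) -> str:
    trc = record[1]        # second byte on the line holds the TRC value
    if not 0 <= trc <= 126:
        raise ValueError("TRC value must be in the range 0-126")
    return fonts[trc]      # TRC indexes the list of font names

fonts = ["FONT0", "FONT1", "FONT2"]   # order as named on a CHARS keyword
record = b" \x01TOTAL DUE   100.00"   # TRC byte here is 1
print(font_for_line(record, fonts))   # prints FONT1
```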
toolbar button. A small bitmap on the toolbar that represents a command in OnDemand client programs that support a graphical user interface. Click a toolbar button to quickly access a command.

transfer. To send data to one place and to receive data at another place.

transform. To change the form of data according to specified rules without significantly changing the meaning of the data.

Transmission Control Protocol (TCP). A communications protocol used in the Internet and in any network that follows the U.S. Department of Defense standards for inter-network protocol. TCP provides a host-to-host protocol between hosts in packet-switched communications networks and in interconnected systems of such networks. It assumes that the Internet Protocol is the underlying protocol.

Transmission Control Protocol/Internet Protocol (TCP/IP). A set of communications protocols that support peer-to-peer connectivity functions for both local and wide area networks.

U

unformatted print data. Data that is not formatted for printing. A page definition can contain controls that map unformatted print data to its output format.

UNIX operating system. An operating system developed by Bell Laboratories that features multiprogramming in a multi-user environment. The UNIX operating system was originally developed for use on minicomputers but has been adapted for mainframes and microcomputers.

upload. To transfer data from one computer to another. Typically, users upload from a small computer to a large one.

user. A person authorized to log on to an OnDemand server.

user exit. (1) A point in an IBM-supplied program at which a user-defined program might be given control. (2) A programming service provided by an IBM software product that might be requested during the execution of an application program for the service of transferring control back to the application program upon the later occurrence of a user-specified event.
V

value. (1) A set of characters or a quantity associated with a parameter or name. (2) A quantity assigned to a constant, variable, parameter, or symbol.

variable. (1) A name used to represent a data item whose value can change while the program is running. (2) In programming languages, a language object that can take different values at different times. (3) A quantity that can assume any of a given set of values.

W

window. A part of a display panel with visible boundaries in which information is presented.

workstation. A terminal or microcomputer, usually one that is connected to a mainframe or to a network, at which a user can perform applications.

write access. In computer security, permission to write to an object.

writer. A JES function that processes print output.
Related publications

The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this IBM Redbooks publication.
IBM Redbooks
For information about ordering these publications, see “How to get IBM
Redbooks” on page 636.
Content Manager OnDemand Backup, Recovery, and High Availability, SG24-6444
Image and Workflow Library: Content Manager for ImagePlus on OS/390 Implementation and EIP, SG24-4055
Implementing Content Manager OnDemand Solutions with Case Studies, SG24-7511
Implementing Web Applications with CM Information Integrator for Content and OnDemand Web Enablement Kit, SG24-6338
OS/390 Version 2 Release 6 UNIX System Services Implementation and Customization, SG24-5178
IBM System Storage DR550 Setup and Implementation, SG24-7091
Other resources
These publications are also relevant as further information sources:
IBM Content Manager OnDemand - User’s Guide, SC27-0836
IBM Content Manager OnDemand - Windows Client Customization Guide and Reference, SC27-0837
IBM Content Manager OnDemand - Messages and Codes, SC27-1379
IBM Content Manager OnDemand for Multiplatforms - Administration Guide, SC18-9237
IBM Content Manager OnDemand for Multiplatforms - Indexing Reference, SC18-9235
IBM Content Manager OnDemand for Multiplatforms - Installation and Configuration Guide, SC18-9232
IBM Content Manager OnDemand for Multiplatforms - Installation and Configuration Guide for Windows Servers, GC27-0835
IBM Content Manager OnDemand for Multiplatforms - Introduction and Planning Guide, GC18-9236
IBM Content Manager OnDemand - Web Enablement Kit Installation and Configuration Guide, SC18-9231
IBM Content Manager OnDemand - Report Distribution: Installation, Use, and Reference, SC18-9233
IBM Content Manager OnDemand for z/OS and OS/390 - Configuration Guide, GC27-1373
IBM Content Manager OnDemand for z/OS and OS/390 - Administration Guide, SC27-1374
IBM Content Manager OnDemand for z/OS and OS/390 - Indexing Reference, SC27-1375
IBM Content Manager OnDemand for z/OS and OS/390 - Web Enablement Kit Installation and Configuration Guide, SC27-1376
IBM Content Manager OnDemand for z/OS and OS/390 - OnDemand Distribution Facility Installation Guide and Reference, SC27-1377
IBM Content Manager OnDemand for z/OS and OS/390 - Introduction and Planning Guide, GC27-1438
IBM Content Manager OnDemand for iSeries - Administration Guide, SC41-5325
IBM Content Manager OnDemand for iSeries - Installation Guide, SC41-5333
IBM Content Manager OnDemand iSeries Common Server - Planning and Installation Guide, SC27-1158
IBM Content Manager OnDemand for iSeries Common Server - Administration Guide, SC27-1161
IBM Content Manager OnDemand for iSeries Common Server - Indexing Reference, SC27-1160
IBM Content Manager OnDemand for iSeries Common Server - Web Enablement Kit Installation and Configuration Guide, SC27-1163
IBM Content Manager OnDemand for Multiplatforms Release Notes for Version 7.1.0.10 (comes with the Content Manager OnDemand for Multiplatforms software)
IBM DB2 UDB for z/OS and OS/390 - Administration Guide, SC26-9931
Tivoli Storage Manager for Windows Administrator’s Guide, GC32-0782
Tivoli Storage Manager for AIX Administrator’s Guide, GC32-0768
   definition 30
   OnDemand 432
application group 5–8, 23, 25, 28, 48, 50, 69, 71, 75, 84, 432, 530, 541, 572, 574
   advanced settings 37
   application added to existing group 45
   Application ID field 442
   data table 65–66
   Date field 439
   edit/view window 578
   example 67
   field alias table 64
   Field Information tab 542
   field table 64
   Load Date field 441
   name handling 445
   permissions table 64
   Right click 546
   status field 479
   storage management 123, 125, 152, 155
   table 64, 434
   table structure 68
   window 588, 592
application ID 28–29
Application ID field in application group 442
application table 64
ArcCSXitApplGroup structure 366
ArcCSXitDocFields 367
ArcCSXitPermExit 371
ArcCSXitSecurityAction 375
ArcCSXitSecurityRC 375
architecture
   ODWEK 200
   OnDemand 162
   Web server 174
archive copy group 107
archive documents 106
archive media 9, 12, 15
   performance 169
archive storage 23, 25, 36, 116, 119, 123
Archive Storage Manager 105, 141–142
   for iSeries 141
archive storage manager 9–10, 12, 14–15, 106
archive.mac 170
archived report 11
ARCHIVEPOOL 112
ARS.CACHE 82, 88
   z/OS 99
ARS.CFG 81, 88, 98, 115, 117
ars.cfg file 559
ARS.DBFS 83
ARS.INI 79, 85, 88, 92, 98, 100
   z/OS 99
ARS_AUTOSTART_INSTANCE 90
ARS_DB_TABLESPACE 98
ARS_DB2_ARCHIVE_LOGPATH 81
ARS_DB2_DATABASE_PATH 81
ARS_DB2_PRIMARY_LOGPATH 81
ARS_NUM_DBSRVR parameter 167
ARS_OD_CFG 81, 98
ARS_TMP 84
ARSADM 386
ARSAG 64, 69
ARSAG2FOL 64
ARSAGFLD 64
ARSAGFLDALIAS 64
ARSAGPERMS 64
ARSANN 64, 68
ARSAPP 64
ARSAPPUSR 64
arscsxit.h header file 363
ArsCSXitFaxOptions 366
ArsCSXitLoadExit 369
arsdate 11992 command 67
arsdate command 67
arsdb command 64, 83–84, 86
arsdb program 73–74, 100
ARSFOL 64
ARSFOLDPERMS 65
ARSFOLFLD 65
ARSFOLFLDUSR 65
ARSGROUP 65
ARSLOAD 283, 302, 388
   AFP2PDF transform 302
   installation verification 104
   load table 69
   modification for PDF indexing 196
   modification for PDF indexing on z/OS 194
   performance tuning 160
   process 67, 75
   running parallel jobs 167
   Store OnDemand 491
   VSAM data sets 140
   with distributed object servers 163
   Xenos transforms 283
arsload command 309, 595
ARSLOAD table 65
arslog 346
Index 639
   preview exit 373
      clnt_id parameter 376
coded data 132
collection 135, 138
Composite index xxix, 571, 578
   necessary fields 581
compressed data 132
compressed storage objects 125
concept
   application 5–6
   application group 5–7
   applications 7
   database manager 12
   document 4
   document indexing 8
   folder 6–7
   management programs 15
   report 4
   report indexing 9
   request manager 11
   server 9
   storage manager 12
configuration file 79
   ARS.CACHE 88
   ARS.CACHE, z/OS 99
   ARS.CFG 81, 88, 98
   ars.cfg 115, 117
   ARS.DBFS 83
   ARS.INI 79, 85, 88, 92, 98, 100
   ARS.INI, z/OS 99
   permission 82–83
   RS.CACHE 82
Content Manager OnDemand
   support 565
control file 261, 268, 482
control information 10
conversion
   load or transform 254
   of data streams 254
   program 10
crash, OnDemand server 467
Cross Reference Table
   reference value 544
Cross Reference Table (CRT) 530, 534
CSS (cascading style sheet) 285, 298
D
DAF (Document Audit Facility) 478
data
   attributes 27
   compression 183
   data type 28
   deletion 15
   indexing 11, 13–14
   indexing and loading 11, 13
   loading 6, 11, 13–14, 69
   loading programs 11
   migration 11
   migration from cache 125
   protection with Centera 130
   segmentation 24
   type
      filter 28
      index 28
      not in database 28
data control block (DCB) 195
data conversion 253
   integrated solutions with OnDemand 255
data field, segment 29
data stream conversion 254
data table 75
   structure 66
      application group data table 68
      ARSSEG 66
data type 559
database 12
   binding 83
   control information 10
   create OnDemand database 83
   creation and relationship on z/OS 73
   file system 168
   information 24
   locking 166
   logs 164
   maximum file handles 165
   objects 9
   problems 463
   relationship when loading data 69–70
   transaction logs 166
database manager 9–12
database structure 63
database view 432
Date field in application group 439
date range search tip 498
date, internal OnDemand format 67
DB2 control center 66
DB2 instance, creation of 79
filter 75
folder 5–7, 35, 48, 50, 84, 432, 445, 484
   creation 482
   modifications after migration 423
   primary folder 38
   query 28
   search 71–72
   search sequence 71
   secondary folder 38
folder field table 65
Folder List Filter 512
folder permission table 65
folder table 64
folder user field table 65
font 190
   listing in a PDF file 189
   map file 261
   mapping 180
font mapping table
   allocation 195
   creation 195
fontlib 181
FSA (functional subsystem application) 13
Full Report Browse 41
function 48
functional subsystem application (FSA) 13
G
generator 288
Generic Indexer 491
GIF 182
graphical indexer 190
group administration 47
group table 65
H
hang, OnDemand server 467
hard disk drive 111–112, 118, 134, 167, 170
   cache 123
header banner 318
header file arscsxit.h 363
HFS (hierarchical file system) 93, 95–96, 134
hierarchical file system (HFS) 93, 95–96, 134
hit list 37, 431, 511
   order 37
   single-selection 511
HP-UX 82, 84
HTTP server 173
HTTP Web server 200–201
I
I/O contention 168
   performance problem 168
IBM 3995 Optical jukebox 169
IBM AFP2WEB Services Offerings 256
IBM System Storage Archive Manager 106, 126
IBM WebSphere Application Server 173, 229
image 132
image data 182
image map file 261–262, 269
index 2, 75
   exit program 405
   fields 2
   record exit 341
Index field
   Key number 547
index field 530
index value 530, 542, 595
indexer parameter 33
indexing 6, 432
   composed AFP files 279
   data 8, 11, 13–14
   exits 338
   issue 452
   issues with PDF 188
   methods 8
      document indexing 8
      report indexing 9
   PDF 185
   PDF on z/OS 194
   problem diagnosis 461
Indexing parameter 13–14, 160
indexing program 6, 10, 13–14
   ACIF 10, 13
   OnDemand Generic Indexer 10
   OnDemand PDF Indexer 10
   OS/400 Indexer 10, 13
indxexit parameter 341
input record exit 339
   apka2e 339
   asciinp 340
   asciinpe 340
install library 95
invoice status 485
iSeries
   multiple instances 86
Message of the day 519
Migrate Data from Cache 156
migration 84
   analysis and planning 404
   iSeries Common Server 399
   migration tool 410
   modifications to folders after migration 423
   policy 142, 149, 152
   tool 400
   with the tool 420
   without the tool 410
mount retention 170
mountable file system 93
mountretention setting in Tivoli Storage Manager 170
Multiplatforms, OnDemand 16
multiple instances 77
   adding instance to ARS.INI 99
   connecting 85
   creation of tables 100
   creation on z/OS 103
   definition on iSeries 86
   definition on UNIX 78
   definition on Windows NT 86
   definition on z/OS 97
   overview on z/OS 92
   port number 80, 85, 88
   starting new instance 102
   z/OS 91
MVS 93, 95–96
MVS download server 86
MVS Dynamic Exit Facility 351
N
Named Query report 321
named query table 65
negative numbers in decimal fields 517
network performance 168
node table 65
note 37
Note icon 37
Note Search 36
O
OAM (object access method) 12, 139
OAM Storage Management Component 133
OAM Thread Isolation Support 133
OAMDASD 134
OAMOPTIC 134
OAMTAPE 134
object access method (OAM) 12, 139
   API functions 133
   collection 135
   components 133
   Library Control System (LCS) 133
   OAM Storage Management Component (OSMC) 133
   Object Storage and Retrieval (OSR) 133
   for z/OS 132
object expiration 139
object maximum size 139
object naming conventions 138
object owner model 49, 51
   administrator roles 52
   implementation 52
object server 3, 9–10, 106, 162
   functions 9
object size 125, 155
object storage 133
object storage and retrieval 133
object type model 49
   administrator roles 50
   implementation 49
OD Delete application 494
OD File System Monitor application 495
OD Store application 494
OD Update application 494
ODWEK
   AFP2HTML 257
   AFP2PDF 265
   APIs 202
   architecture 200
   cache 170
      performance 173
      sizing process 171
   caching documents 173
   configuration 170, 293
   configuration of Xenos transforms 293
   conversion of AFP to XML 285
   debug 174
   debugging 174
   logging 174
   problem diagnosis 466
   servlet deployment 229
ODWEK (OnDemand Web Enablement Kit) 159, 200, 283, 459
OLTP (online transaction processing) 165
output record exit 343
owner password, master password 275
P
parser 288
partitioned data set (PDS) 195
PDD (Production Data Distribution) 507
PDF 10, 13, 255
   architecture 178
   data 177
   data indexing 188
   definition 186
   document access 448
   document indexing 450
   file listing fonts 189
   file size, loading 188
   graphical indexer 192
   indexing 185
      on z/OS 194
   indexing issues 188
PDF (Portable Document Format) 177, 186
PDF Indexer 601
PDF indexer 190
   limitations 198
   maximum file size 178
   testing in z/OS 496
PDS (partitioned data set) 195
performance 24, 31, 36, 109, 132, 135, 208
   data compression 183
   data loading 160
   database 164
   debugging ODWEK 174
   enhancements 163
   I/O contention 168
   image data 182
   issues based on data type 177
   line data 179
   memory 165
   mount retention 170
   network 168
   ODWEK cache 170, 173
   ODWEK logging 174
   ODWEK with multiple user ID access 175
   ODWEK with single user ID access 176
   OnDemand 159
   open database files 165
   operating system 168
   organize file systems on UNIX 167
   PDF data 177
   physical media and library issues 169
   problem diagnosis 466
   query optimization 165
   running large load jobs 167
   running parallel load jobs 167
   servlets versus CGI 173
   storage management 169
   system logging 164
   text search 40
   transaction data 179
   troubleshooting reference 161
   tuning when is necessary 160
performance configuration 109
permission 104
   password 275
   UNIX file permission bits 94
   UNIX file permissions 94
   updating for various users 481
permission exit 338, 369
   activation 373
physical media
   issues 169
policy domain 107, 111
port number 80, 85, 88, 99, 102
Portable Document Format (PDF) 177, 186
PostScript language 186
primary allocation 75
primary folder 38
primary logs file system 168
primary storage group 151
primary storage node 137
print exit for Multiplatforms 356
Print Processor
   entry 540
   Table 541
Print Query Table (PQT) 541
Print Services Facility (PSF) 10
   for OS/390 13
printer options table 65
printer table 65
printer user table 65
problem diagnosis categories 452
Production Data Distribution (PDD) 507
PSF (Print Services Facility) 10
Q
query optimization 165
   custom 202
shortcut 2
Single Coded Character Set ID 87
single load 25
single-selection hit list 511
smitty 79
SMS
   management class 134
   storage class 134
   storage group 134
   terminology 133–134
solution design 429
   bad example 436
   best practices 439
   case study 436
   good example 436
   three-step approach 432
solution, winning design 430
SPACEMGPOOL 112
splash screen, display time customization 516
Spool File Archive environment 402
SPUFI 66
SQL query 548, 553
SQL report 321
SRV_OD_CFG 99
SRV_SM_CFG 99
SRVR_FLAGS_SECURITY_EXIT 89
SRVR_INSTANCE 80, 99
SRVR_INSTANCE_OWNER 80, 99
SRVR_SM_CFG 82
Start Disk Storage Management (STRDSMOND) command 12, 154
status field 479
   in application group 479
storage 4, 432
   archive media 9, 12
   cache 15, 82
   disk 4, 106
   long term 122
   optical 4, 9, 12, 106
   tape 4, 9, 12, 106
Storage Archive Manager 126
storage class 134
storage classes 134
   OAMDASD 134
   OAMOPTIC 134
   OAMTAP 134
storage devices 3
storage group 134
storage level 149
storage management 25, 105, 118
   advanced application group 155
   application group 123, 125, 152
   expiration 139
   external cache exit 338, 394
      activation 395
   object size 125
   performance 169
storage manager 11–12
   archive storage manager 9–10, 12, 14–15, 106
   cache storage manager 9–12, 106
storage node 119, 121
storage node name 121
storage policy 107
storage pool 111–112
storage pool hierarchy 113
storage pools and volumes 108
storage set 23, 26, 48, 119, 121, 153, 572, 574
   cache only 122
   definition 136
   load data 122
   load type 120
   logon 122
   storage node 121
storage set definition 23
storage set table 65
Store OnDemand 490
   what it does 492
   why it is needed 491
STRDSMOND (Start Disk Storage Management) command 12, 154
STRMONOND 87, 156
STRTCPSVR 90
Sun Solaris 82, 84
symbolic link 96
SYSOUT 388
system administration 47, 346
   decentralized 49
system administrator 47–50, 52
system control tables 64
   ARSAG 64, 69
   ARSAG2FOL 64
   ARSAGFLD 64
   ARSAGFLDALIAS 64
   ARSAGPERMS 64
   ARSANN 64, 68
   ARSAPP 64
   ARSAPPUSR 64
   arsxml 60
   ODWEK and Xenos 301
   performance 161
TT font 180
Type 1 font 186
U
Unicode format of index parameter 33
UNIX
   file permissions 94
   multiple instances 78
UNIX System Services 92–93
   file system 74
UPDATE API 216
user access 6, 47, 175
   ODWEK 175–176
user administration 47
user administrator 48–50, 52
user exit 12, 83, 338
   ACIF 338
   customized functions 362
   debugging 344
   fax options exit 338
   header file arscsxit.h 363
   indexing exits 338
   load exit 338
   permissions exit 338
   print exit for Multiplatforms 356
   security exit 338, 375
   security exit on z/OS 378
   storage management external cache 394
   storage management external cache exit 338
   system log 12
   system log exit for Multiplatforms 346
   tablespace creation exit 338, 395
user group ID table 65
user ID 175
user logical views table 64
User Output Table (UOT) 541
user password 274
user permissions 35, 104
user table 65
user type 48
users in group table 65
V
version support 443
view of the database 432
Virtual Storage Access Method (VSAM) 12, 140
   data sets 141
   storing data to data sets 140
VSAM (Virtual Storage Access Method) 12, 140
W
WAN (wide area network) 163
Web Enablement Kit 309
Web interface 569
Web server architecture 174
WebSphere Application Server 173, 229
WebSphere Information Integrator Content Edition, federated search 19
wide area network (WAN) 163
Windows Client 597
Windows NT multiple instances 86
winning solution design 430
Work with Active Jobs (WRKACTJOB) command 90
workload balance 174
WORM (Write Once Read Many) 126
Write Once Read Many (WORM) 126
WRKACTJOB (Work with Active Jobs) command 90
X
Xenos
   and OnDemand 282
   d2e from Xenos 253
   document manipulation script 290
   job parameter file 288
   parameter file 288
Xenos transforms 284
   available dynamic transforms via ODWEK 283
   available transforms at load time 282
   ODWEK configuration 293
Xerces2 Java Parser 54
Xerox metacode 255
XML Batch Administration program 53
XML Batch Administration program (XML batch program) 53
XML document viewing 296
XSL (Extensible Stylesheet Language) 285
Z
z/OS
   APKACIF exit 388
Back cover

Content Manager OnDemand Guide

Administration, database structure, and multiple instances
Storage management, PDF indexing, and data conversion
Exits, iSeries CS migration, best practices, and many more

This IBM Redbooks publication covers a variety of topics relating to the practical application of Content Manager OnDemand (simply referred to as “OnDemand”) for Multiplatforms Version 8.3 (also known as Version 7.1.2.5), z/OS Version 7.1, and IBM eServer iSeries Common Server Version 5.3 of the OnDemand product. Where necessary, separate sections are included to cover variations between the different platforms.

This IBM Redbooks publication provides helpful, practical advice, hints, and tips for those involved in the design, installation, configuration, system administration, and tuning of an OnDemand system. It covers key areas that are either not well known to the OnDemand community or are misunderstood.

We reviewed all aspects of the OnDemand topics. Among these topics, which we present in this IBM Redbooks publication, are administration, database structure, multiple instances, storage management, performance, PDF indexing, OnDemand Web Enablement Kit, data conversion, report distribution, exits, and iSeries Common Server migration.

Because a number of other sources are available that address various subjects on different platforms, this IBM Redbooks publication is not intended as a comprehensive guide for OnDemand. We step beyond the existing OnDemand documentation to provide insight into the issues that might be encountered in the setup and use of OnDemand.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, customers, and partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.