Tivoli Workload Scheduler Guide
Vasfi Gucer
Rick Jones
Natalia Jojic
Dave Patrick
Alan Bain
ibm.com/redbooks
International Technical Support Organization
October 2003
SG24-6628-00
Note: Before using this information and the product it supports, read the information in
“Notices” on page ix.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
The team that wrote this redbook. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Chapter 3. Installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.2.4 Creating private keys and certificates . . . . . . . . . . . . . . . . . . . . . . . 188
5.2.5 Setting SSL local options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
5.2.6 Configuring SSL attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
5.2.7 Trying out your SSL configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 201
5.3 Centralized user security definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
5.3.1 Configuring centralized security . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
5.3.2 Configuring the JSC to work across a firewall. . . . . . . . . . . . . . . . . 206
8.6.4 Integrated solution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
8.6.5 Hot start. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
9.5.1 Remote terminal sessions and JSC . . . . . . . . . . . . . . . . . . . . . . . . 307
9.5.2 Applying the latest fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
9.5.3 Resource requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
9.5.4 Setting the refresh rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
9.5.5 Setting the buffer size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
9.5.6 Iconize the JSC windows to force the garbage collector to work . . 309
9.5.7 Number of open editors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
9.5.8 Number of open windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
9.5.9 Applying filters and propagating to JSC users . . . . . . . . . . . . . . . . 309
9.5.10 Java tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
9.5.11 Startup script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
9.6 IBM Tivoli Workload Scheduler internals. . . . . . . . . . . . . . . . . . . . . . . . . 315
9.6.1 IBM Tivoli Workload Scheduler directory structure . . . . . . . . . . . . . 315
9.6.2 IBM Tivoli Workload Scheduler process tree . . . . . . . . . . . . . . . . . 317
9.6.3 Interprocess communication and link initialization . . . . . . . . . . . . . 318
9.6.4 IBM Tivoli Workload Scheduler Connector . . . . . . . . . . . . . . . . . . . 320
9.6.5 Retrieval of FTA joblog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
9.6.6 Netman services and their functions . . . . . . . . . . . . . . . . . . . . . . . . 325
9.7 Regular maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
9.7.1 Cleaning up IBM Tivoli Workload Scheduler directories . . . . . . . . . 326
9.7.2 Backup considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
9.7.3 Rebuilding IBM Tivoli Workload Scheduler databases . . . . . . . . . . 338
9.7.4 Creating IBM Tivoli Workload Scheduler Database objects . . . . . . 339
9.8 Basic fault finding and troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . 339
9.8.1 FTAs not linking to the Master Domain Manager . . . . . . . . . . . . . . 340
9.8.2 Batchman not up or will not stay up (batchman down) . . . . . . . . . . 342
9.8.3 Jobs not running . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
9.8.4 Jnextday is hung or still in EXEC state . . . . . . . . . . . . . . . . . . . . . . 344
9.8.5 Jnextday in ABEND state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
9.8.6 FTA still not linked after Jnextday . . . . . . . . . . . . . . . . . . . . . . . . . . 345
9.8.7 Troubleshooting tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
9.9 Finding answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
System requirements for downloading the Web material . . . . . . . . . . . . . 388
How to use the Web material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such provisions
are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES
THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy,
modify, and distribute these sample programs in any form without payment to IBM for the purposes of
developing, using, marketing, or distributing application programs conforming to IBM's application
programming interfaces.
AlarmPoint is a trademark of Invoq Systems, Inc. in the United States, other countries, or both.
Intel and Intel Inside (logos) are trademarks of Intel Corporation in the United States, other countries, or
both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun
Microsystems, Inc. in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, and service names may be trademarks or service marks of others.
Preface
IBM Tivoli Workload Scheduler, Version 8.2 is IBM’s strategic scheduling product
that runs on many different platforms, including the mainframe. This IBM
Redbook covers the new features of Tivoli Workload Scheduler 8.2, focusing
specifically on the Tivoli Workload Scheduler 8.2 Distributed product. In addition
to new features and real-life scenarios, you will find a whole chapter on best
practices (mostly version independent) with lots of tips for fine tuning your
scheduling environment. For this reason, even if you are using a back-level
version of Tivoli Workload Scheduler, you will benefit from this redbook.
Rick Jones is currently an L2 Senior Software Engineer for IBM Tivoli Workload
Scheduler in IBM UK. He has worked for IBM for five years supporting IBM Tivoli
Workload Scheduler. He has been using, administering and supporting various
Natalia Jojic is a Certified Tivoli Consultant and instructor based in London, UK.
She is currently working for Elyzium Limited, a UK-based IBM Premier Partner.
She is primarily engaged in client-based consulting on delivering Tivoli
Enterprise Management solutions, design and development of Tivoli integration
modules with other, non-Tivoli products (such as AlarmPoint). Natalia has a
Bachelor of Engineering degree in Computer Systems Engineering from City
University, London, UK.
Alan Bain is an IT Specialist in IBM Tivoli Services in Stoke Poges (UK). Over
the past four years, Alan has also worked as a Technical Training Consultant with
IBM Tivoli Education and a Pre-Sales Systems Engineer with the IBM Tivoli
Advanced Technology Group. He has extensive customer experience and in his
current services role works on many short-term and long-term IBM Tivoli
Workload Scheduler and IBM Tivoli Storage Manager engagements all over the
UK, including a TNG-to-ITWS conversion project completed in six months.
Geoff Pusey
IBM UK
Fabio Barillari, Lucio Bortolotti, Maria Pia Cagnetta, Antonio Di Cocco, Riccardo
Colella, Pietro Iannucci, Antonello Izzi, Valeria Perticara
IBM Italy
Peter May
Inform & Enlighten Ltd
The team would like to express special thanks to Warren Gill from IBM USA and
Michael A Lowry from IBM Sweden.
Your efforts will help increase product acceptance and customer satisfaction. As
a bonus, you'll develop a network of contacts in IBM development labs, and
increase your productivity and marketability.
Find out more about the residency program, browse the residency index, and
apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
Important: The wizard and command line are available only for Tier 1
platforms. Refer to the IBM Tivoli Workload Scheduler Release Notes for
information about supported platforms.
IBM Tivoli Data Warehouse enables you to access the underlying data about
your network devices and connections, desktops, applications and software, and
the activities in managing your infrastructure. Having all this information in a data
warehouse enables you to look at your IT costs, performance, and other trends of
specific interest across your enterprise.
IBM Tivoli Data Warehouse provides the infrastructure for the following:
Schema generation of the central data warehouse
Extract, transform, and load (ETL) processes through the IBM DB2® Data
Warehouse Center tool
Report interfaces. It is also flexible and extensible enough to allow you to
integrate application data of your own.
Note: Both IBM Tivoli Data Warehouse and one copy of IBM DB2 software (to
be used with IBM Tivoli Data Warehouse) are shipped with each Tivoli
application at no charge. This is also true for IBM Tivoli Workload Scheduler
Version 8.2. You do not need to buy it separately.
1.2.1 How does the IBM Tivoli Data Warehouse integration work?
The IBM Tivoli Workload Scheduler Production Plan is physically mapped into a
binary file (named Symphony) that contains the scheduling activities to be
performed in the next 24 hours. When a new Production Plan is created, a new
Symphony file is created, all the uncompleted activities from the previous Plan
are carried forward into the new Symphony file, and the old Symphony file Plan is
archived in the TWShome/schedlog directory.
The archiver process extracts the scheduling history data from the archived
Symphony files and dumps it into flat files, while the import process uploads the
data from those flat files into DB2 tables. Because the Tivoli Workload Scheduler
Master Domain Manager and the Tivoli Data Warehouse control server usually
reside on two different machines, a DB2 client must be installed on the Tivoli
Workload Scheduler Master Domain Manager so that the import process can
upload data to the central data warehouse database.
The Perl script that runs the archiver process and import command is called
tws_launch_archive.pl.
[Figure: the archiver process extracts job, schedule (scheds), and cpu records from the archived Symphony files and writes them to flat files.]
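As an illustration only, the script could be run as a normal scheduled job on the
Master; the workstation, job name, path, and logon below are assumptions, not
the definitions shipped with the product:
MASTER#DWH_ARCHIVE
 SCRIPTNAME "/usr/local/tws/tws_launch_archive.pl"
 STREAMLOGON tws
 DESCRIPTION "extract archived Symphony data and load it into DB2"
 RECOVERY STOP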
Note: IBM Tivoli Data Warehouse V1.2 (new version of IBM Tivoli Data
Warehouse, which is expected to be available in 1Q 2004) is planned to be
shipped with Crystal Reports. It will replace the current reporting interface.
The integration with IBM Tivoli Workload Scheduler provides several predefined
reports. The following lists these predefined reports:
Jobs with the highest number of unsuccessful runs
Workstations with the highest number of unsuccessful runs
Run states statistics for all jobs
Jobs with the highest average duration time
Workstations with the highest CPU utilization
Run times statistics for all jobs
Tip: You are not limited to these reports. Since the data is in a DB2 database,
you can create your own reports as well.
Figure 1-3 shows the jobs with the highest number of unsuccessful runs.
The tracing system is completely transparent and does not have any impact on
file system performance because it is fully handled in memory. It is automatically
started by the StartUp command, so no further action is required.
On UNIX, the StartUp command is usually installed in the /etc/rc file, so that
netman is started each time a computer is rebooted. StartUp can be used to
restart netman if it is stopped for any reason.
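As a rough illustration of such an entry (the TWShome path and user name are
assumptions, and the exact file and syntax differ by platform):
# start the Tivoli Workload Scheduler network processes at boot (illustrative)
if [ -x /usr/local/tws/maestro/StartUp ]; then
        su - tws -c "/usr/local/tws/maestro/StartUp"
fi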
In case of problems, you are asked to create a trace snap file by issuing some
simple commands. The trace snap file is then inspected by the Tivoli support
team, which uses the logged information as an efficient problem determination
base. The Autotrace feature, already available with Version 8.1, has now been
extended to run on additional platforms. Configuration options are available in the
TWShome/trace directory.
Example 1-1 shows the directories related to the Autotrace. Note that tracing
configurations and options can be defined at the Master level. Customizable
logging options can be configured for clearer reporting and quicker problem
resolution.
# a new stanza; most take an argument to restrict their effect:
# default
# product $id # 32-bit value or a product name
# process $name # process name as invoked
# channel $number # 1..255, inclusive
For more details on the Autotrace facility refer to Tivoli Workload Scheduler
Version 8.2, Error Message and Troubleshooting, SC32-1275.
The job return code is now saved in the Plan and is visible from the Job
Scheduling Console and from conman. If a job is a recovery job, the jobinfo utility
can return the information about the original return code, such as:
jobinfo RSTRT_RETCOD
The conman showjobs command has been enhanced to retrieve the return code
of a given job. For example:
conman “sj <jobselect>; keys retcode”
If you want to use the new return code functionality, you have to add the
RCCONDSUCC keyword in the job definition (you can also define the return
code from the JSC in the New Job Definition window).
For example:
RCCONDSUCC “RC = 2 OR (RC >= 6 AND RC <= 10)”
This expression means that if the job’s return code is equal to 2 or is between 6
and 10, the job’s execution is considered successful; in any other case it is
considered in error.
The default behavior (if you do not specify a return code condition) is as follows:
if the return code is equal to 0, the job is considered successful (SUCC);
otherwise, it is considered abended (ABEND).
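For illustration, a complete job definition using this keyword might look like the
following sketch (the workstation, job name, script path, and logon are
assumptions; only the RCCONDSUCC expression is taken from the example above):
FTA1#NIGHTLY_EXTRACT
 SCRIPTNAME "/usr/local/scripts/extract.sh"
 STREAMLOGON tws
 DESCRIPTION "nightly extract; RC 2 and RC 6-10 are acceptable"
 RCCONDSUCC "RC = 2 OR (RC >= 6 AND RC <= 10)"
 RECOVERY STOP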
Example 1-3 Abbreviated local options file
# SSL Attributes
#
nm SSL port =31113
SSL key =/usr/local/tws/maestro/ssl/MASTERkey.pem
SSL certificate =/usr/local/tws/maestro/ssl/MASTERcert.pem
SSL key pwd =/usr/local/tws/maestro/ssl/MASTERpwd.sth
SSL CA certificate =/usr/local/tws/maestro/ssl/cacert.pem
#SSL certificate chain =/usr/local/tws/maestro/ssl/TWSCertificateChain.crt
SSL random seed =/usr/local/tws/maestro/ssl/TWS.rnd
SSL Encryption Cipher =SSLv3
SSL auth mode =cpu
#SSL auth string =tws
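The key, certificate, and CA files referenced in this example are created as part
of the SSL setup described in 5.2.4, “Creating private keys and certificates” on
page 188. As a rough OpenSSL sketch only (the file names match the example
above; all other parameters, and the use of a self-signed CA, are illustrative
assumptions):
# create a self-signed certificate authority
openssl req -new -x509 -days 365 -keyout cakey.pem -out cacert.pem
# create a private key and a certificate request for the Master
openssl req -new -newkey rsa:1024 -nodes -keyout MASTERkey.pem -out MASTERreq.pem
# sign the request with the CA to produce the Master certificate
openssl x509 -req -days 365 -in MASTERreq.pem -CA cacert.pem -CAkey cakey.pem \
  -CAcreateserial -out MASTERcert.pem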
Also in this version, event status is monitored at Symphony file generation time
for key jobs. The SYMEVNTS parameter in BmEvents.conf allows you to get a
picture of the key jobs that are in ready, hold, and exec status. This information is
reported to the IBM Tivoli Enterprise Console by the IBM Tivoli Workload
Scheduler Plus module.
Also, for all the workstations that have the behind firewall attribute set to ON, the
commands to start or stop the workstation or to get the standard list are
transmitted through the domain hierarchy instead of opening a direct connection
between the Master (or Domain Manager) and the workstation.
For more information on this, refer to 5.1, “Working across firewalls” on page 172.
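A workstation definition with this attribute enabled might look like the following
sketch (the workstation name, node, and domain are illustrative, and the keyword
placement assumes the standard composer workstation syntax):
CPUNAME FTA_DMZ
 DESCRIPTION "agent reached through the domain hierarchy"
 OS UNIX
 NODE fta-dmz.example.com
 TCPADDR 31111
 DOMAIN MASTERDM
 FOR MAESTRO
  TYPE FTA
  AUTOLINK ON
  BEHINDFIREWALL ON
  FULLSTATUS OFF
END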
Setting this global option on also triggers a security mechanism that uses an
encrypted, randomly generated Security file checksum and a Symphony file run
number to identify and trust the Tivoli Workload Scheduler network
corresponding to that Master.
The main goal of the centralized security model is to take away from the root (or
administrator) user of an FTA the means of deleting the Security file and of
re-creating it with the maximum authorization, thus gaining the capability to issue
Tivoli Workload Scheduler commands that affect other machines within the
network. In Example 1-4, centralized security is enabled in the globalopts file
(shown in bold). Consequently, only the Master Domain Manager administrator
has the ability to modify local Fault Tolerant Agent Security files. The globalopts
file is located in the TWShome/mozart directory.
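As a sketch only (assuming the option is named centralized security; the other
values shown are illustrative and will differ in your environment), the relevant
entry looks like this:
# TWShome/mozart/globalopts (abbreviated, illustrative values)
company =ITSO
master  =MASTER
# enable the centralized security model
centralized security =yes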
Tip: If you prefer to use the traditional security model, you can still do so by
not activating the global variable.
SSL uses digital certificates to authenticate the identity of a workstation. The IBM
Tivoli Workload Scheduler administrator must plan how authentication will be
used within the network:
Use one certificate for the entire Tivoli Workload Scheduler network.
A new GUI installation process is also available to simplify the IBM Tivoli Workload
Scheduler for Applications 8.2 installation process.
To create and set up an Extended Agent workstation, you should go through the
following steps. This is common to all access methods.
Perform all the post-installation steps required by each method.
Create one or more Extended Agent workstation definitions (a sample definition follows this list).
Create the options file for those Extended Agent workstation definitions.
Install one or more access methods through:
– InstallShield Multiplatform (ISMP) wizard installer
– IBM Tivoli Configuration Manager 4.2
– TAR-based installation for Tier 2 platforms
A template for running a silent installation is also available with this product
and is located in the response_file directory. Refer to the IBM Tivoli Workload
Scheduler Release Notes, SC32-1277 for further information.
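A sample Extended Agent workstation definition might look like the following
sketch (the workstation name, hosting FTA, and node are illustrative; r3batch is
used here as an example of an access method name):
CPUNAME SAP_XA
 DESCRIPTION "SAP R/3 extended agent hosted by FTA1"
 OS OTHER
 NODE fta1.example.com
 FOR MAESTRO HOST FTA1 ACCESS "r3batch"
  TYPE X-AGENT
END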
For more information, refer to IBM Tivoli Workload Scheduler for Applications
User Guide, SC32-1278.
The Job Scheduling Console now has an easier to understand interface, with a
Tivoli-compliant look and feel, making it even easier to navigate and update
database entries.
In addition to these views, there is also a Task Assistant, which is the help feature
of the JSC.
The first level is the IBM Tivoli Workload Scheduler object type, while the second
level is the FTA workstation name in which the object will be created. The
second-level selection can be dependent on the IBM Tivoli Workload Scheduler
engine (or connector) that you currently select in the Engine view.
This Action list view is often referred to as the portfolio or the Action Tree view,
which lists the actions that are available to you.
The Actions list pane can be displayed or hidden (toggled on/off) by selecting
View -> Show -> Portfolio from the menu bar as shown in Figure 2-3 on
page 20.
[Figure: the JSC tree view, showing the name of the instance, the Default Database Lists, a User Defined Group containing a User Defined list, and the Common Default Plan Lists.]
There are two types of views that are provided with the JSC: the database view
and the plan view.
The database view shows the objects in the IBM Tivoli Workload Scheduler
database.
The plan view shows the instances of objects in the IBM Tivoli Workload
Scheduler plan.
The Common Default Plan lists will connect to all available connectors and
produce the necessary output.
If you select the tree root in the left pane, all job stream instances are displayed
in table form in the right pane. In both panes, a pop-up menu is available for
managing job stream instances.
If you select a job stream instance in the left pane, it expands to display the All
Jobs folder and the Dependencies folder. To view all jobs contained in the job
stream instance, select the All Jobs folder. In both the left and right pane, a
pop-up menu is available for managing jobs.
To view all the dependencies contained in the job stream instance, select the
Dependencies folder. The contents of the folder are displayed in table form in
the right pane.
When the list is refreshed, it is re-created based on the data returned by the
engine. Any job stream instances with their related jobs and dependencies that
were expanded are collapsed to be updated. You have to navigate to the job
stream instance that interests you and expand the jobs and dependencies folder
again.
2.2.2 Hyperbolic Viewer
The Hyperbolic Viewer displays an integrated graphical view of job streams and
job instances with their related jobs, job streams and other dependencies, as
shown in Figure 2-8. This view can be very useful for displaying complicated
queries that involve many job stream instances. You can access the Hyperbolic
Viewer by selecting View -> Hyperbolic Viewer from the JSC menu. It is only
used for the Scheduled Jobs and Job Streams Plan Lists.
A query facility enables you to query jobs and job streams for the following
information:
– Jobs or job streams whose start times have been reached, but have not
yet started
– Jobs or job streams whose deadlines have been reached, but have not yet
started
– Jobs or job streams whose deadlines have been reached, but have not yet
completed running
– Start jobs or job streams whose deadlines have been reached but have
not yet started
Figure 2-10 on page 28 shows how to input the latest start and termination
deadlines.
The ability to use the return code of a job to define the job as successful or
abended allows more flexibility in controlling the job execution flow, depending
on the result of the job execution.
For more information on these JSC enhancements and other JSC features, refer
to Tivoli Workload Scheduler Job Scheduling Console User’s Guide, SH19-4552.
You can also find some suggestions about optimizing JSC performance in 9.5,
“Optimizing Job Scheduling Console performance” on page 307.
Chapter 3. Installation
This chapter provides step-by-step installation instructions for Tivoli Workload
Scheduler 8.2 and the Job Scheduling Console (JSC), including the setup of the
Tivoli Management Framework. A number of common scenarios are provided,
including:
“Installing a Master Domain Manager on UNIX” on page 35
“Adding a new feature” on page 56
“Promoting an agent” on page 78
“Upgrading to Version 8.2 from a previous release” on page 92
“Installing the Job Scheduling Console” on page 109
“Installing using the twsinst script” on page 130
“Silent install using ISMP” on page 132
“Troubleshooting installation problems” on page 135
“Uninstalling Tivoli Workload Scheduler 8.2” on page 137
3.1.1 CD layout
The following CDs are required to start the installation process:
Tivoli Workload Scheduler 8.2 Installation Disk 1
This CD-ROM includes install images for most of the Tier 1 platforms, the Tivoli
Management Framework Job Scheduling Services (TMF JSS), and the IBM Tivoli
Workload Scheduler Connector. Table 3-1 provides a complete list of the
contents.
3.2 Installation roadmap
The basic flow of the InstallShield MultiPlatform installation process is illustrated
diagrammatically in Figure 3-1.
[Figure 3-1: ISMP installation flow: Language, Discovery, and Feature panels branch into the fresh install, add features, and promote/migration paths; these lead to the CPU Definition, License, and optional feature (Connector, Tivoli Plus Module) panels, a check of whether the Tivoli Framework is already installed (if not, a Framework panel), and a final Summary panel.]
We recommend that you create a separate file system to protect the root file
system and also to prevent other applications from inadvertently filling up the file
system IBM Tivoli Workload Scheduler is installed in.
A file system size of 500 MB should be enough for IBM Tivoli Workload
Scheduler Domain Managers and the Master, including the Tivoli Framework, but
exact space requirements will vary considerably from one installation to another
depending on the number and types of jobs run plus the amount of time logs are
retained. Note that without the Tivoli Framework installed, a file system of
250-300 MB should be adequate.
1. Log in as root and create a TWSuser and group. We used the user tws and
the group tivoli.
v. Now that the TWSuser has been set up correctly, you can close the
telnet session by logging out, and return to root.
2. Mount the appropriate CD-ROM (for AIX this is Tivoli Workload Scheduler 8.2
Installation Disk 1) as follows:
# mount -r -V cdrfs /dev/cd0 /cdrom
Note: This mount command is AIX specific. See Table 3-5 on page 149 for
equivalent commands for other platforms.
3. Create a temporary directory for the setup program to copy some images to
the local file system to allow unmounting of CDs during the installation.
Although any existing temporary directory will do, the setup program does not
clean up after itself, and using a dedicated directory greatly simplifies the
cleanup process:
# mkdir /usr/local/tmp/TWS
4. Change directory to the top-level directory of the CD-ROM and launch the
installation:
# cd /cdrom
Note: If the mount point /cdrom does not exist, create the directory /cdrom
(mkdir /cdrom) or substitute all references to /cdrom with an alternate
mount point that does exist.
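As a sketch only, assuming the installer is the SETUP.bin program in the root of
the CD-ROM and that it accepts the -is:tempdir option used elsewhere in this
book, the launch would look like this:
# ./SETUP.bin -is:tempdir /usr/local/tmp/TWS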
Note: There may be some delay while the install images are copied to the
local file system before any output is generated. This is particularly true
when a slow CD-ROM drive is used.
Figure 3-2 Language selection
6. The welcome window lists the actions available. Click Next to continue with
the installation, as shown in Figure 3-3.
7. Having read the terms and conditions, select I accept the terms in the
license agreement, then click Next to continue as shown in Figure 3-4 on
page 39.
Figure 3-4 Software License Agreement
Figure 3-5 Installation operation
9. Specify the TWSuser name created in step b on page 36, then click Next to
continue, as shown in Figure 3-6 on page 41.
Figure 3-6 Specifying the TWSuser
10.Tivoli Workload Scheduler 8.2 is installed into the TWSuser’s home directory.
Review the path, then click Next to continue as shown in Figure 3-7 on
page 42.
Figure 3-7 Destination directory
11.Select the Custom install option and click Next as shown in Figure 3-8 on
page 43.
Figure 3-8 Type of installation
12.Select Master Domain Manager and click Next as shown in Figure 3-9 on
page 44.
Figure 3-9 Type of agent to install
13.Type in the following information and click Next as shown in Figure 3-10 on
page 45:
a. The company name as you would like it to appear in program headers and
reports.
Note: Spaces are permitted, provided that the name is not enclosed in
double quotation marks.
c. The TCP port number used by the instance being installed. It must be a
value in the range 1-65535. The default is 31111.
Figure 3-10 Workstation configuration information
Figure 3-11 Optional features
15.Type the name that identifies the instance in the Job Scheduling Console
window. The name must be unique within the scheduler network. We used the
convention hostname_TWSuser, as shown in Figure 3-12 on page 47.
Figure 3-12 Connector Instance Name
Figure 3-13 Additional languages
17.Specify the directory where you would like the Tivoli Management Framework
installed and click Next to continue, as shown in Figure 3-14 on page 49.
Figure 3-14 Framework destination directory
18.Review the installation settings and click Next as shown in Figure 3-15 on
page 50.
Figure 3-15 Installation settings
19.A progress bar indicates that the installation has started, as shown in
Figure 3-16 on page 51.
Figure 3-16 Progress bar
20.On the Locate the Installation Image window, you are prompted for the
location of the Tivoli Management Framework images. You will first need to
change out of the CD-ROM root before you can unmount the Tivoli Workload
Scheduler 8.2 Installation CD-ROM:
Tip: The output from the setup program may have obscured your
command-line prompt, press Enter to get a prompt back, in order to enter
the change directory and umount commands.
# cd /
# umount /cdrom
21.Replace the Tivoli Workload Scheduler 8.2 Installation CD-ROM with the
Tivoli Framework 4.1 Installation CD-ROM and mount the CD-ROM:
# mount -r -V cdrfs /dev/cd0 /cdrom
22.Once the CD-ROM is mounted, select /cdrom in the Locate Installation
Image window and click OK, as shown in Figure 3-17 on page 52.
Figure 3-17 Locate the Tivoli Framework installation image
23.The progress bar will indicate that the installation is continuing. Once the
Tivoli Management Framework installation has completed, a Tivoli Desktop
will launch, as shown in Figure 3-18 on page 53.
Figure 3-18 Tivoli Desktop
24.A pop-up window prompting for the Tivoli Workload Scheduler 8.2 Engine CD
should appear shortly after the Tivoli Desktop, as shown in Figure 3-19 on
page 54. Before acknowledging the prompt, you will need to unmount the
Tivoli Framework CD:
# umount /cdrom
Replace the Tivoli Framework CD with the Tivoli Workload Scheduler 8.2
Installation Disk 1 CD, then mount the CD and change directory to the root
directory of the CD:
# mount -r -V cdrfs /dev/cd0 /cdrom
# cd /cdrom
25.Once the installation is complete you will get a final summary window. Click
Finish to exit the setup program, as shown in Figure 3-19.
26.You have now finished with the Tivoli Workload Scheduler 8.2 Installation
CD-ROM and can unmount it:
# cd /
# umount /cdrom
Tip: The output from the setup program may have obscured your
command-line prompt. After exiting the setup program, press Enter to get a
prompt back.
27.This would be a good time to clean up the temporary disk space used by the
setup program:
# cd /usr/local/tmp
# rm -rf TWS
28.Finally there are a few more steps to complete the setup and start Tivoli
Workload Scheduler 8.2:
a. Log in as the TWSuser.
b. Run the composer command to add the FINAL schedule definition to the
database by running the following command:
$ composer add Sfinal
Tip: You need to be in the TWShome directory and have the TWSuser’s
environment correctly configured for this command to be successful.
c. Run the Jnextday script to generate the first production plan:
$ Jnextday
Tip: This is the only time that the Jnextday script should be run in this way.
After this initial run, Tivoli Workload Scheduler 8.2 will schedule the Jnextday
job to run daily. If it is necessary to run Jnextday before its scheduled time, for
example while testing in a development environment, release all the
dependencies on the FINAL schedule using conman or the Job Scheduling
Console.
d. Give Tivoli Workload Scheduler 8.2 a few minutes to start up, then check
the status by running the command:
$ conman status
If Tivoli Workload Scheduler 8.2 started correctly, you will see the status:
Batchman LIVES
e. After installation, the default job limit is set to zero. In order for jobs with a
priority lower than GO (101) to run, this limit needs to be raised:
$ conman “limit;10”
Before performing an upgrade, be sure that all Tivoli Workload Scheduler 8.2
processes and services are stopped. If you have any jobs that are running
currently, they should be allowed to complete or you should stop the related
processes manually.
1. From the Job Scheduling Console, stop the target workstation. Otherwise
from the command line on the MDM while logged on as the TWSuser, use the
following command:
$ conman “stop workstation”
2. From the Job Scheduling Console, unlink the target workstation. From the
command line on the MDM, use the following command:
$ conman “unlink workstation”
3. Log on to the target workstation as root (UNIX/Linux), or the local
Administrator (Windows).
4. From the command line (DOS prompt on Windows), stop the netman process
as follows:
– On UNIX:
$ su - TWSuser -c “conman shut\;wait”
– On Windows:
C:\> cd \win32app\maestro
C:\win32app\maestro> .\Shutdown.cmd
Tip: If you are adding a feature to an installation that includes the Connector,
be sure that you stop the connector processes also.
5. To verify whether there are processes still running, complete the following
steps:
– On UNIX, run the command:
$ ps -u TWSuser
– On Windows, run the command:
C:\win32app\maestro> unsupported\listproc.exe
Verify that the following processes are not running:
netman, mailman, batchman, writer, jobman, JOBMAN (UNIX only),
stageman, JOBMON (Windows only), tokensrv (Windows only).
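On UNIX, a quick way to confirm that none of these processes are left over is to
filter the ps output for their names; a minimal sketch, assuming the TWSuser is
tws:
$ ps -u tws | egrep "netman|mailman|batchman|writer|jobman|stageman"
If the command returns no lines, no scheduler processes remain.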
6. Insert the Tivoli Workload Scheduler 8.2 Installation CD-ROM (CD 1 for UNIX
and Windows, CD 2 for Linux).
7. Run the setup program for the operating system on which you are upgrading:
– On UNIX/Linux, while logged on as root, mount the CD-ROM and change
directory to the root directory of the CD-ROM:
Note: The following mount command is AIX specific. See Table 3-5 on
page 149 for equivalent commands for other platforms.
Tip: If you run the SETUP.bin in the root of the Tivoli Workload Scheduler 8.2
CD-ROM, the files necessary to install the Tivoli Workload Scheduler 8.2
engine are copied to the local hard disk and the installation launched from the
hard disk. Since the setup program does not remove the files that it copies to
the hard disk, creating a temporary directory specifically for the setup program
to use will simplify the cleanup of these files following the add feature.
If you are upgrading the Tivoli Workload Scheduler 8.2 engine only, this is
unnecessary and the SETUP.bin in the appropriate platform directory can be
launched directly from the CD-ROM, thereby reducing the amount of
temporary disk space required.
9. The welcome window lists the actions available. Click Next to continue the
upgrade, as shown in Figure 3-59 on page 95.
Figure 3-23 Welcome window
10.Having read the terms and conditions, select I accept the terms in the
license agreement, then click Next to continue as shown in Figure 3-24 on
page 60.
Figure 3-24 Software License Agreement
Figure 3-25 Discovery window
Figure 3-26 Add a feature to the selected instance
Figure 3-27 User window
Figure 3-28 Location window
Figure 3-29 CPU definition window
Figure 3-30 Optional features window
17.Type the name that identifies the instance in the Job Scheduling Console
window, then click Next to continue. The name must be unique within the
scheduler network. We used the convention hostname_TWSuser as shown in
Figure 3-31 on page 67.
Figure 3-31 Connector Instance Name
Figure 3-32 Additional languages window
Note: The remaining fields are optional and apply only to Windows. Unless
you intend to deploy Tivoli Management Framework programs or Managed
Nodes in your Tivoli Management Framework environment, leave them
empty.
Figure 3-33 Tivoli Management Framework installation window
20.Review the installation settings and click Next to start adding the feature as
shown in Figure 3-34 on page 70.
Figure 3-34 Summary window
21.A progress bar indicates that the installation has started as shown in
Figure 3-35 on page 71.
Figure 3-35 Progress bar
22.On the Locate the Installation Image window, you will be prompted for the
location of the Tivoli Management Framework CD-ROM.
On UNIX, unmount the Tivoli Workload Scheduler 8.2 CD-ROM:
# cd /
# umount /cdrom
Note: On UNIX you will need to change directory out of the Tivoli Workload
Scheduler Installation CD-ROM before you will be able to unmount it. If you
find that the command-line prompt has been obscured by output from the
setup program, just press Enter to get a prompt back.
23.Replace the Tivoli Workload Scheduler 8.2 Installation CD-ROM with the
Tivoli Management Framework CD-ROM.
24.On UNIX, mount the Tivoli Management Framework Installation CD-ROM:
# mount -r -V cdrfs /dev/cd0 /cdrom
25.Select the root directory of the CD-ROM on the Locate Installation Image
window and click OK as shown in Figure 3-36 on page 72.
Figure 3-36 Locate the Installation Image window
26.The progress bar will indicate that the installation is continuing. On UNIX,
once the Tivoli Management Framework installation has completed, a Tivoli
Desktop will launch as shown in Figure 3-18 on page 53.
27.A pop-up window prompting for the Tivoli Workload Scheduler 8.2 Installation
CD-ROM, as shown in Figure 3-37 should appear shortly after the Tivoli
Management Framework installation completes.
28.Replace the Tivoli Management Framework Installation CD-ROM with the
Tivoli Workload Scheduler 8.2 Installation CD-ROM you removed previously.
29.On UNIX, mount the Tivoli Workload Scheduler 8.2 Installation CD-ROM:
# mount -r -V cdrfs /dev/cd0 /cdrom
Note: The following mount command is AIX specific, see Table 3-5 on
page 149 for equivalent commands for other platforms.
32.After the Windows system reboots, log back in as the same local
Administrator, and the add feature will continue by re-prompting for the
installation language as shown in Figure 3-39. Select the required language
and click OK to continue.
33.A progress bar indicates that the add feature has resumed, as shown in
Figure 3-40.
34.A pop-up window prompting for the locations of the TMF_JSS.IND file may
appear. Be sure that the Tivoli Workload Scheduler 8.2 Installation CD 1 is
inserted, then select the TWS_CONN directory and click OK, as shown in
Figure 3-41.
35.A progress bar will indicate that the add feature has resumed as shown in
Figure 3-42 on page 76.
Figure 3-42 Progress bar
36.Once the add feature completes you will get a final summary window. Click
Finish to exit the setup program, as shown in Figure 3-43 on page 77.
Figure 3-43 Add feature complete window
42.From the Job Scheduling Console, start the target workstation. From the
command line on the MDM, use the following command:
$ conman “start workstation”
For example, in order to promote an existing Fault Tolerant Agent with the Tivoli
Management Framework and Connector already installed you would use the
following steps:
1. From the Job Scheduling Console, stop the target workstation. Otherwise
from the command line on the MDM while logged on as the TWSuser, use the
following command:
$ conman “stop workstation”
2. From the Job Scheduling Console, unlink the target workstation. From the
command line on the MDM, use the following command:
$ conman “unlink workstation”
3. Log on to the target workstation as root (UNIX), or the local Administrator
(Windows).
4. From the command line (command prompt on Windows), stop the netman
process as follows:
– On UNIX:
$ su - TWSuser -c “conman shut\;wait”
– On Windows:
C:\> cd TWShome
C:\win32app\maestro> .\Shutdown.cmd
5. Stop the connector processes as follows:
– On UNIX:
$ su - TWSuser -c “TWShome/bin/wmaeutil.sh ALL -stop”
– On Windows:
C:\win32app\maestro> wmaeutil.cmd ALL -stop
6. To verify whether there are processes still running, complete the following
steps:
– On UNIX, run the command:
$ ps -u TWSuser
– On Windows, run the command:
C:\win32app\maestro> unsupported\listproc.exe
Verify that the following processes are not running:
netman, mailman, batchman, writer, jobman, JOBMAN (UNIX only),
stageman, JOBMON (Windows only), tokensrv (Windows only),
maestro_engine, maestro_plan, maestro_database.
7. Insert the Tivoli Workload Scheduler 8.2 Installation Disk (CD 1 for UNIX and
Windows, CD 2 for Linux).
8. Run the setup program for the operating system on which you are upgrading:
– On UNIX/Linux, while logged on as root, mount the CD-ROM and change
directory to the appropriate platform directory:
Note: The following mount command is AIX specific. See Table 3-5 on
page 149 for equivalent commands for other platforms.
Tip: If you run the SETUP.bin in the root of the Tivoli Workload Scheduler 8.2
CD-ROM the files necessary to install the Tivoli Workload Scheduler 8.2
engine are copied to the local hard disk and the installation launched from the
hard disk. If you are upgrading the Tivoli Workload Scheduler 8.2 engine only,
this is unnecessary and the SETUP.bin in the appropriate platform directory
can be launched directly from the CD-ROM, thereby reducing the amount of
temporary disk space required.
Figure 3-44 Run SETUP.exe
10.The welcome window lists the actions available. Click Next to continue the
upgrade, as shown in Figure 3-59 on page 95.
Figure 3-46 Welcome window
11.Having read the terms and conditions, select I accept the terms in the
license agreement, then click Next to continue as shown in Figure 3-47 on
page 82.
Figure 3-47 Software License Agreement
Figure 3-48 Discovery window
Figure 3-49 Promote the selected instance
Figure 3-50 User window
Figure 3-51 Location window
Figure 3-52 Type of agent window
17.Confirm the workstation name required for the new Master Domain Manager,
then click Next to continue, as shown in Figure 3-53 on page 88.
Figure 3-53 CPU definition window
18.Review the installation settings and click Next to start promoting the agent, as
shown in Figure 3-54 on page 89.
Figure 3-54 Summary window
19.A progress bar indicates that the installation has started, as shown in
Figure 3-55 on page 90.
Figure 3-55 Progress bar
20.Once the installation completes you will get a final summary window. Click
Finish to exit the setup program, as shown in Figure 3-56 on page 91.
Figure 3-56 Installation complete window
26.From the Job Scheduling Console, start the target workstation. From the
command line on the MDM, use the following command:
$ conman “start workstation”
The upgrade procedure on Tier 1 platforms backs up the entire Tivoli Workload
Scheduler Version 7.0 or 8.1 installation to the TWShome_backup_TWSuser
directory.
Tip: The backup files are moved to the same file system where you originally
installed the previous release. A check is made to ensure that there is enough
space on the file system. Otherwise, the upgrade will not proceed.
If you do not have the required disk space to perform the upgrade, back up the
mozart database and all your customized configuration files, and install a new
instance of Tivoli Workload Scheduler 8.2, then transfer the saved files to the
new installation.
Some configuration files such as localopts, globalopts, etc. are preserved by the
upgrade, whereas others such as jobmanrc (jobmanrc.cmd on Windows) are not.
Should you have locally customized files that are not preserved by the upgrade,
then the customized files can be located in the TWShome_backup_TWSuser
directory and merged with the Tivoli Workload Scheduler 8.2 files.
Follow these steps:
1. From the Job Scheduling Console, stop the target workstation. Otherwise
from the command line on the MDM while logged on as the TWSuser, use the
following command:
$ conman “stop workstation”
2. From the Job Scheduling Console, unlink the target workstation. From the
command line on the MDM, use the following command:
$ conman “unlink workstation”
3. Log on to the target workstation as root (UNIX) or the local Administrator
(Windows).
4. From the command line (DOS prompt on Windows), stop the netman process
as follows:
– On UNIX:
$ su - TWSuser -c “conman shut\;wait”
– On Windows:
C:\> cd \win32app\maestro
C:\win32app\maestro> .\Shutdown.cmd
Tip: If you are upgrading an installation that includes the Connector, be sure
that you also stop the connector processes.
5. To verify whether there are processes still running, complete the following
steps:
– On UNIX, run the command:
$ ps -u TWSuser
– On Windows, run the command:
C:\win32app\maestro> unsupported\listproc.exe
Verify that the following processes are not running:
netman, mailman, batchman, writer, jobman, JOBMAN (UNIX only),
stageman, JOBMON (Windows only), tokensrv (Windows only)
Also, be sure that no system programs are accessing the TWShome directory
or anything below it, including the command prompt and Windows Explorer. If
any of these files are in use, the backup of the existing instance will fail.
Note: The setup program will not detect the components file if it has been
relocated from the /usr/unison directory using the
UNISON_COMPONENT_FILE environment variable. In order for the setup
program to successfully discover the existing instance, the relocated
components file will need to be copied into /usr/unison before proceeding
with the upgrade.
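As a sketch only (the source path is illustrative), copying the relocated file back
into place before the upgrade would look like this:
# cp /path/to/relocated/components /usr/unison/components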
6. Insert the Tivoli Workload Scheduler 8.2 Installation CD-ROM (CD 1 for UNIX
and Windows, CD 2 for Linux).
7. Run the setup program for the operating system on which you are upgrading:
– On UNIX/Linux, while logged on as root, mount the CD-ROM, change the
directory to the appropriate platform directory, and run the setup program:
Note: The following mount command is AIX specific. See Table 3-5 on
page 149 for equivalent commands for other platforms.
Tip: If you run the SETUP.bin in the root of the Tivoli Workload Scheduler 8.2
CD-ROM, the files necessary to install the Tivoli Workload Scheduler 8.2
engine are copied to the local hard disk and the installation launched from the
hard disk. If you are upgrading the Tivoli Workload Scheduler 8.2 engine only,
this is unnecessary and the SETUP.bin in the appropriate platform directory
can be launched directly from the CD-ROM reducing the amount of temporary
disk space required.
8. Select the installation wizard language and click OK to continue, as shown in
Figure 3-58.
9. The welcome window lists the actions available. Click Next to continue the
upgrade, as shown in Figure 3-59.
10.Having read the terms and conditions, select I accept the terms in the
license agreement, then click Next to continue as shown in Figure 3-60.
Figure 3-60 Software License Agreement
Figure 3-61 Discovery window
12.The Upgrade the selected instance radio button is selected by default. Click
Next to continue as shown in Figure 3-62 on page 98.
Figure 3-62 Upgrade the selected instance
13.The upgrade actions window gives an overview of the upgrade process. Click
Next to continue, as shown in Figure 3-63 on page 99.
Figure 3-63 Upgrade overview
14.Review the TWSuser information and on Windows enter and confirm the
TWSuser’s password, then click Next to continue, as shown in Figure 3-64 on
page 100.
Figure 3-64 User window
Figure 3-65 Location window
16.Select the type of agent being upgraded, then click Next to continue, as
shown in Figure 3-66 on page 102.
Note: The type of agent selected must match the type of agent being
upgraded.
Figure 3-67 CPU definition window
18.If the Tivoli Management Framework is found, but it is the wrong version, it
has no IBM Tivoli Workload Scheduler Connector or it is otherwise
incomplete, you will see a warning window as shown in Figure 3-68 on
page 104. Click Next to skip the IBM Tivoli Workload Scheduler Connector
upgrade at this time.
19.Review the installation settings and click Next to start the upgrade, as shown
in Figure 3-69 on page 105.
Tip: These installation settings are read from the TWShome/localopts file. If
they are incorrect, it is possible to click Back, then edit the localopts file and
once the settings are correct return to this window by clicking Next. When
doing this operation on Windows, take care not to leave the editor or command
prompt running.
Figure 3-69 Summary window
20.A progress bar indicates that the installation has started as shown in
Figure 3-70 on page 106.
21.Once the installation is complete, you will get a final summary window. Click
Finish to exit the setup program, as shown in Figure 3-71 on page 107.
Figure 3-71 Upgrade completed successfully
25.On UNIX, unmount the CD-ROM:
# cd /
# umount /cdrom
26.Remove the Tivoli Workload Scheduler 8.2 Installation Disk.
You can install the Job Scheduling Console using any of the following installation
mechanisms:
Using an installation wizard that guides the user through the installation steps.
Using a response file that provides input to the installation program without
user intervention.
Using Software Distribution to distribute the Job Scheduling Console files.
Here we will give an example of the first of these methods, using the installation
wizard interactively. The installation program can perform a number of actions:
Fresh install
Adding new languages to an existing installation
Repairing an existing installation
However, the steps below assume that you are performing a fresh install:
1. Insert the Job Scheduling Console CD 1 in the CD-ROM drive.
Note: The following mount command is AIX specific. See Table 3-5 on
page 149 for equivalent commands for other platforms.
4. The welcome window lists the actions available. Click Next to continue as
shown in Figure 3-75 on page 111.
Figure 3-75 Welcome window
5. Having read the terms and conditions, select I accept the terms in the
license agreement, then click Next to continue as shown in Figure 3-76 on
page 112.
6. Select the required installation directory, then click Next to continue as shown
in Figure 3-77 on page 113.
Figure 3-77 Location window
8. Select the required locations for the program icons, then click Next to
continue as shown in Figure 3-79 on page 115.
Note: The options available will vary depending upon the target platform.
Figure 3-79 Icon location window (Windows variant)
9. Review the installation settings and click Next to start the installation, as
shown in Figure 3-80 on page 116.
10.A progress bar indicates that the installation has started as shown in
Figure 3-81 on page 117.
Figure 3-81 Progress bar
11.Once the installation completes you get a final summary window. Click Finish
to exit the setup program, as shown in Figure 3-82 on page 118.
AIX: AIXconsole.sh
HP-UX: HPconsole.sh
Linux: LINUXconsole.sh
Windows: NTconsole.cmd
A Job Scheduling Console logon window will be displayed. Enter the login name
associated with the Tivoli Management Framework Administrator configured within
the IBM Tivoli Workload Scheduler Security file (by default this will be the
TWSuser), the user’s password, and the host name of the machine running the
IBM Tivoli Workload Scheduler Connector, and click OK, as shown in
Figure 3-83.
The Job Scheduling Console main window is displayed as shown in Figure 3-84
on page 120.
Download the fix pack README file plus the fix pack image for each required
platform.
Having downloaded the necessary files, you should spend some time reviewing
the fix pack README file. The README will give you an overview of the defects
fixed, known limitations and dependencies, plus installation instructions. This file
is found in “README file for JSC Fix Pack 01” on page 350.
Having ensured that the Job Scheduling Console is not running and you have
taken the necessary backups, use the following steps to apply the fix pack:
1. Extract the fix pack image (.tar file on UNIX/Linux or .ZIP file on Windows) into
a temporary directory.
2. Run the setup program extracted from the archive in the previous step:
– On UNIX, run the following command as root:
# setup.bin [-is:tempdir temporary_directory]
– On Windows, launch the SETUP.exe file as shown in Figure 3-85.
3. The welcome window lists the actions available. Click Next to continue with
the discovery phase, as shown in Figure 3-86 on page 122.
4. During the discovery phase, the setup program will search for the existing
JSC 1.3 instance and display the results as shown in Figure 3-87 on
page 123. Having confirmed that the discovered path is correct, click Next to
continue.
Figure 3-87 Discovery window
5. The first time you apply the fix pack the only option available is Apply - fix
pack nn. Select this option, then click Next to continue as shown in
Figure 3-88 on page 124.
6. Review the installation settings and click Next to start the fix pack
application, as shown in Figure 3-89 on page 125.
Figure 3-89 Summary window
7. A progress bar indicates that the fix pack application has started as shown in
Figure 3-90 on page 126.
8. Once the fix pack application completes, you get a final summary window.
Click Finish to exit the setup program as shown in Figure 3-91 on page 127.
Figure 3-91 Installation complete window
9. At this stage the JSC fix pack is installed in a non-permanent mode with a
backup of the previous version stored on your workstation. Once you have
tested the JSC and are happy that it is working correctly you can make the fix
pack application permanent and free up the disk space occupied by the
previous version by committing the fix pack. To commit the fix pack, follow the
remaining steps in this section.
Note: Should you have problems with the fix pack version of the JSC and
need to revert to the existing version, then follow the steps below, but select
the Rollback action instead of the Commit action.
14.Review the installation settings and click Next to commit the fix pack, as
shown in Figure 3-93 on page 129.
Figure 3-93 Summary window
Note: The TWShome directory must exist before attempting to run
twsinst. Therefore, if the directory was not created automatically when the
TWSuser was created, manually create the directory before proceeding.
Customize the TWSuser's .profile to set up the user's environment correctly.
See step b on page 36 for further details.
4. Insert the Tivoli Workload Scheduler 8.2 Installation CD 1.
5. Mount the CD-ROM.
6. Change directory to the appropriate platform directory below the root
directory on the CD-ROM.
7. Run the twsinst program:
# ./twsinst -new -uname tws82 -cputype bkm_agent -thiscpu BACKUP -master
MASTER -port 31182 -company IBM
The resulting output from this command can be seen in Example 3-1.
The silent installation is performed when the wizard is run with the -options
command line switch.
Note: The following mount command is AIX specific. See Table 3-5 on
page 149 for equivalent commands for other platforms.
4. Edit the copy of the freshInstall.txt file made in the last step, and customize
the values of the required keywords (search for the lines starting with -W).
Note: The sample response files are UNIX format files that do not contain
carriage return characters at the end of each line. This means that on
Windows you need to edit these files using a UNIX file format aware text
editor such as WordPad, found by selecting Start -> Programs ->
Accessories -> WordPad. Notepad and other incompatible editors will display
the entire contents of the file as a single line of text.
Tip: If there was a previous instance of Tivoli Workload Scheduler 8.2 installed
in the same location and the TWShome directory had not been cleaned up as
recommended in 3.12.3, "Tidying up the TWShome directory" on page 142, the
existing localopts file will take precedence over the settings defined in the
response file.
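As a rough sketch, once the response file has been customized, the silent installation on UNIX might be launched with a command along the following lines (the setup program name and the response file location are assumptions for illustration):

# ./SETUP.bin -options /tmp/freshInstall.txt -silent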
We would highly recommend installing Perl5 onto all machines running Tivoli
Workload Scheduler 8.2 where it is not already installed. Perl makes an excellent
scripting language for writing generic scripts that will run on both UNIX and
Windows platforms, and has many benefits over .bat command files and Visual
Basic scripts.
print "Hello, world!\n";
exit 0;
These log files are created in the following system temporary directories:
On UNIX in $TMPDIR if defined otherwise in /tmp.
On Windows in %TMPDIR%.
One possible problem is that the default temporary directory does not contain
enough free disk space to run the setup program correctly, in which case an
alternate temporary directory can be specified using the option:
-is:tempdir temporary_directory
For example, on UNIX you might run the setup program specifying both of the
above options as follows:
# setup.bin -is:tempdir /var/tmp/TWS -is:log /var/tmp/ismp.log
Tip: You need to be consistent with the installation method you choose. For
example if you chose to use the ISMP-based installation for installing IBM
Tivoli Workload Scheduler, later when you need to install the fix packs, you
should also use ISMP. Using any other method in this case, such as twsinst
or IBM Tivoli Configuration Manager, might produce unpredictable results.
Note: The Tivoli Workload Scheduler 8.2 uninstall program does not remove
the IBM Tivoli Workload Scheduler Connector, IBM Tivoli Workload Scheduler
Plus Module, or Tivoli Management Framework.
On UNIX
To launch the uninstaller on UNIX/Linux, use the following steps:
1. Log in as root.
2. Change directory to the _uninst directory below TWShome:
# cd TWShome/_uninst
3. Run the uninstall program:
# ./uninstaller.bin
On Windows
Launch the uninstaller on Windows from the Add/Remove Program window
launch, using the following steps:
1. Log on as a user with Local Administrator rights.
2. Launch the Add/Remove Program using Start -> Settings -> Control Panel
-> Add/Remove Programs.
2. The welcome window lists the actions available. Click Next to continue with
the uninstall as shown in Figure 3-97.
3. Review the uninstall settings, then click Next if you wish to continue with the
uninstall, as shown in Figure 3-98 on page 140.
Figure 3-99 Information window
5. Once the uninstall is complete, you will get a final summary window. Click
Finish to exit the setup program, as shown in Figure 3-100 on page 142.
On UNIX
From the command line, change directory to the directory above the TWShome
directory, then recursively delete the TWShome directory:
# cd TWShome/..
# rm -rf TWShome
On Windows
Remove the TWShome directory and any subdirectories using Windows Explorer
or a similar tool.
3.12.4 Uninstalling JSS and IBM Tivoli Workload Scheduler
Connector
To uninstall the Tivoli Job Scheduling Services (JSS) and IBM Tivoli Workload
Scheduler Connector, use the following procedure:
1. Log in as root (UNIX) or the local Administrator (Windows).
2. From the command line (command prompt on Windows), stop the Connectors
as follows:
– On UNIX:
# su - TWSuser -c "`maestro`/bin/wmaeutil.sh ALL -stop"
– On Windows:
C:\> TWShome\bin\wmaeutil.cmd ALL -stop
3. Be sure that the Tivoli Management Framework environment is configured:
– On UNIX:
# . /etc/Tivoli/setup_env.sh
– On Windows:
C:\>%windir%\System32\drivers\etc\Tivoli\setup_env.cmd
Note: For Windows, you need also to start the bash environment with the bash
command.
5. First uninstall the IBM Tivoli Workload Scheduler Connector using the
following command:
# wuninst TWSConnector node -rmfiles
Where node is the host name of the box where the IBM Tivoli Workload
Scheduler Connector is installed, as known by the Tivoli Management
Framework. If you are unsure of the correct node name to use, run the
following command and check for the name in the hostname(s) column:
# odadmin odlist
The wuninst command will prompt for confirmation before proceeding with the
uninstall, as shown in Example 3-4.
Removing resource from Policy Regions
Deleting Class Object gently
Unregister from TNR
Removing TWSConnector files...
-->Removing Files...
---->Removing /usr/local/Tivoli/bin/solaris2/Maestro
---->Removing cli programs
Removing TWSConnector from ProductLocations...
Removing TWSConnector ProductInfo...
---->Removing TWSConnector from Installation object
---->Checking for wuninst ended on Managed Nodes
Uninstall of TWSConnector complete.
------Standard Error Output------
############################################################################
Cleaning up...
wuninst complete.
Please run wchkdb -u
6. After uninstalling the IBM Tivoli Workload Scheduler Connector, uninstall the
Job Scheduling Services using the command:
# wuninst TMF_JSS node -rmfiles
The output from this command is shown in Example 3-5.
Cleaning up...
wuninst complete.
Please run wchkdb -u
3.13 Troubleshooting uninstall problems
To determine the installation method used, and identify the appropriate uninstall
method, work through the methods in Table 3-4 from the top, stopping at the first
match in the How to determine column.
Note: It is important that you use the uninstallation program for the method
that you had used for installing the product. For example if you had used the
ISMP installation to install IBM Tivoli Workload Scheduler, you should not use
another method such as twsinst to uninstall the product. This might cause
unpredictable results.
(The vpd.properties entries for Tivoli Workload Scheduler 8.2 are long, pipe-delimited records identifying Tivoli Workload Scheduler itself, the Tivoli Workload Scheduler 8.2 Lap Component, and the Tivoli Workload Scheduler Engine 8.2 component for Windows, all registered with the installation location c:\win32app\maestro.)
Note: Tivoli Workload Scheduler 8.2 is not the only application that writes into
the vpd.properties file. Other applications such as WebSphere® also use this
file.
(Table fragment: the command listed for both HP-UX and Linux is useradd.)
Tivoli Workload Scheduler 8.2 introduces the ability to use a simple logical
expression to define the specific return codes that should be considered
successful. The return code condition expression is saved within the Plan and is
visible from the Job Scheduling Console and conman.
If no specific return code condition is defined for a job, the default action is as in
previous versions: a return code of 0 is considered successful and anything else
unsuccessful.
A new keyword RCCONDSUCC has been added to the job statement, which is
used to specify the return code condition as shown in Example 4-1. In this
example, the job DBSELOAD will be considered successful if it completes with
any of the return codes 0, 5, 6, 7, 8, 9 or 10.
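Example 4-1 itself is not reproduced in this excerpt. As a minimal sketch of what such a job definition looks like (the script path, logon user, and description are assumptions for illustration; the RCCONDSUCC expression matches the condition discussed in this chapter):

MASTER#DBSELOAD
 SCRIPTNAME "/opt/tws/scripts/dbseload.sh"
 STREAMLOGON tws82
 DESCRIPTION "Loads the database"
 RCCONDSUCC "(RC = 0) OR ((RC > 4) AND (RC < 11))"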
Using the Job Scheduling Console, the return code condition is defined in the
Return Code Mapping field on the Task tab within the Job Definition window, as
shown in Figure 4-1 on page 153.
Figure 4-1 Job definition window
Comparison expression
A comparison expression has the syntax:
[(] RC operator operand [)]
Where the comparison operator is one of those listed in Table 4-1 on page 154
and the operand is an integer between -2147483647 and 2147483647.
= Equal to RC = 0
!= Not equal to RC != 5
Boolean expression
Specifies a logical combination of comparison expressions. The syntax is:
comparison_expression operator comparison_expression
Note that the expression is evaluated from left to right. Parentheses can be used
to assign priority to the expression evaluation.
Tip: Be aware that it is possible to define a return code condition that will
never evaluate to true, such as (RC = 0 AND RC = 1) which will result in the
job always being unsuccessful (ABEND).
For example, with a return code condition of "(RC = 0) OR ((RC > 4) AND
(RC < 11))", the maximum JCL length is reduced from 4096 to 4050.
Note: If the JCL field contains an IBM Tivoli Workload Scheduler parameter or
operating system environment variable, the return code condition will not be
saved by either composer or the JSC. This is a known problem (APAR
IY46920) and will be fixed in a future fix pack.
Within the Job Scheduling Console, the return code a job completed with can be
found in the Return Code column of the All Scheduled Jobs window. Figure 4-2
shows the job DBSELOAD with a return code of 5 and a status of Successful.
Note: In this example the default column order has been changed for clarity.
4.5 Conman enhancement
The conman showjobs function has been enhanced to retrieve the return code
information for a given job. The default showjobs output has been adjusted to
include an additional column for the return code. An example of this can be seen
in Example 4-2 on page 155.
Also, a new argument retcod, when used in conjunction with the keys argument,
will give the return code for a specified job, as shown in Example 4-3.
The retcod feature may not at first appear overly useful, but when integrated into
a script, it can become quite powerful.
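For instance, a minimal sketch of the command form described above (the workstation and job stream names are assumptions for illustration):

$ conman "sj MASTER#SCHED1.JOB_21;keys;retcod"

The numeric return code printed by this command is what the branch job in the following example evaluates.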
Figure 4-3 Job stream flow diagram (the branch job JOB_23 leads to either JOB_24 or JOB_25)
We then took this flow diagram and created a job stream with the necessary
follows dependencies to achieve this job flow. The resulting job stream can be
seen in Example 4-4.
Otherwise we want to run job JOB_24. This we achieved by having the branch
job, JOB_23, check the return codes that JOB_21 and JOB_22 completed with
and then cancel the job that is not required to run, as shown in Figure 4-4.
The branch job JOB_23 runs the script branch.sh, which uses the conman
showjobs retcod command to obtain the return code for each of JOB_21 and
JOB 22, then uses a conman canceljob command to cancel the job not required.
The complete script can be seen in Example 4-5.
# return codes
: ${OK=0} ${FAIL=1}
# if both jobs 21 and 22 exit with a return code less than 3 run job 25
# else run job 24
if [ ${JOB_21_RC} -lt 3 ] && [ ${JOB_22_RC} -lt 3 ]
then
echo "INFO: cancelling job JOB_24"
conman "cj ${SCHED}.JOB_24;noask"
else
echo "INFO: cancelling job JOB_25"
conman "cj ${SCHED}.JOB_25;noask"
fi
# all done
exit ${OK}
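The excerpt above refers to the variables SCHED, JOB_21_RC and JOB_22_RC, which are set earlier in branch.sh and are not reproduced here. A rough sketch of how they might be obtained with the conman showjobs retcod command, assuming the job stream is called SCHED1 and that the banner lines printed by conman are filtered off with tail:

SCHED=SCHED1
# keep only the last line of the conman output, which holds the return code
JOB_21_RC=`conman "sj ${SCHED}.JOB_21;keys;retcod" | tail -1`
JOB_22_RC=`conman "sj ${SCHED}.JOB_22;keys;retcod" | tail -1`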
For a complete description of conman and all the available options, see Chapter 4,
“Conman Reference” in the Tivoli Workload Scheduler Version 8.2, Reference
Guide, SC32-1274.
This utility has been extended in Tivoli Workload Scheduler 8.2 to include the
rstrt_retcode option to enable a recovery job to determine the return code of
the parent job, and is run as follows:
$ jobinfo rstrt_retcode
4.6.1 Jobinfo example
A parent job is known to exit with a return code in the range 0 ..10, whereby 0, 5,
6, 7, 8, 9 and 10 are deemed to be successful, and 1, 2, 3 and 4 are deemed to
be unsuccessful.
The parent job is then defined with the following return code condition:
(RC = 0) OR ((RC > 4) AND (RC < 11))
And a recovery job as shown in Example 4-6. Note that the job is defined with the
recovery action RERUN. This enables the recovery job to take some corrective
action, and then the parent job will attempt to run again.
In the event of the parent job running unsuccessfully, we want the recovery job to
take the actions described in Table 4-3 depending upon the parent job’s return
code.
The recovery job runs a script (recovery.sh), which uses the jobinfo utility to
obtain information about itself first, then the return code the parent job completed
with.
In the first case, where the recovery job is obtaining information about itself, we
set the variable RSTRT_FLAG with the value returned by the jobinfo rstrt_flag
option:
RSTRT_FLAG=`jobinfo rstrt_flag 2>/dev/null`
Note: We used a modified jobmanrc, which set the PATH variable within the
jobs environment to include the TWShome/bin directory. If you do not do
something similar, it will be necessary to call jobinfo using an explicit path, for
example:
RSTRT_FLAG=`/usr/local/tws/maestro/bin/jobinfo rstrt_flag 2>/dev/null`
This will set the variable to the value YES if this script is running as a recovery
job, or NO otherwise. We later test this variable and abend the job should it not
be running as a recovery job.
Note: The version of jobinfo included in IBM Tivoli Workload Scheduler 8.2
Fix Pack 01 returns the wrong result when the rstrt_flag option is used
within a recovery job. We worked around this problem by replacing the Fix
Pack 01 version of jobinfo with the version originally included with IBM Tivoli
Workload Scheduler 8.2.
In the second case, we use the jobinfo rstrt_retcode option to set the variable
RSTRT_RETCODE to the parent job’s return code:
RSTRT_RETCODE=`jobinfo rstrt_retcode 2>/dev/null`
We then use a case statement based on the value of RSTRT_RETCODE to define the
appropriate actions. You will note that there is no option 1 within the case
statement. Option 1 will match the default action at the bottom of the case, as will
any unexpected values, which is to have the recovery script abend by returning
an unsuccessful exit status as shown in Figure 4-5 on page 163.
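The case statement itself is not reproduced in this excerpt; as a sketch of the structure described here (the corrective actions are reduced to comments, and the variable names follow the excerpts above):

case ${RSTRT_RETCODE} in
 2) # rerun the parent job without taking any other action
    rc=${OK}
    ;;
 3) # submit the fix script via conman sbd, wait, then allow the rerun
    rc=${OK}
    ;;
 4) # change the DBVLEVEL parameter, then allow the rerun
    parms -c DBVLEVEL 9
    rc=${OK}
    ;;
 *) # option 1 and any unexpected value: abend the recovery job
    rc=${FAIL}
    ;;
esac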
Figure 4-5 Job DBSELOAD completes with return code 1
In the case of option 2, where we want to rerun the parent job without taking any
other action, the recovery job simply needs to exit with a successful exit status as
shown in Figure 4-6 on page 164.
In the case of option 3, where we want to submit a fix script, then rerun the
parent job, we use a fairly complicated conman sbd (submit command), which
submits a conman sbj (submit job) on the Master. This means we do not need to
have access to the jobs database on the local workstation. The recovery job then
waits for a short time (30 seconds in the example) before completing with a
successful exit status so that the parent job can rerun.
We do not want the parent job to rerun before the fix script has completed, and
waiting for 30 seconds is certainly not a good way to be sure that this does not
happen. However, the parent job has a dependency on the resource DATALOAD.
Therefore by including a dependency on the same resource when submitting the
fix script job, we only need to wait long enough for the fix script to submit and
acquire the resource. The parent job will then wait for the fix script to complete
and release the resource before it reruns, as shown in Figure 4-7 on page 165.
Tip: A more robust approach would be to run further conman commands
following the sleep and check that the fix script job had indeed been submitted
correctly and had acquired the resource before exiting. The recovery job
would then exit with a successful exit status if all was well, or with an
unsuccessful exit status if there was a problem.
Finally in the case of option 4, where we want to rerun the parent job after
changing the value of an IBM Tivoli Workload Scheduler parameter, the recovery
job uses the IBM Tivoli Workload Scheduler parms utility to set the IBM Tivoli
Workload Scheduler parameter DBVLEVEL to the value 9:
parms -c DBVLEVEL 9
Having set the IBM Tivoli Workload Scheduler parameter, the recovery script will
exit with a successful exit status as shown in Figure 4-8.
Finally, Figure 4-9 on page 167 shows the job stream status following the job
DBSELOAD completing with a successful return code, in this case 8, but the
effect would have been the same had the return code been any of 0, 5, 6, 7, 8, 9
or 10.
Figure 4-9 Job DBSELOAD completes with a successful return code
The complete script run by the job RECOVERY is shown in Figure 4-8.
# return codes
: ${OK=0} ${FAIL=1}
# check rc is defined
if [ ! “${rc}” ]
then
echo "WARNING: exit code not defined, defaulting to FAIL"
rc=${FAIL}
fi
# all done
exit ${rc}
For a complete description of jobinfo and all the available options, see Chapter
5, “Utility Commands” in the Tivoli Workload Scheduler Version 8.2, Reference
Guide, SC32-1274. Details of the parms utility can also be found in the same
chapter.
[Figure 5-1: the example network, with the Master Domain Manager and the Domain Managers CHATHAM and TWS6 separated by an IP firewall]
In this case, the firewall rules required would have been as shown in
Example 5-1 on page 173.
Example 5-1 Before sample firewall rules
src 9.3.4.47/32 dest 10.2.3.190/32 port 31111 proto tcp permit
src 10.2.3.190/32 dest 9.3.4.47/32 port 31111 proto tcp permit
With only a small number of workstations involved, as is the case here, this is
not a major concern. But as the number of workstations increases, particularly if
workstations are added and/or removed frequently and strict change control
procedures are enforced when changes are made to the firewall rules, this can
become a major problem.
In the example network shown in Figure 5-1 on page 172, we would set the
behindfirewall attribute to ON for the workstations TWS1, TWS5 and TWS6,
and to OFF for the remaining workstations. Once Jnextday had run and the
modified workstation definitions have taken effect, the firewall configuration can
be modified to permit a connection for the Domain Manager only, as shown in
Example 5-3.
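As a sketch, the relevant attribute is set in the workstation definition itself; for TWS1 it might look as follows (the node name, ports, and domain are assumptions for illustration, and the full definition of TWS6 appears later in Example 5-19):

cpuname TWS1
 os UNIX
 node tws1.itsc.austin.ibm.com
 tcpaddr 31111
 domain SECUREDM
 for maestro
  type FTA
  autolink on
  behindfirewall on
 end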
With this configuration, stop commands, start commands and the retrieval of job
stdlist files for workstations TWS1 and TWS5 will follow the domain hierarchy
from the Master down through the Domain Manager TWS6, as shown in
Figure 5-3.
[Figure 5-3: the same network diagram, showing stop and start commands and stdlist retrieval for TWS1 and TWS5 routed from the Master through the Domain Manager TWS6 across the firewall]
The new Tivoli Workload Scheduler 8.2 service ROUTER (service 2007 in the
Netconf file) has been created to provide the firewall support.
To use this feature, replace the stop @!@; wait line within the Jnextday script
with stop; progressive; wait.
Note that if you use this feature, there is a slight chance that some
workstations might not be stopped when their Domain Managers are ready to
send the Symphony file, but in that case the Domain Manager will wait for five
minutes and then retry sending the Symphony file.
If you use the Firewall Support option, we recommend you use the progressive
stop feature. This is not mandatory, but when used, will increase the
performance of the Jnextday process. Note that the default behavior in
Jnextday is not to use the progressive stop.
Note: For more information on the OpenSSL Toolkit, refer to the OpenSSL
organization’s Web site at:
http://www.openssl.org/
To authenticate a peer's identity, the SSL protocol uses X.509 certificates called
digital certificates. Digital certificates are, in essence, electronic ID cards issued
by a trusted party known as a Certificate Authority (CA).
Public-key cryptography uses two different cryptographic keys: a private key and
a public key. Public-key cryptography is also known as asymmetric cryptography,
because you can encrypt information with one key and decrypt it with the
complement key from a given public-private key pair. Public-private key pairs are
simply long strings of data that act as keys to a user’s encryption scheme. The
user keeps the private key in a secure place (for example, encrypted on a
computer’s hard drive) and provides the public key to anyone with whom the user
wants to communicate. The private key is used to digitally sign all secure
communications sent from the user, while the public key is used by the recipient
to verify the sender’s signature.
Digital certificates provide that confidence. For this reason, the IBM Tivoli
Workload Scheduler workstations that share an SSL session must have locally
installed repositories for the X.509 certificates that will be exchanged during the
SSL session establishment phase to authenticate the session.
Users can also create their own self-signed digital certificates for testing
purposes.
The following example describes in a simplified way how digital certificates are
used in establishing an SSL session. In this scenario, Appl1 is a client process
that opens an SSL connection with the server application Appl2:
1. Client Appl1 asks to open an SSL session with server Appl2.
2. Appl2 starts the SSL handshake protocol. It encrypts the information using its
private key and sends its certificate with the matching public key to Appl1.
3. Appl1 receives the certificate from Appl2 and verifies that it is signed by a
trusted Certificate Authority. If the certificate is signed by a trusted CA, Appl1
can optionally extract some information (such as the Distinguished Name)
stored in the certificate and performs additional authentication checks on
Appl2.
4. At this point, the server process has been authenticated, and the client
process starts its part of the authentication process; that is, Appl1 encrypts
the information using its private key and sends the certificate with its public
key to Appl2.
5. Appl2 receives the certificate from Appl1 and verifies that it is signed by a
trusted Certificate Authority.
6. If the certificate is signed by a trusted CA, Appl2 can optionally extract some
information (such as the Distinguished Name) stored in the certificate and
performs additional authentication checks on Appl1.
Tip: When planning for SSL implementation, take into account that it is not a
good idea to use SSL on all nodes of a Tivoli Workload Scheduler network.
The reason is the potential performance penalty, especially during the link
phase of the FTAs. It is a good idea to use the SSL only on the workstations in
your DMZ that serve as an entry point from a nonsecure zone to the secure
zone.
By selecting the appropriate authentication methods from the list above, the
Tivoli Workload Scheduler 8.2 administrator can choose to implement one or a
mix of the following:
Use the same certificate for the entire network
Use a certificate for each domain
Use a certificate for each CPU
The Tivoli Workload Scheduler 8.2 administrator has the choice of requesting
certificates from one of the many commercial Certificate Authorities (such as
Baltimore, VeriSign, and so on) or creating his own Certificate Authority to create
and sign the necessary certificates.
Note: A template for the configuration file we used, plus a script create_ca.sh
that performs the steps required to create your own CA, as detailed in the next
section, are available for download via FTP from:
ftp://www.redbooks.ibm.com/redbooks/SG246628
In this section we will go through the steps necessary to create your own CA:
Creating an environment for your CA
Creating an OpenSSL configuration file
Creating a self-signed root certificate
Note: Work with your security administrator when configuring SSL for IBM
Tivoli Workload Scheduler.
5. Next, we created an empty file index.txt, which will be used to hold a simple
database of certificates issued:
$ touch index.txt
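Steps 1 through 4 are not reproduced in this excerpt; as a minimal sketch, they typically amount to creating the CA directory structure and seeding the serial number file, assuming the /opt/tws/ca root directory used in our scenario:

$ mkdir -p /opt/tws/ca/certs /opt/tws/ca/private
$ chmod 700 /opt/tws/ca/private
$ echo 01 > /opt/tws/ca/serial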
Example 5-4 on page 183 shows the configuration file for our CA. The first set of
keys informs OpenSSL about the placement of the files and directories that it
needs to use. The keys, default_crl_days, default_days, and default_md,
correspond to the command-line crldays, days and md options, and are
explained below:
Note: If you use the command-line options, they will override the
corresponding options specified in the configuration file.
default_days Specifies the number of days for which an issued certificate
will be valid.
default_md Specifies the message digest algorithm to use when
signing issued certificates and CRLs.
The policy key gives the name of the default policy that will be used. A policy
definition reflects the fields in a certificate’s Distinguished Name. The
x509_extensions key is used for the name of a section that has the extensions to
be added to each certificate that belongs to this CA. In our example, we only
included the basicConstraint extension and set it to False, which effectively
eliminates the use of certificate chains.
[ twsca ]
dir = /opt/tws/ca
certificate = $dir/cacert.pem
database = $dir/index.txt
new_certs_dir = $dir/certs
private_key = $dir/private/cakey.pem
serial = $dir/serial
default_crl_days = 7
default_days = 365
default_md = md5
policy = twsca_policy
x509_extensions = certificate_extensions
[ twsca_policy ]
commonName = supplied
stateOrProvinceName = supplied
countryName = supplied
emailAddress = supplied
organizationName = supplied
organizationalUnitName = supplied
[ certificate_extensions ]
basicConstraints = CA:false
Alternatively, you can simply create the ca_env.sh script using your preferred text
editor, for example vi.
OPENSSL_CONF=${HOME}/ca/openssl.cnf
export OPENSSL_CONF
We can now source the script as and when required, thereby setting the required
environment variable, using the following command:
$ . ca_env.sh
Example 5-6 Configuration file additions for generating a self-signed root certificate
[ req ]
default_bits = 2048
default_keyfile = /opt/tws/ca/private/cakey.pem
default_md = md5
prompt = no
distinguished_name = root_ca_distinguished_name
x509_extensions = root_ca_extensions
[ root_ca_distinguished_name ]
commonName = ITSO/Austin CA
stateOrProvinceName = Texas
countryName = US
emailAddress = [email protected]
organizationName = ITSO/Austin Certificate Authority
[ root_ca_extensions ]
basicConstraints = CA:true
After finishing this configuration step, you can generate your self-signed root
certificate. To do this, from the root directory of the CA, which is /opt/tws/ca in our
scenario, first source the ca_env.sh script to set the required environment
variable:
$ . ca_env.sh
OpenSSL will prompt you to enter a pass phrase (twice) to encrypt your private
key.
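The command used at this point is not reproduced in this excerpt; it is typically of the following form (the validity period is an assumption; the key size and output file follow the configuration shown above):

$ openssl req -x509 -newkey rsa:2048 -out cacert.pem -outform PEM -days 3650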
The output from this command can be seen in Example 5-8. Note, however, that
your certificate will not be identical, since your public and private key will be
different from ours.
>
> OPENSSL_CONF=\${HOME}/bin/openssl.cnf
> export OPENSSL_CONF
> EOT
$
Alternatively, you can simply create the ssl_env.sh script using your preferred
text editor, for example vi.
The resulting ssl_env.sh script file is shown in Example 5-10.
OPENSSL_CONF=${HOME}/bin/openssl.cnf
export OPENSSL_CONF
The operation will be much more interactive, prompting for the information to fill
in the certificate request's Distinguished Name. Example 5-11 shows the
output from generating a certificate request.
Organizational Unit Name (eg, section) []:ITSO/Austin
Common Name (eg, YOUR name) []:TWS6
Email Address []:[email protected]
The result of this command is the creation of two files: CPUreq.pem and
CPUkey.pem. The former, CPUreq.pem, contains the certificate request as
shown in Example 5-12, and CPUkey.pem contains the private key that
matches the public key embedded in the certificate request. As part of the
process to generate a certificate request, a new key pair was also generated.
The first passphrase that is prompted for is the passphrase used to encrypt
the private key. The challenge phrase is stored in the certificate request, and
is otherwise ignored by OpenSSL. Some CAs may make use of it, however.
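For reference, a certificate request and key pair of this kind are typically generated with a command along the following lines (the key length is an assumption; the output file names match those described above):

$ openssl req -new -newkey rsa:1024 -keyout CPUkey.pem -out CPUreq.pem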
Note: Remember the passphrase used. You will require it again later.
Example 5-13 Issuing a certificate from a certificate request
$ openssl ca -in CPUreq.pem
Using configuration from /opt/tws/ca/openssl.cnf
Enter pass phrase for /opt/tws/ca/private/cakey.pem:
Check that the request matches the signature
Signature ok
The Subject’s Distinguished Name is as follows
countryName :PRINTABLE:’US’
stateOrProvinceName :PRINTABLE:’Texas’
localityName :PRINTABLE:’Austin’
organizationName :PRINTABLE:’IBM’
organizationalUnitName:PRINTABLE:’ITSO/Austin’
commonName :PRINTABLE:’TWS6’
emailAddress :IA5STRING:’[email protected]’
Certificate is to be certified until Jul 29 15:29:19 2004 GMT (365 days)
Sign the certificate? [y/n]:y
Figure 5-5 on page 195 summarizes the steps that we have performed to create
a self-signed digital certificate.
CPU CA Root
Figure 5-5 Summary of the steps for creating a self-signed digital certificate
Note: You will also require a copy of the CA’s certificate. In our example, this is
the file cacert.pem in the TWShome/ca, and a copy of this file should be
distributed to the Tivoli Workload Scheduler 8.2 workstations along with their
signed certificate files, for example 03.pem.
Note: If you are introducing an SSL connection between two workstations, you
must configure both sides of the connection before the workstations will be
able to link successfully:
1. Tivoli Workload Scheduler 8.2 needs to know the passphrase associated with
the certificate. This is the passphrase that was entered when generating the
certificate request to create the private key in step 6 on page 189. Write the
passphrase used into a file called CPUpwd.txt without any appended control
characters such as a line feed. The behavior of echo varies from one platform
to another, so take care if you use it to create this file.
Note: It is very important that this file contains only the passphrase itself,
and no additional characters including a carriage return or line feed
character. Once this file is encoded, if the passphrase does not exactly
match, Tivoli Workload Scheduler 8.2 will not be able to access the private
key and the workstation will fail to link.
2. There are a number of ways to confirm that the file contains only the
passphrase itself. One is to simply run ls -l and confirm that the file size
exactly matches the number of characters in the passphrase. Possibly a
better way is to use the od utility, which will display the file contents including
any control characters. For example, if you used the passphrase "pass
phrase", then the file in Example 5-14 is correct, but Example 5-15 is
incorrect because the file contains a line feed character, denoted by the \n at
the end of the line.
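As a sketch, on most UNIX platforms printf avoids the trailing newline that echo may add, and od -c confirms the file content (the passphrase here is the example used above):

$ printf "pass phrase" > CPUpwd.txt
$ od -c CPUpwd.txt
0000000   p   a   s   s       p   h   r   a   s   e
0000013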
5. By this stage, you should have a number of files in your TWShome/ssl
directory, which need to be defined in the TWShome/localopts file. These files
are summarized in Table 5-1.
Note: If the random number generator needed seeding, you will also need to
include the SSL random seed localopts attribute, specifying the location of the
seed file.
Note: SSL connections are disabled if the nm SSL port localopts attribute is
set to 0. Therefore, you need to select a free port number to use and specify
the port when editing the other localopts attributes. We used port 31113.
Note: If you found it necessary to seed the random number generator before
you were able to successfully issue openssl commands, you will also need to
set the SSL random seed attribute to the location of the TWS.rnd file as shown
in Example 5-18.
Example 5-18 Customized SSL localopts attributes including SSL random seed
# SSL Attributes
#
nm SSL port =31113
SSL key =/opt/tws/ssl/CPUkey.pem
SSL certificate =/opt/tws/ssl/03.pem
SSL key pwd =/opt/tws/ssl/CPUpwd.sth
SSL CA certificate =/opt/tws/ssl/cacert.pem
#SSL certificate chain =/opt/tws/ssl/TWSCertificateChain.crt
SSL random seed =/opt/tws/ssl/TWS.rnd
SSL Encryption Cipher =SSLv3
SSL auth mode =cpu
#SSL auth string =tws
In our example the Domain Manager TWS6 will require this attribute to be set to
On, while the Master Domain Manager MASTER and Backup Domain Manager
BACKUP should have this attribute set to Enabled. By using Enabled for
MASTER and BACKUP, we are specifying that these workstations should use
SSL when communicating with other workstations that are configured to use
SSL, for example the Domain Manager TWS6, but also use non-SSL
connections when communicating with workstations not configured to use SSL,
for example the Domain Manager CHATHAM.
Example 5-19 shows the workstation definitions for TWS6 and MASTER.
BACKUP is fundamentally identical to MASTER.
cpuname TWS6
os UNIX
node tws6.itsc.austin.ibm.com
tcpaddr 31111
secureaddr 31113
domain SECUREDM
TIMEZONE CST
for maestro
type FTA
autolink on
fullstatus on
resolvedep on
securitylevel on
behindfirewall on
end
Using the Job Scheduling Console, these attributes correspond to the SSL
Communication and Secure Port attributes of the General tab within the
workstation definition window, as shown in Figure 5-6 on page 201.
Figure 5-6 Setting SSL attributes using the Job Scheduling Console
Following these two events, if all has gone well your workstations will link up just
as they did prior to the SSL configuration. You should, however, see conman
reporting an additional flag (S) as shown in the TWS6 entry in Example 5-20, or
from the Job Scheduling Console. See the new configuration reported in the SSL
Communication and Secure Port columns as shown in Figure 5-7 on page 203.
Figure 5-7 JSC workstation SSL status
Should the workstations fail to link, check the Tivoli Workload Scheduler 8.2 logs
on the workstations on both sides of the link. SSL-related errors are described in
Chapter 13 of the Tivoli Workload Scheduler Version 8.2, Error Message and
Troubleshooting, SC32-1275.
The Tivoli Workload Scheduler Version 8.2, Error Message and Troubleshooting,
SC32-1275 guide describes this error as follows:
AWSDEB045E Error during creation of SSL local context: Reason: 1
Explanation: This error occurs when one of the TWS network processes
(mailman, writer, netman or scribner) is unable to initialize the local SSL
context. The problem could be related to either an invalid SSL setting
specified within the localopts file or to an initialization error of the SSL.
Operator Response: To solve this problem you should check all the values of
the options specified within the localopts file of the node where the error is
logged (see Chapter 9 of the Tivoli Workload Scheduler Version 8.2,
SC32-1273 manual for more details). One of the most common errors is
related to the password. If the source password file contains some spurious
characters, such as a line feed, the password cannot be decoded correctly.
This did lead us to the problem in this case, namely a line feed in the file used to
encode the password. However, the Error:06065064:digital envelope
routines: EVP_DecryptFinal: bad decrypt part of the message is a standard
OpenSSL error message, and can occur in many products using OpenSSL to
provide SSL facilities. Therefore if Tivoli Workload Scheduler Version 8.2, Error
Message and Troubleshooting, SC32-1275 does not immediately answer the
question, a search for all or part of this string using Google
(http://www.google.com) or another search engine should provide many helpful
hits to guide you to the problem.
With local Security files, each workstation (Fault Tolerant Agent, Standard Agent
or Domain Manager) has its own local Security file, which may be identical to
other Security files in the network or unique to the workstation. Furthermore, by
default the TWSuser on the workstation can modify the local Security file if
desired. Although it is possible to configure the Security file to prevent this, it is
not possible to prevent the root user (or Administrator on Windows) from
replacing the Security file with one containing any rights desired.
With a centralized Security file, a single controlled Security file is created and
distributed manually across the TWS network. Although each workstation still
has its own local copy of this Security file, if this file is tampered with locally, all
rights to the conman interface, plus composer and TWS Connector if installed,
are revoked on that workstation.
5.3.1 Configuring centralized security
Perform the following steps to configure centralized security:
1. Modify the TWShome/mozart/globalopts file on the Master Domain Manager,
setting the centralized security attribute to yes as shown in Example 5-21 and
sketched after these steps.
2. Once the globalopts option is activated, the next Jnextday will load
security-related information into the Symphony file. This security-related
information will include that centralized security is enabled plus the checksum
of the Master Domain Manager’s Security file. Once the new day’s Symphony
file has been distributed, all workstations will have the same security
information.
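Example 5-21 is not reproduced in this excerpt; as a minimal sketch, the line set in step 1 looks like this (the surrounding globalopts entries are omitted):

centralized security = yes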
In a network with centralized security enabled, two workstations will not be able
to establish a connection if one has centralized security turned off in its
Symphony file, or if their Security file information does not match. However, a
workstation will always accept incoming connections from its Domain Manager,
even if the Security file information sent from the Domain Manager does not
match the information in the workstation’s Symphony file. This is enforced to
allow the Tivoli Workload Scheduler 8.2 administrator to change the Security file
on the Master Domain Manager without having to distribute the new Security file
out to the entire network before running Jnextday.
Note: Tivoli Workload Scheduler 8.2 will run quite happily without the correct
Security file on each FTA. However, running conman commands locally such as
conman shut to stop netman will fail.
Note: If the Master Domain Manager's Security file is modified, you must be
sure that the modified file is copied to the Backup Master Domain Manager at
the very least.
For example, the Windows 2000 machine 3d054-1 has the Job Scheduling
Console installed. In order to connect to the Master Domain Manager, which is
positioned behind a firewall, the following steps are necessary:
1. On MASTER, enable single-port Bulk Data Transfer (BDT), using the odadmin
single_port_bdt command as follows:
# . /etc/Tivoli/setup_env.sh
# odadmin single_port_bdt TRUE all
2. By default the Bulk Data Transfer service uses port 9401. If a different port is
required, this can be specified using the odadmin set_bdt_port command as
follows:
# odadmin set_bdt_port 31194
3. Modify the firewall configuration to permit the necessary connections for the
JSC to access the TWS Connector, as shown in Example 5-22.
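Example 5-22 is not reproduced in this excerpt. As a rough sketch, and reusing the addresses from the earlier firewall example purely for illustration (with 10.2.3.190 as the JSC workstation and 9.3.4.47 as MASTER), the rules would need to permit the Tivoli object dispatcher port (94 by default) and the BDT port chosen above:

src 10.2.3.190/32 dest 9.3.4.47/32 port 94 proto tcp permit
src 10.2.3.190/32 dest 9.3.4.47/32 port 31194 proto tcp permit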
6.2 Configuration
In order for the late job handling functionality to work, the localopts file on the
agents will need to be configured. The bm check until and bm check deadline
options specify the number of seconds between checks for jobs that have either
passed their Latest Start Time (UNTIL) or Termination Deadline (DEADLINE)
time. The bm check deadline option may not already be specified in the file, so it
may need to be added. The default value for bm check until is 300 seconds. The
bm check deadline option should be set to the same value. These default
settings enable the batchman process to run efficiently and not severely impact
system processing. These values should not be changed. Example 6-1 on
page 209 shows these entries in the localopts file.
Example 6-1 Part of the localopts file
# TWS localopts file defines attributes of this Workstation.
~~~~~~~~~~~~
#--------------------------------------------------------------------
# Attributes of this Workstation for TWS batchman process:
bm check file =120
bm check status =300
bm look =15
bm read =10
bm stats =off
bm verbose =off
bm check until =300
bm check deadline=300
Within the JSC GUI, as seen in Figure 6-1 on page 210, the Suppress option is
highlighted as the default action. The ONUNTIL option should be specified after
the UNTIL option, when building job priorities within a job stream using the CLI. If
no action is defined, then ONUNTIL SUPPR is assumed, since it is the default
action.
MASTER#ACCJOB01
MASTER#ACCJOB02
FOLLOWS ACCJOB01
UNTIL 1650
ONUNTIL SUPPR
MASTER#ACCJOB03
FOLLOWS ACCJOB02
DEADLINE 1655
MASTER#ACCJOB04
FOLLOWS ACCJOB03
END
The action followed with this option is the same as the current behavior. If a job
has not started by the time specified as the Latest Start Time, then the launch of
the job will be suppressed. All subsequent jobs that depend on the job in
question will not be launched either.
Each of the jobs in the example has a run time of five minutes. As the first job
ACCJOB01 has not completed by 16:50, ACCJOB02, which has a dependency
on it, cannot start before its Latest Start Time. As shown in Figure 6-2, the JSC
GUI indicates that the time has passed by placing a message in the Information
column.
The command line (Example 6-3) shows that the job has been suppressed.
Example 6-3 Command line showings that the job has been suppressed
MASTER #TESTJS1_SUPP ******** READY 10 10/10 (00:04)
ACCJOB01 SUCC 10 10/10 00:06 #J1932
ACCJOB02 HOLD 10 (10/10)(00:04) [Until];
[Suppressed]
ACCJOB03 HOLD 10 (10/10)(00:05) [Late] ;
ACCJOB02
ACCJOB04 HOLD 10 (10/10)(00:04) ACCJOB03
To use this option, select the Continue radio button when specifying a Latest
Start Time for a job using the JSC GUI as seen in Figure 6-3 on page 213.
Figure 6-3 Continue option in JSC GUI
Also, you can use the ONUNTIL CONT option when using an UNTIL time on the
command line (Example 6-5 on page 215).
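As a sketch, the job stream fragment differs from the earlier suppress example only in the ONUNTIL keyword (the job names and times are reused from that example for illustration):

MASTER#ACCJOB02
FOLLOWS ACCJOB01
UNTIL 1650
ONUNTIL CONT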
The action followed with this option is that the job will run when all of its
dependencies are met, as though no time limitation had been associated with it.
It should be used where the user wishes the job to run regardless of the time,
but wants to be informed if it has not started by a predetermined time. Nothing is
displayed in the JSC GUI (Figure 6-4), although the information is shown on the
command line, and an event, 121, is sent to the bmevents file to be passed on to
the IBM Tivoli Enterprise Console. To use this option effectively, users need to be
monitoring either the command line or the IBM Tivoli Enterprise Console via the
Plus Module.
Each of the jobs in the example has a run time of five minutes. Despite a Latest
Start Time being set on ACCJOB02, it starts when ACCJOB01, which it depends
on, has completed.
The command line shows that the time has passed and that the Continue option
has been set.
Example 6-5 Command line shows that Continue option has been set
MASTER #TESTJS1_CONT ******** SUCC 10 10/10 00:21
ACCJOB01 SUCC 10 10/10 00:06 #J1931
ACCJOB02 SUCC 10 10/10 00:06 #J1934;
[Until]; [Continue]
ACCJOB03 SUCC 10 10/10 00:06 #J1936; [Late]
ACCJOB04 SUCC 10 10/10 00:06 #J1937
Event number 121 is also written to the bmevents file, which can then be passed
to the IBM Tivoli Enterprise Console.
To use this option, select the Cancel radio button when specifying a Latest Start
Time for a job using the JSC GUI, as seen in Figure 6-4 on page 214.
You can also use the ONUNTIL CANC option when using an UNTIL time on the
command line (Example 6-6).
UNTIL 1650
ONUNTIL CANC
MASTER#ACCJOB03
FOLLOWS ACCJOB02
DEADLINE 1655
MASTER#ACCJOB04
FOLLOWS ACCJOB03
END
The action followed with this option is that the job that has a Latest Start Time will
be cancelled automatically if it has not started by the time specified, effectively
removing it from the plan, in the same way as if the job had been cancelled
manually. Any jobs or streams that are dependent on the job being cancelled will
then run once all their other dependencies have been met, as though they had
never waited for the job being cancelled in the first place.
Each of the jobs in the example has a run time of five minutes. As the first job
ACCJOB01 has not completed by 16:50, ACCJOB02, which has a dependency
on it, cannot start before its Latest Start Time. Job ACCJOB02 is therefore
cancelled. The GUI indicates that the job has been cancelled by placing a
message in the Information column.
The command line shows that the time has passed, the job has been cancelled,
and that the dependent job ACCJOB03 starts because it has no other
dependencies.
Event number 122 is also written to the bmevents file, which can then be passed
to the IBM Tivoli Enterprise Console.
With this option, users can determine a time when a job would be considered late
if it has not completed by that time. This allows users to be notified in advance of
any potential overruns that may affect subsequent processes and/or the
completion of a batch cycle.
To use this option, select the Specify time option and enter the time in the Time
Restrictions section of the Job Priorities, when adding it to a job stream using the
GUI (Figure 6-7 on page 219). The normal elapsed time, based on previous runs
of the job, will be displayed at the base of the window, which may help in
determining the time to enter. Of course, if this job has not run in the past, the
timings must be derived from testing performed outside of IBM Tivoli Workload
Scheduler.
Figure 6-7 Termination Deadline option in JSC GUI
On the command line, this time is added by using the DEADLINE option.
The action followed with this option is that the job will be considered late if it has
not completed by the time specified. The job will continue to run to completion.
This is shown in Figure 6-8.
Each of the jobs in the example has a run time of five minutes. As the job
ACCJOB03 has not completed by its Termination Deadline time, 16:55, it is
considered as being late. The GUI indicates the fact that the job is late by placing
a message in the Information column. The message displayed is "Deadline has
passed, Userjob." This message is not as informative as it could be, as it does
not actually state that the job is running late. However, once it is known that this
message indicates that the job is late, it is usable.
The command line shows more clearly that the job is running late.
Event number 120 is also written to the bmevents file which can then be passed
to the IBM Tivoli Enterprise Console.
Even though IBM Tivoli Workload Scheduler shows whether any jobs or job
streams fail or abend, and whether any of the agents are unlinked or down, it is
not always possible to get to the root of the problem, nor is it possible to monitor
everything using IBM Tivoli Workload Scheduler alone. This is why IBM Tivoli
Enterprise Console is a great tool for monitoring all possible aspects of IBM
Tivoli Workload Scheduler and the entire system environment, from the entire
network (using IBM Tivoli NetView®) down to the smallest application
processes and application internals.
We will see in this chapter how it is possible to monitor this and correlate events
from the system environment and from IBM Tivoli Workload Scheduler. We will
also describe the possibility of using both IBM Tivoli Enterprise Console and IBM
Tivoli Workload Scheduler to notify the relevant people who may take an
appropriate action, using a third-party notification server – AlarmPoint.
Event consoles
Event consoles are graphical user interfaces that allow users to see relevant
events in different levels of details. Users can also respond to events, for
example close or acknowledge them or initiate a task, depending on their
administration roles.
Events
Events are units of information detected by event adapters and sent to the
event server.
Event definitions
These are definitions of event types together with their properties. They are
stored in BAROC (Basic Recorder of Objects in C) files. Events and their
properties have to be defined and then compiled into the rule base.
Rules
A set of preprogrammed logic steps, based on Prolog, that process events
arriving at the event server. Rules process the events and take
action based on the attributes of those events. Just like event definitions, rules
also have to be compiled into the rule base before they can be applied to
incoming events.
When an event adapter detects an event generated from a source, it formats
the event and sends it to the event server. The event server processes the
event and automatically performs any predefined tasks. An adapter can be
configured to discard certain types of events before they are even sent to IBM
Tivoli Enterprise Console, so that no unnecessary processing by IBM Tivoli
Enterprise Console is needed.
This type of event configuration and predefining the event server rules are the
topics we are covering in this chapter. We will be taking IBM Tivoli Workload
Scheduler as a source of our events.
The Tivoli Plus Module comes with a set of predefined message formats, event
classes, rules, and actions that can be taken when particular situations arise. The
Plus Module Tivoli Distributed Monitoring (DM) for IBM Tivoli Workload Scheduler
uses DM Version 3.7 and requires Distributed Monitoring Universal Monitors 3.7 to
be installed. The current Tivoli Monitoring version is 5.1.1, called IBM Tivoli
Monitoring, which uses a different monitoring method, Resource Models. TWS
Distributed Monitors 3.7 can be imported into IBM Tivoli Monitoring Resource
Models using the IBM Tivoli Monitoring Workbench, but they are not covered in
this book, as standard IBM Tivoli Monitoring would give more sophisticated
application monitoring with Web-based views.
For more information on IBM Tivoli Monitoring and how to import TWS
Distributed Monitors 3.7 to Resource Models, refer to the redbook IBM Tivoli
Monitoring Version 5.1.1 Creating Resource Models and Providers, SG24-6900.
Plus Module Integration includes a set of Tivoli tasks that can be used for the
following:
Tivoli tasks for configuring the event server. These tasks compile and import
IBM Tivoli Enterprise Console rules and classes either into an already existing
rule base or into a new rule base containing events for IBM Tivoli
Workload Scheduler.
Tivoli tasks for configuring IBM Tivoli Enterprise Console adapters. These
include configuring adapters in the Tivoli Framework environment (TME®
adapters) and the adapters that are outside the Tivoli systems management
environment (non-TME adapters).
A set of IBM Tivoli Workload Scheduler Report tasks. Report tasks produce
formatted IBM Tivoli Workload Scheduler standard reports, which are
generally available in IBM Tivoli Workload Scheduler (rep1-4b, rep7, rep8,
rep11, reptr, etc.).
Tivoli tasks that allow manipulation of IBM Tivoli Workload Scheduler, such as
stopping/starting and linking/unlinking of IBM Tivoli Workload Scheduler
agents.
In real environments, most of these tasks are rarely used, since these
features are available via the Job Scheduling Console. The tasks that are usually
used are only those for IBM Tivoli Enterprise Console configuration, although this
can also be done manually.
For more information on AlarmPoint, refer to Invoq Systems’ Web site at:
http://www.invoqsystems.com
Figure 7-1 shows the integration between IBM Tivoli Workload Scheduler, event
server, and AlarmPoint.
Note: Before installing the Plus Module, make sure that you perform a full Tivoli
database backup.
After the installation, a new icon appears on the Tivoli Desktop, called TivoliPlus.
This icon contains the TWS Plus for Tivoli collection. The collection contains a
number of configuration tasks that can be used for set up and configuration of
IBM Tivoli Enterprise Console and IBM Tivoli Workload Scheduler as shown in
Figure 7-2 on page 227.
The IBM Tivoli Workload Scheduler Plus Module is installed under the
$BINDIR/../generic_UNIX/TME/PLUS/TWS directory, where $BINDIR is the
Tivoli framework binary directory. The installation places a set of configuration
scripts, IBM Tivoli Enterprise Console event, rule and format files under this
directory.
After the task is configured, the IBM Tivoli Enterprise Console server is re-started
and the new rule base becomes active.
The next step is to configure log adapters. There are two tasks for this
configuration:
Configuring TME Adapters: For adapters within the Tivoli managed
environment
Configuring Non-TME Adapters: For adapters outside the Tivoli managed
environment.
We have used TME Adapters for our environment. In this case, IBM Tivoli
Workload Scheduler Master has a Tivoli endpoint and a Tivoli Enterprise
Console adapter running. The Configure TME Logfile Adapter task does the
following:
Stops the adapter from running
Adds the maestro format file into the existing one and compiles it
Makes modifications to the adapter configuration file
Re-starts the adapter
The adapter configuration file contains configuration options and filters for the
adapter. It is read by the adapter at startup. The file identifies the event server
to which events are sent, as well as any ASCII log files that are to be monitored.
In our case, the configuration task adds the TWShome/event.log file to the list of
monitored files.
Format files define the formats of messages that are displayed in IBM Tivoli
Enterprise Console and match them to event classes. There are two different
IBM Tivoli Workload Scheduler format files:
maestront.fmt for Windows platform
maestro.fmt for all other operating system platforms.
We have used Windows IBM Tivoli Enterprise Console adapters for the Windows
2000 platform, rather than Windows NT® adapters. They are different, and for the
Windows 2000 platform we recommend that you use tecad_win adapters rather
than tecad_nt. The Plus Module configuration scripts are written for configuring
Windows NT adapters, so we had to modify them to include Windows 2000
adapters in order for them to work. Modifications were made to the script that the
task uses, config_tme_logadapter.sh.
The adapter configuration task sets up the BmEvents.conf file. This file exists
under the TWShome/config directory. On UNIX, this file can also be found in
TWShome/OV and TWShome/unsupported/OpC directories. The configuration
tasks can be run separately on selected subscribers. If run by default, it will run
on all clients that are subscribed to the IBM Tivoli Workload Scheduler Client list
profiles.
Note: Tivoli has changed IBM Tivoli Workload Scheduler event classes. They
are now different for IBM Tivoli Workload Scheduler in UNIX/Linux and
Windows environments. IBM Tivoli Workload Scheduler classes for
UNIX/Linux are of the form TWS_Base, while for the Windows platform, they
are of the form TWS_NT_Base.
After the adapter configuration, the IBM Tivoli Workload Scheduler engine needs to
be restarted before events start being written to the log file.
The TME Logfile Adapter on Windows looks for the Tivoli endpoint installation under
the $SystemRoot/Tivoli/lcf directory (that is, c:\winnt\tivoli\lcf) and fails if the
Tivoli endpoint is installed somewhere else. Note that this is not the Tivoli endpoint
default location, so it is almost certain that this task will not work.
7.3.3 Recommendations
Because of the various issues found when configuring the integration between
IBM Tivoli Workload Scheduler and IBM Tivoli Enterprise Console, we can
recommend the following. Instead of installing the Plus Module in a live
environment, you can install it on a test machine and copy the BAROC, rule and
format files across. Use the standard Tivoli ACP (Adapter Configuration Profile)
profile to download the format file onto the IBM Tivoli Workload Scheduler Master
machine and configure this adapter to monitor the IBM Tivoli Workload
Scheduler event.log file.
For IBM Tivoli Enterprise Console configuration, use only one rule file, depending
on the implementation. For example, use maestro_plus.rls if IBM Tivoli Workload
Scheduler monitored Master is on a UNIX/Linux platform and use
maestront_plus.rls if IBM Tivoli Workload Scheduler Master is on a Windows
platform. When using the Tivoli Distributed Monitoring option, load the
maestro_mon.rls rule file. If modification to the rules is needed, modify them,
then compile and load into the existing or new rule base. If there is a need to
create a new event group and a console, you can use the wconsole command to
do this manually.
Configuring IBM Tivoli Workload Scheduler to report events from every
IBM Tivoli Workload Scheduler agent results in duplicate events, because the
same event arrives from the Master DM as well as from each agent. This can be
solved by creating new IBM Tivoli Enterprise Console rules that detect the
duplication on the basis of slots such as job_name, job_cpu, schedule_name
and schedule_cpu, as sketched below.
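The following is a minimal sketch of such a duplicate-detection rule. The class name
(TWS_Job_Abend) and the slot names are assumptions based on the class naming
described in this chapter; adjust them to match the classes compiled into your rule
base before use.

rule: tws_drop_duplicate_job_abend: (
  description: 'Drop a TWS job abend event already received from another workstation',
  event: _event of_class 'TWS_Job_Abend'
    where [
      job_name: _job_name,
      job_cpu: _job_cpu,
      schedule_name: _schedule_name,
      schedule_cpu: _schedule_cpu
    ],
  reception_action: drop_duplicate: (
    % look for an earlier, still-open event with the same key slots
    first_duplicate(_event, event: _duplicate
      where [ status: outside ['CLOSED'] ]),
    drop_received_event
  )
).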
The full listing of all available IBM Tivoli Workload Scheduler events and their
corresponding event numbers (old and new) is given in 7.5.1, “Full IBM Tivoli
Workload Scheduler Event Configuration listing” on page 240.
Our scenario reflects the following (see Figure 7-4 on page 232):
If a DBSELOAD job abends, the recovery job, DBSELREC, runs automatically
and checks for return codes.
If the return code of the DBSELOAD job is 1 or greater than 10, an event with
a fatal severity is sent to IBM Tivoli Enterprise Console, which causes IBM
Tivoli Enterprise Console to take action to create an AlarmPoint critical call.
AlarmPoint finds the appropriate Technical Support person who is responsible
for this job, calls him or her and delivers the message. It also gives the person
an option to acknowledge or close the IBM Tivoli Enterprise Console event by
pressing relevant keypads on the phone.
Because the DBSELOAD job is critical and there could be a possible service
impact affecting end users, AlarmPoint also informs the Product Support
group in charge, according to the escalation procedure.
If the DATAREPT job is late (that is, not started by 3:00 in the afternoon), a
severity “warning” event is sent to IBM Tivoli Enterprise Console – a Late
message.
IBM Tivoli Enterprise Console passes relevant information to AlarmPoint,
which notifies the Technical Support group via pager or SMS or e-mail.
If the DATAREPT job abends, an IBM Tivoli Enterprise Console critical event
is created and AlarmPoint calls Technical Support with an option to re-run the
job.
If Technical Support does not fix the job within 30 minutes, AlarmPoint informs
management of the situation.

The AlarmPoint XML-based Java client is installed and running on the IBM Tivoli
Enterprise Console server. The client is responsible for all the communications
between the event server and the AlarmPoint server.
When, in our case, the DBSELOAD fatal event arrives at IBM Tivoli Enterprise
Console, IBM Tivoli Enterprise Console has a predefined rule that executes a
shell script. This script (send_ap_action.sh) passes parameters to the
AlarmPoint Java Client. The client translates this into a special XML message
that is sent to AlarmPoint. AlarmPoint then finds the relevant people that need to
be notified for this situation (Technical Support and Product Control groups). It
also finds all devices these people need to be notified on (for example, mobile
phones, pagers, e-mails, etc.) and notifies them. If any of those people want to
respond to acknowledge or close the IBM Tivoli Enterprise Console event, the
XML message is passed back to IBM Tivoli Enterprise Console from AlarmPoint.
IBM Tivoli Enterprise Console acknowledges or closes this event accordingly.
If the DATAREPT job abends, the Technical Support person gets an option
on the telephone keypad to re-run the job. The AlarmPoint Java Client passes
the response back to IBM Tivoli Enterprise Console, which then re-runs the job.
In our environment, we have used the Plus Module IBM Tivoli Enterprise Console
rules and added a few rules to include integration with AlarmPoint. The rules
reflect our scenario only, but give a good idea of how the integration can be done.
Figure 7-6 shows the AlarmPoint Pager message and Figure 7-7 on page 235
shows an AlarmPoint two-way e-mail response that is generated as a result of
this event. Figure 7-8 on page 235 shows an AlarmPoint Mobile Phone SMS
message sent to a mobile phone, giving an option via the phone to re-run the job.
Figure 7-7 AlarmPoint two-way e-mail response
We have also added another rule (redbook.rls) to include our scenario cases,
together with rules for AlarmPoint integration.
The BmEvents.conf file has been configured to send almost all events, but this is
only needed for testing. After the testing period is finished, you will need to change
this file to include monitoring for only the desired IBM Tivoli Workload Scheduler
events (see Example 7-1 on page 237).
After installation and customization of IBM Tivoli Workload Scheduler and IBM
Tivoli Enterprise Console, you need to decide what type of events you would like
to be reported via Tivoli Enterprise Console. There are two different approaches
to achieve this. One is to report all available events and then filter out or stop
events that are not needed (for example, harmless events of jobs starting their
execution). The other is to refer to the event listing (see Table 7-1 on page 240)
and pick only the events you are interested in. We recommend the second
approach, since the first may produce far too many events to keep track of.
This is the basic operation of the integration between Tivoli Workload
Scheduler, the event server and, if available, AlarmPoint. We have also given
best practices and recommended settings and configuration for this integration.
The configuration files and IBM Tivoli Enterprise Console rules are listed in this
section.
BmEvents.conf file
This configuration file specifies the type of IBM Tivoli Workload Scheduler events
that are sent to the Enterprise Console. The file should be located in the IBM
Tivoli Workload Scheduler home directory. By default, the following events are
reported:
An IBM Tivoli Workload Scheduler process is reset, link dropped or link failed.
Domain Manager switched.
Job and/or schedule abended, failed, or suspended.
Prompts are issued and waiting to be answered.
It is also possible to switch on the rest of the events that report almost every
change in IBM Tivoli Workload Scheduler internal status. We have used some
new events only available in IBM Tivoli Workload Scheduler Version 8.2, such as
reporting for late jobs and schedules (see Example 7-1).
This file also defines whether the events should be reported only from the Master
Domain Manager on behalf of all other workstations, or from each workstation
separately (the OPTIONS setting). The FILE option specifies the name of the log file that
the IBM Tivoli Enterprise Console adapter will be reading from. This file should
be located in the IBM Tivoli Workload Scheduler home directory (usually called
event.log).
The BmEvents.conf file can be configured either manually or using the Plus
Module task (see 7.3.1, “Setting up the IBM Tivoli Enterprise Console” on
page 227).
Example 7-1 BmEvents.conf file
OPTIONS=MASTER
# MASTER This tells batchman to act as the master of the network and
# information on all cpus are returned by this module.
LOGGING=KEY
# ALL This tells batchman all the valid event numbers are reported.
#
# KEY This tells batchman the key-flag filter is enabled
#
# default is ALL for all the cpus
SYMEVNTS=YES
# YES tells batchman to report a picture of job status events as soon as the
# new plan is generated. It is valid only for key-flagged jobs with LOGGING=KEY
#
# NO does not report these events. It is the default.
# EVENTS = 51 101 102 105 111 151 152 155 201 202 203 204 251 252 301
EVENTS=1 51 52 53 101 102 103 104 105 106 107 110 111 112 113 115 116 120 121
122 151 152 154 155 157 163 164 165 201 202 203 204 251 252 301
# <n> is a valid event number (see Maestro.mib traps for the valid event
# numbers and the contents of each event).
#
# default is 51 101 102 105 111 151 152 155 201 202 203 204 251 252 301
# PIPE=/usr/lib/maestro/MAGENT.P
#PIPE=/usr/lib/maestro/MAGENT.P
tecad.conf file
This file is used by the IBM Tivoli Enterprise Console adapter and read at
adapter startup. The only part of this adapter that needs to be configured is the
LogSources option. This option specifies which log file the IBM Tivoli Enterprise
Console adapter needs to monitor. In our scenario, this is the event.log file from the IBM
Tivoli Workload Scheduler home directory. The tecad.conf file is installed with the
IBM Tivoli Enterprise Console adapter and can usually be found under the
following directories:
For TME adapters: $LCFDIR/bin/<platform>/TME/TEC/adapters/etc
For non-TME adapters: $TECAD/etc (where $TECAD is the adapter
installation directory)
We have used two different IBM Tivoli Enterprise Console adapter configuration
files: one for the Windows platform (tecad_win.conf) and one for the UNIX
platform (tecad_logfile.conf). These files are shown in the following examples.
PreFilter:Log=Security
#
LogSources=,/export/home/maestro/event.log
#Filter:Class=Logfile_Base
Filter:Class=Logfile_Sendmail
Filter:Class=Amd_Unmounted
Filter:Class=Amd_Mounted
Filter:Class=Su_Success;
TEC rules
IBM Tivoli Enterprise Console Rules used in this integration are the Plus Module
rules, which are located in the maestro_plus.rls rule set. The original Plus
Module rule executes a hidden IBM Tivoli Workload Scheduler task to e-mail
standard output of abended jobs to system administrators. However, this is only
possible on UNIX, since Tivoli uses the sendmail method, which is not supported
on Windows. Also, in most cases it is not the UNIX system administrators who are
interested in IBM Tivoli Workload Scheduler job listings, but rather the people who
are responsible for those jobs. This is why we used AlarmPoint to make sure the relevant
people are always informed when critical situations arise. The maestro_plus rule
set is found in “maestro_plus rule set” on page 372.
Table 7-1 IBM Tivoli Workload Scheduler events

Event Number   Message Type
1              TWS reset
51             Process reset
Chapter 8
TMR failure is being resolved. This will, therefore, remove the pressure from the
operations department for the TMR to be restarted.
If the answer to any of the above is yes, then a long-term switch is required. Note
that the first stage of a long-term switch is the short-term switch procedure.
During the time while the Backup Master is in control, following a short-term
switch, it will maintain a log of the updates to the Symphony file, in the same way
as when communication between agents is lost due to network outages. The
default maximum size for the update file, tomaster.msg, is approximately 9.5 MB.
While the size of this file can be increased by using the evtsize command, a
long-term switch should be considered if this file starts to get full, with a switch
back to the original MDM following the running of Jnextday. This will avoid any
possible loss of job state information that may occur if the size of this file affects
the switchback process.
In the case of MDM failure, unless the problem can be resolved by a system
reboot, we would recommend that a long-term switch is always performed and
that the switchover is maintained until at least the start of the next processing
day. Following these guidelines will ensure that any possible issues with the
switchover procedure are avoided.
The result of the short-term switch is that all of the scheduling updates are
redirected from the original MDM to the BMDM, which will act as the MDM, thus
allowing for the original MDM to be shut down, or in the case of a failure, fixed
and re-started. The switch can be performed by a simple command or GUI
option. This is also the first step in the long-term switchover process.
The procedure for the short-term switch to the Backup Master is the same
regardless of whether this is due to the Master workstation crashing or if there is
a requirement to perform maintenance on the workstation, which requires IBM
Tivoli Workload Scheduler to be stopped.
This can either be performed via the JSC or the command line. If the JSC is to be
used, the connection will need to be made to the BMDM.
Figure 8-1 Status of all Domains list
2. Select the MASTERDM row and right-click to open the Actions menu
(Figure 8-2).
4. Use the Find window to list the agents in the network and select the
workstation that is to take over as the Master (Figure 8-4).
5. Refresh the Status of all Domains view until the switch is confirmed.
6. Log on to all the agents in the network as maestro, enter the conman CLI and
check that all agents have recognized the switch using the sd command.
8.4.2 Using the command line
Use the following procedure for a short-term switch using the command line:
1. Log on to any agent in the network as the install user, typically maestro. If the
long-term switch is script based and is to be performed, then using the
command-line procedure may prove easier, since a command-line session
will be required in order to run the script.
2. Enter the conman command line interface by typing conman.
3. Be sure that the agent has a status of “Batchman LIVES”. Issue a start
command if it does not.
4. Enter the following switch command, where YYYYYYY is the workstation name
that is to take over as the MDM (an example session is shown after this procedure):
switchmgr masterdm;YYYYYYY
5. While still in the conman CLI, check that the switch has taken place by using
the SHOWDOMAINS command. Enter sd for short. The switch will take a few
minutes to take place.
6. Log on to all the agents in the network as the install user and enter the
conman CLI and check that all agents have recognized the switch using the
sd command.
7. Reverse the procedure to revert back to the original MDM.
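For example, a complete short-term switch session might look like the following,
where BDM001 is an illustrative Backup Master workstation name:

$ conman
%sd
%switchmgr masterdm;BDM001
%sd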
The long-term switch deals with the location of the active IBM Tivoli Workload
Scheduler databases, which is determined by the master option within the
globalopts file. This file exists within the mozart directory on all workstations,
although only the globalopts file on the Master workstation has any effect on how
the environment acts. Whenever access to the database is requested, via the
composer command line or JSC, the master option in the local globalopts file is
checked to ensure that it matches the workstation name where the request is
being made. If it does not match, then an error is produced and no access is
gained.
If the long-term switch is to be carried out by running a script, the best way to
make the required change to the globalopts file is to have a copy on the BMDM
with the master option set to the name of the BMDM, which can be copied over
the original file at the appropriate time. This file could be called globalopts.bkup.
A file called globalopts.mstr would also need to be created so that it can be
copied back following the switch back to the MDM. The copy of the globalopts file
would be the first step in the switchover, following the short-term switch.
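A minimal sketch of this globalopts swap on the BMDM, assuming an installation
path of /opt/maestro and the file names described above:

cd /opt/maestro/mozart
cp globalopts globalopts.mstr     # keep a copy that points back to the original MDM
cp globalopts.bkup globalopts     # activate the copy whose master option names the BMDM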
As the main aspect of the long-term switch is the access to the active databases,
it is important to be sure that the database files on the Backup Master are as
up to date as possible. There are two ways that this can be achieved, both of
which have their own set of considerations.
The first method is to copy the physical database files, including key files, from
the MDM at the start of the day. This option requires files to be copied from
multiple directories and might require a more involved fix process if the
databases on the Backup Master are corrupted or the copy has failed. Care
would need to be taken to ensure that the databases are not being updated
during the copy, because this could lead to inconsistency between the MDM and
BMDM and in extreme cases might cause corruption.
Secondly, flat files can be exported from the database and then copied to the
BMDM. The advantage of this method is that no updates to the database will be
allowed during the export. The import process of the databases on the BMDM
should not be performed until a long-term switch is actually initiated. This will
require running the composer replace command after you have performed the
Domain switch and a copy has been made of the globalopts file. The
disadvantage with this method is the contents of the users’ database. User
passwords are not extracted during the flat file creation, for security reasons, so
these would therefore need to be added following the load. This can be achieved
relatively easily by using non-expiring passwords, ensuring that the password is
the same for all Windows batch job users, and either typing in the password via a
prompt during the load script or by having a file, owned by and readable only by
root, that contains the password and can be read by the load script.
Whichever of the above options is chosen, the copy of the physical databases or
exported flat files, the copy should take place on a daily basis at a set time so as
to ensure that all involved understand and appreciate the state that the
databases are in following a long-term switch. Due to the importance of the
runmsgno file, explained later, it would be advisable for this to take place soon
after the running of the Jnextday job. In this way, a switch can be performed at
any time after the copy during the current production day, without issue. It should
also be noted that, unless further copies are performed, any changes to the
database during the production day will not be reflected following a switch and
will need to be recreated. During normal day-to-day running, it is probable that the
number of changes will be minimal. If many changes are being made to the
database, then multiple copies should be scheduled.
As well as the copy of the databases, there is also a requirement to copy the
runmsgno file from the mozart directory on MDM to the BMDM. This file contains
the current sequence number of the Symphony file and will be checked during
the running of Jnextday.
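As a sketch (the installation path and the host name mdm are assumptions), the copy
could be run on the BMDM right after Jnextday completes:

scp maestro@mdm:/opt/maestro/mozart/runmsgno /opt/maestro/mozart/runmsgno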
Once a switch and a load of the databases have taken place, the FINAL job
stream and Jnextday job will need to be run on the BMDM, when it is acting as
the Master. As further preparation, a Jnextday job and FINAL job stream should
be created for the BMDM, with the FINAL job stream specified to run “on request”
to ensure that it is not scheduled on a daily basis. This will need to be submitted
during the switchover and amended to run every day, while the original FINAL job
stream should be amended to run on request so that the original MDM will be
online for a number of days prior to switch back. If the Jnextday script has been
amended, then these amendments should be reflected on the BMDM. In order to
change the run cycle of the FINAL job streams on the MDM and BMDM while
using a switchover script, an exported file containing the two FINAL job streams
with amended run cycles can be stored on the BMDM and loaded.
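As a sketch, the FINAL job stream prepared for the Backup Master could be defined
as follows, assuming the BMDM workstation is named BDM001 and the standard
Jnextday job is used (both names are illustrative):

SCHEDULE BDM001#FINAL
ON REQUEST
CARRYFORWARD
:
BDM001#JNEXTDAY
END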
The script found at “Script for performing long-term switch” on page 382 is an
example of one that could be used to perform the long-term switch, as far as the
preparation work regarding the globalopts and runmsgno files is concerned. It
uses database files that have been exported using the composer create
command and copied to the BMDM prior to the switch.
This shell script will run on Windows with some minimal modification, utilizing
bash, which will be available on the MDM and BMDM as part of the Framework
TMR installation.
Although each implementation of IBM Tivoli Workload Scheduler has its own
unique qualities, which may require an alternative method of off-site disaster
recovery, generally there are three options that should be considered.
Implementing this solution requires systems that match those that exist in the
production IBM Tivoli Workload Scheduler network. To ensure the cleanest and
quickest switchover, these systems should be set up with the same system
names and IP addresses as those in production. Because these systems will
normally be isolated from the live systems, this should not be an issue. If they are
in a network configuration that would cause contention if both sets of systems
were to be active at the same time, then some manipulation of files will be
required during the switch process.
If the off-site facilities are provided in-house, then much of the preparation work
needed for this solution can be in place prior to any failure, although this will not
necessarily improve the time taken for the batch process to be restarted. In this
case, it would be worth considering either the warm or even hot start option.
Normally this solution is used where a third-party supplier will provide the
systems, under agreement, in the event of failure.
On a daily basis, more regularly if possible, flat file copies of the IBM Tivoli
Workload Scheduler databases should be made (using the composer create
command), along with backups of the application databases and transported off
site to a secure site or to the off-site location, if in-house. Putting them in a safe,
even a fireproof location in the same building as the live systems is not good
enough.
If a failure occurs, the systems will need to be built and configured to match those
that they are to replace and the application data restored. If the application
backup is made midway through a batch run, then it is imperative that the staff
performing the rebuild are aware of this fact, because it will determine the point
where the production work is restarted.
Once all the systems have been rebuilt, the IBM Tivoli Workload Scheduler
agents will need to be built on each of the systems. If the same system names
and IP addresses as the live systems are used, then this can be performed as
part of the application data recovery. The IBM Tivoli Workload Scheduler object
databases can then be loaded using the composer replace command. The
workstations should be loaded first and the job streams last to avoid errors
occurring during the load. The IBM Tivoli Workload Scheduler network can be
started by running the Jnextday command. All agents will have their limit set to 0,
which will prevent any work starting before final checks have been completed
and the restart point determined if the application backups were taken midway
through a batch run.
Regardless of which of the two options is used, this strategy is built around the
off-site systems being managed in-house, with permanent network
connectivity established between the live systems and the disaster recovery
site up to the point of total loss of the live systems.
On a daily basis, flat file exports of the IBM Tivoli Workload Scheduler databases
(composer create), excluding workstations, would be sent to the off-site MDM
and loaded immediately (composer replace). Jnextday would then run to ensure
that the day’s batch work is built into the Symphony file and ready to go. The
limits would be set to 0 on all systems to ensure that no jobs actually run and
there would need to be a process of cancelling all carry forward job streams prior
to the Jnextday job running, to ensure that nothing is carried forward. This
process, of course, would need to be removed in the event of a failover.
One system at the off-site location would need to be installed as an FTA in the
MASTERDM domain in the live IBM Tivoli Workload Scheduler network with
options set to Full status. Following a switch to the disaster recovery systems,
this agent’s Symphony file will act as reference to the state that the live systems
were in at the point of failure.
During the day, time-stamped database update files should be sent from the live
systems and stored in a reception area on the disaster recovery systems. These
files should be applied at the end of the batch cycle, assuming that a switchover
has not occurred, to ensure that the disaster recovery systems are up to date
to at least the end of the previous working day, and also to conserve space and
reduce the update time during a switchover.
If a failure occurs during a batch cycle, all the update files for the current day will
then need to be applied. Once these have been applied, the batch processing
can then be restarted from the point of the jobs that were running at the time of
the database snapshot. The Agent that is part of the live system and the
time-stamped database files can be cross-referenced to provide this information.
It is important that the batch work be re-started from the point of the last
database snapshot and not the point of failure, as several hours may have
passed and the databases will not be in a state to run this work.
The application of the update files could take several hours, hence the term
warm start.
Once again, this option is only a warm start, since there would be a time delay
between initial failure and the restart of the processing, which would effectively
be from a point prior to that of the failure.
Details of the architecture are covered in 8.6.5, “Hot start” on page 255, while the
recovery and restart procedure are the same as the independent solution.
The entire off-site disaster recovery network would be included within the live
production architecture with the agents that will act as the MDM and BMDM
during the switchover in the MASTERDM Domain and configured as additional
BMDMs to the live MDM. If there is a limit on the number of systems available,
these systems could easily be those that would be running the batch work during
the time the disaster recovery site is in operation. While this is not the ideal
solution, it would be sufficient to allow processing to continue until a return to the
live systems or replacement systems can be implemented.
To aid the schedulers, these agents should have IBM Tivoli Workload Scheduler
workstation names that closely resemble those of the live systems. We would
recommend that the live, or production, agent names start with a “P”, while the
agents that make up the disaster recovery element of the network start with an
“R” and the remainder of the workstation names being the same. Note that “D”
should not be used because it could be confused with a Development network.
This will also allow the operations teams to create lists that let them monitor
either side of the network independently.
The preparation required for this solution is the same as the switchover to the
on-site BMDM, with IBM Tivoli Workload Scheduler database files being copied
to the disaster recovery MDM and BMDM on a daily basis and the creation of
files and scripts to populate the databases and the update of the globalopts file
and FINAL job stream.
Additionally, all objects created for the live systems will need to be duplicated for
the disaster recovery systems. Following our recommendations regarding the
naming of the agents will assist in this process.
The disaster recovery agents would need to have their LIMIT set to 0 and
FENCE set to 100 to ensure that no work would actually run. Jobs would need to
be created to run on these systems and scheduled with a priority of 101 to run
prior to Jnextday, to cancel any carry forward job streams.
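As a sketch (the workstation name RDM001 is illustrative), the limit and fence could
be set from the conman command line:

conman "limit RDM001;0"
conman "fence RDM001;100"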
Since the preparation for this solution is identical to that of the switch to the live
BMDM, then the procedure to perform the switch will be the same, with the
added step of amending the LIMIT and FENCE levels of the agents. Once the
switch has been performed, the processing can be restarted from the point of
failure, which can be identified by examining the plan.
Chapter 9
For example, AIX 5L running on Enterprise Server class IBM pSeries systems
configured for high availability can host IBM Tivoli Workload Scheduler Master
Domain Managers and Domain Managers very effectively.
It is again important to remember that the IBM Tivoli Workload Scheduler engine
is not a cpu-intensive application except when Jnextday is running. In most
configurations, disk I/O is the limiting factor in its performance. Such measures
as putting IBM Tivoli Workload Scheduler on its own physical disks, on a
separate disk adapter, or on a RAID array (especially RAID-0), can boost
performance in a large high-workload IBM Tivoli Workload Scheduler
environment. Ultra2 or Ultra160 (IBM Ultrastar) SCSI storage components can
also relieve I/O bottlenecks. If a server is more than two years old, it may have
storage interfaces and components with performance that falls well below current
standards and may not perform well in particularly demanding environments.
9.1.3 Processes
A Master Domain Manager or Domain Manager will have one writer process for
each agent hosted and one mailman server process for each unique server ID
allocated plus the master mailman process. At least three processes will exist for
each job running locally on the workstation. This can add up to considerably
more processes than most servers are initially configured to handle from a single
user. Open files on Master Domain Manager and Domain Managers can also
add significant load on UNIX systems. It is advisable to work with your system
administrator to tune all given options in advance of a Tivoli Workload Scheduler
deployment. This will ensure that the system has been configured to host the
number of processes that IBM Tivoli Workload Scheduler can generate. Make
sure to check both the system-wide and per-user process limits. If the operating
system hits one of these limits while IBM Tivoli Workload Scheduler is running,
IBM Tivoli Workload Scheduler will eventually stop completely on the node. If this
happens on a Master Domain Manager or Domain Manager, scheduling on
multiple nodes may be affected.
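For example, on most UNIX shells the current per-user limits can be checked as
shown below; the exact flags and the system-wide settings vary by platform, so
treat this as illustrative only:

ulimit -u     # maximum number of processes per user
ulimit -n     # maximum number of open file descriptors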
9.1.5 Inodes
IBM Tivoli Workload Scheduler can consume large numbers of inodes when
storing large amounts of job output on UNIX systems in TWShome/stdlist.
However, inodes are not an issue on Microsoft® Windows operating systems. On
an FTA that runs large numbers of jobs, inode consumption can grow quickly.
Although most new UNIX boxes should not present a problem, it is worth
consideration.
processes and Domain Managers all have roughly similar workloads to process
throughout the day. Concurrency is achieved in larger enterprises by having
multiple mailman server processes and/or Domain Managers simultaneously
distributing and processing data from the Master Domain Manager.
In order to better understand the meaning of Server IDs, let us consider the
following examples.
Figure 9-2 on page 263 shows an example IBM Tivoli Workload Scheduler
domain with no Server ID defined. In this case the main mailman process on
Domain Manager handles all outbound communications with the FTAs in the
domain.
Figure 9-2 An IBM Tivoli Workload Scheduler domain with no Server IDs defined
Figure 9-3 on page 264 shows the same domain with three Server IDs defined.
As seen from the diagram, one extra mailman process is spawned for each
Server ID in the domain.
Figure 9-3 The same configuration with three different server IDs
Figure 9-4 on page 265 shows the usage of extra mailman processes in a
multidomain environment.
Figure 9-4 Usage of extra mailman processes in a multidomain environment
9.1.7 Considerations when designing an IBM Tivoli Workload
Scheduler network
When designing an IBM Tivoli Workload Scheduler network, there are several
things that need to be considered:
What are your job scheduling requirements?
How critical are your jobs to the business? What is the effect on your business
if Domain Manager A (DMA) goes down? Does DMA need a Backup Domain
Manager at the ready?
How large is your IBM Tivoli Workload Scheduler network? How many
computers does it hold? How many applications and jobs does it run?
The size of your network will help you decide whether to use a single domain
or the multiple domain architecture. If you have a small number of computers,
or a small number of applications to control with IBM Tivoli Workload
Scheduler, there may not be a need for multiple domains.
How many geographic locations will be covered in your IBM Tivoli Workload
Scheduler network? How reliable and efficient is the communication between
locations?
This is one of the primary reasons for choosing a multiple domain
architecture. One domain for each geographical location is a common
configuration. If you choose single domain architecture, you will be more
reliant on the network to maintain continuous processing.
Do you need centralized or decentralized management of IBM Tivoli
Workload Scheduler?
An IBM Tivoli Workload Scheduler network, with either a single domain or
multiple domains, gives you the ability to manage IBM Tivoli Workload
Scheduler from a single node, the Master Domain Manager. If you want to
manage multiple locations separately, you can consider the installation of a
separate IBM Tivoli Workload Scheduler network at each location. Note that
some degree of decentralized management is possible in a stand-alone IBM
Tivoli Workload Scheduler network by mounting or sharing file systems.
Do you have multiple physical or logical entities at a single site? Are there
different buildings, and several floors in each building? Are there different
departments or business functions? Are there different applications?
These may be reasons for choosing a multi-domain configuration. For
example, a domain for each building, department, business function, or each
application (manufacturing, financial, engineering, etc.).
Do you have reliable and efficient network connections?
Among these considerations, the layout of the physical network is one of the
most important. The structure of the domains must reflect the topology of the
network in order to best use the communication channels. The following example
explains this.
Let us suppose we have the configuration illustrated in Figure 9-5:
One Master Domain Manager is in New York.
30 FTAs are in New York, and 60 are in New Jersey.
40 FTAs are in Chicago.
5000 jobs run each day, balanced between the FTAs. The Symphony file is 5 MB
in size when Jnextday is run. New Jersey and Chicago are accessed through a
WAN link.
Figure 9-5 Single domain configuration with the Master Domain Manager in New York
The FTAs, upon re-linking to the Master Domain Manager after Jnextday, request
a copy of the Sinfonia file, and the MDM has to send a copy to each FTA. With
100 FTAs across the WAN (60 in New Jersey and 40 in Chicago) each receiving a
5 MB file, the total transferred over the WAN is 500 MB.
Now, look at the second topology shown in Figure 9-6 on page 270:
One Master Domain Manager is in New York.
30 FTAs are in New York.
One Domain Manager with 60 FTAs is in New Jersey.
One Domain Manager with 40 FTAs is in Chicago.
In this topology, each FTA reports to its respective DM, which reports to the
MDM. The DM is also responsible for keeping contact with its FTAs and pushing
down a Sinfonia file to each one after Jnextday.
Even though New Jersey and Chicago are still accessed through the WAN, only
the two Domain Managers receive the 5 MB Sinfonia file over the WAN after
Jnextday, so the total pushed is reduced to 10 MB. This reduces the WAN
traffic considerably.
Additionally, because the DMs are responsible for initializing their own FTAs, it
shortens the length of time from start of Jnextday to start of production across
the network by initializing in parallel.
Figure 9-6 Multiple domain configuration with Domain Managers in Chicago and New Jersey
Therefore, Domain Managers across wide area networks are definitely a good
idea. You need to plan to implement them accordingly.
The number of FTAs in your network topology dictates the number of Domain
Managers you must implement. If you have 200 FTAs with one Domain Manager,
you have not balanced out the message processing because all your FTAs report
to one Domain Manager, which in turn, reports everything to the MDM.
Therefore, you have created a situation where two boxes are hit hard with
incoming messages.
Each FTA generates a writer process on its DM. With UNIX and Linux, you can
configure the number of processes a user can have, but on Microsoft Windows
there is a limit to the number of named pipes (about 100). Each writer on
Windows uses a named pipe. Logically, fewer FTAs under each Domain Manager
allows for faster dependency resolution within that Domain Manager structure.
Each DM processes fewer messages than the DM in the situation previously
listed and reports all messages back to the MDM. This leaves only the MDM
processing all messages.
In the previous situation (scenario with 200 FTAs), you can implement four DMs.
Try to put your FTAs that depend on each other for dependency resolution under
the same DM. If you have inter-Domain Manager dependency resolution, the
messages must still go through the MDM, which has to process all messages
from all four DMs.
Also, in the previous situation, if the MDM has a problem and cannot
communicate with one of the DMs, all dependency resolution required from the
network of that DM does not occur. Therefore, it is a better practice to put all
machines dependent upon each other within the same DM network with no more
than 60 or so FTAs for faster message processing within the DM network.
Tip: When configuring your Domain Manager infrastructure, try to put your
critical servers close to the Master in the Domain Manager hierarchy.
Wrapping up
Here is a summary that compares single and multiple domain networks.
The lack of fault tolerance has caused many organizations to select FTAs over
Standard Agents. But Standard Agents do have some merits in certain cases
and the following summarizes some of the situations where Standard Agents
might be preferred over FTAs:
To facilitate global resources: Global resources do not currently exist for
IBM Tivoli Workload Scheduler. Therefore, the only way to share resources is
through a common manager. Using Standard Agents can help in this
situation.
Low-end machines: If you need to install the IBM Tivoli Workload Scheduler
agent on low-end machines with little CPU and memory power, Standard
Agents might be the preferred choice, since they require fewer machine resources.
Cluster environments: For cluster environments Standard Agents might help
because they require simpler configuration for fall-back situations than FTAs.
However, if you do not have these requirements, you should prefer FTAs over
Standard Agents because of their greater functionality and the inherent network
fault-tolerant characteristics of FTAs.
For outages that do not cross Jnextday, access to the database will be required
for ad hoc job submission. If an outage occurs during the Jnextday process,
access to the database will be needed for the Jnextday process to complete
successfully. There are a number of possible approaches for continuous
processing of the workday:
Copy the IBM Tivoli Workload Scheduler databases to the Backup Domain
Manager on a regular basis. The contents of the TWShome/mozart directory
should be copied following Jnextday so that the Backup Master Domain
Manager is aware of the current Plan run number, plus the database files
contained within the TWShome/network directory.
Store the IBM Tivoli Workload Scheduler database files on a shared disk as
typically found in a clustered environment.
Remotely mount IBM Tivoli Workload Scheduler databases from another
machine. This is generally not a good practice, since in the event of a failure
of the server physically holding the files, access to the files is lost at the very
least and potentially IBM Tivoli Workload Scheduler may hang on the servers
with the remote mounts until the failed server is restored.
All will work with IBM Tivoli Workload Scheduler, but the installation and
configuration can be complex. Consider Tivoli Services for large implementations
if expertise is not available in-house, or contact your hardware or software vendor
for information about high availability server configurations.
The job definition for distributed jobs in IBM Tivoli Workload Scheduler contains a
pointer (the path or directory) to the script. The script by itself is placed locally on
the Fault Tolerant Agent. Since the Fault Tolerant Agents have a local copy of the
Plan (Symphony) and the script to run, they can continue running jobs on the
system even if the connection to the IBM Tivoli Workload Scheduler Master is
broken. This way we have the fault tolerance on the workstations.
We suggest placing all scripts used for production workload in one common
script repository. The repository can be designed in different ways. One way
could be to have a subdirectory for each fault-tolerant workstation (with the same
name as the name on the IBM Tivoli Workload Scheduler workstation).
All changes to scripts are done in this production repository. On a daily basis, for
example, just before the Jnextday process, the master scripts in the central
repository are distributed to the Fault Tolerant Agents. The daily distribution can
be handled by a Tivoli Workload Scheduler scheduled job.
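A minimal sketch of such a distribution job is shown below, assuming one repository
subdirectory per workstation and the use of rsync over ssh (both are assumptions
made for illustration, not product features):

#!/bin/sh
# Distribute the master copies of production scripts to each FTA.
REPO=/prod/scripts
for ws in fta001 fta002 fta003; do
    rsync -a "$REPO/$ws/" "maestro@$ws:/opt/maestro/scripts/"
done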
This approach can be made even more advanced, for example, by using a
software distribution application to handle the distribution of the scripts. This way,
the software distribution application can help keep track of different versions of
the same script. If you encounter a problem with a changed script in a production
shift, you can simply ask the software distribution application to redistribute a
previous version of the same script and then rerun the job.
Security files
The IBM Tivoli Workload Scheduler Security file is used to protect access to
database and Plan objects. On every IBM Tivoli Workload Scheduler engine
(Domain Manager, Fault Tolerant Agent, etc.) you can issue conman commands
for the Plan and composer commands for the database. IBM Tivoli Workload
Scheduler Security files are used to ensure that the right people have the right
access to objects in IBM Tivoli Workload Scheduler.
Security files can be created or modified on every local IBM Tivoli Workload
Scheduler workstation and they can be different from ITWS workstation to ITWS
workstation.
Unless you have a firm requirement for different Security files (due to company
policy, etc.), we suggest that you use one of the two approaches that do
not require maintenance of different Security files on individual workstations:
Use the centralized security function
IBM Tivoli Workload Scheduler Version 8.2 introduced a function called the
centralized security. If used, this function allows the Security files of all the
workstations of the IBM Tivoli Workload Scheduler network to be created and
modified only on the Master. The IBM Tivoli Workload Scheduler
administrator is responsible for the production, maintenance, and distribution
of the Security files.
This has the benefit of preventing risks for someone tampering with the
Security file on a workstation, but the administrator still has the job of
distributing the Security file on workstations.
The parameter database can then be managed and updated centrally. On a daily
basis the updated parameter database can be distributed to all the IBM Tivoli
Workload Scheduler workstations. The process can be as follows:
1. Update the parameter database daily.
This can be done by a daily job that uses the IBM Tivoli Workload Scheduler
parms command to add or update parameters in the parameter database.
2. Create a text copy of the updated parameter database using the IBM Tivoli
Workload Scheduler composer create command:
composer create parm.txt from parm
3. Distribute the text copy of the parameter database to all your IBM Tivoli
Workload Scheduler workstations.
4. Restore the received text copy of the parameter database on the local IBM
Tivoli Workload Scheduler workstation using the IBM Tivoli Workload
Scheduler composer replace command:
composer replace parm.txt
These steps can be handled by one job in IBM Tivoli Workload Scheduler. This
job could, for example, be scheduled just before or after Jnextday runs.
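A minimal sketch of such a job is shown below, assuming an installation path of
/opt/maestro, ssh/scp connectivity to the workstations, and illustrative host names:

#!/bin/sh
# Export the parameter database, copy it to each workstation, and load it there.
TWSHOME=/opt/maestro
cd "$TWSHOME"
composer create /tmp/parm.txt from parm
for host in fta001 fta002; do
    scp /tmp/parm.txt "maestro@$host:/tmp/parm.txt"
    ssh "maestro@$host" "cd /opt/maestro && composer replace /tmp/parm.txt"
done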
9.2 Deployment
In the following sections you will find some best practices for deploying an IBM
Tivoli Workload Scheduler implementation.
9.2.1 Installing a large IBM Tivoli Workload Scheduler environment
Tivoli products, such as the Tivoli Management Agent and Tivoli Software
Distribution, can assist in the deployment of large numbers of IBM Tivoli
Workload Scheduler agents. In this release, IBM Tivoli Workload Scheduler
components can be installed by distributing a software package block (SPB),
using either Tivoli Software Distribution Version 4.1 or the Software Distribution
component of IBM Tivoli Configuration Manager Version 4.2. An SPB exists for
each supported Tier 1 platform. See “Installing The Product Using Software
Distribution” in the Tivoli Workload Scheduler Version 8.2, SC32-1273.
Patching is one of the most important things you can do for your systems. If you
are a part of an organization that requires several authorizations to apply a
patch, then consider patching on a quarterly basis.
Download all newly released patches on a quarterly basis, and have a schedule
set up in advance that you implement quarterly for testing and paper processing.
This way, you can implement three months of patches four times a year instead
of out-of-process patching when problems arise.
This can alleviate tensions between parties in the organization who consistently
receive emergency requests to patch when proper procedures were not followed.
If you require no testing or paper processing to patch your IBM Tivoli Workload
Scheduler systems, you can consider patching your systems as soon as a patch
is available. If it is an engine patch, you can automate the task by distributing
and applying the patch remotely.
Furthermore, if a patch has an update for the IBM Tivoli Workload Scheduler
Connector or the Job Scheduling Services, you can implement these through the
command line, and you can automate this process as well.
The patches for IBM Tivoli Workload Scheduler can be downloaded via
anonymous FTP from:
ftp://ftp.software.ibm.com/software/tivoli_support/patches/
or HTTP from:
http://www3.software.ibm.com/ibmdl/pub/software/tivoli_support/patches/
Tip: The use of host names in node name fields relies on the presence and
maintenance of DNS or fully populated and maintained host files. This should
be seen as an advantage, since when using host names instead of IP
addresses, IBM Tivoli Workload Scheduler will attempt to resolve the IP
addresses. If these are not correctly configured within the DNS or hosts file,
linking processes will fail or slow down significantly. Therefore it is better to
have the linking process fail and get the problem identified and fixed than to
have it hidden.
When the maximum size of a message (.msg) file is exceeded, processing might slow down
or IBM Tivoli Workload Scheduler might shut itself down on the local node.
The size of these files will grow if any of the following conditions exist:
An agent is down and no operator intervention is performed.
Network connectivity has been lost without operator intervention.
Performance problems on the Master cause it to be unable to keep up with
the pace of messages from the agents.
In some situations, it is necessary to increase the size of these files. In most
cases, however, the IBM Tivoli Workload Scheduler administrator should work to
eliminate the cause of the message backlog rather than increasing the size of
the message files.
For example, to increase the size of the mailman message file (Mailbox.msg) to 20 MB,
use the following command:
$ evtsize Mailbox.msg 20000000
This change will remain until the file is deleted and re-created.
You can use the following command to query the size of a message file:
evtsize -show <filename.msg>
To change the default creation size for all .msg files to 15 MB, add the following
line to TWShome/StartUp and TWShome/.profile:
EVSIZE=15000000
export EVSIZE
Before running any of these commands, make sure that you have a verified and
available backup of at least the IBM Tivoli Workload Scheduler file system(s).
Tip: If you anticipate that a workstation will be down for some time due to a
network problem or maintenance, check the Ignore box in the workstation
definition of this workstation. This will prevent the workstation from being
included in the next production cycle, thereby preventing the increase of
message file sizes due to this inactive workstation. When the problem is
resolved, you can uncheck the Ignore box in the workstation definition to
include the workstation in the production cycle again.
You can change the attributes of the TWSCCLog.properties file to shrink the size
of the log files. Example 9-2 on page 281 shows the TWSCCLog.properties file.
For instance, by commenting out the lines in Example 9-2 on page 281 shown in
bold, you can suppress most of the extra information in the headers.
Example 9-2 TWSCCLog.properties
tws.loggers.level=INFO
twsHnd.logFile.className=ccg_multiproc_filehandler
#twsHnd.logFile.MPFileSemKey=31111
twsHnd.logFile.formatterName=formatters.basicFmt
#----------------------------------------------
# basic format properties
#----------------------------------------------
formatters.basicFmt.className=ccg_basicformatter
formatters.basicFmt.dateTimeFormat=%H:%M:%S %d.%m.%Y
formatters.basicFmt.separator=|
#tws.loggers.organization=
#tws.loggers.product=
#tws.loggers.component=
Example 9-3 shows the TWSMERGE log before commenting out those lines
(tws.loggers.organization=, tws.loggers.product=, and
tws.loggers.component=) and Example 9-4 shows the same log file with those
lines commented out.
Tip: Output in IBM Tivoli Workload Scheduler message logs can be formatted
in XML format by changing the CCLOG attributes, but it is not recommended
for production use, because it would cause an increase in the size of log files.
rep8 -F $FDATE -T $TDATE -B $FTIME -E $TTIME: Creates histogram for
yesterday’s production day.
logman $HOME/schedlog/M$DATE: Logs all the statistics of all jobs run in the
last production day. All statistics are written to the Jobs database.
Figure 9-7 on page 283 shows the sequence of operations during the Jnextday
process.
Figure 9-7 Sequence of operations during the Jnextday process: schedulr reads the
scheduling database objects (workstations, workstation classes, domains, calendars,
jobs, job streams, prompts, resources, and NT users) and produces the production
schedule file (prodsked); compiler turns it into the interim plan file (Symnew); stageman
then merges in the incomplete carry forward job streams from the old Symphony file and
creates the new plan file (Symphony) and Sinfonia.
Jnextday should run at a time of day when there is the least scheduling activity, such
as early in the morning (like the default start time of 6:00 AM) or late in the afternoon.
If you must run Jnextday at midnight, you must set the final schedule to run a few
minutes past midnight. Remember, when changing your final schedule run time,
you must change your start of day (start) in your TWShome/mozart/globalopts
file to begin one minute later than the Jnextday run time.
end of day, while the second job is scheduled to run just prior to the end of
day.
See 9.6.6, “Netman services and their functions” on page 325 for more
information on options in the NetConf file.
9.2.9 Monitoring
Automated monitoring is an essential part of a successful IBM Tivoli Workload
Scheduler implementation. IBM Tivoli Monitoring, IBM Tivoli Enterprise Console,
and IBM Tivoli Business Systems Manager are good examples of products that
can be linked to IBM Tivoli Workload Scheduler to monitor status and events
from IBM Tivoli Workload Scheduler. IBM Tivoli Monitoring can be used to
monitor production system usage. Tivoli Enterprise Console's logfile adapters
can be used to transmit selected IBM Tivoli Workload Scheduler events to a
centralized IT console. IBM Tivoli Data Warehouse can also be used in
conjunction with IBM Tivoli Workload Scheduler to provide an enterprise-wide
central reporting repository for the whole IT infrastructure and a Web-based
infrastructure for historical reports.
Note: Using the solutions described in the redbook Integrating IBM Tivoli
Workload Scheduler and Content Manager OnDemand to Provide Centralized
Job Log Processing, SG24-6629, you can archive your stdlist files in a central
location. This allows you to delete the original stdlist files on the local
workstations, reducing the disk space required by IBM Tivoli Workload
Scheduler.
Tip: If you have, for example, a 2 GB file system, you might want a warning at
80 percent, but if you have a smaller file system you will need a warning when
a lower percentage fills up.
Monitoring or testing for the percentage of the file system can be done by, for
example, IBM Tivoli Monitoring and IBM Tivoli Enterprise Console.
Example 9-6 is an example of a shell script that will test for the percentage of the
IBM Tivoli Workload Scheduler file system filled, and will report back if it is over
80 percent.
Example 9-6 Shell script that monitors free space in the /dev/lv01 ITWS file system
:
# @(#)check_twsfs.sh 1.1 2003/09/03
# monitor free space in the TWS file system
# (sketch: the script name, df output column, and 80% threshold are assumptions)
PCTUSED=`df -k /dev/lv01 | tail -1 | awk '{print $4}' | tr -d '%'`
if [ "${PCTUSED}" -gt 80 ]
then
echo "WARNING: TWS file system /dev/lv01 is ${PCTUSED}% full"
fi
# All done
exit 0
9.2.10 Security
A good understanding of IBM Tivoli Workload Scheduler security implementation
is important when you are deploying the product. This is an area that sometimes
creates confusion, since both Tivoli Framework security and IBM Tivoli Workload
Scheduler native security (the Security file) need to be customized for a user to
manage IBM Tivoli Workload Scheduler objects through JSC. The following
explains the customization steps in detail, but keep in mind that these are for JSC
users:
Step 1:
The user ID that is entered in the JSC User Name field must match one of the
Current Login Names of a Tivoli Administrator (Figure 9-8 on page 289). Also
the user ID must be defined on the Tivoli Framework machine (usually TMR
Server, but can also be a Managed Node) and the password must be valid.
Tips:
The easiest way to understand whether the user ID/password pair is
correct is to do a telnet with the user name and password to the Tivoli
Framework machine. If the telnet is successful, that means that the
user ID/password pair is correct.
If the Connector instance was created on a TMR Server (which is usually
the case), you need to enter the TMR Server host name (or IP address) in
the Server Name field. If the Connector instance was created on a
Managed Node, you can either enter the Managed Node’s host name or
the host name of the TMR Server that this Managed Node reports to. The
only thing you need to ensure is that the user name that you enter in the
User Name field is a valid user on the machine that you enter in the
Server Name field.
Step 3:
Finally, the Tivoli Administrator has to have at least the user TMR role in order to
view and modify the Plan and Database objects. If this user also needs to
create a new Connector instance or change an existing one, you need to give
him or her the admin, senior, or super TMR roles as well. This is shown in
Figure 9-10 on page 291.
A JSC or conman user that runs any of the above commands must have the LIST
authority for an object in the Security file to see it in any objects lists resulting
from the above commands. If a user does not have LIST access to an object in
the Security file, this object will not be shown in any resulting list from the above
commands.
In IBM Tivoli Workload Scheduler 8.2, the LIST authority is automatically granted
in the base Security file that is installed with the product. When you migrate from
an earlier version of IBM Tivoli Workload Scheduler (where the LIST authority was
not supported in the Security file), you have to manually add LIST access in each
Security file for all the users who should have it, and then enable the feature by
setting enable list security check = yes in the globalopts file.
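As an illustration, a Security file stanza that grants a group of JSC users display and list access to jobs and job streams might look like the following (the user name and logon are placeholders, and the access lists are kept deliberately short; after editing the Security file, compile it with the makesec command):
USER JSCOPERATORS
CPU=@+LOGON=jscuser
BEGIN
JOB CPU=@ ACCESS=DISPLAY,LIST
SCHEDULE CPU=@ ACCESS=DISPLAY,LIST
END
The check itself is then switched on in the globalopts file:
enable list security check = yes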
Note: IBM Tivoli Workload Scheduler will not work properly with one-way TMR
connections.
If you cannot see all instances in the JSC, select Connect from the Tivoli
desktop to exchange the resources between Tivoli Management Regions.
Tip: You can also manually update the resources from the command line
using the wupdate -f -r all Tivoli command.
SchedulerDatabase
SchedulerPlan
To verify the exchange of resources, issue the following command on one of the
TMRs:
wlookup -ar MaestroEngine
The output of this command should show the resources of both TMRs for a
successful interconnection.
Note: Tuning localopts is a specialized job, so if you are not very familiar with
what you are doing, it might be best to get help from Tivoli Services.
nm port =31111
nm read =10
nm retry =800
#
#----------------------------------------------------------------------------
# Attributes of this Workstation for TWS writer process:
#
wr read =600
wr unlink =120
wr enable compression=no
#
#----------------------------------------------------------------------------
# Optional attributes of this Workstation for remote database files
#
# mozart directory = /usr/local/tws/mozart
# parameters directory = /usr/local/tws
# unison network directory = /usr/local/tws/network
#
#----------------------------------------------------------------------------
# Custom format attributes
#
date format =1 # The possible values are 0-ymd, 1-mdy, 2-dmy, 3-NLS.
composer prompt =-
conman prompt =%
switch sym prompt=<n>%
#
#----------------------------------------------------------------------------
# Attributes for customization of I/O on mailbox files
#
sync level =high
#
#----------------------------------------------------------------------------
# Network Attributes
#
tcp timeout =600
#
#----------------------------------------------------------------------------
# SSL Attributes
#
nm SSL port =0
SSL key =/usr/local/tws/ssl/TWS.key
SSL certificate =/usr/local/tws/ssl/TWS.crt
SSL key pwd =/usr/local/tws/ssl/TWS.sth
SSL CA certificate =/usr/local/tws/ssl/TWSTrustedCA.crt
SSL certificate chain =/usr/local/tws/ssl/TWSCertificateChain.crt
SSL random seed =/usr/local/tws/ssl/TWS.rnd
SSL Encryption Cipher =SSLv3
SSL auth mode =caonly
A special mechanism ensures that messages considered essential are not put
into cache but are immediately handled. This avoids loss of vital information in
case of a mailman failure. The settings in the localopts file regulate the behavior
of mailman cache:
mm cache mailbox: The default is no. Specify yes to enable mailman to use
a reading cache for incoming messages.
mm cache size: Specify this option only if you use the mm cache mailbox
option. The default is 32 bytes and should be a reasonable value for most of
the small and medium-sized IBM Tivoli Workload Scheduler installations. The
maximum value is 512 and higher values are ignored.
Tip: If necessary, you can experiment and increase this setting gradually to
improve performance. You can use larger values than 32 bytes for large networks.
But in small networks, be careful to not set this value unnecessarily large,
since this would reduce the available memory that could be allocated to other
applications or other IBM Tivoli Workload Scheduler processes.
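For example, a localopts fragment that enables the cache with a larger value for a big network might look like this (the size shown is only an illustration; tune it gradually as described above):
# Attributes for customization of I/O on mailbox files
mm cache mailbox = yes
mm cache size = 64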
The following setting in the localopts is used to set the compression in IBM Tivoli
Workload Scheduler.
wr enable compression=yes: This means that Sinfonia will be compressed.
The default is no.
When mailman wants to link with another workstation it connects to netman and
requests that writer be started. Netman starts the writer process and hands the
socket connection from mailman to it. Mailman and writer do some handshaking
and then writer waits for commands from mailman. It does this by issuing a read
on the socket.
The read will wait until either mailman sends some data on the socket or a
timeout happens. If writer has not heard from mailman in two minutes, it unlinks
the connection. Mailman polls writer every minute to see if the connection is up
and to let writer know that everything is fine. If the send aborts (due to a network
problem or writer is down) mailman unlinks from the workstation.
There are some entries in localopts that can be used to tune the above algorithm.
The configurable timeout parameters related with the writer are:
wr unlink: Controls how long writer waits to hear from mailman before
unlinking.
wr read: Controls the timeout for the writer process.
Writer issues a read and times out in wr read seconds. It then checks if at least
wr unlink seconds have passed since it has heard from mailman.
If not, it goes back to the read on the socket else it unlinks. For wr read to be
useful in the above algorithm, it needs to be less than wr unlink.
Note: We recommend that you set the wr read to be at most half of wr unlink.
This gives writer the chance to read from the socket twice before unlinking.
Also, the following are the configurable timeout parameters related to mailman:
mm retrylink: Controls how often mailman tries to relink to a workstation.
mm unlink: Controls the maximum number of seconds mailman will wait
before unlinking from a workstation that is not responding. The default is 960
seconds.
mm response: Controls the maximum number of seconds mailman will wait
for a response. The response time should not be less than 90 seconds.
The wait time (mm unlink) should not be less than the response time specified for
mm response. If you receive a lot of timeouts in your network, increase mm unlink in
small increments until the timeouts cease. This could also be a sign of
performance problems on the Domain Manager or Master Domain Manager. If
the Domain Manager or Master Domain Manager is unable to handle the pace of
incoming records from the agents, timeouts can occur on Fault Tolerant Agent to
Domain Manager or Master Domain Manager communications. A hardware
upgrade or system reconfiguration might be considered.
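As an illustration, a localopts fragment that respects both rules (wr read at most half of wr unlink, and mm unlink not less than mm response) might look like this; the values are examples only, not recommended defaults:
wr read = 300
wr unlink = 600
mm response = 600
mm unlink = 960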
If you use a lot of file dependencies in your job streams, decreasing the values of
these parameters might help speed up the file dependency resolution. But you
should be careful because it is a trade-off between speed and CPU utilization.
Unlike other dependency checks, file dependency checking is an external
operation to IBM Tivoli Workload Scheduler. IBM Tivoli Workload Scheduler
relies on the operating system to do the checking. For this reason, it is the most
expensive operation in terms of resource usage. This is also evident from the fact
that IBM Tivoli Workload Scheduler attempts to check the existence of a file only
after all dependencies (for example predecessor, time, or prompt dependencies)
have been satisfied.
Finally to a lesser extent, the following parameters also affect the speed of file
dependency resolution:
bm look: Controls the minimum number of seconds batchman will wait before
scanning and updating its production control file. The default is 30 seconds.
jm look: Controls the minimum number of seconds jobman will wait before
looking for completed jobs and performing general job management tasks.
The default is five minutes.
If you have the available CPU cycles, you may want to compromise by lowering
these values.
The bm check deadline interval has 0 as the default value. The user who wants
IBM Tivoli Workload Scheduler to check the deadline times defined must add the
bm check deadline interval to the localopts with a value greater than 0. If bm
check deadline is 0, no check on deadline time will be performed, even if
deadline time has been defined in the database. The reason for this default value
(0) is to avoid any performance impact for customers who are not
interested in the new function.
As a best practice, we recommend that you enable the check of the deadline
times only on Domain Managers rather than on each FTA of the topology. Since
Domain Managers have the status of all jobs running on their subordinate FTAs,
for performance reasons it is better to track this at the domain level.
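A minimal localopts sketch for a Domain Manager, assuming the option name described above and an illustrative 5-minute interval:
# check deadline times every 300 seconds (Domain Managers only)
bm check deadline = 300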
introduction of a dummy job into a job stream if the dependency is on a
number of jobs in the first job stream. As a general rule, with the exception of
resources (see 9.4.2, “Resources” on page 302), if a dependency relates to
the first job in a job stream, then the dependency should be placed on the job
stream.
Start time and deadline times should be kept to a minimum. If possible, use
predecessor dependencies instead.
Where start times are used to control the start of a batch run in conjunction
with simultaneously starting job streams, prompts should be considered to
assist control.
In most of the cases, use resources at the job level as opposed to the
jobstream (or schedule) level. See 9.4.2, “Resources” on page 302 for more
details on this.
Avoid using cross-workstation resources.
When using file dependencies, a start time should be used to ensure that file
checking does not start before the earliest arrival time of the file, or a
predecessor dependency if the file is created by a prior job.
The number of calendars should be kept to a minimum, to ensure rapid
progress during the Jnextday job. Offsets to existing calendars should be
considered when determining a run cycle.
Make sure calendars are kept up to date with a last-included date specified in
the description.
If possible, automate recovery procedures.
Always use parameters in jobs for user login and directory structures to aid
migration between IBM Tivoli Workload Scheduler environments.
It is a good practice to use unique object names of no more than seven characters,
which are easier to manage since JSC columns do not need to be constantly
resized. It is also good practice not to include information in object names
that will be duplicated by conman/JSC, such as the CPU and JOB STREAM names in
JOB names.
Use a planned naming convention for all scheduling objects. This will be
explained in more detail in the next section.
Again, using sensible and informative naming conventions can speed up the IBM
Tivoli Workload Scheduler network in terms of:
Rapid problem resolution
Fewer queries are generated leading to better IBM Tivoli Workload Scheduler
network performance.
Tip: In the naming convention, you might also include the name of the operating
system that the job or job stream is intended for.
9.4.2 Resources
Resources can be either physical or logical resources on your system. When
defined in the IBM Tivoli Workload Scheduler database, they can be used as
dependencies for jobs and job streams. Resources at the job stream level are
allocated at the beginning of the job stream launching. Even though a job stream
may take several hours to complete, the resources may only be needed for a
short time. For this reason, in most cases allocating a resource at the job level is
the better choice.
Note: Ad hoc jobs take up more space in the IBM Tivoli Workload Scheduler
database than batch jobs.
If you find that you are submitting large quantities of jobs for purposes other than
testing new jobs and/or job streams, either batch scheduling software is not
appropriate for the purpose at hand or your workload is in need of some
restructuring to enable it to be scheduled in advance.
Notes:
File dependency was called OPENS dependency in the legacy Maestro™
terminology. If you use the command line, you will still see OPENS for file
dependency checking, since IBM Tivoli Workload Scheduler command line
still uses legacy Maestro terminology.
The Not Checked status from the File Status list in JSC indicates that IBM
Tivoli Workload Scheduler has not got to the point of finding the file.
IBM Tivoli Workload Scheduler will not look for any of the remaining seven files
until the first file exists. This means that it is possible that files 2 through 8 can
exist, but since file 1 does not exist it will not check for the other files until file 1 is
found. Once file 1 exists, then it will check for each of the remaining files. This
can take several minutes before it gets to the last file.
It is important to note that once the file exists, IBM Tivoli Workload Scheduler will
no longer check for that dependency for that job/job stream if commas are used
to separate the list of files. This means that if files 1 through 7 existed and have
later been removed before file 8 existed, as soon as file 8 exists the job/job
stream will launch, even if the other files no longer exist. This is important if your
job/job stream needs those files.
Tip: One way to get around this file dependency checking problem is to use
the qualifier with the “AND” modifier (a parameter when specifying the file
dependency).
With a normal file dependency, the files are checked one at a time, in the order
listed, as described in the previous paragraph. But, you can do a qualified file dependency as follows
(The following shows this done from the command line, but you can also do it
from the JSC):
OPENS "/var/tmp/incoming" (-f %p/file1 -a -f %p/file2 -a -f %p/file3 -a -f
%p/file4 -a -f %p/file5 -a -f %p/file6)
# Entries introduced in TWS-8.2
centralized security =yes
enable list security check =no
Tip: Do not set this property value too low. Otherwise, should the displayed list
be very large, the interval between auto-refreshes might be less than the time
it takes to actually refresh, and Job Scheduling Console will appear to lock up
on you. Also if you are using several detached windows (you can detach up to
seven windows) setting the refresh rate properly becomes even more
important.
9.5.5 Setting the buffer size
Setting the buffer size property determines how many list items an ongoing
search loads for you to view. For example, if you select 100, the results of a list
are sent in blocks of 100 lines. The default is 50. Buffering more items quickens
browsing of those loaded in memory but increases the load time each time Job
Scheduling Console searches for more items from the Plan.
Conversely, a smaller buffer size slows item display somewhat, but the refresh
from the Plan goes faster. You need to experiment with buffer size to determine
what works best for your Job Scheduling Console instance.
9.5.6 Iconize the JSC windows to force the garbage collector to work
Whenever you need to decrease the amount of memory the JSC is using, you
can minimize all the JSC windows. In this way the Java garbage collector starts
its work and releases the unnecessary allocated memory. This decreases the
memory used by JSC.
Tips on creating query lists:
For performance reasons, try to create query lists that do not return too many
jobs or job streams in the same view. From a window management point of
view, you should aim at little more than a screenful of data per
query.
A good naming convention is the key factor for creating efficient query lists.
The Common Plan lists will connect to all available Connectors and
produce the necessary output. While this may be helpful for some
operations with multiple IBM Tivoli Workload Scheduler instances, you
have to keep in your mind that:
– Connecting to all the Connectors and retrieving the information
generates unnecessary load on the system, if you have some
Connectors that you do not want to use. (To overcome this problem, you
can modify the properties of the Common Plan list to exclude the
Connectors that you do not want to include in the list.)
– It might return duplicate information if you have Connectors for different
parts of the same IBM Tivoli Workload Scheduler network, such as the
IBM Tivoli Workload Scheduler Master z/OS and the Primary Domain
Manager.
Especially if you have a large number of scheduling objects, we
recommend that you delete all of the “All XXX object” queries and build
your own from scratch. A good idea is to organize your query lists by
workstations, or groups of workstations, or by job states. Figure 9-12 on
page 312 shows a sample JSC configuration for an operator responsible
for accounting and manufacturing jobs. This configuration is based on
workstations and job states. If for example accounting jobs are more critical
than the manufacturing jobs, you can set the refresh rate low for the
accounting lists and high for the manufacturing lists.
An alternative for creating query lists is to leave the default JSC lists (“All
XXX object” lists), but to change the filter conditions of these lists each time
you do a query. This on-the-fly query model is a good idea if you have a
well-defined naming convention that allows you to easily identify objects
from their names. In this case, changing the filter criteria each time you do
a query might be more practical than creating separate query lists, since
there might be too many filtering possibilities to create a query list in
advance.
Remember that all these lists can be made available to all JSC users, simply by
saving the preferences.xml file and propagating it to your JSC users. User
preferences are stored in a file named preferences.xml. The file contains the
names and the details, including filters, of all the queries (or lists) that were saved
during a session. Every time you close the Job Scheduling Console, the
preferences.xml file is updated with any queries you saved in, or deleted from,
the Job Scheduling Tree.
Note: Note that in the previous version of JSC (JSC 1.2), the
globalpreferences.ser file was used for propagating user settings.
On Windows:
C:\Documents and Settings\Administrator\.tmeconsole\user@hostname_locale
Where:
– user is the name of the operating system user that you enter in the User
Name fields in the JSC logon window. It is followed by the at (@) sign.
– hostname is the name of the system running the Connector followed by
the underscore (_) sign.
– locale is the regional settings of the operating system where the
Connector is installed.
For example, suppose that, to start the Job Scheduling Console, you log on for
the first time to machine fta12, where the Connector was installed by user
ITWS12. A user directory named ITWS12@fta12_en_US (where en_US stands
for English regional settings) is created under the path described above in your
workstation.
Every time you log onto a different Connector, a new user directory is added, and
every time you close the Job Scheduling Console, a preferences.xml is created
or updated in the user directory that matches your connection.
Note: The preferences.xml file changes dynamically if the user logs onto the
same Connector and finds that the regional settings have changed.
If you want to propagate a specific set of queries to new users, copy the relevant
preferences.xml file to the path described above on the users’ workstations. If you
want to propagate a preference file to existing users, have them replace their
own preferences.xml with the one you have prepared.
Note: Note that the preferences.xml file can also be modified with a simple
text editor (for example to create multiple lists with similar characteristics), but
unless you are very familiar with the file structure, we do not recommend that
you manipulate the file directly but use the JSC instead.
Example 9-10 shows a JSC startup script for the Windows 2000 platform. Note
that unlike the JSC 1.2 startup script, in JSC 1.3 these parameters were removed
from the script. When you edit the script with an editor, you will not see the -Xms
and -Xmx parameters, and the default values of these parameters are used. If you
want to change the default values, you need to explicitly add these parameters
to the command that starts the JSC, as shown (in bold) in Example 9-10.
Tips:
The default values (64 for Xms and 256 for Xmx) are average values for all
platforms and for average machine configurations. To get better
performance you can change these values based on your machine
environment. If, for example, you have a machine with 512 MB of RAM
memory, a good choice is the values that are given in Example 9-10, but if
you have a machine with 256 MB of RAM memory, it is better to use
-Xms64m -Xmx128m.
Messages such as “out.java.lang.OutofMemoryError” in the JSC error.log
file point out that these options (particularly -Xmx) should definitely be
increased.
If you need more details on these settings, you can refer to:
http://java.sun.com/j2se/1.3/docs/tooldocs/win32/java.html
Note: Do not forget to make a backup of the JSC startup script before you test
the settings.
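As an illustration only (the jar name below is a placeholder, not the actual JSC launcher; the real startup script sets its own class path and options), the heap parameters are passed on the java command line like this:
java -Xms128m -Xmx256m -jar jsc.jar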
adds to it any job streams from the previous Symphony file that need to be
carried forward – usually incomplete job streams that have the
CARRYFORWARD option enabled.
Figure 9-14 and Figure 9-15 on page 318 show the IBM Tivoli Workload
Scheduler process tree on Windows and UNIX respectively.
The callouts in these figures describe the main processes:
The mailman program is the master message handling program.
One writer process is spawned by netman for each connected TWS agent.
The batchman program handles all changes to the Symphony file.
The jobman program keeps track of job states.
serverA: a new server ID mailman process is spawned on a domain manager for each
server ID defined in a workstation definition of a workstation in that domain. This new
mailman process connects to all the agents in that domain that are defined with that
particular server ID (in this example, the server ID “A”).
Message files and TWS processes: changes to the Symphony file are passed through the
Mailbox.msg files by the mailman processes on the Master and on the remote FTA.
Note: Batchman, jobman, and their related message files are also present on
the FTA.
Figure 9-17 A closer look at the IBM Tivoli Workload Scheduler Connector
The IBM Tivoli Workload Scheduler Connector consists of five different programs
as described below. Each program has a specific function. The oserv program
will call the connector program appropriate for the action being performed in JSC.
maestro_database: Performs direct reads and writes of the IBM Tivoli
Workload Scheduler database files in TWShome/mozart (just like composer).
maestro_plan: Reads the Symphony file directly but it submits changes to
the Symphony file by queuing events to the Mailbox.msg file (just as conman
does).
maestro_engine: Submits start and stop events to netman via the
NetReq.msg file (just as conman does).
Figure 9-18 Retrieval of FTA joblog – conman on MDM
3. Netman on that workstation spawns scribner and hands over the TCP
connection with conman to the new scribner process.
4. Scribner retrieves the joblog and sends the joblog to conman on the Master.
9.6.6 Netman services and their functions
The netman configuration file exists on all IBM Tivoli Workload Scheduler
workstations. The name of this file is TWShome/network/NetConf and it defines
the services provided by netman. The NetConf file supplied by Tivoli includes
comments describing each service.
Note: In previous versions of IBM Tivoli Workload Scheduler, you had the
option of installing netman in a different directory. IBM Tivoli Workload
Scheduler Version 8.2 installation program always installs it in the
TWShome/network directory.
The first three of the above directories are the most important to maintain. We
will go into details of these and show you how to automate the cleanup process
of these directories.
Example 9-12 Files in a stdlist, ccyy.mm.dd directory
NETMAN Message file with messages from NETMAN process
O19502.0908 File with job log for job with process no. 19502 run at 09.08
O19538.1052 File with job log for job with process no. 19538 run at 10.52
O38380.1201 File with job log for job with process no. 38380 run at 12.01
TWS File with messages from MAILMAN, BATCHMAN, JOBMAN and WRITER
Example 9-13 shows the contents of the logs directory. There is one
<yyyymmdd>_NETMAN.log (for netman messages) and one
<yyyymmdd>_TWSMERGE.log file (for mailman, batchman, jobman, and writer
messages) for each day in this directory.
Each job that is run under IBM Tivoli Workload Scheduler’s control creates a log
file in the IBM Tivoli Workload Scheduler’s stdlist directory. These log files are
created by the IBM Tivoli Workload Scheduler job manager process (jobman)
and will remain there until deleted by the system administrator.
The easiest way to maintain the growth of these directories is to decide how long
the log files are needed and schedule a job under IBM Tivoli Workload
Scheduler’s control, which removes any file older than the given number of days.
The rmstdlist command can perform the deletion of stdlist files. The rmstdlist
command removes or displays files in the stdlist directory based on the age of
the files.
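The command syntax is as follows (a sketch reconstructed from the options described below):
rmstdlist [-v | -u]
rmstdlist [-p] [age]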
Where:
-v Displays the command version and exits.
-u Displays the command usage information and exits.
-p Displays the names of qualifying standard list file directories. No directories or
files are removed. If you do not specify -p, the qualifying standard list files
are removed.
age The minimum age, in days, for standard list file directories to be displayed
or removed. The default is 10 days.
The following example displays the names of standard list file directories that are
more than 14 days old:
rmstdlist -p 14
The following example removes all standard list files (and their directories) that
are more than 7 days old:
rmstdlist 7
We suggest that you run the rmstdlist command on a daily basis on all your
Fault Tolerant Agents. The rmstdlist command can be defined in a job in a job
stream and scheduled by IBM Tivoli Workload Scheduler. You may need to save
a backup copy of the stdlist files, for example, for internal revision or due to
company policies. If this is the case, a backup job can be scheduled to run just
before the rmstdlist job.
Notes:
At the time of writing this book there was a defect in rmstdlist that
prevents removing TWSMERGE and NETMAN logs when the command is
issued. This is expected to be resolved in Fix Pack 01.
As a best practice, do not keep more than 100 log files in the stdlist
directory.
The job (or more precisely the script) with the rmstdlist command can be coded
in different ways. If you are using IBM Tivoli Workload Scheduler parameters to
specify the age of your stdlist files, it will be easy to change this age later if
required.
Example 9-14 shows an example of a shell script where we use the rmstdlist
command in combination with IBM Tivoli Workload Scheduler parameters.
Example 9-14 Shell script using the rmstdlist command in combination with ITWS
parameters
:
# @(#)clean_stdlist.sh 1.1 2003/09/09
#
# IBM Redbook TWS 8.2 New Features - sample clean up old stdlist files job
# return codes
: ${OK=0} ${FAIL=1}
# start here
echo "clean_stdlist.sh 1.1 2003/09/09"
case "${#}" in
0)
# TWS parameter specifies how long to keep stdlist files, retrieve at
# run time from parameter database on the local workstation
KEEPSTDL=`parms ${PARM}`
if [ ! "${KEEPSTDL}" ]
then
echo "ERROR: TWS parameter ${PARM} not defined"
exit ${FAIL}
fi
;;
1)
# number of days to keep stdlist files is passed in as a command line
# argument.
KEEPSTDL="${1}"
;;
*)
# Usage error
echo "ERROR: usage: ${0} <days>"
exit ${FAIL}
;;
esac
# remove stdlist files and directories older than KEEPSTDL days
rmstdlist ${KEEPSTDL}
# all done
exit ${OK}
The age of the stdlist directories is specified using the variable KEEPSTDL. This
parameter can be created on the Fault Tolerant Agent using the parms command
or using the JSC. When you run the parms command with the name of the
variable (such as parms KEEPSTDL), the command returns the current value of the
variable.
The auditing options are enabled by two entries in the globalopts file in the IBM
Tivoli Workload Scheduler server:
plan audit level = 0|1
database audit level = 0|1
If either of these options is set to the value of 1, the auditing is enabled on the
Fault Tolerant Agent. The auditing logs are created in the following directories:
TWShome/audit/plan
TWShome/audit/database
If the auditing function is enabled in IBM Tivoli Workload Scheduler, files will be
added to the audit directories every day. Modifications to the IBM Tivoli Workload
Scheduler database will be added to the database directory:
TWShome/audit/database/date (where date is in ccyymmdd format)
Modification to the IBM Tivoli Workload Scheduler Plan (the Symphony) will be
added to the Plan directory:
TWShome/audit/plan/date (where date is in ccyymmdd format)
We suggest that you regularly clean out the audit database and Plan directories,
for example, on a daily basis. The clean out in the directories can be defined in a
job in a job stream and scheduled by IBM Tivoli Workload Scheduler. You may
need to save a backup copy of the audit files, for example, for internal revision or
due to company policies. If this is the case, a backup job can be scheduled to run
just before the cleanup job.
The job (or more precisely the script) doing the clean up can be coded in different
ways. If you are using IBM Tivoli Workload Scheduler parameters to specify the
age of your audit files, it will be easy to change this age later if required.
Example 9-15 shows an example of a shell script where we use the UNIX find
command in combination with IBM Tivoli Workload Scheduler parameters.
Example 9-15 Shell script to clean up files in the audit directory based on age
# @(#)clean_auditlogs.sh 1.1 2003/09/12
#
# IBM Redbook TWS 8.2 New Features - sample clean up old audit log files job
# return codes
: ${OK=0} ${FAIL=1}
# start here
echo "clean_auditlogs.sh 1.1 2003/09/12"
case "${#}" in
0)
# TWS parameter specifies how long to keep audit log files, retrieve at
# run time from parameter database on the local workstation
KEEPADTL=`parms ${PARM}`
if [ ! "${KEEPADTL}" ]
then
echo "ERROR: TWS parameter ${PARM} not defined"
exit ${FAIL}
fi
;;
1)
# number of days to keep stdlist files is passed in as a command line
# argument.
KEEPADTL="${1}"
;;
*)
# usage error
echo "ERROR: usage: ${0} <days>"
exit ${FAIL}
;;
esac
# remove audit log files older than KEEPADTL days; the TWS home path below is an
# assumption, adjust it to your installation
find /usr/local/tws/audit/plan /usr/local/tws/audit/database -type f -mtime +${KEEPADTL} -exec rm -f {} \;
# all done
exit ${OK}
The age of the audit files is specified using variables. A variable has been
assigned the value from the KEEPADTL parameter. The KEEPADTL parameter
can be created on the Fault Tolerant Agent using the parms command or using
the JSC.
Note: This applies only to Tivoli Workload Scheduler Master Domain Manager
and to Backup Master Domain Manager.
The job (or more precisely the script) doing the cleanup can be coded in different
ways. If you are using IBM Tivoli Workload Scheduler parameters to specify the
age of your schedlog files, it will be easy to change this age later if required.
Example 9-16 shows a sample of a shell script where we use the UNIX find
command in combination with IBM Tivoli Workload Scheduler parameters.
Example 9-16 Shell script to clean out files in the schedlog directory
:
# @(#)clean_schedlogs.sh 1.1 2003/09/12
#
# IBM Redbook TWS 8.2 New Features - sample clean up old schedlog files
# return codes
: ${OK=0} ${FAIL=1}
# start here
echo "clean_schedlogs.sh 1.1 2003/09/12"
case "${#}" in
0)
# TWS parameter specifies how long to keep schedlog files, retrieve at
# run time from parameter database on the local workstation
KEEPSCHL=`parms ${PARM}`
if [ ! "${KEEPSCHL}" ]
then
echo "ERROR: TWS parameter ${PARM} not defined"
exit ${FAIL}
fi
;;
1)
# number of days to keep schedlog files is passed in as a command line
# argument.
KEEPSCHL="${1}"
;;
*)
# usage error
echo "ERROR: usage: ${0} <days>"
exit ${FAIL}
;;
esac
# remove plan log files older than KEEPSCHL days; the schedlog path below is an
# assumption, adjust it to your installation
find /usr/local/tws/schedlog -type f -mtime +${KEEPSCHL} -exec rm -f {} \;
# all done
exit ${OK}
Notice from the script that the age of the schedlog files is specified using the
variable KEEPSCHL. The schedlog files older than what is specified in the
KEEPSCHL parameter will be removed.
Note: If there are more than 150 files in schedlog, this might cause conman
listsym and JSC Alternate Plan option to hang.
Example 9-17 Shell script to clean up files in the tmp directory based on age
:
# @(#)clean_tmpfiles.sh 1.1 2003/09/12
#
# IBM Redbook TWS 8.2 New Features - sample clean up old tmpfiles files
# return codes
: ${OK=0} ${FAIL=1}
# start here
echo "clean_tmpfiles.sh 1.1 2003/09/12"
case "${#}" in
0)
# TWS parameter specifies how long to keep files in the tmp directory,
# retrieve at run time from parameter database on the local workstation
KEEPTMPF=`parms ${PARM}`
if [ ! "${KEEPTMPF}" ]
then
echo "ERROR: TWS parameter ${PARM} not defined"
exit ${FAIL}
fi
;;
1)
# number of days to keep files in the tmp directory is passed in as a
# command line argument.
KEEPTMPF="${1}"
;;
*)
# usage error
echo "ERROR: usage: ${0} <days>"
exit ${FAIL}
;;
esac
# remove temporary files older than KEEPTMPF days; the tmp path below is an
# assumption, adjust it to your installation
find /usr/local/tws/tmp -type f -mtime +${KEEPTMPF} -exec rm -f {} \;
# all done
exit ${OK}
Notice from the script that the age of the temporary files is specified using the
variable KEEPTMPF. The temporary files older than what is specified in the
KEEPTMPF parameter will be removed.
The backup can be done in several ways. You probably already have some
backup policies and routines implemented for the system where the IBM Tivoli
Workload Scheduler engine is installed. These backups should be extended to
make a backup of files in the TWShome directory.
We suggest that you have a backup of all the IBM Tivoli Workload Scheduler files
in the TWShome directory and the directory where the autotrace library is
installed.
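A minimal sketch of such a backup job (the TWShome path and the archive destination are assumptions; adapt them to your installation and existing backup tooling):
:
# archive the TWS home directory to a dated tar file
DATE=`date +%Y%m%d`
tar -cf /backup/tws_home_${DATE}.tar /usr/local/tws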
When deciding how often a backup should be generated, consider the following:
Are you using parameters on the IBM Tivoli Workload Scheduler agent?
If you are using parameters locally on the IBM Tivoli Workload Scheduler
agent and do not have a central repository for the parameters, you should
consider making daily backups.
Are you using specific security definitions on the IBM Tivoli Workload
Scheduler agent?
If you are using specific Security file definitions locally on the IBM Tivoli
Workload Scheduler agent and do not use the centralized security option that
is available with IBM Tivoli Workload Scheduler Version 8.2, you should
consider making daily backups.
Using the following, you can avoid reinstalling your Tivoli Framework environment
in case of major problems:
Back up your pristine environment.
Back up your environment after making changes.
The standard practice is to use the Tivoli Framework object database backup
process. However, alternatives exist.
Put the tar or zip file on a machine that is backed up regularly. Do the same for
each machine in your IBM Tivoli Workload Scheduler network that requires the
installation of Tivoli Framework.
If you make any changes to your environment (for example, install new
applications into the Tivoli Framework, disconnect TMRs, or remove Connector
instances) you must tar or zip up the directory again. Date it and put it on the
same system as your original pristine backup.
When you encounter major problems in the Tivoli Framework that require a full
reinstall of your original environment or last change, all you have to do is get your
tar or zip file and untar or unzip it back into place. By doing so, you have your
pristine Tivoli Framework environment or last changed environment back in
place.
This can become a bit more complicated when managed nodes enter the picture.
You need to back up your managed nodes the same way you back up your TMR
as listed previously. However, if you must restore your entire Tivoli Framework
environment to the pristine or changed backup, then you must untar or unzip for
all managed nodes in the region as well.
This process is not a standard Tivoli Framework backup procedure, but it can
be a necessity in some environments to preserve the setup and save time.
Note: For Windows managed nodes or TMR servers, you also need to update
the registry manually after restoring your backup. Refer to Tivoli Management
Framework Maintenance and Troubleshooting Guide, GC32-0807 for more
information on required registry updates for Tivoli.
Before rebuilding any database, you must make a backup of the database and
optionally its associated key file. The build command makes a copy of the
database, but it does not make a copy of the key file.
Tip: Key files can be recreated from the data files, which is why they are not
backed up by default, but saving the key files might save you some time
should you need to restore the database.
You must stop all instances of conman and composer, and stop the Connectors,
before doing a composer build.
See the build command options in the Tivoli Workload Scheduler Version 8.2,
Reference Guide, SC32-1274 for further options.
On Windows:
C:\win32app\maestro> conman stop;wait
C:\win32app\maestro> Shutdown.cmd
• If the mailman read corrupted data, try to bring IBM Tivoli Workload
Scheduler down normally. If this is not successful, kill the mailman
process as follows:
On UNIX:
Run ps -u TWSuser to find the process ID.
Run kill process id or if that fails, kill -9 process id to kill the
mailman process.
On Windows (commands in the TWShome\Unsupported directory):
Run listproc to find the process ID.
Run killproc process id to kill the mailman process.
• If batchman is hung:
Try to bring IBM Tivoli Workload Scheduler down normally. If not
successful, kill the mailman process as explained in the previous bullet.
If the writer process for FTA is down or hung on the Master, it means that:
– FTA was not properly unlinked from the Master.
– The writer read corrupted data.
– Multiple writers are running for the same FTA.
Use ps -ef | grep maestro to check that the writer processes are
running. If there is more than one process for the same FTA, perform the
following steps:
i. Shut down IBM Tivoli Workload Scheduler normally.
ii. Check the processes for multiple writers again.
iii. If there are multiple writers, kill them.
If the netman process is hung:
– If multiple netman processes are running, try shutting down netman
properly first. If this is not successful, kill netman using the following
commands:
On UNIX:
Use ps -ef | grep maestro to find the running processes.
Issue kill process id or if that fails, kill -9 process id to kill the
netman process.
On Windows (commands in the TWShome\Unsupported directory):
Use listproc to find the process ID.
Tip: When checking whether root owns the jobman, also check that the setuid
bit is present and the file system containing TWShome is not mounted with the
nosetuid option.
– The Master was not able to start batchman after stageman completed.
See “Batchman not up or will not stay up (batchman down)” on page 342.
– The Master was not able to link to FTA.
See “FTAs not linking to the Master Domain Manager” on page 340.
Autotrace feature
This is a built-in flight-recorder-style trace mechanism that logs all activities
performed by the IBM Tivoli Workload Scheduler processes. In case of product
failure or unexpected behavior, this feature can be extremely effective in finding
the cause of the problem and in providing a quick solution.
In case of problems, you are asked to create a trace snap file by issuing some
simple commands. The trace snap file is then inspected by the Tivoli support
team, which uses the logged information as an efficient problem determination
base. The Autotrace feature, already available with Version 8.1, has now been
extended to run on additional platforms.
This feature is now available with Tivoli Workload Scheduler on the following
platforms:
AIX
HP-UX
Solaris Operating Environment
Microsoft Windows NT and 2000
Linux
The tracing system is completely transparent and does not have any impact on
file system performance, because it is fully handled in memory. It is
automatically started by StartUp, so no further action is required.
Autotrace uses a dynamic link library named libatrc. Normally, this library is
located in /usr/Tivoli/TWS/TKG/3.1.5/lib (UNIX) and not in the installation path,
as would be expected.
Tip: Each time Autotrace could be an option for problem determination, snap
files should be taken as soon as possible. If you need to gather a lot of
information quickly, use Metronome, which we cover next.
Metronome script
Metronome is a PERL script that takes a snapshot of Tivoli Workload Scheduler
instantly and generates an HTML report. It is a useful tool for the Tivoli Workload
Scheduler user for describing a problem to Customer Support. For best results,
the tool should be run as soon as the problem is discovered. Metronome copies
all the Tivoli Workload Scheduler configuration files in a directory named:
TWShome/snapshots/snap_date_time.
The Metronome files are copied in the TWShome/bin directory when Tivoli
Workload Scheduler is installed.
Format:
perl path/maestro/bin/Metronome.pl [MAESTROHOME=TWS_dir] [-html] [-pack]
[-prall]
Where:
MAESTROHOME is the directory where the script is located if it is different from the
installation directory of Tivoli Workload Scheduler.
-html generates an HTML report.
-pack creates a package of configuration files.
-prall prints all variables.
Example 9-19 on page 348 shows how to run the command from the Tivoli
Workload Scheduler home directory.
Example 9-20 shows how to run the command from a directory that is not the
Tivoli Workload Scheduler home directory.
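For illustration only (the directory names are assumptions; refer to Example 9-19 and Example 9-20 for the actual listings), the two invocations might look like this:
# from the Tivoli Workload Scheduler home directory
perl bin/Metronome.pl -html -pack
# from another directory, pointing the script at the installation directory
perl /usr/local/tws/bin/Metronome.pl MAESTROHOME=/usr/local/tws -html -pack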
http://www3.software.ibm.com/ibmdl/pub/software/tivoli_support/patches/
Public IBM Tivoli Workload Scheduler mailing list
http://groups.yahoo.com/group/maestro-l
Note: This public mailing list has many contributors from all over the world
and is an excellent forum for new and more experienced Tivoli Workload
Scheduler users.
Patch: 1.3-JSC-FP01
General Description: Tivoli Job Scheduling Console fix pack for Version 1.3
and Tivoli Workload Scheduler for z/OS Connector
fix pack for Version 1.3
PROBLEMS FIXED:
APAR IY44472
Symptoms: JSC 1.2 BROWSE JOB LOG DOES NOT OPEN ON LARGE STDLIST FILES *
APAR IY47022
Symptoms: Cannot submit job that is more than 15 chars
APAR IY47230
Symptoms: AD HOC JOB LOGIN FIELD SHOWS RED X WHEN >8 CHARACTERS ARE
ENTERED
Internal CMVC Defect 10982
Symptoms: “Task” field edition not allowed
Internal CMVC Defect 11081
Symptoms: Properties window of a job are to be modal
Dependencies: JSC 1.3 GA, Tivoli Workload Scheduler 8.2.0 for z/OS Connector
GA,
++APAR for apar PQ76778 on TWS for z/OS and apar PQ76409.
¦ ¦ U_OPC_L.IND
¦ ¦
¦ +---CFG
¦ FILE1.CFG
¦ FILE2.CFG
¦ FILE3.CFG
¦ FILE4.CFG
¦ FILE5.CFG
¦
+---JSC
¦ +---Aix
¦ ¦ ¦ setup.bin
¦ ¦ ¦
¦ ¦ +---SPB
¦ ¦ TWSConsole_FixPack.spb
¦ ¦
¦ +---HP
¦ ¦ ¦ setup.bin
¦ ¦ ¦
¦ ¦ +---SPB
¦ ¦ TWSConsole_FixPack.spb
¦ ¦
¦ +---Linux
¦ ¦ ¦ setup.bin
¦ ¦ ¦
¦ ¦ +---SPB
¦ ¦ TWSConsole_FixPack.spb
¦ ¦
¦ +---Solaris
¦ ¦ ¦ setup.bin
¦ ¦ ¦
¦ ¦ +---SPB
¦ ¦ TWSConsole_FixPack.spb
¦ ¦
¦ +---Windows
¦ ¦ setup.exe
¦ ¦
¦ +---SPB
¦ TWSConsole_FixPack.spb
¦
+---DOC
README_PDF
The Job Scheduling Console fix pack installation is based on the ISMP
technology.
You can install the fix pack only after you have installed the Job
Scheduling Console.
When you start the fix pack installation a welcome panel is displayed. If
you click OK a discovery action is launched and a panel with
the JSC instance and the discovered Job Scheduling Console directory is
displayed.
If no instance is discovered an error message appears. When you click Next,
a panel with the following actions appears: Apply, Rollback, and Commit.
The first time you install the fix pack you can only select the Apply
action.
With this action the fix pack is installed in undoable mode and a backup
copy of the product is stored on your workstation.
If the fix pack is already installed the Rollback and Commit actions are
available.
If you select Rollback, you remove the fix pack installation and return to
the previous installation of the Job Scheduling Console.
If you select Commit, the installation backup copy is deleted and the fix
pack
installation mode changes to commit.
Use the commit action after you have tested the fix pack.
If the fixpack installation is in commit mode and is corrupt, you can run
the setup program and select the Repair action. This action
is displayed in the panel in place of the Apply action.
Connector:
IMPORTANT: Before installing the fix pack, create a backup copy of the Job
Scheduling Console current installation.
1) Do not launch the Job Scheduling Console fresh install after the fix
pack installation.
Workaround to add languages to a JSC instance if the fix pack has been
applied:
- If the fix pack is in undoable mode, you can perform the following
steps:
1)Remove the fix pack with the Rollback action
2)Add languages with the fresh install
3)Apply the fix pack
- If the fix pack is not in undoable mode, you can perform the following
steps:
1) Remove the fresh install
2) Run the fresh install to install the languages
3) Apply the fix pack
2) Before installing the fix pack in undoable mode, ensure that you have 35 MB
of free space in the root/Administrator home directory for the installation
backup (a quick check is shown after this list).
3) After the discovery of the installed instance, all the panels are identical,
regardless of the installation action you are running.
4) The panel that contains the installation summary displays the wrong disk
space size.
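As a quick check for note 2 above, the available space in the root or
Administrator home directory can be verified before applying the fix pack in
undoable mode. This is a UNIX sketch and assumes the command is run as the user
who owns that home directory:

   df -k $HOME        # at least 35 MB (35840 KB) must be free for the installation backup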
1) The recovery option of a job is not saved if you open the Properties panel
during its submission.
Note: defect 10994 fixes only the problem related to the recovery job.
2) If you add a dependency to an existing job stream, when you try to save
it,
the following error message is displayed:
Cannot save the job stream.
Reason: <JOB STREAM NAME> already exists.
5) You can submit a job into the plan using an alias that starts with
a numeric character.
6) If you open a job stream in the plan and try to cancel a job, the job will
not be cancelled.
7) In the Job Stream Instance Editor, select a job stream and click the
Properties button. In the Properties panel, modify some properties and click
OK. The Properties window remains in the background and the Job Stream
Instance Editor hangs. If you run Submit and Edit, the Job Stream Instance
panel is opened in the background.
Sample freshInstall.txt
###############################################################################
#
#
# InstallShield Response File for Installing a New Instance
#
# This file contains values that can be used to configure the installation program
# with the options specified below when the wizard is run with the "-options"
# command line option.
# Read the instructions contained in each section to customize the options and
# their respective values.
#
# A common use of a response file is to run the wizard in silent mode. This
# enables you to specify wizard keywords without running the wizard in
# graphical mode.
#
# The keywords that can be specified for the wizard are listed below. To install
# in silent mode, follow these steps:
#
# 1. Customize the value of the required keywords (search for the lines starting
#    with -W). This file contains both UNIX and Windows specific keywords. The
#    required keywords are customized for a Windows workstation.
#
# 2. Enable the optional keywords by removing the leading '###' characters from
#    the line (search for '###' to find the keywords you can set).
#
# 3. Specify a value for a keyword by replacing the characters within double
#    quotation marks "<value>".
-W twsLocationPanel.directory="<system_drive>\win32app\TWS\<tws_user>"
-W userWinPanel.inputUserName="twsuser"
-W userWinPanel.password="twsuser"
-W userWinPanel.twsDomain="cpuName"
# [UNIX IBM Tivoli Workload Scheduler user name]
# On UNIX systems, this user account must be created manually before running the
# installation. Create a user with a home directory. IBM Tivoli Workload Scheduler
# will be installed under the HOME directory of the specified user.
### -W userUNIXPanel.inputUserName="twsuser"
###############################################################################
#
#
# CPU DEFINITION
#
# [Workstation name of the installation]
# The name assigned to the workstation where the IBM Tivoli Workload Scheduler
# installation runs.
# This name cannot exceed 16 characters. If you are installing a master domain
# manager, the value of this keyword must be identical to the cpubean.masterCPU
# keyword.
-W cpubean.thisCPU="twsCpu"
-W cpubean.masterCPU="twsCpu"
# [TCP port]
# The TCP port number that will be used by the instance being installed.
# It must be an unassigned 16-bit value in the range 1-65535. The default value
# is 31111.
# When installing more than one instance on the same workstation, use
# different port numbers for each instance.
-W cpubean.tcpPortNumber="31111"
# [Company name]
# The name of the company. This name is registered in the localopts file and
# appears in program headers and reports.
-W cpubean.company="IBM"
-W twsDiscoverInstances.typeInstance="NEW"
###############################################################################
#
#
# AGENT TYPE
#
# Do not modify the value of this keyword.
-W setupTypes.typeInstall="custom"
# The type of IBM Tivoli Workload Scheduler agent to install. Enable the keyword
# for the type of agent you want to install. This file is customized to install a
# Fault Tolerant Agent.
# Standard agent:
### -W agentType.stAgent="true"
# Master domain manager:
### -W agentType.mdmAgent="true"
# Backup master:
### -W agentType.bkmAgent="true"
# Fault Tolerant Agent:
-W agentType.ftAgent="true"
###############################################################################
#
#
# LANGUAGE PACKS
#
# [All supported language packs]
# The English language pack and the language locale of the operating system are
# installed by default. To install all supported language packs, enable the
# keyword.
### -W languageChoice.catOption="true"
# Enable the keyword for each language pack you want to install. Language packs
# that remain commented default to false.
### -W languageChoice.chineseSimplified="true"
### -W languageChoice.chineseTraditional="true"
### -W languageChoice.german="true"
### -W languageChoice.french="true"
### -W languageChoice.italian="true"
### -W languageChoice.japanese="true"
### -W languageChoice.korean="true"
### -W languageChoice.portuguese="true"
### -W languageChoice.spanish="true"
###############################################################################
#
#
# OPTIONAL FEATURES
#
# [Tivoli Plus Module]
# Installs the Tivoli Plus Module feature.
### -W featureChoice.pmOption="true"
# To create a task which enables you to launch the Job
# Scheduling Console from the Tivoli desktop, optionally, specify the location
# of the Job Scheduling Console installation directory.
### -W twsPlusModulePanel.jscHomeDir=""
# The path to the Tivoli Plus Module images and index file. These paths are
# identical. To install the Tivoli Plus Module, enable both keywords.
### -P checkPlusServerCDAction.imagePath="D:\disk1\TWS_CONN"
### -P checkPlusServerCDAction.indPath="D:\disk1\TWS_CONN"
#
#
# [Connector]
# Installs the Connector feature.
### -W featureChoice.conOption="true"
# The Connector instance name identifies the instance in the Job Scheduling
# Console. The name must be unique within the scheduler network.
### -W twsConnectorPanel.jscConnName="TMF2conn"
# Customize the path to the Job Scheduling Services and Connector images and
# index files. These paths are identical. To install the Connector, enable all
# keywords.
### -P checkJssServerCDAction.imagePath="D:\disk1\TWS_CONN"
### -P checkJssServerCDAction.indPath="D:\disk1\TWS_CONN"
### -P checkConnServerCDAction.imagePath="D:\disk1\TWS_CONN"
### -P checkConnServerCDAction.indPath="D:\disk1\TWS_CONN"
###############################################################################
#
#
# TIVOLI MANAGEMENT FRAMEWORK
# The Connector and Tivoli Plus Module features require the Tivoli
# Management Framework, Version 3.7.1 or 4.1.
# Simplified Chinese
### -P checkTmfSimplifiedCnCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfSimplifiedCnCdAction.indPath="D:\disk1\fwork\new1"
# Traditional Chinese
### -P checkTmfTraditionalCnCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfTraditionalCnCdAction.indPath="D:\disk1\fwork\new1"
# Portuguese
### -P checkTmfPortugueseCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfPortugueseCdAction.indPath="D:\disk1\fwork\new1"
###############################################################################
#
#
# DO NOT CHANGE THE FOLLOWING OPTIONS OR THE INSTALLATION WILL FAIL
#
###############################################################################
#
-silent
-G replaceNewerResponse="Yes to All"
-W featuresWarning.active=False
-W winUserNotFound.active=False
-W featuresWarning23.active=False
-W featuresWarning2.active=False
-W featuresWarning.active=False
-W featuresWarning222.active=False
-W featuresWarning22.active=False
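A response file such as the one above is passed to the installation wizard with
the -options argument; because the file already contains the -silent keyword, the
wizard runs without displaying any panels. The following invocations are
illustrative only: the location of the setup program on the CD image and the path
of the response file are assumptions.

   REM Windows, from the directory that contains SETUP.exe
   SETUP.exe -options C:\temp\freshInstall.txt

   # UNIX, from the directory that contains SETUP.bin
   ./SETUP.bin -options /tmp/freshInstall.txt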
Customized freshInstall.txt
###############################################################################
#
#
# InstallShield Response File for Installing a New Instance
#
# This file contains values that can be used to configure the installation program
# with the options specified below when the wizard is run with the "-options"
# command line option.
# Read the instructions contained in each section to customize the options and
# their respective values.
#
# A common use of a response file is to run the wizard in silent mode. This
# enables you to specify wizard keywords without running the wizard in
# graphical mode.
#
-W twsLocationPanel.directory="C:\win32app\tws"
-W userWinPanel.inputUserName="tws"
-W userWinPanel.password="chuy5"
-W userWinPanel.twsDomain="6579IMAGE"
###############################################################################
#
#
# CPU DEFINITION
#
# [Workstation name of the installation]
# The name assigned to the workstation where the IBM Tivoli Workload Scheduler
# installation runs.
# This name cannot exceed 16 characters. If you are installing a master domain
# manager, the value of this keyword must be identical to the cpubean.masterCPU
# keyword.
-W cpubean.thisCPU="TWS2"
-W cpubean.masterCPU="MASTER"
-W cpubean.tcpPortNumber="31111"
# [Company name]
# The name of the company. This name is registered in the localopts file and
# appears in program headers and reports.
-W cpubean.company="ITSO/Austin"
###############################################################################
#
#
# INSTALLATION TYPE
#
# Do not modify the value of this keyword. It is customized to perform a
# fresh install.
-W twsDiscoverInstances.typeInstance="NEW"
###############################################################################
#
#
# AGENT TYPE
#
# Do not modify the value of this keyword.
-W setupTypes.typeInstall="custom"
# The type of IBM Tivoli Workload Scheduler agent to install. Enable the keyword
# for the type of agent you want to install. This file is customized to install a
# Fault Tolerant Agent.
# Standard agent:
### -W agentType.stAgent="true"
# Master domain manager:
### -W agentType.mdmAgent="true"
# Backup master:
### -W agentType.bkmAgent="true"
# Fault Tolerant Agent:
-W agentType.ftAgent="true"
###############################################################################
#
#
# LANGUAGE PACKS
#
# [All supported language packs]
# The English language pack and the language locale of the operating system are
# installed by default. To install all supported language packs, enable the
# keyword.
### -W languageChoice.catOption="true"
### -P checkTmfGermanCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfGermanCdAction.indPath="D:\disk1\fwork\new1"
# Spanish
### -P checkTmfSpanishCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfSpanishCdAction.indPath="D:\disk1\fwork\new1"
# French
### -P checkTmfFrenchCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfFrenchCdAction.indPath="D:\disk1\fwork\new1"
# Italian
### -P checkTmfItalianCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfItalianCdAction.indPath="D:\disk1\fwork\new1"
# Korean
### -P checkTmfKoreanCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfKoreanCdAction.indPath="D:\disk1\fwork\new1"
# Japanese
### -P checkTmfJapaneseCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfJapaneseCdAction.indPath="D:\disk1\fwork\new1"
# Simplified Chinese
### -P checkTmfSimplifiedCnCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfSimplifiedCnCdAction.indPath="D:\disk1\fwork\new1"
# Traditional Chinese
### -P checkTmfTraditionalCnCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfTraditionalCnCdAction.indPath="D:\disk1\fwork\new1"
# Portuguese
### -P checkTmfPortugueseCdAction.imagePath="D:\disk1\fwork\new1"
### -P checkTmfPortugueseCdAction.indPath="D:\disk1\fwork\new1"
###############################################################################
#
#
# DO NOT CHANGE THE FOLLOWING OPTIONS OR THE INSTALLATION WILL FAIL
#
###############################################################################
#
-silent
-G replaceNewerResponse="Yes to All"
-W featuresWarning.active=False
-W winUserNotFound.active=False
-W featuresWarning23.active=False
-W featuresWarning2.active=False
-W featuresWarning.active=False
-W featuresWarning222.active=False
-W featuresWarning22.active=False
/************************************************************************/
/*********** TWS Event Rules **********************/
/************************************************************************/
rule: job_started: (
description: 'Clear job prompt events related to this job',
rule:
job_done: (
description: 'Acknowledge that the job is done and close all \
outstanding job started events',
rule:
job_ready_started: (
description: 'Acknowledge that the job is launched and close all \
outstanding job ready events',
reception_action: close_all_job_ready_events: (
all_instances(
event: _job_ready_event of_class 'TWS_Job_Ready'
where [
hostname: equals _hostname,
job_name: equals _job_name,
rule: schedule_done: (
description: 'Acknowledge that the schedule is done and close all \
outstanding schedule started events',
rule: schedule_started: (
description: 'Clear schedule stuck or schedule abend events \
or schedule prompt events related to this schedule',
],
reception_action: close_all_schedule_started_stuck_events: (
all_instances(
event: _schedule_stopped_event of_class _Class within
['TWS_Schedule_Stuck','TWS_Schedule_Abend', 'TWS_Schedule_Prompt']
where [
hostname: equals _hostname,
schedule_cpu: equals _schedule_cpu,
schedule_name: equals _schedule_name,
status: outside ['CLOSED']
]
),
set_event_status(_schedule_stopped_event, 'CLOSED')
)
).
rule: schedule_stopped: (
description: 'Start a timer rule',
timer_rule:
timer_schedule_stopped: (
description: 'Calls a script that takes further action if schedule_stopped event is still OPEN',
rule: job_abend: (
description: 'Check to see if this event has occurred in the last 24 hours',
first_duplicate(
_event,
event: _duplicate
where [
],
_event - 86400 - 86400 ),
generate_event(
'TWS_Job_Repeated_Failure',
[
severity='CRITICAL',
source=_source,
origin=_origin,
hostname=_hostname,
job_name=_job_name,
job_cpu=_job_cpu,
schedule_name=_schedule_name,
msg=_fail_msg
]
)
),
reception_action: (
/* sends an email message to tech support and attaches stdlist */
exec_program(_event,
'/tivoli/scripts/alarmpoint/send_ap_action.sh','EMAIL',[],'NO'),
)
).
rule: link_established: (
description: 'Clear TWS_Link_Dropped and TWS_Link_Failed events for this hostname cpu pair',
rule: job_events_mgmt: (
description: 'sets time posix var for events from 101 to 118',
'TWS_Job_Cancel','TWS_Job_Ready','TWS_Job_Hold','TWS_Job_Restart','TWS_Job_Cant','TWS_Job_SuccP','TWS_Job_Extern',
reception_action: job_estdur_issuing: (
/* Est Duration */
convert_local_time(_job_estdur_eval, _est_dur_locale),
convert_ascii_time(_est_dur_locale, _est_dur_locale_ascii),
atompart(_est_dur_locale_ascii, _est_dur_time, 12, 8),
bo_set_slotval(_event1,estimated_duration, _est_dur_time )
),
reception_action: job_dead_issuing: (
/* deadline */
_job_dead_eval > 0x0,
convert_local_time(_job_dead_eval, _dead_locale),
convert_ascii_time(_dead_locale, _dead_ascii),
bo_set_slotval(_event1, deadline, _dead_ascii)
),
reception_action: job_estst_issuing: (
reception_action: job_effst_issuing: (
).
rule: job_events_warnings: (
description: 'evaluate time values vs deadline for 3 events',
reception_action: job_estst_evaluation: (
/* foreseen duration */
pointertoint(_job_estdur_eval, _int_estdur_eval),
_job_foreseen =? _job_estst_eval + _int_estdur_eval,
set_event_severity(_event2 , 'WARNING')
),
reception_action: job_effst_evaluation: (
_job_estst_eval == 0x0,
_job_effst_eval > 0x0,
/* foreseen duration */
pointertoint(_job_estdur_eval, _int_estdur_eval),
_job_foreseen =? _job_effst_eval + _int_estdur_eval,
set_event_severity(_event2 , 'WARNING')
).
/*
******************************************************************************
*/
/* *************************** TIMER RULES
******************************** */
/*
******************************************************************************
*/
rule: job_ready_open : (
description: 'Start a timer rule for ready',
timer_rule:
timer_job_ready_open: (
description: 'Generate a warning on the event if job ready event \
is still waiting for its job launched event',
action: (
set_event_message(_event , 'Start delay of Job %s.%s on CPU %s',
[_schedule_name,_job_name,_job_cpu]),
set_event_severity(_event , 'WARNING')
)
).
rule: job_launched_open : (
description: 'Start a timer rule for ready',
timer_rule:
timer_job_launched_open: (
description: 'Generate a warning on the event if job launched event \
is still waiting for its job done event',
action: (
set_event_message(_event , 'Long duration for Job %s.%s on CPU %s',
[_schedule_name,_job_name,_job_cpu]),
set_event_severity(_event , 'WARNING')
)
).
# This script performs the tasks required for a TWS long-term switch.
# The short-term switch MUST be performed before this script is run.
export PATH=$PATH:/usr/local/tws/maestro:/usr/local/tws/maestro/bin
export BKDIR=/usr/local/tws/maestro/TWS_BACKUPS
export DBDIR=/usr/local/tws/maestro/mozart
export SCDIR=/usr/local/tws/maestro/scripts
export NEWMASTER=BMDM_workstation_name
export NPW=`cat $BKDIR/user_info`
export DATE=`date +%d_%m_%y_%H.%M`
export TIME=`date +%H%M`
export LOGFILE=/usr/local/tws/maestro/lts_log.$DATE
# The file 'user_info' contains the password that will be used for all Windows users.
# This file is owned and is read/writable ONLY by the root user.
# The variable $NPW will be set to the windows password.
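# As an illustration only, the user_info file could be created as follows; the
# path matches $BKDIR above and the password value is a placeholder, not a real one:
#   echo "newWindowsPassword" > $BKDIR/user_info
#   chown root $BKDIR/user_info
#   chmod 600 $BKDIR/user_info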
# Set the NEWMASTER variable to the workstation name of the agent that will take
# over as the MDM. This allows this script to be used on multiple workstations
# with minimal amendment.
function confswch
{
CHECK=n
TYPE=X
DRSWITCH=n
tput clear
echo You are about to perform a Long-Term Switch
echo of the TWS Production Master Workstation.
echo $NEWMASTER will take over as the MDM upon completion of this script
echo
echo Ensure that the TWS Database backup and copy scripts have run on the
echo Current MDM before you continue with this switch.
echo
echo These are run automatically between the MDM and the BMDM
echo at the start of each day.
echo However, if a Long Term switch has already taken place and this is a
echo return back to the master during the same processing day then the
echo backup and copy scripts will need to be run on the system that you are
echo switching back from, before you run this script.
echo This will ensure that any database updates that have been made during the
echo switch are maintained.
echo
echo Also, a Short Term Switch Needs to be performed before continuing.
echo
echo "***********************************************************"
echo "* *"
echo "* Y) Perform Long Term Switch to BMDM. *"
echo "* *"
echo "* X) Exit from this script. *"
echo "* *"
echo "***********************************************************"
echo
printf "Please select the Y to continue with the switch or X to exit.(Y/X): "
read TYPE
echo "Please select the Y to continue with the switch or X to exit.(Y/X): " >>
$LOGFILE
echo Operator Reply to Type of switch menu is :$TYPE >> $LOGFILE
if [ $TYPE = "Y" ]
then echo Starting the Long-Term switch process to the
     echo Backup Master Domain Manager.
echo Please Wait..........
echo The switch will create a logfile. Please check this upon completion.
echo Logfile name is $LOGFILE
else if [ $TYPE = "X" ]
then echo Long-Term switch process is stopping.
echo Re-run when ready to do this.
echo The Long Term Switch script was exited at `date +%d_%m_%y_%H:%M.%S` >> $LOGFILE
exit 1
else confswch
fi
fi
}
echo Long Term Switch script started at `date +%d_%m_%y_%H:%M.%S` > $LOGFILE
echo This script will perform the tasks required to perform a TWS long term switch
echo so that $NEWMASTER will become the master.
echo This script will perform the tasks required to perform a TWS long term switch >> $LOGFILE
echo so that $NEWMASTER will become the master. >> $LOGFILE
confswch
echo Copy the globalopts file for the BMDM to mozart directory. >> $LOGFILE
echo Required so that the BMDM can assume the full role of the MDM. >> $LOGFILE
cp $BKDIR/globalopts.bkup $DBDIR/globalopts
echo Remove any TWS DataBase files prior to load of latest copy. >> $LOGFILE
echo Note: These should be cleared down after switch of Master if possible. >> $LOGFILE
cd $DBDIR
rm calendars*
rm job.sched*
rm jobs*
rm mastsked*
rm prompts*
rm resources*
cd ../unison/network
rm cpudata*
rm userdata*
# Update the users.txt file with the password, as the file was created with a string of ****
echo Update the users.txt file with the password, as the file was created with a string of **** >> $LOGFILE
# Issue TWS DataBase replace commands to build the TWS DataBases on the BMDM.
echo Issue TWS DataBase replace commands to build the TWS DataBases on the BMDM. >> $LOGFILE
composer replace $BKDIR/users.tmp
composer replace $BKDIR/resources.txt
composer replace $BKDIR/jobs.txt
composer replace $BKDIR/scheds.txt
# Remove the file created that contains the windows users and passwords so that
# it is not left on the system where it could be read by other users.
rm $BKDIR/users.tmp
# Replace Final JobStream for BMDM to run everyday and Final JobStream for MDM
# to run on request.
echo Replace Final JobStream for BMDM to run everyday and Final JobStream for MDM >> $LOGFILE
echo to run on request. >> $LOGFILE
# The final_jss file contains the FINAL job stream definitions for the MDM and BMDM,
# with the run cycles set so that FINAL on the BMDM will be selected and FINAL on the
# MDM will not.
echo Replacing the FINAL job streams for the MDM and BMDM with the correct Run Cycles.
echo Replacing the FINAL job streams for the MDM and BMDM with the correct Run Cycles. >> $LOGFILE
# Cancel any Final JobStreams in the PLAN as they are no longer needed.
echo Cancel any Final JobStreams in the PLAN as they are no longer needed. >> $LOGFILE
# The FINAL job stream will be submitted with a time to make it unique, in order to allow
# for multiple switches during a production day, if required.
echo Long Term Switch script finished at `date +%d_%m_%y_%H:%M.%S`. >> $LOGFILE
echo "*****************************************************************"
echo "* Long Term switch of MDM to $NEWMASTER is now complete. *"
echo "*****************************************************************"
B
Select the Additional materials and open the directory that corresponds with
the redbook form number, SG246628.
Abbreviations and acronyms
AIX     Advanced Interactive Executive
APAR    authorized program analysis report
API     Application Program Interface
BDM     Backup Domain Manager
BMDM    Backup Master Domain Manager
CA      Certificate Authority
CLI     command line interface
CORBA   Common Object Request Broker Architecture
cpu     central processing unit
CPU     ITWS workstation
CRL     Certification Revocation List
CSR     Certificate Signing Request
DES     Data Encryption Standard
ITSO    International Technical Support Organization
ITWS    IBM Tivoli Workload Scheduler
JCL     job control language
JSC     Job Scheduling Console
JSS     Job Scheduling Services
JVM     Java Virtual Machine
MAC     message authentication code
MDM     Master Domain Manager
MIB     Management Information Base
OLAP    online analytical processing
OPC     Operations, Planning and Control
PERL    Practical Extraction and Report Language
Related publications
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this redbook.
IBM Redbooks
For information on ordering these publications, see “How to get IBM Redbooks”
on page 392. Note that some of the documents referenced here may be available
in softcopy only.
IBM Tivoli Monitoring Version 5.1.1 Creating Resource Models and Providers,
SG24-6900
Integrating IBM Tivoli Workload Scheduler and Content Manager OnDemand
to Provide Centralized Job Log Processing, SG24-6629
Other publications
These publications are also relevant as further information sources:
Tivoli Workload Scheduler Job Scheduling Console User’s Guide, SH19-4552
Tivoli Workload Scheduler Version 8.2, Error Message and Troubleshooting,
SC32-1275
IBM Tivoli Workload Scheduler Version 8.2, Planning and Installation,
SC32-1273
Tivoli Workload Scheduler Version 8.2, Reference Guide, SC32-1274
IBM Tivoli Workload Scheduler Version 8.2 Plus Module User’s Guide,
SC32-1276
Tivoli Management Framework Maintenance and Troubleshooting Guide,
GC32-0807
IBM Tivoli Workload Scheduler for Applications User Guide, SC32-1278
IBM Tivoli Workload Scheduler Release Notes, SC32-1277
Index
batch work 255
Symbols batchman 57, 79, 208
_uninst directory 137
Batchman LIVES 55, 249
BDM
A See Backup Domain Manager
ABEND 152 behindfirewall attribute 173, 323
access method 14 bm check deadline 208, 300
Action list view 19 bm check file 299
adapter 228 bm check status 299
adapter configuration file 228 bm check until 208
Adapter Configuration Profile 230 BMDM 250–251, 256
Adding Server ID 262 See Backup Master Domain Manager
adhoc 272 bmevents file 214
adhoc jobs 246 BmEvents.conf 225, 229
Admintool 148 build command options 339
agent down 224 Bulk Data Transfer service 206
agents unlinked 224
AIX 149
AIX 5L 258
C
CA 178
AIX 5L Version 5.1 32
See certificate authority
AlarmPoint 225, 236, 240
ca_env.sh 184
AlarmPoint Java Client 233
cacert.pem 195
altpass 344
Cancel option 215
anonymous FTP 120
Cannot create Jobtable 343
archiver process 5
caonly 179
Authentication 177
CARRYFORWARD schedule 345
Authentication methods
central data warehouse 2
caonly 179
central repository 273
cpu 180
centralized security 205
string 180
cert 181
Autotrace 8, 346
Certificate Authority 178, 180
AWSDEB045E 203
certificate request 191
Certificate Revocation Lists 182
B certs directory 194
Backup Domain Manager 267, 273 Change control 277
Backup Master 34, 249–250 Choosing platforms 258
Backup Master Domain Manager 244, 273 chown 340, 342
Bad data 343 Cluster environments 272
Baltimore 180 Column layout customization 25
BAROC 223 command line 246
BAROC files 230 Commit 127
bash 252 Compiler 283
basicConstraint 183 components file 34
file system space 286 html report 347
file system syncronization level 296
FINAL job stream 251
finding answers 348
I
I/O bottleneck 259
fire proof 253
IBM DB2 Data Warehouse Center 2
Firewall Support 176
IBM HACMP 273
firewalls 172, 202, 206, 268
IBM pSeries systems 258
Fix Pack 01 134, 162, 230, 328
IBM Tivoli Business Systems Manager 286
flat file 250–251
IBM Tivoli Configuration Manager 4.2 15
format files 230
IBM Tivoli Data Warehouse 286
Framework
IBM Tivoli Enterprise Console 232–233, 236, 286
administration 244
IBM Tivoli Monitoring 224, 286–287
administrators 244
IBM Tivoli Workload Scheduler 258, 286
binaries 338
Adding a new feature 56
database 146
central repositories 273
network 244
Common installation problems 136
products 244
Connector 56
See also Tivoli Management Framework
FINAL schedule 55
Free inodes 286
Installation CD layout 32
freshInstall.txt 133
Installation overview 32
customized file 365
Installation roadmap 35
sample file 359
Installing the Job Scheduling Console 109
Full status 254
Installing using the twsinst script 130
Full Status option 231
InstallShield MultiPlatform installation 35
Jnextday script 55
G Launching the uninstaller 137
garbage collected heap 314 Parameters files 276
garbage collector 309 Promoting an agent 78
generating a certificate 189 schedlog directory 332
Global option 284 scripts files 274
globalopts 249, 256 Security files 275
globalopts.mstr 250 Silent Install using ISMP 132
grep 341 Troubleshooting the installation 135
TWSRegistry.dat 34
Uninstalling 137
H Useful commands 149
Hardware considerations 258
IBM Tivoli Workload Scheduler 8.1 296
hardware upgrade 299
IBM Tivoli Workload Scheduler Client list 229
hexidecimal number 182
IBM Tivoli Workload Scheduler V8.2 323
high availability 258
Ignore box 280
historical reports 286
increase the uptime 273
HKEY_LOCAL_MACHINE 148
Information column 211
hostname 279
inode 260
HP 9000 K 258
inode consumption 260
HP 9000 N 258
Inodes 260
HP ServiceGuard 273
installation language 74
HP-UX 148
Installation log files 135
HP-UX 11 258
tivoli.cinstall 135
HP-UX 11i 258
tivoli.sinstall 135 Legacy entries 148
TWS_platform_TWSuser^8.2.log 135 legacy Maestro terminology 304
TWSInstall.log 135 LIMIT 256
TWSIsmp.log 135 linking process 279
installation wizard language 80 Linux 148, 258
installed repository 178 Linux xSeries cluster 273
Installing Perl5 133 listproc 341
InstallShield Multiplatform 15 listsym 285
Intel 345 live systems 256
Interconnected TMRs 292 load average 286
IP address 278–279 local copy of plan 274
IP spoofing 177 local SSL context 203
IP-checking mechanism 177 localopts 104, 188, 251, 293
log switch 326
logman 283
J LogSources option 239
JCL field 155
long-term switch 245
Jnextday 258, 260, 269, 284, 344
Low end machines 272
Jnextday job 245, 250
ls command 120, 196
JNEXTDAY script 251
Job FOLLOW 344
job invocation 177 M
Job Scheduling Console 55, 121 MAC 177
Job Scheduling Console Fix Pack 120, 127 MAC computation 177
Job Scheduling Services 145, 278 maestro.fmt 229
Job Stream Editor 309 maestro.rls 235
jobinfo 9 maestro_dm.rls 235
joblog 323 maestro_mon.rls 231
JOBMAN 57, 79 maestro_nt.rls 235
jobman 57, 79 maestro_plus.rls 236
jobmanrc 166 maestro_plus.rls rule set 240
JOBMON 79 MaestroDatabase 292
Jobtable 343 MaestroEngine 292
JSC 1.2 startup script 314 maestront.fmt 229
JSC 1.3 startup script 314 MaestroPlan 292
JSC launcher 314 mailman 286, 341
mailman cache 296–297
main TMR 244
K Major modifications 34
kill 341
Master 34
killproc 341
Master Domain Manager 78, 244, 259, 269
master option 249
L MASTERDM Domain 246
lag time 255 maximum event size 279
Language Packs 56 MCMAGENT 14
Latest Start Time 209, 211, 217 MD5 177
leaf node FTA 278 MDM 249, 251
least impact 255 See Master Domain Manager
least scheduling activity 284 merge stdlists 204
Message file sizes 279 off-site systems 255
Metronome 347 On request 251
Metronome script 347 online analytical processing 3
Metronome.pl 133 On-site disaster recovery 245
Microsoft Cluster Services 273 ONUNTIL 209
mm cache mailbox 297 ONUNTIL CANC 216
mm cache size 297 ONUNTIL SUPPR 209
mm response 298 OPENS dependency 304
mm retrylink 298 OPENS file 344
mm unlink 298 OpenSSL 181–182, 184
Monitoring 286 Oracle E-Business Suite Access Method 14
mount 94 Oracle E-Business Suite access method 14
mozart directory 249, 251 OutofMemoryError 315
multiple domain architecture 267 overhead 297
mvsca7 14 owner’s distinguished name 178
mvsjes 14 owner’s public key 178
mvsopc 14
P
N page out 224
name resolution 278 parms 166
named pipes 270 parms command 276
National Language Support 14 parms utility 169
NetConf 34, 286 passphrase 191, 196
netman 340 patch 278
netman process 91 pending jobs 137
NetReq.msg 34 PeopleSoft access method 14–15
netstat 342 Perl interpreter 135
network availability 251 Perl5 133
network card 342 Perl5 interpreter 133
network performance 302 physical database files 250
Network status 286 physical defects 342
nm port 199 physical disk 258
nm SSL port 199 Plan 5, 9, 309
nm tcp timeout 11 Plan run number 273
node field 278 point of failure 254
node name field 278 Power outage 343
Non-TME Adapters 228 pre-defined message formats 223
nosetuid 343 pre-defined rules 224
Not Checked status 304 preferences.xml 312
notification server 222 pristine 338
pristine Tivoli Framework environment 338
private key 178, 191
O process id 341
od utility 196
Processes 259
odadmin 144
prodsked 283
odadmin set_bdt_port 206
production control database 246
odadmin single_port_bdt 206
Production Control file 284
Off-site disaster recovery 252
production day 246
off-site location 254
Production Plan 5 adding to a job definition 152
progress bar 74, 105 Boolean expression 154
Progressive message numbering 24 Comparison expression 153
prompt 185 conman enhancement 157
propagating changes in JSC 309 example 157
ps command 341 jobinfo enhancement 160
psagent 14 Jobinfo example 161
pSeries 258 monitoring 155
public key 178, 191 overview 152
public mailing list 348 parent job 163
Public-key cryptography 178 RCCONDSUCC keyword 152
retcod 157
Return Code Mapping field 152
Q rstrt_retcode option 160
qualified file dependency 305
rmstdlist 260
Rollback 127
R root CA 178
R/3 access method 14 router 342
r3batch 14 RSA 177
RAID array 258 rule base 228
RAID technologies 259 rule-based event management 222
RAID-0 259 runmsgno file 250
RAID-1 259
RAID-5 259
RC4 177 S
SAM application 148
recipient 178
SAP BC-XBP 2.0 14
recovery job 162
schedlog 260, 334
Red Hat 258
schedlog directory 332
Red Hat Linux Enterprise Server 2.1 32
schedulers 256
Redbooks Web site 392
Schedulr 283
Contact us xiii
Secure Port 200
Regatta servers 258
secureaddr 199
regedit.exe 148
Security file 206, 275
registry updates 338
securitylevel 199
remote Master 292
self-signed digital certificate 178
remove programs 137
self-signed root certificate 181
rep11 225
sendmail method 240
rep1-4b 225
serial number 182
rep7 225
Server ID 262
rep8 225, 283
setsym 285
Report tasks 225
setuid bit 343
reptr 225, 282
Setup EventServer for TWS 227
RERUN 161
SETUP.bin 79, 94
Resource Model 224
SETUP.exe 136
response file 132–133
setup_env.sh 143
restart the TMR 245
SHA 177
retcod 157
short-term switch 245
Return Code condition 152
SHOWDOMAINS command 249
Return Codes 152, 156
showjobs 9 TEC 3.8 Fixpack 01 230
shutdown 246 tecad_nt 229
signature 178 Termination Deadline 212, 218
signed digital certificate 178 terminology changes 208
silent mode 132 test command 305
Sinfonia 269, 297, 345 text copies of the database files 337
Sinfonia distribution 270 Tier 1 platforms 32
SLL connection 178 Tier 2 platforms 33
slow initializing FTAs 266 time stamp 345
slow link 266 time-stamped database files 254
Software Distribution process 135 Tivoli desktop 292
software package block 277 Tivoli Distributed Monitoring 224
Solaris 148, 258 Tivoli Endpoints 244
SPB 277 Tivoli Enterprise Event Management 224
SSL auth string 180 Tivoli Framework 224
SSL session 178 backup procedure 338
SSL support 177 directory 338
stageman 79, 282–283, 344 environment 338
standalone TMR 244 See also Tivoli Management Framework
Standard Agent 78, 272 Tivoli Job Scheduling Services 1.3 109
standard reports 225 Tivoli managed environment 228
start of day 306 Tivoli Management Framework 3.7.1 109
start the adapter 228 Tivoli Management Framework 4.1 109
StartUp 8 Tivoli Management Region 292
StartUp command 340 Tivoli Plus Module 223
stdlist directory 260, 328 Tivoli Workload Scheduler
stop command 175 auditing log files 330
stop the adapter 228 TME Adapters 228
submit additional work 245 TMF_JSS 143
submit job 164 TMR failure 245
SUCC 152 tokensrv 79, 93
successful interconnection 293 tomaster.msg 245
Sun Cluster Software 273 trace snap file 8
SunBlade systems 258 Troubleshooting
Suppress option 210 batchman down 342
SuSE Linux 258 Byte order problem 345
swap space 224, 286 compiler processes 345
switching 231 evtsize 342
switchmgr 188, 249 FTA not linked 345
switchover 250 FTAs not linking to the master 340
Symnew 282 Jnextday in ABEND 345
Symphony file 246, 269, 320 Jnextday is hung 344
system reconfiguration 299 Jobs not running 343
missing calendars 345
missing resources 345
T multiple netman processes 341
tar file 121
TWS port 342
target workstation 78
writer process down 341
Task Assistant 23
two-way communication link 292
two-way TMR communication 292 wr read 298
TWS Distributed Monitors 3.7 224 wr unlink 298
TWS Plus for Tivoli collection 226 writer 57, 79, 286
TWS_Base 229 wuninstall 144
tws_launch_archiver.pl 133
TWS_NT_Base 229
TWSCCLog.properties 280
X
X.509 certificate 178
TWSConnector 143
XML 233
TWShome/network 273
xSeries 258
twsinst program 131
twsinst script 130
TWSPlus event group 230 Z
TWSRegistry 34 z/OS access method 14
TWSRegistry.dat 34, 147
type of switchover 245
U
Ultrastar 258
uninstaller 138
UNIX 258
UNIX logfile adapter 230
unmount 77
untar 338
UNTIL 209
Useful commands 149
user TMR Role 290
Userjob 220
V
VeriSign 180
Visual Basic 134
vpd.properties 148
W
wait time 299
warm start 255
wchkdb 146
wconsole 231
Windows 2000 adapter 229
Windows NT adapter 229
winstall process 135
wizard installation 132
Wlookup 293
Work with engines pane 20
Workaround 230
Workbench 224
workstation definition 254