Analysis Manager
User's Guide
Worldwide Web
www.mscsoftware.com
Support
http://www.mscsoftware.com/Contents/Services/Technical-Support/Contact-Technical-Support.aspx
Disclaimer
This documentation, as well as the software described in it, is furnished under license and may be used only in accordance with the
terms of such license.
MSC Software Corporation reserves the right to make changes in specifications and other information contained in this document
without prior notice.
The concepts, methods, and examples presented in this text are for illustrative and educational purposes only, and are not intended
to be exhaustive or to apply to any particular engineering problem or design. MSC Software Corporation assumes no liability or
responsibility to any person or company for direct or indirect damages resulting from the use of any information contained herein.
User Documentation: Copyright 2014 MSC Software Corporation. Printed in U.S.A. All Rights Reserved.
This notice shall be marked on any reproduction of this documentation, in whole or in part. Any reproduction or distribution of this
document, in whole or in part, without the prior written consent of MSC Software Corporation is prohibited.
This software may contain certain third-party software that is protected by copyright and licensed from MSC Software suppliers.
Additional terms and conditions and/or notices may apply for certain third party software. Such additional third party software terms
and conditions and/or notices may be set forth in documentation and/or at http://www.mscsoftware.com/thirdpartysoftware (or
successor website designated by MSC from time to time).
METIS is copyrighted by the regents of the University of Minnesota. A copy of the METIS product documentation is included with this
installation. Please see "A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs". George Karypis and Vipin
Kumar. SIAM Journal on Scientific Computing, Vol. 20, No. 1, pp. 359-392, 1999.
MSC, MSC Nastran, MD Nastran, MSC Fatigue, Marc, Patran, Dytran, and Laminate Modeler are trademarks or registered trademarks
of MSC Software Corporation in the United States and/or other countries.
NASTRAN is a registered trademark of NASA. PAM-CRASH is a trademark or registered trademark of ESI Group. SAMCEF is a
trademark or registered trademark of Samtech SA. LS-DYNA is a trademark or registered trademark of Livermore Software
Technology Corporation. ANSYS is a registered trademark of SAS IP, Inc., a wholly owned subsidiary of ANSYS Inc. ACIS is a
registered trademark of Spatial Technology, Inc. ABAQUS and CATIA are registered trademarks of Dassault Systemes, SA. EUCLID
is a registered trademark of Matra Datavision Corporation. FLEXlm and FlexNet Publisher are trademarks or registered trademarks of
Flexera Software. HPGL is a trademark of Hewlett Packard. PostScript is a registered trademark of Adobe Systems, Inc. PTC, CADDS
and Pro/ENGINEER are trademarks or registered trademarks of Parametric Technology Corporation or its subsidiaries in the United
States and/or other countries. Unigraphics, Parasolid and I-DEAS are registered trademarks of Siemens Product Lifecycle
Management, Inc. All other brand names, product names or trademarks belong to their respective owners.
P3:V2014:Z:ANM:Z: DC-USR-PDF
Contents
MSC Patran Analysis Manager User's Guide

Overview
 Purpose
 Product Information

Getting Started
 Integration with MSC Patran
 Quick Overview
 ABAQUS Submittals
 MSC.Marc Submittals
 Generic Submittals
 Files Created

Submit
 Introduction
 Selecting Files
 Where to Run Jobs
 Windows Submittal
 Multiple File Transfer

Configure
 Introduction
 Disk Space
  MSC Nastran Disk Space
  ABAQUS, MSC.Marc, and General Disk Space
 Memory
  MSC Nastran Memory
  ABAQUS Memory
  MSC.Marc and General Memory
 Mail
 Time
 General
 Restart
  MSC Nastran Restarts
  MSC.Marc Restarts
  ABAQUS Restarts
 Miscellaneous
  MSC Nastran Miscellaneous
  MSC.Marc Miscellaneous
  ABAQUS Miscellaneous
  General Miscellaneous

Monitor
 Introduction
 Running Job
  Windows Interface
 Completed Job
  Windows Interface
 Host/Queue
  Job Listing
  Host Status
  Queue Manager Log
  Full Listing
  CPU Loads

Abort
 Selecting a Job
 Aborting a Job
  UNIX Interface
  Windows Interface

System Management
 Directory Structure
 Installation
  Installation Requirements
  Installation Instructions
 X Resource Settings

Error Messages
 Error Messages

Example Interface
Chapter 1: Overview
Patran Analysis Manager User's Guide
Overview
 Purpose
 Product Information
Purpose
MD Nastran, MSC Marc, and MSC Patran are analysis software systems developed and maintained by
the MSC Software Corporation. MD Nastran and MSC Marc are advanced finite element analysis
programs used mainly for analyzing complex structural and thermal engineering problems. The core of
MSC Patran is a finite element analysis pre/postprocessor. Several optional products are available with
MSC Patran including advanced postprocessing, interfaces to third party solvers and application
modules. This document describes the MSC Patran Analysis Manager, one of these application modules.
The Analysis Manager provides interfaces within MSC Patran to submit, monitor and manage analysis
jobs on local and remote networked systems. It can also operate in a stand-alone mode directly with MD
Nastran, MSC Marc, ABAQUS, and other general purpose finite element solvers.
At many sites, engineers have several computing options. Users can choose from multiple platforms or
various queues when jobs are submitted. In reality, the resources available to them are not equal. They
differ based on the amount of disk space and memory available, system speed, cost of computing
resources, and number of users. In networked environments, users frequently do their modeling on local
workstations with the actual analysis performed on compute servers or other licensed workstations.
The MSC Patran Analysis Manager automates the process of running analysis software, even on remote
and dissimilar platforms. Files are automatically copied to where they are needed; the analysis is
performed; pertinent information is relayed back to the user; files are returned or deleted when the
analysis is complete even in heterogeneous computing environments. Time consuming system
housekeeping tasks are reduced so that more time is available for productive engineering.
The Analysis Manager replaces text-oriented submission scripts with a Motif-based, menu-driven
interface (or the native interface on Windows platforms), allowing users to submit and control their
jobs with point-and-click ease. No programming is required. Most users are able to use it productively
after a short demonstration.
Product Information
The MSC Patran Analysis Manager provides convenient and automatic submittal, monitoring, control
and general management of analysis jobs to local or remote networked systems. Primary benefits of using
the Analysis Manager are engineering productivity and efficient use of local and corporate network-wide
computing resources for finite element analysis.
The Analysis Manager has its own scheduling capability. If commercially available queueing software,
such as LSF (Load Sharing Facility) from Platform Computing Ltd. or NQS is available, then the
Analysis Manager can be configured to work closely with it.
This release of the MSC Patran Analysis Manager works explicitly with MD Nastran & MSC Marc
releases up to version 2006, and versions of ABAQUS up to 6.x. It also has a general capability which
allows almost any software analysis application to be supported in a generic way.
For more information on how to contact your local MSC representative see Technical Support, xii.
Chapter 2: Getting Started
Patran Analysis Manager User's Guide
Getting Started
 Integration with MSC Patran
 Quick Overview
 ABAQUS Submittals
 MSC.Marc Submittals
 Generic Submittals
 Files Created
Quick Overview
Before Patran's Analysis Manager can be used, it must be installed and configured by the system
administrator. See System Management for more on the installation and set-up of the module.
In so doing, the system administrator starts the Analysis Manager's queue manager (QueMgr) daemon
or service, which is always running on a master system. The queue manager schedules all jobs submitted
through the Analysis Manager. The master host is generally the system on which Patran or an analysis
module was installed, but does not have to be.
The system administrator also starts another daemon (or service) that runs on all machines configured to
run analyses, called the remote manager (RmtMgr). This daemon/service allows for proper
communication and file transfer to/from these machines.
Users that already have analysis input files prepared and are not using Patran may skip to The Main Form
after reviewing the rules for input files for the various submittal types in this Chapter.
When using Patran, in general, the user begins by setting the Analysis Preference to the appropriate
analysis, such as MSC Nastran, which is available from the Preferences pull down menu on the top menu
bar.
Once the Analysis Preference is set and a proper analysis model has been created in Patran, the user can
submit the job. Generally, the submittal process takes place from the Analysis application form when the
user presses the Apply button. The full interface with access to all features of Patran's Analysis Manager
is always available, regardless of the Preference setting, from the Tools pull-down menu or from the
Analysis Manager button on the Analysis form. The location of the submittal form is explained
throughout this chapter for each supported analysis code.
ABAQUS Submittals
Any standard ABAQUS (up to version 6.x) problem can be submitted using Patran's Analysis Manager.
This is accomplished from the Analysis form with the Analysis Preference set to ABAQUS.
The following rules apply to ABAQUS run-ready input files for submittal:
1. The filename may not have any '.' characters except for the extension. The filename must begin
with a letter (not a number).
2. The combined filename and path should not exceed 80 characters.
Run-ready input files prepared by Patran follow these rules. Correct and proper analysis files are created
by following the instructions and guidelines as outlined in the Patran Interface to ABAQUS Preference
Guide.
To submit, monitor, and manage ABAQUS jobs from Patran using the Analysis Manager, make sure the
Analysis Preference is set to ABAQUS. This is done from the Preferences menu on the main form. The
Analysis form appears when the Analysis toggle, located on the Patran application switch, is chosen.
Pressing the Apply button on the Analysis application form with the Action set to Analyze, Monitor,
or Abort will cause the Analysis Manager to perform the desired action. A chapter is dedicated to each
of these actions in the manual as well as one for custom configuration of ABAQUS submittals.
If multiple file systems have been defined, the Analysis Manager will generate aux_scratch and
split_scratch parameters appropriately based on current free space among all file systems for the
host on which the job is executing. See Disk Space for more information.
Restarts are handled by the Analysis Manager by optionally copying the restart (.res) file to the
executing host first, then running ABAQUS with the oldjob keyword. See Restart for more
information.
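For illustration only (the jobnames are placeholders, not taken from any example in this guide), such a restart corresponds to an ABAQUS command line of the general form:

    abaqus job=newjob oldjob=oldjob

where oldjob names the previous run whose restart (.res) file has been copied to the executing host.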
MSC.Marc Submittals
Any standard MSC.Marc (up to version 2006) problem can be submitted using Patran's Analysis
Manager. This is accomplished from the Analysis form with the Analysis Preference set to MSC.Marc.
The following rules apply to MSC.Marc run-ready input files for submittal:
1. The filename may not have any '.' characters except for the extension. The filename must begin
with a letter (not a number).
Run-ready input files prepared by Patran follow these rules. Correct and proper analysis files are created
by following the instructions and guidelines as outlined in the Marc Preference Guide.
To submit, monitor, and manage MSC.Marc jobs from Patran using the Analysis Manager, make sure the
Analysis Preference is set to MSC.Marc. This is done from the Preferences menu on the main form.
The Analysis form appears when the Analysis toggle, located on the Patran application switch, is chosen.
Pressing the Apply button on the Analysis application form with the Action set to Analyze, Monitor,
or Abort will cause the Analysis Manager to perform the desired action. A chapter is dedicated to each
of these actions in the manual as well as one for custom configuration of
MSC.Marc submittals.
Multiple file systems are not supported with MSC.Marc submittals. See Disk Space for
more information.
Restarts, user subroutines, externally referenced result (POST) and view factor files are handled by the
Analysis Manager by optionally copying these files to the executing host first, then running MSC.Marc
with the appropriate command arguments. See Restart for more information.
Generic Submittals
Aside from the explicitly supported analysis codes, MSC Nastran, MSC.Marc, and ABAQUS, most any
analysis application can be submitted, monitored and managed using Patran's Analysis Manager general
analysis management capability. This is accomplished by selecting Analysis Manager from the Tools
pull down menu on the main Patran form. This brings up the full Analysis Manager user interface which
is described in the next section, The Main Form.
When the Analysis Manager is accessed in this manner, it keys off the current Analysis Preference. If the
Preference is set to MSC Nastran, MSC.Marc, or ABAQUS, the jobname and any restart information
are passed from the current job to the Analysis Manager, which is brought up ready to submit, monitor, or
manage this job.
Any other Preference that is set must be configured correctly as described in Installation and is considered
part of the general analysis management. The jobname from the Analysis form is passed to the Analysis
Manager and the job submitted with the configured command line and arguments. (How to configure this
information is given in Miscellaneous and Applications.) If an analysis code is to be submitted, yet no
Analysis Preference exists for this code, the Analysis Manager is brought up in its default mode and the
user must then manually change the analysis application to be submitted via an option menu. This is
explained in detail in the next section.
On submittal of a general analysis code, the job file is copied to the specified analysis computer, the
analysis is run, and all resulting files from the submittal are copied back to the invoking computer
and directory.
UNIX Interface
Note:
The rest of this form's appearance varies depending on the Action that is set. Different
databoxes, listboxes, or other items are displayed in accordance with the Action/Object menu
settings. Each of these is discussed in the following chapters.
Windows Interface
Note:
The rest of this form's appearance varies depending on the Tab that is set. Different
databoxes, listboxes, or other items are displayed in accordance with the Tree and/or Tab
settings. Each of these is discussed in the following chapters.
Queue
The main purpose of this pull-down is to allow a user to Exit the program, Print
where appropriate, and to Connect To... other queue manager daemons or services.
User Settings can also be saved and read from this pull-down menu. For
administrators, other items on this pull-down become available when configuring the
Analysis Manager and for Starting and Stopping queue manager services. This is
detailed in System Management. These items in the Queue pull-down menu are only
enabled when the Administration tree tab is accessed.
Edit
Gives access to standard text Cut and Paste operations when applicable.
View
This pull-down menu allows the user mainly to update the view when jobs are being
run. The Refresh (F5) option graphically updates the window when in monitoring
mode. The program automatically refreshes the screen based on the Update Speed
also. All Jobs or only the current User Jobs can be seen if desired.
Tools
The Options under this menu allow the user to change the default editor when
viewing result files or input files. The number of user completed jobs viewable from
the interface is also set here.
Windows
The main purpose of this pull-down menu is to hide or display the Status Bar and
Output Window at the bottom of the window.
Help
Windows Icons
These icons appear on the main form.
Folder
The open folder icon is the same as the Connect To... option under the Queue pulldown menu, which allows you to connect to other queue manager daemons/services
that may be running and accessible.
Save
Printer
Paintbrush
Argument        Description

arg1 (mode)
    The program can be started up in one of the following 8 modes (enter the number only):
    1 - Start up the full interface. See The Main Form. (default)
    2 - Start up the Queue Monitor. See Monitor.
    3 - Start up the Abort Job now. See Abort.
    4 - Monitor a Running Job. See Monitor.
    5 - Monitor a Completed Job. See Monitor.
    6 - Submit the job. See Submit.
    7 - Submit in batch mode. (No user interface or messages appear.)
    8 - Same as 7 but waits until the job is done. Returns status codes:
        0=success, 1=failure, 2=abort.

arg2 (extension)
    This is the extension of the analysis input file (e.g., .dat, .bdf, .inp).
    (.dat is the default)

arg3 (jobname)
    This is the Patran jobname; the jobname that appears in any jobname textbox
    (without the extension). (default = unknown)

arg4 (application type)
    This is the analysis application, given as a name or a number: MSC Nastran
    (1, the default), ABAQUS, MSC.Marc, or a general application
    (20 = General code #1, 21 = General code #2, thru 29).

optional args (MSC Nastran)
    -coldstart coldstart_jobname

optional args (ABAQUS)
    -restart oldjobname
    The -runtype parameter followed by a 0, 1 or a 2 specifies whether the
    run is a full analysis, a restart, or a check run, respectively. The
    -restart parameter specifies the old jobname for a restart run.

optional args (MSC.Marc)
    See P3Mgr.

arg5 (x position)
    Optional - Specifies the X position of the upper left corner of the Patran right hand side
    interface in inches. (UNIX only)

arg6 (y position)
    Optional - Specifies the Y position of the upper left corner of the Patran right hand side
    interface in inches. (UNIX only)

arg7 (width)

arg8 (height)
If no arguments are provided, defaults are used (full interface (1), .dat, unknown, MSC Nastran (1)).
The arguments listed in the table above are very convenient when invoking the Analysis Manager from
pre and postprocessors such as Patran, which have access to the pertinent information which may be
passed along in the arguments. It may, however, be more convenient for the user to define an alias such
that the program always comes up in the same mode.
Here are some examples of invoking Patran's Analysis Manager:
$P3_HOME/bin/p3analysis_mgr
or
$P3_HOME/p3manager_files/p3analysis_mgr 1 bdf myjob 1
or
$P3_HOME/p3manager_files/p3analysis_mgr 1 bdf myjob MSC.Nastran
This invokes Patran's Analysis Manager by specifying the entire path name to the executable where
$P3_HOME is a variable containing the Patran installation directory. The entire user interface is brought
up specified by the first argument. The input file is called myjob.bdf and the last argument specifies
that MSC Nastran is the analysis code of preference.
Here is another example:
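For instance, a batch-mode submittal that waits for the job to finish (mode 8 from the argument table above; the jobname is illustrative) could look like:

    $P3_HOME/bin/p3analysis_mgr 8 bdf myjob 1

This submits myjob.bdf to the default host/queue with no user interface, waits until the job is done, and returns a status code (0=success, 1=failure, 2=abort).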
Files Created
Aside from the files generated by the analysis codes themselves, Patran's Analysis Manager also
generates files, the contents of which are described in the following table.
File
Description
jobname.mon
jobname.tml
This is the analysis manager log file that gives the status of the analysis job
and parameters that were used during execution.
jobname.submit
This file contains the messages that would normally appear on the screen if
the job were submitted interactively. When a silent submit is performed
(batch submittal), this file is created. Interactive submittals will display all
messages to a form on the screen.
jobname.stdout
This file contains any messages that would normally go to the standard
output (generally the screen) if the user had invoked the analysis code from
the system prompt.
jobname.stderr
This file will contain any messages from the analysis which are written to
standard error. If no such messages are generated this file does not appear.
Any or all of these files should be checked for error messages and codes if a job is not successful and it
does not appear that the analysis itself is at fault for abnormal termination.
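For example, on UNIX these files can be scanned quickly for obvious problems with something like the following (the jobname is illustrative, and jobname.stderr may not exist if nothing was written to standard error):

    grep -i error myjob.submit myjob.stdout myjob.stderr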
Chapter 3: Submit
Patran Analysis Manager User's Guide
Submit
 Introduction
 Selecting Files
 Where to Run Jobs
 Windows Submittal
 Multiple File Transfer
Introduction
The process of submitting a job requires the user to select the file and options desired. The job is
submitted to the system and ultimately executes MD Nastran, ABAQUS, MSC.Marc, or some other
application module. Patran's Analysis Manager properly handles all necessary files and provides
monitoring capability to the user during and after job execution. See Monitor for more information on
monitoring jobs.
In Patran, jobs are submitted one of two ways: through the Analysis application form for the particular
Analysis Preference, or outside of Patran through Patran's Analysis Manager user interface with the
Action (or tree tab in the Windows interface) set to Submit. Submitting through the Analysis form in
Patran makes the submittal process transparent to the user and is explained in Getting Started.
For more flexibility the full user interface can be invoked from the system prompt as explained in the
previous chapter or from within Patran by pressing the Analysis Manager button on the Analysis
application form or by invoking it from the Tools pull down menu. This gives access to more advanced
and flexible features such as submitting existing input files from different directories, changing groups
or organizations (queue manager daemons/services), selecting different hosts or queues, and configuring
analysis specific items. The rest of this chapter explains these capabilities.
Below is the UNIX submittal form (see Windows Submittal for the Windows interface).
Selecting Files
The filename of the job that is currently opened will appear in a textbox of the form on the previous page.
If this is not the job to be submitted, press the Select File button and a file browser will appear.
Below is the UNIX file browser form (see Windows Submittal for the Windows interface).
All appropriate files in the selected directory are displayed in the file browser. Select the file to be run
from those listed in the file browser or change the directory path in the Filter databox and then press the
Filter button to re-display the files in the new directory indicated. An asterisk (*) serves as a wild card.
Select OK once the file is properly selected and displayed, or double-click on the selected file.
Note:
The directory in the Filter databox indicates where the input file will be copied from upon
submission AND where the results files from the analysis will be copied to upon
completion. Any existing results files of the same names will be overwritten on completion
and you must have write privileges to the specified directory.
Where to Run Jobs
The submit function can also be invoked manually from the system prompt. See Invoking the Analysis
Manager Manually for details. It can be invoked in both an interactive and a batch mode.
Note:
Often, the user will look into the Host/Queue listing window described in Host/Queue, to
see what host/queue is most appropriate (free or empty) before selecting from the list and
submitting. When submitting to an LSF/NQS queue, the host is selected automatically;
however, you can select a particular host from the Choose Specific Host button (not shown)
if desired.
Windows Submittal
The interface on Windows platforms is quite different in appearance from that for UNIX, but the process
is almost identical. Submitting through this interface is simple; just follow these steps:
Once a file is selected, you can edit the file if necessary before submitting it. This is done by pressing the
Edit File button. By default the Notepad application is used as the editor. The default editor can be
changed under the Tools | Options menu pick as shown below.
Multiple File Transfer
The Analysis Manager supports multiple file transfer if a solver requires multiple input files in order to
run a solution. This requires the input files to be listed in a LST file that has the same name as the job
(jobname). For example, if the job name is 'test' the LST file name must be 'test.lst'. The input files can
be listed in the LST file like this:

    config.usr
    test.frq
    input12.inp
    locations.xyz

When the job is submitted to the Analysis Manager, it will find the list file $JOBNAME.lst, parse it, and
additionally copy over the files of the above names (assumed to be in the same location) to the solver
machine.
Chapter 4: Configure
Patran Analysis Manager User's Guide
Configure
 Introduction
 Disk Space
 Memory
 Mail
 Time
 General
 Restart
 Miscellaneous
Introduction
By setting the Action to Configure on the main Patran Analysis Manager form, the user has control of a
variety of options that affect job submittal. The user can customize the submitting environment by setting
any of the parameters discussed in this chapter. These parameters can be saved such that all subsequent
submittals use the new settings or they can be set for a single submittal only. All of this is at the control
of the user.
Disk Space
The Disk Space configuration is analysis code specific.
Note:
Patran's Analysis Manager will only check for sufficient disk space if the numbers for
DBALL, MASTER, and SCRATCH are provided. An error message will appear if not
enough disk space is available. If these values are not specified the job will be submitted
and will run until completion or the disk is full and an error occurs.
The Windows interface for MSC Nastran disk space is shown below.
The Windows interface for ABAQUS, MSC.Marc, or other user defined analysis disk space requirements
is shown below.
Memory
The Memory configuration is analysis code specific.
The Windows interface for MSC Nastran memory requirements is shown below:
ABAQUS Memory
After selecting the Memory option on the Object menu, the following Memory form appears.
The Windows interface for MSC.Marc or other general application memory requirements is shown
below:
Mail
The Mail configuration setting determines whether or not to have mail notification and, if so, where to
send the mail notices.
Note:
In this version there is no mail notification. This feature has been disabled.
Time
Any job can be submitted to be run immediately, with a delay, or at a specific future time. The default
submittal is immediate. To change the submittal time, use the following Time form.
The Windows interface for setting job submit delay and maximum job time is specified directly on the
Submit | Job Control tab as shown below:
General
The General configuration form allows preferences to be set for a number of items as described below.
Nothing in this form is analysis specific.
Note:
Items not described on this page are described on subsequent pages in this section.
The Windows interface for General settings is specified directly on the Submit | General tab as shown
below:
Note:
Unlike the UNIX interface, to save a default Host/Queue, you select the Host/Queue on the
Job Control tab and then save the settings under the Queue pull-down menu.
Project Directory
The project directory is a subdirectory below the Patran Analysis Manager install path where the
Analysis Manager's job-specific files are created during job execution.
Projects are a method of organizing one's jobs and results. For instance, if a user had two different
bracket assembly designs and each assembly contained many similar if not identical parts, each assembly
file might be named assembly.dat. But to avoid interference, each file is executed out of a different
project directory.
If the first project is design1 and the second is design2, then one job is executed out of <file
system(s) for selected host>/proj/design1 and the other is <file system(s)
for selected host>/proj/design2. Hence, the user could have both jobs running at the same
time without any problems, even though they are labeled with the same file name. See Disk Configuration.
When the job is completely finished, all appropriate files are copied back to the originating host/directory
(the machine and directory where the job was actually submitted from).
Pre and Post Commands
The capability exists to execute commands prior to submission of an analysis in the form of a pre and
post capability. For instance, let us say that before submitting an analysis the user needs to translate an
input deck from ASCII form to binary form running some utility called ascbin. This is done on the
submitting host by typing ascbin at the system prompt. This same operation can be done by specifying
ascbin in the Pre databox for the originating host.
Similarly, on completion of the analysis and after the files have been copied back from the executing host,
the user needs to again run a program to translate the results from one file format to another using a
program called trans. He would then place the command trans in the Post databox for the originating
host.
A Pre and a Post command can be specified on the executing (analysis) host side also.
These commands specified in the databoxes can be as simple as a one word command or can reference
shell scripts. Arguments to the command can be specified. Also, if keywords, such as the jobname or
hostname, from Patran's Analysis Manager are needed, they can be referenced by placing a $ in front of
them. The available keywords that are interpreted in the Pre and Post databoxes can be examined by
pressing the Keyword Index button. For more explanation of keywords, see General Miscellaneous.
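As a simple illustration (the ascbin and trans utilities are the hypothetical programs from the scenario above, and their arguments are assumptions), the databoxes for the originating host might contain:

    Pre:  ascbin $JOBNAME.asc $JOBNAME.dat
    Post: trans $JOBNAME.out $JOBNAME.txt

Here $JOBNAME is one of the Analysis Manager keywords and is substituted with the actual jobname at run time.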
Separate User
The Separate User option allows job submittal to the selected system as a different user in case the current
user does not have an account on the selected system. This must be enabled and set up in advance by the
system administrator. In order for this to work properly, the separate user account specified in this
databox must exist on both the selected system to run the job and the machine where the job is being
submitted from. See Examples of Configuration Files for an explanation on how to set up separate users
submission.
Default Host/Queue
The Default Host/Queue, if saved, is the host/queue to which jobs are submitted when submitted directly
from Patran by using the Apply button on the Analysis form. It is also the host/queue to which jobs will
be submitted when using the batch submittal from the direct Analysis Manager command line. It is also
the host/queue which will come up as the selected default when the full Analysis Manager interface is
started. If this setting is not saved, the default host/queue is the first in the list.
Patran Database
You can specify the name of a Patran database so that a post-submit task, such as running a script file,
will know which Patran database to use for what it (the script) wants to do (like automatically reading the
results back in after a job has completed).
Copy/Link Results Files
By default all results files are copied back to the directory where the input file resides. The copy/link
functionality is just a method for transferring files. If you are remote, the files will be copied via the
Analysis Manager. But if you run locally there is no good reason to transfer the files or even copy
them, so you can set this flag and the Analysis Manager will either link the files in the work directory to
the original ones or use the copy system command instead of trying to read one file and send bytes over to
write another file. If you are low on disk space the link is a good way to go, but of course the
Analysis Manager needs to be able to see the results files from the submittal host to the analysis host
scratch disk space location for this to work.
Restart
The Restart configuration is analysis code specific and does not apply to General applications.
Within Patran, to perform a restart using the Analysis Manager, the job is submitted from the Analysis
application as normal; however, a restart job must be indicated. When invoking the Analysis Manager's
main interface with a restart job from Patran, this information is passed to the Analysis Manager and the
restart jobname shows up in the Configure| Restart form. The restart job can be submitted directly from
the main form or from Patran. In either case, the restart job looks for the previous job to be restarted in
the local path and/or on the host machine. If this restart jobname is not specified, the databases must be
located on the host machine to perform a successful restart.
MSC.Marc Restarts
Restarts in MSC.Marc are quite similar to those in MSC Nastran.
ABAQUS Restarts
After selecting the Restart option on the menu, the following Restart form appears.
Miscellaneous
The Miscellaneous configuration is analysis code specific.
MSC.Marc Miscellaneous
After selecting the Miscellaneous option on the menu, the following form appears.
Note:
When invoked from Patran, items requiring file locations are usually passed directly into
the Analysis Manager such as the User Subroutine, POST file, and View Factor file. Thus,
in this case, there would be no need to reenter these items.
ABAQUS Miscellaneous
After selecting the Miscellaneous option on the menu, the following form appears.
General Miscellaneous
After selecting the Miscellaneous option on the menu, the following form appears.
Examples of some specific command lines used to invoke analysis codes are given here.
Example 1:
The first example involves the ANSYS 5 code. First the Analysis Preference must be set to ANSYS 5
from Patran's Analysis Preference form and an input deck for ANSYS 5 must have been generated via
the Analysis application (this is done by setting the Action to Analyze, and the Method to Analysis
Deck). Then Patran's Analysis Manager can be invoked from the Analysis main form. Note that a direct
submittal from Patran is not feasible in this and the subsequent example.
The jobfile (jobname.prp in this case) is automatically displayed as the input file and the Submit
button can be pressed. The jobfile is the only file that is copied over to the remote host with this general
analysis submittal capability.
In the host.cfg configuration file the path_name of the executable is defined. The rest of the
command line would then look like this:
-j $JOBNAME < $JOBFILE > $JOBNAME.log
If the executable and path defined is /ansys/bin/ansys.er4k50a, then the entire command
that is executed is:
/ansys/bin/ansys.er4k50a -j $JOBNAME < $JOBFILE > $JOBNAME.log
Here the executable is invoked with a parameter (-j) specifying the jobname. The input file
($JOBFILE) is redirected using the UNIX redirect symbol as the standard input and the standard output
is redirected into a file called $JOBNAME.log. The variables beginning with the $ sign are passed by
Patran's Analysis Manager. All resulting output files are copied back to the invoking host and directory
on completion.
Example 2:
This is a more complicated example where an analysis code needs more than one input file. The general
analysis capability in Patran's Analysis Manager only copies one input file over to the remote host for
execution. If more than one file needs to be copied over then a script must be developed for this purpose.
This example shows how Patran FEA can be submitted via a script that does the proper copying of files
to the remote host.
The Analysis Preference in Patran is set to Patran FEA and, in addition to setting the Preference, the
input file suffix is specified as .job. Patran FEA needs three possible input files: jobname.job,
jobname.ntl, and an auxiliary input file. The jobname.job file is automatically copied over to the
remote host. The auxiliary input file can be called anything and is specified in the jobname.job file.
A shell script called FeaExecute is created and placed on all hosts that allow execution of Patran FEA.
This FeaExecute script does the following:
1. Parses the jobname.job file to find the name of the auxiliary input file if it is specified.
2. Copies the auxiliary input file and the jobname.ntl file to the remote host.
3. Executes the FeaControl script, which controls actual execution of the Patran FEA job. This is
a standard script which is delivered with the Patran FEA installation.
In the Patran Analysis Manager configuration file, the FeaExecute script and its path are specified.
The input parameters for this script are:
-j $JOBNAME -h $P3AMHOST -d $P3AMDIR
which specify the jobname, the host from which the job was submitted and the directory on that host
where job was submitted from. With this information the job can be successfully run. The full command
that is executed on the remote host is (assuming a location of FeaExecute):
/fea/bin/FeaExecute -j $JOBNAME -h $P3AMHOST -d $P3AMDIR
The FeaExecute script contents are shown for completeness:
#! /bin/sh
# Script to submit Patran FEA to a remote host via the Analysis Manager
# Define a function for displaying valid params for this script
abort_usage( ) {
cat 2>&1 <</
Usage: $Cmd -j Jobname -h Remote_Host -d Remote_Dir
/
exit 1
}
# Define a function for checking status
check_status( ) {
Status=$1
if [ $1 -ne 0 ] ; then
echo Error detected ... aborting $Cmd
exit 2
fi
}
# Define a function for doing a general-purpose exit
exit_normal( ) {
echo $Cmd complete
exit 0
}
# Define a function for extracting keyword values from
# the .job file. Convert keyword value to upper case
GetKeyValue( )
{
JobFile=${1?} ; Key=`echo ${2?} | sed 's/ //g'`
cat $JobFile | sed 's/ //g' | grep -i "^$Key=" | \
sed 's/^.*=//' | tr '[a-z]' '[A-Z]'
}
# Define a function for extracting keyword values from
# the .job file. Return the correct case for all characters
# (don't force anything to upper case.)
GetKeyValueCC( )
{
JobFile=${1?} ; Key=`echo ${2?} | sed 's/ //g'`
cat $JobFile | sed 's/ //g' | grep -i "^$Key=" | \
sed 's/^.*=//'
}
# Define a function to get the Jobname from the jobfilename
#
# # usage: get_Jobname filespecification
#
get_Jobname()
{
echo $1 | sed -e 's;^.*/;;' -e 's;\..*$;;'
}
# Determine the command name of this script
Cmd=`echo $0 | sed 's;^.*/;;'`
# Assign the default argument parameter values
Jobname=
Verbose=
if [ "<installation_directory>" = "" ] ; then
Acommand=<installation_directory>/bin/FeaControl
else
Acommand=<installation_directory>/bin/FeaControl
fi
Status=0
# Parse through the input arguments.
if [ $# -ne 6 ] ; then
abort_usage
fi
while [ $# -ne 0 ] ; do
case $1 in
-j) Jobname=$2 ; shift 2 ;;
-h) remhost=$2 ; shift 2 ;;
-d) remdir=$2 ; shift 2 ;;
*) abort_usage ;;
esac
done
# Runtime determination of machine/system type
OsName=`uname -a | awk '{print $1}'`
case $OsName in
SunOS)
Rsh=rsh
RshN1=-n
RshN2=
;;
HP-UX)
Rsh=remsh
RshN1=
RshN2=
;;
AIX)
Rsh=/usr/ucb/remsh
RshN1=
RshN2=-n
;;
ULTRIX)
Rsh=/usr/ucb/rsh
RshN1=
RshN2=-n
;;
IRIX)
Rsh=rsh
RshN1=
RshN2=-n
;;
*)
Rsh=rsh
RshN1=
RshN2=
;;
esac
# Determine the fully expanded names for the input files.
JobFile=${Jobname}.job
AifFile=`GetKeyValueCC $JobFile "AUXILIARY INPUT FILE"`
# Copy the files over from the remote host
NtlFile=${Jobname}.ntl
lochost=`hostname`
curdir=`pwd`
if [ "$curdir" = "$remdir" ] ; then
crap=1
else
if [ "$remhost" = "$lochost" ] ; then
cp ${remdir}/${NtlFile} .
if [ "$AifFile" = "" ] ; then
crap=1
else
cp ${remdir}/${AifFile} .
fi
else
rcp ${remhost}:${remdir}/${NtlFile} .
if [ "$AifFile" = "" ] ; then
crap=1
else
rcp ${remhost}:${remdir}/${AifFile} .
fi
fi
fi
# Perform the analysis
$Acommand $Jobname ; check_status $?
# Successful exit of script
exit_normal
Chapter 5: Monitor
Patran Analysis Manager User's Guide
Monitor
 Introduction
 Running Job
 Completed Job
 Host/Queue
Introduction
By setting the Action to Monitor on the main Patran Analysis Manager form, the user can monitor not
only active jobs but also Host or Queue activity. In addition, graphical monitoring graphs of completed
jobs can also be recalled at any time. Each of these functions is explained in this chapter.
Each of these functions for monitoring jobs or hosts/queues is also accessible directly from the Analysis
application form within Patran. The only difference is that the full user interface of Patran Analysis
Manager is not accessed first; instead, the monitoring forms are displayed directly as explained in the
next few pages.
Note:
The UNIX interface is shown above. In subsequent sections both the UNIX and the Windows
interface are shown. Monitoring in the Windows interface is done from the Monitor tree tabs.
Running Job
With the Action set to monitor a Running Job, pertinent information about a specific job that is currently
running or queued to run can be obtained. Jobs can be monitored from any host in the Analysis Manager's
configuration, not just from where they were submitted.
Note:
This form is not displayed when a job is monitored directly from Patran. Instead, only the
monitoring form is displayed as shown on the next page since all the pertinent information to
monitor a job is passed in from Patran. The Windows interface is displayed further down also.
A graph of the selected running job appears, showing the duration of the job where it has been or is
running.
The following table describes all the widgets that appear in this job graph.
Item                Description

Job Status
    This widget gives the total elapsed time in blue and the actual CPU time in red.
    A check mark appears when the job is completed successfully. Otherwise, an
    X appears. The clear portion of the blue bar indicates the amount of time the
    job was queued before execution began. Elapsed and CPU time are reported in
    minutes.

Percent CPU Usage
    This widget gives the percentage of CPU that is being used by the analysis code
    at any given time. The maximum percentage of CPU during job execution is
    indicated as a grey shade which remains at the highest level of % CPU usage.

Disk Usage
    This widget gives the total amount of disk space used by the job during
    execution in megabytes.

Percent Disk Usage
    This widget gives the percentage of the total disk space that this job occupies at
    any given time for all file systems. If you click on this widget with the mouse, all
    file systems will be shown. The maximum percentage of disk space used during
    job execution is indicated as a grey shade which remains at the highest level.

Job Information

Returning Job Files
    All files created during execution are copied back and displayed in this list box.
    After job completion and during job execution, it is possible to click on any of
    these files to view them with a simple editor/viewer. The following keystrokes
    are available in this viewer window:
    ctrl-s:
    ctrl-n:
    ctrl-c:
    ctrl-<:
    ctrl->:
Item                Description

Controls
    Remove beginning queue time - takes off the queued portion of the graphics bar,
    e.g., the portion that is not blue before the job begins.
    Suspend/Resume Job - when toggled on, the job will be indefinitely suspended.
    A banner across the CPU dial will display the word SUSPENDED while the job
    is suspended. Toggle the switch off to resume the job. The banner will be
    removed.
    Update (Sec.) - how often to update the graph/display.
    Pixels Per Min - how many pixels wide per minute.
    MB Per Inch - how many megabytes per inch to be displayed.
    Normalize Graph - make the graph fit in the window area.

Close

Status Window
    Status messages are returned in this window. If the log file is being monitored,
    then log file lines will appear here also for MSC Nastran and ABAQUS.
The bottom left panel lists information about the job, such as date and time of event, task name, host name,
and status. Any error and status messages will appear here. An example listing is:
Fri Jan
The running job function can also be invoked manually from the system prompt. See Invoking the
Analysis Manager Manually for details.
Windows Interface
For Running Jobs, when a job is submitted from the Windows interface, the user is queried as to
whether he/she wants the interface to switch automatically to the monitoring mode.
When a job is running the Monitor tree shows running jobs and jobs that have been queued.
When a running job in the tree structure is selected, three tabs become available that give specific
status of the job, allow viewing of created output files, and give a graphical display of memory, CPU and
disk usage.
Completed Job
This is an Analysis Manager utility that allows the user to graph a particular completed job run by the
Analysis Manager.
Note:
This form is not displayed if this action is selected directly from the Analysis application
form in Patran. Instead, only the monitoring form is displayed as shown on the next page.
The Windows interface is also shown.
The .mon file is created when a job is first submitted to Patran's Analysis Manager. Information on all
the job tasks is written to the .mon file. Time submitted, job name, job number, time actually run, time
finished and completion status are all recorded in the file, so that this Analysis Manager function can read
the file and have enough information to graph the job's progress completely.
The explanation of the graphs on this form is identical to that of a Running Job except that the Update
slider bar does not show up since it is not applicable to a completed job.
Windows Interface
For Completed Jobs, the Windows interface displays them under the Completed Jobs tab in the
Monitor tree.
Host/Queue
Information about all hosts or queues used by Patran's Analysis Manager and jobs submitted through the
Analysis Manager can be reviewed using the Monitor Host/Queue selection. Options available include
Job Listing, Host Status, Queue Manager Log and a Full Listing. Press the Apply
button to invoke these functions. The user can vary how often the information is updated, using the
slider control.
The Host/Queue monitoring function can also be invoked manually from the system prompt. See
Invoking the Analysis Manager Manually for details.
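For example, using the startup modes described in Getting Started, the Queue Monitor can be brought up directly with something like:

    $P3_HOME/bin/p3analysis_mgr 2

(mode 2 starts the Queue Monitor; the remaining arguments take their defaults).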
Job Listing
The initial application form of Monitor's Host/Queue appears as follows:
At the top of the main form for Monitor Queue is a slider labeled Update Time (Min.). Drag the slider
to the left to shorten the interval between information updates, or drag the slider to the right to slow
update of information. The default interval time is 5 minutes. In the Windows interface the refresh setting
is set under the View | Update Speeds menu pick.
The update interval may be changed at any time during the use of any Monitor Queue options.
All jobs are listed which are currently running in some capacity. Information about each job includes:
Job Number, Job Name, Owner and Time. The job number is a unique, sequential number that the
Analysis Manager generates for each job submitted to it. Pressing the Close button will close down the
monitor form.
Host Status
When the Host Status toggle is highlighted the form appears as follows:
The status is reported on all hosts or queues used by the Analysis Manager. Information about each
host/queue includes: host/queue name (Host Name), number of jobs running (# Running), number of jobs
queued (# Queued), maximum allowed to run concurrently (Max Running), and Host Type (i.e., MSC
Nastran).
If NQS or LSF is being used, queue information is provided instead of host information. See Submit for
more information on default settings.
The update interval may be changed at any time during the use of any Monitor Queue options. The default
interval time is 5 minutes. In Windows, use the View | Update Speeds menu option.
To exit from the Monitor Queue, select the Close button on the bottom of the main form on the right.
Log files are unaffected when the form is closed.
Queue Manager Log
The most recent jobs submitted are listed, regardless of where or when they were run. Information about
each job includes: date and time of event, event description, job number, job or task name or host name,
task type or PID (process id of task), and owner. Most recent jobs are listed in the text list box from the
time the Analysis Managers Queue Manager daemon was started. See System Management for a
description of the Queue Manager daemon.
The update interval may be changed at any time during the use of any Monitor Queue options. The default
interval time is 5 minutes. In Windows, use the View | Update Speeds menu option.
To exit from the Monitor Queue, select the Close button on the bottom of the main form on the right.
Log files are unaffected when the form is closed.
Full Listing
When Full Listing is selected, the form appears as follows:
The Full Listing information shows all job tasks submitted. Information about each host/queue includes:
status (blue = running; red = queued), job number, task name, task type, date and time submitted, and
owner.
The queue name is shown if an additional scheduler is present and being used (LSF/Utopia), along with
a pointer to the actual queue name.
The update interval may be changed at any time during the use of any Monitor Queue options. The default
interval time is 5 minutes.
To exit from the Monitor Queue, select the Close button on the bottom of the main form on the right.
CPU Loads
When CPU Loads is selected, the form appears as follows:
The load on the workstations and computers can be determined by inspecting this form which
periodically updates itself. The list of hosts or queues appears with the percent CPU usage, total amount
of free disk space, and available memory at that particular snapshot in time. The user may sort the hosts
by CPU UTILIZATION, FREE DISK SPACE, or AVAILABLE MEMORY, so that the host or
queue with the best situation appears at the top. Also, indicated in blue are the best hosts or queues for
each category of CPU, disk space and memory.
Chapter 6: Abort
Patran Analysis Manager User's Guide
Abort
 Selecting a Job
 Aborting a Job
Selecting a Job
This capability allows the user to terminate a running job originally submitted through Patran's Analysis
Manager. When aborting a job, the Analysis Manager cleans up all appropriate files.
The abort function can also be invoked manually from the system prompt. See Invoking the Analysis
Manager Manually for details. A currently running job must be available.
Aborting a Job
You can only abort jobs which you own (i.e., originally submitted by you).
When a job is aborted, the analysis files are removed from where they were copied to, and all scratch and
database files are removed, unless the job is a restart from a previous run, in which case the scratch files
are removed, but the original database files from previous runs are left unaffected.
Note:
When a job is aborted from within Patran, no user interface appears. The job is simply aborted
after the confirmation.
UNIX Interface
Press the Apply button on the main form with the Action set to Abort as shown on the previous page.
You are asked to confirm with,
Are you sure you wish to abort job # <jobname> ?
Press the OK button to confirm.
The Cancel button will take no action and close the Abort form.
Windows Interface
There are three ways to abort a job from the Windows interface.
1. When the job is initially submitted, a modal window appears asking whether you want to monitor
or abort the job or simply do nothing and let the job run.
2. Once the job is running, from the Job Control tab in the Monitor tree structure. There is an
Abort button on this form to terminate the job.
3. From the Monitor | Running Jobs tree structure you can right mouse click on a running job. A
pulldown menu appears from which you can select Abort.
System Management
 Directory Structure
 Installation
 X Resource Settings
Directory Structure
The Analysis Manager has a set directory structure, configurable environment variables and other tunable
parameters which are discussed in this chapter.
The Analysis Manager directory structure is displayed below. The main installation directory is shown
as an environment variable, $P3_HOME = <installation_directory>. Typically this would be
/msc/patran200x or something similar.
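As a rough sketch of that structure (assuming the default organization and a single architecture directory; exact contents vary by installation):

    $P3_HOME/p3manager_files/
        bin/<arch>/     P3Mgr, QueMgr, RmtMgr, JobMgr and the other programs described below
        default/
            conf/       org.cfg, host.cfg, disk.cfg, lsf.cfg, nqs.cfg
            log/        QueMgr.log, QueMgr.rdb
            proj/       job-specific project directories
        <org>/          optional additional organization with its own conf, log and proj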
where:
<org> (optional) is an additional organizational group and shares the same directory tree as default
yet will have its own unique set of configuration files. See Organization Environment Variables.
<arch> is one of:
HP700
RS6K
SGI5
SUNS
LX86
WINNT (Windows 2000 or XP)
There may be more than one <arch> directory in a filesystem. Architecture types that are not applicable
to your installation may be deleted to reduce disk space usage; however, all machine architecture types
that will be accessed by the Analysis Manager must be kept. Each one of the executables under the bin
directory is described in Analysis Manager Programs.
All configuration files are explained in detail in Examples of Configuration Files. These include org.cfg,
host.cfg, disk.cfg, lsf.cfg, and nqs.cfg.
Organization groups and their uses are described in Organization Environment Variables.
The QueMgr.log file is created when a Queue Manager daemon is started and does not exist until this
time; therefore, it will not exist in the above directory structure until after the initial installation. Use
of this file is described in Starting the Queue/Remote Managers. The file QueMgr.rdb is also
created when a Queue Manager daemon is started and is a database containing job-specific statistics of
every job ever submitted through the Queue Manager for that particular set of configuration files or
<org>. The contents of this file can be viewed on Unix platforms using the Job_Viewer executable.
Items in the bin and exe directories are scripts to enable easier access to the main programs. These scripts
make sure that the proper environment variables are set before invoking the particular programs that reside
in $P3_HOME/p3manager_files/bin/<arch>.
p3analysis_mgr      Invokes P3Mgr
p3am_admin
p3am_viewer
QueMgr
RmtMgr

Note:
The directories (conf, log, proj) for each set of configuration files (organizational
structure) must have read, write, and execute (777) permission for all users. This can be the
cause of task manager errors.
memory and cpu resources, so users will not notice performance effects. Also these processes can run as
root (Administrator on Windows) or as any user, if these privileges are not available.
Each RmtMgr binds to a known/chosen port number that is the same for every RmtMgr machine. Each
RmtMgr process collects machine statistics on free CPU cycles, free memory and free disk space and
returns this data to the QueMgr at frequent intervals. The RmtMgr is actually used to perform a
command and return the output from that command on the host on which it is running. This is essentially
a remote shell (rsh host command) as on a Unix machine.
Note:
It's best to run the RmtMgr service on Windows as someone other than SYSTEM (the
default if you do not do anything different). After installing the RmtMgr, use the control
panel to access the services and then find the RmtMgr and change its startup to use a
different account, something generic if it exists, or an Analysis Manager admin. account. If
the RmtMgr is running as a user and not SYSTEM then the NasMgr/MarMgr/AbaMgr/GenMgr
will run as this user and have access to Windows networking, shared drives and
all. If it is run as SYSTEM then it is limited to only local Windows drives, shares, etc. The
QueMgr does not do much in the way of files so running that as SYSTEM is OK.
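As an alternative to the Services control panel (not described above, so treat this only as a sketch), the
service account can also be changed from an administrator command prompt, assuming the default
service name MSCRmtMgr and a hypothetical local account am_admin:
sc config MSCRmtMgr obj= ".\am_admin" password= "<password>"
sc stop MSCRmtMgr
sc start MSCRmtMgr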
Job Manager
The Job Manager (JobMgr executable name) runs for the life of a job. When a user submits a job using
the Analysis Manager, the user interface tells the Queue Manager about the job and then starts a Job
Manager daemon. The Job Manager daemon receives and saves job information from the Analysis
Manager's user interface. The main purpose of the Job Manager is to record job status for monitoring and
file transfer.
During the execution of jobs, users utilizing the Analysis Manager's user interface program can
seamlessly connect to the Job Manager of their job and see what the status of the job is. In summary, the
Job Manager controls the execution of a single job and is always aware of the current status of that job.
The Job Manager runs on the submit host machine.
During execution, the NasMgr relays pertinent information (disk usage, cpu, etc.) to the Job Manager
(JobMgr), which then updates the graphical information displayed to the user. The NasMgr is also
responsible for cleaning up files and putting results back to desired locations, as well as reporting its
status to the Job Manager. This daemon runs on the analysis host machine and only for the life of the
analysis.
MSC.Marc Manager
The MSC.Marc Manager (MarMgr executable name) runs only for the life of a job. The MarMgr is
identical in function to the MSC Nastran Manager (NasMgr) except it is for execution of MSC.Marc
analyses.
ABAQUS Manager
The ABAQUS Manager (AbaMgr executable name) runs only for the life of a job. The AbaMgr is
identical in function to the MSC Nastran Manager (NasMgr) except it is for execution of ABAQUS
analyses.
General Manager
The General Manager (GenMgr executable name) runs only for the life of a job. The GenMgr is
identical in function to the MSC Nastran Manager (NasMgr) except it is for execution of general analysis
applications.
Editor
The editor (p3edit executable name) runs when requested from P3Mgr when viewing results files or
editing the input deck.
Text Manager
The Text Manager (TxtMgr executable name) is a text based interface to the Analysis Manager to
illustrate the Analysis Manager API. See Application Procedural Interface (API).
Job Viewer
The job viewer (Job_Viewer executable name) is a simple program available on UNIX platforms for
opening and viewing job statistics from the Analysis Manager's database file. This file is generally located
in $P3_HOME/p3manager_files/default/log/QueMgr.rdb. You must run Job_Viewer
and then open the file manually.
JobMgr
Started automatically by P3Mgr/TxtMgr (or RmtMgr); no command line arguments.
RmtMgr
This is a daemon on Unix or a service on Windows and is started automatically at boot time. Possible
command line arguments (also see Organization Environment Variables):
-version
-ultima
-port <####>
    Port number to use. This MUST be the SAME port number for ALL RmtMgrs across
    the whole network (per QueMgr). The default is 1800 if not set.
-path <path>
    Use to specify the base path for finding the Analysis Manager executables:
    $P3_HOME/p3manager_files/bin/{arch}/*Mgr. <path> is the base path $P3_HOME.
    The default is to use the same path the program was started with, but in the
    case of "./RmtMgr ...." this will not work. If a full path is used to start
    RmtMgr (as in a startup script) then this argument is not needed.
-orgpath <path>
    Use to specify the base path for finding the Analysis Manager org tree
    (configuration files and directories):
    $P3_HOME/p3manager_files/{org}/{conf,log,proj}. <path> is the base path
    $P3_HOME. Specify this only if the org tree location differs from the -path
    argument. RmtMgr writes files in the proj/{projectname} directories, so if
    this is not the default (desired) location (same as -path above) then this
    argument needs to be set.
-name <name>
    Windows only. Use if you want to run more than one RmtMgr service. Each must
    have a unique name so the start/stop method can distinguish which one to work
    with. The default <name> is MSCRmtMgr.
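For example, a boot-time startup script might launch RmtMgr with a full path and an explicit port; the
installation path and the LX86 architecture below are only assumed values for illustration:
/msc/patran200x/p3manager_files/bin/LX86/RmtMgr -port 1800 -path /msc/patran200x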
QueMgr (AdMgr)
This is a daemon on Unix or a service on Windows and is started automatically at boot time. Possible
command line arguments (also see Organization Environment Variables):
-version
-ultima
-port <####>
    Port number to use. The default is 1900 if not set. If using an org.cfg file
    then use this argument with the -org option below to force a port number and
    org name.
-path <path>
    Use to specify the base path for finding the Analysis Manager executables:
    $P3_HOME/p3manager_files/bin/{arch}/*Mgr. <path> is the base path $P3_HOME.
    The default is to use the same path the program was started with, but in the
    case of "./QueMgr ...." this will not work. If a full path is used to start
    QueMgr (as in a startup script) then this argument is not needed.
-orgpath <path>
    Use to specify the base path for finding the Analysis Manager org tree
    (configuration files and directories):
    $P3_HOME/p3manager_files/{org}/{conf,log,proj}. <path> is the base path
    $P3_HOME. Specify this only if the org tree location differs from the -path
    argument. RmtMgr writes files in the proj/{projectname} directories, so if
    this is not the default (desired) location (same as -path above) then this
    argument needs to be set.
-name <name>
    Windows only. Use if you want to run more than one QueMgr service. Each must
    have a unique name so the start/stop method can distinguish which one to work
    with. The default <name> is MSCQueMgr.
-rmtmgrport <####>
    The port number to use for ALL RmtMgrs that this QueMgr will connect to for
    the entire network. The default is 1800 (the default RmtMgr -port value) if
    not set.
-rmgrport <####>
-org <org>
    The org name to use. This is the name of the directory containing the
    configuration files for this Queue Manager daemon (i.e.,
    $P3_HOME/p3manager_files/{org}/{conf,log,proj}). The default is default. If
    using an org.cfg file then use this with the -port option above to force a
    port number and org name.
-delayint <###>
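For example, the master node's startup script might launch QueMgr for the default organization; again,
the installation path and the LX86 architecture are only assumed values for illustration:
/msc/patran200x/p3manager_files/bin/LX86/QueMgr -port 1900 -org default -path /msc/patran200x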
P3Mgr
This program is started by the user. If 4 arguments are present then it is assumed that:
arg 1
    Startup type. It is one of the following:
    1 - Start Up Full Interface.
    2 - Start Up Queue Monitor Now.
    3 - Start Up Abort Job Now.
    4 - Start Up Monitor Running Job Now.
    5 - Start Up Monitor Completed Job Now.
    6 - Start Up Submit Now. (Submit current job)
    7 - Start Up Submit Quiet. (Submit current job without GUI)
    8 - Start Up Submit Quiet and wait for job to complete. (with exit status)
arg 2
arg 3
arg 4
arg 5
    X position of the upper left corner of the Patran right hand side interface, in inches.
arg 6
    Y position of the upper left corner of the Patran right hand side interface, in inches.
arg 7
arg 8
The following arguments can be used alone or after the first 4 arguments above:
-rcf <file>
    rcf file to use for all GUI settings (same format as -env/-envall output) -
    see Analysis Manager Environment File.
-auth <file>
-env
-envall
-extra <args>
-runtype <#>
-restart <file>
-coldstart <file>
    MSC Nastran ONLY - coldstart filename for restart. MSC.Marc uses the rcfile -
    see Analysis Manager Environment File.
TxtMgr
This program is started by the user to manage jobs through a simple text submittal program. Possible
arguments are:
-version
    Same as RmtMgr.
-qmgrhost <hostname>
-qmgrport <####>
-rmgrport <####>
    Port for ALL RmtMgrs for this org (QueMgr). Not needed unless using the Admin
    test feature and the default RmtMgr port is not being used.
-org <org>
-orgpath <path>
-auth <file>
-app <name>
    Application name to use. The default is MSC Nastran (or the first valid
    application).
-rcf <file>
    rcf file to use for all GUI settings (same format as -env/-envall output) -
    see Analysis Manager Environment File.
-p3home <path>
-amhome <path>
-choice <#>
-env
-envall
-envf <file>
-envfall <file>
-nocon
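As an illustration only (the host name and values are hypothetical), TxtMgr could be pointed at a Queue
Manager on host venus listening on the default QueMgr port:
TxtMgr -qmgrhost venus -qmgrport 1900 -app MSC.Nastran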
cfg.total_h_list[1].sub_app[MSC.Marc].rcpath =
d:\msc\marc2001\tools\include.bat
#
unv_config.auto_mon_flag = 1
unv_config.time_type = 0
unv_config.delay_hour = 0
unv_config.delay_min = 0
unv_config.specific_hour = 0
unv_config.specific_min = 0
unv_config.specific_day = 0
unv_config.mail_on_off = 0
unv_config.mon_file_flag = 1
unv_config.copy_link_flag = 0
unv_config.job_max_time = 0
unv_config.project_name = user1
unv_config.orig_pre_prog =
unv_config.orig_pos_prog =
unv_config.exec_pre_prog =
unv_config.exec_pos_prog =
unv_config.separate_user = user1
unv_config.p3db_file =
unv_config.email_addr = empty
#
nas_config.disk_master = 0
nas_config.disk_dball = 0
nas_config.disk_scratch = 0
nas_config.disk_units = 2
nas_config.scr_run_flag = 1
nas_config.save_db_flag = 0
nas_config.copy_db_flag = 0
nas_config.mem_req = 0
nas_config.mem_units = 0
nas_config.smem_units = 0
nas_config.extra_arg =
nas_config.num_hosts = 2
nas_host[tavarua.scm.na.mscsoftware.com].mem = 0
nas_host[tavarua.scm.na.mscsoftware.com].smem = 0
nas_host[tavarua.scm.na.mscsoftware.com].num_cpus = 0
nas_host[lalati.scm.na.mscsoftware.com].mem = 0
nas_host[lalati.scm.na.mscsoftware.com].smem = 0
nas_host[lalati.scm.na.mscsoftware.com].num_cpus = 0
nas_config.default_host = tavarua_nast2001
nas_config.default_queue = N/A
nas_submit.restart_type = 0
nas_submit.restart = 0
nas_submit.modfms = 1
nas_submit.nas_input_deck =
nas_submit.cold_jobname =
#
aba_config.copy_res_file = 1
aba_config.save_res_file = 0
aba_config.mem_req = 0
aba_config.mem_units = 0
aba_config.disk_units = 2
aba_config.space_req = 0
aba_config.append_fil = 0
aba_config.user_sub =
aba_config.use_standard = 1
aba_config.extra_arg =
aba_config.num_hosts = 2
aba_host[tavarua.scm.na.mscsoftware.com].num_cpus = 1
aba_host[tavarua.scm.na.mscsoftware.com].pre_buf = 0
aba_host[tavarua.scm.na.mscsoftware.com].pre_mem = 0
aba_host[tavarua.scm.na.mscsoftware.com].main_buf = 0
aba_host[tavarua.scm.na.mscsoftware.com].main_mem = 0
aba_host[lalati.scm.na.mscsoftware.com].num_cpus = 1
aba_host[lalati.scm.na.mscsoftware.com].pre_buf = 0
aba_host[lalati.scm.na.mscsoftware.com].pre_mem = 0
aba_host[lalati.scm.na.mscsoftware.com].main_buf = 0
aba_host[lalati.scm.na.mscsoftware.com].main_mem = 0
aba_config.default_host = tavarua_aba62
aba_config.default_queue = N/A
aba_submit.restart = 0
aba_submit.aba_input_deck =
aba_submit.restart_file =
#
mar_config.disk_units = 2
mar_config.space_req = 0
mar_config.mem_req = 0
mar_config.mem_units = 2
mar_config.translate_input = 1
mar_config.num_hosts = 2
mar_host[tavarua.scm.na.mscsoftware.com].num_cpus = 1
mar_host[lalati.scm.na.mscsoftware.com].num_cpus = 1
mar_config.default_host = tavarua_marc2001
mar_config.default_queue = N/A
mar_config.cmd_line =
mar_config.mon_file = $JOBNAME.sts
mar_submit.save = 0
mar_submit.nprocd = 0
mar_submit.datfile_name =
mar_submit.restart_name =
mar_submit.post_name =
mar_submit.program_name =
mar_submit.user_subroutine_name =
mar_submit.viewfactor =
mar_submit.hostfile =
mar_submit.iamval =
The P3_PLATFORM (<arch>) value is one of: HP700, RS6K, SGI5, SUNS, LX86, or WINNT (Windows 2000 or XP).
These variables can be set in the following manner with cshell (if necessary):
setenv P3_HOME /msc/patran200x
setenv P3_PLATFORM HP700
or for bourne shell or korn shell users:
P3_HOME=/msc/patran200x
P3_PLATFORM=HP700
export P3_HOME
export P3_PLATFORM
or on Windows:
set P3_HOME=c:/msc/patran200x
set P3_PLATFORM=WINNT
In most instances, users never have to concern themselves with these environment variables, but they are
included here for completeness. In a typical Patran installation, a file called .wrapper exists in the
$P3_HOME/bin directory which automatically determines these environment variables. The invoking
scripts, p3analysis_mgr and p3am_admin, exist as pointers to .wrapper in this bin directory;
when executed, .wrapper determines the variable values and then executes the actual scripts. For this to
work conveniently, the user should have $P3_HOME/bin in his/her path; otherwise the entire path name
must be used when invoking the programs.
P3_ORG
It may be desirable to have multiple Queue Managers running (groups of systems for the Analysis
Manager to use) each with a separate organizational directory for Analysis Manager configuration files.
An optional environment variable, P3_ORG, may be set for each user to specify a separate named
organizational directory. If defined, the Analysis Manager will use it for accessing its required
configuration files and thus connect to the Queue Manager specified by P3_ORG.
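For example, a user could select a hypothetical organizational group named engineering as follows:
setenv P3_ORG engineering
or for bourne shell or korn shell users:
P3_ORG=engineering
export P3_ORG
or on Windows:
set P3_ORG=engineering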
can be specified to change organizational groups each time the Analysis Manager is invoked. In
this last method, neither the user nor the system administrator who starts the QueMgr ever needs to
worry about assigning unique port numbers; however, it is one of the most restrictive installation
and access methods.
MSC_RMTMGR_ARGS and MSC_QUEMGR_ARGS
The RmtMgr and QueMgr will also read the environment variables MSC_RMTMGR_ARGS and
MSC_QUEMGR_ARGS, respectively, for all of their arguments. Each is one big string, as in these cshell
settings:
setenv MSC_RMTMGR_ARGS "-port 1850 -path /msc/patran200x"
setenv MSC_QUEMGR_ARGS "-port 1950 -path /msc/patran200x"
or for bourne shell or korn shell users:
MSC_RMTMGR_ARGS="-port 1850 -path /msc/patran200x"
MSC_QUEMGR_ARGS="-port 1950 -path /msc/patran200x"
export MSC_RMTMGR_ARGS
export MSC_QUEMGR_ARGS
or on Windows:
set MSC_RMTMGR_ARGS=-port 1850 -path /msc/patran200x
set MSC_QUEMGR_ARGS=-port 1950 -path /msc/patran200x
When RmtMgr and/or QueMgr start, they check these variables and take their arguments from these
strings. Real command line arguments override these values if both are set.
This method is needed on Windows because there is currently no way to save the startup arguments for
a service, so on reboot the RmtMgr would not know its startup arguments; it has to read a file or an
environment string to get them. The only limitation right now is that if two RmtMgrs are running on the
same machine, there is no way to have different arguments for each.
Note:
On Windows these variables should be set under the System Control Panel so that, on
reboot, the RmtMgr and QueMgr start up with these arguments. You can check the Event
Viewer under the Administrative Tools Control Panel to verify proper startup.
Installation
Installation Requirements
The following definitions apply to this section:
1. The master host is the machine which continually runs the Analysis Manager daemon (called
QueMgr). This is also referred to as the master node.
2. The submit host is the machine from which the analysis is submitted, sometimes referred to as the
client also.
3. The analysis host is the machine which actually executes the analysis.
Below is an itemized list of installation requirements:
1. One master node must be chosen for each organizational group (for each Queue Manager that will
be running - typical installations have only one!).
2. Queue Manager (QueMgr) should run as root on the master node. This is not a strict requirement
but recommended on Unix. On Windows it can run as a user or as administrator.
3. Each node (submit and analysis hosts) in the Analysis Manager configuration must be reachable
to and from the master node via a TCP/IP network.
4. Each analysis host must have a Remote Manager (RmtMgr) running with the same port number
(for each QueMgr). It is recommended that each submit machine run one as well (especially on
Windows); however, this is not a strict requirement. (This takes the place of the rsh (remsh) remote
access capabilities that were used in older versions of the Analysis Manager.)
5. The Analysis Manager software will come off of the installation media onto any machine (master,
submit, or analysis host) under the $P3_HOME/p3manager_files directory. The $P3_HOME
variable is the installation directory and is typically set up as /msc/patran200x and is usually
defined as an environment variable. This p3manager_files directory and tree must exist as-is and
not be renamed.
6. Each analysis host machine in the Analysis Manager configuration must be able to identically see
the installation tree. If a RmtMgr is running this is not an issue because the RmtMgr knows where
the Analysis Manager executables are.
7. The root user should run the administration program (p3am_admin (AdmMgr)) on the master
node to test and ensure that new users can correctly access the Analysis Manager. See
Configuration Management Interface.
Each user wishing to use the Analysis Manager must meet the following requirements:
1. Users who are using the Analysis Manager should have the same login name, user and group ids
on all hosts / nodes in the Analysis Manager configuration. This will prevent file access problems.
In specific cases, users may run jobs on different accounts other than their own, but this must be
set up by the system administrator. This is described in Examples of Configuration Files.
2. Users must have uname in their default search path (path or PATH environment variable in the
user's .cshrc or .profile file).
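For example, a quick check (a simple shell query, not part of the product) that uname is found in the
search path:
which uname
This should print a path such as /bin/uname or /usr/bin/uname.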
Installation Instructions
1. Unload the p3manager_files directory from the installation media. (Consult the Installation
guide for more information on how this is done.)
2. Decide on a master node (typically the node the Patran software is located on), and login to that
node as root.
3. Decide which machine(s) that have MSC Nastran, MSC.Marc, ABAQUS, or other analysis
modules to be used will be included in the Analysis Manager's configuration. Find out where
each runtime script and/or configuration file is located (e.g., /msc/bin/nast200x and
/msc/conf/nast200xrc for MSC Nastran) on each machine. Only these machines will be enabled
for later job submission, monitoring, and management.
4. Each analysis host machine that will be configured to run an analysis code must be able to see the
p3manager_files directory structure as outlined in Directory Structure. This directory
structure must also exist on the master node as well as client (submit) nodes. This can be done in
one of two ways. Either the directory structure can be copied directly to each machine so that it
can be accessed in the same manner as on the master node, or symbolic links and NFS mounts
can be created. In any case, if on one machine you type
cd $P3_HOME/p3manager_files
you should be able to do the same on all analysis nodes and see the same directory structure.
As an example of setting up a link, suppose that the machine venus is the master host and has
the installation directory structure in /venus/users/msc/patran200x. A link can be established on
venus by typing:
ln -s /venus/users/msc/patran200x /patran
This will ensure that on venus, if you type cd /patran you will be put into
/venus/users/msc/patran200x.
Now on an analysis host called jupiter, NFS mount the disk /venus/users and then type:
ln -s /venus/users/msc/patran200x /patran
This will ensure that analysis host jupiter can see the installation directory structure. Repeat
this for all analysis hosts. NFS mounts are not necessary if you wish to copy the installation
directory structures to each host separately instead of creating links.
Each submit host (hosts that submit jobs) does not necessarily need to see the directory structure
in exactly the same way as the master and analysis hosts do. They only need to be able to see an
installation directory structure to find the user interface executable (P3Mgr).
Note:
The above description sounds a bit more restrictive than it really is. In actuality, if a
RmtMgr is started on each analysis host, the directory structure can be seen because
RmtMgr knows from where it was launched and thus knows where all the Analysis
Manager executables are. However, it is still recommended to follow the above
procedure if at all possible.
5. Start up the RmtMgr daemon or service on each and every analysis node. It is recommended to
start RmtMgr on submit machines also. Starting the Queue/Remote Managers explains this
procedure. This must be done before configuration testing can be done.
6. Use the p3am_admin program to set up the configuration files. This program is located in
$P3_HOME/bin/p3am_admin
Modify Configuration Files explains the use of this program and the format of the generated
configuration files as a result of running this program. The configuration file will be placed in the
correct locations automatically. The following configuration files will be generated:
host.cfg
disk.cfg
lsf.cfg
LSF configuration file (if you plan to use LSF as your scheduler, and not the
Analysis Manager's own built-in scheduler).
nqs.cfg
NQS configuration file (if you plan to use NQS as your scheduler, and not the
Analysis Manager's own built-in scheduler).
Note:
For a minimal configuration with a single Queue Manager, you should remove or
rename the file $P3_HOME/p3manager_files/org.cfg. See step 12. for more
information.
7. Test the configuration setup using p3am_admin's testing features. Specifically, do basic tests and
network tests for each user that wishes to access the Analysis Manager. Test Configuration
explains this procedure in detail.
8. Start up the QueMgr daemon on the master node. Starting the Queue/Remote Managers explains
this procedure.
9. Add commands to the appropriate rc files for automatic start-up of the QueMgr and RmtMgr
daemons when the master, submit or analysis nodes have to be rebooted. Starting the
Queue/Remote Managers also explains this procedure.
10. Invoke the Analysis Manager user interface as a normal user and check that the installation was
performed properly. Invoking the Analysis Manager is explained in Invoking the Analysis
Manager Manually.
11. Repeat the procedure from step 2. for each organizational group (Queue Manager) you wish to
set up.
12. When more than one organizational group (Queue Manager) is to be accessed, either modify the
org.cfg file and add the port numbers and group names, or have users set the appropriate
environment variables to access them. See Organization Environment Variables for an explanation
of these variables and see Examples of Configuration Files for setting up the org.cfg file.
13. Make sure users have $P3_HOME/bin in their path. Most Analysis Manager executables can be
invoked from $P3_HOME/bin or are links from $P3_HOME/bin that set all environment
variables. These include:
p3am_admin
p3am_viewer (Unix only)
p3analysis_mgr
QueMgr
RmtMgr
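For example, a user's shell startup file could add this directory to the search path (the installation path
shown is only an assumed location):
set path = ($path /msc/patran200x/bin)
or for bourne shell or korn shell users:
PATH=$PATH:/msc/patran200x/bin
export PATH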
X Resource Settings
The Analysis Manager GUI on Unix requires the use of certain X Window System Resources. The
following explains this use.
The name of the Analysis Manager X application class is P3Mgr. Therefore, to change the background
color the Analysis Manager uses to red, the following resource specification is used:
P3Mgr*background: red
The lines below belong in the P3Mgr file delivered with your installation. This file can be found in
$P3_HOME/app-defaults. The file can reside in the user's local directory or home directory,
or be placed in .Xdefaults or /usr/lib/X11/app-defaults. It is most convenient to place
it in the user's home directory; that way changes can be made instantly without having to log out. These
are the resources which the Analysis Manager requires for it to look and behave like Patran.
!
! Resources for Patran Analysis Manager:
!
P3Mgr*background:white
P3Mgr*foreground:black
P3Mgr*bottomShadowColor:bisque4
P3Mgr*troughColor:bisque3
P3Mgr*topShadowColor:white
P3Mgr*highlightColor: black
P3Mgr*XmScrollBar.foreground:white
P3Mgr*XmScrollBar.background:white
P3Mgr*mon_run_trough.background:DodgerBlue
P3Mgr*mon_ok_label.foreground:DodgerBlue
P3Mgr*mon_bad_label.foreground:red
P3Mgr*que_mon_queued.background:red
P3Mgr*que_mon_run.background:DodgerBlue
P3Mgr*mon_disk_trough.background:red
P3Mgr*mon_cpu_trough.background: green
!
! End of Patran Analysis Manager Resources
!
A file called p3am_admin (AdmMgr) also exists for the system administration tool X resources.
Font Handling
The Analysis Manager on Unix requires three fonts to work correctly. At start-up, the Analysis Manager
looks through the fonts available on the machine and picks out three fonts which meet its needs. You will
notice that there are no font definitions in the default Analysis Manager resources. On platforms which
utilize an R4 based version of X windows, the fonts are NOT adjustable by the user. The fonts that the
Analysis Manager calculates are used all the time.
On R5 X windows platforms, the three fonts are still calculated by the Analysis Manager, but the user
has the option of overriding the calculated fonts by using the X resources. The names of the resources to
use are as follows:
P3Mgr*fontList: *lucida-bold-r-*-14-140-*
P3Mgr*middle.fontList: *lucida-medium-r-*-14-140-*
P3Mgr*fixed.fontList: *courier-medium-r-*-12-120-*
If the user decides to change the fonts, these are the resources which need to be set. All three fonts do not
have to be changed; a single one can be adjusted by itself. The only requirement is that a fixed-width font
is defined for P3Mgr*fixed.fontList. It is important for this font to be fixed or the interface for the
Queue Monitor will not appear correctly.
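If the resources are kept in the user's home directory .Xdefaults file, one common way to load the
changes without logging out (standard X practice, not specific to the Analysis Manager) is:
xrdb -merge ~/.Xdefaults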
$P3_HOME/p3manager_files/bin/<arch>/
where <arch> is one of: HP700, RS6K, SGI5, SUNS, LX86, or WINNT (Windows 2000 or XP).
The path where the Analysis Manager is installed. This path will be used to
locate the p3manager_files directory. For example, if /msc/patran200x is
specified, the p3am_admin (AdmMgr) program will look for the
/msc/patran200x/p3manager_files directory. Typically, the install directory is
/msc/patran200x and is defined in an environment variable called
$P3_HOME.
-org <org>
Both of the arguments listed above are optional. If they are not specified, the p3am_admin (AdmMgr)
program will check for the following two environment variables:
P3_HOME
P3_ORG
If the command line arguments are not specified, then at least the P3_HOME environment variable must
be set. The P3_ORG variable is not required. If the P3_ORG variable is not set and the -org option is not
provided, an organization of default is used. Therefore, p3am_admin (AdmMgr) will check for
configuration files in the following location:
$P3_HOME/p3manager_files/default/conf
When running the p3am_admin (AdmMgr) program, it is recommended this be done on the master node
and as the root user. The p3am_admin (AdmMgr) program can be run as normal users, but some of the
testing options will not be available. In addition, the user may not have the necessary privileges to save
the changes to the configuration files or start up a Queue Manager daemon.
When p3am_admin (AdmMgr) starts up, it will take the arguments provided (or environment variables)
and check to see if configuration files already exist. The configuration files should exist as follows. The
last two are only necessary if LSF or NQS queueing are used.
$P3_HOME/p3manager_files/<org>/conf/host.cfg
$P3_HOME/p3manager_files/<org>/conf/disk.cfg
$P3_HOME/p3manager_files/<org>/conf/lsf.cfg
$P3_HOME/p3manager_files/<org>/conf/nqs.cfg
If these files exist, they will be read in for use within the p3am_admin (AdmMgr) program. If these files
are not found, p3am_admin (AdmMgr) will start up in an initial state. In this state there are no hosts,
filesystems, or queues defined and they must all be added using the p3am_admin (AdmMgr)
functionality.
Therefore, upon initial installation and/or configuration of the Analysis Manager, the p3am_admin
(AdmMgr) program will come up in an initial state and the user can build up configuration files to save.
Action Options
The initial form for p3am_admin (AdmMgr) has the following Actions/Options:
1. Modify Config Files
2. Test Configuration
3. Reconfigure Que Mgr
Apply saves all configuration files: host, disk, and, if applicable, lsf or nqs.
Queue Managers set up on Windows only have a choice of the default MSC Queue type;
LSF and NQS are not supported for Queue Managers running on Windows.
Administrator User
You must also set the Admin user. This should not be root on Unix or the administrator account on
Windows, but should be a normal user name.
Configuration Version
There are three configuration versions. The functionality available for setup depends on which
version you select. Version 1 is the original.
Version 2 adds the capability of limiting the maximum number of tasks for any given
application allowed to run at any one time. If this number is exceeded, any additional submittals are
queued until the number of running tasks for that application drops below this number. This is
typically used when only so many application licenses are available, so that a job is not submitted
unless a license is available; otherwise the application might fail due to no license being available.
Version 3 includes all capabilities of versions 1 and 2, and also adds the ability to set up a host, made
up of a group of hosts, that will be monitored for the least loaded machine. Once a machine in that group
satisfies the loading criteria, the job is submitted to that machine.
Applications
Since the Analysis Manager can execute different applications, it needs to know which applications to
execute and how to access them. This configuration information is stored in the host.cfg file located in
the $P3_HOME/p3manager_files/default/conf directory. This portion of the host.cfg file contains the
following fields:
type
    An integer number used to identify the application. The user never has to worry
    about this number because it is automatically assigned by the program.
program name
    The name of the Analysis Manager task manager executable to start for the
    application, for example NasMgr, MarMgr, AbaMgr, or GenMgr.
Patran name
    The name of the Patran Preference to key off of when invoking the
    Analysis Manager. These can be MSC Nastran, MSC.Marc, ABAQUS,
    ANSYS, etc. Check what the exact Patran Preference spelling is and
    remove any spaces. If the Preference does not exist, then the first configured
    application will be used when the Analysis Manager is invoked from Patran,
    after which the user can change it to the one desired.
optional args
    Used for generic program execution only. These specify the arguments to be
    added to the invoking line when running a generic application.
MaxAppTask
    By default this is not set. If the configuration file version is set to 2 or 3, then
    you may specify the maximum number of tasks that the given application can
    run at any one time (on all machines). This is convenient when you don't want
    jobs submitted with the possibility of one or more not being able to check out
    the proper licenses because too many jobs are running at once.
The p3am_admin (AdmMgr) program can be used to add and delete applications or change any field
above as shown in the forms below.
The exception to this is the Maximum Number of Tasks. This value must be changed manually by
editing the configuration file and then restarting the Queue Manager service on Windows. On UNIX,
this can be controlled through the Administration GUI.
Adding an Application
To add an application, select the Add Application button. (On Windows, right mouse click the
Applications tree tab.) An application list form appears from which an application can be selected. If
GENERAL is selected the Application Name and Optional Args data boxes appear on the main form.
For GENERAL, enter the name of the application as it is known by the Patran Preference, without any
spaces. For example, if ANSYS 5 is a preference, then enter ANSYS5.
Enter the optional arguments that are needed to run the specified analysis code. For example, if an
executable for the MARC analysis code needs arguments of -j jobname, you can specify -j
$JOBNAME as the optional args. Arguments can be specified explicitly such as the -j, or they can be
placed in as variables such as the $JOBNAME. The following variables are available:
$JOBFILE
$JOBNAME
$P3AMHOST
$P3AMDIR
$APPNAME
$PROJ
$DISK
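For instance, a hypothetical GENERAL application script that expects a job name, the input file, and a
scratch directory could be given optional args such as the following (the -scr flag belongs to the
hypothetical script, not to the Analysis Manager):
-j $JOBNAME -f $JOBFILE -scr $DISK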
Up to 10 GENERAL applications can be added. To save the configuration, the Apply button must be
pressed and the newly added application information will be saved in the host.cfg file. On Windows this
is Save Config Settings under the Queue pull down menu.
Note:
Apply saves all configuration files: host, disk, and if applicable, lsf or nqs.
Deleting an Application
To remove an application, select the Delete Application button. A list of defined applications appears.
Select the one to be deleted by clicking on the application name in the list. Then, select OK. The
application will be removed and the list of applications will disappear.
On Windows, simply select the application you want to delete from the Applications tree tab and press
the Delete button (or right-mouse click the application and select Delete).
To save the configuration, the Apply button must be pressed and the newly deleted application
information will be saved in the host.cfg file. On Windows this is Save Config Settings under the Queue
pull down menu.
Note:
Apply saves all configuration files: host, disk, and, if applicable, lsf or nqs.
Physical Hosts
Since the Analysis Manager can execute jobs on different hosts, it needs to know about each analysis
host. Host configuration for the Analysis Manager is done via the host.cfg file located in the
$P3_HOME/p3manager_files/default/conf directory.
This portion of the host.cfg file contains the following fields:
physical host
class
    One of: HP700, RS6K, SGI5, SUNS, LX86, or WINNT (Windows 2000 or XP).
maximum tasks
The p3am_admin (AdmMgr) program can be used to add and delete hosts or change any field above as
shown in the forms below.
Enter the name of the host in the Host Name box, and select the system/OS in the Host Type menu.
Additional hosts can be added by repeating this process.
When all hosts have been added, select Apply and the newly added host information will be saved in the
host.cfg file. On Windows this is Save Config Settings under the Queue pull down menu.
Note:
Apply saves all configuration files: host, disk, and if applicable, lsf or nqs.
Deleting a Host
To remove a host from use by the Analysis Manager, select the Delete Physical Host button on the
bottom of the p3am_admin (AdmMgr) form. A list of possible hosts will appear.
Select the host to be deleted by clicking on the hostname in the list. Then, select OK. The host will be
removed from the list of hosts and the list will go away.
On Windows, simply select the Host you want to delete from the Physical Hosts tree tab and press the
Delete button (or right-mouse click the host and select Delete).
When all host configurations are ready, select Apply and the revised host.cfg file will be saved,
excluding the deleted hosts. On Windows this is Save Config Settings under the Queue pull down menu.
Analysis Manager Host Configurations
In addition to specifying physical hosts, it is necessary to specify specific names by which the Analysis
Manager can recognize the actions it should take on various hosts. For example, it may be possible that
ABAQUS and MSC Nastran are configured to run on the same physical host or that two versions of MSC
Nastran are installed on the same physical host. In order to account for this, each application and physical
host has its own name or AM host name assigned to it. Host configuration for the Analysis Manager is
done via the host.cfg file located in the $P3_HOME/p3manager_files/default/conf
directory.
This portion of the host.cfg file contains the following fields:
AM hostname
Unique name for the combination of the analysis application and physical host.
It can be called anything but must be unique, for example, nas68_venus.
physical host
The physical host name where the analysis application will run.
type
path
How this machine can find the analysis application - for MSC Nastran, this is
the runtime script (typically the nast200x file), for MSC.Marc, ABAQUS, and
GENERAL applications, this is the executable location.
rcpath
How this machine can find the analysis application runtime configuration file:
the MSC Nastran nast200xrc file or the ABAQUS site.env file. This is not
applicable to MSC.Marc or GENERAL applications and should be filled with the
keyword NONE.
The p3am_admin (AdmMgr) program can be used to add and delete AM hosts and change any field
above as shown by the forms below.
Adding an AM Host
An AM host is a unique name which the user will specify when submitting a job. Information contained
in the AM host is a combination of the physical host and application type along with the physical location
of that application. To add a specific AM host press the Add AM Host button. A new host description
will be created and displayed in the left scrolled window, with AM Host Name: Unknown, Physical Host:
UNKNOWN, and Application Type: Unknown.
Enter the unique name of the host in the AM Host Name box, and select the Physical Host that this
application will run on. The application is selected from the Application Type menu. Then, specify the
Configuration Location and Runtime Location paths in the corresponding boxes. The unique name
should reflect the name of the application to be run and where it will run. For example, if V68 of MSC
Nastran is to be run on host venus, then specify NasV68_venus as the AM host name.
The Runtime Location is the actual path to the executable or script to be run, such as /msc/bin/nas68 for
MSC Nastran. The Config Location is the actual path to the MSC Nastran rc (nast68rc) file or the
ABAQUS site.env file.
Additional AM hosts can be added by repeating this process.
For each AM host, at least one filesystem must be specified. Use the Add Filesystem capability in
Modify Config Files/Filesystems to specify a filesystem for each added host.
When all hosts have been added, select Apply and the newly added host information will be saved in the
host.cfg file. On Windows this is Save Config Settings under the Queue pull down menu. Note that
Apply saves all configuration files: host, disk, and if applicable, lsf or nqs.
For Group, see Groups (of hosts).
Deleting an AM Host
To remove a host from use by the Analysis Manager, select the Delete AM Host button. A list of possible
hosts will appear.
Select the host to be deleted by clicking on the hostname in the list. Then, select OK. The host will be
removed from the list of hosts and the list of hosts will go away.
On Windows, simply select the AM Host you want to delete from the AM Hosts tree tab and press the
Delete button (or right-mouse click the host and select Delete).
When all host configurations are ready, select Apply and the revised host.cfg file will be saved,
excluding the deleted hosts. On Windows this is Save Config Settings under the Queue pull down menu.
Disk Configuration
In order to define filesystems to be written to for scratch and database files, the Analysis Manager needs
a list of each file system for each host in the disk.cfg file that is to be used when running analyses.
This file contains a list of each host, a list of each file system for that host, and the file system type. There
are two different Analysis Manager file system types: NFS and local.
Adding a Filesystem
Use the Modify Config Files/Filesystems form to specify or add a filesystem for use by the Analysis
Manager.
Press the Add Filesystem button. Then, select a host from the list provided.
There are two types of filesystems: NFS and local. Select the appropriate type for the newly added
filesystem.
Additional filesystems can be added by repeating this process. Multiple filesystems can be added for each
host. When all filesystems have been added, select Apply and the newly added filesystem information
will be saved in the disk.cfg file.
Each host must contain at least one filesystem.
After adding a host or filesystem, test the configuration information using the Test Configuration form.
See Test Configuration.
Note:
When using the Analysis Manager with LSF or NQS, you must run the administration
program and start a Queue Manager on the same machine that LSF or NQS executables
are located.
When an AM Host is created, one filesystem is created by default (c:\temp). You can add more
filesystems to an AM Host by selecting it under the Disk Space tree tab and pressing the Add button.
You can change the directory path by clicking on the Directory itself and editing it in the normal
Windows manner. The Type is changed with the pulldown menu next to the Directory name. If the
filesystem is a Unix filesystem, make sure you remove the c:, e.g., /tmp.
Deleting a Filesystem
At the bottom of the Modify Config Files/Filesystems form, select the Delete Filesystem button to
delete a filesystem from use by the Analysis Manager.
Then, select a host from the list provided, and click OK.
After selecting a host, a list of filesystems defined for the chosen host will appear. Choose the filesystem
to delete from this list and click OK.
On Windows, select the AM Host under the Disk Space tree tab and press the Delete button. The last
filesystem created is deleted.
Additional filesystems can be deleted by repeating this process. When all appropriate filesystems have
been deleted, select Apply and the updated filesystem information will be saved in the disk.cfg file. On
Windows this is Save Config Settings under the Queue pull down menu.
Queue Configuration
If the LSF or NQS scheduling system is being used at this site, the Analysis Manager can interact with it
using the queue configuration file (i.e., lsf.cfg or nqs.cfg). Ensure that LSF or NQS Queue is set for the
Queue Type field in the Modify Config Files form. See Analysis Manager Host Configurations. This sets
a line in the host.cfg file to QUE_TYPE: LSF or NQS. The Queue Manager configuration file lists
each queue name, and all hosts allowed to run MSC Nastran, MSC.Marc, ABAQUS, or other
GENERAL applications for that queue. In addition, a queue install path is required so that the Analysis
Manager can execute queue commands with the proper path.
Note:
NQS and LSF are only supported by Unix platform Queue Managers. Although you can
submit to an LSF or NQS queue from Windows to a Unix platform, the Windows Queue
Manager does not support LSF or NQS submittals at this time.
Adding a Queue
To add a queue for use by the Analysis Manager, press the Add Queue button on the bottom of the
p3am_admin (AdmMgr) form. A new queue description will be created and displayed on the left panel,
with MSC Queue Name: Unknown and LSF (or NQS) Queue Name: Unknown.
Enter the names of the queue in the MSC Queue Name and LSF (or NQS) Queue Name boxes
provided. These names can be the same or different. In addition, the administrator must also choose
one or more hosts from the listbox on the right side of the specified queue name. The hosts in the
listbox to the right only appear after selecting an application from the Application pulldown menu. Only
those hosts configured to run that application will appear in the list box. These are the hosts which will
be allowed to run the analysis application when submitted to that queue.
Additional queues can be added by repeating this process. When all queues have been added, press Apply
and the new queue information will be saved in the lsf.cfg (or nqs.cfg) file.
Various information needs to be supplied for the Analysis Manager to communicate properly with the
queueing software. The most important is the Executable Path. Enter the full path
where the NQS or LSF executables can be found. In addition, you may specify additional (optional)
parameters for the NQS or LSF executables to use if necessary. Keywords can also be used. The
description of how these keywords work can be found in General. Two keywords are available: MEM
and DISK, which are evaluated to whatever Minimum MEMory and DISK space has been specified. For
example, an NQS command might have these additional parameters: -nr -lm $MEM -lf $DISK
Groups (of hosts)
This version of the Analysis Manager supports the concept of groups of hosts. In the host.cfg file, if you
specify VERSION: 3 as the first non-commented line and you also add the group/queue name at the
end of each am_host line in the AM_HOSTS section, then you will have enabled this feature. Here is an
example:
VERSION: 3
...
AM_HOSTS:
#am_host      host    type  bin path           rc path                group
#---------------------------------------------------------------------------
N2004_hst1    host1   1     /msc/bin/n2004     /msc/conf/nast2004rc   grp_nas2004
N2004_hst2    host2   1     /msc/bin/n2004     /msc/conf/nast2004rc   grp_nas2004
N2004_hst3    host3   1     /msc/bin/n2004     /msc/conf/nast2004rc   grp_nas2004
N2001_hst1    host1   1     /msc/bin/n2001     /msc/conf/nast2001rc   grp_nas2001
N2001_hst2    host2   1     /msc/bin/n2001     /msc/conf/nast2001rc   grp_nas2001
N2001_hst3    host3   1     /msc/bin/n2001     /msc/conf/nast2001rc   grp_nas2001
M2001_hst1    host1   3     /m2001/marc        NONE                   grp_mar2001
M2001_hst2    host2   3     /m2001/marc        NONE                   grp_mar2001
M2001_hst3    host3   3     /m2001/marc        NONE                   grp_mar2001
...
In this configuration, when you submit a job, you will also have the choice of the group name, with the
added label 'least-loaded-grp:<group name>' to distinguish it from regular host names. When you select
this group instead of a regular host, the Analysis Manager will then decide
which host from the list of those in the group is best suited to run the job and start it there when possible.
Here, best suited means the next available host based on several factors, including:
Free tasks on each host (Maximum currently running jobs)
Cpu utilization of host
Available memory of host
Free disk space of host
Time since most recent job was started on host
If, in the above example, you submitted an MSC Nastran job to grp_nas2004, then there are 3
machines the Analysis Manager could select to run the job: host1, host2, or host3. The Analysis Manager
will query each host for the current cpu utilization, available memory, and free disk space (as configured
by the Analysis Manager), and also the free tasks and the time since an Analysis Manager job was last
started, and figure out which, if any, machine can run the job. If more than one machine can run the job
based on the criteria above, then the Analysis Manager will select the best suited host by sorting the
acceptable hosts in a user-selectable sort order. If no machines meet the criteria, then the job remains
queued, and the Analysis Manager will try again to find a suitable host at periodic intervals. The user-
selectable sort order is specified in an optional configuration file called msc.cfg. If this file does not
exist, then the sort order and criteria are as follows:
free_tasks
cpu_util
avail_mem
free_disk
last_job_time
Where the defaults for cpu util, available mem and disk are:
Cpu util: 98
Available mem: 5 mb
Available disk: 10 mb
Thus any host that has cpu util < 98, available mem > 5 mb, available disk > 10 mb, and at least one
free task (so it can start another Analysis Manager job) is eligible to run a job, and the best suited host will
be the one at the top after a sort of all eligible hosts is done. You can change the sort order and the defaults
for cpu util, available mem, and disk in the msc.cfg file. The msc.cfg file exists in the same location as
host.cfg and disk.cfg; its format is explained in Group/Queue Feature.
Test Configuration
The p3am_admin (AdmMgr) program has various tests that facilitate verification of the
configuration.
Application Test
Changes to the host.cfg file dealing with defined applications can be tested by selecting the Test
Configuration/Applications option. The Applications Test form will appear when the Application
Test button is pressed. On Windows, press the Test Configuration button under the Administration tree tab.
AM Hosts Test
Changes to portions of the host.cfg file dealing with the AM hosts can be tested by selecting the Test
Configuration/AM Hosts option.
If a problem is detected, close the form and return to the Modify Config File form to correct the
configuration.
Network Host Test
The Network AM Host Test will validate all of the AM host configuration information in the host.cfg
file, and validate communication paths between hosts.
Requirements to run the Network Host Test include:
1. Must be root.
2. Must be on Master node.
3. Must provide a username.
A message box provides status information as each of the following network AM host tests is run:
1. Checks the location and privileges of the configuration file for each application.
2. Checks the runtime location (executable) for each application and privileges.
If a problem is detected, close the form and return to the Modify Config Files form to correct the
configuration or exit to the system to correct the problem. It is highly recommended that you run the
Network Host Test for each user who wants to use the Analysis Manager.
Disk (Filesystem) Test
Changes to the disk.cfg file can be tested by selecting the Test Configuration/ Disk Configuration
option. The test disk configuration form will appear.
Queue Manager
This simply allows any changes in the configuration files that may have been implemented during a
p3am_admin (AdmMgr) session to be applied. If the configuration files are owned by root then you must
have root access to change them. Once they have been changed, in order for the QueMgr to recognize
them, it must be reconfigured. Simply press the Apply button with the Restart QueMgr toggle selected.
This forces the Queue Manager to reread the configuration files. Once the Queue Manager has been
reconfigured, new jobs submitted will use the updated configuration.
If a reconfiguration is issued while jobs are currently running, then those jobs are allowed to finish before
the reconfiguration occurs. During this period, the Queue Manager is said to be in drain mode, not
accepting any new jobs until all old jobs are complete and the Queue Manager has reconfigured itself.
The Queue Manager can also be halted immediately (which kills any job running) or can be halted after
it is drained.
When the Queue Manager is halted, the three toggles on the right side change to one toggle to allow the
Queue Manager to be started. All configurations that are being used are shown on the left. When the
Queue Manager is halted, you may change some of the configurations on the left side, such as Port, Log
File, and Log File User before starting the daemon again. For more information on the Queue Manager
see Starting the Queue/Remote Managers.
On Windows you can Start and Stop the Queue Manager from the Queue pulldown menu when you are
in the Administration tree tab.
Or you can right mouse click the Administration tree tab and the choices to Read or Save configuration
file or Start and Stop the Queue Manager are also available.
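The Queue Manager service can also be stopped and started from a command prompt, assuming the
default service name MSCQueMgr:
net stop MSCQueMgr
net start MSCQueMgr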
AM Host
A unique name for the combination of the analysis application and physical
host. It can be called anything but must be unique, for example nas68_venus.
Physical Host
The physical host name where the analysis application will run.
Type
Path
How this machine can find the analysis application. For MSC Nastran, this is
the runtime script (typically the nast68 file), for MSC.Marc, ABAQUS or
GENERAL applications, this is the executable location.
rcpath
How this machine can find the analysis application runtime configuration file:
the MSC Nastran nast68rc file or the ABAQUS site.env file. This is not
applicable to MSC.Marc or GENERAL applications and should be filled with
the keyword NONE.
The physical host information has the following fields associated with it:
Physical Host
Class
Max
MaxAppTsk
Maximum application tasks. This is used, say, if four MSC Nastran hosts are
configured, but there are only enough licenses for three concurrent jobs.
Without this, the 4th job would always fail. With MaxAppTsk set to 3, the
4th job waits in the queue until one of the previously running jobs completes,
and then it gets submitted. It is ONLY present if the configuration file version
is >= 2. This is set with the VERS: or VERSION: field at the top of the file.
Note:
The MaxAppTsk setting must be added manually. There is no widget in the AdmMgr to do
this. If there are NO configuration files on start up of the AdmMgr, then it will set the version
to 2 and use 1000 as the MaxAppTsk. If configuration files exist and version 2 is set, it will
honor whatever is already there and pass them through. If version 1 is set, then MaxAppTsk
is not written to the configuration files.
The application information has the following fields associated with it:
Type
Prog_name
Patran name
Options
If the scheduling system is a separate package (e.g., LSF or NQS), then the Analysis Manager will submit
jobs to a queue provided. Queues are described below. Also, if the scheduler is separate from the
Analysis Manager, then the maximum task field is not used. All tasks are submitted through the queue
and the queueing system will execute or hold each task according to its own configuration. An example
of a host.cfg file is given below. Each comment line must begin with a # character. All fields are
separated by one or more spaces. All fields must be present.
#------------------------------------------------------
# Analysis Manager host.cfg file
#------------------------------------------------------
#
#
# A/M Config file version
# Que Type: possible choices are P3, LSF, or NQS
#
VERSION: 2
ADMIN: am_admin
QUE_TYPE: MSC
#
#------------------------------------------------------
# AM HOSTS Section
#------------------------------------------------------
#
# Must start with a P3AM_HOSTS: tag.
#
# AM Host:
# Name to represent the choice as it will appear
# on the AM menus.
#
# Physical Host:
# Actual hostname of the machine to run the application on.
#
# Type:
# 1 - MSC.Nastran
# 2 - ABAQUS
# 3 - MSC.Marc
# 20 - User defined (General) application #1
# 21 - User defined (General) application #2
# etc. (max of 29)
#
# This field defines the application for this entry.
# Each value will have a corresponding entry in the
# APPLICATIONS section.
#
# EXE_Path:
# Where executable entry is made.
#
# RC_Path:
# Where runtime configuration file (if present) is found.
# Set to NONE if General application.
#
#------------------------------------------------------
# Physical Hosts Section
#------------------------------------------------------
#
# Must start with a PHYSICAL_HOSTS: tag.
#
# Class:
# HP700 - Hewlett Packard HP-UX
# RS6K - IBM RS/6000 AIX
# SGI5 - Silicon Graphics IRIX
# SUNS - Sun Solaris
# LX86 - Linux
# WINNT - Windows
#
# Max:
#
# Maximum allowable concurrent tasks for this host.
#
#------------------------------------------------------
# Applications Section
#------------------------------------------------------
#
# Must start with a APPLICATIONS: tag.
#
# Type: See above for values
# Prog_name:
#
# The name of the Patran AM Task Manager executable to start.
#
# This field must be set to the following, based on the
# application it represents:
#
# MSC.Nastran -> NasMgr
# HKS/ABAQUS -> AbaMgr
P3AM_HOSTS:
#AM Host        Physical Host   Type   EXE_Path                    RC_Path
#---------------------------------------------------------------------------
Venus_nas675    venus           1      /msc/msc675/bin/nast675     /msc/msc675/conf/nast675rc
Venus_nas68     venus           1      /msc/msc68/bin/nast68       /msc/msc68/conf/nast68rc
Venus_aba53     venus           2      /hks/abaqus                 /hks/site/abaqus.env
Venus_mycode    venus           20     /mycode/script              NONE
Mars_nas68      mars            1      /msc/msc68/bin/nast68       /msc/msc68/conf/nast68rc
Mars_aba5       mars            2      /hks/abaqus                 /hks/site/abaqus.env
Mars_mycode     mars            20     /mycode/script              NONE
#--------------------------------------------------------------------
#Physical Host     Class     Max
#--------------------------------------------------------------------
PHYSICAL_HOSTS:
venus              SGI4D
mars               SUN4
#--------------------------------------------------------------------
#
#Type   Prog_name   MSC P3 name   MaxAppTsk   [option args]
#--------------------------------------------------------------------
APPLICATIONS:
1       NasMgr      MSC.Nastran
2       AbaMgr      ABAQUS
3       MarMgr      MSC.Marc
20      GenMgr      MYCODE                    -j $JOBNAME -f $JOBFILE
#---------------------------------------------------------------------
#AM Host           File System                 System Type (nfs or blank)
#---------------------------------------------------------------------
Venus_nas675       /user2/nas_scratch
Venus_nas675       /venus/users/nas_scratch
#
Venus_nas68        /user2/nas_scratch
Venus_nas68        /venus/users/nas_scratch
Venus_nas68        /tmp
#
Venus_aba53        /user2/aba_scratch
Venus_aba53        /venus/users/aba_scratch
Venus_aba53        /tmp
Venus_mycode       /tmp
Venus_mycode       /server/scratch             nfs
#
Mars_nas68         /mars/nas_scratch
#
Mars_aba5.2        /mars/users/aba_scratch
Mars_aba5.2        /tmp
#
Mars_mycode        /tmp
#---------------------------------------------------------------------
Each comment line must begin with a # character. All fields are separated by one or more spaces. All
fields must be present.
In this example, the term file system is used to define a directory that may or may not be its own file
system, and that already exists and has permissions such that any Analysis Manager user can create
directories below it. It is recommended that the Analysis Manager file systems be directories with large
amounts of disk space and restricted to the Analysis Manager's use, because the Analysis Manager's
MSC Nastran, MSC.Marc, ABAQUS, and GENERAL Managers only know about their own jobs
and processes.
Queue Configuration File
If a separate scheduling system (e.g., LSF or NQS) is being used at this site, the Analysis Manager can
interact with it using the queue configuration file. This file has the same name as the Queue Manager
type field in the host.cfg file (i.e., QUE_TYPE: LSF or NQS), with a .cfg extension (i.e., lsf.cfg or
nqs.cfg). The Queue Manager configuration file lists each queue name, and all hosts allowed to run
applications for that queue. In addition, a queue install path is required, so that the Analysis Manager can
execute queue commands with the proper path. An example of a Queue Manager configuration file is
given below.
Each comment line must begin with a # character. All fields are separated by one or more spaces. All
fields must be present.
#-----------------------------------------------------#
# Analysis Manager lsf.cfg file
#-----------------------------------------------------#
# Below is the location (path) of the LSF executables (i.e. bsub)
#
QUE_PATH: /lsf/bin
QUE_OPTIONS:
QUE_MIN_MEM:
QUE_MIN_DISK:
#
# Below, each queue which will execute MSC tasks is listed.
# Each queue contains a list of hosts (from host.cfg) which
# are eligible to run tasks from the given queue.
#
# NOTE:
# Each queue can only contain one Host of a given application
# version (i.e., if there are two version entries for
# MSC.Nastran, nas67 and nas68, then each queue
# set up to run MSC.Nastran tasks could only include
# one of these versions. To be able to submit to
# the other version, create a separate, additional
# MSC queue containing the same LSF queue name, but
# referencing the other version)
#
#
TYPE: 1
#MSC Que        LSF Que     Hosts
#---------------------------------------------------------
Priority_nas    priority    mars_nas675, venus_nas675
Normal_nas      normal      mars_nas675, venus_nas675
Night_nas       night       mars_nas675
#---------------------------------------------------------
#
TYPE: 2
#MSC Que        LSF Que     Hosts
#---------------------------------------------------------
Priority_aba    priority    mars_aba53, venus_aba53
Normal_aba      normal      mars_aba53, venus_aba53
Night_aba       night       mars_aba53, venus_aba53
#---------------------------------------------------------
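As described in the note above, to make a second MSC.Nastran version available through the same LSF
queues, an additional set of MSC queue entries can be defined that reference the same LSF queue names
but the other version's hosts. A minimal sketch (the nas68 host names are assumptions based on the
host.cfg example above):
TYPE: 1
#MSC Que          LSF Que     Hosts
#---------------------------------------------------------
Priority_nas68    priority    mars_nas68, venus_nas68
Normal_nas68      normal      mars_nas68, venus_nas68
#---------------------------------------------------------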
master host
The host on which the Queue Manager daemon is running for the particular
organizational group in question
port #
The unique port ID used for this Queue Manager daemon. Each Queue
Manager must have been started with the -port option.
#Org        Master Host    Port #
#-----------------------------------------------------
default     casablanca     1500
atf         atf_ibm        1501
lsf_atf     atf_sgi        1502
support     umea           1503
Any user account that is configured in this manner must exist not only on the machine
where the analysis is going to run, but also on the machine from which the job was
submitted.
The need for this separate user file has largely been obsoleted. In general the
following applies:
1. On Unix machines, if RmtMgrs are running as root, then they can run the job as the user (or the
separate user as specified by this file) with no problem.
2. On Unix machines, if RmtMgrs are running as a specific user, then the job will run as that user
regardless of the user (or separate user) who submitted the job.
3. On Windows, the job runs as whoever is running the RmtMgr on the PC. The user (and
separate user) is ignored.
Group/Queue Feature
This configuration file, msc.cfg, allows the default least-loaded criteria to be modified when using the
host grouping feature for automatically selecting the least-loaded machine to submit to. The file contents
look like:
SORT_ORDER: free_tasks cpu_util last_job_time avail_mem free_disk
GROUP: grp_nas2004
MIN_DISK: 10
MIN_MEM: 5
MAX_CPU_UTIL: 95
The SORT_ORDER line lists the names of the sort criteria in the order in which you want eligible hosts
sorted. The remaining lines are then given for each group whose defaults you want to change. Thus you
define a GROUP, MIN_DISK, MIN_MEM, and MAX_CPU_UTIL entry for each such group.
A group cannot contain multiple entries that use the same physical host (e.g., nast2004_host1 and
nast2001_host1 in the above example) because the Analysis Manager would then not know which to use.
In this case just create another group name (grp_nas2001, as above) and it will work as expected. You
can have different applications in the same group with no problems. In the above example you could
have used grp_nas2004 as the group name for all the MSC Nastran entries (possibly changing the name
of the group to make it clearer that it is for hosts which run MSC Nastran), or you can keep them separate
with the added flexibility of defining a different sort order and util/mem/disk criteria for each
application/group.
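For example, an msc.cfg defining separate criteria for two groups might look like the following sketch
(the group names and limit values are illustrative only):
SORT_ORDER: free_tasks cpu_util last_job_time avail_mem free_disk
GROUP: grp_nas2004
MIN_DISK: 10
MIN_MEM: 5
MAX_CPU_UTIL: 95
GROUP: grp_nas2001
MIN_DISK: 20
MIN_MEM: 10
MAX_CPU_UTIL: 90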
RS6K  - IBM RS/6000 AIX
SGI5  - Silicon Graphics IRIX
SUNS  - Sun Solaris
LX86  - Linux
WINNT - Windows 2000 or XP
QueMgr Usage:
The Queue Manager can be manually invoked by typing
$P3_HOME/bin/QueMgr <args>
with the arguments below:
QueMgr -path $P3_HOME -org <org> -log <logfile> -port <#>
where:
$P3_HOME is the installation directory.
<org>      The organizational group name to use.
<logfile>  A different log filename for QueMgr. If not specified, the QueMgr.log located in
           $P3_HOME/p3manager_files/<org> is used.
<#>        The port number the QueMgr daemon listens on.
Only the -path argument is required unless the QueMgr is started with a full path. The QueMgr may be
started as root, although this is not a strict requirement; it is recommended to run the QueMgr as a
separate user such as the administrator account.
Example:
If the Analysis Manager is installed in /msc/patran200x and the master node is an IBM RS/6000
computer, log into the master node (as root if you want) and do the following:
/msc/patran200x/bin/QueMgr -path /msc/patran200x
If the Analysis Manager is installed on a filesystem that is not local to the master node and the QueMgr
is started as root, it is recommended that the -log option be used when starting the Queue Manager. The
-log option should be used to specify a log file which should be on a filesystem local to the master node.
Writing files as root onto network mounted filesystems is sometimes not possible. Starting the QueMgr
as a normal user solves this problem.
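For example, to start the QueMgr with a log file on a filesystem local to the master node (the log path
here is illustrative only):
/msc/patran200x/bin/QueMgr -path /msc/patran200x -log /var/tmp/QueMgr.log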
You may want to put this command line somewhere in a script so the Queue Manager is started as root
each time the master node is rebooted. See Starting Daemons at Boot Time.
Note:
There are other arguments that can be used when starting up the Queue Manager for more
flexibility. See Analysis Manager Program Startup Arguments.
RmtMgr Usage:
The Remote Manager can be manually invoked by typing
$P3_HOME/bin/RmtMgr
where:
$P3_HOME is the installation directory. No arguments are necessary unless you start it from the directory
where it resides (with ./RmtMgr), in which case you will need the -path $P3_HOME argument.
The RmtMgr should not be started as root.
Example:
If the Analysis Manager is installed in /msc/patran200x and the analysis node is an IBM RS/6000
computer, log into the analysis node as root and do the following:
/msc/patran200x/bin/RmtMgr -path /msc/patran200x
All other arguments not specified will be defaulted. You may want to put this command line somewhere
in a script so the Remote Manager is started each time the analysis node is rebooted. See Starting
Daemons at Boot Time.
Note:
There are other arguments that can be used when starting up the Remote Manager for more
flexibility. See Analysis Manager Program Startup Arguments.
On Unix, the daemons can be started at boot time using either the /etc/rc2 method or the inittab method.
These methods can vary from Unix machine to Unix machine. If you have trouble, consult your system
administrator.
Windows uses services. Manually installing and configuring these services is also described below.
Unix Method: rc
The location of the rc2.d directory may vary from computer to computer. Check /etc and /sbin.
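A minimal sketch of such an rc start script, placed in the rc2.d directory (the file name S99_p3am is an
assumption; it simply calls the two helper scripts shown below with the start argument), is:
#! /sbin/sh
# Start the Analysis Manager QueMgr and RmtMgr daemons at boot time.
/etc/p3am_que start
/etc/p3am_rmt start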
What this script actually does is call another script (or two) that actually starts or stops the QueMgr and
RmtMgr, but it could have been done directly in the above script. The contents of the p3am_que script
are:
#! /usr/bin/csh -f
# This service starts/stops the QueMgr used with
# the Analysis Manager application.
if ( $#argv != 1 ) then
echo "Usage: $0 { start | stop }"
exit 1
endif
if ( $status != 0 ) then
echo "Cannot determine platform. Exiting..."
exit 1
endif
set P3_HOME = /msc/patran200x
switch ( $argv[1] )
case start:
if ( -x ${P3_HOME}/p3manager_files/bin/SGI5/QueMgr ) then
${P3_HOME}/p3manager_files/bin/SGI5/QueMgr
endif
breaksw
case stop:
set quepid = `ps -eo comm,user,pid | \
The p3am_rmt script would be identical except that RmtMgr replaces QueMgr. This could also be done
with a single script that takes the daemon type (RmtMgr or QueMgr) as an additional argument and
starts or stops the one specified.
Note:
The script above is specific to starting the QueMgr on SGI machines. For other machines,
replace the SGI5 with the appropriate <arch> as described in Directory Structure.
The above script can be used to stop the daemons also. This would be done if the machine were brought
down when rebooting. In this case you use a script in the rc0.d directory with a name of Kxx_p3am
where xx is the lowest number such as 01 to force it to be executed first among all the scripts in this
directory. The argument to the above script would then be stop instead of start. This is used to do a clean
and proper exit of the daemons when the machine is shut down. An example of a script called
K01_p3am is:
#! /sbin/sh
# This script stops the QueMgr and RmtMgr
# of the Patran Analysis Manager application.
# stop QueMgr
/etc/p3am_que stop
# stop the RmtMgr
/etc/p3am_rmt stop
Unix Method: inittab
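An inittab entry is a single line; a minimal sketch, assuming a default run level of 2 and the /etc/p3am
script created below, is:
p3am:2:once:/etc/p3am > /dev/null 2>&1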
Note:
The number following the p3am in the above lines must match the init default # in the inittab
file. Check this number to make sure you are using the correct one. Otherwise it will not
start on reboot.
Now create the file, /etc/p3am and add the following lines:
#!/bin/sh
QueMgr=$P3_HOME/bin/QueMgr
RmtMgr=$P3_HOME/bin/RmtMgr
if [ -x $QueMgr ]
then
$QueMgr -path $P3_HOME
fi
if [ -x $RmtMgr ]
then
$RmtMgr
fi
where $P3_HOME is the Analysis Manager installation directory, commonly referred to as $P3_HOME
throughout this manual. You must replace it with the exact path in the above example. Make sure that
this file's protection allows for execution:
chmod 755 /etc/p3am
For Windows machines:
The Queue and Remote Managers are installed as services. Once the service is installed, no further action
needs to be taken. In general the installation from the media installs these services. You will have to start
and stop them to reconfigure if you change the configuration files. If for some reason you must install the
Analysis Manager manually and assuming that the following directory exists:
$P3_HOME\p3manager_files
follow these steps,
1. Edit the files install_server.bat and install_client.bat in
$P3_HOME\p3manager_files\bin\WINNT
and make sure that the path points to
$P3_HOME\\p3manager_files\\bin\\WINNT\\QueMgr.exe and RmtMgr.exe,
respectively. Make sure there are two back slashes between each entry.
2. Double-click the install_server.bat and install_client.bat files. This will
install the services.
3. Edit the gui_install.reg file and make sure the path is correct also with two back slashes
between each entry in the Path= field, e.g.,
Path=C:\\MSC.Software\\MSC.Patran\\2004\\p3manager_files\\bin
\\WINNT\\AnalysisMgrB.dll
4. Right mouse click the gui_install.reg file and select Merge. This will merge it into the
registry. This is not necessary if you've installed from the CD. If you get a message saying, "No
doctemplate is loaded. Cannot create new document.", this is because you have not merged this
file into the registry, or the path was incorrect.
5. Optional: You may want the Queue and Remote Manager services to start up as a user other
than Administrator. To do this, right mouse click My Computer and select Manage. Then
open the Services tree tab, find MSCQueMgr (or MSCRmtMgr), select it, and view
Properties from the Action pulldown menu. Under the Log On tab you can change to This
Account or select another account for the services to start up as.
6. You can start and stop the Queue Manager and/or Remote Managers from the Services form from
the previous step. However you can also use the small command files in
$P3_HOME\p3manager_files\bin\WINNT
called:
start_server.bat
start_client.bat
stop_server.bat
stop_client.bat
query_server.bat
query_client.bat
remove_server.bat
remove_client.bat
to do exactly as the file describes for starting, stopping, querying, and removing the Queue
Manager (server) service or the Remote Manager (client) service.
If you follow the above steps, manual installation should be successful. You will still have to edit your
configuration files and then reconfigure (or stop and start) the Queue Manager to read the configuration
before you will be able to successfully use the Analysis Manager. See Configuration Management
Interface.
Error Messages
The following are possible error messages and their corresponding explanations and possible solutions.
Only messages which are not self-explanatory are elaborated upon. If you are having trouble, please
check the QueMgr.log file, usually located in the directory
$P3_HOME/p3manager_files/<org>/log or in the directory that was specified by the -log
argument when starting the QueMgr. On Windows, check the Event Log under the Administrative
Tools Control Panel (or a system log on Unix).
Note:
The directories (conf, log, proj) for each set of configuration files (organizational
group) must have read, write, and execute (777) permission for all users. This can be the
cause of many task manager errors.
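For example, for an organizational group named default (the group name is illustrative), the
permissions could be opened up with:
chmod 777 $P3_HOME/p3manager_files/default/conf
chmod 777 $P3_HOME/p3manager_files/default/log
chmod 777 $P3_HOME/p3manager_files/default/proj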
Sometimes errors occur because the RmtMgr is running as root or administrator on Windows yet
RmtMgr is trying to access network resources such as shared drives. For this reason it is recommended
that RmtMgr (and QueMgr) be started as a normal user.
PCL Form Messages...
Patran Analysis Manager not installed.
Check for proper installation and authorization. Check with your system administrator. The Analysis
Manager directory $P3_HOME/p3manager_files must exist and a proper license must be available
for the Analysis Manager to be accessible from within Patran.
Windows
No doctemplate is loaded. Cannot create new document.
If you get this message, it is because you have not merged the gui_install.reg file into the registry,
or the path was incorrect. See For Windows machines: in Queue Manager.
Job Manager Daemon (JobMgr) Errors
ERROR... Starting a JobMgr on local host.
JobMgr is unable to run most likely because of a permission problem. Make sure that the input deck is
being submitted from a directory that has read/write permissions set.
================
311 ERROR... Unable to start network communication on server side.
335 ERROR... Unable to initiate server communication.
JobMgr is unable to create server communication. A possible reason is that the host's network interface is not
configured properly.
================
312 ERROR... Unable to start network communication on client side.
ERROR... Unable to create and connect to client socket
JobMgr is unable to create client communication. A possible reason is that the host's network interface is not
configured properly.
================
ERROR... Problem in socket accept
301 ERROR... Unable to accept network connection.
ERROR... Unable to accept message
ERROR... Unable to complete network accept.
JobMgr is unable to complete the communication connection. A possible reason is that the host's network
interface is not configured properly, or the network connectivity has been interrupted.
================
307 ERROR... Problem with network communication select.
ERROR... Select ready returned, but no data
ERROR... Problem in socket select
302 ERROR... Unable to read data from network connection.
306 ERROR... Data ready on network, but unable to read.
ERROR... Socket empty
327 ERROR... Data channel empty during read.
324 ERROR... Error with network communication select.
ERROR... Unknown error on select
JobMgr is unable to determine when data is available or reading. Possible cause is loss of network
connectivity.
================
ERROR... Problem reading socket message.
326 ERROR... Timeout while reading message.
325 ERROR... Error in message received.
304 ERROR... Unknown receive_message error
305 ERROR... Timeout with no responses
JobMgr received an error while trying to read data or received a timeout while waiting to read data.
Possible cause is loss in network connectivity or the sending process has terminated prematurely.
================
321 ERROR... Unable to contact QueMgr
ERROR... Timeout with no response from server.
JobMgr received an error or timeout while trying to contact the QueMgr. Possible cause is loss in
network connectivity or the QueMgr process has terminated prematurely.
================
ERROR... Unable to accept connection from A/M
ERROR... Timeout with no response from A/M
ERROR... Unable to contact ANALYSIS MANAGER interface.
JobMgr received an error or timeout while trying to contact the Analysis Manager interface. Possible
cause is loss in network connectivity or the Analysis Manager interface process has terminated (either
prematurely or by user intervention).
================
JobMgr is unable to receive configuration information from the Analysis Manager interface for a submit
job request. Possible cause is loss in network connectivity or premature termination of the Analysis
Manager interface process.
================
340 ERROR... Unable to send general config info.
341 ERROR... Unable to send application specific config info.
342 ERROR... Unable to send application specific submit info.
ERROR... Unable to alloc memory for file sys max
ERROR... Unable to alloc memory for file sys space
ERROR... Unable to alloc memory for file sys names
This indicates the workstation is out of memory. Free up memory used by other processes and try to
submit again at a later time.
================
344 ERROR... Unable to determine mail flag from config struct
ERROR... Unable to determine mail config setting.
JobMgr is unable to query memory for the mail config setting. Contact support personnel for assistance.
================
345 ERROR... Unable to determine delay time from config struct
ERROR... Unable to determine delay time setting.
JobMgr is unable to query memory for the delay time config setting. Contact support personnel for
assistance.
================
350 ERROR... Unable to determine disk req from config struct
ERROR... Unable to determine disk requirement.
JobMgr is unable to query memory for the disk req config setting. Contact support personnel for
assistance.
================
349 ERROR... Unable to determine memory req from config struct
ERROR... Unable to determine memory requirement.
JobMgr is unable to query memory for the memory req config setting. Contact support personnel for
assistance.
================
352 ERROR... Unable to determine pos prog from config struct
ERROR... Unable to determine pos program.
JobMgr is unable to query memory for the pos prog config setting. Contact support personnel for
assistance.
================
351 ERROR... Unable to determine pre prog from config struct
ERROR... Unable to determine pre program.
JobMgr is unable to query memory for the pre prog config setting. Contact support personnel for
assistance.
================
353 ERROR... Unable to determine job filename from config struct
ERROR... Unable to determine job filename.
JobMgr is unable to query memory for the job filename config setting. Contact support personnel for
assistance.
================
337 ERROR... Unable to determine specific index from submit struct
338 ERROR... Unable to determine submit specific host index.
JobMgr is unable to query memory for the specific index config setting. Contact support personnel for
assistance.
================
ERROR... Unable to determine submit index from submit struct
ERROR... Unable to determine submit host index.
JobMgr is unable to query memory for the submit index config setting. Contact support personnel for
assistance.
================
ERROR... Unable to determine/execute message
JobMgr is unable to set-up and/or receive the data file from the [Aba,Gen,Mar,Nas]Mgr. Possible
cause is loss of network connectivity or premature termination of the [Aba,Gen,Mar,Nas]Mgr process.
================
ERROR... Unable to send file
333 ERROR... Unable to transfer data file.
JobMgr is unable to send data file to [Aba,Gen,Mar,Nas]Mgr process. The executing host or network
connection may be down.
================
346 ERROR... Unknown state
347 ERROR... Inconsistant state.
JobMgr is unable to open log file. Check write permission on the current working directory and log file
(if it exists).
================
348 ERROR... Unable to cd to local work dir.
JobMgr is unable to change directory to the directory with the input filename specified. Check existence
and permissions on this directory.
================
314 ERROR... Unable to determine true host address.
315 ERROR... Unable to determine actual host address.
JobMgr is unable to determine its host address. Possible causes are an invalid host file or name
server entry.
================
316 ERROR... Unable to determine usrname.
JobMgr cannot determine user name. Check the passwd, user account files for possible errors.
================
322 ERROR... Unable to open stdout stream.
ERROR... Unable to open stderr stream.
JobMgr is asked to submit a job which is not from a supported application. Check installation by
running the basic and network tests with the Administration tool.
================
343 ERROR... Received signal.
JobMgr received a signal from the operating system. JobMgr has encountered an error or was signalled
by a user.
================
354 ERROR... Invalid version of Job Manager.
The current version of JobMgr does not match that of the QueMgr. An invalid/incomplete installation
is most likely the cause. To determine what version of each executable is installed, type JobMgr -version
and QueMgr -version and compare output.
=======================================================
P3Mgr is unable to determine the P3_HOME environment variable. Set the environment variable to the
location of the Patran install path (<installation_directory> for example).
================
ERROR... Obtaining ANALYSIS MANAGER licenses.
P3Mgr is unable to obtain necessary license tokens. Check nodelock file
or netls daemons.
================
ERROR... Problem reading .p3mgrrc
ERROR... Unable to write to file <.../.p3mgrrc[_org]>
P3Mgr is unable to read/write to the rc file to load/save configuration settings. Check the owner and
permissions on the designated file.
================
ERROR... Unable to determine QueMgr host or port
P3Mgr cannot determine which port to connect to a valid Queue Manager. The file QueMgr.sid is not
actually used anymore. You should set P3_MASTER and P3_PORT environment variables, or use the
org.cfg file.
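For example, in a csh-style shell (the host name and port are illustrative and must match the values the
QueMgr was started with, as in the org.cfg example earlier):
setenv P3_MASTER casablanca
setenv P3_PORT 1500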
================
ERROR... QueMgr host <> from org.cfg file inconsistent
ERROR... QueMgr port <> from org.cfg file inconsistent
P3Mgr has found the QueMgr host and/or port from the QueMgr.sid file to be different from that
found in the org.cfg file. Check the org.cfg file and modify it to match the current QueMgr settings,
or restart the QueMgr with the org.cfg settings. The QueMgr.sid file is no longer used so this message
should never appear. Call MSC support personnel if this happens.
================
ERROR... Unable to determine address of master host <>
P3Mgr is unable to determine the address of the master host provided. Check the QueMgr.sid file for
proper hostname and/or the org.cfg file and/or the P3_MASTER environment variable. Also check for
an invalid host file or name server entry. The QueMgr.sid file is no longer used so this message should
never appear. Call MSC support personnel if this happens.
================
ERROR... Unable to Contact QueMgr to determine version information.
P3Mgr is unable to contact the QueMgr. Check to see if the QueMgr is up and running, and the master
host is up and running, and the P3Mgr host and the master host can communicate via the network.
================
ERROR...
ERROR...
ERROR...
ERROR...
ERROR...
ERROR...
ERROR...
ERROR...
P3Mgr cannot create communication socket, or is unable to send request to the QueMgr process. Check
the QueMgr process is up and running, the QueMgr host is up and running, and the network is
connected.
================
ERROR... Creating Communications Socket For Job Monitor
ERROR... Establishing communication to Job Mgr with port <>.
ERROR... In Job Mgr Communication
P3Mgr cannot create communication socket, or is unable to send request to the JobMgr process. Check
the JobMgr process is up and running, and the network is connected.
================
P3Mgr cannot send configuration info to the P3Mgr process. Check that the P3Mgr process is up and
running, and the network interface is configured properly.
================
ERROR... An incompatible version of the QueMgr is currently running.
P3Mgr has determined that the version of the QueMgr presently running is not compatible. An
incomplete or invalid installation is most likely the cause. Type P3Mgr -version and QueMgr -version
and compare the output. Re-install the software if necessary.
================
ERROR... No valid applications defined in QueMgr configuration
P3Mgr has been started with an application not supported by the current configuration used by the
QueMgr. Update the configuration files to include the new application and restart the QueMgr before
continuing.
================
ERROR... Org <> does not contain any defined applications.
P3Mgr has been started (or switched to) an organization (group) which does not contain any
applications. Check the configuration files and restart the QueMgr process for the designated
organization.
================
ERROR... Filename is too long. Shorten jobname
P3Mgr can only submit jobs with files no longer than 32 characters. Shorten the job filename to less than
32 characters and submit again.
================
ERROR... Job <> is not currently active.
P3Mgr was asked to monitor or delete a job with a given name (and owner) which cannot be located in
the queue.
================
ERROR... File <> does not exist... Enter A/M to select file explicitly.
P3Mgr was asked to monitor a completed job (using the mon file) from the jobname information only
and this file cannot be found. Use the Full Analysis Manager interface and the file browser (under
Monitor, Completed Job) to select an existing mon file.
================
ERROR... Monitor file <> does not exist
P3Mgr was asked to monitor a completed job and the selected mon file does not exist. Select an existing
mon file and try again.
================
ERROR... You must choose an A/M monitor file (.mon extension).
P3Mgr was asked to monitor a completed job, but no mon file was specified. Select a mon file and try
again.
================
ERROR... Unable to parse Monitor File
P3Mgr encountered an error while parsing the mon file. Contact support personnel for assistance.
================
ERROR... than one active job named <> found. Request an active list
ERROR... More than one active job named <> found. Enter A/M to
explicitly select job
ERROR... No jobs named <> owned by <> are currently active
P3Mgr is asked to monitor or delete a job from the job name (and owner) and no such job can be located
in the queues. Select an active list of jobs from the Full Analysis Manager interface (Monitor, Running
jobs)
================
ERROR... No Host Selected. Submit cancelled.
ERROR... No Queue Selected. Submit cancelled.
P3Mgr attempted to submit a job, but no host or queue was selected. Select a host or queue and try to
submit again.
================
ERROR... QueMgr has been reconfigured. Exit and restart to continue.
P3Mgr has found the QueMgr has been reconfigured (restarted) and so the P3Mgr's copy of the
configuration information may be invalid. Exit the P3Mgr interface and restart to load the latest
configuration information before continuing.
================
ERROR... Starting a Job Mgr on local host.
P3Mgr is unable to spawn a JobMgr process. Perhaps the system is heavily loaded or the maximum
number of per-user processes has been met. Free up unused processes and try to submit again.
================
ERROR... Unable to alloc mem for load info.
ERROR... Unable to allocate memory for org structure
This indicates the workstation is out of memory. Free up memory used by other processes and try again.
================
P3Mgr is unable to open a log file. Check file permissions if it exists, or check local directory
access/permissions.
================
ERROR... Unable to open unique submit log file
P3Mgr is unable to open a submit log file. Check file permissions if it exists, or check local directory
access/permissions.
================
ERROR... Unknown version of ANALYSIS MANAGER .p3mgrrc file
P3Mgr has attempted to read in a .p3mgrrc file but from an unsupported version. Remove the
.p3mgrrc file and save configuration settings in a new .p3mgrrc file.
================
ERROR... Could not open file <>
P3Mgr is asked to submit a job, but no filename has been input. Select an input filename and try to
submit again.
================
ERROR... File <> does not exist.
P3Mgr has been asked to submit a file, but no such file can be found. Select an existing input file (or
check file permissions) and submit again.
Additional (submit) Errors...
ABAQUS:
P3Mgr is unable to open the designated input file. Check file permissions.
================
ERROR... JobName <> and Input Temperature File JobName <> are identical.
ABAQUS cannot have jobs where the job name and the temperature data file job name are the same.
Change one or the other and re-submit.
================
ERROR... *RESTART, READ found but no restart jobname specified.
ABAQUS RESTART card encountered, but no filename specified. Add filename to this card and
re-submit.
MSC Nastran:
================
ERROR... File <> cannot contain more than one period in its name.
P3Mgr will currently only allow MSC Nastran jobs to contain one period in their
filename. Rename the input file to contain no more than one period and re-submit.
================
ERROR... File <> must begin with an alpha character.
P3Mgr will currently only allow MSC Nastran jobs to start with an alpha character, and not a number.
Rename the input file to start with a letter (A-Z, a-z) and re-submit.
================
ERROR... Include cards are too early in file.
P3Mgr can currently only support MSC Nastran jobs with Include cards between the BEGIN BULK
and ENDDATA cards. Place the contents of the include files before the BEGIN BULK card directly
into the input file and re-submit.
================
ERROR... BEGIN BULK card present with no CEND card present.
P3Mgr has encountered a BEGIN BULK card before a CEND card. P3Mgr currently requires a
BEGIN BULK card if there is a CEND card found in the input file. Add a BEGIN BULK card to
the input file and re-submit.
================
ERROR... ENDDATA card missing.
P3Mgr has encountered the end of the input file without finding an ENDDATA card. Add an
ENDDATA card and re-submit.
Editing (p3edit) Errors...
ERROR... Unable to allocate enough memory for file list.
This indicates the workstation is out of memory. Free up memory used by other processes and try again.
================
ERROR... Unable to determine file statistics for <>
This indicates the operating system is unable to determine file statistics for the requested file. The
requested file most likely does not exist. Select an existing file to view/edit and try again.
================
ERROR... File <> does not appear to be ASCII
P3edit can only view/edit ASCII files, and the requested file appears to be non-ASCII. Select an ASCII
file to view/edit and try again.
================
ERROR... File <> is too large to view
Due to memory constraints, P3edit is limited to viewing/editing files no larger than 16 mb in size. (Except
for Cray and Convex machines, where the max file size limit is 60 mb and 40 mb, respectively) Select a
smaller file to view/edit and try again.
================
ERROR... Unable to open file <>
P3edit is unable to open requested file to load into viewer/editor. Select an existing file to view/edit
and try again.
================
ERROR... File <> is empty
P3edit has found the selected file is empty. Currently P3edit can only view/edit files with data. Select
a file containing data to view/edit and try again.
================
ERROR... Unable to seek to beginning of file <>
P3edit is unable to seek to beginning of selected file. Possible system error occurred during seek, or
file is corrupted.
================
ERROR... Unable to read in file <>
A system error occurred during file read. Try to view/edit file again at another time.
================
ERROR... Unable to read text
P3edit is unable to read file completely, or is unable to read text from memory to write file.
================
ERROR... Unable to scan text
P3edit is unable to scan text from memory to search for text pattern.
================
ERROR... Unable to write text to file
P3edit is unable to write text out to file. Perhaps the disk is full, or some system error occurred during
the write call.
RmtMgr Errors...
RmtMgr errors are returned to the connection program and also printed in the OS system log (syslog)
or Event Viewer for Windows.
================
RmtMgr Error RM_CANT_GET_ADDRESS
This should not happen, but if for some reason the OS / network cannot determine the network address of
the machine RmtMgr is started on, this will be printed before RmtMgr exits. Contact your system
administrator for more information.
================
RmtMgr Error port number #### invalid
This should not happen, but if for some reason the program contacting the RmtMgr does not supply a
valid port then this will be printed. The connection to the RmtMgr cannot be completed, but the RmtMgr
will continue on listening for other connections.
================
RmtMgr Error RM_CANT_CREATE_SERVER_SOCKET
If the RmtMgr cannot create its main server socket for listening, then this error message will be printed
before the RmtMgr exits.
================
RmtMgr Error accept failed, errno = ##
If the accept system call fails on the socket after a connection is established this will be printed. The error
number should be checked against the system errno list for the platform to see the cause.
================
RmtMgr Error unable to end proc ## errno=%d
If RmtMgr is asked to kill/end a process and it is unable to do so then this message is printed. The errno
should give the reason.
================
RmtMgr Error Invalid message of <xxxxx>
An invalid message format/syntax was sent to RmtMgr. The message will be ignored and RmtMgr will
continue, listening for other connections.
================
RmtMgr Error Invalid message status code = ##
The status on the received message is not correct; the status code will help determine the cause. Most
likely the reason is invalid network connectivity.
================
RmtMgr Error Invalid NULL message
An invalid message format/syntax was sent to RmtMgr. The message will be ignored and RmtMgr will
continue, listening for other connections.
================
RmtMgr Error unable to determine system type, error = ##
RmtMgr is unable to determine what kind of platform it is running on. RmtMgr will exit. Check the
supported platform/OS list.
================
RmtMgr Error WSAStartup failed, code = ##
Windows network/socket communication initialization failed; the code should be checked against the
Windows system error list for the cause.
================
RmtMgr Error WSAStartup version incompatible, code = ##
If a contacting program is of a different version than the RmtMgr then this message is printed. The
RmtMgr will continue, listening for other connections.
QueMgr Errors...
Sometimes errors occur because the RmtMgr is running as root or administrator on Windows yet
RmtMgr is trying to access network resources such as shared drives. For this reason it is recommended
that RmtMgr (and QueMgr) be started as a normal user.
ERROR... Determining computer architecture
QueMgr is unable to recognize its host architecture. Check installation and OS version compatibility.
================
ERROR... Invalid -port option argument <>
ERROR... Invalid -usr option argument <>
QueMgr was started with an invalid port or user argument. Select a valid port or user name argument
and try to start the QueMgr again.
================
ERROR... Cant Find ... (several variants)
202 ERROR... Job to ...
QueMgr is unable to locate job in internal list to remove, resume or suspend. Contact support personnel
for assistance.
================
ERROR... Cant Resume Job unless its RUNNING and SUSPENDED.
ERROR... Cant Suspend Job unless its RUNNING.
QueMgr received an invalid suspend/resume request. QueMgr can only suspend running jobs, and can
only resume suspended jobs.
================
203 ERROR... Problem creating com file for Task Manager execution.
QueMgr is unable to create a com file on the eligible host(s) for execution. Possible causes are lack of
permission connecting to the eligible host(s) as the designated user (check network permission/access
using the Administration tool) or incorrect path/permission on the directory on the eligible host(s). (Use
the Administration tool to check this.) The major cause of this error is that the specified user does not
have remote shell access from the Master Host to the Analysis Host. Resolutions to this problem are to
add the Analysis Host name to the hosts.equiv file or the user's .rhosts file on the Master Host.
================
ERROR... Creating host/port file <>.
QueMgr is unable to create the QueMgr.sid file containing its host and port number. Possible causes
are invalid sid user name (with the -usr command line option), invalid organization (-org
command line option) or invalid network or directory/file permissions. This file and -usr are no longer
used and the message should not ever appear. Call MSC support if this happens.
================
ERROR... (several RECONFIG and log/rdb file open error messages)
QueMgr is unable to open the log and/or rdb files. Check the owner and permissions of these files. If
the rdb file is corrupted, QueMgr may not be able to seek to its end and determine the next available job
number to use.
================
ERROR... RECONFIG Out Of Memory
ERROR... Problem Allocating memory for return string
ERROR... Unable to alloc mem for load info
ERROR... Unable to alloc memory for load information
The workstation is out of memory. Free up memory used by other processes and restart the QueMgr.
================
ERROR... Problem Creating Socket.
ERROR... Problem Binding Socket.
ERROR... Problem Connecting Socket.
ERROR... Problem Reading Socket Message.
ERROR... Problem Writing To Socket.
ERROR... Problem in socket accept.
ERROR... Problem in socket select.
QueMgr has been requested to terminate a job, and cannot inform the JobMgr of this. Possible cause is
network connectivity loss, or the JobMgr process has terminated unexpectedly, or the JobMgr
workstation is down.
ERROR... Host Index <> Received, Max Index is <>,
ERROR... Queue Index <> Received, Max Index is <>
ERROR... Specific Index <> Received, Max Index is <>
QueMgr asked to submit a job with an invalid index. Contact support personnel for assistance.
================
ERROR... User: <> can not delete job number <> which is owned by: <>
201 ERROR... User can not kill a job owned by someone else.
QueMgr was asked to delete a job from a user other than the one who submitted the job. Only the user
who submitted a job is eligible to delete it.
================
ERROR... You must be the root user to run QueMgr.
The QueMgr process must run as the root user. Restart the QueMgr as the root user.
================
209 ERROR... Unable to start Task Manager.
QueMgr is unable to start up a LoadMgr process. Check network/host connections and access, and
install tree path on the remote host. Also, check admin user account access. (Use the Administration tool
to check network permissions.) Obsolete. RmtMgr is now used.
================
205 ERROR... Error submitting job to NQS. See Que Manager Log.
211 ERROR... Unable to submit task to NQS queue.
QueMgr received an error while trying to submit a job to an NQS queue. Check the QueMgr.log file
for the more detailed NQS error.
================
206 ERROR... Error submitting job to LSF. See Que Manager Log.
207 ERROR... Error in return string from LSF bsub. See Que Manager Log.
210 ERROR... Unable to submit task to LSF queue.
QueMgr received an error while trying to submit a job to an LSF queue. Check the QueMgr.log file
for the more detailed LSF error.
================
214 ERROR... Unable to delete task from NQS queue.
QueMgr is unable to delete task from an NQS queue. Perhaps the task has finished or has already been
deleted by an outside source.
================
213 ERROR... Unable to delete task from LSF queue.
QueMgr is unable to delete task from an LSF queue. Perhaps the task has finished or has already been
deleted by an outside source.
================
217 ERROR... Job killed from outside queue system.
QueMgr cannot find job in queue when it is expected to be there. QueMgr can only assume the job was
deleted from outside the Analysis Manager environment.
================
218 ERROR... Invalid version of Task Manager.
The current version of [Aba,Gen,Mar,Nas]Mgr does not match that of the QueMgr. An
invalid/incomplete installation is most likely the cause. To determine what version of each executable is
installed, type [Aba,Gen,Mar,Nas]Mgr -version and QueMgr -version and compare output.
The application has terminated with errors. This is not an Analysis Manager error, but just indicates there
are errors in the analysis. Check and modify the input file and try again.
================
ERROR... Pos_application fatal
434 ERROR... Application fatal in pos routine.
The application received a fatal error in the pos routine. Contact support personnel for assistance.
================
ERROR... Pre application error
435 ERROR... Application fatal in pre routine.
The application received a fatal error in the pre routine. Contact support personnel for assistance.
================
ERROR... Abort_application fatal
The application received a fatal error in the abort routine. Contact support personnel for assistance.
================
426 ERROR... Unable to alloc mem.
ERROR... Unable to alloc mem for File_Sys
The workstation is out of memory. Free up memory used by other processes and try again.
================
403 ERROR... Unable to initiate network communication.
413 ERROR... Unable to initiate file transfer network communication.
423 ERROR... Problem with network communication accept.
415 ERROR... Problem with network communication select.
404 ERROR... Problem with network communication select.
ERROR... Unknown error on select
Process is unable to contact the QueMgr. Perhaps the QueMgr host is down, or the QueMgr process is
not running or the network is down.
================
ERROR... Timeout with no response from JobMgr
ERROR... Unable to accept connection from JobMgr
430 ERROR... Unable to contact JobMgr
Process is unable to contact the JobMgr. Perhaps the JobMgr host is down, or the JobMgr process is
not running or the network is down.
================
ERROR... Unable to create info socket
417 ERROR... Unable to initiate job info network communication.
414 ERROR... Unable to determine job info over network.
Process is unable to determine job filename. Contact support personnel for assistance.
================
ERROR... Unable to determine number of clock tics/sec
425 ERROR... Unable to determine machine clock rate.
Process cannot determine the machine setting for the number of clock tics per second. Check machine
operating system manual for further assistance.
================
[Aba,Gen,Mar,Nas]Mgr cannot fork a new process. Perhaps the system is heavily loaded or the
maximum number of per-user processes has been exceeded. Terminate extra processes and try again.
================
ERROR... Unable to load file system info
409 ERROR... Unable to load file system information.
Process could not determine its current working directory. Check file systems designated for executing
[Aba,Gen,Mar,Nas]Mgr.
427 ERROR... Unable to create work file system dirs.
ERROR... Unable to make proj sub-dirs off file systems
Process could not make directories off main file system entries in the configuration. Check file
system/directory access/permission. (Use the Administration tool.)
================
428 ERROR... Unable to create local work dir.
ERROR... Unable to make unique dir off proj dir
Process is unable to make a unique named directory below the designated file system/directory it is
currently executing out of. Check file system/directory access/permission. (Use the Administration tool.)
================
411 ERROR... Unable to cd to local work dir.
429 ERROR... Unable to cd to local work dir.
ERROR... Unable to cd to unique dir off proj dir
Process is unable to change directory to the unique directory created below the file system/directory
designated in the configuration. Check owner and permission of parent directory.
================
400 ERROR... Unable to ...
412 ERROR... Unable to ...
401 ERROR... Unable to ...
421 ERROR... Unable to ...
[Aba,Gen,Mar,Nas]Mgr is unable to open, re-open log and/or stdout, stderr stream files. Check current
directory permissions.
================
ERROR... Unable to place process in new group
Process is unable to change process group. Contact support personnel for assistance.
================
ERROR... Unable to obtain config info from QueMgr
402 ERROR... Unable to receive configuration info.
ERROR... Unable to receive gen_config struct
ERROR... Unable to receive app_config struct
ERROR... Unable to receive app_submit struct
405 ERROR... Unable to receive general config info.
406 ERROR... Unable to receive application config info.
407 ERROR... Unable to receive application submit info.
[Aba,Gen,Mar,Nas]Mgr is unable to query memory for either a host index or submit parameters.
Contact support personnel for assistance.
================
416 ERROR... Unable to receive data file.
ERROR... Unable to recv file <> from JobMgr
Process is unable to receive file from JobMgr. Possibly the network is down, or the JobMgr host is off
the network, or the JobMgr host is down.
================
ERROR... file: <> cannot be sent
Process can not send file to JobMgr. Possibly the network is down, or the JobMgr host is off the
network, or the JobMgr host is down.
================
432 ERROR... Task aborted.
437 ERROR... Task aborted while executing.
438 ERROR... Task aborted before execution.
439 ERROR... Task aborted after execution.
The [Aba,Gen,Mar,Nas]Mgr has been aborted. This is not necessarily an Analysis Manager error, but
an indication that the analysis has been terminated by the user.
================
441 ERROR... Received 2nd signal.
The Process has received a signal, either from an abort (from the user) or from an internal error, and
during the shutdown procedures, a second signal has occurred, indicating an error in the shutdown
procedure.
================
ERROR... Total disk space req of %d (kb) cannot be met
[Aba,Gen,Mar,Nas]Mgr is unable to locate the amount of free disk space on all designated file systems
of this host for the analysis to be run. Either free up some disk space, reduce the amount of disk requested
in the interface, or submit job to a different host (with a different set of file systems).
================
ERROR... Unable to temporarily rename input file
[Aba,Gen,Mar,Nas]Mgr is unable to rename input file temporarily. Contact support personnel for
assistance.
================
ERROR... File <> cannot be found
ERROR... Unable to open input file
Process has transferred the designated file, but is now unable to locate it for opening or reading. Check
network connections, JobMgr host, and file system permissions.
================
443 ERROR... Invalid version of Task Manager
The current version of [Aba,Gen,Mar,Nas]Mgr does not match that of the QueMgr/RmtMgr. An
invalid/incomplete installation is most likely the cause. To determine what version of each executable is
installed, type [Aba,Gen,Mar,Nas]Mgr -version, and QueMgr/RmtMgr -version and compare output.
Additional application specific Errors...
ABAQUS (AbaMgr):
AbaMgr is unable to load configuration information from internal memory. Contact support personnel
for assistance.
================
ERROR... Unable to load ABAQUS submit info
AbaMgr is unable to load submit information from internal memory. Contact support personnel for
assistance.
================
GENERAL (GenMgr):
GenMgr is unable to load submit information from internal memory. Contact support personnel for
assistance.
================
MSC Nastran (NasMgr):
NasMgr only allows Include cards to be within the BEGIN BULK and ENDDATA sections of the input
file. Place the contents of the include cards which lie before the BEGIN BULK card directly in the input
file, and submit again.
================
ERROR... Restart type file but no MASTER file specified
The input file appears to be a restart, as the RESTART card is found, but no MASTER file is specified.
Use an ASSIGN card to designate which MASTER file is to be used, and submit again.
================
ERROR... Unable to add MASTER database FMS
ERROR... Unable to add DBALL database FMS
ERROR... Unable to add SCRATCH database FMS
When NasMgr is adding FMS, line length is found to be greater than the maximum of 80 characters.
Decrease the filename (jobname) length or use links to shorten the file system/directory names.
================
ERROR... Unable to load MSC Nastran configuration info
NasMgr is unable to load configuration information from internal memory. Contact support personnel
for assistance.
================
ERROR... Unable to load MSC Nastran submit info
NasMgr is unable to load submit information from internal memory. Contact support personnel for
assistance.
================
ERROR... Unable to read file include <>
NasMgr has transferred the designated file, but is now unable to locate it for opening or reading. Check
network connections, JobMgr host, and file system permissions.
================
ERROR... Unexpected end of file
NasMgr has encountered the end of the input file without finding complete information. Check input
file.
MSC.Marc (MarMgr):
================
ERROR... Unable to load MSC.Marc configuration info.
The network is unable to transfer the MSC.Marc config info over to the MarMgr from the JobMgr
running on the submit machine. Check network connectivity and the submit machine for any problems.
================
ERROR... Unable to load MSC.Marc submit info
The network is unable to transfer the MSC.Marc submit info over to the MarMgr from the JobMgr
running on the submit machine. Check network connectivity and the submit machine for any problems.
================
INFORMATION: Total disk space req of %d (kb) met
Information message telling that enough disk space has been found on the file systems configured for
MSC.Marc to run.
================
WARNING: Total disk space req of %d (kb) cannot IMMEDIATLEY be met.
Continuing on regardless ...
Information message telling that there is currently not enough free disk space found to honor the space
requirement provided by the user. The job will continue however, because the space may be freed up at
a later time (by another job finishing, perhaps) before this job needs it.
================
ERROR... Total disk space req of %d (kb) cannot EVER be met. Cannot
continue.
There is not enough disk space (free or used) to honor the space requirement provided by the user so the
job will stop. Add more disk space or check the requirement specified.
================
WARNING: Cannot determine if disk space req %d (kb) can be met.
Continuing on regardless ...
Information message telling the disk space of the file system(s) configured for MSC.Marc cannot be
determined. The job will continue anyway as there may be enough space. Sometimes, if the file system is
mounted over NFS, the size of the file system is not available.
================
INFORMATION: No disk space requirement specified
If no disk space requirement is provided by the user then this information message will be printed.
================
ERROR... Unable to alloc ## bytes of memory in sss, line lll
The MarMgr is unable to allocate memory for its own use, check the memory and swap space on the
executing machine.
================
ERROR... Unable to receive file sss
MarMgr could not transfer a file from the JobMgr on the submit machine. Check the network
connectivity and submit machine for any problems.
Administration (AdmMgr) Testing Messages...
ERROR... An invalid version of Queue Manager is currently running
The current version of AdmMgr does not match that of the running QueMgr. An invalid/incomplete
installation is most likely the cause. To determine what version of each executable is installed, type
AdmMgr -version, and QueMgr -version and compare output.
================
ERROR... <> specified as type for host, but <> detected.
ERROR... <> specified as type for host, but UNKNOWN is detected.
ERROR... Host Type <> is not a valid selection for host <>.
The AdmMgr program has discovered the host architecture for the indicated host is not the same as what
is designated in the configuration, or no specific type has been given to this host. Change the host type
to the correct one and re-test.
================
ERROR... A/M Host <> configuration file <> does not contain an absolute
path.
The AdmMgr program has found an rc file entry or an exe file entry in the host.cfg file, or a file
system in the disk.cfg file, that is not a full path. Change the entries to be fully qualified (starting with
a slash character, /).
================
ERROR... A/M Host <> does not have a valid application defined. Run
Basic A/M Host Test.
The configuration files do not contain any valid applications. Add a valid application and all its required
information and run the basic test to verify.
================
ERROR... A/M Host <> filesystem <> does not contain an absolute path.
The file system designated for the host listed is not fully qualified. Change the entry to begin with a slash
/ character.
================
ERROR... A/M Host <> runtime file <> does not contain an absolute path.
The rc file designated for the host listed is not fully qualified. Change the entry to begin with a slash /
character.
================
ERROR... A/M Host name <> is used more than once within application <>.
Each application contains a list of Analysis Manager host names (which are mapped to physical host
names) and each Analysis Manager host name must be unique. The AdmMgr program has found that the
designated Analysis Manager host name is being used more than once. Change the Analysis Manager
host name for all but one of the applications and re-test.
================
ERROR... Access to host <> failed for admin <>.
ERROR... Access to host <> failed for user <>.
These errors indicate various problems when trying to execute a command on the designated host as the
admin or user provided. Network access to a host for a user can fail for a number of reasons, among them
lack of network permission or the network/host being down. To check network permission, the user must
be able to rsh (or remsh on some platforms) from the master host (where the QueMgr process runs)
to each application host (not the interface host, but the host where the defined application is targeted to
run). Rsh (or remsh) access is denied if the user has no password on the remote host, does not have a
valid .rhosts file in his/her home directory on the remote host (owned by the same user and group ids,
with file permissions of 600 (-rw-------)), or there is no /etc/hosts.equiv file on the remote host
with an entry for the originating host. The RmtMgr replaces rsh. Call MSC if you get this error.
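For reference, a minimal sketch of the two files mentioned above is shown here; the host and user names
are placeholders and must be replaced with the actual master host and account names:

$HOME/.rhosts on the application host (owned by the user, mode 600):
master_host someuser

/etc/hosts.equiv on the application host:
master_host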
================
ERROR... Admin account can NOT be root.
The AdmMgr program requires an administrator account which is not the root account. Change the
administrator account name to something other than root and continue testing.
================
ERROR... Unable to locate Admin account <>.
The AdmMgr program is unable to locate the admin account name in the passwd file/database. Make sure
the admin account name provided is a valid user account name on all application hosts (and the master
host as well) and continue testing.
================
ERROR... Application must be chosen for A/M Host <>.
The configuration requires each Analysis Manager host to reference an application. Add a reference for
this AM host and re-test.
================
ERROR... Application name <> is not referenced by any A/M Host.
The application specified is not referenced by any Analysis Manager hosts. Add AM hosts or remove this
application and continue testing.
================
ERROR... Application name <> is used more than once.
Only unique application names can be used. Re-name the applications so no two are alike and re-test.
================
ERROR... Application not specified for A/M Host <>.
Cannot do Unique A/M Host Test. The configuration requires each A/M host to reference an application.
Add a reference for this A/M host and re-test.
================
ERROR... At least one filesystem must be defined for A/M Host <>.
The configuration requires each Analysis Manager host to reference a file system. Add a reference for
this AM host and re-test.
================
ERROR... At least one host must be specified.
At least one physical host must be specified. Add a physical host entry and re-test.
================
ERROR... At least one application must be specified.
The Admin program is unable to determine the host address for the designated host. Possible causes are
an invalid name entered, or an invalid host file or name server entry.
================
ERROR... Detected NULL host name.
Either the Analysis Manager install tree is not accessible on the remote host, or network access to the
remote host is denied. Make sure the Analysis Manager install tree is the same on all application hosts
(either through nfs or by created identical directories) and try again.
If network access is the cause for failure, check network permission. (see the ERROR... Access to host
<> failed for admin <> error description)
================
ERROR... Execution of command failed on host <>.
Either the command does not exist, or network access is denied. Most likely due to network access
permission. (See the ERROR... Access to host <> failed for admin <> error description.)
================
ERROR... Failure Creating file <> on host <>.
ERROR... Failure Accessing Test File <> on host <>.
The user does not have permission to create/access a test file in the proj directory on the designated
host. Check the permission of this directory on the remote host and re-test. If permission is not the
problem, check network access to the remote host as the user. (See the ERROR... Access to host <> failed
for admin <> error description.)
================
ERROR... Failure Creating Test File <> on host <>.
The user does not have permission to create/access a test file in the file system/directory on the designated
host as listed in the disk.cfg file. Check the permission of this directory on the remote host and re-test. If permission is not the problem, check network access to the remote host as the user. (See the
ERROR... Access to host <> failed for admin <> error description.)
================
ERROR... Host <> and Host <> have identical addresses.
Remove one of the host entries (since they are the same host) or change one to point to a different host
and re-test.
================
ERROR... Host not specified for A/M Host <>. Run Basic A/M Host Test.
Each Analysis Manager host must reference a physical host. Provide a physical host reference and
continue testing.
================
ERROR... Invalid A/M queue name <>.
ERROR... Invalid LSF queue name <>.
ERROR... Invalid NQS queue name <>.
Enter a valid queue name (no space characters, etc.) and re-test.
================
ERROR... LSF executables path <> must be an absolute path.
ERROR... NQS executables path <> must be an absolute path.
The pathname must be a fully qualified path name. Change the path to be fully qualified (starts with a
slash / character) and re-test.
================
ERROR... NULL A/M Host name is invalid.
Each Analysis Manager host must reference a physical host. Provide a physical host reference and
continue testing.
================
ERROR... Physical host must be chosen for A/M Host <>.
Each Analysis Manager host must reference a physical host. Provide a physical host reference and
continue testing.
================
ERROR... Remote execution of uname command failed on host <>.
Either the uname command (required by Analysis Manager) cannot be found in the user's default search
path on the remote host, or network access to the remote host is denied. Check the existence,
permission, and location of the uname command on the remote host. (Some Convex machines are
shipped without a uname, but Analysis Manager provides one, just place a copy of the uname program
(or link) into a default path directory, such as /bin.) If network access is the cause of failure, check as
above. (See the ERROR... Access to host <> failed for admin <> error description.)
================
ERROR... Unable to start executable <>
ERROR... Unable to start executable <>
ERROR... Unable to start executable <>
ERROR... Unable to start executable <>
ERROR... Unable to start executable <>
ERROR... Unable to start executable <>
Either the designated files do not exist on the remote host as indicated or the network access to the remote
host to test each executable is failing. Check the file existence and change the path as required, or check
network access as described above. (See the ERROR... Access to host <> failed for admin <> error
description.)
================
ERROR... Zero A/M hosts defined. At least one required.
Select a host or hosts from the list for each queue and continue.
================
ERROR... configuration file <> not located on physical host <>.
ERROR... runtime file <> not located on physical host <>.
The AdmMgr program cannot locate the rc or exe file designated in the configuration on the specified
host. Check the installation of the application or the rc/exe path and re-test.
================
ERROR... Unable to open unique submit log file
In the current working directory, there are more than 50 submit log files.
The QueMgr then maintains a sorted list of each RmtMgr machine and its capacity to report back to a
GUI/user. (A least loaded host selection is currently being developed so the QueMgr selects the actual
host for a submit based on these statistics, instead of a user explicitly setting the hostname in the GUI.)
There are a few other AM executables:
1. The TxtMgr - a simple text-based UI which is built on this API and demonstrates all these
features.
2. The JobMgr - the GUI back-end process. It starts up on the same machine as the GUI (the submit
machine) when a job is submitted and runs only for the life of that job. There is always one JobMgr
process per job.
3. The analysis family: these programs are all built on top of an additional API that implements the
features common to all of them. The common driver code is the same for every application, and the
custom work for each application is isolated in a few separate routines, pre_app(), post_app() and
abort_app() (a skeleton of this split is sketched after this list).
NasMgr - The MSC Nastran analysis process which communicates data to/from the JobMgr
and spawns the actual MSC Nastran sub-process. It also reads include files and transfers them,
adds FMS statements to the deck if appropriate, and periodically sends job resource data and
msgpop message data to the JobMgr to store off.
MarMgr - The MSC.Marc analysis process which does the same things as NasMgr, but for the
MSC.Marc application.
AbaMgr - The Abaqus analysis process which does the same things as NasMgr, but for the
Abaqus application.
GenMgr - The General application analysis process, used for any other application. Does
what NasMgr does except it has no knowledge of the application and just runs it and collects
resource usage.
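The split between the common driver code and these per-application routines can be pictured with a
skeleton like the one below. Only the routine names pre_app(), post_app() and abort_app() come from
the description above; the argument lists, return values and bodies are assumptions shown purely for
illustration.

/* Hypothetical skeleton of one analysis manager built on the common task API. */
static int pre_app(void)   /* assumed: stage and edit the input before the solver starts */
{
    /* e.g. read include files, add FMS statements, transfer files ... */
    return 0;
}
static int post_app(void)  /* assumed: gather results after the solver exits */
{
    /* e.g. transfer result files back, record final resource usage ... */
    return 0;
}
static int abort_app(void) /* assumed: clean up when the job is aborted */
{
    /* e.g. kill the solver sub-process and remove scratch files ... */
    return 0;
}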
General outline of the Analysis Manager API:
With Analysis Manager there are 5 fundamental functions one can perform:
1. Submit a job
2. Abort a job
3. Monitor a specific job
4. Monitor all the hosts/queues
5. List statistics of a completed job
Note:
Each function requires some common data and some unique data. Common data include the QueMgr host
and port it is listening on and the configuration structure information. Unique data is described
further below.
Configure
The first step to any of the Analysis Manager functions is to connect to an already running QueMgr. To
do this you must first know the host and port of the running QueMgr, which is usually in the
$P3_HOME/p3manager_files/org.cfg or the
$P3_HOME/p3manager_files/default/conf/QueMgr.sid file. After that, simply call
CONFIG *cfg;
char qmgr_host[128];
int qmgr_port;
int ret_code;
char error_msg[2048];
cfg = get_config(qmgr_host, qmgr_port, &ret_code, error_msg);
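The call above is schematic; in the shipped include file (see Include File) the corresponding entry points
are api_init() and api_get_config(). A minimal sketch, assuming the host name and port below are
replaced with the values found in org.cfg or QueMgr.sid:

CONFIG *cfg;
int status;
char out_str[2048];

api_init(out_str); /* must be the first api_* call made by the application */
cfg = api_get_config("some_quemgr_host", 1900, &status, out_str); /* placeholder host/port */
if(cfg == NULL)
    printf("%s", out_str);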
For submit, the GUI then needs to fill in the application structure data and make a call to submit the job.
The call may block and wait for the job to complete (maybe a very long time) or it can return
immediately. See the job info rcf/GUI settings listed below for what can be set and changed. Assuming
defaults for ALL settings, then only a jobname (input file selection), hostname and (possibly) memory
need to be set before submitting.
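When the queuing type is MSC_QUEUE, the list of host names the user can choose from is available
directly in the configuration structure, as the TxtMgr example at the end of this section does. A short
sketch (the application index is a placeholder):

int i;
int app = api_application_index - 1; /* index of the chosen application */

for(i = 0; i < cfg->hsts[app].num_hosts; i++)
    printf("%s\n", cfg->hsts[app].hosts[i].pseudohost_name);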
Then call
char *jobfile;
char *jobname; /* usually same as basename of jobfile */
int background;
int ret_code;
int job_number;
job_number = submit_job(jobfile,jobname,background,&ret_code);
This call goes through many steps: contacting the QueMgr, getting a valid reserved job number, asking
the QueMgr to start a JobMgr, etc. and then sends all the config/rcf/GUI structure info to the JobMgr.
The JobMgr runs for the life of the job and is essentially the back-end of the GUI, transferring files
to/from the user submit machine to the analysis machine (the NasMgr, MarMgr, AbaMgr or
GenMgr process).
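The corresponding call in the shipped include file is api_submit_job(). A hedged sketch is shown below;
the job name, installation path and answer value are placeholders, and the application config/submit
structures are assumed to have been filled in as described above:

int job_number, jmgr_port, srtn;
char out_str[2048];

srtn = api_submit_job(qmgr_host, qmgr_port, "myjob", 1 /* background */,
                      &job_number, "/install/path/bin", &jmgr_port,
                      0 /* answer from api_check_job(), if asked */, out_str);
if(srtn != 0)
    printf("%s", out_str);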
Abort
For abort, the GUI then needs to query the QueMgr for a list of jobs, and then present this for the user
to select:
char *qmgr_host;
int qmgr_port;
JOBLIST *job_list;
int job_count;
job_count = get_job_list(qmgr_host,qmgr_port,job_list);
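Once the user has chosen an entry from the returned list, the abort itself is a single call; in the include
file this is api_abort_job(). A short sketch, where i is the index of the selected entry:

int srtn;
char out_str[2048];

srtn = api_abort_job(qmgr_host, qmgr_port, job_list[i].job_number,
                     job_list[i].job_user, out_str);
if(srtn != 0)
    printf("%s", out_str);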
To monitor a specific job, the GUI then needs to query the QueMgr for a list of jobs, and then present
this for the user to select:
char *qmgr_host;
int qmgr_port;
JOBLIST *job_list;
int job_count;
job_count = get_job_list(qmgr_host,qmgr_port,job_list);
Once a job is chosen, a simple call with a severity level returns data:
int job_number;
int severity_level;
int cpu, mem, disk;
int msg_count;
char **ret_string;
ret_string = monitor_job(job_number,severity_level,
&cpu,&mem,&disk,&msg_count);
The ret_string then contains a list (array of strings) of all messages the application stored (msgpop
type) that are <= the severity level input along with a resource usage string. The number of msgpop
messages is stored in msg_count, to be referenced like:
for(i=0;i<msg_count;i++)
printf("%s",ret_string[i]);
To monitor all hosts/queues, the GUI then needs to make a call and get back all QueMgr data for the
application chosen. This gets complex: there are five different types/groups of data available. For now let's
just assume only one type is wanted. They are:
1. FULL_LIST
2. JOB_LIST
3. QUEMGR_LOG
4. QUE_STATUS
5. HOST_STATS
Each has its own syntax and set of data. For the QUE_STATUS type, the call returns an array of
structures containing the hostname, number of running jobs, number of waiting jobs, maximum jobs
allowed to run on that host, for the given (input) application.
char *qmgr_host;
int qmgr_port;
int job_count;
QUESTAT *que_info;
que_info = get_que_stats(qmgr_host,qmgr_port,&job_count);
for(i=0;i<job_count;i++)
printf("%s %d %d %d\n",
que_info[i].hostname,que_info[i].num_running,
que_info[i].num_waiting,que_info[i].maxtsk);
For FULL_LIST:
See Include File.
For JOB_LIST:
See Include File.
For QUEMGR_LOG, this is simply a character string of the last 4096 bytes of the QueMgr log file:
See Include File.
For HOST_STATS:
See Include File.
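As an example of one of these, the QUEMGR_LOG data can be fetched with api_mon_que_log() from the
include file; a short sketch:

char *log_str;
char out_str[2048];

log_str = api_mon_que_log(qmgr_host, qmgr_port, out_str); /* last 4096 bytes of the QueMgr log */
if(log_str == NULL)
    printf("%s", out_str);
else{
    printf("%s", log_str);
    free(log_str);
}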
For a list of completed jobs, the GUI then needs to query the QueMgr for a list of jobs, and then present this
for the user to select.
char *qmgr_host;
int qmgr_port;
JOBLIST *job_list;
int job_count;
job_count = get_job_list(qmgr_host,qmgr_port,job_list);
Once a job is chosen, a simple call will return all the job data saved.
int job_number;
JOBLIST *comp_info;
comp_info = get_completedjob_stats(job_number);
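In the include file the list of completed jobs can be fetched with api_get_completedjob_list(), which
returns roughly the last 25 completed jobs known to the QueMgr; a short sketch:

Job_List *comp_list;
int job_count, i;
char out_str[2048];

comp_list = api_get_completedjob_list(qmgr_host, qmgr_port, &job_count, out_str);
for(i = 0; i < job_count; i++)
    printf("%d %s %s\n", comp_list[i].job_number,
           comp_list[i].job_name, comp_list[i].job_user);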
Remote Manager
On another level, a GUI could also connect to any RmtMgr and ask it to perform a command and return
the output from that command. This is essentially a remote shell (rsh) host command as on a Unix
machine. This functionality may come in handy when extending the Analysis Manager product, for
example to network-install other MSC software. The syntax for this is as follows:
char *ret_msg;
int ret_code;
char *rmtuser;
char *rmthost;
int rmtport;
char *command;
int background; /* FOREGROUND (0) or BACKGROUND (1) */
ret_msg = remote_command(rmtuser, rmthost,
rmtport, command, background, &ret_code);
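A sketch of such a call is shown below; the user, host, port and command strings are placeholders, and
FOREGROUND corresponds to 0 as noted above:

char *ret_msg;
int ret_code;

ret_msg = remote_command("someuser", "somehost", 1800 /* placeholder RmtMgr port */,
                         "uname -a", 0 /* FOREGROUND */, &ret_code);
if(ret_code == 0 && ret_msg != NULL)
    printf("%s\n", ret_msg);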
Structures
The JOBLIST structure contains these members:
int job_number;
char job_name[128];
char job_user[128];
char job_host[128];
char work_dir[256];
int port_number;
}PROGS;
typedef struct{
char pseudohost_name[NAME_LENGTH];
char host_name[NAME_LENGTH];
char exepath[PATH_LENGTH];
char rcpath[PATH_LENGTH];
int glob_index;
int sub_index;
char arch[NAME_LENGTH];
unsigned int address;
}HSTS;
typedef struct{
int num_hosts;
HSTS *hosts;
}HOST;
typedef struct{
char pseudohost_name[NAME_LENGTH];
char exepath[PATH_LENGTH];
char rcpath[PATH_LENGTH];
int type;
}APPS;
typedef struct{
char host_name[NAME_LENGTH];
int num_subapps;
APPS subapp[MAX_SUB_APPS];
int maxtsk;
char arch[NAME_LENGTH];
unsigned int address;
}TOT_HST;
typedef struct{
char queue_name1[NAME_LENGTH];
char queue_name2[NAME_LENGTH];
int glob_index;
}QUES;
typedef struct{
int num_queues;
QUES *queues;
}QUEUE;
typedef struct{
char queue_name1[NAME_LENGTH];
char queue_name2[NAME_LENGTH];
HOST sub_host[MAX_APPS];
}TOT_QUE;
typedef struct{
char file_sys_name[NAME_LENGTH];
int model;
int max_size;
int cur_free;
}FILES;
typedef struct{
char pseudohost_name[NAME_LENGTH];
int num_fsystems;
FILES *sub_fsystems;
}TOT_FSYS;
typedef struct{
char sepuser_name[NAME_LENGTH];
}SEP_USER;
typedef struct{
int QUE_TYPE;
char ADMIN[128];
int NUM_APPS;
unsigned int timestamp;
/* prog names */
PROGS progs[MAX_APPS];
/* host stuff */
HOST hsts[MAX_APPS];
int total_h;
TOT_HST *total_h_list;
/* que stuff */
char que_install_path[PATH_LENGTH];
char que_options[PATH_LENGTH];
int min_mem_value;
int min_disk_value;
int min_time_value;
QUEUE ques[MAX_APPS];
int total_q;
TOT_QUE *total_q_list;
/* file stuff */
int total_f;
TOT_FSYS *total_f_list;
/* separate user stuff */
int total_u;
SEP_USER *total_u_list;
}CONFIG;
#
unv_config.auto_mon_flag = 0
unv_config.time_type = 0
unv_config.delay_hour = 0
unv_config.delay_min = 0
unv_config.specific_hour = 0
unv_config.specific_min = 0
unv_config.specific_day = 0
unv_config.mail_on_off = 0
unv_config.mon_file_flag = 0
unv_config.copy_link_flag = 0
unv_config.job_max_time = 0
unv_config.project_name = nastusr
unv_config.orig_pre_prog =
unv_config.orig_pos_prog =
unv_config.exec_pre_prog =
unv_config.exec_pos_prog =
unv_config.separate_user = nastusr
unv_config.p3db_file =
#
nas_config.disk_master = 0
nas_config.disk_dball = 0
nas_config.disk_scratch = 0
nas_config.disk_units = 2
nas_config.scr_run_flag = 1
nas_config.save_db_flag = 0
nas_config.copy_db_flag = 0
nas_config.mem_req = 0
nas_config.mem_units = 0
nas_config.smem_units = 0
nas_config.extra_arg =
nas_config.num_hosts = 2
nas_host[hal9000.macsch.com].mem = 0
nas_host[hal9000.macsch.com].smem = 0
nas_host[daisy.macsch.com].mem = 0
nas_host[daisy.macsch.com].smem = 0
nas_config.default_host = nas_host_u
nas_config.default_queue = N/A
nas_submit.restart_type = 0
nas_submit.restart = 0
nas_submit.modfms = 0
nas_submit.nas_input_deck =
nas_submit.cold_jobname =
#
aba_config.copy_res_file = 1
aba_config.save_res_file = 0
aba_config.mem_req = 0
aba_config.mem_units = 0
aba_config.disk_units = 2
aba_config.space_req = 0
aba_config.append_fil = 0
aba_config.user_sub =
aba_config.use_standard = 1
aba_config.extra_arg =
aba_config.num_hosts = 2
aba_host[hal9000.macsch.com].num_cpus = 1
aba_host[hal9000.macsch.com].pre_buf = 0
aba_host[hal9000.macsch.com].pre_mem = 0
aba_host[hal9000.macsch.com].main_buf = 0
aba_host[hal9000.macsch.com].main_mem = 0
aba_host[daisy.macsch.com].num_cpus = 1
aba_host[daisy.macsch.com].pre_buf = 0
aba_host[daisy.macsch.com].pre_mem = 0
aba_host[daisy.macsch.com].main_buf = 0
aba_host[daisy.macsch.com].main_mem = 0
aba_config.default_host = aba_host_u
aba_config.default_queue = N/A
aba_submit.restart = 0
aba_submit.aba_input_deck =
aba_submit.restart_file =
#
gen_config[GENERIC].disk_units = 2
gen_config[GENERIC].space_req = 0
gen_config[GENERIC].mem_units = 2
gen_config[GENERIC].mem_req = 0
gen_config[GENERIC].cmd_line = jid=$JOBFILE mem=$MEM
gen_config[GENERIC].mon_file = $JOBNAME.log
gen_config[GENERIC].default_host = gen_host_u
gen_config[GENERIC].default_queue = N/A
gen_submit[GENERIC].gen_input_deck =
#
gen_config[GENERIC2].disk_units = 2
gen_config[GENERIC2].space_req = 0
gen_config[GENERIC2].mem_units = 2
gen_config[GENERIC2].mem_req = 0
gen_config[GENERIC2].cmd_line =
gen_config[GENERIC2].mon_file = $JOBNAME.log
gen_config[GENERIC2].default_host = gen_host2_nt
gen_config[GENERIC2].default_queue = N/A
gen_submit[GENERIC2].gen_input_deck =
#
Include File
This include file (api.h) must be included in any source file using the Analysis Manager API.
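As the TxtMgr example later in this section shows, AM_INITIALIZE is typically defined in exactly one
source file before including api.h (so that the api globals are instantiated there); all other source files
simply include api.h. A minimal sketch:

#define AM_INITIALIZE 1
#include "api.h"

int main(void)
{
    char out_str[2048];
    api_init(out_str); /* must be the first api_* call made by the application */
    return 0;
}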
#ifndef _AMAPI
#define _AMAPI
#ifdef __cplusplus
extern "C" {
#endif
#if defined(SGI5)
typedef int socklen_t;
#elif defined(DECA)
typedef size_t socklen_t;
#elif defined(HP700)
# if !defined(_ILP32) && !defined(_LP64)
typedef int socklen_t;
# endif
#elif defined(WINNT)
typedef int socklen_t;
#endif
#define RMTMGR_RESV_PORT 1800
#define QUEMGR_RESV_PORT 1900
int xxx_has_input_deck;
int hks_has_restart;
int has_extra_arg;
#else
extern int xxx_has_input_deck;
extern int hks_has_restart;
extern int has_extra_arg;
#endif
#define SOCKET_VERSION1
#define SOCKET_VERSION2
1
1
#ifndef PATH_LENGTH
# define PATH_LENGTH 400
#endif
#ifndef NAME_LENGTH
# define NAME_LENGTH 256
#endif
#ifndef MAX_STR_LEN
# define MAX_STR_LEN 256
#endif
#ifndef SOMAXCONN
# define SOMAXCONN 20
#endif
#ifdef ULTIMA
#define MSGPOP
1
#else
#ifdef MSGPOP
# undef MSGPOP
#endif
#define MSGPOPnotused
#endif
115
#define TOTAL_TO_QM_EVENTS 39
/* ---------------------------- */
/* all events to QueMgr are first (and sequential) */
#define TRANS_CONFIG 1
#define XX_QM_PING
#define QM_XX_PING
#define JM_QM_JOB_FINISHED 2
#define JM_QM_JOB_INIT 3
#define JM_QM_ADD_TASK 4
#define JM_QM_DB_UPDATE 19
#define JM_QM_CLEANUP_JOB 26
#define TM_QM_TASK_FINISHED 5
#define TM_QM_TASK_RUNNING 6
#define TM_QM_APP_FILES 25
#define PM_QM_REMOVE_JOB 7
#define PM_QM_FULL_LIST 8
#define PM_QM_JOB_LIST 9
#define PM_QM_QUEMGR_LOG 10
#define PM_QM_QUE_STATUS 11
#define PM_QM_JOB_SELECT_LIST 12
#define PM_QM_JOB_COMP_LIST 27
#define PM_QM_JOBNUM_REQ 13
#define PM_QM_SUSPEND_JOB 21
#define PM_QM_RESUME_JOB 22
#define PM_QM_CPU_LOADS 23
#define PM_QM_START_UP_JOBMGR 29
#define PA_QM_HALT_QUEMGR 14
#define PA_QM_DRAIN_HALT 15
#define PA_QM_DRAIN_RESTART 16
#define PA_QM_CHECK 17
#define PA_QM_GET_RECFG_TEXT 18
#define XX_QM_REQ_VERSION
20
24
28
32
#define QM_JM_TASK_FINISHED 40
#define QM_JM_TASK_RUNNING 41
#define QM_JM_KILL_TASK 42
#define QM_JM_ACCEPT_REQUEST 43
#define TM_JM_IN_PRE 44
#define TM_JM_RUN_INFO 45
#define TM_JM_IN_POS 46
#define TM_JM_GET_FILES 62
#define TM_JM_PUT_FILES 63
#define TM_JM_CFG_STRUCTS 65
#define TM_JM_DISK_INIT 66
#define TM_JM_LOG_INFO 69
#define TM_JM_PRE_PROG 96
#define TM_JM_POS_PROG 97
#define TM_JM_SUSPEND_JOB 77
#define TM_JM_RESUME_JOB 78
#define TM_JM_ADD_COMMENT 85
#define TM_JM_RM_FILE 86
#define TM_JM_RUNNING_FILE 87
#define TM_JM_MSG_BUFFERS 95
#define TM_PM_GET_FILES 108
#define XX_RM_STOP_NOW 74
#define XX_RM_RMT_CMD 81
#define XX_RM_RMT_AM_CMD 99
#define XX_RM_SEND_LOADS 82
#define XX_RM_KILL_PROCESS 83
#define XX_RM_REMOVE_FILE 84
#define XX_RM_REMOVE_AM_FILE 100
#define XX_RM_WRITE_FILE 75
#define XX_RM_PUT_FILE 109
#define XX_RM_PUL_FILE 110
#define XX_RM_PING_ME 111
#define XX_RM_GET_UNAME 112
#define XX_RM_EXIST_FILE 113
#define XX_RM_DIR_WRITEABLE 114
#define XX_RM_CAT_FILE 115
#define QM_PM_RET_CODE 47
#define QM_PM_FULL_LISTING 48
#define QM_PM_JOB_LIST 49
#define QM_PM_QUEUE_STATUS 50
#define QM_PM_QUEMGR_LOG 51
#define QM_PM_JOB_SEL_LIST 52
#define QM_PM_SEND_JOBNUM 53
#define QM_PM_NEEDS_RECFG 91
#define QM_PM_LOAD_INFO 92
#define QM_PM_JOBMGR_START 94
#define PM_JM_REQ_JOBMON 54
#define PM_JM_REQ_RUNNING_FILE 88
#define PM_JM_KILL_TRANSFERS 90
#define PM_JM_MSGDEST_REQ 101
#define PM_JM_STATS_REQ 102
#define PM_JM_LOGFILE_REQ 103
#define PM_JM_MON_INIT 104
#define PM_JM_LIST_RUN_FILES 105
#define PM_JM_REQ_RUNNING_FILE2 106
#define QM_PA_INFO 67
#define QM_PA_SEND_RECFG_TEXT 68
#define JM_PM_LOG_COMMENT 55
#define JM_PM_LOG_INIT_JOB 56
#define JM_PM_LOG_TASK_SUBMIT 57
#define JM_PM_LOG_TASK_RUN 58
#define JM_PM_LOG_TASK_COMPLETE 59
#define JM_PM_LOG_JOB_FINISHED 60
#define JM_PM_TIME_SYNC 61
#define JM_PM_LOG_LINE 70
#define JM_PM_FILE_PRESENT 71
#define JM_JM_PRE_FINISHED 72
#define JM_JM_POS_FINISHED 73
#define JM_TM_RECV_SETUP 64
#define JM_TM_GIVEME_FILE 89
#define JM_TM_GIVEME_FILE2 107
#define QM_XX_REQ_VERSION 76
#define QM_TM_SUSPEND_JOB 79
#define QM_TM_RESUME_JOB 80
#define QM_TM_KILL_JOB 93
#define MAX_ORGS 28
#define MAX_APPS 30
#define MAX_SUB_APPS 50
#define MAX_GEN_APPS 10
#define LOCAL
#define NFS
#define MSC_QUEUE
#define LSF_QUEUE
#define NQS_QUEUE
0
0
1
2
#define MSC_NASTRAN 1
#define HKS_ABAQUS 2
#define MSC_MARC 3
#define GENERAL 20
#define MAX_NUM_FILE_SYS 20
#define UNITS_WORDS 0
#define UNITS_64BIT_WORDS 99
#define UNITS_KB 1
#define UNITS_MB 2
#define UNITS_GB
#define MIN_MEM_REQ 1 /* (mb) */
#define MIN_DISK_REQ 1 /* (mb) */
#define MIN_TIME_REQ 99999 /* (min) */
#define JOB_SUBMITTED
#define JOB_QUEUED
#define JOB_RUNNING
1
2
#define JOB_SUCCESSFUL 0
#define JOB_ABORTED 1
#define JOB_FAILED 2
#define FILE_STILL_DOWNLOADING 1
#define FILE_DOWNLOAD_COMPLETE 0
/* ---------------------------- */
#define IC_CLEAN 0
#define IC_CANT_GET_ADDRESS -100
#define IC_CANT_OPEN_HOST_FILE -101
#define IC_CANT_ALLOC_MEM -102
#define IC_NOT_ENUF_HOSTS -103
#define IC_CANT_OPEN_QUE_FILE -104
#define IC_MISSING_FIELDS -105
#define IC_CANT_FIND_HOST -106
#define IC_ADD_QUE_ERROR -107
#define IC_NOT_ENUF_QUES -108
#define IC_CANT_FIND_QUE -109
#define IC_NO_QUE_TYPE -110
#define IC_UNKNOWN_QUE_TYPE -111
#define IC_NO_QUE_PATH -112
#define IC_CANT_FIND_MACH -113
#define IC_BAD_MAXTSK -114
#define IC_TOO_FEW_QUE_APPS -115
#define IC_BAD_APP_TYPE -116
#define IC_NOT_ENUF_SUB_HOSTS -117
#define IC_BAD_PORT -118
#define IC_NO_ADMIN -119
#define IC_BAD_ADMIN -120
#define ID_CLEAN 0
#define ID_CANT_OPEN_DISK_FILE -150
#define ID_CANT_GET_ADDRESS -151
#define ID_CANT_ALLOC_MEM -152
#define ID_CANT_FSTAT -153
#define ID_NOT_ENUF_FSYS -154
#define ID_NOT_ENUF_SUBS -155
#define ID_CANT_FIND_HOST -156
#define IU_CLEAN 0
#define IU_CANT_ALLOC_MEM -180
#define TIME_SYNC 99
#define LOG_COMMENT 100
#define LOG_INIT_JOB 101
#define LOG_TASK_SUBMIT 102
#define LOG_TASK_RUN 103
#define LOG_TASK_COMPLETE 104
#define LOG_JOB_FINISHED 105
#define LOG_DISK_INIT 106
#define LOG_DISK_UPDATE 107
#define LOG_CPU_UPDATE 108
#define LOG_DISK_SUMMARY 109
#define LOG_DISK_FS_SUMMARY 110
#define LOG_CPU_SUMMARY 111
#define LOG_LOGLINE 112
#define LOG_FILE_PRESENT 113
#define LOG_TASK_SUSPEND 114
#define LOG_TASK_RESUME 115
#define LOG_RUNNING_FILE 116
#define LOG_RUNNING_DONE 117
#define LOG_MEM_UPDATE 118
#define LOG_MEM_SUMMARY 119
/* ---------------------------- */
typedef struct{
char file_sys_name[PATH_LENGTH];
int disk_used_pct;
int disk_max_size_mb;
}JOB_FS_LIST;
typedef struct{
char filename[PATH_LENGTH];
int sizekb;
}FILE_LIST;
typedef struct{
char org_name[NAME_LENGTH];
char org_name2[NAME_LENGTH];
char host_name[NAME_LENGTH];
unsigned int addr;
int port;
}ORG;
typedef struct{
char prog_name[NAME_LENGTH];
char app_name[NAME_LENGTH];
int maxapptsk;
char args[PATH_LENGTH];
char extension[24];
}PROGS;
typedef struct{
char pseudohost_name[NAME_LENGTH];
char host_name[NAME_LENGTH];
char exepath[PATH_LENGTH];
char rcpath[PATH_LENGTH];
int glob_index;
int sub_index;
int maxapptsk;
char arch[NAME_LENGTH];
unsigned int address;
}HSTS;
typedef struct{
int num_hosts;
HSTS *hosts;
}HOST;
typedef struct{
char pseudohost_name[NAME_LENGTH];
char exepath[PATH_LENGTH];
char rcpath[PATH_LENGTH];
int maxapptsk;
int type;
}APPS;
typedef struct{
char host_name[NAME_LENGTH];
int num_subapps;
APPS subapp[MAX_SUB_APPS];
int maxtsk;
char arch[NAME_LENGTH];
unsigned int address;
}TOT_HST;
typedef struct{
char queue_name1[NAME_LENGTH];
char queue_name2[NAME_LENGTH];
int glob_index;
}QUES;
typedef struct{
int num_queues;
QUES *queues;
}QUEUE;
typedef struct{
char queue_name1[NAME_LENGTH];
char queue_name2[NAME_LENGTH];
HOST sub_host[MAX_APPS];
}TOT_QUE;
typedef struct{
char file_sys_name[NAME_LENGTH];
int model;
int max_size;
int cur_free;
}FILES;
typedef struct{
char pseudohost_name[NAME_LENGTH];
int num_fsystems;
FILES *sub_fsystems;
}TOT_FSYS;
typedef struct{
char sepuser_name[NAME_LENGTH];
}SEP_USER;
/* ---------------------------- */
typedef struct{
int QUE_TYPE;
char ADMIN[128];
int NUM_APPS;
int config_file_version;
unsigned int timestamp;
char prog_version[32];
/* prog names */
PROGS progs[MAX_APPS];
/* host stuff */
HOST hsts[MAX_APPS];
int total_h;
TOT_HST *total_h_list;
/* que stuff */
char que_install_path[PATH_LENGTH];
char que_options[PATH_LENGTH];
int min_mem_value;
int min_disk_value;
int min_time_value;
QUEUE ques[MAX_APPS];
int total_q;
TOT_QUE *total_q_list;
/* file stuff */
int total_f;
TOT_FSYS *total_f_list;
/* separate user stuff */
int total_u;
SEP_USER *total_u_list;
int qmgr_port;
int rmgr_port;
char qmgr_host[256];
}CONFIG;
/************************************************************************/
/* Defines for setting the different values of the config structure */
/************************************************************************/
#define CONFIG_VERSION
1
#define NO_JOB_MON
#define START_JOB_MON
#define SUBMIT_NOW
#define SUBMIT_DELAY
#define SUBMIT_SPECIFIC
0
1
#define SUNDAY
#define MONDAY
#define TUESDAY
#define WEDNESDAY
#define THURSDAY
#define FRIDAY
#define SATURDAY
#define MAIL_OFF
#define MAIL_ON
0
1
2
5
6
0
1
#define UI_MGR_MAIL 0
#define MASTER_MAIL 1
#define MAX_PROJ_LENGTH 16
typedef struct{
#ifndef CRAY
int pad1;
#endif
int version;
#ifndef CRAY
int pad2;
#endif
int job_mon_flag;
#ifndef CRAY
int pad3;
#endif
int time_type;
#ifndef CRAY
int pad4;
#endif
int delay_hour;
#ifndef CRAY
int pad5;
#endif
int delay_min;
#ifndef CRAY
int pad6;
#endif
int specific_hour;
#ifndef CRAY
int pad7;
#endif
int specific_min;
#ifndef CRAY
int pad8;
#endif
int specific_day;
#ifndef CRAY
int pad9;
#endif
int mail_on_off;
#ifndef CRAY
int pad10;
#endif
int bogus;
#ifndef CRAY
int pad11;
#endif
int mon_file_flag;
#ifndef CRAY
int pad12;
#endif
int copy_link_flag;
#ifndef CRAY
int pad13;
#endif
int job_max_time;
#ifndef CRAY
int pad14;
#endif
int bogus1;
char project_name[128];
char orig_pre_prog[256];
char orig_pos_prog[256];
char exec_pre_prog[256];
char exec_pos_prog[256];
char separate_user[128];
char p3db_file[256];
char email_addr[256];
} Universal_Config_Info;
/* ---------------------------- */
typedef struct {
char host_name[128];
int num_running;
int num_waiting;
int maxtsk;
char stat_str[64];
}Que_List;
typedef struct {
char msg[2048];
}Msg_List;
typedef struct {
int job_number;
char job_name[128];
char job_user[128];
char job_submit_host[128];
char am_host_name[128];
char job_proj[128];
char work_dir[256];
int application;
int port_number;
char job_run_host[128];
char sub_time_str[128];
int jobstatus;
}Job_List;
typedef struct {
char host_name[128];
int cpu_util;
int free_disk;
int avail_mem;
int status;
}Cpu_List;
/************************************************************************/
/*
*/
/* MSC.Nastran specific configuration structures.
*/
/*
*/
/************************************************************************/
#define DEFAULT_BUFFSIZE 8193
/*
** mck 6/12/98 - change to 0, so they dont get added unless you type something ...
**
#define CONFIG_DEFAULT_SMEM ( (DEFAULT_BUFFSIZE-1) * 100 )
#define CONFIG_DEFAULT_MEM 8000000
*/
#define CONFIG_DEFAULT_SMEM 0
#define CONFIG_DEFAULT_MEM 0
#define NAS_NONE 0
#define NO 0
#define YES 1
#define SINGLE 1
#define MULTI 2
#define DB_GET_NO_FILES 500
#define DB_GET_MST_P3_FILE 600
#define DB_GET_ALL_P3_FILES 650
#define DB_GET_MST_MK_FILE 700
#define DB_GET_ALL_MK_FILES 750
typedef struct {
#ifndef CRAY
int pad1;
#endif
int host_index;/* Global Host Index.*/
#ifndef CRAY
int pad2;
#endif
float mem;
/* stored as whatever. */
#ifndef CRAY
int pad3;
#endif
float smem;
/* stored as whatever. */
#ifndef CRAY
int pad4;
#endif
int num_cpus;/* Number cpus on machine.*/
int pad4;
#endif
int disk_dball;
/* stored as KB.
*/
#ifndef CRAY
int pad5;
#endif
int disk_scratch;
/* stored as KB.
*/
#ifndef CRAY
int pad6;
#endif
int disk_units;
/* see defines below
*/
#ifndef CRAY
int pad7;
#endif
int scr_run_flag;
#ifndef CRAY
int pad8;
#endif
int save_db_flag;
#ifndef CRAY
int pad9;
#endif
int copy_db_flag;
#ifndef CRAY
int pad10;
#endif
float mem_req;
/* stored as whatever */
#ifndef CRAY
int pad11;
#endif
int mem_units;
#ifndef CRAY
int pad12;
#endif
int smem_units;
#ifndef CRAY
int pad13;
#endif
int num_hosts;
#ifndef CRAY
int pad14;
#endif
int bogus;
char default_host[128];/* uihost_name is saved here*/
char default_queue[128];/* queue_name1 is saved here*/
char mem_req_str[64];
char extra_arg[256];
Nas_Config_Host *host_ptr;
} Nas_Configure_Info;
typedef struct {
#ifndef CRAY
int pad1;
#endif
int submit_index;
/* Index just within Nas List */
#ifndef CRAY
int pad2;
#endif
int specific_index;
/* see descrip below.*/
#ifndef CRAY
int pad3;
#endif
int restart_type;
#ifndef CRAY
int pad4;
#endif
int restart;
#ifndef CRAY
int pad5;
#endif
int modfms;
#ifndef CRAY
int pad6;
#endif
int bogus;
char nas_input_deck[256]; /* full path and filename */
char cold_jobname[256]; /* coldstart jobname */
} Nas_Submit_Info;
#define ABA_NONE
#define ABA_RESTART
#define ABA_CHECK
1
2
typedef struct {
#ifndef CRAY
int pad1;
#endif
int host_index;
/* Global Host Index.
*/
#ifndef CRAY
int pad2;
#endif
int num_cpus;/* Number cpus on machine.*/
#ifndef CRAY
int pad3;
#endif
float pre_buf;/* stored as whatever.*/
#ifndef CRAY
int pad4;
#endif
float pre_mem;/* stored as whatever.*/
#ifndef CRAY
int pad5;
#endif
float main_buf;/* stored as whatever.*/
#ifndef CRAY
int pad6;
#endif
float main_mem;/* stored as whatever.*/
char pre_buf_str[64];
char pre_mem_str[64];
char main_buf_str[64];
char main_mem_str[64];
char host_name[128];
/* Real Host Name (host_name) */
} Aba_Config_Host;
typedef struct {
#ifndef CRAY
int pad1;
#endif
int application_type; /* Should be set to HKS_ABAQUS */
#ifndef CRAY
int pad2;
#endif
int default_index;/* Index just within Aba List*/
#ifndef CRAY
int pad3;
#endif
int copy_res_file;
#ifndef CRAY
int pad4;
#endif
int save_res_file;
#ifndef CRAY
int pad5;
#endif
float mem_req;
/* stored as whatever */
#ifndef CRAY
int pad6;
#endif
int mem_units;/* One of the defines above*/
#ifndef CRAY
int pad7;
#endif
int disk_units;/* One of the defines above*/
#ifndef CRAY
int pad8;
#endif
int space_req;/* stored as KB.*/
#ifndef CRAY
int pad9;
#endif
int append_fil;/* 0 = no 1 = yes*/
#ifndef CRAY
int pad10;
#endif
int num_hosts;
#ifndef CRAY
int pad11;
#endif
int use_standard;
/* 0 = no 1 = yes
*/
char default_host[128];/* uihost_name is saved here */
char default_queue[128];/* queue_name1 is saved here */
char user_sub[128];
char mem_req_str[64];
char extra_arg[256];
Aba_Config_Host *host_ptr;
} Aba_Configure_Info;
typedef struct {
#ifndef CRAY
int pad1;
#endif
int submit_index;/* Index just within Aba list*/
#ifndef CRAY
int pad2;
#endif
int specific_index;/* see description below*/
#ifndef CRAY
int pad3;
#endif
int restart;
#ifndef CRAY
int pad4;
#endif
int bogus;
char aba_input_deck[256]; /* full path and filename */
char restart_file[256];
} Aba_Submit_Info;
/* The specific_index variable is only used when the queuing type is*/
/* not P3_QUEUE (i.e. it is LSF). If it is -1 then that means the*/
/* task can be submitted to any host in the defined queue. If the*/
/* specific_index has a value other than -1, then this is a index into*/
/* the host list (host list for the application, not global index)*/
/* of the specific host the task should be submited to.*/
/************************************************************************/
/*
*/
/* MSC.Marc specific configuration structures.
*/
/*
*/
/************************************************************************/
#define MAR_NONE
#define MAR_RESTART
typedef struct {
#ifndef CRAY
int pad1;
#endif
int host_index;
/* Global Host Index.
*/
#ifndef CRAY
int pad2;
#endif
int num_cpus;
/* Number cpus on machine. */
#ifndef CRAY
int pad3;
#endif
int bogus;
char host_name[128];
/* Real Host Name (host_name) */
} Mar_Config_Host;
typedef struct {
#ifndef CRAY
int pad1;
#endif
int application_type;
/* Should be set to MSC_MARC */
#ifndef CRAY
int pad2;
#endif
int default_index;
/* Index just within Mar List */
#ifndef CRAY
int pad3;
#endif
int disk_units;
/* One of the defines above */
#ifndef CRAY
int pad4;
#endif
int space_req;
/* stored as KB.
*/
#ifndef CRAY
int pad5;
#endif
int mem_units;
/* One of the defines above */
#ifndef CRAY
int pad6;
#endif
float mem_req;
/* stored as whatever */
#ifndef CRAY
int pad7;
#endif
int num_hosts;
#ifndef CRAY
int pad8;
#endif
int translate_input;
char default_host[128]; /* uihost_name is saved here */
char default_queue[128]; /* queue_name1 is saved here */
char cmd_line[256];
/* command line to run with */
char mon_file[256];
/* log file to monitor
*/
char mem_req_str[64];
Mar_Config_Host *host_ptr;
} Mar_Configure_Info;
typedef struct {
#ifndef CRAY
int pad1;
#endif
int submit_index;
#ifndef CRAY
int pad2;
#endif
int rid;
#ifndef CRAY
int pad3;
#endif
int pid;
#ifndef CRAY
int pad4;
#endif
int prog;
#ifndef CRAY
int pad5;
#endif
int user;
#ifndef CRAY
int pad6;
#endif
int save;
#ifndef CRAY
int pad7;
#endif
int vf;
#ifndef CRAY
int pad8;
#endif
int nprocd;
#ifndef CRAY
int pad9;
#endif
int host;
/* Flag: hostfile (-host hostfilename) */
#ifndef CRAY
int pad10;
#endif
int iam;
/* Flag: iam flag for licensing (-iam iamtag) */
#ifndef CRAY
int pad11;
#endif
int specific_index;
/* see description below
*/
/* All files should have full path and filename */
char datfile_name[256];
/* input deck */
char restart_name[256];
/* restart file */
char post_name[256];
/* post file */
char program_name[256];
/* program file */
char user_subroutine_name[256];
/* user subroutine file */
char viewfactor[256];
/* viewfactor file */
char hostfile[256];
/* hostfile */
char iamval[256];
/* iam licensing tag - no file involved */
} Mar_Submit_Info;
/* The specific_index variable is only used when the queuing type is*/
/* not P3_QUEUE (i.e. it is LSF). If it is -1 then that means the*/
/* task can be submitted to any host in the defined queue. If the*/
/* specific_index has a value other than -1, then this is a index into*/
/* the host list (host list for the application, not global index)*/
/* of the specific host the task should be submited to.*/
/************************************************************************/
/*
*/
/* GENERAL specific configuration structures.
*/
/*
*/
/************************************************************************/
typedef struct {
#ifndef CRAY
int pad1;
#endif
int host_index;
/* Global Host Index.
*/
#ifndef CRAY
int pad2;
#endif
int bogus;
char host_name[128];
/* Real Host Name (host_name) */
} Gen_Config_Host;
typedef struct {
#ifndef CRAY
int pad1;
#endif
int application_type;/* Should be set to GEN - RANGE */
#ifndef CRAY
int pad2;
#endif
int default_index;/* Index just within Gen List*/
#ifndef CRAY
int pad3;
#endif
int disk_units;/* One of the defines above*/
#ifndef CRAY
int pad4;
#endif
int space_req;/* stored as KB.*/
#ifndef CRAY
int pad5;
#endif
int mem_units;/* One of the defines above*/
#ifndef CRAY
int pad6;
#endif
float mem_req;/* stored as whatever */
#ifndef CRAY
int pad7;
#endif
int num_hosts;
#ifndef CRAY
int pad8;
#endif
int translate_input;
char default_host[128];/* uihost_name is saved here */
char default_queue[128];/* queue_name1 is saved here */
char cmd_line[256];
/* command line to run with */
char mon_file[256];
/* log file to monitor
*/
char mem_req_str[64];
Gen_Config_Host *host_ptr;
} Gen_Configure_Info;
typedef struct {
#ifndef CRAY
int pad1;
#endif
int submit_index;/* Index just within Gen list*/
#ifndef CRAY
int pad2;
#endif
int specific_index;/* see description below*/
char gen_input_deck[256]; /* full path and filename */
} Gen_Submit_Info;
/* The specific_index variable is only used when the queuing type is*/
/* not MSC_QUEUE (i.e. it is LSF). If it is -1 then that means the*/
/* task can be submitted to any host in the defined queue. If the*/
/* specific_index has a value other than -1, then this is a index into*/
/* the host list (host list for the application, not global index)*/
/* of the specific host the task should be submited to.*/
/* ---------------------------- */
/*
** api globals ...
*/
#ifdef AM_INITIALIZE
AM_EXTERN int
gbl_nwrk_timeout_secs = BLOCK_TIMEOUT;
AM_EXTERN int
api_use_this_host = 0;
#else
AM_EXTERN int
gbl_nwrk_timeout_secs;
AM_EXTERN int
api_use_this_host;
#endif
AM_EXTERN CONFIG
*cfg;
AM_EXTERN ORG
*org;
AM_EXTERN int
num_orgs;
AM_EXTERN Universal_Config_Info ui_config;
AM_EXTERN Nas_Configure_Info nas_config;
AM_EXTERN Nas_Submit_Info
nas_submit;
AM_EXTERN Aba_Configure_Info aba_config;
AM_EXTERN Aba_Submit_Info
aba_submit;
AM_EXTERN Mar_Configure_Info mar_config;
AM_EXTERN Mar_Submit_Info
mar_submit;
AM_EXTERN Gen_Configure_Info gen_config[MAX_GEN_APPS];
AM_EXTERN Gen_Submit_Info
gen_submit[MAX_GEN_APPS];
AM_EXTERN char
api_this_host[256];
AM_EXTERN char
api_user_name[256];
AM_EXTERN char
api_application_name[64];
AM_EXTERN int
api_application_index;
/*
* api functions ...
*/
/*
* init - MUST BE FIRST api_* call made by application ...
*/
extern int api_init(char *out_str);
/*
* just to set the global timeout for communication ...
*/
extern int
api_get_gbl_timeout();
extern int
api_set_gbl_timeout(int secs);
/*
* reads an org.cfg file if possible and builds the ORG struct for list of QueMgrs ...
*/
extern ORG
*api_read_orgs(char *dir,int *num_orgs,int *status);
/*
* contacts running QueMgr and builds cfg struct ...
*/
extern CONFIG *api_get_config(char *qmgr_host,int qmgr_port,int *status,char *out_str);
/*
* reads *.cfg files and builds cfg struct (No QueMgr process involved) ...
*/
extern CONFIG *api_read_config(CONFIG *cfg,char *path,char *orgname,int *status,char
*out_str);
/*
* reads *.cfg files (without building path) and builds cfg struct (No QueMgr process involved) ...
*/
extern CONFIG *api_read_config_fullpath(CONFIG *cfg,char *path,int *status,char *out_str);
/*
* writes *.cfg files from cfg struct (No QueMgr process involved) ...
*/
extern void api_write_config(CONFIG *cfg,char *path,char *orgname,int *status,char
*out_str);
/*
* tries to contact running QueMgr and check if timestamp is ok ...
* returns 0 if all ok ...
*/
extern int
api_ping_quemgr(char *qmgr_host,int qmgr_port,unsigned int timestamp,char
*out_str);
/*
* initializes UI config structs (nas, aba, gen[] submit and config) ...
*/
extern void api_init_uiconfig(CONFIG *cfg);
/*
* gets logged in user name
*/
extern char *api_getlogin(void);
/*
* checks on job data deck and returns possible question for UI to ask, setting answer for
* submit call below ...
*/
extern int api_check_job(char *ques_text,char *ans1_text,char *ans2_text,char *out_str);
/*
* submits job (needs filled in UI config and submit structs as well as global cfg struct) ...
*/
extern int
api_submit_job(char *qmgr_host,int qmgr_port,char *jobname,int background,int
*job_number,char *base_path,int *jmgr_port,int answer,char *out_str);
/*
* gets list of all running jobs from QueMgr ...
*/
extern Job_List *api_get_runningjob_list(char *qmgr_host,int qmgr_port,int *job_count,char
*out_str);
/*
* gets initial socket for later on api_mon_job_* calls ...
*/
*/
extern FILE_LIST *api_com_job_received_files_list(char *sub_host,char *mon_file,int
*num_files,char *out_str);
/*
* starts file download ...
*/
extern int api_download_file_start(int msg_sock,int job_number,char *filename,char *out_str);
/*
* checks on download file status ...
*/
extern int api_download_file_check(int job_number,char *filename,int *filesizekb);
/*
* returns all jobs for all hosts and apps from a running QueMgr ...
*/
extern Que_List *api_mon_que_full(char *qmgr_host,int qmgr_port,int *num_tsks,char
*out_str);
/*
* gets last 4k bytes of QueMgr log file ...
*/
extern char *api_mon_que_log(char *qmgr_host,int qmgr_port,char *out_str);
/*
* gets all hosts statistics ...
*/
extern Cpu_List *api_mon_que_cpu(char *qmgr_host,int qmgr_port,char *out_str);
/*
* gets list of last 25 or so completed jobs from QueMgr ...
*/
extern Job_List *api_get_completedjob_list(char *qmgr_host,int qmgr_port,int *job_count,char
*out_str);
/*
* abort job ...
*/
extern int
api_abort_job(char *qmgr_host,int qmgr_port,int job_number,char *job_user,char
*out_str);
/*
* reads rc file and overrides all UI settings found ..
*/
extern int
api_rcfile_read(char *rcfile,char *out_str);
/*
* writes rc file from UI settings ...
*/
extern int
api_rcfile_write( char *rcfile,char *out_str);
extern int
api_rcfile_write2(FILE *stream,int short_or_long);
/*
Example Interface
This is the actual source file of the TxtMgr, which uses the Analysis Manager API and the previously
shown api.h include file.
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#ifndef WINNT
# include <unistd.h>
# include <sys/time.h>
# include <sys/uio.h>
# include <sys/socket.h>
# include <netinet/in.h>
# include <netdb.h>
#else
# include <winsock.h>
#endif
#include <time.h>
#define AM_INITIALIZE 1
#include "api.h"
int dont_connect = 0;
int has_qmgr_host;
int has_qmgr_port;
int has_org;
int has_orgpath;
char lic_file[256];
char org_name[256];
char binpath[256];
char orgpath[256];
char qmgr_host[256];
int qmgr_port;
int rmgr_port;
int msg_sock = -1;
int msg_port = -1;
int msg_sock_job = -1;
int auto_startup;
char sys_rcf_file[256];
char usr_rcf_file[256];
int has_cmd_rcf;
char cmd_rcf_file[256];
/* ==================== */
#define SUBMIT 1
#define ABORT 2
#define WATCHJOB 3
#define WATCHQUE_LOG 4
#define WATCHQUE_FULL 5
#define WATCHQUE_CPU 6
#define LISTCOMP 7
#include <stdio.h>
#define RCFILEWRITE 8
#define ADMINTEST 9
#define RECONFIG 10
#define QUIT 11 /* must be highest defined number type */
#define NOTVALID 9999
/* ==================== */
#ifdef WINNT
BOOL console_event_func(DWORD dwEvent)
{
if(dwEvent == CTRL_LOGOFF_EVENT)
return TRUE;
#ifdef DEBUG
fprintf(stderr,"\nbye ...");
#endif
fprintf(stderr,"\n");
api_release_license();
#ifdef WINNT
WSACleanup();
#endif
return FALSE;
}
#endif
/* ==================== */
void leafname(char *input_string, char *output_string)
{
int string_length;
int i;
char temp_string[256];
int found;
/*********************************************************************/
/* First get rid of the leading path (if any).                       */
/*********************************************************************/
string_length = strlen(input_string);
if(string_length < 1){
output_string[0] = '\0';
return;
}
found = 0;
for(i = string_length - 1; i >= 0; i--){
if( (input_string[i] == '/') || (input_string[i] == '\\') ){
found = 1;
strcpy(temp_string, &input_string[i + 1]);
break;
}
}
if(found == 0)
strcpy(temp_string, input_string);
/*********************************************************************/
/* Now get rid of the extension (if any).                            */
/*********************************************************************/
string_length = strlen(temp_string);
if(string_length < 1){
output_string[0] = '\0';
return;
}
for(i = string_length - 1; i >= 0; i--){
if( (temp_string[i] == '.') && (i != 0) ){
temp_string[i] = '\0';
strcpy(output_string, temp_string);
return;
}
}
strcpy(output_string, temp_string);
return;
}
/* ==================== */
int submit_job(void)
{
int background;
int i;
int lenc;
int submit_index;
int job_number;
int jmgr_port;
char job_name[256];
int mem;
char job_fullname[256];
int srtn;
char out_str[2048];
int ans;
char ques_text[512];
char ans1_text[32];
char ans2_text[32];
background = 0;
/*
** if not auto_startup, ask for details ...
*/
if(auto_startup == 0){
background = 1;
/*
** ask jobname ...
*/
printf(\nEnter job name: );
scanf(%s,job_name);
/*
** ask memory ...
*/
printf(\nEnter memory (in set units): );
scanf(%d,&mem);
/*
** print list of hosts from QueMgr ...
** and ask for which to submit to ...
*/
if(cfg->QUE_TYPE == MSC_QUEUE){
printf(\nhosts:\n);
printf(index name\n);
printf(------------\n);
for(i=0;i<cfg->hsts[api_application_index-1].num_hosts;i++){
printf(%-5d %s\n,i+1,cfg->hsts[api_application_index-1].hosts[i].pseudohost_name);
}
printf(\nEnter host index: );
scanf(%d,&submit_index);
submit_index--;
printf(\n);
if( (submit_index < 0) || (submit_index >= cfg->hsts[api_application_index-1].num_hosts) ){
printf(Error, invalid index\n);
return 1;
}
}else{
printf(\nqueues:\n);
printf(index name\n);
printf(------------\n);
for(i=0;i<cfg->ques[api_application_index-1].num_queues;i++){
printf(%-5d %s -> %s\n,i+1,cfg->ques[api_application_index-1].queues[i].queue_name1,
cfg->ques[api_application_index-1].queues[i].queue_name2);
}
printf(\nEnter queue index: );
scanf(%d,&submit_index);
submit_index--;
printf(\n);
if( (submit_index < 0) || (submit_index >= cfg->ques[api_application_index-1].num_queues) ){
printf(Error, invalid index\n);
return 1;
}
/*
** set up config/submit struct info ...
*/
strcpy(job_fullname,job_name);
lenc = (int)strlen(job_fullname);
for(i=0;i<lenc;i++){
if(job_fullname[i] == \\)
job_fullname[i] = /;
}
leafname(job_fullname,job_name);
if(api_application_index == MSC_NASTRAN){
sprintf(nas_submit.nas_input_deck,%s,job_fullname);
nas_config.mem_req = (float)mem;
nas_submit.submit_index = submit_index;
}else if(api_application_index == HKS_ABAQUS){
sprintf(aba_submit.aba_input_deck,%s,job_fullname);
aba_config.mem_req = (float)mem;
aba_submit.submit_index = submit_index;
}else if(api_application_index == MSC_MARC){
sprintf(mar_submit.datfile_name,%s,job_fullname);
mar_config.mem_req = (float)mem;
mar_submit.submit_index = submit_index;
}else{
sprintf(gen_submit[api_application_index-GENERAL].gen_input_deck,%s,job_fullname);
gen_config[api_application_index-GENERAL].mem_req = (float)mem;
gen_submit[api_application_index-GENERAL].submit_index = submit_index;
}
}else{
/*
** leave all config and submit struct settings alone, as
** the rcf/override ASSUME to have it all correct ...
** (just get job_name for use below ...)
*/
if(api_application_index == MSC_NASTRAN){
leafname(nas_submit.nas_input_deck,job_name);
}else if(api_application_index == HKS_ABAQUS){
leafname(aba_submit.aba_input_deck,job_name);
}else if(api_application_index == MSC_MARC){
leafname(mar_submit.datfile_name,job_name);
}else{
leafname(gen_submit[api_application_index-GENERAL].gen_input_deck,job_name);
}
}
ans = NO;
srtn = api_check_job(ques_text,ans1_text,ans2_text,out_str);
if(srtn < 0){
printf(%s,out_str);
return srtn;
}
if(srtn > 0){
redo:
printf(%s\n,ques_text);
printf(\nAnswer:\n);
printf(-------\n);
printf(0 - %s\n,ans1_text);
printf(1 - %s\n,ans2_text);
printf(\nanswer: );
scanf(%d,&ans);
printf(\n);
if( (ans != NO) && (ans != YES) ){
printf(Error, invalid answer\n\n);
goto redo;
}
}
srtn = api_submit_job(qmgr_host,qmgr_port,job_name,background,&job_number,binpath,
&jmgr_port,ans,out_str);
if(out_str[0] != \0){
printf(%s,out_str);
}
if( (srtn == 0) && (background == 1) ){
/*
** right away get monitor socket ...
*/
msg_sock = api_mon_job_init(api_this_host,jmgr_port,&msg_port,out_str);
if(msg_sock < 0){
msg_port = -1;
msg_sock_job = -1;
printf(%s,out_str);
}else{
msg_sock_job = job_number;
}
}
}
return srtn;
/* ==================== */
int abort_job(void)
{
int srtn;
Job_List *jr_ptr = NULL;
int num_running_jobs;
int job_number;
char j_numstr[100];
int found;
char job_user[256];
char job_name[256];
char proj_name[256];
int i;
char out_str[2048];
jr_ptr = api_get_runningjob_list(qmgr_host,qmgr_port,&num_running_jobs,out_str);
if(num_running_jobs == 0){
printf(\nNo active jobs found\n);
return 0;
}
if( (num_running_jobs < 0) || (jr_ptr == NULL) ){
printf(%s,out_str);
if(jr_ptr != NULL)
free(jr_ptr);
return 1;
}
job_number = -1;
if(auto_startup == 0){
/*
** present list to user ...
*/
printf(\nRunning jobs ....\n\n);
printf(num jobname
jobuser
project
amhost
runhost
subtime\n);
printf(----------------------------------------------------------------------------------------------------------\n);
for(i=0;i<num_running_jobs;i++){
printf(%-4d %-20s %-20s %-20s %-20s %-20s %-20s\n,jr_ptr[i].job_number,
jr_ptr[i].job_name,
jr_ptr[i].job_user,
jr_ptr[i].job_proj,
jr_ptr[i].am_host_name,
jr_ptr[i].job_run_host,
jr_ptr[i].sub_time_str);
}
for(i=0;i<100;i++)
j_numstr[i] = \0;
printf(\nEnter job number: );
scanf(%s,j_numstr);
if( (j_numstr[0] == q) || (j_numstr[0] == Q) || (j_numstr[0] == 0) ){
free(jr_ptr);
return 0;
}
sscanf(j_numstr,%d,&job_number);
found = 0;
for(i=0;i<num_running_jobs;i++){
if(job_number == jr_ptr[i].job_number){
found = 1;
break;
}
}
if(!found){
printf(Error, job number %d not in list\n,job_number);
free(jr_ptr);
return 1;
}
printf(\n);
}else{
if(api_application_index == MSC_NASTRAN){
leafname(nas_submit.nas_input_deck,job_name);
}else if(api_application_index == HKS_ABAQUS){
leafname(aba_submit.aba_input_deck,job_name);
}else if(api_application_index == MSC_MARC){
leafname(mar_submit.datfile_name,job_name);
}else{
leafname(gen_submit[api_application_index-GENERAL].gen_input_deck,job_name);
}
strcpy(proj_name,ui_config.project_name);
/*
** search list for match and set job_number ...
*/
job_number = -1;
for(i=0;i<num_running_jobs;i++){
if(strcmp(jr_ptr[i].job_name,job_name) == 0){
if(strcmp(jr_ptr[i].job_proj,proj_name) == 0){
job_number = jr_ptr[i].job_number;
break;
}
}
}
}
strcpy(job_user,api_user_name);
srtn = api_abort_job(qmgr_host,qmgr_port,job_number,job_user,out_str);
if(out_str[0] != \0){
printf(%s,out_str);
free(jr_ptr);
}
return srtn;
/* ==================== */
int watch_job(void)
{
Job_List *jr_ptr = NULL;
int num_running_jobs;
int check;
char job_host[128];
int job_port = 0;
char j_numstr[100];
int found;
int srtn;
char *log_str;
char job_name[256];
char proj_name[256];
char sfile[256];
int i;
int job_number;
int sev_level;
char out_str[2048];
int num_msgs;
Msg_List *msg_ptr = NULL;
int cpu, pct_cpu;
int mem, pct_mem;
int dsk, pct_dsk;
int elapsed;
int status;
FILE_LIST *file_list = NULL;
int num_files = 0;
int file_index;
int sizekb;
int num_fs;
JOB_FS_LIST *job_fs_list;
extern void get_leaf_and_extention(char *,char *);
jr_ptr = api_get_runningjob_list(qmgr_host,qmgr_port,&num_running_jobs,out_str);
if(num_running_jobs == 0){
printf(\nNo active jobs found\n);
return 0;
}
if( (num_running_jobs < 0) || (jr_ptr == NULL) ){
printf(%s,out_str);
if(jr_ptr != NULL)
free(jr_ptr);
return 1;
}
job_number = -1;
if(auto_startup == 0){
/*
** present list to user ...
*/
printf(\nRunning jobs ....\n\n);
printf(num jobname
jobuser
project
amhost
runhost
subtime\n);
printf(----------------------------------------------------------------------------------------------------------\n);
for(i=0;i<num_running_jobs;i++){
printf(%-4d %-20s %-20s %-20s %-20s %-20s %-20s\n,jr_ptr[i].job_number,
jr_ptr[i].job_name,
jr_ptr[i].job_user,
jr_ptr[i].job_proj,
jr_ptr[i].am_host_name,
jr_ptr[i].job_run_host,
jr_ptr[i].sub_time_str);
}
for(i=0;i<100;i++)
j_numstr[i] = \0;
printf(\nEnter job number: );
scanf(%s,j_numstr);
if( (j_numstr[0] == q) || (j_numstr[0] == Q) || (j_numstr[0] == 0) ){
free(jr_ptr);
return 0;
}
sscanf(j_numstr,%d,&job_number);
found = 0;
for(i=0;i<num_running_jobs;i++){
if(job_number == jr_ptr[i].job_number){
job_port = jr_ptr[i].port_number;
strcpy(job_host,jr_ptr[i].job_submit_host);
found = 1;
break;
}
}
if(!found){
printf(Error, job number %d not in list\n,job_number);
free(jr_ptr);
return 1;
}
}else{
if(api_application_index == MSC_NASTRAN){
leafname(nas_submit.nas_input_deck,job_name);
}else if(api_application_index == HKS_ABAQUS){
leafname(aba_submit.aba_input_deck,job_name);
}else if(api_application_index == MSC_MARC){
leafname(mar_submit.datfile_name,job_name);
}else{
leafname(gen_submit[api_application_index-GENERAL].gen_input_deck,job_name);
}
strcpy(proj_name,ui_config.project_name);
/*
** search list for match and set job_number ...
*/
job_number = -1;
for(i=0;i<num_running_jobs;i++){
if(strcmp(jr_ptr[i].job_name,job_name) == 0){
if(strcmp(jr_ptr[i].job_proj,proj_name) == 0){
job_port = jr_ptr[i].port_number;
strcpy(job_host,jr_ptr[i].job_submit_host);
break;
}
}
}
if(job_number < 0){
printf(Error, job name %s not in list\n,job_name);
free(jr_ptr);
return 1;
}
}
free(jr_ptr);
#ifdef DEBUG
fprintf(stderr,posa\n);
#endif
/*
** get msg socket if needed ...
*/
if( (msg_sock < 0) || (msg_sock_job != job_number) ){
#ifdef DEBUG
fprintf(stderr,posa1\n);
#endif
msg_sock = api_mon_job_init(job_host,job_port,&msg_port,out_str);
if(msg_sock < 0){
msg_port = -1;
msg_sock_job = -1;
printf(%s,out_str);
return 1;
}else{
msg_sock_job = job_number;
}
}
#ifdef DEBUG
fprintf(stderr,posb\n);
#endif
/*
** get severity if not auto ...
*/
sev_level = 3;
if(auto_startup == 0){
#ifdef MSGPOP
if(api_application_index == MSC_NASTRAN){
printf(Enter message severity level >=: );
scanf(%d,&sev_level);
printf(\n);
}
if(sev_level < 0) sev_level = 0;
if(sev_level > 3) sev_level = 3;
#endif
}
#ifdef DEBUG
fprintf(stderr,posc\n);
#endif
/*
** get monitor info ...
*/
msg_ptr = api_mon_job_msgs(msg_sock,api_this_host,msg_port,sev_level,&num_msgs,out_str);
if(num_msgs < 0){
printf(%s,out_str);
return 2;
}
#ifdef DEBUG
fprintf(stderr,posd\n);
#endif
if(msg_ptr == NULL){
printf(%s,out_str);
return 3;
}else if(num_msgs == 0){
printf(\nNo messages at this time ...\n\n);
}else{
/*
** mgs format is severity@sevbuf@msgtxt ... sevbuf is string NULL when severity=0
*/
for(i=0;i<num_msgs-1;i++){
printf( %s\n,msg_ptr[i].msg);
}
free(msg_ptr);
#ifdef DEBUG
fprintf(stderr,"pose\n");
#endif
job_fs_list =
api_mon_job_stats(msg_sock,api_this_host,msg_port,&cpu,&pct_cpu,&mem,&pct_mem,
&dsk,&pct_dsk,&elapsed,&status,&num_fs,&srtn,out_str);
if(srtn != 0){
printf("%s",out_str);
}else{
printf("job stats:\n");
if(status == JOB_SUBMITTED){
printf("cpu=%d, %%cpu=%d, mem=%d, %%mem=%d, disk=%d, %%disk=%d, elapsed=%d, status=%s\n",
cpu,pct_cpu,mem,pct_mem,dsk,pct_dsk,elapsed,"submitted");
}else if(status == JOB_QUEUED){
printf("cpu=%d, %%cpu=%d, mem=%d, %%mem=%d, disk=%d, %%disk=%d, elapsed=%d, status=%s\n",
cpu,pct_cpu,mem,pct_mem,dsk,pct_dsk,elapsed,"queued");
}else if(status == JOB_RUNNING){
printf("cpu=%d, %%cpu=%d, mem=%d, %%mem=%d, disk=%d, %%disk=%d, elapsed=%d, status=%s\n",
cpu,pct_cpu,mem,pct_mem,dsk,pct_dsk,elapsed,"running");
}else{
printf("cpu=%d, %%cpu=%d, mem=%d, %%mem=%d, disk=%d, %%disk=%d, elapsed=%d, status=%s\n",
cpu,pct_cpu,mem,pct_mem,dsk,pct_dsk,elapsed,"unknown");
}
/*
printf("total num filesys = %d\n",num_fs);
for(i=0;i<num_fs;i++){
fprintf(stdout," %s max=%d usage=%d\n",job_fs_list[i].file_sys_name,
job_fs_list[i].disk_max_size_mb,
job_fs_list[i].disk_used_pct);
}
*/
printf("\n");
#ifdef DEBUG
fprintf(stderr,"posf\n");
#endif
log_str = api_mon_job_mon(msg_sock,api_this_host,msg_port,out_str);
if(log_str == NULL){
printf("%s",out_str);
}else{
printf("mon file contents:\n");
printf("%s",log_str);
free(log_str);
}
file_list =
api_mon_job_running_files_list(msg_sock,api_this_host,msg_port,&num_files,out_str);
#ifdef DEBUG
printf("api_mon_job_running_files_list: num_files = %d\n",num_files);
#endif
if(num_files < 0){
printf("%s",out_str);
return 4;
}
if(num_files == 0)
return 0;
for(i=0;i<num_files;i++){
if(i == 0){
printf("\ndownloadable files: (use q to quit)\n");
printf("index     job file                       size (kb)\n");
printf("--------------------------------------------------\n");
}
get_leaf_and_extention(file_list[i].filename,sfile);
printf("%-10d%-30s %d\n",i+1,sfile,file_list[i].sizekb);
}
}
for(i=0;i<100;i++)
j_numstr[i] = '\0';
printf("\nEnter file index to download: ");
scanf("%s",j_numstr);
if( (j_numstr[0] == 'q') || (j_numstr[0] == 'Q') || (j_numstr[0] == '0') ){
free(file_list);
return 0;
}
sscanf(j_numstr,"%d",&file_index);
if(file_index == 0){
free(file_list);
return 0;
}
check = 0;
if(file_index < 0){
check = 1;
file_index *= -1;
}
if(file_index > num_files){
printf("invalid index\n");
free(file_list);
return 5;
}
if(check){
srtn = api_download_file_check(job_number,file_list[file_index-1].filename,&sizekb);
#ifdef DEBUG
printf("check returns %d\n",srtn);
#endif
if(srtn == FILE_STILL_DOWNLOADING){
printf("File %s is still being transferred\n",file_list[file_index-1].filename);
}else if(srtn == FILE_DOWNLOAD_COMPLETE){
printf("File %s transfer complete !\n",file_list[file_index-1].filename);
}
}else{
srtn = api_download_file_start(msg_sock,job_number,file_list[file_index-1].filename,out_str);
if(srtn != 0){
printf("File download (%s) start failed, error = %d (%s)",
file_list[file_index-1].filename,srtn,out_str);
}
}
free(file_list);
}
return 0;
}
/* ==================== */
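/*
** watch_que() - query the QueMgr for queue information, depending on the
** selection passed in: WATCHQUE_LOG prints the QueMgr log file,
** WATCHQUE_FULL lists running/queued/maximum tasks and status per host,
** and WATCHQUE_CPU lists cpu utilization, available memory and free disk
** per host.
*/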
int watch_que(int which)
{
int i;
int num_tasks;
Que_List *ql_ptr = NULL;
char out_str[2048];
char *log_str = NULL;
Cpu_List *cpu_ptr = NULL;
if(which == WATCHQUE_LOG){
log_str = api_mon_que_log(qmgr_host,qmgr_port,out_str);
if(log_str == NULL){
printf("%s",out_str);
return 1;
}
printf("\n");
printf("%s",log_str);
free(log_str);
return 0;
}else if(which == WATCHQUE_FULL){
ql_ptr = api_mon_que_full(qmgr_host,qmgr_port,&num_tasks,out_str);
if( (num_tasks < 0) || (ql_ptr == NULL) ){
if(ql_ptr != NULL)
free(ql_ptr);
printf("%s",out_str);
return 1;
}
if(num_tasks == 0){
printf("\nNo active jobs found\n");
return 0;
}
printf("\nQueue stats for all hosts/apps\n");
printf("\n%-35s%-6s%-6s%-6s %s\n","hostname","run","que","max","status");
printf("------------------------------------------------------------\n");
for(i=0;i<num_tasks;i++){
printf("%-35s%-6d%-6d%-6d %s\n",
ql_ptr[i].host_name,ql_ptr[i].num_running,ql_ptr[i].num_waiting,
ql_ptr[i].maxtsk,ql_ptr[i].stat_str);
}
free(ql_ptr);
return 0;
}else if(which == WATCHQUE_CPU){
cpu_ptr = api_mon_que_cpu(qmgr_host,qmgr_port,out_str);
if(cpu_ptr == NULL){
printf("%s",out_str);
return 1;
}
printf("\nQueue load stats for all hosts/apps\n");
printf("\n%-35s%-12s%-12s%-12s\n","hostname","%cpu util","avail mem","avail disk");
printf("---------------------------------------------------------------------\n");
for(i=0;i<cfg->total_h;i++){
printf("%-35s%-12d%-12d%-12d\n",
cpu_ptr[i].host_name,cpu_ptr[i].cpu_util,cpu_ptr[i].avail_mem,cpu_ptr[i].free_disk);
}
free(cpu_ptr);
return 0;
}else{
printf("\nError, invalid selection\n");
return 1;
}
}
/*NOTREACHED*/
/* ==================== */
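/*
** list_complete() - list completed jobs known to the QueMgr, prompt for a
** job number, then use that job's .mon file to print general info,
** resource statistics, the mon file contents, and any received files.
*/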
int list_complete(void)
{
int num_completed_jobs;
Job_List *jc_ptr = NULL;
Job_List *jc_ptr2 = NULL;
char *mon_msgs = NULL;
int num_files;
FILE_LIST *fl_list = NULL;
int srtn;
int i;
int job_number;
char j_numstr[100];
int found;
char out_str[2048];
char sfile[256];
char mon_file[256];
int cpu_secs, pct_cpu_avg, pct_cpu_max;
int mem_kbts, pct_mem_avg, pct_mem_max;
int dsk_mbts, pct_dsk_avg, pct_dsk_max;
int elapsed,status;
int num_fs;
JOB_FS_LIST *job_fs_list;
extern void get_leaf_and_extention(char *,char *);
jc_ptr = api_get_completedjob_list(qmgr_host,qmgr_port,&num_completed_jobs,out_str);
if(num_completed_jobs == 0){
printf("\nNo completed jobs found\n");
return 0;
}
if( (num_completed_jobs < 0) || (jc_ptr == NULL) ){
printf("%s",out_str);
if(jc_ptr != NULL)
free(jc_ptr);
return 1;
}
job_number = -1;
if(auto_startup == 0){
/*
** present list to user ...
*/
printf("\nCompleted jobs ....\n\n");
printf("num  jobname              username             subtime\n");
printf("------------------------------------------------------\n");
for(i=0;i<num_completed_jobs;i++){
printf("%-4d %-20s %-20s %-20s\n",jc_ptr[i].job_number,
jc_ptr[i].job_name,jc_ptr[i].job_user,jc_ptr[i].sub_time_str);
}
printf("\nEnter job number: ");
scanf("%s",j_numstr);
if( (j_numstr[0] == 'q') || (j_numstr[0] == 'Q') || (j_numstr[0] == '0') ){
free(jc_ptr);
return 0;
}
sscanf(j_numstr,"%d",&job_number);
printf("\n");
}else{
printf("\nError, cannot list completed jobs in batch mode.\n");
free(jc_ptr);
return 1;
}
found = -1;
for(i=0;i<num_completed_jobs;i++){
if(job_number == jc_ptr[i].job_number){
found = i;
break;
}
}
if(found < 0){
printf("Error, job number %d not in list\n",job_number);
free(jc_ptr);
return 1;
}
printf("Job name:             %s\n",jc_ptr[found].job_name);
printf("Job user:             %s\n",jc_ptr[found].job_user);
printf("Job originating host: %s\n",jc_ptr[found].job_submit_host);
printf("Job originating dir: %s\n",jc_ptr[found].work_dir);
printf("Job AM hostname:      %s\n",jc_ptr[found].am_host_name);
printf("Job run host:         %s\n",jc_ptr[found].job_run_host);
if(jc_ptr[found].jobstatus == JOB_SUCCESSFUL)
printf("Job complete status: success\n");
else if(jc_ptr[found].jobstatus == JOB_ABORTED)
printf("Job complete status: aborted\n");
else if(jc_ptr[found].jobstatus == JOB_FAILED)
printf("Job complete status: failed\n");
else
printf("Job complete status: unknown\n");
/* ------------ */
sprintf(mon_file,"%s/%s.mon",jc_ptr[found].work_dir,jc_ptr[found].job_name);
jc_ptr2 = api_com_job_gen(jc_ptr[found].job_submit_host,mon_file,out_str);
if(jc_ptr2 == NULL){
printf("%s",out_str);
free(jc_ptr);
return 1;
}
/* check job number ... */
if(jc_ptr[found].job_number != jc_ptr2->job_number){
printf("\nJob numbers do not match -\n");
printf(" assuming newer job with same .mon file is currently running\n");
printf(" so no additional job info is available\n");
free(jc_ptr);
free(jc_ptr2);
return 1;
}
printf("\ngeneral info:\n");
printf("num  jobname              jobuser              amhost               runhost              subtime                        status\n");
printf("-----------------------------------------------------------------------------------------------------------\n");
printf("%-4d %-20s %-20s %-20s %-20s %-30s %-6d\n",jc_ptr2->job_number,
jc_ptr2->job_name,
jc_ptr2->job_user,
jc_ptr2->am_host_name,
jc_ptr2->job_run_host,
jc_ptr2->sub_time_str,
jc_ptr2->jobstatus);
/* ------------ */
job_fs_list = api_com_job_stats(jc_ptr[found].job_submit_host,mon_file,
&cpu_secs,&pct_cpu_avg,&pct_cpu_max,
&mem_kbts,&pct_mem_avg,&pct_mem_max,
&dsk_mbts,&pct_dsk_avg,&pct_dsk_max,
&elapsed,&status,
&num_fs,&srtn,out_str);
printf("\njob stats:\n");
printf("cpu(sec)=%d, %%cpu(avg)=%d, %%cpu(max)=%d\n",cpu_secs,pct_cpu_avg,pct_cpu_max);
printf("mem(kb) =%d, %%mem(avg)=%d, %%mem(max)=%d\n",mem_kbts,pct_mem_avg,pct_mem_max);
printf("dsk(mb) =%d, %%dsk(avg)=%d, %%dsk(max)=%d\n",dsk_mbts,pct_dsk_avg,pct_dsk_max);
printf("elapsed =%d, status=%d\n",elapsed,status);
/*
printf("total num filesys = %d\n",num_fs);
for(i=0;i<num_fs;i++){
fprintf(stdout," %s max=%d usage=%d\n",job_fs_list[i].file_sys_name,
job_fs_list[i].disk_max_size_mb,
job_fs_list[i].disk_used_pct);
}
printf("\n");
*/
if( (num_fs > 0) && (job_fs_list != NULL) ){
free(job_fs_list);
}
/* ------------ */
mon_msgs = api_com_job_mon(jc_ptr[found].job_submit_host,mon_file,out_str);
if(mon_msgs == NULL){
printf("Error, unable to determine mon file msgs\n%s\n",out_str);
free(jc_ptr);
free(jc_ptr2);
return 1;
}
printf("\nmon file contents:\n%s",mon_msgs);
free(mon_msgs);
/* ------------ */
fl_list =
api_com_job_received_files_list(jc_ptr[found].job_submit_host,mon_file,&num_files,out_str);
#ifdef DEBUG
printf("api_com_job_received_files_list: num_files = %d\n",num_files);
#endif
if(num_files < 0){
printf("%s",out_str);
free(jc_ptr);
free(jc_ptr2);
return 4;
}
if(num_files > 0){
for(i=0;i<num_files;i++){
if(i == 0){
printf("\nviewable files:\n");
printf("index     job file                       size (kb)\n");
printf("--------------------------------------------------\n");
}
get_leaf_and_extention(fl_list[i].filename,sfile);
printf("%-10d%-30s %d\n",i+1,sfile,fl_list[i].sizekb);
}
free(fl_list);
}
/* ------------ */
free(jc_ptr);
free(jc_ptr2);
return 0;
}
/* ==================== */
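/*
** write_rcfile() - save the current settings to the rc file named with
** the -rcf command-line option, if one was given.
*/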
int write_rcfile(void)
{
int srtn;
char out_str[2048];
if(has_cmd_rcf){
srtn = api_rcfile_write(cmd_rcf_file,out_str);
if(srtn != 0){
printf("%s",out_str);
return 1;
}else{
printf("\nSettings successfully written to rc file <%s>\n",cmd_rcf_file);
}
}else{
printf("\nWarning, no -rcf file specified so cannot write settings\n");
}
return 0;
}
/* ==================== */
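/*
** admin_test() - run the administrative configuration test via
** api_admin_test() and print the returned report.
*/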
#include <stdio.h>
int admin_test(void)
{
int status;
char *test_str = NULL;
char out_str[2048];
test_str = api_admin_test(orgpath,org_name,rmgr_port,&status,out_str);
if(status != 0){
printf("\nAdmin test returns %d, text = %s",status,out_str);
}
if(test_str != NULL){
printf("\n%s",test_str);
free(test_str);
}
return 0;
}
/* ==================== */
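/*
** reconfig_quemgr() - ask the QueMgr to re-read its configuration.
** Only the configured Admin user is allowed to do this.
*/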
int reconfig_quemgr(void)
{
int status;
char *recfg_str = NULL;
char out_str[2048];
/*
** if user is Admin then ...
*/
if(strcmp(api_user_name,cfg->ADMIN) != 0){
printf("\nError, user <%s> is not the Admin <%s>, so cannot reconfig\n",
api_user_name,cfg->ADMIN);
return 0;
}
recfg_str = api_reconfig_quemgr(qmgr_host,qmgr_port,&status,out_str);
if(status != 0){
printf("\nReconfig returns %d, text = %s",status,out_str);
}
if(recfg_str != NULL){
printf("\n%s",recfg_str);
free(recfg_str);
}
return 0;
}
/* ==================== */
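/*
** print_menu() - print the interactive menu of txtmgr selections.
*/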
void print_menu(void)
{
printf("\n");
printf("Enter selection:\n");
printf(" 1). submit a job\n");
printf(" 2). abort a job\n");
printf(" 3). monitor a job\n");
printf(" 4). show QueMgr log file\n");
printf(" 5). show QueMgr jobs/queues\n");
printf(" 6). show QueMgr cpu/mem/disk\n");
printf(" 7). list completed jobs\n");
printf(" 8). write rcfile settings\n");
printf(" 9). admin test\n");
printf(" 10). admin reconfig QueMgr\n");
printf(" 11). quit\n");
printf("\n");
printf("choice: ");
return;
}
/* ==================== */
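/*
** get_response() - read the user's menu selection; 'q' or 'Q' quits,
** anything out of range returns NOTVALID.
*/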
int get_response(void)
{
char bogus[100];
int choice;
(void)scanf("%s",bogus);
if( (bogus[0] == 'q') || (bogus[0] == 'Q'))
choice = QUIT;
else
choice = atoi(bogus);
if( (choice < SUBMIT) || (choice > QUIT) )
return NOTVALID;
return choice;
}
/* ==================== */
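/*
** doit() - dispatch a menu selection to the matching work routine.
** With -nocon only the Admin test selection is allowed.
*/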
int doit(int choice)
{
int srtn;
if(choice == QUIT)
return -1;
#ifdef DEBUG
printf("choice made was: %d\n",choice);
#endif
if(dont_connect){
if(choice != ADMINTEST){
printf("\nError, only valid option with -nocon is Admin test\n");
return 0;
}
}
if(choice == SUBMIT){
srtn = submit_job();
}else if(choice == ABORT){
srtn = abort_job();
}else if(choice == WATCHJOB){
srtn = watch_job();
}else if(choice == WATCHQUE_LOG){
srtn = watch_que(choice);
}else if(choice == WATCHQUE_FULL){
srtn = watch_que(choice);
}else if(choice == WATCHQUE_CPU){
srtn = watch_que(choice);
}else if(choice == LISTCOMP){
srtn = list_complete();
}else if(choice == RCFILEWRITE){
srtn = write_rcfile();
}else if(choice == ADMINTEST){
srtn = admin_test();
}else if(choice == RECONFIG){
srtn = reconfig_quemgr();
}else{
srtn = 0;
printf("invalid choice ?\n");
}
return srtn;
}
/* ==================== */
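/*
** main() - parse command-line options and environment variables, check
** out a license where required, read the org.cfg file to locate the
** QueMgr, fetch the configuration, apply rc-file settings, then either
** run one selection (-choice) or loop on the interactive menu.
*/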
int main(int argc,char **argv)
{
int i,j,k;
int len1;
int done;
int not_first_real_app;
int first_real_app_num;
FILE *wp = NULL;
int do_print;
char env_file[256];
char first_real_app_str[128];
int srtn;
int choice;
int timout;
int has_timout;
char *ptr;
char *qmgr_hoststr;
char *qmgr_portstr;
char *user_str;
char home_dir[256];
char tmpstr[256];
char error_msg[256];
char tmp_host[256];
char out_str[2048];
#ifdef WINNT
int err;
WORD wVersionRequested;
WSADATA wsaData;
#endif
/*
** get this hostname ...
*/
gethostname(api_this_host,256);
strcpy(tmp_host,api_this_host);
host_entry = (struct hostent *)gethostbyname(tmp_host);
if(host_entry != NULL)
strcpy(api_this_host,host_entry->h_name);
/* ------------ */
/*
** get this username ...
*/
user_str = api_getlogin();
strcpy(api_user_name,user_str);
/* ------------ */
/*
** assume binpath is from P3_HOME (or AM_HOME) ...
** command-line will override ...
*/
binpath[0] = '\0';
orgpath[0] = '\0';
#ifdef ULTIMA
ptr = getenv("AM_HOME");
#else
ptr = getenv("P3_HOME");
#endif
if(ptr != NULL){
strcpy(binpath,ptr);
}
#ifdef DEBUG
printf("binpath = <%s>\n",binpath);
#endif
/* ------------ */
/*
** get QueMgr host, port, app name, index, p3home (or amhome),
** and -rcf from command line args ...
*/
lic_file[0] = '\0';
has_qmgr_host = 0;
has_qmgr_port = 0;
has_org = 0;
has_orgpath = 0;
has_timout = 0;
timout = BLOCK_TIMEOUT;
strcpy(org_name,"default");
ptr = getenv("P3_ORG");
if(ptr != NULL){
strcpy(org_name,ptr);
has_org = 1;
}
qmgr_host[0] = '\0';
qmgr_port = -1111;
rmgr_port = RMTMGR_RESV_PORT;
api_application_name[0] = '\0';
api_application_index = -1;
sys_rcf_file[0] = '\0';
usr_rcf_file[0] = '\0';
has_cmd_rcf = 0;
cmd_rcf_file[0] = '\0';
strcpy(usr_rcf_file,".p3mgrrc");
get_home_dir(home_dir);
if(home_dir[0] != '\0'){
sprintf(usr_rcf_file,"%s/.p3mgrrc",home_dir);
}
#ifdef DEBUG
fprintf(stderr,"usr_rcf_file = <%s>\n",usr_rcf_file);
#endif
auto_startup = 0;
do_print = 0;
env_file[0] = '\0';
if(argc > 1){
i = 1;
while(i < argc){
if((strcmp(argv[i],"-qmgrhost") == 0) && (i < argc-1)){
has_qmgr_host = 1;
strcpy(qmgr_host,argv[i+1]);
i++;
}else if((strcmp(argv[i],"-qmgrport") == 0) && (i < argc-1)){
has_qmgr_port = 1;
qmgr_port = atoi(argv[i+1]);
i++;
}else if((strcmp(argv[i],"-rmgrport") == 0) && (i < argc-1)){
rmgr_port = atoi(argv[i+1]);
i++;
}else if((strcmp(argv[i],"-timeout") == 0) && (i < argc-1)){
timout = atoi(argv[i+1]);
has_timout = 1;
i++;
}else if((strcmp(argv[i],"-org") == 0) && (i < argc-1)){
has_org = 1;
strcpy(org_name,argv[i+1]);
i++;
}else if((strcmp(argv[i],"-orgpath") == 0) && (i < argc-1)){
has_orgpath = 1;
strcpy(orgpath,argv[i+1]);
i++;
}else if((strcmp(argv[i],"-auth") == 0) && (i < argc-1)){
strcpy(lic_file,argv[i+1]);
i++;
}else if((strcmp(argv[i],"-app") == 0) && (i < argc-1)){
strcpy(api_application_name,argv[i+1]);
i++;
}else if((strcmp(argv[i],"-rcf") == 0) && (i < argc-1)){
has_cmd_rcf = 1;
strcpy(cmd_rcf_file,argv[i+1]);
i++;
#ifdef ULTIMA
}else if((strcmp(argv[i],"-amhome") == 0) && (i < argc-1)){
#else
}else if((strcmp(argv[i],"-p3home") == 0) && (i < argc-1)){
#endif
strcpy(binpath,argv[i+1]);
i++;
}else if((strcmp(argv[i],"-choice") == 0) && (i < argc-1)){
auto_startup = atoi(argv[i+1]);
i++;
}else if(strcmp(argv[i],"-env") == 0){
do_print = 1;
}else if(strcmp(argv[i],"-envall") == 0){
do_print = 2;
}else if((strcmp(argv[i],"-envf") == 0) && (i < argc-1)){
strcpy(env_file,argv[i+1]);
do_print = 3;
i++;
}else if((strcmp(argv[i],"-envfall") == 0) && (i < argc-1)){
strcpy(env_file,argv[i+1]);
do_print = 4;
i++;
}else if(strcmp(argv[i],"-nocon") == 0){
dont_connect = 1;
}else if(strcmp(argv[i],"-version") == 0){
fprintf(stderr,"version: %s\n",GLOBAL_AM_VERSION);
return 0;
}
i++;
}
}
#ifdef DEBUG
if(has_cmd_rcf)
fprintf(stderr,"cmd_rcf_file = <%s>\n",cmd_rcf_file);
#endif
/* ------------ */
/*
** if binpath is still empty then it's an error ...
*/
#ifdef DEBUG
printf("binpath = <%s>\n",binpath);
#endif
if(binpath[0] == '\0'){
#ifdef ULTIMA
printf("Error, AM_HOME env var not set\n");
#else
printf("Error, P3_HOME env var not set\n");
#endif
return 1;
}
#ifdef LAPI
if(lic_file[0] == '\0'){
ptr = getenv("MSC_LICENSE_FILE");
if(ptr == NULL){
ptr = getenv("LM_LICENSE_FILE");
}
if(ptr == NULL){
printf("Error, authorization file not set (MSC_LICENSE_FILE)\n");
return 1;
}
strcpy(lic_file,ptr);
}
#else
strcpy(lic_file,"empty.noauth");
#endif
/*
** change back-slashes to forward slashes for binpath ...
*/
i = 0;
j = 0;
k = (int)strlen(binpath);
while(i < k){
#ifdef DEBUG
fprintf(stderr,"txtmgr: i=%d, k=%d\n",i,k);
fprintf(stderr,"txtmgr: binpath[i] = %c\n",binpath[i]);
#endif
if(binpath[i] == '\\'){
if(i < k-1){
if(binpath[i+1] == '\\'){
i++;
}
}
tmpstr[j] = '/';
j++;
}else{
tmpstr[j] = binpath[i];
j++;
}
#ifdef DEBUG
fprintf(stderr,"HERE\n");
#endif
i++;
}
tmpstr[j] = '\0';
strcpy(binpath,tmpstr);
/*
** make sure binpath has no slash at end ...
*/
len1 = (int)strlen(binpath);
if(len1 > 0){
if( (binpath[len1-1] == '/') || (binpath[len1-1] == '\\') ){
binpath[len1-1] = '\0';
}
}
/*
** mck - add /p3manager_files (or analysis_manager) to binpath ...
*/
#ifdef ULTIMA
strcat(binpath,"/analysis_manager");
#else
strcat(binpath,"/p3manager_files");
#endif
/* ------------ */
/*
** MCK MCK MCK - get orgpath - it WILL be the same as binpath
** for the org.cfg file ...
*/
if(has_orgpath == 0){
strcpy(orgpath,binpath);
}else{
/*
** change back-slashes to forward slashes for orgpath ...
*/
i = 0;
j = 0;
k = (int)strlen(orgpath);
while(i < k){
#ifdef DEBUG
fprintf(stderr,"txtmgr: i=%d, k=%d\n",i,k);
fprintf(stderr,"txtmgr: orgpath[i] = %c\n",orgpath[i]);
#endif
if(orgpath[i] == '\\'){
if(i < k-1){
if(orgpath[i+1] == '\\'){
i++;
}
}
tmpstr[j] = '/';
j++;
}else{
tmpstr[j] = orgpath[i];
j++;
}
#ifdef DEBUG
fprintf(stderr,"HERE\n");
#endif
i++;
}
tmpstr[j] = '\0';
strcpy(orgpath,tmpstr);
/*
** make sure orgpath has no slash at end ...
*/
len1 = (int)strlen(orgpath);
if(len1 > 0){
if( (orgpath[len1-1] == '/') || (orgpath[len1-1] == '\\') ){
orgpath[len1-1] = '\0';
}
}
/*
** mck - add /p3manager_files (or analysis_manager) to orgpath ...
*/
#ifdef ULTIMA
strcat(orgpath,"/analysis_manager");
#else
strcat(orgpath,"/p3manager_files");
#endif
}
/* ------------ */
sprintf(sys_rcf_file,"%s/%s/p3mgrrc",orgpath,org_name);
#ifdef DEBUG
fprintf(stderr,"sys_rcf_file = <%s>\n",sys_rcf_file);
#endif
/* ------------ */
/*
** check env vars if not set on command-line
*/
if(qmgr_host[0] == '\0'){
qmgr_hoststr = getenv("P3_MASTER");
if(qmgr_hoststr != NULL){
strcpy(qmgr_host,qmgr_hoststr);
has_qmgr_host = 1;
}
}
if(qmgr_host[0] == '\0'){
qmgr_hoststr = getenv("MSC_AM_QUEMGR");
if(qmgr_hoststr != NULL){
strcpy(qmgr_host,qmgr_hoststr);
has_qmgr_host = 1;
}
}
if(qmgr_host[0] == '\0'){
qmgr_hoststr = getenv("QUEMGR_HOST");
if(qmgr_hoststr != NULL){
strcpy(qmgr_host,qmgr_hoststr);
has_qmgr_host = 1;
}
}
if(qmgr_port == -1111){
qmgr_portstr = getenv("P3_PORT");
if(qmgr_portstr != NULL){
qmgr_port = atoi(qmgr_portstr);
has_qmgr_port = 1;
}
}
if(qmgr_port == -1111){
qmgr_portstr = getenv("MSC_AM_QUEPORT");
if(qmgr_portstr != NULL){
qmgr_port = atoi(qmgr_portstr);
has_qmgr_port = 1;
}
}
if(qmgr_port == -1111){
qmgr_portstr = getenv("QUEMGR_PORT");
if(qmgr_portstr != NULL){
qmgr_port = atoi(qmgr_portstr);
has_qmgr_port = 1;
}
}
/* ------------ */
#ifndef ULTIMA
/*
** checkout license ...
*/
if( (do_print == 0) && (dont_connect == 0) ){
if((srtn = api_checkout_license(lic_file)) != 0){
printf("Error, Authorization failure %d.",srtn);
if(global_auth_msg != NULL){
printf(" Error msg = %s\n",global_auth_msg);
}else{
printf("\n");
}
return 1;
}
#ifdef DEBUG
fprintf(stderr,"auth_file = %s\n",lic_file);
fprintf(stderr,"checkout_license returns %d\n",srtn);
#endif
}
#endif
/* ------------ */
/*
** init api ...
*/
srtn = api_init(out_str);
if(srtn != 0){
printf("%s, error code = %d\n",out_str,srtn);
return 1;
}
/* ------------ */
/*
** adjust global network timeout if desired ...
*/
if(has_timout == 0){
timout = 30;
}
srtn = api_set_gbl_timeout(timout);
if(srtn != 0){
printf("Error, unable to set global timeout to %d secs\n",timout);
api_release_license();
#ifdef WINNT
WSACleanup();
#endif
return 1;
}
/* ------------ */
ptr = getenv("AM_THIS_HOST");
if(ptr != NULL){
if( (strcmp(ptr,"no") != 0) && (strcmp(ptr,"NO") != 0) ){
api_use_this_host = 1;
}
}
/* ------------ */
/*
** read orgs if possible (org.cfg is in binpath) ...
*/
org = NULL;
num_orgs = 0;
if( (has_qmgr_host == 0) && (has_qmgr_port == 0) ){
org = api_read_orgs(binpath,&num_orgs,&srtn);
if(srtn != 0){
printf("Warning, unable to read org.cfg file, code = %d\n",srtn);
/*
* use defaults ...
*/
strcpy(qmgr_host,api_this_host);
qmgr_port = QUEMGR_RESV_PORT;
}else{
if( (num_orgs > 0) && (org != NULL) ){
/*
** figure out which quemgr to connect to ...
*/
done = 0;
for(i=0;i<num_orgs;i++){
if(strcmp(org[i].org_name,org_name) == 0){
strcpy(qmgr_host,org[i].host_name);
qmgr_port = org[i].port;
done = 1;
break;
}
}
if( (!done) && (has_org == 0) ){
/*
** use first available ...
*/
strcpy(qmgr_host,org[0].host_name);
qmgr_port = org[0].port;
done = 1;
}else if( (!done) && (has_org == 1) ){
/*
** no match found, assume this host and all ...
*/
strcpy(qmgr_host,api_this_host);
qmgr_port = QUEMGR_RESV_PORT;
done = 1;
}
}else{
printf("Warning, unable to read org.cfg file, no orgs found\n");
/*
* use defaults ...
*/
strcpy(qmgr_host,api_this_host);
qmgr_port = QUEMGR_RESV_PORT;
done = 1;
}
}
}
#ifdef DEBUG
printf("\n");
printf("quemgr org = %s\n",org_name);
printf("quemgr host = %s\n",qmgr_host);
printf("quemgr port = %d\n",qmgr_port);
#endif
/* ------------ */
if(! dont_connect){
/*
** get config info ...
*/
cfg = api_get_config(qmgr_host, qmgr_port, &srtn, error_msg);
if(srtn != 0){
printf("Error, msg = %s, error = %d\n",error_msg,srtn);
api_release_license();
#ifdef WINNT
WSACleanup();
#endif
return 1;
}
/* ------------ */
/* find first real app, just in case */
first_real_app_str[0] = '\0';
not_first_real_app = 1;
first_real_app_num = 1;
for(i=0;i<MAX_APPS;i++){
if(not_first_real_app){
#ifdef DEBUG
fprintf(stderr,"cfg->progs[%d].app_name = <%s>\n",i,cfg->progs[i].app_name);
#endif
if(cfg->progs[i].app_name[0] != '\0'){
not_first_real_app = 0;
strcpy(first_real_app_str,cfg->progs[i].app_name);
first_real_app_num = i + 1;
break;
}
}
}
if(not_first_real_app){
/* error, no apps defined */
fprintf(stderr,"TxtMgr Error: No valid applications defined.\n");
return 1;
}
if(api_application_name[0] != '\0'){
for(i=0;i<MAX_APPS;i++){
if(strcmp(api_application_name,cfg->progs[i].app_name) == 0){
api_application_index = i + 1;
break;
}
}
}
if(api_application_index <= 0){
/* app not specified - use first available */
strcpy(api_application_name,first_real_app_str);
api_application_index = first_real_app_num;
/* put up message about app not found, using first one */
fprintf(stderr,"\nTxtMgr Info: No application specified.\nUsing first available application of <%s> = %d\n",
api_application_name,api_application_index);
}
#ifdef DEBUG
fprintf(stderr,"application_name = <%s>\n",api_application_name);
fprintf(stderr,"application_index = %d\n",api_application_index);
#endif
/* ----------- */
/* DEBUGGING
if(orgpath[0] != '\0'){
api_write_config(cfg,orgpath,bogus,&srtn,error_msg);
if(srtn != 0){
printf("Error, unable to write config files, msg = %s, code = %d\n",error_msg,srtn);
api_release_license();
#ifdef WINNT
WSACleanup();
#endif
return 0;
}
}
DEBUGGING */
/* ----------- */
/*
** initialize config values ...
*/
api_init_uiconfig(cfg);
/*
** because we are txtmgr, reset job_mon_flag of ui_config
** to be off by default ...
*/
if(auto_startup == SUBMIT){
ui_config.job_mon_flag = 0;
}
#ifdef DEBUG_111
api_rcfile_print(1);
#endif
/* ----------- */
/*
** override some settings if needed ...
*/
(void)api_rcfile_read(sys_rcf_file,out_str);
(void)api_rcfile_read(usr_rcf_file,out_str);
if(has_cmd_rcf){
srtn = api_rcfile_read(cmd_rcf_file,out_str);
if(srtn != 0){
printf("%s\n",out_str);
}
}
#ifdef DEBUG_111
api_rcfile_print(1);
#endif
/* ----------- */
/*
** if just a print env then do it and stop ...
*/
if(do_print){
if(do_print >= 3){
if(env_file[0] != '\0'){
wp = fopen(env_file,"wb");
if(wp != NULL){
api_rcfile_write2(wp,(do_print-3));
fclose(wp);
}
}
}else{
api_rcfile_print(do_print-1);
}
api_release_license();
#ifdef WINNT
WSACleanup();
#endif
return 0;
}
} /* ! dont_connect ... */
/* ----------- */
if(auto_startup > 0){
srtn = doit(auto_startup);
}else{
/*
** query for selection and do work ...
*/
while(1){
print_menu();
choice = get_response();
srtn = doit(choice);
#ifdef DEBUG
printf("doit(%d) returns %d\n",choice,srtn);
#endif
if(srtn < 0)
break;
}
srtn = 0;
}
api_release_license();
#ifdef WINNT
WSACleanup();
#endif
return srtn;
}
Index
MSC Patran Analysis Manager Users Guide
ABAQUS, 13
ABAQUS submittals, 13
abort
selecting job, 88
action
abort, 88
configure, 36
monitor, 70
administrator, 119
analysis
ABAQUS, 13
general, 14, 15
MSC.Nastran, 11
Analysis Preference, 8
applications, 120
adding, 121
deleting, 123
command arguments, 53
configuration
disk, 129
examples, 148
files, 118
general, 51
organizational group, 154
queue, 132
separate users, 155
test, 136
configuration management, 115
configure, 36
disk space, 37
mail, 48
memory, 42
miscellaneous, 60
restart, 55
time, 49
daemon, 94
General Manager, 96
Job Manager, 95
Queue Manager, 94
default host/queue, 54
disable, 10
disk configuration, 129
edit file, 32
editor, 96
enable, 10
environment variables, 105
errors, 164
executables, 4, 92
execute, 20
files
configuration, 116, 118
created, 23
directory structure, 92
disk configuration, 152
edit, 32
examples, 148
host configuration, 148
queue configuration, 153
save settings, 36
selecting, 28
X resources, 113
filesystem
add, 130
delete, 131
test, 142
fonts, 113
general, 15
Generic submittals, 15
host, 29
adding, 127
configuration, 126
deleting, 129
test, 137, 138
host groups, 134
installation, 109
instructions, 110
requirements, 109
integration, 5
interface
configuration management, 115
user, 94
MSC.Marc Submittals, 14
MSC.Nastran, 10, 11
MSC.Nastran submittals, 11
Multiple File Transfer, 33
NQS, 132
organization, 6
multiple, 105
queue, 29
add, 133
delete, 134
test, 145
queue type, 119
LSF, 119
NQS, 119
reconfigure, 145
restart, 55
rules, 11, 13, 14
startup arguments, 96
statistics, 96, 197
submit
preparing, 8
separate user, 53
test
application, 137
disk, 143
MSC.Patran AM host, 141
physical hosts, 138
queue, 145
test configuration/host, 137, 138
user interface, 94
variables, 105
X resources, 113