
Patran 2012

Analysis Manager
User's Guide

Corporate
MSC.Software Corporation
2 MacArthur Place
Santa Ana, CA 92707 USA
Telephone: (800) 345-2078
Fax: (714) 784-4056

Europe
MSC.Software GmbH
Am Moosfeld 13
81829 Munich, Germany
Telephone: (49) (89) 43 19 87 0
Fax: (49) (89) 43 61 71 6

Asia Pacific
MSC.Software Japan Ltd.
Shinjuku First West 8F
23-7 Nishi Shinjuku
1-Chome, Shinjuku-Ku
Tokyo 160-0023, JAPAN
Telephone: (81) (3)-6911-1200
Fax: (81) (3)-6911-1201

Worldwide Web
www.mscsoftware.com

Disclaimer
This documentation, as well as the software described in it, is furnished under license and may be used only in accordance with
the terms of such license.
MSC.Software Corporation reserves the right to make changes in specifications and other information contained in this document
without prior notice.
The concepts, methods, and examples presented in this text are for illustrative and educational purposes only, and are not
intended to be exhaustive or to apply to any particular engineering problem or design. MSC.Software Corporation assumes no
liability or responsibility to any person or company for direct or indirect damages resulting from the use of any information
contained herein.
User Documentation: Copyright 2010 MSC.Software Corporation. Printed in U.S.A. All Rights Reserved.
This notice shall be marked on any reproduction of this documentation, in whole or in part. Any reproduction or distribution of this
document, in whole or in part, without the prior written consent of MSC.Software Corporation is prohibited.
The software described herein may contain certain third-party software that is protected by copyright and licensed from
MSC.Software suppliers. Contains IBM XL Fortran for AIX V8.1, Runtime Modules, (c) Copyright IBM Corporation 1990-2002,
All Rights Reserved.
MSC, MSC/, MSC Nastran, MD Nastran, MSC Fatigue, Marc, Patran, Dytran, and Laminate Modeler are trademarks or registered
trademarks of MSC.Software Corporation in the United States and/or other countries.
NASTRAN is a registered trademark of NASA. PAM-CRASH is a trademark or registered trademark of ESI Group. SAMCEF is
a trademark or registered trademark of Samtech SA. LS-DYNA is a trademark or registered trademark of Livermore Software
Technology Corporation. ANSYS is a registered trademark of SAS IP, Inc., a wholly owned subsidiary of ANSYS Inc. ACIS is a
registered trademark of Spatial Technology, Inc. ABAQUS and CATIA are registered trademarks of Dassault Systemes, SA.
EUCLID is a registered trademark of Matra Datavision Corporation. FLEXlm is a registered trademark of Macrovision
Corporation. HPGL is a trademark of Hewlett Packard. PostScript is a registered trademark of Adobe Systems, Inc. PTC, CADDS
and Pro/ENGINEER are trademarks or registered trademarks of Parametric Technology Corporation or its subsidiaries in the
United States and/or other countries. Unigraphics, Parasolid and I-DEAS are registered trademarks of UGS Corp. a Siemens
Group Company. All other brand names, product names or trademarks belong to their respective owners.

P3:V2012:Z:ANM:Z: DC-USR-PDF

Contents
MSC Patran Analysis Manager User's Guide

Overview
    Purpose
    Product Information
    What is Included with this Product?
    Integration with MSC Patran
    How this Manual is Organized

Getting Started
    Quick Overview
    Enabling/Disabling the Analysis Manager 10
    MSC Nastran Submittals 11
    ABAQUS Submittals 13
    MSC.Marc Submittals 14
    Generic Submittals 15
    The Main Form 16
        UNIX Interface 17
        Windows Interface 18
    Invoking the Analysis Manager Manually 20
    Files Created 23

Submit
    Introduction 26
    Selecting Files 28
    Where to Run Jobs 29
    Windows Submittal 31

Configure
    Introduction 34
    Disk Space 35
        MSC Nastran Disk Space 35
        ABAQUS, MSC.Marc, and General Disk Space 38
    Memory 40
        MSC Nastran Memory 40
        ABAQUS Memory 42
        MSC.Marc and General Memory 44
    Mail 46
    Time 47
    General 49
    Restart 53
        MSC Nastran Restarts 53
        MSC.Marc Restarts 55
        ABAQUS Restarts 56
    Miscellaneous 58
        MSC Nastran Miscellaneous 58
        MSC.Marc Miscellaneous 59
        ABAQUS Miscellaneous 61
        General Miscellaneous 62

Monitor
    Introduction 68
    Running Job 69
        Windows Interface 72
    Completed Job 75
        Windows Interface 76
    Host/Queue 78
        Job Listing 79
        Host Status 80
        Queue Manager Log 81
        Full Listing 82
        CPU Loads 82

Abort
    Selecting a Job 86
    Aborting a Job 87
        UNIX Interface 87
        Windows Interface 87

System Management
    Directory Structure 90
    Analysis Manager Programs 92
        Analysis Manager Program Startup Arguments 94
        Analysis Manager Environment File 100
        Organization Environment Variables 103
    Installation 107
        Installation Requirements 107
        Installation Instructions 108
        X Resource Settings 111
    Configuration Management Interface 113
        Modify Configuration Files 116
        Test Configuration 134
        Queue Manager 143
        Examples of Configuration Files 146
    Starting the Queue/Remote Managers 155
        Starting Daemons at Boot Time 156

Error Messages
    Error Messages 162

Application Procedural Interface (API)
    Analysis Manager API 194
        Analysis Manager Application Procedural Interface (API) Description 194
        Include File 204
        Example Interface 230

Chapter 1: Overview
Patran Analysis Manager User's Guide

Overview

    Purpose
    Product Information
    What is Included with this Product?
    Integration with MSC Patran
    How this Manual is Organized

Purpose
MD Nastran, MSC.Marc, and MSC Patran are analysis software systems developed and maintained by
the MSC.Software Corporation. MD Nastran and MSC.Marc are advanced finite element analysis
programs used mainly for analyzing complex structural and thermal engineering problems. The core of
MSC Patran is a finite element analysis pre/postprocessor. Several optional products are available with
MSC Patran, including advanced postprocessing, interfaces to third-party solvers, and application
modules. This document describes the MSC Patran Analysis Manager, one of these application modules.
The Analysis Manager provides interfaces within MSC Patran to submit, monitor and manage analysis
jobs on local and remote networked systems. It can also operate in a stand-alone mode directly with MD
Nastran, MSC.Marc, ABAQUS, and other general purpose finite element solvers.
At many sites, engineers have several computing options. Users can choose from multiple platforms or
various queues when jobs are submitted. In reality, the resources available to them are not equal. They
differ based on the amount of disk space and memory available, system speed, cost of computing
resources, and number of users. In networked environments, users frequently do their modeling on local
workstations with the actual analysis performed on compute servers or other licensed workstations.
The MSC Patran Analysis Manager automates the process of running analysis software, even on remote
and dissimilar platforms. Files are automatically copied to where they are needed; the analysis is
performed; pertinent information is relayed back to the user; and files are returned or deleted when the
analysis is complete, even in heterogeneous computing environments. Time-consuming system
housekeeping tasks are reduced so that more time is available for productive engineering.
The Analysis Manager replaces text-oriented submission scripts with a Motif-based, menu-driven
interface (or a native interface on Windows platforms), allowing users to submit and control
their jobs with point-and-click ease. No programming is required. Most users are able to use it
productively after a short demonstration.

Product Information
The MSC Patran Analysis Manager provides convenient and automatic submittal, monitoring, control
and general management of analysis jobs to local or remote networked systems. Primary benefits of using
the Analysis Manager are engineering productivity and efficient use of local and corporate network-wide
computing resources for finite element analysis.
The Analysis Manager has its own scheduling capability. If commercially available queueing software
is present, such as LSF (Load Sharing Facility) from Platform Computing Ltd. or NQS, the
Analysis Manager can be configured to work closely with it.
This release of the MSC Patran Analysis Manager works explicitly with MD Nastran and MSC.Marc
releases up to version 2006, and versions of ABAQUS up to 6.x. It also has a general capability which
allows almost any software analysis application to be supported in a generic way.
For more information on how to contact your local MSC representative see Technical Support, xi.

What is Included with this Product?


The MSC Patran Analysis Manager product includes the following items:
1. Various executable programs, services or daemons for ALL supported computer platforms which
usually reside in
$P3_HOME/p3manager_files/bin
where $P3_HOME is a variable indicating the <installation_directory>, the directory
location of the MSC Patran installation. The main executables are:
P3Mgr (Graphical User Interface)
QueMgr (Queue Manager)
JobMgr (Job Manager)
NasMgr (MSC Nastran Manager)
AbaMgr (ABAQUS Manager)
MarMgr (MSC.Marc Manager)
GenMgr (General Manager)
RmtMgr (Remote Manager)
AdmMgr (Admin Manager - Unix only - part of P3Mgr on Windows)
TxtMgr (Text User Interface)
Job_Viewer (Database Job Viewer - Unix only)

2. Template configuration files contained in


$P3_HOME/p3manager_files/default/conf
These configuration files must be modified to fit each new computing environment and network.
These and the above executables are described in System Management.
3. Two empty working directories called
$P3_HOME/p3manager_files/default/log
$P3_HOME/p3manager_files/default/proj
which are necessary and are used during analysis execution to store various files.
4. This User's Manual. An on-line version is provided to allow direct access to this information from
within MSC Patran.
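The items above can be summarized as a directory sketch (annotations added; only the locations named in this section are shown):

```
$P3_HOME/p3manager_files/
    bin/            executables (P3Mgr, QueMgr, JobMgr, NasMgr, AbaMgr,
                    MarMgr, GenMgr, RmtMgr, AdmMgr, TxtMgr, Job_Viewer)
    default/
        conf/       template configuration files
        log/        working directory, used during analysis execution
        proj/       working directory, used during analysis execution
```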

Integration with MSC Patran


The MSC Patran Analysis Manager can function as a separately run program but is intended to be run
directly from within MSC Patran in a seamless manner when submitting analysis jobs. It is integrated
with MSC Patran such that engineers can submit, monitor and manage their analysis jobs directly from
within the MSC Patran graphical interface. It provides a user-friendly environment to submit analysis
jobs, then monitor and control job execution graphically. It is a distributed, multiple-process application
which runs in a heterogeneous network.
There are various modes in which the Analysis Manager can be invoked. Normally, a user will see a
seamless integration between MSC Patran and the Analysis Manager. Jobs can be submitted, monitored
and aborted simply by setting the appropriate Action in pull down menus available from the Analysis
application form in MSC Patran. When a job is being monitored, the monitoring window or form can be
put away and recalled at any time. The user can even quit MSC Patran and the monitoring window will
remain present until the user closes it.
The full user interface is also available from within MSC Patran simply by pressing a button on the
Analysis application form or from the Tools pull down menu on the main form. This gives access to
change default settings, submit previously created input files, change the default computer host or queue
in which to submit jobs, and many other options which are explained throughout this manual.
The MSC Patran Analysis Manager can also be invoked from the system prompt. This mode of
implementation gives the user maximum flexibility to manage analysis jobs.

How this Manual is Organized


This manual is organized into chapters, each dealing with certain functions of the product. The
manual includes the following chapter topics:
Overview provides general information and an overview of the features of the MSC Patran
Analysis Manager.
Getting Started describes rules for analysis input decks, how to invoke MSC Patran's Analysis Manager,
and gives the details involved in setting up, submitting, monitoring, and aborting an analysis job directly
from within MSC Patran.
Submit describes the use of the job submittal capability from the MSC Patran Analysis Manager
user interface.
Configure describes how to configure various options such as memory, disk space, restarts, time of
submittal, host or queue selection, and a number of other options.
Monitor describes the monitoring of running jobs, completed jobs, and hosts or queues. The graphical
monitoring window is also described in detail.
Abort describes how to abort running jobs.
System Management details system management tasks. The individual program executables are described
as well as the necessary configuration files, installation guidelines, and requirements. This chapter is
mainly for the system administrator who must install and configure the Analysis Manager.
Error Messages gives descriptions of and solutions to error messages.

Chapter 2: Getting Started
Patran Analysis Manager User's Guide

Getting Started

    Quick Overview
    Enabling/Disabling the Analysis Manager
    MSC Nastran Submittals
    ABAQUS Submittals
    MSC.Marc Submittals
    Generic Submittals
    The Main Form
    Invoking the Analysis Manager Manually
    Files Created

Quick Overview
Before Patran's Analysis Manager can be used, it must be installed and configured by the system
administrator. See System Management for more on the installation and set-up of the module.
In so doing, the system administrator starts the Analysis Manager's queue manager (QueMgr) daemon
or service, which is always running on a master system. The queue manager schedules all jobs submitted
through the Analysis Manager. The master host is generally the system on which Patran or an analysis
module was installed, but does not have to be.
The system administrator also starts another daemon (or service) that runs on all machines configured to
run analyses, called the remote manager (RmtMgr). This daemon/service allows for proper
communication and file transfer to/from these machines.
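On a UNIX host, a quick (unofficial) way to confirm these daemons are up is to look for them in the process list; the daemon names below are the ones this guide uses:

```shell
# Look for the Analysis Manager daemons in the process table.
# QueMgr runs on the master host; RmtMgr runs on every analysis host.
# The [Q]/[R] bracket trick keeps grep from matching its own command line.
ps -e | grep -E '[Q]ueMgr|[R]mtMgr' || echo "no Analysis Manager daemons found"
```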
Users who already have analysis input files prepared and are not using Patran may skip to The Main Form
after reviewing the rules for input files for the various submittal types in this chapter.
When using Patran, in general, the user begins by setting the Analysis Preference to the appropriate
analysis, such as MSC Nastran, which is available from the Preferences pull down menu on the top menu
bar.


Once the Analysis Preference is set and a proper analysis model has been created in Patran, the user can
submit the job. Generally, the submittal process takes place from the Analysis application form when the
user presses the Apply button. The full interface, with access to all features of Patran's Analysis Manager,
is always available, regardless of the Preference setting, from the Tools pull down menu or from the
Analysis Manager button on the Analysis form. The location of the submittal form is explained
throughout this chapter for each supported analysis code.

Enabling/Disabling the Analysis Manager


There may be times when it is not desirable or required to submit a job through the Analysis Manager.
In such a case, the user can temporarily disable the Analysis Manager and make use of Patran's generic
submittal capability for each supported analysis code. Disabling the Analysis Manager does not change
the user interface at all; the Analysis Manager button remains on the Analysis form. However, when
the Apply button is pressed on the Analysis application form, the job will be submitted via Patran's
generic submit scripts.
To disable the Analysis Manager, type the following command in Patran's command window and press
the Return or Enter key:
analysis_manager.disable()
To enable the Analysis Manager after it has been disabled, type this command:
analysis_manager.enable()
If a more permanent enabling or disabling of the Analysis Manager is required, the user may place these
commands as necessary in a p3epilog.pcl file. This file is invoked at startup from the user's
local directory, home directory, or $P3_HOME, in that order, if found.
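For example, to keep the Analysis Manager disabled by default at every Patran startup, a user's p3epilog.pcl could contain just this line:

```
analysis_manager.disable()
```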

MSC Nastran Submittals


Any standard MSC Nastran (up to version 2006) problem can be submitted using Patran's
Analysis Manager. This is accomplished from the Analysis form with the Analysis Preference set to
MSC Nastran.
The following rules apply to MSC Nastran run-ready input files for submittal:
1. The BEGIN BULK and ENDDATA statements must be in the main input file (the one that is
specified when submitting), not in INCLUDE files.
2. The filename may not have any '.' characters except for the extension. The filename must also
begin with a letter (not a number).
Run-ready input files prepared by Patran follow these rules. Correct and proper analysis files are created
by following the instructions and guidelines as outlined in the Patran Interface to MD Nastran Preference
Guide.
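The filename rules above can be expressed as a small shell check; valid_jobfile is a hypothetical helper for illustration only, not part of the product:

```shell
# Sketch: check an input-file basename against the submittal rules
# (no '.' characters except for the extension; must begin with a letter).
valid_jobfile() {
  case "$1" in
    *.*.*) return 1 ;;        # more than one '.'
    [!A-Za-z]*) return 1 ;;   # must begin with a letter
    *.*) return 0 ;;          # letter first, exactly one extension
    *) return 1 ;;            # no extension at all
  esac
}

valid_jobfile myjob.bdf  && echo "myjob.bdf ok"
valid_jobfile 2wing.bdf  || echo "2wing.bdf rejected"
valid_jobfile my.job.bdf || echo "my.job.bdf rejected"
```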
To submit, monitor, and manage MSC Nastran jobs from Patran using the Analysis Manager, make sure
the Analysis Preference is set to MSC Nastran. This is done from the Preferences menu on the main
Patran form. The Analysis form appears when the Analysis toggle, located on the main Patran
application switch, is chosen. Pressing the Apply button on the Analysis application form with the
Action set to Analyze, Monitor, or Abort will cause the Analysis Manager to perform the desired
action. A chapter is dedicated to each of these actions in the manual as well as one for custom
configuration of MSC Nastran submittals.
The Analysis Manager generates the MSC Nastran File Management Section (FMS) of the input file
automatically, unless the input file already contains the following advanced FMS statements:
INIT
DBLOCATE
ACQUIRE
DBCLEAN
DBFIX
DBLOAD
DBSETDEL
DBUNLOAD
EXPAND
RFINCLUDE
ENDJOB
ASSIGN USRSOU
ASSIGN USROBJ
ASSIGN OBJSCR
ASSIGN INPUTT2
ASSIGN INPUTT4
in which case the user is prompted whether or not to use the existing FMS as-is, or to have the Analysis
Manager auto-generate the FMS, using what FMS is already present, with certain exceptions.


The question asked is:


This file contains advanced FMS statements. Do you want to bypass
the Patran Analysis Manager auto-FMS capability?
Answer NO to auto-generate FMS; answer YES to use the existing FMS. Typically you would answer YES
to this question unless you are fully aware of the FMS in the file.
With FMS automatically generated, each logical database is made up of multiple physical files, each with
a maximum size of 2^31 bytes (2 GB, the typical maximum file size), up to the disk space currently free, or until
the size limit requested in the Analysis Manager is met. Large problems requiring databases and scratch
files larger than 2^31 bytes can, therefore, be run without the user having to add ANY FMS statements,
provided the auto-FMS capability is not bypassed.
If multiple file systems have been defined, the Analysis Manager will generate FMS (provided the input
file does not contain advanced FMS, or the user wishes to use the Analysis Manager's automatic FMS
capability along with the existing advanced FMS) so that the scratch and database files are split onto each file
system defined, according to the free space available at run time. See Disk Space for more information.
Restarts are handled by the Analysis Manager in the following manner: the needed FMS is generated so
that the restart run will succeed. If database files exist on the local machine, they are copied to the
analysis machine prior to execution; otherwise, they are expected to exist already in the scratch areas.
Any ASSIGN, MASTER statements are changed/generated to ensure MSC Nastran will locate preexisting databases correctly. See Restart for more information.

ABAQUS Submittals
Any standard ABAQUS (up to version 6.x) problem can be submitted using Patran's Analysis Manager.
This is accomplished from the Analysis form with the Analysis Preference set to ABAQUS.
The following rules apply to ABAQUS run-ready input files for submittal:
1. The filename may not have any '.' characters except for the extension. The filename must begin
with a letter (not a number).
2. The combined filename and path should not exceed 80 characters.
Run-ready input files prepared by Patran follow these rules. Correct and proper analysis files are created
by following the instructions and guidelines as outlined in the Patran Interface to ABAQUS Preference
Guide.
To submit, monitor, and manage ABAQUS jobs from Patran using the Analysis Manager, make sure the
Analysis Preference is set to ABAQUS. This is done from the Preferences menu on the main form. The
Analysis form appears when the Analysis toggle, located on the Patran application switch, is chosen.
Pressing the Apply button on the Analysis application form with the Action set to Analyze, Monitor,
or Abort will cause the Analysis Manager to perform the desired action. A chapter is dedicated to each
of these actions in the manual as well as one for custom configuration of ABAQUS submittals.
If multiple file systems have been defined, the Analysis Manager will generate aux_scratch and
split_scratch parameters appropriately based on current free space among all file systems for the
host on which the job is executing. See Disk Space for more information.
Restarts are handled by the Analysis Manager by optionally copying the restart (.res) file to the
executing host first, then running ABAQUS with the oldjob keyword. See Restart for more
information.
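As a sketch of what the restart amounts to on the command line: ABAQUS is run with the old jobname passed via the oldjob parameter. The job names below are illustrative; the Analysis Manager assembles and issues the real command on the executing host.

```shell
# Sketch only: the shape of an ABAQUS restart invocation.
job=wing_restart
oldjob=wing
echo "abaqus job=$job oldjob=$oldjob"
```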

MSC.Marc Submittals
Any standard MSC.Marc (up to version 2006) problem can be submitted using Patran's Analysis
Manager. This is accomplished from the Analysis form with the Analysis Preference set to MSC.Marc.
The following rules apply to MSC.Marc run-ready input files for submittal:
1. The filename may not have any '.' characters except for the extension. The filename must begin
with a letter (not a number).
Run-ready input files prepared by Patran follow these rules. Correct and proper analysis files are created
by following the instructions and guidelines as outlined in the Marc Preference Guide.
To submit, monitor, and manage MSC.Marc jobs from Patran using the Analysis Manager, make sure the
Analysis Preference is set to MSC.Marc. This is done from the Preferences menu on the main form.
The Analysis form appears when the Analysis toggle, located on the Patran application switch, is chosen.
Pressing the Apply button on the Analysis application form with the Action set to Analyze, Monitor,
or Abort will cause the Analysis Manager to perform the desired action. A chapter is dedicated to each
of these actions in the manual as well as one for custom configuration of
MSC.Marc submittals.
Multiple file systems are not supported with MSC.Marc submittals. See Disk Space for
more information.
Restarts, user subroutines, externally referenced result (POST) and view factor files are handled by the
Analysis Manager by optionally copying these files to the executing host first, then running MSC.Marc
with the appropriate command arguments. See Restart for more information.

Generic Submittals
Aside from the explicitly supported analysis codes (MSC Nastran, MSC.Marc, and ABAQUS), almost any
analysis application can be submitted, monitored, and managed using Patran's Analysis Manager general
analysis management capability. This is accomplished by selecting Analysis Manager from the Tools
pull down menu on the main Patran form. This brings up the full Analysis Manager user interface, which
is described in the next section, The Main Form.
When the Analysis Manager is accessed in this manner, it keys off the current Analysis Preference. If the
Preference is set to MSC Nastran, MSC.Marc, or ABAQUS, the jobname and any restart information
are passed from the current job to the Analysis Manager, which comes up ready to submit, monitor, or
manage this job.
Any other Preference that is set must be configured correctly, as described in Installation, and is considered
part of the general analysis management. The jobname from the Analysis form is passed to the Analysis
Manager and the job is submitted with the configured command line and arguments. (How to configure this
information is given in Miscellaneous and Applications.) If an analysis code is to be submitted, yet no
Analysis Preference exists for this code, the Analysis Manager is brought up in its default mode and the
user must then manually change the analysis application to be submitted via an option menu. This is
explained in detail in the next section.
On submittal of a general analysis code, the job file is copied to the specified analysis computer, the
analysis is run, and all resulting files from the submittal are copied back to the invoking computer
and directory.

The Main Form


When Patran's Analysis Manager is invoked, either from the system prompt or via a button or a pull down
menu from within Patran, the main Analysis Manager form appears as shown. There are two interfaces,
one for UNIX and one for Windows platforms. Only the main form is shown here with brief
explanations. Details are provided in subsequent chapters.


UNIX Interface

Note:

The rest of this form's appearance varies depending on the Action that is set. Different
databoxes, listboxes, or other items are displayed in accordance with the Action/Object menu settings.
Each of these is discussed in the following chapters.

Windows Interface


Note:

The rest of this form's appearance varies depending on the Tab that is set. Different
databoxes, listboxes, or other items are displayed in accordance with the Tree and/or Tab settings.
Each of these is discussed in the following chapters.

Window Pull-down Menus


The following simple pull-down menus are available from the Windows interface:
Queue

The main purpose of this pull-down is to allow a user to Exit the program, Print
where appropriate, and to Connect To... other queue manager daemons or services.
User Settings can also be saved and read from this pull-down menu. For
administrators, other items on this pull-down become available when configuring the
Analysis Manager and for Starting and Stopping queue manager services. This is
detailed in System Management. These items in the Queue pull-down menu are only
enabled when the Administration tree tab is accessed.

Edit

Gives access to standard text Cut and Paste operations when applicable.

View

This pull-down menu mainly allows the user to update the view when jobs are being
run. The Refresh (F5) option graphically updates the window when in monitoring
mode. The program also automatically refreshes the screen based on the Update Speed.
All Jobs or only the current User Jobs can be shown if desired.

Tools

The Options under this menu allow the user to change the default editor when
viewing result files or input files. The number of user completed jobs viewable from
the interface is also set here.

Windows

The main purpose of this pull-down menu is to hide or display the Status Bar and
Output Window at the bottom of the window.

Help

Not currently implemented in this release.

Windows Icons
These icons appear on the main form.

Folder

The open folder icon is the same as the Connect To... option under the Queue pull-down
menu, which allows you to connect to other queue manager daemons/services
that may be running and accessible.

Save

The diskette icon is for saving user settings.

Printer

Allows printing when appropriate.

Paintbrush

This allows refresh of the window when in monitoring mode.

Invoking the Analysis Manager Manually


If installed properly, Patran's Analysis Manager can be invoked from the system prompt with the
following arguments:
$P3_HOME/p3analysis_mgr arg1 arg2 arg3 arg4 [optional args] [args5-8]

where $P3_HOME is a variable indicating the <installation_directory>, the directory
location of the Patran installation.
Each argument is described in the following table.
Argument
arg1
(start-up type)

Description
The program can be started up in one of the following 8 modes (enter the
number only):
1- Start up the full interface. See The Main Form. (default)
2- Start up the Queue Monitor. See Monitor.
3- Start up the Abort Job now. See Abort.
4- Monitor a Running Job. See Monitor.
5- Monitor a Completed Job. See Monitor.
6- Submit the job. See Submit.
7- Submit in batch mode. (No user interface appears or messages.)
8- Same as 7 but waits until job is done. Returns status codes:
0=success, 1=failure, 2=abort.

arg2
(extension)

This is the extension of the analysis input file (e.g., .dat,.bdf, .inp).
(.dat is the default)

arg3 (jobname)

This is the Patran jobname; the jobname that appears in any jobname textbox
(without the extension). (default = unknown)

arg4
(application type)

This is the analysis application requested (enter the number only):
1 - MSC Nastran (default)
2 - ABAQUS
3 - MSC.Marc
20 - General code #1
21 - General code #2
   thru
29 - General code #10

optional args
(MSC Nastran)

-coldstart coldstart_jobname
The -coldstart parameter followed by the cold start MSC Nastran
jobname indicates a restart job. Also see P3Mgr.

optional args
(ABAQUS)

-runtype <0, 1 or 2>
-restart oldjobname
The -runtype parameter followed by a 0, 1 or a 2 specifies whether the
run is a full analysis, a restart, or a check run, respectively. The
-restart parameter specifies the old jobname for a restart run.

optional args
(MSC.Marc)

See P3Mgr.

arg5 (x position)

Optional - Specifies the X position, in inches, of the upper left corner of the
Patran right-hand-side interface. (UNIX only)

arg6 (y position)

Optional - Specifies the Y position, in inches, of the upper left corner of the
Patran right-hand-side interface. (UNIX only)

arg7 (width)

Optional - Width of the right-hand-side interface in inches. (UNIX only)

arg8 (height)

Optional - Height of the right-hand-side interface in inches. (UNIX only)

If no arguments are provided, defaults are used (full interface (1), .dat, unknown, MSC Nastran (1)).
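For scripted submittals, the status codes returned by start-up type 8 can be interpreted with a small wrapper like this sketch; report_status is a hypothetical helper, not part of the product:

```shell
# Sketch: interpret the exit status of a batch-mode (start-up type 8) run.
report_status() {
  case "$1" in
    0) echo "job succeeded" ;;
    1) echo "job failed" ;;
    2) echo "job aborted" ;;
    *) echo "unknown status: $1" ;;
  esac
}

# Typical use:  p3analysis_mgr 8 bdf myjob 1; report_status $?
report_status 0
```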
The arguments listed in the table above are very convenient when invoking the Analysis Manager from
pre- and postprocessors such as Patran, which have access to the pertinent information that may be
passed along in the arguments. It may, however, be more convenient for the user to define an alias so
that the program always comes up in the same mode.
Here are some examples of invoking Patran's Analysis Manager:
$P3_HOME/bin/p3analysis_mgr
or
$P3_HOME/p3manager_files/p3analysis_mgr 1 bdf myjob 1
or
$P3_HOME/p3manager_files/p3analysis_mgr 1 bdf myjob MSC.Nastran
This invokes Patran's Analysis Manager by specifying the entire path name to the executable, where
$P3_HOME is a variable containing the Patran installation directory. The entire user interface is brought
up, as specified by the first argument. The input file is called myjob.bdf and the last argument specifies
that MSC Nastran is the analysis code of preference.
Here is another example:


p3analysis_mgr 1 inp myjob 2 -runtype 1 -restart oldjob
or
p3analysis_mgr 1 inp myjob ABAQUS -runtype 1 -restart oldjob
This example invokes the Analysis Manager by assuming the executable name can be found in the user's
path. The first argument specifies that the entire user interface is brought up. The input file is called
myjob.inp, the code of preference is ABAQUS, and the last two arguments indicate that a restart
analysis is to be performed from a previous job called oldjob. Another example:
p3analysis_mgr 3 dat myjob 20
This example requests the termination of an analysis with the jobname myjob and an input file called
myjob.dat. The analysis code specified is a user-defined application defined by the number 20 in the
configuration files.
p3analysis_mgr 5 dat myjob 1
This example requests the completed monitor graph of an MSC Nastran analysis with the jobname
myjob and an input file called myjob.dat.
If only the full interface is brought up by the user in stand-alone mode, it may be more convenient to
specify an alias and place it in a login file (.login, .cshrc) such as:
alias p3am 'p3analysis_mgr 1 dat unknown 1'
This way, all the user has to type is p3am to invoke the program each time.
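The alias above uses C shell syntax, which is what a .login or .cshrc file expects. Users of Bourne-compatible shells can place an equivalent alias in their startup file instead; a sketch of both forms is shown below (the argument values are just the defaults from the example above):

```shell
# C shell (.cshrc / .login) form -- note the quotes around the command:
#   alias p3am 'p3analysis_mgr 1 dat unknown 1'

# Bourne/Korn/Bash (.profile / .bashrc) form:
alias p3am='p3analysis_mgr 1 dat unknown 1'
```

After sourcing the login file, typing p3am invokes the Analysis Manager with those arguments in either shell family.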

Files Created

Aside from the files generated by the analysis codes themselves, Patran's Analysis Manager also
generates files, the contents of which are described in the following table.

File                Description

jobname.mon
    This file contains the final monitoring or status information from a
    submitted job. It can be replotted using the Monitor | Completed Job
    selection from the main form.

jobname.tml
    This is the Analysis Manager log file that gives the status of the
    analysis job and the parameters that were used during execution.

jobname.submit
    This file contains the messages that would normally appear on the screen
    if the job were submitted interactively. It is created when a silent
    (batch) submittal is performed; interactive submittals display all
    messages in a form on the screen.

jobname.stdout
    This file contains any messages that would normally go to the standard
    output (generally the screen) if the user had invoked the analysis code
    from the system prompt.

jobname.stderr
    This file contains any messages from the analysis that are written to
    standard error. If no such messages are generated, this file does not
    appear.

Any or all of these files should be checked for error messages and codes if a job is not successful and the
analysis itself does not appear to be at fault for the abnormal termination.
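When a job fails, a quick way to sweep all of these files for error text is a small shell loop such as the one below (a sketch only; the jobname, the file list, and the search patterns are assumptions to adapt as needed):

```shell
# scan_job_logs: print any error-looking lines from the files the
# Analysis Manager creates for a given jobname (jobname.mon, .tml, ...).
scan_job_logs() {
    job=$1
    for ext in mon tml submit stdout stderr ; do
        f="$job.$ext"
        # Some files (e.g. jobname.stderr) may not exist for every job.
        [ -f "$f" ] && grep -i -e error -e fatal -e abort "$f" | sed "s|^|$f: |"
    done
    return 0
}
```

For example, scan_job_logs myjob would print any matching lines from the myjob.* files that exist, each prefixed with the file name.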

Chapter 3: Submit
Patran Analysis Manager User's Guide

Introduction
Selecting Files
Where to Run Jobs
Windows Submittal

Introduction
The process of submitting a job requires the user to select the file and options desired. The job is
submitted to the system and ultimately executes MD Nastran, ABAQUS, MSC.Marc, or some other
application module. Patran's Analysis Manager properly handles all necessary files and provides
monitoring capability to the user during and after job execution. See Monitor for more information on
monitoring jobs.
In Patran, jobs are submitted in one of two ways: through the Analysis application form for the particular
Analysis Preference, or outside of Patran through Patran's Analysis Manager user interface with the
Action (or tree tab in the Windows interface) set to Submit. Submitting through the Analysis form in
Patran makes the submittal process transparent to the user and is explained in Getting Started.
For more flexibility, the full user interface can be invoked from the system prompt as explained in the
previous chapter, or from within Patran by pressing the Analysis Manager button on the Analysis
application form or by invoking it from the Tools pull-down menu. This gives access to more advanced
and flexible features such as submitting existing input files from different directories, changing groups
or organizations (queue manager daemons/services), selecting different hosts or queues, and configuring
analysis-specific items. The rest of this chapter explains these capabilities.
Below is the UNIX submittal form (see Windows Submittal for the Windows interface).


Selecting Files
The filename of the currently opened job will appear in a textbox on the form shown above.
If this is not the job to be submitted, press the Select File button and a file browser will appear.
Below is the UNIX file browser form (see Windows Submittal for the Windows interface).

All appropriate files in the selected directory are displayed in the file browser. Select the file to be run
from those listed, or change the directory path in the Filter databox and then press the Filter button to
redisplay the files in the new directory. An asterisk (*) serves as a wild card.
Select OK once the file is properly selected and displayed, or double-click on the selected file.
Note:

The directory in the Filter databox indicates where the input file will be copied from upon
submission AND where the results files from the analysis will be copied to upon
completion. Any existing results files of the same names will be overwritten on completion
and you must have write privileges to the specified directory.


Where to Run Jobs


A default host system or queue is provided; however, a different host system or queue may be selected
using the Host/Queue list on the form. Select the host system or queue where the job is to execute.
If the Patran Analysis Manager is to schedule the job, select the host where the job will run. Make the
choice by clicking the toggle to the left of the appropriate host name.
If another scheduling software system (e.g., LSF or NQS) is enabled, select the queue to submit the
job to. The queueing software executes each job on the host it selects.
Below is the UNIX interface (see Windows Submittal for the Windows interface).

The submit function can also be invoked manually from the system prompt. See Invoking the Analysis
Manager Manually for details. It can be invoked in both an interactive and a batch mode.


Note:

Often, the user will look into the Host/Queue listing window described in Host/Queue to
see which host/queue is most appropriate (free or empty) before selecting from the list and
submitting. When submitting to an LSF/NQS queue, the host is selected automatically;
however, a particular host can be selected with the Choose Specific Host button (not shown)
if desired.


Windows Submittal
The interface on Windows platforms is quite different in appearance from the UNIX interface, but the
submittal process is almost identical and just as simple.


Once a file is selected, you can edit it if necessary before submitting it by pressing the
Edit File button. By default, the Notepad application is used as the editor. The default editor can be
changed under the Tools | Options menu selection as shown below.

Chapter 4: Configure
Patran Analysis Manager User's Guide

Introduction
Disk Space
Memory
Mail
Time
General
Restart
Miscellaneous

Introduction
By setting the Action to Configure on the main Patran Analysis Manager form, the user has control of a
variety of options that affect job submittal. The user can customize the submitting environment by setting
any of the parameters discussed in this chapter. These parameters can be saved so that all subsequent
submittals use the new settings, or they can be set for a single submittal only. All of this is under the
control of the user.


Disk Space
The Disk Space configuration is analysis code specific.

MSC Nastran Disk Space


After selecting the Disk Space option on the Object menu, the following Disk Space form appears.


Note:

Patran's Analysis Manager will only check for sufficient disk space if the numbers for
DBALL, MASTER, and SCRATCH are provided. An error message will appear if not
enough disk space is available. If these values are not specified, the job will be submitted
and will run until it completes or until the disk fills and an error occurs.


The Windows interface for MSC Nastran disk space is shown below.


ABAQUS, MSC.Marc, and General Disk Space


After selecting the Disk Space option on the Object menu, the following Disk Space form appears.

The Windows interface for ABAQUS, MSC.Marc, or other user-defined analysis disk space requirements
is shown below.


Memory
The Memory configuration is analysis code specific.

MSC Nastran Memory


After selecting the Memory option on the Object menu, the following Memory form appears.


The Windows interface for MSC Nastran memory requirements is shown below:


ABAQUS Memory
After selecting the Memory option on the Object menu, the following Memory form appears.


The Windows interface for ABAQUS memory requirements is shown below:


MSC.Marc and General Memory


After selecting the Memory option on the Object menu, the following Memory form appears.

The Windows interface for MSC.Marc or other general application memory requirements is shown
below:


Mail
The Mail configuration setting determines whether or not to have mail notification and, if so, where to
send the mail notices.

Note:

In this version there is no mail notification. This feature has been disabled.


Time
Any job can be submitted to be run immediately, with a delay, or at a specific future time. The default
submittal is immediate. To change the submittal time, use the following Time form.


On Windows, the job submit delay and maximum job time are specified directly on the
Submit | Job Control tab as shown below:

Note:

There is no Day-of-the-Week type submittal on Windows.


General
The General configuration form allows preferences to be set for a number of items as described below.
Nothing in this form is analysis specific.

Note:

Items not described on this page are described on subsequent pages in this section.


On Windows, the General settings are specified directly on the Submit | General tab as shown
below:


Note:

Unlike the UNIX interface, to save a default Host/Queue, you select the Host/Queue on the
Job Control tab and then save the settings under the Queue pull-down menu.

Project Directory
The project directory is a subdirectory below the Patran Analysis Manager install path where the
Analysis Manager's job-specific files are created during job execution.
Projects are a method of organizing one's jobs and results. For instance, suppose a user had two different
bracket assembly designs, each containing many similar if not identical parts, and each assembly
file was named assembly.dat. To avoid interference, each file is executed out of a different
project directory.
If the first project is design1 and the second is design2, then one job is executed out of
<file system(s) for selected host>/proj/design1 and the other out of
<file system(s) for selected host>/proj/design2. Hence, the user could have both jobs running at the same
time without any problems, even though they are labeled with the same file name. See Disk Configuration.
When the job is completely finished, all appropriate files are copied back to the originating host/directory
(the machine and directory from which the job was actually submitted).
Pre and Post Commands
The capability exists to execute commands prior to submission of an analysis and after its completion, in
the form of a Pre and Post capability. For instance, suppose that before submitting an analysis the user
needs to translate an input deck from ASCII form to binary form by running some utility called ascbin.
This is done on the submitting host by typing ascbin at the system prompt. The same operation can be
performed by specifying ascbin in the Pre databox for the originating host.
Similarly, if on completion of the analysis, after the files have been copied back from the executing host,
the user needs to run a program called trans to translate the results from one file format to another, the
command trans would be placed in the Post databox for the originating host.
A Pre and a Post command can also be specified on the executing (analysis) host side.
The commands specified in these databoxes can be as simple as a one-word command or can reference
shell scripts, and arguments to the command can be specified. If keywords from Patran's Analysis
Manager, such as the jobname or hostname, are needed, they can be referenced by placing a $ in front of
them. The available keywords that are interpreted in the Pre and Post databoxes can be examined by
pressing the Keyword Index button. For more explanation of keywords, see General Miscellaneous.
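As an illustration, a Pre databox entry of pre_convert $JOBNAME could run a small wrapper like the following (a hypothetical sketch: pre_convert and its messages are not part of the Analysis Manager, and ascbin is simply the converter named in the example above):

```shell
# Hypothetical Pre-command wrapper. The Analysis Manager expands the
# $JOBNAME keyword before the command runs, so the wrapper receives
# the jobname as its first argument.
pre_convert() {
    job=$1
    echo "converting $job.asc to binary $job.dat"
    # ascbin "$job.asc" "$job.dat"   # the actual converter would run here
}

pre_convert demo_job
```

The same pattern works for a Post command (e.g., a wrapper around trans), since Post commands receive the same keyword substitutions.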
Separate User
The Separate User option allows job submittal to the selected system as a different user, in case the current
user does not have an account on the selected system. This must be enabled and set up in advance by the
system administrator. In order for this to work properly, the separate user account specified in this
databox must exist both on the selected system that runs the job and on the machine from which the job
is submitted. See Examples of Configuration Files for an explanation of how to set up separate user
submission.
Default Host/Queue
The Default Host/Queue, if saved, is the host/queue to which jobs are submitted directly from Patran
using the Apply button on the Analysis form. It is also the host/queue used for batch submittals from the
direct Analysis Manager command line, and the one that comes up as the selected default when the full
Analysis Manager interface is started. If this setting is not saved, the default host/queue is the first in
the list.
Patran Database
You can specify the name of a Patran database so that a post-submit task, such as running a script file,
knows which Patran database to use for what it (the script) wants to do (like automatically reading the
results back in after a job has completed).
Copy/Link Results Files
By default, all results files are copied back to the directory where the input file resides. The copy/link
setting is simply a method of transferring files. For remote runs, the files are copied back by the
Analysis Manager. For local runs, however, there is no good reason to transfer the files byte by byte;
setting this flag makes the Analysis Manager either link the files in the work directory to the original
ones or use the system copy command, instead of reading one file and sending bytes over to write
another. If disk space is low, linking is a good choice, but the Analysis Manager must be able to see the
results files on the analysis host's scratch disk space from the submittal host for this to work.
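Conceptually, the two transfer modes behave like the sketch below (the file and directory names are illustrative; the Analysis Manager manages the real work directories itself):

```shell
# Place a results file into a work directory either by copying the bytes
# or by linking back to the original, which uses no extra disk space.
orig=results.op2       # results file in the submittal directory (illustrative)
work=workdir           # work directory (illustrative)
mode=link              # or "copy"

echo "demo results" > "$orig"
mkdir -p "$work"
if [ "$mode" = link ] ; then
    ln -s "../$orig" "$work/$orig"    # link: both names see the same bytes
else
    cp "$orig" "$work/$orig"          # copy: duplicates the disk usage
fi
```

The link only works when the linked path is visible from both sides, which is why the Analysis Manager must be able to see the scratch disk from the submittal host.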


Restart
The Restart configuration is analysis code specific and does not apply to General applications.
Within Patran, to perform a restart using the Analysis Manager, the job is submitted from the Analysis
application as normal; however, a restart job must be indicated. When the Analysis Manager's main
interface is invoked with a restart job from Patran, this information is passed to the Analysis Manager and
the restart jobname shows up in the Configure | Restart form. The restart job can be submitted directly
from the main form or from Patran. In either case, the restart job looks for the previous job to be restarted
in the local path and/or on the host machine. If the restart jobname is not specified, the databases must be
located on the host machine to perform a successful restart.

MSC Nastran Restarts


After selecting the Restart option on the menu, the following Restart form appears. To save the MSC
Nastran database for restart using the Patran Analysis Manager, the Scratch Run toggle must be set to
No in the Configure | Restart form. If the Save Databases toggle is set to No, the database is deleted
from the host machine after the analysis. If the Copy Databases Back toggle is set to No, the databases
are not copied back to the local path. The database files are given .DBO and .MST filename extensions
for the .DBALL and .MASTER files, respectively.
A restart job that is submitted with the Analysis Manager searches for the initial job's .MST files in
the path where Patran is invoked. Therefore, if this file and the other database files are renamed or moved,
the restart job will not be successful.
Patran automatically generates the ASSIGN MASTER FMS statement required to perform a restart. If
the restart .bdf is not generated by Patran and the Analysis Manager is used to submit the job, the
.bdf must contain an ASSIGN MASTER FMS statement that specifies the name and location of the
restart database. The following error will be issued by the Patran Analysis Manager if the ASSIGN
statement is missing:
ERROR... Restart type file but no MASTER file specified with an
ASSIGN statement. Use an ASSIGN statement to locate at least the
MASTER database file(s) for previous runs.
See the UNIX and Windows forms below for more explanation.
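A minimal hand-written restart deck might therefore begin with FMS statements along these lines (a sketch only; the database path and the RESTART options shown are illustrative, not generated by Patran):

```
ASSIGN MASTER='/users/smith/proj/design1/firstjob.MASTER'
RESTART VERSION=LAST,KEEP
$ ...executive control, case control, and bulk data follow...
```

The ASSIGN statement is what tells MSC Nastran (and the Analysis Manager's check above) where the MASTER database from the previous run resides.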


MSC.Marc Restarts
Restarts in MSC.Marc are quite similar to those in MSC Nastran.

ABAQUS Restarts


After selecting the Restart option on the menu, the following Restart form appears.


Miscellaneous
The Miscellaneous configuration is analysis code specific.

MSC Nastran Miscellaneous


After selecting the Miscellaneous option on the menu, the following form appears.


MSC.Marc Miscellaneous
After selecting the Miscellaneous option on the menu, the following form appears.
Note:

When invoked from Patran, items requiring file locations are usually passed directly into
the Analysis Manager such as the User Subroutine, POST file, and View Factor file. Thus,
in this case, there would be no need to reenter these items.


ABAQUS Miscellaneous
After selecting the Miscellaneous option on the menu, the following form appears.


General Miscellaneous
After selecting the Miscellaneous option on the menu, the following form appears.
Note:

Some examples of General analysis applications are discussed below.


Examples of some specific command lines used to invoke analysis codes are given here.
Example 1:
The first example involves the ANSYS 5 code. First, the Analysis Preference must be set to ANSYS 5
from Patran's Analysis Preference form, and an input deck for ANSYS 5 must have been generated via
the Analysis application (this is done by setting the Action to Analyze and the Method to Analysis
Deck). Then Patran's Analysis Manager can be invoked from the Analysis main form. Note that a direct
submittal from Patran is not feasible in this or the subsequent example.
The jobfile (jobname.prp in this case) is automatically displayed as the input file and the Submit
button can be pressed. The jobfile is the only file that is copied over to the remote host with this general
analysis submittal capability.
In the host.cfg configuration file the path_name of the executable is defined. The rest of the
command line would then look like this:
-j $JOBNAME < $JOBFILE > $JOBNAME.log
If the executable and path defined is /ansys/bin/ansys.er4k50a, then the entire command
that is executed is:
/ansys/bin/ansys.er4k50a -j $JOBNAME < $JOBFILE > $JOBNAME.log
Here the executable is invoked with a parameter (-j) specifying the jobname. The input file
($JOBFILE) is redirected as the standard input using the UNIX redirect symbol, and the standard output
is redirected into a file called $JOBNAME.log. The variables beginning with the $ sign are passed by
Patran's Analysis Manager. All resulting output files are copied back to the invoking host and directory
on completion.
on completion.
Example 2:
This is a more complicated example, where an analysis code needs more than one input file. The general
analysis capability in Patran's Analysis Manager only copies one input file over to the remote host for
execution. If more than one file needs to be copied over, then a script must be developed for this purpose.
This example shows how Patran FEA can be submitted via a script that does the proper copying of files
to the remote host.
The Analysis Preference in Patran is set to Patran FEA and, in addition to setting the Preference, the
input file suffix is specified as .job. Patran FEA needs three possible input files: jobname.job,
jobname.ntl, and an auxiliary input file. The jobname.job file is automatically copied over to the
remote host. The auxiliary input file can be called anything and is specified in the jobname.job file.
A shell script called FeaExecute is created and placed on all hosts that allow execution of Patran FEA.
This FeaExecute script does the following:
1. Parses the jobname.job file to find the name of the auxiliary input file if it is specified.
2. Copies the auxiliary input file and the jobname.ntl file to the remote host.
3. Executes the FeaControl script, which controls actual execution of the Patran FEA job. This is
a standard script which is delivered with the Patran FEA installation.


In the Patran Analysis Manager configuration file, the FeaExecute script and its path are specified.
The input parameters for this script are:
-j $JOBNAME -h $P3AMHOST -d $P3AMDIR
which specify the jobname, the host from which the job was submitted, and the directory on that host
from which the job was submitted. With this information the job can be successfully run. The full
command that is executed on the remote host is (assuming a location of /fea/bin for FeaExecute):
/fea/bin/FeaExecute -j $JOBNAME -h $P3AMHOST -d $P3AMDIR
The FeaExecute script contents are shown for completeness:
#! /bin/sh
# Script to submit Patran FEA to a remote host via the Analysis Manager
# Define a function for displaying valid params for this script
abort_usage( ) {
cat 2>&1 <</
Usage: $Cmd -j Jobname -h Remote_Host -d Remote_Dir
/
exit 1
}
# Define a function for checking status
check_status( ) {
Status=$1
if [ $1 -ne 0 ] ; then
echo "Error detected ... aborting $Cmd"
exit 2
fi
}
# Define a function for doing a general-purpose exit
exit_normal( ) {
echo $Cmd complete
exit 0
}
# Define a function for extracting keyword values from
# the .job file. Convert keyword value to upper case
GetKeyValue( )
{
JobFile=${1?} ; Key=`echo ${2?} | sed 's/ //g'`
cat $JobFile | sed 's/ //g' | grep -i "^$Key=" | \
sed 's/^.*=//' | tr '[a-z]' '[A-Z]'
}
# Define a function for extracting keyword values from
# the .job file. Return the correct case for all characters
# (don't force anything to upper case.)
GetKeyValueCC( )
{
JobFile=${1?} ; Key=`echo ${2?} | sed 's/ //g'`
cat $JobFile | sed 's/ //g' | grep -i "^$Key=" | \
sed 's/^.*=//'
}
# Define a function to get the Jobname from the jobfilename
#
# usage: get_Jobname filespecification
#
get_Jobname()
{
echo $1 | sed -e 's;^.*/;;' -e 's;\..*$;;'
}
# Determine the command name of this script
Cmd=`echo $0 | sed 's;^.*/;;'`
# Assign the default argument parameter values
Jobname=
Verbose=
if [ "<installation_directory>" = "" ] ; then
Acommand=<installation_directory>/bin/FeaControl
else
Acommand=<installation_directory>/bin/FeaControl
fi
Status=0
# Parse through the input arguments.
if [ $# -ne 6 ] ; then
abort_usage
fi
while [ $# -ne 0 ] ; do
case $1 in
-j) Jobname=$2 ; shift 2 ;;
-h) remhost=$2 ; shift 2 ;;
-d) remdir=$2 ; shift 2 ;;
*) abort_usage ;;
esac
done
# Runtime determination of machine/system type
OsName=`uname -a | awk '{print $1}'`
case $OsName in
SunOS)
Rsh=rsh
RshN1=-n
RshN2=
;;
HP-UX)
Rsh=remsh
RshN1=
RshN2=
;;
AIX)
Rsh=/usr/ucb/remsh
RshN1=
RshN2=-n
;;
ULTRIX)
Rsh=/usr/ucb/rsh
RshN1=
RshN2=-n
;;
IRIX)
Rsh=rsh
RshN1=
RshN2=-n
;;

*)
Rsh=rsh
RshN1=
RshN2=
;;
esac
# Determine the fully expanded names for the input files.
JobFile=${Jobname}.job
AifFile=`GetKeyValueCC $JobFile "AUXILIARY INPUT FILE"`
# Copy the files over from the remote host
NtlFile=${Jobname}.ntl
lochost=`hostname`
curdir=`pwd`
if [ "$curdir" = "$remdir" ] ; then
crap=1
else
if [ "$remhost" = "$lochost" ] ; then
cp ${remdir}/${NtlFile} .
if [ "$AifFile" = "" ] ; then
crap=1
else
cp ${remdir}/${AifFile} .
fi
else
rcp ${remhost}:${remdir}/${NtlFile} .
if [ "$AifFile" = "" ] ; then
crap=1
else
rcp ${remhost}:${remdir}/${AifFile} .
fi
fi
fi
# Perform the analysis
$Acommand $Jobname ; check_status $?
# Successful exit of script
exit_normal

Chapter 5: Monitor
Patran Analysis Manager User's Guide

Introduction
Running Job
Completed Job
Host/Queue

Introduction
By setting the Action to Monitor on the main Patran Analysis Manager form, the user can monitor not
only his active jobs but also the Host or Queue activity. In addition, completed monitoring graphs can
be recalled at any time. Each of these functions is explained in this chapter.

Each of these functions for monitoring jobs or hosts/queues is also accessible directly from the Analysis
application form within Patran. The only difference is that the full user interface of the Patran Analysis
Manager is not accessed first; instead, the monitoring forms are displayed directly, as explained in the
next few pages.
Note:

The UNIX interface is shown above. In subsequent sections both the UNIX and the Windows
interface are shown. Monitoring in the Windows interface is done from the Monitor tree tabs.


Running Job
With the Action set to monitor a Running Job, pertinent information about a specific job that is currently
running or queued to run can be obtained. Jobs can be monitored from any host in the Analysis Manager's
configuration, not just from where they were submitted.

Note:

This form is not displayed when a job is monitored directly from Patran. Instead, only the
monitoring form is displayed as shown on the next page since all the pertinent information to
monitor a job is passed in from Patran. The Windows interface is displayed further down also.


A graph of the selected running job appears, showing the duration of the job where it has been or is
running.

The following table describes all the widgets that appear in this job graph.

Item                Description

Job Status
    This widget gives the total elapsed time in blue and the actual CPU time
    in red. A check mark appears when the job is completed successfully;
    otherwise, an X appears. The clear portion of the blue bar indicates the
    amount of time the job was queued before execution began. Elapsed and
    CPU time are reported in minutes.

Percent CPU Usage
    This widget gives the percentage of CPU that is being used by the
    analysis code at any given time. The maximum percentage of CPU during
    job execution is indicated as a grey shade which remains at the highest
    level of % CPU usage.

Total Disk Usage
    This widget gives the total amount of disk space used by the job during
    execution, in megabytes.

Percent Disk Usage
    This widget gives the percentage of the total disk space that this job
    occupies at any given time for all file systems. If you click on this
    widget with the mouse, all file systems will be shown. The maximum
    percentage of disk space used during job execution is indicated as a
    grey shade which remains at the highest level.

Job Information
    Job # - the sequential number of the job
    Job Name - the name of the job
    Owner - the name of the user or job owner
    Elapsed Time - how long the job has been running

Returning Job Files
    All files created during execution are copied back and displayed in this
    list box. After job completion and during job execution, it is possible
    to click on any of these files to view them with a simple editor/viewer.
    The following keystrokes are available in this viewer window:
    ctrl-s:  search for a string
    ctrl-n:  repeat search
    ctrl-c:  exit the viewer
    ctrl-<:  go to the top of the file
    ctrl->:  go to the bottom of the file


Controls
    Remove beginning queue time - takes off the queued portion of the
    graphics bar, e.g., the portion that is not blue before the job begins.
    Suspend/Resume Job - when toggled on, the job will be indefinitely
    suspended. A banner across the CPU dial will display the word SUSPENDED
    while the job is suspended. Toggle the switch off to resume the job;
    the banner will be removed.
    Update (Sec.) - how often to update the graph/display
    Pixels Per Min - how many pixels wide per minute
    MB Per Inch - how many megabytes per inch to be displayed
    Normalize Graph - make the graph fit in the window area

Close
    Closes the monitoring form.

Status Window
    Status messages are returned in this window. If the log file is being
    monitored, then log file lines will appear here also for MSC Nastran
    and ABAQUS.

The bottom left panel lists information about the job, such as the date and time of the event, the task name,
the host name, and the status. Any error and status messages will appear here. An example listing is:
Fri Jan  4 13:31:31 1994 <TASK COMPLETED>   Task Name: shock

The running job function can also be invoked manually from the system prompt. See Invoking the
Analysis Manager Manually for details.

Windows Interface
For Running Jobs, when a job is submitted from the Windows interface, the user is queried as to
whether he/she wants the interface to switch automatically to the monitoring mode.

When a job is running, the Monitor tree shows running jobs and jobs that have been queued.


When a running job in the tree structure is selected, three tabs become available that give the specific
status of the job, allow viewing of created output files, and give a graphical display of memory, CPU,
and disk usage.


Completed Job
This is an Analysis Manager utility that allows the user to graph a particular completed job run by the
Analysis Manager.

Note: This form is not displayed if this action is selected directly from the Analysis application
form in Patran. Instead, only the monitoring form is displayed as shown on the next page.
The Windows interface is also shown.


The .mon file is created when a job is first submitted to Patran's Analysis Manager. Information on all
the job tasks is written to the .mon file. Time submitted, job name, job number, time actually run, time
finished and completion status are all recorded in the file, so that this Analysis Manager function can read
the file and have enough information to graph the jobs progress completely.
The explanation of the graphs on this form is identical to that of a Running Job except that the Update
slider bar does not show up since it is not applicable to a completed job.

Windows Interface
For Completed Jobs, the Windows interface displays them under the Completed Jobs tab in the
Monitor tree.


Host/Queue
Information about all hosts or queues used by Patran's Analysis Manager and jobs submitted through the
Analysis Manager can be reviewed using the Monitor Host/Queue selection. Options available include
Job Listing, Host Status, Queue Manager Log and a Full Listing. Press the Apply
button to invoke these functions. The user can vary how often the information is updated, using the
slider control.

The Host/Queue monitoring function can also be invoked manually from the system prompt. See
Invoking the Analysis Manager Manually for details.


Job Listing
The initial application form of Monitor's Host/Queue appears as follows:

At the top of the main form for Monitor Queue is a slider labeled Update Time (Min.). Drag the slider
to the left to shorten the interval between information updates, or drag it to the right to lengthen the
interval. The default interval time is 5 minutes. In the Windows interface the refresh setting
is set under the View | Update Speeds menu pick.
The update interval may be changed at any time during the use of any Monitor Queue options.
All jobs that are currently running in some capacity are listed. Information about each job includes:
Job Number, Job Name, Owner, and Time. The job number is a unique, sequential number that the
Analysis Manager generates for each job submitted to it. Pressing the Close button will close the
monitor form.


Host Status
When the Host Status toggle is highlighted the form appears as follows:

The status is reported on all hosts or queues used by the Analysis Manager. Information about each
host/queue includes: host/queue name (Host Name), number of jobs running (# Running), number of jobs
queued (# Queued), maximum allowed to run concurrently (Max Running), and Host Type (e.g., MSC
Nastran).
If NQS or LSF is being used, queue information is provided instead of host information. See Submit for
more information on default settings.
The update interval may be changed at any time during the use of any Monitor Queue options. The default
interval time is 5 minutes. In Windows, use the View | Update Speeds menu option.
To exit from the Monitor Queue, select the Close button on the bottom of the main form on the right.
Log files are unaffected when the form is closed.


Queue Manager Log


When the Queue Manager Log toggle is selected, the form appears as follows:

The most recent jobs submitted are listed, regardless of where or when they were run. Information about
each job includes: date and time of event, event description, job number, job or task name or host name,
task type or PID (process id of task), and owner. Most recent jobs are listed in the text list box from the
time the Analysis Manager's Queue Manager daemon was started. See System Management for a
description of the Queue Manager daemon.
The update interval may be changed at any time during the use of any Monitor Queue options. The default
interval time is 5 minutes. In Windows, use the View | Update Speeds menu option.
To exit from the Monitor Queue, select the Close button on the bottom of the main form on the right.
Log files are unaffected when the form is closed.


Full Listing
When Full Listing is selected, the form appears as follows:

The Full Listing information shows all job tasks submitted. Information about each host/queue includes:
status (blue = running; red = queued), job number, task name, task type, date and time submitted, and
owner.
If an additional scheduler (LSF/Utopia) is present and being used, the queue name shown is a pointer
to the actual queue name.
The update interval may be changed at any time during the use of any Monitor Queue options. The default
interval time is 5 minutes.
To exit from the Monitor Queue, select the Close button on the bottom of the main form on the right.
Note:

There is no Full Listing form in the Windows interface.

CPU Loads
When CPU Loads is selected, the form appears as follows:


The load on the workstations and computers can be determined by inspecting this form, which
periodically updates itself. The list of hosts or queues appears with the percent CPU usage, total amount
of free disk space, and available memory at that particular snapshot in time. The user may sort the hosts
by CPU UTILIZATION, FREE DISK SPACE, or AVAILABLE MEMORY, so that the host or
queue with the best situation appears at the top. The best hosts or queues for each category of CPU,
disk space, and memory are indicated in blue.


Chapter 6: Abort

Selecting a Job
Aborting a Job

Selecting a Job
This capability allows the user to terminate a running job originally submitted through Patran's Analysis
Manager. When aborting a job, the Analysis Manager cleans up all appropriate files.

The abort function can also be invoked manually from the system prompt. See Invoking the Analysis
Manager Manually for details. A currently running job must be available.


Aborting a Job
You can only abort jobs which you own (i.e., originally submitted by you).
When a job is aborted, the analysis files are removed from where they were copied to, and all scratch and
database files are removed, unless the job is a restart from a previous run, in which case the scratch files
are removed, but the original database files from previous runs are left unaffected.
Note:

When a job is aborted from within Patran, no user interface appears. The job is simply aborted
after the confirmation.

UNIX Interface
Press the Apply button on the main form with the Action set to Abort as shown on the previous page.
You are asked to confirm with:
Are you sure you wish to abort job # <jobname> ?
Press the OK button to confirm.
The Cancel button will take no action and close the Abort form.

Windows Interface
There are three ways to abort a job from the Windows interface.
1. When the job is initially submitted, a modal window appears asking whether you want to monitor
or abort the job or simply do nothing and let the job run.

2. Once the job is running, use the Job Control tab in the Monitor tree structure. There is an
Abort button on this form to terminate the job.


3. From the Monitor | Running Jobs tree structure you can right-click on a running job. A
pulldown menu appears from which you can select Abort.

Chapter 7: System Management

Directory Structure
Analysis Manager Programs
Organization Environment Variables
Installation
X Resource Settings
Configuration Management Interface
Examples of Configuration Files
Starting the Queue/Remote Managers

Directory Structure
The Analysis Manager has a set directory structure, configurable environment variables and other tunable
parameters which are discussed in this chapter.
The Analysis Manager directory structure is displayed below. The main installation directory is shown
as an environment variable, $P3_HOME = <installation_directory>. Typically this is
/msc/patran200x or something similar.

where:
<org> (optional) is an additional organizational group and shares the same directory tree as default
yet will have its own unique set of configuration files. See Organization Environment Variables.
<arch> is one of:


HP700 - Hewlett Packard HP-UX
RS6K - IBM RS/6000 AIX
SGI5 - Silicon Graphics IRIX
SUNS - Sun SPARC Solaris
LX86 - Linux (MSC or Red Hat)
WINNT - Windows 2000 or XP

There may be more than one <arch> directory in a filesystem. Architecture types that are not applicable
to your installation may be deleted to reduce disk space usage; however, all machine architecture types
that will be accessed by the Analysis Manager must be kept. Each one of the executables under the bin
directory is described in Analysis Manager Programs.
All configuration files are explained in detail in Examples of Configuration Files. These include org.cfg,
host.cfg, disk.cfg, lsf.cfg, and nqs.cfg.
Organization groups and their uses are described in Organization Environment Variables.
The QueMgr.log file is created when a Queue Manager daemon is started; it does not exist until this
time and, therefore, will not appear in the above directory structure until after the initial installation. Use
of this file is described in Starting the Queue/Remote Managers. The file QueMgr.rdb is also
created when a Queue Manager daemon is started and is a database containing job-specific statistics of
every job ever submitted through the Queue Manager for that particular set of configuration files or
<org>. The contents of this file can be viewed on Unix platforms using the Job_Viewer executable.
Items in the bin and exe directories are scripts to enable easier access to the main programs. These scripts
make sure that the proper environment variables are set before invoking the particular program that resides
in $P3_HOME/p3manager_files/bin/<arch>.

p3analysis_mgr - Invokes P3Mgr
p3am_admin - Invokes AdmMgr (Unix only - on Windows this is P3Mgr)
p3am_viewer - Invokes Job_Viewer (Unix only)
QueMgr - Invokes QueMgr (Unix only)
RmtMgr - Invokes RmtMgr (Unix only)

Note: The directories (conf, log, proj) for each set of configuration files (organizational
structure) must have read, write, and execute (777) permission for all users. Missing
permissions can be a cause of task manager errors.
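The permission requirement above can be applied from a shell. A minimal sketch, assuming the default org named "default"; a temporary directory stands in for the real installation root here, so substitute your actual $P3_HOME:

```shell
# Hypothetical $P3_HOME for illustration; substitute your actual installation root.
P3_HOME=$(mktemp -d)

# Create and open up the conf, log, and proj directories for the default org.
for d in conf log proj; do
    mkdir -p "$P3_HOME/p3manager_files/default/$d"
    chmod 777 "$P3_HOME/p3manager_files/default/$d"
done

# Verify the mode bits on one of them.
perms=$(ls -ld "$P3_HOME/p3manager_files/default/conf" | cut -c1-10)
echo "$perms"   # prints drwxrwxrwx
```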


Analysis Manager Programs


The Analysis Manager consists of two main parts: the user interface and a number of daemons. Each
of these parts and their executables is described below. All executables are found in the
$P3_HOME/p3manager_files/bin directory, where $P3_HOME is the installation directory,
typically /msc/patran200x.
User Interface
The first part of the Analysis Manager is the user interface from which a user submits and monitors the
progress of jobs (P3Mgr is the executable name). This program can be executed in many different ways
and from many different locations (i.e., either locally or remotely over a network). An administration tool
is also available to easily set up and edit configuration files, and to test for proper installation. (AdmMgr is
the executable name on Unix. On Windows there is no separate executable; it is part of P3Mgr.) A small
editor program (p3edit) is also part of the user interface portion and is invoked directly from the main
user interface when editing and viewing files.
Two shell scripts are actually used to invoke the Analysis Manager and the administration tool. These are
p3analysis_mgr and p3am_admin. When properly installed, these scripts automatically determine the
installation path directory structure and which machine architecture executable to use.
Daemons
The second part of the Analysis Manager is a series of daemons (or services on Windows) which actually
execute and control jobs. These daemons are responsible for queuing jobs, finding a host to run jobs,
moving data files to selected hosts, executing the selected analysis code, etc. Each one is described here:
Queue Manager
This is a daemon (or service on Windows) which must run all the time (QueMgr executable name). The
machine on which the Queue Manager runs is known as the master host. Generally it runs as root (or
administrator) and is responsible for scheduling jobs. The Queue Manager always has a complete account
of all jobs running and/or queued. When a request to run a job is received, the Queue Manager checks to
see what hosts are eligible to run the selected code and how many jobs each host is currently running. If
there is a host which is eligible, the Queue Manager will start up the task on that host. If the Analysis
Manager is installed along with a third party scheduling program (i.e., LSF or NQS) the Queue Manager
is responsible for communicating with the scheduling software to control job execution. In summary, the
Queue Manager is the Scheduler of the Analysis Manager environment. (Also, see Starting the
Queue/Remote Managers, Starting the Queue Manager.)
Remote Manager
There is only one Queue Manager, but there are many Remote Managers. A RmtMgr process runs on
each and every analysis machine. These are machines that are configured to run an analysis such as MSC
Nastran or MSC.Marc. A RmtMgr can also be run on each submit machine (recommended - see Job
Manager below). These are machines from which the analysis was submitted such as where Patran runs.
If the submit and analysis machines are the same host, then only one RmtMgr needs to be running. The
QueMgr and RmtMgr processes start up at boot time automatically and run always, but use very little

Chapter 7: System Management 93


Analysis Manager Programs

memory and cpu resources, so users will not notice performance effects. Also these processes can run as
root (Administrator on Windows) or as any user, if these privileges are not available.
Each RmtMgr binds to a known/chosen port number that is the same for every RmtMgr machine. Each
RmtMgr process collects machine statistics on free CPU cycles, free memory and free disk space and
returns this data to the QueMgr at frequent intervals. The RmtMgr is also used to perform a
command and return the output from that command on the host on which it is running. This is essentially
equivalent to a remote shell (rsh host) command on a Unix machine.
Note:

It's best to run the RmtMgr service on Windows as someone other than SYSTEM (the
default if you do not do anything different). After installing the RmtMgr, use the control
panel to access the services, find the RmtMgr, and change its startup to use a different
account: something generic if it exists, or an Analysis Manager admin account. If
the RmtMgr is running as a user and not SYSTEM, then the NasMgr/MarMgr/AbaMgr/
GenMgr will run as this user and have access to Windows networking, shared drives, and
so on. If it is run as SYSTEM, then it is limited to local Windows drives, shares, etc. The
QueMgr does not do much in the way of files, so running it as SYSTEM is OK.

Job Manager
The Job Manager (JobMgr executable name) runs for the life of a job. When a user submits a job using
the Analysis Manager, the user interface tells the Queue Manager about the job and then starts a Job
Manager daemon. The Job Manager daemon will receive and save job information from the Analysis
Manager's user interface. The main purpose of the Job Manager is to record job status for monitoring and
file transfer.
During the execution of jobs, users utilizing the Analysis Manager's user interface program can
seamlessly connect to the Job Manager of their job and see what the status of the job is. In summary, the
Job Manager controls the execution of a single job and is always aware of the current status of that job.
The Job Manager runs on the submit host machine.
Note:

On Windows, if a RmtMgr is running on a local machine, the JobMgr will be started
through it as usual; if a RmtMgr is NOT running, then a JobMgr will be started
anyway, and the submit will still work fine. The only restriction in this latter case is that if
the user logs off, a popup dialog appears asking if the user really wants to log off, and the
job will be terminated if the user does. This will not happen if the RmtMgr is running as
a service.

MSC Nastran Manager


The MSC Nastran Manager (NasMgr executable name) runs only for the life of a job. The MSC Nastran
Manager is started by the Queue Manager when the task reaches the top of its queue and is eligible to
run. The purpose of the MSC Nastran Manager is to run the MSC Nastran job. When the NasMgr first
comes up, it generates FMS (if necessary), checks to see if there is enough disk space, etc. The NasMgr
will make sure it has all of the files it needs for the job. If not, it will obtain them. Finally, the MSC
Nastran job is started.


During execution, the NasMgr relays pertinent information (disk usage, cpu, etc.) to the Job Manager
(JobMgr), which then updates the graphical information displayed to the user. The NasMgr is also
responsible for cleaning up files and putting results back to desired locations, as well as reporting its
status to the Job Manager. This daemon runs on the analysis host machine and only for the life of the
analysis.
MSC.Marc Manager
The MSC.Marc Manager (MarMgr executable name) runs only for the life of a job. The MarMgr is
identical in function to the MSC Nastran Manager (NasMgr) except it is for execution of MSC.Marc
analyses.
ABAQUS Manager
The ABAQUS Manager (AbaMgr executable name) runs only for the life of a job. The AbaMgr is
identical in function to the MSC Nastran Manager (NasMgr) except it is for execution of ABAQUS
analyses.
General Manager
The General Manager (GenMgr executable name) runs only for the life of a job. The GenMgr is
identical in function to the MSC Nastran Manager (NasMgr) except it is for execution of general analysis
applications.
Editor
The editor (p3edit executable name) runs when requested from P3Mgr when viewing results files or
editing the input deck.
Text Manager
The Text Manager (TxtMgr executable name) is a text based interface to the Analysis Manager to
illustrate the Analysis Manager API. See Application Procedural Interface (API).
Job Viewer
The job viewer (Job_Viewer executable name) is a simple program available on UNIX platforms for
opening and viewing job statistics from the Analysis Manager's database file. This file is generally located
in $P3_HOME/p3manager_files/default/log/QueMgr.rdb. You must run Job_Viewer
and then open the file manually.

Analysis Manager Program Startup Arguments


AbaMgr, NasMgr, MarMgr, GenMgr
Started automatically by QueMgr (or NQS/LSF); no command line arguments.


JobMgr
Started automatically by P3Mgr/TxtMgr (or RmtMgr); no command line arguments.
RmtMgr
This is a daemon on Unix or a service on Windows, started automatically at boot time. Possible
command line arguments (also see Organization Environment Variables):

-version - Prints the Analysis Manager version and exits.

-ultima - Changes P3_HOME to AM_HOME and p3manager_files to analysis_manager
so that no "p3" is required in the environment. (Generally not used.)

-port <####> - Port number to use. MUST be the SAME port number for ALL RmtMgr's for the
whole network (per QueMgr). Default is 1800 if not set.

-path <path> - Specifies the base path for finding the Analysis Manager executables:
$P3_HOME/p3manager_files/bin/{arch}/*Mgr. <path> is the base path $P3_HOME.
The default is the path the program was started with, but in the case of "./RmtMgr ...." this
will not work. If a full path is used to start RmtMgr (as in a startup script), then this argument
is not needed.

-orgpath <path> - Specifies the base path for finding the Analysis Manager org tree
(configuration files and directories): $P3_HOME/p3manager_files/{org}/{conf,log,proj}.
<path> is the base path $P3_HOME. Use this to specify the base path of the org tree only if it
differs from the -path argument. RmtMgr writes files in the proj/{projectname} directories,
so if this is not the default (desired) location (same as -path above), then this argument needs
to be set.

-name <name> - Windows only. Use to run more than one RmtMgr service. Each must have a
unique name so the start/stop method can distinguish which one to work with. Default <name>
is MSCRmtMgr.
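A boot-time startup line can be assembled from these arguments. The sketch below only constructs and prints the command rather than running it; the installation root, architecture directory, and port are illustrative assumptions:

```shell
# Assumed installation root and architecture directory.
P3_HOME=/msc/patran200x
ARCH=LX86

# A full path avoids the "./RmtMgr" caveat described above; port 1800 is the default.
cmd="$P3_HOME/p3manager_files/bin/$ARCH/RmtMgr -port 1800 -path $P3_HOME"
echo "$cmd"
```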

QueMgr (AdmMgr)
This is a daemon on Unix or a service on Windows, started automatically at boot time. Possible
command line arguments (also see Organization Environment Variables):

Note: On Unix, the AdmMgr (p3am_admin) accepts the same arguments as QueMgr.

96

Patran Analysis Manager Users Guide


Analysis Manager Programs

-version - Prints the Analysis Manager version and exits.

-ultima - Changes P3_HOME to AM_HOME and p3manager_files to analysis_manager
so that no "p3" is required in the environment. (Generally not used.)

-port <####> - Port number to use. The default is 1900 if not set. If using an org.cfg file,
use this argument with the -org option below to force a port number and org name.

-path <path> - Specifies the base path for finding the Analysis Manager executables:
$P3_HOME/p3manager_files/bin/{arch}/*Mgr. <path> is the base path $P3_HOME.
The default is the path the program was started with, but in the case of "./QueMgr ...." this
will not work. If a full path is used to start QueMgr (as in a startup script), then this argument
is not needed.

-orgpath <path> - Specifies the base path for finding the Analysis Manager org tree
(configuration files and directories): $P3_HOME/p3manager_files/{org}/{conf,log,proj}.
<path> is the base path $P3_HOME. Use this to specify the base path of the org tree only if it
differs from the -path argument. RmtMgr writes files in the proj/{projectname} directories,
so if this is not the default (desired) location (same as -path above), then this argument needs
to be set.

-name <name> - Windows only. Use to run more than one QueMgr service. Each must have a
unique name so the start/stop method can distinguish which one to work with. Default <name>
is MSCQueMgr.

-rmtmgrport <####> - The port number to use for ALL RmtMgr's that this QueMgr will connect to
for the entire network. Default is 1800 (the default RmtMgr -port value) if not set.

-rmgrport <####> - Same as -rmtmgrport above.

-org <org> - org name to use. This is the name of the directory containing the configuration
files for this Queue Manager daemon (i.e.,
$P3_HOME/p3manager_files/{org}/{conf,log,proj}). The default is default. If using
an org.cfg file, use this with the -port option above to force a port number and org name.

-delayint <###> - Default is 20 seconds. This is rarely used. Every delay_interval seconds
the QueMgr asks another host in its list of all job hosts for a status. If it has not heard back
from a host in (delay_interval * number_of_hosts * 3) + 30 seconds, then approximately
3 round trips through the host list have passed without a response, so the QueMgr marks the
host as DOWN and will not submit new jobs to it until it starts responding again. This flag
modifies that interval to account for network problems, etc., which may cause the Analysis
Manager to think some hosts are down when they are not really down.
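The DOWN-marking timeout above can be computed directly. A sketch with the default delay interval and an assumed five-host job list:

```shell
delay_interval=20   # default -delayint value, in seconds
number_of_hosts=5   # assumed size of the job-host list

# Timeout after which a silent host is marked DOWN:
# (delay_interval * number_of_hosts * 3) + 30
timeout=$(( delay_interval * number_of_hosts * 3 + 30 ))
echo "$timeout"     # prints 330
```

So with five hosts at the default interval, a host must be silent for 330 seconds before the QueMgr stops submitting jobs to it.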

P3Mgr
This program is started by the user. If 4 arguments are present, then it is assumed that:

arg 1 - Startup type. One of the following:
1 - Start Up Full Interface.
2 - Start Up Queue Monitor Now.
3 - Start Up Abort Job Now.
4 - Start Up Monitor Running Job Now.
5 - Start Up Monitor Completed Job Now.
6 - Start Up Submit Now. (Submit current job)
7 - Start Up Submit Quiet. (Submit current job without GUI)
8 - Start Up Submit Quiet and wait for job to complete. (with exit status)

arg 2 - Extension of the job input file.

arg 3 - Job name (with optional path).

arg 4 - Application type (integer):
1 - MSC Nastran
2 - ABAQUS
3 - MSC.Marc
20 through 29 - General (user-defined applications)

If 4 more args are specified (Unix only), then it is assumed:


arg 5 - X position of the upper left corner of the Patran right hand side interface, in inches.

arg 6 - Y position of the upper left corner of the Patran right hand side interface, in inches.

arg 7 - Width of the Patran right hand side interface, in inches.

arg 8 - Height of the Patran right hand side interface, in inches.

The following arguments can be used alone or after the first 4 arguments above:
-rcf <file> - rcf file to use for all GUI settings (same format as -env/-envall output);
see Analysis Manager Environment File.

-auth <file> - License file to use. The MSC_LICENSE_FILE environment variable is the
default. This can also point to a port as well as a physical license file (with path), e.g.,
-auth 1700@banff.

-env - Prints the rcf / GUI settings for all applications.

-envall - Same as -env but even more information is printed.

-extra <args> - Adds extra arguments to the end of a particular command line.

-runtype <#> - ABAQUS ONLY. Sets the run type to:
0 - full analysis
1 - restart
2 - data check

-restart <file> - ABAQUS ONLY. Coldstart filename for restart.

-coldstart <file> - MSC Nastran ONLY. Coldstart filename for restart. MSC.Marc uses the
rcfile; see Analysis Manager Environment File.
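Putting the positional and optional arguments together, a quiet submit of an MSC Nastran job might look like the following. The sketch only assembles and prints the command line; the job path and rc file location are hypothetical:

```shell
# 7 = Start Up Submit Quiet, bdf = input-deck extension, 1 = MSC Nastran.
jobname=/home/user1/jobs/wing        # hypothetical job name with path
rcf=/home/user1/.p3mgrrc             # hypothetical rc file
cmd="p3analysis_mgr 7 bdf $jobname 1 -rcf $rcf"
echo "$cmd"
```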

TxtMgr
This program is started by the user to manage jobs through a simple text submittal program. Possible
arguments are:

-version - Same as RmtMgr.

-qmgrhost <hostname> - Hostname the QueMgr is running on. Default is the local host if no
org.cfg is found.

-qmgrport <####> - Port the QueMgr is running on. Default is 1900 if no org.cfg is found.

-rmgrport <####> - Port for ALL RmtMgr's for this org (QueMgr). Not needed unless using
the Admin test feature and the default RmtMgr port is not being used.

-org <org> - org to use. Default is default.

-orgpath <path> - Same as RmtMgr. Needed for writing configuration files and/or Admin
tests if it is not the default path (default is $P3_HOME).

-auth <file> - License file to use. The MSC_LICENSE_FILE environment variable is the
default.

-app <name> - Application name to use. Default is MSC Nastran (or the first valid application).

-rcf <file> - rcf file to use for all GUI settings (same format as -env/-envall output);
see Analysis Manager Environment File.

-p3home <path> - Use if the $P3_HOME environment variable is not set.

-amhome <path> - Use if the $AM_HOME environment variable is not set.

-choice <#> - Startup option if not the full menu:
1) submit a job
2) abort a job
3) monitor a job
4) show QueMgr log file
5) show QueMgr jobs/queues
6) show QueMgr cpu/mem/disk
7) list completed jobs
8) write rcfile settings
9) admin test
10) admin reconfig QueMgr

-env - Prints the rcf / GUI settings for all applications.

-envall - Same as -env but even more info is printed.

-envf <file> - Writes the env settings to the specified file.

-envfall - Same as -envf but even more info is written.

-nocon - Do not attempt to connect to a QueMgr. Useful when one is not
running and you want to test the Admin configuration files, etc.
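As an illustration, a text-mode monitoring session against a remote Queue Manager could be launched as sketched below. The command is only assembled and printed here; the master host name is an assumption:

```shell
# -choice 3 jumps straight to the job monitor; the host and port are illustrative.
cmd="TxtMgr -qmgrhost master1 -qmgrport 1900 -choice 3"
echo "$cmd"
```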


Analysis Manager Environment File


The -env and -envall arguments to some of the above programs (P3Mgr in particular) list the
environment settings used in the Analysis Manager. The environment can be set by reading a particular
file with the -rcfile argument. Default values of this environment are found in the .p3mgrrc file,
which gets stored in the user's home directory when any settings are saved from within the Analysis
Manager. Most of the widgets in the P3Mgr user interface can be set by reading an rcfile. When
MSC.Marc jobs are submitted via Patran, all additional parameters, such as the restart filename, the
number of domains, host information for parallel processing, and other information, are passed to the
Analysis Manager via this rcfile. There is an entry in the rcfile for each widget in the user
interface. A list of these entries is given below. Notice that the configuration information is also listed
in the rcfile. Configuration information is explained in Configuration Management Interface and
Examples of Configuration Files.
#
# rc file
#
cfg.total_h_list[0].host_name = tavarua
cfg.total_h_list[0].arch = HP700
cfg.total_h_list[0].maxtasks = 2
cfg.total_h_list[0].num_apps = 3
cfg.total_h_list[0].sub_app[MSC.Nastran].pseudohost_name = tavarua_nast2001
cfg.total_h_list[0].sub_app[MSC.Nastran].exepath = /solvers/nast2001/bin/nast2001
cfg.total_h_list[0].sub_app[MSC.Nastran].rcpath = /solvers/nast2001/conf/nast2001rc
cfg.total_h_list[0].sub_app[ABAQUS].pseudohost_name = tavarua_aba62
cfg.total_h_list[0].sub_app[ABAQUS].exepath = /solvers/hks/Commands/abaqus
cfg.total_h_list[0].sub_app[ABAQUS].rcpath = /solvers/hks/6.21/site/abaqus_v6.env
cfg.total_h_list[0].sub_app[MSC.Marc].pseudohost_name = tavarua_marc2001
cfg.total_h_list[0].sub_app[MSC.Marc].exepath = /solvers/marc2001/tools/run_marc
cfg.total_h_list[0].sub_app[MSC.Marc].rcpath = /solvers/marc2001/tools/include
cfg.total_h_list[1].host_name = salani
cfg.total_h_list[1].arch = WINNT
cfg.total_h_list[1].maxtasks = 2
cfg.total_h_list[1].num_apps = 3
cfg.total_h_list[1].sub_app[MSC.Nastran].pseudohost_name = salani_nast2001
cfg.total_h_list[1].sub_app[MSC.Nastran].exepath = d:\msc\bin\nast2001.exe
cfg.total_h_list[1].sub_app[MSC.Nastran].rcpath = d:\msc\conf\nast2001.rcf
cfg.total_h_list[1].sub_app[ABAQUS].pseudohost_name = salani_aba62
cfg.total_h_list[1].sub_app[ABAQUS].exepath = d:\hks\Commands\abq621.bat
cfg.total_h_list[1].sub_app[ABAQUS].rcpath = d:\hks\6.21\site\abaqus_v6.env
cfg.total_h_list[1].sub_app[MSC.Marc].pseudohost_name = salani_marc2001
cfg.total_h_list[1].sub_app[MSC.Marc].exepath = d:\msc\marc2001\tools\run_marc.bat

cfg.total_h_list[1].sub_app[MSC.Marc].rcpath = d:\msc\marc2001\tools\include.bat
#
unv_config.auto_mon_flag = 1
unv_config.time_type = 0
unv_config.delay_hour = 0
unv_config.delay_min = 0
unv_config.specific_hour = 0
unv_config.specific_min = 0
unv_config.specific_day = 0
unv_config.mail_on_off = 0
unv_config.mon_file_flag = 1
unv_config.copy_link_flag = 0
unv_config.job_max_time = 0
unv_config.project_name = user1
unv_config.orig_pre_prog =
unv_config.orig_pos_prog =
unv_config.exec_pre_prog =
unv_config.exec_pos_prog =
unv_config.separate_user = user1
unv_config.p3db_file =
unv_config.email_addr = empty
#
nas_config.disk_master = 0
nas_config.disk_dball = 0
nas_config.disk_scratch = 0
nas_config.disk_units = 2
nas_config.scr_run_flag = 1
nas_config.save_db_flag = 0
nas_config.copy_db_flag = 0
nas_config.mem_req = 0
nas_config.mem_units = 0
nas_config.smem_units = 0
nas_config.extra_arg =
nas_config.num_hosts = 2
nas_host[tavarua.scm.na.mscsoftware.com].mem = 0
nas_host[tavarua.scm.na.mscsoftware.com].smem = 0
nas_host[tavarua.scm.na.mscsoftware.com].num_cpus = 0
nas_host[lalati.scm.na.mscsoftware.com].mem = 0
nas_host[lalati.scm.na.mscsoftware.com].smem = 0
nas_host[lalati.scm.na.mscsoftware.com].num_cpus = 0
nas_config.default_host = tavarua_nast2001
nas_config.default_queue = N/A
nas_submit.restart_type = 0
nas_submit.restart = 0
nas_submit.modfms = 1
nas_submit.nas_input_deck =
nas_submit.cold_jobname =
#
aba_config.copy_res_file = 1
aba_config.save_res_file = 0
aba_config.mem_req = 0
aba_config.mem_units = 0
aba_config.disk_units = 2
aba_config.space_req = 0
aba_config.append_fil = 0
aba_config.user_sub =
aba_config.use_standard = 1
aba_config.extra_arg =
aba_config.num_hosts = 2

102

Patran Analysis Manager Users Guide



aba_host[tavarua.scm.na.mscsoftware.com].num_cpus = 1
aba_host[tavarua.scm.na.mscsoftware.com].pre_buf = 0
aba_host[tavarua.scm.na.mscsoftware.com].pre_mem = 0
aba_host[tavarua.scm.na.mscsoftware.com].main_buf = 0
aba_host[tavarua.scm.na.mscsoftware.com].main_mem = 0
aba_host[lalati.scm.na.mscsoftware.com].num_cpus = 1
aba_host[lalati.scm.na.mscsoftware.com].pre_buf = 0
aba_host[lalati.scm.na.mscsoftware.com].pre_mem = 0
aba_host[lalati.scm.na.mscsoftware.com].main_buf = 0
aba_host[lalati.scm.na.mscsoftware.com].main_mem = 0
aba_config.default_host = tavarua_aba62
aba_config.default_queue = N/A
aba_submit.restart = 0
aba_submit.aba_input_deck =
aba_submit.restart_file =
#
mar_config.disk_units = 2
mar_config.space_req = 0
mar_config.mem_req = 0
mar_config.mem_units = 2
mar_config.translate_input = 1
mar_config.num_hosts = 2
mar_host[tavarua.scm.na.mscsoftware.com].num_cpus = 1
mar_host[lalati.scm.na.mscsoftware.com].num_cpus = 1
mar_config.default_host = tavarua_marc2001
mar_config.default_queue = N/A
mar_config.cmd_line =
mar_config.mon_file = $JOBNAME.sts
mar_submit.save = 0
mar_submit.nprocd = 0
mar_submit.datfile_name =
mar_submit.restart_name =
mar_submit.post_name =
mar_submit.program_name =
mar_submit.user_subroutine_name =
mar_submit.viewfactor =
mar_submit.hostfile =
mar_submit.iamval =

Organization Environment Variables


P3_HOME & P3_PLATFORM
These two environment variables are set automatically for the user when Patran and/or the Analysis
Manager is invoked, if the proper installation has been performed. The P3_HOME variable references
the actual installation directory and P3_PLATFORM references the machine architecture. This
architecture can be one of the following:
HP700    Hewlett Packard HP-UX
RS6K     IBM RS/6000 AIX
SGI5     Silicon Graphics IRIX
SUNS     Sun SPARC Solaris
LX86     Linux (MSC or Red Hat)
WINNT    Windows 2000 or XP

These variables can be set in the following manner with cshell (if necessary):
setenv P3_HOME /msc/patran200x
setenv P3_PLATFORM HP700
or for Bourne shell or Korn shell users:
P3_HOME=/msc/patran200x
P3_PLATFORM=HP700
export P3_HOME
export P3_PLATFORM
or on Windows:
set P3_HOME=c:/msc/patran200x
set P3_PLATFORM=WINNT
In most instances, users never have to concern themselves with these environment variables, but they are
included here for completeness. In a typical Patran installation, a file called .wrapper exists in the
$P3_HOME/bin directory which automatically determines these environment variables. The names of
the invoking scripts, p3analysis_mgr and p3am_admin exist as pointers to .wrapper in this bin directory
which, when executed, determines the variable values and then executes the actual scripts. For this to
work conveniently, the user should have $P3_HOME/bin in his/her path, otherwise the entire path name
must be used when invoking the programs.
P3_ORG
It may be desirable to have multiple Queue Managers running (groups of systems for the Analysis
Manager to use) each with a separate organizational directory for Analysis Manager configuration files.
An optional environment variable, P3_ORG, may be set for each user to specify a separate named
organizational directory. If defined, the Analysis Manager will use it for accessing its required
configuration files and thus connect to the Queue Manager specified by P3_ORG.


If not defined, the default organizational directory of default is used (i.e.,
$P3_HOME/p3manager_files/default).
For example, suppose a site has many computers with MSC Nastran installed, yet access to each is to be
limited to certain engineering groups. Each group should only be able to submit to its own computers
and should not see the option of submitting to another group's machines. To solve this problem, set up
two or more different Analysis Manager organizational groups and start a separate Queue Manager for
each. The configuration files (host.cfg and disk.cfg) for the first set of machines are located
in the $P3_HOME/p3manager_files/default/conf directory. Now, to handle another group's machines,
create another directory structure under $P3_HOME/p3manager_files called groupb. A directory
$P3_HOME/p3manager_files/groupb should be created with subdirectories identical to the
$P3_HOME/p3manager_files/default tree. The easiest way to create the new organization is to just copy
the default organization tree:
cp -r $P3_HOME/p3manager_files/default
$P3_HOME/p3manager_files/groupb
Now, the configuration files in the $P3_HOME/p3manager_files/groupb/conf directory can be edited to
set up the new group of machines. When more than one organization is defined, there will be one Queue
Manager (QueMgr) running for each organizational group. When starting the Queue Manager, the -org
argument must be used for organizations which are not default.
In this example, the Queue Manager for the groupb organization would be started as follows:
QueMgr $P3_HOME -org groupb
Where $P3_HOME is the directory where the installation is located, say /msc/patran200x.
Once the organizations are created, the configuration files edited and the Queue Managers started, users
can utilize the P3_ORG environment variable to specify which Queue Manager to communicate with.
Therefore, users wishing to use the newer version of MSC Nastran will set their P3_ORG environment
variable to groupb and users wishing to use the normal version of MSC Nastran will not set the
environment variable at all.
setenv P3_ORG groupb
or for Bourne shell or Korn shell users:
P3_ORG=groupb
export P3_ORG
or on Windows:
set P3_ORG=groupb
It is also possible to dynamically change organizational groups without setting this environment variable.
If different organizational groups need to be created, but access should be given to all users, a
configuration file called org.cfg can be created and placed in the $P3_HOME/p3manager_files directory.
This will allow users to change organizational group directories from the Analysis Manager user
interface. The form of this configuration file is described in Examples of Configuration Files. If this
configuration file is used to allow users to change groups on the fly from the user interface, then a
QueMgr must be started on each host with unique port IDs using the -port parameter, e.g.,


QueMgr $P3_HOME -port 1501 -org groupb


P3_PORT and P3_MASTER
The Analysis Manager is actually very flexible in the manner in which it can be installed and accessed.
This is to take into account the variation in networks that exist from company to company or even from
department to department. These two environment variables exist to allow flexibility in more restrictive
networking environments.
Say, for example, that there are 50 machines, all with their own Patran installations, yet none of them are
NFS mounted to each other or to the master node where the QueMgr daemon is running. A difficult way
to solve this problem is to make sure the same configuration files exist on all 50 machines in the same
directory tree structure, including the master node. If a change has to be made to the configuration file,
then all 50 machines and the master node will have to be updated. Since the QueMgr is the only program
that needs to read the configuration files, an easier solution would be for only the master node to keep an
up-to-date set of configuration files and have the users of the other 50 machines set the P3_PORT and
P3_MASTER environment variables to reference the port and machine of the master node. For example,
setenv P3_PORT 1501
setenv P3_MASTER bangkok
or for Bourne shell or Korn shell users:
P3_PORT=1501
P3_MASTER=bangkok
export P3_PORT
export P3_MASTER
or on Windows:
set P3_PORT=1501
set P3_MASTER=bangkok
where 1501 is the port number that QueMgr is using and bangkok is the master node's hostname.
The Analysis Manager works in the following manner:
1. First a check is made to see if P3_PORT and P3_MASTER have been set. If they have, the
information is used to communicate with the master host and only the organizational group
specified either through P3_ORG or the default organizational group will appear.
2. If P3_PORT and P3_MASTER have not been set, then the org.cfg configuration file is read which
contains the master nodes and port numbers for all organizational groups that have been created.
If P3_ORG is also specified, then that organizational group will appear as the default; however,
all other organizational groups will still be accessible.
3. If neither P3_PORT nor P3_MASTER has been set and the org.cfg file does not exist, then the
defaults are used. The default host is the host machine that the Analysis Manager was started on.
The default port is 1900. And the default configuration files (organization) are in the default
directory. Multiple organizational groups will not be accessible; however, the P3_ORG variable


can be specified to change organizational groups each time the Analysis Manager is invoked. With
this last method, the user or system administrator who starts the QueMgr never needs to worry
about assigning unique port numbers; however, it is also one of the most restrictive installations
and methods of access.
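The three-step lookup above can be sketched in Bourne shell (an illustrative paraphrase only; the real logic is internal to the Analysis Manager, and the org.cfg handling is reduced to a placeholder):

```shell
# Illustrative sketch of the master-host lookup order (not the actual code).
resolve_master() {
    if [ -n "$P3_PORT" ] && [ -n "$P3_MASTER" ]; then
        # 1. Explicit override: talk to the given master host and port.
        echo "$P3_MASTER:$P3_PORT"
    elif [ -f "$P3_HOME/p3manager_files/org.cfg" ]; then
        # 2. org.cfg lists the master node and port for every
        #    organizational group (parsing omitted in this sketch).
        echo "consult org.cfg for group ${P3_ORG:-default}"
    else
        # 3. Defaults: the local host, port 1900, 'default' organization.
        echo "$(uname -n):1900"
    fi
}
```

With P3_PORT set to 1501 and P3_MASTER set to bangkok, resolve_master reports bangkok:1501; with nothing set and no org.cfg, it falls through to the local host and port 1900.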
MSC_RMTMGR_ARGS and MSC_QUEMGR_ARGS
The RmtMgr and QueMgr will also read the environment variables MSC_RMTMGR_ARGS and
MSC_QUEMGR_ARGS, respectively, for all of their arguments. Each variable holds the arguments as
one string, as in these cshell settings:
setenv MSC_RMTMGR_ARGS -port 1850 -path /msc/patran200x
setenv MSC_QUEMGR_ARGS -port 1950 -path /msc/patran200x
or for Bourne shell or Korn shell users:
MSC_RMTMGR_ARGS=-port 1850 -path /msc/patran200x
MSC_QUEMGR_ARGS=-port 1950 -path /msc/patran200x
export MSC_RMTMGR_ARGS
export MSC_QUEMGR_ARGS
or on Windows:
set MSC_RMTMGR_ARGS=-port 1850 -path /msc/patran200x
set MSC_QUEMGR_ARGS=-port 1950 -path /msc/patran200x
When RmtMgr and/or QueMgr start, they check these variables and take their arguments from these
strings. If both are set, actual command line arguments override the environment variables.
This method is needed on Windows because there is currently no way to save the startup arguments for
a service; on reboot the RmtMgr would not otherwise know its startup arguments and would have to read
them from a file or an environment string. The only remaining limitation is that if two RmtMgrs are
running on the same machine, there is no way to give each different arguments.
Note:

On Windows these variables should be set under the System Control Panel so that, on
reboot, the RmtMgr and QueMgr start up with these arguments. You can check the Event
Viewer (under Administrative Tools in the Control Panel) to verify proper startup.

AM_CMD_STATUS and AM_JOB_STATUS


In addition, at the end of a job these two environment variables get set.
AM_CMD_STATUS is set on the executing host after the job has completed there, with the exit status
of the command. A post script on the execute host can use it to do different things based on the exit
status of the job. One must know the exit status conventions of the application being run to know
which values are good, which are bad, and whether there are any other possible codes and meanings.
AM_JOB_STATUS is set on the submit host at the end of the job, after all the files have been
transferred, and can be used by a post program on the submit host for the same reasons. The possible
values for this environment variable are 0, 1, and 2, where 0 means successful, 1 means abort, and
2 means failure of any kind.
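As an illustration, a post program on the submit host might branch on AM_JOB_STATUS like this (a hypothetical script; only the three documented values are meaningful):

```shell
# Hypothetical post program for the submit host, keyed off AM_JOB_STATUS.
# Documented values: 0 = successful, 1 = abort, 2 = failure of any kind.
describe_status() {
    case "$1" in
        0) echo "job finished successfully" ;;
        1) echo "job was aborted" ;;
        2) echo "job failed" ;;
        *) echo "unknown status: $1" ;;
    esac
}

describe_status "${AM_JOB_STATUS:-0}"
```

A real post program would typically mail the user or archive results in the success branch and save logs in the failure branches.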


Installation
Installation Requirements
The following definitions apply to this section:
1. The master host is the machine which continually runs the Analysis Manager daemon (called
QueMgr). This is also referred to as the master node.
2. The submit host is the machine from which the analysis is submitted, sometimes referred to as the
client also.
3. The analysis host is the machine which actually executes the analysis.
Below is an itemized list of installation requirements:
1. One master node must be chosen for each organizational group (for each Queue Manager that will
be running; typical installations have only one).
2. The Queue Manager (QueMgr) should run as root on the master node. This is not a strict
requirement, but it is recommended on Unix. On Windows it can run as a normal user or as administrator.
3. Each node (submit and analysis hosts) in the Analysis Manager configuration must be reachable
to and from the master node via a TCP/IP network.
4. Each analysis host must have a Remote Manager (RmtMgr) running with the same port number
(for each QueMgr). It is recommended that each submit machine run one as well (especially on
Windows), although this is not a strict requirement. (This takes the place of the rsh (remsh)
remote access capability used in older versions of the Analysis Manager.)
5. The Analysis Manager software is installed from the installation media onto any machine (master,
submit, or analysis host) under the $P3_HOME/p3manager_files directory. The $P3_HOME
variable is the installation directory, typically /msc/patran200x, and is usually defined as an
environment variable. This p3manager_files directory and tree must exist as-is and must not
be renamed.
6. Each analysis host machine in the Analysis Manager configuration must be able to see the
installation tree identically. If an RmtMgr is running, this is not an issue because the RmtMgr
knows where the Analysis Manager executables are.
7. The root user should run the administration program (p3am_admin (AdmMgr)) on the master
node to test and ensure that new users can correctly access the Analysis Manager. See
Configuration Management Interface.
Each user wishing to use the Analysis Manager must meet the following requirements:
1. Users who are using the Analysis Manager should have the same login name, user and group ids
on all hosts / nodes in the Analysis Manager configuration. This will prevent file access problems.
In specific cases, users may run jobs on different accounts other than their own, but this must be
set up by the system administrator. This is described in Examples of Configuration Files.
2. Users must have uname in their default search path (path or PATH environment variable in the
user's .cshrc or .profile file).
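A quick way to verify both requirements is to run a fragment such as the following on every node and compare the output (a convenience check only, not part of the product):

```shell
# Print the login name, numeric user/group ids, and the location of uname.
# The first three lines should match on every node in the configuration,
# and the uname line must not be empty.
echo "user:  $(id -un)"
echo "uid:   $(id -u)"
echo "gid:   $(id -g)"
echo "uname: $(command -v uname)"
```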


Installation Instructions
1. Unload the p3manager_files directory from the installation media. (Consult the Installation
guide for more information on how this is done.)
2. Decide on a master node (typically the node the Patran software is located on), and login to that
node as root.
3. Decide which machines that have MSC Nastran, MSC.Marc, ABAQUS, or other analysis
modules will be included in the Analysis Manager's configuration. Find out where each runtime
script and/or configuration file is located on each machine (e.g., /msc/bin/nast200x and
/msc/conf/nast200xrc for MSC Nastran). Only these machines will be enabled for later job
submission, monitoring, and management.
4. Each analysis host machine that will be configured to run an analysis code must be able to see the
p3manager_files directory structure as outlined in Directory Structure. This directory
structure must also exist on the master node as well as client (submit) nodes. This can be done in
one of two ways. Either the directory structure can be copied directly to each machine so that it
can be accessed in the same manner as on the master node, or symbolic links and NFS mounts
can be created. In any case, if on one machine you type
cd $P3_HOME/p3manager_files
you should be able to do the same on all analysis nodes and see the same directory structure.
As an example of setting up a link, suppose that the machine venus is the master host and has
the installation directory structure in /venus/users/msc/patran200x. A link can be established on
venus by typing:
ln -s /venus/users/msc/patran200x /patran
This ensures that on venus, typing cd /patran puts you into
/venus/users/msc/patran200x.
Now on an analysis host called jupiter, NFS mount the disk /venus/users and then type:
ln -s /venus/users/msc/patran200x /patran
This ensures that the analysis host jupiter sees the installation directory structure the same
way. Repeat this for all analysis hosts. NFS mounts are not necessary if you wish to copy the
installation directory structure to each host separately instead of creating links.
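Whichever approach is used (links or copies), the result can be checked on each node with a fragment like this (the /msc/patran200x fallback is just the typical install location mentioned above):

```shell
# Confirm that this node sees the installation tree at the expected place.
P3_HOME=${P3_HOME:-/msc/patran200x}
if [ -d "$P3_HOME/p3manager_files" ]; then
    echo "OK: $P3_HOME/p3manager_files is visible"
else
    echo "MISSING: $P3_HOME/p3manager_files" >&2
fi
```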
Each submit host (hosts that submit jobs) does not necessarily need to see the directory structure
in exactly the same way as the master and analysis hosts do. They only need to be able to see an
installation directory structure to find the user interface executable (P3Mgr).
Note:

The above description sounds more restrictive than it really is. In actuality, if an
RmtMgr is started on each analysis host, the directory structure can be seen because
the RmtMgr knows from where it was launched and thus knows where all the Analysis
Manager executables are. However, it is still recommended to follow the above
procedure if at all possible.


5. Start up the RmtMgr daemon or service on each and every analysis node. It is recommended to
start RmtMgr on submit machines also. Starting the Queue/Remote Managers explains this
procedure. This must be done before configuration testing can be done.
6. Use the p3am_admin program to set up the configuration files. This program is located in
$P3_HOME/bin/p3am_admin
Modify Configuration Files explains the use of this program and the format of the generated

configuration files as a result of running this program. The configuration file will be placed in the
correct locations automatically. The following configuration files will be generated:
host.cfg    Host configuration file
disk.cfg    Disk space configuration file
lsf.cfg     LSF configuration file (if you plan to use LSF as your scheduler instead
            of the Analysis Manager's own built-in scheduler)
nqs.cfg     NQS configuration file (if you plan to use NQS as your scheduler instead
            of the Analysis Manager's own built-in scheduler)

Note:

For a minimal configuration with a single Queue Manager, you should remove or
rename the file $P3_HOME/p3manager_files/org.cfg. See step 12. for more
information.

7. Test the configuration setup using p3am_admin's testing features. Specifically, do basic tests and
network tests for each user that wishes to access the Analysis Manager. Test Configuration
explains this procedure in detail.
8. Start up the QueMgr daemon on the master node. Starting the Queue/Remote Managers explains
this procedure.
9. Add commands to the appropriate rc files for automatic start-up of the QueMgr and RmtMgr
daemons when the master, submit or analysis nodes have to be rebooted. Starting the
Queue/Remote Managers also explains this procedure.
10. Invoke the Analysis Manager user interface as a normal user and check that the installation was
performed properly. Invoking the Analysis Manager is explained in Invoking the Analysis
Manager Manually.
11. Repeat the procedure from step 2. for each organizational group (Queue Manager) you wish to
set up.
12. When more than one organizational group (Queue Manager) is to be accessed, either modify the
org.cfg file and add the port numbers and group names, or have users set the appropriate
environment variables to access them. See Organization Environment Variables for an explanation
of these variables and see Examples of Configuration Files for setting up the org.cfg file.
13. Make sure users have $P3_HOME/bin in their path. Most Analysis Manager executables can be
invoked from $P3_HOME/bin or are links in $P3_HOME/bin that set all required environment
variables. These include:


p3am_admin
p3am_viewer (Unix only)
p3analysis_mgr
QueMgr
RmtMgr

It is always safest to invoke these executables from $P3_HOME/bin.
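For step 13, a Bourne/Korn shell user would typically add something like the following to the login profile (the /msc/patran200x fallback is just the typical install location):

```shell
# Put the Analysis Manager executables on the search path so that
# p3analysis_mgr, p3am_admin, QueMgr, and RmtMgr can be invoked by name.
P3_HOME=${P3_HOME:-/msc/patran200x}
PATH="$P3_HOME/bin:$PATH"
export P3_HOME PATH
```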


X Resource Settings
The Analysis Manager GUI on Unix requires the use of certain X Window System Resources. The
following explains this use.
The name of the Analysis Manager X application class is P3Mgr. Therefore, to change the background
color the Analysis Manager uses to red, the following resource specification is used:
P3Mgr*background: red
The lines below belong in the P3Mgr file delivered with your installation. This file can be found in
$P3_HOME/app-defaults. It can reside in the user's local directory or home directory,
or be placed in .Xdefaults or /usr/lib/X11/app-defaults. It is most convenient to place
it in the user's home directory; that way, changes take effect without having to log out. These
are the resources the Analysis Manager requires to look and behave like Patran.
!
! Resources for Patran Analysis Manager:
!
P3Mgr*background:white
P3Mgr*foreground:black
P3Mgr*bottomShadowColor:bisque4
P3Mgr*troughColor:bisque3
P3Mgr*topShadowColor:white
P3Mgr*highlightColor: black
P3Mgr*XmScrollBar.foreground:white
P3Mgr*XmScrollBar.background:white
P3Mgr*mon_run_trough.background:DodgerBlue
P3Mgr*mon_ok_label.foreground:DodgerBlue
P3Mgr*mon_bad_label.foreground:red
P3Mgr*que_mon_queued.background:red
P3Mgr*que_mon_run.background:DodgerBlue
P3Mgr*mon_disk_trough.background:red
P3Mgr*mon_cpu_trough.background: green
!
! End of Patran Analysis Manager Resources
!
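When the resources live in the user's home directory, edits can be loaded into the running X server with the standard xrdb utility (this fragment assumes an interactive X session; it does nothing on a plain text login):

```shell
# Merge updated resources into the running X server so the next Analysis
# Manager start-up picks them up; skip quietly when no display is available.
if [ -n "$DISPLAY" ] && command -v xrdb >/dev/null 2>&1; then
    xrdb -merge "$HOME/.Xdefaults"
    xrdb -query | grep '^P3Mgr' || echo "no P3Mgr resources loaded yet"
fi
```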

A similar file exists for the X resources of the system administration tool, p3am_admin (AdmMgr).
Font Handling
The Analysis Manager on Unix requires three fonts to work correctly. At start-up, the Analysis Manager
looks through the fonts available on the machine and picks out three fonts which meet its needs. You will
notice that there are no font definitions in the default Analysis Manager resources. On platforms which
utilize an R4 based version of X windows, the fonts are NOT adjustable by the user. The fonts that the
Analysis Manager calculates are used all the time.
On R5 X windows platforms, the three fonts are still calculated by the Analysis Manager, but the user
has the option of overriding the calculated fonts by using the X resources. The names of the resources to
use are as follows:


P3Mgr*fontList:        *lucida-bold-r-*-14-140-*
P3Mgr*middle.fontList: *lucida-medium-r-*-14-140-*
P3Mgr*fixed.fontList:  *courier-medium-r-*-12-120-*

If the user decides to change the fonts, these are the resources that need to be set. Not all three
fonts have to be changed; a single one can be adjusted by itself. The only requirement is that a
fixed-width font is defined for P3Mgr*fixed.fontList; otherwise the Queue Monitor interface
will not appear correctly.


Configuration Management Interface


The Analysis Manager requires a predefined set of configuration files. These files may be changed
and validated using the Configuration Management preference (executable name p3am_admin), which
provides a menu-driven interface for changing and testing the configuration files. Examples of the
configuration files are found in Examples of Configuration Files. You may also edit them with any
text editor; however, you will probably find the administration tool more intuitive until you
become familiar with the configuration files.
To run p3am_admin from the installation directory using all defaults type:
$P3_HOME/bin/p3am_admin
or to call out a specific set of configuration files other than the default:
$P3_HOME/bin/p3am_admin -org <org>
where <org> is the directory name containing the configuration file located in
$P3_HOME/p3manager_files/<org>/{conf, log, proj} or specify the full path:
<path_name>/AdmMgr $P3_HOME -org <org>
where:
<path_name>    $P3_HOME/p3manager_files/bin/<arch>/
<arch>         the architecture type of the machine you wish to run on, which can be
               one of the following:

               HP700    Hewlett Packard HP-UX
               RS6K     IBM RS/6000 AIX
               SGI5     Silicon Graphics IRIX
               SUNS     Sun SPARC Solaris
               LX86     Linux (MSC or Red Hat)
               WINNT    Windows 2000 or XP


The arguments are defined as follows:


$P3_HOME       The path where the Analysis Manager is installed. This path is used to
               locate the p3manager_files directory. For example, if /msc/patran200x
               is specified, the p3am_admin (AdmMgr) program will look for the
               /msc/patran200x/p3manager_files directory. Typically, the install
               directory is /msc/patran200x and is defined in an environment variable
               called $P3_HOME.

-org <org>     The organizational group to be used. See Organization Environment
               Variables for a description of the use of organizations. It is the name
               of the directory under the p3manager_files directory that contains the
               configuration files.

Both of the arguments listed above are optional. If they are not specified, the p3am_admin (AdmMgr)
program will check for the following two environment variables:
P3_HOME        The path where the Analysis Manager is installed.

P3_ORG         The organization to be used. This is the <org> directory.

If the command line arguments are not specified, then at least the P3_HOME environment variable must
be set. The P3_ORG variable is not required. If the P3_ORG variable is not set and the -org option is not
provided, an organization of default is used. Therefore, p3am_admin (AdmMgr) will check for
configuration files in the following location:
$P3_HOME/p3manager_files/default/conf
It is recommended that the p3am_admin (AdmMgr) program be run on the master node
and as the root user. It can be run as a normal user, but some of the
testing options will not be available. In addition, the user may not have the necessary privileges to save
the changes to the configuration files or start up a Queue Manager daemon.
When p3am_admin (AdmMgr) starts up, it will take the arguments provided (or environment variables)
and check to see if configuration files already exist. The configuration files should exist as follows. The
last two are only necessary if LSF or NQS queueing are used.
$P3_HOME/p3manager_files/<org>/conf/host.cfg
$P3_HOME/p3manager_files/<org>/conf/disk.cfg
$P3_HOME/p3manager_files/<org>/conf/lsf.cfg
$P3_HOME/p3manager_files/<org>/conf/nqs.cfg
If these files exist, they will be read in for use within the p3am_admin (AdmMgr) program. If these files
are not found, p3am_admin (AdmMgr) will start up in an initial state. In this state there are no hosts,
filesystems, or queues defined and they must all be added using the p3am_admin (AdmMgr)
functionality.


Therefore, upon initial installation and/or configuration of the Analysis Manager, the p3am_admin
(AdmMgr) program will come up in an initial state and the user can build up configuration files to save.
Action Options
The initial form for p3am_admin (AdmMgr) has the following Actions/Options:
1. Modify Config Files
2. Test Configuration
3. Reconfigure Que Mgr

On Windows the Administration tree tab is the equivalent:


Modify Configuration Files


Modify Config(uration) Files has the following Objects:
1. Applications
2. Physical Hosts
3. Hosts
4. Filesystems
5. Queues


Selecting the Queue Type


The Analysis Manager requires a Queue Type: LSF, NQS, or the Analysis Manager's own queueing
capability. This should typically be the first thing set when setting up a new configuration.
To select or change the Queue Type, click on the Queue Type: menu and choose LSF, NQS, or
MSC (AM) Queue, as listed on the right side of the Modify Config Files form. Only one queue type
may be selected.
To save the configuration, the Apply button must be pressed; the newly added queue type information
will be saved in the host.cfg file.
Note:

Apply saves all configuration files: host, disk, and, if applicable, lsf or nqs.
Queue Managers set up on Windows only have the choice of the default MSC Queue type.
LSF and NQS are not supported for Queue Managers running on Windows.

Administrator User
You must also set the Admin user. This should not be root on Unix or the administrator account on
Windows, but should be a normal user name.


Configuration Version
There are three configuration versions. The functionality available for setup depends on which
version you select. Version 1 is the original.
Version 2 adds the capability of limiting the maximum number of tasks that any given application
is allowed to run at any one time. If this number is exceeded, any additional submittals are
queued until the number of running tasks for that application drops below this limit. This is
typically used when only so many application licenses are available, so that a job cannot be
submitted without a license being available. Otherwise the application might fail due to no license
being available.
Version 3 includes all capabilities of versions 1 and 2, and also adds the ability to set up a host,
made up of a group of hosts, that is monitored for the least loaded machine. Once a machine in that
group satisfies the loading criteria, the job is submitted to that machine.
Applications
Since the Analysis Manager can execute different applications, it needs to know which applications to
execute and how to access them. This configuration information is stored in the host.cfg file located in
the $P3_HOME/p3manager_files/default/conf directory. This portion of the host.cfg file contains the
following fields:
type - An integer number used to identify the application. The user never has to worry about this number because it is automatically assigned by the program.

program name - One of the following:

NasMgr - for executing MSC Nastran
MarMgr - for executing MSC.Marc
AbaMgr - for executing ABAQUS
GenMgr - for executing other analysis modules

Patran name - The name of the Patran Preference to key off of when invoking the Analysis Manager. These can be MSC Nastran, MSC.Marc, ABAQUS, ANSYS, etc. Check what the exact Patran Preference spelling is and remove any spaces. If the Preference does not exist, then the first configured application will be used when the Analysis Manager is invoked from Patran, after which the user can change it to the desired one.

optional args - Used for generic program execution only. These specify the arguments to be added to the invoking line when running a generic application.

MaxAppTask - By default this is not set. If the configuration file version is set to 2 or 3, then you may specify the maximum number of tasks that the given application can run at any one time (on all machines). This is convenient when you don't want jobs submitted with the possibility of one or more failing to check out the proper licenses because too many jobs are running at once.
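Putting these fields together, an applications section of host.cfg might look something like the sketch below. The column layout, the APPLICATIONS section keyword, and the type numbers for GenMgr are illustrative only; consult the host.cfg shipped with your installation for the exact syntax.

```
APPLICATIONS:
#type  program name  Patran name  optional args
1      NasMgr        MSCNastran   NONE
3      MarMgr        MSCMarc      NONE
5      GenMgr        ANSYS5       -j $JOBNAME
```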

Chapter 7: System Management 119


Configuration Management Interface

The p3am_admin (AdmMgr) program can be used to add and delete applications or change any field
above as shown in the forms below.

The exception to this is the Maximum Number of Tasks. This value must be changed manually by
editing the configuration file and then restarting the Queue Manager service on Windows. On UNIX,
this can be controlled through the Administration GUI.
Adding an Application
To add an application, select the Add Application button. (On Windows, right mouse click the
Applications tree tab.) An application list form appears from which an application can be selected. If
GENERAL is selected the Application Name and Optional Args data boxes appear on the main form.

120

Patran Analysis Manager Users Guide


For GENERAL, enter the name of the application as it is known by the Patran Preference, without any spaces. For example, if ANSYS 5 is a preference, then enter ANSYS5.
Enter the optional arguments that are needed to run the specified analysis code. For example, if an
executable for the MARC analysis code needs arguments of -j jobname, you can specify -j
$JOBNAME as the optional args. Arguments can be specified explicitly such as the -j, or they can be
placed in as variables such as the $JOBNAME. The following variables are available:
$JOBFILE - Actual filename selected (without full path)
$JOBNAME - The jobname ($JOBFILE without extension)
$P3AMHOST - The client hostname from which the job was submitted
$P3AMDIR - Directory on the client host where $JOBFILE resides
$APPNAME - Application name (Patran Preference name)
$PROJ - Project Name selected
$DISK - Total Disk space requested (mb)
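For example, a GENERAL application whose executable expects a job file and a working directory could use an optional args entry such as the following. The flags themselves are hypothetical; only the $-variables are defined by the Analysis Manager.

```
-j $JOBNAME -dir $P3AMDIR
```

A job submitted for /home/user/beam.dat would then be invoked with -j beam -dir /home/user.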

Up to 10 GENERAL applications can be added. To save the configuration, the Apply button must be
pressed and the newly added application information will be saved in the host.cfg file. On Windows this
is Save Config Settings under the Queue pull down menu.
Note: Apply saves all configuration files: host, disk, and, if applicable, lsf or nqs.
Deleting an Application
To remove an application, select the Delete Application button. A list of defined applications appears.

Select one to be deleted by clicking on the application name in the list. Then, select OK. The application will be removed, and the list of applications will disappear.
On Windows, simply select the application you want to delete from the Applications tree tab and press
the Delete button (or right-mouse click the application and select Delete).
To save the configuration, the Apply button must be pressed and the newly deleted application
information will be saved in the host.cfg file. On Windows this is Save Config Settings under the Queue
pull down menu.
Note: Apply saves all configuration files: host, disk, and, if applicable, lsf or nqs.

Physical Hosts
Since the Analysis Manager can execute jobs on different hosts, it needs to know about each analysis
host. Host configuration for the Analysis Manager is done via the host.cfg file located in the
$P3_HOME/p3manager_files/default/conf directory.
This portion of the host.cfg file contains the following fields:

physical host - Name of the host machine for use by the Analysis Manager

class - System & O/S type:

HP700 - Hewlett Packard HP-UX
RS6K - IBM RS/6000 AIX
SGI5 - Silicon Graphics IRIX
SUNS - Sun SPARC Solaris
LX86 - Linux (MSC or Red Hat)
WINNT - Windows 2000 or XP

maximum tasks - Maximum allowable concurrent job processes for this machine.
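As a sketch, a physical-hosts section using these fields could look like the following. The PHYSICAL_HOSTS keyword, the host names, and the column spacing are illustrative only; the class codes come from the table above.

```
PHYSICAL_HOSTS:
#physical host  class  maximum tasks
venus           SUNS   2
host1           LX86   4
pluto           WINNT  1
```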

The p3am_admin (AdmMgr) program can be used to add and delete hosts or change any field above as
shown in the forms below.

Adding a Physical Host


To add a host for use by the Analysis Manager, press the Add Physical Host button. (On Windows, right
mouse click the Physical Hosts tree tab.) A new host description will be created and displayed in the left
scrolled window, with Host Name: Unknown and Host Type: UNKNOWN.

Enter the name of the host in the Host Name box, and select the system/OS in the Host Type menu.
Additional hosts can be added by repeating this process.
When all hosts have been added, select Apply and the newly added host information will be saved in the
host.cfg file. On Windows this is Save Config Settings under the Queue pull down menu.
Note: Apply saves all configuration files: host, disk, and, if applicable, lsf or nqs.

Deleting a Host
To remove a host from use by the Analysis Manager, select the Delete Physical Host button on the
bottom of the p3am_admin (AdmMgr) form. A list of possible hosts will appear.

Select the host to be deleted by clicking on the hostname in the list. Then, select OK. The host will be
removed from the list of hosts and the list will go away.
On Windows, simply select the Host you want to delete from the Physical Hosts tree tab and press the
Delete button (or right-mouse click the host and select Delete).
When all host configurations are ready, select Apply and the revised host.cfg file will be saved,
excluding the deleted hosts. On Windows this is Save Config Settings under the Queue pull down menu.
Analysis Manager Host Configurations
In addition to specifying physical hosts, it is necessary to specify specific names by which the Analysis
Manager can recognize the actions it should take on various hosts. For example, it may be possible that
ABAQUS and MSC Nastran are configured to run on the same physical host or that two versions of MSC
Nastran are installed on the same physical host. In order to account for this, each application and physical
host has its own name or AM host name assigned to it. Host configuration for the Analysis Manager is
done via the host.cfg file located in the $P3_HOME/p3manager_files/default/conf
directory.
This portion of the host.cfg file contains the following fields:

AM hostname - Unique name for the combination of the analysis application and physical host. It can be called anything but must be unique, for example, nas68_venus.

physical host - The physical host name where the analysis application will run.

type - The unique integer ID assigned to this type of analysis. This is automatically assigned by the program and the user should not have to worry about it.

path - How this machine can find the analysis application. For MSC Nastran, this is the runtime script (typically the nast200x file); for MSC.Marc, ABAQUS, and GENERAL applications, this is the executable location.

rcpath - How this machine can find the analysis application runtime configuration file: the MSC Nastran nast200xrc file or the ABAQUS site.env file. This is not applicable to MSC.Marc or GENERAL applications and should be filled with the keyword NONE.
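Combining these fields, AM host entries take roughly the following shape, adapted from the VERSION 3 example shown later in this chapter (minus the optional group column). The paths and spacing are illustrative only.

```
AM_HOSTS:
#am_host      host    type  path             rcpath
nas68_venus   venus   1     /msc/bin/nas68   /msc/conf/nast68rc
mar_venus     venus   3     /m2001/marc      NONE
```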

The p3am_admin (AdmMgr) program can be used to add and delete AM hosts and change any field
above as shown by the forms below.

Adding an AM Host
An AM host is a unique name which the user will specify when submitting a job. Information contained
in the AM host is a combination of the physical host and application type along with the physical location
of that application. To add a specific AM host press the Add AM Host button. A new host description

will be created and displayed in the left scrolled window, with AM Host Name: Unknown, Physical Host:
UNKNOWN, and Application Type: Unknown.

Enter the unique name of the host in the AM Host Name box, and select the Physical Host that this
application will run on. The application is selected from the Application Type menu. Then, specify the
Configuration Location and Runtime Location paths in the corresponding boxes. The unique name
should reflect the name of the application to be run and where it will run. For example, if V68 of MSC
Nastran is to be run on host venus, then specify NasV68_venus as the AM host name.
The Runtime Location is the actual path to the executable or script to be run, such as /msc/bin/nas68 for
MSC Nastran. The Config Location is the actual path to the MSC Nastran rc (nast68rc) file or the
ABAQUS site.env file.
Additional AM hosts can be added by repeating this process.
For each AM host, at least one filesystem must be specified. Use the Add Filesystem capability in
Modify Config Files/Filesystems to specify a filesystem for each added host.
When all hosts have been added, select Apply and the newly added host information will be saved in the
host.cfg file. On Windows this is Save Config Settings under the Queue pull down menu. Note that
Apply saves all configuration files: host, disk, and if applicable, lsf or nqs.
For Group, see Groups (of hosts).

Deleting an AM Host
To remove a host from use by the Analysis Manager, select the Delete AM Host button. A list of possible
hosts will appear.

Select the host to be deleted by clicking on the hostname in the list. Then, select OK. The host will be removed, and the list of hosts will go away.
On Windows, simply select the AM Host you want to delete from the AM Hosts tree tab and press the
Delete button (or right-mouse click the host and select Delete).
When all host configurations are ready, select Apply and the revised host.cfg file will be saved,
excluding the deleted hosts. On Windows this is Save Config Settings under the Queue pull down menu.
Disk Configuration
In order to define filesystems to be written for scratch and database files, the Analysis Manager needs to
have a list of each file system for each host in the disk.cfg file that is to be used when running analyses.
This file contains a list of each host, a list of each file system for that host, and the file system type. There
are two different Analysis Manager file system types: NFS and local.
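As described, disk.cfg pairs each host with its filesystems and their types. A sketch of such a file is shown below; the column headings and layout are illustrative, not the literal syntax.

```
#host    filesystem      type
venus    /scratch        local
venus    /nfs/results    NFS
host1    /tmp            local
```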

Adding a Filesystem
Use the Modify Config Files/Filesystems form to specify or add a filesystem for use by the Analysis
Manager.

Press the Add Filesystem button. Then, select a host from the list provided.
There are two types of filesystems: NFS and local. Select the appropriate type for the newly added
filesystem.
Additional filesystems can be added by repeating this process. Multiple filesystems can be added for each
host. When all filesystems have been added, select Apply and the newly added filesystem information
will be saved in the disk.cfg file.
Each host must contain at least one filesystem.
After adding a host or filesystem, test the configuration information using the Test Configuration form.
See Test Configuration.
Note: When using the Analysis Manager with LSF or NQS, you must run the administration program and start a Queue Manager on the same machine where the LSF or NQS executables are located.

On Windows the form appears as below:

When an AM Host is created, one filesystem is created by default (c:\temp). You can add more filesystems to an AM Host under the Disk Space tree tab by pressing the Add button. You can change the directory path by clicking on the Directory itself and editing it in the normal Windows manner. The Type is changed with the pulldown menu next to the Directory name. If the filesystem is a Unix filesystem, make sure you remove the c:, e.g., /tmp.
Deleting a Filesystem
At the bottom of the Modify Config Files/Filesystems form, select the Delete Filesystem button to
delete a filesystem from use by the Analysis Manager.
Then, select a host from the list provided, and click OK.
After selecting a host, a list of filesystems defined for the chosen host will appear. Choose the filesystem
to delete from this list and click OK.
On Windows, select the AM Host under the Disk Space tree tab and press the Delete button. The last
filesystem created is deleted.

Additional filesystems can be deleted by repeating this process. When all appropriate filesystems have
been deleted, select Apply and the updated filesystem information will be saved in the disk.cfg file. On
Windows this is Save Config Settings under the Queue pull down menu.
Queue Configuration
If the LSF or NQS scheduling system is being used at this site, the Analysis Manager can interact with it
using the queue configuration file (i.e., lsf.cfg or nqs.cfg). Ensure that LSF or NQS Queue is set for the
Queue Type field in the Modify Config Files form. See Analysis Manager Host Configurations. This sets
a line in the host.cfg file to QUE_TYPE: LSF or NQS. The Queue Manager configuration file lists each queue name and all hosts allowed to run MSC Nastran, MSC.Marc, ABAQUS, or other GENERAL applications for that queue. In addition, a queue install path is required so that the Analysis Manager can execute queue commands with the proper path.
Note: NQS and LSF are only supported by Unix platform Queue Managers. Although you can submit to an LSF or NQS queue from Windows to a Unix platform, the Windows Queue Manager does not support LSF or NQS submittals at this time.

Adding a Queue
To add a queue for use by the Analysis Manager, press the Add Queue button on the bottom of the
p3am_admin (AdmMgr) form. A new queue description will be created and displayed on the left panel,
with MSC Queue Name: Unknown and LSF (or NQS) Queue Name: Unknown.

Enter the names of the queue in the MSC Queue Name and LSF (or NQS) Queue Name boxes provided. These names can be the same or different. In addition, the administrator must choose one or more hosts from the listbox to the right of the specified queue name. The hosts in this listbox only appear after selecting an application from the Application pulldown menu; only those hosts configured to run that application will appear in the list. These are the hosts which will be allowed to run the analysis application when a job is submitted to that queue.
Additional queues can be added by repeating this process. When all queues have been added, press Apply and the newly added queue information will be saved in the lsf.cfg (or nqs.cfg) file.
Various pieces of information must be supplied for the Analysis Manager to communicate properly with the queueing software. The most important is the Executable Path: enter the full path where the NQS or LSF executables can be found. In addition, you may specify additional (optional) parameters for the NQS or LSF executables to use if necessary. Keywords can also be used; the description of how these keywords work can be found in General. Two keywords are available, MEM and DISK, which are evaluated to the Minimum MEMory and DISK space that have been specified. For example, if an NQS command has these additional parameters: -nr -lm $MEM -lf $DISK

then the submission will be:

qsub -nr -lm <current MEM value> -lf <current DISK value>

where the current MEMory value is the larger of the MEMory specified here or the general memory requirement specified by the user. The current DISK space value operates similarly. See Memory and/or Disk Space. The MEM and DISK specified here are only used if additional parameters using the keywords are supplied.
Deleting a Queue
To remove a queue from use by the Analysis Manager, press the Delete Queue button on the bottom of
the p3am_admin (AdmMgr) form. A list of possible queues will appear.
Select the queue to be deleted by clicking on the queue name in the list. Then, select OK. The queue will
be removed from the list of queues and the list of queues will go away.
When the queue configuration is ready, select Apply and the revised lsf.cfg (or nqs.cfg) file will be saved,
excluding the deleted queues.
Groups (of hosts)
This is a nice feature that allows you to define a group. This group attribute can then be assigned to an
AM Host. All AM Hosts with this attribute are grouped together and when a job is submitted, the host
in this group list that matches the least loaded criteria is the one selected for job submission. This is a
semi-automatic host selection mechanism based on certain criteria that is explained below.

This version of the Analysis Manager supports the concept of groups of hosts. If, in the host.cfg file, you specify VERSION: 3 as the first non-commented line and you also add the group/queue name at the end of each am_host line in the AM_HOSTS section, then you will have enabled this feature. Here is an example:
VERSION: 3
...
AM_HOSTS:
#am_host      host    type  bin path         rc path               group
#------------------------------------------------------------------------------
N2004_hst1    host1   1     /msc/bin/n2004   /msc/conf/nast2004rc  grp_nas2004
N2004_hst2    host2   1     /msc/bin/n2004   /msc/conf/nast2004rc  grp_nas2004
N2004_hst3    host3   1     /msc/bin/n2004   /msc/conf/nast2004rc  grp_nas2004
N2001_hst1    host1   1     /msc/bin/n2001   /msc/conf/nast2001rc  grp_nas2001
N2001_hst2    host2   1     /msc/bin/n2001   /msc/conf/nast2001rc  grp_nas2001
N2001_hst3    host3   1     /msc/bin/n2001   /msc/conf/nast2001rc  grp_nas2001
M2001_hst1    host1   3     /m2001/marc      NONE                  grp_mar2001
M2001_hst2    host2   3     /m2001/marc      NONE                  grp_mar2001
M2001_hst3    host3   3     /m2001/marc      NONE                  grp_mar2001
...

In this configuration, when you submit a job, you will also have the choice of the group name, shown with the label 'least-loaded-grp:<group name>' to distinguish it from regular host names. When you select this group instead of a regular host, the Analysis Manager will then decide

which host from the list of those in the group is best suited to run the job, and start it there when possible. Here, best suited means the next available host based on several factors, including:
Free tasks on each host (Maximum currently running jobs)
Cpu utilization of host
Available memory of host
Free disk space of host
Time since most recent job was started on host

If, in the above example, you submitted an MSC Nastran job to grp_nas2004, then there are three machines the Analysis Manager could select to run the job: host1, host2, or host3. The Analysis Manager will query each host for the current cpu utilization, available memory, and free disk space (as configured by the Analysis Manager), and also the free tasks and the time since an Analysis Manager job was last started, and figure out which, if any, machine can run the job. If more than one machine can run the job based on the criteria above, then the Analysis Manager will select the best suited host by sorting the acceptable hosts in a user-selectable sort order. If no machines have met the criteria, then the job remains queued, and the Analysis Manager will try again to find a suitable host at periodic intervals. The user-selectable sort order is specified in an optional configuration file called msc.cfg. If this file does not exist, then the sort order and criteria are as follows:
free_tasks
cpu_util
avail_mem
free_disk
last_job_time

Where the defaults for cpu util, available mem and disk are:
Cpu util: 98
Available mem: 5 mb
Available disk: 10 mb

Thus any host that has cpu util < 98, available mem > 5 mb, available disk > 10 mb, and at least one free task (so it can start another Analysis Manager job) is eligible to run a job, and the best suited host will be the one that comes first after a sort on all eligible hosts is done. You can change the sort order and the defaults for cpu util, available mem, and disk in the msc.cfg file. The msc.cfg file exists in the same location as host.cfg and disk.cfg; its format is explained in Group/Queue Feature.
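The selection logic above can be sketched in a few lines of Python. This is an illustration of the algorithm as described, not the Analysis Manager's actual implementation; the field names are invented for the example.

```python
# Sketch of least-loaded host selection: filter out hosts that fail the
# documented default thresholds, then sort the eligible hosts by the
# default sort order and pick the first.

CPU_UTIL_MAX = 98    # percent; host must be below this
AVAIL_MEM_MIN = 5    # mb; host must have more than this
FREE_DISK_MIN = 10   # mb; host must have more than this

def eligible(host):
    """A host can accept a job only if it passes every threshold
    and has at least one free task slot."""
    return (host["free_tasks"] >= 1
            and host["cpu_util"] < CPU_UTIL_MAX
            and host["avail_mem"] > AVAIL_MEM_MIN
            and host["free_disk"] > FREE_DISK_MIN)

def sort_key(host):
    """Mirrors the default sort order: free_tasks, cpu_util, avail_mem,
    free_disk, last_job_time.  'Bigger is better' fields are negated so
    the smallest key belongs to the best-suited host."""
    return (-host["free_tasks"], host["cpu_util"],
            -host["avail_mem"], -host["free_disk"],
            host["last_job_time"])

def pick_host(hosts):
    """Return the name of the best-suited host, or None to leave the
    job queued for another attempt at the next periodic interval."""
    candidates = [h for h in hosts if eligible(h)]
    if not candidates:
        return None
    return min(candidates, key=sort_key)["name"]
```

A host at 99% cpu utilization or with no free task slot is filtered out before the sort ever runs, matching the behavior described above.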

Test Configuration
The p3am_admin (AdmMgr) program has various tests that facilitate verification of the
configuration.

Application Test
Changes to the host.cfg file dealing with defined applications can be tested by selecting the Test
Configuration/Applications option. The Applications Test form will appear when the Application
Test button is pressed. On Windows press the Test Configuration button under Adminstration

This test checks to make sure that:


1. Applications are defined only once.
2. At least one AM host has been assigned on which the application can be executed.
Physical Hosts Test
Changes to portions of the host.cfg file dealing with physical hosts can be tested by selecting the Test
Configuration/Physical Hosts option.

At the bottom left of the form are two buttons:


1. Basic Host Test.
2. Network Host Test.
Basic Host Test
The Basic Host Test will validate the host configuration information in the host.cfg file. There are no
requirements for running the Basic Host Test. A message box provides status information as each of the
following Basic Host Tests are run:
1. Validates that at least one host is present in the host.cfg file.
2. Ensures that each host specified is a valid host with the nameserver (i.e., makes sure the machine that the p3am_admin (AdmMgr) program is running on recognizes each of the host names provided).
3. Ensures that a valid Host Type has been provided for each of the hosts (when a new host is added, the Host Type is set to Unknown; this makes sure the user changed it to something valid).
4. Checks that a Master host has been selected.
5. Makes sure two hosts with the same address were not specified.
If a problem is detected, close the form and return to the Modify Config File form to correct
the configuration.

Network Host Test


The Network Host Test will validate all of the physical host configuration information in the host.cfg
file, and validate communication paths between hosts.
Requirements to run the Network Host Test include:
1. Must be root.
2. Must be on the Master node (the host running the Queue Manager).
3. Must provide a username (each user must be tested separately).
A message box provides status information as each of the following network host tests is run:
1. Checks user remote command (rcmd) access between the Master node and other specified hosts.
2. Validates that each host has the correct architecture setting.
3. Makes sure that each host sees the installation directory in the same way.
4. Checks the Analysis Manager directories (i.e., makes sure user can read all configuration files;
ensures he can create a directory under proj directory and other locations).
If a problem is detected, close the form and return to the Modify Config Files form to correct the
configuration or exit to the system to correct the problem. It is highly recommended that you run the
Network Host Test for each user who wants to use the Analysis Manager.

AM Hosts Test
Changes to portions of the host.cfg file dealing with the AM hosts can be tested by selecting the Test
Configuration/AM Hosts option.

At the bottom left of the form are two buttons:


1. Basic AM Host Test.
2. Network AM Host Test.
Basic AM Host Test
The Basic AM Host Test will validate the AM host configuration information in the host.cfg file. There
are no requirements for running the Basic AM Host Test. A message box provides status information as
each of the following Basic Host Tests are run:
1. Validates that at least one AM host is present in the host.cfg file.
2. Ensures that each AM host specified is a valid name.
3. Ensures that each AM host has a physical host assigned.
4. Checks that an application has been assigned to each AM host.
5. Checks that the configuration file exists for each AM host. This is applicable to MSC Nastran and
ABAQUS only.
6. Checks that the actual executable is accessible and has the proper privileges.
7. Makes sure two AM hosts with the same names are not specified.

If a problem is detected, close the form and return to the Modify Config File form to correct the
configuration.
Network AM Host Test
The Network AM Host Test will validate all of the AM host configuration information in the host.cfg
file, and validate communication paths between hosts.
Requirements to run the Network AM Host Test include:
1. Must be root.
2. Must be on Master node.
3. Must provide a username.
A message box provides status information as each of the following network AM host tests is run:
1. Checks the location and privileges of the configuration file for each application.
2. Checks the runtime location (executable) for each application and privileges.
If a problem is detected, close the form and return to the Modify Config Files form to correct the
configuration or exit to the system to correct the problem. It is highly recommended that you run the
Network Host Test for each user who wants to use the Analysis Manager.
Disk (Filesystem) Test
Changes to the disk.cfg file can be tested by selecting the Test Configuration/ Disk Configuration
option. The test disk configuration form will appear.

At the bottom left of the form are two buttons:


1. Basic Disk Test.
2. Network Disk Test.
Basic Disk Test
The Basic Disk Test will validate the disk configuration information. There are no requirements for
running the Basic Disk Test. A message box provides status information as each of the following basic
disk tests are run:
1. Makes sure at least one filesystem is defined for each host.
2. Makes sure there is a value for each filesystem (i.e., no empty entries), and that the entries are absolute paths which start with a /.
3. Checks the length of the filesystem definitions. If a definition is longer than 25 characters, a warning is provided, as this may cause problems.
If a problem is detected, close the form and return to the Modify Config Files form to correct the
disk configuration.

Network Disk Test


The Network Disk Test will validate all of the disk configuration information. A message box provides
status information as each test is run.
Requirements to run the Network Disk Test include:
1. Must be root.
2. Must be on Master node.
3. Must provide a username.
A message box provides status information as each of the following network disk tests are run:
1. Check that each filesystem exists for each host, and that it can be written to by the provided user.
If a problem is detected, close the form and return to the Modify Config Files form to correct the disk
configuration or exit to the system to correct the problem. It is highly recommended that you run the
Network Disk Test for each user who wants to use the Analysis Manager.
Queue Test
Changes to the lsf.cfg queue configuration file can be tested by selecting the Test Configuration/Queue
Configuration option. The test queue configuration form will appear.

At the bottom left of the form are two buttons:


1. Basic Queue Test.

2. Advanced Queue Test.


Basic Queue Test
The Basic Queue Test will validate the queue configuration information in the lsf.cfg or nqs.cfg file. A
queueing system (i.e., LSF) must be defined to run the Basic Queue Test. A message box provides status
information as each of the following basic queue tests are run:
1. Makes sure at least one queue has been specified.
2. Makes sure that at least one application has been specified per queue.
3. Makes sure a unique MSC queue name has been specified.
4. Makes sure that the LSF or NQS queue is unique and exists.
5. Makes sure that for each queue at least one physical host is specified as a member of the queue.
6. Makes sure the LSF or NQS executables path has been specified and that it is an absolute path.
If a problem is detected, close the form and return to the Modify Config Files form to correct the
configuration.
Advanced Queue Test
The Advanced Queue Test will validate all of the queue configuration information (i.e., the lsf.cfg or
nqs.cfg file).
Requirements to run the Advanced Queue Test include:
1. Must be using LSF or NQS for queue management.
2. Must be on Master node.
3. Must provide a username.
4. Must be root.
A message box provides status information as each of the following network queue tests is run:
1. Makes sure the LSF executables (bsub, bkill, bjobs) or the NQS executables (qsub, qdel, qstart)
are in the specified location, and are executable by the provided user.
If a problem is detected, close the form and return to the Modify Config Files form to correct the queue
configuration or exit to the system to correct the problem. It is highly recommended that you run the
Advanced Queue Test for each user who wants to use the Analysis Manager.

Queue Manager
This simply allows any changes in the configuration files that may have been implemented during a
p3am_admin (AdmMgr) session to be applied. If the configuration files are owned by root then you must
have root access to change them. Once they have been changed, in order for the QueMgr to recognize
them, it must be reconfigured. Simply press the Apply button with the Restart QueMgr toggle selected.

This forces the Queue Manager to reread the configuration files. Once the Queue Manager has been
reconfigured, new jobs submitted will use the updated configuration.
If a reconfiguration is issued while jobs are currently running, then those jobs are allowed to finish before
the reconfiguration occurs. During this period, the Queue Manager is said to be in drain mode, not
accepting any new jobs until all old jobs are complete and the Queue Manager has reconfigured itself.
The Queue Manager can also be halted immediately (which kills any job running) or can be halted after
it is drained.

When the Queue Manager is halted, the three toggles on the right side change to one toggle to allow the
Queue Manager to be started. All configurations that are being used are shown on the left. When the
Queue Manager is halted, you may change some of the configurations on the left side, such as Port, Log
File, and Log File User before starting the daemon again. For more information on the Queue Manager
see Starting the Queue/Remote Managers.
On Windows you can Start and Stop the Queue Manager from the Queue pulldown menu when you are
in the Administration tree tab.

Or you can right mouse click the Administration tree tab and the choices to Read or Save configuration
file or Start and Stop the Queue Manager are also available.

Examples of Configuration Files


The Analysis Manager is a distributed application and requires a predefined set of configuration files for
its use. These configuration files may be changed using the Configuration Management tool (called
p3am_admin (AdmMgr)) (see Configuration Management Interface) or they may be edited by hand.
When one or more of the configuration files is changed, the Queue Manager must either be restarted or
reconfigured to force it to read and recognize the changes.
Host Configuration File
To set up and execute different applications on a variety of physical computer hosts, the Analysis
Manager uses a host configuration file (host.cfg) located in the directory:
$P3_HOME/p3manager_files/default/conf
where $P3_HOME is the location of the installation, typically /msc.
The host.cfg file contains five distinct areas of information: administrator, Queue Type, AM Host
information, Physical Host information, and Application information, in this order. The queue type may
be MSC, LSF, or NQS. The administrator is any valid user name except root (or Administrator on
Windows).
The AM host information has these fields associated with it:

AM Hostname
    A unique name for the combination of the analysis application and the
    physical host. It can be called anything but must be unique, for example
    nas68_venus.

Physical Host
    The physical host name where the analysis application will run.

Type
    The unique integer ID assigned to this type of analysis. This is assigned
    automatically by the program, so the user should not have to worry about it.

Path
    How this machine can find the analysis application. For MSC Nastran, this
    is the runtime script (typically the nast68 file); for MSC.Marc, ABAQUS, or
    GENERAL applications, this is the executable location.

rcpath
    How this machine can find the analysis application runtime configuration
    file: the MSC Nastran nast68rc file or the ABAQUS site.env file. This is
    not applicable to MSC.Marc or GENERAL applications and should be filled
    with the keyword NONE.

The physical host information has the following fields associated with it:

Physical Host
    Name of the host machine for the use of the Analysis Manager.

Class
    Machine type (RS6K, HP700, etc.).

Max
    Maximum allowable concurrent processes for this machine.

MaxAppTsk
    Maximum application tasks. This is used when, for example, four MSC Nastran
    hosts are configured but there are only enough licenses for three
    concurrent jobs. Without this setting the fourth job would always fail.
    With MaxAppTsk set to 3, the fourth job waits in the queue until one of the
    running jobs completes, and is then submitted. This field is ONLY present
    if the configuration file version is >= 2, which is set with the VERS: or
    VERSION: field at the top of the file.

Note:

The MaxAppTsk setting must be added manually. There is no widget in the AdmMgr to do
this. If there are NO configuration files on start up of the AdmMgr, then it will set the version
to 2 and use 1000 as the MaxAppTsk. If configuration files exist and version 2 is set, it will
honor whatever is already there and pass them through. If version 1 is set, then MaxAppTsk
is not written to the configuration files.

The application information has the following fields associated with it:

Type
    A number indicating the analysis program type.

Prog_name
    The name of the application job manager for this application.

Patran name
    The name of the application, which corresponds to the Patran analysis
    preference.

Options
    Optional arguments for use with the GENERAL application.

If the scheduling system is a separate package (e.g., LSF or NQS), then the Analysis Manager submits
jobs to one of the provided queues. Queues are described below. Also, if the scheduler is separate from the
Analysis Manager, the maximum task field is not used: all tasks are submitted through the queue,
and the queueing system executes or holds each task according to its own configuration. An example
of a host.cfg file is given below. Each comment line must begin with a # character. All fields are
separated by one or more spaces. All fields must be present.
#------------------------------------------------------
# Analysis Manager host.cfg file
#------------------------------------------------------
#
# A/M Config file version
# Que Type: possible choices are MSC, LSF, or NQS
#
VERSION: 2
ADMIN: am_admin
QUE_TYPE: MSC
#
#------------------------------------------------------
# AM HOSTS Section
#------------------------------------------------------
# Must start with a P3AM_HOSTS: tag.
#

# AM Host:
# Name to represent the choice as it will appear
# on the AM menus.
#
# Physical Host:
# Actual hostname of the machine to run the application on.
#
# Type:
# 1 - MSC.Nastran
# 2 - ABAQUS
# 3 - MSC.Marc
# 20 - User defined (General) application #1
# 21 - User defined (General) application #2
# etc. (max of 29)
#
# This field defines the application for this entry.
# Each value will have a corresponding entry in the
# APPLICATIONS section.
#
# EXE_Path:
# Where the executable for this entry is found.
#
# RC_Path:
# Where runtime configuration file (if present) is found.
# Set to NONE if General application.
#
#------------------------------------------------------
# Physical Hosts Section
#------------------------------------------------------
# Must start with a PHYSICAL_HOSTS: tag.
#
# Class:
# HP700 - Hewlett Packard HP-UX
# RS6K - IBM RS/6000 AIX
# SGI5 - Silicon Graphics IRIX
# SUNS - Sun Solaris
# LX86 - Linux
# WINNT - Windows
#
# Max:
#
# Maximum allowable concurrent tasks for this host.
#
#------------------------------------------------------
# Applications Section
#------------------------------------------------------
# Must start with an APPLICATIONS: tag.
#
# Type: See above for values
# Prog_name:
#
# The name of the Patran AM Task Manager executable to start.
#
# This field must be set to the following, based on the
# application it represents:
#
# MSC.Nastran -> NasMgr
# HKS/ABAQUS -> AbaMgr

# MSC.Marc -> MarMgr
# Any General App -> GenMgr
#
# option args:
#
# This field contains the default command line which will
# appear in the AM user interface configure menu. This
# field is only valid for user defined (General) applications.
# The command line can contain any text including any of the
# following keywords (which will be evaluated at runtime):
#
# $JOBFILE Actual filename selected (w/o full path)
# $JOBNAME Jobname ($JOBFILE w/o extension)
# $P3AMHOST Hostname of AM host
# $P3AMDIR Dir on AM host where $JOBFILE resides
# $APPNAME Application name (P3 preference name)
# $PROJ Project Name selected
# $DISK Total Disk space requested (mb)
#
#
# AM Host       Physical Host  Type  EXE_Path                 RC_Path
#--------------------------------------------------------------------
P3AM_HOSTS:
Venus_nas675    venus          1     /msc/msc675/bin/nast675  /msc/msc675/conf/nast675rc
Venus_nas68     venus          1     /msc/msc68/bin/nast68    /msc/msc68/conf/nast68rc
Venus_aba53     venus          2     /hks/abaqus              /hks/site/abaqus.env
Venus_mycode    venus          20    /mycode/script           NONE
Mars_nas68      mars           1     /msc/msc68/bin/nast68    /msc/msc68/conf/nast68rc
Mars_aba5       mars           2     /hks/abaqus              /hks/site/abaqus.env
Mars_mycode     mars           20    /mycode/script           NONE
#--------------------------------------------------------------------
#
# Physical Host  Class  Max
#--------------------------------------------------------------------
PHYSICAL_HOSTS:
venus            SGI4D
mars             SUN4
#--------------------------------------------------------------------
#
# Type  Prog_name  MSC P3 name  MaxAppTsk  [option args]
#--------------------------------------------------------------------
APPLICATIONS:
1       NasMgr     MSC.Nastran
2       AbaMgr     ABAQUS
3       MarMgr     MSC.Marc
20      GenMgr     MYCODE                  -j $JOBNAME -f $JOBFILE
#--------------------------------------------------------------------
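Because every field of every data line must be present before the Queue Manager will accept the file, a quick format check can save a reconfigure cycle. The following is a minimal sketch, not part of the product: the awk program and the sample file it builds are hypothetical, and it only checks that each P3AM_HOSTS data line carries five fields.

```shell
# Hypothetical sanity check: confirm that every data line in a host.cfg
# P3AM_HOSTS section carries all five required fields.
# The sample file below deliberately omits the Type field on one line.
dir=$(mktemp -d)
cat > "$dir/host.cfg" <<'EOF'
# AM Host       Physical Host  Type  EXE_Path        RC_Path
P3AM_HOSTS:
Venus_mycode    venus          20    /mycode/script  NONE
Mars_mycode     mars           /mycode/script  NONE
PHYSICAL_HOSTS:
EOF
awk '
    /^#/               { next }              # skip comment lines
    /^P3AM_HOSTS:/     { in_hosts = 1; next }
    /^PHYSICAL_HOSTS:/ { in_hosts = 0 }
    in_hosts && NF > 0 && NF != 5 {
        printf "line %d: expected 5 fields, found %d\n", NR, NF
    }
' "$dir/host.cfg"
```

Run against a real host.cfg, any line it reports is one the Queue Manager is likely to reject.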

Disk Configuration File

This configuration file defines the scratch disk space and disk systems to use for temporary files and
databases. Every AM host must have a filesystem associated with it.
In particular, the Analysis Manager's MSC Nastran Manager (NasMgr) generates MSC Nastran File
Management Section (FMS) statements for each job submitted. The FMS statements initialize and
allocate each MSC Nastran scratch and database file for each job. To define the files to be written for
each scratch and database logical file, the Analysis Manager uses the disk configuration file (called
disk.cfg) to learn which file systems to use, for each host in the host.cfg file, when running MSC
Nastran. The disk configuration file therefore contains a list of each host, a list of each file system for
that host, and the file system type. There are two Analysis Manager file system types: nfs or local
(leave the field blank). An example of the disk.cfg file:
#--------------------------------------------------------------------
# Analysis Manager disk.cfg file
#--------------------------------------------------------------------
#
# AM Host
#
# AM host from the host.cfg file P3AM_HOSTS section.
#
# File System
#
# The filesystem directory.
#
# Type
#
# The type of filesystem. If the filesystem is local
# to the machine, this field is left blank. If the
# filesystem is NFS mounted, the string nfs appears
# in this field.
#
#
# AM Host       File System                 Type (nfs or blank)
#--------------------------------------------------------------------
Venus_nas675    /user2/nas_scratch
Venus_nas675    /venus/users/nas_scratch
#
Venus_nas68     /user2/nas_scratch
Venus_nas68     /venus/users/nas_scratch
Venus_nas68     /tmp
#
Venus_aba53     /user2/aba_scratch
Venus_aba53     /venus/users/aba_scratch
Venus_aba53     /tmp
#
Venus_mycode    /tmp
Venus_mycode    /server/scratch             nfs
#
Mars_nas68      /mars/nas_scratch
#
Mars_aba5.2     /mars/users/aba_scratch
Mars_aba5.2     /tmp
#
Mars_mycode     /tmp
#--------------------------------------------------------------------

Each comment line must begin with a # character. All fields are separated by one or more spaces. All
fields must be present.
In this example, the term file system is used to mean a directory that may or may not be its own file
system, that already exists, and whose permissions allow any Analysis Manager user to create
directories below it. It is recommended that the Analysis Manager file systems be directories with large
amounts of disk space, restricted to the Analysis Manager's use, because the Analysis Manager's
MSC Nastran, MSC.Marc, ABAQUS, and GENERAL Managers only know about their own jobs
and processes.
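The requirement that any Analysis Manager user be able to create directories below a listed filesystem can be met with permissions like those of /tmp. A minimal sketch, assuming a Unix host; the path is a stand-in for a real site path such as /user2/nas_scratch, and mode 1777 is one common choice, not mandated by the manual:

```shell
# Sketch: prepare a scratch directory before listing it in disk.cfg.
# SCRATCH is a placeholder; substitute your real site path.
SCRATCH=${SCRATCH:-$(mktemp -d)/nas_scratch}
mkdir -p "$SCRATCH"
# Mode 1777 (world-writable plus the sticky bit, as on /tmp) lets any
# Analysis Manager user create job directories below it while protecting
# other users' files from deletion.
chmod 1777 "$SCRATCH"
ls -ld "$SCRATCH"
```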
Queue Configuration File
If a separate scheduling system (i.e., LSF or NQS) is being used at the site, the Analysis Manager can
interact with it using the queue configuration file. This file has the same name as the queue type
field in the host.cfg file (i.e., QUE_TYPE: LSF or NQS), with a .cfg extension (i.e., lsf.cfg or
nqs.cfg). The queue configuration file lists each queue name and all hosts allowed to run
applications for that queue. In addition, a queue install path is required so that the Analysis Manager can
execute queue commands with the proper path. An example of a queue configuration file is
given below.
Each comment line must begin with a # character. All fields are separated by one or more spaces. All
fields must be present.
#------------------------------------------------------
# Analysis Manager lsf.cfg file
#------------------------------------------------------
#
# Below is the location (path) of the LSF executables (i.e. bsub)
#
QUE_PATH: /lsf/bin
QUE_OPTIONS:
QUE_MIN_MEM:
QUE_MIN_DISK:
#
# Below, each queue which will execute MSC tasks is listed.
# Each queue contains a list of hosts (from host.cfg) which
# are eligible to run tasks from the given queue.

#
# NOTE:
# Each queue can only contain one host of a given application
# version (i.e., if there are two version entries for
# MSC.Nastran, nas67 and nas68, then each queue
# set up to run MSC.Nastran tasks could only include
# one of these versions. To be able to submit to
# the other version, create a separate, additional
# MSC queue containing the same LSF queue name, but
# referencing the other version)
#
#
TYPE: 1
# MSC Que        LSF Que    Hosts
#---------------------------------------------------------
Priority_nas     priority   mars_nas675, venus_nas675
Normal_nas       normal     mars_nas675, venus_nas675
Night_nas        night      mars_nas675
#---------------------------------------------------------
#
TYPE: 2
# MSC Que        LSF Que    Hosts
#---------------------------------------------------------
Priority_aba     priority   mars_aba53, venus_aba53
Normal_aba       normal     mars_aba53, venus_aba53
Night_aba        night      mars_aba53, venus_aba53
#---------------------------------------------------------

Organizational Group Configuration File

If you wish to link all organizational groups (running Queue Managers) together, so that any user may
see and switch between these organizational groups from within the Analysis Manager without setting
any environment variables, it is necessary to create an org.cfg file in the top-level directory:
$P3_HOME/p3manager_files/org.cfg
where $P3_HOME is the Patran installation location.

Three fields are required in this file:

org
    The organizational group name.

master host
    The host on which the Queue Manager daemon is running for the particular
    organizational group in question.

port #
    The unique port ID used for this Queue Manager daemon. Each Queue Manager
    must have been started with the -port option.
An example of this configuration file follows:


#------------------------------------------------------
# Patran ANALYSIS MANAGER org.cfg file
#------------------------------------------------------
# Org       Master Host   Port #
#------------------------------------------------------
default     casablanca    1500
atf         atf_ibm       1501
lsf_atf     atf_sgi       1502
support     umea          1503

Separate Users Configuration File

In order to allow execution of analysis jobs on machines where the user does not have an account, the
system administrator may have to set up special accounts or allow access to other accounts for users to
submit jobs. This is done with the .p3amusers configuration file located in
$P3_HOME/p3manager_files/<org>/conf.
The file contents are very simple: one user account name per line, each naming an account that other
users are allowed to use when submitting jobs. As an example:
user1
user2
sippola
smith
The filename begins with a period (.), meaning it will be hidden by a normal directory
listing command. The Queue Manager daemon must be restarted once this file is created or modified.
Note:

Any user account that is configured in this manner must exist not only on the machine
where the analysis is going to run, but also on the machine from which the job was
submitted.
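The existence requirement in the note can be checked ahead of time. A sketch for a Unix host; the account names are illustrative (user1 mirrors the example file above, and root is included only so the loop reports at least one existing account on any host):

```shell
# Sketch: check that each account named in .p3amusers exists on this machine.
# The same check must pass on both the submit host and the analysis host.
for user in root user1; do
    if id "$user" >/dev/null 2>&1; then
        echo "$user: exists on this host"
    else
        echo "$user: MISSING on this host"
    fi
done
```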

The capability or necessity of this separate users file has been somewhat obsoleted. In general the
following applies:
1. On Unix machines, if RmtMgrs are running as root, they can run the job as the user (or the
separate user specified by this file) with no problem.
2. On Unix machines, if RmtMgrs are running as a specific user, the job will run as that user
regardless of the user (or separate user) who submitted the job.
3. On Windows, the job runs as whoever is running the RmtMgr on the PC. The user (and
separate user) is ignored.
Group/Queue Feature
This configuration file, msc.cfg, allows the default least-loaded criteria to be modified when using the
host grouping feature, which automatically selects the least-loaded machine to submit to. The file
contents look like:
SORT_ORDER: free_tasks cpu_util last_job_time avail_mem free_disk
GROUP: grp_nas2004
MIN_DISK: 10
MIN_MEM: 5
MAX_CPU_UTIL: 95
The SORT_ORDER line lists the names of the sort criteria in the order in which you want eligible hosts
sorted. The remaining lines override the defaults for a particular group; you must define a separate set
of GROUP, MIN_DISK, MIN_MEM, and MAX_CPU_UTIL entries for each group whose defaults you
want to change.
A group cannot contain multiple entries that use the same physical host (e.g., nast2004_host1 and
nast2001_host1 in the above example), because the Analysis Manager would not know which to use.
In this case just create another group name (grp_nas2001, as above) and it will work as expected. You
can have different applications in the same group with no problems. In the above example you could
have used grp_nas2004 as the group name for all the MSC Nastran entries (possibly changing the name
of the group to make it clearer that it is for hosts which run MSC Nastran), or you can keep them separate
with the added flexibility of defining a different sort order and util/mem/disk criteria for each
application/group.
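Since the GROUP, MIN_DISK, MIN_MEM, and MAX_CPU_UTIL entries repeat per group, an msc.cfg that overrides the defaults for both groups discussed above might look like the following sketch (the values for the second group are illustrative, not from a real site):

```
SORT_ORDER: free_tasks cpu_util last_job_time avail_mem free_disk
#
GROUP: grp_nas2004
MIN_DISK: 10
MIN_MEM: 5
MAX_CPU_UTIL: 95
#
GROUP: grp_nas2001
MIN_DISK: 20
MIN_MEM: 10
MAX_CPU_UTIL: 90
```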

Starting the Queue/Remote Managers


This discussion pertains to Unix machines; see the end of this section for a discussion of Windows. The
Queue and Remote Managers are the two daemons (or services on Windows) which run continuously.
The Queue Manager (QueMgr executable) runs on the master node. The Remote Manager (RmtMgr
executable) runs on all analysis hosts, and it is recommended that it run on all submit hosts also. The
programs are located in the following directory:
$P3_HOME/p3manager_files/bin/<arch>
where <arch> is one of the following:

HP700    Hewlett Packard HP-UX
RS6K     IBM RS/6000 AIX
SGI5     Silicon Graphics IRIX
SUNS     Sun SPARC Solaris
LX86     Linux (MSC or Red Hat)
WINNT    Windows 2000 or XP

QueMgr Usage:
The Queue Manager can be manually invoked by typing
$P3_HOME/bin/QueMgr <args>
with the arguments below:
QueMgr -path $P3_HOME -org <org> -log <logfile> -port <#>
where:

$P3_HOME
    The installation directory.

<org>
    Organization name for subgroup config files. Defaults to default.

<logfile>
    A different log filename for QueMgr. If not specified, the QueMgr.log
    located in $P3_HOME/p3manager_files/<org> is used.

<#>
    If the org.cfg file is to be used to allow users to interactively switch
    between organizational groups, the QueMgr must be started with a unique
    port ID. The ID can be any number as long as it is unique and not being
    used by anything else, and it should match that listed in org.cfg if this
    file is used.

Only the -path argument is required, unless the QueMgr is started with a full path. The QueMgr may be
started as root, although this is not a strict requirement; it is recommended to run the QueMgr as a
separate user such as the administrator account.

Example:
If the Analysis Manager is installed in /msc/patran200x and the master node is an IBM RS/6000
computer, log into the master node (as root if you want) and do the following:
/msc/patran200x/bin/QueMgr -path /msc/patran200x
If the Analysis Manager is installed on a filesystem that is not local to the master node and the QueMgr
is started as root, it is recommended that the -log option be used when starting the Queue Manager. The
-log option should be used to specify a log file which should be on a filesystem local to the master node.
Writing files as root onto network mounted filesystems is sometimes not possible. Starting the QueMgr
as a normal user solves this problem.
You may want to put this command line somewhere in a script so the Queue Manager is started as root
each time the master node is rebooted. See Starting Daemons at Boot Time.
Note:

There are other arguments that can be used when starting up the Queue Manager for more
flexibility. See Analysis Manager Program Startup Arguments.

RmtMgr Usage:
The Remote Manager can be manually invoked by typing
$P3_HOME/bin/RmtMgr
where $P3_HOME is the installation directory. No arguments are necessary unless you start it from the
directory where it resides, with ./RmtMgr, in which case you will need the -path $P3_HOME argument.
The RmtMgr should not be started as root.
Example:
If the Analysis Manager is installed in /msc/patran200x and the analysis node is an IBM RS/6000
computer, log into the analysis node and do the following:
/msc/patran200x/bin/RmtMgr -path /msc/patran200x
All other arguments not specified will be defaulted. You may want to put this command line somewhere
in a script so the Remote Manager is started each time the analysis node is rebooted. See Starting
Daemons at Boot Time.
Note:

There are other arguments that can be used when starting up the Remote Manager for more
flexibility. See Analysis Manager Program Startup Arguments.

Starting Daemons at Boot Time


To restart the QueMgr (or RmtMgr) daemon when the master host workstation is rebooted, there are a
number of things that can be done. Two are listed here for Unix platforms; the /etc/rc2 method is
recommended over the inittab method. These methods can vary from one Unix machine to another. If
you have trouble, consult your system administrator.
Windows uses services. Manually installing and configuring these services is also described below.
Unix Method: rc

As root, do the following (in general terms):

Create a file in /etc/rc2.d called Sxxz_p3am, where xx is a number as high as possible (say 99)
and the name z_p3am is simply a name. (The higher number indicates that it will be executed last of all
the scripts in this directory during startup.) In this file you place the script commands to start the QueMgr
and RmtMgr. You can also add the su command to start up the daemons as a user.
Note:

The location of the rc2.d directory may vary from computer to computer. Check /etc and /sbin.

Below is an example of a file called S99z_p3am:


#! /sbin/sh
# This script starts up the QueMgr and RmtMgr
# of the Patran Analysis Manager application.
# starts QueMgr as am_admin
su - am_admin -c "/etc/p3am_que start"
# starts the RmtMgr as am_admin
su - am_admin -c "/etc/p3am_rmt start"

This script calls another script (or two) that actually starts or stops the QueMgr and RmtMgr, though it
could have been done directly in the above script. The contents of the p3am_que script are:
#! /usr/bin/csh -f
# This service starts/stops the QueMgr used with
# the Analysis Manager application.
if ( $#argv != 1 ) then
    echo "Usage: $0 { start | stop }"
    exit 1
endif
if ( $status != 0 ) then
    echo "Cannot determine platform. Exiting..."
    exit 1
endif
set P3_HOME = /msc/patran200x
switch ( $argv[1] )
    case start:
        if ( -x ${P3_HOME}/p3manager_files/bin/SGI5/QueMgr ) then
            ${P3_HOME}/p3manager_files/bin/SGI5/QueMgr
        endif
        breaksw
    case stop:
        set quepid = `ps -eo comm,user,pid | \
            egrep "QueMgr[ ]*[am_admin]" | awk '{print $3}'`
        foreach Qproc ( $quepid )
            kill -9 $Qproc
        end
        breaksw
    default:
        echo "Usage: $0 { start | stop }"
        exit 1
        breaksw
endsw

The p3am_rmt script would be identical except that RmtMgr replaces QueMgr. This could also be done
with a single script that takes the daemon type, RmtMgr or QueMgr, as another argument and starts or
stops the one specified.
Note:

The script above is specific to starting the QueMgr on SGI machines. For other machines,
replace the SGI5 with the appropriate <arch> as described in Directory Structure.

The above script can also be used to stop the daemons. This would be done if the machine were brought
down when rebooting. In this case you use a script in the rc0.d directory with a name of Kxx_p3am,
where xx is the lowest number, such as 01, to force it to be executed first among all the scripts in this
directory. The argument to the above script would then be stop instead of start. This is used to do a clean
and proper exit of the daemons when the machine is shut down. An example of a script called
K01_p3am is:
#! /sbin/sh
# This script stops the QueMgr and RmtMgr
# of the Patran Analysis Manager application.
# stop QueMgr
/etc/p3am_que stop
# stop the RmtMgr
/etc/p3am_rmt stop
Unix Method: inittab

As root do the following:


Edit the file, /etc/inittab, and add the following line at the end:
p3am:2:once:/bin/sh /etc/p3am >/dev/null 2>&1 # MSC.AM QueMgr daemon

Note:

The number following the p3am in the above line must match the initdefault number in the inittab
file. Check this number to make sure you are using the correct one; otherwise the daemons will not
start on reboot.

Now create the file, /etc/p3am and add the following lines:
#!/bin/sh
QueMgr=$P3_HOME/bin/QueMgr
RmtMgr=$P3_HOME/bin/RmtMgr
if [ -x $QueMgr ]
then
    $QueMgr -path $P3_HOME
fi
if [ -x $RmtMgr ]
then
    $RmtMgr
fi

where $P3_HOME is the Analysis Manager installation directory, commonly referred to as $P3_HOME
throughout this manual; you must replace it with the exact path in the above example. Make sure that
this file's permissions allow execution:
chmod 755 /etc/p3am
For Windows machines:

The Queue and Remote Managers are installed as services. Once the service is installed, no further action
needs to be taken. In general the installation from the media installs these services. You will have to start
and stop them to reconfigure if you change the configuration files. If for some reason you must install the
Analysis Manager manually, and assuming that the following directory exists:
$P3_HOME\p3manager_files
follow these steps:
1. Edit the files install_server.bat and install_client.bat in
$P3_HOME\p3manager_files\bin\WINNT
and make sure that the path points to
$P3_HOME\\p3manager_files\\bin\\WINNT\\QueMgr.exe and RmtMgr.exe,
respectively. Make sure there are two back slashes between each entry.
2. Double-click the install_server.bat and install_client.bat files. This will
install the services.
3. Edit the gui_install.reg file and make sure the path is correct also with two back slashes
between each entry in the Path= field, e.g.,
Path=C:\\MSC.Software\\MSC.Patran\\2004\\p3manager_files\\bin
\\WINNT\\AnalysisMgrB.dll

4. Right mouse click the gui_install.reg file and select merge. This will merge it into the
registry. This is not necessary if you've installed from the CD. If you get a message saying, "No
doctemplate is loaded. Cannot create new document." it is because you have not merged this
file into the registry, or the path was incorrect.
5. Optional: You may want the Queue and Remote Manager services to startup as different users
other than Administrator. To do this right mouse click My Computer and select Manage. Then
open the Services tree tab and find MSCQueMgr (or MSCRmtMgr) and select it and view
Properties from the Action pulldown menu. Under the Log On tab you can change to This
Account or select another account for the services to start up as.
6. You can start and stop the Queue Manager and/or Remote Managers from the Services form from
the previous step. However you can also use the small command files in

$P3_HOME\p3manager_files\bin\WINNT
called:
start_server.bat
start_client.bat
stop_server.bat
stop_client.bat
query_server.bat
query_client.bat
remove_server.bat
remove_client.bat
to do exactly as the file describes for starting, stopping, querying, and removing the Queue
Manager (server) service or the Remote Manager (client) service.
If you follow the above steps, manual installation should be successful. You will still have to edit your
configuration files and then reconfigure (or stop and start) the Queue Manager to read the configuration
before you will be able to successfully use the Analysis Manager. See Configuration Management
Interface.

Chapter 8: Error Messages

Error Messages
The following are possible error messages and their corresponding explanations and possible solutions.
Only messages which are not self-explanatory are elaborated upon. If you are having trouble, please
check the QueMgr.log file, usually located in the directory
$P3_HOME/p3manager_files/<org>/log or in the directory that was specified by the -log
argument when starting the QueMgr. On Windows, check the Event Log under the Administrative
Tools Control Panel (or a system log on Unix).
Note:

The directories (conf, log, proj) for each set of configuration files (organizational
structure) must have read, write, and execute (777) permission for all users. This can be the
cause of many task manager errors.
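On Unix, the permissions the note calls for can be applied with a short script. This is a sketch: the default organization and the $P3_HOME value are assumptions to adapt to your site.

```shell
# Open up conf, log, and proj for all users, as the note above requires.
P3_HOME=${P3_HOME:-/msc/patran200x}   # adjust to your installation
for d in conf log proj; do
    dir="$P3_HOME/p3manager_files/default/$d"
    if [ -d "$dir" ]; then
        chmod 777 "$dir"
        echo "set 777 on $dir"
    fi
done
```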

Sometimes errors occur because the RmtMgr is running as root or administrator on Windows yet
RmtMgr is trying to access network resources such as shared drives. For this reason it is recommended
that RmtMgr (and QueMgr) be started as a normal user.
PCL Form Messages...
Patran Analysis Manager not installed.
Check for proper installation and authorization. Check with your system administrator. The Analysis
Manager directory $P3_HOME/p3manager_files must exist and a proper license must be available
for the Analysis Manager to be accessible from within Patran.
Windows
No doctemplate is loaded. Cannot create new document.
If you get this message, it is because you have not merged the gui_install.reg file into the registry,
or the path was incorrect. See For Windows machines: in Queue Manager.
Job Manager Daemon (JobMgr) Errors
ERROR... Starting a JobMgr on local host.
JobMgr is unable to run most likely because of a permission problem. Make sure that the input deck is
being submitted from a directory that has read/write permissions set.
================
311 ERROR... Unable to start network communication on server side.
335 ERROR... Unable to initiate server communication.

JobMgr is unable to create server communication. Possible reason is the host's network interface is not
configured properly.
================
312 ERROR... Unable to start network communication on client side.
ERROR... Unable to create and connect to client socket

JobMgr is unable to create client communication. Possible reason is the host's network interface is not
configured properly.
================
ERROR... Problem in socket accept
301 ERROR... Unable to accept network connection.
ERROR... Unable to accept message
ERROR... Unable to complete network accept.

JobMgr is unable to complete the communication connection. Possible reason is the host's network
interface is not configured properly, or the network connectivity has been interrupted.
================
307 ERROR... Problem with network communication select.
ERROR... Select ready returned, but no data
ERROR... Problem in socket select
302 ERROR... Unable to read data from network connection.
306 ERROR... Data ready on network, but unable to read.
ERROR... Socket empty
327 ERROR... Data channel empty during read.
324 ERROR... Error with network communication select.
ERROR... Unknown error on select

JobMgr is unable to determine when data is available for reading. Possible cause is loss of network
connectivity.
================
ERROR... Problem reading socket message.
326 ERROR... Timeout while reading message.
325 ERROR... Error in message received.
304 ERROR... Unknown receive_message error
305 ERROR... Timeout with no responses

JobMgr received an error while trying to read data or received a timeout while waiting to read data.
Possible cause is loss in network connectivity or the sending process has terminated prematurely.
================
321 ERROR... Unable to contact QueMgr
ERROR... Timeout with no response from server.

JobMgr received an error or timeout while trying to contact the QueMgr. Possible cause is loss in
network connectivity or the QueMgr process has terminated prematurely.
================
ERROR... Unable to accept connection from A/M
ERROR... Timeout with no response from A/M
ERROR... Unable to contact ANALYSIS MANAGER interface.

JobMgr received an error or timeout while trying to contact the Analysis Manager interface. Possible
cause is loss in network connectivity or the Analysis Manager interface process has terminated (either
prematurely or by user intervention).
================

164

Patran Analysis Manager Users Guide


Error Messages

339 ERROR... Unable to initiate config network communication.
328 ERROR... Unable to receive gen_config struct
329 ERROR... Unable to receive app_config struct
330 ERROR... Unable to receive app_submit struct
ERROR... Unable to receive general config info.
ERROR... Unable to receive application specific config info.
ERROR... Unable to receive application specific submit info.

JobMgr is unable to receive configuration information from the Analysis Manager interface for a submit
job request. Possible cause is loss in network connectivity or premature termination of the Analysis
Manager interface process.
================
340 ERROR... Unable to send general config info.
341 ERROR... Unable to send application specific config info.
342 ERROR... Unable to send application specific submit info.

JobMgr is unable to send configuration information to the [Aba,Gen,Mar,Nas]Mgr process. Possible
cause is loss in network connectivity or premature termination of the [Aba,Gen,Mar,Nas]Mgr process.
================
ERROR... Out of memory
303 ERROR... Unable to alloc memory.
ERROR... Unable to alloc mem for file sys max
ERROR... Unable to alloc mem for file sys space
ERROR... Unable to alloc mem for file sys names

This indicates the workstation is out of memory. Free up memory used by other processes and try to
submit again at a later time.
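Where freeing memory is the fix, it helps to see which processes hold the most of it. A minimal sketch using standard ps(1); column selection and sort behavior can vary by platform:

```shell
# List the processes with the largest resident-set sizes (rss in KB).
# "rss=" / "comm=" suppress headers on POSIX-style ps implementations.
top_mem() {
    count=${1:-5}
    ps -eo rss=,comm= | sort -rn | head -n "$count"
}

top_mem 5
```

Terminate or reschedule the largest consumers, then submit the job again.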
================
344 ERROR... Unable to determine mail flag from config struct
ERROR... Unable to determine mail config setting.

JobMgr is unable to query memory for the mail config setting. Contact support personnel for assistance.
================
345 ERROR... Unable to determine delay time from config struct
ERROR... Unable to determine delay time setting.

JobMgr is unable to query memory for the delay time config setting. Contact support personnel for
assistance.
================
350 ERROR... Unable to determine disk req from config struct
ERROR... Unable to determine disk requirement.

JobMgr is unable to query memory for the disk req config setting. Contact support personnel for
assistance.
================
349 ERROR... Unable to determine memory req from config struct
ERROR... Unable to determine memory requirement.


JobMgr is unable to query memory for the memory req config setting. Contact support personnel for
assistance.
================
352 ERROR... Unable to determine pos prog from config struct
ERROR... Unable to determine pos program.

JobMgr is unable to query memory for the pos prog config setting. Contact support personnel for
assistance.
================
351 ERROR... Unable to determine pre prog from config struct
ERROR... Unable to determine pre program.

JobMgr is unable to query memory for the pre prog config setting. Contact support personnel for
assistance.
================
353 ERROR... Unable to determine job filename from config struct
ERROR... Unable to determine job filename.

JobMgr is unable to query memory for the job filename config setting. Contact support personnel for
assistance.
================
337 ERROR... Unable to determine specific index from submit struct
338 ERROR... Unable to determine submit specific host index.

JobMgr is unable to query memory for the specific index config setting. Contact support personnel for
assistance.
================
ERROR... Unable to determine submit index from submit struct
ERROR... Unable to determine submit host index.

JobMgr is unable to query memory for the submit index config setting. Contact support personnel for
assistance.
================
ERROR... Unable to determine/execute message

JobMgr received an unrecognizable message. Contact support personnel for assistance.


================
323 ERROR... Unable to fork child process
ERROR... Unable to fork new process.

JobMgr cannot fork a new process. Perhaps the system is heavily loaded or the maximum number of
per-user processes has been exceeded. Terminate extra processes and try again.
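A quick way to check how close the account is to a per-user process limit; a sketch using ulimit and ps, whose option support varies by shell and platform:

```shell
# Report the current process count for this user against the per-user
# limit ("unknown" if the shell does not support ulimit -u).
proc_headroom() {
    limit=$(ulimit -u 2>/dev/null || echo unknown)
    count=$(ps -u "$(id -un)" -o pid= | wc -l)
    echo "user processes: $count (limit: $limit)"
}

proc_headroom
```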
================


336 ERROR... Unable to send file receive setup info.
ERROR... Unable to accept connection on recv file
ERROR... Unable to recv file
334 ERROR... Unable to receive data file.

JobMgr is unable to set-up and/or receive the data file from the [Aba,Gen,Mar,Nas]Mgr. Possible
cause is loss of network connectivity or premature termination of the [Aba,Gen,Mar,Nas]Mgr process.
================
ERROR... Unable to send file
333 ERROR... Unable to transfer data file.

JobMgr is unable to send data file to [Aba,Gen,Mar,Nas]Mgr process. The executing host or network
connection may be down.
================
346 ERROR... Unknown state
347 ERROR... Inconsistant state.

JobMgr is in an inconsistent state. Contact support personnel for assistance.


================
310 ERROR... Unable to open log file.

JobMgr is unable to open log file. Check write permission on the current working directory and log file
(if it exists).
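The permission check can be scripted before resubmitting; a minimal sketch, where "job.log" is a placeholder log file name:

```shell
# Verify the working directory is writable, and the log file too if it
# already exists (the names passed in are placeholders).
check_log_writable() {
    dir=$1 log=$2
    [ -w "$dir" ] || { echo "directory not writable: $dir"; return 1; }
    if [ -e "$dir/$log" ] && [ ! -w "$dir/$log" ]; then
        echo "log file not writable: $dir/$log"
        return 1
    fi
    echo "ok"
}

# Typical usage, from the job's working directory:
#   check_log_writable . job.log
```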
================
348 ERROR... Unable to cd to local work dir.

JobMgr is unable to change directory to the directory with the input filename specified. Check existence
and permissions on this directory.
================
314 ERROR... Unable to determine true host address.
315 ERROR... Unable to determine actual host address.

JobMgr is unable to determine its host address. Possible causes are an invalid host file or name
server entry.
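The address lookup that JobMgr performs can be reproduced from the command line; a hedged sketch using getent, which is assumed to be available (nslookup or ping are alternatives):

```shell
# Confirm a hostname resolves to an address via the host file or name
# server, the same sources JobMgr consults.
check_host() {
    if getent hosts "$1" > /dev/null 2>&1; then
        echo "resolves: $1"
    else
        echo "no address found: $1"
        return 1
    fi
}

check_host localhost
```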
================
316 ERROR... Unable to determine usrname.

JobMgr cannot determine user name. Check the passwd, user account files for possible errors.
================
322 ERROR... Unable to open stdout stream.
ERROR... Unable to open stderr stream.

JobMgr cannot open stdout, stderr streams.


================


331 ERROR... Application is NOT supported.

JobMgr is asked to submit a job which is not from a supported application. Check installation by
running the basic and network tests with the Administration tool.
================
343 ERROR... Received signal.

JobMgr received a signal from the operating system. JobMgr has encountered an error or was signalled
by a user.
================
354 ERROR... Invalid version of Job Manager.

The current version of JobMgr does not match that of the QueMgr. An invalid/incomplete installation
is most likely the cause. To determine what version of each executable is installed, type JobMgr -version
and QueMgr -version and compare output.
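The comparison can be scripted; a sketch, assuming both executables are on the PATH and that each prints a bare version string with the -version flag:

```shell
# Succeeds only when the two reported version strings are identical.
same_version() {
    if [ "$1" = "$2" ]; then
        echo "versions match: $1"
    else
        echo "version mismatch: $1 vs $2"
        return 1
    fi
}

# Typical usage:
#   same_version "$(JobMgr -version)" "$(QueMgr -version)"
```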
=======================================================

User Interface (P3Mgr) Errors...


ERROR... <type> argument must be a number from 1 - 8
P3Mgr must be started with an initial interface type of 1 through 8 only.
================
ERROR... -runtype not acceptable.
P3Mgr was started with an invalid runtype for ABAQUS.
================
ERROR... Getting P3_HOME environment variable.

P3Mgr is unable to determine the P3_HOME environment variable. Set the environment variable to the
location of the Patran install path (<installation_directory> for example).
================
ERROR... Obtaining ANALYSIS MANAGER licenses.
P3Mgr is unable to obtain necessary license tokens. Check nodelock file
or netls daemons.
================
ERROR... Problem reading .p3mgrrc
ERROR... Unable to write to file <.../.p3mgrrc[_org]>

P3Mgr is unable to read/write to the rc file to load/save configuration settings. Check the owner and
permissions on the designated file.
================
ERROR... Unable to determine QueMgr host or port


P3Mgr cannot determine the host and port of a valid Queue Manager to connect to. The QueMgr.sid
file is not actually used anymore. Set the P3_MASTER and P3_PORT environment variables, or use the
org.cfg file.
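A minimal sketch of setting these variables; the host and port below are placeholders for your site's master host and QueMgr port:

```shell
# Point the Analysis Manager interface at the Queue Manager explicitly.
P3_MASTER=master.example.com
P3_PORT=1500
export P3_MASTER P3_PORT

echo "QueMgr expected at $P3_MASTER:$P3_PORT"
```

Put the assignments in the login script of each user who runs the interface, or in the site-wide startup files.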
================
ERROR... QueMgr host <> from org.cfg file inconsistent
ERROR... QueMgr port <> from org.cfg file inconsistent

P3Mgr has found the QueMgr host and/or port from the QueMgr.sid file to be different from that
found in the org.cfg file. Check the org.cfg file and modify it to match the current QueMgr settings,
or restart the QueMgr with the org.cfg settings. The QueMgr.sid file is no longer used so this message
should never appear. Call MSC support personnel if this happens.
================
ERROR... Unable to determine address of master host <>

P3Mgr is unable to determine the address of the master host provided. Check the QueMgr.sid file for
proper hostname and/or the org.cfg file and/or the P3_MASTER environment variable. Also check for
an invalid host file or name server entry. The QueMgr.sid file is no longer used so this message should
never appear. Call MSC support personnel if this happens.
================
ERROR... Unable to Contact QueMgr to determine version information.

P3Mgr is unable to contact the QueMgr. Check to see if the QueMgr is up and running, and the master
host is up and running, and the P3Mgr host and the master host can communicate via the network.
================
ERROR... Problem creating socket to communicate with Queue Mgr
ERROR... Unable to send request to Queue Mgr. Is the Queue Mgr running
ERROR... #<> sending request to Queue Mgr.
ERROR... #<> receiving message back from QueMgr.
ERROR... In message back from QueMgr.
ERROR... Communicating with Queue Mgr.
ERROR... Invalid message received from QueMgr.
ERROR... Timeout Waiting For Message From QueMgr.

P3Mgr cannot create communication socket, or is unable to send request to the QueMgr process. Check
the QueMgr process is up and running, the QueMgr host is up and running, and the network is
connected.
================
ERROR... Creating Communications Socket For Job Monitor
ERROR... Establishing communication to Job Mgr with port <>.
ERROR... In Job Mgr Communication

P3Mgr cannot create communication socket, or is unable to send request to the JobMgr process. Check
the JobMgr process is up and running, and the network is connected.
================


ERROR... Sending generic configuration structure to Job Mgr
ERROR... Sending application configuration structure to Job Mgr
ERROR... Sending submit structure to Job Mgr

P3Mgr cannot send configuration info to the JobMgr process. Check that the JobMgr process is up and
running, and the network interface is configured properly.
================
ERROR... An incompatible version of the QueMgr is currently running.

P3Mgr has determined that the version of the QueMgr presently running is not compatible. An
incomplete or invalid installation is most likely the cause. Type P3Mgr -version and QueMgr -version
and compare the output. Re-install the software if necessary.
================
ERROR... No valid applications defined in QueMgr configuration

P3Mgr has been started with an application not supported by the current configuration used by the
QueMgr. Update the configuration files to include the new application and restart the QueMgr before
continuing.
================
ERROR... Org <> does not contain any defined applications.

P3Mgr has been started (or switched to) an organization (group) which does not contain any
applications. Check the configuration files and restart the QueMgr process for the designated
organization.
================
ERROR... Filename is too long. Shorten jobname

P3Mgr can only submit jobs whose filenames are no longer than 32 characters. Shorten the job filename
to fewer than 32 characters and submit again.
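The length rule can be checked before submitting; a sketch of the 32-character limit described above:

```shell
# Report whether a jobname fits within P3Mgr's 32-character limit.
jobname_ok() {
    n=${#1}
    if [ "$n" -le 32 ]; then
        echo "ok ($n chars)"
    else
        echo "too long ($n chars); shorten the jobname"
        return 1
    fi
}

jobname_ok wing_flutter_case3
```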
================
ERROR... Job <> is not currently active.

P3Mgr was asked to monitor or delete a job with a given name (and owner) which cannot be located in
the queue.
================
ERROR... File <> does not exist... Enter A/M to select file explicitly.

P3Mgr was asked to monitor a completed job (using the mon file) from the jobname information only
and this file cannot be found. Use the Full Analysis Manager interface and the file browser (under
Monitor, Completed Job) to select an existing mon file.
================
ERROR... Monitor file <> does not exist


P3Mgr was asked to monitor a completed job and the selected mon file does not exist. Select an existing
mon file and try again.
================
ERROR... You must choose an A/M monitor file (.mon extension).

P3Mgr was asked to monitor a completed job, but no mon file was specified. Select a mon file and try
again.
================
ERROR... Unable to parse Monitor File

P3Mgr encountered an error while parsing the mon file. Contact support personnel for assistance.
================
ERROR... More than one active job named <> found. Request an active list
ERROR... More than one active job named <> found. Enter A/M to explicitly select job
ERROR... No jobs named <> owned by <> are currently active

P3Mgr was asked to monitor or delete a job by job name (and owner), and either no matching job or
more than one matching job was located in the queues. Select an active list of jobs from the Full
Analysis Manager interface (Monitor, Running Jobs).
================
ERROR... No Host Selected. Submit cancelled.
ERROR... No Queue Selected. Submit cancelled.

P3Mgr attempted to submit a job, but no host or queue was selected. Select a host or queue and try to
submit again.
================
ERROR... QueMgr has been reconfigured. Exit and restart to continue.

P3Mgr has found the QueMgr has been reconfigured (restarted), so P3Mgr's copy of the
configuration information may be invalid. Exit the P3Mgr interface and restart it to load the latest
configuration information before continuing.
================
ERROR... Starting a Job Mgr on local host.

P3Mgr is unable to spawn a JobMgr process. Perhaps the system is heavily loaded or the maximum
number of per-user processes has been met. Free up unused processes and try to submit again.
================
ERROR... Unable to alloc mem for load info.
ERROR... Unable to allocate memory for org structure

This indicates the workstation is out of memory. Free up memory used by other processes and try again.
================


ERROR... Unable to open log file

P3Mgr is unable to open a log file. Check file permissions if it exists, or check local directory
access/permissions.
================
ERROR... Unable to open unique submit log file

P3Mgr is unable to open a submit log file. Check file permissions if it exists, or check local directory
access/permissions.
================
ERROR... Unknown version of ANALYSIS MANAGER .p3mgrrc file

P3Mgr has attempted to read in a .p3mgrrc file but from an unsupported version. Remove the
.p3mgrrc file and save configuration settings in a new .p3mgrrc file.
================
ERROR... Could not open file <>

P3Mgr was asked to submit a job, but no filename has been input. Select an input filename and try to
submit again.
================
ERROR... File <> does not exist.

P3Mgr has been asked to submit a file, but no such file can be found. Select an existing input file (or
check file permissions) and submit again.

Additional (submit) Errors...

ABAQUS:

ERROR... JobName <> and Restart JobName <> are identical.


ABAQUS cannot have jobs where the job name and the restart job name are the same. Change one or the
other and re-submit.
================
ERROR... Unable to open input file <>

P3Mgr is unable to open the designated input file. Check file permissions.
================
ERROR... JobName <> and Input Temperature File JobName <> are identical.

ABAQUS cannot have jobs where the job name and the temperature data file job name are the same.
Change one or the other and re-submit.
================
ERROR... *RESTART, READ found but no restart jobname specified.


ABAQUS RESTART card encountered, but no filename specified. Add filename to this card and
re-submit.
MSC Nastran:
================
ERROR... File <> cannot contain more than one period in its name.
P3Mgr will currently only allow MSC Nastran jobs to contain one period in their
filename. Rename the input file to contain no more than one period and re-submit.
================
ERROR... File <> must begin with an alpha character.

P3Mgr will currently only allow MSC Nastran jobs to start with an alpha character, and not a number.
Rename the input file to start with a letter (A-Z, a-z) and re-submit.
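Both filename rules can be checked up front; a sketch of the checks described above (P3Mgr's actual parser may differ in details):

```shell
# Check the two MSC Nastran filename rules P3Mgr enforces:
# starts with a letter, and contains at most one period.
nastran_name_ok() {
    name=$(basename "$1")
    case "$name" in
        [!A-Za-z]*) echo "must begin with a letter"; return 1 ;;
    esac
    dots=$(printf '%s' "$name" | tr -cd '.' | wc -c)
    if [ "$dots" -gt 1 ]; then
        echo "contains more than one period"
        return 1
    fi
    echo "ok"
}

nastran_name_ok wing.bdf
```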
================
ERROR... Include cards are too early in file.

P3Mgr can currently only support MSC Nastran jobs with Include cards between the BEGIN BULK
and ENDDATA cards. Place the contents of the include files before the BEGIN BULK card directly
into the input file and re-submit.
================
ERROR... BEGIN BULK card present with no CEND card present.

P3Mgr has encountered a BEGIN BULK card before a CEND card. P3Mgr currently requires a
BEGIN BULK card if there is a CEND card found in the input file. Add a BEGIN BULK card to
the input file and re-submit.
================
ERROR... ENDDATA card missing.

P3Mgr has encountered the end of the input file without finding an ENDDATA card. Add an
ENDDATA card and re-submit.
MSC.Marc:
================
ERROR... Unable to load MSC.Marc configuration info.

The network is unable to transfer the MSC.Marc config info over to the MarMgr from the JobMgr
running on the submit machine. Check network connectivity and the submit machine for any problems.
================
ERROR... Unable to load MSC.Marc submit info

The network is unable to transfer the MSC.Marc submit info over to the MarMgr from the JobMgr
running on the submit machine. Check network connectivity and the submit machine for any problems.


================
INFORMATION: Total disk space req of %d (kb) met

Information message telling that enough disk space has been found on the file systems configured for
MSC.Marc to run.
================
WARNING: Total disk space req of %d (kb) cannot IMMEDIATLEY be met.
Continuing on regardless ...

Information message telling that there is currently not enough free disk space to honor the space
requirement provided by the user. The job will continue, however, because the space may be freed up at
a later time (by another job finishing, perhaps) before this job needs it.
================
ERROR... Total disk space req of %d (kb) cannot EVER be met. Cannot
continue.

There is not enough disk space (free or used) to honor the space requirement provided by the user so the
job will stop. Add more disk space or check the requirement specified.
================
WARNING: Cannot determine if disk space req %d (kb) can be met.
Continuing on regardless ...

Information message telling that the disk space of the file system(s) configured for MSC.Marc cannot
be determined. The job will continue anyway, as there may be enough space. Sometimes, if the file
system is mounted over NFS, the size of the file system is not available.
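To see how much space a file system actually reports, a sketch using POSIX df; "/tmp" stands in for the file systems configured for MSC.Marc:

```shell
# Report the available kilobytes on a file system. The -P flag forces
# one line per file system so column 4 is always the available KB.
free_kb() {
    df -kP "$1" | awk 'NR == 2 { print $4 }'
}

free_kb /tmp
```

If the function prints nothing for an NFS-mounted scratch area, the size is not available, which matches the warning above.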
================
INFORMATION: No disk space requirement specified

If no disk space requirement is provided by the user then this information message will be printed.
================
ERROR... Unable to alloc ## bytes of memory in sss, line lll

The MarMgr is unable to allocate memory for its own use, check the memory and swap space on the
executing machine.
================
ERROR... Unable to receive file sss

MarMgr could not transfer a file from the JobMgr on the submit machine. Check the network
connectivity and submit machine for any problems.

Editing (p3edit) Errors...

ERROR... Unable to allocate enough memory for file list.

This indicates the workstation is out of memory. Free up memory used by other processes and try again.


================
ERROR... Unable to determine file statistics for <>

This indicates the operating system is unable to determine file statistics for the requested file. The
requested file most likely does not exist. Select an existing file to view/edit and try again.
================
ERROR... File <> does not appear to be ASCII

P3edit can only view/edit ASCII files, and the requested file appears to be non-ASCII. Select an ASCII
file to view/edit and try again.
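A rough pre-check before opening a file in p3edit; a sketch that flags any byte outside printable ASCII and whitespace (file(1) is an alternative where available):

```shell
# Report whether a file contains only printable ASCII and whitespace.
is_ascii() {
    if LC_ALL=C grep -q '[^ -~[:space:]]' "$1"; then
        echo "non-ASCII: $1"
        return 1
    fi
    echo "ascii: $1"
}

# Typical usage:
#   is_ascii results.f06
```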
================
ERROR... File <> is too large to view

Due to memory constraints, P3edit is limited to viewing/editing files no larger than 16 MB in size
(except on Cray and Convex machines, where the maximum file sizes are 60 MB and 40 MB,
respectively). Select a smaller file to view/edit and try again.
================
ERROR... Unable to open file <>

P3edit is unable to open requested file to load into viewer/editor. Select an existing file to view/edit
and try again.
================
ERROR... File <> is empty

P3edit has found the selected file is empty. Currently P3edit can only view/edit files with data. Select
a file containing data to view/edit and try again.
================
ERROR... Unable to seek to beginning of file <>

P3edit is unable to seek to beginning of selected file. Possible system error occurred during seek, or
file is corrupted.
================
ERROR... Unable to read in file <>

P3edit is unable to read file data. Perhaps file is corrupted.


================
ERROR... System error while reading file <>

A system error occurred during file read. Try to view/edit file again at another time.
================
ERROR... Unable to read text


P3edit is unable to read file completely, or is unable to read text from memory to write file.
================
ERROR... Unable to scan text

P3edit is unable to scan text from memory to search for text pattern.
================
ERROR... Unable to write text to file

P3edit is unable to write text out to file. Perhaps the disk is full, or some system error occurred during
the write call.

RmtMgr Errors...

RmtMgr errors are returned to the connection program and also printed in the OS system log (syslog)
or Event Viewer for Windows.
================
RmtMgr Error RM_CANT_GET_ADDRESS

This should not happen, but if for some reason the OS/network cannot determine the network address
of the machine RmtMgr is started on, this will be printed before RmtMgr exits. Contact your system
administrator for more information.
================
RmtMgr Error port number #### invalid

This should not happen, but if for some reason the program contacting the RmtMgr does not supply a
valid port then this will be printed. The connection to the RmtMgr cannot be completed, but the RmtMgr
will continue on listening for other connections.
================
RmtMgr Error RM_CANT_CREATE_SERVER_SOCKET

If the RmtMgr cannot create its main server socket for listening, then this error message will be printed
before the RmtMgr exits.
================
RmtMgr Error accept failed, errno = ##

If the accept system call fails on the socket after a connection is established this will be printed. The error
number should be checked against the system errno list for the platform to see the cause.
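To translate the numeric errno into its symbolic meaning without hunting through header files, a sketch that borrows Python's errno table (errno 13 shown as an example):

```shell
# Map a numeric errno from the log to its symbolic name and message.
errno_name() {
    python3 -c '
import errno, os, sys
n = int(sys.argv[1])
print(errno.errorcode.get(n, "?"), "-", os.strerror(n))
' "$1"
}

errno_name 13
```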
================
RmtMgr Error unable to end proc ## errno=%d

If RmtMgr is asked to kill/end a process and it is unable to do so then this message is printed. The errno
should give the reason.


================
RmtMgr Error Invalid message of <xxxxx>

An invalid message format/syntax was sent to RmtMgr. The message will be ignored and RmtMgr will
continue, listening for other connections.
================
RmtMgr Error Invalid message status code = ##

The status on the received message is not correct. The status code will help determine the cause; the
most likely reason is a loss of network connectivity.
================
RmtMgr Error Invalid NULL message

An invalid message format/syntax was sent to RmtMgr. The message will be ignored and RmtMgr will
continue, listening for other connections.
================
RmtMgr Error unable to determine system type, error = ##

RmtMgr is unable to determine what kind of platform it is running on. RmtMgr will exit. Check the
supported platform/OS list.
================
RmtMgr Error WSAStartup failed, code = ##

Windows network/socket communication initialization failed. The code should be checked against the
Windows system error list for the cause.
================
RmtMgr Error WSAStartup version incompatible, code = ##

If a contacting program is of a different version than the RmtMgr then this message is printed. The
RmtMgr will continue, listening for other connections.

QueMgr Errors...

Sometimes errors occur because the RmtMgr is running as root (or administrator on Windows) yet is
trying to access network resources such as shared drives. For this reason it is recommended that
RmtMgr (and QueMgr) be started as a normal user.
================
ERROR... Determining computer architecture

QueMgr is unable to recognize its host architecture. Check installation and OS version compatibility.
================
ERROR... Invalid -port option argument <>
ERROR... Invalid -usr option argument <>


QueMgr was started with an invalid port or user argument. Select a valid port or user name argument
and try to start the QueMgr again.
================
ERROR... Cant Find Job Number To Remove.
ERROR... Cant Find Job Number To Resume.
ERROR... Cant Find Job Number To Suspend.
ERROR... Cant Find Job To Resume.
ERROR... Cant Find Job To Suspend.
202 ERROR... Job to remove not found by the Que Manager.

QueMgr is unable to locate job in internal list to remove, resume or suspend. Contact support personnel
for assistance.
================
ERROR... Cant Resume Job unless its RUNNING and SUSPENDED.
ERROR... Cant Suspend Job unless its RUNNING.

QueMgr received an invalid suspend/resume request. QueMgr can only suspend running jobs, and can
only resume suspended jobs.
================
203 ERROR... Problem creating com file for Task Manager execution.

QueMgr is unable to create a com file on the eligible host(s) for execution. Possible causes are lack of
permission connecting to the eligible host(s) as the designated user (check network permission/access
using the Administration tool) or an incorrect path/permission on the directory on the eligible host(s).
(Use the Administration tool to check this.) The major cause of this error is that the specified user does
not have remote shell access from the Master Host to the Analysis Host. To resolve this, add the Master
Host name to the hosts.equiv file or the user's .rhosts file on the Analysis Host.
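A sketch of adding such an equivalence entry without duplicating it; "master-host" and "amuser" are placeholder values, and the file argument is a scratch copy rather than the live hosts.equiv or .rhosts file:

```shell
# Append a "host user" equivalence entry to a file, skipping the append
# if an identical line is already present.
add_equiv_entry() {
    file=$1 host=$2 user=$3
    grep -qx "$host $user" "$file" 2>/dev/null || echo "$host $user" >> "$file"
    cat "$file"
}

# Typical usage, against a scratch copy of the user's .rhosts:
#   add_equiv_entry /tmp/rhosts.new master-host amuser
```

After editing the real file, verify the access with the Administration tool's network test before resubmitting.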
================
ERROR... Creating host/port file <>.

QueMgr is unable to create the QueMgr.sid file containing its host and port number. Possible causes
are invalid sid user name (with the -usr command line option), invalid organization (-org
command line option) or invalid network or directory/file permissions. This file and -usr are no longer
used and the message should not ever appear. Call MSC support if this happens.
================
ERROR... Opening log file <>
ERROR... Opening rdb file <>
ERROR... Seeking to end of rdb file <>
ERROR... Unable to determine next valid job number
RECONFIG ERROR... Unable to determine next valid job number

QueMgr is unable to open the log and/or rdb files. Check the owner and permissions of these files. If
the rdb file is corrupted, QueMgr may not be able to seek to its end and determine the next available job
number to use.
================


ERROR... Out Of Memory
ERROR... Problem Allocating memory for return string
ERROR... Unable to alloc mem for load info
ERROR... Unable to alloc memory for load information
RECONFIG ERROR... Unable to alloc mem for load info

The workstation is out of memory. Free up memory used by other processes and restart the QueMgr.
================
ERROR... Problem Creating Socket.
ERROR... Problem Binding Socket.
ERROR... Problem Connecting Socket.
ERROR... Problem Reading Socket Message.
ERROR... Problem Writing To Socket.
ERROR... Problem in socket accept.
ERROR... Problem in socket select.

QueMgr cannot create/complete/read/write network communication. Possible causes are invalid
network interface configuration, or a loss in network connectivity.
================
ERROR... Unable to signal Task

QueMgr is unable to send signal to [Aba,Gen,Mar,Nas]Mgr. Check network permission/access using
the Administration tool.
================
ERROR... Problem Telling JobMgr That Job Is Being Killed.

QueMgr has been requested to terminate a job, and cannot inform the JobMgr of this. Possible cause is
network connectivity loss, or the JobMgr process has terminated unexpectedly, or the JobMgr
workstation is down.
================
ERROR... Host Index <> Received, Max Index is <>,
ERROR... Queue Index <> Received, Max Index is <>
ERROR... Specific Index <> Received, Max Index is <>
QueMgr asked to submit a job with an invalid index. Contact support personnel for assistance.
================
ERROR... User: <> can not delete job number <> which is owned by: <>
201 ERROR... User can not kill a job owned by someone else.

QueMgr was asked to delete a job from a user other than the one who submitted the job. Only the user
who submitted a job is eligible to delete it.
================
ERROR... You must be the root user to run QueMgr.

The QueMgr process must run as the root user. Restart the QueMgr as the root user.
================
209 ERROR... Unable to start Task Manager.


QueMgr is unable to start up an [Aba,Gen,Mar,Nas]Mgr process. Check network/host connections and
access, and install tree path on the remote host. (Use the Administration tool to check network
permissions.)
================
ERROR... Unable to start Load Manager on host <>

QueMgr is unable to start up a LoadMgr process. Check network/host connections and access, and
install tree path on the remote host. Also, check admin user account access. (Use the Administration tool
to check network permissions.) Obsolete. RmtMgr is now used.
================
205 ERROR... Error submitting job to NQS. See Que Manager Log.
211 ERROR... Unable to submit task to NQS queue.

QueMgr received an error while trying to submit a job to an NQS queue. Check the QueMgr.log file
for the more detailed NQS error.
================
206 ERROR... Error submitting job to LSF. See Que Manager Log.
207 ERROR... Error in return string from LSF bsub. See Que Manager Log.
210 ERROR... Unable to submit task to LSF queue.

QueMgr received an error while trying to submit a job to an LSF queue. Check the QueMgr.log file
for the more detailed LSF error.
================
214 ERROR... Unable to delete task from NQS queue.

QueMgr is unable to delete task from an NQS queue. Perhaps the task has finished or has already been
deleted by an outside source.
================
213 ERROR... Unable to delete task from LSF queue.

QueMgr is unable to delete task from an LSF queue. Perhaps the task has finished or has already been
deleted by an outside source.
================
217 ERROR... Job killed from outside queue system.

QueMgr cannot find job in queue when it is expected to be there. QueMgr can only assume the job was
deleted from outside the Analysis Manager environment.
================
218 ERROR... Invalid version of Task Manager.

The current version of [Aba,Gen,Mar,Nas]Mgr does not match that of the QueMgr. An
invalid/incomplete installation is most likely the cause. To determine what version of each executable is
installed, type [Aba,Gen,Mar,Nas]Mgr -version and QueMgr -version and compare output.


Application Manager (AbaMgr, GenMgr, MarMgr, NasMgr) Errors...

420 ERROR... Unable to determine host address.


[Aba,Gen,Mar,Nas]Mgr is unable to determine its host address. Possible causes are an invalid host file
or name server entry.
================
ERROR... File <> is NOT executable
442 ERROR... Unable to execute specified application file.
ERROR... Unable to exec new application process

The specified file is not executable. Check file/directory permissions.


================
ERROR... File <> is NOT found
424 ERROR... Unable to determine application executable path.

Executable file cannot be found. Check application installation.


================
ERROR... Inconsistant number of valid file systems
410 ERROR... Bad file system count.

[Aba,Gen,Mar,Nas]Mgr has received an invalid number of file systems.


================
436 ERROR... Application errors.

The application has terminated with errors. This is not an Analysis Manager error, but just indicates there
are errors in the analysis. Check and modify the input file and try again.
================
ERROR... Pos_application fatal
434 ERROR... Application fatal in pos routine.

The application received a fatal error in the pos routine. Contact support personnel for assistance.
================
ERROR... Pre application error
435 ERROR... Application fatal in pre routine.

The application received a fatal error in the pre routine. Contact support personnel for assistance.
================
ERROR... Abort_application fatal

The application received a fatal error in the abort routine. Contact support personnel for assistance.
================
426 ERROR... Unable to alloc mem.
ERROR... Unable to alloc mem for File_Sys

ERROR... Unable to alloc mem for executable path


ERROR... Unable to alloc mem for file system names

The workstation is out of memory. Free up memory used by other processes and try again.
================
403 ERROR... Unable to initiate network communication.
413 ERROR... Unable to initiate file transfer network communication.
423 ERROR... Problem with network communication accept.
415 ERROR... Problem with network communication select.
404 ERROR... Problem with network communication select.
ERROR... Unknown error on select

[Aba,Gen,Mar,Nas]Mgr is unable to start/complete a network connection/read. Possible causes are loss
of network connectivity or premature termination of the sending process.
================
431 ERROR... Unable to connect network communication.
422 ERROR... Unable to contact QueMgr

Process is unable to contact the QueMgr. Perhaps the QueMgr host is down, or the QueMgr process is
not running or the network is down.
================
ERROR... Timeout with no response from JobMgr
ERROR... Unable to accept connection from JobMgr
430 ERROR... Unable to contact JobMgr

Process is unable to contact the JobMgr. Perhaps the JobMgr host is down, or the JobMgr process is
not running or the network is down.
================
ERROR... Unable to create info socket
417 ERROR... Unable to initiate job info network communication.
414 ERROR... Unable to determine job info over network.

[Aba,Gen,Mar,Nas]Mgr is unable to start info socket communication. Check the network interface
configuration.
================
ERROR... Unable to determine job filename

Process is unable to determine job filename. Contact support personnel for assistance.
================
ERROR... Unable to determine number of clock tics/sec
425 ERROR... Unable to determine machine clock rate.

Process cannot determine the machine setting for the number of clock tics per second. Check machine
operating system manual for further assistance.
================

ERROR... Unable to fork child process


418 ERROR... Unable to fork new process

[Aba,Gen,Mar,Nas]Mgr cannot fork a new process. Perhaps the system is heavily loaded or the
maximum number of per-user processes has been exceeded. Terminate extra processes and try again.
================
ERROR... Unable to load file system info
409 ERROR... Unable to load file system information.

Contact support personnel for assistance.


================
408 ERROR... Unable to locate this host in config structure
ERROR... Unable to locate this host in config host list.

Contact support personnel for assistance.


================
440 ERROR... Unable to determine current working directory.

Process could not determine its current working directory. Check file systems designated for executing
[Aba,Gen,Mar,Nas]Mgr.
================
427 ERROR... Unable to create work file system dirs.
ERROR... Unable to make proj sub-dirs off file systems

Process could not make directories off main file system entries in the configuration. Check file
system/directory access/permission. (Use the Administration tool.)
================
428 ERROR... Unable to create local work dir.
ERROR... Unable to make unique dir off proj dir

Process is unable to make a unique named directory below the designated file system/directory it is
currently executing out of. Check file system/directory access/permission. (Use the Administration tool.)
================
411 ERROR... Unable to cd to local work dir.
429 ERROR... Unable to cd to local work dir.
ERROR... Unable to cd to unique dir off proj dir

Process is unable to change directory to the unique directory created below the file system/directory
designated in the configuration. Check owner and permission of parent directory.
================
400 ERROR... Unable to open log file.
412 ERROR... Unable to re-open log file.
401 ERROR... Unable to open stderr stream
421 ERROR... Unable to open stdout stream

[Aba,Gen,Mar,Nas]Mgr is unable to open, re-open log and/or stdout, stderr stream files. Check current
directory permissions.

================
ERROR... Unable to place process in new group

Process is unable to change process group. Contact support personnel for assistance.
================
ERROR... Unable to obtain config info from QueMgr
402 ERROR... Unable to receive configuration info.
ERROR... Unable to receive gen_config struct
ERROR... Unable to receive app_config struct
ERROR... Unable to receive app_submit struct
405 ERROR... Unable to receive general config info.
406 ERROR... Unable to receive application config info.
407 ERROR... Unable to receive application submit info.

[Aba,Gen,Mar,Nas]Mgr is unable to receive configuration information from the JobMgr. Possible causes
are network connectivity loss, or the JobMgr host is down.
================
ERROR... Unable to determine configuration host index
ERROR... Unable to determine <> submit paramters

[Aba,Gen,Mar,Nas]Mgr is unable to query memory for either a host index or submit parameters.
Contact support personnel for assistance.
================
416 ERROR... Unable to receive data file.
ERROR... Unable to recv file <> from JobMgr

Process is unable to receive file from JobMgr. Possibly the network is down, or the JobMgr host is off
the network, or the JobMgr host is down.
================
ERROR... file: <> cannot be sent

Process cannot send a file to the JobMgr. Possibly the network is down, or the JobMgr host is off the
network, or the JobMgr host is down.
================
432 ERROR... Task aborted.
437 ERROR... Task aborted while executing.
438 ERROR... Task aborted before execution.
439 ERROR... Task aborted after execution.

The [Aba,Gen,Mar,Nas]Mgr has been aborted. This is not necessarily an Analysis Manager error, but
an indication that the analysis has been terminated by the user.
================
441 ERROR... Received 2nd signal.

The process has received a signal, either from an abort (from the user) or from an internal error, and
during the shutdown procedure a second signal has occurred, indicating an error in the shutdown
procedure.
================
ERROR... Total disk space req of %d (kb) cannot be met

[Aba,Gen,Mar,Nas]Mgr is unable to find the required amount of free disk space across the designated file
systems of this host for the analysis to be run. Either free up some disk space, reduce the amount of disk
requested in the interface, or submit the job to a different host (with a different set of file systems).
================
ERROR... Unable to temporarily rename input file

[Aba,Gen,Mar,Nas]Mgr is unable to rename input file temporarily. Contact support personnel for
assistance.
================
ERROR... File <> cannot be found
ERROR... Unable to open input file

Process has transferred the designated file, but is now unable to locate it for opening or reading. Check
network connections, JobMgr host, and file system permissions.
================
443 ERROR... Invalid version of Task Manager

The current version of [Aba,Gen,Mar,Nas]Mgr does not match that of the QueMgr/RmtMgr. An
invalid/incomplete installation is most likely the cause. To determine what version of each executable is
installed, type [Aba,Gen,Mar,Nas]Mgr -version and QueMgr/RmtMgr -version and compare the output.
================
Additional application-specific errors...
ABAQUS (AbaMgr):

ERROR... Unable to create local environment file


AbaMgr is unable to create a local abaqus.env file. Check file system/directory free space and
permissions.
================
ERROR... Unable to load ABAQUS configuration info

AbaMgr is unable to load configuration information from internal memory. Contact support personnel
for assistance.
================
ERROR... Unable to load ABAQUS submit info

AbaMgr is unable to load submit information from internal memory. Contact support personnel for
assistance.

================
GENERAL (GenMgr):

ERROR... Unable to load GENERAL configuration info


GenMgr is unable to load configuration information from internal memory. Contact support personnel
for assistance.
================
ERROR... Unable to load GENERAL submit info

GenMgr is unable to load submit information from internal memory. Contact support personnel for
assistance.
================
MSC Nastran (NasMgr):

ERROR... ASSIGN statement <> contains relative pathname


MSC Nastran ASSIGN statements cannot contain relative pathnames, as these may point to different
locations from invocation to invocation. Change the ASSIGN statement to use a full pathname and submit again.
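A quick pre-submit scan for relative ASSIGN pathnames can be scripted. This sketch is only a crude approximation of NasMgr's real parser and assumes single-quoted pathnames:

```shell
#!/bin/sh
# Print ASSIGN statements whose single-quoted pathname is not absolute.
# This is only a crude approximation of NasMgr's parser: a statement
# passes only when the opening quote is immediately followed by "/".
check_assigns() {
    grep -i "^ASSIGN" "$1" | grep "'" | grep -v "'/"
}
```

For example, check_assigns job.bdf prints a line such as ASSIGN MASTER='runs/job.MASTER', while an absolute path like '/scratch/job.MASTER' passes silently.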
================
ERROR... Bad card format

NasMgr cannot parse statement properly. Check card format.


================
ERROR... Include cards in the Executive Control section

NasMgr only allows INCLUDE cards to appear between the BEGIN BULK and ENDDATA cards of the input
file. Place the contents of any include files referenced before the BEGIN BULK card directly in the input
file, and submit again.
================
ERROR... Restart type file but no MASTER file specified

The input file appears to be a restart, as the RESTART card is found, but no MASTER file is specified.
Use an ASSIGN card to designate which MASTER file is to be used, and submit again.
================
ERROR... Unable to add MASTER database FMS
ERROR... Unable to add DBALL database FMS
ERROR... Unable to add SCRATCH database FMS

When NasMgr is adding FMS, line length is found to be greater than the maximum of 80 characters.
Decrease the filename (jobname) length or use links to shorten the file system/directory names.
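The 80-character limit can be checked before submitting; a minimal sketch:

```shell
#!/bin/sh
# Report input-deck lines longer than MSC Nastran's 80-character limit,
# e.g. FMS/ASSIGN lines whose file names have grown too long.
check_line_lengths() {
    awk 'length($0) > 80 { printf("line %d: %d characters\n", NR, length($0)) }' "$1"
}
```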
================
ERROR... Unable to load MSC Nastran configuration info

NasMgr is unable to load configuration information from internal memory. Contact support personnel
for assistance.
================
ERROR... Unable to load MSC Nastran submit info

NasMgr is unable to load submit information from internal memory. Contact support personnel for
assistance.
================
ERROR... Unable to read file include <>

NasMgr has transferred the designated file, but is now unable to locate it for opening or reading. Check
network connections, JobMgr host, and file system permissions.
================
ERROR... Unexpected end of file

NasMgr has encountered the end of the input file without finding complete information. Check the input
file.
================
MSC.Marc (MarMgr):

ERROR... Unable to load MSC.Marc configuration info.

The network is unable to transfer the MSC.Marc config info over to the MarMgr from the JobMgr
running on the submit machine. Check network connectivity and the submit machine for any problems.
================
ERROR... Unable to load MSC.Marc submit info

The network is unable to transfer the MSC.Marc submit info over to the MarMgr from the JobMgr
running on the submit machine. Check network connectivity and the submit machine for any problems.
================
INFORMATION: Total disk space req of %d (kb) met

Information message indicating that enough disk space has been found on the file systems configured for
MSC.Marc to run.
================
WARNING: Total disk space req of %d (kb) cannot IMMEDIATLEY be met.
Continuing on regardless ...

Information message indicating that there is currently not enough free disk space to honor the space
requirement provided by the user. The job will continue, however, because the space may be freed up at
a later time (by another job finishing, perhaps) before this job needs it.
================

ERROR... Total disk space req of %d (kb) cannot EVER be met. Cannot
continue.

There is not enough disk space (free or used) to honor the space requirement provided by the user so the
job will stop. Add more disk space or check the requirement specified.
================
WARNING: Cannot determine if disk space req %d (kb) can be met.
Continuing on regardless ...

Information message indicating that the disk space of the file system(s) configured for MSC.Marc cannot
be determined. The job will continue anyway, as there may be enough space. Sometimes, when a file
system is mounted over NFS, its size is not available.
================
INFORMATION: No disk space requirement specified

If no disk space requirement is provided by the user, this information message is printed.
================
ERROR... Unable to alloc ## bytes of memory in sss, line lll

MarMgr is unable to allocate memory for its own use. Check the memory and swap space on the
executing machine.
================
ERROR... Unable to receive file sss

MarMgr could not transfer a file from the JobMgr on the submit machine. Check the network
connectivity and the submit machine for any problems.
================
Administration (AdmMgr) Testing Messages...

ERROR... An invalid version of Queue Manager is currently running

The current version of AdmMgr does not match that of the running QueMgr. An invalid/incomplete
installation is most likely the cause. To determine what version of each executable is installed, type
AdmMgr -version and QueMgr -version and compare the output.
================
ERROR... <> specified as type for host, but <> detected.
ERROR... <> specified as type for host, but UNKNOWN is detected.
ERROR... Host Type <> is not a valid selection for host <>.

The AdmMgr program has discovered the host architecture for the indicated host is not the same as what
is designated in the configuration, or no specific type has been given to this host. Change the host type
to the correct one and re-test.
================
ERROR... A/M Host <> configuration file <> does not contain an absolute
path.

The AdmMgr program has found an rc file entry or an exe file entry in the host.cfg file, or a file
system in the disk.cfg file, that is not a full path. Change the entries to be fully qualified (starting with a
slash character, /).
================
ERROR... A/M Host <> does not have a valid application defined. Run
Basic A/M Host Test.

The configuration files do not contain any valid applications. Add a valid application and all its required
information and run the basic test to verify.
================
ERROR... A/M Host <> filesystem <> does not contain an absolute path.

The file system designated for the host listed is not fully qualified. Change the entry to begin with a slash
/ character.
================
ERROR... A/M Host <> runtime file <> does not contain an absolute path.

The rc file designated for the host listed is not fully qualified. Change the entry to begin with a slash /
character.
================
ERROR... A/M Host name <> is used more than once within application <>.

Each application contains a list of Analysis Manager host names (which are mapped to physical host
names), and each Analysis Manager host name must be unique. The AdmMgr program has found that the
designated Analysis Manager host name is being used more than once. Change the Analysis Manager
host name for all but one of the applications and re-test.
================
ERROR... Access to host <> failed for admin <>.
ERROR... Access to host <> invalid return string for admin <>.
ERROR... Access to host <> failed for user <>.
ERROR... Access to host <> invalid return string for user <>.
ERROR... Bad return string from physical host <>.

These errors indicate various problems when trying to execute a command on the designated host as the
admin or user provided. Network access to a host for a user can fail for a number of reasons, among them
lack of network permission or the network/host being down. To check network permission, the user must
be able to rsh (or remsh on some platforms) from the master host (where the QueMgr process runs)
to each application host (not the interface host, but the host where the defined application is targeted to run).
Rsh (or remsh) access is denied if the user has no password on the remote host; if there is no valid
.rhosts file in his/her home directory on the remote host, owned by the same user and group ids, with
file permissions of 600 (-rw-------); or if there is no /etc/hosts.equiv file on the remote host with
an entry for the originating host. The RmtMgr replaces rsh. Call MSC if you get this error.
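On a remote host where rsh access is still used, the .rhosts requirements above translate into a few concrete commands. A sketch in which master-host and username are placeholders for the QueMgr host and the account being tested:

```shell
#!/bin/sh
# Create (or repair) a .rhosts entry so that rsh/remsh access from the
# master host succeeds. "master-host" and "username" are placeholders
# for the QueMgr host and the account being tested.
setup_rhosts() {
    home_dir="$1"                   # normally the user's home directory
    echo "master-host username" >> "$home_dir/.rhosts"
    chmod 600 "$home_dir/.rhosts"   # must be -rw------- and owned by the user
}
```

Run this as the user on the remote host, e.g. setup_rhosts "$HOME". If /etc/hosts.equiv is used instead, it needs an entry for the originating host.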
================
ERROR... Admin account can NOT be root.

The AdmMgr program requires an administrator account which is not the root account. Change the
administrator account name to something other than root and continue testing.
================
ERROR... Unable to locate Admin account <>.

The AdmMgr program is unable to locate the admin account name in the passwd file/database. Make sure
the admin account name provided is a valid user account name on all application hosts (and the master
host as well) and continue testing.
================
ERROR... Application must be chosen for A/M Host <>.

The configuration requires each Analysis Manager host to reference an application. Add a reference for
this AM host and re-test.
================
ERROR... Application name <> is not referenced by any A/M Host.

The application specified is not referenced by any Analysis Manager hosts. Add AM hosts or remove this
application and continue testing.
================
ERROR... Application name <> is used more than once.

Only unique application names can be used. Re-name the applications so no two are alike and re-test.
================
ERROR... Application not specified for A/M Host <>.

Cannot do Unique A/M Host Test. The configuration requires each A/M host to reference an application.
Add a reference for this A/M host and re-test.
================
ERROR... At least one filesystem must be defined for A/M Host <>.

The configuration requires each Analysis Manager host to reference a file system. Add a reference for
this AM host and re-test.
================
ERROR... At least one host must be specified.

At least one physical host must be specified. Add a physical host entry and re-test.
================
ERROR... At least one application must be specified.

At least one application must be defined. Add an application and re-test.


================

ERROR... Could not determine host address for host <>.

The Admin program is unable to determine the host address for the designated host. Possible causes are
an invalid name entered, or an invalid host file or name server entry.
================
ERROR... Detected NULL host name.

Provide a host name where specified and re-test.


================
ERROR... Executable test on host <> failed for user <>.

Either the Analysis Manager install tree is not accessible on the remote host, or network access to the
remote host is denied. Make sure the Analysis Manager install tree is the same on all application hosts
(either through NFS or by creating identical directories) and try again.
If network access is the cause for failure, check network permission. (see the ERROR... Access to host
<> failed for admin <> error description)
================
ERROR... Execution of command failed on host <>.

Either the command does not exist, or network access is denied. Most likely due to network access
permission. (See the ERROR... Access to host <> failed for admin <> error description.)
================
ERROR... Failure Creating file <> on host <>.
ERROR... Failure Accessing Test File <> on host <>.

The user does not have permission to create/access a test file in the proj directory on the designated
host. Check the permission of this directory on the remote host and re-test. If permission is not the
problem, check network access to the remote host as the user. (See the ERROR... Access to host <> failed
for admin <> error description.)
================
ERROR... Failure Creating Test File <> on host <>.

The user does not have permission to create/access a test file in the file system/directory on the designated
host as listed in the disk.cfg file. Check the permission of this directory on the remote host and re-test.
If permission is not the problem, check network access to the remote host as the user. (See the
ERROR... Access to host <> failed for admin <> error description.)
================
ERROR... Host <> and Host <> have an identical addresses.

Remove one of the host entries (since they are the same host) or change one to point to a different host
and re-test.
================

ERROR... Host not specified for A/M Host <>. Run Basic A/M Host Test.

Each Analysis Manager host must reference a physical host. Provide a physical host reference and
continue testing.
================
ERROR... Invalid A/M queue name <>.
ERROR... Invalid LSF queue name <>.
ERROR... Invalid NQS queue name <>.

Enter a valid queue name (no space characters, etc.) and re-test.
================
ERROR... LSF executables path <> must be an absolute path.
ERROR... NQS executables path <> must be an absolute path.

The pathname must be fully qualified. Change the path to be fully qualified (starting with a
slash character, /) and re-test.
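The absolute-path rule is easy to verify in a script before re-testing; a minimal sketch:

```shell
#!/bin/sh
# Succeed only when the argument is a fully qualified path,
# i.e. it begins with a slash character.
is_absolute() {
    case "$1" in
        /*) return 0 ;;
        *)  return 1 ;;
    esac
}
```

For example, is_absolute /usr/lsf/bin succeeds, while is_absolute lsf/bin fails.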
================
ERROR... NULL A/M Host name is invalid.

Provide an A/M host name where specified and re-test.


================
ERROR... No queue defined for application <>.

Provide a queue where specified and re-test.


================
ERROR... Physical host has not been defined for A/M Host <>. Run Basic
A/M Host Test.

Each Analysis Manager host must reference a physical host. Provide a physical host reference and
continue testing.
================
ERROR... Physical host must be chosen for A/M Host <>.

Each Analysis Manager host must reference a physical host. Provide a physical host reference and
continue testing.
================
ERROR... Remote execution of uname command failed on host <>.

Either the uname command (required by the Analysis Manager) cannot be found on the remote host in
the user's default search path, or network access to the remote host is denied. Check the existence,
permission, and location of the uname command on the remote host. (Some Convex machines are
shipped without uname, but the Analysis Manager provides one; just place a copy of the uname program
(or a link) into a default path directory, such as /bin.) If network access is the cause of the failure, check
as above. (See the ERROR... Access to host <> failed for admin <> error description.)
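A quick way to check whether uname resolves on the default search path; the Analysis Manager source location shown in the message is a placeholder:

```shell
#!/bin/sh
# Check that "uname" resolves on the default search path; if it does
# not, link in the copy provided with Analysis Manager. The source
# location below is a placeholder for your installation.
if command -v uname >/dev/null 2>&1; then
    echo "uname found: $(command -v uname)"
else
    echo "uname missing; link a copy into a default path directory, e.g.:"
    echo "  ln -s /path/to/analysis_manager/uname /bin/uname"
fi
```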
================

ERROR... Unable to start executable <.../bjobs> on host <> as user <>.
ERROR... Unable to start executable <.../bkill> on host <> as user <>.
ERROR... Unable to start executable <.../bsub> on host <> as user <>.
ERROR... Unable to start executable <.../qdel> on host <> as user <>.
ERROR... Unable to start executable <.../qstat> on host <> as user <>.
ERROR... Unable to start executable <.../qsub> on host <> as user <>.

Either the designated files do not exist on the remote host as indicated or the network access to the remote
host to test each executable is failing. Check the file existence and change the path as required, or check
network access as described above. (See the ERROR... Access to host <> failed for admin <> error
description.)
================
ERROR... Zero A/M hosts defined. At least one required.

Define an Analysis Manager host and re-test.


================
ERROR... Zero queues defined. At least one required.

Define a queue and re-test.


================
ERROR... Queue <> must contain at least one selected host.

Select a host or hosts from the list for each queue and continue.
================
ERROR... configuration file <> not located on physical host <>.
ERROR... runtime file <> not located on physical host <>.

The AdmMgr program cannot locate the rc or exe file designated in the configuration on the specified
host. Check the installation of the application or the rc/exe path and re-test.
================
ERROR... Unable to open unique submit log file

There are more than 50 submit log files in the current working directory.

Chapter 9: Application Procedural Interface (API)

Analysis Manager API
Include File
Example Interface

Analysis Manager API


Analysis Manager Application Procedural Interface (API)
Description
The Analysis Manager can be used by virtually any program or user interface that can access the Analysis
Manager API. Thus, with some programming knowledge and skill, customized user interfaces to the
Analysis Manager can be made. The API is not a part of the Analysis Manager product for general use.
However, for completeness, this appendix describes these procedural calls to access the Analysis
Manager. The necessary include file and an example usage of the API are also included in this appendix.
If you have a special customization need to incorporate the Analysis Manager API, please contact your
local MSC representative. MSC is happy to provide customized solutions to its customers on a fee basis.
Assumptions:
The product is ALREADY installed and configured. A description of what it takes to install and
configure is included in System Management; for the purpose of describing the API, assume this has
been done.
A Quick Background
There are 3 machines involved in the job submit/abort/monitor cycle:
1. The QueMgr scheduling process machine, labelled the master node.
2. The user's submit (home) machine where the graphical user interface (GUI) is run and the input
files are located, labelled the submit node.
3. The analysis machine where the job actually runs, labelled the analysis node.
All three machines can be the same host, different hosts, or any combination of these. There are also two
separate persistent processes, which are already running as part of the installation. These processes (called
daemons on Unix or services on Windows) are the:
1. QueMgr - job scheduling daemon or service
2. RmtMgr - remote command daemon or service
There is one and only one QueMgr process per site (or group or organization or network), but there are
many RmtMgr processes. An RmtMgr process runs on each and every analysis machine. An RmtMgr can
also be run on each submit machine (recommended). If the submit and analysis machines are the same
host, then only one RmtMgr needs to be running.
The QueMgr and RmtMgr processes start up automatically at boot time and are always running, but use
very little memory and CPU resources, so users will not notice performance effects. These processes can
run as root (Administrator on Windows) or, if those privileges are not available, as any user.
Each RmtMgr binds to a known/chosen port number that is the same for every RmtMgr machine. Each
RmtMgr process collects machine statistics on free CPU cycles, free memory and free disk space and
returns this data to the QueMgr at frequent intervals.

Chapter 9: Application Procedural Interface (API) 195


Analysis Manager API

The QueMgr then maintains a sorted list of each RmtMgr machine and its capacity to report back to a
GUI/user. (A least-loaded host selection is currently being developed, so that the QueMgr can select the
actual host for a submit based on these statistics instead of the user explicitly setting the hostname in
the GUI.)
There are a few other AM executables:
1. The TxtMgr - a simple text-based UI which is built on this API and demonstrates all these
features.
2. The JobMgr - the GUI back-end process. It starts up on the same machine as the GUI (the submit
machine) when a job is submitted and runs only for the life of the job. There is always one JobMgr
process per job.
3. The analysis family: These programs are all built on top of an additional API that implements the
features common to all of them. The common code is shared, and the custom work for each
application is in a few separate routines: pre_app(), post_app(), abort_app().
• NasMgr - The MSC Nastran analysis process, which communicates data to/from the JobMgr
and spawns the actual MSC Nastran sub-process. It also reads include files and transfers them,
adds FMS statements to the deck if appropriate, and periodically sends job resource data and
msgpop message data to the JobMgr to store off.
• MarMgr - The MSC.Marc analysis process, which does the same things as NasMgr, but for the
MSC.Marc application.
• AbaMgr - The Abaqus analysis process, which does the same things as NasMgr, but for the
Abaqus application.
• GenMgr - The General application analysis process, used for any other application. It does
what NasMgr does except that it has no knowledge of the application; it just runs it and collects
resource usage.
General outline of the Analysis Manager API:
With Analysis Manager there are 5 fundamental functions one can perform:
1. Submit a job
2. Abort a job
3. Monitor a specific job
4. Monitor all the hosts/queues
5. List statistics of a completed job
Note: With the job database viewer
($P3_HOME/p3manager_files/bin/ARCH/Job_Viewer) one can
view/gather/query statistics about ALL jobs for a company/site/etc., as the QueMgr maintains
a database of all job statistics. (The database is generally located in the
$P3_HOME/p3manager_files/default/log/QueMgr.rdb file.)

196

Patran Analysis Manager Users Guide


Analysis Manager API

Each function requires some common data and some unique data. Common data include the QueMgr host
and port it is listening on and the configuration structure information. Unique data is described
further below.
Configure

The first step to any of the Analysis Manager functions is to connect to an already running QueMgr. To
do this you must first know the host and port of the running QueMgr, which is usually in the
$P3_HOME/p3manager_files/org.cfg or the
$P3_HOME/p3manager_files/default/conf/QueMgr.sid file. After that, simply call
CONFIG *cfg;
char qmgr_host[128];
int qmgr_port;
int ret_code;
int error_msg;

cfg = get_config(qmgr_host, qmgr_port, &ret_code, error_msg);

ret_code and possibly error_msg are returned for checking errors.
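Before calling get_config, the QueMgr host and port must be obtained from one of the files above. The sketch below assumes the QueMgr.sid file holds "host port" on its first line; that layout is an assumption, so verify your site's file before relying on it:

```shell
#!/bin/sh
# Obtain the QueMgr host and port needed by get_config().
# ASSUMPTION: the QueMgr.sid file holds "host port" on its first line;
# verify your site's file layout before relying on this.
read_sid() {
    read qmgr_host qmgr_port < "$1"
    echo "${qmgr_host}:${qmgr_port}"
}
```

For example, read_sid "$P3_HOME/p3manager_files/default/conf/QueMgr.sid".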


The CONFIG structure is defined in an include file shown below. Then initialize sub-parts of the
configuration structure by calling
init_config(cfg)
Then determine the application name/index. The application is the name of the application you plan to
work with, most likely MSC Nastran, but it could be anything that is already pre-configured. A
configuration basically includes the application name and a list of hosts and paths where it is installed,
as described in the $P3_HOME/p3manager_files/default/conf/host.cfg file, read by the
QueMgr on start up. Each application has different names and possibly different options to the Analysis
Manager functions. All application names/indexes are in the cfg structure so the GUI can ask the user
and check against the accepted list.
Then call the function of choice:
1. Submit a job
2. Abort a job
3. Monitor a specific job
4. Monitor all the hosts/queues
5. List statistics of a completed job
Submit

To submit a job, the GUI fills in the application structure data and makes a call to submit the job.
The call may block and wait for the job to complete (possibly a very long time) or it can return
immediately. See the job info rcf/GUI settings listed below for what can be set and changed. Assuming
defaults for ALL settings, only a jobname (input file selection), hostname, and (possibly) memory
need to be set before submitting.
Then call

char *jobfile;
char *jobname; /* usually same as basename of jobfile */
int background;
int ret_code;
int job_number;
job_number = submit_job(jobfile,jobname,background,&ret_code);

This call goes through many steps: contacting the QueMgr, getting a valid reserved job number, asking
the QueMgr to start a JobMgr, etc., and then sends all the config/rcf/GUI structure info to the JobMgr.
The JobMgr runs for the life of the job and is essentially the back-end of the GUI, transferring files
between the user's submit machine and the analysis machine (the NasMgr, MarMgr, AbaMgr, or
GenMgr process).
Abort

To abort a job, the GUI needs to query the QueMgr for a list of jobs, and then present this for the user
to select:
char *qmgr_host;
int qmgr_port;
JOBLIST *job_list;
int job_count;
job_count = get_job_list(qmgr_host,qmgr_port,job_list);

Once a job is chosen, a simple call deletes it:


int job_number;
char *job_user;
int ret_code;
ret_code = delete_job(job_number,job_user);
Monitor Running Job

To monitor a specific job, the GUI needs to query the QueMgr for a list of jobs, and then present
this for the user to select:
char *qmgr_host;
int qmgr_port;
JOBLIST *job_list;
int job_count;
job_count = get_job_list(qmgr_host,qmgr_port,job_list);

Once a job is chosen, a simple call with a severity level returns data:

int job_number;
int severity_level;
int cpu, mem, disk;
int msg_count;
char **ret_string;
ret_string = monitor_job(job_number,severity_level,
&cpu,&mem,&disk,&msg_count);

The ret_string then contains a list (array of strings) of all messages the application stored (msgpop
type) that are <= the severity level input, along with a resource usage string. The number of msgpop
messages is stored in msg_count, to be referenced like:

for (i = 0; i < msg_count; i++)
    printf("%s", ret_string[i]);

printf("cpu time used by job = %d, mem used by job = %d, disk used by job = %d\n",
       cpu, mem, disk);

The CPU, MEM and DISK values are the current resources used by the job.
Monitor Hosts/Queues

To monitor all hosts/queues, the GUI needs to make a call and get back all QueMgr data for the
chosen application. This gets complex: there are five different types/groups of data available. For now
let's just assume only one type is wanted. The types are:
1. FULL_LIST
2. JOB_LIST
3. QUEMGR_LOG
4. QUE_STATUS
5. HOST_STATS
Each has its own syntax and set of data. For the QUE_STATUS type, the call returns an array of
structures containing the hostname, number of running jobs, number of waiting jobs, and maximum jobs
allowed to run on that host, for the given (input) application.
char *qmgr_host;
int qmgr_port;
int job_count;
QUESTAT *que_info;
que_info = get_que_stats(qmgr_host,qmgr_port,&job_count);
for(i=0;i<job_count;i++)
printf("%s %d %d %d\n",
que_info[i].hostname,que_info[i].num_running,
que_info[i].num_waiting,que_info[i].maxtsk);

For FULL_LIST:
See Include File.
For JOB_LIST:
See Include File.
For QUEMGR_LOG, this is simply a character string of the last 4096 bytes of the QueMgr log file:
See Include File.
For HOST_STATS:
See Include File.

Monitor Completed Job

To list completed jobs, the GUI needs to query the QueMgr for a list of jobs, and then present this
for the user to select.
char *qmgr_host;
int qmgr_port;
JOBLIST *job_list;
int job_count;
job_count = get_job_list(qmgr_host,qmgr_port,job_list);

Once a job is chosen, a simple call will return all the job data saved.
int job_number;
JOBLIST *comp_info;
comp_info = get_completedjob_stats(job_number);
Remote Manager

On another level, a GUI could also connect to any RmtMgr and ask it to perform a command and return
the output from that command. This is essentially a remote shell (rsh host command), as on a Unix
machine. This functionality may come in handy when extending the Analysis Manager product, for
example to network-install other MSC software. The syntax for this is as follows:
char *ret_msg;
int ret_code;
char *rmtuser;
char *rmthost;
int rmtport;
char *command;
int background; /* FOREGROUND (0) or BACKGROUND (1) */
ret_msg = remote_command(rmtuser, rmthost,
rmtport, command, background, &ret_code);

Structures
The JOBLIST structure contains these members:
int job_number;
char job_name[128];
char job_user[128];
char job_host[128];
char work_dir[256];
int port_number;

cfg structure from config.h:


typedef struct{
char org_name[NAME_LENGTH];
char org_name2[NAME_LENGTH];
char host_name[NAME_LENGTH];
unsigned int addr;
int port;
}ORG;
typedef struct{
char prog_name[NAME_LENGTH];
char app_name[NAME_LENGTH];
char args[PATH_LENGTH];
char extension[24];

}PROGS;
typedef struct{
char pseudohost_name[NAME_LENGTH];
char host_name[NAME_LENGTH];
char exepath[PATH_LENGTH];
char rcpath[PATH_LENGTH];
int glob_index;
int sub_index;
char arch[NAME_LENGTH];
unsigned int address;
}HSTS;
typedef struct{
int num_hosts;
HSTS *hosts;
}HOST;
typedef struct{
char pseudohost_name[NAME_LENGTH];
char exepath[PATH_LENGTH];
char rcpath[PATH_LENGTH];
int type;
}APPS;
typedef struct{
char host_name[NAME_LENGTH];
int num_subapps;
APPS subapp[MAX_SUB_APPS];
int maxtsk;
char arch[NAME_LENGTH];
unsigned int address;
}TOT_HST;
typedef struct{
char queue_name1[NAME_LENGTH];
char queue_name2[NAME_LENGTH];
int glob_index;
}QUES;
typedef struct{
int num_queues;
QUES *queues;
}QUEUE;
typedef struct{
char queue_name1[NAME_LENGTH];
char queue_name2[NAME_LENGTH];
HOST sub_host[MAX_APPS];
}TOT_QUE;
typedef struct{
char file_sys_name[NAME_LENGTH];
int model;
int max_size;
int cur_free;
}FILES;
typedef struct{
char pseudohost_name[NAME_LENGTH];
int num_fsystems;
FILES *sub_fsystems;
}TOT_FSYS;
typedef struct{
char sepuser_name[NAME_LENGTH];
}SEP_USER;

typedef struct{
int QUE_TYPE;
char ADMIN[128];
int NUM_APPS;
unsigned int timestamp;
/* prog names */
PROGS progs[MAX_APPS];
/* host stuff */
HOST hsts[MAX_APPS];
int total_h;
TOT_HST *total_h_list;
/* que stuff */
char que_install_path[PATH_LENGTH];
char que_options[PATH_LENGTH];
int min_mem_value;
int min_disk_value;
int min_time_value;
QUEUE ques[MAX_APPS];
int total_q;
TOT_QUE *total_q_list;
/* file stuff */
int total_f;
TOT_FSYS *total_f_list;
/* separate user stuff */
int total_u;
SEP_USER *total_u_list;
}CONFIG;

An example of all the rcf/GUI settings from the app_config.h files:


cfg.total_host[0].host_name = hal9000.macsch.com
cfg.total_host[0].arch = HP700
cfg.total_host[0].maxtasks = 3
cfg.total_host[0].num_apps = 3
cfg.total_host[0].sub_app[MSC/NASTRAN].pseudohost_name = nas_host_u
cfg.total_host[0].sub_app[MSC/NASTRAN].exepath = /msc/bin/nast705
cfg.total_host[0].sub_app[MSC/NASTRAN].rcpath = /msc/conf/nast705rc
cfg.total_host[0].sub_app[ABAQUS].pseudohost_name = aba_host_u
cfg.total_host[0].sub_app[ABAQUS].exepath = /hks/abaqus
cfg.total_host[0].sub_app[ABAQUS].rcpath = /hks/site/abaqus.env
cfg.total_host[0].sub_app[GENERIC].pseudohost_name = gen_host_u
cfg.total_host[0].sub_app[GENERIC].exepath = /apps/bin/GENERALAPP
cfg.total_host[0].sub_app[GENERIC].rcpath = NONE
cfg.total_host[1].host_name = daisy.macsch.com
cfg.total_host[1].arch = WINNT
cfg.total_host[1].maxtasks = 3
cfg.total_host[1].num_apps = 4
cfg.total_host[1].sub_app[MSC/NASTRAN].pseudohost_name = nas_host_nt
cfg.total_host[1].sub_app[MSC/NASTRAN].exepath = c:/msc/bin/nastran.exe
cfg.total_host[1].sub_app[MSC/NASTRAN].rcpath = c:/msc/conf/nast706.rcf
cfg.total_host[1].sub_app[ABAQUS].pseudohost_name = aba_host_nt
cfg.total_host[1].sub_app[ABAQUS].exepath = c:/hks/abaqus.exe
cfg.total_host[1].sub_app[ABAQUS].rcpath = c:/hks/site/abaqus.env
cfg.total_host[1].sub_app[GENERIC].pseudohost_name = gen_host_nt
cfg.total_host[1].sub_app[GENERIC].exepath = c:/apps/bin/GENERALAPP.exe
cfg.total_host[1].sub_app[GENERIC].rcpath = NONE
cfg.total_host[1].sub_app[GENERIC2].pseudohost_name = gen_host2_nt
cfg.total_host[1].sub_app[GENERIC2].exepath = c:/WINNT/System32/mem.exe
cfg.total_host[1].sub_app[GENERIC2].rcpath = NONE

#
unv_config.auto_mon_flag = 0
unv_config.time_type = 0
unv_config.delay_hour = 0
unv_config.delay_min = 0
unv_config.specific_hour = 0
unv_config.specific_min = 0
unv_config.specific_day = 0
unv_config.mail_on_off = 0
unv_config.mon_file_flag = 0
unv_config.copy_link_flag = 0
unv_config.job_max_time = 0
unv_config.project_name = nastusr
unv_config.orig_pre_prog =
unv_config.orig_pos_prog =
unv_config.exec_pre_prog =
unv_config.exec_pos_prog =
unv_config.separate_user = nastusr
unv_config.p3db_file =
#
nas_config.disk_master = 0
nas_config.disk_dball = 0
nas_config.disk_scratch = 0
nas_config.disk_units = 2
nas_config.scr_run_flag = 1
nas_config.save_db_flag = 0
nas_config.copy_db_flag = 0
nas_config.mem_req = 0
nas_config.mem_units = 0
nas_config.smem_units = 0
nas_config.extra_arg =
nas_config.num_hosts = 2
nas_host[hal9000.macsch.com].mem = 0
nas_host[hal9000.macsch.com].smem = 0
nas_host[daisy.macsch.com].mem = 0
nas_host[daisy.macsch.com].smem = 0
nas_config.default_host = nas_host_u
nas_config.default_queue = N/A
nas_submit.restart_type = 0
nas_submit.restart = 0
nas_submit.modfms = 0
nas_submit.nas_input_deck =
nas_submit.cold_jobname =
#
aba_config.copy_res_file = 1
aba_config.save_res_file = 0
aba_config.mem_req = 0
aba_config.mem_units = 0
aba_config.disk_units = 2
aba_config.space_req = 0
aba_config.append_fil = 0
aba_config.user_sub =
aba_config.use_standard = 1
aba_config.extra_arg =
aba_config.num_hosts = 2
aba_host[hal9000.macsch.com].num_cpus = 1
aba_host[hal9000.macsch.com].pre_buf = 0
aba_host[hal9000.macsch.com].pre_mem = 0
aba_host[hal9000.macsch.com].main_buf = 0
aba_host[hal9000.macsch.com].main_mem = 0

Chapter 9: Application Procedural Interface (API) 203


Analysis Manager API

aba_host[daisy.macsch.com].num_cpus = 1
aba_host[daisy.macsch.com].pre_buf = 0
aba_host[daisy.macsch.com].pre_mem = 0
aba_host[daisy.macsch.com].main_buf = 0
aba_host[daisy.macsch.com].main_mem = 0
aba_config.default_host = aba_host_u
aba_config.default_queue = N/A
aba_submit.restart = 0
aba_submit.aba_input_deck =
aba_submit.restart_file =
#
gen_config[GENERIC].disk_units = 2
gen_config[GENERIC].space_req = 0
gen_config[GENERIC].mem_units = 2
gen_config[GENERIC].mem_req = 0
gen_config[GENERIC].cmd_line = jid=$JOBFILE mem=$MEM
gen_config[GENERIC].mon_file = $JOBNAME.log
gen_config[GENERIC].default_host = gen_host_u
gen_config[GENERIC].default_queue = N/A
gen_submit[GENERIC].gen_input_deck =
#
gen_config[GENERIC2].disk_units = 2
gen_config[GENERIC2].space_req = 0
gen_config[GENERIC2].mem_units = 2
gen_config[GENERIC2].mem_req = 0
gen_config[GENERIC2].cmd_line =
gen_config[GENERIC2].mon_file = $JOBNAME.log
gen_config[GENERIC2].default_host = gen_host2_nt
gen_config[GENERIC2].default_queue = N/A
gen_submit[GENERIC2].gen_input_deck =
#

Include File
This include file (api.h) must be included in any source file using the Analysis Manager API.
#ifndef _AMAPI
#define _AMAPI
#ifdef __cplusplus
extern "C" {
#endif
#if defined(SGI5)
typedef int socklen_t;
#elif defined(DECA)
typedef size_t socklen_t;
#elif defined(HP700)
# if !defined(_ILP32) && !defined(_LP64)
typedef int socklen_t;
# endif
#elif defined(WINNT)
typedef int socklen_t;
#endif
#define RMTMGR_RESV_PORT 1800
#define QUEMGR_RESV_PORT 1900

#define GLOBAL_AM_VERSION "2003.0.1"


#ifndef AM_INITIALIZE
# define AM_EXTERN extern
#else
# define AM_EXTERN
# if !defined(__LINT__)
# if !defined(__TAG_USED)
# define __TAG_USED
static char *sccsid[] =
{
"@(#) MSC Analysis Manager 2003.0.1",
"@(#)"
};
# endif /* __TAG_USED */
# endif /* __LINT__ */
#endif
#if defined(AM_INITIALIZE)
char *global_auth_msg = NULL;
int __is_checked_out = 0;
#else
extern char *global_auth_msg;
extern int __is_checked_out;
#endif
#if defined(AM_INITIALIZE)

int xxx_has_input_deck;
int hks_has_restart;
int has_extra_arg;
#else
extern int xxx_has_input_deck;
extern int hks_has_restart;
extern int has_extra_arg;
#endif
#define SOCKET_VERSION1 1
#define SOCKET_VERSION2 1

#ifndef PATH_LENGTH
# define PATH_LENGTH 400
#endif
#ifndef NAME_LENGTH
# define NAME_LENGTH 256
#endif
#ifndef MAX_STR_LEN
# define MAX_STR_LEN 256
#endif

#ifndef SOMAXCONN
# define SOMAXCONN 20
#endif
#ifdef ULTIMA
#define MSGPOP 1
#else
#ifdef MSGPOP
# undef MSGPOP
#endif
#define MSGPOP notused
#endif

#define NOT_JOB_OWNER -201

#define UNKNOWN_STATUS -1
#define OK_STATUS 0
#define BAD_STATUS 999
#define BLOCK_TIMEOUT 60
#define NONB_TIMEOUT 15
#define MAX_EVENT_NUMBER 115

#define TOTAL_TO_QM_EVENTS 39
/* ---------------------------- */
/* all events to QueMgr are first (and sequential) */
#define TRANS_CONFIG 1
#define XX_QM_PING 39 /* highest to QueMgr message */
#define QM_XX_PING 98
#define JM_QM_JOB_FINISHED 2
#define JM_QM_JOB_INIT 3
#define JM_QM_ADD_TASK 4
#define JM_QM_DB_UPDATE 19
#define JM_QM_CLEANUP_JOB 26
#define TM_QM_TASK_FINISHED 5
#define TM_QM_TASK_RUNNING 6
#define TM_QM_APP_FILES 25
#define PM_QM_REMOVE_JOB 7
#define PM_QM_FULL_LIST 8
#define PM_QM_JOB_LIST 9
#define PM_QM_QUEMGR_LOG 10
#define PM_QM_QUE_STATUS 11
#define PM_QM_JOB_SELECT_LIST 12
#define PM_QM_JOB_COMP_LIST 27
#define PM_QM_JOBNUM_REQ 13
#define PM_QM_SUSPEND_JOB 21
#define PM_QM_RESUME_JOB 22
#define PM_QM_CPU_LOADS 23
#define PM_QM_START_UP_JOBMGR 29
#define PA_QM_HALT_QUEMGR 14
#define PA_QM_DRAIN_HALT 15
#define PA_QM_DRAIN_RESTART 16
#define PA_QM_CHECK 17
#define PA_QM_GET_RECFG_TEXT 18
#define XX_QM_REQ_VERSION 20
/* future XX_QM events (33-38) */
#define RM_QM_LOAD_INFO 24
#define RM_QM_CMD_OUT 28
#define RM_XX_PROC_OUT 32
#define QM_JM_TASK_FINISHED 40
#define QM_JM_TASK_RUNNING 41
#define QM_JM_KILL_TASK 42
#define QM_JM_ACCEPT_REQUEST 43
#define TM_JM_IN_PRE 44
#define TM_JM_RUN_INFO 45
#define TM_JM_IN_POS 46
#define TM_JM_GET_FILES 62

#define TM_JM_PUT_FILES 63
#define TM_JM_CFG_STRUCTS 65
#define TM_JM_DISK_INIT 66
#define TM_JM_LOG_INFO 69
#define TM_JM_PRE_PROG 96
#define TM_JM_POS_PROG 97
#define TM_JM_SUSPEND_JOB 77
#define TM_JM_RESUME_JOB 78
#define TM_JM_ADD_COMMENT 85
#define TM_JM_RM_FILE 86
#define TM_JM_RUNNING_FILE 87
#define TM_JM_MSG_BUFFERS 95
#define TM_PM_GET_FILES 108
#define XX_RM_STOP_NOW 74
#define XX_RM_RMT_CMD 81
#define XX_RM_RMT_AM_CMD 99
#define XX_RM_SEND_LOADS 82
#define XX_RM_KILL_PROCESS 83
#define XX_RM_REMOVE_FILE 84
#define XX_RM_REMOVE_AM_FILE 100
#define XX_RM_WRITE_FILE 75
#define XX_RM_PUT_FILE 109
#define XX_RM_PUL_FILE 110
#define XX_RM_PING_ME 111
#define XX_RM_GET_UNAME 112
#define XX_RM_EXIST_FILE 113
#define XX_RM_DIR_WRITEABLE 114
#define XX_RM_CAT_FILE 115
#define QM_PM_RET_CODE 47
#define QM_PM_FULL_LISTING 48
#define QM_PM_JOB_LIST 49
#define QM_PM_QUEUE_STATUS 50
#define QM_PM_QUEMGR_LOG 51
#define QM_PM_JOB_SEL_LIST 52
#define QM_PM_SEND_JOBNUM 53
#define QM_PM_NEEDS_RECFG 91
#define QM_PM_LOAD_INFO 92
#define QM_PM_JOBMGR_START 94
#define PM_JM_REQ_JOBMON 54
#define PM_JM_REQ_RUNNING_FILE 88
#define PM_JM_KILL_TRANSFERS 90
#define PM_JM_MSGDEST_REQ 101
#define PM_JM_STATS_REQ 102

#define PM_JM_LOGFILE_REQ 103
#define PM_JM_MON_INIT 104
#define PM_JM_LIST_RUN_FILES 105
#define PM_JM_REQ_RUNNING_FILE2 106
#define QM_PA_INFO 67
#define QM_PA_SEND_RECFG_TEXT 68
#define JM_PM_LOG_COMMENT 55
#define JM_PM_LOG_INIT_JOB 56
#define JM_PM_LOG_TASK_SUBMIT 57
#define JM_PM_LOG_TASK_RUN 58
#define JM_PM_LOG_TASK_COMPLETE 59
#define JM_PM_LOG_JOB_FINISHED 60
#define JM_PM_TIME_SYNC 61
#define JM_PM_LOG_LINE 70
#define JM_PM_FILE_PRESENT 71
#define JM_JM_PRE_FINISHED 72
#define JM_JM_POS_FINISHED 73
#define JM_TM_RECV_SETUP 64
#define JM_TM_GIVEME_FILE 89
#define JM_TM_GIVEME_FILE2 107
#define QM_XX_REQ_VERSION 76
#define QM_TM_SUSPEND_JOB 79
#define QM_TM_RESUME_JOB 80
#define QM_TM_KILL_JOB 93
#define MAX_ORGS 28
#define MAX_APPS 30
#define MAX_SUB_APPS 50
#define MAX_GEN_APPS 10
#define LOCAL 0
#define NFS 1

#define MSC_QUEUE 0
#define LSF_QUEUE 1
#define NQS_QUEUE 2

#define MSC_NASTRAN 1
#define HKS_ABAQUS 2
#define MSC_MARC 3
#define GENERAL 20
#define MAX_NUM_FILE_SYS 20
#define UNITS_WORDS 0
#define UNITS_64BIT_WORDS 99
#define UNITS_KB 1
#define UNITS_MB 2

#define UNITS_GB 3

#define MIN_MEM_REQ 1 /* (mb) */
#define MIN_DISK_REQ 1 /* (mb) */
#define MIN_TIME_REQ 99999 /* (min) */

#define JOB_SUBMITTED 0
#define JOB_QUEUED 1
#define JOB_RUNNING 2

#define JOB_SUCCESSFUL 0
#define JOB_ABORTED 1
#define JOB_FAILED 2
#define FILE_STILL_DOWNLOADING 1
#define FILE_DOWNLOAD_COMPLETE 0
/* ---------------------------- */
#define IC_CLEAN 0
#define IC_CANT_GET_ADDRESS -100
#define IC_CANT_OPEN_HOST_FILE -101
#define IC_CANT_ALLOC_MEM -102
#define IC_NOT_ENUF_HOSTS -103
#define IC_CANT_OPEN_QUE_FILE -104
#define IC_MISSING_FIELDS -105
#define IC_CANT_FIND_HOST -106
#define IC_ADD_QUE_ERROR -107
#define IC_NOT_ENUF_QUES -108
#define IC_CANT_FIND_QUE -109
#define IC_NO_QUE_TYPE -110
#define IC_UNKNOWN_QUE_TYPE -111
#define IC_NO_QUE_PATH -112
#define IC_CANT_FIND_MACH -113
#define IC_BAD_MAXTSK -114
#define IC_TOO_FEW_QUE_APPS -115
#define IC_BAD_APP_TYPE -116
#define IC_NOT_ENUF_SUB_HOSTS -117
#define IC_BAD_PORT -118
#define IC_NO_ADMIN -119
#define IC_BAD_ADMIN -120
#define ID_CLEAN 0
#define ID_CANT_OPEN_DISK_FILE -150
#define ID_CANT_GET_ADDRESS -151
#define ID_CANT_ALLOC_MEM -152
#define ID_CANT_FSTAT -153
#define ID_NOT_ENUF_FSYS -154
#define ID_NOT_ENUF_SUBS -155
#define ID_CANT_FIND_HOST -156
#define IU_CLEAN 0
#define IU_CANT_ALLOC_MEM -180

#define TIME_SYNC 99
#define LOG_COMMENT 100
#define LOG_INIT_JOB 101
#define LOG_TASK_SUBMIT 102
#define LOG_TASK_RUN 103
#define LOG_TASK_COMPLETE 104
#define LOG_JOB_FINISHED 105
#define LOG_DISK_INIT 106
#define LOG_DISK_UPDATE 107
#define LOG_CPU_UPDATE 108
#define LOG_DISK_SUMMARY 109
#define LOG_DISK_FS_SUMMARY 110
#define LOG_CPU_SUMMARY 111
#define LOG_LOGLINE 112
#define LOG_FILE_PRESENT 113
#define LOG_TASK_SUSPEND 114
#define LOG_TASK_RESUME 115
#define LOG_RUNNING_FILE 116
#define LOG_RUNNING_DONE 117
#define LOG_MEM_UPDATE 118
#define LOG_MEM_SUMMARY 119
/* ---------------------------- */
typedef struct{
char file_sys_name[PATH_LENGTH];
int disk_used_pct;
int disk_max_size_mb;
}JOB_FS_LIST;
typedef struct{
char filename[PATH_LENGTH];
int sizekb;
}FILE_LIST;
typedef struct{
char org_name[NAME_LENGTH];
char org_name2[NAME_LENGTH];
char host_name[NAME_LENGTH];
unsigned int addr;
int port;
}ORG;
typedef struct{
char prog_name[NAME_LENGTH];
char app_name[NAME_LENGTH];
int maxapptsk;
char args[PATH_LENGTH];
char extension[24];
}PROGS;
typedef struct{
char pseudohost_name[NAME_LENGTH];
char host_name[NAME_LENGTH];
char exepath[PATH_LENGTH];
char rcpath[PATH_LENGTH];
int glob_index;
int sub_index;
int maxapptsk;
char arch[NAME_LENGTH];
unsigned int address;
}HSTS;
typedef struct{
int num_hosts;
HSTS *hosts;
}HOST;
typedef struct{
char pseudohost_name[NAME_LENGTH];
char exepath[PATH_LENGTH];
char rcpath[PATH_LENGTH];
int maxapptsk;
int type;
}APPS;
typedef struct{
char host_name[NAME_LENGTH];
int num_subapps;
APPS subapp[MAX_SUB_APPS];
int maxtsk;
char arch[NAME_LENGTH];
unsigned int address;
}TOT_HST;
typedef struct{
char queue_name1[NAME_LENGTH];
char queue_name2[NAME_LENGTH];
int glob_index;
}QUES;
typedef struct{
int num_queues;
QUES *queues;
}QUEUE;
typedef struct{
char queue_name1[NAME_LENGTH];
char queue_name2[NAME_LENGTH];
HOST sub_host[MAX_APPS];
}TOT_QUE;
typedef struct{
char file_sys_name[NAME_LENGTH];
int model;
int max_size;
int cur_free;
}FILES;
typedef struct{
char pseudohost_name[NAME_LENGTH];
int num_fsystems;
FILES *sub_fsystems;
}TOT_FSYS;
typedef struct{
char sepuser_name[NAME_LENGTH];
}SEP_USER;
/* ---------------------------- */
typedef struct{
int QUE_TYPE;
char ADMIN[128];
int NUM_APPS;
int config_file_version;
unsigned int timestamp;
char prog_version[32];
/* prog names */
PROGS progs[MAX_APPS];
/* host stuff */
HOST hsts[MAX_APPS];
int total_h;
TOT_HST *total_h_list;
/* que stuff */
char que_install_path[PATH_LENGTH];
char que_options[PATH_LENGTH];
int min_mem_value;
int min_disk_value;
int min_time_value;
QUEUE ques[MAX_APPS];
int total_q;
TOT_QUE *total_q_list;
/* file stuff */

int total_f;
TOT_FSYS *total_f_list;
/* separate user stuff */
int total_u;
SEP_USER *total_u_list;
int qmgr_port;
int rmgr_port;
char qmgr_host[256];
}CONFIG;
/************************************************************************/
/* Defines for setting the different values of the config structure */
/************************************************************************/
#define CONFIG_VERSION 1

#define NO_JOB_MON 0
#define START_JOB_MON 1

#define SUBMIT_NOW 0
#define SUBMIT_DELAY 1
#define SUBMIT_SPECIFIC 2

#define SUNDAY 0
#define MONDAY 1
#define TUESDAY 2
#define WEDNESDAY 3
#define THURSDAY 4
#define FRIDAY 5
#define SATURDAY 6
#define MAIL_OFF 0
#define MAIL_ON 1

#define UI_MGR_MAIL 0
#define MASTER_MAIL 1
#define MAX_PROJ_LENGTH 16

typedef struct{
#ifndef CRAY
int pad1;
#endif
int version;
#ifndef CRAY
int pad2;
#endif
int job_mon_flag;
#ifndef CRAY
int pad3;


#endif
int time_type;
#ifndef CRAY
int pad4;
#endif
int delay_hour;
#ifndef CRAY
int pad5;
#endif
int delay_min;
#ifndef CRAY
int pad6;
#endif
int specific_hour;
#ifndef CRAY
int pad7;
#endif
int specific_min;
#ifndef CRAY
int pad8;
#endif
int specific_day;
#ifndef CRAY
int pad9;
#endif
int mail_on_off;
#ifndef CRAY
int pad10;
#endif
int bogus;
#ifndef CRAY
int pad11;
#endif
int mon_file_flag;
#ifndef CRAY
int pad12;
#endif
int copy_link_flag;
#ifndef CRAY
int pad13;
#endif
int job_max_time;
#ifndef CRAY
int pad14;
#endif
int bogus1;
char project_name[128];
char orig_pre_prog[256];
char orig_pos_prog[256];
char exec_pre_prog[256];
char exec_pos_prog[256];
char separate_user[128];
char p3db_file[256];
char email_addr[256];

} Universal_Config_Info;
/* ---------------------------- */
typedef struct {
char host_name[128];
int num_running;
int num_waiting;
int maxtsk;
char stat_str[64];
}Que_List;
typedef struct {
char msg[2048];
}Msg_List;
typedef struct {
int job_number;
char job_name[128];
char job_user[128];
char job_submit_host[128];
char am_host_name[128];
char job_proj[128];
char work_dir[256];
int application;
int port_number;
char job_run_host[128];
char sub_time_str[128];
int jobstatus;
}Job_List;
typedef struct {
char host_name[128];
int cpu_util;
int free_disk;
int avail_mem;
int status;
}Cpu_List;
/************************************************************************/
/*
*/
/* MSC.Nastran specific configuration structures.
*/
/*
*/
/************************************************************************/
#define DEFAULT_BUFFSIZE 8193

/*
** mck 6/12/98 - change to 0, so they dont get added unless you type something ...
**
#define CONFIG_DEFAULT_SMEM ( (DEFAULT_BUFFSIZE-1) * 100 )
#define CONFIG_DEFAULT_MEM 8000000
*/

#define CONFIG_DEFAULT_SMEM 0
#define CONFIG_DEFAULT_MEM 0

#define NAS_NONE 0
#define NO 0
#define YES 1
#define SINGLE 1
#define MULTI 2

#define DB_GET_NO_FILES 500
#define DB_GET_MST_P3_FILE 600
#define DB_GET_ALL_P3_FILES 650
#define DB_GET_MST_MK_FILE 700
#define DB_GET_ALL_MK_FILES 750
typedef struct {
#ifndef CRAY
int pad1;
#endif
int host_index; /* Global Host Index. */
#ifndef CRAY
int pad2;
#endif
float mem; /* stored as whatever. */
#ifndef CRAY
int pad3;
#endif
float smem; /* stored as whatever. */
#ifndef CRAY
int pad4;
#endif
int num_cpus; /* Number cpus on machine. */
char host_name[128]; /* Real Host Name (host_name) */
char mem_str[64];
char smem_str[64];
} Nas_Config_Host;
typedef struct {
#ifndef CRAY
int pad1;
#endif
int application_type; /* Should be set to MSC_NASTRAN */
#ifndef CRAY
int pad2;
#endif
int default_index; /* Index just within Nas List */
#ifndef CRAY
int pad3;
#endif
int disk_master; /* stored as KB. */
#ifndef CRAY
int pad4;
#endif
int disk_dball; /* stored as KB. */
#ifndef CRAY
int pad5;
#endif
int disk_scratch; /* stored as KB. */
#ifndef CRAY
int pad6;
#endif
int disk_units; /* see defines below */
#ifndef CRAY
int pad7;
#endif
int scr_run_flag;
#ifndef CRAY
int pad8;
#endif
int save_db_flag;
#ifndef CRAY
int pad9;
#endif
int copy_db_flag;
#ifndef CRAY
int pad10;
#endif
float mem_req; /* stored as whatever */
#ifndef CRAY
int pad11;
#endif
int mem_units;
#ifndef CRAY
int pad12;
#endif
int smem_units;
#ifndef CRAY
int pad13;
#endif
int num_hosts;
#ifndef CRAY
int pad14;
#endif
int bogus;
char default_host[128]; /* uihost_name is saved here */
char default_queue[128]; /* queue_name1 is saved here */
char mem_req_str[64];
char extra_arg[256];
Nas_Config_Host *host_ptr;
} Nas_Configure_Info;
typedef struct {
#ifndef CRAY
int pad1;
#endif
int submit_index; /* Index just within Nas List */
#ifndef CRAY
int pad2;
#endif
int specific_index; /* see descrip below. */
#ifndef CRAY
int pad3;
#endif
int restart_type;
#ifndef CRAY
int pad4;
#endif
int restart;
#ifndef CRAY
int pad5;
#endif
int modfms;
#ifndef CRAY
int pad6;
#endif
int bogus;
char nas_input_deck[256]; /* full path and filename */
char cold_jobname[256]; /* coldstart jobname */
} Nas_Submit_Info;

/* The specific_index variable is only used when the queuing type is */
/* not MSC_QUEUE (i.e. it is LSF). If it is -1 then that means the */
/* task can be submitted to any host in the defined queue. If the */
/* specific_index has a value other than -1, then this is an index into */
/* the host list (host list for the application, not global index) */
/* of the specific host the task should be submitted to. */
/************************************************************************/
/*                                                                      */
/* ABAQUS specific configuration structures.                            */
/*                                                                      */
/************************************************************************/
/* Following default values are in words (64bit). */
/*
** mck - 6/12/98 change to 0 so they dont get added unless you type something ...
**
#define DEFAULT_PRE_BUF 400000
#define DEFAULT_PRE_MEM 1000000
#define DEFAULT_MAIN_BUF 2000000
#define DEFAULT_MAIN_MEM 6000000
*/
#define DEFAULT_PRE_BUF 0
#define DEFAULT_PRE_MEM 0
#define DEFAULT_MAIN_BUF 0
#define DEFAULT_MAIN_MEM 0

#define ABA_NONE 0
#define ABA_RESTART 1
#define ABA_CHECK 2

typedef struct {
#ifndef CRAY
int pad1;
#endif
int host_index; /* Global Host Index. */
#ifndef CRAY
int pad2;
#endif
int num_cpus; /* Number cpus on machine. */
#ifndef CRAY
int pad3;
#endif
float pre_buf; /* stored as whatever. */
#ifndef CRAY
int pad4;
#endif
float pre_mem; /* stored as whatever. */
#ifndef CRAY
int pad5;
#endif
float main_buf; /* stored as whatever. */
#ifndef CRAY
int pad6;
#endif
float main_mem; /* stored as whatever. */
char pre_buf_str[64];
char pre_mem_str[64];
char main_buf_str[64];
char main_mem_str[64];
char host_name[128]; /* Real Host Name (host_name) */
} Aba_Config_Host;
typedef struct {
#ifndef CRAY
int pad1;
#endif
int application_type; /* Should be set to HKS_ABAQUS */
#ifndef CRAY
int pad2;
#endif
int default_index; /* Index just within Aba List */
#ifndef CRAY
int pad3;
#endif
int copy_res_file;
#ifndef CRAY
int pad4;
#endif
int save_res_file;
#ifndef CRAY
int pad5;
#endif
float mem_req; /* stored as whatever */
#ifndef CRAY
int pad6;
#endif
int mem_units; /* One of the defines above */
#ifndef CRAY
int pad7;
#endif
int disk_units; /* One of the defines above */
#ifndef CRAY
int pad8;
#endif
int space_req; /* stored as KB. */
#ifndef CRAY
int pad9;
#endif
int append_fil; /* 0 = no 1 = yes */
#ifndef CRAY
int pad10;
#endif
int num_hosts;
#ifndef CRAY
int pad11;
#endif
int use_standard; /* 0 = no 1 = yes */
char default_host[128]; /* uihost_name is saved here */
char default_queue[128]; /* queue_name1 is saved here */
char user_sub[128];
char mem_req_str[64];
char extra_arg[256];
Aba_Config_Host *host_ptr;
} Aba_Configure_Info;
typedef struct {
#ifndef CRAY
int pad1;
#endif
int submit_index; /* Index just within Aba list */
#ifndef CRAY
int pad2;
#endif
int specific_index; /* see description below */
#ifndef CRAY
int pad3;
#endif
int restart;
#ifndef CRAY
int pad4;
#endif
int bogus;
char aba_input_deck[256]; /* full path and filename */
char restart_file[256];
} Aba_Submit_Info;

/* The specific_index variable is only used when the queuing type is */
/* not P3_QUEUE (i.e. it is LSF). If it is -1 then that means the */
/* task can be submitted to any host in the defined queue. If the */
/* specific_index has a value other than -1, then this is an index into */
/* the host list (host list for the application, not global index) */
/* of the specific host the task should be submitted to. */
/************************************************************************/
/*                                                                      */
/* MSC.Marc specific configuration structures.                          */
/*                                                                      */
/************************************************************************/
#define MAR_NONE 0
#define MAR_RESTART 1

typedef struct {
#ifndef CRAY
int pad1;
#endif
int host_index; /* Global Host Index. */
#ifndef CRAY
int pad2;
#endif
int num_cpus; /* Number cpus on machine. */
#ifndef CRAY
int pad3;
#endif
int bogus;
char host_name[128]; /* Real Host Name (host_name) */
} Mar_Config_Host;
typedef struct {
#ifndef CRAY
int pad1;
#endif
int application_type; /* Should be set to MSC_MARC */
#ifndef CRAY
int pad2;
#endif
int default_index; /* Index just within Mar List */
#ifndef CRAY
int pad3;
#endif
int disk_units; /* One of the defines above */
#ifndef CRAY
int pad4;
#endif
int space_req; /* stored as KB. */
#ifndef CRAY
int pad5;
#endif
int mem_units; /* One of the defines above */
#ifndef CRAY
int pad6;
#endif
float mem_req; /* stored as whatever */
#ifndef CRAY
int pad7;
#endif
int num_hosts;
#ifndef CRAY
int pad8;
#endif
int translate_input;
char default_host[128]; /* uihost_name is saved here */
char default_queue[128]; /* queue_name1 is saved here */
char cmd_line[256]; /* command line to run with */
char mon_file[256]; /* log file to monitor */
char mem_req_str[64];
Mar_Config_Host *host_ptr;
} Mar_Configure_Info;
typedef struct {
#ifndef CRAY
    int pad1;
#endif
    int submit_index;        /* Index just within Mar list */
#ifndef CRAY
    int pad2;
#endif
    int rid;                 /* Flag: restart file (-rid filename) */
#ifndef CRAY
    int pad3;
#endif
    int pid;                 /* Flag: post_name (-pid filename) */
#ifndef CRAY
    int pad4;
#endif
    int prog;                /* Flag: program_name (-prog progname) */
#ifndef CRAY
    int pad5;
#endif
    int user;                /* Flag: user_subroutine_name (-user subname) */
#ifndef CRAY
    int pad6;
#endif
    int save;                /* Flag: save executable (0/1) (-save yes/no) */
#ifndef CRAY
    int pad7;
#endif
    int vf;                  /* Flag: viewfactor file (-vf vfname) */
#ifndef CRAY
    int pad8;
#endif
    int nprocd;              /* Number processes or domains (-nprocd #) */
#ifndef CRAY
    int pad9;
#endif
    int host;                /* Flag: hostfile (-host hostfilename) */
#ifndef CRAY
    int pad10;
#endif
    int iam;                 /* Flag: iam flag for licensing (-iam iamtag) */
#ifndef CRAY
    int pad11;
#endif
    int specific_index;      /* see description below */

    /* All files should have full path and filename */
    char datfile_name[256];             /* input deck */
    char restart_name[256];             /* restart file */
    char post_name[256];                /* post file */
    char program_name[256];             /* program file */
    char user_subroutine_name[256];     /* user subroutine file */
    char viewfactor[256];               /* viewfactor file */
    char hostfile[256];                 /* hostfile */
    char iamval[256];                   /* iam licensing tag - no file involved */
} Mar_Submit_Info;
/* The specific_index variable is only used when the queuing type is */
/* not P3_QUEUE (i.e. it is LSF). If it is -1, the task can be       */
/* submitted to any host in the defined queue. If specific_index has */
/* a value other than -1, it is an index into the host list (the     */
/* host list for the application, not the global index) of the       */
/* specific host the task should be submitted to.                    */
/************************************************************************/
/*                                                                      */
/* GENERAL specific configuration structures.                           */
/*                                                                      */
/************************************************************************/
typedef struct {
#ifndef CRAY
    int pad1;
#endif
    int host_index;          /* Global Host Index. */
#ifndef CRAY
    int pad2;
#endif
    int bogus;
    char host_name[128];     /* Real Host Name (host_name) */
} Gen_Config_Host;
typedef struct {
#ifndef CRAY
    int pad1;
#endif
    int application_type;    /* Should be set to GEN - RANGE */
#ifndef CRAY
    int pad2;
#endif
    int default_index;       /* Index just within Gen List */
#ifndef CRAY
    int pad3;
#endif
    int disk_units;          /* One of the defines above */
#ifndef CRAY
    int pad4;
#endif
    int space_req;           /* stored as KB. */
#ifndef CRAY
    int pad5;
#endif
    int mem_units;           /* One of the defines above */
#ifndef CRAY
    int pad6;
#endif
    float mem_req;           /* stored as whatever */
#ifndef CRAY
    int pad7;
#endif
    int num_hosts;
#ifndef CRAY
    int pad8;
#endif
    int translate_input;
    char default_host[128];  /* uihost_name is saved here */
    char default_queue[128]; /* queue_name1 is saved here */
    char cmd_line[256];      /* command line to run with */
    char mon_file[256];      /* log file to monitor */
    char mem_req_str[64];
    Gen_Config_Host *host_ptr;
} Gen_Configure_Info;

typedef struct {
#ifndef CRAY
    int pad1;
#endif
    int submit_index;        /* Index just within Gen list */
#ifndef CRAY
    int pad2;
#endif
    int specific_index;      /* see description below */
    char gen_input_deck[256]; /* full path and filename */
} Gen_Submit_Info;

/* The specific_index variable is only used when the queuing type is */
/* not MSC_QUEUE (i.e. it is LSF). If it is -1, the task can be      */
/* submitted to any host in the defined queue. If specific_index has */
/* a value other than -1, it is an index into the host list (the     */
/* host list for the application, not the global index) of the       */
/* specific host the task should be submitted to.                    */


/* ---------------------------- */
/*
** api globals ...
*/
#ifdef AM_INITIALIZE
AM_EXTERN int gbl_nwrk_timeout_secs = BLOCK_TIMEOUT;
AM_EXTERN int api_use_this_host = 0;
#else
AM_EXTERN int gbl_nwrk_timeout_secs;
AM_EXTERN int api_use_this_host;
#endif
AM_EXTERN CONFIG *cfg;
AM_EXTERN ORG *org;
AM_EXTERN int num_orgs;
AM_EXTERN Universal_Config_Info ui_config;
AM_EXTERN Nas_Configure_Info nas_config;
AM_EXTERN Nas_Submit_Info nas_submit;
AM_EXTERN Aba_Configure_Info aba_config;
AM_EXTERN Aba_Submit_Info aba_submit;
AM_EXTERN Mar_Configure_Info mar_config;
AM_EXTERN Mar_Submit_Info mar_submit;
AM_EXTERN Gen_Configure_Info gen_config[MAX_GEN_APPS];
AM_EXTERN Gen_Submit_Info gen_submit[MAX_GEN_APPS];
AM_EXTERN char api_this_host[256];
AM_EXTERN char api_user_name[256];
AM_EXTERN char api_application_name[64];
AM_EXTERN int api_application_index;
/*
* api functions ...
*/
/*
* init - MUST BE FIRST api_* call made by application ...
*/
extern int api_init(char *out_str);
/*
* just to set the global timeout for communication ...
*/
extern int api_get_gbl_timeout();
extern int api_set_gbl_timeout(int secs);
/*
* reads an org.cfg file if possible and builds the ORG struct for list of QueMgrs ...
*/
extern ORG *api_read_orgs(char *dir,int *num_orgs,int *status);
/*
* contacts running QueMgr and builds cfg struct ...
*/
extern CONFIG *api_get_config(char *qmgr_host,int qmgr_port,int *status,char *out_str);


/*
* reads *.cfg files and builds cfg struct (No QueMgr process involved) ...
*/
extern CONFIG *api_read_config(CONFIG *cfg,char *path,char *orgname,int *status,char *out_str);
/*
* reads *.cfg files (without building path) and builds cfg struct (No QueMgr process involved) ...
*/
extern CONFIG *api_read_config_fullpath(CONFIG *cfg,char *path,int *status,char *out_str);
/*
* writes *.cfg files from cfg struct (No QueMgr process involved) ...
*/
extern void api_write_config(CONFIG *cfg,char *path,char *orgname,int *status,char *out_str);
/*
* tries to contact running QueMgr and check if timestamp is ok ...
* returns 0 if all ok ...
*/
extern int api_ping_quemgr(char *qmgr_host,int qmgr_port,unsigned int timestamp,char *out_str);
/*
* initializes UI config structs (nas, aba, gen[] submit and config) ...
*/
extern void api_init_uiconfig(CONFIG *cfg);
/*
* gets logged in user name
*/
extern char *api_getlogin(void);
/*
* checks on job data deck and returns possible question for UI to ask, setting answer for
* submit call below ...
*/
extern int api_check_job(char *ques_text,char *ans1_text,char *ans2_text,char *out_str);
/*
* submits job (needs filled in UI config and submit structs as well as global cfg struct) ...
*/
extern int api_submit_job(char *qmgr_host,int qmgr_port,char *jobname,int background,
                          int *job_number,char *base_path,int *jmgr_port,int answer,char *out_str);
/*
* gets list of all running jobs from QueMgr ...
*/
extern Job_List *api_get_runningjob_list(char *qmgr_host,int qmgr_port,int *job_count,char *out_str);
/*
* gets initial socket for later on api_mon_job_* calls ...
*/


extern int api_mon_job_init(char *job_host,int job_port,int *msg_port,char *out_str);


/*
* gets all messages of sev level and lower from JobMgr ...
*/
extern Msg_List *api_mon_job_msgs(int msg_sock,char *ui_host,int msg_port,int sev_level,
int *num_msgs,char *out_str);
/*
* gets current job statistics and run status ...
*/
extern JOB_FS_LIST *api_mon_job_stats(int msg_sock,char *ui_host,int msg_port,
int *cpu,int *pct_cpu,
int *mem, int *pct_mem,
int *dsk,int *pct_dsk,
int *elapsed,int *status,
int *num_fs,int *retcod,
char *out_str);
/*
* gets last 100 lines of job mon file ...
*/
extern char *api_mon_job_mon(int msg_sock,char *ui_host,int msg_port,char *out_str);
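Both api_mon_job_mon and api_com_job_mon return only a trailing window of the monitor file. The idea can be sketched with a hypothetical standalone helper (not part of the API) that locates the start of the last n lines in a buffer:

```c
#include <string.h>

/* Hypothetical sketch of "last n lines" trimming, in the spirit of
   api_mon_job_mon()'s 100-line tail: return a pointer into buf at the
   start of the last n lines. */
static const char *tail_lines(const char *buf, int n)
{
    const char *p = buf + strlen(buf);
    int newlines = 0;

    while (p > buf) {
        if (p[-1] == '\n' && ++newlines > n)
            return p;   /* just past the (n+1)-th newline from the end */
        p--;
    }
    return buf;         /* fewer than n lines: whole buffer */
}
```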
/*
* returns list of files active while job is running ...
*/
extern FILE_LIST *api_mon_job_running_files_list(int msg_sock,char *ui_host,int msg_port,
int *num_files,char *out_str);
/*
** returns general info about a job from a mon_file ...
*/
extern Job_List *api_com_job_gen(char *sub_host,char *mon_file,char *out_str);
/*
* gets job statistics and run status from mon file
*/
extern JOB_FS_LIST *api_com_job_stats(char *sub_host,char *mon_file,
int *cpu,int *pct_cpu_avg,int *pct_cpu_max,
int *mem,int *pct_mem_avg,int *pct_mem_max,
int *dsk,int *pct_dsk_avg,int *pct_dsk_max,
int *elapsed,int *status,
int *num_fs,int *retcod,
char *out_str);
/*
* gets last 100 lines of job mon file ...
*/
extern char *api_com_job_mon(char *sub_host,char *mon_file,char *out_str);
/*
* returns list of files from mon file ...

*/
extern FILE_LIST *api_com_job_received_files_list(char *sub_host,char *mon_file,int *num_files,char *out_str);
/*
* starts file download ...
*/
extern int api_download_file_start(int msg_sock,int job_number,char *filename,char *out_str);
/*
* checks on download file status ...
*/
extern int api_download_file_check(int job_number,char *filename,int *filesizekb);
/*
* returns all jobs for all hosts and apps from a running QueMgr ...
*/
extern Que_List *api_mon_que_full(char *qmgr_host,int qmgr_port,int *num_tsks,char *out_str);
/*
* gets last 4k bytes of QueMgr log file ...
*/
extern char *api_mon_que_log(char *qmgr_host,int qmgr_port,char *out_str);
/*
* gets all hosts statistics ...
*/
extern Cpu_List *api_mon_que_cpu(char *qmgr_host,int qmgr_port,char *out_str);
/*
* gets list of last 25 or so completed jobs from QueMgr ...
*/
extern Job_List *api_get_completedjob_list(char *qmgr_host,int qmgr_port,int *job_count,char *out_str);
/*
* abort job ...
*/
extern int api_abort_job(char *qmgr_host,int qmgr_port,int job_number,char *job_user,char *out_str);
/*
 * reads rc file and overrides all UI settings found ...
 */
extern int api_rcfile_read(char *rcfile,char *out_str);
/*
 * writes rc file from UI settings ...
 */
extern int api_rcfile_write(char *rcfile,char *out_str);
extern int api_rcfile_write2(FILE *stream,int short_or_long);
/*
 * prints UI settings in rc format to screen (0 is short, != 0 is full display) ...
 */
extern void api_rcfile_print(int fullprint);
/*
* calls admin test procedure(s) and returns status and msgs ...
*/
extern char *api_admin_test(char *orgpth,char *orgnam,int rport,int *status,char *out_str);
/*
* just to get home dir ...
*/
extern void api_get_home_dir(char *home_dir);
/*
* to reconfig quemgr ...
*/
extern char *api_reconfig_quemgr(char *qmgr_host,int quemgr_port,int *status,char *out_str);
/*
* license checkout and return ...
*/
extern int api_checkout_license(char *license_file);
extern void api_release_license(void);
#ifdef __cplusplus
}
#endif
#endif /* _AMAPI */

Example Interface
This is the actual source file of the TxtMgr, which uses the Analysis Manager API and the previously
shown api.h include file.
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#ifndef WINNT
# include <unistd.h>
# include <sys/time.h>
# include <sys/uio.h>
# include <sys/socket.h>
# include <netinet/in.h>
# include <netdb.h>
#else
# include <winsock.h>
#endif
#include <time.h>
#define AM_INITIALIZE 1
#include "api.h"
int dont_connect = 0;
int has_qmgr_host;
int has_qmgr_port;
int has_org;
int has_orgpath;
char lic_file[256];
char org_name[256];
char binpath[256];
char orgpath[256];
char qmgr_host[256];
int qmgr_port;
int rmgr_port;
int msg_sock = -1;
int msg_port = -1;
int msg_sock_job = -1;
int auto_startup;
char sys_rcf_file[256];
char usr_rcf_file[256];
int has_cmd_rcf;
char cmd_rcf_file[256];
/* ==================== */
#include <stdio.h>

#define SUBMIT        1
#define ABORT         2
#define WATCHJOB      3
#define WATCHQUE_LOG  4
#define WATCHQUE_FULL 5
#define WATCHQUE_CPU  6
#define LISTCOMP      7
#define RCFILEWRITE   8
#define ADMINTEST     9
#define RECONFIG      10
#define QUIT          11   /* must be highest defined number type */
#define NOTVALID      9999
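Since QUIT is documented as the highest defined command code, menu input can be classified with a simple range check. This is a hypothetical helper, not in the original source; the command-code defines are reproduced so the sketch stands alone:

```c
/* Same command codes as above, reproduced so this sketch stands alone. */
#define SUBMIT   1
#define QUIT     11    /* highest defined number type */
#define NOTVALID 9999

/* Hypothetical validator: any entry outside SUBMIT..QUIT is NOTVALID. */
static int menu_code(int n)
{
    return (n >= SUBMIT && n <= QUIT) ? n : NOTVALID;
}
```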
/* ==================== */
#ifdef WINNT
BOOL console_event_func(DWORD dwEvent)
{
    if(dwEvent == CTRL_LOGOFF_EVENT)
        return TRUE;
#ifdef DEBUG
    fprintf(stderr,"\nbye ...");
#endif
    fprintf(stderr,"\n");
    api_release_license();
#ifdef WINNT
    WSACleanup();
#endif
    return FALSE;
}
#endif
/* ==================== */
void leafname(char *input_string, char *output_string)
{
    int  string_length;
    int  i;
    char temp_string[256];
    int  found;

    /*********************************************************************/
    /* First get rid of the leading path (if any).                       */
    /*********************************************************************/
    string_length = strlen(input_string);
    if(string_length < 1){
        output_string[0] = '\0';
        return;
    }
    found = 0;
    for(i = string_length - 1; i >= 0; i--){
        if( (input_string[i] == '/') || (input_string[i] == '\\') ){
            found = 1;
            strcpy(temp_string, &input_string[i + 1]);
            break;
        }
    }
    if(found == 0)
        strcpy(temp_string, input_string);

    /*********************************************************************/
    /* Now get rid of the extension (if any).                            */
    /*********************************************************************/
    string_length = strlen(temp_string);
    if(string_length < 1){
        output_string[0] = '\0';
        return;
    }
    for(i = string_length - 1; i >= 0; i--){
        if( (temp_string[i] == '.') && (i != 0) ){
            temp_string[i] = '\0';
            strcpy(output_string, temp_string);
            return;
        }
    }
    strcpy(output_string, temp_string);
    return;
}
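The same leaf-name logic can be expressed more compactly with strrchr. This standalone sketch (illustrative only, not part of TxtMgr; the function name is made up) mirrors leafname()'s behavior, including keeping a dot that is the first character of the leaf:

```c
#include <string.h>

/* Compact sketch of leafname(): strip any leading path (either '/' or
   '\\' separators), then strip the extension unless the dot is the
   first character of the leaf. */
static void leaf_sketch(const char *in, char *out)
{
    const char *s1 = strrchr(in, '/');
    const char *s2 = strrchr(in, '\\');
    const char *leaf = in;
    char *dot;

    if (s1 != NULL && (s2 == NULL || s1 > s2))
        leaf = s1 + 1;
    else if (s2 != NULL)
        leaf = s2 + 1;
    strcpy(out, leaf);

    dot = strrchr(out, '.');
    if (dot != NULL && dot != out)
        *dot = '\0';
}
```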

/* ==================== */
int submit_job(void)
{
    int background;
    int i;
    int lenc;
    int submit_index;
    int job_number;
    int jmgr_port;
    char job_name[256];
    int mem;
    char job_fullname[256];
    int srtn;
    char out_str[2048];
    int ans;
    char ques_text[512];
    char ans1_text[32];
    char ans2_text[32];

    background = 0;
    /*
    ** if not auto_startup, ask for details ...
    */
    if(auto_startup == 0){
        background = 1;
        /*
        ** ask jobname ...
        */
        printf("\nEnter job name: ");
        scanf("%s",job_name);
        /*
        ** ask memory ...
        */
        printf("\nEnter memory (in set units): ");
        scanf("%d",&mem);
        /*
        ** print list of hosts from QueMgr ...
        ** and ask for which to submit to ...
        */
        if(cfg->QUE_TYPE == MSC_QUEUE){
            printf("\nhosts:\n");
            printf("index name\n");
            printf("------------\n");
            for(i=0;i<cfg->hsts[api_application_index-1].num_hosts;i++){
                printf("%-5d %s\n",i+1,cfg->hsts[api_application_index-1].hosts[i].pseudohost_name);
            }
            printf("\nEnter host index: ");
            scanf("%d",&submit_index);
            submit_index--;
            printf("\n");
            if( (submit_index < 0) || (submit_index >= cfg->hsts[api_application_index-1].num_hosts) ){
                printf("Error, invalid index\n");
                return 1;
            }
        }else{
            printf("\nqueues:\n");
            printf("index name\n");
            printf("------------\n");
            for(i=0;i<cfg->ques[api_application_index-1].num_queues;i++){
                printf("%-5d %s -> %s\n",i+1,cfg->ques[api_application_index-1].queues[i].queue_name1,
                       cfg->ques[api_application_index-1].queues[i].queue_name2);
            }
            printf("\nEnter queue index: ");
            scanf("%d",&submit_index);
            submit_index--;
            printf("\n");
            if( (submit_index < 0) || (submit_index >= cfg->ques[api_application_index-1].num_queues) ){
                printf("Error, invalid index\n");
                return 1;
            }
        }
        /*
        ** set up config/submit struct info ...
        */
        strcpy(job_fullname,job_name);
        lenc = (int)strlen(job_fullname);
        for(i=0;i<lenc;i++){
            if(job_fullname[i] == '\\')
                job_fullname[i] = '/';
        }
        leafname(job_fullname,job_name);
        if(api_application_index == MSC_NASTRAN){
            sprintf(nas_submit.nas_input_deck,"%s",job_fullname);
            nas_config.mem_req = (float)mem;
            nas_submit.submit_index = submit_index;
        }else if(api_application_index == HKS_ABAQUS){
            sprintf(aba_submit.aba_input_deck,"%s",job_fullname);
            aba_config.mem_req = (float)mem;
            aba_submit.submit_index = submit_index;
        }else if(api_application_index == MSC_MARC){
            sprintf(mar_submit.datfile_name,"%s",job_fullname);
            mar_config.mem_req = (float)mem;
            mar_submit.submit_index = submit_index;
        }else{
            sprintf(gen_submit[api_application_index-GENERAL].gen_input_deck,"%s",job_fullname);
            gen_config[api_application_index-GENERAL].mem_req = (float)mem;
            gen_submit[api_application_index-GENERAL].submit_index = submit_index;
        }
    }else{
        /*
        ** leave all config and submit struct settings alone, as
        ** the rcf/override ASSUME to have it all correct ...
        ** (just get job_name for use below ...)
        */
        if(api_application_index == MSC_NASTRAN){
            leafname(nas_submit.nas_input_deck,job_name);
        }else if(api_application_index == HKS_ABAQUS){
            leafname(aba_submit.aba_input_deck,job_name);
        }else if(api_application_index == MSC_MARC){
            leafname(mar_submit.datfile_name,job_name);
        }else{
            leafname(gen_submit[api_application_index-GENERAL].gen_input_deck,job_name);
        }
    }

    ans = NO;
    srtn = api_check_job(ques_text,ans1_text,ans2_text,out_str);
    if(srtn < 0){
        printf("%s",out_str);
        return srtn;
    }
    if(srtn > 0){
redo:
        printf("%s\n",ques_text);
        printf("\nAnswer:\n");
        printf("-------\n");
        printf("0 - %s\n",ans1_text);
        printf("1 - %s\n",ans2_text);
        printf("\nanswer: ");
        scanf("%d",&ans);
        printf("\n");
        if( (ans != NO) && (ans != YES) ){
            printf("Error, invalid answer\n\n");
            goto redo;
        }
    }
    srtn = api_submit_job(qmgr_host,qmgr_port,job_name,background,&job_number,binpath,
                          &jmgr_port,ans,out_str);
    if(out_str[0] != '\0'){
        printf("%s",out_str);
    }
    if( (srtn == 0) && (background == 1) ){
        /*
        ** right away get monitor socket ...
        */
        msg_sock = api_mon_job_init(api_this_host,jmgr_port,&msg_port,out_str);
        if(msg_sock < 0){
            msg_port = -1;
            msg_sock_job = -1;
            printf("%s",out_str);
        }else{
            msg_sock_job = job_number;
        }
    }

    return srtn;
}

/* ==================== */
int abort_job(void)
{
    int srtn;
    Job_List *jr_ptr = NULL;
    int num_running_jobs;
    int job_number;
    char j_numstr[100];
    int found;
    char job_user[256];
    char job_name[256];
    char proj_name[256];
    int i;
    char out_str[2048];

    jr_ptr = api_get_runningjob_list(qmgr_host,qmgr_port,&num_running_jobs,out_str);
    if(num_running_jobs == 0){
        printf("\nNo active jobs found\n");
        return 0;
    }
    if( (num_running_jobs < 0) || (jr_ptr == NULL) ){
        printf("%s",out_str);
        if(jr_ptr != NULL)
            free(jr_ptr);
        return 1;
    }
    job_number = -1;
    if(auto_startup == 0){
        /*
        ** present list to user ...
        */
        printf("\nRunning jobs ....\n\n");
        printf("num  jobname              jobuser              project              amhost               runhost              subtime\n");
        printf("----------------------------------------------------------------------------------------------------------\n");
        for(i=0;i<num_running_jobs;i++){
            printf("%-4d %-20s %-20s %-20s %-20s %-20s %-20s\n",jr_ptr[i].job_number,
                   jr_ptr[i].job_name,
                   jr_ptr[i].job_user,
                   jr_ptr[i].job_proj,
                   jr_ptr[i].am_host_name,
                   jr_ptr[i].job_run_host,
                   jr_ptr[i].sub_time_str);
        }
        for(i=0;i<100;i++)
            j_numstr[i] = '\0';
        printf("\nEnter job number: ");
        scanf("%s",j_numstr);
        if( (j_numstr[0] == 'q') || (j_numstr[0] == 'Q') || (j_numstr[0] == '0') ){
            free(jr_ptr);
            return 0;
        }
        sscanf(j_numstr,"%d",&job_number);
        found = 0;
        for(i=0;i<num_running_jobs;i++){
            if(job_number == jr_ptr[i].job_number){
                found = 1;
                break;
            }
        }
        if(!found){
            printf("Error, job number %d not in list\n",job_number);
            free(jr_ptr);
            return 1;
        }
        printf("\n");
    }else{
        if(api_application_index == MSC_NASTRAN){
            leafname(nas_submit.nas_input_deck,job_name);
        }else if(api_application_index == HKS_ABAQUS){
            leafname(aba_submit.aba_input_deck,job_name);
        }else if(api_application_index == MSC_MARC){
            leafname(mar_submit.datfile_name,job_name);
        }else{
            leafname(gen_submit[api_application_index-GENERAL].gen_input_deck,job_name);
        }
        strcpy(proj_name,ui_config.project_name);
        /*
        ** search list for match and set job_number ...
        */
        job_number = -1;
        for(i=0;i<num_running_jobs;i++){
            if(strcmp(jr_ptr[i].job_name,job_name) == 0){
                if(strcmp(jr_ptr[i].job_proj,proj_name) == 0){
                    job_number = jr_ptr[i].job_number;
                    break;
                }
            }
        }
    }
    strcpy(job_user,api_user_name);
    srtn = api_abort_job(qmgr_host,qmgr_port,job_number,job_user,out_str);
    if(out_str[0] != '\0'){
        printf("%s",out_str);
    }
    free(jr_ptr);
    return srtn;
}

/* ==================== */
int watch_job(void)
{
    Job_List *jr_ptr = NULL;
    int num_running_jobs;
    int check;
    char job_host[128];
    int job_port = 0;
    char j_numstr[100];
    int found;
    int srtn;
    char *log_str;
    char job_name[256];
    char proj_name[256];
    char sfile[256];
    int i;
    int job_number;
    int sev_level;
    char out_str[2048];
    int num_msgs;
    Msg_List *msg_ptr = NULL;
    int cpu, pct_cpu;
    int mem, pct_mem;
    int dsk, pct_dsk;
    int elapsed;
    int status;
    FILE_LIST *file_list = NULL;
    int num_files = 0;
    int file_index;
    int sizekb;
    int num_fs;
    JOB_FS_LIST *job_fs_list;
    extern void get_leaf_and_extention(char *,char *);

    jr_ptr = api_get_runningjob_list(qmgr_host,qmgr_port,&num_running_jobs,out_str);
    if(num_running_jobs == 0){
        printf("\nNo active jobs found\n");
        return 0;
    }
    if( (num_running_jobs < 0) || (jr_ptr == NULL) ){
        printf("%s",out_str);
        if(jr_ptr != NULL)
            free(jr_ptr);
        return 1;
    }
    job_number = -1;
    if(auto_startup == 0){
        /*
        ** present list to user ...
        */
        printf("\nRunning jobs ....\n\n");
        printf("num  jobname              jobuser              project              amhost               runhost              subtime\n");
        printf("----------------------------------------------------------------------------------------------------------\n");
        for(i=0;i<num_running_jobs;i++){
            printf("%-4d %-20s %-20s %-20s %-20s %-20s %-20s\n",jr_ptr[i].job_number,
                   jr_ptr[i].job_name,
                   jr_ptr[i].job_user,
                   jr_ptr[i].job_proj,
                   jr_ptr[i].am_host_name,
                   jr_ptr[i].job_run_host,
                   jr_ptr[i].sub_time_str);
        }
        for(i=0;i<100;i++)
            j_numstr[i] = '\0';
        printf("\nEnter job number: ");
        scanf("%s",j_numstr);
        if( (j_numstr[0] == 'q') || (j_numstr[0] == 'Q') || (j_numstr[0] == '0') ){
            free(jr_ptr);
            return 0;
        }
        sscanf(j_numstr,"%d",&job_number);
        found = 0;
        for(i=0;i<num_running_jobs;i++){
            if(job_number == jr_ptr[i].job_number){
                job_port = jr_ptr[i].port_number;
                strcpy(job_host,jr_ptr[i].job_submit_host);
                found = 1;
                break;
            }
        }
        if(!found){
            printf("Error, job number %d not in list\n",job_number);
            free(jr_ptr);
            return 1;
        }
    }else{
        if(api_application_index == MSC_NASTRAN){
            leafname(nas_submit.nas_input_deck,job_name);
        }else if(api_application_index == HKS_ABAQUS){
            leafname(aba_submit.aba_input_deck,job_name);
        }else if(api_application_index == MSC_MARC){
            leafname(mar_submit.datfile_name,job_name);
        }else{
            leafname(gen_submit[api_application_index-GENERAL].gen_input_deck,job_name);
        }
        strcpy(proj_name,ui_config.project_name);
        /*
        ** search list for match and set job_number ...
        */
        job_number = -1;
        for(i=0;i<num_running_jobs;i++){
            if(strcmp(jr_ptr[i].job_name,job_name) == 0){
                if(strcmp(jr_ptr[i].job_proj,proj_name) == 0){
                    job_port = jr_ptr[i].port_number;
                    strcpy(job_host,jr_ptr[i].job_submit_host);
                    break;
                }
            }
        }
        if(job_number < 0){
            printf("Error, job name %s not in list\n",job_name);
            free(jr_ptr);
            return 1;
        }
    }
    free(jr_ptr);
#ifdef DEBUG
    fprintf(stderr,"posa\n");
#endif
    /*
    ** get msg socket if needed ...
    */
    if( (msg_sock < 0) || (msg_sock_job != job_number) ){
#ifdef DEBUG
        fprintf(stderr,"posa1\n");
#endif
        msg_sock = api_mon_job_init(job_host,job_port,&msg_port,out_str);
        if(msg_sock < 0){
            msg_port = -1;
            msg_sock_job = -1;
            printf("%s",out_str);
            return 1;
        }else{
            msg_sock_job = job_number;
        }
    }
#ifdef DEBUG
    fprintf(stderr,"posb\n");
#endif
    /*
    ** get severity if not auto ...
    */
    sev_level = 3;
    if(auto_startup == 0){
#ifdef MSGPOP
        if(api_application_index == MSC_NASTRAN){
            printf("Enter message severity level >=: ");
            scanf("%d",&sev_level);
            printf("\n");
        }
        if(sev_level < 0) sev_level = 0;
        if(sev_level > 3) sev_level = 3;
#endif
    }
#ifdef DEBUG
    fprintf(stderr,"posc\n");
#endif
    /*
    ** get monitor info ...
    */
    msg_ptr = api_mon_job_msgs(msg_sock,api_this_host,msg_port,sev_level,&num_msgs,out_str);
    if(num_msgs < 0){
        printf("%s",out_str);
        return 2;
    }
#ifdef DEBUG
    fprintf(stderr,"posd\n");
#endif
    if(msg_ptr == NULL){
        printf("%s",out_str);
        return 3;
    }else if(num_msgs == 0){
        printf("\nNo messages at this time ...\n\n");
    }else{
        /*
        ** mgs format is severity@sevbuf@msgtxt ... sevbuf is string NULL when severity=0
        */
        for(i=0;i<num_msgs-1;i++){
            printf(" %s\n",msg_ptr[i].msg);
        }
        free(msg_ptr);
    }
#ifdef DEBUG
    fprintf(stderr,"pose\n");
#endif
    job_fs_list =
        api_mon_job_stats(msg_sock,api_this_host,msg_port,&cpu,&pct_cpu,&mem,&pct_mem,
                          &dsk,&pct_dsk,&elapsed,&status,&num_fs,&srtn,out_str);
    if(srtn != 0){
        printf("%s",out_str);
    }else{
        printf("job stats:\n");
        if(status == JOB_SUBMITTED){
            printf("cpu=%d, %%cpu=%d, mem=%d, %%mem=%d, disk=%d, %%disk=%d, elapsed=%d, status=%s\n",
                   cpu,pct_cpu,mem,pct_mem,dsk,pct_dsk,elapsed,"submitted");
        }else if(status == JOB_QUEUED){
            printf("cpu=%d, %%cpu=%d, mem=%d, %%mem=%d, disk=%d, %%disk=%d, elapsed=%d, status=%s\n",
                   cpu,pct_cpu,mem,pct_mem,dsk,pct_dsk,elapsed,"queued");
        }else if(status == JOB_RUNNING){
            printf("cpu=%d, %%cpu=%d, mem=%d, %%mem=%d, disk=%d, %%disk=%d, elapsed=%d, status=%s\n",
                   cpu,pct_cpu,mem,pct_mem,dsk,pct_dsk,elapsed,"running");
        }else{
            printf("cpu=%d, %%cpu=%d, mem=%d, %%mem=%d, disk=%d, %%disk=%d, elapsed=%d, status=%s\n",
                   cpu,pct_cpu,mem,pct_mem,dsk,pct_dsk,elapsed,"unknown");
        }
        /*
        printf("total num filesys = %d\n",num_fs);
        for(i=0;i<num_fs;i++){
            fprintf(stdout," %s max=%d usage=%d\n",job_fs_list[i].file_sys_name,
                    job_fs_list[i].disk_max_size_mb,
                    job_fs_list[i].disk_used_pct);
        }
        */
        printf("\n");
    }
    if( (num_fs > 0) && (job_fs_list != NULL) ){
        free(job_fs_list);
    }
#ifdef DEBUG
    fprintf(stderr,"posf\n");
#endif
    log_str = api_mon_job_mon(msg_sock,api_this_host,msg_port,out_str);
    if(log_str == NULL){
        printf("%s",out_str);
    }else{
        printf("mon file contents:\n");
        printf("%s",log_str);
        free(log_str);
    }
    file_list =
        api_mon_job_running_files_list(msg_sock,api_this_host,msg_port,&num_files,out_str);
#ifdef DEBUG
    printf("api_mon_job_running_files_list: num_files = %d\n",num_files);
#endif
    if(num_files < 0){
        printf("%s",out_str);
        return 4;
    }
    if(num_files == 0)
        return 0;
    for(i=0;i<num_files;i++){
        if(i == 0){
            printf("\ndownloadable files: (use q to quit)\n");
            printf("index     job file                       size (kb)\n");
            printf("--------------------------------------------------\n");
        }
        get_leaf_and_extention(file_list[i].filename,sfile);
        printf("%-10d%-30s %d\n",i+1,sfile,file_list[i].sizekb);
    }
    for(i=0;i<100;i++)
        j_numstr[i] = '\0';
    printf("\nEnter file index to download: ");
    scanf("%s",j_numstr);
    if( (j_numstr[0] == 'q') || (j_numstr[0] == 'Q') || (j_numstr[0] == '0') ){
        free(file_list);
        return 0;
    }
    sscanf(j_numstr,"%d",&file_index);
    if(file_index == 0){
        free(file_list);
        return 0;
    }
    check = 0;
    if(file_index < 0){
        check = 1;
        file_index *= -1;
    }
    if(file_index > num_files){
        printf("invalid index\n");
        free(file_list);
        return 5;
    }
    if(check){
        srtn = api_download_file_check(job_number,file_list[file_index-1].filename,&sizekb);
#ifdef DEBUG
        printf("check returns %d\n",srtn);
#endif
        if(srtn == FILE_STILL_DOWNLOADING){
            printf("File %s is still being transfered\n",file_list[file_index-1].filename);
        }else if(srtn == FILE_DOWNLOAD_COMPLETE){
            printf("File %s transfer complete !\n",file_list[file_index-1].filename);
        }
    }else{
        srtn = api_download_file_start(msg_sock,job_number,file_list[file_index-1].filename,out_str);
        if(srtn != 0){
            printf("File download (%s) start failed, error = %d (%s)",
                   file_list[file_index-1].filename,srtn,out_str);
        }
    }
    free(file_list);
    return 0;
}

/* ==================== */
int watch_que(int which)
{
    int i;
    int num_tasks;
    Que_List *ql_ptr = NULL;
    char out_str[2048];
    char *log_str = NULL;
    Cpu_List *cpu_ptr = NULL;

    if(which == WATCHQUE_LOG){
        log_str = api_mon_que_log(qmgr_host,qmgr_port,out_str);
        if(log_str == NULL){
            printf("%s",out_str);
            return 1;
        }
        printf("\n");
        printf("%s",log_str);
        free(log_str);
        return 0;
    }else if(which == WATCHQUE_FULL){
        ql_ptr = api_mon_que_full(qmgr_host,qmgr_port,&num_tasks,out_str);
        if( (num_tasks < 0) || (ql_ptr == NULL) ){
            if(ql_ptr != NULL)
                free(ql_ptr);
            printf("%s",out_str);
            return 1;
        }
        if(num_tasks == 0){
            printf("\nNo active jobs found\n");
            return 0;
        }
        printf("\nQueue stats for all hosts/apps\n");
        printf("\n%-35s%-6s%-6s%-6s %s\n","hostname","run","que","max","status");
        printf("------------------------------------------------------------\n");
        for(i=0;i<num_tasks;i++){
            printf("%-35s%-6d%-6d%-6d %s\n",
                   ql_ptr[i].host_name,ql_ptr[i].num_running,ql_ptr[i].num_waiting,
                   ql_ptr[i].maxtsk,ql_ptr[i].stat_str);
        }
        free(ql_ptr);
        return 0;
    }else if(which == WATCHQUE_CPU){
        cpu_ptr = api_mon_que_cpu(qmgr_host,qmgr_port,out_str);
        if(cpu_ptr == NULL){
            printf("%s",out_str);
            return 1;
        }
        printf("\nQueue load stats for all hosts/apps\n");
        printf("\n%-35s%-12s%-12s%-12s\n","hostname","%cpu util","avail mem","avail disk");
        printf("---------------------------------------------------------------------\n");
        for(i=0;i<cfg->total_h;i++){
            printf("%-35s%-12d%-12d%-12d\n",
                   cpu_ptr[i].host_name,cpu_ptr[i].cpu_util,cpu_ptr[i].avail_mem,cpu_ptr[i].free_disk);
        }
        free(cpu_ptr);
        return 0;
    }else{
        printf("\nError, invalid selection\n");
        return 1;
    }
    /*NOTREACHED*/
}

/* ==================== */
int list_complete(void)
{
int num_completed_jobs;
Job_List *jc_ptr = NULL;
Job_List *jc_ptr2 = NULL;
char *mon_msgs = NULL;
int num_files;
FILE_LIST *fl_list = NULL;
int srtn;
int i;
int job_number;
char j_numstr[100];
int found;
char out_str[2048];
char sfile[256];
char mon_file[256];
int cpu_secs, pct_cpu_avg, pct_cpu_max;
int mem_kbts, pct_mem_avg, pct_mem_max;
int dsk_mbts, pct_dsk_avg, pct_dsk_max;
int elapsed,status;
int num_fs;
JOB_FS_LIST *job_fs_list;
extern void get_leaf_and_extention(char *,char *);
jc_ptr = api_get_completedjob_list(qmgr_host,qmgr_port,&num_completed_jobs,out_str);
if(num_completed_jobs == 0){
printf(\nNo completed jobs found\n);
return 0;
}
if( (num_completed_jobs < 0) || (jc_ptr == NULL) ){
printf(%s,out_str);
if(jc_ptr != NULL)
free(jc_ptr);
return 1;

Chapter 9: Application Procedural Interface (API) 247


Example Interface

#include <stdio.h>

}
job_number = -1;
if(auto_startup == 0){

/*
** present list to user ...
*/
printf(\nCompleted jobs ....\n\n);
printf(num jobname
username
subtime\n);
printf(------------------------------------------------------\n);
for(i=0;i<num_completed_jobs;i++){
printf(%-4d %-20s %-20s %-20s\n,jc_ptr[i].job_number,
jc_ptr[i].job_name,jc_ptr[i].job_user,jc_ptr[i].sub_time_str);
}
printf(\nEnter job number: );
scanf(%s,j_numstr);
if( (j_numstr[0] == q) || (j_numstr[0] == Q) || (j_numstr[0] == 0) ){
free(jc_ptr);
return 0;
}
sscanf(j_numstr,%d,&job_number);
printf(\n);
}else{
printf(\nError, cant list completed jobs in batch mode.\n);
free(jc_ptr);
return 1;
}
found = -1;
for(i=0;i<num_completed_jobs;i++){
if(job_number == jc_ptr[i].job_number){
found = i;
break;
}
}
if(found < 0){
printf(Error, job number %d not in list\n,job_number);
free(jc_ptr);
return 1;
}
printf(Job name:
%s\n,jc_ptr[found].job_name);
printf(Job user:
%s\n,jc_ptr[found].job_user);
printf(Job originating host: %s\n,jc_ptr[found].job_submit_host);
printf(Job originating dir: %s\n,jc_ptr[found].work_dir);

248

Patran Analysis Manager Users Guide


Example Interface

#include <stdio.h>
printf(Job AM hostname:
%s\n,jc_ptr[found].am_host_name);
printf(Job run host:
%s\n,jc_ptr[found].job_run_host);
if(jc_ptr[found].jobstatus == JOB_SUCCESSFUL)
printf(Job complete status: success\n);
else if(jc_ptr[found].jobstatus == JOB_ABORTED)
printf(Job complete status: aborted\n);
else if(jc_ptr[found].jobstatus == JOB_FAILED)
printf(Job complete status: failed\n);
else
printf(Job complete status: unknown\n);
/* ------------ */
sprintf(mon_file,"%s/%s.mon",jc_ptr[found].work_dir,jc_ptr[found].job_name);
jc_ptr2 = api_com_job_gen(jc_ptr[found].job_submit_host,mon_file,out_str);
if(jc_ptr2 == NULL){
printf("%s",out_str);
free(jc_ptr);
return 1;
}
/* check job number ... */
if(jc_ptr[found].job_number != jc_ptr2->job_number){
printf("\nJob numbers do not match -\n");
printf("  assuming newer job with same .mon file is currently running\n");
printf("  so no additional job info is available\n");
free(jc_ptr);
free(jc_ptr2);
return 1;
}
printf("\ngeneral info:\n");
printf("num  jobname              jobuser              amhost               runhost              subtime                        status\n");
printf("-----------------------------------------------------------------------------------------------------------\n");
printf("%-4d %-20s %-20s %-20s %-20s %-30s %-6d\n",jc_ptr2->job_number,
jc_ptr2->job_name,
jc_ptr2->job_user,
jc_ptr2->am_host_name,
jc_ptr2->job_run_host,
jc_ptr2->sub_time_str,
jc_ptr2->jobstatus);
/* ------------ */
job_fs_list = api_com_job_stats(jc_ptr[found].job_submit_host,mon_file,
&cpu_secs,&pct_cpu_avg,&pct_cpu_max,
&mem_kbts,&pct_mem_avg,&pct_mem_max,
&dsk_mbts,&pct_dsk_avg,&pct_dsk_max,
&elapsed,&status,
&num_fs,&srtn,out_str);

if(srtn < 0){
printf("%s",out_str);
if( (num_fs > 0) && (job_fs_list != NULL) ){
free(job_fs_list);
}
free(jc_ptr);
free(jc_ptr2);
return 1;
}

printf("\njob stats:\n");
printf("cpu(sec)=%d, %%cpu(avg)=%d, %%cpu(max)=%d\n",cpu_secs,pct_cpu_avg,pct_cpu_max);
printf("mem(kb) =%d, %%mem(avg)=%d, %%mem(max)=%d\n",mem_kbts,pct_mem_avg,pct_mem_max);
printf("dsk(mb) =%d, %%dsk(avg)=%d, %%dsk(max)=%d\n",dsk_mbts,pct_dsk_avg,pct_dsk_max);
printf("elapsed =%d, status=%d\n",elapsed,status);
/*
printf("total num filesys = %d\n",num_fs);
for(i=0;i<num_fs;i++){
fprintf(stdout,"  %s  max=%d  usage=%d\n",job_fs_list[i].file_sys_name,
job_fs_list[i].disk_max_size_mb,
job_fs_list[i].disk_used_pct);
}
printf("\n");
*/
if( (num_fs > 0) && (job_fs_list != NULL) ){
free(job_fs_list);
}
/* ------------ */
mon_msgs = api_com_job_mon(jc_ptr[found].job_submit_host,mon_file,out_str);
if(mon_msgs == NULL){
printf("Error, unable to determine mon file msgs\n%s\n",out_str);
free(jc_ptr);
free(jc_ptr2);
return 1;
}
printf("\nmon file contents:\n%s",mon_msgs);
free(mon_msgs);
/* ------------ */
fl_list =
api_com_job_received_files_list(jc_ptr[found].job_submit_host,mon_file,&num_files,out_str);
#ifdef DEBUG
printf("api_com_job_received_files_list: num_files = %d\n",num_files);
#endif
if(num_files < 0){
printf("%s",out_str);
free(jc_ptr);
free(jc_ptr2);
return 4;
}
if(num_files > 0){
for(i=0;i<num_files;i++){
if(i == 0){
printf("\nviewable files:\n");
printf("index     job file                       size (kb)\n");
printf("--------------------------------------------------\n");
}
get_leaf_and_extention(fl_list[i].filename,sfile);
printf("%-10d%-30s %d\n",i+1,sfile,fl_list[i].sizekb);
}
free(fl_list);
}
/* ------------ */
free(jc_ptr);
free(jc_ptr2);
return 0;
}

/* ==================== */
int write_rcfile(void)
{
int srtn;
char out_str[2048];
if(has_cmd_rcf){
srtn = api_rcfile_write(cmd_rcf_file,out_str);
if(srtn != 0){
printf("%s",out_str);
return 1;
}else{
printf("\nSettings successfully written to rc file <%s>\n",cmd_rcf_file);
}
}else{
printf("\nWarning, no -rcf file specified so cannot write settings\n");
}
return 0;
}


/* ==================== */


int admin_test(void)
{
int status;
char *test_str = NULL;
char out_str[2048];
test_str = api_admin_test(orgpath,org_name,rmgr_port,&status,out_str);
if(status != 0){
printf("\nAdmin test returns %d, text = %s",status,out_str);
}
if(test_str != NULL){
printf("\n%s",test_str);
free(test_str);
}
return 0;
}

/* ==================== */
int reconfig_quemgr(void)
{
int status;
char *recfg_str = NULL;
char out_str[2048];
/*
** if user is Admin then ...
*/
if(strcmp(api_user_name,cfg->ADMIN) != 0){
printf("\nError, user <%s> is not the Admin <%s>, so cannot reconfig\n",
api_user_name,cfg->ADMIN);
return 0;
}
recfg_str = api_reconfig_quemgr(qmgr_host,qmgr_port,&status,out_str);
if(status != 0){
printf("\nReconfig returns %d, text = %s",status,out_str);
}
if(recfg_str != NULL){
printf("\n%s",recfg_str);
free(recfg_str);
}
return 0;
}

/* ==================== */
void print_menu(void)
{

printf("\n");
printf("Enter selection:\n");
printf(" 1). submit a job\n");
printf(" 2). abort a job\n");
printf(" 3). monitor a job\n");
printf(" 4). show QueMgr log file\n");
printf(" 5). show QueMgr jobs/queues\n");
printf(" 6). show QueMgr cpu/mem/disk\n");
printf(" 7). list completed jobs\n");
printf(" 8). write rcfile settings\n");
printf(" 9). admin test\n");
printf(" 10). admin reconfig QueMgr\n");
printf(" 11). quit\n");
printf("\n");
printf("choice: ");
return;
}

/* ==================== */
int get_response(void)
{
char bogus[100];
int choice;
(void)scanf("%s",bogus);
if( (bogus[0] == 'q') || (bogus[0] == 'Q'))
choice = QUIT;
else
choice = atoi(bogus);
if( (choice < SUBMIT) || (choice > QUIT) )
return NOTVALID;

return choice;
}
/* ==================== */
int doit(int choice)
{
int srtn;
if(choice == QUIT)
return -1;
#ifdef DEBUG
printf("choice made was: %d\n",choice);
#endif
if(dont_connect){
if(choice != ADMINTEST){
printf("\nError, only valid option with -nocon is Admin test\n");
return 0;
}
}
if(choice == SUBMIT){
srtn = submit_job();
}else if(choice == ABORT){
srtn = abort_job();
}else if(choice == WATCHJOB){
srtn = watch_job();
}else if(choice == WATCHQUE_LOG){
srtn = watch_que(choice);
}else if(choice == WATCHQUE_FULL){
srtn = watch_que(choice);
}else if(choice == WATCHQUE_CPU){
srtn = watch_que(choice);
}else if(choice == LISTCOMP){
srtn = list_complete();
}else if(choice == RCFILEWRITE){
srtn = write_rcfile();
}else if(choice == ADMINTEST){
srtn = admin_test();
}else if(choice == RECONFIG){
srtn = reconfig_quemgr();
}else{
srtn = 0;
printf("invalid choice ?\n");
}
return srtn;
}

/* ==================== */
int main(int argc,char **argv)
{
int i,j,k;
int len1;
int done;
int not_first_real_app;
int first_real_app_num;
FILE *wp = NULL;
int do_print;
char env_file[256];
char first_real_app_str[128];
int srtn;
int choice;
int timout;
int has_timout;
char *ptr;
char *qmgr_hoststr;
char *qmgr_portstr;
char *user_str;


char home_dir[256];
char tmpstr[256];
char error_msg[256];
char tmp_host[256];
char out_str[2048];
#ifdef WINNT
int err;
WORD wVersionRequested;
WSADATA wsaData;
#endif


struct hostent *host_entry;


#ifdef WINNT
extern BOOL console_event_func(DWORD );
#endif
extern void get_home_dir(char *);
/* ------------ */
/*
** necessary windows startup socket code ...
*/
#ifdef WINNT
wVersionRequested = MAKEWORD( SOCKET_VERSION1, SOCKET_VERSION2 );
err = WSAStartup( wVersionRequested, &wsaData );
if(err != 0){
printf("Error, WSAStartup failed\n");
return 1;
}
if( ( LOBYTE( wsaData.wVersion ) != 1 ) ||
( HIBYTE( wsaData.wVersion ) != 1 ) ){
WSACleanup();
printf("Error, WSAStartup version incompatible\n");
return 1;
}
#endif
/* ------------ */
#ifdef WINNT
/*
** console handler ...
*/
(void)SetConsoleCtrlHandler((PHANDLER_ROUTINE)console_event_func, TRUE);
#endif
/* ------------ */

/*
** get this hostname ...
*/
gethostname(api_this_host,256);
strcpy(tmp_host,api_this_host);
host_entry = (struct hostent *)gethostbyname(tmp_host);
if(host_entry != NULL)
strcpy(api_this_host,host_entry->h_name);
/* ------------ */
/*
** get this username ...
*/
user_str = api_getlogin();
strcpy(api_user_name,user_str);
/* ------------ */
/*
** assume binpath is from P3_HOME (or AM_HOME) ...
** command-line will override ...
*/
binpath[0] = '\0';
orgpath[0] = '\0';
#ifdef ULTIMA
ptr = getenv("AM_HOME");
#else
ptr = getenv("P3_HOME");
#endif
if(ptr != NULL){
strcpy(binpath,ptr);
}
#ifdef DEBUG
printf("binpath = <%s>\n",binpath);
#endif
/* ------------ */
/*
** get QueMgr host, port, app name, index, p3home (or amhome),
** and -rcf from command line args ...
*/
lic_file[0] = '\0';
has_qmgr_host = 0;
has_qmgr_port = 0;
has_org = 0;
has_orgpath = 0;
has_timout = 0;
timout = BLOCK_TIMEOUT;
strcpy(org_name,"default");
ptr = getenv("P3_ORG");

if(ptr != NULL){
strcpy(org_name,ptr);
has_org = 1;
}
qmgr_host[0] = '\0';
qmgr_port = -1111;
rmgr_port = RMTMGR_RESV_PORT;
api_application_name[0] = '\0';
api_application_index = -1;
sys_rcf_file[0] = '\0';
usr_rcf_file[0] = '\0';
has_cmd_rcf = 0;
cmd_rcf_file[0] = '\0';
strcpy(usr_rcf_file,".p3mgrrc");
get_home_dir(home_dir);
if(home_dir[0] != '\0'){
sprintf(usr_rcf_file,"%s/.p3mgrrc",home_dir);
}
#ifdef DEBUG
fprintf(stderr,"usr_rcf_file = <%s>\n",usr_rcf_file);
#endif
auto_startup = 0;
do_print = 0;
env_file[0] = '\0';
if(argc > 1){
i = 1;
while(i < argc){
if((strcmp(argv[i],"-qmgrhost") == 0) && (i < argc-1)){
has_qmgr_host = 1;
strcpy(qmgr_host,argv[i+1]);
i++;
}else if((strcmp(argv[i],"-qmgrport") == 0) && (i < argc-1)){
has_qmgr_port = 1;
qmgr_port = atoi(argv[i+1]);
i++;
}else if((strcmp(argv[i],"-rmgrport") == 0) && (i < argc-1)){
rmgr_port = atoi(argv[i+1]);
i++;
}else if((strcmp(argv[i],"-timeout") == 0) && (i < argc-1)){
timout = atoi(argv[i+1]);
has_timout = 1;
i++;
}else if((strcmp(argv[i],"-org") == 0) && (i < argc-1)){
has_org = 1;
strcpy(org_name,argv[i+1]);
i++;
}else if((strcmp(argv[i],"-orgpath") == 0) && (i < argc-1)){
has_orgpath = 1;
strcpy(orgpath,argv[i+1]);
i++;
}else if((strcmp(argv[i],"-auth") == 0) && (i < argc-1)){
strcpy(lic_file,argv[i+1]);
i++;
}else if((strcmp(argv[i],"-app") == 0) && (i < argc-1)){
strcpy(api_application_name,argv[i+1]);
i++;
}else if((strcmp(argv[i],"-rcf") == 0) && (i < argc-1)){
has_cmd_rcf = 1;
strcpy(cmd_rcf_file,argv[i+1]);
i++;
#ifdef ULTIMA
}else if((strcmp(argv[i],"-amhome") == 0) && (i < argc-1)){
#else
}else if((strcmp(argv[i],"-p3home") == 0) && (i < argc-1)){
#endif
strcpy(binpath,argv[i+1]);
i++;
}else if((strcmp(argv[i],"-choice") == 0) && (i < argc-1)){
auto_startup = atoi(argv[i+1]);
i++;
}else if(strcmp(argv[i],"-env") == 0){
do_print = 1;
}else if(strcmp(argv[i],"-envall") == 0){
do_print = 2;
}else if((strcmp(argv[i],"-envf") == 0) && (i < argc-1)){
strcpy(env_file,argv[i+1]);
do_print = 3;
i++;
}else if((strcmp(argv[i],"-envfall") == 0) && (i < argc-1)){
strcpy(env_file,argv[i+1]);
do_print = 4;
i++;
}else if(strcmp(argv[i],"-nocon") == 0){
dont_connect = 1;
}else if(strcmp(argv[i],"-version") == 0){
fprintf(stderr,"version: %s\n",GLOBAL_AM_VERSION);
return 0;
}
i++;
}
}
#ifdef DEBUG
if(has_cmd_rcf)
fprintf(stderr,"cmd_rcf_file = <%s>\n",cmd_rcf_file);
#endif
/* ------------ */
/*
** if binpath is still empty then it's an error ...
*/


#ifdef DEBUG
printf("binpath = <%s>\n",binpath);
#endif


if(binpath[0] == '\0'){
#ifdef ULTIMA
printf("Error, AM_HOME env var not set\n");
#else
printf("Error, P3_HOME env var not set\n");
#endif
return 1;
}
#ifdef LAPI
if(lic_file[0] == '\0'){
ptr = getenv("MSC_LICENSE_FILE");
if(ptr == NULL){
ptr = getenv("LM_LICENSE_FILE");
}
if(ptr == NULL){
printf("Error, authorization file not set (MSC_LICENSE_FILE)\n");
return 1;
}
strcpy(lic_file,ptr);
}
#else
strcpy(lic_file,"empty.noauth");
#endif
/*
** change back-slashes to forward slashes for binpath ...
*/
i = 0;
j = 0;
k = (int)strlen(binpath);
while(i < k){
#ifdef DEBUG
fprintf(stderr,"txtmgr: i=%d, k=%d\n",i,k);
fprintf(stderr,"txtmgr: binpath[i] = %c\n",binpath[i]);
#endif
if(binpath[i] == '\\'){
if(i < k-1){
if(binpath[i+1] == '\\'){
i++;
}
}
tmpstr[j] = '/';
j++;
}else{
tmpstr[j] = binpath[i];
j++;

}

#ifdef DEBUG
fprintf(stderr,"HERE\n");
#endif
i++;
}
tmpstr[j] = '\0';
strcpy(binpath,tmpstr);
/*
** make sure binpath has no slash at end ...
*/
len1 = (int)strlen(binpath);
if(len1 > 0){
if( (binpath[len1-1] == '/') || (binpath[len1-1] == '\\') ){
binpath[len1-1] = '\0';
}
}
/*
** mck - add /p3manager_files (or analysis_manager) to binpath ...
*/
#ifdef ULTIMA
strcat(binpath,"/analysis_manager");
#else
strcat(binpath,"/p3manager_files");
#endif
/* ------------ */
/*
** MCK MCK MCK - get orgpath - it WILL be the same as binpath
** for the org.cfg file ...
*/
if(has_orgpath == 0){
strcpy(orgpath,binpath);
}else{
/*
** change back-slashes to forward slashes for orgpath ...
*/
i = 0;
j = 0;
k = (int)strlen(orgpath);
while(i < k){
#ifdef DEBUG
fprintf(stderr,"txtmgr: i=%d, k=%d\n",i,k);
fprintf(stderr,"txtmgr: orgpath[i] = %c\n",orgpath[i]);

#endif

if(orgpath[i] == '\\'){
if(i < k-1){
if(orgpath[i+1] == '\\'){
i++;
}
}
tmpstr[j] = '/';
j++;
}else{
tmpstr[j] = orgpath[i];
j++;
}
#ifdef DEBUG
fprintf(stderr,"HERE\n");
#endif
i++;
}
tmpstr[j] = '\0';
strcpy(orgpath,tmpstr);
/*
** make sure orgpath has no slash at end ...
*/
len1 = (int)strlen(orgpath);
if(len1 > 0){
if( (orgpath[len1-1] == '/') || (orgpath[len1-1] == '\\') ){
orgpath[len1-1] = '\0';
}
}
/*
** mck - add /p3manager_files (or analysis_manager) to orgpath ...
*/
#ifdef ULTIMA
strcat(orgpath,"/analysis_manager");
#else
strcat(orgpath,"/p3manager_files");
#endif
}
/* ------------ */
sprintf(sys_rcf_file,"%s/%s/p3mgrrc",orgpath,org_name);
#ifdef DEBUG
fprintf(stderr,"sys_rcf_file = <%s>\n",sys_rcf_file);
#endif


/* ------------ */


/*
** check env vars if not set on command-line
*/
if(qmgr_host[0] == '\0'){
qmgr_hoststr = getenv("P3_MASTER");
if(qmgr_hoststr != NULL){
strcpy(qmgr_host,qmgr_hoststr);
has_qmgr_host = 1;
}
}
if(qmgr_host[0] == '\0'){
qmgr_hoststr = getenv("MSC_AM_QUEMGR");
if(qmgr_hoststr != NULL){
strcpy(qmgr_host,qmgr_hoststr);
has_qmgr_host = 1;
}
}
if(qmgr_host[0] == '\0'){
qmgr_hoststr = getenv("QUEMGR_HOST");
if(qmgr_hoststr != NULL){
strcpy(qmgr_host,qmgr_hoststr);
has_qmgr_host = 1;
}
}
if(qmgr_port == -1111){
qmgr_portstr = getenv("P3_PORT");
if(qmgr_portstr != NULL){
qmgr_port = atoi(qmgr_portstr);
has_qmgr_port = 1;
}
}
if(qmgr_port == -1111){
qmgr_portstr = getenv("MSC_AM_QUEPORT");
if(qmgr_portstr != NULL){
qmgr_port = atoi(qmgr_portstr);
has_qmgr_port = 1;
}
}
if(qmgr_port == -1111){
qmgr_portstr = getenv("QUEMGR_PORT");
if(qmgr_portstr != NULL){
qmgr_port = atoi(qmgr_portstr);
has_qmgr_port = 1;
}
}


/* ------------ */


#ifndef ULTIMA
/*
** checkout license ...
*/
if( (do_print == 0) && (dont_connect == 0) ){
if((srtn = api_checkout_license(lic_file)) != 0){
printf("Error, Authorization failure %d.",srtn);
if(global_auth_msg != NULL){
printf(" Error msg = %s\n",global_auth_msg);
}else{
printf("\n");
}
return 1;
}
#ifdef DEBUG
fprintf(stderr,"auth_file = %s\n",lic_file);
fprintf(stderr,"checkout_license returns %d\n",srtn);
#endif
}
#endif
/* ------------ */
/*
** init api ...
*/
srtn = api_init(out_str);
if(srtn != 0){
printf("%s, error code = %d\n",out_str,srtn);
return 1;
}
/* ------------ */
/*
** adjust global network timeout if desired ...
*/
if(has_timout == 0){
timout = 30;
}
srtn = api_set_gbl_timeout(timout);
if(srtn != 0){
printf("Error, unable to set global timeout to %d secs\n",timout);
api_release_license();
#ifdef WINNT
WSACleanup();
#endif
return 1;
}
/* ------------ */

ptr = getenv("AM_THIS_HOST");
if(ptr != NULL){
if( (strcmp(ptr,"no") != 0) && (strcmp(ptr,"NO") != 0) ){
api_use_this_host = 1;
}
}
/* ------------ */
/*
** read orgs if possible (org.cfg is in binpath) ...
*/
org = NULL;
num_orgs = 0;
if( (has_qmgr_host == 0) && (has_qmgr_port == 0) ){
org = api_read_orgs(binpath,&num_orgs,&srtn);
if(srtn != 0){
printf("Warning, unable to read org.cfg file, code = %d\n",srtn);
/*
* use defaults ...
*/
strcpy(qmgr_host,api_this_host);
qmgr_port = QUEMGR_RESV_PORT;
}else{
if( (num_orgs > 0) && (org != NULL) ){
/*
** figure out which quemgr to connect to ...
*/
done = 0;
for(i=0;i<num_orgs;i++){
if(strcmp(org[i].org_name,org_name) == 0){
strcpy(qmgr_host,org[i].host_name);
qmgr_port = org[i].port;
done = 1;
break;
}
}
if( (!done) && (has_org == 0) ){
/*
** use first available ...
*/
strcpy(qmgr_host,org[0].host_name);
qmgr_port = org[0].port;
done = 1;
}else if( (!done) && (has_org == 1) ){
/*
** no match found, assume this host and all ...
*/
strcpy(qmgr_host,api_this_host);
qmgr_port = QUEMGR_RESV_PORT;
done = 1;

}
}else{
printf("Warning, unable to read org.cfg file, no orgs found\n");
/*
* use defaults ...
*/
strcpy(qmgr_host,api_this_host);
qmgr_port = QUEMGR_RESV_PORT;
done = 1;
}
}
}
#ifdef DEBUG
printf("\n");
printf("quemgr org  = %s\n",org_name);
printf("quemgr host = %s\n",qmgr_host);
printf("quemgr port = %d\n",qmgr_port);
#endif
/* ------------ */
if(! dont_connect){
/*
** get config info ...
*/
cfg = api_get_config(qmgr_host, qmgr_port, &srtn, error_msg);
if(srtn != 0){
printf("Error, msg = %s, error = %d\n",error_msg,srtn);
api_release_license();
#ifdef WINNT
WSACleanup();
#endif
return 1;
}
/* ------------ */
/* find first real app, just in case */
first_real_app_str[0] = '\0';
not_first_real_app = 1;
first_real_app_num = 1;
for(i=0;i<MAX_APPS;i++){
if(not_first_real_app){
#ifdef DEBUG
fprintf(stderr,"cfg->progs[%d].app_name = <%s>\n",i,cfg->progs[i].app_name);
#endif

if(cfg->progs[i].app_name[0] != '\0'){
not_first_real_app = 0;
strcpy(first_real_app_str,cfg->progs[i].app_name);
first_real_app_num = i + 1;
break;
}
}
}
if(not_first_real_app){
/* error, no apps defined */
fprintf(stderr,"TxtMgr Error: No valid applications defined.\n");
return 1;
}
if(api_application_name[0] != '\0'){
for(i=0;i<MAX_APPS;i++){
if(strcmp(api_application_name,cfg->progs[i].app_name) == 0){
api_application_index = i + 1;
break;
}
}
}
if(api_application_index <= 0){
/* app not specified - use first available */
strcpy(api_application_name,first_real_app_str);
api_application_index = first_real_app_num;
/* put up message about app not found, using first one */
fprintf(stderr,"\nTxtMgr Info: No application specified.\nUsing first available application of <%s> = %d\n",api_application_name,api_application_index);
}
#ifdef DEBUG
fprintf(stderr,"application_name  = <%s>\n",api_application_name);
fprintf(stderr,"application_index = %d\n",api_application_index);
#endif
/* ----------- */
/* DEBUGGING
if(orgpath[0] != '\0'){
api_write_config(cfg,orgpath,bogus,&srtn,error_msg);
if(srtn != 0){
printf("Error, unable to write config files, msg = %s, code = %d\n",error_msg,srtn);
api_release_license();
#ifdef WINNT
WSACleanup();
#endif
return 0;
}


}
DEBUGGING */


/* ----------- */
/*
** initialize config values ...
*/
api_init_uiconfig(cfg);
/*
** because we are txtmgr, reset job_mon_flag of ui_config
** to be off by default ...
*/
if(auto_startup == SUBMIT){
ui_config.job_mon_flag = 0;
}
#ifdef DEBUG_111
api_rcfile_print(1);
#endif
/* ----------- */
/*
** override some settings if needed ...
*/
(void)api_rcfile_read(sys_rcf_file,out_str);
(void)api_rcfile_read(usr_rcf_file,out_str);
if(has_cmd_rcf){
srtn = api_rcfile_read(cmd_rcf_file,out_str);
if(srtn != 0){
printf("%s\n",out_str);
}
}
#ifdef DEBUG_111
api_rcfile_print(1);
#endif
/* ----------- */
/*
** if just a print env then do it and stop ...
*/
if(do_print){
if(do_print >= 3){
if(env_file[0] != '\0'){
wp = fopen(env_file,"wb");
if(wp != NULL){
api_rcfile_write2(wp,(do_print-3));
fclose(wp);
}

}
}else{
api_rcfile_print(do_print-1);
}
api_release_license();
#ifdef WINNT
WSACleanup();
#endif
return 0;
}
} /* ! dont_connect ... */
/* ----------- */
if(auto_startup > 0){
srtn = doit(auto_startup);
}else{

/*
** query for selection and do work ...
*/
while(1){
print_menu();
choice = get_response();
srtn = doit(choice);
#ifdef DEBUG
printf("doit(%d) returns %d\n",choice,srtn);
#endif
if(srtn < 0)
break;

}
srtn = 0;
}

api_release_license();
#ifdef WINNT
WSACleanup();
#endif
return srtn;
}


Index
MSC Patran Analysis Manager Users Guide

ABAQUS, 13
ABAQUS submittals, 13
abort
selecting job, 86
action
abort, 86
configure, 34
monitor, 68
administrator, 117
analysis
ABAQUS, 13
general, 14, 15
MSC.Nastran, 11
Analysis Preference, 8
applications, 118
adding, 119
deleting, 121

command arguments, 51
configuration
disk, 127
examples, 146
files, 116
general, 49
organizational group, 152
queue, 130
separate users, 153
test, 134
configuration management, 113
configure, 34
disk space, 35
mail, 46
memory, 40
miscellaneous, 58
restart, 53
time, 47

daemon, 92
General Manager, 94
Job Manager, 93
Queue Manager, 92
default host/queue, 52
disable, 10
disk configuration, 127

edit file, 32
editor, 94
enable, 10
environment variables, 103
errors, 162
executables, 4, 90
execute, 20

files
configuration, 114, 116
created, 23
directory structure, 90
disk configuration, 150
edit, 32
examples, 146
host configuration, 146
queue configuration, 151
save settings, 34
selecting, 28
X resources, 111
filesystem
add, 128
delete, 129
test, 140
fonts, 111


general, 15
Generic submittals, 15

MSC.Marc Submittals, 14
MSC.Nastran, 10, 11
MSC.Nastran submittals, 11

host, 29
adding, 125
configuration, 124
deleting, 127
test, 135, 136
host groups, 132

installation, 107
instructions, 108
requirements, 107
integration, 5
interface
configuration management, 113
user, 92

job stats, 94, 195


job viewer, 94, 195

least loaded, 132


limitations, 11, 13, 14
load leveling, 132
LSF, 130

maximum application tasks, 118


modify
configuration files, 116
monitor, 68
completed job, 75
CPU Loads, 82
full listing, 82
host status, 80
host/queue, 78
job listing, 79
queue manager log, 81
running job, 69

NQS, 130

organization, 6
multiple, 103

physical hosts, 121


adding, 122
deleting, 124
test, 136
product information, 3
product purpose, 2
program, 4, 20, 90
queue manager, 155
program arguments, 94
project directory, 51

queue, 29
add, 131
delete, 132
test, 143
queue type, 117
LSF, 117
NQS, 117

reconfigure, 143
restart, 53
rules, 11, 13, 14

startup arguments, 94
statistics, 94, 195
submit
preparing, 8
separate user, 51


test
application, 135
disk, 141
MSC.Patran AM host, 139
physical hosts, 136
queue, 143
test configuration/host, 135, 136

user interface, 92

variables, 103

X resources, 111

