Distributed ANSYS Guide
ANSYS, Inc. Southpointe
275 Technology Drive
Canonsburg, PA 15317
[email protected]
http://www.ansys.com
(T) 724-746-3304
(F) 724-514-9494
Table of Contents
1. Overview of Distributed ANSYS
2. Configuring Distributed ANSYS
   2.1. Prerequisites for Running Distributed ANSYS or the Distributed Solvers
      2.1.1. MPI Software
      2.1.2. Using MPICH
         2.1.2.1. Configuration for Windows Systems Running MPICH
         2.1.2.2. Configuration for UNIX/Linux Systems Running MPICH
   2.2. Installing Distributed ANSYS
   2.3. Setting Up the Environment for Distributed ANSYS or the Distributed Solvers
      2.3.1. Using the mpitest Program
         2.3.1.1. Running a Local Test
         2.3.1.2. Running a Distributed Test
      2.3.2. Other Considerations
   2.4. Starting Distributed ANSYS
      2.4.1. Starting Distributed ANSYS via the Launcher
      2.4.2. Starting Distributed ANSYS via Command Line
3. Running Distributed ANSYS
   3.1. Advantages of Using Distributed ANSYS
   3.2. Supported Analysis Types
   3.3. Supported Features
   3.4. Running a Distributed Analysis
   3.5. Understanding the Working Principles and Behavior of Distributed ANSYS
   3.6. An Example Distributed ANSYS Analysis (Command Method)
List of Tables
1.1. Parallel Solvers Available in Distributed ANSYS (PPFA License Required)
1.2. Parallel Solvers Available in Shared-Memory ANSYS with a PPFA License
1.3. Parallel Solvers Available in Shared-Memory ANSYS Without a PPFA License
2.1. Platforms and MPI Software
Table 1.1 Parallel Solvers Available in Distributed ANSYS (PPFA License Required)
Solvers/Feature   Shared-Memory Hardware   Distributed-Memory Hardware   Mixed-Memory Hardware
DPCG/PCG          Y                        Y                             Y
DJCG/JCG          Y                        Y                             Y
*Runs in shared-memory parallel mode on the local machine only; element formulation and results calculation will still run in distributed-memory parallel mode.
Table 1.2 Parallel Solvers Available in Shared-Memory ANSYS with a PPFA License
Solvers/Feature                            Shared-Memory Hardware   Distributed-Memory Hardware   Mixed-Memory Hardware
DPCG                                       Y                        Y                             Y
DJCG                                       Y                        Y                             Y
Distributed sparse                         -                        -                             -
PCG                                        Y                        -                             -
JCG                                        Y                        -                             -
AMG                                        Y                        -                             -
Sparse                                     Y                        -                             -
DDS                                        Y                        Y                             Y
ICCG                                       Y                        -                             -
Element formulation, results calculation   Y                        -                             -
Table 1.3 Parallel Solvers Available in Shared-Memory ANSYS Without a PPFA License
Solvers/Feature                            Shared-Memory Hardware*   Distributed-Memory Hardware   Mixed-Memory Hardware
DPCG                                       -                         -                             -
DJCG                                       -                         -                             -
Distributed sparse                         -                         -                             -
PCG                                        Y                         -                             -
JCG                                        Y                         -                             -
AMG                                        -                         -                             -
Sparse                                     Y                         -                             -
DDS                                        -                         -                             -
ICCG                                       Y                         -                             -
Element formulation, results calculation   Y                         -                             -
*Via /CONFIG,NPROC,n.

In ANSYS, the solution time is typically dominated by three parts: the time spent creating the element matrices and forming the global matrices or global systems of equations, the time spent solving the linear system of equations,
and the time spent post-processing the solution (i.e., calculating derived quantities such as stress and strain). The distributed solvers (DDS, DPCG, DJCG) that currently exist in shared-memory ANSYS can significantly decrease the time spent solving the linear system of equations. However, when using these distributed solvers, the time spent creating the system of equations and the time spent calculating the derived quantities from the solution are not reduced.

Shared-memory architecture (/CONFIG,NPROC) runs a solution over multiple processors on a single multiprocessor machine. When using shared-memory ANSYS, you can reduce each of the three main parts of the overall solution time by using multiple processors. However, this architecture is limited by memory bandwidth; you typically see very little reduction in solution time beyond two to four processors.

The distributed-memory architecture of Distributed ANSYS runs a solution over multiple processors on a single machine or on multiple machines. It decomposes large problems into smaller domains, transfers the domains to each processor, solves each domain, and creates a complete solution to the model. Because each of the three main parts of the overall solution runs in parallel, the whole model solution time is significantly reduced. The required memory is also distributed over multiple systems, which allows you to solve very large problems on a cluster of machines with limited memory.

Distributed ANSYS works by launching ANSYS on multiple machines. The machine on which ANSYS is launched is called the master machine, and the other machines are called the slave machines. (If you launch Distributed ANSYS using multiple processors on the same machine, the same terminology applies to the processors, e.g., the master and slave processors.) All pre-processing and post-processing commands are executed only on the master machine. Only the SOLVE command and any necessary supporting commands (e.g., /SOLU, FINISH, /EOF, /EXIT, etc.) are sent to the slave machines to be processed.

Files generated by Distributed ANSYS are named JobnameN.ext, where N is the process number. The master process is always 0, and the slave processes are 1, 2, etc. When the solution is complete and you issue the FINISH command in /SOLU, Distributed ANSYS combines all JobnameN.rst files into a single Jobname.rst file located on the master machine.

The remaining chapters explain how to configure your environment to run Distributed ANSYS, how to run a Distributed ANSYS analysis, and which features and analysis types are supported in Distributed ANSYS. Read these chapters carefully and fully understand the process before attempting to run a distributed analysis. The proper configuration of your environment and the installation and configuration of the appropriate MPI software are critical to successfully running a distributed analysis.
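As an illustration of this file-naming scheme, consider a hypothetical three-process run with a jobname of beam. The listing below is only a sketch; the exact set of files depends on the analysis and platform.

ls beam*
# beam0.rst  beam0.OUT  beam0.ERR    - files belonging to the master process (process 0)
# beam1.rst  beam2.rst               - results files from slave processes 1 and 2
# beam.rst                           - combined results file, created on the master machine
#                                      when FINISH is issued in /SOLU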
Table 2.1 Platforms and MPI Software

Platform / OS                                         MPI Software                                More Information
SGI 64-bit / IRIX64 6.5.23m                           MPI 4.3 (MPT 1.8) with                      http://www.sgi.com/software/mpt/overview.html
                                                      array services 3.5
Sun UltraSPARC 64-bit, UltraSPARC III and             HPC CLUSTERTOOLS 5.0                        http://www.sun.com/hpc/communitysource/
IV 64-bit / Solaris 8
Intel IA-32 Linux / RedHat AS 2.1 Kernel 2.4.9        MPI/Pro 1.6.5                               http://www.mpi-softtech.com
Intel IA-64 Linux / RedHat AS 2.1 Kernel 2.4.18,      MPICH-1.2.5
AMD Opteron 64-bit Linux / SuSE Kernel 2.4.21
Fujitsu SPARC64 IV / Solaris 8                        Parallelnavi 2.1
Intel IA-32 / Windows XP Home or Professional         MPI/Pro 1.6.5                               http://www.mpi-softtech.com
(Build 2600) Version 5.1, Windows 2000
Version 5.0 (Build 2195)
1. Insert the CD in your CD drive.
2. Choose Start>Run and type D:\MPICH\Setup.exe (replace D with the letter of your CD drive).
3. Exit all running Windows programs and click Next. The Software License Agreement screen appears. Read the agreement, and if you accept, click Yes.
4. Choose the destination for installation. To choose the default installation location, click Next. To change the default installation location, click Browse to navigate to the desired location, and click Next.
5. On the Select Components screen, select the default options, then click Next.
6. On the Start Copying Files screen, verify that the information is correct and click Next to continue the installation.
7. The Setup Complete dialog box appears. Click Finish.
8. Run C:\Program Files\MPICH\mpd\bin\MPIRegister.exe and enter your login and password.
If you are using MPICH and running the distributed solvers in shared-memory ANSYS, you will need to use an alternate ANSYS script and executable. For MPICH, use the ansddsmpich script and the ansddsmpich.exe executable. See the ANSYS Advanced Analysis Techniques Guide for more information.
2.3. Setting Up the Environment for Distributed ANSYS or the Distributed Solvers
After you've ensured that your cluster meets the prerequisites and you have ANSYS and the correct versions of MPI installed, configure your distributed environment using the following procedure. This procedure applies both to Distributed ANSYS (on supported platforms) and to the distributed solvers running under shared-memory ANSYS.

1. Obtain the machine name for each machine on the cluster. You will need this name to set up the Configure Cluster option of the ANS_ADMIN utility in Step 3.

   Windows: Right-click on My Computer, left-click on Properties, and select the Network Identification or Computer Name tab. The full computer name will be listed. Note the name of each machine (not including the domain).

   UNIX/Linux: Type hostname on each machine in the cluster. Note the name of each machine. You will need this name to set up the .rhosts file, as well as for the ANS_ADMIN utility.
2. (UNIX/Linux only) Set up the .rhosts file on each machine. The .rhosts file lists all machines in the cluster. The machines should be listed using their complete system name, as taken from uname. For example, an .rhosts file for a two-machine cluster might look like this:
golinux1.ansys.com jqd
golinux2 jqd
   Verify communication between machines via rsh (e.g., rsh golinux2 ls). You should not be prompted for a password. If you are, check the .rhosts permissions and machine names for correctness.

   If you plan to run the distributed solvers on one machine with multiple processors in a shared-memory environment, you need to have the MPI software installed, but you do not need the .rhosts file.

3. Configure the hosts90.ans file. Use the ANS_ADMIN utility to configure this file. You can manually modify the file later, but we strongly recommend that you use ANS_ADMIN to create this file initially to ensure that you establish the correct format.

   Windows: Start> Programs> ANSYS 9.0> Utilities> ANS_ADMIN

   UNIX/Linux:
/ansys_inc/v90/ansys/bin/ans_admin90
   Choose Configuration options, and then click Configure Cluster. Choose the hosts90.ans file to be configured and click OK. Then enter the system name (from Step 1) in the Machine hostname field and click Add. On the next dialog box, enter the system type in the Machine type drop-down, and the number of processors in the Max number of jobs field for each machine in the cluster. The working directory field also requires an entry, but this entry is not used by Distributed ANSYS or the distributed solvers. The remaining fields do not require entries.

   The hosts90.ans file should be located in your current working directory, your home directory, or the apdl directory.

4. For running Distributed ANSYS with MPICH: The ANSYS90_DIR variable and the dynamic load library path (e.g., LD_LIBRARY_PATH) must be set by the appropriate shell startup script in order to run Distributed ANSYS with MPICH. Use the following scripts (supplied with ANSYS) to configure the distributed environment correctly for MPICH.

For csh or tcsh shells, add the following line to your .cshrc, .tcshrc, or equivalent shell startup file:
source /ansys_inc/v90/ansys/bin/confdismpich90.csh
For sh or bash shells, add the following line to your .login, .profile, or equivalent shell startup file:
. /ansys_inc/v90/ansys/bin/confdismpich90.sh
As a test, rsh into all machines in the cluster (including the master) and verify that the ANSYS90_DIR and the LD_LIBRARY_PATH are set correctly. For example:
rsh master1 env | grep ANSYS90_DIR
and
rsh master1 env | grep LD_LIBRARY
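To check every machine in one pass, a short shell loop over the cluster's host names can run both checks; this is only a convenience sketch (the host names are placeholders), not an ANSYS-supplied script:

# Sketch: verify ANSYS90_DIR and the load library path on each machine in the cluster
for host in master1 slave1 slave2; do
    echo "==== $host ===="
    rsh $host env | grep -E 'ANSYS90_DIR|LD_LIBRARY'
done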
Note: Adding these confdismpich90 lines to your shell startup file will place the required ANSYS load library path settings in front of any existing system load library settings and will likely affect other applications, including native MPI. If you have problems running other applications after including these scripts, you will need to comment out these lines to run the other applications.

5. On UNIX/Linux systems, you can also set the following environment variables:

   ANSYS_RSH - This is the remote shell command to use in place of the default rsh.

   ANSYS_NETWORK_START - This is the time, in seconds, to wait before timing out on the start-up of the client (default is 15 seconds).

   ANSYS_NETWORK_COMM - This is the time, in seconds, to wait before timing out while communicating with the client machine (default is 5 seconds).
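For example, in an sh or bash startup file these variables might be set as follows. This is only a sketch; the ssh path and timeout values are illustrative and should be adjusted for your site.

# Sketch only: optional Distributed ANSYS environment variables (example values)
ANSYS_RSH=/usr/bin/ssh          # use ssh in place of the default rsh
ANSYS_NETWORK_START=30          # allow 30 seconds for client start-up instead of the default 15
ANSYS_NETWORK_COMM=10           # allow 10 seconds for client communication instead of the default 5
export ANSYS_RSH ANSYS_NETWORK_START ANSYS_NETWORK_COMM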
On IBM systems:

LIBPATH - If POE is installed in a directory other than the default (/usr/lpp/ppe.poe), you must supply the installed directory path via the LIBPATH environment variable:
export LIBPATH=nondefault-directory-path/lib
On SGI systems:

In some cases, the default settings for the environment variables MPI_MSGS_PER_PROC and MPI_REQUEST_MAX may be too low and may need to be increased. See the SGI MPI documentation for more information on settings for these and other environment variables.

When you install the SGI MPI software, you must also install the array 3.2 software (available from the Message Passing Toolkit 1.3 distribution). The array daemon must be running on each system you plan to use for the distributed solvers. Update the /usr/lib/arrayd.conf file to list each system on which you plan to run the distributed solvers. The local hostname of the machine must be listed first in this file.
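If the defaults do prove too low, the variables can be raised in the shell before launching. The values below are placeholders only; consult the SGI MPI documentation for appropriate settings on your system.

# Sketch only: raise the SGI MPI limits (example values)
MPI_MSGS_PER_PROC=16384
MPI_REQUEST_MAX=16384
export MPI_MSGS_PER_PROC MPI_REQUEST_MAX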
To verify that these environment variables are set correctly on each machine, run:
rsh machine1 env
On Windows systems only (for running the distributed solvers):

If Windows is running MPICH: Add C:\Program Files\MPICH\mpd\bin to the PATH environment variable on all Windows machines (assuming MPICH was installed on the C:\ drive). This directory must be in your path for distributed processing to work correctly.
On UNIX:
/ansys_inc/v90/ansys/bin/mpitest90
where x is the number of processors in your machines file (6 in this example).

On Windows running MPICH:

1. Create a file named machines in your local/home directory.

2. Open the machines file in an editor. Add the master and slave machines in your cluster. For example, in this cluster of two machines, the master machine is gowindows1. List the machine name separately for each processor (CPU) on that machine. For example, if gowindows1 has four processors and gowindows2 has two, the machines file would look like this:
gowindows1
gowindows1
gowindows1
gowindows1
gowindows2
gowindows2

   Note: You can also simply list the number of processors on the same line: gowindows1 4.

3. From a command prompt, navigate to your working directory. Run the following:
mpirun -np x -machinefile machines "C:\Program Files\ANSYS Inc\V90\ANSYS\bin\platform\mpitest.exe"
where x is the number of processors in your machines file (6 in this example).

On Linux running MPICH:

Note: These instructions are typically done one time by the system administrator. Individual users may not have the necessary privileges to complete all of these steps.

1. Edit the machines file. Navigate to the /ansys_inc/v90/ansys/mpich/<platform>/share subdirectory and open the file machines.LINUX in an editor.

2. In the machines.LINUX file, change machine1 to the master machine in your Linux cluster. For example, in our cluster of two machines, the master machine is golinux1. List the machine name separately for each processor (CPU) on that machine. For example, if golinux1 has four processors and golinux2 has two, the machines.LINUX file would look like this:

golinux1
golinux1
golinux1
golinux1
golinux2
golinux2

   Delete any other machines listed.

   Note: If you are running an SMP box, you can simply list the number of processors on the same line: golinux1 4.

3. Edit the mpitestmpich90 script to read np = x, where x is the number of processors in your machines.LINUX file.

4. Navigate to your working directory. Run the following:
/ansys_inc/v90/ansys/bin/mpitestmpich90
On Linux running MPI/Pro:

1. Edit the machines file in the /etc subdirectory. Open the machines file in an editor.

2. In the machines file, change machine1 to the master machine in your Linux cluster. For example, in our cluster of two machines, the master machine is golinux1. List the machine name separately for each processor (CPU) on that machine. For example, if golinux1 has four processors and golinux2 has two, the machines file would look like this:
golinux1
golinux1
golinux1
golinux1
golinux2
golinux2

   Delete any other machines listed.

3. Edit the mpitest90 script to read np = x, where x is the number of processors in your machines file.

4. Navigate to your working directory. Run the following:
/ansys_inc/v90/ansys/bin/mpitest90
On UNIX machines running native MPI: The process is the same as described for running MPICH on Linux machines (above), but you will need to contact your MPI vendor to find out where the appropriate machines file resides and how to edit it. Once you have properly edited the machines file following your vendor's instructions, edit mpitest90 to read np = x where x is the number of processors in your machines file.
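As a sketch of that last step (assuming the standard installation path, and assuming the np assignment in the script is a plain shell variable), you might confirm the np setting and then run the test from your working directory:

# Sketch: confirm the np setting in mpitest90, then run the test
grep "np *=" /ansys_inc/v90/ansys/bin/mpitest90
cd /path/to/your/working/directory    # placeholder path
/ansys_inc/v90/ansys/bin/mpitest90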
2. Select the correct environment and license on the Launch tab. Select the Parallel Performance for ANSYS add-on.

3. Go to the Solver Setup tab. Select Run Distributed ANSYS. Specify the MPI type to be used for this distributed run. MPI types include:

   MPI Native
   MPICH
   MPICH_SH (Shared-memory Linux machines)

   You must also specify either local machine or multiple hosts. If local machine, specify the number of processors on that machine. If multiple hosts, select the machines you want to use from the list of available hosts. The list of available hosts is populated from the hosts90.ans file. Click on the machines you want to use and click Add to move them to the Selected Hosts list to use them for this run. You can also add or remove a host, but be aware that adding or removing a host here will modify only this run; the hosts90.ans file will not be updated with any new information from this dialog box.

4. Click Run to launch ANSYS.
You can view the actual mpirun command line that the launcher issues by setting the ANS_SEE_RUN_COMMAND environment variable to 1. Setting this environment variable is useful for troubleshooting.
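For example, in an sh or bash shell the variable could be set before starting the launcher. This is only a sketch; launcher90 is assumed here as the launcher command, so substitute whatever command you normally use to start the launcher.

# Sketch: expose the mpirun command line issued by the launcher
ANS_SEE_RUN_COMMAND=1
export ANS_SEE_RUN_COMMAND
launcher90    # assumed launcher command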
For MPICH:
ansys90 -pp -mpi mpich -dis -np n
For example, if you run a job in batch mode on a local host using four processors and MPI, with an input file named input1 and an output file named output1, the launch command would be:
ansys90 -pp -dis -np 4 -b -i input1 -o output1
Multiple Hosts

If you are running Distributed ANSYS across multiple hosts, you need to specify the number of processors on each machine:

For native MPI or MPI/Pro:
ansys90 -pp -dis -machines machine1:np:machine2:np:machine3:np
For MPICH:
ansys90 -pp -mpi mpich -dis -machines machine1:np:machine2:np:machine3:np
where machine1 (or 2 or 3) is the name of the machine and np is the number of processors you want to use on the corresponding machine. For example, if you run a job in batch mode using two machines (one with four processors and one with two processors) and MPI, with an input file named input1 and an output file named output1, the launch command would be:
ansys90 -pp -dis -b -machines machine1:4:machine2:2 -i input1 -o output1
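For MPICH, the same example adds the -mpi mpich flag, combining the command forms shown above:

ansys90 -pp -mpi mpich -dis -b -machines machine1:4:machine2:2 -i input1 -o output1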
For MPICH, you can also use just the -np n option to run across multiple hosts. To use this option, you will need to modify the default machines.LINUX file, located in /ansys_inc/v90/ansys/mpich/<platform>/share. The format is one hostname per line, written either as hostname or as hostname:n, where n is the number of processors in an SMP machine. The hostname should be the same as the result of the hostname command. By default, the machines.LINUX file is set up with only one machine:
machine1
machine1
machine1
machine1
To run on multiple machines, you need to modify this file to list the additional machines:
machine1
machine1
machine2
machine2
machine3
machine3
The order in which machines are listed in this file is the order in which the work will be distributed. For example, if you want Distributed ANSYS to use one processor on each machine before it uses the second processor of any machine, order them as follows:
machine1
machine2
machine3
machine1
machine2
machine3
Note: Do not run Distributed ANSYS in the background via the command line (i.e., do not append an ampersand (&) to the command line).
Full transient analyses for single-field structural and single-field thermal analyses
Spectrum analyses, cyclic symmetry analyses, and modal analyses are not supported. FLOTRAN, low- and high-frequency electromagnetics, and coupled-field analyses (including multifield, ROM, and FSI) are not supported.
All other KEYOPTs are supported as documented in the element descriptions.

The following ANSYS features are not supported by Distributed ANSYS:

p-Elements
Superelements/automatic substructuring
Element morphing
Arc-length method (ARCLEN)
Inertia relief (IRLF)
Prestress effects (PSTRES)
Initial conditions (IC)
Initial stress (ISTRESS, ISFILE)
Nonlinear diagnostic tool (NLHIST)
Partial solution (PSOLVE)
Fast thermal solution option (THOPT)
Radiosity surface elements (SURF251, SURF252)
Optimization and probabilistic design are not supported under Distributed ANSYS. Restarts are not supported. The PGR file is not supported.
Choosing the PCG or JCG solver when you're running Distributed ANSYS will automatically run the distributed version of these solvers. Other solvers (ICCG, frontal, etc.) will not work in a distributed environment.

4. Solve the analysis.

   Command(s): SOLVE
   GUI: Main Menu> Solution> Solve> Current LS

5. After the solution completes, specify the set of results to be read from the results file. Note that a SET command is required because not all solution data is in the database.

   Command(s): SET
   GUI: Main Menu> General Postproc> Read Results

6. Postprocess your results as you would for any analysis.
Notes on Running Distributed ANSYS:

Only the master machine reads the config.ans file.

Distributed ANSYS ignores the /CONFIG,noeldb command.
Use of APDL

In pre- and post-processing, APDL works the same in Distributed ANSYS as in shared-memory ANSYS. However, in /SOLUTION, Distributed ANSYS does not support certain *GET items. In general, Distributed ANSYS supports global solution *GET results, such as total displacements and reaction forces. It does not support element-level results specified by ESEL, ESOL, and ETABLE labels. Unsupported items will return a *GET value of zero.

When an error occurs in one of the processors during Distributed ANSYS execution, that processor sends an error message to all other processors so that the entire run exits gracefully. However, if an error message fails to send, the job may hang, and you will need to manually kill all the processes. You can remove each JobnameN.ERR file if you suspect that one of the jobs hung. This action should rarely be required.

You can launch Distributed ANSYS in either interactive or batch mode on the master processor. However, the slave processors always run in batch mode. The slave processes cannot read the START90.ANS or STOP90.ANS files. The master process sends all /CONFIG,Label commands to the slave processors as needed.

Shared-memory ANSYS can postprocess using the Jobname.DB file (if the solution results were saved), as well as using the Jobname.RST file. Distributed ANSYS, however, can only postprocess using the Jobname.RST file; it cannot use the Jobname.DB file because no solution results are written to the database. You will need to issue a SET command before postprocessing.

In Distributed ANSYS, the OUTPR command prints NSOL and RSOL in the same manner as in shared-memory ANSYS. However, for other items such as ESOL, Distributed ANSYS prints only the element solution on the CPU domain of the master processor. Therefore, OUTPR,ESOL has incomplete information and is not recommended. Also, the order of elements differs from that of shared-memory ANSYS due to domain decomposition, so a direct one-to-one element comparison with shared-memory ANSYS will differ if you use OUTPR.

When a Distributed ANSYS job is executed, the output for the master processor is written to the screen by default. If you specified an output file via the launcher or the -o option, the output is written to that file. Distributed ANSYS automatically writes the ASCII files from each slave processor to JobnameN.OUT. Normally these slave output files have little value because all of the job information is on the master processor (Jobname0.OUT). The same principle also applies to the other ANSYS ASCII files, such as Jobname0.ERR, Jobname0.MNTR, etc. However, if the job hangs, you may be able to determine the cause by studying the contents of the slave output files.

You can use /CONFIG,NPROC,n to activate shared-memory parallel behavior for the shared-memory sparse solver in Distributed ANSYS (EQSLV,SPARSE). The distributed sparse solver is more scalable than the shared-memory sparse solver. However, the distributed sparse solver uses more memory than the shared-memory sparse solver and is less robust. For very difficult problems, such as those that often arise in medium-size nonlinear analyses, we recommend that you try the shared-memory sparse solver, because you can still achieve scalable performance in element formulation and results calculations.
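For example, if a run with a hypothetical jobname of beam appears to hang, a quick look at the end of each slave output file, plus a check for per-process error files, may reveal the cause; this is only a sketch:

# Sketch: inspect slave output and error files for a hung job with jobname "beam"
tail -n 40 beam1.OUT beam2.OUT
ls beam*.ERR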
Large Number of CE/CP and Contact Elements

In all PPFA products (both shared-memory ANSYS and Distributed ANSYS), the program can handle a large number of coupling and constraint equations (CE/CP) and contact elements. However, specifying too many of these items forces Distributed ANSYS to communicate more among the processors, resulting in a longer elapsed time to complete a distributed parallel job. To avoid degrading performance, reduce the number of CE/CP where possible and confine potential contact pairs to a smaller region.
! use DPCG solver in the Distributed ANSYS run
! use Distributed sparse solver
! use default shared-memory version of sparse solver