
Welcome to ANSYS Solutions

Fall 2003


Wild Fire Mk VI
Prototype

This article is based on the work of the following members of the da Vinci Project Team:
Vladimir Kudriavtsev, Team Leader, Engineering and R&D
Ta-Liang Hsu, Stress Analyst
Asier Ania, Thermal & Stress Analyst
Max Buneta, System Analysis and Design
Michael Trauttmansdorff, System Design and CAD
Marek Krzeminski, Dynamics and Control
Kalman Rooz, Senior Consultant
James Porcher, Ground Operations
Brian Feeney, da Vinci Project Team Leader

Announced in 1996 to promote the development and flight of spacecraft for low-cost commercial transport of humans into space, the international X-Prize Foundation is providing a purse of US$10 million to the first competitor who can safely launch and land a manned spacecraft to an altitude of 100 kilometers (the international boundary of space), twice within a two-week period.
The first Canadian entry in this competition, the all-volunteer da Vinci Project (wholly owned by ORVA Space Corp.), has put years of engineering research, design and developmental testing into the vehicle design, propulsion and flight guidance system.
A full-scale flight-engineering prototype of the manned rocket has been constructed. Detailed engineering and fabrication of the full-scale manned rocket, named Wild Fire Mk VI, is currently underway. Flight testing of the manned rocket and X-Prize competition flights are targeted to continue throughout 2004.
For R&D efforts on the project, a wide range of engineering software was utilized for CAD, basic engineering calculations, trajectory analysis, dynamics and mission control, supersonic external aerodynamics, and internal heat flow. Part 1 of this article series, which appeared in the last issue of ANSYS Solutions, covered the use of ANSYS software for thermal analysis of the thermal protection system and its integration with CAD, MCAD and CAE software. This installment describes how ANSYS Classic and ANSYS Workbench/DesignSpace were utilized to perform stress analysis of the rocket block.

Integrated Vehicle FEA Analysis with ANSYS


On May 13, 2003, the da Vinci Project announced Kindersley, Saskatchewan (www.kindersley.ca) (see Figure 1a) as the launch site for the Wildfire rocket-balloon. At the same time, our team (see Figure 1b) was finalizing the flight mission profile and flight safety analysis at its Toronto, Ontario research center, located at the da Vinci Polytechnic Institute.

Figure 1a. Aerial view of Kindersley airport, Saskatchewan

Figure 1b. Stress analysis presentation at the da Vinci Polytechnic Institute

ANSYS technology was widely utilized throughout all stages of the project, including external and internal loading of the space capsule, landing impact loading, parachute deployment, rocket block ground handling, vibration analysis, and static and dynamic inertial loading of the flight hardware. Aeroheating, aerodynamic and capsule stress analyses were described in a paper presented at the third EADS International Re-entry Vehicle Symposium in Arcachon, France, in March 2003 (http://www.davinciproject.com/beta/Technical/TechnicalMain.html). In the present article, we describe selected aspects of the rocket block inertial stress analysis.
The general outline of the rocket block (global model) structural skeleton is given in Figures 2a and 5a,b and also at http://206.210.04.185/fea_davinvi_text.html. Our main objective was to build an integrated 3-D computer prototype (global model) that could be utilized for truss structure design and evaluation.
Our approach was implemented in several stages. We built separate vertical and angled truss models using the ANSYS Classic 7.0 and ANSYS Workbench 7.0 interfaces, compared results, and developed a good physical understanding of the stress distribution in the fundamental elements of the global model. We then utilized buckling theory to estimate buckling loads and established truss thicknesses on the basis of load safety analysis. We then moved toward an integrated rocket block FEA.
Our first bold attempt was to work with ANSYS DesignSpace/Workbench 7.0, directly utilizing bi-directional associativity with Autodesk Inventor 7 or the SAT plug-in interface. However, we quickly realized that because of the high length-to-thickness aspect ratio of the trusses (typical for our structure), it was impractical to build accurate grids on any PC hardware available to us.

Figure 2a. Rocket block outline shown in ANSYS Workbench and DesignSpace 7.0 (shell is
suppressed)

Figure 2b. Upper truss structure imported in ANSYS Workbench

We were working with desktop P4-class PCs, and both memory and speed were insufficient to convert and solve the entire geometry. Thus, we decided to utilize the ANSYS Classic interface to build a parametrically controllable grid on a fixed geometry template, and to use Workbench for the analysis of individual components (latches, locks, bolts, individual trusses).
Our rocket block analyses were split into three phases. In the first phase we performed parametric studies of loaded trusses of various thicknesses, lengths and inclinations using both ANSYS Workbench and ANSYS Classic (see, for example, Figure 12b). In the second, we worked with ANSYS Classic beam and shell elements (Figures 7a,b) and estimated moments, forces and deformations (Table 1, Figures 8a,b). These results were subsequently used to perform truss buckling analysis (Table 2). During the third phase we built a 3-D brick model of the truss structure. Shell elements were utilized for the aero-shell, and inertial loading (from the space capsule and fuel tank) was estimated and applied at the appropriate locations. A parametric 30-deg (1/12th of the model) grid was built and then rotated around the circumference. We utilized the full 360-deg geometry to account for possible three-dimensional loading, although in some cases a half-model can be sufficient.
Our grid size estimates and first analyses (utilizing the 3-D brick model) quickly demonstrated the limitations of the Pentium 4 PC hardware available to us: many junctions were gridded with insufficient resolution and showed staggered patterns, especially visible on element solution outputs (see Figures 4a,b). RAM limitations, the memory management model and overall bus throughput were clear show-stoppers. We evaluated alternative hardware platforms and approached Sun Microsystems Canada to provide us with its scalable Sun Blade server technology, either stand-alone or as a cluster of two dual-processor Sun Blade 2000 workstations. The Sun Blade 2000 features a high-performance crossbar-switch system interconnect that provides high bandwidth (up to 4 GB/s) for ultra-high-speed processors and graphics subsystems, running on Solaris 9. Each workstation has 8 GB of shared RAM, with two workstations offering 16 GB and the server offering 32 GB. The system is powered by 64-bit 1200-MHz UltraSPARC III Cu processors with 8 MB of external cache per CPU. ANSYS provides superior parallel scalability in both shared memory (AMG solver) and distributed memory (DDS solver) regimes, so our engineering department can effectively implement flexible processor loading. Our subsequent experience demonstrated the great potential of scalable shared memory servers when coupled with algebraic multigrid (AMG) parallel solvers (Solvers International and ANSYS).

Figure 3a. Blade 2000 Workstation

Sun Microsystems entered into a technology partnership agreement with the da Vinci Space Project, which made it possible for us to utilize Sun Microsystems hardware and to build much more accurate three-dimensional models. In this article we demonstrate work-in-progress results obtained on the Sun Blade 2000 (see Figures 3a,b).

Figure 4a. Initial simulation results using PC, iteration 1 (60K model)

Figure 4b. Initial simulation results using PC, iteration 2 (130K model)

Rocket Block Loading Conditions and Constraints


Inertial and aerodynamic load variation was estimated using three-dimensional trajectory analysis, implemented on the MathWorks Matlab 6.5/Simulink 13 and Maple 8 and Mathcad software platforms. Sample trajectory results were demonstrated in the EADS paper, and the axial and lateral load components were shown in Part 1 of this article. The da Vinci Wildfire rocket is launched from a giant helium balloon, so it will be difficult to secure a precise vertical launch angle; a window of +/- 30 degrees has therefore been considered, which can result in significant variation in lateral loads and poses a serious challenge for safe structural design. The truss and shell structure also needs to be rigid enough to keep the bend curvature angle below 0.5 deg. An increase in lateral loads will contribute to lateral distortions and the associated bending moments on the structural components. The global model presents many challenges, first among them the selection of proper constraints and zero-displacement boundary conditions to fix the structure statically. Rockets and airplanes do not really have restraint points; they essentially hang in the air. Airplanes are in static balance during horizontal flight, while rockets move with acceleration and thus experience inertial loads.
ANSYS offers two types of restraints. Conventional restraints are used for static systems. If static analysis with inertia relief is used, an equivalent free-body analysis is performed; this is a technique in which the applied forces and torques are balanced by inertial forces induced by an acceleration field. Displacement constraints on the structure should be only those necessary to prevent rigid body motions. We utilized both approaches, and we also performed the free-body analysis manually to estimate the equivalent inertial loading (see Figure 5b); this was built into our flight simulator software. Axial loading stress diagrams showed that the compression load (-) (inertial mass force) is relieved into tension (+) as we traverse axially into the internally pressurized engine chamber. For the static analysis we fixed the inner portion of the engine ring, thus limiting its axial and radial displacements. All other rocket block components were allowed to deform as appropriate.

Figure 5a. Functional elements of the structure

Figure 5b. Loads on the rocket block

We evaluated several different trajectories and several characteristic time points, including engine launch, engine cut-off, and maximum dynamic loads. In this article we present an on-design lateral load scenario that takes place during a 15-deg off-vertical launch and presumes that the reaction control system (RCS) returns the vehicle to a vertical flight profile. We considered 2.31g N_x (axial) and 0.5176g N_y (lateral) at a trajectory time of 2 sec from engine launch.

This leads to the following boundary conditions (see Figure 5b). The force applied from the space capsule, 7112 N x (2.3139 + 1) = 23568 N, is directed axially down, with a lateral component of 7112 N x 0.5176 = 3681 N (see Figure 5b). Forces from the N2O fuel tank are applied at the tank cap mounting rings: (3300 + 15161 N (N2O fuel)) x 3.3139 = 61178 N directed axially down, and (3300 + 15161) N x 0.5176 = 9555 N in the lateral direction, at the tank ring. In our analysis we assumed 4130 steel properties, with truss O.D. = 38.1 mm and thickness = 1.6 mm for the upper and lower trusses, and O.D. = 25.4 mm and thickness = 1.2446 mm for the engine-to-aero-shell trusses.
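The bookkeeping behind these load numbers is simple enough to script. The short Python sketch below reproduces the capsule and tank forces from the load factors and weights quoted above; the variable names are ours, not taken from the project's flight software.

g_axial = 2.3139      # axial load factor N_x at t = 2 s (from the text above)
g_lateral = 0.5176    # lateral load factor N_y

capsule_weight = 7112.0            # N
tank_weight = 3300.0 + 15161.0     # N, tank structure plus N2O fuel

def applied_loads(weight_n, n_axial, n_lateral):
    # The axial load includes the 1g static weight on top of the thrust acceleration.
    return weight_n * (n_axial + 1.0), weight_n * n_lateral

print(applied_loads(capsule_weight, g_axial, g_lateral))   # approx. (23568 N, 3681 N)
print(applied_loads(tank_weight, g_axial, g_lateral))      # approx. (61178 N, 9555 N)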

Integrated Truss Beam Element and Shell Model


In the preliminary design phase, a simple beam-type element (BEAM4) model was built to simulate all truss tubes and ring structures, and a shell-type element (SHELL63) was used to simulate the aero-shell structure. It had 6300 elements and took 1.5 min to run on a P4 PC. The main objective was to obtain the axial forces and bending moments in the tubes and rings. This information can then be used for the design of the appropriate sections. Maximum stresses can be re-estimated using the detailed brick element global model. As expected, the tubes were under compression and experienced end moments; thus the trusses must resist compression buckling and flexural bending at the same time, and the tubes should be designed and evaluated as beam-column structures. The required design loads can easily be obtained from the beam element model. The appropriate design formulas can be found in steel structure design codes or textbooks; see, for example, Bruhn, Analysis and Design of Flight Vehicle Structures, Chap. C8, and the Handbook of Steel Construction (p. 127). For a beam-column under compression and end moments (internal loads), the secondary bending is considered. The section design is based on the interaction equation:

fa/Fa + Cm Ca fb/Fb <= 1 (from the Canadian code)


Fa : buckling strength

fa : compressive stress

Fb : bending strength

fb : bending stress

Cm : reduction factor

Ca : magnification factor

fa = compressive stress = P/A

Fa = buckling allowable strength = φ Fy (1 + λ^(2n))^(-1/n) (the compressive resistance is Cr = A Fa)
where φ = 0.9, n = 1.34, r = (I/A)^(1/2), λ = (KL/r)(Fy/(π^2 E))^(1/2)

Cm = 0.6 - 0.4(M1/M2) >= 0.4 if there are no transverse loads between the ends (M1/M2 positive for end moments in opposite directions)
Cm = 0.85 if both ends are hinged, with transverse loads
Cm = 1.0 if both ends are fixed, with transverse loads

Ca = 1/(1 - fa/Fe), where Fe = Euler buckling stress = π^2 E/(KL/r)^2

Fb = allowable bending strength = 0.9 Fy for a tube
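As a minimal illustration of this beam-column check, the Python sketch below implements the formulas above for a round tube. The elastic modulus, effective length, applied moment and Cm value are illustrative assumptions for the example call, not values taken from the design.

import math

E = 200e3        # MPa, elastic modulus of steel (assumed)
Fy = 500.0       # MPa, yield strength used elsewhere in the article
phi, n = 0.9, 1.34

def tube_properties(od, t):
    # Area (mm^2), radius of gyration (mm) and section modulus (mm^3) of a round tube.
    di = od - 2.0 * t
    A = math.pi / 4.0 * (od**2 - di**2)
    I = math.pi / 64.0 * (od**4 - di**4)
    return A, math.sqrt(I / A), 2.0 * I / od

def interaction_ratio(P, M, KL, od, t, Cm=0.85):
    # fa/Fa + Cm*Ca*fb/Fb, with Fa = phi*Fy*(1 + lam^(2n))^(-1/n) and Fb = 0.9*Fy.
    A, r, S = tube_properties(od, t)
    lam = (KL / r) * math.sqrt(Fy / (math.pi**2 * E))
    Fa = phi * Fy * (1.0 + lam**(2.0 * n))**(-1.0 / n)
    fa, fb = P / A, M / S
    Fe = math.pi**2 * E / (KL / r)**2          # Euler buckling stress
    Ca = 1.0 / (1.0 - fa / Fe)                 # moment magnification factor
    return fa / Fa + Cm * Ca * fb / (0.9 * Fy)

# Lower-truss tube size from the article; load, moment and length are illustrative only.
print(interaction_ratio(P=4176.0, M=50.0e3, KL=800.0, od=38.1, t=1.6))   # < 1 means adequate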
The resulting beam element model safety factors are presented in Table 2. They indicate that the lower truss and the engine-to-aero-shell truss are the weakest links in the present design. Additional effort is required to modify truss thicknesses and junction design to ensure safety factor uniformity. This can be accomplished utilizing the new ANSYS 7.1 variational technology capability or through manual iterations.

Figure 6. Beam column P & M

Figure 7a. Axial load 3.86g

Figure 7b. 2.31g axial, 0.51g lateral

After the tube and ring thicknesses were established, a detailed brick element model was created to simulate the truss tubes and rings. The aero-shell was still modeled utilizing shell elements. Because of the high stresses near the tube and ring junctures, beam element stress results are not very accurate in these areas. Therefore a brick element (8-node SOLID45) model is required to comprehensively evaluate the safety of the proposed structural design.

Table 1
Forces and Moments

Case 1: 3.86g Nx
Item                         axial force   My-i      Mz-i      My-j    Mz-j
upper truss                  -3224         3472      -17094    -4423   11916
lower truss                  -4176         -6938     -4196     7903    2751
engine to aeroshell truss    -873          11102     8262      -176    -7003
capsule mount ring           -2861         68831     207130    -       -
N2O tank ring                1787          -242930   4801      -       -
aeroshell attach ring        856           -107910   5444      -       -

Case 2: 2.31g Nx, 0.51g Ny
Item                         axial force   My-i      Mz-i      My-j    Mz-j
upper truss                  -3888         -3158     8912      -8207   -12935
lower truss                  -             -         -         39960   28859
engine to aeroshell truss    -3066         273       24154     -       -
capsule mount ring           -2467         61969     -161694   -       -
N2O tank ring                3349          -529720   21369     -       -
aeroshell attach ring        3672          -388460   17713     -       -

The Table 1 data summarize the moments and force components in all key system elements and were extracted from ANSYS visual outputs (see Figure 8a for the lower truss and Figure 8b for the aero-shell).

Figure 8b. Aeroshell bending stress

Table 2
Safety Factors

Items                        Safety Factor I   Safety Factor II   Safety Factor III
upper truss                  10.9              4.1                4.1
lower truss                  1.56              1.3                1.3
engine to aeroshell truss    2.59              2.7                2.7
capsule mount ring           -                 2.3                2.3
N2O tank ring                -                 1.2                1.2
aeroshell attach ring        -                 1.4                1.4
aeroshell (3 mm thick)       -                 7.4                -

Safety Factor I is based on manual buckling analysis per the Canadian code. Safety Factor II is based on the 130K-element brick model, assuming a yield strength of 500 MPa. All estimates included a load factor uncertainty of 1.5. From the results presented in Table 2 (both beam and brick models), we can conclude that the present design exhibits considerable non-uniformity, that safety factors on several elements are close to the threshold, and that buckling is not likely to occur.
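Our reading of the Safety Factor II column is simply the yield strength divided by the factored peak von Mises stress. The small Python sketch below reproduces the lower truss and N2O tank ring entries from the 130K-element peak stresses quoted later in the article; this is our interpretation of the table, not a formula published with it.

Fy = 500.0              # MPa, assumed yield strength
load_uncertainty = 1.5  # load factor uncertainty applied to all estimates

def safety_factor(peak_von_mises_mpa):
    return Fy / (load_uncertainty * peak_von_mises_mpa)

print(round(safety_factor(252.0), 2))   # lower truss, 130K model  -> about 1.3
print(round(safety_factor(269.0), 2))   # N2O tank ring, 130K model -> about 1.2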

Brick Model
The brick model implementation has gone through three iterations, and we are presently working on iterations four and five. The first two iterations were 60,000 (Figure 4a) and 130,000 (Figure 4b) elements in size. However, due to machine speed and memory limitations, the resolution near tube/ring junctions in high-stress areas was not sufficient (Figure 4a). A refined brick element model (third iteration) was therefore built utilizing the ANSYS 7.1 release and was run on a Sun Blade 2000 workstation. The model included 348,000 elements (Figures 9a,b) and was tested using the iterative (PCG), direct sparse matrix, and iterative AMG (parallel) solvers. The single-processor direct sparse solver took 35 minutes to run; the iterative PCG solver took 90 minutes. Total RAM allocation was 2.6 GB (3.47 GB for the AMG solver). ANSYS estimated the following rating for the Sun Blade: 161 MIPS, 125 scalar MFLOPS, 188 vector MFLOPS (the ANSYS defaults are 80/20/40). Using ANSYS runtime statistics we estimated that a 550,000-element model will require approximately 4 GB of RAM, and with memory saving options (MSAVE) we can run a 1-million-element model on a Sun Blade 2000 workstation. If we link two workstations together, we can increase the total memory pool to 16 GB and run it in 2-3 hours in distributed domain mode.
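These memory projections amount to a simple linear extrapolation from the 348K-element run. A rough sketch, with the proportionality constant taken from the sparse-solver figure quoted above:

gb_per_element = 2.6 / 348e3      # GB of RAM per element, sparse solver, 348K run

def ram_estimate_gb(n_elements):
    # Very rough linear extrapolation; solver choice and MSAVE change the constant.
    return n_elements * gb_per_element

print(round(ram_estimate_gb(550e3), 1))    # ~4.1 GB, close to the 4 GB quoted above
print(round(ram_estimate_gb(1.0e6), 1))    # ~7.5 GB before memory-saving options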

Figure 9a. Capsule ring, 348K element model

Figure 9b. N2O tank ring, 348K element model

The iterative PCG (1.e-6 accuracy) and direct sparse solvers demonstrated very close stress and deformation results. Lateral deformation (bending) of the structure is a very important limitation, as it ultimately leads to engine thrust misalignment (Figure 11a) and can lead to the failure trajectory shown in Figure 10a. A summary of the trajectory analysis (body rotation around its center of mass) is presented in Figure 10b. It clearly demonstrates that the allowable misalignment range is within 0.5 degree. Figure 11a shows the distribution of rocket block lateral deflection, with a maximum equal to 18 mm (0.018 m). The same figure also demonstrates two approaches to estimating the angle vs. deformation relationship. The nonlinear estimation takes into account the spatial deformation curvature and requires the lateral deformation to be within 11 mm (0.011 m). This suggests a further need to increase the structural stiffness of the rocket block, despite the fact that the entire design has safety factors above unity (see Table 2). Figures 11b,c also show the deformed shape in top and side views. Considerable lateral deformation takes place, especially on the upper truss/capsule ring side.

Von Mises stress distributions at several characteristic truss and ring locations are shown in Figures 12a,b,c,d,e and Figures 13a,b,c,d. Figures 12a,b compare stress loads calculated using the 130K-element model and the 348K-element model (252 and 242 MPa, respectively). In Figure 12e we show Workbench results that were obtained using a simplified stand-alone truss model (239 MPa). These are the highest stress areas of the rocket block, and all models demonstrate close results; however, the higher resolution model provides much better quality field data with smooth gradient transitions. In Figures 12d,e we show the stress distribution at the N2O ring-to-truss junction location (the truss is not shown); Figure 12e shows the backside of Figure 12d. The same location is exemplified in Figure 4b (130K-element PC-based model). Notice the considerable improvement in the stress distribution smoothness and the considerable reduction in maximum stress on the ring: 269 MPa in the old model versus 150.39 MPa in the new model. We can also notice that the peak stress (in the old model) was located near the lower truss attachment point. Upon closer review of the same location (Figure 12e) we notice that the ring stress at the truss attachment point is equal to 97 MPa, i.e., it is almost 2.5 times smaller. This finding will be further investigated and confirmed using a local grid dependency study; however, it clearly shows considerable potential design benefits and a radical improvement in solution quality and accuracy. Lower stress will lead to thinner and lighter parts or to a higher safety factor for the given component.
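The link between lateral deflection and thrust misalignment discussed above can be illustrated with a small-angle estimate. The effective bending length in the sketch below is our assumption for illustration only; the article's nonlinear estimate, which accounts for the deformed curvature, is more restrictive (11 mm rather than 18 mm).

import math

def misalignment_deg(tip_deflection_m, effective_length_m):
    # Small-angle thrust misalignment implied by a lateral tip deflection.
    return math.degrees(math.atan2(tip_deflection_m, effective_length_m))

L_eff = 2.0   # m, assumed effective bending length (not stated in the article)
print(round(misalignment_deg(0.018, L_eff), 2))   # current maximum deflection
print(round(misalignment_deg(0.012, L_eff), 2))   # target after stiffening the lower truss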

Figure 10a. Trajectory simulation with 1-deg engine thrust vector misalignment

Figure 10b. Allowable misalignment range

Figure 12b. Lower truss/ring junction (375K model)

Figure 12c. Truss results analysis using Workbench, maximum stress 239 MPa

We are concurrently working on the fourth and fifth generation brick models, which will consist of 700,000 and 1,000,000 elements and will be implemented on the Sun Blade 2000. The fourth generation model will utilize a shared memory (single node, two processor) AMG implementation and is expected to consume 7.5 GB of RAM; the fifth generation model will use over 10 GB of RAM and will be implemented utilizing the ANSYS DDS distributed domain solver.

Figure 12d. Ring model details

Additional examples of von Mises stress distributions in various key system components and
assemblies are shown in Figure 13a,b,c,d.

Figure 13a. Engine-to-aero-shell truss and aero-shell attachment ring, maximum stress 162 MPa

Figure 13b. Stress distribution in truss/ring/shell junction, maximum stress 242 MPa

Figure 13d. Stress distribution in the half-geometry, showing the maximum von Mises stress of the entire structure, 242 MPa

A summary of maximum nodal stresses and yield strength safety factors is presented in Table 3.

Table 3
Items                          Safety Factor   Stress, MPa
upper truss                    4.9             98.9
lower truss                    1.9             242.4
capsule mount ring             5.9             81.8
N2O fuel tank ring             3.2             150.4
aeroshell truss attach ring    2.9             162.9
engine to aeroshell truss      2.9             162.7
aeroshell                      2.4             200.2
engine truss                   8.9             53.8

The 348K-element model results (for von Mises stress) clearly indicate that the lower truss structure has the highest stresses and presents the weakest link. We further verified these results by running a 525K-element model, which demonstrated the same maximum stress and lateral deformation. This confirms the conclusion that came out of the beam element model. We recommended further increasing the lower truss thickness, and we anticipate that this will increase overall system structural stiffness, thus helping to reduce lateral deflections to 12 mm (0.012 m).

Conclusions
ANSYS software tools were heavily utilized by da Vinci Project engineers throughout all phases of research, design and manufacturing. They allowed effective integration with the parametric Autodesk Inventor 7 CAD package and Matlab 6.5/Simulink 13 to form a streamlined computational design analysis (CDA) process. Using the comprehensive set of computational solution technologies from ANSYS, we analyzed a multiplicity of thermal and stress problems encountered by our project team. These technologies included (but were not limited to) the Workbench bi-directional parametric CAD interface, variational design technology and the superior parametric grid generation capability of ANSYS Classic. The wide set of utilized features included beam, shell and brick element models and a selection of iterative and sparse matrix serial and parallel solvers.

Not all finite-element results are created equal. If PC hardware imposes considerable modeling limits (model size, resolution, and high aspect ratios), it inevitably raises the bar of required human expertise, slows down the CDA process and requires considerable extra engineering resources to estimate and validate uncertainties. That in turn leads to lower safety or product over-design. Implementation of powerful scalable workstations allowed us to shorten the design process and the required engineering time considerably, by a ratio of roughly three months to one week.

ANSYS tools performed at their best when coupled with the scalable, state-of-the-art Sun Microsystems Blade 2000 workstation-servers under the Solaris 9 64-bit operating system. We found the shared memory parallel mode to be the easiest and fastest to work with. Advanced hardware became the enabling environment that brought out the best in both the ANSYS Classic and ANSYS Workbench environments.

Space will never be the same.

Acknowledgements: to David Grossman for aerial photographs of Kindersley airport.


For more information on the volunteer da Vinci Space Project, visit http://www.davinciproject.com


By Professor John Draper


Managing Director
Safe Technology Ltd.

Engineers have known for more than 150 years that metal can fail in fatigue, but "fingers crossed" used to be the design rule for metal fatigue. Advanced CAD packages could provide accurate stresses, and dynamic modeling could predict loads and vibration effects with impressive accuracy. But the final and most important calculation - how long will it last in service? - was an over-simplified, almost back-of-the-envelope calculation surrounded by uncertainty. Now engineers can perform fatigue analysis on machined, forged and cast components in steel, aluminum and cast iron, and on complex assemblies containing different materials and surface finishes, and most recently can even incorporate the effects of high temperature and time-dependent creep-fatigue in the calculation, using state-of-the-art software represented by fe-safe v5 from Safe Technology Ltd.
Fatigue used to be a black art that produced financial black holes. Premature fracture of engineering components costs Europe and the US about 4% of GDP each year. Potential errors were hidden behind the euphemism of safety factors. And manufacturers paid the price, in overweight components that cracked prematurely, a seemingly endless series of prototype developments, unpredictable warranty claims and loss of customer confidence. Traditionally, fatigue failures have been fixed by over-design. But increasingly, engineers are under pressure to design down to save weight and material costs; over-design is no longer a viable option, and the need for sophisticated fatigue analysis tools becomes increasingly apparent.

How Cracks Initiate


In 1850 the I.Mech.E. discussed the results of fatigue tests on wrought iron, in which an iron bar was subjected to repeated cycles of loading to simulate 90 years in the service life of a railway axle. They considered that the raised shoulders used to locate the wheels on the axle contributed to the failures, and that failures would not occur provided the stresses did not exceed the elastic limit.
So three essential features of fatigue design were already being discussed in 1850: that failures are caused by cyclic loading; that stress concentrations at changes of section reduce the fatigue life; and that there is a safe working stress below which failure will not occur (although its definition is not quite as simple as that proposed in 1850).
We now know that fatigue cracks initiate in weaker grains or grain boundaries in metals
under the action of repeated stress cycles and that these small cracks may propagate, also
as a result of cyclic stresses, until the material fractures. But to calculate the fatigue life we
must consider every significant load in the service life. The complexity of service loading, and
the complex stress states which can occur during the service loading, means that fatigue
analysis is much more challenging than simple design to withstand maximum loads, and it
has taken the past 150 years to produce the algorithms needed for confident fatigue design.

Technology Advances
The past fifteen years have seen the transformation of fatigue analysis into a sophisticated computer-aided design tool with an accuracy at least equal to that of other software in the design and analysis process. Fatigue analysis software combines component loading, FEA stresses and materials data, and performs advanced multi-axial fatigue analysis. The software interfaces directly to FEA packages and is supplied complete with a database of material fatigue properties to which users can add their own data. It calculates where and when fatigue cracks will occur (the fatigue hot-spots), the factors of safety on working stresses for rapid optimization, and the probability of survival at different service lives (the 'warranty claim' curve). The results are presented as contour plots of fatigue lives, stress safety factors and probabilities of failure, plotted using standard FEA viewers and graphics software.
The accuracy of modern fatigue analysis is based on research that originated in the 1950s and really started to be applied during the 1980s, when in-vehicle load measurement became available together with analysis software and low-cost computers. This early software analyzed measured strain histories and was used for prototype assessment and post-failure investigation. Local stress-strain (or critical location) fatigue analysis uses a mathematical model of the material's response to each loading event, including the effects of inelastic stresses that may occur locally in fillet radii or other stress concentrations. This integrated approach works equally well for high-cycle components and for components where some of the fatigue damage is caused by less frequent high loads. The traditional separation of fatigue into high-cycle and low-cycle analysis is no longer necessary.
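The local stress-strain approach evaluates life from the strain amplitude at the critical location. A standard strain-life (Coffin-Manson-Basquin) relation can be inverted numerically, as in the Python sketch below; the material constants are illustrative textbook-style values, not data from the fe-safe database.

import math

E = 200e3                    # MPa, elastic modulus (illustrative steel values)
sigma_f, b = 900.0, -0.095   # fatigue strength coefficient (MPa) and exponent
eps_f, c = 0.35, -0.58       # fatigue ductility coefficient and exponent

def strain_amplitude(reversals):
    # Strain-life curve: ea = (sigma_f/E)*(2N)^b + eps_f*(2N)^c
    return sigma_f / E * reversals**b + eps_f * reversals**c

def life_reversals(applied_amplitude, lo=1e2, hi=1e9):
    # Bisect (in log space) for the number of reversals 2N at the applied amplitude.
    for _ in range(100):
        mid = math.sqrt(lo * hi)
        if strain_amplitude(mid) > applied_amplitude:
            lo = mid
        else:
            hi = mid
    return mid

print(f"{life_reversals(0.002):.3g} reversals to failure at 0.2% strain amplitude")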
Materials databases contain reliable fatigue properties for a large number of materials, and material testing to published standards is widely available at low cost. Local stress-strain fatigue methods were originally developed to analyze strain gauge measurements, often taken from prototypes.


During the past 15 years, fatigue analysis techniques have been extended to analyze results from finite element models. FEA is computationally intensive: using elastic-plastic FEA to model a component's stress-strain response to a long time history of service loading is unacceptably time-consuming. For this reason, techniques have been developed to take a single set of results from an elastic FEA, scale the stresses by the time history of loading, and calculate the extent of any plasticity that may develop at each node on the model. This takes place in the fatigue software, where the analysis is much more computationally efficient. Multi-axially loaded components can be analyzed by simple superposition of the elastic results before estimating plasticity effects.
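Conceptually, this elastic-superposition step is plain linear algebra: one stress tensor per unit load case, scaled by the corresponding load history and summed. A minimal Python sketch with made-up numbers (the plasticity correction and damage counting that follow in a real analysis are omitted):

import numpy as np

# Elastic stress tensors at one node, one row per unit load case,
# in the order [Sxx, Syy, Szz, Sxy, Syz, Sxz] (MPa per unit load).
unit_case_stress = np.array([
    [120.0, 30.0, 0.0, 15.0, 0.0, 0.0],   # unit vertical load
    [ 40.0, 10.0, 0.0, 55.0, 0.0, 0.0],   # unit braking load
])

# Load histories, one row per load case (same number of rows as above).
load_history = np.array([
    [0.0, 0.8, 1.2, 0.5, -0.3, 0.0],
    [0.0, 0.1, -0.4, 0.9, 0.2, 0.0],
])

# Stress tensor history at the node: superposition of the scaled elastic results.
stress_history = load_history.T @ unit_case_stress        # shape (n_time_steps, 6)

sxx, syy, szz, sxy, syz, sxz = stress_history.T
von_mises = np.sqrt(0.5 * ((sxx - syy)**2 + (syy - szz)**2 + (szz - sxx)**2)
                    + 3.0 * (sxy**2 + syz**2 + sxz**2))
print(von_mises.round(1))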
Until recently these methods could only be applied to uni-axial stress states. This was
because of the complexity of calculating cyclic plasticity effects for multi-axial stresses.
Suitable techniques of acceptable accuracy are now available, and it has become common
practice to treat all components as multi-axially stressed. Principal stresses that change their
orientation are handled using critical plane analysis, a search technique to find the direction
of crack initiation at each node on the model. Figure 1 shows the fatigue life contour plot of an
aluminum alloy suspension component, generated using modern fatigue analysis software
working from a finite element computer model. The software correctly predicted the failure
locations and gave excellent correlation with the test lives.

FIGURE 1. FATIGUE LIFE CONTOURS FOR A SUSPENSION COMPONENT

Moving Away From Design-Test-Redesign

During design, when the component exists only in a 3D CAD system, potential failures can be
corrected cheaply. Once metal has been cut and moulds or dies produced, fatigue failures
become much more expensive, and the planned production time-scale is in jeopardy. Costs
really escalate if prototype testing fails to reveal a potential failure and the component is put
into production. There is then a real possibility of warranty claims, recalls and accident
liability. If manufacturers are to escape from the costly cycle of design-test-redesign, they
need to invest a little more in design.
Figure 2 shows what happens if durability assessment is built into the design process to
assess and validate virtual components before cutting metal. True, initial design costs are
higher because the analysis is more sophisticated, but compensation comes in the form of
increased accuracy, which gives the confidence to remove 'virtual metal' from areas that are
over-strength in fatigue. Lower weight may reduce dynamic forces and produce further
benefits in other components. There will also be immediate savings in material costs in
production.

FIGURE 2. COST AND TEST BENEFITS FROM FATIGUE ANALYSIS DURING DESIGN

Further payback comes during prototype testing, since it may not be necessary to test at all.
More probably, a prototype could be required by the customer or demanded by legislation. But
because of the extra effort and sophisticated analysis at the design stage, there is now a
much greater probability that the prototype will be right first time. Test times are reduced and
are more predictable, so they can be integrated into the project plan. And the product gets to
market faster. In addition, there are some other less obvious benefits. Fatigue analysis software can identify which loads are important at the predicted failure points, so prototype testing can be simplified and the risk of inadvertently omitting an important loading can be avoided.
Fatigue analysis software is also able to estimate the future warranty claim curve, and in
particular how it might vary for different user profiles. An example is shown in Figure 3.

FIGURE 3. EFFECT OF CUSTOMER PROFILE ON IN-SERVICE FAILURES

This shows the effect of two different user profiles on the calculated in-service failure rate; from this, the design could be refined or exclusions could be put in the warranty. Either way, the information is available to assess both options and make the most cost-effective decision.

Continuous Development
With these techniques now widely accepted, Safe Technology's fe-safe software is being extended into the much more demanding areas of high temperature and time-dependent creep-fatigue. These features are vital considerations in engine design, for example. The main considerations here are:
Stress-strain response, which depends on instantaneous strain rate and temperature
Bulk relaxation of stresses with time
Strain-aging of the material
Phase relationship between stress and temperature, which can vary from cycle to cycle

Although quite specific materials data is required, the results justify the effort, and very successful life estimates are being obtained. Figure 4 shows the results of high-temperature fatigue analysis of a prototype automotive piston. The premature failure of the prototype was predicted accurately by the fatigue analysis software, and a number of computer-aided-design iterations were used to develop the successful design. It is interesting that one of the design changes was to increase the radius of the fillet at the point of crack initiation - a solution which goes back to the earliest days of fatigue design.

FIGURE 4. HIGH TEMPERATURE FATIGUE ANALYSIS OF A PISTON

Conclusion
Fatigue test lives are usually plotted on a logarithmic scale. Fatigue progress also seems to follow a log scale. 150 years ago, engineers were defining the initial rules for fatigue design. 15 years ago, a simple uni-axial fatigue analysis from a few strain gauges was considered a success. Today we can achieve a much more accurate fatigue analysis for complex multi-axial stress states, for real service loading, with the effects of temperature and manufacturing process, for a million nodes in a finite element model, and highly detailed fatigue life contour plots can be produced. Safe working stresses for design optimization can be calculated, and warranty claim curves can be estimated. Advances in computer power and faster software algorithms allow durability analysis to be integrated even more tightly into the design process. Successful fatigue analysis for increasingly demanding components is now a practical proposition. As more products are designed right, rather than just strong enough, to meet ever more stringent weight and cost saving targets under tighter production schedules, fatigue analysis can no longer be ignored or done with your fingers crossed.

John Draper worked as a fatigue design specialist in the UK aircraft industry, then as a project manager for
fatigue research and failure investigation projects in the British Railways R&D Division. He founded Safe
Technology Limited in 1987. He is also Chairman of the Engineering Integrity Society and an Honorary Visiting
Professor to Sheffield Hallam University, England. For more information on Safe Technology and fatigue
analysis software, visit www.safetechnology.com.
Acknowledgement: The author wishes to thank Metaldyne International, Federal Mogul Technology and MIRA
for all images supplied.


Lynn Lewis
HP CAE Alliance Manager
Hewlett-Packard Company

Obtaining maximum value from a company's investments in CAE demands the highest levels of performance and capability from the computing environment. Extraordinary advances in computing price and performance now enable users to increase model resolution and complexity for deeper insights into improving products.
At the forefront of these advances is technology from Hewlett-Packard Company, which provides a wide range of computing hardware as well as the expertise to help companies select the right solution for their needs. HP solution architects will evaluate a company's CAE requirements, working with users and the HP CAE team to navigate the complex range of options available today.
The recent release of HP Integrity servers and clusters gives CAE users new choices with compelling benefits. These systems support all ANSYS Inc. products and ANSYS Inc. partners' applications on the HP-UX operating system. Many of these applications are also available for Linux systems. This article provides guidance on how these Itanium 2-based systems meet the toughest CAE requirements without compromise.

Server Solutions for CAE Applications


Integrity Server Configurations. HP Integrity servers deliver powerful throughput, flexible operation, and easy system management, supporting hundreds of CAE users running a complete suite of CAE applications. Based on Intel Itanium 2 processors, these server architectures have comparatively low memory latency, massive memory capacity, and a fast I/O subsystem.

The two-processor HP Integrity rx2600 entry-level server provides fast and affordable
performance for complex technical computing.
The four-processor HP Integrity rx5670 supports larger models with up to 48 GB of main
memory and increased scalability for SMP parallel processing.
The high-end HP Integrity Superdome, with up to 64 Intel Itanium2 processors, delivers
record-breaking raw performance and job throughput required by large engineering
organizations.
Operating Environments. Running the HP-UX 11i version 2 Technical Computing Operating Environment (TCOE), HP Integrity servers support virtually all CAE solver codes at the fastest possible performance level and with SMP or MPI scalability up to 64 processors. Current HP-UX customers on PA-RISC systems achieve seamless integration of Integrity servers into their existing high-performance computing infrastructure. Current SGI IRIX, Tru64 UNIX, Sun Solaris, and IBM AIX customers can readily migrate to HP's UNIX for economy and operating system independence.
Running the 64-bit Linux operating environment, HP Integrity servers support most leading solver codes, with more being released periodically. Performance scaling is achieved with up to four processors on entry-level Integrity servers today, or 64-bit Linux clusters of these nodes can be built with up to 128 CPUs for MPI applications such as computational fluid dynamics (CFD).
Integrity Server Advantages.
Superdome runs multiple CAE jobs from many users simultaneously with little
degradation in time-to-solution.
2- and 4-CPU servers may be racked as a loosely coupled farm running multiple
simultaneous jobs with no contention.
All SMP server memory is available for pre- and post-processing tasks such as grid
generation and domain decomposition/discretization.
Servers run all solver codes (SMP or parallel MPI) for operational efficiency.
Single operating system image on one large server provides the easiest system
administration.
HP Services performs on-site installation and management (visit www.hp.com/hps for
details).
Investment Protection. HP Integrity servers provide best-in-class performance, an open architecture roadmap, aggressive acquisition prices, and low operating costs, all of which help maximize a company's return on investment in CAE. HP also offers the proven AlphaServer series for CAE solutions, ideal for current Alpha customers. Based on the Alpha EV7 processor, AlphaServers have an impressive record for performance, scalability, availability, and management. CAE solver applications are supported on Tru64 UNIX operating environments through 2006.
ANSYS Support for Tru64. Based on feedback received from its customers, ANSYS Inc. has reconsidered discontinuing support for the HP Compaq Alpha UNIX Tru64 as early as ANSYS 8.1 and now plans to support the Tru64 version until at least ANSYS 9.1 (Spring 2005). The ANSYS Inc. letter informing customers of this decision is at www.ansys.com//hardware_support.

HP Integrity cluster HP Integrity Superdome

Cluster solutions for Optimization or Multi-Physics


Cluster Configurations. Clustering multiple SMP nodes to form a large, scalable system is increasingly popular among CAE users. With recent advances in interconnect technologies, system management software, and distributed-memory CAE applications, clusters provide attractive and cost-effective solutions for the high performance and capacity requirements of fluid and explicit analysis. HP offers complete cluster solutions including servers, interconnects, software, management tools, integration, implementation, customer services, and training, all from HP and trusted HP resellers. The HP-UX Integrity cluster is typically based on the two-processor HP Integrity rx2600 servers.
hptc/ClusterPack Software. The HP-UX Integrity cluster comprises multiple compute servers controlled by one head node (or management server), which directs the cluster by means of an integrated software package, hptc/ClusterPack. It provides a comprehensive set of integrated tools that simplify the tasks of cluster configuration, cluster system administration, and distributed job management. Multiple interconnect technologies are supported for the HP-UX Integrity server cluster. For many CAE applications, industry-standard Gigabit Ethernet has proven cost-effective in connecting a moderate number of servers. The HP ProCurve Switch 5300xl series is well suited for a 16-node cluster, with the possibility of scaling up to 32 nodes on a single ProCurve Switch 5308xl. For CAE applications more dependent on low latency or high bandwidth, HP offers the HyperFabric2 interconnect supporting three messaging protocols: TCP, UDP, and HMP (HP's HyperMessaging Protocol).
Integrity Cluster Advantages.
Enjoy attractive price/performance benefits while maintaining a 64-bit architecture for
large models and solution accuracy.
Add or subtract nodes easily for maximum flexibility and scalability.
Achieve superior rack density in crowded IT spaces.
Run MPI applications (see page 4).
Choose your preferred private switched-fabric interconnect: HyperFabric2 for HP-UX,
Myrinet for Linux, or Gigabit Ethernet for both.
HP Services performs on-site installation and management (visit www.hp.com/hps for
details).
Linux clusters. HP also offers flexible Linux-based solutions. Linux clusters are available
using both HP Integrity servers with Intel Itanium2 processors and HP ProLiant servers with
Intel Xeon processors.
Server Farm for Structural Analysis Solvers. Available configurations: multiple rx2600 or rx5670 servers, each with a direct-attach SCSI or FC disk array having 5+ spindles per disk controller for striping scratch files; HP-UX 11i / TCOE with the VxFS file system; tuning information available. Benefits include: multiple users submit 1-, 2-, and 4-way jobs simultaneously for fast job turnaround and little contention; 64-bit address space for models > ~2 million DOF; solvers gain the price/performance benefit of Itanium 2 for ROI.

Helpful URLs
Explore the following websites for further information about products and services provided by
Hewlett-Packard Company.
HP solutions for CAE
www.hp.com/go/hptc
www.hp.com/techservers/cae
www.hp.com/go/integrity
Integrity Cluster Quote Requests and Information
www.hp.com/techservers/clusters/linux_clusterblocks
HPTC Consulting and Integration
www.hp.com/techservers/products/consulting_and_integration

Lynn Lewis is HP CAE Alliance Manager at Hewlett-Packard Company, a technology solutions provider to
consumers, businesses and institutions globally. The company's offerings span IT infrastructure, personal
computing and access devices, global services and imaging and printing for consumers, enterprises and small
and medium businesses. For the last four quarters, HP revenue totaled $71.8 billion. More information about
HP is available at www.hp.com


By Arend Dittmer
Integration Architect
Platform Computing, Inc.

Responding to the industry trend toward handling increasingly complex models in shorter timeframes using multiple solvers, ANSYS Inc. introduced the parallel processing Distributed Domain Solver (DDS) with ANSYS version 5.7. Part of the Parallel Performance for ANSYS add-on module, DDS solves large static or transient nonlinear analyses over multiple systems (distributed memory parallel, DMP) and/or over multiple processors on a single machine (shared memory multi-processor, SMP).
To fully leverage parallel processing technology, distributed resources have to be managed effectively. Platform Computing's Load Sharing Facility (LSF), a sophisticated distributed resource management system, increases the throughput of DDS jobs by optimizing the host allocation for distributed DDS jobs. Moreover, it ensures that DDS jobs are controllable and that accounting information is available. LSF makes it possible to run and manage parallel MPI-based distributed applications as if they were running on a single system.

Load Sharing Facility


LSF is a solution that is highly complementary to distributed applications such as ANSYS DDS. LSF launches jobs on nodes that meet given resource requirements and chooses the least loaded systems when multiple candidate nodes qualify. On Non-Uniform Memory Architecture (NUMA) based systems, LSF supports automated dynamic allocation of CPU sets, optimized for short memory access times.
Moreover, LSF facilitates the enforcement of business priorities through centrally defined policies for coordinated resource allocation. Examples of supported resource management policies are the preemption policy, the fair share scheduling policy (which allows fair access to CPU resources based on recent usage history) and the FCFS (First Come, First Served) policy.
Through the integration of LSF with MPI libraries, MPI-based applications such as ANSYS DDS can be submitted to LSF transparently, without the need to edit MPI-specific host files.
Job Control and Accounting
A key problem inherent to distributed applications like ANSYS DDS is the lack of control over the application at runtime. With every distributed DDS run, DDS processes are launched on multiple hosts. The Message Passing Interface (MPI) standard defines an interface for passing messages between these processes, but does not define a method to control all the distributed processes that belong to the same DDS run. It is not possible for an end user to kill or suspend a DDS run spanning multiple hosts with one command. MPI-based applications are typically out of control once they have been launched. This has several implications:
In order to control a DDS run, an end user has to log into multiple hosts and manually identify and kill processes.
Without an additional control mechanism, some LSF scheduling policies cannot be enforced. It is, for example, not possible to let a higher-priority parallel DDS job suspend a lower-priority DDS job in order to use its resources, because the signal for job suspension is not sent to all processes of the lower-priority DDS job.
An exit of one process due to an error condition may result in runaway processes wasting CPU cycles. As all the distributed processes of a DDS run depend on each other, an exit of one DDS process should result in the exit of all processes that belong to this application run. Lacking a central point of control, this behavior cannot be enforced.
There is no easy way to get accounting information (e.g. CPU time usage, memory usage) for distributed DDS jobs.
In order to make it easier to control parallel applications and to allow the enforcement of scheduling policies, LSF provides a tool that acts as the single point of information and control for parallel runs of MPI-based applications: the Parallel Application Manager (PAM). PAM tracks all processes of a distributed application and forwards signals to all processes of an application instance. Moreover, PAM is aware of processes that exit due to an error condition and automatically kills a parallel job if any one of its processes exits.
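The control problem PAM solves can be made concrete with a toy example. The Python sketch below is a generic illustration, not Platform's PAM: a parent launches several worker processes, forwards termination signals to all of them, and tears the whole group down as soon as any worker exits.

import signal
import subprocess
import sys
import time

# Stand-in for one rank of a distributed solver (hypothetical placeholder command).
WORKER_CMD = [sys.executable, "-c", "import time; time.sleep(60)"]

def run_group(n_workers=4):
    workers = [subprocess.Popen(WORKER_CMD) for _ in range(n_workers)]

    def forward(signum, frame):
        # Forward the control signal to every process of the run.
        for w in workers:
            if w.poll() is None:
                w.send_signal(signum)

    signal.signal(signal.SIGTERM, forward)
    signal.signal(signal.SIGINT, forward)

    # If any worker exits, terminate the rest so no runaway processes waste CPU.
    while any(w.poll() is None for w in workers):
        if any(w.poll() is not None for w in workers):
            for w in workers:
                if w.poll() is None:
                    w.terminate()
            break
        time.sleep(1.0)

if __name__ == "__main__":
    run_group()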
With the automated forwarding of control signals through PAM, it is also possible for LSF to enforce scheduling policies for distributed application runs. In the preemption scenario mentioned above, PAM enables LSF to suspend all processes belonging to a lower-priority distributed application run to give way to higher-priority jobs. With PAM as a single point of control, LSF can enforce resource limits, for example a wall-clock runtime limit or a CPU-time limit.
Beyond providing job control capabilities for distributed parallel application runs, PAM also facilitates the collection of resource usage information. The accounting information collected by LSF on each host is aggregated by PAM and is accessible to end users as well as administrators. In tandem with solutions like Platform Intelligence, this accounting information can be used to charge for utilized compute resources.

LSF and PAM are bundled in Platform Computing's HPC package and work with MPI implementations from the major system vendors HP, IBM, SGI and Sun. A generic implementation of PAM that works with all MPI implementations is also available. Particularly in the context of low-cost, high-performance compute environments, it is important to note that LSF/PAM are available for building IA32 and IA64 Linux clusters.

Integration with Vendor-Specific MPIs


The following figure shows a simplified high-level overview of the startup procedure for distributed ANSYS DDS runs on HP-UX 11 systems, using the HP MPI 1.8.3 libraries.

Launching distributed ANSYS DDS jobs through Platform LSF

A job is submitted to LSF by prepending LSF's bsub command to the command line for launching a distributed DDS job. According to the number of requested CPUs, LSF allocates hosts that are to be used for the distributed launch. For host allocation, LSF considers the defined scheduling policies, specified resource requirements (no requirements were specified in this example) and the dynamic load situation on candidate hosts. After identifying suitable execution hosts, LSF Batch launches PAM on one of the identified execution hosts. In this example, PAM loads an HP-specific Vendor Component Library (VCL), which is part of HP's MPI package. The VCL contains a startup function that launches the MPI application on the allocated hosts. Through the MPI_proc_info function, PAM is able to retrieve the process id and host of each process belonging to the parallel ANSYS DDS run. After executing this function, PAM has the information required to forward job control signals or gather accounting information for DDS jobs launched through LSF.

Arend Dittmer is Integration Architect with Platform Computing, Inc. The company plans, builds, runs and
manages enterprise grids that optimize IT resources according to core business objectives. Platform is at the
forefront of grid software development and has helped more than 1,600 clients gain powerful insights that
create real, tangible business value.


A defining feature of the CFX-5 CFD software is its high performance coupled multigrid solver.
While it is true that coupled multigrid is a key technology to achieve high performance, many
additional technologies are involved. High performance CFD requires a cohesive strategy and
excellence in six technology areas: meshing, accuracy, reliability, speed, physics, and
flexibility. As with most technologies, overall success is limited by the weakest link. CFX-5
delivers high performance CFD by a unique combination of strategies in these six areas, as
outlined below.

Meshing Strategy
The CFX-5 meshing strategy is one of flexibility. No single strategy is ideal for every case.
Choice is the key. Four element types, hexahedral, tetrahedral, wedge or prism, and pyramid,
are available. From these building blocks almost any style of mesh is possible. CFX-5
supplies its own state-of-the-art hybrid meshing technology which exploits a mixture of
element types, as well as automated curvature, edge and 3-D proximity detection, boundary
orthogonality, and smooth transition scales. The result is near-automatic best-practice CFD
meshes that resolve the geometry and the boundary layer. CFX-5 also supports a large
number of external grid formats. More choice. Finally, combining multiple mesh styles in a single analysis is easy using the General Grid Interface (GGI) technology in CFX-5,
developed and perfected over a 12 year period in CFX-TASCflow. For example, a hex mesh
created with ICEM Hexa can be connected to a hybrid unstructured mesh created with the
CFX-5 mesher.

Accuracy
Every company has finite computing resources, no matter what the size. It remains essential
to get the most accurate answer possible on your mesh, be it 20,000 or 20 million nodes.

CFX-5 delivers high accuracy per node through a combination of several key factors. Element-based discretization is employed, unlike the volume-filled methods in most other commercial CFD codes. High accuracy per node is the result. On a tetrahedral mesh, each integration volume in CFX-5 consists of 60 surface integration points, on average. In comparison, competitor volume-filled methods use a meager 4 surface integration points per volume. High accuracy per node does not mean high cost per node. The CFX-5 element-based method is inherently efficient, providing a natural condensation of effort. For example, on a pure tetrahedral element mesh there are 1/5th the number of linear equations to solve compared to the same mesh used in a volume-filled method.
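The 1/5th figure follows from typical tetrahedral mesh statistics: a tet mesh usually contains roughly five to six elements per node, so a scheme with one control volume per node carries far fewer coupled unknowns than a scheme with one volume per element. A back-of-the-envelope Python sketch, with the element-to-node ratio and the number of coupled unknowns per location as stated assumptions:

def equation_counts(n_nodes, unknowns_per_location=4, tets_per_node=5.5):
    # Coupled equations for a node-centred scheme vs. one volume per element,
    # on the same tetrahedral mesh (u, v, w, p assumed coupled at each location).
    n_elements = tets_per_node * n_nodes
    return unknowns_per_location * n_nodes, unknowns_per_location * n_elements

node_centred, element_centred = equation_counts(1_000_000)
print(node_centred, element_centred, element_centred / node_centred)   # ratio ~5.5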
Rapid, reliable reduction of numerical error as the mesh is refined is critical. Numerical errors occur in the key transport terms (advection and diffusion) and in sources. The CFX-5 element technology yields accurate local gradients for diffusion, and sub-grid source resolution. Because the advection term error is often dominant, CFX-5 offers all of the industry-accepted first and second order advection schemes. Our premier second order scheme is robust and accurate. CFX-5 is the only commercial CFD software that enables second order discretization by default.
In any CFD prediction there are model errors in addition to numerical errors. The turbulence model is often a major source of model error. Choice is available (over 16 different turbulence models, from RANS to DES), but for some this is daunting. In response, CFX-5 has established a new workhorse turbulence model, SST. This model is as economical as the k-ε model, but offers much higher fidelity, giving excellent answers on a wide range of flows and near-wall mesh conditions. For more complex flows, CFX-5 offers full Reynolds stress models based on the ω-equation with automatic wall treatment, and a newly developed zonal Detached Eddy Simulation (DES) formulation based on the SST model.

Reliability
A CFX-5 simulation involves the solution of a set of coupled non-linear equations. The nonlinear solution is obtained by repeatedly updating and solving a set of linearized equations. It
sounds obvious, but one of the best ways to achieve reliable convergence is simply to solve
the linear equations well. There are many approaches but one method stands out: coupled
multigrid. The CFX multigrid method has evolved over a 20-year period, rooted first in CFX-TASCflow and now in CFX-5. It has been used to solve literally millions of simulations. This is
the only linear solver available in CFX-5. It is fully automatic. It is fully scalable (linear increase
in CPU time with problem size). It is insensitive to mesh aspect ratio. It greatly reduces
sensitivity to the time step and relaxation factors. It is fully parallelized. It is fully implicit. It
solves the linear equations tightly. All of this means the user experiences far fewer problems
in getting the desired computation from start to end. Many coupled solvers and multigrid
solvers exist in other codes, but none like that in CFX-5. CFX has relied on this technology,
solely, for the past 10+ years.

Speed

Any number of computers can be combined to perform a given analysis: parallel computing.
The CFX-5 parallel implementation combines the memory and CPU resources of many
machines to reduce wall-clock time, and to make larger simulations possible. All physical
models, features, modes, and options in CFX-5 work in parallel, no exceptions.
CFX-5 makes it as easy to perform a simulation in serial or parallel, using a multi-processor computer, a network of single-processor computers, or any combination. Parallel
performance maintains both CPU scalability and memory scalability, even for large numbers
of processors. The CFX-5 parallel performance is due, in part, to the fact that parallel and
serial simulations follow the same convergence history. It is easy to parallelize a CFD code,
but much harder to parallelize while preserving its convergence performance. And with the
release of CFX-5.6 even the native mesh generation process is parallelized. Another industry
first.

Physics
A wide range of physical models is needed in any modern CFD software package. The fidelity
of a simulation is linked directly to the choice of physical models available. CFX-5 has seen a
huge expansion in its physical models in recent years. Steady and transient turbulence
models (16+), frame change models (3), mixture models, multiphase models
(homogeneous or inhomogeneous, N phases), free surface models, interphase mass
transfer models like boiling, condensation and cavitation, Lagrangian phases, combustion
models (8+), real fluid models, high-speed flow models, radiation models (3+), transient models; the list goes on. CFX-5.6 is model rich. A key point is that these models all inter-operate with each other, and in conjunction with the key technologies already discussed: for
all element types, across all GGI connection types, using the coupled multigrid solver, in
parallel, with accurate numerics and advanced turbulence models. This inter-operability is
referred to as a full feature matrix, and is a significant benefit to CFX-5 users.

Flexibility
Finally, high performance is only possible when the software system is flexible and open.
This is achieved in CFX-5 through a variety of approaches. A highly flexible and powerful User
Fortran system is available for custom development. The user interface system is
programmable and scripted. The pre-processing, solver and post-processor all use a
uniform, simple, state language, as well as embedded Perl language support. All functions
run interactively or in batch mode. The result is that CFX-5 can be customized and embedded
into your end-use system, in a clear and reliable manner.
In the future you might hear it said that CFX-5 performs well because of coupled multigrid.
Maybe your response could then be: "Yes, that's true, but that's not the whole story." High
performance CFD is achieved in CFX-5 by application of a cohesive strategy and excellence in
all six of the key technology areas: meshing, accuracy, reliability, speed, physics, and flexibility.
Figure captions:
CFX1427, CFX1426 - Automatic 3-D proximity detection in the CFX-5 hybrid mesher. The mesh density increases automatically where objects are close to each other.
GGI2.eps - Dissimilar meshes can be connected together using General Grid Interface technology in CFX-5. The mesh structure and shape can change. The GGI connection is fully automated for the user.
LES.eps - Large eddy simulation (LES) of flow behind a cylinder. This fully transient turbulence model directly predicts and resolves the largest turbulent structures.
graph.eps, graph2.eps, dragpolar.eps, cfx1961, cfx1966 - AIAA Drag Prediction Workshop results obtained with CFX-5. The supplied hex mesh involves 5.8 million nodes and has a grid aspect ratio in excess of 5000:1. The predicted lift and drag are stable in about 110 timesteps. This transient calculation runs on 16 Linux PCs in about 20 hours. The AIAA Drag Prediction Workshop results obtained using CFX-5 were computed within the EU Flomania project.


Challenge:
To improve energy efficiency in the design of refrigerant-cooled hermetic motors for water- and air-cooled chillers.

Solution:
Implement CFX to simulate heat transfer phenomena in high-power induction motors and
refrigerant flow through the hermetic motors.

Benefits:
Enabled engineers to explore large-scale unsteady flow phenomena to obtain the highest
energy efficiency in a wide range of applications. Minimized environmental impact by
improving reliability of the products, increasing service intervals, and reducing the need for
disassembly of the units. Also eliminated possible emission of refrigerant associated with
these processes.
Using CFX to analyze the unsteady flow field inside the chiller, track down the response of the
unit structure, and predict air motion around the machines, allowed engineers to deliver units
with lowest noise radiation.

Introduction
As living standards rise and precision manufacturing proliferates around the world, air-conditioning equipment accounts for more and more energy consumption. Stricter
environmental regulations and increasing environmental consciousness of air conditioning
equipment users demand more efficient compressors with less impact on the environment.
The Trane Company, a leading worldwide supplier of indoor comfort systems, is using the
latest technology, CFX software from ANSYS, Inc., to achieve these goals.

Improving energy efficiency is a key issue in the design of chillers for an air-conditioning system. Efficient chillers not only reduce the operating cost but also reduce greenhouse gas emissions by reducing power consumption. A one-percent increase in efficiency brings substantial savings over the life of a medium-size chiller.
The social and environmental benefits are even
greater. Trane engineers are using multi-objective
optimization to improve chiller efficiency. Different
from the traditional trial-and-error technique, Trane
engineers analyze multiple variables and effects
simultaneously and optimize the design to achieve
the maximum benefit.

Challenge
A refrigerant-cooled hermetic motor is an integral part of Trane water- and air-cooled chillers.
This technology provides high efficiency and extends the life of high-power induction motors.
In these hermetic refrigeration machines, coolant flow is needed to carry away the heat generated by the high-power motors to improve energy efficiency and motor durability. However, the coolant flow also induces windage loss in the system.
Two competing mechanisms need to be quantified simultaneously to achieve the optimized
design of these hermetic systems. Trane engineers use the latest computational fluid dynamics
(CFD) technology to simulate heat transfer phenomena in high-power induction motors and
refrigerant flow through these motors concurrently.
These simulations bring many benefits compared to traditional build-and-test techniques:
temperature distribution in the motor is evaluated to eliminate hot spots and obtain uniform
cooling; through-flow induced torque and loss are determined; the impact of rotor surface
geometry and speed are assessed; and the required mass flow rate for effective cooling is
predicted. As a result, hermetic refrigerant machines are designed with optimized
configuration for the best energy efficiency.

To improve energy efficiency, design and analysis move from the individual component to the
entire system. Compressor aerodynamic performance is no longer limited to single parts, but
extended to include the interaction of multiple components.
This integrated analysis ensures the maximum performance of each component as a part of
the compressor system at various working conditions.

Solution
Trane has developed a Virtual Laboratory for the design of refrigerant compressors. It is
based on CFX, a CFD software package that Trane has found to be ideally suited to modeling
indoor comfort systems and which includes a real-gas equation of state for the refrigerant.
The result is that engineers can easily obtain overall performance and local flow field details
for
complete compressor stages without building a prototype.
Different from single-component analysis, the Trane Virtual Laboratory simulates the entire
compressor, readily providing information on the interactions between the components. Since
it quantifies the impact of design changes of a single part on the performance of the whole
compressor, this feature has special value for the overall improvement of machines. This
capacity proved very useful when Trane investigated the options to use different diffusers in
compressors.
The impact on the overall compressor performance was analyzed in detail, with the
simulations indicating that the change would lead to different flow fields inside the upstream
and downstream components. The individual performance of these components was altered
when the diffuser was changed. The information obtained was of great value to designers,
guiding product improvements and avoiding unnecessary design iterations.
Transient simulations in the Trane Virtual Laboratory provide further physical insights into
compressor performance and flow field unsteadiness, reduction of which is critical to improve
efficiency and reduce vibration and noise levels. Pressure fluctuation distributions inside the
impellers and diffusers can be obtained for different compressor designs and loss
mechanisms inside the flow field can be studied thoroughly.

CFX allows Trane engineers to explore large-scale unsteady flow phenomena. Removing
these instabilities has the benefit of obtaining the highest energy efficiency in a wide range of
applications. To minimize the impact on the environment, Trane also develops technology to
improve the reliability of our products, increase service intervals, and reduce the need for
disassembly of the units. Possible emission of refrigerant associated with these processes
is eliminated.
For the compressors designed by Trane, the force and torque on components are quantified
to ensure safety and reliability. The force and torque are calculated from the pressure and
viscous shear stress obtained from flow field simulations on the surface of the components.
These force and torque calculations are also used to predict the vibration and noise of the
system. The possibility of failure associated with fatigue of the components is therefore
reduced. All this analysis is conducted using a whole system simulation under strict dynamic
loading conditions. This cutting-edge technology helps Trane chillers earn a reputation as the
world's most reliable refrigeration machines.
In recent years, the sound produced by HVAC equipment has attracted more attention. Since
the equipment is typically located near building occupants, noise radiation must be
controlled. Reducing acoustic impact is particularly important for hospitals, schools, and
music halls. The machinery sound from HVAC equipment is caused by temporal variation in
the flow field. The internal flow field vibrates the machines, setting the air around them in
motion. This unsteadiness reaches the human ear in the form of noise.

Benefits
By using CFX to analyze the unsteady flow field inside the chiller, track down the response of
the unit structure, and predict air motion around the machines, Trane engineers deliver units
with the lowest noise radiation. Since Trane developed the industry's first scroll compressor, the Trane 3-DR scroll compressor, in 1987, these devices have dominated the market for small-tonnage air-conditioning equipment.
Compared to reciprocating compressors, where intake, compression, and discharge
occur in discrete steps, scroll compressors conduct intake, compression, and discharge
phases of operation simultaneously in an on-going sequence. Their smooth operating
characteristics reduce force and torque variation inside the compressor, making scroll
compressors quiet and reliable. Lubrication is critical to the durability of scroll compressors, since the oil pump is an integral part of the compressor. Trane engineers
developed the technology to use the latest particle tracking techniques to predict the oil
circulation rate inside the scroll compressors.
The oil coming from different regions inside the compressors is tracked through the
operating process. Oil circulation features of different designs and oil droplet sizes are
obtained at different operating conditions. CFX helps designers develop compressor designs
with adequate lubrication and an ample supply of oil. The fully lubricated moving components
extend the durability of the system and reduce the need to replace parts and service the units.


John Krouse
Editorial Director

Finite-element analysis (FEA) and related engineering simulation technologies bring significant benefits to user companies. How this
software is applied and what role it is given in overall operations
have a direct bearing on the value of the technology to the enterprise
and the ultimate justification for making such an investment.
As an analytical and predictive tool, simulation can be used to study
stress, deformation, fluid flow, heat distribution and other critical
attributes in much greater detail and far less time than running
physical tests on hardware prototypes. On this level, simulation brings a lot to the table in
improving designs of parts and assemblies, and as a foundation of virtual prototyping in
evaluating the behavior of complex products before any hardware is built.
Benefits are cranked up a notch when such simulation is applied early in the cycle when
critical decisions are made regarding the configuration of the product. Through such
frontloading of the product development process, engineers can make changes and refine
designs easily and inexpensively compared to finding and fixing problems later in
development when making changes is far more expensive and time-consuming. Up-front
simulation requires changes in the way analysis is performed. Greater cooperation and
closer communication are required between the design department and analysis group, for
example, as are better data exchange technologies. Such changes make it much easier to do
analysis alongside a relatively narrow stretch of design.
Huge gains are possible by taking this a step further and using simulation continuously
within the design process throughout the entire product lifecycle - from concept and
engineering through prototype testing, hardware production, and after-delivery support. This
utilizes the power of simulation as a design tool not only before the project is handed off to

production but also as design continues throughout the life of the product as engineering
changes are made, often hundreds or thousands of them. This trend allows analysis to be
done more seamlessly within routine engineering through the use of optimization trade-off
studies, for example, that balance competing requirements such as weight, stress, and
manufacturability.
In this manner, simulation technology is used to compress the design cycle and even change
the product development process to reduce time and lower costs. The effective use of
simulation technology can thus have a significant impact on how operations are performed in
the engineering department, where blending the tools into the overall product development
process enables engineers, designers, and analysts to do their jobs more efficiently. These
benefits alone generally are more than sufficient to justify an investment in simulation tools.
The value of simulation for the enterprise extends far beyond this role of the technology as a
cost-cutter and time-saver in the engineering department, however. Smart companies are
finding that they can actually leverage simulation in changing their overall business focus.
Companies with experience in engineering simulation are using the tools to develop
innovative new products, and organizations that formerly never used simulation in their
engineering operations are investing in the tools as a way to take advantage of emerging
market opportunities, shifts in customer demands, new competitive pressures, and radically
different economic conditions.
Manufacturers that fabricate products for general applications might want to penetrate
specialized markets such as defense or biomedical, for example, which often demand well-documented results of rigorous analysis that can be provided only with simulation tools.
Likewise, subcontractors that once considered themselves strictly parts manufacturers are
now shouldering more responsibility for design and find themselves concerned with
analyzing stresses, evaluating deformation, and predicting overall product behavior.
Simulation also is a critical element in mass customization, design for manufacturability,
engineered-to-order product development, and other types of business approaches that
represent a shift in the traditional way a company may operate.
Such creative applications of simulation technology require out-of-the-box thinking, of course, especially from smart executives who today must be on their toes not only in running a business but also in re-engineering the company to take advantage of analysis tools that less astute competitors might never have heard of. For this reason, engineering simulation is now being discussed more in boardrooms than ever before, and the technology is assuming a critical role in the strategic plans of some of the world's most progressive companies, which are likely to be leaders in the coming years as many of their less imaginative peers get left behind in the dust.

John Krouse is president of Krouse Associates and editor of the industry newsletter Engineering Process
Journal covering enterprise-wide strategies for integrated product development. Krouse Associates provides

public relations counseling and writing services for technology-based companies in the CAE, CAD, CAM, PDM,
PLM and related markets. Previously with Penton Media for 18 years, Krouse was CAD/CAM editor of Machine
Design magazine, and editor and publisher of its sister publication Computer Aided Engineering magazine.


ANSYS Presents at SG Cowen 31st Annual Fall Technology Conference
James E. Cashman III, President and Chief Executive Officer of ANSYS, Inc., presented a
company overview at the SG Cowen 31st Annual Fall Technology Conference in Boston,
Massachusetts on Thursday, September 4. A live audio Webcast and archive are available at
www.ansys.com/newsrooms/investor.htm.

Siemens Broadens Relationship with ANSYS


ANSYS, Inc. recently announced a deal anticipated to be worth more than $5 million over the
next three years with Siemens. The three-year contract includes more than 200 seats of
ANSYS software, as well as seats from the ANSYS ICEM CFD Product Suite and services.
This agreement speaks to the long-standing relationship that ANSYS has established with
Siemens. ANSYS has provided Siemens with computer-aided engineering solutions for more
than two decades. During that time, Siemens implemented ANSYS core simulation and
virtual prototyping applications - such as ANSYS Mechanical - within the design process of a
large number of different products.
"Siemens relies on ANSYS for its superior engineering software capabilities," said Gerhard
Muller, Siemens PG IT. "Incorporating the ANSYS product family with the CAD-systems we are
using has allowed us to maximize our design process while containing design costs."
"The various industries that Siemens supplies closely mirrors the markets served by ANSYS,"
said Joseph Fairbanks, vice president of worldwide sales and support at ANSYS. "Our
ongoing relationship with Siemens signals their great confidence in the solutions ANSYS has
provided to enhance their product development process. We value our relationship with this

global leader, and we will continue to provide the premier technology solutions that they've
depended on for 10 years."

Structural Reliability Technology Announces the Release of a 3-D Fracture Mechanics Pre- and Post-Processor for ANSYS
In July 2003, Structural Reliability Technology (SRT), located in Boulder, Colo., released an
ANSYS-compatible version of its FEA-Crack software, which is a 3-D crack analysis program
with automatic mesh generation. In the past, creating a 3-D finite element model that contains
a crack has been tedious and time consuming, but FEA-Crack simplifies this process
considerably.
FEA-Crack is a Windows - based program where the user inputs basic information about
geometry, dimensions, material properties, and boundary conditions. The automatic mesh
generator then creates ready-to-run ANSYS input files (*.ans). If desired, FEA-Crack can be
used as a driver program to run multiple ANSYS jobs in batch mode. The FEA-Crack postprocessor module extracts data from ANSYS results files and computes various fracture
mechanics parameters including the J-integral, stress intensity factor, crack opening area,
and the failure assessment diagram (FAD). These results are presented in spreadsheet
tables and x-y plots. FEA-Crack also has 3-D graphics capabilities, which enable viewing
meshes, color stress maps, and deformed shape.
Until recently, FEA-Crack analyses were restricted to stationary cracks. However, a remeshing fatigue crack growth module has been added to FEA-Crack. Other crack growth
capabilities are currently in development.

Fatigue Analysis Software Suite Offers Increased Neutrality for FEA Users
Safe Technology announced the release of another FEA interface for fe-safe, further
increasing the neutrality of their suite of durability analysis software for finite element models.
fe-safe v5.00-11 is now available with a direct interface to the NASTRAN OUTPUT2 file format. This is a major sub-release of fe-safe v5, released by Safe Technology in January. The new interface can be downloaded by NASTRAN users directly from the company's Web site at
www.safetechnology.com.
This new development means that fe-safe now uniquely offers direct interfaces to the
following FEA file formats:
ABAQUS .ODB and .FIL

ANSYS .RST files


NASTRAN .OP2 and .F06
IDEAS .UNV
BEASY
Pro/Mechanica
Hypermesh
PATRAN
FEMSYS
CADFIX
MTS RPCIII
Adams
Other new developments also now incorporated in fe-safe v5.00-11 are:
Influence coefficients - evaluation of the individual contribution of a unit load on the strains or
stresses in a particular direction
Gauge outputs - this facility enables the strains or stresses in a particular direction to be
exported
This is in addition to the extended capabilities and substantial increases in speed of analysis
and ease of use already incorporated in fe-safe v5. These include:
complex loading conditions and advanced multi-axial fatigue capabilities
fatigue analysis of cast iron
fatigue analysis of welded structures
high temperature and creep fatigue analysis
fe-rotate (quick analysis of axi-symmetric components)
thermo-mechanical fatigue analysis
comprehensive signal processing capabilities (safe4fatigue)
default algorithms (automatic selection of the most appropriate method of fatigue
analysis based on the selected material, ensuring ease of use for even the nonspecialist engineer).

ANSYS CEO Named to Pittsburgh Technology Council Board Executive Committee
In September, the Pittsburgh Technology Council named ANSYS President and CEO, James
E. Cashman III, to the executive committee of its board of directors.
During Cashman's 30 years of experience in finance and operations, he has held management
positions at Structural Dynamics Research Corporation in the areas of international sales,
major account development and production planning. Programs instituted under his
supervision resulted in a 678 percent growth in international sales over a two-year period, a
183 percent increase in international sales revenue and profit growth of 190 percent.

Cashman holds a bachelor's degree in mechanical engineering and master's degrees in mechanical engineering and business administration from the University of Cincinnati in
Ohio. He was voted CEO of the Year during the Pittsburgh Technology Council's Tech 50
awards last year, and he has served on the Council's board since June 2000.

HP Leads in Worldwide Server Revenue in Fastest Growing Market Segments; No. 1 in x86, Linux, Blades and Windows Servers
Driven by continued strong customer demand for its ProLiant servers, HP reaffirmed its No. 1
position worldwide in the markets for x86, blades, Linux and Windows servers - the fastest
growing segments of the server market, both in terms of factory server revenue and unit
shipments, according to figures released today by IDC.(1)
HP also continued its strong position in the UNIX server market and remained No. 1 in
terms of worldwide total server shipments, with 30.8 percent market share for the second
calendar quarter of 2003.
HP held the following market leadership positions according to numbers released by IDC:
HP is No. 1 in the worldwide x86 server market, both in terms of factory revenue and unit
shipments, driven by its industry-standard Intel-based HP ProLiant servers;
Capitalizing on the continued strong growth of Linux, HP held a firm lead in worldwide
server revenue for the Linux market with 28.9 percent market share;
HP ProLiant BL blade servers lead in worldwide revenue and shipments for the x86
server blades market, with 31 percent market share in terms of unit shipments and 32.9
percent of factory revenue (an increase of more than 10 percentage points from the
previous quarter);
Driven by brisk HP Superdome server sales, HP leads in worldwide revenue for high-end enterprise servers (servers priced $500,000 or more) with 29.7 percent. HP also
tied for the lead in the UNIX midrange enterprise server market (servers priced from
$25,000 to $500,000) with 33.7 percent share of factory revenues worldwide; and
HP is No. 1 in the worldwide Windows server market segment with 33.6 percent share
of factory revenue, based on record ProLiant server sales worldwide during the second
calendar quarter of 2003.
"HP ProLiant systems remain, quarter after quarter, the world's best-selling industry-standard
servers, which shows that customers demand standards, value and innovation," said Scott
Stallard, senior vice president and general manager, HP Enterprise Storage and Servers.
"Together with the new 64-bit Integrity systems, HP offers the strongest industry-standard
based server lineup of any vendor in the marketplace today."
From the industry-standard ProLiant and Integrity servers, to the high-end Superdome and
NonStop systems, HP offers enterprise customers the broadest server portfolio in the
industry, along with the industry's leading storage and management solutions.

ANSYS Recognized as One of the Fastest-Growing Tech Companies by Business 2.0 Magazine
For the second consecutive year, ANSYS, Inc. has been named in Business 2.0's B2 100, the magazine's ranking of the fastest-growing technology companies. ANSYS ranked 81 on the
list, climbing five places from last year. The list of 100 was winnowed down from an original
group of 2,000 publicly traded tech firms.
To make the B2 100, companies had to meet rigorous financial requirements. Criteria for
making the final list included at least three years of trading on a major U.S. stock exchange, at
least $50 million in annual revenue, and positive cash flow over the most recently reported 12
months. Business 2.0 editors then ranked the companies with the help of Zacks Investment
Research of Chicago, using a combination of four financial criteria: growth in revenue, profit,
and operating cash flow during the past three years, and the 12-month stock return.
Cash flow growth counted for 40 percent of a company's ranking; each of the other criteria
counted for 20 percent.
In announcing this year's list, Business 2.0 editor Josh Quittner observed that "companies on
the 2003 B2 100 have shown remarkable results in a most challenging business
environment. But what the list shows is who to watch, who to watch out for, and what we can
learn from the leaders. Technology is alive and well, as proven by the performance of these
companies."
"Manufacturing continues to be an extremely competitive field where time and cost are both crucial factors that can make or break a product. ANSYS is providing technology that helps manufacturers compete more effectively, by accelerating product introduction and driving product innovation," said Jim Cashman, president and CEO at ANSYS, Inc. "Our simulation and virtual prototyping tools let you measure design performance directly on the desktop at any stage in the product lifecycle. We've succeeded because our technology works and because we can easily demonstrate its benefits to the bottom line."
The B2 100 is featured in the October 2003 issue of the magazine, on newsstands
September 22.

Upcoming Events
ANSYS Seminar Series
On-going
Locations throughout North America
ANSYS CFX, UK User Conference 2003

12 November 2003
Coventry, UK
German CFX Conference 2003
4 - 6 November 2003
Garmisch-Partenkirchen, Germany
PowerGen 2003
9-11 December
Las Vegas, NV
Nordic Update Seminar
30 October 2003
Helsinki, Finland
Nordic Update Seminar
31 October 2003
Copenhagen, Denmark
Nordic Update Seminar
3 November 2003
Oslo, Norway
Nordic Update Seminar
4 November 2003
Stockholm, Sweden
Benelux User Meeting
5 November 2003
Breda, The Netherlands
French User Conference
6-7 November
Beaune, France
Spanish User Conference
10 November 2003
Madrid, Spain
UK User Conference
11-12 November 2003
Coventry, UK
21st CAD-FEM Users Meeting
International Congress on FEM Technology
12-14 November 2003
Potsdam/Berlin, Germany
2003 Japan ANSYS Conference

20-21 November 2003


Crowne Plaza Metropolitan
Tokyo, Japan


By Brian Yu
Marketing Manager
3Dlabs

Professionals use applications like ANSYS to increase productivity. Similarly, a professional's hardware also should improve workflow. The core components of a Windows or Linux-based workstation include a motherboard with support for the latest components, large amounts of memory, a fast CPU, ample hard drive capacity, and, most importantly, a robust graphics accelerator. A workstation-class graphics accelerator (one used by professional designers and other engineering professionals) is required by demanding applications such as ANSYS to provide the latest features, offer a stable work environment, be seamlessly compatible with other software, and have powerful performance. In this way, the graphics card should be a transparent addition to project workflow, allowing users to work without having to worry about hardware performance or driver stability. With a workstation-class graphics card, one that has been built from the ground up with professionals' needs in mind, users need not worry about overloading their system and potentially losing hours of work.

Unlike cards adapted from PC games, professional-grade graphics accelerators such as the Wildcat
VP990 Pro card from 3Dlabs are developed to meet demanding analysis and engineering applications
running on technical workstations.

Workstation graphics accelerators are designed from the start with the intent of improving the
overall user experience for the customer with faster throughput and efficiency. Features like
image quality and stability are paramount, whereas graphics cards originally designed for
playing computer games on PCs (unlike true professional cards) are designed with one thing
in mind - speed. With gamers begging for more frames per second (FPS), gaming card
manufacturers (and those who manufacture so-called professional cards based on gaming
technology) take shortcuts and cut corners on rendering quality in order to hold the title of the
fastest card in the industry. That distinction in no way helps the professional user, as pixel
dropout is usually the result at higher FPS. Occasionally, the gaming enhancements cause
professional applications to stutter and sometimes stall. Consumer cards are not designed
to take on the heavy workloads that workstation applications impose on the graphics
processor. Professionals in CAD and CAE disciplines cannot afford to sacrifice image quality
and stability for the sake of speed.
Professional graphics accelerators are engineered to handle the rigorous demands of
professional CAD and CAE software. The models associated with these demanding
applications require a high level of hardware stability to ensure users' productivity and efficiency. Qualities such as visual accuracy and precision, speed and performance, and stability are examples of what to look for in a graphics card; these are the very qualities that set professional accelerators apart from the gaming-bred variety. Add industry-leading technology, rock-solid drivers, and relevant professional software certifications to that list, and you'll get a card that can hold up under the stress of demanding simulation
applications, such as the Workbench Environment or ANSYS.

Brian Yu is a Marketing Manager at 3Dlabs, the only workstation graphics company solely focused on
designing professional-grade graphics accelerators to run graphics-intensive applications. These professional
graphics accelerators go through rigorous testing, both internally and with application vendors, to ensure
seamless compatibility with the host hardware, the operating system, and other concurrently running
applications. Information on the company and its products can be found at www.3dlabs.com.


Introduction
Rapid developments in numerical simulation techniques, faster computing ability, and greater memory capacity are allowing engineers to create and test industrial equipment in
virtual environments. Through finite element analysis (FEA), these sophisticated simulations
provide valuable information for designing and developing new products, as well as
perfecting existing ones. Manufacturers have found this method eminently useful, as it helps
them to achieve better productivity at a lower cost per unit, and develop engineering
components that are easy to manufacture, and which make the most economic use of their
materials.
Tata Consultancy Services (TCS), of Mumbai, India, is at the forefront of FEA. Dilip K. Mahanty,
Ph.D., Group Leader of the Design & Analysis Center of the TCS Engineering Services Group,
and an expert in FEA, has used this method to perform numerous design evaluations and
modifications on components for everything from household appliances to locomotives. In
order to attain the utmost accuracy, Dr. Mahanty always uses ANSYS, a leader in Finite
Element Analysis software.
International Auto Limited (IAL), one of the foremost automotive component and aggregate
systems manufacturers in India, supplies sheet metal stampings and precision machine
parts to various original equipment manufacturers throughout the country.

Challenge
IAL's objective was to find a way to reduce the weight of the front axle of a commercial tractor.
Since the tractor had never been known to fail in the field, the design of its front axle was to be
used as the basis for the axle of the new vehicle. As the front axle undergoes the heaviest
load conditions of the tractor, it would take some intense testing and pinpoint modifications to
equal the success of the existing axle, while making it lighter.
Mr. S.S. Udgata, Research & Development Manager for International Auto, Ltd., called Dr.
Mahanty to collaborate with his team in the evaluation and modification of the design for the
new tractor.

Solution
Once again, Dr. Mahanty turned to ANSYS to solve the problem. The team members devised
a series of 13 grueling certification tests, including seven major cases, and six sub-cases,
through which they would put a geometric model of the front axle of the tractor, and use FEA/
ANSYS to evaluate it under different load conditions.
They built the model in Unigraphics, a CAD package, and then transferred it to ANSYS. Using an HP C3000 workstation, each iteration took about 30 minutes to solve the finite element model
for the given load and boundary conditions. ANSYS Mechanical software was used for pre-/
post-processing and solving.
First, they performed the Drop Test, wherein a 35/55-hp tractor was run on a hard course. At
35 kmph, the tractor approached a pit measuring 5 x 2 x 2.5 feet, and one of the front wheels
was allowed to fall into it. The tractor was pushed, under its own power, to the end of the pit,
where it stayed until the engine stopped.
Next came the five-phase Torture Test, in which the tractor was put on level and V-shaped test
tracks that simulated various unfavorable road conditions, including pits (potholes), small
humps, and large humps.

During alternating segments of this test, both front wheels fell into the pit; one wheel was in
the pit, while another went over a hump; one was on a plane, while the other was on a slope;
then, it was operated on a V-shaped road with small and large humps.
During both phases of the Wide Open Test, the front axle extenders were fully extended, while
the tractor ran, first with one wheel on a slope, and the other on a plane, then on the V-shaped
road with a hump, at 15 kmph.
For the Pit Test, the tractor went down one side of a 10-foot-deep pit with a 20-degree slope,
and up the other, at 30 kmph.
In the Eight (8)-Shaped Track Test, the tractor ran at 35 kmph on an 8-shaped track, with three
medium-sized humps positioned at 120° to each other in each half of the track. While
negotiating a curve, the steering wheel was turned until it reached its locking position.
The Impact Test had one side of the tractor crashing into a wall at 35 kmph.
In the Worst Load Test, all of the worst load conditions were applied to the axle, and then
analyzed.
The results of the 13 load cases on the current model indicated that the structure was safe
overall. Although there were a few high-stress areas, it was apparent that they were localized,
since the stresses died down within an element or two. These results were used as the
basis for comparison with the five proposed models that had evolved, each of which
emphasized weight optimization and easy manufacturability.
The proposed designs were then evaluated under the same selected worst load cases as
the existing design. The analyses of all five models yielded displacements and stresses
close to those in the existing design. The displacement increases were insignificant, while
the stress increases were close to 15 percent. All of the new designs met the structural
requirements. Ultimately, the U-box with extender was chosen as the best of the proposed
models.

Benefits
Not only did the proposed designs have a weight reduction of approximately 40%, but they
could also be produced without much welding, which meant marked savings in
manufacturing costs. The need for smaller components, such as bearings and knuckles,
also made them considerably less expensive.
Dr. Mahanty says, "This analysis work showcases the use of finite element analysis as a method for reduction of cost in terms of materials and manufacturing. The benefits provided are many. We could change the design and see the impact in a very short time, which helped in fixing a feasible design. Also, due to analysis results obtained from ANSYS, it was very much possible to check the design parameters, such as maximum stress level limits. And the reduction in the cost of production and weight significantly reduced the cost of the new design of the front axle."


By Charles Foundyller
President
Daratech, Inc.

Digital prototyping and simulation have greater near-term potential than perhaps any other
class of information technology to help manufacturers in a wide range of industries compete
in today's environment. Developing best-selling products at lower cost, on shorter schedules, at higher quality: few manufacturers we talk to believe these imperatives can be met without
putting digital simulation and digital performance models at the core of product development
and validation.
To meet these urgent business goals, the world's leading manufacturers are using digital
simulation technology in conjunction with work-process restructuring to shorten product
development cycles, manage increasing product complexity and variety, lower development
costs by reducing physical prototype counts, better control subjective product attributes that
create brand value and buyer appeal, and reduce warranty exposure, recalls and product
failures.
Helping accelerate the move to math is the dramatically improving price/performance of high-performance computers, together with higher-quality, better-managed deployment,
implementation and training. Also key is growing confidence among both practitioners and
management in simulation accuracy and correlation with physical test results. However,
bringing simulation tools and processes into the mainstream of product development, and
making their capabilities and results available to support every decision point (what some describe as simulation-based design) remains supremely challenging, even at companies
where program managers, discipline specialists and designers have all committed to
making such initiatives succeed.
Even more important than shortfalls in the technology, we believe, are the management

challenges for manufacturers. Program managers, department heads and engineers alike
repeatedly describe how the greatest barriers to broader, more effective use of simulation are
not technological but instead organizational, cultural and psychological. Indeed, the culture
clash encountered by digital prototyping's advocates is summed up in an adage some of these pioneers like to repeat: "Everyone believes the test results except the test engineer, and no one believes the analysis results except the analyst." What most everyone agrees on is that the capabilities offered by today's technologies are substantially ahead of manufacturers' ability to configure their product development processes so that the full
potential of these technologies is realized.
Our best-practices research among the industry's most successful implementers of digital prototyping indicates that, to remedy this, the first step is for management to gain a sound understanding of the capabilities as well as limitations of today's technologies, and put in
place organizational structures and processes that exploit what is available today while also
having the flexibility to continue adopting new, improved capabilities as they become available.
Management also needs to take a leading role in fostering cooperation and collaboration
among divisions, departments and disciplines that may have had little or no direct contact
with one another in the past, and that often resist change and new ways of working. In this
regard, management can profit by leading the effort to build and implement process and
platform frameworks that coordinate and tie together different groups and organizations, and
then extend these frameworks to be increasingly inclusive of suppliers and partners. Here,
best practices have demonstrated that bringing in new technologies can aid the restructuring of work processes, and serve to motivate and involve people in the new ways of working.
One notable gauge of digital prototyping's increasingly central role is the significantly higher
growth expected for this market than for mechanical CAD/CAM. By our forecast, expenditure
on digital prototyping and simulation will grow an average of 11.5 percent annually to nearly
$2.5 billion by 2007. By contrast, mechanical CAD/CAM expenditure is projected to increase
only 3 percent annually over the same period, to $5.1 billion.
Moreover, the breakout segment of digital prototyping that we term "systems performance modeling" will enjoy dynamic growth of 14 percent annually to more than $1.4 billion by 2007,
we project. Systems performance includes CFD, crash, dynamics and motion, forging and
mold design, durability and fatigue, heat transfer and thermal interactions, NVH, control
systems design and the like. The other main market segment, structural analysis, is
essentially the traditional CAE domain: solvers plus pre- and post-processors for structural
finite-element analysis. We project structural analysis expenditure will grow 6 percent
annually to nearly $1 billion by 2007.
Another mark of digital prototyping's ascendancy is seen in the automotive industry, which traditionally leads others in the use of simulation technology. We've seen that in automotive
vehicle programs, expenditure on physical test is declining, while expenditure on digital
prototyping and simulation is on the increase. In 1985, most customer investment in the test, measurement and verification domain was on physical prototyping and test. In 1995,
manufacturers were still investing heavily in physical testing. But today, many organizations
are spending as much on digital prototyping and simulation as on physical test and
measurement. While total spending on digital and physical methods together has not
changed dramatically, we believe the total is poised to increase as more executives and
managers come to understand the full potential of digital prototyping and simulation, and the
technology continues to meet more and more of practitioners' critical needs.

Charles Foundyller is founder and president of Daratech, Inc., a market research and technology assessment
firm specializing in CAD, CAM, CAE, PLM and related areas. The company hosts a variety of user forums and
technology workshops throughout the year and publishes a range of studies, newsletters, market reports,
sourcebooks, and industry statistics. Daratech's upcoming management-level conference on Digital Product
Simulation Strategies for Aerospace and Defense will take place Nov. 3-4, 2003, in Anaheim, Calif., focusing
on reducing costs, accelerating schedules, and improving quality and performance through strategic
application of digital simulation and validation technologies. For more information, call 617.354.2339 or visit
www.daratech.com.


By John Crawford
Consulting Analyst

Every so often I have the opportunity to introduce a new analyst to finite element analysis. I usually begin by presenting a framework for how one
views engineering analysis in general and finite element analysis in
particular. I then talk about the steps one goes through when doing an
analysis and interpreting the results. It is important to think about the
entire process up front because it's very easy to get wound up in the
details of doing an analysis and lose sight of the big picture.
The list below outlines the steps that I follow in my daily work and that I
share with others who are just getting started with finite element
analysis. While I have used examples and terminology from structural
problems, the guidelines listed are equally valid for heat transfer,
electromagnetics, CFD, etc.
1. Thoroughly understand the actual problem.
The first step in any analysis is to understand the problem. Don't accept someone else's interpretation of the problem. Look at the components and figure out for yourself how it works and what the real issues are. You'll know that you understand a problem when you can successfully explain it to someone else. If you can't explain it to another person, chances are good that you don't understand it yourself, and if you don't understand the problem you certainly aren't going to be able to analyze it properly and understand whether your answers are correct or not.
2. Predict what you think the answer will be.
Once you understand the problem you should try to estimate what you think the
answer will be and how the system will behave. Identify regions where you think high
stresses will occur, estimate what the deflected shape of the structure will be, and so
forth. Develop an image in your mind of what the component will look like after loads
are applied and use this to determine how you'll set up the model. More than any other step in the analysis process, this one depends on your engineering intuition to

lead you in the right direction.


3. Decide if finite element analysis is a reasonable method for analyzing this problem.
While finite element analysis is a very powerful tool, it isn't the only way of analyzing things and sometimes isn't the best way, either. Some problems are solved more
efficiently using classical techniques and others are best understood via experiment.
Make sure that finite element analysis is appropriate and reasonable before you
progress any further. If you can find a better way of solving the problem, use it.
4. Determine the type of analysis needed to obtain reasonable answers.
This is the most crucial part of the analysis process because you will make almost all
of the critical decisions that will define the path you will follow as you make the model,
solve it, and postprocess the results. The real world is three dimensional, transient,
and nonlinear, while the FEA world almost always involves some simplification of one
or more of these. Is a static analysis sufficient, or is a transient analysis necessary?
Will you need to do a heat transfer analysis to obtain a temperature distribution before
you do the stress analysis? Are nonlinear material properties needed? Can you take
advantage of symmetry to reduce the number of elements in the model? Will an
axisymmetric model provide satisfactory results? What kind of meshing techniques
are best suited for this geometry? Will you use free meshing, mapped meshing,
sweep meshing, or a combination of these? If dissimilar meshes are unavoidable,
will you use constraint equations or bonded contact elements to tie these regions
together? How will you apply boundary conditions? By answering these questions you
will define a blueprint for how you will do the rest of your work.
5. Determine the type of elements you will use.
Once you have decided on an approach you will need to choose the elements that you
will use to obtain the desired results. This is why it's so important to fully understand the problem and visualize how the components will behave. As an example, let's
consider a cylinder that is fixed at one end and has a load on the other end that
causes bending. We can model this several ways. The easiest and simplest way is to
use pipe elements (such as PIPE16). These elements will do a very good job of
simulating the way in which the cylinder will bend, but they assume that the cross
section of the tube does not change shape. If we model the tube using shell elements
(SHELL63, SHELL93, etc.) we'll be able to see if the tube changes cross section from circular to elliptical. Moving another step closer to reality, we could use solid elements (SOLID45, SOLID95, SOLID92, etc.) to see if the wall of the cylinder changes
thickness. Depending on what we think is important in our problem we can choose
the best way of getting an answer that we believe will satisfy our needs in an efficient
and effective manner.
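To make the pipe-element idealization just described concrete, the listing below is a minimal ANSYS command (APDL) sketch of the fixed-end cylinder with a transverse tip load. The dimensions, material properties, load magnitude, and mesh density are hypothetical values chosen only so the example is complete and runnable; they are not taken from any particular analysis.

/PREP7
ET,1,PIPE16              ! straight elastic pipe element
R,1,60,5                 ! real constants: outer diameter 60, wall thickness 5 (mm, assumed)
MP,EX,1,2.0E5            ! Young's modulus in N/mm^2 (steel, assumed)
MP,PRXY,1,0.3            ! Poisson's ratio
K,1,0,0,0                ! keypoints defining the pipe centerline
K,2,1000,0,0             ! 1000 mm long cantilever (assumed length)
L,1,2
LESIZE,ALL,,,20          ! 20 elements along the centerline
LMESH,ALL
FINISH
/SOLU
DK,1,ALL,0               ! fix all degrees of freedom at the supported end
FK,2,FY,-1000            ! transverse tip load of 1000 N (assumed)
SOLVE
FINISH

Switching the idealization to SHELL63 or SOLID45 changes the geometry and meshing commands, but the decision process described above stays the same.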
6. Determine the geometry needed to generate the elements.
The geometry you'll need to generate a mesh depends on the elements you have
chosen and the techniques you will use to mesh the model. While everything in the
real world is a 3D solid, the FEA world isn't necessarily 3D or solid. In the case of the previously mentioned tube, if we are using pipe elements we'll only need a series of
lines and arcs that define the centerline of the tube. If we are using shell elements we
have two possible paths to follow. One would be a series of lines that defines the

center line of the tube and a circle at one end that we can drag down these lines to
generate areas. Another is to make or import areas and skip the dragging operation. If
our tube will be meshed using 3D solid elements we could map, sweep, or free mesh
a volume, or we could use a center line and drag a 2D ring down it and generate the
volumes and mesh it at the same time. The geometry you need is a function of the
elements you will make. You might also find that you need to move the geometry to a
particular location to better suit how you plan to analyze it. As an example of this, when
using axisymmetric elements, ANSYS requires the global Y axis to be the axis of
symmetry and the elements must be located on the Z=0 plane and on the positive
side of the X=0 plane.
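As a small illustration of the axisymmetric requirement mentioned above, the APDL fragment below places a hypothetical tube-wall cross-section in the global X-Y plane, on the positive-X side, with the Y axis as the axis of symmetry. The radii, length, and element size are made-up values used only for the sketch.

/PREP7
ET,1,PLANE42             ! 2-D structural solid
KEYOPT,1,3,1             ! KEYOPT(3)=1 selects the axisymmetric option
MP,EX,1,2.0E5
MP,PRXY,1,0.3
RECTNG,50,60,0,200       ! cross-section: inner radius 50, outer radius 60, length 200 (assumed)
ESIZE,2.5
AMESH,ALL
FINISH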
7. Create the geometry within ANSYS or import it from another source.
Once you have determined the geometry you will use to generate the mesh, you can
create the geometry within ANSYS, import it from another source, or do both. The path
you choose depends on the complexity of the model and whether another source of
geometry is available. For relatively simple geometry it might be faster to generate it
within ANSYS. Complex geometry might be better made in a CAD program and
imported into ANSYS via IGES, one of the Connection products, or a third party
translator like CADfix. If you are importing geometry from another source you may have
to alter it to suit your needs. You may only need a segment of the geometry, a planar
cut through 3D geometry, or nothing more than a series of lines. You may also choose
to remove details from the geometry that you think are insignificant and would add
unneeded complexity to the model. Sometimes the CAD geometry is missing features
we believe are important, like fillets on inside corners. One thing for sure is that the
CAD geometry frequently needs to be modified to make it suitable for analysis. We
might alter the CAD geometry to allow us to take advantage of symmetry or other
simplifications, or we may subdivide the geometry so we can apply boundary
conditions properly or use certain meshing schemes. One idea to keep in mind is that
it is usually easier to mesh a group of smaller, simpler geometries than it is to mesh
a single, more complicated geometry. Plus, if it is decided at a later date to make a
modification to the geometry and rerun the analysis, having a number of volumes
means that you'll only have to clear and remesh a small part of the model instead of
the whole thing.
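A brief sketch of both routes, creating simple geometry inside ANSYS and subdividing it for later meshing and load application, is given below. The block dimensions, the working-plane offset, and the commented-out IGES file name are purely illustrative assumptions.

/PREP7
BLOCK,0,200,0,50,0,50    ! create a simple solid directly in ANSYS (assumed dimensions)
WPOFFS,100               ! shift the working plane 100 units along X
VSBW,ALL                 ! divide the volume by the working plane into two simpler volumes
! Alternatively, import geometry prepared in a CAD system, for example via IGES:
! /AUX15
! IOPTN,IGES,SMOOTH
! IGESIN,'bracket','igs'  ! hypothetical file name
FINISH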
8. Create the attributes needed to define the elements.
All elements in ANSYS are defined by their attributes, which are the tables that contain
the information that describes the element and its behavior. There are five types of attributes: TYPE (defines the element type), REAL (defines physical constants), MAT
(defines material properties), ESYS (defines the coordinate system the element is
aligned with) and SECNUM (defines cross section information). Usually, 2 or 3
attributes are all that are needed to define most elements. It's convenient to assign
attributes to each geometry entity you will be meshing because they are automatically
applied to the elements as they are generated. It also allows you to remesh the
geometry without having to worry about which attributes are currently active.
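A minimal sketch of assigning attributes to geometry before meshing follows; the entity numbers, material properties, and shell thickness are hypothetical.

/PREP7
ET,1,SOLID45             ! TYPE 1: 3-D structural solid
ET,2,SHELL63             ! TYPE 2: elastic shell
MP,EX,1,2.0E5            ! MAT 1
MP,PRXY,1,0.3
R,2,2.0                  ! REAL 2: shell thickness (assumed)
VSEL,S,VOLU,,1           ! select a volume to carry the solid attributes
VATT,1,,1                ! assign MAT 1 and TYPE 1 to the selected volume
ASEL,S,AREA,,12          ! select an area to be meshed with shells
AATT,1,2,2               ! assign MAT 1, REAL 2 and TYPE 2 to the selected area
ALLSEL,ALL
FINISH

Because the attributes now live on the geometry, clearing and remeshing those entities later picks up the same definitions automatically.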
9. Set element sizes.
Assign what you think are realistic values for the element edge length in various
regions of the model. Use your prediction of how the model will behave to help you
determine where the elements need to be small enough to obtain accurate results
and where they can be large and still provide reasonable answers. Another factor to
consider when setting element size is the geometry and whether ANSYS will be able
to mesh each region successfully. You may find it beneficial to adjust the element size
in certain regions to improve the likelihood of successful meshing. Another meshing
parameter is the rate at which element size increases from the outside to the inside of
the model. It is common to have larger elements in the middle of areas and volumes
because high stresses usually occur on the outside surface. If you think that the
number of elements in your model might present a problem during solution, you can
increase the rate at which elements increase in size from outside to inside.
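As a sketch only (the line number and sizes are arbitrary), element sizing commands of this kind might be used:

   ESIZE,10               ! default element edge length for the model
   LESIZE,5,2             ! smaller elements along line 5, near an expected stress concentration
   SMRTSIZE,4             ! or let SmartSizing grade the element size automatically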
10. Mesh the geometry and create any other elements that are needed.
Meshing can be as easy as executing a single command, or as time consuming as
almost any other part of model building. You can use mapped meshing, sweep
meshing, free meshing, or explicit element generation. A commonly followed
procedure is to begin with mapped meshing and then use sweep meshing and free
meshing as needed. This is followed by generation of special elements like contact
elements, point mass elements, spring and damper elements, surface effect
elements, and so forth.
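A hedged example of such a meshing sequence (volume and node numbers are hypothetical):

   MSHKEY,1 $ VMESH,1                    ! mapped mesh volume 1 where the topology allows it
   VSWEEP,2                              ! sweep mesh volume 2
   MSHKEY,0 $ MSHAPE,1,3D $ VMESH,3      ! free tetrahedral mesh for volume 3
   ET,2,MASS21 $ KEYOPT,2,3,2 $ R,2,5.0  ! a point mass element type (mass only, no rotary inertia)
   TYPE,2 $ REAL,2 $ E,1000              ! create the mass element at node 1000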
11. Apply boundary conditions.
Boundary conditions can be applied to solid model geometry or directly to nodes and
elements. It is good practice to apply boundary conditions to solid model geometry
whenever possible in the event that you might want to remesh part or all of a model
later on. Not all boundary conditions can be applied to solid model geometry, so it is
common for a model to also have boundary conditions applied directly to nodes and
elements.
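For example (the area, node and load values below are placeholders):

   DA,12,ALL,0            ! fix all DOFs on area 12; applied to the solid model, so it survives remeshing
   SFA,7,,PRES,1.5        ! pressure on area 7
   F,2041,FZ,-500         ! a point load applied directly to node 2041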
12. Set the load step controls.
There are a number of solution controls that can be set to enable a more efficient or
more accurate solution. You may wish to choose a specific solver for your problem, or
you might control the time step size or the amount of data that is written to the result
file. You can also control the number of substeps that will be solved in a given load
step and much, much more.
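A minimal sketch of typical solution controls for a static run (values are illustrative only):

   /SOLU
   ANTYPE,STATIC
   NSUBST,10,50,5         ! 10 substeps to start, between 5 and 50 allowed
   AUTOTS,ON              ! automatic time stepping
   OUTRES,ALL,LAST        ! write only the last substep of each load step to the results file
   EQSLV,PCG              ! select the PCG iterative solver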
13. Write the load step files.
After you have applied boundary conditions and defined the controls for a given load
step, you can write a file that contains this load step information. It's not always
necessary to write a load step file, especially if you only have to solve a single load
step for your problem. One benefit of writing a load step file is that it acts as a record of
the boundary conditions and solution settings used to run the analysis. You can open
a load step file with an editor and see all the ANSYS commands that control the
analysis, which is a handy way of making sure that the boundary conditions really are
what you think they are. Another benefit of using load step files is that you can rerun
the analysis using a given load step file and be sure that you have exactly the same
boundary conditions and solution settings as before.
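For instance, two load steps could be recorded as follows (the loads shown are placeholders):

   TIME,1.0 $ ACEL,0,9.81,0 $ LSWRITE,1   ! load step 1: inertia (gravity-type) load
   TIME,2.0 $ SFA,7,,PRES,1.5 $ LSWRITE,2 ! load step 2: add a surface pressure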
14. Solve the load step files.

Load step files can be solved either individually or as a group using the LSSOLVE
command. You can also choose to solve the currently applied boundary conditions
and solution settings using the SOLVE command. During solution it is frequently
beneficial to keep an eye on the output window and see how things are
progressing. Depending on the analysis being done, ANSYS may plot the
convergence criteria and how the solution is converging in the graphics window. If you
watch the available solution output while the program is solving you can occasionally
detect and diagnose problems that may occur during solution.
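For example:

   LSSOLVE,1,2,1          ! solve load step files 1 through 2 in sequence
   ! or simply SOLVE, to solve the currently applied loads and settings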
15. Review the results.
Look at the results and see if there is anything obviously wrong. Are all the load steps
that you thought were being solved present in the results file? Are there any obvious
errors in the results? Do the results compare favorably with your understanding of the
problem?
16. Interpret the results.
All too often the postprocessing of finite element results is done quickly and with
hardly a second thought, but one of the most important steps in the analysis process
is to look at the results and interpret what they really mean. If a singularity is present in
the model do we ignore it or do we modify the model to include the real world
geometry at this location? Another question we must ask ourselves is whether the
mesh is refined enough to provide answers that are accurate enough for our needs.
By viewing the averaged results, the unaveraged results, the PowerGraphics results,
the full graphics results (along with SMXB values), and the estimated error, we should
be able to determine whether the mesh is adequately refined and what the real
answer might actually be.
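A short postprocessing sketch along these lines (the result item names are standard; the rest is illustrative):

   /POST1
   SET,LAST
   /GRAPHICS,POWER $ PLNSOL,S,EQV   ! averaged von Mises stress with PowerGraphics
   /GRAPHICS,FULL  $ PLNSOL,S,EQV   ! full graphics; note the SMXB bound in the legend
   PRERR                            ! print the estimated energy-norm error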
17. Compare the results to your original prediction.
When you look at the finite element results you should ask yourself if they make sense
and appeal to your understanding of how the system works. Are the highest stresses
in regions that seem reasonable? Are the answers close to what you initially thought
they would be? This is a vitally important part of the analysis process because
reviewing each result and comparing it to what you thought it would be will help you
sharpen your engineering intuition. The intelligent analyst will always try to fit the
answers he is seeing on the screen into his understanding of how things work. By
doing this for each analysis you will become a better engineer and a more valuable
and productive analyst.
18. Iterate as needed to obtain a satisfactorily accurate answer.
What are the odds that your first answer is sufficiently accurate? If you have done a
good job of interpreting the results you will have a reasonable idea of how much error
is present in your results. Use your estimation of the actual stress values as the final
answer from your analysis. When you present results you should present the values
that were calculated in ANSYS, your interpretation of what they really mean, how much
error is included in them, and what your final estimate actually is.
While all of the steps are important and must be done properly to obtain valid answers, the
most difficult and crucial ones from the list above are 1-3, and 16-17. These require
engineering insight and understanding. You must understand the problem, use your intuition
and experience to predict the behavior of the system and the answer you are likely to obtain,
determine the finite element representation that will give you this answer, and interpret the
results of the analysis into a sensible and accurate engineering assessment of how the
system behaves.
Perhaps the most interesting step is 17. By comparing the finite element results to what you
thought the result would be, you will revisit your understanding of the problem and see if you
have overlooked something. If the results are not what you thought they would be, there are
three possibilities that might explain this. One is that you made a mistake when doing the
analysis and the problem you solved was not the problem you actually intended to solve.
Another is that the problem was set up properly, but ANSYS did not analyze it correctly. Such
occurrences are few and far between, but no program is perfect and there is a small but finite
chance that you may have run across a problem with the program.
Take a look at the Class 3 error reports and see if anything has been reported that could
explain why your answers don't look quite right to you. Finally, if the results from your analysis
do not compare favorably with your understanding of the problem, maybe your understanding
of the problem was incorrect or incomplete. As you look at the results and think about things,
sometimes the light bulb goes on and it all suddenly makes sense. These moments of
enlightenment are one of the highlights of being an engineering analyst.

By ANSYS, Inc. Technical Support

Q: Is there an easier way to define varying shell thicknesses than entering a
separate real constant for each element?

A: A simpler method is to use shell sections instead of real constants. A more
general way of defining shell construction than the real constants option, shell
sections allow the user to define the thickness of a set of elements as a function
of X, Y, and Z.
The thickness of elements SHELL131, SHELL132 and SHELL181 can be defined by a
section. Shell section commands allow for layered composite shell definition and provide
the input options for specifying the thickness, material, orientation and number of
integration points through the thickness of the layers.
Note that shell sections provide additional flexibility in defining single-layered shells as
well as layered composites. They allow the use of the ANSYS function builder to define
thickness as a function of global coordinates and the number of integration points used.
Shell section data is defined by the SECTYPE, SECDATA, SECOFFSET and
SECFUNCTION commands:
SECTYPE,SecID,Type,Subtype,Name,REFINEKEY associates the section type
information with the section ID number. SecID is the section identification number;
Type is set to SHELL to specify a shell section (other types of sections are BEAM
and PRETENSION). Subtype only applies to type=BEAM, Name is the name of the
section and is optional, and REFINEKEY only applies to type=BEAM.
SECDATA,TK,MAT,THETA,NUMPT describes the geometry of the section. TK is the
thickness of the shell layer for constant layer shells; MAT is the material ID for a
shell layer. Theta is the angle of the layer element coordinate system with respect to
the element coordinate system and NUMPT is the number of integration points in a
layer.
SECOFFSET,Location,OFFSET1,OFFSET2,CG-Y,CG-Z,SH-Y,SH-Z defines the
section offset for cross sections. More details on this command can be found in the
ANSYS Commands Reference.
SECFUNCTION,TABLE specifies the shell section thickness as a tabular function.
TABLE is the table name reference for specifying tabular thickness as a function of
global XYZ coordinates. To specify a table, enclose the table name in percent signs
(%). Use *DIM to define a table before using this command. This table defines the
total shell thickness at any point in space, and can be entered by the user, or
defined by ANSYS through the Function Builder.
The table used by SECFUNCTION can be created in three ways: entered as a TABLE in
ANSYS, as a table imported from a spreadsheet, or as a function created in the Function
Builder. Building the table explicitly requires you to specify thickness values for different
points in space, and ANSYS will interpolate thickness values for locations not explicitly
defined. The Function Builder allows you to specify the thickness as a mathematical
function of X, Y and Z entered into a calculator format. It also allows you to define
equations for different regimes, e.g. X<0 or X>0.
The function builder is accessed through the GUI by choosing Utility Menu>Parameters>
Functions>Define/Edit. Choose either a single equation or a multi-valued function. If you
choose multi-valued function, you enter the name of the regime variable. For both
methods, enter the equation using the primary variables (in this case, X, Y and Z) and the
keypad. If you are using a multi-valued function, click on the different Regime tabs on the
Function Builder and define each regime and the equation for that regime. When you are
finished building your equation, choose Editor>Save, and save the function file.
After you have built your function, you need to load it in (building it does not make it active).
Choose Utility Menu>Parameters> Functions>Read from File and browse to your function
file. You will then need to enter a table parameter name. You will use this name when you
specify your function for the shell thickness. Save your new table. The function is saved as
an encoded table, and ANSYS automatically dimensions the table.
If you need to define your thicknesses as discrete values instead of a function, you first
need to define a table parameter, either by the *DIM command, or by choosing Utility
Menu>Parameters> Array Parameters>Define/Edit. Be sure to identify the variables for the
table, or the shell thicknesses will not be read. If the table rows correspond to the
x-coordinates and the columns to the y-coordinates, be sure to specify that variables 1 and 2
are X and Y, respectively.
The table can be entered in the GUI by choosing Utility Menu>Parameters>Array
Parameters>Define/Edit, or by using *SET or by importing the table. You can define your
table in Excel or in comma delimited format, then read it in either by Utility Menu>
Parameters>Array Parameters>Read from File or by using *TREAD. This command
reads the first element of the table as the first value of variable 3, then reads the rest of
the first row as the values of variable 2, and the rest of the first column as the values of
variable 1. So, if your variables are defined as X, Y and Z in order, you would set up the
first line of your table as: z1, y1, y2, y3, y4, etc. The second row of your table would be x1,
thickness(x1,y1,z1), thickness(x1,y2,z1), thickness(x1,y3,z1), etc. When you have defined
all of the values corresponding to z1, your next line would be: z2, y1, y2, y3, y4, etc., and
each line following would have the thicknesses for each of the x and y locations for that
value of z. If you are using the same X and Y values for each value of Z, you still need to
list their values for each value of Z. An example follows:
0, 0, 5, 10
0, .1, .15, .2
5, .1, .17, .21
10, .1, .18, .22
5, 0, 5, 10
0, .2, .25, .3
5, .2, .27, .31
10, .2, .28, .32
10, 0, 5, 10
0, .3, .35, .4
5, .3, .37, .41
10, .3, .38, .32
The X, Y and Z values are all 0, 5, and 10. The first four rows define the thicknesses for X
and Y values at Z=0. The following four rows define the thicknesses for X and Y values at
Z=5, and so on. Each value of Z has a 2D table associated with it, with Y values in the first
row, and X values in the first column. Even though the first row and first column of each
subtable is the same, you still need to define these values for each value of Z.
Once the function table is defined, the shell section can be defined. If using the GUI, choose
Main Menu>Preprocessor> Sections>Shell>Add/Edit. In this window, choose the material
associated with the shell section, and choose the function, then save the section. If using
command format, use SECTYPE, SECDATA, SECOFFSET and SECFUNCTION. After you
have defined your section(s), you can assign the appropriate section ID to areas or
elements during meshing the same way as you assign real constants. You can verify that
your thickness function was done properly by choosing Utility Menu>PlotCtrls>Style>Size
and Shape, or use the /ESHAPE command. ANSYS will create a 3-D visualization of your
shells.
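Putting the pieces together, a hypothetical command sequence (the table, file and parameter names are placeholders chosen for this example) might look like:

   /PREP7
   ET,1,SHELL181
   MP,EX,1,2.0E5 $ MP,PRXY,1,0.3
   *DIM,THK,TABLE,3,3,3,X,Y,Z   ! total thickness as a function of global X, Y and Z
   *TREAD,THK,thick,csv         ! fill the table from thick.csv, laid out as described above
   SECTYPE,1,SHELL
   SECDATA,0.1,1                ! nominal layer thickness and material
   SECFUNCTION,%THK%            ! thickness now taken from the table at each location
   AATT,1,,1,0,1                ! assign material, type, element CS and section 1 to the areas
   AMESH,ALL
   /ESHAPE,1 $ EPLOT            ! visual check of the resulting thickness distribution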


Steve Pilz
ANSYS Inc.
Jean-Daniel Beley
CADOE S.A.

What is the optimal shape for your design? How do you find this shape? What tools do you
use? How many people need to help you? Critically important: how long does it take to get
to this final shape?
The typical design verification process is iterative and requires multiple people with different
software systems to drive it. The process starts with a CAD model that was designed with
intuition and user-accumulated experience. The CAD model can be parametric or non-parametric, which we'll explore below. An instance of the CAD model is then exported and
provided to the finite element analyst. A well-known loop then begins: geometry cleaning
(geometry healing, feature suppression, geometry abstraction), meshing, boundary
conditions, solve, post-processing, and finally, delivery of results to validate or invalidate the
design. If the design is not acceptable, the analyst goes back to the designer for a
discussion. If agreeable, the CAD model is then modified to take on a new shape. The
design/analysis loop iteration is then repeated.
With a well-constructed and robustly rebuildable parametric model, and close cooperation
between the design and analysis engineers, this iterative process can work well and has
been serving product development companies for 30 years. But it's an expensive and time-consuming process that's ubiquitous enough that there is no distinct competitive advantage
to using it.
The traditional design verification process is an iterative loop that's expensive and time-consuming.

Time Sinks
This traditional process has inherent time sinks:
Building a fully parametric, rebuildable and exportable CAD model.
Rebuilding the CAD model for each verification-driven iteration. CAD models, when fully
parametric, aren't normally very forgiving of analysis-led design change suggestions.
Data exporting and importing iterations. A data import problem in one iteration is
likely a problem for all process iterations.
While examining these and other time sinks built into the traditional process, two kinds of
CAD models have to be considered: parametric models and non-parametric models. With a
fully parametric model, modifying one dimension can be straightforward if the model has
been built so that each modification leads to a valid model. However, building a fully
parametric design is always a hard task that is very time-consuming and expensive. Many
companies, despite their aggressive plans, find that in practice it is entirely
too expensive to build fully parametric models all the time. Modifications of the geometry such
as mid surface extraction are seldom part of the CAD history tree, for instance, and often the
parameters controlling the shape or function of the design are not created with future design
simulation in mind.
Two other cases make it difficult to go back and forth between CAD and FEM. First, when
using legacy models, CAD is not always available and even less often parametric. Second,
getting robust parametric models for large assemblies often is difficult, especially when
different divisions within the same company design the parts or when parts are coming from
different suppliers.
The time spent on geometry cleaning is also strongly dependent on the quality of the model
and on the file format used for communication between CAD and FEM experts. The IGES
format, for example, usually requires geometry healing before the model can be meshed.

A Better Way
There has to be a better way. What if there was a product that let you work with one model, but
allowed you to change it over and over to investigate multiple what-if scenarios without
having to go through all of that import/export work? What if you had a tool that made a finite
element model flexible, useful, and parametrically modifiable? What if it didn't matter where
that finite element model came from, or how old it was?
These challenges are met with ParaMesh: a parametric mesh modification tool that directly
works on an existing FE mesh to perform shape modifications. It is discipline-neutral
(structural, CFD, electromagnetic, multiphysics...) and also mesh-type neutral, so it can
handle solid meshes (tetra, hex as well as pyramids and wedges), shell meshes (quad or
triangles) and a mix of solid and shells. The software began shipping to customers
September 24, 2003.
With ParaMesh, you read in a meshed model, and you shrink, stretch, move, indent, bulge,
thicken, thin and generally do what you want to the mesh, as many times as you want -- all
without that painful CAD model updating/import step.

Modifying a truck cross member: the central assembly is slid 300 mm to the left without
using a CAD system.

How ParaMesh Works


Input to ParaMesh is simply a node and an element file. Formats allowed include structural
analysis solver input files such as ANSYS, NASTRAN and PATRAN; CFD solver input files such
as StarCD, Fluent and CGNS; and general text files.
ParaMesh uses the data from these files as a starting point. Features such as holes, ribs
and protrusions are identified and made able to move, or morph to a different shape. Surface
offsets, translations and rotations, or more complex operations such as emboss shape creation,
are associated with a ParaMesh parameter, giving you parametric control over the mesh
changes requested.

ParaMesh works directly on an existing finite element mesh to perform shape modifications
quickly.

ParaMesh modifies the nodal locations to stretch and shrink the mesh, without changing
anything else. This means that the rest of the analysis input isn't harmed in any way, that is,
no boundary conditions need to be updated, deleted or reapplied.

Mesh Morphing Algorithm Details


The node moving, or mesh morphing, process is optimization-based. ParaMesh technology is
based on a global smoothing technique, which allows large transformations of the mesh
while still maintaining solution accuracy.
The smoothing algorithm seeks to find new node locations that maximize mesh quality,
expressed by comparing each quality measure, evaluated at an integration point of the
current element, against the corresponding quality measure on the initial mesh.
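Purely as an illustration (the article does not reproduce ParaMesh's actual objective), a relative-quality objective of this general form could be written as

\[ \max_{\mathbf{x}} \; \min_{i} \; \frac{q_i(\mathbf{x})}{q_i^{0}} \]

where \(\mathbf{x}\) collects the nodal coordinates, \(q_i\) is a shape-quality measure evaluated at integration point \(i\) of the current mesh, and \(q_i^{0}\) is the same measure on the initial mesh.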

This optimization technique works to maximize individual and global element shape metrics
subject to millions of quality measures, with hundreds of thousands of parameters that are
geometrically tied.
This problem is complex and computationally intensive to resolve, which is why most
previous mesh local optimization algorithms have attempted this optimization on an element
basis, rather than process the entire mesh at one time, tuning global element quality as a
whole.
For large modifications of the mesh, local algorithms have had a lot of trouble getting a useful
result, producing some elements that were very distorted and others, such as those close
to the modified boundaries, that were often inverted.
ParaMesh's global optimization algorithm processes all the elements simultaneously,
allowing larger modifications of the mesh by taking advantage of the computational
processing power available today.

Finally, the mesh can be re-exported out of ParaMesh, generating an input file that is similar
to the initial input file, but with new coordinates. If the initial input file is ready to solve, the
updated input file also will be ready to solve.

Legacy Models
Many companies maintain a library of previously created, or legacy, finite element models.
Collectively, these models at once represent a huge financial investment as well as a
treasure trove of digital information.
Previously, these legacy models were mostly worthless. Because they were not parametric or
geometrically modifiable, most companies, using the traditional process, started any
geometric changes or iterations at the CAD model level, despite the large costs.
ParaMesh is adept at reading these legacy models, extracting the pertinent mesh information,
modifying the mesh parametrically to take the desired shape, and exporting the new mesh
back into a surgically modified version of the legacy model's input file. ParaMesh, in many ways, gives
life to previously dead legacy models.

Summary of Benefits
ParaMesh modifies only nodal coordinates, and nothing else. After the modification of
the node coordinates, the input file is ready to solve: material properties, physical
properties, solution controls remain unchanged. Running design of experiment (DOE)
or optimization studies after parameterizing the mesh with ParaMesh is straightforward,
and very cost effective.
ParaMesh does not require geometry. No CAD license and/or experts are needed.
Every modification will be done on the original mesh, without a new CAD model, without
healing, without remeshing, without reapplying boundary conditions. Most of the shape
modifications can be done in a few minutes with ParaMesh's intuitive interface and its
powerful mesh manipulation tools. ParaMesh also can change the model easily in ways
that a CAD model would not appreciate.
