DesignXplorer User's Guide
ANSYS, Ansys Workbench, AUTODYN, CFX, FLUENT and any and all ANSYS, Inc. brand, product, service and feature
names, logos and slogans are registered trademarks or trademarks of ANSYS, Inc. or its subsidiaries located in the
United States or other countries. ICEM CFD is a trademark used by ANSYS, Inc. under license. CFX is a trademark
of Sony Corporation in Japan. All other brand, product, service and feature names or trademarks are the property
of their respective owners. FLEXlm and FLEXnet are trademarks of Flexera Software LLC.
Disclaimer Notice
THIS ANSYS SOFTWARE PRODUCT AND PROGRAM DOCUMENTATION INCLUDE TRADE SECRETS AND ARE CONFID-
ENTIAL AND PROPRIETARY PRODUCTS OF ANSYS, INC., ITS SUBSIDIARIES, OR LICENSORS. The software products
and documentation are furnished by ANSYS, Inc., its subsidiaries, or affiliates under a software license agreement
that contains provisions concerning non-disclosure, copying, length and nature of use, compliance with exporting
laws, warranties, disclaimers, limitations of liability, and remedies, and other provisions. The software products
and documentation may be used, disclosed, transferred, or copied only in accordance with the terms and conditions
of that software license agreement.
ANSYS, Inc. and ANSYS Europe, Ltd. are UL registered ISO 9001: 2015 companies.
For U.S. Government users, except as specifically granted by the ANSYS, Inc. software license agreement, the use,
duplication, or disclosure by the United States Government is subject to restrictions stated in the ANSYS, Inc.
software license agreement and FAR 12.212 (for non-DOD licenses).
Third-Party Software
See the legal information in the product help files for the complete Legal Notice for ANSYS proprietary software
and third-party software. If you are unable to access the Legal Notice, contact ANSYS, Inc.
Release 2024 R1 - © ANSYS, Inc. All rights reserved. - Contains proprietary and confidential information of ANSYS, Inc. and its subsidiaries and affiliates.
Ansys DesignXplorer Overview
The following links provide quick access to information about Ansys DesignXplorer and its use:
The Ansys Product Improvement Program
How to Participate
The program is voluntary. To participate, select Yes when the Product Improvement Program dialog
appears. Only then will collection of data for this product begin.
Data We Collect
The data we collect under the Ansys Product Improvement Program are limited. The types and amounts
of collected data vary from product to product. Typically, the data fall into the categories listed here:
Hardware: Information about the hardware on which the product is running, such as the:
System: Configuration information about the system the product is running on, such as the:
• country code
• time zone
• language used
• time duration
Session Actions: Counts of certain user actions during a session, such as the number of:
• project saves
• restarts
• toolbar selections
• number and types of entities used, such as nodes, elements, cells, surfaces, primitives, etc.
• time and frequency domains (static, steady-state, transient, modal, harmonic, etc.)
• the solution controls used, such as convergence criteria, precision settings, and tuning options
• solver statistics such as the number of equations, number of load steps, number of design points,
etc.
• actual values of material properties, loadings, or any other real-valued user-supplied data
In addition to collecting only anonymous data, we make no record of where we collect data from. We
therefore cannot associate collected data with any specific customer, company, or location.
No, your participation is voluntary. We encourage you to participate, however, as it helps us create
products that will better meet your future needs.
No. You are not enrolled unless you explicitly agree to participate.
3. Does participating in this program put my intellectual property at risk of being collected or discovered
by Ansys?
Yes, you can stop participating at any time. To do so, select Ansys Product Improvement Program
from the Help menu. A dialog appears and asks if you want to continue participating in the program.
Select No and then click OK. Data will no longer be collected or sent.
No, the data collection does not affect the product performance in any significant way. The amount
of data collected is very small.
The data is collected during each use session of the product. The collected data is sent to a secure
server once per session, when you exit the product.
Not at this time, although we are adding it to more of our products at each release. The program
is available in a product only if this Ansys Product Improvement Program description appears in the
product documentation, as it does here for this product.
8. If I enroll in the program for this product, am I automatically enrolled in the program for the other Ansys
products I use on the same machine?
Yes. Your enrollment choice applies to all Ansys products you use on the same machine. Similarly,
if you end your enrollment in the program for one product, you end your enrollment for all Ansys
products on that machine.
9. How is enrollment in the Product Improvement Program determined if I use Ansys products in a cluster?
In a cluster configuration, the Product Improvement Program enrollment is determined by the host
machine setting.
10. Can I easily opt out of the Product Improvement Program for all clients in my network installation?
c. Change the value from "on" to "off" and save the file.
Design exploration describes the relationship between the design variables and the performance of the
product using Design of Experiments (DOEs) and response surfaces. DOEs and response surfaces provide
all of the information required to achieve simulation-driven product development. Once the variation
of product performance with respect to design variables is known, it becomes easy to understand and
identify the changes required to meet requirements for the product. After response surfaces are created,
you can analyze and share results using curves, surfaces, and sensitivities that are easily understood.
You can use these results at any time during the development of the product without requiring addi-
tional simulations to test a new configuration.
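Conceptually, design exploration replaces each expensive simulation response with an inexpensive approximation fitted to the DOE results. As a minimal sketch (the notation here is illustrative, not DesignXplorer's internal formulation):

\hat{y}(x_1, \ldots, x_n) \approx y(x_1, \ldots, x_n), fitted from the solved DOE points (x^{(j)}, y^{(j)}), j = 1, \ldots, m

Evaluating \hat{y} for a new combination of input values is nearly instantaneous, which is why no additional simulations are required to test a new configuration.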
Available Tools
DesignXplorer offers a powerful suite of DOE types:
Central Composite Design (CCD) provides a traditional DOE sampling set, while the objective of Op-
timal Space-Filling (OSF) is to gain the maximum insight with the fewest number of points. OSF is
very useful when you have limited computation time.
After sampling, design exploration provides several different response surface types to represent the
simulation's responses:
These response surface types can accurately represent highly nonlinear responses, such as those
found in high frequency electromagnetics.
Once the simulation's responses are characterized, DesignXplorer supplies the following optimization
algorithms:
You can also use extensions to integrate external optimizers into the DesignXplorer workflow. For
more information, see Performing an Optimization with an External Optimizer (p. 198).
DesignXplorer provides several graphical tools for investigating a design. These tools include sensit-
ivity plots, correlation matrices, curves, surfaces, trade-off plots and parallel charts with Pareto front
display, and spider charts.
DesignXplorer also provides correlation matrix techniques to help you identify the key parameters of
a design before you create response surfaces.
Additionally, from a series of 2D or 3D simulations, you can create a ROM (reduced order model).
ROMs are stand-alone digital objects that offer a mathematical representation for computationally
inexpensive, near real-time analysis. For more information, see Using ROMs (p. 235).
• Product performance includes maximum stress, mass, fluid flow, and velocities.
Based on exploration results, you can identify the key parameters of the design and how they affect
product performance. You can then use this knowledge to influence the design so that it meets the
product's requirements.
DesignXplorer provides tools to analyze a parametric design with a reasonable number of parameters.
Supported response surface methods are suitable for problems using 10 to 15 input parameters.
In addition to performing the standard simulation, you must define the parameters to investigate.
The input parameters, also called design variables, can include CAD parameters, loading conditions,
material properties, and more.
You choose the output parameters, also called performance indicators, from the simulation results.
Output parameters can include maximum stresses, fluid pressure, velocities, temperatures, masses,
and custom-defined results. For example, product cost could be a custom-defined result based on
masses and manufacturing constraints that you use as an output parameter.
CAD parameters can be defined in a CAD package or in Ansys DesignModeler. In Workbench, mater-
ial properties are defined in the Engineering Data cell of an analysis system that you've inserted in
the Project Schematic. Other parameters originate in the simulation model itself. Output
parameters are defined in the various simulation environments (Mechanical, CFD, and so on). Custom
parameters are defined in the Parameter Set bar.
When you update the Response Surface cell, DesignXplorer creates a response surface for each output parameter. A
response surface is an approximation of the response of the system. Its accuracy depends on several
factors, including complexity of the variations of the output parameters, number of points in the
original DOE, and choice of the response surface type.
DesignXplorer provides a variety of response surface types. The default type, Genetic Aggregation,
automates the process of selecting, configuring, and generating the response surface best suited to
each output parameter in your problem. Several other response surface types are available for selection.
For instance, Standard Response Surface - Full 2nd Order Polynomials, which is based on a modified
quadratic formulation, provides satisfactory results when the variations of the output parameters are
slight. However, Kriging is more effective for problems with a broad range of variation.
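For reference, a full second-order polynomial response surface has the standard quadratic regression form shown below; DesignXplorer's modified formulation may differ in detail:

\hat{y}(x) = \beta_0 + \sum_{i=1}^{n} \beta_i x_i + \sum_{i=1}^{n} \beta_{ii} x_i^2 + \sum_{i<j} \beta_{ij} x_i x_j

The coefficients \beta are determined by regression from the DOE points, which is why the quality of the fit depends on the number and placement of those points.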
After response surfaces are created, you can thoroughly investigate the design using a variety of
graphical and numerical tools. Additionally, you can use optimization techniques to identify valid
design points.
Usually, the investigation starts with viewing sensitivity graphs because they graphically display the
relative influence of the input parameters. These bar or pie charts indicate how much output para-
meters are locally influenced by the input parameters around a given response point. Varying the
location of the response point can provide a totally different graph. Explanations of these graphs
typically use a hill and valley analogy. If the point is at the top of a steep hill, the influence of the
parameters is large. If the response point is in a flat valley, the influence of the input parameters is
small.
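In terms of the hill and valley analogy, a local sensitivity is essentially a measure of the slope of the response surface at the response point. One common normalized form, given here as an illustration rather than as DesignXplorer's exact definition, is:

S_i \approx \left. \frac{\partial \hat{y}}{\partial x_i} \right|_{x_{rp}} \cdot \frac{x_i^{\max} - x_i^{\min}}{\hat{y}(x_{rp})}

A steep slope (hilltop) yields a large S_i, while a flat region (valley floor) yields a small one, matching the behavior described above.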
The response surfaces provide curves or surfaces that show the variation of one output parameter
with respect to one or two input parameters at a time. These curves or surfaces also are dependent
on the response point.
Both sensitivity charts and response surfaces are key tools for answering such what-if questions as
"What parameter should we change if we want to reduce cost?".
DesignXplorer provides additional tools for identifying design candidates. In addition to thoroughly
investigating response surface curves to determine design candidates, you can use optimization
techniques to find design candidates from a Response Surface cell or any cell containing design
points. DesignXplorer provides two types of goal-driven optimization (GDO) systems: Response Surface
Optimization and Direct Optimization.
You can drag a GDO system from the Toolbox and drop it in the Project Schematic. If you drop a
Response Surface Optimization system on an existing Response Surface system, these two systems
share this portion of the data. For a Direct Optimization system, you can create data transfer links
between the Optimization cell and any other cell containing design points. You can insert several
GDO systems in the project, which is useful if you want to analyze several hypotheses.
Once a GDO system is inserted, you must define the optimization study. This includes choosing the
optimization method, setting the objectives and constraints, and specifying the domain. You then
solve the optimization problem. In many cases, there is not a unique solution, and several design
candidates are identified.
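For example, a study for a hypothetical bracket might be posed as follows, where the parameter names and limits are purely illustrative:

\min_{x} \; \text{mass}(x) \quad \text{subject to} \quad \sigma_{\max}(x) \le 200\ \text{MPa}, \quad 10\ \text{mm} \le x_{\text{length}} \le 20\ \text{mm}

In DesignXplorer terms, the objective and the constraint are defined on output parameters, and the domain is defined by the bounds on the input parameters.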
The results of the optimization are also very likely to provide design candidates that cannot be
manufactured. For example, a radius of 3.14523 mm is likely difficult to achieve. However, because
all information about the variability of the output parameters is provided by the source of design
point data, whether a Response Surface cell or another DesignXplorer cell, you can easily find an
acceptable design candidate close to the one indicated by the optimization.
You should check the accuracy of the response surface for the design candidates. To verify a candidate,
you update the design point, which checks the validity of the output parameters.
Probabilistic characterization provides a probability of success or failure rather than a simple yes or
no evaluation. For instance, a probabilistic analysis could determine that one part in 1 million is likely
to fail. Probabilistic analysis can also predict the probability of a product surviving its expected useful
life.
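In sampling-based probabilistic analysis, the probability of failure is typically estimated as the fraction of samples that violate the failure criterion. This is a general statistical relation rather than a DesignXplorer-specific formula:

P_f \approx \frac{N_{\text{fail}}}{N_{\text{total}}}, \qquad R = 1 - P_f

For the example above, one failure per million parts corresponds to P_f = 10^{-6} and a reliability R = 0.999999.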
The number of simulations depends on the number of parameters as well as the convergence criteria
for the means and standard deviations of the parameters. While you can provide a hard limit for the
number of points to compute, the accuracy of the correlation matrix can be affected if not enough
points are computed. Based on results, you reduce the number of parameters to the 10 to 15 with
the highest correlations.
The first step to solving this problem is to set Response Surface Type to Kriging. This response
surface type determines the accuracy of the response surface as well as the number of points that
are required to increase the accuracy. With this method, you can set a manual or automatic refinement
type. Manual refinement allows you to control the number of points to compute.
Sparse Grid is another available response surface type. This adaptive response surface is
driven by the accuracy that you request. It automatically refines the matrix of design points where
the gradient of the output parameters is higher to increase the accuracy of the response surface. To
use this feature, you set Response Surface Type to Sparse Grid and Design of Experiments Type
to Sparse Grid Initialization.
While no manual refinement is available when Sparse Grid is selected, manual refinement is available
for other response surface types. With manual refinement, you can enter specific points into the set
of existing design points used to calculate the response surface.
Limitations
• Material parameters, which are input parameters that assign a material to a geometry body and take their values from a finite list of material names, are not supported by DesignXplorer.
• When you use an external optimizer for a goal-driven optimization system, unsupported properties
and functionality are not displayed in the DesignXplorer interface.
To insert a design exploration system, you must have a DesignXplorer license. This license must be
available when you preview or update a DesignXplorer system or cell and also when results need to
be generated from a response surface.
You must also have licenses available for the systems that are to solve the design points that
DesignXplorer generates. The licenses for the solvers must be available when you update a system that
generates design points.
The DesignXplorer license is released when all DesignXplorer tabs are closed.
Note:
• If you do not have a DesignXplorer license, you can successfully resume an existing design
exploration project and review the already generated results, with some interaction possible
on charts. However, to update a DesignXplorer system or evaluate a response surface, a
license is required. If you do not have a license, an error displays.
• To update design points simultaneously, you must have one solver license for each simul-
taneous solve. Be aware that the number of design points that can be solved simultaneously
is limited by hardware, your RSM configuration, and available solver licenses.
• You do not need to reserve licenses for DesignXplorer components because DesignXplorer
does not check licenses out of the reserve pool. However, if some licenses are reserved
for design point updates, any update of a DesignXplorer component will require an extra
license. This license can come from a bundle.
• Inserting a 3D ROM system for producing a ROM (p. 235) requires a ROM Builder license.
A ROM Builder license also enables existing DesignXplorer capabilities.
User Interface
The Workbench user interface allows you to easily build your project in a workspace called the Project
Schematic.
Project Schematic
From the Project tab, you add design exploration systems to the Project Schematic to perform different
types of parametric analyses. From the Toolbox, you drag a system from under Design Exploration
and drop it under the Parameter Set bar. Optionally, you can double-click the system in the Toolbox.
Each system added to the Project Schematic has a blue system header and one or more cells for ana-
lysis components. You generally interact with a system at the cell level. Right-clicking a system header
or cell displays context menu options. Double-clicking a cell performs the default option, which appears
in bold in the context menu. Because the default option is typically Edit, double-clicking a cell generally
opens its component tab. Component tabs are described in more detail later.
Toolbox
When you are viewing the Project Schematic, the Toolbox displays a Design Exploration category
with DesignXplorer systems. To perform a particular type of design exploration, you drag the system
from the Toolbox and drop it in the Project Schematic below the Parameter Set bar.
When you are viewing the component tab for a cell, the Toolbox displays the charts that can be added
to the object currently selected in the Outline pane. For example, assume that you double-clicked a
Response Surface cell to open its component tab. If you select a response point in the Outline pane,
the Toolbox displays the charts that can be inserted for the response point.
Double-clicking a chart in the Toolbox adds it to the object selected in the Outline pane. You can also
add a chart by dragging it from the Toolbox and dropping it on an object in the Outline pane. Addi-
tionally, you can right-click a node in the Outline pane and select the chart to add from the context
menu.
Component Tabs
Once a DesignXplorer system is added to the Project Schematic, double-clicking a cell typically opens
its component tab. For a cell where Edit is not the default menu option, you can right-click the cell and
select Edit. A component tab displays a window configuration with multiple panes in which to set
analysis options, run the analysis, and view results. For example, double-clicking a Design of Experiments
cell displays the component tab for the DOE.
A component tab has four panes: Outline, Table, Properties, and either Chart or Results. In general,
you select an object in the Outline pane and either set up its properties or view the table or chart as-
sociated with it.
• Outline: Provides a hierarchy of the main objects that make up the cell that you are editing.
The state icon on the root node tells you if the data for the cell is up-to-date or if it must be updated.
It also helps you understand the effect of your changes on the cell and its parameter properties.
Quick help is associated with various states. If you see the information icon to the right of the root
node, click it to see what immediate actions must be taken. If links appear in the quick help, they
take you to more information in the Ansys product help.
On nodes for result objects (such as response points, charts, and Min-Max search), state icons indicate
if the objects are up-to-date or if they need to be updated. State icons help you to quickly assess the
current status of your result objects. For example, if you change a DOE setting, the state icon of the
corresponding chart is updated to reflect the pending changes. When the state icon indicates that the update of a result object has failed, you can try to update the object again by right-clicking it and selecting the appropriate menu option.
• Table: A tabular view of the data associated with the object selected in the Outline pane. The title
bar contains a description of the table. You can right-click in the table to export the data to a CSV
(Comma-Separated Values) file. For more information, see Exporting Design Point Parameter Values
to a Comma-Separated Values File in the Workbench User's Guide.
• Properties: Lists the properties that can be set for the object selected in the Outline pane. For ex-
ample, when a parameter for a DOE is selected in the Outline pane, you set bounds for this parameter
in the Properties pane. When the root node for a Response Surface cell is selected, you set the re-
sponse surface type and other options in the Properties pane. When a chart is selected, you set
plotting and display options in the Properties pane.
• Chart or Results: Displays various charts or results for the object selected in the Outline pane. In
the Charts pane, you can right-click a chart to export the data to a CSV (Comma-Separated Values)
file. The Results pane is shown only on the component tab for a goal-driven optimization system.
You can insert and duplicate charts (or a response point with charts for a Response Surface cell)
even if the system is out-of-date. When the system is out-of-date, the charts in the Chart pane are
updated when the system is updated. For any DesignXplorer cell where a chart is inserted before the
system is updated, all types of charts supported by the cell are inserted by default at the end of the
update. If a cell already contains a chart, no new chart is inserted by default. For a Response Surface
cell, if there is no response point, a response point is inserted by default with all charts. For more
information, see Using DesignXplorer Charts (p. 32).
Note:
• In the Table and Properties panes, input parameter values and output parameter values
obtained from a simulation are displayed in black text. Output parameters based on a re-
sponse surface are displayed in the color specified on the Response Surface tab in the
Options dialog box. For more information, see Response Surface Options (p. 38).
• In the Properties pane, the color convention for output parameter values is not applied
to the Calculated Minimum and Calculated Maximum values. These values always display
in black text.
Context Menu
Right-clicking a DesignXplorer system header or cell in the Project Schematic displays a context menu.
Likewise, right-clicking a node in the Outline pane of a component tab displays a context menu. The
options available depend on the state of the system or cell. The options typically available are Update,
Preview, Clear Generated Data, and Refresh. The option selected is performed only on the selected
system or cell.
Parameters
DesignXplorer makes use of two types of parameters:
• Input parameters
• Output parameters
For information about performing what-if studies to investigate design alternatives, see Working with
Parameters and Design Points in the Workbench User's Guide. For information about grouping parameters
by application, see Tree Usage in Parameters Tab and Parameter Set Tabs (p. 28).
Input Parameters
Input parameters define the values to analyze for the model under investigation.
Note:
Usage of DesignXplorer is limited to 10 input parameters. Contact Ansys support if you have questions.
Input parameters include CAD parameters, analysis parameters, DesignModeler parameters, and mesh
parameters.
• Examples of CAD and DesignModeler input parameters are length and radius.
• Examples of analysis input parameters are pressure, material properties, materials, and sheet
thickness.
• Examples of mesh parameters are relevance, number of prism layers, and mesh size on an entity.
Input parameters can be discrete or continuous. Each of these parameter types has a specific form.
Discrete parameters physically represent different configurations or states of the model. An example is
the number of holes in a geometry. Discrete parameters allow you to analyze different design variations
in the same parametric study without having to create multiple models for parallel parametric analysis.
For more information, see Defining Discrete Input Parameters (p. 270).
Continuous parameters physically vary in a continuous manner between some lower bound and upper
bound. Examples are a CAD dimension and load magnitude. Continuous parameters allow you to analyze
a continuous value within a defined range, with each parameter representing a direction in the design
and treated as a continuous function in Design of Experiments and Response Surface systems. For
a continuous parameter, you can impose additional limitations on the values within this range. For
more information, see Defining Continuous Input Parameters (p. 272).
If you disable an input parameter, its initial value, which becomes editable, is used for the design ex-
ploration study. If you change the initial value of a disabled input parameter during the study, all dependent results are invalidated. A disabled input parameter can have a different initial value in each
DesignXplorer system. For more information, see Changing Input Parameters (p. 276).
Output Parameters
Output parameters either result from the geometry or are response outputs from the analysis. Examples
include volume, mass, frequency, stress, velocity, pressure, force, heat flux, and so on.
Output parameters can include derived parameters, which are calculated from output and input para-
meters using equations that you provide. All derived parameters that you define in the Workbench
Parameter Set bar are passed into DesignXplorer as output parameters.
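For example, a derived output parameter representing product cost might be defined in the Parameter Set bar with an expression such as:

P9 = 2.5 * P3 + 120

where P9 and P3 are hypothetical parameter IDs, P3 is a mass output parameter, and the numeric factors stand in for material and fixed manufacturing costs. Because the expression depends on an output parameter, P9 is passed to DesignXplorer as an output parameter.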
Note:
When you define a derived parameter in the Parameter Set bar, DesignXplorer keeps
the unit specified at the moment of its creation as the native unit. In Workbench, you
can select Units → Display Values as Defined to see the native unit. For example, if
meter is the unit specified when a derived parameter is created, meter is the native
unit. If you want the derived parameter to use inch as the native unit, you must specify
inch before you create the derived parameter.
When editing the Parameter Set tab, all parameters for the project are listed in the Outline pane, under
Input Parameters and Output Parameters, depending on their nature.
The parameters are also grouped by system name to reflect the origin of the parameters and the
structure of the Project Schematic when working in parametric environments. Because parameters can
be manipulated from the component tabs for a Parameters cell and the Parameter Set bar, and also
in DesignXplorer tabs, the same tree structure is always used.
Tip:
You can edit the system name by right-clicking the system name in the Project
Schematic.
Design Points
A design point is defined by a snapshot of parameter values where output parameter values are calculated
directly by a project update. Design points are created by design exploration. For instance, they are
created when processing a DOE or correlation matrix or when refining a response surface.
It is also possible to insert a design point at the project level from an optimization candidate design to
perform a validation update. Output parameter values are not copied to the created design point because
they were calculated by design exploration and are, by definition, approximated. Actual output para-
meters are calculated from the design point input parameters when a project is updated.
You can also edit process settings for design point updates, including the order in which points are
updated and the location where the update occurs. When submitting design points for update, you
can specify whether the update is to run locally on your machine or be sent via RSM for remote processing.
For more information, see Working with Design Points (p. 278).
Response Points
A response point is defined by a snapshot of parameter values where output parameter values are
calculated in DesignXplorer from a response surface. As such, the parameter values are approximate
and calculated from response surfaces. You should verify the most promising designs by a solve in the
system using the same parameter values.
When editing a Response Surface cell, you can create new response points from either the Outline or
Table pane. You can also insert response points and design points from the Table pane or Chart pane
by right-clicking a table row or point on the chart and selecting an appropriate option from the context
menu. For instance, you can right-click a point in a Response chart and select the option for inserting
a new response point in this location.
You can duplicate a response point by right-clicking it in the Outline pane and selecting Duplicate.
You can also duplicate a response point using the drag-and-drop operation. An update of the response
point is attempted so that the duplication of an existing up-to-date response point results in a new up-
to-date response point.
Note:
• When you use the context menu to duplicate a chart that is a child of a response
point, a new chart is inserted under the same response point. However, when you
use the drag-and-drop operation, the duplicate is inserted under the other response
point.
For more information, see Working with Response Surfaces (p. 121).
Workflow
To run a parametric analysis in DesignXplorer, you must:
Features in the geometry that are important to the analysis should be exposed as parameters.
These parameters can then be passed to DesignXplorer.
2. Drag an analysis system from the Toolbox and drop it in the Project Schematic, connecting it to
the DesignModeler or CAD file.
3. Double-click the Parameter Set bar and do the following for each input parameter:
b. In the Properties pane, set the limits for the input parameter.
4. In the Project Schematic, drag the DesignXplorer system that you want to insert and drop it below
the Parameter Set bar.
• For any DesignXplorer system, right-click the system header and select Duplicate. A new system
of this type is added to the Project Schematic under the Parameter Set bar. No data is shared
with the original system.
• For a DesignXplorer system with a Design of Experiments cell, right-click this cell and select Duplicate.
A new system of this type is added to the Project Schematic under the Parameter Set bar. No
data is shared with the original system.
• For a DesignXplorer system with a Response Surface cell, right-click this cell and select Duplicate. A
new system of this type is added to the Project Schematic under the Parameter Set bar. The
DOE data is shared with the original system.
• For a Direct Optimization, Response Surface Optimization, or ROM Builder system, right-click the
system header and select Duplicate. A new system of this type is added to the Project Schematic
under the Parameter Set bar. The DOE and response surface data is shared with the original system.
Any cell in the duplicated DesignXplorer system that contains data that is not shared with the original
system is marked as Update Required.
When you duplicate a DesignXplorer system, definitions of your data (such as charts, response points,
and metric objects) are also duplicated. An update is required to calculate the results for the duplicated
data.
1. Specify design point update options in the Properties pane of the Parameter Set tab. These
options can vary from the global settings specified on the Solution Process tab in the Options
dialog box.
2. For each cell in the design exploration system, double-click it to open the component tab and
set up any analysis options that are needed. Options can include parameter limits, optimization
objectives or constraints, optimization type, and more.
• From the component tab for the cell, right-click the root node in the Outline pane and select
Update.
4. Make sure that you set up and solve each cell in a DesignXplorer system to complete the analysis
for this system.
Tip:
To update all systems in the project, either click Update Project on the
toolbar or right-click in any empty area of the Project Schematic and select Update
Project.
5. View the results for each DesignXplorer system from the component tabs for its cells. Results are
in the form of tables, statistics, charts, and so on.
In the Project Schematic, cells display icons to indicate their states. If a cell is out-of-date and must
be updated, right-click the cell and select Update.
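The same update can also be run in batch with a Workbench journal. The following is a minimal sketch; the file paths are placeholders, and you should verify the commands against a journal recorded from your own session (File → Scripting → Record Journal):

# update_project.wbjn - minimal sketch of a batch project update (IronPython journal)
# Paths are placeholders; verify command names against a recorded journal.
Open(FilePath="C:/sims/bracket_study.wbpj")   # open the saved project
Update()                                      # equivalent to Update Project
Save(Overwrite=True)                          # save the updated project

To run it without opening the user interface: runwb2 -B -R update_project.wbjn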
Progress Pane
You can open the Progress pane from the View menu or click Show Progress in the lower right
corner of the Workbench window. During execution of an update, the Status cell displays the com-
ponent currently being updated. The Details cell displays additional information about updating this
component. The Progress cell displays a progress bar.
Throughout the execution of the update, this pane continuously reports the progress. To stop the
update, click the red stop button to the right of the progress bar. To restart a stopped
update at a later time, you use any of the methods for starting a new update.
Messages Pane
If solution errors exist and the Messages pane is not open, the button for showing messages in the lower-right corner of the window flashes orange. This button indicates the number of messages generated. Clicking it opens the Messages pane so that you can see the solution errors. You can also
open the Messages pane from the View menu.
All of the charts available for a cell are listed in the Toolbox for the component tab. When you update
a cell for the first time, one of each chart available for the cell is automatically inserted in the Outline
pane. For a Response Surface cell, a new response point is also automatically inserted. If a cell already
contains charts, the charts are replaced with the next update.
Note:
Charts are available for selection in the Outline pane. Most charts are created under Charts.
However, charts for a Response Surface cell are an exception. The Predicted vs. Observed
chart is inserted under Quality → Goodness of Fit. Other charts are inserted under respective
response points under Response Point.
To add a chart, use any of the following methods:
• Drag a chart from the Toolbox and drop it on the Outline pane.
• In the Outline pane, right-click the cell under which to add the chart and select the option for inserting
the desired chart type.
• To create a second instance of a chart with default settings or to create a new Response Surface chart
under a different response point, drag the desired chart from the Toolbox and drop it on the parent
cell.
• To create an exact copy of an existing chart, right-click the chart and select Duplicate.
For Response Surface charts, the Duplicate option on the context menu creates an exact copy of the
existing chart under the same response point. To create a fresh instance of a chart type under a different
response point, drag the existing chart and drop it on the new response point.
Chart duplication triggers a chart update. If the update succeeds, both the original chart and the duplicate
are up-to-date.
In the Chart pane, you can drag the mouse over various chart elements to view coordinates and other
element details.
You can change chart properties in the Properties pane. You can also right-click directly on the
chart to use context-menu options for performing various chart-related operations. The options available
depend on the chart type and state of the chart.
• To edit general chart properties, right-click a chart or chart element and select Edit Properties. For
more information, see Setting Chart Properties in the Workbench User's Guide.
• To add new points to your design, right-click a chart point. Depending on the chart type, you can
select from the following context menu options: Explore Response Surface at Point, Insert as
Design Point, Insert as Refinement Point, Insert as Verification Point, and Insert as Custom
Candidate Point.
• To enable or disable the display of parameters in the chart:
– Right-click a chart parameter and, depending on the parameter, select Disable <parameterID>,
Disable all Input Parameters but <parameterID>, or Disable all Output Parameters but
<parameterID>.
– If at least one parameter is already disabled, you can right-click anywhere on the chart and select
Reverse Enable/Disable to enable all disabled parameters or vice versa.
For general information about working with charts, see Working with the Chart Pane in the Workbench
User's Guide.
You can export the design point values of a selected DesignXplorer table or chart to an ASCII file,
which then can be used by other programs for further processing. For more information, see Exporting
and Importing Design Points (p. 96).
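The exported file is a plain comma-separated table of parameter values. The layout below is illustrative only; parameter names and values are hypothetical, and the exact header format depends on the table or chart being exported:

Name,P1 - Length (mm),P2 - Pressure (MPa),P3 - Mass (kg)
DP 0,100,2,5.13
DP 1,110,2,5.62
DP 2,100,2.5,5.13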
Once a response surface is solved, you can export it as an independent reduced-order model (DX-
ROM) for reuse in other environments. For more information, see Exporting Response Surfaces (p. 169).
Exporting ROMs
A 3D ROM system is based on a Design of Experiments (DOE) and its design points, which automate
the production of solution snapshots and the ROM itself. Once the ROM is produced, you can export
it to a ROMZ file, which can be consumed by Fluent in Workbench. You can also export the ROM
to an FMU file, which can be consumed by anyone who has access to Ansys Twin Builder or any
other tool that can read this file type. For more information, see ROM Consumption (p. 237).
2. In the tree, expand Design Exploration to see its three child tabs:
• Design of Experiments
• Response Surface
As you click the four DesignXplorer tabs, you might need to scroll to see all options. A grayed-out
section becomes available only if a previous option is changed to some specific value that enables
the section.
For descriptions of options on other tabs, see Workbench User Preferences in the Workbench User's
Guide.
3. To change the default value for an option, click directly in the field for the option. Some fields require
you to directly enter text while others require you to make selections from dropdown menus or
select or clear check boxes.
• Preserve Design Points After DX Run: If selected, design points created for a design exploration
cell are saved to the project's design points table once the solution completes. When this check box
is selected, the Retain Data for Each Preserved Design Point option is enabled. If that option is also selected, data is retained for each design point saved to the project's design points table. For more
information, see Retaining Data for Generated Design Points (p. 280) in the Workbench User's Guide.
• Retry All Failed Design Points: If selected, additional attempts are made to solve design points that
failed to update during the first run. When this check box is selected, the following options are enabled:
– Number of Retries: Number of times to try to update failed design points. The default is 5.
– Retry Delay (seconds): Number of seconds to elapse between tries. The default is 120.
Under Graph, the Chart Resolution option specifies the number of points for a continuous input
parameter axis in a 2D or 3D Response Surface chart. The range is from 2 to 100. The default is 25. In-
creasing the number of points enhances the chart resolution.
Under Sensitivity:
• Significance Level: Relative importance or significance to assume for input variables. The allowable
range is from 0.0 to 1.0, where 0 means that all input variables are assumed to be insignificant and
1.0 means that all input variables are assumed to be significant. The default is 0.025.
• Correlation Coefficient Calculation Type: Calculation method for determining sensitivity correlation
coefficients. Choices are:
– Rank Order (Spearman): Evaluates correlation coefficients based on the rank of samples (default). A formula sketch follows this list.
• Display Parameter Full Name: If selected, full parameter names display rather than short parameter
names.
• Parameter Naming Convention: Naming style for input parameters within design exploration. Choices
are:
– Taguchi Style: Names for parameters are continuous variables and noise variables.
– Uncertainty Management Style: Names for parameters are design variables and uncertainty
variables (default).
– Reliability Based Optimization Style: Names for parameters are design variables and random
variables.
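As referenced for the Rank Order (Spearman) choice above, the Spearman coefficient for a sample of n points is computed from the rank differences d_i between two parameters (standard statistical definition, assuming no tied ranks):

\rho = 1 - \frac{6 \sum_{i=1}^{n} d_i^2}{n (n^2 - 1)}

Because it operates on ranks rather than raw values, it captures monotonic but nonlinear relationships that a purely linear coefficient can miss.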
Under Messages, the Confirm if Min-Max Search can take a long time option specifies whether to
display an alert before performing a Min-Max search operation when there are discrete input parameters.
You might want to display such alerts because Min-Max searches can be time-consuming:
• Box-Behnken Design
For more information, see DOE Types (p. 82) and Working with Design Points (p. 278).
When Central Composite Design is selected, options under Central Composite Design Options are
enabled:
• Design Type: Method for improving the response surface fit for the DOE. Choices are:
– Face-Centered
– Rotatable
– VIF-Optimality
– G-Optimality
– Auto-Defined
For more information, see Central Composite Design (CCD) (p. 82) and Using a Central Composite
Design DOE (p. 92).
Note:
If you change the setting for Design Type here in the Options dialog box, new
design points are generated for a DOE that has not yet been solved.
• Enhanced Template: Specifies whether to use the enhanced template. This check box is enabled
only for Rotatable and Face-Centered design types.
When either Optimal Space-Filling Design or Latin Hypercube Sampling Design is the algorithm
selected, options under Latin Hypercube Sampling or Optimal Space-Filling are enabled:
• Design Type: Method for improving the response surface fit for the DOE. Choices are:
– Centered L2
– Maximum Entropy
• Max Number of Cycles: Maximum number of iterations that the base DOE is to undergo for the
final sample locations to conform to the chosen DOE type.
• Sample Type: Method for determining the number of samples. Choices are:
– CCD Samples: Number of samples is the same as that of a corresponding CCD design (default).
– Linear Model Samples: Number of samples is the same as that of a design of linear resolution.
– Pure Quadratic Model Samples: Number of samples is the same as that of a design of pure
quadratic resolution, which uses constant and quadratic terms.
– Full Quadratic Model Samples: Number of samples is the same as that of a design of full
quadratic resolution, which uses all constant, quadratic and linear terms.
– User Defined Samples: Number of DOE samples that you want to have generated.
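For orientation, the CCD Samples choice ties the sample count to that of a classical central composite design. For n input parameters, a full (non-fractional) CCD contains the factorial corners, the axis points, and one center point:

N = 2^n + 2n + 1

DesignXplorer may use fractional-factorial variants as the number of parameters grows, so the actual count can be smaller; treat this formula as an order-of-magnitude guide.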
For more information, see Optimal Space-Filling Design (OSF) (p. 83) and Latin Hypercube Sampling
Design (p. 86).
– Kriging
– Non-Parametric Regression
– Neural Network
For more information, see Response Surface Types (p. 99) and Using Response Sur-
faces (p. 99). The algorithm that you select determines if subsequent categories are en-
abled.
• Color for Response Surface Based Output Values: Color in which to display output values that
are calculated from a response surface. While simulation output values that are calculated from
a design point update always display in black text, DesignXplorer applies the color that is selected
here to response surface-based output values in the Properties and Table panes for all cells
and in the Results pane for the Optimization component (specifically for the Candidate Points
chart and Samples chart).
– Design points, derived parameters with no output parameter dependency, verified can-
didate points, and all output parameters calculated in a Direct Optimization system are
simulation-based. Consequently, these output values display in black text.
– Response points, Min-Max search results, and candidate points in a Response Surface
Optimization system are based on response surfaces. Consequently, these output values
display in the color specified by this option.
In a Direct Optimization system, derived and direct output parameters are all calculated
from a simulation and so display in black text. In a Response Surface Optimization system,
the color used for derived values depends on the definition (expression) of the derived
parameter. If the expression of the parameter depends on at least one output parameter,
either directly or indirectly, the derived values are considered to be based on a response
surface and so display in the color specified by this option.
Under Kriging Options, the Kernel Variation Type option specifies the mode for correlation parameter
selection. This option is available only when Kriging is the algorithm selected. Choices are:
• Variable Kernel Variation: Radial basis function mode that uses one correlation parameter for
each design variable (default).
• Constant Kernel Variation: Pure Kriging mode that uses a single correlation parameter.
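The difference between the two modes can be seen in the standard Gaussian correlation kernel commonly used in Kriging, shown here as a general form rather than DesignXplorer's exact implementation:

R(x, x') = \exp\left( -\sum_{k=1}^{n} \theta_k \, (x_k - x'_k)^2 \right)

Variable Kernel Variation fits a separate correlation parameter \theta_k for each design variable, whereas Constant Kernel Variation uses a single \theta for all of them.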
Under Neural Network Options, the Number of Cells option specifies the number of cells that the
neural network uses to control the quality of the response surface. This option is available only when
Neural Network is the algorithm selected. A higher value allows the neural network to better capture
parameter interactions. The recommended range is from 1 to 10. The default is 3.
Once a response surface is solved, it is possible to switch to another response surface type or change
the options for the current response surface in the Properties pane for the Response Surface cell.
Anytime that you change options in the Properties pane, you must update the response surface to
obtain the new fitting.
Under Weighted Latin Hypercube, Sampling Magnification specifies the number of times to reduce
regular Latin Hypercube samples while achieving a certain probability of failure (Pf). For example, the
lowest probability of failure for 1000 Latin Hypercube samples is approximately 1/1000. The default is
5. A magnification of 5 is meant to use 200 weighted/biased Latin Hypercube samples to approach the
lowest probability of 1/1000. You should not use a magnification greater than 5 because a significant
Pf error can occur due to highly biased samples.
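The relationship described above can be summarized as follows, using illustrative notation:

N_{\text{weighted}} = \frac{N_{\text{regular}}}{M}, \qquad P_f^{\min} \approx \frac{1}{N_{\text{regular}}}

With N_regular = 1000 and a magnification M = 5, the study uses 200 weighted samples to approach a probability of failure on the order of 1/1000.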
Under Optimization:
• Method Selection: Specifies whether Auto or Manual is the default for optimization method selection
in a newly inserted optimization system. To make performing optimizations easy for non-experts,
Auto is the initial setting. For more information, see Using Goal-Driven Optimizations (p. 173).
• Constraint Handling: A constraint satisfaction filter on samples generated from a Screening, NLPQL,
MOGA, or Adaptive Single-Objective optimization that determines what candidates to display in the
candidates table. This option can be used for any optimization application and is especially useful
for Screening samples to detect the edges of solution feasibility for highly constrained nonlinear
optimization problems. Choices are:
– Relaxed: Treats the upper, lower, and equality constraints as objectives. A candidate point
that violates an objective is still considered feasible and so is shown in the table.
– Strict (default): Treats the upper, lower, and equality constraints as hard constraints. If any
constraint is violated, the candidate is not shown in the table. Depending on the extent of
constraint violations, it is possible that no candidate points are shown in the table.
• Tolerance Settings: For Direct Optimization and Response Surface Optimization systems, indicates
whether to display options in the Optimization cell for entering tolerance values for objectives and
constraints. When this check box is selected (default) and the Solution Process Update property for
the Parameter Set bar is set to Submit to Design Point Service (DPS), options also display in the
Optimization cell for entering initial values for objectives, which are sent to DPS when design points
are updated. For more information, see Tolerance Settings (p. 207).
ACT provides direct, API-driven product customization via standard extensions. In DesignXplorer,
this includes the integration of optimizers and sampling (DOE) methods, both custom and third-
party, into the design exploration workflow.
ACT enables process compression and automation via wizard extensions, which enable you to
leverage the scripting capabilities of the Workbench framework API. A wizard extension loaded in
Workbench provides simulation guidance within the Project Schematic workflow, walking non-
expert users step-by-step through a simulation. As a data-integrated application, you can automate
DesignXplorer using a simulation wizard launched from the Project tab.
For more information, see Simulation Wizards in the Ansys ACT Developer's Guide.
DesignXplorer Systems and Components
The topics in this section provide an introduction to using specific DesignXplorer systems and their in-
dividual components for your design exploration projects. A component is generally referred to as a
cell.
What is Design Exploration?
DesignXplorer Systems
DesignXplorer Components
After setting up your analysis, you can pick one of the systems under Design Exploration in the Toolbox
and then do any of the following:
• Parametrize your solution and view an interpolated response surface for the parameter ranges
• View the parameters associated with the minimum and maximum values of your outputs
• Create a correlation matrix that shows you the sensitivity of outputs to changes in your input para-
meters
• Set output objectives and see what input parameters meet these objectives
• Produce and consume ROMs for computationally inexpensive, near real-time analysis
DesignXplorer Systems
The following DesignXplorer systems are available if you have installed Ansys DesignXplorer and have
an appropriate license:
Parameters Correlation System
Response Surface System
Goal-Driven Optimization Systems
3D ROM System
When a project has many input parameters (more than 10), building an accurate response surface
becomes an expensive process. By using a Parameters Correlation system, you can identify the most
significant input parameters and then disable those that are less significant when building the response
surface. With fewer input parameters, the response surface is more accurate and less expensive to
build.
The Parameters Correlation system contains a single cell: Parameters Correlation (p. 48)
For the deterministic method, response surfaces for all output parameters are generated in two steps (a minimal sketch follows the list):
• Solving the output parameters for all design points as defined by a DOE
• Fitting the output parameters as a function of the input parameters using regression analysis
techniques
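The two steps can be illustrated with a generic sketch (Python/NumPy, with a made-up output function standing in for the real solver and a simple quadratic regression basis; not DesignXplorer's fitting algorithm).

    # Minimal sketch of the two-step deterministic workflow:
    # solve output values at DOE points, then fit a regression model over the inputs.
    import numpy as np

    # Step 1: "solve" the output at a small set of DOE points (here, a toy 3x3 grid
    # and a made-up output function standing in for the real simulation).
    x1, x2 = np.meshgrid(np.linspace(0.0, 1.0, 3), np.linspace(0.0, 1.0, 3))
    X = np.column_stack([x1.ravel(), x2.ravel()])
    y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + 3.0 * X[:, 0] * X[:, 1]   # fake solver output

    # Step 2: fit the output as a function of the inputs with least-squares regression
    # on a full quadratic basis [1, x1, x2, x1*x2, x1^2, x2^2].
    basis = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                             X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
    coeffs, *_ = np.linalg.lstsq(basis, y, rcond=None)

    # The fitted surface can now be evaluated anywhere in the parameter ranges.
    def response(p1, p2):
        return coeffs @ np.array([1.0, p1, p2, p1 * p2, p1 ** 2, p2 ** 2])

    print(response(0.5, 0.5))   # interpolated output at an unsampled point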
DesignXplorer offers two types of GDO systems: Response Surface Optimization and Direct Optim-
ization.
The Direct Optimization system contains a single cell: Optimization (p. 55)
3D ROM System
A 3D ROM (p. 61) system is used to produce a ROM (p. 235) from a series of simulations. As a stand-
alone digital object, a ROM offers a mathematical representation for computationally inexpensive,
near real-time analysis.
DesignXplorer Components
Design Exploration systems are made up of one or more components or cells. Double-clicking a cell
in the Project Schematic opens its component tab. Virtually all component tabs contain these four
panes: Outline, Properties, Table, and Chart.
The content in a pane depends on the object selected in the Outline pane. The following topics sum-
marize the component tabs for the various cells in Design Exploration systems:
Design of Experiments Component Reference
Parameters Correlation Component Reference
Response Surface Component Reference
Optimization Component Reference
3D ROM Component Reference
Note:
The DOE for a 3D ROM system can share data only with the DOE for another
DesignXplorer system of the same type.
The Design of Experiments tab allows you to preview or generate design points. The Preview oper-
ation generates design points but does not solve them. The Update operation both generates and
solves design points. On the Design of Experiments tab, you can set input parameter limits and
properties for the DOE and view the design points table and several parameter charts. For more in-
formation, see:
The following panes in the Design of Experiments tab allow you to customize your DOE and view
the updated results.
Outline:
• Select output parameters and view their minimum and maximum values.
• Select charts to view available charts and change chart types and data properties. You can use the
Toolbox or context menu to insert as many charts as you want.
Properties:
• Preserve Design Points After DX Run: Specifies whether to retain design points at the project
level each time that the DOE is updated. If you select this check box, Retain Data for Each Preserved
Design Point is shown. If you also select this check box, in addition to saving the design points
to the project's design points table, data for each design point is saved. For more information, see
Retaining Data for Generated Design Points (p. 280) in the Workbench User's Guide.
• Number of Retries: Specify the number of times DesignXplorer is to try to update failed design
points. If the Retry All Failed Design Points check box is not selected on the Design Exploration
tab in the Options dialog box, the default is 0. However, you can specify the default number of
retries for this specific project here. When the Number of Retries property is not set to 0, Retry
Delay (seconds) specifies how much time is to elapse between tries.
• Design of Experiments Type: Specifies the DOE type to use. Choices follow. For descriptions and
specific properties, see DOE Types (p. 82).
– Box-Behnken Design
– Custom
– Custom + Sampling
– External sampling methods as defined by the DOE extensions loaded to the project.
Table:
Displays the design points and input parameter data when previewing. On updating, the table also
displays output parameter data. You can add data points manually if Design of Experiments Type
is set to Custom.
Chart:
• Display Parameter Full Name: Indicate whether to show the full parameter name or the
short parameter name.
• Use the Enabled check boxes for the input parameters to enable or disable the display of
parameter axes on the chart.
• Click a line on the chart to display input and output values for this line in the Input Para-
meters and Output Parameters sections of the Properties pane.
The chart displays only updated design points. If the DOE does not yet contain any updated design
points, output parameters are automatically disabled from the chart. The axes that are visible cor-
respond to the input parameters for the design points.
The Parameters Parallel chart supports interactive exploration of the DOE. When you place the
mouse cursor on the graph, sliders appear at the upper and lower bounds of each axis. You can
use the sliders to easily filter for each parameter. Design points that fall outside of the bounds
defined by the sliders are dynamically hidden.
You can also look at the data in this chart as a Spider chart. Right-click in an empty area of the
chart and select Edit Properties. Then, in the Properties pane for the chart, change Chart Type
to Spider. The Spider chart shows all input and output parameters arranged in a set of radial axes
spaced equally. Each design point is represented by a corresponding envelope defined in the radial
axes.
• Display Parameter Full Name: Indicate whether to show the full parameter name or the
short parameter name.
• X-Axis (Bottom), X-Axis (Top), Y-Axis (Left), Y-Axis (Right): Design points can be plotted for either X-Axis (Bottom) or X-Axis (Top) against input and output parameters on any of the other axes.
The following panes in the Parameters Correlation tab allow you to customize your search and view
the results.
Outline:
• Select Parameters Correlation and change properties and view the number of samples generated
for this correlation.
• Select output parameters and view their minimum and maximum values.
• Select charts to view available charts and change chart types and data properties.
Properties:
• Preserve Design Points After DX Run: Specifies whether to retain design points at the project
level from this parameters correlation. If this check box is selected, Retain Data for Each Preserved
Design Point is shown. If you also select this check box, in addition to saving the design points
to the project's design points table, data for each design point is saved. For more information, see
Retaining Data for Generated Design Points (p. 280) in the Workbench User's Guide.
• Number of Retries: Number of times DesignXplorer is to try to update failed design points. If the
Retry All Failed Design Points check box is not selected on the Design Exploration tab in the
Options dialog box, the default is 0. However, for a correlation that is not linked to a response
surface, you can specify the default number of retries for this specific project here. When Number
of Retries is not set to 0, Retry Delay (seconds) specifies how much time is to elapse between
tries.
• Reuse the samples already generated: Specifies whether to reuse the samples generated in a
previous correlation.
• Correlation Type: Algorithm to use for the parameter correlation. Choices are:
– Spearman
– Pearson
• Number Of Samples: Maximum number of samples to generate for this correlation. This value
must be greater than the number of enabled input parameters.
• Auto Stop Type: Choices are Execute All Simulations and Enable Auto Stop. When Enable Auto
Stop is selected, set the additional options that are shown:
– Mean Value Accuracy: Desired accuracy for the mean value of the sample set.
– Standard Deviation Accuracy: Desired accuracy for the standard deviation of the sample set.
– Convergence Check Frequency: Number of simulations to execute before checking for conver-
gence. This value must be greater than the number of enabled input parameters.
– Size of Generated Sample Set: Read-only value indicating the number of samples generated
for the correlation solution.
• Correlation Filtering: Specifies whether to filter major input parameters based on correlation values.
• R2 Contribution Filtering: Specifies whether to filter major input parameters based on R2 contri-
butions.
• Maximum Number of Major Inputs: Maximum number of input parameters selected as major input
parameters. By default, this number is the minimum of the current number of input parameters
and 20 (where 20 is the recommended maximum number of input parameters for a response surface).
Table:
Displays both a correlation matrix and a determination matrix for the input and output parameters.
Chart:
You can create a Correlation Scatter chart for a given parameter combination by right-clicking the
associated cell in the Correlation Matrix chart and selecting Insert <x-axis> vs <y-axis> Correlation
Scatter.
To view the Correlation Scatter chart, in the Outline pane under Charts, select Correlation Scatter.
Use the Properties pane as follows:
• Enable or disable the display of the quadratic and linear trend lines.
• View the coefficient of determination and the equations for the quadratic and linear trend
lines.
To view the Correlation Matrix chart, in the Outline pane under Charts, select Correlation Matrix.
Use the Properties pane as follows:
To view the Determination Matrix chart, in the Outline pane under Charts, select Determination Matrix. Use the Properties pane as follows:
Sensitivities Chart
The Sensitivities chart allows you to graphically view the global sensitivities of each output para-
meter with respect to the input parameters.
To view the Sensitivities chart, in the Outline pane under Charts, select Sensitivities. Use the
Properties pane as follows:
The full model R2 represents the variability of the output parameter that can be explained by a
linear (or quadratic) correlation between the input parameters and the output parameter.
The value of the bars corresponds to the linear (or quadratic) determination coefficient of each input
associated to the selected output.
To view the Determination Histogram chart, in the Outline pane under Charts, select Determination
Histogram. Use the Properties pane as follows:
• Threshold R2: Enables you to filter input parameters by hiding those with a determination
coefficient lower than the given threshold.
The following panes in the Response Surface tab allow you to customize your response surface and
view the results:
Outline:
• Select Response Surface to specify the Response Surface Type, change response surface properties,
and view the tolerances table (Genetic Aggregation only) or the response points table (all other
response surface types).
• Under Refinement, view the tolerances table (Genetic Aggregation only), Convergence Curves
chart (Genetic Aggregation only), and refinement points table.
• Under Quality, view the goodness of fit results, the verification points table, and the Predicted vs
Observed chart.
• Under Response Points, view the response points table and view the properties and available
charts for individual response points.
Properties:
• Preserve Design Points After DX Run: Select this check box if you want to retain at the project
level design points that are created when refinements are run for this response surface. If this
property is set, Retain Data for Each Preserved Design Point is available. If this check box is se-
lected, in addition to saving the design points to the project's design points table, data for each
design point is saved. For more information, see Retaining Data for Generated Design Points (p. 280).
Note:
Selecting this check box does not preserve any design points unless you run either a
manual refinement or one of the Kriging refinements because the response surface uses
the design points generated by the DOE. If the DOE of the response surface does not
preserve design points, when you perform a refinement, only the refinement points are
preserved at the project level. If the DOE is set to preserve design points and the response
surface is also set to preserve design points, when you perform a refinement, the project
contains the DOE design points and the refinement points.
• Number of Retries: Number of times DesignXplorer is to try to update the failed design points. If
the Retry All Failed Design Points option is not selected on the Design Exploration tab in the
Options dialog box, the default is 0. However, you can specify the default number of retries for
this specific project here. When Number of Retries is not set to 0, Retry Delay (seconds) specifies
how much time is to elapse between tries.
• Response Surface Type: The type of response surface. Choices follow. For descriptions, see Response
Surface Types (p. 99).
– Genetic Aggregation
– Kriging
– Non-Parametric Regression
– Neural Network
– Sparse Grid
• Refinement Type: Where applicable, select Manual to enter refinement points manually or Auto-
Refinement to automate the refinement process.
• Generate Verification Points: Specify the number of verification points to be generated. The default
value is 1. The results are included in the verification points table and the Goodness of Fit chart.
• Settings for selected response surface type, as applicable. For more information, see Response
Surface Types (p. 99).
Table: Depending on your selection in the Outline pane, displays one of the following tables:
• Min-Max Search
• Refinement Points
• Goodness of Fit
• Verification Points
• Response Points
Chart:
Displays the available charts for the response point selected in the Outline pane:
Response Chart
Local Sensitivity Charts
Spider Chart
Response Chart
The Response chart allows you to graphically view the effect that changing each input parameter
has on the displayed output parameter.
You can add response points to the response points table by right-clicking the Response chart and
selecting Explore Response Surface at Point, Insert as Design Point, Insert as Refinement Point
or Insert as Verification Point.
• Display Parameter Full Name: Specifies whether to display the full parameter name or the
parameter ID on the chart.
• Chart Resolution Along X: Sets the number of points to display on the X axis response
curve. The default is 25.
• Chart Resolution Along Y: Sets the number of points to display on the Y axis response
curve when Mode is set to 3D. The default is 25.
• Number of Slices: Sets the number of slices when Mode is set to 2D Slices and there are
continuous input parameters.
• Show Design Points: Specifies whether to display design points on the chart.
• Choose the input parameters to display in either the first axis option or the first and second
axis options, depending on the chart mode.
• Use the sliders or drop-down menus to change the values of the input parameters that are
not displayed to see how they affect the values of displayed parameters. You can enter
specific values in the boxes above the sliders.
• View the interpolated output parameter values for the selected set of input parameter values.
Local Sensitivity Charts
• Display Parameter Full Name: Specifies whether to display the full parameter name or the parameter ID on the chart.
• Chart Mode: Set to Bar or Pie. This option is available only for the Local Sensitivity chart.
• Axes Range: Set to Use Min Max of the Output Parameter or Use Chart Data.
• Chart Resolution: Set the number of points per curve. The default is 25.
• Use the sliders to change the values of the input parameters to see how the sensitivity
changes for each output.
• View the interpolated output parameter values for the selected set of input parameter values.
Spider Chart
The Spider chart allows you to visualize the effect that changing the input parameters has on all
of the output parameters simultaneously. Use the Properties pane as follows:
• Display Parameter Full Name: Specifies whether to display the full parameter name or the
parameter ID on the chart.
• Use the sliders to change the values of the input parameters to see how they affect the
output parameters.
• View the interpolated output parameter values for the selected set of input parameter values.
The following panes in the Optimization tab allow you to customize your GDO and view the results:
Outline: Allows you to select the following nodes and perform related actions in the tab:
• Optimization:
– Change optimization properties and view the size of the generated sample set.
– View an optimization summary with details on the study, method, and returned candidate
points.
– View the Convergence Criteria chart for the optimization. For more information, see Using
the Convergence Criteria Chart (p. 215).
– Select an objective or constraint and view its properties, the calculated minimum and max-
imum values of each of the outputs, and History chart. For more information, see History
Chart (p. 59).
• Domain:
– Select an input parameter or parameter relationship to view and edit its properties or to
see its History chart. For more information, see History Chart (p. 59).
• Raw Optimization Data: For Direct Optimization systems, when an optimization update is finished,
DesignXplorer saves the design point data calculated during the optimization. You can access this
data by selecting Raw Optimization Data in the Outline pane.
Note:
The design point data is displayed without analysis or optimization results. The data
does not show feasibility, ratings, Pareto fronts, and so on.
• Convergence Criteria: View the Convergence Criteria chart and specify the criteria to display. For
more information, see Using the Convergence Criteria Chart (p. 215).
• Results:
Select one of the result types available to view results in the Charts pane and, in some cases, in
the Table pane. When a result is selected, you can change the data properties of its related chart
(X axis and Y axis parameters, parameters to display on the bar chart, and so on), and edit its table
data.
Properties: When Optimization is selected in the Outline pane, the Properties pane allows you to
specify:
• Method Name: Choices for methods follow. If optimization extensions are loaded to the project,
you can also choose an external optimizer.
– MOGA
– NLPQL
– MISQP
– Screening
– Adaptive Single-Objective
– Adaptive Multiple-Objective
• Relevant settings for the selected Method Name. Depending on the method of optimization, these
can include specifications for samples, sample sets, number of iterations, and allowable convergence
or Pareto percentages.
Table: Before the update, specify input parameter domain settings and objective and constraint set-
tings:
• Optimization Domain
– Set the Upper Bound and Lower Bound for each input parameter. For NLPQL and MISQP optim-
izations, also set the Starting Value.
– Set Left Expression, Right Expression, and Operator for each parameter relationship.
For more information, see Defining the Optimization Domain (p. 199).
– For each parameter, you can define an objective, constraint, or both. Options vary according to
parameter type.
– For a parameter with Objective Type set to Seek Target, you specify a target.
– For a parameter with Constraint Type set to Lower Bound <= Values <= Upper Bound, you
use Lower Bound and Upper Bound to specify the target range.
– For a parameter with an Objective or Constraint defined, you specify the relative Objective
Importance or Constraint Importance of that parameter in regard to the other objectives.
– For a parameter with a Constraint defined (such as output parameters, discrete parameters, or
continuous parameters with manufacturable values), you specify the Constraint Handling for
that parameter.
For more information, see Defining Optimization Objectives and Constraints (p. 203).
During an update of a Direct Optimization system, if you select an objective, constraint, or input
parameter in the Outline pane, the Table pane shows all of the design points being calculated by
the optimization. For iterative optimization methods, the display is refreshed dynamically after each
iteration, allowing you to track the progress of the optimization by simultaneously viewing design
points in the Table pane, History charts in the Charts pane, and History chart sparklines in the Outline
pane. For the Screening optimization method, these objects are updated only after the optimization
has completed.
After the update, when you select Candidate Points under Results in the Outline pane, the Table
pane displays up to the maximum number of requested candidates generated by the optimization.
The number of gold stars or red crosses displayed next to each objective-driven parameter indicates how well the parameter meets the stated objective, from three red crosses for the worst to three gold stars for the best. The Table pane also allows you to add and edit your own candidate points and view values of candidate point expressions, and it calculates the percentage of variation for each parameter for which an objective has been defined. For more information, see Working with Candidate Points (p. 210).
Note:
Goal-driven parameter values with inequality constraints receive either three stars to
indicate that the constraint is met or three red crosses to indicate that the constraint
is not met.
You can verify predicted output values for each candidate. For more information, see Verifying Can-
didates by Design Point Update (p. 214).
Results:
The Convergence Criteria chart is the default optimization chart, so it displays in the Chart pane
unless you select another chart type. When Convergence Criteria is selected in the Outline pane,
the Properties pane displays the convergence criteria relevant to the selected optimization method
in read-only mode. Various generic chart properties can be changed for the Convergence Criteria
chart.
The chart remains available when the optimization update is complete. The legend shows the color-
coding for the convergence criteria.
• Using the Convergence Criteria Chart for Multiple-Objective Optimization (p. 216)
• Using the Convergence Criteria Chart for Single-Objective Optimization (p. 217)
History Chart
The History chart allows you to view the history of a single enabled objective, constraint, input
parameter, or parameter relationship during the update process. For iterative optimization methods,
the History chart is updated after each iteration. For the Screening optimization method, it is updated
only when the optimization is complete.
In the Outline pane, select an object under Objectives and Constraints or an input parameter or
parameter relationship under Domain. The Properties pane displays various properties for the se-
lected object. Various generic chart properties can be changed for both types of History chart.
In the Chart pane, the color-coded legend allows you to interpret the chart. In the Outline pane,
a sparkline graphic of the History chart is displayed next to each objective, constraint, and input
parameter object.
• Working with the History Chart in the Chart Pane (p. 219)
Properties: The following options are applied to results in both the Table and Chart panes.
• Display Parameter Full Name: Specifies whether to display the full parameter name or short
parameter name.
• Coloring Method: Specifies whether to color the results according to candidate type or source
type.
• Show Starting Point: Select to show the starting point on the chart (NLPQL and MISQP only).
• Show Verified Candidates: Select to show verified candidates in the results (Response Surface
Optimization system only).
• Change various generic chart properties for the results in the Chart pane.
Tradeoff Chart
The Tradeoff chart allows you to view the Pareto fronts created from the samples generated in the goal-driven optimization (a short sketch of how a Pareto front is identified follows the list below). In the Outline pane under Charts, select Tradeoff to display this chart in the Chart pane. Use the Properties pane as follows:
• Number of Pareto Fronts to Show: Set the number of Pareto fronts that are displayed on
the chart.
• Show infeasible points: Enable or disable the display of infeasible points. This option is
available when constraints are defined.
• Click a point on the chart to display a Parameters section that shows the values of the input
and output parameters for this point.
• Change various generic chart properties for the results in the Chart pane.
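As background for these Pareto fronts, the following generic sketch (plain Python, assuming every objective is minimized; not DesignXplorer's algorithm) identifies the first Pareto front, that is, the subset of samples that no other sample dominates.

    # Generic sketch: extract the first Pareto front (non-dominated points) from a
    # sample set, assuming every objective is to be minimized.
    def dominates(a, b):
        """True if point a is at least as good as b in all objectives and better in one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def first_pareto_front(samples):
        return [p for p in samples
                if not any(dominates(q, p) for q in samples if q is not p)]

    # Example with two objectives (both minimized):
    samples = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
    print(first_pareto_front(samples))   # [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]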
Samples Chart
The Samples chart allows you to visually explore a sample set given defined objectives. In the
Outline pane under Charts, select Samples to display this chart in the Chart pane. Use the Prop-
erties pane as follows:
• Chart Mode: Set to Candidates or Pareto Fronts. When Pareto Fronts is selected, the following options can be set:
– Number of Pareto Fronts to Show: Either enter the value or use the slider to select the
number of Pareto fronts to display.
• Show infeasible points: Enable or disable the display of infeasible points. This option is
available when constraints are defined.
• Click a line on the chart to display the values of the input and output parameters for this
line in the Parameters section. Use the Enabled check box to enable or disable the display
of parameter axes on the chart.
• Change various generic chart properties for the results in the Chart pane.
Sensitivities Chart
The Sensitivities chart allows you to graphically view the global sensitivities of each output para-
meter with respect to the input parameters. In the Outline pane under Charts, select Sensitivities
to display this chart in the Chart pane. Use the Properties pane as follows:
• Change various generic chart properties for the results in the Chart pane.
In the Outline pane for the Design of Experiments (3D ROM) cell, all input parameters are selected
for use by default. In the Properties pane for each input variable, you set lower and upper bounds.
Accepting the default values for all other properties is generally recommended.
When you perform an update, the design points for building the ROM are inserted in the Table
pane and their results are calculated. As each design point is updated, its results are saved to a
ROM snapshot file (ROMSNP).
ROM Builder
The ROM Builder cell provides for setting up the solver system for ROM production. While ROM setup is specific to the Ansys product, the workflow for producing a ROM is generic. When you perform an update, the ROM is built. Once the update finishes, you can open the ROM in the ROM Viewer and export the ROM.
Note:
Currently, you can set up and build a ROM only for a Fluent system. ROM production
examples (p. 237) are provided.
Using Parameters Correlations
The application of goal-driven optimization in a finite element-based framework is always a challenge in terms of solving time, especially when the finite element model is large. For example, hundreds or thousands of finite element simulation runs are not uncommon. If one simulation run takes hours to complete, it is almost impractical to perform optimization at all with thousands or even hundreds of simulations.
In a DOE, the number of sampling points increases dramatically as the number of input parameters increases. For example, a total of 149 sampling points (finite element evaluations) are needed for 10 input variables using Central Composite Design with fractional factorial design. As the number of input variables increases, the analysis becomes more and more intractable. In this case, you would want to exclude unimportant input parameters from the DOE sampling to reduce unnecessary sampling points. A correlation matrix is a tool to help identify which input parameters are unimportant and can therefore be treated as deterministic parameters.
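As a cross-check on the 149 figure quoted above, the CCD point count can be broken down into one center point, two axis points per input, and a fractional factorial block; the short sketch below (generic Python, with the fractional reduction for 10 inputs assumed here to be 2^(10-3)) reproduces the total.

    # Rough count of CCD sampling points: 1 center point + 2 axis points per input
    # + a 2**(k - f) fractional factorial block. For k = 10 inputs, a reduction of
    # f = 3 (an assumption in this sketch) reproduces the 149 points quoted above.
    def ccd_point_count(k, fraction):
        return 1 + 2 * k + 2 ** (k - fraction)

    print(ccd_point_count(10, 3))   # 149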
• Determine which input parameters have the most (and the least) effect on your design.
A Parameters Correlation system also provides a variety of charts to assist in your assessment of
parametric effects. For more information, see Working with Parameters Correlation Charts (p. 73).
1. In the Project Schematic, drag a Parameters Correlation system from under Design Exploration
in the Toolbox and drop it under the Parameter Set bar.
4. If you want to review the samples to be calculated before generating them, in the Outline pane,
right-click Parameters Correlation and select Preview.
5. To generate the samples, in the Outline pane, right-click Parameters Correlation and select Update.
6. When the update finishes, use the filtered results in the Table pane to find the most relevant inputs
for a selected output. For more information, see Reviewing Filtered Correlation Data (p. 69).
7. Use the various charts in the Outline pane to examine the results. For more information, see Using
DesignXplorer Charts (p. 32).
Reuse the samples already generated
Select this check box if you want to reuse the samples generated in a previous correlation.
Correlation Type
Select the Spearman or Pearson correlation type. The default value is determined by the Correl-
ation Coefficient Calculation Type option in Tools → Options.
• Pearson: Select this option to correlate linear relationships. This correlation method uses actual
data to evaluate the correlation and bases correlation coefficients on sample values.
For more information on these correlation types, see Sample Generation (p. 67).
Number Of Samples
Specify the maximum number of samples to be generated in the correlation sample set. The default
is 100. This value must be greater than the number of enabled input parameters.
• Execute All Simulations: DesignXplorer updates the number of design points specified by the
Number of Samples property.
• Enable Auto Stop: The number of samples required to calculate the correlation is determined
according to the convergence of the mean and standard deviation of the output parameters.
At each iteration, the mean and standard deviation convergences are checked against the level
of accuracy specified by the Mean Value Accuracy and Standard Deviation Accuracy properties.
For more information, see Correlation Convergence Process (p. 68).
By default, all output parameters are taken into account for the filtering process. You can change
these filtering options before updating the correlation study. You can also change the filtering options
after the update has been completed. When you change options, a new design point update is not
necessary. An Update operation updates only the display to show the results sorted according to
your selected criteria.
To indicate that an output parameter is not to be considered in the filtering process, you can use
either of these methods to select the Ignore for Filtering check box:
• In the Outline pane, select the output. Then, in the Property pane, select Ignore for Filtering.
• In the Outline pane, select one or more outputs, right-click, and select Ignore for Filtering.
To indicate that an output parameter is to be considered in the filtering process, in the Outline
pane, select one or more outputs, right-click, and select Use for Filtering.
In the Outline pane, your filtering outputs are indicated by a funnel icon. When Parameters Cor-
relation is selected in the Outline pane, your filtering parameters are also listed in the summary
report in the Table pane.
To specify your filtering criteria, select Parameters Correlation in the Outline pane. Then, in the
Properties pane under Filtering Method, set properties:
• Relevance Threshold: Determines how strictly inputs are filtered for inclusion in the major input
category. To estimate the relevance of parameter relationships, a metric is computed for any input-
output pair. If one of the metrics for a given input parameter exceeds the value set for Relevance
Threshold, that parameter is categorized as a major input. Otherwise, it is categorized as a minor
input.
The default value is 0.5, with possible values ranging from 0 to 1. A value of 1 applies the strictest filter, and a value of 0 the most relaxed. For example, if there are 10 inputs and Relevance Threshold is set to 0, all 10 of the inputs are categorized as major inputs. If Relevance Threshold is changed to 1, a number of the inputs are filtered out and categorized instead as minor inputs.
For more information on how Relevance Threshold filters inputs, see Parameters Correlation
Filtering Theory (p. 299).
• Correlation Filtering: This is based on Pearson's, Spearman's, and quadratic correlation. For each
input-output pair, both correlation complexity and the number of samples are used to determine
relevance. The greater the number of samples, the greater the probability that the correlation
detected is valid. If the relevance is greater than the Relevance Threshold value, the input is
categorized as a major input. This property is enabled by default.
• R2 Contribution Filtering: Computes a reduced model and evaluates the R2 contribution of each
input parameter on the filtering output. The R2 contribution is then used to calculate relevance.
If the relevance is greater than the Relevance Threshold value, the input is categorized as a
major input. This property is enabled by default.
• Maximum Number of Major Inputs: Maximum number of input parameters that can be categor-
ized as major inputs.
The maximum value is equal to the number of enabled input parameters. The minimum value
is 1. The default is 20 if the number of input parameters is greater than or equal to 20. This is
the recommended maximum number of inputs for a response surface.
Once you have set your filtering criteria, update the Parameters Correlation cell to filter the results. If design points have already been updated, they are not updated again; the update only refreshes the results according to the new filter. This allows you to change your filter settings and review multiple scenarios without having to run a new design point update each time. A small sketch of this relevance-threshold filtering follows the note below.
Note:
When you select only one filtering method (Correlation Filtering or R2 Contribution
Filtering), the relevance threshold can change the list of major input parameters but
not their best relevance values.
When you select both filtering methods, Correlation Filtering is applied first, which
retains a list of major input parameters based on the Relevance Threshold value. The
R2 Contribution Filtering method is then applied, based on the list of major input
parameters retained by the Correlation Filtering method. This can cause the best relev-
ance values displayed in the tables of major and minor input parameters to change
slightly when you change the Relevance Threshold value with both filtering methods
enabled.
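The filtering described above can be pictured with a minimal sketch (generic Python, not the DesignXplorer implementation): each input keeps its best relevance metric over the filtering outputs, inputs whose best metric exceeds the threshold become major inputs, and the list of major inputs is capped at the maximum number.

    # Illustrative sketch of relevance-threshold filtering (not the DesignXplorer code).
    # 'relevance' maps each input name to its metrics against the filtering outputs;
    # an input is "major" if its best metric exceeds the threshold, "minor" otherwise.
    def classify_inputs(relevance, threshold=0.5, max_major=20):
        best = {name: max(metrics.values()) for name, metrics in relevance.items()}
        major = sorted((n for n, r in best.items() if r > threshold),
                       key=lambda n: best[n], reverse=True)[:max_major]
        minor = [n for n in best if n not in major]
        return major, minor

    relevance = {
        "P1": {"Out1": 0.92, "Out2": 0.40},
        "P2": {"Out1": 0.10, "Out2": 0.35},
        "P3": {"Out1": 0.55, "Out2": 0.61},
    }
    print(classify_inputs(relevance, threshold=0.5))
    # (['P1', 'P3'], ['P2'])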
Sample Generation
Two correlation methods are available for evaluating the generated samples, Pearson's and Spearman's; a short comparison sketch follows the list below. Spearman's correlation:
• Recognizes monotonic relationships, which are less restrictive than linear ones. In a monotonic rela-
tionship, one of the following two things happens:
– As the value of one variable increases, the value of the other variable also increases.
– As the value of one variable increases, the value of the other variable decreases.
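To make the distinction concrete, the following sketch (SciPy, outside of DesignXplorer) compares the two coefficients on a monotonic but nonlinear relationship: Pearson's coefficient reacts to the departure from linearity, while Spearman's, which works on rank values, still reports a perfect monotonic correlation.

    # Pearson vs. Spearman on a monotonic but nonlinear relationship (y = x**3).
    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    x = np.linspace(1.0, 10.0, 20)
    y = x ** 3

    print(pearsonr(x, y)[0])    # less than 1.0: the relationship is not perfectly linear
    print(spearmanr(x, y)[0])   # 1.0: the relationship is perfectly monotonic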
During the update of a Parameters Correlation cell, you can monitor progress in the Progress pane.
The table of samples refreshes automatically as results are returned to DesignXplorer.
By clicking the red stop button to the right of the progress bar, you can interrupt the update. If
enough samples are calculated, partial correlation results are generated. You can see the results in
the Table pane of the component tab by selecting a chart object in the Outline pane.
To restart an interrupted update, you either right-click Parameters Correlation in the Outline pane
and select Update or click Update on the toolbar. The update restarts where it was interrupted.
1. The convergence status is checked each time the number of points specified for Convergence
Check Frequency has been updated.
• The mean and the standard deviation are calculated based on all up-to-date design points
available at this step.
• The mean is compared with the mean at the previous step. It is considered to be stable if the
difference is smaller than 1% by default (Mean Value Accuracy = 0.01).
• The standard deviation is compared with the standard deviation at the previous step. It is con-
sidered to be stable if the difference is smaller than 2% by default (Standard Deviation
Accuracy = 0.02).
3. If the mean and standard deviation are stable for all output parameters, the correlation is converged.
The convergence status is indicated by the Converged property. When the process converges, Converged is set to Yes and any remaining unsolved samples are automatically removed. If the process stops because the value for Number of Samples is reached before convergence, Converged is set to No.
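The stability test behind Auto Stop can be pictured with a generic sketch (plain Python using the default accuracies mentioned above, not the internal DesignXplorer code): the mean and standard deviation of the solved samples are compared with their values at the previous check, and an output is considered stable when both relative changes are within the requested accuracies.

    # Sketch of the Auto Stop stability test for one output parameter.
    # Default accuracies: 1% on the mean, 2% on the standard deviation.
    import statistics

    def is_stable(previous_samples, current_samples,
                  mean_accuracy=0.01, std_accuracy=0.02):
        mean_prev, mean_cur = statistics.mean(previous_samples), statistics.mean(current_samples)
        std_prev, std_cur = statistics.stdev(previous_samples), statistics.stdev(current_samples)
        mean_change = abs(mean_cur - mean_prev) / abs(mean_prev)
        std_change = abs(std_cur - std_prev) / abs(std_prev)
        return mean_change <= mean_accuracy and std_change <= std_accuracy

    previous = [10.0, 9.9, 10.1, 9.95, 10.05]
    current = previous + [9.92, 10.08]       # a few more solved samples
    print(is_stable(previous, current))      # True: mean and standard deviation barely moved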
Working with Correlation Results
Under Filtering Method, you see the relevance threshold, configuration, and filtering output para-
meters used to filter the correlation data.
Under Major Input Parameters and Minor Input Parameters, you see inputs sorted in descending
order according to their relevance to the output indicated. This output, shown in the Output Para-
meter column, is the output for which the input has the most relevance.
• The R2 Contribution and Correlation Value values reference the relationship for that input-
output pair.
• Correlation Value corresponds to the most relevant correlation among Pearson, Spearman,
or Quadratic correlations. For Pearson or Spearman, the value can have a (+/-) sign. For
Quadratic, the value is positive. For more information, see Relevance of the Correlation
Value (p. 299).
Note:
Unlike the Correlation Matrix chart, the Determination Matrix chart is not symmetric. The Determination
Matrix chart displays the R2 for each parameter pair. To view this chart, select it in the Outline pane.
Quadratic determination data is shown in both the Table and Chart panes.
In addition, quadratic information is shown in the Correlation Scatter chart and general parameters
correlation table. The Correlation Scatter chart displays both the quadratic trend line and the linear
trend line equation for the selected parameter pair. The general parameters correlation table shows
quadratic data, linear data, and correlation design points.
For each input-output pair, a hypothesis test is performed to determine whether the correlation is statistically true according to the significance level (or acceptable risk) set by the user. From the hypothesis test, a p-value (probability value) is calculated and compared with the significance level. If the p-value is greater than the significance level, it is concluded that the NULL hypothesis is true and that the input parameter is insignificant to the output parameter, and vice versa.
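The decision rule can be illustrated with a short sketch (generic Python with SciPy, not DesignXplorer's internal test): the p-value returned by a correlation hypothesis test is compared with the significance level, and the input is treated as significant only when the p-value does not exceed it.

    # Decision rule of the significance test: compare the p-value of the correlation
    # hypothesis test against the chosen significance level (DesignXplorer default 0.025).
    from scipy.stats import spearmanr

    def is_significant(x_samples, y_samples, significance_level=0.025):
        _, p_value = spearmanr(x_samples, y_samples)
        # A p-value greater than the significance level keeps the NULL hypothesis,
        # that is, the input is treated as insignificant for this output.
        return p_value <= significance_level

    x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
    noise = [0.3, -0.2, 0.1, 0.4, -0.5, 0.2, -0.1, 0.0]          # unrelated values
    trend = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1]          # clearly related values
    print(is_significant(x, noise))   # False: p-value above 0.025
    print(is_significant(x, trend))   # True: p-value well below 0.025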
In the Properties pane for the Sensitivities chart, you can choose the output parameters for which
you want to review sensitivities and the input parameters that you would like to evaluate for the
output parameters.
On the Design Exploration tab of the Options dialog box, the default setting for Significance Level is 0.025. Parameters that fail this significance test are shown with a flat line on the Sensitivities chart. The value displayed for these parameters when you place the mouse cursor over them on the chart is 0.
To view the actual correlation value of the insignificant parameter pair, you select Correlation Matrix
in the Outline pane and then place the mouse cursor over the square for that pair in the matrix. Or,
you can set Significance Level to 1 in the Options dialog box, which bypasses the significance test
and displays all input parameters on the Sensitivities chart with their actual correlation values.
In addition, linear information is shown in the Correlation Scatter chart and the general parameters
correlation table. The Correlation Scatter chart displays both the quadratic trend line and the linear
trend line equation for the selected parameter pair. The general parameters correlation table shows
quadratic data, linear data, and correlation design points.
If you select one or more design points and right-click, the context menu offers many of the same
options available for the project's design points table for the Parameter Set bar.
• In the Outline pane, click the locked icon to the right of the Design Points node.
• In the Properties pane, set Sampling Type from Auto (default) to Custom.
• Changes the icon to the right of the Design Points node from a locked icon to an unlocked icon. If you place the mouse cursor over the unlocked icon, the tooltip indicates that the sample set is customized.
• Displays a warning icon in the Message column for the top Parameters Correlation node and the Design Points node. If you click the icon, the message indicates that the sampling is custom and that results might be inaccurate because the distribution of the design points might not be optimal.
• Displays an information icon to the left of the Parameters Correlation node if there are not enough samples to update the correlation. If you click the icon, the message indicates that you must add more design points in the table or set Sampling Type back to Auto.
• Sets Auto Stop Type to Execute All Simulations and makes this property read-only.
• Hides Number of Samples because the read-only property Size of Generated Sample Set
already displays the size of the sample set.
• Makes input parameter values editable. As you enter or edit a value, DesignXplorer validates
the value against the ranges for the parameter.
• Allows you to set all output parameter values as editable if the custom correlation is not linked
to a Response Surface cell. For more information, see Editable Output Parameter Values (p. 292).
For a response surface-based correlation, values for the output parameters are calculated by
evaluating the response surface.
• Allows you to import design points from a CSV file, just like you do for a custom DOE. For
more information, see Importing Design Points from a CSV File (p. 96).
– For a standalone Parameters Correlation system, DesignXplorer imports values for input
parameters. During parsing and validation (p. 97) of the design point data, DesignXplorer
might ask whether you want to adjust parameter ranges. If values for output parameters
exist in the CSV file, DesignXplorer also imports them.
• Allows you to copy all or selected design points from the Parameter Set into the custom
correlation, just like you do for a custom DOE. For more information, see Copying Design
Points (p. 97).
Note:
• If you preview design points before unlocking the design points table,
DesignXplorer keeps the design points from the preview. You can edit or delete
these design points and add new ones.
• If you edit values for output parameters, you can clear the edited values.
DesignXplorer then marks these design points as out-of-date.
Working with Parameters Correlation Charts
When a Parameters Correlation system is updated, one instance of each chart is added to the project. In the Outline pane for the Parameters Correlation cell, you can select each object under Charts to view that chart in the Charts pane.
To add a new instance of a chart, double-click it in the Toolbox. The chart is added as the last entry
under Charts.
To add a Correlation Scatter chart for a particular parameter combination, right-click the associated cell
in the Correlation Matrix chart and select Insert <x-axis> vs <y-axis> Correlation Scatter.
Color-coding of the cells indicates the strength of the correlation. The correlation value is displayed
when you place the mouse cursor over a cell. The closer the absolute correlation value is to 1, the
stronger the relationship. A value of 1 indicates a positive correlation, which means that when the
first parameter increases, the second parameter increases as well. A value of −1 indicates a negative
correlation, which means that when the first parameter increases, the second parameter decreases.
When you run a Pearson's correlation, the square of the correlation value corresponds to the R2 of the linear fitting between the pair of parameters. When you run a Spearman's correlation, the square of the correlation value corresponds to the R2 of the linear fitting between the rank values of the pair of parameters. For more information on these correlation types, see Sample Generation (p. 67).
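This relationship between the correlation value and R2 can be checked with a short NumPy/SciPy sketch (outside of DesignXplorer): the square of the Pearson coefficient matches the R2 of a linear fit of the raw values, and the square of the Spearman coefficient matches the R2 of a linear fit of their rank values.

    # Checking the statement above: Pearson r**2 equals the R2 of a linear fit on the
    # raw values; Spearman rho**2 equals the R2 of a linear fit on the rank values.
    import numpy as np
    from scipy.stats import pearsonr, spearmanr, rankdata

    def linear_fit_r2(x, y):
        slope, intercept = np.polyfit(x, y, 1)
        residuals = y - (slope * x + intercept)
        return 1.0 - residuals.var() / y.var()

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([1.2, 1.9, 3.5, 3.1, 5.4, 8.0])

    print(pearsonr(x, y)[0] ** 2, linear_fit_r2(x, y))                      # equal (up to rounding)
    print(spearmanr(x, y)[0] ** 2, linear_fit_r2(rankdata(x), rankdata(y))) # equal (up to rounding)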
In the following parameters correlation table, input parameter P13-PIPE_Thickness is a major input
with a strong effect on the design.
On the other hand, input parameter P12-Hoop Dist is not important to the study because it has
little effect on the outputs. In this case, you might want to disable P12-Hoop Dist by clearing the
Enabled check box in the Properties pane. When the input is disabled, the chart changes accordingly.
To disable parameters, you can also right-click a cell corresponding to that parameter and select the
desired option from the context menu. You can disable the selected input, disable the selected output,
disable all other inputs, or disable all other outputs.
To generate a Correlation Scatter chart for a given parameter combination, right-click the corresponding
cell in the correlation matrix chart and select Insert <x-axis> vs <y-axis> Correlation Scatter.
If desired, you can export the correlation matrix data to a CSV file by selecting the Export Chart Data
as CSV context option. For more information, see Exporting Design Point Parameter Values to a
Comma-Separated Values File in the Workbench User's Guide.
The Correlation Scatter chart allows you to view the degree of linear and quadratic correlation between a pair of parameters via a graphical presentation of the linear and quadratic trends.
Note:
You can create a Correlation Scatter chart for a given parameter combination by right-clicking the
corresponding cell in the correlation matrix chart and selecting Insert <x-axis> vs <y-axis> Correlation
Scatter.
In this example, under Trend Lines in the Properties pane, Linear and Quadratic display equations
for R2 values.
Because both the Linear and Quadratic properties are enabled in this example:
• The equations for the linear and quadratic trend lines are shown in the chart legend.
• The linear and quadratic trend lines are each represented by a separate line on the chart. The
closer the samples lie to the curve, the closer the coefficient of determination is to the optimum
value of 1.
When you export the Correlation Scatter chart data to a CSV file or generate a report, the trend line
equations are included in the export and are shown in the CSV file or Workbench project report. For
more information, see Exporting Design Point Parameter Values to a Comma-Separated Values File
in the Workbench User's Guide.
Determination Matrix Chart
Color-coding of the cells indicates the strength of the correlation (R2). The R2 value is displayed when you place the mouse cursor over a cell. The closer the R2 value is to 1, the stronger the relationship.
In the following determination matrix, input parameter P5–Tensile Yield Strength is a major input
because it drives all the outputs.
You can disable inputs that have little effect on the outputs. To disable a parameter in the chart:
• In the Properties pane, clear the Enabled check box for the parameter.
• Right-click a cell corresponding to the parameter and select an option from the context menu. You
can disable the selected input, disable the selected output, disable all other inputs, or disable all
other outputs.
You can also select the Export Chart Data as CSV context option to export the correlation matrix
data to a CSV file. For more information, see Exporting Design Point Parameter Values to a Comma-
Separated Values File in the Workbench User's Guide.
When you view a Determination Histogram chart, you should also check the Full Model R2 (%) value
to see how well output variations are explained by input variations. This value represents the variab-
ility of the output parameter that can be explained by a linear or quadratic correlation between the
input parameters and the output parameter. The closer this value is to 100%, the more certain it is
that output variations result from the inputs. The lower the value, the more likely it is that other factors, such as noise, mesh error, or an insufficient number of points, are causing the output variations.
In the following figure, you can see that input parameters P3–LENGTH, P2–HEIGHT, and P4–FORCE
all affect output P8–DISPLACEMENT. You can also see that of the three inputs, P3–LENGTH has by
far the greatest effect. The value for the linear determination is 96.2%.
To view the chart for a quadratic determination, in the Properties pane, set Determination Type to Quadratic. With a quadratic determination type, input P5–YOUNG could also have a slight effect on P8–DISPLACEMENT. For example, Full Model R2 (%) could improve slightly, perhaps to 97.436%.
You can filter your inputs to keep only the most important parameters by selecting and clearing their
check boxes in the Outline pane.
Generally, the effect of an input parameter on an output parameter is driven by the following two
things:
• The amount by which the output parameter varies across the variation range of an input parameter.
• The variation range of an input parameter. Typically, the wider the variation range, the larger the
effect of the input parameter.
The statistical sensitivities are based on the Spearman rank order correlation coefficients, which sim-
ultaneously take both aspects into account.
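The following minimal sketch shows how a Spearman rank order correlation coefficient can be computed for one input/output pair over a set of design points, using SciPy. The data is made up and this is not the DesignXplorer implementation; it only illustrates that the coefficient captures monotonic relationships even when they are nonlinear.

```python
# Minimal sketch: Spearman rank-order correlation between an input and an
# output over a set of design points (illustrative data only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
length = rng.uniform(100.0, 300.0, 50)                        # hypothetical input range
displacement = 1e-4 * length ** 3 + rng.normal(0, 50.0, 50)   # monotonic, nonlinear output

rho, p_value = stats.spearmanr(length, displacement)
print(f"Spearman rho = {rho:.3f} (p-value = {p_value:.2e})")
# A |rho| near 1 indicates a strong monotonic relationship, even a nonlinear one.
```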
Using Design of Experiments
Design of Experiments (DOE) is a technique used to scientifically determine the location of sampling
points. It is included as part of response surface, goal-driven optimization, and other analysis systems.
A wide range of DOE algorithms or methods is available in the engineering literature. However, they
all have common characteristics. They try to locate sampling points such that the space of random input
parameters is explored in the most efficient way, or they try to obtain the required information with a
minimum of sampling points.
Sample points in efficient locations not only reduce the required number of sampling points but also
increase the accuracy of the response surface that is derived from the results of the sampling points.
By default, the deterministic method uses Central Composite Design, which combines one center point,
points along the axis of the input parameters, and the points determined by a fractional factorial design.
For more information, see DOE Types (p. 82).
Once you set up your input parameters, you update the DOE, which submits the generated design
points to the analysis system to determine a solution. Design points are solved simultaneously if the
analysis system is set up to do so. Otherwise, design points are solved sequentially. After the solution
is complete, you update the Response Surface cell, which generates response surfaces for each output
parameter based on the data in the generated design points.
Note:
If you change the DOE type after doing an initial analysis and preview the design points table, any
design points generated for the new algorithm that are the same as design points solved for a previous
algorithm appear as up-to-date. Only the design points that are different from any previously submitted
design points need to be solved.
You should set properties for your DOE before generating your design points table. The following topics
describe setting up and solving a DOE:
Setting Up the DOE
DOE Types
Number of Input Parameters for DOE Types
Comparison of LHS and OSF DOE Types
Using a Central Composite Design DOE
Upper and Lower Locations of DOE Points
DOE Matrix Generation
Exporting and Importing Design Points
3. In the Properties pane under Design of Experiments, make a selection for Design of Experiments
Type. For descriptions, see DOE Types (p. 82).
4. Specify additional properties for the DOE. The selection for Design of Experiments Type determines
what properties are available.
DOE Types
In the Properties pane for the Design of Experiments cell, Design of Experiments Type specifies the
algorithm or method for locating sampling points. DesignXplorer supports several DOE types:
Central Composite Design (CCD)
Optimal Space-Filling Design (OSF)
Box-Behnken Design
Custom
Custom + Sampling
Sparse Grid Initialization
Latin Hypercube Sampling Design
External Design of Experiments
Central Composite Design (CCD)
• Design Type: Design type to use for CCD to improve the response surface fit for the DOE. For each
design type, the alpha value is defined as the location of the sampling point that accounts for all
quadratic main effects. The following CCD design types are available:
– Face-Centered: A three-level design with no rotatability. The alpha value equals 1.0. A Template
Type setting automatically appears.
– Rotatable: A five-level design that includes rotatability. The alpha value is calculated based on
the number of input variables and a fraction of the factorial part. A design with rotatability has
the same variance of the fitted value regardless of the direction from the center point. A Template
Type setting automatically appears.
– G-Optimality: Minimizes a measure of the expected error in a prediction and minimizes the
largest expected variance of prediction over the region of interest.
– Auto-Defined: Design exploration automatically selects the design type based on the number of input variables. Use of this option is recommended for most cases because it automatically selects G-Optimality when the number of input variables is five and VIF-Optimality otherwise. However, you can select Rotatable as the design type if the default option does not provide good values for the goodness of fit from the response surface plots.
For more information, see Using a Central Composite Design DOE (p. 92).
• Template Type: Enabled when Design Type is set to either Face-Centered or Rotatable. Choices
are Standard (default) and Enhanced. Choose Enhanced for a possible better fit for the response
surfaces.
For more information, see Using a Central Composite Design DOE (p. 92).
Optimal Space-Filling Design (OSF)
To offset the noise associated with physical experimentation, classic DOE types such as CCD focus on
parameter settings near the perimeter of the design region. Because computer simulation is not quite
as subject to noise, OSF is able to distribute the design parameters equally throughout the design
space with the objective of gaining the maximum insight into the design with the fewest number of
points. This advantage makes it appropriate when a more complex modeling technique such as Kriging,
non-parametric regression, or neural networks is used.
OSF has some of the same disadvantages as LHS, though to a lesser degree. Possible disadvantages
follow.
• When Samples Type is set to CCD Samples, a maximum of 20 input parameters is supported. For
more information, see Number of Input Parameters for DOE Types (p. 88).
• Extremes, such as the corners of the design space, are not necessarily covered.
• The selection of too few design points can result in a lower quality of response prediction.
– Max-Min Distance: Maximizes the minimum distance between any two points (default). This
strategy ensures that no two points are too close to each other. For a small size of sampling (N),
the Max-Min Distance design generally lies on the exterior of the design space and fills in the
interior as N becomes larger. Generally, this is the faster algorithm.
– Centered L2: Minimizes the centered L2-discrepancy measure. The discrepancy measure corres-
ponds to the difference between the empirical distribution of the sampling points and the uniform
distribution. This means that the centered L2 yields a uniform sampling. This design type is
computationally faster than the Maximum Entropy type.
– Maximum Entropy: Maximizes the determinant of the covariance matrix of the sampling points
to minimize uncertainty in unobserved locations. This option often provides better results for
highly correlated design spaces. However, its cost increases non-linearly with the number of input
parameters and the number of samples to be generated. Thus, it is recommended only for small
parametric problems.
• Maximum Number of Cycles: Determines the number of optimization loops the algorithm needs, which in turn determines the discrepancy of the DOE. The optimization is essentially combinatorial, so a large number of cycles slows down the process. However, a larger number of cycles also makes the discrepancy of the DOE smaller. For practical purposes, 10 cycles is generally good for up to 20 variables. The value must be greater than 0. The default is 10.
• Samples Type: Determines the number of DOE points the algorithm should generate. This option
is suggested if you have some advanced knowledge about the nature of the model. Choices are:
– CCD Samples: Supports a maximum of 20 inputs (default). Generates the same number of samples
a CCD DOE would generate for the same number of inputs. You can use this to generate a space
filling design that has the same cost as a corresponding CCD design.
– Linear Model Samples: Generates the number of samples as needed for a linear model.
– Pure Quadratic Model Samples: Generates the number of samples as needed for a pure quad-
ratic model (no cross terms).
– Full Quadratic Samples: Generates the number of samples needed to generate a full quadratic
model.
• Random Generator Seed: Enabled when LHS is used. Set the value used to initialize the random
number generator invoked internally by LHS. Although the generation of a starting point is random,
the seed value consistently results in a specific LHS. This property allows you to generate different
samplings by changing the value or regenerate the same sampling by keeping the same value.
The default is 0.
• Number of Samples: Enabled when Samples Type is set to User-Defined Samples. Specifies the
default number of samples. The default is 10.
Box-Behnken Design
When Design of Experiments Type is set to Box-Behnken Design, a three-level quadratic design is generated. This design does not contain a fractional factorial design. The sample combinations are treated in such a way that they are located at the midpoints of the edges formed by any two factors. The design is rotatable (or, in some cases, nearly rotatable).
One advantage of Box-Behnken Design is that it requires fewer design points than a full factorial CCD
and generally requires fewer design points than a fractional factorial CCD. Additionally, Box-Behnken
Design avoids extremes, allowing you to work around extreme factor combinations. Consider using
Box-Behnken Design if your project has parametric extremes (for example, has extreme parameter
values in corners that are difficult to build). Because a DOE based on Box-Behnken Design doesn't
have corners and does not combine parametric extremes, it can reduce the risk of update failures.
For more information, see the Box-Behnken Design (p. 306) theory section.
Possible disadvantages of Box-Behnken Design follow:
• Prediction at the corners of the design space is poor, and there are only three levels per parameter.
• A maximum of 12 input parameters is supported. For more information, see Number of Input Parameters for DOE Types (p. 88).
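To put the point-count comparison in perspective, the following sketch counts design points for the standard textbook constructions of a Box-Behnken design and a full-factorial CCD, each with a single center point. The exact counts generated by DesignXplorer can differ from these formulas.

```python
# Minimal sketch comparing design-point counts for the textbook Box-Behnken
# and full-factorial CCD constructions with one center point.
def box_behnken_points(k, center_points=1):
    # Midpoints of the edges formed by every pair of factors: 4 points per pair.
    return 2 * k * (k - 1) + center_points

def full_factorial_ccd_points(k, center_points=1):
    # 2**k factorial corners + 2*k axis points + center point(s).
    return 2 ** k + 2 * k + center_points

for k in range(3, 8):
    print(f"{k} inputs: Box-Behnken = {box_behnken_points(k):3d}, "
          f"full-factorial CCD = {full_factorial_ccd_points(k):3d}")
```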
Custom
When Design of Experiments Type is set to Custom, you can add points directly in the design points
table by manually entering input parameters and optionally output parameter values. You can also
import and export (p. 96) design points into the design points table of the custom DOE from the
Parameter Set bar.
You can change the mode of the design points table so that output parameter values are editable.
You can also copy and paste data and import data from a CSV file by right-clicking and selecting
Import Design Points. For more information, see Working with Tables (p. 292).
Note:
• If you generate design points using a DOE type other than Custom and then later switch
to Custom, all points existing in the design points table from the initial DOE type are
retained.
• If you set the DOE type to Custom and add points directly to the design points table,
these manually added design points are cleared if you later switch to another DOE type.
• The table can contain derived parameters. Derived parameters are always calculated by
the system, even if the table mode is All Output Values Editable.
• Editing output values for a row changes the state of the Design of Experiments cell to
Update Required. The DOE must be updated, even though no calculations are done.
• DOE charts do not reflect the design points added manually using the Custom DOE
type until the DOE is updated.
• It is expected that the Custom DOE type is used to enter DOEs that were built externally.
If you use this feature to manually enter all design points, you must make sure to enter
enough points so that a good fitting can be created for the response surface. This is an
advanced feature that should be used with caution. Always verify your results with a
direct solve.
Custom + Sampling
When Design of Experiments Type is set to Custom + Sampling, you have the same capabilities
as when it is set to Custom (p. 85). You can complete the design points table automatically to fill the
design space efficiently. For example, you can initialize the design points table with design points imported from a previous study. Or, your initial DOE (Central Composite Design, Optimal Space-Filling Design, or Custom type) can be completed with new points that you manually add. The generation
of these new design points takes into account the coordinates of previous design points.
When Custom + Sampling is set, Total Number of Samples specifies the number of samples that
you want, including the number of existing design points. You must enter a positive number. If the
total number of samples is less than the number of existing points, no new points are added. If there
are discrete input parameters, the total number of samples corresponds to the number of points that
should be reached for each combination of discrete parameters.
Sparse Grid Initialization
One advantage of Sparse Grid Initialization is that it refines only in the directions necessary, so that
fewer design points are needed for the same quality response surface. Another is that it is effective
at handling discontinuities. Although you must use this DOE type to build a Sparse Grid response
surface, you can also use it for other types of response surfaces.
Latin Hypercube Sampling Design
When Design of Experiments Type is set to Latin Hypercube Sampling Design, points are randomly generated in a square grid across the design space, but no two points share the same value. This means that no point shares a row or a column of the grid with any other point.
• When Samples Type is set to CCD Samples, a maximum of 20 input parameters is supported.
For more information, see Number of Input Parameters for DOE Types (p. 88).
• Extremes, such as the corners of the design space, are not necessarily covered. Additionally,
the selection of too few design points can result in a lower quality of response prediction.
Note:
The Optimal Space-Filling Design (OSF) DOE type is an LHS design that is extended with
post-processing. For more information, see Comparison of LHS and OSF DOE Types (p. 91).
• Samples Type: Determines the number of DOE points the algorithm should generate. This option
is suggested if you have some advanced knowledge about the nature of the model. The following
choices are available:
– CCD Samples: Supports a maximum of 20 inputs (default). Generates the same number of samples
a CCD DOE would generate for the same number of inputs. You can use this to generate an LHS
design that has the same cost as a corresponding CCD design.
– Linear Model Samples: Generates the number of samples as needed for a linear model.
– Pure Quadratic Model Samples: Generates the number of samples as needed for a pure quad-
ratic model (no cross terms).
– Full Quadratic Samples: Generates the number of samples needed to generate a full quadratic
model.
• Random Generator Seed: Enabled when LHS is used. Set the value used to initialize the random
number generator invoked internally by LHS. Although the generation of a starting point is random,
the seed value consistently results in a specific LHS. This property allows you to generate different
samplings by changing the value or regenerate the same sampling by keeping the same value.
The default is 0.
• Number of Samples: Enabled when Samples Type is set to User-Defined Samples. Specifies the
default number of samples. The default is 10.
Number of Input Parameters for DOE Types
DesignXplorer filters the Design of Experiments Type list for applicability to the current project,
displaying only those sampling methods that you can use to generate the DOE as it is currently
defined. For example, assume that a given sampling allows a maximum of 10 input parameters. As
soon as more than 10 inputs are defined, that sampling method is removed from the list.
As such, the recommendation for most DOE types is to have as few enabled inputs as possible. Fewer than 20 inputs is ideal. Some DOE types have a limit on the number of inputs:
• LHS and OSF have a limit of 20 inputs when Samples Type is set to CCD Samples.
The number of inputs should be taken into account when selecting a DOE type for your study—or
when defining inputs if you know ahead of time which DOE type you intend to use.
If you are using a DOE that does not limit the number of inputs and more than 20 are enabled, in the
Outline pane for the following cells, DesignXplorer shows an alert icon in the Message column for the
root node:
• Design of Experiments
• Response Surface
The warning icon is displayed only when required component edits are completed. The number next
to the icon indicates the number of active warnings. You can click the icon to review the warning
messages. To remove a warning about having too many inputs defined, disable inputs by clearing check
boxes in the Enabled column until only the permitted number of inputs is selected. If you are unsure
of which parameters to disable, you can use a Parameters Correlation system to determine the inputs
that are least correlated with your results. For more information, see Using Parameters Correlations (p. 63).
Factors other than the number of enabled inputs affect response surface generation:
• Response surface types using automatic refinement add additional design points to improve the
resolution of each output. The more outputs and the more complicated the relationship between
the inputs and outputs, the more design points that are required.
• Increasing the number of output parameters increases the number of response surfaces that are
required.
• Discrete input parameters can be expensive because a response surface is generated for each
discrete combination, as well as for each output parameter.
• A non-linear or non-polynomial relationship between input and output parameters requires more
design points to build an accurate response surface, even with a small number of enabled inputs.
Such factors can offset the importance of using a small number of enabled input parameters. If you
expect that a response surface can be generated with relative ease, it might be worthwhile to exceed
the recommended number of inputs. For example, assume that the project has polynomial relationships
between the inputs and outputs, only continuous inputs, and a small number of outputs. In this case,
you might ignore the warning and proceed with the update.
If the DOE-response surface combination is not supported, quick help explains the underlying problem.
The following example explains that the number of design points is less than the number of enabled
input parameters. When you enable discrete input parameters, the number of required design points
is affected. For each enabled discrete input parameter combination, the number of design points must
be equal to or greater than the number of enabled continuous input parameters.
Two examples show how the minimum number of design points is determined when the number of enabled discrete parameters is first 0 and then 2.
Example 1
Example 2
– Where the 2nd discrete parameter has 2 levels (for example, 5;6)
{10; 5}, {15; 5}, {20; 5}, {10; 6}, {15; 6}, {20; 6}
Comparison of LHS and OSF DOE Types
• LHS is an advanced form of the Monte Carlo sampling method. In LHS, no point shares a row or
column of the design space with any other point.
• OSF is essentially an LHS design that is optimized through several iterations, maximizing the distance
between points to achieve a more uniform distribution across the design space. Because it aims to
gain the maximum insight into the design by using the fewest number of points, it is an effective
DOE type for complex modeling techniques that use relatively large numbers of design points.
Because OSF incorporates LHS, both DOE types aim to conserve optimization resources by avoiding the
creation of duplicate points. Given an adequate number of design points to work with, both methods
result in a high quality of response prediction. OSF, however, offers the added benefit of fuller coverage
of the design space.
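The following sketch illustrates the underlying idea: it draws several Latin Hypercube samples with SciPy's qmc module and keeps the one whose closest pair of points is farthest apart, which is the max-min distance criterion that OSF-style post-processing optimizes. This is only a crude stand-in for the actual OSF algorithm, and the sample sizes are arbitrary assumptions.

```python
# Minimal sketch: generate Latin Hypercube samples and keep the one with the
# largest minimum pairwise distance (a max-min distance criterion).
import numpy as np
from scipy.stats import qmc
from scipy.spatial.distance import pdist

sampler = qmc.LatinHypercube(d=2, seed=0)
best_sample, best_min_dist = None, -1.0

# Crude space-filling search: each call to random() produces a new LHS design.
for _ in range(50):
    sample = sampler.random(n=20)
    min_dist = pdist(sample).min()
    if min_dist > best_min_dist:
        best_sample, best_min_dist = sample, min_dist

print(f"Best minimum pairwise distance over 50 LHS draws: {best_min_dist:.3f}")
```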
For example, with a two-dimensional problem that has only two input parameters and uses only six
design points, it can be difficult to build an adequate response surface. This is especially true in the
case of LHS because of its nonuniform distribution of design points over the design space.
When the number of design points for the same scenario is increased to twenty, the quality of the
resulting response surface is improved. LHS, however, can result in close, uneven groupings of design
points and so can skip parts of the design space. OSF, with its maximization of the distance between
points and more uniform distribution of points, addresses extremes more effectively and provides far
better coverage of the design space. For this reason, OSF is the recommended method.
Using a Central Composite Design DOE
The following design types are available for a Central Composite Design DOE:
• Face-Centered
• Rotatable
• VIF-Optimality
• G-Optimality
• Auto Defined
A Rotatable (spherical) design is preferred because the prediction variance is the same for any two
locations that are the same distance from the design center. However, there are other criteria to consider
for an optimal design setup. The following two criteria are commonly considered in setting up an op-
timal design using the design matrix.
• The degree of non-orthogonality of regression terms can inflate the variance of model coefficients.
• Sample points in the design can exert disproportionate influence on the fit, depending on their position with respect to the other points in the space of the input variables.
An optimal CCD design should minimize both the degree of non-orthogonality of term coefficients and
the number of sample points having abnormal influence. In minimizing the degree of non-orthogonality,
the Variation Inflation Factor (VIF) of regression terms is used. For a VIF-Optimality design, the maximum
VIF of the regression terms is to be minimized, and the minimum value is 1.0. In minimizing the opportunity for influential sample points, the leverage value of each sample point is used. Leverages are the
diagonal elements of the Hat matrix, which is a function of the design matrix. For a G-Optimality design,
the maximum leverage value of sample points is to be minimized.
For a VIF-Optimality design, the alpha value or level is selected such that the maximum VIF is minimized.
Likewise, for a G-Optimality design, the alpha value or level is selected such that the maximum leverage
is minimized. The rotatable design is found to be a poor design in terms of VIF- and G-efficiencies.
For an optimal CCD, the alpha value or level is selected such that both the maximum VIF and the
maximum leverage are the minimum possible. For an Auto Defined design, the alpha value is selected
from either the VIF-Optimality or G-Optimality design that meets the criteria. Because it is a multi-ob-
jective optimization problem, in many cases, there is no unique alpha value such that both criteria reach
their minimum. However, the alpha value is evaluated such that one criterion reaches minimum while
the other approaches minimum.
For the current Auto Defined setup, a problem with five input variables uses a G-Optimality design, and all other multi-variable problems use VIF-Optimality. In some cases, despite the fact that Auto
Defined provides an optimal alpha meeting the criteria, this design might not give as good a response
surface as anticipated due to the nature of the physical data used for fitting in the regression process.
In this case, you should try other design types that might give a better response surface approximation.
Note:
You can set any design type for CCD as the default in the Options dialog box. On the Design
of Experiments tab, set Design Type to the type you want to use as the default. For more
information, see Design Exploration Options (p. 35).
It is a good practice to always verify some selected points on the response surface with an actual simulation evaluation to determine whether it is valid for further analyses. In some cases, a good response
surface does not mean a good representation of an underlying physics problem. The response surface
is generated according to the predetermined sampling points in the design space, which sometimes
misses capturing an unexpected change in some regions of the design space. In this case, try using an
enhanced design—a Rotatable or Face-Centered design type with Template Type set to Enhanced.
For an enhanced DOE, a mini CCD is appended to a standard CCD design, where a second alpha value
is added and set to half the alpha value of the standard CCD. The mini CCD is set up so that the rotatability and symmetry of the CCD design are maintained. The purposes of the appended mini CCD are
to capture any drastic changes in the design space and to provide a better response surface fit.
Note:
Alternatively, you can try to enrich the DOE by changing the selection for Design of Exper-
iments Type from Central Composite Design to Custom + Sampling and then specifying
a value for Total Number of Samples. For more information, see Custom + Sampling (p. 86).
The location of the generated design points for the deterministic method is based on a central composite
design. If N is the number of input parameters, then a central composite design consists of:
• 1 center point.
• 2*N axis points located at the -α and +α positions on each axis of the selected input parameters.
• 2^(N-f) factorial points located at the -1 and +1 positions along the diagonals of the input parameter space.
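As a quick check of the composition above, the following sketch counts the design points of a central composite design for a given number of inputs N and factorial fraction f. The fraction used by DesignXplorer for each N comes from the table referenced below; the values passed here are assumptions chosen only for illustration.

```python
# Minimal sketch: number of design points in a central composite design with
# one center point, 2*N axis points, and 2**(N - f) factorial points.
def ccd_point_count(n_inputs, fraction=0):
    center = 1
    axis = 2 * n_inputs
    factorial = 2 ** (n_inputs - fraction)
    return center + axis + factorial

print(ccd_point_count(3, fraction=0))   # 1 + 6 + 8  = 15
print(ccd_point_count(6, fraction=1))   # 1 + 12 + 32 = 45
```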
The fraction f of the factorial design and the resulting number of design points are given in the following
table:
For example, for goal-driven optimization, the DOE points should be located close to where the optimum
design is determined to be. The location of the DOE points depends on the outcome of the analysis.
Not having that knowledge at the start of the analysis, you can determine the location of the points as
follows:
• For a design variable, the upper and lower levels of the DOE range coincide with the bounds specified
for the input parameter. It often happens in optimization that the optimum point is at one end of
the range specified for one or more input parameters.
• For an uncertainty variable, the upper and lower levels of the DOE range are the quantile values corresponding to probabilities of 0.1% and 99.9%, respectively (see the sketch after this list). This is the standard procedure whether the input parameter follows a bounded distribution (such as uniform) or an unbounded distribution (such as normal). For a bounded distribution, the probability that the input variable value exactly coincides with the upper or lower bound is zero, so failure can never occur when the value of the input variable is equal to the upper or lower bound. Failure typically occurs in the tails of a distribution, so the DOE points should be located there, but not at the very end of the distribution.
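The sketch below shows how such quantile-based DOE levels can be computed with SciPy for a bounded and an unbounded distribution. The distributions and their parameters are assumptions chosen only for illustration.

```python
# Minimal sketch: DOE upper and lower levels for uncertainty variables taken
# at the 0.1% and 99.9% quantiles of the input distribution.
from scipy import stats

normal_thickness = stats.norm(loc=10.0, scale=0.5)      # unbounded distribution
uniform_load = stats.uniform(loc=1000.0, scale=500.0)   # bounded on [1000, 1500]

for name, dist in [("thickness (normal)", normal_thickness),
                   ("load (uniform)", uniform_load)]:
    lower, upper = dist.ppf(0.001), dist.ppf(0.999)
    print(f"{name}: DOE range = [{lower:.3f}, {upper:.3f}]")
```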
Note:
The design points are solved simultaneously if the analysis system is configured to perform
simultaneous solutions. Otherwise, they are solved sequentially.
To clear the design points generated for the DOE matrix, return to the Project Schematic, right-click
the Design of Experiments cell, and select Clear Generated Data. You can clear data from any design
exploration cell in the Project Schematic in this way and regenerate your solution for the cell with
changes to the parameters if desired.
Note:
The Clear Generated Data operation does not clear the design point cache. To clear this
cache, right-click in an empty area of the Project Schematic and select Clear Design Points
Cache for All Design Exploration Systems.
Exporting and Importing Design Points
To export design points to an ASCII file, use one of the following options:
• Right-click a cell in the Table pane and select Export Table Data as CSV.
• Right-click a chart in the Chart pane and select Export Chart Data as CSV.
The parameter values for each design point in the table or chart are exported to a CSV file. The values
are always exported in the units defined in Workbench. This means that they are exported as if Display Values as Defined were selected from the Units menu.
You can also import an external CSV file to create either design points in a custom Design of Exper-
iments cell or refinement and verification points in a Response Surface cell. Right-click a cell in the
Table pane for a Design of Experiments or Response Surface cell and select the appropriate import
option. For more information, see Working with Tables (p. 292) and Exporting Table Data (p. 294).
When a DOE cell is set to a custom type, Import Design Points from CSV is available on the context
menu when you right-click any of the following:
• A Design of Experiments or Design of Experiments (3D ROM) cell in a system on the Project
Schematic
• The Design of Experiments node in the Outline pane for a Design of Experiments cell
• The Design of Experiments (3D ROM) node in the Outline pane for a Design of Experiments
(3D ROM) cell
• In the design points table for a Design of Experiments or Design of Experiments (3D ROM) cell
For more information, see Importing Data from a CSV File (p. 294).
Note:
If the range of imported or copied values is more than 10 percent smaller than the range defined in the DOE settings, the shrink option is available to reduce the DOE range to fit. If the values exceed the range defined in the DOE settings, the expand option is available to extend the range.
If you do not select the check boxes available for adjusting parameter ranges, when you click Apply, only the design points with all parameters falling within the predefined parameter ranges are imported or copied into the DOE. All out-of-bounds design points are ignored. If you click Cancel, the import or copy operation is terminated.
• Custom DOEs
For more information, see Importing DPS Design Points into a Workbench Project in the Workbench
User's Guide.
When a DOE cell is set to a custom DOE type, you can copy all or selected design points into it from the Parameter Set bar. Any output parameter values for the design points copied to the DOE cell are read-only because they have been calculated by a solver.
Note:
Design point data is always parsed and validated before being either copied or imported
into a DOE. The dialog boxes that you might see during this process are described in
the previous topic.
Copying All Design Points from the Parameter Set Bar into a DOE Cell
When a DOE cell is set to a custom DOE type, you can copy all design points from the Parameter Set
bar into the DOE cell:
1. In the Project Schematic, double-click the DOE cell to which you want to copy design points
from the Parameter Set bar.
3. Either right-click the parent node in the Outline pane or right-click in the Table pane and select
Copy all Design Points from the Parameter Set.
All design points from the Parameter Set bar are copied into the DOE cell.
Copying Selected Design Points from the Parameter Set Bar into a DOE Cell
When a DOE cell is set to a custom DOE type, you can copy selected design points from the Parameter
Set bar into the DOE cell:
3. Right-click and select Copy Design Points to and then select a DOE cell on the submenu.
While the submenu lists all Design of Experiments and Parameters Correlation cells defined
in the project, only custom DOEs and custom correlations are available for selection.
All selected design points in the Parameter Set bar are copied into the DOE cell.
Note:
If an unsolved design point was previously copied to a custom DOE, and subsequently this
design point was solved in the Parameter Set bar, you can copy it to the custom DOE again
to push the output values for this design point to the custom DOE.
Using Response Surfaces
Response surfaces are functions of varying natures in which the output parameters are described in
terms of the input parameters. Built from the DOE, they quickly provide the approximated values of
the output parameters throughout the design space without having to perform a complete solution.
DesignXplorer provides tools to estimate and improve the quality of a response surface.
Once a response surface is generated, you can create and manage response points and charts. These
postprocessing tools help you to understand how each output parameter is driven by input parameters
and how you can modify your design to improve its performance.
Once your response surface is solved, you can export it as an independent reduced-order model (DX-
ROM) to be reused in other environments.
Genetic Aggregation
Genetic Aggregation is the default algorithm for generating response surfaces. It automates the process
of selecting, configuring, and generating the type of response surface best suited to each output
parameter in your problem. From the different types of response surface available (Full 2nd-Order
Polynomials, Non-Parametric Regression, Kriging, and Moving Least Squares), Genetic Aggregation
automatically builds the response surface type that is the most appropriate approach for each output.
Auto-refinement is available when you select at least one output parameter for refinement in the
Tolerances table and specify a tolerance value for this parameter. Once begun, auto-refinement
handles design point failures and continues until one of the stopping criteria is met.
Auto-refinement takes into account failed design points by avoiding the areas close to the failed
points when generating the next refinement points. The Crowding Distance Separation Percentage
property specifies the minimum allowable distance between new refinement points, providing a ra-
dius around failed design points that serves as a constraint for refinement points.
Genetic Aggregation takes more time than classical response surfaces such as Full 2nd order Polyno-
mial, Non-Parametric Regression, or Kriging because of multiple solves of response surfaces and the
cross-validation process. In general, Genetic Aggregation is more reliable than the classical response
surface models.
The Genetic Aggregation response surface can be a single response surface or a combination of
several different response surfaces (obtained by a crossover operation during the genetic algorithm).
For more information, see Genetic Aggregation (p. 307) in the DesignXplorer theory section.
Because a Genetic Aggregation response surface can take longer to generate than other response
surfaces, you can monitor the generation process via the progress bar and messages. You also have
the ability to stop the update. However, if an update is stopped, any data generated up to that point
is discarded.
Once the response surface has been generated, a link to the log file is available in the Properties
pane. The log file contains information that can help you to assess the quality of your response surface.
Note:
For advanced options to be visible, the Show Advanced Options check box must
be selected on the Design Exploration tab in the Options window. For more in-
formation, see Design Exploration Options (p. 35).
Meta Model
• Response Surface Type: Determines the type of response surface. This section assumes Ge-
netic Aggregation is selected and advanced options are shown.
• Random Generator Seed: Advanced option allowing you to specify the value used to initialize
the random number generator. By changing this value, you start the Genetic Aggregation
from a different population of response surfaces. The default is 0.
Log File
• Display Level: Determines how much information about the generated response surfaces is written to the log file. Choices are:
– Final: Information on the best response surface generated by the last generation.
– Final With Details: Information on the full population of response surfaces generated by
the last generation.
– Iterative With Details: Information on the full population of response surfaces generated
by each generation.
If you change the Display Level value after generating the response surface, you must update
the response surface again to regenerate the log file with the new content.
• Log: Advanced option that displays after a response surface and its log file are generated.
This property provides a link to the ResponseSurface.log file, which is stored in the
project files in the subdirectory dpall\global\DX.
Refinement
• Number of Refinement Points: Read-only property indicating the number of existing refine-
ment points.
• Output Variable Combinations: Determines how output variables are considered when refinement points are generated. Choices are:
– Maximum Output: Only the output with the largest ratio between the maximum predicted error and the tolerance is considered. Only one refinement point is generated in each iteration.
– All Outputs: All outputs are considered. Multiple refinement points can be generated. If
two refinement points are too close (with a distance less than specified for Crowding
Distance Separation Percentage), only one is inserted.
• Maximum Number of Refinement Points per Iteration: Specifies the maximum number of
refinement points that can be simultaneously updated at each iteration using your HPC re-
sources. This property is available only when Output Variable Combinations is set to Max-
imum Output. The default for Maximum Number of Refinement Points per Iteration is 1.
However, to improve efficiency, you can increase this value. For simultaneous design point
updates to occur, in the properties for the Parameter Set bar, you must set Update Option
to Submit to Remote Solve Manager, specify an available RSM Queue, and set Job Submis-
sion to One Job for Each Design Point. For more information about using Remote Solve
Manager, see Working with Parameters and Design Points in the Workbench User's Guide. For
theoretical information about how design points are updated simultaneously, see Genetic
Aggregation with Multiple Refinement Points (p. 311).
• Convergence State: Read-only value indicating the state of the convergence. Possible values
are Converged and Not Converged. If the value is Not Converged, the reason for conver-
gence failure is appended.
Verification Points
Generate Verification Points: Specifies whether verification points (p. 138) are to be generated.
When this check box is selected, Number of Verification Points becomes visible. The default
value is 1. However, you can enter a different value if desired.
Report Image: Specifies the image file to display for solved design points in DesignXplorer
tables and charts. For more information, see Viewing Design Point Images in Tables and
Charts (p. 282).
2. Select at least one output parameter for refinement and specify its tolerance value. If no output
parameters are selected for refinement, manual refinement is used.
To view the tolerances table, select an output parameter or one of the following in the Outline
pane:
• Response Surface
• Output Parameters
• Refinement
• Tolerances
Calculated Minimum
Minimum value, which is calculated from design point values and Min-Max search results.
Calculated Maximum
Maximum value, which is calculated from design point values and Min-Max search results.
Maximum Predicted Error
Maximum predicted error, which is an estimation of the predicted error for the Genetic Aggregation response surface. For more information, see Genetic Aggregation (p. 307) in the theory section.
Refinement
Determines whether an output parameter and its tolerance are taken into account for the
Genetic Aggregation refinement process. When this check box is selected for an output, a
tolerance value must be specified.
Tolerance
Enabled and required when an output has been selected for refinement. The value must be
greater than 0.
Tolerance values have units and are refreshed according to whether Display Values as
Defined or Display Values in Project Units is selected from the Workbench Units menu.
The tolerances table, the Properties pane for the output parameter, and the Convergence chart
are fully synchronized. Consequently, changes in one are reflected in the others.
Note:
Discrete output parameters are excluded from the refinement. Property values are approximations based on the resulting response surface.
To use Genetic Aggregation auto-refinement, you must select at least one output parameter for
refinement and assign it a tolerance value.
1. Use one of the following methods to select at least one output parameter for refinement:
• In the tolerances table, select the Refinement check box for the output.
• In the Outline pane, right-click the output parameter and select Use for Refinement.
All output parameters selected for refinement are included in the refinement process. Their toler-
ance values are taken into account when identifying new refinement points.
You can remove an output parameter from consideration for Genetic Aggregation auto-refinement using one of the following methods:
• In the tolerances table, clear the Refinement check box for the output.
• In the Outline pane, right-click the output parameter and select Ignore for Refinement.
If at least one output parameter is still selected for refinement, the auto-refinement process
is used. If the check boxes for all output parameters have been cleared, the refinement reverts
to a manual process.
Cleared and derived output parameters are disabled. They are excluded from the refinement
process and their tolerance values are ignored.
The following tolerances table shows output parameters with different auto-refinement settings.
Because at least one output is selected for refinement, you know that auto-refinement is to be
used.
• P7: Selected for refinement with tolerance value defined (is included in refinement).
• P8: Selected for refinement with tolerance value pending (prevents refinement until tolerance
is defined).
• P9: Not selected for refinement with the previously defined tolerance value disabled (is not
included in refinement).
The Convergence Curves chart is automatically generated and dynamically updated as the Genetic Aggregation refinement runs. To view the chart, select one of the following in the Outline pane:
• Refinement
• Tolerances
• Refinement Points
The X axis displays the number of refinement points (whether successfully updated or not) used
to refine the response surface, while the Y axis displays the ratio between the maximum predicted
error and the tolerance for each output parameter.
Each output parameter marked for auto-refinement is represented by a separate curve corresponding
to this ratio, which is calculated for each output parameter to refine and at each iteration. Anything
below the convergence threshold curve is in the convergence region, indicated by a shaded blue
area on the chart.
Example
Before the first refinement point is run, if output P3 has a tolerance of 0.2[g] and a maximum pre-
dicted error of 0.3[g], it has a ratio of 0.3/0.2=1.5. If the ratio is equal to 1.5, you know that the maximum predicted error is 1.5 times the tolerance.
The auto-refinement process can generate one point or several points per iteration. The objective
is to reach a convergence threshold of less than or equal to 1 for all outputs used in the auto-re-
finement process, at which point all the convergence curves are in the area below the convergence
threshold. The refinement process stops when either the maximum number of refinement points
has been reached or the convergence threshold objective has been met.
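A minimal sketch of this convergence check, using the ratio from the example above together with a second, assumed output:

```python
# Minimal sketch of the convergence check described above: an output is
# converged when its maximum predicted error divided by its tolerance is <= 1.
# The second output and all values are assumed for illustration.
tolerances = {"P3": 0.2, "P7": 1.0}
max_predicted_errors = {"P3": 0.3, "P7": 0.4}

for name, tol in tolerances.items():
    ratio = max_predicted_errors[name] / tol
    print(f"{name}: ratio = {ratio:.2f}, converged = {ratio <= 1.0}")

# Refinement stops when every ratio is <= 1 or the maximum number of
# refinement points has been reached.
print("all converged:", all(max_predicted_errors[n] / t <= 1.0 for n, t in tolerances.items()))
```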
You can enable or disable a convergence curve by selecting or clearing the check box for the output
parameter in the Properties pane of the Convergence Curves chart.
Once the Convergence Curves chart has been generated, you can export it as a CSV file by right-
clicking the chart and selecting Export Chart Data as CSV.
Standard Response Surface - Full 2nd-Order Polynomials
A second-order polynomial is typically preferred for the regression model. This model is generally an
approximation of the true input-to-output relationship, and only in special cases does it yield a true
and exact relationship. Once this relationship is determined, the resulting approximation of the output
parameter as a function of the input variables is called the response surface.
• Calculation of polynomial coefficients based on these modified input values. Some polynomial
terms can be filtered by using the F-Test Filtering and Significance Level properties.
Consequently, the response surface that is generated can fit more complex responses than simple
parabolic curvatures.
If the goodness of fit of the response surface is not as good as expected for an output parameter,
you can select a different transformation type in its properties. The Yeo-Johnson transformation is more numerically stable in its back-transformation. The Box-Cox transformation is less numerically stable in its back-transformation, but in some cases it gives a better fit. If in the Properties pane
for an output parameter, you set Transformation Type to None, the full 2nd-order polynomials re-
sponse surface is computed without any transformation on the data for this output parameter.
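The following sketch illustrates the effect of such a transformation outside of DesignXplorer: it fits a full second-order polynomial with and without a Yeo-Johnson transformation (with scaling) applied to the inputs and the output, using scikit-learn. The data, the library choice, and the omission of F-test term filtering are all assumptions of this example.

```python
# Minimal sketch: full 2nd-order polynomial fit with and without a Yeo-Johnson
# power transformation (with scaling) on the inputs and the output.
import numpy as np
from sklearn.preprocessing import PowerTransformer, PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.compose import TransformedTargetRegressor

rng = np.random.default_rng(3)
X = rng.uniform(1.0, 10.0, size=(60, 2))
y = np.exp(0.3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.5, 60)

plain = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

transformed = TransformedTargetRegressor(
    regressor=make_pipeline(PowerTransformer(method="yeo-johnson"),  # inputs transformation
                            PolynomialFeatures(degree=2),
                            LinearRegression()),
    transformer=PowerTransformer(method="yeo-johnson"),              # output transformation
).fit(X, y)

print(f"R2 without transformation: {plain.score(X, y):.4f}")
print(f"R2 with Yeo-Johnson:       {transformed.score(X, y):.4f}")
```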
Note:
For advanced options to be visible, the Show Advanced Options check box must be
selected on the Design Exploration tab in the Options window. For more information,
see Design Exploration Options (p. 35).
• Response Surface Type: Determines the type of response surface. This section assumes
Standard Response Surface - Full 2nd-Order Polynomials is selected and advanced options
are shown.
• Inputs Transformation Type: Advanced option determining the type of power transformation
to apply to all continuous input parameters, with and without manufacturable values, before
solving the response surface. Choices are Yeo-Johnson (default) and None. When None is
selected, no transformations are applied to continuous input parameters.
• Inputs Scaling: Advanced option determining whether to scale the data for all continuous
input parameters, with and without manufacturable values, before solving the response surface.
This check box is selected by default. If you clear this check box, no scaling of input parameter
data occurs.
• Significance Level: Advanced option indicating the threshold to use during model-building
to filter significant terms of the polynomial regression. The range for possible values is from
0 to 1. The default is 0.05.
Note:
As indicated earlier, for advanced options to be visible, the Show Advanced Options
check box must be selected on the Design Exploration tab in the Options window.
• Transformation Type: Determines the type of power transformation to apply to the output para-
meter. Choices are Yeo-Johnson (default), Box-Cox, and None. When None is selected, no trans-
formation is applied to the output parameter. Transformations are not applied to derived output
parameters.
• Scaling: Advanced option determining whether to scale the data for the output parameter. This
check box is selected by default. If you clear this check box, no scaling of the output parameter
data occurs.
Kriging
Kriging is a meta-modeling algorithm that provides an improved response quality and fits higher order
variations of the output parameter. It is an accurate multidimensional interpolation combining a
polynomial model similar to the one of the standard response surface—which provides a "global"
model of the design space—plus local deviations so that the Kriging model interpolates the DOE
points. Kriging provides refinement capabilities for continuous input parameters, including those with
manufacturable values. It does not support discrete parameters. The effectiveness of Kriging is based
on the ability of its internal error estimator to improve response surface quality by generating refine-
ment points and adding them to the areas of the response surface most in need of improvement.
In addition to manual refinement capabilities, Kriging offers an auto-refinement option that automat-
ically and iteratively updates the refinement points during the update of the response surface. At
each iteration of the refinement, Kriging evaluates a predicted relative error in the full parameter
space. DesignXplorer uses the predicted relative error instead of the predicted error because this allows
the same values to be used for all output parameters, even when the parameters have different ranges
of variation.
At this step in the process, the predicted relative error for one output parameter is the predicted error
of the output parameter normalized by the known maximum variation of the output parameter:
predicted relative error = (predicted error) / (Omax - Omin)
where Omax and Omin are the maximum and minimum known values (on design points) of the output parameter.
For guidelines on when to use Kriging, see Changing the Response Surface (p. 122).
The prediction of error is a continuous and differentiable function. To find the best candidate refine-
ment point, the refinement process determines the maximum of the prediction function by running
a gradient-based optimization procedure. If the prediction of the accuracy for the new candidate re-
finement point exceeds the required accuracy, the point is then promoted as a new refinement point.
The auto-refinement process continues iteratively, locating and adding new refinement points until
either the refinement has converged or the maximum allowable number of refinement points has
been generated. The refinement converges when the response surface is accurate enough for direct
output parameters.
For more information, see Kriging (p. 314) in the theory section.
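The following sketch mimics the idea of the refinement-point search with an off-the-shelf Gaussian-process model: the predicted standard deviation, normalized by the known output variation, stands in for the Kriging error predictor, and a gradient-based optimizer locates its maximum. The model, data, and multi-start strategy are assumptions of this example, not DesignXplorer's implementation.

```python
# Minimal sketch: locate a candidate refinement point by maximizing a
# Gaussian-process error prediction with a gradient-based optimizer.
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)
X = rng.uniform(0.0, 1.0, size=(12, 2))              # existing DOE points
y = np.sin(6.0 * X[:, 0]) + X[:, 1] ** 2             # hypothetical output

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True).fit(X, y)
y_range = y.max() - y.min()

def negative_relative_error(x):
    _, std = gp.predict(x.reshape(1, -1), return_std=True)
    return -float(std[0]) / y_range                  # predicted relative error, negated

# Multi-start local optimization over the unit design space.
best = min((minimize(negative_relative_error, x0, bounds=[(0, 1), (0, 1)])
            for x0 in rng.uniform(0.0, 1.0, size=(8, 2))), key=lambda r: r.fun)
print("candidate refinement point:", best.x, "predicted relative error:", -best.fun)
```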
1. In the Outline pane for the response surface, select the Response Surface cell.
2. Under Meta Model in the Properties pane, set Response Surface Type to Kriging.
• For Maximum Number of Refinement Points, enter the maximum number of refinement
points that can be generated.
• For Maximum Predicted Relative Error (%), enter the maximum predicted relative error that
is acceptable for all parameters.
Note:
For advanced options to be visible, the Show Advanced Options check box
must be selected on the Design Exploration tab in the Options window. For
more information, see Design Exploration Options (p. 35).
• For Output Variable Combinations, select a value for determining how output variables are
considered in terms of predicted relative error. This value controls the number of refinement
points that are created per iteration.
• For Crowding Distance Separation Percentage, enter a value for determining the minimum
allowable distance between new refinement points.
During the refinement process, if one or more design points fail, auto-refinement uses the
value specified for this property to avoid areas close to failed design points when generating
the next refinement points. This property specifies the minimum allowable distance between
new refinement points, providing a radius around failed design points that serves as a con-
straint for refinement points.
• If you want to generate verification points (p. 138), select the Generate Verification Points
check box. Number of Verification Points becomes visible. The default value is 1.
• If you want a different number of verification points to be generated, enter the desired value.
2. Under Refinement in the Properties pane, select or clear the Inherit From Model Settings
check box. This determines whether the maximum predicted relative error defined at the
model level is applicable to the parameter.
If the Inherit From Model Settings check box is cleared, Maximum Predicted Relative Error
becomes available so that you can enter the maximum predicted relative error that you find
acceptable. This can be different than the maximum predicted relative error defined at the
model level.
The generated points for the refinement appear in the refinement points table, which displays in
the Table pane when either Refinement or a refinement point is selected in the Outline pane. As
the refinement points are updated, the Convergence Curves chart updates dynamically, allowing
you to monitor the progress of the Kriging auto-refinement. For more information, see Kriging
Convergence Curves Chart (p. 113).
The auto-refinement process continues until either the maximum number of refinement points is
reached or the response surface is accurate enough for direct output parameters. If all output
parameters have a predicted relative error that is less than the Maximum Predicted Relative Error
values defined for them, the refinement is converged.
As with Sparse Grid, Kriging is an interpolation. Goodness of fit is not a reliable measure for Kriging
because the response surface passes through all of the design points, making the goodness of fit
appear to be perfect. As such, the generation of verification points is essential for assessing the
quality of the response surface and understanding the actual goodness of fit.
If the error at the verification points is larger than the predicted relative error given by Kriging,
you can insert the verification points as refinement points and then run a new auto-refinement so
that the new points are included in the generation of the response surface. Verification points can
only be inserted as refinement points in manual refinement mode. For more information, see Veri-
fication Points (p. 138).
Kriging Properties
For Kriging, the following properties are available in the Properties pane for the response surface.
Note:
For advanced options to be visible, the Show Advanced Options check box must be
selected on the Design Exploration tab in the Options window. For more information,
see Design Exploration Options (p. 35).
Meta Model
• Response Surface Type: Determines the type of response surface. This section assumes
Kriging is selected and advanced options are shown.
• Kernel Variation Type: Determines the type of kernel variation. Choices are:
– Variable: Sets the kernel variation to pure Kriging mode, which uses a correlation parameter
for each design variable.
– Constant: Sets the kernel variation to radial basis function mode, which uses a single cor-
relation parameter for all design variables.
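For readers who want intuition for these two modes, the sketch below shows a Gaussian correlation kernel of the kind commonly used in Kriging. It is illustrative only and is not DesignXplorer's internal implementation; the point is simply that the Variable mode carries one correlation parameter per design variable, while the Constant mode shares a single parameter across all variables (radial basis function behavior).

```python
import numpy as np

# Illustrative Gaussian correlation kernel (not DesignXplorer internals).
# theta is an array of per-variable correlation parameters ("Variable" mode)
# or a single scalar shared by all design variables ("Constant" mode).
def correlation(x1, x2, theta):
    d2 = (np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)) ** 2
    return float(np.exp(-np.sum(theta * d2)))

x_a, x_b = [0.2, 1.5], [0.4, 1.0]
print(correlation(x_a, x_b, np.array([10.0, 0.5])))  # pure Kriging mode
print(correlation(x_a, x_b, 2.0))                    # radial basis function mode
```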
Refinement
• Number of Refinement Points: Read-only property indicating the number of existing refine-
ment points.
• Maximum Predicted Relative Error (%): Determines the maximum predicted relative error
that is acceptable for all parameters.
• Output Variable Combinations: Determines how output variables are considered in terms
of predicted relative error, which controls the number of refinement points created per
iteration. Choices are:
– Maximum Output: Only the output with the largest predicted relative error is considered.
Only one refinement point is generated in each iteration.
– All Outputs: All outputs are considered. Multiple refinement points are generated in each
iteration.
– Sum of Outputs: The combined predicted relative error of all outputs is considered. Only
one refinement point is generated in each iteration.
• Crowding Distance Separation Percentage: Determines the minimum allowable distance
between new refinement points. If two candidate refinement points are closer together than
the defined minimum distance, only the first candidate is inserted as a new refinement point.
• Predicted Relative Error (%): Read-only value indicating the predicted relative error for all
parameters.
• Converged: Read-only value indicating the state of the convergence. Possible values are Yes
and No.
• Inherit From Model Settings: Indicates if the maximum predicted relative error that you've
defined at the model level is applicable for this output parameter. This check box is selected by
default.
• Maximum Predicted Relative Error (%): Displays only when the Inherit From Model Settings
check box is cleared. Determines the maximum predicted relative error that you accept for this
output parameter. This value can be different than the maximum predicted relative error defined
at the model level.
• Predicted Relative Error: Read-only value populated on update. Predicted relative error for this
output parameter.
Kriging Convergence Curves Chart
The chart is automatically generated and dynamically updated as the Kriging refinement runs. You
can view the chart and its properties by selecting Refinement or a refinement point in the
Outline pane.
There are two curves for each output parameter. One curve represents the percentage of the current
predicted relative error. The other curve represents the maximum predicted relative error required
for that parameter.
Additionally, there is a single curve that represents the maximum of the predicted relative error for
output parameters that are not converged.
During the run, you can click the red stop button in the Progress pane to interrupt the process so
that you can adjust the requested maximum error or change chart properties before continuing.
Non-Parametric Regression
Non-parametric regression (NPR) provides improved response quality and is initialized with one of
the available DOE types. The NPR algorithm is implemented in DesignXplorer as a metamodeling technique
intended for outputs that exhibit highly nonlinear behavior with respect to the inputs.
NPR belongs to a general class of Support Vector Method (SVM) type techniques. These are data
classification methods that use hyperplanes to separate data groups. The regression method works
similarly. The main difference is that the hyperplane is used to categorize a subset of the input sample
vectors that are deemed sufficient to represent the output in question. This subset is called the support
vector set.
The internal parameters of the response surface are fixed to constant values and are not optimized.
The values are determined from a series of benchmark tests and strike a compromise between the
response surface accuracy and computational speed. For a large family of problems, the current settings
provide good results. However, for some problem types (like ones dominated by flat surfaces or lower
order polynomials), some oscillations might be noticed between the DOE points.
To circumvent this, you can use a larger number of DOE points or, depending on the fitness landscape
of the problem, use one of the several optimal space-filling DOEs provided. In general, it is suggested
that the problems first be fitted with a quadratic response surface and the NPR fitting adopted only
when the goodness of fit from the quadratic response surface model is unsatisfactory. This ensures
that the NPR is only used for problems where low order polynomials do not dominate.
Neural Network
A neural network is a mathematical technique based on the natural neural network in the human
brain.
To interpolate a function, a network with three levels (input, hidden, and output) is built and the
connections between them are weighted.
Each arrow is associated with a weight (wjk). Each ring is called a cell (like a neuron).
If the inputs are xi, the hidden level contains the functions gj(xi), and the output solution is a
weighted sum of the hidden-level functions:
yk = Σj wjk · gj(xi), with gj(xi) = K(Σi wij · xi)
where K is a predefined function, such as the hyperbolic tangent or an exponential-based function,
used to obtain something similar to the binary behavior of the electrical brain signal (like a step function).
The function is continuous and differentiable.
The weights (wjk) are obtained from an algorithm that minimizes (as in the least-squares method)
the distance between the interpolation and the known values (design points). This is called learning.
The error is checked at each iteration with the design points that are not used for learning. Learning
design points therefore need to be kept separate from error-checking design points.
The error decreases and then increases when the interpolation order is too high. The minimization
algorithm is stopped when the error is the lowest.
This method uses a limited number of design points to build the approximation. It works better when
the number of design points and the number of intermediate cells are high. It can give interesting
results with several parameters.
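The sketch below illustrates the interpolation scheme just described with a single hidden layer and a tanh kernel. It is a minimal, self-contained example rather than the DesignXplorer implementation; for brevity it fixes the input-to-hidden weights and fits only the output weights by least squares, and all data and variable names are invented.

```python
import numpy as np

# Minimal single-hidden-layer interpolation sketch (not DesignXplorer code).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(30, 2))     # 30 design points, 2 input parameters
y = np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2     # outputs, as if from a real solve

n_cells = 10                                 # number of cells in the hidden level
W_in = rng.normal(size=(2, n_cells))         # input-to-hidden weights
b = rng.normal(size=n_cells)                 # hidden biases

def hidden(points):
    # g_j(x) = K(sum_i w_ij * x_i), with K = tanh
    return np.tanh(points @ W_in + b)

# "Learning": fit the output weights w_jk by least squares on the design points.
w_out, *_ = np.linalg.lstsq(hidden(X), y, rcond=None)

x_new = np.array([[0.3, -0.2]])
print(hidden(x_new) @ w_out)                 # approximated output at a new point
```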
Meta Model
• Response Surface Type: Determines the type of response surface. This section assumes
Neural Network is selected.
• Number of Cells: Determines the number of neurons to use in the hidden layer of the neural
network.
Sparse Grid
Sparse Grid provides refinement capabilities for continuous parameters, including those with manu-
facturable values. It does not support discrete parameters. Sparse Grid uses an adaptive response
surface, which means that it refines itself automatically. A dimension-adaptive algorithm allows it to
determine which dimensions are most important to the objective functions, thereby reducing
computational effort.
Sparse Grid is an adaptive algorithm based on a hierarchy of grids. The DOE type Sparse Grid Initial-
ization generates a DOE matrix containing all the design points for the smallest required grid: the
level 0 (the point at the current values) plus level 1 (two points per input parameter). If the
expected level of quality is not met, the algorithm further refines the grid by building a new level in
the corresponding directions. This process is repeated until one of the following occurs:
• The value specified for Maximum Depth is reached in a direction. The maximum depth is the
maximum number of levels that can be created in the hierarchy. Once the maximum depth for a direction
is reached, there is no further refinement in that direction.
• The value specified for Maximum Relative Error (%) is attained.
• The value specified for Maximum Number of Refinement Points is reached.
The relative error for an output parameter is the error between the predicted and the observed output
values, normalized by the known maximum variation of the output parameter at this step of the
process. Because there are multiple output parameters to process, DesignXplorer computes the worst
relative error value for all of the output parameters and then compares this against the value for
Maximum Relative Error. As long as at least one output parameter has a relative error greater than
the expected error, the maximum relative error criterion is not validated.
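The following sketch restates this accuracy check in code. It is not DesignXplorer's algorithm; the parameter names (P3, P4) and data are invented, and the normalization simply uses the observed range of each output as the known maximum variation.

```python
# Hypothetical sketch of the Sparse Grid accuracy check described above.
def worst_relative_error(observed, predicted):
    worst = 0.0
    for name in observed:
        variation = max(observed[name]) - min(observed[name])  # assumed nonzero
        errors = [abs(p - o) / variation
                  for o, p in zip(observed[name], predicted[name])]
        worst = max(worst, 100.0 * max(errors))
    return worst

observed = {"P3": [1.0, 1.4, 2.0], "P4": [10.0, 12.0, 20.0]}
predicted = {"P3": [1.02, 1.39, 1.95], "P4": [10.4, 12.1, 19.8]}
max_relative_error = 5.0   # the Maximum Relative Error (%) property
# Refinement continues while the worst relative error exceeds the criterion.
print(worst_relative_error(observed, predicted) <= max_relative_error)
```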
Because Sparse Grid uses an auto-refinement algorithm, it is not possible to add refinement points
manually. As a result, for Sparse Grid, the following behaviors occur:
• The Insert as Refinement Point operation is not available on the right-click context menu. Also, if you
attempt to insert a refinement point by using commands, an error occurs.
1. In the Outline pane for the Design of Experiments cell, select Design of Experiments.
2. Under Design of Experiments in the Properties pane, set Design of Experiments Type to
Sparse Grid.
3. Either preview or update the DOE to validate your selections. This causes the Response Surface
Type property for Response Surface cells in downstream Design Exploration systems to default
to Sparse Grid.
Note:
If the Design of Experiments cell is shared by multiple systems, the DOE definition ap-
plies to the Response Surface cells for each of those systems.
1. In the Outline pane for the Response Surface cell, select Response Surface.
2. In the Properties pane under Meta Model, verify that Response Surface Type is set to Sparse
Grid.
• For Maximum Relative Error (%), enter the value to be allowed for all of the output para-
meters. The smaller the value, the more accurate the response surface.
• For Maximum Depth, enter the number of grid levels that can be created in a given direction.
If needed, you can adjust this value later, according to your update results.
• For Maximum Number of Refinement Points, enter the maximum number of refinement
points that can be generated as part of the refinement process.
For more information, see Sparse Grid Auto-Refinement Properties (p. 119).
At any time, you can click the red stop button in the Progress pane to interrupt the Sparse Grid
refinement so that you can change properties or see partial results. The refinement points that
have already been calculated are visible, and the displayed charts are based on the response surface's
current level of refinement. The refinement points that have not been calculated display the Update
Required icon to indicate which output parameters must be updated.
If a design point fails during the refinement process, Sparse Grid refinement stops in the area where
the failed design point is located, but refinement continues in the rest of the parameter space to
the degree possible. Failed refinement points display the Update Failed, Update Required icon.
You can attempt to pass the failed design points by updating the response surface once again.
The auto-refinement process continues until the Maximum Relative Error (%) objective is attained,
the Maximum Depth limit is reached for all input parameters, the Maximum Number of Refinement
Points is reached, or the response surface converges. The refinement is converged when the Current
Relative Error for every output parameter is lower than the Maximum Relative Error (%) defined
for it.
Note:
If Sparse Grid refinement does not appear to be converging, you can do the following
to accept the current level of convergence:
1. Interrupt the refinement by clicking the red stop button in the Progress pane.
2. Either set the value for Maximum Relative Error (%) slightly above the value for
Current Relative Error (%) or set the value for Maximum Number of Refinement
Points equal to the value for Number of Refinement Points.
Note:
Because Sparse Grid uses an auto-refinement process, you cannot add refinement points
manually as with other DOE types. To generate verification points automatically, under
Verification Points in the Properties pane for the Response Surface cell, select the
Generate Verification Points check box. Then, update the response surface.
For more information on assessing the quality of a response surface, see Goodness of Fit for Output
Parameters in a Response Surface (p. 128) and Verification Points (p. 138).
• Maximum Relative Error (%): Maximum relative error allowable for the response surface. This
value is used to compare against the worst relative error obtained for all output parameters. So
long as any output parameter has a relative error greater than the expected relative error, this
criterion is not validated. This property is a percentage. The default is 5.
• Maximum Depth: Maximum depth (number of hierarchy levels) that can be created as part of
the Sparse Grid hierarchy. Once the number of levels defined in this property is reached for a
direction, refinement does not continue in that direction. The default is 4. A minimum value of
2 is required because the Sparse Grid Initialization DOE type already generates levels 0 and 1.
• Maximum Number of Refinement Points: Maximum number of refinement points that can be
generated for use with the Sparse Grid algorithm. The default is 1000.
• Converged: Indicates the state of the convergence. Possible values are Yes and No.
The chart is automatically generated and dynamically updated as the Sparse Grid refinement runs.
You can view the chart and its properties by selecting Refinement or a refinement point in the
Outline pane.
There is one curve per output parameter to represent the current relative error (in percentage),
and one curve to represent the maximum relative error for all direct output parameters. Auto-re-
finement stops when the maximum relative error required (represented as a horizontal threshold
line) has been met.
You can disable one or several output parameter curves and keep only the curve of the maximum
relative error.
During the run, you can click the red stop button in the Progress pane to interrupt the process so
that you can adjust the requested maximum error or change chart properties before continuing.
The following sections provide recommendations on making your initial selection of a meta-modeling
algorithm, evaluating the response surface performance in terms of goodness of fit, and changing the
response surface as needed to improve the goodness of fit:
Working with Response Surfaces
Changing the Response Surface
Refinement Points
Performing a Manual Refinement
The number of design points should always exceed the number of inputs. Ideally, you should have
at least twice as many design points as inputs. Most of the standard DOE types are designed to
generate a sufficient number, but custom DOE types might not.
Keep only the input parameters that play a major role in your study. Disable any input
parameters that are not relevant by clearing their check boxes in the DOE's Outline pane. You can
determine which parameters are relevant from a correlation or sensitivity analysis.
Requirements and recommendations regarding the number of input parameters vary according to
the DOE type selected. For more information, see Number of Input Parameters for DOE Types (p. 88).
To specify that verification points are to be created when the response surface is generated, select
the Generate Verification Points check box in the response surface's Properties pane.
If you want to manually select a response surface type, however, it is usually a good practice to begin
with the Standard Response Surface - Full 2nd-Order Polynomials response surface. Once the response
surface is built, you can assess its quality by reviewing the goodness of fit for each output parameter.
1. In the Project Schematic, right-click the Response Surface cell and select Edit.
2. In the Outline pane, under Quality, select the goodness of fit object.
The Table pane displays goodness-of-fit metrics for each output parameter. The Chart pane displays
the Predicted vs. Observed chart.
3. Review the goodness of fit, paying particular attention to the Coefficient of Determination
property. The closer this value is to 1, the better the response surface. For more information, see
Goodness of Fit Criteria (p. 129).
• If the goodness of fit is acceptable, add verification points and then recheck the goodness of
fit. If needed, you can further refine the response surface manually. For more information, see
Performing a Manual Refinement (p. 124).
• If the goodness of fit is poor, try changing your response surface. For more information, see
Changing the Response Surface (p. 122).
1. In the Project Schematic, right-click the Response Surface cell and select Edit.
2. In the Outline pane, select the Response Surface cell.
3. In the Properties pane under Meta Model, select a different choice for Response Surface Type.
4. Update the response surface.
5. Review the goodness of fit for each output parameter to see if the new response surface provides
a better fit.
Once the goodness of fit is acceptable, you should add verification points and then recheck the
goodness of fit. If needed, you can further refine the response surface manually. For more information,
see Performing a Manual Refinement (p. 124).
If the default Genetic Aggregation response surface type takes too long to generate response
surfaces, you can switch to Standard Response Surface - Full 2nd-Order Polynomials. This re-
sponse surface type is particularly effective when the variation of the output is smooth with regard
to the input parameters.
Kriging
If Standard Response Surface - Full 2nd-Order Polynomials does not produce a response surface
with an acceptable goodness of fit, try Kriging. After updating the response surface with this
method, recheck the goodness of fit. For Kriging, the Coefficient of Determination should be equal to
1. If it is not, the model is over-constrained and not suitable for a Kriging fit. Kriging
fits the response surface through all the design points, which means that many of the other
metrics are always perfect. Therefore, it is particularly important to run verification points with
Kriging.
Non-Parametric Regression
If the model is over-constrained and not suitable for refinement via Kriging, try switching to Non-
Parametric Regression.
If you decide to use one of the other response surface types available, consider your selection carefully
to ensure that the response surface suits your specific purpose. Characteristics for response surface
types follow.
Genetic Aggregation
• Effective when the variation of the output is smooth with regard to input parameters
Kriging
Note:
Kriging is an interpolation that matches the points exactly. Always use verification
points to check the goodness of fit.
Non-Parametric Regression
Neural Network
Sparse Grid
Refinement Points
Refinement points are points added to your model to enrich and improve your response surface.
They can either be generated automatically with the response surface update or added manually as
described in Performing a Manual Refinement (p. 124). As with design points, DesignXplorer must
perform a design point update (a real solve) to obtain the output parameters for the refinement
points.
On update, the refinement points are used to build the response surface and are taken into account
for the generation of verification points. Along with DOE points, refinement points are used as
learning points in calculations for the goodness of fit.
All refinement points are shown in the refinement points table, which you access by selecting Refine-
ment → Refinement Points in the Outline pane.
After a first optimization study, you can insert the best candidate design as a refinement point to
improve the response surface quality in this area of the design space. To create a new refinement
point, you can use one of the following methods:
The point is added to the refinement points table and is used in the next response surface
generation.
1. In the Outline pane for the response surface, select Refinement → Refinement Points.
2. In the Table pane, enter input parameter values into the bottom row.
Note:
By default, output values are not editable in the table. However, you can change
the editing mode of the table. For more information, see Editable Output
Parameter Values (p. 292).
1. In the Table pane, right-click in the table and select Import Refinement Points.
2. Browse to and select the CSV file containing the refinement points to import. For more
information, see Importing Data from a CSV File (p. 294).
Note:
You can also copy information from a CSV file and paste it into the refinement
points table.
To update the refinement points and then rebuild the response surface to take these points into ac-
count, click Update on the toolbar. Each out-of-date refinement point is updated, and the response
surface is rebuilt from the DOE points and refinement points.
Min-Max Search
The Min-Max search examines the entire output parameter space from a response surface to approximate
the minimum and maximum values of each output parameter. If the Min-Max Search check box is se-
lected, a Min-Max search is performed each time the response surface is updated. Clearing the check
box disables the Min-Max search. You can disable this feature in cases where the search could be very
time-consuming, such as when there are a large number of input parameters or when there are discrete
input parameters.
Note:
If you have discrete input parameters, an alert displays before a Min-Max search is per-
formed, reminding you that the search can be time-consuming. If you do not want this
message to display, you can clear the Confirm if Min-Max Search can take a long time
check box on the Design Exploration tab in the Options window. For more information,
see Design Exploration Options (p. 35).
The algorithm used to search the output parameter space for the minimum and maximum values depends
on the input parameters.
• If at least one input parameter is continuous with manufacturable values, MISQP is used.
Before updating your response surface, set the options for the Min-Max search. In the Outline pane,
select Min-Max Search.
When there are only discrete parameters, in the Properties pane, Number of Initial Samples and
Number of Start Points are not available because they are not applicable. DesignXplorer computes
the number of parameter combinations and then sorts the sample points to get the minimum and
maximum values. For example, if there are only two discrete input parameters (with four and three
levels respectively), 12 sample points (4*3) are sorted to get the minimum and maximum values.
When there is at least one continuous parameter, in the Properties pane, Number of Initial Samples
and Number of Start Points are available.
• For Number of Initial Samples, enter the size of the initial sample set to generate in the space
of continuous input parameters.
• For Number of Start Points, enter the number of starting points that the Min-Max search al-
gorithm is to use. Using more starting points lengthens the search times.
If all input parameters are continuous, one search is done for the minimum and one for the maximum.
When the number of starting points is greater than one, two searches (minimum and maximum) are
done for each starting point. To find the minimum or maximum of an output, the generated sample
points are sorted and the n first points of the sort are used as the starting points. The search algorithm
is then run twice for each starting point, once for the minimum and once for the maximum.
Example 1
Assume that all input parameters are continuous and that Number of Initial Samples is set to 100.
DesignXplorer generates 100 initial sample points and then sorts them to get the starting points. For
each starting point, a local optimization is done.
If Number of Start Points is set to 3, six local optimizations are done (three for the minimum and three
for the maximum).
When there are discrete input parameters, the number of searches increases by the number of parameter
combinations. There are two searches per combination, one for the minimum and one for the maximum.
Example 2
Now assume that there are also two discrete input parameters (with three and four levels respectively).
The total number of discrete combinations is equal to 12 (3*4). For each combination, DesignXplorer
generates 100 initial sample points in the space of the continuous parameters, with the discrete
parameter values fixed to the values of that combination.
With multiple starting points, the number of searches is multiplied accordingly. For three starting points,
six local optimizations are done (three for the minimum and three for the maximum) for each combination.
Consequently, 12*100 initial sample points are generated and 12*6 local optimizations are done.
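The counts in these two examples can be reproduced with the small sketch below. The function name and arguments are illustrative only; the arithmetic follows directly from the text: one set of initial samples per discrete combination, and one minimum search plus one maximum search per starting point.

```python
# Illustrative arithmetic for the Min-Max search examples above.
def min_max_search_counts(discrete_levels, n_initial_samples, n_start_points):
    n_combinations = 1
    for levels in discrete_levels:
        n_combinations *= levels
    total_samples = n_combinations * n_initial_samples
    total_searches = n_combinations * n_start_points * 2   # minimum + maximum
    return total_samples, total_searches

print(min_max_search_counts([], 100, 3))      # Example 1: (100, 6)
print(min_max_search_counts([3, 4], 100, 3))  # Example 2: (1200, 72), that is 12*100 and 12*6
```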
Search Results
Once your response surface is updated, selecting Min-Max Search in the Outline pane displays the
sample points that contain the minimum and maximum values calculated for each output parameter
in the response surface table. The minimum and maximum values in the output parameter Properties
pane are also updated based on the results of the search. If the response surface is updated in any way,
including changing the fitting for an output parameter or performing a refinement, a new Min-Max
search is performed.
You can save the sample points shown in the response surface table by selecting Insert as Design
Points from the context menu. You can also save sample points as response points (or as refinement
points to improve the response surface) by selecting Explore Response Surface at Point from the
context menu.
The sample points obtained from the Min-Max search are used by the Screening optimization. If you
run a Screening optimization, the samples are automatically taken into account in the sample set used
to run or initialize the optimization. For more information, see Performing a Screening Optimiza-
tion (p. 179).
You can disable the Min-Max search by clearing the check box in the Enabled column in the Outline
pane. If you disable the Min-Max search, no search is done when the response surface is updated. If
you disable the search after performing an initial search, the results from the initial search remain in
the output parameter properties and are shown in the Table pane for the response surface when you
select Min-Max Search in the Outline pane.
Note:
• If the Design of Experiments cell is solved but the Response Surface cell is not
solved, the minimum and maximum values for each output parameter are extracted
from the DOE solution's design points and displayed in the Properties pane for the
output parameters.
• For discrete parameters and continuous parameters with manufacturable values, there
is only one minimum and maximum value per output parameter, even if a discrete
parameter has many levels. There is not one Min-Max value set per combination of
parameter values.
A response surface is built from design points in the DOE and refinement points, which are collectively
called learning points. Calculations for the goodness of fit compare the response surface outputs with
the DOE results used to create them.
For response surface types that try to find the best fit of the response surface to DOE points (such
as Standard Response Surface - Full 2nd-Order Polynomial), you can get an idea of how well the fit
was accomplished. However, for interpolated response surface methods that force the response surface
to pass through all of the DOE points (such as Kriging), the goodness of fit usually appears to be
perfect. In this case, goodness of fit indicates that the response surface passed through the DOE
points used to create it, but it does not indicate whether the response surface captures the parametric
solution.
If any of the input parameters is discrete, a different response surface is built for each combination
of the discrete levels and the quality of the response surface might be different from one configuration
to another.
Goodness of fit is closely related to the response surface type used to generate the response surface.
If the goodness of fit is not of the expected quality, you can try to improve it by changing the response
surface. For more information, see Changing the Response Surface (p. 122).
To add a new goodness of fit object, right-click Quality and select Insert Goodness of Fit. Right-click
the object to copy and paste, delete, or duplicate.
Note:
Criteria marked as advanced options are visible in the goodness of fit table only if you've
selected the Show Advanced Options check box on the Design Exploration tab in the
Options window. For more information, see Design Exploration Options (p. 35).
Several criteria are calculated for the points taken into account in the construction of the response
surface. The mathematical representations in the descriptions for these criteria use the following
notation:
The coefficient of determination is the percent of the variation of the output parameter that
can be explained by the response surface regression equation. It is the ratio of the explained
variation to the total variation. The best value is 1.
The points used to create the response surface are likely to contain variation for each output
parameter, unless all output values are the same, which results in a flat response surface. This
variation is illustrated by the response surface that is generated. If the response surface were
to pass directly through each point, which is the case for the Kriging response surface, the
coefficient of determination would be 1, meaning that all variation is explained.
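As a reference, the standard formula behind this description (explained variation over total variation) can be written as follows. This is a generic sketch with invented array names, not DesignXplorer code.

```python
import numpy as np

# Coefficient of determination: 1 - (unexplained variation / total variation).
def coefficient_of_determination(observed, predicted):
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((observed - predicted) ** 2)         # unexplained variation
    ss_tot = np.sum((observed - observed.mean()) ** 2)   # total variation
    return 1.0 - ss_res / ss_tot

print(coefficient_of_determination([1.0, 2.0, 3.0], [1.1, 1.9, 3.05]))  # close to 1
```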
The adjusted coefficient of determination is an advanced option that is available only for
standard response surfaces. It takes the sample size into consideration when computing the
coefficient of determination. The best value is 1. When the number of samples is small (<30),
the adjusted coefficient of determination is usually more reliable than the coefficient of determ-
ination.
The maximum relative residual is an advanced option. It is the maximum relative distance from the
calculated response surface to any of the generated points. The best value is 0%. In general, the
closer the value is to 0%, the better the quality of the response surface.
However, in some situations, you can have a larger value and still have a good response surface.
For example, this can be true when the mean of the output values is close to zero.
For regression methods, the root mean square error is the square root of the average square
of the residuals at the DOE points. The best value is 0. In general, the closer the value is to 0,
the better the quality of the response surface.
The relative root mean square error is an advanced option. It is the square root of the average
square of the residuals scaled by the actual output values at the points for regression methods.
The best value is 0%. In general, the closer the value is to 0%, the better the quality of the response
surface.
However, in some situations, you can have a larger value and still have a good response surface.
For example, this can be true when some of the output values are close to zero. You can obtain
a relative error of 100% or more when the observed value is 1e-10 and the predicted value is 1e-8.
However, if the range of output values is on the order of 1, this error becomes negligible.
The relative maximum absolute error is the absolute maximum residual value relative to the
standard deviation of the actual output data, modified by the number of samples. The best
value is 0%. In general, the closer the value is to 0%, the better the quality of the response surface.
The relative maximum absolute error and the relative average absolute error correspond to the
maximum error and average absolute error scaled by the standard deviation. For example, the
relative root mean square error becomes negligible if both of these values are small.
The relative average absolute error is the average of the residuals relative to the standard devi-
ation of the actual outputs. This value is useful when the number of samples is low (<30). The
best value is 0%. In general, the closer the value is to 0%, the better the quality of the response
surface.
The relative average absolute error and the relative maximum absolute error correspond to the
maximum error and average absolute error scaled by the standard deviation. For example, the
relative root mean square error becomes negligible if both of these values are small.
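The residual-based criteria above follow standard definitions, summarized in the sketch below. DesignXplorer's exact scaling (for example, the modification by the number of samples) may differ slightly, so treat this as an approximation with invented names rather than the product's formulas.

```python
import numpy as np

# Approximate, standard forms of the residual-based goodness-of-fit criteria.
def goodness_of_fit_metrics(observed, predicted):
    o = np.asarray(observed, dtype=float)
    p = np.asarray(predicted, dtype=float)
    residuals = o - p
    std = np.std(o, ddof=1)   # standard deviation of the actual output data
    return {
        "Root Mean Square Error": np.sqrt(np.mean(residuals ** 2)),
        "Maximum Relative Residual (%)": 100.0 * np.max(np.abs(residuals / o)),
        "Relative Maximum Absolute Error (%)": 100.0 * np.max(np.abs(residuals)) / std,
        "Relative Average Absolute Error (%)": 100.0 * np.mean(np.abs(residuals)) / std,
    }

for name, value in goodness_of_fit_metrics([1.0, 2.0, 3.0, 4.0],
                                            [1.05, 1.9, 3.1, 3.9]).items():
    print(f"{name}: {value:.3f}")
```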
If a response surface has discrete input parameters, you can view the response surface for each
combination of discrete levels by selecting the discrete input values in the Properties pane.
For a Genetic Aggregation response surface, goodness of fit is also calculated for learning points
by using the cross-validation technique. The advantages of cross-validation are that it assesses:
• Richness of the DOE (instead of only measuring the quality of the fit)
• Stability of the response surface (without requiring new verification points)
When cross-validation produces bad metrics, it means that there are not enough points.
For more information on how goodness of fit is calculated for the different types of response surface
points, see Goodness of Fit Calculations (p. 137).
Chart
• Display Parameter Full Name: Specifies whether to display the full parameter name.
Input Parameters
This category is shown only when there are discrete input parameters. You can select a discrete
value to view the associated response surface.
General
This category is shown only when Response Surface Type is set to Standard Response Surface
- Full 2nd-Order Polynomial and the Show Advanced Options check box is selected on the
Design Exploration tab in the Options window. For more information, see Design Exploration
Options (p. 35).
The single advanced property under this category, Confidence Level, is used post-modeling as
an input for assessing the goodness of fit. The confidence level indicates how likely it is that the
output parameter being estimated falls within the confidence interval. The default setting is 0.95, which
means that the interval is calculated so that the true value will fall within the interval 95 out of
100 times. However, when this advanced option is shown, you can set any value between 0 and
1.
Output Parameters
This category displays all output parameters so that you can indicate which to display.
For a Standard Response Surface - Full 2nd-Order Polynomials response surface, you can generate
the Advanced Goodness of Fit report (p. 134) for the selected output parameter.
In the goodness of fit table, each parameter is rated on how close it comes to the ideal value
for each goodness of fit metric. The rating is indicated by the number of gold stars or red
crosses next to the parameter. The worst rating is three red crosses. The best rating is three
gold stars.
Note:
The root mean square error has no rating because it is not a bounded characteristic.
When calculated for different types of response surface points, goodness-of-fit metrics quantify
different aspects of your response surface. You can interpret your goodness of fit table as follows:
For the Genetic Aggregation response surface, goodness-of-fit metrics calculated for learning
points via the cross-validation technique provide an especially effective way to assess the sta-
bility of your response surface without needing to add new verification points. If the metrics
are good, you can be confident in the quality of your model. If the metrics are not good, you
know that you need to enrich your model by adding new refinement points.
The Advanced Goodness of Fit report is a text-based report that shows goodness-of-fit metrics
for the response surface of a selected output parameter. This report can help you to assess
whether your response surface is reliable and accurate enough for you to proceed with confid-
ence. It is available for an output parameter only in a response surface generated when Re-
sponse Surface Type is set to Standard Response Surface - Full 2nd-Order Polynomials.
To generate the Advanced Goodness of Fit report for a given output parameter:
1. In the Outline pane for the response surface, select Quality → Goodness of Fit.
2. In the Table pane, right-click the column for the desired output parameter and select
Generate Advanced Goodness of Fit Report.
Once generated, the Advanced Goodness of Fit report displays in a separate window. The fol-
lowing properties provide general information about analyzing the goodness of fit:
Regression Model
Type of polynomial regression. Cross Quadratic Surface means that the response surface
uses constant terms, linear terms (X), pure quadratic terms (Xi*Xi) and cross-quadratic
terms (Xi*Xj). You cannot change this property.
Input Transformation
Applied transformation on continuous input parameters before solving the response surface.
To change the transformation type, advanced properties (p. 35) must be shown. In the
Outline pane for the Response Surface cell, select Response Surface. Then, in the Prop-
erties pane under Meta Model, change the selection for Inputs Transformation Type.
This property and the subsequent advanced property, Inputs Scaling, apply to all continuous
input variables, with and without manufacturable values. If you clear the Inputs Scaling check box,
the data for the input parameters is not scaled.
Output Transformation
Applied transformation on the given output parameter before solving the response surface.
To change the transformation type for the output parameter, in the Outline pane for the
Response Surface cell, select it. Then, in the Properties pane under Output Settings,
change the selection for Transformation Type. Transformations do not apply to derived
output parameters. If advanced properties (p. 35) are shown, you can also change the
Scaling property. When you clear this check box, the data for this output parameter is not scaled.
Analysis Type
Algorithm used to select the relevant regression terms. For Modified Linear Forward
Stepwise Regression, the individual regression terms are iteratively added to the regression
model if they are found to cause a significant improvement of the regression results. A
partial F-test is used to determine the significance of the individual regression terms. If a
regression term is not significant, it is ignored.
Filtering Significance Level
Used to filter out insignificant regression terms. A regression term is ignored when its
probability of being zero is greater than Filtering Significance Level and when its contri-
bution to the regression sum of squares is insignificant. For example, if Filtering Significance
Level is 0.05, a regression term is ignored when its probability of being zero is greater than
0.05 and its probability (based on F-test) to contribute to the sum of squares is less than
0.95 (1 − 0.05). The contribution of a regression term to the regression model is evaluated
by using the F-test and the Filtering Significance Level.
The following properties provide all the scaling models, transformation models, and regression
term coefficients used to build the regression model:
Regression term coefficients used to build the regression model. These coefficients are
displayed in a table with the following auxiliary values. The Confidence Level property for
Goodness of Fit controls the probability value. For more information, see Goodness of Fit
Display Properties (p. 132).
• Std. Dev. of Coefficient: Statistical estimation of the dispersion of the coefficient. A low
standard deviation indicates stability of the coefficient. A high standard deviation indicates
that the coefficient has a wide range of potential values.
• Prob. Coeff. =0: Corresponds to the probability that the coefficient is zero. If this prob-
ability is high, it suggests that the coefficient is insignificant. More significant coefficients
have a lower probability of being zero.
• Confidence Interval [Lower Bound; Upper Bound]: Corresponds to the confidence in-
terval of each coefficient. For example, if the confidence level is 0.95, there is a 95%
probability that the coefficient is in this range. The smaller the confidence interval, the
more likely it is that the coefficient is accurate.
The following properties provide statistical data that enable you to measure the stability of the
regression model and to understand which input parameters are significant.
VIF measures the degree of multi-collinearity of the i-th independent variable with the
other independent variables in the regression model. If the Maximum VIF for Full Regres-
sion Model value is very high (>10), there is a high interdependency among the terms and
the response surface is not unique. If the response surface is not reliable in spite of a good
R2 value, you should add more points to enrich the response surface.
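For reference, VIF can be computed with the standard definition 1 / (1 − R²), where R² comes from regressing one regression term on all the others. The sketch below is illustrative only (the data and names are invented); it is not the report's internal computation.

```python
import numpy as np

# Standard variance inflation factors for the columns of a term matrix X.
def vif(X):
    X = np.asarray(X, dtype=float)
    factors = []
    for i in range(X.shape[1]):
        target = X[:, i]
        others = np.delete(X, i, axis=1)
        A = np.column_stack([np.ones(len(target)), others])   # intercept + other terms
        coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
        r2 = 1.0 - (target - A @ coeffs).var() / target.var()
        factors.append(1.0 / (1.0 - r2))
    return factors

rng = np.random.default_rng(2)
x1 = rng.normal(size=50)
x2 = rng.normal(size=50)
x3 = x1 + 0.05 * rng.normal(size=50)        # nearly collinear with x1
print(vif(np.column_stack([x1, x2, x3])))   # VIF for x1 and x3 is large (> 10)
```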
Pearson correlation of selected regression terms. A non-diagonal term measures the collin-
earity between two independent variables. If the absolute value of this term is close to 1,
the response surface is not unique. As with a high VIF value, if the response surface is not
reliable in spite of the good R2 value, you should add more points to enrich the response
surface.
Each row of this table corresponds to a sample point used to build the response surface.
The Confidence Level property for Goodness of Fit controls the probability value. For
more information, see Goodness of Fit Display Properties (p. 132).
• Residual Value: Difference between the sample value and the approximated value.
• Approx. Std Dev: Standard deviation of the predicted value at this sample point. If the
standard deviation is small, the predicted value is statistically stable. Consequently, you
can have confidence in the predicted values in the area close to this point.
• Lower Bound and Upper Bound: Bounds of the confidence interval of the approximated
value. For example, if the confidence level is 0.95, there is a 95% probability that the
approximated value is in this range. The smaller the confidence interval, the more likely
it is that the approximated value is accurate.
Each row of this table corresponds to a sample point used to build the response surface.
• Hat Matrix Diagonal: These elements are the leverages that describe the influence each
observed value has on the fitted value for the same observation.
• Student's Deleted Residual: Student's residual with the i-th observation removed.
• Cook's Distance: Measures the effect of the i-th observation on all fitted values. A point
with a value greater than 1 can be considered influential.
• P-value: Probability of obtaining a test statistic at least as extreme as the one that was
actually observed, assuming that the null hypothesis is true. This value represents the
error percentage you can make if you reject the null hypothesis, where the null hypothesis
is that the sample point has not influenced the observed result. Traditionally, following
Fisher, one rejects the null hypothesis if the p-value is less than or equal to a specified
significance level, often 0.05, or more stringent values, such as 0.02 or 0.01.
Summary of previous data and additional metrics commonly used to estimate the quality
of the response surface. For more information, see Goodness of Fit for Output Parameters
in a Response Surface (p. 128).
Goodness-of-fit metrics can be calculated for two types of points: learning points, which are
used to generate the response surface, and verification points, which are not used in the generation
of the response surface.
Learning Points
For learning points, goodness of fit measures the quality of the response surface interpolation. To
calculate the goodness of fit, the error between the predicted and observed values is calculated
for each learning point.
Verification Points
For verification points, goodness of fit measures the quality of the response surface prediction. To
calculate the goodness of fit, the verification points are placed in locations that maximize the distance
to the learning points. After the verification points are calculated with a design point update, the
differences between verification point values and predicted values are calculated. For more inform-
ation, see Verification Points (p. 138).
For the Genetic Aggregation response surface, goodness of fit is also computed by cross-validation,
using the following notation:
• (xi, yi): the i-th learning point, where xi and yi are respectively the observed input and output
parameter values.
• ŷ: the response surface built with all of the learning points.
• ŷ−i: the response surface built with all of the learning points except the i-th learning point.
Standard goodness of fit on learning points compares yi to ŷ(xi). Goodness of fit based on cross-
validation compares yi to ŷ−i(xi).
For more information, see Genetic Aggregation (p. 307) in the theory section.
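The comparison can be illustrated with a leave-one-out sketch. A simple 1D quadratic fit stands in for the response surface here; this is purely illustrative and is not how Genetic Aggregation builds its surfaces.

```python
import numpy as np

# Leave-one-out illustration of cross-validation versus standard goodness of fit.
x = np.linspace(0.0, 1.0, 8)
y = np.exp(x)                                # learning-point outputs

def fit_predict(x_train, y_train, x_eval):
    coeffs = np.polyfit(x_train, y_train, 2)     # stand-in "response surface"
    return np.polyval(coeffs, x_eval)

# Standard goodness of fit: compare y_i with the full fit evaluated at x_i.
standard_residuals = y - fit_predict(x, y, x)

# Cross-validation: compare y_i with the fit built without the i-th learning point.
cv_residuals = np.array([
    y[i] - fit_predict(np.delete(x, i), np.delete(y, i), x[i])
    for i in range(len(x))
])

print(np.sqrt(np.mean(standard_residuals ** 2)))  # tends to be optimistic
print(np.sqrt(np.mean(cv_residuals ** 2)))        # more realistic error estimate
```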
Verification Points
Verification points enable you to verify that the response surface accurately approximates the output
parameter values. They compare the predicted and observed values of the output parameters. After
the response surface is created, the verification points are placed in locations that maximize the dis-
tance from existing DOE points and refinement points (Optimal Space-Filling algorithm). Verification
points can also be added manually or imported from a CSV file. For more information, see Creating
Verification Points (p. 139).
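The distance-maximizing idea can be sketched as follows: among candidate locations, keep the one whose nearest learning point is farthest away. The actual Optimal Space-Filling algorithm is more sophisticated than this; the example below is only a hint at the principle, with invented data.

```python
import numpy as np

# Maximin-style placement sketch (not the Optimal Space-Filling algorithm itself).
rng = np.random.default_rng(1)
learning_points = rng.uniform(0.0, 1.0, size=(15, 2))   # existing DOE and refinement points
candidates = rng.uniform(0.0, 1.0, size=(500, 2))       # candidate verification locations

def min_distance_to(points, candidate):
    return np.min(np.linalg.norm(points - candidate, axis=1))

best = max(candidates, key=lambda c: min_distance_to(learning_points, c))
print(best)   # candidate farthest from all existing learning points
```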
A design point update is a real solve that calculates each verification point. These verification point
results are then compared with the response surface predictions and the difference is calculated.
Verification points are useful in validating any type of response surface. In particular, however, you
should always use verification points to validate the accuracy of interpolated response surfaces, such
as Kriging or Sparse Grid.
You can specify that verification points are generated automatically by selecting the Generate
Verification Points check box in the Properties pane for the response surface. When this check
box is selected, Number of Verification Points can be set to the number of verification points
to be generated.
You can insert a new verification point directly in the verification points table:
1. In the Outline pane for the response surface, select Quality → Verification Points.
2. In the verification points table, enter the values of the input parameters into the New
Verification Point row.
3. Update the verification points and goodness-of-fit metrics by right-clicking the row and se-
lecting Update.
By default, the output parameters of the verification point table are grayed out and filled in
only by a real solve. However, to change the output values, you can change the editing mode
of the verification points table as described in Editable Output Parameter Values (p. 292). This
allows you to enter verification points manually, rather than by performing a real solve, and
still compare them with the response surface.
You can import data from a CSV file by right-clicking and selecting Import Verification Points.
This is a way to compare either experimental data or data run in another simulation with the
simulation response surface. You can also copy information from a CSV input file and paste it into
the verification points table.
• The verification points table shows the input and output values for each verification point gen-
erated.
• The Predicted vs Observed chart shows the values predicted from the response surface versus
the values observed from the design points. For more information, see Predicted vs. Observed
Chart (p. 140).
Verification points are not used to build the response surface until they are turned into refinement
points and the response surface is recalculated. If a verification point reveals that the current response
surface is of a poor quality, you can insert it as a refinement point so that it is taken into account
to improve the accuracy of the response surface.
1. In the Outline pane for the response surface, select Quality → Verification Points.
2. In the verification points table, right-click the verification point and select Insert as Refinement
Point.
Note:
The Insert as Refinement Point option is not available for Kriging and Sparse Grid
response surfaces.
To view the Predicted vs. Observed chart, in the Outline pane for the response surface, under Quality,
select either Goodness of Fit or Verification Points. A chart is available for each goodness of fit object.
The display is determined by your selections in the Properties pane.
By default, all output parameters are displayed on the chart, and the output values are normalized.
However, if only one output parameter is plotted, the output values are not normalized.
Verification points are not used in the generation of the response surface. Consequently, if they appear
close to the diagonal line, the response surface is correctly representing the parametric model. Oth-
erwise, not enough data is provided for the response surface to detect the parametric behavior of
the model. You must refine the response surface.
If you position the mouse cursor over a point of the chart, the corresponding parameter values appear
in the Properties pane, including the predicted and observed values for the output parameters.
If you right-click a point on the chart, you can select from the context menu options available for this
point. For example, you can insert the point as a refinement point or response point in the response
surface table, which is a good way to improve the response surface around a verification point with
an insufficient goodness of fit.
When a response surface is updated, one response point and one of each of the chart types are created
automatically. You can insert as many response points and charts as you want. To add a new chart:
1. In the Outline pane, right-click the response point under which you want to add the chart.
2. Select the insert option for the type of chart that you want to add.
If an instance of the chart already exists for any response point in the Outline pane, additional instances
of the chart have numbers appended to their names. For example, the second and third instances of
a Response chart are named Response 1 and Response 2.
To duplicate a chart that already exists in the Outline pane, right-click the chart and select Duplicate.
This operation creates a duplicate chart under the same response point.
Note:
Chart duplication triggers a chart update. If the update succeeds, both the original chart and
the duplicate are up-to-date.
Once you've created a chart, you can change the name of a chart cell by double-clicking it and entering
the new name. This does not affect the title of the chart, which is set as part of the chart properties.
You can also save a chart as a graphic by right-clicking it and selecting Save Image As. For more in-
formation, see Saving a Chart in the Ansys Workbench documentation.
Each chart provides the ability to visually explore the parameter space using options provided in the
chart's Properties pane. For discrete parameters and continuous parameters with manufacturable values,
you use drop-down menus. For continuous variables, you use sliders. The sliders allow you to modify
an input parameter value and view its effect on the displayed output parameter.
All of the Response charts under a response point in the Chart pane use the same input parameter
values because they are all based on the current parameter values for that response point. Thus, when
you modify the input parameter values, the response point and all of its charts are refreshed to take
the new values into account.
Once your inputs and output are selected, you can use the sliders or enter values to change the values
of the input parameters that are not selected to explore how these parameters affect the shape and
position of the curve.
Additionally, you can opt to display all the design points currently in use (from the DOE and the re-
sponse surface refinement) by selecting the Show Design Points check box in the Properties pane.
With a small number of input parameters, this option can help you to evaluate how closely the response
surface fits the design points in your project.
• 2D: Displays a two-dimensional contour graph that allows you to view how changes to a single
input affect a single output. For more information, see Using the 2D Response Chart (p. 146).
• 3D: Displays a three-dimensional contour graph that allows you to view how changes to two inputs
affect a single output. For more information, see Using the 3D Response Chart (p. 147).
When you solve a response surface, a Response chart is automatically added for the default response
point in the Outline pane for the response surface. You can add an additional response chart for
either the default response point or a different response point. To add another Response chart, right-
click the desired response point in the Outline pane and select Insert Response.
• Continuous parameters are represented by colored curves that reflect the continuous nature of
the parameter values.
• Discrete parameters are represented by bars that reflect the discrete nature of the parameter
values. There is one bar for each discrete value.
The following examples show how each type of parameter is displayed on the 2D Response chart.
The 3D Response chart has two inputs and can have combinations of parameters with like types
or unlike types. Response chart examples follow for possible combinations.
The 2D Slices Response chart has two inputs. Combinations of these inputs are categorized first by
the parameter type of the selected input (the X Axis) and then further distinguished by the parameter
type of the calculated input (the Slice Axis). For each X-axis parameter type, there are two different
renderings:
• The X axis in conjunction with continuous values (a continuous parameter). In this instance, you
specify the number of curves or "slices".
• The X axis in conjunction with discrete values (either a discrete parameter or a continuous para-
meter with manufacturable values). In this instance, the number of slices is automatically set to
the number of discrete levels or the number of manufacturable values.
1. In the Outline pane for the response surface, select the Response chart object.
For more information on available properties, see Response Chart: Properties (p. 154).
• For Chart Resolution Along X and Chart Resolution Along Y, specify the resolutions.
The Response chart automatically updates according to your selections. A smooth three-dimensional
contour of Z versus X and Y displays.
For more information on available properties, see Response Chart: Properties (p. 154).
The triad control at the bottom left of the Chart pane allows you to rotate the 3D response chart
in freehand mode or quickly view the chart from a particular plane. To zoom in or out on any part
of the chart, use the shift-middle mouse button or scroll wheel. See Setting Chart Properties for
details.
The value of the input on the slice axis is calculated from the number of curves defined for the X
and Y axes.
• When one or both of the inputs are continuous parameters, you specify the number of slices to
display.
• When one or both of the input parameters are either discrete or continuous with manufacturable
values, the number of slices is determined by the number of levels defined for the input paramet-
ers.
Essentially, the first input on the X axis is varying continuously, while the number of curves, or
slices, defined for the slice axis represents the second input. Both inputs are then displayed on the
XY plane, with regard to the output parameter on the Y axis. You can think of the 2D Slices Response
chart as a projection of the 3D response surface curves onto a flat surface.
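To make the projection idea concrete, the sketch below evaluates a hypothetical two-input response on a grid and extracts a fixed number of slices along the second input, which is essentially what the 2D Slices rendering plots against the X axis. The response function, ranges, slice count, and resolution are illustrative assumptions, not DesignXplorer data.

# Sketch: build 2D slices of a 3D response z = f(x, y).
# The function, ranges, and counts below are illustrative only.

def response(x, y):
    # Hypothetical smooth response surface.
    return x**2 + 0.5 * x * y - y

def linspace(lo, hi, n):
    step = (hi - lo) / (n - 1)
    return [lo + i * step for i in range(n)]

x_values = linspace(0.0, 1.0, 25)   # "Chart Resolution Along X"
y_slices = linspace(0.0, 2.0, 10)   # "Number of Slices"

# Each slice is a curve z(x) at a fixed y, that is, the 3D surface
# projected onto a flat plane.
slices = {y: [response(x, y) for x in x_values] for y in y_slices}

for y, curve in slices.items():
    print(f"slice at y={y:.2f}: z ranges from {min(curve):.3f} to {max(curve):.3f}")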
1. In the Outline pane for the response surface, select the Response Chart object.
For more information on available properties, see Response Chart: Properties (p. 154).
The 3D image is then rotated so that the X axis is along the bottom edge of the chart, mirroring
the perspective of the 2D Slices Response chart for a better comparison.
Finally, Mode is switched to 2D Slices. By comparing the following chart to the 3D version, you can see how the 2D Slices Response chart is actually a two-dimensional rendering of a three-dimensional image. From the following example, you can observe the following:
• Along the Y axis, there are 10 slices, corresponding to the value for Number of Slices.
• Along the X axis, each slice intersects with 25 points, corresponding to the value for Chart Res-
olution Along X.
• Input Parameter 2: WB_Y (continuous with manufacturable values of 1, 1.5, and 1.8, over a range of 1 to 1.8)
• Design Points: Six design points on the X axis and six design points on the Y axis.
Because there is an input with manufacturable values, the number of slices is determined by the
number of levels defined.
The following figures show the initial Response chart in 2D, 3D, and 2D Slices modes.
To display the design points from the DOE and the response surface refinement, select the Show
Design Points check box in the Properties pane. The design points are superimposed on your
chart.
When working with manufacturable values, you can improve chart quality by extending the parameter range. Here, the range is increased by adding manufacturable values of -1 and 3, which become the lower and upper bounds, respectively.
To improve the quality of your chart further, you can increase the number of points used in building it by entering values in the Properties pane. In the following figures, the number of points on the X and Y axes is increased from 6 to 25.
Chart Properties
Determines the properties of the chart.
• Display Parameter Full Name: Specifies whether to show the full parameter name or the short
parameter name.
• Chart Resolution Along X: Determines the number of points on the X axis. The number of points
controls the amount of curvature that can be displayed. A minimum of 2 points is required and
produces a straight line. A maximum of 100 points is allowed for maximum curvature. The default
is 25.
• Chart Resolution Along Y: Determines the number of points on the Y axis (3D and 2D Slices
modes only). The number of points controls the amount of curvature that can be displayed. A
minimum of 2 points is required and produces a straight line. A maximum of 100 points is allowed
for maximum curvature. The default is 25.
• Number of Slices: Determines the number of slices displayed in the 2D Slices chart.
• Show Design Points: Determines whether all of the design points currently in use, both from the Design of Experiments and from the response surface refinement, are displayed on the chart.
Axes Properties
Determines the data to display on each chart axis. For each axis, under Value, you can change what
the chart displays on an axis by selecting an option from the drop-down list.
• For X Axis, available options are each of the input parameters enabled in the project.
• For Y Axis, available options depend on the mode:
– For the 3D mode, each of the input parameters enabled in the project.
– For the 2D Slices mode, each of the output parameters in the project.
• For the Z Axis (3D mode only), available options are each of the output parameters in the project.
• For Slice Axis (2D Slices mode only), available options are each of the input parameters enabled
in the project. This property is available only when both input parameters are continuous.
Input Parameters
For each input parameter, you can change the value.
• For continuous parameters, you move the slider. The number to the right of the slider rep-
resents the current value.
• For discrete parameters or continuous parameters with manufacturable values, you use the
keyboard to enter a new value.
Output Parameters
For each output parameter, you can view the interpolated value.
You can use the slider bars in the Properties pane for the chart to adjust values for input parameters
to visualize different designs. You can also enter specific values. In the top left of the Chart pane, the
parameter legend box allows you to select the parameter that is in the primary (top) position. Only
the axis of the primary parameter is labeled with values.
• The Local Sensitivity chart is a powerful project-level tool, allowing you to see at a glance the effect
of all the input parameters on output parameters. For more information, see Using the Local
Sensitivity Chart (p. 158).
• The Local Sensitivity Curves chart helps you to further focus your analysis by allowing you to view
independent parameter variations within the standard Local Sensitivity chart. It provides a means
of viewing the effect of each input on specific outputs, given the current values of other parameters.
For more information, see Using the Local Sensitivity Curves Chart (p. 162).
When you solve a response surface, a Local Sensitivity chart and a Local Sensitivity Curves chart are
automatically added for the default response point in the Outline pane for the response surface. To
add another chart (this can be either an additional chart for the default response point or a chart for
a different response point), right-click the desired response point in the Outline pane and select
either Insert Local Sensitivity or Insert Local Sensitivity Curves.
For information on how continuous parameters with manufacturable values are represented on local
sensitivity charts, see Understanding the Local Sensitivities Display (p. 156).
For discrete parameters, the Properties pane includes a drop-down menu that is populated with the
discrete values defined for the parameter. By selecting different discrete values for each parameter,
you can explore the different sensitivities given different combinations of discrete values. The chart
is updated according to the changed parameter values. You can check the sensitivities in a single
chart, or you can create multiple charts to compare different designs.
For continuous parameters with manufacturable values, continuous values are represented by a
transparent gray curve, while manufacturable values are represented by colored markers.
• Input parameters P2-D and P4-P (the yellow and blue curves) are continuous parameters.
• Input parameters P1-B and P3-L (the gray curves) are continuous parameters with manufacturable
values.
• The black markers indicate the location of the response point on each of the input curves.
For continuous parameters with manufacturable values, continuous values are represented by a
gray bar, while manufacturable values are represented by a colored bar in front of the gray bar.
Each bar is defined with the minimum and maximum extracted from the manufacturable values
and the average calculated from the support curve. The minimum and maximum of the output can
vary according to whether or not manufacturable values are used. In this case, both the colored
bar and the gray bar for the input are visible on the chart.
Also, if the parameter range extends beyond the manufacturable values that are defined, the bar
is topped with a gray line to indicate the sensitivity obtained while ignoring the manufacturable
values.
• Input parameters P2-D and P4-P (the yellow and blue bars) are continuous parameters.
• Input parameters P1-B and P3-L (the red and green bars) are continuous with manufacturable
values.
• The bars for inputs P1-B and P3-L show differences between the minimum and maximum of the
output when manufacturable values are used (the colored bar in front) versus when they are not
used (the gray bar in back, now visible).
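A simple way to see why the colored and gray bars can differ is to compute the output range twice: once over the full continuous range of an input and once over only its manufacturable values. The sketch below does this for a hypothetical one-input response; the function and values are illustrative assumptions, not values from DesignXplorer.

# Sketch: output min/max over a continuous range vs. over the
# manufacturable values only. The response and values are examples.

def response(p):
    return (p - 1.2) ** 2  # hypothetical output as a function of one input

continuous_range = [0.5 + i * (2.5 - 0.5) / 99 for i in range(100)]
manufacturable_values = [1.0, 1.5, 1.8]

full = [response(p) for p in continuous_range]
manu = [response(p) for p in manufacturable_values]

print("gray bar (continuous range):     ", min(full), "to", max(full))
print("colored bar (manufacturable set):", min(manu), "to", max(manu))
# When the two ranges differ, both bars are visible on the chart.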
By default, the Local Sensitivity chart shows the effect of all input parameters on all output para-
meters, but it is also possible to specify the inputs and outputs to be considered. If you consider
only one output, the resulting chart provides an independent sensitivity analysis for each single
input parameter. To specify inputs and outputs, select or clear the Enabled check box.
For more information on available properties, see Local Sensitivity Chart: Properties (p. 161).
By default, Axes Range is set to Use Min Max of the Output Parameter, and the maximum known variation of P4 is 2.36. Using this chart, you want to determine what percentage of this range of variation is produced by varying only P1, or only P2.
1. Plot P4 = f(P1), keeping the other input parameters at their current values.
2. Calculate Max(P4) - Min(P4) over the P1 range (approximately 1.96 here) and divide it by the maximum known variation of P4 (2.36).
3. If P4 increases while P1 increases, the sign is positive. Otherwise, the sign is negative.
For this example, the sensitivity is 1.96 / 2.36, or approximately 83%. This corresponds to the red bar for P1-WB_Thickness in the Local Sensitivity chart.
Now plot P4 = f(P2): Max(P4) - Min(P4) is approximately 0.4, which corresponds to 16.9% of 2.36, as represented by the blue bar for P2-WB_Radius in the Local Sensitivity chart.
On each of these curves, only one input is varying. The other input parameters are constant.
However, you can change the value of any input and get an updated curve. This also applies to
the standard Local Sensitivity chart. All the sensitivity values are recalculated when you change
the value of an input parameter. If parameters are correlated, you'll see the sensitivity varying.
The relative weights of inputs can vary, depending on the design.
If Axes Range is set to Use Chart Data, the Min-Max search results are ignored. The input para-
meter generating the largest variation of the output is taken as the reference to calculate the
percentages for other input parameters.
In this example, P1 is the input generating the largest variation of P4 (1.96). The red bar for P1
is set to 100%, and the blue bar for P2 is set to 20.4%, meaning that P2 only generates 20.4%
of the variation that P1 can generate.
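The two scaling modes reduce to a simple normalization choice. The short sketch below reproduces the numbers quoted in this example (output variations of 1.96 for P1 and 0.4 for P2, and a maximum known variation of 2.36 for P4); it shows only the arithmetic, not any DesignXplorer API.

# Sketch: local sensitivity percentages under the two Axes Range modes,
# using the example numbers quoted above.

variations = {"P1": 1.96, "P2": 0.4}   # Max(P4) - Min(P4) when varying one input
min_max_range = 2.36                   # maximum known variation of P4

# Use Min Max of the Output Parameter: normalize by the Min-Max search range.
for name, delta in variations.items():
    print(f"{name}: {delta / min_max_range:.1%} of the Min-Max range")
# -> P1: about 83%, P2: about 16.9%

# Use Chart Data: normalize by the largest variation among the inputs.
reference = max(variations.values())
for name, delta in variations.items():
    print(f"{name}: {delta / reference:.1%} of the largest input effect")
# -> P1: 100%, P2: about 20.4%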
Chart Properties
• Display Parameter Full Name: Specify whether to show the full parameter name or the short
parameter name.
• Axes Range: Determines the lower and upper bounds of the Y axis. Available only for bar
charts.
– If Use Min Max of the Output Parameter is selected, values on the Y axis are scaled based
on Min-Max search results. If the Min-Max search object was disabled, the output parameter
bounds are determined from existing design points. The axis bounds are fixed to -100, 100,
or 0.
– If Use Chart Data is selected, values on the Y axis are scaled based on the input parameter
that generates the largest variation of the output parameter. The axis bounds are adjusted
to fit the displayed bars.
Input Parameters
Each of the input parameters is listed in this section. For each input parameter:
• Under Value, you can change the value by moving the slider or entering a new value with the
keyboard. The number to the right of the slider represents the current value.
• Under Enabled, you can enable or disable the parameter by selecting or clearing the check
box. Disabled parameters do not display on the chart.
Output Parameters
Each of the output parameters is listed in this section. For each output parameter:
• Under Value, you can view the interpolated value for the parameter.
• Under Enabled, you can enable or disable the parameter by selecting or clearing the check
box. Disabled parameters do not display on the chart.
You can modify various generic chart properties for this chart. For more information, see Setting
Chart Properties.
• Single output: Calculates the effect of each input parameter on a single output parameter of
your choice.
• Dual output: Calculates the effect of each input parameter on two output parameters of your
choice.
• Each curve represents the effect of an enabled input parameter on the selected output.
• For each curve, the current response point is indicated by a black point marker (all of the response
points have equal Y axis values).
• Continuous parameters with manufacturable values are represented by gray curves. The colored
markers are the manufacturable values that are defined.
• Where two or more inputs have the same effect, their curves are superimposed. The curve of the input displayed first in the list hides the curves of the other inputs.
To change the output being considered, go to the Properties pane and select a different output
parameter for the Y axis property.
• Each curve represents the effect of an enabled input parameter on the two selected outputs.
• The circle at the end of a curve represents the beginning of the curve, which is the lower bound
of the input parameter.
• For each curve, the current response point is indicated by a black point marker.
• Continuous parameters with manufacturable values are represented by gray curves. The colored
markers are the manufacturable values that are defined.
• Where two or more inputs have the same effect, their curves are superimposed. The curve of the input displayed first in the list hides the curves of the other inputs.
To change the outputs being considered, go to the Properties pane and select a different output
parameter for one or both of the chart axes.
For more information, see Local Sensitivity Curves Chart: Properties (p. 167).
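One way to picture the dual-output curves is as parametric paths: as a single input sweeps its range (with the other inputs held at the response point values), the pair of output values traces a curve in the output-versus-output plane. The sketch below generates such paths for a hypothetical two-output model; the functions, names, and ranges are assumptions for illustration only.

# Sketch: dual-output local sensitivity curves as parametric paths.
# The two-output model and parameter ranges are hypothetical.

def outputs(p1, p2):
    out_a = 3.0 * p1 + 0.5 * p2          # stand-in for the X-axis output
    out_b = -2.0 * p1 + 1.5 * p2 ** 2    # stand-in for the Y-axis output
    return out_a, out_b

response_point = {"p1": 0.5, "p2": 1.0}
steps = [i / 24 for i in range(25)]      # 25 points per curve (chart resolution)

# Vary one input at a time over [0, 1]; the other stays at the response point.
curve_p1 = [outputs(t, response_point["p2"]) for t in steps]
curve_p2 = [outputs(response_point["p1"], t) for t in steps]

# Each curve starts at the input's lower bound (the circle marker on the chart).
print("P1 curve start/end:", curve_p1[0], curve_p1[-1])
print("P2 curve start/end:", curve_p2[0], curve_p2[-1])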
An example follows of a Local Sensitivity bar chart for a given response point. This chart shows
the effect of four input parameters (P1-B, P2-D, P4-P, and P5-E) on two output parameters (P6-
V and P8-DIS).
On the left side of the chart, you can see how input parameters affect the output parameter P6-
V:
• P2-D (yellow bar) has the most effect and the effect is positive.
• Input parameters P4-P and P5-E (the teal and blue bars) have no effect at all.
On the right side of the chart, you can see the difference in how the same input parameters affect
the output parameter P8-DIS:
• P2-D (yellow bar) has the most effect, but the effect is negative.
• P1-B (red bar) has a moderate effect, but the effect is negative.
• P4-P (teal bar) now has a moderate effect, and the effect is positive.
• P5-E (blue bar) now has a moderate effect, and the effect is negative.
Single Output
When you view the Local Sensitivity Curves chart for the same response point, the chart defaults to the Single Output version. This means that the chart shows the effect of all enabled inputs on a single selected output. The output parameter is on the Y axis, so the effect of each input is read as the vertical variation of its curve.
In the two examples that follow, you can see that the Single Output curve charts for output
parameters P6-V and P8-DIS show the same sensitivities as the Local Sensitivity bar chart.
For output P6-V, inputs P4-P and P5-E have the same level of effect. Consequently, the blue line
is hidden behind the teal line.
For output P8-DIS, inputs P1-B and P5-E have the same level of effect. Consequently, the blue
line is hidden behind the red line.
Dual Output
For more information, you can view the dual output version of the Local Sensitivity Curves chart.
The dual output version shows the effect of all enabled inputs on two selected outputs. In this
particular example, there are only two output parameters. If there were six outputs in the project,
however, you could narrow the focus of your analysis by selecting the two that are of most interest
to you.
The following figure shows the effect of the same input parameters on both of the output parameters used previously. Output P6-V is on the X axis, so effects on it are read horizontally. Output P8-DIS is on the Y axis, so effects on it are read vertically. From this dual representation, you can see the following:
• P2-D has the most significant effect on both outputs. The effect is positive for output P6-V
and is negative for output P8-DIS.
• P1-B has a moderate effect on both outputs. The effect is positive for output P6-V and is
negative for output P8-DIS.
• P4-P has no effect on output P6-V and has a moderate positive effect on output P8-DIS.
• P5-E has no effect on output P6-V and a moderate negative effect on output P8-DIS. Due to
duplicate effects, its curve is hidden for both outputs.
Chart Properties
• Display Parameter Full Name: Specify whether to show the full parameter name or the short
parameter name.
• Axes Range: Determines the lower and upper bounds of the output parameters axes.
– If Use Min Max of the Output Parameter is selected, the values on the Y axis are scaled
based on Min-Max search results. If the Min-Max search object was disabled, the output
parameter bounds are determined from existing design points. The axis bounds are fixed to
-100, 100, or 0.
– If Use Chart Data is selected, the values on the Y axis are scaled based on the input parameter that generates the largest variation of the output parameter. The axis bounds are adjusted to fit the displayed curves.
• Chart Resolution: Determines the number of points per curve. The number of points controls
the amount of curvature that can be displayed. A minimum of 2 points is required and produces
a straight line. A maximum of 100 points is allowed for maximum curvature. The default is 25.
Axes Properties
Determines what data is displayed on each chart axis. For each axis, under Value, you can change
what the chart displays on an axis by selecting an option from the drop-down.
• For X-Axis, available options are Input Parameters and each of the output parameters defined
in the project. If Input Parameters is selected, you are viewing a single output chart. Otherwise,
you are viewing a dual output chart.
• For Y-Axis, available options are each of the output parameters defined in the project.
Input Parameters
Each of the input parameters is listed in this section. For each input parameter:
• Under Value, you can change the value by moving the slider or entering a new value with the
keyboard. The number to the right of the slider represents the current value.
• Under Enabled, you can enable or disable the parameter by selecting or clearing the check
box. Disabled parameters do not display on the chart.
Note:
Discrete input parameters are displayed with the level values associated with the response point, but they cannot be enabled as chart variables.
Output Parameters
Each of the output parameters is listed in this section. For each output parameter, under Value,
you can view the interpolated value.
You can modify various generic chart properties for this chart. For more information, see Setting
Chart Properties.
Exporting Response Surfaces
An update of a DX-ROM (a reduced-order model exported from a DesignXplorer response surface) consumes fewer resources and is faster than an update that runs the actual solvers. While
a solver-based evaluation can consume gigabytes of memory and take hours (or even days), a DX-ROM
evaluation consumes only kilobytes and is nearly instantaneous. To take advantage of this, you might
want to import a response surface back into Workbench and use it as a lightweight replacement for
one or more systems in a simulation project.
Response surface export is supported for all DesignXplorer response surface types except Neural Network
and Sparse Grid. Because DesignXplorer does not build a response surface for derived parameters, derived
parameters are not exported. All other parameter types are supported.
DesignXplorer can export a ROM response surface in the following file formats:
Functional Mock-up Unit (version 1.0): *.fmu and Functional Mock-up Unit (version 2.0): *.fmu
The .fmu format has the broadest applicability. It can be used by Twin Builder, MATLAB, or any
software with functional mock-up interface (FMI) support. It is the recommended format for export
to external software.
Note:
Using an FMU file exported from DesignXplorer implies the approval of the terms of use
supplied in the License.txt file. To access License.txt, use a zip utility to manually
extract all files from the .fmu package.
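Because an .fmu file is a standard ZIP archive, any zip utility can open it. As one option, the following Python sketch extracts the package contents and locates License.txt; the FMU file name and the output folder are placeholders for your own exported file.

# Sketch: inspect an exported FMU (a ZIP archive) and locate License.txt.
# "exported_rom.fmu" and "fmu_contents" are placeholder names.
import zipfile

with zipfile.ZipFile("exported_rom.fmu") as fmu:
    # Extract everything, then find the license file wherever it sits
    # in the package (its exact location is not assumed here).
    fmu.extractall("fmu_contents")
    license_files = [n for n in fmu.namelist() if n.endswith("License.txt")]
    print("License file(s):", license_files)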
DesignXplorer Native Database (DXROM)
This format can be imported into Workbench and DesignXplorer using custom DX-ROM tools. ANSYS provides the Response Surface Reader app, which contains three custom tools for working with any response surface exported as a DesignXplorer Native Database (DXROM) file: the Response Surface Reader System, the Excel DX-ROM Add-in, and the DX-ROM Postprocessing Utility. For more information, see Tools for Importing a DX-ROM Response Surface into Workbench (p. 170).
Once exported, the DX-ROM response surface file is available to be imported into the software or
reader of your choice. Disabled parameters and derived parameters are not included in the export.
1. Access the Export Response Surface dialog box using one of the following methods:
• In the Project Schematic, right-click the Response Surface cell and select Export Response
Surface.
• In the Outline pane, right-click Response Surface and select Export Response Surface.
• In the Outline pane, under Response Surface, select the Response chart for the desired response point. Then, in the Chart pane, right-click and select Export Response Surface.
2. In the Export Response Surface dialog box, specify a file type, location, and name. Then, click
Save.
Note:
When a very large response surface of the Kriging type is exported, DesignXplorer
excludes the predictor error from the export so that it can complete successfully. The
predictor error, which provides a confidence interval of the Kriging approximation,
does not have to be present to use the DX-ROM to evaluate a Kriging response surface.
You can import your DX-ROM response surface file into the software or FMI-supported reader of your
choice. For more information, see Tools for Importing a DX-ROM Response Surface into Work-
bench (p. 170).
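If you exported to the .fmu format, one option outside of Workbench is a third-party FMI library. The sketch below uses the FMPy package (not an ANSYS tool) to list the variables exposed by an exported FMU and run one evaluation; the file name, parameter names, and values are placeholders, and the exact call options may need adjusting to your FMPy version.

# Sketch: evaluate a DesignXplorer-exported FMU with the third-party
# FMPy package (pip install fmpy). File, parameter, and output names
# below are placeholders for your own exported ROM.
from fmpy import read_model_description, simulate_fmu

fmu_path = "exported_rom.fmu"

# List the variables exposed by the FMU (inputs and outputs).
description = read_model_description(fmu_path)
for variable in description.modelVariables:
    print(variable.causality, variable.name)

# Run one evaluation with chosen input values and read an output.
result = simulate_fmu(fmu_path,
                      start_values={"P1": 12.0, "P2": 0.75},
                      output=["P3"])
print("P3 =", result["P3"][-1])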
ANSYS provides the following custom DX-ROM tools for reading the DesignXplorer Native Database (DXROM) file format. Each of these tools is delivered in the Response Surface Reader app, which is available for download from the Ansys Store:
Response Surface Reader System
ACT extension that exposes a new system under Design Exploration in the Workbench Toolbox.
DX-ROM Postprocessing Utility
Lightweight command-line tool that can be used to evaluate your exported DX-ROM file, either in Workbench or in your own external simulation environment. You can use the utility independently (as delivered, with no need for additional development) or in the development of your own custom response surface tools.
On the page for the app, you can optionally select the product version that you want to download. The default is the latest version.
4. Download the desired version, which requires agreeing to the software license agreement.
5. Decompress the downloaded ZIP file for the app to a directory of your choice.
The DX-ROM tools included in the app are now available to install, load, and use. For more information,
see the help document DX_ResponseSurfaceReaders.pdf, which is included in the app's doc
directory.
Using Goal-Driven Optimizations
DesignXplorer offers two different types of goal-driven optimization systems: Response Surface Optim-
ization and Direct Optimization.
• A Response Surface Optimization system draws its information from its own Response Surface
cell and so is dependent on the quality of the response surface. The available optimization
methods are Screening, MOGA, NLPQL, and MISQP, which all use response surface evaluations
rather than real solves.
• A Direct Optimization system has only one cell, which utilizes real solves rather than response
surface evaluations. The available optimization methods are Screening, NLPQL, MISQP, Adaptive
Single-Objective, and Adaptive Multiple-Objective.
Tip:
On the DesignXplorer page in the ANSYS Help, DesignXplorer Optimization Tutorials provides
several optimization examples that you can step through to learn how to use DesignXplorer
to analyze and optimize design spaces.
To make performing optimizations easy for non-experts, DesignXplorer automatically selects an optim-
ization method by default. Based on the number and types of input parameters, the number of objectives
and constraints, any defined parameter relationships, and the run time index (for a Direct Optimization
system only), DesignXplorer selects the most appropriate method and sets its properties.
Any time that you make a change to any input used for automatic method selection, DesignXplorer
once again assesses the scenario to select the most appropriate method. For method availabilities and
capabilities, see Goal-Driven Optimization Methods (p. 178).
Note:
In Tools → Options → Design Exploration → Sampling and Optimization, you can change
Method Selection from Auto to Manual if you want this to be the default for newly inserted
optimization systems. As indicated in the next topic, you can always change the method
selection for a particular optimization system in the properties of the Optimization cell.
Although a Direct Optimization system does not have a Response Surface cell, it can draw information
from any other system or component that contains design point data. It is possible to reuse existing
design point data, reducing the time needed for the optimization, without altering the source of the
design points. For example:
• You can transfer design point data from an existing Response Surface Optimization system and improve upon it without actually changing the original response surface.
• You can use information from a Response Surface system that has been refined with the Kriging
method and validated. You can transfer the design point data to a Direct Optimization system
and then adjust the quality of the original response surface without affecting the attached direct
optimization.
• You can transfer information from any DesignXplorer system or component containing design
point data that has already been updated, saving time and resources by reusing existing, up-to-
date data rather than reprocessing it.
Note:
The transfer of design point data between two Direct Optimization systems is not supported.
You can monitor optimization progress of a Direct Optimization system from the Table pane. During
the direct optimization, the Table pane displays all the design points as they are calculated, allowing
you to see how the optimization proceeds, how it converges, and so on. Once the optimization is
complete, the raw design point data is stored for future reference. You can access the data by selecting
Raw Optimization Data in the Outline pane. The Table pane displays the design points that were
calculated during the optimization.
Note:
This list contains raw data and does not show feasibility, ratings, and so on for the included design points.
1. From under Design Exploration in the Workbench Toolbox, drag either the Response Surface
Optimization system or Direct Optimization system and drop it in the Project Schematic. You
can drag it:
• Directly under either the Parameter Set bar or an existing system under the Parameter Set bar,
in which case it does not share any data with any other systems in the Project Schematic.
• On the Design of Experiments cell of a system containing a response surface, in which case it
does share all data generated by the Design of Experiments cell.
• On the Response Surface cell of a system containing a response surface, in which case it does
share all data generated for the Design of Experiments and Response Surface cells.
For more information on data transfer, see Transferring Design Point Data for Direct Optimiza-
tion (p. 177).
2. For a Response Surface Optimization system, if you are not sharing the Design of Experiments
and Response Surface cells, edit the DOE, setting it up as described in Design of Experiments
Component Reference (p. 45). Then, solve both the Design of Experiments and Response Surface
cells.
3. For a Direct Optimization system, if you have not already shared data via the options in step 1,
you can create data transfer links to provide the system with design point data.
4. On the Project Schematic, double-click the Optimization cell of the new system to open the
component tab.
5. Indicate whether DesignXplorer is to automatically select the most appropriate optimization method or whether you want to select the method manually.
Automatic Selection
To have DesignXplorer select the most appropriate method:
• For a Direct Optimization system, Maximum Number of Candidates and Run Time
Index are available. For Run Time Index, the default value is 5 – Medium. However,
you can change this value if you want. This value and the number and types of input
parameters, number of objectives and constraints, and any defined parameter relation-
ships determine the method selected and its properties.
Manual Selection
To select the method manually:
2. In the Properties pane, for Method Selection, select Manual. If after automatic method selection
you switch to manual selection, the method and properties initially proposed by automatic
selection are kept so that you can simply modify only the needed properties.
3. Select the method and specify properties. For more information, see Goal-Driven Optimization
Methods (p. 178).
The read-only Estimated Number of Design Points property displays an estimate of the number of either evaluations or design points that will be calculated during the optimization process. However, this process can be stopped before this estimate is reached. For a Direct Optimization system, you can change the selection for Run Time Index to increase or decrease the number shown in Estimated Number of Design Points.
1. In the Outline pane, select either Objectives and Constraints or an object under it.
2. In the Table or Properties pane, define the optimization objectives and constraints. For more
information, see Defining Optimization Objectives and Constraints (p. 203).
• In the Outline pane, select Domain or an input parameter or parameter relationship under
it.
• In the Table or Properties pane, define the selected domain object. For more information,
see Defining the Optimization Domain (p. 199).
The optimization update stops when one of the following occurs:
• A stopping criterion, such as the maximum number of iterations or maximum number of points,
is reached. To continue the convergence, you can set Method Selection to Manual and modify
the stopping criterion. Or, you can adjust the problem, perhaps by changing the starting point
or domain definition.
• The optimization converges before reaching the maximum number of points or iterations. In
this case, in the Property pane under Optimization Status, DesignXplorer sets the read-only
property Converged to Yes.
• The optimization is unable to converge before reaching the maximum number of points or iter-
ations. In this case, in the Property pane under Optimization Status, DesignXplorer sets the
read-only property Converged to No.
Note:
The transfer of design point data between two Direct Optimization systems is not supported.
Data Transferred
The data transferred consists of all design points that have been obtained by a real solution. Design
points obtained from the evaluation of a response surface are not transferred. Design point data is
transferred according to the nature of its source component.
• Design of Experiments cell: All points, including those with custom output values.
• Response Surface cell: All refinement and verification points used to create or evaluate the response surface, because they are obtained by a real solution.
Note:
The points from the DOE are not transferred. To transfer these points, you must
create a data transfer link from the Design of Experiments cell.
• Parameters Correlation cell (linked to a Response Surface cell): No points because all of them
are obtained from a response surface evaluation rather than from a real solve.
Data Usage
Once transferred, the design point data is stored in an initial sample set. In the Properties pane for the
Optimization cell, Method Name is available under Optimization.
• If you set Method Name to NLPQL or MISQP, the samples are not used.
• If you set Method Name to Adaptive Single-Objective, the initial sample set is filled with the transferred points, and additional points are generated according to the LHS workflow to reach the requested number of samples.
• If you set Method Name to MOGA or Adaptive Multiple-Objective, the initial sample set is filled with the transferred points and additional points are generated to reach the requested number of samples.
The transferred points are added to the samples generated to initiate the optimization. For example, if
you have requested 100 samples and 15 points are transferred, a total of 115 samples are available to
initiate the optimization.
When there are duplicates between the transferred points and the initial sample set, the duplicates are
removed. For example, if you have requested 100 samples, 15 points are transferred, and 6 duplicates
are found, a total of 109 samples are available to initiate the optimization.
For more information on data transfer links, see Links in the Workbench User's Guide.
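The bookkeeping in the two examples above amounts to a set union on the parameter values. The short sketch below merges a requested sample set with transferred points and removes duplicates; the point values are arbitrary examples, not project data.

# Sketch: combine generated samples with transferred design points,
# removing duplicates. The values are arbitrary examples.

generated_samples = {(1.0, 2.0), (1.5, 2.5), (2.0, 3.0)}      # requested samples
transferred_points = {(1.5, 2.5), (9.0, 1.0)}                  # from a data transfer link

initial_sample_set = generated_samples | transferred_points    # union drops duplicates
print(len(initial_sample_set), "samples available to initiate the optimization")
# e.g. 100 requested + 15 transferred - 6 duplicates -> 109 samples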
The goal-driven optimization methods differ in their general capabilities. MISQP, MOGA, Adaptive Single-Objective, and Adaptive Multiple-Objective each support a fixed set of capabilities, while for an External Optimizer the capabilities are determined by the optimizer itself, as defined in the optimization extension.
Method Name, which appears in the Properties pane for the Optimization cell, specifies the optimiz-
ation method for the design study. DesignXplorer filters Method Name choices for applicability to the
current project, displaying only the methods that can be used to solve the optimization problem as it
is currently defined. For example, if your project has multiple objectives defined and an external optimizer
does not support multiple objectives, External Optimizer is excluded. When no objectives or constraints
are defined for a project, all optimization methods are available for Method Name. If you already know
that you want to use a particular external optimizer, you should select External Optimizer as the
method before setting up the rest of the project. Otherwise, it could be inadvertently filtered from the
list.
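Conceptually, this filtering is a compatibility check between the current problem definition and each method's capabilities. The sketch below shows one way such a filter could look; the capability flags are illustrative and are not DesignXplorer's internal data, although the single- versus multiple-objective split shown here matches the methods' documented behavior.

# Sketch: filter optimization methods by whether they support the
# current problem. The capability flags here are illustrative only.

methods = {
    "Screening":                   {"multiple_objectives": True},
    "MOGA":                        {"multiple_objectives": True},
    "NLPQL":                       {"multiple_objectives": False},
    "MISQP":                       {"multiple_objectives": False},
    "Adaptive Single-Objective":   {"multiple_objectives": False},
    "Adaptive Multiple-Objective": {"multiple_objectives": True},
}

problem = {"objective_count": 2}  # example: two objectives defined

available = [name for name, caps in methods.items()
             if problem["objective_count"] <= 1 or caps["multiple_objectives"]]
print("Methods offered for Method Name:", available)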
For instructions for setting the optimization properties for each algorithm, see:
Performing a Screening Optimization
Performing a MOGA Optimization
Performing an NLPQL Optimization
Performing an MISQP Optimization
Performing an Adaptive Single-Objective Optimization
Performing an Adaptive Multiple-Objective Optimization
Performing an Optimization with an External Optimizer
• Verify Candidate Points: Select to verify candidate points automatically at the end of an update
for a Response Surface Optimization system. This property is not applicable to a Direct Op-
timization system.
• Number of Samples: Enter the number of samples to generate for the optimization. Samples
are generated very rapidly from the response surface and do not require an actual solve of
design points.
The number of samples must be greater than the number of enabled input parameters. The
number of enabled input parameters is also the minimum number of samples required to
generate the Sensitivities chart. You can enter a minimum of 2 and a maximum of 10,000. The
default is 1000 for a Response Surface Optimization system and 100 for a Direct Optimization
system.
• In the Outline pane, select Objectives and Constraints or an object under it.
• In the Table or Properties pane, define the optimization objectives and constraints. For more
information, see Defining Optimization Objectives and Constraints (p. 203).
• In the Outline pane, select Domain or an input parameter or parameter relationship under it.
• In the Table or Properties pane, define the selected domain object. For more information, see
Defining the Optimization Domain (p. 199).
The result is a group of points or sample set. The Table pane displays the points that are most in
alignment with the objectives and constraints as the candidate points for the optimization. The
Properties pane displays Size of Generated Sample Set, which is read-only.
When the Screening method is used for a Response Surface Optimization system, the sample
points obtained from a response surface Min-Max search are automatically added to the sample
set used to initialize or run the optimization.
For example, if the Min-Max search results in 4 points and you run the Screening optimization
with Number of Samples set to 100, the final optimization sample set contains up to 104 points.
If a point found by the Min-Max search is also contained in the initial screening sample set, it is
only counted once in the final sample set.
Note:
When constraints exist, the Tradeoff chart indicates which samples are feasible (meet
the constraints) or infeasible (do not meet the constraints). There is a display option
for the infeasible points.
6. In the Outline pane, select Optimization. Then, in the Properties pane under Optimization
Status, review the outcome:
• Number of Evaluations: Number of design point evaluations performed. This value includes all points used in the optimization, including design points pulled from the cache. It can be used to measure the efficiency of the optimization method in finding the optimum design point.
• Number of Failures: Number of failed design points for the optimization. When design points
fail, a Screening optimization does not attempt to solve additional design points in their place.
The Samples chart for a Direct Optimization system does not include failed design points.
• Size of Generated Sample Set: Number of samples generated in the sample set. For a Direct
Optimization system, this is the number of samples successfully updated. For a Response
Surface Optimization system, this is the number of samples successfully updated plus the
number of different (non-duplicate) samples generated by the Min-Max search if it is enabled.
• Number of Candidates: Number of candidates obtained. This value is limited by the Maximum Number of Candidates input property.
7. In the Outline pane, select Domain or any object under it to view domain data in the Properties
and Table panes.
8. For a Direct Optimization system, select Raw Optimization Data in the Outline pane. The Table
pane displays the design points that were calculated during the optimization. If the raw optimiz-
ation data point exists in the design points table for the Parameter Set bar, the corresponding
design point name is indicated in parentheses in the Name column.
Note:
This list contains raw data and does not show feasibility, ratings, and so on for the included design points.
Note:
Some MOGA properties are advanced options. To display them, ensure that the Show Advanced Options check box is
selected on the Design Exploration tab of the Options window. Advanced options display
in italic type.
• Verify Candidate Points: Select to verify candidate points automatically at the end of the op-
timization update.
• Type of Initial Sampling: Advanced option for generating different kinds of sampling. If you
do not have any parameter relationships defined, set to Screening (default) or Optimal Space-
Filling. If you have parameter relationships defined, the initial sampling must be performed by
the constrained sampling algorithms because parameter relationships constrain the sampling.
For such cases, this property is automatically set to Constrained Sampling.
• Random Generator Seed: Advanced option that displays only when Type of Initial Sampling
is set to Optimal Space-Filling. Value for initializing the random number generator invoked
internally by the Optimal Space-Filling (OSF) algorithm. The value must be a positive integer.
You can generate different samplings by changing the value or regenerate the same sampling
by keeping the same value. The default is 0.
• Maximum Number of Cycles: Advanced option that displays only when Type of Initial Sampling is set to Optimal Space-Filling. Number of optimization loops the algorithm needs, which in turn determines the discrepancy of the OSF. The optimization is essentially combinatorial, so a large number of cycles slows down the process. However, this makes the discrepancy of the OSF smaller. The value must be greater than 0. For practical purposes, 10 cycles is usually good for up to 20 variables. The default is 10.
• Number of Initial Samples: Initial number of samples to use. This number must be greater
than the number of enabled input parameters. The minimum recommended number of initial
samples is 10 times the number of enabled input parameters. The larger the initial sample set,
the better your chances of finding the input parameter space that contains the best solutions.
The number of enabled input parameters is also the minimum number of samples required to
generate the Sensitivities chart. You can enter a minimum of 2 and a maximum of 10000. The
default is 100.
If you switch from the Screening method to the MOGA method, MOGA generates a new sample
set. For the sake of consistency, enter the same number of initial samples as you used for the
Screening method.
• Number of Samples Per Iteration: Number of samples to iterate and update with each iteration.
This number must be greater than the number of enabled input parameters but less than or
equal to the number of initial samples. The default is 100 for a Response Surface Optimization
system and 50 for a Direct Optimization system.
• Maximum Allowable Pareto Percentage: Convergence criterion. Percentage value that repres-
ents the ratio of the number of desired Pareto points to the number of samples per iteration.
When this percentage is reached, the optimization is converged. For example, a value of 70
with Number of Samples Per Iteration set to 200 would mean that the optimization should
stop once the resulting front of the MOGA optimization contains at least 140 points. Of course,
the optimization stops before that if the maximum number of iterations is reached.
If the Maximum Allowable Pareto Percentage is too low (below 30), the process can converge
prematurely. If the value is too high (above 80), the process can converge slowly. The value of
this property depends on the number of parameters and the nature of the design space itself.
The default is 70. Using a value between 55 and 75 works best for most problems. For more
information, see Convergence Criteria in MOGA-Based Multi-Objective Optimization (p. 347).
• Maximum Number of Iterations: Stop criterion. Maximum number of iterations that the algorithm is to execute. If this number is reached without the optimization having reached convergence, iterations stop. This also provides an idea of the maximum possible number of function evaluations that are needed for the full cycle, as well as the maximum possible time it can take to run the optimization. An upper bound on the number of evaluations is the number of initial samples plus the number of samples per iteration multiplied by the maximum number of iterations.
• Mutation Probability: Advanced option for specifying the probability of applying a mutation
on a design configuration. The value must be between 0 and 1. A larger value indicates a more
random algorithm. If the value is 1, the algorithm becomes a pure random search. A low prob-
ability of mutation (<0.2) is recommended. The default is 0.01. For more information on mutation,
see MOGA Steps to Generate a New Population (p. 350).
• Crossover Probability: Advanced option for specifying the probability with which parent
solutions are recombined to generate offspring solutions. The value must be between 0 and 1.
A smaller value indicates a more stable population and a faster (but less accurate) solution. If
the value is 0, the parents are copied directly to the new population. A high probability of
crossover (>0.9) is recommended. The default is 0.98.
• Type of Discrete Crossover: Advanced option for specifying the type of crossover for discrete
parameters. This property is visible only if there is at least one discrete input variable or continu-
ous input variable with manufacturable values. Three crossover types are available: One Point,
Two Points, and Uniform. According to the type of crossover selected, the children are closer
to or farther from their parents. The children are closer for One Point and farther for Uniform.
The default is One Point. For more information on crossover, see MOGA Steps to Generate a
New Population (p. 350).
• Maximum Number of Permutations: Displayed only when all input parameters are either discrete or continuous with manufacturable values. The value corresponds to the product of the number of levels per input parameter. For example, if the optimization problem contains 2 discrete parameters with 4 and 5 levels respectively and 1 continuous parameter with 6 manufacturable values, Maximum Number of Permutations is equal to 120 (4*5*6).
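Two of the quantities described above are simple products. The sketch below reproduces the permutation count from this example (4 x 5 x 6 = 120) and the Pareto convergence threshold from the earlier Maximum Allowable Pareto Percentage example (70% of 200 samples per iteration = 140 points); only the arithmetic is shown.

# Sketch: reproduce two bookkeeping values quoted in this section.
import math

# Maximum Number of Permutations: product of the level counts of the
# discrete / manufacturable-value inputs (4, 5, and 6 levels here).
levels = [4, 5, 6]
print("Maximum Number of Permutations:", math.prod(levels))   # 120

# Maximum Allowable Pareto Percentage: convergence once the Pareto
# front holds this share of the samples per iteration.
pareto_percentage = 70
samples_per_iteration = 200
print("Pareto points needed:", samples_per_iteration * pareto_percentage // 100)  # 140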
• In the Outline pane, select Objectives and Constraints or an object under it.
• In the Table or Properties pane, define the optimization objectives. For more information, see
Defining Optimization Objectives and Constraints (p. 203).
Note:
For MOGA, at least one output parameter must have an objective defined. Multiple
objectives are allowed.
• In the Outline pane under Domain, enable the desired input parameters.
• In the Table or Properties pane, define the selected domain object. For more information, see
Defining the Optimization Domain (p. 199).
The result is a group of points or sample set. The Table pane displays the points that are most in
alignment with the objectives as the candidate points for the optimization. The Properties pane
displays Size of Generated Sample Set, which is read-only.
6. In the Outline pane, select Optimization. Then, in the Properties pane under Optimization
Status, view the outcome:
• Obtained Pareto Percentage: Percentage representing the ratio of the number of Pareto points
obtained by the optimization to the number of samples generated per iteration.
• Number of Iterations: Number of iterations executed. Each iteration corresponds to the gener-
ation of a population.
• Number of Evaluations: Number of design point evaluations performed. This value takes into
account all points used in the optimization, including design points pulled from the cache. It can be used
to measure the efficiency of the optimization method to find the optimum design point.
• Number of Failures: Number of failed design points for the optimization. When a design point
fails, a MOGA optimization does not retain this point in the Pareto front and does not attempt
to solve another design point in its place. Failed design points are also not included on the
Direct Optimization Samples chart.
• Size of Generated Sample Set: Number of samples generated in the sample set. This is the
number of samples successfully updated for the last population generated by the algorithm. It
usually equals the Number of Samples Per Iteration.
• Number of Candidates: Number of candidates obtained. This value is limited by the Maximum
Number of Candidates input property.
7. In the Outline pane, select Domain or any object under it to review domain data in the Properties
and Table panes.
8. For a Direct Optimization system, select Raw Optimization Data in the Outline pane. The Table
pane displays the design points that were calculated during the optimization. If the raw optimiz-
ation data point exists in the design points table for the Parameter Set bar, the corresponding
design point name is indicated in parentheses in the Name column.
Note:
This list is compiled of raw data and does not show feasibility, ratings, and so on for
the included design points.
• Verify Candidate Points: Select to verify candidate points automatically at the end of the
optimization update.
• Finite Difference Approximation: When analytical gradients are not available, NLPQL approx-
imates them numerically. This property allows you to specify the method of approximating the
gradient of the objective function. Choices are:
– Central: Increases the accuracy of the gradient calculations by sampling from both sides of
the sample point but increases the number of design point evaluations by 50%. This method
makes use of the initial point, as well as the forward point and rear point. This is the default
method for preexisting databases and new Response Surface Optimization systems.
– Forward: Uses fewer design point evaluations but decreases the accuracy of the gradient
calculations. This method makes use of only two design points, the initial point and forward
point, to calculate the slope forward. This is the default method for new Direct Optimization
systems.
• Initial Finite Difference Delta (%): Advanced option for specifying the relative variation used
to perturb the current point to compute gradients. Used in conjunction with Allowable Con-
vergence (%) to ensure that the delta in NLPQL's calculation of finite differences is large enough
to be seen above the noise in the simulation problem. This wider sampling produces results
that are more clearly differentiated so that the difference is less affected by solution noise and
the gradient direction is clearer. The value should be larger than both the value for Allowable
Convergence (%) and the noise magnitude of the model. However, smaller values produce
more accurate results, so set Initial Finite Difference Delta (%) only as high as necessary to
be seen above simulation noise.
For a Direct Optimization system, the default percentage value is 1. For a Response Surface
Optimization system, the default percentage value is 0.001. The minimum is 0.0001, and the
maximum is 10.
For parameters with Allowed Values set to Manufacturable Values or Snap to Grid, the value
for Initial Finite Difference Delta (%) is ignored. In such cases, the closest allowed value is
used to determine the finite difference delta.
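The following Python sketch illustrates, in generic form (it is not NLPQL's implementation), how Forward and Central finite differences approximate a gradient and how a relative delta such as Initial Finite Difference Delta (%) is used to perturb each input:

    def forward_gradient(f, x, delta_pct=1.0):
        # One extra evaluation per input: f(x) plus one forward point per input.
        f0 = f(x)
        grad = []
        for i, xi in enumerate(x):
            h = (abs(xi) if xi != 0 else 1.0) * delta_pct / 100.0
            xp = list(x)
            xp[i] = xi + h
            grad.append((f(xp) - f0) / h)
        return grad

    def central_gradient(f, x, delta_pct=1.0):
        # Two extra evaluations per input (forward and rear points): more accurate,
        # at the cost of additional design point evaluations.
        grad = []
        for i, xi in enumerate(x):
            h = (abs(xi) if xi != 0 else 1.0) * delta_pct / 100.0
            xp, xm = list(x), list(x)
            xp[i] = xi + h
            xm[i] = xi - h
            grad.append((f(xp) - f(xm)) / (2.0 * h))
        return grad

    # Example objective with a known gradient of (2x, 2y).
    f = lambda v: v[0] ** 2 + v[1] ** 2
    print(forward_gradient(f, [3.0, 4.0]))   # approximately [6, 8]
    print(central_gradient(f, [3.0, 4.0]))   # very close to [6, 8]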
• Allowable Convergence (%): Stop criterion. Tolerance to which the Karush-Kuhn-Tucker (KKT)
optimality criterion is to be satisfied during the NLPQL process. A smaller value indicates more
convergence iterations and a more accurate (but slower) solution. A larger value indicates
fewer convergence iterations and a less accurate (but faster) solution.
For a Direct Optimization system, the default percentage value is 0.1. For a Response Surface
Optimization system, the default percentage value is 0.0001. The maximum percentage value
is 100. These values are consistent across all problem types because the inputs, outputs, and
gradients are scaled during the NLPQL solution.
• Maximum Number of Iterations: Stop criterion. Maximum number of iterations that the al-
gorithm is to execute. If convergence happens before this number is reached, the iterations
stop. This also provides an idea of the maximum possible number of function evaluations that
are needed for the full cycle. For NLPQL, the number of evaluations can be approximated ac-
cording to the Finite Difference Approximation gradient calculation method, as follows:
• In the Outline pane, select Objectives and Constraints or an object under it.
• In the Table or Properties pane, define the optimization objectives. For more information, see
Defining Optimization Objectives and Constraints (p. 203).
Note:
For NLPQL, exactly one output parameter must have an objective defined. Multiple
parameter constraints are permitted.
• In the Outline pane, select Domain or an input parameter or parameter relationship under it.
• In the Table or Properties pane, define the selected domain object. For more information, see
Defining the Optimization Domain (p. 199).
The result is a group of points or sample set. The points that are most in alignment with the ob-
jective are displayed in the table as the candidate points for the optimization. In the Properties
pane, the result Size of Generated Sample Set is read-only. This value is always equal to 1 for
NLPQL.
6. In the Outline pane, select Optimization. Then, in the Properties pane under Optimization
Status, view the outcome:
• Number of Iterations: Number of iterations executed. Each iteration corresponds to one formu-
lation and solution of the quadratic programming subproblem, or alternatively, one evaluation
of gradients.
• Number of Evaluations: Number of design point evaluations performed. This value takes into
account all points used in the optimization, including design points pulled from the cache. It can be used
to measure the efficiency of the optimization method to find the optimum design point.
• Number of Failures: Number of failed design points for the optimization. When a design point
fails, NLPQL changes the direction of its search. It does not attempt to solve an additional design
point in its place and does not include it on the Direct Optimization Samples chart.
• Size of Generated Sample Set: Number of samples generated in the sample set. This is the
number of iteration points obtained by the optimization and should be equal to the number
of iterations.
• Number of Candidates: Number of candidates obtained. This value is limited by the Maximum
Number of Candidates input property.
7. In the Outline pane, select Domain or any node under it to review domain information in the
Properties and Table panes.
8. For a Direct Optimization system, in the Outline pane, select Raw Optimization Data. The Table
pane displays the design points that were calculated during the optimization. If the raw optimiz-
ation data point exists in the design points table for the Parameter Set bar, the corresponding
design point name is indicated in parentheses in the Name column.
Note:
This list is compiled of raw data and does not show feasibility, ratings, and so on for
the included design points.
• Verify Candidate Points: Select to verify candidate points automatically at the end of the
optimization update.
• Finite Difference Approximation: When analytical gradients are not available, MISQP approx-
imates them numerically. This property allows you to specify the method of approximating the
gradient of the objective function. Choices are:
– Central: Increases the accuracy of the gradient calculations by sampling from both sides of
the sample point but increases the number of design point evaluations by 50%. This method
makes use of the initial point, as well as the forward point and rear point. This is the default
method for preexisting databases and new Response Surface Optimization systems.
– Forward: Uses fewer design point evaluations but decreases the accuracy of the gradient
calculations. This method makes use of only two design points, the initial point and forward
point, to calculate the slope forward. This is the default method for new Direct Optimization
systems.
• Initial Finite Difference Delta (%): Advanced option for specifying the relative variation used
to perturb the current point to compute gradients. Used in conjunction with Allowable Con-
vergence (%) to ensure that the delta in MISQP's calculation of finite differences is large enough
to be seen above the noise in the simulation problem. This wider sampling produces results
that are more clearly differentiated so that the difference is less affected by solution noise and
the gradient direction is clearer. The value should be larger than both the value for Allowable
Convergence (%) and the noise magnitude of the model. Because smaller values produce
more accurate results, set Initial Finite Difference Delta (%) only as high as necessary to be
seen above simulation noise.
For a Direct Optimization system, the default percentage value is 1. For a Response Surface
Optimization system, the default percentage value is 0.001. The minimum is 0.0001, and the
maximum is 10.
For parameters with Allowed Values set to Manufacturable Values or Snap to Grid, the value
for Initial Finite Difference Delta (%) is ignored. In such cases, the closest allowed value is
used to determine the finite difference delta.
• Allowable Convergence (%): Stop criterion. Tolerance to which the Karush-Kuhn-Tucker (KKT)
optimality criterion is to be satisfied during the MISQP process. A smaller value indicates more
convergence iterations and a more accurate (but slower) solution. A larger value indicates
fewer convergence iterations and a less accurate (but faster) solution.
For a Direct Optimization system, the default percentage value is 0.1. For a Response Surface
Optimization system, the default percentage value is 0.0001. The maximum value is 100. These
values are consistent across all problem types because the inputs, outputs, and gradients are
scaled during the MISQP solution.
• Maximum Number of Iterations: Stop criterion. Maximum number of iterations that the al-
gorithm is to execute. If convergence happens before this number is reached, the iterations
stop. This also provides an idea of the maximum possible number of function evaluations that
are needed for the full cycle. For MISQP, the number of evaluations can be approximated ac-
cording to the Finite Difference Approximation gradient calculation method, as follows:
• In the Outline pane, select Objectives and Constraints or an object under it.
• In the Table or Properties pane, define the optimization objectives. For more information, see
Defining Optimization Objectives and Constraints (p. 203).
Note:
For MISQP, exactly one output parameter must have an objective defined, but multiple
parameter constraints are permitted.
• In the Table or Properties pane, define the optimization domain. For more information, see
Defining the Optimization Domain (p. 199).
The result is a group of points or sample set. The points that are most in alignment with the ob-
jective are displayed in the table as the candidate points for the optimization. In the Properties
pane, the Size of Generated Sample Set result is read-only. For MISQP, this value is always equal
to 1.
6. In the Outline pane, select Optimization. Then, in the Properties pane under Optimization
Status, view the outcome:
• Number of Iterations: Number of iterations executed. Each iteration corresponds to one formu-
lation and solution of the quadratic programming subproblem, or alternatively, one evaluation
of gradients.
• Number of Evaluations: Number of design point evaluations performed. This value takes into
account all points used in the optimization, including design points pulled from the cache. It can be used
to measure the efficiency of the optimization method to find the optimum design point.
• Number of Failures: Number of failed design points for the optimization. When a design point
fails, MISQP changes the direction of its search. It does not attempt to solve an additional design
point in its place and does not include it on the Direct Optimization Samples chart.
• Size of Generated Sample Set: Number of samples generated in the sample set. This is the
number of design points updated in the last iteration.
• Number of Candidates: Number of candidates obtained. This value is limited by the Maximum
Number of Candidates input property.
7. In the Outline pane, select Domain or any node under it to review domain information in the
Properties and Table panes.
8. For a Direct Optimization system, in the Outline pane, select Raw Optimization Data. The Table
pane displays the design points that were calculated during the optimization. If the raw optimiz-
ation data point exists in the design points table for the Parameter Set bar, the corresponding
design point name is indicated in parentheses in the Name column.
Note:
This list is compiled of raw data and does not show feasibility, ratings, and so on
for the included design points.
The Adaptive Single-Objective method is available for input parameters that are continuous, including
those with manufacturable values. It can handle only one output parameter goal, although other
output parameters can be defined as constraints. It does not support the use of parameter relationships
in the optimization domain. For more information, see Adaptive Single-Objective Optimization
(ASO) (p. 342).
Note:
Some properties for the Adaptive Single-Objective method are advanced options. To display
them, ensure that the Show Advanced Options check box is selected on the Design Exploration
tab of the Options window. Advanced options display in italic type.
• Number of Initial Samples: Number of samples generated for the initial Kriging and after all
domain reductions for the construction of the next Kriging.
You can enter a minimum of (NbInp+1)*(NbInp+2)/2 (also the minimum number of OSF
samples required for the Kriging construction) or a maximum of 10,000. The default is
(NbInp+1)*(NbInp+2)/2 for a Direct Optimization system. There is no default for a Re-
sponse Surface Optimization system.
Because of the Adaptive Single-Objective workflow (in which a new OSF sample set is generated
after each domain reduction), increasing the number of OSF samples does not necessarily improve
the quality of the results and significantly increases the number of evaluations.
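The minimum quoted above can be computed directly. This small Python sketch (illustrative only) evaluates it for a few input counts:

    def min_initial_samples(nb_inp):
        # Minimum number of OSF samples required to construct the Kriging model.
        return (nb_inp + 1) * (nb_inp + 2) // 2

    for nb_inp in (2, 5, 10):
        print(nb_inp, min_initial_samples(nb_inp))   # 2 -> 6, 5 -> 21, 10 -> 66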
• Random Generator Seed: Advanced option that displays only when Type of Initial Sampling
is set to Optimal Space-Filling. The value for initializing the random number generator invoked
internally by OSF. The value must be a positive integer. This property allows you to generate
different samplings by changing the value or to regenerate the same sampling by keeping the
same value. The default is 0.
• Maximum Number of Cycles: Number of optimization loops that the algorithm needs, which
in turn determines the discrepancy of the OSF. The optimization is essentially combinatorial,
so a large number of cycles slows down the process. However, this makes the discrepancy of
the OSF smaller. The value must be greater than 0. For practical purposes, 10 cycles is usually
good for up to 20 variables. The default is 10.
• Number of Screening Samples: Number of samples for the screening generation on the current
Kriging. This value is used to create the next Kriging (based on error prediction) and verified
candidates.
You can enter a minimum of (NbInp+1)*(NbInp+2)/2 (also the minimum number of OSF
samples required for the Kriging construction) or a maximum of 10,000. The default is
100*NbInp for a Direct Optimization system. There is no default for a Response Surface
Optimization system.
The larger the screening sample set, the better the chances of finding good verified points.
However, too many points can result in a divergence of the Kriging.
• Number of Starting Points: Determines the number of local optima to explore. The larger the
number of starting points, the more local optima explored. In the case of a linear surface, for
example, it is not necessary to use many points. This value must be less than the value for
Number of Screening Samples because the starting points are selected from the screening sample set. The default
is the value for Number of Initial Samples.
• Maximum Number of Domain Reductions: Stop criterion. Maximum number of domain reduc-
tions for input variation. (No information is known about the size of the reduction beforehand.)
The default is 20.
• Percentage of Domain Reductions: Stop criterion. Minimum size of the current domain relative
to the initial domain. For example, with one input ranging between 0 and 100, the domain
size is equal to 100. If the percentage of domain reduction is 1%, the current working domain
size cannot be less than 1 (such as an input ranging between 5 and 6). The default is 0.1.
• Convergence Tolerance: Stop criterion. Minimum allowable gap between the values of two
successive candidates. If the difference between two successive candidates is smaller than the
value for Convergence Tolerance multiplied by the maximum variation of the parameter, the
algorithm is stopped. A smaller value indicates more convergence iterations and a more accurate
(but slower) solution. A larger value indicates fewer convergence iterations and a less accurate
(but faster) solution. The default is 1E-06.
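The two stop criteria just described can be pictured with the following Python sketch (a generic illustration with invented values, not the ASO implementation): one check on the size of the reduced domain and one on the gap between successive candidates:

    def domain_too_small(current_size, initial_size, pct_of_domain_reduction=0.1):
        # Stop when the working domain shrinks below the allowed fraction of the
        # initial domain.
        return current_size < initial_size * pct_of_domain_reduction / 100.0

    def candidates_converged(previous_value, new_value, max_variation, tolerance=1e-6):
        # Stop when two successive candidates differ by less than the tolerance
        # scaled by the maximum variation of the parameter.
        return abs(new_value - previous_value) < tolerance * max_variation

    # Input ranging from 0 to 100 with a 1% limit: the domain may not drop below 1.
    print(domain_too_small(current_size=0.9, initial_size=100.0, pct_of_domain_reduction=1.0))   # True
    print(candidates_converged(10.000001, 10.0000005, max_variation=100.0))                      # True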
• Retained Domain per Iteration (%): Advanced option that allows you to specify the minimum
percentage of the domain you want to keep after a domain reduction. The percentage value
must be between 10 and 90. A larger value indicates less domain reduction, which implies
better exploration but a slower solution. A smaller value indicates a faster and more accurate
solution, with the risk of it being a local one. The default percentage value is 40.
• In the Outline pane, select Objectives and Constraints or an object under it.
• In the Table or Properties pane, define the optimization objectives. For more information,
see Defining Optimization Objectives and Constraints (p. 203).
Note:
• For the Adaptive Single-Objective method, exactly one output parameter must have
an objective defined.
• After you have defined an objective, a warning icon displays in the Message column
of the Outline pane if the recommended number of input parameters has been ex-
ceeded. For more information, see Number of Input Parameters for DOE Types (p. 88).
• In the Table or Properties pane, define the optimization domain. For more information, see
Defining the Optimization Domain (p. 199).
The result is a group of points or sample set. The points that are most in alignment with the ob-
jectives are displayed in the table as the candidate points for the optimization. In the Properties
pane, the Size of Generated Sample Set result is read-only.
6. In the Outline pane, select Optimization. Then, in the Properties pane under Optimization
Status, view the outcome:
• Number of Evaluations: Number of design point evaluations performed. This value takes into
account all points used in the optimization, including design points pulled from the cache. It
corresponds to the total of LHS points and verification points.
• Number of Failures: Number of failed design points for the optimization. When a design point
fails, the Adaptive Single-Objective method changes the direction of its search. It does not at-
tempt to solve an additional design point in its place and does not include it on the Direct
Optimization Samples chart.
• Size of Generated Sample Set: Number of samples generated in the sample set. This is the
number of unique design points that have been successfully updated.
• Number of Candidates: Number of candidates obtained. This value is limited by the Maximum
Number of Candidates input property.
7. For a Direct Optimization system, in the Outline pane, select Raw Optimization Data. The Table
displays the design points that were calculated during the optimization. If the raw optimization
data point exists in the design points table for the Parameter Set bar, the corresponding design
point name is indicated in parentheses in the Name column.
Note:
This list is compiled of raw data and does not show feasibility, ratings, and so on for
the included design points.
The Adaptive Multiple-Objective method is available only for continuous input parameters, including
those with manufacturable values. It can handle multiple objectives and multiple constraints. For
more information, see Adaptive Multiple-Objective Optimization (p. 353).
Note:
Some properties for the Adaptive Multiple-Objective method are advanced options. To display
them, ensure that the Show Advanced Options check box is selected on the Design Exploration
tab of the Options window. Advanced options display in italic type.
• Type of Initial Sampling: If you do not have any parameter relationships defined, set to
Screening (default) or Optimal Space-Filling. If you do have parameter relationships
defined, the initial sampling must be performed by the constrained sampling algorithms
(because parameter relationships constrain the sampling). In such cases, this property is
automatically set to Constrained Sampling.
• Random Generator Seed: Advanced option that displays only when Type of Initial
Sampling is set to Optimal Space-Filling. Value for initializing the random number gener-
ator invoked internally by the Optimal Space-Filling (OSF) algorithm. The value must be a
positive integer. This property allows you to generate different samplings by changing the
value or to regenerate the same sampling by keeping the same value. The default is 0.
• Maximum Number of Cycles: Determines the number of optimization loops the algorithm
needs, which in turn determines the discrepancy of the OSF. The optimization is essentially
combinatorial, so a large number of cycles slows down the process. However, this makes
the discrepancy of the OSF smaller. The value must be greater than 0. For practical purposes,
10 cycles is usually good for up to 20 variables. The default is 10.
• Number of Initial Samples: Initial number of samples to use. This number must be
greater than the number of enabled inputs. The minimum recommended number of initial
samples is 10 times the number of enabled input parameters. The larger the initial sample
set, the better your chances of finding the input parameter space that contains the best
solutions.
The number of enabled input parameters is also the minimum number of samples required
to generate the Sensitivities chart. You can enter a minimum of 2 and a maximum of 10000.
The default is 100.
If you are switching the method from Screening to MOGA, MOGA generates a new sample
set. For the sake of consistency, enter the same number of initial samples as used for the
Screening method.
If the Maximum Allowable Pareto Percentage is too low (below 30), the process can
converge prematurely, and if it is too high (above 80), it can converge slowly. The value
of this property depends on the number of parameters and the nature of the design space
itself. The default is 70. Using a value between 55 and 75 works best for most problems.
For more information, see Convergence Criteria in MOGA-Based Multi-Objective Optimiza-
tion (p. 347).
• Convergence Stability Percentage: Stop criterion that allows you to minimize the number of
iterations performed while still reaching the desired level of stability. When the specified
percentage is reached, the optimization is converged.
The default percentage value is 2. To not take the convergence stability into account, set
the percentage value to 0. For more information, see Convergence Criteria in MOGA-Based
Multi-Objective Optimization (p. 347).
• Maximum Number of Iterations: Stop criterion. Maximum number of iterations that the
algorithm is to execute. If this number is reached without the optimization having reached
convergence, iterations stop. This also provides an idea of the maximum possible number
of function evaluations that are needed for the full cycle, as well as the maximum possible
time it can take to run the optimization. For example, the absolute maximum number of
evaluations is given by: Number of Initial Samples + Number of Samples
Per Iteration * (Maximum Number of Iterations - 1).
• Crossover Probability: Advanced option for specifying the probability with which parent
solutions are recombined to generate offspring solutions. The value must be between 0
and 1. A smaller value indicates a more stable population and a faster (but less accurate)
solution. If the value is 0, the parents are copied directly to the new population. A high
probability of crossover (>0.9) is recommended. The default is 0.98.
• Type of Discrete Crossover: Advanced option for specifying the kind of crossover for
discrete parameters. This property is visible only if there is at least one discrete input variable
or continuous input variable with manufacturable values. Three crossover types are available:
One Point, Two Points, and Uniform. According to the type of crossover selected, the
children are closer to or farther from their parents. Children are closer for One Point and
farther for Uniform. The default is One Point. For more information on crossover, see
MOGA Steps to Generate a New Population (p. 350).
• In the Outline pane, select Objectives and Constraints or an object under it.
• In the Table or Properties pane, define the optimization objectives. For more information,
see Defining Optimization Objectives and Constraints (p. 203).
Note:
– After you have defined an objective, a warning icon displays in the Message
column of the Outline pane if the recommended number of input parameters
is exceeded. For more information, see Number of Input Parameters for DOE
Types (p. 88).
• In the Outline pane, select Domain or an input parameter or parameter relationship under
it.
• In the Table or Properties pane, define the selected domain object. For more information,
see Defining the Optimization Domain (p. 199).
The result is a group of points or sample set. The points that are most in alignment with the
objectives are displayed in the table as the candidate points for the optimization. In the
Properties pane, the result Size of Generated Sample Set is read-only.
6. In the Outline pane, select Optimization. Then, in the Properties pane under Optimization
Status, view the outcome:
• Number of Failures: Number of failed design points for the optimization. When a
design point fails, an Adaptive Multiple-Objective optimization does not retain this
point in the Pareto front to generate the next population, attempt to solve an addi-
tional design point in its place, or include it on the Direct Optimization Samples chart.
• Size of Generated Sample Set: Number of samples generated in the sample set. This
is the number of samples successfully updated for the last population generated by
the algorithm. It usually equals the Number of Samples Per Iteration.
7. In the Outline pane, select Domain or any object under it to view domain data in the
Properties and Table panes.
8. For a Direct Optimization system, select Raw Optimization Data in the Outline pane. The
Table pane displays the design points that were calculated during the optimization. If the
raw optimization data point exists in the design points table for the Parameter Set bar, the
corresponding design point name is indicated in parentheses in the Name column.
Note:
This list is compiled of raw data and does not show feasibility, ratings, and so on
for the included design points.
Once an optimization extension is installed and loaded to the project as described in Working with
DesignXplorer Extensions (p. 414), you are ready to start using its extended functionality.
DesignXplorer filters the Method Name list for applicability to the current project, displaying only
those optimization methods that you can use to solve the optimization problem as it is currently
defined. When no objectives or constraints are defined for a project, all optimization methods are
listed. If you already know that you want to use a particular external optimizer, you should select it
as the method before setting up the rest of the project. Otherwise, the optimization method could
be inadvertently filtered from the list.
DesignXplorer shows only the optimization functionality that is specifically defined in the extension.
Additionally, DesignXplorer filters objectives and constraints according to the optimization method
selected, making only those objects supported by the selected optimization method available for
selection. For example, if you have selected an optimizer that does not support the Maximize objective
type, Maximize is not included in the Objective Type list.
If you already have a specific problem you want to solve, you should set up the project before selecting
an optimization method for Method Name. Otherwise, the desired objectives and constraints could
be filtered from the lists.
When you select Domain or any object under it, the Table pane displays the input parameters and
parameter relationships that are defined and enabled for the optimization. It does not display disabled
domain objects.
• Select Domain and then edit the input parameter domain in the Table pane.
• Select an input parameter under Domain and then edit the input parameter domain in either the
Properties or Table pane.
For enabled input parameters in the Properties and Table panes, the following settings are available:
Lower Bound
Defines the lower bound for the optimization input parameter space. Increasing the lower bound
confines the optimization to a subset of the DOE domain. By default, the lower bound corresponds
to the following values defined in the DOE:
Upper Bound
Defines the upper bound for the input parameter space. By default, the upper bound corresponds
to the following values defined in the DOE:
Starting Value
Available only for NLPQL and MISQP. Specifies where the optimization starts for each input
parameter.
Because NLPQL and MISQP are gradient-based methods, the starting point in the parameter space
determines the candidates found. With a poor starting point, NLPQL and MISQP might find a
local optimum, which is not necessarily the same as the global optimum. This setting gives
you more control over your optimization results by allowing you to specify exactly where in the
parameter space the optimization should begin.
• Must fall within the domain constrained by the enabled parameter relationships. For more
information, see Defining Parameter Relationships (p. 200).
For each disabled input parameter, specify the desired value to use in the optimization. By default,
the value is copied from the design point that was current when the optimization system was created.
Note:
When the optimization is refreshed, disabled input values persist. However, they
are not updated to the current design point values.
Defining Parameter Relationships
Parameter relationships allow you to define constraints that involve multiple input parameters,
with the values remaining physically bounded and reflecting the constraints on the optimization
problem.
Note:
To specify parameter relationships for outputs, you can create derived parameters. You
create derived parameters for an analysis system by providing expressions. The derived
parameters are then passed to DesignXplorer as outputs. For more information, see Creating
or Editing Parameters in the Workbench User's Guide.
A parameter relationship has one operator and two expressions. For example, you can define two
parameter relationships, each involving input parameters P1 and P2, such as P1 <= P2.
You can create, edit, enable and disable, and view parameter relationships.
• In the Outline pane, right-click Parameter Relationships and select Insert Parameter Relationship.
• In the Table pane under Parameter Relationships, enter parameter relationship data in the bottom
row.
• In the Outline pane, select Domain or Parameter Relationships under it. Then, edit the parameter
relationship in the Table pane.
• In the Outline pane under Parameter Relationships, select a parameter relationship. Then, edit
the parameter relationship in the Properties or Table pane.
The following table indicates the editing tasks that the Properties, Table, and Outline panes allow
you to perform.
Name
Editable in the Properties and Table panes. Each parameter relationship is given a default name
such as Parameter or Parameter 2, based on the order in which the parameter relationship
was created. When you define both the left and right expressions for the parameter relationship,
the default name is replaced by the relationship. For example, the default name can become P1<=P2. The name
is updated accordingly when either of the expressions is modified.
The Name property allows you to edit the name of the parameter relationship. Once you edit
this property, the name persists. To resume the automated naming system, you must delete the
custom name, leaving the property empty.
Editable in the Properties and Table panes. Allows you to define the parameter relationship ex-
pressions.
Viewable in the Properties pane. Shows the quantity type for the expressions in the parameter
relationship.
Operator
Editable in the Properties and Table panes. Allows you to select the expression operator from a
drop-down menu. Available values are <= and >=.
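Conceptually, each parameter relationship is just a left expression, one of these operators, and a right expression. The following Python sketch (purely illustrative; the relationships and values are invented, and DesignXplorer does not evaluate relationships this way) checks two such relationships for a trial design point:

    # Each relationship: (left expression, operator, right expression).
    relationships = [
        ("P1", "<=", "P2"),
        ("P1 + P2", "<=", "10"),
    ]

    def satisfied(relationship, values):
        left, op, right = relationship
        # eval is acceptable here only because the expressions are simple,
        # trusted arithmetic strings used for illustration.
        lhs = eval(left, {}, dict(values))
        rhs = eval(right, {}, dict(values))
        return lhs <= rhs if op == "<=" else lhs >= rhs

    design_point = {"P1": 3.0, "P2": 5.0}
    print([satisfied(r, design_point) for r in relationships])   # [True, True]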
When the evaluation is complete, the value for each expression is displayed:
• Under Domain in the Table pane. To view expression values for a parameter relationship, click the
plus icon next to the name. The values display below the corresponding expression.
• Under Candidate Points in the Table pane. In the Properties pane, select the Show Parameter
Relationships check box. Parameter relationships that are defined and enabled, along with their
expressions and current expression values for NLPQL and MISQP, are shown in the candidate points
table in the Table pane. For more information, see Viewing and Editing Candidate Points in the
Table Pane (p. 211).
If the evaluation fails, the Outline and Properties panes display an error message. Review the error
to identify problems with the corresponding parameter relationship.
Note:
• The evaluation can fail because the selected optimization method does not support
parameter relationships or because the optimization includes one or more invalid
parameter relationships. Parameter relationships can be invalid if they contain
quantities that are not comparable, parameters for which the values are unknown,
or expressions that are incorrect.
Additional optimization properties can be set in the Properties pane. The optimization approach used
in the design exploration environment departs in many ways from traditional optimization techniques,
giving you added flexibility in obtaining the desired design configuration.
Note:
If you are using an external optimizer, DesignXplorer filters objectives and constraints accord-
ing to the optimization method selected, displaying only the types that the method supports.
For example, if you have selected an optimizer that does not support the Maximize objective
type, DesignXplorer does not display Maximize as a choice.
In both the Table and Properties panes, the following optimization options are available:
After you select a parameter in the empty row of the table, DesignXplorer automatically assigns the
name of the objective or constraint based on the properties that you then define for it. If you later
change these properties, DesignXplorer automatically changes the name.
For example, assume that you have selected parameter P1 in the empty row and set the objective
type to Minimize. DesignXplorer assigns Minimize P1 as the name. If you change the objective
type to Maximize, DesignXplorer changes the name to Maximize P1. If you then add a constraint
type of Values >= Bound and set Lower Bound to 3, DesignXplorer changes the name to Maximize
P1; P1 >= 3.
In either the Table pane or Outline pane, you can manually change the name of an objective or
constraint. Any custom name that you assign is persisted, which means that DesignXplorer no longer
changes the name if you change the properties. To restore automated naming, delete the
custom name, leaving the option empty. When you click elsewhere, DesignXplorer restores
the automated name.
Available options depend on the type of parameter and whether it is an input or output.
Parameter
In the last row in the Table pane, this option allows you to select the input or output parameter
for which to add an objective or constraint. In the newly inserted row, you then define properties
for this parameter. For existing rows, this option is display-only. However, you can delete rows.
Objective Type
Available for continuous input parameters without manufacturable values and output parameters.
Allows you to define an objective by setting the objective type. See the following tables for available
objective types.
Objective Initial
Visible only when tolerance settings are enabled and the Solution Process Update property for
the Parameter Set bar is set to Submit to Design Point Service (DPS). This property does not affect
DesignXplorer but rather is included in the fitness terms that DesignXplorer sends when the update
of an optimization study is sent to DPS. For more information, see Initial Values for Objectives (p. 209).
Objective Target
For a parameter with an objective, allows you to set the best estimated goal value that the optim-
ization method can achieve for the objective. This value is not a stopping criterion. If the optimization
method can find a better value than the target value, it will do so. On the history chart and sparkline,
the target value is shown by a dashed line.
Objective Tolerance
Visible when tolerance settings are enabled. For a parameter with a Seek Target objective type,
allows you to set the level of accuracy for reaching the target value. The tolerance value is not a
strict constraint to satisfy but rather a goal to reach. The tolerance value must be positive. For more
information, see Tolerance Settings (p. 207).
Constraint Type
Available for continuous input parameters with manufacturable values, discrete input parameters,
and output parameters. Allows you to define a constraint by setting the constraint type. See the
following tables for available constraint types.
Depending on the constraint type, allows you to set one or more values for limiting the target value
for the constraint.
• For a discrete input parameter or a continuous input parameter with manufacturable variables,
set the lower or upper limit for the input value.
• For an output parameter, set the range for limiting the target value for the constraint.
Constraint Tolerance
Allows you to set a feasibility tolerance. The optimization method considers any point with a con-
straint violation less than the tolerance value as a feasible point. The tolerance value must be pos-
itive.
Constraints for Discrete Input Parameters or Continuous Input Parameters with Manufacturable
Values
Tolerance Settings
DesignXplorer optimization methods can use tolerance settings to improve convergence and the relev-
ance of results. The Decision Support Process can also use tolerance settings to sort candidate points
and define their ratings values.
Note:
In the Workbench Options window under Design Exploration → Sampling and Optim-
ization, the Tolerance Settings check box is selected by default. This preference value
is used to initialize the Tolerance Settings check box in the properties for the Optim-
ization cell when an optimization system is newly inserted in the Project Schematic.
When this check box is selected, tolerance values can be entered for objectives of the
Seek Target type and for constraints. Additionally, if the Solution Process Update
property for the Parameter Set bar is set to Submit to Design Point Service (DPS),
the Initial option is shown for objectives. For more information, see Sending an Optim-
ization Study to DPS (p. 209).
For a given design point P, the fitness function can be written as:
F(P) = Σ_i w_i * r_i(P)
where r_i(P) represents the rating for an objective or a constraint and w_i corresponds to its relative
weight.
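Assuming the fitness is such a weighted sum of per-objective and per-constraint ratings (the exact expression used internally is not reproduced in this extract), a minimal Python sketch of the evaluation could look like this:

    def fitness(ratings, weights):
        # Weighted sum of the ratings of each objective and constraint; larger
        # weights correspond to a higher objective or constraint importance.
        return sum(w * r for r, w in zip(ratings, weights))

    # Two objectives rated 0.8 and 0.6 plus one satisfied constraint rated 1.0,
    # with the constraint weighted more heavily than the objectives.
    print(fitness([0.8, 0.6, 1.0], [1.0, 1.0, 2.0]))   # 3.4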
Note:
While tolerance settings are available for external optimizers in DesignXplorer, they are not
available for external optimizers in the DesignXplorer API.
Postprocessing Properties
Under Decision Support Process in the Properties pane, the following postprocessing properties for
objectives and constraints are available:
Objective Importance
For a parameter with an objective, allows you to select the relative importance of this parameter
compared to other objectives. Choices are Default, Higher, and Lower.
Constraint Importance
For a parameter with a constraint defined, allows you to select the relative importance of this
parameter compared to other constraints. Choices are Default, Higher, and Lower.
Constraint Handling
For a parameter with a constraint defined, allows you to specify the handling of the constraint
for this parameter. This option can be used for any optimization application and is best thought
of as a constraint satisfaction filter on samples generated from optimization runs. It is especially
useful for screening samples to detect the edges of solution feasibility for highly constrained
nonlinear optimization problems. Choices are:
• Relaxed: Samples are generated in the full parameter space, with the constraint only
being used to identify the best candidates. When constraint handling is relaxed, the
upper, lower, and equality constraints of the candidate points are treated as objectives.
Therefore, any violation of the constraint is still considered feasible.
• Strict: Samples are generated in the reduced parameter space defined by the constraint.
When constraint handling is strict (default), the upper, lower, and equality constraints
are treated as hard constraints. If any of these constraints is violated, the candidate point
is no longer shown.
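As a schematic of the difference (not DesignXplorer's internal logic; the candidate values, bound, and tolerance are invented), strict handling filters out candidates whose constraint violation exceeds the constraint tolerance, while relaxed handling keeps every candidate and uses the violation only to rank them:

    def violation(value, upper):
        # Amount by which a candidate's output exceeds its upper bound (0 if satisfied).
        return max(0.0, value - upper)

    candidates = {"cand1": 9.8, "cand2": 10.4, "cand3": 12.0}   # constraint: output <= 10
    upper_bound = 10.0
    tolerance = 0.5

    strict = {name: v for name, v in candidates.items()
              if violation(v, upper_bound) <= tolerance}                     # cand3 is dropped
    relaxed = sorted(candidates, key=lambda n: violation(candidates[n], upper_bound))

    print(strict)    # {'cand1': 9.8, 'cand2': 10.4}
    print(relaxed)   # ['cand1', 'cand2', 'cand3']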
• Screening requires that an objective or a constraint is defined for at least one parameter. Multiple
output objectives are allowed.
• MOGA and Adaptive Multiple-Objective require that an objective is defined for at least one
parameter. Multiple output objectives are allowed.
• NLPQL, MISQP, and Adaptive Single-Objective require that an objective is defined for one para-
meter. Only a single output objective is allowed.
– If the selected optimization method requires a starting value, the initial value is synchron-
ized with the starting value.
– If the selected optimization method does not require a starting value, the initial value is
set based on values for the project’s current design point. The initial value must be con-
sistent with the input parameter type (continuous or discrete). For continuous input
parameters, the initial value must also be consisted with the setting for the Allowed
Values property (Any, Manufacturable Values, or Snap to Grid), meaning it must be
set to the closest discrete level, closest manufacturable value, or closest point on the grid.
• For an output parameter, if the design point is up-to-date or has already been updated,
this property is initialized from the values of the project's current design point. Otherwise,
if the design point has never been updated, this property is blank and is highlighted in yellow
to indicate that a value is required.
Note:
• The type, target, and initial values for all objectives must be consistent. If these values
are either not set or invalid, DesignXplorer cannot send the objectives and constraints,
which means DPS cannot start the update.
• For Workbench projects created in 2019 R3 or earlier, no changes are required for an
up-to-date Optimization cell in which tolerance settings are enabled. However, if any
changes are made to the optimization definition, initial values must be provided before
an update can be submitted to DPS successfully.
Fitness Terms
When an update is submitted to DPS successfully, DesignXplorer automatically sends the objectives
and constraints for the optimization study to the DPS project as fitness terms.
If the Optimization cell is already up-to-date, you can manually send objectives and constraints for the
study to the DPS project. In the Outline pane of the Optimization cell, right-click Optimization and
select Send Study to DPS Project. The Messages pane indicates if the study has been sent successfully.
In the DPS web app, the DPS project's Configuration tab displays one or more configurations. After
opening the appropriate configuration, you can see any fitness terms that were sent from DesignXplorer
in the Fitness Values area on the Parameter Definitions tab. In this area, you can also add
fitness terms directly in the DPS project.
In the breadcrumb trail at the top of the window, you can click the project name to go to the project's
Design Points tab, where you can add calculated fitness values to the table and then sort design points
by these values to find the best candidates. For more information, see Defining Fitness in the DCS for
Design Points Guide.
Note:
Changes to fitness terms do not invalidate evaluated DPS design points but rather trigger
the re-evaluation of fitness values for all DPS design points.
• When you select Optimization, the Table pane displays a summary of candidate data.
• When you select Candidate Points, the Table pane displays existing candidate points and allows
you to add new custom candidate points. The Chart pane displays results graphically. For more in-
formation, see Using Candidate Point Results (p. 227).
Once candidate points are created, you can verify them and also have the option of inserting them into
the response surface as other types of points:
Viewing and Editing Candidate Points in the Table Pane
Retrieving Intermediate Candidate Points
Inserting Candidate Points as New Design, Response, Refinement, or Verification Points
Verifying Candidates by Design Point Update
The maximum number of candidate points that can be generated is determined by Maximum
Number of Candidates. The recommended maximum number of candidate points depends on the
optimization method selected for use. For example, only one candidate is generally needed for
gradient-based, single-objective methods (NLPQL and MISQP). For multiple-objective methods, you
can request as many candidates as you want. For each Pareto front that is generated, there are sev-
eral potential candidates.
Note:
Because the number of candidate points does not affect the optimization, you can
experiment by changing the value for Maximum Number of Candidates and then
updating the optimization. Providing that only this property changes, the update
performs only postprocessing operations, which means candidates are rapidly gener-
ated.
The Table pane displays each candidate point, along with its input and output values. Output para-
meter values calculated from design point updates display in black text. Output parameter values
calculated from a response surface are displayed in the custom color defined in Tools → Options →
Design Exploration → Response Surface. For more information, see Response Surface Options (p. 38).
The number of gold stars or red crosses displayed next to each goal-driven parameter indicates how
well the parameter meets the stated goal. Parameters with three gold stars are the best, and parameters
with three red crosses are the worst.
For each parameter with a goal defined, the optimization also calculates the percentage of variation
for all parameters with regard to an initial reference point. By default, the initial reference point for
NLPQL or MISQP is the Starting Point defined in the optimization properties. For Screening or MOGA,
the initial reference point is the most viable candidate, Candidate 1. You can set any candidate point
as the initial reference point by selecting it in the Reference column. The Parameter Value column
displays the parameter value and the stars or crosses indicating the quality of the candidate. In the
Variation from Reference column, green text indicates variation in the expected direction. Red text
indicates variation that is not in the expected direction. When there is no obvious direction (as for a
constraint), the percentage value displays in black text.
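The percentage of variation can be thought of as a signed relative difference against the reference candidate, as in this small Python sketch (the values are invented for illustration):

    def variation_from_reference(value, reference):
        # Signed percentage change of a candidate's parameter value relative to
        # the value of the chosen reference point.
        return (value - reference) / reference * 100.0

    reference_mass = 12.5    # Candidate 1 (the reference)
    candidate_mass = 11.8    # Candidate 2
    print(f"{variation_from_reference(candidate_mass, reference_mass):.1f}%")   # -5.6%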
The Name property for each candidate point indicates whether it corresponds to a design point in
the Table pane for the Parameter Set bar. A candidate point corresponds to a design point when
they both have the same input parameter values. If the design point is deleted from the Parameter
Set bar or the definition of either point is changed, the link between the two points is broken, without
invalidating your model or results. Additionally, the indicator is removed from the candidate point's
name.
If parameter relationships are defined and enabled, you can opt to also view parameter relationships
in the candidate points table. In the Properties pane for the Optimization cell, select the Show
Parameter Relationships check box. In the Table pane, the candidate points table then displays
parameter relationships and their expressions. For NLPQL and MISQP, the candidate points table also
displays current values for each expression.
You can create custom candidate points in any of the following ways:
• In the Outline pane under Results, select Candidate Points. In the Table pane, enter data into
the cells of the bottom table row. For a Response Surface Optimization system, you can also
right-click a candidate point row in the Table pane and select Insert as Custom Candidate Point.
• In the Outline pane, select Optimization. For a Response Surface Optimization system, you can
also right-click a candidate point in the Table pane and select Insert as Custom Candidate Point.
• In the Outline pane under Results, select any chart. Right-click a point in the chart and select Insert
as Custom Candidate Point.
When a custom candidate point is created in a Response Surface Optimization system, the outputs
of custom candidates are automatically evaluated from the response surface. When a custom candidate
is created in a Direct Optimization system, the outputs of the custom candidates are not brought
up-to-date until the next real solve.
Once a candidate point is created, it is automatically plotted in the results in the Chart pane and can
be treated as any other candidate point. You have the ability to edit the name, edit input parameter
values, and select options from the right-click context menu. In addition to the standard context
menu options, an Update Custom Candidate Point option is available for out-of-date candidates in
a Direct Optimization system. Additionally, a Delete option allows you to delete a custom candidate
point.
Retrieving Intermediate Candidate Points
If acceptable candidate points have been found midway through the optimization, you can stop the
optimization and retrieve these results without having to run the rest of the optimization.
To stop the optimization, click Show Progress in the lower right corner of the window to open the
Progress pane. To the right of the progress bar, click the red stop button. In the dialog box that
opens, select either Interrupt or Abort. Intermediate results are available in either case.
When the optimization is stopped, candidate points are generated from the data available at this
time, such as solved samples, results of the current iteration, the current populations, and so on.
To assess the state of the optimization at the time it was stopped, under Optimization in the Prop-
erties pane, look at the optimization status and counts.
To view the intermediate candidate points, under Results, select Candidate Points.
Note:
DesignXplorer might not be able to return verified candidate points for optimizations
that have been stopped.
When an optimization is stopped midway, the Optimization cell remains in an unsolved state. If you
change any settings before updating the optimization again, the optimization process must start
over. However, if you do not change any settings before the next update, DesignXplorer makes use
of the design point cache to quickly return to the current iteration.
For more information, see Using the History Chart (p. 218).
When either Optimization or Candidate Points is selected in the Outline pane, you can select one
or more candidate points in the Table pane and then right-click one of them to select an option for
inserting them as new points. You can also right-click a point in an optimization chart. The options
available on the context menu depend on the type of optimization. Possible options are:
• Explore Response Surface at Point. Inserts new response points in the Table pane for the Response
Surface cell by copying the input and output parameter values of the selected candidate points.
• Insert as Design Point. Inserts new design points in the Table pane for the Parameter Set bar
by copying the input parameter values of the selected candidate points. The output parameter
values are not copied because they are approximated values provided by the response surface.
• Insert as Refinement Point. Inserts new refinement points in the Table pane for the Response
Surface cell by copying the input parameter values of the selected candidate points.
• Insert as Verification Point. Inserts new verification points in the Table pane for the Response
Surface cell by copying the input and output parameter values of the selected candidate points.
• Insert as Custom Candidate Point. Inserts new custom candidate points in the candidate points
table by copying the input parameter values of the selected candidate points.
For a Response Surface Optimization system, the same insertion operations are available for the
raw optimization data table and most optimization charts, depending on the context. For instance,
it is possible to right-click a point in a Tradeoff chart to insert the corresponding sample as a response
point, refinement point, or design point. The same operations are also available from a Samples chart.
Note:
For a Direct Optimization system, only the Insert as Design Point and Insert as Custom
Candidate Point options are available.
With either Optimization or Candidate Points selected in the Outline pane, select one or more
candidate points in the Table pane and then right-click one of them and select Verify by Design
Points Update. This context menu option is available for both optimization-generated candidate
points and custom candidate points.
DesignXplorer verifies candidate points by creating and updating design points with a real solve, using
the input parameter values of the candidate points. The output parameter values for each candidate
point are displayed in a separate row. For a Response Surface Optimization system, verified candidates
are placed next to the row containing the output values generated by the response surface. The se-
quence varies according to sort order. Output parameter values calculated from design point updates
are displayed in black text. Output parameter values calculated from a response surface are displayed
in the custom color defined in Tools → Options → Design Exploration → Response Surface. For
more information, see Response Surface Options (p. 38).
In a Response Surface Optimization system, if a large difference exists between the results of the
verified and unverified rows for a point, the response surface might not be accurate enough in that
area. In such cases, refinement or other adjustments might be necessary. If desired, you can insert
the candidate point as a refinement point. You then recompute the optimization so that the refinement
point and new response surface are taken into account.
Note:
• Often candidate points do not have practical input parameters. For example, ideal
thickness could be 0.127 instead of the more practical 0.125. If desired, you can right-
click the candidate and select Insert as Design Point, edit the parameters of the design
point, and then run this design point instead of the candidate point.
To solve the verification points, DesignXplorer uses the same mechanism that is used to solve DOE
points. The verification points are either deleted or persisted after the run as determined by the
Preserve Design Points after a DX Run option for DesignXplorer. As usual, if the update of the
verification point fails, it is preserved automatically in the project. You can explore it as a design point
by editing the Parameter Set bar in the Project Schematic.
• The History chart is available for the following objects in the Outline pane: objectives and constraints
(under Objectives and Constraints) and input parameters and parameter relationships (under Domain).
With the exception of the Candidate Points chart, the Convergence Criteria chart, and the History chart,
it is possible to duplicate charts. Right-click the chart in the Outline pane and select Duplicate. Or, use
a drag-and-drop operation, which attempts to update the chart so that duplicating an up-to-date chart
creates another up-to-date chart.
Note:
• The Convergence Criteria chart is not available for the Screening method.
• When an external optimizer is used, the Convergence Criteria chart is generated if data
is available.
When Optimization or Convergence Criteria is selected in the Outline pane, the Chart pane displays
the Convergence Criteria chart. The rendering and logic of the chart varies according to whether you
are using a multiple-objective or single-objective optimization method.
Because the chart is updated after each iteration, you can use it to monitor the progress of the op-
timization. When the convergence criteria have been met, the optimization stops and the chart remains
available.
Note:
If you are using an external optimizer that supports multiple objectives, the Convergence
Criteria chart displays the data that is available.
Before running your optimization, you specify values for the convergence criteria. After selecting
Optimization in the Outline pane, edit the values under Optimization in the Properties pane.
You can enable or disable the convergence criteria that display on the Convergence Criteria chart.
Select Convergence Criteria in the Outline pane. Then, in the Properties pane under Criteria, select
or clear the Enabled check box for a criterion to enable or disable it.
Before running your optimization, you specify values for the convergence criteria relevant to your
selected optimization method. Although these criteria are not explicitly shown on the chart, they
affect the optimization and the selection of the best candidate.
To specify convergence criteria values, select Optimization in the Outline pane. Then, edit the
values under Optimization in the Properties pane.
• Red points representing the best candidates that are feasible points
Additionally, it gives you the option of monitoring the progress of the selected object while the op-
timization is still in progress. If you select an object during an update, the chart refreshes automatically
and shows the evolution of the objective, constraint, input parameter, or parameter relationship
throughout the update.
For the iterative optimization methods, the chart is refreshed after each iteration. For the Screening
method, it is updated only when the optimization update is complete. You can select a different object
at any time during the update to plot and view a different chart.
The History charts remain available when the update is completed. In the Outline pane, a sparkline
version of the History chart is displayed for each objective, constraint, input parameter, or parameter
relationship.
If the History chart indicates that the optimization has converged midway through the process, you
can stop the optimization and retrieve the results without having to run the rest of the optimization.
For more information, see Retrieving Intermediate Candidate Points (p. 212).
Note:
You can access the History chart by selecting an objective or a constraint under Objectives and
Constraints in the Outline pane. Or, select an input parameter or parameter relationship under
Domain.
The History chart displays the following:
• Number of points in the sample set (as defined by the Size of Generated Sample Set
property) along the X axis
• Objective values, which fall within the optimization domain defined for the associated
parameter, along the Y axis
You can place the mouse cursor over any data point in the chart to view the X and Y coordinates.
Screening
For a Screening optimization, which is non-iterative, the History chart displays all the points of the
sample set. The chart is updated when all points have been evaluated. The plot reflects the non-
iterative process, with each point visible on the chart.
MOGA
For a MOGA optimization, the History chart displays the evolution of the population of points
throughout the iterations in the optimization. The chart is updated at the end of each iteration
with the most recent population (as defined by the Number of Samples per Iteration property).
NLPQL and MISQP
For an NLPQL or MISQP optimization, the History chart enables you to trace the progress of the
optimization from a defined starting point. The chart displays the objective value associated with
the point used for each iteration. The chart does not display the points used to evaluate the deriv-
ative values. It reflects the gradient optimization process, displaying a point for each iteration.
Adaptive Single-Objective
For an Adaptive Single-Objective optimization, the History chart enables you to trace the progress
of the optimization through a specified maximum number of evaluations. On the Input Parameter
History chart, the upper and lower bounds of the input parameter are represented by blue lines,
allowing you to see the domain reductions narrowing toward convergence.
The chart displays the objective value corresponding to LHS or verification points, showing all
evaluated points.
Adaptive Multiple-Objective
For an Adaptive Multiple-Objective optimization, the History chart displays the evolution of the
population of points throughout the iterations in the optimization. Each set of points (the number
of which is defined by the Number of Samples Per Iteration property) corresponds to the popu-
lation used to generate the next population. Points corresponding to a real solve are plotted as black
points. Points from the response surface are plotted with a square colored as specified in Tools →
Options → Design Exploration → Response Surface. For more information, see Response Surface
Options (p. 38).
The plot reflects the iterative optimization process, with each iteration visible on the chart. All
candidate points generated by the optimization are real design points.
The History chart sparkline is similar to the History chart in the Chart pane:
• Sparklines are gray if no constraints are present. However, if constraints are present:
– Sparklines are red when the constraint or parameter relationship is not met. When
parameter relationships are enabled and taken into account, the optimization should
not pick infeasible points.
– Sparklines are green when the constraint or parameter relationship is met.
• In the Outline pane, the sparkline for Minimize P9; P9 <= 14000 N is entirely green, indic-
ating that the constraints are met throughout the optimization history.
• In the Outline pane, the sparkline for Maximize P7; P7 >= 13000 is both red and green,
indicating that the constraints are violated at some points and met at others.
• In the Charts pane, the History chart is shown for the constraint Maximize P7; P7 >= 13000.
The points beneath the dotted gray line for the lower bound are infeasible points.
The History chart for an objective or constraint displays a red line to represent the evolution of the
parameter for which an objective or constraint has been defined. Constraints are represented by
gray dashed lines. The target value is represented by a blue dashed line.
In the following History chart for a MOGA optimization, the output parameter P9 – WB_BUCK is
plotted. The parameter is constrained such that it must have a value less than or equal to 1100.
The dotted gray line represents the constraint.
Given that Constraint Type is set to Maximize and Upper Bound is set to 1100, the area under
the dotted gray line represents the infeasible domain.
For an input parameter, the History chart displays a red line to represent the evolution of the
parameter for which the objective has been defined. If an objective or constraint is defined for the
parameter, the same chart displays when the objective, constraint, or input parameter is selected.
In the following History chart for an NLPQL optimization, the input parameter P3 – WB_L is plotted.
For P3 – WB_L, Starting Value is set to 100, Lower Bound is set to 90, and Upper Bound is set
to 110. The optimization converged upward to the upper bound for the parameter.
For a parameter relationship, the History chart displays two lines to represent the evolution of the
left expression and right expression of the relationship. The point number is plotted along the X axis,
and the expression values are plotted along the Y axis.
In the following History chart for a Screening optimization, the parameter relationship P2 > P1 is
plotted.
To generate candidate points results, update the Optimization cell. Then, in the Outline pane under
Results, select Candidate Points.
• Green lines represent the candidate points generated by the optimization.
If you set Coloring Method to by Source Type, the samples are colored according to the source
from which they were calculated, following the color convention used for data in the Table pane.
Samples calculated from a simulation are represented by black lines. Samples calculated from a
response surface are represented by a line in the custom color specified in Tools → Options →
Design Exploration → Response Surface. For more information, see Response Surface Op-
tions (p. 38).
When you move your mouse over the results, you can pick out individual objects, which become
highlighted in orange. When you select a point, the parameter values for the point are displayed
in the Value column of the Properties pane.
Across the bottom of the results, a vertical line displays for each parameter. When you mouse over
a vertical line, two handles appear at the top and bottom of the line. Drag the handles up or down
to narrow the focus to the parameter ranges that interest you.
When you select a point on the results, the right-click context menu provides options for exporting
data and saving candidate points as design, refinement, verification, or custom candidate points.
Table
Determines the properties of the results displayed in the Table pane. For the Show Parameter
Relationships property, select the Value check box to display parameter relationships in the Table
pane for the results.
Chart
Determines the properties of the results displayed in the Chart pane. Select the Value check box
to enable the property.
• Display Parameter Full Name: Select to display the full parameter name rather than the short
parameter name.
• Show Starting Point: Select to show the starting point in the results (NLPQL and MISQP only).
• Show Verified Candidates: Select to show verified candidates in the results. This option is
available for a Response Surface Optimization system only. Candidate verification is not necessary
for a Direct Optimization system because the points result from a real solve, rather than an es-
timation.
• Coloring Method: Select whether the results should be colored by candidate type or source
type:
– by Candidate Type: Different colors are used for different types of candidate points. This
is the default value.
– by Source Type: Output parameter values calculated from simulations are displayed in
black. Output parameter values calculated from a response surface are displayed in the
custom color selected on the Response Surface tab in the Options window. For more
information, see Response Surface Options (p. 38).
Input Parameters
Each of the input parameters is listed in this section. In the Enabled column, you can select or clear
check boxes to enable or disable input parameters. Only enabled input parameters are shown on
the chart.
Output Parameters
Each of the output parameters is listed in this section. In the Enabled column, you can select or
clear check boxes to enable or disable output parameters. Only enabled output parameters are
shown on the chart.
You can change various generic chart properties for this chart.
Note:
• The Sensitivities chart is available only for the Screening and MOGA optimization
methods.
• If the p-Value calculated for a particular input parameter is above the Significance Level
specified in Tools → Options → Design Exploration, the bar for that parameter is shown
as a flat line on the chart. For more information, see Viewing Significance and Correlation
Values (p. 70).
You can change the properties for the chart in the Properties pane.
• You can select which parameter to display on each axis of the chart by selecting the parameter
from the list next to the axis name.
• You can limit the Pareto fronts shown by moving the slider or entering a value in the field above
the slider.
You can change various generic chart properties for this chart.
When an optimization is updated, you can view the best candidates (up to the requested number)
from the sample set based on the stated objectives. However, these results are not truly representative
of the solution set, as this approach obtains results by ranking the solutions by an aggregated weighted
method. Schematically, this represents only a section of the available Pareto fronts. To display different
sections, you change the weights for the Objective Importance or Constraint Importance property
in the Properties pane. This postprocessing step helps in selecting solutions if you are sure of your
preferences for each parameter.
The following figure shows the results of the tradeoff study performed on a MOGA sample set. The
first Pareto front (non-dominated solutions) is represented by blue points on the output-axis plot.
You can move the slider in the Properties pane to the right to add more fronts, effectively adding
more points to the Tradeoff chart. Additional points added in this way are inferior to the points in
the first Pareto front in terms of the objectives or constraints that you specified. However, in some
cases where there are not enough first Pareto front points, these additional points can be necessary
to obtain the final design. You can right-click individual points and save them as design points or
response points.
In 2D and 3D Tradeoff charts, MOGA always ensures that feasible points are shown as being of better
quality than the infeasible points. It uses different markers to indicate them in the chart. Colored
rectangles represent feasible points. Gray circles represent infeasible points. Infeasible points are
available if any of the objectives are defined as constraints. You can enable or disable the display of
infeasible points in the Properties pane.
Also, in both 2D and 3D Tradeoff charts, the best Pareto front is blue. The fronts gradually transition
to red for the worst Pareto front. The following figure is a typical 2D Tradeoff chart with feasible and
infeasible points.
For more information on Pareto fronts and Pareto-dominant solutions, see GDO Principles (p. 325).
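To make the idea of the first Pareto front concrete, the following Python sketch extracts the non-dominated samples from a set of objective values. It illustrates the concept only, assuming all objectives are to be minimized; it is not the MOGA implementation.

# A minimal sketch of extracting the first Pareto front (non-dominated samples).
def first_pareto_front(samples):
    """samples: list of tuples of objective values (all minimized)."""
    def dominates(a, b):
        # a dominates b if it is no worse in every objective and better in at least one.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    return [s for s in samples
            if not any(dominates(other, s) for other in samples if other is not s)]

# Example with two objectives to minimize.
points = [(1.0, 9.0), (2.0, 7.0), (3.0, 8.0), (4.0, 3.0), (5.0, 4.0)]
print(first_pareto_front(points))   # [(1.0, 9.0), (2.0, 7.0), (4.0, 3.0)]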
This chart provides a multidimensional representation of the parameter space that you are studying.
It uses the parallel Y axes to represent all of the inputs and outputs. Each sample is displayed as a
group of lines, where each point is the value of one input or output parameter. The color of the line
identifies the Pareto front to which the sample belongs. You can also set the chart so that the lines
display the best candidates and all other samples.
While the Tradeoff chart can show only three parameters at a time, the Samples chart can show all
parameters at once, making it a better option for exploring the parameter space. Because of its inter-
activity, the Samples chart is a powerful exploration tool. Using the axis sliders in the Properties pane
to easily filter each parameter provides you with an intuitive way to explore alternative designs. The
Samples chart dynamically hides the samples that fall outside of the bounds. Repeating this operation
with each axis allows you to manually explore and find trade-offs.
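The axis-filtering behavior can be pictured with the following Python sketch, which keeps only the samples whose values fall inside the selected bounds. The parameter names and bounds are hypothetical.

# A minimal sketch of hiding samples that fall outside the bounds set by the axis sliders.
samples = [
    {"P1": 95.0, "P2": 1.2, "P9": 13500.0},
    {"P1": 102.0, "P2": 1.6, "P9": 14800.0},
    {"P1": 108.0, "P2": 1.4, "P9": 12900.0},
]
bounds = {"P1": (100.0, 110.0), "P9": (0.0, 14000.0)}   # set by dragging the sliders

visible = [s for s in samples
           if all(lo <= s[p] <= hi for p, (lo, hi) in bounds.items())]
print(visible)   # only the samples inside every selected range remain displayed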
The Properties pane for the Samples chart has two choices for Mode: Candidates and Pareto Fronts.
You can display the candidates with the full sample set or display the samples by Pareto front (same
as the Tradeoff chart). If you set Mode to Pareto Fronts, the Coloring method property becomes
available, allowing you to specify by Pareto Front or by Samples for coloring the chart.
In the following Samples chart, Mode is set to Pareto Fronts and Coloring method is set to by
Pareto Front. The gradient ranges are from blue for the best to red for the worst.
When Mode is set to Candidates, Coloring method is not shown because coloring is always by
samples.
Chart Properties
• Display Parameter Full Name: Select to show the full parameter name rather than the short
parameter name.
• Mode: Specifies whether to display the chart by candidates or by Pareto fronts. Choices are Candidates
and Pareto Fronts. If Pareto Fronts is selected, Coloring method becomes available as the
last option so that you can specify how to color the chart.
• Number of Pareto Fronts to Show: Specifies the number of Pareto fronts to display on the chart.
Input Parameters
Lists the input parameters. In the Enabled column, you can select or clear check boxes to enable or
disable input parameters. Only enabled input parameters are shown on the chart.
Output Parameters
Lists the output parameters. In the Enabled column, you can select or clear check boxes to enable
or disable output parameters. Only enabled output parameters are shown on the chart.
Specifies various generic chart properties. For more information, see Setting Chart Properties.
Using ROMs
The following sections explain how to produce parametric ROMs (reduced order models) from Fluent
steady analyses and then use them to evaluate models in 2D or 3D to rapidly explore the variation of
results:
ROM Overview
ROM Workflow
ROM Production Example for Fluent
Exporting the ROM
Consuming an FMU 2.0 File in Twin Builder
Consuming a ROMZ File in Fluent
Analyzing and Troubleshooting ROM Production
Quality Metrics for ROMs
ROM Limitations and Known Issues
ROM Overview
You can produce a ROM by learning the physics of a given model and extracting its global behavior
from offline simulations. As a standalone digital object, a ROM can be consumed outside of its production
environment for computationally inexpensive, near real-time analysis.
In a parametric ROM, you can evaluate the model and rapidly explore the variation of the results, de-
pending on input parameter values.
Because calculating a ROM result requires only simple algebraic operations (such as vector summations
and response surface evaluations), this step is computationally inexpensive compared to the FOM (full
order model) processing step. Not only is the ROM processing step several orders of magnitude
cheaper, but also the ROM can easily be delivered to and consumed by any number of users, yielding
significant returns on your initial investment in its production.
ROM Workflow
The ROM workflow consists of two distinct stages:
ROM Production
ROM Consumption
ROM Production
In Workbench, you use a DesignXplorer 3D ROM system to drive ROM production from either a 2D
or 3D simulation. In the Workbench Toolbox, the 3D ROM system is visible under Design Exploration.
A 3D ROM system is based on a Design of Experiments (DOE) and its design points, which automate
the production of solution snapshots and the ROM itself.
You define the input parameters and content of the ROM in the simulation environment, which is
currently limited to Fluent. A medium-sized ROM generally has 3 to 6 input parameters, while a very
large ROM might have more than 15.
When many input parameters are enabled, you might need to increase the number of ROM snapshot
files to maintain ROM accuracy. If you decide that you no longer want to vary an input parameter,
you can disable it.
A ROM must always have at least one output parameter. While output parameters have no impact on
ROM production, you can use them to monitor results while DesignXplorer updates the design points.
While ROM setup is specific to the Ansys product, the ROM production workflow is generic. This
means that as ROM support is extended to additional Ansys products in future releases, the steps
that you take to produce a ROM will be the same in all simulation environments.
Note:
• If a Workbench project with a ROM was created in a version earlier than 2019 R3,
the previously existing ROM system name (ROM Builder) and DOE cell name (Design
of Experiments (RB)) are shown.
Because ROM production requires several simulations, this stage can be computationally expensive.
However, once the ROM is built, it can be consumed at negligible cost.
ROM Consumption
The workflows for consuming ROMs can differ from one application to another. Currently, to consume
a ROM, you export the ROM (p. 248) as either an FMU 2.0 file or a ROMZ file, depending on which
consumption environment is targeted.
• An FMU 2.0 file is a compressed package containing the data and libraries that are needed for
ROM consumption. You can import this file into Ansys Twin Builder. For more information, see
Consuming an FMU 2.0 File in Twin Builder (p. 248).
• A ROMZ file is also a compressed package containing the data and libraries that are needed
for ROM consumption. You can import the ROMZ file into Fluent in Workbench so that you
can evaluate results directly in Fluent. For more information, see Reduced Order Model (ROM)
Evaluation in Fluent in the Fluent User's Guide.
Because an exported ROM is a standalone digital object, deployment for consumption by many users
is quick and easy.
Note:
For advanced Fluent users, a beta workflow exists for manually defining or generating, in
standalone Fluent, all of the files needed to produce the Fluent ROM in Workbench.
For more information about this advanced workflow, see the DesignXplorer Beta Features
Manual.
ROM Production Example for Fluent
To assess ROM accuracy, you can view quality metrics and run both verification points and refinement
points. For more information, see Quality Metrics for ROMs (p. 262).
Once ROM accuracy is verified, you can export the ROM for consumption in Twin Builder or Fluent
in Workbench. For more information, see Exporting the ROM (p. 248).
Prerequisites
The following steps explain how to download and extract sample files and then how to enable
DesignXplorer advanced options:
3. Start Workbench.
b. On the Design Exploration page, select the Show Advanced Options check box.
c. Click OK.
DesignXplorer now displays advanced options in italic type in various panes. Advanced ROM options
provide for opening the ROM Builder log file and setting a user-generated ROM mesh file.
To estimate minimum production time for a ROM, you can multiply the time it takes to update one
design point by the number of learning points. For this example, producing the ROM takes approx-
imately 40 minutes on 10 cores.
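As a quick illustration of this estimate, the following Python sketch assumes an average update time of 1.25 minutes per design point (a value chosen only so that the total matches the approximately 40 minutes stated above) and 32 learning points.

# Back-of-the-envelope estimate of minimum ROM production time (assumed values).
time_per_design_point_min = 1.25        # assumed average update time for one design point
number_of_learning_points = 32          # DOE points used to build the ROM in this example
estimated_minutes = time_per_design_point_min * number_of_learning_points
print(f"Estimated minimum ROM production time: {estimated_minutes:.0f} minutes")   # 40 minutes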
For your convenience, project archive files without solved design points (HeatExchanger.wbpz)
and with solved design points (HeatExchanger_DOE_Solved.wbpz) are included in the directory
where you extracted the sample files.
1. In Workbench, select File → Open, navigate to the directory where you extracted the
sample files, and open the project archive file HeatExchanger.wbpz.
a. In the Project Schematic, double-click the Parameter Set bar to open it.
b. In the Outline pane, look at the input and output parameters, which have been
well defined.
4. In the Project Schematic, right-click the Mesh cell and select Update.
5. When the update finishes, double-click the Setup cell to open Fluent, clicking Yes when
asked whether to load the new mesh.
6. When the Fluent Launcher window opens, if necessary, adjust the number of processors
based on your local license and hardware configurations and then click Start.
a. To make Reduced Order Model (Off) visible in the tree under Setup → Models,
enable and load the ROM addon module by executing this command in the Fluent
console:
b. On the ribbon's Solution tab, click the button in the Initialization group for initial-
izing the entire flow field and then click Calculate in the Run Calculation group.
c. When the calculation completes, click OK to close the dialog box. Then, in the tree
under Setup → Models, double-click Reduced Order Model (Off).
d. In the Reduced Order Model window that opens, select the Enable Reduced Order
Model check box to expand the window so that you can set up the ROM.
On the Setup tab, each pane displays a filtering option and buttons for toggling
which values to show, how to show these values, and selecting and deselecting
all shown values. Any Fluent custom function fields that you have defined are in-
cluded in the list of variables available for selection.
e. Select the variables and zones to include in the ROM and then click Add to move
them to the Selected for ROM list.
Because you cannot add selections to the ROM later, ensure that you add all vari-
ables and zones of interest. If you want to delete a particular selection from the
Selected for ROM list, click it and then click Delete beneath the list.
Everything in the Selected for ROM list will be included in the ROM.
g. In the Fluent toolbar, click the button with the Ansys logo.
This button synchronizes the Workbench cell, pushing your changes to the Design
of Experiments (3D ROM) cell.
h. If you are asked to save your changes, select the option for saving changes for
current and future calculations.
a. On the Project Schematic, right-click the Fluid Flow (Fluent) system and select Update.
b. When the update finishes, select View → Files and locate the ROM snapshot file
(ROMSNP) that has been created.
This global file captures the results for the current parameters configured for DP 0. It is one of a
set of snapshot files that is needed to build the ROM.
1. In the Workbench Toolbox under Design Exploration, double-click 3D ROM to add a system
of this type to the Project Schematic.
A 3D ROM system is inserted under the Parameter Set bar. In the Design of Experiments (3D
ROM) cell, (3D ROM) indicates that the design points in this cell are used to build the ROM.
The DOE for a 3D ROM system can share data only with the DOE for another 3D ROM system.
Tip:
You can import data from an external CSV file into the Design of Experiments
(3D ROM) cell. For more information, see Importing Data from a CSV File (p. 294)
and Exporting and Importing ROM Snapshot Archive Files (p. 254).
The default value for Design of Experiments Type is Optimal Space-Filling Design.
The default value for Number of Samples depends on the number of input parameters.
DesignXplorer calculates this value by multiplying the number of enabled input para-
meters by eight. In the example, four input parameters are enabled. Consequently,
Number of Samples is set to 32.
d. For each input parameter in the Outline pane, select it and edit the lower and upper
bounds in the Properties pane. For this example, use the following values.
f. In the Properties pane, select the Preserve Design Points After DX Run check box
and set Report Image to FFF-Results:Figure001.png.
In the Table pane, the Report Image column is added. The image for each design point
will display temperature results on the symmetry face. These figures are defined in the
Results cell in CFD-Post. For more information, see Figure Command in the CFD-Post
User's Guide and Viewing Design Point Images in Tables and Charts (p. 282).
3. In the toolbar, click Preview to generate the 32 design points without updating them.
Tip:
To avoid the wait, you can save and close this project and then open the project
archive file HeatExchanger_DOE_Solved.wbpz and save it to a new file name.
Because the 32 design points are already updated in this file, you can skip to step 7.
For each design point, a report image and a ROM snapshot file (ROMSNP) are produced. For
more information, see ROM Snapshot Files (p. 251).
5. When the update finishes, close the Design of Experiments (3D ROM) cell.
a. On the Project Schematic, double-click the ROM Builder cell to open it.
The Table pane displays a summary of all the variables that are included in the ROM.
c. In the Properties pane, verify that Solver System is set to the correct Ansys product.
The default is Fluid Flow (Fluent) (FFF) because this is the only system in which a ROM is
set up.
d. For Construction Type, accept the default value, which is Fixed Number of Modes.
Note:
If the number of design points for the Design of Experiments (3D ROM) cell were
smaller than 10, you would set Number of Modes to the number of design
points.
When the update completes, a toolbar button is enabled for exporting the ROM to either an
FMU 2.0 file or ROMZ file.
8. To assess the ROM accuracy, in the Outline pane, select Goodness Of Fit. Then, in the chart,
check the error metrics at the DOE points. Learning points include DOE points plus refinement
points. For more information, see Quality Metrics for ROMs (p. 262).
b. In the Properties pane, select the Preserve Design Points After DX Run check box
and the Generate Verification Points check box.
f. When the update finishes, in the Outline pane, select Verification Points; then in the
Table pane, check the verification point values and report images.
To improve ROM accuracy, you can choose to run some additional design points. You can do so in
either the Design of Experiments cell or the ROM Builder cell.
3. In the toolbar, click Preview to generate the design points without updating them.
The additional points are added where the distances between existing design points are the
highest.
In the Table pane for the Design of Experiments (3D ROM) cell, you can see the statuses of ROM
snapshot files and perform many other operations. For more information, see ROM Snapshot
Files (p. 251). ROM snapshot files and the ROM Builder log file provide for analyzing and
troubleshooting ROM production (p. 250).
If you want to edit output values for a design point or add, import, or copy design points, you must
change the Design of Experiments Type property to Custom or Custom + Sampling. To edit
one or more output values for a design point, you right-click the output value and then select either
the option for setting it or all output values as editable. For more information, see Editable Output
Parameter Values (p. 292). For any design points that you add manually, you must set ROM snapshot
files. For more information, see Setting Specific ROM Snapshot Files for Design Points (p. 253).
When you edit a design point, you must update the ROM.
• In the toolbar for the ROM Builder cell, click Export ROM.
• In the Project Schematic, right-click the ROM Builder cell and select Export ROM.
2. In the Export ROM dialog box, navigate to the location where you want to save the file.
4. For Save as type, select the file type to which to export the file. Choices are:
• FMU Version 2 Files (*.fmu). An FMU (Functional Mock-up) 2.0 file can be consumed by
anyone who has access to Ansys Twin Builder or any other tool that can read this file type.
For more information, see Consuming an FMU 2.0 File in Twin Builder (p. 248).
• ROM Files (*.romz). The ROMZ file for a ROM can be imported into Fluent in Workbench,
where results can be evaluated directly in Fluent. For more information, see Consuming a
ROMZ File in Fluent (p. 250).
Note:
Using an FMU 2.0 file exported from DesignXplorer implies the approval of the terms
of use supplied in the License.txt file. To access License.txt, use a zip utility to
manually extract all files from the .fmu package.
Consumption of the exported ROM file can require a lot of memory, depending on the ROM setup and
the ROM mode count.
Note:
To experiment, you can consume the supplied FMU 2.0 file for the supplied ROM pro-
duction example (p. 237) (HeatExchanger.fmu) in Twin Builder. This file resides in
the directory where you extracted the sample files.
An FMU 2.0 file that is created by exporting a 3D ROM has all the enabled input parameters from the
DOE and works only within the ranges for these inputs. The FMU outputs are the minimum, maximum,
and average values for each selected variable on each selected zone. Existing scalar outputs from
Workbench are ignored. Currently, there is no way to add custom scalar outputs.
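Outside of Twin Builder, any tool that reads FMU 2.0 files can evaluate the exported ROM. The following Python sketch uses the third-party FMPy package as one such tool; it is not part of the documented workflow, and the input and output variable names are hypothetical placeholders for the names defined by your own ROM setup.

# A minimal sketch of inspecting and evaluating an exported FMU 2.0 file with FMPy.
from fmpy import read_model_description, simulate_fmu

fmu_path = "HeatExchanger.fmu"

# List the variables exposed by the FMU (DOE inputs and min/max/avg outputs).
for variable in read_model_description(fmu_path).modelVariables:
    print(variable.causality, variable.name)

# Evaluate the ROM at one operating point; start_values must stay inside the DOE ranges.
result = simulate_fmu(
    fmu_path,
    start_values={"inlet_velocity": 1.5},    # hypothetical input parameter name
    output=["temperature_zone1_avg"],        # hypothetical output variable name
)
print(result[-1])                            # last record of the simulation results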
Otherwise, you can use an FMU 2.0 file that is created by exporting a 3D ROM in the same manner as
any other FMU file. For more information on these files and using 3D ROMs in Twin Builder, search the
Twin Builder help for these topics:
• FMU Components
2. Navigate to and select the FMU 2.0 file for the 3D ROM.
3. Click Open.
6. Connect all FMU inputs with the remainder of the system model.
7. Click Analyze.
Note:
You cannot import a ROMZ file into the Fluent system used to produce the 3D ROM
in the same instance of Workbench.
Analyzing and Troubleshooting ROM Production
When creating a ROM in the ROM Builder, you update the Design of Experiments (3D ROM) cell in
the 3D ROM system, which generates a ROM snapshot file for each design point.
In the ROM Builder cell, the ROM snapshot status column is also available in refinement and verification
point tables. Placing the mouse cursor over a cell in the column displays the name of the snapshot file.
Descriptions follow for each status icon that can display in the snapshot column:
• Up-to-date ( ): The design point has a snapshot file associated with it, and no issues with
this file were detected.
• Update required ( ): The design point is not yet solved and must be updated. Updating
the design point will generate a new snapshot file.
• Update failed ( ): The design point has been calculated or has an imported snapshot file
associated with it but the file is not useable.
• Attention required ( ): The design point has editable outputs but does not have a snapshot
file associated with it. To continue, you must select a snapshot file for this design point.
When an information icon ( ) appears to the right of the status icon, you can click it to view remarks,
errors, or guidance.
When a snapshot file is invalid or missing, the DOE cell cannot be updated. To continue, you must
do one of the following:
• Set a valid snapshot file manually, which is described in Manually Adding Design Points and
Setting Snapshot Files (p. 254)
The same is true if there is an invalid or missing snapshot file for a refinement point or verification
point in the ROM Builder cell.
• Invalid DOE design points and refinement points prevent building the ROM.
• Invalid verification points prevent building the goodness of fit (p. 263).
Additionally, you can display information about a ROM snapshot file from the Files pane of the
Project Schematic:
2. In the Name column, right-click the cell with the snapshot file in which you are interested
and select Display File Info.
The file information displays results for each variable and zone that was included in the ROM.
When you finish reviewing this information, you can close the file.
However, for added design points, ROM snapshot files must be set. Multiple 3D ROM systems can
use the same snapshot files.
To add design points and set snapshot files, you can use these methods:
Manually Adding Design Points and Setting Snapshot Files
Exporting and Importing ROM Snapshot Archive Files
Importing Design Points and Snapshot File Settings from CSV Files
In certain situations, these methods ask how you would like to proceed:
• Before copying or importing design points into a DOE, DesignXplorer parses and validates
the data. For more information, see the parsing and validation information in Copying Design
Points (p. 97).
• If you set a snapshot file that is in the ROM production folder for a different project and this
same file does not already exist in the ROM production folder for the current project, a dialog
box opens, asking whether you want to move or copy the file.
• In a case where a snapshot file with the same name already exists in the ROM production
folder, a dialog box opens, asking whether you want to use the existing file or rename the
new file. Before deciding which action to take, you can click Show Details and compare in-
formation for the two files. If you select rename, the imported file is copied and assigned a
new name that corresponds to the current name plus a suffix with time stamp information.
Any time that you edit design points or refinement points, you must update the ROM.
Caution:
If you edit verification points without defining output parameter values and setting
snapshot files, you must update these points to get the new goodness of fit. If you
add verification points that are up-to-date, the goodness of fit is updated automat-
ically.
If you select multiple rows, output parameter values for all selected rows are set as editable.
If you want to make output parameter values for all rows editable, you would select Set All
Output Values as Editable. For more information, see Editable Output Parameter Val-
ues (p. 292).
2. In the Table pane, right-click any editable output parameter value in the row and select Set
Snapshot File.
3. In the dialog box that opens, select the appropriate snapshot file and click Open.
1. In the Table pane for the Design of Experiments (3D ROM) cell, right-click and select
Export as Snapshot Archive.
2. In the dialog box that opens, specify a directory location and file name and then click
Save.
1. In the Table pane for the Design of Experiments (3D ROM) cell, right-click and select
Import Design Points and Snapshot Files → Browse.
2. In the dialog box that opens, navigate to the desired snapshot archive file and click Open.
A check for consistency is made against the first valid snapshot that is imported.
Note:
You can use a software tool like 7-Zip to open and modify a snapshot archive file.
If you want, you can save the modified file as a ZIP file. DesignXplorer supports
importing snapshot archive files with either SNPZ or ZIP extensions.
Importing Design Points and Snapshot File Settings from CSV Files
In CSV files that contain design points to import, for each design point, you can include the name
of the snapshot file to set to avoid having to manually set it. The snapshot files referenced must
be in the ROM production folder of the project into which you are importing the design points.
1. Ensure that the snapshot files referenced in the CSV file are copied into the ROM production
folder of the project into which to import design points.
To easily find this directory, in the Table pane for the DOE cell, you can right-click a cell in
the ROM snapshot column and select Open Containing Folder.
2. Proceed with the import of the CSV file. For more information, see Importing Data from a CSV
File (p. 294).
DesignXplorer keeps any snapshot file that was generated with the same inputs as a point in any
of these tables. The tables listed in the first three bullets have ROM snapshot columns, so it’s clear
which snapshot files are kept. While the table listed in the last bullet does not have a ROM snapshot
column, DesignXplorer still keeps snapshot files associated with its points.
The Clear All Unused ROM Snapshots option is available on the context menu after you set up a
solver for ROM production. It is disabled if there are no unused snapshot files to remove.
Note:
The Retain Data option does not modify the behavior of this functionality.
When DesignXplorer advanced options are shown, ROM Mesh File is visible in the Properties pane
for the Design of Experiments (3D ROM) cell.
The mesh file must reside in the ROM production folder for the current project. If you select a mesh
file in a different project and this same file does not already exist in the current project, a dialog box
opens, asking whether you want to move or copy the file. In a case where a mesh file with the same
name already exists in the ROM production folder, the dialog box indicates that the existing file will
be overwritten.
Caution:
Ensure that the custom mesh file that you select is compatible with the ROM.
To stop using the custom mesh file, you would clear the ROM Mesh File property. While DesignXplorer
does not remove the file from the project, it unregisters the file so that you can remove it manually.
Tip:
If the mesh file already exists in the ROM production folder for the current project, you
can directly enter the file name in ROM Mesh File. If the mesh file is in the ROM production folder
for a different project, you must enter the full path.
1. In the Outline pane for the ROM Builder cell, select ROM Builder.
When DesignXplorer advanced options are shown, ROM Builder.log is visible in the Properties
pane under Log File.
Tip:
When advanced options are not shown, in the Files pane for the Project Schematic,
you can right-click Rom Builder.log and select Open Containing Folder to go
to the location where this log file is stored. You can then double-click the file to open
it.
When Display Level Log File is set to Medium, the ROM Builder writes SVD error metrics to the log file.
• When Construction Type is set to Global, the ROM Builder computes one SVD per field.
• When Construction Type is set to Local, the ROM Builder computes one SVD per field and
per support.
Each row of the SVD error table corresponds to a number of modes (Nb Modes) and reports the following
error metrics:
• Rel. Proj. Err.
• Abs. Proj. Err.
• Rel. Proj. Err. RMS
• Abs. Proj. Err. RMS
• LOO Rel. Proj. Err.
• LOO Abs. Proj. Err.
• LOO Rel. Proj. Err. RMS
• LOO Abs. Proj. Err. RMS
For a given row, the errors shown in the table correspond to errors obtained by using the M first
modes of the SVD, with M equal to the number of modes. See the Nb Modes column.
The descriptions for these criteria refer to the following quantities:
• The vector of values of the projection of the i-th snapshot onto the subspace spanned by the
first modes of the SVD.
• The approximated value of the ROM on the j-th entity of the i-th snapshot.
Rel. Proj. Err.
Computes the ratio between the average of the projection error on snapshots and the average of the
norm of the snapshots:
Rel. Proj. Err. RMS
Computes the ratio between the root mean square of the projection error on snapshots and the root
mean square of the norm of the snapshots:
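The formulas themselves are not reproduced here. The following LaTeX expressions are a plausible reconstruction based on the descriptions above, where s_i denotes the i-th snapshot, p_i its projection onto the retained modes, and N the number of snapshots:

\text{Rel. Proj. Err.} = \frac{\frac{1}{N}\sum_{i=1}^{N} \lVert s_i - p_i \rVert}{\frac{1}{N}\sum_{i=1}^{N} \lVert s_i \rVert}
\qquad
\text{Rel. Proj. Err. RMS} = \frac{\sqrt{\frac{1}{N}\sum_{i=1}^{N} \lVert s_i - p_i \rVert^{2}}}{\sqrt{\frac{1}{N}\sum_{i=1}^{N} \lVert s_i \rVert^{2}}}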
NB: This metric corresponds to the one exposed in the DesignXplorer interface to control the
level of ROM accuracy when Construction Type is set to Fixed Accuracy. The number of modes
for building the ROM is selected such that the associated Rel. Proj. Err. RMS is less than the
defined Maximum Relative Error.
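A plausible mathematical sketch of these two criteria, assuming s_i denotes the i-th snapshot vector,
p_i^(M) its projection onto the subspace spanned by the first M SVD modes, and N the number of
snapshots (these symbols are illustrative and are not taken from the solver log):

    \text{Rel. Proj. Err.} = \frac{\frac{1}{N}\sum_{i=1}^{N}\lVert s_i - p_i^{(M)}\rVert}{\frac{1}{N}\sum_{i=1}^{N}\lVert s_i\rVert}

    \text{Rel. Proj. Err. RMS} = \frac{\sqrt{\frac{1}{N}\sum_{i=1}^{N}\lVert s_i - p_i^{(M)}\rVert^{2}}}{\sqrt{\frac{1}{N}\sum_{i=1}^{N}\lVert s_i\rVert^{2}}}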
The LOO (leave-one-out) metrics are similar to the previous metrics except for one difference: the
projection of the i-th snapshot is computed by using an SVD based on all snapshots except the i-th
snapshot. This technique allows you to see the stability of the SVD and the accuracy of the projection
on points other than those used to build the SVD.
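With the same illustrative notation, the LOO variant of the relative projection error can be sketched
as follows, where p_i^(M,-i) denotes the projection of s_i onto the first M modes of an SVD built from
all snapshots except the i-th:

    \text{LOO Rel. Proj. Err.} = \frac{\frac{1}{N}\sum_{i=1}^{N}\lVert s_i - p_i^{(M,-i)}\rVert}{\frac{1}{N}\sum_{i=1}^{N}\lVert s_i\rVert}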
The number of selected modes used to build the ROM is specified below the table of SVD errors. For
example, it might indicate: Number of selected modes = 3.
When Display Level Log File is set to High, the ROM Builder writes additional metrics about ROM
approximation errors.
One table of ROM errors is shown per support and per field. Each row displays the error metrics
for one snapshot, computed on its entities.
The table of ROM errors contains one row per snapshot (1, 2, ...) and the following columns:
Snapshot, Err. Nrm2, Rel. Err. Nrm2, Err. NrmInf, Rel. Err. NrmInf, DOF error 1st Quartile,
DOF error Median, DOF error 3rd Quartile, and DOF error 95th Percentile.
Err. Nrm2
Computes the root of the sum of squared errors on the entities of the snapshot.
Rel. Err. Nrm2
Computes the root of the sum of squared errors on the entities of the snapshot divided by the norm
of the snapshot.
Err. NrmInf
Computes the maximum absolute error measured on the entities of the snapshot.
Rel. Err. NrmInf
Computes the maximum absolute error measured on the entities of the snapshot divided by the
maximum absolute value of the entities of the snapshot.
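A plausible sketch of these four per-snapshot metrics, assuming e_{i,j} denotes the error on the j-th
of the n entities of the i-th snapshot and s_{i,j} the corresponding reference value (illustrative
notation only):

    \text{Err. Nrm2}(i) = \sqrt{\sum_{j=1}^{n} e_{i,j}^{2}}

    \text{Rel. Err. Nrm2}(i) = \sqrt{\sum_{j=1}^{n} e_{i,j}^{2}} \,/\, \lVert s_i \rVert

    \text{Err. NrmInf}(i) = \max_{j} \lvert e_{i,j} \rvert

    \text{Rel. Err. NrmInf}(i) = \max_{j} \lvert e_{i,j} \rvert \,/\, \max_{j} \lvert s_{i,j} \rvert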
Minimum
Corresponds, for a given metric, to the minimum value obtained on all snapshots
Maximum
Corresponds, for a given metric, to the maximum value obtained on all snapshots
Average
Corresponds, for a given metric, to the average value obtained on all snapshots
1st quart
Corresponds, for a given metric, to the upper bound of 25% of all snapshots
median
Corresponds, for a given metric, to the upper bound of 50% of all snapshots
3rd quart
Corresponds, for a given metric, to the upper bound of 75% of all snapshots
Reference values on the DOFs of the snapshots are used to build the ROM. The following table lets
you quickly see the statistics associated with the reference values of the entities of each snapshot:
Min. Value
Max. Value
Mean Value
1st Quartile
Median
3rd Quartile
Lower Limit
Equal to Q1-1.5*IQR
Upper Limit
Equal to Q3+1.5*IQR
Lower and upper limits correspond to the "inner fences" that mark off the "reasonable" values
from the outlier values. An outlier is a value that is distant from other values. In some cases, an
outlier can correspond to an erratic point and reveal a real problem on the snapshot.
Outliers
Indicates when the Min. Value is less than the Lower Limit or when the Max. Value is greater
than the Upper Limit.
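As an illustration of how the inner fences and the Outliers indicator relate, the following minimal
Python sketch applies the Tukey inner-fence rule (Q1 - 1.5*IQR and Q3 + 1.5*IQR) to the reference
values of one snapshot. The function names are hypothetical and are not part of DesignXplorer.

    import statistics

    def inner_fences(values):
        """Tukey inner fences: (Q1 - 1.5*IQR, Q3 + 1.5*IQR)."""
        q1, _, q3 = statistics.quantiles(values, n=4)  # 1st quartile, median, 3rd quartile
        iqr = q3 - q1
        return q1 - 1.5 * iqr, q3 + 1.5 * iqr

    def has_outliers(values):
        """Mimics the Outliers indicator: True when Min. Value or Max. Value falls outside the fences."""
        lower, upper = inner_fences(values)
        return min(values) < lower or max(values) > upper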
If goodness of fit is poor, you can enrich your ROM and improve its accuracy by manually adding refine-
ment points (p. 124). Adding refinement points to a ROM Builder cell is equivalent to adding design
points to a Design of Experiments cell. Both design points and refinement points are taken into account
when you update the ROM.
To view goodness of fit for the output parameters in a ROM, in the Outline pane for the ROM
Builder cell, select Goodness Of Fit.
In both the Table and Chart panes, the design points and refinement points are grouped together
and labeled as Learning Points.
• The Table pane displays the differences (errors) between the real solve solutions and the ROM
solutions, calculated at all nodes and all design points (average, maximum, and so on), for
each field (variable) for the selected region (zone). Each parameter is rated on how close it
comes to the ideal value for each goodness of fit metric. The rating is indicated by the number
of gold stars or red crosses next to the parameter. The worst rating is three red crosses. The
best rating is three gold stars.
• The Chart pane displays the error that is measured for each snapshot. You can choose to turn
on and off the display of learning points and verification points in the chart properties. You
can display the chart as either a bar chart of the error for each snapshot or a cumulative dis-
tribution. Depending on the mode, placing the mouse cursor over a particular point displays
the error value or cumulative error percentage.
Goodness of fit differs for each field. In the Properties pane, Chart and General categories indicate
what to display in the chart and table.
– Error per Snapshot: Allows you to quickly see if the level of error is uniform for all
snapshots and which snapshots have the highest and lowest errors.
– Cumulative Distribution Error: Allows you to quickly see what percentage of snapshots
have an error smaller than a given value.
• Error Type: Indicates the type of error to display. Choices are L-Infinity Norm Error and Rel-
ative L2-Norm Error (%). The first choice displays the absolute error. The other choice displays
the normalized error. For more information, see ROM Goodness of Fit Criteria (p. 265).
Under General, Region indicates the region of interest to display in the table. You can select All
Regions or a particular region. In the table, you see error metrics for each field included in the ROM.
If you want to add a new goodness-of-fit object, right-click Quality and select Insert Goodness of
Fit. You can also right-click an existing goodness-of-fit object and then copy and paste, duplicate, or
delete it. After duplicating a goodness-of-fit object, you can modify its properties, such as the region
to display in the table and the field to show in the chart.
Note:
During computation of the goodness of fit, you can interrupt the update of the ROM
Builder cell if the computation is taking too long. You are still able to export the
ROM (p. 248). You can then update the ROM Builder cell later to compute the goodness
of fit.
Two types of errors are calculated for the points taken into account in the construction of the response
surface: L-Infinity Norm Error and Relative L2-Norm Error (%). The mathematical representations
of these criteria follow.
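A plausible sketch of these two criteria, assuming y_i denotes the value obtained from the real solve
at the i-th learning or verification point, \hat{y}_i the corresponding value predicted by the ROM, and
N the number of points (the symbols are illustrative):

    \text{L-Infinity Norm Error} = \max_{i}\,\lvert y_i - \hat{y}_i \rvert

    \text{Relative L2-Norm Error (\%)} = 100 \cdot \frac{\sqrt{\sum_{i=1}^{N} (y_i - \hat{y}_i)^{2}}}{\sqrt{\sum_{i=1}^{N} y_i^{2}}}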
1. In the Outline pane for the ROM Builder cell, select Verification Points.
2. In the Table pane, for each verification point to add, enter values for its input parameters in
the New Verification Point row.
3. On the toolbar, click Update to perform a real solve of each verification point.
Verification point results are then compared with the ROM predictions and the difference is calculated
and displayed in the verification points table.
1. In the Outline pane for the ROM Builder cell, select Refinement Points.
2. In the Table pane, for each refinement point to add, enter values for its input parameters in
the New Refinement Point row.
3. On the toolbar, click Update to update each out-of-date refinement point and then rebuild
the ROM from both the design points and refinement points.
In the ROM Builder log file (p. 256), the metrics take refinement points into account.
Note:
For ROM limitations specific to Fluent, see ROM Limitations in the Fluent User's Guide.
• During ROM production, it is not possible to use several Fluent systems along with a single 3D ROM
system.
• Input parameters coming from a non-Fluent cell must be disabled (for instance a Geometry cell in
a Fluid Flow system). All enabled input parameters must be defined in Fluent.
• A 3D ROM system does not support geometric parameter updates. All geometric parameters enabled
in the Design of Experiments (3D ROM) cell must be disabled.
• After importing a Fluent case file with the ROM already created, the Design of Experiments (ROM)
cell may appear as though it is undefined. If this occurs for a case that already has the DOE defined,
open the Fluent Setup cell and then close Fluent. The Design of Experiments (3D ROM) cell will
change to Update Required.
• Clearing the Geometry or Mesh cell in a Fluent Fluid Flow system created in a previous release may
prevent the ROM from being created. Create the Fluent mesh again using the latest release to allow
for ROM creation.
• You cannot import a ROMZ file into the Fluent system used to produce the 3D ROM in the same
instance of Workbench.
Working with DesignXplorer
The following sections describe the types of work you do in DesignXplorer:
Working with Parameters
Working with Design Points
Working with Sensitivities
Working with Tables
Working with Remote Solve Manager and DesignXplorer
Working with Design Exploration Results in Workbench Project Reports
Note:
If you modify your analysis after it is solved, parameters can change. DesignXplorer displays
the refresh required icon ( ) on cells with changed data.
• When a direct input or direct output parameter is added to or deleted from the project, or when
the unit of a parameter is changed, all results, including design point results and the cache of
design point results, are invalidated on a Refresh operation. An Update operation is required
to recalculate all design points.
• When a derived output parameter is added to or deleted from the project, or when the expression
of a derived output parameter is modified, all results but the cache of design point results are
invalidated on a Refresh operation. So, on an Update operation, all design point results are re-
trieved without any new calculation. Other design exploration results are recalculated.
• Because DesignXplorer is primarily concerned with the range of variation in a parameter, changes
to a parameter value in the model or the Parameter Set bar are not updated to existing design
exploration systems with a Refresh or Update operation. The parameter values that were used
to initialize a new design exploration system remain fixed within DesignXplorer unless you change
them manually.
Tip:
You can change the way that units are displayed in your design exploration systems from
the Units menu. Changing the units display in this manner causes the existing data in each
system to be shown in the new units system. It does not require an update of the design
exploration systems.
When you make non-parametric changes, the first cell in the design exploration system indicates that
a refresh is required. However, the Refresh operation invalidates design point results and the cache of
design points. An Update operation is then required to recalculate design points.
Because generating new design points can be time-consuming and costly, when you know that recal-
culating design points is unnecessary, you can approve generated data instead. Rather than invalidating
design point results, the Approve Generated Data operation retains the already generated design
points as up-to-date and retains the design point results in the cache of design points.
If non-parametric changes exist, the Approve Generated Data option is available in the right-click
context menu for the first cell in a design exploration system. Additionally, when the cell is being edited,
a button is available on the toolbar.
Note:
When this operation runs, all cells that were up-to-date before the non-parametric changes are once
again up-to-date.
• In the Outline pane for each cell with user-approved data, an alert icon ( ) displays in the
Message column for the root node. If you click this icon to view warnings, you see a cautionary
message indicating that the data contains user-approved generated data that may include non-
parametric changes.
• In the Table pane for the cell, the user-approved icon ( ) displays in the Name column of all
design points with user-approved data.
A Design of Experiments cell with user-approved design points is once again marked as up-to-date.
A Response Surface cell can be marked as up-to-date if it depends on a Design of Experiments cell
with user-approved design points or if it contains user-approved points, such as refinement points.
When a cell in a design exploration system contains user-approved data, any cell that depends on it
also displays the user-approved icon. For example, if a Design of Experiments cell contains user-approved
data, any Response Surface cell that depends on this DOE displays this icon. Any new design points
that you might insert in a cell's table do not display icons because their results are to be based on a
real solve.
Assume that you switch to another DOE type and generate new design points, without keeping the
previous design points. The DOE then contains only new design points, so no user-approved icons
display. However, cells that depend on this DOE still display user-approved icons.
• The Response Surface cell displays the icon because it still contains user-approved refinement
points. If you deleted these refinement points and updated the response surface, the Response
Surface cell would no longer display the icon.
• The Optimization cell displays the icon because it still contains user-approved verified candidate
points. If you deleted these candidate points and updated the optimization, the Optimization
cell would no longer display the icon.
All reports that DesignXplorer automatically generates include the same cautionary message that is
displayed when you click the alert icon ( ) in the Message column for the root node of a cell with
user-approved data. For example, the Workbench project report contains this cautionary message in
corresponding component sections. Additionally, in table images, the project report displays the user-
approved icon ( ) in the Name column of user-approved points.
If you export table or chart data, the first row of the CSV file also includes the same cautionary message.
Note:
• In some situations, non-parametric changes are not detected. For example, if you edit the
input file of a Mechanical APDL system outside of Workbench, this non-parametric change
is not detected. To synchronize design exploration systems with the new state of the
project, you must perform a Clear Generated Data operation followed by an Update
operation. In a few rare cases, inserting, deleting, duplicating or replacing systems in a
project is not reliably detected.
• The Clear Generated Data operation does not clear the design point cache. To clear the
design point cache, right-click in an empty area of the Project Schematic and select Clear
Design Points Cache for All Design Exploration Systems.
Input Parameters
By defining and adjusting input parameters, you specify the analysis of the model under investigation.
This section describes how to define and change input parameters.
1. In the Outline pane for the Design of Experiments cell, select the parameter.
Both the Properties pane and Table pane display level information for the discrete para-
meter selected in the Outline pane. In the Properties pane, you see the number of levels
(discrete values). In the Table pane, you see the integer values for each level. You can add,
delete, and edit levels for the discrete parameter, even if it is disabled.
Note:
3. In the Table pane, define the levels for the discrete parameter:
• To add a level, select the empty cell in the bottom row of the Discrete Value column,
type an integer value, and press Enter. In the Properties pane, Number of Levels
is updated automatically.
• To delete a level, right-click any part of the row containing the level to remove and
select Delete.
• To edit a level, select the cell with the value to change, type an integer value, and
press Enter.
Note:
In the Table pane, the Discrete Value column is not sorted as you add, delete, and edit
levels. To sort it manually, click the down-arrow on the right of the header cell and select
a sorting option. Once you sort the column, integer values are auto-sorted as you add,
delete, and edit levels.
1. In the Outline pane for the Design of Experiments cell, select the parameter.
The values for Lower Bound and Upper Bound define the range of the analysis.
DesignXplorer initializes the range based on the current value for the parameter, using
−10% for the lower bound and +10% for the upper bound. If a parameter has a current
value of 0.0, the initial range is computed as 0.0 → 10.0.
Because DesignXplorer is not aware of the physical limits of parameters, you must check
that the assigned range is compatible with the physical limits of the parameter. Ideally, the
current value of the parameter is at the midpoint of the range between the upper and
lower bounds. However, this is not a requirement.
3. If you need to change the range, select the bound to change, type in a number, and press
Enter. You are not limited to entering integer values. However, the relative variation must
be equal to or greater than 1e-10 in the same units as the parameter. If the relative variation
is less than 1e-10, you can either adjust the range or disable the parameter.
• To adjust the range, select the parameter in the Outline pane and then edit the
values for the bounds in the Properties pane.
• To disable the parameter, clear the Enable check box in the Outline pane.
In the Properties pane, Allowed Values specifies whether you want to impose a limitation
beyond the range defined by the lower and upper bounds. When this property is editable,
the default setting is Any, which means any value within the range is allowed.
4. If you want to further limit values, change the setting for Allowed Values. Descriptions
follow for the other choices that are possible. The subsequent table summarizes when a
choice is shown.
• Snap to Grid. When selected, Grid Interval displays, specifying the distance that
must exist between adjacent design points when generating new design points.
The units are the same as those for the parameter. The default value is calculated
as follows: (Upper Bound − Lower Bound)/1000. This value is then rounded
to the closest power of 10. For more information, see Defining a Grid Interval (p. 274).
• Manufacturable Values. When selected, levels display in the Table pane so that
only the real-world manufacturing or production values that you specify are taken
into account during postprocessing. For more information, see Defining Levels for
Manufacturable Values (p. 275).
The following table describes possible states for Allowed Values and choice availability
based on its state.
For the cells of a Response Surface Optimization system:
Cell                    State of Allowed Values                  Available choices
Design of Experiments   Editable                                 Any; Snap to Grid; Manufacturable Values
Response Surface        Read-only                                Same value as selected in the DOE
Optimization            Editable if Any is selected in the DOE   Any; Snap to Grid; Manufacturable Values
                        Read-only if Manufacturable Values is    Manufacturable Values
                        selected in the DOE
Note:
• Use of this method does not affect existing design points. They are reused
without adjustment, even when they do not match the grid.
• Not all optimization methods support setting Allowed Values to Snap to Grid.
For example, NLPQL does not support this setting.
Defining a Grid Interval
The default value for Grid Interval is calculated as follows: (Upper Bound − Lower Bound)/1000.
The resulting value is then rounded to the closest power of 10. For example, assume a lower
bound of 3.52 and an upper bound of 4.08. After subtracting 3.52 from 4.08, the calculation
(0.56/1000) yields 0.00056, which is rounded to 0.001.
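As a minimal Python sketch of this default calculation (the helper name is hypothetical, and rounding
to the closest power of 10 is implemented here by rounding the base-10 logarithm, which reproduces
the example above but is only an assumption about the exact rule):

    import math

    def default_grid_interval(lower_bound, upper_bound):
        """(Upper Bound - Lower Bound)/1000, rounded to the closest power of 10."""
        raw = (upper_bound - lower_bound) / 1000.0
        return 10.0 ** round(math.log10(raw))

    print(default_grid_interval(3.52, 4.08))  # 0.56/1000 = 0.00056, which rounds to 0.001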
You can specify a different value for Grid Interval. However, the value cannot be negative or
zero. It must also be less than or equal to this value: (Upper Bound − Lower Bound).
When you edit the grid interval, the lower bound, upper bound, or starting point can become
invalid. In this case, DesignXplorer highlights in yellow the values which need your attention.
Edit each highlighted property to enter a value matching the grid interval. If you enter an invalid
value, DesignXplorer automatically snaps it to the grid.
When Snap to Grid is set, DesignXplorer excludes optimization methods that do not support
this setting from the list of methods available. However, you might have already selected such
a method before you set Snap to Grid. In this case, DesignXplorer cannot update the Optimization
cell. When you place the mouse cursor over Optimization in the Outline pane, a tooltip indicates
that the selected optimization method does not support continuous input parameters with Al-
lowed Values set to Snap to Grid. It then indicates that you must either select another optimiz-
ation method or change Allowed Values to some choice other than Snap to Grid.
DesignXplorer also cannot update the Optimization cell in other situations where at least one
continuous input variable has Allowed Values set to Snap to Grid. Tooltips for the Optimization
cell indicate how to resolve these additional problems:
• A lower or upper bound is not on the grid defined by the grid interval.
Defining Levels for Manufacturable Values
In the Properties pane, Number of Levels is initially set to 2 because this is the minimum
number of levels allowed. The Table pane displays all levels specified for manufacturable values.
The values for the two initial levels default to the lower and upper bounds. You can add, delete,
and edit levels, even if this continuous input parameter is currently disabled.
• To add a level, select the empty cell in the bottom row of the Manufacturable Values column,
type a numeric value, and press Enter. In the Properties pane, Number of Levels is updated
automatically.
• To delete a level, right-click any part of the row containing the level to remove and select
Delete. If you delete the level representing either the upper bound or lower bound, the range
is not narrowed.
• To edit a level, select the manufacturable value to change, type a numeric value, and press
Enter.
Note:
• In the Properties pane, Value is populated with the parameter value defined
in the Parameters table. This value cannot be edited in the DOE.
• If you enter a numeric value for a level that is outside of the range defined in
the Properties pane, you can opt to either automatically extend the range to
encompass the new value or cancel the commit of the new value. If you opt
to extend the range, all existing DOE results and design points are deleted.
• If you adjust the range when manufacturable values are defined, manufacturable
values falling outside the new range are automatically removed and all results
are invalidated.
If you decide that you no longer want to limit analysis of the sample set to only manufacturable
values, you can set Allowed Values to Any. This removes the display of levels from the Tables
pane. If you ever set Allowed Values to Manufacturable Values again, the Table pane once
again displays the levels that you previously defined.
Once results are generated, you can change the setting for Allowed Values without invalidating
the DOE or response surface. You can also add, delete, or edit levels without needing to regenerate
the entire DOE and response surface, provided that you do not alter the range. As long as the
range remains the same, DesignXplorer reuses the information from the previous updates.
Because the following types of results are based on manufacturable values, if you make any edits
to manufacturable values, you must regenerate them:
• Min/Max objects
• From the Outline pane for the Design of Experiments cell, you enable or disable an input
parameter by selecting or clearing the check box to the right of the parameter.
• In the Properties pane for the parameter selected in the Outline pane, you specify further attrib-
utes, such as the levels (integer values) for a discrete input parameter or the range (upper and
lower bounds) and allowed values for a continuous input parameter.
Making any of the following changes to an input parameter in a Design of Experiments, Design
of Experiments (SSA), or Parameters Correlation cell would require clearing all generated data
associated with this system:
For example, assume that you make one of these changes in a Design of Experiments cell for a
goal-driven optimization system. Because the change would clear all generated data in all cells of
the system, a dialog box displays, asking you to confirm the change. The change is committed only
if you click Yes. If you click No, the change is discarded. If desired, you can duplicate the system
and then make the parameter change in the new system. Alternatively, you can change the DOE
type to Custom before making a parameter change to retain the design points falling within the
new range.
Note:
Depending on the DOE type (p. 81), the number of generated design points is directly related
to the number of selected input parameters. As the design and analysis workflow (p. 29)
shows, specifying many input parameters makes heavy demands on computer time and
resources, including system analysis, DesignModeler geometry generation, and CAD
system generation. Also, large ranges for input parameters can lead to inaccurate results.
Output Parameters
Each output parameter corresponds to a response surface, which is expressed as a function of the
input parameters. Some typical output parameters are equivalent stress, displacement, and maximum
shear stress.
When you select an output parameter in the Outline pane for a cell that displays parameters, you
can see maximum and minimum values for this output parameter in the Properties pane. The max-
imum and minimum values shown depend on the state of the design exploration system.
They are the "best" minimum and maximum values available in the context of the current cell. This
means that they are the best values between what the current cell eventually produced and what
the parent cell provided. A Design of Experiments cell produces design points, and a best minimum
value and maximum value are extracted from these design points. A Response Surface cell produces
Min-Max search results if this option is enabled. If a refinement is run, new points are generated, and
a best minimum value and maximum value are extracted from these points. An Optimization cell
produces a sample set, and again a better minimum value and maximum value than those provided
by the parent response surface may be found in these samples. Consequently, it is important to remember
the state of the design exploration system when viewing the minimum and maximum values in the
parameter properties.
The number and the definition of the design points created depend on the number of input parameters
and the properties of the DOE. For more information, see Using a Central Composite Design DOE (p. 92).
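For example, assuming a full (non-fractional) central composite design, the number of generated
design points for k continuous input parameters is roughly

    N_{\text{CCD}} = 2^{k} + 2k + 1

so 5 inputs already imply 43 real solves. DesignXplorer may use fractional-factorial variants that
reduce this count, so treat the formula only as an order-of-magnitude guide.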
You can preview the generated design points by clicking Preview on the toolbar before updating a
DOE, a response surface (if performing a refinement), or another design exploration feature that generates
design points during an update.
Some design exploration features provide the ability to edit the list of design points and the output
parameter values of these points. For more information, see Working with Tables (p. 292).
Before starting the update of design points, you can set the design point update option, change the
design point update order, and specify initialization conditions for a design point update. For more in-
formation, see Specifying Design Point Update Options (p. 279).
• From a component tab, right-click the root node in the Outline pane and select Update.
• In the Project Schematic, click Update Project on the toolbar to update all systems in the project.
Note:
• The Update All Design Points operation updates the design points for the Parameter
Set bar. It does not update the design points for design exploration systems.
• If an update of a design exploration system reuses design points that are partially updated,
the update required icon ( ) displays beside the output parameters that are partially up-
dated. DesignXplorer always updates partially updated design points to completion and
publishes an informational message.
During the update, design points are updated simultaneously if the analysis system is configured to
perform simultaneous solutions. Otherwise, they are updated sequentially.
When you update a Design of Experiments, Response Surface, or Parameters Correlation cell, its
design points table is updated dynamically. As the points are solved, generated design points appear
and their results display.
As each design point is updated, parameter values are written to a CSV log file. If you want to use this
data to continue your work with the response surface, you can import it back into this cell's design
points table.
You can change the update order using options in the design points table. For more information, see
Changing the Design Point Update Order in the Workbench User's Guide.
• If Design Point Initiation is set to From Current (default), when a design point is updated,
it is initialized with the data of the design point that is designated as current.
• If Design Point Initiation is set to From Previously Updated, when a design point is updated,
it is initialized with the data of the previously updated design point. In some cases, it can be
more efficient to update each design point starting from the data of the previously updated
design point, rather than restarting from the current design point each time.
Note:
Retained design points with valid retained data do not require initialization data.
For more information, see Specifying the Initialization Conditions for a Design Point Update in the
Workbench User's Guide.
DesignXplorer allows you to preserve generated design points so that they are automatically saved
to the Parameter Set bar for later exploration or reuse. The preservation of design points must be
enabled first at the project level. You can then configure the preservation of design points for indi-
vidual components.
• You enable this functionality at the project level in Tools → Options → Design Exploration.
Under Design Points, select the Preserve Design Points After DX Run check box.
• You enable this functionality at the component level in the cell properties. Right-click the cell
and select Edit. In the Properties pane, under Design Points, select the Preserve Design
Points After DX Run check box.
When design points are preserved, they are included in the design points table for the Parameter
Set bar. A design points table for a cell indicates if design points here correspond to design points
for the Parameter Set bar. Design points include DOE points, refinement points, direct correlation
points, and candidate points. Design points correspond when they share the same input parameter
values.
When a correspondence exists, the point's name specifies the design point to which it is related. If
the source design point is deleted from the Parameter Set bar or the definition of either design point
is changed, the indicator is removed from the point's name and the link between the two points is
broken, without invalidating your model or results.
• You enable this functionality at the project level in Tools → Options → Design Exploration. Under
Design Points, select both the Preserve Design Points After DX Run and Retain Data for Each
Preserved Design Point check boxes.
• You enable this functionality at the component level in the cell properties. Right-click the cell and
select Edit. In the Properties pane, under Design Points, select both the Preserve Design
Points After DX Run and Retain Data for Each Preserved Design Point check boxes.
Note:
The behavior of a newly inserted cell follows the project-level settings. The behavior of
existing cells is not affected. Existing cells follow their configuration at the component
level.
When a DesignXplorer cell is updated, preserved design points are added to the project's design
points table and the calculated data for each of these design points is retained.
Once design point data has been retained, you have the option of using the data within the project
or exporting design point data to a separate project:
• To switch to another design within the project, right-click a design point in the design points table
and select Set as Current. This allows you to review and explore the associated design.
• To export retained design point data to a separate project, go to the design points table, right-click
one or more design points with retained data, and select Export Selected Design Points.
For more information about using retained design point data, see Preserving Design Points and Re-
taining Data (p. 289) in the Workbench User's Guide.
In a chart, you can right-click a design point and select Insert as Design Point. Some charts that
support this operation are the Samples chart, Tradeoff chart, Response chart, and Correlation Scatter
chart.
When inserting new design points into the project, the input parameter values of the selected can-
didate design points are copied. The output parameter values are not copied because they are ap-
proximated values provided by the response surface. To view the existing design points in the project,
edit the Parameter Set bar in the Project Schematic.
The availability of the Insert as Design Point option from an optimization chart depends on the context.
For instance, you can right-click a point in a Tradeoff chart to insert it as a design point in the project.
The same operation is available from a Samples chart, provided it is not being displayed as a Spider
chart.
Note:
If your cell is out-of-date, the charts and tables that you see are out-of-date. However, you
can still explore and manipulate the existing table and chart data and insert design points.
• Design points exist in the Parameter Set bar. Because this is not the case by default, you can
easily accomplish this by selecting the Preserve Design Points After DX Run check box for
the DesignXplorer component and then updating the component. If you want to enable this
property for all new DesignXplorer components, you do so in Tools → Options → Design
Exploration. For more information, see Design Exploration Options (p. 35).
• A Workbench project report exists. For information about Workbench project reports, see
Working with Project Reports in the Workbench User's Guide.
• The image to display in DesignXplorer tables and charts is selected for the Report Image
property for a DesignXplorer component that uses design points. In the following figure, you
can see that Preserve Design Points After DX Run is selected and Report Image has a PNG
file selected. You can select from all PNG files that were generated for the Workbench project
report.
Initially, Report Image is set to None, which means tables using design points do not display the
Report Image column. However, once you select a PNG file for Report Image, all tables using design
points display this column. If the selected image exists for a design point, the column displays a
thumbnail.
From a chart, you open the image for a point by right-clicking the point and selecting Show Report
Image. If this context menu option is not available, the selected point is not linked to a design point.
During a design point update, the Report Image column displays new thumbnails as images for
design points are generated. While the update is running, you can open an image.
As a consequence, if the same design points are reused when previewing or updating a design ex-
ploration system, they immediately show up as up-to-date and the cached output parameter values
display.
The cached data is invalidated automatically when relevant data changes occur in the project. You
can also force the cache to be cleared by right-clicking in an empty area of the Project Schematic
and selecting Clear Design Points Cache for All Design Exploration Systems.
While a Direct Optimization system is updating, DesignXplorer refreshes the results in the design
points table as they are calculated. Design points pulled from the cache also display. The Table pane
generally refreshes dynamically as design points are submitted for update and as they are updated.
However, if design points are updated via Remote Solve Manager, the RSM job must be completed
before results in the Table pane are refreshed.
Once the optimization is completed, the raw design point data is saved. When you select Raw Optim-
ization Data in the Outline pane, the Table pane displays the raw data. While you cannot edit the
raw data, you can export it to a CSV file by right-clicking in the table and selecting Export Table
Data as CSV. For more information, see Exporting Design Point Parameter Values to a Comma-Separ-
ated Values File in the Workbench User's Guide. Once the raw data is exported, you can import it as a
custom DOE.
You can also select one or more rows in the table and right-click to select one of the following options
from the context menu:
• Insert as Design Point: Creates new design points in the project by copying the input para-
meter values of the selected candidate points to the design points table for the Parameter
Set bar. The output parameter values are not copied because they are approximated values
provided by the response surface.
• Insert as Custom Candidate Point: Creates new custom candidate points in the candidate
points table by copying the input parameter values of the selected candidate points.
Note:
The design point data is in raw format, which means it is displayed without analysis or
optimization results. Consequently, it does not show feasibility, ratings, Pareto fronts, and
so on.
• Update the current design point by right-clicking it and selecting Update Selected Design Point.
Note:
The Update Project operation is processed differently than a design point update. When
you update the whole project, design point data is not logged. You must use the right-
click context menu to log data on the current design point.
• Update either selected design points or all design points using the Update Selected Design Point
option on the right-click context menu.
• Open an existing project. For all up-to-date design points in the project, data is immediately logged.
Formatting
The generated log file is in the extended CSV file format used to export table and chart data and
to import data from external CSV files to create new design, refinement, and verification points.
While this file is primarily formatted according to CSV standards, it supports some non-standard
formatting conventions. For more information, see Exporting Design Point Parameter Values to
a Comma-Separated Values File.
File Location
The log file is named DesignPointLog.csv and is written to the directory user_files for
the Workbench project. You can locate the file in the Files pane of Workbench by selecting View
→ Files.
Because the design point log file is in the extended CSV file format used elsewhere by
DesignXplorer, you can import the design point data back into the design points table for the
Design of Experiments cell of any design exploration system.
To import data from the design point log file, you set Design of Experiments Type to Custom.
The list of parameters in the file must exactly match the order and parameter names in
DesignXplorer. For example, the order and names might be P1, P7, and P3.
To import the log file, you might need to manually extract a portion.
• Review the column order and parameter names in the header row of the Table pane.
• Export the first row of your custom DOE to create a file with the correct order and parameter
names for the header row.
2. Find the file DesignPointLog.csv in the directory user_files for the project and then
compare it to your exported DOE file. Verify that the column order and parameter names exactly
match those in DesignXplorer.
3. If necessary, update the column order and parameter names in the design point log file.
If parameters were added or removed from the project, the file has several blocks of data, distinguished
by header lines, to reflect this.
4. Manually remove any unnecessary sections from the file, keeping only the block of data that is
consistent with your current parameters.
Note:
The header line is produced when the log file is initially created and reproduced when
a parameter is added or removed from the project. If parameters have been added or
removed, you must verify the match between DesignXplorer and the log file header
row again.
2. Right-click any cell in the design points table and select Import Design Points and then Browse.
3. Browse to the directory user_files for the project and select the file DesignPointLog.csv.
The design point data is loaded into the design points table.
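To make the header comparison in the previous steps less error-prone, a small script can check that
the header row of DesignPointLog.csv matches the expected column order and parameter names.
This is a minimal Python sketch: the expected names below reuse the P1, P7, P3 example from above,
the leading Name column is an assumption about the exported layout, and lines starting with # are
assumed to be comment lines of the extended CSV format.

    import csv

    # Hypothetical expected header; adapt it to the order and names shown in your Table pane.
    expected = ["Name", "P1", "P7", "P3"]

    with open("DesignPointLog.csv", newline="") as f:
        rows = csv.reader(f)
        # Skip any leading comment lines (assumed to start with '#').
        header = next(row for row in rows if row and not row[0].startswith("#"))

    if header != expected:
        print("Header mismatch:")
        print("  found:   ", header)
        print("  expected:", expected)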
In the power options for your computer, configure your computer to always remain turned on.
Set options for the time after which to turn off the hard disk, sleep, and hibernate to Never.
This can avoid update issues that are caused by the computer timing out or shutting down
during an update.
If you're using CAD, select Tools → Options → Geometry Import and set CAD Licensing to
Hold. This places a hold on the license currently being used so that it is available for the update.
When the update finishes, you can set CAD Licensing back to Release, which is the default.
Don't close your modeling application while you are running DesignXplorer. For example, using
the "Reader" mode in CAD can create update issues because the editors are closed by default.
By default, Ansys Mechanical and Ansys Meshing restart after each design point. This default
value lengthens the overall processing time but improves overall system performance (memory
and CPU) when the generation steps of each design point (geometry, mesh, solve, and postpro-
cessing) are lengthy.
If overall system performance is not a concern, you can reduce the overall processing time by
directing Ansys Mechanical and Ansys Meshing to restart less frequently or not at all. Select
Tools → Options and then in the Mechanical and Meshing tabs, under Design Points, do one
of the following:
• To restart less frequently, set the number of design points to update before restarting to a
higher value, such as 10.
• To prevent restarts completely, clear the check box for periodically restarting during a design
point update.
Only expose the parameters that you are actively optimizing. Additionally, turn off surfaces that
aren't being used, coordinate system imports, and so on.
While design exploration provides some safeguards against parametric conflicts and poorly
defined problems, it cannot identify and eliminate all potential issues. Give careful consideration
to your parametric setup, including factors such as ranges of variation, to make sure that the
setup is reasonable and well-defined.
If you have geometry parameters, double-check your mesh to avoid any simulation errors that
can be caused by meshing. Pay particular attention to local refinement and global quality. If
you are using Ansys Mechanical or Fluent, use the Morphing option when possible.
Consider setting Design of Experiments Type to Box-Behnken Design if your project has
parametric extremes (such as parameter values in corners that are difficult to build) and has 12
or fewer continuous input parameters. Because this DOE type doesn't have corners and does
not combine parametric extremes, it can reduce the risk of update failures.
Design points often fail because the model parameters cannot be regenerated. To test the
model parameters, try creating a project with a test load and coarse mesh that runs quickly.
Solve the test project to verify the validity of the parameters.
You have the ability to submit a Geometry-Only update for all design points to DesignXplorer.
In the Properties pane for the Parameter Set bar, set Update Option to Submit to Remote
Solve Manager and set Pre-RSM Foreground Update to Geometry. When you submit the
next update, the geometry is updated first, so any geometry failures are found sooner.
Occasionally, all of your parameters appear to be feasible, but the model still fails to update
due to issues in the history-based CAD tool. In this case, you can re-parameterize the model in
Ansys SpaceClaim or a similar direct modeling application.
Try to update the current design point before submitting a full design point update. If the update
fails, you can open the project in an Ansys product to further investigate this design point. For
more information, see Handling Failed Design Points (p. 289).
Select a few design points with extreme values and try to update them. For example, select the
design points with the smallest gap and largest radius and try to update them. In this way, you
can assess the robustness of your design before committing to a full correlation.
Caution:
Because every design point is retained within the project, this approach can affect
performance and disk resources when used on projects with 100 or more design
points.
The preservation of design points and design point data can be defined as the default behavior at
the project level or can be configured for individual cells.
• To set this as the default behavior at the project level, select Tools → Options → Design Explor-
ation. Under Design Points, select both the Preserve Design Points After DX Run and Retain
Data for Each Preserved Design Point check boxes.
– When you opt to preserve the design points, the design points are added to the design points
table for the Parameter Set bar.
– When you opt to also retain the design point data, the calculated data for each design point
is saved in the project.
• To configure this functionality at the component level, right-click the cell and select Edit. In the
Properties pane, under Design Points, select both the Preserve Design Points After DX Run
and Retain Data for Each Preserved Design Point check boxes.
Once data has been retained for a design point, you can set it as the current design point. This
enables you to review the associated design within the project and further investigate any update
problems that might have occurred.
The first step in gathering information about the update issue is to review error messages. In
DesignXplorer, a failed design point is considered to be completely failed. In reality, though, it
might be a partial failure, where the design point failed to update for only a single output. As
such, you need to know which output is related to the failure.
You can find this information by hovering your mouse over a failed design point in the design
points table. A primary error message tells you what output parameters have failed for the
design point and specifies the name of the first failed component. Additional information might
also be available in the Messages pane.
Sometimes a design point update fails because of a short-term issue, such as a license or CPU
resource being temporarily unavailable. The issue could actually have been remedied in the
time between your first update attempt and a second attempt. Try again to update all design
points before proceeding further.
In Tools → Options → Design Exploration, the Retry Failed Design Points check box globally
sets whether DesignXplorer is to make additional attempts to solve all design points that failed
during the previous run. This check box is applicable to all DesignXplorer systems except
Parameters Correlation systems that are linked to response surfaces.
When Retry Failed Design Points is selected, Number of Retries and Retry Delay become
available so that you can specify the number of times that the update should be retried and
the delay in seconds between each attempt.
You can override this global option in the Properties pane for a cell. In the Properties pane,
under Failed Design Points Management, setting Number of Retries to 0 disables automat-
ically retrying the update for failed design points. When any other integer value is set, Retry
Delay is available so that you can specify the delay in seconds between each attempt.
Error messages can indicate that the failure was caused by factors external to DesignXplorer,
such as network, licensing, or hardware issues. For example, the update can fail because a license
was not available when it was needed or because a problem existed with your network con-
nectivity. If the issue isn't remedied by your second attempt to update, address the issue in
question and then retry the update.
To avoid potentially invalidating the original project, you should create a duplicate test project
using the Save As menu option. You can either duplicate the entire project or narrow your focus
by creating separate projects for one or more failed design points. To create separate projects, select
the failed design points in the table. Then, right-click any of these selections and select Export
Selected Design Points.
Once a design point has retained data, you can right-click it and select Set as Current. You can
then open the project in an editor to troubleshoot the update issue. For more information, see
Retaining Data for Generated Design Points (p. 280).
To further investigate one or more failed design points without invalidating other design points
or design exploration results, you can create a duplicate test project for the failed design points.
Right-click the one or more failed design points with retained data and select Export Selected
Design Points. With the next update, each failed design point that you've selected is exported
to a separate project. For more information, see Exporting Design Points to New Projects in the
Workbench User's Guide.
Working with Sensitivities
The sensitivities available for goal-driven optimizations are statistical sensitivities, which are global
sensitivities. The single parameter sensitivities available for response surfaces are local sensitivities.
Global, statistical sensitivities are based on a correlation analysis using the generated sample points,
which are located throughout the entire space of input parameters.
Local parameter sensitivities are based on the difference between the minimum and maximum value
obtained by varying one input parameter while holding all other input parameters constant. As such,
the values obtained for local parameter sensitivities depend on the values of the input parameters that
are held constant.
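As an illustrative sketch (the exact normalization used by DesignXplorer is not shown here), a
single-parameter local sensitivity of an output y to an input x_j can be expressed as the variation
of y obtained while only x_j moves over its range:

    S_{j} \approx \max_{x_j} y(x_j, \bar{x}_{-j}) - \min_{x_j} y(x_j, \bar{x}_{-j})

where \bar{x}_{-j} denotes the fixed values of all other input parameters.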
Global, statistical sensitivities do not depend on the values of the input parameters because all possible
values for the input parameters are already taken into account when determining the sensitivities.
• Sensitivity charts are not available if all input parameters are discrete.
Working with Tables
You can add a new row to a table by entering values in the * row. You enter values in columns for input
parameters. Once you enter an input value in the * row, the row is added to the table and the remaining
input parameters are set to their initial values. You can then edit this row in the table, changing input
parameter values as needed. Output parameter values are calculated when the cell is updated.
Output parameter values calculated from a design point update are displayed in black text. Output
parameter values calculated from a response surface are displayed in the custom color specified in
Tools → Options → Design Exploration → Response Surface. For more information, see Response
Surface Options (p. 38).
If the source design point is deleted from the Parameter Set bar or the definition of either design
point is changed, the link between the two points is broken without invalidating your model or results.
Additionally, the indicator is removed from the name of the point.
• Design points table for a Design of Experiments cell or Design of Experiments (3D ROM)
cell when the Design of Experiments Type property is set to either Custom or Custom +
Sampling
Because output values are provided by a real solve, editing output values is not normally necessary.
However, you might want to insert existing results from external sources such as experimental data
or known designs. Rows with editable output values are not calculated during an update of the DOE.
By default, output values are calculated during an update. However, you can make output values in
the previously specified tables editable using options on the context menu:
• Set Output Values as Editable makes the output values editable in selected rows. If you right-
click one or more rows with editable output values, you can select Set Output Values as
Calculated if you want output values to be set as calculated once again. However, this inval-
idates the DOE, requiring you to update it.
• Set All Output Values as Calculated sets output values for all rows as calculated.
• Set All Output Values as Editable makes the output values editable in all rows.
– If you enter an input value in the * row, the row is added to the table and the remaining
input parameters are set to their initial values. The output parameter values are blank.
You must enter values in all columns before updating the DOE.
– If you enter an output value in the * row, the row is added to the table and the values
for all input parameters are set to their initial values. The remaining output parameter
values are blank. You must enter values in all columns before updating the DOE.
Note:
• A table can contain derived parameters. Derived parameters are always calculated, even
if Set All Output Values as Editable is selected.
• Editing output values for a row changes the cell's state to indicate that an update is re-
quired. The cell must be updated, even though no calculations are done.
• If the points are solved and you select Set All Output Values as Editable and then select
Set All Output Values as Calculated without making any changes, the outputs are
marked out-of-date. You must update the cell to recalculate the points.
To manipulate the data in a large number of table rows, you should use the export and import cap-
abilities that are described in the next two sections.
The import option is available from the context menu when you right-click a cell in the Project
Schematic. It is also available when you right-click in the Table pane for a cell where the import
feature is implemented. For example, you can right-click a Response Surface cell in the Project
Schematic and select Import Verification Points. Or, you can right-click in the Table pane for a
Design of Experiments cell or a Design of Experiments (3D ROM) cell and select Import Design
Points from CSV.
Note:
For the Import Design Points from CSV option to be available for a DOE cell, the
Design of Experiments Type property must be set to Custom or Custom + Sampling.
For example, assuming that the DOE cell is set to a custom DOE type, to import design points from
an external CSV file into the DOE cell, do the following:
1. In the Project Schematic, right-click the DOE cell into which to import design points and
select Import Design Points from CSV.
During the import, the CSV file is parsed and validated. If the format is invalid or the described para-
meters are not consistent with the current project, a list of errors is displayed and the import operation
is terminated. If the file validates, the data is imported.
• The file must conform to the extended CSV file format. In particular, a header line identifying
each parameter by its ID (P1, P2, …, Pn) is mandatory to describe each column. For more in-
formation, see Extended CSV File Format in the Workbench User's Guide. A sample file sketch
is shown after this list.
• The order of the parameters in the file might differ from the order of the parameters in the
project.
• Values must be provided in the units defined, which you can see by selecting Units → Display
Values As Defined.
• If values for output parameters are provided, they must be provided for all output parameters
except derived output parameters. Any values provided for derived output parameters are
ignored.
• Even if the header line states that output parameter values are provided, it is possible to omit
them on a data line.
• If parameter values for imported design points are out of range or do not fill the range, a
dialog box opens, giving you options for automatically expanding and shrinking the parameter
ranges in the DOE to better fit the imported values. This same dialog box can also appear
when you are copying design points from the Parameter Set bar into a DOE. For more inform-
ation, see Parsing and Validation of Design Point Data (p. 97).
• Once design points are imported, any provided output parameter values are editable. For
design points imported without output parameter values, values are read-only. You must either
update the DOE cell or set output parameter values as editable (p. 292) and then enter values
to continue.
To send updates via RSM, you must first install and configure RSM. For more information, see the Install-
ation and Licensing Help and Tutorials page on the Ansys Customer Portal. From the menu, select
Downloads → Installation and Licensing Help and Tutorials.
For more information on submitting design points for remote update, see Updating Design Points in
Remote Solve Manager in the Workbench User's Guide.
For more information on configuring the solution cell update location, see Solution Process in the
Workbench User's Guide.
For more information on configuring the design point update option, see Setting the Design Point
Update Option in the Workbench User's Guide.
Previewing Updates
When a Design of Experiments, Response Surface, or Parameters Correlation cell requires an update,
in some cases, you can preview the results of the update. A preview prepares the data and displays it
without updating the design points. This allows you to experiment with different settings and options
before actually solving a DOE or generating a response surface or parameters correlation.
• In the Project Schematic, either right-click the cell to update and select Preview or select the cell
and click Preview on the toolbar.
• In the Outline pane for the cell, either right-click the root node and select Preview or select this
node and click Preview on the toolbar.
Pending State
Once a remote update for a Design of Experiments, Response Surface, or Parameters Correlation
cell begins, the cell enters a pending state. However, some degree of interaction with the project is still
available. For example, you can:
• Open the Options window, access the component tab for the Parameter Set bar, and archive the
project.
• Follow the overall process of an update in the Progress pane by clicking Show Progress in the lower
right corner of the window.
• Interrupt, abort, or cancel the update by clicking the red stop button to the right of the progress bar
in the Progress pane and, in the dialog box that then opens, clicking the button for the desired op-
eration.
• Follow the process of individual design point updates in the Table pane for the cell. Each time a
design point is updated, the Table pane displays the results and status.
• Exit the project and either create a new project or open another existing project. If you exit the project
while it is in a pending state due to a remote design point update, you can later reopen it. The update
will then resume automatically.
You can also exit a project while a design point update or a solution cell update via RSM is in progress. For inform-
ation on behavior when exiting during an update, see Updating Design Points in Remote Solve
Manager or Exiting a Project During a Remote Solve Manager Solution Cell Update in the Workbench
User's Guide.
Note:
• For iterative update processes such as Kriging refinement, Sparse Grid response surface
updates, and Correlation with Auto-Stop, design point updates are sent to RSM iteratively.
The calculations of each iteration are based on the convergence results of the previous
iteration. Consequently, if you exit Workbench during the pending state for an iterative
process, when you reopen the project, only the current iteration is completed.
• The pending state is not available for updates that contain verification points or candidate
points. You can still submit these updates to RSM. However, you do not receive interme-
diate results during the update process, and you cannot exit the project. Once the update
is submitted via RSM, you must wait for the update to complete and the results to be re-
turned from the remote server.
For more information, see Working with Ansys Remote Solve Manager and User Interface Overview in
the Workbench User's Guide.
Selecting File → Export Report generates the project report, which you can edit and save. For more
information, see Working with Project Reports in the Workbench User's Guide.
The project report begins with a summary, followed by separate sections for global, system, and com-
ponent information. The project report also has appendices.
Project Summary
The project summary includes the project name, date and time created, and product version.
Global Information
This section includes an image of the schematic and tables corresponding to the Files, Outline of All
Parameters, Design Points, and Properties panes.
System Information
A system information section exists in the project report for each design exploration system in the
project. For example, a project with Response Surface and Response Surface Optimization systems
has two system information sections.
Component Information
Each system information section contains subsections for the components (cells) in the system. For ex-
ample, because a Response Surface Optimization system has three cells, its system information section
has three subsections. Because a Parameters Correlation system has only one cell, its system information
section has only one subsection. Each component information subsection contains such data as para-
meter or model properties and charts.
Appendices
The final section of the project report consists of matrices, tables, and additional information related
to the project.
DesignXplorer Theory
When performing a design exploration, a theoretical understanding of the methods available is beneficial.
The underlying theory of the methods is categorized as follows:
Parameters Correlation Filtering Theory
Response Surface Theory
Goal-Driven Optimization Theory
Theory References
• The relevance of the correlation value between input and output parameters
• The R2 contribution of input parameters in prediction of output parameter values (gain in prediction)
For each correlation value, DesignXplorer computes the p-value of this correlation.
The p-value of a correlation value allows DesignXplorer to quantify the relevance of the correlation.
• rXY = observed correlation between X (an input parameter) and Y (an output parameter)
The p-value corresponds to the probability of obtaining a correlation at least as large as the observed rXY, assuming that the null hypothesis (no correlation between X and Y) is true. The p-value is given by:
When trying to find a relationship between X and Y, two types of errors are possible:
• Type I Error (false positive): You think that there is a relationship between X and Y, when there is
in fact no relation between X and Y.
• Type II Error (false negative): You think that there is no relationship between X and Y, when there
is in fact a relationship between X and Y.
The larger the correlation value rXY , the less likely it is that the null hypothesis is true.
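As an illustration of this test, the following minimal Python sketch (not DesignXplorer code; the sample values are hypothetical) computes a correlation value and its p-value:

# Minimal illustration of the p-value of a correlation coefficient.
# The sample arrays below are hypothetical; DesignXplorer performs the
# equivalent test internally on the generated sample points.
from scipy import stats

x = [0.10, 0.25, 0.40, 0.55, 0.70, 0.85]   # input parameter samples (illustrative)
y = [1.02, 1.31, 1.55, 1.90, 2.05, 2.41]   # output parameter samples (illustrative)

r_xy, p_value = stats.pearsonr(x, y)        # observed correlation and its p-value
print("correlation r_XY =", round(r_xy, 4))
print("p-value          =", round(p_value, 4))
# A small p-value means it is unlikely to observe such a correlation
# if the null hypothesis (no relationship between X and Y) were true.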
The relevance value of the correlation value exposed at the DesignXplorer level is a transformation
of the p-value from [0; 1] to [0; 1]:
When running a filtering method with a relevance threshold equal to 0.5 on the correlation value only, each selected major input parameter must have a p-value of its correlation coefficient with at least one filtering output parameter that is equal to or less than 0.05.
R2 Contribution
For each filtering output parameter, DesignXplorer builds an internal meta-model (polynomial response
surface) based on the sample points generated during the correlation update.
A first response surface is built based on the major input parameters returned by the filtering on the correlation values.
Once a first response surface has been built, DesignXplorer tests the contribution of each input
parameter in the prediction of the output parameter, and refines the response surface by removing
the insignificant input parameters and by adding new significant input parameters.
• = R-squared of the response surface based on the current major input parameters, without the tested major input parameter
• = R-squared of the response surface based on the current major input parameters plus the tested minor input parameter
DesignXplorer computes the Fisher Test to test the significance of each major input parameter and the gain in prediction:
Under the null hypothesis that the response surface including the major input parameter does not provide a significantly better R-squared than the response surface without it, the test statistic has an F distribution with the corresponding degrees of freedom.
The null hypothesis is rejected if the statistic calculated from the data is greater than the critical value of the distribution for a false-rejection probability of 0.05. This means that if the p-value of the Fisher Test is greater than 0.05, the major input parameter becomes a minor input parameter.
DesignXplorer computes the same Fisher Test to assess the significance of each minor input parameter and the gain in prediction obtained by adding it:
Under the null hypothesis that the response surface including the minor input parameter does not provide a significantly better R-squared than the response surface without it, the test statistic has an F distribution with the corresponding degrees of freedom.
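The exact statistic and degrees of freedom used internally are not reproduced here, but the classical partial F-test between two nested regression models, which this description corresponds to, can be sketched in Python as follows (the R-squared values and model sizes are hypothetical):

# Partial F-test between two nested regression models (illustrative sketch).
# r2_reduced : R-squared of the response surface without the tested parameter
# r2_full    : R-squared of the response surface including it
# n          : number of sample points, p_reduced/p_full : number of model terms
from scipy import stats

def partial_f_test(r2_reduced, r2_full, n, p_reduced, p_full):
    df1 = p_full - p_reduced
    df2 = n - p_full
    f_stat = ((r2_full - r2_reduced) / df1) / ((1.0 - r2_full) / df2)
    p_value = stats.f.sf(f_stat, df1, df2)   # survival function = upper tail
    return f_stat, p_value

f_stat, p_value = partial_f_test(0.912, 0.948, n=60, p_reduced=5, p_full=6)
# Following the rule described above: if p_value > 0.05, the tested parameter
# does not significantly improve the prediction.
print(f_stat, p_value)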
For each output parameter, DesignXplorer computes the contribution of each input parameter.
For a given output, to compute the relevance of a contribution, DesignXplorer scales the contribution by the best known contribution for this output and transforms this value with the following function:
• = relevance of the i-th input parameter on the j-th filtering output parameter
• = best relevance of the i-th input parameter for any filtering output parameter
The list of major inputs corresponds to the input parameters whose relevance satisfies the threshold.
If the size of the list exceeds the allowed number of input parameters, DesignXplorer selects only the input parameters with the best relevance.
trial-and-error search to a powerful and cost-effective (in terms of computational time) statistical
method.
A very simple designed experiment is the screening design. In this design, a permutation of the lower and upper limits (two levels) of each input variable (factor) is considered in order to study their effect on the output variable of interest. While this design is simple and popular in industrial experimentation, it can only capture a linear effect, if any, between the input variables and the output variables. Furthermore, the effect of an interaction between any two input variables on the output variables cannot be characterized.
To compensate for this insufficiency of the screening design, it is enhanced to include the center point of each input variable in the experiments. The center point of each input variable allows a quadratic effect (a minimum or maximum inside the explored space) between the input variables and the output variables to be identified, if one exists. This enhancement is commonly known as a response surface design because it provides a quadratic model of the responses.
The quadratic response model can be calibrated using a full-factorial design (all combinations of each level of each input variable) with three or more levels. However, full-factorial designs generally require more samples than necessary to accurately estimate the model parameters. To address this deficiency, statistical procedures have been developed to devise much more efficient experiment designs that use three or five levels of each factor, but not all combinations of levels. These are known as fractional factorial designs. Among these fractional factorial designs, the two most popular DOE types (p. 304) are Central Composite Designs (CCDs) (p. 304) and Box-Behnken designs (p. 306).
In circumscribed CCDs, considered to be the original form of CCDs, the value of α is greater than 1. The following is a geometrical representation of a circumscribed CCD of three factors:
Example 1: Circumscribed
Inscribed CCDs, on the contrary, treat the specified lower and upper limits as the "true" physical limits for the experiments. The five-level coded values of inscribed CCDs are evaluated by scaling down the circumscribed CCD by the value of α evaluated from the circumscribed CCD, so that all points remain within the specified limits. The following is a geometrical representation of an inscribed CCD of three factors:
Example 2: Inscribed
Face-centered CCDs are a special case of CCDs in which α = 1. As a result, a face-centered CCD becomes a three-level design in which the axial points are located at the center of each face formed by any two factors. The following is a geometrical representation of a face-centered CCD of three factors:
Example 3: Face-Centered
Box-Behnken Design
Unlike the CCD, the Box-Behnken Design is quadratic and does not contain embedded factorial or
fractional factorial design. As a result, the Box-Behnken Design has a limited capability of orthogonal
blocking, compared to CCD. The main difference of Box-Behnken Design from CCD is that Box-
Behnken is a three-level quadratic design in which the explored space of factors is represented by
. The "true" physical lower and upper limits corresponding to . In this design,
however, the sample combinations are treated such that they are located at midpoints of edges
formed by any two factors. The following is a geometry representation of Box-Behnken designs of
three factors:
The circumscribed CCD generally provides a high quality of response prediction. However, the circumscribed CCD requires factor level settings outside the physical range. Due to physical and economic constraints, some industrial studies prohibit the use of corner points, where all factors are at an extreme. In such cases, the factor spacing/range can be planned in advance so that each coded factor falls within feasible levels.
The inscribed CCD uses points only within the specified factor levels. This means that it does not
have the "corner point constraint" issue that the circumscribed CCD has. However, the improvement
compromises the accuracy of the response prediction near the lower and upper limits of each
factor. While the inscribed CCD provides a good response prediction over the central subset of the factor space, the quality is not as high as the response prediction provided by the circumscribed CCD.
Like the inscribed CCD, the face-centered CCD does not require points outside the original factor
range. Compared to the inscribed CCD, the face-centered CCD provides a relatively high quality of
response prediction over the entire explored factor space. The drawback of the face-centered CCD
is that it gives a poor estimate of the pure quadratic coefficient, which is the coefficient of square
term of a factor.
For relatively the same accuracy, the Box-Behnken Design is more efficient than the CCD in cases involving three or four factors because it requires fewer treatments of factor level combinations. However, like the inscribed CCD, the prediction at the extremes (corner points) is poor. The property of "missing corners" can be useful if these points should be avoided due to physical or economic constraints, because the potential for data loss in those cases is prevented.
For information on the Neural Network response surface, see Neural Network (p. 114).
Genetic Aggregation
The Genetic Aggregation response surface can be written as an ensemble using a weighted average of different metamodels:

y_ens(x) = Σ_i w_i · y_i(x),  with Σ_i w_i = 1 and w_i ≥ 0

where:
y_i(x) = prediction of the i-th metamodel at the point x
w_i = weight factor of the i-th metamodel
To estimate the best weight factor values, DesignXplorer minimizes the Root Mean Square Error (RMSE) of the ensemble at the Design of Experiments points (or design points), together with the RMSE of the same design points based on the cross-validation of the ensemble.
With:

RMSE_CV = sqrt( (1/N) Σ_j ( y(x_j) − y_(−j)(x_j) )² )

where:
x_j = j-th design point
y(x_j) = output parameter value at x_j
y_(−j)(x_j) = prediction of the response surface built without the design point x_j
N = number of design points
Cross-Validation
DesignXplorer uses the Leave-One-Out and K-Fold cross-validation methods.
Leave-One-Out Method:
• For a given i-th response surface, DesignXplorer computes N sub-metamodels, where each sub-response surface corresponds to the i-th response surface fitted to N − 1 design points.
• The cross-validation error of the j-th design point is the error at this point of the sub-response surface built without the j-th design point. (A sketch of this calculation follows the K-Fold description below.)
K-Fold Method:
• Builds k sub-metamodels of the i-th response surface, where each sub-response surface corresponds to the i-th response surface fitted to N − N/k design points.
• The cross-validation error at the j-th design point is the error at this point of the sub-response surface built without the subset of N/k design points containing the j-th design point.
• To improve the relevance of the k-fold strategy, the N/k design points used as validation points of each fold are selected by using a maximin (maximum of the minimum distance) criterion.
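A minimal Python sketch of the leave-one-out calculation, using a simple polynomial fit as a stand-in for the i-th response surface (the sample data are illustrative, not DesignXplorer output):

# Leave-one-out cross-validation RMSE for a simple 1D surrogate (sketch).
# numpy.polyfit stands in for the i-th response surface type being evaluated.
import numpy as np

x = np.linspace(0.0, 1.0, 12)                 # design point locations (illustrative)
y = np.sin(2.0 * np.pi * x) + 0.5 * x         # output parameter values (illustrative)

cv_errors = []
for j in range(len(x)):
    mask = np.arange(len(x)) != j             # leave the j-th design point out
    coeffs = np.polyfit(x[mask], y[mask], deg=3)
    y_pred = np.polyval(coeffs, x[j])          # prediction without point j
    cv_errors.append(y[j] - y_pred)            # cross-validation error at point j

rmse_cv = np.sqrt(np.mean(np.square(cv_errors)))
print("leave-one-out RMSE:", rmse_cv)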
If you denote the cross-validation error of the i-th response surface at the j-th design point (obtained from the sub-response surface built without that point) as e_ij, then:
DesignXplorer computes the weight factor values analytically. This computation is based on an approach similar to the one proposed by Viana et al. (2009).
where 1 is the vector of ones and C is the matrix of the mean square cross-validation errors:
With:
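A minimal sketch of this weight computation, assuming the analytic solution attributed above to Viana et al. (2009), in which the weights are obtained from the matrix C of mean cross-validation square errors (the matrix values below are illustrative; the actual DesignXplorer implementation may differ):

# Analytic ensemble weights from the cross-validation error matrix (sketch,
# following the approach attributed above to Viana et al., 2009).
import numpy as np

# C[i, k] = mean of e_ij * e_kj over the design points j, where e_ij is the
# cross-validation error of the i-th response surface at the j-th point.
C = np.array([[0.040, 0.012, 0.008],
              [0.012, 0.055, 0.010],
              [0.008, 0.010, 0.030]])          # illustrative values

ones = np.ones(C.shape[0])
w = np.linalg.solve(C, ones)                   # C^-1 * 1
w /= ones @ w                                  # normalize so the weights sum to 1
print("ensemble weight factors:", w)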
Genetic Algorithm
There are many types of metamodels, including Polynomial Regression, Kriging, Support Vector
Regression, and Moving Least Squares. For each response surface, there are different settings. For
example, on the Kriging response surface, you can control the type of kernel (Gaussian, exponential,
and so on), the type of kernel variation (anisotropic or isotropic), and the type of polynomial regres-
sion (linear, quadratic, and so on).
To increase the chance of getting the most effective response surface, DesignXplorer generates a
population of metamodels with different types and settings. This population corresponds to the
first population of the genetic algorithm run by DesignXplorer. The next populations are obtained
by cross-over and mutation of the previous population.
Cross-Over Operator
There are two types of cross-over. The first one is the cross-over between two response surfaces
of the same type (for example, two Kriging), and the second one is the cross-over between two
response surfaces of different types (for example, a Kriging and a polynomial regression).
In the first case, DesignXplorer exchanges a part of settings from the first parent to the second
parent (for example, the exchange of kernel type between two Kriging response surfaces).
In the second case, DesignXplorer creates a new response surface (an ensemble), which is a com-
bination of the two parents (for example, the combination of a Kriging and a polynomial regression
response surface).
Mutation Operator
DesignXplorer mutates one or several settings of the response surface (or of the response surfaces,
in the case of a combination of response surfaces).
To maintain diversity among response surfaces, the genetic algorithm removes part of any response surface type that is overrepresented in the population, while retaining response surface types that are less represented.
In the best case, the population contains metamodels that are similar in terms of prediction accuracy but whose predicted values differ: this increases the chance of error cancellation in the ensemble.
For external references, see Genetic Aggregation Response Surface References (p. 359).
Automatic Refinement
Gaussian Process regression methods, such as Kriging, provide a prediction distribution. The genetic
aggregation can contain several surrogate models that do not afford a local prediction distribution.
Ben Salem et al. [1] introduced a universal prediction (UP) for all surrogate models. This distribution
relies on cross-validation sub-model predictions. Based on this empirical distribution, they define
an adaptive sampling technique for global refinement (UP-SMART).
The Genetic Aggregation refinement is a hybrid variant of UP-SMART that consists of adding, at each step, a point:
With:
• = distance penalization
Bibliography
[1] M. Ben Salem, O. Roustant, F. Gamboa, & L. Tomaso (2017). "Universal prediction distribution for
surrogate models." SIAM/ASA Journal on Uncertainty Quantification, 5(1), 1086-1109.
The following figure represents the absolute predicted error of a response surface with only one
input parameter:
For N refinement points to submit simultaneously, the generation of the i-th refinement point
depends on the (i-1) first pending refinement points, with i > 2. The response surface is updated
only when all N refinement points have been generated.
While the previous figure shows that the first refinement point is based on the APE, in the next figure, refinement points are based on the absolute weighted predicted error (AWPE), which takes into account the influence of the pending refinement points on the response surface. AWPE is a transformation of APE that reduces the predicted error around the pending refinement points in order to favor domains with a high predicted error and a low density of pending refinement points. The i-th refinement point, with i > 1, is selected as the point with the maximum AWPE. (A schematic sketch of this selection loop follows the definition of W(X′) below.)
Algorithm Summary: the refinement proceeds in batches of N pending points, with an inner counter i over the points generated within the current batch (starting from the second point) and an outer counter j accumulating the total number of generated refinement points (j = j + N after each batch).
where:
• W(X′) is equal to 1 over the full input parameter space except close to the point X′, where W decreases to 0 at X′
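The following Python sketch illustrates this selection loop schematically. The error model ape and the penalization w_penalty are placeholder functions, not the DesignXplorer implementation:

# Schematic selection of N pending refinement points using a penalized
# predicted error (sketch only; APE and W are placeholder functions).
import numpy as np

def ape(x):
    # Placeholder for the absolute predicted error of the response surface.
    return np.exp(-10.0 * (x - 0.3) ** 2) + 0.6 * np.exp(-40.0 * (x - 0.8) ** 2)

def w_penalty(x, x_pending, radius=0.05):
    # Equal to 1 far from a pending point and decreasing to 0 at the point itself.
    w = np.ones_like(x)
    for xp in x_pending:
        w *= np.clip(np.abs(x - xp) / radius, 0.0, 1.0)
    return w

candidates = np.linspace(0.0, 1.0, 401)
pending = []
N = 3
for i in range(N):
    awpe = ape(candidates) * w_penalty(candidates, pending)
    pending.append(candidates[int(np.argmax(awpe))])   # point with maximum AWPE

print("pending refinement points:", pending)
# The response surface would be updated only after all N points are solved.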
General Definitions
The error sum of squares SSE is:

SSE = Σ_i ( y_i − ŷ_i )²   (1)

where:
y_i = value of the output parameter at the i-th sampling point
ŷ_i = value of the regression model at the i-th sampling point
For linear regression analysis, the relationship between these sums of squares is:
For a linear regression analysis, the regression model at any sampled location in the multi-dimensional space of the input variables can be written as:
where:
= row vector of regression terms of the response surface model at the sampled location
Kriging
Kriging postulates a combination of a polynomial model plus departures of the form given by:

y(x) = f(x) + Z(x)
While "globally" approximates the design space, creates "localized" deviations so that
the Kriging model interpolates the sample data points. The covariance matrix of is given
by:
where is the correlation matrix and is the spatial correlation of the function between
any two of the sample points and . is an symmetric, positive definite matrix with
ones along the diagonal. The correlation function is a Gaussian correlation function:
The in are the unknown parameters used to fit the model, is the number of design variables,
and and are the components of sample points and . In some cases, using a single
correlation parameter gives sufficiently good results. You can specify the use of a single correlation
parameter or one correlation parameter for each design variable in the Options dialog box. Select
Tools → Options → Design Exploration → Response Surface and then, under Kriging Options,
make the desired section for Kernal Variation Type.
The resulting Kriging model can be written:
Predicted Error
Kriging, or Gaussian process regression (GPR), is widely popular, especially in spatial statistics. It is based on the early works of Krige [1]. The mathematical framework can be found in [2,3]. Kriging models predict the outputs of a function f based on a set of n observations. Within the GP framework, the posterior distribution is given by the conditional distribution of Y given the observations:
In the Gaussian process framework, Kriging or Gaussian process regression provides a mean function
(prediction) and a predicted variance that DesignXplorer uses to assess the local prediction.
The general form assumes that the true model response is a realization of a Gaussian process de-
scribed by the following equation:
In this case, the Kriging prediction (m) and variance (σ²) are given by:
The standard deviation σ is used as a tool to assess the errors. In fact, for a Gaussian distribution, the interval m ± 3σ covers 99.7% of the possible predictions. The predicted error displayed in DesignXplorer is 3σ.
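The following sketch uses scikit-learn's Gaussian process regressor as a generic stand-in (not the DesignXplorer Kriging implementation) to illustrate how a mean prediction and a 3-sigma predicted error can be obtained; the kernel settings and training data are illustrative:

# Gaussian process prediction with a 3-sigma error band (illustrative sketch).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

x_train = np.array([[0.0], [0.2], [0.4], [0.6], [0.8], [1.0]])
y_train = np.sin(2.0 * np.pi * x_train).ravel()        # illustrative observations

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
gpr.fit(x_train, y_train)

x_new = np.array([[0.5]])
mean, std = gpr.predict(x_new, return_std=True)        # m and sigma
predicted_error = 3.0 * std                            # 3-sigma band, as above
print("prediction:", mean[0], "predicted error:", predicted_error[0])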
For a Genetic Aggregation response surface, the predicted error is quantified using the universal prediction (UP) distribution [4]. The UP distribution can be applied to any surrogate model. It is based on a weighted empirical probability measure supported by the cross-validation (CV) sub-model predictions.
As indicated in Automatic Refinement (p. 310) in the genetic aggregation theory section, the predicted
error is the criterion Y.
Bibliography
[1] D. G. Krige. "A Statistical Approach to Some Basic Mine Valuation Problems on the Witwatersrand." Journal of the Chemical, Metallurgical and Mining Society of South Africa, 52:119–139, 1951.
[3] M. L. Stein. "Interpolation of Spatial Data: Some Theory for Kriging." Springer Science & Business
Media, New York, 2012.
[4] M. Ben Salem, O. Roustant, F. Gamboa, & L. Tomaso (2017). "Universal prediction distribution for
surrogate models." SIAM/ASA Journal on Uncertainty Quantification, 5(1), 1086-1109.
Non-Parametric Regression
Let the input sample (as generated from a DOE method) be , where each
is an -dimensional vector and represents an input variable. The objective is to determine the
equation of the form:
(2)
(3)
where: is the kernel map and the quantities and are Lagrange multipliers whose derivations are shown in later sections.
To determine the Lagrange multipliers, you start with the assumption that the weight vector
must be minimized such that all (or most) of the sample points lie within an error zone around the
fitted surface. The following figure provides a simple demonstration. It fits a regression line for a
group of sample points with a tolerance of ε, which is characterized by slack variables ξ* and ξ.
(4)
where is an arbitrary constant. To characterize the tolerance properly, you define a loss function in the interval (−ε, +ε), which de facto becomes the residual of the solution. In the present implementation, the ε-insensitive loss function is used, which is given by the equation:
(5)
The primal problem in Equation 4 (p. 319) can be rewritten as the following using generalized loss
functions:
(6)
To solve this efficiently, a Lagrangian dual formulation is done on this to yield the following expres-
sion:
(7)
After some simplification, the dual Lagrangian can be written as the following Equation 8 (p. 320):
(8)
Equation 8 (p. 320) is a constrained quadratic optimization problem, and the design variables are the vector of Lagrange multipliers. Once this is computed, the constant term in the regression model can be obtained by applying the Karush-Kuhn-Tucker (KKT) conditions. When the ε-insensitive loss functions are used instead of the generic loss functions l(·), this can be rewritten in the much simpler form:
(9)
which is to be solved by a QP optimizer to yield the Lagrange multipliers; the constant term is then obtained by applying the KKT conditions.
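As an illustration of the ε-insensitive loss and the resulting regression, the following sketch uses scikit-learn's generic ε-SVR rather than the DesignXplorer non-parametric regression code; the data and the constant C are placeholders:

# Epsilon-insensitive loss and a generic epsilon-SVR fit (illustrative sketch).
import numpy as np
from sklearn.svm import SVR

def eps_insensitive_loss(residual, eps):
    # Zero inside the +/- eps tube, linear outside of it.
    return np.maximum(0.0, np.abs(residual) - eps)

x = np.linspace(0.0, 1.0, 30).reshape(-1, 1)
y = np.sin(2.0 * np.pi * x).ravel() + 0.05 * np.random.default_rng(0).normal(size=30)

model = SVR(kernel="rbf", epsilon=0.1, C=10.0).fit(x, y)   # C plays the role of the
residuals = y - model.predict(x)                           # arbitrary constant above
print("mean loss:", eps_insensitive_loss(residuals, 0.1).mean())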
Sparse Grid
The Sparse Grid metamodeling is a hierarchical Sparse Grid interpolation algorithm based on
piecewise multilinear basis functions.
Piecewise linear hierarchical basis (from level 0 to level 3):
The calculation of the coefficient values associated with a piecewise linear basis is hierarchical: the coefficients are obtained from the differences between the values of the objective function and the evaluation of the current Sparse Grid interpolation.
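A one-dimensional Python sketch of this hierarchical coefficient (surplus) calculation, using simple piecewise linear interpolation as the current interpolant and a placeholder objective function:

# Hierarchical surpluses for a 1D piecewise linear basis (illustrative sketch).
import numpy as np

def objective(x):
    return np.sin(2.0 * np.pi * x)             # placeholder objective function

nodes = np.array([0.0, 0.5, 1.0])              # current grid
values = objective(nodes)

# New points introduced by the next hierarchical level.
new_nodes = np.array([0.25, 0.75])
current_interp = np.interp(new_nodes, nodes, values)

# Coefficient = difference between the objective and the current interpolant.
surpluses = objective(new_nodes) - current_interp
print("hierarchical surpluses:", surpluses)

nodes = np.concatenate([nodes, new_nodes])     # grid after refinement
values = np.concatenate([values, objective(new_nodes)])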
For a multi-dimensional problem, the Sparse Grid metamodeling is based on piecewise multilinear basis functions that are obtained by a sparse tensor product construction of one-dimensional multilevel bases.
The generation of the Sparse Grid is obtained by the following tensor product:
Example 6: Tensor Product Approach to Generate the Piecewise Bilinear Basis Functions W2,0
To generate a new Sparse Grid , any Sparse Grid that meets the order relation
must be generated before:
means:
with
and
For example, in a two-dimensional problem, the generation of the grid requires several steps:
1. Generation of
2. Generation of and
3. Generation of and
The calculation of the coefficient values associated with a piecewise multilinear basis is similar to the calculation of the coefficients of the linear basis: the coefficients are obtained from the differences between the values of the objective function on the new grid and the evaluation (at the same grid points) of the current Sparse Grid interpolation (based on the old grids).
For a higher-dimensional problem, you can observe that not all input variables carry equal weight. A regular Sparse Grid refinement can lead to too many support nodes. This is why the Sparse Grid metamodeling uses a dimension-adaptive algorithm to automatically detect separability and to determine which dimensions are more or less important, in order to reduce the computational effort for the objective functions.
The hierarchical structure is used to obtain an estimate of the current approximation error. This current approximation error is used to choose the relevant direction in which to refine the Sparse Grids. If the approximation error has been found with the Sparse Grid Wl, the next iteration consists of generating the new Sparse Grids obtained by incrementing each dimension level of Wl (one by one) as far as possible: in a two-dimensional problem, the refinement of Wl can generate two new Sparse Grids (if the grids required by the order relation already exist).
The Sparse Grid metamodeling stops automatically when the desired accuracy is reached or when
the maximum depth is met in all directions (the maximum depth corresponds to the maximum
number of hierarchical interpolation levels to compute: if the maximum depth is reached in one
direction, the direction is not refined further).
The new generation of the Sparse Grid allows as many linear basis functions as there are points of
discretization.
All Sparse Grids generated by the tensor product contain only one point, which allows more local refinement. Sparse Grid metamodeling is more efficient with a more local refinement process that uses fewer design points and reaches the requested accuracy faster.
• Screening
• MOGA
• NLPQL
• MISQP
• Adaptive Single-Objective
• Adaptive Multiple-Objective
The Screening, MISQP, and MOGA methods can be used with discrete parameters. The Screening, MISQP,
MOGA, Adaptive Multiple-Objective, and Adaptive Single-Objective methods can be used with continuous
parameters with manufacturable values.
The GDO process allows you to determine the input parameter values that best satisfy the objectives applied to the output parameters. For example, in a structural engineering design problem, you can determine
the combination of design parameters that best satisfy minimum mass, maximum natural frequency,
maximum buckling and shear strengths, and minimum cost, with maximum value constraints on the
von Mises stress and maximum displacement.
This section describes GDO and its use in performing single- and multiple-objective optimization.
GDO Principles
GDO Guidelines and Best Practices
Goal-Driven Optimization Theory
GDO Principles
You can apply goal-driven optimization to design optimization by using any of the following methods:
• Screening. This is a non-iterative direct sampling method that uses a quasi-random number
generator based on the Hammersley algorithm.
• MOGA. This is an iterative Multi-Objective Genetic Algorithm that can optimize problems with
continuous input parameters.
– Part of the population is simulated by evaluations of the Kriging response surface, which is
constructed from all design points submitted by MOGA.
MOGA is better for calculating the global optima, while NLPQL and MISQP are gradient-based al-
gorithms ideally suited for local optimization. So you can start with Screening or MOGA to locate the
multiple tentative optima and then refine with NLPQL or MISQP to zoom in on the individual local
maximum or minimum value.
The GDO framework uses a Decision Support Process (DSP) based on satisfying criteria as applied to
the parameter attributes using a weighted aggregate method. In effect, the DSP can be viewed as a
postprocessing action on the Pareto fronts as generated from the results of the various optimization
methods.
Usually Screening is used for preliminary design, which can lead you to apply one of the other ap-
proaches for more refined optimization results. Note that running a new optimization causes a new
sample set to be generated.
In either approach, the Tradeoff chart, as applied to the resulting sample set, shows the Pareto-
dominant solutions. However, in MOGA, the Pareto fronts are better articulated and most of the
feasible solutions lie on the first front, as opposed to the usual results of Screening, where the solutions
are distributed across all the Pareto fronts. This is illustrated in the following two figures.
This first figure shows the 6,000 sample points generated by the Screening method.
This second figure shows a final sample set for the same problem after 5,400 evaluations by MOGA.
The following figure demonstrates the necessity of generating Pareto fronts. The optimal Pareto front
shows two non-dominated solutions. The first Pareto front or Pareto frontier is the list of non-dominated
points for the optimization.
A dominated point is a point that, when considered in regard to another point, is not the best solution
for any of the optimization objectives. For example, if point A and point B are both defined, point B
is a dominated point when point A is the better solution for all objectives.
In this example, the two axes represent two output parameters with conflicting objectives: the X axis
represents Minimize P6 (WB_V), and the Y axis represents Maximize P9 (WB_BUCK). The chart shows
two optimal solutions, point 1 and point 2. These solutions are non-dominated, which means that
both points are equally good in terms of Pareto optimality, but for different objectives. Point 1 is the
better solution for Minimize P6 (WB_V), and point 2 is the better solution for Maximize P9
(WB_BUCK). Neither point is strictly dominated by any other point, so both are included on the first
Pareto front.
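The following Python sketch (with hypothetical sample values) identifies the non-dominated points of a sample set for two conflicting objectives such as minimize P6 and maximize P9:

# Find the first Pareto front for two objectives: minimize P6, maximize P9
# (illustrative sketch; sample values are hypothetical).
import numpy as np

rng = np.random.default_rng(1)
p6 = rng.uniform(0.0, 1.0, 200)            # objective to minimize
p9 = rng.uniform(0.0, 1.0, 200)            # objective to maximize

def dominates(i, j):
    # Point i dominates point j if it is at least as good for both objectives
    # and strictly better for at least one of them.
    return (p6[i] <= p6[j] and p9[i] >= p9[j]) and (p6[i] < p6[j] or p9[i] > p9[j])

front = [j for j in range(len(p6))
         if not any(dominates(i, j) for i in range(len(p6)) if i != j)]
print("number of non-dominated points on the first front:", len(front))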
Convergence Rate % and Initial Finite Difference Delta % in NLPQL and MISQP
Typically, the use of NLPQL or MISQP is suggested for continuous problems when there is only one
objective function. The problem might or might not be constrained and must be analytic. This
means that the problem must be defined only by continuous input parameters and that the objective
functions and constraints should not exhibit sudden "jumps" in their domain.
The main difference between these algorithms and MOGA is that MOGA is designed to work with
multiple objectives and does not require full continuity of the output parameters. However, for
continuous single objective problems, the use of NLPQL or MISQP gives greater accuracy of the
solution as gradient information and line search methods are used in the optimization iterations.
MOGA is a global optimizer designed to avoid local optima traps, while NLPQL and MISQP are local
optimizers designed for accuracy.
For NLPQL and MISQP, the default convergence rate, which is specified by the Allowable Convergence (%) property, is set to 0.1% for a Direct Optimization system and 0.0001% for a Response Surface Optimization system. The maximum value for this property is 100%. This rate is computed based on the (normalized) Karush-Kuhn-Tucker (KKT) condition. This implies that the fastest convergence rate of the gradients or the functions (objective function and constraints) determines the termination of the algorithm.
The default convergence rate is used in conjunction with the initial finite difference delta percentage value, which is specified by the Initial Finite Difference Delta (%) advanced property. This property defaults to 1% for a Direct Optimization system and 0.001% for a Response Surface Optimization system. You use this property to specify a percentage of the variation between design points to ensure that the Delta used in the calculation of finite differences is large enough to be seen over simulation noise. The specified percentage is defined as a relative gradient perturbation between design points.
The advantage of this approach is that for large problems, it is possible to get a near-optimal
feasible solution quickly without being trapped into a series of iterations involving small solution
steps near the optima. To work most effectively with NLPQL and MISQP, keep the following guidelines
in mind:
• If the Initial Finite Difference Delta (%) is greater than the Allowable Convergence (%), the
relative gradient perturbation gets iteratively smaller, until it matches the allowable convergence
rate. At this point, the relative gradient value stays the same through the rest of the analysis.
• If the Initial Finite Difference Delta (%) is less than or equal to the Allowable Convergence
(%), the current relative gradient step remains constant through the rest of the analysis.
• Both the Initial Finite Difference Delta (%) and Allowable Convergence (%) should be
higher than the magnitude of the noise in your simulation.
When setting the values for these properties, you have the usual trade-offs between speed and
accuracy. Smaller values result in more convergence iterations and a more accurate (but slower)
solution, while larger values result in fewer convergence iterations and a less accurate (but faster)
solution. At the same time, however, you must be aware of the amount of noise in your model.
For the input variable variations to be visible in the output variables, both values must be
greater than the magnitude of the simulation's noise.
In general, DesignXplorer's default values for Initial Finite Difference Delta (%) and Allowable
Convergence (%) cover the majority of optimization problems. For example, if you know that
the noise magnitude in your direct optimization problem is 0.0001, then the default values (Al-
lowable Convergence (%) = 0.001 and Initial Finite Difference Delta (%) = 0.01) are good.
When the defaults are not a good match for your problem, of course, you can adjust the values
to better suit your model and your simulation needs. If you require a more numerically accurate
solution, you can set the convergence rate to as low as 1.0E-10% and then set the Initial Finite
Difference Delta (%) accordingly.
Note:
An optimization problem is undefined when none of its parameters have an objective defined. An
optimization problem can also be unable to satisfy a constraint.
In the Project Schematic, both the state and the Quick Help for the Optimization cell would indicate
that input is required. Likewise, in the Outline view for the cell, both the state and Quick Help for
the root node would indicate that input is required. If you tried to update the optimization with
no objective set, an error message would display in the Message view.
As demonstrated by the following figure, some combinations of Constraint Type, Lower Bound,
and possibly Upper Bound settings specify constraints that cannot be satisfied. The results obtained
from these settings indicate only the boundaries of the feasible region in the design space. Currently,
only the Screening method can solve a pure constraint satisfaction problem.
In a Screening optimization, sample generation is driven by the domain definition, which is the
lower and upper bounds for the parameter. Sample generation is not affected by parameter objective
and constraint settings, unlike non-Screening optimization methods. However, parameter objective
and constraint settings do affect the generation of candidate points.
Typically, the Screening method is best suited for conducting a preliminary design study. It is a
low-resolution, fast, and exhaustive study that you can use to quickly locate approximate solutions.
Because the Screening method does not depend on any parameter objectives, you can change the
objectives after performing the analysis to view the candidates that meet different objective sets,
allowing you to quickly perform preliminary design studies. It's easy to keep changing the objectives
and constraints to view the different corresponding candidates, which are drawn from the original
sample set.
For example, you can run a Screening optimization for 3,000 samples and then use Tradeoff or
Samples charts to view the Pareto fronts. You can use the solutions slider in the Properties view
for the chart to display only the prominent points in which you are interested, which are usually
the first few fronts. When you run MOGA or Adaptive Multiple-Objective, you can limit the number
of Pareto fronts that are computed in the analysis.
Once all of your objectives are defined, update the Optimization cell to generate up to the requested
number of candidate points. You can save any of the candidates by right-clicking and selecting
Explore Response Surface at Points, Insert as Design Points, or Insert as Refinement Points.
When working with a Response Surface Optimization system, you should validate the best obtained
candidate results by saving the corresponding design points, solving them at the project level, and
comparing the results.
subject to:
The symbols and denote the vectors of the continuous and the integer variables, respectively.
DesignXplorer allows you to define a constrained design space with linear or non-linear constraints.
The constraint sampling is a heuristic method based on Shifted-Hammersley (Screening) and MISQP
sampling methods.
For a given screening of sample points generated in the hypercube of the input parameters, only a part of this sampling is within the constrained design space (feasible domain). When the feasible part of the sampling is too small, DesignXplorer needs to create additional points to reach the requested number of sample points.
The Screening method is not guaranteed to find either enough sample points or at least one feasible
point. For this reason, DesignXplorer solves an MISQP problem for each constraint on input para-
meters:
subject to:
subject to:
Once the point closest to the center of mass of the feasible points has been found, DesignXplorer projects part of the infeasible points onto the skin of the feasible domain.
Given an infeasible point and a feasible point, you can build a new feasible point:
To find a point close to the skin of the feasible domain, the algorithm starts with a coefficient equal to 1 and decreases its value until the new point is feasible.
To optimize the distribution of points on the skin of the feasible domain, the algorithm chooses the furthermost infeasible points in terms of angles:
Once enough points have been generated on the skin of the feasible domain, the algorithm generates internal points. The internal points are obtained by combining existing internal points with points generated on the skin of the feasible domain.
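A schematic Python sketch of this projection is shown below. The linear combination of the two points, the step size, and the placeholder constraint are assumptions made for illustration, not the documented algorithm:

# Project an infeasible point toward a feasible point until it becomes
# feasible (schematic sketch of the "skin" projection described above).
import numpy as np

def is_feasible(x):
    # Placeholder constraint: points must lie inside the unit disk.
    return x[0] ** 2 + x[1] ** 2 <= 1.0

x_infeasible = np.array([1.6, 1.2])
x_feasible = np.array([0.1, 0.2])

lam = 1.0
x_new = x_infeasible.copy()
while not is_feasible(x_new) and lam > 0.0:
    lam -= 0.05                                    # decrease the coefficient from 1
    x_new = lam * x_infeasible + (1.0 - lam) * x_feasible
print("point close to the skin of the feasible domain:", x_new, "coefficient:", lam)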
The conventional Hammersley sampling algorithm is constructed by using the radical inverse
function. Any integer can be represented as a sequence of digits by the following
equation:
(10)
For example, consider the integer 687459, which can be represented this way as , and
so on. Because this integer is represented with radix 10, you can write it as
and so on. In general, for a radix representation, the equation
is:
(11)
The inverse radical function is defined as the function that generates a fraction in (0, 1) by reversing
the order of the digits in Equation 11 (p. 334) about the decimal point, as shown.
(12)
Thus, for a -dimensional search space, the Hammersley points are given by the following expression:
(13)
where indicates the sample points. Now, from the plot of these points, it is seen that the
first row (corresponding to the first sample point) of the Hammersley matrix is zero and the last
row is not 1. This implies that, for the -dimensional hypercube, the Hammersley sampler generates
a block of points that are skewed more toward the origin of the cube and away from the far edges
and faces. To compensate for this bias, a point-shifting process is proposed that shifts all Hammersley
points by the amount that follows:
(14)
This moves the point set more toward the center of the search space and avoids unnecessary bias.
Thus, the initial population always provides unbiased, low-discrepancy coverage of the search space.
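A minimal Python sketch of Hammersley sampling with a center shift is shown below. The radix choices and the shift amount of 1/(2N) are illustrative assumptions; the exact construction is defined by Equations 10 through 14:

# Hammersley sampling with a simple center shift (illustrative sketch; the
# exact shift used by the Shifted-Hammersley method is defined by Eq. 14).
import numpy as np

def radical_inverse(n, radix):
    # Reverse the digits of n (in the given radix) about the decimal point.
    inv, factor = 0.0, 1.0 / radix
    while n > 0:
        inv += (n % radix) * factor
        n //= radix
        factor /= radix
    return inv

def hammersley(num_points, dim):
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]      # radices for each dimension
    pts = np.zeros((num_points, dim))
    for i in range(num_points):
        pts[i, 0] = i / num_points
        for d in range(1, dim):
            pts[i, d] = radical_inverse(i, primes[d - 1])
    return pts + 0.5 / num_points                       # assumed center shift

samples = hammersley(num_points=20, dim=3)
print(samples.min(axis=0), samples.max(axis=0))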
minimize:
subject to:
where:
It is assumed that the objective function and the constraints are continuously differentiable. The idea is
to generate a sequence of quadratic programming subproblems obtained by a quadratic approx-
imation of the Lagrangian function and a linearization of the constraints. Second order information
is updated by a quasi-Newton formula and the method is stabilized by an additional (Armijo) line
search.
The method presupposes that the problem size is not too large and that the problem is well scaled.
Also, the accuracy of the method depends on the accuracy of the gradients. Because analytical
gradients are unavailable for most practical problems, it is imperative that the numerical (finite-difference
based) gradients are as accurate as possible.
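As an illustration of such a numerical gradient, the sketch below uses central differences, which is one common choice; the step size and function names are assumptions and not the scheme DesignXplorer necessarily uses internally.

def gradient(f, x, h=1.0e-6):
    """Central finite-difference approximation of the gradient of f at x."""
    grad = []
    for i in range(len(x)):
        x_fwd = list(x); x_fwd[i] += h
        x_bwd = list(x); x_bwd[i] -= h
        grad.append((f(x_fwd) - f(x_bwd)) / (2.0 * h))
    return grad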
Before the actual derivation of the NLPQL equations, Newton's iterative method for the solution
of nonlinear equation sets is reviewed. Let a multivariable function be such that it can be
expanded about a point in a Taylor series.
(15)
where it is assumed that the Taylor series actually models a local area of the function by a
quadratic approximation. The objective is to devise an iterative scheme by linearizing the vector
Equation 15 (p. 335). To this end, it is assumed that at the end of the iterative cycle, Equation 15 (p. 335)
would be exactly valid. This implies that the first variation of the following expression with respect
to Δx must be zero.
(16)
The first expression indicates the first variation of the converged solution with respect to the increment
in the independent variable vector. This gradient is necessarily zero because the converged
solution clearly does not depend on the step-length. Therefore, Equation 17 (p. 336) can be written
as the following:
(18)
where the index indicates the iteration number. Equation 18 (p. 336) is therefore used in the main
quadratic programming scheme.
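For orientation, a minimal restatement of this Newton scheme follows. The symbols used here (F for the residual vector, x for the unknowns, and k for the iteration index) are assumptions, since the original notation did not survive extraction:

\Delta x_k = -\left[\nabla F(x_k)\right]^{-1} F(x_k), \qquad x_{k+1} = x_k + \Delta x_k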
NLPQL Derivation:
Consider the following single-objective nonlinear optimization problem. It is assumed that the
problem is smooth and analytic throughout and involves a set of continuous decision variables.
minimize:
subject to:
where:
(19)
where the problem includes a set of inequality constraints and a set of equality constraints. In many
cases the inequality constraints are bounded above and below; in such cases, it is customary to split
the constraint into two inequality constraints. To approximate the quadratic subproblem assuming the
presence of only equality constraints, the Lagrangian for Equation 19 (p. 336) is written as:
(20)
where the additional (non-zero) vector contains the Lagrange multipliers. Thus,
Equation 20 (p. 336) becomes a functional that depends on two sets of independent vectors.
To minimize this expression, you seek the stationarity of this functional with respect to the two
sets of vectors. This gives rise to two sets of vector expressions as the following:
(21)
Equation 21 (p. 336) defines the Karush-Kuhn-Tucker (KKT) conditions that are the necessary conditions
for the existence of the optimal point. The first equation is a set of nonlinear algebraic equations (one
per design variable) and the second is a set of nonlinear algebraic equations (one per equality constraint).
The constraint-gradient matrix is defined as the following:
(22)
Thus, in Equation 22 (p. 337), each column is a gradient of the corresponding equality constraint.
For convenience, the nonlinear equations of Equation 21 (p. 336) can be written as the following:
(23)
where Equation 23 (p. 337) is a system of nonlinear equations. The independent variable
set can be written as:
(24)
(25)
Referring to the section on Newton based methods, the vector in Equation 24 (p. 337) is
updated as the following:
(26)
The increment of the vector is given by the iterative scheme in Equation 18 (p. 336). Referring to
Equation 23 (p. 337), Equation 24 (p. 337), and Equation 25 (p. 337), the iterative equation is expressed
as the following:
(27)
This is only a first order approximation of the Taylor expansion of Equation 23 (p. 337). This is in
contrast to Equation 18 (p. 336) where a quadratic approximation is done. This is because in
Equation 24 (p. 337), a first order approximation has already been done. The matrices and the
vectors in Equation 27 (p. 337) can be expanded as the following:
(28)
and
(29)
This is obtained by taking the gradients of Equation 25 (p. 337) with respect to the two variable
sets. The sub-matrix is the Hessian of the Lagrange function in implicit form.
To demonstrate how Equation 29 (p. 337) is formed, let us consider the following simple case.
Given a vector function of two variables, you write:
It is required to find the gradient, or Jacobian, of the vector function. To this effect, the derivation
of the Jacobian is evident because, in the present context, the vector indicates a set of
nonlinear (algebraic) equations and the Jacobian is the coefficient matrix that "links" the increment
of the independent variable vector to the dependent variable vector. Thus, you can write:
Thus, the Jacobian matrix is formed. In some applications, the equation is written in the following
form:
where each column indicates the gradient of the corresponding component of the dependent variable
vector with respect to the independent vector. This is the formalism you use in determining
Equation 29 (p. 337).
(30)
Equation 30 (p. 338) is solved iteratively, by way of Equation 31 (p. 338), in a series of linear steps until
the increment is negligible. The update schemes for the independent variable vector and the Lagrange
multiplier vector are written as the following:
(31)
The individual equations of Equation 30 (p. 338) are now written separately. The first equation
(corresponding to minimization with respect to the design variables) can be written as:
The last step in Equation 32 (p. 338) is done by using Equation 31 (p. 338). Thus, using Equation 32 (p. 338)
and Equation 30 (p. 338), the iterative scheme can be rewritten in a simplified form
as:
(33)
Thus, Equation 33 (p. 339) can be used directly to compute the value of the Lagrange
multiplier vector. Note that by using Equation 33 (p. 339), it is possible to compute the update of
the design variables and the new value of the Lagrange multiplier vector in the same iterative step.
Equation 33 (p. 339) shows the general scheme by which the KKT optimality condition can be solved
iteratively for a generalized optimization problem.
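For orientation, the iterative step just described corresponds to the standard KKT (saddle-point) linear system of an SQP iteration. The symbols below (H for the Lagrangian Hessian, A for the constraint-gradient matrix, f for the objective, and h for the equality constraints) are assumed for illustration because the original notation did not survive extraction:

\begin{bmatrix} H_k & A_k \\ A_k^{\mathsf{T}} & 0 \end{bmatrix}
\begin{bmatrix} \Delta x_k \\ \lambda_{k+1} \end{bmatrix}
= - \begin{bmatrix} \nabla f(x_k) \\ h(x_k) \end{bmatrix}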
The definitions of the matrices in Equation 34 (p. 339) and Equation 35 (p. 339) are given
earlier. To solve the quadratic minimization problem, let us form the Lagrangian as:
(36)
Now, the KKT conditions can be derived (as done earlier) by taking gradients of the Lagrangian
in Equation 36 (p. 339) as the following:
(37)
In a condensed matrix form Equation 37 (p. 339) can be written as the following:
(38)
Equation 38 (p. 339) is the same as Equation 33 (p. 339). This implies that the iterative scheme in
Equation 38 (p. 339) actually solves a quadratic subproblem (Equation 34 (p. 339) and
Equation 35 (p. 339)) in the increment domain. If the real problem is quadratic, then this iterative
scheme solves the exact problem.
On addition of inequality constraints, the Lagrangian of the actual problem can be written as the
following:
(39)
The inequality constraints have been converted to equality constraints by using a set of slack
variables as the following:
(40)
The squared term is used to ensure that the slack variable remains positive, which is required to
satisfy Equation 40 (p. 339). The Lagrangian in Equation 39 (p. 339) acts as an enhanced objective
function. It is seen that the only case where the additional terms might be active is when the
constraints are not satisfied.
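As an illustration of the slack-variable device described above, each inequality constraint can be rewritten as an equality by adding a squared slack term. The symbols g_j and s_j, and the sign convention g_j(x) >= 0, are assumptions used for this sketch:

g_j(x) - s_j^{\,2} = 0, \qquad j = 1, \dots, m

so that the inequality is satisfied for any real value of the slack variable s_j.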
The KKT conditions as derived from Equation 39 (p. 339) (by taking first variations with respect to
the independent variable vectors) are:
(41)
From the KKT conditions in Equation 41 (p. 340), it is evident that the number of equations matches
the number of unknowns. Therefore, this equation set possesses a unique solution. Let this (optimal)
solution be the marked point. At this point, a certain number of constraints are active
while others are inactive. By an active constraint, it is meant that the constraint
is at its threshold value of zero. Let the equality constraints be partitioned into sets of active and
inactive constraints, and similarly let the inequality constraints be partitioned into active and inactive
sets. Therefore, you can write the following relations:
(42)
where the counting operator indicates the number of elements of the set under consideration. These sets
partition the constraints into active and inactive subsets. Therefore, you can write:
(43)
Thus, the last three equations in Equation 41 (p. 340) can be represented by Equation 43 (p. 340).
These are the optimality conditions for constraint satisfaction. From these equations, you can
now eliminate the slack variables so that the Lagrangian in Equation 39 (p. 339) depends on only three
independent variable vectors. From the last two conditions in Equation 43 (p. 340), you can write the
following condition, which is always valid for an optimal point:
(44)
Using Equation 44 (p. 340) in the set Equation 41 (p. 340), the KKT optimality conditions can be
written as the following:
(45)
Thus, the new set contains a reduced number of unknowns. Now, following the same logic as in
Equation 24 (p. 337), let us express Equation 45 (p. 340) in the same form as
Equation 23 (p. 337). This represents a system of nonlinear equations. The independent
variable set can be written in vector form as the following:
(46)
Newton's iterative scheme is also used here. Therefore, the same equations as in Equation 26 (p. 337)
and Equation 27 (p. 337) also apply here. Following Equation 27 (p. 337), you can write:
(47)
Taking the first variation of the KKT equations in Equation 45 (p. 340) and equating to zero, the
sub-quadratic equation is formulated as the following:
(48)
At the current iteration step, the first equation can be written (by linearization) as:
(49)
Thus, the linearized set of equations for Newton’s method to be applied can be written in an
explicit form as:
(51)
So, in the presence of both equality and inequality constraints, Equation 51 (p. 341) can be used in a
quasi-Newtonian framework to determine the variable increments and the Lagrange multipliers
when stepping from one iteration to the next.
The Hessian matrix is not computed directly but is estimated and updated by a BFGS-type scheme
combined with the line search.
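For reference, a BFGS update of the Lagrangian Hessian approximation, which is a common choice in SQP implementations, has the form shown below. The symbols B_k, s_k, and y_k are assumed here for illustration and are not taken from the guide:

B_{k+1} = B_k - \frac{B_k s_k s_k^{\mathsf{T}} B_k}{s_k^{\mathsf{T}} B_k s_k}
        + \frac{y_k y_k^{\mathsf{T}}}{y_k^{\mathsf{T}} s_k},
\qquad s_k = x_{k+1} - x_k, \quad
y_k = \nabla_x L(x_{k+1}, \lambda_{k+1}) - \nabla_x L(x_k, \lambda_{k+1})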
minimize:
subject to:
where:
The problem variables comprise a vector of continuous variables and a vector of integer variables.
It is assumed that the problem functions are continuously differentiable with respect to the continuous
variables. It is not assumed that the integer variables can be relaxed. In other words, the
problem functions are evaluated only at integer points and never at any fractional values in
between.
MISQP solves the mixed-integer nonlinear programming (MINLP) problem by a modified sequential
quadratic programming (SQP) method. After linearizing the constraints and constructing a quadratic
approximation of the Lagrangian function, mixed-integer quadratic programs are successively generated
and solved by an efficient branch-and-cut method. The algorithm is stabilized by a trust region method
as originally proposed by Yuan for continuous programs. Second order corrections are retained. The
Hessian of the Lagrangian function is approximated by BFGS updates with respect to the continuous
and integer variables. MISQP is also able to solve non-convex nonlinear mixed-integer programs.
For external references, see MISQP Optimization Algorithm References (p. 360).
ASO supports a single objective and multiple constraints. It is available for continuous parameters,
including those with manufacturable values. It does not support the use of parameter relationships
in the optimization domain and is available only for a Direct Optimization system.
Like MISQP, ASO solves constrained nonlinear programming problems of the form:
minimize:
subject to:
where:
The purpose is to refine and reduce the search domain intelligently and automatically in order to find
the global extrema.
ASO Workflow
ASO Steps
OSF Sampling
OSF (Optimal Space-Filling Design) is used for the Kriging construction. In the original OSF, the
number of samples equals the number of divisions per axis and there is one sample in each division.
When a new OSF is generated after a domain reduction, the reduced OSF has the same number
of divisions as the original and keeps the existing design points within the new bounds. New
design points are added until there is a point in each division of the reduced domain.
In the following example, the original domain has eight divisions per axis and contains eight
design points. The reduced domain also has eight divisions per axis and includes two of the original
design points. To have a design point in each division, six new design points need to be
added.
Note:
The total number of design points in the reduced domain can exceed the number in
the original domain if multiple existing points wind up in the same division. In the
previous example, if two existing points had wound up in the same division of the
new domain, seven new design points (rather than six) would have been added to
have a point in each of the remaining divisions.
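The following sketch illustrates the counting argument above for one axis. It is an illustration of the idea only; the function name, the bound handling, and the example values are assumptions and not the DesignXplorer implementation.

def divisions_needing_points(existing_values, lower, upper, n_div):
    """Return the divisions along one axis that contain no retained design point."""
    width = (upper - lower) / n_div
    occupied = {min(int((v - lower) / width), n_div - 1) for v in existing_values}
    return [d for d in range(n_div) if d not in occupied]

# Example from the text: 8 divisions and 2 retained points give 6 empty divisions,
# or 7 if the 2 retained points fall into the same division.
print(len(divisions_needing_points([0.12, 0.60], 0.0, 1.0, 8)))  # 6
print(len(divisions_needing_points([0.10, 0.11], 0.0, 1.0, 8)))  # 7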
Kriging Generation
A response surface is created for each output, based on the current OSF and consequently on
the current domain bounds.
MISQP
MISQP is run on the current Kriging response surface to find potential candidates. Several MISQP
processes are run at the same time, each beginning from a different starting point and consequently
yielding different candidates.
Each obtained candidate is either validated or not, based on the Kriging error predictor. The
candidate point is checked to see if further refinement of the Kriging surface would change the selection
of this point. A candidate is considered acceptable if, according to this error prediction, no other points
call it into question. If the quality of the candidate is not called into question,
the domain bounds are reduced. Otherwise, the candidate is calculated as a verification point.
When a new verification point is calculated, it is inserted into the current Kriging response
surface as a refinement point and the MISQP process is restarted.
When candidates are validated, new domain bounds must be calculated. If all of the candidates
are in the same zone, the bounds are reduced and centered on the candidates. Otherwise,
the bounds are reduced to an inclusive box around all candidates. At each domain reduction,
a new OSF is generated (keeping existing design points that fall within the new bounds) and a
new Kriging response surface is generated based on this new OSF.
Taking a closer, more formal look at the multi-objective optimization problem, let the following
denote the set of all feasible solutions, which are those that do not violate constraints:
(52)
(53)
If there exists a feasible point at which every objective function attains its optimum, that point is
optimal. This is expressed as:
(54)
This indicates that such a point is certainly a desirable solution. Unfortunately, this is a utopian situation
that rarely exists, as it is unlikely that all objective functions attain their minimum values at a common
point. The question remains: What solution should be used? That is, how should an "optimal" solution
be defined? First, consider the so-called ideal (utopian) solution. To define this solution, the separately
attainable minima must be found for all objective functions. Assuming each exists, let the individual
minimum be the solution of the scalar optimization problem:
(55)
Here, each such solution is called the individual minimum for the corresponding scalar problem. The
vector of individual minima is called the ideal vector for a multi-objective optimization problem, and
the points in the feasible set that determine this vector constitute the ideal solution.
It is usually not true that Equation 56 (p. 347) holds, although it would be convenient, as the multi-objective
problem would then have been solved by considering a sequence of scalar problems. It is
necessary to define a new form of optimality, which leads to the concept of Pareto optimality.
Introduced by V. Pareto in 1896, it is still the most important concept in multi-objective optimization.
(56)
A feasible point is said to be Pareto optimal for the problem if there is no other feasible vector such
that, for all objectives:
(57)
This definition is based on the intuitive conviction that a point is chosen as optimal
if no criterion can be improved without worsening at least one other criterion. Unfortunately, the
Pareto optimum almost always gives not a single solution but a set of solutions. Pareto optimality
is usually spoken of as being global or local, depending on the neighborhood of the solutions considered,
and almost all traditional algorithms can at best guarantee local Pareto optimality.
However, this MOGA-based system, which incorporates global Pareto filters, yields the global
Pareto front.
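The following minimal Python sketch illustrates the dominance definition and a simple Pareto filter for minimization problems. It is an illustration of the concept only, not DesignXplorer's implementation.

def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example with two objectives to minimize.
print(pareto_front([(1, 5), (2, 2), (3, 1), (4, 4)]))  # [(1, 5), (2, 2), (3, 1)]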
The Maximum Allowable Pareto Percentage criterion looks for a percentage that represents
a specified ratio of Pareto points per Number of Samples Per Iteration. When this percentage
is reached, the optimization is converged.
The Convergence Stability Percentage criterion looks for population stability, based on
mean and standard deviation of the output parameters. When a population is stable with
regards to the previous one, the optimization is converged. The criterion functions in the
following sequence:
• Population 1: When the optimization is run, the first population is not taken into account.
Because this population was not generated by the MOGA algorithm, it is not used as a
range reference for the output range (for scaling values).
• Population 2: The second population is used to set the range reference. The minimum,
maximum, range, mean, and standard deviation are calculated for this population.
• Populations 3 – 11: Starting from the third population, the minimum and maximum output
values are used in the next steps to scale the values (on a scale of 0 to 100). The mean
variations and standard deviation variations are checked. If both of these are smaller than
the value for the Convergence Stability Percentage property, the algorithm is converged.
At each iteration and for each active output, convergence occurs when the variation of the scaled
mean and the variation of the scaled standard deviation, relative to the previous population, are both
smaller than the Convergence Stability Percentage value.
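A sketch of such a stability check for a single output might look as follows. The scaling to a 0 to 100 range and the comparison with the previous population follow the description above, while the function and variable names are illustrative assumptions.

from statistics import mean, stdev

def is_stable(previous, current, out_min, out_max, stability_pct):
    """Compare the scaled mean and standard deviation of two populations of one output."""
    scale = 100.0 / (out_max - out_min)        # scale output values to 0..100
    prev_s = [(v - out_min) * scale for v in previous]
    curr_s = [(v - out_min) * scale for v in current]
    mean_variation = abs(mean(curr_s) - mean(prev_s))
    stdev_variation = abs(stdev(curr_s) - stdev(prev_s))
    return mean_variation < stability_pct and stdev_variation < stability_pct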
The first Pareto front solutions are archived in a separate sample set internally and are distinct
from the evolving sample set. This ensures minimal disruption of Pareto front patterns already
available from earlier iterations. You can control the selection pressure (and, consequently, the
elitism of the process) to avoid premature convergence by altering the Maximum Allowable
Pareto Percentage property. For more information about this and other MOGA properties, see
Performing a MOGA Optimization (p. 181).
MOGA Workflow
MOGA Steps
MOGA is run and generates a new population via cross-over and mutation. After the first iteration,
each population is run when it reaches the number of samples defined by the Number
of Samples Per Iteration property. For details, see MOGA Steps to Generate a New Population (p. 350).
4. Convergence Validation
MOGA converges when the Maximum Allowable Pareto Percentage or the Convergence
Stability Percentage has been reached.
If the optimization is not converged, the process continues to the next step.
If the optimization has not converged, it is validated for fulfillment of stopping criteria.
When the Maximum Number of Iterations criterion is met, the process is stopped without
having reached convergence.
If the stopping criteria have not been met, MOGA is run again to generate a new population
(return to Step 2).
6. Conclusion
Steps 2 through 5 are repeated in sequence until the optimization has converged or the
stopping criteria have been met. When either of these things occurs, the optimization concludes.
The process MOGA uses to generate a new population has two main steps: Cross-over and
Mutation.
1. Cross-over
A cross-over operator linearly combines two parent chromosome vectors to produce
two new offspring according to the following equations:
Consider the following two parents (each consisting of four floating genes), which have
been selected for cross-over:
The concatenation of these chains forms the chromosome, which crosses over with another
chromosome.
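The following sketch shows one common form of linear-combination (arithmetic) cross-over of two real-coded parents. The weighting rule is an assumption for illustration; the exact cross-over equations used by MOGA did not survive extraction.

import random

def crossover(parent_a, parent_b):
    """Produce two offspring as complementary linear combinations of the parents."""
    w = random.random()                     # blending weight in [0, 1]
    child_1 = [w * a + (1.0 - w) * b for a, b in zip(parent_a, parent_b)]
    child_2 = [(1.0 - w) * a + w * b for a, b in zip(parent_a, parent_b)]
    return child_1, child_2

# Example with two parents of four floating genes each, as in the text.
print(crossover([0.2, 1.5, 3.0, 0.7], [0.9, 1.1, 2.4, 0.3]))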
2. Mutation
Mutation alters one or more gene values in a chromosome from its initial state. This can result
in entirely new gene values being added to the gene pool. With these new gene values, the
genetic algorithm might be able to arrive at a better solution than was previously possible.
Mutation is an important part of the genetic search, as it helps to prevent the population
from stagnating at any local optima. Mutation occurs during evolution according to a user-
defined mutation probability.
where the child value is obtained from the parent value plus a small variation δ calculated from a
polynomial distribution.
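The following sketch shows a common form of polynomial mutation for a real-coded gene. The distribution index eta and the bound handling are assumptions for the example, not the exact MOGA formulation.

import random

def polynomial_mutation(parent, lower, upper, eta=20.0):
    """Return child = parent + delta, with delta drawn from a polynomial distribution."""
    u = random.random()
    if u < 0.5:
        delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
    child = parent + delta * (upper - lower)
    return min(max(child, lower), upper)    # keep the gene within its bounds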
In the binary case, gene values are altered with a probability of 0.5. This mutation operator can only
be used for binary genes. The concatenation of these chains forms the chromosome, which crosses
over with another chromosome.
AMO supports multiple objectives and multiple constraints. It is limited to continuous parameters,
including those with manufacturable values. It is available only for a Direct Optimization system.
Note:
AMO does not support discrete parameters because with discrete parameters, it is
necessary to construct a separate response surface for each discrete combination.
When discrete parameters are used, MOGA is the more efficient optimization method.
For more information, see Multi-Objective Genetic Algorithm (MOGA) (p. 348).
AMO Workflow
AMO Steps
The initial MOGA population is used for constructing the Kriging response surfaces.
2. Kriging Generation
A Kriging response surface is created for each output, based on the first population and then
improved during simulation with the addition of new design points.
For more information, see Kriging (p. 109) or Kriging (p. 314).
3. MOGA
MOGA is run, using the Kriging response surface as an evaluator. After the first iteration,
each population is run when it reaches the number of samples defined by the Number of
Samples Per Iteration property.
5. Error Check
Each point is validated for error. If the error for a given point is acceptable, the approximated
point is included in the next population to be run through MOGA (return to Step 3).
If the error is not acceptable, the point is promoted to a design point. The new design
points are used to improve the Kriging response surface (return to Step 2) and are included
in the next population to be run through MOGA (return to Step 3).
6. Convergence Validation
MOGA converges when the maximum allowable Pareto percentage has been reached.
When this happens, the process is stopped.
If the optimization is not converged, the process continues to the next step.
If the optimization has not converged, it is validated for fulfillment of the stopping criteria.
When the maximum number of iterations has been reached, the process is stopped without
having reached convergence.
If the stopping criteria have not been met, the MOGA algorithm is run again (return to
Step 3).
8. Conclusion
Steps 2 through 7 are repeated in sequence until the optimization has converged or the
stopping criteria have been met. When either of these things occurs, the optimization concludes.
The sample set used by the Decision Support Process depends on the optimization method:
• Screening: Sample set corresponds to the number of Screening points plus the Min-Max
search results. Search results that duplicate existing points are omitted.
• Single-objective optimization (NLPQL, MISQP, ASO): Sample set corresponds to the iteration
points.
• Multiple-objective optimization (MOGA, AMO): Sample set corresponds to the final population.
The Decision Support Process sorts the sample set using the cost function to extract the best can-
didates. The cost function takes into account both the Importance level of objectives and constraints
and the feasibility of points. (The feasibility of a point depends on how constraints are handled.
When the Constraint Handling property is set to Relaxed, all infeasible points are included in the
sort. When the property is set to Strict, all infeasible points are removed from the sort.) Once the
sample set has been sorted, you can change the Importance level and Constraint Handling
properties for one or more constraints or objectives without causing DesignXplorer to create more
design points. The Decision Support Process sorts the existing sample set again.
Given input parameters, output parameters, and their individual targets, the collection of objectives
is combined into a single weighted objective function, which is sampled by means of
a direct Monte Carlo method using a uniform distribution. The candidate points are subsequently
ranked by ascending magnitudes of this function. The function for the case where all continuous input
parameters have usable values of type Continuous is given by the following:
(59)
where:
(60)
(61)
where:
The fuzziness of the combined objective function derives from the weights, which are simply
defined as follows:
(62)
The labels used are defined in Defining Optimization Objectives and Constraints (p. 203).
The targets represent the desired values of the parameters, and are defined for the continuous input
parameters as follows:
(63)
And, for the output parameters, you have the following desired values:
(64)
where the symbols denote the user-specified target, the constraint lower bound, and the constraint
upper bound, respectively.
Thus, Equation 62 (p. 357) and Equation 63 (p. 357) constitute the input parameter objectives for the
continuous input parameters and Equation 62 (p. 357) and Equation 64 (p. 357) constitute the output
parameter objectives and constraints.
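As a rough illustration of how such a weighted aggregate can be evaluated and used to rank candidates, consider the following sketch. The weighting-by-importance scheme, the use of absolute deviations, and all names here are assumptions, not the exact DesignXplorer formulas.

def aggregate_objective(point, objectives):
    """objectives: list of (value_getter, target, weight) tuples for one candidate point."""
    total = 0.0
    for value_of, target, weight in objectives:
        total += weight * abs(value_of(point) - target)
    return total

def rank_candidates(points, objectives):
    """Sort candidate points by ascending aggregate objective value (best first)."""
    return sorted(points, key=lambda p: aggregate_objective(p, objectives))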
The following section considers the case where discrete input parameters and continuous input
parameters with manufacturable values are possible. Assume that a continuous input parameter
with manufacturable values is defined as follows:
(65)
(66)
where, as before:
(67)
Thus, the GDO objective equation becomes the following (for parameters with discrete usable values):
(68)
Therefore, Equation 61 (p. 356), Equation 62 (p. 357), and Equation 66 (p. 358) constitute the input
parameter objectives for parameters that might be continuous or possess discrete usable values.
The norms, objectives, and constraints in Equation 66 (p. 358) and Equation 67 (p. 358)
are also adopted to define the input goals for input parameters of the type Discrete, which are
those parameters whose usable alternatives indicate a whole number of some particular design
feature (number of holes in a plate, number of stiffeners, and so on).
Thus, Equation 61 (p. 356), Equation 62 (p. 357), and Equation 66 (p. 358) constitute the input
parameter goals for discrete parameters.
Therefore, the GDO objective function equation for the most general case (where there are continu-
ous and discrete parameters) can be written as the following:
(69)
where:
From the normed values it is obvious that the lower the value of the combined objective function, the
better the design with respect to the desired values and importance. Thus, a quasi-random uniform
sampling of design points is done by a Hammersley algorithm and the samples are sorted in ascending
order of the objective function value. The desired number of designs is then drawn from the top of the
sorted list. A crowding technique is employed to ensure that no two sampled design points are very
close to each other in the space of the input parameters.
Following the same procedure, the rating scale for a design candidate value of 0.9333 is
[two crosses] (away from target). Therefore, the extreme cases are as follows:
1. For a design candidate value of 0.9 (the worst), the rating scale is [three crosses].
2. For a design candidate value of 1.1 (the best), the rating scale is [three stars].
Note:
Objective-driven parameter values with inequality constraints receive either three stars
(the constraint is met) or three red crosses (the constraint is violated).
Theory References
This section provides external references for further reading and study on the theory underpinning
DesignXplorer's design exploration capabilities:
Genetic Aggregation Response Surface References
MISQP Optimization Algorithm References
Reference Books
Reference Articles
Genetic Aggregation Response Surface References
GA reference 1
Ben Salem, M., & Tomaso, L., "Automatic selection for general surrogate models,"
Structural and Multidisciplinary Optimization, Vol. 58, No. 2, 2018, pp. 719-734.
GA reference 2
Ben Salem, M., Roustant, O., Gamboa, F., & Tomaso, L., "Universal prediction distribution
for surrogate models," SIAM/ASA Journal on Uncertainty Quantification, Vol. 5, No. 1,
2017, pp. 1086-1109.
GA reference 3
GA reference 4
Viana, F.A.C., Haftka, R.T., & Steffen, V., "Multiple Surrogates: How Cross-Validation
Errors Can Help Us to Obtain the Best Predictor," Structural and Multidisciplinary
Optimization, Vol. 39, No. 4, 2009, pp. 439-457.
MISQP Optimization Algorithm References
MISQP reference 1
MISQP reference 2
MISQP reference 3
Exler, O., & Schittkowski, K., "A trust region SQP algorithm for mixed-integer nonlinear
programming," Optimization Letters, Vol. 1, 2007, pp. 269-280.
Reference Books
The following is a list of reference books, ranging from introductory to advanced.
Probabilistic Structural Mechanics Handbook – Theory and Industrial Applications Edited by Raj
Sundararajan
This book provides a very good introduction to the probabilistic analysis and design of all kinds of
industrial applications. The coverage is quite balanced and includes the theory of probabilistic
methods, structural reliability methods, and industrial application examples. For more information
on probabilistic methods (sampling-based), see "Simulation and the Monte Carlo Method" (p. 361)
by Reuven Rubinstein. For more information on structural reliability methods (such as FORM/SORM),
see Probability, Reliability, and Statistical Methods in Engineering Design (p. 361) by Achintya Haldar
and Sankaran Mahadevan.
Simulation and the Monte Carlo Method by Reuven Rubinstein
This book provides detailed descriptions of all kinds of sampling methods, including Monte Carlo
(brute force), Variance Reduction Monte Carlo (of which Latin Hypercube Sampling is one), Import-
ance Sampling, Markov Chain Monte Carlo (widely known as MCMC), and many more.
Probability, Reliability, and Statistical Methods in Engineering Design by Achintya Haldar and Sankaran
Mahadevan
This book presents a good introduction on risk assessments and structural reliability designs. It
clearly addresses the methods widely used to conduct risk-based and reliability-based designs,
such as FORM/SORM.
Applied Linear Statistical Models by John Neter, Michael Kutner, William Wasserman and Christopher
Nachtsheim
This book is basically the backbone of the 2nd-order polynomials response surface in DesignXplorer.
It comprehensively covers regression analysis (stepwise regression in particular), input-output
transformation for better regression, and statistical goodness-of-fit measures. It also covers sampling
schemes for DOEs. For more on DOEs, CCD, and Box-Behnken design, see Design and Analysis
of Experiments by Douglas Montgomery. CCD and Box-Behnken design are traditional DOE sampling
schemes. They differ from Design and Analysis of Computer Experiments (DACE), which is used
for an interpolation-based response surface, such as Kriging.
This book is a good resource on sampling schemes for a least-square fit based response surface.
It provides a detailed explanation on establishing fraction factorial design with a certain resolution.
Design and Analysis of Computer Experiments by Thomas Santner, Brian Williams and William Notz
This book describes how to create a space-filling design for an interpolation-based response surface.
This includes non-parametric regression and Kriging/Radial Basis, which are supported by
DesignXplorer, and Projection Pursuit Regression, which is not supported by DesignXplorer.
Reference Articles
The following is a list of reference articles on design exploration topics such as probability, reliability,
and statistics.
Articles reference 1
Probability Concepts in Engineering Planning and Design; Volume II: Decision, Risk, and
Reliability
Articles reference 2
Ayyub, B. M. (editor)
Articles reference 3
Articles reference 4
1997.
Articles reference 5
SIAM, 1975.
Articles reference 6
Barnes, J.W.
Articles reference 7
Beasley, M.
Articles reference 8
Articles reference 9
Billinton, R.
Articles reference 10
Calabro, S. R.
Articles reference 11
Casciati, F.
Articles reference 12
Catuneanu, V. M.
Reliability Fundamentals
Articles reference 13
Chorafas, D. N.
Articles reference 14
Cox, S. J.
Butterworth-Heinemann, 1991.
Articles reference 15
Dai, S-H.
Articles reference 16
Ditlevson, O.
Articles reference 17
Gumbel, E.
Statistics of Extremes
Articles reference 18
Articles reference 19
Articles reference 20
Harr, M. E.
Articles reference 21
Hart, Gary C.
Articles reference 22
Henley, E. J.
Articles reference 23
Articles reference 24
Kapur, K. C.
Articles reference 25
Leemis, L. M.
Articles reference 26
Leitch, R. D.
Articles reference 27
Litle, W. A.
Articles reference 28
1982.
Articles reference 29
Lucia, A. C.
Articles reference 30
Articles reference 31
Madsen, H. O.
Articles reference 32
Articles reference 33
Marek, P.
Articles reference 34
McCormick, N. J.
Articles reference 35
Melchers, R. E.
Articles reference 36
Misra, K. B.
Articles reference 37
Modarres, M.
What Every Engineer Should Know about Reliability and Risk Analysis
Articles reference 38
ASCE, 1986.
Articles reference 39
Reliability of Structures
Articles reference 40
Papoulis, A.
Articles reference 41
Pugsley, A. G.
Articles reference 42
Rackwitz, R.
Articles reference 43
Rao, S. S.
Reliability-Based Design
Articles reference 44
Rubinstein, R. Y.
Articles reference 45
Schlaifer, R.
Articles reference 46
Schneider, J.
Articles reference 47
Shooman, M. L.
Articles reference 48
Sinha, S. K.
Articles reference 49
Smith, D. J.
Articles reference 50
Spencer, B. F.
Articles reference 51
Thoft-Christensen, P.
Articles reference 52
Thoft-Christensen, P.
Articles reference 53
Thoft-Christensen, P.
Articles reference 54
Articles reference 55
Tichy, M.
Wittmann, F. H.
Troubleshooting
An Ansys DesignXplorer Update operation returns an error message saying that one or several design
points have failed to update.
Click Show Details in the message dialog box to see the list of design points that failed to update
and the associated error messages. Each failed design point is automatically preserved for you at
the project level, so you can return to the project and edit the Parameter Set bar to see the failed
design points and investigate the reason for the failures.
This error means that the project failed to update correctly for the parameter values defined in the
listed design points. There are a variety of failure reasons, from a lack of a license to a CAD model
generation failure. If you try to update the system again, an update attempts to update only the
failed design points, and if the update is successful, the operation completes. If failures persist
for a design point, investigate by copying its values to the current design point, attempting a
project update, and editing the cells in the project that do not update correctly.
When you open a project, one or more DesignXplorer cells are marked with a black X icon in the
Project Schematic.
The black X icon next to a DesignXplorer cell indicates a disabled status. This status is given to
DesignXplorer cells that fail to load properly because the project has been corrupted by the absence
of necessary files (most often the parameters.dxdb and parameters.params files). The
Messages pane displays a warning message indicating the reason for the failure and lists each of
the disabled cells.
When a DesignXplorer cell is disabled, you cannot update, edit, or preview it. Try one of the following
methods to address the issue.
Method 1: (recommended) Navigate to the Files pane, locate the missing files if possible, and copy
them into the correct project folder. The parameters.dxdb and parameters.params files are
normally located in the subdirectory <project name>_files\dpall\global\DX. With the
necessary files restored, the disabled cells should load successfully.
Method 2: Delete all DesignXplorer systems containing disabled cells. Once the project is free of
corrupted cells, you can save the project and set up new DesignXplorer systems.
When an Excel file is added to a Microsoft Office Excel component in a project, an instance of the
Excel application is launched in the background. If you double-click any other Excel workbook in
the Windows Explorer window or open any other Excel workbook from the Recent items option
on the Start menu, the Excel application attempts to use the instance of Excel that is already running.
As a result, all of the background workbooks are displayed in the foreground, and any update of the
Excel components in Workbench prevents you from modifying any opened Excel workbooks.
Therefore, to open a different workbook, do not use the same instance of Excel that Workbench is
using.
To open or view a different Excel workbook (other than the ones added to the project):
1. Launch a new instance of the Excel application.
2. From this new instance, select File → Open and then select the workbook.
Note:
You can still view the Excel workbooks added to the project by selecting Open File in
Excel from the right-click context menu.
If a fatal error occurs while a cell of DesignXplorer is either in a pending state or resuming from a
pending update, the project could be stuck in the pending state even if the pending updates are
no longer running.
To recover from this situation, you can abandon the pending updates by selecting Tools → Abandon
Pending Updates.
To update a DesignXplorer cell or a project, derived parameters must be correctly defined. If the
definition for a derived parameter is invalid, which can occur when the parameter is defined without
units or the value of the Expression property has an incorrect syntax, DesignXplorer cells remain
in an unfulfilled state until the error is fixed.
To verify derived parameter definition, right-click the Parameter Set bar and select Edit to open
the Parameter Set tab. In the Outline pane, derived parameters that are incorrectly defined have
a red cell containing an error message in the Value column. In the Properties pane for such a derived
parameter, the same error message displays in the Expression property cell.
Based on the information in the error message, correct derived parameter definitions as follows:
1. In the Outline pane, select the incorrectly defined derived parameter.
2. In the Parameters pane, enter a valid expression for the Expression property.
3. Refresh the project or the unfulfilled DesignXplorer cell to apply the changes.
Feature Archive
The purpose of this archive is to provide documentation for deactivated features of previous versions.
Contact Ansys support if you have questions on how to access the capabilities.
Six Sigma Analysis
F-Test Filtering
Export a Response Surface as a VHDL-AMS Language File
Define as Starting Point
Cumulative Distribution Plot Type
Manual Production of ROM Files from Standalone Fluent
Working with DesignXplorer Extensions
Six Sigma Analysis
A Six Sigma Analysis helps you understand how your performance will vary with your design tolerances.
It uses statistical distribution functions (such as Gaussian, normal, uniform, and so
on) to describe uncertainty parameters.
Using a Six Sigma Analysis system, you can determine the extent to which uncertainties in the
model affect the results of the analysis. An uncertainty (random quantity) is a parameter whose value
is impossible to determine at a given point in time (if it is time-dependent) or at a given location (if
it is location-dependent). An ambient temperature is an example. You cannot know precisely what
the temperature will be one week from now in a given city.
Before solving the DOE, you must set up input parameter options. The following panes for the
Design of Experiments cell contain objects that are unique to Six Sigma Analysis:
Outline:
• View the Skewness and Kurtosis properties for each input parameter distribution.
• View the calculated mean and standard deviation for all distribution types except Normal and
Truncated Normal where you can set those values.
• View the lower bound and upper bound for each parameter, which are used to generate the
DOE.
Properties: When an input parameter is selected in the Outline pane, the Properties pane allows
you to set its properties:
• Distribution Type: Type of distribution associated with the input parameter. Choices follow. For
more information, see Distribution Functions (p. 389).
– Uniform
– Triangular
– Normal
– Truncated Normal
– Lognormal
– Exponential
– Beta
– Weibull
• Distribution Upper Bound (Uniform, Triangular, Truncated Normal, and Beta only)
Table: When an output parameter or chart is selected in the Outline pane, the Table pane displays
the design points table, which populates automatically during the solving of the points. When an
input parameter is selected in the Outline pane, the Table pane displays data for each of the
samples in the set:
• Quantile: Input parameter value point for the given PDF and CDF values.
• PDF: Probability Density Function of the input parameter along the X axis.
• CDF: Cumulative Distribution Function is the integration of PDF along the X axis.
Chart : When an input parameter is selected in the Outline pane, the Chart pane displays the
Probability Density Function and Cumulative Distribution Function for the distribution type chosen
for the input parameter. The Parameters Parallel chart and Design Points vs Parameter chart are
also available, just as in a standard DOE (p. 45).
The following panes for the Six Sigma Analysis cell allow you to customize your analysis and view
the results:
Outline:
• Select each input parameter and view its distribution properties, statistics, upper and lower
bounds, initial value, and distribution chart.
• Select each output parameter and view its calculated maximum and minimum values, stat-
istics and distribution chart.
• Set the table display format for each parameter to Quantile-Percentile or Percentile-
Quantile.
Properties:
• Sampling Type: Type of sampling for the Six Sigma Analysis. For more information, see Sample
Generation (p. 393).
– WLHS: When selected, the Weighted Latin Hypercube Sampling technique is used.
For parameters, Probability Table specifies how to display analysis information in the Table pane:
• Quantile-Percentile
• Percentile-Quantile
• Probability: Probability that the parameter is less than or equal to the specified value.
• Sigma Level: Approximate number of standard deviations away from the sample mean for
the given sample value.
If Probability Table is set to Quantile-Percentile in the Properties pane, you can edit the parameter
value and see the corresponding Probability and Sigma Level values. If Probability Table
is set to Percentile-Quantile, the columns are reversed. You can then enter a Probability or Sigma
Level value and see the corresponding changes in the other columns.
Chart: When a parameter is selected in the Outline pane, the Chart pane displays the Probability
Density Function and Cumulative Distribution Function. A global Sensitivities chart is available in
the Outline pane.
• Change various generic chart properties for the results in the Chart pane.
For more information, see Working with Sensitivities (p. 291) and Statistical Sensitivities in a
SSA (p. 396).
• Uniform
• Triangular
• Normal
• Truncated Normal
• Lognormal
• Exponential
• Beta
• Weibull
• You measured the snow height on both ends of the beam 30 different times.
From these histograms, you can conclude that an exponential distribution is suitable to describe
the scatter of the snow height data for H1 and H2. From the measured data, you determine
that the average snow height of H1 is 100 mm and the average snow height of H2 is 200 mm.
You can directly derive the parameter λ by dividing 1 by the mean value. This leads to λ1 =
1/100 = 0.01 for H1, and λ2 = 1/200 = 0.005 for H2.
When you specify input parameters for a Six Sigma Analysis, DesignXplorer validates the distri-
bution properties to ensure that there are no inconsistencies in the distribution definition.
However, in some cases, invalid attribute combinations can already exist. For example, a project
created with a previous version of DesignXplorer could have invalid attribute combinations.
When you open the project, DesignXplorer runs checks to catch any inconsistencies in the
distribution definition. The following table shows the checks performed for each distribution
type:
Truncated Normal: The lower bound is smaller than the upper bound, and these bounds are not too
far away from the mean in terms of standard deviations.
If inconsistencies are found, a warning dialog box opens. The Messages pane provides addi-
tional information.
In the Project Schematic, the Design of Experiments cell has a state of Attention Required,
indicating that you must edit the distribution definition to resolve any inconsistencies.
Example:
The truncated normal distribution is used in this example. It is defined by the following attributes:
• Mean
• Standard deviation
In the following figure, you can see these attributes in the Properties and Chart panes for the
Design of Experiments cell in the Six Sigma Analysis system:
In this example, Distribution Upper Bound (3) is far away from the Mean (1.5) when compared
to the Standard Deviation (0.075). The validation check fails because (Upper Bound −
Mean)/Standard Deviation = 20. To fix the inconsistency, you must either reduce the bounds,
modify the mean, or modify the standard deviation.
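A sketch of such a consistency check for the truncated normal distribution follows. The maximum allowed number of standard deviations between a bound and the mean (max_sigma) is an illustrative assumption, because the surviving text does not state the limit DesignXplorer uses.

def truncated_normal_is_consistent(lower, upper, mean, std_dev, max_sigma=10.0):
    """Check the bounds of a truncated normal distribution against its mean and standard deviation."""
    if lower >= upper:
        return False
    return (abs(upper - mean) / std_dev <= max_sigma and
            abs(mean - lower) / std_dev <= max_sigma)

# Example from the text: (3 - 1.5) / 0.075 = 20, so the check fails.
print(truncated_normal_is_consistent(0.0, 3.0, 1.5, 0.075))  # False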
• To update each cell in the analysis separately, right-click the cell and select Update.
• To update the entire analysis at once, right-click the system header in the Project Schematic
and select Update.
• To update the entire project at once, in the Project Schematic, click Update Project on the
toolbar.
Tables (SSA)
In the Six Sigma Analysis component tab, you can view probability tables for any input or output
parameter selected in the Outline pane. In the Properties pane for the parameter, select Quantile-
Percentile or Percentile-Quantile for Probability Table.
• To add a value to the Quantile-Percentile table, type the desired value into the New
Parameter Value cell at the end of the table. A row with the value that you entered is
added to the table in the appropriate location.
• To add a new value to the Percentile-Quantile table, type the desired value into the appro-
priate cell (New Probability Value or New Sigma Level) at the end of the table.
• To delete a row from either table, right-click the row and select Remove Level.
You can also overwrite any value in an editable column. Corresponding values are then displayed
in the other columns in this row.
You can change various generic chart properties for this chart.
Note:
If the p-Value calculated for a particular input parameter is above the value specified
for Significance Level in Tools → Options → Design Exploration, the bar for that
parameter is shown as a flat line on the chart. For more information, see Viewing Sig-
nificance and Correlation Values (p. 70).
Statistical Measures
When the Six Sigma Analysis cell is updated, the following statistical measures display in the
Properties pane for each parameter. For descriptions of these measures, see SSA Theory (p. 398).
• Mean
• Standard deviation
• Skewness
• Kurtosis
• Shannon entropy
• Signal-to-noise ratios
An uncertainty (random quantity) is a parameter whose value is impossible to determine at a given point in time (if it is time-dependent) or at a given location (if it
is location-dependent). An example is ambient temperature. You cannot know precisely what the
temperature will be one week from now in a given city.
SSA uses statistical distribution functions (such as Gaussian, normal, uniform, and so on) to describe
uncertain parameters.
SSA allows you to determine whether your product satisfies Six Sigma quality criteria. A product has
Six Sigma quality if only 3.4 parts out of every 1 million manufactured fail. This quality definition is
based on the assumption that an output parameter relevant to the quality and performance assessment
follows a Gaussian distribution, as shown.
An output parameter that characterizes product performance is typically used to determine whether
a product's performance is satisfactory. The parameter must fall within the interval bounded by the
lower specification limit (LSL) and upper specification limit (USL). Sometimes only one of these limits
exists.
An example of this is a case when the maximum von Mises stress in a component must not exceed
the yield strength. The relevant output parameter is the maximum von Mises stress and the USL is
the yield strength. The lower specification limit is not relevant. The area below the probability density
function falling outside the specification interval is a direct measure of the probability that the product
does not conform to the quality criteria, as shown above. If the output parameter does follow a
Gaussian distribution, then the product satisfies a Six Sigma quality criterion if both specification
limits are at least six standard deviations away from the mean value.
In reality, an output parameter rarely follows a Gaussian distribution exactly. However, the definition
of Six Sigma quality is inherently probabilistic. It represents an admissible probability that parts do
not conform to the quality criteria defined by the specified limits. The nonconformance probability
can be calculated no matter which distribution the output parameter actually follows. For distributions
other than Gaussian, the Six Sigma level is not really six standard deviations away from the mean
value, but it does represent a probability of 3.4 parts per million, which is consistent with the definition
of Six Sigma quality.
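For a Gaussian output parameter, the nonconformance probability follows directly from the specification limits. The following is a minimal sketch in Python (SciPy assumed, which is not part of DesignXplorer); the second call illustrates that the customary 3.4 parts-per-million figure corresponds to a one-sided margin of 4.5 standard deviations, that is, a six-sigma limit combined with the conventional 1.5-sigma shift of the mean:

    from scipy.stats import norm

    def nonconformance_probability(mean, std, lsl=None, usl=None):
        """Probability that a Gaussian output parameter falls outside [lsl, usl]."""
        p = 0.0
        if lsl is not None:
            p += norm.cdf((lsl - mean) / std)   # area below the lower specification limit
        if usl is not None:
            p += norm.sf((usl - mean) / std)    # area above the upper specification limit
        return p

    # Limits placed six standard deviations from the nominal mean:
    print(nonconformance_probability(mean=0.0, std=1.0, lsl=-6.0, usl=6.0))   # ~2.0e-9
    # The customary 3.4 ppm figure assumes the mean may drift by 1.5 sigma:
    print(nonconformance_probability(mean=1.5, std=1.0, usl=6.0))             # ~3.4e-6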
This section describes a Six Sigma Analysis system in DesignXplorer and how to use it to perform
a SSA.
SSA Principles
Guidelines for Selecting SSA Variables
Sample Generation
Weighted Latin Hypercube Sampling (WLHS)
Postprocessing SSA Results
SSA Theory
SSA Principles
Computer models are described with specific numerical and deterministic values. For example,
material properties are entered using certain values, and the geometry of the component is assigned
a certain length or width. An analysis based on a given set of specific numbers and values is called
a deterministic analysis. The accuracy of a deterministic analysis depends upon the assumptions
and input values used for the analysis.
While scatter and uncertainty naturally occur in every aspect of an analysis, deterministic analyses
do not take them into account. To deal with uncertainties and scatter, use SSA to answer the fol-
lowing questions:
• If the input variables of a finite element model are subject to scatter, how large is the scatter of
the output parameters? How robust are the output parameters? Here, output parameters can be
any parameter that Ansys Workbench can calculate. Examples are the temperature, stress, strain,
or deflection at a node, the maximum temperature, stress, strain, or deflection of the model, and
so on.
• If the output is subject to scatter due to the variation of the input variables, then what is the
probability that a design criterion given for the output parameters is no longer met? How large
is the probability that an unexpected and unwanted event, such as failure, takes place?
• Which input variables contribute the most to the scatter of an output parameter and to the failure
probability? What are the sensitivities of the output parameter with respect to the input variables?
SSA can be used to determine the effect of one or more variables on the outcome of the analysis.
In addition to SSA techniques available, Ansys Workbench offers a set of strategic tools to enhance
the efficiency of the SSA process. For example, you can graph the effects of one input parameter
versus an output parameter, and you can easily add more samples and additional analysis loops
to refine your analysis.
In traditional deterministic analyses, uncertainties are either ignored or accounted for by applying
conservative assumptions. You would typically ignore uncertainties if you know for certain that the
input parameter has no effect on the behavior of the component under investigation. In this case,
only the mean values or some nominal values are used in the analysis. However, in some situations,
the influences of uncertainties exist but are still neglected, as for the thermal expansion coefficient,
for which the scatter is usually ignored.
If you are performing a thermal analysis and want to evaluate the thermal stresses, the governing relation is
σ_therm = E · α · ΔT
because the thermal stresses are directly proportional to the Young's modulus E as well as to the
thermal expansion coefficient α of the material.
The following table shows the probability that the thermal stresses will be higher than expected,
taking uncertainty variables into account.
Uncertainty variables taken into account          Probability that the       Probability that the
                                                  thermal stresses are       thermal stresses are
                                                  more than 5% higher        more than 10% higher
                                                  than expected              than expected
Young's modulus (Gaussian distribution            ~16%                       ~2.3%
with 5% standard deviation)
Young's modulus and thermal expansion             ~22%                       ~8%
coefficient (each with Gaussian distribution
with 5% standard deviation)
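The first row of this table can be reproduced directly from the standard normal distribution, because the thermal stress is proportional to the Young's modulus. A minimal sketch in Python (SciPy assumed):

    from scipy.stats import norm

    cv = 0.05                      # 5% standard deviation relative to the mean
    # Stress is proportional to E, so a stress 5% (or 10%) above its expected value
    # corresponds to E being 1 (or 2) standard deviations above its mean.
    print(norm.sf(0.05 / cv))      # ~0.159  -> about 16%
    print(norm.sf(0.10 / cv))      # ~0.023  -> about 2.3%

When the thermal expansion coefficient scatters as well, the relative scatter of the stress grows roughly with the root sum of squares of the individual scatters, which is why the probabilities in the second row are larger.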
Reliability is typically a concern when product or component failures have significant financial
consequences (costs of repair, replacement, warranty, or penalties) or worse, can result in injury or
loss of life.
If you use a conservative assumption, the difference in thermal stresses shown above tells you that
uncertainty or randomness is involved. Conservative assumptions are usually expressed in terms
of safety factors. Sometimes regulatory bodies demand safety factors in certain procedural codes.
If you are not faced with such restrictions or demands, then using conservative assumptions and
safety factors can lead to inefficient and costly over-design. By using SSA methods, you can avoid
over-design while still ensuring the safety of the component.
SSA methods even enable you to quantify the safety of the component by providing a probability
that the component will survive operating conditions. Quantifying a goal is the necessary first step
toward achieving it.
More information about choosing and defining uncertainty variables can be found in the following
sections.
Uncertainty Variables for Response Surface Analyses
For a response surface analysis, limit the uncertainty variables to
the ones you know have a significant effect on the result parameters. If you are unsure which
uncertainty variables are important, include all of the random variables you can think of and then
perform a Monte Carlo analysis. After you learn which uncertainty variables are important and
should be included in your response surface analysis, you can eliminate those that are unnecessary.
The type and source of the data you have determines which distribution functions can be used
or are best suited to your needs.
Measured Data
Mean Values, Standard Deviation, Exceedance Values
No Data
Measured Data
If you have measured data, then you must first know how reliable that data is. Data scatter is
not just an inherent physical effect but also includes inaccuracy in the measurement itself. You
must consider that the person taking the measurement might have applied a "tuning" to the
data. For example, if the data measured represents a load, the person measuring the load could
have rounded the measurement values. This means that the data you receive is not truly the
measured values. The amount of this tuning could provide a deterministic bias in the data that
you need to address separately. If possible, you should discuss any bias that might have been
built into the data with the person who provided the data to you.
If you are confident about the quality of the data, then how you proceed depends on how
much data you have. In a single production field, the amount of data is typically sparse. If you
have only a small amount of data, use it only to evaluate a rough figure for the mean value
and the standard deviation. In these cases, you could model the uncertainty variable as a
Gaussian distribution if the physical effect you model has no lower and upper limit, or use the
data and estimate the minimum and maximum limit for a uniform distribution.
In a mass production field, you probably have a lot of data. In these cases you could use a
commercial statistical package that allows you to actually fit a statistical distribution function
that best describes the scatter of the data.
The mean value and the standard deviation are most commonly used to describe the scatter
of data. Frequently, information about a physical quantity is given as a value with a scatter, such as 100 ± 5.5.
Often, this form means that the value 100 is the mean value and 5.5 is the standard deviation.
Data in this form implies a Gaussian distribution, but you must verify this (a mean value and
standard deviation can be provided for any collection of data regardless of the true distribution
type). If you have more information, for example, you know that the data is lognormal distrib-
uted, then SSA allows you to use the mean value and standard deviation for a lognormal dis-
tribution.
Sometimes the scatter of data is also specified by a mean value and an exceedance confidence
limit. The yield strength of a material is sometimes given in this way. For example, a 99% ex-
ceedance limit based on a 95% confidence level is provided. This means that, based on the measured
data, you can be 95% certain that in 99% of all cases the property values exceed the specified
limit and in only 1% of all cases do they drop below the specified limit. The supplier of this
information is using the mean value, the standard deviation, and the number of samples of
the measured data to derive this kind of information. If the scatter of the data is provided in
this way, the best way to pursue this further is to ask for more details from the data supplier.
Because the given exceedance limit is based on the measured data and its statistical assessment,
the supplier might be able to provide you with the details that were used.
If the data supplier does not give you any further information, then you could consider assuming
that the number of measured samples was large. If the given exceedance limit is denoted with xL
and the given mean value is denoted with µ, the standard deviation can be derived
from the equation:
σ = ( µ − xL ) / C
where the factor C depends on the exceedance probability:
Exceedance Probability    C
99.5%                     2.5758
99.0%                     2.3263
97.5%                     1.9600
95.0%                     1.6449
90.0%                     1.2816
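The factor C in this table is the quantile of the standard normal distribution at the exceedance probability, so the derivation of the standard deviation can be scripted. A minimal sketch in Python (SciPy assumed; the numerical values are invented for illustration):

    from scipy.stats import norm

    mean = 235.0          # given mean value (for example, a yield strength in MPa)
    x_limit = 225.0       # given 99% exceedance limit
    exceedance = 0.99     # 99% of all cases exceed x_limit

    C = norm.ppf(exceedance)            # 2.3263 for 99%, matching the table above
    sigma = (mean - x_limit) / C        # derived standard deviation
    print(C, sigma)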
No Data
In situations where no information is available, there is never just one right answer. Following
are hints about which physical quantities are usually described in terms of which distribution
functions. This information might help you with the particular physical quantity that you have
in mind. Additionally, a list follows of which distribution functions are usually used for which
kind of phenomena. You might need to choose from multiple options.
Geometric Tolerances
• If you are designing a prototype, you could assume that the actual dimensions of the manu-
factured parts would be somewhere within the manufacturing tolerances. In this case it is
reasonable to use a uniform distribution, where the tolerance bounds provide the lower and
upper limits of the distribution function.
• If the manufacturing process generates a part that is outside the tolerance band, one of two
things can happen: the part must either be fixed (reworked) or scrapped. These two cases
are usually on opposite ends of the tolerance band. An example of this is drilling a hole. If
the hole is outside the tolerance band, but it is too small, the hole can just be drilled larger
(reworked). If, however, the hole is larger than the tolerance band, then the problem is either
expensive or impossible to fix. In such a situation, the parameters of the manufacturing
process are typically tuned to hit the tolerance band closer to the rework side, steering clear
of the side where parts need to be scrapped. In this case, a Beta distribution is more appro-
priate.
• Often a Gaussian (normal) distribution is used. The fact that this distribution has no bounds
(it spans minus infinity to infinity) is theoretically a severe violation of the fact that geometrical
extensions are described by finite positive numbers only. However, in practice, this lack
of bounds is irrelevant if the standard deviation is very small compared to the value of the
geometric extension, which is typically true for geometric tolerances.
Material Data
• In some cases the material strength of a part is governed by the weakest-link theory. This
theory assumes that the entire part fails whenever its weakest spot fails. For material properties
where the weakest-link assumptions are valid, the Weibull distribution might be applicable.
• For some cases, it is acceptable to use the scatter information from a similar material type.
For example, if you know that a material type very similar to the one you are using has a
certain material property with a Gaussian distribution and a standard deviation of ±5% around
the measured mean value, then you can assume that for the material type you are using,
you only know its mean value. In this case, you could consider using a Gaussian distribution
with a standard deviation of ±5% around the given mean value.
Load Data
For loads, you usually only have a nominal or average value. You could ask the person who
provided the nominal value the following questions: Out of 1000 components operated under
real life conditions, what is the lowest load value any one of the components sees? What is the
most likely load value? That is, what is the value that most of these 1000 components are
subject to? What is the highest load value any one component would be subject to? To be
safe, you should ask these questions not only of the person who provided the nominal value
but also of one or more experts who are familiar with how your products are operated under
real-life conditions. From all the answers you get, you can then consolidate what the minimum,
the most likely, and the maximum value probably is. As verification, compare this picture with
the nominal value that you would use for a deterministic analysis. The nominal value should
be close to the most likely value unless using a conservative assumption. If the nominal value
includes a conservative assumption (is biased), its value is probably close to the maximum
value. Finally, you can use a triangular distribution using the minimum, most likely, and max-
imum values obtained.
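Once the minimum, most likely, and maximum load values have been consolidated, the triangular distribution is straightforward to work with. A minimal sketch in Python (SciPy assumed; the load values are invented for illustration):

    from scipy.stats import triang

    lo, mode, hi = 800.0, 1000.0, 1400.0          # consolidated load values in N
    c = (mode - lo) / (hi - lo)                   # SciPy's shape parameter
    load = triang(c, loc=lo, scale=hi - lo)

    print(load.mean(), load.std())                # mean and scatter of the assumed load
    print(load.ppf(0.99))                         # load level exceeded in only 1% of cases
    print(load.rvs(size=5, random_state=0))       # a few random samples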
Distribution Functions
Beta Distribution
Figure: Probability density function fX(x) of the Beta distribution with shape parameters r and t between the bounds xmin and xmax.
You provide the shape parameters r and t and the distribution lower bound xmin and upper bound
xmax of the random variable X.
The Beta distribution is very useful for random variables that are bounded at both sides. If linear
operations are applied to random variables that are all subjected to a uniform distribution,
then the results can usually be described by a Beta distribution. For example, if you are dealing
with tolerances and assemblies where the components are assembled and the individual toler-
ances of the components follow a uniform distribution (a special case of the Beta distribution),
the overall tolerances of the assembly are a function of adding or subtracting the geometrical
extension of the individual components (a linear operation). Hence, the overall tolerances of
the assembly can be described by a Beta distribution. Also, as previously mentioned, the Beta
distribution can be useful for describing the scatter of individual geometrical extensions of
components as well.
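As an illustration of a tolerance bounded at both sides, the following Python sketch (SciPy assumed; the dimension and shape parameters are invented) defines a Beta-distributed dimension between its tolerance bounds and checks that all samples stay within them:

    from scipy.stats import beta

    x_min, x_max = 9.95, 10.05        # tolerance bounds of a 10 mm dimension
    r, t = 4.0, 2.0                   # shape parameters skewing the scatter toward x_max
    dim = beta(r, t, loc=x_min, scale=x_max - x_min)

    samples = dim.rvs(size=1000, random_state=1)
    print(samples.min() >= x_min and samples.max() <= x_max)   # True: bounded at both sides
    print(dim.mean(), dim.std())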
Exponential Distribution
Figure: Probability density function fX(x) of the exponential distribution, starting at the lower bound xmin.
You provide the decay parameter λ and the shift (or distribution lower bound) xmin of the
random variable X.
The exponential distribution is useful in cases where there is a physical reason that the prob-
ability density function is strictly decreasing as the uncertainty variable value increases. The
distribution is mostly used to describe time-related effects. For example, it describes the time
between independent events occurring at a constant rate. It is therefore very popular in the
area of systems reliability and lifetime-related systems reliability, and it can be used for the life
distribution of non-redundant systems. Typically, it is used if the lifetime is not subjected to
wear-out and the failure rate is constant with time. Wear-out is usually a dominant life-limiting
factor for mechanical components that would preclude the use of the exponential distribution
for mechanical parts. However, where preventive maintenance exchanges parts before wear-
out can occur, then the exponential distribution is still useful to describe the distribution of
the time until exchanging the part is necessary.
Normal (Gaussian) Distribution
Figure: Probability density function fX(x) of the Gaussian distribution with mean value µ.
You provide values for the mean value µ and the standard deviation σ of the random variable
X.
The Gaussian (normal) distribution is a basic and commonly used distribution, typically used to
describe the scatter of measurement data of physical phenomena. It is also applicable
if the random variable is a linear combination of two or more other effects if those effects also
follow a Gaussian distribution.
Lognormal Distribution
Figure: Probability density function fX(x) of the lognormal distribution.
You provide values for the logarithmic mean value ξ and the logarithmic deviation δ. The
parameters ξ and δ are the mean value and standard deviation of ln(X), the natural logarithm of the random variable X.
The lognormal distribution is another basic and commonly-used distribution, typically used to
describe the scatter of the measurement data of physical phenomena, where the logarithm of
the data would follow a normal distribution. The lognormal distribution is suitable for phenom-
ena that arise from the multiplication of a large number of error effects. It is also used for random
variables that are the result of multiplying two or more random effects (if the effects that get
multiplied are also lognormally distributed). It is often used for lifetime distributions such as
the scatter of the strain amplitude of a cyclic loading that a material can endure until low-cycle-
fatigue occurs.
Uniform Distribution
Figure: Probability density function fX(x) of the uniform distribution between xmin and xmax.
You provide the distribution lower bound xmin and upper bound xmax of the random
variable X.
The uniform distribution is a fundamental distribution for cases where the only information
available is a lower and an upper bound. It is also useful to describe geometric tolerances. It
can also be used in cases where any value of the random variable is as likely as any other
within a certain interval. In this sense, it can be used for cases where "lack of engineering
knowledge" plays a role.
Triangular Distribution
Figure: Probability density function fX(x) of the triangular distribution.
You provide the minimum value xmin (or distribution lower bound), the most likely value xmlv,
and the maximum value xmax (or distribution upper bound).
The triangular distribution is most helpful to model a random variable when actual data is not
available. It is very often used to capture expert opinions, as in cases where the only data you
have are the well-founded opinions of experts. However, regardless of the physical nature of
the random variable you want to model, you can always ask experts questions like "Out of 1000
components, what are the lowest and highest load values for this random variable?" and other
similar questions. You should also include an estimate for the random variable value derived
from a computer program, as described above. For more details, see Choosing a Distribution
for a Random Variable (p. 387).
Truncated Gaussian Distribution
Figure: Probability density function fX(x) of the truncated Gaussian distribution between xmin and xmax, based on a non-truncated Gaussian with mean µG and standard deviation σG.
You provide the mean value µG and the standard deviation σG of the non-truncated Gaussian
distribution and the truncation limits xmin and xmax (or distribution lower bound and upper
bound).
The truncated Gaussian distribution typically appears where the physical phenomenon follows
a Gaussian distribution, but the extreme ends are cut off or are eliminated from the sample
population by quality control measures. As such, it is useful to describe the material properties
or geometric tolerances.
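A truncated Gaussian variable can be sampled by expressing the truncation limits in standard deviations of the non-truncated distribution. A minimal sketch in Python (SciPy assumed; the numerical values are illustrative):

    from scipy.stats import truncnorm

    mu_g, sigma_g = 50.0, 2.0          # mean and standard deviation of the non-truncated Gaussian
    x_min, x_max = 46.0, 54.0          # truncation limits, for example from quality control

    a = (x_min - mu_g) / sigma_g       # limits expressed in standard deviations
    b = (x_max - mu_g) / sigma_g
    var = truncnorm(a, b, loc=mu_g, scale=sigma_g)

    samples = var.rvs(size=1000, random_state=0)
    print(samples.min(), samples.max())     # all samples fall inside [x_min, x_max]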
Weibull Distribution
Figure: Probability density function fX(x) of the Weibull distribution with characteristic value xchr, exponent m, and lower bound xmin.
You provide the Weibull characteristic value xchr, the Weibull exponent m, and the minimum
value xmin (or distribution lower bound). There are several special cases. For xmin = 0 the distribution
coincides with a two-parameter Weibull distribution. The Rayleigh distribution is a special
case of the Weibull distribution with m = 2 and xmin = 0.
In engineering, the Weibull distribution is most often used for strength or strength-related
lifetime parameters, and is the standard distribution for material strength and lifetime parameters
for very brittle materials (for these very brittle materials, the "weakest-link theory" is applicable).
For more details, see Choosing a Distribution for a Random Variable (p. 387).
Sample Generation
For SSA, the sample generation is based on the Latin Hypercube Sampling (LHS) technique by default.
In the Properties view for a Six Sigma Analysis cell, Sampling Type can be set to either LHS or
WLHS (Weighted Latin Hypercube Sampling).
LHS is a more advanced and efficient form of Monte Carlo analysis methods. With LHS, the points
are randomly generated in a square grid across the design space, but no two points share input
parameters of the same value. This means that no point shares a row or a column of the grid with
any other point. Generally, LHS requires 20% to 40% fewer simulation loops than the Direct Monte
Carlo technique to deliver the same results with the same accuracy. However, that number is largely
problem-dependent.
In WLHS, the input variables are discretized unevenly in their design space. The size of each cell (or
hypercube, in multiple dimensions) of probability of occurrence is evaluated according to the topology
of the output (response): cells are made relatively smaller around the minimum and maximum of the
response. Because the cell sizes are driven by the response, WLHS is somewhat unsymmetrical (biased).
In general, WLHS is intended to stretch the sample distribution farther out into the lower and upper
tails with fewer runs than LHS. This means that, for the same number of runs, WLHS is expected to
resolve smaller probabilities in the tails than LHS. Due to this bias, however, the evaluated statistics
can differ somewhat from those obtained with LHS.
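For experimenting with sample sizes outside DesignXplorer, the LHS scheme itself is available in standard scientific Python libraries. A minimal sketch (SciPy assumed; DesignXplorer's own LHS and WLHS implementations may differ in detail):

    import numpy as np
    from scipy.stats import qmc, norm

    n_samples, n_inputs = 100, 2
    sampler = qmc.LatinHypercube(d=n_inputs, seed=0)
    u = sampler.random(n=n_samples)            # stratified points in the unit square

    # Map the uniform strata onto the uncertainty variables' distributions:
    young = norm(loc=2.0e11, scale=0.05 * 2.0e11).ppf(u[:, 0])   # Gaussian, 5% std dev
    alpha = norm(loc=1.2e-5, scale=0.05 * 1.2e-5).ppf(u[:, 1])

    # Each stratum of the grid is hit exactly once per variable (the LHS property):
    print(np.unique(np.floor(u[:, 0] * n_samples)).size == n_samples)   # True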
Postprocessing SSA Results
Histogram
A histogram plot is most commonly used to visualize the scatter of a SSA variable. A histogram
is derived by dividing the range between the minimum value and the maximum value into intervals
of equal size. Then SSA determines how many samples fall within each interval, that is, how many
"hits" landed in each interval.
SSA also allows you to plot histograms of your uncertainty variables so you can double-check
that the sampling process generated the samples according to the distribution function you
specified. For uncertainty variables, SSA not only plots the histogram bars, but also a curve for
values derived from the distribution function you specified. Visualizing histograms of the uncer-
tainty variables is another way to verify that enough simulation loops have been performed. If
the number of simulation loops is sufficient, the histogram bars have the following characteristics:
• They are close to the curve that is derived from the distribution function.
• They have no major gaps, that is, there are no intervals with zero hits while neighboring intervals
have many hits.
However, if the probability density function is flattening out at the far ends of a distribution (for
example, the exponential distribution flattens out for large values of the uncertainty variable)
then there might logically be gaps. Hits are counted only as positive integer numbers and as
these numbers gradually get smaller, a zero hit can happen in an interval.
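This visual check can also be scripted by comparing the histogram of the generated samples with the probability density function specified for the uncertainty variable. A minimal sketch in Python (NumPy and SciPy assumed; the distribution and sample size are illustrative):

    import numpy as np
    from scipy.stats import norm

    dist = norm(loc=100.0, scale=5.5)                 # the specified distribution
    samples = dist.rvs(size=2000, random_state=0)     # stand-in for the SSA samples

    counts, edges = np.histogram(samples, bins=20, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])

    # Bars should lie close to the specified density and show no major gaps:
    print(np.max(np.abs(counts - dist.pdf(centers))))    # small deviation
    print(np.count_nonzero(counts == 0))                 # ideally zero empty interior bins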
Cumulative Distribution Function
The value of the cumulative distribution function at a location x0 is the probability that the
values of the random variable X stay below x0. Whether this probability represents the failure probability or the reli-
ability of your component depends on how you define failure.
For example, if you design a component such that a certain deflection should not exceed a certain
admissible limit, then a failure event occurs if the critical deflection exceeds this limit. Thus, for
this example, the cumulative distribution function is interpreted as the reliability curve of the
component. On the other hand, if you design a component such that the eigenfrequencies are
beyond a certain admissible limit, then a failure event occurs if an eigenfrequency drops below
this limit. So for this example, the cumulative distribution function is interpreted as the failure
probability curve of the component.
The cumulative distribution function also lets you visualize what the reliability or failure probab-
ility would be if you chose to change the admissible limits of your design.
A cumulative distribution function plot is an important tool to quantify the probability that the
design of your product does or does not satisfy quality and reliability requirements. The value of
a cumulative distribution function of a particular output parameter represents the probability
that the output parameter remains below a certain level as indicated by the values on the X axis
of the plot.
The probability that Shear Stress Maximum remains less than a limit value of 1.71E+5 is about
93%, which means that there is a 7% probability that Shear Stress Maximum exceeds the limit
value of 1.71E+5.
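The same kind of probability can be estimated directly from the sample values of an output parameter. A minimal sketch in Python (NumPy assumed; the samples and the limit value are invented so that the result is close to the figures quoted above):

    import numpy as np

    rng = np.random.default_rng(0)
    shear_stress = rng.normal(1.5e5, 1.4e4, size=1000)   # stand-in for the SSA output samples
    limit = 1.71e5

    p_below = np.mean(shear_stress <= limit)     # empirical cumulative probability at the limit
    print(p_below, 1.0 - p_below)                # e.g. ~0.93 below the limit, ~0.07 above it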
Probability Table
Instead of reading data from the cumulative distribution chart, you can also obtain important
information about the cumulative distribution function in tabular form. A probability table is
available that is designed to provide probability values for an even spread of levels of an input
or output parameter. You can view the table in either Quantile-Percentile (Probability) mode or
Percentile-Quantile (Inverse Probability) mode. The probability table lets you find out the para-
meter levels corresponding to probability levels that are typically used for the design of reliable
products. If you want to see the probability of a value that is not listed, you can add it to the
table. Likewise, you can add a probability or sigma-level and see the corresponding values. You
can also delete values from the table. For more information, see Using Statistical Postpro-
cessing (p. 382).
Note:
Both tables have more rows if the number of samples is increased. If you are designing
for high product reliability, which is a low probability that the product does not conform
to quality or performance requirements, then the sample size must be adequately
large to address those low probabilities. Typically, if the probability that your product does
not conform to the requirements is denoted with Preq, then the minimum number of samples
should be on the order of a multiple of 1/Preq; the smaller Preq is, the larger the sample
set that is required to resolve it.
Statistical Sensitivities
The sensitivities available under the Six Sigma Analysis and Goal Driven Optimization views
are statistical sensitivities. Statistical sensitivities are global sensitivities, whereas the parameter
sensitivities available under the Responses view are local sensitivities. The global, statistical
sensitivities are based on a correlation analysis using the generated sample points, which are
located throughout the entire space of input parameters. The local parameter sensitivities are
based on the difference between the minimum and maximum value obtained by varying one
input parameter while holding all other input parameters constant. As such, the values obtained
for local parameter sensitivities depend on the values of the input parameters that are held
constant. Global, statistical sensitivities do not depend on the values of the input parameters,
because all possible values for the input parameters are already taken into account when determ-
ining the sensitivities.
Design exploration displays sensitivities as both a bar chart and pie chart. The charts describe
the sensitivities in an absolute fashion (taking the signs into account). A positive sensitivity indicates
that increasing the value of the uncertainty variable increases the value of the result parameter
for which the sensitivities are plotted. Conversely, a negative sensitivity indicates that increasing
the uncertainty variable value reduces the result parameter value.
Using a sensitivity plot, you can answer the following important questions.
How can I make the component more reliable or improve its quality?
If the results for the reliability or failure probability of the component do not reach the expected
levels, or if the scatter of an output parameter is too wide and therefore not robust enough for
a quality product, then you should make changes to the important input variables first. Modifying
an input variable that is insignificant would be a waste of time.
Of course, you are not in control of all uncertainty parameters. A typical example where you have
very limited means of control involves material properties. For example, if it turns out that the
environmental temperature (outdoor) is the most important input parameter, then there is
probably nothing you can do. However, even if you find out that the reliability or quality of your
product is driven by parameters that you cannot control, this data has importance—it is likely
that you have a fundamental flaw in your product design! You should watch for influential para-
meters like these.
If the input variable you want to tackle is a geometry-related parameter or a geometric tolerance,
then improving the reliability and quality of your product means that it might be necessary to
change to a more accurate manufacturing process or use a more accurate manufacturing machine.
If it is a material property, then there might be nothing you can do about it. However, if you only
had a few measurements for a material property and consequently used only a rough guess
about its scatter, and the material property turns out to be an important driver of product reliab-
ility and quality, then it makes sense to collect more raw data.
How can I save money without sacrificing the reliability or the quality of the product?
If the results for the reliability or failure probability of the component are acceptable or if the
scatter of an output parameter is small and therefore robust enough for a quality product, then
there is usually the question of how to save money without reducing the reliability or quality. In
this case, you should first make changes to the input variables that turned out to be insignificant
because they do not affect the reliability or quality of your product. If it is the geometrical prop-
erties or tolerances that are insignificant, you can consider applying a less expensive manufacturing
process. If a material property turns out to be insignificant, then this is not typically a good way
to save money, because you are usually not in control of individual material properties. However,
the loads or boundary conditions can be a potential for saving money, but in which sense this
can be exploited is highly problem-dependent.
SSA Theory
The purpose of a SSA is to gain an understanding of the effects of uncertainties associated with
the input parameters of your design. This goal is achieved using a variety of statistical measures and
postprocessing tools.
Statistical Postprocessing
Convention: a set of n observations x1, x2, …, xn.
1. Mean Value
Mean is a measure of average for a set of observations. The mean of a set of observations is
defined as follows:
x̄ = (1/n) · Σ xi    (70)
2. Standard Deviation
Standard deviation is a measure of dispersion from the mean for a set of observations. The
standard deviation of a set of observations is defined as follows:
s = √( Σ (xi − x̄)² / (n − 1) )    (71)
3. Sigma Level
Sigma level is calculated as the inverse cumulative distribution function of a standard Gaussian
distribution at a given percentile. Sigma level is used in conjunction with standard deviation to
measure data dispersion from the mean. For example, for a pair of a quantile value x and its sigma level
s, the value x is about s standard deviations away from the sample mean.
4. Skewness
Skewness is a measure of degree of asymmetry around the mean for a set of observations. The
observations are symmetric if distribution of the observations looks the same to the left and
right of the mean. Negative skewness indicates the distribution of the observations being left-
skewed. Positive skewness indicates the distribution of the observations being right-skewed.
The skewness of a set of observations is defined as follows:
skewness = (1/n) · Σ ( (xi − x̄) / s )³    (72)
5. Kurtosis
Kurtosis is a measure of the relative peakedness or flatness of the distribution of the observations
compared with the normal distribution. Negative kurtosis indicates a relatively flat distribution, while positive
kurtosis indicates a relatively peaked distribution of the observations. As such, the kurtosis of
a set of observations is defined with calibration to the normal distribution as follows:
kurtosis = (1/n) · Σ ( (xi − x̄) / s )⁴ − 3    (73)
6. Shannon Entropy
Shannon entropy is a measure of the randomness or disorder in the distribution of the observations.
The probability mass is used in a normalized fashion, such that not only the shape, but also the
range of variability of the distribution is accounted for, as shown in Equation 75 (p. 399):
S = − Σ pi · ln( pi / Δxi )    (75)
where pi is the relative frequency of the parameter falling into a certain interval, and Δxi is
the width of the interval. As a result, Shannon entropy can have a negative value. Following are
some comparisons of the Shannon entropy, where S2 is smaller than S1.
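For cross-checking, the statistical measures listed in this section can be reproduced from raw sample values. A minimal sketch in Python (NumPy assumed); the bias corrections used by DesignXplorer may differ slightly from the simple moment estimates used here:

    import numpy as np

    def statistical_measures(x, bins=20):
        x = np.asarray(x, dtype=float)
        n = x.size
        mean = x.mean()
        std = x.std(ddof=1)                                   # sample standard deviation
        z = (x - mean) / std
        skewness = np.mean(z**3)                              # asymmetry around the mean
        kurtosis = np.mean(z**4) - 3.0                        # calibrated to the normal distribution
        # Shannon entropy from the normalized histogram (relative frequencies p_i, widths dx_i):
        counts, edges = np.histogram(x, bins=bins)
        p = counts / n
        dx = np.diff(edges)
        nz = p > 0
        entropy = -np.sum(p[nz] * np.log(p[nz] / dx[nz]))     # can be negative
        return mean, std, skewness, kurtosis, entropy

    rng = np.random.default_rng(0)
    print(statistical_measures(rng.normal(1.5, 0.075, size=5000)))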
Three signal-to-noise ratios are provided in the statistics of each output in your SSA. These ratios
are as follows:
• Nominal is Best
• Smaller is Better
• Larger is Better
Signal-to-noise (S/N) ratios are measures used to optimize control parameters to achieve a robust
design. These measures were first proposed by Dr. Taguchi of Nippon Telephone and Telegraph
Company, Japan, to reduce design noises in manufacturing processes. These design noises are
normally expressed in statistical terms such as mean and standard deviation (or variance). In
computer aided engineering (CAE), these ratios have been widely used to achieve a robust
design in computer simulations. For a design to be robust, the simulations are carried out with
an objective to minimize the variances. The minimum variance of designs/simulations can be
done with or without targeting a certain mean. In design exploration, minimum variance targeted
at a certain mean (called Nominal is Best) is provided, and is given as follows:
S/N (Nominal is Best) = 10 · log10( x̄² / s² )    (76)
Nominal is Best is a measure used for characterizing design parameters such as model dimension
in a tolerance design, in which a specific dimension is required, with an acceptable standard
deviation.
In some designs, however, the objectives are to seek a minimum or a maximum possible at the
price of any variance.
• For the cases of minimum possible (Smaller is Better), which is a measure used for character-
izing output parameters such as model deformation, the S/N is expressed as follows:
S/N (Smaller is Better) = −10 · log10( (1/n) · Σ xi² )    (77)
• For the cases of maximum possible (Larger is Better), which is a measure used for character-
izing output parameters such as material yield, the S/N is formulated as follows:
S/N (Larger is Better) = −10 · log10( (1/n) · Σ (1/xi²) )    (78)
These three S/N ratios are mutually exclusive. Only one of the ratios can be optimized for any
given parameter. For a better design, these ratios should be maximized.
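The three ratios can be computed from the sample values of an output parameter. A minimal sketch in Python (NumPy assumed); it follows the standard Taguchi formulations, which may differ in detail from the DesignXplorer implementation, and the sample values are invented:

    import numpy as np

    def snr_nominal_is_best(x):
        x = np.asarray(x, dtype=float)
        return 10.0 * np.log10(x.mean() ** 2 / x.var(ddof=1))

    def snr_smaller_is_better(x):
        x = np.asarray(x, dtype=float)
        return -10.0 * np.log10(np.mean(x ** 2))

    def snr_larger_is_better(x):
        x = np.asarray(x, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / x ** 2))

    deformation = np.array([0.92, 1.05, 0.98, 1.10, 0.95])   # illustrative output samples
    print(snr_nominal_is_best(deformation))
    print(snr_smaller_is_better(deformation))
    print(snr_larger_is_better(deformation))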
Minimum and Maximum Values
The minimum and maximum values of the set of observations are:
xmin = min( x1, x2, …, xn )    (79)
xmax = max( x1, x2, …, xn )    (80)
Note:
The minimum and maximum values strongly depend on the number of samples. If
you generate a new sample set with more samples, then chances are that the min-
imum value is lower in the larger sample set. Likewise, the maximum value of the
larger sample set is most likely higher than for the original sample set. Hence, the
minimum and maximum values should not be interpreted as absolute physical bounds.
The sensitivity charts displayed under Six Sigma Analysis are global sensitivities based on
statistical measures. For more information, see Single Parameter Sensitivities (p. 155). The effect
of an input parameter on an output parameter depends on two aspects:
• The amount by which the output parameter varies across the variation range of an input
parameter.
• The variation range of an input parameter. Typically, the wider the variation range, the larger
the effect of the input parameter.
The statistical sensitivities displayed under Six Sigma Analysis are based on the Spearman-Rank
Order Correlation coefficients that take both those aspects into account at the same time.
Basing sensitivities on correlation coefficients follows the concept that the more strongly an
output parameter is correlated with a particular input parameter, the more sensitive it is with
respect to changes of that input parameter.
The statistical sensitivities are based on the Spearman rank-order correlation coefficient:
rs = Σ (Ri − R̄)(Si − S̄) / √( Σ (Ri − R̄)² · Σ (Si − S̄)² )    (81)
where:
Ri = rank of the i-th sample value of the input parameter
Si = rank of the corresponding sample value of the output parameter
R̄, S̄ = average ranks of Ri and Si, respectively
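A global sensitivity of this kind can be evaluated with standard statistics tools for cross-checking. A minimal sketch in Python (SciPy assumed, which is not part of DesignXplorer; the samples are invented for illustration):

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    thickness = rng.normal(2.0, 0.1, size=200)              # uncertainty variable samples
    load = rng.normal(1000.0, 50.0, size=200)
    stress = load / thickness + rng.normal(0, 5, 200)       # output parameter samples

    for name, x in [("thickness", thickness), ("load", load)]:
        r, p = spearmanr(x, stress)                # rank-order correlation and its p-value
        print(name, round(r, 3), round(p, 4))      # the sign shows the direction of the effect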
Because the sample size n is finite, the correlation coefficient rs is a random variable. Hence,
the correlation coefficient between two random variables X and Y usually yields a small, but
nonzero value, even if X and Y are not correlated at all in reality. In this case, the correlation
coefficient would be insignificant. Therefore, you need to find out if a correlation coefficient is
significant or not. To determine the significance of the correlation coefficient, assume the hypothesis
that the correlation between X and Y is not significant at all, meaning that they are not
correlated and rs = 0 (null hypothesis). In this case the variable:
t = rs · √( (n − 2) / (1 − rs²) )    (82)
follows a Student's t-distribution with f = n − 2 degrees of freedom, and the probability that its absolute value does not exceed t is:
A(t | f) = 1 / ( √f · B(1/2, f/2) ) · ∫ from −t to +t of (1 + x²/f)^(−(f+1)/2) dx    (83)
where:
f = n − 2 = number of degrees of freedom
B = complete Beta function
There is no closed-form solution available for Equation 83 (p. 402). See Abramowitz and Stegun
(Pocketbook of Mathematical Functions, abridged version of the Handbook of Mathematical
Functions, Harry Deutsch, 1984) for more details.
The larger the correlation coefficient rs, the less likely it is that the null hypothesis is true. Also,
the larger the correlation coefficient rs, the larger the value of t from Equation 82 (p. 402) and
consequently the larger the probability A(t | f). The probability that the null
hypothesis is true is therefore given by 1 − A(t | f). If 1 − A(t | f) exceeds a certain significance level, for example
2.5%, you can assume that the null hypothesis is true. However, if 1 − A(t | f) is below the
significance level, then it can be assumed that the null hypothesis is not true and that consequently
the correlation coefficient rs is significant. This limit can be changed in Design Exploration
Options (p. 35).
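The significance test itself can be written out explicitly following Equation 82. A minimal sketch in Python (SciPy assumed); the two-sided tail probability of the t-distribution plays the role of 1 − A(t | f):

    import numpy as np
    from scipy.stats import t

    def correlation_significance(r_s, n):
        """Probability that the null hypothesis (no correlation) is true."""
        t_value = r_s * np.sqrt((n - 2) / (1.0 - r_s ** 2))
        return 2.0 * t.sf(abs(t_value), df=n - 2)       # two-sided tail probability

    p_null = correlation_significance(r_s=0.15, n=100)
    print(p_null, p_null < 0.025)   # significant only if below the chosen significance level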
The cumulative distribution function of sampled data is also called the empirical distribution
function. To determine the cumulative distribution function of sampled data, you need to order
the sample values in ascending order. Let xi be the sampled value of the random variable X
having a rank of i, that is, xi is the i-th smallest out of all n sampled values. The cumulative distribution
function F(xi) that corresponds to xi is the probability that the random variable X has
values below or equal to xi. Because you have only a limited amount of samples, the estimate
for this probability is itself a random variable. According to Kececioglu (Reliability Engineering
Handbook, Vol. 1, 1991, Prentice-Hall, Inc.), the cumulative distribution function associated
with xi is:
F(xi) ≈ (i − 0.3) / (n + 0.4)    (84)
The cumulative distribution function of sampled data can only be given at the individual sampled
values xi using Equation 84 (p. 402). Hence, the evaluation of the probability
that the random variable X is less than or equal to an arbitrary value x requires an interpolation
between the available data points.
If x is, for example, between xi and xi+1, the probability that the random variable X is less than or
equal to x is:
F(x) = F(xi) + ( F(xi+1) − F(xi) ) · ( x − xi ) / ( xi+1 − xi )    (85)
The cumulative distribution function of sampled data can only be given at the individual sampled
values xi using Equation 84 (p. 402). Hence, the evaluation of the inverse
cumulative distribution function for any arbitrary probability value requires an interpolation
between the available data points.
The evaluation of the inverse of the empirical distribution function is most important in the tails
of the distribution, where the slope of the empirical distribution function is flat. In this case, a
direct interpolation between the points of the empirical distribution function similar to Equation 85 (p. 403)
can lead to inaccurate results. Therefore, the inverse standard normal distribution
function Φ⁻¹ is applied to all probabilities involved in the interpolation. If p is the requested
probability for which you are looking for the inverse cumulative distribution function value, and
p is between F(xi) and F(xi+1), the inverse cumulative distribution function value can be calculated
using:
x = xi + ( xi+1 − xi ) · ( Φ⁻¹(p) − Φ⁻¹(F(xi)) ) / ( Φ⁻¹(F(xi+1)) − Φ⁻¹(F(xi)) )    (86)
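The interpolation schemes of Equations 84 through 86 translate directly into a few lines of code. A minimal sketch in Python (NumPy and SciPy assumed; the median-rank formula shown is a common approximation and the data is invented for illustration):

    import numpy as np
    from scipy.stats import norm

    def empirical_cdf(samples):
        """Ranked sample values and their associated cumulative probabilities."""
        x = np.sort(np.asarray(samples, dtype=float))
        n = x.size
        i = np.arange(1, n + 1)
        F = (i - 0.3) / (n + 0.4)          # median-rank approximation (Equation 84)
        return x, F

    def inverse_empirical_cdf(samples, p):
        """Interpolate the inverse CDF through the inverse standard normal transform (Equation 86)."""
        x, F = empirical_cdf(samples)
        return np.interp(norm.ppf(p), norm.ppf(F), x)

    rng = np.random.default_rng(0)
    data = rng.lognormal(mean=0.0, sigma=0.25, size=500)
    x, F = empirical_cdf(data)
    print(np.interp(1.3, x, F))                # probability that the variable stays below 1.3
    print(inverse_empirical_cdf(data, 0.99))   # value exceeded in only 1% of cases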
F-Test Filtering
When you are configuring a Response Surface cell, you must set the type of response surface to gen-
erate. If you set Response Surface Type to Standard Response Surface - Full 2nd-Order Polynomials,
F-Test Filter (Beta) displays in the properties for output parameters. This option, which appears under
Output Settings, determines whether to filter significant terms of the polynomial regression. When the
check box is selected (default), the Significance Level value specified in the response surface properties
is used as the threshold for this filtering. If you clear this check box, no filtering is performed and all
terms of the polynomial are kept.
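The general idea of significance filtering can be illustrated with a partial F-test on each term of a full second-order polynomial. The following Python sketch (NumPy and SciPy assumed) is only an illustration of the concept, not the actual DesignXplorer implementation; the data and significance level are invented for the example:

    import numpy as np
    from scipy.stats import f as f_dist

    def quadratic_design_matrix(x1, x2):
        """Full 2nd-order polynomial terms in two inputs: 1, x1, x2, x1*x2, x1^2, x2^2."""
        return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

    def filter_terms(X, y, significance=0.025):
        n, p = X.shape
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        sse_full = np.sum((y - X @ beta) ** 2)
        keep = [0]                                        # always keep the constant term
        for j in range(1, p):
            X_red = np.delete(X, j, axis=1)               # refit without term j
            b_red = np.linalg.lstsq(X_red, y, rcond=None)[0]
            sse_red = np.sum((y - X_red @ b_red) ** 2)
            F = (sse_red - sse_full) / (sse_full / (n - p))
            p_value = f_dist.sf(F, 1, n - p)
            if p_value < significance:                    # keep only significant terms
                keep.append(j)
        return keep

    rng = np.random.default_rng(0)
    x1, x2 = rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50)
    y = 3.0 + 2.0 * x1 + 0.5 * x1 ** 2 + rng.normal(0, 0.05, 50)   # x2 has no real effect
    X = quadratic_design_matrix(x1, x2)
    print(filter_terms(X, y))      # indices of the retained polynomial terms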
Note:
When these conditions are met and you click Export Response Surface in the toolbar, VHDL-AMS
Language file (*.vhd) (Beta) is listed as a file type to which you can save.
Cumulative Distribution Plot Type
While you know and define the distribution types of the input parameters in the Design of Experiments cell,
the distribution types of the output parameters are unknown. When you select an output parameter in the
Outline pane for the Six Sigma Analysis cell, the Properties pane displays chart properties. For Cumulative
Distribution Plot Type (Beta), you select a distribution type. All attributes of this distribution,
such as mean, standard deviation, and so on, are then automatically selected to fit the
empirical cumulative distribution function as closely as possible.
You can see which distribution type is closest to the empirical distribution by comparing two Cumulative
Distribution Functions and by comparing the Probability Density Function with the Probability Density.
By default, Cumulative Distribution Plot Type (Beta) is set to Normal. This means that in the chart
for the output parameter, the solid blue and green curves are the cumulative distribution function and
probability density function based on the normal distribution.
If you set Cumulative Distribution Plot Type (Beta) to a different distribution type, the labels for the
solid blue and green curves are updated accordingly. If you set Cumulative Distribution Plot Type
(Beta) to None, no solid blue or green curves are shown.
Manual Production of ROM Files from Standalone Fluent
This advanced example describes how to manually define or generate in standalone Fluent all of the
appropriate files needed to finally build a ROM in Workbench:
Setting Up the Workbench Project
Setting Up ROM Building in Fluent
Creating the ROM in the ROM Builder
Creating Mesh and Snapshot Files in Fluent Standalone
Importing the Mesh and Snapshot Files into the DOE and Generating the ROM
The case, which is for a steady fluids simulation of a heat exchanger, is run independently for each
design point (design of experiments). The ROM-specific files are created manually in standalone Fluent
for later import into a 3D ROM system in a Workbench project.
Note:
• The CAS file for this advanced example is included in the same ZIP file as the
sample files for the standard example. To download the ZIP file, in "Using ROMs"
in the DesignXplorer User's Guide, see the "Prerequisites" topic.
1. In the Toolbox under Component Systems, double-click Fluent to add a system of this type
to the Project Schematic.
2. Right-click the Setup cell, select Import Fluent Case → Browse, navigate to the directory
where you extracted the sample files, and open the CAS file (HeatExchanger.cas).
The Parameter Set bar is added to the Project Schematic, indicating that input and output
parameters are defined in the CAS file.
4. Verify that input and output parameters are defined appropriately, which they are.
1. In the Project Schematic, double-click the Solution cell to open the case in Fluent.
2. To make Reduced Order Model (Off) visible in the tree under Setup → Models, enable and
load the ROM addon module by executing this command from the Fluent console:
3. On the ribbon's Solution tab, click the button in the Initialization group for initializing the
entire flow field.
4. In the tree under Setup → Models, double-click Reduced Order Model (Off).
5. In the Reduced Order Model window that opens, select the Enable Reduced Order Model
check box to expand the window so that you can set up the ROM.
8. Select File → Export → Case, name the file HeatExchanger_RomSetup.cas, and click
OK.
9. Select File → Close Fluent to close Fluent and push your changes to the Workbench project.
1. In the Toolbox under Design Exploration, double-click 3D ROM to add a system of this type to
the Project Schematic.
For this example, you will keep only one input parameter for the ROM.
4. In the Properties pane, set the lower and upper bounds for P1 to 0.001 and 0.01 m/s.
5. In the Outline pane, clear the check boxes for P2, P3, and P4 to disable them.
Their constant values, which are reported in the Properties pane by default, can be kept.
8. In the toolbar, click Preview to generate the four design points without updating them.
9. Take note of the generated inlet velocity value for P1: 0.004375.
4. On the ribbon's User-Defined tab, click Execute on Demand to open the Execute on Demand
window.
This generates the setup files that are used when creating the mesh and snapshot files. These
setup files are written in the Fluent working directory and can be safely removed at the end
of this procedure.
This creates the file mesh.rommsh in the Fluent working directory. This mesh file is used to
display the ROM.
8. In the tree, under Parameters & Customization → Parameters → Input Parameters, enter
the value for each parameter for the first design point that you previewed in the DOE:
a. Click the button in the Initialization group for initializing the entire flow field.
b. When asked if you want to discard the data and proceed, click OK.
11. For each of the three additional design points generated in the Design of Experiments cell,
repeat steps 8 through 10, entering the corresponding value for inlet_velocity_shell and
the appropriate name for the snapshot file:
12. Close Fluent and download the ROMSNP and the ROMMSH files to the computer with Work-
bench installed.
Importing the Mesh and Snapshot Files into the DOE and Generating the
ROM
This section describes how to use the mesh file and four snapshot files to produce the ROM.
Note:
The next step, which can be computationally expensive, can be performed on a separate
computer.
• For Windows, run the following command, where Path is the location where you want to
create the file:
Rom.MeshProcessing.exe C:\Path\mesh.rommsh C:\Path\mesh_processed.rommsh
• For Linux, the path in the following command assumes that Ansys is installed using the
symbolic link /ansys_inc. If necessary, substitute your installation path for the path
given:
/ansys_inc/v241/Framework/bin/Linux64/runwb2 -cmd /ansys_inc/v241/Addins/ROM/bin/Linux64/Rom.MeshProcessing
3. After the file mesh_processed.rommsh is created, start Workbench and open the project
HeatExchanger_RomProduction.wbpj.
4. In the Project Schematic, double-click the Design of Experiments (3D ROM) cell to open it.
When DesignXplorer advanced options are shown, ROM Mesh File is visible in the Properties
pane.
6. For ROM Mesh File, click Browse and select the file mesh_processed.rommsh that you created
in step 2.
9. Right-click in the Table pane and select Set All Output Values as Editable.
10. Manually set the snapshot file for each design point:
a. In the first line, enter a dummy value such as 0 for P5 and P6. This value is not used for ROM
building.
b. In the first line, right-click the Fluent ROM Snapshot cell, select Set Snapshot File, browse
to snp_1.romsnp, and when prompted click Copy and Proceed.
c. Repeat step b for the next three design points, selecting the corresponding snapshot file
(snp_2.romsnp through snp_4.romsnp).
Warning:
When manually setting a snapshot file, if you mistakenly select a file that does not
match the point, the resulting ROM will have invalid values. To avoid errors, you should
automate this step by importing a ROM snapshot archive file (SNPZ). Like a 7z or ZIP
file, a SNPZ file is a compressed file. It contains all snapshot files and one CSV file.
11. Save the project and update the Design of Experiments (3D ROM) cell.
This operation is fast because all of the outputs are already calculated.
12. In the ROM Builder cell, set Number of Modes to 4, which is the number of DOE points, and
click Update.
13. When the update completes, in the toolbar, click Open in ROM Viewer.
Working with DesignXplorer Extensions
Note:
For information on using ACT, see the Ansys ACT Developer's Guide, the Ansys ACT XML Reference
Guide, and the Ansys ACT Reference Guide.
To install an extension:
In most cases, you select a binary extension in which a defined optimizer is compiled into a
WBEX file. WBEX files cannot be modified.
The extension is installed in your App Data directory. Once installed, it is shown in the Extensions
Manager and can be loaded to your projects.
1. From the Workbench Project tab, select Extensions → Manage Extensions. The Extensions
Manager opens, showing all installed extensions.
2. Select the check box for the extension to load and then close the Extensions Manager.
The extension should now be loaded to the project, which means that it is available to be selected
as an optimization method. You can select Extensions → View Log File to verify that the extension
loaded successfully.
Note:
The extension must be loaded separately to each project unless you have specified it as
a default extension in the Options window. You can also specify whether loaded extensions
are saved to your project. For more information, see Extensions in the Workbench User's
Guide.
In some cases, a necessary extension might not be available when a project is reopened. For example,
the extension could have been unloaded or the extension might not have been saved to the project
upon exit. When this happens, the option in the drop-down menu is replaced with an alternate label,
<extensionname>@<optimizername> or <extensionname>@<doename> instead of the actual
name of the external optimizer or DOE. You can still do postprocessing with the project for data ob-
tained when the problem was solved previously. However, the properties are read-only and further
calculations cannot be performed unless you reload the extension or select a different optimizer or
DOE type.
• If you reload the extension, the properties become editable and calculations can be performed.
You have the option of saving the extension to the project so it does not have to be reloaded
again the next time the project is opened.
• When you select a different optimization method or sampling type (one not defined in the missing
extension), the alternate label disappears from the list of options.