A Concept Exploration Method For Product Family Design
Ph.D. Dissertation
Timothy W. Simpson
Abstract
Current design research is directed at improving the efficiency and effectiveness of designers in
the product realization process, and until recently, the focus has been predominantly on
designing a single product. However, today’s market—characterized by words such as mass
customization, rapid innovation, and make-to-order products—requires a new approach to
provide the necessary product variety to remain competitive. The answer advocated in this
dissertation is the design and development of scalable product platforms around which a family
of products can be realized to satisfy a variety of market niches. In particular, robust design
principles, statistical metamodeling techniques, and the market segmentation grid, an
attention-directing tool from management science, are synthesized into the Product Platform
Concept Exploration Method (PPCEM), an efficient and effective method for designing
scalable product platforms, the cornerstone of an effective product family. The efficiency and
effectiveness of the method are tested and verified through application to three example
problems: the design of a family of oil filters, the design of a family of absorption chillers, and the
design of a family of General Aviation aircraft.
Q1. How can scalability be modeled and realized in product family design?
There is a one-to-one correspondence between the hypotheses and research questions. The
Product Platform Concept Exploration Method mentioned in Hypothesis 1 is developed to
answer the first research question, providing a method to model and realize scalability in product
family design. Hypotheses 2 and 3 entail affirmative answers to Questions 2 and 3 which are
explicitly tested and verified in Chapter 4. Confirmation of Hypothesis 1 is not contingent upon
verification of Hypotheses 2 and 3; Hypotheses 2 and 3 help to support Hypothesis 1 but have
implications which extend beyond product family design. These implications are discussed more
thoroughly in the concluding chapter of this dissertation, Section 8.1.
Since Question 1 is quite broad, three supporting research questions and sub-hypotheses are
proposed to facilitate the verification of Hypothesis 1. As with the preceding research questions
and hypotheses, there is a one-to-one correspondence between each supporting question and
the correspondingly numbered sub-hypothesis. The supporting questions and sub-hypotheses
are stated as follows.
Q1.1. How can product platform scaling opportunities be identified from overall
design requirements?
Q1.2. How can robust design principles be used to design a scalable product
platform?
Q1.3. How can individual targets for product variants be aggregated and modeled for
product platform design?
Sub-Hypothesis 1.1: The market segmentation grid can be utilized to identify scale
factors for a product platform.
Sub-Hypothesis 1.2: Robust design principles can be used to design a scalable
product platform by minimizing the sensitivity of a product platform to variations in
scale factors.
Sub-Hypothesis 1.3: Individual targets for product variants can be aggregated into an
appropriate mean and variance and used in conjunction with robust design
principles to effect a product family.
As with the main hypotheses, the sub-hypotheses are stated here to provide context for the
literature review in the next chapter and development of the PPCEM in Section 3.1.
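Sub-Hypothesis 1.3 can be made concrete with a small numerical sketch. The following fragment aggregates individual targets for three product variants into a single mean and variance; the target values are illustrative assumptions, not data from the dissertation.

```python
import numpy as np

# hypothetical performance targets for three product variants
# (illustrative values only, not taken from the dissertation)
targets = np.array([250.0, 400.0, 650.0])

# aggregate the individual targets into a single mean and variance,
# as proposed in Sub-Hypothesis 1.3
mu = targets.mean()
sigma2 = targets.var(ddof=1)   # sample variance across the variants

# a robust design formulation then drives the platform response toward
# mu while minimizing its sensitivity, using sigma2 to characterize the
# spread of the variants' requirements
```

In the PPCEM these aggregate quantities become inputs to a robust design formulation: the platform is driven toward the mean target while its sensitivity across the variants is minimized.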
Supporting Posits
There are several posits which support the research hypotheses. Six posits support Hypothesis
1 and Sub-Hypotheses 1.1-1.3; they are the following.
Posit 1.1: The RCEM provides an efficient and effective means for developing robust
top-level design specifications for complex systems design.
Posit 1.2: Metamodeling techniques, specifically, design of experiments and response
surface methodology, can be used to facilitate concept exploration and optimization,
thus increasing a designer’s efficiency.
Posit 1.3: Robust design principles can be used to minimize the sensitivity of a design
to variations in uncontrollable (i.e., noise) factors and/or variations in design
parameters (i.e., control factors).
Posit 1.4: Robust design principles can be used effectively in the early stages of the
design process by modeling the response itself with separate goals for “bringing the
mean on target” and “minimizing the variation.”
Posit 1.5: The compromise DSP is capable of effecting robust design solutions through
separate goals for “bringing the mean on target” and “minimizing variation” of noise
factors and/or variations in the design variables.
Posit 1.6: The market segmentation grid can be used to identify opportunities for
platform leveraging in product family design.
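Posits 1.3 through 1.5 can be illustrated with a minimal sketch of how a response's mean and variation are separated into two quantities, so that separate goals can "bring the mean on target" and "minimize the variation." The response function, nominal settings, and variations below are illustrative assumptions; the first-order (Taylor) propagation of variation is a standard robust design device.

```python
import numpy as np

def f(x):
    # hypothetical response of a design described by variables x
    return x[0] ** 2 + 3.0 * x[1]

def mean_and_variation(x, dx, h=1e-6):
    # first-order (Taylor) propagation: the mean is f at the nominal
    # point; the variation is built from the partial derivatives of f
    # and the variations dx in the design variables
    mean = f(x)
    grads = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                      for e in np.eye(len(x))])
    variation = np.sqrt(np.sum((grads * dx) ** 2))
    return mean, variation

x = np.array([2.0, 1.0])    # nominal design variable settings (assumed)
dx = np.array([0.1, 0.2])   # variations in the design variables (assumed)
mean, variation = mean_and_variation(x, dx)
# a compromise DSP formulation would then pose "bring mean on target"
# and "minimize variation" as separate goals
```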
Posit 2.1: Building an (interpolative) kriging model is not predicated on the assumption
of underlying random error in the data.
Posit 2.2: Kriging provides very flexible modeling capabilities based on the wide
variety of spatial correlation functions which can be selected to model the data.
• Posit 2.1 is more fact than assumption; it can be substantiated by Sacks, et al.
(1989); Koehler and Owen (1996); and Cressie (1993).
• Posit 2.2 is substantiated by many researchers, most notably: Sacks, et al. (1989);
Welch, et al. (1992); Cressie (1993); and Barton (1992; 1994).
The testing of Posit 2.2 helps to verify Hypothesis 2; the strategy for testing Hypothesis 2 (and
Posit 2.2) is outlined in Section ___.
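Posit 2.2's claim about the variety of spatial correlation functions can be illustrated with a short sketch comparing two common families; the exponential and Gaussian forms below are standard in the kriging literature, while the distances and parameter value are illustrative.

```python
import numpy as np

d = np.linspace(0.0, 1.0, 5)   # distances between sample sites (illustrative)
theta = 5.0                    # correlation parameter (assumed)

exponential = np.exp(-theta * np.abs(d))   # exponential correlation family
gaussian = np.exp(-theta * d ** 2)         # Gaussian correlation family

# both equal 1 at zero distance and decay toward 0 with distance; the
# Gaussian form is smoother near the origin, which affects how the
# resulting kriging model interpolates the data
```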
• Posit 3.1 is taken from Sacks, et al. (1989) who state that the “classical notions of
experimental blocking, replication and randomization are irrelevant” for deterministic
computer experiments which contain no random error. Moreover, any experimental
design text (see, e.g., Montgomery, 1991) can verify that experimental design
properties such as replication, blockability, and rotatability are developed explicitly
to handle and account for random (measurement) error in a physical experiment for
which classical experimental designs have been developed.
Furthermore, since kriging (using an underlying constant model) is being advocated in this
dissertation for metamodeling deterministic computer experiments, an additional posit in support
of Hypothesis 3 is the following.
Posit 3.2: Since kriging models (with an underlying constant model) rely on the spatial
correlation between data, confounding and aliasing of main effects and two-factor
interactions have no significant meaning when predicting a response.
• Posit 3.2 is substantiated by Sacks, et al. (1989); Currin, et al. (1991); Welch, et
al. (1990); and Barton (1992; 1994). In physical experimentation, great care is
taken to ensure that aliasing and confounding of main effects and two-factor
interactions do not occur to ensure accurate estimation of coefficients of the
polynomial response surface model (see, e.g., Montgomery, 1991).
The experimental procedure for testing Hypothesis 3 is discussed in the next section along with
the specific strategy for verification and testing of all of the hypotheses.
First and foremost, the Product Platform Concept Exploration Method (PPCEM) is
hypothesized as a method for designing scalable product platforms. In addition, the PPCEM is
hypothesized to be efficient and effective. The efficiency of the PPCEM is attained by using:
• metamodels to:
– create inexpensive approximations of computer analyses, and
– facilitate the implementation of robust design; and
• robust design principles to design simultaneously a family of products around a
scalable product platform.
The effectiveness of the PPCEM is attained by using:
• robust design principles to design a scalable product platform, and
• lexicographic minimum concept to generate a solution portfolio to maintain design
flexibility.
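The lexicographic minimum concept can be sketched briefly: candidate solutions are ranked by comparing their deviation values level by level, with higher priority levels compared first. The candidates and deviation values below are hypothetical.

```python
# each candidate solution is summarized by a tuple of deviation values
# ordered by priority level (hypothetical values for three designs)
candidates = {
    "design A": (0.0, 0.40, 0.10),
    "design B": (0.0, 0.25, 0.90),
    "design C": (0.1, 0.00, 0.00),
}

# Python compares tuples lexicographically, so the lexicographic minimum
# is simply the candidate whose deviation tuple is smallest: higher
# priority levels are compared first, and lower levels only break ties
best = min(candidates, key=candidates.get)
print(best)   # design B: ties design A at level 1, smaller at level 2
```

Because ties at the highest priority level are broken at the next level, several near-lexicographic-minimum solutions can be retained as a portfolio, preserving design flexibility.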
To verify the effectiveness of the PPCEM as a method to design scalable product platforms,
three example problems are utilized. In each example, the resulting product platform obtained
using the PPCEM is compared to the results obtained from designing each product in the family
separately and then aggregating the products into a common set of specifications. The three
example problems for testing the PPCEM are the design of a family of oil filters, the design of
a family of absorption chillers, and the design of a family of General Aviation aircraft.
Efficiency of the PPCEM is verified by comparing the time involved for building, validating, and
using the necessary metamodels against the time required to repeat the process without the
metamodels. Similarly, the efficiency achieved through implementation of robust
design principles to design simultaneously a family of products is discussed at the end of each
example problem.
Verification and testing of the sub-hypotheses related to Hypothesis 1 entail:
Testing Sub-Hypothesis 1.1 - The procedure for using the market segmentation grid
to identify scale factors for a product platform is shown in Figure ___ and described
in Section ___. Further verification of this sub-hypothesis requires demonstrating
that this procedure can indeed be used to identify scale factors for a product
platform. This is demonstrated in all three examples wherein the appropriate scale
factors are identified for leveraging the product platform in the product family.
Testing Sub-Hypothesis 1.2 - If appropriate scale factors can be identified for a
product platform (i.e., Sub-Hypothesis 1.1 is true) then the principles of robust
design can be employed to develop a design which is robust with respect to these
noise factors much in the same way that Chang, et al. (1994) use robust design
principles to develop “conceptually robust” designs with regard to appropriate
“conceptual noise” factors resulting from distributed, concurrent engineering
practices. Verification of this sub-hypothesis requires demonstration of the
approach, and the three examples provide such a demonstration.
Testing Sub-Hypothesis 1.3 - The procedure for aggregating the individual targets of
the product variants is outlined in Section ___. As with Sub-Hypothesis 1.1, further
verification of this sub-hypothesis requires demonstrating that this procedure can
indeed be used; the three examples are used to demonstrate just that.
Verification of these sub-hypotheses helps to further verify Hypothesis 1. The strategy for
testing Hypotheses 2 and 3 is outlined in the next Section.
The remaining chapters of the dissertation flow as shown in Figure ___. Having laid the
foundation in Decision-Based Design, robust
design, and the RCEM and presented the research questions for the work in this chapter,
Chapter 2 contains a literature review of related work. Based on the discussion in Section 1.1,
three research areas are reviewed: (1) product family and product platform design, (2) robust
design, and (3) metamodeling in Sections 2.1, 2.2, and 2.3, respectively.
The PPCEM is then introduced in Chapter 3 as the tools, approaches, and philosophies
from Chapters 1 and 2 are synthesized into an efficient and effective method for designing
scalable product platforms for a product family. As noted in Figure ___, the PPCEM is
overviewed in Section 3.1 with the individual
steps elaborated in Sections 3.1.1 through 3.1.5. After the PPCEM is introduced, the research
hypotheses are revisited in Section 3.2, and supporting posits are stated and substantiated.
Section 3.3 contains an outline of the strategy for verification and testing of the hypotheses. This
includes a preview of Chapter 4—wherein Hypotheses 2 and 3 are tested—and Chapters 5
through 7 wherein the PPCEM is applied to three example problems, verifying Hypotheses 1
and Sub-Hypotheses 1.1 through 1.3.
Chapter 4 entails a brief departure from product platform design yet is an integral part
of the development of the PPCEM. In Section 4.1, an initial feasibility study of the usefulness of
kriging is performed, comparing the predictive capability of kriging models to that of second-
order response surfaces, the current “standard” for metamodeling. The experimental set-up for
testing Hypotheses 2 and 3 is then introduced in Section 4.2. An extensive study of six
engineering test problems selected from the literature is conducted to determine the utility of
various spatial correlation functions and space filling designs; results are presented and
discussed in Section 4.3.
Once the kriging and space filling design study is completed in Chapter 4, the first of the
three example problems used to demonstrate the PPCEM and verify its associated hypotheses is
given in Chapter 5: the design of a family of oil filters. The second and third example problems
are the design of a family of absorption chillers, Chapter 6, and the design of a family of General
Aviation aircraft, Chapter 7. In each chapter, an overview of the problem is given along with
pertinent analysis information. Then, the steps of the PPCEM are performed and a summary
and discussion of the results is given.
Chapter 8 is the final chapter in the dissertation. It begins in Section 8.1 with a recap of
the dissertation, emphasizing the research hypotheses and resulting contributions from the work.
A critical review of the dissertation is given in Section 8.2; limitations and shortcomings of the
work are addressed. This is followed by a discussion of possible future work to refine the
PPCEM and the associated metamodeling techniques.
Finally, there are three appendices which supplement the dissertation, specifically the
work in Chapter 4. A description of the minimax Latin hypercube design algorithm which is
unique to this work is given in Appendix A. Appendix B outlines the kriging algorithm
developed and utilized in this dissertation. In addition, a step-by-step example of building and
using a kriging model is also included. Finally, the six test problems used in Chapter 4 for
investigating the utility of different experimental design techniques and kriging models are
detailed in Appendix C.
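For context on Appendix A, a plain (unoptimized) Latin hypercube design can be sketched in a few lines; the minimax design generator in the appendix additionally optimizes the covering distance of such a design with a genetic algorithm. This fragment is a generic sketch, not the algorithm from the appendix.

```python
import numpy as np

def latin_hypercube(n_points, n_dims, rng=None):
    rng = np.random.default_rng(rng)
    # one stratum per point in each dimension: place a random location
    # inside each of the n_points equal-width strata, then shuffle each
    # dimension independently so the pairing of strata is random
    strata = (np.arange(n_points)[:, None]
              + rng.random((n_points, n_dims))) / n_points
    for j in range(n_dims):
        rng.shuffle(strata[:, j])
    return strata

X = latin_hypercube(10, 2, rng=0)
# each column hits each of the 10 equal-width strata exactly once
```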
[Figure: schematic of an oil filter, labeling the filter element, element diameter Di, length L,
flow rate Q, oil inflow conditions Pi, Vi, and oil outflow conditions Po, Vo.]
[Figure: dissertation roadmap relating the Product Platform Concept Exploration Method
(Chapter 3), the MDO example and metamodeling study (Chapter 4), and the oil filter,
absorption chiller, and General Aviation aircraft platform examples.]
CHAPTER 1
FOUNDATIONS FOR PRODUCT FAMILY AND PRODUCT PLATFORM
DESIGN
The principal objective in this dissertation is to develop the Product Platform Concept
Exploration Method (PPCEM) for efficient and effective design of scalable product platforms
for a product family. As the title of this chapter implies, the foundations for developing the
PPCEM are presented here. The heart of the chapter lies in Section 1.3 wherein the research
objectives, hypotheses, and contributions for the work are described; this section sets the stage
for the chapters that follow, culminating in the development of the PPCEM in Chapter 3.
Specifically, Sections 1.1 and 1.2 provide the motivation and foundation for investigating the
proposed research and serve to establish context for the reader. More
specifically, in Section 1.1 the concepts of product family and product platform design are
introduced, and opportunities for advancing this nascent research area are identified. In Section
1.2, the foundations for the work are presented, namely, Decision-Based Design, robust design
principles, and the Robust Concept Exploration Method. Section 1.4 contains an overview of
the dissertation.
CHAPTER 2
STATE-OF-THE-ART IN PRODUCT FAMILY DESIGN, ROBUST DESIGN, AND
METAMODELING: LITERATURE REVIEW
In this chapter a survey of relevant work in product family and product platform design, robust
design, and metamodeling is presented in Sections 2.1, 2.2, and 2.3, respectively. In Section
2.1, the descriptors, tools, and current methods for designing product families and product
platforms are discussed. Section 2.2 contains a review of robust design principles, tracing the
evolution of robust design from parameter design to the early stages of product design. This
segues into a discussion of metamodeling techniques for building inexpensive approximations of
computer analyses to facilitate robust design and concept exploration. In particular, the kriging
approach to metamodeling is introduced in Section 2.3.1, and a variety of space filling
experimental designs for querying the computer code to build these models are described in
Section 2.3.2. Section 2.4 concludes the chapter with a summary of what has been presented
and a preview of what is next.
CHAPTER 3
THE PRODUCT PLATFORM CONCEPT EXPLORATION METHOD
The work in this chapter is a synthesis of the previous chapters and represents the principal
objective in this dissertation, namely, to develop the Product Platform Concept Exploration
Method (PPCEM) for efficient and effective design of scalable product platforms for a product
family. To start, an overview of the PPCEM and its associated steps and tools is given in
Section 3.1 with each step of the PPCEM and its constituent elements elaborated in Sections
3.1.1 through 3.1.5; the resulting infrastructure of the PPCEM is presented in Section 3.1.6. In
Section 3.2 the research hypotheses on which the PPCEM is founded are revisited from
Section 1.3.1. More specifically, in Section 3.2.1 the relationship between the research
hypotheses and modifications to the RCEM are detailed, and in Section 3.2.2 supporting posits
for the research hypotheses are stated. Section 3.3 follows with an outline of the strategy for
verification and testing of the research hypotheses. Section 3.4 concludes the chapter with a
recap of what has been presented and a look ahead to the metamodeling study in Chapter 4
and the example problems in Chapters 5, 6, and 7 which are used to test the research
hypotheses and demonstrate the application of the PPCEM.
CHAPTER 4
THE UTILITY OF KRIGING AND SPACE FILLING EXPERIMENTAL DESIGNS
In this chapter, Hypotheses 2 and 3 are tested, verifying the utility of kriging and space filling
experimental designs for building metamodels of deterministic computer analyses. An initial
feasibility study of kriging as a metamodeling technique is given in Section 4.1; the study involves
comparing kriging and response surface models in the multidisciplinary design of an aerospike
nozzle. Once kriging is established as a viable alternative, a detailed study is set up to test
Hypotheses 2 and 3, Section 4.2. Six problems are introduced in Section 4.2.1 to serve as the
test bed for benchmarking kriging and space filling designs. In Sections 4.2.2 and 4.2.3,
experimental design choices and error assessment measures are discussed, respectively.
Section 4.3 contains the results of the study and a discussion of the ramifications of the results.
A summary of the chapter is then given in Section 4.4 along with a discussion of the relevance of
this chapter to the development of the PPCEM.
4.1 INITIAL FEASIBILITY STUDY OF KRIGING: THE MULTIDISCIPLINARY
DESIGN OF AN AEROSPIKE NOZZLE
4.1.1 Background for the Aerospike Nozzle Problem
4.1.2 Metamodeling of the Aerospike Nozzle Problem
4.1.3 Optimization using the Response Surface and Kriging Metamodels
4.2 EXPERIMENTAL SET-UP: KRIGING AND SPACE FILLING EXPERIMENTAL
DESIGN TESTBED
4.2.1 Overview of Testbed Problems
4.2.2 Experimental Design Choices for Test Problems
4.2.3 Validation Points and Error Metrics for Assessing Model Accuracy
4.2.4 Summary of Kriging Study
4.3 THE UTILITY OF KRIGING AND SPACE FILLING DESIGNS
4.3.1 Experimental Set Up
4.3.2 Which Correlation Function is Best?
4.3.3 Which Types of Experimental Designs are Best?
4.4 A LOOK BACK AND A LOOK AHEAD
CHAPTER 5
DESIGN OF A FAMILY OF OIL FILTERS
CHAPTER 6
DESIGN OF A FAMILY OF ABSORPTION CHILLERS
6.1 OVERVIEW OF THE ABSORPTION CHILLER PLATFORM PROBLEM
6.1.1 Problem Statement and Leveraging Strategy
6.1.2 Relevant Analyses for Absorption Chillers
6.2 STEPS 1 AND 2: CREATE MARKET SEGMENTATION GRID AND
CLASSIFY FACTORS FOR ABSORPTION CHILLER PLATFORM
6.3 STEP 3: BUILD AND VALIDATE METAMODELS
6.4 STEP 4: AGGREGATE PRODUCT SPECIFICATIONS AND FORMULATE
ABSORPTION CHILLER PLATFORM COMPROMISE DSP
6.5 STEP 5: DEVELOP THE ABSORPTION CHILLER PLATFORM PORTFOLIO
6.6 RAMIFICATIONS OF THE RESULTS OF THE ABSORPTION CHILLER
EXAMPLE PROBLEM
6.7 A LOOK BACK AND A LOOK AHEAD
CHAPTER 7
DESIGN OF A FAMILY OF GENERAL AVIATION AIRCRAFT
CHAPTER 8
ACHIEVEMENTS AND RECOMMENDATIONS
8.1 RESEARCH OBJECTIVES AND HYPOTHESES REVISITED
8.2 CRITICAL REVIEW OF THE DISSERTATION
8.3 FUTURE WORK
8.3.1 Future Work in Kriging
8.3.2 Future Work with Space Filling Experimental Designs
8.3.3 Future Work in Product Family and Product Platform Design
APPENDIX A
A MINIMAX LATIN HYPERCUBE DESIGN GENERATOR USING A GENETIC
ALGORITHM
APPENDIX B
KRIGING STEP-BY-STEP
This appendix is intended to supplement the brief description of kriging which is given in Section
2.3.1. In Section B.1, the question of “What is kriging?” is addressed. As part of this section,
three other questions the reader might be asking him/herself are addressed:
• Section B.1.1 - “Why use kriging?”
• Section B.1.2 - “What is a spatial correlation function?”
• Section B.1.3 - “How is a kriging model built, validated, and implemented?”
After these questions are addressed, a simple one dimensional example is presented, going
step-by-step through the process of building, validating, and using a kriging model.
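In the same spirit as the step-by-step example in this appendix, a minimal one-dimensional kriging sketch with a constant underlying model and a Gaussian spatial correlation function might look as follows. The sample function, sites, and fixed correlation parameter are illustrative assumptions; in practice the parameter is estimated, e.g., by maximum likelihood.

```python
import numpy as np

def corr(a, b, theta):
    # Gaussian spatial correlation between the sites in a and b
    return np.exp(-theta * (a[:, None] - b[None, :]) ** 2)

# sample the (normally expensive) analysis at a few sites
xs = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
ys = np.sin(2 * np.pi * xs)   # stand-in for a deterministic computer analysis
theta = 10.0                  # correlation parameter, assumed fixed here

R = corr(xs, xs, theta)       # correlation matrix of the sample sites
Rinv = np.linalg.inv(R)
ones = np.ones(len(xs))

# generalized least squares estimate of the constant underlying model
beta = (ones @ Rinv @ ys) / (ones @ Rinv @ ones)

def predict(x):
    # kriging predictor: constant term plus a correction that
    # interpolates the residuals at the sample sites
    r = corr(np.atleast_1d(float(x)), xs, theta)
    return beta + (r @ Rinv @ (ys - beta * ones))[0]
```

Because the predictor interpolates, it reproduces the sampled responses exactly at the sample sites, which is the property that makes kriging attractive for deterministic computer experiments.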
Six engineering test problems—two two-variable problems, two three-variable problems, and
two four-variable problems—have been selected from the literature to further test the utility of kriging as a
metamodeling technique. The analysis of these problems is simple enough that building kriging
models of the responses is overkill to say the least. However, these problems do serve a
purpose; they have been selected because: (a) they have been well studied, (b) the behavior of
the system and the underlying equations are known, and (c) the optimum solution is also known.
Thus, because the underlying equations and optimum solution are known, it is easy to determine
the utility of kriging on a variety of problems, hence testing Hypotheses 2 and 3 from Section
1.3.1. Each example is described in turn along with the corresponding constraints, bounds, and
objectives. The optimum solution for each problem is also given, and only the pertinent
equations have been numbered.
1.1 FRAME OF REFERENCE: PRODUCT FAMILY AND PRODUCT
PLATFORM DESIGN
Today’s competitive and highly volatile market is redefining the way companies do
business. “Customers can no longer be lumped together in a huge homogeneous market, but
are individuals whose individual wants and needs can be ascertained and fulfilled” (Pine, 1993).
Companies are being called upon to deliver better products faster and at less cost for customers
who are more demanding in a market which is characterized by words such as mass
customization and rapid innovation. Even government agencies like NASA are re-examining the
way they operate and do business, adopting slogans such as “better, faster, cheaper.”
“The sellers’ market of the fifties and sixties was characterized by high demand and a
relative shortage of supply. Firms produced large volumes of identical products,
supported by mass production techniques. ... The buyer’s market of the eighties and
beyond is forcing companies making specific high-volume products to manufacture a
growing range of products tailored to individual customer’s needs at the cost of
standard mass-produced goods.”
So why the growing concern for satisfying the individual customer? Stan Davis,
the person who coined the term mass customization, captures it best: “The more a company can
deliver customized goods on a mass basis relative to their competition, the greater is their
competitive advantage” (Davis, 1987). Simply stated, companies which offer customized goods
at minimal extra cost have a competitive advantage over those that do not. Pine (1993)
attributes the increasing attention on product variety and customer demand to the saturation of existing markets:
“Today, demand for new products frequently has to be diverted from older ones. It is
therefore important for new products to meet customer needs more completely, to be of
higher quality, and simply to be different from what is already in the marketplace.”
Similar themes pervade the texts by Wortmann, et al. (1997), who examine industry’s response
in Europe to the “customer-driven” market, and Anderson (1997), who examines the role of
agile product development in enabling mass customization.
This increasing need to distinguish and differentiate products from competitors is further
evidenced in the following observation:
"The customer now has plenty of choice for almost every product within a price range.
With this increased choice, consumers have become more aware of the good and bad
features of a product...they select the product that most closely fulfills their opinion of
being the best value for the money. This is not just price but a wide range of non-price
factors such as quality, reliability, aesthetics..."
Chinnaiah, et al. (1998) also examine the trend toward mass customized goods, citing more
demanding customers and market saturation as impetus for the shift. Uzumeri and Sanderson
(1997) state that “The emergence of global markets has fundamentally altered competition as
many firms have known it” with the resulting market dynamics “forcing the compression of
product development times and expansion of product variety.” The study by Womack, et al.
(1990) of the automobile industry in the 1980s provides just one of numerous examples of this
trend.
Since many companies typically design new products one at a time, Meyer and Lehnerd
(1997) have found that the focus on individual customers and products results in “a failure to
embrace commonality, compatibility, standardization, or modularization among different
products or product lines.” Similarly, Erens (1997) states that “If sales engineers and designers
focus on individual customer requirements, they feel that sharing components compromises the
quality of their products.” The end result is a “mushrooming” or diversification of products and
parts with proliferating variety and costs. Mather (1995) states that “Rarely does the full
spectrum of product offerings get reviewed at one time to ensure it is optimal for the business.”
Consequently, companies are being faced with the challenge of providing as much
variety as possible for the market with as little variety as possible between products.
Toward this end, the approach advocated in this dissertation and by many in the strategic
management community is to design a
family of products with as much commonality between products as possible with minimal
compromise in quality and performance. Several engineering examples are presented in the
next section to provide context and foster a better understanding of the product family concept
and how product families have been successfully developed and realized. Research
opportunities in product family and product platform design then are discussed in Section 1.1.2.
The following examples from Sony, Lutron, Nippondenso, Black & Decker, Canon,
and Rolls-Royce exemplify successful product families and have been studied as such.
Additional examples which might interest the reader include: Swiss army knives and Swatch
watches (Ulrich and Eppinger, 1995), Xerox copiers (Paula, 1997), Anderson windows
(Stevens, 1995), Hewlett-Packard printers (see, e.g., Lee, et al., 1993), the Boeing 747 family
of aircraft (see, e.g., Rothwell and Gardiner, 1990), and the Kodak single use camera (see,
Sony - Walkman
The design of the Sony Walkman is a classic example of managing the design of a product
family (Sanderson and Uzumeri, 1997). Sony first introduced the Walkman in 1979; it has
dominated the personal portable stereo market for over a decade and has remained the leader
both technically and commercially despite fierce competition from world-class competitors, e.g.,
Matsushita, Toshiba, Sanyo, and Sharp. Sony built all of their Walkman models around key
modules and platforms and used modular design and flexible manufacturing to produce a wide
variety of quality products at low cost. Incremental design changes accounted for only 20-30 of
the 250+ models Sony introduced in the U.S. in the 1980s. “The remaining 85% of Sony's
models were produced from minor rearrangements of existing features and cosmetic redesigns
of the external case...topological changes [such as these] can be made with little cost or risk”
(Sanderson and Uzumeri, 1995). The basic mechanisms in each platform were refined
continually while following a disciplined and creative approach of focusing its families on clear
targets.
Lutron - Electronic Lighting Control Systems
When engineers at Lutron design a new product line, they begin with a fairly standard product
with very few options (see, e.g., Spira, 1993). They then work with individual customers to
extend the product line until they eventually have a hundred or so models which customers can
purchase. Then engineering and production work together to redesign the product line with 15-
20 standardized components that can be configured into the same hundred models from which
customers could initially choose. Additional customization work can be performed to meet
individual customer requirements; in its electronic lighting systems line, used in conference
rooms, ballrooms, and hotel lobbies, Lutron has rarely shipped the same system twice
(Spira, 1993).
Nippondenso - Panel Meters
Nippondenso Co., Ltd. makes automotive components for Toyota, other Japanese car makers,
and car makers in other countries. They design their panel meters using a combinatoric strategy
as illustrated in Figure 1.1. A panel meter is composed of six parts (in rare cases, only five),
and in order to reduce inventory and production costs, each type of part has been redesigned
so that its mating features to its neighbors are identical across the part type. This was done by
standardizing the design (denoted by SD in the figure) in an effort to reduce the number of
variants of each part. Inventory and manufacturing costs were reduced without sacrificing the
product offering. Each zigzag line on the right hand side of Figure 1.1 represents a valid type of
meter, and as many as 288 types of meters can be assembled from 17 different components
(Whitney, 1993).
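The count of 288 meter types follows from simple combinatorics: the number of assemblable meters is the product of the variant counts of the part types. The per-part counts below are hypothetical (the actual breakdown appears in Figure 1.1); they are chosen only to show one way 17 components across six part types can yield 288 combinations.

```python
from math import prod

# hypothetical variant counts for the six part types of a panel meter;
# these values are only one way 17 components can be split across six
# part types, not the breakdown from Figure 1.1
variants_per_part = [4, 3, 4, 3, 2, 1]

assert sum(variants_per_part) == 17   # total distinct components
print(prod(variants_per_part))        # number of assemblable meter types: 288
```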
Black & Decker - Power Tools
The most common component in all power tools is the universal motor which Black & Decker
redesigned in the early 1970s. The redesign was in response to the threat of required double
insulation on electrical devices to protect the user from electrical shock if the main insulation
system fails. Double insulation was incorporated into 122 basic tools with hundreds of
variations, from jig saws and grinders to edgers and hedge trimmers. Through standardization
of the product line, Black & Decker was able to produce all of its power tools using a line of
motors that varied only in the stack length and the amount of copper wrapped within the motor.
As a result, all of the motors could be produced on a single machine with stack lengths varying
from 0.8 in to 1.75 in and power output ranging from 60 to 650 watts. Furthermore, new
designs were developed using standardized components such as the redesigned motor, which
allowed products to be introduced, exploited, and retired with minimal expense related to product development.
Canon - Copiers
Canon has successfully dominated the low volume end of the copier market since the mid
1980s. Canon's copiers offer a wide range of functions and market uses: 500-70,000 copies, in either black and white or in as many as six different colors. To provide this variety,
Canon has a number of different series (base models or platforms) from which variant
derivatives are created to cover most of the customer's economic and technical requirements.
About 80 percent of the components of these copiers are standard; the remaining 20 percent
are altered and modified to produce product variants within the product family (see Rothwell and Gardiner, 1990).
Rolls-Royce designs its aircraft engines around a common platform and then “derates” or
“upgrades” the platform to suit specific customer needs (cf., Rothwell and Gardiner, 1990). An
example is the RTM322 engine which was designed to allow several versions to be produced to
cater to different market requirements and power outputs. As shown in Figure 1.2, the
RTM322 platform is common to multiple versions of the engine, namely, the turboshaft,
turbofan and turboprop. When the RTM322 engine is scaled by a factor of 1.8, the engine
platform becomes the core for the RB550 series which is produced in two versions: turboprop
and turbofan.
In light of these examples, the following definitions for product family, product platform,
and derivatives and product variants are offered to provide context for the remainder of the
dissertation.
A product family is a group of products which share common form features and
function(s), targeting one or multiple market niches. Here, form features refer
generally to the shape and characterizing features of a product; function refers generally
to the utilization intent of a product. The Sony Walkman product family is one such
example; it contains a variety of models with different features and functions, e.g.,
graphic equalizer, auto-reverse, and waterproof casing, to target specific market niches.
A product platform, in this dissertation, is the common set of design variables around
which a family of products can be developed. In general terms, a product platform is
the common technological base from which a product family is derived through
modification and instantiation of the product platform to target specific market niches
(cf., Erens, 1997; McGrath, 1995; Meyer and Lehnerd, 1997). The universal motor
platform developed by Black & Decker is an example of a successful product platform.
Product platforms are also prevalent in the automobile industry, for example, where
several car models are typically derivatives of a common platform (cf., Siddique and
Rosen, 1998); Kobe (1997) and Naughton (1997) describe GM’s and Honda’s global
platform strategies, respectively.
In light of these examples and definitions, opportunities for making contributions in product
family and product platform design are discussed in the next section.
1.1.2 Opportunities in Product Family Design and Product Platform Design
To identify opportunities in product family and product platform design, a closer look at the previous examples is needed. The examples from Lutron, Nippondenso, and Black & Decker typify an a posteriori, or bottom-up, approach to
product family design. Each company redesigned or consolidated a group of distinct products
to create a more “efficient and effective” product family. Here, efficient and effective refers to
the increased economies of scale each company was able to realize by standardizing components so as to reduce manufacturing variability (i.e., the variety of parts that are produced in a given manufacturing facility) and thereby reduce inventory and production costs.
While the cost savings in manufacturing and inventory begin almost immediately from this type of
approach, the rewards are typically long-term since the capital investments and redesign costs
can be significant. Black & Decker, for example, estimated that it would take seven years to
reach the break-even point when they redesigned their universal motor platform for double insulation. Including capital investments and tooling, Black & Decker spent $17M to redesign their motors; however, by paying attention to
standardization and exploiting platform scaling around the motor stack length, all of their motors
could be produced on the same machines. As a result, material costs dropped from $0.77 to
$0.42 per motor while labor costs fell from $0.248 to $0.045 per motor, yielding savings of $1.82M per year. The cost of Black & Decker tools decreased by as much as 62%.
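A back-of-the-envelope check of the reported figures can be done in a few lines of Python. The implied annual volume is a derived estimate, not a number reported in the source.

```python
# Reported per-motor cost reductions for the redesigned universal motor:
material_saving = 0.77 - 0.42    # $/motor
labor_saving = 0.248 - 0.045     # $/motor
per_motor = material_saving + labor_saving

# Implied annual production volume if the $1.82M/yr figure came from
# these per-motor savings alone (an assumption for illustration):
implied_volume = 1.82e6 / per_motor

print(round(per_motor, 3), round(implied_volume / 1e6, 2))  # 0.553 3.29
```

That is, the quoted savings are consistent with a volume on the order of three million motors per year.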
But must a company spend millions of dollars in costly redesign to achieve a good
product family? The answer is obviously no, and the examples from Rolls Royce, Canon, and
Sony demonstrate such an approach. These three companies exemplify an a priori or top-
down approach to product family design, i.e., strategically manage and develop a family of
products based on a common platform and its derivatives. McGrath (1995) states that “A clear
platform strategy leverages the resulting products, enabling them to be deployed rapidly and consistently,” and further that:
“Companies target new platforms to meet the needs of a core group of customers but
design them for easy modification into derivatives through the addition, substitution, or
removal of features. Well-designed platforms also provide a smooth migration path
between generations so neither the customer nor the distribution channel is disrupted.”
Finally, commonality and standardization across product families allow new designs to be
introduced, exploited, and retired with minimal expense related to product development
(Lehnerd, 1987).
As discussed in Section 1.1.1, Sony and Canon have been able to dominate their
respective markets despite serious local and global competition through a well managed product
platform implementation strategy. The Sony Walkman has been the leader in the personal
stereo market for decades; Sanderson and Uzumeri (1995) studied the success of the Sony Walkman product family and report that:
“Sony's strategy employed a judicious mix of design projects, ranging from large team
efforts that produced major new model 'platforms' to minor tweaking of existing
designs. Throughout, Sony followed a disciplined and creative approach to focus its
sub-families on clear design goals and target models to distinct market segments. Sony
supported its design efforts with continuous innovation in features and capabilities, as
well as key investments in flexible manufacturing.”
Similarly, Canon was able to capture, and thereafter dominate, the low-end copier market from
Xerox through careful development and realization of a family of products derived from
common platforms (Jacobson and Hillkirk, 1986). Companies like Xerox now are in the
process of re-engineering their product development processes to facilitate the design and
development of new families of copiers in record time (Paula, 1997). Along these same lines,
Rolls Royce can boast similar success. By scaling the RTM322 engine platform to satisfy a
range of thrust and power requirements, Rolls Royce was able to (a) reduce manufacturing and
inventory costs by using similar modules and components from one engine to the next and, more
importantly, (b) facilitate the costly certification phase of its engine development process.
Good product platforms do not just come off the shelf; they must be carefully planned,
designed, and developed. This requires intimate knowledge of customer requirements and a
thorough understanding of the market. However, as discussed in the literature review in Section
2.2.1, many of the tools and methods which have been developed to facilitate the management and development of effective product platforms and product families are managerial in nature, offering strategic guidance but little support for engineering modeling and design synthesis. Meanwhile, engineering design methods and tools for
synthesizing product families and product platforms are limited or slowly evolving. Consider the
brief summary in Table 1.1 of the product family examples from Section 1.1.1 and the
availability of design support. The majority of the examples from Section 1.1.1 require modular
design to facilitate upgrading and derating product variants through the addition and removal of modules. A variety of clustering approaches have been developed to reduce variability within a product family and
facilitate redesigning product families to improve component commonality, see Section 2.2.3.
Meanwhile, little to no attention has been paid to platform scaling issues for product family design.
Table 1.1 Product Family Examples: Approach and Available Support
• In many product families, scalability can be exploited from both a technical standpoint
and a manufacturing standpoint to increase the potential benefits of having a common
product platform. The Rolls Royce RTM322 engine and the Black & Decker universal
motor are excellent examples of this.
• Finally, and perhaps most importantly, the concept of scalability and scalable product
platforms provides an excellent inroad into product family and product platform design
through the synthesis of current research efforts in Decision-Based Design and the
Robust Concept Exploration Method (described in Sections 1.2.1 and 1.2.2,
respectively), robust design (described in Section 2.3) and tools from
marketing/management science (described in Section 2.2.1).
How can a common scalable product platform be modeled and designed for a
product family?
The Product Platform Concept Exploration Method (PPCEM) is developed in this dissertation to provide a Method which facilitates the synthesis
and Exploration of a common Product Platform Concept which can be scaled into an
appropriate family of products. The PPCEM and its associated tools and steps are introduced
in Section 3.1. The underlying assumption behind the PPCEM is that a common set of
specifications (i.e., design variable settings) can be found for a product platform which can then
be scaled in one or more of its “dimensions” to realize a product family. This product family can
then satisfy a wide variety of customer requirements with minimal compromise in individual
product quality and performance even though the product family is derived from a common
platform through scaling. Although the PPCEM is predominantly a method for parametric or scale-based product platform design, the resulting commonality can also reduce costs through better economies of scale and amortization of capital investment over a wider
variety of derivative products based on the common product platform. In special cases, such as
the Rolls Royce RTM322 engine platform mentioned earlier and the Boeing 747 series of
aircraft, an added benefit of scaling a common product platform is to expedite the testing and
certification phase of development (cf., Rothwell and Gardiner, 1990). The foundation for
developing this approach is presented in the next section, and the specific research focus for the dissertation is then presented in Section 1.3.
The technology base for the dissertation is described in this section. An overview of
Decision-Based Design, the design paradigm subscribed to in this dissertation, and the Decision Support Problem Technique is given in Section 1.2.1, followed by an overview of the Robust Concept Exploration Method (from which the Product Platform Concept Exploration Method takes its name) in Section 1.2.2.
1.2.1 Decision-Based Design, the Decision Support Problem Technique, and the
Compromise Decision Support Problem
Decision-Based Design (DBD) is rooted in the notion that the principal role of a
designer in the design of an artifact is to make decisions (see, e.g., Muster and Mistree, 1988).
This role is useful in providing a starting point for developing design methods based on
paradigms that spring from the perspective of decisions made by designers rather than from computer-based methods (computer-aided design optimization) or methods that evolve from specific analysis tools. One implementation of Decision-Based Design is the Decision Support Problem (DSP) Technique (see, e.g., Bras and Mistree, 1991), a technique
that supports human judgment in designing systems that can be manufactured and maintained.
In the DSP Technique, designing is defined as the process of converting information that
characterizes the needs and requirements for a product into knowledge about a product
(Mistree, et al., 1990). This definition is extended easily to product family design: the process
of converting information that characterizes the needs and requirements for a product family into
knowledge about a product family, or as is the case of this work, a common scalable product
platform. A complete description of the DSP Technique can be found in, e.g., (Mistree, et
al., 1990).
Among the tools available within the DSP Technique, the compromise DSP (Mistree, et
al., 1993) is a general framework for solving multiobjective, non-linear, optimization problems.
In this dissertation, the compromise DSP is central to modeling multiple design objectives and
assessing the tradeoffs pertinent to product family and product platform design. Examples of
these tradeoffs are discussed in the context of the two example problems in Chapters 6 and 7.
The compromise DSP is a hybrid formulation based on Mathematical Programming and Goal Programming (Mistree, et al.,
1993), see Figure 1.3. The compromise DSP is used to determine the values of the design
variables which satisfy a set of constraints and bounds and achieve as closely as possible a set
of conflicting goals. The compromise DSP is solved using the Adaptive Linear Programming
(ALP) algorithm which is based on sequential linear programming and is part of the DSIDES software (Mistree, et al., 1993). Within the compromise DSP, the system goals may be weighted using an Archimedean scheme or rank-ordered into priority levels using a preemptive approach to effect a solution on
the basis of preference. For the preemptive approach, the lexicographic minimum concept
(Ignizio, 1985) is used to evaluate different design scenarios quickly by changing the priority
levels of the goals to be achieved. The capabilities of the lexicographic minimum concept are
employed to develop the product platform portfolio as discussed in Section 3.1.4, with further
examples in Sections 6.4 and 7.5. Differences between the Archimedean and preemptive
deviation functions and a description of the ALP algorithm, design and deviation variables,
system constraints, goals, and bounds are discussed by, e.g., Mistree, et al. (Mistree, et al.,
1993).
Given
An alternative to be improved. Assumptions used to model the domain of interest.
The system parameters:
n       number of system variables
p + q   number of system constraints (p equality constraints, q inequality constraints)
m       number of system goals
gi(x)   system constraint functions
fk(di)  function of deviation variables to be minimized at priority level k for the preemptive case
Find
The values of the independent system variables:
xi, i = 1, …, n
The values of the deviation variables:
di−, di+, i = 1, …, m
Satisfy
System constraints (linear, nonlinear):
gi(x) = 0, i = 1, …, p;  gi(x) ≥ 0, i = p+1, …, p+q
System goals (linear, nonlinear):
Ai(x) + di− − di+ = Gi, i = 1, …, m
Bounds:
ximin ≤ xi ≤ ximax, i = 1, …, n
di−, di+ ≥ 0 and di− · di+ = 0, i = 1, …, m
Minimize
Preemptive deviation function (lexicographic minimum):
Z = [ f1(d1−, d1+), …, fk(dk−, dk+) ]
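The bookkeeping behind this formulation can be sketched in a few lines of Python. This is a minimal illustration of the deviation variables for a single goal and the lexicographic comparison used by the preemptive formulation, not the ALP algorithm itself; the numeric values are hypothetical.

```python
def deviations(achievement, target):
    """Return (d_minus, d_plus) for one goal; by construction d_minus * d_plus = 0."""
    d_minus = max(0.0, target - achievement)  # underachievement of the goal
    d_plus = max(0.0, achievement - target)   # overachievement of the goal
    return d_minus, d_plus

def lex_better(z_a, z_b):
    """True if deviation vector z_a is lexicographically smaller than z_b,
    i.e., preferred under the preemptive (priority-level) formulation."""
    for a, b in zip(z_a, z_b):
        if a != b:
            return a < b
    return False  # equal vectors: neither is strictly better

# Goal achievement A(x) = 8.0 against target G = 10.0 -> underachieves by 2.
print(deviations(8.0, 10.0))               # (2.0, 0.0)
# The first priority level dominates: [0, 9] is preferred to [1, 0].
print(lex_better([0.0, 9.0], [1.0, 0.0]))  # True
```

The second example makes the preemptive idea concrete: a large deviation at a lower priority level never outweighs a small deviation at a higher one.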
The solution to the compromise DSP is termed a satisficing solution because it is a feasible point that achieves the system goals to the “best” extent that is possible.
This notion of satisficing solutions is in philosophical harmony with the notion of developing a
broad and robust set of top-level design specifications. The efficacy of the compromise DSP in
creating ranged sets of top-level design specifications has been demonstrated in both aircraft
design (Lewis, et al., 1994; Simpson, et al., 1996) and ship design (Smith and Mistree, 1994).
Developing ranged sets of top-level design specifications is generalized in this dissertation into the notion of a product platform portfolio; by obtaining a “portfolio” of solutions rather than a single point solution, greater design flexibility can be
maintained during the design process. Finally, the compromise DSP also provides the
cornerstone of the Robust Concept Exploration Method which is reviewed in the next section.
1.2.2 The Robust Concept Exploration Method
The Robust Concept Exploration Method (RCEM) has been developed to facilitate
quick evaluation of different design alternatives and generation of top-level design specifications
with quality considerations in the early stages of design (see, e.g., Chen, et al., 1996a). It is
primarily useful for designing complex systems and facilitating computationally expensive design
analysis. The RCEM is created by integrating several methods and tools—robust design
methods (see, e.g., Phadke, 1989), the Response Surface Methodology (see, e.g., Myers and
Montgomery, 1995), and Suh's Design Axioms (Suh, 1990)—within the compromise DSP
(Mistree, et al., 1993). A review of the wide variety of applications that have successfully employed the RCEM can be found in, e.g., (Chen, et al., 1997).
The RCEM is a four-step process as illustrated in Figure 1.4. The corresponding
computer infrastructure is illustrated in Figure 1.5. The steps are described as follows.
Step 1 - Classify Design Parameters: Given the overall design requirements, this step
involves the use of Processor A, see Figure 1.5, to (a) classify different design
parameters as either control factors, noise factors, or responses following the
terminology used in robust design, and (b) define the concept exploration space.
Step 2 - Screening Experiments: This step requires the use of the point generator
(Processor B), simulation programs (Processor C), and an experiment analyzer
(Processor D) shown in Figure 1.5 to set up and perform initial screening
experiments and analyze the results. The results of the screening experiments are used
to (a) fit low-order response surface models, (b) identify significant main effects, and (c)
reduce the design region.
Step 3 - Elaborate the Response Surface Model: This step also requires the use of the
point generator (Processor B), simulation programs (Processor C), and experiment
analyzer (Processor D) to set up and perform secondary experiments and analyze the
results. The results from the secondary experiments are used to (a) fit second-order
response surface models (using Processor E) which replace the original computer
analyses, (b) identify key design drivers and the significance of different design factors
and their interactions, and (c) quickly evaluate different design alternatives and answer
"what-if" questions in Step 4.
[Figure 1.4: Steps of the RCEM with associated methods, tools, and math constructs — overall design requirements; Step 2, conduct “screening experiments” (Response Surface Methods, DOE/ANOVA statistical methods); Step 3, elaborate response surface models]
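The response surface fitting at the heart of Steps 2 and 3 can be sketched as ordinary least squares. The sketch below fits a full second-order model in two factors to data from a hypothetical deterministic analysis; the analysis function itself is an assumption chosen so the fit can be checked exactly, not an example from the dissertation.

```python
import numpy as np

# Hypothetical deterministic "computer analysis" (itself a quadratic,
# so least squares recovers its coefficients exactly).
def analysis(x1, x2):
    return 3.0 + 2.0 * x1 - 1.0 * x2 + 0.5 * x1**2 + 0.25 * x1 * x2

rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 20)
x2 = rng.uniform(-1, 1, 20)
y = analysis(x1, x2)

# Design matrix for the full second-order model in two factors:
# [1, x1, x2, x1^2, x2^2, x1*x2]
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.round(beta, 3))  # beta ≈ [3, 2, -1, 0.5, 0, 0.25]
```

Once fit, the inexpensive polynomial stands in for the original analysis when evaluating design alternatives and answering "what-if" questions.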
The RCEM is taken as the foundation for the research work in this dissertation for several reasons, including its:
• demonstrated effectiveness for complex systems and robust design, see, e.g., (Chen, et
al., 1997),
The usefulness of these features of the RCEM to this research work is elaborated throughout
the dissertation, particularly in Sections 3.1 and 3.2 wherein the PPCEM is introduced. The
research objectives for the dissertation are described in the next section.
The research focus of the dissertation is structured around:
• a set of research questions that capture motivation and specific issues to be addressed,
• a set of corresponding research hypotheses that offer a context by which the research
proceeds, defining the structure of the verification studies performed in this work, and
• a set of resulting research contributions that embody the deliverables from the research
in terms of intellectual value, a repeatable method of solution, limitations, and avenues of
further investigation.
The research questions are presented in Section 1.3.1 along with the corresponding research
hypotheses. The research hypotheses (and supporting posits) are discussed in more detail in
Section 3.2 along with issues of verification and validation. The resulting research contributions are described in Section 1.3.2.
1.3.1 Research Questions and Hypotheses in the Dissertation
The principal goal in this dissertation is the development of a method to facilitate the
design of a scalable product platform around which a family of products can be developed. As
discussed in the previous section, Decision-Based Design and the RCEM provide the
foundation on which this work is built. Given this foundation and goal, the motivation for this
research is embodied in the primary research question identified in Section 1.1.2 which is
repeated here.
Q1. How can a common scalable product platform be modeled and designed for a
product family?
This research question is related directly to the principal goal in this research which is to
advance product family design through the development of a method to design a scalable
product platform for a product family. The following hypothesis is investigated in this
for designing a common product platform which can be scaled to realize a product
family.
Since Question 1 is quite broad, three supporting research questions and sub-
hypotheses are proposed to facilitate the verification of Hypothesis 1. The supporting questions and corresponding sub-hypotheses are as follows.
Q1.1. How can product platform scaling opportunities be identified from overall
design requirements?
Q1.2. How can robust design principles be used to facilitate designing a common scalable product platform?
Q1.3. How can individual targets for product variants be aggregated and modeled for designing a common product platform?
Sub-Hypothesis 1.1: The market segmentation grid can be utilized to help identify platform scaling opportunities from the overall design requirements.
Sub-Hypothesis 1.2: Robust design principles can be used to facilitate the design of a common scalable product platform.
Sub-Hypothesis 1.3: Individual targets for product variants can be aggregated into an appropriate mean and variance and used in conjunction with robust design principles to design a common product platform.
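The aggregation in Sub-Hypothesis 1.3 can be made concrete with a toy computation. The variant targets below are hypothetical values invented for illustration; the idea is only that a set of individual targets collapses into a mean (the aggregate target for the platform) and a variance (the spread the platform must accommodate, treated like noise in robust design).

```python
from statistics import fmean, pvariance

# Hypothetical performance targets for four product variants,
# e.g., target power output in watts (values are illustrative only).
variant_targets = [300.0, 400.0, 500.0, 600.0]

mu = fmean(variant_targets)           # aggregate target for the platform
var = pvariance(variant_targets, mu)  # spread the platform must span

print(mu, var)  # 450.0 12500.0
```

Robust design then seeks platform settings whose performance stays close to the mean while being insensitive to variation of this magnitude.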
There is a one-to-one correspondence between each supporting question and sub-
hypothesis. The sub-hypotheses are stated here primarily to provide context for the literature
review in the next chapter and the development of the PPCEM in Section 3.1. The strategy for verifying and testing the hypotheses is outlined in Section 3.3.
In addition to the primary research question related to the design of scalable product
platforms, two secondary research questions are also investigated in this dissertation.
Q2. Is kriging a viable metamodeling technique for building approximations of deterministic computer analyses?
Q3. Are space filling experimental designs better suited than classical experimental designs for building approximations of deterministic computer analyses?
Metamodeling techniques—currently, second-order response surface models—are employed in the RCEM to facilitate concept exploration and the evaluation of design alternatives. However, metamodeling techniques such as kriging may be better suited for building approximations of deterministic
computer analyses than the response surface models currently employed in Steps 2 and 3 of the
RCEM (see Section 1.2.2). Moreover, the traditional or “classical” experimental designs which
are typically used to sample the design space by querying the computer code to generate data
to build these approximations, may not be well-suited for deterministic computer analyses either;
hence, alternative “space filling” designs also are investigated as part of the research in this
dissertation. The specific hypotheses, which are investigated in response to the secondary research questions, are as follows.
Hypothesis 2: Kriging is a viable metamodeling technique for building approximations of deterministic computer analyses.
Hypothesis 3: Space filling experimental designs are suited better for building approximations of deterministic computer analyses than classical experimental designs.
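To give Hypothesis 2 some intuition, a minimal kriging-style interpolator (Gaussian correlation, constant mean) can be sketched as follows. The fixed correlation parameter theta is an assumption; in practice it is fit by maximum likelihood, and the full algorithm is considerably more involved than this sketch. The key behavior it shows is why kriging suits deterministic analyses: unlike a least-squares response surface, it reproduces every sample point exactly.

```python
import numpy as np

def corr(a, b, theta=10.0):
    # Gaussian correlation between two 1-D sets of sample locations.
    return np.exp(-theta * (a[:, None] - b[None, :]) ** 2)

x = np.array([0.0, 0.3, 0.6, 1.0])
y = np.sin(2 * np.pi * x)            # deterministic "computer analysis"

R = corr(x, x) + 1e-10 * np.eye(len(x))  # tiny nugget for numerical stability
ones = np.ones(len(x))

# Generalized least squares estimate of the constant mean, then the
# kriging weights for the residuals.
beta = (ones @ np.linalg.solve(R, y)) / (ones @ np.linalg.solve(R, ones))
gamma = np.linalg.solve(R, y - beta * ones)

def predict(x_new):
    return beta + corr(np.atleast_1d(x_new), x) @ gamma

# Interpolates the training data (to numerical precision):
print(np.allclose(predict(x), y, atol=1e-6))  # True
```

Between the samples the predictor smoothly blends the observed responses according to their correlation with the prediction point.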
The motivation for these last two research questions and hypotheses is discussed in
Section 2.4 wherein the limitations of response surface modeling and design of experiments
techniques within the RCEM are discussed in greater detail. It is worth noting that Hypotheses
2 and 3 are related to Hypothesis 1 but have implications which extend beyond product family design.
The relationship between the hypotheses and the various sections of the dissertation is
summarized in Table 1.2. The hypotheses are elaborated more in the literature review in the
next chapter in the sections listed in the table and revisited in Chapter 3 after the Product
Platform Concept Exploration Method is presented. Verification and validation issues are
discussed in Section 3.3, and testing of the individual hypotheses commences in Chapter 4,
lasting until Chapter 7. Although it is not noted in the table, Chapter 8 contains a review of the
hypotheses and their verification. The resulting contributions from these hypotheses are
described in the next section to provide context for the development of the research in the
dissertation.
Table 1.2 Hypotheses and Relevant Sections in the Dissertation

Hypothesis                                                   Sections Discussed              Sections Tested
H1    Product Platform Concept Exploration Method           Chp 3                           Chp 6 & 7
SH1.1 Usefulness of market segmentation grid                §2.2.1, §3.1.1, §3.1.2, §3.2    §6.2, §7.1.3
SH1.2 Robust design of scalable product platform            §2.3, §3.1.2, §3.1.4, §3.2      §6.3-6.5, §7.4-7.6
SH1.3 Aggregating product family specifications             §2.3.3, §3.1.4, §3.2            §6.3-6.5, §7.4-7.6
H2    Utility of kriging for metamodeling deterministic     §2.4.1, §2.4.2, §3.1.3, §3.2    Chp 4, §5.2, §7.3
      computer experiments
H3    Utility of space filling experimental designs         §2.4.3, §3.1.3, §3.2            §5.3
The hypotheses and sub-hypotheses, taken together, define the research presented in
this dissertation and hence the contributions from the research. As evidenced by the principal
goal in the dissertation and Hypothesis 1, the PPCEM is the primary contribution in the dissertation. Additional contributions include the following.
• The notion of scale factors in product platform design and a means of identifying them
for a product platform: Sections 2.3, 3.1.1, 3.1.2, 6.2, and 7.1-7.2.
• An abstraction of robust design principles for realizing scalable product platforms for
product family design: Sections 2.3, 3.1.2, 3.1.4, 6.3-6.5, and 7.4-7.6.
• An algorithm to build, validate, and use a kriging model: Section 2.4.2, Chapters 4, 5,
and 7, and Appendix A.
• An algorithm for generating minimax Latin hypercube designs: Section 2.4.3 and
Appendix C.
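In the spirit of the space filling designs contributed here, a random Latin hypercube sampler can be sketched in a few lines. The real minimax Latin hypercube algorithm of Appendix C additionally optimizes a distance criterion over candidate designs; this sketch shows only the defining Latin hypercube property: each of the n equal-width bins of every dimension contains exactly one sample.

```python
import random

def latin_hypercube(n, dims, rng=random.Random(0)):
    """Return n points in [0, 1)^dims forming a random Latin hypercube."""
    cols = []
    for _ in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)
        # exactly one point per bin, jittered within the bin
        cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))

pts = latin_hypercube(5, 2)
# Each dimension's bin indices form a permutation of 0..4:
for d in range(2):
    print(sorted(int(p[d] * 5) for p in pts))  # [0, 1, 2, 3, 4]
```

A minimax or maximin variant would generate many such candidates and keep the one with the best spread of inter-point distances.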
This being the first chapter of the dissertation, these contributions cannot be substantiated;
therefore, they are revisited in Section 8.1 after all of the research findings have been presented. The organization of the dissertation is illustrated in Figure 1.6. Having laid the foundation by introducing the research questions and hypotheses for
the work in this chapter, the next chapter contains a literature review of related research,
elucidating the problems and opportunities in product family and product platform design.
Three research areas are reviewed: (1) product family and product platform design with
particular emphasis on scalability and sizing, (2) robust design and its application in engineering
design, and (3) statistical metamodeling and its role in engineering design, see Sections 2.2, 2.3,
and 2.4, respectively. A discussion of how these disparate research areas relate to one another is offered in Section 2.1; in Chapter 3, they are synthesized into a method for designing a scalable product platform for a product family. The
PPCEM and its associated steps are presented in Section 3.1. After the PPCEM is presented,
the research hypotheses are revisited in Section 3.2, and supporting posits are stated and
substantiated. Section 3.3 contains an outline of the strategy for verification and testing of the
hypotheses which includes a preview of Chapters 4 and 5—wherein Hypotheses 2 and 3 are
tested—and Chapters 6 and 7 wherein the PPCEM is applied to two example problems.
Testing of the hypotheses begins in Chapter 4, but Chapters 4 and 5 entail a brief
departure from product platform design yet are an integral part of the development of the dissertation. In Chapter 4, the kriging metamodeling approach is presented and exercised to familiarize the reader with the method and to begin to verify Hypothesis 2 by comparing the
accuracy of kriging models to second-order response surface models, the current standard in
metamodeling. In Chapter 5, an extensive study of six engineering test problems selected from
the literature is conducted to determine the utility of kriging metamodels and various space filling experimental designs.
Once the kriging/DOE study is completed in Chapter 5, the first of two examples used
to demonstrate the PPCEM and verify its associated hypotheses is given in Chapter 6: the
design of a family of universal electric motors. This first example employs the PPCEM without
any metamodeling, providing “proof of concept” that the method works. Then, in Chapter 7 the
PPCEM is applied to the design of a family of General Aviation aircraft, making full use of the
kriging metamodels and robust design capabilities. In each chapter, an overview of the problem
is given along with pertinent analysis information, the steps of the PPCEM are performed, and the results are verified and discussed.

[Figure 1.6: Overview of the dissertation chapters, their relevance, and the hypotheses they address — Chapter 1 (introduction, motivation, and problem identification); Chapter 3 (introduce the PPCEM and its steps); Chapter 6 (design of a family of universal motors: demonstrate implementation of the PPCEM without metamodels, provide proof of concept, and verify H1, SH1.1, SH1.2, & SH1.3); Chapter 8 (closing remarks: summarize contributions and limitations, identify avenues of future work)]
Chapter 8 is the final chapter in the dissertation and contains a summary of the
dissertation, emphasizing answers to the research questions and resulting research contributions
in Sections 8.1 and 8.2, respectively. Possible avenues of future work are discussed in Section 8.3.
There are six appendices which supplement the dissertation. Appendix A contains a description of the algorithm used to build, validate, and use a kriging model. Appendix B contains detailed descriptions of the experimental designs investigated in the kriging/DOE study in Chapter 5; the minimax Latin hypercube design, which is introduced in Section 2.4.3 and developed as part of this dissertation, is described in Appendix C. Appendix D contains descriptions of the six engineering test problems used in the
kriging/DOE study, and supplemental information for the kriging/DOE study is given in
Appendix E. Supplemental information for the General Aviation aircraft problem in Chapter 7 is
given in Appendix F.
Finally, the dissertation is summarized pictorially in a figure that is read from bottom to top, beginning with the foundation provided in this chapter: Decision-Based
Design and the Robust Concept Exploration Method. This figure provides a road map for the
dissertation, and it is referred to at the end of each chapter to help guide the reader through the
[Figure: Dissertation roadmap — building from the Product Platform Concept Exploration Method (Chp 3) and its supporting studies (Chps 4 and 5) through the two example problems (Chps 6 and 7) to the achievements and recommendations (Chp 8)]
CHAPTER 2
Given the research focus identified in Section 1.3, a survey of relevant work in product
family and product platform design, robust design, and metamodeling is presented in this chapter
in Sections 2.2, 2.3, and 2.4, respectively. A thorough description of what is in this chapter and
how these disparate fields of research relate to each other is offered in Section 2.1. In Section
2.2, the tools and methods for designing product families and product platforms introduced in
Section 1.1.2 are discussed in more detail. Section 2.3 then contains a review of robust design
principles, focusing on robust design opportunities in product family and product platform
design. This segues into a discussion of metamodeling and approximation techniques in Section
2.4 to facilitate the implementation of robust design. In particular, the kriging approach to building approximations of deterministic computer experiments is described in Section 2.4.2, and a variety of space filling experimental designs for querying a computer code to build kriging models are described in Section 2.4.3. Section 2.5
concludes the chapter with a summary of what has been presented and a preview of what is
next.
2.1 WHAT IS PRESENTED IN THIS CHAPTER
In the preceding chapter, product families and product platforms were introduced along
with several illustrative examples. In this chapter, a literature review of tools and methods which
facilitate the development of product families and product platforms is presented; the focus is on
three areas: (1) approaches for product family and product platform design, (2) robust design
principles and their implementation, and (3) metamodeling, in Sections 2.2, 2.3, and 2.4,
respectively. At first glance, these three research areas appear unrelated; however, transitional
elements presented at the end of each section preface the discussion in the section that follows
as the literature review moves from the general area of product family design to the specific area
of metamodeling, see Figure 2.1. The relevant hypotheses covered in each section are noted in
Figure 2.1.
[Figure 2.1: Relationship among the review topics and hypotheses — the market segmentation grid and the scalable product platform (H1, SH1.1); robust design principles, conceptual noise factors, and modeling mean and variance (SH1.2, SH1.3); and metamodeling via kriging and space filling DOE (H2, H3)]
As shown in Figure 2.1, the discussion in Section 2.2 explores in greater depth some of
the tools and approaches for product family and product platform design including: product
family maps and the market segmentation grid (Meyer, 1997); approaches to product family
and product platform design; and finally, the notion of a scalable product platform (Rothwell and
Gardiner, 1990). The work by Rothwell and Gardiner then is used to provide a transition to a
discussion of robust design principles in Section 2.3 by relating Rothwell and Gardiner’s
concept of “robust design” for product families to the idea of a “conceptual noise factor” in a
distributed design environment as introduced in (Chang and Ward, 1995; Chang, et al., 1994).
This notion of a “conceptual noise factor” then is extended to scale factors within a scalable
34
product platform, providing a means to abstract robust design principles for application in
In Section 2.3, the focus also shifts from extending Taguchi’s robust design to its
implementation within the Robust Concept Exploration Method (RCEM), i.e., through the use
the beginning of Section 2.4, providing a transition from robust design to utilizing metamodels to
facilitate its implementation as alluded to in Figure 2.1. The general approach to metamodeling
also is discussed in the beginning of Section 2.4, followed by a closer look at some of the
limitations of second-order response surface models in engineering design in Section 2.4.1. This
discussion provides the impetus for a closer look at two specific aspects of metamodeling—
model selection and experimental sampling—which also are investigated as part of this research.
Specifically, kriging and space filling experimental designs are examined as potential alternatives
to the response surface methods and classical design of experiments (DOE) currently employed
in the RCEM. Taken together, this literature review provides the necessary elements for the
development of the Product Platform Concept Exploration Method for designing scalable
product platforms for a product family as presented in Chapter 3. Toward this end, the state-
of-art in product family and product platform design is discussed in the next section.
35
2.2 PRODUCT FAMILY AND PRODUCT PLATFORM DESIGN TOOLS AND
METHODS
As stated in Section 1.1, in order to provide as much variety as possible for the market
with as little variety as possible between products, many researchers advocate a product
platform and product family approach to satisfy effectively a wide range of customer needs. In
Section 2.2.1, several attention directing tools developed to facilitate product family and
product platform design are presented. In Section 2.2.2, metrics for assessing product platform
effectiveness are discussed. Finally, in Section 2.2.3, methods for product family design are
reviewed.
2.2.1 Attention Directing Tools for Product Family and Product Platform Design
A large portion of the work in strategic marketing and management is focused on either
categorizing or mapping the evolution and development of product families. These maps
typically are applied a posteriori to a product family but can be used a priori to identify new
directions for product development within the product family. Examples of product family maps
include the work by Meyer and Utterback (1993) and Wheelwright and Sasser (1989); a brief
Meyer and Utterback (1993) use the Product Family Map shown in Figure 2.2 to trace
the evolution of a product family. In their map, each generation of the product family employs a
platform as the foundation for targeting specific products at different (or complimentary)
markets. Improved designs and new technologies spawn successive generations, and cost
reductions and the addition and removal of features can lead to new products. Multiple
36
generations can be planned from existing ones, expanding to different markets or revitalizing old
ones. A more formal map, with four levels of hierarchy in the product family (i.e., product
family, product platforms, product extensions, and specific products) also is introduced in their
work in an effort to assess the dynamics of a firm’s core capabilities for product development;
Time
Product 7
Product 8
Product 1
Product 2
Product 3
Product 4
New Niches
Figure 2.2 Product Family Map (adapted from Meyer and Utterback, 1993)
37
In related work, Wheelwright and Sasser (1989) have developed the Product
Development Map to trace the evolution of a company’s product lines, see Figure 2.3. In
addition to mapping the evolution of the product line, they also categorize a product line into
“core” and “leveraged” products, dividing leveraged products into “enhanced,” “customized,”
“These distinctions—core, hybrid, and the others—are immediately useful because they
give managers a way of thinking about their products more rigorously and less
anecdotally. But the various turns on the product map—the various “leverage points”—
also serve as crucial indicators of previous management assumptions about the
corporate strengths and market forces shaping product evolutions.” (Wheelwright and
Sasser, 1989, p. 114)
38
Enhanced Customized
• • •Prototype
•••••• Core
Hybrid
Cost-
reduced
Core
Time
As shown in Figure 2.3., the core product, typically derived from an engineering
prototype, provides the engineering platform upon which further enhancements are made.
Enhanced products are developed from the core by adding distinctive features to target specific
market niches; enhanced products are typically the first products leveraged from the core
product. Enhanced products can be customized further to provide more choice if necessary.
Cost-reduced products are “scaled” or “stripped” down versions (e.g., less expensive materials
and fewer features) of the core which are targeted at price-sensitive markets. Finally the hybrid
39
product is an entirely new design, resulting from the combination of characteristics of two or
more core products. As an example, the evolution of three generations of a family of vacuum
These product family maps are useful attention directing tools for product family design
and development but offer little direction for designing a scalable product platform. Toward this
end, the market segmentation grid developed by Meyer (1997) facilitates identifying leveraging
High Cost
High Performance
Low Cost
Low Performance
Derivative Products
Product Platform
products are listed horizontally in the grid. The vertical axis reflects different tiers of price and
performance within each market segment. Several example instantiations of this grid can be
40
found in (Meyer, 1997; Meyer and Lehnerd, 1997) for companies such as Hewlett Packard,
This simple market segmentation grid can be used by firms to segment their markets,
helping to define a clear product platform strategy. For instance, a marketing strategy which
employs no leveraging is shown in Figure 2.5a. Companies which fail to maintain a good
platform leveraging strategy often have too many products that share too little technology,
Three types of platform leveraging strategies can be identified within the market
shown in Figure 2.5b-d. All three leveraging strategies enable a more efficient and effective
product family to be developed. Examples of these leveraging strategies include the following
(Meyer, 1997):
41
Vertically leveraging - a product platform is leveraged to address a range of
price/performance tiers within a specific market segment. A company which excels in
the high-end segment of its market may scale down its platform into lower
price/performance tiers by removing functionality from its high-end platform to achieve
lower price products. The other option is to scale up a low-end platform by adding
more powerful component technologies or modules to meet the higher performance
demands for the higher tiers. The main benefit of this approach is the capability of the
company to leverage its knowledge about a particular market niche without having to
develop a new platform for each price/performance tier. The Rolls Royce RTM322
engine and Canon’s low-end copiers discussed in Section 1.1.1 exemplify this
approach.
High Cost Platform 1 Platform 2 Platform 3 High Cost High End Platform Leverage
High Performance High Performance
Platform 4 Platform 5
Mid-Range Mid-Range
Scaled Up
Mid-Range Mid-Range
42
Beachhead approach - combines horizontal and vertical leveraging to achieve perhaps the
most powerful platform leveraging strategy. In a beachhead approach, a company
develops a low-cost effective platform for a particular market segment and then scales
up the performance characteristics of the platform and adds other features to target new
market segments. The example of Compaq computers is offered in (Meyer, 1997).
Compaq entered the personal computer market in 1982 and, after establishing a
foothold in the portable computer market niche, slowly introduced a stream of new
products for other market segments and different price/performance tiers, including a
line of desktop PCs for business and home use. Of the examples discussed in Section
1.1.1, the Sony Walkman, Black & Decker’s universal electric motor platform, and
Lutron’s lighting systems also exemplify this type of approach to platform leveraging.
Sony initiated a beachhead approach from the start with their Walkman product lines.
The same is not true for Black & Decker and Lutron. Both companies began with no
leveraging strategy and only after redesigning their product lines as discussed in Section
1.1.2 where they able to achieve a more efficient and effective beachhead approach.
Consequently, they are now both leaders in their respective fields.
The market segmentation grid provides a useful attention directing tool to help map and
idenfity product platform leveraging opportunities within a product family, providing an answer
to the question:
Q1.1. How can product platform scaling opportunities be identified from overall
design requirements?
43
Keep in mind, however, that the market segmentation grid is only an attention directing tool;
leveraging strategy and exploit scaling opportunities within a product family. The market
segmentation grid is simply a way of representing that strategy, providing a clear mapping of
product leveraging opportunities within the product family. Use of the market segmentation grid
to help identify scaling opportunities within the Product Platform Concept Exploration Method is
further elaborated in Section 3.1.1. In the next section, metrics for assessing product platforms
are discussed.
Several metrics and cost models have been developed to assess either the efficiency
and effectiveness of a product platform or the commonality between a group of products within
a product family. Meyer, et al. (1997), in particular, define two metrics—platform efficiency
and platform effectiveness—to manage the research and development costs of product
which assesses how much it costs to develop derivative products relative to how much it costs
to develop the product platform within the product family. The platform efficiency metric can
also be used to compare different platforms across different product families to assess the
44
Platform effectiveness is a ratio of the revenue a product platform and its derivatives
Product Sales
Platform Effectiveness = [2.2]
Product Development Costs
where the effectiveness of the platform can be assessed at the individual product level or for a
These metrics require costing and revenue information which is typically known only
after the product platform and its derivatives have been developed and reached the market.
These metrics prove useful for managing research and development within the product family
and determining when to renew or re-focus product platform efforts; however, they offer little
commonality of parts within a product family. Many commonality indices have been proposed
for assessing the degree of commonality within a product family. Products which share more
parts and modules within a product family achieve greater inventory reductions, exhibit less part
variability, improve standardization, and shorten development and lead times because more
parts are reused and fewer new parts have to be designed (cf., Collier, 1981). McDermott and
Stock (1994) discuss the benefits of commonality on new product development time, inventory,
and manufacturing; they also cite several researchers who have shown that part commonality
45
across a range of products has reduced inventory costs while maintaining a desired level of
customer service. Particular measures for assessing commonality includes the following:
• Kota and Sethuraman (1998) introduce the Product Commonality Index for determining
the level of part commonality in a product family. Through the study of a family of
portable personal stereos, they illustrate methods to “measure and eliminate non-value
added variations, suggest robust design strategies including modularity and
postponement of product differentiation.” Their approach provides a means to
benchmark product families based on their capability to simultaneously share parts
effectively and reduce the total number of parts.
• Siddique, et al. (1998) propose a commonality index to aid in the configuration design
of common automotive platforms. They are working with an automobile manufacturer
to reduce the number of platforms they utilize across their entire range of cars and
trucks in an effort to reduce development times, costs, and product variety. Ongoing
research efforts for measuring the “goodness” of a common platform are discussed in
(Siddique, 1998).
Commonality measures such as these are based primarily on the ratio of the number of
shared parts, components, and modules to the total number of parts, components, and modules
in the product family. Taking this one step further, Martin and Ishii (1996) seek to assess the
46
cost of producing product variety through the measurement of three indices: commonality,
differentiation point, and set-up costs. The commonality index is similar to that proposed by
Collier (1981) and measures the percentage of common components within a group of products
in a product family. The second index measures the differentiation point for product variety
within an assembly or manufacturing process; the idea being that the later the differentiation
point can be postponed the lower the costs of producing the necessary variety (cf., Lee and
Billington, 1994; Lee and Tang, 1997). Finally, the set-up cost index assesses the cost
contributions needed to provide variety compared to the total cost for the product. The indirect
costs of providing product variety then is taken as a weighted linear combination of these
indices; the weightings for the individual indices may vary from industry to industry. The direct
costs of providing product variety, they assert, are relatively straightforward to determine.
Generalizations are made regarding the costs of product variety based on these indices;
however, there is no work to substantiate their claims or the usefulness of the indices. The
In later work, Martin and Ishii (1997) introduce a process sequence graph which
provides a qualitative assessment of the flow of a product through the assembly process and its
differentiation point. A product family of eighteen instrument panels is analyzed, citing that
differentiation for product variety begins in the second step in the assembly process. This leads
differentiation to reduce production costs and lead-times. The end result is a graph of Variety
47
Voice of the Customer (V2OC) versus percentage commonality for the family of instrument
Figure 2.6 V2OC Rating vs. Commonality (from Martin and Ishii, 1997)
assemblies shared between products to the total number of assemblies in the product family and
is again very similar to that of Collier (1981). The V2OC measure assesses “the importance of
a component’s variety to the aggregated market—not the individual buyer. V2OC is a measure
the importance of a component to a customer, as well as the heterogeneity of the market with
response to that component” (Martin and Ishii, 1997). They do not describe how to measure
V2OC or explain how the V2OC ratings for the instrument panel family are created;
consequently, V2OC does not provide a useful measure for product variety. The resulting
graph, however, is insightful and similar to the product variety tradeoff graph which is introduced
in Section 3.1.5 and illustrated in Section 7.6.2 in the context of the General Aviation aircraft
example. The reasoning behind the target region in the figure is not discussed in their paper
48
either; however, intuition suggests that components with low V2OC rating (i.e., are not
important to the customer) can be common from one product to the next while it is important to
customize components (i.e., decrease their commonality) that have a high V2OC. This idea is
explored in greater depth in Sections 3.1.5 and 7.6.2 wherein a non-commonality index based
assessing and studying product variety tradeoffs. In the meantime, methods for designing
The majority of engineering design research has been directed at improving the
efficiency and effectiveness of designers in the product realization process, and until recently, the
focus has been on designing a single product. For instance, Suh (1990) offers his two axioms
for design: (1) maintain independence of functional requirements, and (2) minimize the
information content of a design. Pahl and Beitz (1988; 1996) offer their four phase approach to
product design which involves the following: clarification of the task, conceptual design,
embodiment design, and detail design. Similarly, Hubka and Eder (1988; 1996) advocate an
approach which involves the following: elaboration of the assigned problem, conceptual design,
laying out, and elaboration. Pugh (1991) introduces the notion of total design which has at its
core market/user needs and demands, the product design specification, conceptual design,
detail design, manufacturing, and selling. In the well-known review of mechanical engineering
49
design research conducted by Finger and Dixon (1989a; 1989b), scant trace of product family
Perhaps the most developed method for product family design which currently exists is
the work by Erens (1997). Erens, in conjunction with several of his colleagues (Erens and
Breuls, 1995; Erens and Verhulst, 1997; Erens and Hegge, 1994; McKay, et al., 1996),
develops a product modeling language for product variety. The primary focus is on the product
little aid for design synthesis and analysis, only representation. The product modeling language
physical. Use of the product modeling langurage is demonstrated in the context of a family of
office chairs, a family of overhead projectors, and a family of cardio-vascular Xray machines.
Excerpts from the family of office chairs example is illustrated in Figure 2.7. The office chair
itself is shown in Figure 2.7a, and the variety of options from which to choose: upholstery,
materials, colors, fixtures, etc., are shown in Figure 2.7b. In Figure 2.7c, the general
representation of the product architecture for the office chair is depicted, and the hierarchy in
the product variety model is illustrated in Figure 2.7d. As illustrated in this example, the product
modeling language provides an effective means for representing product variety but offers little
50
(a) An office chair (b) Office chair options
(c) Office chair architecture (d) Office chair product variety model
In other work, Fujita and Ishii (1997) outline a series of tasks—design specification
analysis, system structure synthesis, configuration, and model instantiation—for product variety
design as their foundation for a formal approach for the design and synthesis of product families.
They decompose product families into systems, modules, and attributes as shown in Figure 2.8.
Under this hierarchical representation scheme, product variety can be implemented at different
levels within the product architecture. For instance, two shared modules and two sets of shared
attributes are shown in Figure 2.8. A formal algorithm has not yet been developed however.
51
System Modules Attributes
Configuration/Geometry
Shared Shared
Architecture
Different
Shared
Functional/Physical
Figure 2.8 Product Variety Decomposed into Systems, Modules, and Attributes (from
Fujita and Ishii, 1997)
investigated for product family design. Stadzisz and Henrioud (1995) cluster products based on
geometric similarities to obtain product families in order to decrease product variability within a
product family in order to minimize the required flexibility of the associated assembly system. A
similar Design for Mass Customization approach is developed in (Tseng, et al., 1996) which
groups similar products into families based on product topology or manufacturing and assembly
similarity and provides a series of steps to formulate an optimal product family architecture
52
a process for redesigning a set of related products through similarity and clustering of common
products around a “core product concept,” i.e., a product platform. The resulting product
family is composed of a set of product variants which share characteristics in common with the
variety and in the context of a product platform and product family. Modularity greatly
facilitates the addition and removal of features to upgrade and derate a product platform (cf.,
Ulrich, 1995). Ulrich and Tung (1991), Ulrich (1995), and Ulrich and Eppinger (1995)
investigate product architecture and modularity and its the impact on product change, product
standardization as a means for enhancing product flexibility and offering a wide variety of
products. Meanwhile, Chen, et al. (1994) suggest designing flexible products which can be
reduce the cost of offering product variety. Rosen (1996) investigates the use of discrete
He emphasizes, as do Ulrich and Eppinger (1995), that the design of product architectures is
“critical in being able to mass customize products to meet differentiated market niches and
53
and other strategic issues.” A Product Module Reasoning System (Newcomb, et al., 1996)
currently is being developed “to reason about sets of product architectures, to translate design
requirements into constraints on these sets, to compare architecture modules from different
viewpoints, and to directly enumerate all feasible modules without generate-and-test or heuristic
Pahl and Beitz (1996) also discuss the advantages and limitations of modular products
to fulfill various overall functions through the combination of distinct modules. Because such
modules often come in various sizes, modular products often involve size ranges where the initial
size is the basic design and derivative sizes are sequential designs. In the context of a scalable
product platform, the initial size constitutes the product platform and the derivative sizes are its
product variants. Their approach for designing size ranges is as follows (Pahl and Beitz, 1996):
• Prepare the basic design for the range either from a new or existing product;
• Use similarity laws to determine the physical relationships between geometrically similar
product ranges;
• Determine appropriate “theoretical” steps sizes within the desired size range;
• Check the product size range against assembly layouts, checking any critical
dimensions; and
54
In the context of their approach, the method developed in this dissertation facilitates the
development of the basic design (i.e., the platform) and the sequential designs (i.e., derivative
products) simultaneously.
The concept of sizing leads into an area of product platform design that has received
little attention—product platforms that can be “scaled” or “stretched” into derivative products
for a product family (in combination to being upgraded/degraded through the addition/removal
of modules). The implications of design “stretching” and “scaling” within the context of
developing a family of products are discussed first in (Rothwell and Gardiner, 1988; 1990), see
Figure 2.9. Rothwell and Gardiner (1988) use the term “robust designs” to refer to designs that
have sufficient inherent design flexibility or “technological slack” to enable them to evolve into a
design family of variants that meet a variety of changing market requirements by “uprating,”
“rerating,” and “derating” a platform design as shown in Figure 2.9. The process of developing
these designs is shown in Figure 2.9 and consists of three phases, namely, composite,
55
Figure 2.9 Robust Designs (from Rothwell and Gardiner, 1990)
Rothwell and Gardiner (1990) provide several examples of successful robust designs
and discuss how they “allow for change because essentially they contain the basis for not just a
single product but rather a whole product family of uprated or derated variants.” Consider the
Rolls Royce RB211 engine family illustrated in Figure 2.10. The original RB211 consisted of
seven modules which could be easily upgrade or scaled down to improve or derate the engine.
For example, by replacing the large front low pressure fan with a scaled down fan, the lower
thrust, derated, 535C engine was derived. Further improvements are made by scaling different
components of the engine to improve fuel consumption while increasing thrusts. Rolls Royce
takes advantage of similar stretching and scaling in its RTM322 engine which was discussed
56
Figure 2.10 Rolls Royce RB211 Engine Family
(from Rothwell and Gardiner, 1990)
Several other products also have benefited from platform scaling. For example, Black
& Decker scales the stack length of their universal motor platform to vary the output power of
the motor for a wide variety of applications, see Section 1.1.1 and (Lehnerd, 1987). The
Boeing 747-200, 747-300, and 747-400 are scaled derivatives of the Boeing 747 (Rothwell
and Gardiner, 1990). Many automobile manufacturers also scale their passenger car platforms
to offer, for example, two-door coupes, two- and four-door sedans, three- and five-door
hatchbacks, and maybe a wagon which are all derived from the same platform (Rothwell and
Gardiner, 1990). Honda, for instance, is taking full advantage of platform scaling to compete in
today’s global market by developing two scaled versions of their Accord for the U.S. and
Japanese markets from one platform (Naughton, et al., 1997). Siddique, et al. (1998)
document efforts at Ford to improve the commonality of their product platforms to capitalize on
57
Despite the apparent advantages of scalable product platforms, a formal approach for
the design and synthesis of stretchable and scalable platforms does not exist. Rothwell and
Gardiner state that it has “become increasingly possible to develop a robust design which has
the deliberate designed-in capability of being stretched;” however, they only offer the process
shown in Figure 2.9 as a guide to designers. Consequently, developing a method to model and
design scalable product platforms around which a family of products can be developed through
scaled derivatives of the product platform is the principal objective in this dissertation. In an
effort to realize such a method, an extension of robust design principles is offered in the next
section, providing a means to turn Rothwell and Gardiner’s idea of “robust design” for scalable
proposed by Taguchi, is to improve the quality of a product or process by not only striving to
achieve performance targets but also by minimizing performance variation. Taguchi’s methods
have been widely used in industry (see, e.g., Byrne and Taguchi, 1987; Phadke, 1989) for
parameter and tolerance design. Reviews of such applications can be found in, e.g., (Nair,
1992).
factors can be represented with a P-diagram as shown in Figure 2.11, where P represents either
58
product or process (Phadke, 1989). The three types of factors which serve as inputs to the P-
• Control Factors (x) – parameters that can be specified freely by a designer; the
settings for the control factors are selected to minimize the effects of noise factors on the
response y.
• Noise Factors (z) – parameters not under a designer’s control or whose settings are
difficult or expensive to control. Noise factors cause the response, y, to deviate from
their target and lead to quality loss through performance variation. Noise factors may
include system wear, variations in the operating environment, uncertain design
parameters, and economic uncertainties.
• Signal factors (M) – parameters set by the designer to express the intended value for
the response of the product; signal factors are those factors used to adjust the mean of
the response but which no effect on the variation of the response.
Control Factors
x
?z , ?z
z
Noise Factors
This robust design terminology is used to classify design parameters and responses and
to identify sources of variability. The objective in robust design is to reduce the variation of
59
system performance caused by uncertain design parameters, thereby reducing system sensitivity.
Variations in noise factors, shown in Figure 2.11 as normally distributed with mean ? z and
In an effort to generalize robust design for product design, Chen, et al. (1996a) develop
Type I - Robust design associated with the minimization of the deviation of performance
caused by the deviation of noise factors (uncontrollable parameters).
Type II - Robust design associated with the minimization of the deviation of performance
caused by the deviation of control factors (design variables).
The idea behind the two major types of robust design applications are illustrated in Figure 2.12.
As indicated by the P-diagrams for Type I and Type II applications, the deviation of the
response is caused by variations in the noise factor, z, the uncontrollable parameter in Type I
applications. Type II is different from Type I in that its input does not include a noise factor.
The variation in performance is caused solely by variations in control factors or design variables
The traditional Taguchi robust design method is of Type I as shown in the top half of
Figure 2.12. A designer adjusts control factors, x, to dampen the variations caused by the
noise factor, z. The two curves represent the performance variation as a function of noise factor
60
performance as closely as possible to the target, M, the designs at both levels are acceptable
because their means are the target M. However, introducing robustness, when x = a, the
performance varies significantly with the deviation of noise factor, z; however, when x = b, the
solution because x = b dampens the effect of the noise factors more than when x = a.
Type I
y Control Factor
x = Control Factors
Objective or x=a
Deviation
M = Signal Factors
Function
y = Response
x= b
Type II
Objective or
Deviation
x = Control Factors Function
Robust
Solution
y = Response
M = Signal Factors
²x ²x
Optimal
Solution
M
x
x µ robust
opt Design
(x = a) (x = b) Variable
61
The concept behind Type II robust design is represented in the lower half of Figure
2.12. For purposes of illustration, assume that performance is a function of only one variable, x.
In general, for this type of robust design, to reduce the variation of the response caused by the
deviations of design variables, a designer is interested in the flat part of a curve near the
performance target instead of seeking the peak or optimum value. If the objective is to move
the performance function towards target M and if a robust design is not sought, then the point x
= a is chosen. However, for a robust design, x = b is a better choice. This is because if the
design variable varies within ±?x of its mean, the resulting variation of response of the design at
x = b is much smaller than that at x = a, while the means of the two responses are essentially
equal. Implementation of these two types of robust design are discussed in the next section.
Taguchi’s robust design method to systematically vary and test the different levels of each of the
control factors. Taguchi advocates the use of an inner-array and outer-array approach to
implement robust design (cf., e.g., Byrne and Taguchi, 1987). The inner-array consists of an
OA which contains the control factor settings; the outer-array consists of the OA which contains
the noise factors and their settings which are under investigation. The combination of the inner-
array and outer-array constitutes the product array. The product array is used to test various
combinations of the control factor settings systematically over all combinations of noise factors
after which the mean response and standard deviation may be approximated for each run using
the equations:
• Response mean: ȳ = (1/n) Σ_{i=1}^{n} y_i [2.3]
• Standard deviation: S = √[ Σ_{i=1}^{n} (y_i − ȳ)² / (n − 1) ] [2.4]
Preferred parameter values then can be determined through analysis of the signal-to-noise (SN)
ratio; factor levels that maximize the appropriate SN ratio are optimal. There are three
common SN ratios:
• Nominal the best (for reducing variability around a target):
SN_T = 10 log( ȳ² / S² ) [2.5]
• Smaller the better (for making the system response as small as possible):
SN_S = −10 log[ (1/n) Σ_{i=1}^{n} y_i² ] [2.6]
• Larger the better (for making the system response as large as possible):
SN_L = −10 log[ (1/n) Σ_{i=1}^{n} 1/y_i² ] [2.7]
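By way of illustration, the response statistics and the three SN ratios above can be sketched in Python as follows. This is a minimal sketch; the helper names are hypothetical, and base-10 logarithms are assumed, per Taguchi's convention:

```python
import math

def response_stats(y):
    """Mean and sample standard deviation (Eqs. 2.3-2.4) for one run."""
    n = len(y)
    ybar = sum(y) / n
    s = math.sqrt(sum((yi - ybar) ** 2 for yi in y) / (n - 1))
    return ybar, s

def sn_nominal(y):
    """Nominal-the-best SN ratio: 10 log10(ybar^2 / S^2)."""
    ybar, s = response_stats(y)
    return 10 * math.log10(ybar ** 2 / s ** 2)

def sn_smaller(y):
    """Smaller-the-better SN ratio: -10 log10 of the mean of y_i^2."""
    return -10 * math.log10(sum(yi ** 2 for yi in y) / len(y))

def sn_larger(y):
    """Larger-the-better SN ratio: -10 log10 of the mean of 1/y_i^2."""
    return -10 * math.log10(sum(1 / yi ** 2 for yi in y) / len(y))
```

Each function summarizes the replicated responses of a single experimental run; the run whose factor levels maximize the appropriate SN ratio is preferred.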
Once all of the SN ratios have been computed for each run of an experiment, there are
two common options for analysis: Analysis of Variance (ANOVA) and a graphical approach.
ANOVA can be used to determine the statistically significant factors and the appropriate setting
for each. In the graphical approach, the SN ratios and average responses are plotted for each
factor against its levels. The graphs then are examined to “pick the winner,” i.e., pick the factor
levels which (1) best maximize SN and (2) bring the mean on target (or maximize or minimize the response, as appropriate).
There are many criticisms of Taguchi’s implementation of robust design through the
inner and outer array approach: it requires too many experiments, the analysis is statistically
questionable because of the use of orthogonal arrays, it does not accommodate constraints, and
the responses should be modeled directly instead of the SN ratios (see, e.g., Montgomery,
1991; Nair, 1992; Otto and Antonsson, 1993; Shoemaker, et al., 1991; Tribus and Szonyi,
1989; Tsui, 1992). Consequently, many variations of the Taguchi method have been proposed
and developed; a review of numerous robust design optimization methods can be found in (Otto
and Antonsson, 1993; Simpson, et al., 1997a; Simpson, et al., 1997b; Su and Renaud, 1996).
In the Robust Concept Exploration Method (RCEM), response surface models are created and
used to approximate the design space, replacing the
computer analysis code or simulation routine used to model the system. The major elements of
the response surface model approach for robust design applications are as follows:
• combining control and noise factors in a single array instead of using Taguchi's
inner- and outer-array approach,
Instead of using Taguchi’s orthogonal array as the combined array for experiments, central
composite designs are employed in the RCEM to fit second-order response surface models for
integration with Taguchi's robust design. The response surface model postulates a single, formal model of the form:
ŷ = f(x, z) [2.8]
where ŷ is the estimated response and x and z represent the settings of the control and noise
variables, respectively. In Equation 2.8, it is assumed that the noise variables are independent.
From the response surface model, it is possible to estimate the mean and variance of the
response. For Type I applications in which the deviations of noise factors are the source of
variation:
• Mean of response: μ_ŷ = f(x, μ_z) [2.9]
• Variance of response: σ²_ŷ = Σ_{i=1}^{m} (∂f/∂z_i)² σ²_{z_i} [2.10]
where μ_z represents the mean values of the noise factors, m is the number of noise factors in the
response model, and σ_{z_i} is the standard deviation associated with each noise factor. In Type II
robust design, i.e., when the deviations of control factors are the source of variation, μ_z and σ_{z_i}
in Equations 2.9-2.10 are replaced by the mean and deviation of the variable control factors. Using this
approach, robust design can be achieved by having separate goals for “bringing the mean on
target” and “minimizing the deviation” within a compromise DSP (cf., Chen, et al., 1996b).
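The mean and variance estimates of Equations 2.9-2.10 can be sketched numerically as follows. This is a minimal illustration using central finite differences in place of analytic partial derivatives; the function name and the signature f(x, z) are hypothetical stand-ins for a fitted response surface model:

```python
def taylor_moments(f, x, z_mean, z_std, h=1e-6):
    """First-order Taylor estimates of response mean and variance (Eqs. 2.9-2.10).

    f      -- response surface model, called as f(x, z) (hypothetical signature)
    x      -- control factor settings
    z_mean -- mean values of the noise factors
    z_std  -- standard deviation of each noise factor (assumed independent)
    """
    mean = f(x, z_mean)  # Eq. 2.9: evaluate the model at the noise-factor means
    var = 0.0
    for i, (mz, sz) in enumerate(zip(z_mean, z_std)):
        zp = list(z_mean); zp[i] = mz + h
        zm = list(z_mean); zm[i] = mz - h
        dfdz = (f(x, zp) - f(x, zm)) / (2 * h)  # central difference for df/dz_i
        var += (dfdz * sz) ** 2                 # Eq. 2.10: sum of squared terms
    return mean, var
```

The returned mean and variance can then serve the two separate goals of "bringing the mean on target" and "minimizing the deviation" in a compromise DSP.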
When satisfying the design requirements and reducing the variation of system
performance are equally important, it is effective to model the two aspects of robust design
as separate goals in the compromise DSP. For instance, when designing a power plant, it may
be required to bring the power output as close as possible to its target value while at the same
time, reduce the variation of the system performance so that the power output remains constant
during operation. Moreover, setting an overall design requirement at a specific value during the
early stages of design may sometimes be crucial because a small variation may require significant
changes in other design requirements or incur substantial costs in order to compensate for it.
However, modeling the two aspects of robust design as two separate goals may not be an
effective approach when satisfying a range of design requirements is the major concern.
In Figure 2.13, the quality distributions of two different designs (I and II) are illustrated.
Both designs have the same mean value but different deviations. If the two aspects of robust
design are modeled as separate goals, obviously the design with the least deviation (Design I)
would be chosen because both designs have the same performance mean. However, in this
particular situation where the mean of the quality performance lies outside the range of
requirements, a smaller fraction of the performance falls inside the upper and lower requirement
limits (URL and LRL, respectively) with a thinner bell shape, i.e., the shadowed area which is
enclosed by A, B, and C is smaller than the area enclosed by A', B' and C. This is acceptable
in manufacturing when the process itself can be manually shifted to bring the mean back on
target, but when designing a system to accommodate noise, this option is not always
available.
[Figure 2.13: Quality distributions of Design I and Design II, which share the same mean μ but have different deviations; the areas A-B-C and A'-B'-C mark the fractions of each distribution meeting the ranged requirements.]
Design capability indices have been developed with exactly this in mind. They are
based on process capability indices from statistical process control and are applied in the same
spirit: a design capability index (see Figure 2.14) is computed to assess the capability of a family of
designs to satisfy a ranged set of design requirements (Chen, et al., 1996c; Simpson, et al.,
1997a).
[Figure 2.14: Graphical interpretation of the design capability indices Cdl, Cdu, and Cdk for the smaller-is-better (Cdk = Cdu), nominal-is-better, and larger-is-better (Cdk = Cdl) cases.]
Assume that the system performance is normally distributed with mean, μ, and standard
deviation, σ. The design capability indices Cdl, Cdu, and Cdk measure the extent to which a
family of designs satisfies a ranged set of design requirements as specified by upper and lower
requirement limits (URL and LRL, respectively). As shown in the figure, when nominal is better,
i.e., upper and lower design requirement limits are given, finding a family of designs with Cdk ≥ 1
satisfies the design requirements. In this scenario, Cdk is computed using Equation 2.11 as the
minimum of Cdl and Cdu:
C_dl = (μ̂ − LRL)/(3σ̂);  C_du = (URL − μ̂)/(3σ̂);  C_dk = min{C_dl, C_du} [2.11]
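The indices of Equation 2.11 are straightforward to compute; the following sketch (with a hypothetical function name and the convention that an omitted limit yields an infinite index) covers the smaller-is-better, larger-is-better, and nominal-is-better cases:

```python
def design_capability(mu, sigma, lrl=None, url=None):
    """Design capability indices Cdl, Cdu, Cdk per Eq. 2.11 (normal assumption).

    Pass only url for smaller-is-better, only lrl for larger-is-better,
    or both limits for nominal-is-better.
    """
    cdl = (mu - lrl) / (3 * sigma) if lrl is not None else float("inf")
    cdu = (url - mu) / (3 * sigma) if url is not None else float("inf")
    cdk = min(cdl, cdu)  # the family is capable when cdk >= 1
    return cdl, cdu, cdk
```

A family of designs with Cdk ≥ 1 is deemed capable of satisfying the ranged requirement.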
When smaller is better (e.g., “the motors should weigh less than 0.5 kg”), designs with
Cdk ≥ 1 are capable of satisfying the requirement. In this case, Cdk = Cdu as shown in Figure
2.14, and designs with Cdu < 1 do not meet this requirement because a portion of the
distribution falls outside of the URL. Similarly, when larger is better (e.g., “the efficiency of
these motors should be 30% or better”), designs with Cdk ≥ 1 are capable of meeting this
requirement; in this case, Cdk = Cdl.
There are some assumptions associated with the use of Cdk. For example, Cdk = 1
implies that only 99.73% of the designs conform to requirements, assuming that the system
performance is normally distributed. However, the type of distribution of system performance
depends on the actual system response and the statistical distribution of each design variable or
uncertainty parameter. When the system function is complex, it may be difficult to perform a
judicious evaluation to determine the resulting distribution. It is assumed here that the noise factors
deviate by ±3σ_z (as is typical in a six sigma approach to quality) around their nominal value μ_z,
and that each system response varies by ±3σ_y around its mean value, μ_y, which can be
calculated by:
μ_y = y(μ_x) [2.12]
The standard deviation, σ_y, is approximated using a first-order Taylor series expansion
(assuming the deviations are small):
σ̂²_y = Σ_{i=1}^{m} (∂y/∂z_i)² σ²_{z_i} [2.13]
Modifications to the process capability indices for different variances have been
proposed (see, e.g., Johnson, et al., 1992; Ng and Tsui, 1992; Rodriguez, 1992), and design
capability indices could be modified similarly. For example, if a uniform distribution is used for
each response instead of a normal distribution, then Cdk, Cdu, and Cdl become as follows:
C_dl = (μ̂ − LRL)/(3σ̂);  C_du = (URL − μ̂)/(3σ̂);  C_dk = min{C_dl, C_du} [2.14]
σ̂² = (b − a)²/12 [2.15]
where a and b are the lower and upper limits of the range of y.
The resulting formulation is shown in Figure 2.15. Design capability indices can be used for constraints and/or goals;
in all three cases—smaller is better, nominal is better, and larger is better—if a design requirement is
a wish, then making Cdk as close to one as possible is a goal in the compromise DSP. When a
requirement is a demand, then Cdk ≥ 1 is taken as a constraint. Note that when a deviation
function solely includes design capability indices, the negative deviation variable, di−, is always minimized.
Given:
• Functions y(x), including those ranged design requirements which are constraints, gi(x),
and those which are objectives, Ai(x)
• Deviations of the uncontrollable variables, Δz
• Target upper and lower design requirement limits, URLi and LRLi
Find:
• Design variables, x
Satisfy:
• Constraints: Cdk-constraints ≥ 1 (or use worst-case analysis)
• Goals: Cdk-objectives + di− − di+ = 1
• Bounds on the design variables
• di−, di+ ≥ 0; di− · di+ = 0
Minimize:
• Deviation Function: Z = [f1(di−, ..., di+), ..., fk(di−, ..., di+)]
Figure 2.15 Compromise DSP Formulation Using Design Capability Indices
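The goal formulation in the compromise DSP can be illustrated with the deviation variables alone. This is a sketch of the goal relation only, not a full solver; the function name is hypothetical:

```python
def cdk_goal_deviations(cdk, target=1.0):
    """Deviation variables for the goal 'Cdk + d- - d+ = target' in a
    compromise DSP (a sketch of the goal relation, not a full solver).

    d- (underachievement) and d+ (overachievement) are nonnegative and
    cannot both be positive, so d- * d+ = 0 holds by construction.
    """
    d_minus = max(0.0, target - cdk)
    d_plus = max(0.0, cdk - target)
    return d_minus, d_plus
```

When the deviation function contains only design capability indices, minimizing d− drives Cdk toward (or beyond) one.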
What remains is to develop an extension for product family design, specifically for scalable product platforms, to answer the following question.
Q1.2. How can robust design principles be used to facilitate the design of a common product platform?
Extensions of robust design for product family design are discussed in the next section.
2.3.2 Robust Design for Product Family Design: Scale Factors
There have been two known allusions to using robust design in product family design.
First, Lucas (1994) describes a way that the results of a robust design experiment can be used
to identify the need for product differentiation. When large effects are present in the system,
different product types can be sent to customers having different features as opposed to
designing one product which is robust over the entire range of effects. He states that this is
common practice in the chemical industry where, for example, different polymer viscosities are
desired by different customers and better results often are obtained by customizing the product
for its specific environment rather than delivering a single robust product.
Second, Chang, et al. (1994) and Chang and Ward (1995) introduce the notion of
“conceptual robustness,” which is pertinent to this research. The term “conceptual robustness” is
developed by Chang and his colleagues for mathematically modeling and computationally
supporting concurrent design decisions. By treating variations in the design proposed by other
members of the development team as “conceptual noise,” robust design principles can be used
to make “conceptually robust” decisions which are robust against these variations (Chang, et
al., 1994).
robust” design of a two-axis CNC milling machine is used as an illustrative example. In (Chang
and Ward, 1995), this idea is applied to modular design which is a “function-oriented design
that can be integrated into different systems for the same functional purpose without (or with
minor) modifications.” The design of an air conditioning system for ten different automobiles is presented as an example.
It is this idea of a “conceptual noise factor” that enables the utilization of robust design in
the context of product family design, particularly in the design of a scalable product platform.
By identifying an appropriate “scale factor” for a scalable product platform, robust design
principles can be used to minimize the sensitivity of the product platform to variations in
a scale factor. In this regard, a “conceptually robust” product platform can be realized which
has minimum sensitivity to variations in the scale factor, realizing a robust product family. For the purposes of this dissertation, a scale factor is defined as follows:
• Scale factor - factor around which a product platform can be “scaled” or “stretched”
to realize derivative products within a product family.
In essence, a scale factor is a noise factor within a scalable product family or, to borrow
terminology from Chang, et al. (1994), a “conceptual noise factor” around which a
“conceptually robust” product platform can be developed for a product family. Examples of
scale factors include the stack length in a motor, as in the Black & Decker universal motor
example (Lehnerd, 1987), the number of passengers on an aircraft, as in the Boeing 747 family
(Rothwell and Gardiner, 1990), or the number of compressor stages in an aircraft engine, as in
the Rolls Royce RTM322 example (Rothwell and Gardiner, 1990). Scale factors may be either
discrete or continuous; however, continuous scale factors are investigated primarily in this
dissertation. The specific relationship between different types of scale factors and different
platform leveraging strategies is discussed later.
Given the definition for a scale factor, a third type of robust design now can be identified
for product family design, complementing the two types of robust design discussed previously:
Type III - Robust design associated with minimizing the sensitivity of a product platform to
variations in a scale factor.
As defined, Type III robust design is nearly identical to Type I robust design as shown in Figure
2.16. Notice that the P-diagram on the left of the figure has been modified to accommodate
scale factors because essentially they are treated as noise factors in the product platform design
process.
[Figure: Type III P-diagram (x = control factors, M = signal factors, y = response, with the platform design variable treated as a noise factor) and a response plot contrasting the settings x = a and x = b.]
Figure 2.16 Type III Robust Design: Scale Factors for Product Platforms
It should be noted that these scale factors are not the same “scaling/leveling factors”
shown in the P-diagram in (Taguchi and Phadke, 1986) or (Suh, 1990) which are used to scale
a response to achieve a desired value. Using the same diagram shown in Figure 2.12 for the
Type I robust design, the idea behind Type III robust design is illustrated in the right hand side
of Figure 2.16. Given two possible settings (x = a and x = b) for one of the design variables, x,
which defines the platform, the setting x = b should be selected because it minimizes the
sensitivity of the platform to variations in the scale factor. If an appropriate scale factor can be
identified, then robust design can be used to minimize the sensitivity of the product platform to
changes in the scale factor. In this manner, a scalable product platform can be developed
and instantiated to realize a family of products. This raises the following question.
Q1.3. How can individual targets for product variants be aggregated and modeled for
product platform design?
Using the concept of a scale factor for a product platform, it is now possible to
aggregate the individual targets for product variants within a product family around an
appropriate mean and a standard deviation. Robust design principles then can be used to
“match” the mean and standard deviation of the product family with the desired mean and
standard deviation, using either of the two implementations of robust design described in
Section 2.3.1:
• creating separate goals for “bringing the mean on target” and “minimizing the deviation”
of the product platform for variations in the scale factor within a compromise DSP, or
• using design capability indices to assess the capability of a family of designs to satisfy a
ranged set of design requirements.
To demonstrate these implementations, the former approach is utilized in the universal electric
motor problem in Chapter 6; the latter is employed in the General Aviation aircraft example in
Chapter 7. The General Aviation aircraft example also makes use of metamodels to facilitate
the implementation of robust design and design capability indices and expedite the concept
exploration process. Metamodels are employed to create approximations of the mean and
variation of a response in the presence of noise and to provide a fast approximation for the
actual analysis (i.e., computer code) during the design process. The
general approach to response surface modeling is shown in Figure 2.17. In statistical terms,
design variables are factors, and design objectives are responses; the factors and responses to
be investigated for a particular design problem provide the input for the approach of Figure
2.17, and the solutions (improved or robust) are the output. To identify these solutions, this
approach includes three sequential stages: screening, model building, and model exercising.
The first step (screening) is employed only if the problem includes a large number of
factors (usually greater than 10); screening experiments are used to reduce the set of factors to
those that are most important to the response(s) being investigated. Statistical experimentation
is used to define the appropriate design analyses which must be run to evaluate the desired
effects of the factors. Often two level fractional factorial designs or Plackett-Burman designs
are used for screening (cf., Myers and Montgomery, 1995), and only main (linear) effects of the factors are estimated.
[Figure 2.17: General response surface modeling approach. Given factors and responses, a screening experiment is run if the number of factors is large, reducing the number of factors; modeling experiments are then run to build a predictive model ŷ; finally, the model is exercised by searching the design space for solutions (improved or robust).]
In the second stage (model building) of the approach in Figure 2.17, response surface
models are created to replace computationally expensive analyses and facilitate fast analysis and
exploration of the design space. If little curvature appears to exist, a two-level fractional
factorial design is used to fit the first-order model:
y = β_0 + Σ_{i=1}^{k} β_i x_i [2.16]
Otherwise, the second-order model:
y = β_0 + Σ_{i=1}^{k} β_i x_i + Σ_{i=1}^{k} β_ii x_i² + ΣΣ_{i<j} β_ij x_i x_j [2.17]
is commonly used. Among the various types of experimental design for fitting a second-order
response surface model, the central composite design (CCD) is probably the most widely used
experimental design for regularly shaped (spherical or cuboidal) design spaces (cf., Myers and
Montgomery, 1995). In the case of irregularly shaped design spaces, D-optimal designs have
been successfully employed to build second order response surface models (see, e.g., Giunta, et
al., 1994).
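The least-squares fit of the second-order model of Equation 2.17 can be sketched for two factors as follows. The function names are hypothetical, and the regression-matrix construction is spelled out for k = 2:

```python
import numpy as np

def fit_quadratic_rs(X, y):
    """Least-squares fit of the second-order model of Eq. 2.17 for k = 2 factors.

    Columns of the regression matrix: 1, x1, x2, x1^2, x2^2, x1*x2.
    """
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict_quadratic_rs(beta, x1, x2):
    """Evaluate the fitted second-order model at (x1, x2)."""
    return beta @ np.array([1.0, x1, x2, x1**2, x2**2, x1 * x2])
```

Sampling a known quadratic surface (e.g., one of those pictured in Figure 2.19) at a 3 × 3 grid and fitting recovers the coefficients exactly, since the model class contains the true function.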
If noise factors are included for robust design, the mean and variance of each response
must be estimated, and predictive metamodels for both are constructed. As discussed in
(Koch, et al., 1998), there are essentially three approaches which can be employed to construct
these metamodels. In the combined array approach, control and noise factors are included in a
single array from which a response surface model is built (see, e.g., Chen, et al., 1996b; Shoemaker, et al.,
1991). The mean value of a response is estimated by evaluating the response surface at
the mean of the noise factor, and the variance is estimated using a Taylor series
approximation. This is the approach currently employed in the RCEM as described
previously in Section 2.3.1.
3. Product array approach: Uses the inner- and outer-array approach advocated by
Taguchi (see, e.g., Montgomery, 1991; Phadke, 1989) to develop separate
approximations for the mean and variance of each response. The inner-array prescribes
settings for the control factors, and the outer-array prescribes settings for the noise
factors. This experimentation strategy leads to multiple response values for each set of
control factor settings, from which a response mean and variance can be computed and
metamodels constructed.
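The product array strategy above can be sketched as follows; the function and the simulate(control, noise) signature are hypothetical stand-ins for the analysis code:

```python
from statistics import mean, stdev

def product_array_stats(inner, outer, simulate):
    """Taguchi-style product array: evaluate every control setting (inner array)
    at every noise setting (outer array) and summarize each run with the
    response mean (Eq. 2.3) and sample standard deviation (Eq. 2.4).

    simulate(control, noise) stands in for the analysis code (hypothetical).
    Returns {control setting: (mean, std)} over the outer-array replicates.
    """
    stats = {}
    for control in inner:
        ys = [simulate(control, noise) for noise in outer]
        stats[control] = (mean(ys), stdev(ys))
    return stats
```

Separate metamodels can then be fit to the per-run means and standard deviations, as in the product array approach described above.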
Of the three approaches, the product array approach typically yields the most accurate
approximations because the metamodels are built directly from the original analysis code rather
than derived from an intermediate model. In general, building approximations of computer
analysis and simulation codes involves the following: (a) choosing
an experimental design to sample the computer code, (b) choosing a model to represent the
data, and (c) fitting the model to the observed data. There are a variety of options for each of
these steps as shown in Figure 2.18, and some of the more prevalent approximation techniques
have been identified. For example, response surface methodology usually employs central
composite designs, second-order polynomials, and least squares regression analysis. The
reader is referred to (Simpson, et al., 1997b) for a recent review of numerous applications in
mechanical and aerospace engineering of the metamodeling techniques of Figure 2.18, with
particular emphasis on response surface methodology, neural networks, inductive learning, and
kriging.
By far the most popular technique for building metamodels these days is the response
surface approach which typically employs second-order polynomial models fit using least
squares regression techniques (Myers and Montgomery, 1995). These response surface
models replace the existing analysis code while providing the following:
• fast analysis tools for optimization and exploration of the design space.
[Figure 2.18: Metamodeling involves choices of experimental design, model, and model fitting; combinations of these choices define the prevalent approximation techniques.]
An added advantage of response surfaces is that they can smooth the data in the case of
numerical noise which may hinder the performance of some gradient-based optimizers (cf.,
Giunta, et al., 1994). This “smoothing” effect is both good and bad, depending on the problem.
Su and Renaud (1996) present an example where a second-order response surface smoothes
out the variability in a response so that the robust solution is lost in the approximating function; a
“flat region” does not exist in a second-order response surface, only an inflection point. Su and
Renaud’s example is investigated in more detail in Section 4.1 wherein the kriging process is
demonstrated step-by-step as it applies to their example to familiarize the reader with kriging.
In the meantime, additional limitations of response surfaces are discussed in the next section,
81
providing motivation for investigating alternative metamodeling techniques for use in engineering
design.
Response surfaces typically are second-order polynomial models which make them
easy to use and implement; however, they have limited capabilities to model accurately non-
linear functions of arbitrary shape. Some two-variable examples of the types of surfaces that a
second-order response surface can model are illustrated in Figure 2.19. Obviously, higher-
order response surfaces can be used to model a non-linear design space; however, instabilities
may arise (cf., Barton, 1992), or it may be too difficult to take a sufficient number of sample
points in order to estimate all of the coefficients in the polynomial equation, particularly in high
dimensions. Hence, many researchers advocate the use of a sequential response surface
modeling approach using move limits (see, e.g., Toropov, et al., 1996) or a trust region
approach (see, e.g., Rodriguez, et al., 1997). More generally, the Concurrent Sub-Space
Optimization approach uses data generated during subspace optimizations to develop response
surface approximations of the design space which form the basis of the subspace coordination
procedure (Renaud and Gabriele, 1994; Renaud and Gabriele, 1991;
Wujek, et al., 1995). The Hierarchical and Interactive Decision Refinement methodology uses
statistical regression and other metamodeling techniques to recursively decompose the design
space into subregions and fit each region with a separate model during design space refinement
(Reddy, 1996). Finally, the Model Management Framework (Booker, et al., 1995; Dennis and
Torczon, 1995) is being developed collaboratively by researchers at Boeing, IBM, and Rice to
manage approximation models in optimization.
Many of the previously mentioned sequential approaches are being developed for
computationally expensive analyses; however, when a design space is highly non-linear
in nature, it is often difficult to isolate a small region of good design which can be accurately
represented by a low-order polynomial response surface model. Koch, et al. (1997) discuss
the difficulties encountered when screening large variable problems with multiple objectives as
part of the response surface approach. Barton (1992) states that the response region of interest
will never be reduced to a “small neighborhood” which is good for all objectives during
multiobjective design. Consequently, there is a need for metamodeling techniques which have
sufficient flexibility to build accurate global approximations of the design space and which are
suitable for modeling computer experiments, which are typically deterministic.
[Figure 2.19: Example surfaces that a two-variable second-order response surface can model: y = 80 + 4x1 + 8x2 − 4x1² − 12x2² − 12x1x2; y = 80 + 4x1 + 8x2 − 3x1² − 12x2² − 12x1x2; y = 80 − 4x1 + 12x2 − 3x1² − 12x2² − 12x1x2; y = 80 + 4x1 + 8x2 − 2x1² − 12x2² − 12x1x2.]
The approach investigated in this dissertation is called kriging, and it is introduced in the
next section; this is followed by a discussion of the space filling experimental designs which can
be used to sample the design space in Section 2.4.3. These two sections lay the foundation for
the work in Chapters 4 and 5 wherein Hypotheses 2 and 3 are tested explicitly to determine the
utility of kriging and space filling experimental designs for building metamodels.
2.4.2 The Kriging Approach to Metamodeling
Kriging has its roots in the field of geostatistics—a hybrid discipline of mining
engineering, geology, mathematics, and statistics (cf., Cressie, 1993)—and is useful for
predicting temporally and spatially correlated data. Kriging is named after D. G. Krige, a South
African mining engineer who, in the 1950s, developed empirical methods for determining true
ore grade distributions from distributions based on sampled ore grades (Matheron, 1963).
Several texts which describe kriging and its usefulness for predicting spatially correlated data
(see, e.g., Cressie, 1993) and mining (see, e.g., Journel and Huijbregts, 1978) exist. These
metamodels are extremely flexible due to the wide range of correlation functions which can be
chosen for building the metamodel. Furthermore, depending on the choice of the correlation
function, the metamodel can either “honor the data,” providing an exact interpolation of the data,
or “smooth the data,” providing an inexact interpolation (Cressie, 1993). In this work, as in
most applications of kriging, the concern is solely on spatial prediction; it is assumed that the
responses of the computer analyses are spatially correlated.
These days, kriging goes by a variety of names including DACE (Design and Analysis of
Computer Experiments) modeling—the title of the inaugural paper by Sacks, et al. (1989)—and
spatial correlation metamodeling (see, e.g., Barton, 1994). There are also several types of
kriging (cf., Cressie, 1993): ordinary kriging, universal kriging, lognormal kriging, and trans-
Gaussian kriging. In this dissertation, ordinary kriging is employed, following the work in, e.g.,
(Booker, et al., 1995; Koehler and Owen, 1996), and only the term kriging is used.
Unlike response surfaces, however, kriging models have found limited use in engineering
design applications since introduction into the literature by Sacks, et al. (1989). Consequently, only a handful of applications have been reported:
• Giunta (1997) and Giunta, et al. (1998) perform a preliminary investigation into the use
of kriging for the multidisciplinary design optimization of a High Speed Civil Transport
aircraft.
• Sasena (1998) compares and contrasts kriging and smoothing splines for approximating
noisy data.
• Schonlau, et al. (1997) use a global/local search algorithm based on kriging for shape
optimization of an automobile piston engine.
• Osio and Amon (1996) develop a multistage numerical optimization strategy based on
kriging which they demonstrate on the thermal design of an embedded electronic package
with five design variables.
• Booker (1996) and Booker, et al. (1996) use a kriging approach to study the
aeroelastic and dynamic response of a helicopter rotor during structural design.
Some researchers have also employed kriging-based strategies for numerical optimization (see,
e.g., Cox and John, 1995; Trosset and Torczon, 1997). A look at the mathematics of kriging is
offered next.
Mathematics of Kriging
Kriging postulates a combination of a polynomial model and departures of the following form:
y(x) = f(x) + Z(x) [2.18]
where y(x) is the unknown function of interest, f(x) is a known polynomial function of x, and
Z(x) is the realization of a stochastic process with mean zero, variance σ², and non-zero
covariance. The f(x) term in Equation 2.18 is similar to the polynomial model in a response
surface, providing a “global” model of the design space. In many cases f(x) is simply taken to
be a constant term β (cf., Koehler and Owen, 1996; Sacks, et al., 1989; Welch, et al., 1990).
Only kriging models with constant underlying global models are investigated in this work as well.
While f(x) “globally” approximates the design space, Z(x) creates “localized” deviations
so that the kriging model interpolates the ns sampled data points. The covariance matrix of Z(x)
is given by:
Cov[Z(xⁱ), Z(xʲ)] = σ² R(xⁱ, xʲ) [2.19]
where R is the correlation matrix, and R(xⁱ,xʲ) is the correlation function between any two of the
ns sampled data points xi and xj. R is a ns x ns symmetric, positive definite matrix with ones
along the diagonal. The correlation function R(xi,xj) is specified by the user.
In this work, five different correlation functions are examined for use in the kriging
model, see Table 2.1. In all of the correlation functions listed in Table 2.1, ndv is the number of
design variables, θ_k are the unknown correlation parameters used to fit the model, and d_k = x_kⁱ −
x_kʲ is the distance between the kth components of sample points xⁱ and xʲ. The
correlation functions of Equations 2.20 and 2.21 are from (Sacks, et al., 1989); the remaining
correlation functions are taken from the kriging literature.
These five correlation functions are chosen primarily because of the frequency with
which they appear in the literature; the Gaussian correlation function, Equation 2.20, is the most
popular one in use. Correlation functions with multiple parameters per dimension exist;
however, correlation functions with only one parameter per dimension are considered in this
dissertation to facilitate finding the maximum likelihood estimates (MLEs) or “best guess” of the
θ_k used to fit the model. As mentioned in Section 1.3.2, one of the contributions in this work is
to study the effects of these five different correlation functions on the accuracy of a kriging
Table 2.1 Summary of Correlation Functions
Once a correlation function has been selected, predicted estimates, ŷ(x), of the response at
untried values of x are given by:
ŷ = β̂ + rᵀ(x) R⁻¹ (y − f β̂) [2.25]
where y is the column vector of length ns (number of sample points) which contains the values of
the response at each sample point, and f is a column vector of length ns which is filled with ones
when f(x) in Equation 2.18 is taken as a constant. In Equation 2.25, rᵀ(x) is the correlation
vector of length ns between an untried x and the sampled data points {x¹, x², ..., xⁿˢ} and is
expressed as:
rᵀ(x) = [R(x, x¹), R(x, x²), ..., R(x, xⁿˢ)] [2.26]
Finally, the β̂ in Equation 2.25 is estimated using Equation 2.27:
β̂ = (fᵀ R⁻¹ f)⁻¹ fᵀ R⁻¹ y [2.27]
When f(x) is assumed to be a constant, then β̂ is a scalar which simplifies the calculation of Equation 2.27 considerably.
The estimate of the variance, σ̂², of the underlying global model (not the variance in the
observed data) is:
σ̂² = (y − f β̂)ᵀ R⁻¹ (y − f β̂) / ns [2.28]
where f is again a column vector of ones because f(x) is assumed to be a constant. The
maximum likelihood estimates (i.e., “best guesses”) for the θ_k used to fit the model are found by
maximizing (for θ_k > 0):
−[ns ln(σ̂²) + ln|R|] / 2 [2.29]
Both σ̂² and |R| are functions of θ_k. While any values for the θ_k create an interpolative
approximation model, the “best” kriging model is found by solving the k-dimensional
unconstrained nonlinear optimization problem given by Equation 2.29; this process is discussed
further in the next section. It is worth noting that in some cases using a single correlation
parameter gives sufficiently good results (see, e.g., Booker, et al., 1995; Osio and Amon, 1996;
Sacks, et al., 1989). In this work, however, a unique θ value for each dimension always is
considered based on past difficulties with scaling the design space to [0,1]k during the model
fitting process. The algorithms used in this dissertation to build and predict with a kriging model are described later.
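Equations 2.25 and 2.27, with the Gaussian correlation function of Equation 2.20, can be sketched as follows. This is a minimal illustration, not the algorithm used in this dissertation: θ is taken as given rather than estimated via Equation 2.29, and the function names are hypothetical:

```python
import numpy as np

def gaussian_corr(theta, X1, X2):
    """Gaussian correlation (Eq. 2.20): R = exp(-sum_k theta_k * d_k^2)."""
    d = X1[:, None, :] - X2[None, :, :]          # pairwise component differences
    return np.exp(-np.sum(theta * d**2, axis=2))

def fit_ordinary_kriging(X, y, theta):
    """Ordinary kriging with constant f(x) = beta (Eqs. 2.25 and 2.27).

    theta is assumed given; finding its MLE (Eq. 2.29) is a separate search.
    """
    R = gaussian_corr(theta, X, X)
    Rinv = np.linalg.inv(R)
    f = np.ones(len(y))                           # f(x) taken as a constant
    beta = (f @ Rinv @ y) / (f @ Rinv @ f)        # Eq. 2.27 with f a vector of ones
    def predict(x):
        r = gaussian_corr(theta, np.atleast_2d(x), X)[0]  # Eq. 2.26
        return beta + r @ Rinv @ (y - f * beta)           # Eq. 2.25
    return predict
```

Because r(x) at a sample point reproduces the corresponding row of R, the predictor interpolates the sampled data exactly, as noted above.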
Once the MLEs for each θk have been found, the final step is to validate the model.
Since a kriging model interpolates the data, residual plots and R2 values—the usual model
assessments for response surfaces (cf., Myers and Montgomery, 1995)—are meaningless
because there are no residuals. Therefore, if additional data points can be afforded, validating
the model against them is essential: the maximum absolute error, average absolute error, and
root mean square error (RMSE) for the additional validation points can be calculated to assess
model accuracy. These measures are summarized in Table 2.2. In the table, nerror is the number
of random test points used, yi is the actual value from the computer code/simulation, and ŷi is
the predicted value from the approximation model.
Table 2.2 Error Measures for Kriging Metamodels

max. abs. error = max |yi − ŷi|, i = 1, ..., nerror    [2.30]

avg. abs. error = (1/nerror) Σi=1..nerror |yi − ŷi|    [2.31]

RMSE = √( Σi=1..nerror (yi − ŷi)2 / nerror )    [2.32]
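The three error measures in Table 2.2 can be computed as follows (a small numpy sketch; the function name is illustrative):

```python
import numpy as np

def error_measures(y_true, y_pred):
    """Maximum absolute error, average absolute error, and RMSE (Eqs. 2.30-2.32)."""
    e = np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float))
    return {"max_abs_error": e.max(),
            "avg_abs_error": e.mean(),
            "rmse": np.sqrt((e ** 2).mean())}
```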
However, sometimes taking additional validation points is not possible due to the added
computational expense; in such cases, a means of model assessment which requires no
additional points is needed. One such approach is the
leave-one-out cross validation (Mitchell and Morris, 1992a). In this approach, each sample
point used to fit the model is removed one at a time, the model is rebuilt without that sample
point, and the difference between the model without the sample point and actual value at the
sample point is computed for all of the sample points. The cross validation root mean square
error is given by Equation 2.33:

cvrmse = √( Σi=1..ns (yi − ŷi)2 / ns )    [2.33]

where ŷi is the prediction at the i-th sample point from the model built without it.
The MLEs for the θk are not re-computed for each model; the initial θk MLEs based on the full
sample set are used. Mitchell and Morris (1992a) describe an approach which facilitates this
cross validation.
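The leave-one-out procedure just described can be sketched generically. Here `fit` and `predict` are placeholders for any metamodeling routine (not functions from the dissertation); per the text, hyperparameters such as the θ MLEs should be estimated once on the full sample and held fixed inside `fit`:

```python
import numpy as np

def cvrmse(X, y, fit, predict):
    """Leave-one-out cross validation root mean square error (Eq. 2.33)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    ns = len(y)
    errs = np.empty(ns)
    for i in range(ns):
        mask = np.arange(ns) != i          # remove the i-th sample point
        model = fit(X[mask], y[mask])      # rebuild the model without it
        errs[i] = predict(model, X[i:i + 1])[0] - y[i]
    return np.sqrt((errs ** 2).mean())
```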
Before a kriging metamodel (or any metamodel for that matter) can be created, the
design space must be sampled in order to obtain data to fit the model. Hence, an important
step in any metamodeling approach is the selection of an appropriate sampling strategy, i.e., an
experimental design by which the computer analysis or simulation code is queried. In the next
sections, classical and space filling experimental designs are discussed.
Many researchers (see, e.g., Currin, et al., 1991; Sacks and Schiller, 1988) argue that
classical experimental designs, such as the central composite designs and Box-Behnken designs,
are not well-suited for sampling deterministic computer experiments. Sacks, et al. (1989) state
that the “classical notions of experimental blocking, replication and randomization are irrelevant”
when it comes to deterministic computer experiments that have no random error; hence, designs
for deterministic computer experiments should “fill the space” as opposed to possess properties,
such as replication, that guard against random error.
Booker (1996) summarizes the difference between classical experimental designs and
new space filling designs well. In the classical design and analysis of physical experiments,
random variation is accounted for by spreading the sample points out in the design space and by
taking multiple data points (replicates), see Figure 2.20a. In deterministic computer
experiments, replication at a sample point is meaningless; therefore, the points should be chosen
to fill the design space. One approach is to minimize the integrated mean square error over the
design region (cf., Sacks, et al., 1989); the space filling design illustrated in Figure 2.20b is an
example of such a design.
Q3. Are space filling designs better suited for building approximations of deterministic
computer analyses than classical experimental designs?

As stated in Section 1.3.1, Hypothesis 3 is that space filling designs are better suited for
building approximations of deterministic computer analyses than classical experimental
designs. In an effort to test this hypothesis, an investigation into the utility of
several classical and space filling experimental designs is conducted in Chapter 5. Eleven
different types of experimental designs are investigated in this dissertation: two classical
experimental designs and nine space filling experimental designs. The different designs are
described next.
Figure 2.20 (a) Classical design with replicates; (b) space filling design without replicates
Classical experimental designs are so named because they were developed for physical
experiments, which are plagued by variability and random error (see, e.g., Box and Draper,
1987; Myers, et al., 1989; Myers and Montgomery, 1995). Among these designs, the central
composite and Box-Behnken designs are well known and easily generated; hence, they are
employed in this work to serve as a basis for comparison against the sampling capability of
space filling designs. A brief description of these two types of designs follows.
A central composite design (CCD) is a two-level factorial design augmented with center and
star points; CCDs are the most widely used experimental designs for fitting second-order
response surfaces (Myers and Montgomery, 1995).

Figure 2.21 Central Composite Design (factorial points, star points, and center point)

Different CCDs are formed by varying the distance from the center of the design space to the
star points; in this work, three variations are considered:
• ordinary central composite design (CCD) - star points are placed a distance of ±α (α
> 1) from the center with the cube points placed at ±1 from the center,
• face centered central composite (CCF) design - star points are positioned on the
faces of the cube, and
• inscribed central composite (CCI) design - star points are positioned at ±1/α from
the center with the cube points placed at ±1.
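The point sets of these three variants can be generated as follows. This is a generic sketch in coded (dimensionless) units, not code from this dissertation, and the CCI convention follows the description above:

```python
import itertools
import numpy as np

def central_composite(k, alpha=2.0, variant="ccd"):
    """Factorial (cube), star, and center points of a central composite design
    in k factors: 'ccd' puts star points at +/-alpha, 'ccf' puts them on the
    cube faces at +/-1, and 'cci' puts them at +/-1/alpha."""
    cube = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    star = np.vstack([s * np.eye(k)[i] for i in range(k) for s in (-1.0, 1.0)])
    center = np.zeros((1, k))
    if variant == "ccd":
        star *= alpha
    elif variant == "cci":
        star /= alpha
    # 'ccf' leaves the star points at +/-1, i.e., on the cube faces
    return np.vstack([cube, star, center])
```

For k factors the design has 2^k cube points, 2k star points, and one center point.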
In addition, combinations of the CCD and CCF, and of the CCI and CCF, are investigated.
Box-Behnken designs are three-level designs for fitting second-order response surfaces
(Myers and Montgomery, 1995); however, these designs should not be used when accurate
predictions at the extremes (i.e., the corners) of the design space are important. An example
13 point Box-Behnken design is illustrated in Figure 2.22.

Figure 2.22 Box-Behnken Design
Numerous space filling experimental designs have been developed in an effort to provide more
efficient and effective means for sampling deterministic computer experiments. For instance,
Koehler and Owen (1996) describe several Bayesian and Frequentist types of space filling
experimental designs, including maximin and minimax designs, maximum entropy designs,
integrated mean squared error (IMSE) designs, orthogonal arrays, Latin hypercubes, scrambled
nets and randomized grids. Latin hypercube designs were introduced in (McKay, et al., 1979)
for use with computer codes and compared to random sampling and stratified sampling.
Minimax and maximin designs were developed by Johnson, et al. (1990) specifically for use
with computer experiments. Shewry and Wynn (1987; 1988) and Currin, et al. (1991) use the
maximum entropy principle to develop designs for computer experiments. Similarly, Sacks et
al. (1989) discuss entropy designs in addition to IMSE designs and maximum mean squared
error designs for use with deterministic computer experiments. Finally, a review of several
Bayesian experimental designs for linear and nonlinear regression is given in (Chaloner and
Verdinelli, 1995).
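Of the designs just listed, the Latin hypercube is the simplest to construct, and a sketch helps make the idea concrete (a generic random Latin hypercube sampler, not code from any of the cited works):

```python
import numpy as np

def latin_hypercube(n, k, seed=None):
    """Random Latin hypercube: n points in [0,1]^k with exactly one point in
    each of the n equal-width bins along every dimension."""
    rng = np.random.default_rng(seed)
    # one random permutation of the n bins per dimension, with the point
    # jittered uniformly inside its bin
    perms = np.column_stack([rng.permutation(n) for _ in range(k)])
    return (perms + rng.random((n, k))) / n
```

The construction guarantees good one-dimensional stratification: projecting the n points onto any axis yields one point per bin.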
Comparisons of the different types of space filling experimental designs are few; often
the novel space filling design being described is compared against Latin hypercube designs and
random sampling (see, e.g., Kalagnanam and Diwekar, 1997; Park, 1994; Salagame and
Barton, 1997), but rarely is it compared against other space filling designs. An exception is
the maximin Latin hypercube design (Morris and Mitchell, 1995), which is compared against maximin
designs (Johnson, et al., 1990) and Latin hypercubes; the authors conclude by means of an
example that maximin Latin hypercube designs are better than either maximin or Latin
hypercube designs alone. In this dissertation, one of the contributions is to compare and
contrast a wide variety of space filling designs against themselves and classical experimental
designs. Toward this end, nine space filling experimental designs are investigated: Latin
hypercubes, maximin Latin hypercubes, minimax Latin hypercubes, optimal Latin hypercubes,
orthogonal arrays, orthogonal array-based Latin hypercubes, orthogonal Latin hypercubes,
Hammersley point designs, and uniform designs. An overview of each of these designs follows.
An orthogonal array (OA) is a space filling experimental design intended for use with computer
experiments. The orthogonal arrays used in this work are limited to q2 runs, where q is a prime
power; an example nine point orthogonal array design is illustrated in Figure 2.24.

Figure 2.24 Orthogonal Array Design
Orthogonal array-based Latin hypercubes combine a Latin hypercube with the projection
properties of an orthogonal array. Orthogonal Latin hypercubes, in turn, are constructed by
purely algebraic means using the process described in (Ye, 1997); an example nine point
orthogonal Latin hypercube is illustrated in Figure 2.26.

Figure 2.26 Orthogonal Latin Hypercube
The simulated annealing algorithm of (Morris and Mitchell, 1995) is used to construct maximin
Latin hypercube designs for varying sample sizes; examples are illustrated in Figures 2.27
and 2.28.
A uniform design is a design based strictly on uniformity, placing sample points so as to
minimize the discrepancy between their empirical distribution and the uniform distribution.
In addition to the maximin Latin hypercube designs from (Morris and Mitchell, 1995), a
minimax Latin hypercube design is introduced in this dissertation. Only a brief description of this
unique design is given here; a detailed description of the design and a discussion of how it is
generated are included in Appendix C. From an intuitive standpoint, because prediction with
kriging relies on the spatial correlation between data points, a design which minimizes the
maximum distance between the sample points and any point in the design space should yield an
accurate predictor. Such a design is referred to as a minimax design (Johnson, et al., 1990).
While the minimax criterion ensures good coverage of the design space by minimizing the
maximum distance between points, it does not ensure good stratification of the design space
(i.e., when the sample points are projected into 1-dimension, many of the points may overlap
(cf., Johnson, et al., 1990)). Meanwhile, because a Latin hypercube ensures good stratification
of the design space, combining it with the minimax criterion provides a good compromise
between the two, much as the maximin Latin hypercubes developed by Morris and Mitchell
(1995) do. Example 9, 11, and 14 point minimax Latin hypercube designs are shown in
Figure 2.31. The specifics of the genetic algorithm used to generate these minimax Latin
hypercube designs are given in Appendix C.
In Chapter 5, the minimax Latin hypercube designs are compared against the other
classical and space filling experimental designs discussed in this section.
Through the review of the literature which is presented in this chapter, the necessary
elements for a method to model and design a scalable product platform for a product family
have been identified by elucidating the research questions (and hypotheses) introduced in
Section 1.3.1. In the next chapter, these constitutive elements are integrated to create the
Product Platform Concept Exploration Method (PPCEM), providing a Method which facilitates
the synthesis and Exploration of a common Product Platform Concept which can be scaled
into an appropriate family of products. The relationship between the individual sections in this
chapter and the PPCEM developed in the next chapter is illustrated in Figure 2.32. In
particular, the market segmentation grid is revisited in Section 3.1.1 as it applies to the PPCEM.
The concept of a “conceptual noise factor” is formalized into a scale factor in Section 3.1.2
which is fundamental to the utilization of the PPCEM. Metamodeling techniques within the
PPCEM are discussed in Section 3.1.3. Aggregation of the individual product specifications
into an appropriate compromise DSP formulation for the product family is described in Section
3.1.4, and development of the product platform portfolio for the product family is explained in
Section 3.1.5.
In addition to the presentation of the PPCEM, the research hypotheses are revisited in
Section 3.2 in the next chapter. Supporting posits are stated for each hypothesis, and the
verification strategy for testing the hypotheses is elaborated in Section 3.3. The discussion in
Section 3.3 sets the stage for the example problems which are presented in Chapters 4 through
7.
CHAPTER 3

THE PRODUCT PLATFORM CONCEPT EXPLORATION METHOD
In this chapter, the elements of the previous chapters are synthesized to meet the
principal objective in this dissertation, namely, to develop the Product Platform Concept
Exploration Method (PPCEM) for designing a common scalable product platform for a product
family. An overview of the PPCEM and its associated steps and tools is given in Section 3.1
with each step of the PPCEM and its constituent elements elaborated in Sections 3.1.1 through
3.1.5; the resulting infrastructure of the PPCEM is presented in Section 3.1.6. In Section 3.2,
the research hypotheses are revisited from Section 1.3.1 and supporting posits are identified.
Section 3.3 follows with an outline of the strategy for verification and testing of the research
hypotheses. Section 3.4 concludes the chapter with a recap of what has been presented and a
look ahead to the metamodeling studies in Chapters 4 and 5 and the example problems in
Chapters 6 and 7 which are used to test the research hypotheses and demonstrate the PPCEM.
3.1 OVERVIEW OF THE PPCEM AND RESEARCH HYPOTHESES
As stated in Section 1.3.2, the principal contribution in this dissertation is the Product
Platform Concept Exploration Method (PPCEM) for designing a common scalable product
platform for a product family. As the name implies, the PPCEM is a Method which facilitates
the synthesis and Exploration of a common Product Platform Concept which can be scaled
into an appropriate family of products. The steps and associated tools (with relevant sections
noted) of the PPCEM are illustrated in Figure 3.1.
Step 1 - Create Market Segmentation Grid [tool: Market Segmentation Grid (§ 2.2)]
Step 2 - Classify Factors and Ranges [tool: Robust Design Principles (§ 2.3)]
Step 3 - Build and Validate Metamodels [tool: Metamodeling Techniques (§ 2.4)]
Step 4 - Aggregate Product Platform Specifications [tool: Compromise Decision Support Problem (§ 1.2)]
Step 5 - Develop Product Platform Portfolio [tool: Compromise Decision Support Problem (§ 1.2)]

Figure 3.1 Steps and Tools of the PPCEM
There are five steps to the PPCEM as illustrated in Figure 3.1. The inputs to the
PPCEM are the overall design requirements, and the output of the PPCEM is the product
platform portfolio which is described in Section 3.1.5. The tools utilized in each step of the
PPCEM are shown on the right hand side of Figure 3.1; their involvement in the various steps of
the PPCEM is elaborated further in Sections 3.1.1 through 3.1.5 wherein the implementation of
each step of the PPCEM is described. These steps prescribe how to formulate the problem
and describe how to solve it; the actual implementation of each step is liable to vary from
problem to problem.
3.1.1 Step 1 - Create Market Segmentation Grid

Given the overall design requirements, Step 1 in the PPCEM is to create the market
segmentation grid as shown in Figure 3.2. As discussed in Section 2.2.1, the market
segmentation grid provides a link between management, marketing, and engineering design to
help identify and map which type of leveraging can be used to meet the overall design
requirements and realize a suitable product platform and product family. In the PPCEM, the
market segmentation grid serves as an attention directing tool to help identify potential
leveraging opportunities for the product platform design. Examples of this step are given in
Sections 6.2 and 7.1.3.
Figure 3.2 Step 1 of the PPCEM: Create Market Segmentation Grid (identify vertical,
horizontal, or beachhead leveraging from the overall design requirements)
3.1.2 Step 2 - Classify Factors and Ranges

Once the market segmentation grid has been created, Step 2 of the PPCEM is to
classify factors as illustrated in Figure 3.3. Factors are classified in the following manner:
Figure 3.3 Step 2 of the PPCEM: Classify Factors and Ranges
• Responses are performance parameters of the system; in the problem formulation, they
may be constraints or goals or both and are identified from the overall design
requirements and the market segmentation grid.
• Control factors are variables that can be freely specified by a designer; settings of the
control factors are chosen to minimize the effects of variations in the system while
achieving desired performance targets and meeting the necessary constraints. Signal
factors are lumped within the control factors because it is often difficult to know, a
priori, which design variables are control factors that can be used to minimize the
sensitivity of the design to noise variations and which are signal factors that have
no influence on the robustness of the system.
• Noise factors are parameters over which a designer has no control or which are too
difficult or expensive to control.
• Scale factors are factors around which a product platform is leveraged, either through
vertical scaling, horizontal scaling, or a combination of the two.
Appropriate ranges for the control and noise factors are identified during this step, and
constraints and goal targets for the responses are also identified.
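The classification above amounts to a small data structure; one way to capture it is sketched below. The class and field names are illustrative only (the motor-like example values echo the universal motor example of Chapter 6), not part of the PPCEM itself:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Factor:
    """One factor in the Step 2 classification."""
    name: str
    role: str                      # 'control', 'noise', or 'scale'
    bounds: Tuple[float, float]    # lower and upper limit of its range

@dataclass
class Response:
    name: str
    target: Optional[float] = None                # goal target, if any
    limits: Optional[Tuple[float, float]] = None  # constraint limits, if any

@dataclass
class Formulation:
    factors: List[Factor] = field(default_factory=list)
    responses: List[Response] = field(default_factory=list)

    def of_role(self, role: str) -> List[Factor]:
        return [f for f in self.factors if f.role == role]
```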
The relationship between the different leveraging capabilities and the types of scale factors
considered in this dissertation is illustrated in Figure 3.4. As discussed in Section 2.2.1, three
types of leveraging can be mapped using the market segmentation grid: (1) vertical leveraging,
(2) horizontal leveraging, and (3) a beachhead approach, which is a combination of vertical and
horizontal leveraging. As shown in Figure 3.4, each type of leveraging has a corresponding set
of scale factors.

Figure 3.4 Leveraging and Scale Factors ((a) vertical leveraging, with scale factors such as
the length of a motor to provide varying torque or the number of compressors in an engine;
(c) beachhead leveraging, with scale factors that are a combination of parametric, conceptual,
and/or configurational scaling factors)
If known, an appropriate range—upper and lower limit—is identified for each scale factor
during this step of the PPCEM; otherwise, finding this range becomes part of the design
process. Examples of Step 2 are offered in Sections 6.2 and 7.3. Once the responses, control,
noise, and scale factors and corresponding constraints/targets and ranges have been identified,
work proceeds to Step 3.
3.1.3 Step 3 - Build and Validate Metamodels
Step 3 in the PPCEM is to build and validate metamodels relating the control, noise,
and scale factors to the responses using the elements of the PPCEM shown in Figure 3.5. The
metamodels serve as inexpensive surrogates for computationally expensive analysis or simulation
routines. Because robust design principles are being used, these
metamodels are either functions of control, noise, and scale factors as discussed in Sections
2.3.1 and 2.4, or approximate the mean and standard deviation of each response for known
variations in the noise and scale factors. If the analytic equations or simulation routine are not
sufficiently expensive to warrant metamodeling, this step can be skipped provided that the mean
and standard deviation of each response (as a result of variation in the scale factor and any
relevant noise factors) can be computed easily. The universal motor example in Chapter 6
forgoes metamodel construction because the analyses permit the mean and standard deviation
of each response to be easily computed; however, such is not the case in Chapter 7, the design
of a family of General Aviation aircraft, wherein metamodels are built to facilitate the
implementation of robust design and the search for a good aircraft platform.
Figure 3.5 Step 3 of the PPCEM: Build and Validate Metamodels (analysis or simulation
routine; model choice, e.g., response surface or kriging; build, validate, and use metamodels)
The steps for building and validating the metamodels follow the traditional metamodeling
approach: an experimental design is selected, and the computer analysis or simulation program
used to model the system is queried to obtain data.
The experimental design is used to sample the design space identified by the ranges (i.e.,
bounds) on the control, noise, and scale factors. The resulting sample data then is used to build
a metamodel (e.g., a kriging model, Section 2.4.2) for each response. Model accuracy then is
assessed through additional validation points or other procedures appropriate for the type of
metamodel being used; examples of such an approach can be found in, e.g., (Chen, et al.,
1997; Koch, et al., 1997; Myers and
Montgomery, 1995). The difficulty, then, lies in defining an appropriate design space. In this
dissertation, the design space for the General Aviation aircraft example problem is known,
making identification of a good design space appear trivial. In reality it is not, and there is often
great difficulty finding an appropriately good design space. Identifying and quantifying a “good”
design space is not addressed in this dissertation; it is a possibility for future work (see Section
8.3).
3.1.4 Step 4 - Aggregate Product Platform Specifications

Once the necessary metamodels have been built and validated, Step 4 in the PPCEM is
to aggregate the product platform specifications, formulating a compromise DSP to model the
necessary constraints and goals for the product family and
product platform based on the overall design requirements, the market segmentation developed
in Step 1, and the factor classification and ranges developed in Step 2, see Figure 3.6. It is
imperative that product constraints or goals given in the overall design requirements that are not
captured within the desired platform leveraging strategy be included in the compromise DSP.
Figure 3.6 Step 4 of the PPCEM: Aggregate Product Platform Specifications (the compromise
DSP: find the control variables; satisfy the constraints, goals, and bounds; minimize the
deviation function)
Two approaches for aggregating the product platform specifications are demonstrated
in this dissertation.
1. Separate goals for “bringing the mean on target” and “minimizing the variation” are
created (see Section 2.3.2) for the product family. This follows the implementation of
robust design principles which is traditionally used in the RCEM, except they are being
applied to a product family as opposed to a single product. The procedure is as
follows:
a. identify targets from market segmentation grid and overall design requirements for
each derivative product;
b. compute target means for the platform by averaging the individual targets;
c. compute standard deviations for the platform based on the individual targets by
dividing the range of each target by six, assuming a normal distribution with ±3σ
variations, or by √12, assuming a uniform distribution; and
d. create separate goals for “bringing the mean on target” and “minimizing the
variation” as necessary.
2. Design capability indices (see Section 2.3.2) are used to assess the capability of a family of
designs to satisfy a ranged set of design requirements. The procedure is as follows:
a. identify upper and lower requirement limits (URL and LRL, respectively) from the
market segmentation grid and overall design requirements for each derivative
product;
b. compute the mean and standard deviation as the average and standard deviation of
the individual instantiations of the product family for a given set of design variables;
and
c. compute the design capability index, Cdk, from the resulting mean and standard
deviation and the requirement limits.
The first approach is utilized in the universal motor example in Chapter 6 (see Section 6.3 in
particular for more details on its implementation). Meanwhile, design capability indices are
employed in the General Aviation aircraft example in Chapter 7 (see Section 7.4).
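The target aggregation of the first approach is a short calculation; a sketch follows (the function name is illustrative, and the divisors follow the standard results that a ±3σ normal spread covers the range and that a uniform distribution over a range r has standard deviation r/√12):

```python
import math

def aggregate_targets(targets, distribution="normal"):
    """Approach 1: collapse individual product targets into a platform target
    mean and standard deviation. The target range is divided by 6 (normal,
    +/-3 sigma variations) or by sqrt(12) (uniform distribution)."""
    mean = sum(targets) / len(targets)
    spread = max(targets) - min(targets)
    divisor = {"normal": 6.0, "uniform": math.sqrt(12.0)}[distribution]
    return mean, spread / divisor
```

For example, derivative-product targets of 10, 20, and 30 yield a platform target mean of 20 with a standard deviation of 20/6 under the normal assumption.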
The compromise DSP is used to determine the values for the control (design) variables
which best satisfy the product family goals (“bringing the mean on target” and “minimizing the
variation” in the first; making Cdk = 1 in the second) while satisfying the necessary constraints.
Constraints can be either worst case scenario, evaluated on an individual basis, or aggregated in
a similar manner as the goals, constraining the mean and deviation of the responses or the
appropriate Cdk. The compromise DSP is exercised in Step 5 of the PPCEM to obtain the
product platform portfolio.
3.1.5 Step 5 - Develop Product Platform Portfolio

Step 5 of the PPCEM is to solve the compromise DSP using the aggregate product
platform specifications to develop the product platform portfolio. This step makes use of the
metamodels created in Step 3 in conjunction with the compromise DSP and the aggregate
product specifications formulated in Step 4; design scenarios for exercising the compromise
DSP are abstracted from the overall design requirements, see Figure 3.7. The resulting “pool”
of solutions constitutes the solution portfolio. This portfolio of solutions can provide a wealth
of information about the appropriate
settings for the design variables for the product platform based on different design scenarios or
robustness considerations; hence, the solution portfolio is called the product platform
portfolio.
Figure 3.7 Step 5 of the PPCEM: Develop Product Platform Portfolio (exercise the compromise
DSP, using the metamodels, to generate the product platform portfolio)
The concept of a solution portfolio is not new to this research; it is simply a more
appropriate name for what has been previously referred to as a ranged set of specifications
(see, e.g., Lewis, et al., 1994; Simpson, et al., 1996; Simpson, et al., 1997a; Smith and
Mistree, 1994). The objective when using the PPCEM is to generate a variety of options
for product platforms; it is not necessarily used to evaluate these options or select one
from them. It facilitates generating these options with the end result being the product
platform portfolio, namely, the “pool” of solutions (i.e., design variable settings) which should
be maintained in order for the product platform to have sufficient flexibility to meet the desired
design scenarios in the event that one scenario is preferred to the next.
In addition to developing the product platform portfolio for the product family, product
variety tradeoff studies also can be performed by making use of two measures—the Non-
Commonality Index (NCI) and the Performance Deviation Index (PDI)—which are described
as follows:
Non-commonality Index (NCI): NCI is used to assess the amount of variation between
parameter settings of each product within a product family; the smaller the variation, the
smaller NCI, and the more common the products within the product family. Computing
NCI is perhaps best illustrated through example; consider the 3 products shown in
Figure 3.8. Assume that each product is described by three design variables: x1, x2, and
x3 (if these three hypothetical products were electric motors, then x1, x2, and x3 might be
the outer radius of the motor, the length of the motor, and the number of windings in the
motor). First, the dissimilarity of each design variable settings for each product within
the family is computed as follows:
1. Compute the mean, μj, of each of the xj within the product family and the absolute value
of the difference between μj and xi,j for each of the i products.
2. Normalize each difference by the range of that particular design variable: [upper
bound (ubj) - lower bound (lbj)]; this measures the relative variation in the values of
the design variables to the total range for that design variable.
3. Compute the average of the resulting normalized differences; this value is denoted
DIj in the figure and is the dissimilarity of the settings of xj for the group of products.
The scale factor around which the product family is derived is not included in this
computation. NCI is taken as a weighted sum of the individual DIi, where the weights
reflect the relative difficulty or cost associated with allowing each parameter to vary.
For an electric motor for instance, it may be easier or cheaper to allow the number of
windings (x3) to vary between different motor models but not so to allow the outer
radius to vary (x1). In this case, w1 would be much larger than w3 to reflect this within
the product family.
Figure 3.8 Example Computation of the Non-Commonality Index (NCI)

The dissimilarity index of design variable j across the n products in the family is

DIj = (1/n) Σi=1..n |μj − xi,j| / (ubj − lbj),   j = 1, ..., # design variables

For the three example products:

DI1 = (1/3) [ (|2.6 − 2.5| + |2.6 − 2.5| + |2.6 − 2.8|) / (3 − 2) ] = 0.133
DI2 = 0.0733
DI3 = (1/3) [ (|22 − 13| + |22 − 18| + |22 − 35|) / (100 − 0) ] = 0.0867

NCI = Σj wj DIj = 0.55(0.133) + 0.3(0.0733) + 0.15(0.0867) = 0.108
Performance Deviation Index: Assuming that a market niche is defined by a set of goal
targets and constraints and that the necessary constraints are met for each individual
derivative product, then the deviation variables in the compromise DSP are a direct
measure of how well each derivative product meets its targets. The Performance
Deviation Index (PDI) for a product family thus is taken as a linear combination
(possibly weighted) of the deviation variables for each derivative product within the
product family as given by Equation 3.1:
PDI = Σi=1..n wi Zi    [3.1]
where i = {1, ..., # products in family}, and Zi is the corresponding deviation function
for each derivative product within the product family. Weightings may be used to bias
the measure for certain products within the family.
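Equation 3.1 is a one-line computation; a sketch with an illustrative function name:

```python
def performance_deviation_index(Z, weights=None):
    """PDI (Eq. 3.1): weighted sum of the deviation function values Z_i of
    each derivative product; uniform weights if none are given."""
    if weights is None:
        weights = [1.0] * len(Z)
    return sum(w * z for w, z in zip(weights, Z))
```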
Example computations of the NCI and PDI for a family of products are demonstrated in the
General Aviation aircraft example (see Section 7.6.2).
NCI and PDI are, in and of themselves, ad hoc measures for a product family, similar to
the commonality indices and platform efficiency and effectiveness measures discussed in Section
2.2.2. However, having these measures for non-commonality and performance deviation for a
family of products allows product variety tradeoff studies to be performed, see Figure 3.9 and
Figure 3.10.
Figure 3.9 Regions of Good and Bad Product Family Designs in NCI-PDI Space (worst
designs: high NCI, high PDI; best designs: low NCI, low PDI; individually optimized designs:
low PDI; designs based on a common product platform: low NCI)
By plotting NCI and PDI for a family of designs as illustrated in Figure 3.9, regions of
good and bad product family designs can be identified; the worst designs have high NCI and
PDI, while the best have low NCI and PDI. Individually optimized designs within a product
family, where commonality is not important, are liable to have a low PDI but a high NCI for the
resulting group of products. On the other hand, a product family based on a common product
platform is liable to have a low NCI; ideally, a low PDI is desirable but may be difficult to
achieve depending on the amount of commonality desired between products within the resulting
product family.
Figure 3.10 Example NCI vs. PDI Tradeoff Curve (PPCEM designs vs. benchmark designs;
ΔNCIi and ΔPDIi denote the change in NCI and PDI obtained by allowing i variables to vary
between each design)
NCI vs. PDI curves of the form shown in Figure 3.10 can be generated by trading off
product commonality for product performance and vice versa. By designing each product
individually, benchmark designs can be created which have a low PDI. Meanwhile, the
platform designs obtained by implementing the PPCEM have a low NCI. What is of interest to
study is the resulting ΔPDIlost and ΔNCIgain, which assess the tradeoff between commonality
and performance. If ΔPDIlost is too large, the non-commonality of the designs can be
increased; traversing the front of the envelope provides the largest reduction in PDI (ΔPDIi)
with the smallest increase in NCI (ΔNCIi). This curve is generated in the General Aviation
aircraft example (Section 7.6.3).
By assembling the various elements of the PPCEM, the complete infrastructure of the
PPCEM is shown in Figure 3.11. As illustrated in the figure, the PPCEM consists of
“Processors” A-F which are employed as the overall design specifications are transformed into
the product platform portfolio. As described in the previous sections, each step employs one
or more of these processors:
Figure 3.11 Infrastructure of the PPCEM (Processors A-F transform the overall design
requirements into the product platform portfolio)
Step 1 - Create Market Segmentation Grid relies on human judgment and
engineering “know-how” as Processor A in the PPCEM to map the overall
design requirements into an appropriate market segmentation grid and identify
leveraging opportunities.
Step 2 - Classify Factors and Ranges relies on human judgment and Processor
B in the PPCEM to map the overall design requirements and market
segmentation grid into appropriate control, noise, and scale factors and identify
corresponding ranges for each. The responses being investigated also need to
be identified in this step of the process.
Step 3 - Build and Validate Metamodels relies on human judgment and Processors
C, D, and E for construction and validation of the necessary metamodels.
Step 4 - Aggregate Product Platform Specifications relies on human judgment to
formulate a compromise DSP, Processor F, using information from Processors
A and B and the overall design requirements.
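The information gathered in Steps 1 and 2 can be captured in a small data structure. The sketch below is purely illustrative: the factor ranges and names are hypothetical placeholders, not values from the examples in Chapters 6 and 7 (only "stack length" as a scale factor is taken from the universal motor example).

```python
from dataclasses import dataclass

# A minimal sketch of the outcome of Steps 1 and 2 of the PPCEM: each factor
# is classified as a control, noise, or scale factor with a corresponding
# range, and the responses of interest are listed. All bounds are
# hypothetical placeholders.
@dataclass
class Factor:
    name: str
    low: float
    high: float
    kind: str  # "control", "noise", or "scale"

@dataclass
class PPCEMProblem:
    factors: list
    responses: list

    def scale_factors(self):
        """The factors around which the platform is leveraged."""
        return [f for f in self.factors if f.kind == "scale"]

motor_family = PPCEMProblem(
    factors=[
        Factor("stack_length", 0.5, 3.0, "scale"),        # vertical scaling
        Factor("current", 0.1, 6.0, "control"),
        Factor("ambient_temperature", 15.0, 45.0, "noise"),
    ],
    responses=["torque", "efficiency", "mass"],
)
```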
Referring back to Figure 1.5, the structure of the PPCEM is very similar to the RCEM;
this is not a coincidence. In essence, the PPCEM is derived from the RCEM through a series
of modifications based on the research questions and hypotheses presented in Section 1.3.1.
As stated in Section 1.3, there are three main hypotheses in this dissertation:
Hypothesis 1: The Product Platform Concept Exploration Method provides a means to model and realize scalability in product family design.
Hypothesis 2: Kriging is a viable alternative for building metamodels of deterministic
computer analyses.
Hypothesis 3: Space filling experimental designs are better suited than classical
experimental designs for building metamodels of deterministic computer analyses.
Hypotheses 2 and 3 relate to the efficiency of the PPCEM but have ramifications beyond the PPCEM itself. To facilitate its testing, Hypothesis 1 is decomposed into three sub-hypotheses, which
are as follows:
Sub-Hypothesis 1.1: The market segmentation grid can be utilized to help identify scale factors around which a scalable product platform can be developed.
Sub-Hypothesis 1.2: Robust design principles can be used to facilitate the design of a common, scalable product platform.
Sub-Hypothesis 1.3: Individual targets for product variants can be aggregated into an
appropriate mean and variance and used in conjunction with robust design
principles to design a family of products.
It is upon these hypotheses that the PPCEM and the work in this dissertation are
grounded. The relationship between these hypotheses and the modifications to RCEM which
form the PPCEM are presented in the next section. This is followed in Section 3.2.2 with the posits which support these hypotheses.
As the research hypotheses are addressed, modifications to the RCEM are made and
the PPCEM thus is realized. The relationships between the research hypotheses, designated as
H1, H1.1, ..., H3, and the specific elements of the RCEM are illustrated in Figure 3.12.
Addressing the first hypothesis provides an interface with marketing, enabling the identification
of scalable product platforms; this is accomplished through the addition of a new module to the
RCEM, namely, the market segmentation grid. Scale factors (Sub-Hypothesis 1.1) then are
identified through the use of the market segmentation grid, around which a scalable product
platform can be developed.
Figure 3.12 Relationship Between the Research Hypotheses (H1, H1.1-H1.3, H2, H3) and the Modifications to the RCEM Which Form the PPCEM
Sub-Hypothesis 1.2 relates to designing the actual product platform; robust design
principles are abstracted for use in product family design by aggregating product family targets
and constraints into appropriate means and variances. The resulting formulation allows robust
design principles, already embodied in the RCEM in the form of separate goals for “bringing the
mean on target” and “minimizing the variation,” to be utilized when solving the compromise
DSP. Notice that the goal for “minimize the independence” is not included in the compromise
DSP formulation because the intent is to rely solely on robust design principles to design a
suitable product platform which can be scaled into a product family. As the research
hypotheses are addressed and the RCEM is correspondingly modified, the PPCEM is realized,
as illustrated in Figure 3.12. The intent is not to replace the current response surface and design of experiments
capabilities of the RCEM; rather, it is to augment the current capabilities with kriging and novel
experimental designs. The specific posits which support these claims and the research hypotheses are presented next.
There are several posits which support the research hypotheses which have been
revealed during the literature review in Chapter 2 and in the discussion in Section 1.1. Six
posits support Hypothesis 1 and Sub-Hypotheses 1.1-1.3; they are the following:
Posit 1.1: The RCEM provides an efficient and effective means for developing robust top-level design specifications for complex systems.
Posit 1.3: Robust design principles can be used to minimize the sensitivity of a design to variations in noise factors and/or design parameters.
Posit 1.4: Robust design principles can be used effectively in the early stages of the
design process by modeling the response itself with separate goals for “bringing the mean on target” and “minimizing the variation.”
Posit 1.5: The compromise DSP is capable of effecting robust design solutions through
separate goals for “bringing the mean on target” and “minimizing variation” of each response in the presence of noise factors.
Posit 1.6: The market segmentation grid can be used to identify opportunities for product platform leveraging.
• Posit 1.1 is substantiated in (Chen, 1995) by explicitly testing and verifying the
efficiency and effectiveness of the RCEM for developing robust top-level design
specifications for complex systems design.
• Posit 1.3, Posit 1.4, and Posit 1.5 are substantiated by the work in (Chen, et al.,
1996b); Chen and her coauthors describe a general robust design procedure which can
minimize the sensitivity of a design to variations in noise factors and/or design
parameters (Posit 1.3) by having separate goals for “bringing the mean on target” and
“minimizing the variation” (Posit 1.4) of each response in the compromise DSP (Posit
1.5). These posits are further substantiated in (Chen, 1995) as part of the development
of the RCEM.
• Posit 1.6 is substantiated by the discussion in Section 2.2.1, i.e., the market
segmentation grid can be used as an attention directing tool to identify leveraging
opportunities in product platform design (cf., Meyer, 1997; Meyer and Lehnerd, 1997);
identifying these leveraging opportunities provided the initial impetus for developing the
market segmentation grid.
These six posits help to support Hypothesis 1 and Sub-Hypotheses 1.1-1.3. The strategy for
testing and verifying these hypotheses is outlined in Section 3.3. Before the verification strategy is presented, the posits which support Hypotheses 2 and 3 are stated and substantiated.
Posit 2.1: Building an interpolative kriging model is not predicated on the assumption of random error in the observed responses.
Posit 2.2: Kriging provides very flexible modeling capabilities based on the wide
variety of spatial correlation functions which can be selected to model the data.
• Posit 2.1 is more fact than assumption; it is substantiated by, e.g., Sacks, et al. (1989);
Koehler and Owen (1996); and Cressie (1993).
• Posit 2.2 is substantiated by many researchers, most notably Sacks, et al. (1989);
Welch, et al. (1992); Cressie (1993); and Barton (1992; 1994).
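The flexibility claimed in Posit 2.2 can be illustrated with a few representative one-dimensional correlation function families from this literature; the sketch below is illustrative only, and which five functions the dissertation actually investigates is specified in Chapter 5.

```python
import numpy as np

# Representative spatial correlation functions R(d; theta) from the kriging
# literature (cf. Sacks, et al., 1989; Koehler and Owen, 1996), where
# d = |x_i - x_j| and theta controls how quickly the correlation decays.
def exponential(d, theta):
    return np.exp(-theta * np.abs(d))

def gaussian(d, theta):
    return np.exp(-theta * d**2)

def linear(d, theta):
    return np.maximum(0.0, 1.0 - theta * np.abs(d))

# All are 1 at d = 0 and decay toward 0 as points move apart; swapping the
# function changes the smoothness of the resulting kriging model.
```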
Posits 2.1 and 2.2 both help to verify Hypothesis 2; the strategy for testing Hypothesis 2 is outlined in Section 3.3.2.
Posit 3.1: The classical notions of experimental blocking, replication, and randomization are irrelevant for deterministic computer experiments.
• Posit 3.1 is taken from Sacks, et al. (1989) who state that the “classical notions of
experimental blocking, replication, and randomization are irrelevant” for deterministic
computer experiments which contain no random error. Moreover, any experimental
design text (see, e.g., Montgomery, 1991) can verify that replication, blockability, and
rotatability are developed explicitly to handle and account for random (measurement)
error in a physical experiment for which classical experimental designs have been
developed.
Since kriging (using an underlying constant model) is being advocated in this dissertation, one further posit is offered.
Posit 3.2: Since kriging models (with an underlying constant model) rely on the spatial
correlation between the data, confounding and aliasing of main effects and two-factor interactions are of no concern.
• Posit 3.2 is substantiated by Sacks, et al. (1989); Currin, et al. (1991); Welch, et al.
(1990); and Barton (1992; 1994). In physical experimentation, great care is taken to
ensure that aliasing and confounding of main effects and two-factor interactions do not
occur to ensure accurate estimation of coefficients of the polynomial response surface
model (see, e.g., Montgomery, 1991).
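The contrast drawn in Posits 3.1 and 3.2 can be made concrete with a small sketch: a random Latin hypercube is one common space filling design, used here purely as an illustration (the nine space filling designs actually compared in Chapter 5 are described in Section 5.1.3 and Appendices B and C).

```python
import numpy as np

def latin_hypercube(n, k, rng):
    """A basic random Latin hypercube: n points in [0,1]^k such that each of
    the n equal-width strata of every dimension contains exactly one point."""
    samples = np.empty((n, k))
    for j in range(k):
        perm = rng.permutation(n)                    # one stratum per point
        samples[:, j] = (perm + rng.random(n)) / n   # jitter within stratum
    return samples

rng = np.random.default_rng(0)
design = latin_hypercube(9, 3, rng)  # 9 space filling points in 3 dimensions
```

Unlike a classical factorial design, which concentrates points at corners to estimate polynomial coefficients free of aliasing, such a design spreads points through the interior of the space, which suits an interpolating kriging model.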
The experimental procedure for testing Hypothesis 3 is introduced in the next section along with
the specific strategy for verification and testing of all of the other hypotheses.
The question of whether testing the proposed hypotheses really answers the research
questions is a difficult one, and it raises the issues of verification and validation as discussed by
Peplinski (1997). According to the Concise Oxford English Dictionary (1982), to validate is to
make valid, to ratify or confirm. The root, valid, is then defined as:
• (law) sound and sufficient, executed with proper formalities (valid contract);
With respect to engineering design research, the intent of the validation process is to show the
research and its products to be sound, well grounded on principles of evidence, and able to withstand scrutiny. While both “verify” and “validate” can mean to
“confirm”, “verify” is used in the context of establishing the correspondence of actual facts or
details with those proposed or guessed at, while “validate” is used in the context of establishing
validity by authoritative affirmation or by factual proof. The boundary between verification and
validation is thus shifting and often open to interpretation; in many cases the two words are used
interchangeably.
In this research, definitions for “verification” and “validation” are applied that, while not
inconsistent with the general uses above, are more specific and tailored for efforts in engineering
design research. In practice, the verification and validation of design methods is much more
than a debugging process. Three primary phases can be identified: firstly, problem justification;
secondly, completeness and consistency checks of the methodology; and thirdly, validation of
performance. (A similar breakdown of the process is given by Ignizio (1990).) Verification then refers to the second phase of the process and is focused
primarily on internal consistency and completeness, while validation as the third phase of the
process is focused on consistency with external evidence, ideally through testing the design
method on actual case studies. This validation of performance is perhaps the area most open to
If what is to be validated is a closed form mathematical expression or algorithm, it can
be proven, or validated, in a traditional and formal mathematical sense. For example, the case
of showing a solution vector, x, belongs to the set of feasible solutions for a given mathematical
model is a closed problem. Alternatively, if the problem is open, if the subject is dealing with
some “heuristic,” non-precise scheme, the issue of validation becomes one of “correctness
beyond reasonable doubt.” The validation of design methods falls into this category. In
this case it is achieved ultimately by results and usefulness and through a convincing
demonstration to (and an acceptance and ratification by) one’s peers in the field. An analogy
with mathematics and the concept of “necessary” and “sufficient” conditions can be drawn with
respect to the validation of heuristics. Heuristics are aimed toward satisfying the necessary
conditions only; it is not possible to develop an absolute proof for an open problem by
definition.
As anticipated, the operations research literature provides some useful insight into the
validation of heuristics. Regarding problem solving by heuristic programming, Lin (1975) makes the following remarks:
We therefore define a valid heuristic algorithm (to solve a given problem) as any
procedure which will produce a feasible solution acceptable to the design engineer,
within limits of computing time, and consider the problem solved if we can construct
a valid heuristic procedure to solve it. We see that in the domain where a heuristic
algorithm operates, there are elements of technique, experimentation, judgment and
persuasion, as well as compromise.
Specific heuristic programs are justified, not because they attain an analytically
verifiable optimum solution, but rather because experimentation has proven that they
are useful in practice.
In summary, while noting that judgment is subjective and based on faith, the validation of a
heuristic, and therefore the validation of design methods, can be established if (Smith, 1992):
• the solutions produced are acceptable to the designer,
• the time and consumed resources are within reasonable limits, and
• the results are accepted and ratified by one’s peers in the field.
It is against these three issues that a verification and validation strategy is developed.
Meanwhile, verification and testing of the hypotheses has already begun by stating and
substantiating posits in support of each hypothesis. What is tested in the remainder of the
dissertation is the “intellectual leap of faith” required to jump from the posits to the hypotheses.
The relationships between the next four chapters and the individual research hypotheses are
summarized in Table 3.1.
Table 3.1 Relationship Between the Research Hypotheses and the Chapters in Which They Are Tested
The relationships listed in Table 3.1 are elaborated further in the next two sections. The
strategy for testing Hypothesis 1 and the related sub-hypotheses is outlined in Section 3.3.1.
Hypothesis 1 asserts that the PPCEM can be used to design a common, scalable product platform for a product family. To verify this, two example problems are utilized to demonstrate
the effectiveness of the PPCEM: the design of a family of universal electric motors (Chapter 6)
and the design of a family of General Aviation aircraft (Chapter 7). These two examples have complementary features. The universal motor example includes:
- product family aggregated around mean and standard deviation of stack length and
separate goals for “bringing the mean on target” and “minimize the variation” are
employed to design the product platform (see Section 6.3).
Note that metamodels are not employed in this first example because mean and standard
deviation of the responses can be estimated directly from the relevant analysis equations (see
Section 6.3).
The General Aviation aircraft example includes:
- horizontal scaling of a product platform (see Section 7.1.3),
- metamodels for mean and standard deviation of the GAA family to facilitate
implementation of robust design and development of the aircraft platform (see
Section 7.3), and
- design capability indices to assess quickly the capability of the family of aircraft to
satisfy the range of requirements (see Section 7.4).
The first example parallels Black & Decker’s vertical scaling strategy for its universal motors
(Lehnerd, 1987) discussed in Section 1.1.1 and is used to provide “proof of concept” that the
PPCEM works. The second example is based on a previous application of the RCEM to
develop a “common and good” set of top-level design specifications for a family of General
Aviation aircraft (see, e.g., Simpson, 1995; Simpson, et al., 1996). The General Aviation
aircraft problem is employed in this work to demonstrate further the effectiveness of the PPCEM.
In each example, the product platform obtained using the PPCEM is compared to (a)
the initial baseline design to show improvement over the starting design, and (b) individually
designed, benchmark products which are aggregated into a product family to provide a
reference to compare against the PPCEM product family (i.e., design the family of products
with the PPCEM and without the PPCEM and discuss the differences in product performance,
computational expense, and usefulness). Product variety tradeoff studies are also performed for
the family of General Aviation aircraft, examining the tradeoff between commonality of the
aircraft and their corresponding performance for the PPCEM family and the individually
designed benchmark family.
Testing Sub-Hypothesis 1.1 - The procedure for using the market segmentation grid to
identify scale factors for a product platform is shown in Figure 3.4 and described in
Section 3.1.2. Further verification of this sub-hypothesis requires demonstrating that
this procedure can be used to identify scale factors for a product platform. In the
universal motor example in Chapter 6, the market segmentation grid is used to identify a
vertical leveraging strategy and parametric scaling factor (stack length); in the General
Aviation aircraft example, a horizontal leveraging strategy and configurational scale
factor (number of passengers) are used.
Testing Sub-Hypothesis 1.2 - If appropriate scale factors can be identified for a product
platform (i.e., if Sub-Hypothesis 1.1 is true), then the principles of robust design can be
employed to develop a product platform which has minimum sensitivity to variations in
the scale factor and is thus robust for the product family. Verification of this sub-
hypothesis requires implementation of the approach, and the two examples provide such
a demonstration.
Testing Sub-Hypothesis 1.3 - The procedure for aggregating the individual targets of the
product variants is outlined in Section 3.1.4. As with Sub-Hypothesis 1.1, further
verification of this sub-hypothesis requires demonstrating that this procedure can be
used to model and design a family of products; the approaches outlined in Section 3.1.4
are used in the two examples to illustrate both methods for aggregating product family
specifications.
3.3.2 Testing Hypotheses 2 and 3
An initial feasibility study of the utility of kriging is presented in Chapter 4 to familiarize the reader
with the process of building a kriging model; a more involved example—the design of an
aerospike nozzle—is used to compare the predictive capability of a kriging model against that of
second-order response surfaces. The specific aspect of Hypothesis 2 being tested in Section
4.2 is whether or not kriging, using an underlying constant global model in combination with a
Gaussian correlation function (one of the five correlation functions being investigated in this
dissertation), can approximate a response as accurately as a second-order response surface.
Chapter 5 continues from where Chapter 4 leaves off. To test the utility of kriging and
space filling designs (and thus Hypotheses 2 and 3) a testbed of six engineering test problems is
created to:
• test the effect of different correlation functions on the accuracy of the kriging model for a
wide variety of engineering analysis equations (linear, quadratic, cubic, reciprocal,
exponential, etc.);
• correlate the types of functions (analysis equations) which kriging models can and
cannot approximate accurately; and
• test the effect of eleven different experimental designs on the accuracy of the resulting
kriging model.
Of the eleven experimental designs mentioned in the last bullet, two are classical designs—
central composite and Box-Behnken—and the remaining nine are space filling (see Section
5.1.3 and Appendices B and C for a description of each). In this manner, Hypothesis 3 is
explicitly tested by comparing the accuracy of the kriging model built from a space filling
experimental design against that of a classical experimental design. The first two bullets relate to
testing Hypothesis 2, and the particulars of that portion of the study are described in Chapter 5.
The elements of the previous chapters are synthesized in this chapter to meet the
principal objective in this dissertation, namely, to develop the Product Platform Concept
Exploration Method (PPCEM) for designing common scalable product platforms for a product
family, see Figure 3.13. There are five steps to the PPCEM which prescribe how to formulate
the problem and describe how to solve it. As such, the PPCEM provides a Method which
facilitates the synthesis and Exploration of a common Product Platform Concept which can be scaled into a product family.
Testing and verification of the PPCEM is outlined in the previous section and takes place in the chapters which follow; testing of
Hypothesis 2 (which has implications for Step 3 of the PPCEM) commences in the next chapter
wherein an initial feasibility study of the utility of kriging is given. At the end of Chapter 4,
several questions are posed which preface the kriging/DOE study in Chapter 5. The
implications of the results of the study on metamodeling within the PPCEM (Step 3) are then discussed.
CHAPTER 4
Nozzle Design
In this chapter, the process of testing Hypothesis 2 and establishing kriging as a viable
alternative for building metamodels of deterministic computer analyses begins through two examples. The first is a simple one-dimensional example in Section 4.1
which is used to familiarize the reader with (a) the process of creating a kriging model and (b)
some of the differences between a kriging model and a second-order response surface. The
second is the aerospike nozzle problem in Section 4.2, wherein a comparison of
second-order response surface models and kriging models is conducted by means of error
analysis (Section 4.2.2), visualization (Section 4.2.3), and optimization (Section 4.2.4). These
examples establish that a simple kriging model can compete with a second-order response
surface, thereby setting the stage for an extensive investigation into the utility of kriging and
space filling experimental designs in Chapter 5.
4.1 OVERVIEW OF KRIGING MODELING AND A 1-D EXAMPLE
Having presented the mathematics behind kriging in Section 2.4.2, a simple one variable
example best illustrates the difference between the approximation capabilities of a second-order
response surface model and a kriging model. This example comes from Su and Renaud (1996)
who fabricated this example to demonstrate some of the limitations of using second-order
response surface models, see Figure 4.1. The function is an eighth-order function given by
Equation 4.1.
f(x) = Σ_{i=1}^{9} a_i (x − 900)^(i−1)    [4.1]
a1 = -659.23
a2 = 190.22
a3 = -17.802
a4 = 0.82691
a5 = -0.021885
a6 = 0.0003463
a7 = -3.2446 × 10^-6
a8 = 1.6606 × 10^-8
a9 = -3.5757 × 10^-11
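The reconstruction of Equation 4.1 can be checked directly: evaluating it with the printed (rounded) coefficients reproduces the sample responses listed in Table 4.1 to within the rounding of the coefficients.

```python
# Evaluate the eighth-order test function of Equation 4.1,
# f(x) = sum_{i=1}^{9} a_i (x - 900)^(i-1), with the printed coefficients.
a = [-659.23, 190.22, -17.802, 0.82691, -0.021885, 3.463e-4,
     -3.2446e-6, 1.6606e-8, -3.5757e-11]

def f(x):
    # enumerate yields exponents 0..8, matching (i - 1) for i = 1..9
    return sum(ai * (x - 900.0) ** p for p, ai in enumerate(a))

# f(922) and f(932) agree with the Table 4.1 values (43.976 and 13.963)
# to within the rounding of the printed coefficients.
```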
A second-order response surface model is fit to five sample points within the region of
the optimum (x = 932) using least squares regression. The five sample points are given in Table
4.1. The original function, the location of the five sample points, and the resulting second-order
response surface model are shown in Figure 4.1.
Table 4.1 Sample Points for 1-D Example
No. x x (scaled) y
1 922 0.00 43.976
2 927 0.25 20.143
3 932 0.50 13.963
4 937 0.75 17.330
5 942 1.00 22.698
Figure 4.1 Su and Renaud (1996) Function, Sample Points, and Second-Order Response Surface Model
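The least squares fit just described can be sketched in a few lines using the scaled points of Table 4.1; with five points and three coefficients, the quadratic smooths the data rather than interpolating it.

```python
import numpy as np

# Ordinary least squares fit of a second-order (quadratic) response surface
# to the five scaled sample points of Table 4.1.
x = np.array([0.00, 0.25, 0.50, 0.75, 1.00])
y = np.array([43.976, 20.143, 13.963, 17.330, 22.698])

X = np.column_stack([np.ones_like(x), x, x**2])  # basis [1, x, x^2]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # b0, b1, b2
y_hat = X @ beta                                  # fitted values

# Nonzero residuals remain: the response surface does not pass through
# the sample points, unlike the interpolating kriging model built next.
residuals = y - y_hat
```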
A kriging model using a constant for the global model and the Gaussian correlation
function of Equation 2.17 is fit to the same five points in order to compare a kriging model
against a second-order response surface model. The process of fitting a kriging model is
described step by step in the remainder of this section.
In order to fit a kriging model to the five sample points, the x values are scaled to [0,1]
as shown in Table 4.1, and the response values are written as a column vector, y^T = {43.976,
20.143, 13.963, 17.330, 22.698}. Because a constant underlying global model is selected for
the kriging model, f is simply a column vector of ones: fT = {1, 1, 1, 1, 1}. Using a Gaussian
correlation function for the localized portion of the model, Equation 2.21 is particularized for this example as:
R(x_i, x_j) = exp(−θ |x_i − x_j|²),  i, j = 1, 2, 3, 4, 5, i ≠ j
R(x_i, x_j) = 1,  i = j    [4.2]
The correlation function for each sample point is then computed as follows:
i = 1, j = 1:  R(x_1, x_1) = 1
i = 1, j = 2:  R(x_1, x_2) = exp(−0.0625 θ)
  ⋮
i = 5, j = 5:  R(x_5, x_5) = 1
      | 1    e^(−0.0625θ)  e^(−0.25θ)    e^(−0.5625θ)  e^(−θ)       |
      |      1             e^(−0.0625θ)  e^(−0.25θ)    e^(−0.5625θ) |
R =   |                    1             e^(−0.0625θ)  e^(−0.25θ)   |
      | sym                              1             e^(−0.0625θ) |
      |                                                1            |
where θ is the unknown parameter which is used to fit the kriging model to the data.
The constant portion of the global model is now estimated using Equation 2.27, i.e., β̂ = (f^T R^(−1) f)^(−1) f^T R^(−1) y.
In order to find the maximum likelihood estimate for θ, the variance of the sample data from
the underlying constant global model must be estimated from Equation 2.28, which is repeated here:
σ̂² = (y − f β̂)^T R^(−1) (y − f β̂) / n_s    [4.4]
where n_s = 5. The MLE for θ is then found by maximizing Equation 4.5:
ℓ(θ) = −[n_s ln(σ̂²) + ln |R|] / 2    [4.5]
A plot of ℓ(θ) is given in Figure 4.2. The MLE, or “best” guess, for θ is the point which maximizes ℓ(θ).
Figure 4.2 Maximum Likelihood Estimation of θ (the maximum occurs at θ* = 6.924)
In this example, the MLE for θ is 6.924; hence, the “best” kriging model to fit these five
sample points when using a constant underlying global model and the Gaussian correlation
function is obtained when θ = 6.924. Substituting this value into Equation 4.2, the resulting correlation
matrix is thus:
      | 1     0.649  0.177  0.020  0.001 |
      |       1      0.649  0.177  0.020 |
R =   |              1      0.649  0.177 |
      | sym                 1      0.649 |
      |                            1     |
Now, new points are predicted using the scalar form of Equation 2.25, which is
ŷ(x) = β̂ + r^T(x) R^(−1) (y − f β̂)
where r^T(x) is the correlation vector of length 5 between an untried value of x and the sampled
data points {0.00, 0.25, 0.50, 0.75, 1.00}; its general form is r^T(x) = [R(x, x_1), R(x, x_2), ..., R(x, x_5)],
where R is the Gaussian correlation function. Notice that the x values for which a new y is to be
predicted are scaled to [0,1]; however, the predicted values of y are the actual values.
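The entire fitting procedure above can be collected into a short sketch that follows Equations 2.25, 2.27, 4.2, 4.4, and 4.5 directly; the grid search over θ is a simple stand-in for a proper one-dimensional maximizer.

```python
import numpy as np

# Fit the 1-D kriging model of Section 4.1: constant global model plus
# Gaussian correlation, with theta chosen by maximum likelihood.
x = np.array([0.00, 0.25, 0.50, 0.75, 1.00])           # scaled sites, Table 4.1
y = np.array([43.976, 20.143, 13.963, 17.330, 22.698])
f = np.ones_like(x)                                     # constant global model
n_s = len(x)

def corr_matrix(theta):
    d = x[:, None] - x[None, :]
    return np.exp(-theta * d**2)                        # Equation 4.2

def log_likelihood(theta):
    R = corr_matrix(theta)
    R_inv = np.linalg.inv(R)
    beta = (f @ R_inv @ y) / (f @ R_inv @ f)            # Equation 2.27
    resid = y - beta * f
    sigma2 = (resid @ R_inv @ resid) / n_s              # Equation 4.4
    _, logdet = np.linalg.slogdet(R)                    # stable ln|R|
    return -(n_s * np.log(sigma2) + logdet) / 2.0       # Equation 4.5

# Simple grid search for the MLE of theta; the dissertation reports
# theta* = 6.924 for this formulation.
grid = np.linspace(1.0, 20.0, 1000)
theta_star = grid[np.argmax([log_likelihood(t) for t in grid])]

def predict(x_new, theta):
    R_inv = np.linalg.inv(corr_matrix(theta))
    beta = (f @ R_inv @ y) / (f @ R_inv @ f)
    r = np.exp(-theta * (x_new - x)**2)                 # correlation vector r(x)
    return beta + r @ R_inv @ (y - beta * f)            # Equation 2.25
```

At the sample sites the predictor reproduces the data exactly, which is the interpolating behavior visible in Figure 4.3.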
The resulting kriging model—built using the Gaussian correlation function and an
underlying constant global model—is shown in Figure 4.3 along with the original function, the
second-order response surface, and the five sample points. Immediately evident from the figure
is the fact that the kriging model interpolates the data points, approximating the original function
better than the second-order response surface model which represents a least squares fit. In
this example, the interpolating capability of the kriging model allows it to predict the location of the optimum more accurately.
Figure 4.3 One Variable Example of Response Surface and Kriging Models
It is also important to notice that outside of the design space defined by the sample
points (920 ≤ x ≤ 945), neither model predicts as well as expected. The kriging model returns
to the underlying global model which is a constant in this example. This is typical behavior for a
kriging model; far from the design points, the kriging model returns to the underlying global
model because the influence of the sample points has “exponentially decayed away” outside of
the sampled region.
Sixteen evenly spaced points (not including the sample points) are taken from within the
sample range (920 ≤ x ≤ 945) to assess the accuracy of the two approximations. The
maximum absolute error, the average absolute error, and the root mean square error (MSE),
Equations 2.30-2.32, for the 16 validation points are listed in Table 4.2. Both raw values and
percentages of actual values are listed in the table for ease of comparison.
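The three validation measures can be computed with a small helper; in effect, Equations 2.30-2.32 are:

```python
import numpy as np

def validation_errors(y_true, y_pred):
    """Maximum absolute error, average absolute error, and root MSE:
    the three validation measures of Equations 2.30-2.32."""
    e = np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float))
    return float(e.max()), float(e.mean()), float(np.sqrt(np.mean(e**2)))
```

Dividing each measure by the corresponding actual response gives the percentage form reported in the tables.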
Based on this error analysis, the kriging model approximates the original function better
because it has a lower root MSE, average absolute error, and maximum absolute error. A
more involved example to compare further the predictive capability of second-order response surface and kriging models is presented in the next section.
The design of an aerospike nozzle has been selected as the preliminary test problem for
comparing the predictive capability of response surface and kriging models. The linear
aerospike rocket engine is the propulsion system proposed for the VentureStar reusable launch
vehicle (RLV) which is illustrated in Figure 4.4. The VentureStar RLV is one of the concepts
proposed for NASA’s Reusable Launch Vehicle program.
Figure 4.4 VentureStar RLV with Aerospike Nozzle (Korte, et al., 1997)
The aerospike rocket engine consists of a rocket thruster, cowl, aerospike nozzle, and
plug base regions as shown in Figure 4.5. The aerospike nozzle is a truncated spike or plug
nozzle that adjusts to the ambient pressure and integrates well with launch vehicles (Rao, 1961).
The flow field structure changes dramatically from low altitude to high altitude on the spike
surface and in the base region (Hagemann, et al., 1996; Mueller and Sule, 1972; Rommel, et
al., 1995). Additional flow is injected in the base region to create an aerodynamic spike
(Iacobellis, et al., 1967) which gives the aerospike nozzle its name and increases the base
pressure.
Figure 4.5 Aerospike Components and Flow Field Characteristics
(Korte, et al., 1997)
The analysis of the nozzle involves two disciplines: aerodynamics and structures; there is
an interaction between the structural displacements of the nozzle surface and the pressures
caused by the varying aerodynamic effects. Thrust and nozzle wall pressure calculations are
made using computational fluid dynamics (CFD) analysis and are linked to a structural finite
element analysis model for determining nozzle weight and structural integrity. A mission average
engine specific impulse and engine thrust/weight ratio are calculated and used to estimate the vehicle gross lift-off weight (GLOW), as illustrated in
Figure 4.6. Korte, et al. (1997) provide additional details on the aerodynamic and structural
analyses.
Figure 4.6 Multidisciplinary Domain Decomposition for Aerospike Nozzle (Korte, et
al., 1997)
For this study, three design variables are considered: starting (thruster) angle, exit (base)
height, and (base) length as shown in Figure 4.7. The thruster angle (a) is the entrance angle of
the gas from the combustion chamber onto the nozzle surface; the base height (h) and length (l)
refer to the solid portion of the nozzle itself. A quadratic curve defines the aerospike nozzle
surface profile based on the values of thruster angle, height, and length.
Figure 4.7 Nozzle Geometry Design Variables (Korte, et al., 1997)
Bounds for the design variables are set to produce viable nozzle profiles from the
quadratic model based on all combinations of thruster angle, height, and length within the design
space. Second-order response surface models and kriging models are developed and validated
for each response (thrust, weight, and GLOW) in the next section; optimization of the aerospike
nozzle using the response surface and kriging models for different objective functions is
presented in Section 4.2.4.
4.2.1 Metamodeling of the Aerospike Nozzle Problem
The data used to fit the response surface and kriging models is obtained from a 25 point
random orthogonal array (Owen, 1992). The use of these orthogonal arrays in this preliminary
example is based, in part, on the success of the work by Booker, et al. (1995) and the
recommendations of Barton (1994). The actual sample points are illustrated in Figure 4.8 and
are scaled to fit the three dimensional design space defined by the bounds on the thruster angle, base height, and length.
Figure 4.8 25 Point Random Orthogonal Array (axes: angle, height, length)
The response surface models for weight, thrust, and GLOW are fit to the 25 sample points
using ordinary least squares regression techniques and the software package JMP® (SAS,
1995). The resulting second-order response surface models are given in Equations 4.7-4.9.
The equations are scaled against the baseline design to protect the proprietary nature of some of
the data.
The R², R²adj, and root MSE values for each of these second-order response surface
models are summarized in Table 4.3. As evidenced by the high R² and R²adj values and low
root MSE values, the second-order polynomial models appear to capture a large portion of the
observed variance.
Measure     Weight   Thrust   GLOW
R²          0.986    0.998    0.971
R²adj       0.977    0.996    0.953
root MSE    1.12%    0.01%    0.25%
Kriging Models for the Aerospike Nozzle Problem
The kriging models are built from the same 25 sample points used to fit the response surface
models. In this preliminary example, a constant term for the global model and a Gaussian
correlation function, Equation 2.21, for the local departures are chosen.
Initial investigations revealed that a single θ parameter was insufficient to model the data
accurately due to scaling of the design variables (a similar problem is encountered in (Giunta, et
al., 1998)). Therefore, a simple 3-D exhaustive grid search with a refinable step size is used to
find the maximum likelihood estimates for the three θ parameters needed to obtain the “best”
kriging model. The resulting maximum likelihood estimates of the three θ parameters for the
weight, thrust, and GLOW models are summarized in Table 4.4; note that these values are for
the scaled design space.
Table 4.4 Maximum Likelihood Estimates of the θ Parameters
MLE         Weight   Thrust   GLOW
θ_angle     0.548    0.30     3.362
θ_height    1.323    0.50     2.437
θ_length    2.718    0.65     0.537
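A single θ treats every dimension identically; the anisotropic Gaussian correlation used here assigns one θ per design variable, so correlation can decay at different rates along angle, height, and length. As a sketch, using the weight-model values from Table 4.4:

```python
import numpy as np

# Anisotropic Gaussian correlation for the three nozzle design variables:
# one theta per dimension. Theta values: weight model, Table 4.4
# (thruster angle, base height, length).
theta_weight = np.array([0.548, 1.323, 2.718])

def corr(x1, x2, theta):
    """R(x1, x2) = exp(-sum_k theta_k * (x1_k - x2_k)^2)."""
    diff = np.asarray(x1, float) - np.asarray(x2, float)
    return float(np.exp(-np.sum(theta * diff**2)))
```

A unit separation along length (θ = 2.718) decorrelates the weight response much faster than the same separation along angle (θ = 0.548), which a single isotropic θ could not express.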
With these parameters for the Gaussian correlation function, the kriging models now are
specified fully. A new point is predicted using these θ values and the 25 sample points in
combination with Equations 2.25-2.27. The accuracy of the response surface and kriging
models is assessed next.
An additional 25 randomly selected validation points are used to verify the accuracy of
the response surface and kriging models. Error is defined as the difference between the actual
response from the computer analysis, y(x), and the predicted value, ŷ(x), from the response
surface or kriging model. The maximum absolute error, the average absolute error, and the root
MSE, see Equations 2.30-2.32, for the 25 randomly selected validation points are summarized
in Table 4.5.
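These three error measures are straightforward to compute over the validation set; a minimal sketch following the definitions cited above (Equations 2.30-2.32):

```python
import numpy as np

def error_measures(y_true, y_pred):
    """Max absolute error, average absolute error, and root MSE over a set of
    validation points (cf. Eqs. 2.30-2.32)."""
    e = np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float))
    return {"max_abs": float(e.max()),
            "avg_abs": float(e.mean()),
            "rmse": float(np.sqrt(np.mean(e ** 2)))}
```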
For the weight and GLOW responses, the kriging models have lower maximum
absolute errors and lower root MSEs than the response surface models; however, the average
absolute error is slightly larger for the kriging models. For thrust, the response surface models
are slightly better than the kriging models according to the values in the table; the maximum
absolute error and root MSE are slightly less while the average absolute errors are essentially
the same. It is not surprising that the response surface model predicts thrust better; it has a very
high R2 value, 0.998, and low root MSE, 0.01%. It is reassuring to note, however, that the
kriging model, despite using only a constant term for the underlying global model, is only slightly
less accurate than the corresponding response surface model. In summary, it appears that both
models predict each response reasonably well, with the kriging models having a slight advantage
in overall accuracy because of the lower root MSE values. A graphical comparison is
presented in the next section to examine the accuracy of the response surface and kriging
models further.
A graphical comparison of the response surface and kriging models is performed to visualize differences in
the two approximation models. In Figures 4.9 and 4.10, three-dimensional contour plots of thrust,
weight, and GLOW as a function of thruster angle, length, and height are given. In each figure,
the same contour levels are used for the response surface and kriging models so that the shapes of the contours can be compared directly.
Figure 4.9 Response Surface and Kriging Models for Thrust and Weight: (a) Thrust, (b) Weight
In Figure 4.9a, the contours of thrust for the response surface and kriging models are
very similar. As evidenced by the high R2 and low root MSE values, the response surface
models should fit the data quite well, and it is reassuring to note that the kriging models resemble
the response surface models even though the underlying global model for the kriging models is
just a constant term. This demonstrates the power and flexibility of the “local” deviations of the
kriging model.
The contours of the response surface and kriging models in Figure 4.9b are also very
similar, but the influence of the localized perturbations caused by the Gaussian correlation
function can be seen in the kriging model for weight. The error analysis from the previous
section indicated that the kriging model for weight is slightly more accurate than the second-order
response surface model, which may result from these small non-linear localized variations in
the weight response.
The general shape of the GLOW contours is the same in Figure 4.10; however, the size
and shape of the different contours, particularly along the length axis, are quite different. The
end view along the length axis in Figure 4.10b further highlights the differences between the two
models. Notice also in Figure 4.10b that the kriging model predicts a minimum GLOW located
within the design space centered around Height = -0.8, Angle = 0, along the axis defined by 0.2
≤ Length ≤ 0.8; this minimum was verified through additional experiments and is assumed to be
the actual minimum.
From the graphical and error analyses of the response surface and kriging models, it
appears that both models fit the data quite well. In the next section the accuracy of both
metamodels is put to the test. Four optimization problems are formulated and solved using each
of the metamodels and the efficiency and accuracy of the results are compared as a final test of
model adequacy.
The true test of the accuracy of the response surface and kriging models comes when
the approximations are used during optimization. It is paramount that any approximations used
in optimization prove reasonably accurate, lest they lead the optimization algorithm into regions
of bad designs. Trust Region approaches (see e.g., Lewis, 1996; Rodriguez, et al., 1997) and
the Model Management framework (see e.g., Alexandrov, et al., 1997; Booker, et al., 1995)
have been developed to ensure that optimization algorithms are not led astray by inaccurate
approximations. In this work, however, the focus has been on developing the approximation
models, particularly the kriging models, and not on the optimization itself.
Four different optimization problems are formulated and solved to compare the
accuracy of the response surface and kriging models, see Table 4.6: (1) maximize thrust, (2)
minimize weight, (3) minimize GLOW, and (4) maximize thrust/weight ratio. The first two
objective functions in Table 4.6 represent traditional single objective, single discipline
optimization problems. The second two objective functions are more characteristic of
tradeoffs between the aerodynamics and structures disciplines. As seen in the table, for each
objective function, constraint limits are placed on the remaining responses; for instance,
constraints are placed on the maximum allowable weight and GLOW and the minimum
allowable thrust/weight ratio when maximizing thrust. However, none of the constraints are active at the resulting optima.
Each optimization problem is solved using: (a) the second-order response surface
models and (b) the kriging model approximations for thrust, weight, and GLOW. The
optimization is performed using the Generalized Reduced Gradient (GRG) algorithm in OptdesX
(Parkinson, et al., 1998). Three different starting points are used for each objective function
(the lower, middle, and upper bounds of the design variables) to assess the average number of
analysis and gradient calls necessary to obtain the optimum design within the given design space.
The same parameters (i.e., step size, tolerance, constraint violation, etc.) are used within the
GRG algorithm for each optimization. The optimization results are summarized in Table 4.7.
Design variable and response values have been scaled as a percentage of the baseline design.
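The structure of these optimization runs can be sketched as follows. The quadratic stand-in metamodels and the constraint limits below are hypothetical (the actual fitted coefficients are proprietary, and the dissertation uses a GRG solver); a dense grid search stands in for the optimizer so the sketch stays dependency-free:

```python
import numpy as np

# Hypothetical quadratic stand-ins for the fitted metamodels, in scaled units
# (1.0 = baseline); these are NOT the dissertation's actual response surfaces.
def thrust(x): return 1.0 + 0.10 * x[0] - 0.05 * x[1] ** 2
def weight(x): return 1.0 - 0.08 * x[2] + 0.04 * x[0] ** 2
def glow(x):   return 1.0 + 0.02 * x[0] ** 2 + 0.03 * x[1] ** 2 - 0.05 * x[2]

def maximize_thrust(n=21):
    """Problem 1 of Table 4.6: maximize thrust subject to limits on the other
    responses (the 1.05 and 1.02 limits here are assumed for illustration)."""
    grid = np.linspace(-1.0, 1.0, n)          # scaled angle, height, length
    best, best_x = -np.inf, None
    for a in grid:
        for h in grid:
            for l in grid:
                x = (a, h, l)
                if weight(x) <= 1.05 and glow(x) <= 1.02:
                    if thrust(x) > best:
                        best, best_x = thrust(x), x
    return best, best_x
```

A gradient-based solver such as GRG would reach the same constrained optimum in far fewer function evaluations, which is why the number of analysis and gradient calls is tracked in Table 4.7.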
Table 4.7 Aerospike Nozzle Optimization Results Using Metamodels
The following observations are made based on the data in Table 4.7.
• Average number of analysis and gradient calls: In general, the response surface
models require fewer analysis and gradient calls to achieve the optimum than the kriging
models do. This can be attributed, in part, to the fact that the response surface models
are simple second-order polynomials; the kriging models are more complex, non-linear
functions as evidenced in Figure 4.9 and Figure 4.10.
• Convergence rates: Although not shown in the table, optimization using the response
surface models tends to converge more quickly than when using kriging models. This
can be inferred from the number of gradient calls which is one to three calls fewer for
the response surface models than the kriging models.
• Optimum designs: The optimum designs obtained from the response surface and
kriging models are essentially the same for each objective function, indicating that both
approximations send the optimization algorithm in the same general direction. The
largest discrepancy is the length for the minimize GLOW optimization; response surface
models predict the optimum GLOW occurs at the upper bound on length (+1) while the
kriging models yield 0.676. This difference is evident from Figure 4.10. Furthermore, it
has been verified through additional experiments that the GLOW value obtained using
the kriging models is the actual minimum.
• Predicted optima and prediction errors: To check the accuracy of the predicted
optima, the optimum design values for angle, height, and length are used as inputs into
the original analysis codes and the percentage difference between the actual and
predicted values is computed. The prediction error is less than 5% for all cases and is
0.5% or less in three quarters of the results, indicating close agreement between the
metamodels and the actual analyses.
In summary, the response surface and kriging approximations yield comparable results
with minimal difference in predictive capability. It is worth noting that the kriging models
perform as well as the second-order response surface models even though the global
portion of the kriging model is only a constant. This helps to verify Hypothesis 2 which
states that kriging models are a viable metamodeling technique for building approximations of
deterministic computer analyses. However, several questions remain unanswered.
• Correlation function: A Gaussian correlation function is utilized in this example to fit
the data, but is this the best correlation function of the five being considered in this
dissertation?
• Model validation: Because kriging models interpolate the data, R2 values and residual
plots cannot be used to assess model accuracy. In this example an additional 25
validation points are employed to assess accuracy; however, other validation
approaches exist. One such approach which does not require additional validation
points is leave-one-out cross validation (Mitchell and Morris, 1992) mentioned in
Section 2.4.2. Does cross validation provide a sufficient assessment of model
accuracy?
A study of six engineering test problems is set up and performed in the next chapter to answer
these questions. In closing this chapter, a brief look ahead to that study is offered in the next
section.
In an attempt to determine the types of applications for which kriging is useful, several
engineering examples are introduced in the next chapter to serve as test problems to establish
the utility of kriging and verify Hypothesis 2. In addition to testing Hypothesis 2, several
classical and space filling experimental designs are compared and contrasted in an effort to test
Hypothesis 3 to determine if space filling experimental designs are better suited for building
kriging metamodels.
CHAPTER 5
Kriging/DOE Testbed
In this chapter, Hypotheses 2 and 3 are tested explicitly, verifying the utility of kriging
and space filling experimental designs for building metamodels of deterministic computer
analyses. A pictorial overview and specific details of the study are given in Section 5.1. Six
engineering test problems are introduced in Section 5.1.1 to benchmark kriging and space filling
designs and verify Hypotheses 2 and 3. In Sections
5.1.2, 5.1.3, and 5.1.4, the factors, experimental designs, and responses in the study are
explained. Analysis of variance of the data and response correlation are presented in the
precursory data analysis in Section 5.2. Section 5.3 contains the results of testing Hypothesis 2
and a discussion of the ramifications of the results; the results and discussion regarding
Hypothesis 3 follow in Section 5.4. A summary of the study and its relevance to the rest of the dissertation closes the chapter.
5.1 OVERVIEW OF KRIGING/DOE STUDY AND PROBLEM TESTBED
Suppose that you have a computer analysis which is expensive to run and you desire to replace it with a metamodel, a kriging one in particular.
Assume that there are k design variables which you wish to include in the metamodel. What is
the best type of experimental design you should use to query the simulation to generate data to build
an accurate kriging metamodel? How many sample points should you use? What type of
correlation function should you use to obtain the best predictor? Lastly, how can you best
assess the accuracy of the resulting model?
The objective in this study is to answer precisely these questions. Given a series of test
problems (i.e., analyses), determine the best experimental design, sample size, and correlation
function to generate the most accurate model and determine how best to validate it. Toward
this end, a testbed of six engineering examples—the design of a three-bar truss, a two-bar truss,
a spring, a two-member frame, a welded beam, and a pressure vessel as introduced in Section
5.1.1 and Appendix D—has been created to test the utility of kriging and space filling
experimental designs. A pictorial overview of the kriging/DOE study is given in Figure 5.1; the specific elements of the study are described in the sections which follow.
Contained in these six engineering examples are a total of 26 different types of equations
which are used to test the utility of kriging at metamodeling deterministic computer analyses. If a
kriging model can approximate each of these equations accurately, then Hypothesis 2 is
considered to be verified. Moreover, for each example, five correlation
functions are used to construct five different kriging metamodels in an effort to determine which
correlation function yields the most accurate predictor.
Meanwhile, for each example several classical and space filling experimental designs are
used to construct each kriging metamodel. By analyzing the accuracy of the resulting kriging
metamodel, the experimental design which yields the most accurate predictor, on average, can
be determined. In this regard, Hypothesis 3 is tested explicitly to verify that space filling
experimental designs yield more accurate kriging metamodels than do classical experimental
designs. And while Hypotheses 2 and 3 are being tested, the usefulness of cross validation root
mean square error as an assessment of model accuracy is also investigated.
[Figure 5.1: Pictorial overview of the kriging/DOE study. Problem testbed (§5.1.1): six test
problems — the 3-bar truss and 2-bar truss (2 variable problems), the spring and 2-member
frame (3 variable problems), and the welded beam and pressure vessel (4 variable problems) —
containing Equations (EQN) 1-26: 1-4 for the 3-bar truss, 5-7 for the 2-bar truss, 8-14 for the
spring, 15-17 for the frame, 18-22 for the welded beam, and 23-26 for the pressure vessel.
Factors and levels (§5.1.2), with sample sizes (NSAMP, §5.1.3) of 7-14 for the 2 variable
problems, 13-25 for the 3 variable problems, and 20-41 for the 4 variable problems.]
In total, 7905 kriging models are constructed: one for each correlation function
(CORFCN) for each experimental design (DOE) for each sample size (NSAMP) for each
equation (EQN) in each problem. As an example, the arrows in Figure 5.1 trace Equation 7 in
the two-bar truss problem. For EQN 7, there are 15 possible experimental design (DOE)
choices; in this case, the minimax Latin hypercube design (mnmxl) is being considered. For this
design, there are several possible choices for NSAMP, ranging from 7-14, because this is a two
variable problem. Using 10 sample points as an example, at the next level there are five
correlation functions (CORFCN) which can be used to build a kriging model; the Gaussian
correlation function is highlighted in this example. Finally, three measures of model accuracy are
computed for the kriging model resulting from this particular combination of EQN, DOE,
NSAMP, and CORFCN: max. abs. error, root mean square error (RMSE), and cross
validation root mean square error (CVRMSE), which are “normalized” by the corresponding sample range.
After a precursory analysis of the data in Section 5.2, these three error measures of
model accuracy are used to test Hypotheses 2 and 3 explicitly. As shown in Figure 5.1:
Hypothesis 2 is tested in Section 5.3 by isolating the effect of the correlation function
(CORFCN) on the error measures of accuracy of the resulting kriging model, and
Hypothesis 3 is tested in Section 5.4 by isolating the effects of experimental design (DOE)
and sample size (NSAMP) on the error measures of accuracy of the resulting kriging
model.
Six test problems were selected from the literature to provide a testbed for assessing the
utility of kriging and several different space filling experimental designs. These problems are not
meant to be all inclusive; rather, they are taken as representative of typical analyses encountered
in mechanical design. The analysis of these problems is simple enough not to warrant building
kriging models of the responses; however, these problems have been selected because:
a. they have been well studied and the behavior of the system and the underlying analysis
equations are known,
c. they have been used by other researchers to test their own metamodeling strategies and
algorithms.
Furthermore, the optimum solution for each problem is also known; however, a more extensive
error analysis is employed to assess the accuracy of the kriging models (see Section 5.1.4).
In the following sections, each example is described along with its pertinent constraints,
design variable bounds, and the objective function; note that a kriging model is constructed for
each constraint and objective function in each problem. The values of the parameters in the
equations (i.e., all of the letters and symbols which are not explicitly stated as being design
variables) are given in the referenced sections of Appendix D which contain the complete problem formulations.
Two Variable Problems
The two variable problems investigated are the design of a two-bar truss (Figure 5.2) and of a
symmetric three-bar truss (Figure 5.3). The problem formulations (objective functions,
constraints, and bounds) follow each figure. A complete description of the two-bar and three-bar
truss problems is given in Appendix D, Sections D.1 and D.2.
[Figures 5.2 and 5.3: the two-bar truss (tube diameter D, height H, half-span B, load 2P) and
the symmetric three-bar truss (cross section areas A1, A2, A3; dimension N; loads P1, P2).]

Two-Bar Truss (Figure 5.2)
Find:
• Tube diameter, D
• Height of the truss, H
Satisfy:
• Constraints:
  g1(x) = π²E(D² + T²)/[8(B² + H²)] − P(B² + H²)^(1/2)/(πTDH) ≥ 0
  g2(x) = σy − P(B² + H²)^(1/2)/(πTDH) ≥ 0
• Bounds:
  0.5 in. ≤ D ≤ 5.0 in.
  5.0 in. ≤ H ≤ 50 in.
Minimize:
Weight, W(x) = 2ρπDT(B² + H²)^(1/2)
For more information:
• see, e.g., (Schmit, 1981)
• see Appendix D, Section D.1

Three-Bar Truss (Figure 5.3)
Find:
• Cross section area, A1 = A3
• Cross section area, A2
Satisfy:
• Constraints:
  g1(x) = 20,000 − 20,000(A2 + √2·A1)/(2A1A2 + √2·A1²) ≥ 0
  g2(x) = 20,000 − 20,000·√2·A1/(2A1A2 + √2·A1²) ≥ 0
  g3(x) = 15,000 − 20,000·A2/(2A1A2 + √2·A1²) ≥ 0
• Bounds:
  0.5 in² ≤ A1 = A3 ≤ 1.2 in²
  0.0 in² ≤ A2 ≤ 4.0 in²
Minimize:
Weight, W(x) = ρN(2√2·A1 + A2)
For more information:
• see, e.g., (Schmit, 1981)
• see Appendix D, Section D.2
Three Variable Problems
The three variable problems are the design of a compression spring (Figure 5.4) and a two-member
frame (Figure 5.5). Complete descriptions of these problems are given in Appendix D,
Sections D.3 and D.4, respectively.
[Figures 5.4 and 5.5: the compression spring and the two-member frame (nodal displacements
U1, U2, U3; member lengths L; load P; hollow rectangular cross section of width d, height h,
and wall thickness t).]

Compression Spring (Figure 5.4)
Find:
• Number of active coils, N
• Mean coil diameter, D
• Wire diameter, d
Satisfy:
• Constraints:
  g1(x) = S − 8CfFmaxD/(πd³) ≥ 0
  g2(x) = lmax − lf ≥ 0
  g3(x) = δpm − δp ≥ 0
  g4(x) = (Fmax − Fload)/K − δw ≥ 0
  g5(x) = Dmax − D − d ≥ 0
  g6(x) = C − 3 ≥ 0
• Bounds:
  3 ≤ N ≤ 30
  1.0 in. ≤ D ≤ 6.0 in.
  0.2 in. ≤ d ≤ 0.5 in.
Minimize:
Volume, V(x) = π²Dd²(N + 2)/4
For more information:
• see, e.g., (Siddall, 1982)
• see Appendix D, Section D.3

Two-Member Frame (Figure 5.5)
Find:
• Frame width, d
• Frame height, h
• Frame wall thickness, t
Satisfy:
• Constraints:
  g1(x) = (σ1² + 3τ²)^(1/2) ≤ 40,000
  g2(x) = (σ2² + 3τ²)^(1/2) ≤ 40,000
• Bounds:
  2.5 in. ≤ d ≤ 10 in.
  2.5 in. ≤ h ≤ 10 in.
  0.1 in. ≤ t ≤ 1.0 in.
Minimize:
Volume, V(x) = 2L(2dt + 2ht − 4t²)
For more information:
• see (Arora, 1989)
• see Appendix D, Section D.4
Four Variable Problems
The four variable problems being investigated are the design of a welded beam, Figure 5.6, and
design of a pressure vessel, Figure 5.7. The problem formulations follow each figure.
Complete descriptions are given in Appendix D, Sections D.5 and D.6, respectively.
[Figures 5.6 and 5.7: the welded beam (load F applied at distance L from the weld) and the
pressure vessel (cylindrical shell of radius R and length L capped by spherical heads).]

Welded Beam (Figure 5.6)
Find:
• Weld height, h
• Weld length, l
• Bar thickness, t
• Bar width, b
Satisfy:
• Constraints:
  g1(x) = [(τ′)² + 2τ′τ″cosθ + (τ″)²]^(1/2) ≤ τd
  g2(x) = 6FL/(bt²) ≤ 30,000
  g3(x) = [4.013√(EIα)/L²][1 − (t/(2L))√(EI/α)] ≥ 6000
  g4(x) = 4FL³/(Et³b) ≤ 0.25
• Bounds:
  0.125 in. ≤ h ≤ 2.0 in.
  2.0 in. ≤ l ≤ 10.0 in.
  2.0 in. ≤ t ≤ 10.0 in.
  0.125 in. ≤ b ≤ 2.0 in.
Minimize:
F(x) = (1 + c3)h²l + c4tb(L + l)
For more information:
• see (Ragsdell and Phillips, 1976)
• see Appendix D, Section D.5

Pressure Vessel (Figure 5.7)
Find:
• Cylinder radius, R
• Cylinder length, L
• Shell thickness, Ts
• Spherical head thickness, Th
Satisfy:
• Constraints:
  g1(x) = Ts − 0.0193R ≥ 0
  g2(x) = Th − 0.00954R ≥ 0
  g3(x) = πR²L + (4/3)πR³ − 1.296E6 ≥ 0
• Bounds:
  25 in. ≤ R ≤ 150 in.
  25 in. ≤ L ≤ 240 in.
  1.0 in. ≤ Ts ≤ 1.375 in.
  0.625 in. ≤ Th ≤ 1.0 in.
Minimize:
F(x) = 0.6224TsRL + 1.7781ThR² + 3.1661Ts²L + 19.84Ts²R
For more information:
• see, e.g., (Sandgren, 1990)
• see Appendix D, Section D.6
Taken together, these six problems provide a wide variety of functions to approximate
since a kriging model is built for each objective function and constraint for each problem. In
total, there are 26 different equations contained in these six problems, ranging from simple linear
functions to reciprocal square roots; some equations even require the inversion of a finite
element matrix (see Section D.4 for the analysis of the two-member frame). With these six
problems as the testbed for verifying Hypotheses 2 and 3, the factors (and corresponding levels)
for the kriging/DOE study are described in the next section.
The three basic factors considered in this experiment are listed in Table 5.1: CORFCN
refers to the correlation function used in the kriging model, EQN refers to the equation being
approximated, and DOE refers to the type of experimental design being utilized to sample the
equation to provide data to fit the model. The corresponding levels for each factor also are listed in Table 5.1:
• CORFCN has 5 levels of interest based on the correlation functions being studied (refer
to Table 2.1, Equations 2.20-2.24); the correlation function associated with each level
is given in the first two columns of Table 5.1.
• EQN has 26 levels based on the total number of equations (i.e., objective functions and
constraints) in the six test problems; when showing the levels for EQN, the objective
function for each problem is singled out from the constraints, see the middle two
columns of Table 5.1.
• DOE has 15 levels based on all of the classical and space filling experimental designs
introduced in Section 2.4.3 for investigation; the acronyms and corresponding names of
each design are listed in the last two columns of Table 5.1.
Every effort is made to ensure that the observations of each factor level in the
experiment are properly balanced; however, some factors (and levels) are beyond control.
Each level of CORFCN given in Table 5.1 occurs an equal number of times in each problem;
hence, it is easy to examine the effect of the different correlation functions on the overall
accuracy of the kriging model (see Section 5.3.1). The factor EQN is used to isolate the
functions being considered and is utilized in Section 5.3.2 when the accuracy of the kriging
model is examined for each pair of problems. As such, both of these factors are relatively well-
balanced in the design. The levels of DOE, however, are not well-balanced because the fifteen
levels for DOE do not appear equally in each problem; for example, there is no Box-Behnken design for the two variable problems.
Table 5.1 Factors and Levels for the Kriging/DOE Study (excerpt)

EQN levels (continued):
  Three Variable Problems — Spring: 8 V(x); 9-14 g1(x)-g6(x)
                            Two-Member Frame: 15 V(x); 16-17 g1(x)-g2(x)
  Four Variable Problems —  Welded Beam: 18 F(x); 19-22 g1(x)-g4(x)
                            Pressure Vessel: 23 F(x); 24-26 g1(x)-g3(x)

DOE levels (continued):
  hamss   Hammersley sequence
  mnmxl   Minimax Latin hypercube
  mxmnl   Maximin Latin hypercube
  oalhd   Orthogonal array-based Latin hypercube
  oarry   Orthogonal array
  oplhd   Optimal Latin hypercube
  rnlhd   Random Latin hypercube
  unifd   Uniform design
  yelhd   Orthogonal Latin hypercube
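As an illustration of one of the space filling designs listed, a minimal Hammersley sequence generator is sketched below; the choice of successive prime bases for the radical-inverse coordinates is the standard construction and is assumed here:

```python
def hammersley(n, k):
    """Hammersley sequence of n points in [0,1)^k: the first coordinate is i/n,
    and the remaining k-1 coordinates are radical-inverse (van der Corput)
    values of i in successive prime bases."""
    primes = [2, 3, 5, 7, 11, 13]

    def radical_inverse(i, base):
        # Reflect the base-`base` digits of i about the radix point.
        f, r = 1.0 / base, 0.0
        while i > 0:
            r += f * (i % base)
            i //= base
            f /= base
        return r

    return [[i / n] + [radical_inverse(i, primes[j]) for j in range(k - 1)]
            for i in range(n)]
```

Low-discrepancy sequences such as this spread points evenly through the design space without the grid structure of a factorial design.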
To make things even more complicated, the number of sample points within each design
depends on the type of DOE considered and the number of variables in the problem. For
instance, a CCD for the two variable problems has 2² + 2·2 + 1 = 9 points while a random
Latin hypercube can have any number of sample points. Hence, great care must be taken when
analyzing the effects of DOE because of the biasing which occurs due to unbalanced sample
sizes in the experiment. This is discussed in more detail in the next section which contains a
complete listing of which experimental designs (and corresponding sample sizes) are used in each problem.
5.1.3 Experimental Design Choices for Test Problems
A key consideration in this study is the number of sample points used. How is the number of points to be determined for a given design? For the
two types of classical designs utilized in this dissertation—CCDs and Box-Behnken designs—
the number of points essentially is fixed once the number of factors is specified. Fractional
factorial designs within a CCD are not considered for these problems because they contain so
few variables. Unlike the CCDs and Box-Behnken designs, for most space filling designs the
number of points is not dictated by the number of factors and can be any number within reason.
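The fixed size of a full-factorial CCD is easy to compute, which is what makes it a convenient baseline for the space filling designs:

```python
def ccd_points(k):
    """Number of points in a full-factorial central composite design for k
    factors: 2**k factorial (corner) points, 2*k axial points, and 1 center point."""
    return 2 ** k + 2 * k + 1
```

This reproduces the sizes quoted in the text: 9 points for two factors, 15 for three, and 25 for four.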
Therefore, in order to determine the number of points used in a space filling design, a
CCD with the same number of factors is used to determine the baseline number of points, e.g.,
for three factors, a CCD requires 15 points, and the number of points used in all space filling
designs for three factors would be selected to be as close to 15 as possible. However, because
some space filling designs can have a variable number of sample points, a variety of sample sizes
for each design are considered in order to see if fewer or slightly more points provides an
improved fit. As a guideline, an upper bound on the number of points of about 1.5 times the
number prescribed by the baseline CCD is employed. This factor of 1.5 is primarily based on
the recommendations of Giunta, et al. (1994) who found that for small problems (i.e., fewer
than about five factors) the variance of a second-order response surface model leveled off when
the number of sample points was about 1.5 times the number of terms in the polynomial model.
This number serves as a guideline in this work despite the fact that kriging models do not have a fixed number of model terms as polynomial response surface models do.
How important is the number of design points when picking an experimental
design? The answer: very important. In order to compare the utility of different experimental
designs properly, it is important to use the same number of sample points because a design with
more sample points is expected to provide more information, possibly resulting in a more
accurate model. Therefore, when designs do not have the same number of points, it is
impossible to determine if an improvement in model accuracy is from the design itself (i.e.,
spacing of the points in the design space) or from the number of sample points. However, in
some cases it is extremely difficult, if not impossible, to have two different designs which have
the same number of points. For instance, a three factor CCD has 15 points, a three factor Box-
Behnken has 13 (since replicates are not used), and a strength 2 randomized OA has either 9,
16, or 25 points since it is restricted to q² points where q is the number of levels and is
restricted to be a prime power. Despite these difficulties, every effort is made to make the
sample sizes overlap as much as possible from one design to the next. The experimental designs
and corresponding sample sizes for each pair of problems are described in the following
sections.
For the two variable problems (the two-bar and three-bar trusses), nine types of experimental
designs are considered, see Table 5.2. Of these nine types of designs, there are 51 unique
designs because each design which has a different number of points is considered a unique
design. For instance, a seven point Latin hypercube and an eight point Latin hypercube are
unique designs because they have different sample sizes even though they are both Latin
hypercube designs.
Several of the designs are based on random permutations, as denoted by the superscript (†) in the table. To minimize the effects of this randomness, each of these designs is
randomized three times, and the resulting error measures are averaged over all three
randomizations for that specific design to prevent a design from yielding a poor model because
of its randomly chosen levels. As a result, there are a total of 71 designs which are fit for each of
the two variable problems. Finally, notice that neither Box-Behnken designs nor orthogonal
arrays are included in these problems; there is no Box-Behnken design for two factors, and a
nine point orthogonal array for two factors is a 3 x 3 grid, the same as a face-centered central composite design.
Eleven types of experimental designs for a total of 63 unique designs are considered (as shown
in Table 5.3) for the three variable spring and two-member frame test problems. In all, there
are 92 total designs constructed for each three variable problem once the three randomizations
of the Latin hypercube, orthogonal Latin hypercube, orthogonal array, and orthogonal array-based Latin hypercube designs are included.
Notice that a 13 point Box-Behnken design is included in the set of designs for the three
variable problems along with two randomized orthogonal array designs: a 16 point OA and a 25
point OA. One thing to note about these designs (and the orthogonal array-based Latin
hypercubes as well) is that the number of points in the design is limited to q² sample points,
where q is a power of a prime number. Thus, only q = 4 (16 point) and q = 5 (25 point) OAs
are considered for the three variable problems in order to maintain a fairly consistent number of
points between designs.
For the two four variable problems (the welded beam and pressure vessel), 66 unique designs from eleven types of experimental
designs are employed (see Table 5.4). Including the repetitions of the designs with random
permutations, a total of 102 designs are examined for each of these problems.
† Each design is instantiated three times because it is based on a random permutation, and the resulting error measures are averaged over all three randomizations for that design.
Notice in Table 5.4 that only four maximin Latin hypercubes are considered: 22, 25, 26,
and 28 point designs. This is because the simulated annealing (Morris and Mitchell, 1995) used
to create these designs is not very robust in generating large four factor designs, and large four
factor designs are not listed in (Morris and Mitchell, 1992). In addition, three orthogonal arrays
are employed: 16, 25, and 32 point designs. The 16 and 25 point designs are strength 2
designs; the 32 point OA design is a strength 3 design with 2q3 points and levels 0, ..., q-1. So
while there are more points with the 32 point OA design than in the 25 point OA design, the
number of unique factor levels being considered in the 32 point OA design is actually less than in the 25 point OA design.
In summary, the experimental designs and corresponding levels listed in Table 5.2
through Table 5.4 are used to generate data to build kriging models for each equation in each
problem. Each kriging model is cross validated, and its accuracy is further assessed using a set
of validation points which is independent of the design and number of samples. The end result
is three measures of model accuracy which
provide the responses for this study as explained in the next section.
As shown in Figure 5.1, there are three responses in the kriging/DOE study:
1. cross validation root mean square error (CVRMSE), see Equation 2.33 in Section
2.4.2, of the kriging model;
2. maximum absolute error (MAX), see Equation 2.30 in Section 2.4.2, of the kriging
model; and
3. root mean square error (RMSE), see Equation 2.32 in Section 2.4.2 of the kriging
model.
The CVRMSE of the kriging model is based on the leave-one-out cross validation procedure
described in Section 2.4.2; it utilizes the sample data to validate the model and does not require
additional validation points. MAX and RMSE, in contrast, provide an independent
assessment of model adequacy; therefore, three sets of validation points are used to compute
MAX and RMSE. The average absolute error measure, Equation 2.31, is not included in this
study since it correlates well with RMSE and provides little additional information beyond that
obtained from analysis of RMSE. The number of validation points used in each problem is
listed in Table 5.5: 1000, 1500, and 2000 validation points for the two, three, and four variable
problems, respectively.
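Leave-one-out cross validation, the basis of CVRMSE, can be sketched generically; `predict` below is any fit-and-predict routine (e.g., a kriging predictor) and its interface is an assumption for illustration, not the dissertation's code:

```python
import numpy as np

def cvrmse(X, y, predict):
    """Leave-one-out cross validation root mean square error (cf. Eq. 2.33):
    each sample point is predicted from the remaining n-1 points, so no
    additional validation points are required."""
    n = len(y)
    y = np.asarray(y, float)
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i                 # hold out point i
        errs[i] = y[i] - predict(X[i], X[mask], y[mask])
    return float(np.sqrt(np.mean(errs ** 2)))
```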
Rather than randomly pick these validation points, the points are obtained from a
random Latin hypercube to ensure uniformity within the design space. The predicted values
from each kriging model are compared against the actual values from the set of validation points,
and the error measures MAX and RMSE are computed. These measures are then
“normalized” as a percentage of the sample range, for the particular design under investigation,
in order to compare responses with different magnitudes. A precursory analysis of the data is given in Section 5.2.
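A random Latin hypercube of the kind used to generate these validation points can be sketched as:

```python
import numpy as np

def random_lhs(n, k, seed=None):
    """Random Latin hypercube of n points in [0,1)^k: each dimension is divided
    into n equal strata, and each stratum is sampled exactly once, giving better
    uniformity than purely random sampling."""
    rng = np.random.default_rng(seed)
    X = np.empty((n, k))
    for j in range(k):
        # Shuffle the strata, then jitter each point within its stratum.
        X[:, j] = (rng.permutation(n) + rng.random(n)) / n
    return X
```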
In total, there are 11535 kriging models constructed as shown in Table 5.6 for the six
test problems—one kriging model for each equation for each design for each test problem. For
each of these models, there are three measures of model accuracy: MAX, RMSE, and
CVRMSE; hence, there are 34605 data points in the resulting data set.
Table 5.6 Kriging Models Constructed for the Six Test Problems

Problem  No. of     No. of     No. of      Total No.  No. of   No. of   Total No.
Name     Variables  Responses  Unique DOE  of DOE     CORFCN   Models   of Models
2bar     2          3          51          71         5        765      1005
3bar     2          4          51          71         5        1020     1340
2mem     3          3          63          92         5        945      1380
spring   3          7          63          92         5        2205     3220
press    4          4          66          102        5        1320     2040
weld     4          5          66          102        5        1650     2550
Grand Totals                                                   7905     11535
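The random Latin hypercube used to generate the validation points (and underlying several of the experimental designs studied in this chapter) can be sketched as follows; this is a generic textbook construction rather than the study's code, and the `seed` parameter is an illustrative convenience:

```python
import random

def random_latin_hypercube(n, k, seed=None):
    # n points in k dimensions on [0, 1)^k: each axis is divided into n
    # equal strata, and each stratum is sampled exactly once per dimension,
    # which guarantees uniform one-dimensional coverage of the design space
    rng = random.Random(seed)
    columns = []
    for _ in range(k):
        strata = list(range(n))
        rng.shuffle(strata)
        columns.append([(s + rng.random()) / n for s in strata])
    return [tuple(col[i] for col in columns) for i in range(n)]
```

Scaling each point from [0, 1) to the actual variable bounds then yields validation points that are spread uniformly, rather than clustered as purely random points can be.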
To facilitate analysis of the data set, the error measures of the designs which are
replicated—the orthogonal arrays, random Latin hypercubes, OA-based Latin hypercubes, and
orthogonal Latin hypercubes—are averaged to reduce the data set to 7905 models. However,
not all of these 7905 models are good, i.e., many contain outliers which bias the results, and
potential outliers must be removed. The cause of the outliers can be attributed to incomplete
convergence of the numerical optimization used to fit the model or singularities in the data set
which occur during model fitting, numerical round-off error, or bad data resulting from
transferring data from file to file, program to program, and computer to computer.
Hence, the data set is culled to remove any potential outliers. Rather than first fit the
model and remove potential outliers based on the residuals, the data is culled based on (a)
potential RMSE outliers, (b) potential MAX outliers, and (c) potential CVRMSE outliers since
it is known that many outliers exist due to singularities in the data set which occur during model
fitting. The process is described in detail in Appendix E; density plots are included in Appendix
E to show the distribution of the resulting data for the two, three, and four variable problems.
In this manner, the data set is reduced from 7905 models to 7578. This constitutes a
reduction of about 4% which is considered reasonable given the magnitude of the study and the
potential for errors. From this point forward, any reference to “the data set” refers to the final
culled data set with all of the potential outliers removed and not to the original data set unless
explicitly specified.
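A culling step of this kind can be illustrated with a generic interquartile-range rule; the actual procedure used in the study is documented in Appendix E, so both the rule and the multiplier `k` below are assumptions for illustration only:

```python
def cull_outliers(values, k=1.5):
    # Keep values inside [Q1 - k*IQR, Q3 + k*IQR]; a common screening rule,
    # used here only as a stand-in for the procedure of Appendix E
    s = sorted(values)
    def quantile(q):
        # Linear interpolation between order statistics
        pos = q * (len(s) - 1)
        lo, hi = int(pos), min(int(pos) + 1, len(s) - 1)
        return s[lo] + (pos - lo) * (s[hi] - s[lo])
    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    lo_b, hi_b = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo_b <= v <= hi_b]
```

Applied per error measure (RMSE, MAX, and CVRMSE), such a rule flags the grossly inaccurate models produced by convergence failures or singular fits before any further analysis.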
169
Analysis of variance (ANOVA) is performed in the next section to determine which factors
have a significant effect on the accuracy of the resulting kriging models (see, e.g., (Chambers,
et al., 1992; Montgomery, 1991) for more on ANOVA); this is followed in Section 5.2.2 by an
examination of the correlation among the three error measures. The software
package S-Plus4 (MathSoft, 1997) is used to analyze the data. The ANOVA is performed
separately for each pair of two, three, and four variable problems for all three error measures.
Furthermore, because of the size of the data set, only main effects and two-factor interactions
can be studied. The ANOVA results are given in Section E.2, and a summary of the ANOVA
results is given in Table 5.7. In the table, the factor main effects and two-factor interaction
effects are listed in the first column of the table; a colon between factors (e.g.,
CORFCN:NSAMP) indicates a two-factor interaction. The abbreviations “sig” and “not sig”
are used to indicate whether or not the effect is significant. For instance, all of the main effects
and two-factor interactions except CORFCN:NSAMP are significant for RMSE.RANGE and
MAX.RANGE.
Table 5.7 Summary of ANOVA Results for Kriging/DOE Study
As can be seen in Table 5.7, the majority of the effects are significant for all of the error
measures. It is not surprising to see that the main effects of the factors DOE, CORFCN,
NSAMP, and EQN are significant for all responses for all of the problems. Likewise, the
interaction between DOE and NSAMP is significant for all RMSE.RANGE and
MAX.RANGE values. The interaction between CORFCN and NSAMP is not significant in
the majority of cases since it is unlikely that these two factors would interact to provide a more
accurate model. The interaction between DOE and CORFCN is significant in all but one case
which is interesting to note. In summary, it appears that there are many significant interactions
and main effects to examine. Observations regarding many of these interactions can be inferred
from the appropriate graphs; however, the commentary in Sections 5.3 and 5.4 focuses on the
effects most relevant to testing Hypotheses 2 and 3.
The two most important measures of model accuracy in this study are considered to be
RMSE and MAX. Why are these two particular measures the most important? RMSE is
used to gauge the overall accuracy of the model, and MAX is used to gauge the local accuracy
of the model. Ideally, RMSE and MAX would be zero, indicating that the metamodel predicts
the underlying analysis or model exactly; however, this is rarely the case. Therefore, the lower
the value of either error measure, the more accurate the model.
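Whether two error measures track one another can be quantified with the sample (Pearson) correlation coefficient, sketched below; the study itself assesses correlation visually through scatter plots (Figures 5.8 through 5.10), so this coefficient is a complementary numeric check rather than the method used in the dissertation:

```python
import math

def pearson_r(xs, ys):
    # Sample correlation coefficient: covariance of xs and ys divided by
    # the product of their standard deviations; +1/-1 means a perfect
    # linear relationship, values near 0 indicate little linear association
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value of `pearson_r` near zero for the (RMSE.RANGE, CVRMSE.RANGE) pairs would corroborate the wide scatter observed in the plots.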
Both measures are important in a design application because high values of RMSE can lead an optimization algorithm into a region
of bad design and high values of MAX prevent the optimization algorithm from finding the true
optimum solution. To see if the two measures are correlated, a plot of RMSE.RANGE versus
MAX.RANGE for the data set is given in Figure 5.8. Here and henceforth, the acronyms
RMSE.RANGE and MAX.RANGE are used to refer to the values of RMSE and MAX when
normalized as a percentage of the sample range.
Figure 5.8 RMSE.RANGE versus MAX.RANGE
Since the data is widely scattered in Figure 5.8, the two error measures do not correlate
well. Models with low RMSE.RANGE values tend to have low MAX.RANGE values, but
models with moderate RMSE.RANGE values have any of a variety of MAX.RANGE values;
hence, both error measures must be examined before drawing any conclusions. Likewise, given
the wide scattering of the data in Figure 5.9, RMSE.RANGE and CVRMSE.RANGE are not correlated
either. This means that the cross validation root mean square error is not a sufficient
measure of model accuracy, since it does not track the root mean square error, which provides the best possible
assessment of overall model accuracy. If CVRMSE.RANGE and RMSE.RANGE had been
correlated, then CVRMSE alone could be computed to assess model accuracy without having
to take additional validation points.
Figure 5.9 CVRMSE.RANGE versus RMSE.RANGE
As in Figure 5.9, there is a wide scattering of the data in Figure 5.10, and it appears that MAX.RANGE and
CVRMSE.RANGE are not well correlated either. Hence, there is no need to examine
CVRMSE.RANGE further because it does not provide a good assessment of model accuracy
since it does not correlate well with either MAX.RANGE or RMSE.RANGE. As stated
earlier, this finding is unfortunate because it means that additional validation points must be
taken in order to assess the accuracy of a kriging model properly; cross validating the
model alone does not suffice.
Figure 5.10 CVRMSE.RANGE versus MAX.RANGE
Using only RMSE.RANGE and MAX.RANGE, the error of the resulting kriging
models can now be assessed by isolating a single factor (or pair of factors). The process for
analyzing the data in order to interpret specific results is identified at the beginning of each
section when Hypotheses 2 and 3 are tested. Hypothesis 2 is tested first in the next section,
followed by Hypothesis 3 in Section 5.4.
In order to test Hypothesis 2, two factors are isolated to analyze the results further, namely,
CORFCN and EQN. Both factors were found to have a significant effect on the accuracy of
the resulting kriging models in the ANOVA in Section 5.2.1. The effect of CORFCN on
RMSE.RANGE and MAX.RANGE is investigated in the next section. The effect of EQN on
RMSE.RANGE and MAX.RANGE is discussed in Section 5.3.2. Keep in mind that all of
these results are based strictly on averages of the data at a given level of a particular variable; it
is assumed that biasing due to unbalanced numbers of observations at each level is negligible
since such a large data set is being used.
The effect of correlation function on model accuracy was found to be significant in the
ANOVA in Section 5.2.1, but it is uncertain which correlation function yields the best results on
average. Therefore, the effect of CORFCN on RMSE.RANGE aggregated over all the
problems and for each pair of problems is shown in Figure 5.11. The average (mean) of
RMSE.RANGE for each factor level is plotted on the vertical axis in the figure. Meanwhile, the
vertical bars within the figure are used for grouping purposes, showing the range of effects of the
different levels of the factor being considered (in this case, CORFCN) for each problem group
as indicated on the x-axis. The numbers 1, 2, 3, 4, and 5 in the figure indicate the level of
correlation function as described in the key in the figure. The horizontal dashed lines which
cross each vertical bar indicate the group average of RMSE.RANGE for that particular
grouping; for instance, the mean RMSE.RANGE for all of the problems is about 0.062. The
arrows are used to indicate the effect a particular level of CORFCN has on RMSE.RANGE;
the same holds true regardless of the factor being considered. For example, in Figure 5.11 the
average effect of CORFCN = 1 in the two variable problems is slightly less than 0.08 while the
average effect of CORFCN = 4 in the same problems is slightly greater than 0.06. Finally,
lower values of RMSE.RANGE (and MAX.RANGE) are better; so, the lower the arrow of a
particular level on the vertical line, the more accurate is the resulting kriging model.
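The averaging that underlies Figures 5.11 through 5.20, namely the mean of an error measure at each level of a factor taken over all other factors, can be sketched as follows; the record layout shown is an assumed illustration, not the study's actual data structure:

```python
from collections import defaultdict

def level_means(records, factor, response):
    # Average the response at each level of a factor, mirroring how the
    # effects plotted in Figures 5.11-5.20 are computed; `records` is a
    # list of dicts such as {"CORFCN": 1, "rmse.range": 0.08, ...}
    sums, counts = defaultdict(float), defaultdict(int)
    for rec in records:
        sums[rec[factor]] += rec[response]
        counts[rec[factor]] += 1
    return {lvl: sums[lvl] / counts[lvl] for lvl in sums}
```

Grouping `records` by problem set before calling `level_means` reproduces the per-group effects plotted against each vertical bar in the figures.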
177
Figure 5.11 Effect of CORFCN on Mean RMSE.RANGE (Key: 1 = Exponential,
2 = Gaussian, 3 = Cubic, 4 = Linear Matérn, 5 = Quadratic Matérn)
Some observations regarding Figure 5.11 are as follows. The exponential correlation
function (CORFCN = 1) is repeatedly the worst. Overall, the Gaussian correlation function
(CORFCN = 2) provides the lowest RMSE.RANGE on average and for the three and
four variable problems as well. The linear Matérn (CORFCN = 4) yields the lowest average
RMSE.RANGE for the two variable problems but yields comparable results to the piece-wise
cubic correlation function otherwise. The piece-wise cubic (CORFCN = 3) and quadratic Matérn
(CORFCN = 5) correlation functions generally yield worse results than the Gaussian correlation
function.
The effect of CORFCN on MAX.RANGE is shown in Figure 5.12. As in Figure 5.11,
the exponential correlation function (CORFCN = 1) repeatedly is the worst but does
surprisingly well in the four variable case. Overall, the Gaussian correlation function (CORFCN
= 2) provides the lowest MAX.RANGE on average and for the three and four variable
problems as well.
Figure 5.12 Effect of CORFCN on Mean MAX.RANGE (Key: 1 = Exponential,
2 = Gaussian, 3 = Cubic, 4 = Linear Matérn, 5 = Quadratic Matérn)
As seen in Figure 5.12, the linear Matérn correlation function (CORFCN = 4) yields
the best MAX.RANGE for the two variable problems and the worst for the four variable
problems, with average results otherwise. The piece-wise cubic (CORFCN = 3) and
179
quadratic Matérn (CORFCN = 5) correlation functions yield comparable results, falling
somewhere in the middle of the spectrum in each problem and performing slightly better than
average overall.
In summary, the Gaussian correlation function (CORFCN = 2) appears to be the best
correlation function to use for building kriging models. On average, it provides the lowest
RMSE.RANGE and MAX.RANGE, yielding the most accurate kriging models. Furthermore,
it also yields the best results in the three and four variable problems when averaged over all
designs, sample sizes, and equations; in the two variable problems its performance is average
but not far behind the linear Matérn correlation function (CORFCN = 4).
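To make this comparison concrete, the sketch below shows a minimal kriging-style interpolator using the Gaussian correlation function. It fixes the correlation parameters `theta` and uses a simple sample mean for the trend term, whereas the study fits these quantities numerically when each model is built, so this is an illustration of the predictor's structure rather than the study's implementation:

```python
import math

def gaussian_corr(x1, x2, theta):
    # Gaussian correlation: R = exp(-sum_k theta_k * (x1_k - x2_k)^2)
    return math.exp(-sum(t * (a - b) ** 2 for t, a, b in zip(theta, x1, x2)))

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def krige(xs, ys, theta):
    # Kriging-style predictor y(x) = mu + r(x)' R^-1 (y - mu); interpolates
    # the sample data exactly because r at a sample point is a row of R
    n = len(xs)
    mu = sum(ys) / n  # simple mean trend (an illustrative simplification)
    R = [[gaussian_corr(xs[i], xs[j], theta) for j in range(n)] for i in range(n)]
    w = solve(R, [y - mu for y in ys])
    def predict(x):
        r = [gaussian_corr(x, xi, theta) for xi in xs]
        return mu + sum(ri * wi for ri, wi in zip(r, w))
    return predict
```

Swapping `gaussian_corr` for an exponential or Matérn correlation changes only the correlation call, which is precisely the CORFCN factor varied in this study.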
In order to determine which types of equations are fit best, the factor EQN is used to
isolate which equations are well fit by the kriging models, thus explicitly testing Hypothesis 2. A
plot of the resulting RMSE.RANGE of the two, three, and four variable problems for each level
of the factor EQN is shown in Figure 5.13; the effect of each level of EQN on the mean of
RMSE.RANGE is averaged over all DOE, NSAMP, and CORFCN for each problem. For
clarity, dashed lines are used to indicate the 5% and 10% RMSE.RANGE values. If a 5% level
of model accuracy is used as a cut-off point, then 14 out of the 26 equations in this study are
accurately modeled by kriging. If that cut-off is raised to 10%, then 20 out of the 26 equations
are accurately modeled.
Figure 5.13 Effect of EQN on Mean RMSE.RANGE (dashed lines indicate the 5% and 10% levels)
When MAX.RANGE is considered, fewer kriging models meet a 10% cut-off point. In Figure 5.14, only nine of the 26 equations
fall within the 10% level of accuracy. If the level of accuracy is allowed to drop to 20%, which
is quite high, then five more equations may be considered modeled accurately by kriging (EQN
= 7, 8, 9, 20, and 22). The mean values for MAX.RANGE for Equations 18, 19, and 21 are
beyond the scale of the chart. For convenience, the equations and corresponding level of EQN
are listed in Table 5.8 which summarizes which equations are fit well and which are not based
on these levels of accuracy.
Figure 5.14 Effect of EQN on Mean MAX.RANGE (dashed line indicates the 10% level)
The types of equations which are well fit by kriging are noted in Table 5.8. At the 5%
level of RMSE.RANGE the linear combinations of the design variables (EQN = 1, 5, 8, 13, 15,
24, and 25) and most reciprocal equations (EQN = 7, 9, 20, and 22) are modeled well. Some
higher-order equations are also modeled well (EQN = 23 and 26). At the 10% level, all of the
equations in the three variable problems are modeled well (EQN = 8-17). At this level, the
equations based on the finite element model of the two-member frame (EQN = 16 and 17) also
are accurately represented by the kriging models. Looking at MAX.RANGE, however, the
182
majority of the equations which meet the 10% cut-off are linear combinations of the design
variables (e.g., EQN = 1, 5, 13, 24, and 25) which may involve higher-order terms or
183
Table 5.8 Summary of Equations Accurately Modeled by Kriging
EQN   Equation                                5% RMSE   10% RMSE   10% MAX
 6    g1(x) = …                               no        yes        no
23    f(x) = … + 3.1661Ts^2L + 19.84Ts^2R     yes       yes        yes
24    g1(x) = Ts - 0.0193R                    yes       yes        yes
25    g2(x) = Th - 0.00954R                   yes       yes        yes
26    g3(x) = πR^2L + (4/3)πR^3 - 1.296E6     yes       yes        yes
Which types of functions are not modeled well by kriging? Based on the data in
Table 5.8, the equations which are not modeled well by kriging are the equations involving
reciprocals of combinations of the design variables in the three-bar truss problem (EQN = 2, 3,
and 4) and the two-bar truss problem (EQN = 6 and 7), and the majority of the welded beam
equations which include shear stress calculations, a cosine term which is a function of the design
variables, and a variety of reciprocals and square roots of terms which are functions of the
design variables. In addition, the finite element equations for the two-member frame (EQN =
16 and 17) are not modeled well at the 5% level accuracy or at the 10% level of accuracy of
MAX.RANGE. It is also interesting to note that the objective function of the welded beam
problem (EQN = 18) is one of the equations approximated worst by the kriging. This is rather
surprising considering it is very similar to the objective function of the pressure vessel problem
(EQN = 23) which is modeled well in all cases. Perhaps these differences are due to the size of
the design space as opposed to the equations themselves; very few approximation methods will
work well if the points are sparsely scattered throughout the design space, which may be the
case here. In summary, kriging is capable of accurately modeling a wide variety of types
of equations. Using a 5% level of accuracy, over half (14 out of the 26) of the equations
studied are accurately modeled over the entire design space as measured by RMSE.RANGE; if
a 10% level of accuracy is used instead, then over 3/4 of the equations (20 out of 26) are
accurately modeled. Unfortunately, only nine of the 26 meet the 10% level of accuracy in
terms of MAX.RANGE; however, the RMSE results are the more
important from a design standpoint since accuracy over the entire design space is more
important during design space search than the maximum discrepancy at any one given point. As
such, Hypothesis 2 is considered verified, and attention turns to Hypothesis 3.

Hypothesis 3: Space filling experimental designs are better suited for building
accurate kriging metamodels than classical experimental designs.
In order to test this hypothesis, the data set is analyzed by isolating the factor DOE which was
found to have a significant effect on the accuracy of the resulting kriging models in the ANOVA in
Section 5.2.1. However, as the same sample sizes are not used in each design, as
discussed in Section 5.1.3, the results also must be conditioned on sample size (NSAMP) for a
fair comparison between designs. Consider, for instance, the combined CCD + CCF (ccdaf)
design in the two variable problems which has 13 sample points. It cannot be concluded that
the combined CCD + CCF is the best by averaging over all designs because its effect is biased
by the fact that it has 13 sample points which is at the upper end of the number of points in the
two variable problems and is therefore expected to yield good results because of the large
number of sample points. Meanwhile, the effects of all of the other designs with variable
numbers of points (i.e., unifd, hamss, mxmnl, mnmxl, and oplhd) are averaged over all sample
sizes where the smaller the sample size, the less accurate the model, and the worse the effect of
these designs. The results for the two, three, and four variable problems are discussed in
Sections 5.4.1, 5.4.2, and 5.4.3, respectively, by conditioning on both design type and sample
size. As stated previously, keep in mind that the results are based on averaging the data at a
given level of a particular variable; it is assumed that biasing due to unbalanced numbers of
observations at each level is negligible since such a large data set is being used.
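The classical central composite constructions compared in this section can be sketched as follows; with `alpha = 1` the axial points fall on the faces of the design space (a CCF-style design), while `alpha > 1` pushes them beyond the faces (a CCD-style design). For two variables the construction yields nine points and for four variables 25 points, matching the sample sizes cited in this chapter:

```python
from itertools import product

def central_composite(k, alpha):
    # Two-level factorial corners, 2k axial points at +/- alpha along each
    # axis, and one center point, all in coded [-1, 1]^k coordinates
    corners = [tuple(pt) for pt in product((-1.0, 1.0), repeat=k)]
    axial = []
    for d in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[d] = s
            axial.append(tuple(pt))
    center = [tuple([0.0] * k)]
    return corners + axial + center
```

The inscribed variant (CCI) follows the same pattern with the whole design shrunk so the axial points land on the faces; the details of the designs actually used are given in Section 5.1.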
For the two variable problems, the classical designs (CCI, CCF, and CCD) each utilize
nine sample points, and the combined CCF + CCI and CCF + CCD each have 13 points.
Hence, the average effect of each design which has nine points and each design which has 13
points on RMSE.RANGE is shown in Figure 5.15. The average effect on MAX.RANGE of all
of the designs is shown in Figure 5.16.
Looking first at the nine point designs in Figure 5.15, it is surprising to note that both the
CCD and CCI designs perform well with the minimax Latin hypercube (mnmxl) design yielding
the best RMSE.RANGE values; recall that the minimax Latin hypercube designs are unique to
this research (see Appendix C). The orthogonal Latin hypercubes (yelhd), maximin Latin
hypercubes (mxmnl), and the uniform designs (unifd) also perform well. The worst designs are
the Hammersley sampling sequence design, the CCF design, and the OA-based Latin
hypercube design.
In the 13 point designs, the uniform design (unifd) yields the best results with the
maximin and minimax Latin hypercube designs giving equally good results which are only slightly
worse than that of the uniform design. The random Latin hypercubes (rnlhd) continue to give
average results with the combined CCD + CCF (ccdaf) and CCI + CCF (cciaf) designs giving
slightly better results but results which are still about 1% worse than the maximin and minimax
Latin hypercubes. The optimal Latin hypercube designs are the worst with the Hammersley
sampling sequence designs showing good improvement with the extra four sample points.
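A maximin Latin hypercube of the kind compared here can be approximated by a simple random search that keeps the candidate whose smallest pairwise distance is largest; this is a crude stand-in for the exchange-type algorithms of Morris and Mitchell (1995), and the `tries` budget and `seed` are illustrative assumptions:

```python
import math
import random

def maximin_lhd(n, k, tries=200, seed=0):
    # From `tries` random Latin hypercubes, keep the one that maximizes the
    # minimum distance between any pair of points (the maximin criterion)
    rng = random.Random(seed)
    def lhd():
        cols = []
        for _ in range(k):
            strata = list(range(n))
            rng.shuffle(strata)
            cols.append([(s + 0.5) / n for s in strata])  # centered strata
        return [tuple(c[i] for c in cols) for i in range(n)]
    def min_dist(pts):
        return min(math.dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:])
    return max((lhd() for _ in range(tries)), key=min_dist)
```

Replacing the `max` of `min_dist` with a `min` of the maximum distance to the nearest candidate site gives the flavor of the minimax criterion used for the minimax Latin hypercubes introduced in Appendix C.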
0.12
hamss
ccfac
oplhd
0.10
oalhd
mean of rmse.range
0.08
rnlhd
0.06
oplhd hamss
mxmnl mnmxl
unifd
NSAMP = 9 NSAMP = 13
Factor - DOE
188
Figure 5.15 Effect of 9 and 13 Point DOE on RMSE.RANGE
Turning to the results of MAX.RANGE in Figure 5.16, the classical designs perform
quite well with the CCI design (ccins) being the best in the nine point designs and the combined
CCD + CCF (ccdaf) being the best in the 13 point designs. The nine point minimax Latin
hypercubes (mnmxl) and OA-based Latin hypercubes (oalhd) yield comparable results to the
CCD and CCF designs. The remaining space filling designs all fare worse than the CCD with
the Hammersley sampling sequence giving the worst MAX.RANGE. In the 13 point designs,
the combined CCI + CCF (cciaf) is the second best design with the maximin Latin hypercube
(mxmnl) coming in a close third. The uniform, optimal Latin hypercube, minimax Latin
hypercube, and random Latin hypercube designs are slightly worse than the average, and the
Hammersley sampling sequence design remains the worst.
Figure 5.16 Effect of 9 and 13 Point DOE on MAX.RANGE
Based on these results, the space filling designs do best in terms of RMSE.RANGE
while the classical designs yield the lowest MAX.RANGE. If RMSE.RANGE is taken as the
more important of the two measures of error, then the space filling DOE are better than the
classical DOE for the two variable problems considered in this dissertation. The results for the
three variable problems are presented next.
In the three variable problems, there are four values of NSAMP which must be
considered due to differences in prescribed sample sizes. The Box-Behnken design has 13
points; the classic CCD, CCF, and CCI designs have 15; the CCI + CCF has 21; and the
CCD + CCF has 23. The effects of these designs on RMSE.RANGE are shown in Figure 5.17;
the effects on MAX.RANGE are shown in Figure 5.18.
Figure 5.17 Effect of 13, 15, 21, and 23 Point DOE on RMSE.RANGE
In Figure 5.17, the Box-Behnken (bxbnk) design dominates the 13 point designs with a
RMSE.RANGE of about 5%. All of the space filling designs perform quite poorly in fact with
RMSE.RANGE values of about 8% or worse; it appears that these designs do not fare well
when relatively few sample points are taken in the design space. A similar observation can be
made regarding the 15 point designs also. The maximin Latin hypercube (mxmnl) yields the best
result, but the CCI and CCF designs are both almost as good. The minimax Latin hypercube
(mnmxl) design yields average results with the uniform and optimal Latin hypercube design
(oplhd) faring slightly better but not as good as the CCI and CCF. The CCD is the worst
design, with the random Latin hypercube (rnlhd) and the Hammersley sampling sequence
(hamss) designs faring only slightly better.
In the 21 point designs in Figure 5.17, the optimal Latin hypercube design
(oplhd) yields the lowest RMSE.RANGE with the random Latin hypercube design (rnlhd)
yielding the worst. The combined CCI + CCF (cciaf) is the second best 21 point design,
followed closely by the minimax Latin hypercube (mnmxl). The uniform design (unifd) gives an
average result in the 21 point case but is the second best design in the 23 point case; the best
design is the combined CCD + CCF (ccdaf). The optimal Latin hypercube design (oplhd) is
the worst of the 23 point designs with the minimax Latin hypercube (mnmxl) and random Latin
hypercube designs (rnlhd) yielding results which are worse than the average.
In Figure 5.18, the effects of these different designs on MAX.RANGE are plotted. The
classical experimental designs consistently provided the lowest MAX.RANGE when averaging
over all other factors. The space filling designs do not perform well in any case and yield
particularly poor results in the 13 and 21 point designs. The minimax Latin hypercube (mnmxl)
design is no exception, giving near average results in the 15, 21, and 23 point designs and
poor results in the 13 point designs.
As with the two variable problems, the space filling designs offer better results if
RMSE.RANGE is considered while the classical designs are better when it comes to
MAX.RANGE for the problems considered in this dissertation. The four variable problems are
examined next.
Figure 5.18 Effect of 13, 15, 21, and 23 Point DOE on MAX.RANGE
For the four variable problems, there are two sample sizes to examine: NSAMP = 25
points and NSAMP = 33 points. Figure 5.19 contains the effects of DOE on RMSE.RANGE
for these sample sizes, and Figure 5.20 contains the effects of DOE on MAX.RANGE for
these sample sizes. As seen in the figures, there are twelve designs which use 25 sample points
and only six with 33. The combined CCD + CCF is not considered because it is the only
design of its size, precluding a fair comparison.
Looking first at Figure 5.19, the minimax Latin hypercube (mnmxl) design introduced in
this dissertation yields the best results on average. The uniform design (unifd) is a close second
in the 25 point case, and the random Latin hypercube design (rnlhd) is a close second in the 33
point case. Of the classical 25 point designs, the Box-Behnken (bxbnk) design performs slightly
better than average while the CCD (ccdes), CCF (ccfac), and CCI (ccins) designs all do worse
than average. Finally, the Hammersley sampling sequence (hamss) designs perform poorly at
both sample sizes.
Figure 5.19 Effect of 25 and 33 Point DOE on RMSE.RANGE (the 25 point CCD,
with a mean RMSE.RANGE of 0.14, lies beyond the scale of the chart)
In Figure 5.20, the effects of the 25 and 33 point DOE on MAX.RANGE are plotted.
Unlike the two and three variable problems, the space filling designs yield the best
MAX.RANGE for the four variable problems. The randomized 25 point orthogonal (oarry)
produces the lowest MAX.RANGE with the 25 point minimax Latin hypercube (mnmxl) a close
second. The classical Box-Behnken (bxbnk), CCF (ccfac), and CCI (ccins) designs and the
space filling uniform design (unifd) and optimal Latin hypercube design (oplhd) all yield
comparable results which are only slightly worse than either the minimax Latin hypercube or the
randomized orthogonal array. The 25 point orthogonal array-based Latin hypercube (oalhd)
and the maximin Latin hypercube (mxmnl) produce results which are close to the average effect.
The Hammersley sampling sequence (hamss) designs yield the worst MAX.RANGE in both the
25 and 33 point designs; the 25 point CCD (ccdes) does not fare much better than the
Hammersley designs.
Figure 5.20 Effect of 25 and 33 Point DOE on MAX.RANGE
In the 33 point designs, the minimax Latin hypercube (mnmxl) yields the lowest
MAX.RANGE on average. The combined CCI + CCF (cciaf) and random Latin hypercube
designs (rnlhd) give comparable results which are slightly worse than the minimax Latin
hypercube. Finally, the 33 point orthogonal Latin hypercubes (yelhd) and optimal Latin
hypercube (oplhd) designs yield results which are worse than the average.
The space filling designs yield lower RMSE.RANGE values for all of the two, three,
and four variable problems considered in this dissertation. The classical experimental designs
yield lower MAX.RANGE values for the two and three variable problems but do not perform
as well as the space filling designs in the four variable problems. In small dimensions, i.e., two
and three variables, the classical designs spread the points out equally well in the design space
regardless of whether they are “space filling” designs or not. However, based on the observed trends in
the data, it appears that as the number of design variables increases, the space filling designs
perform better and better in terms of the two error measurements used in this study. As their
name implies, the space filling designs do a better job at spreading out points in the design space
and thus filling the space as the number of variables increases. Hence, Hypothesis 3 is verified
because the space filling designs do perform better than the classical designs in terms of
RMSE.RANGE.
Furthermore, the larger the number of design variables and the more sample points, the better is
the accuracy of the resulting kriging model from a space filling design.
Some additional comments about particular space filling designs are as follows.
• The minimax Latin hypercube designs introduced in this dissertation perform quite well
in these problems. With the exception of the 13 point minimax Latin hypercube design
for the three variable problems, these designs consistently are among the best of the
designs in terms of their effect on RMSE.RANGE and MAX.RANGE.
• The Hammersley sampling sequence designs perform poorly in all of these problems.
The impetus for the Hammersley designs, though, is to provide good stratification of a
k-dimensional space (Kalagnanam and Diwekar, 1997); as such, they are designed to
perform well in large design spaces, which may explain why they perform so poorly in
these relatively small problems.
• The random Latin hypercube designs provide average results as might be expected
because these designs simply rely on a random scattering of points in the design space.
By imposing additional considerations on these designs to “control” the randomization of
the points, the performance of these designs can be improved. For instance, the
orthogonal Latin hypercubes (Ye, 1997), the maximin Latin hypercubes (Morris and
Mitchell, 1995), the (IMSE) optimal Latin hypercubes (Park, 1994), and the
orthogonal-array based Latin hypercubes (Tang, 1993) typically yield a more accurate
kriging model than does the basic random Latin hypercube. This observation is not
new; rather, it supports the claims made by the creators of these designs when they
introduced them.
• The uniform designs perform surprisingly well, considering they are based solely on
number-theoretic reasoning (Fang and Wang, 1994). Regardless, the importance of
these designs lies in the fact that uniformly spreading the points throughout the design space
yields an accurate kriging model; this observation, while perhaps obvious, is not well
documented in the literature.
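For reference, the Hammersley point set mentioned above can be generated from radical inverses in successive prime bases; this is the standard number-theoretic construction (limited here to a handful of dimensions), offered as a generic sketch rather than the code of Kalagnanam and Diwekar (1997):

```python
def radical_inverse(i, base):
    # Van der Corput radical inverse: reflect the base-`base` digits of the
    # integer i about the radix point, e.g. 3 = 11 (base 2) -> 0.11 = 0.75
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += (i % base) * f
        i //= base
        f /= base
    return inv

def hammersley(n, k):
    # n-point Hammersley set in k dimensions on [0, 1)^k: first coordinate
    # is i/n, remaining coordinates are radical inverses in distinct primes
    primes = [2, 3, 5, 7, 11, 13][: k - 1]  # supports up to 7 dimensions here
    return [tuple([i / n] + [radical_inverse(i, p) for p in primes])
            for i in range(n)]
```

The stratification of such a sequence improves as n grows, which is consistent with the observation above that the Hammersley designs fare poorly at the small sample sizes used in this study.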
Hypotheses 2 and 3 have now been verified as a result of this study. In the next
section, the relevance of these results with regard to the PPCEM is discussed.
5.5 A LOOK BACK AND A LOOK AHEAD
In this chapter, 7905 kriging metamodels of six engineering test problems have been
constructed and validated using a variety of correlation functions and experimental designs to
test and verify Hypotheses 2 and 3. In closing this chapter, recall the questions posed at the
beginning of the chapter.
• What is the best type of experimental design you should use to query the simulation to
generate data to build an accurate kriging metamodel? For problems containing
only two variables, either classical or space filling designs yield good results; however,
as the size of the design space increases (i.e., number of variables increases), space
filling experimental designs tend to yield more accurate kriging metamodels on average
since they tend to spread the points out well in the design space. In particular, the
minimax Latin hypercube design, uniform designs, and orthogonal arrays yield good
results. Random Latin hypercubes also provide good results, provided orthogonality or
optimality (e.g., IMSE) are imposed to control the randomization. Finally, of the
designs considered, Hammersley point designs are not recommended unless numerous
sample points can be afforded.
• How many sample points should you use? The interaction between sample size and
experimental design type is examined in Section E.3 because this interaction does not
directly impact testing Hypotheses 2 or 3 since the analysis is conditioned on sample
size. In general, the more sample points which can be afforded, the more accurate the
resulting model. However, as discussed in Section E.3, a recommendation on the
number of sample points cannot be made at this time because a wide enough spread of
points was not investigated.
• What type of correlation function should you use to obtain the best predictor?
Based on the results in Section 5.3.1, the Gaussian correlation function yields the most
accurate predictor on average of the five studied.
• Lastly, how can you best validate the metamodel once you have constructed it?
As discussed in Section 5.2.2, cross validation root mean square error is not a sufficient
measure of model accuracy since it does not correlate well with either root mean square
error or maximum error. One possible explanation of this is that because the sample sizes
are relatively small, an insufficient number of points is available to cross-validate the
model properly. If more points were available, then cross validation error may yield a
reasonable assessment of model accuracy; however, this has not been tested. In light of
this result, then, it is imperative that additional sample points be taken to validate a
kriging model.
These results have a direct bearing on the metamodeling capabilities within the Product Platform
Concept Exploration Method (PPCEM) as depicted in Figure 5.21. In the event that
metamodels are to be built within the context of product platform design, then the best correlation function to select—if
kriging metamodels are to be utilized—is the Gaussian correlation function, and the best
design to use is a space filling experimental design if the problem has more than two variables (if
the problem only has two variables, then a classical experimental design will suffice). Also, it is
recommended to take as many sample points as possible, but keep in mind that additional
sample points are needed to validate the model since cross validation does not appear to
Figure 5.21 [Figure placeholder: roadmap relating the kriging/DOE testbed (Sections 5.2 and 5.3), the family of universal electric motors (Chapter 6), the platform MDO example, and the Product Platform Concept Exploration Method (Chapter 3)]
In the next chapter, the focus returns to the PPCEM, and the design of a family of universal
electric motors is offered as "proof of concept" that the PPCEM works and that it is effective at
facilitating the design and development of a scalable product platform for a product family.
CHAPTER 6
In this chapter, the PPCEM is implemented to verify its use for designing a family of universal motors around a common
scalable product platform. An overview of the universal motor problem is presented in Section
6.1; a schematic of a typical universal motor is given in Section 6.1.1, and a practical
mathematical model for universal motors is derived in Section 6.1.2. Section 6.2 contains the
implementation of Steps 1 and 2 of the PPCEM. A market segmentation grid is created for the
problem and relevant factors and responses for the universal motor platform are identified.
Section 6.3 follows with the implementation of Step 4 of the PPCEM by aggregating the
universal motor specifications and formulating a compromise DSP; Step 3 of the PPCEM—
building metamodels—is not utilized in this example because analytical expressions for mean and
standard deviation of the responses are derived separately. Section 6.4 contains the
development of the actual universal motor platform; ramifications of the resulting universal motor
platform and product family are analyzed in Section 6.5. Through this example problem, the
research hypotheses are addressed as follows:
Hypothesis 1 - All but Step 3 of the PPCEM is employed in this chapter to design a family
of universal motors based on a scalable product platform. The success of the method
as discussed in Section 6.5 provides an initial “proof of concept” for the method and
hence Hypothesis 1.
Sub-Hypothesis 1.1 - The market segmentation grid is utilized in Section 6.2 to help
identify the stack length as the scale factor around which the motor family is vertically
scaled to achieve the desired platform leveraging strategy; this supports Sub-Hypothesis
1.1.
Sub-Hypothesis 1.2 - The scale factor for the family of universal motors is taken as the
stack length, following the footsteps of the example by Black & Decker (Lehnerd,
1987). Robust design principles are used in this example to develop a universal motor
platform—defined by seven design variables—which is insensitive to variations in the
scale factor and is thus good for a family of motors based on different instantiations of
the stack length (the scale factor). The success of this implementation helps to support
Sub-Hypothesis 1.2.
Sub-Hypothesis 1.3 - Robust design principles of “bringing the mean on target” and
“minimizing the deviation” are utilized in this example to aggregate individual targets and
constraints and to facilitate the design of the family of motors. Combining this
formulation with the compromise DSP allows a family of motors to be designed around
a common, scalable product platform, verifying Sub-Hypothesis 1.3.
Despite all of the work in the previous two chapters, Hypotheses 2 and 3 are not tested
in this example. Analytic expressions for mean and standard deviation of the response are
derived from the analysis equations themselves and used directly in the compromise DSP for the
PPCEM. In concluding the chapter, a brief look ahead to the General Aviation aircraft example is offered.
6.1 OVERVIEW OF THE UNIVERSAL MOTOR PROBLEM
Universal electric motors are so named for their capability to function on both direct
current (DC) and alternating current (AC). Universal motors also deliver more torque for a
given current than any other kind of AC motor (Chapman, 1991). The high performance
characteristics and flexibility of universal motors understandably have led to a wide range of
applications, especially in household use where they are found in electric drills, saws, blenders,
vacuum cleaners, and sewing machines, to name a few examples (Martin, 1986).
In addition, many companies manufacture several products which use universal motors;
for example, several companies offer a complete line of power tools, whereas several others
offer a line of kitchen appliances or yard care tools (cf., Lehnerd, 1987). For these companies,
it has already become common practice to utilize a family of universal motors of similar physical
dimensions (Lehnerd, 1987). The advantages of this approach included increased modularity with decreased
manufacturing time and inventory costs. For example, Black & Decker developed a family of
universal motors for its power tools in the 1970s in response to a need to redesign their tools.
In this chapter the task is to identify a set of common physical dimensions for a
hypothetical family of universal motors to satisfy a range of performance needs, providing initial
“proof of concept” for the PPCEM. To begin, a physical description and schematic of the
universal motor is offered in the next section. In Section 6.1.2, relevant analyses for modeling the universal motor are derived.
6.1.1 Physical Description, Schematic, and Nomenclature for the Universal Motor
Problem
A universal motor is composed of an armature and a field which are also referred to as
the rotor and stator, respectively, see Figure 6.1. The motor depicted in the figure has two field
poles, an attached cooling fan, and laminations in both the armature and the field. Laminating
the metal in both the armature and field greatly reduces certain kinds of power losses (cf.,
Nasar, 1987).
The armature consists of a metal shaft about which wire is wrapped longitudinally around
two or more metal slats, or armature poles, as many as thousands of times. The field consists of
a hollow metal cylinder within which the armature rotates. The field also has wire wrapped
longitudinally around interior metal slats, or field poles, as many as hundreds of times.
Figure 6.1 Schematic of a Universal Motor
(adapted from G.S.Electric, 1997)
For a universal motor, the wraps of wire around the armature and the field are wired in
series, which means that the same current is applied to both sets of wire. As current passes
through the field windings, a large magnetic field is generated, which passes through the metal of
the field, across an air gap between the field and the armature, then through the armature
windings, through the shaft of the armature, across another air gap, and back into the metal of
the field. However, when the magnetic field passes through the armature windings, which are
themselves carrying current, the magnetic field exerts a force on the current carrying wires,
which is in the direction of the cross product of the vector direction of the current in the
armature windings and the vector direction of the magnetic field. Because of the geometry of
the windings, current on one side of the armature always is passing in the opposite direction to
the current on the other side of the armature. Thus, the force exerted by the magnetic field on
one side of the armature is opposite to the force exerted on the other side of the armature.
Thereby a net torque is exerted on the armature, causing the armature to spin within the field.
The reader is referred to (Chapman, 1991) or any physics textbook (e.g., Tipler, 1991) to
learn more about how an electric motor operates. The nomenclature for the universal electric motor model is as follows:
a          Number of current paths on the armature
Aa         Area between a pole and the armature [mm²]
Awa        Cross-sectional area of the wires on the armature [mm²]
Awf        Cross-sectional area of the wires on the field [mm²]
B          Magnetic field strength (generated by the current in the field windings) [Tesla, T]
φ          Magnetic flux [Webers, Wb]
ℱ          Magnetomotive force [Ampere·turns]
H          Magnetizing intensity [Ampere·turns/m]
I          Electric current [Amperes]
K          Motor constant [n.m.u.]
lr         Diameter of the armature [m]
lg         Length of air gap [m]
lc         Mean path length within the stator [m]
L          Stack length [m]
m          Plex of the armature winding [n.m.u.]
M          Mass [kg]
Nc         Number of turns of wire on the armature
Ns         Number of turns of wire on the field, per pole
parmature  Number of poles on the armature
pfield     Number of poles on the field
P          Gross power output [W]
ro         Outer radius of the stator [m]
Ra         Resistance of the armature windings [Ohms]
Rs         Resistance in the field windings [Ohms]
ℜ          Total reluctance of the magnetic circuit [Ampere·turns/Wb]
ℜs         Reluctance of the stator [Ampere·turns/Wb]
ℜa         Reluctance of one air gap [Ampere·turns/Wb]
ℜr         Reluctance of the armature [Ampere·turns/Wb]
t          Thickness of the stator [m]
T          Torque [Nm]
Vt         Terminal voltage [Volts, V]
Z          Number of conductors on the armature
η          Efficiency [n.m.u.]
μsteel     Relative permeability of steel [n.m.u.]
μo         Permeability of free space [Henrys/m]
μair       Relative permeability of air [n.m.u.]
ω          Rotational speed [rad/sec]
ρ          Resistivity of copper [Ohm·m]
ρcopper    Density of copper [kg/m³]
ρsteel     Density of steel [kg/m³]
A universal motor is the same as a direct current (DC) series motor; however, in order
to minimize certain kinds of power losses within the core of the motor when operating on AC
power, a universal motor is constructed with slightly thinner laminations in both the field and the
armature and fewer field windings. The governing electromagnetic equations for the
operation of a series DC motor and a universal motor running on DC current are identical,
and the performance of a universal motor running on AC current is only slightly less than the
performance of the same motor running on DC current, see Figure 6.2. This discrepancy in
performance is due to the extra losses caused by the inherent oscillation in alternating
current (AC).
Figure 6.2 Comparison of the Torque-Speed Characteristics of a Universal Motor
Rated at 1/4 Hp and 8000 rpm when Operating on AC and DC Power Supplies (Martin,
1986)
These extra losses incurred in AC operation of a universal motor are difficult, if not
impossible, to model analytically; thus, complicated finite element analyses are becoming more
popular for modeling motor behavior under AC current. Since such a detailed analysis is
beyond the scope of this work, the derived model for the performance of the universal motor is
for DC operation for which simple analytical expressions are known or can be derived.
Moreover, several texts indicate that the performance of universal motors under AC and DC
conditions is quite comparable and include diagrams such as the one reproduced in Figure 6.2
(see, e.g., Chapman, 1991; Martin, 1986; Shultz, 1992; Unnewehr, 1983); Shultz (1992)
states that “Universal motors...will operate either on DC or AC up to 60 Hz. Their
performance will be essentially the same when operated on DC or AC at 60 Hz.” The sample
torque-speed curves in Figure 6.2 graphically illustrate this, showing that for one specific motor,
the performance characteristics between AC and DC operation do not deviate significantly until
well past the full-load torque of the motor. For this work, all motors are designed for operation
at full-load torque. Thus, it is assumed that designing a universal motor under DC conditions
is sufficient for the purposes of this work.
The model takes as input the design variables {Nc, Ns, Awa, Awf, ro, t, lgap, I, Vt, L} and
returns as output the power (P), torque (T), mass (M), and efficiency (η) of the motor. To
formulate the model, it is necessary to derive equations for P, T, M, and η as functions of the
design variables. The equations are based primarily on those given in (Chapman, 1991) and
related texts.
Power
The basic equation for the power output of a motor is the input power minus the power losses:
P = Pin - Ploss [6.1]
where the input power is the product of the terminal voltage and the current:
Pin = Vt I [6.2]
Power losses occur:
• in the copper windings on the armature and the field (copper losses),
• at the interface between the brushes and the armature (brush losses),
• in heating up the core and copper wires, which adversely affects the magnetic
properties of the core and the current carrying ability of the wires (thermal losses),
• within the core, e.g., through eddy currents (core losses),
• through friction and windage (mechanical losses), and
• through miscellaneous stray load effects (stray losses).
Simple analytic expressions only exist for the copper losses and the brush losses. Stray losses
usually are assumed to be no more than one percent, and thus can be neglected. Mechanical
losses depend upon variables such as rotational speed and bearing friction;
however, these variables are beyond the scope of the motor model itself. Hence mechanical
losses are neglected. Core losses, especially those incurred by eddy currents, can be minimized
by the use of thin laminations in the stator and rotor; assuming this is done, the core losses can
be assumed to be small and thus can be neglected. Thermal losses are in general non-negligible,
but are highly dependent upon the external cooling scheme (e.g., cooling fan and fins on the
housing) applied to the motor. Because an effective cooling scheme can keep the motor from
running too hot, and as the setup of the cooling configuration is beyond the scope of this model,
thermal losses are neglected. The combined effects of all the aforementioned neglected losses
will, however, decrease the actual output power and efficiency below the values predicted by the
model. Nevertheless, the following equations serve as a sufficiently accurate model for the DC
operation of a universal motor. Consequently, the general equation for power losses reduces
from:
Ploss = Pcopper + Pbrush + Pcore + Pmechanical + Pstray [6.3]
to the more manageable:
Ploss = Pcopper + Pbrush [6.4]
where
Pcopper = I²(Ra + Rs) [6.5]
and
Pbrush = ζ I [6.6]
where ζ is typically 2 volts. Substituting these expressions into the power equation yields:
P = Vt I - I²(Ra + Rs) - 2I [6.7]
However, Ra and Rs, the resistances of the armature and field windings, can be specified further
as functions of the design variables. The resistances Ra and Rs can be computed directly from
the general equation for the resistance of any wire:
R = ρ lwire/Awire [6.8]
where ρ is the resistivity of the wire material, lwire is the length of the wire, and Awire is its
cross-sectional area.
Assuming that each wrap (i.e., turn) of wire on the armature is approximately the shape of a
rectangle with length L (the stack length of the motor) and width lr (the diameter of the armature),
lr can be expressed in terms of the physical dimensions of the motor as two times the
radius of the armature, which is just the outer radius of the stator minus the thickness of the
stator and the length of the air gap:
lr = 2(ro - t - lgap) [6.9]
The total length of wire on the armature is the perimeter of each wrap times the total number of
wraps, Nc:
lwire,armature = 2Nc(L + lr) [6.10]
so that the resistance of the armature windings is:
Ra = 2ρNc(L + lr)/Awa [6.11]
Similarly, assuming that each wrap of wire on the field is approximately the shape of a rectangle
with length L (the stack length of the motor) and width double the inner radius of the stator,
2(ro - t), the resistance of the field windings is:
Rs = 2ρNs pfield(L + 2(ro - t))/Awf [6.12]
However, the purpose of the field windings is to create a magnetic field across the armature, thus
requiring two field poles, one for the "North" end of the magnetic field and one for the "South"
end. Thus, pfield is 2, and Equation 6.12 becomes Equation 6.13:
Rs = 4ρNs(L + 2(ro - t))/Awf [6.13]
Now that Ra and Rs are expressed in terms of the design variables in Equations 6.11 and 6.13,
the power output is fully specified in terms of the design variables.
Efficiency
The equation for efficiency can be computed directly from the equation for power. The basic
equation for efficiency, expressed as a decimal and not a percentage, is given by:
η = P/Pin [6.14]
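The power and efficiency relations above can be sketched as a short script; the operating point used here (115 V supply, 3 A, 4 Ohms total winding resistance) is hypothetical and chosen only for illustration:

```python
def power_output(v_t, current, r_a, r_s, brush_drop=2.0):
    """Gross power: input power minus copper losses I^2(Ra + Rs)
    and brush losses (Eq. 6.6, with the brush drop typically 2 V)."""
    p_in = v_t * current
    return p_in - current ** 2 * (r_a + r_s) - brush_drop * current

def efficiency(v_t, current, r_a, r_s):
    """Efficiency as a decimal (Eq. 6.14): eta = P / Pin."""
    return power_output(v_t, current, r_a, r_s) / (v_t * current)

# Hypothetical operating point: 115 V, 3 A, Ra = 2.5 Ohm, Rs = 1.5 Ohm
p = power_output(115.0, 3.0, 2.5, 1.5)   # 345 - 36 - 6 = 303.0 W
eta = efficiency(115.0, 3.0, 2.5, 1.5)
print(p, round(eta, 3))  # 303.0 0.878
```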
Mass
For the purpose of estimating the mass of the motor, it is modeled as a solid steel cylinder with
length L and radius lr/2 for the armature and a hollow steel cylinder with length L, outer radius ro
and inner radius (ro-t) for the stator. The mass of the windings on both the armature and the
field are also included, where the length of each winding is the same as those assumed for the
derivation of the power equation, see Equation 6.10. Thus the equation for mass is of the form:
M = Mstator + Marmature + Mwindings [6.15]
where:
Mstator = π(ro² - (ro - t)²) L ρsteel [6.16]
Marmature = π(lr/2)² L ρsteel [6.17]
Mwindings = ρcopper [2Nc(L + lr)Awa + 4Ns(L + 2(ro - t))Awf] [6.18]
Using Equations 6.16-6.18 for Mstator, Marmature, and Mwindings, the mass of the motor, Equation 6.15, can be computed directly.
Torque
The last equation to derive is an equation for torque. In general, the torque of a DC motor is
given by:
T = KφI [6.19]
where K is a motor constant, φ is the magnetic flux, and I is the current. For a DC motor, K is
computed as:
K = (Z)(parmature)/(2πa) [6.20]
The number of conductors on the armature, Z, is:
Z = 2Nc [6.21]
and the number of current paths on the armature, a, is:
a = 2m = 2 [6.22]
assuming a simplex (m = 1) wave winding on the armature. Since the number of armature poles
is two:
parmature = 2 [6.23]
Equation 6.20 therefore reduces to:
K = (2Nc)(2)/(2π(2)) = Nc/π [6.24]
The derivation of the flux term, φ, is significantly more complicated. To begin, consider
the idealized DC motor shown in Figure 6.3a with its corresponding magnetic circuit shown in
Figure 6.3b. As shown in the figure, N is the number of turns on the stator (which is equal to
2Ns for the model being derived), I is the current, A is the cross-sectional area of the stator, lr is
the diameter of the armature, lg is the gap length, and lc is the mean magnetic path length in the
stator.
Figure 6.3 Idealized DC Motor: (a) Physical Model and (b) Magnetic Circuit
In general, the equation for the flux through a magnetic circuit is simply the magnetomotive
force divided by the total reluctance:
φ = ℱ/ℜ [6.25]
where the magnetomotive force, ℱ, is simply the number of turns around one pole of the field
times the current:
ℱ = Ns I [6.26]
The total reluctance, ℜ, is calculated from the magnetic circuit shown in Figure 6.3b.
For a magnetic circuit, reluctances in series add just like resistors in series in an electric circuit;
therefore, the total reluctance in the idealized DC motor is the sum of the reluctances of the
stator, the armature, and the two air gaps:
ℜ = ℜs + ℜr + 2ℜa [6.27]
In general, the reluctance of an element of a magnetic circuit is:
ℜ = Length/[(Permeability)(Cross-sectional Area)] [6.28]
When the permeability, μ, is expressed as the relative permeability of the material times the
permeability of free space, μo, the reluctances of the stator, rotor, and air gaps are:
ℜs = lc/(μsteel μo As),  ℜr = lr/(μsteel μo Ar),  ℜa = la/(μair μo Aa) [6.29]
In order to approximate more closely a universal motor for this example, the idealized
geometry of the DC motor in Figure 6.3 is modified to that of a universal motor. The resulting
model geometry is shown in Figure 6.4a and is described by the
outer radius of the stator, ro, the thickness of the stator, t, the diameter of the armature, lr, the
length of the air gap, lgap, and the stack length, L. The resulting magnetic circuit is shown in
Figure 6.4b; notice that the magnetic circuit for the idealized DC motor and the magnetic circuit
for a universal motor are different, because in a universal motor there are two paths which the
magnetic flux can take around the stator, i.e., clockwise and counter-clockwise. These two
paths are in parallel and thus are included in the magnetic circuit as two parallel flux paths.
Reluctances in parallel in a magnetic circuit act like resistors in parallel in an electric circuit, so
that the combined reluctance of two identical reluctances in parallel is simply one half the
individual reluctance. The reluctances for the universal motor thus become:
ℜs = lc/(2μsteel μo As),  ℜr = lr/(μsteel μo Ar),  ℜa = la/(μair μo Aa) [6.30]
In Equation 6.30, the mean magnetic path length in the stator, lc, is taken to be one half of the
mean circumference of the stator:
lc = π(2ro - t)/2 [6.31]
The cross-sectional area of the stator, As, is taken to be the thickness of the stator times the
stack length:
As = (t)(L) [6.32]
The cross-sectional area of the armature is taken to be approximately the diameter of the
armature times the stack length:
Ar = (lr)(L) [6.33]
The cross-sectional area of the air gap is the length of the air gap times the stack length:
Aa = (lgap)(L) [6.34]
The last expression needed for the calculation of reluctance is the relative permeability
of the stator and the armature. For the purposes of this model, both the stator and the armature
are assumed to be made of steel with the relative permeability versus magnetizing intensity curve shown in Figure 6.5.
Figure 6.5 Relative Permeability Versus Magnetizing Intensity for a Typical Piece of
Steel (Chapman, 1991)
The curve is divided into three regions, and each section is fit with an appropriate
numerical expression in order to include the curve shown in Figure 6.5 in the model. The curve
is evaluated at the magnetizing intensity, H, which is computed as:
H = Nc I/(lc + lr + 2lgap) [6.36]
The relative permeability of air, μair, is taken as unity, and the permeability of free space is a
constant, μo = 4π × 10⁻⁷ H/m. Now, with expressions for K, φ, ℜ, ℜs, ℜr, ℜa, lc, lr, As, Ar, Aa, and
H in terms of the design variables, the torque can be computed from Equation 6.19.
This completes the mathematical model for the universal motor, and the PPCEM now
can be implemented to design a family of universal motors around a common product platform.
The initial steps of the PPCEM, Steps 1 and 2, are outlined in the next section.
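As a rough sketch, the complete analysis model derived above can be coded compactly. The material constants and the fixed relative permeability of steel are assumptions (the steel permeability curve of Figure 6.5 and the dissertation's property values are not reproduced in this excerpt), and the trial point is hypothetical:

```python
import math

# Material constants (typical handbook values -- assumptions, since the
# dissertation's property values and the mu_steel(H) curve of Figure 6.5
# are not reproduced here)
RHO_CU = 1.69e-8        # resistivity of copper [Ohm*m]
DENS_CU = 8960.0        # density of copper [kg/m^3]
DENS_STEEL = 7850.0     # density of steel [kg/m^3]
MU_0 = 4.0 * math.pi * 1e-7   # permeability of free space [H/m]
MU_STEEL = 1000.0       # relative permeability of steel, held constant here

def motor_responses(Nc, Ns, Awa, Awf, ro, t, I, L, Vt=115.0, lgap=7e-4):
    """Evaluate power P [W], torque T [Nm], mass M [kg], and efficiency
    eta for one universal motor; all lengths in m, areas in m^2."""
    lr = 2.0 * (ro - t - lgap)                  # armature diameter
    # Winding resistances (rectangular-wrap approximation)
    Ra = RHO_CU * 2.0 * Nc * (L + lr) / Awa
    Rs = RHO_CU * 4.0 * Ns * (L + 2.0 * (ro - t)) / Awf
    # Magnetic circuit: flux = MMF / total reluctance
    lc = math.pi * (2.0 * ro - t) / 2.0         # mean path in the stator
    R_s = lc / (2.0 * MU_STEEL * MU_0 * t * L)  # two parallel stator paths
    R_r = lr / (MU_STEEL * MU_0 * lr * L)       # armature, Ar = lr*L
    R_a = lgap / (MU_0 * lgap * L)              # one air gap, mu_air = 1
    phi = Ns * I / (R_s + R_r + 2.0 * R_a)
    T = (Nc / math.pi) * phi * I                # T = K*phi*I with K = Nc/pi
    P = Vt * I - I ** 2 * (Ra + Rs) - 2.0 * I   # copper and brush losses
    eta = P / (Vt * I)
    M = (math.pi * (ro ** 2 - (ro - t) ** 2) * L * DENS_STEEL     # stator shell
         + math.pi * (lr / 2.0) ** 2 * L * DENS_STEEL             # solid armature
         + DENS_CU * (2.0 * Nc * (L + lr) * Awa
                      + 4.0 * Ns * (L + 2.0 * (ro - t)) * Awf))   # windings
    return P, T, M, eta

# Hypothetical trial point (not the dissertation's solution)
P, T, M, eta = motor_responses(Nc=1000, Ns=60, Awa=0.25e-6, Awf=0.25e-6,
                               ro=0.025, t=0.008, I=4.0, L=0.02)
```

A function of this form, mapping the design variables to {P, T, M, eta}, is exactly the kind of analysis module the compromise DSP formulations in the following sections exercise.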
With a given set of performance requirements and the model derived in Section 6.1.2,
the first step in implementing the PPCEM is to create the market segmentation grid to identify
and map which type of leveraging can be used to meet the overall design requirements and
realize the desired product platform and product family. The market segmentation grid shown in
Figure 6.6 depicts the desired leveraging strategy for this universal motor example. The goal is
to design a motor platform which can be leveraged vertically for different market segments
which are defined by the torque needs of each market, following in the footsteps of the Black &
Decker universal motor example from (Lehnerd, 1987) which was discussed in Section 1.1.1.
In this specific example, ten instantiations of the motor are to be considered; moreover, in order
to reduce cost, size, and weight, it is assumed that the best motor is the one that satisfies its
performance requirements with the least overall mass and the greatest efficiency. Standardized
interfaces will ensure horizontal leveraging across market segments; however, only vertical
scaling is considered in this example.
Figure 6.6 [Figure placeholder: market segmentation grid for the universal motor example, showing vertical scaling from the low-end through the mid-range to the high-end market segments; parametric scale factor: torque = f(stack length)]
Having created the market segmentation grid and identified an appropriate leveraging
strategy and scale factor, Step 2 in the PPCEM is to classify the factors of interest within the
universal motor problem. The design variables (i.e., control factors) and corresponding ranges
3. Cross-sectional area of the wire used on the armature (0.01 ≤ Awa ≤ 1.0 mm²)
4. Cross-sectional area of the wire used on the field poles (0.01 ≤ Awf ≤ 1.0 mm²)
The terminal voltage, Vt, is fixed at 115 volts to correspond to standard household
voltage, and the length of the air gap, lgap, is set to 0.7 mm which is taken to be the minimum
possible air gap length. The minimum air gap length is fixed because the performance equations
derived in Section 6.1.2 indicate that minimizing the air gap length maximizes torque and
power output.
Following in the footsteps of the Black & Decker example, the stack length, L, is the
scale factor for the product family primarily because of its importance in the torque equation,
Equation 6.19, derived in Section 6.1.2, i.e., torque is directly proportional to stack length. To
increase torque across the platform, stack length of the motors is increased while keeping the
other physical parameters (e.g., the outer radius and the thickness) unchanged. Furthermore, it
is assumed that the greatest manufacturing cost savings can be achieved by exploiting the fact
that only the stack length of the motors varies while still providing a variety of torque and power
ratings. The initial range of interest for stack length is taken to be 1 to 20 centimeters; specific
instantiations are computed in Step 5 so as to meet the desired torque requirements for each
platform derivative.
There are a total of six responses (i.e., goals and constraints) which are of interest for
each motor. The following constraint values (Table 6.2) and goal targets (Table 6.3) are specified.
Table 6.2 Constraints for Each Motor

Constraint                    Value
Magnetizing intensity, H      H < 5000
Feasible geometry             ro > t
Power of each motor, P        P = 300 W
Efficiency of each motor, η   η ≥ 0.15
Mass of each motor, M         M ≤ 2.0 kg
The constraint on magnetizing intensity ensures that the magnetic flux within the motor
does not exceed the physical flux carrying capacity of the steel (Chapman, 1991). The
constraint on feasible geometry ensures that the thickness of the stator does not exceed the
radius of the stator, since the thickness is measured from the outside of the motor inward, as
indicated in Figure 6.4a. The desired power for each motor is 300 W which is treated as an
equality constraint to ensure that design variable settings are selected to match this requirement
exactly. A minimum allowable efficiency of 15% and a maximum allowable mass of 2.0 kg are
assumed to define a feasible motor. The efficiency and mass goal targets for each motor are
listed in Table 6.3 along with the desired torque requirement for each motor.
Table 6.3 Goal Targets for Each Motor

Motor   Torque [Nm]   Mass [kg]   Efficiency
1       0.05          0.50        0.70
2       0.10          0.50        0.70
3       0.125         0.50        0.70
4       0.15          0.50        0.70
5       0.20          0.50        0.70
6       0.25          0.50        0.70
7       0.30          0.50        0.70
8       0.35          0.50        0.70
9       0.40          0.50        0.70
10      0.50          0.50        0.70
For the purpose of illustration, the relationship between the design variables, the scale
factor, and the responses is shown in the P-diagram in Figure 6.7.
Figure 6.7 [Figure placeholder: P-diagram for the universal motor model. X = control factors: number of wire turns on the armature, number of wire turns on each field pole, field wire area, armature wire area, motor radius, stator thickness, and current drawn; S = scale factor: stack length; Y = responses: power, torque, mass, efficiency, feasible geometry, and magnetizing intensity]
This concludes Steps 1 and 2 of the PPCEM. Step 3 is not utilized in this particular
example since it is possible to derive expressions for the mean and variance of each response due
to scaling the stack length, as described in the next section. The next step, then, is Step 4:
aggregating the product family specifications and formulating the compromise DSP.
The corresponding compromise DSP formulation for the universal motor product
platform is listed in Figure 6.8. In summary, there are nine design variables, seven constraints,
and two objectives. The two objectives (minimize mass to its target and maximize efficiency to
its target) are assumed to have equal importance to the design, and are thus weighted equally in
Given:
• Parametric (horizontal) scale factor: stack length
• Universal motor model analysis equations, Section 6.1.2
Find:
• The system variables, x:
  - Number of turns on the armature, Nc
  - Number of turns on each pole on the field, Ns
  - Cross-sectional area of the wire on the armature, Awa
  - Cross-sectional area of the wire on the field, Awf
  - Thickness of the stator, t
  - Current drawn by the motor, I
  - Radius of the motor, r
  - Mean of stack length, μL
  - Standard deviation of stack length, σL
Satisfy:
• The system constraints:
  - Magnetizing intensity, H: Hmax ≤ 5000
  - Feasible geometry: t < ro
  - Power output, P: P = 300 W
  - Motor efficiency, η: η ≥ 0.15
  - Mass, M: M ≤ 2.0 kg
• Aggregated torque requirements:
  - Mean torque, μT: μT = 0.2425 Nm
  - Standard deviation of torque, σT: σT = 0.13675 Nm
Minimize:
• Mean mass, target: M = 0.50 kg
Maximize:
• Mean efficiency, target: η = 0.70
Figure 6.8 Universal Motor Product Platform Compromise DSP Formulation for Use
with OptdesX
The aggregated mean torque, μT, and standard deviation of torque, σT, are calculated as the
sample mean and standard deviation of the set of torque requirements {0.05, 0.1, 0.125, 0.15,
0.2, 0.25, 0.3, 0.35, 0.4, 0.5} Nm, assuming a uniform distribution. Power, efficiency, and
mass for the family are assumed to be uniformly distributed with corresponding means and
standard deviations because the distribution of the demand for the motors is assumed to be uniform.
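The aggregation of the ten torque requirements can be reproduced with a few lines; the population standard deviation comes out to approximately 0.1370 Nm, close to the reported value of 0.13675 Nm:

```python
torques = [0.05, 0.1, 0.125, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5]  # Nm

mu_T = sum(torques) / len(torques)
# population (biased) standard deviation of the torque requirements
sigma_T = (sum((t - mu_T) ** 2 for t in torques) / len(torques)) ** 0.5

print(round(mu_T, 4), round(sigma_T, 4))  # 0.2425 0.137
```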
The mean power, mean efficiency, and mean mass are calculated as the power,
efficiency, and mass, respectively, for the mean stack length. The standard deviation of torque is
approximated using a first-order Taylor series expansion, assuming that the standard deviation
of the stack length is small:
σT ≅ |∂T/∂μL| σL [6.37]
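The first-order Taylor series propagation of the standard deviation can be sketched with a central finite difference; the linear torque-versus-length model and its constant below are purely illustrative stand-ins for the full motor analysis:

```python
def sigma_first_order(f, mu_x, sigma_x, h=1e-6):
    """First-order Taylor propagation (cf. Eq. 6.37):
    sigma_y ~= |df/dx at mu_x| * sigma_x, with df/dx from a central difference."""
    dfdx = (f(mu_x + h) - f(mu_x - h)) / (2.0 * h)
    return abs(dfdx) * sigma_x

# Hypothetical linear torque-vs-length model (torque proportional to stack length)
torque_of_length = lambda L: 0.12 * L   # Nm per cm of stack length -- illustrative only

sigma_T = sigma_first_order(torque_of_length, mu_x=2.0, sigma_x=1.5)
print(round(sigma_T, 4))  # 0.18
```

For a linear response the first-order approximation is exact, which is why the result here is simply the slope times the standard deviation of the input.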
Now that the compromise DSP for the family of universal motors is formulated, Step 5 of the
PPCEM can be implemented.
For this example problem, the compromise DSP formulated in Section 6.3 is solved
using the Generalized Reduced Gradient (GRG) algorithm in OptdesX. For a thorough
explanation of OptdesX and the GRG algorithm, see, e.g., (Parkinson, et al., 1998). The
OptdesX software package is used instead of DSIDES in this example since its implementation
for this formulation is more convenient.
Note that in order to develop the product portfolio, the compromise DSP is formulated
with goals for mean torque, μT, and standard deviation of torque, σT, which ensures that the
product portfolio will be able to be instantiated for all ten values of torque within the range of the
scale factor specified by the mean and standard deviation, μL and σL, of the stack length. Also
note that the constraint on magnetizing intensity is formulated to ensure that the entire product
family will meet the constraint on magnetizing intensity individually. This is accomplished by
computing a maximum magnetizing intensity which represents the magnetizing intensity for the
largest instantiation of the product family and is simply evaluated at the upper bound of current.
The compromise DSP in Figure 6.8 is solved using three different starting points in OptdesX:
the lower, middle, and upper bounds of the design variables. The best design variable settings
and responses for the motor platform are listed in Table 6.4. The values for the number of
armature turns and field turns have been rounded to the nearest integer.
For the purpose of verifying the solution itself, convergence plots for mean mass and
mean efficiency are presented in Figure 6.9 for the high, middle, and low starting points used in
the optimization.
Figure 6.9 Convergence Plots for the Universal Motor Product Platform
To develop the individual motors within the scaled product family using the product platform
specifications from Table 6.4, the compromise DSP given in Figure 6.8 is modified such that Nc,
Ns, Awa, Awf, r, and t are held constant at the values listed in Table 6.4, and only the current, I,
and stack length, L, are allowed to vary to meet the original set of torque requirements.
Because the mean and standard deviation for stack length have been found for the product
platform, the initial range of interest for stack length now can be discarded in favor of the range
for the product platform. Using the assumption that length is uniformly distributed, the minimum
and maximum stack lengths are:
μL - √3 σL ≤ L ≤ μL + √3 σL [6.38]
Substituting the values for mean and standard deviation of stack length shown in Table 6.4, the
new lower and upper bounds of interest for stack length are as follows:
0.057 ≤ L ≤ 5.18 cm
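For a uniform distribution on [a, b], the standard deviation is (b - a)/(2√3), so the support extends √3 standard deviations on either side of the mean. As an illustration, the platform mean and standard deviation implied by the reported stack length range can be backed out (Table 6.4 itself is not reproduced in this excerpt):

```python
import math

def uniform_bounds(mu, sigma):
    """Support of a uniform distribution with mean mu and standard deviation
    sigma: since sigma = (b - a) / (2*sqrt(3)), the support is mu +/- sqrt(3)*sigma."""
    half = math.sqrt(3.0) * sigma
    return mu - half, mu + half

# Back out the mean/std implied by the reported range of 0.057 to 5.18 cm
lo, hi = 0.057, 5.18
mu_L = (lo + hi) / 2.0
sigma_L = (hi - lo) / (2.0 * math.sqrt(3.0))
print(round(mu_L, 4), round(sigma_L, 4))  # 2.6185 1.4789

a, b = uniform_bounds(mu_L, sigma_L)      # recovers (0.057, 5.18) up to rounding
```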
Note that because individual torque goals are being set, the goal for standard deviation of
torque is eliminated in the modified compromise DSP formulation. Also, the constraint on
magnetizing intensity is no longer imposed using the maximum magnetizing intensity but rather on
the individual magnetizing intensity, evaluated at the current drawn by each motor.
The product platform is instantiated by selecting appropriate values for the scale factor
(stack length) within the range specified by the mean and standard deviation in Table 6.4 for
each desired set of torque and power requirements. The current also is being allowed to vary
since it is a dependent variable in the system, i.e., it is the amount of current which is drawn by
the motor such that the given torque and power requirements are met for a given motor
geometry. In terms of the principles of robust design, the values shown in Table 6.4 for the
product portfolio are found such that the goal for mean power is on target, while varying the
current allows the standard deviation of power across the instantiated product family to be zero.
The modified compromise DSP for the product family is shown in Figure 6.10 and is again
solved using OptdesX while starting from three different starting points.
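The instantiation step, selecting a stack length within the platform range to meet each torque target with the platform variables held fixed, can be sketched as a one-dimensional root find; the linear torque model and its constant below are hypothetical stand-ins for the full motor analysis:

```python
def solve_stack_length(torque_fn, target, lo, hi, tol=1e-10):
    """Bisection for the stack length L in [lo, hi] with torque_fn(L) == target,
    assuming torque increases monotonically with L over the interval."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if torque_fn(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical platform: torque proportional to stack length (per Eq. 6.19,
# torque scales with L); the constant 0.17 Nm/cm is illustrative only
torque_fn = lambda L: 0.17 * L

targets = [0.05, 0.1, 0.125, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5]
lengths = [solve_stack_length(torque_fn, T, 0.057, 5.18) for T in targets]
```

Each torque target maps to one stack length within the platform range, mirroring how the ten motors are instantiated from the common platform.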
Given:
• Configuration scale factor: stack length
• Universal motor model equations
• Platform settings for Nc, Ns, Awa, Awf, r, and t (Table 6.4)
Find:
• The system variables, x:
  - Stack length, L
  - Current drawn by the motor, I
Satisfy:
• The system constraints:
  - Magnetizing intensity, H: H ≤ 5000
  - Feasible geometry: t < ro
  - Power output, P: P = 300 W
  - Motor efficiency, η: η ≥ 0.15
  - Mass, M: M ≤ 2.0 kg
• Individual torque requirements:
  - Torque, T: T = {0.05, 0.1, 0.125, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5} Nm
• The bounds on the system variables:
  - 0.1 ≤ I ≤ 6.0 A
  - 0.057 ≤ L ≤ 5.18 cm
Minimize:
• Mass, target: M = 0.50 kg
Maximize:
• Efficiency, target: η = 0.70
Figure 6.10 Compromise DSP Formulation for Instantiating the PPCEM Platform for
Use with OptdesX
The resulting values for current and stack length of each motor (PPCEM platform
instantiation) are listed in Table 6.5 along with the corresponding response values. Notice that
the stack length varies from 0.865 cm to 2.95 cm in order to meet the desired torque and
power requirements. The resulting current drawn by the system ranges from 3.39 Amps to
5.82 Amps, which is slightly high but acceptable for a motor with such a large torque. Finally,
notice that only three motors, all at the low end of the torque range, meet the desired efficiency
target of 70%, and only one motor achieves the mass target of 0.5 kg.
It is uncertain whether the failure to achieve the desired mass and efficiency targets is a
property of the system itself or a result of using the PPCEM. Therefore, a family of individually
designed universal motors is developed in the next section to provide a benchmark for
comparison. The differences between this family of benchmark motors and the PPCEM
family of motors are then examined.
6.5 RAMIFICATIONS OF THE RESULTS OF THE ELECTRIC MOTOR
EXAMPLE PROBLEM
In order to generate a family of benchmark motors to compare with the PPCEM family
of motors, the compromise DSP presented in Figure 6.10 is modified such that Nc, Ns, Awa,
Awf, r, and t are all design variables in addition to I and L. The resulting compromise
DSP is shown in Figure 6.11. This compromise DSP is solved using OptdesX for each of the
ten power and torque ratings. Three different starting points (lower, middle, and upper bounds of the design variables) are used for each optimization.
Given:
• Universal motor model analysis equations, Section 6.1.2
Find:
• The system variables, x:
  – Number of turns on the armature, Nc
  – Number of turns on each pole on the field, Ns
  – Cross-sectional area of the wire on the armature, Awa
  – Cross-sectional area of the wire on the field, Awf
  – Thickness of the stator, t
  – Radius of the motor, r
  – Current drawn by the motor, I
  – Mean of stack length, μL
  – Standard deviation of stack length, σL
Satisfy:
• The system constraints:
  – Magnetizing intensity, H: H ≤ Hmax = 5000
  – Feasible geometry: t < ro
  – Power output, P: P = 300 W
  – Motor efficiency, η: η ≥ 0.15
  – Mass, M: M ≤ 2.0 kg
• Individual torque requirement:
  – Torque, T: T = {0.05, 0.1, 0.125, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5} Nm
• The bounds on the system variables:
  100 ≤ Nc ≤ 1500 turns
  1 ≤ Ns ≤ 500 turns
  0.01 ≤ Awa ≤ 1.0 mm2
  0.01 ≤ Awf ≤ 1.0 mm2
  1.0 ≤ ro ≤ 10.0 cm
  0.5 ≤ t ≤ 10.0 mm
  0.1 ≤ I ≤ 6.0 A
  0.057 ≤ L ≤ 5.18 cm
Minimize:
• Mean mass, target: M = 0.50 kg
Maximize:
• Mean efficiency, target: η = 0.70
Figure 6.11 Compromise DSP Formulation for Benchmark Universal Motor Family for Use with OptdesX
The resulting design variable settings and responses for each benchmark motor are
summarized in Table 6.6. Compared to the PPCEM solutions listed in Table 6.5, the number of
armature turns, Nc, is generally lower than the PPCEM platform specification and the number of
field turns, Ns, is slightly higher. The cross-sectional area of the field wire, Awf, is lower than
the PPCEM platform specification; however, the PPCEM platform value for Awa, armature wire
cross-sectional area, is contained within the range observed for the benchmark motors.
Similarly, the ranges for motor radius, r, and thickness, t, for the benchmark motors both span
the values of the PPCEM platform specifications. These motors draw less current—a maximum
of 4.71 Amps—compared to the PPCEM family of motors which draw as much as 5.82 Amps
for the equivalent motor. Finally, note that the range of stack lengths of the benchmark motors
is comparable to the range of stack lengths found using the PPCEM.
Table 6.6 Benchmark Universal Motor Specifications
Regarding the performance of each motor, the desired torque and power requirements
are achieved by each motor; moreover, more of the benchmark motors achieve the mass and
efficiency targets of 0.5 kg and 70%. Unlike the PPCEM family of motors, four of the
benchmark motors achieve the efficiency target of 70%, and four of the motors achieve the
mass target of 0.5 kg. A closer comparison of the performance of the individual motors is
given in Section 6.5.2.
For the purpose of validating the solutions themselves, convergence plots for mass and
efficiency for the 0.25 Nm benchmark motor are presented in Figure 6.12 for the high,
middle, and low starting points. Fairly good convergence is observed in each graph; however,
the final value of efficiency from the high starting point is slightly worse than the final values of
efficiency from the low and middle starting points. Therefore, in situations where all three
points do not converge to the same final value, only the best solution is reported.
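The multistart practice described here can be sketched generically. The crude stochastic local search and the two-basin test function below are illustrative stand-ins for OptdesX and the motor model; only the pattern itself (run from several starts, keep the best) reflects the text.

```python
import random

random.seed(0)  # reproducible illustration

def local_minimize(f, x0, step=0.1, iters=500):
    """Crude stochastic local search (a stand-in for the OptdesX optimizer)."""
    x, fx = x0, f(x0)
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        if fc < fx:                 # accept only improvements
            x, fx = cand, fc
    return x, fx

def multistart(f, starts):
    # Solve from several starting points and report only the best solution,
    # mirroring the practice described above.
    results = [local_minimize(f, x0) for x0 in starts]
    return min(results, key=lambda r: r[1])

# A two-basin test function: different starts can converge to different optima.
f = lambda x: (x**2 - 1.0)**2 + 0.3 * x
x_best, f_best = multistart(f, starts=[-2.0, 0.0, 2.0])
print(f"best x = {x_best:.3f}, f = {f_best:.3f}")
```

A start near +2 gets trapped in the shallower right-hand basin; reporting only the best of the three runs recovers the deeper optimum near x = -1.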
Figure 6.12 Convergence Plots for (a) Mass and (b) Efficiency of the 0.25 Nm Benchmark Motor
6.5.2 Comparison between the Benchmark Universal Motor Family and the PPCEM
Motor Family
The performance of the benchmark family of motors is now compared against the performance of the PPCEM family of universal motors found in Section
6.4. As shown in Table 6.5 and Table 6.6, both families meet their goals for both power and
torque; however, their responses for efficiency and mass differ. The efficiency and mass of each
motor within the benchmark family and the PPCEM family are repeated in Table 6.7 along with
the percentage difference of each response from the benchmark to the PPCEM. For efficiency,
a positive change denotes an improvement from the benchmark to the PPCEM; for mass, a
negative change denotes an improvement. Finally, note that a motor which has
achieved its target mass (0.5 kg) and efficiency (70%) is considered to be equivalent to a motor
with a mass which is lower than the target or an efficiency which is higher than the target.
Table 6.7 Comparison of the Responses between the Benchmark Motor Family and
the PPCEM Motor Family
Three of the PPCEM motors have equivalent efficiency ratings to the corresponding
benchmark motor which produces the same torque; however, only the motor with the smallest
torque (Motor #1) is considered to have a mass equivalent to its corresponding benchmark
motor. As tallied at the bottom of Table 6.7, the PPCEM motors lose 7% in efficiency and
weigh 9% more than the family of benchmark motors, on average. Therefore, while the
family of PPCEM motors based on a common product platform scaled in stack length
are able to achieve the desired range of torque and power requirements, they lose, on
average, efficiency and gain mass in the process. It is therefore left to the
engineering designers and managers to decide if the savings (in inventory, manufacturing, etc.)
from having a family of motors based on a common platform and scaled only in stack length
outweigh the sacrifice in mass and efficiency. Meanwhile, an attempt to improve the
performance of the PPCEM family of motors in relation to the benchmark family of motors is described next.
While investigating this example, it was learned that Black & Decker varies more than
just stack length when it scales its universal motors to meet a variety of power ratings. In
addition to increasing the stack length of the motor, they also allow the number of turns in the
field and armature and the cross-sectional area of the wires in the field and armature to vary
from one motor to the next. Careful inspection of any two motors from one of their power tool
lines (say, corded drills) reveals that this is indeed the case.
The question then becomes: how well do the PPCEM instantiations perform if the
number of field and armature turns (N s and Nc) and the cross-sectional area of the field
and armature wires (A wf and Awa) are allowed to vary in conjunction with varying the
stack length (and current)? The results obtained by solving a new set of compromise DSPs
for each universal motor are listed in Table 6.8. These solutions are obtained by modifying the
compromise DSP in Figure 6.10 to allow Nc, Ns, Awf, and Awa to vary from their platform specifications.
Table 6.8 New PPCEM Universal Motor Instantiations with Varying Numbers of
Turns, Wire Cross-Sectional Areas, and Stack Lengths
Recall that the target for efficiency is 70%, and the target for mass is 0.5 kg. So as
discussed previously, even if a particular motor weighs less than 0.5 kg or has an efficiency
greater than 70%, it is still considered to be equivalent to a motor which is exactly 0.5 kg or has
70% efficiency. With this in mind, the new family of PPCEM motors (allowing the numbers of
turns and wire cross-sectional areas to vary along with stack length and current) and the family
Table 6.9. In both families of motors, the necessary torque and power requirements have been
met, and the two sets of motors are compared solely on their respective efficiencies and masses.
The result is that four of the ten motors are equivalent (identical) since they achieve the targets
for mass and efficiency, and the remaining six motors vary by less than 2%. The highest torque
motor in this new PPCEM family is slightly less efficient (-2.7%) than the corresponding
benchmark motor, but it weighs less (-4.5%). This tradeoff is really negligible since more wire
can be wrapped around the field or armature to improve the efficiency with only a slight increase
in mass. Consequently, by allowing the numbers of wire turns and the wire cross-
sectional areas to vary while also scaling the stack length, the resulting family of motors
is essentially equivalent to the family of individually designed benchmark motors. This is a
very important observation because it indicates that the PPCEM
can be used to obtain a family of motors which sacrifices minimal performance even though the
motors are based, for the most part, on a common platform specification.
Table 6.9 Comparison of Benchmark Designs and New PPCEM Instantiations with
Varying Numbers of Turns, Wire Cross-Sectional Areas, and Stack Lengths
From an engineering standpoint, does it make sense to let Nc, Ns, Awf, and Awa
vary along with the stack length (and current)? From a manufacturing perspective, it makes
perfect sense to allow the number of turns of wire in the armature and field (Nc and Ns,
respectively) to vary since it costs little extra to wrap more (or fewer) turns when the motor is
being produced. From an inventory perspective, however, it would appear that allowing Awf and
Awa to vary is not cost effective since it requires that multiple wire types (i.e., varying cross-
sectional areas) must be kept in stock in order to produce the family of motors. The justification
for allowing Awf and Awa to vary is as follows. As the stack length of the motor increases (with
everything else being held constant), the torque on the motor increases; however, the power
output actually decreases because the copper losses, given by Equation 6.5, increase as the length, and hence the resistance, of each turn of wire grows with the stack length.
One way to compensate for this loss in power is to allow more current to be drawn as
is the case in this example. What is typically done, however, to compensate for this decrease in
power (as the stack length is increased) is to decrease the number of field and armature turns
while simultaneously increasing the field and armature wire cross-sectional areas. This lowers
the resistances Ra and Rs and reduces copper losses without having to draw additional current in
order to maintain the desired output power. An added benefit of this approach is that the
rotational speed of the motor will also increase. In reality, the operating speed of the motor is a
very important design consideration since power and torque are related through the equation:
P = Tω [6.39]
where P is power, T is torque, and ω is the rotational speed of the motor. The speed of the
motor has been neglected in this example since it is fixed once power and torque have been
specified. Based on Equation 6.39, for a fixed power output, as torque increases, the rotational
speed of the motor must decrease. In many cases, however, as power increases and torque
increases, maintaining a consistent operating speed for the motor is often desired. The
additional inventory costs incurred by stocking a wider variety of wire sizes are offset by this
combination of effects.
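The resistance argument can be made concrete with approximate numbers. The winding geometry below (number of turns, mean turn length) is hypothetical; only the standard relations R = ρℓ/A and loss = I²R are assumed.

```python
# Sketch of the wire-sizing argument above: copper loss scales as I^2 * R,
# and R = rho * wire_length / A, so a larger cross-section A lowers the loss.
RHO_CU = 1.7e-8  # resistivity of copper, ohm*m (approximate)

def winding_resistance(n_turns, turn_length_m, area_mm2):
    # total wire length = n_turns * mean turn length; area converted to m^2
    return RHO_CU * n_turns * turn_length_m / (area_mm2 * 1e-6)

def copper_loss(I, R):
    return I**2 * R

# Hypothetical winding: 1000 turns with a 10 cm mean turn length.
R_small = winding_resistance(1000, 0.10, 0.25)   # 0.25 mm^2 wire
R_large = winding_resistance(1000, 0.10, 0.50)   # 0.50 mm^2 wire
print(copper_loss(4.0, R_small), copper_loss(4.0, R_large))
```

Doubling the wire cross-section halves the resistance and hence the copper loss at a given current, which is exactly the compensation mechanism described above.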
But what if Awf and Awa were held fixed and only Nc and Ns were allowed to vary
along with stack length (and current)? The answer is summarized in Table 6.10 wherein the
compromise DSP for the family of motors for instantiating the PPCEM platform, Figure 6.10,
has been modified a third time to allow only Nc, Ns, L and I to vary from the platform
specifications. Comparison of the data in Table 6.10 with Table 6.8 (the PPCEM instantiations
with Awf and Awa varying also) reveals that both families of motors are nearly identical in terms of mass and efficiency.
Table 6.10 Third PPCEM Universal Motor Family with Varying Numbers of Turns
and Stack Lengths
The mass and efficiency of the four families of motors (the benchmark, the PPCEM
varying only L, the PPCEM varying L, Nc, and Ns, and the PPCEM varying L, Nc, Ns, Awf, and
Awa) are summarized in Table 6.11 to facilitate comparison. The percentage difference (%
Diff.) listed in the table is a comparison of each PPCEM instantiation against the corresponding
benchmark motor, i.e., the performance characteristics of a motor which has been individually
designed and optimized. As stated previously, motors which achieve their respective mass and
efficiency targets of 0.50 kg and 70.0% are considered equivalent solutions even if the motor
has a lower mass or higher efficiency. In this regard, some observations based on the data in Table 6.11 are offered below.
• As more variables are allowed to vary from the platform specifications, the performance
of the individual motors improves; the tradeoff is that less and less is common
between the motors within the product family. It then becomes a decision of the
engineering designers and management to evaluate the tradeoffs between commonality
and performance to determine the best family to pursue. This reinforces the statement
that the PPCEM facilitates generating these options but is not necessarily used to select
the best one since it requires information which is beyond the scope of this investigation.
Table 6.11 Efficiency and Mass of Benchmark Motors and PPCEM Motor Platform
Families
• In this example, the PPCEM instantiations in which Nc, Ns, Awf, and Awa are allowed to
vary in addition to the stack length yield an equivalent family of motors to the family of
individually designed benchmark motors. Varying only Nc and Ns in the PPCEM family
while holding Awf and Awa fixed at the platform specification also yields a good family of
motors with minimal sacrifice in performance (mass and efficiency) when compared to
the benchmark family of motors.
In light of these observations, are the solutions obtained from the PPCEM useful?
The answer is undoubtedly yes. The initial family of motors obtained using the PPCEM meets
the range of torque and power requirements which have been specified for the product family.
However, because these motors are based on a common product platform and vary only in
stack length (and current), the motors lose, on average, roughly 10% in both efficiency and mass
compared to the benchmark motors. In an effort to reflect a more realistic set of motors, by allowing the number of turns in the armature
and field to vary in addition to the stack length, the family of motors obtained using the PPCEM
is essentially identical to the equivalent family of benchmark motors. The necessary torque
and power requirements are met with minimal sacrifice in performance (mass and efficiency). If
the wire cross-sectional areas of the wire in the field and armature are allowed to vary in
addition to the number of wire turns in each and the stack length, then the family of motors
obtained using the PPCEM is identical, for all intents and purposes, to the corresponding
family of benchmark motors. Thus, the PPCEM has greatly facilitated generating a variety of
options which the engineering designers and managers can select from based on what is best for
the company.
Are the time and resources consumed within reasonable limits? In general, fewer
analysis calls are required to obtain the PPCEM family of motors than the benchmark family of
motors. To obtain the benchmark motor family, ten optimization problems must be solved
where each optimization involves finding the best settings of eight design variables. For the
PPCEM platform instantiations, the initial family of motors requires solving one optimization
problem to find the values of nine design variables (which includes mean and standard deviation
of stack length for the platform) followed by solving ten optimization problems involving as few
as two (current, I, and stack length, L) and as many as six (current, I, stack length, L, #
armature turns, Nc, # field turns, Ns, cross-sectional areas of the field wire, Awf, and armature
wire, Awa) design variables where the size of the subsequent optimization problems is dependent
on the number of design variables which are being instantiated for each motor from the platform
design.
Since a gradient-based algorithm is used to
optimize the motor platform and individual motors, each iteration of the optimization requires
one analysis call to evaluate the current iterate and two evaluation calls per design variable to
estimate the gradient to determine the next iterate. For the family of benchmark motors, the
number of analysis calls is:

(10 motors) • (n iterations) • [ 1 analysis/iteration + (2 analyses/(d.v.•iteration)) • (8 d.v.) ] = 170n [6.40]

where d.v. is an abbreviation for design variable, and n is the average number of iterations
required to solve each optimization. For the PPCEM family of motors, the number of analysis
calls is:

(m iterations) • [ 1 analysis/iteration + (2 analyses/(d.v.•iteration)) • (9 d.v.) ]
+ (10 motors) • (k iterations) • [ 1 analysis/iteration + (2 analyses/(d.v.•iteration)) • (2 d.v.) ] = 19m + 50k [6.41]
where m is the number of iterations required to find the PPCEM platform design, and k is the
number of iterations required to find each instantiation of the PPCEM platform. On average, n
≈ 10, m ≈ 12, and k ≈ 5; therefore, the average number of analysis calls required to obtain the
benchmark motor designs is 170•10 = 1700 while the average number of analysis calls required
to find the PPCEM motor designs is 19•12 + 50•5 = 478, a difference of 1222 analyses. So
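The analysis-call bookkeeping of Equations 6.40 and 6.41 is easy to verify directly:

```python
# Direct check of the analysis-call counts of Equations 6.40 and 6.41.
def calls_per_iteration(n_dv):
    # 1 call for the current iterate + 2 calls per design variable (gradient)
    return 1 + 2 * n_dv

def benchmark_calls(n):
    # ten motors, eight design variables each, n iterations on average
    return 10 * n * calls_per_iteration(8)

def ppcem_calls(m, k, inst_dv=2):
    # one platform optimization (9 d.v., m iterations) plus ten
    # instantiations (inst_dv d.v. each, k iterations on average)
    return m * calls_per_iteration(9) + 10 * k * calls_per_iteration(inst_dv)

print(benchmark_calls(n=10))               # 1700
print(ppcem_calls(m=12, k=5))              # 478
print(ppcem_calls(m=12, k=5, inst_dv=6))   # 878, the six-variable variant
```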
even if as many as six design variables are allowed to vary between PPCEM instantiations from
the product platform, then by replacing (2 d.v.) in Equation 6.41 with (6 d.v.), it would still only
require about 19•12 + 130•5 = 878 analysis calls which is slightly more than half the analysis
calls required to find the benchmark motor designs, and this estimate does not even take into
consideration the fact that each optimization is solved from three different starting points. So by
using the PPCEM to first find a common motor platform design and then scaling the
platform in the stack length, an equivalently good family of motors can be obtained with
fewer analysis calls than if each motor were designed individually. Plus, there is the
added benefit that the family of motors found using the PPCEM has more in common from one
motor to the next.
Is the work grounded in reality? The problem formulation has been developed from
an electric motor for a 3/8” variable speed, reversible, corded Black & Decker drill, model
#7190. The drill is rated at 288 W and 1200 rpm, drawing 3.5 Amps of current and is at the
low-end of their product line. The gear reduction on the motor is estimated to be 10:1;
therefore, the operating speed of the motor itself is 12,000 rpm. Using Equation 6.39, the
operating motor torque is computed as being 0.23 Nm. Assuming an input voltage of 115 V,
the input power is 402.5 W when drawing 3.5 Amps of current (see Equation 6.2). Since the
output power is 288 W, the efficiency of the motor is computed using Equation 6.14 and is
71.6%. The mass of the motor is 0.496 kg. Consequently, the target values of 300 W power,
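These back-of-envelope figures are easy to reproduce (the 115 V supply voltage is the assumption stated in the text):

```python
import math

# Back-of-envelope check of the Black & Decker drill figures quoted above.
P_out = 288.0                 # rated output power, W
rpm_motor = 1200.0 * 10       # drill speed times the estimated 10:1 reduction
V, I = 115.0, 3.5             # assumed supply voltage, V, and rated current, A

omega = rpm_motor * 2.0 * math.pi / 60.0   # rotational speed, rad/s
T = P_out / omega                          # Equation 6.39: P = T * omega
P_in = V * I                               # input power (Equation 6.2)
eta = P_out / P_in                         # efficiency (Equation 6.14)

print(f"T = {T:.2f} Nm, P_in = {P_in:.1f} W, eta = {eta:.1%}")
```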
70% efficiency, 0.5 kg, and 0.05 Nm to 0.5 Nm of torque are built around the performance
ratings for this motor, and the motor from this drill is taken as the mid-range motor in a family of
ten motors. The pertinent motor specifications (design variable settings) for this torque and power
rating are measured from the actual motor.
It is difficult to count the number of armature turns in the actual motor; the best guess is around
750 turns. Can the analytical model be used to predict the performance of the actual
motor given these specifications? Unfortunately, the analytical model developed in this
chapter cannot be used to predict the performance of the actual motor given these
specifications. There are two discrepancies which arise between the model and the actual
motor. First, the real motor from the drill is not a true universal motor since it appears to be
designed for AC use only (as stated on the exterior of the box). Second, the number of poles
on the armature is twelve; in a real universal motor, the number of poles in the armature is
typically two which is an important assumption used when deriving the torque equations
(Equations 6.19-6.24) for the motor. However, these specifications can still be used to gauge the reasonableness of the analytical model.
In general, the values for stator thickness, motor radius, stack length, and current are in
close agreement with the values obtained using the PPCEM, see Table 6.5, Table 6.6, Table
6.8, and Table 6.10. The number of field turns is on the low end while the number of armature
turns is on the high end compared to this actual motor. Finally, the wire cross-sectional areas in
the actual motor are slightly smaller than the values obtained using the analytical model for the
motor. These discrepancies are discussed in more detail in the context of the limitations and shortcomings of the model which follow.
What are the limitations of the analytical model and problem formulation
developed in this chapter? There are two noteworthy shortcomings to the analytical model
and problem formulation presented in this chapter for the family of universal electric motors.
First, the speed of the motor has not been taken into consideration in the problem formulation as
discussed previously. By specifying power and torque requirements for each motor, the
resulting rotational speed of the motor is fixed through Equation 6.39. It is important to ensure
that, as the torque of the motor increases, the power also increases so that the operating
speed of the motor does not decrease significantly. For purposes of this demonstration, this is
not a major concern; however, a more realistic representation of the motor problem formulation
would account for the operating speed of each motor.
The second notable shortcoming of the analytical model relates to the large numbers of
armature turns in each motor and the related discrepancies between wire cross-sections and
number of field turns. Can 1062 turns of 0.241 mm2 wire be packed into a cylindrical volume
with a radius of ≈ 2.0 cm (the motor radius minus the thickness of the PPCEM platform listed in
Table 6.4) and a length of 2.62 cm? The answer depends on how tightly wires can be packed
around the armature and how much steel is used within the poles of the armature. The complexity
of such an analysis was considered to be beyond the scope of this example; however, it is
recommended to include these space considerations in future studies in order to improve the
fidelity of the model. Furthermore, decreasing the number of armature turns is liable to increase
the required number of field turns (which are considered to be low given that the Black &
Decker motor has 135 turns) in order to maintain sufficient magnetic flux through the motor.
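A rough feasibility check of the winding-space question is straightforward. The available winding window below (a full circle of the quoted radius) is a crude assumption; a real slot analysis would subtract the steel of the armature poles.

```python
import math

# Rough feasibility check: what fill factor would 1062 turns of
# 0.241 mm^2 wire require in the available winding window?
n_turns = 1062
wire_area = 0.241              # wire cross-section, mm^2
window_radius = 20.0           # ~2.0 cm (radius minus stator thickness), in mm

copper_area = n_turns * wire_area          # total copper cross-section, mm^2
window_area = math.pi * window_radius**2   # available window area, mm^2
fill_factor = copper_area / window_area

print(f"copper = {copper_area:.0f} mm^2, window = {window_area:.0f} mm^2, "
      f"fill factor = {fill_factor:.0%}")
```

A fill factor near 20% of the gross circle sounds comfortable, but round wire rarely packs beyond roughly half of the space actually left free by the steel, so the answer does indeed hinge on the armature's iron content, as noted above.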
Placing space constraints on the amount of wire in the field and armature should also have the
effect of decreasing Awf and Awa in addition to making the number of field and armature turns
more realistic.
Finally, do the benefits of the work outweigh the cost? The answer to this last
question is also affirmative. Regardless of whether the family of benchmark motors or the
PPCEM family was being designed, the analytical model would still have been constructed. The
only addition to the model required to use the PPCEM is deriving an expression for the
standard deviation of torque based on variations in the motor stack length. This is achieved by
means of a first-order Taylor series approximation, Equation 6.37, which requires taking the
derivative of the torque equation, Equation 6.19, with respect to stack length. Once this is
accomplished, using the PPCEM yields a family of motors with high commonality and negligible
loss in performance compared to the individually designed
benchmark motors. Thus, the PPCEM facilitates the exploration of product platform concepts
which can be scaled into an appropriate family of products, providing initial “proof of concept”
that the PPCEM does what it is intended to do. A look ahead to the next example used to verify
the PPCEM is given at the end of this section.
The design of a family of universal electric motors has been utilized to demonstrate how
the PPCEM can be used to design a scalable product platform for a family of products.
Furthermore, the application of the market segmentation grid to help identify and map vertical
leveraging of a product platform for a wide range of performance/price tiers within a given
market segment is illustrated in this example. From the simple analytical model derived in
Section 6.1.3, it has also been shown in this example that for small-scale problems such as this
one, Step 3 of the PPCEM—building metamodels—is not always necessary provided that
analytical expressions exist, or can be derived, to relate variations in the scale factor (the stack
length of the motor in this example) to variations in product performance (goals and constraints).
As evidenced by the discussion in the previous section comparing the individually designed
benchmark motors and the PPCEM platform motors, the PPCEM has been implemented to
design a family of universal motors (based on a scalable product platform) which is capable of
meeting a wide range of torque requirements with minimal compromise in efficiency and mass.
Figure 6.13 (roadmap of the two example problems):
Family of Universal Electric Motors (Chapter 6). Demonstrated: vertical leveraging; parametric scale factor: stack length; no metamodels; robust design implementation: separate goals for "bringing mean on target" and "minimize the variation"; OptdesX solver. Tests: H1, SH1.1, SH1.2, and SH1.3.
Family of General Aviation Aircraft (Chapter 7). To demonstrate: horizontal leveraging; configurational scale factor: # passengers; kriging metamodels to facilitate robust design and concept exploration; robust design implementation: design capability indices; use of DSIDES. Tests: H1, SH1.1, SH1.2, SH1.3, and H2.
As shown in Figure 6.13, in Chapter 7 the PPCEM is applied to a larger, more
complex problem, namely, the design of a family of General Aviation aircraft. In the General
Aviation aircraft example, all of the steps in the PPCEM are implemented, including the
construction of kriging metamodels, in order to verify
further Hypothesis 1, its related sub-hypotheses, and Hypothesis 2. The details of the General
Aviation aircraft example are discussed at the beginning of the next chapter.
CHAPTER 7
In this chapter, the PPCEM is applied in full to the design of a family of General
Aviation aircraft (GAA) for final verification of the method. The layout of this chapter parallels
that of the universal motor case study in the previous chapter. Motivation and background for
the GAA are given in Section 7.1, along with Step 1 of the PPCEM, namely, creation of an
appropriate market segmentation grid for the family of GAA to accommodate the problem
requirements. Based on the desired horizontal leveraging strategy, the scale factor for the
problem is taken as the number of passengers on the aircraft as explained in Section 7.2; the
control factors and responses for the family of GAA also are described in Section 7.2 as part of
Step 2 of the PPCEM. Kriging metamodels then are created for the mean and standard
deviation of each response for the family of GAA in Section 7.3. After validating the accuracy
of the metamodels, a compromise DSP for the family of aircraft is formulated in Section 7.4 and
exercised in Section 7.5 to develop the GAA platform portfolio. Ramifications of the results
and comparison of the PPCEM solutions against individually designed benchmark aircraft are
discussed in Section 7.6. In addition, a product variety tradeoff study is performed, making use
of the non-commonality indices (NCI) and performance deviation indices (PDI) to examine the tradeoff between commonality and performance within the product family.
As shown in Table 7.1, all but Hypothesis 3 are verified further in this chapter. As
mentioned in the preceding paragraph, the market segmentation grid is used to identify a
horizontal leveraging strategy for the GAA product family providing further verification of Sub-
Hypothesis 1.1. The use of design capability indices is demonstrated to aggregate the product
family specifications (SH1.3) and facilitate the development of a product platform which is
robust (SH1.2) to variations in the number of passengers on the aircraft, the scale factor.
Furthermore, kriging metamodels are exploited in this chapter to facilitate the implementation of
robust design within the PPCEM, providing further support for Hypothesis 2. Space filling
designs are utilized to create these metamodels; however, only one type of space filling design is
used.
Table 7.1 (excerpt):
SH1.3 Aggregating product family specifications (verified in Chapters 6 and 7)
H2 Utility of kriging for metamodeling deterministic computer experiments (verified in Chapter 7)
The findings and lessons learned in this example are summarized at the
end of the chapter in Section 7.7. The objective in the summary is to describe how Hypothesis
1 has been verified further through this example. A brief look ahead to Chapter 8 is given in this
last section.
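Since kriging metamodels play a central role in this chapter, a minimal sketch of an ordinary kriging (Gaussian-process) interpolator may be useful. The correlation parameter θ and the one-dimensional test function below are fixed illustrative choices; in practice θ is fit to the data by maximum likelihood, and the implementation used for the GAA metamodels is described in Section 7.3.

```python
import numpy as np

# Minimal ordinary-kriging predictor with a Gaussian correlation function.
def kriging_fit(X, y, theta=10.0, nugget=1e-10):
    X, y = np.asarray(X, float), np.asarray(y, float)
    d2 = (X[:, None] - X[None, :]) ** 2           # pairwise squared distances
    R = np.exp(-theta * d2) + nugget * np.eye(len(X))
    Ri = np.linalg.inv(R)
    ones = np.ones(len(X))
    mu = (ones @ Ri @ y) / (ones @ Ri @ ones)     # generalized least-squares mean

    def predict(x):
        # BLUP: constant mean plus a correlation-weighted correction
        r = np.exp(-theta * (np.asarray(x, float) - X) ** 2)
        return mu + r @ Ri @ (y - mu)
    return predict

# Sample a deterministic "computer experiment" at five sites and interpolate.
X = [0.0, 0.25, 0.5, 0.75, 1.0]
y = [np.sin(2 * np.pi * x) for x in X]
predict = kriging_fit(X, y)
```

Because the sampled responses come from a deterministic computer code, the predictor interpolates the data exactly (up to the tiny numerical nugget), which is precisely the property that makes kriging attractive for deterministic computer experiments.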
7.1 STEP 1: DEVELOPMENT OF THE MARKET SEGMENTATION GRID
Before developing the market segmentation grid, an overview of the General Aviation
aircraft example is given in the next section. This is followed in Section 7.1.2 with a brief
overview of the phases of aircraft design. The market segmentation grid for the General
Aviation aircraft example is presented in Section 7.1.3 along with the baseline design which serves as the starting point for this example.
What is a General Aviation aircraft? The term General Aviation encompasses all
flights except military operations and commercial carriers. Its potential buyers form a diverse
group that includes weekend and recreational pilots, training pilots and instructors, traveling
business executives and even small commercial operators. Satisfying a group with such diverse
needs and economic potential poses a constant challenge for the General Aviation industry
because it is impossible to satisfy all of the market needs with a single aircraft. The present
financial and legal pressures endured by the General Aviation sector make small production
runs of specialized models unprofitable. As a result, many General Aviation aircraft are no
longer being produced, and the few remaining models are beyond the financial capability of all
but a few potential buyers.
In an effort to revitalize the General Aviation sector, the National Aeronautics and
Space Administration (NASA) and the Federal Aviation Administration (FAA) recently
sponsored a General Aviation Design Competition (NASA and FAA, 1994). For this work, a simplified version of the competition requirements is considered.
One solution to the GAA crisis is to develop a family of aircraft which can be adapted
easily to satisfy distinct groups of customer demands. Therefore, the purpose in this example is to design:
a family of aircraft scaled around the two, four, and six seater configurations
using the PPCEM. This family of General Aviation aircraft must be capable of
satisfying a range of customer requirements for performance, purchase
price and operating cost while meeting desired technical and economic
considerations.
In order to realize this objective and demonstrate the application of the PPCEM, a brief
overview of aircraft design is given in the next section. This is followed in Section 7.1.3 with the
development of the market segmentation grid—Step 1 of the PPCEM—for the family of GAA.
How does one go about designing an aircraft? Aircraft design traditionally is divided
into three phases, namely, conceptual, preliminary, and detailed design as illustrated in Figure
7.1. If manufacturing design is considered as a part of the design process, design for production
can be added as a fourth phase. The first two phases of aircraft design, the conceptual and
preliminary phases, are sometimes combined and called advanced design or synthesis in the
aerospace industry, while follow-on phases are called project design or analysis. More detailed
descriptions of the decisions made in each phase and the disciplines involved in aircraft design
can be found in, e.g., (Bond and Ricci, 1992). Specifically, the efforts in this example are
directed toward utilizing the computer within the traditional conceptual phase of aircraft design:
Conceptual Design Phase: In this phase, the general size and configuration of the aircraft
is determined. Parametric trade studies are conducted using preliminary estimates of
aerodynamics and weights to determine the best wing loading, wing sweep, aspect ratio,
thickness ratio, and general wing-body-tail configuration. Different engines are
considered and the thrust loading is varied to obtain the best match of airframe and
engine. The first look at cost and manufacturing possibilities is made at this time. The
feasibility of the design to accomplish a given set of mission requirements is established,
but the details of the configuration are subject to change.
[Figure 7.1: The phases of aircraft design (conceptual, preliminary, and detailed design) leading from the platform concept to the production baseline.]
In general, in the early stages of aircraft design, the aircraft concept is synthesized at the system
level. Top-level design specifications are used as the starting point for the preliminary design at
the subsystem level, and form the basis for the specifications (functional properties) that are
developed during the preliminary design phase. These top-level design specifications include
variables such as aspect ratio, thickness ratio, and wing-body-tail configuration. The top-level
design specifications can be continuous (e.g., aspect ratio = 7-11) or they can be discrete
design concepts (e.g., single or twin engine, number of propeller blades, high or low
wing, and fixed or retractable landing gear). The settings of the top-level design specifications
define the conceptual baseline which becomes the configuration input for preliminary design,
where the system is decomposed for more sophisticated analysis by discipline, subsystem, or
component. The reader is referred to (Koch, 1997) for more discussion on the resulting decomposition.
Several synthesis and analysis programs have been created to facilitate the conceptual
and preliminary design of aircraft and hence the development of these top-level design
specifications. One such program is entitled FLOPS (FLight OPtimization System (McCullers,
1984)), which facilitates the preliminary design and evaluation of advanced aircraft concepts. Another program is called
GASP (General Aviation Synthesis Program (NASA, 1978)). GASP is a computer program
which performs tasks specifically associated with the conceptual design of General Aviation
aircraft; consequently, it has been selected as the synthesis program for use in this example.
What is GASP and how does it work? GASP is a synthesis and analysis computer
program which facilitates parametric studies of small aircraft. GASP specializes in small fixed-wing
aircraft employing propulsion systems varying from a single piston engine with a fixed pitch
propeller through twin turboprop powered transport type aircraft.
GASP contains an overall control module and six technology submodules which perform the
various independent studies required in the design of General Aviation or small transport type aircraft:
• Aerodynamics Module: Lift and drag coefficients, lift curve slope computation due to
aspect ratio, sweep angle, Mach number, and induced drag are determined.
• Weight and Balance Module: Weight trend coefficients for gross weight, payload,
and aircraft geometry are used to estimate the center of gravity travel of the aircraft,
fuel tank size, and compartment weight.
• Mission Performance Module: All of the mission segments such as taxi, take off,
climb, cruise and landing are analyzed including the total range. The program also
calculates the best rate of climb, high speed climb, and other characteristics.
• Economics Module: Manufacturing and operating costs are estimated based on the
date of inception of the program (i.e., in 1970s dollars).
Input variables for GASP are general indicators of aircraft type, size, and performance.
The numerical output from GASP includes many aircraft design characteristics such as range,
direct operating cost, maximum cruise speed, and lift-to-drag ratio. For conceptual design of an
aircraft, GASP is used to find appropriate settings for the top-level design specifications that
satisfy the design requirements. By utilizing GASP as the simulation package within the
PPCEM, the PPCEM can be used to develop a set of top-level design specifications for a
suitable aircraft platform which is good for the entire family of aircraft as shown in Figure 7.1.
The first step in the PPCEM is to develop the market segmentation grid which is accomplished in the next section.
7.1.3 The Market Segmentation Grid for the GAA Example Problem
As stated at the beginning of this section, the objective in this example is to design a
family of GAA around the two, four, and six seater configurations. The market segmentation
grid shown in Figure 7.2 depicts a potential leveraging strategy for the GAA example. The goal
is to design a low end aircraft platform which can be leveraged across three different market
segments which are defined by the capacity of the aircraft (i.e., two people, four people, and six
people) similar to the Boeing 747 series of aircraft (Rothwell and Gardiner, 1990). Each
aircraft could eventually be vertically scaled through the addition and removal of features to
increase its performance and attractiveness to a customer base willing to pay a higher price.

[Figure 7.2: Market segmentation grid for the GAA family, with low end, mid-range, and high end segments leveraged across the two, four, and six seater configurations.]
The baseline configuration is derived from an existing General Aviation aircraft, namely,
the Beechcraft Bonanza B36TC. The Bonanza is a four-to-six seat, single-engine business and
utility aircraft as illustrated in Figure 7.3 and is one of the most popular GAA sold in recent
years. The aircraft cruises at altitudes up to 25,000 ft with a speed of 200 knots and has a
minimum range of 956 nautical miles (at 79% of power).
power). Furthermore, Bonanza’s mission and performance characteristics are close to those
specified in the GAA competition (NASA and FAA, 1994). Taking the Bonanza as the starting
point, several calculations can be performed to determine the GASP input data, specifically for
aerodynamics, engine performance, and stability control parameters, according to the mission
requirements. These specifications, when used in GASP, are summarized in Table 7.2, and the
corresponding top-level design specifications for this baseline aircraft are listed in Table 7.3.
Table 7.3 Baseline Model Specifications
Wing:            Span 38.3 ft; Mean Chord 5.09 ft; 1/4 Chord Sweep 4.0°; Taper Ratio 0.46; Root Thickness 0.15; Wing Loading 20.5 lb/ft2; Wing Fuel Volume 180 gal
Horizontal Tail: Aspect Ratio 5.08; Area 45.2 ft2; Span 15.15 ft; Mean Chord 3.09 ft; Thickness 0.09
Vertical Tail:   Aspect Ratio 1.07; Area 20.8 ft2; Span 4.71 ft; Mean Chord 4.61 ft; Thickness 0.07
Propulsion:      Engine Power 350 HP turbocharged; Static Thrust/Wt 0.339; Activity Factor 110; Propeller Diameter 6.30 ft; Number of Blades 3
The flight mission for the family of GAA in this example is illustrated in Figure 7.4. As
specified in the GAA competition guidelines (NASA and FAA, 1994), a General Aviation
246
aircraft is required to fly at 150-300 kts (Mach 0.24 to 0.48) for a range of 800-1000 nautical
miles. The mission profile shown in Figure 7.4 has a (baseline) cruise speed of Mach 0.31 (≈
200 kts) and a range of 900 n.m. (nautical miles). In the diagram, FAR 23 represents Part 23
of the Federal Aviation Requirements which designates acceptable noise levels during aircraft takeoff.
Based on the GAA market segmentation grid in Figure 7.2, the family of aircraft is scaled
around the number of people on the aircraft; hence, the number of passengers is the scale factor
in this problem. The effect of the number of passengers on the length of the fuselage and sizing
of the aircraft is discussed more in the next section wherein the targets and requirements for
each aircraft and the design variables for this example are described.
7.2 STEP 2: GAA FACTOR CLASSIFICATION
Having created the market segmentation grid and identified an appropriate leveraging
strategy and scale factor, the next step in the PPCEM is to classify the factors within the GAA
problem. The general configuration of each aircraft has been fixed at three propeller blades,
high wing position, and retractable landing gear based on previous work (Simpson, 1995). The
design variables (i.e., control factors) and corresponding ranges of interest in this study are as
follows:
1. Cruise speed, CSPD ∈ [Mach 0.24, Mach 0.48]; baseline is Mach 0.31
2. Wing aspect ratio, AR ∈ [7, 11]
3. Propeller diameter, DPRP ∈ [5.0 ft, 5.96 ft]
4. Wing loading, WL ∈ [19 lb/ft2, 25 lb/ft2]
5. Engine activity factor, AF ∈ [85, 110]
6. Seat width, WS ∈ [14.0 in, 20.0 in]
A brief description of the importance and effects of each of these variables is included in Section
F.1.
There are a total of nine responses (i.e., requirements and goals) which are of interest
for each aircraft: takeoff noise, direct operating cost, ride roughness, empty weight, fuel
weight, purchase price, maximum cruise speed, flight range, and lift/drag ratio. In general, it is desired to:
• lower direct operating cost, purchase price, empty weight and fuel weight to their
targets;
• raise maximum cruise speed, flight range, and lift/drag ratio to their targets; and
• meet constraints on the maximum takeoff noise, ride roughness, direct operating cost,
empty weight, and fuel weight, and minimum flight range.
The constraint values and target values for the goals employed in this example are listed in Table
7.4 and Table 7.5, respectively. As such, these constraints and targets define each market
niche for each of the three aircraft. As shown in Table 7.4, the constraint values for take-off
noise, direct operating cost, ride roughness, aircraft empty weight, and range are the same for
each aircraft within the family; only the fuel weight constraint varies for each aircraft, allowing
a larger fuel weight for the larger aircraft.
Table 7.4 Constraints for the Two, Four, and Six Seater GAA
Takeoff noise ≤ 75; direct operating cost ≤ $80/hr; ride roughness ≤ 2.0; empty weight ≤ 2200 lbs; range ≥ 2000 n.m.; fuel weight ≤ 450 lbs (two seater), 475 lbs (four seater), 500 lbs (six seater)
The goal targets which define each market niche are listed in Table 7.5. A compromise
DSP is used to determine the settings of the six design variables which lower fuel weight, empty
weight, direct operating cost, and purchase price to their targets or below while raising
maximum lift/drag, cruise speed, and range to their targets. The compromise DSP formulation is described subsequently as Step 4 of the PPCEM.
Table 7.5 Goal Targets for the Two, Four, and Six Seater GAA
Based on the leveraging strategy shown in Figure 7.2, the number of people in the
aircraft is taken as the scale factor in the design process, ranging from a minimum of 2 to a
maximum of 6. Furthermore, it is assumed that the demand for the aircraft is uniform; therefore,
the scale factor—the number of passengers—is assumed to be uniformly distributed and so are
the corresponding responses. Taking the number of passengers as a scale factor, the length of
the central portion of the fuselage of the aircraft is scaled automatically within GASP to
accommodate the necessary number of passengers (plus one pilot). Because the length of the
aircraft is fixed once the number of people is specified, the mean and variance of the
scale factor are known in this example unlike in the universal motor example.
Based on this factor classification scheme, the P-diagram for the GAA example is shown in
Figure 7.5.

[Figure 7.5: P-diagram for the GAA example. Control factors X: cruise speed, aspect ratio, propeller diameter, wing loading, engine activity factor, and seat width. Scale factor S: number of passengers. These factors enter the GASP simulation, which produces the responses Y: takeoff noise, ride roughness, empty weight, fuel weight, purchase price, direct operating cost, maximum range, maximum speed, and maximum lift/drag.]
As shown in the figure, there are six control factors (design variables), one scale factor (the
number of passengers), and nine responses (constraints and goals). The process of constructing
kriging metamodels which relate the control and scale factors to each of the responses is
The next step in the PPCEM is to build and validate metamodels of the
analysis/simulation routine, i.e., GASP. In particular, robustness models are constructed for
each of the nine responses, yielding a total of 18 metamodels: one metamodel for the mean and
one for the standard deviation of each response. Why use metamodels in the GAA example? The impetus is
two-fold. First, GASP provides a “black-box” type analysis for sizing an aircraft. The
computation time for GASP is about 45 seconds which does not necessarily warrant the use of
metamodels; however, after multiplying this number by three—the number of aircraft in the
family—and considering the number of design scenarios that are to be considered, the
computational expense adds up quickly. Moreover, it is difficult to estimate the mean and
variance of each response for the family of aircraft without the metamodels. It is much more
efficient to build metamodels for the mean and deviation of each response and use them to
search the design space than it is to use GASP directly in the search for a good platform design.
The product array approach is employed to build kriging metamodels of the mean and standard
deviation of each response to variations in the number of passengers in each aircraft. This
approach is illustrated in Figure 7.6. The outer array is based on a randomized orthogonal array
of 64 points (n = 64). The use of the randomized orthogonal array is based, primarily, on ease
of generation and available sample sizes; it is also based, in part, on its performance in the
kriging/DOE study in Chapter 5 even though a six variable test problem was not utilized in the
study. To compare the sample size, a half-fraction CCD for six factors would contain 45
points, and a full-fraction CCD would contain 77 points. The kriging models employ the
Gaussian correlation function, Equation 2.16, because this correlation function yielded the
lowest RMSE and max. error, on average, in the kriging/DOE study in Chapter 5 (see Section
5.3 in particular).
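To make the modeling concrete, the following is a minimal sketch (not the dissertation's implementation) of a kriging predictor with the Gaussian correlation function R(x, x') = exp(−Σk θk(xk − x'k)²); the correlation parameters θ are simply fixed here, whereas the dissertation estimates them by maximum likelihood (Section A.2.1).

```python
import numpy as np

def fit_kriging(X, y, theta):
    """Fit a simple kriging model using the Gaussian correlation function
    R_ij = exp(-sum_k theta_k * (x_ik - x_jk)^2) (cf. Eq. 2.16).
    theta is fixed here; the dissertation estimates it by maximum likelihood.
    """
    n = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]          # pairwise differences
    R = np.exp(-np.sum(theta * diff**2, axis=2))  # correlation matrix
    R += 1e-10 * np.eye(n)                        # tiny nugget for stability
    R_inv = np.linalg.inv(R)
    ones = np.ones(n)
    beta = (ones @ R_inv @ y) / (ones @ R_inv @ ones)  # constant trend term
    gamma = R_inv @ (y - beta * ones)
    return beta, gamma, X, theta

def predict_kriging(model, x):
    """Predict at point x: yhat(x) = beta + r(x)' R^-1 (y - beta*1)."""
    beta, gamma, X, theta = model
    r = np.exp(-np.sum(theta * (X - x)**2, axis=1))
    return beta + r @ gamma
```

Because kriging interpolates, the predictor reproduces the training responses (up to the small nugget), which is a quick sanity check on any implementation before it is used to search the design space.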
[The figure shows the product array: an outer array of n = 64 combinations of the design variables (cspd, ar, dprp, wl, af, ws) is crossed with an inner array of the three scale factor settings (PAX = 1, 3, 5). For each outer-array run i, the responses y_{j,i,1}, y_{j,i,3}, y_{j,i,5} are reduced to a mean μ_{j,i} and deviation σ_{j,i}, and kriging models ŷ_μj = f(cspd, ar, dprp, wl, af, ws) and ŷ_σj = f(cspd, ar, dprp, wl, af, ws) are fit for each response j ∈ {noise, wemp, doc, rough, wfuel, purch, range, vcrmx, ldmax}.]
Figure 7.6 Product Array Approach for Constructing GAA Kriging Models
Because there is only one scale factor which has three possible settings, the inner array
shown in Figure 7.6 simply contains three runs, one for each possible value of the scale factor.
Hence, GASP is executed 3n times in order to build the kriging models for the mean and
deviation of the GAA responses. Notice that the variable PAX—the number of passengers—
varies from 1 to 5 in the figure. This is because the total number of people on the aircraft is
equal to the number of passengers plus 1 pilot; varying PAX from one to five is the same as
varying the number of people on the aircraft from two to six, allowing a family of aircraft to be
designed around a common platform.
After varying the number of passengers for each combination of the design variables as
specified by the outer array, the mean and standard deviation of each response are computed
for each run using Equations 7.1 and 7.2 which are as follows:
• Mean: μ_{y_j,i} = (y_{j,i,1} + y_{j,i,3} + y_{j,i,5})/3, j = {noise, wemp, ..., ldmax}, i = {1, 2, ..., n} [7.1]
• Std. Dev.: σ_{y_j,i} = (y_{j,i,5} − y_{j,i,1})/√12, j = {noise, wemp, ..., ldmax}, i = {1, 2, ..., n} [7.2]
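As a concrete sketch, Equations 7.1 and 7.2 reduce the three inner-array responses for one outer-array run to a mean and a uniform-distribution standard deviation; the numbers below are illustrative, not GASP output.

```python
import math

def inner_array_stats(y1, y3, y5):
    """Mean and standard deviation of one response across the inner array.

    Eq. 7.1: mu    = (y1 + y3 + y5)/3
    Eq. 7.2: sigma = (y5 - y1)/sqrt(12), the standard deviation of a uniform
    distribution spanning the extreme responses (PAX = 1 and PAX = 5).
    """
    mu = (y1 + y3 + y5) / 3.0
    sigma = (y5 - y1) / math.sqrt(12.0)
    return mu, sigma

# Illustrative DOC values [$/hr] at PAX = 1, 3, 5 for one outer-array run:
mu_doc, sigma_doc = inner_array_stats(60.0, 62.0, 64.0)
```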
Computation of the standard deviation assumes a uniform distribution of the response because
the number of passengers is assumed to vary uniformly over the design space. As an example,
the mean and standard deviation of the direct operating cost, DOC, for the 3rd experimental
run are computed from the three DOC values obtained at PAX = 1, 3, and 5.
It is in this manner that the means and deviations for each response for each experimental run in
the outer array are computed for a given experimental design. Kriging metamodels then are
constructed for the mean and deviation of each response, resulting in 18 metamodels. The
kriging algorithm described in Section A.2.1 is used to fit the model; the fitted values (MLE
estimates) for the “best” kriging model for each response are listed in Section F.2.
A set of 1000 validation points from a random Latin hypercube is used to assess the
accuracy of the GAA kriging models. The maximum error and root mean square error (RMSE)
based on the set of validation points for the kriging models based on the 64 point orthogonal
array are summarized in Table 7.6; both raw values and percentages (of the sample range) are
listed.
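The two error measures reported in Table 7.6 can be computed as follows; this is a generic sketch, with the percentages taken relative to the sample range of the validation responses as in the table.

```python
import numpy as np

def validation_errors(y_true, y_pred):
    """Maximum absolute error and RMSE over a validation set, both raw and
    as a percentage of the sample range of the true responses."""
    y_true = np.asarray(y_true, dtype=float)
    err = np.abs(y_true - np.asarray(y_pred, dtype=float))
    max_err = float(err.max())
    rmse = float(np.sqrt(np.mean(err ** 2)))
    rng = float(y_true.max() - y_true.min())       # sample range
    return max_err, rmse, 100.0 * max_err / rng, 100.0 * rmse / rng
```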
With the exception of the maximum error for σ_DOC, all of the kriging metamodels appear
sufficiently accurate for this study; maximum errors are about 4% or less, and RMSEs are 1%
or less. Despite the large maximum error for σ_DOC, the RMSE for σ_DOC is sufficiently low
that the kriging metamodels are used throughout the rest of the GAA example. Thus, Step 3 of
the PPCEM is complete, and the compromise DSP for the family of aircraft is formulated in the
next section as Step 4 in the PPCEM.
In the universal motor example, separate goals for “bringing the mean on target” and
“minimizing the deviation” for variations in the stack length are used. In this example, the
compromise DSP for the family of GAA employs design capability indices (Cdk) to assess the
ranged set of design requirements. Design capability indices are formulated for both the
constraints and goals for the family of GAA as defined by the constraint and target values listed
in Table 7.4 and Table 7.5:
• NOISE ≤ 75 = URL: Cdk,noise = Cdu,noise = (75 − μ_noise)/(3σ_noise) [7.5]
• DOC ≤ 80 = URL: Cdk,doc = Cdu,doc = (80 − μ_doc)/(3σ_doc) [7.6]
• ROUGH ≤ 2 = URL: Cdk,rough = Cdu,rough = (2.0 − μ_rough)/(3σ_rough) [7.7]
• WEMP ≤ 2200 = URL: Cdk,wemp = Cdu,wemp = (2200 − μ_wemp)/(3σ_wemp) [7.8]
• WFUEL ≤ 450 = URL: Cdk,wfuel = Cdu,wfuel = (450 − μ_wfuel)/(3σ_wfuel) [7.9]
• RANGE ≥ 2000 = LRL: Cdk,range = Cdl,range = (μ_range − 2000)/(3σ_range) [7.10]
• WFUEL: Cdk,wfuel = Cdu,wfuel = (T_wfuel − μ_wfuel)/(3σ_wfuel) [7.11]
• WEMP: Cdk,wemp = Cdu,wemp = (T_wemp − μ_wemp)/(3σ_wemp) [7.12]
• PURCH: Cdk,purch = Cdu,purch = (T_purch − μ_purch)/(3σ_purch) [7.13]
• DOC: Cdk,doc = Cdu,doc = (60 − μ_doc)/(3σ_doc) [7.14]
• LDMAX: Cdk,ldmax = Cdl,ldmax = (μ_ldmax − 17)/(3σ_ldmax) [7.15]
• VCRMX: Cdk,vcrmx = Cdl,vcrmx = (μ_vcrmx − 200)/(3σ_vcrmx) [7.16]
• RANGE: Cdk,range = Cdl,range = (μ_range − 2500)/(3σ_range) [7.17]
where T_wfuel, T_wemp, and T_purch denote the corresponding goal targets from Table 7.5.
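A design capability index can be computed directly from the metamodel outputs. The sketch below follows the Cdu/Cdl structure of Equations 7.5-7.17; the helper name `cdk` is ours, not the dissertation's.

```python
def cdk(mu, sigma, lrl=None, url=None):
    """Design capability index for a ranged requirement.

    Cdl = (mu - LRL)/(3*sigma) for a lower requirement limit and
    Cdu = (URL - mu)/(3*sigma) for an upper requirement limit; Cdk is the
    smaller of whichever limits apply.  Cdk >= 1 indicates the +/-3 sigma
    spread of the response lies within the requirement limits.
    """
    indices = []
    if lrl is not None:
        indices.append((mu - lrl) / (3.0 * sigma))
    if url is not None:
        indices.append((url - mu) / (3.0 * sigma))
    return min(indices)

# E.g., the NOISE constraint of Eq. 7.5 with illustrative mu and sigma:
cdk_noise = cdk(mu=70.0, sigma=1.0, url=75.0)   # (75 - 70)/(3*1)
```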
The resulting compromise DSP for the GAA product platform using these Cdk
formulations is given in Figure 7.7. There are six design variables, six constraints, and seven
goals. Of the seven goals, three are related to the economic performance of the aircraft—
empty weight (WEMP), purchase price (PURCH), and direct operating cost (DOC)—and the
remaining four are related to the technical performance of the aircraft: fuel weight (WFUEL),
maximum lift/drag (LDMAX), maximum cruise speed (VCRMX), and maximum flight range
(RANGE).
Given:
o Baseline aircraft configuration and mission profile
o Configuration scale factor = # passengers (where total # seats = # passengers + 1 pilot)
o Kriging models for mean and standard deviation of each response
Find:
o The system variables, x:
• cruise speed, CSPD • wing loading, WL
• wing aspect ratio, AR • engine activity factor, AF
• propeller diameter, DPRP • seat width, WS
o The values of the deviation variables associated with G(x):
• fuel weight Cdk, d1−, d1+ • maximum lift/drag Cdk, d5−, d5+
• empty weight Cdk, d2−, d2+ • maximum speed Cdk, d6−, d6+
• direct operating cost Cdk, d3−, d3+ • maximum range Cdk, d7−, d7+
• purchase price Cdk, d4−, d4+
Satisfy:
o The system constraints, C(x), based on kriging models:
• NOISE Cdk greater than 1: Cdk,noise(x) ≥ 1 [7.5]
• DOC Cdk greater than 1: Cdk,doc(x) ≥ 1 [7.6]
• ROUGH Cdk greater than 1: Cdk,rough(x) ≥ 1 [7.7]
• WEMP Cdk greater than 1: Cdk,wemp(x) ≥ 1 [7.8]
• WFUEL Cdk greater than 1: Cdk,wfuel(x) ≥ 1 [7.9]
• RANGE Cdk greater than 1: Cdk,range(x) ≥ 1 [7.10]
o The system goals, G(x), based on kriging models:
• WFUEL Cdk target of 1: Cdk,wfuel(x) + d1− − d1+ = 1.0 [7.11]
• WEMP Cdk target of 1: Cdk,wemp(x) + d2− − d2+ = 1.0 [7.12]
• DOC Cdk target of 1: Cdk,doc(x) + d3− − d3+ = 1.0 [7.13]
• PURCH Cdk target of 1: Cdk,purch(x) + d4− − d4+ = 1.0 [7.14]
• LDMAX Cdk target of 1: Cdk,ldmax(x) + d5− − d5+ = 1.0 [7.15]
• VCRMX Cdk target of 1: Cdk,vcrmx(x) + d6− − d6+ = 1.0 [7.16]
• RANGE Cdk target of 1: Cdk,range(x) + d7− − d7+ = 1.0 [7.17]
o Constraints on deviation variables: di− · di+ = 0 and di−, di+ ≥ 0.
o The bounds on the system variables:
0.24 M ≤ CSPD ≤ 0.48 M    19 lb/ft2 ≤ WL ≤ 25 lb/ft2
7 ≤ AR ≤ 11               85 ≤ AF ≤ 110
5.0 ft ≤ DPRP ≤ 5.96 ft   14.0 in ≤ WS ≤ 20.0 in
Minimize:
o The sum of the deviation variables associated with:
• fuel weight Cdk, d1− • maximum lift/drag Cdk, d5−
• empty weight Cdk, d2− • maximum speed Cdk, d6−
• direct operating cost Cdk, d3− • maximum range Cdk, d7−
• purchase price Cdk, d4−
Z = { f1(d1−), f2(d2−), f3(d3−), f4(d4−), f5(d5−), f6(d6−), f7(d7−) }

Figure 7.7 Compromise DSP for the GAA Product Platform
Based on this GAA compromise DSP, the initial baseline design is infeasible in two
regards. First, the propeller diameter is too great as explained in Section 7.1. At 6.3 ft, the
speed of the propeller tip is above sonic speed, violating a tipspeed constraint which is not
explicitly modeled in the GAA compromise DSP; thus, the range for the propeller diameter is
set at 5-5.96 ft so that this constraint is always met. Second, the DOC violates the $80/hr
constraint which has been selected. The baseline design still represents a good design;
however, the GAA compromise DSP is being used to improve it as discussed in the next
section wherein the product platform portfolio is developed in Step 5 of the PPCEM.
In order to develop the GAA platform portfolio for the family of GAA to meet the
constraints and goals set forth in Section 7.2, three design scenarios are investigated (see Table
7.7).
Overall Tradeoff Study: All of the goals are weighted equally in an effort to develop a
platform that simultaneously meets both economic and performance requirements as
best as possible.
Economic Tradeoff Study: Economic related goals (Cdk's for empty weight, purchase
price, and direct operating cost) are given top priority to find a platform which meets all
of the economic requirements as best as possible; satisfying performance goals is
second priority.
Performance Tradeoff Study: Performance related goals (Cdk's for fuel weight, max.
lift/drag, max. speed, and max. range) are placed at the first priority level to develop a
platform that satisfies all of the performance requirements as best as possible;
meanwhile, economic goals are given second priority.
The corresponding deviation function formulations for each scenario are listed in Table 7.7.

Table 7.7 Deviation Function Formulations for the GAA Platform Scenarios
Scenario 1, Overall Tradeoff: PLEV1 = (d1− + d2− + d3− + d4− + d5− + d6− + d7−)/7
Scenario 2, Economic Tradeoff: PLEV1 = (d2− + d3− + d4−)/3; PLEV2 = (d1− + d5− + d6− + d7−)/4
Scenario 3, Performance Tradeoff: PLEV1 = (d1− + d5− + d6− + d7−)/4; PLEV2 = (d2− + d3− + d4−)/3
Note: d1− drives Cdk,wfuel to 1; d2− drives Cdk,wemp to 1; d3− drives Cdk,purch to 1; d4− drives Cdk,doc to 1; d5− drives Cdk,ldmax to 1; d6− drives Cdk,vcrmx to 1; d7− drives Cdk,range to 1.
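The scenario formulations of Table 7.7 amount to averaging the underachievement deviations at each priority level; a sketch with hypothetical deviation values is given below. Priority levels are lexicographic: PLEV1 is minimized first, and PLEV2 is considered only among designs with equal PLEV1.

```python
def deviation_levels(d, scenario):
    """Priority levels of the deviation function Z per Table 7.7.

    d maps goal index 1..7 to its underachievement deviation di-.
    Goals 2, 3, 4 are the economic goals; goals 1, 5, 6, 7 are the
    performance goals.
    """
    econ = (d[2] + d[3] + d[4]) / 3.0
    perf = (d[1] + d[5] + d[6] + d[7]) / 4.0
    if scenario == 1:                       # overall: all seven goals at PLEV1
        return (sum(d[i] for i in range(1, 8)) / 7.0,)
    if scenario == 2:                       # economic goals first
        return (econ, perf)
    return (perf, econ)                     # scenario 3: performance first

# Hypothetical deviations for the seven goals:
d = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4, 5: 0.5, 6: 0.6, 7: 0.7}
```

Returning the levels as a tuple means Python's default tuple ordering compares candidate designs lexicographically, which mirrors how preemptive priority levels are treated in the compromise DSP.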
Three starting points are used when solving the GAA product platform compromise
DSP for each scenario: the lower, middle, and upper bounds of the design variables. In a
situation where all three starting points do not converge to the same solution, the design with
the lowest deviation function value is taken as the best design (the reader is referred to the
convergence studies in Section 7.6.1). The resulting product platform specifications obtained by solving the
compromise DSP in Figure 7.7 are given in the next section. The individual instantiations of the
aircraft within the family based on the kriging metamodels then are discussed in Section 7.5.2.
7.5.1 Results of the GAA Compromise DSP for the Family of Aircraft
The resulting product platform specifications for each design scenario are summarized in
Table 7.8. Recall that the target value for each Cdk is 1; values above one indicate that the
family of GAA has met the desired URL or LRL while values below one indicate that the targets
have not been met for that particular requirement. All solutions are feasible, and the values for
Cdk,rough and Cdk,noise have not been included because they have no bearing on the deviation
function (other than to make the solution infeasible). The Cdk values for the initial baseline
design have also been included in the table for the sake of comparison. The PPCEM based
family has an unfair advantage because the baseline aircraft (the Beechcraft Bonanza B36TC
presented in Section 7.1.3) is a six seater aircraft and, as such, is not expected to perform well
when scaled down to fit fewer passengers; however, it still provides a reference point for comparison.
Table 7.8 Product Platform Specifications for Each Design Scenario

                   Baseline   Scenario 1   Scenario 2   Scenario 3
Design Variables
  CSPD [Mach]       0.31       0.244        0.242        0.291
  AR                7.88       8.00         8.09         7.62
  DPRP [ft]         6.3        5.13         5.19         5.55
  WL [lb/ft2]       20.5       22.45        22.63        22.48
  AF                110        89.60        89.40        85.63
  WS [in]           20         18.60        18.72        18.70
Goals
  Cdk-wfuel (P*)   -0.640      1.164        1.236        1.156
  Cdk-wemp (E)      0.074      0.810        0.903        0.806
  Cdk-doc (E)    -670.476     -1.588       -1.312      -26.270
  Cdk-purch (E)    -2.557      0.733        0.449        0.070
  Cdk-ldmax (P)    -3.230     -4.474       -4.427       -4.964
  Cdk-vcrmx (P)    -4.397     -4.303       -3.702       -2.017
  Cdk-range (P)    -4.157      0.577       -0.672        0.429
Dev. Fcn.
  PLEV1               -        2.036        0.986        2.388
  PLEV2               -          -          2.950        9.4556

* P indicates the Cdk is related to performance; E to economics. Economic
goals rank first in Scenario 2; performance goals rank first in Scenario 3.
Compared to the initial baseline design, the PPCEM designs have a lower cruise speed,
propeller diameter, engine activity factor, and seat width. Meanwhile, the wing loading is slightly
larger in general, and the aspect ratio fluctuates around the baseline value. Comparing the
design variables for Scenarios 1 and 2, there is negligible difference. This indicates that in the
overall tradeoff study, the economic goals tend to dominate the solution despite all goals being
equally weighted. In an effort to achieve better performance in Scenario 3 (at the sacrifice of
the economic goal achievement), the cruise speed is slightly higher, the propeller diameter is
slightly larger, and the aspect ratio and engine activity factor are slightly lower for this scenario
than either Scenario 1 or 2. Thus, in order to maintain sufficient flexibility to achieve all the
design considerations in all three scenarios, the resulting product platform is taken as follows:
• Cruise speed = Mach 0.242 or Mach 0.291 (if performance is first priority)
• Aspect ratio = 7.85 ± 0.25
• Propeller diameter = 5.34 ± 0.2 ft
• Wing loading = 22.55 ± 0.1 lb/ft2
• Engine activity factor = 87.6 ± 2.0
• Seat width = 18.66 ± 0.06 in
These values comprise the range of values that cruise speed, aspect ratio, etc. should be
allowed to take in order to meet the goals as best as possible in any of the three design
scenarios. It is these values which define the GAA product platform around which the family of
aircraft is developed.
Before instantiating the individual aircraft to examine how well they perform given these
specifications, notice in Table 7.8 that very few Cdk goals achieve their target of 1; only Cdk,wfuel
is consistently larger than 1, indicating that the family of GAA are capable of meeting the
specified fuel weight targets. The empty weight Cdk and purchase price Cdk are the second best
with Cdk,range performing well in Scenarios 1 and 3. All Cdk values are improved over the
baseline design except for Cdk,ldmax which has decreased slightly. In Scenario 2, the economic
Cdk’s for direct operating cost and empty weight improve slightly but at the expense of a slight
decrease in the purchase price Cdk when compared to the value obtained in Scenario 1. The
big tradeoff between the economic and performance goals in Scenarios 2 and 3 is best seen in
Cdk,doc. In all three scenarios, the family of GAA is far from achieving its target of $60/hr for the
direct operating cost as indicated by the low values for Cdk,doc; however, in Scenario 3 when
achieving performance goals is given a higher priority than economic goals, Cdk,doc is even
worse, indicating the compromise between a family of aircraft that performs well versus one
that is economical.
To study these compromises further, five more design scenarios are formulated (see
Section F.4) to determine whether it is the Cdk formulation that is performing poorly or whether
the targets are simply difficult to achieve. The results are listed separately in Section F.4 and discussed
therein. The end result of examining all these design scenarios is learning that significant
tradeoffs are occurring in Scenarios 1, 2, and 3 where the economic and performance Cdk goals
are equally weighted at different priority levels. Only when a particular Cdk is given first priority
(i.e., placed at PLEV1) in the GAA product platform compromise DSP can the target (C dk = 1)
be achieved. Any other time, the solutions from the GAA product platform compromise DSP
represent the best possible compromise which can be obtained for a particular design scenario,
poor Cdk value or not. Furthermore, the deviation function values shown in Table 7.8 are not of
much value in and of themselves because they are based on how well the Cdk achieve their
target of 1. Recall that Cdk is only a means to an end, i.e., to generate a family of aircraft
which satisfies the given ranged set of requirements as well as possible. What is important,
however, is the resulting aircraft which come from instantiating the PPCEM aircraft platform to
create the two, four, and six seater aircraft.
Unlike in the universal motor example, instantiation of the individual aircraft within the
GAA product family only requires specifying the number of passengers on the plane, not solving
another compromise DSP to find the best stack length to meet a particular torque requirement.
The individual constraints and goals for each aircraft must be formulated first, however, based
on the specifications given in Section 7.2. Based on the requirements in Table 7.4, the
individual fuel weight constraint for each aircraft is wfuel ≤ Ci,wfuel,
where Ci,wfuel = {450 lbs, 475 lbs, 500 lbs} and i = {1, 3, 5} passengers. Meanwhile, the
individual goals based on the targets in Table 7.5 for each aircraft are given by:
where Ti,wfuel = {450 lbs, 400 lbs, 350 lbs}, Ti,wemp = {1900 lbs, 1950 lbs, 2000 lbs}, Ti,purch =
{$41000, $42000, $43000} and i = {1, 3, 5} passengers. Based on these goals, the deviation
function for each aircraft is a combination of: d1+, d2+, d3+, d4+, d5-, d6-, and d7- because it is
desired to lower fuel weight, empty weight, direct operating cost, and purchase price to their
targets and to raise maximum lift/drag, cruise speed, and range to theirs. The resulting deviation
function formulations for each aircraft for each scenario are listed in Table 7.9. These deviation
functions are identical to those listed in Table 7.7 except that di+ and di− are for the individual
goals of each aircraft and not the Cdk for the family (which only uses di− to raise each Cdk to its target
of 1).
Table 7.9 Deviation Function Formulations for the Individual Aircraft
Scenario 1, Overall tradeoff: PLEV1 = (d1+ + d2+ + d3+ + d4+ + d5− + d6− + d7−)/7
Scenario 2, Economic tradeoff: PLEV1 = (d2+ + d3+ + d4+)/3; PLEV2 = (d1+ + d5− + d6− + d7−)/4
Scenario 3, Performance tradeoff: PLEV1 = (d1+ + d5− + d6− + d7−)/4; PLEV2 = (d2+ + d3+ + d4+)/3
Note: d1+ lowers fuel weight to target; d2+ lowers empty weight to target; d3+ lowers direct operating cost to target; d4+ lowers purchase price to target; d5− raises max. lift/drag to target; d6− raises max. speed to target; d7− raises max. range to target.
The instantiations of the two, four, and six seater GAA for the family of aircraft based
on the PPCEM platform values are summarized in Table 7.10. These response values are
obtained by evaluating the kriging metamodels at the design variable values listed in Table 7.8
for each scenario. Based on the low deviation function values listed in Table 7.10, it appears
that the PPCEM based family of aircraft perform reasonably well on an individual basis. The
targets for fuel weight and empty weight are met in all cases. Despite the poor showing of
Cdk,doc, the DOC values for the individual aircraft in Scenarios 1 and 2 are within $2/hr of the
target of $60/hr; meanwhile, the DOC values are near their maximum permitted value ($80/hr)
in Scenario 3 when economics takes second priority to performance.
of {$41000, $42000, $43000} are within $1000 or less of being met in all cases. The
maximum lift/drag ratio (LDMAX) and cruise speeds (VCRMX) do not meet their targets of 17
or 200 very well. The maximum range target (2500 n.m.) is met in Scenario 1 and by all but the
six seater in Scenario 3; the range values for Scenario 2 are slightly below the target.
Table 7.10 Instantiations of the Two, Four, and Six Seater GAA

Design    No. of  WFUEL   WEMP     DOC     PURCH    LDMAX  VCRMX   RANGE   PLEV1  PLEV2
Scenario  Seats   [lbs]   [lbs]   [$/hr]    [$]            [kts]   [nm]
   1        2     447.70  1892.32  61.05  42078.2   16.16  193.56  2542.7  0.018    -
   1        4     409.19  1929.50  62.31  42601.3   15.80  190.17  2517.3  0.028    -
   1        6     376.65  1959.37  62.54  43138.7   15.68  188.87  2502.8  0.036    -
   2        2     445.06  1895.36  60.52  42221.0   16.16  194.82  2497.3  0.013  0.019
   2        4     405.92  1932.81  61.83  42749.0   15.81  191.51  2462.0  0.016  0.036
   2        6     373.48  1962.93  62.20  43296.0   15.68  190.20  2444.0  0.015  0.054
   3        2     446.36  1891.95  78.16  42402.6   16.04  198.16  2543.1  0.016  0.112
   3        4     406.63  1929.39  79.30  42972.1   15.73  195.16  2509.0  0.029  0.115
   3        6     377.05  1959.14  78.28  43462.1   15.56  193.72  2482.9  0.050  0.105
So what does all this mean in terms of designing a scalable platform for a product
family? Considerable improvement has been made over the initial baseline design, but has a
good family of aircraft been designed? Answers to this question are offered in the next
section in which verification and the implications of the results are discussed.
To verify the results obtained from implementing the PPCEM, the following questions
are addressed.
• Verify compromise DSP solutions - What do the convergence histories look like for
each scenario? Is the best solution being obtained?
• Verify kriging predictions - How does the predicted performance of the individual
aircraft based on the kriging models compare to the actual performance in GASP?
• Verify PPCEM family - How does the family of aircraft based on the PPCEM
compare to the aggregate group of individually designed benchmark aircraft?
Convergence histories of the compromise DSP solutions for Scenario 1 for the
family of GAA are illustrated in Figure 7.8. As seen in the figure, all three starting points converge
to approximately the same solution, indicating that the best possible solution has likely been
obtained. The initial deviation function for the high starting point is quite large (~15) while the
initial design based on the middle starting point is slightly infeasible; hence, the jump at iteration 2.
Figure 7.8  Convergence History for Scenario 1 (deviation function PLEV1 vs.
iterations for the low, middle, and high starting points)
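The multi-start strategy behind Figure 7.8 — solve the same problem from low, middle, and high starting points and keep the design with the lowest deviation function — can be sketched as follows. The one-variable pattern search and toy objective are hypothetical stand-ins for the ALP algorithm in DSIDES and the GAA deviation function:

```python
def pattern_search(f, x0, step=0.5, tol=1e-6, max_iter=200):
    """Minimal one-variable pattern search (stand-in for the ALP algorithm)."""
    x, fx = x0, f(x0)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for cand in (x - step, x + step):
            fc = f(cand)
            if fc < fx:
                x, fx, improved = cand, fc, True
        if not improved:
            step *= 0.5  # shrink the step when no neighbor improves
    return x, fx

# Toy "deviation function" with its minimum at x = 2.0
dev = lambda x: (x - 2.0) ** 2 + 0.01

# Low, middle, and high starting points, as in Figure 7.8
runs = [pattern_search(dev, x0) for x0 in (0.0, 2.5, 10.0)]
best_x, best_f = min(runs, key=lambda r: r[1])  # keep the best design
```

All three runs converging to nearly the same point, as in the figure, is what builds confidence that the best solution has been found.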
The convergence histories for the PLEV1 and PLEV2 for Scenarios 2 and 3 are
illustrated in Figure 7.9. Similar to Figure 7.8, the three starting points for Scenarios 2 and 3
yield a wide range of initial deviation function values, but the model tends to converge to the
same or nearly the same solution. This trend holds true at both priority levels in both scenarios.
Figure 7.9  Convergence Histories for Scenarios 2 and 3 (deviation function vs.
iterations: a) PLEV1, Scenario 2; b) PLEV1, Scenario 3; c) PLEV2, Scenario 2;
d) PLEV2, Scenario 3)
Note the similarity between PLEV1 in Scenario 2 (Figure 7.9a) and PLEV2 in Scenario 3 (Figure 7.9d) and between PLEV1 in Scenario 3 (Figure 7.9b)
and PLEV2 in Scenario 2 (Figure 7.9c) since the same goals are equally weighted at different
levels in these scenarios. Comparing these graphs reveals the true nature of the tradeoffs that
occur between the economic goals and the performance goals. When the economic goals are
placed at the first priority level in Scenario 2, a much lower value for the deviation function is
capable of being achieved compared to when they are placed at the second priority level as in
Scenario 3. The same holds true for the performance goals, which achieve a lower deviation function at the first priority level in Scenario 3 than at the second priority level in Scenario 2.
Previously, the performance of the individual aircraft has been based on predictions
from kriging metamodels (see Table 7.10). To verify these predictions, the performance of the individual aircraft
is evaluated directly in GASP instead of being estimated from the kriging metamodels. The
results are summarized in Table 7.11 and can be compared directly to the previous values listed
in Table 7.10. The resulting approximation error between the kriging model predictions and the
actual GASP values is summarized in Table 7.12. The errors are expressed as a percent of the actual value obtained from GASP; a
positive error indicates over-prediction of the response, and a negative error indicates under-
prediction. The maximum error occurs for RANGE of the six seater GAA (= 3.42%). In
general, the kriging models over-predict PURCH, LDMAX, VCRMX, and RANGE and
under-predict WFUEL and DOC. The average percentage error for each response also is
listed in the table; values range from 0.62% to a high of 2.52%. In summary, then, it appears
that the kriging metamodel predictions are quite accurate based on the error analysis in Table
7.12.
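The error analysis reduces to a signed percent error relative to the GASP value; a minimal sketch (the numbers below are illustrative, not entries from Table 7.12):

```python
def percent_error(predicted, actual):
    """Signed kriging approximation error as a percent of the GASP value.

    Positive -> the kriging model over-predicts the response;
    negative -> it under-predicts."""
    return 100.0 * (predicted - actual) / actual

# Hypothetical prediction 3% above the true value
err = percent_error(predicted=103.0, actual=100.0)
```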
For further verification of the PPCEM aircraft, individual (benchmark) aircraft are
designed using GASP and DSIDES directly to compare to the aircraft obtained through the
implementation of the PPCEM. The compromise DSP for these benchmark aircraft is shown in
Figure 7.10 and is derived from Equations 7.16-7.28 for the individual constraints and goals
listed in Table 7.4 and Table 7.5, respectively. As in the individual instantiations of the PPCEM
platform, the deviation variables of interest are: d1+, d2+, d3+, d4+, d5-, d6-, and d7- because it is
desired to lower fuel weight, empty weight, direct operating cost, and purchase price to their
targets and to raise maximum lift/drag, cruise speed, and range to theirs.
Given:
o Baseline aircraft configuration and mission profile
o General Aviation Synthesis Program (GASP)
Find:
o The system variables, x:
• cruise speed, CSPD            • wing loading, WL
• wing aspect ratio, AR         • engine activity factor, AF
• propeller diameter, DPRP      • seat width, WS
o The values of the deviation variables associated with G(x):
• fuel weight Cdk, d1-, d1+               • maximum lift/drag Cdk, d5-, d5+
• empty weight Cdk, d2-, d2+              • maximum speed Cdk, d6-, d6+
• direct operating cost Cdk, d3-, d3+     • maximum range Cdk, d7-, d7+
• purchase price Cdk, d4-, d4+
Satisfy:
o The system constraints, C(x), based on kriging models:
• noise [dBA]:                 NOISE(x) ≤ 75 dBA                       [7.18]
• direct operating cost [$/hr]: DOC(x) ≤ $80/hr                        [7.19]
• ride roughness:              ROUGH(x) ≤ 2.0                          [7.20]
• aircraft empty weight [lbs]: WEMP(x) ≤ 2200 lbs                      [7.21]
• aircraft fuel weight [lbs]:  WFUEL(x) ≤ Ci,wfuel                     [7.22]
• maximum flight range [nm]:   RANGE(x) ≥ 2200 nm                      [7.23]
o The system goals, G(x), based on kriging models:
• aircraft fuel weight [lbs]:   WFUEL(x)/Ti,wfuel + d1- - d1+ = 1.0    [7.24]
• aircraft empty weight [lbs]:  WEMP(x)/Ti,wemp + d2- - d2+ = 1.0      [7.25]
• direct operating cost [$/hr]: DOC(x)/60 + d3- - d3+ = 1.0            [7.26]
• purchase price [$]:           PURCH(x)/Ti,purch + d4- - d4+ = 1.0    [7.27]
• maximum lift/drag:            LDMAX(x)/17 + d5- - d5+ = 1.0          [7.28]
• maximum cruise speed [kts]:   VCRMX(x)/200 + d6- - d6+ = 1.0         [7.29]
• maximum range [nm]:           RANGE(x)/2500 + d7- - d7+ = 1.0        [7.30]
o Constraints on deviation variables: di- · di+ = 0 and di-, di+ ≥ 0.
o The bounds on the system variables:
0.24 M ≤ CSPD ≤ 0.48 M          19 lb/ft² ≤ WL ≤ 25 lb/ft²
7 ≤ AR ≤ 11                     85 ≤ AF ≤ 110
5.0 ft ≤ DPRP ≤ 5.96 ft         14.0 in ≤ WS ≤ 20.0 in
Minimize:
o The sum of the deviation variables associated with:
• fuel weight, d1+                 • maximum lift/drag ratio, d5-
• empty weight, d2+                • maximum speed, d6-
• direct operating cost, d3+       • maximum range, d7-
• purchase price, d4+
Z = { f1(d1+), f2(d2+), f3(d3+), f4(d4+), f5(d5-), f6(d6-), f7(d7-) }

Figure 7.10  Compromise DSP for the Individual Benchmark Aircraft
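The goal formulations in Equations 7.24-7.30 determine the deviation variables from a response and its target; a sketch of that relationship for a single goal:

```python
def deviations(response, target):
    """Deviation variables for a goal of the form G(x)/T + d- - d+ = 1.

    Exactly one of (d_minus, d_plus) is nonzero, satisfying d- . d+ = 0."""
    ratio = response / target
    d_minus = max(0.0, 1.0 - ratio)  # underachievement of the target
    d_plus = max(0.0, ratio - 1.0)   # overachievement of the target
    return d_minus, d_plus

# Fuel weight 450 lbs against a 400 lb target -> d1+ = 0.125 (to be minimized)
dm, dp = deviations(response=450.0, target=400.0)
```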
The compromise DSP for each benchmark aircraft is particularized with the appropriate targets and constraints and solved: Ci,wfuel = {450 lbs, 475
lbs, 500 lbs}, Ti,wfuel = {450 lbs, 400 lbs, 350 lbs}, Ti,wemp = {1900 lbs, 1950 lbs, 2000 lbs},
Ti,purch = {$41000, $42000, $43000} and i = {1, 3, 5} passengers. The same three design
scenarios are used when designing each benchmark aircraft, see Table 7.13. All three scenarios
are tradeoff studies: Scenario 1 is an overall tradeoff with all goals weighted equally; Scenario 2
has the economic goals weighted equally at the first priority level (PLEV1), and the performance
goals weighted equally at the second priority level (PLEV2); and Scenario 3 is the reverse of
Scenario 2 with performance goals being ranked first and economics second. The deviation
function formulations for each scenario for the benchmark aircraft are listed in the table. Notice
that a combination of di+ and di- is used in the deviation function and not just di-. This is
because it is desired to lower fuel weight, empty weight, direct operating cost, and purchase
price to their targets and to raise maximum lift/drag, cruise speed, and range to theirs. (With the
Cdk formulation, the only concern is to minimize di- in order to ensure that Cdk = 1.)
Table 7.13 Design Scenarios for Designing GAA Benchmark Aircraft
                          Deviation Function
Scenario                  PLEV1                                        PLEV2
1. Overall tradeoff       (d1+ + d2+ + d3+ + d4+ + d5- + d6- + d7-)/7
2. Economic tradeoff      (d2+ + d3+ + d4+)/3                          (d1+ + d5- + d6- + d7-)/4
3. Performance tradeoff   (d1+ + d5- + d6- + d7-)/4                    (d2+ + d3+ + d4+)/3

Note: d1+ lowers fuel weight to target; d2+ lowers empty weight to target;
d3+ lowers direct oper. cost to target; d4+ lowers purchase price to target;
d5- raises max. lift/drag to target; d6- raises max. speed to target;
d7- raises max. range to target.
As before, three starting points—lower, middle, and upper values—are used when
designing each aircraft for each scenario; the best design(s) is then taken as the one with the
lowest deviation function value. Convergence plots for each aircraft for each scenario are listed
separately in Section F.5 and are similar to those observed for the PPCEM solutions in Section 7.6.1.
The final settings of the design variables for each aircraft for each of these three design scenarios
are listed in Table 7.14 through Table 7.16 for Scenarios 1-3, respectively. Each set of results
is discussed in turn and plotted graphically with the corresponding PPCEM instantiations for a
quick comparison of the results. The results for Scenario 1 are listed in Table 7.14.
Table 7.14  Individual PPCEM and Benchmark Aircraft for Scenario 1

                      PPCEM Family                     Benchmark Aircraft
                2 Seater  4 Seater  6 Seater    2 Seater  4 Seater  6 Seater
WS [in]               18.60 (common)               18.04     18.35     19.45
Responses
WFUEL [lbs]      449.43    413.8     388.49      449.67    408.35    349.97
WEMP [lbs]      1887.15   1921.71   1946.59     1888.23   1926.33   1986.58
DOC [$/hr]        61.98     63.31     63.85       61.6      63.9      64.02
PURCH [$]       41817     42374.5   42827       41607.8   42240     43262.9
LDMAX             15.89     15.61     15.53       16.54     15.8      16.4
VCRMX [kts]      190.83    188.47    187.61      187.47    184.58    181.94
RANGE [nm]       2491      2436      2420        2536      2672      2466
Dev. Fcn.
PLEV1             0.0240    0.0377    0.0506      0.0187    0.0341    0.0303
As can be seen in the table, there is little variation between the design variable settings
for the benchmark aircraft even though they have been designed individually. The benchmark
aircraft share common settings for the cruise speed and propeller diameter.
Aspect ratio and wing loading vary only slightly between each aircraft,
and the difference in seat widths (WS) for each aircraft is less than 1.5 in. The engine activity
factor varies the most of the six design variables. It is interesting to note that the PPCEM design
variable values are quite close to the benchmark designs. Seat width, aspect ratio, and engine
activity factor all are contained within the range of settings for the benchmark aircraft. The
PPCEM values for cruise speed and propeller diameter are only slightly larger than the
corresponding values which are shared between all three benchmark aircraft.
Despite the similarity of the design variable settings for the two families of aircraft, only
the 4 seater benchmark and PPCEM aircraft have similar deviation function values. The two
and six seater aircraft from the PPCEM are both slightly worse than the benchmark designs as a
result of having a common set of design variables for all three aircraft. To see why this is and to
facilitate comparison of the performance of the two families of aircraft (the one based on the
PPCEM and the group of benchmark aircraft), plots of the individual goal achievements for
each aircraft are given in Figure 7.11 for Scenario 1. The idea of using a “spider” or
“snowflake” plot to show goal achievement comes from Sandgren (1989). In the spider plot,
goal deviation values are plotted on the axes of the web; the closer a mark is on its axis to the
origin, the better that particular goal has been achieved. In this manner, the shape of the
polygon formed by connecting the deviation values for each design can be used to compare
designs quickly. In other words, the two seater aircraft from the PPCEM platform and the
benchmark design can be quickly compared by plotting their goal achievement on the same
spider plot as is done in Figure 7.11 for all three aircraft which comprise the GAA family.
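A sketch of the data layout behind such a spider plot — one axis per goal, evenly spaced around the circle, with the polygon closed by repeating the first vertex. The deviation values below are illustrative, not read from Figure 7.11; the angle/radius pairs would then be handed to a polar plotting routine:

```python
import math

def spider_polygon(dev_by_goal):
    """One axis per goal, evenly spaced around the circle; the polygon is
    closed by repeating the first vertex.  Smaller radius = better."""
    goals = list(dev_by_goal)
    n = len(goals)
    angles = [2.0 * math.pi * i / n for i in range(n)]
    radii = [dev_by_goal[g] for g in goals]
    return goals, angles + angles[:1], radii + radii[:1]

# Illustrative deviation values for a two seater (NOT taken from Figure 7.11)
two_seater = {"WEMP": 0.0, "DOC": 0.02, "PURCH": 0.03, "VCRMX": 0.03,
              "LDMAX": 0.05, "WFUEL": 0.0, "RANGE": 0.01}
goals, angles, radii = spider_polygon(two_seater)
```

The shape of the resulting polygon is what allows two designs to be compared at a glance.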
SCENARIO 1 - Overall Tradeoff Study: spider plots of the goal deviations
(WEMP, DOC, PURCH, VCRMX, LDMAX, WFUEL, and RANGE axes) for the 2, 4, and
6 seater aircraft, PPCEM family vs. benchmark.

                  Deviation Functions: PLEV1
                  Benchmark    PPCEM Family
      2 Seater     0.0187        0.0240
      4 Seater     0.0341        0.0377
      6 Seater     0.0303        0.0506

Figure 7.11  Graphical Comparison of Benchmark Aircraft and PPCEM Family for
Scenario 1
In the overall tradeoff study (Scenario 1) shown in Figure 7.11, all seven goals are weighted equally at a single priority level. Several observations can be made:
• In the two seater aircraft, the achievement of WEMP, DOC, PURCH, WFUEL, and
RANGE appear virtually equal. The PPCEM aircraft exhibits slightly better
achievement of the VCRMX target; however, the benchmark design has better
LDMAX than the PPCEM which can account for the difference between the deviation
functions for these aircraft.
• In the four seater aircraft, the PPCEM solutions perform slightly better at DOC and
VCRMX, but slightly worse with RANGE, WFUEL, and LDMAX. Both aircraft
designs achieve the target for empty weight (WEMP).
• In the six seater aircraft, both designs achieve the WEMP and PURCH targets. DOC
achievement is essentially equal for both aircraft. The PPCEM designs yield slightly
better VCRMX than the benchmark aircraft; however, the benchmark design
outperforms the PPCEM design in WFUEL, LDMAX, and RANGE. It appears that
the difference in achievement in LDMAX and WFUEL accounts for the large
discrepancy in the two deviation functions for the six seater aircraft because the
achievement of the other goals is comparable for both aircraft.
The results for Scenario 2 are summarized in Table 7.15. As seen in Scenario 1, the
cruise speeds for the PPCEM aircraft and the benchmark aircraft are essentially the same. The
aspect ratio for the PPCEM aircraft is contained within the range of the benchmark designs but
is on the low end. The propeller diameter for the PPCEM aircraft is slightly higher than the
benchmark aircraft, which have nearly identical propeller diameters again despite being designed
individually. The wing loading for the PPCEM aircraft is about 2 lb/ft² lower than that of the
benchmark designs, whose values vary by only about 0.7 lb/ft² between all three aircraft. The
engine activity factors for the benchmark aircraft vary from 85 to a high of 109; the PPCEM
aircraft have a value of 89.4 which falls within the range of the benchmark designs. Finally, the
seat widths for the benchmark designs are lower than for the PPCEM aircraft, converging toward the PPCEM value as the number of seats increases.
Table 7.15 Individual PPCEM and Benchmark Aircraft for Scenario 2
The resulting deviation functions for the two families of aircraft are comparable, with the
benchmark designs having consistently lower PLEV1 (deviation function value at priority level
1) but slightly larger PLEV2 than the PPCEM aircraft. In the preemptive (i.e., lexicographic)
case, however, having lower PLEV2 values does not matter unless PLEV1 values are the same.
The first level deviation function value for the four seater benchmark design is zero, indicating
that the design is capable of meeting all of its designated targets. The 2 and 6 seater benchmark
designs also both fare well at achieving their targets, having PLEV1 values of 0.01 and 0.001,
respectively. The PPCEM aircraft, on the other hand, have PLEV1 values which are slightly
worse, and the four seater PPCEM design does not achieve all of its targets as did the
benchmark design. The discrepancies between the goal achievement of the two
families of aircraft can be seen in the spider plot for Scenario 2 shown in Figure 7.12.
Figure 7.12 Graphical Comparison of Benchmark Aircraft and PPCEM Family for
Design Scenario 2, Priority Level 1 Only
In Figure 7.12 the results of the economic tradeoff study (Scenario 2) are illustrated.
Only the three economic related goals which are considered in the first priority level are shown:
empty weight (WEMP), direct operating cost (DOC), and purchase price (PURCH). Several observations can be made:
• Both sets of aircraft achieve the desired targets for empty weight.
• The purchase price (PURCH) for the two seater aircraft from the PPCEM is slightly lower
than that of the benchmark aircraft; however, the purchase price for the four seater
PPCEM design is slightly higher than its comparative benchmark. Both six seater
aircraft do equally well at achieving the target.
• The DOC values for the PPCEM solutions are higher for all three aircraft
than for the individually designed benchmark aircraft; the inability to achieve the DOC
target is the main cause of the large discrepancy in the deviation function value
(PLEV1) for the PPCEM aircraft.
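The preemptive (lexicographic) comparison noted above — PLEV2 breaks ties only when the PLEV1 values are equal — amounts to ordinary tuple ordering. The numbers below are hypothetical except for the zero first-level value of the four seater benchmark, which follows the text:

```python
def better_preemptive(a, b):
    """Return the preferred (PLEV1, PLEV2) pair under lexicographic ordering;
    Python tuple comparison is already lexicographic."""
    return min(a, b)

benchmark_4seat = (0.0, 0.02)   # meets all first-level targets
ppcem_4seat = (0.004, 0.01)     # its lower PLEV2 does not help here
winner = better_preemptive(benchmark_4seat, ppcem_4seat)
```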
Finally, the results for Scenario 3 are summarized in Table 7.16. Notice that the
PPCEM cruise speed values are larger than all three benchmark designs while the aspect ratio is
less. The propeller diameter again is slightly larger for the PPCEM designs than for the
benchmark aircraft. The wing loading for both families of aircraft are comparable, but the
engine activity factor for the PPCEM tends to be on the lower end of the benchmark aircraft
setting. The seat width for the PPCEM is within the range of seat widths found for the
benchmark designs and is nearly identical to that of the four seater benchmark aircraft with the
two seater being slightly smaller and the six seater slightly larger.
Table 7.16  Individual PPCEM and Benchmark Aircraft for Scenario 3

                      PPCEM Family                     Benchmark Aircraft
                2 Seater  4 Seater  6 Seater    2 Seater  4 Seater  6 Seater
AF                    85.63 (common)              101.11     95.86     85
WS [in]               18.70 (common)               18.22     18.88     19.42
Responses
WFUEL [lbs]      450.92    415.34    389.84      449.25    399.57    350.17
WEMP [lbs]      1887      1921.7    1946.82     1888.8    1937.72   1986.37
DOC [$/hr]        77.43     79.13     79.76       68.03     72.92     64.19
PURCH [$]       42150     42727     43190.3     41871     42699.8   43237.2
LDMAX             15.79     15.52     15.44       16.32     16.05     16.44
VCRMX [kts]      195.53    193.32    192.46      190.62    187.99    181.68
RANGE [nm]       2499      2451      2437        2494      2497      2478
Dev. Fcn.
PLEV1             0.0240    0.0446    0.0671      0.0223    0.0293    0.0335
PLEV2             0.1062    0.1120    0.1112      0.0517    0.0772    0.0251
The deviation function values at priority level 1 (PLEV1) for the two families of aircraft
exhibit similar trends to those seen previously except this time it is the two seater aircraft which
have comparable goal achievement of their first level goals, not the four seater aircraft as in the
previous scenario. The PPCEM four seater aircraft deviation function (PLEV1) is about 1.5 times that
of the benchmark design, while the six seater PPCEM value is about twice that of the benchmark.
Unlike in Scenario 2, however, the benchmark designs also have lower PLEV2 compared to
the PPCEM designs. To see the discrepancy between the individual goal achievement at the
first priority level, the deviation variables for the four performance goals which are considered at
the first priority level—fuel weight (WFUEL), maximum lift to drag ratio (LDMAX), maximum
cruise speed (VCRMX), and maximum flight range (RANGE)—are plotted in Figure 7.13.
• The fuel weight target is met by all three benchmark aircraft while the PPCEM aircraft
do not achieve their target. In fact, the PPCEM aircraft exhibit increasingly worse
achievement of the target as the aircraft is scaled to accommodate more passengers,
which can account for the increase in PLEV1 for the four and six seater PPCEM
aircraft.
• Neither family of aircraft achieves the target for maximum lift/drag ratio well; however,
the benchmark aircraft consistently perform better.
• All three of the PPCEM aircraft do better at achieving the target for maximum cruise
speed than do the individually designed benchmark aircraft.
• The PPCEM aircraft have only slightly worse RANGE achievement than the benchmark
aircraft.
SCENARIO 3 - Performance Tradeoff Study: spider plots of the first priority
level goal deviations (WFUEL, LDMAX, VCRMX, and RANGE axes) for the 2, 4,
and 6 seater aircraft, PPCEM family vs. benchmark.

                  Deviation Functions: PLEV1
                  Benchmark    PPCEM Family
      2 Seater     0.0223        0.0240
      4 Seater     0.0293        0.0446
      6 Seater     0.0335        0.0671

Figure 7.13  Graphical Comparison of Benchmark Aircraft and PPCEM Family for
Design Scenario 3, Priority Level 1 Only
In summary, improving the commonality of the aircraft within the GAA product family
decreases the overall performance of the individual aircraft within the family. This
decrease in performance, however, varies from aircraft to aircraft and scenario
to scenario. The question that the designers/managers are now faced with is: how much
performance can be sacrificed in order to gain as much commonality through the
product platform as possible? Ideally, minimal performance would have to be sacrificed to
increase commonality between derivative products, but it appears that a tradeoff does exist as
one might expect. In reality, however, it would not be known how much performance was
being sacrificed by designing a common platform for the product family because benchmark
designs would not necessarily exist (unless this were a redesign process). Toward this end, a
means of assessing the tradeoff between commonality and performance is needed.
To assess the tradeoff between product commonality and product performance within
the family of GAA, a product variety tradeoff study is performed using the PDI and NCI
measures described in Section 3.1.5. Currently, there are two points on the PDI vs. NCI
graph: the family of aircraft based on the PPCEM solutions and the group (family) of benchmark
aircraft which have been individually designed. What is interesting to study is the effect of
allowing one or more design variables to vary in the PPCEM for each aircraft while holding the
remaining variables constant at the platform values found using the PPCEM. In this manner, the
PPCEM facilitates generating a variety of alternatives for the product platform and
corresponding product family. By allowing one or more variables to vary between aircraft, the
performance of the individual aircraft within the resulting product family can be improved such
that there is minimal tradeoff between product commonality and performance. Before this
tradeoff can be assessed, however, the relative importance of the design variables is needed in
order to weight the NCI.
The weightings in NCI used in this study are based on rank ordering the design
variables with regard to relative ease/cost with which they can be allowed to vary—the more
costly it is to allow that variable to change, the more important it is to have that variable stay the
same across derivative products. For this example, the weightings listed in Table 7.17 are used.
Cruise speed (CSPD) is the easiest/cheapest variable to allow to vary between designs because
it is easy to vary the cruise speed throughout the mission without having to make any
modifications to the aircraft; meanwhile, seat width (WS) is the most expensive to allow to vary
because it is costly not to have the same fuselage width (fuselage width being directly
proportional to seat width) for all of the aircraft within the GAA family. These weights are
derived from a pairwise comparison of the design variables; the justification for the pairwise
comparison and computation of the rank ordering and relative importance are explained in
Section F.4.1.
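A sketch of rank-based weighting consistent with the scheme described above. Only the endpoints of the ordering (CSPD cheapest to vary, WS most costly) come from the text; the intermediate ordering and the rank-sum rule itself are assumptions, since the actual pairwise comparison is given in Section F.4.1:

```python
# Cheapest-to-vary first; only the endpoints follow the text,
# the middle of the ordering is assumed for illustration.
ORDERING = ["CSPD", "AF", "DPRP", "AR", "WL", "WS"]

def rank_sum_weights(ordering):
    """Rank-sum weights: the variable at rank r (1 = cheapest to vary) gets
    weight r / sum(1..n), so the costliest variable weighs the most."""
    n = len(ordering)
    total = n * (n + 1) / 2.0
    return {v: (i + 1) / total for i, v in enumerate(ordering)}

weights = rank_sum_weights(ORDERING)
```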
For this product variety study, two design scenarios are considered: the economic
tradeoff study (Scenario 2) and the performance tradeoff study (Scenario 3) listed in Table
7.13. For these two scenarios, the individual PPCEM and benchmark aircraft are listed in
Table 7.15 and Table 7.16 for Scenarios 2 and 3, respectively. The resulting PDI and NCI for
each group of aircraft based on these design variable values are computed and listed in Table
7.18 and Table 7.20 for Scenarios 2 and 3, respectively; remember that only the first priority
level is used when computing PDI, and the weightings used in the NCI are the relative
importances listed in Table 7.17 and do not include the variation in the scale factor.
Knowing the two extremes of the PDI vs. NCI curve, it is possible to work
“backward” along the curve from the PPCEM solutions toward the benchmark designs by
allowing one or more design variables to vary between each aircraft while holding the others fixed at the platform values. The procedure is as follows.
1. Starting with the individual PPCEM aircraft, vary one variable at a time for each aircraft;
for instance, hold {AF, AR, CSPD, DPRP, WL} at the settings prescribed by the
PPCEM platform and vary WS for each aircraft to improve the performance of that
aircraft as much as possible. This entails solving a compromise DSP for each aircraft,
with the PPCEM value for WS taken as the starting point in DSIDES. All six variables
are allowed to vary one-at-a-time from the PPCEM platform values, solving a
compromise DSP for each aircraft for each variable. NCI and PDI are then computed
for each of the six resulting aircraft families, e.g., the family of aircraft which share
common {AF, AR, CSPD, DPRP, WL} but varying WS.
2. Repeat Step 1, allowing any two variables to vary at a given time between aircraft from
the PPCEM platform values. There are 15 possible pairs of variables which are varied
two-at-a-time, and a compromise DSP is solved for each possible pair with the
PPCEM value taken as the starting point. NCI and PDI are computed for each of the
15 resulting product families.
3. Repeat Step 1, allowing any three variables to vary at a given time between aircraft. In
order to reduce the number of combinations that must be examined, CSPD is not varied
from aircraft to aircraft because it is known not to change much between aircraft in the
group of benchmark designs. Hence, only AF, AR, DPRP, WL, and WS are allowed
to vary from their PPCEM platform values, resulting in 10 different combinations of the
five variables taken three at a time. NCI and PDI are computed for each of the 10
resulting product families.
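Steps 1-3 above enumerate 6 + 15 + 10 = 31 candidate product families per scenario; the bookkeeping can be sketched with itertools:

```python
from itertools import combinations

ALL_VARS = ["AF", "AR", "CSPD", "DPRP", "WL", "WS"]
NO_CSPD = [v for v in ALL_VARS if v != "CSPD"]  # CSPD held fixed in Step 3

vary_one = list(combinations(ALL_VARS, 1))    # Step 1: 6 families
vary_two = list(combinations(ALL_VARS, 2))    # Step 2: 15 families
vary_three = list(combinations(NO_CSPD, 3))   # Step 3: 10 families

counts = (len(vary_one), len(vary_two), len(vary_three))
```

Each combination corresponds to one compromise DSP solve per aircraft, followed by computing NCI and PDI for the resulting family.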
In all, 31 aircraft families (6 + 15 + 10) are generated in the product variety study for each scenario. The resulting NCI and PDI are listed in Table 7.18 for
Scenario 2 and in Table 7.20 for Scenario 3. In the tables, the results are grouped by the
number of variables where the variables not listed are being held constant. For instance, in
Table 7.18 the NCI and PDI for the family of aircraft when allowing WL to vary from one
aircraft to the next are 0.0171 and 0.0155, respectively, with all other design variables fixed at
the PPCEM values; when AF and WL are allowed to vary from one aircraft to the next, the
resulting NCI and PDI for the group of products are 0.0234 and 0.0150, respectively.
Table 7.18  Product Variety Tradeoff Study - Scenario 2

                                                          NCI      PDI
Benchmark Designs - Each aircraft is optimized;
  all variables can vary                                 0.1795   0.0038
PPCEM Designs using Cdk - Each aircraft is
  designed to have the same variables                    0.0000   0.0184

Allow 1 variable to vary between aircraft from PPCEM designs:
  AF                                                     0.0178   0.0181
  AR                                                     0.0040   0.0181
  CSPD                                                   0.0000   0.0182
  DPRP                                                   0.0026   0.0179
  WL                                                     0.0171   0.0155
  WS *                                                   0.0381   0.0117

Allow 2 variables to vary between aircraft from PPCEM designs:
  AF, AR                                                 0.0147   0.0181
  AF, CSPD                                               0.0178   0.0181
  AF, DPRP                                               0.0155   0.0175
  AF, WL                                                 0.0234   0.0150
  AF, WS                                                 0.0509   0.0113
  AR, CSPD                                               0.0041   0.0181
  AR, DPRP                                               0.0172   0.0175
  AR, WL                                                 0.0230   0.0147
  AR, WS                                                 0.0559   0.0096
  CSPD, DPRP                                             0.0027   0.0179
  CSPD, WL                                               0.0233   0.0152
  CSPD, WS                                               0.0382   0.0117
  DPRP, WL                                               0.0249   0.0154
  DPRP, WS                                               0.0528   0.0106
  WL, WS *                                               0.0702   0.0086

Allow 3 variables to vary between aircraft from PPCEM designs:
  AF, AR, DPRP                                           0.0110   0.0181
  AF, AR, WL                                             0.0370   0.0146
  AF, AR, WS                                             0.0848   0.0100
  AF, DPRP, WL                                           0.0495   0.0154
  AF, DPRP, WS                                           0.0672   0.0147
  AF, WL, WS                                             0.1068   0.0081
  AR, DPRP, WL                                           0.0405   0.0151
  AR, DPRP, WS                                           0.0572   0.0107
  AR, WL, WS *                                           0.0803   0.0068
  DPRP, WL, WS                                           0.0701   0.0075

* gray shaded row: best improvement in PDI when allowing that many variables to vary
The gray shaded rows in Table 7.18 indicate the best increase in PDI which can be
achieved by allowing 1, 2, or 3 variables to vary at a given time. So, if only one variable is
allowed to vary between aircraft, then allowing WS to vary yields the best improvement in
Scenario 2; if two can vary, then WL and WS should be allowed to vary; if three can vary, then
varying AR, WL, and WS yields the best improvement. The complete set of results for each
scenario for each aircraft are listed in Section F.4. Plots of NCI versus PDI for each scenario
follow each table and are discussed in turn. The PDI and NCI values for Scenario 2 are plotted
in Figure 7.14.
Figure 7.14  Product Variety Tradeoff Study for Scenario 2 (PDI vs. NCI for the
Cdk solution, the Cdk-Vary1, Cdk-Vary2, and Cdk-Vary3 families, and the
benchmark designs; ΔPDI1 and ΔPDI2 mark the best change in PDI when one or
two variables are allowed to vary)
Notice that the PPCEM solution using the Cdk formulation yields the top left point in
Figure 7.14; the individual benchmark designs provide the bottom right point with all of the
variations on the PPCEM Cdk solutions falling in between the two, creating an envelope of
possible combinations of NCI and PDI. As highlighted in Table 7.18, varying {WS}, {WL, WS},
and {AR, WL, WS} yields the best improvement in PDI if 1, 2, or 3 variables
are allowed to vary between each of the PPCEM aircraft; notice that these points lie on the
front of the product variety envelope. In general, as more design variables are allowed to vary,
the resulting families of aircraft move farther down the front of the envelope toward the benchmark designs.
Is there any way to move down this curve without having to look at all possible
combinations? As it turns out, the design variables that have the most impact on the
performance of the aircraft are the ones that, when allowed to vary, move the family down the front of the curve. This information
can be obtained from a statistical Analysis of Variance (ANOVA) of the data used to build the
kriging metamodels in Step 3 of the PPCEM. The full ANOVA for the family of GAA is given
in Section F.3, and Pareto plots based on the results of the ANOVA are illustrated in Figure
7.15. The Pareto plots provide a means of quickly identifying which variables have the most
impact on a particular response; the larger the horizontal bar, the more influence a variable has
on the response. In Figure 7.15, only the effects of the design variables on the response means
have been plotted because they govern the average performance of the GAA family.
Based on these Pareto plots for the GAA response means, the effect of each factor on
each response can be ranked by order of importance, see Table 7.19. In the table, 1 indicates
most important and 6 the least. So for example, the seat width (WS) has the largest effect on
the purchase price (PURCH) and cruise speed (CSPD) has the least. The economic responses
in the first priority level in Scenario 2 are shown in the top half of the table; the performance
responses which are in the first priority level in Scenario 3 are shown in the bottom half of the
table.
Figure 7.15  Pareto Plots of the Effects of the Design Variables on the GAA
Response Means
Table 7.19  Rank Ordering of Design Variable Effects on the GAA Response Means

Importance on Economic Related Goals
Response      1      2      3      4      5      6
DOC          CSPD   DPRP   WL     AR     WS     AF
WEMP         WS     AR     WL     DPRP   AF     CSPD
PURCH        WS     AR     DPRP   WL     AF     CSPD

Importance on Performance Related Goals
Response      1      2      3      4      5      6
LDMAX        AR     WL     WS     CSPD   DPRP   AF
RANGE        WL     WS     DPRP   AF     AR     CSPD
WFUEL        WS     WL     AR     DPRP   AF     CSPD
VCRMX        WL     DPRP   WS     AR     CSPD   AF
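The rank ordering in Table 7.19 comes from sorting factor effects by magnitude; a sketch using hypothetical mean-effect magnitudes whose ordering matches the PURCH row of the table:

```python
def rank_factors(effects):
    """Sort factors by effect magnitude, descending; rank 1 = most important."""
    ordered = sorted(effects, key=lambda v: abs(effects[v]), reverse=True)
    return {v: r + 1 for r, v in enumerate(ordered)}

# Hypothetical mean-effect magnitudes for PURCH; only their ORDER is taken
# from Table 7.19 (WS largest ... CSPD smallest)
purch_effects = {"WS": 950.0, "AR": 410.0, "DPRP": 260.0,
                 "WL": 190.0, "AF": 85.0, "CSPD": 20.0}
ranks = rank_factors(purch_effects)
```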
Returning to the tradeoff study for Scenario 2, the design variables that shape the front
of the envelope when allowed to vary are as follows: WS, WL, {AR,WS}, {WS,WL}, {WS,
WL, DPRP}, and {AR, WL, WS}. Looking at the rank ordering of importance in Table 7.19,
it can be seen that the variables in these combinations are variables that have the largest effect
on the responses. WS has the largest effect on WEMP and PURCH, two of the three
economic responses in Scenario 2; {WS, AR} are the two most important factors for both of these
economic responses; {AR, WL, WS} are among the top three variables that are most
important to the three economic responses in Scenario 2. Thus, by allowing the design variables
with the most impact to vary between aircraft while keeping the others fixed, substantial
gains in performance can be realized with a minimal loss of commonality.
To see if the same holds true in Scenario 3, the NCI and PDI values for Scenario 3 are
listed in Table 7.20 and plotted in Figure 7.16. As highlighted in the table, the best
improvement in PDI can be obtained by allowing AR to vary between aircraft if only one
variable is allowed to vary. Notice in Table 7.19 that AR is most important to LDMAX.
Recall from Figure 7.13 that the largest discrepancy between the PPCEM family of aircraft and
the benchmark designs occurs in LDMAX achievement; by allowing AR to vary
between aircraft within the PPCEM family, each aircraft is able to achieve better LDMAX,
resulting in a lower PDI for the PPCEM product family. Meanwhile, if two variables are
allowed to vary, then AF and WL yield the best improvement in PDI because WL has a large
impact on both RANGE and VCRMX. In the three variable case, varying DPRP, WL, and
WS yields the best improvement; notice that all three of these variables are among the most important to the performance responses in Scenario 3.
Table 7.20  Product Variety Tradeoff Study - Scenario 3

                                                     NCI      PDI
Benchmark designs (each aircraft individually
optimized; all variables can vary)                   0.0918   0.0284
PPCEM designs using Cdk (each aircraft has the
same variable settings)                              0.0000   0.0452

Allow 1 variable to vary between aircraft from the Cdk designs:
    AF                                               0.0267   0.0452
    AR                                               0.0059   0.0434
    CSPD                                             0.0000   0.0453
    DPRP                                             0.0013   0.0452
    WL                                               0.0010   0.0437
    WS                                               0.0068   0.0443

Allow 2 variables to vary between aircraft from the Cdk designs:
    AF, AR                                           0.0269   0.0430
    AF, CSPD                                         0.0000   0.0453
    AF, DPRP                                         0.0193   0.0450
    AF, WL                                           0.0119   0.0390
    AF, WS                                           0.0159   0.0440
    AR, CSPD                                         0.0017   0.0452
    AR, DPRP                                         0.0101   0.0428
    AR, WL                                           0.0082   0.0402
    AR, WS                                           0.0183   0.0404
    CSPD, DPRP                                       0.0000   0.0453
    CSPD, WL                                         0.0049   0.0397
    CSPD, WS                                         0.0093   0.0449
    DPRP, WL                                         0.0150   0.0401
    DPRP, WS                                         0.0081   0.0446
    WL, WS                                           0.0071   0.0422

Allow 3 variables to vary between aircraft from the Cdk designs:
    AF, AR, DPRP                                     0.0530   0.0430
    AF, AR, WL                                       0.0085   0.0394
    AF, AR, WS                                       0.0361   0.0399
    AF, DPRP, WL                                     0.0272   0.0388
    AF, DPRP, WS                                     0.0158   0.0442
    AF, WL, WS                                       0.0349   0.0408
    AR, DPRP, WL                                     0.0135   0.0414
    AR, DPRP, WS                                     0.0203   0.0397
    AR, WL, WS                                       0.0154   0.0417
    DPRP, WL, WS                                     0.0194   0.0361
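Reading the best combination at each level off Table 7.20 is a mechanical exercise; the sketch below does so for a representative subset of the table's (NCI, PDI) entries, where NCI is the non-commonality index and PDI the performance deviation index:

```python
# A representative subset of the (NCI, PDI) entries from Table 7.20.
results = {
    ("AR",): (0.0059, 0.0434),
    ("WL",): (0.0010, 0.0437),
    ("WS",): (0.0068, 0.0443),
    ("AF", "WL"): (0.0119, 0.0390),
    ("CSPD", "WL"): (0.0049, 0.0397),
    ("AR", "WL"): (0.0082, 0.0402),
    ("DPRP", "WL", "WS"): (0.0194, 0.0361),
    ("AR", "WL", "WS"): (0.0154, 0.0417),
}

def best_per_level(results):
    """For each number of varied variables, find the combination with
    the lowest PDI (performance deviation index)."""
    best = {}
    for subset, (nci, pdi) in results.items():
        size = len(subset)
        if size not in best or pdi < best[size][1][1]:
            best[size] = (subset, (nci, pdi))
    return best

best = best_per_level(results)
# best[1][0] == ("AR",); best[2][0] == ("AF", "WL");
# best[3][0] == ("DPRP", "WL", "WS")
```

The selections agree with the discussion of Scenario 3: AR alone, {AF, WL}, and {DPRP, WL, WS} give the best PDI improvement at each level of commonality loss.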
The PDI and NCI values for Scenario 3 are plotted in Figure 7.16. As in the previous
graph for Scenario 2, the PPCEM solution using the Cdk formulation yields the top left point; the
individual benchmark designs provide the bottom right point. Notice that the combinations of
design variables which move the family of aircraft down the front of the product variety
envelope are, in general, the ones which are rank ordered highest in Table 7.19. Notice also
that more than half of ΔPDIlost can be gained back if {DPRP, WL, WS} are allowed to vary between the aircraft.
[Figure 7.16  NCI (weighted by importance) versus PDI for Scenario 3, showing the Cdk solution, the benchmark designs, and the Cdk designs with 1, 2, or 3 variables allowed to vary between aircraft; ΔPDIi denotes the best change in PDI obtained by allowing i variables to vary between each aircraft design, and ΔNCIgain and ΔPDIlost mark the span of the tradeoff envelope.]
As observed when comparing the benchmark and PPCEM aircraft in Section 7.1.2, considerable improvement in
the performance of the PPCEM family of aircraft can be obtained by allowing 1 or more
variables to vary between aircraft while holding the remainder of the variables at the platform
setting. In the product variety study performed in this section, it has been shown how statistical
analysis of variance can be used to traverse the front of the product variety tradeoff envelope,
maximizing the gains in PDI with minimal loss in commonality. It is now up to the discretion of
the designers/managers to evaluate the implications of this tradeoff on inventory, production, and
sales to decide the appropriate compromise between commonality and performance. A closer
look at some of the lessons learned from this example is offered in the next section along with a review of how this example supports the research hypotheses.
In this chapter, the PPCEM is applied in full to the design of a family of General
Aviation aircraft. The GAA family is based on a common scalable product platform which is
scaled around the number of passengers in much the same way that Boeing has scaled their 747
series of aircraft around the capacity and flight range (cf., Rothwell and Gardiner, 1990). The
market segmentation grid has been used to help identify an appropriate leveraging strategy for
the family of aircraft based on the initial problem statement, i.e., horizontally leverage the family
of GAA to satisfy a variety of low-end market segments. Each aircraft eventually could be
vertically scaled as well through the addition and removal of features as technology improves to reach higher-end market segments.
Particularization of the PPCEM for this example occurs through GASP, the General
Aviation Synthesis Program, which is used to model and simulate the performance of each
aircraft mathematically. Kriging metamodels for response means and variances are employed
within the PPCEM to facilitate the implementation of robust design based on GASP analyses.
These kriging metamodels then are used in conjunction with design capability indices and a
GAA compromise DSP to synthesize a robust aircraft platform which is scalable into a family of
aircraft.
Three different design scenarios are used to exercise the GAA compromise DSP to
create alternative product platforms and the product platform portfolio. Instantiation of the
individual aircraft within the PPCEM family reveals that the PPCEM provides an effective
means for designing a common scalable aircraft platform for the family of GAA. However,
upon comparison with individually designed benchmark aircraft, a tradeoff is found to exist
between having a common set of design variables which define the aircraft platform and the
performance of the scaled derivatives based on that platform. To examine the extent to which
this tradeoff occurs, a product variety tradeoff study is performed using the PPCEM to
demonstrate the ease with which alternative product platforms and product families can be
generated and to make use of the NCI and PDI measures proposed in Section 3.1.5. It is
observed that considerable improvement can be made by allowing one or more variables to
vary between each aircraft based on the original PPCEM platform; however, commonality
between the aircraft is sacrificed. To determine which variables to vary, ANOVA of the data
used to build the kriging metamodels in Step 3 of the PPCEM can be used to determine the
variables that have the largest effect on each response, allowing the front portion of the product
variety envelope to be traversed for maximum improvement in PDI with minimal loss of
commonality. The implications of this tradeoff on inventory, production, and sales must be weighed by the designers and managers involved; the intent of the PPCEM is to generate alternatives for the common product platform and corresponding product family and not to select among them. With respect to the research hypotheses, the following observations are made.
Sub-Hypothesis 1.1 - The market segmentation grid is utilized in Section 7.1.3 to help
identify an appropriate (horizontal) scale factor for the family of GAA—the number of
passengers—in order to achieve the desired platform leveraging based on the problem
objectives; this further supports Sub-Hypothesis 1.1.
Sub-Hypothesis 1.2 - The scale factor for the GAA product family is the number of
passengers, see Sections 7.1.3 and 7.2. Robust design principles then are used in this
example to develop an aircraft platform—defined by six design variables—which is
insensitive to variations in the scale factor and is thus good for the family of General
Aviation aircraft based on the two, four, and six seater configurations. The success of
this implementation helps to support Sub-Hypothesis 1.2.
Sub-Hypothesis 1.3 - Design capability indices are utilized in this example to aggregate
individual targets and constraints and to facilitate the design of a family of General
Aviation aircraft. Combining this formulation with the compromise DSP allows a family
of GAA to be designed around a common, scalable product platform, further verifying
Sub-Hypothesis 1.3.
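The design capability index Cdk invoked here can be illustrated by analogy with the process capability index Cpk. The min-of-two-ratios form below is an assumption made for illustration and is not necessarily the exact formulation used in the dissertation; the numerical values are likewise hypothetical:

```python
import math

def cdk(mean, std, lrl, url):
    """Design capability index for a ranged requirement [lrl, url],
    written here by analogy with the process capability index Cpk:
    a family of designs is deemed capable when cdk >= 1.
    NOTE: the exact Cdk formulation of the dissertation may differ."""
    cdl = (mean - lrl) / (3.0 * std)   # capability against the lower limit
    cdu = (url - mean) / (3.0 * std)   # capability against the upper limit
    return min(cdl, cdu)

# One-sided requirement (no upper limit), e.g., a minimum target
# aggregated over the 2-, 4-, and 6-seat variants:
c = cdk(mean=17.0, std=0.5, lrl=15.5, url=math.inf)  # exactly capable
```

A single index of this kind aggregates individual targets into one constraint on the family's response mean and variation, which is what allows the ranged requirements to be folded into the compromise DSP.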
So, are the solutions obtained from the PPCEM useful? The PPCEM has been
used to generate a variety of feasible options for the GAA platform and corresponding family of
aircraft. While there is some tradeoff between the performance of the individual aircraft based on the common platform and that of the individually designed benchmark aircraft, the increased commonality between the design specifications of each aircraft (i.e.,
aspect ratio, seat width, propeller diameter, etc.) should generate sufficient savings to offset the
minimal loss in performance. Regardless of whether it does or not, the family of aircraft
obtained using the PPCEM yields considerable improvement over the initial family of aircraft
based on the baseline Beechcraft Bonanza design, see Table 7.8 and the discussion thereafter.
Are the time and resources consumed within reasonable limits? Basically, the
PPCEM has been used to design a family of three aircraft almost as efficiently as a single
aircraft. The initial start-up cost of using the PPCEM in this example is about one day, which is
the time it takes to sample the GAA design space and construct kriging metamodels to
approximate GASP. Once this is accomplished, the computational savings resulting from using
As a result, the computational savings are comparable to, if not greater than, those obtained in
the universal electric motor example in Chapter 6. Consider, for instance, that it requires
MP. Meanwhile, the kriging metamodels require approximately 0.25 seconds to run after about
from using metamodels in the PPCEM are substantial when one considers the large numbers of
design scenarios and tradeoff studies used in this chapter and in Appendix F, not to mention the
fact that multiple starting points are used in all cases. The cost savings (in terms of number of
analyses) are not as clear cut as they are in the universal motor example in Chapter 6 (see Section 6.5.3); therefore, they are not estimated. However, the discussion in (Simpson, 1995) regarding the cost savings of using approximations to replace GASP sheds some light on the magnitude of these savings.
Is the work grounded in reality? As stated in Section 7.1.3, the baseline design (i.e.,
starting point) for the GAA product family is the Beechcraft Bonanza B36TC presented in
Section 7.1.3. While the Beechcraft Bonanza is only a six seater aircraft, its specifications are
employed in GASP to provide a family of baseline aircraft against which to compare the PPCEM family of
aircraft based on a common scalable platform. Discussion of these results in Section 7.5.1
reveals that the PPCEM solutions are able to improve upon both the technical and economic performance of the baseline aircraft. While these improvements are slightly less than the improvements obtained by individually designing each
aircraft (i.e., the benchmark designs), the time savings resulting from using the PPCEM to design
the family of three aircraft simultaneously can be used to “tweak” the individual designs as
needed to ensure adequate performance and product quality. It still stands, however, that the
PPCEM solutions, even with all six design variables held at the common product platform
specifications, yield improvement over the baseline design. The results of the product variety
tradeoff studies discussed in Section 7.6.4 provide several options to improve the PPCEM
family of aircraft.
Finally, do the benefits of the work outweigh the cost? The true benefit from using
the PPCEM in a problem like this is the wealth of information that is obtained during its
implementation. The PPCEM greatly facilitates the generation of a variety of alternatives for a
common product platform and its corresponding scaled derivative products. Use of the
PPCEM permits the product platform and the scaled product family to be designed
simultaneously, thus increasing the commonality of specifications across the products within the
family. Product variety tradeoff studies can be easily performed using the PPCEM (and NCI
and PDI metrics) to evaluate the compromise between commonality and individual product performance.
This concludes the second, and final, example for testing and verifying the PPCEM, having demonstrated the full implementation of the PPCEM to design a family of products and
facilitate product variety tradeoff studies. In the next and final chapter, a summary of
achievements and contributions from the work is offered along with a critical review of the research and recommendations for future work.
CHAPTER 8
ACHIEVEMENTS AND RECOMMENDATIONS
In this dissertation, a method has been developed, presented, and tested to facilitate the
design of a scalable product platform for a product family. The development and presentation
of this method is brought to a close in this chapter. In Section 8.1, closure is sought by returning
to the research questions posed in Chapter 1 and reviewing the answers that have been offered.
The resulting contributions are then summarized in Section 8.2. Limitations of the research are
discussed in Section 8.3, and possible avenues of future work are described in Section 8.4.
Concluding remarks are given in Section 8.5, closing this chapter and the dissertation.
8.1 CLOSURE: ANSWERING THE RESEARCH QUESTIONS
The principal objective in this dissertation is to develop the Product Platform Concept Exploration Method (PPCEM) to facilitate the design of
a common product platform which can be scaled to realize a product family. In particular, the
concept of platform scalability is introduced and exploited in the context of the following
Q1. How can a common scalable product platform be modeled and designed for a
product family?
Two secondary research questions are also offered in Section 1.3.1 for investigation in this dissertation; the second concerns the utility of kriging for metamodeling deterministic computer analyses, and the third asks:
Q3. Are space filling designs better suited than classical experimental designs for building approximations of deterministic computer analyses?
To address these questions, research hypotheses and posits are introduced in support of achieving the principal objective of the dissertation. Their elaboration
and verification have provided the context in which the research work has proceeded. The end
result is a synthesis of engineering design, operations research, applied statistics, and strategic
management methods and tools to form the Product Platform Concept Exploration Method. Its
development has been portrayed pictorially using Figure 8.1, which depicts the flow of the dissertation.
[Figure 8.1  Pictorial flow of the dissertation leading to the Product Platform Concept Exploration Method, spanning Chapters 3 through 7 (including the nozzle design study and the platform design examples).]
Answering Question 1: Question 1 is the primary research question posed for the
work in this dissertation and its answer is embodied by the Product Platform Concept
Exploration Method: a Method which facilitates the synthesis and Exploration of a common
Product Platform Concept which can be scaled into an appropriate family of products. The
method consists of a prescription for formulating the problem and a description for solving it. The method is tested and verified through two examples:
• the design of a universal electric motor platform which is (vertically) scaled around the
stack length of the motor to realize a family of electric motors capable of satisfying a
variety of torque and power requirements (Chapter 6), and
• the design of a General Aviation aircraft platform which is (horizontally) scaled into a
two, a four, and a six seat configuration to realize a family of aircraft capable of
satisfying a variety of performance and economic requirements (Chapter 7).
While only demonstrated for these two examples, it is asserted that the method is generally
applicable to other examples in this class of problems: parametrically scalable product platforms
whose performance can be mathematically modeled or simulated. Other examples which have
taken advantage of this type of scaling include the design of a family of oil filters (Seshu, 1998)
and the design of a family of absorption chillers for a variety of refrigeration capacities
(Hernandez, et al., 1998). Both examples integrate nicely within the framework of the PPCEM.
In support of the primary research question and objective, three additional questions
also are offered in Section 1.3.1. Answers to these questions are summarized as follows.
Q1.1. How can product platform scaling opportunities be identified from overall
design requirements?
In this research, the market segmentation grid (Meyer, 1997) is employed to help
identify platform scaling opportunities based on overall design requirements. Its success as an
attention directing tool for mapping scaling opportunities within a product family is discussed in
Section 2.2.1 and then demonstrated in both examples. In the universal motor example in
Chapter 6, the market segmentation grid is used to identify vertical scaling opportunities within
the desired product family to realize a range of torque and power ratings for different
price/performance tiers within the market; standardization of the motor interfaces will provide
horizontal leveraging opportunities of this family of motors into other market segments in a
manner similar to Black & Decker’s response to Double Insulation in the 1970s (Lehnerd, 1987). In the General Aviation aircraft example in Chapter 7, a horizontal leveraging strategy is identified by means of the market segmentation grid, resulting in a family of
three aircraft based on a two, four, and six seater configuration leveraged about a common
product platform. Opportunities for vertical scaling of the resulting family of aircraft through
engine upgrades, add-on features, and technological advancements also are discussed.
Q1.2. How can robust design principles be used to facilitate designing a common scalable product platform for a product family?
By identifying “conceptual noise” factors around which a family of products can be
scaled, robust design principles can be abstracted for use in product family and product
platform design. Consequently, the idea of a scale factor is introduced in Section 2.3.2 as a
factor around which a product platform can be “scaled” or “stretched” to realize derivative
products within a product family. Scale factors are, in essence, noise factors for a scalable
product platform, and robust design principles can be used accordingly to minimize the sensitivity of the platform to variations in these scale factors. The utility of this approach is demonstrated through the two examples. In the universal motor example in
Chapter 6, the stack length of the motor is taken as the (parametric) scale factor around which a
family of motors is created. In the General Aviation Aircraft example, the number of passengers
is the (configurational) scale factor around which a family of three aircraft are developed. In
both cases, robust design principles are employed to develop a common set of design variables
which are robust with respect to variations in the scaling factor as the product platform is scaled
and instantiated to realize the product family. A product variety tradeoff study also is performed
in the General Aviation aircraft example (see Section 7.6.2) to further verify this approach.
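The "bring the mean on target, minimize the variation" treatment of a scale factor can be sketched as follows. The response function, the scale-factor levels, the target, and the goal weights below are all hypothetical stand-ins for a GASP-like analysis and the compromise DSP goals, intended only to show the shape of the formulation:

```python
import statistics

def response(x, scale):
    """Hypothetical response of a platform setting x at a given value
    of the scale factor (a stand-in for a GASP-like simulation)."""
    return x * scale + (x - 2.0) ** 2

def mean_and_variation(x, scale_levels):
    """Aggregate the family by evaluating the platform setting at each
    scale-factor level (e.g., the 2-, 4-, and 6-seat configurations)."""
    ys = [response(x, s) for s in scale_levels]
    return statistics.mean(ys), statistics.pstdev(ys)

def robustness_objective(x, scale_levels, target, w_mean=0.5, w_var=0.5):
    """Compromise-DSP-style goals: bring the mean on target and
    minimize variation caused by the scale factor (illustrative weights)."""
    mu, sigma = mean_and_variation(x, scale_levels)
    return w_mean * abs(mu - target) + w_var * sigma

mu, sigma = mean_and_variation(2.0, [2, 4, 6])  # family mean and spread
```

Minimizing such an objective over the platform settings yields a design whose performance is insensitive to the scale factor, which is precisely the robustness sought for the common platform.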
Q1.3. How can individual targets for derivative products be aggregated and modeled in the design of a common product platform?
Through the identification of appropriate scaling factors during the product family design
process, the individual targets for derivative products can be aggregated into a mean and
variance around which the product family can be simultaneously designed either by having
separate goals for “bringing the mean on target” and “minimizing the variation” or through design capability indices which measure the capability of a family of designs to satisfy a ranged set of design requirements. The former
approach is utilized to design the universal electric motor platform in Chapter 6. Goals for
“bringing the mean on target” and “minimizing the variation” caused by variations in the scale
factor (stack length) are used within a compromise DSP to effect a platform design which
matches the target mean and variation for the aggregated product family. In Chapter 7, design
capability indices are employed to design a family of General Aviation aircraft around a common scalable product platform.
Answering Question 2: Despite being advocated as a metamodeling tool for engineering design by Sacks, et al. (1989), kriging has received little
attention from the engineering community for building surrogate models. Perhaps this is because of
the added complexity of fitting and using the model, or because of the inability to glean useful information
directly from the MLE parameters used to fit the model. Whatever the reason, the research in
this dissertation has been directed at improving the ease with which kriging models can be built,
validated, and used. Moreover, the initial feasibility study comparing kriging and response surface models in Chapter 4 and the extensive kriging/DOE investigation in Chapter 5 are aimed at familiarizing the reader with
kriging and making it a viable alternative for building surrogate metamodels of deterministic
computer experiments. Its utility was tested extensively in Chapter 5 wherein it was concluded
that the Gaussian correlation function provides the most accurate kriging predictor, on average,
and that kriging can accurately model a wide variety of functions typical of engineering analysis.
While the study is not all inclusive, nor is it intended to be, it has provided valuable insight into
the utility of kriging metamodels for engineering design. Potential avenues for improving and extending kriging metamodels are described in Section 8.4.
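For readers unfamiliar with kriging, a minimal ordinary-kriging predictor with the Gaussian correlation function can be written as below. This is only a sketch: it fixes the correlation parameters theta rather than estimating them by maximum likelihood (which the dissertation does via simulated annealing), and it adds a tiny nugget purely for numerical stability:

```python
import numpy as np

def gaussian_corr(X1, X2, theta):
    """Gaussian correlation: R_ij = exp(-sum_k theta_k (x1_ik - x2_jk)^2)."""
    d = X1[:, None, :] - X2[None, :, :]
    return np.exp(-np.einsum('ijk,k->ij', d ** 2, theta))

def fit_kriging(X, y, theta):
    """Ordinary kriging with a constant mean; theta is fixed here rather
    than estimated by maximum likelihood."""
    n = len(X)
    R = gaussian_corr(X, X, theta) + 1e-10 * np.eye(n)  # tiny nugget
    R_inv = np.linalg.inv(R)
    ones = np.ones(n)
    beta = (ones @ R_inv @ y) / (ones @ R_inv @ ones)  # GLS mean estimate
    return {"X": X, "theta": theta, "beta": beta, "w": R_inv @ (y - beta)}

def predict(model, X_new):
    """BLUP: y_hat(x) = beta + r(x)' R^{-1} (y - beta * 1)."""
    r = gaussian_corr(X_new, model["X"], model["theta"])
    return model["beta"] + r @ model["w"]

# The predictor interpolates deterministic training data, which is the
# property that makes kriging attractive for computer experiments.
X = np.linspace(0.0, 1.0, 8).reshape(-1, 1)
y = np.sin(6.0 * X[:, 0])
model = fit_kriging(X, y, theta=np.array([50.0]))
y_hat = predict(model, X)
```

Because the predictor passes through every sample point, no replicate runs are needed, which motivates the space filling designs discussed next.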
Answering Question 3: It has been argued that classical experimental designs are not well suited for sampling computer experiments which are
deterministic; rather, points should be chosen to “fill the space,” providing good coverage of the
design space since replicate sample points are not needed. In an effort to verify the utility of
space filling experimental designs, a comparison of nine space filling and two classical
experimental designs is performed in Chapter 5 (see Section 5.4 in particular) to address this
third research question. The eleven experimental designs are compared on the basis of their
capability to produce accurate kriging metamodels for the testbed of six engineering problems
used in this dissertation. For the sample sizes investigated in this study, it was observed that the
space filling experimental designs yielded more accurate kriging models in the larger design
spaces (3 and 4 variables) while the classical experimental designs (CCDs) performed well in
the two dimensional design space for the reasons discussed at the end of Section 5.4.4. Prior
to this investigation, few researchers had compared their experimental designs against one
another, or to classical designs for that matter. As such, the findings in the kriging/DOE study in
Chapter 5 represent unique contributions from the research. A summary of the research contributions is offered in the next section.
The contributions offered in this dissertation are introduced in Section 1.3.2 and realized
throughout the dissertation. As stated at the beginning of Chapter 1, the primary contribution
from this work is embodied in the Product Platform Concept Exploration Method which
provides a method to identify, model, and synthesize scalable product platforms for a product family. Additional contributions from the research include the following.
• A procedure for identifying scale factors for a product platform, see Sections 3.1.1 and
3.1.2.
• An algorithm to build, validate, and use a kriging model, see Section 2.4.2, Chapters 4 and 5, and Appendix A.
• A preliminary comparison of the predictive capability of second-order response
surfaces and kriging models in the design of a rocket nozzle, see Section 4.2.
• An algorithm for generating minimax Latin hypercube designs, see Section 2.4.3 and
Appendix C.
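To give a flavor of the Latin hypercube construction, the sketch below generates a Latin hypercube by random restarts under the simpler maximin criterion (maximize the smallest pairwise distance between points). Note this is an illustration only; the minimax criterion of Appendix C, which minimizes the maximum distance from any point in the region to its nearest design point, requires additional machinery:

```python
import numpy as np

def latin_hypercube(n, k, rng):
    """One random Latin hypercube in [0, 1]^k: each dimension is
    stratified into n equal bins and every bin receives one point."""
    sample = np.empty((n, k))
    for j in range(k):
        perm = rng.permutation(n)
        sample[:, j] = (perm + rng.random(n)) / n
    return sample

def maximin_lhs(n, k, n_restarts=200, seed=0):
    """Pick, out of many random Latin hypercubes, the one whose
    smallest pairwise distance is largest (the maximin criterion)."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(n_restarts):
        X = latin_hypercube(n, k, rng)
        d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        score = d[np.triu_indices(n, 1)].min()
        if score > best_score:
            best, best_score = X, score
    return best
```

The stratification guarantees good one-dimensional projections, while the distance criterion spreads the points through the full design space, which is the "space filling" property sought for deterministic computer experiments.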
For research to be considered a contribution, it must be judged of sufficient worth to be either an addition to the fundamental knowledge of the field or a new and better
interpretation of the facts already known. The contributions associated with kriging represent a
new interpretation of facts already known. Kriging has been around since the 1960s (see, e.g.,
Cressie, 1993; Matheron, 1963) when it was developed originally for mining and geostatistics
applications; however, it has received limited attention in the engineering design community until
recently. The kriging algorithm presented is not totally unique to this dissertation; however, the
use of a simulated annealing algorithm (see Appendix A for more details on its use in the
maximum likelihood estimation for the kriging metamodels) to find the “best” kriging model is.
Moreover, the comparison of the accuracy of different correlation functions on the resulting
kriging model had never been performed in such depth. Likewise, the comparison of space
filling and classical experimental designs represents a new and better interpretation of facts already
known because such an extensive study has never been undertaken. With the exception of the
minimax Latin hypercube design, the experimental designs investigated in this dissertation are the
result of years of research work by statisticians and mathematicians. The minimax Latin
hypercube design, however, represents an addition to the fundamental knowledge of the field of
experimental design.
The contributions made in the area of product family design, specifically the method for designing scalable product platforms embodied in the PPCEM, constitute an addition to the fundamental knowledge of the field. While other product family design strategies and methods have been slowly evolving,
the investigation of a method for platform scaling is previously unrecorded. The incorporation of
the market segmentation grid into the engineering design process provides a new interpretation
of facts already known, demonstrating how the market segmentation grid becomes a useful
attention directing tool for identifying platform leveraging strategies in product family design and, with a
little engineering knowledge, appropriate scale factors for the intended scalable platform. In this
regard, the concept of scale factors in product family design and extending robust design to
product family and product platform design is unique to this dissertation as are the NCI and PDI
measures for product family non-commonality and performance deviation. The measures are
not of significant value in and of themselves; however, the product variety tradeoff studies which
these indices make possible provide significant insight into the tradeoffs of product family design.
Taken together, the resulting Product Platform Concept Exploration Method for designing
scalable product platforms for a product family provides an addition to the fundamental
knowledge of the nascent field of product family design. However, the PPCEM is by no means
a panacea for product platform and product family design nor is it without its limitations.
Toward this end, a critical evaluation of the work is offered in the next section, followed by recommendations for future work in Section 8.4.
This section comprises the confessional portion of the dissertation wherein the research
itself is critically evaluated. Already the PPCEM has been critically evaluated as it pertains to
the two example problems in Chapters 6 and 7, see Sections 6.5 and 7.6.5. In this section, the applicability of the method itself is examined: when can the PPCEM be used to design a product family based on a common scalable product platform? There are two basic
requirements which must be met in order for the PPCEM to be applicable to the design of a
scalable product platform. First, the concept of scalability must be exploitable within the
product family; exploited in the sense that having one or more scale factors provides a means to
realize a variety of performance requirements while also facilitating the manufacturing process.
For instance, in the electric motor example in Chapter 6, the motor could have just as easily
been scaled in the radial dimension as it was in the axial direction (i.e., stack length) to achieve
the necessary torque requirements; however, the underlying assumption in the choice of stack
length as the scale factor is that it can be exploited from both a technical sense and a
manufacturing sense. As Lehnerd (1987) alludes to in his article on Black & Decker and their
universal motor platform, by varying only the stack length of the motor, all of the motors—
ranging from 60 Watts to 660 Watts—could be produced on the same machine simply by
stacking more laminations onto the field and armature. Had the radius of the motor been scaled
instead of the stack length, different machines and tooling configurations would have been
required to produce the family of motors since varying the radius of the motors is more than a
stacking operation. Consequently, it is very important that one or more scale factors be identified for the product family and that they be capable of being exploited from both a technical standpoint and a manufacturing standpoint in order for the PPCEM to yield
useful results.
The second consideration when applying the PPCEM is that the performance of the
product family must be able to be mathematically modeled, simulated, or quantified in order for the
PPCEM to be employed. It would be extremely difficult, if not impossible, for the PPCEM to
be utilized to design a common scalable automotive body platform based solely on aesthetic
considerations for instance. Consider the examples discussed in Section 1.1.1; to which of
these examples could the PPCEM be applied and why (or why not)? For the sake of discussion, each is considered in turn.
• Sony: Walkman. Would the PPCEM apply? No. Their platform strategy involves modular design and standardization of components; few, if any, scaling issues are present within the product family.
• Nippondenso: Panel Meters. Would the PPCEM apply? No. They employ a combinatoric strategy to realize the necessary product variety based on a few well-designed, standardized parts; few, if any, scaling issues are present.
• Lutron: Lighting Control Systems. Would the PPCEM apply? No, for the same reasoning as Nippondenso.
• Black & Decker: Universal Motor. Would the PPCEM apply? Yes. The platform is scaled around the stack length of the motor, and an attempt was made to recreate their family of motors as the initial “proof of concept” for the PPCEM in Chapter 6.
• Canon: Copiers. Would the PPCEM apply? Not really. The majority of copier design involves modular design of components and assemblies; however, some scaling issues may arise to accommodate different print volumes, paper sizes, etc.
• Rolls Royce: RTM322 Engine. Would the PPCEM apply? Yes, in some aspects. The RTM322 was scaled to create a new product platform, but modularity of engine components facilitated vertical scaling of the platform to upgrade and derate the engine.
As stated in Section 1.1.2, the types of problems to which the PPCEM is readily
applicable (given that the previous two conditions regarding scalability and quantifiability are
met) typically involve parametric or variant design. The fact that the PPCEM is intended
primarily for parametric or variant design raises another important issue, namely, successful
implementation of the PPCEM assumes that the basic concept or configuration on which the
product platform is being based is good for the entire product family. In order for the
PPCEM to be employed, a good underlying concept or configuration must have already been
established in order to obtain the full benefit of the method. In the GAA example in Chapter 7,
for instance, if the three blade, high wing position, retractable landing gear configuration had not
been a suitable concept for the two, four, and six seater aircraft, then no matter what parameter
settings were obtained from using the PPCEM, the performance of the family of aircraft would
have been poor regardless because the underlying concept was not good for all three aircraft.
An attempt to identify a good configuration for the family of GAA is discussed in (Simpson, 1995), but such a concept is assumed to exist a priori in this work.
Incorporation of the conceptual and configurational design of the product family along with the
parametric scaling of the product platform is a fertile area for future work.
It is important to reiterate that the PPCEM is intended to generate a variety of feasible options for common product platforms which can be scaled into an appropriate product family.
The PPCEM is not necessarily intended to be used to evaluate these options or select one of
them. The idea behind the product platform portfolio—the output from applying the PPCEM—is to maintain the freedom to satisfy a variety of design requirements for as long as possible. As the product platform design progresses into the
detailed stages of design, this design freedom is reduced; however, during the early stages of the
design process, formulating and answering a variety of "what if" type questions and examining a
wide variety of design scenarios is important to the product platform design process.
Meanwhile, the NCI and PDI measures introduced in Section 3.1.5 and employed in
Section 7.6.4 represent an attempt to provide a means to evaluate different product platforms
and their respective product families. Ultimately, the non-commonality of a set of parameters should be translated into increased manufacturing and inventory costs and potential losses in customer sales; however, this is extremely difficult to accomplish without sufficient
industry input. Modeling the process and manufacturing aspects of product platforms and
product families is another fertile research area which has yet to be explored.
As far as the scale factors themselves go, the concept of a scale factor—while
discussed in Section 2.3.2 and 3.1.2—is still not fully understood. In the motor example in
Chapter 6, for instance, the stack length of the motor was a scale factor whose mean and standard deviation were treated much like design variables. Meanwhile, in the GAA example in
Chapter 7, the scale factor was the number of passengers which was treated as a design
parameter which varied from two to six, i.e., its permissible range of values was known a priori
based on the intended leveraging strategy. In any event, when metamodels are to be utilized
within the PPCEM, an initial range for each scale factor is necessary in order to construct these
metamodels. This follows in the same manner that a permissible range of any noise factor is
expected to be known before robust design principles can be applied to a problem (cf.,
Phadke, 1989). It is important to examine the concept of scale factors further, finding more
examples of scaled product platforms to understand the manner in which they have been scaled
and, more importantly, how those scale factors are identified during the design process.
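Treating a scale factor like a noise factor with a known range can be sketched in a few lines: for a fixed platform design, the mean and spread of a performance function are estimated over the scale factor's a priori range. The function names and the discretization of the range are assumptions for illustration:

```python
import numpy as np

def platform_robustness(perf, x, scale_values):
    """Evaluate a performance function perf(x, s) for a fixed platform
    design x across the scale factor's permissible range, returning the
    mean response and its standard deviation -- the quantities a robust
    design formulation would keep on target and minimize, respectively."""
    y = np.array([perf(x, s) for s in scale_values])
    return y.mean(), y.std()
```

For a passenger-count scale factor known a priori to vary from two to six, `scale_values` might simply be `np.linspace(2, 6, 5)`.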
This brings to light another shortcoming of the PPCEM, namely, the use of the market
segmentation grid to “identify” scale factors around which the product platform is leveraged
within a product family. As stated in Section 2.2.1, the market segmentation grid is only an
attention directing tool and considerable engineering “know-how” and problem insight are
required before a successful platform leveraging strategy can be identified. Then, only after a
suitable platform leveraging strategy is identified, can engineers hope to find (and be able to
exploit) scaling opportunities within the product family to realize the necessary product variety.
The market segmentation grid is the end result of this process and is really only useful for
mapping the resulting platform leveraging strategy. The two examples used in this
dissertation trivialize this process when in reality it is extremely difficult, if not impossible, to
identify one or more scaling factors which can be exploited within a product family. Developing
tools and methods to facilitate the process of identifying scale factors is one potential avenue for
further investigation.
Part of understanding scale factors better involves understanding their effect on product
performance and how scale factors can be used effectively to satisfy a wide variety of customer
requirements. If scale factors induce too much variability in product performance, then it might
not be possible to apply the PPCEM to develop a common product platform which does not
significantly compromise the performance of the product family over the range of interest. In
such a case, it might be necessary to “split” the design space into two or more product
platforms and corresponding product families rather than compromise product performance and
quality by having one single product platform which is scaled over the entire range of
performance. The work in (Chang and Ward, 1995; Lucas, 1994; Rangarajan, 1998; Seshu,
1998) further investigates and discusses these types of issues. Lucas (1994) in particular
presents interesting remarks on how to resolve these types of issues using concepts from robust design.
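The "splitting" idea above can be sketched as a greedy scan along the scale factor's range: a new platform is started whenever the performance variability within the current segment exceeds a tolerance. The one-dimensional setting, the threshold, and the function names are assumptions for illustration:

```python
import numpy as np

def split_platforms(scale_values, perf, max_std):
    """Greedily partition an ordered list of scale factor values into
    contiguous segments whose performance standard deviation stays below
    max_std; each segment would then be served by its own product platform."""
    segments, start = [], 0
    for end in range(1, len(scale_values) + 1):
        y = [perf(s) for s in scale_values[start:end]]
        if np.std(y) > max_std:
            # variability too high: close the previous segment here
            segments.append(scale_values[start:end - 1])
            start = end - 1
    segments.append(scale_values[start:])
    return segments
```

A single pass like this is only a heuristic; it illustrates how a tolerance on performance variation naturally yields two or more platforms when one scaled platform would compromise quality.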
Turning to specific implementation issues within the PPCEM, it may not have been
sufficiently clear that kriging, while used within the PPCEM, is not an integral part of it
since it is not the only metamodeling technique which can be used within the PPCEM.
Response surfaces, neural nets, radial basis functions, etc. are all viable metamodeling options
for use in engineering design and with the PPCEM. The extensive literature review of
metamodeling applications in engineering design in (Simpson, et al., 1997b) supports this. The
principal requirement of metamodels in engineering design is that they are sufficiently accurate for the task at hand.
The investigations into kriging in this dissertation are primarily intended to shed light on
alternative metamodeling techniques which offer some advantages to response surface models
which are typically employed. The case for investigating alternatives to response surfaces has
been made in Section 2.4.1 and is also discussed in (Simpson, et al., 1998; Simpson, et al.,
1997b). The objective in this research is not to prove that kriging metamodels are better than
response surface models; rather, it is to demonstrate that kriging metamodels are a viable alternative.
Similarly, the use of space filling experimental designs as opposed to classical designs
is not mandated by this research. The investigation served to gain a better understanding of
the different sampling strategies which exist and the associated advantages and disadvantages of
each. If one experimental design type had proven superior in every example, then perhaps only
that design should be considered in the future. However, that was not the case, and the results
of this study are by no means generalizable to all types of engineering design problems. Very
few engineering problems, for instance, involve only two to four variables, and the availability of
codes to generate these space filling designs, the computation expense of them, and the nature
of the underlying analyses are just a few of the key factors that influence the decision of how to
sample a design space efficiently and effectively. Recommendations for future work in the areas
of experimental design and kriging are discussed in more detail in the next section.
The comparison of experimental designs in this dissertation is by no means complete, nor is it intended to be. Obviously, a wider variety of problems should be
considered in order to obtain more generalizable recommendations. Additional space filling and
classical experimental designs which have not been considered include the following:
Classical Experimental Designs: fractional factorial designs and small central composite
designs (see, e.g., Box and Draper, 1987); D-optimal designs (see, e.g., Box and
Draper, 1971; Giunta, et al., 1994; Mitchell, 1974; St. John and Draper, 1975); I-, A-,
E-, and G-optimal designs (see, e.g., Hardin and Sloane, 1993; Myers and
Montgomery, 1995); minimum bias designs (see, e.g., Myers and Montgomery, 1995;
Venter and Haftka, 1997); and other hybrid designs (see, e.g., Giovannitti-Jensen and
Myers, 1989; Myers and Montgomery, 1995)
Space Filling Experimental Designs: median Latin hypercubes (see, e.g., Kalagnanam
and Diwekar, 1997; McKay, et al., 1979); minimax and maximin designs (Johnson, et
al., 1990); scrambled nets (Koehler and Owen, 1996); orthogonal arrays of different
strengths (Owen, 1992); maximum entropy designs (Currin, et al., 1991; Shewry and
Wynn, 1987; Shewry and Wynn, 1988); and factorial hypercube designs (Salagame
and Barton, 1997).
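Of the space filling designs listed above, the median Latin hypercube is simple enough to sketch directly: each variable's range is cut into n equal cells and the cell midpoints are randomly permuted column by column. This is a minimal sketch; the construction of McKay, et al. (1979) samples within cells rather than at their midpoints:

```python
import numpy as np

def median_latin_hypercube(n, k, seed=None):
    """n points in k dimensions on [0, 1]^k. Each column is an independent
    random permutation of the n cell midpoints (i + 0.5)/n, so every
    one-dimensional projection is evenly stratified."""
    rng = np.random.default_rng(seed)
    midpoints = (np.arange(n) + 0.5) / n
    return np.column_stack([rng.permutation(midpoints) for _ in range(k)])
```

Because each column is a permutation of the same midpoints, projecting the design onto any single variable recovers an evenly spaced sample, which is the defining property of a Latin hypercube.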
In addition to the current testbed of problems, larger problems also should be investigated because very few
engineering problems have only two to four variables. However, problems with larger dimensional
design spaces (i.e., more design variables), invoke new complications. For instance, many of
the generators used to create the space filling experimental designs become computationally
expensive in and of themselves for large numbers of factors. For example, the simulated
annealing algorithm for generating maximin Latin hypercube designs (Morris and Mitchell, 1992;
1995) becomes extremely slow even for four factor designs with as few as 25 sample points, as
discussed in Section 5.1. Moreover, fractional factorial based central composite designs are
available for problems with five or more factors. Hence, larger problems require different sampling strategies.
As for the minimax Latin hypercube design, which is unique to this dissertation, the
genetic algorithm which is employed to generate these designs needs further study to develop
a better understanding of its workings and to learn the optimal combination of parameters for
its use, namely, population size, number of permissible generations, mutation rates, and
termination criteria. Also, as it stands right now, the current design criterion—minimize the
maximum distance between sample points and prediction points—does not yield a unique
design for a given sample size and number of design variables. Developing and implementing an
optimization criterion such as that proposed by Mitchell and Morris (1995) for their maximin
Latin hypercube designs could improve the effectiveness of the minimax Latin hypercubes.
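The minimax criterion just described is easy to state computationally: over a dense set of prediction points, find each point's nearest design site and take the largest such distance; the genetic algorithm then searches for the Latin hypercube minimizing this value. A sketch of the criterion itself follows, where the grid-based approximation of the prediction points is an assumption:

```python
import numpy as np

def minimax_criterion(design, prediction_points):
    """Largest Euclidean distance from any prediction point to its nearest
    design point; a minimax design makes this value as small as possible."""
    # pairwise distances: rows index prediction points, columns design points
    d = np.linalg.norm(prediction_points[:, None, :] - design[None, :, :],
                       axis=2)
    return float(d.min(axis=1).max())
```

Because many designs can attain the same criterion value, this also illustrates why the criterion, as noted above, does not yield a unique design for a given sample size and dimension.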
As for kriging, only kriging metamodels which employ an underlying constant for the
global portion of the model have been investigated in this work. In general, f(x) in Equation
2.14 could be taken as a linear or quadratic model instead of a constant which may permit more
accurate kriging approximations; however, the problem of having a sufficient number of samples
to estimate all of the unknown coefficients in f(x) resurfaces. A preliminary investigation of such
an approach is documented in (Giunta, et al., 1998); they find minimal improvement in the accuracy of the resulting approximations.
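A kriging metamodel with a constant global portion, as used throughout this work, can be sketched compactly for fixed correlation parameters theta. Fitting theta by maximum likelihood (done in this work with simulated annealing) is omitted, and the Gaussian correlation function and variable names here are assumptions:

```python
import numpy as np

def gauss_corr(A, B, theta):
    """Gaussian correlation: R_ij = exp(-sum_k theta_k (a_ik - b_jk)^2)."""
    diff = A[:, None, :] - B[None, :, :]
    return np.exp(-np.sum(theta * diff**2, axis=2))

def fit_kriging(X, y, theta):
    """Constant-trend kriging: f(x) = beta, with beta the generalized
    least squares estimate; a tiny nugget keeps R invertible."""
    R = gauss_corr(X, X, theta) + 1e-10 * np.eye(len(X))
    R_inv = np.linalg.inv(R)
    ones = np.ones(len(X))
    beta = (ones @ R_inv @ y) / (ones @ R_inv @ ones)
    return {"X": X, "theta": theta, "R_inv": R_inv,
            "beta": beta, "resid": y - beta}

def kriging_predict(model, X_new):
    """y_hat(x) = beta + r(x)^T R^{-1} (y - beta*1): interpolates the data."""
    r = gauss_corr(X_new, model["X"], model["theta"])
    return model["beta"] + r @ model["R_inv"] @ model["resid"]
```

Because the predictor interpolates, it reproduces the training responses (up to the nugget), which is the property exploited in the next paragraph.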
Meanwhile, the power of kriging lies in its capability to interpolate accurately a wide
range of linear and non-linear functions. An iterative or sequential strategy which takes
advantage of this may prove useful provided the kriging models can be fit and validated quickly
from one iteration to the next. Consequently, trust region based approaches which incorporate
kriging metamodels warrant further investigation (see, e.g., Alexandrov, et al., 1997; Booker, et al., 1996; Booker, et al.,
1995; Cox and John, 1995; Dennis and Torczon, 1996; Osio and Amon, 1996; Schonlau, et
al., 1997).
Finally, alternative optimization algorithms for finding the “best” kriging model also must
be investigated for use with larger problems. The simulated annealing algorithm currently
employed to fit the kriging models, see Appendix A, becomes extremely inefficient for problems
with more than eight variables and approximately 180 sample points. Moreover, the matrix
inversion routines in the current prediction software do not take full advantage of the properties
of the correlation matrix, R, in kriging which is always symmetric and positive definite. Several
matrix decomposition and inversion algorithms have been developed to take advantage of these properties and should be incorporated into the software.
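For example, since R is symmetric and positive definite, a Cholesky factorization R = LL^T allows a system R a = b to be solved at roughly half the cost of a general-purpose inversion, and the factor can be reused for every prediction. A sketch follows; the generic solver calls below stand in for dedicated triangular-solve routines:

```python
import numpy as np

def solve_spd(R, b):
    """Solve R a = b for symmetric positive definite R via its Cholesky
    factor, avoiding an explicit (and less stable) matrix inverse."""
    L = np.linalg.cholesky(R)          # R = L @ L.T, L lower triangular
    z = np.linalg.solve(L, b)          # forward substitution: L z = b
    return np.linalg.solve(L.T, z)     # back substitution: L.T a = z
```

The same factor L also yields the determinant terms needed in the kriging likelihood, another computation that a general matrix inversion routine would not expose cheaply.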
The concept of scalability and scalable product platforms has provided an excellent
inroad into product family and product platform design, marrying current research efforts in
Decision-Based Design, the Robust Concept Exploration Method, and robust design with tools
from marketing/management science. The end result is the Product Platform Concept
Exploration Method which has been demonstrated by means of two examples: the design of a
family of universal motors and the design of a family of General Aviation aircraft. While it has
been shown that the PPCEM is effective at producing a family of products based on a scaled product platform, there remains ample room for extension and improvement.
Furthermore, much in the same way that the product platform provides a platform for
leveraging within a product family, the Product Platform Concept Exploration Method provides a
platform for leveraging future work in product family and product platform design, see Figure
8.2. The different types of systems can be classified on the vertical axis of a market
segmentation grid and different characteristics of product platform design on the horizontal axis.
The use of the PPCEM to design scalable product platforms for a variety of systems then can
be plotted on this market segmentation grid as illustrated in Figure 8.2 for the two examples in
this dissertation. Perhaps through the addition of different “Processors” to the PPCEM,
additional capabilities could be developed within the framework of the PPCEM to design
modular platforms or facilitate product family redesign around a common platform, for instance.
[Figure: a market segmentation grid whose vertical axis runs from Simple Systems (Universal Motor) to Complex Systems (GAA)]
Figure 8.2 The PPCEM as a Platform for Other Platform Design Methods
Several avenues of future work have also been mentioned during the critical analysis in
Section 8.3. In addition to these potential research areas, additional verification and extensions
of the PPCEM are offered in the following sections as they tie to current research within the
Systems Realization Laboratory at the Georgia Institute of Technology. These sections have been
co-written with colleagues who are planning to pursue (or are currently pursuing) the discussed
research and who provided input for this section.
8.4.3 Additional Verification of the PPCEM and Kriging Metamodels through the
Concurrent Design of an Engine Lubrication System
The objective in the Ford Engine Design Project is to develop and improve engine
lubrication system models to support advanced concurrent powertrain design and development
(cf., Rangarajan, 1998). As part of this work, robust design specifications are sought which are
capable of satisfying a wide variety of torque and power requirements for different automobile
engines. After developing a better understanding of the engine lubrication system and its
components, potential scaling opportunities within the engine lubrication systems components
can be identified and exploited using the PPCEM to develop a robust and common platform
design for the valves, pistons, bearings, etc. This platform then can be instantiated quickly using
minimal additional analysis for different classes of vehicles (e.g., automobiles, trucks, and vans)
in an effort to maintain better economies of scale across a wide variety of automobile makes and
models.
In addition to identifying scaling opportunities within the engine lubrication system
components, the use of kriging metamodels for building surrogate approximations of the
associated complex fluid dynamics analyses also can be investigated. Currently second-order
response surfaces are used extensively during the design process; however, the complex
analyses for friction losses, power losses, etc. cannot be modeled well by response surfaces
over a large region of the design space, thus limiting the search for good solutions. Building
accurate global approximations of these analyses using kriging metamodels may yield additional
insight into the complexities of the design space, allowing better solutions to be identified. The
utility of kriging for partitioning and screening large systems also can be examined in the
context of the engine lubrication system since a large number of factors (approximately 20) currently are
being utilized, which would push the limits of the kriging metamodeling software (i.e., fitting the
model, matrix inversion, etc.). Finally, additional metamodeling techniques such as neural
networks (see, e.g., Cheng and Titterington, 1994; Hajela and Berke, 1992; Rumelhart, et al.,
1994; Widrow, et al., 1994) also can be compared to kriging given the size and complexity of
the problem.
Automotive companies must balance the need to customize products for target markets
against the proliferation of options and model derivatives, which leads to increased tooling cost
and production line complexity. At first glance, it may appear that automotive platforms are prime examples for
product variety design research. However, in a recent study, Siddique, et al. (1998) identified
significant differences between the variety characteristics of automotive platforms and those of some of
the examples that other researchers have studied (e.g., the Sony Walkman family). For
example, the majority of product family design research is applicable to products that are
modular with respect to functions as discussed in Section 2.2.3. The automotive platform, on
the other hand, is not modular because the platform accomplishes one function as a whole. As
a result, many product family design approaches do not readily apply; however, careful
commonization of platforms can still be used to increase product variety while reducing the
number of components between different models and the product line complexity.
Developing a common platform requires a robust platform that can support all of the
requirements for different car models and also a common assembly process that can support
these variations. For the automotive industry, platform requirements come from packaging
constraints (underhood, passenger, etc.), safety/crash requirements, size of the vehicle, styling,
and other requirements/regulations. Cars in similar classes have similar types of requirements
(except for styling, maybe); as such, the underbody for similar cars has the potential to be
commonized. Toward this end, a method for the configuration design of common product
platforms is to be developed, extending the parameter design capabilities of the PPCEM for
designing scalable product platforms. As discussed in (Siddique, 1998; Siddique and Rosen, 1998),
using configuration design methods the underlying common core for different platforms can be
identified along with the required variations. This information then can be used to increase the
commonality of the product platform and determine how to isolate the variability in specific
modules.
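The last step, separating a common core from model-specific variation, can be sketched with a simple comparison over component specifications. The dictionary representation and names here are assumptions for illustration:

```python
def common_core(models):
    """models: {model name: {component: specification}}. Returns the
    components whose specification is identical across every model (the
    candidate common core) and, per model, the remaining variant parts
    to be isolated in specific modules."""
    names = list(models)
    first = models[names[0]]
    core = {c: spec for c, spec in first.items()
            if all(models[m].get(c) == spec for m in names[1:])}
    variants = {m: {c: spec for c, spec in models[m].items() if c not in core}
                for m in names}
    return core, variants
```

In practice the "specifications" would be richer objects (geometry, locators, weld lines), but the partition into shared core and isolated variant modules is the same.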
A common assembly process is desired so that the same assembly line can be used to produce all of the (minor) platform
derivatives. Using the same component loading sequences, tooling sequences, etc. provides
some of the requirements when developing a common assembly process (cf., Nevins and
Whitney, 1989; Whitney, 1993). Other requirements that need to be considered specifically for
automobile platforms include common locators, weld lines, transfer points, etc. Hence, it is
important to consider these assembly process requirements alongside the product platform itself.
8.4.5 Integrated Product and Process Design of Product Families and Mass
Customized Goods
Mass customization, i.e., the manufacture of customized products with the efficiency of
mass production, is heralded as a source of competitive advantage and possibly the next
world-class manufacturing paradigm. Although the
marketplace is rapidly moving towards mass customization, very little work has been done on
formalizing an integrated product and process development method that would enable
companies to practice mass customization in a systematic and efficacious manner. For example,
the PPCEM provides a method to develop a common product platform which can be scaled to
provide the necessary variety for a product family; however, its focus is solely on modeling the
product itself. Meanwhile, research in mass customized production systems (see, e.g., Abair, 1995;
Anderson and Pine, 1997; Chinnaiah, et al., 1998; Dhumal, et al., 1996; Hormozi, 1994;
Richards, 1996) focuses primarily on developing cost effective
manufacturing systems to realize a wide variety of products. Integrating the two fields of
research has received little attention in the context of designing families of products.
In particular, attention should be given to the integration of product design, production system design, and
organization design. Potential research topics include:
• Principles of product and process development for mass customized production:
- systems to support the required information transfer and group decision making.
• Design for dis-aggregated production, i.e., decentralized supply chains and production
systems for the growing global economy.
An initial investigation into the concurrent modeling of product and process for the design of
product families is offered in (Hernandez, et al., 1998), wherein the integrated product and
process design of a family of absorption chillers for a variety of capacities is presented. In
related work, game-theoretic models of product and process design
have been implemented (see, Hernandez, 1998) to facilitate the formulation and solution of such
an approach, providing a foundation for the future integration of product and process design for a
family of products.
8.4.6 Product Family Mappings and “Ideality” Metrics
A recurring difficulty in designing assemble-to-order systems, such as those found in the
telecommunications industry, is (1) that several solution paths exist to satisfy a given set of
customer requirements using available components and (2) that when customers ask for new
functional capabilities, it is difficult to determine how this functionality can be created and
incorporated into the existing product family. Consequently, suitable mappings and metrics
must be established for the purpose of identifying the most appropriate solution strategy given
the specific design and customer requirements. The NCI and PDI measures presented in this
dissertation are a step in this direction; however, these measures cannot be used in “real-time” by designers to guide the product
platform development process. Therefore, the objective is to survey further the existing metrics and, among other tasks, to:
3. define useful “real-time” metrics to guide engineering design and improve the product
family architecture,
4. map new functionality and products into the product family and identify the areas of greatest improvement.
Possible metrics include those relating to the system flexibility,
complexity, upgradability, etc., in addition to improving current metrics for commonality,
modularity, etc. for “real-time” use by designers. The end result will be an efficient process for
designing assemble-to-order systems, thereby replacing the expensive and time consuming
processes currently employed. There are several benefits associated with developing families of
products, one of which is the ability to reuse and remanufacture components and modules from
one product to the next (cf., Alexander, 1993; Paula, 1997; Rothwell and Gardiner, 1990;
Sanderson and Uzumeri, 1995). Product reuse is the act of reclaiming products (or parts of
products) from a previous use and remanufacturing them for another use (where the second use
may or may not be the same as the original). Product reuse is both economically and environmentally advantageous since:
• previously used products are diverted from landfill or other means of disposal,
• all of the energy, emissions, and financial resources involved in creating the geometric
form of components are reduced.
A model has been developed to assess the effects of product design characteristics, product
development strategies, and external factors on the value of reuse and remanufacture over time.
The model can be used to assess the potential
value of product remanufacture for an OEM (original equipment manufacturer) which integrates
the reclaiming and reuse of products into its existing production system. The model allows the following to be specified:
• product design characteristics of each product model, (e.g., the number of components
and required disassembly sequence),
• product development decisions over time (e.g., the level of product variety, the rate of
product evolution, and the degree of component standardization across product variety
and evolution), and
• external business factors which affect reclaiming and remanufacturing (e.g., the cost of
labor and the retirement distribution of used products over time).
The model then is used to determine which products to reclaim, which components to recycle
and remanufacture, and the resulting costs and benefits of these actions over time. Thus, it
provides an analysis tool to assess the potential value of reuse and remanufacturing on the
development of product families based on common product platforms, providing additional cost
justification for platform and commonality decisions.
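At its simplest, the cost-benefit comparison embedded in such a model reduces to weighing remanufacture against new production for each reclaimed unit. This is a hypothetical sketch; the cost categories and the function name are assumptions:

```python
def reuse_value(n_reclaimed, reclaim_cost, remanuf_cost, new_cost):
    """Total saving from remanufacturing n_reclaimed units instead of
    producing new ones; remanufacture is only worthwhile (saving > 0)
    when reclaiming plus remanufacturing undercuts new production."""
    per_unit = new_cost - (reclaim_cost + remanuf_cost)
    return n_reclaimed * max(per_unit, 0.0)
```

The actual model layers onto this the retirement distribution of used products, disassembly sequences, and component standardization across product variety and evolution.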
In closing this dissertation, a quote by T.S. Eliot comes to mind:
“We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time.”
— T.S. Eliot
The PPCEM is not an end in itself; rather, it provides a stepping stone for future research work
in this nascent field of engineering design. For it is only at the end of this dissertation that the
problems and difficulties associated with product family and product platform design are truly
understood and appreciated. And now that we understand them, either for the first time or in
greater depth, new paths can be explored and new methods can be developed which continue
to advance the state-of-the-art in product family and product platform design. It is the hope of
the author that the PPCEM enjoys the same success as the RCEM, providing a foundation on
which future research can be established in the same manner that this work builds upon the RCEM.
REFERENCES
1982, The Concise Oxford Dictionary, Oxford University Press, Oxford, UK.
Abair, R., 1995, October 22-27, "Agile Manufacturing: This Is not Just Repackaging of
Material Requirements Planning and Just-In-Time," 38th American Production and
Inventory Control Society (APICS) International Conference and Exhibition,
Orlando, FL, APICS, pp. 196-198.
Alexander, B., 1993, June 14-16, "Kodak Fun Saver Camera Recycling," Society of Plastics
Engineers Recycling Conference - Survival Tactics thru the '90's, Chicago, IL, pp.
207-212.
Alexandrov, N., Dennis, J. E., Jr., Lewis, R. M. and Torczon, V., 1997, "A Trust Region
Framework for Managing the Use of Approximation Models in Optimization,"
NASA/CR-20145, ICASE Report No. 97-50, Institute for Computer Applications in
Science and Engineering (ICASE), NASA Langley Research Center, Hampton, VA.
Anderson, D. M. and Pine, B. J., II, 1997, Agile Product Development for Mass
Customization, Irwin, Chicago, IL.
Balling, R. J. and Clark, D. T., 1992, September 21-23, "A Flexible Approximation Model for
Use with Optimization," 4th AIAA/USAF/NASA/OAI Symposium on
Multidisciplinary Analysis and Optimization, Cleveland, OH, AIAA, Vol. 2, pp.
886-894. AIAA-92-4801-CP.
Barton, R. R., 1992, December 13-16, "Metamodels for Simulation Input-Output Relations,"
Proceedings of the 1992 Winter Simulation Conference (Swain, J. J., Goldsman,
D., et al., eds.), Arlington, VA, IEEE, pp. 289-299.
Barton, R. R., 1994, December 11-14, "Metamodeling: A State of the Art Review,"
Proceedings of the 1994 Winter Simulation Conference (Tew, J. D., Manivannan,
S., et al., eds.), Lake Buena Vista, FL, IEEE, pp. 237-244.
Booker, A. J., 1996, "Case Studies in Design and Analysis of Computer Experiments,"
Proceedings of the Section on Physical and Engineering Sciences, American
Statistical Association.
Booker, A. J., Conn, A. R., Dennis, J. E., Frank, P. D., Serafini, D., Torczon, V. and Trosset,
M., 1996, "Multi-Level Design Optimization: A Boeing/IBM/Rice Collaborative
Project," 1996 Final Report, ISSTECH-96-031, The Boeing Company, Seattle, WA.
Booker, A. J., Conn, A. R., Dennis, J. E., Frank, P. D., Trosset, M. and Torczon, V., 1995,
"Global Modeling for Optimization: Boeing/IBM/Rice Collaborative Project," 1995
Final Report, ISSTECH-95-032, The Boeing Company, Seattle, WA.
Box, G. E. P. and Behnken, D. W., 1960, "Some New Three Level Designs for the Study of
Quantitative Variables," Technometrics, Vol. 2, No. 4, pp. 455-475, "Errata," Vol. 3,
No. 4, p. 576.
Box, G. E. P. and Draper, N. R., 1987, Empirical Model Building and Response Surfaces,
John Wiley & Sons, New York.
Box, M. J. and Draper, N. R., 1971, "Factorial Designs, the |X'X| Criterion, and Some Related
Matters," Technometrics, Vol. 13, No. 4 (November), pp. 731-742.
Bras, B. A. and Mistree, F., 1991, "Designing Design Processes in Decision-Based Concurrent
Engineering," SAE Transactions, Journal of Materials & Manufacturing, SAE
International, Warrendale, PA, pp. 451-458.
Byrne, D. M. and Taguchi, S., 1987, "The Taguchi Approach to Parameter Design," Quality
Progress, Vol. December, pp. 19-26.
Chaloner, K. and Verdinelli, I., 1995, "Bayesian Experimental Design: A Review," Statistical
Science, Vol. 10, No. 3, pp. 273-304.
Chambers, J. M., Freeny, A. E. and Heiberger, R. M., 1992, "Chapter 5: Analysis of Variance;
Designed Experiments," Statistical Models in S (Chambers, J. M. and Hastie, T. J.,
eds.), Wadsworth & Brooks/Cole, Pacific Grove, CA, pp. 145-193.
Chang, T.-S. and Ward, A. C., 1995, September 17-20, "Design-in-Modularity with
Conceptual Robustness," Advances in Design Automation (Azarm, S., Dutta, D., et
al., eds.), Boston, MA, ASME, Vol. 82-1, pp. 493-500.
Chang, T.-S., Ward, A. C., Lee, J. and Jacox, E. H., 1994, November 6-11, "Distributed
Design with Conceptual Robustness: A Procedure Based on Taguchi's Parameter
Design," Concurrent Product Design Conference (Gadh, R., ed.), Chicago, IL,
ASME, Vol. 74, pp. 19-29.
Chen, W., Rosen, D., Allen, J. K. and Mistree, F., 1994, "Modularity and the Independence of
Functional Requirements in Designing Complex Systems," Concurrent Product Design
(Gadh, R., ed.), ASME, Vol. 74, pp. 31-38.
Chen, W., 1995, "A Robust Complex Exploration Method for Configuring Complex Systems,"
Ph.D. Dissertation, G. W. Woodruff School of Mechanical Engineering, Georgia
Institute of Technology, Atlanta, GA.
Chen, W., Allen, J. K., Mavris, D. and Mistree, F., 1996a, "A Concept Exploration Method
for Determining Robust Top-Level Specifications," Engineering Optimization, Vol.
26, No. 2, pp. 137-158.
Chen, W., Allen, J. K., Tsui, K.-L. and Mistree, F., 1996b, "A Procedure for Robust Design:
Minimizing Variations Caused by Noise and Control Factors," Journal of Mechanical
Design, Vol. 118, No. 4, pp. 478-485.
Chen, W., Simpson, T. W., Allen, J. K. and Mistree, F., 1996c, August 18-22, "Use of Design
Capability Indices to Satisfy a Ranged Set of Design Requirements," Advances in
Design Automation (Dutta, D., ed.), Irvine, CA, ASME, Paper No. 96-
DETC/DAC-1090.
Chen, W., Allen, J. K., Schrage, D. P. and Mistree, F., 1997, "Statistical Experimentation
Methods for Achieving Affordable Concurrent Systems Design," AIAA Journal, Vol.
35, No. 5, pp. 893-900.
Chen, W., Allen, J. K. and Mistree, F., 1997, "A Robust Concept Exploration Method for
Enhancing Productivity in Concurrent Systems Design," Concurrent Engineering:
Research and Applications, Vol. 5, No. 3, pp. 203-217.
Cheng, B. and Titterington, D. M., 1994, "Neural Networks: A Review from a Statistical
Perspective," Statistical Science, Vol. 9, No. 1, pp. 2-54.
Chinnaiah, P. S. S., Kamarthi, S. V. and Cullinane, T. P., 1998, "Characterization and Analysis
of Mass-Customized Production Systems," International Journal of Agile
Manufacturing, under review.
Clark, K. B. and Wheelwright, S. C., 1993, Managing New Product and Process
Development, Free Press, New York.
Cogdell, J. R., 1996, Foundations of Electrical Engineering, Prentice Hall, Upper Saddle
River, NJ.
Collier, D. A., 1981, "The Measurement and Operating Benefits of Component Part
Commonality," Decision Sciences, Vol. 12, No. 1, pp. 85-96.
Collier, D. A., 1982, "Aggregate Safety Stock Levels and Component Part Commonality,"
Management Science, Vol. 28, No. 22, pp. 1296-1303.
Cox, D. D. and John, S., 1995, March 13-16, "SDO: A Statistical Method for Global
Optimization," Proceedings of the ICASE/NASA Langley Workshop on
Multidisciplinary Optimization (Alexandrov, N. M. and Hussaini, M. Y., eds.),
Hampton, VA, SIAM, pp. 315-329.
Cressie, N. A. C., 1993, Statistics for Spatial Data, Revised Edition, John Wiley & Sons,
New York.
Currin, C., Mitchell, T., Morris, M. and Ylvisaker, D., 1991, "Bayesian Prediction of
Deterministic Functions, With Applications to the Design and Analysis of Computer
Experiments," Journal of the American Statistical Association, Vol. 86, No. 416,
pp. 953-963.
Davis, S. M., 1987, Future Perfect, Addison-Wesley Publishing Company, Reading, MA.
Dennis, J. E. and Torczon, V., 1995, March 13-16, "Managing Approximation Models in
Optimization," Proceedings of the ICASE/NASA Langley Workshop on
Multidisciplinary Design Optimization (Alexandrov, N. M. and Hussaini, M. Y.,
eds.), Hampton, VA, SIAM, pp. 330-347.
Dennis, J. E., Jr. and Torczon, V., 1996, September 4-6, "Approximation Model Management
for Optimization," 6th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary
Analysis and Optimization, Bellevue, WA, AIAA, Vol. 2, pp. 1044-1046. AIAA-
96-4099-CP.
Dhumal, A., Dhawan, R., Kona, A. and Soni, A. H., 1996, August 18-22, "Reconfigurable
System Analysis for Agile Manufacturing," 5th ASME Flexible Assembly Conference
(Soni, A., ed.), Irvine, CA, ASME, Paper No. 96-DETC/FAS-1367.
DiCamillo, G. T., 1988, "Winning Turnaround Strategies at Black & Decker," Journal of
Business Strategy, Vol. 9, No. 2, pp. 30-33.
Diwekar, U. M., 1995, "Hammersley Sampling Sequence (HSS) Manual," Engineering &
Public Policy Department, Carnegie Mellon University, Pittsburgh, PA.
Eggert, R. J. and Mayne, R. W., 1993, "Probabilistic Optimal Design Using Successive
Surrogate Probability Density Functions," Journal of Mechanical Design, Vol. 115,
No. 3, pp. 385-391.
Erens, F. and Breuls, P., 1995, "Structuring Product Families in the Development Process,"
Proceedings of ASI'95, Lisbon, Portugal.
Erens, F. and Verhulst, K., 1997, "Architectures for Product Families," Computers in
Industry, Vol. 33, pp. 165-178.
Erens, F. J. and Hegge, H. M. H., 1994, "Manufacturing and Sales Co-ordination for Product
Variety," International Journal of Production Economics, Vol. 37, No. 1, pp. 83-
99.
Erens, F., 1997, "Synthesis of Variety: Developing Product Families," Ph.D. Dissertation,
University of Technology, Eindhoven, The Netherlands.
Fang, K.-T. and Wang, Y., 1994, Number-theoretic Methods in Statistics, Chapman & Hall,
New York.
Finger, S. and Dixon, J. R., 1989a, "A Review of Research in Mechanical Engineering Design.
Part 1: Descriptive, Prescriptive, and Computer-Based Models of Design Processes,"
Research in Engineering Design, Vol. 1, pp. 51-67.
Finger, S. and Dixon, J. R., 1989b, "A Review of Research in Mechanical Engineering Design.
Part 2: Representations, Analysis, and Design for the Life Cycle," Research in
Engineering Design, Vol. 1, pp. 121-137.
Fujita, K. and Ishii, K., 1997, September 14-17, "Task Structuring Toward Computational
Approaches to Product Variety Design," Advances in Design Automation (Dutta, D.,
ed.), Sacramento, CA, ASME, Paper No. DETC97/DAC-3766.
G.S. Electric, 1997, "Why Universal Motors Turn On the Appliance Industry,"
http://www.gselectric.com/electric/univers4.htm.
Giunta, A., Watson, L. T. and Koehler, J., 1998, September 2-4, "A Comparison of
Approximation Modeling Techniques: Polynomial Versus Interpolating Models," 7th
AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis &
Optimization, St. Louis, MO, AIAA, AIAA-98-4758.
Goffe, W. L., Ferrier, G. D. and Rogers, J., 1994, "Global Optimization of Statistical Functions
with Simulated Annealing," Journal of Econometrics, Vol. 60, No. 1-2, pp. 65-100.
Source code is available at http://netlib2.cs.utk.edu/opt.
Hagemann, G., Schley, C.-A., Odintsov, E. and Sobatchkine, A., 1996, July, "Nozzle
Flowfield Analysis with Particular Regard to 3D-Plug-Cluster Configurations," AIAA-
96-2954.
Hajela, P. and Berke, L., 1992, "Neural Networks in Structural Analysis and Design: An
Overview," Computing Systems in Engineering, Vol. 3, No. 1-4, pp. 525-538.
Hardin, R. H. and Sloane, N. J. A., 1993, "A New Approach to the Construction of Optimal
Designs," Journal of Statistical Planning and Inference, Vol. 37, pp. 339-369.
Hernandez, G., 1998, "A Probabilistic-Based Design Approach with Game Theoretical
Representations of the Enterprise Design Process," M.S. Thesis, G. W. Woodruff
School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA.
Hernandez, G., Simpson, T. W., Allen, J. K., Bascaran, E., Avila, L. F. and Salinas, F., 1998,
September 13-16, "Robust Design of Product Families for Make-to-Order Systems,"
Advances in Design Automation Conference, Atlanta, GA, ASME, DETC98/DAC-
5595.
Hollins, B. and Pugh, S., 1990, Successful Product Design, Butterworths, Boston, MA.
Hubka, V. and Eder, W. E., 1988, Theory of Technical Systems: A Total Concept Theory
for Engineering Design, Springer, New York.
Hubka, V. and Eder, W. E., 1996, Design Science: Introduction to the Needs, Scope and
Organization of Engineering Design Knowledge, Springer, New York.
Iacobellis, S. F., Larson, V. R. and Burry, R. V., 1967, December, "Liquid-Propellant Rocket
Engines: Their Status and Future," Journal of Spacecraft and Rockets, Vol. 4, pp.
1569-1580.
Ignizio, J. P., 1985, Introduction to Linear Goal Programming, Sage University Papers,
Beverly Hills, CA.
Ignizio, J. P., 1990, An Introduction to Expert Systems: The Methodology and its
Implementation, McGraw-Hill, New York.
Ignizio, J. P., Wyskida, R. M. and Wilhelm, M. R., 1972, "A Rationale for Heuristic Program
Selection and Evaluation," Vol. 4, No. 1, pp. 16-19.
Iman, R. J. and Shortencarier, M. J., 1984, "A FORTRAN77 Program and User's Guide for
Generation of Latin Hypercube and Random Samples for Use with Computer Models,"
NUREG/CR-3624, SAND83-2365, Sandia National Laboratories, Albuquerque, NM.
Jacobson, G. and Hillkirk, J., 1986, Xerox: American Samurai, Macmillan Publishing
Company, New York.
Johnson, M. E., Moore, L. M. and Ylvisaker, D., 1990, "Minimax and Maximin Distance
Designs," Journal of Statistical Planning and Inference, Vol. 26, No. 2, pp. 131-
148.
Johnson, N. L., Kotz, S. and Pearn, W. L., 1992, "Flexible Process Capability Indices,"
Institute of Statistics Mimeo Series, University of North Carolina, Chapel Hill, NC.
Journel, A. G. and Huijbregts, C. J., 1978, Mining Geostatistics, Academic Press, New
York.
Kalagnanam, J. R. and Diwekar, U. M., 1997, "An Efficient Sampling Technique for Off-Line
Quality Control," Technometrics, Vol. 39, No. 3, pp. 308-319.
Kannan, B. K. and Kramer, S. N., 1994, "An Augmented Lagrange Multiplier Based Method
for Mixed Integer Discrete Continuous Optimization and Its Application to Mechanical
Design," Journal of Mechanical Design, Vol. 116, No. 2, pp. 405-411.
Kleijnen, J. P. C., 1987, Statistical Tools for Simulation Practitioners, Marcel Dekker,
New York.
Kobe, G., 1997, "Platforms - GM's Seven Platform Global Strategy," Automotive Industries,
Vol. 177, pp. 50.
Koch, P. N., 1997, "Hierarchical Modeling and Robust Synthesis for the Preliminary Design of
Large Scale, Complex Systems," Ph.D. Dissertation, G. W. Woodruff School of
Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA.
Koch, P. N., Allen, J. K., Mistree, F. and Mavris, D., 1997, September 14-17, "The Problem
of Size in Robust Design," Advances in Design Automation, Sacramento, CA,
ASME, Paper No. DETC97/DAC-3983.
Koch, P. N., Mavris, D., Allen, J. K. and Mistree, F., 1998, September 13-16, "Modeling
Noise in Approximation-Based Robust Design: A Comparison and Critical Discussion,"
Advances in Design Automation, Atlanta, GA, ASME, DETC98/DAC-5588.
Korte, J. J., Salas, A. O., Dunn, H. J., Alexandrov, N. M., Follett, W. W., Orient, G. E. and
Hadid, A. H., 1997, "Multidisciplinary Approach to Aerospike Nozzle Design," NASA-
TM-110326, NASA Langley Research Center, Hampton, VA.
Kota, S. and Sethuraman, K., 1998, September 13-16, "Managing Variety in Product Families
Through Design for Commonality," Design Theory and Methodology - DTM'98,
Atlanta, GA, ASME, DETC98/DTM-5651.
Lee, H. L. and Billington, C., 1994, "Designing Products and Processes for Postponement,"
Management of Design: Engineering and Management Perspective (Dasu, S. and
Eastman, C., eds.), Kluwer Academic Publishers, Boston, MA, pp. 105-122.
Lee, H. L. and Tang, C. S., 1997, "Modeling the Costs and Benefits of Delayed Product
Differentiation," Management Science, Vol. 43, No. 1, pp. 40-53.
Lee, H. L., Billington, C. and Carter, B., 1993, "Hewlett-Packard Gains Control of Inventory
and Service through Design for Localization," Interfaces, Vol. 32, No. 4, pp. 1-11.
Lehnerd, A. P., 1987, "Revitalizing the Manufacture and Design of Mature Global Products,"
Technology and Global Industry: Companies and Nations in the World Economy
(Guile, B. R. and Brooks, H., eds.), National Academy Press, Washington, D.C., pp.
49-64.
Lewis, K., Lucas, T. and Mistree, F., 1994, September 7-9, "A Decision Based Approach to
Developing Ranged Top-Level Aircraft Specifications: A Conceptual Exposition," 5th
AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and
Optimization, Panama City, FL, Vol. 1, pp. 465-481.
Lewis, R. M., 1996, "A Trust Region Framework for Managing Approximation Models in
Engineering Optimization," 6th AIAA/USAF/NASA/ISSMO Symposium on
Multidisciplinary Analysis and Optimization, Bellevue, WA, AIAA, Vol. 2, pp.
1053-1055. AIAA-96-4101-CP.
Li, H.-L. and Chou, C.-T., 1994, "A Global Approach for Nonlinear Mixed Discrete
Programming in Design Optimization," Engineering Optimization, Vol. 22, No. 2, pp.
109-122.
Lin, S., 1975, "Heuristic Programming as an Aid to Network Design," Networks, Vol. 5, No.
1, pp. 33-43.
Lucas, J. M., 1976, "Which Response Surface Design is Best," Technometrics, Vol. 18, No.
4, pp. 411-417.
Lucas, J. M., 1994, "Using Response Surface Methodology to Achieve a Robust Process,"
Journal of Quality Technology, Vol. 26, No. 4, pp. 248-260.
Martin, M. and Ishii, K., 1996, August 18-22, "Design for Variety: A Methodology for
Understanding the Costs of Product Proliferation," Design Theory and Methodology -
DTM'96 (Wood, K., ed.), Irvine, CA, ASME, Paper No. 96-DETC/DTM-1610.
Martin, M. V. and Ishii, K., 1997, September 14-17, "Design for Variety: Development of
Complexity Indices and Design Charts," Advances in Design Automation (Dutta, D.,
ed.), Sacramento, CA, ASME, Paper No. DETC97/DFM-4359.
Mather, H., 1995, October 22-27, "Product Variety -- Friend or Foe?," Proceedings of the
1995 38th American Production & Inventory Control Society International
Conference and Exhibition, Orlando, FL, APICS, pp. 378-381.
Matheron, G., 1963, "Principles of Geostatistics," Economic Geology, Vol. 58, pp. 1246-
1266.
Mavris, D. N., Bandte, O. and Schrage, D. P., 1995, May, "Economic Uncertainty Assessment
of an HSCT Using a Combined Design of Experiments/Monte Carlo Simulation
Approach," 17th Annual Conference of International Society of Parametric
Analysts, San Diego, CA.
Mavris, D., Bandte, O. and Schrage, D., 1996, September 4-6, "Application of Probabilistic
Methods for the Determination of an Economically Robust HSCT Configuration," 6th
AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and
Optimization, Bellevue, WA, AIAA, Vol. 2, pp. 968-978. AIAA-96-4090-CP.
McCullers, L. A., 1993, "Flight Optimization System, User's Guide, Version 5.7," NASA
Langley Research Center, Hampton, VA.
McDermott, C. M. and Stock, G. N., 1994, "The Use of Common Parts and Designs in High-
Tech Industries: A Strategic Approach," Production and Inventory Management
Journal, Vol. 35, No. 3, pp. 65-68.
McKay, A., Erens, F. and Bloor, M. S., 1996, "Relating Product Definition and Product
Variety," Research in Engineering Design, Vol. 8, No. 2, pp. 63-80.
McKay, M. D., Beckman, R. J. and Conover, W. J., 1979, "A Comparison of Three Methods
for Selecting Values of Input Variables in the Analysis of Output from a Computer
Code," Technometrics, Vol. 21, No. 2, pp. 239-245.
Meyer, M. H. and Lehnerd, A. P., 1997, The Power of Product Platforms: Building Value
and Cost Leadership, Free Press, New York.
Meyer, M. H. and Utterback, J. M., 1993, "The Product Family and the Dynamics of Core
Capability," Sloan Management Review, Vol. 34, pp. 29-47.
Meyer, M. H., 1997, "Revitalize Your Product Lines Through Continuous Platform Renewal,"
Research Technology Management, Vol. 40, No. 2, pp. 17-28.
Meyer, M. H., Tertzakian, P. and Utterback, J. M., 1997, "Metrics for Managing Research
and Development in the Context of the Product Family," Management Science, Vol.
43, No. 1, pp. 88-111.
Mistree, F., Hughes, O. F. and Bras, B. A., 1993, "The Compromise Decision Support
Problem and the Adaptive Linear Programming Algorithm," Structural Optimization:
Status and Promise (Kamat, M. P., ed.), AIAA, Washington, D.C., pp. 247-289.
Mistree, F., Smith, W. F., Bras, B., Allen, J. K. and Muster, D., 1990, "Decision-Based
Design: A Contemporary Paradigm for Ship Design," Transactions, Society of Naval
Architects and Marine Engineers (Paresai, H. R. and Sullivan, W., eds.), Jersey
City, New Jersey, pp. 565-597.
Mitchell, T. J. and Morris, M. D., 1992, "Bayesian Design and Analysis of Computer
Experiments: Two Examples," Statistica Sinica, Vol. 2, pp. 359-379.
Mitchell, T. J. and Morris, M. D., 1992b, December 13-16, "The Spatial Correlation Function
Approach to Response Surface Estimation," Proceedings of the 1992 Winter
Simulation Conference (Swain, J. J., Goldsman, D., et al., eds.), Arlington, VA,
IEEE, pp. 565-571.
Montgomery, D. C., 1991, Design and Analysis of Experiments, Third Edition, John Wiley &
Sons, New York.
Morris, M. D. and Mitchell, T. J., 1992, "Exploratory Designs for Computer Experiments,"
ORNL/TM-12045, Oak Ridge National Laboratory, Oak Ridge, TN.
Morris, M. D. and Mitchell, T. J., 1995, "Exploratory Designs for Computational Experiments,"
Journal of Statistical Planning and Inference, Vol. 43, No. 3, pp. 381-402.
Mueller, T. J. and Sule, W. P., 1972, November, "Basic Flow Characteristics of a Linear
Aerospike Nozzle Segment," ASME, Paper No. 72-WA/Aero-2.
Muster, D. and Mistree, F., 1988, "The Decision Support Problem Technique in Engineering
Design," International Journal of Applied Engineering Education, Vol. 4, No. 1,
pp. 23-33.
Myers, R. H. and Montgomery, D. C., 1995, Response Surface Methodology: Process and
Product Optimization Using Designed Experiments, John Wiley & Sons, New
York.
Myers, R. H., Khuri, A. I. and Carter, W. H., 1989, "Response Surface Methodology : 1966-
1988," Technometrics, Vol. 31, No. 2 (May), pp. 137-157.
Nair, V. N., 1992, "Taguchi's Parameter Design: A Panel Discussion," Technometrics, Vol.
34, No. 2, pp. 127-161.
Narducci, R. P., 1995, "Selected Optimization Procedures for CFD-Based Shape Design
Involving Shock Waves or Computational Noise," Ph.D. Dissertation, Department of
Aerospace and Ocean Engineering, Virginia Polytechnic Institute and State University,
Blacksburg, VA.
NASA and FAA, 1994, "General Aviation Design Competition Guidelines," Virginia Space
Grant Consortium, NASA Langley Research Center, Hampton, VA.
NASA, 1978, January, "GASP - General Aviation Synthesis Program," NASA CR-152303,
Contract NAS 2-9352, Ames Research Center, Moffett Field, CA.
Naughton, K., Thornton, E., Kerwin, K. and Dawley, H., 1997, "Can Honda Build a World
Car?", Business Week, September 8, pp. 100(7).
Nevins, J. L. and Whitney, D. E., ed., 1989, Concurrent Design of Products and Processes,
McGraw-Hill, New York.
Newcomb, P. J., Bras, B. and Rosen, D. W., 1996, August 18-22, "Implications of Modularity
on Product Design for the Life Cycle," Design Theory and Methodology - DTM'96
(Wood, K., ed.), Irvine, CA, ASME, Paper No. 96-DETC/DTM-1516.
Ng, K. K. and Tsui, K. L., 1992, "Expressing Variability and Yield with a Focus on the
Customer," Quality Engineering, Vol. 5, No. 2, pp. 255-267.
Nicolai, L. M., 1984, Fundamentals of Aircraft Design, METS Inc., San Jose.
Osio, I. G. and Amon, C. H., 1996, "An Engineering Design Methodology with Multistage
Bayesian Surrogates and Optimal Sampling," Research in Engineering Design, Vol. 8,
No. 4, pp. 189-206.
Otto, K. N. and Antonsson, E. K., 1993, "Extensions to the Taguchi Method of Product
Design," Journal of Mechanical Design, Vol. 115, No. 1, pp. 5-13.
Owen, A. B., 1992, "Orthogonal Arrays for Computer Experiments, Integration and
Visualization," Statistica Sinica, Vol. 2, pp. 439-452.
Pahl, G. and Beitz, W., 1988, Engineering Design, The Design Council/Springer-Verlag,
London/Berlin.
Pahl, G. and Beitz, W., 1996, Engineering Design: A Systematic Approach, 2nd Revised
Edition, Springer-Verlag, New York.
Parkinson, A., et al., 1998, "OptdesX™: A Software System for Optimal Engineering
Design," User's Manual, Release 2.0.4, Design Synthesis, Inc., Provo, UT.
Paula, G., 1997, "Reinventing a Core Product Line," Mechanical Engineering, Vol. 119, No.
10, pp. 102-103.
Peplinski, J. D., 1997, "Enterprise Design: Integrating Product, Process, and Organization,"
Ph.D. Dissertation, G. W. Woodruff School of Mechanical Engineering, Georgia
Institute of Technology, Atlanta, GA.
Phadke, M. S., 1989, Quality Engineering using Robust Design, Prentice Hall, Englewood
Cliffs, New Jersey.
Pine, B. J., II, 1993, Mass Customization: The New Frontier in Business Competition,
Harvard Business School Press, Boston, MA.
Pugh, S., 1991, Total Design - Integrated Methods for Successful Product Engineering,
Addison-Wesley Publishing Company, New York.
Ragsdell, K. M. and Phillips, D. T., 1976, "Optimal Design of a Class of Welded Structures
Using Geometric Programming," Journal of Engineering for Industry, Vol. 98, Series
B, No. 3, pp. 1021-1025.
Raymer, D. P., 1992, Aircraft Design: A Conceptual Approach, 2nd Edition, AIAA,
Washington, D.C.
Reddy, S. Y., 1996, August 18-22, "HIDER: A Methodology for Early-Stage Exploration of
Design Space," Advances in Design Automation (Dutta, D., ed.), Irvine, CA,
ASME, Paper No. 96-DETC/DAC-1089.
Renaud, J. E. and Gabrielle, G. A., 1991, September 22-25, "Sequential Global Approximation
in Non-Hierarchic System Decomposition and Optimization," Advances in Design
Automation - Design Automation and Design Optimization (Gabriele, G., ed.),
Miami, FL, ASME, Vol. 32-1, pp. 191-200.
Richards, C., 1996, "Agile Manufacturing: Beyond Lean?," Production and Inventory
Management Journal, Vol. 37, No. 2, pp. 60-64.
Rodriguez, J. F., Renaud, J. E. and Watson, L. T., 1997, September 14-17, "Trust Region
Augmented Lagrangian Methods for Sequential Response Surface Approximation and
Optimization," Advances in Design Automation (Dutta, D., ed.), Sacramento, CA,
ASME, Paper No. DETC97/DAC-3773.
Rommel, R., Hagemann, G., Schley, C., Krülle, G. and Manski, D., 1995, July, "Plug Nozzle
Flowfield Calculations for SSTO Applications," AIAA-95-2784.
Rosen, D. W., 1996, August 18-22, "Design of Modular Product Architectures in Discrete
Design Spaces Subject to Life Cycle Issues," Advances in Design Automation
(Dutta, D., ed.), Irvine, CA, ASME, Paper No. 96-DETC/DAC-1485.
Rothwell, R. and Gardiner, P., 1988, "Re-Innovation and Robust Designs: Producer and User
Benefits," Journal of Marketing Management, Vol. 3, No. 3, pp. 372-387.
Rothwell, R. and Gardiner, P., 1990, "Robustness and Product Design Families," Design
Management: A Handbook of Issues and Methods (Oakley, M., ed.), Basil
Blackwell Inc., Cambridge, MA, pp. 279-292.
Rumelhart, D. E., Widrow, B. and Lehr, M. A., 1994, "The Basic Ideas in Neural Networks,"
Communications of the ACM, Vol. 37, No. 3 (March), pp. 87-92.
Sacks, J. and Schiller, S., 1988, "Spatial Designs," Statistical Decision Theory and Related
Topics (Gupta, S. S. and Berger, J. O., eds.), Springer-Verlag, New York, pp. 385-
399.
Sacks, J., Welch, W. J., Mitchell, T. J. and Wynn, H. P., 1989, "Design and Analysis of
Computer Experiments," Statistical Science, Vol. 4, No. 4, pp. 409-435.
Salagame, R. R. and Barton, R. R., 1997, "Factorial Hypercube Designs for Spatial Correlation
Regression," Journal of Applied Statistics, Vol. 24, No. 4, pp. 453-473.
Sanderson, S. and Uzumeri, M., 1995, "Managing Product Families: The Case of the Sony
Walkman," Research Policy, Vol. 24, pp. 761-782.
Sanderson, S. W. and Uzumeri, M., 1997, The Innovation Imperative: Strategies for
Managing Product Models and Families, Irwin, Chicago, IL.
Sanderson, S. W., 1991, "Cost Models for Evaluating Virtual Design Strategies in Multicycle
Product Families," Journal of Engineering and Technology Management, Vol. 8,
pp. 339-358.
Sandgren, E., 1989, September 17-21, "A Multi-Objective Design Tree Approach for
Optimization Under Uncertainty," Advances in Design Automation - Design
Optimization (Ravani, B., ed.), Montreal, Quebec, Canada, ASME, Vol. 19-2.
Sandgren, E., 1990, "Nonlinear Integer and Discrete Programming in Mechanical Design
Optimization," Journal of Mechanical Design, Vol. 112, No. 2, pp. 223-229.
SAS, 1995, JMP® User's Guide, Version 3.1, SAS Institute, Inc., Cary, NC.
Sasena, M. J., 1998, "Optimization of Computer Simulations via Smoothing Splines and Kriging
Metamodels," M.S. Thesis, Department of Mechanical Engineering, University of
Michigan, Ann Arbor, MI.
Schmit, L. A., 1981, "Structural Synthesis—Its Genesis and Development," AIAA Journal,
Vol. 19, No. 10, pp. 1249-1263.
Schonlau, M., Welch, W. J. and Jones, D. R., 1997, "Global Versus Local Search in
Constrained Optimization of Computer Models," Technical Report RR-97-11, to
appear in New Developments and Applications in Experimental Design (Fluornoy,
N., et al., Eds.), Institute for Mathematical Statistics, Institute for Improvement in
Quality and Productivity, University of Waterloo, Waterloo, Ontario, Canada.
Seshu, U. S. D. K., 1998, "Including Life Cycle Considerations in Computer Aided Design,"
M.S. Thesis, G. W. Woodruff School of Mechanical Engineering, Georgia Institute of
Technology, Atlanta, GA.
Shewry, M. C. and Wynn, H. P., 1987, "Maximum Entropy Sampling," Journal of Applied
Statistics, Vol. 14, No. 2, pp. 165-170.
Shewry, M. C. and Wynn, H. P., 1988, "Maximum Entropy Sampling with Application to
Simulation Codes," Proceedings of the 12th World Congress on Scientific
Computation, IMAC88, Vol. 2, pp. 517-519.
Shigley, J. E. and Mischke, C. R., 1989, Mechanical Engineering Design, Fifth Edition,
McGraw-Hill Publishing Company, New York.
Shirley, G. V., 1990, "Models for Managing the Redesign and Manufacture of Product Sets,"
Journal of Manufacturing and Operations Management, Vol. 3, No. 2, pp. 85-
104.
Shoemaker, A. C., Tsui, K. L. and Wu, J., 1991, "Economical Experimentation Methods for
Robust Design," Technometrics, Vol. 33, No. 4, pp. 415-427.
Shultz, G. P., 1992, Transformers and Motors, Prentice Hall, Carmel, IN.
Siddall, J. N., 1982, Optimal Engineering Design: Principles and Applications, Marcel
Dekker, Inc., New York.
Siddique, Z. and Rosen, D. W., 1998, September 13-16, "On the Applicability of Product
Variety Design Concepts to Automotive Platform Commonality," Design Theory and
Methodology - DTM'98, Atlanta, GA, ASME, DETC98/DTM-5661.
Siddique, Z., 1998, "Common Platform Development: Designing for Product Variety," Ph.D.
Proposal, G. W. Woodruff School of Mechanical Engineering, Georgia Institute of
Technology, Atlanta, GA.
Simon, H. A., 1996, The Sciences of the Artificial (3rd ed.), MIT Press, Cambridge, MA.
Simpson, T. W., 1995, December, "Development of a Design Process for Realizing Open
Engineering Systems," M.S. Thesis, G. W. Woodruff School of Mechanical
Engineering, Georgia Institute of Technology, Atlanta, GA.
Simpson, T. W., Chen, W., Allen, J. K. and Mistree, F., 1996, September 4-6, "Conceptual
Design of a Family of Products Through the Use of the Robust Concept Exploration
Method," 6th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis
and Optimization, Bellevue, WA, AIAA, Vol. 2, pp. 1535-1545. AIAA-96-4161-
CP.
Simpson, T. W., Chen, W., Allen, J. K. and Mistree, F., 1997a, October 13-16, "Designing
Ranged Sets of Top-Level Design Specifications for a Family of Aircraft: An
Application of Design Capability Indices," SAE World Aviation Congress and
Exposition, Anaheim, CA, AIAA-97-5513.
Simpson, T. W., Peplinski, J., Koch, P. N. and Allen, J. K., 1997b, September 14-17, "On the
Use of Statistics in Design and the Implications for Deterministic Computer
Experiments," Design Theory and Methodology - DTM'97, Sacramento, CA, ASME,
Paper No. DETC97/DTM-3881.
Simpson, T. W., Allen, J. K. and Mistree, F., 1998, September 13-16, "Spatial Correlation
Metamodels for Global Approximation in Structural Design Optimization," Advances in
Design Automation, Atlanta, GA, ASME, DETC98/DAC-5613.
Smith, W. F. and Mistree, F., 1994, May 24-27, "The Development of Top-Level Ship
Specifications: A Decision-Based Approach," 5th International Conference on
Marine Design, Delft, The Netherlands, pp. 59-76.
Smith, W., 1992, "Modeling and Exploration of Ship Systems in the Early Stages of Decision-
Based Design," Ph.D. Dissertation, Operations Research Department, University of
Houston, Houston, TX.
Sobieszczanski-Sobieski, J., Barthelemy, J.-F. and Riley, K. M., 1982, "Sensitivity of Optimum
Solutions of Problem Parameters," AIAA Journal, Vol. 20, No. 9, pp. 1291-1299.
Spira, J. S., 1993, "Mass Customizing Through Training at Lutron Electronics," Planning
Review, Vol. 22, No. 4, pp. 23-24.
St. John, R. C. and Draper, N. R., 1975, "D-Optimality for Regression Designs: A Review,"
Technometrics, Vol. 17, No. 1, pp. 15-23.
Stadzisz, P. C. and Henrioud, J. M., 1995, May 21-27, "Integrated Design of Product Families
and Assembly Systems," IEEE International Conference on Robotics and
Automation, Nagoya, Japan, IEEE, Vol. 2, pp. 1290-1295.
Stevens, T., 1995, "More Panes, More Gains", Industry Week, December 18, pp. 59(3).
Su, J. and Renaud, J. E., 1996, September 4-6, "Automatic Differentiation in Robust
Optimization," 6th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary
Analysis and Optimization, Bellevue, WA, AIAA, Vol. 1, pp. 201-215. AIAA-96-
4005-CP.
Suh, N. P., 1990, Principles of Design, Oxford University Press, Oxford, U.K.
Sweetman, B., 1996, "VentureStar: 21st Century Space Shuttle", Popular Science, October,
pp. 42-47.
Taguchi, G. and Phadke, M. S., 1986, "Quality Engineering Through Design Optimization,"
National Electronics Conference at the National Communications Forum,
Rosemont, IL, Professional Education Int Inc, Vol. 40, pp. 32-39.
Tang, B., 1993, "Orthogonal Array-Based Latin Hypercubes," Journal of the American
Statistical Association, Vol. 88, No. 424, pp. 1392-1397.
Toropov, V., van Keulen, F., Markine, V. and de Boer, H., 1996, September 4-6,
"Refinements in the Multi-Point Approximation Method to Reduce the Effects of Noisy
Structural Responses," 6th AIAA/USAF/NASA/ISSMO Symposium on
Multidisciplinary Analysis and Optimization, Bellevue, WA, AIAA, Vol. 2, pp. 941-
951. AIAA-96-4087-CP.
Tribus, M. and Szonyi, G., 1989, "An Alternative View of the Taguchi Approach," Quality
Progress, Vol. 22, No. 5, pp. 46-52.
Tseng, M. M., Jiao, J. and Merchant, M. E., 1996, "Design for Mass Customization," CIRP
Annals - Manufacturing Technology, Vol. 45, No. 1, pp. 153-156.
Tsui, K.-L., 1992, "An Overview of Taguchi Method and Newly Developed Statistical
Methods for Robust Design," IIE Transactions, Vol. 24, No. 5, pp. 44-57.
Ulrich, K. T. and Eppinger, S. D., 1995, Product Design and Development, McGraw-Hill,
Inc., New York.
Ulrich, K. T. and Tung, K., 1991, "Fundamentals of Product Modularity," ASME Winter
Annual Meeting, Atlanta, GA, ASME, Vol. 39, pp. 73-80.
Ulrich, K., 1995, "The Role of Product Architecture in the Manufacturing Firm," Research
Policy, Vol. 24, No. 3, pp. 419-440.
Nasar, S. A. and Unnewehr, L. E., 1983, Electromechanics and Electric Machines, John Wiley &
Sons, New York.
Uzumeri, M. and Sanderson, S., 1995, "A Framework for Model and Product Family
Competition," Research Policy, Vol. 24, pp. 583-607.
Venter, G. and Haftka, R. T., 1997, April 7-10, "Minimum-Bias Based Experimental Design
for Constructing Response Surfaces in Structural Optimization," 38th
AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials
Conference and AIAA/ASME/AHS Adaptive Structures Forum, Kissimmee, FL,
AIAA, Vol. 2, pp. 1225-1238. AIAA-97-1053.
Welch, W. J., Buck, R. J., Sacks, J., Wynn, H. P., Mitchell, T. J. and Morris, M. D., 1992,
"Screening, Predicting, and Computer Experiments," Technometrics, Vol. 34, No. 1,
pp. 15-25.
Welch, W. J., Yu, T.-K., Kang, S. M. and Sacks, J., 1990, "Computer Experiments for
Quality Control by Parameter Design," Journal of Quality Technology, Vol. 22, No.
1, pp. 15-22.
Wheelwright, S. C. and Clark, K. B., 1992, "Creating Project Plans to Focus Product
Development," Harvard Business Review, Vol. 70, pp. 70-82.
Wheelwright, S. C. and Sasser, W. E., Jr., 1989, "The New Product Development Map,"
Harvard Business Review, Vol. 67 (May-June), pp. 112-125.
Whitney, D. E., 1993, "Nippondenso Co. Ltd: A Case Study of Strategic Product Design,"
Research in Engineering Design, Vol. 5, pp. 1-20.
Widrow, B., Rumelhart, D. E. and Lehr, M. A., 1994, "Neural Networks: Applications in
Industry, Business and Science," Communications of the ACM, Vol. 37, No. 3
(March), pp. 93-105.
Womack, J. P., Jones, D. T. and Roos, D., 1990, The Machine that Changed the World,
Rawson Associates, New York.
Wujek, B. A., Renaud, J. E., Batill, S. M. and Brockman, J. B., 1995, September 17-21,
"Concurrent Subspace Optimization Using Design Variable Sharing in a Distributed
Computing Environment," Advances in Design Automation (Azarm, S., Dutta, D., et
al., eds.), Boston, MA, ASME, Vol. 82, pp. 181-188.
Ye, Q., 1997, "Orthogonal Latin Hypercubes and Their Application in Computer Experiments,"
Technical Report #305, Department of Statistics, University of Michigan, Ann Arbor,
MI.
Yu, J.-C. and Ishii, K., 1998, "Design Optimization for Robustness Using Quadrature Factorial
Models," Engineering Optimization, Vol. 30, No. 3-4, pp. 203-225.
VITA
Timothy W. Simpson was born in Arlington, Massachusetts on February 25, 1972. He grew
up in Corning, New York and Wilmington, North Carolina before moving to Danville,
Kentucky where he attended Boyle County High School. After high school, he attended
Cornell University where he graduated in 1994 with distinction, earning his Bachelor of Science
in Mechanical Engineering. He earned his Master of Science in Mechanical Engineering in 1995
from the Georgia Institute of Technology. In 1997, he had a summer
research appointment at the Institute for Computer Applications in Science and Engineering at
NASA Langley in Hampton, Virginia. His graduate work has been funded by a Graduate
Research Fellowship from the National Science Foundation and a Presidential Fellowship. He
has accepted a joint faculty appointment in the Department of Mechanical & Nuclear
Engineering and the Department of Industrial & Manufacturing Engineering at The Pennsylvania
State University in State College, Pennsylvania.
APPENDIX A
This appendix is intended to supplement the brief description of kriging and its equations
which is given in Section 2.4.2. In Section A.1, the process for building, validating, and
implementing a kriging model is discussed in detail. In Section A.2 the kriging source code for
mlefinder.f and krigit.f is given in Sections A.2.1 and A.2.2, respectively, along with a
README file which details their execution, see Section A.2.3. A sample input parameter file,
input data file, and corresponding output files are included in Section A.2.4.
A.1 BUILDING, VALIDATING, AND IMPLEMENTING A KRIGING MODEL
Once the appropriate sample data has been obtained, there are three basic steps
required for kriging: (1) building the kriging model, (2) validating the model, and (3) using the
model. The general steps themselves are the same regardless of the metamodeling technique
used (response surfaces, neural nets, etc.); however, special attention is given to the kriging
approach because of its nascency in engineering design applications. Each step is discussed in
turn as it applies to the kriging approach being advocated in this dissertation; parallels to
response surface (RS) modeling are drawn where appropriate.

A.1.1 Building a Kriging Model
The general approach for building a kriging model is illustrated in Figure A.1. In order
to fit the "best" kriging model, an unconstrained non-linear optimization algorithm is needed to
obtain the maximum likelihood estimates (MLEs) for the θk values used to fit the model.
[Figure A.1: Fitting a kriging model. For each response yi, an unconstrained non-linear
optimization algorithm repeatedly proposes values of θk, k = 1, ..., ndv; mlefinder.f uses the
ns sample points (xi, yi), i = 1, ..., ns, to compute R, det R, R⁻¹, β̂, and σ̂², returning the
MLE objective function value, and the loop continues until the optimum θ* is found.]
Once the ns sample points have been obtained, an unconstrained, non-linear optimizer is
invoked to find the MLEs for the θk as illustrated in Figure A.1. The optimizer guesses a value
for the θk which provides the input for the mlefinder.f subroutine used to find the value of the
MLE objective function. Once a set of θk values has been guessed, the correlation matrix R
(using any possible correlation function) is computed along with its determinant and inverse.
The inverse of R is then used to compute β̂, and both β̂ and R⁻¹ are then used to compute σ̂².
The MLE objective function is computed using the determinant of R and σ̂², and this value is
then returned to the optimizer which then selects a new value for the θk so that the MLE
objective function is maximized. This process is repeated until convergence is achieved, yielding
the maximum likelihood estimates, or "best" guess values, for the θk based on the given sample data.
This process is then repeated for each response, yi. Currently, the simulated annealing algorithm
from (Goffe, et al., 1994) is employed to perform the optimization; their algorithm is available
on-line at http://netlib2.cs.utk.edu/opt.
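As a high-level-language companion to the Fortran listings in Section A.2, the objective that mlefinder.f evaluates can be sketched as follows. This is an illustrative sketch, not the dissertation's code: it assumes a Gaussian correlation function and a constant underlying trend, and the function name and arguments are hypothetical.

```python
import numpy as np

def mle_objective(theta, X, y):
    """Concentrated log-likelihood for a kriging model with a constant trend
    and Gaussian correlation R_ij = exp(-sum_k theta_k * (x_ik - x_jk)^2).
    An optimizer (e.g., simulated annealing) maximizes this over theta."""
    ns = X.shape[0]
    # Correlation matrix R from pairwise squared distances
    diff = X[:, None, :] - X[None, :, :]               # shape (ns, ns, ndv)
    R = np.exp(-np.einsum('ijk,k->ij', diff**2, theta))
    R += 1e-10 * np.eye(ns)                            # small nugget for stability
    Rinv = np.linalg.inv(R)
    ones = np.ones(ns)
    # Generalized least squares estimates of beta and sigma^2
    beta_hat = (ones @ Rinv @ y) / (ones @ Rinv @ ones)
    resid = y - beta_hat * ones
    sigma2_hat = (resid @ Rinv @ resid) / ns
    # Concentrated log-likelihood (additive constants dropped)
    _, logdetR = np.linalg.slogdet(R)
    return -0.5 * (ns * np.log(sigma2_hat) + logdetR)
```

Wrapping this function in any unconstrained non-linear optimizer, and repeating the search for each response, reproduces the loop shown in Figure A.1.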
A.1.2 Validating a Kriging Model

Once the MLEs for each θk have been found, the next step is to validate the kriging
model. Since a kriging model interpolates the data, residual plots and R² values, the usual
model assessments for response surfaces (Myers and Montgomery, 1995), are meaningless
because there are no residuals. Therefore, validating the model using additional data points is
essential if they can be afforded. If such validation points are available, then the
maximum absolute error, average absolute error, and root mean square error (root MSE) for the
additional validation points can be calculated to assess model accuracy. These measures are
summarized in Table A.1. In the table, nerror is the number of random test points used, yi is the
actual value from the computer code/simulation, and ŷi is the predicted value from the
approximation model.
Table A.1 Error measures for assessing model accuracy

$$\text{max. abs. error} = \max_{i} \; | y_i - \hat{y}_i | \qquad \text{[A.1]}$$

$$\text{avg. abs. error} = \frac{1}{n_{error}} \sum_{i=1}^{n_{error}} | y_i - \hat{y}_i | \qquad \text{[A.2]}$$

$$\text{root MSE} = \sqrt{\frac{\sum_{i=1}^{n_{error}} ( y_i - \hat{y}_i )^2}{n_{error}}} \qquad \text{[A.3]}$$
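Assuming the validation data are available as arrays, the three error measures reduce to a few lines of code; the function below is a hypothetical illustration, not part of the dissertation's source code.

```python
import numpy as np

def validation_errors(y_actual, y_pred):
    """Maximum absolute error, average absolute error, and root MSE over
    the n_error validation points (Eqs. A.1-A.3)."""
    e = np.abs(np.asarray(y_actual) - np.asarray(y_pred))
    return {'max_abs_error': e.max(),
            'avg_abs_error': e.mean(),
            'root_mse': np.sqrt(np.mean(e**2))}
```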
However, sometimes taking additional validation points is not possible due to the added
computational expense; in such cases, a model assessment which requires no additional points is needed.
leave-one-out cross validation (Mitchell and Morris, 1992). In this approach, each sample
point used to fit the model is removed one at a time, the model is rebuilt without that sample
point, and the difference between the model without the sample point and actual value at the
sample point is computed for all of the sample points. The cross validation root mean square
?
ns
( yi ? yˆ i )2
i? 1
cvrmse = [A.4]
ns
It is worth noting that the MLEs for the θk are not re-computed for each model; the
initial θk MLEs based on the full sample set are used. Mitchell and Morris (1992) describe an
elegant and efficient approach for performing this cross validation since it can be time consuming
depending on the size of the sample set. In this dissertation, however, a "brute force"
approach (remove each point one at a time, re-compute all of the matrices, etc.) is used to
perform the cross validation.
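That brute-force loop is easy to sketch. In the example below, a hypothetical predict() stand-in (1-D piecewise-linear interpolation, not the kriging predictor) keeps the code self-contained; the leave-one-out structure is the point:

```python
import math

def predict(x_train, y_train, x):
    """Stand-in predictor: 1-D piecewise-linear interpolation.
    In the dissertation this role is played by the kriging model."""
    pts = sorted(zip(x_train, y_train))
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    # outside the remaining data: return the nearest endpoint value
    return pts[0][1] if x < pts[0][0] else pts[-1][1]

def cvrmse(x, y):
    """Brute-force leave-one-out cross validation (Eq. A.4): remove each
    sample point, rebuild the model, and predict the removed point."""
    sq_err = []
    for i in range(len(x)):
        x_rest = x[:i] + x[i + 1:]
        y_rest = y[:i] + y[i + 1:]
        sq_err.append((y[i] - predict(x_rest, y_rest, x[i])) ** 2)
    return math.sqrt(sum(sq_err) / len(x))

x = [0.0, 0.25, 0.5, 0.75, 1.0]
y = [0.0, 0.2, 0.5, 0.8, 1.0]
print(cvrmse(x, y))
```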
A.1.3 Using a Kriging Model
Once a kriging model has been built and deemed sufficiently accurate, it is ready to be
used in optimization or for concept exploration. For those familiar with response surfaces, RS
model prediction simply requires substituting the new value of x, the design variables, into the
first-order or second-order response surface equations once the model has been built and
properly validated. Prediction with a kriging model, however, requires the inversion and
multiplication of several matrices; these matrices grow as the number of sample points increases.
Hence, for large problems prediction with the kriging model may become computationally
expensive as well. Regardless, the general approach for using a kriging model in optimization is
illustrated in Figure A.1.
[Figure A.1: Using a kriging model in optimization. Goals, bounds, and constraints are given to
the optimization algorithm (e.g., DSIDES), which sends each new point x_new to krigit.f; using
the sample points (x_i, y_i), i=1,...,n_s, and θ*, krigit.f computes R, R⁻¹, β̂, and rᵀ and
returns a predicted ŷ_i for each response, y_i, repeating until the best point x* is found.]
The process of using a kriging model in optimization commences once the constraints,
goals, and variable bounds are fed into a numerical optimization algorithm, e.g., DSIDES
(Parkinson, et al., 1998). The optimization algorithm then selects a value for the design
variables, xnew. The sample points and θ* from Figure A.1, along with xnew, serve as the input
for the dace_eval.f subroutine used for prediction. The krigit.f algorithm is used to compute R,
the inverse of R, and then β̂. Meanwhile, rᵀ is also computed, and once rᵀ and β̂ are known, ŷ_i
can be computed. This process is repeated for each response, and the vector of predicted
values, ŷ, is returned to the optimizer, which adjusts xnew until the best point, x*, is found.
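In equation form, this prediction step is ŷ(x_new) = β̂ + rᵀR⁻¹(y − Fβ̂), where r is the vector of correlations between x_new and the sample points. A minimal Python sketch (Gaussian correlation and hypothetical 1-D data; not the dace_eval.f code itself):

```python
import numpy as np

def krige_predict(x_new, X, y, theta):
    """y_hat(x_new) = betahat + r' * Rinv * (y - F*betahat); the quantities
    that do not depend on x_new are computed once per response."""
    n = len(y)
    d = np.abs(X[:, None, :] - X[None, :, :])
    R = np.exp(-np.sum(theta * d**2, axis=2))       # Gaussian correlation
    Rinv = np.linalg.inv(R)
    F = np.ones(n)
    betahat = (F @ Rinv @ y) / (F @ Rinv @ F)
    Rinvyfb = Rinv @ (y - betahat * F)              # reusable for every x_new
    r = np.exp(-np.sum(theta * np.abs(X - x_new)**2, axis=1))
    return betahat + r @ Rinvyfb

X = np.array([[0.0], [0.5], [1.0]])
y = np.array([0.0, 1.0, 0.0])
theta = np.array([5.0])
# kriging interpolates: prediction at a sample point recovers its response
print(krige_predict(np.array([0.5]), X, y, theta))   # ~1.0
```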
The kriging source code for mlefinder.f and krigit.f is included in Sections A.2.1 and
A.2.2, respectively. A README file describing the intricacies of the kriging codes and file
naming conventions is included in Section A.2.3. Finally, sample parameter and data input files
and kriging output are included in Section A.2.4.
* to evaluate the correlation matrix and DACE model parameters.
*
* Tim Simpson, 25 February 1998 / Tony Giunta, 12 May 1997
*
***********************************************************************
*
* Input Variables:
* ----------------
* xvector = vector of length 'numdv' which contains theta guesses
* krig = integer indicating response currently under investigation
*
* Output Variables:
* -----------------
* MLE = value of the MLE objective function
*
* Parameter Variables:
* --------------------
* numdv = number of variables
* numsamp = number of data samples from which the correlation matrix
* and the DACE model parameters are calculated
* numnew = number of new points to be predicted
* corflag = integer value used to indicate correlation function
* 1 -> Exponential correlation function, p=1
* 2 -> Gaussian correlation function with p=2
* 3 -> Cubic spline correlation function
* 4 -> Matern function, once differentiable (exp*linear)
* 5 -> Matern function, twice differentiable (exp*quadratic)
*
* Local Variables:
* ----------------
* DOUBLE PRECISION
* ----------------
* xmat = numdv x numsamp of sample site locations
* cormat = correlation matrix (numsamp x numsamp)
* invmat = inverse of the correlation matrix (numsamp x numsamp)
* Fvect = matrix (1 x numsamp) of constant terms (all = 1 in
* 'correlate')
* FRinv = matrix product of 'Fvect' and 'invmat'
* yvect = matrix (1 x numsamp) of response values
* yfb_vect = matrix (1 x numsamp) resulting from
* ('yvect'-'Fvect'*'betahat')
* yfbRinv = matrix (1 x numsamp) resulting from
* ('yvect' - 'Fvect'*'betahat')*'invmat'
* RHSterms = matrix product of 'invmat' and 'yfb_vect'
* r_xhat = matrix (1 x numsamp) created by using the vector 'xnew'
* in the correlation function
* betahat = estimate of the constant term in the DACE model (beta)
* sigmahat = estimate of the variance (sigma) term in the data
* work = vector of length 'numsamp' used as temporary storage by
* the LAPACK subroutine DGEDI
*
* INTEGER
* -------
* ipvt = vector of length 'numsamp' of pivot locations used in
* LAPACK subroutines DGEDI and DGEFA
*
*
***********************************************************************
integer numdv,numsamp,numresp,krig,corflag,numnew
character*16 fprefix
C
C include parameter settings for numdv, numsamp, numresp
C
include 'dace.params.h'
integer i,j,ipvt(numsamp),iprint
character*24 deckfile,fitsfile
C
C specify correlation function parameter p if necessary
C
if (corflag.eq.2) then
p=2.0
else if (corflag.eq.1) then
p=1.0
end if
C
C open necessary .dek file
C
open(21,file=deckfile,status='old')
C
C initialize the DACE modeling parameters
C
do 10 i = 1,numdv
theta(i) = xvector(i)
10 continue
C
C read in xmat and response arrays
C
do 100 i=1,numsamp
read (21,*) (xmat(i,j),j=1,numdv),(resp(i,k),k=1,numresp)
100 continue
close(21)
C
C Assign response to yvect for the response of interest (specified
C by variable 'krig'); yvect is the response for which model is being
C built using 'theta' parameters; also, initialize Fvect
C
do 107 i=1,numsamp
yvect(i)=resp(i,krig)
Fvect(i,1)=1.0d0
107 continue
C
C call subroutine to calculate the inverse of the correlation matrix
C
call cormat (xmat,invmat,detR,numsamp,numdv,
& work,ipvt,theta,p,corflag,iprint)
C
C call subroutine to compute intermediate kriging equations
C
call compeqn (invmat,Fvect,yvect,FRinv,yfb_vect,
& Rinvyfb,betahat,numsamp,numdv)
C
C call subroutine to compute sigmahat and mle obj function
C
call mleobjfcn (detR,yfb_vect,Rinvyfb,MLE,numsamp)
C
C return to simulated annealing algorithm with MLE value
C
return
end
***********************************************************************
*
subroutine cormat (xmat,invmat,detR,numsamp,numdv,
& work,ipvt,theta,p,corflag,iprint)
*
*
* This subroutine calculates the correlation matrix and its inverse
*
* Tim Simpson 15 February 1998 / Tony Giunta, 12 May 1997
*
***********************************************************************
*
* Needed External Files:
* ----------------------
* LAPACK subroutines DGEFA and DGEDI (FORTRAN77)
*
* Inputs:
* -------
* DOUBLE PRECISION:
* -----------------
* xmat,work,p,theta
*
* INTEGER:
* --------
* numdv,numsamp,ipvt,corflag
*
* Variables ipvt and work will be changed upon exit.
*
* Outputs:
* --------
* DOUBLE PRECISION:
* -----------------
* invmat,detR
*
* Local:
* ------
* DOUBLE PRECISION:
* -----------------
* det(2) = dummy variable used in DGEFI subroutine
* R = value of correlation function between i_th and j_th
* sample points
*
***********************************************************************
C
C passed variables
C
integer numdv,numsamp,corflag,ipvt(numsamp),iprint
double precision xmat(numsamp,numdv),invmat(numsamp,numsamp),
& work(numsamp),theta(numdv),p,detR
double precision det(2),R
integer i,j,info
C
C build the correlation matrix (stored in invmat and inverted in place)
C
do 300 i=1,numsamp
do 305 j=1,numsamp
call scfxmat(R,xmat,theta,corflag,p,numdv,numsamp,i,j)
invmat(i,j)=R
305 continue
300 continue
if (iprint.eq.1) then
do 321 i=1,numsamp
321 write(28,829) (invmat(i,j),j=1,numsamp)
829 format(5(f12.3,1x))
end if
C
C calculate the inverse of the correlation matrix using dgefa and dgedi
C
do 307 i=1,numsamp
work(i)=0.0d0
ipvt(i)=0
307 continue
call dgefa(invmat,numsamp,numsamp,ipvt,info)
call dgedi(invmat,numsamp,numsamp,ipvt,det,work,11)
detR=det(1)*(10.0d0**det(2))
if (iprint.eq.1) then
do 320 i=1,numsamp
320 write(28,828) (invmat(i,j),j=1,numsamp)
828 format(5(f12.3,1x))
end if
return
end
***********************************************************************
*
* include LAPACK routines used to find inverse of correlation matrix;
* available on-line at http://www.netlib.org/
*
***********************************************************************
include 'dgefa.f'
include 'dgedi.f'
***********************************************************************
*
subroutine compeqn (invmat,Fvect,yvect,FRinv,yfb_vect,
& Rinvyfb,betahat,numsamp,numdv)
*
*
* This subroutine calculates the DACE correlation matrix and
* corresponding equations needed for model prediction.
*
* Tim Simpson 15 February 1998
*
*----------------------------------------------------------------------
*
* Inputs:
* -------
* DOUBLE PRECISION:
* -----------------
* invmat,Fvect,yvect
*
* Outputs:
* --------
* DOUBLE PRECISION:
* -----------------
* betahat,FRinv,Rinvyfb,yfb_vect
*
* Local:
* ------
* DOUBLE PRECISION:
* -----------------
* beta_den,beta_num
*
***********************************************************************
C
C passed variables
C
integer numsamp,numdv
integer i,j
double precision invmat(numsamp,numsamp),Fvect(numsamp,1),
& yvect(numsamp),FRinv(numsamp),yfb_vect(numsamp),
& Rinvyfb(numsamp),betahat,beta_den,beta_num
C
C compute F'Rinv
C
do 310 i=1,numsamp
FRinv(i)=0.0d0
do 315 j=1,numsamp
FRinv(i)=FRinv(i)+Fvect(j,1)*invmat(j,i)
315 continue
310 continue
C
C compute betahat = (F'Rinv*yvect)/(F'Rinv*F)
C
beta_den=0.0d0
beta_num=0.0d0
do 320 i=1,numsamp
beta_den=beta_den+FRinv(i)*Fvect(i,1)
beta_num=beta_num+FRinv(i)*yvect(i)
320 continue
betahat = beta_num / beta_den
C
C compute y-f'betahat
C
do 330 i = 1,numsamp
yfb_vect(i) = yvect(i) - betahat*Fvect(i,1)
330 continue
C
C compute Rinv*(y-f'beta)
C
do 340 i=1,numsamp
Rinvyfb(i)=0.0d0
do 345 j=1,numsamp
Rinvyfb(i)=Rinvyfb(i)+invmat(i,j)*yfb_vect(j)
345 continue
340 continue
return
end
C********************************************************************
C
subroutine mleobjfcn(detR,yfb_vect,Rinvyfb,MLE,numsamp)
C
C author: Tim Simpson date: 16 February 1998
*
*
* Inputs:
* -------
* DOUBLE PRECISION:
* -----------------
* detR,yfb_vect,Rinvyfb
*
* INTEGER:
* --------
* numsamp
*
*
* Outputs:
* --------
* DOUBLE PRECISION:
* -----------------
* MLE
*
* Local:
* ------
* DOUBLE PRECISION:
* -----------------
* sigmahat = estimated value of the variance (sigma) term in the data
*
C********************************************************************
C
C passed variables
C
double precision detR,yfb_vect(numsamp),Rinvyfb(numsamp),MLE
integer numsamp
C
C local variables
C
double precision sigmahat
integer i
C
C compute sigma_hat = (yfb_vect*Rinvyfb)/numsamp
C
sigmahat=0.0d0
do 10 i=1,numsamp
sigmahat=sigmahat+yfb_vect(i)*Rinvyfb(i)
10 continue
sigmahat=ABS(sigmahat)/numsamp
C
C compute MLE objective function (maximized by the optimizer)
C
MLE = -0.5d0*(numsamp*LOG(sigmahat)+LOG(detR))
C write(6,83) sigmahat,detR,MLE
C 83 format('sigmahat=',f12.4,2x,'detR=',f10.7,'obj fcn=',f12.5)
return
end
C********************************************************************
C
subroutine scfxmat(R,xmat,theta,corflag,p,numdv,numsamp,i,j)
C
C author: tim simpson date: 2/11/98
C
C subroutine to compute correlation function for correlation
C matrix; NOT to compute r_xhat.
C
C
C Output:
C -------
C R = value of correlation function between two sample points,
C given theta
C
C Input:
C ------
C xmat = matrix of sample points
C theta = array of theta values
C i,j = i_th and j_th elements of correlation matrix for which
C correlation function is being computed
C corflag = integer flag specifying correlation function
C
C All variables except R are unchanged upon exiting
C
C********************************************************************
C
C passed variables
C
integer i,j,corflag,numdv,numsamp
double precision R,xmat(numsamp,numdv),theta(numdv),p
C
C local variables
C
double precision sum,thetadist,dist
integer k
if ((corflag.eq.2).or.(corflag.eq.1)) then
sum=0.0d0
do 120 k = 1,numdv
dist = ABS(xmat(i,k)-xmat(j,k))
sum = sum + theta(k)*((dist)**p)
120 continue
R = exp( -1.0d0*sum )
else if (corflag.eq.3) then
sum=1.0d0
do 130 k=1,numdv
dist = ABS(xmat(i,k)-xmat(j,k))
thetadist=theta(k)*dist
if (thetadist.lt.0.5) then
sum=sum*(1.0-6.0*(thetadist**2)+6.0*(thetadist**3))
else if (thetadist.ge.1.0) then
sum=sum*0.0
else
sum=sum*(2.0*(1.0-thetadist)**3)
end if
130 continue
R = sum
else if (corflag.eq.4) then
sum=1.0
do 140 k=1,numdv
dist = ABS(xmat(i,k)-xmat(j,k))
sum = sum*(exp(-theta(k)*dist)*(1.+theta(k)*dist))
140 continue
R = sum
else if (corflag.eq.5) then
sum=1.0
do 150 k=1,numdv
dist = ABS(xmat(i,k)-xmat(j,k))
sum = sum*(exp(-theta(k)*dist)*
& (1.+theta(k)*dist+(theta(k)**2*dist**2)/3.0))
150 continue
R = sum
end if
return
end
* DOUBLE PRECISION
* ----------------
* xmat = numdv x numsamp of sample site locations
* invmat = inverse of the correlation matrix (numsamp x numsamp)
* Fvect = matrix (1 x numsamp) of constant terms
* (all = 1 in 'correlate')
* FRinv = matrix product of 'Fvect' and 'invmat'
* yvect = matrix (1 x numsamp) of response values
* yfb_vect = matrix (1 x numsamp) resulting from
* ('yvect' - 'Fvect'*'betahat')
* Rinvyfb = matrix product of 'invmat' and 'yfb_vect'
* r_xhat = matrix (1 x numsamp) created by using the vector 'xnew'
* in the correlation function
* betahat = estimate of the constant term in the DACE model
* work = vector of length 'numsamp' used as temporary storage by
* the LAPACK subroutine DGEDI
*
* INTEGER
* -------
* ipvt = vector of length 'numsamp' of pivot locations used in
* LAPACK subroutines DGEDI and DGEFA
*
* Notes:
* ------
* A. All points X must be scaled [0,1]
* B. Three files are needed for prediction:
* 1. '*.dek' -> data file containing sample points
* used to fit the model
* 2. '*.fits' -> file containing theta parameters for
* kriging model
* 3. '*.pts' -> file containing new points at which
* to predict y_hat(X)
* C. Predicted y_hat are written to file 'wei.4_160.out'
* D. User must specify modeling parameters in 'krig.params.h', for
* instance if you want to predict 100 new points, then numnew
* is changed to 100 in 'krig.params.h' file
* E. Subroutines 'dgedi.f' and 'dgefa.f' must be in same directory
* when compiling this code
* F. Files 'krig.params.h' and 'krig.files.h' must also be in same
* directory when compiling this code
* G. Program must be recompiled any time changes are made to
* either 'krig.files.h' or 'krig.params.h'
*
************************************************************************
integer numdv,numsamp,numnew,numresp,corflag
character*16 fprefix
C
C integer parameters are specified in dace.params.h file
C
include 'dace.params.h'
double precision xmat(numsamp,numdv),invmat(numsamp,numsamp),
& Fvect(numsamp,1),FRinv(numsamp),r_xhat(numsamp),betahat,
& Rinvyfb(numsamp),yvect(numsamp),yfb_vect(numsamp),
& xnew(numnew,numdv),ynew(numnew),work(numsamp),dummy2,
& Farray(numnew,numresp),thetaray(numresp,numdv),
& theta(numdv),resp(numsamp,numresp),p,detR
integer krig,i,j,k,ipvt(numsamp),dummy,lenstr,scfstr
character*16 ftitle
character*20 deckfile,fitsfile,newptfile,outfile,scfname
scfname='expgaucubma1ma2'
C
C specify correlation function; make sure .fits file has correct thetas
C
if (corflag.eq.2) then
p=2.0
else if (corflag.eq.1) then
p=1.0
end if
C
C open necessary .dek, .fit, .npt, and .out files
C
call getlen(fprefix,lenstr)
ftitle=fprefix
deckfile=ftitle(1:lenstr) // '.dek'
newptfile=ftitle(1:lenstr) // '.npt'
scfstr=(3*(corflag-1))+1
fitsfile=ftitle(1:lenstr) // '.' // scfname(scfstr:scfstr+2)
& // '.fit'
outfile=ftitle(1:lenstr) // '.' // scfname(scfstr:scfstr+2)
& // '.out'
open(21,file=deckfile,status='old')
open(22,file=fitsfile,status='old')
open(23,file=newptfile,status='old')
open(27,file=outfile,status='unknown')
print *
print *, deckfile,fitsfile,newptfile,outfile
print *, numnew,numdv,numsamp,numresp
C
C initialize xmat, response, and theta arrays,
C
print *
write(6,*) 'Reading in sample data...'
do 10 i=1,numsamp
10 read (21,*) (xmat(i,j),j=1,numdv),(resp(i,k),k=1,numresp)
close(21)
print *
write(6,*) 'Reading in theta parameters...'
do 20 i=1,numresp
read(22,*) dummy,(thetaray(i,j),j=1,numdv),dummy2
write(6,1000) dummy,(thetaray(i,j),j=1,numdv)
1000 format(i2,8f9.5)
20 continue
close(22)
print *
write(6,*) 'Reading in new points at which to predict y_hat...'
do 30 i=1,numnew
read(23,*) (xnew(i,j),j=1,numdv)
write(6,78) i,(xnew(i,j),j=1,numdv)
78 format(i3,2x,8(f6.4,1x))
30 continue
close(23)
C
C step through each response, predict new y_hat at each new point
C
do 40 krig=1,numresp
write(6,1001)
1001 format(/,60('-'),/)
write(6,*) 'Predicting response #',krig,' using theta parameters:'
C
C Assign response to yvect for the response of interest (specified
C by variable 'krig'); yvect is the response for which model is being
C built using 'theta' parameters specified by 'krig'
C
do 50 j=1,numdv
theta(j)=thetaray(krig,j)
50 continue
write(6,1002) (theta(j),j=1,numdv)
1002 format(8f9.5)
print *
write(6,*) 'Predicted values for response #',krig
do 60 i=1,numsamp
yvect(i)=resp(i,krig)
Fvect(i,1)=1.0d0
60 continue
C
C call subroutine to calculate the inverse of the correlation matrix
C
call cormat (xmat,invmat,detR,numsamp,numdv,
& work,ipvt,theta,p,corflag)
C
C call subroutine to compute intermediate kriging equations
C
call compeqn (invmat,Fvect,yvect,FRinv,yfb_vect,
& Rinvyfb,betahat,numsamp,numdv)
C
C call subroutine to predict new point given correlation matrix and
C the correlation parameters
C
call dace_eval(xnew,xmat,r_xhat,betahat,Rinvyfb,numsamp,
& numdv,numnew,theta,ynew,p,corflag)
C
C store predicted values for current response
C
do 80 i=1,numnew
Farray(i,krig)=ynew(i)
80 continue
C
C compute predicted values for next response, i.e., increment 'krig'
C
40 continue
C
C write predicted values to specified .out file
C
do 90 i=1,numnew
write(27,79) (Farray(i,krig),krig=1,numresp)
79 format(10(f13.5,1x))
90 continue
close(27)
print *
write(6,*) 'All response values written to specified .out file'
stop
end
***********************************************************************
*
subroutine getlen(string,lenstr)
*
*
* This subroutine is used to determine the actual length of the
* filename prefix specified by the user in 'krig.params.h'.
*
* With this known, the .dek, .fit, .npt, and .out suffixes are
* concatenated onto the prefix, and the files are opened.
*
* Author: Tim Simpson, 2/15/98
*
* From: Koffman and Friedman, Fortran (5th ed.), Addison-Wesley,
* New York, pp. 537-538.
*
***********************************************************************
*
character*1 blank
character*16 string
parameter (blank=' ')
integer next
do 10 next = LEN(string), 1, -1
if (string(next:next).ne.blank) then
lenstr=next
return
end if
10 continue
lenstr=0
if (lenstr.eq.0) then
write(6,*) 'You have not specified a file name prefix'
stop
end if
return
end
***********************************************************************
*
subroutine cormat (xmat,invmat,detR,numsamp,numdv,
& work,ipvt,theta,p,corflag)
*
*
* This subroutine calculates the correlation matrix and its inverse
*
* Tim Simpson 15 February 1998 / Tony Giunta, 12 May 1997
*
***********************************************************************
*
* Needed External Files:
* ----------------------
* LAPACK subroutines DGEFA and DGEDI (FORTRAN77)
*
* Inputs:
* -------
* DOUBLE PRECISION:
* -----------------
* xmat,work,p,theta
*
* INTEGER:
* --------
* numdv,numsamp,ipvt,corflag
*
* Variables ipvt and work will be changed upon exit.
*
* Outputs:
* --------
* DOUBLE PRECISION:
* -----------------
* invmat,detR (used in mle*.f but not dace*.f -> done to maintain
* consistency between cormat routine for two codes)
*
* Local:
* ------
* DOUBLE PRECISION:
* -----------------
* det(2) = dummy variable used in DGEFI subroutine
* R = value of correlation function between i_th and j_th
* sample points
*
***********************************************************************
C
C passed variables
C
integer numdv,numsamp,corflag,ipvt(numsamp)
double precision xmat(numsamp,numdv),invmat(numsamp,numsamp),
& work(numsamp),theta(numdv),p,detR
double precision det(2),R
integer i,j,info
C
C build the correlation matrix (stored in invmat and inverted in place)
C
do 300 i=1,numsamp
do 305 j=1,numsamp
call scfxmat(R,xmat,theta,corflag,p,numdv,numsamp,i,j)
invmat(i,j)=R
305 continue
300 continue
C
C calculate the inverse of the correlation matrix using dgefa and dgedi
C
do 307 i=1,numsamp
work(i)=0.0d0
ipvt(i)=0
307 continue
call dgefa(invmat,numsamp,numsamp,ipvt,info)
call dgedi(invmat,numsamp,numsamp,ipvt,det,work,11)
detR=det(1)*(10.0d0**det(2))
return
end
***********************************************************************
*
* include LINPACK routines used to find inverse of correlation matrix
*
***********************************************************************
include 'dgefa.f'
include 'dgedi.f'
***********************************************************************
*
subroutine compeqn (invmat,Fvect,yvect,FRinv,yfb_vect,
& Rinvyfb,betahat,numsamp,numdv)
*
*
* This subroutine calculates the DACE correlation matrix and
* corresponding equations needed for model prediction.
*
* Tim Simpson 15 February 1998
*
*----------------------------------------------------------------------
*
* Inputs:
* -------
* DOUBLE PRECISION:
* -----------------
* invmat,Fvect,yvect
*
* Outputs:
* --------
* DOUBLE PRECISION:
* -----------------
* betahat,FRinv,Rinvyfb,yfb_vect
*
* Local:
* ------
* DOUBLE PRECISION:
* -----------------
*
***********************************************************************
C
C passed variables
C
integer numsamp,numdv
integer i,j
double precision invmat(numsamp,numsamp),Fvect(numsamp,1),
& yvect(numsamp),FRinv(numsamp),yfb_vect(numsamp),
& Rinvyfb(numsamp),betahat,beta_den,beta_num
C
C compute F'Rinv
C
do 310 i=1,numsamp
FRinv(i)=0.0d0
do 315 j=1,numsamp
FRinv(i)=FRinv(i)+Fvect(j,1)*invmat(j,i)
315 continue
310 continue
C
C compute betahat = (F'Rinv*yvect)/(F'Rinv*F)
C
beta_den=0.0d0
beta_num=0.0d0
do 320 i=1,numsamp
beta_den=beta_den+FRinv(i)*Fvect(i,1)
beta_num=beta_num+FRinv(i)*yvect(i)
320 continue
betahat = beta_num / beta_den
print *, 'betahat=',betahat
C
C compute y-f'betahat
C
do 330 i = 1,numsamp
yfb_vect(i) = yvect(i) - betahat*Fvect(i,1)
330 continue
C
C compute Rinv*(y-f'beta)
C
do 340 i=1,numsamp
Rinvyfb(i)=0.0d0
do 345 j=1,numsamp
Rinvyfb(i)=Rinvyfb(i)+invmat(i,j)*yfb_vect(j)
345 continue
340 continue
return
end
***********************************************************************
*
subroutine dace_eval(xnew,xmat,r_xhat,betahat,Rinvyfb,
& numsamp,numdv,numnew,theta,ynew,p,corflag)
*
*
* Use DACE interpolating model to predict response values at unsampled
* locations
*
* Tony Giunta, 12 May 1997/Tim Simpson 12 February 1998
*
***********************************************************************
*
* Inputs:
* -------
* xnew
* xmat
* r_xhat
* betahat
* Rinvyfb
* numsamp
* numdv
* theta
*
* Outputs:
* --------
* ynew
*
* Local Variables:
* ----------------
* sum = temporary variable used for calculating the terms in the
* vector 'r_xhat'
* yeval = scalar value resulting from matrix multiplication of
* 'r_xhat' * 'Rinvyfb'
*
************************************************************************
*
double precision xnew(numnew,numdv),r_xhat(numsamp),
& xmat(numsamp,numdv),Rinvyfb(numsamp),betahat,
& theta(numdv),ynew(numnew),yeval,p,R
integer i,j,k,numdv,numsamp,numnew,corflag
C
C calculate the vector r(x)
C
do 200 i = 1,numnew
do 210 j = 1,numsamp
call scfxnew(R,xnew,xmat,theta,corflag,p,numdv,numsamp,
& numnew,i,j)
C
C equate r_xhat to correlation between ith xnew point and jth sample point
C
r_xhat(j)=R
210 continue
C
C calculate the estimate of Y, i.e., Y_hat(x)
C
yeval = 0.0d0
do 220 k=1,numsamp
220 yeval=yeval+r_xhat(k)*Rinvyfb(k)
ynew(i) = yeval + betahat
print '(f20.5)', ynew(i)
C
C repeat for next xnew
C
200 continue
return
end
C********************************************************************
C
subroutine scfxmat(R,xmat,theta,corflag,p,numdv,numsamp,i,j)
C
C Author: Tim Simpson Date: 2/11/98
C
C subroutine to compute spatial correlation function (scf) for
C correlation matrix; NOT to compute scf for r_xhat.
C
C Output:
C -------
C R = value of correlation function between two sample points,
C given theta
C
C Input:
C ------
C xmat = matrix of sample points
C theta = array of theta values
C i,j = i_th and j_th elements of correlation matrix for which
C correlation function is being computed
C corflag = integer flag specifying correlation function:
C
C All variables except R are unchanged upon exiting
C
C********************************************************************
C
C passed variables
C
integer i,j,corflag,numdv,numsamp
double precision R,xmat(numsamp,numdv),theta(numdv),p
C
C local variables
C
double precision sum,thetadist,dist
integer k
if ((corflag.eq.2).or.(corflag.eq.1)) then
sum=0.0d0
do 120 k = 1,numdv
dist = ABS(xmat(i,k)-xmat(j,k))
sum = sum + theta(k)*((dist)**p)
120 continue
R = exp( -1.0d0*sum )
else if (corflag.eq.3) then
sum=1.0d0
do 130 k=1,numdv
dist = ABS(xmat(i,k)-xmat(j,k))
thetadist=theta(k)*dist
if (thetadist.lt.0.5) then
sum=sum*(1.0-6.0*(thetadist**2)+6.0*(thetadist**3))
else if (thetadist.ge.1.0) then
sum=sum*0.0
else
sum=sum*(2.0*(1.0-thetadist)**3)
end if
130 continue
R = sum
else if (corflag.eq.4) then
sum=1.0
do 140 k=1,numdv
dist = ABS(xmat(i,k)-xmat(j,k))
sum = sum*(exp(-theta(k)*dist)*(1.+theta(k)*dist))
140 continue
R = sum
else if (corflag.eq.5) then
sum=1.0
do 150 k=1,numdv
dist = ABS(xmat(i,k)-xmat(j,k))
thetadist=theta(k)*dist
sum = sum*(exp(-thetadist)*
& (1.+thetadist+(thetadist**2)/3.0))
150 continue
R = sum
end if
return
end
C********************************************************************
C
subroutine scfxnew(R,xnew,xmat,theta,corflag,p,numdv,numsamp,
& numnew,i,j)
C
C Author: Tim Simpson Date: 2/13/98
C
C subroutine to compute spatial correlation function (scf) between
C ith xnew point and each sample point; NOT to compute scf for xmat.
C
C Output:
C -------
C R = value of correlation function for r_xhat
C
C Input:
C ------
C xnew = matrix of new points (numnew,numdv)
C xmat = matrix of sample points (numsamp,numdv)
C theta = array of theta values
C i,j = i_th xnew point j_th sample point for which
C correlation function is being computed
C corflag = integer flag specifying correlation function (see scfxmat)
C
C All variables except R are unchanged upon exiting
C
C********************************************************************
C
C passed variables
C
integer i,j,corflag,numdv,numsamp
double precision R,xnew(numnew,numdv),
& xmat(numsamp,numdv),theta(numdv),p
C
C local variables
C
double precision sum,thetadist,dist
integer k
if ((corflag.eq.2).or.(corflag.eq.1)) then
sum=0.0d0
do 400 k = 1,numdv
dist = ABS(xnew(i,k)-xmat(j,k))
sum = sum + theta(k)*((dist)**p)
400 continue
R = exp( -1.0d0*sum )
else if (corflag.eq.3) then
sum=1.0d0
do 410 k=1,numdv
dist = ABS(xnew(i,k)-xmat(j,k))
thetadist=theta(k)*dist
if (thetadist.lt.0.5) then
sum=sum*(1.0-6.0*(thetadist**2)+6.0*(thetadist**3))
else if (thetadist.ge.1.0) then
sum=sum*0.0
else
sum=sum*(2.0*(1.0-thetadist)**3)
end if
410 continue
R = sum
else if (corflag.eq.4) then
sum=1.0
do 420 k=1,numdv
dist = ABS(xnew(i,k)-xmat(j,k))
sum = sum*(exp(-theta(k)*dist)*(1.+theta(k)*dist))
420 continue
R = sum
else if (corflag.eq.5) then
sum=1.0
do 430 k=1,numdv
dist = ABS(xnew(i,k)-xmat(j,k))
thetadist=theta(k)*dist
sum = sum*(exp(-thetadist)*
& (1.+thetadist+(thetadist**2)/3.0))
430 continue
R = sum
end if
return
end
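The five correlation functions dispatched on corflag in scfxmat/scfxnew can be summarized compactly; a Python sketch of the same formulas (not part of the dissertation's F77 codes):

```python
import math

def correlation(x1, x2, theta, corflag):
    """Spatial correlation R(x1, x2) for the five corflag options above."""
    if corflag in (1, 2):                    # exponential (p=1) / Gaussian (p=2)
        p = 1.0 if corflag == 1 else 2.0
        return math.exp(-sum(t * abs(a - b)**p
                             for t, a, b in zip(theta, x1, x2)))
    prod = 1.0
    for t, a, b in zip(theta, x1, x2):
        h = t * abs(a - b)                   # theta-scaled distance
        if corflag == 3:                     # cubic spline
            if h < 0.5:
                prod *= 1.0 - 6.0 * h**2 + 6.0 * h**3
            elif h >= 1.0:
                prod *= 0.0
            else:
                prod *= 2.0 * (1.0 - h)**3
        elif corflag == 4:                   # Matern, once differentiable
            prod *= math.exp(-h) * (1.0 + h)
        elif corflag == 5:                   # Matern, twice differentiable
            prod *= math.exp(-h) * (1.0 + h + h**2 / 3.0)
    return prod
```

Every option returns 1 at zero distance and decays with the θ-scaled distance, which is why larger theta values localize the influence of each sample point.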
****************
* SOURCE CODES *
****************
The F77 files needed to build, run, and cross-validate a kriging
model are the following. All of these codes should be contained in
the same directory as the sample data files. For compiling, links are
made internally in the codes; therefore, only the main routine need be
compiled in order to compile the entire code.
*******************
* PARAMETER FILES *
*******************
Two parameter files are necessary to run these codes. A description of
the parameters in each file is included in the respective '*.h' file.
**************************
* FILE NAMING CONVENTION *
**************************
The following file naming convention is used in all of the codes.
'fprefix' = the file name prefix for ALL the files; it is specified
in 'dace.params.h'; it should be less than 8 characters long
and does not include a final period (the period added to
the end of 'fprefix' comes with the file name extension)
fprefix.dek = input file for all codes; contains sample points for which
----------- the kriging model is being built; format is one sample
point per line:
x_1 x_2 ... x_numdv y_1 y_2 ... y_numresp
fprefix.*.fit = output from 'simmle'; contains the MLE values for theta
------------- and also the corresponding MLE objective function value;
format is one line per response:
1 MLE_theta_x1 MLE_theta_x2 ... MLE_theta_x_numdv MLE_obj_fcn_value
. . . . . .
numresp MLE_theta_x1 MLE_theta_x2 ... MLE_theta_x_numdv MLE_obj_fcn_value
*****************
* EXAMPLE FILES *
*****************
The sample data file is '2bar.dek' and the 'dace.params.h' file has
been renamed '2bar.h' for ease of association with the example. The
new point file is '2bar.npt' and contains 11 points at which to
predict new values of the responses.
4. after 'simmle' is complete, check contents of '2bar.gau.fit' and
compare to '2bar.gau.fit%'. Here, the correlation function used
in '2bar.h' is corflag=2, the Gaussian correlation function.
**********************************
* SOME FINAL NOTES AND POINTERS *
**********************************
A.2.4 Sample Parameter and Data Input Files and Kriging Output
2bar.dek
--------
1.0000 0.5000 25.011 0.72046 0.20582
0.9286 0.7500 26.392 0.63087 0.22508
0.8571 0.2500 20.395 0.65797 0.02051
0.7857 0.8750 24.968 0.45596 0.16248
0.7143 0.3750 19.221 0.50157 -0.03038
0.6429 0.6250 20.066 0.32731 -0.00134
0.5714 0.1250 15.300 0.32361 -0.33822
0.5000 0.9375 19.901 -0.19855 -0.06383
0.4286 0.4375 15.086 -0.14293 -0.31312
0.3571 0.6875 15.439 -0.64681 -0.31213
0.2857 0.1875 11.508 -0.73043 -0.75329
0.2143 0.8125 13.531 -2.04917 -0.52760
0.1429 0.3125 9.9140 -2.26733 -1.00284
0.0714 0.5625 9.7780 -4.18023 -1.04129
dace.params.h
-------------
c**********************************************************
c *
c Parameter input file for dace_v6+, xval_v4+ *
c Author: Tim Simpson *
c Date: 2/23/98 *
c *
c**********************************************************
c
c specify parameter values for dace modeling software
c
parameter ( numdv=6,numsamp=64,numresp=5,numnew=1000,
& fprefix='brake.oa64',mleflag=0 )
c
c numdv = # design variables
c numsamp = # samples in data set
c numresp = # response models to be built
c numnew = # new points at which to predict y_hat
c
c fprefix = prefix of titles of files to be opened/used
c
c linflag = indicates order of model (0=0_th, 1=1_st or linear)
c mleflag = indicates whether or not MLE for theta should be
c found (0=no, 1=yes)
c corflag = indicates which correlation function is used
2bar.gau.fit
------------
1 0.04879 0.03442 31.34387
2 6.03023 0.06015 9.08923
3 1.13740 0.34200 36.40723
2bar.npt
----------
0.0 0.1
0.1 0.4
0.2 0.9
0.3 0.0
0.4 0.5
0.5 0.6
0.6 1.0
0.7 0.2
0.8 0.8
0.9 0.3
1.0 0.7
2bar.gau.out
------------
7.06591 -4.93252 -1.60996
9.59596 -3.19662 -1.06274
13.72457 -2.38054 -0.52032
10.87004 -0.56786 -0.91234
15.00633 -0.29634 -0.32415
17.37792 0.03354 -0.14981
22.42622 0.01010 0.04215
17.66755 0.49679 -0.14165
24.50410 0.50823 0.15698
21.47997 0.69081 0.07588
27.15561 0.66212 0.25021
APPENDIX B
The experimental designs used in Chapter 5 are presented in this appendix. As introduced in Section 2.4.3, two types of classical
experimental designs are considered in this work: central composite designs and Box-Behnken
designs. They are described in Sections B.1.1 and B.1.2, respectively. In addition to these two
types of classical designs, nine types of space filling designs are studied in Chapter 5, testing
Hypothesis 3 and the utility of space filling designs for deterministic computer experiments. Of
these nine space filling designs, eight are described in this appendix in Sections B.2.1 through
B.2.8. These designs include the following: random Latin hypercubes, random orthogonal
arrays, IMSE optimal Latin hypercubes, maximin Latin hypercubes, orthogonal-array based
Latin hypercubes, uniform designs, orthogonal Latin hypercubes, and Hammersley sequence
designs. The ninth space filling experimental design, the minimax Latin hypercube, is not
described in this appendix.
B.1 CLASSICAL EXPERIMENTAL DESIGNS
A central composite design consists of a two-level (2^k) factorial
design—the cube portion of the design with corners at ±1—augmented by additional "star" and
"center" points which allow the estimation of a second-order polynomial equation. Note that
the "star" points reside at a distance ±α from the origin. Central composite designs (CCDs)
therefore require 2^k + 2k + 1 points when a single center point is used,
where k is the number of factors being considered. The standard CCD typically uses α > 1;
values for α are determined based on the number of factors in the experiment (Myers and
Montgomery, 1995). However, variations on the standard CCD do exist. If α = 1, then the
design is called a face-centered CCD or CCF for short; if the design is instead scaled so that the
star points lie at ±1 and the factorial points at ±1/α, then the design is said to be an inscribed
CCD, or CCI for short.
Run #   X1   X2   X3
  1     -1   -1   -1
  2     -1   -1    1
  3     -1    1   -1
  4     -1    1    1
  5      1   -1   -1
  6      1   -1    1
  7      1    1   -1
  8      1    1    1
  9     -α    0    0
 10     +α    0    0
 11      0   -α    0
 12      0   +α    0
 13      0    0   -α
 14      0    0   +α
 15      0    0    0

(runs 1-8: factorial points; runs 9-14: star points; run 15: center point)

Figure B.1 Central Composite Design for 3 Factors
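The construction in Figure B.1 (2^k factorial corners at ±1, 2k star points at ±α, and a center point) is straightforward to generate programmatically; a small sketch, with the value of α left to the user since the dissertation takes it from Myers and Montgomery (1995):

```python
from itertools import product

def central_composite(k, alpha):
    """Central composite design: 2**k factorial + 2*k star + 1 center point."""
    runs = [list(c) for c in product((-1.0, 1.0), repeat=k)]  # factorial corners
    for i in range(k):                                        # star points
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            runs.append(pt)
    runs.append([0.0] * k)                                    # center point
    return runs

ccd = central_composite(3, alpha=1.0)   # alpha = 1 gives the face-centered CCF
print(len(ccd))   # 2**3 + 2*3 + 1 = 15 runs, matching Figure B.1
```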
Central composite designs have many desirable features (e.g., rotatable, orthogonal,
blockable, etc.) for fitting non-deterministic data with random error (cf., Myers and
Montgomery, 1995); however, as Sacks, et al. (1989) point out, because deterministic
computer experiments lack random error, the classical notions of blocking, replication, and
randomization are irrelevant. Regardless, central composite designs are
employed in this work as a basis for comparison since they are quickly becoming, or maybe
already are, the standard approach for fitting second-order response surface models in
engineering computer applications (cf., Simpson, et al., 1997). Specifically, CCF designs, a
scaled CCD (where the star points lie at ±1 and the factorial points lie at ±1/α), a CCI, a
combination CCF+CCD, and a combination CCF+CCI are employed in this work. The reader
is again referred to (Simpson, et al., 1997) for a review of several studies comparing CCDs and
other experimental designs.
Box-Behnken (BB) designs (Box and Behnken, 1960; Myers and Montgomery, 1995) are a
family of efficient three-level designs for fitting second-order response surfaces that are formed
by combining 2^k factorial designs with incomplete block designs. They are an important alternative to
central composite designs because they require only three levels of each factor; however, Myers
and Montgomery (1995) warn that BB designs should not be used when accurate predictions at
the extremes (i.e., the corners of the hypercube) are important. They recommend using a face-
357
centered CCD in such cases. An inspection of the three factor Box-Behnken design shown in
Figure B.2 illustrates the lack of points near the corners of the cube and thus the poor prediction
Run #  X1  X2  X3
1      -1  -1   0
2      -1   1   0
3       1  -1   0
4       1   1   0
5       0  -1  -1
6       0  -1   1
7       0   1  -1
8       0   1   1
9      -1   0  -1
10      1   0  -1
11     -1   0   1
12      1   0   1
13      0   0   0

Figure B.2 Box-Behnken Design for 3 Factors
Two factor Box-Behnken designs do not exist. Designs for four to six factors are
constructed using the design matrices shown in Figure B.3. In this work, only one center point
is used in each design since replication of a deterministic computer experiment is wasted effort.
The resulting designs for 4, 5, and 6 factors require 25, 41, and 49 points, respectively.
4 Factors:
X1  X2  X3  X4
±1  ±1   0   0
 0   0  ±1  ±1
±1   0   0  ±1
 0  ±1  ±1   0
±1   0  ±1   0
 0  ±1   0  ±1
 0   0   0   0

5 Factors:
X1  X2  X3  X4  X5
±1  ±1   0   0   0
 0   0  ±1  ±1   0
 0  ±1   0   0  ±1
±1   0  ±1   0   0
 0   0   0  ±1  ±1
 0  ±1  ±1   0   0
±1   0   0  ±1   0
 0   0  ±1   0  ±1
±1   0   0   0  ±1
 0  ±1   0  ±1   0
 0   0   0   0   0

6 Factors:
X1  X2  X3  X4  X5  X6
±1  ±1   0  ±1   0   0
 0  ±1  ±1   0  ±1   0
 0   0  ±1  ±1   0  ±1
±1   0   0  ±1  ±1   0
 0  ±1   0   0  ±1  ±1
±1   0  ±1   0   0  ±1
 0   0   0   0   0   0

Figure B.3 Box-Behnken Design Matrices for 4, 5, and 6 Factors
The procedure for creating Box-Behnken designs is described in (e.g., Box and
Behnken, 1960; Myers and Montgomery, 1995). The software package JMP® (SAS, 1995)
can also be used to quickly generate these designs. As a final note, Lucas (1976) compares
Box-Behnken designs to a wide variety of experimental designs used in conjunction with non-
deterministic experiments.
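The pairwise blocking idea behind these designs can be sketched as follows (an illustrative Python sketch under my own naming; it reproduces the 3, 4, and 5 factor matrices above, whereas the 6 factor design in Figure B.3 blocks three factors at a time and so differs):

```python
from itertools import combinations, product

def box_behnken(k):
    """Pairwise Box-Behnken-style design: for each pair of factors,
    take the four +/-1 combinations with all other factors held at 0,
    plus a single center point (one center suffices for deterministic
    computer experiments)."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product([-1, 1], repeat=2):
            pt = [0] * k
            pt[i], pt[j] = a, b
            runs.append(pt)
    runs.append([0] * k)  # single center point
    return runs

print(len(box_behnken(3)))  # 3 pairs * 4 + 1 = 13 runs
```

For k = 4 and k = 5 this yields the 25 and 41 point designs quoted above.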
Of the numerous space filling designs which exist, random Latin hypercubes, random orthogonal arrays, IMSE optimal Latin hypercubes, maximin Latin hypercubes, orthogonal-array based Latin hypercubes, uniform designs, orthogonal Latin hypercubes, and Hammersley sampling sequence designs are utilized in this dissertation. These designs have been selected based on availability of code and ease of generation. Each of these space filling experimental designs is described in the sections which follow.
B.2.1 Random Latin Hypercubes
Perhaps the earliest space filling experimental design intended for use with computer
experiments (Monte Carlo simulation in particular) is a Latin hypercube (McKay, et al., 1979).
A Latin hypercube is a matrix of n rows and k columns where n is the number of levels being
examined and k is the number of design variables. Each column contains the levels 1, 2, ..., n,
randomly permuted, and the k columns are matched at random to form the Latin hypercube.
Three two dimensional Latin hypercubes with n = 9 are shown in Figure B.4. Notice how the
points are randomly scattered throughout the design space; several of the modified Latin
hypercube designs discussed later in this section attempt to control the scattering.
Figure B.4 Three Nine Point Latin Hypercube Designs for 2 Factors
By their nature, Latin hypercubes are quite easy to generate because they require only a
random permutation of n levels in each column of the design matrix. The big advantage of Latin
hypercube designs is that they ensure stratified sampling, i.e., each of the input variables is
sampled at n levels. Thus, when a Latin hypercube is projected or collapsed into a single
dimension, n distinct levels are obtained. This is extremely beneficial for deterministic computer
experiments since the Latin hypercube points do not overlap, minimizing any information loss.
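The generation procedure just described is short enough to sketch directly (an illustrative Python sketch; function and variable names are my own):

```python
import random

def latin_hypercube(n, k, seed=None):
    """Random Latin hypercube: each column is an independent random
    permutation of the levels 1..n, so every factor is sampled exactly
    once at each of its n levels (stratified sampling)."""
    rng = random.Random(seed)
    columns = []
    for _ in range(k):
        col = list(range(1, n + 1))
        rng.shuffle(col)
        columns.append(col)
    # rows of the design matrix: one level per factor per run
    return [tuple(col[row] for col in columns) for row in range(n)]

design = latin_hypercube(9, 2, seed=0)
# projecting onto any single factor recovers all n distinct levels
print(sorted(pt[0] for pt in design))  # [1, 2, ..., 9]
```

The final print illustrates the one-dimensional projection property discussed above: collapsing the design onto either factor yields all nine distinct levels with no overlap.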
An orthogonal array (OA) is a matrix of n rows and k columns with every element being one of q symbols: 0, ..., q-1 (Owen, 1992). An orthogonal array has an associated strength r: every combination of levels, taken any r columns at a time, appears equally often in the array. For example, in the OA shown in Figure B.5 which Booker, et al. (1995) describe, if all possible combinations, taken any two at a time, of the three columns are chosen in the OA, each of these combinations appears only once in the design as a row. The same happens for any pair of columns; hence, the OA has strength r = 2.
Run #  X1  X2  X3
1      1   1   1
2      1   2   2
3      1   3   3
4      2   1   2
5      2   2   3
6      2   3   1
7      3   1   3
8      3   2   1
9      3   3   2

Figure B.5 Nine Point Orthogonal Array for 3 Factors
Notice that if the OA design in Figure B.5 is projected into any two dimensions, the
points form a 3x3 grid of nine points. This turns out to be a nice feature of these types of
designs; as Barton (1994) points out, “Orthogonal arrays...are an attractive class of sparse
designs because they provide balanced (full factorial) designs for any projection into r factors.”
Barton also states, however, that this type of balance can lead to problems with kriging models; when these designs are projected into one dimension, points overlap, which can lead to ill-conditioning of the correlation matrix. The classical experimental designs, in general, also do not exhibit good projection capabilities, yielding several overlapping points when projected into two dimensions as can be ascertained from Figure B.1 and Figure B.2; moreover, these classical designs do not project well into single dimensions as Latin hypercube designs do. Owen’s algorithm for generating these designs was obtained from his web site.
Park (1994) combines Latin hypercube designs and integrated mean square error
(IMSE) optimal designs (Sacks, et al., 1989) to generate a hybrid set of designs which he refers
to as optimal Latin hypercubes or olhd for short. These designs are well spread out over the
design space because of the IMSE criterion, do not have any replicated points, and are often
symmetric or nearly so. They also maintain the good projection capabilities which are
characteristic of Latin hypercubes. A 7 point olhd and an 8 point olhd for two factors are
shown in Figure B.6. Notice how well the points are spread out in the design space; yet, they
do not extend all the way to the edge of the design space.
Figure B.6 Seven and Eight Point Optimal Latin Hypercube Designs for 2 Factors
Park has developed a two-stage (exchange and Newton-type) algorithm which finds an olhd for a given number of factors and runs; the reader is referred to (Park, 1994) for details about the algorithm, and he has provided a copy of it for use in this research. The algorithm
works well for problems with a small number of factors and a small number of runs. The
algorithm can also be used to generate optimal Latin hypercube designs which maximize the
entropy of a design following the work in (Shewry and Wynn, 1987). Only designs which
minimize IMSE over the design space are considered in this work.
Morris and Mitchell (1995) develop a maximin Latin hypercube design (Mmlhd) which combines the maximin distance criterion used in (Johnson, et al., 1990) and the good projective capabilities of Latin hypercubes. As the name implies, the maximin distance criterion is used to maximize the minimum distance (either Euclidean or rectangular) between any two sample points, thus spreading the points out as much as possible in the design space. Two example two factor maximin Latin hypercube designs are shown in Figure B.7.

Figure B.7 Maximin Latin Hypercube Designs for 2 Factors
Morris and Mitchell use a simulated annealing search algorithm to construct these
designs. They also have generated a catalog of these designs (Morris and Mitchell, 1992) for
both Euclidean and rectangular distances for n (number of points) between 3 and 12 and k
(number of factors) between 2 and 5, as well as designs for k = 2 (n = 20), k = n (n = 9), and k = n/2 (n = 14). For designs not listed in the catalog, their algorithm has been obtained for use in this research. It is worth noting that for problems with large n and large k, the simulated annealing algorithm used to construct these designs is quite slow, thus limiting the use of these types of designs to relatively small problems.
Tang (1993) uses Latin hypercubes to construct strength r orthogonal arrays which he
refers to as U designs when used for designing experiments. His designs are quite similar to
those of Owen (1992) discussed earlier; however, the proposed sampling schemes were
developed independently of each other. The strength r OA-based Latin hypercubes stratify
each r-dimensional projection—particularly good for one dimensional projections when fitting a
model—providing designs which are better suited for computer experiments and numerical
integration. An example six point U design and Latin hypercube design are shown in Figure
B.8.
Figure B.8 Six Point U Design and Latin Hypercube Design for 2 Factors
As Tang points out in his paper, notice in Figure B.8 how the points are more uniformly scattered in the U design than in the Latin hypercube design in the two dimensional region. Suppose that the area inside the dashed boxes represents the region of interest. The U design is preferable because each dashed box contains exactly one design point, which
is not so for the Latin hypercube. The algorithm for creating these designs can be found in
(Tang, 1993); a copy of it has been provided by Tang for use in this dissertation.
B.2.6 Uniform Designs

Uniform designs are generated using number theoretic methods (Fang and Wang, 1994). These designs are denoted as UDn(qt) where UD signifies uniform design, n is the number of experiments, q is the number of levels of each factor, and t is the number of columns in the design, i.e., the number of factors. All of the uniform designs utilized in this work have q = n.

A uniform design is obtained from a generating vector (n; h1, ..., ht) of a good lattice point set where 1 = h1 < h2 < ... < ht < n which has greatest common divisor (n, hi) = 1, i = 1, ..., t. Specifically, the UD is formed from the terms qki which are defined:

qki = k·hi (mod n), k = 1, ..., n; i = 1, ..., t

where 0 < qki ≤ n, i.e., multiples of n are set equal to n. Generating vectors (n; h1, ..., ht) for small n are listed in the appendix of (Fang and Wang, 1994); the criterion for choosing the generating vectors is the mean square error criterion. The generating vectors utilized in this work are summarized in Table B.1.
Table B.1 Uniform Design Generating Vectors for Small Sample Sizes
In general, UDs are easy to create once the generating vector is known. The first column of the design is the levels 1, ..., n. Subsequent columns are based on the qki from Equation 3.17, which requires computing the multiplication modulo n of k·hi. Example 7, 9, and 11 point uniform designs for two factors are shown in Figure B.9.

Figure B.9 Seven, Nine, and Eleven Point Uniform Designs for 2 Factors
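The multiplication-modulo-n construction can be sketched as follows (an illustrative Python sketch; the generating vector used in the example is hypothetical, not one of the vectors from Table B.1):

```python
def uniform_design(n, h):
    """Uniform design from a generating vector (n; h1, ..., ht):
    column i, row k holds q_ki = k*h_i (mod n), with multiples of n
    mapped to n so that 0 < q_ki <= n."""
    t = len(h)
    return [[((k * h[i] - 1) % n) + 1 for i in range(t)]
            for k in range(1, n + 1)]

# hypothetical generating vector (7; 1, 3) for a 7-run, 2-factor design
for row in uniform_design(7, [1, 3]):
    print(row)
```

Because gcd(hi, n) = 1, each column is a permutation of 1, ..., n, so the design projects onto n distinct levels in every dimension, like a Latin hypercube.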
Ye (1997) has created a class of Latin hypercubes which preserves the orthogonality
among columns; hence, these designs are referred to as orthogonal Latin hypercubes (OLH).
These designs retain the orthogonality of traditional experimental designs (e.g., CCDs) while
attempting to maintain a good spread of points throughout the design space. The orthogonality
guarantees that the quadratic and interaction effects are uncorrelated with the estimates of linear
effects. The reader is referred to (Ye, 1997) for a detailed description for creating orthogonal
Latin hypercubes; he has provided a copy of his algorithm for use in this dissertation. It is worth
noting that the orthogonality of these designs does not depend on the numerical values of the
levels; hence, new OLH designs can be computed by permuting the numerical values or
reversing the signs of the columns. Three permutations of a 9 point OLH for 2 factors are shown in Figure B.10.
Figure B.10 Nine Point Orthogonal Latin Hypercube Designs for 2 Factors
The OLH designs are constructed by purely algebraic means without the aid of computers; however, Ye does propose an algorithm to search for optimal OLH designs based on different selection criteria (minimum entropy, and maximin spacing based on either Euclidean or rectangular distance); example applications to an injection molding cooling process with six variables are given in (Ye, 1997). In this dissertation, optimal OLH designs are not considered, only random permutations of the algebraically-constructed designs.
Latin hypercube techniques are designed for uniformity along a single dimension where subsequent columns are randomly paired for placement on a k-dimensional cube. Hammersley sequence sampling (HSS) provides a low-discrepancy experimental design for placing n points in a k-dimensional hypercube (Kalagnanam and Diwekar, 1997), providing better uniformity properties over the k-dimensional space than Latin hypercubes. Example 7, 9, and 11 point HSS designs for two factors are shown in Figure B.11.

Figure B.11 Seven, Nine, and Eleven Point Hammersley Sampling Sequence Designs for 2 Factors

HSS designs require significantly fewer samples to converge to the variance of a derived distribution
than Latin hypercube designs and Monte Carlo (random sampling) methods, thus verifying the
good uniformity properties of these types of designs in k-dimensions. The reader is referred to
(Kalagnanam and Diwekar, 1997) for a definition of Hammersley points and an explicit
procedure for generating them. The algorithm in (Iman and Shortencarier, 1984) has been
modified by Diwekar (see, Diwekar, 1995) to generate HSS designs efficiently; Diwekar has provided a copy of it for use in this dissertation.
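The standard Hammersley construction (first coordinate i/n, remaining coordinates given by radical inverses in successive prime bases) can be sketched as follows; this is a generic textbook sketch, not the Iman and Shortencarier code referenced above:

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse of the integer i in the given base."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += f * (i % base)
        i //= base
        f /= base
    return inv

def hammersley(n, k):
    """n Hammersley points in the k-dimensional unit hypercube: the first
    coordinate of point i is i/n; the remaining k-1 coordinates are
    radical inverses of i in the first k-1 prime bases."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    return [tuple([i / n] + [radical_inverse(i, primes[j])
                             for j in range(k - 1)])
            for i in range(n)]

pts = hammersley(9, 2)
print(pts[3])  # second coordinate is 0.75: 3 = 11 in base 2 -> 0.11_2
```

The deterministic, digit-reversal structure of the sequence is what gives these designs their low discrepancy in k dimensions.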
APPENDIX C
The details of the minimax Latin hypercube design which are unique to this dissertation
are presented in this appendix. In Section C.1 the question of “Why a minimax Latin hypercube
design?” is addressed. The genetic algorithm which is used to generate these designs is then
presented in Section C.2. Convergence studies of the genetic algorithm are included in Section
C.3.
C.1 WHY A MINIMAX LATIN HYPERCUBE DESIGN?
From an intuitive standpoint, since prediction with a kriging model relies on the spatial
correlation between data points, a design which minimizes the maximum distance between the
sample points and any point in the design space should yield an accurate predictor. This type of
design is referred to as a minimax design (Johnson, et al., 1990). Thus, the novel space filling
design advocated in this work is the combination of the minimax criterion with a Latin hypercube
design. Why combine the two? Alone, the minimax criterion does not ensure good
stratification of the design space, i.e., when the sample points are projected into 1-dimension,
many of the points may overlap. Because a Latin hypercube ensures good stratification of the
design space when projected into 1-dimension, the minimax criterion combined with a Latin
hypercube design provides a good compromise between minimizing the maximum distance
between the sample points and any point in the design space and the good projection properties
of the Latin hypercube. Morris and Mitchell (1995) used a similar argument when combining
the maximin criterion with Latin hypercubes to create maximin Latin hypercube designs; their
designs performed better than Latin hypercube designs and maximin designs in comparative
studies.
How are these novel designs created? The algorithm provided by Morris and
Mitchell (1995) could not be modified easily to accommodate the minimax criterion as opposed
to the maximin criterion. Moreover, their use of a simulated annealing algorithm to generate maximin Latin hypercube designs is slow at best. Thus, an entirely different approach for generating these designs is developed in this dissertation.
Consider that a Latin hypercube is an (n × k) matrix which has n runs with n levels for k
factors. Since each column of the Latin hypercube contains a random permutation of n
numbers, there are n! possible permutations for each of k columns, yielding a total of (n!)k
possible combinations. For large n and k, an exhaustive search of all possible combinations to
find the minimax design is too time consuming if not impossible. Hence, an efficient method to
search for good designs is needed. Toward this end, a genetic algorithm (see, e.g., Goldberg,
1989) based design generator is developed to create minimax Latin hypercube designs.
Genetic algorithms have been shown to work well for large combinatorial problems, and genetic
algorithms have been successfully utilized to create D-optimal designs (see, e.g., Giunta, et al.,
1994; Narducci, 1995) and minimum bias designs (Venter and Haftka, 1997). Moreover, the
discrete nature of the Latin hypercubes appears to be well suited for the genetic algorithm.
The genetic algorithm for creating minimax Latin hypercube designs developed in this
dissertation is shown in Figure C.1. To begin, the user specifies the maximum number of
generations, maxngen, to be created and n and k, the number of levels (and therefore rows)
in each hypercube and the number of factors (i.e., columns), respectively. In Step 1, an initial population of size npopsize of random Latin hypercubes with n rows and k columns is generated.
Step 2 involves computing the distances between all the sample points in a given Latin
hypercube and each of the 2k corners of the design space. By definition, the minimax distance
criterion serves to minimize the maximum distance between the sample points and all possible
prediction points. However, as the design space under investigation is a hypercube (as
opposed to spherical or irregularly shaped), the prediction point that is furthest away is always
one of the corners, see Figure C.2. Therefore, only n•2k distance calculations need to be made
as opposed to computing distances over the entire design space. In this work, the squared
Euclidean distance is used when computing distances; however, rectangular distance also could
be used.
Figure C.2 In the [1, n]^k hypercube, the corner opposite a sample point is the farthest prediction point from it; equidistant points lie on circles of expanding radii
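The corner-only distance computation described above can be sketched directly (an illustrative Python sketch with my own function names; the minimax criterion minimizes, over candidate designs, the largest distance from any corner to its nearest sample point):

```python
from itertools import product

def max_corner_distance(design, n):
    """Largest squared Euclidean distance from any corner of the
    [1, n]^k cube to its nearest sample point; only the n * 2^k
    point-to-corner distances need to be checked."""
    k = len(design[0])
    corners = product([1, n], repeat=k)
    # for each corner, find the closest sample point, then take the
    # worst (largest) of those closest distances over all corners
    return max(min(sum((p - c) ** 2 for p, c in zip(pt, corner))
                   for pt in design)
               for corner in corners)

# a 5-point, 2-factor Latin hypercube on levels 1..5
lhd = [(1, 2), (2, 4), (3, 1), (4, 3), (5, 5)]
print(max_corner_distance(lhd, 5))  # 4
```

A design with a smaller value of this quantity is ranked better in Step 3 of the genetic algorithm.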
Once all of the distances have been computed, the Latin hypercubes are rank ordered
in Step 3 based on the maximum distance for each design where the smaller the maximum
distance, the better the design. The top half of the population then is selected for mating in Step
4, provided the maximum number of generations, maxngen, has not yet been created. The
best design in each population is mated with each of the designs in the top half of the population,
excluding itself. The best design is carried over into the next generation—reminiscent of a survival of the fittest strategy (Goldberg, 1989)—and a random mutation of the best design is also placed in the new population.
During mating, the usual genetic algorithm process of crossover (see, e.g., Goldberg,
1989) is modified because Latin hypercubes must maintain n distinct levels in each column. So
rather than switch individual “genes” within two mating Latin hypercubes, one column is chosen
randomly in each mating design, and the resulting columns are switched. The first child is the
best design with the randomly chosen column from the mating design, and the second child is the
mating design, imate, with the randomly chosen column from the best design. After mating is
done, mutation may occur. Child 1 undergoes mutation if a randomly generated number is greater than P1, and Child 2 undergoes mutation if a randomly generated number is greater than P2. When computing these probabilities, the normalized maximum distance is taken as:

normdist = (maxdist - maxdistmin)/(maxdistmax - maxdistmin)

where maxdistmax and maxdistmin are for the entire population before mating. In this manner, the
larger (worse) the maximum distance of the parent design, the greater the probability that its
child (Child 2) will undergo a mutation in hopes of further reducing its maximum distance after
mating. Meanwhile, since the normalized maximum distance of the best design is zero, the
probability of mutating Child 1 is also based on the normalized distance of the mating parent, but
the probability of mutation, P1, is higher than P2 because this design has more in common with the best design.
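The column-swap crossover and within-column-swap mutation can be sketched as follows (an illustrative Python sketch; names are my own and the probability logic is omitted). Both operators preserve the defining property that every column remains a permutation of the n levels:

```python
import random

rng = random.Random(1)

def crossover(best, mate):
    """Column-swap crossover for Latin hypercubes stored column-wise:
    each child keeps one parent's columns except for one randomly
    chosen column taken from the other parent."""
    col = rng.randrange(len(best))
    child1 = [c[:] for c in best]
    child2 = [c[:] for c in mate]
    child1[col], child2[col] = mate[col][:], best[col][:]
    return child1, child2

def mutate(design):
    """Mutation swaps two entries within one randomly chosen column,
    preserving the Latin hypercube property."""
    col = rng.choice(design)
    i, j = rng.sample(range(len(col)), 2)
    col[i], col[j] = col[j], col[i]

# two 5-level, 2-factor parents, stored as lists of columns
best = [[1, 2, 3, 4, 5], [2, 4, 1, 3, 5]]
mate = [[5, 4, 3, 2, 1], [3, 1, 4, 5, 2]]
c1, c2 = crossover(best, mate)
mutate(c2)
# every column of every child is still a permutation of 1..5
print(all(sorted(col) == [1, 2, 3, 4, 5] for col in c1 + c2))  # True
```

Swapping whole columns, rather than individual genes, is precisely what keeps the offspring valid Latin hypercubes.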
Once a new population has been created, Steps 2 and 3 are repeated, computing the
distances between the sample points and the corners and rank ordering the designs based on
smaller maximum distance being better. The best 1/2 of the population is then mated and
mutated again as the minimax distance Latin hypercube is sought. A pictorial illustration of the process is shown in Figure C.3.
In Figure C.3, an initial population of six Latin hypercube designs are generated
randomly. In this example, each Latin hypercube has five sample points for two factors as
illustrated in the 2-D grids of sample points. Once these designs are created, the distance
between each point and each corner is computed, and the maximum distance is recorded.
Multiple points can have the same maximum distance; hence, the number of points having this
maximum distance is also recorded. In the example shown in Figure C.3, LHDs 4, 5, and 6
each have two points which are at the maximum distance of 25.
Figure C.3 Genetic Algorithm-Based Minimax Latin Hypercube Generator
Once the distances have been computed, the designs are rank ordered based on
maximum distance with smaller maximum distances preferred over larger ones. Once the rank
ordering is completed, the top half of the population is selected for mating. The best design, that
which is ranked first, then is mated with each of the remaining designs in the top half of the
population as previously described. The columns which are switched and the mutations which
occur in the example in Figure C.3 are listed alongside each arrow. For instance, LHD 5’ is a
combination of column 2 from LHD 2 and column 2 from LHD 5; no mutation occurs for this
particular design. The maximum distance of the resulting new designs then are computed, and
the process is repeated until the specified number of generations has been created. The best
design at the end of this process is a minimax Latin hypercube design. Convergence studies of the genetic algorithm are presented next.
For the purposes of demonstration, six minimax Latin hypercubes are generated from
varying population sizes to investigate the convergence of the genetic algorithm, see Table C.1.
Population sizes of 20, 40, and 60 Latin hypercubes are investigated for two designs each of 2, 3, and 4 variables, ranging from 9 points to 41 points as shown in the table. In the current
algorithm, the number of generations is used to dictate termination of the algorithm. For the 2
and 3 variable designs, 50 generations are created before the genetic algorithm is terminated; in
the 4 variable problem, 100 generations are permitted before termination. The resulting minimax
distance—the square of the Euclidean distance—between any sample point and any point in the
design space is listed for each combination of number of variables, sample size, population size,
and number of generations. The number of Latin hypercubes which have this minimax distance
also are listed. In general, as the sample size increases, the number of Latin hypercubes with this minimax distance increases. Convergence plots for each sample size are shown in Figure C.4.
Table C.1 Summary of Minimax Latin Hypercube Convergence Study
General observations regarding the convergence of the genetic algorithm in Figure C.4 include the following:
• The larger the population size, the quicker the convergence to a minimax design.
• For the two and three variable designs, 50 generations is more than sufficient to achieve
a minimax design; however, considerably more generations are needed in the 4 variable
designs to achieve low minimax distances.
• The larger the sample size and design space, the more important the population size. In
the two variable case, convergence to a minimax design occurs regardless of the population size; however, in the three and four variable cases, a population size of 60 is
necessary to achieve the lowest minimax distance.
There are several other factors which can affect the performance of the genetic algorithm which
are not studied (e.g., different mutation probabilities, random seeds, alternative mating routines,
etc.); these factors are discussed in Section 8.3 as possible future work.
Figure C.4 Convergence of the Genetic Algorithm for Each Sample Size (maximum distance versus iterations for population sizes of 20, 40, and 60)
APPENDIX D
Kriging/DOE Testbed
Six test problems—2 two variable problems, 2 three variable problems, and 2 four variable problems—are introduced to test the utility of kriging and space filling experimental designs for building metamodels of deterministic computer experiments. The two variable problems are the design of a two-bar truss, Section D.1, and the design of a symmetric three bar truss, Section D.2; the three variable problems are the design of a helical compression spring, Section D.3, and a two-member frame, Section D.4; and the four variable problems are the design of a welded beam, Section D.5, and a pressure vessel, Section D.6. As explained in Section 5.1.1, building kriging approximations of these analyses is overkill to say the least; however, they are taken to be representative of typical analyses encountered in mechanical engineering design, allowing Hypotheses 2 and 3 to be tested and verified. Each example is described in turn along with its corresponding problem formulation.
D.1 DESIGN OF A TWO-BAR TRUSS
The symmetric two-bar truss shown in Figure D.1 has been studied by several
researchers (see, e.g., Balling and Clark, 1992; Schmit, 1981; Sobieszczanski-Sobieski, et al.,
1982). In this example, a single load case of 2P = 66,000 lbs is considered. The distance
between the supports is 2B = 60 in. The two bars are identical, having an annular cross section
with wall thickness T = 0.1 in. The material properties are Young’s modulus E = 30E6 psi,
density ρ = 0.3 lbs/in3 and yield stress σy = 60,000 psi. There are two design variables—mean
tube diameter (D) and height (H) of the truss—which have the following ranges of interest:
Figure D.1 Two-Bar Truss
W(x) = 2πρDT(B² + H²)^(1/2) [D.1]

g1(x) = σe − σ = π²E(D² + T²)/[8(B² + H²)] − P(B² + H²)^(1/2)/(πTDH) ≥ 0 [D.2]

g2(x) = σy − σ = σy − P(B² + H²)^(1/2)/(πTDH) ≥ 0 [D.3]
The first constraint prevents failure due to Euler buckling, the second due to yield. The resulting
optimum value from (Schmit, 1981) for W(x) is 19.8 lbs, occurring at:
D* = 2.47 in.
H* = 30.15 in.
In summary, two constraints—Equations D.2 and D.3—and one objective—Equation D.1—are approximated over the region of interest specified by the bounds on the two design variables given at the beginning of this section. The next example is another two variable structural optimization problem.
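The three responses above are simple enough to check numerically; the following Python sketch (names are my own) evaluates Equations D.1-D.3 at the optimum reported by Schmit:

```python
import math

# fixed parameters from the problem statement
P = 33_000.0      # lbs (half of the 2P = 66,000 lb load)
B = 30.0          # in. (half of the 2B = 60 in. support spacing)
T = 0.1           # in. wall thickness
E = 30e6          # psi Young's modulus
RHO = 0.3         # lbs/in^3 density
SIG_Y = 60_000.0  # psi yield stress

def truss(D, H):
    L = math.sqrt(B**2 + H**2)                 # bar length
    weight = 2 * math.pi * RHO * D * T * L     # W(x), Equation D.1
    sigma = P * L / (math.pi * T * D * H)      # axial stress
    sigma_e = math.pi**2 * E * (D**2 + T**2) / (8 * L**2)
    g1 = sigma_e - sigma                       # Euler buckling margin
    g2 = SIG_Y - sigma                         # yield margin
    return weight, g1, g2

# at the reported optimum D* = 2.47 in., H* = 30.15 in.
W, g1, g2 = truss(2.47, 30.15)
print(round(W, 1))  # ~19.8 lbs; the yield constraint g2 is nearly active
```

The computed weight reproduces the 19.8 lbs optimum quoted above, and g2 comes out essentially zero, confirming that yield is the active constraint at the optimum.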
D.2 DESIGN OF A SYMMETRIC THREE-BAR TRUSS

The second example also comes from (Schmit, 1981); it is the design of a symmetric
three-bar planar truss, see Figure D.2. Two loads are considered: P1 = 20,000 lbs acting at
45° to the x axis and P2 = 20,000 lbs acting at an angle of 135° to the x axis. Pertinent design parameters include the following: N = 10 in., θ1 = 135°, θ2 = 90°, θ3 = 45°, and density ρ = 0.1 lbs/in3.
Figure D.2 Symmetric Three-Bar Truss
The cross sectional areas of the bars are the design variables. Due to symmetry, there
are only two design variables: A1 = A3 and A2; the ranges of interest are:
The objective is to find A1 and A2 to minimize the weight of the truss subject to stress constraints:

W(x) = ρN(2√2·A1 + A2) [D.4]

Stress constraints guard against both tensile and compressive failure and are formulated as:

-15,000 ≤ σij(x) ≤ 20,000, i = 1, 2, 3; j = 1, 2
where σij is the stress in the ith member due to the jth load. Because of symmetry, only three of
the six inequality constraints need to be considered. The corresponding expressions for these
are as follows:

g1(x) = 20,000 − σ11 = 20,000 − 20,000(√2·A1 + A2)/(√2·A1² + 2A1A2) ≥ 0 [D.5]

g2(x) = 20,000 − σ21 = 20,000 − 20,000·√2·A1/(√2·A1² + 2A1A2) ≥ 0 [D.6]

g3(x) = 15,000 + σ31 = 15,000 − 20,000·A2/(√2·A1² + 2A1A2) ≥ 0 [D.7]
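These relations, in the form commonly published for the three-bar truss, can be evaluated at a hypothetical trial point as a quick sanity check (an illustrative Python sketch; the trial point A1 = A2 = 1.0 is my own, not from the source):

```python
import math

P = 20_000.0   # lbs applied load
RHO = 0.1      # lbs/in^3 density
N_LEN = 10.0   # in. characteristic length N

def three_bar(A1, A2):
    r2 = math.sqrt(2.0)
    weight = RHO * N_LEN * (2 * r2 * A1 + A2)   # Equation D.4
    denom = r2 * A1**2 + 2 * A1 * A2
    s11 = P * (r2 * A1 + A2) / denom   # tension in member 1
    s21 = r2 * P * A1 / denom          # tension in member 2
    s31 = P * A2 / denom               # compression magnitude, member 3
    g1 = 20_000.0 - s11
    g2 = 20_000.0 - s21
    g3 = 15_000.0 - s31
    return weight, (g1, g2, g3)

# hypothetical trial point A1 = A2 = 1.0 in^2
W, gs = three_bar(1.0, 1.0)
print(round(W, 2), all(g >= 0 for g in gs))  # 3.83 True
```

At this trial point all three stress constraints are satisfied with margin, so the point is feasible but heavier than the reported 2.64 lbs optimum.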
The optimum value of W(x) given in (Schmit, 1981) is 2.64 lbs, occurring at:

In summary, three constraints—Equations D.5 through D.7—and one objective—Equation D.4—are to be replaced by kriging models for the design space defined by the bounds on A1 and A2. This concludes the two variable examples considered in this dissertation; the first three variable problem, the design of a helical compression spring, is presented next.

D.3 DESIGN OF A HELICAL COMPRESSION SPRING

The third example is the design of a helical compression spring (Kannan and Kramer, 1994; Sandgren, 1990; Siddall, 1982), see Figure D.3. There are three design variables—number of active coils (N), mean coil diameter (D), and wire diameter (d)—with the following ranges of interest:
3 ≤ N ≤ 30
1.0 in. ≤ D ≤ 6.0 in.
0.2 in. ≤ d ≤ 0.5 in.
The spring is to be manufactured from music wire spring steel ASTM A228; the maximum outside diameter, Dmax, must be less than 3 in. The allowable stress is S = 189,000 psi, and the shear modulus is G = 1.15E8 psi. The maximum working load is Fmax = 1000 lbs, the preload compressive force is Fp = 300 lbs, and the maximum deflection under preload is δpm = 6 in. The combined deflection from preload to maximum load is δw = 1.25 in. The maximum free length is lmax = 14 in. The spring has an end coefficient equal to 1, and it is assumed to have squared and ground ends.
Figure D.3 Helical Compression Spring (from Siddall, 1982)
The objective is to minimize the volume of material in the spring for a static loading condition. Following the problem formulations given in (Kannan and Kramer, 1994; Sandgren, 1990; Siddall, 1982), the design shear stress must be less than the allowable maximum shear stress:

g1(x) = S − 8CfFmaxD/(πd³) ≥ 0 [D.8]

where:

Cf = (4C − 1)/(4C − 4) + 0.615/C
C = D/d
In reality, the spring correction factor, Cf, given by the preceding equation is for springs
subject to dynamic loading and is only important when fatigue is a concern. In this example, the
spring is assumed to be under a constant load, and the correction factor which should be used is:

Cf = 1 + 0.5/C
However, in order to maintain consistency with previous problem formulations and solutions
which are also in error, (see, Kannan and Kramer, 1994; Sandgren, 1990; Siddall, 1982), the
fatigue correction factor is used in this example since it is only for demonstrative purposes.
The spring free length must be less than the specified value:

g2(x) = lmax − lf ≥ 0 [D.9]

where:

lf = δ + 1.05(N + 2)d
δ = Fmax/K
K = Gd⁴/(8ND³)
This assumes that the spring length is 1.05 times the solid length under Fmax.
The deflection under preload must not exceed the specified deflection δpm:

g3(x) = lf − δp − (Fmax − Fp)/K − 1.05(N + 2)d ≥ 0 [D.10]

where:

δp = Fp/K

As Siddall (1982) points out, this constraint function will always be zero at convergence; hence, it could be removed from the problem formulation.
The deflection from preload to maximum load must be equal to that specified:

g4(x) = (Fmax − Fp)/K − δw = 0 [D.11]
There are two additional geometric constraints which also must be satisfied. The outside diameter of the coil must be less than the maximum specified:

g5(x) = Dmax − (D + d) ≥ 0 [D.12]

The inner coil diameter must be at least three times the wire diameter for proper winding:

g6(x) = C − 3 ≥ 0 [D.13]
This spring problem is typically solved using a mixed discrete/continuous solver. The continuous solution given in (Sandgren, 1990) is V(x) = 2.6353 in3, which occurs at:
N* = 9.192
389
D* = 1.2052 in.
d* = 0.2814 in.
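The constraint functions above can be evaluated at the reported continuous solution (an illustrative Python sketch; the coil volume expression π²Dd²(N + 2)/4 is an assumption on my part, since the objective equation is not reproduced above, though it matches the reported optimum):

```python
import math

# parameters from the problem statement
S = 189_000.0   # psi allowable shear stress
F_MAX = 1000.0  # lbs maximum working load
D_MAX = 3.0     # in. maximum outside diameter

def spring(N, D, d):
    C = D / d
    Cf = (4 * C - 1) / (4 * C - 4) + 0.615 / C   # fatigue correction factor
    tau = 8 * Cf * F_MAX * D / (math.pi * d**3)  # design shear stress
    # coil volume; the pi^2*D*d^2*(N+2)/4 form is assumed here
    V = math.pi**2 * D * d**2 * (N + 2) / 4
    g_stress = S - tau
    g_od = D_MAX - (D + d)
    g_wind = C - 3
    return V, (g_stress, g_od, g_wind)

# reported continuous solution N* = 9.192, D* = 1.2052, d* = 0.2814
V, gs = spring(9.192, 1.2052, 0.2814)
print(round(V, 3))  # ~2.635, matching Sandgren's V(x) = 2.6353
```

At this point the shear stress constraint is essentially active (the design stress comes out just under S = 189,000 psi), which is consistent with it driving the optimum.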
In summary, six constraints and one objective—Equations D.8 through D.14—are to be approximated using kriging models. The design region of interest is defined by the bounds on the design variables given at the beginning of this section. The design of a two-member frame, presented in the next section, is the other three variable test problem considered in this dissertation.
D.4 DESIGN OF A TWO-MEMBER FRAME

This example is from (Arora, 1989) and is the design of a two-member frame subjected to the out-of-plane loads shown in Figure D.4. There are three design variables—frame width (d), height (h), and wall thickness (t)—with the following ranges of interest:
Figure D.4 Two-Member Frame
The objective is to minimize the volume of the frame subject to stress constraints.
The two members in the frame are subject to both bending and torsional stresses, and
the combined stress only needs to be imposed at nodes (1) and (2) since the frame is symmetric
and the two members of the frame are identical. The stresses are calculated using the finite
element method where the nodal displacements are defined as U1 = vertical displacement at
node (2), U2 = rotation about line (3)-(2) and U3 = rotation about line (1)-(2). Following
(Arora, 1989), the equilibrium equation for the finite element model is:
(EI/L^3) |  24           -6L                  6L          | |U1|   |P|
         | -6L    4L^2 + (GJ/EI)L^2           0           | |U2| = |0|
         |  6L            0           4L^2 + (GJ/EI)L^2   | |U3|   |0|
where E = 3.0E7 psi, G = 1.154E7 psi, and the load at node (2) is P = -10,000 lbs. Values for I, J, and A are computed from the cross-sectional dimensions; the enclosed area used in the torsional shear stress calculation is:
A = (d - t)(h - t)
Once the displacements U1, U2 and U3 are calculated for a given design, the torque, T,
and bending moments at nodes (1), M1, and (2), M2, for member (1)-(2) are:
T = -GJU3/L
M1 = 2EI(-3U1 + U2L)/L^2
M2 = 2EI(-3U1 + 2U2L)/L^2
The corresponding torsional shear and bending stress are then computed using:
τ = T/(2At)
σ1 = M1h/(2I)
σ2 = M2h/(2I)
The effective stress, σe, at nodes (1) and (2) is determined using the von Mises yield criterion and
must be less than the allowable stress which is taken as 40,000 psi.
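These calculations can be pieced together in a short numerical sketch. The member length (L = 100 in.) and the thin-walled tube expressions used for I and J are assumptions here, since they are not restated in this section; the von Mises effective stress for combined bending and torsion is taken as sqrt(σ² + 3τ²):

```python
import numpy as np

# Design variables (reported optimum) and assumed member length
d, h, t = 7.798, 10.00, 0.10
L = 100.0                                       # in., assumed

E, G, P = 3.0e7, 1.154e7, -10000.0

# Assumed thin-walled rectangular tube section properties
A = (d - t) * (h - t)                           # enclosed area (Bredt)
I = (d * h**3 - (d - 2*t) * (h - 2*t)**3) / 12  # bending moment of inertia
J = 4 * A**2 * t / (2 * ((d - t) + (h - t)))    # Bredt torsion constant

# Equilibrium equations K U = F for the symmetric two-member frame
a = 4 * L**2 + (G * J / (E * I)) * L**2
K = (E * I / L**3) * np.array([[24.0, -6*L, 6*L],
                               [-6*L,    a, 0.0],
                               [ 6*L,  0.0,   a]])
U = np.linalg.solve(K, np.array([P, 0.0, 0.0]))
U1, U2, U3 = U

# Torque, bending moments, and stresses for member (1)-(2)
T  = -G * J * U3 / L
M1 = 2 * E * I * (-3*U1 + U2*L) / L**2
tau    = T / (2 * A * t)
sigma1 = M1 * h / (2 * I)
sigma_e1 = np.sqrt(sigma1**2 + 3 * tau**2)      # von Mises stress at node (1)
print(round(sigma_e1, 0))
```

With these assumed section properties the effective stress at node (1) comes out near the 40,000 psi allowable, consistent with a stress constraint being active at the reported optimum.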
The resulting optimum value from (Arora, 1989) is V(x) = 703.916 in3, occurring at:
d* = 7.798 in
h* = 10.00 in
t* = 0.10 in
In summary, two constraints—Equations D.15 and D.16—and one objective—
Equation D.14—are to be replaced with kriging approximations over the design space defined
by the bounds on the design variables listed at the beginning of this section. This is the last of
the three variable problems considered in this dissertation; the first four variable test problem is the design of a welded beam.
This example is taken from (Ragsdell and Phillips, 1976) and has been solved by several
researchers studying design optimization (see, e.g., Eggert and Mayne, 1993; Kannan and
Kramer, 1994; Sandgren, 1989). The objective is to minimize the total system cost of the
welded beam structure shown in Figure D.5 subject to five constraints which define feasibility.
The length, L, of the bar is 14 in., and the force acting on the bar is taken as 6000 lbs. The bar
“A” is made out of 1010 steel. There are four design variables—weld height (h), weld length
(l), bar thickness (t), and width (b)—with the following ranges of interest:
Figure D.5 Welded Beam
The objective function is a combination of set-up cost, welding labor cost, and material cost. The constraints
include the maximum shear stress in the weld, the maximum normal stress in the beam, the bar
buckling load, the bar end deflection, and a geometric constraint which ensures that the
thickness of the weld, h, is less than the bar width, b, assuming a 45° weldment. The maximum shear stress in the weld is:
τ(x) = [(τ')^2 + 2τ'τ''cosθ + (τ'')^2]^(1/2) [D.18]
where:
cosθ = l/(2R)
M = F[L + (l/2)]
J = 2{0.707hl[(l^2/12) + ((t + h)/2)^2]}
The shear stress in the weld must be less than the design shear stress in the weld, τd, which is taken as 13,600 psi. The maximum normal stress in the beam is:
σ(x) = 6FL/(bt^2) [D.19]
which must be less than the design normal stress for the beam material, σd, which is taken as
30,000 psi.
For narrow rectangular bars, a good approximation to the buckling load is:
Pc(x) = (4.013·sqrt(EIα)/L^2)[1 - (t/(2L))·sqrt(EI/α)] [D.20]
where I = (1/12)tb^3, and α = (1/3)Gtb^3. Young’s modulus, E, is equal to 30E6 psi; the shear
modulus, G, is equal to 12E6 psi. The value for Pc(x) must be greater than F in order for the bar not to buckle.
Assuming that the bar is a cantilever beam of length L, the deflection in the beam is:
δ(x) = 4FL^3/(Et^3b) [D.21]
The deflection of the beam must be less than the maximum permissible deflection, which is taken as
0.25 in.
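The five constraints can be evaluated together at a candidate design. The intermediate weld quantities are not restated in this section, so the standard expressions for this benchmark (τ' = F/(sqrt(2)hl), τ'' = MR/J, R = sqrt(l²/4 + ((h+t)/2)²)) are assumed below, and the design point is simply a representative one:

```python
import math

F, L, E, G = 6000.0, 14.0, 30e6, 12e6

def constraints(h, l, t, b):
    # Weld shear stress (Eq. D.18); primary, secondary, and combined terms
    tau_p = F / (math.sqrt(2) * h * l)
    M = F * (L + l / 2)
    R = math.sqrt(l**2 / 4 + ((h + t) / 2)**2)
    J = 2 * (0.707 * h * l * (l**2 / 12 + ((h + t) / 2)**2))
    tau_pp = M * R / J
    tau = math.sqrt(tau_p**2 + 2 * tau_p * tau_pp * (l / (2 * R)) + tau_pp**2)
    # Bar bending stress (Eq. D.19)
    sigma = 6 * F * L / (b * t**2)
    # Buckling load (Eq. D.20), with I = t*b^3/12 and alpha = G*t*b^3/3
    I, alpha = t * b**3 / 12, G * t * b**3 / 3
    Pc = (4.013 * math.sqrt(E * I * alpha) / L**2) * (
        1 - (t / (2 * L)) * math.sqrt(E * I / alpha))
    # End deflection (Eq. D.21)
    delta = 4 * F * L**3 / (E * t**3 * b)
    return tau, sigma, Pc, delta

tau, sigma, Pc, delta = constraints(h=0.25, l=6.0, t=9.0, b=0.25)
print(tau < 13600, sigma < 30000, Pc > F, delta < 0.25)  # → True True True True
```

The geometric constraint h ≤ b is also satisfied at this point, so the candidate design is feasible with respect to all five constraints.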
The optimum value F(x) given in (Ragsdell and Phillips, 1976) is $2.386. In summary, the constraints and the objective function—Equation D.17—are replaced by kriging models and approximated over the
design space defined by the variable bounds given at the beginning of this section. In the next
section, the design of a pressure vessel is introduced as the second four variable problem for
investigation.
The final four variable example is the design of a pressure vessel (Li and Chou, 1994;
Sandgren, 1990). The cylindrical pressure vessel is shown in Figure D.6. The shell is made in
two halves of rolled steel plate which are joined by two longitudinal welds. Available rolling
equipment limits the length of the shell to 20 ft. The end caps are hemispherical, forged, and
welded to the shell. All welds are single-welded butt joints with a backing strip. The material is
carbon steel ASME SA 203 grade B. The pressure vessel should store 750 ft^3 of compressed
air at a pressure of 3,000 psi. There are four design variables—radius (R) and length (L) of the
cylindrical shell, shell thickness (Ts), and spherical head thickness (Th).
Figure D.6 Pressure Vessel
The design objective is to minimize total system cost, which is a combination of welding,
material, and forming costs. The total system cost is given by:
F(x) = 0.6224TsRL + 1.7781ThR^2 + 3.1661Ts^2L + 19.84Ts^2R
Meanwhile, the constraints which limit the minimal wall thicknesses Ts and Th are from the
ASME boiler and pressure vessel codes and are given as:
Ts ≥ 0.0193R
Th ≥ 0.00954R
The constraint for the minimum tank volume is written as:
πR^2L + (4/3)πR^3 ≥ 750 × 1728 in^3
The fourth constraint limiting the length of the cylinder already has been accounted for in the variable bounds.
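Assuming the commonly reproduced formulation of this benchmark (Sandgren, 1990)—the cost coefficients and ASME thickness ratios below are assumptions, not restatements of this section—the cost and constraint checks can be sketched as:

```python
import math

def cost(Ts, Th, R, L):
    # Welding, material, and forming costs (assumed benchmark coefficients)
    return (0.6224 * Ts * R * L + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)

def feasible(Ts, Th, R, L):
    g1 = Ts >= 0.0193 * R                   # ASME minimum shell thickness
    g2 = Th >= 0.00954 * R                  # ASME minimum head thickness
    vol = math.pi * R**2 * L + (4 / 3) * math.pi * R**3
    g3 = vol >= 750 * 1728                  # 750 ft^3 expressed in in^3
    g4 = L <= 240                           # 20 ft rolling-equipment limit
    return g1 and g2 and g3 and g4

print(feasible(Ts=1.0, Th=0.6, R=48.0, L=120.0),
      round(cost(1.0, 0.6, 48.0, 120.0), 1))  # → True 7375.3
```

An optimizer would drive the two thickness constraints and the volume constraint toward activity, since each is costly to over-satisfy.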
This problem is typically solved using a mixed discrete/continuous solver; however, only
the continuous solution given by Sandgren (1990) is of interest. In summary, the objective and
constraints—through Equation D.22—are replaced by kriging approximations over the design region of interest within the variable bounds given above.
APPENDIX E
This appendix contains supplemental information for the kriging/DOE study performed
in Chapter 5. Specifically, the process used to cull the data and remove potential outliers is
explained in Section E.1. Through this process, the data is reduced from a total of 7905
models to 7578 models as discussed in Section 5.1.4. The analysis of variance (ANOVA) of
the resulting data set is given in Section E.2; this information supplements the discussion in
Section 5.2.1. Finally, interaction plots for DOE and NSAMP for each problem are given in Section E.3.
E.1 CULLING THE DATA TO REMOVE POTENTIAL OUTLIERS
The process employed in this dissertation to cull the data and remove potential outliers is as follows:
1. cull the data first based on RMSE.RANGE because it is the most important measure of
model accuracy, see Section E.1.1;
2. cull the data next based on MAX.RANGE because it is the next most important
measure of model accuracy, see Section E.1.2; and
3. cull the data for the final time based on CVRMSE.RANGE because it is the least
important measure of model accuracy, see Section E.1.3.
The results of each step are described in each of the following sections.
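The three-step procedure above can be sketched as a simple sequential filter. The numeric cut-off thresholds and records below are hypothetical stand-ins for the dashed lines read off Figures E.1, E.3, and E.5:

```python
# Sequential culling of kriging-accuracy records; thresholds and records are
# hypothetical stand-ins for the cut-offs read off the distribution plots.
records = [
    {"rmse": 0.08, "max": 0.9, "cvrmse": 0.3},
    {"rmse": 0.12, "max": 1.4, "cvrmse": 0.5},
    {"rmse": 580.0, "max": 2.1, "cvrmse": 0.4},  # extreme RMSE.RANGE outlier
    {"rmse": 0.10, "max": 14.0, "cvrmse": 0.6},  # extreme MAX.RANGE outlier
    {"rmse": 0.09, "max": 1.1, "cvrmse": 12.0},  # extreme CVRMSE.RANGE outlier
]

def cull(data, key, cutoff):
    """One step of the culling process: drop records beyond the cut-off."""
    return [r for r in data if r[key] <= cutoff]

step1 = cull(records, "rmse", 1.0)    # 1. cull on RMSE.RANGE first
step2 = cull(step1, "max", 6.0)       # 2. then on MAX.RANGE
step3 = cull(step2, "cvrmse", 2.5)    # 3. finally on CVRMSE.RANGE
print(len(records), len(step1), len(step2), len(step3))  # → 5 4 3 2
```

Because the steps are applied in order of importance, a model culled on RMSE.RANGE never reappears in the later MAX.RANGE or CVRMSE.RANGE tallies.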
To begin the culling process to remove potential outliers, a plot of the distribution of
RMSE.RANGE for each model for each pair of problems is shown in Figure E.1. Because the
majority of the data is scattered near the origin, the points with unusually large RMSE.RANGE
values are considered potential outliers and are thus removed from the data set. As can be seen
in the figure, the RMSE.RANGE for one of the two variable models is almost 600, and there
are two models which have RMSE.RANGE values over 400 in the three variable problems.
In each pair of problems, the data is culled until all of the data points cluster together
and there are few, if any, points which reside by themselves. Points to the right of the dashed
lines in Figure E.1 (i.e., where the hump of the distribution returns to the x-axis) are removed
from the data set. In this manner, the number of models in the two, three, and four variable
problems is reduced from 1785 to 1745, 3150 to 3095, and 2970 to 2951, respectively.
Figure E.1 Distributions of RMSE.RANGE for the Two, Three, and Four Variable Problems
The resulting distribution of RMSE.RANGE of the models of the culled data set is
shown in Figure E.2. Notice in the figure how the points are better distributed with few extreme
points.
Figure E.2 Distributions of RMSE.RANGE After Culling Based on RMSE.RANGE
The next step is to cull the data further based on MAX.RANGE as described in the
next section.
In the second step, models with unusually large values of MAX.RANGE are removed from the data set. The distribution of MAX.RANGE after culling
the data based on RMSE.RANGE is shown in Figure E.3. There are a few potential outliers,
e.g., the group of points in the four variable problems with MAX.RANGE values near 14. The
dashed lines indicate the approximate cut-off point used to cull the data, with all points to the
right of the dashed line removed because they are potential outliers.
Figure E.3 Distributions of MAX.RANGE After Culling Based on RMSE.RANGE
The resulting distribution of MAX.RANGE after all of the potential outliers are removed
is illustrated in Figure E.4. Notice that the remaining points are clustered together nicely with
relatively few extreme points or groups of points. Twenty-eight models are removed from the
two variable problems, thirty-three models are removed from the three variable problems, and
seventy-one models are removed from the four variable problems, leaving a total of 7659
models.
Figure E.4 Distributions of MAX.RANGE After Culling Based on MAX.RANGE
The final step is to cull the data based on CVRMSE.RANGE; this is done in the next
section.
After culling the data based on RMSE.RANGE and MAX.RANGE, the resulting
distribution of CVRMSE.RANGE of the models of the culled data set is shown in Figure E.5.
Based on CVRMSE.RANGE, a few potential outliers still remain, and these are removed from
the data set. The dashed lines show the point at which the data is culled with all the models to
the right of the line removed from the data set. This reduces the number of models in the two,
three, and four variable problems from 1717 to 1690, 3062 to 3049, and 2880 to 2839, respectively.
Figure E.5 Distributions of CVRMSE.RANGE After Culling Based on RMSE.RANGE and MAX.RANGE
The resulting distributions of CVRMSE.RANGE of the models after culling are
illustrated in Figure E.6. Since this is the last step in the culling process to remove any potential
outliers, this distribution represents the final distribution of the values of CVRMSE.RANGE
which are analyzed in the data set. Notice that the data is fairly well clustered with no extreme
points. The final distributions for RMSE.RANGE and MAX.RANGE are shown in Figure E.7
Figure E.6 Final Distributions of CVRMSE.RANGE
The final distribution of RMSE.RANGE of the models in the culled data set after all of
the potential outliers have been removed is shown in Figure E.7. The data look good even
though there is a large group of models in the four variable problems which are slightly removed
from the rest of the data. However, these models are not removed from the data set because
there is such a large grouping. The scatter in the two and three variable problems appears to be
acceptable with only a few extreme points. Regardless, the distributions in Figure E.7 represent the final distributions of RMSE.RANGE of the models in the data set.
Figure E.7 Final Distributions of RMSE.RANGE
The final distribution of MAX.RANGE of the models in the culled data set is shown in
Figure E.8. The data are well clustered, and there are no more potential outliers which need to be removed. These distributions represent the final distributions of
MAX.RANGE of the 7578 models in the data set which are to be analyzed.
Figure E.8 Final Distributions of MAX.RANGE
Now that potential outliers have been removed from the data set, it is ready to be analyzed.
The analysis of variance is discussed in the next section; this information supplements the discussion in Section 5.2.1.
E.2 ANALYSIS OF VARIANCE RESULTS
The analysis of variance (ANOVA) results for the culled data set are summarized as
follows. The software package S-Plus (MathSoft, 1997) is employed to perform the
necessary analyses; Pr(F) values of 10% or less are considered significant. A discussion of the results is given in Section 5.2.1.
ANOVA FOR 2 VARIABLE PROBLEMS
RMSE.RANGE
Df Sum of Sq Mean Sq F Value Pr(F)
doe 12 0.790807 0.0659006 72.3791 0.0000000
nsamp 7 0.231039 0.0330056 36.2503 0.0000000
eqn 6 3.236058 0.5393430 592.3647 0.0000000
corfcn 4 0.109507 0.0273766 30.0680 0.0000000
doe:nsamp 31 0.414692 0.0133772 14.6923 0.0000000
doe:eqn 72 0.729166 0.0101273 11.1229 0.0000000
doe:corfcn 48 0.137153 0.0028574 3.1383 0.0000000
nsamp:eqn 42 0.225910 0.0053788 5.9076 0.0000000
nsamp:corfcn 28 0.017491 0.0006247 0.6861 0.8900391
eqn:corfcn 24 0.076081 0.0031700 3.4817 0.0000000
Residuals 1415 1.288346 0.0009105
MAX.RANGE
Df Sum of Sq Mean Sq F Value Pr(F)
doe 12 39.2740 3.27283 132.978 0.0000000
nsamp 7 13.2717 1.89596 77.035 0.0000000
eqn 6 200.1000 33.35000 1355.043 0.0000000
corfcn 4 1.9708 0.49271 20.019 0.0000000
doe:nsamp 31 7.8845 0.25434 10.334 0.0000000
doe:eqn 72 53.5926 0.74434 30.243 0.0000000
doe:corfcn 48 2.8798 0.05999 2.438 0.0000003
nsamp:eqn 42 11.5988 0.27616 11.221 0.0000000
nsamp:corfcn 28 0.1757 0.00627 0.255 0.9999754
eqn:corfcn 24 1.0262 0.04276 1.737 0.0150119
Residuals 1415 34.8256 0.02461
CVRMSE.RANGE
Df Sum of Sq Mean Sq F Value Pr(F)
doe 12 0.49719 0.041432 7.922 0.0000000
nsamp 7 0.25892 0.036989 7.073 0.0000000
eqn 6 13.91795 2.319658 443.548 0.0000000
corfcn 4 38.94373 9.735933 1861.635 0.0000000
doe:nsamp 31 0.16915 0.005457 1.043 0.4020019
doe:eqn 72 0.81794 0.011360 2.172 0.0000001
doe:corfcn 48 1.13349 0.023614 4.515 0.0000000
nsamp:eqn 42 0.37403 0.008905 1.703 0.0036142
nsamp:corfcn 28 3.51757 0.125628 24.022 0.0000000
eqn:corfcn 24 13.38914 0.557881 106.674 0.0000000
Residuals 1415 7.40013 0.005230
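The tables above report standard fixed-effects ANOVA quantities. As a minimal illustration of how an F value arises (the dissertation's tables were produced with S-Plus; the data here are made up), a one-factor sketch with numpy:

```python
import numpy as np

# One-factor ANOVA sketch: does the factor level shift the mean response?
groups = [np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])]

grand = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand)**2 for g in groups)   # factor Sum of Sq
ss_within = sum(((g - g.mean())**2).sum() for g in groups)         # residual Sum of Sq
df_between = len(groups) - 1
df_within = sum(len(g) for g in groups) - len(groups)

# F Value = (factor mean square) / (residual mean square)
F = (ss_between / df_between) / (ss_within / df_within)
print(round(F, 2))  # → 13.5
```

Pr(F) is then the upper-tail probability of this F value under the F(df_between, df_within) distribution; values of 10% or less are treated as significant in the tables above.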
ANOVA FOR 3 VARIABLE PROBLEMS
RMSE.RANGE
Df Sum of Sq Mean Sq F Value Pr(F)
doe 14 0.396151 0.0282965 39.6287 0.0000000
corfcn 4 0.468482 0.1171205 164.0250 0.0000000
nsamp 9 0.517310 0.0574789 80.4980 0.0000000
eqn 9 4.548163 0.5053515 707.7348 0.0000000
doe:corfcn 56 0.200964 0.0035886 5.0258 0.0000000
doe:nsamp 39 0.281790 0.0072254 10.1190 0.0000000
doe:eqn 126 0.927631 0.0073622 10.3105 0.0000000
corfcn:nsamp 36 0.025811 0.0007170 1.0041 0.4624071
corfcn:eqn 36 0.121429 0.0033730 4.7238 0.0000000
nsamp:eqn 81 0.576383 0.0071158 9.9656 0.0000000
Residuals 2641 1.885782 0.0007140
MAX.RANGE
Df Sum of Sq Mean Sq F Value Pr(F)
doe 14 63.6899 4.54928 95.4241 0.0000000
corfcn 4 16.9380 4.23449 88.8212 0.0000000
nsamp 9 18.6596 2.07328 43.4885 0.0000000
eqn 9 356.9195 39.65772 831.8472 0.0000000
doe:corfcn 56 6.1304 0.10947 2.2962 0.0000002
doe:nsamp 39 37.2497 0.95512 20.0343 0.0000000
doe:eqn 126 81.4085 0.64610 13.5524 0.0000000
corfcn:nsamp 36 1.7057 0.04738 0.9938 0.4796303
corfcn:eqn 36 3.4569 0.09603 2.0142 0.0003410
nsamp:eqn 81 32.5882 0.40232 8.4390 0.0000000
Residuals 2641 125.9078 0.04767
CVRMSE.RANGE
Df Sum of Sq Mean Sq F Value Pr(F)
doe 14 0.33509 0.02393 5.706 0.0000000
corfcn 4 49.62834 12.40709 2957.611 0.0000000
nsamp 9 0.39570 0.04397 10.481 0.0000000
eqn 9 12.65203 1.40578 335.111 0.0000000
doe:corfcn 56 3.03595 0.05421 12.923 0.0000000
doe:nsamp 39 0.35440 0.00909 2.166 0.0000416
doe:eqn 126 1.86217 0.01478 3.523 0.0000000
corfcn:nsamp 36 0.47594 0.01322 3.152 0.0000000
corfcn:eqn 36 18.17634 0.50490 120.358 0.0000000
nsamp:eqn 81 0.38010 0.00469 1.119 0.2222467
Residuals 2641 11.07891 0.00419
ANOVA FOR 4 VARIABLE PROBLEMS
RMSE.RANGE
Df Sum of Sq Mean Sq F Value Pr(F)
doe 14 0.92494 0.066067 51.370 0.0000000
nsamp 13 0.28108 0.021622 16.812 0.0000000
eqn 8 13.86528 1.733160 1347.593 0.0000000
corfcn 4 0.08302 0.020756 16.139 0.0000000
doe:nsamp 38 0.20198 0.005315 4.133 0.0000000
doe:eqn 111 6.72750 0.060608 47.125 0.0000000
doe:corfcn 56 0.54880 0.009800 7.620 0.0000000
nsamp:eqn 104 0.97659 0.009390 7.301 0.0000000
nsamp:corfcn 52 0.01214 0.000233 0.182 1.0000000
eqn:corfcn 32 0.22220 0.006944 5.399 0.0000000
Residuals 2406 3.09439 0.001286
MAX.RANGE
Df Sum of Sq Mean Sq F Value Pr(F)
doe 14 252.069 18.0049 120.648 0.0000000
nsamp 13 41.469 3.1899 21.375 0.0000000
eqn 8 2551.345 318.9182 2137.007 0.0000000
corfcn 4 4.116 1.0290 6.895 0.0000162
doe:nsamp 38 61.576 1.6204 10.858 0.0000000
doe:eqn 111 1007.096 9.0729 60.796 0.0000000
doe:corfcn 56 5.537 0.0989 0.662 0.9748813
nsamp:eqn 104 161.063 1.5487 10.377 0.0000000
nsamp:corfcn 52 0.524 0.0101 0.068 1.0000000
eqn:corfcn 32 5.331 0.1666 1.116 0.2994217
Residuals 2406 359.062 0.1492
CVRMSE.RANGE
Df Sum of Sq Mean Sq F Value Pr(F)
doe 14 0.97608 0.06972 17.785 0.0000000
nsamp 13 0.18734 0.01441 3.676 0.0000082
eqn 8 12.16207 1.52026 387.799 0.0000000
corfcn 4 42.77639 10.69410 2727.929 0.0000000
doe:nsamp 38 0.21336 0.00561 1.432 0.0426066
doe:eqn 111 4.12844 0.03719 9.487 0.0000000
doe:corfcn 56 4.24417 0.07579 19.333 0.0000000
nsamp:eqn 104 0.47239 0.00454 1.159 0.1343197
nsamp:corfcn 52 0.08734 0.00168 0.428 0.9998909
eqn:corfcn 32 17.00269 0.53133 135.537 0.0000000
Residuals 2406 9.43206 0.00392
E.3 INTERACTION OF DOE AND NSAMP
In addition to looking at how well individual sample sizes in a design affect the accuracy
of the resulting kriging model as is done in Section 4.4, it also is interesting to look at how the
number of sample points in a design affects the accuracy of the model to try to determine the
relationship between sample size and model accuracy. As the number of sample points
increases, the approximation should become more accurate, but how many points are enough
for designs with variable sample sizes? This is examined in this section, particularly in Figure E.9 - Figure E.11.
Of the designs utilized in this dissertation, only six designs allow variable sample sizes:
Hammersley sampling sequence (hamss) designs, minimax Latin hypercube (mnmxl) designs,
maximin Latin hypercube (mxmnl) designs, optimal Latin hypercube designs (oplhd), random
Latin hypercube designs (rnlhd), and uniform designs (unifd). A plot of the average effect of
each type of DOE across the sample range for each pair of problems is given in Figure E.9 -
Figure E.11. Recall the sample sizes used for each pair of problems.
(Mean of RMSE.RANGE versus NSAMP = 7-14 for each DOE: unifd, oplhd, rnlhd, hamss, mxmnl, mnmxl)
Figure E.9 Effect of DOE and Sample Size on RMSE.RANGE for the Two Variable
Problems
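The curves in Figure E.9 - Figure E.11 are cell means of RMSE.RANGE over each (DOE, NSAMP) combination. A minimal sketch of that aggregation, with made-up records:

```python
from collections import defaultdict

# Hypothetical accuracy records: (doe, nsamp, rmse.range)
records = [
    ("hamss", 7, 0.14), ("hamss", 7, 0.12), ("hamss", 14, 0.05),
    ("unifd", 7, 0.13), ("unifd", 14, 0.06), ("unifd", 14, 0.04),
]

# Accumulate sums and counts per (DOE, NSAMP) cell, then average
cells = defaultdict(lambda: [0.0, 0])
for doe, nsamp, rmse in records:
    cells[(doe, nsamp)][0] += rmse
    cells[(doe, nsamp)][1] += 1

means = {k: s / n for k, (s, n) in cells.items()}
print(round(means[("hamss", 7)], 2), round(means[("unifd", 14)], 2))  # → 0.13 0.05
```

Plotting these cell means against NSAMP, one line per DOE, reproduces the style of interaction plot shown in the figures.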
In Figure E.9, the general observed trend is that the accuracy of the resulting kriging
model (as measured by RMSE.RANGE) improves as the sample size increases regardless of
the type of design. This is of no surprise since additional sample points should yield a more
accurate model because there is more information on which to base the approximation. The
same trends can be observed in Figure E.10 and Figure E.11 for the three and four variable
problems, respectively. However, it appears that a wide enough spread of sample sizes has not
been examined to draw any conclusions regarding the minimum number of samples which is needed to obtain an accurate approximation.
(Mean of RMSE.RANGE versus NSAMP = 13-25 for each DOE)
Figure E.10 Effect of DOE and Sample Size on RMSE.RANGE for the Three
Variable Problems
(Mean of RMSE.RANGE versus NSAMP = 20-33 for each DOE)
Figure E.11 Effect of DOE and Sample Size on RMSE.RANGE for the Four Variable
Problems
APPENDIX F
This appendix contains detailed supplemental information for the GAA example
problem in Chapter 7. Section F.1 contains a brief description of the design variables used in
the GAA example. The sample points, data, and MLE theta parameters for the kriging
metamodels are given in Section F.2. Analysis of variance of the sample data is offered in
Section F.3; it is used to generate the Pareto plots in Section 7.6.3. Additional design scenarios
for exercising the PPCEM product platform compromise DSP are presented and discussed
in Section F.4, and convergence histories for the individually designed benchmark aircraft for
Scenarios 1-3 are plotted in Section F.5. Information for the GAA product variety tradeoff
study discussed in Section 7.6.3 is included in Section F.6. Justification of the design variable
weights used in the computation of NCI is explained in Section F.6.1, while Sections F.6.2 and
F.6.3 contain all of the results—instantiations of each aircraft, responses, and corresponding
deviation function values—of the product variety tradeoff study for Scenarios 2 and 3,
respectively. Finally, sample DSIDES files for the PPCEM platform and the individual aircraft conclude this appendix.
F.1 GAA DESIGN VARIABLE DESCRIPTION
The intent in this section is to provide the reader with a brief description of the design
variables considered in the GAA example—cruise speed, aspect ratio, propeller diameter, wing
loading, engine activity factor, and seat width—and why they are important in aircraft design.
Where applicable, typical ranges of the design variables for current GAA are provided to justify the bounds used in this study.
Cruise speed (CSPD): General Aviation aircraft are typically subsonic aircraft with cruise
speed in the range of 0 < M < 0.6, where M is the Mach number. Based on the GAA
Design Competition guidelines (NASA and FAA, 1994), the aircraft should be capable
of maintaining a cruise speed, CSPD, between 150 and 300 kts. Therefore, the cruise
speed Mach number varies from Mach 0.24 to Mach 0.48 in order to find the “best”
velocity at which to fly the cruise portion of the mission.
Aspect ratio (AR): Aspect ratio of a wing is defined as the span of the wing squared
divided by its area; span is the distance from wing tip to wing tip measured
perpendicular to the plane (Raymer, 1992). Raymer states that the Wright brothers
were the first to investigate the effect of aspect ratio on flight performance, observing
that a long, skinny wing (high aspect ratio) has less drag than a short, fat wing (low
aspect ratio). In general, wings with a high aspect ratio tend to have lower induced drag
and therefore larger values of (L/D)max. Moreover, a high aspect ratio wing tends to
have high lift-curve slopes. High lift curve slopes have two consequences:
1. at low speeds, the approach attitude is conducive to good runway visibility from the
cockpit, and
Also, high aspect ratio wings usually weigh more than low aspect ratio wings. In
general, General Aviation aircraft have aspect ratios ranging from 7.6 (single engine) to
9.2 (twin turboprop). In this investigation, aspect ratios ranging from 7-11 are
investigated.
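The drag argument can be made concrete with the standard induced-drag relation CDi = CL^2/(π·e·AR), which is not part of this section; the Oswald efficiency factor e below is an assumed value:

```python
import math

def induced_drag_coefficient(CL, AR, e=0.8):
    """Induced drag falls as aspect ratio rises (standard lifting-line result).

    e is an assumed Oswald efficiency factor, not a value from the text.
    """
    return CL**2 / (math.pi * e * AR)

# A long, skinny wing (AR = 11) versus a short, fat wing (AR = 7)
cdi_low_ar = induced_drag_coefficient(CL=0.5, AR=7)
cdi_high_ar = induced_drag_coefficient(CL=0.5, AR=11)
print(cdi_high_ar < cdi_low_ar)  # → True
```

This is the tradeoff the optimizer must balance: the high aspect ratio wing cuts induced drag but, as noted above, carries a weight penalty.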
Wing loading (WL): Wing loading is the ratio of the aircraft weight to the area of the wing
and affects stall speed, climb rate, takeoff and landing distances, and turn performance;
wing loading also has a very strong effect upon the takeoff gross weight of an aircraft
(Raymer, 1992). The larger the wing, the lower the wing loading. Raymer notes that
the larger wing helps reduce takeoff/landing field lengths; however, the additional drag
and empty weight due to the larger wing will increase takeoff gross weight in order to
perform the mission. A larger wing also is liable to affect operating costs because more
fuel is consumed to provide the necessary lift to achieve the mission. Representative
wing loadings of General Aviation aircraft vary between 17 (single engine) and 26 (twin
engine); the range utilized in the GAA example is 17 to 25.
For the GAA flight envelope, i.e., cruise speeds of Mach 0.2 to Mach 0.6, piston and
turboprop engines are the most widely used engines. In addition, a piston engine has a low fuel
consumption compared to a turboprop engine. Although the engine weight of a piston engine
(1.1 - 1.75 lb/take-off hp) is heavier than that of a turboprop (.35 - .55 lb/take-off hp), the acquisition
cost of the piston engine ($25 - $50/hp) is much cheaper than the turboprop engine ($60 -
$100/hp). General Aviation aircraft pilots are primarily concerned about the direct operating
cost (DOC) of the aircraft. In general practice, piston engines are very competitive for GAA
which require less than 500 hp compared to turboprop engines which are more suitable for a
powerful GAA engine of more than 500 hp. For this work, the engine selected for the GAA
design is a 350 hp piston, reciprocating, fuel-injected engine.
Engine activity factor (AF): To help size the engine, the engine activity factor is used; it is
a measure of the amount of power being absorbed by the propeller (Raymer, 1992).
Raymer states that engine activity factors range from about 90 - 200. For example, a
typical light-aircraft, such as a general aviation aircraft, has an activity factor of 100. A
range of 85-110 is examined in this study where 110 is the value of the baseline design.
Seat width (WS): The seat width is used to size the width of the cabin, where aisle width is taken as a fixed parameter and 12 in. is the total thickness of the
fuselage walls. As seat width increases, cabin width increases and the surface area of
the fuselage increases causing an increased drag. Seat widths for an economy sized
aircraft vary between 17 in and 22 in, whereas seat widths in a small aircraft vary
between 16 in and 18 in (Raymer, 1992). In this study, the baseline value of 20 in is
used as the upper bound for seat width while the lower bound is taken as 14 in for
comfort reasons.
These six variables constitute the control factors for the General Aviation aircraft family.
The bounds on the design variables define the design space in which the kriging models are fit.
Table F.1 contains the 64 sample points from the randomized orthogonal array used in
this study, which are scaled to fit the design space based on the bounds listed in the beginning of
Section 7.2. GASP is invoked using these 64 points for each aircraft; the response values are
listed in Table F.2 for the one passenger (two seater) aircraft, in Table F.3 for the three
passenger (four seater) aircraft, and in Table F.4 for the five passenger (six seater) aircraft.
Kriging metamodels are constructed for the mean, μ, and standard deviation, σ, of each
response, yielding a total of 18 models for the nine GAA responses. For the GAA example
problem, it is assumed that the demand for each aircraft is uniform; therefore, the scale factor—
the number of passengers—is assumed to be uniformly distributed and so are the corresponding
responses. In light of this, the mean and standard deviation of each response are computed assuming a uniform distribution between the extreme responses; in particular, the standard deviation is:
• Std. Dev.: σy j,i = (y j,i,1 - y j,i,5)/√12 , j = {noise, wemp, ..., ldmax}, i = {1, 2, ..., n} [F.2]
So, for example, the mean and standard deviation of the direct operating cost, DOC, for the 3rd sample point are computed from the DOC values in Tables F.2 - F.4 for that point.
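Treating each response as uniformly distributed between its extreme (two seater and six seater) values gives the mean as the midpoint and the standard deviation as the range over sqrt(12). A sketch with hypothetical DOC values:

```python
import math

def uniform_moments(y_low, y_high):
    """Mean and standard deviation of a response assumed uniform on [y_low, y_high]."""
    mu = (y_low + y_high) / 2
    sigma = abs(y_high - y_low) / math.sqrt(12)
    return mu, sigma

# Hypothetical DOC values for the two seater and six seater at one sample point
mu, sigma = uniform_moments(59.3, 64.1)
print(round(mu, 2), round(sigma, 3))  # → 61.7 1.386
```

These two moments per response are exactly what the 18 kriging metamodels (a μ model and a σ model for each of the nine responses) are fit to.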
To fit the kriging models to the data, the sample points listed in Table F.1 are scaled to
be between [0,1], and the simulated annealing algorithm and the mlefinder.f code given in Section
A.2.1 are invoked to find the maximum likelihood estimates for each θk in the Gaussian
correlation function. The resulting MLE values for the theta parameters, θk, used to fit each
kriging model are listed in Table F.5. These parameters produce the “best” kriging model for
the sample data and are used within the GAA Compromise DSP and DSIDES to facilitate the exploration of the design space.
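The MLE step fits each θk by maximizing the likelihood of the sample data under the Gaussian correlation function R(xi, xj) = exp(-Σk θk(xik - xjk)^2). A minimal one-dimensional sketch of the concentrated log-likelihood that a routine like mlefinder.f maximizes, assuming the constant-trend kriging form (the grid search below stands in for the simulated annealing used in the text):

```python
import numpy as np

def concentrated_log_likelihood(theta, X, y):
    """Concentrated ln-likelihood for constant-trend kriging with a
    Gaussian correlation function (small nugget added for stability)."""
    n = len(y)
    d2 = (X[:, None, :] - X[None, :, :])**2          # squared distances
    R = np.exp(-(d2 * theta).sum(axis=2)) + 1e-6 * np.eye(n)
    Rinv = np.linalg.inv(R)
    ones = np.ones(n)
    beta = (ones @ Rinv @ y) / (ones @ Rinv @ ones)  # GLS constant trend
    resid = y - beta
    sigma2 = (resid @ Rinv @ resid) / n
    sign, logdet = np.linalg.slogdet(R)
    return -0.5 * (n * np.log(sigma2) + logdet)

# Tiny 1-D example: pick the theta with the largest likelihood on a grid
X = np.array([[0.0], [0.25], [0.5], [0.75], [1.0]])
y = np.sin(2 * np.pi * X[:, 0])
thetas = [0.1, 1.0, 10.0, 100.0]
best = max(thetas, key=lambda t: concentrated_log_likelihood(np.array([t]), X, y))
print(best in thetas)  # → True
```

Larger θk values make the correlation decay faster in dimension k, so the fitted θk roughly measure how active each design variable is in the response.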
Table F.1 64 Point Orthogonal Array Used to Build Kriging Metamodels for GAA
Example Problem
9 1 0 7 6 0 1 41 3 0 4 3 4 2
10 1 2 0 7 4 7 42 3 2 5 2 0 5
11 1 3 3 0 3 6 43 3 3 1 5 2 3
12 1 1 2 1 2 0 44 3 1 6 4 3 4
13 1 7 4 2 6 4 45 3 7 7 7 5 0
14 1 5 5 3 5 3 46 3 5 0 6 6 6
15 1 4 1 4 7 5 47 3 4 3 1 1 7
16 1 6 6 5 1 2 48 3 6 2 0 7 1
17 4 0 2 4 5 7 49 6 0 6 1 6 5
18 4 2 3 5 6 1 50 6 2 1 0 5 2
19 4 3 0 2 1 0 51 6 3 5 7 7 4
20 4 1 7 3 7 6 52 6 1 4 6 1 3
21 4 7 6 0 4 3 53 6 7 2 5 0 6
22 4 5 1 1 0 4 54 6 5 3 4 4 0
23 4 4 5 6 2 2 55 6 4 0 3 3 1
24 4 6 4 7 3 5 56 6 6 7 2 2 7
25 5 0 3 2 7 3 57 7 0 1 7 1 6
26 5 2 2 3 1 4 58 7 2 6 6 7 0
27 5 3 7 4 6 2 59 7 3 4 1 5 1
28 5 1 0 5 5 5 60 7 1 5 0 6 7
29 5 7 1 6 3 7 61 7 7 3 3 2 5
30 5 5 6 7 2 1 62 7 5 2 2 3 2
31 5 4 4 0 0 0 63 7 4 7 5 4 4
32 5 6 5 1 4 6 64 7 6 0 4 0 3
Table F.2 Response Values for 1 Passenger GAA for 64 Point OA
PAX NOISE WEMP DOC ROUGH WFUEL PURCH RANGE VCRMX LDMAX
1 72.557 1884.54 63.124 2.028 447.596 41403.1 2983 184.628 16.073
1 73.834 1894.91 60.672 2.236 438.478 42613.8 2823 198.503 17.091
1 73.001 1889.44 59.289 1.972 450.704 42457.9 2164 199.883 16.484
1 73.192 1816.08 57.814 1.905 523.536 41264.1 2167 205.805 15.911
1 73.484 1878.83 57.027 2.318 464.365 42609 2377 205.608 19.12
1 73.301 1955.33 60.303 2.075 384.01 43843.4 2119 198.371 17.099
1 73.638 1953 61.177 2.217 382.699 43714.4 2602 196.692 17.403
1 72.766 1865.21 57.705 2.299 478.156 42000.3 2725 200.446 19.019
1 73.893 1807.07 65.091 1.948 529.699 41357.2 2252 208.777 15.795
1 72.624 1906.7 70.292 1.803 431.643 42363.3 1965 193.086 15.678
1 73.107 1952.68 72.466 2.236 384.506 43203.4 2983 190.378 17.492
1 72.97 1835.46 68.823 2.177 502.377 41130.1 3318 196.206 17.113
1 73.272 1943.61 68.558 2.346 397.792 43572.4 2649 198.129 19.156
1 73.468 1904.32 67.194 2.258 436.036 42976.2 2513 201.539 18.342
1 72.78 1913.85 69.355 2.084 427.426 42759.6 2458 196.092 17.406
1 73.67 1879.05 64.918 2.228 463.882 42727.6 2181 206.98 18.6
1 72.994 1912.96 83.018 1.869 419.247 42581.9 2543 193.104 15.587
1 73.149 1834.45 82.796 2.058 505.538 41693.3 2421 204.48 17.446
1 72.571 1842.57 95.928 2.17 499.555 41057 3100 193.176 18.541
1 73.846 1926.6 83.071 2.026 403.906 43285.7 2444 196.817 16.429
1 73.614 1953.73 83.177 2.49 386.029 43744.4 2946 196.706 20.256
1 72.758 1924.92 87.896 2.252 416.307 42549.1 3157 190.261 19.087
1 73.507 1858.01 83.004 2.105 483.301 42326.9 2139 207.308 18.119
1 73.326 1913.58 83.365 2.059 428.406 43270.5 1903 203.93 18.246
1 73.129 1868.49 82.837 2.042 466.84 41942 2963 196.348 16.584
1 72.988 1885.51 83.241 2.068 452.136 42107 2790 195.578 17.397
1 73.847 1875.17 83.152 2.155 461.134 42643.5 2273 204.065 18.024
1 72.603 1878.8 80.39 1.877 456.447 41852.1 2371 193.308 16.405
1 72.797 1957.43 82.199 2.04 384.397 43591.6 1999 195.558 18.52
1 73.688 1849.04 83.168 2.127 492.969 42384 1945 210.852 18.681
1 73.283 1879.81 82.696 2.403 461.839 42090.5 3366 196.674 19.423
1 73.449 1980.06 83.443 2.336 358.95 44141.9 2804 194.624 19.267
1 73.517 1801.96 78.834 1.991 537.274 41161.6 2482 206.886 16.218
1 73.318 1912.87 83.034 2.017 425.578 42840.1 2483 197.619 16.52
1 73.663 1947.52 83.17 2.106 390.13 43541.3 2538 197.102 16.985
1 72.775 1840.03 82.578 2.099 499.966 41220.9 3155 195.949 17.061
1 72.529 1917.35 80.334 2.363 427.24 42607.3 3098 192.691 19.898
1 73.82 1961.03 83.205 2.386 376.73 43757 2820 195.237 18.772
1 73.001 1863.4 80.381 2.009 481.741 42234.9 2029 204.399 17.493
1 73.146 1902.26 81.269 2.113 443.218 42984.5 2074 203.254 18.297
1 73.315 1841.93 82.827 2.029 494.783 41588 2858 200.196 16.353
1 73.485 1910.72 82.979 2.135 426.787 42748.5 2905 196.903 17.149
1 72.807 1862.12 80.866 2.026 477.791 41802.7 2390 198.855 17.479
1 73.676 1876.65 83.17 2.026 458.394 42417.1 2470 201.639 16.468
1 73.86 1854.67 83.365 2.209 487.486 42671.1 1792 212.367 19.392
1 72.592 1923.47 80.04 1.985 416.958 42885.9 2066 195.418 17.655
1 73.119 1969.26 83.205 2.212 368.491 43554.8 2991 190.894 17.95
1 72.92 1911.33 83.139 2.446 431.433 42674.6 3357 194.807 19.94
1 73.646 1913.87 82.961 2.076 418.014 42854.3 2643 194.644 16.571
1 72.745 1886.34 82.812 2.225 452.166 41924.3 3213 191.292 18.296
1 73.506 1883.04 83.151 1.981 454.951 42843.3 1930 205.321 17.371
1 73.346 1845.59 82.848 1.941 491.884 41826 2264 203.683 16.698
1 72.992 1946.02 83.284 2.119 396.058 43394.9 2188 196.527 19.065
1 73.135 1855.94 82.96 2.247 488.335 42080.8 2519 203.967 19.428
1 72.571 1860.63 88.447 2.163 480.849 41581.7 2786 195.383 18.993
1 73.829 1991.8 83.512 2.278 344.412 44500.9 2458 195.942 19.027
1 72.854 1874.45 80.302 1.738 459.146 41829.4 2055 194.868 15.6
1 73.681 1827.34 83.056 2.059 510.184 41944.7 2119 208.918 17.747
1 73.278 1881.32 82.916 2.298 458.798 42314.1 3140 198.097 18.862
1 73.453 1964.89 83.26 2.144 367.103 43625.5 2625 190.495 17.111
1 73.125 1950.06 83.457 2.258 391.466 43564.4 2533 196.797 19.744
1 72.959 1896.25 83.161 2.287 446.543 42462.4 2925 197.121 19.572
1 73.854 1906.05 83.356 2.119 430.593 43237.6 2085 203.73 18.254
1 72.588 1894.72 94.83 2.132 447.424 42112.4 2463 193.412 19.53
Table F.3 Response Values for 3 Passenger GAA for 64 Point OA
PAX NOISE WEMP DOC ROUGH WFUEL PURCH RANGE VCRMX LDMAX
3 72.557 1918.88 64.063 2.014 412.124 41922.6 2822 181.831 15.754
3 73.834 1928.1 61.734 2.226 404.586 43125.8 2704 195.66 16.756
3 73 1925.67 60.593 1.958 413.277 43024.6 2108 196.887 16.119
3 73.192 1849.37 59.014 1.895 489.025 41740.6 2111 202.305 15.552
3 73.483 1910.54 58.096 2.312 431.878 43105 2328 202.889 18.708
3 73.3 1993.62 61.557 2.063 344.707 44474.5 2072 195.605 16.728
3 73.638 1990.4 62.354 2.208 344.484 44336 2556 194.113 17.049
3 72.766 1895.28 58.879 2.295 447.241 42447.7 2661 197.602 18.622
3 73.892 1838.43 66.418 1.939 497.408 41828.8 2206 205.824 15.454
3 72.623 1946.14 72.437 1.785 390.773 42978 1907 189.914 15.33
3 73.107 1990.4 74.287 2.227 345.985 43820.7 2828 187.784 17.146
3 72.97 1860.74 70.579 2.172 476.278 41462.4 3174 193.144 16.747
3 73.271 1978.84 70.129 2.339 361.849 44132.3 2597 195.285 18.759
3 73.468 1938.46 68.667 2.253 401.155 43526.4 2464 198.914 17.958
3 72.779 1949.21 71.166 2.074 391.06 43312 2397 193.19 17.033
3 73.67 1912.1 66.321 2.222 430.041 43231.6 2137 203.902 18.192
3 72.993 1952.37 83.139 1.854 378.457 43215.9 2482 190.248 15.267
3 73.149 1863.17 82.996 2.053 474.828 42092.9 2365 201.257 17.06
3 72.571 1872.74 95.278 2.16 466.836 41493.6 3020 190.278 18.157
3 73.846 1964.34 83.115 2.012 364.829 43903.6 2401 194.125 16.093
3 73.619 1986.98 83.418 2.485 351.476 44266.9 2839 193.94 19.856
3 72.758 1960.12 90.09 2.24 378.344 43101.4 3079 187.519 18.703
3 73.506 1888.49 83.148 2.099 451.216 42770.1 2092 204.137 17.715
3 73.326 1948.95 83.382 2.048 391.932 43821.9 1857 200.84 17.834
3 73.13 1900.19 82.997 2.031 432.777 42421.6 2828 193.559 16.253
3 72.988 1919.09 83.139 2.054 416.669 42603.4 2722 192.414 17.035
3 73.847 1904.74 83.175 2.15 429.793 43066.5 2233 200.904 17.64
3 72.603 1915.58 82.05 1.862 418.25 42419.1 2307 190.3 16.054
3 72.804 1995.43 82.24 2.031 345.096 44186.8 1944 192.387 18.109
3 73.688 1880.24 83.291 2.122 460.288 42845.4 1903 207.68 18.256
3 73.282 1909.54 82.83 2.397 431.04 42537.9 3223 193.995 19.046
3 73.448 2017.81 83.547 2.325 318.929 44757.7 2752 191.832 18.881
3 73.517 1831.63 80.707 1.979 506.32 41587.6 2431 203.781 15.879
3 73.318 1951 83.256 2.004 385.524 43458.8 2427 194.826 16.17
3 73.663 1986.32 83.407 2.095 349.085 44164.7 2487 194.094 16.631
3 72.775 1868.5 82.325 2.092 469.152 41609.5 3077 192.754 16.696
3 72.529 1949.89 80.762 2.355 392.093 43104.4 3025 189.898 19.497
3 73.82 1997.35 83.436 2.377 338.826 44343.6 2692 192.487 18.404
3 73.001 1895.96 82.577 2.001 447.565 42715.1 1977 201.172 17.1
3 73.145 1937.58 83.201 2.103 406.199 43523.3 2024 200.012 17.887
3 73.315 1872.96 82.792 2.016 461.365 42056.1 2798 197.414 16.013
3 73.484 1946.57 83.101 2.124 388.517 43312 2830 194.004 16.795
3 72.807 1894.54 80.836 2.016 444.05 42278.5 2327 195.714 17.099
3 73.676 1912.21 83.312 2.015 421.139 42973.4 2421 198.686 16.121
3 73.859 1885.3 83.514 2.207 455.039 43121.8 1758 209.18 18.952
3 72.592 1960.38 80.721 1.972 378.712 43450.6 2009 192.191 17.267
3 73.119 2007.83 83.126 2.196 327.766 44183.8 2830 188.152 17.594
3 72.92 1942.18 83.328 2.439 399.149 43141.9 3239 192.065 19.555
3 73.646 1950.27 83.107 2.064 379.303 43436.4 2520 191.91 16.243
3 72.745 1918.93 82.698 2.216 417.39 42417.8 3057 188.503 17.936
3 73.506 1918.74 83.281 1.972 418.152 43391.7 1888 202.094 16.983
3 73.346 1879.75 82.826 1.926 456.426 42345.3 2210 200.605 16.336
3 72.991 1983.83 83.029 2.106 357.035 44007.8 2129 193.699 18.646
3 73.135 1886.05 83.082 2.243 455.805 42523.7 2462 200.983 19.016
3 72.57 1892.09 91.357 2.155 447.189 42041.9 2714 192.375 18.597
3 73.829 2030.56 83.605 2.267 303.987 45153 2414 193.317 18.635
3 72.854 1912.5 81.33 1.721 419.482 42407.4 1996 191.586 15.261
3 73.681 1852.94 83.162 2.056 483.12 42293.3 2078 205.855 17.363
3 73.277 1912.28 83.053 2.291 426.385 42786.4 3031 195.386 18.491
3 73.453 2002.79 83.47 2.132 326.876 44236.6 2495 187.706 16.778
3 73.125 1986.87 83.672 2.25 352.966 44155.4 2475 193.938 19.327
3 72.958 1928.88 83.146 2.279 411.42 42951.6 2858 194.113 19.173
3 73.854 1939.11 83.368 2.111 396.482 43739.5 2046 200.636 17.859
3 72.588 1929.07 94.311 2.121 411.6 42620 2394 190.201 19.111
Table F.4 Response Values for 5 Passenger GAA for 64 Point OA
PAX NOISE WEMP DOC ROUGH WFUEL PURCH RANGE VCRMX LDMAX
5 72.556 1944.19 64.584 1.987 386.305 42380.4 2737 181.049 15.659
5 73.833 1950.23 62.147 2.2 382.171 43533.8 2642 194.847 16.656
5 72.999 1954.13 61.322 1.928 384.146 43517.5 2082 195.301 15.95
5 73.191 1872.07 59.556 1.868 465.777 42127.5 2089 200.992 15.421
5 73.483 1927.39 58.497 2.286 414.76 43401.8 2312 201.983 18.575
5 73.3 2024.38 62.331 2.032 313.32 45037.5 2048 194.222 16.537
5 73.637 2018.16 62.986 2.179 316.304 44838.8 2534 192.824 16.892
5 72.765 1909.15 59.223 2.269 433.14 42679.8 2645 196.727 18.522
5 73.892 1859.15 66.916 1.91 476.335 42179.9 2191 204.558 15.351
5 72.622 1976.38 73.757 1.757 359.64 43487.5 1876 188.102 15.144
5 73.106 2017.68 75.158 2.199 318.293 44308.5 2737 186.542 17.006
5 72.969 1881.11 70.826 2.143 455.724 41870.1 3116 193.144 16.726
5 73.271 2001.28 70.852 2.31 339.084 44541 2575 194.285 18.6
5 73.467 1959.61 69.272 2.222 379.7 43901.1 2446 197.821 17.82
5 72.778 1976.84 72.083 2.044 362.901 43801.8 2371 191.854 16.876
5 73.669 1931.83 66.893 2.193 410.013 43588.8 2120 202.98 18.047
5 72.992 1987.8 83.501 1.825 342.41 43894.6 2464 189.404 15.164
5 73.148 1886.04 83.097 2.024 451.468 42509.8 2350 200.386 16.963
5 72.571 1886.59 95.214 2.136 452.629 41745.9 3007 189.863 18.093
5 73.846 1992.26 83.262 1.987 336.432 44406 2354 192.836 15.942
5 73.618 2008.64 83.602 2.459 329.356 44681.7 2774 193.296 19.741
5 72.757 1982.31 95.498 2.216 355.279 43478.4 3051 186.253 18.563
5 73.506 1912.23 83.238 2.068 427.021 43196 2075 203.004 17.578
5 73.325 1977.67 83.588 2.018 362.565 44323.4 1835 199.172 17.636
5 73.129 1924.53 83.06 2.005 407.622 42857.9 2752 192.563 16.152
5 72.987 1945.15 83.26 2.028 389.855 43085.4 2696 191.571 16.903
5 73.846 1928.08 83.274 2.12 405.969 43498.4 2220 200.029 17.528
5 72.602 1942.54 83.132 1.835 390.551 42889.4 2277 189.011 15.897
5 72.803 2026.78 82.485 1.998 312.929 44754.8 1914 190.957 17.892
5 73.687 1900.91 83.295 2.09 439.452 43207.2 1890 206.508 18.125
5 73.282 1920.44 82.879 2.376 419.891 42735.6 3185 193.581 18.99
5 73.448 2043.11 83.609 2.296 292.726 45209.4 2678 190.542 18.712
5 73.516 1850.06 81.226 1.956 487.528 41926.8 2418 203.16 15.802
5 73.317 1979.25 83.392 1.976 356.248 43953.3 2400 193.322 16.006
5 73.662 2016.96 83.537 2.065 317.717 44726.7 2462 192.789 16.46
5 72.774 1889.78 82.366 2.064 447.291 42012.7 3050 192.348 16.634
5 72.528 1969.72 81.084 2.33 371.431 43472 3005 189.253 19.385
5 73.819 2021.3 83.546 2.35 314.079 44794 2620 191.643 18.26
5 73 1921.68 83.144 1.97 420.987 43164.6 1956 199.789 16.943
5 73.144 1962.73 83.337 2.072 380.188 43963.3 2002 198.605 17.709
5 73.314 1895.96 82.943 1.991 437.611 42476.3 2780 196.602 15.918
5 73.484 1973.45 83.317 2.098 360.755 43792.8 2743 192.762 16.656
5 72.806 1919.67 80.959 1.984 418.363 42712.1 2304 194.378 16.961
5 73.675 1937.35 83.391 1.988 395.361 43430.3 2400 197.631 15.983
5 73.859 1900.99 83.447 2.175 439.079 43394 1747 208.227 18.819
5 72.59 1990 81.4 1.942 348.302 43968.7 1981 190.637 17.073
5 73.118 2036.91 83.249 2.168 297.777 44710.7 2729 186.886 17.429
5 72.919 1959.5 83.367 2.412 381.461 43472.6 3180 191.643 19.479
5 73.646 1975.94 83.376 2.043 352.808 43907.8 2449 190.953 16.124
5 72.744 1935.9 82.804 2.195 399.789 42725.3 2999 187.905 17.862
5 73.505 1944.4 83.407 1.944 391.938 43844.2 1869 200.711 16.817
5 73.345 1903.64 83.103 1.902 431.991 42758.4 2189 199.293 16.198
5 72.991 2011.36 83.124 2.077 328.837 44496.6 2101 192.293 18.445
5 73.134 1899.64 83.148 2.216 441.72 42777.1 2450 200.537 18.925
5 72.57 1908.13 91.649 2.128 430.605 42315.5 2697 191.516 18.499
5 73.828 2058.89 83.845 2.241 274.964 45660.3 2393 191.836 18.453
5 72.852 1940.67 82.161 1.697 390.35 42876.2 1964 189.836 15.086
5 73.681 1875.46 83.195 2.026 460.258 42706.5 2067 204.98 17.272
5 73.277 1926.63 83.196 2.268 411.705 43043.4 2984 194.765 18.421
5 73.452 2032.34 83.614 2.106 296.421 44802.2 2411 186.909 16.638
5 73.124 2010.5 83.861 2.224 328.725 44580.1 2450 192.805 19.149
5 72.957 1946.13 83.191 2.253 393.548 43254.4 2839 193.254 19.058
5 73.853 1966.44 83.519 2.082 368.695 44229.5 2029 199.277 17.701
5 72.587 1949.46 94.147 2.094 390.73 42967.5 2369 189.069 18.956
The MLE values for the θk listed in Table F.5 are used in conjunction with the sample
points listed in Table F.1 (scaled to [0,1]), the mean μ and standard deviation σ of each
response (computed from the data in Table F.2 through Table F.4), and the Gaussian correlation
function, Equation 2.17, to estimate μ and σ response values at untried points in the design
space. The krigit.f algorithm
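The prediction step described in this paragraph can be sketched in code. Below is a minimal ordinary kriging predictor using the Gaussian correlation function of Equation 2.17 and a constant underlying trend; the function names are illustrative, and this sketch is not the krigit.f algorithm itself.

```python
import numpy as np

def gaussian_corr(X1, X2, theta):
    """Gaussian correlation: R(x, x') = exp(-sum_k theta_k * (x_k - x'_k)^2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2 * theta).sum(axis=2)
    return np.exp(-d2)

def krige_predict(X, y, theta, x_new, nugget=1e-10):
    """Ordinary kriging prediction at untried points x_new.

    X : (n, k) sample points scaled to [0, 1]; y : (n,) observed response;
    theta : (k,) MLE correlation parameters (cf. Table F.5).
    """
    n = X.shape[0]
    R = gaussian_corr(X, X, theta) + nugget * np.eye(n)  # sample correlation matrix
    ones = np.ones(n)
    # Generalized least squares estimate of the constant trend term.
    beta = ones @ np.linalg.solve(R, y) / (ones @ np.linalg.solve(R, ones))
    r = gaussian_corr(np.atleast_2d(x_new), X, theta)    # correlations to the samples
    return beta + r @ np.linalg.solve(R, y - beta * ones)
```

Because kriging interpolates, predicting back at the sample points reproduces the observed responses (up to the small nugget added for numerical stability).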
The analysis of variance of the means of the GAA responses is listed in Table F.6.
This information is used to generate the Pareto plots for the GAA response means given in
Figure 7.7. Looking at the information in Table F.6, it is noted that for empty weight (WEMP),
fuel weight (WFUEL), and purchase price (PURCH), all of the design variables are significant,
i.e., Pr(F) < 0.01. For maximum flight range (RANGE), all but cruise speed are significant,
whereas the opposite is true for direct operating cost (DOC); cruise speed is the only variable
with a significant effect on DOC. For maximum cruise speed (VCRMX), all but engine activity
factor are significant. Finally, cruise speed, aspect ratio, wing loading, and seat width all have
significant effects on maximum lift to drag ratio (LDMAX). However, insignificant variables are
Table F.6 Main Effects ANOVA Results for GAA Response Means

Empty Weight (WEMP):
Source  Df  Sum of Sq   Mean Sq    F Value    Pr(F)
CSPD     7     334.45     47.78     17.984   0.0000
AR       7   25940.78   3705.83   1394.927   0.0000
DPRP     7    1249.92    178.56     67.213   0.0000
WL       7   19038.63   2719.80   1023.774   0.0000
AF       7     581.71     83.10     31.281   0.0000
WS       7   93613.86  13373.41   5033.946   0.0000
Resid   21      55.79      2.66

Direct Operating Cost (DOC):
Source  Df  Sum of Sq   Mean Sq    F Value    Pr(F)
CSPD     7   4522.135  646.0194   91.50201   0.0000
AR       7     44.150    6.3071    0.89334   0.5292
DPRP     7    113.404   16.2005    2.29464   0.0665
WL       7     61.516    8.7880    1.24473   0.3236
AF       7     32.378    4.6255    0.65515   0.7065
WS       7     32.472    4.6389    0.65705   0.7050
Resid   21    148.263    7.0602

Fuel Weight (WFUEL):
Source  Df  Sum of Sq   Mean Sq    F Value    Pr(F)
CSPD     7      534.0     76.28     25.748   0.0000
AR       7    19764.3   2823.47    953.023   0.0000
DPRP     7     1891.0    270.14     91.183   0.0000
WL       7    20985.0   2997.86   1011.885   0.0000
AF       7      673.1     96.16     32.456   0.0000
WS       7   101120.4  14445.77   4875.974   0.0000
Resid   21       62.2      2.96

Purchase Price (PURCH):
Source  Df  Sum of Sq   Mean Sq    F Value    Pr(F)
CSPD     7     131673     18810     12.195   0.0000
AR       7   12976562   1853795   1201.819   0.0000
DPRP     7    7363576   1051939    681.975   0.0000
WL       7    1309407    187058    121.270   0.0000
AF       7     873248    124750     80.876   0.0000
WS       7   24701497   3528785   2287.719   0.0000
Resid   21      32392      1542

Maximum Flight Range (RANGE):
Source  Df  Sum of Sq   Mean Sq    F Value    Pr(F)
CSPD     7     114956     16422     3.2275   0.0174
AR       7     131815     18831     3.7009   0.0092
DPRP     7     593925     84846    16.6752   0.0000
WL       7    7971133   1138733   223.7989   0.0000
AF       7     177899     25414     4.9947   0.0019
WS       7     617934     88276    17.3492   0.0000
Resid   21     106852      5088

Maximum Cruise Speed (VCRMX):
Source  Df  Sum of Sq   Mean Sq    F Value    Pr(F)
CSPD     7     9.5692    1.3670     3.6772   0.0095
AR       7    24.2662    3.4666     9.3249   0.0000
DPRP     7   615.1011   87.8716   236.3669   0.0000
WL       7   816.5116  116.6445   313.7635   0.0000
AF       7     4.3497    0.6214     1.6715   0.1706
WS       7   499.8083   71.4012   192.0630   0.0000
Resid   21     7.8069    0.3718

Maximum Lift to Drag Ratio (LDMAX):
Source  Df  Sum of Sq   Mean Sq    F Value    Pr(F)
CSPD     7    6.98133   0.99733    310.093   0.0000
AR       7   71.52682  10.21812   3177.036   0.0000
DPRP     7    0.02329   0.00333      1.034   0.4373
WL       7    9.33662   1.33380    414.709   0.0000
AF       7    0.02003   0.00286      0.889   0.5319
WS       7    8.93490   1.27641    396.865   0.0000
Resid   21    0.06754   0.00322

KEY: Df = degrees of freedom; CSPD = cruise speed; AR = aspect ratio; DPRP = propeller diameter; WL = wing loading; AF = engine activity factor; WS = seat width; Resid = residuals.
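The Df, Sum of Sq, Mean Sq, F Value, and Pr(F) columns in Table F.6 follow from a standard main-effects (one-way) analysis of variance. A minimal sketch of that computation is given below; the function name and data are illustrative, not the statistical package actually used for the dissertation.

```python
import numpy as np
from scipy import stats

def main_effect_anova(groups):
    """One-way ANOVA for a single factor.

    groups: list of 1-D arrays, the response values observed at each level
    of the factor.  Returns (Df, Sum of Sq, Mean Sq, F Value, Pr(F)),
    mirroring the columns of Table F.6.
    """
    all_y = np.concatenate(groups)
    grand_mean = all_y.mean()
    df_between = len(groups) - 1              # Df for the factor
    df_within = all_y.size - len(groups)      # Df for the residuals
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_between = ss_between / df_between
    ms_within = ss_within / df_within
    f_value = ms_between / ms_within
    pr_f = stats.f.sf(f_value, df_between, df_within)  # upper-tail Pr(F)
    return df_between, ss_between, ms_between, f_value, pr_f
```

A factor is flagged significant in the discussion above when its Pr(F) is below 0.01.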
F.4 ADDITIONAL DESIGN SCENARIOS FOR THE GAA PRODUCT PLATFORM
STUDY USING THE Cdk FORMULATION
In all, eight different design scenarios are considered to investigate several performance
and economic tradeoffs within the product family; see Table F.7. Only the first three scenarios
are discussed in Section 7.5 with regard to the PPCEM product platform. These remaining
Table F.7 GAA Product Platform Design Scenarios

Key to deviation variables:
  d1- drives fuel weight Cdk to 1
  d2- drives empty weight Cdk to 1
  d3- drives direct operating cost Cdk to 1
  d4- drives purchase price Cdk to 1
  d5- drives max. lift/drag Cdk to 1
  d6- drives max. speed Cdk to 1
  d7- drives max. range Cdk to 1

Group I: Tradeoff Studies
  1. Overall tradeoff:      PLEV1 = (d1- + d2- + d3- + d4- + d5- + d6- + d7-)/7
  2. Economic tradeoff:     PLEV1 = (d2- + d3- + d4-)/3;  PLEV2 = (d1- + d5- + d6- + d7-)/4
  3. Performance tradeoff:  PLEV1 = (d1- + d5- + d6- + d7-)/4;  PLEV2 = (d2- + d3- + d4-)/3

Group II: Economic Tradeoff Studies
  4. Economic driver, manufacturing:  PLEV1 = (d2- + d4-)/2;  PLEV2 = d3-;  PLEV3 = (d1- + d5- + d6- + d7-)/4
  5. Economic driver, operator cost:  PLEV1 = d3-;  PLEV2 = (d2- + d4-)/2;  PLEV3 = (d1- + d5- + d6- + d7-)/4

Group III: Performance Tradeoff Studies
  6. Performance driver, speed and fuel:  PLEV1 = (d1- + d6-)/2;  PLEV2 = (d5- + d7-)/2;  PLEV3 = (d2- + d3- + d4-)/3
  7. Performance driver, aerodynamics:    PLEV1 = d5-;  PLEV2 = d7-;  PLEV3 = (d1- + d6-)/2;  PLEV4 = (d2- + d3- + d4-)/3
  8. Performance driver, distance:        PLEV1 = d7-;  PLEV2 = d5-;  PLEV3 = (d1- + d6-)/2;  PLEV4 = (d2- + d3- + d4-)/3
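Each di- in Table F.7 is the underachievement deviation of the corresponding Cdk goal; writing the compromise DSP goal as Cdk + d- - d+ = 1 gives d- = max(0, 1 - Cdk). A minimal sketch of assembling Scenario 2's deviation function from hypothetical Cdk values follows; the numbers are illustrative only.

```python
def underachievement(cdk, target=1.0):
    """Deviation d- for the goal Cdk >= target; zero once the target is met."""
    return max(0.0, target - cdk)

# Hypothetical Cdk values for the seven goals d1- ... d7- of Table F.7.
cdk = {"wfuel": 1.2, "wemp": 0.8, "doc": 0.5, "purch": 0.9,
       "ldmax": 1.0, "vcrmx": 0.7, "range": 1.1}
d = {name: underachievement(value) for name, value in cdk.items()}

# Scenario 2 (economic tradeoff): economic goals at PLEV1, performance at PLEV2.
plev1 = (d["wemp"] + d["doc"] + d["purch"]) / 3                  # (d2- + d3- + d4-)/3
plev2 = (d["wfuel"] + d["ldmax"] + d["vcrmx"] + d["range"]) / 4  # (d1- + d5- + d6- + d7-)/4
```

The priority levels are handled lexicographically: PLEV1 is minimized first, and PLEV2 is minimized only among solutions tied on PLEV1.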
These design scenarios are used in conjunction with the GAA product platform
compromise DSP given in Figure 7.7 in Section 7.4. As with Scenarios 1-3 discussed in
Section 7.5, three starting points are used when solving the compromise DSP for each scenario:
the lower, middle, and upper bounds of the design variables; the resulting design with the
lowest deviation function is taken as the best solution. The Cdk solutions are given in Section
The resulting Cdk solutions for these 8 scenarios are summarized and discussed in Table
F.8. The initial Cdk designs are included as Scenario 0 in each graph for the sake of
comparison; they are designated by a black square. The white circle represents the Cdk values
obtained in Scenario 1, the overall tradeoff study; the black diamonds indicate Cdk values
obtained for the economically oriented design scenarios (Scenarios 2, 4, and 5); and the white
triangles indicate Cdk values obtained for the performance oriented design scenarios (Scenarios
3, 6, 7, and 8). A horizontal dashed line at Cdk = 1 is included in each plot to designate the
target value for each Cdk; points above the line indicate that the target has been met in that
particular scenario. Although it is not noted, all designs are feasible based on the constraints
Table F.8 PPCEM Solutions - Goal Cdk Values

[Plot: Fuel Weight (WFUEL) Cdk vs. Scenario 0-8]
• The target for WFUEL is always met, regardless of the design scenario.
• Considerable improvement over the initial Cdk of -0.7 is obtained in all scenarios.

[Plot: Direct Operating Cost (DOC) Cdk vs. Scenario 0-8]
• The target for DOC is never met; the best Cdk value that can be achieved is 0.5 when DOC
is given the highest priority in Scenario 5.
• The initial Cdk is quite poor (-670, which is well off the chart); so, considerable
improvement is made despite not being able to achieve the target of 1.

[Plot: Purchase Price (PURCH) Cdk vs. Scenario 0-8]
• The target for PURCH is met when given the highest priority (Scenario 4) and by
happenstance in Scenario 8.
• All design scenarios yield improvement over the baseline Cdk.

[Plot: Max. Cruise Speed (VCRMX) Cdk vs. Scenario 0-8]
• The target for VCRMX is only met when given the highest priority (Scenario 6).
• Six out of the eight design scenarios yield improvement over the initial Cdk value.
• In Scenarios 4 and 8, priority is placed on WFUEL and RANGE, which greatly
compromises max cruise speed, as might be expected (fly slower yet further).
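The Cdk plotted above is a design capability index. As a sketch, assuming Cdk is defined by analogy with the process capability index Cpk over a design requirement range [LRL, URL] — a formulation that may differ in detail from the one used in the body of the dissertation:

```python
def cdk(mu, sigma, lrl, url):
    """Design capability index by analogy with Cpk: Cdk >= 1 indicates the
    mean response mu stays at least 3*sigma inside the range [lrl, url]."""
    return min((url - mu) / (3.0 * sigma), (mu - lrl) / (3.0 * sigma))
```

With mu = 10, sigma = 1, and limits [4, 16], Cdk = 2; tightening the lower limit to 9 drops it to 1/3, i.e., the target of 1 is no longer met.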
The end result of having examined all these design scenarios is that significant tradeoffs
occur in Scenarios 1, 2, and 3, where the economic and performance Cdk goals are
equally weighted at different priority levels. Only when a particular Cdk is given first priority
(i.e., placed at PLEV1) in the GAA product platform compromise DSP can the target of 1 be
achieved. Any other time, the results indicate the best possible compromise which can be
The resulting PPCEM platform designs for Scenarios 1-8 (and the baseline design) are
summarized in Table F.9; the resulting instantiations of each aircraft are listed in Table F.10.
Note that these instantiations are directly from GASP based on the PPCEM platform
specifications given in Table F.9; they are not based on kriging metamodel predictions. Also,
all solutions listed in the table are feasible, and the values for ride roughness and takeoff
Table F.10 PPCEM Instantiations in GASP from Scenarios 1-8
Design    No. of  wfuel  wemp   doc     purch  ldmax  vcrmx  range   Dev. Fcn.
Scenario  Seats   [lbs]  [lbs]  [$/hr]  [$]    [-]    [kts]  [nm]    PLEV1 PLEV2 PLEV3 PLEV4
2 364.39 1961.95 83.46 43908.9 16.49 193.88 2237.0
0 4 324.00 2000.43 83.40 44556.1 16.16 191.39 2136.0
6 293.66 2030.03 83.53 45099.0 16.01 190.15 2067.0
2 449.43 1887.15 61.98 41817.0 15.89 190.83 2491.0 0.024
1 4 413.80 1921.71 63.31 42374.5 15.61 188.47 2436.0 0.038
6 388.49 1946.59 63.85 42827.0 15.53 187.61 2420.0 0.051
2 447.25 1889.73 61.60 41959.4 15.91 192.24 2446.0 0.017 0.031
2 4 411.34 1924.56 62.85 42502.8 15.63 189.51 2393.0 0.020 0.051
6 385.69 1949.77 63.38 42989.1 15.55 189.07 2377.0 0.019 0.073
2 450.92 1886.97 77.43 42150.1 15.79 195.53 2499.0 0.024 0.106
3 4 415.34 1921.70 79.13 42727.0 15.52 193.32 2451.0 0.045 0.112
6 389.84 1946.82 79.76 43190.3 15.44 192.46 2437.0 0.067 0.111
2 442.52 1893.71 62.41 41685.2 16.03 186.82 2569.0 0.008 0.040 0.031
4 4 406.67 1928.43 63.20 42229.0 15.76 184.25 2509.0 0.003 0.053 0.042
6 381.37 1953.28 63.61 42702.3 15.68 183.82 2492.0 0.000 0.060 0.063
2 448.84 1888.86 60.96 42077.4 15.93 194.39 2353.0 0.016 0.013 0.037
5 4 412.80 1923.83 62.13 42659.0 15.65 192.20 2303.0 0.036 0.008 0.057
6 386.92 1949.25 62.64 43126.4 15.56 191.33 2287.0 0.044 0.001 0.080
2 440.78 1896.38 59.62 42879.9 16.12 203.32 2092.0 0.000 0.029 0.015
6 4 402.41 1933.72 60.77 43481.9 15.77 200.43 2049.0 0.003 0.059 0.016
6 374.03 1961.53 61.43 43968.4 15.60 198.87 2027.0 0.037 0.062 0.015
2 447.55 1896.55 72.01 42180.9 17.61 194.45 2072.0 0.000 0.171 0.014 0.076
7 4 411.12 1931.83 76.37 42694.1 17.21 191.00 2010.0 0.000 0.196 0.036 0.097
6 382.46 1959.79 80.21 43170.0 17.03 189.41 1982.0 0.000 0.207 0.073 0.114
2 450.38 1888.00 69.26 41603.1 16.51 187.43 2503.0 0.000 0.029 0.032 0.056
8 4 420.42 1916.93 71.48 42019.1 16.21 184.65 2441.0 0.024 0.047 0.064 0.064
6 391.41 1945.51 72.33 42565.7 16.15 184.21 2427.0 0.029 0.050 0.099 0.069
The convergence plots for the two, four, and six seater GAA have been included in this
section to provide the full details of the model convergence history for the individually designed
aircraft. Unlike the convergence plots for the PPCEM Cdk solutions, the iterations have not
been carried out to match the longest run; thus, some of the convergence lines appear to stop
abruptly.
[Convergence plots: deviation function vs. iterations for the individually designed two, four, and six seater GAA.]
Figure F.3 Convergence of Individual Benchmark Aircraft - Scenario 3
Supplemental information for the GAA product variety tradeoff study is included in this
section. In particular, the rank ordering of the design variables for computing the non-
commonality index (NCI) is listed in Section F.6.1. In Section F.6.2, the detailed results of
The justification for the rank ordering of the design variables in the GAA product variety
case study is given in Table F.11, which lists the viewpoints behind each pairwise decision.
The resulting preferences are then quantified in Table F.12 and the total and relative importance
Table F.11 Viewpoints for Pairwise Comparisons for GAA Design Variables
AF < AR* It is easier to derate the engine than it is to allow aspect ratio to vary.
AF > CSPD Cruise speed is the cheapest variable to allow to vary.
AF > DPRP Propeller diameter is the second cheapest variable to allow to vary.
AF < WL It is easier to derate the engine than to allow wing loading to vary.
AF < WS Seat width is the most costly variable to vary.
AF > dummy All variables are preferred to the dummy.
AR > CSPD Cruise speed is the cheapest variable to vary.
AR > DPRP Propeller diameter is the second cheapest variable to allow to vary.
AR > WL It is more costly to allow aspect ratio to vary than wing loading.
AR < WS Seat width is the most costly variable to vary.
AR > dummy All variables are preferred to the dummy.
CSPD < DPRP Cruise speed is the cheapest variable to vary.
CSPD < WL Cruise speed is the cheapest variable to vary.
CSPD < WS Cruise speed is the cheapest variable to vary.
CSPD > dummy All variables are preferred to the dummy.
DPRP < WL Propeller diameter is the second cheapest variable to allow to vary.
DPRP < WS Seat width is the most costly variable to allow to vary.
DPRP > dummy All variables are preferred to the dummy.
WL < WS Seat width is the most costly variable to vary.
WL > dummy All variables are preferred to the dummy.
WS > dummy All variables are preferred to the dummy.
* The symbol > indicates preference.
The relative importances listed in Table F.12 are used to compute the non-commonality
index (NCI) for each family of aircraft in the product variety tradeoff study in Section 7.6.2.
The results for the product variety tradeoff study for Scenario 2 are listed in the next section.
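One simple way to turn the pairwise preferences of Table F.11 into relative importances is to count the number of comparisons each variable wins (the dummy guarantees every variable scores at least once) and normalize. The one-point-per-win scoring below is an illustrative assumption, not necessarily the exact quantification used in Table F.12.

```python
# (winner, loser) pairs transcribed from Table F.11; ">" means "preferred to".
prefs = [("AR", "AF"), ("AF", "CSPD"), ("AF", "DPRP"), ("WL", "AF"),
         ("WS", "AF"), ("AF", "dummy"), ("AR", "CSPD"), ("AR", "DPRP"),
         ("AR", "WL"), ("WS", "AR"), ("AR", "dummy"), ("DPRP", "CSPD"),
         ("WL", "CSPD"), ("WS", "CSPD"), ("CSPD", "dummy"),
         ("WL", "DPRP"), ("WS", "DPRP"), ("DPRP", "dummy"),
         ("WS", "WL"), ("WL", "dummy"), ("WS", "dummy")]

wins = {v: 0 for v in ("CSPD", "AR", "DPRP", "WL", "AF", "WS")}
for winner, _loser in prefs:
    if winner in wins:      # the dummy never wins a comparison
        wins[winner] += 1
total = sum(wins.values())
importance = {v: w / total for v, w in wins.items()}
```

This reproduces the ordering implied by the stated viewpoints: seat width (WS) is the most important (costliest to vary) and cruise speed (CSPD) the least.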
F.6.2 Product Variety Tradeoff Study Results for Scenario 2
The PPCEM solutions based on the GAA Compromise DSP employing the Cdk
The following tables list the actual goal values for each aircraft (platform instantiation) found by
allowing one, two, and three variables to vary at a time between aircraft while holding the
Table F.14 Product Variety Tradeoff Study Results for Scenario 2, Allowing 1 Design
Variable to Vary Between Aircraft
Vary: AF WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 86.462 448.04 1888.96 61.62 41908.0 15.91 191.81 2448.0 0.016 0.031
4 Seater 90.500 410.92 1925.01 62.76 42544.3 15.63 189.95 2394.0 0.020 0.050
6 Seater 95.473 383.76 1951.71 63.17 43083.4 15.55 189.51 2375.0 0.018 0.071
Vary: AR WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 8.085 447.15 1889.85 61.55 41963.6 15.91 192.24 2447.0 0.016 0.031
4 Seater 8.085 411.25 1924.68 62.79 42532.8 15.63 189.95 2394.0 0.020 0.050
6 Seater 8.237 383.81 1952.07 63.22 43037.8 15.67 189.07 2370.0 0.018 0.070
Vary: CSPD WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 0.242 447.15 1889.85 61.55 41963.6 15.91 192.24 2447.0 0.016 0.031
4 Seater 0.242 411.25 1924.68 62.79 42532.8 15.63 189.95 2394.0 0.020 0.050
6 Seater 0.242 385.59 1949.88 63.31 42993.2 15.55 189.07 2378.0 0.018 0.073
Vary: DPRP WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 5.195 447.15 1889.85 61.55 41963.6 15.91 192.24 2447.0 0.016 0.031
4 Seater 5.195 411.25 1924.68 62.79 42532.8 15.63 189.95 2394.0 0.020 0.050
6 Seater 5.253 385.11 1950.48 63.06 43067.0 15.55 189.95 2381.0 0.017 0.071
Vary: WL WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 22.778 448.56 1888.51 61.48 41948.8 15.88 192.45 2414.0 0.016 0.034
4 Seater 24.255 424.76 1911.81 62.07 42426.7 15.35 192.58 2077.0 0.015 0.091
6 Seater 23.729 394.38 1941.52 62.83 42929.7 15.35 190.86 2157.0 0.016 0.102
Vary: WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 18.644 448.57 1888.48 61.50 41948.5 15.92 192.46 2449.0 0.016 0.030
4 Seater 17.129 440.06 1896.89 61.66 42105.4 15.90 192.24 2448.0 0.010 0.056
6 Seater 16.557 430.53 1906.48 61.61 42311.8 15.94 192.46 2456.0 0.009 0.087
Table F.15 Product Variety Tradeoff Study Results for Scenario 2, Allowing 2 Design
Variables to Vary Between Aircraft
Vary: AF AR WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 89.402 8.085 447.15 1889.85 61.55 41963.6 15.91 192.24 2447.0 0.016 0.031
4 Seater 92.351 8.326 407.23 1929.33 62.58 42665.9 15.82 190.38 2378.0 0.020 0.046
6 Seater 90.630 8.505 380.04 1956.52 63.04 43160.7 15.87 189.51 2354.0 0.018 0.066
Vary: AF CSPD WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 86.462 0.241 448.04 1888.96 61.62 41908.0 15.91 191.81 2448.0 0.016 0.031
4 Seater 90.500 0.240 410.92 1925.01 62.76 42544.3 15.63 189.95 2394.0 0.020 0.050
6 Seater 95.473 0.241 383.76 1951.71 63.17 43083.4 15.55 189.51 2375.0 0.018 0.071
Vary: AF DPRP WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 89.402 5.195 447.15 1889.85 61.55 41963.6 15.91 192.24 2447.0 0.016 0.031
4 Seater 89.402 5.195 411.25 1924.68 62.79 42532.8 15.63 189.95 2394.0 0.020 0.050
6 Seater 86.244 5.364 384.97 1950.79 62.70 43158.0 15.55 191.26 2386.0 0.016 0.069
Vary: AF WL WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 97.594 23.169 449.52 1887.71 61.21 42056.4 15.82 193.98 2327.0 0.015 0.042
4 Seater 93.881 24.314 423.79 1912.79 61.97 42478.1 15.34 192.78 2067.0 0.015 0.092
6 Seater 93.211 23.889 394.43 1941.50 62.69 42985.7 15.32 191.52 2126.0 0.015 0.104
Vary: AF WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 95.199 18.468 449.77 1887.36 61.31 41988.4 15.95 193.10 2451.0 0.015 0.029
4 Seater 89.402 17.129 440.06 1896.89 61.66 42105.4 15.90 192.24 2448.0 0.010 0.056
6 Seater 96.582 16.556 428.36 1908.63 61.52 42424.5 15.94 193.10 2450.0 0.008 0.085
Vary: AR CSPD WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 8.085 0.242 447.15 1889.85 61.55 41963.6 15.91 192.24 2447.0 0.016 0.031
4 Seater 8.085 0.242 411.25 1924.68 62.79 42532.8 15.63 189.95 2394.0 0.020 0.050
6 Seater 8.237 0.241 383.81 1952.07 63.22 43037.8 15.67 189.07 2370.0 0.018 0.070
Vary: AR DPRP WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 8.085 5.195 447.15 1889.85 61.55 41963.6 15.91 192.24 2447.0 0.016 0.031
4 Seater 8.085 5.195 411.25 1924.68 62.79 42532.8 15.63 189.95 2394.0 0.020 0.050
6 Seater 7.774 5.398 387.72 1947.16 62.77 43126.2 15.29 191.60 2401.0 0.016 0.073
Vary: AR WL WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 8.467 23.143 446.80 1891.45 61.09 42063.4 16.13 193.42 2318.0 0.015 0.039
4 Seater 8.534 24.117 418.11 1919.59 61.85 42580.4 15.71 192.50 2081.0 0.015 0.082
6 Seater 8.949 23.750 383.81 1954.29 62.33 43213.2 15.99 191.21 2111.0 0.015 0.089
Vary: AR WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 8.133 18.529 449.94 1887.29 61.40 41936.9 15.98 192.67 2451.0 0.015 0.029
4 Seater 7.955 16.850 446.96 1889.78 61.57 41974.6 15.83 192.46 2465.0 0.009 0.059
6 Seater 7.971 15.353 455.38 1882.09 60.86 41906.3 16.05 193.96 2504.0 0.005 0.097
Vary: CSPD DPRP WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 0.242 5.195 447.15 1889.85 61.55 41963.6 15.91 192.24 2447.0 0.016 0.031
4 Seater 0.242 5.195 411.25 1924.68 62.79 42532.8 15.63 189.95 2394.0 0.020 0.050
6 Seater 0.241 5.253 385.11 1950.48 63.06 43067.0 15.55 189.95 2381.0 0.017 0.071
Vary: CSPD WL WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 0.241 22.622 447.06 1889.94 61.56 41962.9 15.91 192.20 2449.0 0.016 0.031
4 Seater 0.240 24.358 425.54 1911.07 62.03 42409.3 15.33 192.54 2060.0 0.015 0.094
6 Seater 0.240 24.185 397.75 1938.28 62.66 42893.9 15.27 191.36 2074.0 0.015 0.113
Vary: CSPD WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 0.241 18.644 448.57 1888.48 61.50 41948.5 15.92 192.46 2449.0 0.016 0.030
4 Seater 0.241 17.129 440.06 1896.89 61.66 42105.4 15.90 192.24 2448.0 0.010 0.056
6 Seater 0.240 16.557 430.53 1906.48 61.61 42311.8 15.94 192.46 2456.0 0.009 0.087
Vary: DPRP WL WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 5.195 22.627 447.15 1889.85 61.55 41963.6 15.91 192.24 2447.0 0.016 0.031
4 Seater 5.161 23.529 419.32 1916.91 62.59 42399.7 15.47 190.48 2208.0 0.018 0.076
6 Seater 5.325 24.334 397.54 1938.79 62.12 43058.1 15.24 193.53 2055.0 0.012 0.112
Vary: DPRP WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 5.195 18.724 447.15 1889.85 61.55 41963.6 15.91 192.24 2447.0 0.016 0.031
4 Seater 5.169 16.890 444.58 1892.45 61.65 41986.0 15.94 191.81 2453.0 0.009 0.058
6 Seater 5.172 15.667 448.00 1889.52 61.13 42001.4 16.10 193.10 2485.0 0.006 0.093
Vary: WL WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 22.371 18.507 448.46 1888.52 61.54 41922.4 15.99 191.99 2512.0 0.016 0.025
4 Seater 22.691 16.944 443.99 1893.11 61.51 42044.0 15.92 192.50 2441.0 0.009 0.059
6 Seater 23.820 15.266 465.41 1873.05 60.20 41854.8 15.97 196.24 2256.0 0.001 0.127
Table F.16 Product Variety Tradeoff Study Results for Scenario 2, Allowing 3 Design
Variables to Vary Between Aircraft
Vary: AF AR DPRP WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 89.402 8.085 5.195 447.15 1889.85 61.55 41963.6 15.91 192.24 2447.0 0.016 0.031
4 Seater 89.402 8.085 5.195 411.25 1924.68 62.79 42532.8 15.63 189.95 2394.0 0.020 0.050
6 Seater 93.099 8.065 5.220 384.69 1950.74 63.14 43083.5 15.52 189.95 2379.0 0.018 0.071
Vary: AF AR WL WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 96.128 8.096 23.093 449.14 1888.09 61.24 42038.4 15.84 193.64 2344.0 0.015 0.041
4 Seater 97.736 7.712 24.697 430.38 1905.18 62.06 42380.2 14.97 193.46 2017.0 0.014 0.105
6 Seater 94.927 7.836 24.728 403.26 1932.16 62.55 42858.9 14.97 192.46 1990.0 0.014 0.128
Vary: AF AR WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 88.726 8.134 18.540 449.97 1887.25 61.42 41933.2 15.97 192.67 2451.0 0.015 0.029
4 Seater 93.981 7.872 17.021 444.59 1891.81 61.67 42047.9 15.73 192.67 2460.0 0.010 0.060
6 Seater 100.63 7.738 15.299 455.87 1880.92 60.88 41985.6 15.86 194.90 2509.0 0.005 0.099
Vary: AF DPRP WL WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 98.111 5.309 22.872 445.34 1891.78 61.07 42216.0 15.87 194.90 2386.0 0.016 0.034
4 Seater 89.025 5.222 23.116 415.44 1920.75 62.46 42525.7 15.55 191.14 2294.0 0.018 0.063
6 Seater 97.700 5.260 24.688 398.24 1938.01 62.16 43074.9 15.18 193.67 1986.0 0.013 0.121
Vary: AF DPRP WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 89.492 5.040 18.482 452.83 1883.95 62.28 41646.4 15.95 189.24 2435.0 0.018 0.037
4 Seater 98.601 5.164 18.201 418.94 1917.23 62.42 42473.7 15.72 190.96 2403.0 0.017 0.052
6 Seater 89.402 5.195 16.557 430.53 1906.48 61.61 42311.8 15.94 192.46 2456.0 0.009 0.087
Vary: AF WL WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 100.89 22.778 18.457 449.60 1887.56 61.20 42053.9 15.93 193.74 2413.0 0.015 0.032
4 Seater 89.387 22.690 16.941 443.99 1893.11 61.51 42043.9 15.92 192.50 2441.0 0.009 0.059
6 Seater 86.076 23.531 14.761 473.40 1865.24 60.05 41689.5 16.10 196.35 2329.0 0.000 0.123
Vary: AR DPRP WL WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 8.085 5.195 22.627 447.15 1889.85 61.55 41963.6 15.91 192.24 2447.0 0.016 0.031
4 Seater 7.642 5.244 24.177 429.50 1905.81 62.24 42305.0 15.00 192.46 2116.0 0.015 0.096
6 Seater 7.543 5.453 23.512 397.10 1937.45 62.49 43033.4 14.94 193.21 2233.0 0.014 0.099
Vary: AR DPRP WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 8.085 5.195 18.644 448.57 1888.48 61.50 41948.5 15.92 192.46 2449.0 0.016 0.030
4 Seater 7.676 5.361 16.568 454.70 1881.64 61.10 41975.6 15.64 194.82 2490.0 0.006 0.062
6 Seater 8.134 5.503 17.821 400.83 1935.54 61.71 43070.9 15.75 194.32 2399.0 0.010 0.072
Vary: AR WL WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 8.460 22.996 18.606 447.62 1890.59 61.08 42037.5 16.16 193.26 2351.0 0.014 0.036
4 Seater 8.171 23.904 15.921 471.22 1867.30 60.28 41725.9 15.95 196.01 2224.0 0.002 0.093
6 Seater 8.172 22.936 15.672 449.42 1888.55 60.80 42057.6 16.11 194.32 2415.0 0.004 0.100
Vary: DPRP WL WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 5.195 22.627 18.724 447.15 1889.85 61.55 41963.6 15.91 192.24 2447.0 0.016 0.031
4 Seater 5.270 23.697 16.672 458.04 1879.84 60.64 41976.1 15.79 195.74 2246.0 0.004 0.085
6 Seater 5.401 23.867 16.392 442.02 1895.88 60.43 42425.2 15.76 197.45 2213.0 0.002 0.116
The preceding tables list the PPCEM solutions based on the GAA Compromise DSP employing the Cdk
formulation. The following tables list the actual goal values for each aircraft (platform instantiation) found by
allowing one, two, and three design variables to vary at a time between aircraft while holding the
remaining variables fixed.
Table F.18 Product Variety Tradeoff Study Results for Scenario 3, Allowing 1 Design
Variable to Vary Between Aircraft
Vary: AF WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 85.631 450.92 1886.97 77.43 42150.1 15.79 195.53 2499.0 0.024 0.106
4 Seater 97.046 410.83 1925.72 78.97 42880.5 15.51 193.54 2421.0 0.045 0.112
6 Seater 87.448 389.13 1947.46 79.71 43212.9 15.44 192.46 2432.0 0.067 0.111
Vary: AR WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 7.637 450.65 1887.30 77.42 42156.8 15.81 195.53 2498.0 0.024 0.106
4 Seater 7.697 414.25 1923.01 79.09 42753.8 15.59 193.32 2445.0 0.044 0.112
6 Seater 7.891 386.20 1951.20 79.65 43293.6 15.67 192.68 2418.0 0.063 0.111
Vary: CSPD WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 0.291 450.92 1886.97 77.43 42150.1 15.79 195.53 2499.0 0.024 0.106
4 Seater 0.291 415.34 1921.70 79.13 42727.0 15.52 193.32 2451.0 0.045 0.112
6 Seater 0.291 389.84 1946.82 79.76 43190.3 15.44 192.46 2437.0 0.067 0.111
Vary: DPRP WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 5.577 450.46 1887.35 77.28 42187.9 15.79 195.95 2494.0 0.024 0.106
4 Seater 5.547 415.34 1921.70 79.13 42727.0 15.52 193.32 2451.0 0.045 0.112
6 Seater 5.547 389.84 1946.82 79.76 43190.3 15.44 192.46 2437.0 0.067 0.111
Vary: WL WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 22.431 450.47 1887.40 77.48 42158.6 15.80 195.52 2510.0 0.024 0.106
4 Seater 22.440 414.99 1922.03 79.17 42723.5 15.52 193.15 2460.0 0.044 0.112
6 Seater 22.365 388.95 1947.68 79.87 43205.5 15.46 192.42 2461.0 0.064 0.112
Vary: WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 18.727 450.40 1887.48 77.46 42160.4 15.79 195.53 2498.0 0.024 0.106
4 Seater 19.087 407.71 1929.11 79.53 42839.9 15.45 192.68 2439.0 0.043 0.115
6 Seater 18.803 387.68 1948.92 79.88 43233.4 15.42 192.46 2433.0 0.066 0.112
Table F.19 Product Variety Tradeoff Study Results for Scenario 3, Allowing 2 Design
Variables to Vary Between Aircraft
Vary: AF AR WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 85.963 7.637 450.52 1887.42 77.41 42160.8 15.81 195.53 2497.0 0.024 0.106
4 Seater 94.321 7.730 410.41 1926.56 78.94 42883.9 15.61 193.54 2421.0 0.043 0.112
6 Seater 86.406 7.919 385.56 1951.90 79.61 43312.0 15.69 192.68 2414.0 0.062 0.111
Vary: AF CSPD WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 85.631 0.291 450.92 1886.97 77.43 42150.1 15.79 195.53 2499.0 0.024 0.106
4 Seater 85.631 0.291 415.34 1921.70 79.13 42727.0 15.52 193.32 2451.0 0.045 0.112
6 Seater 85.631 0.291 389.84 1946.82 79.76 43190.3 15.44 192.46 2437.0 0.067 0.111
Vary: AF DPRP WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 85.631 5.547 450.92 1886.97 77.43 42150.1 15.79 195.53 2499.0 0.024 0.106
4 Seater 91.408 5.526 413.39 1923.47 79.13 42788.9 15.52 193.32 2440.0 0.045 0.112
6 Seater 93.781 5.573 386.30 1949.97 79.50 43327.3 15.44 192.89 2412.0 0.067 0.111
Vary: AF WL WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 88.205 22.391 449.09 1888.66 77.49 42199.7 15.81 195.56 2512.0 0.023 0.107
4 Seater 85.631 22.000 411.11 1925.75 79.66 42754.7 15.60 192.40 2557.0 0.037 0.115
6 Seater 87.849 21.963 384.60 1951.75 80.25 43250.2 15.53 191.59 2547.0 0.057 0.114
Vary: AF WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 85.631 18.701 450.92 1886.97 77.43 42150.1 15.79 195.53 2499.0 0.024 0.106
4 Seater 89.090 19.209 403.97 1932.64 79.58 42917.1 15.43 192.46 2427.0 0.042 0.116
6 Seater 87.408 18.846 385.90 1950.60 79.89 43277.3 15.41 192.46 2427.0 0.066 0.113
Vary: AR CSPD WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 7.384 0.263 453.28 1882.49 66.81 42053.6 15.44 195.53 2516.0 0.030 0.046
4 Seater 7.408 0.260 417.17 1917.73 68.07 42620.9 15.20 192.99 2466.0 0.049 0.050
6 Seater 7.443 0.256 391.21 1943.38 68.56 43107.7 15.14 192.36 2449.0 0.071 0.048
Vary: AR DPRP WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 7.637 5.559 450.50 1887.43 77.37 42161.2 15.81 195.53 2496.0 0.024 0.106
4 Seater 7.697 5.534 414.54 1922.76 79.22 42732.1 15.59 193.11 2448.0 0.044 0.112
6 Seater 7.997 5.574 384.42 1953.22 79.48 43352.1 15.76 192.89 2408.0 0.061 0.111
Vary: AR WL WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 7.630 22.450 450.52 1887.39 77.45 42151.2 15.80 195.40 2505.0 0.024 0.106
4 Seater 7.763 22.434 413.00 1924.41 79.12 42769.5 15.65 193.11 2452.0 0.041 0.112
6 Seater 7.858 22.197 384.31 1952.92 79.95 43294.9 15.69 192.12 2481.0 0.055 0.113
Vary: AR WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 7.638 18.711 450.48 1887.47 77.43 42160.2 15.81 195.53 2497.0 0.024 0.106
4 Seater 8.077 19.254 398.23 1939.72 79.52 43057.3 15.81 192.68 2402.0 0.037 0.117
6 Seater 7.888 18.929 381.23 1956.04 79.92 43380.3 15.62 192.46 2410.0 0.061 0.113
Vary: CSPD DPRP WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 0.291 5.547 450.92 1886.97 77.43 42150.1 15.79 195.53 2499.0 0.024 0.106
4 Seater 0.291 5.547 415.34 1921.70 79.13 42727.0 15.52 193.32 2451.0 0.045 0.112
6 Seater 0.291 5.547 389.84 1946.82 79.76 43190.3 15.44 192.46 2437.0 0.067 0.111
Vary: CSPD WL WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 0.266 22.186 447.54 1889.16 70.12 42167.2 15.75 195.12 2563.0 0.025 0.066
4 Seater 0.291 22.000 411.11 1925.75 79.66 42754.7 15.60 192.40 2557.0 0.037 0.115
6 Seater 0.291 22.000 385.81 1950.64 80.26 43226.5 15.53 191.77 2544.0 0.058 0.114
Vary: CSPD WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 0.291 18.701 450.92 1886.97 77.43 42150.1 15.79 195.53 2499.0 0.024 0.106
4 Seater 0.266 19.087 406.92 1928.79 71.63 42829.6 15.36 192.68 2439.0 0.044 0.071
6 Seater 0.266 18.803 386.88 1948.60 71.94 43223.2 15.33 192.46 2433.0 0.067 0.068
Vary: DPRP WL WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 5.562 22.458 450.59 1887.27 77.40 42165.3 15.79 195.65 2502.0 0.024 0.106
4 Seater 5.537 22.440 415.14 1921.90 79.23 42719.1 15.52 193.15 2461.0 0.044 0.112
6 Seater 5.587 21.507 380.93 1955.15 80.55 43298.9 15.62 191.31 2652.0 0.053 0.116
Vary: DPRP WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 5.543 18.731 450.55 1887.36 77.51 42156.0 15.79 195.53 2500.0 0.024 0.107
4 Seater 5.547 19.088 407.71 1929.11 79.53 42839.9 15.45 192.68 2439.0 0.043 0.115
6 Seater 5.547 18.701 389.84 1946.82 79.76 43190.3 15.44 192.46 2437.0 0.067 0.111
Vary: WL WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 22.477 18.725 450.57 1887.31 77.45 42157.0 15.79 195.53 2498.0 0.024 0.106
4 Seater 22.332 18.943 409.36 1927.49 79.54 42806.8 15.50 192.68 2476.0 0.039 0.115
6 Seater 22.340 18.679 389.13 1947.50 79.88 43193.9 15.47 192.29 2468.0 0.063 0.112
Table F.20 Product Variety Tradeoff Study Results for Scenario 3, Allowing 3 Design
Variables to Vary Between Aircraft
Vary: AF AR DPRP WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 86.088 7.637 5.543 450.62 1887.34 77.46 42158.0 15.81 195.53 2498.0 0.024 0.106
4 Seater 103.172 7.717 5.532 407.38 1929.19 79.01 42978.4 15.60 193.54 2400.0 0.043 0.113
6 Seater 85.344 7.944 5.574 385.39 1952.11 79.51 43327.7 15.71 192.89 2413.0 0.062 0.111
Vary: AF AR WL WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 86.803 7.521 22.091 448.27 1889.13 77.87 42151.2 15.77 194.69 2590.0 0.025 0.109
4 Seater 85.631 7.617 22.000 411.11 1925.75 79.66 42754.7 15.60 192.40 2557.0 0.037 0.115
6 Seater 87.706 7.639 21.962 384.38 1952.03 80.24 43255.2 15.55 191.59 2546.0 0.056 0.114
Vary: AF AR WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 91.408 7.701 18.496 451.01 1886.96 77.14 42204.0 15.89 195.95 2484.0 0.024 0.105
4 Seater 85.650 8.008 19.256 399.03 1938.74 79.56 43037.4 15.75 192.68 2407.0 0.037 0.117
6 Seater 85.248 7.974 18.996 378.80 1958.65 79.98 43419.0 15.67 192.25 2403.0 0.059 0.114
Vary: AF DPRP WL WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 85.631 5.547 22.482 450.92 1886.97 77.43 42150.1 15.79 195.53 2499.0 0.024 0.106
4 Seater 90.961 5.563 22.327 411.78 1924.96 79.13 42810.3 15.54 193.11 2468.0 0.041 0.113
6 Seater 91.032 5.579 21.518 379.01 1956.87 80.48 43363.1 15.61 191.35 2635.0 0.052 0.116
Vary: AF DPRP WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 85.631 5.547 18.701 450.92 1886.97 77.43 42150.1 15.79 195.53 2499.0 0.024 0.106
4 Seater 85.858 5.570 19.372 401.75 1934.85 79.70 42949.5 15.40 192.46 2426.0 0.041 0.117
6 Seater 85.631 5.547 18.701 389.84 1946.82 79.76 43190.3 15.44 192.46 2437.0 0.067 0.111
Vary: AF WL WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 91.408 22.360 18.493 451.18 1886.52 77.29 42189.7 15.84 195.85 2516.0 0.023 0.106
4 Seater 97.816 22.355 18.790 407.73 1928.68 79.18 42927.2 15.52 193.24 2441.0 0.041 0.114
6 Seater 91.824 21.649 18.222 390.78 1945.57 79.97 43163.7 15.68 191.92 2625.0 0.059 0.112
Vary: AR DPRP WL WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 7.634 5.553 22.467 450.70 1887.22 77.43 42152.7 15.80 195.48 2500.0 0.024 0.106
4 Seater 7.815 5.553 22.507 413.02 1924.55 79.02 42793.0 15.67 193.45 2431.0 0.043 0.112
6 Seater 7.674 5.666 22.184 384.82 1951.55 79.39 43336.4 15.54 192.89 2476.0 0.058 0.110
Vary: AR DPRP WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 7.639 5.541 18.716 450.45 1887.51 77.49 42159.2 15.80 195.53 2499.0 0.024 0.107
4 Seater 8.046 5.571 19.257 398.16 1939.66 79.42 43072.9 15.78 192.89 2401.0 0.037 0.116
6 Seater 8.020 5.545 18.994 378.31 1959.28 80.00 43419.1 15.72 192.03 2401.0 0.059 0.114
Vary: AR WL WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 7.636 22.475 18.709 450.39 1887.56 77.44 42159.4 15.81 195.48 2499.0 0.023 0.106
4 Seater 7.617 22.482 18.701 415.34 1921.70 79.13 42727.0 15.52 193.32 2451.0 0.045 0.112
6 Seater 7.989 22.380 18.913 379.47 1958.03 79.96 43408.4 15.73 192.25 2426.0 0.057 0.114
Vary: DPRP WL WS WFUEL WEMP DOC PURCH LDMAX VCRMX RANGE PLEV1 PLEV2
2 Seater 5.597 22.416 18.649 450.48 1887.28 77.20 42199.8 15.81 196.11 2504.0 0.023 0.105
4 Seater 5.571 22.212 18.940 408.01 1928.74 79.55 42836.2 15.52 192.68 2499.0 0.036 0.115
6 Seater 5.615 21.550 19.024 373.92 1961.89 80.68 43428.3 15.55 191.06 2625.0 0.050 0.118
The following sample DSIDES files are for solving the GAA Compromise DSP within
the PPCEM. These files are explicitly for the Cdk formulation, which uses the kriging
metamodels based on the 64 point orthogonal array (OA). These particular files are for
Scenario 1 with a high starting point (i.e., all design variables at their upper bound). The .dat
file is given in Section F.4.F.7.1, and the .f file is given in Section F.4.F.7.2.
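The kriging predictor implemented by the subroutines below (compeqn, cormat, and scfxnew) has the standard form yhat(x) = betahat + r(x)' Rinv (y - f*betahat). A minimal Python sketch of the same computations, assuming the Gaussian correlation function (corflag = 2) and using illustrative variable names:

```python
import numpy as np

def kriging_fit(X, y, theta, p=2.0):
    # Correlation matrix R[i,j] = exp(-sum_k theta_k*|x_ik - x_jk|**p),
    # as built by subroutine cormat for the Gaussian correlation (p = 2)
    diffs = np.abs(X[:, None, :] - X[None, :, :])
    R = np.exp(-(theta * diffs**p).sum(axis=2))
    Rinv = np.linalg.inv(R)
    F = np.ones(len(y))                     # constant regression term f = 1
    beta = (F @ Rinv @ y) / (F @ Rinv @ F)  # betahat, as in compeqn
    Rinv_yfb = Rinv @ (y - beta * F)        # Rinv*(y - f*betahat)
    return beta, Rinv_yfb

def kriging_predict(xnew, X, theta, beta, Rinv_yfb, p=2.0):
    # Correlation vector r(x) between xnew and the samples (cf. scfxnew),
    # then yhat = betahat + r(x)'*Rinv*(y - f*betahat)
    r = np.exp(-(theta * np.abs(X - xnew)**p).sum(axis=1))
    return beta + r @ Rinv_yfb
```

Because kriging interpolates, predicting at any of the sample points returns the sampled response exactly, which is a quick sanity check on an implementation.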
ALPOUT : Output Control
1 1 1 1 0 0 0 0 1 1
integer numdv,numsamp,numresp,corflag
character*16 fprefix
integer scfstr,lenstr,i,j,k,dummy,krig
real dummy2
character*16 ftitle
character*24 deckfile,fitsfile,scfname
C
C using: maxsamp=500, maxdv=25, maxresp=20
C
double precision xmat(500,15),betahat(20),
& thetaray(20,15),theta(15),Rinvyfb(20,500),
& Fvect(500,1),FRinv(500),yvect(500),yfb_vect(500),
& resp(500,20)
scfname='expgaucubma1ma2'
if (corflag.eq.2) then
p=2.0
else if (corflag.eq.1) then
p=1.0
end if
call getlen(fprefix,lenstr)
ftitle=fprefix
deckfile=ftitle(1:lenstr) // '.dek'
scfstr=(3*(corflag-1))+1
fitsfile=ftitle(1:lenstr) // '.' // scfname(scfstr:scfstr+2)
& // '.fit'
open(21,file=deckfile,status='old')
open(22,file=fitsfile,status='old')
60 continue
RETURN
END
C
C subroutine to parse filename specified in 'dace.params.h' file.
C
subroutine getlen(string,lenstr)
character*1 blank
character*16 string
parameter (blank=' ')
integer next,lenstr
do 10 next = LEN(string), 1, -1
if (string(next:next).ne.blank) then
lenstr=next
return
end if
10 continue
lenstr=0
if (lenstr.eq.0) then
write(6,*) 'You have not specified a file name prefix'
stop
end if
return
end
C
C subroutine needed for kriging metamodel computation
C
subroutine cormat (xmat,invmat,numsamp,numdv,theta,p,corflag)
integer numdv,numsamp,corflag,ipvt(500)
double precision xmat(500,15),invmat(numsamp,numsamp),
& work(500),p,theta(15)
integer i,j,info
double precision det(2),R
do 300 i = 1,numsamp
do 305 j = i,numsamp
if( i .eq. j ) then
invmat(i,j) = 1.0d0
else
call scfxmat(R,xmat,theta,corflag,p,numdv,numsamp,i,j)
invmat(i,j) = R
invmat(j,i) = invmat(i,j)
endif
305 continue
300 continue
C do 306 i=1,numsamp
C write(6,82) (invmat(i,j),j=1,numsamp)
C 82 format(14(f4.2,1x))
C 306 continue
do 307 i=1,numsamp
work(i)=0.0d0
ipvt(i)=0
307 continue
call dgefa(invmat, numsamp, numsamp, ipvt, info)
if( info .ne. 0 ) then
write(*,*)"Error in DGEFA, info = ",info
stop
endif
call dgedi(invmat, numsamp, numsamp, ipvt, det, work, 11)
C do 310 i=1,numsamp
C write(6,84) (invmat(i,j),j=1,numsamp)
C 84 format(14(f10.2,1x))
C 310 continue
return
end
***********************************************************************
*
* include LINPACK routines used to find inverse of correlation matrix
*
***********************************************************************
*
include 'dgefa.f'
include 'dgedi.f'
*
***********************************************************************
C
C subroutine needed for kriging metamodel computation
C
subroutine compeqn (invmat,Fvect,yvect,FRinv,yfb_vect,
& Rinvyfb,betahat,numsamp,numdv,krig)
integer numsamp,numdv
double precision betahat(20),invmat(numsamp,numsamp),
& Fvect(500,1),FRinv(500),
& Rinvyfb(20,500),yvect(500),yfb_vect(500)
double precision beta_den,beta_num
integer i,j,krig
C
C compute F'Rinv
C
do 310 i=1,numsamp
FRinv(i)=0.0d0
do 315 j=1,numsamp
FRinv(i)=FRinv(i)+Fvect(j,1)*invmat(j,i)
315 continue
310 continue
C
C compute betahat = (F'Rinv*yvect)/(F'Rinv*F)
C
beta_den=0.0d0
beta_num=0.0d0
do 320 i=1,numsamp
beta_den=beta_den+FRinv(i)*Fvect(i,1)
beta_num=beta_num+FRinv(i)*yvect(i)
320 continue
betahat(krig) = beta_num / beta_den
C
C compute y-f'betahat
C
do 330 i = 1,numsamp
yfb_vect(i) = yvect(i) - betahat(krig)*Fvect(i,1)
330 continue
C
C compute Rinv*(y-f'beta)
C
do 340 i=1,numsamp
Rinvyfb(krig,i)=0.0d0
do 345 j=1,numsamp
Rinvyfb(krig,i)=Rinvyfb(krig,i)+invmat(i,j)*yfb_vect(j)
345 continue
340 continue
return
end
C
C subroutine needed for kriging metamodel computation
C
subroutine scfxmat(R,xmat,theta,corflag,p,numdv,numsamp,i,j)
integer i,j,corflag,numdv,numsamp
double precision R,xmat(500,15),theta(15),p
double precision sum,thetadist,dist
integer k
if ((corflag.eq.2).or.(corflag.eq.1)) then
sum=0.0d0
do 120 k = 1,numdv
dist = DABS(xmat(i,k)-xmat(j,k))
sum = sum + theta(k)*((dist)**p)
120 continue
R = DEXP( -1.0d0*sum )
else if (corflag.eq.3) then
sum=1.0d0
do 130 k=1,numdv
dist = ABS(xmat(i,k)-xmat(j,k))
thetadist=dist*theta(k)
if (thetadist.lt.0.5) then
sum=sum*(1.0-6.0*(thetadist**2)+6.0*(thetadist**3))
else if (thetadist.ge.1.0) then
sum=sum*0.0
else
sum=sum*(2.0*(1.0-thetadist)**3)
end if
130 continue
R = sum
else if (corflag.eq.4) then
sum=1.0
do 140 k=1,numdv
dist = ABS(xmat(i,k)-xmat(j,k))
sum = sum*(exp(-theta(k)*dist)*(1.+theta(k)*dist))
140 continue
R = sum
else if (corflag.eq.5) then
sum=1.0
do 150 k=1,numdv
dist = ABS(xmat(i,k)-xmat(j,k))
thetadist=dist*theta(k)
sum = sum*(exp(-thetadist)*
& (1.+thetadist+(thetadist**2)/3.0))
150 continue
R = sum
end if
return
end
C*********************************************************************
C
SUBROUTINE USROUT (NDESV, NOUT, DESVAR, LCONDF, LCONSV, LXFEAS)
C
C Tim Simpson - June 18, 1998
C
C This subroutine postprocesses the solution by taking the final
C converged values and instantiating each aircraft in GASP. The
C resulting deviation function is computed for each aircraft as
C specified by the variable 'iscen'. After processing, a file called
C 'gasp.oa64.cdk.s1h.mod' is created which contains all of the
C necessary information for each instantiation. (The output file
C name can be changed by changing the variable 'outfile'.)
C
C*********************************************************************
INTEGER NDESV, NOUT
REAL DESVAR(NDESV)
LOGICAL LCONDF, LCONSV, LXFEAS
C
CHARACTER*80 LINE
INTEGER NUM, I
PARAMETER (NUM=7)
CHARACTER*10 CHGVAR(NUM)
REAL CHGVAL(NUM)
INTEGER CHRLEN(NUM), INDEX
integer INLINE,NUMOUT,nscale
parameter(NUMOUT=11,nscale=3)
REAL NOISE,WEMP,DOC,ROUGH,WFUEL,PURCH,VCRMX,LDMAX,RANGE
REAL RESPONSE(NUMOUT,nscale),GOALS(7,nscale),
& dplus(7,nscale),dminus(7,nscale),devfcn(4,nscale),
& convio(nscale),CONSTR(6,nscale)
real noisec(nscale),tpspdc(nscale),docc(nscale),
& roughc(nscale),wempc(nscale),wfuelc(nscale),
& wfuelt(nscale),wempt(nscale),doct(nscale),
& ldmaxt(nscale),vcrmxt(nscale),ranget(nscale),
& purcht(nscale),rangec(nscale)
integer ipax
REAL xlo(6),xhi(6),xnew(6)
REAL EMCRU,AR,DPROP,WGS,AF,WS,PAX
integer j,i,iscen
character*28 outfile
iscen=1
outfile='gasp.oa64.cdk.s1h.mod'
open(unit=27,file=outfile,status='unknown')
EMCRU = DESVAR(1)
AR = DESVAR(2)
DPROP = DESVAR(3)
WGS = DESVAR(4)
AF = DESVAR(5)
WS = DESVAR(6)
C
C scale the design variables to [0,1] - necessary for kriging
C
do 10 j=1,6
xnew(j)=(DESVAR(j)-xlo(j))/(xhi(j)-xlo(j))
10 continue
write(6,*) ' '
write(6,*) outfile
write(6,*) ' '
write(6,*) 'Instantiate final values in GASP:'
write(6,*) ' '
write(6,*) ' INPUTS: Act. Value Scaled Value'
write(6,73)
73 format(3(12('-'),1x))
write(6,74) 'Cruise_Spd=',EMCRU,xnew(1)
write(6,74) 'Aspct_Rtio=',AR,xnew(2)
write(6,74) 'Prop_Diamr=',DPROP,xnew(3)
write(6,74) 'Wing_Loadg=',WGS,xnew(4)
write(6,74) 'Eng_Act_Fc=',AF,xnew(5)
write(6,74) 'Seat_Width=',WS,xnew(6)
74 format(A12,1x,f12.4,1x,f12.4)
C
C write instantiations to output file
C
write(27,*) ' '
write(27,*) outfile
write(27,*) ' '
write(27,*) 'Instantiate final values in GASP:'
write(27,*) ' '
write(27,*) ' INPUTS: Act. Value Scaled Value'
write(27,73)
write(27,74) 'Cruise_Spd=',EMCRU,xnew(1)
write(27,74) 'Aspct_Rtio=',AR,xnew(2)
write(27,74) 'Prop_Diamr=',DPROP,xnew(3)
write(27,74) 'Wing_Loadg=',WGS,xnew(4)
write(27,74) 'Eng_Act_Fc=',AF,xnew(5)
write(27,74) 'Seat_Width=',WS,xnew(6)
C
C specify number of passengers
C
do 300 ipax=1,3
PAX = (2.*REAL(ipax))-1
C***********************************************************************
C USER has to redefine the following block to create the GASP input file
C***********************************************************************
CHGVAR(1) = 'EMCRU'
CHRLEN(1) = 5
CHGVAL(1) = EMCRU
CHGVAR(2) = 'AR'
CHRLEN(2) = 2
CHGVAL(2) = AR
CHGVAR(3) = 'DPROP'
CHRLEN(3) = 5
CHGVAL(3) = DPROP
CHGVAR(4) = 'WGS'
CHRLEN(4) = 3
CHGVAL(4) = WGS
CHGVAR(5) = 'AF'
CHRLEN(5) = 2
CHGVAL(5) = AF
CHGVAR(6) = 'WS'
CHRLEN(6) = 2
CHGVAL(6) = WS
CHGVAR(7) = 'PAX'
CHRLEN(7) = 3
CHGVAL(7) = PAX
C**********************************************************************
C user doesn't have to change anything in the following block
C*********************************************************************
C
C create GASP input file: GASP.IN
C
OPEN(UNIT=100,FILE='GASPINPUT.rb3l')
call system('rm GASP.IN')
OPEN(UNIT=110,FILE='GASP.IN',STATUS='NEW')
DO WHILE ( .TRUE. )
READ(100,70,END=1000) LINE
INDEX = INLINE(NUM, LINE, CHGVAR, CHRLEN, CHGVAL)
IF (INDEX .NE. 0) THEN
WRITE(110, 75) CHGVAR(INDEX)(1:CHRLEN(INDEX)),
+ CHGVAL(INDEX)
ELSE
WRITE(110,70) LINE
END IF
70 FORMAT ( A80)
75 FORMAT (2X, A, '=', F10.2, ',')
ENDDO
1000 CONTINUE
CLOSE(100)
CLOSE(110)
C
C execute GASP with current values of design variables
C
CALL SYSTEM('rm performance')
call system('rm GASP.OUT')
print *, ' '
print *, 'Executing GASP for Passenger #',PAX
CALL SYSTEM('gasp <GASP.IN>GASP.OUT')
PRINT *, ' '
PRINT *, 'GASP IS DONE'
C
C read GASP output file named performance into response array
C
OPEN(UNIT=120,FILE='performance', STATUS='UNKNOWN')
DO 200 I =1, NUMOUT
READ (120, *) RESPONSE(I,ipax)
200 CONTINUE
CLOSE(120)
300 continue
write(6,*) ' '
write(6,*) ' OUTPUTS: 1 Pax 3 Pax 5 Pax'
write(6,77)
77 format(4(12('-'),1x))
write(6,78) 'Noise=', (RESPONSE(2,j),j=1,3)
write(6,78) 'Empty_Wgt=', (RESPONSE(3,j),j=1,3)
write(6,78) 'Oper_Cost=', (RESPONSE(5,j),j=1,3)
write(6,78) 'Ride_Rough=', (RESPONSE(6,j),j=1,3)
write(6,78) 'Fuel_Wgt=', (RESPONSE(7,j),j=1,3)
write(6,78) 'Purch_Price=', (RESPONSE(8,j),j=1,3)
write(6,78) 'Max_Range=', (RESPONSE(9,j),j=1,3)
write(6,78) 'Max_Speed=', (RESPONSE(10,j),j=1,3)
write(6,78) 'L/D_Max=', (RESPONSE(11,j),j=1,3)
78 format(A12,1x,f12.4,1x,f12.4,1x,f12.4)
write(6,*) ' '
C
C write instantiations to file
C
write(27,*) ' '
write(27,*) ' OUTPUTS: 1 Pax 3 Pax 5 Pax'
write(27,77)
write(27,78) 'Noise=', (RESPONSE(2,j),j=1,3)
write(27,78) 'Empty_Wgt=', (RESPONSE(3,j),j=1,3)
write(27,78) 'Oper_Cost=', (RESPONSE(5,j),j=1,3)
write(27,78) 'Ride_Rough=', (RESPONSE(6,j),j=1,3)
write(27,78) 'Fuel_Wgt=', (RESPONSE(7,j),j=1,3)
write(27,78) 'Purch_Price=', (RESPONSE(8,j),j=1,3)
write(27,78) 'Max_Range=', (RESPONSE(9,j),j=1,3)
write(27,78) 'Max_Speed=', (RESPONSE(10,j),j=1,3)
write(27,78) 'L/D_Max=', (RESPONSE(11,j),j=1,3)
write(27,*) ' '
C
C compute deviation function for each aircraft
C
do 310 j=1,3
NOISE = RESPONSE(2,j)
WEMP = RESPONSE(3,j)
DOC = RESPONSE(5,j)
ROUGH = RESPONSE(6,j)
WFUEL = RESPONSE(7,j)
PURCH = RESPONSE(8,j)
RANGE = RESPONSE(9,j)
VCRMX = RESPONSE(10,j)
LDMAX = RESPONSE(11,j)
GOALS(1,j) = WFUEL/wfuelt(j)-1.0
GOALS(2,j) = WEMP/wempt(j) - 1.0
GOALS(3,j) = DOC/doct(j) - 1.0
GOALS(4,j) = PURCH/purcht(j) - 1.0
GOALS(5,j) = LDMAX/ldmaxt(j) - 1.0
GOALS(6,j) = VCRMX/vcrmxt(j) - 1.0
GOALS(7,j) = RANGE/ranget(j) - 1.0
do 320 i=1,7
if(GOALS(i,j).ge.0)then
dplus(i,j)=ABS(GOALS(i,j))
dminus(i,j)=0.0
else
dplus(i,j)=0.0
dminus(i,j)=ABS(GOALS(i,j))
endif
320 continue
if(iscen.eq.1)then
devfcn(1,j)=(dplus(1,j)+dplus(2,j)+dplus(3,j)+dplus(4,j)+
& dminus(5,j)+dminus(6,j)+dminus(7,j))/7.
devfcn(2,j)=0.0
devfcn(3,j)=0.0
devfcn(4,j)=0.0
else if(iscen.eq.8) then
devfcn(1,j)=dminus(7,j)
devfcn(2,j)=dminus(5,j)
devfcn(3,j)=(dplus(1,j)+dminus(6,j))/2.
devfcn(4,j)=(dplus(2,j)+dplus(3,j)+dplus(4,j))/3.
endif
310 continue
C
C compute constraint violation for each aircraft
C
do 410 j=1,3
NOISE = RESPONSE(2,j)
WEMP = RESPONSE(3,j)
DOC = RESPONSE(5,j)
ROUGH = RESPONSE(6,j)
WFUEL = RESPONSE(7,j)
RANGE = RESPONSE(9,j)
convio(j)=0.0
do 420 i=1,6
if(CONSTR(i,j).lt.0.0)then
convio(j)=convio(j)+ABS(CONSTR(i,j))
end if
420 continue
410 continue
close(27)
RETURN
END
C*********************************************************************
C user does not need to change the following function block
C**********************************************************************
FUNCTION INLINE(NUM, LINE, CHGVAR, CHRLEN, CHGVAL)
INTEGER INLINE, NUM, I
CHARACTER*80 LINE
CHARACTER*10 CHGVAR(NUM)
INTEGER CHRLEN(NUM)
REAL CHGVAL(NUM)
DO 10 I = 1, NUM
IF (LINE(3:3+CHRLEN(I)) .EQ. CHGVAR(I)(1:CHRLEN(I))
+ //'=') THEN
INLINE = I
GOTO 20
END IF
10 CONTINUE
INLINE = 0
20 END
C***********************************************************************
C
C Subroutine USRSET
C
C Purpose: Evaluate non-linear constraints and goals.
C NOTE - Do not specify the deviation variables
C
C-----------------------------------------------------------------------
C Arguments Name Type Description
C --------- ---- ---- -----------
C Input: IPATH int = 1 evaluate constraints and goals
C = 2 evaluate constraints only
C = 3 evaluate goals only
C NDESV int number of design variables
C MNLNCG int maximum number of nonlinear
C constraints and goals
C NOUT int unit number of output data file
C DESVAR real vector of design variables
C
C Output: CONSTR real vector of constraint values
C GOALS real vector of goal values
C
C Input/Output: none
C-----------------------------------------------------------------------
C Common Blocks: none
C
C Include Files: none
C
C Called from: GCALC
C
C Calls to: none
C-----------------------------------------------------------------------
C Development History
C
C Author: BHARAT PATEL
C Date: 13 MARCH, 1992.
C
C Modifications:
C
C***********************************************************************
C
C
SUBROUTINE USRSET (IPATH, NDESV, MNLNCG, NOUT, DESVAR,
& CONSTR, GOALS)
C
C------------------------------
C Passed variables from RUNALP
C------------------------------
C
INTEGER IPATH, NDESV, MNLNCG, NOUT
C
REAL DESVAR(NDESV)
REAL CONSTR(MNLNCG), GOALS(MNLNCG)
INTEGER VVNUM
C
C------------------------------------------------
C Variables particular to kriging implementation
C------------------------------------------------
C
COMMON /KRIG1/ xmat,betahat,thetaray,Rinvyfb
COMMON /KRIG2/ nmdv,nmsamp,nmresp,corfcn
double precision xmat(500,15),betahat(20),
& thetaray(20,15),Rinvyfb(20,500)
integer nmdv,nmsamp,nmresp,corfcn
integer j,krig
double precision ynew(20),xnew(15),theta(15),
& R,r_xhat(500),yeval
C
C---------------------------------------------
C Local variables particular to c-dsp problem
C---------------------------------------------
C
REAL xlo(6),xhi(6)
REAL EMCRU,AR,DPROP,WGS,AF,WS,
& NOISEm,WEMPm,DOCm,ROUGHm,WFUELm,
& PURCHm,VCRMXm,LDMAXm,RANGEm,
& NOISEv,WEMPv,DOCv,ROUGHv,WFUELv,
& PURCHv,VCRMXv,LDMAXv,RANGEv,
& NOISEcd,NOISEcdu,ROUGHcd,ROUGHcdu,
& VCRMXcdl,LDMAXcdl,RANGEcdl,
& VCRMXcd,LDMAXcd,RANGEcd,
& WFUELcd,WFUELcdu,WFUELcdl,
& WEMPcd,WEMPcdu,WEMPcdl,
& PURCHcd,PURCHcdu,PURCHcdl,
& DOCcd,DOCcdu
EMCRU = DESVAR(1)
AR = DESVAR(2)
DPROP = DESVAR(3)
WGS = DESVAR(4)
AF = DESVAR(5)
WS = DESVAR(6)
C
C scale the design variables to [0,1] - necessary for kriging
C
do 10 j=1,nmdv
xnew(j)=(DESVAR(j)-xlo(j))/(xhi(j)-xlo(j))
10 continue
VVNUM=VVNUM+1
C*********************************************************************
C
C kriging prediction block; Tim Simpson - June 18, 1998
C *user does not have to change anything in this block*
C
C*********************************************************************
do 40 krig=1,nmresp
C
C transfer theta parameters for current response (designated by krig)
C
do 50 j=1,nmdv
theta(j)=thetaray(krig,j)
50 continue
C
C compute correlation vector for xnew
C
do 60 j=1,nmsamp
call scfxnew(R,xnew,xmat,theta,corfcn,nmdv,nmsamp,j)
r_xhat(j)=R
60 continue
C
C calculate ynew of current response
C
yeval = 0.0d0
do 220 j=1,nmsamp
220 yeval=yeval+r_xhat(j)*Rinvyfb(krig,j)
ynew(krig) = yeval + betahat(krig)
C
C increment variable krig to predict next ynew
C
40 continue
C
C equate predicted ynew vector to appropriate responses
C
NOISEm = REAL(ynew(1))
WEMPm = REAL(ynew(2))
DOCm = REAL(ynew(3))
ROUGHm = REAL(ynew(4))
WFUELm = REAL(ynew(5))
PURCHm = REAL(ynew(6))
RANGEm = REAL(ynew(7))
VCRMXm = REAL(ynew(8))
LDMAXm = REAL(ynew(9))
NOISEv = REAL(ynew(10))
WEMPv = REAL(ynew(11))
DOCv = REAL(ynew(12))
ROUGHv = REAL(ynew(13))
WFUELv = REAL(ynew(14))
PURCHv = REAL(ynew(15))
RANGEv = REAL(ynew(16))
VCRMXv = REAL(ynew(17))
LDMAXv = REAL(ynew(18))
C
C compute Cdk values
C
NOISEcdu=(75.-NOISEm)/(NOISEv*(3.**0.5))
NOISEcd=NOISEcdu
ROUGHcdu=(2.00-ROUGHm)/(ROUGHv*(3.**0.5))
ROUGHcd=ROUGHcdu
DOCcdu=(60.-DOCm)/(DOCv*(3.**0.5))
DOCcd=DOCcdu
WEMPcdl=(WEMPm-1900.)/(WEMPv*(3.**0.5))
WEMPcdu=(2000.-WEMPm)/(WEMPv*(3.**0.5))
WEMPcd=AMIN1(WEMPcdl,WEMPcdu)
WFUELcdl=(WFUELm-350.)/(WFUELv*(3.**0.5))
WFUELcdu=(450.-WFUELm)/(WFUELv*(3.**0.5))
WFUELcd=AMIN1(WFUELcdl,WFUELcdu)
PURCHcdl=(PURCHm-41000.)/(PURCHv*(3.**0.5))
PURCHcdu=(43000.-PURCHm)/(PURCHv*(3.**0.5))
PURCHcd=AMIN1(PURCHcdu,PURCHcdl)
VCRMXcdl=(VCRMXm-200.0)/(VCRMXv*(3.**0.5))
VCRMXcd=VCRMXcdl
RANGEcdl=(RANGEm-2500.0)/(RANGEv*(3.**0.5))
RANGEcd=RANGEcdl
LDMAXcdl=(LDMAXm-17.0)/(LDMAXv*(3.**0.5))
LDMAXcd=LDMAXcdl
C
C print out corresponding values of constraint and goal response
C
PRINT *, ' '
PRINT *, ' OUTPUTS: Mean Variance Cdk'
write(6,77)
77 format(4(12('-'),1x))
write(6,78) 'Noise=', NOISEm, NOISEv, NOISEcd
write(6,78) 'Empty_Wgt=', WEMPm, WEMPv, WEMPcd
write(6,78) 'Oper_Cost=',DOCm, DOCv, DOCcd
write(6,78) 'Ride_Rough=', ROUGHm, ROUGHv, ROUGHcd
write(6,78) 'Fuel_Wgt=', WFUELm, WFUELv, WFUELcd
write(6,78) 'Purch_Price=', PURCHm, PURCHv, PURCHcd
write(6,78) 'Max_Range=', RANGEm, RANGEv, RANGEcd
write(6,78) 'Max_Speed=', VCRMXm, VCRMXv, VCRMXcd
write(6,78) 'L/D_Max=', LDMAXm, LDMAXv, LDMAXcd
78 format(A12,1x,f12.4,1x,f12.4,1x,f12.5)
print *, ' '
C
C 3.0 Evaluate non-linear constraints
C
IF (IPATH .EQ. 1 .OR. IPATH .EQ. 2) THEN
C
C constraint formulations for restricting appropriate Cdk >= 1
C
CONSTR(1) = NOISEcd - 1.0
CONSTR(2) = (80.-DOCm)/(DOCv*(3.**0.5)) - 1.0
CONSTR(3) = ROUGHcd - 1.0
CONSTR(4) = (2200.-WEMPm)/(WEMPv*(3.**0.5)) - 1.0
CONSTR(5) = (450.-WFUELm)/(WFUELv*(3.**0.5)) - 1.0
CONSTR(6) = (RANGEm-2000.)/(RANGEv*(3.**0.5)) - 1.0
END IF
C
C 4.0 Evaluate non-linear goals
C
IF (IPATH .EQ. 1 .OR. IPATH .EQ. 3) THEN
C
C specify goals; want all Cdk >= 1.
C
GOALS(1) = WFUELcd-1.0
GOALS(2) = WEMPcd - 1.0
GOALS(3) = DOCcd - 1.0
GOALS(4) = PURCHcd - 1.0
GOALS(5) = LDMAXcd - 1.0
GOALS(6) = VCRMXcd - 1.0
GOALS(7) = RANGEcd - 1.0
END IF
C
C 5.0 Return to calling routine
C
RETURN
END
C
C subroutine needed for kriging metamodel computation to compute correlation
C between sample points and new prediction point
C
subroutine scfxnew(R,xnew,xmat,theta,corfcn,nmdv,nmsamp,j)
integer j,corfcn,nmdv,nmsamp
double precision R,xnew(15),
& xmat(500,15),theta(15)
double precision sum,thetadist,dist,p
integer k
if ((corfcn.eq.2).or.(corfcn.eq.1)) then
if(corfcn.eq.2)then
p=2.
else
p=1.
endif
sum=0.0d0
do 400 k = 1,nmdv
dist = ABS(xnew(k)-xmat(j,k))
sum = sum + theta(k)*((dist)**p)
400 continue
R = exp( -1.0d0*sum )
else if (corfcn.eq.3) then
sum=1.0d0
do 410 k=1,nmdv
dist = ABS(xnew(k)-xmat(j,k))
thetadist=theta(k)*dist
if (thetadist.lt.0.5) then
sum=sum*(1.0-6.0*(thetadist**2)+6.0*(thetadist**3))
else if (thetadist.ge.1.0) then
sum=sum*0.0
else
sum=sum*(2.0*(1.0-thetadist)**3)
end if
410 continue
R = sum
else if (corfcn.eq.4) then
sum=1.0
do 420 k=1,nmdv
dist = ABS(xnew(k)-xmat(j,k))
sum = sum*(exp(-theta(k)*dist)*(1.+theta(k)*dist))
420 continue
R = sum
else if (corfcn.eq.5) then
sum=1.0
do 430 k=1,nmdv
dist = ABS(xnew(k)-xmat(j,k))
thetadist=theta(k)*dist
sum = sum*(exp(-thetadist)*
& (1.+thetadist+(thetadist**2)/3.0))
430 continue
R = sum
end if
return
end
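The Cdk expressions in USRSET above all follow one pattern: the distance from the predicted mean response to each specification limit, normalized by sqrt(3) times the predicted variation (the half-width of a uniform distribution), with two-sided limits taking the minimum of the lower and upper indices. A hedged Python sketch of that pattern (the function name and limit values are illustrative, not from the listing):

```python
import math

def cdk(mean, var, lower=None, upper=None):
    # Design capability index: Cdk = min(Cdl, Cdu), where
    #   Cdl = (mean - LSL)/(sqrt(3)*var),  Cdu = (USL - mean)/(sqrt(3)*var)
    # Only the limits that are specified contribute (one-sided cases),
    # mirroring e.g. WEMPcd = AMIN1(WEMPcdl, WEMPcdu) in USRSET.
    vals = []
    if lower is not None:
        vals.append((mean - lower) / (var * math.sqrt(3)))
    if upper is not None:
        vals.append((upper - mean) / (var * math.sqrt(3)))
    return min(vals)
```

For example, the empty-weight index with limits 1900 and 2000 lb is cdk(WEMPm, WEMPv, lower=1900.0, upper=2000.0), and the goals drive each index toward Cdk >= 1.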
The following sample DSIDES files are for solving the GAA Compromise DSP for an
individual (benchmark) aircraft. These files are explicitly for linking GASP to DSIDES. These
particular files are for the five passenger (six seater) GAA for Scenario 3 with a low starting point
(i.e., all design variables at their lower bound). The .dat file is given in Section F.5.F.7.1, and the
.f file is given in Section F.5.F.7.2.
AR 2 7.0 11.0 7.0 : aspect ratio
DPRP 3 5.0 5.96 5.0 : propeller diameter
WL 4 19.0 25.0 19.0 : wing loading
AF 5 85.0 110.0 85.0 : engine activity factor
WS 6 14.0 20.0 14.0 : seat width
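The bounds in the .dat fragment above (name, index, lower bound, upper bound, starting value) are also what the Fortran routines use when they "scale the design variables to [0,1]" before each kriging evaluation. The scaling is a one-liner:

```python
def scale01(x, lo, hi):
    # xnew = (x - lo)/(hi - lo), as in the scaling loops before kriging
    return (x - lo) / (hi - lo)
```

For instance, an aspect ratio of 9.0 with bounds [7.0, 11.0] scales to 0.5.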
C---------------------------------------
INTEGER IPATH, NDESV, MNLNCG, NOUT
C
REAL DESVAR(NDESV)
REAL CONSTR(MNLNCG), GOALS(MNLNCG)
C
C---------------------------------------
C Local variables:
C---------------------------------------
C
C******************************************************************
C 1.0 Set the values of the local design variables (optional)
C******************************************************************
C
C The user has to declare the types of the variables used in the
C following block
C
REAL EMCRU,AR,DPROP,WGS,AF,WS,PAX,
& TPSPD,NOISE,WEMP,DOC,ROUGH,WFUEL,PURCH,VCRMX,LDMAX,RANGE
INTEGER VVNUM
C
C******************************************************************
C
CHARACTER*80 LINE
INTEGER NUM, I
PARAMETER (NUM=7)
CHARACTER*10 CHGVAR(NUM)
REAL CHGVAL(NUM)
INTEGER CHRLEN(NUM), INDEX
INTEGER INLINE
INTEGER NUMOUT
integer nscale
PARAMETER (NUMOUT=11,nscale=3)
REAL RESPONSE(NUMOUT)
real noisec(nscale),tpspdc(nscale),docc(nscale),
& roughc(nscale),wempc(nscale),wfuelc(nscale),
& wfuelt(nscale),wempt(nscale),doct(nscale),
& ldmaxt(nscale),vcrmxt(nscale),ranget(nscale),
& purcht(nscale)
integer ipax
C save and initialize the run counter so it persists across calls
SAVE VVNUM
DATA VVNUM /0/
VVNUM=VVNUM+1
C
C define array of constraints for each response for each aircraft
C
data noisec / 75., 75., 75. /
data tpspdc / 850., 850., 850. /
data docc / 80., 80., 80. /
data roughc / 2., 2., 2. /
data wempc / 2200., 2200., 2200. /
data wfuelc / 450., 475., 500. /
C
C define array of goal targets for each response for each aircraft
C
data wfuelt / 450., 400., 350. /
data wempt / 1900., 1950., 2000. /
data doct / 60., 60., 60. /
data purcht / 41000., 42000., 43000. /
data ldmaxt / 17., 17., 17. /
data vcrmxt / 200., 200., 200. /
data ranget / 2500., 2500., 2500. /
C******************************************************************
C
C Define the design variables
C
EMCRU = DESVAR(1)
AR = DESVAR(2)
DPROP = DESVAR(3)
WGS = DESVAR(4)
AF = DESVAR(5)
WS = DESVAR(6)
C
C specify number of passengers
C
PAX = 5.0
ipax = INT((PAX+1.)/2.)
C***********************************************************************
C USER has to redefine the following block to create the GASP input file
C***********************************************************************
CHGVAR(1) = 'EMCRU'
CHRLEN(1) = 5
CHGVAL(1) = EMCRU
CHGVAR(2) = 'AR'
CHRLEN(2) = 2
CHGVAL(2) = AR
CHGVAR(3) = 'DPROP'
CHRLEN(3) = 5
CHGVAL(3) = DPROP
CHGVAR(4) = 'WGS'
CHRLEN(4) = 3
CHGVAL(4) = WGS
CHGVAR(5) = 'AF'
CHRLEN(5) = 2
CHGVAL(5) = AF
CHGVAR(6) = 'WS'
CHRLEN(6) = 2
CHGVAL(6) = WS
CHGVAR(7) = 'PAX'
CHRLEN(7) = 3
CHGVAL(7) = PAX
C**********************************************************************
C user doesn't have to change anything in the following block
C*********************************************************************
C
C create GASP input file: GASP.IN
C
OPEN(UNIT=100,FILE='GASPINPUT.rb3l')
call system('rm GASP.IN')
OPEN(UNIT=110,FILE='GASP.IN',STATUS='NEW')
DO WHILE ( .TRUE. )
READ(100,70,END=1000) LINE
INDEX = INLINE(NUM, LINE, CHGVAR, CHRLEN, CHGVAL)
IF (INDEX .NE. 0) THEN
WRITE(110, 75) CHGVAR(INDEX)(1:CHRLEN(INDEX)),
+ CHGVAL(INDEX)
ELSE
WRITE(110,70) LINE
END IF
70 FORMAT ( A80)
75 FORMAT (2X, A, '=', F10.2, ',')
ENDDO
1000 CONTINUE
CLOSE(100)
CLOSE(110)
C
C print out values of design variables at current step
C
PRINT *, ''
PRINT *, 'INPUTS Run #', VVNUM
PRINT *, ''
print *, 'Cruise_Spd=',EMCRU
PRINT *, 'Aspct_Rtio=',AR
PRINT *, 'Prop_Diamr=',DPROP
print *, 'Wing_Loadg=',WGS
print *, 'Eng_Act_Fc=',AF
print *, 'Seat_Width=',WS
C
C execute GASP with current values of design variables
C
CALL SYSTEM('rm performance')
call system('rm GASP.OUT')
CALL SYSTEM('gasp <GASP.IN>GASP.OUT')
PRINT *, ''
PRINT *, 'GASP IS DONE'
C
C read GASP output file named performance into response array
C
OPEN(UNIT=120,FILE='performance', STATUS='UNKNOWN')
DO 200 I =1, NUMOUT
READ (120, *) RESPONSE(I)
200 CONTINUE
CLOSE(120)
C
C equate response array values to goal and constraint variables;
C response(4) in performance file is SFC which is not used
C
TPSPD = RESPONSE(1)
NOISE = RESPONSE(2)
WEMP = RESPONSE(3)
DOC = RESPONSE(5)
ROUGH = RESPONSE(6)
WFUEL = RESPONSE(7)
PURCH = RESPONSE(8)
RANGE = RESPONSE(9)
VCRMX = RESPONSE(10)
LDMAX = RESPONSE(11)
C
C print out corresponding values of constraint and goal response
C
PRINT *, ''
PRINT *, 'OUTPUTS'
PRINT *, ''
PRINT *, 'Tipspd=', TPSPD, ' Noise=', NOISE
PRINT *, 'Empty Wgt=', WEMP, ' Oper Cost=',DOC
PRINT *, 'Ride Rough=', ROUGH, ' Max Range=', RANGE
PRINT *, 'Fuel Wgt=', WFUEL, ' Purch Price=', PURCH
PRINT *, 'Max Speed=', VCRMX, ' L/D Max=', LDMAX
C
C*************************************************************************
C 3.0 Evaluate non-linear constraints
C**************************************************************************
C
IF (IPATH .EQ. 1 .OR. IPATH .EQ. 2) THEN
C
C specify constraints; all constraints are less-than constraints
C
CONSTR(1) = 1.0 - NOISE/noisec(ipax)
CONSTR(2) = 1.0 - DOC/docc(ipax)
CONSTR(3) = 1.0 - ROUGH/roughc(ipax)
CONSTR(4) = 1.0 - WEMP/wempc(ipax)
CONSTR(5) = 1.0 - WFUEL/wfuelc(ipax)
END IF
C
C**************************************************************************
C 4.0 Evaluate non-linear goals
C**************************************************************************
IF (IPATH .EQ. 1 .OR. IPATH .EQ. 3) THEN
C
C specify goals
C
GOALS(1) = WFUEL/wfuelt(ipax)-1.0
GOALS(2) = WEMP/wempt(ipax) - 1.0
GOALS(3) = DOC/doct(ipax) - 1.0
GOALS(4) = PURCH/purcht(ipax) - 1.0
GOALS(5) = LDMAX/ldmaxt(ipax) - 1.0
GOALS(6) = VCRMX/vcrmxt(ipax) - 1.0
GOALS(7) = RANGE/ranget(ipax) - 1.0
END IF
C
C 5.0 Return to calling routine
C
RETURN
END
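The synthesis routine above normalizes every less-than constraint as 1 - response/cap, which is nonnegative exactly when the response is within its cap, and every goal as response/target - 1, which is zero when the response hits its target. A minimal Python sketch of this sign convention (the function names are illustrative, not part of the dissertation's code):

```python
def less_than_constraint(response, cap):
    """Nonnegative when response <= cap, as in CONSTR(k) = 1 - response/cap."""
    return 1.0 - response / cap

def goal_deviation(response, target):
    """Zero when the response meets its target, as in GOALS(k) = response/target - 1."""
    return response / target - 1.0

# e.g. an empty weight of 2100 lb against a 2200 lb cap and a 1900 lb target:
# the constraint is satisfied (positive), but the goal deviation is positive
# because the aircraft is heavier than its target.
```

Dividing by the cap or target makes each constraint and goal dimensionless, so responses with very different magnitudes (e.g. purchase price in dollars versus L/D) can be weighed on a common scale in the Compromise DSP.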
C*********************************************************************
C user does not need to change the following function block
C**********************************************************************
FUNCTION INLINE(NUM, LINE, CHGVAR, CHRLEN, CHGVAL)
INTEGER INLINE, NUM, I
CHARACTER*80 LINE
CHARACTER*10 CHGVAR(NUM)
INTEGER CHRLEN(NUM)
REAL CHGVAL(NUM)
DO 10 I = 1, NUM
IF (LINE(3:3+CHRLEN(I)) .EQ. CHGVAR(I)(1:CHRLEN(I))
+ //'=') THEN
INLINE = I
GOTO 20
END IF
10 CONTINUE
INLINE = 0
20 END
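FUNCTION INLINE above scans each template line for a 'NAME=' tag starting in column 3 and returns the index of the matching design variable (or 0 if none matches); the calling loop then rewrites that line with the current value via FORMAT (2X, A, '=', F10.2, ','). A hedged Python sketch of the same template-substitution scheme (function names are illustrative, not part of the dissertation's code):

```python
def inline(line, names):
    """1-based index of the variable whose 'NAME=' tag starts in column 3
    of the template line, or 0 if none matches (mirrors FUNCTION INLINE)."""
    for i, name in enumerate(names, start=1):
        tag = name + "="
        # Fortran LINE(3:3+CHRLEN) is the 0-based slice line[2:2+len(tag)]
        if line[2:2 + len(tag)] == tag:
            return i
    return 0

def rewrite_template(template_lines, names, values):
    """Copy template lines, replacing tagged lines with '  NAME=value,'
    in the style of FORMAT (2X, A, '=', F10.2, ',')."""
    out = []
    for line in template_lines:
        idx = inline(line, names)
        if idx:
            out.append("  %s=%10.2f," % (names[idx - 1], values[idx - 1]))
        else:
            out.append(line)      # untagged lines pass through unchanged
    return out
```

This is why the template file (GASPINPUT.rb3l) must keep each substitutable variable on its own line with the tag beginning in column 3: the scan is purely positional, not a general parse of the namelist input.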