Technological Forecasting Based On Segmented Rate of Change
PDXScholar
Dissertations and Theses
Winter 3-16-2015
by
Dong-Joon Lim
Doctor of Philosophy
in
Technology Management
Dissertation Committee:
Timothy R. Anderson, Chair
Tugrul U. Daim
Antonie J. Jetter
Wayne W. Wakeland
Abstract
Consider the following questions in the early stage of new product development.
What should be the target market for proposed design concepts? Who will be the
competitors, and how fast are they moving forward in terms of performance
improvements? Ultimately, are the current design concept and targeted launch date
feasible and competitive?
To answer these questions, there is a need to integrate product benchmarking with
the assessment of performance improvement so that analysts can have a risk measure for
their R&D target setting practices. Consequently, this study presents how time series
benchmarking analysis can be used to assist in scheduling new product releases.
Specifically, the proposed model attempts to estimate the auspicious time by which
proposed design concepts will be available as competitive products, taking into
account the rate of performance improvement expected in a target segment.
The empirical illustration of commercial airplane development shows that this
new method provides valuable information, such as dominating designs, distinct
segments, and the potential rate of performance improvement, which can be utilized in
the early stage of new product development. In particular, six dominant airplanes are
identified with corresponding local RoCs; inter alia, the technological advancement
toward long-range, wide-body airplanes represents very competitive segments of the
market with rapid change. The resulting individualized RoCs are able to estimate the
arrivals of four different design concepts, which is consistent with what has happened
in the commercial airplane industry since 2007.
Table of Contents
Abstract............................................................................................................................... i
List of Tables ................................................................................................................... vii
List of Figures................................................................................................................. viii
Glossary ............................................................................................................................ ix
List of Symbols ................................................................................................................. xi
I. Motivation ...................................................................................................................1
1.1 Introduction ...........................................................................................................1
1.2 Problem Statement ................................................................................................5
1.3 Research Objective ...............................................................................................8
1.4 Overview of Dissertation ....................................................................................10
B. Economic (Cost-Benefit) Analysis........................................36
C. Modeling (Simulation and System Dynamics) .....................................36
D. Extrapolation .........................................................................................37
2.3 Focused Review on Multi-Attribute Extrapolation Methods..............................39
2.3.1 Intuitive Models .........................................................................................39
A. Scoring Model .......................................................................................39
B. Technology Development Envelope .....................................................40
2.3.2 Parametric Frontier Models .......................................................................41
A. Planar Frontier Model (Hyper-plane) ...................................................41
B. Corrected Ordinary Least Squares ........................................................43
C. Stochastic Frontier Analysis .................................................................44
D. Ellipsoid Frontier Model .......................................................................45
E. Multi-Dimensional Growth Model ........................................................47
2.3.3 Non-parametric Frontier Models ...............................................................49
A. Data Envelopment Analysis ..................................................................49
B. Stochastic (Chance-constrained) Data Envelopment Analysis .............54
2.4 Summary of Critical Review ..............................................................................56
C. Risk Analysis.........................................................................................85
D. Proof of Concept ...................................................................................88
4.1.2 Validation using Past Datasets ...................................................................90
A. Forecasting Accuracy Evaluation Techniques ......................................90
B. Test Results from Earlier Studies ..........................................................92
4.2 Ex Ante Analysis: Focused Application..............................................................95
4.2.1 Exascale Supercomputer Development .....................................................95
A. Background ...........................................................................................95
B. Analysis .................................................................................................99
a) Dataset ...............................................................................................99
b) Model building ................................................................................101
c) Model validation .............................................................................108
d) Forecasting ......................................................................................111
C. Discussion ...........................................................................................114
D. Summary of Findings ..........................................................................118
V. Conclusion ...............................................................................................................120
5.1 Contributions to Application Area ....................................................................120
5.1.1 Exascale Supercomputer Development ...................................................120
5.2 Contributions to Managerial Insight .................................................................122
5.2.1 Risk Analysis ...........................................................................................122
5.2.2 New Product Target Setting .....................................................................123
5.3 Contributions to Methodological Development................................................124
5.3.1 Identification of Local Rate of Change ....................................................124
5.3.2 Identification of Individualized Rate of Change ......................................124
5.3.3 A Finite Forecast for an Infeasible Target ...............................................125
5.4 Limitations ........................................................................................................126
5.5 Future Work Directions ....................................................................................127
5.5.1 Innovative Measure ..................................................................................127
5.5.2 Alternate Efficiency Measures ................................................................127
References .......................................................................................................................131
Appendix. Model Building Guideline...........................................................................158
List of Tables
List of Figures
Glossary

AHP: Analytical Hierarchy Process
AMD: Advanced Micro Devices
C3
CAPM: Capital Asset Pricing Model
CI: Confidence Interval
CM: Customer Matrix
COLS: Corrected Ordinary Least Squares
CPU: Central Processing Unit
CRA: Comparative Risk Assessment
CRS: Constant Returns to Scale
DEA: Data Envelopment Analysis
DMU: Decision Making Unit
DOE: Department of Energy
DRS: Decreasing Returns to Scale
DVD: Digital Versatile Disc
EIA: Environmental Impact Analysis
EIS
FCM: Fuzzy Cognitive Map
FLOPS: Floating-point Operations Per Second
FMEA: Failure Mode and Effects Analysis
FS: Frontier Shift
GDF
GPU: Graphics Processing Unit
HDM: Hierarchical Decision Model
HPC: High Performance Computing
IARPA: Intelligence Advanced Research Projects Activity
IO: Input-Oriented
IRS: Increasing Returns to Scale
LCC: Life Cycle Cost
LCD: Liquid Crystal Display
MCDA(M): Multi-Criteria Decision Analysis (Making)
MDGM: Multi-Dimensional Growth Model
MPI: Malmquist Productivity Index
MW: Mega-Watt
NDRS: Non-Decreasing Returns to Scale
NEC
NFC
NIRS: Non-Increasing Returns to Scale
NNSA: National Nuclear Security Administration
NSF: National Science Foundation
NUDT: National University of Defense Technology
OEM: Original Equipment Manufacturer
OLED: Organic Light Emitting Diode
OO: Output-Oriented
PFE
PPS: Production Possibility Set
P-SBM
RAICS
RAM: Range-Adjusted Model
RCA: Revealed Comparative Advantage
Rmax: Maximal LINPACK performance
RMSE: Root Mean Square Error
RoC: Rate of Change
RTS: Returns To Scale
SBM: Slack-Based Model
SDEA: Stochastic Data Envelopment Analysis
SFA: Stochastic Frontier Analysis
SOA: State of the Art
TDE: Technology Development Envelope
TEC: Technical Efficiency Change
TFDEA: Technology Forecasting using Data Envelopment Analysis
UHD: Ultra-High Definition
VRS: Variable Returns to Scale
W-LAN: Wireless Local Area Network
List of Symbols

j (k): Index of products (technologies); k denotes the product under evaluation
x_ij: Input i of product j
y_rj: Output r of product j
I. MOTIVATION
1.1 INTRODUCTION
If the future unfolded as foretold, individuals and governments would reap great
benefits from actions taken in advance. In addition to handsome payoffs from Wall
Street, semiconductor companies could perfectly meet their market demands with newly
built fabs, and sportswear companies could maximize advertising effects by signing
rising sports stars to long-term contracts. However, in practice, black swan events
often render our plans plainly useless or ineffective [1], [2]. Hence, the choice
between alternative pathways under estimated future states may significantly alter
competitive performance. Note that decision making is inseparable from how the future is
expressed. Indeed, we explicitly or implicitly pay attention to the trends and ideas that are
shaping the future, and these form the basis of our everyday decisions.
The futures research community theoretically differentiates prediction from forecasting. A
prediction is concerned with a future that is preordained, where no amount of action in the
present can influence the outcomes. It is therefore an apodictic, i.e., non-probabilistic,
statement made at an absolute confidence level about the future [3]. Clearly, the goodness of a
prediction lies in whether it eventually comes true. A forecast, on the other hand, is a
probabilistic statement made at a relatively broad confidence level about the future.
Fundamentally, it aims to affect the decision making process by investigating possible
signals related to future events, using systematic logic that forecasters must be able to
articulate and defend [4]. Thus, except for particular purposes (e.g., a benchmark study), a
good forecast is determined not by whether it eventually came true but by whether it
much the speed of such progress might need to be adjusted. However, as technology
systems become more sophisticated, the rate of change varies more dramatically due to
the maturity levels of component technologies [7]. This structural complexity makes
today's forecasting even more challenging, which leads to the question: what is the best
way to combine the growth patterns of the various attributes used to describe multi-objective
technology systems?
To answer the above question, two things must be considered: multi-attribute
evaluation and technology segmentation. Multi-attribute evaluation strives to define the
goodness of technology systems that consist of different levels of subsystems. Figure 1
illustrates the difficulty of doing this. Technology B seems to have made a disruption in a
high-end market, while technology A is overshooting the market in terms of technical
capability 1. However, technology 2 might have been superior and recently challenged by
technology 1 on a different technical dimension. Possibly, different dynamics are taking
place in other dimensions as well, where the levels of market demand also vary. This
implies that a single performance measure may no longer be capable of capturing
advancement in a new direction, which makes the holistic assessment of technology
systems difficult. Therefore, it is critical to examine not only which performance
measures are playing a major role in current technological progress but also which
alternate technologies show disruptive potential with respect to emerging performance
measures.
Technology segmentation is related to the identification of homogeneous technology
clusters. Technologies belonging to the same cluster may have a similar mix of technical
capabilities whereby they satisfy similar target markets. From the technological
assessment of performance improvement so that analysts can have a risk measure for
their product launch strategy.
The above-mentioned problems can be summarized as below.

Table 1. Research gap #1
Frontier analysis models attempt to form a surface that represents the same level
of technology systems at a given point in time. The evolution of these surfaces is then
monitored to capture the rate of change, by which future technological possibilities can be estimated.
In the case of parametric frontier methods, an iso-time frontier is constructed as a
functional combination from individual growth curves. Specifically, actual observations
are fitted to an a priori defined functional form, and those growth patterns are combined
together to constitute an iso-time frontier. Therefore, it is difficult to identify distinct
technological segments from the resulting frontier.
Non-parametric frontier methods, on the other hand, have an advantage with regard
to the technology segmentation since the frontier is directly constructed by dominating
technologies that are located on the frontier. This enables the model not only to
characterize each frontier segment but also to identify proper segments that dominated
technologies belong to. However, current non-parametric frontier models don't
incorporate this property into the forecasting process. Instead, they simply aggregate the
rates of change captured from surpassed technologies to indicate the technological
progress as a whole.
Research Objective
Once the local rates of change are ascertained with respect to each frontier segment,
they can be utilized to obtain the individualized rate of change for each forecasting target.
This procedure makes it possible for the model to apply the customized progress rate
suitable for each forecasting target, thereby reflecting the characteristics of identified
segments into the forecast. This leads to the second research question as below.
II. BACKGROUND
promise [30]. Therefore, until the new approach has established legitimacy as a
worthwhile endeavor, great effort is often spent exploring different paths to identify
meaningful and feasible drivers of advancement. For example, OLED (organic light
emitting diode) technology has recently been introduced as a new alternative to LCD
(liquid crystal display) in the flat panel industry. However, it requires a sufficient
amount of time and effort to identify the direction of incremental innovation.
In the growth stage, a new technological platform finally crosses a threshold with
continuous engineering effort, which allows rapid progress [13]. The emergence of a
dominant design, in particular, plays a key role not only to attract researchers to
participate in its development but also to coalesce product characteristics and consumer
preferences [12]. The cumulative efforts reap the greatest improvement per unit of effort,
which creates a virtuous cycle by stimulating more attention devoted to the current
technological platform.
In the maturity stage, the progress slowly and asymptotically reaches a ceiling [10].
Utterback suggested that as a market ages, the focus of innovation shifts from product to
process innovation [31]. Sahal also argued that technology has inherent limits of scale
and/or complexity which restrict the steady growth of performance improvement [14]. As
such, a marginal performance increase requires more cost and engineering efforts, which
eventually deglamorize the current technological platform. As the current technological
platform loses its luster, the research community searches for alternative paths and
rapidly loses cohesion, which reduces the switching costs to the upcoming technology.
Although the S-shaped growth pattern has been observed in a number of studies that
conducted retrospective analysis on industry dynamics, it is well known that fitting some
Figure 3 Ambiguity of using time as a proxy for engineering effort (modified from [32])
The second assumption is that the upper limit of a growth curve, i.e., L, is given.
However, it is rare that the true limit of a technology is known in advance, and there is
often considerable disagreement about what the limits of a technology are. A well-known
case of misperception can be found in the disk drive industry [35]. In 1979, IBM had
reached what it perceived as a density limit of ferrite-oxide-based disk drives; therefore,
the company moved to developing thin-film technology, which had a greater potential for
increasing density. Hitachi and Fujitsu, however, continued to ride the ferrite-oxide
S-curve and ultimately achieved densities eight times greater than the density that
IBM had perceived to be a limit.
Due to the lack of information, researchers often have no choice but to employ
regression-based calculation to estimate the upper limits of technology growth curves [6].
However, this approach is controversial in the literature [36], [37]. Danner's simulation
showed that the accuracy of the resulting limit is highly sensitive to any error present in
the segment of available data [32]. Martino also argued that the productivity of early
technology development is only minimally influenced by the upper limit, because
historical data from the early stages of development contain little information as to the
location of the upper limit [38]. In this sense, he claimed that even a small error in the
upper limit estimation can result in a fairly significant error in the forecast.
The third assumption is that the appropriate growth model is predefined. However,
similar to the estimation of upper limits, a growth curve should not be selected based on
goodness of fit from historic data but on matching the behavior of the selected growth
curve to the underlying dynamics of technology growth [39], [40]. In fact, there are
various equations that represent S-shaped curves, which can be categorized into two main
groups: absolute and relative models. The former quantifies the technical capability, y_t,
as a function of the independent parameter time, t, whereas the latter quantifies the rate of
change in technical capability, Δy_t, as a function of the most recently achieved level of
technical capability, y_{t−1} (see Table 8) [32], [41].
Young's study showed that relative models were more accurate than absolute models;
in particular, both the Bass and Harvey growth models performed well under most
circumstances [37]. Danner claims that this may be due to the inherent characteristic of
the relative model that each new data point is anchored to the previous data point. That is,
changes in the relative model are proportional to both the progress to date and the
distance to the upper limit, whereas changes in the absolute model are only proportional
to the distance from the upper limit [32].
Table 8. Selected growth models (y_t denotes the technical capability at time t, Δy_t = y_t − y_{t−1} its increment, L the upper limit, and b_i fitted coefficients):

Absolute models
  Gompertz [43]: ln(−ln(y_t / L)) = b_0 + b_1 t
  S-curve [43]: y_t = L e^(−b/t)

Relative models
  Bass [46]: Δy_t = b_0 + b_1 y_{t−1} + b_2 y_{t−1}²
  Non-symmetric Responding Logistic [47]: ln(Δy_t) = b_0 + b_1 ln(y_{t−1}) + b_2 ln(L − y_{t−1})
  Harvey [48]: ln(Δy_t) = b_0 + b_1 t + b_2 ln(y_{t−1})
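To make the distinction concrete, the sketch below fits the Bass relative model from Table 8 by ordinary least squares on a hypothetical capability series; the data and the one-step-ahead extrapolation are illustrative assumptions, not results from this dissertation.

```python
import numpy as np

# Hypothetical yearly capability levels (illustrative only).
y = np.array([1.0, 1.3, 1.8, 2.6, 3.8, 5.5, 7.6, 9.9, 12.0, 13.6, 14.7])

# Bass relative model (Table 8): delta_y_t = b0 + b1*y_{t-1} + b2*y_{t-1}**2,
# i.e., each increment is anchored to the most recently achieved level.
y_prev = y[:-1]
dy = np.diff(y)
X = np.column_stack([np.ones_like(y_prev), y_prev, y_prev**2])
b, *_ = np.linalg.lstsq(X, dy, rcond=None)
print("b0, b1, b2 =", b)

# One-step-ahead extrapolation from the last observed level.
next_increment = np.array([1.0, y[-1], y[-1]**2]) @ b
print("forecast of next level:", y[-1] + next_increment)
```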
However, it should be noted that the underlying assumption of the relative model,
that future advancement is facilitated by the technical capability already achieved, may not
always be true. In a similar context, Young's finding, based on 46 historical data sets,
might not guarantee the most appropriate selection for the predictive problem at hand.
Therefore, model selection should consider how the model's behavior matches the
process that generated the data rather than simply fitting the historical data [50].
The shape of a technology's growth curve is neither set in stone nor given to the
analyst. The limitation of the current architecture can be overcome by technological
innovation which affects the growth rate and possibly allows a higher performance to be
achieved than what had been perceived to be a limit. On the contrary, the lifecycle of a
given technology could be terminated by the unexpected adoption of alternative
technologies even before it passes the inflection point of the curve [51]. Therefore, fitting
a portion of data into an a priori defined growth function should be accompanied by a
deep understanding of the dynamics of the industry being investigated.
along various dimensions and what performance levels technologies will be able to
supply [16].
[Table: application areas and whether incumbents succeeded; 11 of the listed cases are marked "No" and 11 are marked "Yes".]
Trajectory mapping has been employed in a wide range of applications. The most
famous application of a trajectory map may be the hard disk drive case from
Christensen's original work [35]. He used disk capacity as the performance axis and
interpreted the dynamics of the industry, in which smaller disks replaced bigger ones
while improving their capacities over time. Schmidt later extended Christensen's work by
classifying the disk drive case as a low-end encroachment that eventually diffused
vehicles were decreasing. The author used this information and identified key utility
attributes that could command a significant premium before the product reaches the mass
market. This study has significant implications for identifying key drivers of technology
progress using the trajectory map. Letchumanan and Kodama mapped out the correlation
between Revealed Comparative Advantage (RCA), which is generally used to measure
the export competitiveness of a product from a particular country in terms of world
market share, and R&D intensity to examine who was making the most disruptive
advancement at a national level [77]. Even though Koh and Magee didn't utilize any
function to develop composite performance measures, their research is significant in that
they took different trade-offs into consideration to draw a trajectory map [78]. Their
results suggested that some new information transformation embodiment, such as
quantum or optical computing, might continue the trends, given the fact that information
transformation technologies have shown steady progress.
Few researchers have proposed predictive approaches to disruptive innovation
theory that consider the multidimensional aspects of technology systems. Schmidt suggested
using part-worth curves in search of low-end encroachment [69]. Paap and Katz provided
general guidance for ex ante identification of future disruption drivers [79]. Several
authors have suggested using extant methods for technological forecasting to assess
potential disruptive technologies [16], [17]. Govindarajan and Kopalle argued that
capturing a firms willingness to cannibalize could be a sign of ex ante prediction of
disruptive innovation [80]. Doering and Parayre presented a technology assessment
procedure that iterates among searching, scoping, evaluating, and committing [81].
Table 10 summarizes 40 studies that have employed the trajectory map to identify
disruptive alternatives (technology, product, service, etc.). The majority of the studies
adopted a single performance measure to draw the trajectory map. A trajectory map,
however, should be able to take multiple perspectives into account so as not to miss potential
disruptive indications. Many ex post case studies have shown that disruptions have
occurred along an entirely new type of performance measure that hadn't been considered.
Furthermore, it was often observed that the new technology started below the prior one in
performance on the primary dimension but was superior on a secondary one [18]. This
implies that the current performance measure may no longer be capable of capturing
advancement in a new direction. Therefore, it is crucial to examine not only which
performance measures are playing a major role in current progress but also which
alternate technologies show disruptive potential with respect to the emerging
performance measures.
[Table 10. Studies employing trajectory maps, with columns for author(s) and year, application area, performance measure, and plotting method. The studies listed include Walsh (2004), Keller & Hüsig (2009), Martinelli (2012), Phaal et al. (2011), Padgett & Mulvey (2007), X. Huang & Soi (2010), Kaslow (2004), Kassicieh & Rahal (2007), Christensen (1997), Schmidt (2011), Rao et al. (2006), Bradley (2009), Lucas & Goh (2009), Madjdi & Hüsig (2011), Hüsig et al. (2005), Walsh et al. (2005), Figueiredo (2010), Caulkins et al. (2011), Adamson (2005), Belis-Bergouignan et al. (2004), Ho (2011), and Werfel & Jaffe (2012), among others. Application areas span disk drives, W-LAN, semiconductors, DRAM, printers, civil aircraft, smart grids, mobile phones, renewable energy, vaccines, therapeutics, and general industry; performance measures range from capacity, data rates, sales, and price to patents, marginal productivity, and utility coefficients; plotting methods are predominantly data accumulation, with occasional use of growth curves, patent mapping, the Skiba curve, Poisson models, and price functions.]
(LCC) analysis within the stage gate model for project and work process management
[157].
Much of the engineering design-focused approach is at a more detailed level of
abstraction with the focus being the individual product or specific market [111]. The
perspective of this approach is that a product is a complex assembly of interacting
components [134]. Perhaps the best-known method is conjoint analysis, which attempts
to identify the ideal combination of product size, shape, configuration, function, and
dimensions [158]. Furthermore, recent attention to product categorization has been
enhanced by benchmarking studies that attempt to identify distinct combinations of
product attributes. An initial work related to this product-focused approach may be found
in Doyle and Green's study, which used a widely known benchmarking technique, data
envelopment analysis (DEA), to identify homogeneous product groups, i.e., competitors,
as well as market niches [21]. Specifically, they applied DEA to classify printers by
ordering them from broad to niche based on the number of times each printer appears in
others' reference sets. In a similar vein, Seiford and Zhu developed measures for products'
al. showed how this product feature-based clustering can be used for decision makers to
know the changes required in product design so the product can be classified into a
desired cluster [23]. Amirteimoori and Kordrostami later extended this approach to take
the size of products into account, thereby comparing products grouped by scale [24]. In
addition, Amin et al. clarified the role of alternative optimal solutions in the clustering of
multidimensional observations from the DEA approach [159]. Most recently, Dai and
Kuosmanen proposed a new approach that can take into account cluster-specific
efficiency rankings as well as stochastic noise [25].
Table: Perspectives on a product and representative methods.

Market-focused: a product is a bundle of attributes. Methods: brainstorming, Delphi, morphology, lead user analysis, voice of customer, probe and learn, empathic design, fuzzy cognitive map, crowdsourcing.

Operations management-focused: a product is a sequence of development and/or production process steps. Methods: capacity utilization, process performance, development sequence and schedule, supplier and material selection, development team organization, team staffing, team arrangement, infrastructure and training, performance measurement, development stage-gate, leadership and communication.

Engineering design-focused: a product is a complex assembly of interacting components. Methods: conjoint analysis, data envelopment analysis (DEA), capital asset pricing model (CAPM), failure mode and effects analysis (FMEA), reliability, availability, maintainability and supportability (RAMS), life cycle cost (LCC).
later shown to have serious defects, is often better than a deliberate blank which tends to
stop thought and research [165].
This technique is usually integrated with other forecasting models not only to identify
a firm basis of possibilities but also to investigate the impact of technology interactions
under the various conditions. Recent hybrid applications of scenario analysis include
Nowack et al.'s Delphi-based model [166], Winebrake and Creswick's analytical
hierarchy process (AHP)-based model [167], Kok et al.'s participative backcasting-based
model [168], and Jetter and Schweinfort's fuzzy cognitive map (FCM)-based model
[169].
C. RISK ASSESSMENT AND ENVIRONMENTAL IMPACT ANALYSIS
Risk analysis pays particular attention to the negative impact of technologies on
social institutions and critical infrastructure [170]. Linkov et al. surveyed the comparative
risk assessment (CRA), multi-criteria decision analysis (MCDA), and adaptive
management methods applicable to environmental remediation and restoration projects
and asserted that it is required to shift from optimization-based management to an
adaptive management paradigm for the conservation of the ecosystem [171]. Recently, in
an attempt to distinguish and categorize potential risks in advance, more attention has
been paid to developing predictive models. As an example, Kolar and Lodge
developed an ecological risk assessment model to evaluate the risk of alien species for
nonagricultural systems [172].
In a similar vein, environmental impact analysis (EIA) has become an important and
often obligatory part of today's technology assessment [173]. Ramanathan's study
applied a multi-criteria model to capture the perceptions of stakeholders on the relative
E. MORPHOLOGY
The morphology (or morphological method) was developed by Zwicky in 1962 [121]
in an attempt to deduce all of the solutions of a given definite problem. The method
proceeds as follows:
1. The statement of the problem, i.e., the object of the morphological device, is
made.
2. The precise definitions of the class of devices are elaborated.
3. Related parameters with sub-elements are grouped as matrices and listed for
connection.
4. The alternative solutions are obtained as chains of selected elements from each
matrix.
5. The performance values of all of the derived solutions are determined.
6. The particularly desirable solutions are selected for realization.
This process provides a framework for thinking in basic principles and parameters, which
is growing in importance, even if practiced in a disordered or ad hoc fashion [3].
Recent developments of the morphological method tend to combine it with data
mining approaches, reflecting the current strong interest in network analysis. Examples can be
found in Feng and Fuhai's study [179] and Jun et al.'s study [180], which used patent-based
morphological mapping, and in Yoon et al.'s study [181], which developed text mining-based
morphology analysis.
F. ANALOGY
As a research technique, analogies have been mainly applied in the social sciences
[3]. Nonetheless, it may improve the anticipatory insight, especially when quantitative
methods suffer from the absence of sufficient data but there exist analogous events in
history. The classic application can be found in Bruce's 1965 study "The Railroad and the
Space Program: An Exploration in Historical Analogy" [182]. The study sought
to test the feasibility of using railroad development in a systematic way to forecast space
program development. The historical analogy method, however, tends to neglect
political, social, and philosophical impacts, thereby often providing unsatisfactory
forecasts. A recent application by Goodwin et al. [183], which conducted a comparative
analysis of four different forecasting methods using analogous time series data for the sales
forecast of a new product, also concluded that using an analogy led to higher errors than
using parameters estimated from small amounts of actual data.
G. CAUSAL MODEL
A causal model considers the explicit cause-and-effect relationships that affect the
growth of technology systems [6]. Therefore, this technique relies on the assumption that
the relevant variables and their linkages are known and can be described in a structural
model. However, due to the lack of such information, the use of causal modeling is limited to
forecasting the adoption or diffusion of innovations, where the related parameters can be
measured [50], [184]–[188].
Daim et al. applied system dynamics to the fuel cell industry and found that the
adoption rate would be increased as a consequence of government policies and
supply/demand relations [199]. Maier developed a new product diffusion model using
system dynamics to incorporate competition and to map the process of substitution
among successive product generations [200]. Suryani et al. constructed a system
dynamics model to forecast air passenger demand and to evaluate policy scenarios
related to runway and passenger terminal capacity expansion to meet future
demand [201].
D. EXTRAPOLATION
Extrapolation models employ mathematical and statistical techniques to extend time
series data into the future, under the assumption that past conditions and trends will
continue more or less unchanged [6]. Since extrapolation is a data-based approach, it
requires a sufficient amount of good data to be effective. The next section
provides a focused review of frequently used extrapolation models that can deal with
multi-attribute assessment.
Figure 4 classifies various technology assessment methods on a two-dimensional plot.
While the qualitative approaches tend to focus more on eliciting multiple perspectives
from knowledge sources (e.g., expert panels, history, etc.), the quantitative
approaches place more emphasis on drawing meaningful findings by analyzing the
numerical data. It is not surprising that technology focused approaches tend to be
quantitative, whereas society focused approaches are mostly qualitative.
Once the scoring model is obtained, it becomes possible to estimate the overall score
of future technologies by extending the historical trend. However, while the scoring
model provides a composite measure so that each technology system can be put on a
common basis, it is not capable of capturing the information needed to simultaneously
evaluate each system attribute relative to the remaining attributes. (In the original scoring
application, the weights and the tradeoff coefficients were determined by an Air Force
officer's subjective judgment.)
B. TECHNOLOGY DEVELOPMENT ENVELOPE
Technology development envelope (TDE) was originally developed by Gerdsri to
identify an optimum technology development path as a roadmapping method [203]. The
procedure consists of six steps:
Step 1: Technology forecasting to identify emerging alternatives.
Step 2: Technology characterization to establish evaluation criteria.
Step 3: Technology assessment on identified alternatives based on criteria.
Step 4: Hierarchical modeling to determine the relative importance of criteria.
Step 5: Technology assessment to determine the relative value of alternatives.
Step 6: Formation of TDE to establish an optimum development path.
Within this process, TDE constructs a hierarchical decision model (HDM) to
determine the relative importance of emerging technologies aligned with the
organizations objectives. Technologies having the highest value in each time period
represent the most preferred technology alternatives. In this sense, the path connecting
those technologies from one period to another becomes an optimal technology
development roadmap.
However, since the technology assessment process in TDE is predicated upon HDM,
multiple perspectives on technology attributes are to be averaged within the process of
obtaining a single ranking of technology alternatives for each period. That is,
combinatorial values derived from different levels of technology attributes are supposed
to be represented by a single weighting scheme aggregated from a panel of experts'
opinions. This becomes a critical issue for identifying the better technology when market
segments exist, each with particular customers holding differing value propositions on
technology systems.
t = b_0 + b_1 y_1 + b_2 y_2 + ⋯ + b_m y_m    (2)

where t is the introduction date of a system and y_1, …, y_m denote its technical capabilities.
Specification of the functional form and determination of the coefficients of the
equation provide a measure of average technological trend over time. For example, Lim
et al. applied a planar frontier model to develop the wireless protocol forecasting model.
The resulting equation is as follows [205]:

t = 1984.411 − 2.532 (·) + 6.651 (·)    (3)
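A minimal sketch of fitting such a planar frontier by ordinary least squares, regressing introduction year on technical capabilities, is shown below; the capability columns and data are hypothetical placeholders, not the wireless-protocol dataset of Lim et al.

```python
import numpy as np

# Hypothetical technology records: introduction year and two capability measures.
years = np.array([1999, 2003, 2007, 2009, 2012, 2014], dtype=float)
caps = np.array([[11, 0.1],
                 [54, 0.3],
                 [150, 1.0],
                 [300, 2.5],
                 [600, 6.0],
                 [1300, 10.0]], dtype=float)

# Planar (hyper-plane) frontier as in (2): year = b0 + b1*cap1 + b2*cap2.
X = np.column_stack([np.ones(len(years)), caps])
b, *_ = np.linalg.lstsq(X, years, rcond=None)
print("coefficients:", b)

# Estimated arrival year of a proposed specification (illustrative target).
target = np.array([1.0, 2000.0, 15.0])
print("estimated introduction year:", target @ b)
```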
Once the production function is estimated, the efficiency measurement for each
observed production can be made by comparing them to the maximum (minimum)
possible output (input) for a given input (output) along with a desired directional distance
function.
Since COLS has its roots in statistical principles, i.e., maximum likelihood, the
frontier is constituted to represent the general pattern of actual observations without
taking noise into account [26]. That is, any variation in the dataset, including possible
noise, is considered to contain significant information about the efficiency. Therefore,
this method may not be appropriate when there is a need to identify the underlying
pattern of production possibilities without the impact of the random noise.
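Since COLS amounts to an ordinary regression whose intercept is shifted until the fitted surface bounds all observations, a minimal one-input, one-output sketch looks as follows; the data are illustrative.

```python
import numpy as np

# Hypothetical single-input, single-output production observations.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.1, 2.4, 2.9, 4.4, 4.8, 5.9])

# Step 1: ordinary least squares of output on input.
X = np.column_stack([np.ones_like(x), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ b

# Step 2: shift the intercept up by the largest residual so that the
# "corrected" line envelops every observation from above.
b_cols = b.copy()
b_cols[0] += residuals.max()

# Output-oriented efficiency of each unit against the COLS frontier.
frontier_output = X @ b_cols
print("efficiencies:", y / frontier_output)
```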
v_j ~ N(0, σ_v²),   u_j ~ N⁺(0, σ_u²),   j = 1, …, n    (6)
Martino later extended Dodson's model to allow its use in any order [213].
Martino's generalized ellipsoid model is given as follows:

Σ_i (y_i / Y_i)^n = 1    (7)

where n is the order of the ellipsoid, y_i the value of the ith technical capability, and Y_i the
intercept of the ellipsoid on the ith axis.
Martino also suggested using the mean absolute deviation rather than the mean
squared deviation for the fitting procedure to reduce the effect of extreme values. This
allows the fitted frontier surface to be located closer to the median of the observations
instead of the mean.
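A minimal sketch of fitting the generalized ellipsoid (7) by minimizing the mean absolute deviation, as suggested above; the order n, the capability data, and the starting values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical SOA technologies described by two technical capabilities.
Y = np.array([[9.5, 1.0],
              [8.0, 4.0],
              [6.0, 6.5],
              [3.5, 8.5],
              [1.0, 9.8]])

n = 2.0  # assumed order of the ellipsoid

def mean_abs_dev(intercepts):
    # Deviation of each observation from the surface sum_i (y_i / Y_i)^n = 1.
    dev = (Y / intercepts) ** n
    return np.mean(np.abs(dev.sum(axis=1) - 1.0))

res = minimize(mean_abs_dev, x0=np.array([10.0, 10.0]), method="Nelder-Mead")
print("fitted axis intercepts:", res.x)
```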
Although this approach can provide a measure with which to investigate the SOA formation
process, a fundamental question remains unresolved: why should the technology tradeoff
surface follow the ellipsoid form? In detail, the ellipsoid frontier model
presupposes that the tradeoff of one technical capability being relinquished for the
one another [214], [215]. Therefore, the progress of the iso-time frontier should be guided
by adjusted upper limits with the consideration of the architectural complexity involved.
max  h_0 = (Σ_r u_r y_{r0} − w) / (Σ_i v_i x_{i0})

s.t.  (Σ_r u_r y_{rj} − w) / (Σ_i v_i x_{ij}) ≤ 1,  ∀j,
      u_r, v_i ≥ 0,  w free    (8)

where h_0 denotes the input-oriented efficiency of the DMU being assessed, u_r the weight
assigned to output r, v_i the weight assigned to input i, x_{ij} the ith input variable of DMU
j, y_{rj} the rth output variable of DMU j, and w the returns to scale (RTS) parameter.
The above input-oriented multiplier model can be readily translated to the primal
(envelopment) model, which is shown below as a single-stage theoretical formulation:
min  θ_0 − ε (Σ_r s_r^+ + Σ_i s_i^−)

s.t.  Σ_j λ_j y_{rj} − s_r^+ = y_{r0},   r = 1, …, s,
      Σ_j λ_j x_{ij} + s_i^− = θ_0 x_{i0},   i = 1, …, m,
      Σ_j λ_j = 1,
      λ_j, s_r^+, s_i^− ≥ 0    (9)
where θ_0 denotes the technical input-oriented efficiency, λ_j the loading factor attached to
DMU j, and s_r^+ and s_i^− the output and input slacks, respectively. Note that if
the optimal value of θ_0 is less than 1, then DMU 0 is inefficient, in that model (9) will
have identified another production possibility that secures at least the output vector y_0 while
using no more than the reduced input vector θ_0 x_0. Thus, θ_0 is a measure of the radial
input efficiency of DMU 0, in that it reflects the proportion to which all of its observed
inputs can be reduced pro rata, without detriment to its output levels [209].
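The envelopment model (9) is an ordinary linear program. The sketch below solves its radial part (the non-Archimedean slack terms are omitted for brevity) for each DMU under VRS with scipy; the dataset is a small hypothetical example.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical DMUs: one input and two outputs per DMU.
X = np.array([[2.0], [4.0], [6.0], [5.0]])                       # inputs,  shape (n, m)
Y = np.array([[1.0, 3.0], [4.0, 2.0], [5.0, 5.0], [2.0, 4.0]])   # outputs, shape (n, s)
n, m = X.shape
s = Y.shape[1]

def input_efficiency_vrs(k):
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.zeros(1 + n)
    c[0] = 1.0
    A_ub, b_ub = [], []
    for r in range(s):                            # sum_j lam_j * y_rj >= y_rk
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[k, r])
    for i in range(m):                            # sum_j lam_j * x_ij <= theta * x_ik
        A_ub.append(np.concatenate(([-X[k, i]], X[:, i])))
        b_ub.append(0.0)
    A_eq = [np.concatenate(([0.0], np.ones(n)))]  # VRS: sum_j lam_j = 1
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for k in range(n):
    print(f"DMU {k}: theta = {input_efficiency_vrs(k):.3f}")
```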
DEA studies have often examined the changing performance of units over time
[217]–[221]. A shorthand notation can be defined as θ_0^t(x_0^t, y_0^t): the
efficiency of DMU 0 in time period t, with input and output characteristics (x_0^t, y_0^t),
measured against the frontier of peers also in time period t. If the value of
θ_0^{t+1}(x_0^t, y_0^t) is less than 1.0, then the unit in period t is inefficient relative to units from
period t+1. However, whether such a change is attributable to the unit itself or to
different conditions affecting all units cannot be determined simply from the
efficiency scores. Färe et al. introduced a DEA-based Malmquist productivity index (MPI)
to measure technical efficiency change (TEC) and frontier shift (FS) over time, as
an extension of the original concept introduced by Malmquist [217], [222]. The input-oriented MPI can be defined as

MPI_0 = TEC_0 × FS_0 = [θ_0^t(x_0^t, y_0^t) / θ_0^{t+1}(x_0^{t+1}, y_0^{t+1})] × { [θ_0^{t+1}(x_0^{t+1}, y_0^{t+1}) / θ_0^t(x_0^{t+1}, y_0^{t+1})] × [θ_0^{t+1}(x_0^t, y_0^t) / θ_0^t(x_0^t, y_0^t)] }^{1/2}
where θ_0 denotes the DEA efficiency score and x_0, y_0 the input and output levels at a given point
in time t. Therefore, TEC indicates the technical efficiency change between periods t and t+1:
improvement (<1), no change (=1), or decline (>1). In a similar sense, FS measures the
amount of frontier shift: regress (>1), no shift (=1), or progress (<1).
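A small worked example of this decomposition, using hypothetical efficiency scores for a single DMU; as stated above, values below 1 indicate an efficiency improvement (TEC) or frontier progress (FS).

```python
from math import sqrt

# Hypothetical input-oriented efficiency scores (illustrative only).
theta_t_t   = 0.80   # period-t observation against the period-t frontier
theta_t_t1  = 0.70   # period-t observation against the period-(t+1) frontier
theta_t1_t  = 1.05   # period-(t+1) observation against the period-t frontier
theta_t1_t1 = 0.90   # period-(t+1) observation against the period-(t+1) frontier

TEC = theta_t_t / theta_t1_t1                                       # < 1: efficiency improved
FS = sqrt((theta_t1_t1 / theta_t1_t) * (theta_t_t1 / theta_t_t))    # < 1: frontier progressed
MPI = TEC * FS
print("TEC =", round(TEC, 3), "FS =", round(FS, 3), "MPI =", round(MPI, 3))
```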
To extend the time-series application of DEA into technological forecasting, Inman
developed a measure to quantify the rate of frontier expansion by which the arrival of the
following DMUs can be estimated [223]. Specifically, his method, technology
forecasting using data envelopment analysis (TFDEA), establishes the envelopment, i.e.,
SOA technology frontier, using the data points identified as relatively efficient from DEA
(see Fig. 8). Note that the frontier is a set of convex combinations formed by SOA
technologies; hence it is not a curved surface but a piecewise linear combination. The
tradeoffs between technical capabilities can be considered as a radial improvement within
this frontier space. The TFDEA iterates the frontier formation process over time to track
the rate of frontier shift. This momentum of progress is then used to make a forecast for
future technologies (DMUs).
Unlike the iso-time frontier from MDGM, the frontier constructed by TFDEA
typically consists of multiple vintages of SOA technologies. This allows the model to
specify the individual timing, i.e., effective time, of any points on the frontier according
to the corresponding tradeoff surface. This enables TFDEA to identify the starting point
of each forecasting target from which their best forecast can be made. Lim et al.
examined how this approach could improve the forecasting accuracy compared to a
planar frontier model in which the constant baseline, i.e., the regression constant, is
assumed for all forecasts [205].
This might overlook the unique growth patterns captured from different tradeoff surfaces.
Consequently, previous applications showed at times that forecasting based on a
single aggregated RoC did not consider the unique growth patterns of each technology
segment, which resulted in overly conservative or overly aggressive forecasts [224], [225]. This issue,
in particular, becomes more visible when the application area contains distinct progress
patterns identified from multiple technology segments. Therefore, it is necessary to
incorporate the notion of segmented RoC into the forecasting procedure so that each
forecasting target can be subject to the individualized RoC that best reflects the potential
growth rate of analogous technologies.
It has occasionally been observed in past applications that TFDEA may suffer
from instances of infeasible super-efficiencies when variable returns to scale (VRS) was
assumed. In theory, this is also a problem for the input-oriented decreasing returns to
scale (DRS) model and output-oriented increasing returns to scale (IRS) model [27]. This
problem results in a failure to make a forecast for the target technology, since the model is
unable to measure the superiority of the corresponding technology compared to current SOA
technologies. Note that the constant returns to scale (CRS) model is also susceptible to
this problem when a zero value is included in any input variable [27]. However, this is rare
in actual applications since it indicates heterogeneous DMUs or technologies [209].
The problem of infeasibilities in the super-efficiency model can be attributed to the
inherent characteristics of a non-parametric frontier since this approach identifies the
production possibilities without spanning unobserved regions. Especially under VRS,
DEA constitutes the frontier purely based on observed DMUs and, therefore, tradeoffs in
uncharted regions remain unknown. This makes it impossible to project forecasting targets
that fall in those unknown regions from the current SOA frontier in a radial manner.
B. STOCHASTIC (CHANCE-CONSTRAINED) DATA ENVELOPMENT ANALYSIS
Land et al. proposed a data envelopment analysis model that can deal with stochastic
variability in inputs and outputs, which evolved from the earlier technique of chance-constrained programming [226], [227]. The standard input-oriented model is presented below:
min  θ − ε (Σ_r s_r^+ + Σ_i s_i^−)

s.t.  E(Σ_j λ_j y_{rj} − y_{r0}) − F^{−1}(1 − K) σ_r^0 − s_r^+ = 0,   r = 1, …, s,
      Σ_j λ_j x_{ij} + s_i^− = θ x_{i0},   i = 1, …, m,
      s_r^+, s_i^−, λ_j ≥ 0,   ∀ r, i, j    (10)

where θ denotes the radial input contraction factor, s_r^+ and s_i^− slack variables, E the
mathematical expectation, F the distribution function of the standard normal distribution,
σ_r^0 the standard deviation of best-practice output minus observed output, i.e., s.d.(Σ_j λ_j y_{rj} −
y_{r0}), λ_j the loading factor, x_{ij} the ith input variable of DMU j, and y_{rj} the rth output variable
First, the observed outputs must not exceed the best-practice outputs more often than
with probability K. For example, in the case of K = 0.01, only 1% or fewer of the DMUs will do
better than the DMU being assessed. That is, K indicates the fraction of DMUs
located above the frontier in Fig. 5. This constraint can be simplified as below [26]:

E(Σ_j λ_j y_{rj} − y_{r0}) ≥ F^{−1}(1 − K) σ_r^0,   r = 1, …, s    (11)
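Under the normality assumption, the chance constraint has a simple deterministic reading: the expected surplus of best practice over the observed output must be at least F^{−1}(1 − K) standard deviations. A minimal numeric check, with assumed values:

```python
from scipy.stats import norm

K = 0.01          # allowed probability of the DMU being outperformed
sigma = 2.0       # assumed s.d. of (best-practice output minus observed output)

threshold = norm.ppf(1 - K) * sigma      # required expected surplus
expected_surplus = 5.0                   # assumed value for illustration

print("required expected surplus:", round(threshold, 3))
print("constraint satisfied:", expected_surplus >= threshold)
```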
Summary of the critical review:

Intuitive models. Models: Scoring model; TDE (Technology Development Envelope). Pros: high flexibility in model building. Cons: relies on subjective opinions; no consideration of multiple tradeoffs.

Parametric frontier models. Models: Planar model (hyper-plane); COLS (Corrected Ordinary Least Squares)*; SFA (Stochastic Frontier Analysis)*; Ellipsoid frontier; MDGM (Multi-Dimensional Growth Model). Cons: difficult to estimate required parameters; sensitive to multicollinearity.

Non-parametric frontier models. Models: TFDEA (Technology Forecasting using Data Envelopment Analysis); SDEA (Stochastic Data Envelopment Analysis)*.

* Econometric models.
Since TFDEA has at its core the widely used technique of DEA, TFDEA inherits the
ability to provide many of the same rich results. One of the key results yielded by DEA is
the identification of targets and efficient peers [233]. Specifically, DEA constitutes the
frontier of a production possibility set (PPS) based on best practice DMUs. Within this
framework, relative efficiency is determined by comparing the performance of each unit
against that of a (virtual) target formed by efficient peers. A practical interpretation is that
efficient peers can serve as role models which inefficient DMUs can emulate so that they
may improve their performances. In other words, those benchmarks have a mix of input-output levels similar to that of the DMUs being compared, which indicates that they are
likely to operate in analogous environments and/or to favor similar operating practices
[209].
The implementation of TFDEA relies on a series of benchmarking processes over
time [223]. This is depicted in Fig. 9, assuming an output-oriented DEA model under
variable returns to scale (VRS) [234]. The frontier year, T, is the point in time at which
the analysis is conducted. Products G, H, and I are identified to be the most competitive
at time T and therefore define the SOA frontier at time T. Products A through F, in contrast, were
themselves SOA when they were first released but were superseded by subsequent
products and hence are located below the frontier. Products J and K are future products,
i.e., sets of specifications used as forecasting targets that are placed beyond the current
SOA frontier.
The TFDEA process can be understood as three procedural stages. First, it iterates
the DEA process to obtain efficiency scores of products both at the time of release and at
the frontier year. Second, it estimates an RoC that represents how fast products have been
replaced by the next generation products. In other words, the RoC indicates a potential
growth rate of the SOA frontier in the future. Finally, the model makes a forecast of
future products based on the average RoC.
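A simplified, single-attribute sketch of these three stages is given below. The "gap" factors (how far each superseded product now lies behind the current frontier, and how far a design concept lies beyond it) are assumed to come from DEA runs such as model (9); the geometric annualization and logarithmic extrapolation follow the general TFDEA logic described here, and all numbers are hypothetical.

```python
import numpy as np

T = 2015.0  # frontier (analysis) year

# Stage 1 (assumed precomputed by DEA): for each superseded product, the factor
# (>= 1) by which the current SOA frontier outperforms it, plus its release year.
superseded = [
    {"release": 2005.0, "gap": 1.60},
    {"release": 2008.0, "gap": 1.35},
    {"release": 2011.0, "gap": 1.15},
]

# Stage 2: annualized rate of change implied by each replacement, then averaged
# (segmentation into local RoCs is ignored in this simplified sketch).
rocs = [p["gap"] ** (1.0 / (T - p["release"])) for p in superseded]
roc = float(np.mean(rocs))
print("average RoC:", round(roc, 4))

# Stage 3: forecast a design concept whose specifications exceed the current
# frontier by a factor of 1.40 (its super-efficiency, also assumed given).
target_gap = 1.40
forecast_year = T + np.log(target_gap) / np.log(roc)
print("estimated arrival:", round(forecast_year, 1))
```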
The original TFDEA process simply aggregates RoCs from the past products and
uses the average RoC to make a projection without taking technological segmentation
into account. However, as previously discussed, DEA provides pragmatic information
regarding benchmarks, which enables an identification of distinct product clusters. This
replacements with substantial performance advancement over time. This may imply that
more engineering effort has been invested in cluster 2-type products, which results in
more frequent introductions of advanced products over time.
Once distinguishing clusters are identified with varying RoCs, it is readily possible
to make a forecast using those local RoCs. For example, the estimated arrival of future
product J can be determined by measuring how far it is from the current SOA frontier and
then extracting the root of that distance using local RoCs from cluster 1 given the fact
that it is projected to the frontier facet of cluster 1. In the same manner, the arrival of
future product K can be estimated using local RoCs from cluster 2. One may expect that
if both products were achievable with the same amount of engineering advances, the
arrival of product K might be earlier than that of product J since faster progress is
expected from cluster 2-type products. In other words, requiring the same amount of time
to reach the technological level of product J would entail significant development risk.
Figure 10 depicts how the local RoC and individualized RoC can be obtained.
Product L had been located on the SOA frontier in the past but later became obsolete by
the current SOA frontier formed by new competitive products: M, N, and O. As
aforementioned, the fact that product L is compared to its virtual target, i.e., L′,
constituted by its peers M, N, and O indicates that product L may have a similar mix of
input-output levels to those peers, although the absolute level of attributes may vary,
which classifies them as homogeneous products. Hence, the technological
advancement, namely the performance gap between L and L′ during a given time period,
can be represented by the peers as a form of local RoC. Consequently, each local RoC
indicates a growth potential for adjacent frontier facets based on the technological
advancement observed from the related past products.
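One plausible way to organize this bookkeeping is sketched below: the RoC observed from each superseded product is credited to its frontier peers in proportion to their λ (reference) weights, and each SOA product's local RoC is the weight-averaged result, which can then be recombined for a new concept as discussed next. The weighting scheme, product names, and numbers are illustrative assumptions, not the dissertation's exact formulation.

```python
import numpy as np

# Assumed inputs (illustrative): each superseded product's annualized RoC and its
# DEA reference weights (lambdas) on the current SOA products M, N, and O.
soa = ["M", "N", "O"]
superseded = [
    {"name": "L",  "roc": 1.048, "lambdas": {"M": 0.2, "N": 0.5, "O": 0.3}},
    {"name": "L2", "roc": 1.030, "lambdas": {"M": 0.7, "N": 0.3, "O": 0.0}},
]

# Local RoC of each SOA product: lambda-weighted average of the RoCs of the
# superseded products that use it as a peer.
local_roc = {}
for p in soa:
    w = np.array([s["lambdas"][p] for s in superseded])
    r = np.array([s["roc"] for s in superseded])
    local_roc[p] = float(np.dot(w, r) / w.sum()) if w.sum() > 0 else float("nan")
print("local RoCs:", {k: round(v, 4) for k, v in local_roc.items()})

# Individualized RoC of a new concept Q, combining the local RoCs of the peers
# onto which Q is projected, weighted by Q's own (assumed) reference weights.
q_lambdas = {"M": 0.0, "N": 0.6, "O": 0.4}
ind_roc = sum(q_lambdas[p] * local_roc[p] for p in soa if q_lambdas[p] > 0)
print("individualized RoC of Q:", round(ind_roc, 4))
```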
Once the local RoCs of the current SOA products are obtained, it is straightforward to
compute the individualized RoC for new product concepts. Suppose product
developers came up with a product concept Q. Note that, by definition, a better product
would be located beyond the current SOA frontier, just as superseded products are located
below, namely enveloped by, the current SOA frontier. It is seen that the virtual target of
Q, i.e., Q′, is subject to the frontier facet constituted by the current SOA products N, O, and
P. Thus, the individualized RoC of Q can be obtained by combining local RoCs with the
reference information: how close Q′ is to N, O, and P, respectively. It should be noted
here that the technological advancement observed from product L may have affected the
individualized RoC of Q, as SOA products N and O are involved in both sides of the facets
3.1.2
It has been observed in past super-efficiency DEA applications that the infeasibility
problem occurs when variable returns to scale (VRS) is assumed [27]. In theory, this is
also a problem for an input-oriented decreasing returns to scale (DRS) model and an output-oriented increasing returns to scale (IRS) model. This problem results in a failure to make a
forecast in TFDEA, since the model is unable to measure the super-efficiency,
i.e., the superiority of the specified technical capabilities relative to the current
frontier. Note that the constant returns to scale (CRS) DEA model is also susceptible to
this problem when a zero value is included in an input variable [209]. However, this is
rare in TFDEA applications since it indicates heterogeneous technologies.
Figure 11 depicts possible occasions in which infeasible super-efficiency occurs
under VRS. It is readily seen that targets E and F are subject to infeasibility from the
current frontier in the input-oriented (IO) model and output-oriented (OO) model,
respectively, whereas target D is infeasible in both orientations. Therefore, the arrivals of
those targets in the corresponding orientation from the current frontier cannot be
computed using the traditional TFDEA model.
This section is adapted from a paper accepted in International Transactions in Operational Research [313]
Alternate measures have been developed to deal with the infeasible super-efficiency
problem. Lovell and Rouse suggested employing a user-defined scaling factor to make
the VRS super-efficiency model feasible [235]. Cook et al. developed a radial measure
of super-efficiency with respect to both input and output direction so that one can derive
the minimum change needed to project a DMU to a non-extreme position, and the other
can reflect the radial distance of that shifted DMU from the frontier formed by the
remaining DMUs [236]. In a similar vein, Lee et al. proposed a slack based superefficiency model that can consider both input savings and output surplus in cases where
infeasibility occurs [237]. Lee and Zhu further extended this model to deal with the
infeasibility problem caused by zero input values [238].
In this study, Cook et al.'s alternate super-efficiency measure is adopted for two
main reasons: a) it returns bi-oriented L1 distances for infeasible targets and hence
preserves the existing RoC calculation; b) it returns the same radial distance as the standard
super-efficiency measure [239] when the target is feasible.
Cook et al. [236] defined the term extremity to indicate the minimum radial
movement, in either direction, needed for a DMU to reach a non-extreme position. For
example, in the input-oriented model, target E will have an extremity of 0.75 (=15/20) to
bring it down to the closest feasible point, i.e., E′ (20, 15). The radial input augmentation
is then applied, i.e., 1.25 (=25/20), from this shifted point E′ to the peer unit C.
Consequently, the input-oriented super-efficiency of target E from the current frontier can
be defined as 2.583 (=1.25+1/0.75). In a similar sense, the output-oriented super-efficiency
of target F from the current frontier is 4.5 (=2+1/0.4), and target D has 6.333
(=5+1/0.75) and 12 (=2+1/0.1) from the input-oriented and output-oriented models
respectively.
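To make the composition of these scores concrete, the short sketch below simply recomputes the four numbers quoted above; the helper name and the pairing of the two radial factors are illustrative only and are not part of the original model.

```python
def alt_super_efficiency(expansion, reduction):
    """Compose a Cook et al.-style score: the radial expansion factor (>= 1)
    plus the reciprocal of the radial reduction factor (<= 1)."""
    return expansion + 1.0 / reduction

# Figure 11 examples (values quoted in the text)
print(alt_super_efficiency(1.25, 0.75))  # target E, input-oriented  -> 2.583...
print(alt_super_efficiency(2.00, 0.40))  # target F, output-oriented -> 4.5
print(alt_super_efficiency(5.00, 0.75))  # target D, input-oriented  -> 6.333...
print(alt_super_efficiency(2.00, 0.10))  # target D, output-oriented -> 12.0
```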
Once the super-efficiency score of each forecasting target from the current frontier is
obtained, RoCs can be applied to the estimation of those target technologies arrivals.
Note that targets that contain extremities in their super-efficiency scores require RoCs
from both orientations. That is, the time period for the extremity can be estimated by the
RoC from the opposite-orientation model. In the case of target F, for example, the
output-oriented TFDEA model should be able to compute how long it will take to reduce the
input from 10 to 5 based on the RoC that one would obtain from the input-oriented model,
as well as to augment the output from 2 to 5 based on the output-oriented RoC. This
indicates that performing TFDEA in both orientations is required to deal with the
infeasible forecasting targets.
3.2 FORMULATION
I now turn to the TFDEA formulation incorporating the proposed approach under
VRS. The entire process can be divided into four separate stages.
The first stage iterates efficiency measurement in a time series manner so that the
evolution of the SOA frontier can be monitored. As mentioned above, it is required to
obtain RoCs in both orientations to make a forecast for targets containing extremities.
The corresponding formulation is given in (12)-(25). Note that although the model can be
formulated as a single large LP, it may also be formulated and solved as a series of
equivalent, smaller LP models for the time of release (R) and for the current frontier time
(C), depending on the implementation algorithm. Specifically, $x_{ij}$ represents the $i$th
input and $y_{rj}$ the $r$th output of each technology $j = 1, \dots, n$, and $j = k$ identifies
the technology to be evaluated.
The objective functions for each orientation, (12) and (19), incorporate minimizing
effective dates as well to ensure reproducible outcomes from possible alternate optimal
solutions by distinguishing between Pareto-efficient technologies4 [240], [241].
Constraints (15), (16), (22), and (23) limit the reference sets so that two types of
efficiencies, one at the time of release (R) and the other at the current frontier time (C) in
4 Unlike the iso-time frontier from parametric frontier models, the technology frontier constructed by
TFDEA typically consists of multiple vintages of SOA technologies. This allows the model to specify the
individual timing, i.e., effective date, of any points on the frontier according to the corresponding tradeoff
surface. Therefore, the issue of alternate optimal solutions occurs either due to weakly efficient technology
or to an efficient but not extreme technology, namely F type or E′ type in Charnes et al.'s classification
[314]. Both cases can be dealt with by introducing the secondary objective to choose the reference
technology presenting either in the farthest time horizon, i.e., maximum sum of effective date, or in the
closest time horizon, i.e., minimum sum of effective date. Note that depending on the application area,
slack maximization may be preferred to prevent weakly efficient technologies from setting the effective
date. Further discussion can be found in [240], [315].
which the forecast is conducted, are obtained. That is, $\varphi_k^{R}$ and $\theta_k^{R}$ each measure the
amount by which technology k is surpassed by the technologies available at its time of
release, since constraints (15) and (22) allow the reference set of technology k to include
only technologies that had been released up to $t_k$. Similarly, $\varphi_k^{C}$ and $\theta_k^{C}$ can be
interpreted as how superior technology k is compared to the current SOA frontier, by
constraints (16) and (23).
Note that the current time is defined as a fixed time T, which can be either the
most recent time in the dataset or a certain point in time chosen as the forecasting origin
at which the forecast is conducted.
For each technology k being evaluated, the output-oriented model is

$\max \;\; \sum_{s \in \{R,C\}} \left[ \varphi_k^{s} - \varepsilon \sum_{j=1}^{n} \lambda_{jk}^{s}\, t_j \right]$   (12)

s.t.  $\sum_{j=1}^{n} \lambda_{jk}^{s}\, x_{ij} \le x_{ik}, \quad \forall i,\; s \in \{R,C\}$   (13)

$\sum_{j=1}^{n} \lambda_{jk}^{s}\, y_{rj} \ge \varphi_k^{s}\, y_{rk}, \quad \forall r,\; s \in \{R,C\}$   (14)

$\lambda_{jk}^{R} = 0, \quad \forall (j,k) \mid t_j > t_k$   (15)

$\lambda_{jk}^{C} = 0, \quad \forall (j,k) \mid t_j > T$   (16)

$\sum_{j=1}^{n} \lambda_{jk}^{s} = 1, \quad s \in \{R,C\}$   (17)

$\lambda_{jk}^{s} \ge 0, \quad \forall j,\; s \in \{R,C\}$   (18)

and the input-oriented model is

$\min \;\; \sum_{s \in \{R,C\}} \left[ \theta_k^{s} + \varepsilon \sum_{j=1}^{n} \lambda_{jk}^{s}\, t_j \right]$   (19)

s.t.  $\sum_{j=1}^{n} \lambda_{jk}^{s}\, x_{ij} \le \theta_k^{s}\, x_{ik}, \quad \forall i,\; s \in \{R,C\}$   (20)

$\sum_{j=1}^{n} \lambda_{jk}^{s}\, y_{rj} \ge y_{rk}, \quad \forall r,\; s \in \{R,C\}$   (21)

$\lambda_{jk}^{R} = 0, \quad \forall (j,k) \mid t_j > t_k$   (22)

$\lambda_{jk}^{C} = 0, \quad \forall (j,k) \mid t_j > T$   (23)

$\sum_{j=1}^{n} \lambda_{jk}^{s} = 1, \quad s \in \{R,C\}$   (24)

$\lambda_{jk}^{s} \ge 0, \quad \forall j,\; s \in \{R,C\}$   (25)

where $t_j$ is the release date of technology j and $\varepsilon$ is a small non-Archimedean constant attached to the secondary goal on the effective dates.
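As an illustration of this first stage, the following is a minimal sketch, assuming the technologies are stored as NumPy arrays of inputs, outputs, and release dates, of how the output-oriented VRS efficiency of one technology could be computed at its release time and at the current frontier time with scipy. The secondary effective-date goal and the input-oriented counterpart are omitted for brevity, and all function and variable names are hypothetical rather than part of the original formulation.

```python
import numpy as np
from scipy.optimize import linprog

def output_oriented_vrs(xk, yk, X, Y):
    """Output-oriented VRS efficiency (phi) of one technology against a
    reference set.  X, Y have shape (n_ref, n_inputs) and (n_ref, n_outputs)."""
    n_ref = X.shape[0]
    c = np.zeros(n_ref + 1)                      # variables: [phi, lambda_1..lambda_n]
    c[0] = -1.0                                  # maximize phi
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):                  # sum_j lambda_j x_ij <= x_ik
        A_ub.append(np.r_[0.0, X[:, i]])
        b_ub.append(xk[i])
    for r in range(Y.shape[1]):                  # sum_j lambda_j y_rj >= phi * y_rk
        A_ub.append(np.r_[yk[r], -Y[:, r]])
        b_ub.append(0.0)
    A_eq = [np.r_[0.0, np.ones(n_ref)]]          # VRS convexity: sum_j lambda_j = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n_ref + 1), method="highs")
    return (res.x[0], res.x[1:]) if res.success else (None, None)

def efficiency_at(k, release, X, Y, eval_time):
    """Restrict the reference set to technologies released by eval_time
    (the release-time or current-frontier constraint) and evaluate technology k."""
    ref = np.where(release <= eval_time)[0]
    return output_oriented_vrs(X[k], Y[k], X[ref], Y[ref])
```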
The non-VRS models, such as non-decreasing returns to scale (NDRS), non-increasing
returns to scale (NIRS), or CRS, would render the objective functions (12) and (19)
nonlinear, because the effective date of technology k, the λ-weighted average of release
dates, has a denominator that is no longer constrained to be equal to 1. For computational
purposes, the same general secondary goal of minimizing effective years is pursued
through the modified objective functions shown in (26) and (27).

[Equations (26) and (27): the objective functions of the output-oriented and input-oriented models restated for non-VRS returns-to-scale assumptions, with the effective-date term expressed through the λ-weighted release years.]
The second stage, shown by (28)-(31), calculates the RoCs from each orientation for
the technologies that were state of the art at their release, i.e., $\varphi_k^{R} = 1$ ($\theta_k^{R} = 1$), but
were later superseded by new technologies at the current frontier time, i.e., $\varphi_k^{C} > 1$
($\theta_k^{C} < 1$). Having calculated the RoCs of past technologies in (28) and (30), the idea of the
segmented RoC can then be implemented by taking a weighted average of these RoCs for
each technology on the current SOA frontier. This leads to the calculation of local RoCs
in (29) and (31), where $\overline{RoC}_j(T)$ represents the local RoC driven by technology j at the current
time T. Note that technology j has an efficiency score of 1 at the current frontier; in other
words, it is one of the SOAs that constitutes the frontier onto which future technologies
are to be projected. The numerator of (29) and (31) indicates the weighted sum of RoCs
from past technologies that have set technology j as a (or one of) benchmark(s). The
denominator indicates the accumulated contribution of technology j to the evolution of
the SOA frontier. Consequently, $\overline{RoC}_j(T)$ represents the local RoC that only counts the RoCs
in which SOA technology j has been used as a benchmark.5
$RoC_k^{OO} = \left( \varphi_k^{C} \right)^{1 \,/\, \left( \frac{\sum_{j=1}^{n} \lambda_{jk}^{C} t_j}{\sum_{j=1}^{n} \lambda_{jk}^{C}} - t_k \right)}, \quad \forall k \mid \varphi_k^{R} = 1,\; \varphi_k^{C} > 1$   (28)

$\overline{RoC}_j^{OO}(T) = \dfrac{\sum_{k=1}^{n} \lambda_{jk}^{C}\, RoC_k^{OO}}{\sum_{k=1,\; \lambda_{jk}^{C} > 0}^{n} \lambda_{jk}^{C}}, \quad \forall j \mid \varphi_j^{C} = 1$   (29)

$RoC_k^{IO} = \left( \dfrac{1}{\theta_k^{C}} \right)^{1 \,/\, \left( \frac{\sum_{j=1}^{n} \lambda_{jk}^{C} t_j}{\sum_{j=1}^{n} \lambda_{jk}^{C}} - t_k \right)}, \quad \forall k \mid \theta_k^{R} = 1,\; \theta_k^{C} < 1$   (30)

$\overline{RoC}_j^{IO}(T) = \dfrac{\sum_{k=1}^{n} \lambda_{jk}^{C}\, RoC_k^{IO}}{\sum_{k=1,\; \lambda_{jk}^{C} > 0}^{n} \lambda_{jk}^{C}}, \quad \forall j \mid \theta_j^{C} = 1$   (31)
5 A special case [316] was observed in which the RoC for one product, i.e., $RoC_k$, exceeded 10.0 due to
the short time period between the effective date and the actual release date. An RoC of 10.0 indicates that
the technology is advancing at a rate of 1000% per time period (day, month, year). This greatly exceeds
even the rapidly moving portions of the computer industry such as microprocessors and therefore is
considered an unreliable estimate of RoC. The current implementation assumes the maximum acceptable
RoC to be 10.0 and hence drops those having RoCs greater than this limit from the local RoC calculation.
Exploring this further is a topic for future work.
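A minimal sketch of equations (28)-(31) under these conventions is given below, assuming the stage-one results are held in dictionaries keyed by technology index and that the λ weights are stored as full-length arrays (zeros for non-peers). The RoC cap of 10.0 described in the footnote above is included, and all names are illustrative rather than part of the original implementation.

```python
import numpy as np

def individual_rocs(phi_C, release, lambdas_C, roc_cap=10.0):
    """Annualized RoC of technologies that were SOA at release but are
    superseded at the current frontier (phi_C > 1)."""
    rocs = {}
    for k, phi in phi_C.items():
        if phi <= 1.0:
            continue
        lam = lambdas_C[k]                              # weights on current SOA peers
        eff_date = np.dot(lam, release) / lam.sum()     # lambda-weighted effective date
        roc = phi ** (1.0 / (eff_date - release[k]))
        if 1.0 < roc <= roc_cap:                        # drop unreliable RoCs (> 10.0)
            rocs[k] = roc
    return rocs

def local_rocs(rocs, lambdas_C, soa_index):
    """Local RoC of each SOA technology j: lambda-weighted average of the RoCs
    of the superseded technologies that used j as a benchmark."""
    out = {}
    for j in soa_index:
        num = sum(lambdas_C[k][j] * r for k, r in rocs.items())
        den = sum(lambdas_C[k][j] for k in rocs)
        out[j] = num / den if den > 0 else None
    return out
```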
The third stage solves super-efficiency models for the forecasting targets of future
products. Since the purpose of this stage is to measure the super-efficiency of each
forecasting target from the current frontier, the reference set is confined to the current
SOA technologies by (35), (36), (41), and (42). M is a user-defined large positive number
to give a preemptive priority to the identification of a minimum radial shift of inputs (or
outputs) to render the model feasible. In the output-oriented model, shown by (32)-(37),
the radial output reduction and the extremity are obtained as $1-\tau_k$ and $1+\beta_k$, respectively.
Likewise, in the input-oriented model, shown by (38)-(43), the radial input augmentation and
the extremity are defined as $1+\delta_k$ and $1-\beta_k$, respectively.
For each forecasting target k with $t_k > T$, the output-oriented super-efficiency model is

$\min \;\; \tau_k + M\,\beta_k, \quad \forall k \mid t_k > T$   (32)

s.t.  $\sum_{j=1}^{n} \lambda_{jk}\, x_{ij} \le (1+\beta_k)\, x_{ik}, \quad \forall i$   (33)

$\sum_{j=1}^{n} \lambda_{jk}\, y_{rj} \ge (1-\tau_k)\, y_{rk}, \quad \forall r$   (34)

$\sum_{j \,\mid\, \varphi_j^{C} = 1} \lambda_{jk} = 1$   (35)

$\lambda_{jk} = 0, \quad \forall (j,k) \mid t_j > T$   (36)

$\lambda_{jk},\; \tau_k,\; \beta_k \ge 0$   (37)

and the input-oriented super-efficiency model is

$\min \;\; \delta_k + M\,\beta_k, \quad \forall k \mid t_k > T$   (38)

s.t.  $\sum_{j=1}^{n} \lambda_{jk}\, x_{ij} \le (1+\delta_k)\, x_{ik}, \quad \forall i$   (39)

$\sum_{j=1}^{n} \lambda_{jk}\, y_{rj} \ge (1-\beta_k)\, y_{rk}, \quad \forall r$   (40)

$\sum_{j \,\mid\, \theta_j^{C} = 1} \lambda_{jk} = 1$   (41)

$\lambda_{jk} = 0, \quad \forall (j,k) \mid t_j > T$   (42)

$\lambda_{jk},\; \delta_k,\; \beta_k \ge 0$   (43)
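The output-oriented model (32)-(37) can be set up as a single linear program with a big-M penalty on the extremity variable. The sketch below is one possible implementation under that reading, using scipy and hypothetical argument names; the input-oriented model (38)-(43) would be symmetric.

```python
import numpy as np
from scipy.optimize import linprog

def oo_super_efficiency(xk, yk, X_soa, Y_soa, M=1e6):
    """Output-oriented super-efficiency of a forecasting target against the
    current SOA frontier, allowing a radial input shift (extremity) when the
    plain super-efficiency model would be infeasible.
    Decision variables: [tau, beta, lambda_1 .. lambda_n]."""
    n = X_soa.shape[0]
    c = np.r_[1.0, M, np.zeros(n)]                    # min  tau + M * beta
    A_ub, b_ub = [], []
    for i in range(X_soa.shape[1]):                   # sum λ x_ij <= (1 + beta) x_ik
        A_ub.append(np.r_[0.0, -xk[i], X_soa[:, i]])
        b_ub.append(xk[i])
    for r in range(Y_soa.shape[1]):                   # sum λ y_rj >= (1 - tau) y_rk
        A_ub.append(np.r_[-yk[r], 0.0, -Y_soa[:, r]])
        b_ub.append(-yk[r])
    A_eq = [np.r_[0.0, 0.0, np.ones(n)]]              # convexity over current SOA peers
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 2), method="highs")
    tau, beta, lam = res.x[0], res.x[1], res.x[2:]
    extremity, radial = 1.0 + beta, 1.0 - tau
    return extremity, radial, lam
```

For target F in Fig. 11, with its current SOA peers as the reference set, this sketch returns an extremity of 2.0 and a radial output reduction of 0.4, matching the worked example above.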
The last stage makes a forecast of the arrival of future technologies. The
individualized RoC for each forecasting target k can be computed by combining the local
RoCs of SOA technology j that constitutes the frontier facet onto which technology k is
being projected. That is, in the case of the output-oriented (input-oriented) model, the
estimated elapsed time for the extremity, if any, is computed using the individualized
RoC from the input-oriented (output-oriented) model. For target F in Fig. 11, for example,
the time span for the extremity, namely the distance from F to F′, 2 (=10/5), in the
output-oriented model is estimated by the individualized RoC combined from the input-oriented
local RoCs of its peers, A and B. In addition, the time span for the radial output reduction,
namely the distance from F′ to A, 0.4 (=2/5), is estimated by the individualized RoC from its
output-oriented peer, A. Consequently, the forecasted arrival time of F is obtained as the sum of
those estimated elapsed times and the reference time of the current frontier. Likewise, the
forecasted arrival time of D under the output-oriented (input-oriented) model is obtained
as the sum of the estimated time span for the distance from itself to a radially shifted
point, i.e., D′ (D″), using the input-oriented (output-oriented) local RoC of C (A), and
the estimated time span for the distance from the shifted point to its peer, A (C), using the
corresponding output-oriented (input-oriented) local RoC.
$\widehat{t}_k \;=\; \dfrac{\ln\!\left( \frac{1}{1-\tau_k} \right)}{\ln\!\left( \frac{\sum_{j=1}^{n} \lambda_{jk}\, \overline{RoC}_j^{OO}}{\sum_{j=1}^{n} \lambda_{jk}} \right)} \;+\; \dfrac{\ln\!\left( 1+\beta_k \right)}{\ln\!\left( \frac{\sum_{j=1}^{n} \lambda_{jk}\, \overline{RoC}_j^{IO}}{\sum_{j=1}^{n} \lambda_{jk}} \right)} \;+\; \dfrac{\sum_{j=1}^{n} \lambda_{jk}\, t_j}{\sum_{j=1}^{n} \lambda_{jk}}, \quad \forall k \mid t_k > T$   (44)

$\widehat{t}_k \;=\; \dfrac{\ln\!\left( 1+\delta_k \right)}{\ln\!\left( \frac{\sum_{j=1}^{n} \lambda_{jk}\, \overline{RoC}_j^{IO}}{\sum_{j=1}^{n} \lambda_{jk}} \right)} \;+\; \dfrac{\ln\!\left( \frac{1}{1-\beta_k} \right)}{\ln\!\left( \frac{\sum_{j=1}^{n} \lambda_{jk}\, \overline{RoC}_j^{OO}}{\sum_{j=1}^{n} \lambda_{jk}} \right)} \;+\; \dfrac{\sum_{j=1}^{n} \lambda_{jk}\, t_j}{\sum_{j=1}^{n} \lambda_{jk}}, \quad \forall k \mid t_k > T$   (45)

where the $\lambda_{jk}$ are taken from the super-efficiency model of the corresponding orientation, and the ratios of $\lambda$-weighted local RoCs define the individualized output-oriented and input-oriented RoCs of target k.
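A corresponding sketch of this fourth stage, assuming the super-efficiency results and the local RoCs from the previous stages are available as NumPy arrays, is shown below; the function name and argument layout are illustrative.

```python
import numpy as np

def forecast_arrival(radial, extremity, lam, release, local_roc_oo, local_roc_io):
    """Forecasted time of release for one output-oriented target (sketch of (44)):
    each gap is converted to elapsed years via the log of the corresponding
    individualized RoC and added to the effective date of the reference facet."""
    w = lam / lam.sum()
    roc_oo = np.dot(w, local_roc_oo)            # individualized output-oriented RoC
    roc_io = np.dot(w, local_roc_io)            # individualized input-oriented RoC
    eff_date = np.dot(w, release)               # effective date of the facet
    t_radial = np.log(1.0 / radial) / np.log(roc_oo)   # radial output reduction (< 1)
    t_extrem = np.log(extremity) / np.log(roc_io)      # input-shift extremity (>= 1)
    return eff_date + t_radial + t_extrem
```

For a feasible target the extremity equals 1, its term vanishes, and the forecast reduces to the traditional single-orientation TFDEA calculation.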
Models (32)-(37) and (38)-(43) yield the same solutions as the standard super-efficiency
measure when the original TFDEA model is feasible, and provide results with a consistent
interpretation when the original TFDEA model is infeasible. The following proofs
guarantee that the proposed TFDEA extension returns a feasible and a finite solution.
Forecast results for the 31 formerly infeasible LCD panel targets relative to the 2007 SOA frontier:

# | LCD panel name | Actual year of release | Extremity (1-β) | Radial distance (1+δ) | Individualized output-oriented RoC | Individualized input-oriented RoC | Effective date | Forecasted time of release
165 | T520HW01 V0 | 2008 | 0.9351 | 1.6236 | 1.1778 | 1.1972 | 2006.72 | 2009.82
166 | V562D1-L04 | 2008 | 0.9632 | 1.0035 | 1.1876 | 1.6580 | 2007.00 | 2007.16
212 | V460H1-LH7 | 2009 | 0.8183 | 1.3059 | 1.1932 | 1.1879 | 2006.86 | 2009.53
218 | LTA550HF02 | 2009 | 0.9151 | 2.0317 | 1.1708 | 1.1986 | 2006.70 | 2011.16
248 | V400H1-L08 | 2009 | 0.8508 | 1.2724 | 1.1984 | 1.1849 | 2006.90 | 2009.21
265 | LK460D3LA63 | 2010 | 0.9778 | 2.2312 | 1.1752 | 1.1941 | 2006.77 | 2011.43
266 | LTA460HM03 | 2010 | 0.9778 | 1.7685 | 1.1826 | 1.1941 | 2006.77 | 2010.11
268 | LTA460HQ05 | 2010 | 0.9778 | 2.0760 | 1.1824 | 1.1941 | 2006.77 | 2011.02
271 | P460HW03 V0 | 2010 | 0.9778 | 2.0760 | 1.1824 | 1.1941 | 2006.77 | 2011.02
273 | V460H1-L11 | 2010 | 0.8183 | 1.2929 | 1.1932 | 1.1879 | 2006.86 | 2009.48
274 | V460H1-LH9 | 2010 | 0.8653 | 1.4340 | 1.1900 | 1.1898 | 2006.83 | 2009.73
276 | LK601R3LA19 | 2010 | 0.6737 | 1.4558 | 1.1609 | 1.4553 | 2007.00 | 2010.27
277 | LTA550HJ06 | 2010 | 0.8159 | 1.7228 | 1.1738 | 1.1940 | 2006.77 | 2011.09
282 | P546HW02 V0 | 2010 | 0.9151 | 2.3603 | 1.1415 | 1.1986 | 2006.70 | 2012.11
283 | P645HW03 V0 | 2010 | 0.9674 | 1.8781 | 1.1413 | 1.2088 | 2006.56 | 2010.13
284 | P650HVN02.2 | 2010 | 0.9674 | 1.7607 | 1.1603 | 1.2088 | 2006.56 | 2009.75
293 | R300M1-L01 | 2010 | 0.7910 | 2.7735 | 1.1156 | 1.6838 | 2007.00 | 2009.52
298 | LTF320HF01 | 2010 | 0.9590 | 1.1092 | 1.2043 | 1.1817 | 2006.95 | 2007.79
305 | V315H3-L01 | 2010 | 0.9590 | 1.3354 | 1.2039 | 1.1817 | 2006.95 | 2008.91
321 | T400HW03 V3 | 2010 | 0.9018 | 1.2863 | 1.1955 | 1.1866 | 2006.88 | 2008.92
325 | V420H2-LE1 | 2010 | 0.8893 | 1.9215 | 1.1832 | 1.1877 | 2006.86 | 2011.35
330 | V370H4-L01 | 2010 | 0.9211 | 1.3982 | 1.1982 | 1.1849 | 2006.90 | 2009.33
331 | V400H1-L10 | 2010 | 0.9018 | 1.4399 | 1.1952 | 1.1866 | 2006.88 | 2009.58
336 | LTA460HN01-W | 2011 | 0.9778 | 1.9895 | 1.1825 | 1.1941 | 2006.77 | 2010.78
341 | V500HK1-LS5 | 2011 | 0.9489 | 2.5756 | 1.1354 | 1.1962 | 2006.74 | 2012.43
344 | BR650D15 | 2011 | 0.8571 | 1.3431 | 1.1650 | 1.2028 | 2006.64 | 2009.24
349 | LK600D3LB14 | 2011 | 0.6012 | 1.5420 | 1.1686 | 1.1866 | 2006.88 | 2012.67
350 | LK695D3LA08 | 2011 | 0.8268 | 1.8865 | 1.1446 | 1.2050 | 2006.61 | 2011.42
353 | LTI700HA01 | 2011 | 0.9289 | 1.6223 | 1.1510 | 1.2110 | 2006.52 | 2009.56
355 | T706DB01 V0 | 2011 | 0.9060 | 2.8878 | 1.1286 | 1.2345 | 2006.56 | 2012.41
357 | V546H1-LS1 | 2011 | 0.8159 | 2.0305 | 1.1688 | 1.1940 | 2006.77 | 2012.06
The sixth and seventh columns show the individualized output-oriented RoC and
input-oriented RoC, respectively. The time span required for the output reduction, i.e., the
extremity, was therefore obtained using the corresponding individualized output-oriented RoC.
Likewise, the time span required for the input augmentation, i.e., the radial distance, was
computed using the corresponding individualized input-oriented RoC.
The last column shows the forecasted year of release considering the superiority of
each target technology compared to the 2007 SOA frontier. That is, the forecasted year of
release was obtained by the sum of the optimal starting point of the forecast, i.e.,
effective date shown in the eighth column, and the estimated elapsed times for extremity
and radial distance.
The accuracy of the proposed model can be readily shown by comparing those
forecasted years with actual years of release. The deviation statistics contain this
information. As seen from Fig. 12, the forecast deviation distribution of the 31 infeasible targets
has a mean of -0.26 years with a 95% confidence interval (CI) of ±0.41 years. This is more
accurate than the forecast of the 64 feasible targets, i.e., a mean deviation of +1.19 years (±0.53),
and it improves the overall forecasting performance to a mean deviation of +0.72
years (±0.40). Note that the proposed model yielded forecast results equivalent to those
of a traditional TFDEA model for feasible targets. Consequently, it is shown that the
proposed model could make a reasonable forecast for formerly infeasible targets as well
as a consistent forecast for feasible targets.
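For reference, deviation statistics of this kind can be computed along the following lines; the sign convention (forecast minus actual) and the use of a t-based confidence interval are assumptions made for illustration.

```python
import numpy as np
from scipy import stats

def deviation_stats(forecast, actual):
    """Mean forecast deviation and its 95% confidence half-width."""
    dev = np.asarray(forecast) - np.asarray(actual)
    mean = dev.mean()
    half = stats.t.ppf(0.975, dev.size - 1) * stats.sem(dev)
    return mean, half
```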
In this chapter, the proposed approach is applied to actual case studies. This is
organized in two sections. The first section focuses on ex post analysis, which revisits the
past applications to show how this approach can improve the forecasting accuracy using a
hold-out sample technique. In particular, the case study of the commercial airplane
industry is described in detail to fully explain the use of the proposed approach. To
further validate the utility of the proposed approach, six past datasets are revisited and
comparative results against the traditional approach are provided.
The second section focuses on ex ante analysis, which addresses how the proposed
approach can be used to solve the actual forecasting problems in the supercomputer
industry. Specifically, the case study aims to investigate technological progress of
supercomputer development to identify the innovative potential of three leading
technology paths toward Exascale development: hybrid system, multicore system and
manycore system.
Airplane | EIS (year) | Travel range (1,000 km) | Passenger capacity (3rd class) | PFE (passengers*km/L) | Cruising speed (km/h) | Maximum speed (km/h)
DC8-55 | 1965 | 9.205 | 132 | 13.721 | 870 | 933
DC8-62 | 1966 | 9.620 | 159 | 16.646 | 870 | 965
747-100 | 1969 | 9.800 | 366 | 19.559 | 893 | 945
747-200 | 1971 | 12.700 | 366 | 23.339 | 893 | 945
DC10-30 | 1972 | 10.010 | 250 | 18.199 | 870 | 934
DC10-40 | 1973 | 9.265 | 250 | 16.844 | 870 | 934
L1011-500 | 1979 | 10.200 | 234 | 19.834 | 892 | 955
747-300 | 1983 | 12.400 | 412 | 25.652 | 902 | 945
767-200ER | 1984 | 12.200 | 181 | 24.327 | 849 | 913
767-300ER | 1988 | 11.065 | 218 | 26.575 | 849 | 913
747-400 | 1989 | 13.450 | 416 | 25.803 | 902 | 977
MD-11 | 1990 | 12.270 | 293 | 24.595 | 870 | 934
A330-300 | 1993 | 10.500 | 295 | 31.877 | 870 | 913
A340-200 | 1993 | 15.000 | 261 | 25.252 | 870 | 913
A340-300 | 1993 | 13.700 | 295 | 27.335 | 870 | 913
MD-11ER | 1996 | 13.408 | 293 | 24.939 | 870 | 934
777-200ER | 1997 | 14.305 | 301 | 25.155 | 892 | 945
777-300 | 1998 | 11.120 | 365 | 23.713 | 892 | 945
A330-200 | 1998 | 12.500 | 253 | 22.735 | 870 | 913
A340-600 | 2002 | 14.600 | 380 | 28.323 | 881 | 913
A340-500 | 2003 | 16.700 | 313 | 24.334 | 881 | 913
777-300ER | 2004 | 14.685 | 365 | 29.568 | 892 | 945
777-200LR | 2006 | 17.370 | 301 | 28.841 | 892 | 945
A380-800 | 2007 | 15.200 | 525 | 24.664 | 902 | 945

EIS: entry into service. PFE: passenger fuel efficiency derived from passenger capacity, maximum travel range at full payload, and fuel capacity.
SOA airplane | Local RoC | Dominated airplanes
747-300 | 1.000949 |
747-400 | 1.001404 |
A330-300 | 1.002188 | 767-300ER, A340-300
777-300ER | 1.002561 | 767-300ER, A340-200/300/600
777-200LR | 1.004606 | A340-200/500
A380-800 | 1.003989 | A340-500/600
for higher fuel efficiency [245]. For example, recent long-range airplanes, the twinjet
A330 and the four-engine A340, became popular for their efficient wing design [246].
Meanwhile, the Airbus A340-500 has an operating range of 16,700 km, which is the
second longest range of any commercial jet after the Boeing 777-200LR (range of 17,370
km) [247]. Therefore, it is not surprising that the A330-300 has been selected as a
benchmark of not only the same family airplane A340-300 but also the Boeing 767-300ER,
which is also a relatively long-range (11,065 km) airplane with high passenger
fuel efficiency (26.575 passenger*km/L). Additionally, the Airbus A380-800 became the
world's largest passenger airplane with a seating capacity of 525 [248]. One can also
relate this feature to the reference set which consists of its predecessors: A340-500 and
A340-600 with relatively higher passenger capacities as well. This long-range, wide-body
airplane has emerged as a fast-growing segment as airlines emphasized transcontinental
aircraft capable of directly connecting any two cities in the world [243]. This, in fact, has
initiated a series of introductions of the A340 family for Airbus to compete with Boeing
[249], which is consistent with the fast local RoCs, indicating a very competitive segment
of the market with rapid improvement.
The Boeing 777 series ranks among Boeing's best-selling aircraft for its high
fuel efficiency, which enables long-range routes [250]. In particular, the 777-300ER is the
extended range version of the 777-300, which has a maximum range of 14,685 km, made
possible by superior passenger fuel efficiency of 29.568 passenger*km/L. These
exceptional characteristics made not only the preceding 767-300ER but also the Airbus
series that pursued higher fuel efficiency (A340-200/300/600) appoint the 777-300ER as
a benchmark for their performance evaluation. Likewise, the 777-200LR has been
selected as a benchmark for long-range airplanes that have relatively smaller passenger
capacities: A340-200 and A340-500. Because of rising fuel costs, airlines have asked for
a fuel-efficient alternative and have increasingly deployed the aircraft on long-haul
transoceanic routes [251]. This has driven engineering efforts more toward energy
efficient aircraft, which is reflected in the fast local RoCs of the Boeing 777 series.
C. RISK ANALYSIS
I now turn to the strategic planning for the proposed airplane concepts (see Table 16).
In particular, the planning team would like to identify the relevant engineering targets for
each design concept as well as the corresponding rate of technological advancement, i.e.,
individualized RoC, so that they can examine the feasibility of proposed design concepts
in terms of their delivery target.
As SOA airplanes at the frontier of 2007 represent different types of past airplanes,
future airplanes, namely design concepts, can be classified by the characteristics of their
reference airplanes identified on the 2007 frontier. This allows the model to compute an
individualized RoC under which each future airplane is expected to be released. Figure
13 summarizes the results.
Design concept | Travel range (1,000 km) | Passenger capacity (3rd class) | PFE (passengers*km/L) | Cruising speed (km/h) | Maximum speed (km/h) | Delivery target (year)
1 | 14.816 | 467 | 28.950 | 917 | 988 | 2010
2 | 15.750 | 280 | 34.851 | 913 | 954 | 2010
3 | 15.000 | 315 | 34.794 | 903 | 945 | 2013
4 | 14.800 | 369 | 35.008 | 903 | 945 | 2015
The first design concept aims for a large commercial aircraft carrying 467
passengers while having the fast cruising speed of 917 km/h. As noted earlier, these
characteristics are also reflected in its reference airplanes: 747-400, 777-300ER, and
A380-800. That is, this design concept would compete with these three airplanes in the
current (2007) market with given specifications. The individualized RoC of this design
concept can therefore be obtained by interpolating local RoCs in conjunction with
reference information. Here, the individualized RoC obtained was 1.002748, which
suggests a more rapid technology development in its category compared to the average
RoC of 1.002149. This is about 28% faster and resulted in an estimated entry into service
(EIS) of the current design concept in 2011.49. Therefore, one may consider the delivery
target of 2010 to be an aggressive goal that might encounter technical challenges by
outpacing the rate of technological advancement of the past.
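The 28% figure follows directly from the two rates of change quoted above:

$\frac{1.002748 - 1}{1.002149 - 1} = \frac{0.002748}{0.002149} \approx 1.28$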
In a similar manner, the characteristics of the second design concept, a long range of
15,750 km with outstanding passenger fuel efficiency of 34.851 passenger*km/L, are
consistent with the nature of its identified reference airplanes: A330-300 and 777-200LR.
As implied in the local RoCs of 777-200LR (1.004606) with its reference information
(0.67), this concept is subject to one of the fastest advancing technology clusters seeking
a high fuel efficiency. Consequently, it was expected that the very fast individualized
RoC of 1.003793 could achieve this level of specification by 2013.45. Similar to the first
design concept, this indicates that the delivery target of 2010 may involve a significant
technical risk since it requires exceeding the past rate of technological advancement.
The third design concept has features similar to the second design concept such that
it also aims to be a long-range and fuel efficient aircraft; however, it pursues a larger
passenger capacity of 315. This feature is reflected in the reference set that additionally
includes 777-300ER, which has a large passenger capacity of 365. The relatively slow
local RoC of the 777-300ER and the A330-300 may imply the difficulty of technological
advancement with respect to the travel range and passenger capacity. As a result, the
individualized RoC for this design concept was found to be 1.003494, giving a forecasted
EIS of 2012.45. Given the delivery target of 2013, the current design concept might be
regarded as a feasible goal; on the other hand, it possibly entails a modest
market risk of lagging behind in the performance competition.
6 This figure (Fig. 13) depicts conceptualized frontier facets relevant to the four design concepts under discussion.

The last design concept is a variation of the third design concept, aiming for a much
larger airplane but with a shortened travel range. Not surprisingly, this different blend of
the same three peers makes a virtual target of this design concept positioned closer to the
777-300ER than to the 777-200LR and A330-300, which results in a more conservative
prospect based on the slower rate of performance improvement represented by
the 777-300ER. Consequently, the individualized RoC was found to be 1.002568, giving
a forecasted EIS of 2020.16. This indicates that the delivery target of 2015 may be an
overly optimistic goal which could cause a postponement due to technical risks involved.
D. PROOF OF CONCEPT
I now come back to the present and validate the performance of the presented
method (see Table 17). The first design concept was the Boeing 747-8, which began
deliveries in 2012 [252]. In fact, this airplane faced two years of delays since its original
plan of 2010 due to assembly and design problems followed by contractual issues [253].
The second design concept was another Boeing airplane, the 787-9, which made its
maiden flight in 2013; deliveries began in July 2014 [254], [255]. In line with the
results, the originally targeted EIS of 2010 could not be met because of multiple delays
due to technical problems in addition to a machinists strike [256].
The third design concept was the initial design target of Airbus A350-900, which has
been changed and rescheduled to enter service in the second half of 2014 [257], [258].
The delay was mainly imposed by a strategic redesign of the A350, the so-called XWB
(extra-wide-body) program, which allows for a maximum seating capacity of 440 with a 10-abreast high-density seating configuration as well as a reinforced fuselage design [259]. It
is interesting to note that Airbus has made a strategic decision by delaying the A350-900's
delivery with improved specifications to compete with the Boeing 777 series in the
jumbo jet segment, which was recognized in the analysis results seven years ago.
Similarly, the last design concept was the Airbus A350-1000, whose EIS has also
been rescheduled to 2018 [260]. This airplane is the largest variation of the A350 family
and designed to compete with the Boeing 777-300ER, as is also seen from the reference
information. Nevertheless, the postponed delivery target of 2018 may still be an
aggressive goal considering the technological advancement observed in this segment.
Design concept | Reference airplanes (2007 SOA) | Planned EIS | Estimated EIS | Delayed EIS
1 (747-8) | 747-400, 777-300ER, A380-800 | 2010 | 2011.49 | 2012
2 (787-9) | A330-300, 777-200LR | 2010 | 2013.45 | 2014
3 (A350-900*) | A330-300, 777-200LR, 777-300ER | 2013 | 2012.45 | 2014
4 (A350-1000) | A330-300, 777-200LR, 777-300ER | 2015 | 2020.16 | 2018
* Initial design
Swanson and White's study noted that forecasting accuracy may also be affected
by an increase in the fit period when a rolling origin is employed [266]. To avoid this,
they suggested a procedure called a fixed-size rolling window to maintain a constant
length of the fit period. This technique can clean out old data in an attempt to update the
forecasting model, thereby mitigating the influence of data from the distant past [261].
Dataset | RMSE: Constant RoC | RMSE: Segmented RoC | Deviation, 95% CI: Constant RoC | Deviation, 95% CI: Segmented RoC | Paired t-test: t-stat | p-value
 | 11.9208 | 6.3084 | -9.06 (5.18) | -3.56 (3.65) | -4.3653 | 0.0023
 | 7.8229 | 7.2524 | -7.22 (3.38) | -6.32 (3.17) | -2.1274 | 0.0454
 | 23.1312 | 16.7987 | -15.57 (7.62) | -9.30 (6.30) | -5.3973 | 0.0001
LCD [232] | 2.3061 | 2.1508 | +0.63 (0.27) | +0.35 (0.30) | 6.7182 | 0.0000
HEV [268] | 3.4176 | 3.3329 | -2.33 (1.70) | -2.26 (1.67) | -3.2221 | 0.0105
DSLR [269] | 2.6333 | 2.6271 | -0.43 (0.36) | -0.15 (0.33) | -3.8553 | 0.0002
In all cases, the segmented RoC showed not only smaller forecasting errors, i.e.,
RMSE(constant RoC) > RMSE(segmented RoC), but also statistically significant improvements
in the deviation distributions according to the paired t-tests.
One may infer that forecasting accuracy improvement would be more significant if
unique segments were identified with a greater local RoC contrast to one another, and
future technologies were subject to those unique segments. This can be shown by
comparing the constant RoC with individualized RoCs.
Figure 15 contains this information. Note that RoCs were normalized to show their
distribution in comparison to constant RoCs that were set to be 100% across the
forecasting origins. It is seen that in the case of commercial airplane and battle tank
applications, individualized RoCs for forecasting targets show skewed distributions from
constant RoCs. That is, most of the forecasting targets were subject to relatively fast
progressing segments such that constant RoCs had to yield fairly conservative forecasts
as seen from the deviation statistics. On the other hand, the segmented RoC approach
could reflect those variations by obtaining fast individualized RoCs from the distribution
of local RoCs, which resulted in considerable accuracy improvements.
In contrast, when the local RoC of a certain segment by which most future
technologies are classified was close to the constant RoC, the impact of segmented RoC
would be marginal even if a wide range of local RoCs was identified. This can be seen
from the case of DSLR application in which a constant RoC could reasonably represent
the variations of individualized RoCs as an average value.
A special case can occur when the regions or clusters do not contain past products
that have been surpassed. In this case, a product may not have a local RoC. Graphically,
this would occur in Fig. 9 if products B and E were not included, which would then result
in G failing to have a local RoC. In such cases, G's local RoC could be assumed to be the
average RoC of all SOA products (H and I). Another approach would be to average the
RoCs of products that are on the same facet(s) of the efficiency frontier (simply H).
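One way to express this fallback rule in code is sketched below; the data structures (a dictionary of local RoCs and a mapping from each SOA product to the products sharing its frontier facets) are assumptions for illustration.

```python
def fallback_local_roc(j, local_roc, facet_peers, soa_products):
    """Local RoC for an SOA product j that no surpassed product has benchmarked:
    prefer the average over products sharing a frontier facet with j,
    otherwise fall back to the average RoC of all SOA products."""
    facet_vals = [local_roc[h] for h in facet_peers.get(j, []) if h in local_roc]
    if facet_vals:                                   # same-facet average (e.g. simply H)
        return sum(facet_vals) / len(facet_vals)
    all_vals = [local_roc[h] for h in soa_products if h in local_roc]
    return sum(all_vals) / len(all_vals) if all_vals else None
```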
However, this steady performance improvement has been made possible by meeting the
exponentially growing power demands. Now that 20 MW of power consumption is set
as a feasible limit, the engineering effort has to focus more on minimizing power
consumption than on maximizing computational power. This implies that extrapolation
relying on a single performance measure, i.e., computing speed, may overlook required
features of future technology systems and could eventually result in an erroneous forecast.
Specifically, the average power efficiency of today's top 10 systems is about 2.2
Petaflops per megawatt. This indicates that it is required to improve power efficiency by
a factor of 23 to achieve the Exascale goal. It is therefore crucial to incorporate the power
consumption and multicore characteristics that identify the power efficiency into the
measure of technology assessment to have a comprehensive understanding of future high
performance computing (HPC) [281]. This requires a multifaceted approach to
investigate the tradeoffs between system attributes, which can tackle questions such as:
How much performance improvement would be restricted by power and/or core
reduction? What would be the maximum attainable computing performance with certain
levels of power consumption and/or the number of cores?
B. ANALYSIS
a) Dataset
The TOP500 list was first created in 1993 to assemble and maintain a list of the 500
most powerful computer systems [288]. Since the list has been compiled twice a year,
datasets from 1993 to 2013 have been combined and cleaned up so that each machine
appears once in the final dataset. The purpose of this study is to consider both power
efficiency and performance effectiveness; therefore, lists up to 2007 were excluded due
to the lack of information on power consumption (see Table 19). The variables selected for
this study are the number of cores, power consumption, and the maximum LINPACK performance (Rmax).
In the final dataset, there were a total of 1,199 machines, with the number of cores
ranging from 960 to 3.12 million, power ranging from 19KW to 17.81MW, and Rmax
ranging from 9 Teraflops to 33.86 Petaflops from 2002 to 2013. Note that a logarithmic
transformation was applied to all three variables due to their exponentially increasing
trends.
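A minimal data-preparation sketch consistent with this description is shown below; the file name and column names are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical file and column names; the combined TOP500 lists (2008-2013)
# are assumed to be cleaned so that each machine appears once.
df = pd.read_csv("top500_combined.csv")
for col in ["cores", "power_kw", "rmax_gflops"]:
    df["log_" + col] = np.log(df[col])   # exponential trends -> logarithmic scale
```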
[Table 19: availability of TOP500 list variables by period (1993~2007, 2008~2009, 2010-1, 2010-2, 2011-1, 2011-2, ~2013). O: available; blank: unavailable; -1 (-2) denotes the first (second) list of the corresponding year. Power consumption data are unavailable before 2008.]
b) Model building
As discussed earlier, power consumption and the number of cores are key variables
representing the power efficiency and therefore were used as input variables, while the
maximum LINPACK performance (Rmax) was used as the output variable. This allows
the model to identify the better performing supercomputer which has lower power,
fewer cores, and/or higher performance if other factors are held constant.
Orientation can be either input-oriented or output-oriented and can be best thought of
as whether the technological progress is better characterized as input reduction or
output augmentation [232]. While power consumption will be a key concern in
Exascale computing, the advancement of this industry has been driven primarily by
improvement in computing performance, i.e., flops. In fact, Exascale computing is a
clearly defined development goal, and therefore an output orientation was selected for
this application. It should be noted here that either orientation can deal with tradeoffs
among input and output variables.
As with many DEA applications, variable returns to scale (VRS) was selected as the
appropriate returns-to-scale assumption, since doubling the input(s) doesn't correspond to
doubling the output(s) here.
The main purpose of this study is to make a forecast of the Exascale computer
deployment by examining past rates of progress, thus the frontier year of 2013 was used
so as to cover the entire dataset. Lastly, minimizing the sum of effective dates was added
as a secondary goal into the model to handle the potential issue of multiple optima from
the dynamic frontier year [240]. Table 20 summarizes the model parameters used in this
study.
Input | Power, Cores
Output | Rmax
Orientation | Output
RTS | VRS
Frontier year | 2013
Frontier type | Dynamic
Second goal | Min (effective date)
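For reproducibility, the parameter choices in Table 20 can be captured in a simple configuration object; the dictionary below is only a sketch and does not correspond to any particular library's API.

```python
# Hypothetical parameter set mirroring Table 20 (illustrative keys and values).
tfdea_params = {
    "inputs": ["power_kw", "cores"],
    "outputs": ["rmax_gflops"],
    "orientation": "output",
    "rts": "vrs",
    "frontier_year": 2013,
    "frontier_type": "dynamic",
    "secondary_goal": "min_effective_date",
}
```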
System | Year | Cores | Power (kW) | Rmax (Gflops) | Technology family | Processor | Interconnect
 | 2013 | 2,688 | 46.00 | 100,900 | | Intel | InfiniBand
Tianhe-2 | 2013 | 3,120,000 | 17,808.00 | 33,862,700 | | Intel | Custom
HPCC | 2013 | 10,920 | 237.00 | 531,600 | | Intel | InfiniBand
Titan (Cray XK7) | 2012 | 560,640 | 8,209.00 | 17,590,000 | | AMD | Cray
Beacon Appro GreenBlade GB824M | 2012 | 9,216 | 45.11 | 110,500 | | Intel | InfiniBand
BlueGene/Q, Power BQC 16C 1.60GHz | 2012 | 8,192 | 41.09 | 86,346 | | IBM Power | Custom
iDataPlex DX360M3 | 2011 | 3,072 | 160.00 | 142,700 | | Intel | InfiniBand
NNSA/SC Blue Gene/Q Prototype 2 | 2011 | 8,192 | 40.95 | 85,880 | | IBM Power | Custom
DEGIMA Cluster | 2011 | 7,920 | 31.13 | 42,830 | | Intel | InfiniBand
BladeCenter QS22 Cluster | 2008 | 1,260 | 18.97 | 9,259 | | IBM Power | InfiniBand
Cluster Platform 3000 BL2x220 | 2008 | 1,024 | 42.60 | 9,669 | | Intel | InfiniBand
Power 575, p6 4.7 GHz, Infiniband | 2008 | 960 | 153.43 | 14,669 | | IBM Power | InfiniBand
BladeCenter HS21 Cluster | 2007 | 960 | 91.55 | 9,058 | | Intel | InfiniBand
It is also interesting to point out that there was a distinct performance gap between
Many/Multicore based machines and GPU/Accelerator based machines using AMD
processors in 2011 and 2012. This can be attributed to the strategic partnership between
Cray and AMD. In fact, Cray has been a staunch supporter of AMD processors since
2007, and their collaboration has delivered continued advancement in HPC [290]. In
particular, Cray's recent interconnect technology, Gemini, was customized for the AMD
Opteron CPU's HyperTransport links to optimize internal bandwidth [291]. Since
modern supercomputers are deployed as massively centralized parallel systems, the speed
and flexibility of interconnect become important for the overall performance of a
supercomputer. Given that hybrid machines using AMD processors all use Cray's
interconnect system, one may notice that AMD based supercomputers had a significant
performance contribution from Cray interconnect as well as Nvidia coprocessors.
One may notice that top supercomputers based on Intel processors have switched to
hybrid systems since 2010. This is because combining CPUs and GPUs is advantageous
in data parallelism, which makes it possible to balance the workload distribution as
efficient use of computing resources becomes more important in today's HPC structure
[292]. Hybrid machines using Intel processors have all adopted InfiniBand interconnect
for their cluster architectures regardless of GPUs/Accelerators: Nvidia, ATI Radeon,
Xeon Phi, PowerXCell, etc. InfiniBand, manufactured by Mellanox and Intel, enables
low processing overhead and is designed to carry multiple traffic types such as clustering,
communications, and storage over a single connection [293]. In particular, its GPUDirect
technology facilitates faster communication and lower latency in GPU/Accelerator based
systems, which can increase computing and accelerator resources as well as improve
productivity and scalable performance [294]. Intel acquired the InfiniBand business from
QLogic in 2012 to support innovation on fabric architectures not only for HPC but also
for the data center, cloud, and Web 2.0 markets [295].
As another possibility, recent attention is focusing on Intel's next generation
supercomputer, which will adopt Cray's Aries interconnect with the Intel Xeon Phi
accelerator as Intel's first non-InfiniBand based hybrid system after its acquisition of the
interconnect business from Cray [296]. Interestingly, this transition reflects the strategic
decision of Cray to end its association with AMD and pursue an independent
interconnect architecture rather than a processor specific one, as AMD's recent
performance and supply stability fell behind competitors [291], [297].
Unlike AMD or Intel processor based systems, the top performing supercomputers
using the IBM Power (PC) processor were Many/Multicore systems. IBM initially
developed the multicore architecture which later evolved to manycore systems, known as
Blue Gene technology. The Blue Gene approach is to use a large number of simple
processing cores and to connect them via a low latency, highly scalable custom
interconnect [298]. This has the advantage of achieving a high aggregate memory
bandwidth, whereas GPU clusters require messages to be copied from the GPU to the
main memory and then from main memory to the remote node, whilst maintaining low
power consumption as well as cost and floor space efficiency [299]. Currently,
GPU/Accelerator based systems suggest smaller cluster solutions for next generation
HPC with their promising performance potential; however, the Blue Gene architecture
demonstrates an alternative direction of massively parallel quantities of independently
operating cores with fewer programming challenges involved [300].
c) Model validation
To validate a predictive performance of the constructed model, hold-out sample tests
were conducted. Specifically, a rolling origin was used to determine the forecast accuracy
by collecting deviations from multiple forecasting origins so that the performance of the
model can be tested both in the near-term and far-term. This provides an objective
measure of accuracy without being affected by occurrences unique to a certain fixed
origin [261]. The comparative results with the planar model and the random walk9 are
summarized in Table 22.
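A rolling-origin evaluation of this kind can be sketched as follows, assuming each machine is represented by a record with its release year and that a model-fitting hook is supplied by the caller; all names are illustrative.

```python
def rolling_origin_mad(records, origins, fit_and_forecast):
    """Mean absolute deviation per forecasting origin: fit on data released up
    to the origin, forecast all later arrivals, and compare with actual years."""
    mad = {}
    for origin in origins:
        train = [r for r in records if r["release"] <= origin]
        test = [r for r in records if r["release"] > origin]
        if not train or not test:
            mad[origin] = None
            continue
        preds = fit_and_forecast(train, test)          # user-supplied model hook
        errs = [abs(p - r["release"]) for p, r in zip(preds, test)]
        mad[origin] = sum(errs) / len(errs)
    return mad
```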
Since the first hybrid system, the Blade Center QS22, appeared in 2008 in the dataset,
the hold-out sample test was conducted from the origin of 2009 for hybrid systems. For
example, the mean absolute deviation of 1.58 years was obtained from TFDEA when the
model made a forecast on arrivals of post-2009 hybrid systems based on the rate of
technological progress observed from 2008 to 2009. The overall forecasting error across
the forecasting origins was found to be 1.32 years, which is more accurate than the planar
model and random walk.
Although multicore systems showed successive introductions from 2007 to 2012,
technological progress, i.e., expansion of the SOA frontier surface, was not observed
until 2010. This rendered the model able to make a forecast only in 2011. The resulting
forecast error of TFDEA was found to be about a year, which is slightly greater than that
of the planar model, albeit still more accurate than the random walk. However, care must
be taken in concluding which approach was more accurate, since the result was obtained
from only a single forecasting origin in 2011. Therefore, the forecast of the multicore
Exascale system will be made using both TFDEA and the planar model in the following section.

9 The random walk model simply predicts that the next-period value is the same as the current value, i.e.,
the arrivals of the forecasting targets equal the forecasting origin [319].
Consecutive introductions of manycore systems with a steady technological progress
made it possible to conduct hold-out sample tests from the origin of 2007 to 2012.
Notwithstanding a larger average forecasting error of 1.49 years, due to the inclusion of
errors from longer forecasting windows than for the other two systems, TFDEA still
outperformed the planar model and the random walk.
Mean absolute deviation (years) by forecasting origin:

Forecasting origin | Hybrid: TFDEA | Hybrid: Planar model | Hybrid: Random walk | Multicore: TFDEA | Multicore: Planar model | Multicore: Random walk | Manycore: TFDEA | Manycore: Planar model | Manycore: Random walk
2007 | N/A | N/A | N/A | N/A | N/A | N/A | 1.8075 | 2.8166 | 2.9127
2008 | N/A | N/A | N/A | N/A | N/A | N/A | 1.4470 | 2.5171 | 2.4949
2009 | 1.5814 | 2.7531 | 2.1852 | N/A | N/A | N/A | 2.0060 | 2.3593 | 2.0509
2010 | 1.1185 | 1.9956 | 1.5610 | N/A | N/A | N/A | 1.4996 | 2.0863 | 1.6016
2011 | 1.8304 | 1.5411 | 1.2778 | 0.9899 | 0.7498 | 1.0000 | 1.2739 | 1.8687 | 1.3720
2012 | 0.7564 | 1.2012 | 1.0000 | N/A | N/A | N/A | 0.8866 | 2.2269 | 1.0000
Average | 1.3217 | 1.8728 | 1.5060 | 0.9899 | 0.7498 | 1.0000 | 1.4867 | 2.3125 | 1.9053
Overall, it is shown that the TFDEA model provides a reasonable forecast for three
types of supercomputer systems with the maximum possible deviation of 18 months. In
addition, it is interesting to note that forecasts from TFDEA tended to be less sensitive to
the forecasting window than the planar model or random walk.
This implies that the current technological progress of supercomputer technologies
exhibits multifaceted characteristics that can be better explained by various tradeoffs
derived from the frontier analysis. In addition, a single design tradeoff identified from the
planar model was shown to be vulnerable to the forecasting window: it showed a
tendency to be less accurate as the forecasting window gets longer.
d) Forecasting
I now turn to the forecasting of the Exascale systems. As previously noted, the
design goal of the Exascale supercomputer is not only to have Exaflops (10^18 flops per second)
second) computing performance but also 20MW power consumption and 100 million
total cores considering the realistic operating conditions (see Table 23) [276], [280]. This
set of specifications was set as a forecasting target to estimate when this level of system
could be operational given the past technological progress identified from the relevant
segments.
Cores | 100 million
Power | 20 MW
Rmax | 1 Exaflops
Table 24 summarizes the forecasting results from the three architectural approaches.
Exascale performance was forecasted to be achieved earliest by hybrid systems in
2021.13. Hybrid systems are expected to accomplish this with a relatively high
individualized RoC of 2.22% and having the best current level of performance
represented by Tianhe-2. Figure 19 depicts the identified individualized RoC with respect
to the local RoCs. It is seen that the technology frontier of hybrid systems includes a wide
range of progress patterns in terms of local RoCs, i.e., 0.27%~2.71%, and the Exascale
target is subject to the relatively fast advancing segment.
Considering the possible deviations identified in the previous section (1.32 year),
one could expect the arrival of a hybrid Exascale system within the 2020 timeframe. This
promising future of hybrid systems is, in fact, acknowledged by many industry experts
claiming that GPU/Accelerator based systems will be more popular in the TOP500 list
for their outstanding power efficiency, which may spur the Exascale development [282],
[285].
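Combining the point forecast with the hold-out deviation of 1.32 years gives the interval reported in Table 24:

$2021.13 \pm 1.32 \;\Rightarrow\; (2019.81,\ 2022.45)$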
The forecasted arrival time of the first multicore based Exascale system is far
beyond 2020 due to the slow rate of technological advancement: 1.19% as well as
relatively lower performance of current SOA multicore systems. It is also shown from
Fig. 19 that the technology frontier of multicore systems has relatively narrow ranges of
local RoCs, i.e., 0.48%~1.86%, and, inter alia, the Exascale target is pertinent to the
moderate segment.
Note that projection from the planar model also estimated the arrival of multicore
based Exascale system far beyond the 2020 timeframe.10

10 The arrival of the first multicore based Exascale system was forecasted to be 2061.62 by the planar model.

This result implies that innovative engineering efforts are required for the multicore
based architecture to be scaled up to Exaflop performance. Even though RIKEN has
embarked on a project to develop an Exascale system continuing the preceding success
of the K computer, IBM's cancellation of the Blue Waters contract and its recent movement
toward the use of a design house raise questions about the prospects of multicore based
HPCs [301], [302].
The first manycore based system is expected to reach the Exascale target by 2022.28.
This technology path has been mostly led by the progress of the Blue Gene architecture,
and the individualized RoC was found to be 2.34%, which was the fastest of the three. It
is interesting to note from Fig. 19 that this fast progress, however, belongs to the
moderate region of the technology frontier where the local RoCs range from 1.09% to
3.40%.
Although this fast advancement couldn't overcome the current performance gap with
hybrid systems in the Exascale race, the Blue Gene architecture still suggests a promising
pathway toward the Exascale computing by virtue of its stable configurations closer to
the traditional design with fewer programming challenges [299].
 | Hybrid system | Multicore system | Manycore system
Individualized rate of change (RoC) | 1.022183 | 1.011872 | 1.023437
Forecasted arrival of Exascale supercomputer | 2021.13 (2019.81~2022.45) | 2031.74 (2030.75~2032.73) | 2022.28 (2020.80~2023.77)
C. DISCUSSION
The analysis of technological RoC makes it possible to forecast a date for achieving
Exascale performance from three different approaches; however, it is worthwhile to
examine these forecasts with consideration for the business environment and emerging
technologies to anticipate the actual deployment possibilities of the Exascale systems.
The optimistic forecast is that, as seen from the high performing Tianhe-2 and Titan
Cray XK7 system, there would be an Intel or AMD based system with a Xeon Phi or
Nvidia coprocessor and a custom Cray interconnect system. However, given business
realities it is unlikely that the first Exascale system will use AMD processors. Intel
purchased the Cray interconnect division and is expected to design the next generation
Cray interconnect optimized for Intel processors and Xeon Phi coprocessors [303].
Existing technology trends and the changing business environment thus suggest a
forecast of a hybrid Exascale system with a Cray interconnect, Intel processors, and Xeon
Phi coprocessors.
The 2.22% annual improvement for hybrid systems has come mostly from a
combination of advances in Cray systems, such as their transverse cooling system, Cray
interconnects, AMD processors and Nvidia coprocessors. It is difficult to determine the
contribution of each component; however, it is worth noting that only Cray systems using
AMD processors were SOAs. This implies that Cray's improvements are the highest
contributor to the RoC for AMD based hybrid systems. Furthermore, Intel's recent
decision to move production of Cray interconnect chips from TSMC to its more advanced
processes will likely result in additional performance improvement. Thus, one may
expect that the Cray / Intel collaboration might result in an RoC greater than 2.22%
and reach the Exascale goal earlier.
Another possibility of achieving Exascale systems is IBM's Blue Gene architecture
using the IBM Power (PC) processor with custom interconnects. This approach has
shown a 2.34% yearly improvement building on the 3rd highest rated Sequoia system.
The Blue Gene architecture, with high bandwidth, low latency interconnects and no
coprocessors to consume bandwidth or complicate programming, is an alternative to the
coprocessor (hybrid) architectures being driven by Intel and AMD. Given IBM's more
stable business structure, it may be more effective moving forward while Intel / Cray
work out their new relationship.
Who has the system experience to build an Exascale system? Cray, IBM and Appro
have built the largest SOA original equipment manufacturer (OEM) systems. In 2012,
Cray purchased Appro, leaving just two major supercomputer manufacturers [304].
Based upon the captured RoCs and the business changes, one can expect that the first
Exascale system will be built by either Cray or IBM.
As supercomputer systems become more complex and expensive, it is worth noting
the funding efforts shaping the future of HPC. The U.S. Department of Energy
(DOE) recently awarded $425 million in federal funding to IBM, Nvidia, and other
companies that will build two 150 Petaflops systems with an option on one system to
build it out to 300 Petaflops [305]. The plan states that IBM will supply its Power
architecture processors, while Nvidia will supply its Volta GPUs, and Mellanox will
provide interconnect technologies to wire everything together [306]. In addition, the
DOE and the National Nuclear Security Administration (NNSA) have announced $100
million to fund the FastForward2 project that will develop technologies needed for future
energy efficient machines in collaboration with AMD, Cray, IBM, Nvidia and Intel [307].
One may notice that U.S. science funding will support both hybrid and manycore systems
for producing the next leap toward the Exascale.
Japan had earlier announced a goal to reach Exascale with the total project cost of
$1.2 billion by 2020 [308]. However, the deputy director at the RIKEN Advanced
Institute for Computational Science (RAICS) recently modified the goal and plans to
build a 200 to 600 Petaflops system by 2020. Nonetheless, given the fact that RIKEN
selected Fujitsu to develop the basic design for the system, there is a keen interest in how
much the multicore system could be scaled up with a relatively low power efficiency of
complex cores.
Data driven forecasting techniques, such as TFDEA, make a forecast of technical
capabilities based upon released products, so emerging technologies that are not yet being
integrated into products are not considered. In the supercomputer academic literature,
there is an ongoing debate about when the currently dominating large core processors
(Intel, AMD) will be displaced by larger numbers of power-efficient, lower performance
small cores such as ARM, much like what happened when microprocessors displaced
vector machines in the 1990s, and much as ARM based mobile computing platforms are
now affecting both Intel and AMD desktop and laptop sales [282], [287]. Although there is no ARM
based supercomputer in the TOP500 yet, the European Mont-Blanc project is targeting
getting one on the list by 2017, and Nvidia is developing an ARM based supercomputer
processor for use with its coprocessor chips [309]. Small cores are a potentially disruptive
technology as power efficiency is becoming more important; therefore, further analysis is
warranted as products based on them reach the market. A new kind of supercomputer
attracting recent attention is the superconducting supercomputer (as one way to enable
quantum computing). Even
though the exact financial and technical details with a timeframe were not disclosed, the
Intelligence Advanced Research Projects Activity (IARPA) revealed that funding
contracts have been awarded to IBM, Raytheon-BBN and Northrop Grumman
Corporation focusing on the development of the Cryogenic Computer Complexity (C3)
program [310]. Early research suggested that a superconducting supercomputer would be
able to provide around 100 Petaflops of performance while consuming just 200 kilowatts
[311]. If the mission of the C3 program can be achieved and the related technologies can
be successfully transferred to practical use, the next generation of supercomputers could
be far different from those of the past, and the Exascale goal could be achieved
without concerns over power and cooling capacities.
Lastly, this study set the Exascale target considering the realistic operating
conditions: 20MW of power consumption and 100 million cores. If this set of
specifications were relaxed at the manufacturer's discretion, the arrival of an Exascale
computer could come earlier than current forecasts, as China is believed to be targeting
the 2018-2020 timeframe for continuing the gigantic design approach of Tianhe-2.
D. SUMMARY OF FINDINGS
The HPC industry is experiencing a radical transition which requires an improvement
in power efficiency by a factor of 23 to deploy and/or manage Exascale systems. This
has created an industry concern that the naïve forecast based on the past performance
curve may have to be adjusted. TFDEA is well suited to dealing with multiple tradeoffs
between system attributes. This study examined the comparative prospects of three
competing technology alternatives with various design possibilities considering the
complex business environment to achieve the Exascale computing so that researchers and
manufacturers can have a better view of their development targets. In sum, the results
showed that the current development target of 2020 might entail technical risks
considering the rate of change toward the power efficiency observed in the past. It is
anticipated that either a Cray built hybrid system using Intel processors or an IBM built
Blue Gene architecture system using PowerPC processors will likely achieve the goal
between early 2021 and late 2022.
In addition, the results provided a systematic measure of technological change,
which can guide a decision on the new product target setting practice. Specifically, the
rate of change contains information not only about how much performance improvement
is expected to be competitive but also about how much technical capability should be
relinquished to achieve a specific level of technical capabilities in other attributes. One
can also utilize this information to anticipate the possible disruptions. As shown in the
HPC industry, the rate of change of the manycore system was found to be slightly faster
than that of the hybrid system. Although the arrival of the hybrid Exascale system is
forecasted earlier than a manycore system because of its current surpassing level of
performance, the fast rate of change of the manycore system implies that the performance
gap could be overcome, and Blue Gene architecture might accomplish the Exascale goal
earlier if the hybrid system development is unable to keep up with the expected progress.
V. CONCLUSION
In addition, the identified rates of change can be used to give insights into the
estimation of the future performance levels for new product development target setting
purposes. Supercomputer manufacturers may have their own roadmaps based on past
performance improvement, which has been mostly driven by the computation speed.
However, as noted above, the transition toward Exascale demands considering both
power and core efficiency. This would necessarily require the established roadmap to be
modified. There are three alternatives, i.e., hybrid, multicore, and manycore systems,
each heading toward the same goal. Which alternative, then, should manufacturers bet on to win the race?
In addition, how much performance improvement should be made by a certain point in
time to meet the planned delivery of the Exascale computer? The rate of change contains
information to better inform their decisions.
One can also utilize rates of change to anticipate the possible disruptions. For
example, the rate of change of manycore systems was found to be faster than that of
hybrid systems. Although the arrival of a hybrid Exascale system is forecasted earlier
because of its current level of performance being superior to manycore systems, this
indicates that the performance gap could be overcome, and the Blue Gene architecture
might accomplish the Exascale goal earlier within the possible forecast deviation if Cray
and Intel are unable to keep up with performance advancement expected from the given
rate of change while they work out their new relationship.
change toward the power efficiency observed in the past. It is forecasted that either a
Cray built hybrid system using Intel processors or an IBM built Blue Gene architecture
system using PowerPC processors will likely achieve the goal between early 2021 and
late 2022. This indicates that the improvement of power efficiency by a factor of 23
would require the maximum delay of four years from the past performance curve.
[Summary of Research Questions #1, #2, and #3 and the answer to each.]
5.4 LIMITATIONS
This study adopts an engineering design perspective that a product is a complex
assembly of interacting components. Consequently, the term segment is being used to
indicate a set of engineering designs having a similar mix of product attributes. Note that
the academic community in marketing uses this term with a broader implication of shared
needs and value propositions determined by a meaningful number of customers. In this
study, the identified targets and competitors are derived purely based on technical
specifications. This attribute-based approach can be limited in its ability to represent the
overall appeal of products, especially those for which holistic product features not
reflected in technical specifications, such as aesthetics, are important.
In a similar vein, the DEA measurement is based on the relative performance of the
products, and therefore the state-of-the-art products may be the most advanced ones but
may not be the most successful ones in the market. This indicates that the resulting rate of
change is more likely to be reflected by the technological progress than market desires.
In addition, the estimation of the release date from the proposed model doesn't take into
account externalities such as strategic postponement, financial conditions, market
acceptance levels, self-imposed delays due to product portfolio management, etc.
Therefore, the resulting release date should be understood as a baseline for the tactical
launch decision with respect to the product attributes concerned, rather than as the
bottom line of the decision. This also suggests that the estimated release date may have to be
further adjusted if the industrial market is less sensitive to the technological superiority
than to market strategies.
non-radial and/or non-oriented distance measures for estimating the frontier, taking into
account the furthest target, the closest target, restricted targets, a scale-efficient
target, or a target located in a predefined direction. Such measures could set more
realistic targets and thereby allow diverse patterns of technological advancement to be
explored; representative examples are listed below (a standard slack-based formulation is
sketched after the list).
Non-radial measures:
Russell measure
Geometric distance function model (GDF)
Hyperbolic model
Non-oriented measures:
Additive model
Range-adjusted model (RAM)
Slack-based model (SBM)
Proportional slack-based model (P-SBM)
Directional model
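As one concrete example from this list, the slack-based model can be written in its standard form from the DEA literature; this is the textbook formulation, not one developed in this dissertation. For a product o with inputs \(x_{io}\), \(i = 1,\dots,m\), and outputs \(y_{ro}\), \(r = 1,\dots,s\), evaluated against products \(j = 1,\dots,n\):

\[
\rho^{\ast} \;=\; \min_{\lambda,\, s^{-},\, s^{+}}\;
\frac{1 - \frac{1}{m}\sum_{i=1}^{m} s_{i}^{-}/x_{io}}
     {1 + \frac{1}{s}\sum_{r=1}^{s} s_{r}^{+}/y_{ro}}
\quad \text{subject to} \quad
x_{io} = \sum_{j=1}^{n} \lambda_{j} x_{ij} + s_{i}^{-},\;\;
y_{ro} = \sum_{j=1}^{n} \lambda_{j} y_{rj} - s_{r}^{+},\;\;
\lambda_{j},\, s_{i}^{-},\, s_{r}^{+} \ge 0.
\]

Because the slacks enter both the numerator and the denominator, the benchmark target reflects input excesses and output shortfalls simultaneously rather than a single radial direction, which is the flexibility the future-work discussion above points to.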
complement this aspect such that the rate of performance improvement can be obtained
from diverse levels of products, thereby yielding a risk distribution for each design
concept instead of a point estimate.
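A minimal sketch of this idea follows; it is illustrative only and not part of the dissertation's model. If local rates of change are available at several product levels, sampling over that spread turns a single release-date estimate into a distribution. All numbers below are hypothetical placeholders.

import math
import random

def years_to_close_gap(gap_factor, roc):
    # Years needed to close a multiplicative performance gap at a given
    # annual rate of change, assuming exponential progress.
    return math.log(gap_factor) / math.log(roc)

# Hypothetical inputs for illustration only.
gap_factor = 1.8                                    # assumed shortfall of a design concept vs. the frontier
roc_samples = [random.uniform(1.05, 1.25) for _ in range(10000)]  # assumed spread of local rates of change

years = sorted(years_to_close_gap(gap_factor, r) for r in roc_samples)
p10, p50, p90 = (years[int(q * (len(years) - 1))] for q in (0.10, 0.50, 0.90))
print("Forecast arrival (years ahead): 10th pct %.1f, median %.1f, 90th pct %.1f" % (p10, p50, p90))

Reporting percentiles of the resulting arrival times provides the kind of risk measure referred to above, rather than a single date.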