
Pankaj Ghemawat

Competition and Business Strategy in Historical Perspective

A review of theories of competition and business strategy over the last half-century reveals a fairly linear development of early work by
academics and consultants into efforts to understand the determinants
of industry profitability and
competitive position and, more recently, to add a time or historical
dimension to the analysis. The possible implications of the emergence
of a market for such ideas are also discussed.

“Strategy” is a term that can be traced back to the ancient Greeks, for
whom it meant a chief magistrate or a military commander in chief.
The use of the term in business, however, dates only to the twentieth
century, and its use in a self-consciously competitive context is even
more recent.

After providing some historical background, this essay focuses on how the evolution of ideas about business strategy was influenced by
competitive thinking in the second half of the twentieth century. The
review aims not to be comprehensive but, instead, to focus on some
key topical issues in applying competitive thinking to business
strategy.
Particular attention is paid to the role of three institutions—Harvard
Business School and two consulting firms, the Boston Consulting Group
and McKinsey & Company—in looking at the historical development
and diffusion of theories of business competition and strategy. The
essay concludes with some discussion of how the emergence of a
market for ideas in this broad domain is likely to affect future
developments in this area.

PANKAJ GHEMAWAT is the Jaime and Josefina Chua Tiampo Professor of Business Administration at Harvard Business School. The author has
drawn upon an earlier draft prepared by Dr. Peter Botticelli under his
supervision and has also benefited from helpful comments by Walter A.
Friedman, Thomas K. McCraw, and three referees.

Business History Review 76 (Spring 2002): 37–74. © 2002 by The President and Fellows of Harvard College.

Historical Background

Until the nineteenth century, the scope for applying (imperfectly) competitive thinking to business situations appeared to be limited:
intense competition had emerged in many lines of business, but
individual firms apparently often lacked the potential to have much of
an influence on competitive outcomes. Instead, in most lines of
business—with the exception of a few commodities in which international trade had developed—firms had an incentive to remain
small and to employ as little fixed capital as possible. It was in this era
that Adam Smith penned his famous description of market forces as an
“invisible hand” that was largely beyond the control of individual firms.

The scope for strategy as a way to control market forces and shape the
competitive environment started to become clearer in the second half
of the nineteenth century. In the United States, the building of the
railroads after 1850 led to the development of mass markets for the
first time. Along with improved access to capital and credit, mass
markets encouraged large-scale investment to exploit economies of
scale in production and economies of scope in distribution. In some
industries, Adam Smith’s “invisible hand” was gradually tamed by what
the historian Alfred D. Chandler Jr. has termed the “visible hand” of
professional managers. By the late nineteenth century, a new type of
firm began to emerge, first in the United States and then in Europe:
the vertically integrated, multidivisional (or “M-form”) corporation that
made large investments in manufacturing and marketing and in
management hierarchies to coordinate those functions. Over time, the
largest M-form companies managed to alter the competitive
environment within their industries and even across industry lines.1

The need for a formal approach to corporate strategy was first articulated by top executives of M-form corporations. Alfred Sloan (chief executive of General Motors from 1923 to 1946) devised a strategy that was explicitly based on the perceived strengths and weaknesses of its competitor, Ford.2 In the 1930s, Chester Barnard, a top executive with AT&T, argued that managers should pay especially close attention to “strategic factors,” which depend on “personal or organizational action.”3
1. Alfred D. Chandler Jr., Strategy and Structure (Cambridge, Mass., 1963) and Scale and Scope (Cambridge, Mass., 1990).
2. See Alfred P. Sloan Jr., My Years with General Motors (New York, 1963).
3. Chester I. Barnard, The Functions of the Executive (Cambridge, Mass., 1968; first published 1938), 204–5.

The organizational challenges involved in World War II were a vital stimulus to strategic thinking. The problem of allocating scarce
resources across the entire economy in wartime led to many
innovations in management science. New operations-research
techniques (e.g., linear programming) were devised, which paved the
way for the use of
quantitative analysis in formal strategic planning. In 1944, John von
Neumann and Oskar Morgenstern published their classic work, The
Theory of Games and Economic Behavior. This work essentially solved
the problem of zero-sum games (most military ones, from an
aggregate perspective) and framed the issues surrounding non-zero-
sum games (most business ones). Also, the concept of “learning
curves” became an increasingly important tool for planning. The
learning curve was first discovered in the military aircraft industry in
the 1920s and 1930s, where it was noticed that direct labor costs
tended to decrease by a constant percentage as the cumulative
quantity of aircraft produced doubled. Learning effects figured
prominently in wartime production planning efforts.
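In modern notation, this constant-percentage relationship is a power law. A minimal statement in LaTeX, using an illustrative 80 percent slope rather than any figure from the wartime data:

```latex
% Learning-curve law: C_1 is the cost of the first unit, n is cumulative
% output, and r is the fraction of cost retained per doubling
% (r = 0.8 for an "80 percent" curve, i.e., a 20 percent decline).
C_n = C_1 \, n^{\log_2 r}
% Doubling cumulative output multiplies unit cost by exactly r:
C_{2n} = C_1 (2n)^{\log_2 r} = 2^{\log_2 r} \, C_n = r \, C_n
```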

World War II also encouraged the mindset of using formal strategic thinking to guide management decisions. Thus, Peter Drucker argued
that “management is not just passive, adaptive behavior; it means
taking action to make the desired results come to pass.” He noted that
economic theory had long treated markets as impersonal forces,
beyond the control of individual entrepreneurs and organizations. But,
in the age of M-form corporations, managing “implies responsibility for
attempting to shape the economic environment, for planning, initiating
and carrying through changes in that economic environment, for
constantly pushing back the limitations of economic circumstances on
the enterprise’s freedom of action.”4 This insight became the rationale
for business strategy—that, by consciously using formal planning, a
company could exert some positive control over market forces.

However, these insights on the nature of strategy largely lay fallow for
the decade after World War II because wartime destruction led to
excess demand, which limited competition as firms rushed to expand
capacity. Given the enormous job of rebuilding Europe and much of
Asia, it was not until the late 1950s and 1960s that many large
multinational corporations were forced to consider global competition
as a factor in planning. In addition, the wartime disruption of foreign
multinationals enabled U.S. companies to profit from the postwar boom
without effective competitors in many industries.

4. Peter Drucker, The Practice of Management (New York, 1954), 11.

A more direct bridge to the development of strategic concepts for business applications was provided by interservice competition in the U.S. military after World War II. In this period, American military
leaders found themselves debating the arrangements that would best
protect legitimate competition between military services while
maintaining the needed integration of strategic and tactical planning.
Many argued that the Army, Navy, Marines, and Air Force would be
more efficient if they were unified into a single organization. As the
debate raged, Philip Selznick, a sociologist, noted that the Navy
Department “emerged as the defender of subtle institutional values
and tried many times to formulate the distinctive characteristics of the
various services.” In essence, the “Navy spokesmen attempted to
distinguish between the Army as a ‘manpower’ organization and the
Navy as a finely adjusted system of technical, engineering skills —a
‘machine-centered’ organization. Faced with what it perceived as a
mortal threat, the Navy became highly self-conscious about its
distinctive competence.”5 The concept of “distinctive competence” had
great resonance for strategic management, as we will see next.

Academic Underpinnings

The Second Industrial Revolution witnessed the founding of many elite business schools in the United States, beginning with the Wharton
School in 1881. Harvard Business School, founded in 1908, was one of
the first to promote the idea that managers should be trained to think
strategically and not just to act as functional administrators. Beginning
in 1912, Harvard offered a required second-year course in “business
policy,” which was designed to integrate the knowledge gained in
functional areas like accounting, operations, and finance, thereby
giving students a broader perspective on the strategic problems faced
by corporate executives. A course description from 1917 claimed that
“an analysis of any business problem shows not only its relation to
other problems in the same group, but also the intimate connection of
groups. Few problems in business are purely intra-departmental.” It
was also stipulated that the policies of each department must maintain
a “balance in accord with the underlying policies of the business as a
whole.”6

5. Philip Selznick, Leadership in Administration (Evanston, Ill., 1957), 49–50.
6. Official Register of Harvard University, 29 Mar. 1917, 42–3.

In the early 1950s, two professors of business policy at Harvard, George Albert Smith Jr. and C. Roland Christensen, taught students to question whether a firm’s strategy matched its competitive environment. In reading cases, students were instructed to ask: do a company’s policies “fit together into a program that effectively meets the requirements of the competitive situation”?7 Students were told to
address this problem by asking: “How is the whole industry doing? Is it
growing and expanding? Or is it static; or declining?” Then, having
“sized up” the competitive environment, the student was to ask: “On
what basis must any one company compete with the others in this
particular industry? At what kinds of things does it have to be
especially competent, in order to compete?”8
In the late 1950s, another Harvard business policy professor, Kenneth
Andrews, built on this thinking by arguing that “every business
organization, every subunit of organization, and even every individual
[ought to] have a clearly defined set of purposes or goals which keeps
it moving in a deliberately chosen direction and prevents its drifting in
undesired directions” (emphasis added). As shown in the case of Alfred
Sloan at General Motors, “the primary function of the general
manager, over time, is supervision of the continuous process of
determining the nature of the enterprise and setting, revising and
attempting to achieve its goals.”9 The motivation for these conclusions
was supplied by an industry note and company cases that Andrews
prepared on Swiss watchmakers, which uncovered significant
differences in performance associated with their respective strategies
for competing in that industry.10 This format of combining industry
notes with company cases, which had been initiated at Harvard
Business School by a professor of manufacturing, John MacLean,
became the norm in Harvard’s business policy course. In practice, an
industry note was often followed by multiple cases on one or several
companies with the objective, inter alia, of economizing on students’
preparation time.11

By the 1960s, classroom discussions in the business policy course focused on matching a company’s “strengths” and “weaknesses”—its
distinctive competence—with the “opportunities” and “threats” (or
risks) it faced in the marketplace. This framework, which came to be
referred to by the acronym SWOT, was a major step forward in bringing
explicitly competitive thinking to bear on questions of strategy.
Kenneth Andrews put these elements together in a way that became
particularly well known. (See Figure 1.)

7. George Albert Smith Jr. and C. Roland Christensen, Suggestions to Instructors on Policy Formulation (Chicago, 1951), 3–4.
8. George Albert Smith Jr., Policy Formulation and Administration (Chicago, 1951), 14.
9. Kenneth R. Andrews, The Concept of Corporate Strategy (Homewood, Ill., 1971), 23.
10. See Part I of Edmund P. Learned, C. Roland Christensen, and Kenneth Andrews, Problems of General Management (Homewood, Ill., 1961).
11. Interview with Kenneth Andrews, 2 Apr. 1997.

Figure 1. Andrews’s Strategy Framework. (Source: Kenneth Andrews, The Concept of Corporate Strategy, rev. ed. [Homewood, Ill., 1980], 69.)

In 1963, a business policy conference was held at Harvard that helped diffuse the SWOT concept in
academia and in management practice. Attendance was heavy, and
yet the popularity of SWOT—which was still used by many firms,
including Wal-Mart, in the 1990s—did not bring closure to the problem
of actually defining a firm’s distinctive competence. To solve this
problem, strategists had to decide which aspects of the firm were
“enduring and unchanging over relatively long periods of time” and
which were “necessarily more responsive to changes in the
marketplace and the pressures of other environmental forces.” This
distinction was crucial because “the strategic decision is concerned with the long-term development of the enterprise” (emphasis added).12 When strategy choices were analyzed from a long-range perspective, the idea of
“distinctive competence” took on added importance because of the
risks involved in most long-run investments. Thus, if the opportunities
a firm was pursuing appeared “to outrun [its] present distinctive
competence,” then the strategist had to consider a firm’s “willingness
to gamble that the latter can be built up to the required level.”13

The debate over a firm’s “willingness to gamble” its distinctive competence in pursuit of opportunity continued in the 1960s, fueled by
a booming stock market and corporate strategies that were heavily
geared toward growth and diversification. In a classic 1960 article,
“Marketing Myopia,” Theodore Levitt was sharply critical of firms that
seemed to
focus too much on delivering a product, presumably based on its
distinctive competence, rather than consciously serving the customer.
Levitt thus argued that when companies fail, “it usually means that the
product fails to adapt to the constantly changing patterns of consumer
needs and tastes, to new and modified marketing institutions and
practices, or to product developments in complementary industries.”14

However, another leading strategist, Igor Ansoff, argued that Levitt was asking companies to take unnecessary risks by investing in new
products that might not fit the firm’s distinctive competence. Ansoff
argued that a company should first ask whether a new product had a
“common thread” with its existing products. He defined the common
thread as a firm’s “mission” or its commitment to exploit an existing
need in the market as a whole.15 Ansoff noted that “sometimes the
customer is erroneously identified as the common thread of a firm’s
business. In reality, a given type of customer will frequently have a
range of unrelated product missions or needs.”16 Thus, for a firm to
maintain its strategic focus, Ansoff suggested certain categories for
defining the common thread in its business/corporate strategy. (See
Figure 2.) Ansoff and others also focused on translating the logic of the
SWOT framework into a series of concrete questions that needed to be
answered in the development of strategies.17

In the 1960s, diversification and technological changes increased the complexity of the strategic situations that many companies faced, and intensified their need for more sophisticated measures that could be used to evaluate and compare many different types of businesses.

12. Andrews, The Concept of Corporate Strategy, 29.
13. Ibid., 100.
14. Theodore Levitt, “Marketing Myopia,” Harvard Business Review (July/Aug. 1960): 52.
15. Igor Ansoff, Corporate Strategy (New York, 1965), 106–9.
16. Ibid., 105–8.
17. Michael E. Porter, “Industrial Organization and the Evolution of Concepts for Strategic Planning,” in T. H. Naylor, ed., Corporate Strategy (New York, 1982), 184.

Figure 2. Ansoff’s Product/Mission Matrix as adapted by Henry Mintzberg. (Source: Henry Mintzberg, “Generic Strategies,” in Advances in Strategic Management, vol. 5 [Greenwich, Conn., 1988], 2. For the original, see Igor Ansoff, Corporate Strategy [New York, 1965], 128.)

Since business policy groups at Harvard and elsewhere
remained strongly wedded to the idea that strategies could only be
analyzed on a case-by-case basis in order to account for the unique
characteristics of every business, corporations turned elsewhere to
satisfy their craving for standardized approaches to strategy making.18
A study by the Stanford Research Institute indicated that a majority of
large U.S. companies had set up formal planning departments by
1963.19 Some of these internal efforts were quite elaborate. General
Electric (GE) is a bellwether example: it used Harvard faculty
extensively in its executive education programs, but it also
independently developed an elaborate, computer-based “Profitability
Optimization Model” (PROM) in the first half of the 1960s that
appeared to explain a significant fraction of the variation in the return
on investment afforded by its various businesses.20 Over time, like
many other companies, GE also sought the help of private consulting
firms. While consultants made important contributions in many areas,
such as planning, forecasting, logistics, and long-range research and
development (R&D), the following section traces their early impact on
mainstream strategic thinking.

The Rise of Strategy Consultants

The 1960s and early 1970s witnessed the rise of a number of strategy consulting practices. In particular, the Boston Consulting Group (BCG), founded in 1963, had a major impact on the field by applying quantitative research to problems of business and corporate strategy.

18. Adam M. Brandenburger, Michael E. Porter, and Nicolaj Siggelkow, “Competition and Strategy: The Emergence of a Field,” paper presented at McArthur Symposium, Harvard Business School, 9 Oct. 1996, 3–4.
19. Stanford Research Institute, Planning in Business (Menlo Park, 1963).
20. Sidney E. Schoeffler, Robert D. Buzzell, and Donald F. Heany, “Impact of Strategic Planning on Profit Performance,” Harvard Business Review (Mar./Apr. 1974): 139.

BCG’s founder, Bruce Henderson, believed that a consultant’s job was
to find “meaningful quantitative relationships” between a company
and its chosen markets.21 In his words, “good strategy must be based
primarily on logic, not . . . on experience derived from intuition.”22
Indeed, Henderson was utterly convinced that economic theory would
someday lead to a set of universal rules for strategy. As he explained,
“[I]n most firms strategy tends to be intuitive and based upon
traditional patterns of behavior which have been successful in the past.
. . . [However,] in growth industries or in a changing environment, this
kind of strategy is rarely adequate. The accelerating rate of change is
producing a business world in which customary managerial habits and
organization are increasingly inadequate.”23

In order to help executives make effective strategic decisions, BCG drew on the existing knowledge base in academia: one of its first
employees, Seymour Tilles, was formerly a lecturer in Harvard’s
business policy course. However, it also struck off in a new direction
that Bruce Henderson is said to have described as “the business of
selling powerful oversimplifications.”24 In fact, BCG came to be known
as a “strategy boutique” because its business was largely based,
directly or indirectly, on a single concept: the experience curve
(discussed below). The value of using a single concept came from the
fact that “in nearly all problem solving there is a universe of alternative
choices, most of
which must be discarded without more than cursory attention.”
Hence, some “frame of reference is needed to screen the . . .
relevance of data, methodology, and implicit value judgments”
involved in any strategy decision. Given that decision making is
necessarily a complex process, the most useful “frame of reference is
the concept. Conceptual thinking is the skeleton or the framework on
which all other choices are sorted out.”25

BCG and the Experience Curve. BCG first developed its version of the
learning curve—what it labeled the “experience curve”—in 1965–66. According to Bruce Henderson, “it was developed to try to explain price and competitive behavior in the extremely fast growing segments” of industries for clients like Texas Instruments and Black and Decker.26

21. Interview with Seymour Tilles, 24 Oct. 1996. Tilles credits Henderson for recognizing the competitiveness of Japanese industry at a time, in the late 1960s, when few Americans believed that Japan or any other country could compete successfully against American industry.
22. Bruce Henderson, The Logic of Business Strategy (Cambridge, Mass., 1984), 10.
23. Bruce D. Henderson, Henderson on Corporate Strategy (Cambridge, Mass., 1979), 6–7.
24. Interview with Seymour Tilles, 24 Oct. 1996.
25. Henderson, Henderson on Corporate Strategy, 41.

As BCG consultants studied these industries, they naturally asked why “one competitor outperforms another (assuming
comparable management skills and resources)? Are there basic rules
for success? There, indeed, appear to be rules for success, and they
relate to the impact of accumulated experience on competitors’ costs,
industry prices and the interrelation between the two.”27

The firm’s standard claim for the experience curve was that for each
cumulative doubling of experience, total costs would decline by
roughly 20 to 30 percent due to economies of scale, organizational
learning, and technological innovation. The strategic implication of the
experience curve, according to BCG, was that for a given product
segment, “the producer . . . who has made the most units should have
the lowest costs and the highest profits.”28 Bruce Henderson claimed
that with the experience curve “the stability of competitive
relationships should be predictable, the value of market share change
should be calculable, [and] the effects of growth rate should [also] be
calculable.”29
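The arithmetic behind this claim can be sketched in a few lines of Python; the 80 percent slope, cost level, and the two competitors’ cumulative volumes below are illustrative assumptions, not BCG figures:

```python
import math

def unit_cost(cumulative_units, first_unit_cost=100.0, slope=0.8):
    """Experience-curve cost: each doubling of cumulative output
    multiplies unit cost by `slope` (0.8 = a 20% decline per doubling)."""
    return first_unit_cost * cumulative_units ** math.log2(slope)

# Two hypothetical competitors on the same 80% curve:
leader_cost = unit_cost(400_000)    # has produced 400,000 units
follower_cost = unit_cost(100_000)  # has produced 100,000 units

# The leader has doubled twice more than the follower (400k = 4 x 100k),
# so its unit cost should be 0.8**2 = 64% of the follower's.
print(f"leader {leader_cost:.2f}, follower {follower_cost:.2f}, "
      f"ratio {leader_cost / follower_cost:.3f}")  # ratio ~0.640
```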

From the Experience Curve to Portfolio Analysis. By the early 1970s, the experience curve had led to another “powerful oversimplification”
by BCG: the “Growth-Share Matrix,” which was the first use of what
came to be known as “portfolio analysis.” (See Figure 3.) The idea was
that after experience curves were drawn for each of a diversified
company’s business units, their relative potential as areas for
investment could be compared by plotting them on the grid.

BCG’s basic strategy recommendation was to maintain a balance between “cash cows” (i.e., mature businesses) and “stars,” while
allocating some resources to feed “question marks,” which were
potential stars. “Dogs” were to be sold off. In more sophisticated
language, a BCG vice president explained that “since the producer
with the largest stable market share eventually has the lowest costs
and greatest profits, it becomes vital to have a dominant market share
in as many products as possible. However, market share in slowly
growing products can be gained only by reducing the share of
competitors who are likely to fight back.” If a product market is
growing rapidly, “a company can gain share by securing most of the growth. Thus, while competitors grow, the company can grow even faster and emerge with a dominant share when growth eventually slows.”30
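The quadrant logic behind these recommendations reduces to two cuts: market growth against a growth threshold and relative share against parity. Here is a toy classifier in Python; the 10 percent growth cutoff and the relative-share cutoff of 1.0 are illustrative conventions, not values prescribed in the sources quoted here:

```python
def bcg_quadrant(market_growth, relative_share,
                 growth_cutoff=0.10, share_cutoff=1.0):
    """Place a business unit on the growth-share grid.

    market_growth  -- annual market growth rate (0.10 = 10%)
    relative_share -- unit's share divided by its largest rival's share
    """
    high_growth = market_growth >= growth_cutoff
    high_share = relative_share >= share_cutoff
    if high_growth and high_share:
        return "star"
    if high_growth:
        return "question mark"   # potential star: feed selectively
    if high_share:
        return "cash cow"        # mature business: harvest cash
    return "dog"                 # candidate for sale

# A hypothetical four-unit portfolio:
for name, growth, share in [("unit A", 0.18, 2.5), ("unit B", 0.03, 3.0),
                            ("unit C", 0.22, 0.4), ("unit D", 0.02, 0.3)]:
    print(name, "->", bcg_quadrant(growth, share))
```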
that, unlike earlier versions of the “learning curve,” BCG’s experience
curve “encompasses all costs (including capital, administrative,
research and marketing) and traces them through technological
displacement and product evolution. It is also based on cash flow rates,
not accounting allocation.” Bruce D. Henderson, preface to Boston
Consulting Group, Perspectives on Experience (Boston, 1972; first
published 1968).
27
Boston Consulting Group, Perspectives on Experience, 7.
28
Patrick Conley, “Experience Curves as a Planning Tool,” in Boston Consulting Group pamphlet (1970):
15.
29
Bruce Henderson, preface, Boston Consulting Group, Perspectives on Experience.

Figure 3. BCG’s Growth-Share Matrix. (Source: Adapted from George


Stalk Jr. and Thomas M. Hout, Competing Against Time [New York,
1990], 12.)

The company can grow even faster and emerge with a dominant share
when growth eventually slows.”30 Strategic Business Units and Portfolio
Analysis. Numerous other consulting firms came up with their own
matrices for portfolio analysis
at roughly the same time as BCG. McKinsey & Company’s effort, for
instance, began in 1968 when Fred Borch, the CEO of GE, asked
McKinsey to examine his company’s corporate structure, which
consisted of two hundred profit centers and one hundred and forty-five
departments arranged around ten groups. The boundaries for these
units had been defined according to theories of financial control, which
the McKinsey consultants judged to be inadequate. They argued that
the firm should be organized on more strategic lines, with greater
concern for external conditions than internal controls and a more
future-oriented approach than was possible using measures of past
financial performance. The study recommended a formal strategic
planning system that would divide the company into “natural business
units,” which Borch later renamed “strategic business units,” or SBUs.
GE’s executives followed this advice, which took two years to put into
effect.

However, in 1971, a GE corporate executive asked McKinsey for help in evaluating the strategic plans that were being written by the company’s many SBUs. GE had already examined the possibility of using the BCG growth-share matrix to decide the fate of its SBUs, but its top management had decided then that they could not set priorities on the basis of just two performance measures. And so, after studying the problem for three months, a McKinsey team produced what came to be known as the GE/McKinsey nine-block matrix. The nine-block matrix used about a dozen measures to screen for industry attractiveness, or profitability, and another dozen to screen for competitive position, although the weights to be attached to them were not specified.31 (See Figure 4.)

Figure 4. Industry Attractiveness–Business Strength Matrix. (Source: Arnoldo C. Hax and Nicolas S. Majluf, Strategic Management: An Integrative Perspective [Englewood Cliffs, N.J., 1984], 156.)
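Mechanically, the nine-block approach collapses two checklists into two composite scores and buckets each into thirds. The sketch below shows that arithmetic in Python; the individual measures, the 1-to-9 rating scale, and the equal-weighting default are invented for illustration, since, as noted above, the weights were left unspecified:

```python
def composite(ratings, weights=None):
    """Weighted average of 1-9 ratings on a checklist of measures.
    Defaults to equal weights, since no official weights were specified."""
    if weights is None:
        weights = [1.0] * len(ratings)
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

def third(score):
    """Bucket a 1-9 composite score into low / medium / high."""
    return "low" if score < 3.67 else "medium" if score < 6.33 else "high"

# Hypothetical ratings for one SBU (1 = worst, 9 = best):
attractiveness = composite([7, 5, 8, 6])  # e.g., market size, growth, margins, stability
strength = composite([4, 6, 5, 3])        # e.g., share, cost position, brand, technology

print("nine-block cell:", (third(attractiveness), third(strength)))
# -> ('high', 'medium') for these made-up ratings
```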

Another, more quantitative, approach to portfolio planning was developed at roughly the same time under the aegis of the “Profit
Impact of Market Strategies” (PIMS) program, which was the
multicompany successor to the PROM program that GE had started a
decade earlier. By the mid-1970s, PIMS contained data on six hundred
and twenty SBUs drawn from fifty-seven diversified corporations.32
These data were used, in the first instance, to explore the
determinants of returns on investment by regressing historical returns
on variables such as market share, product quality, investment
intensity, marketing and R&D expenditures, and several dozen others.
The regressions established what were supposed to be benchmarks for
the potential performance of SBUs with particular characteristics
against which their actual performance might be compared.
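A stylized version of that regression exercise, in Python with numpy: the variables echo those named above, but the data are fabricated, so only the mechanics (regress ROI on structural variables, then read off a benchmark “par” ROI for a given profile) mirror the PIMS approach:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical SBUs (PIMS itself covered 620 by the mid-1970s)

# Fabricated explanatory variables in the spirit of the PIMS measures:
market_share = rng.uniform(0.02, 0.45, n)
quality = rng.uniform(0.0, 1.0, n)        # relative product-quality index
inv_intensity = rng.uniform(0.2, 0.9, n)  # investment / sales

# Fabricated "true" relationship plus noise, used only to generate data:
roi = (0.05 + 0.40 * market_share + 0.10 * quality
       - 0.15 * inv_intensity + rng.normal(0, 0.03, n))

# Regress historical ROI on the variables (with an intercept):
X = np.column_stack([np.ones(n), market_share, quality, inv_intensity])
coef, *_ = np.linalg.lstsq(X, roi, rcond=None)

# "Par" ROI: the benchmark predicted for an SBU with given characteristics,
# against which its actual ROI could be compared.
profile = np.array([1.0, 0.30, 0.7, 0.5])
print("coefficients:", np.round(coef, 3))
print("par ROI:", round(float(profile @ coef), 3))
```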
31. Interview with Mike Allen, 4 Apr. 1997.
32. Sidney E. Schoeffler, Robert D. Buzzell, and Donald F. Heany, “Impact of Strategic Planning on Profit Performance,” Harvard Business Review (Mar./Apr. 1974): 139–40, 144–5.

In all these applications, segmenting diversified corporations into SBUs became an important precursor to analyses of economic
performance.33 This forced “de-averaging” of cost and performance
numbers that had previously been calculated at more aggregated
levels. In addition, it was thought that, with such approaches,
“strategic thinking was appropriately pushed ‘down the line’ to
managers closer to the particular industry and its competitive
conditions.”34

In the 1970s, virtually every major consulting firm used some type of
portfolio analysis to generate strategy recommendations. The concept
became especially popular after the oil crisis of 1973 forced many
large corporations to rethink, if not discard, their existing long-range
plans. A McKinsey consultant noted that “the sudden quadrupling of
energy costs [due to the OPEC embargo], followed by a recession and
rumors of impending capital crisis, [meant that] setting long-term
growth and diversification objectives was suddenly an exercise in
irrelevance.” Now, strategic planning meant “sorting out winners and
losers, setting priorities, and husbanding capital.” In a climate where
“product and geographic markets were depressed and capital was
presumed to be short,”35 portfolio analysis gave executives a ready
excuse to get rid of poorly performing business units while directing
most available funds to the “stars.” Thus, a survey of the “Fortune
500” industrial companies concluded that, by 1979, 45 percent of them
had introduced portfolio planning techniques to some extent.36

Emerging Problems. Somewhat ironically, the very macroeconomic conditions that (initially) increased the popularity of portfolio analysis
also began to raise questions about the experience curve. The high
inflation and excess capacity resulting from downturns in demand
induced by the “oil shocks” of 1973 and 1979 disrupted historical
experience curves in many industries, suggesting that Bruce
Henderson had oversold the concept when he circulated a pamphlet in
1974 entitled “Why Costs Go Down Forever.” Another problem with the
experience curve was pinpointed in a classic 1974 article by William
Abernathy and Kenneth Wayne, which argued that “the consequence
of intensively pursuing a cost-minimization strategy [e.g., one based
on the experience curve] is a reduced ability to make innovative changes and to respond to those introduced by competitors.”37

33. See Walter Kiechel III, “Corporate Strategists under Fire,” Fortune (27 Dec. 1982).
34. Frederick W. Gluck and Stephen P. Kaufman, “Using the Strategic Planning Framework,” in McKinsey internal document, “Readings in Strategy” (1979), 3–4.
35. J. Quincy Hunsicker, “Strategic Planning: A Chinese Dinner?” McKinsey staff paper (Dec. 1978), 3.
36. Philippe Haspeslagh, “Portfolio Planning: Uses and Limits,” Harvard Business Review (Jan./Feb. 1982): 59.

Abernathy and Wayne pointed to the case of Henry Ford, whose obsession with lowering costs
had left him vulnerable to Alfred Sloan’s strategy of product innovation
in the car business. The concept of the experience curve was also
criticized for treating cost reductions as automatic rather than
something to be managed, for assuming that most experience could be
kept proprietary instead of spilling over to competitors, for mixing up
different sources of cost reduction with very different strategic
implications (e.g., learning versus scale versus exogenous technical
progress), and for leading to stalemates as more than one competitor
pursued the same generic success factor.38
In the late 1970s, portfolio analysis came under attack as well. One
problem was that, in many cases, the strategic recommendations for
an SBU were very sensitive to the specific portfolio-analytic technique
employed. For instance, an academic study applied four different
portfolio techniques to a group of fifteen SBUs owned by the same
Fortune 500 corporation; it found that only one of the fifteen SBUs fell
into the same portion of each of the four matrices, and only five of the
fifteen were classified similarly in terms of three of the four matrices.39
This was only a slightly higher level of concordance than would have
been expected if the fifteen SBUs had been randomly classified four
separate times!
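That comparison with random classification can be checked with a short simulation. The sketch below makes the simplifying (and counterfactual) assumption that each technique assigns an SBU uniformly at random to one of four quadrants; the actual four matrices differed in size, so this is only an order-of-magnitude check:

```python
import random

random.seed(1)
TRIALS, SBUS, TECHNIQUES, CELLS = 100_000, 15, 4, 4

agree_total = 0
for _ in range(TRIALS):
    for _ in range(SBUS):
        labels = [random.randrange(CELLS) for _ in range(TECHNIQUES)]
        if len(set(labels)) == 1:  # all four techniques place it identically
            agree_total += 1

# Analytically, P(all four agree) = (1/4)**3 = 1/64, so the expected
# number of fully concordant SBUs is 15/64, i.e., about 0.23 of 15 --
# against which the study's observed 1 of 15 is indeed only slightly higher.
print("mean fully concordant SBUs per trial:", agree_total / TRIALS)
```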

An even more serious problem with portfolio analysis was that even if
one could figure out the “right” technique to employ, the mechanical
determination of resource allocation patterns on the basis of historical
performance data was inherently problematic. Some consultants
acknowledged as much. In 1979, Fred Gluck, the head of McKinsey’s
strategic management practice, ventured the opinion that “the heavy
dependence on ‘packaged’ techniques [has] frequently resulted in
nothing more than a tightening up, or fine tuning, of current initiatives
within the traditionally configured businesses.” Even worse, technique-
based strategies “rarely beat existing competition” and often leave
businesses “vulnerable to unexpected thrusts from companies not
previously considered competitors.”40 Gluck and his colleagues sought
to loosen some of the constraints imposed by mechanistic approaches, proposing that successful companies devise progressive strategies to take them through four basic stages. Each stage requires these companies to grapple with increasing levels of dynamism, multidimensionality, and uncertainty, and they therefore become less amenable to routine quantitative analysis. (See Figure 5.)

37. William J. Abernathy and Kenneth Wayne, “Limits of the Learning Curve,” Harvard Business Review (Sept./Oct. 1974): 111.
38. Pankaj Ghemawat, “Building Strategy on the Experience Curve,” Harvard Business Review (Mar./Apr. 1985).
39. Yoram Wind, Vijay Mahajan, and Donald J. Swire, “An Empirical Comparison of Standardized Portfolio Models,” Journal of Marketing 47 (Spring 1983): 89–99. The statistical analysis of their results is based on an unpublished draft by Pankaj Ghemawat.
40. Gluck and Kaufman, “Using the Strategic Planning Framework,” 5–6.

Figure 5. Four Phases of Strategy. (Source: Adapted from Frederick W. Gluck, Stephen P. Kaufman, and A. Steven Walleck, “The Evolution of Strategic Management,” McKinsey staff paper [Oct. 1978], 4. Reproduced in modified form in Gluck, Kaufman, and Walleck, “Strategic Management for Competitive Advantage,” Harvard Business Review [July/Aug. 1980], 157.)

The most stinging attack on the analytical techniques popularized by strategy consultants was offered by two Harvard professors of
production, Robert Hayes and William Abernathy, in 1980. They argued
that “these new principles [of management], despite their
sophistication and widespread usefulness, encourage a preference for
(1) analytic detachment rather than the insight that comes from ‘hands
on experience’ and (2) short-term cost reduction rather than long-term
development of technological competitiveness.”41 Hayes and
Abernathy in particular criticized portfolio analysis as a tool that led
managers to focus on minimizing financial risks rather than on
investing in new opportunities that require a long-term commitment of
resources.42 They went on to compare U.S. firms unfavorably with
Japanese and, especially, European ones.

These and other criticisms gradually diminished the popularity of portfolio analysis. However, its rise and fall did have a lasting influence on subsequent work on competition and business strategy because it highlighted the need for more careful analysis of the two basic dimensions of portfolio-analytic grids: industry attractiveness and competitive position.
41. Robert H. Hayes and William J. Abernathy, “Managing Our Way to Economic Decline,” Harvard Business Review (July/Aug. 1980): 68.
42. Ibid., 71.

Figure 6. Two Basic Dimensions of Strategy.

Although these two dimensions had been identified earlier—in the General Survey Outline developed by McKinsey & Company for internal use in 1952, for example—portfolio analysis
underscored this particular method of analyzing the effects of
competition on business performance. U.S. managers, in particular,
proved avid consumers of insights about competition because the
exposure of much of U.S. industry to competitive forces increased
dramatically during the 1960s and 1970s. One economist roughly
calculated that heightened import competition, antitrust actions, and
deregulation increased the share of the U.S. economy that was subject
to effective competition from 56 percent in 1958 to 77 percent by
1980.43 The next two sections
describe attempts to unbundle these two basic dimensions of strategy.
(See Figure 6.)

Unbundling Industry Attractiveness


Thus far, we have made little mention of economists’ contributions to
thinking about competitive strategy. On the one hand, economic
theory emphasizes the role of competitive forces in determining market outcomes. On the other hand, economists have often
overlooked the importance of strategy because, since Adam Smith,
they have traditionally focused on the case of perfect competition: an
idealized situation in which large numbers of equally able competitors
drive an industry’s aggregate economic profits (i.e., profits in excess of
the opportunity cost of the capital employed) down to zero. Under
perfect competition, individual competitors are straitjacketed, in the
sense of having a choice between producing efficiently and pricing at
cost or shutting down.

43. William G. Shepherd, “Causes of Increased Competition in the U.S. Economy, 1939–1980,” Review of Economics and Statistics (Nov. 1982): 619.

Some economists did address the polar opposite of perfect competition, namely pure monopoly, with Antoine Cournot providing the first definitive analysis—as well as analysis of oligopoly under specific assumptions—in 1838.44 Work on monopoly yielded some useful
insights, such as the expectation of an inverse relation between the
profitability of a monopolized industry and the price elasticity of the
demand it faced—an insight that has remained central in modern
marketing. Nevertheless, the assumption of monopoly obviously took
things to the other, equally unfortunate, extreme by ruling out all
directly competitive forces in the behavior of firms.

This state of affairs began to change at an applied level in the 1930s, as a number of economists, particularly those associated with the “Harvard school,” began to argue that the structure of many industries might permit incumbent firms to earn positive economic profits over long periods of time.45 Edward S. Mason argued that the structure of an industry would determine the conduct of buyers and sellers—their
performance along such dimensions as profitability, efficiency, and
innovativeness.46 Joe Bain, also of the Harvard Economics Department,
advanced the research program of uncovering the general relation
between industry structure and performance through empirical work
focused on a limited number of structural variables—most notably, in
two studies published in the 1950s. The first study found that the
profitability of manufacturing industries in which the eight largest
competitors accounted for more than 70 percent of sales was nearly
twice that of industries with eight-firm concentration ratios of less than
70 percent.47 The second study explained how, in certain industries,
“established sellers can persistently raise their prices above a
competitive level without attracting new firms to enter the industry.”48
Bain identified three basic barriers to entry: (1) an absolute cost
advantage by an established firm (an enforceable patent, for instance);
(2) a significant degree of product differentiation; and (3) economies of
scale.
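Bain’s screening variable itself is simple arithmetic: sort the firms’ shares of industry sales, sum the largest eight, and compare the total to the 70 percent line. A sketch in Python with an invented share distribution:

```python
def eight_firm_ratio(shares):
    """Eight-firm concentration ratio: combined share of the 8 largest sellers."""
    return sum(sorted(shares, reverse=True)[:8])

# Hypothetical industry: firms' shares of sales, summing to 1.0:
shares = [0.22, 0.15, 0.12, 0.09, 0.07, 0.05, 0.04, 0.03] + [0.023] * 10

cr8 = eight_firm_ratio(shares)
label = "concentrated (CR8 over 70%)" if cr8 > 0.70 else "unconcentrated"
print(f"CR8 = {cr8:.0%} -> {label}")  # CR8 = 77% -> concentrated
```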

Bain’s insights led to the rapid growth of a new subfield of economics, known as industrial organization, or “IO” for short, that explored the structural reasons why some industries were more profitable than others. By the mid-1970s, several hundred empirical studies in IO had been carried out. While the relation between structural variables and performance turned out to be more complicated than had been suggested earlier,49 these studies reinforced the idea that some industries are inherently much more profitable or “attractive” than others. (See Figure 7.)

44. Antoine A. Cournot, Recherches sur les Principes Mathematiques de la Theorie des Richesses (Paris, 1838), sects. 26, 27; and Jurg Niehans, A History of Economic Theory (Baltimore, 1990), 180–2.
45. Economists associated with the Chicago School generally doubted the empirical importance of this possibility—except as an artifact of regulatory distortions.
46. Mason’s seminal work was “Price and Production Policies of Large-Scale Enterprise,” American Economic Review (Mar. 1939): 61–4.
47. Joe S. Bain, “Relation of Profit Rate to Industry Concentration: American Manufacturing, 1936–1940,” Quarterly Journal of Economics (Aug. 1951): 293–324.
48. Joe S. Bain, Barriers to New Competition (Cambridge, Mass., 1956), 3 n.

Figure 7. Differences in the Profitability of Selected Industries, 1971–1990. (Source: Anita M. McGahan, “Selected Profitability Data on U.S. Industries and Companies,” Harvard Business School Publishing, No. 792-066 [1992].)

Harvard Business School’s Business Policy Group was aware of these insights from across the Charles River: excerpts from Bain’s book on
barriers to entry were even assigned as required readings for the
business policy course in the early 1960s. But the immediate impact of
IO on business strategy was limited. Although many problems can be
discerned in retrospect, two seem to have been particularly important.
First, IO economists focused on issues of public policy rather than
business policy: they concerned themselves with the minimization
rather than the maximization of “excess” profits. Second, the emphasis
of Bain and his successors on using a limited list of structural variables
to explain industry profitability shortchanged the richness of modern
industrial competition (“conduct” within the IO paradigm).

Both of these problems with applying classical IO to business-strategic concerns about industry attractiveness were addressed by Michael
Porter, a graduate of the Ph.D. program offered jointly by Harvard’s
Business School and its Economics Department. In 1974, Porter
prepared a “Note on the Structural Analysis of Industries,” which
presented his first attempt to turn IO on its head by focusing on the
business policy objective of profit maximization, rather than on the
public policy objective of minimizing “excess” profits.50 In 1980, he released his landmark book, Competitive Strategy, which owed much of its success to Porter’s elaborate framework for the structural analysis of industry attractiveness.

49. See, for instance, Harvey J. Goldschmid, H. Michael Mann, and J. Fred Weston, eds., Industrial Concentration: The New Learning (Boston, 1974).
50. Michael E. Porter, “Note on the Structural Analysis of Industries,” Harvard Business School Teaching Note, no. 376-054 (1983).

Figure 8 reproduces Porter’s “five forces”
approach to understanding the attractiveness of an industry
environment for the “average” competitor within it. In developing this
approach to strategy, Porter noted the trade-offs involved in using a
“framework” rather than a more formal statistical “model.” In his
words, a framework “encompasses many variables and seeks to
capture much of the complexity of actual competition. Frameworks
identify the relevant variables and the questions that the user must
answer in order to develop conclusions tailored to a particular industry
and company” (emphasis added).51 In academic terms, the drawback
of frameworks such
as the five forces is that they often range beyond the empirical
evidence that is available. In practice, managers routinely have to
consider much longer lists of variables than are embedded in the
relatively simple quantitative models used by economists. In the case
of the five forces, a survey of empirical literature in the late 1980s —
more than a decade after Porter first developed his framework —
revealed that only a few points were strongly supported by the
empirical literature generated by the IO field.52 (These points appear
in bold print in Figure 8.) This does not mean that the other points are
in conflict with IO research; rather, they reflect the experience of
strategy practitioners, including Porter himself.

In managerial terms, one of the breakthroughs built into Porter’s framework was that it emphasized “extended competition” for value
rather than just competition between existing rivals. For this reason,
and because it was easy to put into effect, the five-forces framework
came to be used widely by managers and consultants. Subsequent
years witnessed refinements and extensions, such as the
rearrangement and incorporation of additional variables (e.g., import
competition and multimarket contact) into the determinants of the
intensity of five forces. The biggest conceptual advance, however, was
one proposed in the mid-1990s by two strategists concerned with
game theory, Adam Brandenburger and Barry Nalebuff, who argued
that the process of creating value in the marketplace involved “four
types of players — customers, suppliers, competitors, and
complementors.”53 By a firm’s “complementors,” they meant other firms from which customers buy complementary products and services, or to which suppliers sell complementary resources.

51. Michael E. Porter, “Toward a Dynamic Theory of Strategy,” in Richard P. Rumelt, Dan E. Schendel, and David J. Teece, eds., Fundamental Issues in Strategy (Boston, 1994), 427–9.
52. Richard Schmalensee, “Inter-Industry Studies of Structure and Performance,” in Richard Schmalensee and R. D. Willig, eds., Handbook of Industrial Organization, vol. 2 (Amsterdam, 1989).
53. Adam M. Brandenburger and Barry J. Nalebuff, Co-opetition (New York, 1996).

Figure 8. Porter’s Five-Forces Framework for Industry Analysis.

As Brandenburger and Nalebuff pointed out,
the practical importance of this group of players was evident in the
amount of attention being paid in business to the subject of strategic
alliances and partnerships. Their Value Net graphic depicted this more
complete description of the business landscape —emphasizing, in
particular, the
equal roles played by competition and complementarity. (See Figure
9.)

Figure 9. The Value Net. (Source: Adam M. Brandenburger and Barry J. Nalebuff, Co-opetition [New York, 1996], 17.)

Other strategists, however, argued that some very limiting assumptions were built into such frameworks. Thus, Kevin Coyne and
Somu Subramanyam of McKinsey argued that the Porter framework
made three tacit but crucial assumptions: First, that an industry
consists of a set of unrelated buyers, sellers, substitutes, and
competitors that interact at arm’s length. Second, that wealth will
accrue to players that are able to erect barriers against competitors
and potential entrants, or, in other words, that the source of value is
structural advantage. Third, that uncertainty is sufficiently low that you
can accurately predict participants’ behavior and choose a strategy
accordingly.54

Unbundling Competitive Position

The second basic dimension of business strategy highlighted by Figure 6 is competitive position. While differences in the average profitability
of industries can be large, as indicated in Figure 7, differences in
profitability within industries can be even larger.55 Indeed, in some
cases firms in unattractive industries can significantly outperform the
averages for more profitable industries, as indicated in Figure 10. In
addition, one might argue that most businesses in most industry
environments are better placed to try to alter their own competitive
positions, rather than the overall attractiveness of the industry in
which they operate. For both these reasons, competitive position has
been of great interest to business strategists. (See Figure 10.)
54. Kevin P. Coyne and Somu Subramanyam, “Bringing Discipline to Strategy,” McKinsey Quarterly 4 (1996): 16.
55. See, for instance, Richard P. Rumelt, “How Much Does Industry Matter?” Strategic Management Journal (Mar. 1991): 167–85.

Figure 10. Profitability within the Steel Industry, 1973–1992. (Source:
David Collis and Pankaj Ghemawat, “Industry Analysis: Understanding
Industry Structure and Dynamics,” in Liam Fahey and Robert M.
Randall, The Portable MBA in Strategy [New York, 1994], 174.)

Traditional academic research has made a number of contributions to our understanding of positioning within industries, starting in the
1970s. The IO-based literature on strategic groups, initiated at Harvard
by Michael Hunt’s work on broad-line versus narrow-line strategies in
the major home appliance industry, suggested that competitors within
particular industries could be grouped in terms of their competitive
strategies in ways that helped explain their interactions and relative
profitability.56 A stream of work at Purdue explored the heterogeneity
of competitive positions, strategies, and performance in brewing and
other industries with a combination of statistical analysis and
qualitative case studies. More recently, several academic points of
view about the sources of performance differences within industries
have emerged — views that are explored more fully in the next
section. However, it does seem accurate to say that the work that had
the most impact on the strategic thinking of business about
competitive positions in the late 1970s and the 1980s was more
pragmatic than academic in its intent, with consultants once again
playing a leading role.
56. See Michael S. Hunt, “Competition in the Major Home Appliance Industry,” DBA diss., Harvard University, 1972. A theoretical foundation for strategic groups was provided by Richard E. Caves and Michael E. Porter, “From Entry Barriers to Mobility Barriers,” Quarterly Journal of Economics (Nov. 1977): 667–75.

Competitive Cost Analysis. With the rise of the experience curve in the 1960s, most strategists
turned to some type of cost analysis as the basis for assessing
competitive positions. The interest in competitive cost analysis
survived the declining popularity of the experience curve in the 1970s
but was reshaped by it in two important ways. First, more attention
was paid to disaggregating businesses into their component activities
or processes and to thinking about how costs in a particular activity
might be shared across businesses. Second, strategists greatly
enriched their menu of cost drivers to include more than just
experience.

The disaggregation of businesses into their component activities seems to have been motivated, in part, by early attempts to “fix” the
experience curve to deal with the rising real prices of many raw
materials in the 1970s.57 The proposed fix involved splitting costs into
the costs of purchased materials and “cost added” (value added minus
profit margins) and redefining the experience curve as applying only to
the latter.
The natural next step was to disaggregate a business’s entire cost
structure into activities whose costs might be expected to behave in
interestingly different ways. As in the case of portfolio analysis, the
idea of splitting businesses into component activities diffused quickly
among consultants and their clients in the 1970s. A template for
activity analysis that became especially prominent is reproduced in
Figure 11.
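A minimal sketch of that proposed fix, assuming a power-law experience curve as before; the 60/40 split between cost added and materials, the slope, and all prices are illustrative numbers:

```python
import math

def experience_cost(cumulative_units, first_unit_cost, slope=0.8):
    """Power-law experience curve: cost falls by (1 - slope) per doubling."""
    return first_unit_cost * cumulative_units ** math.log2(slope)

def total_unit_cost(cumulative_units, materials_price):
    """The proposed fix: apply the curve only to "cost added" (value added
    minus margin) and let purchased materials follow their own price."""
    cost_added = experience_cost(cumulative_units, first_unit_cost=60.0)
    return materials_price + cost_added

# With cumulative volume up 8x but materials prices up 50 percent,
# only the cost-added component declines along the curve:
print(total_unit_cost(1, materials_price=40.0))  # 100.0 at the outset
print(total_unit_cost(8, materials_price=60.0))  # 60 + 60 * 0.8**3 = 90.72
```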

Activity analysis also suggested a way of getting around the “freestanding” conception of individual businesses built into the
concept of SBUs. One persistent problem in splitting diversified
corporations into SBUs was that, with the exception of pure
conglomerates, SBUs were often related in ways that meant they
shared elements of their cost structure with each other. Consulting
firms, particularly Bain and Strategic Planning Associates, both of
whose founders had worked on a BCG study of Texas Instruments that
was supposed to highlight the problem of shared costs, began to
emphasize the development of what came to be called “field maps”:
matrices that identified shared costs at the level of individual activities
that were linked across businesses, as illustrated below.58

57. This is based on my experience working at BCG in the late 1970s.
58. Walter Kiechel III, “The Decline of the Experience Curve,” Fortune (5 Oct. 1981).

Figure 11. McKinsey’s Business System. (Source: Adapted from Carter F. Bales, P. C. Chatterjee, Donald J. Gogel, and Anupam P. Puri, “Competitive Cost Analysis,” McKinsey staff paper [Jan. 1980], 6.)

The second important development in competitive cost analysis over the late 1970s and early 1980s involved enrichment of the menu of cost drivers considered by strategists. Scale effects, while officially lumped into the experience curve, had long been looked at independently in particular cases; even more specific treatment of the effects of scale was now forced by activity analysis that might indicate, for example, that advertising costs were driven by national scale, whereas distribution costs were driven by local or regional scale. Field maps underscored the potential importance of economies (or diseconomies)
of scope across businesses rather than scale within a business. The
effects of capacity utilization on costs were dramatized by
macroeconomic downturns in the wake of the two oil shocks. The
globalization of competition in many industries highlighted the location
of activities as a main driver of competitors’ cost positions, and so on.
Thus, an influential mid-1980s discussion of cost analysis enumerated
ten distinct cost drivers.59

Customer Analysis. Increased sophistication in analyzing relative costs was accompanied by increased attention to customers in the process of analyzing competitive position. Customers had never been entirely invisible: even in the heyday of experience curve analysis, market segmentation had been an essential strategic tool—although it was
link between share and cost advantage rather than for any analytic
purpose. But, according to Walker Lewis, the founder of Strategic
Planning Associates, “To those who defended in classic experience-
curve strategy, about 80% of the businesses in the world were
commodities.”60 This started to change in the 1970s.

Increased attention to customer analysis involved reconsideration of the idea that attaining low costs and offering customers low prices was
always the best way to compete. More attention came to be paid to
differentiated ways of competing that might let a business command a
price premium by improving customers’ performance or reducing their
(other) costs. While (product) differentiation had always occupied
center stage in marketing, the idea of looking at it in a cross-
functional, competitive context that also accounted for relative costs
apparently started to emerge in business strategy in the 1970s. Thus,
a member of Harvard’s Business Policy group recalls using the
distinction between cost and differentiation, which was implicit in two
of the three sources of entry barriers identified by Joe Bain in the
1950s (see above), to organize classroom discussions in the early
1970s.61 And McKinsey reportedly started to apply the distinction
between cost and “value” to client studies later in that decade.62 The
first published accounts, in Michael Porter’s book Competitive Strategy
and in a Harvard Business Review article by William Hall, appeared in
1980.63
59
Michael E. Porter, Competitive Advantage (New York, 1985), ch. 3.
60
Quoted in Kiechel, “The Decline of the Experience Curve.”

Both Hall and Porter argued that successful companies usually had to
choose to compete either on the basis of low costs or by differentiating
products through quality and performance characteristics. Porter also
identified a focus option that cut across these two “generic strategies”
and linked these strategic options to his work on industry analysis:

In some industries, there are no opportunities for focus or differentiation—it’s solely a cost game—and this is true in a number of
bulk commodities. In other industries, cost is relatively unimportant
because of buyer and product characteristics.64

Many other strategists agreed that, except in such special cases, the
analysis of competitive position had to cover both relative cost and
differentiation. There was continuing debate, however, about the
proposition, explicitly put forth by Porter, that businesses “stuck in the
middle” should be expected to perform less well than businesses that
had targeted lower cost or more differentiated positions. Others saw
optimal positioning as a choice from a continuum of trade-offs between
cost and differentiation, rather than as a choice between two mutually
exclusive (and extreme) generic strategies.

Porter’s book, published in 1985, suggested analyzing cost and differentiation via the “value chain,” a template that is reproduced in
Figure 12. While Porter’s value chain bore an obvious resemblance to
McKinsey’s business system, his discussion of it emphasized the
importance of regrouping functions into the activities actually
performed to produce, market, deliver, and support products, thinking
about links between activities, and connecting the value chain to the
determinants of competitive position in a specific way:

Competitive advantage cannot be understood by looking at a firm as a whole. It stems from the many discrete activities a firm performs in
designing, producing, marketing, delivering, and supporting its
product. Each of these activities can contribute to a firm’s relative cost
position and create a basis for differentiation. . . . The value chain
disaggregates a firm into its strategically relevant activities in order to
understand the behavior of costs and the existing and potential
sources of differentiation.65
61
Interview with Hugo Uyterhoeven, 25 Apr. 1997.
62
Interview with Fred Gluck, 18 Feb. 1997.
63
Michael E. Porter, Competitive Strategy (New York, 1980), ch. 2; and William K. Hall, “Survival Strategies
in a Hostile Environment,” Harvard Business Review (Sept./Oct. 1980): 78–81.
64
Porter, Competitive Strategy, 41–4.

Figure 12. Porter’s Value Chain. (Source: Michael E. Porter, Competitive Advantage [New York, 1985], 37.)

Putting customer analysis and cost analysis together was promoted not only by disaggregating businesses into activities (or processes) but
also by splitting customers into segments based on cost-to-serve as
well as customer needs. Such “de-averaging” of customers was often
said to expose situations in which 20 percent of a business’s customers
accounted for more than 80 percent, or even 100 percent, of its
profits.66
It also suggested new customer segmentation criteria. Thus, Bain &
Company built a thriving “customer retention” practice, starting in the
late 1980s, on the basis of the higher costs of capturing new
customers as opposed to retaining existing ones.
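The arithmetic behind such claims can be shown with a toy example. In the Python sketch below, the ten customer profit figures are invented; the point is the mechanics of sorting customers and “de-averaging” their profitability.

# Ten hypothetical customers, sorted from most to least profitable.
profits = sorted([900, 650, 400, 300, 120, 60, 20, -40, -180, -400], reverse=True)

total = sum(profits)                      # 1,830 in this made-up example
top_fifth = profits[: len(profits) // 5]  # the best 20 percent of customers
print(f"Top 20% of customers earn {sum(top_fifth) / total:.0%} of profit")   # ~85%

# Loss-making customers are why the best customers can account for more than
# 100 percent of profits:
positive = sum(p for p in profits if p > 0)
print(f"Profitable customers alone earn {positive / total:.0%} of profit")   # ~134%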

Competitive Dynamics and History

The development of business systems, value chains, and similar templates naturally refocused attention on the problem of coordinating
across a large number of choices linked in cross section that was
highlighted, in a cross-functional context, in the original description of
Harvard Business School’s course on business policy. However, such
attention tended to crowd out consideration of longitudinal links
between choices, which was emphasized by Selznick’s work on
organizational commitments and distinctive competences and evident
in Andrews’s focus on the aspects of firm behavior that were “enduring
and unchanging over relatively long periods of time.”
65
Porter, Competitive Advantage, 33, 37.
66
Talk by Arnoldo Hax at MIT on 29 Apr. 1997.
The need to return the time dimension to predominantly static ideas
about competitive position was neatly illustrated by the techniques for
“value-based strategic management” that began to be promoted by
consulting firms like SPA and Marakon, among others, in the 1980s.
The development and diffusion of value-based techniques,
which connected positioning measures to shareholder value using
spreadsheet models of discounted cash flows, was driven by increases
in capital market pressures in the 1980s, particularly in the United
States: merger and acquisition activity soared; hostile takeovers of
even very large companies became far more common; many
companies restructured to avoid them; levels of leverage generally
increased; and there was creeping institutionalization of equity
holdings.67 Early value-based work focused on the spread between a
company or division’s rate of return and its cost of capital as the basis
for “solving” the old corporate strategy problem of resource allocation
across businesses. It quickly became clear, however, that estimated
valuations were very sensitive to two other, more dynamic, drivers of
value: the length of the time horizon over which positive spreads
(competitive advantage) could be sustained on the assets in place, and
the (profitable) reinvestment opportunities or growth options afforded
by a strategy.68
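The mechanics can be sketched in a few lines of Python. The parameter values, and the simplifying assumption that the spread is competed away entirely after the horizon, are illustrative inventions rather than features of any particular consulting tool.

def business_value(capital, roic, wacc, growth, horizon_years):
    """Discounted cash flow value of a business earning roic on a growing
    capital base, with the spread over wacc assumed gone after the horizon."""
    pv = 0.0
    for t in range(1, horizon_years + 1):
        nopat = capital * roic            # operating profit on opening capital
        reinvestment = capital * growth   # funds next period's growth
        pv += (nopat - reinvestment) / (1 + wacc) ** t
        capital += reinvestment
    # Once the spread is competed away, the business is worth its capital base.
    return pv + capital / (1 + wacc) ** horizon_years

ten = business_value(capital=100, roic=0.15, wacc=0.10, growth=0.05, horizon_years=10)
twenty = business_value(capital=100, roic=0.15, wacc=0.10, growth=0.05, horizon_years=20)
print(f"10-year advantage: {ten:.0f}  20-year advantage: {twenty:.0f}")  # ~137 vs. ~161

# Doubling the sustainability horizon moves the valuation materially, which is
# why estimates proved so sensitive to these dynamic drivers.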
At the same time, analyses of business performance started to
underscore the treacherousness of assuming that current profitability
and growth could automatically be sustained. Thus, my analysis of 700
business units revealed that nine-tenths of the profitability differential
between businesses that were initially above average and those that
were initially below average vanished over a ten-year period.69 (See
Figure 13.)
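Under the (strong) auxiliary assumption of geometric decay, the figure implies an annual persistence rate, as the fragment below computes; the decay model is an illustration, not part of the original analysis.

# If nine-tenths of an initial differential disappears over ten years, the
# implied annual persistence rate r satisfies r**10 = 0.1.
annual_persistence = 0.1 ** (1 / 10)
print(f"{annual_persistence:.2f}")  # ~0.79: about a fifth of any remaining
                                    # advantage eroded in each year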

The unsustainability of most competitive advantages was generally thought to reflect the “Red Queen” effect: the idea that as
organizations struggled to adapt to competitive pressures, they would
become stronger competitors, sending the overall level of competition
spiraling upward and eliminating most, if not all, competitive
advantages.70 In the late 1980s and early 1990s, both academics and
consultants started to wrestle with the dynamic question of how
businesses might create and sustain competitive advantage in the
presence of competitors who could not all be counted on to remain
inert all the time.
67
F. M. Scherer and David Ross, Industrial Market Structure and Economic Performance (Boston, 1990), ch.
5.
68
Benjamin C. Esty, “Note on Value Drivers,” Harvard Business School Teaching Note, no. 297-082 (1997).
69
Pankaj Ghemawat, “Sustainable Advantage,” Harvard Business Review (Sept./Oct. 1986): 53–8, and
Commitment (New York, 1991), ch. 5.
70
The first economic citation of the “Red Queen” effect is generally attributed to L. Van Valen. See L. Van
Valen, “A New Evolutionary Law,” Evolutionary Theory 1 (1973): 1–30. The literary reference is to Lewis
Carroll’s Alice’s Adventures in Wonderland and Through the Looking Glass (New York, 1981; first published
1865–71), in which the Red Queen tells Alice: “here, you see, it takes all the running you can do, to keep in
the same place. If you want to get somewhere else, you must run at least twice as fast . . .” (p. 127).
Figure 13. The Limits to Sustainability.

From an academic perspective, many of the consultants’ recommendations regarding dynamics amounted to no more, and no
less, than the injunction to try to be smarter than the competition (for
example, by focusing on customers’ future needs while competitors
remained focused on their current needs). The most thoughtful
exception that had a truly dynamic orientation was work by George
Stalk and others at BCG on time-based competition. In an article
published in the Harvard Business Review in 1988, Stalk argued:
“Today the leading edge of competition is the combination of fast
response and increasing variety. Companies without these advantages
are slipping into commodity-like competition, where customers buy
mainly on price.”71 Stalk expanded on this argument in a book
coauthored with Thomas Hout in 1990, according to which time-based
competitors “[c]reate more information and share it more
spontaneously. For the information technologist, information is a fluid
asset, a data stream. But to the manager of a business . . . information
is fuzzy and takes many forms —knowing a
customer’s special needs, seeing where the market is heading . . .”72
71
George Stalk Jr., “Time—The Next Source of Competitive Advantage,” Harvard Business Review
(July/Aug. 1988).

Time-based competition quickly came to account for a substantial fraction of BCG’s business. Eventually, however, its limitations also
became apparent. In 1993, George Stalk and Alan Webber wrote that
some Japanese companies had become so dedicated to shortening
their product-development cycles that they had created a “strategic
treadmill on which companies were caught, condemned to run faster
and faster but always staying in the same place competitively.”73 In
particular, Japanese electronics manufacturers had reached a
remarkable level of efficiency, but it was an “efficiency that [did] not
meet or create needs for any customer.”74

For some, like Stalk himself, the lesson from this and similar episodes
was that there were no sustainable advantages: “Strategy can never
be a constant. . . . Strategy is and always has been a moving target.” 75
However, others, primarily academics, continued to work in the 1990s
on explanations of differences in performance that would continue to
be useful even after they were widely grasped.76 This academic work
exploits, in different ways, the idea that history matters, that history
affects both the opportunities available to competitors and the
effectiveness with which competitors can exploit them. Such work can
be seen as an attempt to add a historical or time dimension, involving
stickiness and rigidities, to the two basic dimensions of early portfolio
analytic grids: industry attractiveness and competitive position. The
rest of this section briefly reviews four strands of academic inquiry that
embodied new approaches to thinking about the time dimension.

Game Theory. Game theory is the mathematical study of interactions between players whose payoffs depend on each other’s choices. A
general theory of zero-sum games, in which one player’s gain is
exactly equal to other players’ losses, was supplied by John von
Neumann and Oskar Morgenstern in their pathbreaking book The
Theory of Games
and Economic Behavior.77 There is no general theory of non-zero-sum
games, which afford opportunities for cooperation as well as competition, but research in this area does supply a language and a set of
logical tools for analyzing the outcome that is likely —the equilibrium
point—given specific rules, payoff structures, and beliefs if players all
behave “rationally.”78
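The flavor of equilibrium reasoning can be conveyed with a deliberately small example. The 2x2 pricing game below, with invented payoffs, is searched by brute force for pure-strategy Nash equilibria; it is a sketch of the concept, not a model drawn from the literature.

from itertools import product

actions = ["high price", "low price"]
# payoffs[(row, col)] = (row player's profit, column player's profit)
payoffs = {
    (0, 0): (10, 10),   # both price high
    (0, 1): (2, 12),    # the column player undercuts
    (1, 0): (12, 2),    # the row player undercuts
    (1, 1): (4, 4),     # both price low
}

for r, c in product(range(2), repeat=2):
    row_best = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in range(2))
    col_best = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in range(2))
    if row_best and col_best:  # neither player gains by deviating alone
        print(f"Equilibrium: ({actions[r]}, {actions[c]}) -> {payoffs[(r, c)]}")

# Only (low price, low price) survives: individually rational choices forgo the
# jointly better high-price outcome, competition as a prisoner's dilemma.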
72
Stalk and Hout, Competing Against Time, 179.
73
George Stalk Jr. and Alan M. Webber, “Japan’s Dark Side of Time,” Harvard Business
Review (July/Aug. 1993): 94.
74
Ibid., 98–9.
75
Ibid., 101–2.
76
This test of stability is in the spirit of the game theorists, John von Neumann and Oskar Morgenstern. See
their Theory of Games and Economic Behavior (Princeton, 1944).
77
Ibid.

Economists trained in IO started to turn to game theory in the late


1970s as a way of studying competitor dynamics. Since the early
1980s, well over half of all the IO articles published in the leading
economics journals have been concerned with some aspect of non-
zero-sum game theory.79 By the end of the 1980s alone, competition to
invest in tangible and intangible assets, strategic control of
information, horizontal mergers, network competition and product
standardization, contracting, and numerous other settings in which
interactive effects were apt to be important had all been modeled
using game theory.80 The effort continues.

Game-theory IO models tend, despite their diversity, to share an emphasis “on the dynamics of strategic actions and in particular on the
role of commitment.”81 The emphasis on commitment or irreversibility
grows out of game theory’s focus on interactive effects. From this
perspective, a strategic move is one that “purposefully limits your
freedom of action. . . . It changes other players’ expectations about
your future responses, and you can turn this to your advantage. Others
know that when you have the freedom to act, you also have the
freedom to capitulate.”82
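The logic of the quotation can be made concrete with a toy entry game solved by backward induction; the payoffs below are invented assumptions, not taken from Dixit and Nalebuff.

def incumbent_response(committed):
    """Incumbent's best reply once entry has occurred: fight or accommodate."""
    fight = 3 if committed else 1   # an irreversible investment (say, capacity)
    accommodate = 2                 # makes fighting pay better than sharing
    return "fight" if fight > accommodate else "accommodate"

def entrant_payoff(committed):
    # Entering against a fight loses money; accommodated entry is profitable.
    return -2 if incumbent_response(committed) == "fight" else 4

for committed in (False, True):
    enters = entrant_payoff(committed) > 0   # staying out yields 0
    print(f"committed={committed}: entrant {'enters' if enters else 'stays out'}")

# Without commitment, the threat to fight is empty talk and entry occurs; with
# it, the incumbent has limited its own freedom to capitulate and entry is
# deterred, just as the quotation suggests.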
The formalism of game theory is accompanied by several significant
limitations: the sensitivity of the predictions of game-theory models to
details, the limited number of variables considered in any one model,
and assumptions of rationality that are often heroic, to name just a
few.83 Game theory’s empirical base is also limited. The existing
evidence suggests, nonetheless, that it merits attention in analyses of
interactions between small numbers of firms. While game theory often
formalizes preexisting intuitions, it can sometimes yield unanticipated,
and even counterintuitive, predictions.
78
There is also a branch of game theory that provides upper bounds on
players’ payoffs if freewheeling interactions between them are
allowed. See Brandenburger and Nalebuff’s Co-opetition for applications
of this idea to business.
79
Pankaj Ghemawat, Games Businesses Play (Cambridge, Mass., 1997), 3.
80
For a late 1980s survey of game-theory IO, consult Carl Shapiro, “The Theory of Business Strategy,”
RAND Journal of Economics (Spring 1989): 125–37.
81
Ibid., 127.
82
Avinash K. Dixit and Barry J. Nalebuff, Thinking Strategically (New York, 1991), 120. Their logic is based
on Thomas C. Schelling’s pioneering book, The Strategy of Conflict (Cambridge, Mass., 1979; first
published in 1960).
83
For a detailed critique, see Richard P. Rumelt, Dan Schendel, and David J. Teece, “Strategic Management
and Economics,” Strategic Management Journal (Winter 1991): 5–29. For further discussion, see
Ghemawat, Games Businesses Play, chap. 1.

Thus, game-theory modeling of shrinkage in, and exit from, declining industries yielded the prediction
that, other things being equal, initial size should hurt survivability. This
surprising prediction turns out to enjoy some empirical support!84
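The intuition behind this prediction can be sketched with invented numbers: in a steadily shrinking market, a smaller firm can hang on alone for longer, which is what makes the larger firm the predicted first mover out.

def years_of_solo_viability(capacity, demand0, decline=0.10):
    """Years for which a lone survivor of the firm's size could still fill
    its capacity as the market shrinks. All numbers are hypothetical."""
    years, demand = 0, demand0
    while demand >= capacity:
        years += 1
        demand *= 1 - decline   # the market shrinks each year
    return years

print("large firm:", years_of_solo_viability(capacity=80, demand0=100))  # 3 years
print("small firm:", years_of_solo_viability(capacity=30, demand0=100))  # 12 years

# The small firm's longer solo horizon makes its threat to stay credible, so
# backward induction points to the larger firm exiting first, other things
# being equal.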

The Resource-Based View of the Firm. The idea of looking at companies in terms of their resource endowments is an old one, but it
was revived in the 1980s in an article by Birger Wernerfelt.85
Wernerfelt noted: “The traditional concept of strategy [put forth by
Kenneth Andrews in 1971] is phrased in terms of the resource position
(strengths
and weaknesses) of the firm, whereas most of our formal economic
tools operate on the product market side.”86 While Wernerfelt also
described resources and products as “two sides of the same coin,”
other adherents to what has come to be called the resource-based
view (RBV) of the firm argue that superior product market positions
rest on the ownership of scarce, firm-specific resources.

Resource-based theorists also seek to distinguish their perspective on sustained superior performance from that of IO economics by stressing
the intrinsic inimitability of scarce, valuable resources for a variety of
reasons: the ability to obtain a particular resource may be dependent
on unique, historical circumstances that competitors cannot recreate;
the link between the resources possessed by a firm and its sustained
competitive advantage may be causally ambiguous or poorly
understood; or the resource responsible for the advantage may be
socially complex and therefore “beyond the ability of firms to
systematically manage and influence” (e.g., corporate culture).87
Game-theory IO, in contrast, has tended to focus on less extreme
situations in which imitation of superior resources may be feasible but
uneconomical (e.g., because of preemption).

Resource-based theorists therefore have traditionally tended to see firms as stuck with a few key resources, which they must deploy across
product markets in ways that maximize total profits rather than profits
in individual markets. This insight animated C. K. Prahalad and Gary
Hamel’s influential article, “The Core Competence of the Corporation,”
which attacked the SBU system of management for focusing on
products rather than on underlying core competencies in a way that
arguably bounded innovation, imprisoned resources, and led to a
decline in investment: “In the short run, a company’s competitiveness
derives from the price/performance attributes of current products. . . .
In the long run, competitiveness derives from the . . . core
competencies that spawn unanticipated new products.”88
84
For a discussion of the original models (by Ghemawat and Nalebuff) and the supporting empirical
evidence, consult Ghemawat, Games Businesses Play, ch. 5.
85
In the same year, Richard Rumelt also noted that the strategic firm “is characterized by a bundle of
linked and idiosyncratic resources and resource conversion activities.” See his chapter, “Towards a
Strategic Theory of the Firm,” in R. B. Lamb, ed., Competitive Strategic Management (Englewood Cliffs,
N.J., 1984), 561.
86
Birger Wernerfelt, “A Resource-based View of the Firm,” Strategic Management Journal 5 (1984): 171. In
addition to citing Andrews’s 1971 book, The Concept of Corporate Strategy, Wernerfelt referred to the
pioneering work of Edith Penrose, The Theory of the Growth of the Firm (Oxford, 1959).
87
Jay B. Barney, “Firm Resources and Sustained Competitive Advantage,” Journal of Management (March
1991): 107–11.

To many resource-based theorists, the core competencies that Prahalad and Hamel celebrate are simply a neologism for the
resources that the RBV has emphasized all along. Whether the same
can be said about another, more distinct, line of research on dynamic
capabilities that emerged in the 1990s is an open question.

Dynamic Capabilities. In the 1990s, a number of strategists have tried to extend the resource-based view by explaining how firm-specific
capabilities to perform activities better than competitors can be built
and redeployed over long periods of time. The dynamic-capabilities
view of the firm differs from the RBV because capabilities are to be
developed rather than taken as given, as described more fully in a
pioneering article by David Teece, Gary Pisano, and Amy Shuen:
If control over scarce resources is the source of economic profits, then
it follows that issues such as skill acquisition and learning become
fundamental strategic issues. It is this second dimension,
encompassing skill acquisition, learning, and capability accumulation
that . . . [we] refer to as “the dynamic capabilities approach.” . . . Rents
are viewed as not only resulting from uncertainty . . . but also from
directed activities by firms which create differentiated capabilities, and
from managerial efforts to strategically deploy these assets in
coordinated ways.89

Taking dynamic capabilities seriously also implies that one of the most strategic
aspects of the firm is “the way things are done in the firm, or what
might be referred to as its ‘routines,’ or patterns of current practice
and learning.”90 As a result, “research in such areas as management
of R&D, product and process development, manufacturing, and human
resources tend to be quite relevant [to strategy].”91 Research in these
areas supplies some specific content to the idea that strategy
execution is important.
88
C. K. Prahalad and Gary Hamel, “The Core Competence of the Corporation,” Harvard
Business Review (May/June 1990): 81.
89
David J. Teece, Gary Pisano, and Amy Shuen, “Dynamic Capabilities and Strategic
Management,” mimeo (June 1992): 12–13.
90
David Teece and Gary Pisano, “The Dynamic Capabilities of Firms: An Introduction,” Industrial and
Corporate Change 3 (1994): 540–1. The idea of “routines” as a unit of analysis was pioneered by Richard
R. Nelson and Sidney G. Winter, An Evolutionary Theory of Economic Change (Cambridge, Mass., 1982).
91
Teece, Pisano, and Shuen, “Dynamic Capabilities and Strategic Management,” 2.

The process of capability development is thought to have several interesting attributes. First, it is generally “path dependent.” In other
words, “a firm’s previous investments and its repertoire of routines (its
‘history’) constrains its future behavior . . . because learning tends to
be local.” Second, capability development also tends to be subject to
long time lags. And third, the “embeddedness” of capabilities in
organizations can convert them into rigidities or sources of inertia —
particularly when attempts are being made to create new,
nontraditional capabilities.92

Commitment. A final, historically based approach to thinking about the dynamics of competition that is intimately related to the three
discussed above focuses on commitment or irreversibility: the
constraints imposed by past choices on present ones.93 The managerial
logic of focusing on decisions that involve significant levels of
commitment has been articulated particularly well by a practicing
manager:
A decision to build the Edsel or Mustang (or locate your new factory in
Orlando or Yakima) shouldn’t be made hastily; nor without plenty of
inputs. . . . [But there is] no point in taking three weeks to make a
decision that can be made in three seconds—and corrected
inexpensively later if wrong. The whole organization may be out of
business while you oscillate between baby-blue or buffalo-brown coffee
cups.94

Commitments to durable, firm-specific resources and capabilities that cannot easily be bought or sold account for the persistence observed
in most strategies over time. Modern IO theory also flags such
commitments as being responsible for the sustained profit differences
among product market competitors: thought experiments as well as
formal models indicate that, in the absence of the frictions implied by
commitment, hit-and-run entry would lead to perfectly competitive
(zero-profit) outcomes even without large numbers of competitors.95 A
final attraction of commitment as a way of organizing thinking about
competitor dynamics is that it can be integrated with other modes of
strategic analysis described earlier in this essay, as indicated in Figure 14.
92
Dorothy Leonard-Barton, “Core Capabilities and Core Rigidities: A Paradox in Managing New Product
Development,” Strategic Management Journal (1992): 111–25.
93
For a book-length discussion of commitments, see Pankaj Ghemawat, Commitment (New York, 1991).
For connections to the other modes of dynamic analysis discussed in this section, see chs. 4 and 5 of
Pankaj Ghemawat, Strategy and the Business Landscape (Reading, Mass., 1999).
94
Robert Townsend, Up the Organization (New York, 1970).
95
See, for instance, William J. Baumol, John C. Panzar, and Robert D. Willig, Contestable Markets and the
Theory of Industry Structure (New York, 1982) for an analysis of the economic implications of zero
commitment; and Richard E. Caves, “Economic Analysis and the Quest for Competitive Advantage,”
American Economic Review (May 1984): 127–32, for comments on the implications for business strategy.

Figure 14. Commitment and Strategy. (Source: Adapted from Pankaj Ghemawat, “Resources and Strategy: An IO Perspective,” Harvard Business School working paper [1991], 20, Fig. 3.)

The ideas behind the figure are very simple. Traditional positioning
concepts focus on optimizing the fit between product market activities
on the right-hand side of the figure. The bold arrows running from left
to right indicate that choices about which activities to perform, and
how to perform them, are constrained by capabilities and resources
that can be varied only in the long run and that are responsible for
sustained profit differences between competitors. The two fainter
arrows that feed back from right to left capture the ways in which the
activities the organization performs and the resource commitments it
makes affect its future opportunity set or capabilities. Finally, the bold
arrow that runs from capabilities to resource commitments serves as a
reminder that the terms on which an organization can commit
resources depend, in part, on the capabilities it has built up.

Markets for Ideas at the Millennium96

A teleology was implicit in the discussion in the last three sections: starting in the 1970s, strategists first sought to probe the two basic
dimensions of early portfolio-analytic grids, industry attractiveness and
competitive position, and then to add a time or historical dimension to
the analysis. Dynamic thinking along the lines discussed in the
previous section and others (e.g., options thinking, systems dynamics,
disruptive technologies and change management, to cite just four
other areas of enquiry) has absorbed the bulk of academic strategists’
attention in the last fifteen-plus years. But when one looks at the
practice of strategy in the late 1990s, this simple narrative is
complicated by an apparent profusion of tools and ideas about strategy
in particular and
management in general, many of which are quite ahistorical. Both
points are illustrated by indexes of the influence of business ideas such
as, for example, importance-weighted citation counts calculated by
Richard Pascale, admittedly with a significant subjective component,
that are reproduced in Figure 15.97 A complete enumeration, let alone
discussion, of contemporary tools and ideas is beyond the scope of this
essay, but a few broad points seem worth making about their recent
profusion and turnover. Given the forward-looking nature of this
discussion, it is inevitably more conjectural than the retrospectives in
the
previous sections.
96
For a more extended discussion of the ideas in this postscript, see Pankaj Ghemawat, “Competition
among Management Paradigms: An Economic Analysis,” Harvard Business School Working Paper (2000).

Some of the profusion of ideas about strategy and management is probably to be celebrated. Thus, there are advantages to being able to
choose from a large menu of ideas rather than from a small one,
especially in complex environments where “one size doesn’t fit all”
(and especially when the fixed costs of idea development are low).
Similarly, the rapid turnover of many ideas, which appears to have
increased in recent years, can be explained in benign terms as well.98
Thus, some argue that the world is changing rapidly, maybe faster
than ever before; others, that the rapid peaking followed by a decline
in attention to ideas may indicate that they have been successfully
internalized rather than discredited; yet others, that at least some of
the apparent turnover represents a rhetorical spur to action, rather
than real change in the underlying ideas themselves.99

It seems difficult to maintain, however, that all the patterns evident in Figure 15 conform to monotonic ideals of progress. Consider, for
example, what happened with business-process reengineering, the
single most prominent entry as of 1995. Reengineering was
popularized in the early 1990s by Michael Hammer and James Champy
of the consulting firm CSC Index.100 Hammer originally explained the
idea in a 1990 article in the Harvard Business Review: “Rather than
embedding outdated processes in silicon and software, we should
obliterate them and start over. We should . . . use the power of modern
information technology to radically redesign our business processes in
order to achieve dramatic improvements in their performance.”101
Hammer and Champy’s book, Reengineering the Corporation, which
came out in 1993, sold nearly two million copies. Surveys in 1994
found that 78 percent of the Fortune 500 companies and 60 percent of
a broader sample of 2,200 U.S. companies were engaged in some form of reengineering, on average with several projects apiece.102
97
For additional discussion of the methodology employed, consult Richard T. Pascale,
Managing on the Edge (New York, 1990), 18–20.
98
For some evidence that management ideas have become shorter-lived, see Paula P. Carson, Patricia A.
Lanier, Kerry D. Carson, and Brandi N. Guidry, “Clearing a Path through the Management Fashion Jungle:
Some Preliminary Trailblazing,” Academy of Management Journal (December 2000).
99
Richard D’Aveni, among many others, asserts unprecedented levels of environmental
change in Hypercompetition: Managing the Dynamics of Strategic Maneuvering (New York, 1994). William
Lee and Gary Skarke discuss apparently transient ideas that are permanently valuable in “Value-Added
Fads: From Passing Fancy to Eternal Truths,” Journal of Management Consulting (1996): 10–15. Robert G.
Eccles and Nitin Nohria emphasize the rhetorical uses of changing the wrappers on a limited number of
timeless truths about management in Beyond the Hype: Rediscovering the Essence of Management
(Boston, 1992).
100
See Michael Hammer and James Champy, Reengineering the Corporation (New York, 1993). See also
John Micklethwait and Adrian Wooldridge, The Witch Doctors (New York, 1996). Micklethwait and
Wooldridge devote a chapter to CSC Index.

Figure 15. Ebbs, Flows, and Residual Impact of Business Fads, 1950–
1995. (Source: Adapted from Richard T. Pascale, Managing on the Edge
[New York, 1990], 18–20.)

Consulting revenues from
reengineering exploded to an estimated $2.5 billion by 1995.103 After
1995, however, there was a bust: consulting revenues plummeted, by
perhaps two-thirds over the next three years, as reengineering came
to be seen as a euphemism for downsizing and as companies
apparently shifted to placing more emphasis on growth (implying,
incidentally, that there had been some excesses in their previous
efforts to reengineer).
Much of the worry that the extent of profusion or turnover of ideas
about management may be excessive from a social standpoint is
linked to the observation that this is one of the few areas of intellectual
enquiry in which it actually makes sense to talk about markets for
ideas.
Unlike, say, twenty-five or thirty years ago, truly large amounts of
money are at stake, and are actively competed for, in the development
of “blockbuster” ideas like reengineering —a process that increasingly
seems to fit with the end state described by Schumpeter as the
“routinization of innovation.” Market-based theoretical models indicate
that, on the supply side, private incentives to invest in developing new
products are likely, in winner-take-all settings, to exceed social
gains.104 To the extent that market-based, commercial considerations
increasingly influence the development of new ideas about
management, they are a source of growing concern.

Concerns about supply-side salesmanship are exacerbated by the demand-side informational imperfections of markets for ideas, as
opposed to more conventional products. Most fundamentally, the
buyer of an idea is unable to judge how much the information is worth until
it is disclosed to him, but the seller has a difficult time repossessing
the information in case the buyer decides, following disclosure, not to
pay very much for it. Partial disclosure may avoid the total breakdown
of market-based exchange in such situations but still leaves a residual
information asymmetry.105 Performance contracting is sometimes
proposed as an antidote to otherwise ineradicable informational
problems of this sort, but its efficacy and use in the context of
management ideas seem to be limited by noisy performance
measurement. Instead, the market-based transfer of ideas to
companies appears to be sustained by mechanisms such as reputation
and observational learning. Based on microtheoretical analysis, these
mechanisms may lead to “cascades” of ideas, in which companies that
choose late optimally decide to ignore their own information and
emulate the choices made earlier by other companies.106 Such fadlike
dynamics can also enhance the sales of products with broad, as
opposed to niche, appeal.107 And then there are contracting problems
within, rather than between, firms that point in the same direction. In
particular, models of principal-agent problems show that managers, in
order to preserve or gain reputation when markets are imperfectly
informed, may prefer either to “hide in the herd” so as not to be
accountable or to “ride the herd” in order to prove quality.108 The
possible link to situations in which managers must decide which, if any,
new ideas to adopt should be obvious. More broadly, demand-side
considerations suggest some reasons to worry about patterns in the
diffusion of new ideas as well as the incentives to develop them in the
first place.
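The cascade mechanism can be simulated compactly. The Python sketch below uses a stylized counting rule and invented parameters in the spirit of the models cited in the notes; it is not a reproduction of them.

import random

random.seed(7)           # any seed; rerun to see cascades form both ways
IDEA_IS_GOOD = False
SIGNAL_ACCURACY = 0.6    # a private signal matches the truth 60% of the time

def run(n_managers=15):
    balance = 0          # net adoptions observed so far (+1 adopt, -1 reject)
    choices = []
    for _ in range(n_managers):
        signal = +1 if (random.random() < SIGNAL_ACCURACY) == IDEA_IS_GOOD else -1
        if abs(balance) <= 1:
            adopt = balance + signal > 0   # own signal is still decisive
        else:
            adopt = balance > 0            # herd outweighs any single signal
        choices.append(adopt)
        balance += 1 if adopt else -1
    return choices

print(run())
# Once the observed lead exceeds one signal's weight, later managers rationally
# ignore their own information; depending on early draws, the sequence can lock
# into adopting (or shunning) the idea regardless of its true quality.
# (Ties are broken toward rejection here for simplicity.)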
101
Michael Hammer, “Reengineering Work: Don’t Automate, Obliterate,” Harvard Business Review
(July/Aug. 1990): 104.
102
Micklethwait and Wooldridge, The Witch Doctors, 29.
103
See James O’Shea and Charles Madigan, Dangerous Company: The Consulting Powerhouses and the
Businesses They Save and Ruin (New York, 1997).
104
For a general discussion, see Robert H. Frank and Philip J. Cook, The Winner-Take-All Society (New York,
1995); for formal modeling and a discussion specific to the management idea business, see Ghemawat,
“Competition among Management Paradigms.”

Whether such worries about the performance of markets for ideas actually make their effects felt in the real world of management is,
ultimately, an empirical matter. Unfortunately, the informational
imperfections noted above—and others, such as the difficulty of
counting ideas—complicate systematic empirical analysis of product
variety and turnover in management ideas. A shared basis for
understanding the historical evolution of ideas, which I have attempted
to provide in the specific context of competitive thinking about
business strategy, is but a first step in unraveling such complications.
105
See, for example, James J. Anton and Dennis A. Yao, “The Sale of Ideas: Strategic Disclosure, Property
Rights, and Incomplete Contracts,” unpublished working paper, Fuqua School of Business, Duke University
(1998).
106
See Sushil Bikhchandani, David Hirshleifer, and Ivo Welch, “Learning from the Behavior of Others:
Conformity, Fads and Informational Cascades,” Journal of Economic Perspectives (1998): 151–70.
107
See Daniel L. McFadden and Kenneth E. Train, “Consumers’ Evaluation of New Products: Learning from
Self and Others,” Journal of Political Economy (Aug. 1996): 683–703.
108
These models derive some of their real-world appeal from the use of relative performance measures to
evaluate managers. See Robert Gibbons and Kevin J. Murphy, “Relative Performance Evaluation of Chief
Executive Officers,” Industrial and Labor Relations Review (Feb. 1990): 30S–51S.
