
Design rules: past and future



Carliss Y. Baldwin*
Harvard Business School, Soldiers Field, Boston, MA 02163, USA, e-mail: [email protected]
*Main author for correspondence.

Abstract
It is a great honor to have a special issue of Industrial and Corporate Change dedicated to The Power of Mod-
ularity Today: 20 Years of “Design Rules.” In this retrospective article, I will give some background on how
Design Rules, Volume 1: The Power of Modularity came to be written and list what I believe were its major
contributions. I go on to describe the book’s shortcomings and gaps, which prevented the widespread adop-
tion of the tools and methods the book proposed. I then explain how the perception of the gaps influenced
my thinking in Design Rules, Volume 2: How Technology Shapes Organizations.
JEL classification: L1, L2, O3

1. Background
Design Rules, Volume 1 (DR1) began as a case, a conundrum, and a model that was never pub-
lished. The case was Sun Microsystems, Inc.—1987 (Baldwin and Soll, 1990). Founded in 1982,
Sun was an anomaly: many of its actions seemed to fly in the face of the “sound management
principles” taught to MBAs and executives. Sun raised money in the public capital markets far
more frequently than seemed prudent. It offered high-performance technical workstations, yet
appeared to have no proprietary technology. The company built systems that were incredibly fast
with off-the-shelf hardware and software. It outsourced much of its manufacturing. It developed
a network file sharing protocol and a Reduced Instruction Set Computer chip architecture, but,
instead of exploiting these proprietary technologies, practically gave them away. Sun’s managers
appeared to be doing everything wrong, and its survival seemed to be a matter of smoke and mir-
rors. However, the hardware and software architects within Sun also seemed to see technology in
a new way—as a playing field for moves in a competitive game. In the mid-1980s, Kim Clark and
I set out to understand the conundrum of Sun’s technological mastery and its apparent success.
Our goal was to bring their “game” and its “moves” into the realm of formal economic analysis
(Baldwin and Clark, 1997).
In the late 1980s, the word “modularity” was in the air, surrounded by an aura of technical
wizardry. Modularity was what allowed both Sun and its main competitor, Apollo Computer, to
outsource key components of their systems. Sun in large measure succeeded in driving Apollo out
of the market because it was “more modular” than its rival—whatever that meant. But what was
this thing called modularity, and how could we represent it within the formal analytic frameworks
of economics?
The virtue of a modular system, we discovered, is that its components can be mixed and
matched to achieve the highest-value configuration in a particular setting. The mixing and
matching is possible because the designers do not have to know precisely how the modules will
be arranged after the fact. They only have to know generally what each module will do, how it
will fit in, and what constitutes good module performance. Thus, the essence of modularity, we
felt, lies in the options it gave designers to postpone and then revise key decisions. Option theory
is a well-defined field in economics with a wealth of formal models. By 1992, Kim and I had a
model of the costs and benefits of modularity based on the theory of real options. Five years later,
after many rounds of reviews and revisions, the model was still unpublished. (It later became the
basis of Chapter 10 of DR1.) In early 1995, as our hopes of publication in a refereed journal
faded, we began working on a book. We discarded the first draft in early 1996 and began all over
again.
By then, however, we had a shared vision of what we needed to do. It seemed to us that all
the arguments of that time (this was the early 1990s), including our own, were based on indirect
evidence. We were depending on what other people said about designs. However, the second-hand
reports were unreliable: in some circles, every design was said to be modular. Modularity can be
relative. Both Sun and its rival Apollo could truthfully claim to have used modular computer
designs, but, in some fashion, Sun’s designs were more modular. Thus, we felt that there was
a great and looming gap in the literature on technology, strategy, and management. What was
happening to the designs themselves? This was the question that led us to write Design Rules.
(It was not planned as two volumes.)
We began with two basic insights. First, modular systems create options for designers and
users. This was the fundamental message of our model. It is in the nature of options to allow
unexpected things to happen. In a modular technical system, designers of modules are free
to experiment and choose the best design from among many trials. Development can proceed
opportunistically and incrementally, with each module on its own semi-independent trajectory.
The overall process was like biological evolution in that it was incremental, parallel, and not
predetermined.
However, design evolution was unlike biological evolution in three important ways. First,
the processes generating design change are search processes based on designers’ foresight and
subject to their goals and incentives (including economic incentives). Second, the survival of
designs depends on designers’ own assessments and evaluations, which in turn were based on
tests they devised to rate and rank designs. Third, the whole sequence needs to take place within
an architectural framework that was itself designed for the purpose of “playing host” to an evo-
lutionary process. Inspired by Carver Mead and Lynn Conway, we called the framework the
system’s “design rules” (Mead and Conway, 1980).
The following then were our two key insights, which formed the central message of the book:

• Modular designs create options.
• Modular designs can evolve.

Similar insights expressed in different words can be found in seminal works on the science of
design by Ashby (1952), Simon (1962), and Alexander (1964). Our two-pronged thesis echoed
theoretical arguments that were being put forward in the late 1990s by other researchers, espe-
cially, Langlois and Robertson (1992), Garud and Kumaraswamy (1993, 1995), Sanchez and
Mahoney (1996), and Schilling (2000).
The need to go “to the designs themselves” caused us to focus exclusively on one industry,
the “greater” computer industry. We knew that modularity was a general property of complex
systems (Simon, 1962; Schilling, 2000). Hence, modularity of some kind would be present in
almost any industrial context. But in order to “see” designs as the designers saw them, we needed
to deepen our technological understanding in some particular domain. We would have to learn
one or more engineering languages to the level of comprehension, if not mastery. We would
have to read descriptions and appraisals of specific designs and learn to identify the recurrent
engineering trade-offs and compromises. This was a daunting prospect, and we avoided making
this commitment for a long time. But, in the end, it was inescapable.
By then, we had a base of technical knowledge about computers from our previous work.
We knew that most computer designs were highly modular, and we knew that the industry had
gone through massive structural changes as a result of changing designs. (Indeed, those changes
continue today.) Thus, in 1995, we chose the greater computer industry, comprising all makers
of hardware, software, components, and services that go into a computer system, as our research
site. We then began the task of finding and valuing the options and charting the evolution of
computer designs.
We imposed four rules on our investigation. First, we would focus on design alone (not production). Second, there would be no metaphors. Third, all definitions and models would be backed
up with examples based on the “deep” understanding of real designs and technologies. Fourth,
to unify the exposition, we would focus on a single, large, and evolving group of technologies.
We believed that our arguments would be credible only if they were plausible and convincing to
people (mostly engineers) who used technologies to create new products and systems.
We actually began with contemporaneous technologies—personal computers, technical work-
stations, microprocessors, and operating systems. But looking for the origins of modularity in
computers, we were driven backward in time to IBM’s System/360 and even older computer
designs. We found that, while the designs and the performance of computers had changed greatly,
the desire for design options and the perception that modularity was the key that would unlock
the door were constants. This was true no matter how far back in time we went. Some designers
were always trying to create options through modularity, while others wanted to integrate all the
components in order to achieve higher levels of performance.
Imagine our relief when we reached the beginning of this seemingly endless series of recurring
debates. The origin, we discovered, was what might be called the first “architectural document”
on computer design. It was written by the great mathematician, John von Neumann in a few
weeks in the wartime spring of 1944 (Burks et al., 1946). Although the original memorandum
was not published until after his death, copies and redrafts were widely circulated and became
enormously influential. Virtually all subsequent computer designs bear this document’s stamp—
because it provides a way to think abstractly and systematically about the enormously complex
artifact, that is, an electronic computer. (The story of von Neumann’s report and other precursors
of modular computer designs may be found in Chapter 5 of DR1.)
With 1944 and the von Neumann memorandum identified as our starting point, we were free
to begin moving forward in time again, tracking the lineages of actual designs from this common
source. We sought to document both the evolution of designers’ perceptions and the parallel
evolution of the designs themselves.
Our efforts received an unexpected gift in early 1996 through the work of the computer scien-
tist and evolutionary theorist, John Holland. Holland (1992, 1996) had developed an overarching
theory of complex adaptive systems, which encompassed the processes of biological evolution,
neuronal and immunological growth, cellular automata, and, importantly, complex games, like
checkers and chess. Because of the broad reach of his theory, we could locate our theory of mod-
ular design evolution within his framework. By looking at how his theory was constructed, we
could see what was needed to complete our own. From Holland, we got the idea of “operators”
as primitive moves and sequences of operators as strategies in a structured, multiplayer game.
Thus, the first part of DR1 (Chapters 2–8) explained what modularity is and how it came to
be present in computer designs. The second half (Chapters 9–16) explained what can be done
with modularity and its consequences for the surrounding economy. In the later chapters, we
cataloged the “operator moves” made possible by modular designs. In developing this catalog,
our objective was to connect the abstract theory of operators with the nitty-gritty reality of actual
designs and designers’ decisions. We did not strive to come up with a complete list of operators.
Instead, our rule was “a real example for each operator.” That is, each operator we described
had to play a documentable role in at least one important episode in the history of the computer
industry.
For the operators splitting and substitution, we could point to the design of IBM’s System/360
and the subsequent emergence of plug-compatible devices (DR1, Chapters 10 and 14). We
could also cite Sun Microsystems’ decomposition of the design of a technical workstation in the
1990s (Chapter 11). For the operator excluding, we had the computer architectures of Digital
Equipment Corporation (DEC), once the second-largest computer systems maker in the world
(Chapter 12).1 For augmenting, we had not only DEC’s strategy for minicomputers but also
the example of VisiCalc, the first spreadsheet program. VisiCalc’s functionality drove the early
growth in demand for personal computers, but, in successive rounds of substitution, VisiCalc
was itself unseated by Lotus 1–2–3, Borland’s Quattro Pro, and finally Microsoft Excel (Chapter
12). Finally, for the operators inversion and porting, we had the examples of Unix and C—the
first portable operating system and the language, respectively, invented for the purpose of coding
it (Chapter 13).
Describing how modularity emerged in computer designs and documenting the operator moves
made possible by modular architectures took us through the 1970s in terms of the history of
computer designs. Understanding the history of designs, we believed, contributed to a better
understanding of the great sea-change in the structure of this industry that occurred in the 1970s.
Between 1970 and 1980, the greater computer industry changed from a highly concentrated and
vertically integrated oligopoly dominated by IBM to a more fragmented and vertically disinte-
grated modular cluster of independent firms, linked by design rules governing the systems they
made.
In 1980, the market value of the cluster surpassed the market value of IBM. Hence, that
year was a convenient way to mark the beginning of a new industrial order in computers. The
workstation battles fought between Sun and Apollo, as well as all the intricate and fascinating
moves and countermoves in the personal computer and operating systems markets, were part of
this new order. But there was more: new centers of economic activity, like Silicon Valley; new
cultural institutions, like the Internet, the Worldwide Web, and email; economic anomalies, like
the Internet bubble and crash; new rules of property established via anti-trust and intellectual
property disputes; new social and political movements, like the Free Software Movement; and
even new theories of engineering and technological innovation, like the Open Standards and
Open Source initiatives. All these things, we believed, were tightly linked to a new, post-1980
industrial order.
In 1998, we knew we were in a period of unprecedented ferment, growth, and change in the
industry we were studying. Kim and I did not know how to approach the enormously complex
and variegated developments we were observing. Thus, 1980, the year before the introduction of
the revolutionary IBM PC, seemed a good time to pause and take stock of what had happened.
In December 1998, we sent DR1 off to the publisher while promising ourselves that we would
soon write a second volume.

2. Contributions of DR1
What were the singular contributions of DR1? What made it different?
I think the first contribution was to offer a new way to understand and explain the architecture
of complex, man-made systems—any system. It was a new theory, based on observable, objective
facts, not subjective perceptions, of a system’s structure. To the question: “How do you know
modules exist?” (which my finance colleagues loved to ask), we could answer: “This is what you
have to do, this is how you will know if the system has modules and what they contain.”
To develop a theory of modular systems based on observable data, we had to introduce readers
to new tools, specifically, dependencies among tasks and decisions, design structure matrices
(DSMs), and design hierarchies. None of these tools were totally new: in fact, the list of inventors
would go on for several paragraphs and include great design theorists like Simon (1962, 1981)
and Alexander (1964); great empiricists like Eppinger (1991, 1994); and great computer scientists
like Parnas (1972a,b, 1985, 2001), Bell and Newell (1971), and Mead and Conway (1980);
as well as more obscure figures such as Marples (1961) and Steward (1981). However, to our
knowledge, no one had ever brought the tools together and used them to determine modularity
and module boundaries in real systems. Almost no practitioners and only a few scholars saw the
benefit of using these tools to map out entire systems.
Given a picture of the structure of a system, one could imagine how it might be changed. This
was the role of the “modular operators.” Through application of operators, technological change

1 In 1998, DEC was acquired by Compaq Corporation, which in turn was acquired by Hewlett Packard in 2002.
or “evolution”—what Schumpeter (1934) called “new combinations”—was not an amorphous
process, but something one could see and document with before-and-after maps. Evolution via
operators was the book’s second contribution.
The third contribution was to tie design structure and evolution to the financial theory of
“real” options. We showed how one could (in principle) compare the value of two different
system architectures. One could also calculate the incremental value of improving a particular
module, moving a module up or down in a design hierarchy, or making a module work in multiple
systems (today, such modules are called “cross-platform” or “multi-homing” modules).
Our last contribution was to provide an explanation for historical changes in the structure
of the computer industry, which could be traced to a documented change in the technical archi-
tecture of a particular computer system. The historical change was the fragmentation of the
computer industry between 1965 and 1980. The technical change was the modularization of
IBM’s System/360 following the company’s decision to create a single set of “binary-compatible”
processors for all of its markets. Binary compatibility allowed users to upgrade their hardware
without rewriting their software. Modularity allowed a new set of “plug-compatible peripheral”
(PCP) devices to be attached to System/360 computers without IBM’s permission. Between 1965
and 1975, 12 new subindustries emerged in the greater computer industry. By 1980, over 200 new
firms had entered the industry: 80% of the entrants made modules, not whole systems (Baldwin
and Clark, 2000: 7–8; 376–377).
In arguing that a change in technical architecture caused a change in industry structure, we
implicitly assumed that the systems had the property of technical and organizational “mirroring”:

[O]rganizations are boundedly rational, [hence] their knowledge and information-processing structure come to mirror the internal structure of the product they are designing. (Henderson and Clark, 1990: 27.)

We made this assumption without much thought. It seemed obvious in part because of the
way that PCP makers hooked their devices up to System/360 processors. PCP makers did not
connect their products to the interiors of modules: they attached them at the connection points
that IBM had designed for its own peripherals. By virtue of its ownership of the machines,2
IBM claimed ownership of these interfaces. However, the interface designs were quite simple and
hence unpatentable. As a result, IBM could not prevent attachments but could only sue the PCP
makers after the fact. (The status of interfaces was the subject of several long-lasting lawsuits and
countersuits. The questions were eventually decided mostly in IBM’s favor. However, by the time
the law was settled, the PCP companies were well established in their niches. IBM’s customers
then demanded open interfaces as a matter of course.)
In our defense, we were not the only ones to assume mirroring without investigating its under-
pinnings. Transaction costs economists led by Williamson (1985) assumed that “technologically
separable interfaces” existed at many points in all production systems. A transaction—thus a
boundary between two firms—could go at any of these interfaces as long as transaction costs were
low enough. Langlois and Robertson (1992), Garud and Kumaraswamy (1993, 1995), Sanchez
and Mahoney (1996), and Schilling (2000) also associated modularity with the emergence of
networks of autonomous firms making complementary products.

3. Shortcomings and gaps


Unfortunately, each of our contributions had major shortcomings that stood in the way of
widespread adoption of the analytic tools and methods we advocated. This section describes
impediments to the use of the theory in practice.

3.1 Objective identification of modules


For this purpose, the main tool we offered was the so-called DSM, also known as a task struc-
ture matrix (Steward, 1981; Eppinger, 1991; McCord and Eppinger 1993; Eppinger et al., 1994).

2 IBM generally maintained ownership of its equipment and leased it to customers.



The main shortcoming of this methodology was its cost. Applying the technique requires compil-
ing a list of all decisions or tasks needed to design or make a specific product plus the dependencies
between them. In real systems, the list of “steps” is often very long and many lists are incomplete.
However, the real problem lies in tracing the dependencies across tasks and decisions. Depen-
dencies are often known only to direct participants, who may not be aware of what they know.
Thus, tracing dependencies generally requires interviewing participants, asking “whose actions
and/or information you must obtain in order to take the actions required to do your job?” The
answers can then be used to populate the off-diagonal cells of a square matrix—the DSM.3
As if that were not enough, one then has to sort the matrix. With appropriate sorting, modules
will appear as discrete blocks of decisions/tasks on the main diagonal of the matrix. Hierar-
chy within the system can then be used to put the blocks in order with the most depended on
components appearing at the top of the matrix.
Unfortunately, an unsorted matrix will not necessarily reveal its modular structure, even if one
exists. Prior information about anticipated hierarchy and clusters can be used to do preliminary
sorting, which may be sufficient to show whether the modules are cleanly separated and/or hierar-
chically ordered (Baldwin et al., 2014). Matrix multiplication can reveal the degree of connection
between elements, a measure sometimes called “propagation cost” (MacCormack et al., 2006,
2012). However, summary measures do not show hierarchical relationships or the boundaries of
modules.
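As a rough illustration of what such an analysis involves, the sketch below builds a small DSM in Python and computes a propagation-cost style summary from its reachability matrix. The six tasks, their dependencies, and the function names are invented for this example; they are not data from DR1 or from MacCormack et al.

```python
import numpy as np

def propagation_cost(dsm):
    """Fraction of (task i, task j) pairs linked by a direct or indirect
    dependency, computed from the reachability ("visibility") matrix --
    the kind of summary measure discussed by MacCormack et al."""
    n = dsm.shape[0]
    step = ((dsm + np.eye(n, dtype=int)) > 0).astype(int)
    reach = np.eye(n, dtype=int)
    for _ in range(n):                      # n steps suffice to reach a fixed point
        reach = ((reach @ step) > 0).astype(int)
    return reach.sum() / n ** 2

# Hypothetical 6-task DSM: row i lists the tasks whose output task i needs
# (inputs in rows, sources in columns, as in footnote 3's convention).
dsm = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],   # tasks 0-2 form a dense block: a candidate module
    [0, 0, 1, 0, 1, 1],   # a single dependency (task 3 on task 2) crosses the blocks
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],   # tasks 3-5 form a second block
])

print(f"propagation cost: {propagation_cost(dsm):.2f}")
```

Sorting rows and columns so that tasks 0–2 and 3–5 sit together would make the two blocks visible on the diagonal, with the single off-block entry standing out as exactly the kind of cross-module dependency that is easy to overlook.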
Finding the “true” modular structure of a complex technical system thus entails both major
expense and a major risk. The expense is the cost of surveying participants in the process, distract-
ing them from the tasks at hand. Companies seldom see enough benefit in understanding their
own processes to undertake the expense and suffer the disruption. The risk is that overlooked
dependencies, especially those arising between firms or divisions, can cause unexpected glitches,
unpredictable behaviors, delays, cost overruns, and, in extreme cases, system failure. Still, the
expense of tracking dependencies (which may change over time) is so large that most compa-
nies simply let their systems grow “naturally” and wait for embedded dependencies to “reveal
themselves” in the form of errors, bugs, and bottlenecks (Kim et al., 2014; Goldratt and Cox,
2016).
The final shortcoming of DSMs is that a process must already exist and be running for depen-
dency tracking to take place. One cannot track transfers of material, energy, and information in
a system for which no detailed design or working prototype exists. In fact, one of the benefits
of electronic design automation (EDA) and other computer-assisted design methods is that they
both reveal and constrain channels of potential dependency. But design codification itself requires
a prior mental model of the system and its components and how the components will work
together to deliver system-level value (Bucciarelli, 1994). Inventors of new-to-the-world tech-
nologies often have difficulty in communicating what they “see” and do not naturally translate
their mental models into DSMs.

3.2 Design evolution via operators


I initially had high hopes for the modular operators. I envisioned designers of real systems ana-
lyzing their modular structure and then saying things like “we’ll split this group of components
and substitute here, here, and here; we’ll augment the system with these functions and exclude
these others; this function can be inverted to be a common resource to other components; this
module can be given a new interface that allows it to be used in different systems.” Kim and I
also went to great lengths to find real examples corresponding to each operator and to provide a
formula for the financial net option value of each potential move.
The operators and the formulas were never much used in practice.4 They provided an appeal-
ing theory and showed that any change in system structure could be represented by a finite series
of one-step modifications. But, in the end, the concepts did not correspond to any real cognitive

3 Kim and I chose to put each input to a given decision or task in the corresponding row of the matrix; the sources
of the inputs then appear in the columns. However, practice varies across researchers.
4 Woodard et al. (2013) is an exception.

process nor did they address any felt needs. In effect, they reduced complex cognitive and commu-
nication processes into a linear series of steps, each with a “value.” I now believe humans approach
complex systems more holistically and conceive of changes “in groups,” not as a series of incre-
mental “moves.” Humans are very conscious of complementarities in technological systems, and
thus, an intuitive “bundling” of modifications is both cognitively appealing and efficient. Com-
plementarity also means that the existence of well-defined values for one-step moves is doubtful,
since move A may be worthless in the absence of another move B (Milgrom and Roberts, 1990,
1994, 1995).
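A toy numerical example (with made-up payoffs) makes the point: when moves A and B are strict complements, the incremental value of A depends entirely on whether B has already been made, so no single well-defined one-step value exists.

```python
# Hypothetical payoffs for a two-move bundle: A and B are worthless alone
# but valuable together, so the value of a single "operator move" is not
# well defined independent of the rest of the bundle.
system_value = {
    frozenset(): 0.0,
    frozenset({"A"}): 0.0,
    frozenset({"B"}): 0.0,
    frozenset({"A", "B"}): 10.0,
}

def incremental_value(move, moves_already_made):
    before = system_value[frozenset(moves_already_made)]
    after = system_value[frozenset(moves_already_made) | {move}]
    return after - before

print(incremental_value("A", set()))     # 0.0  -> A looks worthless taken first
print(incremental_value("A", {"B"}))     # 10.0 -> A looks decisive taken second
```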
Ironically, however, concepts very close to operators are often used to formulate strategies
for products that are offered within larger ecosystems (Holgersson et al., 2022). For example,
“commoditization” of a product is equivalent to multiple substitutions (Christensen and Raynor,
2003). Creating a “cross-platform” or “multihoming” product entails porting. “Platform” prod-
ucts provide centralized common functions to different types of users and hence are examples
of inversion. Notwithstanding these examples, as elements of an overarching theory of design
evolution, the modular operators were a bust.
Nevertheless, the six modular operators turned out to serve an important purpose in the overall
argument of DR1, just not the one I expected. Systematically investigating real examples of the
operators and applying the concepts of modularity and option value to each one made us look
beyond IBM’s System/360 to other systems and companies. Although our tools were overly formal
and impractical, the examples gave us a broad empirical base. We showed that our theory could be
applied to hardware systems built by Sun Microsystems and DEC, to “killer apps” like VisiCalc,
to operating systems like Unix, and to computer languages like C.
This empirical effort in turn gave us credibility among computer scientists and hardware and
software engineers, sometimes including the architects and designers of the systems we discussed.
Their interest and acceptance of the basic arguments legitimized the book. A few vetted our
analysis beforehand, and others read chapters after the book was published. We found that our
analysis generally resonated with their experiences. Some were pleased to see scholars from an
alien field (management) attempt to translate their concerns and decisions into the language of
economics and finance. In the end, although operators were not a major theoretical contribution,
they provided us with a framework we could use to organize the empirical analysis and enhance
the credibility of our arguments. The operators provided excellent scaffolding, even if in the end
they could be discarded.

3.3 Valuation using the theory of real options—the elusive sigma


Our most radical contribution was to apply the financial theory of real options to changes in
complex designs and specifically modularizations. Using option theory allowed us to formally
demonstrate the conditions under which modularization increased the value of a system and,
under specific assumptions, to quantify the value of splitting a computer system into modules that
could be combined and upgraded in different ways. For example, using a thought experiment
based on System/360, we showed that the value of a modular system could exceed that of a
comparable integral system5 by 25 (!) times or more. This was the “power of modularity.” Under
reasonable assumptions, in head-to-head competition, the integral system would not stand a
chance.
However, as with our other contributions, there were problems in converting the theory into
practice. Valuing an option requires knowing the probability distribution of future outcomes for
each module, as well as the value of adhering to the status quo. To achieve tractability, we had to
assume the end-point values were normally distributed. This meant that the distributions could
be described using only two parameters: (i) the mean, which could be set to zero without loss of
generality, and (ii) the standard deviation, 𝜎, also known as volatility or “sigma.”
Given normal distributions, the sigma of an integral system was a simple function of the sigmas
of each module.6 The value of the modular system would then depend on the number of modules,

5 Comparable meant “with the same underlying distribution of outcomes.”


6 𝜎_{Integral System} = (𝜎²_{Module 1} + … + 𝜎²_{Module N})^{1/2}.

the sigma of each module’s end-point distribution, and the number of experimental alternatives
(substitutes) generated for each module. The optimal number of experiments in turn was an
endogenous variable that depended on the module’s sigma and the cost per experiment. Both
sigmas and the cost per experiment could vary by module; thus, the modular systems did not
have to be symmetric.
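As an illustration of how these pieces fit together, the Monte Carlo sketch below compares an integral system with a comparable modular one under the normality assumption described above. The four modules, unit sigmas, and four experiments per module are arbitrary choices for the example, not the calibration behind the 25-times figure; with more modules and more experiments per module the ratio grows much larger.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not the book's): four modules with unit sigma,
# k design experiments per module, payoffs normal with mean zero, and only
# improvements over the status quo (max with 0) are kept.
sigmas = np.array([1.0, 1.0, 1.0, 1.0])
k = 4
n_sims = 200_000

# Integral system: k experiments on the whole design, whose sigma follows
# footnote 6's relation.
sigma_integral = np.sqrt((sigmas ** 2).sum())
integral = np.maximum(rng.normal(0.0, sigma_integral, (n_sims, k)).max(axis=1), 0.0)

# Modular system: k experiments per module, best outcome kept module by module.
draws = rng.normal(0.0, sigmas, (n_sims, k, sigmas.size))
modular = np.maximum(draws.max(axis=1), 0.0).sum(axis=1)

print(f"integral option value: {integral.mean():.2f}")
print(f"modular option value:  {modular.mean():.2f}  (ratio {modular.mean() / integral.mean():.1f}x)")
```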
The problem lies in estimating the sigma. Financial option values are for the most part esti-
mated by looking at the past volatility in the price of a stock or some other traded asset.7 For
most designs and technologies, however, there is no traded asset that can reveal past, much less
future, sigmas. Yet, this was the question that serious practitioners most frequently asked: “where
do we get sigma?”
It was the right question, but not one I could answer. Compared to the cost of setting up appropriate data collection efforts to study sigmas across different product domains, the cost of observing DSMs was minuscule. Even if the resources could be found, defining groups of designs
that were homogeneous enough to form meaningful probability distributions seemed impossible.
The impossibility of estimating sigmas did not become clear to me until DR1 had been out for
a while and practitioners had tried to apply the method. For 15 years, I avoided the issue, hoping
a quick fix would appear. None did.

3.4 The limits of mirroring


Those 15 years (2000–2015 approximately) saw the appearance and spectacular growth of three
new forms of organization. The new organizations—platforms, ecosystems, and open-source
communities—were more distributed and networked than the large multidivisional corporations
that formed the backbone of the economy from 1900 to 2000. The new forms of organization
were a surprise to scholars and practitioners alike (Grove, 1996). They soon became targets of
scholarly research (Gawer and Cusumano, 2002; von Hippel and von Krogh, 2003, von Krogh
and von Hippel, 2006; Adner and Kapoor, 2010; Gulati et al., 2012; Puranam et al., 2014).
Powerful personal computers and servers and cheap Internet communication were the com-
mon core of these new organizations, but otherwise, they fell outside then-prevailing theories
of management, economics, and innovation.
From the beginning, it was clear to me that modularity lay behind these new forms of orga-
nization: modular technical architectures permitted more distributed forms of organization. But
the theory behind such “mirroring” was not well understood, much less widely accepted. The evi-
dence we presented in DR1 that the computer industry broke apart because of the modularization
of System/360 was a “post hoc ergo propter hoc” rationale based on a sample of one.
The argument also contained a dangerous echo of technological determinism. Technological
determinism is the proposition that technologies cause social change, including changes in orga-
nization structure. In the late 20th century, this theory was widely discredited. Many studies
showed that technologies developed along lines determined by users, that is, they were “socially
constructed.” Furthermore, the “needs” of technologies were often used as a spurious justification
for excessive amounts of control in mass production factories as well as coercion and exploitation
of workers by managers and owners of companies (Noble, 1979, 1984; Bluestone and Harrison,
1982; Pinch and Bijker, 1984; Piore and Sabel, 1984; Hughes, 1987, 1993; Orlikowski, 1992;
MacKenzie and Wajcman, 1999; Leonardi and Barley, 2008, 2010; MacKenzie, 2012).
The “post hoc ergo propter hoc” and “sample of one” critiques of the mirroring hypothesis
were fair, I felt. However, the idea that specific technologies did not influence or constrain organi-
zations seemed extreme to the point of being silly. Kim’s work with colleagues William Abernathy,
Robert Hayes, Steven Wheelwright, Rebecca Henderson, and Taka Fujimoto was precisely about
how organizations struggled to use technologies more effectively and succeeded or failed to the
degree they “got it right.” Furthermore, I felt the tools we used in DR1 could be used to explain the
correspondence between technological boundaries (of modules) and organizational boundaries

7 Estimation based on previous data can be problematic because the theory specifies the sigma that will prevail for
the lifetime of the option. If asset pricing distributions are not stationary, using estimates based on previous data will
price the options incorrectly. Today, huge resources are devoted to refining estimates of sigma and other distributional
parameters and to developing option-based strategies for betting on future volatilities.

(of firms). Managers, through product design and procurement decisions, decided where to place
their own firm’s boundaries—what the company would and would not do (Porter, 1996). Bound-
aries in turn were marked by transactions between independent agents. What sort of manager
would place a transaction/boundary inside a tightly interconnected module?
But managers often did just that. In the first few years after DR1 appeared, a number of inno-
vation and strategy scholars began to present evidence of exceptions to strict mirroring—cases
where module boundaries and organizational boundaries were not well defined, yet participants
suffered no loss of efficiency or value. A typical case would be a buyer–supplier pair in which
the participants freely exchanged knowledge, cooperated in the design of a part, and shared any
gains. From these and other examples, it became clear that mirroring was not a law of nature nor
even a universally good idea. It was simply a common pattern, the outcome of an underlying,
unobserved balance of costs and benefits.
In DR1, Kim and I argued that in the wake of a technological change, firm boundaries might
shift to achieve better mirroring. The result would be a visible change in industry structure. We
did not prove this conjecture or explain when or why the pattern would arise. However, if the
concept of modularity was to be relevant to the exploding body of work on new organizational
forms, we had to address this gap and sooner rather than later.

4. How DR1 influenced DR2


When I began to write Design Rules, Volume 2 in 2016, I was not conscious of the shortcomings
and gaps listed above. Rather, I had a strong and growing conviction that the relationship between
technology and organizations had not received sufficient attention from students of technology or
organizations. The new, distributed organizations—ecosystems of firms making complementary
products, platforms linking many different groups, and open-source communities—seemed to be
related to the spread of digital technology, but the underlying causal linkages were unknown.
Economists, organization theorists, and management gurus were building new theories galore,
but they generally approached technologies in a superficial way. Furthermore, in explaining events
in the greater computer industry, one group of scholars saw “ecosystems,” another group saw
“platforms,” and still another saw “communities.” “All these” were not a very satisfying way to
make sense of the whole.
A major historical puzzle also loomed behind the exciting, day-to-day events. For the better
part of a century, large, vertically integrated, multidivisional corporations had dominated the
global economy (Schumpeter, 1942; Drucker, 1946; 1993; Galbraith, 1967; Servan-Schreiber,
1968; Chandler, 1977, 1990). In the 1980s and 1990s, these corporations remained at the center
of scholarly research in strategy and organizational behavior. Small startups hoped to grow up
to be industrial giants with publicly traded stocks (Grove, 1996).
At the same time, under the banner of shareholder value maximization, many large corpora-
tions were being broken apart into smaller, more focused enterprises that issued debt in order to
become privately owned. What changed? Why were organizations once admired for their scale,
their operational capabilities, and their internal management systems now considered inefficient
and bloated bureaucracies unprepared for the 21st century? (Jensen, 1986, 1993).
Thus, I began DR2 with a short list of organizational “surprises”—ecosystems, platforms, and
open-source communities—and a single goal. The goal was to explain how technology shapes
organizations, specifically how particular technologies set requirements that could not be waved
aside and thus rewarded different forms of organization. The explanation had to address both
sides of the historical puzzle: first, the rise of so-called modern corporations and their dominance
during most of the 20th century and then the surprising recent success of distributed forms of
organizations including ecosystems, platforms, and open-source communities.
It was far from my intent to redress the shortcomings of Volume 1. However, the shortcomings
and gaps listed above must have haunted me. When I looked over the chapter drafts of DR2 as
part of writing this paper, I was amazed to find that I had tried to address each shortcoming
and fill in each gap to the best of my ability. To conclude this article, I would like to look to
the future and explain how the shortcomings of Volume 1 influenced the contents and themes of
Volume 2.

5. Closing the gaps: the goals of volume 2


5.1 Underpinnings of the mirroring hypothesis
The main empirical argument in DR1 was that changes in the design of computers in the 1960s
caused the computer industry to break apart in the 1970s, 1980s, and 1990s. As explained above,
the argument was based on an extremely shaky foundation—what later came to be called the
“mirroring hypothesis.” Of the four gaps in DR1, this was the first I addressed. First, the theory
behind the hypothesis had to be built up to provide a stronger foundation. Second, the empirical
scope of the hypothesis needed to be determined. Third, the growing number of exceptions found
in the literature needed to be explained and placed in context.
I addressed the theory in an article “Where Do Transactions Come From?,” which was written
in 2003, though not published until 2008 (Baldwin, 2008). Lyra Colfer and I then extended
the theory and investigated it empirically in “The Mirroring Hypothesis: Theory, Evidence and
Exceptions” (Colfer and Baldwin, 2016). The two papers provided a foundation for a theory
of how technology shapes organizations: they lie at the core of DR2. In related work, Alan
MacCormack, John Rusnak, and I tested the mirroring hypothesis by comparing the degree of
interdependence in software systems created by closely coordinated teams within companies vs.
widely distributed open-source communities (MacCormack et al., 2012).
The view of mirroring I would teach to students today is much more nuanced than what I
and others perceived when DR1 was first published. First, real production does not proceed as
a series of well-defined steps as Coase, Williamson, and other economists assumed. In even the
smallest productive organizations, there are numerous transfers of material, energy, and infor-
mation taking place all the time in different directions. It is impossible to turn each of these
transfers into a legally enforceable transaction—if one tried, production itself would grind to a
halt. DSMs provide a more accurate picture of the task networks that lie at the heart of real
design and production systems than previous theories that looked only at sequences of steps.
In these complex task networks, transaction costs are lowest at the “thin crossing points” of the
network, which correspond to the boundaries of modules. Thus, other things equal, organization
boundaries marked by transactions are most likely to appear at the technological boundaries of
modules, rather than in their interiors. The mirroring of technical and organization boundaries
economized on transaction costs across firms and coordination costs within firms.
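A small sketch of the “thin crossing point” logic, using an invented six-task DSM (the tasks and dependencies are illustrative assumptions): the number of dependencies crossing a candidate boundary is far smaller when the cut follows the module boundary than when it runs through a module’s interior.

```python
import numpy as np

def crossing_thickness(dsm, group_a):
    """Count dependencies (in either direction) crossing the boundary between
    candidate organization A and everyone else; thin crossing points are
    cuts where this count is low."""
    n = dsm.shape[0]
    in_a = np.zeros(n, dtype=bool)
    in_a[list(group_a)] = True
    return int(dsm[np.ix_(in_a, ~in_a)].sum() + dsm[np.ix_(~in_a, in_a)].sum())

# Hypothetical 6-task DSM: tasks 0-2 and 3-5 are two dense blocks (modules)
# with a single dependency between them.
dsm = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
])

print(crossing_thickness(dsm, {0, 1, 2}))   # 1: a cut at the module boundary (thin)
print(crossing_thickness(dsm, {0, 1}))      # 4: a cut through a module interior (thick)
```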
However, the fact that mirroring is economical does not mean that it is always optimal. For
example, a transaction may be placed at a “thick” crossing point, if the parties invest in additional
coordination mechanisms and reciprocal trust (Gibbons and Henderson, 2012; Volvosky, 2022).
Because different degrees of mirroring might be optimal in different settings, mirrors are often
“broken” (Cabigiosu and Camuffo, 2012; Cabigiosu et al., 2013).
From an empirical perspective, mirroring is the predominant pattern in the economy, appear-
ing in about two-thirds of the 142 cases studied by Colfer and Baldwin (2016). However, partial
mirroring is also common (and often profitable), while overly strict mirroring can be counter-
productive. Furthermore, long-lasting contracts and interpersonal relationships can allow teams
from two or more autonomous firms to work interdependently within the same module. Con-
versely, a tight-knit team within a single firm can create a modular technical system if they see
merit in doing so.
A new technology that changes the modular boundaries of a technical system perforce changes
transaction costs at various points. It will then be feasible to locate transactions at the new thin
crossing points, thereby shifting organization boundaries between firms. This is exactly what
happened when IBM modularized System/360. When the company standardized the interfaces
between various components of the system, it created many new thin crossing points where
devices made by third parties could be attached to IBM equipment. IBM then faced a slew of
new entrants making PCPs (DR1, Chapter 14).

5.2 Alternatives to DSMs: functional components and value structure maps


Even though I needed DSMs to reason about the location of transactions and organizational
boundaries, their shortcomings continued to bother me. Practically speaking, they contained too
much detail and were not a tool one could use in all settings.

I then remembered that Karl Ulrich, in his seminal paper on product architecture, based his
definition of modularity on functions (Ulrich, 1995). Early in the process of writing DR1, Kim
and I had adamantly rejected functions as the basis of our theory. DSMs were objective, while
functions were subjective: different individuals might perceive different (or multiple) functions
for the same component or process. Functions were also slippery, often changing over time.
Little by little, I came to realize that I could use functional components to avoid the shortcomings of DSMs; without knowing the underlying DSM, humans can use their imaginations to break
things apart into components and mentally assemble components into larger systems (Edelman,
1992; Bucciarelli, 1994). In solving specific problems, they can associate physical properties—
hardness, weight, shape, color, edibility, mobility, etc.—with particular things (including objects,
plants, animals, and other people) and think about how those properties might or might not
serve their goals. Finally, humans can perceive the absence of—and need for—a quality as well as
its presence and use that perception to define a component with the specific attributes needed to
serve the purpose of the design. The component will be an input to the technical recipe, and the
desirable attributes it supplies are its function(s).
Functional components are creations of human imagination: they may or may not exist in the
material world. However, to become part of a real object or process, a functional component
must become real. Behind each functional component in a real system lies a method of obtaining
it. This method is a technology whose steps and dependencies can be represented via a DSM.
Thus, every functional component in a real system can be “tracked back” to a corresponding
DSM that specifies tasks and decisions and the linkages between them.
In this sense, functional components lie in a middle ground between a detailed technical recipe
and the end product of the recipe. The end product in turn must have value in expectation:
otherwise, why should we go through the effort of assembling the inputs and carrying out the
recipe?8
Functional components are thus a shortcut way of referring to existing or potential DSMs
without actually constructing them. They might indicate material inputs (ingredients) or processes
(techniques). They might refer to modules or to designated parts of an interdependent system.
Different designers may also choose to combine functional components in different ways.
To bring these ideas down to earth, consider the technology for making a cake. For a basic
cake, one needs (i) butter, (ii) sugar, (iii) eggs, and (iv) flour; plus the processes of (v) beating, (vi)
mixing, and (vii) baking; plus (viii) a recipe specifying quantities and steps (both sequence and
timing). These are the basic components of cake technology: each has a function that justifies its
inclusion in the cake-making recipe.9 Given access to the ingredients, processes, and knowledge,
one can make a cake. If the cake has value, then each of the components have value in the context
of cake-making technology. Functional components are thus “carriers of value.”
In addition to a simple list of components, I also needed a way to represent the system structure
and evolution without defining sequences of specific actions. (I did not plan to revive the modular
operators.) As I began constructing what I later called “value structure maps,” it struck me that
answering one question about a functional component told me most of what I needed to know:
was the component (1) essential or (2) optional?
Essential components cannot be omitted without compromising the entire system. Butter,
sugar, eggs, and flour are essential for a basic cake recipe.10 A value structure map for a cake
can be written as follows:

butter · sugar · eggs · flour · beating · mixing · baking · recipe

Here, the linking symbol “·” denotes a complementary combination of essential functional components via some technical recipe. If any of the components is missing, the recipe will fail

8 Value in expectation is the anticipated benefit that justifies committing resources (time, energy, and materials)
to carrying out the recipe. The ex post realized value can be different from value in expectation and may be negative.
Given a probability distribution over numerical outcomes, value in expectation can be reduced to a number—the mean
or “expected value.”
9 The recipe-maker and the cake-maker do not need to “know” the precise function of each component—for
instance, what do eggs really do? The makers only have to know that including eggs in the recipe makes a cake better.
10 More advanced recipes may offer substitutes for the essential components; for example, oil for butter, maple
syrup for sugar, and ground nuts for flour.

and one will not have a cake. (Leave out flour, one can have a pudding. Leave out butter, flour,
and egg yolks, one can have a meringue.)
Optional components increase the value of the system, but if they are not present, the tech-
nology will continue to work. In the technology of cake, icing is an optional component, as are
fruit and candles. We can use a “+” symbol to denote an optional combination. A value structure
map for a cake with optional icing, fruit, and candles can be written as follows:

cake · [1 + icing + fruit + candles]

The leading “1” in the brackets indicates that the cake has stand-alone value, even without any
fancy features. In contrast, within the cake technology, the optional features inside the brackets
have value only in combination with the cake.11 Each of the terms in this expression can be
expanded into a different set of underlying functional components with its own list of ingredients
and processes.
That was it: with two symbols, the linking symbol “·” placed between essential components (or groups) and “+” between optional components (or groups), I could “map” the structure of any technological
system. Importantly, the level of abstraction could be chosen to fit the problem: the map did not
have to include a distracting amount of detail. However, behind each component lie a technical
recipe for making it and a corresponding DSM. Thus, any component in a map could be “tracked
back” to show its components, the components of its components, and so on back to the natural
world.
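To show how lightweight such a map can be, here is a minimal sketch of a value structure map as a small data structure. The class name, the add-on values for icing, fruit, and candles, and the scoring rule are illustrative assumptions, not notation from DR2.

```python
from dataclasses import dataclass, field

@dataclass
class ValueStructureMap:
    """Essential components are all-or-nothing (the linking-symbol terms);
    optional components (the "+" terms) add value only when the core works."""
    essential: set
    optional: dict = field(default_factory=dict)
    base_value: float = 1.0        # the leading "1": stand-alone value of the core

    def value(self, available):
        if not self.essential <= set(available):   # any missing essential -> no cake
            return 0.0
        extras = sum(v for name, v in self.optional.items() if name in available)
        return self.base_value + extras

cake = ValueStructureMap(
    essential={"butter", "sugar", "eggs", "flour", "beating", "mixing", "baking", "recipe"},
    optional={"icing": 0.3, "fruit": 0.2, "candles": 0.1},   # made-up add-on values
)

core = {"butter", "sugar", "eggs", "flour", "beating", "mixing", "baking", "recipe"}
print(cake.value(core | {"icing"}))   # 1.3: core plus one optional feature
print(cake.value(core - {"flour"}))   # 0.0: a missing essential component fails the recipe
```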
I extended value structure maps a little further in order to identify strategically significant
components (bottlenecks) and to show the boundaries of modules. With these extensions, a series
of value structure maps could be used to describe the evolution of any technical system.
Value structure mapping of functional components turned out to be a lightweight and ver-
satile way of representing complex technical systems spread out over many organizations. The
maps were more flexible and easier to construct than DSMs, although DSMs were present in the
background. Throughout DR2, I use value structure maps to analyze technical systems and show
how technology has shaped organizations within these systems.

5.3 Real options without probabilities


As I worked on value structure mapping, I became increasingly aware of the limitations of
quantitative financial methods as a guide to making decisions about real technologies. The rec-
ommended method of valuing technology investments in finance is to forecast mean cash flows
in future time periods and discount them by an appropriate cost of capital. Option theory uses
essentially the same methodology but makes adjustments to the way future cash flows and the
discount rate are estimated.
At first blush, forecasting mean cash flows would seem to be an opportunity to use probability
theory. However, without data on the frequency of comparable past events, probability distribu-
tions are merely subjective guesstimates. For new technologies, the relevant data often do not
exist.
In 2016, I learned that I was not the first financial economist to have strong doubts about
financial models and probability measures. Mervyn King, Governor of the Bank of England dur-
ing the Financial Crisis of 2007–2008, described how, as the global financial markets dried up
during the crisis, the mathematical models used by central bankers stopped being reliable. The
models had been calibrated on previous data, which did not capture the interlocking pathways
of cause and effect that came into play during the crisis.
King described the situation as one of radical uncertainty. “Radical uncertainty refers to uncer-
tainty so profound that it is impossible to represent the future in terms of a knowable and
exhaustive list of outcomes to which we can attach probability distributions” (King, 2016: 9).12

11 The optional components may be used in other recipes, thus having value within those technologies.
12 King based his arguments on the earlier work of Knight (1921). Knight argued that under “uncertainty,” outcomes
are unknowable and unmeasurable. “Uncertainty” is different from “risk,” where outcomes have a measurable
probability distribution. Today, the precise definition of “Knightian uncertainty” is murky as scholars have tried to fit his
arguments into theories of decision-making based on subjective probability assessments (Langlois and Cosgel, 1993).
King (2016) and Kay and King (2020) reject attempts to reintroduce probability distributions, and thus, their concept
of “radical uncertainty” is more sharply defined than “Knightian uncertainty.”

I felt that the phrase “radical uncertainty” perfectly described the technologies that gave rise
to the organizational “surprises” that were the focus of DR2. Indeed, in 1997, Andy Grove,
CEO of Intel, foreshadowed King’s critique of financial models. Asked about the profitability
of Intel’s e-commerce investments, he replied, “What’s my ROI on e-commerce? Are you crazy?
This is Columbus and the New World. What was his ROI?” [Grove as quoted in “In search of
the perfect market?” The Economist (May 9, 1997)].

In other words, for the new technologies and organizations formed in the wake of the Internet,
ROIs based on numerical forecasts and probability distributions were a chimera—“an illusion
or fabrication of the mind.”13
13 Merriam-Webster, https://www.merriam-webster.com/dictionary/chimera.
How can one construct a rigorous theory based on real options without recourse to quanti-
tative estimates or probability distributions? The first step (for me) was to take the prohibition
seriously and then see what tools were left.
It helped to realize that humans have been living with technologies and uncertainty since pre-
historic times. Human technologies such as stone tools, agriculture, alcohol, and medicine (as well
as magic and sacrifices to supernatural beings) were acknowledged means of influencing uncertain
future events in favor of human beings. In contrast, probability theory
was invented in the 1600s, mathematical economics in the 1800s, and modern finance in the
1950s. Humans have dealt with both technology and uncertainty for far longer than the tools I
was abandoning had existed.
I could also be selective in what I discarded. I kept the concept of complementarity, which
had been made much more rigorous by Milgrom and Roberts (1990, 1994, 1995). I kept DSMs
but used value structure maps as a convenient shorthand. Finally, I recognized that different
technologies exhibit different degrees of radical uncertainty. For truly nascent technologies like
the first flying machines or the commercial Internet, it was impossible to envision what future
revenue streams or new organizations would arise. For other technologies, for example, high-
speed machine tools, container shipping, or IBM’s System/360, it was clear how value would be
created, but not how it would be allocated among different potential claimants. In other cases,
for example, failure rates of machines on an assembly line, there might be data, but responding
to it would change the underlying frequencies: in other words, the probability distributions were
endogenous and non-stationary.
Given a value structure map, it is sometimes possible to derive propositions that hold for all
probability distributions (with finite domains). The “power of modularity” in DR1 was proved
this way: the fact that a “portfolio of options” is worth more than an “option on a portfolio” is
a distribution-invariant mathematical proposition (Merton, 1973, Theorem 7).
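The argument can be sketched in one line without any distributional machinery (a compressed paraphrase, not the formal statement in Merton (1973) or DR1). For any realized payoffs \(X_1\) and \(X_2\) of two modules,

\[
\max(X_1, 0) + \max(X_2, 0) \;\ge\; \max(X_1 + X_2,\, 0),
\]

because the left-hand side is never negative and is always at least \(X_1 + X_2\). Taking expectations preserves the inequality under any joint distribution with finite domain: a separate option on each module (the portfolio of options) is worth at least as much as a single option on the combined system (the option on the portfolio), whatever the probabilities happen to be.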
I am now convinced that there is no practical way to reliably estimate probability distributions
for many technologies. Sigma remains elusive. Thus, in DR2, I formulate and prove proposi-
tions about technologies and organizations that hold for all (finite) probability distributions.
This choice restricted opportunities to use formal models and statistics and reduced the scope of
what I could demonstrate. But in my view, it was true to reality in a world of radical uncertainty.

6. Conclusion
Design Rules, Volume 1: The Power of Modularity sought to arrive at a deeper understanding of
technology and technical change by combining theories of technology structure (then in nascent
form) with the theory of real options taken from financial economics. Kim Clark had partici-
pated in some of the earliest work on technology structure, basing his arguments on Christopher
Alexander’s Notes on the Synthesis of Form and Herbert Simon’s “The Architecture of Complex-
ity” (Abernathy and Clark, 1985; Clark, 1985; Henderson and Clark, 1990). I introduced the
perspective of financial economics and real options theory. In the 1990s, the gap between these
two fields was very large. Figuratively, we threw a rope bridge across a deep chasm but found
that no one (on either side) wanted to cross. To scholars in finance and scholars in the new field
of technology and innovation management, there were simply no perceived gains from trade—
especially if “trade” meant learning a new, hybrid language based on knowledge gleaned from
both sides.
There were other bridge-builders, including Richard Langlois and Paul Robertson, Raghu
Garud and Arun Kumaraswamy, Ron Sanchez and Joe Mahoney, and Melissa Schilling. But we
were widely scattered and mostly unaware of each other’s work. Kim was also preoccupied, first
in building the new Technology and Operations Management department and then in leading the
Harvard Business School as its dean.
DR1 became a book because we needed to build a theoretical foundation and assemble suffi-
cient empirical evidence to convince even a handful of people to cross the bridge. We had hopes,
but few expectations. In retrospect, Kim and I were both amazed at the number of scholars and
practitioners who found the book relevant to the problems they were trying to solve. Sometimes,
a long shot can pay off.

Acknowledgments
I would like to thank the guest editors of this special issue—Michael Jacobides, Stefano Brusoni,
Joachim Henkel, Samina Karim, Alan MacCormack, Phanish Puranam, and Melissa Schilling—
as well as the editors of Industrial and Corporate Change for giving me this opportunity to
reflect on how the first volume of Design Rules came to be written and how it influenced Vol-
ume 2. I would also like to thank the contributors to the special issue—Ron Sanchez, Peter
Galvin, and Norbert Bach; Christina Fang and Ji-hyun Kim; Marc Alochet, John Paul MacDuffie,
and Christophe Midler; Nicholas Argyres, Jackson Nickerson, and Hakan Ozalp; Peter Mur-
mann and Benedikt Schuler; Jose Arrieta, Roberto Fontana, and Stefano Brusoni; Robin Cowan
and Nicolas Jonard; Stephan Billinger, Stefano Benincasa, Oliver Baumann, Tobias Kretschmer,
and Terry Schumacher; Sabine Brunswicker and Satyam Mukherjee; and Richard Langlois. My
thanks to you for “crossing the bridge” and taking Volume 1 as a point of departure in your own
journeys! Special thanks to Alan MacCormack and Samina Karim for comments on a previous
draft, which led to substantial improvement of this article. Errors and omissions are my own.

References
Abernathy, W. J. and K. B. Clark (1985), ‘Innovation: mapping the winds of creative destruction,’ Research
Policy, 14(1), 3–22.
Adner, R. and R. Kapoor (2010), ‘Value creation in innovation ecosystems: how the structure of technological
interdependence affects firm performance in new technology generations,’ Strategic Management Journal,
31(3), 306–333.
Alexander, C. (1964), Notes on the Synthesis of Form. Harvard University Press: Cambridge, MA.
Ashby, W. R. (1952), Design for a Brain. John Wiley & Sons: Medford, MA.
Baldwin, C. Y. (2008), ‘Where do transactions come from? Modularity, transactions and the boundaries of firms,’
Industrial and Corporate Change, 17(1), 155–195.
Baldwin, C. Y. and K. B. Clark (1997), ‘Sun wars: competition within a modular cluster,’ in D. B. Yoffie (ed.),
Competing in the Age of Digital Convergence. Harvard Business School Press: Boston, MA, pp. 133–157.
Baldwin, C. Y. and K. B. Clark (2000), Design Rules, Volume 1, the Power of Modularity. MIT Press:
Cambridge, MA.
Baldwin, C. Y., A. D. MacCormack and J. Rusnak (2014), ‘Hidden structure: using network methods to map
product architecture,’ Research Policy, 43(8), 1381–1397.
Baldwin, C. Y. and J. Soll (1990), ‘Sun Microsystems—1987 (A), (B), (C),’ Harvard Business School Publishing:
Boston, MA.
Bell, C. G. and A. Newell (1971), Computer Structures: Readings and Examples. McGraw-Hill: New York, NY.
Bluestone, B. and B. Harrison (1982), The Deindustrialization of America. Basic Books: New York, NY.
Bucciarelli, L. L. (1994), Designing Engineers. MIT Press: Cambridge MA.
Burks, A. W., H. H. Goldstine and J. von Neumann (1946), ‘Preliminary discussion of the logical design of an
electronic computing instrument,’ reprinted in C. G. Bell and A. Newell (eds) (1971), Computer Structures:
Readings and Examples. McGraw-Hill: New York, NY, pp. 92–119.
Cabigiosu, A. and A. Camuffo (2012), ‘Beyond the “mirroring” hypothesis: product modularity and interorga-
nizational relations in the air conditioning industry,’ Organization Science, 23(3), 686–703.
Cabigiosu, A., F. Zirpoli and A. Camuffo (2013), ‘Modularity, interfaces definition and the integration of external
sources of innovation in the automotive industry,’ Research Policy, 42(3), 662–675.
Chandler, A. D. (1977), The Visible Hand: The Managerial Revolution in American Business. Harvard University
Press: Cambridge, MA.
Chandler, A. D. (1990), Scale and Scope: The Dynamics of Industrial Capitalism. Harvard University Press:
Cambridge, MA.
Christensen, C. M. and M. E. Raynor (2003), The Innovator’s Solution: Creating and Sustaining Successful
Growth. Harvard Business School Press: Boston.
Clark, K. B. (1985), ‘The interaction of design hierarchies and market concepts in technological evolution,’
Research Policy, 14(5), 235–251.
Colfer, L. J. and C. Y. Baldwin (2016), ‘The mirroring hypothesis: theory, evidence, and exceptions,’ Industrial
and Corporate Change, 25(5), 709–738.
Drucker, P. F. (1946; 1993), Concept of the Corporation. Transaction Publishers: London.
Edelman, G. M. (1992), Bright Air, Brilliant Fire: On the Matter of the Mind. Basic Books: New York, NY.
Eppinger, S. D. (1991), ‘Model-based approaches to managing concurrent engineering,’ Journal of Engineering
Design, 2(4), 283–290.
Eppinger, S. D., D. E. Whitney, R. P. Smith and D. A. Gebala (1994), ‘A model-based method for organizing tasks
in product development,’ Research in Engineering Design, 6(1), 1–13.
Galbraith, J. K. (1967), The New Industrial State. Houghton Mifflin: Boston.
Garud, R. and A. Kumaraswamy (1993), ‘Changing competitive dynamics in network industries: an exploration
of Sun Microsystems’ open systems strategy,’ Strategic Management Journal, 14(5), 351–369.
Garud, R. and A. Kumaraswamy (1995), ‘Technological and organizational designs to achieve economies of
substitution,’ Strategic Management Journal, 16(S1), 93–109. Reprinted in Managing in the Modular Age: Archi-
tectures, Networks, and Organizations. R. Garud, A. Kumaraswamy, and R.N. Langlois (eds), Blackwell,
Oxford/Malden, MA.
Gawer, A. and M. A. Cusumano (2002), Platform Leadership: How Intel, Microsoft and Cisco Drive Industry
Innovation. Harvard Business School Press: Boston, MA.
Gibbons, R. and R. Henderson (2012), ‘Relational contracts and organizational capabilities,’ Organization
Science, 23(5), 1350–1364.
Goldratt, E. M. and J. Cox (2016), The Goal: A Process of Ongoing Improvement. Routledge: Milton Park, UK.
Grove, A. S. (1996), Only the Paranoid Survive. Doubleday: New York.
Gulati, R., P. Puranam and M. Tushman (2012), ‘Meta-organization design: rethinking design in interorganiza-
tional and community contexts,’ Strategic Management Journal, 33(6), 571–586.
Henderson, R. M. and K. B. Clark (1990), ‘Architectural innovation: the reconfiguration of existing product
technologies and the failure of established firms,’ Administrative Science Quarterly, 35(1), 9–30.
Holgersson, M., C. Y. Baldwin, H. Chesbrough and M. L. A. M. Bogers (2022), ‘The Forces of Ecosystem
Evolution,’ California Management Review, 64(3), 5–23.
Holland, J. H. (1992), Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications
to Biology, Control and Artificial Intelligence, 2nd edn. MIT Press: Cambridge, MA.
Holland, J. H. (1996), Hidden Order: How Adaptation Builds Complexity. Addison-Wesley Publishing
Company: Reading, MA.
Hughes, T. P. (1987), ‘The evolution of large technological systems,’ in W. E. Bijker, T. P. Hughes and T. Pinch
(eds), The Social Construction of Technological Systems: New Directions in the Sociology and History of
Technology. MIT Press: Cambridge, MA, pp. 51–82.
Hughes, T. P. (1993), Networks of Power: Electrification in Western Society, 1880–1930. Johns Hopkins
University Press: Baltimore, MD.
Jensen, M. C. (1986), ‘Agency costs of free cash flow, corporate finance, and takeovers,’ The American Economic
Review, 76(2), 323–329.
Jensen, M. C. (1993), ‘The modern industrial revolution, exit, and the failure of internal control systems,’
The Journal of Finance, 48(3), 831–880.
Kay, J. and M. King (2020), Radical Uncertainty: Decision-Making beyond the Numbers. WW Norton &
Company: New York, NY.
Kim, G., K. Behr and K. Spafford (2014), The Phoenix Project: A Novel about IT, DevOps, and Helping Your
Business Win. IT Revolution.
King, M. (2016), The End of Alchemy: Money, Banking, and the Future of the Global Economy. WW Norton
& Company: New York, NY.
Knight, F. H. (1921), Risk, Uncertainty and Profit. Houghton Mifflin: Boston, MA.
Langlois, R. N. and M. M. Cosgel (1993), ‘Frank Knight on risk, uncertainty, and the firm: a new interpretation,’
Economic Inquiry, 31(3), 456–465.
Langlois, R. N. and P. L. Robertson (1992), ‘Networks and innovation in a modular system: lessons from the
microcomputer and stereo component industries,’ Research Policy, 21(4), 297–313. Reprinted in Managing
in the Modular Age: Architectures, Networks, and Organizations. R. Garud, A. Kumaraswamy, and R. N.
Langlois (eds), Blackwell, Oxford/Malden, MA.
Leonardi, P. M. and S. R. Barley (2008), ‘Materiality and change: challenges to building better theory about
technology and organizing,’ Information and Organization, 18(3), 159–176.
Leonardi, P. M. and S. R. Barley (2010), ‘What’s under construction here? Social action, materiality, and power
in constructivist studies of technology and organizing,’ Academy of Management Annals, 4(1), 1–51.
MacCormack, A., C. Baldwin and J. Rusnak (2012), ‘Exploring the duality between product and organizational
architectures: a test of the “mirroring” hypothesis,’ Research Policy, 41(8), 1309–1324.
MacCormack, A., J. Rusnak and C. Baldwin (2006), ‘Exploring the structure of complex software designs: an
empirical study of open source and proprietary code,’ Management Science, 52(7), 1015–1030.
MacKenzie, D. (2012), ‘Missile accuracy: a case study in the social processes of technological change,’ in
W. E. Bijker, T. P. Hughes and T. J. Pinch (eds), The Social Construction of Technological Systems: New
Directions in the Sociology and History of Technology. MIT Press: Cambridge, MA, pp. 189–216.
MacKenzie, D. and J. Wajcman (1999), The Social Shaping of Technology. Open University Press: Milton
Keynes, UK.
Marples, D. (1961), ‘The decisions of engineering design,’ IEEE Transactions on Engineering Management, 2,
55–71.
McCord, K. R. and S. D. Eppinger (1993). ‘Managing the iteration problem in concurrent engineering,’ MIT
Working Paper 3594-93-MSA, Massachusetts Institute of Technology (August).
Mead, C. and L. Conway (1980), Introduction to VLSI Systems. Addison-Wesley: Reading, MA.
Merton, R. C. (1973), ‘Theory of rational option pricing,’ Bell Journal of Economics and Management Science,
4(Spring), 141–183. Reprinted in Continuous Time Finance, Basil Blackwell, Oxford, UK, 1990.
Milgrom, P. and J. Roberts (1990), ‘The economics of modern manufacturing: technology, strategy and
organization,’ American Economic Review, 80(3), 511–528.
Milgrom, P. and J. Roberts (1994), ‘Complementarities and systems: understanding Japanese economic
organization,’ Estudios Economicos, 9(1), 3–42.
Milgrom, P. and J. Roberts (1995), ‘Complementarities and fit: strategy, structure, and organizational change in
manufacturing,’ Journal of Accounting and Economics, 19(2), 179–208.
Noble, D. F. (1979), America by Design: Science, Technology, and the Rise of Corporate Capitalism. Oxford
University Press: USA.
Noble, D. F. (1984), Forces of Production: A Social History of Industrial Automation. Oxford University Press:
Oxford.
Orlikowski, W. J. (1992), ‘The duality of technology: rethinking the concept of technology in organizations,’
Organization Science, 3(3), 398–427.
Parnas, D. L. (1972a), ‘A technique for software module specification with examples,’ Communications of the
ACM, 15(5), 330–336.
Parnas, D. L. (1972b), ‘On the criteria to be used in decomposing systems into modules,’ Communications of the
ACM, 15(12), 1053–1058. Reprinted in Hoffman and Weiss (eds), Software Fundamentals: Collected Papers
of David Parnas. Addison-Wesley: Boston, MA.
Parnas, D. L. (2001), Software Fundamentals: Collected Papers by David L. Parnas, D. M. Hoffman and D.
M. Weiss (eds), Addison-Wesley: Boston, MA.
Parnas, D. L., P. C. Clements and D. M. Weiss (1985), ‘The modular structure of complex systems,’ IEEE
Transactions on Software Engineering, SE-11(3), 259–266.
Pinch, T. J. and W. E. Bijker (1984), ‘The social construction of facts and artefacts: or how the sociology of
science and the sociology of technology might benefit each other,’ Social Studies of Science, 14(3), 399–441.
Piore, M. J. and C. F. Sabel (1984), The Second Industrial Divide: Possibilities for Prosperity. Basic Books:
New York, NY.
Porter, M. E. (1996), ‘What is strategy?’ Harvard Business Review, 74(6), 61–78.
Puranam, P., O. Alexy and M. Reitzig (2014), ‘What’s “new” about new forms of organizing?’ Academy of
Management Review, 39(2), 162–180.
Sanchez, R. A. and J. T. Mahoney (1996), ‘Modularity, flexibility and knowledge management in product and
organizational design,’ Strategic Management Journal, 17(S2), 63–76. Reprinted in Managing in the Modular
Age: Architectures, Networks, and Organizations. R. Garud, A. Kumaraswamy, and R.N. Langlois (eds),
Blackwell, Oxford/Malden, MA.
Schilling, M. A. (2000), ‘Toward a general systems theory and its application to interfirm product modu-
larity,’ Academy of Management Review, 25(2), 312–334. Reprinted in Managing in the Modular Age:
Architectures, Networks, and Organizations. R. Garud, A. Kumaraswamy, and R.N. Langlois (eds),
Blackwell, Oxford/Malden, MA.
Schumpeter, J. A. (1934), The Theory of Economic Development. Harvard University Press: Cambridge, MA.
Schumpeter, J. A. (1942), Capitalism, Socialism, and Democracy. Harper & Brothers: New York.
Servan-Schreiber, J. J. (1968), The American Challenge. Atheneum: New York.
Simon, H. A. (1962), ‘The architecture of complexity,’ Proceedings of the American Philosophical Society, 106,
467–482. Reprinted in Simon (1981) The Sciences of the Artificial, 2nd edn. MIT Press, Cambridge, MA,
193–229.
Simon, H. A. (1981), The Sciences of the Artificial, 2nd edn. MIT Press: Cambridge, MA.
Steward, D. V. (1981), ‘The design structure system: a method for managing the design of complex systems,’
IEEE Transactions on Engineering Management, EM-28(3), 71–74.
Ulrich, K. (1995), ‘The role of product architecture in the manufacturing firm,’ Research Policy, 24(3), 419–440.
Reprinted in Managing in the Modular Age: Architectures, Networks, and Organizations. R. Garud, A.
Kumaraswamy, and R.N. Langlois (eds), Blackwell, Oxford/Malden, MA.
Volvosky, H. (2022), ‘Collaborating at the Tower of Babel: the meaning of cooperation and the foundations of
long-term exchange,’ manuscript (May, 2022).
von Hippel, E. and G. von Krogh (2003), ‘Open source software and the ‘private collective’ innovation model:
issues for organization science,’ Organization Science, 14(2), 209–223.
von Krogh, G. and E. von Hippel (2006), ‘The promise of research on open source software,’ Management
Science, 52(7), 975–983.
Williamson, O. E. (1985), The Economic Institutions of Capitalism. Free Press: New York, NY.
Woodard, C. J., N. Ramasubbu, F. T. Tschang and V. Sambamurthy (2013), ‘Design capital and design moves:
the logic of digital business strategy,’ MIS Quarterly, 37(2), 537–564.
