
Science in Context 12, 2 (1999), pp. 247-260

SERGIO SISMONDO

Models, Simulations, and Their Objects

1. Mathematical Models and Computer Simulations

Mathematical models and their cousins, computer simulations, occupy an uneasy
space between theory and experiment, between abstract and concrete, and often
between the pressures of pure science and the needs of pragmatic action. The
contributions to this volume explore those uneasy spaces, and the work that it
takes to maintain positions in those spaces.
Models and simulations do not, of course, form a homogeneous category. The
ones considered here form a continuum, from spare symbolic entities to somewhat
more complex sets of equations that are computerized largely for ease of calcula-
tion and manipulation, to computer programs so large and intricate that no one
person understands how they function. The differences between the endpoints of
this continuum are large enough that complex computer simulations can be said to
use models, of many different types, or to have some particular models at their
heart. Simple models and complex simulations, then, are in at least this way
different types of objects, while they are related as endpoints on a continuum.
Nonetheless, in being seen as occupying a position between theories and data,
simulations and models perform some similar functions, and pose some similar
problems. Although there are two sections in this volume, the first mostly address-
ing simulations and the second looking at economic models, some of the lessons of
the papers cut across this divide of subject matter, applying to models and
simulations, and to economics, physics, and physiology.
Whereas theories, like local claims, can be true or false, models and simulations
are typically seen in more pragmatic terms, being more or less useful, rather than
more or less true. Scientific models and simulations are given the status of tools, as
well as representations; they are objects, as well as ideas. They easily cross
categories, such as "theory" and "experiment," the bounds of which are otherwise
well-established. And modeling and simulation sit uncomfortably in science both
socially and epistemically, because of the boundaries they cross.
Models have become ubiquitous in public policy and corporate strategy, as well
as applied and pure science. The demands of objectivity in public life have meant
that decisions across a wide range of subject matters have to be accompanied by
the appropriate scientific validation. Despite the fact that they do not have

Downloaded from https://www.cambridge.org/core. IP address: 207.241.231.81, on 29 Jul 2018 at 10:32:06, subject to the Cambridge Core terms of
use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/S0269889700003409
well-established epistemic positions within pure science, models and simulations
form bridges between theoretical knowledge and its application to phenomena,
bridges on which validation can be made to run.
Nelson Goodman points out that the word "model" is promiscuous in its
meanings (Goodman 1968, 171; see Winsberg, this volume). Models are typically
copies that manage to be both concrete examples and exemplary, in differing
proportions. Scale models might be the simplest of models, being material copies
of real or imagined (as in a model of an unbuilt architectural project) objects; they
become epistemic instruments, and thus exemplary, if used or manipulated. The
model student is one who stands between an ideal of studenthood and the bulk of
students, an example of the one to the others. Even the fashion model gives
shape to clothing, but that shape is commonly thought of as ideal. An artist's
model provides a concrete example — a copy of ideal poses, shapes, or scenes? —
for the artist to represent.
Scientific uses of the term are no less promiscuous, and often sit in the same
space of being both representations and things to be represented. The basic and
original scientific models are material and conceptual analogues. These are man-
ageable systems, or systems thought to be comprehensible, that stand in for unruly
or opaque ones. For the turn-of-the-century physicist and meteorologist C. T. R.
Wilson, his cloud chamber was a model system for studying cloud formation
around ionized particles (Galison 1997). Tinker-toy models are material objects
that can stand in for molecules, allowing chemists to manipulate and visualize
things they cannot see; Eric Francoeur (1997) describes the genesis of these
models, and the compromises that were struck between their requirements as
flexible material objects and their requirements as accurate and persuasive depic-
tions. Similarly, cell cultures are model systems for biochemistry, and rats and
mice are model organisms for experiments on behavior and physiology, allowing
for the production of knowledge about structures of behavior and physiology of a
much broader class of organisms (Rheinberger 1997, Sismondo 1997). And con-
ceptual models, like the billiard ball model of gases, posit analogies to make
theories more comprehensible, to allow theories to be extended, and to give them
explanatory force (Hesse 1966).
Mathematical models, the models of this volume, are similarly manageable
systems standing in for the unruly or opaque, though also for the incomplete: they
are typically seen as applications, approximations, or specifications of theories
and principles that cannot by themselves be applied. One simple model for
fisheries management, for example, is summed up by this central equation:
Bt+1 = Bt + rBt(1 - Bt/K) - qEtBt
where Bt is the biomass of fish in year t, r is the growth rate, K is the equilibrium
size of the population without fishing, q is a coefficient representing the ease of
catching fish, and Et is the fishing effort during year t (Hilborn and Mangel 1997,
The model makes use of a basic understanding or theory of the dynamics
of populations, but also of estimates of the fecundity of its organisms, the carrying
capacity of the environment, and so on — it could easily be expanded to give a
more complex representation of any of these factors, and to add others — to create
an expected trajectory of population size. Models, then, have theories as inputs,
and in so doing connect theories to data; they generally make more precise
predictions than do theories, or make predictions where the theories can make
none.
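The dynamics this equation encodes are easy to explore directly. The sketch below simply iterates the surplus-production equation above; the function name and all parameter values are illustrative assumptions for this note, not figures from Hilborn and Mangel:

```python
# Sketch of the surplus-production fisheries model described above:
#   B(t+1) = B(t) + r*B(t)*(1 - B(t)/K) - q*E(t)*B(t)
# All parameter values are illustrative assumptions, not from the text.

def project_biomass(b0, r, k, q, efforts):
    """Project fish biomass forward one year per entry in `efforts`."""
    biomass = [b0]
    for e_t in efforts:
        b_t = biomass[-1]
        b_next = b_t + r * b_t * (1 - b_t / k) - q * e_t * b_t
        biomass.append(max(b_next, 0.0))  # biomass cannot go negative
    return biomass

# With no fishing effort, the population climbs toward the equilibrium size K.
trajectory = project_biomass(b0=100.0, r=0.5, k=1000.0, q=0.01, efforts=[0] * 50)
print(round(trajectory[-1], 1))  # → 1000.0
```

Running it with a constant nonzero effort instead shows the population settling at a lower equilibrium, which is exactly the kind of expected trajectory a manager would compare with catch data.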
The material uses of the word "model" suggest that even mathematical models
should be analogues of physical systems, and as analogues they are tools for
understanding, describing, and exploring those systems. They should behave in
the same way as the things they represent behave. Models are therefore different
from theories not only in being applied, but in being analogues. Theories are now
typically thought not to represent particular physical systems or even classes of
physical systems, but underlying structures or necessities. According to the "se-
mantic" conception, the current favorite philosophical account of theories, theories
are defined by or specify families of models; theories provide structural ingredients
of models. A distinction between models and theories is not always made in
practice, however; and even when a distinction is drawn, some things are called
models by people who want to call attention to their inadequacies, and theories by
people who want to call attention to the insight that they provide.
Mary Hesse chooses as one origin point for debates about scientific models the
comments and arguments of Pierre Duhem in his 1914 book La Théorie Physique.
There Duhem ties models to national characters:
This whole theory of electrostatics constitutes a group of abstract ideas and
general propositions, formulated in the clear and precise language of geome-
try and algebra, and connected with one another by the rules of strict logic.
This whole fully satisfies the reason for a French physicist and his taste for
clarity, simplicity and order. ...
Here is a book [by Oliver Lodge] intended to expound the modern
theories of electricity and to expound a new theory. In it are nothing but
strings which move around pulleys, which roll around drums, which go
through pearl beads ... toothed wheels which are geared to one another and
engage hooks. We thought we were entering the tranquil and neatly ordered
abode of reason, but we find ourselves in a factory. (Quoted in Hesse 1966, 2)
While many English scientists would not have seen the factory metaphor as an
insult — William Ashworth (1996) shows the impact of descriptions of the brain as
a factory for thinking on Charles Babbage's ideas for his Difference Engine —
Duhem pushes what he sees as an insult further, making the well-known contrast
between the "strong and narrow" minds of Continental physicists, and the "broad
and weak" ones of the English. Duhem is keen not only to support French over
English science, but to support a picture of physical theory, soon to become
roughly the Logical Positivists' picture, as the creation of abstract and elegant
structures that predict and save empirical data. Hesse, drawing on the response to
Duhem by the English physicist N. R. Campbell, sees models as heuristically
essential to the development of and extension of theories, and also essential to the
explanatory power of theories. Being able to derive statements of data from
theories is not to explain that data, though being able to derive statements of data
from theories via intuitive models is. Hence successful theories need models that
concretize, that provide metaphors in which to think. Hesse's 1960s work on
models and metaphors can now be seen as one of the arguments for the semantic
conception of theories, as opposed to the "received" view, no longer so received.
The theoretical models that Hesse describes are first and foremost analogies,
ways of bringing a theory to bear on the world by seeing one domain in terms of
another. The wave theory of light creates an analogy between light and waves in
media like water or air. But scientific models are much more various than that,
many of them depending upon no analogy between domains, unless one of those
domains is the mathematical formalism of the model itself, a possibility that
strains Hesse's picture beyond recognition. For example, the fisheries model
above doesn't posit any analogy (though there are undoubtedly many metaphors
hidden in the background), but itself stands in for different factors affecting
populations. The model does not essentially make use of some other analogy, but
is itself something of an analogue. By being an analogue it is expected to have
outputs that can be compared with data.
Computer simulations are similarly analogues, virtual copies of systems. Because
of the problems that computers are seen to solve efficiently, simulations are
usually more obvious analogues than are models. In most social simulations, for
example, the researcher defines "agents" with fixed repertoires of behaviors.
Those agents are then set to interact, responding at each moment to the actions of
others. The goal is to see what, if any, social patterns emerge, or what assumptions
are needed to create particular social patterns. Social simulations are analogues,
then, because components of the program are analogous to individuals, and those
components interact in a timeframe that is analogous to a sequence of moments in
real time. Simple one-to-one correspondences between virtual objects and real
ones (however idealized and simplified they might be) can be easily drawn. For
most non-computerized mathematical models such correspondences are more
difficult to draw, because preferred mathematical styles occlude individuals in
favor of aggregates or larger-scale relations. The components of an equation can
only rarely be neatly paired with obvious objects in the world, instead following a
logic defined by relations among objects.
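A toy version of the agent-based pattern described above can make the analogy concrete. Here "agents" are entries in a list with a fixed repertoire (adopt the behavior of a randomly encountered agent), interacting over a sequence of discrete moments; every name and number in this sketch is an illustrative assumption:

```python
import random

def run_social_simulation(n_agents=50, steps=500, seed=3):
    """Agents with a fixed behavioral repertoire interact in discrete time;
    each virtual agent corresponds one-to-one with a (highly idealized)
    individual, and each loop iteration with a moment of real time."""
    rng = random.Random(seed)
    behaviors = [rng.choice(["cooperate", "defect"]) for _ in range(n_agents)]
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        behaviors[i] = behaviors[j]  # agent i responds to agent j's action
    return behaviors

# The question of interest is what pattern, if any, emerges at the social
# level — for instance whether the population drifts toward uniformity.
final = run_social_simulation()
print(len(final))  # → 50
```

Because each program component maps onto an (idealized) individual, the correspondence between the virtual and the real is easy to draw, in just the way the text describes.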

2. The Truth of Models and Theories

Because of their applicability we might want to say that a good model or simulation
is more true than theories to which it is related. It makes predictions that are more
precise or more correct than associated theories, if those theories make any
predictions at all; for this reason Nancy Cartwright sees good models in physics as
correcting theories and laws. Cartwright's work turns on recognizing the impor-
tance of ceteris paribus clauses. Most explanations involving fundamental physical
laws use implicit ceteris paribus clauses, either to account for the fact that the
fundamental laws assume an idealized world, or to account for complex interac-
tions among causes. To take an example from her How the Laws of Physics Lie
(1983) even so simple a law as Snell's law for refraction needs to be corrected.
Snell's Law: At the interface between dielectric media, there is (also) a
refracted ray in the second medium, lying in the plane of incidence, making
an angle θt with the normal, and obeying Snell's law:
sin θ / sin θt = n2/n1
where v1 and v2 are the velocities of propagation in the two media, and n1 =
(c/v1), n2 = (c/v2) are the indices of refraction. (Miles V. Klein, in Cartwright
1983, 46)
But, as Cartwright points out, this law is for media which are isotropic, having the
same optical properties in all directions. Since most media are optically anisotropic,
Snell's law is generally false, and needs to be corrected if it is to be applied in
particular situations.
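The idealized law — before any of the corrections Cartwright emphasizes — is simple enough to compute with. The sketch below applies Snell's law under the isotropic-media idealization; the function name and the sample refractive indices are illustrative assumptions:

```python
import math

def refraction_angle(theta1_deg, n1, n2):
    """Angle of the refracted ray from Snell's law, n1 sin θ1 = n2 sin θ2,
    valid only under the isotropic-media idealization discussed above."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if s > 1.0:
        return None  # total internal reflection: no refracted ray exists
    return math.degrees(math.asin(s))

# Air (n ≈ 1.00) into water (n ≈ 1.33): the ray bends toward the normal.
print(round(refraction_angle(30.0, 1.00, 1.33), 1))  # → 22.1
```

For anisotropic media this calculation is, on Cartwright's account, simply false as stated; applying it to a particular situation means correcting it, not just plugging in parameters.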
While fundamental laws and theories lie about real situations, models that use
them or apply them can often get much closer to the truth. To see an easy case of
that, we might turn to a model of climate change. The basic theory of the
"greenhouse effect" is simple and firmly established. At equilibrium, the energy
absorbed by the earth's surface from the sun would balance the energy radiated
from the earth into space. However, the earth's surface radiates energy at longer
wavelengths than it absorbs, which means that accumulations in the atmosphere
of certain gases, like carbon dioxide and methane, will change the equilibrium
temperature by absorbing more energy at longer wavelengths than shorter ones.
General Circulation Models, which are computer simulations, address questions
about the effects of greenhouse gases by assessing the interaction of many factors,
including: water vapor feedback with temperature, the effects of clouds at different
heights, the feedback with temperature of ice and snow's reflection of light, the
changing properties of soil and vegetation with temperature, and possibly the
effects of ocean circulation (IPCC 1995). Without taking into account these
interacting factors, and more, the basic theory says almost nothing useful, or even
true, about the real material effects of greenhouse gases.
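The "basic theory" of the greenhouse effect sketched here can be written down in a few lines, which also shows why it says so little on its own. The following zero-dimensional energy-balance calculation uses standard textbook values (solar constant, planetary albedo) that are assumptions of this illustration, not figures from the IPCC report:

```python
# Zero-dimensional energy balance: at equilibrium, absorbed solar energy
# equals outgoing longwave radiation, emissivity * sigma * T^4.
# All numeric values are standard textbook assumptions.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.30      # planetary albedo

def equilibrium_temp(emissivity):
    absorbed = S0 * (1 - ALBEDO) / 4  # incoming flux averaged over the sphere
    return (absorbed / (emissivity * SIGMA)) ** 0.25

# With effective emissivity 1 (no greenhouse gases) the equilibrium
# temperature is about 255 K, well below the observed surface average;
# lowering the effective emissivity (more absorbing gases) raises it.
print(round(equilibrium_temp(1.0)))  # → 255
```

Everything that makes this number bear on real climates — water vapor feedback, clouds, ice albedo, soil, oceans — is exactly what the General Circulation Models add, which is the point of the paragraph above.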
Scientific theories are typically about idealized or abstract realms, which are
uncovered in the process of developing and establishing the theories. Because
theoretical reasoning is highly mobile, theories can achieve universality of recogni-
tion and respect that more particularistic scientific knowledge is less likely to
attain. In addition, theoretical reasoning often does important work in explaining
particulars; explanation usually is the placing of particulars in a persuasive theoretical
context, often one which indicates key causes. Therefore the realms that
theories describe can be taken as fundamental and real, despite their lack of
immediacy.
Some vaguely parallel points might be made about experiments. The category
of "experiment" has been the subject of sustained inquiry in Science and Technol-
ogy Studies for the past twenty years, and in that time it has been shown to be
considerably more interesting, both epistemologically and socially, than it appears
at face value (e.g. Hacking 1983; Latour 1983; Knorr Cetina 1981; Shapin 1984;
Rheinberger 1997). With the exception of natural experiments, in which scientists
achieve material control by drawing close but often problematic analogies between
different naturally occurring situations, experiments are about induced relations
between objects that are themselves often pre-constructed and purified. As such,
experiments' epistemic value depends upon the careful social discrimination be-
tween the natural and the artificial or artifactual. Science's line of discrimination
between natural and artificial became roughly fixed toward the end of the seven-
teenth century, and its exact location is renegotiated in different sciences on an
ongoing basis.
For solid theoretical and experimental claims, then, the object of representation
is given by nature, but it has become transparent with the acceptance of theoretical
and experimental styles of reasoning. In both cases the object domain is only made
manifest by human agency: theoretical claims are about idealized, transcendent,
or at least submerged structures, the details of which become clear with theoretical
work; experimental claims are about material structures which are almost always
not found in nature, but are rather constructed in the laboratory. And each of
theory and experimentation are given more epistemological value because of
correspondences that are made between individual theories and experiments:
strategies of correspondence are well-developed.
Some of those strategies involve models and simulations, which form bridges
between theory and data. Gaston Bachelard argued that science alternates between
rationalism and realism. In its rationalist mode it creates complex mathematical
theories. In its realist mode it produces objects to match those theories, experimen-
tally. This picture is a good starting point for at least some sciences, but should be
supplemented by some understanding of the gap between theories and even
experimentally produced reality. Models help fill that gap, and thereby legitimize
the continuation of work on theory.
Mathematical models and computer simulations apply more concretely than
the theories they instantiate, but because they don't have the epistemic traditions
behind them that theorizing and experimentation have, they don't have transparent
object domains. Models and simulations are typically considered neither true nor
false, but rather more or less useful. Whereas the agency embedded in theoretical
and experimental knowledge per se has become largely hidden, models and
simulations are complex enough that they cannot be seen as natural: it is easy to
see the assumptions made in modeling. This is despite the fact that, following
Cartwright's down-to-earth intuitions, they are often more true to the material
world, and thus more natural, than at least the theories to which they are related.
Those theories, after all, make more assumptions about the material world, not
fewer. But the agency behind models and simulations is too visible to allow them to
easily represent transparent domains. They don't have a "home" domain, a meta-
physical category to which they unproblematically apply. Unlike heavily idealized
theories, they don't apply neatly to rationally-discovered Platonic worlds. Unlike
local facts, they don't apply neatly to natural or experimental phenomena. One
result of this is that when we see people arguing over a model, they are likely also
arguing over styles of inquiry and explanation. Sismondo (1999) argues that a
debate over an ecological model was in part an argument over what sort of a
knowledge ecology would allow, whether it would allow highly abstract theoretical
knowledge or only knowledge very tied to the material world. Breslau (1997)
argues that a dispute in the U.S. over the evaluation of federal training programs is
a dispute over the merits of different approaches to socio-economic research.
In the end, the above sort of talk of the domains of theory, experiment, and
models takes us back to an old problem. How should we understand the status of
object domains that appear to depend upon human agency, but have the solidity
that we attribute to independently real structures? We can explain the solidity in
terms of discipline, and assume that all such object domains are constructed. We
can explain the discipline in terms of solidity, and assume that human agency
wraps itself around the real. Or we can try to find terms in which to combine or
evade constructivist and realist discourses. Although this is not a question directly
addressed by any of the studies of this volume, taken as a whole they suggest that
some version of the last option is the only option: models and simulations are
misfits that don't sit comfortably in established categories.

3. The Richness of Modeling and Simulation

Michael Redhead (1980) distinguishes two types of theoretical models in physics.
In cases in which a theory is difficult to apply, models are used which simplify
assumptions or substitute tractable equations for intractable ones; these models
are impoverished theories. Conversely, theories may provide constraints but leave
space for more complete specification; then models are introduced which enrich
the theory by filling the empty spaces. In either case, the ostensible goal of
modeling is to apply theories, to connect the theories to data. In practice such
models may be quite far from any conceivable data, being created in contexts in
which some supplement to theoretical constructs is desired, to bring those con-
structs in closer, but not necessarily immediate, contact with some more actual or
actualizable world.
Redhead, however, is perhaps looking at special cases. Most models and simula-
tions have a more nuanced and distanced relation to theory, not merely impoverishing
or enriching theories. Even in their relation only to theory, simulations and
complex models quite often violate Redhead's dichotomy by belonging to both
sides. Simulations may be enriched theories, adding considerable detail in the
effort to portray their objects, and simultaneously impoverished, substituting
approximations for intractable equations, in the effort to make them more appli-
cable. Both movements away from theory, though, are driven by the goal of
applicability: theoretical structures which are computationally unmanageable, or
unmanageably abstract, are inapplicable. Simulations, like models, stand between
theories and material objects, pointing in both directions. For example, a simula-
tion of AIDS transmission in intravenous drug users in a single country might
have parameters that represent quite detailed knowledge of drug use, the popula-
tion structure of drug users, responsiveness to treatment, and so on. At the same
time it may incorporate simplifying assumptions, relative to other models, about
the closure of its population, about the incubation pattern of AIDS, and so on
(see, e.g., Pasqualucci et al. 1998).
In addition, putting models and simulations merely in the context of theories
misses their often complex positioning in the world. Sometimes a theory will be
put to work in applied science, but if so it is combined with so many other types of
knowledge that it becomes only one component of a model or simulation. Take,
for example, Ragnar Frisch's 1933 "Rocking Horse Model" of the business cycle,
discussed in an elegant paper by Marcel Boumans (1999). The model consists of
three equations, relating such quantities as consumption, production, and the
money supply. As Boumans explains, there is a motivating metaphor behind the
model; Frisch imagines the economy as a "rocking horse hit by a club." The model
then brings physical knowledge to bear on the problem, through an equation
normally used to describe a pendulum damped by friction. Yet Frisch was not
merely enriching (or impoverishing) physical theory, because there are too many
other components of the model: basic economic relations, a theory about delays in
the production of goods and the deployment of capital, guesses of values of key
parameters, and so on. As Boumans argues, economic model-building is the
"integration of ingredients" so that the model meets "criteria of quality." Theories
play a role, but so do metaphors, mathematical techniques, views on policy, and
empirical data; for example, Frisch chose values for parameters to make sure that
his model would cycle at the same rate as real economic cycles. In addition, there
may be multiple theories playing roles, from different disciplines. Therefore we
shouldn't see models and simulations only in their relation to theories, bringing
theories into closer contact with data; models and simulations do many things at
once.
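The physical ingredient Boumans identifies — damped oscillation borrowed from the friction-damped pendulum — can be sketched generically. The amplitude, damping coefficient, and eight-year period below are illustrative stand-ins, not Frisch's actual calibration:

```python
import math

def damped_cycle(t, amplitude=1.0, damping=0.2, period=8.0):
    """Generic damped oscillation x(t) = A e^(-damping*t) cos(2π t / period),
    the kind of behavior borrowed from the friction-damped pendulum.
    The 8-year period is an illustrative stand-in for a business cycle."""
    return amplitude * math.exp(-damping * t) * math.cos(2 * math.pi * t / period)

# Swings recur at the chosen period but shrink over time: free oscillations
# die out, which is why the model needs outside shocks (the "club" hitting
# the rocking horse) to keep the cycle going.
print(abs(damped_cycle(8.0)) < abs(damped_cycle(0.0)))  # → True
```

Choosing the period so the model "would cycle at the same rate as real economic cycles" is, in this sketch, just the choice of the `period` parameter — one of the many non-theoretical ingredients Boumans describes being integrated.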
Seeing models and simulations just in a space between theories and data, the
typical way of seeing them, misses their articulation with other goals, resources,
and constraints. There are resources provided and constraints imposed by the
media in which they operate, because they are created with the mathematical and
computing tools available. They are created to fit into particular cultures of
research: models and simulations have to take particular forms in order to be
accepted. And they are created to fit into particular social settings, becoming
objective by balancing among sides in debates.

4. Modeling and Simulating as Experiment and Theory

Work on models and simulations is like theoretical work in that the ostensible
object of representation is absent; modelers and simulators trade in symbols. They
produce representations, perhaps check those representations against data, and so
on. But modeling and (especially) simulation is like experimental work in that the
behavior of the model or simulation is the subject of investigation. From her many
interviews with people engaged with simulations, Deborah Dowling shows that
working with simulations is seen to have aspects of experimental work, despite its
being largely in the realm of representation. Researchers make small changes — to
parameters, initial conditions, the grain of calculation, etc. — and learn what
results. The flavor of such activities better matches the flavor of experimentation
than that of theorizing. Thus Dowling describes the ways in which simulations are
pulled back and forth between the categories of theory and experiment, depending
upon context.
Modeling and simulation is also like experimentation in its pattern of give and
take in their creation. Daniel Breslau and Yuval Yonay frame their contribution to
this volume within a critique of the singular focus on metaphor in studies of
economics. Illustrating a microinteractionist approach to the sociology of eco-
nomics, they draw on studies by David Gooding, Ludwik Fleck, and Andrew
Pickering, to show that economic modelers have to perform, in Pickering's term, a
"dance of agencies" in much the same way as do experimenters. The materials that
modelers work with — particular established formulations and modeling tools —
are recalcitrant; they do not behave as the modelers would like, either producing
too many answers, the wrong sort of answer, or intractable equations. In response,
the modelers have to find assumptions and tools that allow them to create objects
with the right disciplinary forms, objects capable of indicating unique solutions to
problems, sophisticated enough to be seen as advances in the field, and uncontrived
enough to produce epistemic solidity. This last condition is particularly important:
to avoid too-easy criticisms of assumptions, those assumptions have to conform to
disciplinary standards. As a result, the model is granted agency by the modelers.
Epistemic solidity for a model or simulation is tricky. The criteria that might be
applied depend upon the uses to which the object might be put, its complexity, the
available data, and the state of the field. As Eric Winsberg shows in his paper in
this volume, simulations and their components are evaluated on a variety of
fronts, revolving around fidelity to either theory or material: assumptions are
evaluated as close enough to the truth, or unimportant enough not to mislead;
approximations are judged as not introducing too much error; the computing
tools are judged for their transparency; graphics systems and techniques are
expected to show salient properties and relationships. All of these judgments and
more are difficult matters, and many do not have straightforward answers. In
some cases these judgments create an interesting type of uncertainty. In the global
warming debate, for example, a key type of number has become the estimate of the
sensitivity of temperatures to a doubling of carbon dioxide in the atmosphere.
Sensitivity is typically reported as a range, as in 1.5° to 4.5°, or as an estimate with
upper and lower bounds, as in 3° ± 1.5°. The range does not measure statistical
uncertainty, though, because the key General Circulation Models are made to be
deterministic. Rather, the range measures the confidence of the climatologists in
their models and their assumptions (Sluijs et al. 1998).
In their focus on examining the warrant for fundamental theories, philosophers
of science have almost completely neglected the processes involved in applying
such theories. When philosophers do address application, it is generally assumed
that application is little more than deriving data from equations, with the help of
the right parameters and assumptions. But the papers of this volume show that
modeling and simulation, typical modes of application, are anything but straightforward
derivation. Applied theory isn't simply theory applied, because it instantiates
theoretical frameworks using a logic that stands outside of those frameworks.
Thus Winsberg calls for an "epistemology of simulation," which would study the
grounds for believing the results of complex models and simulations.

5. Models and Simulations as Tools and Objects of Knowledge

Because they are supposed to be analogues, models and simulations are themselves
studied in the way that natural systems might be. Knowledge about them is
supposed to run parallel to knowledge about the things that they represent, which
allows modeling to be like experimentation, in both Dowling's and Breslau and
Yonay's senses. Researchers can learn about the behavior of models and simulations,
or make them behave naturally, and be doing properly scientific research.
But as objects they are also open to use in more instrumental contexts, providing
inputs for other research. That is, if treated as black boxes the data they produce
can sometimes simply feed into other research.
Martina Merz's paper here is an ethnographic study of event generators, computer
programs in high energy physics that simulate the effect of beams of particles
colliding with other particles. They are important for a number of reasons: they
play a role in data analysis, being used as a template for comparison with real data;
they are used to test simulations of detectors. But they are not just tools, because
creating them and using them is doing physics, too. Merz argues, using terminology
adopted from Hans-Jörg Rheinberger and Karin Knorr Cetina, that event generators
manage to be simultaneously epistemic objects and technological things. As
epistemic objects what is valued is their open-endedness, their ability to behave
unpredictably when pushed. As technological things what is valued is their closure,
their assumed straightforward application of taken-for-granted knowledge. In
Merz's rich treatment of the use and study of event generators she shows how this
divide between epistemic objects and technological things plays out in the culture
of particle physics. Actors' differing fields of interest mean that event generators
are expected to remain both open and closed, and so are slated to remain multivalent
objects.
Interestingly, the same divide can be seen in the relatively simple economic and
econometric models that Robert Evans and Adrienne van den Bogaard discuss in
the second section of this volume. For some actors they are epistemic things, or as
Rheinberger calls them, "question-generating machines." In the attempt to make
them policy-relevant their proponents try to turn these models into technological
objects or "answering machines." For Evans, democratic participation in economic
decision-making requires processes that keep alive the multivalent appearance of
economic models.

6. Models and Simulations Negotiating Politics

Models and simulations are, as I have already mentioned, increasingly important
in more public spheres. The last two papers of this volume take a look at the
mediating work of models in economic policy, and the boundary-crossings they
describe leave the sphere of relatively pure research.
Robert Evans examines a controversy over economic models to show how the
models concretize assumptions that have obvious moral weight, in his case assumptions
over the causes of unemployment. The models in question were used by
the different members of the United Kingdom's "Panel of Independent Forecasters,"
which from 1993 to 1997 was a key actor providing information for the UK's
economic policies. As Evans shows, there is great uncertainty surrounding the
models: it isn't clear which ones make the best predictions, and which ones have
the firmest evidential base. Hence it is possible to see a dispute about morals which
is integral to the dispute about models. While unique solutions to central problems
of economic modeling would undoubtedly have constrained economic policy
choices, the inability of the modelers to agree on unique solutions created a
situation in which the values implicit in policy decisions could be sharply defined
and highlighted.
Adrienne van den Bogaard presents an overview of her research on the development
of the institution of economic forecasting in the Netherlands in the 1930s
and 1940s. She argues that a particular style of macro-economic and macro-econometric
modeling of the Dutch economy became central to economic planning
because it solved political problems. In particular, it solved problems of objectivity,
by not being identifiable with the perspectives of any of the four traditional
"pillars" of Dutch society: the social democrats, the liberals, the Catholics, and the
Protestants. Each of these pillars came with its own politics, down to the level of
economic decisions. At a time when fractures were severe, economists had to find
an authoritative approach. A failed approach attempted to synthesize the perspectives
of the pillars. The successful approach turned on a confluence between the
neutrality of modeling and the particular style of Dutch tolerance, a careful and
studied tolerance that recognizes the need to grant factions and positions their
autonomy, even while not granting them respect. The successful model could
become objective by being neutral, thereby becoming a tool for and an arbiter
between all of the pillars; the successful modelers could create a monopoly over
some central portions of economic planning.

In short, models and simulations cut across boundaries of pure categories we
accept in science, and sometimes politics. Some people might be tempted to see the
compromises that models make — between the domains of the theoretical and the
material, between their uses as pragmatic and representational objects, between
different goals — as unsatisfactory, to see them as simple inconsistency or imperfection.
But we might choose instead to see models and simulations as monsters necessary
to mediate between worlds that cannot stand on their own, or that are unmanageable.
The level of the ideal, for example, often lacks legitimacy among the
instrumentally-minded. The natural world is usually intractable in terms of ideals,
but it is opaque without them. Models become a form of glue, simultaneously
epistemic and social, that allows inquiry to go forward, by connecting the ideal and
the material. To do that they need to make compromises: they must simultaneously
look like theory — because they have to explain, predict, and give structure — and
like practical knowledge — because they have to connect to real-world features.
They are a diverse lot, and are made to do a diverse number of things. But they are
necessarily so, being made to stand between worlds, and pushed one way and
another. Therefore we should resist the urge to do much epistemic neatening of
this messy category of models and simulations.


References

Ashworth, William. 1996. "Memory, Efficiency, and Symbolic Analysis: Charles
Babbage, John Herschel, and the Industrial Mind." Isis 87:629-653.
Boumans, Marcel. 1999. "Built-in Justification." Forthcoming in M. Morgan and
M. Morrison, editors, Models as Mediators. Cambridge: Cambridge University
Press.
Breslau, Daniel. 1997. "Contract Shop Epistemology: Credibility and Problem
Construction in Applied Social Science." Social Studies of Science 27:363-394.
Cartwright, Nancy. 1983. How the Laws of Physics Lie. Oxford: Oxford University
Press.
Francoeur, Eric. 1997. "The Forgotten Tool: The Design and Use of Molecular
Models." Social Studies of Science 27:7-40.
Galison, Peter. 1997. Image and Logic: A Material Culture of Microphysics.
Chicago: University of Chicago Press.
Goodman, Nelson. 1968. Languages of Art. New York: Bobbs-Merrill.
Hacking, Ian. 1983. Representing and Intervening: Elementary Topics in the
Philosophy of Natural Science. Cambridge: Cambridge University Press.
Hesse, Mary. 1966. Models and Analogies in Science. Notre Dame: University of
Notre Dame Press.
Hilborn, Ray, and Marc Mangel. 1997. The Ecological Detective: Confronting
Models with Data. Princeton, NJ: Princeton University Press.
IPCC (Intergovernmental Panel on Climate Change). 1995. Climate Change 1995:
The Science of Climate Change, edited by J. T. Houghton, L. G. Meira Filho,
B. A. Callander, N. Harris, A. Kattenberg, and K. Maskell. Cambridge:
Cambridge University Press.
Knorr Cetina, Karin D. 1981. The Manufacture of Knowledge: An Essay on the
Constructivist and Contextual Nature of Science. Oxford: Pergamon Press.
Latour, Bruno. 1983. "Give Me a Laboratory and I will Raise the World." In
Science Observed: Perspectives on the Social Study of Science, edited by K. D.
Knorr Cetina and Michael Mulkay. London: Sage.
Pasqualucci, Cristina, Lucille Rava, Carla Rossi, and Giuseppe Schinaia. 1998.
"Estimating the Size of the HIV/AIDS Epidemic: Complementary Use of the
Empirical Bayesian Back-Calculation and the Mover-Stayer Model for Gathering
the Largest Amount of Information." Simulation 71:213-227.
Redhead, Michael. 1980. "Models in Physics." British Journal for the Philosophy
of Science 31:145-163.
Rheinberger, Hans-Jörg. 1997. Toward a History of Epistemic Things: Synthesizing
Proteins in the Test Tube. Stanford: Stanford University Press.
Shapin, Steven. 1984. "Pump and Circumstance: Robert Boyle's Literary Technology."
Social Studies of Science 14:481-520.


Sismondo, Sergio. 1997. "Deflationary Metaphysics and the Construction of
Laboratory Mice." Metaphilosophy 28:219-232.
Sismondo, Sergio. 1999. "Island Biogeography and the Multiple Domains of
Models." Biology and Philosophy, forthcoming.
Sluijs, Jeroen van der, Josée van Eijndhoven, Simon Shackley, and Brian Wynne.
1998. "Anchoring Devices in Science for Policy: The Case of Consensus around
Climate Sensitivity." Social Studies of Science 28:291-323.

Department of Philosophy
Queen's University
Kingston, Canada, K7L 3N6

Downloaded from https://www.cambridge.org/core. IP address: 207.241.231.81, on 29 Jul 2018 at 10:32:06, subject to the Cambridge Core terms of
use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/S0269889700003409
