Heather Douglas: Is Science Value-Free? (Science, Policy, and The Value-Free Ideal)
DOI 10.1007/s11948-010-9204-8
Gregory J. Morgan
Department of Philosophy, College of Arts and Letters, Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030, USA
e-mail: [email protected]
Received: 26 March 2010 / Accepted: 10 April 2010 / Published online: 28 April 2010
© Springer Science+Business Media B.V. 2010
In Science, Policy, and the Value-Free Ideal, Heather Douglas (2009) describes the value-free ideal for science: that social, ethical, and political values should have no influence over the reasoning of scientists, and that scientists should proceed in their work with as little concern as possible for such values (p. 1). After considering the role of science in policy making, Douglas rejects this ideal and contends that we can reject it without damaging the integrity of science. Science, she argues, can be objective without being value-free.
Douglas claims that the value-free ideal in its current form gained a stranglehold on the philosophy of science around 1960. The current form of the ideal is that value judgments internal to science, involving the evaluation and acceptance of scientific results at the heart of scientific practice, are to be as free as humanly possible of social and ethical values (p. 45). Works of logical empiricists such as Hans Reichenbach's The Rise of Scientific Philosophy illustrate this view: Reichenbach aims to draw a sharp line between knowledge and ethics (Reichenbach 1951). Douglas argues that the Cold War and the professionalization of philosophy of science led to a prudent desire to make the field apolitical, narrow, and clearly distinct from Marxist philosophy, in which values and facts are closely interrelated (p. 49).
Marshalling arguments presented by C. West Churchman (1948) and Richard
Rudner (1953), Douglas attacks the value-free ideal. To accept a hypothesis, a
scientist must judge that the evidence is sufficiently strong or that the probability of
the hypothesis is sufficiently high. In either case, Rudner (1953) argued that the
standard of sufficiency used to make the judgment depends on how serious a
mistake it would be to accept a false hypothesis. Determining the seriousness of such a mistake for any given hypothesis involves a value judgment. Douglas argues that
one cannot give an account of hypothesis or theory acceptance without such values. She does not, however, emphasize an ambiguity in the term "theory acceptance." The term could mean "accept as a true belief," "accept as a theory on which to base policy," "accept as a theory on which to develop technology," "accept as a theory worthy of further pursuit," etc. Different meanings incorporate values differently, and pragmatic arguments that work for one meaning might not work for another.
Douglas considers Richard Jeffrey's (1956) counter-proposal that scientists need not accept or reject hypotheses, but merely assign probabilities to them and then turn them over to the public. She argues that this suggestion does not work because one must accept the probability values themselves, and that requires a moral judgment. I am not convinced that one could not reiterate Jeffrey's point and say that it is probabilities (and variances) all the way down, i.e., the probability of the probability of the theory, etc., and that the potential infinite regress may or may not be vicious. (See Atkinson and Peijnenburg 2006 for a related point.)
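To make the regress concrete, here is a minimal sketch, in the spirit of the Atkinson and Peijnenburg result, of why an infinite chain of probability assignments need not be vicious; the chain of claims A_n and the constant conditional probabilities a and b are illustrative assumptions, not notation from the review or the book. Suppose each claim receives its probability from the next claim in the chain via the rule of total probability:

% Each P(A_n) is fixed by P(A_{n+1}) through two constant conditional probabilities.
\[
P(A_n) = a\,P(A_{n+1}) + b\,\bigl(1 - P(A_{n+1})\bigr),
\qquad a = P(A_n \mid A_{n+1}),\quad b = P(A_n \mid \neg A_{n+1}).
\]
% Iterating the affine map x \mapsto b + (a - b)x, with 0 \le b < a \le 1 and a - b < 1,
% converges to its unique fixed point regardless of any foundational starting value:
\[
P(A_0) \;\to\; \frac{b}{1 - a + b}.
\]

Since the limit is independent of where the chain starts, the infinite regress of probabilities can turn out to be benign, which is one way of pressing Jeffrey's point further.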
As Douglas presents the history, philosophers Carl Hempel (1965) and Ernst Nagel (1961) have some sympathy for the Churchman–Rudner point, but it is the widely read work of historian-philosopher Thomas Kuhn (1962) that praises the isolation of science and promotes normal science as puzzle-solving that need not be influenced by socially important problems. A more contemporary advocate of the view that the scientific community should retain the value-free ideal is Hugh Lacey (1999), who worries that giving up the ideal undermines scientists' ability to gain significant knowledge.
Douglas takes up the task of showing that Lacey is wrong: it is possible, and in fact preferable, to abandon the value-free ideal. Indeed, Douglas spends Chap. 4 arguing that scientists have moral responsibilities to consider the reasonably foreseeable consequences of making an error in choosing to accept a theory. I would add that if one has to consider the consequences of such a choice, one must also consider the reasonably foreseeable consequences of not being in error. Douglas argues her case by distinguishing between two different roles, direct and indirect, that values could play in theory acceptance. First, values can serve as reasons to accept a claim. For example, the physicist Paul Dirac claimed that he believed the General Theory of Relativity because of its aesthetic value (Dirac 1980).
Second, values can act to weigh the importance of a claim, helping to decide what should count as sufficient evidence (p. 96). Douglas argues that society should accept social, ethical, and cognitive values in the second, indirect role, but rarely in the first. Russian agronomist Trofim Lysenko's famous rejection of Mendelian genetics and the Catholic Church's rejection of the Copernican system constitute illegitimate uses of social values in the direct role, as, presumably, would Dirac's aesthetically driven theory acceptance. According to Douglas, science is politicized if social values play a direct role in theory acceptance.
I am not sure the situation is as neat as Douglas implies. Surely there can be cases of politicization of science through the indirect role as well. If the results of a study support claims that run counter to a researcher's political views, and he/she responds by raising the amount of evidence needed to accept those claims, then the indirect role of values in scientific advice can also be manipulated for political reasons.
These worries are mitigated somewhat by Douglas's prescription that values used in the indirect role should be made explicit and public, as far as possible, when accepting a theory to make policy. However, one needs further normative principles to exclude some values from even an indirect role in theory acceptance.
In Chap. 6, Douglas sketches seven different senses of objectivity that she claims are untouched by values playing an indirect role in theory acceptance. She focuses on senses of objectivity in knowledge-creating processes that produce trustworthy knowledge claims (p. 116). What holds these different senses of objectivity together is a strong sense of trust in what is called "objective." For example, Douglas calls "value-neutral objectivity" a potential replacement for the value-free ideal in certain cases.
Value-neutral objectivity requires scientists to take a position that is balanced or neutral with respect to a spectrum of values (p. 123). To illustrate this conception of objectivity, Douglas suggests that the Intergovernmental Panel on Climate Change (IPCC) report on global climate exhibits value neutrality because of the large number and diversity of the scientists, and of their values, involved in creating the report. As Douglas points out, in some cases one end of the spectrum is unacceptable (e.g., racist, sexist, etc.), so this form of objectivity is of limited use.
A method for determining which values are unacceptable would be useful, but she has little to say about how one is meant to judge when a value is unacceptable. In the case of the spectrum of political values in the US, from left to right, there is often no consensus about whether balance is possible or where the center lies. Presumably, what makes some values unacceptable is a moral question that cannot properly be settled by polling citizens or even by a democratic vote.
If she is right, Douglas's views have practical significance for risk assessment and management. She rejects the orthodox two-stage process of value-free risk assessment followed by value-laden risk management (Lowrance 1976). Social, cognitive,
and ethical values should inform risk assessments, she claims. Practically speaking,
involving the public in the risk assessment process is one way to introduce some
social values into risk assessment, and further work might examine better ways of
doing this.
Overall, Douglas should be commended for expanding the scope of general philosophy of science into questions of policy making. Philosophers of science influenced by the logical empiricist tradition have, for too long, mostly ignored the interplay between scientific methodology and ethics, and the related moral demand on scientists to consider the consequences of scientific advice. Given the practical consequences of this omission, philosophers of science also have an ethical demand to explore further how philosophy of science can enlighten policy making. With Science, Policy, and the Value-Free Ideal, Douglas is leading the way.
References
Atkinson, D., & Peijnenburg, J. (2006). Probability without certainty: Foundationalism and the Lewis–Reichenbach debate. Studies in History and Philosophy of Science Part A, 37, 442–453.