Studies in History and Philosophy of Modern Physics

The logic of experimental tests, particularly of Everettian quantum theory

David Deutsch
E-mail address: [email protected]

Article history: Received 14 September 2015; Accepted 11 June 2016

Keywords: Testability; Everettian Quantum Theory; Probability; Popperian Methodology

Abstract: Claims that the standard procedure for testing scientific theories is inapplicable to Everettian quantum theory, and hence that the theory is untestable, are due to misconceptions about probability and about the logic of experimental testing. Refuting those claims by correcting those misconceptions leads to an improved theory of scientific methodology (based on Popper's) and testing, which allows various simplifications, notably the elimination of everything probabilistic from the methodology ('Bayesian' credences) and from fundamental physics (stochastic processes).

© 2016 The Author. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
http://dx.doi.org/10.1016/j.shpsb.2016.06.001
1. Introduction

A version of quantum theory that is universal (applies to all physical systems), and deterministic (no random numbers), was first proposed informally by Schrödinger (Bitbol, 1996) and then independently and in detail by Everett (DeWitt & Graham, 1973). Subsequent work to elaborate and refine it has resulted in what is often called Everettian quantum theory (Wallace, 2012). It exists in several variants, but I shall use the term to refer only to those that are universal and deterministic in the above senses. All of them agree with Everett's original theory that generically, when an experiment is observed to have a particular result, all the other possible results also occur and are observed simultaneously by other instances of the same observer who exist in physical reality – whose multiplicity the various Everettian theories refer to by terms such as 'multiverse', 'many universes', 'many histories' or even 'many minds'.¹

¹ 'Many minds' theories themselves exist in several versions. The original one (Albert & Loewer, 1988) is probabilistic and therefore does not count as Everettian in the sense defined here. It is also dualistic (involves minds not being physical objects, and obeying different laws from theirs) and therefore arguably not universal. Lockwood's (1996) version avoids both these defects and is Everettian, so its testability is vindicated in this paper.

Specifically, suppose that a quantum system S has been prepared in a way which, according to a proposed law of motion L, will have placed it in a state |ψ⟩. An observable X̂ of S is then perfectly measured by an observer A (where A includes an appropriate measuring apparatus and as much of the rest of the world as is affected). That means that the combined system S⊕A undergoes a process of the form

    |ψ⟩|0⟩ → Σᵢ ⟨xᵢ|ψ⟩ |xᵢ, aᵢ⟩,    (1)

where |0⟩ denotes the initial state of A, the |xᵢ⟩ are the eigenstates² of X̂, and each |aᵢ⟩ is a +1-eigenstate of the projector for A's having observed the eigenvalue xᵢ of X̂. For generic |ψ⟩, all the coefficients ⟨xᵢ|ψ⟩ in (1) are non-zero, and therefore all the possible measurement results a₁, a₂, … happen, according to L and Everettian quantum theory. Moreover, since interactions with the environment can never be perfectly eliminated, the measurement must be imperfect in practice, so we can drop the qualification 'generic': in all measurements, every possible result happens simultaneously.

² Here and throughout, I am treating the measured observables as having discrete eigenvalues. The notation would be more cumbersome, but the arguments of this paper no different, for continuous eigenvalues.

Furthermore, the theory is deterministic: it says that the evolution of all quantities in nature is governed by differential equations (e.g. the Schrödinger or Heisenberg equations of motion) involving only those quantities and space and time, and thus it does not permit physically random processes. This is in contrast with certain other versions of quantum theory, and notably 'wave- …
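As a concrete illustration of process (1), the following sketch (my own, not part of the paper; it assumes Python with numpy, a two-valued observable and an invented three-state apparatus) constructs the right-hand side of (1) for a generic |ψ⟩ and confirms that both 'observed' branches appear with non-zero amplitude.

    # A minimal sketch of the 'perfect measurement' process (1) for a two-valued
    # observable X of a system S and an apparatus/observer A. Not from the paper;
    # it assumes Python with numpy. The states a0, a1 are A's states of having
    # observed x_0, x_1, and 'ready' is A's initial state |0>.
    import numpy as np

    # Eigenstates |x_0>, |x_1> of the measured observable of S.
    x = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]

    # Apparatus states: |0> (ready), |a_0>, |a_1>, taken to be orthonormal.
    ready, a0, a1 = np.eye(3, dtype=complex)
    a = [a0, a1]

    def measure(psi):
        """Return the final joint state sum_i <x_i|psi> |x_i>|a_i> of process (1),
        given the initial system state |psi> (with A starting in |0>)."""
        return sum(np.vdot(x[i], psi) * np.kron(x[i], a[i]) for i in range(2))

    psi = 0.6 * x[0] + 0.8 * x[1]       # generic |psi>: both <x_i|psi> are non-zero
    initial = np.kron(psi, ready)        # |psi>|0>, the left-hand side of (1)
    final = measure(psi)                 # the right-hand side of (1)

    # Both branches - 'A observed x_0' and 'A observed x_1' - are present:
    for i in range(2):
        print(np.vdot(np.kron(x[i], a[i]), final))   # (0.6+0j) and (0.8+0j)

The point being illustrated is only the one made above: for generic |ψ⟩ every coefficient ⟨xᵢ|ψ⟩ is non-zero, so every possible result aᵢ occurs in the final state.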
http://dx.doi.org/10.1016/j.shpsb.2016.06.001
1355-2198/& 2016 The Author. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Please cite this article as: Deutsch, D. The logic of experimental tests, particularly of Everettian quantum theory. Studies in History and
Philosophy of Modern Physics (2016), http://dx.doi.org/10.1016/j.shpsb.2016.06.001i
2 D. Deutsch / Studies in History and Philosophy of Modern Physics ∎ (∎∎∎∎) ∎∎∎–∎∎∎
Please cite this article as: Deutsch, D. The logic of experimental tests, particularly of Everettian quantum theory. Studies in History and
Philosophy of Modern Physics (2016), http://dx.doi.org/10.1016/j.shpsb.2016.06.001i
D. Deutsch / Studies in History and Philosophy of Modern Physics ∎ (∎∎∎∎) ∎∎∎–∎∎∎ 3
… explanations. Experimental tests themselves are primarily about explanation too: they are precisely attempts to locate flaws in a theory by creating new explicanda of which the theory may turn out to be a bad explanation.

Let me distinguish here between a bad explanation and a false one. Which of a theory's assertions about an explicandum are false and which are true (i.e. correspond with the physical facts) is an objective and unchanging property of the theory (to the extent that it is unambiguous). But how bad or good an explanation is depends on how it engages with its explicanda and with other knowledge that happens to exist at the time, such as other explanations and recorded results of past experiments. An explanation is better the more it is constrained by the explicanda and by other good explanations,⁵ but we shall not need precise criteria here; we shall only need the following: that an explanation is bad (or worse than a rival or variant explanation) to the extent that…

(i) it seems not to account for its explicanda; or
(ii) it seems to conflict with explanations that are otherwise good; or
(iii) it could easily be adapted to account for anything (so it explains nothing).

⁵ For an informal discussion of good explanation, see Deutsch (2011, (Chap. 1)).

It follows that sometimes a good explanation may be less true than a bad one (i.e. its true assertions about reality may be a subset of the latter's, and its false ones a superset). Since two theories may have overlapping explicanda, or be flawed in different respects, the relations 'truer' and 'better' are both only partial orderings of explanations. Nevertheless, the methodology of science is to seek out, and apparently to correct, apparent flaws, conflicts or deficiencies in our explanations (thus obtaining better explanations), in the hope that this will correct real flaws and deficiencies (thus providing truer explanations).

When we, via arguments or experiments, find an apparent flaw, conflict or inadequacy in our theories, that constitutes a scientific problem and the theories are problematic (but not necessarily refuted yet – see below). So scientific methodology consists of locating and then solving problems; but it does not prescribe how to do either. Both involve creative conjectures – ideas not prescribed by scientific methodology. Most conjectures are themselves errors, and there need not be a right error to make next. Accordingly, all decisions to modify or reject theories are tentative: they may be reversed by further argument or experimental results. And no such event as 'accepting' a theory, distinct from conjecturing it in the first place, ever happens (cf. Miller, 2006, (Chap. 4)).

Scientific methodology, in turn, does not (nor could it validly) provide criteria for accepting a theory. Conjecture, and the correction of apparent errors and deficiencies, are the only processes at work. And just as the objective of science isn't to find evidence that justifies theories as true or probable, so the objective of the methodology of science isn't to find rules which, if followed, are guaranteed, or likely, to identify true theories as true. There can be no such rules. A methodology is itself merely a (philosophical) theory – a convention, as Popper (1959) put it, actual or proposed – that has been conjectured to solve philosophical problems, and is subject to criticism for how well or badly it seems to do that.⁶ There cannot be an argument that certifies it as true or probable, any more than there can for scientific theories.

⁶ For example, Popper's theory was proposed in order to avoid the problem of induction, and the problem of infinite regress in seeking authority ('justification') for theories, among other problems.

In this view a scientific theory is refuted if it is not a good explanation but has a rival that is a good explanation with the same (or more) explicanda. So another consequence is that in the absence of a good rival explanation, an explanatory theory cannot be refuted by experiment: at most it can be made problematic. If only one good explanation is known, and an experimental result makes it problematic, that can motivate a research programme to replace it (or to replace some other theory). But so can a theoretical problem, a philosophical problem, a hunch, a wish – anything.

An important consequence of this explanatory conception of science is that experimental results consistent with a theory T do not constitute support for T. That is because they are merely explicanda. A new explicandum may make a theory more problematic, but it can never solve existing problems involving a theory (except by making rival theories problematic – see Section 3). The asymmetry between refutation (tentative) and support (non-existent) in scientific methodology is better understood in this way, by regarding theories as explanations, than through Popper's (op. cit.) own argument from the logic of predictions, appealing to what has been called the 'arrow of modus ponens'. Scientific theories are only approximately modelled as propositions, but they are precisely explanations.

I now define an objective notion, not referring to probabilities or 'expectation values', of what it means for a proposed experiment to be expected to have a result x under an explanatory theory T. It means that if the experiment were performed and did not result in x, T would become (more) problematic. Expectation is thus defined in terms of problems, and problems in terms of explanation, of which we shall need only the properties (i)–(iii). Note that expectations in this sense apply only to (some) physical events, not to the truth or falsity of propositions in general – and particularly not to scientific theories: if we have any expectation about those, it should be that even our best and most fundamental theories are false. For instance, since quantum theory and general relativity are inconsistent with each other, we know that at least one of them is false, presumably both, and since they are required to be testable explanations, one or both must be inadequate for some phenomena. Yet since there is currently no single rival theory with a good explanation for all the explicanda of either of them, we rightly expect their predictions to be borne out in any currently proposed experiment.

A test of a theory is an experiment whose result could make the theory problematic. A crucial test – the centrepiece of scientific experimentation – can, on this view, take place only when there are at least two good explanations of the same explicandum (good, that is, apart from the fact of each other's existence). Ideally it is an experiment such that every possible result will make all but one of those theories problematic, in which case the others will have been (tentatively) refuted.

It will suffice to confine attention to problems arising from tests of fundamental theories in physics. And of those problems, only the simplest will concern us, namely when an existing explanation apparently does not account for experimental results. This can happen when there seems either to be an unexplained regularity in the results (criterion (i) above), or an irregularity (i.e. an explanation's prediction not being borne out – criterion (ii) above). So, if the result of an experiment is predicted to be invariably a₁, but in successive trials it is actually a₅, a₂₉, a₁, a₃, …, with no apparent pattern, that is an apparent irregularity. If it is a₅, a₅, a₅, a₅, …, that is apparently both an unexplained regularity and an irregularity. Scientific methodology in this conception does not specify how many instances constitute a regularity, nor what constitutes a pattern, nor how large a discrepancy constitutes a prediction apparently not being borne out (as distinct from being a mere experimental error – see Section 6). Sometimes conflicting opinions about these matters can be resolved by repeating the experiment, or by testing other assumptions about the apparatus, etc. But in any case, the existence of a problem with a theory has
little import besides, as I said, informing research programmes – unless both the new and the old explicanda are well explained by a rival theory. In that case the problem becomes grounds for considering the problematic theory tentatively refuted. Therefore, to meet criterion (iii) above, it must not be protected from such a refutation by declaring it ad hoc to be unproblematic. Instead any claim that its apparent flaws are not real must be made via scientific theories and judged as explanations in the same way as other theories.

In contrast, the traditional (inductivist) account of what happens when experiments raise a problem is in summary: that from an apparent unexplained regularity, we are supposed to 'induce' that the regularity is universal⁷ (or, according to 'Bayesian' inductivism, to increase our credence for theories predicting that); while from an apparent irregularity, we are supposed to drop the theory that had predicted regularity (or to reduce our credence for it). Such procedures would neither necessitate nor yield any explanation. But scientific theories do not take the form of predictions that past experiments, if repeated, would have the same outcomes as before: they must, among other things, imply such predictions, but they consist of explanations.

⁷ E.g. Aristotle's definition of induction as "argument from the particular to the universal".

In any experiment designed to test a scientific theory T, the prediction of the result expected under T also depends on other theories: background knowledge, including explanations of what the preparation of the experiment achieves, how the apparatus works, and the sources of error. Nothing about the unmet expectation dictates whether T or any of those background-knowledge assumptions was at fault. Therefore there is no such thing as an experimental result logically contradicting⁸ T, nor logically entailing a different 'credence' for T. But as I have said, an apparent failure of T's prediction is merely a problem, so seeking an alternative to T is merely one possible approach to solving it. And although there are always countless logically consistent options for which theory to reject, the number of good explanations known for an explicandum is always small. Things are going very well when there are as many as two, with perhaps the opportunity for a crucial test; more typically it is one or zero.⁹ For instance, when neutrinos recently appeared to violate a prediction of general relativity by exceeding the speed of light, no good explanation involving new laws of physics was, in the event, created, and the only good explanation turned out to be that a particular optical cable had been poorly attached (Adam et al., 2012).

⁸ That is known as the Duhem–Quine thesis (Quine, 1960). It is true, and must be distinguished from the Duhem–Quine problem, which is the misconception that scientific progress is therefore impossible or problematic.

⁹ One of the misconceptions underlying the so-called 'problem of induction' is that since there is always an infinity of predictive formulae matching any particular data, science must be chronically overwhelmed with theories, with too few ways to choose between them. And hence that scientific methodology must consist of rationales for selecting a favoured theory from the overabundance: 'the simplest', perhaps, or the 'least biased'. But a predictive formula is not an explanation, and good explanations are hard to come by.

Note that even if T is the culprit, merely replacing it by ¬T cannot solve the problem, because the negation of an explanation (e.g. 'gravity is not due to the curvature of spacetime') is not itself an explanation. Again, at most, finding a good explanation that contradicts T can become the aim of a research programme.

I shall now show that it is possible for an explanatory theory T to be testable even by an experiment for which T makes only everything-possible-happens predictions, and whose results, therefore (if T designates them as possible), cannot contradict those predictions.

3. Refuting theories by their failure to explain

Suppose for simplicity that two mutually inconsistent theories, D and E, are good explanations of a certain class of explicanda, including all known results of relevant experiments, with the only problematic thing about either of them being the other's existence. Suppose also that in regard to a particular proposed experiment, E makes only the everything-possible-happens prediction (my discussion will also hold if it is a something-possible-happens prediction) for results a₁, a₂, …, while D predicts a particular result a₁. If the experiment is performed and the result a₂ is observed, then D (or more precisely, the combination of D and the background knowledge) becomes problematic, while neither E nor its combination with the same background knowledge is problematic any longer (provided that the explanation via experimental error would be bad – Section 6 below).

Observing the result a₁, on the other hand, would be consistent with the predictions of both D and E. Even so, it would be a new explicandum which, by criterion (i) above, would raise a problem for the explanation E, since why the result a₁ was observed but the others weren't would be explained by D but unexplained by E. Note that if it were not for the existence of D, the result a₁ would not make E problematic at all. (Nor would any result, and so there would be no methodological reason for doing the experiment at all.)

If the experiment is then repeated and the result a₁ is obtained each time, that is an apparent regularity in nature. Again by criterion (i), E then becomes a bad explanation while D becomes the only known good explanation for all known results of experiments. That is to say, E is refuted (provided, again, that experimental error is a bad explanation). Although E has never made a false prediction, it cannot account for the new explicandum (i.e. the repeated results a₁) that its rival D explains.

Again, all refutations are tentative. Regardless of how often the above experiment is repeated with result a₁ every time, it remains possible that E is true – in which case the existence of a different explanation D with more accurate predictions may be a coincidence. But coincidence by itself could 'explain' anything, so, absent additional explanatory details, it must be a bad explanation by criterion (iii) above. A good explanation of all relevant observations might be some E&G, where G is a good explanation for why the result must be a₁ when the experiment is carried out under these circumstances, and why other results could be obtained under different circumstances.

Thus it is possible for an explanatory theory to be refuted by experimental results that are consistent with its predictions. In particular, the everything-possible-happens interpretation of quantum theory, to which it has been claimed that Everettian quantum theory is equivalent, could be refuted in this way (provided, as always, that a suitable rival theory existed), and hence it is testable after all. Therefore the argument that Everettian quantum theory itself is untestable fails at its first step. But I shall show in Section 8 that it is in fact much more testable than any mere everything-possible-happens theory.

It follows that under E, the string of repeated results a₁ is expected not to happen, in the sense defined in Section 2, even though E asserts that, like every other sequence, it will happen (among other things). This is no contradiction. Being expected is a methodological attribute of a possible result (depending, for instance, on whether a good explanation for it exists) while happening is a factual one. What is at issue in this paper is not whether the properties 'expected not to happen' and 'will happen' are consistent but whether they can both follow from the same deterministic explanatory theory, in this case E, under a reasonable scientific methodology. And I have just shown that they can.
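The asymmetry just described can be summarised in a toy decision procedure (my own illustration, not the paper's; it assumes, as in the text, that an explanation via experimental error would be bad): given a record of results, it reports which of D and E is left with an unexplained explicandum.

    def problematic_theories(record):
        """Return the set of theories that the record makes problematic, assuming
        that 'experimental error' would be a bad explanation of it."""
        problematic = set()
        if any(result != "a1" for result in record):
            problematic.add("D")    # D's prediction 'invariably a1' was not borne out
        if record and all(result == "a1" for result in record):
            # The regularity is explained by D but left unexplained by E, so E
            # becomes problematic - and, with D available, tentatively refuted.
            problematic.add("E")
        return problematic

    print(problematic_theories(["a1", "a1", "a1", "a1"]))   # {'E'}
    print(problematic_theories(["a1", "a2", "a1", "a3"]))   # {'D'}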
Note that the condition that E be explanatory is essential to the argument of this section, which depends on the criteria (i) and (iii) for being a bad explanation. Under philosophies of science that identify theories with their predictions, theories like E would indeed be untestable and would inform no expectations. So much the worse for those philosophies.

4. The renunciation of authority

The reader may have noticed that the methodology I am advocating is radically different in purpose, not only in substance, from that which is taken for granted in most studies of the "empirical viability" of Everettian quantum theory, and of scientific theories in general. That traditional role of methodology has been to provide (1) some form of authority for theories – such as confirmation of their truth, justification, probability, credence, reason for believing, reason for relying upon, or 'secure foundations' and (2) rules for using experiment and observation to give theories such authority. I am adopting Popper's view (e.g. Popper, 1960) that no such authority exists, nor is needed for anything in the practice or philosophy of science, and that the quest for it historically has been a mistake.

Consequently, readers who conceive of science in terms of such a quest may regard the arguments of this paper as an extended acknowledgement that Everettian quantum theory is indeed fundamentally flawed in its connection to experiment, since in their view I am denying – for all theories, not just this one – the very existence of the connection they are seeking. Similarly, many philosophers regard Popper's own claim to have 'solved the problem of induction' as absurd, since his philosophy neither explains how inductive reasoning provides such authority nor (given his claim that no such reasoning exists) provides an alternative account of scientific reasoning that does provide it. This is not the place to defend Popper in this regard (but see Popper (1959)). I merely ask readers taking such positions to conclude, from this paper, that the testability of Everettian quantum theory is not an additional absurdity separate from that of the non-existence of confirmation, inductive reasoning, etc.

To that end, note that when a methodology has authority as its purpose, it cannot consistently allow much ambiguity in its rules or in the concepts (such as 'confirming instance', or 'probability') to which they refer, because if two scientists, using different interpretations of the concepts or rules, draw different conclusions from the same experimental results, those conclusions cannot possibly both have authority in the above senses. But the methodology I am advocating is that of requiring theories to be good explanations and seeking ways of exposing flaws and deficiencies in them. So its rules do not purport to be sources of authority but merely summarise our "history of learning how not to fool ourselves" (Feynman, 1974). It is to be expected that people using those rules may sometimes 'expect' different experimental results or have different opinions about whether something is 'problematic'. Indeed, explanation itself cannot be defined unambiguously, because, for instance, new modes of explanation can always be invented (e.g. Darwin's new mode of explanation did not involve predicting future species from past ones). Disagreeing about what is problematic or what counts as an explanation will in general cause scientists to embark on different research projects, of which one or both may, if they seek it (there are no guarantees), provide evidence by both their standards that one or both of their theories are problematic. There is no methodology that can validly guarantee (or promise with some probability etc.) that following it will lead to truer theories – as demonstrated by countless arguments of which Quine's (loc. cit.) is one. But if one adopts this methodology for trying to eliminate flaws and deficiencies, then despite the opportunities for good-faith disagreements that criteria such as (i)–(iii) still allow, one may succeed in doing so.

5. The status of stochastic theories

A stochastic theory (in regard to a particular class of experiments) is like a something-possible-happens theory except that it makes an additional assertion: that a 'random' one of the possible results a₁, a₂, … happens, with probabilities p₁, p₂, … specified by the theory. The 'random' values are either given as initial conditions at the beginning of time (as in pilot-wave versions of quantum theory) or produced by 'random processes' in which a physical system 'chooses randomly' which of several possible continuations of its trajectory it will follow (as in 'collapse' theories). In this section I argue that such a theory cannot be an explanatory description of nature, and under what circumstances it can nevertheless be useful as an approximation or mathematical idealisation.

We have become accustomed to the idea of physical quantities taking 'random' values with each possible value having a 'probability'. But the use of that idea in fundamental explanations in physics is inherently flawed, because statements assigning probabilities to events, or asserting that the events are random, form a deductively closed system from which no factual statement (statement about what happens physically) about those events follows (Papineau, 2002, 2010). For instance, one cannot identify probabilities with the actual frequencies in repeated experiments, because they do not equal them in any finite number of repeats, and infinitely repeated experiments do not occur. And in any case, no statement about frequencies in an infinite set implies anything about a finite subset – unless it is a 'typical' subset, but 'typical' is just another probabilistic concept, not a factual one, so that would be circular. Hence, notwithstanding that they are called 'probabilities', the pᵢ in a stochastic theory would be purely decorative (and hence the theory would remain a mere something-possible-happens theory) were it not for a special methodological rule that is usually assumed implicitly. There are many ways of making it explicit, but those that refer to individual measurement outcomes (rather than to infinite sequences of them, which do not occur in nature) and conform to the probability calculus, all agree on this:

    If a theory attaches numbers pᵢ to possible results aᵢ of an experiment, and calls those numbers 'probabilities', and if, in one or more instances of the experiment, the observed frequencies of the aᵢ differ significantly, according to some statistical test, from the pᵢ, then a scientific problem should be deemed to exist.    (3)

(Or on equivalent procedures, under philosophies that do not refer to 'problems'.) Every finite sequence fails some test for randomness, and if a statistical test is designed to fail, its failure does not create a new explicandum, and hence does not make any theory problematic. Therefore one must choose the statistical test in the rule (3) independently of the experiment's results (e.g. in advance of knowing them).

The rule (3) does not specify how many instances, nor which statistical test, nor at what 'significance level' the test should be applied. In the event of a disagreement about those matters, the experiment can be repeated until the proposed statistical tests all agree. They will also all agree even for a single experiment if one of the pᵢ is sufficiently close to 1. (Indeed, one can regard repeated instances of an experiment as a single experiment for which one of the pᵢ is close to 1.)
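To make the normative character of rule (3) concrete, here is a sketch of what following it amounts to in practice (my own illustration, not part of the paper; it assumes Python with numpy and scipy, a chi-squared test, and a significance level chosen before the results are known).

    # A sketch of following rule (3): a statistical test, chosen in advance of the
    # results, under which observed frequencies that differ 'significantly' from a
    # stochastic theory's asserted p_i are deemed to raise a scientific problem.
    import numpy as np
    from scipy.stats import chisquare

    def rule_3_deems_problem(counts, p, alpha=0.01):
        """True if the pre-chosen chi-squared test, at significance level alpha,
        finds the observed counts incompatible with the asserted probabilities p."""
        counts = np.asarray(counts, dtype=float)
        expected = np.asarray(p, dtype=float) * counts.sum()
        _, p_value = chisquare(f_obs=counts, f_exp=expected)
        return p_value < alpha

    # A theory asserting p = (0.5, 0.5), confronted with 1000 trials in which
    # result a1 occurred 430 times and a2 occurred 570 times:
    print(rule_3_deems_problem([430, 570], [0.5, 0.5]))   # True: a problem is deemed to exist
    print(rule_3_deems_problem([503, 497], [0.5, 0.5]))   # False

Nothing in such a procedure is a law of nature: the choice of test, of significance level and of what the experimenter is to do when the test fails are all instructions to experimenters, which is exactly the point pressed below.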
This is the rationale under which a stochastic theory's assertions about probabilities are 'tested'.¹⁰ I shall henceforward place the term in quotation marks when referring to such predictions because such 'tests' are not tests under the methodology of science I am advocating. They depend unavoidably on following rule (3), and the crucial thing about that rule is that it is methodological, and therefore normative: it is about how experimenters should behave and think in response to certain events; it is not a supposed law of nature, nor is it any factual claim about what happens in nature (the explicanda), nor is it derived from one. So it should not, by criterion (i), appear in a (good) scientific explanation. Nor, on the other hand, could it be appended to the explanatory scientific methodology I am advocating, for then it would be purely ad hoc: scientific methodology should be about whether reality seems to conform to our explanations; there is a problem when it does not, and only then. And one cannot make an explanation problematic merely by declaring it so. Nor, therefore, can one make a theory testable merely by promising to deem certain experimental results, consistent with the theory, problematic. If such results are unexplained by a theory (as in the example in Section 3), then the theory is already problematic and there is no need for a special rule such as (3). But if not, the theory is not problematic, and no ad hoc methodological rule can make it so.

¹⁰ A stochastic theory may also make non-probabilistic predictions and be tested through them. In particular, every stochastic theory makes something-possible-happens assertions, and these may allow it to be tested in the manner of Section 3. I shall give an example of this in Section 8.

Yet rule (3) is tacitly assumed by the Born rule (2), and consequently by 'collapse' theories, and by pilot-wave theories, which, like all stochastic theories, depend both for their physical meaning and for their 'testability' on (3). As I said, their 'probabilities' are merely decorative without it.

So, how is it that stochastic theories can be useful in practice? I shall show in Section 7 how 'collapse' variants of quantum theory can, even though they are ruled out as descriptions of nature by the above argument. But most useful stochastic theories take the form of unphysically idealised models whose logic is as follows. The explicandum is some process whose real defining property is awkward or intractable to express precisely (such as the fairness of a pair of dice and of how they are thrown, or the non-designedness of mutations in genes). One replaces that property by the mathematical property of randomness. This method of approximation can be useful only if there is a good explanation for why one can expect the intended purpose of the model to be unaffected by that replacement. In the case of dice, the intended purpose of fairness includes things like the outcomes being unpredictable by the players. One would argue, among other things, that this purpose is achieved because the manner in which they are thrown does not give the thrower an opportunity to determine the outcome, because that would require motor control far beyond the precision of which human muscles and nerves are capable; and therefore that any pattern in the results that was meaningful in the game would be an unexplained regularity. Then one would argue that the optimal strategy for playing a game involving dice thrown in a realistic way is identical to what it would be in a game where the dice were replaced by a generator of random numbers – even though the latter is physically impossible. Then one can develop useful strategies without needing to calculate specifically what human hands and real dice would do, which would require intractable computations of microscopic processes even if one knew the precise initial conditions, of which one is necessarily ignorant.

But one thing that cannot be modelled by random numbers is ignorance itself (as in the 'ignorance interpretation of probability'). Fortunately, our ignorance of the microstate of the gas molecules in a flame has no bearing on why it warms us. The prediction (via the kinetic theory of gases, etc.) that it will radiate heat under particular circumstances rests not on ignorance but on a substantive, independently testable explanation of why it would make very little difference to regard the state of the flame as random with a particular probability distribution. That explanation is, again, that under the given circumstances any state of the flame that would not do so would have properties that would constitute an unexplained regularity and hence a new explicandum and a problem. So it is because we know something about the flame – not because of any of the things we do not know – that we expect it to warm us. Expectations, as always in science, are derived from explanatory theories.

The prevalence of the misconceptions about probability that I have described in this section has accustomed us to reinterpreting all human thought in probabilistic terms, as advocated by 'Bayesian' philosophy, but this is a mistake. When a jury disagrees on what is 'beyond reasonable doubt' or 'true on the balance of probabilities', it is not because jurors have disagreed on the numerical value of some quantities obeying the probability calculus. What they have actually done (or should have done, according to the methodology I advocate here) is try to explain the evidence. Thus 'guilty beyond reasonable doubt' should mean that they consider all explanations that they can think of, that clear the defendant, bad by criteria such as (i)–(iii). And if they interpret those criteria differently, it should not be because they are with hindsight exploiting the leeway in them in order to deliver a particular verdict – because that would itself violate criterion (iii). Although the consequences of error may (or may not) be very different in the laboratory and the courtroom, what we should do – and all we can do, as I explained in Section 4 – is adopt methodologies, in science as in law and everything else, whose purpose is to facilitate error correction, not to create (a semblance of) authority for our surviving guesses.

6. Experimental error

Since error can never be perfectly eliminated from experiments, their results are meaningless unless accompanied by an estimate of the errors in them. Error estimates are often treated as probabilistic – either literally, in the sense of the results being treated as stochastic variables, or as credences. But since the approach I am advocating eschews both literal probabilities and credences, I must now give a non-probabilistic account of the nature of errors and error estimates.

Any proposed scientific experiment must come with an explanatory theory of how the apparatus, when used in practice, would work. This is used to estimate what I shall call the least expected error and the greatest expected error, which I shall define in a way that does not refer to probability.

For example, suppose that we use everyday equipment to measure out pieces of dough in a kitchen, intending them to have equal weights, and that later we find that they do indeed all have equal weights according to a state-of-the-art laboratory balance. The theories of that and of the kitchen apparatus will provide an explanation for why the latter is much less accurate (say, because of friction and play in its moving parts), so those results are unexpected under the definition of 'expect' in Section 2, and constitute an unexplained regularity (as with the dice in Section 4). They are therefore a new explicandum, and problematic by criterion (i). 'Coincidence' would be one explanation consistent with these events, but under the circumstances that would be a bad explanation by criterion (iii). The least error which, if subsequently discovered, would not make the theory of the apparatus problematic, I call the least expected error.
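The verdict in the dough example can be put as a simple check (a sketch of my own, with invented figures): results whose spread is smaller than the least expected error constitute an unexplained regularity and are therefore problematic by criterion (i).

    def unexplained_regularity(weights, least_expected_error):
        """True if the spread of the results is smaller than the apparatus theory
        can account for - i.e. the results agree 'too well'."""
        return max(weights) - min(weights) < least_expected_error

    # Suppose the kitchen equipment's theory implies its results should scatter by
    # at least 5 grams, but the laboratory balance finds the pieces nearly equal:
    print(unexplained_regularity([250.1, 249.9, 250.0, 250.1], 5.0))   # True: problematic
    print(unexplained_regularity([250.0, 243.0, 256.0, 251.0], 5.0))   # False: as expected

The corresponding check against the greatest expected error, defined next, flags the opposite case, in which some discrepancy is larger than the apparatus theory can account for.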
Similarly if, in the laboratory, one of the pieces was found to weigh more than the others combined, and if, according to our explanation of how the measuring-out process worked, such a piece would have been noticed during it and subdivided, that would be an unexplained irregularity and therefore a problem by criteria (i) and (ii). The greatest such error which, if subsequently discovered, would not make the theory of the apparatus problematic, I call the greatest expected error.

Understood in this way, the greatest and least expected errors are properties of the state of explanatory theories about the apparatus and its use. Improving those theories could increase or reduce the expected errors even if the apparatus and its use are unchanged.

Experimental errors are often categorised as 'random' or 'systematic', but those terms can be misleading: Random errors are those caused by processes such as Brownian motion, which are well approximated by a known stochastic law. So they can be understood in the manner of Section 4. And in theory testing, where experiments are necessarily repeatable, such processes do not necessarily cause errors, because their effect could be made negligible just by repeating the experiment sufficiently often. So what remains to be explained here are systematic errors – that is, errors that are not random but unknown, and therefore conform to no known system. (This includes errors that could be approximated as random with suitable stochastic laws, perhaps depending on time and other conditions, if those were known.) Like the 'unknown unknowns' of military planning (Rumsfeld, 2002), unknown variables in physics are counter-intuitive. Theories of them are something-possible-happens theories, which I have discussed, but what does it mean to estimate errors that are unknown, yet not random?

For example, consider measuring a real-valued physical quantity χ that all relevant explanations agree is a constant of nature – say, the speed of light in units of a standard rod and clock. For any given design of measuring instrument (say, that of Fizeau), even with repeated runs of the experiment whose results are optimally combined to give an overall result x, the greatest expected value of |x − χ| cannot be reduced beyond a certain limit, say ε, because of systematic errors. (I shall for simplicity consider only cases where the expectations of positive and negative errors are sufficiently similar for |x − χ| to be an appropriate quantity to estimate.)

As always with something-possible-happens theories, there will be a set of possible values, determined by various explanations. I have already mentioned the theories that χ is constant under suitable circumstances, and is a real number (not a vector, say). There will also be upper and lower bounds a and b on x, determined by explanations of the apparatus and of χ itself. For example, there was a limit on the speed at which Fizeau's cogged wheel could be smoothly rotated. And before the speed of light was ever measured, there was already a good explanation that it must be greater than that of sound because lightning arrives before thunder. Furthermore, even though χ has a continuous range of possible values, x must be instantiated in a discrete variable, because even in cases where the result appears in a nominally continuous variable, like a pointer angle, it must always end up in a discrete form such as numerals recorded on paper, or in brains, before it can participate in the processes of scientific methodology. So x has only finitely many possible values between a and b. For simplicity, suppose that they have a constant spacing δ.

So it would not make sense to estimate |x − χ| at less than δ/2, nor of course at more than b − a. But what does it mean to estimate it at some ε with δ ≪ ε ≪ b − a? The estimate ε cannot be regarded as probabilistic (e.g. as '|x − χ| is probably no more than ε'), since that would allow further measurements with the same instrument to improve the accuracy of the result, and we are dealing with the case where they cannot. Nor, for similar reasons, can it be regarded as a bound on |x − χ|, since then, if x₁ and x₂ were two different results, we should have |x₁ − χ| < ε and |x₂ − χ| < ε and hence |½(x₁ + x₂) − χ| < ε − ½|x₁ − x₂|, which would contradict the assumption that ε is the best estimate of the error. (Unless x₁ = x₂, but if all results were equal despite ε ≫ δ, this would be an unexplained regularity and a new problem, as in the previous example.) So the estimate can only mean, again as in that example, that if the true error |x − χ| were later revealed to exceed ε (by a measurement that was more accurate according to an unrivalled good explanation), that would make our explanation of the measurement process problematic. And that explanation, being about unknown errors, cannot be of any help in increasing our knowledge of χ: knowledge cannot be obtained from ignorance.

Some error processes have inherently bounded effects, but most explanations that inform error estimates are, like the ones in the dough example, not about the error processes themselves, but about processes in the experiment that are expected to prevent or correct or detect errors, but only if they are above a certain size. Hence, if we estimate |x − χ| at ε, and then perform the whole measurement several times with results x₁, x₂, …, and max |xᵢ − xⱼ| turns out to be less than ε, that tells us nothing more about χ than we knew when we had performed it once. In particular, it need not (absent other knowledge) tell us that χ is 'likely' to be, or can be expected to be, between the largest and the smallest of the xᵢ. Indeed, as I said, if max |xᵢ − xⱼ| is too small (smaller than the least expected error), then the whole experiment is (absent further explanation) problematic, just as it is when some |xᵢ − xⱼ| is greater than ε.

So both the inevitable presence in the multiverse of all possible errors after any experiment, and the validity of approximating some but not all of them as probabilistic, are consistent with the conception of science that I am advocating.

7. 'Collapse' variants of quantum theory

'Collapse' variants of quantum theory invoke a special law of motion, namely the Born rule (2), for times t when a measurement is completed. This law is unique in being the only stochastic law of motion ever proposed in fundamental physics.¹¹ Though vague, it does have a "domain of validity" that includes all experiments that are currently feasible. Within that domain, it allows those theories to be 'tested' as stochastic theories, according to the rationale described in Section 4, and all such 'tests' to date have been passed. Specifically, the logic of such a 'test' is as follows:

¹¹ Pilot-wave theories and 'dynamical collapse' theories implement the Born rule indirectly, but are also stochastic theories in the sense I have defined. Though they do not give measurements a special status, the conclusions of this section apply to them too, because they depend on Rule (3) for their 'testing'.

According to a law of motion L under a 'collapse' theory with the Born rule, the probabilities of the possible results a₁, a₂, … of the given measurement are p₁, p₂, … respectively. A different law of motion (or a different version of quantum theory, or some other stochastic theory), predicts different probabilities q₁, q₂, …. The two sets of probabilities follow from the only known good explanations of the phenomena in question. A statistical test is chosen, together with some significance level α and a number N, where N is sufficient for the following. The experiment is performed N times. Let n₁, n₂, … be the number of times that the result was a₁, a₂, … respectively. The statistical test distinguishes between three possibilities: (i) the probability is less than α that the observed frequencies n₁/N, n₂/N, … would differ by at least this much from the p₁, p₂, … and at least this little from the q₁, q₂, … if the probabilities really were p₁, p₂, …; or (ii) the same with
the p's and q's interchanged; or (iii) both. N and α are chosen so as to make it impossible for neither (i) nor (ii) to hold.¹² If (iii) holds, the experiment is inconclusive. But if (i) holds, Rule (3) requires the experimenter to behave as if the first theory had been (tentatively) refuted; or if (ii) holds, the same for the second theory. As I argued in Section 4, Rule (3) cannot make an explanation problematic; so there has to be an explanatory reason, not just a rule, for regarding the respective theories as problematic if that happens – see the following section.

¹² Confirmation-based methodologies would interpret (i) as 'second theory confirmed, first refuted' (to significance level α), and (ii) as vice-versa, and (iii) as 'neither theory confirmed, both refuted', and N and α are chosen so as to make it impossible for both to be 'confirmed'.

For convenience in what follows, I shall re-state the statistical-test part of the above 'testing' procedure in terms of gambling: The experimenter considers a class of thought experiments in which two gamblers, one of whom knows only the first theory and the other only the second, bet with each other, at mutually agreed odds, on what each result a₁, a₂, …, or combination of those results, will be. They behave rationally in the sense of classical (probabilistic) game theory, i.e. they try to maximise the expectation values of their winnings, as determined by the Born rule (2) and their respective theories. Since the p₁, p₂, … are not all equal to the corresponding q₁, q₂, …, the experimenter can set odds for a pattern of bets on various propositions (such as 'the next outcome will be between a₂ and a₇'), to which each gambler would agree because each would calculate that the probabilistic expectation value of his winnings is positive for each bet. Each such pattern corresponds to a statistical test of whether the results are 'significantly' incompatible with the first theory, the second, or both. Finally, given the actual results of the experiments, the experimenter calculates the amount that the winner would have won (which is the amount that the loser would have lost). If it exceeds a certain value, rule (3) then requires the loser's theory to be deemed 'refuted' at the appropriate significance level. If neither wins more than that value, then the experiment is inconclusive.

Note, in passing, that these 'tests' of what their advocates call 'probabilistic predictions' are not the only ones by which theories framed under 'collapse' theories can be tested. They, like most versions of quantum theory, also make ordinary (i.e. non-probabilistic) predictions of, for instance, emission spectra, and the location of dark bands in interference patterns. It is true that in experimental practice there are small deviations from those predictions, for example due to differences between the real system and apparatus and idealised models of them assumed by the prediction, but in the absence of a rival explanatory theory predicting those deviations, they are validly treated as experimental errors, as in Section 6.

Note also that the Born rule's identification of the quantities¹³ |⟨xᵢ|ψ(t)⟩|² as probabilities cannot hold at general times t for any observable with eigenstates |xᵢ⟩ for which interference is detectable, directly or indirectly. That is because during interference phenomena, i.e. almost all the time in almost all quantum systems, those quantities violate the axioms of the probability calculus (Deutsch, Ekert, & Lupacchini, 2000).

¹³ These are discrete quantities. For continuous eigenvalues x, the Born rule says that |⟨x|ψ(t)⟩|², usually represented as |ψ(x, t)|², is a probability distribution function over x, at certain times t.

8. Everettian quantum theory

Everettian quantum theory's combination of determinism with unpredictable outcomes of experiments has motivated two main criticisms of it in relation to probability. The one that I presented in Section 1, which denies the theory's testability (or misleadingly, its confirmability), is called the 'epistemic problem'. The other – the 'practical problem' – denies that decisions whose options have multiple simultaneous outcomes can be made rationally if all the possible outcomes of any option are going to happen anyway. The latter criticism has been rebutted by the so-called decision-theoretic argument (Deutsch, 1999; Wallace, 2003). Using a non-probabilistic version of game-theoretic rationality,¹⁴ it proves¹⁵ that (in the terminology of the present paper) rational gamblers who knew Everettian quantum theory (and considered it a good explanation) but knew no 'collapse' variants of it (or considered them bad explanations), and have therefore made no probabilistic assumptions, when playing games in which randomisers were replaced by quantum measurements, would place their bets as if those were randomisers, i.e. using the probabilistic rule (2) according to the methodological rule (3).

¹⁴ For a comprehensive defence of that version of game-theoretic rationality, and rebuttals of counter-arguments, see Wallace (2012) Part II.

¹⁵ A proof is not necessarily an explanation, so 'proved' in this context is a lesser claim than 'fully explained'. Quantum theory is currently in a similar explanatory state to that of the general theory of relativity in its early years: it explained the phenomena of gravity decisively better than any rival theory, and from its principles many things could be proved about the orbits of planets, the behaviour of clocks, etc., including testable predictions; nevertheless the entity that it directly referred to, namely spacetime, was only imperfectly explained, so that, for instance, the event horizon in the Schwarzschild solution was mistaken for a physical singularity. Analogously, the entity that quantum theory directly refers to, namely the multiverse, is only imperfectly explained at present, in terms of approximative entities such as universes (Deutsch, 2010).

The decision-theoretic argument, since it depends on game-theoretic axioms, which are normative, is itself a methodological theory, not a scientific one. And therefore, according to it, all valid uses of probability in decision-making are methodological too. They apply when, and only when, some emergent physical phenomena are well approximated as 'measurements', 'decisions' etc. so that the axioms of non-probabilistic game theory are applicable. Applying them is a substantive step that does not (and could not) follow from scientific theories.

Solving the 'practical problem' does not fully solve the 'epistemic problem', for one cannot directly translate the gamblers' situation into the scientist's. For instance, it is not clear how the scientist's equivalent of 'winnings' on discovering a true theory should be modelled, since there is no subjective difference between discovering a true theory and mistakenly thinking that one has. On the view, which I have argued here is a misconception, that scientific methodology is about generating confirmation of, or credence for, theories being true or probable, the 'epistemic' problem takes the form: 'the decision-theoretic argument proves that if one believes Everettian quantum theory, it is rational (in that non-probabilistic game-theoretic sense) to make choices as though the outcomes were probabilistic; but it cannot be rational in any sense to believe Everettian quantum theory in the first place because, by hypothesis, we would then believe that whenever some experimental evidence has led us to believe Everettian quantum theory, the contrary evidence was in reality present too'. There are arguments (e.g. Greaves, 2007) that the 'epistemic' conclusion nevertheless follows from versions of the decision-theoretic argument, but unfortunately they share the same misconception about confirmation and credence, so for present purposes it would be invalid to appeal to them. Hence I must connect the decision-theoretic argument to theory testing via a route that does not assign confirmation, credence, or probability to theories.

The connection is, fortunately, straightforward: it is via expectations, which, in the non-probabilistic sense defined in Section 2, are not credences and do not obey the probability calculus. Nor, as I showed there, is it per se inconsistent to expect an experiment to
have the result x even though one knows that not-x will also happen – provided that there is an explanation for such expectations, which, indeed, there is, as follows.

Suppose that gamblers who knew Everettian quantum theory, who did not use any probabilistic rule such as (2) or (3), and who were rational in the non-probabilistic sense required by the decision-theoretic argument, were to play the quantum-measurement-driven game of Section 7. That argument says that they would place exactly the same bets as described there. If one of them were to lose by a large enough margin, his expectation (in the non-probabilistic sense defined in Section 2) would have been violated, while the other's would not. Hence the loser's theory would be problematic and the winner's not. The experimenter who seeks good explanations can infer that if the gamblers were then informed of each other's theory, they would both consider that the loser's theory has been refuted, and hence the experimenter – who is now aware of the same evidence and theories as they are – must agree with them.
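The game of Section 7 is not reproduced here, so the following sketch is only a schematic stand-in for it: two gamblers whose theories lead them to expect different things about the same repeated two-outcome measurement (weights 0.8 and 0.5, purely illustrative numbers) accept a bet that each regards as favourable by his own lights. The record of outcomes is generated by sampling at fixed frequencies purely to produce the kind of data one instance of an experimenter would see; nothing in the argument above depends on that device, nor on this particular bet structure. The only point illustrated is that one gambler ends up losing by a large margin, so that his expectation, and not the other's, is violated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rival weights attached to the outcome '0' of the same repeated measurement.
# (Illustrative numbers only; in the text these would follow from the gamblers' theories.)
w_theory_1 = 0.8   # gambler 1's theory
w_theory_2 = 0.5   # gambler 2's theory

# A bet both gamblers regard as favourable by their own theory's weights:
# gambler 2 pays gambler 1 one unit when '0' happens; gambler 1 pays two units when '1' does.
# Expected gain per round: gambler 1: 0.8*1 - 0.2*2 = +0.4; gambler 2: 0.5*2 - 0.5*1 = +0.5.
stake_on_0, stake_on_1 = 1.0, 2.0

# A typical record of N outcomes (sampled at the weights of theory 1, purely as a
# stand-in for the record that one instance of the experimenter would see).
N = 1000
outcomes = rng.random(N) < w_theory_1          # True means outcome '0'

gambler_1 = np.where(outcomes, stake_on_0, -stake_on_1).sum()
gambler_2 = -gambler_1

print(f"gambler 1 (weight 0.8): net {gambler_1:+.0f} units over {N} rounds")
print(f"gambler 2 (weight 0.5): net {gambler_2:+.0f} units over {N} rounds")
# Gambler 2 loses by a large margin: his expectation is violated, gambler 1's is not.
```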
Following this testing procedure will (tentatively) refute different theories in different universes. As I said in Section 4, this is no defect in a methodology that does not purport to be guaranteed, nor probabilistically likely, to select true theories. However, note, as a reassuring consistency check (not a derivation – that would be circular!), that the decision-theoretic argument also implies that, on the assumption that one of the theories is true, it is rational (by the criteria used in the argument) to bet that the other one will be refuted.

Consequently, the conventional modes of testing ‘collapse’ variants of quantum theory, or theories formulated under them, are valid for Everettian quantum theory: any experiment that ‘tests’ a probabilistic prediction of a ‘collapse’ variant is automatically also a valid test of the corresponding multi-universe prediction of Everettian quantum theory, because it depends neither on the Born rule nor on any other assumption referring to probability.
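For concreteness, here is a minimal sketch of the sort of conventional statistical ‘test’ meant above: a predicted weight for one outcome of a repeated two-outcome measurement is checked against the observed frequency in N runs at a fixed significance level. Which theory supplied the number makes no difference to the procedure, which is the point: the same computation serves as a test of a ‘collapse’ variant and of the corresponding Everettian prediction. The predicted weight, N, α and the observed count below are illustrative choices, not values from the text.

```python
import math

def consistent_at_alpha(p_predicted: float, n_observed: int, n_runs: int,
                        z_alpha: float = 1.96) -> bool:
    """Two-sided check (normal approximation) that the observed frequency of an
    outcome lies within the acceptance region of the predicted weight.
    z_alpha = 1.96 corresponds to a significance level of roughly alpha = 0.05."""
    freq = n_observed / n_runs
    half_width = z_alpha * math.sqrt(p_predicted * (1 - p_predicted) / n_runs)
    return abs(freq - p_predicted) <= half_width

# Illustrative numbers: both the 'collapse' variant and Everettian quantum theory
# attach the weight 0.64 to one outcome; a rival theory attaches 0.5 to it.
runs, observed = 10_000, 6_450

print(consistent_at_alpha(0.64, observed, runs))   # True: the prediction survives the test
print(consistent_at_alpha(0.50, observed, runs))   # False: the rival prediction fails the same test
```

With these values of N and α the two acceptance intervals do not overlap, which is the situation described in footnote 12: no record of outcomes could count as ‘confirming’ both predictions.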
Indeed, as I have argued in Sections 4 and 5, on this view of the logic of testability it is stochastic theories, including ‘collapse’ variants of quantum theory, that suffer from the very flaw that Everettian quantum theory is accused of having under conventional theories of probability, namely: conventional ‘tests’ of (or under) those variants all depend on arbitrary instructions, such as the rule (3), about what experimenters should think. If it were deemed valid to add such instructions to a scientific theory's assertions about reality, the same could be done to Everettian quantum theory and the entire controversy about its testability would collapse by fiat. But it does not need special methodological rules. It is testable through its physical assertions alone.

So all scientific theories can (as they must) be tested under the same methodological rules. There is no need, and no room, for special pleading on behalf of ‘collapse’ theories. Nor is there any room for stochastic theories, except those that can be explained as approximations in the sense of Section 4. Indeed, Albrecht and Phillips (2014) suggest that all stochastic phenomena currently known to physics are quantum phenomena in disguise.

Traditional ‘collapse’ theories are also inherently far worse explanations than Everettian quantum theory, by criterion (i), since they explain neither what happens physically between measurements nor what happens during a ‘collapse’. They also suffer from a class of inconsistencies known as the ‘measurement problem’ (thus failing criterion (ii)), which do not exist in Everettian quantum theory.

Note, however, that Everettian quantum theory is in principle testable against ‘dynamical collapse’ stochastic theories such as the Ghirardi–Rimini–Weber theory with fixed parameters, or any ‘collapse’ variants that specify explicitly enough the conditions under which ‘collapse’ is supposed to happen. That is because in principle one can then construct interference experiments that would produce a deterministic result $a_1$ under Everettian quantum theory but a range of possibilities $a_1, a_2, \ldots$ under those theories (Deutsch, 1985). Those are something-possible-happens assertions, so even though those theories also assign ‘probabilities’ to the possible values, the something-possible-happens assertions alone, if borne out (with any result other than $a_1$), would be sufficient to refute Everettian quantum theory. If, in addition, repeated results had statistics close to those of the probabilistic assertions, there would be a new problem of explaining that: a literally stochastic theory is not explanatory, as explained in Section 5. If, on the other hand, repeated results $a_1$ were obtained, that would refute the non-Everettian theories, as explained in Section 3 – but they can also be ruled out without experimentation, as bad explanations.
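The interference experiments referred to are not specified here, but their logic can be sketched numerically. The toy model below is an assumption-laden illustration, not the specific construction in Deutsch (1985): a two-state system is measured by a two-state ‘apparatus’, the measurement interaction is then reversed, and the system is interrogated in the basis of its initial superposition. If the joint evolution is purely unitary, as Everettian quantum theory asserts, the outcome $a_1$ is obtained with certainty; if a collapse occurred at the measurement step, modelled here crudely by replacing the entangled state with the corresponding mixture, both outcomes occur over repeated runs.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)          # initial system state
ready = ket0                               # initial apparatus ('observer') state

# Perfect measurement interaction U: |x>|ready> -> |x>|saw_x> (a CNOT on system, apparatus).
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

initial = np.kron(plus, ready)             # |+>|ready>
after_measurement = U @ initial            # entangled: (|0,saw_0> + |1,saw_1>)/sqrt(2)

# Everettian (unitary) evolution: keep the pure state.
rho_everett = np.outer(after_measurement, after_measurement)

# 'Collapse' at the measurement step: the same records, but as a mixture.
branch0, branch1 = np.kron(ket0, ket0), np.kron(ket1, ket1)
rho_collapse = 0.5 * np.outer(branch0, branch0) + 0.5 * np.outer(branch1, branch1)

# Interference test: undo the measurement interaction, then ask whether the system
# is back in |+> (outcome a1) or has ended up in |-> (outcome a2).
minus = (ket0 - ket1) / np.sqrt(2)
P_a1 = np.kron(np.outer(plus, plus), np.eye(2))
P_a2 = np.kron(np.outer(minus, minus), np.eye(2))

def outcome_weights(rho):
    rho_undone = U @ rho @ U.conj().T      # U is its own inverse here
    return (round(float(np.trace(P_a1 @ rho_undone).real), 6),
            round(float(np.trace(P_a2 @ rho_undone).real), 6))

print("unitary (Everettian):   ", outcome_weights(rho_everett))   # (1.0, 0.0): a1 with certainty
print("collapse at measurement:", outcome_weights(rho_collapse))  # (0.5, 0.5): a range of outcomes
```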
9. Generalisation to constructor theory

Marletto (2015) has shown that if the decision-theoretic argument is valid, it also applies to a wide class of theories that conform to constructor theory (Deutsch & Marletto, 2015). The arguments of this paper would apply to any theory in that class: they are all testable. (Everettian quantum theory is of course in the class; versions of quantum theory that invoke random physical quantities are not.)

10. Conclusions

By adopting a conception – based on Popper's – of scientific theories as conjectural and explanatory and rooted in problems (rather than being positivistic, instrumentalist and rooted in evidence), and a scientific methodology not involving induction, confirmation, probability or degrees of credence, and bearing in mind the decision-theoretic argument for betting-type decisions, we can eliminate the perceived problems about testing Everettian quantum theory and arrive at several simplifications of methodological issues in general.

‘Bayesian’ credences are eliminated from the methodology of science, but rational expectations are given an objective meaning independent of subjective beliefs.

The claim that standard methods of testing are invalid for Everettian quantum theory depends on adopting a positivist or instrumentalist view of what the theory is about. That claim evaporates, given that scientific theories are about explaining the physical world.

Even everything-possible-happens theories can be testable. But Everettian quantum theory is not one of them. Because of its explanatory structure (exploited by, for instance, the decision-theoretic argument) it is testable in all the standard ways. It is the predictions of its ‘collapse’ variants (and any theory predicting literally stochastic processes in nature) that are not genuinely testable: their ‘tests’ depend on scientists conforming to a rule of behaviour, and not solely on explanations conforming to reality.

Acknowledgements

I am grateful to David Miller, Liberty Fitz-Claridge, Daniela Frauchiger, Borzumehr Toloui and to the anonymous referees for helpful comments on earlier drafts of this paper, and especially to Chiara Marletto for illuminating conversations and incisive criticism at every stage of this work.

This work was supported in part by a grant from the Templeton World Charity Foundation (Grant No. TWCF0068/AB42).
The opinions expressed in this paper are those of the author and do not necessarily reflect the views of the Foundation.

References

Adam, T., et al. (2012). Measurement of the neutrino velocity with the OPERA detector in the CNGS beam. Journal of High Energy Physics, 93.
Albert, D., & Loewer, B. (1988). Interpreting the many worlds interpretation. Synthese, 77, 195–213.
Albrecht, A., & Phillips, D. (2014). Origin of probabilities and their application to the multiverse. Physical Review D, 90(12), 123514.
Bitbol, M. (Ed.) (1996). The interpretation of quantum mechanics: Dublin seminars (1949–1955) and other unpublished essays, Erwin Schrödinger. OxBow Press.
Dawid, R., & Thébault, K. (2014). Against the empirical viability of the Deutsch–Wallace–Everett approach to quantum mechanics. Studies in History and Philosophy of Modern Physics, 47, 55–61.
Deutsch, D. (1985). Quantum theory as a universal physical theory. International Journal of Theoretical Physics, 24, 1–41.
Deutsch, D. (1997). The fabric of reality (pp. 4–6). London: The Penguin Press.
Deutsch, D. (1999). Quantum theory of probability and decisions. Proceedings of the Royal Society of London, A455, 3129–3137.
Deutsch, D., Ekert, A., & Lupacchini, R. (2000). Machines, logic and quantum physics. Bulletin of Symbolic Logic, 3, 3.
Deutsch, D. (2010). Apart from universes. In S. Saunders, J. Barrett, A. Kent, & D. Wallace (Eds.), Many worlds? Everett, quantum theory, & reality (pp. 542–552). UK: Oxford University Press.
Deutsch, D. (2011). The beginning of infinity. London: The Penguin Press.
Deutsch, D., & Marletto, C. (2015). Constructor theory of information. Proceedings of the Royal Society of London, A471, 2174.
DeWitt, B. S., & Graham, N. (Eds.) (1973). The many-worlds interpretation of quantum mechanics. Princeton: Princeton Univ. Press.
Feynman, R. P. (1974). Commencement address. USA: California Institute of Technology.
Greaves, H. (2007). On the Everettian epistemic problem. Studies in History and Philosophy of Modern Physics, 38(1), 120–152.
Greaves, H., & Myrvold, W. (2010). Everett and evidence. In Saunders et al., op. cit. (pp. 264–306).
Kent, A. (2010). One world versus many: The inadequacy of Everettian accounts of evolution, probability, and scientific confirmation. In Saunders et al., op. cit. (pp. 307–354).
Lockwood, M. (1996). ‘Many Minds’ interpretations of quantum mechanics. British Journal for the Philosophy of Science, 47(2), 159–188.
Marletto, C. (2015). Constructor theory of probability. 〈http://arxiv.org/abs/1507.03287〉.
Miller, D. (2006). Out of error: Further essays on critical rationalism. UK: Ashgate.
Papineau, D. (2002). Decisions and many minds. Seminar at All Souls College, Oxford, 31 October 2002.
Papineau, D. (2010). A fair deal for Everettians. In Saunders et al., op. cit. (pp. 206–226).
Popper, K. R. (1959). The logic of scientific discovery. UK: Routledge.
Popper, K. R. (1960). Knowledge without authority. In D. Miller (Ed.), Popper selections (1985). USA: Princeton University Press.
Quine, W. V. O. (1960). Word and object (p. 189). USA: MIT Press.
Rumsfeld, D. (2002). United States Department of Defense news briefing, February 12.
Wallace, D. (2003). Everettian rationality: Defending Deutsch's approach to probability in the Everett interpretation. Studies in History and Philosophy of Modern Physics, 34, 415–438.
Wallace, D. (2012). The emergent multiverse: Quantum theory according to the Everett interpretation. UK: Oxford University Press.