
Philos Stud (2013) 164:127–139

DOI 10.1007/s11098-013-0091-0

The puzzle of the unmarked clock and the new rational reflection principle

Adam Elga

Published online: 14 February 2013


© Springer Science+Business Media Dordrecht 2013

Abstract The puzzle of the unmarked clock derives from a conflict between the
following: (1) a plausible principle of epistemic modesty, and (2) Rational
Reflection, a principle saying how one's beliefs about what it is rational to believe
constrain the rest of one's beliefs. An independently motivated improvement to
Rational Reflection preserves its spirit while resolving the conflict.

Keywords Reflection principle · Externalism · Christensen · Williamson ·
Luminosity · Rationality · Epistemic modesty · Bayesianism · Epistemic akrasia

1 Introduction

Will it rain tomorrow? I'm not sure. My credence (subjective probability) that it will
rain is 40 %.
Is 40 % the rational credence for me to have that it will rain? I'm not sure about
that either. Properly taking all of one's evidence into account can be tricky. I'm not
sure that I've done it exactly right.1
So: I am uncertain whether it will rain. And I am uncertain about the rational
degree of belief for me to have that it will rain.
Is there a principle that links these two sorts of uncertainty? Or can any old
beliefs about the weather be rationally combined with any old beliefs about what it
is rational to believe about the weather?2

1 Compare to Christensen (2007, 2010).
2 Here and elsewhere I use talk of 'beliefs' as shorthand for talk of degrees of belief.

A. Elga (✉)
Princeton University, Princeton, NJ, USA
e-mail: [email protected]


2 The puzzle of the unmarked clock

There does seem to be a principle that links the two sorts of uncertainty. Below I
will motivate an improved version of just such a principle. But first, a puzzle:
Take a quick look at this picture of an irritatingly austere clock, whose minute
hand moves in discrete 1-min jumps:3

[Figure: a clock face with hands but no numerals or markings]
If your eyes are like mine, it won't be clear whether the clock reads 12:17 or
some other nearby time. What should you believe about the time that the clock
reads?
That, it seems, depends on what the clock really reads. If the clock really reads
12:17, then you should be highly confident that it reads a time near 12:17 (99 %
confident, say, that the time is within a minute of 12:17). But you should be highly
uncertain as between 12:16, 12:17, and 12:18. For definiteness, suppose that given
your visual acuity, you should have roughly the same degree of belief in each of
these three possibilities.
If the clock had instead indicated a different time (4:03, say), you should have
instead been 99 % confident that the time was within a minute of 4:03, but highly
uncertain as between 4:02, 4:03, and 4:04. And the corresponding pattern holds for
any other time, as well.
But now there is a problem. Suppose that you are 99 % confident in H, the
proposition that the time is within a minute of 12:17. You ask yourself: is 99 % the
rational level of confidence for you to have in H? You might reason as follows.
Either the time (indicated on the clock) is 12:17 or not:

- If the time is 12:17, then 99 % is the correct level of confidence for you to
have in H.
- If the time is not 12:17, then 99 % is an irrationally high level of confidence
for you to have in H. For example, if the time is really 12:18, then you should
have less than 99 % confidence in H. For in that case, you should have more
than 1 % confidence that the time is 12:19, a possibility incompatible with H.

So on the one hand, you have 99 % confidence in H. On the other hand, you think
that 99 % is a level of confidence that is definitely not too low, and is probably too high
(since you think that the time is probably not exactly 12:17).4 But that looks irrational.

3 The clock example is due to Williamson (2007), as adapted by Christensen (2010, pp. 122–125), whose
discussion I follow in this section. 'Irritatingly austere' is from Williamson (2010, p. 13).
4 Compare to Christensen (2010, p. 124): it seems that [a subject looking at the clock] 'should think that .3 is
probably too high a credence for her to have that [the clock reads a particular time], and certainly not too low.'


Compare: Pangloss is 99 % confident that the next round of Mideast peace talks
will succeed. But he also thinks that he is often irrationally overconfident in good
outcomes, and never irrationally underconfident in them. As a result, he thinks that
99 % is a level of confidence that is definitely not too low, and is probably too high.5
Pangloss seems to have an irrational combination of attitudes. And, at least at
first glance, your attitudes toward the unmarked clock look to be just as irrational.
But given your imperfect ability to distinguish nearby times, your attitudes toward
the clock seem to be perfectly rational. The puzzle is to resolve this conflict. What
degrees of belief should you have about what time the clock displays?

3 Probing the assumptions

A careful treatment of the puzzle would probe some of the assumptions made in the
informal presentation above. Is the puzzle an artifact of the idealized way in which
the setup was assumed to be completely rotationally symmetric? Or of a
questionable assumption that the clock viewer is sure just how the position of the
clock hand determines what it is rational for her to believe? Or of an undefended
assumption that the viewer has perfect access to her exact degrees of belief? Or of
the assumption that it is rational for the clock viewer to be uncertain what it is
rational to believe?
These are all fair questions, but we needn't get caught up in the details. For the
puzzle of the clock is an instance of a much more general conflict, a conflict that
can't be avoided by tweaking the details of the clock setup. And laying out and
resolving the more general conflict will resolve the puzzle as a side-effect.

4 Rational Reflection

To begin laying out the general conflict, recall the question from the end of Sect. 1:
can any old beliefs about the weather be rationally combined with any old beliefs
about what it is rational to believe about the weather?
The answer is: no. For example:
Joe is certain just what degrees of belief he ought to have. In particular, he is
certain that he ought rationally have degree of belief 99 % that it will rain. But
despite this, his degree of belief that it will rain is only 1 %.
I hope you agree that Joe's combination of attitudes is unreasonable. But if not,
imagine chatting with Joe about the weather.
"The evidence strongly supports that it will rain," he might say. "There are
plenty of storm clouds nearby, and the barometric pressure is low.
Furthermore, it has rained every day for the last month, and this is the rainy
season. Yes, I'm quite certain of exactly what degrees of belief are rational for

5 This example is based on the case of Brayden from Christensen (2010, pp. 121–122).


me, and that it is rational for me to be extremely confident that it will rain
tomorrow."
"So, will it rain tomorrow?" you ask.
"No."
This dialogue makes dramatic that Joes beliefs about the weather do not mesh
properly with his beliefs about what he should believe about the weather.
Joe is unreasonable because he violates the following constraint6:
CERTAIN Whenever a possible rational agent is certain exactly what degrees of
belief she ought rationally have, she has those degrees of belief.
This constraint is extremely plausible. But it covers only a very specific case:
the case in which one is certain just what one should believe. Can it be generalized
to cover cases in which one is uncertain about what degrees of belief one should
have?
To see one natural way of generalizing the constraint, modify the case of Joe.
Suppose that Joe is not certain that his degree of belief in rain should be exactly
99 %. Instead, suppose that he is just certain that it should be quite highsay,
greater than 90 %. But despite this, Joes degree of belief that it will rain is only
1 %.
Again, Joe's combination of attitudes looks unreasonable.
If Joe is rational, it seems, his degree of belief that it will rain should be
somewhere in the range of values that he thinks might be rational. Indeed, it seems
that his degree of belief that it will rain should be some kind of average of those
values.
This suggests a tempting way to generalize CERTAIN:7
RATIONAL REFLECTION P(H | P′ is ideal) = P′(H)
whenever P is the credence function of a possible rational subject S, H is a
proposition, P′ is a credence function, 'ideal' means 'perfectly rational for
S to have in her current situation', and the conditional probability is well
defined.8
The statement above is a mouthful. But the guiding idea is simple: When one is
rationally certain what credence function one should have, one should have that
credence function. But when one is uncertain, then one should have as one's
credence function a weighted average of the functions one thinks it might be
rational to have.
For example, suppose that one is 50 % confident that one should have credence
function P1 and 50 % confident that one should have credence function P2. Further

6 This constraint is a more cautious cousin of the principle of non-akrasia from Ross (2006, p. 277).
7 RATIONAL REFLECTION is a variant of the principle RatRef from Christensen (2010, p. 122). A
similar principle is introduced in Ross (2006, §10.3) under the name 'The epistemic principal principle'.
Earlier work on related principles includes Goldstein (1983), Lewis (1986), van Fraassen (1984).
8 To avoid complications, here and below I ignore self-locating beliefs, assume that every situation
determines a unique ideally rational probability function, and assume that credences are countably
additive and defined over a space of possibilities that is at most countably infinite.


suppose that P1(rain) = 70 % and P2(rain) = 90 %. Then if one is rational,
RATIONAL REFLECTION entails that one will have as one's degree of belief in
rain the average of 70 % and 90 %, i.e., 80 %.9
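The weighted-average recipe in this example is simple enough to check mechanically. The sketch below (Python is used here only for illustration; the numbers are the ones from the text) computes the credence that RATIONAL REFLECTION mandates:

```python
# Sketch: the averaging that RATIONAL REFLECTION imposes.
# One is 50% confident that P1 is the ideal credence function,
# and 50% confident that P2 is.
candidates = {
    "P1": {"weight": 0.5, "cred_rain": 0.70},
    "P2": {"weight": 0.5, "cred_rain": 0.90},
}

# One's credence in rain must be the weighted average of the
# candidate functions' credences in rain.
cred_rain = sum(c["weight"] * c["cred_rain"] for c in candidates.values())
print(cred_rain)  # 0.8
```

The same recipe applies with any weights: under 30/70 uncertainty the mandated credence would be 0.3 · 0.7 + 0.7 · 0.9 = 0.84.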

5 Epistemic modesty can be rational

The story so far: I have presented the puzzle of the unmarked clock, and claimed
that it is an instance of a more general conflict. As a first step in laying out the
general conflict, I gave some motivation for RATIONAL REFLECTION. That
principle connects one's beliefs about, say, the weather, with one's beliefs about
what it is rational to believe about the weather.
To complete my explanation of the general conflict, I will need to address the
question: is it ever rational to be uncertain about what it is rational to believe?
The answer is: yes. For example10:
HYPOXIA Bill the perfectly rational airline pilot gets a credible warning from
ground control:
"Bill, there's a 99 % chance that in a minute your air will have reduced
oxygen levels. If it does, you will suffer from hypoxia (oxygen deprivation),
which causes hard-to-detect minor cognitive impairment. In particular, your
degrees of belief will be slightly irrational. But watch out: if this happens,
everything will still seem fine. In fact, pilots suffering from hypoxia often
insist that their reasoning is perfect, partly due to impairment caused by
hypoxia!"

A few minutes later, ground control notices that Bill got lucky: his air stayed
normal. They call Bill to tell him. Right before Bill receives the call, should he be
uncertain whether his degrees of belief are perfectly rational?
The example invites us to answer yes, for the following reason. Before Bill is told
that he got lucky, he should be uncertain whether he is suffering from hypoxia, and
so should be uncertain whether his degrees of belief are perfectly rational. And he
should be uncertain about what degrees of belief it is rational for him to have.
One might reject this analysis. One might claim that Bill should be absolutely
certain that he got lucky and avoided hypoxia. But that is implausible. Such

9 For readers familiar with the Reflection Principle from van Fraassen (1984), the Principal Principle
from Lewis (1986), or RatRef from Christensen (2010), it may be helpful to note that RATIONAL
REFLECTION entails
RATREF P(H | I(H) = x) = x
whenever P is the credence function of a possible rational subject S, H is a proposition, x is a real
number, 'I(H) = x' denotes the proposition that the ideal probability for S to have in H is x, and
the conditional probability is well defined.
10 Cf. Elga (2008), Christensen (2010).


certainty would be overconfidence on Bill's part: a failure to properly take into
account ground control's credible warning.
The case of Bill shows that in some situations, rationality is compatible with
uncertainty about what degrees of belief are rational. Indeed, Bill should think that
he is in exactly such a situation. Let us record these conclusions:
MODESTY In some possible situations, it is rational to be uncertain about what
degrees of belief it is rational for one to have. Furthermore, it can be rational to
have positive degree of belief that one is in such a situation.

6 Modesty from anti-luminosity

An independent argument supporting MODESTY goes by way of uncertainty about
evidence. One way to be uncertain what one should believe is to be uncertain what
evidence one has. The conclusions of anti-luminosity arguments from Williamson
(2000, 2008) entail that in some situations, rationality is compatible with uncertainty
about what evidence one has. And they entail that it can be rational to suspect that
one may be in such a situation. So anti-luminosity arguments provide a route to
MODESTY available even to those who reject the argument based on HYPOXIA.

7 Modesty conflicts with Rational Reflection

So far I have argued for MODESTY, which entails that it is sometimes rational to be
uncertain about what it is rational to believe. And I have given a motivation for
RATIONAL REFLECTION, which is a constraint on how one's opinions about what
is rational to believe ought to mesh with the rest of one's opinions. I hope I've
convinced you that both of these claims are true.
It was a trap.
It turns out that MODESTY and RATIONAL REFLECTION are inconsistent
with each other. That is the more general conflict I promised to explain. And it is the
conflict at the root of the puzzle of the clock.
Here is a proof that MODESTY and RATIONAL REFLECTION are inconsistent
with each other. (The proof may be skipped without loss of continuity.)
Proof: Suppose that a particular subject has credence function P, and that P′ is
any credence function that the subject thinks might be ideal. Then if
RATIONAL REFLECTION is true, for any proposition H, P(H | P′ is
ideal) = P′(H). In particular, when H is the proposition that P′ is ideal:
P(P′ is ideal | P′ is ideal) = P′(P′ is ideal).
By the definition of conditional probability, the left hand side of this equation equals
1. So P′ is immodest, in the sense that it assigns credence 1 to the claim that it itself
is the ideal credence function for the subject to have. And the same is true for every
credence function that the subject thinks might be ideal for her. So the subject is


certain that rationality requires her to be certain about what credences it is rational
to have. This conflicts with (the second sentence of) MODESTY.11,12
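The immodesty step can also be seen numerically. In the toy model below, the two-world space and the particular numbers are my own assumptions, not the paper's; the point is just that a "modest" candidate function makes the two sides of RATIONAL REFLECTION come apart:

```python
# Toy model: two worlds. Pa is the ideal credence function in world w1,
# Pb in world w2. So the proposition "Pa is ideal" is the set {"w1"}.
Pa = {"w1": 0.8, "w2": 0.2}  # Pa is "modest": only 0.8 credence in its own ideality
P  = {"w1": 0.5, "w2": 0.5}  # the subject's credence function

def cond(prob, hyp, given):
    """Conditional probability prob(hyp | given) over a finite world set."""
    pg = sum(prob[w] for w in given)
    return sum(prob[w] for w in hyp if w in given) / pg

pa_ideal = {"w1"}  # worlds where Pa is the ideal function

# Left side of RATIONAL REFLECTION, with H = "Pa is ideal":
lhs = cond(P, pa_ideal, pa_ideal)   # trivially 1.0, for any subject P
# Right side: Pa's own credence that Pa is ideal:
rhs = sum(Pa[w] for w in pa_ideal)  # 0.8

print(lhs, rhs)  # 1.0 0.8 -- the principle fails for any modest candidate
```

Only by setting Pa's credence in its own ideality to 1 (immodesty) can the two sides agree, which is exactly the proof's conclusion.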
So we have an apparent paradox: we had initially plausible motivations for
believing both MODESTY and RATIONAL REFLECTION, but now have seen that
the two claims conflict.13
The puzzle of the clock is an instance of this conflict. For recall that the puzzle
depended on the assumption that the clock viewer should be uncertain about what it
is rational for her to believe about the time. That assumption is an instance of
MODESTY. And it depended on the assumption that it is unreasonable for the
viewer to be 99 % confident in a proposition, while thinking that 99 % is a level of
confidence that is certainly not too low and probably too high. That assumption
derives from the same considerations that motivate RATIONAL REFLECTION.
How should the conflict between MODESTY and RATIONAL REFLECTION
be resolved? There are a number of options14:

- We might reject MODESTY, and claim for example that rationality requires one
to be certain just what degrees of belief are rational. This would require us to say
that Bill the pilot should be certain that he has avoided hypoxia, and that the
clock viewer should be certain exactly what it is rational for her to believe about
the clock. That seems desperate.15
- We might reject RATIONAL REFLECTION and similar principles, insisting
that beliefs about what it is rational to believe do not impose systematic

11 Williamson (2010, Appendix) proves related results, delivering conditions for the necessary truth of
RatRef (a reflection principle closely related to RATIONAL REFLECTION; see note 7).
12 As was pointed out to me by Michael Titelbaum, the proof in the text is analogous to the proof that the
Old Principal Principle is in tension with Humeanism about laws of nature (Hall 1994; Lewis 1994).
13 One might try to avoid the conflict by advocating not RATIONAL REFLECTION but the very similar
principle RatRef (described in footnote 7). This does not work because it is unreasonable to accept both
RatRef and MODESTY. That is because it is unreasonable to accept MODESTY without also accepting
the following claim, which itself is incompatible with RatRef:
POSITIVE It can be rational to think something of the form 'I'm not sure how confident I should be
that P′ is ideal. Maybe I should be 20 % confident that it is, maybe 21 %, or maybe another value.'
(Here 20 and 21 % are placeholders for any two positive values, and P′ can be any credence function.)
Proof that POSITIVE is incompatible with RatRef: For brevity, use V(H, v) to denote the proposition
that the ideal credence to have in H is v, and use I(P′) to denote the proposition that P′ is the ideally
rational credence function to have. Now suppose POSITIVE is true. Then for some possible rational
person who has credence function P, there are distinct positive values x and x′ such that P(V(I(P′), x)) and
P(V(I(P′), x′)) are both greater than 0. But I(P′) is compatible with at most one of V(I(P′), x) and
V(I(P′), x′), since it settles what the ideal probability to have in I(P′) is. So at least one of
P(I(P′) | V(I(P′), x)) and P(I(P′) | V(I(P′), x′)) equals zero. But RatRef entails that these two terms equal
x and x′ respectively, which are both positive. So POSITIVE contradicts RatRef.
14 Compare to Christensen (2010, p. 124).
15 Alternatively, we might reject the second sentence of MODESTY by saying that rationality requires
one to be certain of the following falsehood: rationality requires one to be certain exactly what time the
clock indicates. That seems just as desperate, since a rational viewer of the clock might well realize that
she has an imperfect ability to discriminate nearby times. Thanks here to Kenny Easwaran.


constraints on one's other beliefs.16 A defender of this line takes on the burden
of explaining away the initial appeal of such principles, and the seeming
irrationality of the clock viewer's '99 % confidence is probably too high and
definitely not too low' stance.
- We might say that MODESTY and RATIONAL REFLECTION both express
rational ideals, but admit that some rational ideals conflict with others.17 This may be
defensible in the end,18 but there is a cost to admitting that the notion of perfect
rationality is itself inconsistent. And this proposal seems not to give a clear answer to
the question: what degrees of belief should the clock viewer have about the time?

This brief survey of options is not exhaustive, and the objections I have raised are
not conclusive. But I hope to convince you that we can do better. We can resolve the
conflict in a way that allows us to consistently hold on to MODESTY, and also to
the considerations that motivate RATIONAL REFLECTION.19
All we need to do is amend RATIONAL REFLECTION. Let me explain.

8 New Rational Reflection

Think back to how RATIONAL REFLECTION was motivated above (in Sect. 4).
The story started with this constraint:
CERTAIN Whenever a possible rational agent is certain exactly what degrees of
belief she ought rationally have, she has those degrees of belief.
This constraint is extremely plausible and extremely cautious. It doesn't even rule
out that one can be rationally certain that one is irrational. It just rules out that one
can be rationally certain exactly what degrees of belief one should have, without
having those degrees of belief.
The next step was to generalize this constraint to cover cases in which one is
uncertain what degrees of belief one ought to have. It was suggested that when one
is uncertain what credence function is rational, one should have a credence function
that is a particular weighted average of the ones that one thinks might be rational.
That is RATIONAL REFLECTION.
There was nothing wrong with the first step of the story: CERTAIN is correct.
And there was nothing wrong with trying to generalize CERTAIN to cover more
cases. But RATIONAL REFLECTION is the wrong way to generalize CERTAIN.

16 Williamson (2010) seems sympathetic to this approach.
17 This is proposed as a fallback position in Christensen (2010, p. 136).
18 It may be defensible, for example, if we are forced to accept a similar conclusion about independent
cases, as is argued in Christensen (2007).
19 Compare: 'the ideal outcome of thinking about [the puzzle of the clock] would be our finding a way of
accommodating the intuitions behind [a principle similar to RATIONAL REFLECTION] while avoiding
the difficulties we've been examining' (Christensen 2010, p. 136).


A better way of generalizing CERTAIN is brought out by the following line of
reasoning.20
Suppose that you're considering what credence function it would be rational for
you to have. Consider the candidate functions (the ones that you think might be
ideally rational) as a kind of panel of purported experts. In the special case that you
are sure what function is ideal, the panel contains just a single member, and you're
sure that she is the true expert. In that case, you should just believe what the expert
believes. That corresponds to CERTAIN.
But now suppose that you are uncertain which function is ideal. In that case, it is
as if the panel contains a number of purported experts and you are uncertain which
one is the true expert.
For concreteness, suppose that the panel consists of credence functions named
Cassandra, Merlin, and Sherlock. Conditional on Sherlock being the true expert,
what credences should you have?
It is tempting to answer: the ones that Sherlock has. That is how RATIONAL
REFLECTION answers the question. But that answer is not in general correct. For
Sherlock might himself be uncertain who is the true expert. And conditional on
Sherlock being the true expert, you should not be uncertain who the true expert is.
So: conditional on Sherlock being the true expert, you shouldn't align your
credences to Sherlock's. What should you do?
A warm-up question will point the way to the answer: What should be your
credence that it will rain tomorrow, given that Sherlock is the true expert and that
many people will use umbrellas tomorrow?
Answer: your credence should be rather high. And it should not in general equal
Sherlock's unconditional credence that it will rain. For the information that many people
will use umbrellas tomorrow provides strong evidence that it will rain tomorrow.
This suggests that your conditional credence should not equal Sherlock's credence
that it will rain. Rather, it should equal Sherlock's credence that it will rain conditional
on (at least) the information that many people will use umbrellas tomorrow.
More generally: your credences, conditional on Sherlock being the true expert,
should equal Sherlock's credences conditional on Sherlock being the true expert.
Further generalizing this thought yields the following principle:21
NEW RATIONAL REFLECTION P(H | P′ is ideal) = P′(H | P′ is ideal)
whenever P is the credence function of a possible rational subject S, H is a
proposition, P′ is a credence function, 'ideal' means 'perfectly rational for S to
have in her current situation', and the conditional probability is well defined.

20 The line of reasoning, including the example involving the panel of experts, is due to Hall (1994,
p. 510). Christensen (2010, p. 135) comes tantalizingly close to applying such reasoning to RATIONAL
REFLECTION, but pulls back at the last moment.
21 The constraint is named NEW RATIONAL REFLECTION because it stands to RATIONAL
REFLECTION as the New Principal Principle stands to the Principal Principle (Lewis 1994; Hall
1994). (NEW RATIONAL REFLECTION also stands to RATIONAL REFLECTION as the 'guru
principle' from Elga (2007) stands to the Reflection Principle from van Fraassen (1995).) The
derivation from footnote 23 of a special case of NEW RATIONAL REFLECTION can be adapted to
yield a parallel derivation of a special case of the New Principal Principle.


An example will help illustrate the difference between the new principle and the
old. Suppose that you are 50 % confident that you should have credence function P1
and 50 % confident that you should have P2. RATIONAL REFLECTION entails that
if you are rational, your probability for a proposition H will be the average of P1(H) and
P2(H). In contrast, NEW RATIONAL REFLECTION entails that if you are rational,
your probability for H will be the average of P1(H | P1 is ideal) and P2(H | P2 is ideal).
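The contrast can be made concrete with a toy world-space. The joint distributions below are invented for illustration (only the two averaging recipes come from the text), but they show that the old and new principles can mandate different credences:

```python
# Worlds are (weather, which-function-is-ideal) pairs; H is "rain".
# Candidate credence functions, each uncertain which of the two is ideal:
P1 = {("rain", "P1"): 0.42, ("dry", "P1"): 0.18,   # P1 gives 0.6 to "P1 is ideal"
      ("rain", "P2"): 0.28, ("dry", "P2"): 0.12}
P2 = {("rain", "P1"): 0.10, ("dry", "P1"): 0.10,   # P2 gives 0.8 to "P2 is ideal"
      ("rain", "P2"): 0.70, ("dry", "P2"): 0.10}

def prob(fn, pred):
    return sum(p for w, p in fn.items() if pred(w))

def cond(fn, pred, given):
    return prob(fn, lambda w: pred(w) and given(w)) / prob(fn, given)

H = lambda w: w[0] == "rain"
ideal = lambda name: (lambda w: w[1] == name)

# Old principle: average the candidates' unconditional credences in H.
old = 0.5 * prob(P1, H) + 0.5 * prob(P2, H)

# New principle: average each candidate's credence in H conditional on
# its own ideality.
new = 0.5 * cond(P1, H, ideal("P1")) + 0.5 * cond(P2, H, ideal("P2"))

print(old, new)  # 0.75 vs 0.7875 -- the two recipes disagree
```

With these numbers the old recipe gives 0.5 · 0.70 + 0.5 · 0.80 = 0.75, while the new one gives 0.5 · (0.42/0.60) + 0.5 · (0.70/0.80) = 0.7875; the recipes coincide only when each candidate is immodest.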
(There is another strategy to motivate the new principle:22 Suppose that a subject
starts out wondering: what degrees of belief are rational for me? And suppose that she
then learns the answer to that question. She will end up certain just what she ought to
believe. And so CERTAIN will impose a constraint on her final state of mind. But that
will indirectly impose a constraint on her initial state of mind, by way of an
assumption about how the subject should update her beliefs when she gets new
information. In other words, we can generalize CERTAIN by saying: Rational agents
have states of mind that are consistent with CERTAIN, and would remain consistent
with CERTAIN were they to learn the truth about what they ought to believe. This
strategy yields a derivation of NEW RATIONAL REFLECTION in a special case, and
so lends some credence to the truth of the principle in full generality.)23
Moral: NEW RATIONAL REFLECTION is the right way to generalize
CERTAIN. It expresses the manner in which a subject's opinions about what it is
rational to believe constrain her other opinions. And it is perfectly consistent with
MODESTY. So the conflict between MODESTY and RATIONAL REFLECTION
has a satisfying resolution: drop RATIONAL REFLECTION and adopt NEW
RATIONAL REFLECTION instead.
The end. Except for one final matter: it remains to address the puzzle of the clock.

9 Resolving the puzzle

Recall the setup:


22 This second strategy adapts an idea from Ross (2006, pp. 277–299). A similar argument was
independently suggested to me by Boris Kment.
23 The special case: a rational agent has probability function P at time 0, realizes that rational agents
update their beliefs by conditionalization, and realizes that she is about to conditionalize on the truth of
what probability function is ideal-for-her-at-time-0. Together with CERTAIN, these assumptions entail
that the agent satisfies the instance of NEW RATIONAL REFLECTION that applies to her.
Proof: Under the above conditions, suppose that the agent conditionalizes on the information that
probability function P′ is ideal-for-her-at-time-0. As a result, at time 1 her new probability function is
P(· | P′ is ideal-at-0). By the assumption that rational agents conditionalize, this new probability function
is ideal-for-her-at-time-1.
But at time 1, the agent is certain that P′ was ideal-for-her-at-time-0. And she is certain that at time 0
she remained ideally rational by conditionalizing on the truth about what function was ideal-for-her-at-
time-0. So she is at time 1 certain that the following probability function is ideal-for-her-at-time-1:
P′(· | P′ is ideal-at-0).
Now CERTAIN applies to the agent at time 1, and entails that the agent's probability function at time
1 is P′(· | P′ is ideal-at-0). We have now derived two expressions for the agent's probability function at
time 1. Equating them yields that for any proposition H:
P(H | P′ is ideal-at-0) = P′(H | P′ is ideal-at-0),
which is an instance of NEW RATIONAL REFLECTION.


You look at the clock, ending up 99 % confident in H, the proposition that the
time is either 12:16, 12:17, or 12:18. You are highly uncertain as between those
three possibilities, assigning, say, 33 % of your confidence to each of them.24 That
pattern of attitudes looks reasonable.
But as we saw in Sect. 2, that pattern of attitudes also entails that you think that
99 % is a level of confidence in H that is definitely not too low for you, and is
probably too high. That makes your 99 % confidence in H look unreasonable.
So: the first line of reasoning concludes that your beliefs about the clock are
reasonable. The second line concludes that those beliefs are unreasonable. What has
gone wrong? That is the puzzle.
The answer is that the second line of reasoning is wrong. For what lies behind
that reasoning is the thought that one's degree of confidence in H should always be a
weighted average of the degrees of confidence that one thinks might be rational.
That is an initially tempting thought. And it is a thought that follows from
RATIONAL REFLECTION. But it is incorrect.
In contrast, this averaging thought doesn't follow from NEW RATIONAL
REFLECTION. That is why the state of uncertainty about the clock described above
is compatible with NEW RATIONAL REFLECTION. The bottom line is that there
just isn't anything wrong with your state of uncertainty about the clock.
Then why does that state of uncertainty seem unreasonable?25 Because it
superficially resembles states of uncertainty that are genuinely unreasonable.
For instance, your state of mind superficially resembles the state of mind of
Pangloss, the self-aware optimist from Sect. 2:
Pangloss is 99 % confident that the next round of Mideast peace talks will
succeed. But he also thinks that he is often irrationally overconfident in good
outcomes, and never irrationally underconfident in them. As a result, he thinks
that 99 % is a level of confidence that is definitely not too low, and is probably
too high.
Pangloss seems to exhibit the same pattern of uncertainty that you do. And Pangloss is
unreasonable. That provides additional temptation to think that you, too, are unreasonable.
But the cases are different, and the reason Pangloss is unreasonable does not
apply to you as a viewer of the clock. Let me explain what makes Pangloss
unreasonable, and why no corresponding consideration applies to the viewer of the
clock.

24 Nothing substantial would change if the 33/33/33 % distribution were changed to one that favored
12:17 over the other two times.
25 For an independent and complementary diagnosis, see Horowitz and Sliwa (2011).


Let S be the proposition that the peace talks will succeed, and let PG be
Pangloss's credence function. For simplicity, suppose that Pangloss is sure that the
ideal credence for him to have in S is either 99 or 66 %, but has no idea which. It
follows26 that Pangloss has approximately 99 % credence in S, conditional on the
rational credence in S being 66 %:
PG(S | 66 % is ideal) ≈ 99 %.
There looks to be a mismatch here. And there is: on any natural understanding of
the case, such a conditional credence is totally unreasonable. Conditional on 66 %
being the ideal credence to have that the talks will succeed, Pangloss's credence that
the talks succeed should not be approximately 99 %. Rather, it should be
approximately 66 %.
Compare: in an ordinary case, ones credence that it will rain next week,
conditional on the rational credence being 66 %, should be approximately 66 %.
Only with a very special back-story would it make sense for that conditional
credence to be anywhere near 99 %. And the same is true for Pangloss's conditional
credence above.
That is why Pangloss is unreasonable.
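The arithmetic behind this verdict (spelled out in footnote 26) can be checked directly. The following is a minimal sketch; the exact 50/50 weighting of the two hypotheses is an assumption for illustration, since the argument only needs the weights to be roughly equal:

```python
# Pangloss's unconditional credence in S is a weighted average of his
# two conditional credences:
#   P(S) = w99 * P(S | 99% is ideal) + w66 * P(S | 66% is ideal)
w99, w66 = 0.5, 0.5   # assumed: roughly equal credence in each hypothesis
p_S = 0.99            # Pangloss's credence that the talks will succeed

# P(S | 99% is ideal) is at most 1, so the other conditional credence
# is forced upward:
p_S_given_66_min = (p_S - w99 * 1.0) / w66
print(p_S_given_66_min)  # ~0.98: P(S | 66% is ideal) must be at least 98%
```

Even on the most favorable assumption, Pangloss's credence in S conditional on 66% being ideal cannot fall below 98%, which is the mismatch diagnosed above.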
At first glance, the situation with the viewer of the clock looks similar. In
particular, the viewer of the clock has 100% credence in H, conditional on the
rational credence in H being 66%:

P(H | 66% is ideal) = 100%.
That conditional credence looks to involve the same sort of mismatch as Pangloss's.
But in the clock case, the mismatch is only apparent. For the clock case has the
following very special feature. Given the setup, the information "66% is the ideal
credence to have in H" is strong evidence that H is true. Indeed, it is conclusive
evidence that H is true, since the clock viewer is certain that

66% is the ideal credence for the viewer to have in H only if the time is either
12:16 or 12:18.
No corresponding claim holds for Pangloss.
The bottom line is that the clock viewer's conditional credences exhibit the same
apparent mismatch that Pangloss's do. The difference is that in the viewer's case,
the mismatch is only apparent.
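The clock viewer's situation can likewise be modeled directly. This is a sketch under assumptions meant to match the setup of the clock case: times are integer minutes, H is read as the proposition that the time is 12:16, 12:17, or 12:18, and the viewer's evidence narrows the time down to within one minute of the truth, weighted uniformly:

```python
from fractions import Fraction

H = {16, 17, 18}  # assumed reading of H: the time is 12:16, 12:17, or 12:18

def ideal_credence_in_H(t):
    """Rational credence in H when the actual time is t: the evidence
    leaves open {t-1, t, t+1}, weighted uniformly."""
    evidence = {t - 1, t, t + 1}
    return Fraction(len(evidence & H), len(evidence))

# Times at which the ideal credence in H is exactly 66% (i.e. 2/3):
times_66 = [t for t in range(10, 25) if ideal_credence_in_H(t) == Fraction(2, 3)]
print(times_66)                       # [16, 18]
print(all(t in H for t in times_66))  # True: H is true at every such time
```

On this model, the ideal credence in H is 2/3 only at 12:16 and 12:18, and H is true at both. So conditional on the information that 2/3 is the ideal credence, H is guaranteed, which is why the viewer's 100% conditional credence involves no real mismatch.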
Acknowledgments Thanks to David Christensen, Paulina Sliwa, Sophie Horowitz, Maria Lasonen-
Aarnio, Michael Titelbaum, Jenann Ismael, participants in the 2011 Brown Epistemology workshop, the
Corridor Group, the Princeton Formal Epistemology reading group, and the 2012 Bellingham Summer
Philosophy Conference, an audience at Stanford University, and especially Joshua Schechter.
26 It follows by the probability calculus that Pangloss's credence in S is a weighted average of PG(S | 99% is ideal) and PG(S | 66% is ideal). So this weighted average equals 99%. But since each of these terms is no greater than 100%, and since they are weighted roughly equally in the average, the only way for their average to be 99% is for each of them to be close to 99%.

References
Christensen, D. (2007). Does Murphy's law apply in epistemology? Self-doubt and rational ideals. Oxford Studies in Epistemology, 2, 3–31.
Christensen, D. (2010). Rational reflection. Philosophical Perspectives, 24, 121–140.
Elga, A. (2007). Reflection and disagreement. Noûs, 41, 478–502.
Elga, A. (2008, June). Lucky to be rational. Paper presented at Bellingham summer philosophy conference. http://www.princeton.edu/~adame/papers/bellingham-lucky.pdf. Accessed 17 Nov 2009.
Goldstein, M. (1983). The prevision of a prevision. Journal of the American Statistical Association, 78(384), 817–819.
Hall, N. (1994). Correcting the guide to objective chance. Mind, 103, 505–518.
Horowitz, S., & Sliwa, P. (2011, June). Level-bridging, uncertainty, and akrasia. Manuscript.
Lewis, D. (1986). A subjectivist's guide to objective chance. In Philosophical Papers (Vol. 2). Oxford: Oxford University Press.
Lewis, D. (1994). Humean supervenience debugged. Mind, 103, 473–490.
Ross, J. (2006, October). Acceptance and practical reason. PhD thesis, Rutgers University.
van Fraassen, B. C. (1984). Belief and the will. Journal of Philosophy, 81, 235–256.
van Fraassen, B. C. (1995). Belief and the problem of Ulysses and the Sirens. Philosophical Studies, 77, 7–37.
Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.
Williamson, T. (2007). Improbable knowing. Notes, 2007. http://www.philosophy.ox.ac.uk/_data/assets/pdf_file/0014/1319/Orielho.pdf. Accessed 1 Sept 2011.
Williamson, T. (2008). Why epistemology cannot be operationalized. In Q. Smith (Ed.), Epistemology: New essays (Chap. 11, pp. 277–300). Oxford: Oxford University Press.
Williamson, T. (2010). Very improbable knowing. Draft manuscript, 2010. http://www.philosophy.ox.ac.uk/_data/assets/pdf_file/0015/19302/veryimprobable.pdf. Accessed 1 Sept 2011.