Boghossian Inference Agency and Responsibility
7
Inference, Agency, and Responsibility
Paul Boghossian
1. Introduction
What happens when we reason our way from one proposition to another? This
process is usually called “inference” and I shall be interested in its nature.¹
There has been a strong tendency among philosophers to say that this psycho-
logical process, while perhaps real, is of no great interest to epistemology. As one
prominent philosopher (who shall remain nameless) put it to me (in conversation):
“The process you are calling ‘inference’ belongs to what Reichenbach called the
‘context of discovery’; it does not belong to the ‘context of justification,’ which is
all that really matters to epistemology.”²
I believe this view to be misguided. I believe there is no avoiding giving a central
role to the psychological process of inference in epistemology, if we are to adequately
explain the responsibility that we have, qua epistemic agents, for the rational
management of our beliefs. I will try to explain how this works in this chapter.
In the course of doing so, I will trace some unexpected connections between our
topic and the distinction between a priori and a posteriori justification, and I will
draw some general methodological morals about the role of phenomenology in the philosophy of mind.

Earlier versions of some of the material in this chapter were presented at a Workshop on Inference at the CSMN in Oslo and at the Conference on Reasoning at the University of Konstanz, both in 2014; at Susanna Siegel’s Seminar at Harvard in February 2015; and at a Workshop on Inference and Logic at the Institute of Philosophy at the University of London in April 2015. A later version was given as a keynote address at the Midwest Epistemology Workshop in Madison, WI in September 2016. I am very grateful to the members of the audiences on those various occasions, and to Mark Richard, Susanna Siegel, David J. Barnett, Magdalena Balcerak Jackson, and an anonymous referee for OUP for detailed written comments.

¹ Importantly, then, I am not here talking about inference as argument: that is, as a set of propositions, with some designated as “premises” and one designated as the “conclusion.” I am talking about inference as reasoning, as the psychological transition from one (for example) belief to another. Nor am I, in the first instance, talking about justified inference. I am interested in the nature of inference, even when it is unjustified.

² For another example of a philosopher who downplays the importance of the psychological process of inference, see Audi (1986).

OUP CORRECTED PROOF – FINAL, 23/4/2019, SPi
In addition, I will revisit my earlier attempts to explain the nature of the process of
inference (Boghossian 2014, 2016) and further clarify why we need the type of
“intellectualist” account of that process that I have been pursuing.
³ Later on, I shall argue that this should be regarded as preservative memory storage, in the sense of
Burge (1993) and Huemer (1999).
Given that fact, couldn’t the Propositionalist make sense of the a priori/a posteriori
distinction by saying that perceptual experiences give one access to a posteriori
evidence, while non-perceptual experiences, such as intuition or understanding,
give one access to a priori evidence?
The problem with this reply on behalf of the Propositionalist is that we know that
sometimes perceptual experience will be needed to give one access to a proposition
that is then believed on a priori grounds.
For example, perceptual experience may be needed to give one access to the
proposition “If it’s sunny, then it’s sunny.” But once we have access to that propos-
ition, we may come to justifiably believe it on a priori grounds (for example, via the
understanding).
In consequence, implementing the response given on behalf of the Propositionalist
will require us to distinguish between those uses of perceptual experience that merely
give us access to thinking the proposition in question, versus those that give us access
to it as evidence.
But how are we to draw this distinction without invoking the classic distinction
between a merely enabling use of perceptual experience, and an epistemic use of such
experience, a distinction that appears to presuppose the Statist View? For what would
it be to make a proposition accessible as evidence, if not to experience it in a way that
justifies belief in it?
To sum up. Taking the a priori/a posteriori distinction seriously requires thinking
of mental states as sometimes playing a justificatory role; it appears not to be
consistent with the view that it is only propositions that do any justifying.⁴
I don’t now say that the Statist View is comprehensively correct, that all reasons for
judgment are always mental states, and never propositions. I only insist that, if we are
to take the notion of a priori justification seriously, mental states, and, in particular,
experiences, must sometimes be able to play a justificatory role.
⁴ Once we have successfully defined a distinction between distinct ways of believing a proposition, we
can introduce a derivative distinction between types of proposition: if a proposition can be reasonably
believed in an a priori way, we can say that it is an a priori proposition; and if it can’t be reasonably believed
in an a priori way, but can only be reasonably believed in an a posteriori way, then we can say that it is an a
posteriori proposition. But this distinction between types of proposition would be dependent upon, and
derive from, the prior distinction between distinct ways of reasonably believing a proposition, a distinction
which depends on construing epistemic bases as mental states, rather than propositions.
Here is a further question. Assuming that we are talking about an explicit standing
belief, does that belief always have as its basis the basis on which it was originally
formed; or could the basis have somehow shifted over time, even if the belief in
question is never reconsidered?
It is natural to think that the first option is correct, that explicit standing beliefs
have whatever bases they had when they were formed as occurrent judgments. At
their point of origin, those judgments, having been arrived at on a certain basis, and
not having been reconsidered or rejected, go into memory storage, along with their
epistemic bases.
Is the natural answer correct? It might seem that it is not obviously correct. For
sometimes, perhaps often, you forget on what basis you formed the original occurrent
judgment that gave you this standing belief. If you now can’t recall what that basis
was, is it still the case that your belief has that basis? Does it carry that basis around
with it, whether you can recall it or not?
I want to argue that the answer to this question is “yes.” I think this answer is
intuitive. But once again there is an unexpected interaction with the topic of the a
priori. Any friend of the a priori should believe that the natural answer is correct. Let
me say a little more about this interaction.
Burge has developed just such a view (I shall look at his application of it to memory).
Burge distinguishes between a merely enabling use of memory, which he calls “preservative memory,” and an epistemic use, which he calls “substantive memory”:
[Preservative] memory does not supply for the demonstration propositions about memory,
the reasoner, or past events. It supplies the propositions that serve as links in the
demonstration itself. Or rather, it preserves them, together with their judgmental force,
and makes them available for use at later times. Normally, the content of the knowledge of
a longer demonstration is no more about memory, the reasoner, or contingent events than
that of a shorter demonstration. One does not justify the demonstration by appeals to
memory. One justifies it by appeals to the steps and the inferential transitions of the
demonstration. . . . In a deduction, reasoning processes’ working properly depends on
memory’s preserving the results of previous reasoning. But memory’s preserving such
results does not add to the justificational force of the reasoning. It is rather a background
condition for the reasoning’s success.⁶
Given this distinction, we can say that the reason why a long proof is able to provide a
priori justification for its conclusion is that the only use of memory that is essential in
a long proof is preservative memory, rather than substantive memory. If substantive
memory of the act of writing down a proposition were required to arrive at justified
belief in the conclusion of the proof, that might well compromise the conclusion’s a
priori status. But it is plausible that in most of the relevant cases all that’s needed is
preservative memory.
Now, against the background of the Propositional View of justifiers, all that would
need to be preserved is, as Burge says, just the propositions (along perhaps with their
judgmental force).
However, against the background of the Statist View, we would need to think that
preservative memory can preserve a proposition not only with its judgmental force,
but also along with its mental state justifier.⁷
Now, just as an earlier step in a proof can be invoked later on without this
requiring the use of substantive memory, so, too, can an a priori justified standing
belief be invoked later on without this compromising its ability to deliver a priori
justification. To account for this, we must think that when an occurrent judgment
goes into memory storage, it goes into preservative memory storage.
Thus a standing belief will have whatever basis it originally had,
whether or not one recalls that basis later on. And so, we arrive at the conclusion we
were after: a belief ’s original basis is the basis on which it is maintained, unless the
matter is explicitly reconsidered.
Of course, both of these claims about bases are premised on the importance of
preserving a robust use for the a priori/a posteriori distinction. But as I’ve argued
elsewhere (Boghossian forthcoming), we have every reason to accord that distinction
the importance it has traditionally had.
⁸ On the Propositional View, the basis would always be a proposition that is the object of some
occurrent judgment. As I say, this particular distinction won’t matter for present purposes.
⁹ Of course, judgments are not the only sorts of propositional attitude that inference can relate. One can
infer from suppositions and from imperatives (for example) and one can infer to those propositional
attitudes as well. Let us call this broader class of propositional attitudes that may be related by inference,
acceptances (see Wright 2014 and Boghossian 2014). Some philosophers further claim that even such non-
attitudinal states as perceptions could equally be the conclusions of inferences (Siegel 2017). In the sense
intended (more on this below), this is a surprising claim and it is to be hoped that getting clearer on the
nature of inference will eventually help us adjudicate on it. For the moment, I will restrict my attention to
judgments.
is a basis for you, whether it is the reason for which you came to make a certain
judgment.
Thus, it is some sort of psychological fact about you that establishes that this
perception of yours serves as your basis for believing this observable fact. And it is
some sort of psychological fact about you that establishes that it is these judgments
that serve as your basis for making this new judgment, via an inference from those
other judgments.
I am interested in the nature of this process of arriving at an occurrent judgment
that q by inference from the judgment that p, a process which establishes p as your
reason for judging q.
7. Types of Inference
What is an example of the sort of process I am talking about?
One example, which for reasons I will explain later I will call reasoning 2.0, would go like this:
(1) I consider explicitly some proposition that I believe, for example p.
And I ask myself explicitly:
(2) (Meta) What follows from p?
And then it strikes me that q follows from p. Hence,
(3) (Taking) I take it that q follows from p.
At this point I ask myself
(4) Is q plausible? Is it less plausible than the negation of p?
(5) I conclude that q is not less plausible than not-p.
(6) So, I judge q.
I add q to my stock of beliefs.
Reasoning 2.0 is admittedly not the most common type of reasoning. But it is
probably not as rare as it is fashionable to claim nowadays. In philosophy, as in other
disciplines, there is a tendency to overlearn a good lesson. Wittgenstein liked to
emphasize that many philosophical theories overly intellectualize cognitive phenomena. Perhaps so. But we should not forget that there are many phenomena that call
for precisely such intellectualized descriptions.
Reasoning 2.0 happens in a wide variety of contexts. Some of these, to be sure, are
rarified intellectual contexts, as, for example, when you are working out a proof, or
formulating an argument in a paper.
But it also happens in a host of other cases that are much more mundane. Prior to
the 2016 presidential election, the conventional wisdom was that Donald Trump was extremely unlikely to win it. Many people are likely to have taken a critical stance on this conventional wisdom, asking themselves: “Is there really evidence that shows what conventional wisdom believes?” Anyone taking such a stance would be engaging in reasoning 2.0.
Having said that, it does seem true that there are many cases where our reasoning
does not proceed via an explicit meta-question about what follows from other things
we believe. Most cases of inference are seemingly much more automatic and unreflective than that. Here is one:
On waking up one morning you recall that
(Rain Inference)
(1) It rained heavily through the night.
You conclude that
(2) The streets are filled with puddles (and so you should wear your boots rather
than your sandals).
Here, the premise and conclusion are both things of which you are aware. But, it
would seem, there is no explicit meta-question that prompts the conclusion. Rather,
the conclusion comes seemingly immediately and automatically. I will call this an
example of reasoning 1.5.
The allusion here, of course, is to the increasingly influential distinction between
two kinds of reasoning, dubbed “System 1” and “System 2” by Daniel Kahneman. As
Kahneman (2011, pp. 20–1) characterizes them,
System 1 operates automatically and quickly, with little or no effort and no sense
of voluntary control.
System 2 allocates attention to the effortful mental activities that demand it,
including complex computations. The operations of System 2 are often associated
with the subjective experience of agency, choice, and concentration.
As examples of System 1 thinking, Kahneman gives detecting that one object is more
distant than another, orienting to the source of a sudden sound, and responding to a
thought experiment with an intuitive verdict. Examples of System 2 thinking are
searching memory to identify a surprising sound, monitoring your behavior in a
social setting, and checking the validity of a complex logical argument.
Kahneman intends this not just as a distinction between kinds of reasoning, but as a distinction between kinds of thinking more broadly.
Applied to the case of reasoning, the distinction seems to me to entail that a lot of reasoning falls somewhere in between these two extremes.
The (Rain) inference, for example, is not effortful or attention-hogging. On the
other hand, it seems wrong to say that it is not under my voluntary control, or that
there is no sense of agency associated with it. It still seems to be something that I do.
That is why I have labeled it “System 1.5 reasoning.”
The main difference between reasoning 2.0 and reasoning 1.5 is not agency per se,
but rather the fact that in reasoning 2.0, but not in 1.5, there is an explicit (Meta)
question, an explicit state of taking the conclusion to follow from the premises, and,
finally, an explicit drawing of the conclusion as a result of that taking. All three of
these important elements seem to be present in reasoning 2.0, but missing from
reasoning 1.5.
My own preferred version of Frege’s view, for reasons that I have explained elsewhere and won’t rehearse here, I would put like this:
(Inferring) S’s inferring from p to q is for S to judge q because S takes (the accepted truth of) p to provide (contextual) support for (the acceptance of) q.
Let us call this insistence that an account of inference must in this way incorporate a
notion of “taking” the Taking Condition on inference:
(Taking Condition): Inferring from p to q necessarily involves the thinker taking
p to support q and drawing q because of that fact.
As so formulated, this is not so much a view as a schema for a view. It signals the need for something to play the role of “taking,” but without saying exactly what it is that plays that role, or how it plays it.
In other work, I have tried to say more about this taking state and how it might
play this role (see Boghossian 2014, 2016).
In an important sense, however, it was probably premature to attempt to do that.
Before trying to explain the nature of the type of state in question, we need to satisfy
ourselves that a correct account of inference has to assume the shape that the
(Taking) condition describes. Why should we impose a phenomenologically counterintuitive and theoretically treacherous-looking condition such as (Taking) on any
adequate account of inference?
In this chapter, then, instead of digging into the nitty-gritty details of the taking
state, I want to explain in fairly general terms why an account of inference should
assume this specific shape, what is at stake in the debate about (Taking), and why it
should be resolved one way rather than another.
All of this, of course, is part and parcel of the larger debate in epistemology about
the extent to which foundational notions in epistemology are normative and deonto-
logical in nature. The hope is that, by playing out this debate in the special case of
reasoning, we will shed light both on the nature of reasoning and on the larger
debate.
With these points in mind, let’s turn to our question about the nature of inference
and to whether it should in general be thought of as involving a “taking” state.
But this can’t be right. Mere associations could involve the interaction of representational states on the basis of their content. The Woody Allenesque depressive who, on thinking “I am having so much fun,” always then thinks “But there is so much suffering in the world,” is having an association of judgments on the basis of their
content. But he is not thereby inferring from the one proposition to the other. (For
more discussion of Kornblith, see Boghossian 2016.)
If mere sensitivity to content isn’t enough to distinguish reasoning from association,
then perhaps what’s missing is support: the depressive’s thinking doesn’t count as
reasoning because his first judgment—that he is having so much fun—doesn’t support
his second—that there is so much suffering in the world. Indeed, the first judgment might
be thought to slightly undermine the second, since at least the depressive is having fun. By
contrast, in the (Rain) inference the premise does support the conclusion.
But, of course, that can’t be a good proposal either. Sometimes I reason from p to q where p does not support q. That makes the reasoning bad, but it is reasoning
nonetheless. Indeed, it is precisely because it is reasoning that we can say it’s bad. The
very same transition would be just fine, or at any rate, acceptable, if it were a mere
association.
Third, taking appears to account well for how inference could be subject to a
Moore-style paradox. That inference is subject to such a style of paradox has been
well described by Ulf Hlöbil (although he disputes that my explanation of it is
adequate). As Hlöbil (2014, pp. 420–1) puts it:
(IMP) It is either impossible or seriously irrational to infer P from Q and to judge, at the same
time, that the inference from Q to P is not a good inference.
. . . [It] would be very odd for someone to assert (without a change in context) an instance of
the following schema,
(IMA) Q; therefore, P. But the inference from Q to P is not a good inference (in my context).
. . . it seems puzzling that if someone asserts an instance of (IMA), this seems self-defeating.
The speaker seems to contradict herself . . . Such a person is irrational in the sense that her state
of mind seems self-defeating or incoherent. However, we typically don’t think of inferrings as
contentful acts or attitudes . . . Thus, the question arises how an inferring can generate the kind
of irrationality exhibited by someone who asserts an instance of (IMA). Or, to put it differently:
How can a doing that seems to have no content be in rational tension with a judgment or a
belief?
Hlöbil is right that there is a prima facie mystery here: how could a doing be in
rational tension with a judgment?
The Taking Condition, however, seems to supply a good answer: there can be a
tension between the doing and the judgment because the doing is the result of taking
the premises to provide good support for the conclusion, a taking that the judgment
then denies.
Fourth, taking offers a neat explanation of how there could be two kinds of
inference—deductive and inductive.¹¹
Of course, in some inferences the premises logically entail the conclusion and in
others they merely make the conclusion more probable than it might otherwise be.
That means that there are two sets of standards that we can apply to any given
inference. But that only gives us two standards that we can apply to an inference, not
two different kinds of inference.
Intuitively, though, it’s not only that there are two standards that we can apply to
any inference, but two different types of inference. And, intuitively once more, that
distinction involves taking: it is a distinction between an inference in which the
thinker takes his premises to deductively warrant his conclusion versus one in which
he takes them merely to inductively warrant it.
Finally, some inferences seem not only obviously unjustified, and so not ones that
rational people would perform; more strongly, they seem impossible. Even if you were willing to run the risk of irrationality, they don’t seem like inferences that one could perform.

¹¹ The following two points were already mentioned in Boghossian (2014); I mention them here again for the sake of completeness.
Consider someone who claims to infer Fermat’s Last Theorem (FLT) directly from
the Peano axioms, without the benefit of any intervening deductions, or knowledge of
Andrew Wiles’s proof of that theorem. No doubt such a person would be unjustified
in performing such an inference, if he could somehow get himself to perform it.
But more than that, we feel that no such transition could be an inference to begin
with, at least for creatures like ourselves. What could explain this?
The Taking Condition provides an answer. For the transition from the Peano
axioms to FLT to be a real inference, the thinker would have to be taking it that the
Peano axioms support FLT’s being true. And no ordinary person could so take it, at
least not in a way that’s unmediated by the proof of FLT from the Peano axioms. (The
qualification is there to make room for extraordinary people, like Ramanujan, for
whom many more number-theoretic propositions were obvious than they are for the
rest of us.)¹²
We see, then, that there are a large number of considerations, both intuitive and theoretical, for imposing a Taking Condition on inference.
¹² I’m inclined to think that this sort of example actually shows something stronger than that taking
must be involved in inference. I’m inclined to think that it shows that the taking must be backed by an
intuition or insight that the premises support the conclusion. For the fact that an ordinary person can’t take
it that FLT follows from the Peano axioms directly isn’t a brute fact. It is presumably to be explained by the
fact that an ordinary person can’t simply “see” that FLT follows from the Peano axioms. I hope to
develop this point further elsewhere.
We need to ask the following: What would we miss if we only had Helmholtz’s use
of the word “inference” to work with? What important seam in epistemology would
be obscured if we only had his word?
I have already tried to indicate what substantive issue is at stake. I don’t mind if
you use the same word “inference” to cover both Helmholtz’s sub-personal transi-
tions and adult human reasoning 2.0 and 1.5. The crucial point is not to let that
linguistic decision obscure the normative landscape: unlike the latter, the sub-
personal transitions of a person’s visual system are not ones that we can hold the
person responsible for, and they are not ones whose goodness or badness enters into
our assessments of her rationality. I can’t be held responsible for the hard-wired
transitions that make my visual system liable to the Müller-Lyer illusion. Of course,
once I find out that it is liable to such an illusion, I am responsible for not falling for it, so to speak, but that’s a different matter.
Obviously, here we are in quite treacherous territory, the analogue to the question
of free will and responsibility within the cognitive domain. Ill-understood as this
issue is in general, it is even less well understood in the cognitive domain, in part
because we are far from having a satisfactory conception of mental action.
But unless you are a skeptic about responsibility, you will think that there are some
conditions that distinguish between mere mechanical transitions, and those cognitive
transitions towards which you may adopt a participant reactive attitude, to use
Strawson’s famous expression (see also Smithies 2016).
And what we know from reflection on cases, such as those of the habitual
depressive, is that mere associative transitions between mental states—no matter
how conscious and content-sensitive they may be—are not necessarily processes for
which one can be held responsible.
It is only if there is a substantial sense in which the transitions are ones that a
thinker performed that she can be held responsible for them. That is the fundamental
reason why Helmholtz-style transitions cannot, in and of themselves, amount to
reasoning in the intended sense.
We can get at the same point from a somewhat different angle.
Any transition whatsoever could be hard-wired in, in Helmholtz’s sense. One
could even imagine a creature in which the transition from the Peano axioms to FLT
is hard-wired in as a basic transition.
What explains the perceived discrepancy between a mere transition that is hard-
wired in versus one that is the result of inference?
The answer I’m offering is that merely wired-in transitions can be anything you
like because there is no requirement that the thinker approve the transition, and
perform that transition as a result of that approval; they can just be programmed in.
By contrast, I’m claiming, inferential transitions must be driven by an impression on
the thinker’s part that his premises support his conclusion.
To sum up the argument so far: a person’s inferential behavior, in the intended
sense, is part and parcel of his constitution as a rational agent. A person can be held
responsible for inferring well or poorly, and such assessments can enter into an
overall evaluation of his virtues as a rational agent. Making sense of this seems to
require imposing the Taking Condition on inference. Helmholtz-style sub-personal
“inferences” can be called that, if one wishes, but they lie on the other side of the
bright line that separates cognitive transitions for which one can be held responsible
from those for which one cannot.
To put these two points together, I believe that our mental lives are guided to a large extent by phenomenologically inert conscious states that do their guiding tacitly.
One of the interesting lessons for the philosophy of mind that is implicit in all this
is that you can’t just tell by the introspection of qualitative phenomenology what the
basic elements of your conscious mental life are, especially when those are intentional
or cognitive elements. You need a theory to guide you. Going by introspection of
phenomenology alone, you may never have seen the need to recognize states of
intuition or intellectual seeming; you may never have seen the need to recognize
fleeting occurrent judgments, made while surveying a scene; and you may never have
seen the need to postulate states of taking.
I think that part of the problem here, as I’ve already noted, stems from overlearning the good lesson that Wittgenstein taught us: that in philosophy there has been a tendency to give overly intellectualized descriptions of cognitive phenomena.
My own view is that conscious life is shot through with states and events that play important, traditionally rationalistic roles, which have no vivid qualitative phenomenology, but which can be recognized through their indispensable role in providing adequate accounts of central cognitive phenomena. In the particular case of inference, the fact that we need a subject’s inferential behavior to be something for which he can be held rationally responsible is a consideration in favor of the Taking Condition that no purely phenomenological consideration can override.
In this sense, it is up to me as to whether I preserve the belief. It thus makes sense to hold me
responsible for the result of the process. I say that something like this story characterizes a great
deal of adult human inference. Indeed, it is tempting to say that all inference—at least adult
inference in which we are conscious of making an inference—is like this.
(Chapter 6, this volume, p. x)
Richard is trying to show how you can be responsible for your reasoning, even if you
are not aware of it and did not perform it. But all he shows, at best, is that you can be
held responsible for the output of some reasoning, rather than for the reasoning itself,
if the output of that reasoning is a belief.
But it is a platitude that one can be held responsible for one’s beliefs. The point has
nothing to do with reasoning and does not show that we can be held responsible for
our reasoning, which is the process by which we sometimes arrive at beliefs, and not
the beliefs at which we arrive. You could be held responsible for any of your beliefs
that you find sitting in your “belief box,” even if it weren’t the product of any
reasoning, but merely the product of association.
Once you become aware that you have that belief, you are responsible for making
sure that you keep it if and only if you have a good reason for keeping it.
If it just popped into your head, it isn’t yet clear that it has an epistemic basis, let
alone a good one.
To figure out whether you have a good basis for maintaining it, you would have to
engage in some reasoning. So, we would be right back where we started.
Richard (Chapter 6, this volume, p. 93) comes around to considering this objection. He says:
I’ve been arguing that the fact that we hold the reasoner responsible for the product of her
inference—we criticize her for a belief that is unwarranted, for example—doesn’t imply that in
making the inference the reasoner exercises a (particularly interesting) form of agency. Now, it
might be said that we hold she who reasons responsible not just for the product of her
inference, but for the process itself. When a student writes a paper that argues invalidly to a
true conclusion, the student gets no credit for having blundered onto the truth; he loses credit
for having blundered onto the truth. But, it might be said, it makes no sense to hold someone
responsible for a process if they aren’t the, or at least an, agent of the process.
Let us grant for the moment that when there is inference, both its product and the
process itself are subject to normative evaluation. What exactly does this show? We hold
adults responsible for such things as implicit bias. To hold someone responsible for implicit
bias is not just to hold them responsible for whatever beliefs they end up with as a result of
the underlying bias. It is to hold the adult responsible for the mechanisms that generate
those beliefs, in the sense that we think that if those mechanisms deliver faulty beliefs, then
the adult ought to try to alter those mechanisms if he can. (And if he cannot, he ought to be
vigilant for those mechanisms’ effects.)
Implicit bias is, of course, a huge and complicated topic, but, even so, the analogy
seems to me to be misapplied here.
OUP CORRECTED PROOF – FINAL, 23/4/2019, SPi
We are certainly responsible for faulty mechanisms if we know that they exist and
are faulty.
Since we are all now rightly convinced that we suffer from all sorts of implicit
biases, without knowing exactly how they operate, we all have a responsibility to
uncover those mechanisms within us that are delivering faulty beliefs about other
people and to modify them; or, at the very least, if that is not possible, to neutralize
their effects.
But what Richard needs is not an obvious point like that. What he needs to argue is
that a person can be responsible for mechanisms that deliver faulty judgments, even
if he doesn’t know anything about them: doesn’t know that they exist, doesn’t know
that their deliverances are faulty, and doesn’t know how they operate.
Consider a person who is, relative to the rest of the community of which he is a
part, color-blind, but who is utterly unaware of this attribute of his. Such a person
would have systematically erroneous views about the sameness or difference of
certain colored objects in his environment. His color judgments would be faulty.
But do we really want to hold him responsible for those faulty judgments and allow
those to enter into assessments of his rationality? I don’t believe that would be right.
Once he discovers that he is color-blind relative to his peers, then he will have
some responsibility to qualify his judgments about colors and look out for those
circumstances in which his judgments may be faulty. But until such time as he is
brought to awareness, we can find his judgments faulty without impugning his
rationality.
The confounding element in the implicit bias case is that the faulty judgments are
often morally reprehensible and so suggest that perhaps a certain openness to
morally reprehensible thoughts lies at the root of one’s susceptibility to implicit
bias. And that would bring a sense of moral responsibility for that bias with it. But
if so, that is not a feature that Richard can count on employing for the case of
inference in general.
Richard (Chapter 6, this volume, p. 92) also brings up the case where we might be
tempted to say that we inferred but where it is not clear what premises we inferred
from:
More significantly, there are cases that certainly seem to be inferences in which I simply don’t
know what my premises were. I know Joe and Jerome; I see them at conventions, singly and in
pairs, sometimes with their significant others, sometimes just with each other. One day it
simply comes to me: they are sleeping together. I could not say what bits of evidence buried in
memory led me to this conclusion, but I—well, as one sometimes says, I just know. Perhaps
I could by dwelling on the matter at least conjecture as to what led me to the conclusion. But
I may simply be unable to.
Granted, not every case like this need be a case of inference. But one doesn’t want to say that
no such case is. So if taking is something that is at least in principle accessible to consciousness,
one thinks that in some such cases we will have inference without taking.
There is little doubt that there are cases somewhat like the ones that Richard
describes. There are cases where beliefs simply come to you. And sometimes they
come with a great deal of conviction, so you are tempted to say: I just know. I don’t
know exactly how I know, but I just know.
What is not obvious is (a) that they are cases of knowledge, or (b) that, when they
are, they are cases of (unconscious) inference from premises that you are not
aware of.
Of course, a Reliabilist would have no difficulty making sense of the claim that
there could be such cases of knowledge, but I am no Reliabilist.
But even setting that point to one side, the most natural description of the case
where p suddenly strikes you as true is that you suddenly have the intuition that p is
true. No doubt there is a causal explanation for why you suddenly have that intuition.
And no doubt that causal explanation has something to do with your prior experi-
ences with p.
But all of that is a far cry from saying that you inferred to p from premises that you
are not aware of.
Richard’s most compelling argument against requiring taking for inference stems,
interestingly enough, from the case of perception:
If we had reason to think that it was only in such explicit cases [he means reasoning 2.0] that
justification could be transmitted from premises to conclusion, then perhaps we could agree
that such cases should be given pride of place. But we have no reason to think that. I see a face;
I immediately think that’s Paul. My perceptual experience—which I would take to be a belief or
at least a belief-like state that I see a person who looks so—justifies my belief that I see Paul. It is
implausible that in order for justification to be transmitted I must take the one to justify the
other. (Chapter 6, this volume, p. 96)
I agree that no taking state is involved in perceptual justification. I wouldn’t say, with
Richard, that perceptual experience is a belief: I don’t see how the belief That’s Paul
could justify the belief That’s Paul.
But I would say that the perceptual experience is a visual seeming with the content
That’s Paul and that this can justify the belief That’s Paul without an intervening
taking state.
But there is a reason for this that is rooted in the nature of perception (one could
say something similar about intuition). The visual seeming That’s Paul presents the
world (as John Bengson (2015) has rightly emphasized) as this being Paul in front of
one. When you then believe That’s Paul on that basis, there is no need to take the
seeming that p to support the belief that p. You are already there (that’s why
Richard finds it so natural to say that perceptual experience is itself a belief, although
that is going too far).
All that can happen is that a doubt about the deliverances of one’s visual system
can intervene, to block you from endorsing the belief that is right there on the tip of
your mind, so to speak.
But that is the abnormal case, not the default one. The mistake would be not
believing what perception presents you with, unless you have reason not to.
But the inference from p to q is not like that. A belief that p, which is the input into
the inferential process, is not a seeming that q. And while the transition to believing
that q may be familiar and well supported, it is not simply like acquiescing in
something that is already the proto-belief that q.
15. Conclusion
Well, there are many things that I have not done in this chapter. I have not developed a
positive account of the taking state. I have not discussed possible regress worries. But
one can’t do everything.
What I’ve tried to do is explain why the shape of an account of reasoning that gives
a central role to the Taking Condition has a lot to be said for it, especially if we are to
retain the traditional connections between reasoning, responsibility for reasoning,
and assessments of a person’s rationality.
References
Audi, Robert. (1986). Belief, Reason, and Inference. Philosophical Topics, 14 (1), 27–65.
Bengson, John. (2015). The Intellectual Given. Mind, 124 (495), 707–60.
Block, Ned. (1995). On a Confusion about a Function of Consciousness. Behavioral and Brain
Sciences, 18 (2), 227–87.
Boghossian, Paul. (2014). What is Inference? Philosophical Studies, 169 (1), 1–18.
Boghossian, Paul. (2016). Reasoning and Reflection: A Reply to Kornblith. Analysis, 76 (1),
41–54.
Boghossian, Paul. (forthcoming). Do We Have Reason to Doubt the Importance of the
Distinction Between A Priori and A Posteriori Knowledge? A Reply to Williamson. In
P. Boghossian and T. Williamson, Debating the A Priori. Oxford University Press.
Burge, Tyler. (1993). Content Preservation. The Philosophical Review, 102 (4), 457–88.
Cappelen, Herman. (2012). Philosophy Without Intuitions. Oxford University Press.
Chisholm, Roderick M. (1989). Theory of Knowledge, 3rd edition. Prentice-Hall.
Deutsch, Max. (2015). The Myth of the Intuitive. MIT Press.
Frege, Gottlob. (1979). Logic. In Posthumous Writings. Blackwell.
Helmholtz, Heinrich von. (1867). Treatise on Physiological Optics, Vol. III. Dover Publications.
Hlobil, Ulf. (2014). Against Boghossian, Wright and Broome on Inference. Philosophical
Studies, 167 (2), 419–29.
Huemer, Michael. (1999). The Problem of Memory Knowledge. Pacific Philosophical Quarterly,
80, 346–57.
Kahneman, Daniel. (2011). Thinking, Fast and Slow. Macmillan.
Kornblith, Hilary. (2012). On Reflection. Oxford University Press.
Littlejohn, Clayton. (2012). Justification and the Truth-Connection. Cambridge University
Press.
Pettit, Philip. (2007). Rationality, Reasoning and Group Agency. Dialectica, 61 (4), 495–519.
Pryor, James. (2000). The Skeptic and the Dogmatist. Noûs, 34 (4), 517–49.
Pryor, James. (2005). There is Immediate Justification. In M. Steup and E. Sosa (eds.),
Contemporary Debates in Epistemology. Blackwell.
Pryor, James. (2007). Reasons and That-clauses. Philosophical Issues, 17 (1), 217–44.
Siegel, Susanna. (2017). The Rationality of Perception. Oxford University Press.
Smithies, Declan. (2016). Reflection On: On Reflection. Analysis, 76 (1), 55–69.
Williamson, Timothy. (2000). Knowledge and Its Limits. Oxford University Press.
Williamson, Timothy. (2007). The Philosophy of Philosophy. John Wiley & Sons.
Wright, Crispin. (2014). Comment on Paul Boghossian, “What is Inference.” Philosophical
Studies, 169 (1), 27–37.