
forall𝓍 ADELAIDE


Antony Eagle
University of Adelaide

Tim Button
University College London

P.D. Magnus
University at Albany, State University of New York
© 2005–2022 by Antony Eagle, Tim Button and P.D. Magnus. Some rights reserved.
This version of forall𝓍 ADELAIDE is current as of 20 January 2022.

This book is a derivative work created by Antony Eagle, based upon Tim Button’s 2016 Cambridge version of P.D. Magnus’s forall𝓍 (version 1.29). There are pervasive substantive changes in content, theoretical approach, coverage, and appearance. (For one thing, it’s more than twice as long.)
You can find the most up‐to‐date version of forall𝓍 ADELAIDE at github.com/antonyeagle/forallx-adl.

The current version of forall𝓍 Cambridge is available at github.com/OpenLogicProject/forallx-cam, which has now also diverged noticeably from the 2016 version. Magnus’s original is available at fecundity.com/logic.
This book, like its predecessors, is released under a Creative Commons license (Attribution 4.0).

This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit creativecommons.org/licenses/by/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

Typesetting was carried out entirely in XeLaTeX using the Memoir class. The body text is set in Constantia; math is set in Cambria Math; sans serif in Calibri; and monospaced text in Consolas. The style for typesetting proofs is based on fitch.sty (v0.4) by Peter Selinger, University of Ottawa.

Kaurna miyurna, Kaurna yarta, ngai tampinthi.


Made on Kaurna land. Its sovereignty was never ceded.
Contents

1 Key Notions 1

1 Arguments 2
2 Valid Arguments 5
3 Other Logical Notions 17

2 The Language of Sentential Logic 24

4 First Steps to Symbolisation 25


5 Connectives 35
6 Sentences of Sentential 49
7 Use and Mention 56

3 Truth Tables 62

8 Truth‐Functional Connectives 63
9 Complete Truth Tables 74
10 Semantic Concepts 79
11 Entailment and Validity 84
12 Truth Table Shortcuts 92
13 Partial Truth Tables 97
14 Expressiveness of Sentential 104

4 The Language of Quantified Logic 109

15 Building Blocks of Quantifier 110


16 Sentences with One Quantifier 122


17 Multiple Generality 136
18 Identity 150
19 Definite Descriptions 159
20 Sentences of Quantifier 170

5 Interpretations 176

21 Extensionality 177
22 Truth in Quantifier 200
23 Semantic Concepts 216
24 Demonstrating Consistency and Invalidity 222
25 Reasoning about All Interpretations 228

6 Natural Deduction for Sentential 233

26 Proof and Reasoning 234


27 The Idea of Natural Deduction 240
28 Basic Rules for Sentential: Rules without Subproofs 246
29 Basic Rules for Sentential: Rules with Subproofs 259
30 Some Philosophical Issues about Conditionals, Meaning, and Negation 285
31 Proof‐Theoretic Concepts 291
32 Proof Strategies 301
33 Derived Rules for Sentential 303
34 Alternative Proof Systems for Sentential 312

7 Natural Deduction for Quantifier 317

35 Basic Rules for Quantifier 318


36 Derived Rules for Quantifier 340
37 Rules for Identity 345
38 Proof‐Theoretic Concepts and Semantic Concepts 352
39 Next Steps 355

Appendices 364

A Alternative Terminology and Notation 364


B Quick Reference 369
C Index of defined terms 374

Acknowledgements, etc. 378


List of Figures

1 How the sections depend on one another. . . . . . . . . . . . . . . . . . . . viii

4.1 A phrase structure tree for Example 18. . . . . . . . . . . . . . . . . . . . . 27


4.2 A phrase structure tree for Example 19. . . . . . . . . . . . . . . . . . . . . 27
4.3 Paraphrase of Example 26 showing its subsentential structure, as in Ex‐
ample 27. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.4 Different sentential structures for ‘Not A and B’ shown in schematic syn‐
tactic trees. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

6.1 Formation tree for ‘¬(𝑃 ∧ ¬(¬𝑄 ∨ 𝑅))’. . . . . . . . . . . . . . . . . . . . . . 53

21.1 An interpretation represented diagrammatically. . . . . . . . . . . . . . . . 187


21.2 A representation of ‘Every Rose Has Its Thorn’. . . . . . . . . . . . . . . . . 188
21.3 Representing the argument from §15. . . . . . . . . . . . . . . . . . . . . . . 189
21.4 ‘Chat Systems’, https://xkcd.com/1810/ . . . . . . . . . . . . . . . . . . . 189
21.5 A simple graph. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
21.6 A graph with ‘loops’ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
21.7 A more complicated graph. . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
21.8 Multiple techniques used to depict a complex interpretation. . . . . . . . . 192
21.9 A graph of a reflexive relation on {1, 3, 4}. . . . . . . . . . . . . . . . . . . . 193
21.10 A graph of the transitive relation ‘older than’ on some University of Ad‐
elaide buildings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
21.11 An extract of Qantas’ route network. . . . . . . . . . . . . . . . . . . . . . . 194
21.12 ‘next to’: a symmetric but intransitive relation. . . . . . . . . . . . . . . . . 195
21.13 > (black arrows) and ≥ (orange dotted arrows) on the domain {0, 1, 2}. . . 196
21.14 Black arrows indicate both ⊂ and ⊆, dotted arrows indicate ⊆ only, on the
domain ℘{𝑎, 𝑏} = {{𝑎, 𝑏}, {𝑎}, {𝑏}, ∅}. . . . . . . . . . . . . . . . . . . . . . . . 197

29.1 Proof of ¬(𝐴 ∨ 𝐵) ∴ (¬𝐴 ∧ ¬𝐵) . . . . . . . . . . . . . . . . . . . . . . . . . 280


29.2 Proof that (𝐴 ∨ 𝐵), ¬(𝐴 ∧ 𝐶), ¬(𝐵 ∧ ¬𝐷) ∴ (¬𝐶 ∨ 𝐷). . . . . . . . . . . . . . . 281
29.3 A complicated proof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282


33.1 Disjunctive syllogism is derivable in the standard proof system. . . . . . . . 305


33.2 Modus tollens is derivable in the standard proof system. . . . . . . . . . . . 306
33.3 Tertium non datur is derivable in the standard proof system. . . . . . . . . 308

34.1 Schematic ∨E proof. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314


34.2 Schematic proof using DS to emulate ∨E. . . . . . . . . . . . . . . . . . . . 314

39.1 A red‐yellow spectrum. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361


How to Use This Book

This book has been designed for use in conjunction with the University of Adelaide
courses PHIL 1110 Introduction to Logic and PHIL 1111OL Introductory Logic. But it is suit‐
able for self‐study as well. I have included a number of features to assist your learning.

› forall𝓍 is divided into seven chapters, each further divided into sections and
subsections. The sections are continuously numbered.

– Chapter 1 gives an overview of how I understand the project of formal logic;
– Chapters 2–3 cover sentential or truth‐functional logic;
– Chapters 4–5 cover quantified or predicate logic;
– and Chapters 6–7 cover the formal proof systems for our logical languages.

› The book contains many cross‐references to other sections. So a reference to ‘§6.2’ indicates that you should consult section 6, subsection 2 – you will find this on page 49. Cross‐references and entries in the table of contents are hyperlinked.

› Figure 1 shows how the sections depend on one another. For example, the arrows
coming from §21 in the diagram show that understanding that section requires
familiarity with §11 and §20, and also any sections on which they depend.

› Each section in the book concludes with a box labelled ‘Key Ideas in §𝑛’. These are not a summary of the section, but contain some indication of what I regard as the main ideas that you should be taking away from your reading of the section.

› Logical ideas and notation are pretty ubiquitous in philosophy, and there are a
lot of different systems. We cannot cover all the alternatives, but some indication
of other terminology and notation is contained in Appendix A.

› A quick reference to many of the aspects of the logical systems I introduce can
be found in Appendix B.

Figure 1: How the sections depend on one another. [Diagram not reproduced: arrows between §§1–39 indicate which sections presuppose which.]

› When I first use a new piece of technical terminology, it is introduced by writing it in small caps, LIKE THIS. The index of defined terms is Appendix C.

› The book is the product of a number of authors. ‘I’ doesn’t always mean me, but
‘you’ mostly means you, the reader, and ‘we’ mostly means you and me.

I appreciate any comments or corrections: [email protected].


Chapter 1

Key Notions

1 Arguments

Logic is the business of evaluating arguments – identifying some of the good ones and
explaining why they are good. So what is an argument?
In everyday language, we sometimes use the word ‘argument’ to talk about belligerent
shouting matches. Logic is not concerned with such teeth‐gnashing and hair‐pulling.
They are not arguments, in our sense; they are disputes. A dispute like this is often
more about expressing feelings than it is about persuasion.
Offering an ARGUMENT, in the sense relevant to logic (and other disciplines, like law
and philosophy), is something more like making a case. It involves presenting reasons
that are intended to favour, or support, a specific claim. Consider this example of an
argument that someone might give:

It is raining heavily.
If you do not take an umbrella, you will get soaked.
So: You should take an umbrella.

We here have a series of sentences. The word ‘So’ on the third line indicates that
the final sentence expresses the CONCLUSION of the argument. The two sentences be‐
fore that express PREMISES of the argument. If the argument is well‐constructed, the
premises provide reasons in favour of the conclusion. In this example, the premises
do seem to support the conclusion. At least they do, given the tacit assumption that
you do not wish to get soaked.
So this is the sort of thing that logicians are interested in when they look at arguments.
We shall say that an argument is any collection of premises, together with a conclu‐
sion.1
In the example just given, we used individual sentences to express both of the argu‐
ment’s premises, and we used a third sentence to express the argument’s conclusion.

1 Because arguments are made of sentences, logicians are very concerned with the details of particu‐
lar words and phrases appearing in sentences. Logic thus also has close connections with linguistics,
particularly that sub‐discipline of linguistics known as SEMANTICS, the theory of meaning.


Many arguments are expressed in this way. But a single sentence can contain a com‐
plete argument. Consider:

I was wearing my sunglasses; so it must have been sunny.

This argument has one premise followed by a conclusion.


Many arguments start with premises, and end with a conclusion. But not all of them.
The argument with which this section began might equally have been presented with
the conclusion at the beginning, like so:

You should take an umbrella. After all, it is raining heavily. And if you do
not take an umbrella, you will get soaked.

Equally, it might have been presented with the conclusion in the middle:

It is raining heavily. Accordingly, you should take an umbrella, given that


if you do not take an umbrella, you will get soaked.

When approaching an argument, we want to know whether or not the conclusion fol‐
lows from the premises. So the first thing to do is to separate out the conclusion from
the premises. As a guideline, the following words are often used to indicate an argu‐
ment’s conclusion:

so, therefore, hence, thus, accordingly, consequently, must

And these expressions often indicate that we are dealing with a premise, rather than a
conclusion:

since, because, given that

But in analysing an argument, there is no substitute for a good nose.


In a good argument, the premises provide reasons for the conclusion. We are interested
in understanding and analysing arguments because of this. Offering a good argument
to someone emphasises some reasons that favour the conclusion. Reasonable people,
we hope, may change their mind when they are given good reasons to do so. And so
offering a good argument might persuade a reasonable person to accept its conclusion.
We return to this issue of reasoning and argument a little later, in §2.3.
Above we said that when someone offers an argument, they intend to offer premises
that support a given conclusion. But there is always a question as to whether some
premises really do support a conclusion. Someone can offer a bad argument without
knowing it, and thus intend to support some conclusion without managing to do so.
Logicians aren’t very interested in the intentions of people who might offer arguments,
but they are very interested in whether the premises of a given argument do in fact
support the conclusion. Some even think the central defining question of logic is the
issue of how to characterise when a claim is a LOGICAL CONSEQUENCE of some other
claims.

Key Ideas in §1
› An argument is a collection of sentences, divided into one or
more premises and a single conclusion.
› The conclusion may be indicated by ‘so’, ‘therefore’ or other ex‐
pressions; the premises indicated by ‘since’ or ‘because’.
› The premises are intended to support the conclusion – though
whether they do so is another matter.

Practice exercises
At the end of every section, there are practice exercises that review and explore the material just covered. There is no substitute for actually working through
some problems, because logic is more about cultivating a way of thinking than it is
about memorising facts.

A. What is the difference between argument in the everyday sense, and in the logicians’
sense? What is the point of logical arguments?
B. Highlight the phrase which expresses the conclusion of each of these arguments:

1. It is sunny. So I should take my sunglasses.


2. It must have been sunny. I did wear my sunglasses, after all.
3. No one but you has had their hands in the cookie‐jar. And the scene of the crime
is littered with cookie‐crumbs. You’re the culprit!
4. Miss Scarlett and Professor Plum were in the study at the time of the murder.
And Reverend Green had the candlestick in the ballroom, and we know that
there is no blood on his hands. Hence Colonel Mustard did it in the kitchen
with the lead‐piping. Recall, after all, that the gun had not been fired.

2 Valid Arguments

In §1, we gave a very permissive account of what an argument is. To see just how
permissive it is, consider the following:

There is a bassoon‐playing dragon in Rundle Mall.
So: Salvador Dali was a poker player.

We have been given a premise and a conclusion. So we have an argument. Admittedly, it is a terrible argument. But it is still an argument.

2.1 Two Ways that Arguments Can Go Wrong


It is worth pausing to ask what makes the argument so weak. In fact, there are two
sources of weakness. First: the argument’s (only) premise is obviously false. Rundle
Mall has some interesting buskers, but not quite that interesting. Second: the conclu‐
sion does not follow from the premise of the argument. Even if there were a bassoon‐
playing dragon in Rundle Mall, we would not be able to draw any conclusion about
Dali’s predilection for poker.
What about the main argument discussed in §1? The premises of this argument might
well be false. It might be sunny outside; or it might be that you can avoid getting soaked
without taking an umbrella. But even if both premises were true, it does not necessarily
show you that you should take an umbrella. Perhaps you enjoy walking in the rain,
and you would like to get soaked. So, even if both premises were true, the conclusion
might nonetheless be false. (If we were to add the formerly tacit assumption that
you do not wish to get soaked as a further premise, then the premises taken together
would provide support for the conclusion.) Those premises might still provide some
reason for thinking the conclusion is correct. But if the premises can be true, while
the conclusion is false, there is at least some good sense in which there is a gap or lack
in the support those premises provide for the conclusion.
The general point is as follows. For any argument, there are two ways that it might go
wrong:


1. One or more of the premises might be false.

2. The conclusion might not follow from, or be a consequence of, the premises –
even if the premises were true, they would not support the conclusion.

To determine whether or not the premises of an argument are true is often a very
important matter. But that is normally a task best left to experts in the field: as it
might be, historians, scientists, or whomever. In our role as logicians, we are more
concerned with arguments in general. So we are (usually) more concerned with the
second way in which arguments can go wrong.

2.2 Conclusive Arguments


As logicians, we want to be able to determine when the conclusion of an argument
follows from the premises. One way to put this is as follows. We want to know whether,
if all the premises were true, the conclusion would also have to be true. This motivates
a definition:

An argument is CONCLUSIVE if, and only if, the truth of the premises
guarantees the truth of the conclusion.
In other words: an argument is conclusive if, and only if: it is not pos‐
sible for the premises of the argument to be true while the conclusion
is false.

Consider another argument:

You are reading this book.
This is a logic book.
So: You are a logic student.

This is not a terrible argument. Both of the premises are true. And most people who
read this book are logic students. Yet, it is possible for someone besides a logic student
to read this book. If your housemate picked up the book and thumbed through it, they
would not immediately become a logic student. So the premises of this argument,
even though they are true, do not guarantee the truth of the conclusion. This is not a
conclusive argument.
Or consider this pair of arguments:

Mary has two brothers;                    Mary has two brothers;
So: There are three siblings              So: There are at least three siblings
    in her family.                            in her family.
The argument on the left might be pretty good: that premise provides some reason to
accept the conclusion, especially if you imagine a real conversation in which someone
makes this case. It would be weird to use a premise about Mary’s brothers in an ar‐
gument for a conclusion about her siblings unless Mary had no sisters. Weird, but
not impossible. Strictly speaking, if Mary had a sister the speaker did not mention, it
would be possible for the premise to be true while the conclusion is false. So the left
argument is not conclusive, interpreted strictly.
The argument on the right shows that the premise does make a compelling case for
a related but more hedged conclusion. You might think ‘that second argument is a
bit nit‐picking’. But that is exactly what makes it so watertight. It doesn’t depend on
what would make for a ‘normal’ conversation, or whether you are making the same
assumptions as the speaker, etc. No matter what, the truth of the premises secures the
truth of the conclusion: it is conclusive.
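The contrast between the two arguments can be made vivid with a small sketch (my own illustration, not part of the book): hold the premise fixed, vary how many sisters Mary has, and test each conclusion against every scenario.

```python
def siblings_in_family(sisters):
    # Mary herself, plus her two brothers (the premise), plus any sisters
    return 1 + 2 + sisters

# Scenarios (numbers of unmentioned sisters) in which each conclusion
# fails even though the premise 'Mary has two brothers' is true.
left_fails = [s for s in range(4) if siblings_in_family(s) != 3]   # 'exactly three'
right_fails = [s for s in range(4) if siblings_in_family(s) < 3]   # 'at least three'

print(left_fails)   # → [1, 2, 3]: a single unmentioned sister falsifies 'three siblings'
print(right_fails)  # → []: 'at least three siblings' survives every scenario
```

The left conclusion fails as soon as Mary has any sister at all, while the hedged right conclusion holds no matter what; that is the sense in which only the right argument is conclusive.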
The crucial thing about a conclusive argument is that it is impossible, in a very strict
sense, for the premises to be true whilst the conclusion is false. Consider this example:

Oranges are either fruits or musical instruments.
Oranges are not fruits.
So: Oranges are musical instruments.

The conclusion of this argument is ridiculous. Nevertheless, it follows from the premises. If both premises were true, then the conclusion would just have to be true. So the argument is conclusive.
So the argument is conclusive.
Why is this argument conclusive? The most important factor for us in considering
what makes an argument conclusive is to examine the argument’s STRUCTURE – the
grammatical forms of the premises and conclusion. An argument will be conclusive if
its structure guarantees that its premises support the conclusion. In the present case,
one premise says that oranges are in one of two categories; the other premise says that
oranges are not in the first category. We conclude that they are in the second category.
The premises and conclusion are about oranges. But it is plausible to think that any
argument with this same sort of structure must be conclusive, whether we are talking
about oranges, or cars – or anything really.
Conclusiveness of an argument is insensitive to the truth or falsity of the premises. An
argument can be conclusive while nevertheless going wrong in the first of the ways we
identified in §2.1. Consider this argument:

The earth has twenty‐eight moons;
So: The earth has an even number of moons.

The argument is conclusive, as there is no possible way in which the earth could have
twenty‐eight moons while having an odd number of moons. But the premise is so ob‐
viously false that this argument could never be used to persuade anyone. The premise
supports the conclusion, but that support is moot given the evident falsity of the
premise.

2.3 Reasons to Believe


A conclusive argument, in the logician’s sense, links the premises to the conclusion.
It turns the reasons you have for accepting the premises into reasons to accept its conclusion. But a conclusive argument need not provide you with a reason to believe
the conclusion. One way this can happen is when you don’t accept any of the premises
in the first place. When the premises support the conclusion, that might just mean
that they would be excellent reasons to accept the conclusion – if only they were true!
So: we are interested in whether or not a conclusion follows from some premises. Don’t,
though, say that the premises infer the conclusion. Entailment is a relation between
premises and conclusions; inference is something we do. So if you want to mention
inference when the conclusion follows from the premises, you could say that one may
infer the conclusion from the premises.
But even this may be doubted. Often, when you believe the premises, a conclusive
argument provides you with a reason to believe the conclusion. In that case, it might
be appropriate for you to infer the conclusion from the premises.
But sometimes a conclusive argument shows that some premises support a conclusion
you cannot accept. Suppose, for example, that you know the conclusion to be false.
The fact that the argument is conclusive and has a false conclusion tells you that the
premises cannot all be true. (Consider the argument from the previous section with
the false conclusion ‘Oranges are musical instruments’: the second premise is as absurd
as the conclusion.) In general, when an argument is conclusive

› the truth of all the premises guarantees the truth of the conclusion; and equally
› the falsity of the conclusion guarantees the falsity of at least one of the premises.

In this sort of situation, you might find that the argument gives you a better reason
to abandon any belief in one of the premises than to accept the conclusion. A con‐
clusive argument shows there is some reason to believe its conclusion, if you accept its
premises; it doesn’t mean there aren’t better reasons to reject its premises, if you reject
its conclusion. Consider this sort of example:1

If I look in the cupboard, I’ll find some muesli.
I am looking in the cupboard.
So: I find some muesli.

Someone might believe both premises, and not accept the conclusion, because on look‐
ing in the cupboard, they do not see the muesli. (Someone else finished it off earlier.)
It would be silly for this person to ‘follow the argument where it leads’. Rather, they
should use the fact that these premises entail a conclusion they now know to be false,
having just looked, to reject one of the premises. The obvious candidate is the first
premise. So this person should probably stop believing that they’ll find muesli if they
look in the cupboard, and start believing instead that they are out of muesli or suchlike.
Some cases are less straightforward. Consider this argument:

It is immoral to cause avoidable suffering.
Eating meat causes avoidable suffering.
So: Eating meat is immoral.

1 This sort of example is discussed by Gilbert Harman, Change in View, MIT Press, esp. ch. 2.

Many people will find the premises plausible, and the conclusion therefore compelling.
This is part of the case for vegetarianism. But not everyone finds the conclusion acceptable; such people end up finding that one or both of the premises should be rejected. Interestingly, people may find themselves still finding the premises attractive even when
they recognise a conclusion follows that they cannot accept. Here they may find that
there is some reason to accept the premises (perhaps they seem true at first glance),
and some reason to reject them (they have, at second glance, consequences the person
cannot accept). I don’t want to adjudicate the merits of this argument here. I only want
to emphasise that even offering a conclusive argument to someone, with premises that
they currently accept, is not enough to make them come to believe the conclusion.
Sometimes people think that logic will provide a powerful tool to persuade and con‐
vince others of their point of view. They sometimes want to study logic as if it is some
dark art enabling them to subdue beliefs that contradict their own. This is not really
a very nice thing to want to do – to force a belief on someone, whether they want to
believe it or not – and so it is not particularly regrettable that logic doesn’t help you
to do it. Logic can show which claims follow from which others, and which contradict
one another. It can help us elaborate the content of some claim, or delineate the com‐
mitments a certain belief would incur. But logic does not tell you what to believe, even
when you have a conclusive argument.
The question, what ought I to believe? is one of the deepest in the area of philosophy
known as EPISTEMOLOGY, the theory of knowledge. Logic is not able to answer that
question all by itself. Even if logic tells you that there is a conclusive argument from
premise 𝒜 to conclusion ℬ, logic can’t tell you whether you ought to believe both, or
reject both. However, logic will tell you something important, even if it is only a limited
part of the answer to the question of rational belief. It will tell you that, when you
know an argument to be conclusive, you cannot both accept its premises while rejecting
its conclusion – at least, not while being ideally rational. Thus conceived, logic is not
even a science of reasoning, because it does not tell you what to think. Logic can’t tell
you, or anyone else, which packages of premises‐and‐conclusions to accept.

2.4 Conclusiveness for Special Reasons


An argument can be conclusive for reasons unrelated to its structure. Take this example about my pet fox Juanita:

Juanita is a vixen.
So: Juanita is a fox.

It is impossible for the premise to be true and the conclusion false. So the argument is
conclusive. But this is not due to the structure of the argument. Here is an inconclusive
argument with seemingly the same structure or form. The new argument is the result
of replacing the word ‘vixen’ in the first argument with the word ‘cathedral’, but keeping
the overall grammatical structure the same:

Juanita is a cathedral.
So: Juanita is a fox.

This might suggest that the conclusiveness of the first argument is keyed to the mean‐
ing of the words ‘vixen’ and ‘fox’. But, whether or not that is right, it is not simply
the FORM of the argument that makes it conclusive. It is instructive to compare the
first argument with this modification, where we replace ‘vixen’ with the near‐synonym
‘female fox’:

Juanita is a female fox.
So: Juanita is a fox.

This also seems to be conclusive. But now we might suspect the occurrence of the word
‘fox’ in both premise and conclusion is not mere coincidence, but an essential part of
the explanation as to why this is conclusive.
Equally, consider the argument:

The sculpture is green all over.
So: The sculpture is not red all over.

Again, because nothing can be both green all over and red all over, the truth of the
premise would guarantee the truth of the conclusion. So the argument is conclusive.
But here is an inconclusive argument with the same form:

The sculpture is green all over.
So: The sculpture is not shiny all over.

The argument is inconclusive, since it is possible to be green all over and shiny all over.
(I might paint my nails with an elegant shiny green varnish.) Plausibly, the conclusive‐
ness of this argument is keyed to the way that colours (or colour‐words) interact. But,
whether or not that is right, it is not simply the form of the argument that makes it
conclusive.
An argument can be conclusive due to its structure, and also be conclusive for other
reasons. Arguably, this might be going on in the argument discussed at the end of §2.2,
with the premise ‘Oranges are not fruits’. Some people might think this premise has to
be false, because of what oranges are. (Many will say that being a fruit is an essential
part of what it is to be an orange.) But if the premise ‘Oranges are not fruits’ has to be
false, it is not possible for the premises to be true. So it is not possible for premises to
be true while the conclusion is false. Hence the argument is conclusive – both because
it has a good structure, but also because it has a premise that cannot be true.2

2 When an argument has an impossible premise, any argument with that premise will be conclusive no
matter what the conclusion is! So this is a weird kind of case of conclusiveness. But nothing much
really turns on it, and it is simpler to count it as conclusive than to try to separate out such
‘degenerate’ cases of conclusive arguments. See also §3.3.

2.5 Validity
Logicians try to steer clear of controversial matters like whether there is a definition
of an orange that requires it to be a fruit, or whether there is a ‘connection in mean‐
ing’ between being green and not being red. It is often difficult to figure such things
out from the armchair (a logician’s preferred habitat), and there may be widespread
disagreement even among subject matter experts.
So logicians do not study conclusive arguments in general, but rather concentrate on
those conclusive arguments which have a good structure or form.3 This is why the
logic we are studying is sometimes called FORMAL LOGIC. We introduce a special term
for the class of arguments logicians are especially interested in:

An argument is VALID if, and only if, it is conclusive due to its structure;
otherwise it is INVALID.

The notion of the structure of a sentence, or an argument, is an intuitive one. I make the notion more precise in §4.1. Relying on our intuitive grasp of the notion for now,
however, we can see the argument about ogres on the right has the same form as the
argument on the left about oranges (slightly tweaked from our earlier presentation in
§2.2 to make its structure clearer). It is easy to see that both of these arguments are
conclusive and valid:

Either oranges are fruits or oranges are musical instruments.
It is not the case that oranges are fruits.
So: Oranges are musical instruments.

Either ogres are fearsome or ogres are mythical.
It is not the case that ogres are fearsome.
So: Ogres are mythical.
The shared structure of these two arguments is something like this:

Either 𝒜 or ℬ.
It is not the case that 𝒜 .
So: ℬ.

Any argument with this structure will be conclusive in virtue of structure, and hence
valid. It does not matter, really, what sentences we put in place of ‘𝒜 ’ and ‘ℬ’. (Within
limits: you can’t put a question or an exclamation and get a valid argument – see §3.1.)
This highlights that valid arguments do not need to have true premises or even true
conclusions. We can put a true sentence in place of 𝒜 and a false sentence in place of
ℬ; then the second premise and the conclusion will be false. The argument is still valid.
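Although nothing in this book requires programming, the claim just made – that any argument with this structure is conclusive, whatever sentences fill the schematic letters – can be checked mechanically, by surveying every way of assigning truth values. Here is an illustrative sketch in Python, writing A for ‘𝒜 ’ and B for ‘ℬ’ (the code and its names are merely mnemonic, not part of the logical machinery developed in this book):

```python
from itertools import product

# Survey every assignment of truth values to A and B. The structure is
# conclusive just in case no assignment makes both premises true while
# the conclusion is false.
counterexamples = []
for a, b in product([True, False], repeat=2):
    premise_1 = a or b      # 'Either A or B'
    premise_2 = not a       # 'It is not the case that A'
    conclusion = b          # 'B'
    if premise_1 and premise_2 and not conclusion:
        counterexamples.append((a, b))

print(counterexamples)  # → [] : no counterexample exists
```

This row‐by‐row survey is, in effect, a small truth table: since no row makes the premises true and the conclusion false, the structure is conclusive.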

3 It can be very hard to tell whether an invalid argument is conclusive or inconclusive. Consider the
argument ‘The sea is full of water; so the sea is full of H₂O’. This is conclusive, since water just is the
same stuff as H₂O: the sea cannot be full of water without being full of that very same stuff, namely,
H₂O. But it took a lot of chemistry and ingenious experiments to figure out that water is H₂O. So it
was not at all obvious that this argument was conclusive. On the other hand, it is generally very clear
when an argument is conclusive due to its structure – you can just see the structure when the argument
is presented to you.
12 KEY NOTIONS

Conversely, having true premises and a true conclusion is not enough to make an ar‐
gument valid. Consider this example:

London is in England.
Beijing is in China.
So: Paris is in France.

The premises and conclusion of this argument are, as a matter of fact, all true. But the
argument is invalid. If Paris were to declare independence from the rest of France, then
the conclusion would be false, even though both of the premises would remain true.
Thus, it is possible for the premises of this argument to be true and the conclusion false.
The argument is therefore inconclusive, and hence invalid.
Return briefly to another example we discussed earlier:

Juanita is a female fox.


So: Juanita is a fox.

This seems to have something like this structure:

𝑎 is an ℱ 𝒢.
So: 𝑎 is a 𝒢.

For most adjectives ℱ , this structure yields a conclusive argument when you replace
the schematic letters by English words. E.g.,

Bob is a tall man;


So: Bob is tall.

But not all: some adjectives like ‘fake’ or ‘alleged’ do not yield conclusive arguments
when substituted for ℱ : ‘This is a fake gun; so this is a gun’ is not a conclusive argument.
We will return in §15 to the logical structure of examples like these, and to expressions
like ‘fake gun’ in §16.6.

2.6 Soundness
The important thing to remember is that validity is not about the actual truth or falsity
of the sentences in the argument. It is about whether the structure of the argument
ensures that the premises support the conclusion. Nonetheless, we shall say that an
argument is SOUND if, and only if, it is both valid and all of its premises are true. So
every sound argument is valid and conclusive. But not every valid argument is sound,
and not every conclusive argument is sound.
It is often possible to see that an argument is valid even when one has no idea whether
it is sound. Consider this extreme example (after Lewis Carroll’s Jabberwocky):

’Twas brillig, and the slithy toves did gyre and gimble in the wabe.
So: The slithy toves did gyre and gimble in the wabe.

This argument is valid, simply because of its structure (it has a premise conjoining two
claims by ‘and’, and a conclusion which is one of those claims). But is it sound? That
would depend on figuring out what all those nonsense words mean!

2.7 Inductive Assessment of Arguments


Many good arguments are inconclusive and invalid. Consider this one:

In January 1997, it rained in London.


In January 1998, it rained in London.
In January 1999, it rained in London.
In January 2000, it rained in London.
So: It rains every January in London.

This argument generalises from observations about several cases to a conclusion about
all cases. Though it is invalid, that doesn’t mean it is a bad argument. The premises
appear to provide some support for the conclusion, though it falls short of being con‐
clusive.
INDUCTION describes a form of reasoning from evidence to hypotheses about that evid‐
ence. For example, when we reason from a sample of eligible voters to a hypothesis
about how the whole population will vote, we are reasoning inductively. INDUCTIVE
LOGIC is the attempt to generalise deductive logic to evaluate arguments in line with
the canons of good inductive reasoning. The proponents of inductive logic think there
might be a generalisation of deductive logic which enables us to evaluate arguments in
a more fine‐grained way than the options we’ve canvassed so far (i.e., just ‘valid’,
‘invalid but conclusive’, and ‘invalid and inconclusive’). A significant part of the project
of inductive logic is the attempt to classify inconclusive arguments in terms of whether
their premises provide good inductive evidence in favour of their conclusions.
In the example, the premises are of the form ‘In January 𝑛, it rains’, and the conclusion is
of the form ‘Every January, it rains’. This argument thus has a general conclusion drawn
from instances of that generalisation. (That is, ‘it rains in January 1997’ is an instance
of the generalisation ‘it rains in January of every year’.) If we regard the foregoing
premises as providing decent support for the conclusion, we might think that adding
additional premises of the same sort before drawing the conclusion would make it even
stronger: In January 2001, it rained in London; In January 2002…. This principle, that a
generalisation is increasingly supported the more instances we adduce, might be part
of our toolkit when evaluating arguments inductively. But, no matter how many
premises of this form we add, the argument will remain inconclusive. Even if it has
rained in London in every January thus far, it remains possible that London will stay
dry next January. (Even if we include every instance – past, present, and future – the
argument will be inconclusive, because one will need the additional premise that there
are no other instances than those we’ve included.)
The point of all this is that most arguments which are inductively very strong are not
(deductively) valid. Arguments which represent very good examples of inductive reas‐
oning generally are not watertight. Unlikely though it might be, it is possible for their
conclusion to be false, even when all of their premises are true. In this book, our in‐
terest is simply in sorting the (deductively) valid arguments from the invalid ones. The
project of inductive logic, of sorting the invalid arguments further into the inductively
good and poor ones, we shall set aside entirely from here on.

2.8 Making Conclusive Arguments Valid


Some arguments which are conclusive but invalid can be turned into valid arguments.
Consider again the argument ‘The sculpture is green all over; therefore it is not red
all over’. We can make a valid argument from this by adding a premise:

The sculpture is green all over.


If the sculpture is green all over, then it is not red all over.
So: The sculpture is not red all over.

This new argument has a premise which makes explicit a fact about green and red that
was merely implicit in the original argument. Since the original argument was con‐
clusive – since the fact about green and red is true just in virtue of the meaning of the
words ‘green’ and ‘red’ (and ‘not’) – the new argument remains conclusive. (We can’t
undermine conclusiveness by adding further premises.) But the new argument is valid,
because the additional premise we have added yields an argument with a structure that
guarantees the truth of the conclusion, given the truth of the premises.
The original argument is sometimes thought to be merely an abbreviation of the ex‐
panded valid argument. An argument with an unstated premise, such that it can be
seen to be valid when the premise is made explicit, is called an ENTHYMEME.4 Many
inconclusive arguments can be treated as enthymematic, if the unstated premise is
obvious enough:

The Nasty party platform includes imprisoning people for chewing gum;
So: The Nasty party will not form the next government.

The unstated premise is something like ‘If a party platform includes imprisoning
people for chewing gum, then that party will win too few votes to form the next govern‐
ment’. The unstated premise may or may not be true. But if it is added, the argument
is made valid.
Any conclusive argument you are likely to come across will either already be valid, or
can be transformed into a valid argument by making some assumption on which it
implicitly relies into an explicit premise.
Not every inconclusive argument should be treated as an enthymeme. In particular,
many strong inductive arguments can be made weaker when they are treated as en‐
thymematic. Consider:

4 The term is from ancient Greek; the concept was given its first philosophical treatment by Aristotle in
his Rhetoric. He gives this example, among others: ‘He is ill, since he has fever’.

In January 2015, it was hot in Adelaide.


In January 2020, it was hot in Adelaide.
So: In January 2025, it will be hot in Adelaide.

This argument is inconclusive. It can be made valid by adding the unstated premise
‘Every January, it is hot in Adelaide’. But that unstated premise is extremely strong – we
do not have sufficient evidence to conclude that it will be hot in Adelaide in January for
eternity. So while the premises we have been given explicitly are good reason to think
Adelaide will continue to have a hot January for the foreseeable future, we do not have
good enough reason to think that every January will be hot. Treating the argument
as an enthymeme makes it valid, but also makes it less persuasive, since the unstated
premise on which it relies is not one many people will share.5

Key Ideas in §2
› An argument is conclusive if, and only if, the truth of the
premises guarantees the truth of the conclusion.
› An argument is valid if, and only if, the form of the premises
and conclusion alone ensures that it is conclusive. Not every
conclusive argument is valid (though they can be made valid by
addition of appropriate premises).
› An argument can be good and persuade us of its conclusion even
if it is not conclusive; and we can fail to be persuaded of the
conclusion of a conclusive argument, since one might come to
reject its premises.

Practice exercises
A. What is a conclusive argument? What, in addition to being conclusive, is required
for an argument to be valid? What, in addition to being valid, is required for an argu‐
ment to be sound?
B. Which of the following arguments are valid? Which are invalid but conclusive?
Which are inconclusive? Comment on any difficulties or points of interest.

1.  1. Socrates is a man.
    2. All men are carrots.
    So: Socrates is a carrot.

2.  1. Abe Lincoln was either born in Illinois or he was once president.
    2. Abe Lincoln was never president.
    So: Abe Lincoln was born in Illinois.
5 If we had claimed that the unstated premise was rather ‘If it has been hot in January in recent repres‐
entative years, it will be hot in January for the near future’, we would have had a valid argument and
one that has a plausible unstated premise – though the unstated premise seems to be plausible only
because the unamended inductive argument was fine to begin with!

3.  1. Abe Lincoln was the president of the United States.
    So: Abe Lincoln was a citizen of the United States.

4.  1. If I pull the trigger, Abe Lincoln will die.
    2. I do not pull the trigger.
    So: Abe Lincoln will not die.

5.  1. Abe Lincoln was either from France or from Luxembourg.
    2. Abe Lincoln was not from Luxembourg.
    So: Abe Lincoln was from France.

6.  1. If the world ends today, then I will not need to get up tomorrow morning.
    2. I will need to get up tomorrow morning.
    So: The world will not end today.

7.  1. Joe is right now 19 years old.
    2. Joe (the same one) is also right now 87 years old.
    So: Bob is now 20 years old.

C. Which of the following are valid arguments?

1.  Lizards are cute.
    So: Lizards are cute, or I’ll eat my hat.

2.  Anyone can pass logic if they work diligently.
    Sarah is diligent and hard working.
    So: Sarah can pass logic.

3.  If you buy a lottery ticket, you’ve wasted your money.
    You have wasted your money.
    So: You have bought a lottery ticket.

4.  We must do something.
    Quacking like a duck is something.
    So: We must quack like a duck.

D. Could there be:

1. A valid argument that has one false premise and one true premise?
2. A valid argument that has only false premises?
3. A valid argument with only false premises and a false conclusion?
4. A sound argument with a false conclusion?
5. An invalid argument that can be made valid by the addition of a new premise?
6. A valid argument that can be made invalid by the addition of a new premise?

In each case: if so, give an example; if not, explain why not.


3
Other Logical Notions

In §2, we introduced the idea of a valid argument. We will want to introduce some
more ideas that are important in logic.

3.1 Truth Values


As we said in §1, arguments consist of premises and a conclusion, where the premises
are supposed to support the conclusion. But if premises are supposed to state reasons,
and conclusions are supposed to state claims, then both of them have to be the sort of
sentence which can be used to say how things are, truly or falsely.
So many kinds of English sentence cannot be used to express premises or conclusions
of arguments. For example:

› Questions, e.g., ‘are you feeling sleepy?’


› Imperatives, e.g., ‘Wake up!’
› Cohortatives, e.g., ‘Let’s go to the beach!’
› Exclamations, e.g., ‘Ouch!’

The common feature of these four kinds of sentence is that they cannot be used to
make assertions: they cannot be true or false. It does not even make sense to ask
whether a question is true (it only makes sense to ask whether the answer to a question
is true).
The general point is that the premises and conclusion of an argument must be sen‐
tences that are capable of having a TRUTH VALUE. The notions of validity and conclus‐
iveness are defined in terms of truth preservation, so these properties depend on the
constituents of arguments being the kinds of things which have a truth value.

A DECLARATIVE SENTENCE (like ‘Bob is singing’) states how things are
or could be; it is a sentence that is either true or false.


The two truth values that concern us are just TRUE and FALSE. We may not know
which truth value a declarative sentence has, but we generally know what kinds of
conditions would need to obtain in order for it to be true or false. (We thus have no
need of a supposed intermediate truth value like ‘unknown’ – that would reflect our
attitudes or beliefs about a claim, whereas the truth value reflects how things really
are independently of our attitudes or beliefs.)
To form part of an argument, a sentence must have the kind of grammatical structure
that permits it to have a truth value. In terms of the notion of structure introduced
in §2.2, we are going to focus on those structures which yield a declarative sentence
when supplied with declarative sentences. For example ‘it is not the case that 𝒜 ’ yields
a declarative sentence whenever we put a declarative sentence in for 𝒜 , and may not
yield anything grammatical at all otherwise:

1. It is not the case that dogs can fly. (Declarative, true)


2. It is not the case that dogs bark. (Declarative, false)
3. It is not the case that in 10,000BC, Kaurna people occupied Kangaroo Island.
(Declarative, unknown but determinate truth value)
4. It is not the case that are you feeling sleepy? (Ungrammatical)

3.2 Consistency
Consider these two sentences:

5. Jane’s only brother is shorter than her.


6. Jane’s only brother is taller than her.

Logic alone cannot tell us which, if either, of these sentences is true. Yet we can say
that if the first sentence 5 is true, then the second sentence 6 must be false. And if
6 is true, then 5 must be false. It is impossible that both sentences are true together.
These sentences are inconsistent with each other. And this motivates the following
definition:

Sentences are JOINTLY CONSISTENT if, and only if, it is possible for
them all to be true together.

Conversely, 5 and 6 are JOINTLY INCONSISTENT.


Consistency is relatively trivial in some cases. If we take at random some unrelated
sentences, they will typically be consistent. For example, ‘Eggs are delicious’, ‘Frogs
hop’, and ‘Barry hosts a podcast’ have nothing much to do with one another. It is un‐
surprising to find out that they can all be true together, because how could unrelated
sentences place any constraints on the possibility of their simultaneous truth? But con‐
sistency can be surprising, when related sentences can be true together even though
it may at first glance seem impossible. One of Einstein’s great contributions was the
discovery that these claims are consistent: ‘Albert correctly measured the spaceship to
be 50m’, and ‘Brunhilde correctly measured the spaceship to be 45m’, so long as Albert
and Brunhilde are in relative motion.
We can ask about the consistency of any number of sentences. For example, consider
the following four sentences:

7. There are at least four giraffes at the wild animal park.


8. There are exactly seven gorillas at the wild animal park.
9. There are not more than two Martians at the wild animal park.
10. Every giraffe at the wild animal park is a Martian.

7 and 10 together entail that there are at least four Martian giraffes at the park. This
conflicts with 9, which implies that there are no more than two Martian giraffes there.
So the sentences 7–10 are jointly inconsistent. They cannot all be true together. (Note
that the sentences 7, 9 and 10 are jointly inconsistent. But if some sentences are already
jointly inconsistent, adding an extra sentence to the mix will not make them consist‐
ent!)
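The inconsistency of 7, 9 and 10 can also be seen by a brute‐force search for a way of making them all true. The following Python sketch (purely illustrative; it represents each claim simply by counting animals, and the counts and names are ours) exploits the fact that, by sentence 10, the giraffes are among the Martians, so there can be no more giraffes than Martians. Larger counts fare no better, since sentence 9 caps the Martians at two:

```python
from itertools import product

# Try every small combination of counts for giraffes and Martians.
# Sentence 10 ('every giraffe is a Martian') means the giraffes are
# among the Martians: number of giraffes <= number of Martians.
consistent = False
for giraffes, martians in product(range(10), repeat=2):
    s7 = giraffes >= 4           # at least four giraffes
    s9 = martians <= 2           # not more than two Martians
    s10 = giraffes <= martians   # every giraffe is a Martian
    if s7 and s9 and s10:
        consistent = True

print(consistent)  # → False: no way to make 7, 9 and 10 all true
```

No combination works: sentences 7, 9 and 10 cannot all be true together.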
There is an interesting connection between consistency and conclusive arguments. A
conclusive argument is one where the premises guarantee the truth of the conclusion.
So it is an argument where if the premises are true, the conclusion must be true. So
the premises cannot be jointly consistent with the claim that the conclusion is false.
Since the argument ‘Dogs and cats are animals, so dogs are animals’ is conclusive, that
shows that the sentences ‘Dogs and cats are animals’ and ‘Dogs are not animals’ are
jointly inconsistent. If an argument is conclusive, the premises of the argument taken
together with the denial of the conclusion will be jointly inconsistent.

Sentences 𝒜 and ℬ are jointly inconsistent iff the arguments ‘𝒜 ∴
not‐ℬ’ and ‘ℬ ∴ not‐𝒜 ’ are both conclusive. This can be generalised to
arbitrary inconsistent collections of sentences.

We just linked consistency to conclusive arguments. There is an analogous notion


linked to valid arguments:

Sentences are JOINTLY FORMALLY CONSISTENT if, and only if, consider‐
ing only their structure, they can all be true together.

Another way to put it: some sentences are formally consistent iff, looking just at their
structure (and not looking at what they are actually about), they might all be true
together.
Just as validity is more stringent than conclusiveness (it is conclusiveness plus some‐
thing more), consistency is more stringent than formal consistency (it is formal con‐
sistency plus substantive consistency). If some sentences are jointly consistent, they
are also jointly formally consistent. But some formally consistent sentences are jointly
inconsistent. Any conclusive but invalid argument will give us an example.

For example, since ‘The sculpture is green all over, so the sculpture is not red all over’
is conclusive, these sentences are jointly formally consistent but not consistent:

11. The sculpture is uniformly green all over.


12. The sculpture is uniformly red all over.

These sentences have no interesting internal structure for our purposes, so it
is easy to see that they are jointly formally consistent. But, holding fixed their actual
meaning (particularly the actual meaning of ‘red’ and ‘green’), we see that the truth of
11 excludes the truth of 12. (Again, more on the notion of structure invoked here in §4.1.)
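The difference between the two notions can be made concrete. Treating sentences purely structurally means treating their smallest sentential parts as interchangeable placeholders. In this illustrative Python sketch (our own device, not the book’s official method), each sentence form is a function from truth values to a truth value, and some collections of forms can be made true together while others cannot:

```python
from itertools import product

def formally_consistent(forms, n):
    """True iff some assignment of truth values to the n placeholder
    sentences makes every form in the list true together."""
    return any(all(f(*vals) for f in forms)
               for vals in product([True, False], repeat=n))

# Sentences 11 and 12, viewed purely structurally, are just two
# unrelated placeholder sentences P and Q; nothing in their form
# rules out both being true together.
print(formally_consistent([lambda p, q: p, lambda p, q: q], 2))  # → True

# By contrast, 'A' and 'It is not the case that A' clash in form alone.
print(formally_consistent([lambda a: a, lambda a: not a], 1))    # → False
```

This is why 11 and 12 are formally consistent despite being jointly inconsistent: their incompatibility lives in the meanings of ‘green’ and ‘red’, which the structural check deliberately ignores.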

3.3 Necessity and Contingency


In assessing whether an argument is conclusive, we care about what would be true if
the premises were true. But some sentences just must be true. Consider these sen‐
tences:

13. It is raining.
14. If it is raining, water is precipitating from the sky.
15. If something is green, it is colourless.
16. Either it is raining here, or it is not.
17. It is both raining here and not raining here.

In order to know if sentence 13 is true, you would need to look outside or check the
weather channel. It might be true; it might be false.
Sentence 14 is different. You do not need to look outside to know that it says something
true. Regardless of what the weather is like, if it is raining, water is precipitating – that
is just what rain is, meteorologically speaking. That is a NECESSARY TRUTH. Here, a
necessary connection in meaning between ‘rain’ and ‘precipitation’ makes what the
sentence says true in every circumstance.
Sentence 15 is a NECESSARY FALSEHOOD or IMPOSSIBILITY. Nothing is, or even could be,
both green and colourless. We don’t need to do any scientific or other investigation to
know that 15 is not and cannot be true.
Sentence 16 is also a necessary truth. Unlike sentence 14, however, it is the structure of
the sentence which makes it necessary. No matter what ‘raining here’ means, ‘Either it
is raining here or it is not raining here’ will be true. Any sentence with the structure
‘Either it is … or it is not …’, where both gaps (‘…’) are filled by the same phrase, must
be true.
Equally, you do not need to check the weather, or even the meaning of words, to de‐
termine whether or not sentence 17 is true. It must be false, simply as a matter of
structure. It might be raining here and not raining across town; it might be raining
now but stop raining even as you finish this sentence; but it is impossible for it to be
both raining and not raining in the same place and at the same time. So, whatever
the world is like, it is not both raining here and not raining here. It is a NECESSARY
FALSEHOOD.

These last two examples, of necessary truths and impossibilities in virtue of structure,
are of particular interest to logicians. We will come back to them in §10.
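Because sentences 16 and 17 owe their truth values to structure alone, both points can be checked by brute force. Writing R for ‘It is raining here’, this small Python sketch (purely illustrative) surveys both possible truth values for R:

```python
# 'Either R or not R' and 'R and not R', checked on every truth value
# that R could take.
rows_16 = [r or (not r) for r in (True, False)]   # sentence 16
rows_17 = [r and (not r) for r in (True, False)]  # sentence 17

print(all(rows_16))  # → True : 16 is true no matter what, a necessary truth
print(any(rows_17))  # → False: 17 is true in no case, a necessary falsehood
```

Nothing about the meaning of ‘raining here’ was used: any sentence could be put in place of R and the results would be the same.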
A sentence which is capable of being true or false, but which says something which is
neither necessarily true nor necessarily false, is CONTINGENT.
If a sentence says something which is sometimes true and sometimes false, it will def‐
initely be contingent. But something might always be true and still be contingent. For
instance, it seems plausible that whenever there have been people, some of them ha‐
bitually arrive late. ‘Some people are habitually late’ is always true. But it is contingent,
it seems: human nature could have been more punctual. If so, the sentence would have
been false. But if something is really necessary, it will always be true, and couldn’t even
possibly be false.1
If some sentences contain amongst themselves a necessary falsehood, those sentences
are jointly inconsistent. At least one of them cannot be true, so they cannot all be true
together. Accordingly, if an argument has a premise that is a necessary falsehood, or its
conclusion is a necessary truth, or both, then the argument is conclusive – its premises
and the denial of its conclusion will be jointly inconsistent.

› An argument with a necessarily true conclusion is conclusive;


› An argument with an impossible premise is conclusive.

This second observation might be surprising. But note that if a premise is impossible,
there is no way to make it true, and hence no way to make it true while making the con‐
clusion false. These are both degenerate cases of conclusiveness, where there need be
no real connection between the premises and conclusion to ground the conclusiveness
of an argument.
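The second, degenerate case can itself be checked mechanically when the impossible premise has the structure ‘𝒜 and it is not the case that 𝒜 ’. In this illustrative Python check, we look for a case where that premise is true while an arbitrary conclusion ℬ is false – and find none, because the premise is true in no case at all:

```python
from itertools import product

# Look for a case where the impossible premise 'A and not A' is true
# while an arbitrary conclusion B is false. None exists, since the
# premise comes out false on every assignment.
counterexamples = [(a, b)
                   for a, b in product([True, False], repeat=2)
                   if (a and not a) and not b]

print(counterexamples)  # → [] : 'A and not A; so B' is degenerately conclusive
```

The emptiness of the search confirms the point: with no way to make the premise true, there is no way to make it true while the conclusion is false.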

Key Ideas in §3
› Arguments are made up of declarative sentences, all of which are
either true or false.
› Some declarative sentences are formally consistent if, and only
if, their structures don’t rule out the possibility that they are all
true together.
› Some declarative sentences can only have one truth value – they
are either necessary or impossible. Others are contingent, hav‐
ing one truth value in some circumstances and the other truth
value in other circumstances.

1 Here’s an interesting example to consider. It seems that, whenever anyone says the sentence ‘I am here
now’, they say something true. That sentence is, whenever it is uttered, truly uttered. But does it say
something necessary or contingent?

Practice exercises
A. Which of the following sentences are capable of being true or false?

1. Earth is the third planet from the Sun.


2. Pluto is the ninth planet from the Sun.
3. Have you been feeding the lions?
4. Socrates said, ‘Be as you wish to seem’.
5. ‘Have you been feeding the lions?’ is a sentence.
6. Always forgive your enemies; nothing annoys them so much.

B. Which of the following are declarative sentences?

1. Answer me!
2. ‘Answer me!’, she demanded.
3. You are required to answer me.
4. Saying nothing is not an answer.
5. If you want answers, ask Alfred.
6. Why won’t you answer me?
7. All that is left to ask is: ‘Who has the answers?’

C. For each of the following: Is it necessarily true, necessarily false, or contingent?

1. Caesar crossed the Rubicon.


2. Someone once crossed the Rubicon.
3. No one has ever crossed the Rubicon.
4. If Caesar crossed the Rubicon, then someone has.
5. Even though Caesar crossed the Rubicon, no one has ever crossed the Rubicon.
6. If anyone has ever crossed the Rubicon, it was Caesar.

D. Look back at the sentences 7–10 in this section (about giraffes, gorillas and Martians
in the wild animal park), and consider each of the following:

1. 8, 9, and 10
2. 7, 9, and 10
3. 7, 8, and 10
4. 7, 8, and 9

Which are jointly consistent? Which are jointly inconsistent?


E. Are these sentences jointly consistent?

1. There are three people leaving the party: Atheer, Brigitte, and James.
2. Brigitte is wearing Atheer’s hat.
3. Each of the people is wearing a hat.
4. No person is wearing their own hat.
5. Atheer is wearing Brigitte’s hat.

F. Could there be:

1. A conclusive argument, the conclusion of which is necessarily false?


2. An inconclusive argument, the conclusion of which is necessarily true?
3. Jointly consistent sentences, one of which is necessarily false?
4. Jointly inconsistent sentences, one of which is necessarily true?

In each case: if so, give an example; if not, explain why not.


G. Some feature of a set 𝐴 is MONOTONIC if any set including all the members of 𝐴 also
has the feature. (So having at least 3 members is monotonic, since adding more mem‐
bers clearly preserves that feature.) Explain why inconsistency of a set of sentences is
monotonic.
Chapter 2

The Language of Sentential Logic
4
First Steps to Symbolisation

4.1 Argument Structure


Consider this argument:

It is raining outside.
If it is raining outside, then Jenny is miserable.
So: Jenny is miserable.

and another argument:

Jenny is an anarcho‐syndicalist.
If Jenny is an anarcho‐syndicalist, then Dipan is an avid reader of Tolstoy.
So: Dipan is an avid reader of Tolstoy.

Both arguments are valid, and there is a straightforward sense in which we can say
that they share a common structure. We might express the structure thus, when we
let letters stand for phrases in the original argument:

A
If A, then C
So: C

This is an excellent argument STRUCTURE. Surely any argument with this structure will
be valid. And this is not the only good argument structure. Consider an argument like:

Jenny is either happy or sad.


Jenny is not happy.
So: Jenny is sad.

Again, this is a valid argument. The structure here is something like:


A or B
not: A
So: B

A superb structure! You will recall that this was the structure we saw in the original
arguments which introduced the idea of validity in §2.5. And here is a final example:

It’s not the case that Jim both studied hard and acted in lots of plays.
Jim studied hard
So: Jim did not act in lots of plays.

This valid argument has a structure which we might represent thus:

not both: A and B


A
So: not: B

The examples illustrate the idea of validity – conclusiveness in virtue of structure. The
validity of the arguments just considered has nothing very much to do with the mean‐
ings of English expressions like ‘Jenny is miserable’, ‘Dipan is an avid reader of Tolstoy’,
or ‘Jim acted in lots of plays’. If it has to do with meanings at all, it is with the meanings
of phrases like ‘and’, ‘or’, ‘not,’ and ‘if …, then …’.
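All three structures displayed in this section can be verified by the same mechanical survey of truth values used informally above. The Python sketch below is purely illustrative (the function and variable names are ours); it encodes each structure as a pair of premises and a conclusion, reading ‘if A, then C’ as ruled out only when A is true and C is false – a simplifying reading of the conditional that we return to later:

```python
from itertools import product

def valid_structure(structure):
    """True iff no assignment of truth values to the two schematic
    letters makes every premise true while the conclusion is false."""
    for x, y in product([True, False], repeat=2):
        premises, conclusion = structure(x, y)
        if all(premises) and not conclusion:
            return False
    return True

# 'A; if A, then C; so C'
modus_ponens = lambda a, c: ([a, (not a) or c], c)
# 'A or B; not A; so B'
disjunctive_syllogism = lambda a, b: ([a or b, not a], b)
# 'not both A and B; A; so not B'
not_both = lambda a, b: ([not (a and b), a], not b)

for s in (modus_ponens, disjunctive_syllogism, not_both):
    print(valid_structure(s))  # → True each time
```

Each structure passes: no way of filling in the schematic letters with truth values gives true premises and a false conclusion.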

4.2 Sentence Trees and Canonical Clauses


Since arguments are made of sentences, the logical structure of an argument will be
related to the grammatical structure of the sentences within it. We’ve already seen in
§3.1 that arguments are made up of declarative sentences. One standard way of analys‐
ing the grammatical structure of a declarative sentence is to break it into constituent
phrases. Such approaches are known as PHRASE STRUCTURE GRAMMARS. These struc‐
tures are usefully depicted in a hierarchical SYNTACTIC TREE. Consider the example:

18. Tariq is a man.

This sentence (phrase of type S) can be divided into two main parts: the NOUN PHRASE
‘Tariq’ (type NP), and the VERB PHRASE ‘is a man’ (type VP). In this example, the verb
phrase itself divides into the verb ‘is’ and the DETERMINER PHRASE ‘a man’ (type DP).
We will return to the internal structure of sentences in §15. This phrase structure is
depicted in the tree in Figure 4.1.
Let’s look at a more complicated example. Consider

19. Alice will ace the test, or Bob will.



NP VP

Tariq is DP

a man

Figure 4.1: A phrase structure tree for Example 18.

[Tree diagram omitted.]

Figure 4.2: A phrase structure tree for Example 19.

First we note that this sentence is elliptical, the verb phrase ‘ace the test’ being omitted
after ‘Bob will’ as it is understood to be supplied by the first clause of the sentence.1
The full tree, with the elided phrase supplied (marked by having the VP label in a box
at the lower right), is depicted in Figure 4.2. In this example, the whole sentence is
a compound of two clauses which are sentences in their own right, or SUBSENTENCES,
connected by ‘or’.
When analysing the structure of compound sentences, a useful notion is that of a CA‐
NONICAL CLAUSE.2 These are the simplest units of English sentences that can constitute
a sentence by themselves. Here are some examples:

1 A closely related example is discussed by Paul Elbourne (2011) Meaning, Oxford University Press, pp.
74–6. He argues that this kind of elision is crucial evidence for phrase structure grammars, for only
phrases can be omitted in this way: witness the ungrammatical ‘Alice will ace the test and Bob will
the’, where arbitrary words are omitted that do not form a grammatical phrase. This is evidence that
English syntax is sensitive to the phrasal structure of sentences, and does not treat them as merely a
string of words.
2 A fuller treatment of canonical clauses can be found in Rodney Huddleston and Geoffrey Pullum (2005)
A Student’s Introduction to English Grammar, Cambridge University Press, pp. 24–5.
28 THE LANGUAGE OF SENTENTIAL LOGIC

20. They knew the victim.


21. She has read your article.
22. Jenny is happy.

A canonical clause has internal structure too, but its parts are not themselves sentences.
It is composed of a GRAMMATICAL SUBJECT, typically but not always a noun phrase
(‘Jenny’, ‘She’), and a PREDICATE, always a verb phrase (‘knew the victim’, ‘is happy’).
The following, in contrast with the previous examples, are noncanonical clauses:

23. They did not know the victim.


24. She has read your article and she vehemently disagrees with it.
25. She says that Jenny is happy.

These give us some characteristics of canonical clauses:

› They are positive, as in 20, rather than negative, as indicated by the underlined
‘not’ in 23.

› Canonical clauses are simple, and not coordinated with any other clause, as in
21. In 24, the COORDINATOR ‘and’ links two clauses that would be canonical on
their own into a longer COMPOUND sentence.

› Canonical clauses are main clauses, as in 22. The underlined clause in 25 is a subordinate clause, part of a more complex clause.

4.3 Identifying Argument Structure


Our topic is not English grammar. But the idea of a canonical clause is illuminating
for the logic we are examining in this part of the course. Sentential logic is, more or
less, the logic that helps us understand and analyse arguments whose conclusiveness
depends on their constituent canonical clauses and how they are joined together by
SENTENCE CONNECTIVES. The sentence connectives we focus on are a grammatically
diverse group, including

Coordinators including ‘and’, ‘or’, ‘if and only if ’ (also ‘but’)

Adjunct heads including ‘if … then …’, ‘only if’ (also ‘unless’, ‘because’, ‘since’, ‘hence’ –
though we treat these last as signalling premises and conclusions of arguments)

Negatives including ‘not’, ‘‐n’t’, ‘it is not the case that’ (‘never’).

And there are others that we won’t deal with (‘always’, ‘might’).

How can we identify the logical structure of an argument? The logical structure of
an argument is the form one arrives at by ‘abstracting away’ the details of the words
in an argument, except for a special set of words, the STRUCTURAL WORDS. This con‐
nects with what we have just been saying, because in many of the examples from §4.1,
the arguments can be analysed as involving canonical clauses which are linked by the
structural words ‘and’, ‘or’, ‘not’, ‘if … then …’ and ‘… if and only if …’. So our efforts
to identify the logical structure of arguments often coincide with linguists’ efforts to
identify the canonical clauses that constitute the sentences in those arguments – or,
at least, constitute some plausible paraphrase or reformulation of those sentences.
We can see this if we return to this earlier argument:

Jenny is either happy or sad.


Jenny is not happy.
So: Jenny is sad.

We paraphrase the compound sentence ‘Jenny is either happy or sad’ into a compound
of two canonical clauses, joined by the coordinator ‘or’:

› Jenny is happy or Jenny is sad.

And we paraphrase the noncanonical clause ‘Jenny is not happy’ as the negative clause

› It is not the case that: Jenny is happy.

Identifying the coordinator ‘or’ and the negative clause ‘it is not the case that’ as struc‐
tural words, and replacing the canonical clause ‘Jenny is happy’ with the placeholder
letter ‘A’, and the canonical clause ‘Jenny is sad’ with the placeholder letter ‘B’, we arrive
again at the argument structure we previously identified:

A or B
not: A
So: B

In this and the other examples above, we removed all details of the arguments except
for the special words ‘and’, ‘or’, ‘not’ and ‘if …, then …’, and replaced the other clauses
in the argument (or its near paraphrase) by placeholder letters ‘A’, ‘B’, etc. (We were
careful to replace the same clause by the same placeholder every time it appeared.)
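Looking ahead a little to the semantics we will develop later, this structure can even be checked mechanically. The sketch below (in Python; the helper ‘valid’ is our own illustrative device, not part of Sentential) tries every assignment of truth values to the placeholders ‘A’ and ‘B’, and reports the structure valid because no assignment makes both premises true while the conclusion is false:

```python
from itertools import product

def valid(premises, conclusion, letters):
    """True if no truth-value assignment makes every premise true
    and the conclusion false."""
    for values in product([True, False], repeat=len(letters)):
        row = dict(zip(letters, values))
        if all(p(row) for p in premises) and not conclusion(row):
            return False  # found a counterexample assignment
    return True

# The structure 'A or B; not: A; So: B':
premises = [lambda v: v["A"] or v["B"],   # A or B
            lambda v: not v["A"]]         # not: A
conclusion = lambda v: v["B"]             # So: B

print(valid(premises, conclusion, ["A", "B"]))  # True
```

Any argument sharing this structure is thereby shown to be conclusive, whatever English clauses the placeholders stand in for.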
Another example:

26. Butter isn’t healthy, but it is delicious.

We begin by paraphrasing. We note that the pronoun ‘it’ is actually referring to the
previously mentioned subject ‘butter’, and we move the negative ‘isn’t’ into a position
that reveals the canonical clause ‘butter is healthy’. We also note that the effect of ‘but’ is roughly the same as our structural word ‘and’: though ‘but’ suggests a contrast that ‘and’ does not, it expresses the roughly equivalent idea that each of the claims it connects is true. With these observations in hand, we obtain this more stilted paraphrase:

27. It is not the case that butter is healthy and butter is delicious.

[S [S Not [S butter is healthy]] and [S butter is delicious]]

Figure 4.3: Paraphrase of Example 26 showing its subsentential structure, as in Example 27 (rendered in labelled‐bracket notation).

[S Not [S [S A] and [S B]]]        [S [S Not [S A]] and [S B]]

Figure 4.4: Different sentential structures for ‘Not A and B’, shown in schematic labelled brackets.

This paraphrase has the syntactic tree in Figure 4.3. Note that we have not further
broken down the canonical clauses into subject‐predicate form, because our structural
words, the sentence connectives, do not occur within any canonical clause and hence
are not ‘visible’ to our analysis at the level of sentences. Finally, we replace the canon‐
ical clauses with placeholder sentences, and we reach the structure ‘Not A and B’.
The schematic paraphrase ‘Not A and B’ is potentially ambiguous, because it is not
obvious whether the ‘not’ applies just to the A‐clause, or to the whole ‘A and B’ clause.
We could introduce parentheses to eliminate this ambiguity, distinguishing ‘Not (A
and B)’ from ‘Not A and B’. This is the approach we will take in our formal language
Sentential (see page 40). Alternatively, we can use the hierarchical nature of syntactic trees to see that there are two different structures possible for ‘Not A and B’, as depicted schematically in Figure 4.4.
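The two structures are not mere notational variants: they can disagree in truth value. A quick check in Python (whose ‘not’ and ‘and’ are close enough to our structural words for this purpose) shows the two readings coming apart when ‘A’ is true and ‘B’ is false:

```python
A, B = True, False
wide_scope = not (A and B)    # 'Not (A and B)': negation covers the conjunction
narrow_scope = (not A) and B  # '(Not A) and B': negation covers only A
print(wide_scope, narrow_scope)  # True False
```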
Sharp‐eyed readers will have noticed that our list of special words doesn’t precisely line
up with the canonical clauses we introduced in §4.2. Consider the noncanonical clause
‘She said that Jenny is sad’, in which the canonical clause ‘Jenny is sad’ is subordinate
within an indirect speech report. Because ‘She said that’ is not on our special list of
words, we cannot analyse this sentence as composed from a canonical clause and some
structural expression. So for the purposes of sentential logic, we will treat ‘She said
that Jenny is sad’ as if it were canonical, even though it is not from the point of view of
English grammar. From the point of view of our structural words, this sentence doesn’t
have any further structure that we can identify. Such QUASI‐CANONICAL CLAUSES –
clauses that do not feature any of our list of structural words – will also be replaced by
placeholder letters in our analysis.
This raises another question. What makes the words on our list special? Logicians
tend to take a pragmatic attitude to this question. They say: nothing! If you had chosen
different words, you would have come up with a different structure. In terms of the
syntactic trees we drew above, there is a hierarchy of levels when breaking down the
top‐level sentence into constituent phrases, and different choices of structural words
correspond to different choices concerning the level at which to stop our analysis.
Linguists take a very expansive view of structural words, so that the class of canonical
clauses is rather small, and there is a lot of structure to be identified in natural language.
For example, the presence of modal auxiliaries (like ‘will’ or ‘must’, as in ‘James must
eat’), verb inflections other than the present tense (e.g., ‘they took snuff’), and subor‐
dination (as in ‘Etta knew that peaches were abundant’), in addition to items on our
list of structural words, suffices to make a clause noncanonical. And there are logics
which do take these to be structural words: modal logics, tense logics, and epistemic
logics treat these as structural words and provide recipes for the structural analysis of
arguments that abstract away features other than these words. But there is no funda‐
mental principle that divides words once and for all into structural words and other
words. So logicians are more interested in quasi‐canonical clauses, given some fixed
list of structural words, when using logic to model or represent natural languages.
We will start simply, and focus on the list of truth‐functional sentence connectives,
principally ‘and’, ‘or’, ‘not’, ‘if … then …’ and ‘… if and only if …’. There are practical
reasons why these words are useful ones to focus on initially, which I will now discuss.

4.4 Formal Languages


In logic, a FORMAL LANGUAGE is a language which is particularly suited to representing
the structure or form of its sentences. Such languages have a precisely defined SYNTAX,
that allows us to specify without difficulty the grammatical sentences of the language,
and they also have a precise SEMANTICS which both tells us what meanings are in those
languages, and assigns meanings to the sentences and their constituents.
Formal languages are not the same as natural languages, like English. The languages
we will be looking at are much more limited in their expressive power than English, and
not subject to ambiguity or imprecision in the way that English is. But a formal lan‐
guage can be very useful in representing or modelling features of natural language. In
particular, they are very good at capturing the structure of natural language sentences
and arguments, because we can design them specifically to represent a particular level
of structural analysis. The semantics we give for such languages reflects that this is
their primary use, as you will see. The languages assign fixed meanings only to those
expressions which can be used to represent structure, but (unlike English) not every
expression is treated as structural.

In this chapter we will begin developing a formal language which will allow us to rep‐
resent many sentences of English, and arguments involving those sentences. The lan‐
guage will have a very small basic vocabulary, since we are designing it to represent the
sentential structure of the examples with which we began. So it will have expressions
corresponding to the English sentence connectives ‘and’, ‘or’, ‘not’ and ‘if …, then …’
and ‘if and only if’. The language represents these English connectives by its own class
of dedicated sentence connectives which allow simple sentences of the language to be
combined into more complex sentences. These words are a good class to focus on in
developing a formal language, because languages with expressions analogous to these
feature a good balance between being useful and being well behaved and easy to study.
The language we will develop is called Sentential, and the study of that language and
its features is called sentential logic. (It also has other names: see Appendix A.)
Once we have our formal language Sentential, we will be able to go on (in chapter 3)
to show that it has a very nice feature: it is able to represent the structure of a large
class of valid natural language arguments. So while logic can’t help us with every good
argument, and it can’t even help us with every conclusive argument, it can help us to
understand arguments that are valid, or conclusive due to their structure, when that
structure involves ‘and’, ‘or’, ‘not’, ‘if’ and various other expressions.
We will see later in this book that one could take additional expressions in natural
language to be involved in determining the structure of a sentence or argument. The
language we develop in chapter 4 is one that is suited to represent what we get when
we take names like ‘Juanita’, predicates like ‘is a vixen’ or ‘is a fox’, and quantifier expres‐
sions like ‘every’ and ‘some’ to be structural – see also §16. And as we have mentioned,
there are still other formal logical languages, unfortunately beyond the scope of this
book, which take still other words or grammatical categories as structural: logics which
take modal words like ‘necessarily’ or ‘possibly’ as structural, and logics which take tem‐
poral adverbs like ‘always’ and ‘now’ as structural.3 What we notice here is that using
logic to model natural language arguments inevitably involves a compromise. If we
have lots of structural words, we can show many conclusive arguments to be valid, but
our logic is complex and involves many different sentence connectives. If we take rel‐
atively few words as structural, we can represent fewer conclusive arguments as valid,
but our formal language is a lot easier to work with.
For now, put those tantalising observations about richer logics out of your mind! We’ll focus to begin with on the language Sentential, and on how we can
use it to model arguments in English featuring some particular sentence connectives
as structural words, including those we’ve just highlighted above. This collection of
structural words has struck many logicians over the years as providing a good balance
between simplicity and strength, ideal for an introduction to logic.

3 Some arguments will be valid in richer logical frameworks, but not according to the more austere frame‐
work for logical structure provided by Sentential. For example ‘Sylvester is always active; so Sylvester
is active now’ is conclusive. From the point of view of temporal logic, the argument has this form ‘Al‐
ways A; so now A’, which is valid in normal temporal logics. But it is not valid according to Sentential,
because the premise ‘Sylvester is always active’ has no internal structural words that occur on the list
of sentence connectives that Sentential represents.

4.5 Atomic Sentences


In §4.1, we started isolating the form of an argument by replacing quasi‐canonical
clauses (those including none of our list of structural words) within the argument with
individual placeholder letters. Thus in the first example of this section, ‘it is raining
outside’ is a quasi‐canonical clause of ‘If it is raining outside, then Jenny is miserable’,
and we replaced this clause with ‘A’.
Our artificial language, Sentential, pursues this idea absolutely ruthlessly. We start with
some ATOMIC SENTENCES, which are analogues of the canonical clauses of natural lan‐
guages. These will be the basic building blocks out of which more complex sentences
are built. Unlike natural languages, these simplest sentences of Sentential have no in‐
teresting internal structure at all. The atomic sentences of Sentential will consist of
upper case italic letters, with or without numerical subscripts. There are only twenty‐
six letters of the alphabet, but there is no limit to the number of atomic sentences that
we might want to consider, so we add numerical subscripts to each letter to extend our
library of atomic sentences indefinitely. This procedure does give us atomic sentences
with some syntax, but it is irrelevant to meaning: there is no connection between ‘𝐴8 ’
and ‘𝐴67 ’ just because they both involve ‘𝐴’. Likewise, there is no connection between
‘𝐷17 ’ and ‘𝐺17 ’ even though they share the same numerical subscript. Upper case italic
letters, with or without subscripts, are all the atomic sentences there are. So, here are
five different atomic sentences of Sentential:

𝐴, 𝑃, 𝑃1 , 𝑃2 , 𝐴234 .
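Because the syntax of atomic sentences is so tightly circumscribed, it can be captured in a single pattern. Here is a sketch using Python’s regular expressions, writing subscripts as trailing digits (an ASCII stand‐in for the subscript notation used above):

```python
import re

# An atomic sentence: one upper-case letter, optionally followed by a
# numerical subscript (rendered here as trailing digits).
ATOMIC = re.compile(r"[A-Z][0-9]*")

for candidate in ["A", "P", "P1", "A234", "ab", "A B"]:
    print(candidate, bool(ATOMIC.fullmatch(candidate)))
```

The first four candidates match; the last two do not, since only upper‐case letters with optional numerical subscripts count.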

Atomic sentences are the basic building blocks of Sentential. We introduce them in
order to represent, or symbolise, certain English sentences. To do this, we provide a
SYMBOLISATION KEY, such as the following, which assigns a temporary linkage between
some atomic sentences of Sentential, and some quasi‐canonical sentences of the natural
language we are representing using Sentential, such as English:

𝐴: It is raining outside.
𝐶 : Jenny is miserable.

In doing this, we are not fixing this symbolisation once and for all. We are just saying
that, for the time being, we shall use the atomic sentence of Sentential, ‘𝐴’, to symbolise
the English sentence ‘It is raining outside’, and the atomic sentence of Sentential, ‘𝐶 ’, to
symbolise the English sentence ‘Jenny is miserable’. Later, when we are dealing with
different sentences or different arguments, we can provide a new symbolisation key;
as it might be:

𝐴: Jenny is an anarcho‐syndicalist.
𝐶 : Dipan is an avid reader of Tolstoy.

Given this flexibility, it isn’t true that a sentence of Sentential means the same thing
as any particular natural language sentence – see also §8.4. The question of what any
given atomic sentence of Sentential means is actually a bit hard to make sense of; we
will return to it in §9.

But it is important to understand that whatever internal structure an English sentence


might have is lost when it is symbolised by an atomic sentence of Sentential. From the
point of view of Sentential, an atomic sentence is just a letter. It can be used to build
more complex sentences, but it cannot be taken apart. So we cannot use Sentential to
symbolise arguments whose validity depends on structure within sentences that are
symbolised as atomic sentences of Sentential, such as arguments whose conclusiveness
turns on different structural words than those Sentential seeks to capture, or arguments
which depend on the internal subject‐predicate structure of canonical clauses.

Key Ideas in §4
› Arguments are made of sentences, and to understand the struc‐
ture of an argument we must understand the syntactic struc‐
ture of those sentences, including analysing those sentences into
their simplest constituents, roughly, how a compound sentence
can be constructed by combining canonical clauses.
› The formal language Sentential is designed to model English ar‐
guments involving compound sentences structured by the sen‐
tence connectives ‘and’, ‘or’, ‘not’, and ‘if’, and related expressions.
› We symbolise these arguments by abstracting away any
other aspects of English sentences, using structureless atomic
sentences to represent clauses that do not include these special
expressions.
› Many English words and phrases can be treated as structural,
and different formal languages can be motivated by other
choices of structural expressions. Sentential represents a particu‐
larly well behaved aspect of English sentence structure.

Practice exercises
A. True or false: if you are going to represent an English sentence by an atomic sen‐
tence in Sentential, the English sentence cannot have a sentence connective (like ‘and’)
occurring within it.
B. Which one or more of the following are not atomic sentences of Sentential?

1. 𝐴′ ;
2. 𝑊0 ;
3. 𝑄5902222 ;
4. 77𝑃 ;
5. 𝑉9.

C. This argument is invalid in English: ‘There is water in the glass and it is cold; there‐
fore it is cold’. Comment on why it is invalid, and what potential pitfalls arise when
considering how to symbolise this argument into Sentential.
5
Connectives

In the previous section, we considered symbolising relatively simple English sentences


with atomic sentences of Sentential, when they did not include any of our list of struc‐
tural words, the English sentence connectives ‘and’, ‘or’, ‘not’, and so forth. This leaves
us wanting to deal with sentences including these structural words. In Sentential, we
shall make use of logical connectives to build complex sentences from atomic compon‐
ents. There are five logical connectives in Sentential, inspired by the English structural
words we have encountered so far. This table summarises them, and they are explained
throughout this section.

Symbol What it is called Rough English analogue


¬ negation ‘It is not the case that …’
∧ conjunction ‘Both … and …’
∨ disjunction ‘Either … or …’
→ conditional ‘If … then …’
↔ biconditional ‘… if and only if …’

As the table suggests, we will introduce these connectives by the English connectives
they parallel. It is important to bear in mind that they are perfectly legitimate stan‐
dalone expressions of Sentential, with a meaning independent of the meaning of their
English analogues. The nature of that meaning we will see in §9. Sentential is not
a strange way of writing English, but a free‐standing formal language, albeit one de‐
signed to represent some aspects of natural language.
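Before examining each connective in detail, it may help to see in outline how connectives let complex sentences be built from atomic components. The sketch below (in Python; the class names and the ‘show’ helper are our own invention, not the official grammar of Sentential, which comes later) treats atomic sentences as bare letters and each connective as a way of combining sentences:

```python
from dataclasses import dataclass

@dataclass
class Not:
    sub: object       # the negated sentence

@dataclass
class BinOp:
    left: object      # the two sentences a binary connective combines
    right: object

class And(BinOp): symbol = "∧"
class Or(BinOp):  symbol = "∨"
class If(BinOp):  symbol = "→"
class Iff(BinOp): symbol = "↔"

def show(s):
    """Write a sentence out using the connective symbols of Sentential."""
    if isinstance(s, str):
        return s
    if isinstance(s, Not):
        return "¬" + show(s.sub)
    return "(" + show(s.left) + " " + s.symbol + " " + show(s.right) + ")"

print(show(And("A", Not("B"))))      # (A ∧ ¬B)
print(show(Iff("A", Or("B", "C"))))  # (A ↔ (B ∨ C))
```

Note that any sentence, not only an atomic one, can serve as a component of a larger sentence, which is exactly how Sentential’s connectives will behave.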

5.1 Modelling and Paraphrase


We saw in §4.3 the grammatical heterogeneity of the sentence connectives. Even
within the same class of connective we don’t have grammatical uniformity. The dif‐
ferent sentence connectives that express negation – e.g., ‘not’, ‘‐n’t’, and ‘it is not the
case that’ – can occur in quite different places in a grammatical sentence. Consider
these examples:


1. Vassiliki doesn’t like ballet;


2. Vassiliki dislikes ballet;
3. It is not the case that Vassiliki likes ballet.

To the grammarian, these differences are of great significance. To the logician, they
are not. When the logician considers these examples, what matters is that these are
all negated sentences, not the particular way that the idea of negation happens to be
implemented syntactically in English. All these sentences seem to be expressing an
idea that might roughly be expressed as ‘It is not the case that: Vassiliki likes ballet’.
These sentences are all acceptable PARAPHRASES of each other, because they all express
more or less the same content. They need not be perfectly synonymous to be accept‐
able paraphrases, because sometimes the small divergences in meaning do not matter
for our project.
Logic aims to represent relations within and between sentences that are significant for
arguments. Since there are important grammatical differences that are argumentat‐
ively insignificant, logic will sometimes overlook linguistic accuracy in order to capture
the ‘spirit’ of a sentence. That spirit is what is present in all acceptable paraphrases of
that sentence. If some group of sentences can all play more or less the same role in an
argument, the logician will aim to capture just what is essential to the argumentative
role. In the following argument, it doesn’t matter which of the previous examples we
use for the second premise: the argument remains good whichever we include.

1. Vassiliki either likes ballet or soccer;


2. Vassiliki doesn’t like ballet;
So: Vassiliki likes soccer.

Partly this tolerant attitude arises because logicians are interested principally in how
to represent natural language arguments in a formal language. The formal language
is already artificial and limited compared to the expressive power of natural language.
Sentential only has five different sentence connectives, compared to the rich variety
seen in English. From a logical point of view we are already sacrificing nuances of
meaning when we represent an argument in a formal language. So it really doesn’t
matter which way of paraphrasing the original argument we choose, as long as it pre‐
serves the gist of the argument, because we’ll already be modelling that argument in
a way that cannot be faithful to its exact meaning. So when symbolising natural language arguments into Sentential, it is generally appropriate to find a paraphrase that is
good enough, but one that most explicitly displays the sentence connectives that you
take to be involved in the logical structure of the argument. So while the sentence ‘It
is not the case that Vassiliki likes ballet’ is much more stilted sounding than ‘Vassiliki
doesn’t like ballet’, it has the virtue of displaying the logical structure of the sentence
more clearly. It is obvious, in the paraphrase, that we have a negation operating on the
canonical clause ‘Vassiliki likes ballet’, and that makes symbolisation straightforward.
As will be evident throughout this section, symbolising an argument in Sentential is
not like translating it into another natural language. Translation aims to preserve the
meaning of your original argument in all its nuance. Symbolisation is more like mod‐
elling, where you choose to include certain important features and leave out other
features that are not important for you. A physicist, for example, might model a sys‐
tem of moving bodies by treating all of them as point particles. Of course the bodies
are in fact extended in space, but for the purposes for which the model is designed
it may be irrelevant to include that detail, if all that the physicist is trying to do is to
predict the overall trajectories of those bodies. Likewise, in logic if our project is to
analyse whether an argument is conclusive or not, we may not need to include every
detail of meaning in order to complete that project. A good model needn’t be perfectly
accurate, and in fact, highly accurate models can be very poor because their additional
complexity makes them too unwieldy to work with. We will happily settle for models
which are good enough to represent the important details. In the present context, that
means we want to paraphrase arguments in a way that makes their logical structure
explicit, and then to symbolise them using the closest available Sentential connectives,
even if they are not perfectly synonymous with the English sentence connectives they
contain. We’ll return to this issue further in §8.4.
Because a symbolisation isn’t exactly alike in meaning to the original sentence, there
is some room for the exercise of judgment. You will need sometimes to make choices
between different paraphrases that seem to capture the gist of the sentence about
equally well. You may have to make a judgment about what the sentence is ‘really’
trying to say. Such choices cannot be reduced to a mechanical algorithm. Though
you can ask yourself some leading questions: ‘which way was this ambiguous sentence
intended?’ or ‘are these two options plausibly intended to be understood as mutually
exclusive?’.

5.2 Negation
Consider how we might symbolise these sentences:

28. Mary is in Barcelona.


29. It is not the case that Mary is in Barcelona.
30. Mary is not in Barcelona.

In order to symbolise sentence 28, we will need an atomic sentence. We might offer
this symbolisation key:

𝐵: Mary is in Barcelona.

Since sentence 29 is obviously related to the sentence 28, we shall not want to symbolise
it with a completely different sentence. Roughly, sentence 29 means something like ‘It
is not the case that B’. In order to symbolise this, we need a symbol for negation. We
will use ‘¬’. Now we can symbolise sentence 29 with ‘¬𝐵’.
Sentence 30 also contains the word ‘not’. And it is obviously equivalent to sentence 29.
As such, we can also symbolise it with ‘¬𝐵’.
It is much more common in English to see negation appear as it does in 30, with a
‘not’ found somewhere within the sentence, than the more formal form in 29. The
form in 29 has the benefit that the sentence ‘Mary is in Barcelona’ (sentence 28) itself appears as a subsentence of 29 – so we can see how negation forms a new sentence by literally adding words to the old sentence. This is not so direct in 30. But since 29 and 30 are
near enough synonymous in meaning, we can treat them both as negations of 28. (This
issue is a bit tricky – see the discussion of examples 34 and 35.)

If a sentence can be paraphrased by a sentence beginning ‘It is not


the case that …’, then it may be symbolised by ¬𝒜 , where 𝒜 is the
Sentential sentence symbolising the sentence occurring at ‘…’.1

It will help to offer a few more examples:

31. The widget can be replaced.


32. The widget is irreplaceable.
33. The widget is not irreplaceable.

Let us use the following representation key:

𝑅: The widget is replaceable.

Sentence 31 can now be symbolised by ‘𝑅’. Moving on to sentence 32: saying the widget
is irreplaceable means that it is not the case that the widget is replaceable. So even
though sentence 32 does not contain the word ‘not’, we shall symbolise it as follows:
‘¬𝑅’. This is close enough for our purposes.
Sentence 33 can be paraphrased as ‘It is not the case that the widget is irreplaceable.’
Which can again be paraphrased as ‘It is not the case that it is not the case that the
widget is replaceable’. So we might symbolise this English sentence with the Sentential
sentence ‘¬¬𝑅’. Any sentence of Sentential can be negated, not only atomic sentences,
so ‘¬¬𝑅’ is perfectly acceptable as a sentence of Sentential. You might have the sense
that these two negations should ‘cancel out’, and in Sentential that sense will turn out
to be vindicated.
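That sense can be previewed with a quick computation (in Python, whose ‘not’ behaves like ‘¬’ on truth values): whichever truth value ‘𝑅’ receives, negating twice returns the original value.

```python
# Whatever truth value R has, double negation hands it back unchanged:
for R in (True, False):
    print(R, not (not R))  # prints: True True, then False False
```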
But some care is needed when handling negations. Consider:

34. Jane is happy.


35. Jane is unhappy.

If we let the Sentential‐sentence ‘𝐻’ symbolise ‘Jane is happy’, then we can symbolise
sentence 34 as ‘𝐻’. However, it would be a mistake in general to symbolise sentence 35
with ‘¬𝐻’. If Jane is unhappy, then she is not happy; but sentence 35 does not mean the
same thing as ‘It is not the case that Jane is happy’. Jane might be neither happy nor
unhappy; she might be in a state of blank indifference. In order to symbolise sentence
35, then, we will typically want to introduce a new atomic sentence of Sentential. Nev‐
ertheless, there may be limited circumstances where it doesn’t matter which of these
nonsynonymous ways to deny ‘Jane is happy’ I adopt.

Sometimes a sentence will include a negative for literary effect, even though what is
expressed isn’t negative. Consider

36. I don’t like cricket; I love it!

The first clause seems negative, but the speaker confounds the hearer’s expectation
by going on to assert something even more positive than mere liking. Here the first
clause really means something like ‘I don’t merely like cricket’ – what is denied is that
the speaker’s affection for cricket is limited to mere liking.2

5.3 Conjunction
Consider these sentences:

37. Adam is athletic.


38. Barbara is athletic.
39. Adam is athletic, and Barbara is also athletic.

We will need separate atomic sentences of Sentential to symbolise sentences 37 and 38;
perhaps

𝐴: Adam is athletic.
𝐵: Barbara is athletic.

Sentence 37 can now be symbolised as ‘𝐴’, and sentence 38 can be symbolised as ‘𝐵’.
Sentence 39 roughly says ‘A and B’. We need another symbol, to deal with ‘and’. We will
use ‘∧’. Thus we will symbolise it as ‘(𝐴 ∧ 𝐵)’. This connective is called CONJUNCTION.
We also say that ‘𝐴’ and ‘𝐵’ are the two CONJUNCTS of the conjunction ‘(𝐴 ∧ 𝐵)’.
Notice that we make no attempt to symbolise the word ‘also’ in sentence 39. Words
like ‘both’ and ‘also’ function to draw our attention to the fact that two things are being
conjoined. Maybe they affect the emphasis of a sentence. But we will not (and cannot)
symbolise such things in Sentential.
Some more examples will bring out this point:

40. Barbara is athletic and energetic.


41. Barbara and Adam are both athletic.
42. Although Barbara is energetic, she is not athletic.
43. Adam is athletic, but Barbara is more athletic than him.

Sentence 40 is obviously a conjunction. The sentence says two things (about Barbara).
In English, it is permissible to refer to Barbara only once. It might be tempting to

2 Technically, the negation here targets not the content of the clause, but the typical expectation that a
cooperative speaker who says ‘I like X’ is communicating that liking is the highest point on the scale of
affection their attitude to X reaches. These ‘scalar implicatures’ are discussed at length in Larry Horn
(1989) A Natural History of Negation, University of Chicago Press, p. 382. We return to this ‘metalin‐
guistic’ negation in §19.4 below.
40 THE LANGUAGE OF SENTENTIAL LOGIC

think that we need to symbolise sentence 40 with something along the lines of ‘𝐵 and
energetic’. This would be a mistake. Once we symbolise part of a sentence as ‘𝐵’, any
further structure is lost. ‘𝐵’ is an atomic sentence of Sentential. Conversely, ‘energetic’
is not an English sentence at all. What we are aiming for is something like ‘𝐵 and
Barbara is energetic’. So we need to add another sentence letter to the symbolisation
key. Let ‘𝐸 ’ symbolise ‘Barbara is energetic’. Now the entire sentence can be symbolised
as ‘(𝐵 ∧ 𝐸)’.
Sentence 41 says one thing about two different subjects. It says of both Barbara and
Adam that they are athletic, and in English we use the word ‘athletic’ only once. The
sentence can be paraphrased as ‘Barbara is athletic, and Adam is athletic’. We can
symbolise this in Sentential as ‘(𝐵 ∧ 𝐴)’, using the same symbolisation key that we have
been using.
Sentence 42 is slightly more complicated. The word ‘although’ sets up a contrast
between the first part of the sentence and the second part. Nevertheless, the sentence
tells us both that Barbara is energetic and that she is not athletic. In order to make
each of the conjuncts an atomic sentence, we need to replace ‘she’ with ‘Barbara’. So we
can paraphrase sentence 42 as, ‘Both Barbara is energetic, and Barbara is not athletic’.
The second conjunct contains a negation, so we paraphrase further: ‘Both Barbara is
energetic and it is not the case that Barbara is athletic’. And now we can symbolise this
with the Sentential sentence ‘(𝐸 ∧ ¬𝐵)’. Note that we have lost all sorts of nuance in this
symbolisation. There is a distinct difference in tone between sentence 42 and ‘Both
Barbara is energetic and it is not the case that Barbara is athletic’. Sentential does not
(and cannot) preserve these nuances.
Sentence 43 raises similar issues. There is a contrastive structure. The speaker who
asserts it means something by that ‘but’ — something to the effect of there being a
contrast between those two features. But their brief utterance doesn’t tell exactly which
contrast is intended. These contrasts are not something that Sentential is designed to
deal with. So we can paraphrase the sentence as ‘Both Adam is athletic, and Barbara is
more athletic than Adam’. (Notice that we once again replace the pronoun ‘him’ with
‘Adam’.) How should we deal with the second conjunct? We already have the sentence
letter ‘𝐴’, which is being used to symbolise ‘Adam is athletic’, and the sentence ‘𝐵’ which
is being used to symbolise ‘Barbara is athletic’; but neither of these concerns their
relative ‘athleticity’. So, to symbolise the entire sentence, we need a new sentence
letter. Let the Sentential sentence ‘𝑅’ symbolise the English sentence ‘Barbara is more
athletic than Adam’. Now we can symbolise sentence 43 by ‘(𝐴 ∧ 𝑅)’.

A sentence can be symbolised as (𝒜 ∧ ℬ) if it can be paraphrased in
English as ‘Both …, and …’, or as ‘…, but …’, or as ‘although …, …’.

You might be wondering why I am putting parentheses around the conjunctions. The
reason for this is to avoid potential ambiguity. This can be brought out by considering
how negation might interact with conjunction. Consider:

44. It’s not the case that you will get both soup and salad.
45. You will not get soup but you will get salad.
§5. CONNECTIVES 41

Sentence 44 can be paraphrased as ‘It is not the case that: both you will get soup and
you will get salad’. Using this symbolisation key:

𝑆1 : You will get soup.


𝑆2 : You will get salad.

We would symbolise ‘both you will get soup and you will get salad’ as ‘(𝑆1 ∧ 𝑆2 )’. To
symbolise sentence 44, then, we simply negate the whole sentence, thus: ‘¬(𝑆1 ∧ 𝑆2 )’.
Sentence 45 is a conjunction: you will not get soup, and you will get salad. ‘You will not
get soup’ is symbolised by ‘¬𝑆1 ’. So to symbolise sentence 45 itself, we offer ‘(¬𝑆1 ∧ 𝑆2 )’.
These English sentences are very different, and their symbolisations differ accordingly.
In one of them, the entire conjunction is negated. In the other, just one conjunct is
negated. Parentheses help us to avoid ambiguity by clearly distinguishing these two
cases.
Once again, however, English does feature this sort of ambiguity. Suppose instead of
44, we’d just said

46. You won’t get soup with salad.

The sentence 46 is arguably ambiguous; one of its readings says the same thing as 44,
the other says the same thing as 45. Parentheses enable Sentential to avoid precisely
this ambiguity.
The introduction of parentheses prompts us to define some other useful concepts. If
a sentence of Sentential has the overall form (𝒜 ∧ ℬ), we say that its main connective is
conjunction – even if other connectives occur within 𝒜 or ℬ. Likewise, if the sentence
has the form ¬𝒜 , its main connective is negation. We define the scope of an occurrence
of a connective in a sentence as the subsentence which has that connective as its main
connective. So the scope of ‘¬’ in ‘¬(𝑆1 ∧ 𝑆2 )’ is the whole sentence (because negation
is the main connective), while the scope of ‘¬’ in ‘(¬𝑆1 ∧ 𝑆2 )’ is just the subsentence
‘¬𝑆1 ’ – the main connective of the whole sentence is conjunction. Parentheses help us
keep track of the scope of our connectives, and of which connective in a sentence
is the main connective. I say more about the notions of a main connective and the
scope of a connective in §6.3.

5.4 Disjunction
Consider these sentences:

47. Either Denison will play golf with me, or he will watch movies.
48. Either Denison or Ellery will play golf with me.

For these sentences we can use this symbolisation key:

𝐷: Denison will play golf with me.


𝐸 : Ellery will play golf with me.
𝑀: Denison will watch movies.

However, we shall again need to introduce a new symbol. Sentence 47 is symbolised
by ‘(𝐷 ∨ 𝑀)’. The connective is called DISJUNCTION. We also say that ‘𝐷’ and ‘𝑀’ are the
DISJUNCTS of the disjunction ‘(𝐷 ∨ 𝑀)’.
Sentence 48 is only slightly more complicated. There are two subjects, but the English
sentence only gives the verb once. However, we can paraphrase sentence 48 as ‘Either
Denison will play golf with me, or Ellery will play golf with me’. Now we can
symbolise it by ‘(𝐷 ∨ 𝐸)’, using disjunction again.

A sentence can be symbolised as (𝒜 ∨ ℬ) if it can be paraphrased in
English as ‘Either …, or …’. Each of the disjuncts must be a sentence.

Sometimes in English, the word ‘or’ excludes the possibility that both disjuncts are
true. This is called an EXCLUSIVE OR. An exclusive ‘or’ is clearly intended when a
restaurant menu says ‘Entrees come with either soup or salad’: you may have soup;
you may have salad; but, if you want both soup and salad, then you have to pay extra.
At other times, the word ‘or’ allows for the possibility that both disjuncts might be true.
This is probably the case with sentence 48, above. I might play golf with Denison, with
Ellery, or with both Denison and Ellery. Sentence 48 merely says that I will play with
at least one of them. This is called an INCLUSIVE OR. The Sentential symbol ‘∨’ always
symbolises an inclusive ‘or’.
It might help to see negation interact with disjunction. Consider:

49. Either you will not have soup, or you will not have salad.
50. You will have neither soup nor salad.
51. You get either soup or salad, but not both.

Using the same symbolisation key as before, sentence 49 can be paraphrased in this
way: ‘Either it is not the case that you get soup, or it is not the case that you get salad’.
To symbolise this in Sentential, we need both disjunction and negation. ‘It is not the
case that you get soup’ is symbolised by ‘¬𝑆1 ’. ‘It is not the case that you get salad’ is
symbolised by ‘¬𝑆2 ’. So sentence 49 itself is symbolised by ‘(¬𝑆1 ∨ ¬𝑆2 )’.
Sentence 50 also requires negation. It can be paraphrased as, ‘It is not the case that
either you get soup or you get salad’. Since this negates the entire disjunction, we
symbolise sentence 50 with ‘¬(𝑆1 ∨ 𝑆2 )’.
Sentence 51 is an exclusive ‘or’. We can break the sentence into two parts. The first
part says that you get one or the other. We symbolise this as ‘(𝑆1 ∨ 𝑆2 )’. The second
part says that you do not get both. We can paraphrase this as: ‘It is not the case both
that you get soup and that you get salad’. Using both negation and conjunction, we
symbolise this with ‘¬(𝑆1 ∧ 𝑆2 )’. Now we just need to put the two parts together. As we
saw above, ‘but’ can usually be symbolised with ‘∧’. Sentence 51 can thus be symbolised
as ‘((𝑆1 ∨ 𝑆2 ) ∧ ¬(𝑆1 ∧ 𝑆2 ))’. This last example shows something important. Although
the Sentential symbol ‘∨’ always symbolises inclusive ‘or’, we can symbolise an exclusive
‘or’ in Sentential. We just have to use a few of our other symbols as well.
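Readers comfortable with a little programming can verify this claim by brute force. The following Python sketch (mine, not part of Sentential) compares the proposed symbolisation against Python’s built-in exclusive-or operator ‘^’ on all four combinations of truth values:

```python
from itertools import product

# Check that (S1 ∨ S2) ∧ ¬(S1 ∧ S2) behaves as an exclusive 'or'.
for s1, s2 in product([True, False], repeat=2):
    exclusive = (s1 or s2) and not (s1 and s2)
    assert exclusive == (s1 ^ s2), (s1, s2)

print("matches exclusive 'or' in all four cases")
```

This anticipates the truth-table method of chapter 3, where such equivalences are established systematically.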

5.5 Conditional
Consider these sentences:

52. If Jean is in Paris, then Jean is in France.


53. Jean is in France only if Jean is in Paris.

Let’s use the following symbolisation key:

𝑃: Jean is in Paris.
𝐹 : Jean is in France.

Sentence 52 is roughly of this form: ‘if P, then F’. We will use the symbol ‘→’ to symbolise
this ‘if …, then …’ structure. So we symbolise sentence 52 by ‘(𝑃 → 𝐹)’. The connective
is called THE CONDITIONAL. Here, ‘𝑃’ is called the ANTECEDENT of the conditional
‘(𝑃 → 𝐹)’, and ‘𝐹 ’ is called the CONSEQUENT.
Sentence 53 is also a conditional. Since the word ‘if’ appears in the second half of the
sentence, it might be tempting to symbolise this in the same way as sentence 52. That
would be a mistake. My knowledge of geography tells me that sentence 52 is unprob‐
lematically true: there is no way for Jean to be in Paris that doesn’t involve Jean being
in France. But sentence 53 is not so straightforward: were Jean in Dijon, Marseilles, or
Toulouse, Jean would be in France without being in Paris, thereby rendering sentence
53 false. Since geography alone dictates the truth of sentence 52, whereas travel plans
(say) are needed to know the truth of sentence 53, they must mean different things.
In fact, sentence 53 can be paraphrased as ‘If Jean is in France, then Jean is in Paris’. So
we can symbolise it by ‘(𝐹 → 𝑃)’.

A sentence can be symbolised as (𝒜 → ℬ) if it can be paraphrased in
English as ‘If A, then B’ or ‘A only if B’ or ‘B if A’.

In fact, many English expressions can be represented using the conditional. Consider:

54. For Jean to be in Paris, it is necessary that Jean be in France.


55. It is a necessary condition on Jean’s being in Paris that she be in France.
56. For Jean to be in France, it is sufficient that Jean be in Paris.
57. It is a sufficient condition on Jean’s being in France that she be in Paris.

If we think really hard, all four of these sentences mean the same as ‘If Jean is in Paris,
then Jean is in France’. So they can all be symbolised by ‘𝑃 → 𝐹 ’.
It is important to bear in mind that the connective ‘→’ tells us only that, if the ante‐
cedent is true, then the consequent is true. It says nothing about a causal connection
between two events (for example). In fact, we seem to lose a huge amount when we
use ‘→’ to symbolise English conditionals. We shall return to this in §§8.6 and 11.5.

5.6 Biconditional
Consider these sentences:

58. Shergar is a horse only if he is a mammal.


59. Shergar is a horse if he is a mammal.
60. Shergar is a horse if and only if he is a mammal.

We shall use the following symbolisation key:

𝐻: Shergar is a horse.
𝑀: Shergar is a mammal.

Sentence 58, for reasons discussed above, can be symbolised by ‘𝐻 → 𝑀’.


Sentence 59 is importantly different. It can be paraphrased as, ‘If Shergar is a mammal
then Shergar is a horse’. So it can be symbolised by ‘𝑀 → 𝐻’.
Sentence 60 says something stronger than either 58 or 59. It can be paraphrased as
‘Shergar is a horse if he is a mammal, and Shergar is a horse only if Shergar is a mammal’.
This is just the conjunction of sentences 58 and 59. So we can symbolise it as ‘(𝐻 →
𝑀) ∧ (𝑀 → 𝐻)’. We call this a BICONDITIONAL, because it entails the conditional in
both directions.
We could treat every biconditional this way. So, just as we do not need a new Sentential
symbol to deal with exclusive ‘or’, we do not really need a new Sentential symbol to deal
with biconditionals. However, we will use ‘↔’ to symbolise the biconditional. So we
can symbolise sentence 60 with the Sentential sentence ‘𝐻 ↔ 𝑀’.
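Those with some programming can again check this case by case. In the following Python sketch (mine, for illustration), the conditional is modelled truth-functionally, as false only when its antecedent is true and its consequent false – a reading discussed further in chapter 3:

```python
from itertools import product

def arrow(a, b):
    # Truth-functional conditional: false only when a is true and b is false.
    return (not a) or b

# H ↔ M has the same truth value as (H → M) ∧ (M → H) in every case.
for h, m in product([True, False], repeat=2):
    assert (h == m) == (arrow(h, m) and arrow(m, h)), (h, m)

print("'H ↔ M' and '(H → M) ∧ (M → H)' agree in all four cases")
```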
The expression ‘if and only if’ occurs a lot in philosophy and logic. For brevity, we
can abbreviate it with the snappier word ‘IFF’. I shall follow this practice. So ‘if’ with
only one ‘f’ is the English conditional. But ‘iff’ with two ‘f’s is the English biconditional.
Armed with this we can say:

A sentence can be symbolised as (𝒜 ↔ ℬ) if it can be paraphrased in
English as ‘A iff B’; that is, as ‘A if and only if B’.

Other expressions in English which can be used to mean ‘iff’ include ‘exactly if’ and
‘exactly when’, or even ‘just in case’. So if we say ‘You run out of time exactly when the
buzzer sounds’, we mean: ‘if the buzzer sounds, then you are out of time; and also if
you are out of time, then the buzzer sounds’.
A word of caution. Ordinary speakers of English often use ‘if …, then …’ when they
really mean to use something more like ‘… if and only if …’. Perhaps your parents
told you, when you were a child: ‘if you don’t eat your vegetables, you won’t get any
dessert’. Suppose you ate your vegetables, but that your parents refused to give you
any dessert, on the grounds that they were only committed to the conditional (roughly
‘if you get dessert, then you will have eaten your vegetables’), rather than the bicondi‐
tional (roughly, ‘you get dessert iff you eat your vegetables’). Well, a tantrum would
rightly ensue. So, be aware of this when interpreting people; but in your own writing,
make sure you use the biconditional iff you mean to.

5.7 Unless
We have now introduced all of the connectives of Sentential. We can use them together
to symbolise many kinds of sentences, but not every kind. It is a matter of judgment
whether a given English connective can be symbolised in Sentential. One rather tricky
case is the English‐language connective ‘unless’:

61. Unless you wear a jacket, you will catch cold.


62. You will catch cold unless you wear a jacket.

These two sentences are clearly equivalent. To symbolise them, we shall use the sym‐
bolisation key:

𝐽: You will wear a jacket.


𝐷: You will catch a cold.

How should we try to symbolise these in Sentential? Note that 62 seems to say: ‘either
you will catch cold, or if you don’t catch cold, it will be because you wear a jacket’. That
would have this symbolisation in Sentential: ‘(𝐷 ∨ (¬𝐷 → 𝐽))’. This turns out to be just
a long‐winded way of saying ‘(¬𝐷 → 𝐽)’, i.e., if you don’t catch cold, then you will have
worn a jacket.
Equally, however, both sentences mean that if you do not wear a jacket, then you will
catch cold. With this in mind, we might symbolise them as ‘¬𝐽 → 𝐷’.
Equally, both sentences mean that either you will wear a jacket or you will catch a cold.
With this in mind, we might symbolise them as ‘𝐽 ∨ 𝐷’.
All three are correct symbolisations. Indeed, in chapter 3 we shall see that all three
symbolisations are equivalent in Sentential.
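We can preview that equivalence now with a short Python check (mine, not part of the official development; the conditional is again modelled truth-functionally):

```python
from itertools import product

def arrow(a, b):
    # Truth-functional conditional: false only when a is true and b is false.
    return (not a) or b

# 'J ∨ D', '¬J → D' and '¬D → J' never disagree.
for j, d in product([True, False], repeat=2):
    assert (j or d) == arrow(not j, d) == arrow(not d, j), (j, d)

print("all three symbolisations of 'unless' agree in every case")
```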

If a sentence can be paraphrased as ‘Unless A, B’, then it can be
symbolised as 𝒜 ∨ ℬ, or ¬𝒜 → ℬ, or ¬ℬ → 𝒜.

Again, though, there is a little complication. ‘Unless’ can be symbolised as a condi‐
tional; but as I said above, people often use the conditional (on its own) when they
mean to use the biconditional. Equally, ‘unless’ can be symbolised as a disjunction;
but there are two kinds of disjunction (exclusive and inclusive). So it will not surprise
you to discover that ordinary speakers of English often use ‘unless’ to mean something
more like the biconditional, or like exclusive disjunction. Suppose I say: ‘I shall go
running unless it rains’. I probably mean something like ‘I shall go running iff it does
not rain’ (i.e., the biconditional), or ‘either I shall go running or it will rain, but not
both’ (i.e., exclusive disjunction). Again: be aware of this when interpreting what other
people have said, but be precise in your writing, unless you want to be deliberately am‐
biguous.
We should not take ‘unless’ to always have the stronger biconditional form. Consider
this example:

63. We’ll capture the castle, unless the Duke tries to stop us.

This certainly says that if we fail to capture the castle, it will have been because of that
pesky Duke. But what if the Duke tries but fails to stop us? We might in that case
capture the castle even though he tried to stop us. While 63 is still true, it is not the
case that ‘if the Duke tries to stop us, we won’t capture the castle’ is true.

Key Ideas in §5
› Sentential features five connectives: ‘∧’ (‘and’), ‘∨’ (‘or’), ‘¬’ (‘not’),
‘→’ (‘if …, then …’) and ‘↔’ (‘if and only if’ or ‘iff’).
› These connectives, alone and in combination, can be used to
symbolise many English constructions, even some which do not
feature the English counterparts of the connectives – as when we
approached the symbolisation of sentences involving ‘unless’.
› Figuring out how to symbolise a given natural language sentence
might not be straightforward, however, as in the cases of ‘A if B’
and ‘A only if B’, which have quite different symbolisations into
Sentential. ‘A unless B’ is perhaps even trickier: it is sometimes
used ambiguously by English speakers, and it can be symbolised
in several equally good ways.
› What matters in symbolisation is that the ‘spirit’ of the argument
is preserved and modelled appropriately, not that every nuance
of meaning is preserved.

Practice exercises
A. Using the symbolisation key given, symbolise each English sentence in Sentential.

𝑀: Those creatures are men in suits.


𝐶 : Those creatures are chimpanzees.
𝐺 : Those creatures are gorillas.

1. Those creatures are not men in suits.


2. Those creatures are men in suits, or they are not.
3. Those creatures are either gorillas or chimpanzees.
4. Those creatures are neither gorillas nor chimpanzees.
5. If those creatures are chimpanzees, then they are neither gorillas nor men in
suits.
6. Unless those creatures are men in suits, they are either chimpanzees or they are
gorillas.

B. Using the symbolisation key given, symbolise each English sentence in Sentential.

𝐴: Mister Ace was murdered.


𝐵: The butler did it.
𝐶: The cook did it.
𝐷: The Duchess is lying.
𝐸: Mister Edge was murdered.
𝐹: The murder weapon was a frying pan.

1. Either Mister Ace or Mister Edge was murdered.


2. If Mister Ace was murdered, then the cook did it.
3. If Mister Edge was murdered, then the cook did not do it.
4. Either the butler did it, or the Duchess is lying.
5. The cook did it only if the Duchess is lying.
6. If the murder weapon was a frying pan, then the culprit must have been the cook.
7. If the murder weapon was not a frying pan, then the culprit was either the cook
or the butler.
8. Mister Ace was murdered if and only if Mister Edge was not murdered.
9. The Duchess is lying, unless it was Mister Edge who was murdered.
10. If Mister Ace was murdered, he was done in with a frying pan.
11. Since the cook did it, the butler did not.
12. Of course the Duchess is lying!

C. Using the symbolisation key given, symbolise each English sentence in Sentential.

𝐸1 : Ava is an electrician.
𝐸2 : Harrison is an electrician.
𝐹1 : Ava is a firefighter.
𝐹2 : Harrison is a firefighter.
𝑆1 : Ava is satisfied with her career.
𝑆2 : Harrison is satisfied with his career.

1. Ava and Harrison are both electricians.


2. If Ava is a firefighter, then she is satisfied with her career.
3. Ava is a firefighter, unless she is an electrician.
4. Harrison is an unsatisfied electrician.
5. Neither Ava nor Harrison is an electrician.
6. Both Ava and Harrison are electricians, but neither of them find it satisfying.
7. Harrison is satisfied only if he is a firefighter.
8. If Ava is not an electrician, then neither is Harrison, but if she is, then he is too.
9. Ava is satisfied with her career if and only if Harrison is not satisfied with his.
10. If Harrison is both an electrician and a firefighter, he must be satisfied with his
work.
11. It cannot be that Harrison is both an electrician and a firefighter.
12. Harrison and Ava are both firefighters if and only if neither of them is an electri‐
cian.

D. Give a symbolisation key and symbolise the following English sentences in Senten‐
tial.

1. Alice and Bob are both spies.


2. If either Alice or Bob is a spy, then the code has been broken.
3. If neither Alice nor Bob is a spy, then the code remains unbroken.
4. The German embassy will be in an uproar, unless someone has broken the code.
5. Either the code has been broken or it has not, but the German embassy will be
in an uproar regardless.
6. Either Alice or Bob is a spy, but not both.

E. Give a symbolisation key and symbolise the following English sentences in Sentential.

1. If there is food to be found in the pridelands, then Rafiki will talk about squashed
bananas.
2. Rafiki will talk about squashed bananas unless Simba is alive.
3. Rafiki will either talk about squashed bananas or he won’t, but there is food to
be found in the pridelands regardless.
4. Scar will remain as king if and only if there is food to be found in the pridelands.
5. If Simba is alive, then Scar will not remain as king.

F. For each argument, write a symbolisation key and symbolise all of the sentences of
the argument in Sentential.

1. If Dorothy plays the piano in the morning, then Roger wakes up cranky. Dorothy
plays piano in the morning unless she is distracted. So if Roger does not wake
up cranky, then Dorothy must be distracted.
2. It will either rain or snow on Tuesday. If it rains, Neville will be sad. If it snows,
Neville will be cold. Therefore, Neville will either be sad or cold on Tuesday.
3. If Zoog remembered to do his chores, then things are clean but not neat. If he
forgot, then things are neat but not clean. Therefore, things are either neat or
clean; but not both.

G. We symbolised an exclusive ‘or’ using ‘∨’, ‘∧’, and ‘¬’. How could you symbolise an
exclusive ‘or’ using only two connectives? Is there any way to symbolise an exclusive
‘or’ using only one connective?
6
Sentences of Sentential

The sentence ‘either apples are red, or berries are blue’ is a sentence of English, and
the sentence ‘(𝐴 ∨ 𝐵)’ is a sentence of Sentential. Although we can identify sentences
of English when we encounter them, we do not have a formal definition of ‘sentence
of English’. But in this chapter, we shall offer a complete definition of what counts as a
sentence of Sentential. This is one respect in which a formal language like Sentential is
more precise than a natural language like English. Of course, Sentential was designed
to be much simpler than English.

6.1 Expressions
We have seen that there are three kinds of symbols in Sentential:

Atomic sentences 𝐴, 𝐵, 𝐶, …, 𝑍
with subscripts, as needed 𝐴1 , 𝐵1 , 𝑍1 , 𝐴2 , 𝐴25 , 𝐽375 , …

Connectives ¬, ∧, ∨, →, ↔

Parentheses (,)

We define an EXPRESSION of Sentential as any finite nonempty string of symbols of
Sentential. Take any of the symbols of Sentential and write them down, in any order,
and you have an expression of Sentential. Expressions are sometimes called ‘strings’,
because they are just a string of symbols from the approved list above. No restriction
is placed on expressions apart from having to contain at least one character, and not
going on infinitely long.

6.2 Sentences
We want to know when an expression of Sentential amounts to a sentence. Many expres‐
sions of Sentential will be uninterpretable nonsense. ‘)𝐴17 𝐽𝑄𝐹¬𝐾))∧)()’ is a perfectly
good expression, but doesn’t look likely to end up a correctly formed sentence of our
language. Accordingly, we need to clarify the grammatical rules of Sentential. We’ve
already seen some of those rules when we introduced the atomic sentences and con‐
nectives. But I will now make them all explicit.
Obviously, individual atomic sentences like ‘𝐴’ and ‘𝐺13 ’ should count as sentences.
We can form further sentences out of these by using the various connectives. Using
negation, we can get ‘¬𝐴’ and ‘¬𝐺13 ’. Using conjunction, we can get ‘(𝐴∧𝐺13 )’, ‘(𝐺13 ∧𝐴)’,
‘(𝐴 ∧ 𝐴)’, and ‘(𝐺13 ∧ 𝐺13 )’. We could also apply negation repeatedly to get sentences
like ‘¬¬𝐴’ or apply negation along with conjunction to get sentences like ‘¬(𝐴 ∧ 𝐺13 )’
and ‘¬(𝐺13 ∧ ¬𝐺13 )’. The possible combinations are endless, even starting with just
these two sentence letters, and there are infinitely many sentence letters. So there is
no point in trying to list all the sentences one by one.
Instead, we will describe the process by which sentences can be constructed. Consider
negation: Given any sentence 𝒜 of Sentential, ¬𝒜 is a sentence of Sentential. (Why the
funny fonts? I return to this in §7.)
We can say similar things for each of the other connectives. For instance, if 𝒜 and ℬ
are sentences of Sentential, then (𝒜 ∧ ℬ) is a sentence of Sentential. Providing clauses
like this for all of the connectives, we arrive at the following formal definition for a
SENTENCE of Sentential:

1. Every atomic sentence is a sentence.


2. If 𝒜 is a sentence, then ¬𝒜 is a sentence.
3. If 𝒜 and ℬ are sentences, then (𝒜 ∧ ℬ) is a sentence.
4. If 𝒜 and ℬ are sentences, then (𝒜 ∨ ℬ) is a sentence.
5. If 𝒜 and ℬ are sentences, then (𝒜 → ℬ) is a sentence.
6. If 𝒜 and ℬ are sentences, then (𝒜 ↔ ℬ) is a sentence.
7. Nothing else is a sentence.

Definitions like this are called RECURSIVE. Recursive definitions begin with some spe‐
cifiable base elements, and then present ways to generate indefinitely many more ele‐
ments by compounding together previously established ones. To give you a better idea
of what a recursive definition is, we can give a recursive definition of the idea of an an‐
cestor of mine. We specify a base clause.

› My parents are ancestors of mine.

and then offer further clauses like:

› If x is an ancestor of mine, then x’s parents are ancestors of mine.

› Nothing else is an ancestor of mine.


§6. SENTENCES OF Sentential 51

Using this definition, we can easily check to see whether someone is my ancestor: just
check whether she is the parent of the parent of … one of my parents. And the same is
true for our recursive definition of sentences of Sentential. Just as the recursive defini‐
tion allows complex sentences to be built up from simpler parts, the definition allows
us to decompose sentences into their simpler parts. And if we get down to atomic
sentences, then we are ok.
Let’s consider some examples.

1. Suppose we want to know whether or not ‘¬¬¬𝐷’ is a sentence of Sentential.


Looking at the second clause of the definition, we know that ‘¬¬¬𝐷’ is a sen‐
tence if ‘¬¬𝐷’ is a sentence. So now we need to ask whether or not ‘¬¬𝐷’ is a
sentence. Again looking at the second clause of the definition, ‘¬¬𝐷’ is a sen‐
tence if ‘¬𝐷’ is. Again, ‘¬𝐷’ is a sentence if ‘𝐷’ is a sentence. Now ‘𝐷’ is an atomic
sentence of Sentential, so we know that ‘𝐷’ is a sentence by the first clause of the
definition. So for a compound sentence like ‘¬¬¬𝐷’, we must apply the defin‐
ition repeatedly. Eventually we arrive at the atomic sentences from which the
sentence is built up.

2. Next, consider the expression ‘¬(𝑃 ∧ ¬(¬𝑄 ∨ 𝑅))’. Looking at the second clause
of the definition, this is a sentence if ‘(𝑃 ∧ ¬(¬𝑄 ∨ 𝑅))’ is. And this is a sentence
if both ‘𝑃’ and ‘¬(¬𝑄 ∨ 𝑅)’ are sentences. The former is an atomic sentence, and
the latter is a sentence if ‘(¬𝑄 ∨ 𝑅)’ is a sentence. It is. Looking at the fourth
clause of the definition, this is a sentence if both ‘¬𝑄’ and ‘𝑅’ are sentences. And
both are!

3. Suppose we want to know whether ‘(𝐴¬ ∨ 𝐵¹)’ is a sentence. By the fourth clause
of the definition, this is a sentence iff its two constituents are sentences. The
second is not: it consists of an upper case letter and a numerical superscript,
which is not in conformity with the definition of an atomic sentence (only
subscripts are allowed). The first isn’t either: while it is constructed from an
upper case letter and the negation symbol, they are in the wrong order – the
second clause of the definition requires the negation symbol to come before
the sentence it negates. So the original expression is not a sentence.

4. A final example. Consider the expression ‘(𝑃 → ¬(𝑄 → 𝑃)’. If this is a sentence,
then its main connective is ‘→’, and it was formed from the sentences ‘𝑃’ and
‘¬(𝑄 → 𝑃’. ‘𝑃’ is a sentence because it is a sentence letter. Is ‘¬(𝑄 → 𝑃’ a sen‐
tence? Only if ‘(𝑄 → 𝑃’ is a sentence. But this isn’t a sentence; it lacks a closing
parenthesis which would need to be there if this was correctly formed using the
clause in the definition covering conditionals. It follows that the expression with
which we started isn’t a sentence either.
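The procedure used in these examples – apply the clauses of the definition repeatedly, working inwards until we reach atomic sentences – can be mechanised. Here is a Python sketch of such a checker (my own illustration, not part of Sentential; I write subscripts as plain digits, so ‘𝐺13’ becomes ‘G13’):

```python
import re

TWO_PLACE = "∧∨→↔"

def is_sentence(expr):
    """Check whether expr is a sentence of Sentential, clause by clause."""
    # Clause 1: an upper case letter, optionally with a numeric subscript.
    if re.fullmatch(r"[A-Z][0-9]*", expr):
        return True
    # Clause 2: ¬𝒜 is a sentence if 𝒜 is.
    if expr.startswith("¬"):
        return is_sentence(expr[1:])
    # Clauses 3-6: (𝒜 ∘ ℬ), where ∘ is a two-place connective.
    if expr.startswith("(") and expr.endswith(")"):
        inner = expr[1:-1]
        depth = 0
        for i, ch in enumerate(inner):
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
            elif ch in TWO_PLACE and depth == 0:
                # Try splitting at a connective not buried in parentheses.
                if is_sentence(inner[:i]) and is_sentence(inner[i + 1:]):
                    return True
        return False
    # Clause 7: nothing else is a sentence.
    return False

print(is_sentence("¬¬¬D"))           # True  (example 1)
print(is_sentence("¬(P∧¬(¬Q∨R))"))   # True  (example 2)
print(is_sentence("(P→¬(Q→P)"))      # False (example 4)
```

The function rejects the expression from example 4: stripping the outermost parentheses leaves ‘P→¬(Q→P’, and no split at an unnested connective yields two well-formed parts.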

6.3 Main Connectives and Scope


Ultimately, every sentence of Sentential is constructed in a predictable way out of
atomic sentences. When we are dealing with a sentence other than an atomic sen‐
tence, we can see that there must be some sentential connective that was introduced
last, when constructing the sentence. We call that the MAIN CONNECTIVE of the sen‐
tence. In the case of ‘¬¬¬𝐷’, the main connective is the very first ‘¬’ sign. In the case
of ‘(𝑃 ∧ ¬(¬𝑄 ∨ 𝑅))’, the main connective is ‘∧’. In the case of ‘((¬𝐸 ∨ 𝐹) → ¬¬𝐺)’, the
main connective is ‘→’.
The recursive structure of sentences in Sentential will be important when we consider
the circumstances under which a particular sentence would be true or false. The sen‐
tence ‘¬¬¬𝐷’ is true if and only if the sentence ‘¬¬𝐷’ is false, and so on through the
structure of the sentence, until we arrive at the atomic components. We will return to
this point in chapter 3.
The recursive structure of sentences in Sentential also allows us to give a formal defini‐
tion of the scope of a negation (mentioned in §5.3). The scope of ‘¬’ is the subsentence
for which ‘¬’ is the main connective. So a sentence like

(𝑃 ∧ (¬(𝑅 ∧ 𝐵) ↔ 𝑄))

was constructed by conjoining ‘𝑃’ with ‘(¬(𝑅 ∧ 𝐵) ↔ 𝑄)’. This last sentence was con‐
structed by placing a biconditional between ‘¬(𝑅 ∧ 𝐵)’ and ‘𝑄’. And the former of these
sentences – a subsentence of our original sentence – is a sentence for which ‘¬’ is the
main connective. So the scope of the negation is just ‘¬(𝑅 ∧ 𝐵)’. More generally:

The SCOPE of an instance of a connective (in a sentence) is the
subsentence which has that instance of the connective as its main connective.

I talk of ‘instances’ of a connective because, in an example like ‘¬¬𝐴’, there are two
occurrences of the negation connective, with different scopes – one is the main con‐
nective of the whole sentence, the other has just ‘¬𝐴’ as its scope.
The recursive definition of a sentence of Sentential can also be depicted using a FORMA‐
TION TREE, similar to the syntactic trees for English we saw in §4, but much simpler. At
each leaf node of the tree is either an atomic sentence or a sentence connective. Each
nonleaf node contains a Sentential sentence, and branching from it are (i) its main
connective, and (ii) the immediate subsentences in the scope of the main connective.
Sentential sentences are just those expressions of the language that have a formation
tree respecting these rules. Consider again the sentence ‘¬(𝑃 ∧ ¬(¬𝑄 ∨ 𝑅))’. This has
the formation tree depicted in Figure 6.1.
At the beginning of this chapter, we introduced the sentences of Sentential in a way that
was parasitic upon identifying the structure of certain English sentences. Now we can
see that Sentential has its own syntax, which can be understood and used independently
of English. Our understanding of the meaning of the sentence connectives is still tied
to their English counterparts, but in §8.3 we will see that we can also understand their
meaning independently from the natural language we used to motivate them. However,
it remains true that what makes Sentential useful is that its syntax and connectives
are designed to capture, more or less, elements of the structure of natural language
sentences.

¬(𝑃 ∧ ¬(¬𝑄 ∨ 𝑅))
├─ ¬
└─ (𝑃 ∧ ¬(¬𝑄 ∨ 𝑅))
   ├─ 𝑃
   ├─ ∧
   └─ ¬(¬𝑄 ∨ 𝑅)
      ├─ ¬
      └─ (¬𝑄 ∨ 𝑅)
         ├─ ¬𝑄
         │  ├─ ¬
         │  └─ 𝑄
         ├─ ∨
         └─ 𝑅

Figure 6.1: Formation tree for ‘¬(𝑃 ∧ ¬(¬𝑄 ∨ 𝑅))’, rendered here as an indented tree: each nonleaf node branches into its main connective and its immediate subsentence(s).
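The recursive definition also lends itself to being checked mechanically. The following is a minimal Python sketch, my own illustration rather than part of the text: the nested‑tuple encoding and the function names are assumptions of the sketch, and subscripted atomic sentences like ‘𝐽374 ’ are left out for simplicity.

```python
# Sketch: Sentential sentences as nested tuples.
#   - an atomic sentence is an upper case letter, e.g. 'P';
#   - ('¬', A) is the negation of A;
#   - (op, A, B) is a binary compound whose main connective is op.

BINARY_CONNECTIVES = {'∧', '∨', '→', '↔'}

def is_sentence(expr):
    """Apply the recursive definition clause by clause."""
    if isinstance(expr, str):                      # clause 1: atomic sentences
        return len(expr) == 1 and expr.isalpha() and expr.isupper()
    if isinstance(expr, tuple):
        if len(expr) == 2 and expr[0] == '¬':      # clause 2: negation
            return is_sentence(expr[1])
        if len(expr) == 3 and expr[0] in BINARY_CONNECTIVES:
            return is_sentence(expr[1]) and is_sentence(expr[2])
    return False                                   # nothing else is a sentence

def main_connective(expr):
    """The main connective corresponds to the last rule applied."""
    return expr[0] if isinstance(expr, tuple) else None

# The sentence ¬(P ∧ ¬(¬Q ∨ R)) of Figure 6.1:
s = ('¬', ('∧', 'P', ('¬', ('∨', ('¬', 'Q'), 'R'))))
print(is_sentence(s), main_connective(s))   # True ¬
```

Each call of `is_sentence` mirrors one step of the formation tree: the function recurses exactly where the tree branches.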

6.4 Structure and Ambiguity


In later sections, we will see the utility of symbolising arguments to evaluate their
validity. But even at this early stage, there is value in symbolising sentences, because
doing so can clear up ambiguity.
Consider the following imagined regulation: ‘Small children must be quiet and seated
or carried at all times’. This sentence has an ambiguous structure, by which I mean:
there are two different grammatical structures that it might be thought to have. We
can draw up a syntactic tree to show this. But equally, symbolising can bring this out.
Let us use the following key:

𝑄: Small children must be quiet.


𝑆: Small children must be seated.
𝐶 : Small children must be carried.

Leaving the temporal adverbial phrase ‘at all times’ aside, here are two symbolisations
of our target sentence:

𝑄 and either 𝑆 or 𝐶 – in Sentential, (𝑄 ∧ (𝑆 ∨ 𝐶));


either 𝑄 and 𝑆, or 𝐶 – in Sentential, ((𝑄 ∧ 𝑆) ∨ 𝐶).

This ambiguity matters. If a child is carried, but is not quiet, the parent is violating
the regulation with the first structure, but in compliance with the regulation with the
second structure. As we will soon see, Sentential has resources to ensure that no ambi‐
guity is present in any symbolisation of a given target sentence, which helps us make
clear what we really might have meant by that sentence. And as this case makes clear,
that might be important in the framing of legal statutes or contracts, in the formula‐
tion of government policy, and in other written documents where clarity of meaning
is crucial.
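The difference between the two structures can be confirmed by brute truth‑value computation. Here is a hypothetical Python illustration (the variable names are mine) of the carried‑but‑not‑quiet case:

```python
# The child is carried, but neither quiet nor seated.
Q, S, C = False, False, True

reading_1 = Q and (S or C)     # (Q ∧ (S ∨ C)): quiet, and seated-or-carried
reading_2 = (Q and S) or C     # ((Q ∧ S) ∨ C): quiet-and-seated, or carried

print(reading_1)   # False: the regulation is violated on the first structure
print(reading_2)   # True: it is complied with on the second structure
```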

6.5 Parenthetical Conventions


Strictly speaking, the parentheses in ‘(𝑄 ∧ 𝑅)’ are an indispensable part of the sentence.
Part of this is because we might use ‘(𝑄 ∧ 𝑅)’ as a subsentence in a more complicated
sentence. For example, we might want to negate ‘(𝑄 ∧ 𝑅)’, obtaining ‘¬(𝑄 ∧ 𝑅)’. If we
just had ‘𝑄 ∧ 𝑅’ without the parentheses and put a negation in front of it, we would
have ‘¬𝑄 ∧ 𝑅’. It is most natural to read this as meaning the same thing as ‘(¬𝑄 ∧ 𝑅)’.
But as we saw in §5.3, this is very different from ‘¬(𝑄 ∧ 𝑅)’.
Strictly speaking, then, ‘𝑄 ∧ 𝑅’ is not a sentence. It is a mere expression.
When working with Sentential, however, it will make our lives easier if we are sometimes
a little less than strict. So, here are some convenient conventions.
First, we allow ourselves to omit the outermost parentheses of a sentence. Thus we
allow ourselves to write ‘𝑄 ∧ 𝑅’ instead of the sentence ‘(𝑄 ∧ 𝑅)’. However, we must
remember to put the parentheses back in, when we want to embed the sentence into
a more complicated sentence!
Second, it can be a bit painful to stare at long sentences with many nested pairs of
parentheses. To make things a bit easier on the eyes, we shall allow ourselves to use
parentheses in varied sizes, which sometimes helps in seeing which parentheses pair
up with each other.
Combining these two conventions, we can rewrite the unwieldy sentence

(((𝐻 → 𝐼) ∨ (𝐼 → 𝐻)) ∧ (𝐽 ∨ 𝐾))

rather more readably as follows:

((𝐻 → 𝐼) ∨ (𝐼 → 𝐻)) ∧ (𝐽 ∨ 𝐾)

The scope of each connective is now much clearer.


There are systems of logic which omit parentheses, using so‐called ‘Polish notation’. I
discuss this notation briefly in Appendix A, p. 367.

Key Ideas in §6
› The class of sentences of Sentential has a perfectly precise recurs‐
ive definition that allows us to determine in a step‐by‐step fash‐
ion, for any expression, whether it is a sentence or not.
› The main connective of a sentence is the final rule applied in the
construction of a sentence; the scope of a connective is that sub‐
sentence in the construction of which it is the main connective.
› Each Sentential sentence, unlike English sentences, has an unam‐
biguous structure.
› We can sometimes permit ourselves some liberality in the use of
parentheses in Sentential, when we can be sure it gives rise to no
difficulty.

Practice exercises
A. For each of the following: (a) Is it a sentence of Sentential, strictly speaking? (b) Is
it a sentence of Sentential, allowing for our relaxed parenthetical conventions?

1. (𝐴)
2. 𝐽374 ∨ ¬𝐽374
3. ¬¬¬¬𝐹
4. ¬∧𝑆
5. (𝐺 ∧ ¬𝐺)
6. (𝐴 → (𝐴 ∧ ¬𝐹)) ∨ (𝐷 ↔ 𝐸)
7. (𝑍 ↔ 𝑆) → 𝑊 ∧ 𝐽 ∨ 𝑋
8. (𝐹 ↔ ¬𝐷 → 𝐽) ∨ (𝐶 ∧ 𝐷)

B. Construct a formation tree in the style of Figure 6.1 for the following sentences:

1. (((𝐴 → 𝐵) ∧ (𝐵 → 𝐴)) → (𝐵 ↔ 𝐴))


2. (((𝑃 → 𝑄) ∧ (¬𝑅 → 𝑄)) ∨ ¬𝑅).

C. Are there any sentences of Sentential that contain no atomic sentences? Explain
your answer.

D. What is the scope of each connective in the sentence

((𝐻 → 𝐼) ∨ (𝐼 → 𝐻)) ∧ (𝐽 ∨ 𝐾)
7
Use and Mention

In this chapter, I have talked a lot about sentences. So I need to pause to explain an
important, and very general, point.

7.1 Quotation Conventions


Consider these two sentences:

› Malcolm Turnbull is the Prime Minister.

› The expression ‘Malcolm Turnbull’ is composed of two upper case letters and
thirteen lower case letters.

When we want to talk about this ex‐Prime Minister, we USE his name. When we want
to talk about his name, we MENTION that name. And in English, we normally do so by
putting it in quotation marks.
There is a general point here. When we want to talk about things in the world, we
just use words. When we want to talk about words, we typically have to mention those
words.1 We need to indicate that we are mentioning them, rather than using them.
To do this, some convention is needed. We can surround the expression in matched
left and right quotation marks, or display them centrally in the page (say). So this
sentence:

› ‘Malcolm Turnbull’ is the Prime Minister.

says that some expression is the Prime Minister. And that’s false. The man is the Prime
Minister; his name isn’t. Conversely, this sentence:

1 More generally, when we want to talk about something we use its name. So when we want to talk about
an expression, we use the name of the expression – which is just the expression enclosed in quotation
marks. Mentioning an expression is using a name of that expression.


› Malcolm Turnbull is composed of two upper case letters and thirteen lower case
letters.

also says something false: Malcolm Turnbull is a man, made of meat rather than letters.
One final example:

› ‘ ‘Malcolm Turnbull’ ’ is the name of ‘Malcolm Turnbull’.

On the left‐hand‐side, here, we have the name of a name (it consists of an expression in
quotation marks, and that embedded expression itself contains quotation marks). On
the right hand side, we have a name (of an expression). Perhaps this kind of sentence
only occurs in logic textbooks, but it is true.
Those are just general rules for quotation, and you should observe them carefully in
all your work! To be clear, the quotation‐marks here do not indicate indirect speech.
They indicate that you are moving from talking about an object, to talking about the
name of that object.

7.2 Object Language and Metalanguage


These general quotation conventions are of particular importance for us. After all,
we are describing a formal language here, Sentential, and so we are often mentioning
expressions from Sentential.
When we talk about a language, the language that we are talking about is called the
OBJECT LANGUAGE. The language that we use to talk about the object language is called
the METALANGUAGE.
For the most part, the object language in this chapter has been the formal language
that we have been developing: Sentential. The metalanguage is English. Not conver‐
sational English exactly, but English supplemented with some additional vocabulary
which helps us to get along.
Now, I have used italic upper case letters for atomic sentences of Sentential:

𝐴, 𝐵, 𝐶, 𝑍, 𝐴1 , 𝐵4 , 𝐴25 , 𝐽375 , …

These are sentences of the object language (Sentential). They are not sentences of Eng‐
lish. So I must not say, for example:

› 𝐷 is an atomic sentence of Sentential.

Obviously, I am trying to come out with an English sentence that says something about
the object language (Sentential). But ‘𝐷’ is a sentence of Sentential, and no part of Eng‐
lish. So the preceding is gibberish, just like:

› Schnee ist weiß is a German sentence.



What we surely meant to say, in this case, is:

› ‘Schnee ist weiß’ is a German sentence.

Equally, what we meant to say above is just:

› ‘𝐷’ is an atomic sentence of Sentential.

The general point is that, whenever we want to talk in English about some specific
expression of Sentential, we need to indicate that we are mentioning the expression,
rather than using it. We can either deploy quotation marks, or we can adopt some
similar convention, such as placing it centrally in the page.
English is, generally, its own metalanguage. An expression of English enclosed in
matching quotation marks is another expression of English, as the quotation marks
are parts of English too. This causes a potential problem of ambiguity if the expression
quoted itself contains quotation marks. English allows us to talk about operations on
English expressions, as in this example:

64. An English word results from adding ‘ing’ or ‘ion’ to the expression ‘confus’.

But this example is ambiguous. On one reading, it is discussing the expressions ‘con‐
fusing’ and ‘confusion’, and saying truly that they are both English words. But on an‐
other reading, the matching quotation marks are the one before ‘ing’ and the one after
‘ion’, and Example 64 is stating falsely that this unusual string of English letters and
punctuation can be added to ‘confus’ to form an English word:

ing’ or ‘ion

To avoid this, we might introduce some mechanism for indicating which quotation
marks are matched with each other.

7.3 Script Fonts, and Recursive Definitions Revisited


However, we do not just want to talk about specific expressions of Sentential. We also
want to be able to talk about any arbitrary sentence of Sentential. Indeed, I had to do
this in §6, when I presented the recursive definition of a sentence of Sentential. I used
upper case script font letters to do this, namely:

𝒜, ℬ, 𝒞, 𝒟, …

These symbols do not belong to Sentential. Rather, they are part of our (augmented)
metalanguage that we use to talk about any expression of Sentential. To repeat the
second clause of the recursive definition of a sentence of Sentential, we said:

2. If 𝒜 is a sentence, then ¬𝒜 is a sentence.

This talks about arbitrary sentences. If we had instead offered:



› If ‘𝐴’ is a sentence, then ‘¬𝐴’ is a sentence.

this would not have allowed us to determine whether ‘¬𝐵’ is a sentence. To emphasise,
then:

‘𝒜 ’ is a symbol in augmented English, which we use to talk about any
Sentential expression. ‘𝐴’ is a particular atomic sentence of Sentential.

To come at this distinction a slightly different way, while ‘𝒜 ’ designates a sentence of
Sentential, it can designate a different sentence on different occasions. It behaves a
bit a like a pronoun. The pronoun ‘it’ always designates some object, but a different
one in different circumstances of use. Likewise ‘𝒜 ’ can stand for different sentences
of Sentential. By contrast, ‘𝐴’ always names just one atomic sentence of Sentential, the
first letter of the English alphabet.
This last example raises a further complication for our quotation conventions. I have
not included any quotation marks in the clauses of our recursive definition of a sen‐
tence of Sentential in §6.2. Should I have done so?
The problem is that the expression on the right‐hand‐side of most of our recursive
clauses are not sentences of English, since they contain Sentential connectives, like ‘¬’.
Consider clause 2. We might try to write:

2′ . If 𝒜 is a sentence, then ‘¬𝒜 ’ is a sentence.

But this is no good: ‘¬𝒜 ’ is not a Sentential sentence, since ‘𝒜 ’ is a symbol of (augmen‐
ted) English rather than a symbol of Sentential. What we really want to say is something
like this:

2″ . If 𝒜 is any Sentential sentence, then the expression that consists of the symbol
‘¬’, followed immediately by the sentence 𝒜 , is also a sentence.

This is impeccable, but rather long‐winded. But we can avoid long‐windedness by


creating our own conventions. We can perfectly well stipulate that an expression like
‘¬𝒜 ’ should simply be read as abbreviating the long‐winded account. So, officially, the
metalanguage expression ‘¬𝒜 ’ simply abbreviates:

the expression that consists of the symbol ‘¬’ followed by the sentence 𝒜

and similarly, for expressions like ‘(𝒜 ∧ ℬ)’, ‘(𝒜 ∨ ℬ)’, etc. The latter is the expression
which consists of an opening parenthesis, followed by the sentence 𝒜 , followed by the
symbol ‘∨’, followed by the sentence ℬ, followed by a closing parenthesis.
If you like, you can think of our recursive definition of a sentence as a schema stand‐
ing for infinitely many instances of each clause, one for each Sentential sentence. In
the schematic clause for negation (‘If 𝒜 is a sentence, ¬𝒜 is also a sentence’), we can
consider each instance involving ‘𝒜 ’ being replaced by some Sentential sentence sur‐
rounded by quotation marks in accordance with our conventions. So ‘¬𝒜 ’ is to be
understood as abbreviating the expression consisting of a left quotation mark, a
negation sign, the same Sentential sentence as 𝒜 , and a right quotation mark. Hence
if 𝒜 is ‘𝑃’, ¬𝒜 just is ‘¬𝑃’.

7.4 Quotation Conventions for Arguments


One of our main purposes for using Sentential is to study arguments, and that will be
our concern in chapter 3. In English, the premises of an argument are often expressed
by individual sentences, and the conclusion by a further sentence. Since we can sym‐
bolise English sentences, we can symbolise English arguments using Sentential. Thus
we might ask whether the argument whose premises are the Sentential sentences ‘𝐴’
and ‘𝐴 → 𝐶 ’, and whose conclusion is the Sentential sentence ‘𝐶 ’, is valid. However, it is
quite a mouthful to write that every time. So instead I shall introduce another bit of
abbreviation. This:
𝒜1 , 𝒜2 , …, 𝒜𝑛 ∴ 𝒞
abbreviates:

the argument with premises 𝒜1 , 𝒜2 , …, 𝒜𝑛 and conclusion 𝒞

To avoid unnecessary clutter, we shall not regard this as requiring quotation marks
around it. This is a name of an argument, not an argument itself. (Note, then, that ‘∴’
is a symbol of our augmented metalanguage, and not a new symbol of Sentential.)

7.5 Pedantry in Practice


Having been precise about use and mention, you can now relax! If you’ve understood
this section, you know how to do things properly. In exercises and practice problems,
unless explicit instructions otherwise are given, you will be expected to do things prop‐
erly and respect the distinction between use and mention. But your understanding of
the topic means that you can probably do things a bit more sloppily elsewhere – safe
in the knowledge you can fix them up if you need to. As the great twentieth century
philosopher David Lewis said of the way he presented his account of the word ‘knows’
in his paper ‘Elusive Knowledge’,2

I could have said my say fair and square, bending no rules. It would have
been tiresome, but it could have been done…. I could have taken great
care to distinguish between (1) the language I use when I talk about know‐
ledge, or whatever, and (2) the second language that I use to talk about
the semantic and pragmatic workings of the first language. If you want to
hear my story told that way, you probably know enough to do the job for
yourself.

2 David Lewis (1996) ‘Elusive Knowledge’, Australasian Journal of Philosophy 74, pp. 549–67, at pp. 566–7.

Wise words. In the end, the distinction between use and mention is intended to re‐
move potential confusion. But sometimes over‐eager application of it can prove just
as big an obstacle to communication.

Key Ideas in §7
› It is crucial to distinguish between use and mention – between
talking about the world, and talking about expressions.
› We use Sentential to represent sentences and arguments. But we
use English – augmented with some additional vocabulary – to
talk about Sentential.
› We introduced a slightly unusual convention for understanding
quoted expressions involving script font letters: ‘(𝒜 → ℬ)’, to
take a representative example, is to be interpreted as the Sen‐
tential expression consisting of a left parenthesis, followed by
whatever Sentential expression 𝒜 represents, followed by ‘→’, fol‐
lowed by whatever Sentential expression ℬ represents, followed
by a closing right parenthesis.

Practice exercises
A. For each of the following: Are the quotation marks correctly used, strictly speaking?
If not, propose a corrected version.

1. Snow is not a sentence of English.


2. ‘𝒜 → 𝒞 ’ is a sentence of Sentential.
3. ‘¬𝒜 ’ is the expression consisting of the symbol ‘¬’ followed by the upper case
script letter ‘A’.
4. If ‘𝒜 ’ is a sentence, so is ‘(𝒜 ∨ 𝒜)’.
5. ‘𝒜 ’ has the same number of characters as ‘𝐴’.

B. Example 64 was ambiguous because it was unclear which pairs of quotation marks
were matched with each other. Can you come up with a proposal for how we might
indicate matching quotation marks to avoid this potential ambiguity?
Chapter 3

Truth Tables
8
Truth‐Functional Connectives

8.1 Functions
So much for the grammar or syntax of Sentential. We turn now to the meaning of
Sentential sentences. For technical reasons, it is best to start with the intended inter‐
pretation of the connectives.
As a preliminary, we need to have the concept of a (mathematical) function. Frequently,
we refer to things not by name, but by the relations they have to other things. You
can refer to Barack Obama by name, but one could equally refer to him in relation
to his role, as ‘the 44th President of the United States’, or in relation to his family,
as ‘the husband of Michelle and father of Malia and Sasha’. These kinds of referring
expressions are known as descriptions, and we will look at them in more detail in §19.
Our interest in them now is in the relations they involve. Barack Obama is denoted
by ‘the biological father of Malia Obama’ or ‘the biological father of Sasha Obama’, in
relation to his children. But we can consider ‘the biological father of …’ in relation
to other individuals: so ‘the biological father of Ivanka Trump’ denotes Donald, ‘the
biological father of Daisy Turnbull’ denotes Malcolm, and so on. We can summarise
this information in a table like this:

Input ‘the biological father of …’


Malia Obama Barack
Sasha Obama Barack
Ivanka Trump Donald
Daisy Turnbull Malcolm
… …

In this table we have an input, on the left, which is related to the output on the right.
The relation which maps the things in the left column to their corresponding outputs
on the right is known as a function – in this case, the ‘biological father of’ function.
More precisely, a FUNCTION is a relation between the members of some collection 𝐴 and
some collection 𝐵 (which may be the same as 𝐴), such that each input to the function
is a member of 𝐴, each output of the function is a member of 𝐵, and, crucially, each
member of 𝐴 is associated with at most one member of 𝐵. So ‘the biological father of …’
is a function from the set of people (living or dead) to itself, and associates each person
with their biological father. We implicitly assume that ‘biological father’ permits each
person to be associated with a unique father. If we consider other notions of fatherhood, such
as paternal figure, those would not yield a function, because many people have two
or more paternal figures in their lives. Note that while everyone is associated with a
unique biological father by this function (no input is associated with more than one
output), the converse does not hold: some outputs are linked to more than one input,
in fact, every father with more than one child is such an output.
Common examples of functions occur in mathematics: we can consider the function
‘the sum of 𝑥 and 𝑦’, which takes two numbers as input, and spits out their unique
sum, 𝑥 + 𝑦. This is again a function from a set to itself, this time the set of integers.
There are functions which are from one set to another: consider, ‘the number of …’s
children’, which is a function from people to numbers, mapping each person to the
number of children they have. (Even if they have more than one child, there is still a
unique number characterising how many they have, and that is what this function spits
out.) There are functions which do not associate an output with every input: consider
‘the eldest child of’, which associates each parent with their first‐born child, but doesn’t
associate nonparents with anything. There are many relations which are not functions.
While Barack Obama can be characterised as ‘the father of Malia’, Malia Obama cannot
be characterised as ‘the child of Barack’, since that attempted description would apply
equally to her sister.
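The table above can be sketched as a Python dictionary, which behaves like a finite function: each key is associated with exactly one value, though distinct keys may share a value. This is my own illustration, not from the text.

```python
# The 'biological father of' function as a lookup table.
father_of = {
    'Malia Obama': 'Barack',
    'Sasha Obama': 'Barack',      # two inputs may share an output...
    'Ivanka Trump': 'Donald',
    'Daisy Turnbull': 'Malcolm',
}
print(father_of['Sasha Obama'])   # Barack

# ...but no input can have two outputs: assigning a second value to the same
# key simply overwrites the first. That is why 'the child of Barack' does not
# describe a function: one input ('Barack') would need two outputs.
```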

8.2 The Idea of Truth‐Functionality


The relevance of the notion of a function is that we are going to identify the meanings
of the Sentential sentence connectives with a certain class of functions.
A valid argument is one with a structure that guarantees the truth of the conclusion,
given the truth of the premises (§2.5). Our interest in valid arguments leads us to be
interested in the truth or falsity of sentences. Sentential gives rules governing the con‐
struction of complex sentences from smaller constituents for each sentence connective.
The meanings of the sentence connectives in Sentential are likewise going to allow us
to ‘construct’, or determine, the truth‐value of a complex sentence as the result of the
truth values of its constituent sentences and the main connective of that sentence.
This is an important idea about Sentential sentence connectives. They are each associ‐
ated with a rule that fixes the truth value of a complex sentence of which they are the
main connective, given the truth values of the constituent sentences. But such a rule
is just a function: a function that takes one or two truth values as input, and yields
a truth value as output. Such a function is called a TRUTH‐FUNCTION. We can now
summarise our important insight about Sentential:

A connective is TRUTH‐FUNCTIONAL iff the truth value of a sentence with that
connective as its main connective is uniquely determined by the truth value(s)
of the constituent sentence(s), i.e., its meaning is a truth‐function.
Every connective in Sentential is truth‐functional.

It turns out that we don’t need to know anything more about the atomic sentences
of Sentential than their truth values to assign a truth value to those nonatomic, or
COMPOUND, sentences. More generally, the truth value of any compound sentence
depends only on the truth value of the subsentences that comprise it. In order to
know the truth value of ‘(𝐷 ∧ 𝐸)’, for instance, you only need to know the truth value of
‘𝐷’ and the truth value of ‘𝐸 ’. In order to know the truth value of ‘((𝐷 ∧ 𝐸) ∨ 𝐹)’, you need
only know the truth value of ‘(𝐷 ∧ 𝐸)’ and ‘𝐹 ’. And so on. This is in fact a good part of the
reason why we chose these connectives, and chose their English near‐equivalents as
our structural words. To determine the truth value of some Sentential sentence, we only
need to know the truth value of its components. This is why the study of Sentential is
termed truth‐functional logic.

8.3 Schematic Truth Tables


We introduced five connectives in chapter 2. To give substance to the claim that they
are truth‐functional, we simply need to explain how each connective yields a truth
value for a compound sentence with that connective as its main connective, when sup‐
plied with the truth values of its IMMEDIATE SUBSENTENCES. In any sentence of the
form 𝒜 → ℬ, 𝒜 and ℬ are the immediate subsentences, since those combine with the
main connective to form the sentence.
Not only does the truth value of a compound sentence of Sentential depend only on the
truth values assigned to its subsentences, it does so uniquely, because it is a function.
We may happily refer to the truth value of 𝒜 as determined by its constituents. This
means we may represent the pattern of dependence of compound sentence truth values
on immediate subsentence truth values in a simple table, like those we saw in §8.1.
These tables show how to relate the input truth values to output truth values for any
sentence connective. For convenience, we shall represent the truth value True by ‘T’
and the value False by ‘F’. (Just to be clear, the two truth values are True and False; the
truth values are not letters!)
These truth tables completely characterise how the connectives of Sentential behave.
Accordingly, we can take these schematic truth tables as giving the meanings of the
connectives. The truth function associated with a connective captures its entire
contribution to the sentences in which it appears.
A language, natural or artificial, is COMPOSITIONAL iff the meaning of a complex sen‐
tence is the product of the syntactic structure of the sentence and the meanings of its
constituents. In Sentential, the only dimension of meaning is truth value. So in Senten‐
tial, we have a special case of compositionality: the truth value of a complex sentence
depends on the syntax and the truth value of the constituent atomic sentences.

Negation For any sentence 𝒜 : If 𝒜 is true, then ‘¬𝒜 ’ is false. If ‘¬𝒜 ’ is true, then
𝒜 is false. We can summarize this dependence in the SCHEMATIC TRUTH TABLE for
negation, which shows how any sentence with negation as its main connective has a
truth value depending on the truth value of its immediate subsentence:

𝒜 ¬𝒜
T F
F T

This is a schematic table, because the input truth values are not associated with any
specific Sentential sentence, but with arbitrarily chosen sentences which have the truth
value in question. Whatever Sentential sentence we choose in place of 𝒜 , whether
atomic or not, we know that if 𝒜 has the truth value True, then ¬𝒜 will have the truth
value False.

Conjunction For any sentences 𝒜 and ℬ, 𝒜∧ℬ is true if and only if both 𝒜 and ℬ
are true. We can summarize this in the schematic truth table for conjunction:

𝒜 ℬ 𝒜∧ℬ
T T T
T F F
F T F
F F F

Note that conjunction is COMMUTATIVE. The truth value for 𝒜 ∧ ℬ is always the same
as the truth value for ℬ ∧ 𝒜 .

Disjunction Recall that ‘∨’ always represents inclusive or. So, for any sentences 𝒜
and ℬ, 𝒜 ∨ ℬ is true if and only if either 𝒜 or ℬ is true. We can summarize this in the
schematic truth table for disjunction:

𝒜 ℬ 𝒜∨ℬ
T T T
T F T
F T T
F F F

Like conjunction, disjunction is commutative.



Conditional I’m just going to come clean and admit it. Conditionals are a problem
in Sentential. This is not because there is any problem finding a truth table for the
connective ‘→’, but rather because the truth table we put forward seems to make ‘→’
behave in ways that are different to the way that the English counterpart ‘if … then
…’ behaves. Exactly how much of a problem this poses is a matter of philosophical
contention. I shall discuss a few of the subtleties in §8.6 and §11.5. (It is no problem
for Sentential itself, of course – the only potential difficulty arises when we try to use
the Sentential conditional to represent ‘if …, then …’.)
We know at least this much from a parallel with the English conditional: if 𝒜 is true
and ℬ is false, then 𝒜 → ℬ should be false. The conditional claim ‘if I study hard, then
I’ll pass’ is clearly false if I study hard and still fail. For now, I am going to stipulate
that this is the only type of case in which 𝒜 → ℬ is false. We can summarize this with
a schematic truth table for the Sentential conditional.

𝒜 ℬ 𝒜→ℬ
T T T
T F F
F T T
F F T

The conditional is not commutative. You cannot swap the antecedent and consequent
in general without changing the truth value, because 𝒜 → ℬ has a different truth table
from ℬ → 𝒜 . Compare:

1. If a coin lands heads 1000 times, something surprising has happened. (True –
that is very surprising.)
2. If something surprising has happened, then a coin lands heads 1000 times. (False
– there are other surprising things than that.)

Biconditional Since a biconditional is to be the same as the conjunction of a
conditional running in each direction, we shall want the truth table for the
biconditional to be:

𝒜 ℬ 𝒜↔ℬ
T T T
T F F
F T F
F F T

You can think of the biconditional as saying that the two immediate constituents have
the same truth value – it’s true if they do, false otherwise. Unsurprisingly, the bicondi‐
tional is commutative.
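Since each connective is truth‑functional, the five schematic truth tables above can be reproduced as ordinary functions on truth values. The following Python sketch is my own illustration: the function names are assumptions of the sketch, and Python's True and False stand in for the truth values T and F.

```python
from itertools import product

NOT = lambda a: not a
AND = lambda a, b: a and b
OR  = lambda a, b: a or b               # inclusive disjunction
IF  = lambda a, b: (not a) or b         # false only on the T, F row
IFF = lambda a, b: a == b               # true iff the inputs match

def truth_table(connective):
    """Print the schematic truth table for a binary truth-function."""
    for a, b in product([True, False], repeat=2):
        row = ['T' if v else 'F' for v in (a, b, connective(a, b))]
        print(' '.join(row))

truth_table(IF)
# T T T
# T F F
# F T T
# F F T

# Conjunction and disjunction are commutative; the conditional is not:
rows = list(product([True, False], repeat=2))
assert all(AND(a, b) == AND(b, a) for a, b in rows)
assert all(OR(a, b) == OR(b, a) for a, b in rows)
assert IF(True, False) != IF(False, True)
```

Feeding `truth_table` each of the binary connectives reproduces the four schematic tables given in this section, row for row.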

8.4 Symbolising Versus Translating


We have seen how to use a symbolisation key in §4.1 to temporarily assign an interpret‐
ation to some of the atomic sentences of Sentential. This symbolisation key will at the
very least assign a truth‐value to those atomic sentences – the same one as the English
sentence it symbolises.
But since the connectives of Sentential are truth‐functional, they really are sensitive to
nothing except the truth values of the symbolised sentences. So when we are symbol‐
ising a sentence or an argument in Sentential, we are ignoring everything besides the
contribution that the truth values of a subsentence might make to the truth value of
the whole.
There are subtleties to natural language sentences that far outstrip their mere truth
values. Sarcasm; poetry; snide implicature; emphasis; these are important parts of
everyday discourse. But none of this is retained in Sentential. As remarked in §5, Sen‐
tential cannot capture the subtle differences between the following English sentences:

1. Jon is elegant and Jon is quick


2. Although Jon is elegant, Jon is quick
3. Despite being elegant, Jon is quick
4. Jon is quick, albeit elegant
5. Jon’s elegance notwithstanding, he is quick

All of the above sentences will be symbolised with the same Sentential sentence, per‐
haps ‘𝐹 ∧ 𝑄’.
I keep saying that we use Sentential sentences to symbolise English sentences. Many
other textbooks talk about translating English sentences into Sentential. But a good
translation should preserve certain facets of meaning, and – as I have just pointed
out – Sentential just cannot do that. This is why I shall speak of symbolising English
sentences, rather than of translating them.
This affects how we should understand our symbolisation keys. Consider a key like:

𝐹 : Jon is elegant.
𝑄: Jon is quick.

Other textbooks will understand this as a stipulation that the Sentential sentence ‘𝐹 ’
should mean that Jon is elegant, and that the Sentential sentence ‘𝑄’ should mean that
Jon is quick. But Sentential just is totally unequipped to deal with meaning. The pre‐
ceding symbolisation key is doing no more nor less than stipulating that the Sentential
sentence ‘𝐹 ’ should take the same truth value as the English sentence ‘Jon is elegant’
(whatever that might be), and that the Sentential sentence ‘𝑄’ should take the same
truth value as the English sentence ‘Jon is quick’ (whatever that might be).

When we treat an atomic Sentential sentence as symbolising an English


sentence, we are stipulating that the Sentential sentence is to take the
same truth value as that English sentence.
When we treat a compound Sentential sentence as symbolising an Eng‐
lish sentence, we are claiming that they share a truth‐functional struc‐
ture, and that the atomic sentences of the Sentential sentence symbol‐
ise those sentences which play a corresponding role in the structure
of the English sentence.

8.5 Non‐Truth‐Functional Connectives


In plenty of languages there are connectives that are not truth‐functional. In English,
for example, we can form a new sentence from any simpler sentence by prefixing it
with ‘It is necessarily the case that …’. The truth value of this new sentence is not fixed
solely by the truth value of the original sentence. For consider two true sentences:

1. 2 + 2 = 4.
2. Shostakovich wrote fifteen string quartets.

Whereas it is necessarily the case that 2 + 2 = 4,1 it is not necessarily the case that
Shostakovich wrote fifteen string quartets. If Shostakovich had died earlier, he would
have failed to finish Quartet no. 15; if he had lived longer, he might have written a few
more. So ‘It is necessarily the case that …’ is a connective of English, but it is not truth‐
functional. Another example: ‘one hundred years ago’. Both ‘many people have cars’
and ‘many people have children’ are true. But while ‘One hundred years ago, many
people had children’ is true, ‘One hundred years ago, many people had cars’ is false.
In these cases, we had the same input truth values, but different output. We can turn
this into a test for truth‐functionality: if for some one‐place connective ‘#’, you can
find sentences 𝒜 and ℬ such that (i) 𝒜 has the same truth value as ℬ, while (ii) #𝒜
has a different truth value from #ℬ, then ‘#’ is not a truth‐functional connective. The
truth value of a compound formed with ‘#’ is obviously not fixed by the truth value of its immediate subsentence. The
test can be generalised in the obvious way to binary connectives, etc: can you find a
connective such that when you feed it the same truth values as input, you get different
results? If so, that result is not a function of the input truth values.

8.6 Indicative Versus Subjunctive Conditionals


I want to bring home the point that Sentential only deals with truth functions by consid‐
ering the case of the conditional. When I introduced the schematic truth table for the
material conditional in §8.3, I did not say much to justify it. Let me now offer two justi‐
fications. These are not arguments for the truth function associated with ‘→’, which is

1 Given that the English numeral ‘2’ names the number two, and the numeral ‘4’ names the number four,
and ‘+’ names addition, then it must be that the result of adding two to itself is four. This is not to say
that ‘2’ had to be used in the way we actually use it – if ‘2’ had named the number three, the sentence
would have been false. But in its actual use, it is a necessary truth.

a perfectly legitimate rule for associating input truth values with outputs. What needs
justification is our idea that we should use this truth function when we symbolise English
sentences whose main connective is ‘if … then …’. The two justifications attempt to show
that our usage of the English conditional ‘if’ is in line with the way ‘→’ behaves.

Edgington’s Argument The first follows a line of argument due to Dorothy Edging‐
ton.2 Suppose that Lara has drawn some shapes on a piece of paper, and coloured
some of them in. I have not seen them, but I claim:

If any shape is grey, then that shape is also circular.

As it happens, Lara has drawn the following:

[Figure: shape A, a grey circle; shape C, a circle that is not grey; shape D, a shape that is neither grey nor circular]

In this case, my claim is surely true. Shapes C and D are not grey, and so can hardly
present counterexamples to my claim. Shape A is grey, but fortunately it is also circular.
So my claim has no counterexamples. It must be true. And that means that each of
the following instances of my claim must be true too:

› If A is grey, then it is circular (true antecedent, true consequent)

› If C is grey, then it is circular (false antecedent, true consequent)

› If D is grey, then it is circular (false antecedent, false consequent)

However, if Lara had drawn a fourth shape, thus:

[Figure: the same three shapes, plus shape B, which is grey but not circular]

then my claim would have been false. So it must be that this claim is false:

› If B is grey, then it is circular (true antecedent, false consequent)

Now, recall that every connective of Sentential has to be truth‐functional. This means
that the mere truth value of the antecedent and consequent must uniquely determine
the truth value of the conditional as a whole. Thus, from the truth values of our four
claims – which provide us with all possible combinations of truth and falsity in ante‐
cedent and consequent – we can read off the truth table for the material conditional.

2 Dorothy Edgington (2020) ‘Indicative Conditionals’, in Edward N. Zalta, ed., The Stanford Encyclopedia
of Philosophy, https://plato.stanford.edu/entries/conditionals/.

The or‐to‐if argument A second justification for symbolising ‘if’ as ‘→’ is this. We
know already that if 𝒜 is true and 𝒞 is false, then ‘if 𝒜 then 𝒞 ’ will be false. What should
we say about the other rows of the truth table, the rows on which either 𝒜 is false, or
𝒞 is true (or both)? On these lines the disjunction ‘Not‐𝒜 or 𝒞 ’ is true. If a disjunction
is true, and its first disjunct isn’t, then the second disjunct has to be true. In this case,
the first disjunct (‘Not‐𝒜 ’) isn’t true just in case 𝒜 is true; so it turns out that if 𝒜
obtains, then so does 𝒞 . So the truth of the disjunction on those three lines leads us
to conclude that the conditional ‘if 𝒜 , then 𝒞 ’ should also be true on those three lines.
Thus we obtain the truth table we have associated with ‘→’ as the best truth table to
use for symbolising ‘if’.
What these two arguments show is that ‘→’ is the only candidate for a truth‐functional
conditional. Otherwise put, it is the best conditional that Sentential can provide. But is
it any good, as a surrogate for the conditionals we use in everyday language? Should
we think that ‘if’ is a truth functional connective? Consider two sentences:

65. If Mitt Romney had won the 2012 election, then he would have been the 45th
President of the USA.
66. If Mitt Romney had won the 2012 election, then he would have turned into a
helium‐filled balloon and floated away into the night sky.

Sentence 65 is true; sentence 66 is false. But both have false antecedents and false
consequents. So the truth value of the whole sentence is not uniquely determined by
the truth value of the parts. This use of ‘if’ fails our test for truth‐functionality. Do not
just blithely assume that you can adequately symbolise an English ‘if …, then …’ with
Sentential’s ‘→’.
The crucial point is that sentences 65 and 66 employ SUBJUNCTIVE conditionals, rather
than INDICATIVE conditionals. Subjunctive conditionals are also sometimes known as
COUNTERFACTUALS. They ask us to imagine something contrary to what we are assum‐
ing as fact – that Mitt Romney lost the 2012 election – and then ask us to evaluate what
would have happened in that case. The classic illustration of the difference between
the indicative and subjunctive conditional comes from pairs like these:

67. If a dingo didn’t take Azaria Chamberlain, something else did.


68. If a dingo hadn’t taken Azaria Chamberlain, something else would have.

The indicative conditional in 67 is true, given the actual historical fact that she was
taken, and given that we are not assuming at this point anything about how she was
taken. But is the subjunctive in 68 also true? It seems not. She was not destined to
be taken by something or other, and if the dingo hadn’t intervened, she wouldn’t have
disappeared at all.3

3 If we are assuming as fact that a dingo took her, then when we consider what would have happened
had the dingo not been involved, we imagine a situation in which all the actual consequences of the
dingo’s action are removed.

The point to take away from this is that subjunctive conditionals cannot be tackled
using ‘→’. This is not to say that they cannot be tackled by any formal logical language,
only that Sentential is not up to the job.4
So the ‘→’ connective of Sentential is at best able to model the indicative conditional
of English, as in 67. In fact there remain difficulties even with indicatives in Senten‐
tial. One family of difficulties arises from consideration of the or‐to‐if argument. The
argument seems compelling in cases like this:

69. Either the butler or the gardener did it;


So: If it wasn’t the butler, it was the gardener.

But what if our confidence in the premise 69 derives from our confidence in just one
disjunct? Suppose that we are certain it was the butler, and certain that the gardener
has an airtight alibi and wasn’t anywhere near the manor at the time of the murder.
Because we are certain it was the butler, we might be equally certain of 69, that it was
either the butler or the gardener (this inference from a disjunct to a disjunction seems
odd, but it is surely valid). But we might also be sure that if it wasn’t the butler, it was
the valet – he was the only other person with motive. In this sort of case, we might have
confidence in the premise 69 of this or‐to‐if argument and reject its conclusion. Yet
the Sentential analogue of the or‐to‐if argument is valid: as we’ll see after we introduce
the concept of validity for Sentential in §11, ‘𝐴 ∨ 𝐵 ∴ ¬𝐴 → 𝐵’ turns out to be valid.
This mismatch suggests that ‘if’ and ‘→’ aren’t a perfect match. I shall say a little more
about other difficulties for the material conditional analysis of indicatives in §11.5 and
in §30.1.
For now, I shall content myself with the observation that ‘→’ is the only plausible can‐
didate for a truth‐functional conditional. Our working hypothesis is that many uses
of ‘if’ can be adequately approximated by ‘→’. Many English conditionals cannot be rep‐
resented adequately using ‘→’. Sentential is an intrinsically limited language. But this
is only a problem if you try to use it to do things it wasn’t designed to do.

Key Ideas in §8
› The connectives of Sentential are all truth‐functional, and have
their meanings specified by the truth‐tables laid out in §8.3.
› When we treat a sentence of Sentential as symbolising an English
sentence, we need only say that as far as truth value is concerned
and truth‐functional structure is concerned, they are alike.
› English has many non‐truth‐functional connectives. Some uses
of the conditional ‘if’ are non‐truth‐functional. But as long as
we remain aware of the limitations of Sentential, it can be a very
powerful tool for modelling a significant class of arguments.

4 There are in fact logical treatments of counterfactuals, the most influential of which is David Lewis
(1973) Counterfactuals, Blackwell.

Practice exercises
A. Which of the following arguably fail to characterise a function, where 𝑥 is the input
and 𝑦 is the output?

1. 𝑦 is the product of 𝑥 and itself;


2. 𝑦 is the square root of 𝑥 ;
3. 𝑦 is a child of 𝑥 ;
4. 𝑦 is a child of 𝑥 and younger than any other child of 𝑥 ;
5. 𝑦 is taller than 𝑥 ;
6. 𝑦 is as tall as 𝑥 .

B. True or false: if the main connective of some sentence is truth‐functional, then the
truth value of the sentence uniquely determines the truth values of any constituents.
C. Suppose † is some English one‐place connective, so that ‘†𝒜 ’ is a grammatical sen‐
tence. How can we test if it is not truth‐functional?
9
Complete Truth Tables

9.1 Valuations
So far, we have considered assigning truth values to Sentential sentences indirectly. We
have said, for example, that a Sentential sentence such as ‘𝐵’ is to take the same truth
value as the English sentence ‘Big Ben is in London’ (whatever that truth value may
be). But we can also assign truth values directly. We can simply stipulate that ‘𝐵’ is to
be true, or stipulate that it is to be false – at least for present purposes.

A VALUATION is any assignment of truth values to some atomic sen‐


tences of Sentential. It assigns exactly one truth value, either True or
False, to each of the sentences in question.

A valuation is thus a function from atomic sentences to truth values. So this is a valu‐
ation:

𝐴, 𝐺, 𝑃, 𝐺₇ ↦ T
𝐹, 𝑅, 𝑍 ↦ F.

There is no requirement that a valuation be a TOTAL FUNCTION, that is, that it assign
a truth value to every atomic sentence. To fix the truth value of a sentence 𝒜 of
Sentential, a valuation must assign a truth value to every atomic sentence 𝒜 contains.
A valuation is a temporary assignment of ‘meanings’ to Sentential sentences, in much
the same way as a symbolisation key might be. (It only assigns truth values, the only di‐
mension of meaning that Sentential is sensitive to.) What is distinctive about Sentential
is that almost all of its basic vocabulary – the atomic sentences – only get their mean‐
ings in this temporary fashion. The only parts of Sentential that get their meanings
permanently are the connectives, which always have a fixed interpretation.
This is rather unlike English, where most words have their meanings on a permanent
basis. But there are some words in English – like pronouns (‘he’, ‘she’, ‘it’) and demon‐
stratives (‘this’, ‘that’) – that get their meaning assigned temporarily, and then can


be reused with a different meaning in another context. Such expressions are called
CONTEXT SENSITIVE. In this sense, all the atomic sentences of Sentential are context
sensitive expressions. Of course we don’t have anything so explicit and deliberate as a
valuation or a symbolisation key in English to assign a meaning to a particular use of
‘this’ or ‘that’ – the circumstances of a conversation automatically assign an appropriate
object (usually). In Sentential, however, we need to explicitly set out the interpretations
of the atomic sentences we are concerned with.
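Since a valuation is just a (possibly partial) function from atomic sentences to truth values, it can be modelled directly in an ordinary programming language. The following Python sketch is purely illustrative: the dict models the valuation displayed above, True and False stand in for the truth values, and the function name is my own.

```python
# A valuation, modelled as a Python dict: a (possibly partial) function
# from atomic sentence letters to truth values.
valuation = {"A": True, "G": True, "P": True,
             "F": False, "R": False, "Z": False}

def value_of_conjunction(left_atom, right_atom, v):
    """Truth value of 'left_atom ∧ right_atom' under valuation v,
    or None if v leaves either conjunct unvalued."""
    if left_atom not in v or right_atom not in v:
        return None  # the valuation is silent on this sentence
    return v[left_atom] and v[right_atom]

print(value_of_conjunction("A", "G", valuation))  # True
print(value_of_conjunction("A", "Z", valuation))  # False
print(value_of_conjunction("A", "B", valuation))  # None: 'B' is unvalued
```

The last call illustrates the point in the text: because the valuation assigns nothing to ‘B’, it fails to fix a truth value for any compound containing ‘B’.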

9.2 Truth Tables


We introduced schematic truth tables in §8.3. These showed what truth value a com‐
pound sentence with a certain structure was determined to have by the truth values
of its subsentences, whatever they might be. We now introduce a closely related idea,
that of a truth table. This shows how a specific compound sentence has its truth value
determined by the truth values of its specific atomic subsentences, across all the pos‐
sible ways that those atomic subsentences might be assigned True and False.
You will no doubt have realised that a way of assigning True and False to atomic sen‐
tences is a valuation. So we can say: a TRUTH TABLE summarises how the truth value of
a compound sentence depends on the possible valuations of its atomic subsentences.
Each row of a truth table represents a possible valuation. The entire complete truth
table represents all possible valuations. And the truth table provides us with a means
to calculate the truth value of complex sentences, on each possible valuation. This is
pretty abstract. So it might be easiest to explain with an example.

9.3 A Worked Example


Consider the sentence ‘(𝐻 ∧ 𝐼) → 𝐻’. There are four possible ways to assign True and
False to the atomic sentences ‘𝐻’ and ‘𝐼 ’: both true, both false, ‘𝐻’ true and ‘𝐼 ’ false, and
‘𝐼 ’ true and ‘𝐻’ false. So there are four possible valuations of these two atomic sentences.
We can lay out these valuations as follows:

𝐻 𝐼 (𝐻 ∧ 𝐼) → 𝐻
T T
T F
F T
F F

To calculate the truth value of the entire sentence ‘(𝐻 ∧ 𝐼) → 𝐻’, we first copy the truth
values for the atomic sentences and write them underneath the letters in the sentence:

𝐻 𝐼   (𝐻 ∧ 𝐼) → 𝐻
T T    T   T      T
T F    T   F      T
F T    F   T      F
F F    F   F      F

Now consider the subsentence ‘(𝐻 ∧ 𝐼)’. This is a conjunction, (𝒜 ∧ ℬ), with ‘𝐻’ as 𝒜
and with ‘𝐼 ’ as ℬ. The schematic truth table for conjunction gives the truth conditions
for any sentence of the form (𝒜 ∧ ℬ), whatever 𝒜 and ℬ might be. It summarises the
point that a conjunction is true iff both conjuncts are true. In this case, our conjuncts
are just ‘𝐻’ and ‘𝐼 ’. They are both true on (and only on) the first row of the truth table.
Accordingly, we can calculate the truth value of the conjunction on all four rows.

       𝒜 ∧ ℬ
𝐻 𝐼   (𝐻 ∧ 𝐼) → 𝐻
T T    T T T      T
T F    T F F      T
F T    F F T      F
F F    F F F      F

Now, the entire sentence that we are dealing with is a conditional, 𝒞 → 𝒟, with ‘(𝐻 ∧ 𝐼)’
as 𝒞 and with ‘𝐻’ as 𝒟. On the second row, for example, ‘(𝐻 ∧ 𝐼)’ is false and ‘𝐻’ is true.
Since a conditional is true when the antecedent is false, we write a ‘T’ in the second
row underneath the conditional symbol. We continue for the other three rows and get
this:

          𝒞    → 𝒟
𝐻 𝐼   (𝐻 ∧ 𝐼) → 𝐻
T T      T     T  T
T F      F     T  T
F T      F     T  F
F F      F     T  F

The conditional is the main logical connective of the sentence. And the column of ‘T’s
underneath the conditional tells us that the sentence ‘(𝐻 ∧ 𝐼) → 𝐻’ is true regardless
of the truth values of ‘𝐻’ and ‘𝐼 ’. They can be true or false in any combination, and the
compound sentence still comes out true. Since we have considered all four possible
assignments of truth and falsity to ‘𝐻’ and ‘𝐼 ’ – since, that is, we have considered all the
different valuations – we can say that ‘(𝐻 ∧ 𝐼) → 𝐻’ is true on every valuation.
In this example, I have not repeated all of the entries in every column in every suc‐
cessive table. When actually writing truth tables on paper, however, it is impractical
to erase whole columns or rewrite the whole table for every step. Although it is more
crowded, the truth table can be written in this way:

𝐻 𝐼   (𝐻 ∧ 𝐼) → 𝐻
T T    T T T  T  T
T F    T F F  T  T
F T    F F T  T  F
F F    F F F  T  F

Most of the columns underneath the sentence are only there for bookkeeping purposes.
The column that matters most is the column underneath the main connective for the
sentence, since this tells you the truth value of the entire sentence. I have emphasised
this, by putting this column in bold. When you work through truth tables yourself, you
should similarly emphasise it (perhaps by drawing a box around the relevant column).
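The hand calculation above can also be checked mechanically. Here is a short illustrative Python sketch (no part of Sentential itself): True and False stand in for the truth values, and ‘implies’ is my own name for the material conditional’s truth function.

```python
from itertools import product

def implies(x, y):
    return (not x) or y  # truth function for the material conditional

# One row per valuation of 'H' and 'I', in the book's order.
for H, I in product([True, False], repeat=2):
    print(H, I, "|", implies(H and I, H))

# The main-connective column is True on every row: '(H ∧ I) → H'
# is true on every valuation.
assert all(implies(H and I, H)
           for H, I in product([True, False], repeat=2))
```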

9.4 Building Complete Truth Tables


A COMPLETE TRUTH TABLE has a row for every possible assignment of True and False to
the relevant atomic sentences. Each row represents a valuation, and a complete truth
table has a row for all the different valuations.
The size of the complete truth table depends on the number of different atomic sen‐
tences in the table. A sentence that contains only one atomic sentence requires only
two rows, as in the schematic truth table for negation. This is true even if the same
letter is repeated many times, as in the sentence ‘((𝐶 ↔ 𝐶) → 𝐶) ∧ ¬(𝐶 → 𝐶)’. The
complete truth table requires only two rows because there are only two possibilities:
‘𝐶 ’ can be true or it can be false. The truth table for this sentence looks like this:

𝐶   ((𝐶 ↔ 𝐶) → 𝐶) ∧ ¬(𝐶 → 𝐶)
T     T T T   T T  F  F  T T T
F     F T F   F F  F  F  F T F

Looking at the column underneath the main connective, we see that the sentence is
false on both rows of the table; i.e., the sentence is false regardless of whether ‘𝐶 ’ is
true or false. It is false on every valuation.
A sentence that contains two atomic sentences requires four rows for a complete truth
table, as in the schematic truth tables, and as in the complete truth table for ‘(𝐻 ∧ 𝐼) →
𝐻’.
A sentence that contains three atomic sentences requires eight rows:

𝑀 𝑁 𝑃   𝑀 ∧ (𝑁 ∨ 𝑃)
T T T    T T  T T T
T T F    T T  T T F
T F T    T T  F T T
T F F    T F  F F F
F T T    F F  T T T
F T F    F F  T T F
F F T    F F  F T T
F F F    F F  F F F

From this table, we know that the sentence ‘𝑀 ∧ (𝑁 ∨ 𝑃)’ can be true or false, depending
on the truth values of ‘𝑀’, ‘𝑁’, and ‘𝑃’.

A complete truth table for a sentence that contains four different atomic sentences
requires 16 rows. Five letters, 32 rows. Six letters, 64 rows. And so on. To be perfectly
general: If a complete truth table has 𝑛 different atomic sentences, then it must have
2ⁿ rows.1
In order to fill in the columns of a complete truth table, begin with the right‐most
atomic sentence and alternate between ‘T’ and ‘F’. In the next column to the left, write
two ‘T’s, write two ‘F’s, and repeat. For the third atomic sentence, write four ‘T’s fol‐
lowed by four ‘F’s. This yields an eight row truth table like the one above. For a 16
row truth table, the next column of atomic sentences should have eight ‘T’s followed
by eight ‘F’s. For a 32 row table, the next column would have 16 ‘T’s followed by 16 ‘F’s.
And so on.
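This row‐building recipe is exactly the order in which Python’s itertools.product enumerates 𝑛‐tuples, which gives a quick way to generate the rows by machine. An illustrative sketch only, not part of the formal machinery:

```python
from itertools import product

n = 3  # number of distinct atomic sentences
rows = list(product("TF", repeat=n))

# 2**n rows, with the right-most letter alternating fastest,
# matching the hand-drawn convention above.
assert len(rows) == 2 ** n
for row in rows:
    print(" ".join(row))
```

For n = 3 this prints the same eight rows, in the same order, as the table for ‘𝑀 ∧ (𝑁 ∨ 𝑃)’.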

Key Ideas in §9
› A valuation of some atomic sentences associates each of them
with exactly one of our truth values; it is like an extremely
stripped down version of a symbolisation key. In there are 𝑛
atomic sentences, there are 2𝑛 valuations of them.
› A truth table lays out the truth values of a particular Sentential
sentence in each of the distinct possible valuations of its con‐
stituent atomic sentences.

Practice exercises
A. How does a schematic truth table differ from a regular truth table? What is a com‐
plete truth table?
B. Offer complete truth tables for each of the following:

1. 𝐴→𝐴
2. 𝐶 → ¬𝐶
3. (𝐴 ↔ 𝐵) ↔ ¬(𝐴 ↔ ¬𝐵)
4. (𝐴 → 𝐵) ∨ (𝐵 → 𝐴)
5. (𝐴 ∧ 𝐵) → (𝐵 ∨ 𝐴)
6. ¬(𝐴 ∨ 𝐵) ↔ (¬𝐴 ∧ ¬𝐵)
7. (𝐴 ∧ 𝐵) ∧ ¬(𝐴 ∧ 𝐵) ∧ 𝐶
8. ((𝐴 ∧ 𝐵) ∧ 𝐶) → 𝐵
9. ¬((𝐶 ∨ 𝐴) ∨ 𝐵)

If you want additional practice, you can construct truth tables for any of the sentences
and arguments in the exercises for Chapter 2.

1 Since the values of atomic sentences are independent of each other, each new atomic sentence 𝒜𝑛+1
we consider is capable of being true or false on every existing valuation on 𝒜1 , …, 𝒜𝑛 , and so there
must be twice as many valuations on 𝒜1 , …, 𝒜𝑛 , 𝒜𝑛+1 as on 𝒜1 , …, 𝒜𝑛 .
10
Semantic Concepts

In §9.1, we introduced the idea of a valuation, and in the remainder of that chapter we
showed how to use a truth table to determine the truth value of any Sentential sentence
on any valuation. In this section, we shall introduce some related ideas, and show how
to use truth tables to test whether or not they apply.

10.1 Logical Truths and Falsehoods


In §3, I explained necessary truth and necessary falsity. Both notions have close but
imperfect surrogates in Sentential. We shall start with a surrogate for necessary truth.

𝒜 is a LOGICAL TRUTH iff it is true on every valuation (among those


valuations on which it has a truth value).

We need the parenthetical clause because of the way we have defined valuations. A
given valuation 𝑣 might only assign truth values to some atomic sentences and not
all. For any sentence 𝒜 which contains an atomic sentence to which 𝑣 doesn’t assign a
truth value, 𝒜 will not have any truth value according to 𝑣. Logical truths in Sentential
are sometimes called TAUTOLOGIES.
We can determine whether a sentence is a logical truth just by using truth tables. If
the sentence is true on every row of a complete truth table, then it is true on every
valuation for its constituent atomic sentences, so it is a logical truth. In the example
of §9, ‘(𝐻 ∧ 𝐼) → 𝐻’ is a logical truth.
This is only, though, a surrogate for necessary truth. There are some necessary truths
that we cannot adequately symbolise in Sentential. An example is ‘2 + 2 = 4’. This
must be true, but if we try to symbolise it in Sentential, the best we can offer is an


atomic sentence, and no atomic sentence is a logical truth.1 Still, if we can adequately
symbolise some English sentence using a Sentential sentence which is a logical truth,
then that English sentence expresses a necessary truth.
We have a similar surrogate for necessary falsity:

𝒜 is a LOGICAL FALSEHOOD iff it is false on every valuation (among


those on which it has a truth value).

We can determine whether a sentence is a logical falsehood just by using truth tables.
If the sentence is false on every row of a complete truth table, then it is false on every
valuation, so it is a logical falsehood. In the example of §9, ‘((𝐶 ↔ 𝐶) → 𝐶) ∧ ¬(𝐶 → 𝐶)’
is a logical falsehood. A logical falsehood is often called a CONTRADICTION.
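Both truth table tests can be automated: model a Sentential sentence as a function from the truth values of its atomic constituents to a truth value, and check every valuation. A rough sketch, assuming Python (the function names are my own, and True/False stand in for the truth values):

```python
from itertools import product

def is_logical_truth(sentence, n):
    """True iff the sentence is true on every valuation of its n atoms."""
    return all(sentence(*row) for row in product([True, False], repeat=n))

def is_logical_falsehood(sentence, n):
    """True iff the sentence is false on every valuation of its n atoms."""
    return not any(sentence(*row) for row in product([True, False], repeat=n))

# '(H ∧ I) → H' is a logical truth:
assert is_logical_truth(lambda h, i: (not (h and i)) or h, 2)

# '((C ↔ C) → C) ∧ ¬(C → C)' is a logical falsehood:
assert is_logical_falsehood(
    lambda c: ((not (c == c)) or c) and (not ((not c) or c)), 1)
```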

10.2 Logical Equivalence


Here is a similar, useful notion:

𝒜 and ℬ are LOGICALLY EQUIVALENT iff they have the same truth value
on every valuation among those which assign both of them a truth
value.

It is easy to test for logical equivalence using truth tables. Consider the sentences
‘¬(𝑃 ∨ 𝑄)’ and ‘¬𝑃 ∧ ¬𝑄’. Are they logically equivalent? To find out, we may construct
a truth table.

𝑃 𝑄   ¬ (𝑃 ∨ 𝑄)   ¬𝑃 ∧ ¬𝑄
T T   F  T T T    F T F F T
T F   F  T T F    F T F T F
F T   F  F T T    T F F F T
F F   T  F F F    T F T T F

Look at the columns for the main connectives; negation for the first sentence, conjunc‐
tion for the second. On the first three rows, both are false. On the final row, both are
true. Since they match on every row, the two sentences are logically equivalent.
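Computationally, the test just performed amounts to checking agreement on every valuation. An illustrative Python sketch (the function name is my own; lambdas stand in for Sentential sentences):

```python
from itertools import product

def logically_equivalent(s1, s2, n):
    """True iff s1 and s2 take the same truth value on every
    valuation of their n atomic constituents."""
    return all(s1(*row) == s2(*row)
               for row in product([True, False], repeat=n))

# De Morgan: '¬(P ∨ Q)' is logically equivalent to '¬P ∧ ¬Q'.
assert logically_equivalent(lambda p, q: not (p or q),
                            lambda p, q: (not p) and (not q), 2)
```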

1 At the risk of repeating myself: 2 + 2 = 4 is necessarily true, but it is not necessarily true in virtue of
its structure. A necessary truth is true, with its actual meaning, in every possible situation. A Sentential‐
logical truth is true in the actual situation on every possible way of interpreting its atomic sentences.
These are interestingly different notions.

10.3 More Parenthetical Conventions


Consider these two sentences:

((𝐴 ∧ 𝐵) ∧ 𝐶)
(𝐴 ∧ (𝐵 ∧ 𝐶))

These have the same truth table, and are logically equivalent. Consequently, it will
never make any difference from the perspective of truth value – which is all that Sen‐
tential cares about (see §8) – which of the two sentences we assert (or deny). And since
the order of the parentheses does not matter, I shall allow us to drop them. In short,
we can save some ink and some eyestrain by writing:

𝐴∧𝐵∧𝐶

The general point is that, if we just have a long list of conjunctions, we can drop the
inner parentheses. (I already allowed us to drop outermost parentheses in §6.) The
same observation holds for disjunctions. Since the following sentences are logically
equivalent:

((𝐴 ∨ 𝐵) ∨ 𝐶)
(𝐴 ∨ (𝐵 ∨ 𝐶))

we can simply write:

𝐴∨𝐵∨𝐶

And generally, if we just have a long list of disjunctions, we can drop the inner par‐
entheses. But be careful. These two sentences have different truth tables, so are not
logically equivalent:

((𝐴 → 𝐵) → 𝐶)
(𝐴 → (𝐵 → 𝐶))

So if we were to write:

𝐴→𝐵→𝐶

it would be dangerously ambiguous. So we must not do the same with conditionals.


Equally, these sentences have different truth tables:

((𝐴 ∨ 𝐵) ∧ 𝐶)
(𝐴 ∨ (𝐵 ∧ 𝐶))

So if we were to write:

𝐴∨𝐵∧𝐶

it would be dangerously ambiguous. Never write this. The moral is: you can drop
parentheses when dealing with a long list of conjunctions, or when dealing with a
long list of disjunctions. But that’s it.
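The equivalences and non‐equivalences behind these conventions can be confirmed by checking all eight valuations mechanically. A quick Python sketch (purely illustrative; no part of Sentential itself, and the helper names are my own):

```python
from itertools import product

def same_truth_table(s1, s2, n):
    return all(s1(*r) == s2(*r) for r in product([True, False], repeat=n))

imp = lambda x, y: (not x) or y  # material conditional

# Conjunction (and disjunction) regroup freely:
assert same_truth_table(lambda a, b, c: (a and b) and c,
                        lambda a, b, c: a and (b and c), 3)
assert same_truth_table(lambda a, b, c: (a or b) or c,
                        lambda a, b, c: a or (b or c), 3)

# But the conditional does not, so 'A → B → C' would be ambiguous:
assert not same_truth_table(lambda a, b, c: imp(imp(a, b), c),
                            lambda a, b, c: imp(a, imp(b, c)), 3)

# Nor do mixed '∨'/'∧', so 'A ∨ B ∧ C' is also banned:
assert not same_truth_table(lambda a, b, c: (a or b) and c,
                            lambda a, b, c: a or (b and c), 3)
```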

10.4 Consistency
In §3, I said that sentences are jointly consistent iff it is possible for all of them to be
true at once. We can offer a surrogate for this notion too:

𝒜1 , 𝒜2 , …, 𝒜𝑛 are JOINTLY CONSISTENT iff there is some valuation


which makes them all true.

Derivatively, sentences are JOINTLY INCONSISTENT if there is no valuation that makes


them all true. Note that this notion applies to a single sentence as well: a sentence is
consistent iff it is true on some valuation.
Again, it is easy to test for joint consistency using truth tables. Draw up a truth
table for all the sentences together: if there is some row on which each of them gets a
‘T’, then they are jointly consistent.
So, for example, consider these sentences: ¬𝑃, 𝑃 → 𝑄, 𝑄:

𝑃 𝑄   ¬𝑃   𝑃 → 𝑄   𝑄
T T   F T   T T T   T
T F   F T   T F F   F
F T   T F   F T T   T
F F   T F   F T F   F

We can see that on the third row (the valuation which assigns F to ‘𝑃’ and T to ‘𝑄’),
each of the sentences is true. So these sentences are jointly consistent.
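The truth table test for joint consistency is easy to mechanise. An illustrative Python sketch (the function name is my own; lambdas stand in for Sentential sentences):

```python
from itertools import product

def jointly_consistent(sentences, n):
    """True iff some valuation of the n atoms makes every sentence true."""
    return any(all(s(*row) for s in sentences)
               for row in product([True, False], repeat=n))

# '¬P', 'P → Q' and 'Q' are jointly consistent: the valuation
# P = False, Q = True makes all three true.
assert jointly_consistent(
    [lambda p, q: not p,
     lambda p, q: (not p) or q,
     lambda p, q: q], 2)
```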

Key Ideas in §10


› A logical truth is a sentence true on every valuation of its atomic
constituents; a logical falsehood is true on no valuation of its
atomic constituents.
› A sentence (collection of sentences) is consistent iff there is a
valuation on which it is (they are all) true.
› Two sentences are logically equivalent iff they are true on exactly
the same valuations. We can use the notion of logical equival‐
ence to motivate further parenthetical conventions.

Practice exercises
A. Check all the claims made in introducing the new notational conventions in §10.3,
i.e., show that:

1. ‘((𝐴 ∧ 𝐵) ∧ 𝐶)’ and ‘(𝐴 ∧ (𝐵 ∧ 𝐶))’ have the same truth table
2. ‘((𝐴 ∨ 𝐵) ∨ 𝐶)’ and ‘(𝐴 ∨ (𝐵 ∨ 𝐶))’ have the same truth table
3. ‘((𝐴 ∨ 𝐵) ∧ 𝐶)’ and ‘(𝐴 ∨ (𝐵 ∧ 𝐶))’ do not have the same truth table
4. ‘((𝐴 → 𝐵) → 𝐶)’ and ‘(𝐴 → (𝐵 → 𝐶))’ do not have the same truth table

Also, check whether:

5. ‘((𝐴 ↔ 𝐵) ↔ 𝐶)’ and ‘(𝐴 ↔ (𝐵 ↔ 𝐶))’ have the same truth table.

B. What is the difference between a logical truth and a logical falsehood? Are there
any other kinds of sentence in Sentential?
Revisit your answers to exercise §9B (page 78). Determine which sentences were logical
truths, which were logical falsehoods, and which, if any, were neither logical truths nor
logical falsehoods.
C. What does it mean to say that two sentences of Sentential are logically equivalent?
Use truth tables to decide if the following pairs of sentences are logically equivalent:

1. ¬(𝑃 ∧ 𝑄), (¬𝑃 ∨ ¬𝑄);


2. (𝑃 → 𝑄), ¬(𝑄 → 𝑃);
3. ¬(𝑃 ↔ 𝑄), ((𝑃 ∨ 𝑄) ∧ ¬(𝑃 ∧ 𝑄)).

D. What does it mean to say that some sentences of Sentential are jointly inconsistent?
Use truth tables to determine whether these sentences are jointly consistent, or jointly
inconsistent:

1. 𝐴 → 𝐴, ¬𝐴 → ¬𝐴, 𝐴 ∧ 𝐴, 𝐴 ∨ 𝐴
2. 𝐴 ∨ 𝐵, 𝐴 → 𝐶 , 𝐵 → 𝐶
3. 𝐵 ∧ (𝐶 ∨ 𝐴), 𝐴 → 𝐵, ¬(𝐵 ∨ 𝐶)
4. 𝐴 ↔ (𝐵 ∨ 𝐶), 𝐶 → ¬𝐴, 𝐴 → ¬𝐵
11
Entailment and Validity

11.1 Entailment
The following idea is related to joint consistency, but is of great interest in its own right:

The sentences 𝒜1 , 𝒜2 , …, 𝒜𝑛 ENTAIL the sentence 𝒞 if there is no valu‐


ation of the atomic sentences which makes all of 𝒜1 , 𝒜2 , …, 𝒜𝑛 true
and 𝒞 false.

(Why is this not a biconditional? The full answer will have to wait until §23.)
Again, it is easy to test this with a truth table. To check whether ‘¬𝐿 → (𝐽 ∨ 𝐿)’ and
‘¬𝐿’ entail ‘𝐽’, we simply need to check whether there is any valuation which makes
both ‘¬𝐿 → (𝐽 ∨ 𝐿)’ and ‘¬𝐿’ true whilst making ‘𝐽’ false. So we use a truth table:

𝐽 𝐿   ¬ 𝐿 → (𝐽 ∨ 𝐿)   ¬𝐿    𝐽
T T   F T  T  T T T   F T   T
T F   T F  T  T T F   T F   T
F T   F T  T  F T T   F T   F
F F   T F  F  F F F   T F   F

The only row on which both ‘¬𝐿 → (𝐽 ∨ 𝐿)’ and ‘¬𝐿’ are true is the second row, and that
is a row on which ‘𝐽’ is also true. So ‘¬𝐿 → (𝐽 ∨ 𝐿)’ and ‘¬𝐿’ entail ‘𝐽’.
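The entailment test, too, can be run by machine: check that no valuation makes all the premises true while making the conclusion false. A rough Python sketch (the names are my own; lambdas stand in for Sentential sentences):

```python
from itertools import product

def entails(premises, conclusion, n):
    """True iff no valuation of the n atoms makes every premise
    true and the conclusion false."""
    return all(conclusion(*row)
               for row in product([True, False], repeat=n)
               if all(p(*row) for p in premises))

imp = lambda x, y: (not x) or y  # material conditional

# '¬L → (J ∨ L)' and '¬L' entail 'J':
assert entails([lambda j, l: imp(not l, j or l),
                lambda j, l: not l],
               lambda j, l: j, 2)
```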

11.2 The Double Turnstile


We are going to use the notion of entailment rather a lot in this book. It will help us,
then, to introduce a symbol that abbreviates it. Rather than saying that the Sentential
sentences 𝒜1 , 𝒜2 , … and 𝒜𝑛 together entail 𝒞 , we shall abbreviate this by:

𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊨ 𝒞.


The symbol ‘⊨’ is known as the DOUBLE TURNSTILE, since it looks like a turnstile with
two horizontal beams.
But let me be clear. ‘⊨’ is not a symbol of Sentential. Rather, it is a symbol of our
metalanguage, augmented English (recall the difference between object language and
metalanguage from §7). So the metalanguage sentence:

70. 𝑃, 𝑃 → 𝑄 ⊨ 𝑄

is just an abbreviation for the English sentence:

71. The Sentential sentences ‘𝑃’ and ‘𝑃 → 𝑄’ entail ‘𝑄’.

Note that there are no constraints on the number of Sentential sentences that can be
mentioned before the symbol ‘⊨’. Indeed, one limiting case is of special interest:

72. ⊨ 𝒞 .

72 is false if there is a valuation which makes all the sentences appearing on the left
hand side of ‘⊨’ true and makes 𝒞 false. Since no sentences appear on the left side of
‘⊨’ in 72, it is trivial to make ‘them’ all true. So it is false if there is a valuation which
makes 𝒞 false – and so 72 is true iff every valuation makes 𝒞 true. Otherwise put, 72
says that 𝒞 is a logical truth. Equally:

73. 𝒜1 , …, 𝒜𝑛 ⊨

says that 𝒜1 , …, 𝒜𝑛 are jointly inconsistent. It follows that

74. 𝒜 ⊨

says that 𝒜 is individually inconsistent. That is to say, 𝒜 is a logical falsehood.1


Here is the important connection between inconsistency and entailment, expressed
using this new notation.

𝒜1 , …, 𝒜𝑛 ⊨ 𝒞 iff 𝒜1 , …, 𝒜𝑛 , ¬𝒞 ⊨.

If every valuation which makes each of 𝒜1 , …, 𝒜𝑛 true also makes 𝒞 true, then every
such valuation makes ¬𝒞 false. So there can be no valuation which makes each of
𝒜1 , …, 𝒜𝑛 , ¬𝒞 true, and those sentences are jointly inconsistent. Conversely, if
𝒜1 , …, 𝒜𝑛 , ¬𝒞 are jointly inconsistent, then any valuation which makes each of
𝒜1 , …, 𝒜𝑛 true must make ¬𝒞 false, and hence must make 𝒞 true.
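For a concrete instance of this biconditional, a short Python sketch (purely illustrative, with 𝒜 = ‘𝑃’ and 𝒞 = ‘𝑃 ∨ 𝑄’) can confirm both sides at once:

```python
from itertools import product

# Left side: 'P ⊨ P ∨ Q' — every valuation making 'P' true makes 'P ∨ Q' true.
entailment_holds = all(
    (P or Q) for P, Q in product([True, False], repeat=2) if P
)

# Right side: 'P, ¬(P ∨ Q) ⊨' — no valuation makes both true (joint inconsistency).
jointly_satisfiable = any(
    P and not (P or Q) for P, Q in product([True, False], repeat=2)
)

print(entailment_holds, jointly_satisfiable)   # → True False
```

The entailment holds, and correspondingly the premise together with the negated conclusion has no satisfying valuation.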

1 If you find it difficult to see why ‘⊨ 𝒞 ’ should say that 𝒞 is a logical truth, you should just take 72 as
an abbreviation for that claim. Likewise you should take ‘𝒜 ⊨’ as abbreviating the claim that 𝒜 is a
logical falsehood.

11.3 ‘⊨’ versus ‘→’


I now want to compare and contrast ‘⊨’ and ‘→’.
Observe: 𝒜 ⊨ 𝒞 iff there is no valuation of the atomic sentences that makes 𝒜 true
and 𝒞 false.
Observe: 𝒜 → 𝒞 is a logical truth iff there is no valuation of the atomic sentences that
makes 𝒜 → 𝒞 false. Since a conditional is true except when its antecedent is true and
its consequent false, 𝒜 → 𝒞 is a logical truth iff there is no valuation that makes 𝒜
true and 𝒞 false.
Combining these two observations, we see that 𝒜 → 𝒞 is a logical truth iff 𝒜 ⊨ 𝒞 .2
But there is a really important difference between ‘⊨’ and ‘→’:

‘→’ is a sentential connective of Sentential.


‘⊨’ is a symbol of augmented English.

When ‘→’ is flanked with two Sentential sentences, the result is a longer Sentential sen‐
tence. By contrast, when we use ‘⊨’, we form a metalinguistic sentence that mentions
the surrounding Sentential sentences.
If 𝒜 → 𝒞 is a logical truth, then 𝒜 ⊨ 𝒞 . But 𝒜 → 𝒞 can be true on a valuation without
being a logical truth, and so can be true on a valuation even when 𝒜 doesn’t entail
𝒞 . Sometimes people are inclined to confuse entailment and conditionals, perhaps
because they are tempted by the thought that we can only establish the truth of a
conditional by logically deriving the consequent from the antecedent. But while this is
the way to establish the truth of a logically true conditional, most conditionals posit a
weaker relation between antecedent and consequent than that – for example, a causal
or statistical relationship might be enough to justify the truth of the conditional ‘If you
smoke, then you’ll lower your life expectancy’.
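To see an instance of this correspondence checked mechanically, here is a small Python sketch (the helper `tautology` is hypothetical, not from the text): since ‘𝑃 ∧ 𝑄’ entails ‘𝑃’, the conditional ‘(𝑃 ∧ 𝑄) → 𝑃’ should come out true on every valuation.

```python
from itertools import product

def tautology(sentence, atoms):
    """True iff the sentence is true on every valuation of its atomic sentences."""
    return all(sentence(dict(zip(atoms, values)))
               for values in product([True, False], repeat=len(atoms)))

# 'P ∧ Q ⊨ P' holds, so '(P ∧ Q) → P' should be a logical truth:
conditional = lambda v: (not (v['P'] and v['Q'])) or v['P']
print(tautology(conditional, ['P', 'Q']))   # → True
```

By contrast, a bare atomic sentence like ‘𝑃’ is not a tautology, since the valuation making ‘𝑃’ false falsifies it.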

11.4 Entailment and Validity


We now make an important observation:

If 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊨ 𝒞 , then 𝒜1 , 𝒜2 , …, 𝒜𝑛 ∴ 𝒞 is valid.

Here’s why. If 𝒜1 , 𝒜2 , …, 𝒜𝑛 entail 𝒞 , then there is no valuation which makes all of


𝒜1 , 𝒜2 , …, 𝒜𝑛 true whilst making 𝒞 false. It is thus not possible – given the actual
meanings of the connectives of Sentential – for the sentences 𝒜1 , 𝒜2 , …, 𝒜𝑛 to jointly
be true without 𝒞 being true too, so the argument 𝒜1 , 𝒜2 , …, 𝒜𝑛 ∴ 𝒞 is conclusive.
Furthermore, because the conclusiveness of this argument doesn’t depend on anything
other than the structure of the sentences in the argument, it is also valid.

2 This result sometimes goes under a fancy title: the Deduction Theorem for Sentential. It is easy to see that
this more general result follows from the Deduction Theorem: ℬ1 , …, ℬ𝑛 , 𝒜 ⊨ 𝒞 iff ℬ1 , …, ℬ𝑛 ⊨ 𝒜 → 𝒞 .

The only conclusive arguments in Sentential are valid ones. Because we consider every
valuation, the conclusiveness of an argument cannot turn on the particular truth values
assigned to the atomic sentences. For any collection of atomic sentences, there is a
valuation corresponding to each way of assigning them truth values. This means that
we treat the atomic sentences as all independent of one another. So there can be no
connection in meaning between sentences of Sentential except in virtue of those
sentences having shared constituents and the right structure.
In short, we have a way to test for the validity of some English arguments. First, we sym‐
bolise them in Sentential, as having premises 𝒜1 , 𝒜2 , …, 𝒜𝑛 , and conclusion 𝒞 . Then we
test for entailment using truth tables. If there is an entailment, then we can conclude
that the argument we symbolised has the right kind of structure to count as valid.
For example, suppose we consider this argument:

Jim studied hard, so he didn’t act in a lot of plays. For he can’t study hard
while acting in lots of plays.

The argument can be paraphrased as follows, a form we saw in §4.1:

It’s not the case that Jim both studied hard and acted in lots of plays.
Jim studied hard
So: Jim did not act in lots of plays.

We offer this symbolisation key:

𝑆: Jim studied hard;
𝐴: Jim acted in lots of plays.

Using this symbolisation, the argument has the following form:


¬(𝑆 ∧ 𝐴), 𝑆 ∴ ¬𝐴.
We draw up a truth table:

𝐴 𝑆 ¬(𝑆 ∧ 𝐴) 𝑆 ¬𝐴
T T F T F
T F T F F
F T T T T
F F T F T

The only valuation on which the premises are both true is represented on the third
row, assigning F to 𝐴 and T to 𝑆. And on this valuation, the conclusion is true. So this
argument is valid, and this statement of entailment is correct: ¬(𝑆 ∧ 𝐴), 𝑆 ⊨ ¬𝐴.
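The table above can also be generated mechanically. The following Python sketch (illustrative only) builds each row and flags any bad row – one where both premises are true but the conclusion is false; since no row is flagged, the argument is valid:

```python
from itertools import product

# Build the truth table for the argument ¬(S ∧ A), S ∴ ¬A.
rows = []
for A, S in product([True, False], repeat=2):
    premises = (not (S and A), S)
    conclusion = not A
    bad = all(premises) and not conclusion
    rows.append((A, S, premises[0], premises[1], conclusion, bad))

for row in rows:
    print(row)
print("valid" if not any(r[-1] for r in rows) else "invalid")   # → valid
```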
Our test uses the precisely defined relation of entailment in Sentential as a test for
validity in natural language. If we have symbolised an argument successfully, then
we have captured its form (or rather, one of its forms). If the symbolised Sentential
argument turns out to be valid, we can conclude that the natural language argument
is valid too, because it can be modelled as having a valid form.

11.5 The Limits of these Tests


We have reached an important milestone: a test for the validity of arguments! But, we
should not get carried away just yet. It is important to understand the limits of our
achievement. There are three sorts of limitations I want to discuss:

1. Some valid arguments are overlooked by our test;

2. Some invalid arguments are misclassified by a naive application of our test to an
inadequate symbolisation; and

3. There are some putative examples of sentences that cannot be symbolised be‐
cause of some assumptions that Sentential makes.

I shall illustrate these limits with three examples.


First, consider the argument:

75. Daisy is a small cow. So, Daisy is a cow.

To symbolise this argument in Sentential, we would have to use two different atomic
sentences – perhaps ‘𝑆’ and ‘𝐶 ’ – for the premise and the conclusion respectively. Now,
it is obvious that ‘𝑆’ does not entail ‘𝐶 ’. But the English argument surely seems valid –
the structure of ‘Daisy is a small cow’ guarantees that ‘Daisy is a cow’ is true. Note that
a small cow might still be rather large, so we cannot fudge things by symbolising ‘Daisy
is a small cow’ as a conjunction of ‘Daisy is small’ and ‘Daisy is a cow’. (We’ll return
to this sort of case in §16, where we will see how to symbolise 75 as a valid argument
in Quantifier.) But our Sentential‐based test for validity in English will have some false
negatives: it will classify some valid English arguments as invalid. This is because some
valid arguments are valid in virtue of structure which is not truth‐functional. We’ll see
more examples of this in §15.
Second, consider the following arguments:

76. It’s not the case that the Crows will win by a lot, if they win. So the Crows will
win.
77. It’s not the case that, if God exists, She answers malevolent prayers. So God
exists.

Both of these arguments have the same structure. Let’s focus on the second, example
77. Symbolising it in Sentential, we would offer something like ‘¬(𝐺 → 𝑀) ∴ 𝐺 ’. Now,
as can easily be checked with a truth table, this is a correct entailment in Sentential.
So if we symbolise the argument 77 in Sentential in this way, the conditional premise
entails that God exists. But that’s strange: surely even the atheist can accept sentence
77, without contradicting herself! Some say that 77 would be better symbolised by
‘𝐺 → ¬𝑀’, even though that doesn’t reflect the apparent form of the English sentence.
‘𝐺 → ¬𝑀’ does not entail 𝐺 . This symbolisation does a better job of reflecting the
intuitive consequences of the English sentence 77, but at the cost of abandoning a
straightforward correspondence between the structure of English sentences and their
Sentential symbolisations.
A better alternative might be to think that the conditional ‘if God exists, she answers
malevolent prayers’ is not to be symbolised by ‘→’. This conditional is false, many think:
they may think, for example, that it is part of the concept of God that God is good, and
hence would not grant prayers with evil intent. Even the atheist might accept this,
maybe because they accept the subjunctive conditional ‘even if God were to exist, God
would not answer malevolent prayers’ (§8.6). This sort of example has motivated many
philosophers to offer nontruth‐functional accounts of the English ‘if’, including some
that make it behave rather like a subjunctive conditional.3
The cases in 76 and 77 are examples of the (so‐called) paradoxes of material implication.
They highlight a limitation of Sentential, in that it appears not to have a conditional con‐
nective that adequately models these uses of the English ‘if’. But it is also a limitation of
our tests for validity in English, because the test is only as good as the symbolisations
we come up with as part of it. If we are not careful, we might end up mistakenly us‐
ing an inadequate symbolisation, and giving the wrong verdict about some argument,
thinking it valid when it is not. (I will return one final time to the relation between ‘if’
and → in §30.1.)
Finally, consider the sentence:

78. Jan is neither bald nor not‐bald.

To symbolise this sentence in Sentential, we would offer something like ‘¬(𝐽 ∨ ¬𝐽)’. This
is a logical falsehood (check this with a truth‐table). But sentence 78 does not itself
seem like a logical falsehood; for we might happily go on to add ‘Jan is on the
borderline of baldness’! To make this point another way: as is easily seen by truth
tables, ‘¬(𝐽 ∨ ¬𝐽)’ is logically equivalent to ‘¬𝐽 ∧ 𝐽’. This latter sentence symbolises an
obvious logical falsehood in English:

79. Jan is both not‐bald and also bald.

Is it so obvious, though, that 78 is synonymous with 79? It seems like it may not be,
even though our test will classify any English argument from one to the other as valid
(since both are symbolised as logical falsehoods, which degenerately entail anything).
Because of the way we have defined valuations, every sentence of Sentential is assigned
either True or False in any valuation which makes it meaningful by assigning truth
values to its atomic sentences. This property of Sentential is known as BIVALENCE: that
every sentence has exactly one of the two possible truth values. The case of Jan’s bald‐
ness (or otherwise) raises the general question of what logic we should use when deal‐
ing with vague discourse, properties like ‘bald’ or ‘tall’ which seem to have borderline
cases. Many think it plausible that a borderline case of F is neither a case of F, nor does
it fail to be a case of F. Hence they have been tempted to deny bivalence for English:
‘Jan is bald’, they say, is neither True nor False! If 𝑝 is neither true nor false, then it is
hardly surprising that ‘𝑝 or not‐𝑝’ turns out to be untrue. If these thinkers are right that
vagueness in English leads to the denial of bivalence, while Sentential is bivalent, this
will give rise to mismatches between English and Sentential. These mismatches will
not involve inadequate symbolisation, but a more fundamental disagreement about
the background framework – here, a disagreement about the nature of truth.4

3 Edgington discusses Stalnaker’s version of such an account in ‘Nearest Possible Worlds’, §4.1 of her
‘Indicative Conditionals’, cited above (http://plato.stanford.edu/entries/conditionals/#Sta).
In different ways, these three examples highlight some of the limits of working with
a language like Sententialthat can only handle truth‐functional connectives. Moreover,
these limits give rise to some interesting questions in philosophical logic. Part of the
purpose of this course is to equip you with the tools to explore these questions of philo‐
sophical logic. But we have to walk before we can run; we have to become proficient
in using Sentential, before we can adequately discuss its limits, and consider alternat‐
ives. It is important to recognise that these are limits to Sentential only in its role as
a framework to model validity in English and other natural languages. They are not
problems for Sentential as a formal language. Moreover, as I have emphasised already,
these limitations are merely manifestations of the fact that Sentential is being used as
a model of natural language. Models are typically not designed or intended to capture
every aspect of what they model. Their utility derives often from being simpler than
the complex things they are representing. The limitations we have noted indicate that
Sentential may not model English perfectly in these cases. But Sentential remains an
adequate model of English in many other cases.

Key Ideas in §11


› If every valuation which makes some sentences all true is also
one that makes some further sentence true, then those sentences
entail the further sentence. We use the symbol ‘⊨’ for entail‐
ment.
› We can test for entailment using truth tables, in the same sort of
way that we test for consistency.
› If an argument when symbolised turns out to be an entail‐
ment, then the original argument is valid in virtue of its truth‐
functional structure. So we can test for validity using the truth
table tests for entailment.
› These tests nevertheless have limitations: not every valid argu‐
ment can be symbolised as a Sentential entailment. These limit‐
ations are typical of using simpler models to represent complex
things.

4 For more on the logic of vagueness, see Roy Sorensen’s 2018 entry ‘Vagueness’ in The Stanford Encyclopedia
of Philosophy (https://plato.stanford.edu/entries/vagueness/). He discusses views that deny bivalence
in §5.

Practice exercises
A. What does it mean to say that sentences 𝒜1 , 𝒜2 , … , 𝒜𝑛 of Sentential entail a further
sentence 𝒞 ?
B. If 𝒜1 , 𝒜2 , … , 𝒜𝑛 ⊨ 𝒞 , what can you say about the argument with premises
𝒜1 , 𝒜2 , … , 𝒜𝑛 and conclusion 𝒞 ?
C. Use truth tables to determine whether each argument is valid or invalid.

1. 𝐴 → 𝐴 ∴ 𝐴
2. 𝐴 → (𝐴 ∧ ¬𝐴) ∴ ¬𝐴
3. 𝐴 ∨ (𝐵 → 𝐴) ∴ ¬𝐴 → ¬𝐵
4. 𝐴 ∨ 𝐵, 𝐵 ∨ 𝐶, ¬𝐴 ∴ 𝐵 ∧ 𝐶
5. (𝐵 ∧ 𝐴) → 𝐶, (𝐶 ∧ 𝐴) → 𝐵 ∴ (𝐶 ∧ 𝐵) → 𝐴

D. Answer each of the questions below and justify your answer.

1. Suppose that 𝒜 and ℬ are logically equivalent. What can you say about 𝒜 ↔ ℬ?
2. Suppose that (𝒜 ∧ ℬ) → 𝒞 is neither a logical truth nor a logical falsehood. What
can you say about whether 𝒜, ℬ ∴ 𝒞 is valid?
3. Suppose that 𝒜 , ℬ and 𝒞 are jointly inconsistent. What can you say about (𝒜 ∧
ℬ ∧ 𝒞)?
4. Suppose that 𝒜 is a logical falsehood. What can you say about whether 𝒜, ℬ ⊨
𝒞?
5. Suppose that 𝒞 is a logical truth. What can you say about whether 𝒜, ℬ ⊨ 𝒞 ?
6. Suppose that 𝒜 and ℬ are logically equivalent. What can you say about (𝒜 ∨ ℬ)?
7. Suppose that 𝒜 and ℬ are not logically equivalent. What can you say about
(𝒜 ∨ ℬ)?

E. If two sentences of Sentential, 𝒜 and 𝒟, are logically equivalent, what can you say
about (𝒜 → 𝒟)? What about the argument 𝒜 ∴ 𝒟?
F. Consider the following principle:

› Suppose 𝒜 and ℬ are logically equivalent. Suppose an argument contains 𝒜
(either as a premise, or as the conclusion). The validity of the argument would
be unaffected, if we replaced 𝒜 with ℬ.

Is this principle correct? Explain your answer.


12
Truth Table Shortcuts

With practice, you will quickly become adept at filling out truth tables. In this section,
I want to give you some permissible shortcuts to help you along the way.

12.1 Working through Truth Tables


You will quickly find that you do not need to copy the truth value of each atomic sen‐
tence, but can simply refer back to them. So you can speed things up by writing:

𝑃 𝑄 (𝑃 ∨ 𝑄) ↔ ¬ 𝑃
T T T FF
T F T FF
F T T TT
F F F FT

You also know for sure that a disjunction is true whenever one of the disjuncts is true.
So if you find a true disjunct, there is no need to work out the truth values of the other
disjuncts. Thus you might offer:

𝑃 𝑄 (¬ 𝑃 ∨ ¬ 𝑄) ∨ ¬ 𝑃
T T F FF FF
T F F TT TF
F T TT
F F TT

Equally, you know for sure that a conjunction is false whenever one of the conjuncts
is false. So if you find a false conjunct, there is no need to work out the truth value of
the other conjunct. Thus you might offer:


𝑃 𝑄 ¬ (𝑃 ∧ ¬ 𝑄) ∧ ¬ 𝑃
T T FF
T F FF
F T T F TT
F F T F TT

A similar short cut is available for conditionals. You immediately know that a condi‐
tional is true if either its consequent is true, or its antecedent is false. Thus you might
present:

𝑃 𝑄 ((𝑃 → 𝑄 ) → 𝑃) → 𝑃
T T T
T F T
F T T F T
F F T F T

So ‘((𝑃 → 𝑄) → 𝑃) → 𝑃’ is a logical truth. In fact, it is an instance of Peirce’s Law,
named after Charles Sanders Peirce.
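Python’s own Boolean operators use exactly this kind of shortcut (‘short‐circuit evaluation’): a disjunction stops as soon as it finds a true disjunct. A small illustrative sketch:

```python
evaluated = []

def atom(name, value):
    # record which atomic sentences actually get evaluated
    evaluated.append(name)
    return value

# 'P ∨ Q' with 'P' true: Python's 'or' never consults 'Q' at all.
result = atom('P', True) or atom('Q', False)
print(result, evaluated)   # → True ['P']
```

Just as in the tables above, once the first disjunct comes out true, the rest of the disjunction is never examined.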

12.2 Testing for Validity and Entailment


When we use truth tables to test for validity or entailment, we are checking for bad
rows: rows where the premises are all true and the conclusion is false. Note:

› Any row where the conclusion is true is not a bad row.

› Any row where some premise is false is not a bad row.

Since all we are doing is looking for bad rows, we should bear this in mind. So: if we
find a row where the conclusion is true, we do not need to evaluate anything else on
that row: that row definitely isn’t bad. Likewise, if we find a row where some premise
is false, we do not need to evaluate anything else on that row.
With this in mind, consider how we might test the following claimed entailment:

¬𝐿 → (𝐽 ∨ 𝐿), ¬𝐿 ⊨ 𝐽.

The first thing we should do is evaluate the conclusion on the right of the turnstile. If
we find that the conclusion is true on some row, then that is not a bad row. So we can
simply ignore the rest of the row. So at our first stage, we are left with something like:

𝐽 𝐿 ¬ 𝐿 → (𝐽 ∨ 𝐿) ¬𝐿 𝐽
T T T
T F T
F T ? ? F
F F ? ? F

where the blanks indicate that we are not going to bother doing any more investiga‐
tion (since the row is not bad) and the question‐marks indicate that we need to keep
investigating.
The easiest premise on the left of the turnstile to evaluate is the second, so we next do
that:

𝐽 𝐿 ¬ 𝐿 → (𝐽 ∨ 𝐿) ¬𝐿 𝐽
T T T
T F T
F T F F
F F ? T F

Note that we no longer need to consider the third row on the table: it will not be a bad
row, because (at least) one of premises is false on that row. And finally, we complete
the truth table:

𝐽 𝐿 ¬ 𝐿 → (𝐽 ∨ 𝐿) ¬𝐿 𝐽
T T T
T F T
F T F F
F F T F F T F

The truth table has no bad rows, so this claimed entailment is genuine. (Any valuation
on which all the premises are true is a valuation on which the conclusion is true.)
It might be worth illustrating the tactic again, this time for validity. Let us check
whether the following argument is valid

𝐴 ∨ 𝐵, ¬(𝐴 ∧ 𝐶), ¬(𝐵 ∧ ¬𝐷) ∴ (¬𝐶 ∨ 𝐷).

So we need to check whether the premises entail the conclusion.


At the first stage, we determine the truth value of the conclusion. Since this is a dis‐
junction, it is true whenever either disjunct is true, so we can speed things along a bit.
We can then ignore every row apart from the few rows where the conclusion is false.

𝐴 𝐵 𝐶 𝐷 𝐴∨𝐵 ¬(𝐴 ∧ 𝐶) ¬(𝐵 ∧ ¬𝐷) (¬ 𝐶 ∨ 𝐷)


T T T T T
T T T F ? ? ? F F
T T F T T
T T F F T T
T F T T T
T F T F ? ? ? F F
T F F T T
T F F F T T
F T T T T
F T T F ? ? ? F F
F T F T T
F T F F T T
F F T T T
F F T F ? ? ? F F
F F F T T
F F F F T T

We must now evaluate the premises. We use shortcuts where we can:

𝐴 𝐵 𝐶 𝐷 𝐴∨𝐵 ¬ (𝐴 ∧ 𝐶) ¬ (𝐵 ∧ ¬ 𝐷) (¬ 𝐶 ∨ 𝐷)
T T T T T
T T T F T F T F F
T T F T T
T T F F T T
T F T T T
T F T F T F T F F
T F F T T
T F F F T T
F T T T T
F T T F T T F F TT F F
F T F T T
F T F F T T
F F T T T
F F T F F F F
F F F T T
F F F F T T

If we had used no shortcuts, we would have had to write 256 ‘T’s or ‘F’s on this table.
Using shortcuts, we only had to write 37. We have saved ourselves a lot of work.
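The savings can also be seen programmatically. This Python sketch (illustrative; the evaluation counter is my own device) checks the earlier entailment ¬𝐿 → (𝐽 ∨ 𝐿), ¬𝐿 ⊨ 𝐽, skipping a row as soon as the conclusion is true or a premise is false, and counts how many sentence evaluations were actually needed:

```python
from itertools import product

checked = 0   # count how many sentence evaluations we actually perform

def ev(sentence, v):
    global checked
    checked += 1
    return sentence(v)

p1 = lambda v: (not (not v['L'])) or (v['J'] or v['L'])   # ¬L → (J ∨ L)
p2 = lambda v: not v['L']                                 # ¬L
c  = lambda v: v['J']                                     # J

bad_rows = []
for J, L in product([True, False], repeat=2):
    v = {'J': J, 'L': L}
    if ev(c, v):            # conclusion true: this row cannot be bad
        continue
    if not ev(p2, v):       # a false premise: this row cannot be bad either
        continue
    if ev(p1, v):
        bad_rows.append(v)

print(bad_rows, checked)    # → [] 7
```

There are no bad rows, and only 7 evaluations were needed instead of the 12 a complete row‐by‐row check would require.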
By using the notion of a bad row – a potential counterexample to a purported entailment –
you can save yourself a huge amount of work in testing for validity. There is still lots of
work involved in symbolising any natural language argument into Sentential, but once

that task is undertaken it is a relatively automatic process to determine whether the
symbolisation is an entailment.

Key Ideas in §12


› Some shortcuts are available in constructing truth tables. For ex‐
ample, if a conjunction has one false conjunct, we needn’t check
the truth value of the other in order to determine that the whole
conjunction is false.
› When applying our test for entailment, we need only check those
rows on which all the premises are true to see if the conclusion
is false on those rows. So we needn’t check any row where the
conclusion is true, or where a premise is false.

Practice exercises
A. Using shortcuts, determine whether each sentence is a logical truth, a logical false‐
hood, or neither.

1. ¬𝐵 ∧ 𝐵
2. ¬𝐷 ∨ 𝐷
3. (𝐴 ∧ 𝐵) ∨ (𝐵 ∧ 𝐴)
4. ¬ 𝐴 → (𝐵 → 𝐴)
5. 𝐴 ↔ 𝐴 → (𝐵 ∧ ¬𝐵)
6. ¬(𝐴 ∧ 𝐵) ↔ 𝐴
7. 𝐴 → (𝐵 ∨ 𝐶)
8. (𝐴 ∧ ¬𝐴) → (𝐵 ∨ 𝐶)
9. (𝐵 ∧ 𝐷) ↔ 𝐴 ↔ (𝐴 ∨ 𝐶)
13
Partial Truth Tables

Sometimes, we do not need to know what happens on every row of a truth table. Some‐
times, just a single row or two will do.

13.1 Direct Uses of Partial Truth Tables


Logical Truth In order to show that a sentence is a logical truth (tautology), we need
to show that it is true on every valuation. That is to say, we need to know that it comes
out true on every row of the truth table. So, it seems, we need a complete truth table.
To show that a sentence is not a logical truth, however, we only need one valuation,
corresponding to a truth table row on which the sentence is false. Therefore, in order to
show that some sentence is not a logical truth, it is enough to provide a single valuation
– a single row of the truth table – which makes the sentence false.
Suppose that we want to show that the sentence ‘(𝑈 ∧ 𝑇) → (𝑆 ∧ 𝑊)’ is not a logical
truth. We set up a PARTIAL TRUTH TABLE:

𝑆 𝑇 𝑈 𝑊 (𝑈 ∧ 𝑇) → (𝑆 ∧ 𝑊)
F

We have only left space for one row, rather than 16, since we are only looking for one
valuation on which the sentence is false. For just that reason, we have filled in ‘F’ for the
entire sentence. A partial truth table is a device for ‘reverse engineering’ a valuation,
given a truth value assigned to a complex sentence. We work backward from that truth
value to what the valuation must or could be.
The main connective of the sentence is a conditional. In order for the conditional to be
false, the antecedent must be true and the consequent must be false. So we fill these
in on the table:

𝑆 𝑇 𝑈 𝑊 (𝑈 ∧ 𝑇) → (𝑆 ∧ 𝑊)
T F F

In order for the ‘(𝑈 ∧ 𝑇)’ to be true, both ‘𝑈’ and ‘𝑇’ must be true.

𝑆 𝑇 𝑈 𝑊 (𝑈 ∧ 𝑇) → (𝑆 ∧ 𝑊)
T T TTT F F

Now we just need to make ‘(𝑆 ∧ 𝑊)’ false. To do this, we need to make at least one of
‘𝑆’ and ‘𝑊 ’ false. We can make both ‘𝑆’ and ‘𝑊 ’ false if we want. All that matters is that
the whole sentence turns out false on this row. Making an arbitrary decision, we finish
the table in this way:

𝑆 𝑇 𝑈 𝑊 (𝑈 ∧ 𝑇) → (𝑆 ∧ 𝑊)
F T T F TTT FFF F

So we now have a partial truth table, which shows that ‘(𝑈 ∧ 𝑇) → (𝑆 ∧ 𝑊)’ is not a
logical truth. Put otherwise, we have shown that there is a valuation which makes
‘(𝑈 ∧ 𝑇) → (𝑆 ∧ 𝑊)’ false, namely, the valuation which makes ‘𝑆’ false, ‘𝑇’ true, ‘𝑈’ true
and ‘𝑊 ’ false.
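This ‘reverse engineering’ can be automated as a search. Here is a Python sketch (the helper `find_valuation` is hypothetical, not from the text) that hunts for a valuation giving ‘(𝑈 ∧ 𝑇) → (𝑆 ∧ 𝑊)’ the value False:

```python
from itertools import product

def find_valuation(sentence, atoms, value):
    """Search for a valuation giving the sentence the requested truth value."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if sentence(v) == value:
            return v
    return None

# Reverse-engineer a valuation making '(U ∧ T) → (S ∧ W)' false:
sentence = lambda v: (not (v['U'] and v['T'])) or (v['S'] and v['W'])
print(find_valuation(sentence, ['S', 'T', 'U', 'W'], False))
```

The search may return a different falsifying valuation from the one chosen in the partial truth table – several exist – but any one of them suffices to show the sentence is not a logical truth, and every one must make ‘𝑈’ and ‘𝑇’ true while falsifying ‘(𝑆 ∧ 𝑊)’.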

Logical Falsehood Showing that something is a logical falsehood (contradiction) re‐


quires us to consider every row of a complete truth table. We need to show that there is
no valuation which makes the sentence true; that is, we need to show that the sentence
is false on every row of the truth table.
However, to show that something is not a logical falsehood, all we need to do is find a
valuation which makes the sentence true, and a single row of a truth table will suffice.
We can illustrate this with the same example.

𝑆 𝑇 𝑈 𝑊 (𝑈 ∧ 𝑇) → (𝑆 ∧ 𝑊)
T

To make the sentence true, it will suffice to ensure that the antecedent is false. Since
the antecedent is a conjunction, we can just make one of the conjuncts false. For no
particular reason, we choose to make ‘𝑈’ false; and then we can assign whatever truth
values we like to the other atomic sentences.

𝑆 𝑇 𝑈 𝑊 (𝑈 ∧ 𝑇) → (𝑆 ∧ 𝑊)
F T F F F FT TFF F

Equivalence To show that two sentences are logically equivalent, we must show that
the sentences have the same truth value on every valuation. So this requires us to
consider each row of a complete truth table.
To show that two sentences are not logically equivalent, we only need to show that
there is a valuation on which they have different truth values. So this requires only a

partial truth table: construct the table given the assumption that one sentence is true
and the other false. So, for example, to show that ‘(¬𝐴 ∧ 𝐵)’ and ‘(𝐴 ∨ ¬𝐵)’ are not
logically equivalent, constructing this partial truth‐table would suffice:

𝐴 𝐵 (¬ 𝐴 ∧ 𝐵) (𝐴 ∨ ¬ 𝐵)
F T T FTT F FF T

Disjunctions and biconditionals in partial truth tables However, unless we are


lucky, we might need to construct a partial truth table with two rows. (Or attempt to
construct two partial truth tables.) If we are trying to show 𝒜 and ℬ logically inequi‐
valent, we will want to consider the possibility that it is 𝒜 which is true and ℬ which
is false, as well as the other possibility that it is ℬ which is true and 𝒜 which is false.
Suppose we are considering the sentence ‘¬(𝑃 ∨ 𝑄)’, and we are testing whether it is
not a logical truth. So we begin our partial truth table by assuming it false and seeing
if we can construct a valuation.

𝑃 𝑄 ¬ (𝑃 ∨ 𝑄)
F T

We see that the falsity of the whole negated sentence requires the embedded disjunc‐
tion to be true. But what do we do now? There is no unique way to proceed. What we
do is add a new row, in effect ‘branching’ the possible ways of constructing a valuation
which makes this embedded disjunction true. On the first row we assume the first
disjunct is true, and on the second that the second disjunct is true.

𝑃 𝑄 ¬ (𝑃 ∨ 𝑄)
T FTT
T F TT

But we can then see that no matter how we fill in the blank cells, in either row, we will
be able to make the whole sentence come out false. For example:

𝑃 𝑄 ¬ (𝑃 ∨ 𝑄)
T T FTTT
F T FFTT

So it is possible to construct a valuation – more than one – on which this sentence is


false, so ‘¬(𝑃 ∨ 𝑄)’ is not a logical truth.

Consistency To show that some sentences are jointly consistent, we must show that
there is a valuation which makes all of the sentences true. So this requires only a partial
truth table with a single row.
To show that some sentences are jointly inconsistent, we must show that there is no
valuation which makes all of the sentences true. So this requires a complete truth table:
you must show that on every row of the table at least one of the sentences is false.

Validity To show that an argument is valid, we must show that there is no valuation
which makes all of the premises true and the conclusion false. So this requires us to
consider all valuations in a complete truth table.
To show that an argument is invalid, we must show that there is a valuation which makes
all of the premises true and the conclusion false. So this requires only a one‐line partial
truth table on which all of the premises are true and the conclusion is false.
This table summarises what we need to consider in order to demonstrate the presence
or absence of various semantic features of sentences and arguments. So, checking a
sentence for contradictoriness involves considering all valuations, and we can directly
do that by constructing a complete truth table.

Check if yes Check if no


logical truth? all valuations: complete one valuation: partial truth
truth table table
logical falsehood? all valuations: complete one valuation: partial truth
truth table table
logically equivalent? all valuations: complete one valuation: partial truth
truth table table
consistent? one valuation: partial truth all valuations: complete
table truth table
valid? all valuations: complete one valuation: partial truth
truth table table
entailment? all valuations: complete one valuation: partial truth
truth table table

In all these uses of partial truth tables, we must begin constructing them with a par‐
ticular semantic property in mind. We will begin the construction with a different
hypothesis about the target sentence, depending on what property we are testing for.
If we are using partial truth tables to test consistency, we will begin by assigning each
sentence ‘true’. If we are testing for validity, we will assign the premises ‘true’, but the
conclusion ‘false’.

13.2 Indirect Uses of Partial Truth Tables


We just saw how to use partial truth tables to directly construct a valuation which
demonstrates that an argument is invalid, or that some sentences are consistent, etc.

But it turns out we can use the method of partial truth tables in an indirect way to
also evaluate arguments for validity or sentences for inconsistency. The idea is this:
we attempt to construct a partial truth table showing that the argument is invalid, and
if we fail, we can conclude that the argument is in fact valid.
So consider showing that an argument is invalid, which we just saw requires only a
one‐line partial truth table on which all of the premises are true and the conclusion is
false. So consider an attempt to show this argument invalid: (𝑃 ∧ 𝑅), (𝑄 ↔ 𝑃) ∴ 𝑄. We
construct a partial truth table, and attempt to construct a valuation which makes all
the premises true and the conclusion false:

𝑃 𝑄 𝑅 (𝑃 ∧ 𝑅) (𝑄 ↔ 𝑃) 𝑄
F T T F

Looking at the second premise, if we are to construct this valuation we need to make 𝑃
false: the premise is true, so both constituents have to have the same truth value, and
𝑄 is false by assumption in this valuation:

𝑃 𝑄 𝑅 (𝑃 ∧ 𝑅) (𝑄 ↔ 𝑃) 𝑄
F T F TF F

But looking at the first premise, we see that both 𝑃 and 𝑅 have to be true to make this
conjunction true:

𝑃 𝑄 𝑅 (𝑃 ∧ 𝑅) (𝑄 ↔ 𝑃) 𝑄
?? F T T TT F TF F

The truth of the first premise (given the other assumptions) has to make 𝑃 true, but
the truth of the second (given the other assumptions) has to make 𝑃 false. So: there
is no coherent way of assigning a truth value to 𝑃 so as to make this argument invalid.
(This is marked by the ‘??’ in the partial truth table.) Hence, it is valid.
I call this an INDIRECT use of partial truth tables. We do not construct the valuations
which actually demonstrate the presence or absence of a semantic property of an argu‐
ment or set of sentences. Rather, we show that the assumption that there is a valuation
that meets a certain condition is not coherent. So in the above case, we conclude that
nowhere among the 8 rows of the complete truth table for that argument is one that
makes the premises true and the conclusion false.
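The indirect method is, in effect, an exhaustive search that comes up empty. A Python sketch of the search for the argument just discussed (purely illustrative):

```python
from itertools import product

# Search for a counterexample to (P ∧ R), (Q ↔ P) ∴ Q: a valuation with
# both premises true and the conclusion false.
counterexamples = [
    (P, Q, R)
    for P, Q, R in product([True, False], repeat=3)
    if (P and R) and (Q == P) and not Q
]
print(counterexamples)   # → []
```

The list of counterexamples is empty: none of the 8 valuations makes the premises true and the conclusion false, so the argument is valid.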
This procedure works because our partial truth table test is guaranteed to succeed in
demonstrating the absence of validity, for an invalid argument. Accordingly, if our test
fails to demonstrate the absence of validity, that must be because the argument is in
fact valid. (This is in keeping with the table at the end of the previous section. Our
failure to construct a valuation showing the argument invalid implicitly considers all
valuations.)

This indirect method of partial truth tables can also be used if we need to add addi‐
tional branches to our partial truth table. Suppose we are testing if ‘𝑃 ↔ ¬𝑃’ is a logical
falsehood. We attempt to construct a valuation making it true:

𝑃 (𝑃 ↔ ¬ 𝑃)
T

Again we need to branch our partial truth table to deal with the two possible ways this
biconditional might be true: if the two sides are both false, or if they are both true.

𝑃 (𝑃 ↔ ¬ 𝑃)
T TT
F TF

We can see, as we complete the table, that there is no coherent valuation making this
sentence true. So the indirect method allows us to deduce that it is a logical falsehood:

𝑃 (𝑃 ↔ ¬ 𝑃)
?? T TT F
?? F TF T
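The same indirect reasoning can be run mechanically for ‘𝑃 ↔ ¬𝑃’: the search for a valuation making it true fails, which shows it is a logical falsehood. A hypothetical sketch:

```python
from itertools import product

def find_valuation(sentence, atoms, value):
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if sentence(v) == value:
            return v
    return None   # exhausted every valuation without success

# 'P ↔ ¬P': no valuation makes it true, so it is a logical falsehood.
biconditional = lambda v: v['P'] == (not v['P'])
print(find_valuation(biconditional, ['P'], True))   # → None
```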

Key Ideas in §13


› A partial truth table is a way to reverse engineer a valuation,
given a truth value for a sentence.
› Partial truth tables can be effective tests for the absence of most
of the semantic properties of sentences and arguments; and they
can provide a test for the presence of consistency in some sen‐
tences.
› We can also use partial truth tables indirectly to test for the
presence of semantic properties of sentences and arguments, by
showing that those properties cannot be absent.

Practice exercises
A. Use complete or partial truth tables (as appropriate) to determine whether these
pairs of sentences are logically equivalent:
§13. PARTIAL TRUTH TABLES 103

1. 𝐴, ¬𝐴
2. 𝐴, 𝐴 ∨ 𝐴
3. 𝐴 → 𝐴, 𝐴 ↔ 𝐴
4. 𝐴 ∨ ¬𝐵, 𝐴 → 𝐵
5. 𝐴 ∧ ¬𝐴, ¬𝐵 ↔ 𝐵
6. ¬(𝐴 ∧ 𝐵), ¬𝐴 ∨ ¬𝐵
7. ¬(𝐴 → 𝐵), ¬𝐴 → ¬𝐵
8. (𝐴 → 𝐵), (¬𝐵 → ¬𝐴)

B. Use complete or partial truth tables (as appropriate) to determine whether these
sentences are jointly consistent, or jointly inconsistent:

1. 𝐴 ∧ 𝐵, 𝐶 → ¬𝐵, 𝐶
2. 𝐴 → 𝐵, 𝐵 → 𝐶 , 𝐴, ¬𝐶
3. 𝐴 ∨ 𝐵, 𝐵 ∨ 𝐶 , 𝐶 → ¬𝐴
4. 𝐴, 𝐵, 𝐶 , ¬𝐷, ¬𝐸 , 𝐹

C. Use complete or partial truth tables (as appropriate) to determine whether each
argument is valid or invalid:

1. 𝐴 ∨ 𝐴 → (𝐴 ↔ 𝐴) ∴ 𝐴
2. 𝐴 ↔ ¬(𝐵 ↔ 𝐴) ∴ 𝐴
3. 𝐴 → 𝐵, 𝐵 ∴ 𝐴
4. 𝐴 ∨ 𝐵, 𝐵 ∨ 𝐶, ¬𝐵 ∴ 𝐴 ∧ 𝐶
5. 𝐴 ↔ 𝐵, 𝐵 ↔ 𝐶 ∴ 𝐴 ↔ 𝐶
14
Expressiveness of Sentential

When we introduced the idea of truth‐functionality in §8.2, we observed that every
sentence connective in Sentential was truth‐functional. As we noted, that property
allows us to represent complex sentences involving only these connectives using truth
tables.

14.1 Other Truth‐Functional Connectives


Are there truth‐functional connectives other than those in Sentential? If there were,
they would have schematic truth tables that differ from those for any of our connectives.
And it is easy to see that there are. Consider this proposed connective:

The Sheffer stroke For any sentences 𝒜 and ℬ, 𝒜 ↓ ℬ is true if and only if both 𝒜
and ℬ are false. We can summarise this in the schematic truth table for the Sheffer
stroke:

𝒜 ℬ 𝒜↓ℬ
T T F
T F F
F T F
F F T

Inspection of the schematic truth tables for ∧, ∨, etc., shows that their truth tables are
different from this one, and hence the Sheffer stroke is not one of the connectives of
Sentential. It is, however, a connective of English: it is the ‘neither … nor …’ connective
that features in ‘Siya is neither an archer nor a jockey’, which is false iff she is either.

‘Whether or not’ The connective ‘… whether or not …’, as in the sentence ‘Sam is
happy whether or not she’s rich’ seems to have this schematic truth table:


𝒜 ℬ 𝒜 whether or not ℬ
T T T
T F T
F T F
F F F

This too corresponds to no existing connective of Sentential.


In fact, it can be shown that there are many other potential truth‐functional connect‐
ives that are not included in the language Sentential.1
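That there are exactly sixteen binary truth‐functional connectives (each of the four rows TT, TF, FT, FF independently receiving a T or an F) can be checked by enumeration. The following Python sketch is just an illustration, not part of Sentential:

```python
from itertools import product

rows = list(product([True, False], repeat=2))    # the four rows: TT, TF, FT, FF
tables = list(product([True, False], repeat=4))  # one T/F output per row

print(len(tables))  # 16 candidate binary connectives

# The familiar connectives each pick out one of the sixteen tables:
conjunction = tuple(a and b for a, b in rows)
disjunction = tuple(a or b for a, b in rows)
neither_nor = tuple(not a and not b for a, b in rows)  # the Sheffer stroke above
print(conjunction, neither_nor)
```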

14.2 The Expressive Power of Sentential


Should we be worried about this, and attempt to add new connectives to Sentential?
It turns out we already have enough connectives to say anything we wish to say that
makes use of only truth‐functional connectives.2 The connectives that are already in
Sentential are TRUTH‐FUNCTIONALLY COMPLETE:

For any truth‐functional connective in any language – that is, one
which has a truth‐table like ours – there is a schematic sentence of
Sentential which has the same truth table.

Remember that a schematic sentence is something like 𝒜 ∨ ¬ℬ, where arbitrary Sen‐
tential sentences can fill the places indicated by 𝒜 and ℬ.
This is actually not very difficult to show. We’ll start with an example. Suppose we
have the English sentence ‘Siya is exactly one of an archer and a jockey’. This sentence
features the connective ‘Exactly one of … and …’. In this case, the simpler sentences
which are connected to form the complex sentence are ‘Siya is an archer’ and ‘Siya is a
jockey’. The schematic truth table for this connective is as follows:

𝒜 ℬ ‘Exactly one of 𝒜 and ℬ’


T T F
T F T
F T T
F F F

1 There are in fact sixteen truth‐functional connectives that join two simpler sentences into a complex
sentence, but Sentential only includes four. (Why sixteen? Because there are four rows of the schem‐
atic truth table for such a connective, and each row can have either a T or an F recorded against it,
independent of the other rows, so there are 2 × 2 × 2 × 2 = 16 ways of constructing such a truth‐table.)
2 I should flag a potential limitation here: this result needs our assumption that True and False are the
only truth values. Some MANY‐VALUED LOGICS include further ‘truth values’; for example a third truth
value Indeterminate. (It is dubious whether that is a truth value, or whether a sentence having it just
reflects our ignorance of its truth or falsity.) Such logics can have truth‐functional connectives that
don’t behave like any connective of Sentential. One motivating example for a third truth
value, neither true nor false, is the case of presupposition failure we will look at in §19.4.

We want now to find a schematic Sentential sentence that has this same truth table. So
we shall want the sentence to be true on the second row, and true on the third row, and
false on the other rows. In other words, we want a sentence which is true on either the
second row or the third row.
Let’s begin by focusing on that second row, or rather the family of valuations corres‐
ponding to it. Those valuations are exactly the ones that make 𝒜 true and ℬ false; they
are the only valuations among those we are considering which do so. So they are the
only valuations which make both 𝒜 and ¬ℬ true. So we can construct a sentence which
is true on valuations in that family, and those valuations alone: the conjunction of 𝒜
and ¬ℬ, (𝒜 ∧ ¬ℬ).
Now look at the third row and its associated family of valuations. Those valuations
make 𝒜 false and ℬ true. They are the only valuations among those we are considering
which make 𝒜 false and ℬ true. So they are the only valuations among those we are
considering which make both of ¬𝒜 and ℬ true. So we can construct a schematic
sentence which is true on valuations in that family, and only those valuations: the
conjunction of ¬𝒜 and ℬ, (¬𝒜 ∧ ℬ).
Our target sentence, the one with the same truth table as ‘Exactly one of 𝒜 and ℬ’, is
true on just the valuations in the second and third families. So it is true if either (𝒜 ∧ ¬ℬ)
is true or if (¬𝒜 ∧ ℬ) is true. And there is of course a schematic Sentential sentence with
just this profile: (𝒜 ∧ ¬ℬ) ∨ (¬𝒜 ∧ ℬ).
Let us summarise this construction by adding to our truth table:

𝒜 ℬ ‘Exactly one of 𝒜 and ℬ’ (𝒜 ∧ ¬ℬ) ∨ (¬𝒜 ∧ ℬ)


T T F F F F
T F T T T F
F T T F T T
F F F F F F

As we can see, we have come up with a schematic Sentential sentence with the intended
truth table.

14.3 The Disjunctive Normal Form Procedure


The procedure sketched above can be generalised:

1. First, identify the truth table of the target connective;

2. Then, identify which families of valuations (schematic truth table rows) the tar‐
get sentence is true on, and for each such row, construct a conjunctive schematic
Sentential sentence true on that row alone. (It will be a conjunction of those schem‐
atic letters which are true on the valuation, together with the negations of those
schematic letters which are false on the valuation.)

› What if the target connective is true on no valuations? Then let the schem‐
atic Sentential sentence (𝒜∧¬𝒜) represent it – it too is true on no valuation.

3. Finally, the schematic Sentential sentence will be a disjunction of those conjunc‐
tions, because the target sentence is true according to any of those valuations.

› What if there is only one such conjunction, because the target sentence
is true in only one valuation? Then just take that conjunction to be the
Sentential rendering of the target sentence.
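The three steps can be mechanised. The sketch below is a hypothetical illustration (ASCII ‘~’, ‘&’ and ‘v’ stand in for ¬, ∧ and ∨); it builds a disjunctive normal form string from any truth function over the given schematic letters:

```python
from itertools import product

def dnf(letters, truth_function):
    """Build a disjunctive-normal-form formula (as a string) whose truth
    table is the one given by truth_function over the listed letters."""
    disjuncts = []
    for values in product([True, False], repeat=len(letters)):
        if truth_function(*values):  # step 2: rows where the target is true
            conjuncts = [l if v else f'~{l}' for l, v in zip(letters, values)]
            disjuncts.append('(' + ' & '.join(conjuncts) + ')')
    if not disjuncts:                # target true on no valuation
        a = letters[0]
        return f'({a} & ~{a})'
    return ' v '.join(disjuncts)     # step 3: disjoin the conjunctions

# 'Exactly one of A and B' (true just when the letters disagree):
print(dnf(['A', 'B'], lambda a, b: a != b))  # (A & ~B) v (~A & B)
```

Applied to the biconditional’s table (true on rows TT and FF), this yields ‘(A & B) v (~A & ~B)’, matching the schematic sentence this section derives for ↔.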

Logicians say that the schematic sentences that this procedure spits out are in DIS‐
JUNCTIVE NORMAL FORM.
This procedure doesn’t always give the simplest schematic Sentential sentence with a
given truth table, but for any truth table you like, it gives us some schematic Sentential
sentence with that truth table. Indeed, we can see that the simpler schematic sentence
¬(𝒜 ↔ ℬ) has the same truth table as our target sentence too.
The procedure can be used to show that there is some redundancy in Sentential itself.
Take the connective ↔. Our procedure, applied to the schematic truth table for 𝒜 ↔ ℬ,
yields the following schematic sentence:

(𝒜 ∧ ℬ) ∨ (¬𝒜 ∧ ¬ℬ).

This schematic sentence says the same thing as the original schematic sentence with
the biconditional as its main connective, without using the biconditional. This could
be used as the basis of a program to remove the biconditional from the language. But
that would make Sentential more difficult to use, and we will not pursue this idea fur‐
ther.

Key Ideas in §14


› There are truth‐functional connectives, such as ‘neither … nor
…’, which don’t correspond to any of the official connectives of
Sentential.
› Nevertheless for any sentence structure 𝒜 ⊕ ℬ, where ‘⊕’ is a
truth‐functional connective, it is possible to construct a Senten‐
tial schematic sentence which has the same truth table. (Indeed,
one can do this making use only of negation, conjunction, and
disjunction.)
› So Sentential is in fact able to express any truth‐functional con‐
nective.

Practice exercises
A. For each of columns (i), (ii) and (iii) below, use the procedure just outlined to find
a Sentential sentence that has the truth table depicted:

𝒜 ℬ (i) (ii) (iii)


T T T T F
T F T T T
F T T F F
F F T T T

B. Can you find Sentential schematic sentences which have the same truth table as these
English connectives?

1. ‘𝒜 whether or not ℬ’;


2. ‘Not both 𝒜 and ℬ’;
3. ‘Neither 𝒜 nor ℬ, but at least one of them’;
4. ‘If 𝒜 then ℬ, else 𝒞 ’.
Chapter 4

The Language of Quantified Logic
15
Building Blocks of Quantifier

15.1 The Need to Decompose Sentences


Consider the following argument, which is obviously valid in English:

Willard is a logician. All logicians wear funny hats. So Willard wears a
funny hat.

To symbolise it in Sentential, we might offer a symbolisation key:

𝐿: Willard is a logician.
𝐴: All logicians wear funny hats.
𝐹 : Willard wears a funny hat.

And the argument itself becomes:


𝐿, 𝐴 ∴ 𝐹
This is invalid in Sentential – there is a valuation on which the premises are true and
the conclusion false. But the original English argument is clearly valid.
The problem is not that we have made a mistake while symbolising the argument. This
is the best symbolisation we can give in Sentential. The problem lies with Sentential
itself. ‘All logicians wear funny hats’ is about both logicians and hat‐wearing. By not
retaining this in our symbolisation, we lose the connection between Willard’s being a
logician and Willard’s wearing a hat.
Another example. This argument is also intuitively valid:

John loves James;
So: John loves someone.

Again, the best Sentential symbolisation is the invalid 𝑃 ∴ 𝑄. The validity of this argu‐
ment depends on the internal structure of the sentences, and specifically the connec‐
tion between the name ‘James’ and the phrase ‘someone’.


The basic units of Sentential are atomic sentences, and Sentential cannot decompose
these. None of the sentences in the arguments above have any truth‐functional con‐
nectives, so must be symbolised as atomic sentences of Sentential. To symbolise argu‐
ments like the preceding, we will have to develop a new logical language which will
allow us to split the atom. We will call this language Quantifier, and the study of this
language and its features is quantified logic.
The details of Quantifier will be explained throughout this chapter, but here is the ba‐
sic idea about how to split the atom(ic sentence). The key insight is that many natural
language sentences have subject‐predicate structure, and some arguments are valid in
virtue of this structure. Quantifier adds to Sentential some resources for modelling this
structure – or perhaps more accurately, it allows us to model the predicate‐name struc‐
ture of many sentences, along with any truth‐functional structure.

Names First, we have names. In Quantifier, we indicate these with lower case italic
letters. For instance, we might let ‘𝑏’ stand for Bertie, or let ‘𝑖 ’ stand for Willard. The
names of Quantifier correspond to proper names in English, like ‘Willard’ or ‘Elyse’,
which also stand for the things they name.

Predicates Second, we have predicates. English predicates are expressions like
‘ is a dog’ or ‘ is a logician’. These are not complete sentences by them‐
selves. In order to make a complete sentence, we need to fill in the gap. We need to
say something like ‘Bertie is a dog’ or ‘Willard is a logician’. In Quantifier, we indicate
predicates with upper case italic letters. For instance, we might let the Quantifier pre‐
dicate ‘𝐷’ symbolise the English predicate ‘ is a dog’. Then the expression ‘𝐷𝑏’

will be a sentence in Quantifier, which symbolises the English sentence ‘Bertie is a dog’.
Equally, we might let the Quantifier predicate ‘𝐿’ symbolise the English predicate ‘

is a logician’. Then the expression ‘𝐿𝑖 ’ will symbolise the English sentence ‘Willard is a
logician’.

Quantifiers Third, we have quantifier phrases. These tell us how much. In Eng‐
lish there are lots of quantifier phrases. But in Quantifier we will focus on just two:
‘all’/‘every’ and ‘there is at least one’/‘some’. So we might symbolise the English sen‐
tence ‘there is a dog’ with the Quantifier sentence ‘∃𝑥𝐷𝑥 ’, which we would naturally
read aloud as ‘there is at least one thing, 𝑥, such that 𝑥 is a dog’.
That is the general idea. But Quantifier is significantly more subtle than Sentential. So
we will come at it slowly.

15.2 Singular Terms


In English, a SINGULAR TERM is a noun phrase that refers to a specific person, place, or
thing. The word ‘dog’ is not a singular term, because there are a great many dogs. The
phrase ‘Bertie’ is a singular term, because it refers to a specific terrier. Here are some
further examples:

› Proper Names, e.g., ‘Bertie enjoys playing fetch’;


‘Scott Morrison breached the human rights of children seeking asylum’;

› Definite Descriptions, e.g., ‘The oldest person is a woman’;


‘Ortcutt is the shortest spy’;

› Possessives, e.g., ‘Antony’s eldest child loves coding’;

› Pronouns, e.g., ‘She (points) plays violin’;


‘Icecream is delicious. Everyone loves it’;

› Demonstratives, e.g., ‘That loud dog is so annoying’.

The clearest cases of singular terms are proper names, and these occupy a distinct
syntactic category in Quantifier. Most of the other types of English singular terms are
modelled in more or less indirect ways in Quantifier.
Definites and possessives are handled principally by paraphrase. We treat possessives
as disguised definite descriptions, paraphrasing ‘Antony’s eldest child’ as something
like ‘the eldest child of Antony’. In turn, definites are handled as complex constructions
in Quantifier, as we’ll see when we take them up in §19.
Even trickier in some ways are singular term uses of pronouns and demonstratives.
Both of these constructions rely on the CONVERSATIONAL CONTEXT to fix a determinate
reference. I might need to gesture, or rely on our previous utterances, to understand
who ‘she’ refers to in ‘she plays violin’. Likewise, ‘that loud dog’ might vary in which
dog it refers to from conversation to conversation. There are approaches that attempt
to model the role of context in determining the meaning of an expression, but we
will not attempt to do this here. Quantifier is not designed to model every aspect of
English. If we are forced to try to represent some English sentences involving context‐
sensitive singular terms in Quantifier, we will need to resort to paraphrases that are
not fully adequate (e.g., treating demonstratives as definite descriptions, despite their
differences).
Moreover, pronouns are not singular terms in every use. Compare the uses of the
pronoun ‘she’ in ‘She plays violin’ and ‘Every girl thinks she deserves icecream’. The
first refers to a specific individual, perhaps with some contextual cues like gestures to
help identify which. The second, however, doesn’t refer to a specific girl – rather, it
ranges over all girls. Perhaps surprisingly, Quantifier takes this second kind of use of
pronouns to be primary. We will discuss how Quantifier represents pronouns when we
introduce variables in §15.5.

15.3 Names
PROPER NAMES are a particularly important kind of singular term. These are expres‐
sions that label individuals without describing them. The name ‘Emerson’ is a proper
name, and the name alone does not tell you anything about Emerson. Of course, some
names are traditionally given to boys and other are traditionally given to girls. If ‘Hil‐
ary’ is used as a singular term, you might guess that it refers to a woman. You might,

though, be guessing wrongly. Indeed, the name does not necessarily mean that what
is referred to is even a person: Hilary might be a giraffe, for all you could tell just from
the name ‘Hilary’. In English, the use of certain names triggers our knowledge of these
conventions, so that (for example) the use of name ‘Fido’ might well trigger an expect‐
ation that the thing named is a dog. However, while it would violate convention to
name your child ‘Fido’, once the you manage to assign the name it can perfectly well
refer to a human child.
In Quantifier, there are no conventions around its category of NAMES. These are pure
names, whose only role is to designate some specific individual. Names in Quantifier
are represented by lower‐case letters ‘𝑎’ through to ‘𝑟’. We can add subscripts if we
want to use some letter more than once, if we have a complicated discourse with many
different names. So here are some names in Quantifier:

𝑎, 𝑏, 𝑐, …, 𝑟, 𝑎1 , 𝑓32 , 𝑗390 , 𝑚12 .

These should be thought of along the lines of proper names in English. But with some
differences. First, ‘Antony Eagle’ is a proper name, but there are a number of people
with this name. (Equally, there are at least two people with the name ‘P.D. Magnus’
and several people named ‘Tim Button’.) We live with this kind of ambiguity in Eng‐
lish, allowing context to determine that some particular utterance of the name ‘Antony
Eagle’ refers to one of the contributors to this book, and not some other guy. In Quan‐
tifier, we do not tolerate any such ambiguity. Each name must pick out exactly one
thing. (However, two different names may pick out the same thing.) Second, names
(and predicates) in Quantifier are assigned their meaning, or interpreted, only tempor‐
arily. (This is just like the way that atomic sentences are only assigned a truth value in
Sentential temporarily, relative to a valuation.)
As with Sentential, we provide symbolisation keys. These indicate, temporarily, what a
name shall pick out. So we might offer:

𝑒: Elsa
𝑔: Gregor
𝑚: Marybeth

Again, what we are saying in using natural language names is that the thing referred
to by the Quantifier name ‘𝑒’ is stipulated – for now – to be the same thing referred to
by the English proper noun ‘Elsa’. We are not saying that ‘𝑒’ and ‘Elsa’ are synonyms.
For all we know, perhaps there is additional nuance to the meaning of the name ‘Elsa’
other than what it refers to. If there is, it is not preserved in the Quantifier name ‘𝑒’,
which has no nuance in its meaning other than the thing it denotes. The Quantifier
name ‘𝑒’ might be stipulated to denote Elsa on one occasion, and Eddie on another.
You may have been taught in school that a noun is a ‘naming word’. It is safe to use
names in Quantifier to symbolise proper nouns. But dealing with common nouns is a
more subtle matter. Even though they are often described as ‘general names’, common
nouns actually function as names relatively rarely. Some common nouns which do
often function as names are NATURAL KIND TERMS like ‘gold’ and ‘tiger’. These are
common nouns which really do name a kind of thing. Consider this example:

Gold is scarce;
Nothing scarce is cheap;
So: Gold isn’t cheap.

This argument is valid in virtue of its form. The form of the first premise has the phrase
‘ is scarce’ being predicated of ‘gold’, in which case, ‘gold’ must be functioning as

a name in this argument. But notice that ‘ is gold’ is a perfectly good predicate.
So we cannot simply treat all natural kind terms as proper names of general kinds.

15.4 Predicates
The simplest predicates denote properties of individuals. They are things you can say
about the features or behaviour of an object. Here are some examples of English pre‐
dicates:

› runs

› is a dog

› is a member of Monty Python

› was a student of David Lewis

› A piano fell on

In general, you can think about predicates as things which combine with singular terms
to make sentences. In these cases, they are interpreted with the aid of natural language
VERB PHRASES. The most elementary phrases that correspond to predicates are simple
intransitive verb phrases like ‘runs’. But predicates can symbolise more complex verb
phrases. In a subject‐predicate sentence, we can treat any syntactic sub‐unit of a sen‐
tence including everything but the subject as a predicate. So ‘Bertie is a dog’ can be
seen as involving the name ‘Bertie’ and the predicate ‘is a dog’. Note that proper names
can occur within a predicate, as in the verb phrase ‘was a student of David Lewis’. (We
will see how to model sentences where multiple names interact with a complex pre‐
dicate below in §17.) You can begin with sentences and make predicates out of them
by removing singular terms and leaving ‘slots’ in which singular terms can be placed.
Consider the sentence, ‘Vinnie borrowed the family car from Nunzio’. By removing one
singular term, we can obtain any of three different predicates:

› borrowed the family car from Nunzio

› Vinnie borrowed from Nunzio



› Vinnie borrowed the family car from

(What if we wanted to remove two or more singular terms and leave more than one
gap? We shall return to this in §17.) Quantifier predicates are capital letters 𝐴 through
𝑍, with or without subscripts. We might write a symbolisation key for predicates thus:

𝐴: is angry
𝐻: is happy

If we combine our two symbolisation keys, we can start to symbolise some English
sentences that use these names and predicates in combination. For example, consider
the English sentences:

80. Elsa is angry.


81. Gregor and Marybeth are angry.
82. If Elsa is angry, then so are Gregor and Marybeth.

To make a simple subject‐predicate sentence, we need to couple the predicate with as
many names as there are ‘gaps’. Since ‘𝐴’ has one gap in the above symbolisation key,
following it with a single name makes a grammatically well‐formed sentence of
Quantifier. So sentence 80 is straightforward: we symbolise it by ‘𝐴𝑒’.
Sentence 81: this is a conjunction of two simpler sentences. The simple sentences
can be symbolised just by ‘𝐴𝑔’ and ‘𝐴𝑚’. Then we help ourselves to our resources from
Sentential, and symbolise the entire sentence by ‘𝐴𝑔 ∧𝐴𝑚’. This illustrates an important
point: Quantifier has all of the truth‐functional connectives of Sentential.
Sentence 82: this is a conditional, whose antecedent is sentence 80 and whose con‐
sequent is sentence 81. So we can symbolise this with ‘𝐴𝑒 → (𝐴𝑔 ∧ 𝐴𝑚)’.
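A loose analogy may help here (this is an informal model, not part of Quantifier itself): think of names as denoting objects, and a one‐place predicate as a function from objects to truth values. The extension chosen for ‘is angry’ below is hypothetical:

```python
# Names denote individuals; a predicate maps individuals to truth values.
e, g, m = 'Elsa', 'Gregor', 'Marybeth'      # the symbolisation key for names
angry = {'Elsa', 'Gregor', 'Marybeth'}      # hypothetical extension of 'is angry'

def A(x):                                   # the predicate 'A'
    return x in angry

print(A(e))                           # 'Ae':  sentence 80
print(A(g) and A(m))                  # 'Ag ∧ Am':  sentence 81
print((not A(e)) or (A(g) and A(m)))  # 'Ae → (Ag ∧ Am)':  sentence 82
```

Coupling the predicate with a name, as in `A(e)`, mirrors filling the predicate’s single gap with a name to form a sentence.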

Predicates without subjects Actually we can make an elegant further assumption


to incorporate Sentential entirely within Quantifier. A predicate combines with singular
terms to make sentences. But some English predicates, though they require a singular
term syntactically, don’t seem to involve the referent of those singular terms in their
meaning. We see this in expressions like these:

83. It is snowing;
84. It seems that George is hungry;
85. She’ll be right.

The pronouns in these cases are known as DUMMY PRONOUNS. These pronouns have
no obvious referent, but that poses no problem for the interpretation of the sentences.
Example 83 means something like the bare verb ‘snowfalling’, if only that were gram‐
matical. In these cases, the predicate following the pronoun does not denote a property
of individuals, because there is no way to attach it to a particular individual. Contrast
these examples:

86. Coober Pedy is a harsh place. It has underground houses.


87. Coober Pedy is a harsh place. It seldom rains there.

In 86, ‘it’ refers to Coober Pedy. Not so in 87: ‘It’ in that example is a DUMMY PRONOUN.
Yet ‘there’ does refer to Coober Pedy. Drop the latter pronoun, and you obtain ‘it seldom
rains’. Having already argued that ‘it’ is a dummy pronoun in the longer sentence, it
appears to remain a dummy pronoun in the shortened sentence. Try substituting some
proper name for ‘it’ in ‘it seldom rains’ and see what nonsense you get.
Such predicates, where the dummy pronoun should not be thought of as a genuine
gap in the sentence, and with no other singular terms evident, have a meaning which
is purely determined by the predicate. English requires a grammatical subject, but
Quantifier is not subject to a similar limitation. Rather than reproduce this feature of
English, we will allow Quantifier predicates occurring by themselves to count as gram‐
matical sentences of Quantifier. These zero‐place predicates, semantically requiring
no subject, are to be symbolised by capital letters ‘𝐴’ through ‘𝑍’ (perhaps subscrip‐
ted), but without needing any adjacent name to be grammatical.
Note we have thereby included all atomic sentences of Sentential in Quantifier among
these special predicates of Quantifier. In Sentential we used these sentences to symbol‐
ise any sentence of English which did not include sentence connectives. In Quantifier
their use will be more limited. But nevertheless, syntactically, since Quantifier has all
the connectives of Sentential and all the atomic sentences, we see that every sentence
of Sentential is also a sentence of Quantifier.

15.5 Quantifiers
We are now ready to introduce quantifiers. In general, a quantifier tells us how many.
Consider these sentences:

88. Everyone is happy.


89. Someone is angry.
90. Every girl thinks she deserves icecream.
91. Most people are happy.
92. Exactly two people are angry.
93. More than three people are happy.

We will focus initially on the coarse‐grained quantifiers ‘every’/‘all’ and ‘some’. We will
look at numerical quantifiers, as in examples 92 and 93, in §18.
It might be tempting to symbolise sentence 88 as ‘(𝐻𝑒 ∧ (𝐻𝑔 ∧ 𝐻𝑚))’. Yet this would
only say that Elsa, Gregor, and Marybeth are happy. We want to say that everyone is
happy, even those we have not named, even those who are nameless.
Note that 88 and 89 and 90 can be roughly paraphrased like this:

94. Every person is such that: they are happy.


95. Some person is such that: they are angry.
96. Every girl is such that: she thinks she herself deserves icecream.

In each of these, we have a pronoun – singular ‘they’ in 94 and 95, ‘she’ in 96 – which
is governed by the preceding phrase. That phrase gives us information about what
this pronoun is pointing to – is it pointing severally to everyone, as in 94? Or to just
someone, though it is generally unknown which particular person it is, as in 95? In
either case, something general is being said, rather than something specific, even in
example 95 which is true just in case there is at least one angry person – it doesn’t
matter which person it is.
In this sort of construction, the sentences ‘they are happy’ and ‘she thinks she herself
deserves icecream’, which are headed by a bare pronoun, are called OPEN SENTENCES.
An open sentence in English can be used to say something meaningful, if the circum‐
stances permit a unique interpretation of the pronoun – consider ‘she plays violin’ from
§15.3. But in many cases no such unique interpretation is possible. If I gesture at a large
crowd and say simply ‘he is angry’, I may not manage to say anything meaningful if there
is no way to establish which person this use of ‘he’ is pointing to. The other part of the
sentence, the ‘every person is such that …’ part, is called a QUANTIFIER PHRASE. The
quantifier phrase gives us guidance about how to interpret the otherwise bare pronoun.
The treatment of quantifier phrases in Quantifier actually follows the structure of these
paraphrases rather well. The Quantifier analogue of these embedded pronouns is the
category of VARIABLE. In Quantifier, variables are italic lower case letters ‘𝑠’ through ‘𝑧’,
with or without subscripts. They combine with predicates to form open sentences of
the form ‘𝒜𝓍’. Grammatically variables are thus like singular terms. However, as their
name suggests, variables do not denote any fixed individual. They will not be assigned
a meaning by a symbolisation key, even temporarily. Rather, their role is to be governed
by an accompanying quantifier phrase to say something general about a situation. In
Quantifier, an open sentence combines with a quantifier to form a sentence. (Notice
that I have here returned to the practice of using ‘𝒜 ’ as a metavariable, from §7.)

Universal Quantifier The first quantifier of Quantifier we meet is the UNIVERSAL
QUANTIFIER, symbolised ‘∀’, which corresponds to ‘every’. Unlike English, we always
follow a quantifier in Quantifier by the variable it governs, to avoid the possibility of
confusion. Putting this all together, we might symbolise sentence 88 as ‘∀𝑥𝐻𝑥’. The
variable ‘𝑥’ is serving as a kind of placeholder, playing the role that is allotted to the
pronoun in the English paraphrase 94. The expression ‘∀𝑥’ intuitively means that you
can pick anyone to be temporarily denoted by ‘𝑥’. The subsequent ‘𝐻𝑥’ indicates, of
that thing you picked out, that it is happy. (Note that pronoun again.)
I should say that there is no special reason to use ‘𝑥’ rather than some other variable.
The sentences ‘∀𝑥𝐻𝑥 ’, ‘∀𝑦𝐻𝑦’, ‘∀𝑧𝐻𝑧’, and ‘∀𝑥5 𝐻𝑥5 ’ use different variables, but they will
all be logically equivalent.

Existential quantifier To symbolise sentence 89, we introduce a second quantifier:


the EXISTENTIAL QUANTIFIER, ‘∃’. Like the universal quantifier, the existential quanti‐
fier requires a variable. Sentence 89 can be symbolised by ‘∃𝑥𝐴𝑥 ’. Whereas ‘∀𝑥𝐴𝑥 ’ is
read naturally as ‘for all 𝑥, it (𝑥) is angry’, ‘∃𝑥𝐴𝑥 ’ is read naturally as ‘there is something,
𝑥 , such that it (𝑥 ) is angry’. Once again, the variable is a kind of placeholder; we could
just as easily have symbolised sentence 89 with ‘∃𝑧𝐴𝑧’, ‘∃𝑤256 𝐴𝑤256 ’, or whatever.

Some more examples will help. Consider these further sentences:

97. No one is angry.


98. There is someone who is not happy.
99. Not everyone is happy.

Sentence 97 can be paraphrased as, ‘It is not the case that someone is angry’. We can
then symbolise it using negation and an existential quantifier: ‘¬∃𝑥𝐴𝑥 ’. Yet sentence
97 could also be paraphrased as, ‘Everyone is not angry’. With this in mind, it can
be symbolised using negation and a universal quantifier: ‘∀𝑥¬𝐴𝑥 ’. Both of these are
acceptable symbolisations. Indeed, it will transpire that, in general, ∀𝑥¬𝒜 is logically
equivalent to ¬∃𝑥𝒜 . Symbolising a sentence one way, rather than the other, might
seem more ‘natural’ in some contexts, but it is not much more than a matter of taste.
Sentence 98 is most naturally paraphrased as, ‘There is some x, such that x is not happy’.
This then becomes ‘∃𝑥¬𝐻𝑥 ’. Of course, we could equally have written ‘¬∀𝑥𝐻𝑥 ’, which
we would naturally read as ‘it is not the case that everyone is happy’. And that would
be a perfectly adequate symbolisation of sentence 99.
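Over a finite domain, the two quantifiers behave like Python’s all and any, and the equivalence between ∀𝑥¬𝒜 and ¬∃𝑥𝒜 noted above can be spot‐checked. The domain and who counts as happy here are hypothetical:

```python
from itertools import chain, combinations

domain = ['Elsa', 'Gregor', 'Marybeth']
happy = {'Elsa', 'Marybeth'}            # hypothetical: Gregor is not happy

def H(x):
    return x in happy

exists_not_happy = any(not H(x) for x in domain)   # ∃x ¬Hx
not_all_happy = not all(H(x) for x in domain)      # ¬∀x Hx
print(exists_not_happy, not_all_happy)  # True True

# The equivalence holds whatever the extension of H over this domain:
for ext in chain.from_iterable(combinations(domain, n) for n in range(4)):
    assert any(x not in ext for x in domain) == (not all(x in ext for x in domain))
```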
Quantifiers get their name because they tell us how many things have a certain feature.
Quantifier allows only very crude distinctions: we have seen that we can symbolise ‘no
one’, ‘someone’, and ‘everyone’. English has many other quantifier phrases: ‘most’, ‘a
few’, ‘more than half’, ‘at least three’, etc. Some can be handled in a roundabout way in
Quantifier, as we will see: for example, we will meet the numerical quantifier ‘at least
three’ again in §18. But others, like ‘most’, simply cannot be reliably symbolised in
Quantifier.

15.6 Domains
Given the symbolisation key we have been using, ‘∀𝑥𝐻𝑥 ’ symbolises ‘Everyone is happy’.
Who is included in this everyone? When we use sentences like this in English, we
usually do not mean everyone now alive on the Earth. We almost certainly do not
mean everyone who was ever alive or who will ever live. We usually mean something
more modest: everyone now in the building, everyone enrolled in the ballet class, or
whatever.
In order to eliminate this ambiguity, we will need to specify a DOMAIN. The domain is
just the things that we are talking about. So if we want to talk about people in Chicago,
we define the domain to be people in Chicago. We write this at the beginning of the
symbolisation key, like this:

domain: people in Chicago

The quantifiers range over the domain. Given this domain, ‘∀𝑥’ is to be read roughly as
‘Every person in Chicago is such that …’ and ‘∃𝑥’ is to be read roughly as ‘Some person
in Chicago is such that …’.
In Quantifier, the domain must always include at least one thing. Moreover, in English
we can conclude ‘something is angry’ when given ‘Gregor is angry’. In Quantifier, then,

we shall want to be able to infer ‘∃𝑥𝐴𝑥 ’ from ‘𝐴𝑔’. So we shall insist that each name
must pick out exactly one thing in the domain. If we want to name people in places
beside Chicago, then we need to include those people in the domain.
In permitting multiple domains, Quantifier follows the lead of natural languages like
English. Consider an argument like this:

100. All the beer has been drunk; so we’re going to the bottle‐o.

The premise says that all the beer is gone. But the conclusion only makes sense if
there is more beer at the bottle shop. So whatever domain of things we are talking
about when we state the premise, it cannot include absolutely everything. In Quantifier,
we sidestep the interesting issues involved in deciding just what domain is involved
in evaluating sentences like ‘all the beer has been drunk’, and explicitly include the
current domain of quantification in our symbolisation key.
Note further that to make sense of the sentence ‘all the beer has been drunk’, the do‐
main will have to contain both past and present things, so we can understand what we
are saying about the now‐absent beer. A domain contains what we are talking about.
It might be difficult to understand how we do it, but we do talk about past things,
fictional things, abstract things, merely possible things, and other unusual entities.
So our domains must be flexible enough to include any of these things we might be
talking about. It is a question in philosophical logic as to how we can explain how we
manage to include nonexistent things in our domain of discourse, but for the purposes
of Quantifier all we need to know is that this is something we somehow manage to do.

A domain must have at least one member. A name must pick out
exactly one member of the domain. But a member of the domain may
be picked out by one name, many names, or none at all. The domain
can consist of anything we might be discussing; it is not restricted to
things that presently exist.

Key Ideas in §15


› Quantifier gives us resources for modelling some aspects of sub‐
sentential structure in natural languages: names, predicates, and
some quantifier phrases.
› The names of Quantifier are expressions that are used to refer
to objects in some circumscribed domain; they can often have
their referents temporarily set by associating them with natural
language proper names in a symbolisation key.
› The predicates of Quantifier are expressions that denote proper‐
ties of objects; they can often have their referents temporarily
set by associating them with natural language verb phrases in a
symbolisation key.
› The quantifiers of Quantifier have a fixed meaning not set by a
symbolisation key. They tell us how many things in the domain
meet a certain predicate. In Quantifier, we concentrate on two
quantifiers: the universal ‘∀’ (‘every’) and existential ‘∃’ (‘some’).
› Quantifiers are supported by variables, which behave rather like
pronouns in English.

Practice exercises
A. In each of the following sentences, identify the names and predicates. Comment on
any difficulties.

1. Kurt and Alonzo are logicians;

2. Silver is useful in film photography;

3. Mary’s ring is silver;

4. Kurt and Alonzo lift a couch;

5. The biggest threat to life on earth is carbon dioxide.

B. Identify the possible predicates that can be found by replacing singular terms with
gaps in these sentences:

1. He dislikes Joel;

2. Andrew Leigh was a professor of economics at the ANU;

3. The professor of genomics was elected president of the Academy.

C. Make use of this symbolisation key to symbolise the following sentences into
Quantifier, commenting on any difficulties:

domain: cities and towns in South Australia


𝑎: Adelaide
𝑚: Mount Gambier
𝑇: is a town
𝑈: is ugly
𝐿: is large

1. Adelaide is big and ugly.

2. Mount Gambier is not large.

3. If Mount Gambier is large, then Adelaide definitely is!

4. Mount Gambier is a large village.

5. Every city and town is ugly.

D. Can this argument be adequately symbolised in Quantifier? Comment on any
difficulties.

1. She is tall;
2. Bob likes her;
So: Bob likes someone tall.
16
Sentences with One Quantifier

We now have the basic pieces of Quantifier. Symbolising many sentences of English
will only be a matter of knowing the right way to combine predicates, names, quanti‐
fiers, and the truth‐functional connectives. There is a knack to this, and there is no
substitute for practice.

16.1 Common Quantifier Phrases


As in Sentential (recall §5), we will give canonical symbolisations for certain common
English quantificational structures. Consider these sentences:

101. Every coin in my pocket is a 20¢ piece.


102. Some coin on the table is a dollar.
103. Not all the coins on the table are dollars.
104. None of the coins in my pocket are dollars.

In providing a symbolisation key, we need to specify a domain. Since we are talking


about coins in my pocket and on the table, the domain must at least contain all of
those coins. Since we are not talking about anything besides coins, we let the domain
be all coins. Since we are not talking about any specific coins, we do not need to
deal with any names. So here is our key:

domain: all coins


𝑃: is in my pocket
𝑇: is on the table
𝑄: is a 20¢ piece
𝐷: is a dollar

Sentence 101 is most naturally symbolised using a universal quantifier. The universal
quantifier says something about everything in the domain, not just about the coins in


my pocket. So if we want to talk just about coins in my pocket, we will need to restrict
the quantifier, by imposing a condition on the things we are saying are 20¢ pieces. That
is: something in the domain is claimed to be a 20¢ piece only if it meets the restricting
condition. That leads us to this conditional paraphrase:

105. For any (coin): if that coin is in my pocket [the restriction], then it is a 20¢ piece.

So we can symbolise it as ‘∀𝑥(𝑃𝑥 → 𝑄𝑥)’.


Since sentence 101 is about coins that are both in my pocket and that are 20¢ pieces, it
might be tempting to translate it using a conjunction. However, the sentence ‘∀𝑥(𝑃𝑥 ∧
𝑄𝑥)’ would symbolise the sentence ‘every coin is both a 20¢ piece and in my pocket’.
This obviously means something very different than sentence 101. And so we see:

A sentence can be symbolised as ∀𝑥(ℱ𝑥 → 𝒢𝑥) if it can be paraphrased


in English as ‘every F is G’ or ‘all Fs are Gs’.

Example 102 uses the quantifier phrase ‘some’. The same thought could be expressed
using different quantifier phrases:

106. At least one coin on the table is a dollar.


107. There is a coin on the table that is a dollar.

These phrases all indicate an existential quantifier. In these examples, the class of coins
on the table is being related to the class of dollar coins, and it is claimed that at least
one member of the former class is also in the latter class – that there is overlap. This
is represented in Quantifier following the example of this paraphrase:

108. There is something (a coin): it is in both the class of things on the table, and in
the class of dollar coins.

That is symbolised using a conjunction, ‘∃𝑥(𝑇𝑥 ∧ 𝐷𝑥)’.


We know from Sentential that the order of conjuncts doesn’t matter: ‘(𝑃 ∧𝑄)’ is logically
equivalent to ‘(𝑄 ∧ 𝑃)’. Likewise in Quantifier, ‘∃𝑥(𝑇𝑥 ∧ 𝐷𝑥)’ is logically equivalent to
‘∃𝑥(𝐷𝑥 ∧ 𝑇𝑥)’. This fits well with the English sentences we are symbolising, because
overlap is itself a symmetrical relation between classes. We see this also in the fact
that we can paraphrase example 102 as ‘Some dollar coin is on the table’.
Notice that we needed to use a conditional with the universal quantifier, but we used a
conjunction with the existential quantifier. Suppose we had instead written ‘∃𝑥(𝑇𝑥 →
𝐷𝑥)’. That would mean that there is some object in the domain such that if it is 𝑇, then
it is also 𝐷. For this to be true, we just need something to not be 𝑇. So it is very easy
for ‘∃𝑥(𝑇𝑥 → 𝐷𝑥)’ to be true. Given our symbolisation, it will be true if some coin is
not on the table. Of course there is a coin that is not on the table: there are coins lots
of other places. That is rather less demanding than the claim that something is both
𝑇 and 𝐷.

A conditional will usually be the natural connective to use with a universal quantifier,
but a conditional within the scope of an existential quantifier tends to say something
very weak indeed. As a general rule of thumb, do not put conditionals in the scope of
existential quantifiers unless you are sure that you need one.

A sentence can be symbolised as ∃𝑥(ℱ𝑥 ∧ 𝒢𝑥) if it can be paraphrased


in English as ‘some F is G’, ‘at least one F is G’, or ‘There is an F which
is G’.

Sentence 103 can be paraphrased as, ‘It is not the case that every coin on the table
is a dollar’. So we can symbolise it by ‘¬∀𝑥(𝑇𝑥 → 𝐷𝑥)’. You might look at sentence
103 and paraphrase it instead as, ‘Some coin on the table is not a dollar’. You would
then symbolise it by ‘∃𝑥(𝑇𝑥 ∧ ¬𝐷𝑥)’. Although it is probably not immediately obvious
yet, these two sentences are logically equivalent. (This is due to the logical equival‐
ence between ¬∀𝑥𝒜 and ∃𝑥¬𝒜 , mentioned in §15, along with the logical equivalence
between ¬(𝒜 → ℬ) and 𝒜 ∧ ¬ℬ.)
Sentence 104 can be paraphrased as, ‘It is not the case that there is some dollar
in my pocket’. This can be symbolised by ‘¬∃𝑥(𝑃𝑥 ∧ 𝐷𝑥)’. It might also be para‐
phrased as, ‘Everything in my pocket is a nondollar’, and then could be symbolised
by ‘∀𝑥(𝑃𝑥 → ¬𝐷𝑥)’. Again the two symbolisations are logically equivalent. Both are
correct symbolisations of sentence 104.
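The logical equivalences just claimed for sentences 103 and 104 can likewise be confirmed by brute force over a finite domain. This Python sketch, added as an illustration (the three ‘coins’ are invented stand‐ins), checks every interpretation of ‘𝑇’ and ‘𝐷’, and also exhibits how weak a conditional under an existential quantifier is:

```python
from itertools import product

coins = ["c1", "c2", "c3"]

# Enumerate every interpretation of the one-place predicates
# 'T' (is on the table) and 'D' (is a dollar) over a three-coin domain.
def interpretations():
    for t in product([True, False], repeat=len(coins)):
        for d in product([True, False], repeat=len(coins)):
            yield dict(zip(coins, t)), dict(zip(coins, d))

for T, D in interpretations():
    # ¬∀x(Tx → Dx) is equivalent to ∃x(Tx ∧ ¬Dx)  (sentence 103)
    not_all = not all((not T[x]) or D[x] for x in coins)
    some_not = any(T[x] and not D[x] for x in coins)
    assert not_all == some_not
    # ¬∃x(Tx ∧ Dx) is equivalent to ∀x(Tx → ¬Dx)  (cf. sentence 104)
    none = not any(T[x] and D[x] for x in coins)
    all_not = all((not T[x]) or not D[x] for x in coins)
    assert none == all_not

# The conditional inside an existential is very weak: ∃x(Tx → Dx)
# is already true when just one coin fails to be on the table.
T = {"c1": False, "c2": True, "c3": True}
D = {"c1": False, "c2": False, "c3": False}
assert any((not T[x]) or D[x] for x in coins)  # true, though no coin is both T and D
```

The final assertion shows ‘∃𝑥(𝑇𝑥 → 𝐷𝑥)’ coming out true on an interpretation where nothing is both on the table and a dollar, just as the text warns.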

16.2 Empty Predicates


In §15, I emphasised that a name must pick out exactly one object in the domain. How‐
ever, a predicate need not apply to anything in the domain. A predicate that applies
to nothing in a domain is called an EMPTY predicate (relative to that domain). This is
worth exploring.
Suppose we want to symbolise these two sentences:

109. Every monkey knows sign language


110. Some monkey knows sign language

It is possible to write the symbolisation key for these sentences in this way:

domain: animals
𝑀: is a monkey.
𝑆: knows sign language.

Sentence 109 can now be symbolised by ‘∀𝑥(𝑀𝑥 → 𝑆𝑥)’. Sentence 110 can be symbolised
as ‘∃𝑥(𝑀𝑥 ∧ 𝑆𝑥)’.
It is tempting to say that sentence 109 entails sentence 110. That is, we might think that
it is impossible for it to be the case that every monkey knows sign language, without

it’s also being the case that some monkey knows sign language. But this would be
a mistake. It is possible for the sentence ‘∀𝑥(𝑀𝑥 → 𝑆𝑥)’ to be true even though the
sentence ‘∃𝑥(𝑀𝑥 ∧ 𝑆𝑥)’ is false.
How can this be? The answer comes from considering whether these sentences would
be true or false if there are no monkeys. If there are no monkeys at all (in some domain),
then ‘∀𝑥(𝑀𝑥 → 𝑆𝑥)’ would be vacuously true. Take the domain of reptiles. Look at the
domain, and pick any monkey you like – it knows sign language!1 There is certainly
no counterexample to the claim available in this domain. And because of the role of
the conditional in our symbolisation, it turns out that a universally quantified claim
with an unsatisfied restricting condition will also be true. In Quantifier, a universally
quantified sentence of the form ∀𝓍(𝒜𝓍 → ℬ𝓍) is false only if we can find something
which is 𝒜 without being ℬ. If we can’t find such a thing, perhaps because we can’t
find anything which is 𝒜 in the first place, then the sentence will be true (since truth is
just lack of falsity, and this sentence isn’t false because we can’t find a case that falsifies
it). This derives ultimately from the feature of Sentential we have already acknowledged
to be questionably analogous to English, namely, the fact that a conditional is false only
if there is a counterexample, a case where the antecedent is true and the consequent
false.
Another example will help to bring this home. Suppose we extend the above symbol‐
isation key, by adding:

𝑅: is a refrigerator

Now consider the sentence ‘∀𝑥(𝑅𝑥 → 𝑀𝑥)’. This symbolises ‘every refrigerator is a mon‐
key’. And this sentence is true, given our symbolisation key. This is counterintuitive,
since we do not want to say that there are a whole bunch of refrigerator monkeys. It
is important to remember, though, that ‘∀𝑥(𝑅𝑥 → 𝑀𝑥)’ is true iff any member of the
domain that is a refrigerator is a monkey. Since the domain is animals, there are no
refrigerators in the domain. Again, then, the sentence is vacuously true.
If you were actually dealing with the sentence ‘All refrigerators are monkeys’, then you
would most likely want to include kitchen appliances in the domain. Then the predic‐
ate ‘𝑅’ would not be empty and the sentence ‘∀𝑥(𝑅𝑥 → 𝑀𝑥)’ would be false. Remember,
though, that a predicate is empty only relative to a particular domain.

When ℱ is an empty predicate relative to a given domain, a sentence


∀𝑥(ℱ𝑥 → …) will be vacuously true of that domain.
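Vacuous truth is easy to see computationally. In the following Python sketch, added as an illustration (the animals and their properties are invented), ‘𝑅’ is empty over the domain, so ‘∀𝑥(𝑅𝑥 → 𝑀𝑥)’ comes out true while ‘∃𝑥(𝑅𝑥 ∧ 𝑀𝑥)’ comes out false. The same phenomenon is behind the failure of sentence 109 to entail sentence 110:

```python
# In a domain of animals there are no refrigerators, so the
# universal claim holds for lack of a counterexample.
domain = ["gibbon", "gecko", "goanna"]
R = {x: False for x in domain}  # 'is a refrigerator' — empty on this domain
M = {"gibbon": True, "gecko": False, "goanna": False}  # 'is a monkey'

every_R_is_M = all((not R[x]) or M[x] for x in domain)  # ∀x(Rx → Mx)
some_R_is_M = any(R[x] and M[x] for x in domain)        # ∃x(Rx ∧ Mx)

assert every_R_is_M      # vacuously true: no refrigerator to refute it
assert not some_R_is_M   # but the corresponding existential claim is false
```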

16.3 Picking a Domain


The appropriate symbolisation of an English language sentence in Quantifier will de‐
pend on the symbolisation key. Choosing a key can be difficult. Suppose we want to
symbolise the English sentence:

1 Remember this is not a counterfactual claim (8.6); even if ‘All monkeys know sign language’ is vacuously
true in the domain of reptiles, that wouldn’t mean that ‘Any monkey would know sign language’ is true.

111. Every rose has a thorn.

We might offer this symbolisation key:

𝑅: is a rose
𝑇: has a thorn

It is tempting to say that sentence 111 should be symbolised as ‘∀𝑥(𝑅𝑥 → 𝑇𝑥)’. But
we have not yet chosen a domain. If the domain contains all roses, this would be
a good symbolisation. Yet if the domain is merely things on my kitchen table, then
‘∀𝑥(𝑅𝑥 → 𝑇𝑥)’ would capture only the fact that every rose on my kitchen table has
a thorn. If there are no roses on my kitchen table, the sentence would be
trivially true. This is not what we want. To symbolise sentence 111 adequately, we need
to include all the roses in the domain. But now we have two options.
First, we can restrict the domain to include all roses but only roses. Then sentence
111 can, if we like, be symbolised with ‘∀𝑥𝑇𝑥 ’. This is true iff everything in the domain
has a thorn; since the domain is just the roses, this is true iff every rose has a thorn.
By restricting the domain, we have been able to symbolise our English sentence with a
very short sentence of Quantifier. So this approach can save us trouble, if every sentence
that we want to deal with is about roses.
Second, we can let the domain contain things besides roses: rhododendrons; rats;
rifles; whatevers. And we will certainly need to include a more expansive domain if
we simultaneously want to symbolise sentences like:

112. Every cowboy sings a sad, sad song.

Our domain must now include both all the roses (so that we can symbolise sentence
111) and all the cowboys (so that we can symbolise sentence 112). So we might offer the
following symbolisation key:

domain: people and plants


𝐶: is a cowboy
𝑆: sings a sad, sad song
𝑅: is a rose
𝑇: has a thorn

Now we will have to symbolise sentence 111 with ‘∀𝑥(𝑅𝑥 → 𝑇𝑥)’, since ‘∀𝑥𝑇𝑥 ’ would
symbolise the sentence ‘every person or plant has a thorn’. Similarly, we will have to
symbolise sentence 112 with ‘∀𝑥(𝐶𝑥 → 𝑆𝑥)’.
In general, the universal quantifier can be used to symbolise the English expression
‘everyone’ if the domain only contains people. If there are people and other things in
the domain, then ‘everyone’ must be treated as ‘every person’.
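The choice between a narrow domain with ‘∀𝑥𝑇𝑥 ’ and an expansive domain with a restricting predicate, ‘∀𝑥(𝑅𝑥 → 𝑇𝑥)’, makes no difference to truth values. This Python sketch, added as an illustration (the three objects and their properties are invented), shows the two symbolisations agreeing:

```python
# Two ways to symbolise 'every rose has a thorn': quantify over a
# domain of roses only, or quantify over a wider domain and restrict
# with the predicate 'R'.
wide_domain = ["rose1", "rose2", "cowboy1"]
R = {"rose1": True, "rose2": True, "cowboy1": False}  # 'is a rose'
T = {"rose1": True, "rose2": True, "cowboy1": False}  # 'has a thorn'

roses_only = [x for x in wide_domain if R[x]]

narrow = all(T[x] for x in roses_only)               # ∀xTx, domain restricted to roses
wide = all((not R[x]) or T[x] for x in wide_domain)  # ∀x(Rx → Tx), expansive domain

assert narrow == wide
assert narrow  # every rose here has a thorn
```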

If you choose a narrow domain, you can make the task of symbolisation easier. If we
are attempting to symbolise example 96, ‘every girl thinks that she deserves icecream’,
we can pick the domain to be girls and then we only need to introduce a predicate
‘___ thinks that they themselves deserve icecream’, symbolised ‘𝐷’. The symbolisation
is then the simple ‘∀𝑥𝐷𝑥 ’. But our options are limited if the conversation goes
on to talk about things other than girls. On the other hand, if you pick an expansive
domain (such as everything whatsoever), you can always just impose an appropriate
restriction. In this case, we could introduce the predicate ‘𝐺 ’ to stand for ‘___ is a
girl’, and symbolise the sentence as ‘∀𝑥(𝐺𝑥 → 𝐷𝑥)’.


When choosing an expansive domain, you must take some care with implicitly restric‐
ted predicates. It would not be appropriate to paraphrase ‘all the beer has been drunk’
relative to a very expansive domain as ‘For everything: if it is beer, it has been drunk’.
To adequately capture the intent, we shall need to make the implicit contextual restric‐
tion of the predicate explicit, in something like this paraphrase ‘For everything: if it is
beer at the party, it has been drunk’.

16.4 The Utility of Paraphrase


When symbolising English sentences in Quantifier, it is important to understand the
structure of the sentences you want to symbolise. What matters is the final symbolisa‐
tion in Quantifier, and sometimes you will be able to move from an English language
sentence directly to a sentence of Quantifier. Other times, it helps to paraphrase the
sentence one or more times. Each successive paraphrase should move from the original
sentence closer to something that you can finally symbolise directly in Quantifier.
For the next several examples, we will use this symbolisation key:

domain: women
𝐵: is a bassist.
𝑅: is a rock star.
𝑘: Kim Deal

Now consider these sentences:

113. If Kim Deal is a bassist, then she is a rock star.


114. If any woman is a bassist, then she is a rock star.

The same words appear as the consequent in sentences 113 and 114 (‘… she is a rock
star’), but they mean very different things (recall §15.5). To make this clear, it often
helps to paraphrase the original sentences into a more unusual but clearer form.
Sentence 113 can be paraphrased as, ‘Consider Kim Deal: if she is a bassist, then she
is a rockstar’. The bare pronoun ‘she’ gets to denote Kim Deal because of our initial
‘Consider Kim Deal’ remark. This then says something about one particular person,
and can obviously be symbolised as ‘𝐵𝑘 → 𝑅𝑘’.

Sentence 114 gets a very similar paraphrase, with the same embedded conditional: ‘Con‐
sider any woman: if she is a bassist, then she is a rockstar’. The difference in the ‘Con‐
sider …’ phrase however forces a very different interpretation for the sentence as a whole.
Replacing the English pronouns by variables, the Quantifier equivalent of a pronoun, we
get this awkward quasi‐English paraphrase: ‘For any woman x, if x is a bassist, then x
is a rockstar’. Now this can be symbolised as ‘∀𝑥(𝐵𝑥 → 𝑅𝑥)’. This is the same sentence
we would have used to symbolise ‘Every woman who is a bassist is a rock star’. And on
reflection, that is surely true iff sentence 114 is true, as we would hope.
Consider these further sentences, and let us consider the same interpretation as above,
though in a domain of all people.

115. If anyone is a bassist, then Kim Deal is a rock star.


116. If anyone is a bassist, then they are a rock star.

The same words appear as the antecedent in sentences 115 and 116 (‘If anyone is a
bassist…’). But it can be tricky to work out how to symbolise these two uses. Again,
paraphrase will come to our aid.
Sentence 115 can be paraphrased, ‘If there is at least one bassist, then Kim Deal is a
rock star’. It is now clear that this is a conditional whose antecedent is a quantified
expression; so we can symbolise the entire sentence with a conditional as the main
connective: ‘∃𝑥𝐵𝑥 → 𝑅𝑘’.
Sentence 116 can be paraphrased, ‘For all people x, if x is a bassist, then x is a rock star’.
Or, in more natural English, it can be paraphrased by ‘All bassists are rock stars’. It is
best symbolised as ‘∀𝑥(𝐵𝑥 → 𝑅𝑥)’, just like sentence 114.
The word ‘any’ is particularly tricky, because it can sometimes mean ‘every’ and some‐
times ‘at least one’! Think about the two occurrences of ‘any’ in this sentence:

117. Any student will be happy if they have any money.

This can be paraphrased as:

For every student: if there exists some money they possess, then they will be
happy.

This can be symbolised ‘∀𝑥(𝑆𝑥 → (∃𝑦(𝑀𝑦 ∧ 𝑃𝑥𝑦) → 𝐻𝑥))’, where the first occurrence of
‘any’ is represented by a universal quantifier, and the second by an existential quantifier.
The moral is that the English words ‘any’ and ‘anyone’ should typically be symbolised
using quantifiers. And if you are having a hard time determining whether to use an
existential or a universal quantifier, try paraphrasing the sentence with an English sen‐
tence that uses words besides ‘any’ or ‘anyone’.2

2 The story about ‘any’ and ‘anyone’ is actually rather interesting. It is well‐known to linguists that ‘any’
has at least two readings: so‐called FREE CHOICE ‘ANY’, which is more or less like a universal quantifier
(‘Any friend of Jessica’s is a friend of mine!’), and NEGATIVE POLARITY (NPI) ‘ANY’, which only occurs
in ‘negative’ contexts, like negation (‘I don’t want any peas!’), where it functions more or less like an
existential quantifier (‘It is not the case that: there exist peas that I want’).
Interestingly, the antecedent of a conditional is a negative environment (being equivalent to ¬𝒜 ∨ 𝒞 ),
and so we expect that ‘any’ in the antecedent of a conditional will have an existential interpretation.

16.5 Quantifiers and Scope


Continuing the example, suppose I want to symbolise these sentences:

118. If everyone is a bassist, then Tim is a bassist.


119. Everyone is such that, if they are a bassist, then Tim is a bassist.

To symbolise these sentences, I shall have to add a new name to the symbolisation key,
namely:

𝑏: Tim

Sentence 118 is a conditional, whose antecedent is ‘everyone is a bassist’. So we will


symbolise it with ‘(∀𝑥𝐵𝑥 → 𝐵𝑏)’. This sentence is necessarily true: if everyone is indeed
a bassist, then take any one you like – for example Tim – and he will be a bassist.
Sentence 119, by contrast, might best be paraphrased by ‘every person x is such that,
if x is a bassist, then Tim is a bassist’. This is symbolised by ‘∀𝑥(𝐵𝑥 → 𝐵𝑏)’. And this
sentence is false. Kim Deal is a bassist. So ‘𝐵𝑘’ is true. But Tim is not a bassist, so ‘𝐵𝑏’
is false. Accordingly, ‘𝐵𝑘 → 𝐵𝑏’ will be false. So ‘∀𝑥(𝐵𝑥 → 𝐵𝑏)’ will be false as well.
In short, ‘(∀𝑥𝐵𝑥 → 𝐵𝑏)’ and ‘∀𝑥(𝐵𝑥 → 𝐵𝑏)’ are very different sentences. We can explain
the difference in terms of the scope of the quantifier. The scope of quantification is very
much like the scope of negation, which we considered when discussing Sentential (§6.3),
and it will help to explain it in this way. We define quantifier scope officially in §20,
but we also return to it in a preliminary way in §17.2.
In the sentence ‘(¬𝐵𝑘 → 𝐵𝑏)’, the scope of ‘¬’ is just the antecedent of the conditional.
We are saying something like: if ‘𝐵𝑘’ is false, then ‘𝐵𝑏’ is true. Similarly, in the sentence
‘(∀𝑥𝐵𝑥 → 𝐵𝑏)’, the scope of ‘∀𝑥 ’ is just the antecedent of the conditional. We are saying
something like: if ‘𝐵’ is true of everything, then ‘𝐵𝑏’ is also true.
In the sentence ‘¬(𝐵𝑘 → 𝐵𝑏)’, the scope of ‘¬’ is the entire sentence. We are saying
something like: ‘(𝐵𝑘 → 𝐵𝑏)’ is false. Similarly, in the sentence ‘∀𝑥(𝐵𝑥 → 𝐵𝑏)’, the
scope of ‘∀𝑥’ is the entire sentence. We are saying something like: ‘(𝐵𝑥 → 𝐵𝑏)’ is true
of everything.
Scope can make a drastic difference in meaning. Reconsider these examples:

And it does: ‘If anyone is home, they will answer the door’ means something like: ‘If someone is home,
then that person will answer the door’. It does not mean ‘If everyone is home, then they will answer
the door’. This is what we see in 115.
But 116 is not a conditional – its main connective is a quantifier. So here free choice ‘any’ is the natural
interpretation, so we use the universal quantifier.
We see the same thing with the quantifier ‘someone’. In ‘if someone is a bassist, Kim Deal is’, someone
gets symbolised by an existential quantifier in the scope of a conditional. But in ‘If someone is a bassist,
they are a musician’ it should be symbolised by a universal taking scope over a conditional.

120. (∀𝑥𝐵𝑥 → 𝐵𝑏) – the scope of ‘∀𝑥’ is just the antecedent.
If everything is 𝐵, then 𝑏 is too. Trivially true.

121. ∀𝑥(𝐵𝑥 → 𝐵𝑏) – the scope of ‘∀𝑥’ is the whole sentence.
All 𝐵s are such that 𝑏 is 𝐵. False if 𝑏 isn’t 𝐵 but there are some 𝐵s.

The moral of the story is simple. When you are using quantifiers and conditionals, be
very careful to make sure that you have sorted out the scope correctly.
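The scope contrast between examples 120 and 121 can be replayed concretely. In this Python sketch, added as an illustration of the bassist example (the two‐member domain is invented), the wide‐scope and narrow‐scope readings get different truth values on the very interpretation described above, where Kim is a bassist and Tim is not:

```python
# The scope difference from examples 120 and 121, checked on a toy
# domain. 'B' is 'is a bassist'; the name 'b' refers to Tim.
domain = ["kim", "tim"]
B = {"kim": True, "tim": False}  # Kim is a bassist; Tim is not
b = "tim"

wide = (not all(B[x] for x in domain)) or B[b]    # (∀xBx → Bb): quantifier in antecedent
narrow = all((not B[x]) or B[b] for x in domain)  # ∀x(Bx → Bb): quantifier over the whole

assert wide        # true: not everyone is a bassist, so the conditional holds
assert not narrow  # false: Kim is a bassist but Tim is not
```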

16.6 Dealing with Complex Adjectives


When we encounter a sentence like

122. Herbie is a white car,

we can paraphrase this as ‘Herbie is white and Herbie is a car’. We can then use a
symbolisation key like:

𝑊: is white
𝐶: is a car
ℎ: Herbie

This allows us to symbolise sentence 122 as ‘𝑊ℎ ∧ 𝐶ℎ’. But now consider:

123. Julia Gillard is a former prime minister.


124. Julia Gillard is prime minister.

Following the case of Herbie, we might try to use a symbolisation key like:

𝐹: is former
𝑃: is Prime Minister
𝑗: Julia Gillard.

Then we would symbolise 123 by ‘𝐹𝑗∧𝑃𝑗’, and symbolise 124 by ‘𝑃𝑗’. That would however
be a mistake, since that symbolisation suggests that the argument from 123 to 124 is
valid, because the symbolisation of the premise does logically entail the symbolisation
of the conclusion.
‘White’ is an INTERSECTIVE adjective, which is a fancy way of saying that the white 𝐹 s are
among the 𝐹 s and among the white things: any white car is a car and white, just like
any successful lawyer is a lawyer and successful, and a one tonne rhinoceros is both a
rhino and a one tonne thing. But ‘former’ is a PRIVATIVE adjective, which means that
any former 𝐹 is not now among the 𝐹 s. Other privative adjectives occur in phrases such

as ‘fake diamond’, ‘Deputy Lord Mayor’, and ‘mock trial’. When symbolising these
sentences, you cannot treat them as a conjunction. So you will need to symbolise
‘___ is a fake diamond’ and ‘___ is a diamond’ using completely different predicates,
to avoid a spurious entailment between them. The moral is: when you see an
adjectivally modified predicate like ‘white car’, you need to ask yourself carefully
whether the modifier is intersective, and can be symbolised as a conjunctive predicate,
or not.3
Things are a bit more complicated, however. Recall this example from page 88:

Daisy is a small cow.

We note that a small cow is definitely a cow, and so it seems we might treat ‘small’ as an
intersective adjective. We might formalise this sentence like this: ‘𝑆𝑑 ∧ 𝐶𝑑 ’, assuming
this symbolisation key:

𝑆: is small
𝐶: is a cow
𝑑 : Daisy

But note that our symbolisation would suggest that this argument is valid:

125. Daisy is a small cow; so Daisy is small.

The symbolised argument, 𝑆𝑑 ∧ 𝐶𝑑 ∴ 𝑆𝑑 , is clearly valid.


But the original argument 125 is not valid. Even a small cow is still rather large. (Like‐
wise, even a short basketball player is still generally well above average height.) The
point is that ‘___ is a small cow’ denotes the property something has when it is small
for a cow, while ‘___ is small’ denotes the property of being a small thing. (In
ordinary speech we tend to keep the ‘for an 𝐹 ’ part of these phrases silent, and let our
conversational circumstances supply it automatically.) But neither should we treat
‘small’ as a nonintersective adjective. If we do, we will be unable to account for the
valid argument ‘Daisy is a small cow, so Daisy is a cow’.
The correct symbolisation key will thus be this, keeping the other symbols as they
were:

𝑆: is small‐for‐a‐cow

On this symbolisation key, the valid formal argument 𝑆𝑑 ∧ 𝐶𝑑 ∴ 𝐶𝑑 corresponds to this


valid argument, as it should:

3 This caution also applies to adjectives which are neither intersective nor privative, like ‘alleged’ in ‘al‐
leged murderer’. These ought not be symbolised by conjunction either.

Daisy is a cow that is small‐for‐a‐cow; so Daisy is a cow.

Likewise, this rather unusual English argument turns out to be valid too:

Daisy is a cow that is small‐for‐a‐cow; so Daisy is small‐for‐a‐cow.

(Note that it can be rather difficult to hear the English sentence ‘Daisy is small’ as
saying the same thing as the conclusion of this argument, ‘Daisy is small for a cow’,
which explains why ‘Daisy is a small cow, so Daisy is small’ strikes us as invalid.)
If we take these observations to heart, there are many intersective adjectives which can
change their meaning depending on what predicate they are paired with. Small‐for‐an‐
oil‐tanker is a rather different size property than small‐for‐a‐mouse, but in ordinary
English we use the phrases ‘small oil tanker’ and ‘small mouse’ without bothering to
make these different senses of ‘small’ explicit.
The way that ‘small’ behaves makes it a member of the class of SUBSECTIVE adjectives,
as in ‘poor dancer’. These are like intersective adjectives in that every poor dancer is
a dancer (and every small cow is a cow). But the way that ‘poor’ behaves in this ex‐
pression is such that we cannot conclude that a poor dancer is poor – they are bad at
dancing, not necessarily financially disadvantaged. In these cases, the meaning of the
modifying adjective is itself modified by the noun: in ‘poor dancer’, we get a distinct‐
ively dancing‐related sense of ‘poor’.
When symbolising, it is best to make these modified adjectives very explicit, generally
introducing a new predicate into the symbolisation key to represent them. Doing so
blocks the fallacious argument from ‘Daisy is a small cow’ to ‘Daisy is small’, where
the natural sense of the conclusion is the generic size claim ‘Daisy is small‐for‐a‐thing’.
(Likewise, symbolising ‘___ is a poor dancer’ as ‘___ is poor‐for‐a‐dancer and ___ is a
dancer’ blocks the fallacious argument from ‘Rupert Murdoch is a poor dancer’ to ‘Rupert
Murdoch is poor’.)
The upshot is this: though you can symbolise ‘Daisy is a small cow’ as a conjunction,
you will need to symbolise ‘___ is a small cow’ and ‘___ is a small animal’ using
different predicates of Quantifier to stand for the different appearances of ‘small’ –
one to symbolise ‘small for a cow’ and another to symbolise ‘small for an animal’.
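The effect of using two different predicates can be seen in a tiny illustrative sketch in Python (Python is not part of our formal language, and the miniature model – just Daisy and her size properties – is invented for the example). With separate predicates for ‘small‐for‐a‐cow’ and plain ‘small’, the premise of the fallacious argument comes out true while its conclusion comes out false:

```python
# An invented miniature model: Daisy is a cow, and small for a cow,
# but nothing in the model is small outright.
cows = {"daisy"}
small_for_a_cow = {"daisy"}
small_for_a_thing = set()

# 'Daisy is a small cow', symbolised as a conjunction using the
# cow-relative size predicate:
premise = "daisy" in cows and "daisy" in small_for_a_cow

# 'Daisy is small', symbolised using the generic size predicate:
conclusion = "daisy" in small_for_a_thing

assert premise and not conclusion  # the inference is blocked
```

Since the premise can be true while the conclusion is false, the symbolised argument is invalid, which is just what we wanted.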
The overall message is not particularly specific: treat adjectives with care, and always
think about whether a conjunction or some other symbolisation best captures what is
going on in the English. There is no substitute for practice in developing a good sense
of how to symbolise arguments.

16.7 Generics
One final complication presents itself. In English, there seems to be a difference
between these sentences:

126. Ducks lay eggs;


127. All ducks lay eggs.
§16. SENTENCES WITH ONE QUANTIFIER 133

The sentence in 127 is false: drakes and ducklings do not, for example. But nevertheless
126 seems to be true, for all that. That sentence lacks an explicit quantifier – it doesn’t
say ‘all ducks lay eggs’. It is what is known as a GENERIC claim: it shares a structure
with examples like ‘cows eat grass’ or ‘rocks are hard’. Generic claims concern what is
typical or normal: the typical duck lays eggs, the typical rock is hard, the typical cow
eats grass. Unlike universally quantified claims, generics are exception‐tolerant: even
if drakes don’t lay eggs, still, ducks lay eggs.
We cannot represent this exception‐tolerance very easily in Quantifier. The initial idea
is to use the universal quantifier, but this will give the wrong results in some cases. For
it will make this argument come out valid, when it should be ruled invalid:

Ducks lay eggs. Donald Duck is a duck. So Donald Duck lays eggs.

One alternative idea that we can implement is that the word ‘ducks’ in 126 is referring to
a natural kind, the species of ducks. So rather than being a quantified sentence,
it is just a subject‐predicate sentence, saying something more or less like ‘The
duck is an oviparous species’. This certainly works for some cases, such as ‘Rabbits are
abundant’, where we have to be understood as saying something about the kind. (How
could an individual be abundant?)
But this cannot handle every aspect of ‘ducks lay eggs’. People do treat those generics
as quantified, because they are often willing to conclude things about individuals given
the generic claim. Given the information that ducks lay eggs, and that Wilhelmina is a
duck, most people conclude that Wilhelmina lays eggs – thus apparently treating the
generic as having the logical role of a universal quantifier.
The proper treatment of generics in English remains a wide‐open question.4 We will
not delve into it further, but you should be careful when symbolising not to be drawn
into the trap of unwarily treating every generic as a universal.

Key Ideas in §16


› Our quantifiers, together with truth‐functional connectives, suf‐
fice to symbolise many natural language quantifier phrases in‐
cluding ‘all’, ‘some’, and ‘none’.
› Judicious choice of domain can save you trouble when symbol‐
ising, but can also hamper your ability to symbolise all the sen‐
tences you’d like to.
› Paraphrasing natural language sentences can often reveal their
quantifier structure, even if the surface form is misleading.
› Complexities in symbolising arise from adjectival modification
of predicates, empty predicates (relative to a chosen domain),
and generics – take care.

4 See, for example, Sarah‐Jane Leslie and Adam Lerner (2016) ‘Generic Generalizations’, in Edward N Za‐
lta, ed., The Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/archives/win2016/
entries/generics/.
134 THE LANGUAGE OF QUANTIFIED LOGIC

Practice exercises
A. Here are the syllogistic figures identified by Aristotle and his successors, along with
their medieval names:

› Barbara. All G are F. All H are G. So: All H are F.

› Celarent. No G are F. All H are G. So: No H are F

› Ferio. No G are F. Some H is G. So: Some H is not F

› Darii. All G are F. Some H is G. So: Some H is F.

› Camestres. All F are G. No H are G. So: No H are F.

› Cesare. No F are G. All H are G. So: No H are F.

› Baroko. All F are G. Some H is not G. So: Some H is not F.

› Festino. No F are G. Some H are G. So: Some H is not F.

› Datisi. All G are F. Some G is H. So: Some H is F.

› Disamis. Some G is F. All G are H. So: Some H is F.

› Ferison. No G are F. Some G is H. So: Some H is not F.

› Bokardo. Some G is not F. All G are H. So: Some H is not F.

› Camenes. All F are G. No G are H. So: No H is F.

› Dimaris. Some F is G. All G are H. So: Some H is F.

› Fresison. No F are G. Some G is H. So: Some H is not F.

Symbolise each argument in Quantifier.


B. Using the following symbolisation key:

domain: people
𝐾: ___ knows the combination to the safe
𝑆: ___ is a spy
𝑉: ___ is a vegetarian
ℎ: Hofthor
𝑖 : Ingmar

symbolise the following sentences in Quantifier:

1. Neither Hofthor nor Ingmar is a vegetarian.


2. No spy knows the combination to the safe.
3. No one knows the combination to the safe unless Ingmar does.
4. Hofthor is a spy, but no vegetarian is a spy.

C. Using this symbolisation key:

domain: all animals


𝐴: ___ is an alligator.
𝑀: ___ is a monkey.
𝑅: ___ is a reptile.
𝑍: ___ lives at the zoo.
𝑎: Amos
𝑏: Bouncer
𝑐 : Cleo

symbolise each of the following sentences in Quantifier:

1. Amos, Bouncer, and Cleo all live at the zoo.


2. Bouncer is a reptile, but not an alligator.
3. Some reptile lives at the zoo.
4. Every alligator is a reptile.
5. Any animal that lives at the zoo is either a monkey or an alligator.
6. There are reptiles which are not alligators.
7. If any animal is a reptile, then Amos is.
8. If any animal is an alligator, then it is a reptile.

D. For each argument, write a symbolisation key and symbolise the argument in Quan‐
tifier. In each case, try to decide if the argument you have symbolised is valid.

1. Willard is a logician. All logicians wear funny hats. So Willard wears a funny
hat.
2. Nothing on my desk escapes my attention. There is a computer on my desk. As
such, there is a computer that does not escape my attention.
3. All my dreams are black and white. Old TV shows are in black and white. There‐
fore, some of my dreams are old TV shows.
4. Neither Holmes nor Watson has been to Australia. A person could see a
kangaroo only if they had been to Australia or to a zoo. Although Watson has
not seen a kangaroo, Holmes has. Therefore, Holmes has been to a zoo.
5. No one expects the Spanish Inquisition. No one knows the troubles I’ve seen.
Therefore, anyone who expects the Spanish Inquisition knows the troubles I’ve
seen.
6. All babies are illogical. Nobody who is illogical can manage a crocodile. Berthold
is a baby. Therefore, Berthold is unable to manage a crocodile.
17
Multiple Generality

So far, we have only considered sentences that require simple predicates with just one
‘gap’, and at most one quantifier. Much of the fragment of Quantifier that focuses on
such sentences was already discovered and codified into syllogistic logic by Aristotle
more than 2000 years ago. The full power of Quantifier really comes out when we start
to use predicates with many ‘gaps’ and multiple quantifiers. Despite first appearances,
the discovery of how to handle such sentences was a very significant one. For this
insight, we largely have the German mathematician and philosopher Gottlob Frege
(1879) to thank.1

17.1 Many‐place Predicates


All of the predicates that we have considered so far concern properties that objects
might have by themselves, as it were. ‘Herbie is white’, ‘Kim Deal is a bassist’, etc., all
talk about the features of a single individual. The associated predicates, such as ‘___
is white’, have one gap in them, and to make a sentence, we simply need to slot in one
name. They are ONE‐PLACE predicates.
But other predicates concern the relationship between two things. Here are some ex‐
amples of many‐place predicates in English:

___ loves ___
___ is to the left of ___

1 Frege (1879) Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens,
Halle a. S.: Louis Nebert. Translated as ‘Concept Script, a formal language of pure thought modelled
upon that of arithmetic’, by S. Bauer‐Mengelberg in J. van Heijenoort, ed. (1967) From Frege to Gödel:
A Source Book in Mathematical Logic, 1879–1931, Cambridge, MA: Harvard University Press. The same
logic was independently discovered by the American philosopher and mathematician Charles Sanders
Peirce: see Peirce (1885) ‘On the Algebra of Logic. A Contribution to the Philosophy of Notation’ Amer‐
ican Journal of Mathematics 7, pp. 197–202. Warning: don’t consult either of these original works,
which will likely just confuse you. The present text is the result of more than a century of refinement
in how to present the basic systems Frege and Peirce introduced, and is a lot more user friendly.


___ is in debt to ___
___ is supervised by ___

These are TWO‐PLACE predicates. They need to be filled in with two terms (names or
pronouns, most commonly) in order to make a sentence. Conversely, if we start with an
English sentence containing many singular terms, we can remove two singular terms,
to obtain different two‐place predicates. Consider the sentence ‘Vinnie borrowed the
family car from Nunzio’. By deleting two singular terms, we can obtain any of three
different two‐place predicates:

Vinnie borrowed ___ from ___;
___ borrowed the family car from ___;
___ borrowed ___ from Nunzio.

And by removing all three singular terms, we obtain a THREE‐PLACE predicate:

___ borrowed ___ from ___.

Indeed, there is no in principle upper limit on the number of gaps or places that our
predicates may contain.
Now there is a little problem with the above. I have used the same symbol, ‘___’, to
indicate a gap formed by deleting a term from a sentence. However (as Frege emphas‐
ised), these are different gaps. To obtain a sentence, we can fill them in with the same
term, but we can equally fill them in with different terms, and in various different or‐
ders. The following are all perfectly good sentences, obtained by filling in the gaps in
‘___ loves ___’, but they mean quite different things:

Karl loves Karl;


Karl loves Imre;
Imre loves Karl;
Imre loves Imre.

The point is that we need some way of keeping track of the gaps in predicates, so that
we can keep track of how we are filling them in.
Another way to put the point: when it comes to two‐(or more)‐place predicates, some‐
times the order matters. ‘Shaq is taller than Jordan’ doesn’t mean the same thing as
‘Jordan is taller than Shaq’. It matters whose name fills the first gap in the predicate,
and whose name fills the second.
To keep track of the gaps, we shall label them. The labelling conventions I adopt are
best explained by example. Suppose I want to symbolise the following sentences:

128. Karl loves Imre.


129. Imre loves himself.
130. Karl loves Imre, but not vice versa.
131. Karl is loved by Imre.

I will start with the following symbolisation key:

domain: people
𝑖 : Imre
𝑘: Karl
𝐿: ___₁ loves ___₂

› Sentence 128 will now be symbolised by ‘𝐿𝑘𝑖 ’.


› Sentence 129 can be paraphrased as ‘Imre loves Imre’. It can now be symbolised
by ‘𝐿𝑖𝑖 ’.
› Sentence 130 is a conjunction. We might paraphrase it as ‘Karl loves Imre, and
Imre does not love Karl’. It can now be symbolised by ‘𝐿𝑘𝑖 ∧ ¬𝐿𝑖𝑘’.
› Sentence 131 might be paraphrased by ‘Imre loves Karl’. It can then be symbolised
by ‘𝐿𝑖𝑘’. Of course, this erases the difference in tone between the active and
passive voice; such nuances are lost in Quantifier.

This last example highlights something important. Suppose we add to our symbolisa‐
tion key the following:

𝑀: ___₂ loves ___₁

Here, we have used the same English word (‘loves’) as we used in our symbolisation
key for ‘𝐿’. However, we have swapped the order of the gaps around (just look closely
at those little subscripts!) So ‘𝑀𝑘𝑖 ’ and ‘𝐿𝑖𝑘’ now both symbolise ‘Imre loves Karl’. ‘𝑀𝑖𝑘’
and ‘𝐿𝑘𝑖 ’ now both symbolise ‘Karl loves Imre’. Since love can be unrequited, these are
very different claims. The moral is simple. When we are dealing with predicates with
more than one place, we need to pay careful attention to the order of the places.
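The importance of argument order can be mimicked in a quick Python sketch (this is illustration only, not part of Quantifier; the facts about who loves whom are invented). ‘𝐿’ and ‘𝑀’ become two functions over the same underlying relation, with their places swapped:

```python
# Invented model: Karl loves Imre, and no one else loves anyone.
loves = {("karl", "imre")}

def L(x, y):
    """Lxy: the gap-1 term loves the gap-2 term."""
    return (x, y) in loves

def M(x, y):
    """Mxy: the gap-2 term loves the gap-1 term -- same relation, swapped."""
    return (y, x) in loves

assert L("karl", "imre")       # 'Lki' is true
assert not L("imre", "karl")   # 'Lik' is false: the love is unrequited
assert M("imre", "karl")       # 'Mik' also symbolises 'Karl loves Imre'
assert not M("karl", "imre")   # 'Mki' symbolises 'Imre loves Karl'
```

The two functions agree on the underlying facts but disagree sentence‐by‐sentence, which is exactly how ‘𝐿’ and ‘𝑀’ behave.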
With these examples in hand, I can now give the official account of how we understand
Quantifier symbolisations. Suppose we have a Quantifier expression 𝒜𝑡₁, …, 𝑡ₖ, where
each 𝑡ᵢ is a name or a variable, symbolising a singular term, and where 𝒜 symbolises
a 𝑘‐place predicate. The 𝑖‐th term is to be interpreted as filling the gap labelled ‘𝑖’. So
consider the following symbolisation key:

domain: places
𝑎: Adelaide
𝑏: Alice Springs
𝑐: Coober Pedy
𝐵: ___₂ is between ___₁ and ___₃
𝐾: ___₁ is between ___₂ and ___₃

Then if we want to symbolise ‘Coober Pedy is between Adelaide and Alice Springs’,
we can do so using either ‘𝐵𝑎𝑐𝑏’ or ‘𝐾𝑐𝑎𝑏’. The difference is in how the symbolisation
key instructs us to fill the gaps we have established in the predicate as we take steps
to represent it symbolically. There is no ‘right’ answer here: either can be good. The
representation using ‘𝐵’ graphically represents which item is between the other two
in the syntax itself, while the representation using ‘𝐾’ is more faithful to the original
English.
Suppose we add to our symbolisation key the following:

𝑆: ___₁ thinks only of ___₁
𝑇: ___₁ thinks only of ___₂
𝑎: Alice

As in the case of ‘𝐿’ and ‘𝑀’ above, the difference between these examples is only in how
the gaps in the construction ‘… thinks only of …’ are labelled. In ‘𝑇’, we have labelled the
two gaps differently. They do not need to be filled with different names or variables,
but there is always the potential to put different names in those different gaps. In the
case of ‘𝑆’, the gaps have the same label. In some sense, there is only one gap in this
sentence, which is why the symbolisation key associates it with a one‐place predicate
– it means something like ‘𝑥 thinks only of themself’. The second predicate is more
flexible. Take something we can say with the predicate ‘𝑆’, such as ‘𝑆𝑎’, ‘Alice thinks
only of herself’. We can express pretty much the same thought using the two‐place
predicate ‘𝑇’: ‘𝑇𝑎𝑎’.
We have introduced a potential ambiguity in our treatment of predicates. (See also
§20.2.) There is nothing overt in our language that distinguishes the one‐place pre‐
dicate ‘𝐴’ (such that ‘𝐴𝑏’ is grammatical) from the two‐place predicate ‘𝐴’ (such that
‘𝐴𝑏’ is ungrammatical, but ‘𝐴𝑏𝑘’ is grammatical). We are, in effect, just letting context
disambiguate how many argument places there are in a given predicate, by assuming
that in any expression of Quantifier we write down, the number of names or variables
following a predicate indicates how many places it has. We could introduce a system
to disambiguate: perhaps adding a superscripted ‘1’ to all one‐place predicates, a su‐
perscripted ‘2’ to all two‐place predicates, etc. Then ‘𝐴¹𝑏’ is grammatical while ‘𝐴¹𝑏𝑘’
is not; conversely, ‘𝐴²𝑏’ is ungrammatical and ‘𝐴²𝑏𝑘’ is grammatical. This system of
superscripts would be effective but cumbersome. We will thus keep to our existing
practice, letting context disambiguate. What you should not do, however, is make use
of the same capital letter to symbolise two different predicates in the same symbolisa‐
tion key. If you do that, context will not disambiguate, and you will have failed to give
an interpretation of the language at all.

17.2 Scope and Nested Quantifiers


Once we have two (or more) gaps in a predicate, we can fill them with different things.
We’ve so far seen cases where multiple names are slotted into a many‐place predicate.
But we can also insert other terms, like variables. So to continue our example using

the predicate ‘𝑇’, ‘___₁ thinks only of ___₂’, we can put a variable in the first gap, and
a name in the second, if we wish: ‘𝑇𝑥𝑎’. This isn’t a sentence, because no quantifier tells
us how to understand that variable. (The sentence might be representing ‘they think
only of Alice’, but without context there is no determinate referent for the pronoun
‘they’.) Introduce a quantifier, and we have an interpretable sentence:

132. ∀𝑥𝑇𝑥𝑎;
Everyone thinks only of Alice.

The fact that we can fill the two gaps of two‐place predicates with different things, or
even with the same thing, gives us a reason to favour the two‐place predicate symbol‐
isation of ‘Alice thinks only of herself’ as ‘𝑇𝑎𝑎’. That allows us to symbolise certain
arguments that cannot be adequately symbolised using a one‐place predicate. For ex‐
ample: ‘Alice thinks only of herself; so there is someone who is the only person Alice
thinks of’. The symbolisation of this argument might be: ‘𝑇𝑎𝑎 ∴ ∃𝑥𝑇𝑎𝑥 ’. This might
have some prospect of being valid, whereas ‘𝑆𝑎 ∴ ∃𝑥𝑇𝑎𝑥 ’ will not be valid.
The real power of many‐place predicates comes when we consider examples in which
both gaps in the predicate are filled by variables governed by different quantifiers. In
cases where the quantifier expressions interact, we can express things we cannot say
even when we allow logically complex combinations of one‐quantifier sentences. With
this power comes potential confusion too. So let’s proceed carefully.
Consider the sentence ‘everyone loves someone’. This illustrates our goal, as two quan‐
tifier expressions occur in this sentence: ‘everyone’ and ‘someone’. But it also illustrates
the potential pitfalls, as there is a possible ambiguity in this sentence. It might mean
either of the following:

133. For every person, there is some person that they love
134. There is some particular person whom every person loves

It is fairly straightforward to see that these don’t mean the same thing. The first would
be true as long as everybody has somebody they love. One sort of case in which 133 is
true is the cyclic central love triangle in Twelfth Night, where Viola loves Duke Orsino,
the Duke loves Olivia, and Olivia loves Viola (who, disguised as a young man, is the
Duke’s go‐between with Olivia).
In the Twelfth Night situation, 134 is not true. It could only be true if everybody loves
the same person, e.g., if the Duke, Viola, and Olivia herself all love Olivia.
How can we symbolise these two different disambiguations of our original sentence?
(Remember: one of the strengths of symbolic logic is that it is supposed to be able to
clearly represent that which would be ambiguous in natural language.)
Let’s paraphrase a little more formally as we step towards a fully symbolic representa‐
tion. As our sentence has two quantifiers, I will use numbers to link pronouns in our
paraphrase with the quantifier expressions which govern them. Using this device, our
sentences can be paraphrased as follows:

135. Everyone₁ is such that there is someone₂ such that: they₁ love them₂.
136. There is someone₂ such that everyone₁ is such that: they₁ love them₂.

(Take a moment to convince yourself that these paraphrases succeed.)


You can see immediately that the difference in these paraphrases lies in the order of
the quantifier expressions, and the remainder of the paraphrase, ‘they₁ love them₂’, is
the same in each sentence. Using variables to symbolise pronouns, and choosing the
variable ‘𝑥’ for ‘they₁’ and ‘𝑦’ for ‘them₂’, we can symbolise this as ‘𝐿𝑥𝑦’, where ‘𝐿’ stands
for the two‐place predicate ‘___₁ loves ___₂’.

The quantifier order in the paraphrases governs how they interact. As we saw in §16.5,
the scope of a quantifier is roughly the Quantifier expression in which that quantifier
is the main connective. (Later on we will be a little more precise about the way that
quantifier scope functions in Quantifier: see §20.) So in ‘∀𝑥∃𝑦𝐿𝑥𝑦’, the scope of ‘∀𝑥’ is
the whole sentence, while the scope of ‘∃𝑦’ is just ‘∃𝑦𝐿𝑥𝑦’. The following guides us in
interpreting these ‘nested’ quantifiers, in which one falls in the scope of another:

When one quantifier occurs in the scope of another, the narrower


scope quantifier should be understood with respect to the value as‐
signed to a variable by the wider scope quantifier.

Let’s apply this to our example. In 135 ‘everyone’ comes first, and ‘someone’ comes next.
The intended interpretation is that this is true iff for any person 𝑥 that you pick, with
respect to that choice you can then find someone 𝑦 who 𝑥 loves. If you had chosen
someone else as the value of 𝑥, then parasitic on that different choice you may end up
needing to find a different value for 𝑦. Compare the reversed quantifier scope in 136.
That is true iff there is someone 𝑦 such that, with respect to that particular choice for
𝑦, for any person 𝑥 you pick, 𝑥 loves 𝑦. With respect to a different initial choice for 𝑦, there
may be values for 𝑥 that do not satisfy 𝑥 loves 𝑦, but as the initial choice is governed
by an existential quantifier, that won’t undermine the truth of the sentence. This gives
us our two different symbolisations:

› Sentence 133 can be symbolised by ‘∀𝑥∃𝑦𝐿𝑥𝑦’. Return to our example love tri‐
angle between Duke Orsino, Viola, and Olivia. For any of the three people you
might choose, you can find another person in the domain who they love. So
sentence 133 is true.

› Sentence 134 is symbolised by ‘∃𝑦∀𝑥𝐿𝑥𝑦’. Sentence 134 is not true in the Twelfth
Night situation. For each of the people in the domain, you can find someone
who doesn’t love them, and hence no one is universally beloved. If, instead, each
person loved Olivia, then we could find someone (Olivia), such that everyone
else we examined turns out to love them. In that case, 134 would be true.
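These truth conditions can be checked mechanically on a finite domain. Here is a sketch in Python of the Twelfth Night situation (the modelling choices are mine, not part of the text): nested loops mirror nested quantifiers, with the inner quantifier evaluated relative to each value picked by the outer one.

```python
domain = {"orsino", "viola", "olivia"}
# The cyclic love triangle: Viola loves Orsino, Orsino loves Olivia,
# and Olivia loves Viola.
loves = {("viola", "orsino"), ("orsino", "olivia"), ("olivia", "viola")}

def L(x, y):
    return (x, y) in loves

# '∀x∃y Lxy': for every person, there is someone they love.
s133 = all(any(L(x, y) for y in domain) for x in domain)

# '∃y∀x Lxy': there is someone whom everyone loves.
s134 = any(all(L(x, y) for x in domain) for y in domain)

assert s133        # true in the love triangle
assert not s134    # false: no one is universally beloved
```

Swapping `all` and `any` – that is, swapping the quantifier order – changes the truth value on the very same facts, which is the point of the example.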

This example, besides giving some indication of how to read sentences with multiple
quantifiers, illustrates that quantifier scope matters a great deal. Indeed, the mistake

that arises when one illegitimately switches them around even has a special name: a
quantifier shift fallacy. Here is a real life example from Aristotle:2

Suppose, then, that [A] the things achievable by action have some end that
we wish for because of itself, and because of which we wish for the other
things, and that we do not choose everything because of something else
– for if we do, it will go on without limit, so that desire will prove to be
empty and futile[; c]learly, [B] this end will be the good, that is to say, the
best good. (Aristotle, Nichomachean Ethics 1094a 18‐22)

Setting aside Aristotle’s subsidiary argument about desire, this argument seems to in‐
volve the following pattern of inference:

Every action aims at some end which is desired because of itself. (∀∃)
So: There is an end desired because of itself which is the aim of every action, the best
good. (∃∀)

This argument form is obviously invalid. It’s just as bad as:3

Every dog has its day. (∀∃)


So: There is a day for all the dogs. (∃∀)

The moral is: take great care with the scope of quantification.
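The invalidity of the quantifier shift can likewise be exhibited on a small finite model (an invented one, sketched in Python): a situation where the ∀∃ premise is true but the ∃∀ conclusion is false.

```python
dogs = {"rex", "spot"}
days = {"monday", "tuesday"}
# Invented facts: each dog has its own day, and no day is shared.
has_day = {("rex", "monday"), ("spot", "tuesday")}

# '∀∃': every dog has some day or other.
premise = all(any((d, t) in has_day for t in days) for d in dogs)

# '∃∀': there is some one day had by all the dogs.
conclusion = any(all((d, t) in has_day for d in dogs) for t in days)

assert premise and not conclusion  # true premise, false conclusion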

17.3 Stepping Stones to Symbolisation


Once we have the possibility of multiple quantifiers and many‐place predicates, rep‐
resentation in Quantifier can quickly start to become a bit tricky. When you are trying
to symbolise a complex sentence, I recommend laying down several stepping stones.
As usual, this idea is best illustrated by example. Consider this representation key:

domain: people and dogs


𝐷: is a dog
1
𝐹: is a friend of
1 2
𝑂: owns
1 2
𝑔: Geraldo

And now let’s try to symbolise these sentences:

2 Note that it is hotly contested whether Aristotle actually commits a fallacy here, given the compressed
nature of his prose. See, inter alia, J L Ackrill (1999), ‘Aristotle on eudaimonia’, pp. 57–77 in N Sherman,
ed., Aristotle’s Ethics: Critical Essays, Rowman & Littlefield.
3 Thanks to Rob Trueman for the example.

137. Geraldo is a dog owner.


138. Someone is a dog owner.
139. All of Geraldo’s friends are dog owners.
140. Every dog owner is the friend of a dog owner.
141. Every dog owner’s friend owns a dog of a friend.

Sentence 137 can be paraphrased as, ‘There is a dog that Geraldo owns’. This can be
symbolised by ‘∃𝑥(𝐷𝑥 ∧ 𝑂𝑔𝑥)’.
Sentence 138 can be paraphrased as, ‘There is some y such that y is a dog owner’. Deal‐
ing with part of this, we might write ‘∃𝑦(𝑦 is a dog owner)’. Now the fragment we have
left as ‘𝑦 is a dog owner’ is much like sentence 137, except that it is not specifically about
Geraldo. So we can symbolise sentence 138 by:

∃𝑦∃𝑥(𝐷𝑥 ∧ 𝑂𝑦𝑥)

I need to pause to clarify something here. In working out how to symbolise the last
sentence, we wrote down ‘∃𝑦(𝑦 is a dog owner)’. To be very clear: this is neither an
Quantifier sentence nor an English sentence: it uses bits of Quantifier (‘∃’, ‘𝑦’) and bits
of English (‘dog owner’). It really is just a stepping‐stone on the way to symbolising
the entire English sentence with a Quantifier sentence, a bit of rough‐working‐out.
Sentence 139 can be paraphrased as, ‘Everyone who is a friend of Geraldo is a dog owner’.
Using our stepping‐stone tactic, we might write

∀𝑥(𝐹𝑥𝑔 → 𝑥 is a dog owner)

Now the fragment that we have left to deal with, ‘𝑥 is a dog owner’, is structurally just
like sentence 137. But it would be a mistake for us simply to put ‘𝑥’ in place of ‘𝑔’ from
our symbolisation of 137, yielding

∀𝑥(𝐹𝑥𝑔 → ∃𝑥(𝐷𝑥 ∧ 𝑂𝑥𝑥)).

Here we have a CLASH OF VARIABLES. The scope of the universal quantifier, ‘∀𝑥 ’, is
the entire conditional. But ‘𝐷𝑥 ’ also falls within the scope of the existential quantifier
‘∃𝑥’. Which quantifier has priority and governs the interpretation of the variable? In
Quantifier, if a variable 𝓍 occurs in a Quantifier sentence, it is always governed by the
quantifier which has the narrowest scope which includes that occurrence of 𝓍. So in the
sentence above, the quantifier ‘∃𝑥’ governs every occurrence of ‘𝑥’ in ‘(𝐷𝑥∧𝑂𝑥𝑥)’. Given
this, the symbolisation does not mean what we intended. It says, roughly, ‘everyone
who is a friend of Geraldo is such that there is a self‐owning dog’. This is not at all the
meaning of the English sentence we are aiming to symbolise.
To provide an adequate symbolisation, then, we must avoid clashing variables. We
can do this easily enough. There was no requirement to use ‘𝑥’ as the variable in our
symbolisation of 137, so we can easily choose some different variable for our existential
quantifier. That will give us something like this, which adequately symbolises sentence
139:
∀𝑥(𝐹𝑥𝑔 → ∃𝑧(𝐷𝑧 ∧ 𝑂𝑥𝑧)).
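The difference between the clashed and the corrected symbolisations can be seen by evaluating both on a small invented model, sketched here in Python (the extra person ‘Alvin’ and the dog ‘Fido’ are my additions, purely for illustration). The inner ‘∃𝑥’ of the clashed sentence shadows the outer ‘∀𝑥’, much as an inner loop variable shadows an outer one:

```python
domain = {"geraldo", "alvin", "fido"}
dogs = {"fido"}
friend_of = {("alvin", "geraldo")}  # Alvin is a friend of Geraldo
owns = {("alvin", "fido")}          # Alvin owns Fido

def D(x): return x in dogs
def F(x, y): return (x, y) in friend_of
def O(x, y): return (x, y) in owns

g = "geraldo"

# Corrected: ∀x(Fxg → ∃z(Dz ∧ Oxz)) -- every friend of Geraldo owns a dog.
corrected = all((not F(x, g)) or any(D(z) and O(x, z) for z in domain)
                for x in domain)

# Clashed: ∀x(Fxg → ∃x(Dx ∧ Oxx)) -- the consequent now says that there
# is a self-owning dog, which is not what was intended.
clashed = all((not F(x, g)) or any(D(z) and O(z, z) for z in domain)
              for x in domain)

assert corrected and not clashed
```

The two formulas come apart on the same facts, confirming that the clash of variables changes the meaning.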

Sentence 140 can be paraphrased as ‘For any x that is a dog owner, there is a dog owner
who is a friend of x’. Using our stepping‐stone tactic, this becomes

∀𝑥(𝑥 is a dog owner → ∃𝑦(𝑦 is a dog owner ∧ 𝐹𝑦𝑥))

Completing the symbolisation, we end up with

∀𝑥(∃𝑧(𝐷𝑧 ∧ 𝑂𝑥𝑧) → ∃𝑦(∃𝑧(𝐷𝑧 ∧ 𝑂𝑦𝑧) ∧ 𝐹𝑦𝑥))

Note that we have used the same variable, ‘𝑧’, in both the antecedent and the con‐
sequent of the conditional, but that these are governed by two different quantifiers.
This is ok: there is no potential confusion here, because it is obvious which quantifier
governs each variable. We might spell out the scope of the quantifiers thus:

∀𝑥(∃𝑧(𝐷𝑧 ∧ 𝑂𝑥𝑧) → ∃𝑦(∃𝑧(𝐷𝑧 ∧ 𝑂𝑦𝑧) ∧ 𝐹𝑦𝑥))

› The scope of ‘∀𝑥’ is the whole sentence.
› The scope of ‘∃𝑦’ is ‘∃𝑦(∃𝑧(𝐷𝑧 ∧ 𝑂𝑦𝑧) ∧ 𝐹𝑦𝑥)’.
› The scope of the 1st ‘∃𝑧’ is ‘∃𝑧(𝐷𝑧 ∧ 𝑂𝑥𝑧)’; the scope of the 2nd ‘∃𝑧’ is ‘∃𝑧(𝐷𝑧 ∧ 𝑂𝑦𝑧)’.
Even in this case, however, you might want to choose different variables for every quan‐
tifier just as a practical matter, preventing any possibility of confusion for your readers.
Sentence 141 is the trickiest yet. First we paraphrase it as ‘For any x that is a friend of a
dog owner, x owns a dog which is also owned by a friend of x’. Using our stepping‐stone
tactic, this becomes:

∀𝑥(𝑥 is a friend of a dog owner → 𝑥 owns a dog which is owned by a friend of 𝑥).

Breaking this down a bit more:

∀𝑥(∃𝑦(𝐹𝑥𝑦 ∧ 𝑦 is a dog owner) → ∃𝑦(𝐷𝑦 ∧ 𝑂𝑥𝑦 ∧ 𝑦 is owned by a friend of 𝑥)).

And a bit more:

∀𝑥(∃𝑦(𝐹𝑥𝑦 ∧ ∃𝑧(𝐷𝑧 ∧ 𝑂𝑦𝑧)) → ∃𝑦(𝐷𝑦 ∧ 𝑂𝑥𝑦 ∧ ∃𝑧(𝐹𝑧𝑥 ∧ 𝑂𝑧𝑦))).

And we are done!
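Once the stepping stones have been climbed, the finished symbolisation can be sanity‐checked by evaluating it on a small invented model, as in this Python sketch (the individuals ‘Ada’, ‘Ben’, and ‘Fido’ are made up for the example):

```python
domain = {"ada", "ben", "fido"}
dogs = {"fido"}
friends = {("ada", "ben"), ("ben", "ada")}  # Ada and Ben are friends
owns = {("ada", "fido"), ("ben", "fido")}   # both own the dog Fido

def D(x): return x in dogs
def F(x, y): return (x, y) in friends
def O(x, y): return (x, y) in owns

def dog_owner(y):
    """∃z(Dz ∧ Oyz): y owns a dog."""
    return any(D(z) and O(y, z) for z in domain)

# ∀x(∃y(Fxy ∧ y is a dog owner) → ∃y(Dy ∧ Oxy ∧ ∃z(Fzx ∧ Ozy)))
s141 = all(
    (not any(F(x, y) and dog_owner(y) for y in domain))
    or any(D(y) and O(x, y) and any(F(z, x) and O(z, y) for z in domain)
           for y in domain)
    for x in domain
)

assert s141  # true here: each friend of a dog owner co-owns Fido
```

Translating the finished formula clause‐by‐clause into `all`/`any` like this is a useful check that the symbolisation says what you meant it to say.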

17.4 Sentence Structure and Levels of Analysis


As I emphasised in §4.3, a single English sentence has many structures, depending
on how fine‐grained one is in the analysis of that sentence. We could symbolise ‘Ant‐
ony owns a car’ in a number of different ways given the resources we have so far, in
increasingly finer detail:

1. We could symbolise it as an atomic sentence of Sentential like ‘𝐴’, ignoring all


internal structure of the sentence (because it has no internal truth‐functional
structure). This is also a zero‐place predicate, so is also a sentence of Quantifier.

2. We could symbolise it as another atomic sentence of Quantifier, but one which
does recognise the internal subject‐predicate structure of the English sentence.
For example, we could symbolise it as ‘𝑊𝑎’, where ‘𝑊’ symbolises ‘___₁ owns a
car’ and ‘𝑎’ symbolises ‘Antony’.

3. Or we could symbolise it as a complex quantified sentence ‘∃𝑦(𝐶𝑦 ∧ 𝑂𝑎𝑦)’, where
‘𝑎’ is as before, but ‘𝐶’ means ‘___₁ is a car’ and ‘𝑂’ is ‘___₁ owns ___₂’.

4. We could imaginably symbolise it at even finer levels of structure, breaking down


the predicate ‘is a car’ into the verb phrase ‘is’ and the indefinite noun phrase ‘a
car’ (which is itself complex). This would go beyond the representational re‐
sources even of Quantifier.

Note that any structure identified is preserved: the name ‘𝑎’ continues to appear in
more fine‐grained symbolisations once it has appeared. A one‐place predicate, having
appeared in the second symbolisation, still appears indirectly in the third symbolisa‐
tion. For the open sentence ‘∃𝑦(𝐶𝑦∧𝑂𝓍𝑦)’ can be understood a representing a complex
one‐place predicate: ‘𝓍’ is not associated with any quantifier, and can be replaced by a
name to form a grammatical sentence.
How to symbolise the structure of a sentence very much depends on what purpose
you have in symbolising. What matters is that you manage to represent enough struc‐
ture to determine whether the target argument you are symbolising is valid. Valid
arguments can be symbolised as invalid arguments, if you don’t attend to the relevant
structure, or don’t have resources in your language to represent that structure. But we
just observed that any more fine‐grained analysis of the structure of a sentence retains
the coarser structure (as it just adds more detailed substructure). So if you can show
an argument is valid (conclusive in virtue of its structure) at some level of analysis, it
will remain valid according to any more fine‐grained understanding of the structure of
that sentence. So you should aim to symbolise just enough structure in an argument
to be able to demonstrate its validity – if indeed it is valid.

Key Ideas in §17


› The true power of Quantifier comes from its ability to handle mul‐
tiple generality.
› But with great power comes additional complexity: we need to
keep track of different places in our predicates, and different
quantifiers that may govern those places. Even small alterations
in scope or order can drastically change the meaning of the sym‐
bolisation we produce.
› Use of paraphrases and hybrid English‐Quantifier sentences can
be useful in figuring out how to symbolise a complex claim fea‐
turing multiple quantifiers and complex many‐place predicates.

Practice exercises
A. Using this symbolisation key:

domain: all animals


𝐴: ___₁ is an alligator
𝑀: ___₁ is a monkey
𝑅: ___₁ is a reptile
𝑍: ___₁ lives at the zoo
𝐿: ___₁ loves ___₂
𝑎: Amos
𝑏: Bouncer
𝑐 : Cleo

symbolise each of the following sentences in Quantifier:

1. If Cleo loves Bouncer, then Bouncer is a monkey.


2. If both Bouncer and Cleo are alligators, then Amos loves them both.
3. Cleo loves a reptile.
4. Bouncer loves all the monkeys that live at the zoo.
5. All the monkeys that Amos loves love him back.
6. Every monkey that Cleo loves is also loved by Amos.
7. There is a monkey that loves Bouncer, but sadly Bouncer does not reciprocate
this love.

B. Using the following symbolisation key:

domain: all animals


𝐷: ___₁ is a dog
𝑆: ___₁ likes samurai movies
𝐿: ___₁ is larger than ___₂
𝑏: Bertie
𝑒: Emerson
𝑓: Fergus

symbolise the following sentences in Quantifier:


§17. MULTIPLE GENERALITY 147

1. Bertie is a dog who likes samurai movies.


2. Bertie, Emerson, and Fergus are all dogs.
3. Emerson is larger than Bertie, and Fergus is larger than Emerson.
4. All dogs like samurai movies.
5. Only dogs like samurai movies.
6. There is a dog that is larger than Emerson.
7. If there is a dog larger than Fergus, then there is a dog larger than Emerson.
8. No animal that likes samurai movies is larger than Emerson.
9. No dog is larger than Fergus.
10. Any animal that dislikes samurai movies is larger than Bertie.
11. There is an animal that is between Bertie and Emerson in size.
12. There is no dog that is between Bertie and Emerson in size.
13. No dog is larger than itself.
14. Every dog is larger than some dog.
15. There is an animal that is smaller than every dog.
16. If there is an animal that is larger than any dog, then that animal does not like
samurai movies.

C. Using this symbolisation key,

domain: all animals


𝐿: ___1 is larger than ___2.
𝐹: ___1 is friendlier than ___2.
𝐷: ___1 is a dog.
𝑏: Bertie.
𝑒: Emerson.
𝑓: Fergus.

render the following into natural English, commenting on any difficulties:

1. (𝐿𝑒𝑏 ∧ 𝐿𝑓𝑒);
2. ∃𝑥(𝐷𝑥 ∧ 𝐿𝑥𝑒);
3. (∃𝑥(𝐷𝑥 ∧ 𝐿𝑥𝑓) → ∃𝑦(𝐷𝑦 ∧ 𝐹𝑦𝑒));
4. ∀𝑥(𝐷𝑥 → ¬𝐿𝑥𝑓);
5. ∃𝑦((𝐿𝑦𝑏 ∧ 𝐿𝑒𝑦) ∨ (𝐿𝑦𝑒 ∧ 𝐿𝑏𝑦));
6. ∀𝑥(𝐷𝑥 → ∃𝑦(𝐷𝑦 ∧ 𝐹𝑥𝑦));
7. ∀𝑥∀𝑦(𝐿𝑥𝑦 → ∃𝑧(𝐷𝑧 ∧ (𝐹𝑦𝑧 ∧ 𝐹𝑧𝑥))).

D. Using the following symbolisation key:



domain: people and dishes at a potluck


𝑅: ___1 has run out.
𝑇: ___1 is on the table.
𝐹: ___1 is food.
𝑃: ___1 is a person.
𝐺: ___1 is guacamole.
𝐿: ___1 likes ___2.
𝑒: Eli
𝑓: Francesca

render the following into natural English, commenting on any difficulties:

1. ∀𝑥(𝐹𝑥 → 𝑇𝑥);
2. (∃𝑥𝐺𝑥 → ∀𝑥(𝐺𝑥 → 𝑇𝑥));
3. ∀𝑥∀𝑦((𝑃𝑥 ∧ 𝐺𝑦) → 𝐿𝑥𝑦);
4. ∃𝑥∃𝑦(((𝐺𝑥 ∧ 𝑃𝑦) ∧ 𝐿𝑦𝑥) → 𝐿𝑒𝑥);
5. ∀𝑥(𝐿𝑓𝑥 → 𝑅𝑥);
6. ∀𝑥(𝑃𝑥 → ¬(𝐿𝑓𝑥 ∨ 𝐿𝑥𝑓));
7. ∀𝑥(∃𝑦(𝐺𝑦 ∧ 𝐿𝑥𝑦) → 𝐿𝑒𝑥);
8. ∀𝑥∀𝑦((𝑃𝑥 ∧ 𝑃𝑦) → ((𝐿𝑒𝑥 ∧ 𝐿𝑦𝑥) → 𝐿𝑒𝑦));
9. (∃𝑥(𝑃𝑥 ∧ 𝑇𝑥) → ∀𝑦(𝐹𝑦 → 𝑅𝑦)).

E. Using the following symbolisation key:

domain: people
𝐷: ___1 dances ballet.
𝐹: ___1 is female.
𝑀: ___1 is male.
𝐶: ___1 is a child of ___2.
𝑆: ___1 is a sibling of ___2.
𝑒: Elmer
𝑗: Jane
𝑝: Patrick

symbolise the following arguments in Quantifier:



1. All of Patrick’s children are ballet dancers.


2. Jane is Patrick’s daughter.
3. Patrick has a daughter.
4. Jane is an only child.
5. All of Patrick’s sons dance ballet.
6. Patrick has no sons.
7. Jane is Elmer’s niece.
8. Patrick is Elmer’s brother.
9. Patrick’s brothers have no children.
10. Jane is an aunt.
11. Everyone who dances ballet has a brother who also dances ballet.
12. Every woman who dances ballet is the child of someone who dances ballet.

F. Consider the following symbolisation key:

domain: Fred, Amy, Kuiping


𝑃: ___1 pats ___2 on the head.
𝑇: ___1 is taller than ___2.

If we hold fixed this assignment of meanings to the predicates, why is it possible that
‘∃𝑥∀𝑦𝑃𝑥𝑦’ is true, but not possible that ‘∃𝑥∀𝑦𝑇𝑥𝑦’ is true?
18 Identity

18.1 A Tricky Argument


Consider this sentence:

142. Pavel owes money to everyone.

Let the domain be people; this will allow us to translate ‘everyone’ as a universal quan‐
tifier. Offering the symbolisation key:

𝑂: ___1 owes money to ___2
𝑝: Pavel

we can symbolise sentence 142 by ‘∀𝑥𝑂𝑝𝑥 ’. But this has a (perhaps) odd consequence. It
requires that Pavel owes money to every member of the domain (whatever the domain
may be). The domain certainly includes Pavel. So this entails that Pavel owes money
to himself.
Perhaps we meant to say:

143. Pavel owes money to everyone else


144. Pavel owes money to everyone other than Pavel
145. Pavel owes money to everyone except Pavel himself

We want to add something to the symbolisation of 142 to handle these italicised words.
Some interesting issues arise as we do so.

18.2 First Attempt to Handle the Argument


The sentences in 143–145 can all be paraphrased as follows:


146. Everyone who isn’t Pavel is such that: Pavel owes money to them.

This is a sentence of the form ‘every F [person who is not Pavel] is G [owed money by
Pavel]’. Accordingly it can be symbolised by something with this structure: ∀𝑥(ℱ𝑥 → 𝒢𝑥). Here is an attempt to fill in the schematic letters:

𝑂: ___1 owes money to ___2
𝐼: ___1 is ___2
𝑝: Pavel

With this symbolisation key, here is a proposed symbolisation: ∀𝑦(¬𝐼𝑝𝑦 → 𝑂𝑝𝑦).


This symbolisation works well. But, it turns out, it doesn’t quite do what we wanted
it to. Suppose ‘ℎ’ names Hikaru, and consider this argument, with the symbolisation
next to the English sentences:

147. Pavel owes money to everyone else: ∀𝑦(¬𝐼𝑝𝑦 → 𝑂𝑝𝑦).


148. Hikaru isn’t Pavel: ¬𝐼ℎ𝑝.
So: Pavel owes money to Hikaru: 𝑂𝑝ℎ.

This argument is valid in English. But its symbolisation is not valid. If we pick Hikaru
to be the value of ‘𝑦’, we get from 147 the conditional ¬𝐼𝑝ℎ → 𝑂𝑝ℎ. But 148 doesn’t give
us the antecedent of this conditional: ¬𝐼𝑝ℎ is potentially quite different from ¬𝐼ℎ𝑝. So
the argument isn’t formally valid.
The argument isn’t formally valid, because the sentence ‘¬𝐼ℎ𝑝’ doesn’t formally entail
‘¬𝐼𝑝ℎ’ as a matter of logical structure alone. If the original argument is valid, then we
need a symbolisation that as a matter of logic allows the distinctness (non‐identity) of
Hikaru and Pavel to entail the distinctness of Pavel and Hikaru.
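To confirm this diagnosis, we can hunt for a countermodel by brute force. The following Python sketch (an illustration of the point, not part of Quantifier itself; the encoding of the domain is invented) enumerates every interpretation of the two‐place predicates ‘𝐼’ and ‘𝑂’ over a two‐object domain, and collects those that make both premises true while making the conclusion false:

```python
from itertools import product

# Two-object domain: 0 is Pavel ('p'), 1 is Hikaru ('h').
DOMAIN = [0, 1]
p, h = 0, 1
PAIRS = [(x, y) for x in DOMAIN for y in DOMAIN]

def interpretations():
    """Every way of assigning an extension to the two-place predicates I and O."""
    for i_bits in product([False, True], repeat=len(PAIRS)):
        for o_bits in product([False, True], repeat=len(PAIRS)):
            yield dict(zip(PAIRS, i_bits)), dict(zip(PAIRS, o_bits))

countermodels = [
    (I, O) for I, O in interpretations()
    if all(O[(p, y)] for y in DOMAIN if not I[(p, y)])  # premise 147: ∀y(¬Ipy → Opy)
    and not I[(h, p)]                                   # premise 148: ¬Ihp
    and not O[(p, h)]                                   # conclusion Oph made false
]

print(len(countermodels) > 0)  # True: the symbolised argument has countermodels
```

Such interpretations exist precisely because nothing in the symbolisation forces the ordinary predicate ‘𝐼’ to be symmetric, or to behave like identity at all.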

18.3 Adding Identity


Logicians resolve this issue by adding identity as a new logical predicate – one of the
structural words with a fixed interpretation.1 We add a new symbol ‘=’, to clearly dif‐
ferentiate it from our existing predicates.
The symbol ‘=’ is a two‐place predicate. Since it is to have a special meaning, we shall
write it a bit differently: we put it between two terms, rather than out front. And it
does have a very particular fixed meaning. Like the quantifiers and connectives, the
identity predicate does not need a symbolisation key to fix how it is to be used. Rather,
it always gets the same interpretation: ‘=’ always means ‘___1 is identical to ___2’.
So identity is a special predicate because it has its meaning as part of logic.

1 We don’t absolutely have to do this: there are logical languages in which identity is not a logical pre‐
dicate, and is symbolised by just choosing a two‐place predicate like 𝐼𝑥𝑦. But in our logical language
Quantifier, we are choosing to treat identity as a structural word.

That one thing is logically identical to another does not mean merely that the objects
in question are indistinguishable, or that all of the same things are true of them. When
two things are alike in every respect, we may say they are QUALITATIVELY IDENTICAL.
This is the sense of identity involved in ‘identical twins’, who are two distinct individu‐
als who share their properties. In Quantifier, the identity predicate represents not this
relation of similarity, but a relation of absolute or NUMERICAL IDENTITY: there is only
one, rather than two. This is the sense in which Lewis Carroll (the author of Alice in
Wonderland) is identical to Charles Lutwidge Dodgson (the Oxford mathematician):
‘they’ are the very same person, with two different names.
This might seem odd. Identity is a relation, but it doesn’t relate different things to each
other: it relates everything to itself, and to nothing else. We need a predicate for that
relation because the names and (especially) variables of Quantifier aren’t guaranteed to
have different referents, and sometimes we want to explicitly require that two terms
don’t denote the same thing. For example, suppose we want to symbolise ‘Barry is the
tallest person’. You might try ‘Barry is taller than everyone’. However, that would lead
to the absurdity that Barry is taller than himself, since he is surely among ‘everyone’.
So what we really need is ‘Barry is taller than everyone else, i.e., everyone who’s not
(identical to) Barry’, which is most naturally formulated using the identity predicate.

18.4 Symbolising Identity Sentences


Now suppose we want to symbolise this sentence:

149. Pavel is Mister Checkov.

Let us add to our symbolisation key:

𝑐 : Mister Checkov

Now sentence 149 can be symbolised as ‘𝑝 = 𝑐 ’. This means that 𝑝 is 𝑐 , and it follows
that the thing named by ‘𝑝’ is the thing named by ‘𝑐 ’.2
Let’s return to our example ‘Barry is taller than everyone else’. We want to start with a
paraphrase, like this: choose anyone from the domain; if they are not Barry, then Barry
is taller than them. Where 𝑏 symbolises ‘Barry’ and 𝑇 symbolises ‘___1 is taller than
___2’, we might symbolise this as: ‘∀𝑥(¬𝑥 = 𝑏 → 𝑇𝑏𝑥)’ (on the domain of people).

Using that same kind of structure, we can also now deal with sentences 143–145. All
of these sentences can be paraphrased as ‘Everyone who isn’t Pavel is such that: Pavel
owes money to them’. Paraphrasing some more, we get: ‘For all x, if x is not Pavel, then
x is owed money by Pavel’. Now that we are armed with our new identity symbol, we
can symbolise this as ‘∀𝑥(¬𝑥 = 𝑝 → 𝑂𝑝𝑥)’.

2 One must be careful: the sentence ‘𝑝 = 𝑐 ’ is, on this symbolisation, about Pavel and Mister Checkov;
it is not about ‘Pavel’ and ‘Mister Checkov’, which are obviously distinct expressions of English.

This last sentence contains the formula ‘¬𝑥 = 𝑝’. And that might look a bit strange,
because the symbol that comes immediately after the ‘¬’ is a variable, rather than a
predicate. But this is no problem. We are simply negating the entire formula, ‘𝑥 =
𝑝’. But if this is confusing, you may use the NON‐IDENTITY PREDICATE ‘≠’. This is an
abbreviation, characterised as follows:

Any occurrence of the expression ‘𝓍 ≠ 𝓎’ in a sentence abbreviates the
expression ‘¬𝓍 = 𝓎’, and either can substitute for the other in any
Quantifier sentence.

I will use both expressions in what follows, but strictly speaking ‘¬𝑎 = 𝑏’ is the official
version, and we allow a conventional abbreviation ‘𝑎 ≠ 𝑏’.
In addition to sentences that use the words ‘else’, ‘other than’ and ‘except’, identity will
be helpful when symbolising some sentences that contain the words ‘besides’ and ‘only’.
Consider these examples:

150. No one besides Pavel owes money to Hikaru.


151. Only Pavel owes Hikaru money.

Sentence 150 can be paraphrased as, ‘No one who is not Pavel owes money to Hikaru’.
This can be symbolised by ‘¬∃𝑥(¬𝑥 = 𝑝 ∧ 𝑂𝑥ℎ)’. Equally, sentence 150 can be para‐
phrased as ‘for all x, if x owes money to Hikaru, then x is Pavel’. Then it can be symbol‐
ised as ‘∀𝑥(𝑂𝑥ℎ → 𝑥 = 𝑝)’. Sentence 151 can be treated similarly.3

18.5 Principles of Identity


Return to our argument from premises 147 and 148. We now symbolise it: ∀𝑦(𝑝 ≠ 𝑦 →
𝑂𝑝𝑦); ℎ ≠ 𝑝 ∴ 𝑂𝑝ℎ. This argument is valid. Given the fixed interpretation we have
assigned to ‘=’, it is not possible that a first thing be not identical to a second, while
the second is identical to the first. So we can conclude, as a matter of logic alone,
that ℎ ≠ 𝑝 is equivalent to 𝑝 ≠ ℎ.
This argument rests on one of the logical principles governing identity: that if 𝑎 is
𝑏, then 𝑏 is 𝑎. This property is known as symmetry, and is one of a cluster of basic
properties governing the structure of identity:

› Identity is REFLEXIVE: everything is identical to itself. So for any
meaningful name ‘𝒶’, ‘𝒶 = 𝒶’ is true.
› Identity is SYMMETRIC: if 𝒶 = 𝒷 , then also 𝒷 = 𝒶.
› Identity is TRANSITIVE: if 𝒶 = 𝒷 , and 𝒷 = 𝒸, then also 𝒶 = 𝒸.

3 But there is one subtlety here. Does either sentence 150 or 151 entail that Pavel himself owes money
to Hikaru?

These principles of identity can be expressed as sentences of Quantifier. In the
definitions given above, the names involved are arbitrary. So we can in fact paraphrase the
reflexivity of identity as saying that for any thing, it is self‐identical. Symbolised in
Quantifier, this is ‘∀𝑥𝑥 = 𝑥 ’. Likewise for the others:

› Symmetry: ∀𝑥∀𝑦(𝑥 = 𝑦 → 𝑦 = 𝑥);


› Transitivity: ∀𝑥∀𝑦∀𝑧((𝑥 = 𝑦 ∧ 𝑦 = 𝑧) → 𝑥 = 𝑧).

These principles can apply to other two‐place predicates too. For example, the
two‐place English predicate ‘___1 is taller than ___2’ is also transitive, since if Albert is
taller than Barbara, and Barbara is taller than Chloe, then Albert must be taller than
Chloe too. But it is not reflexive or symmetric: Albert is not taller than himself, and
if Albert is taller than Barbara, it cannot be also that Barbara is taller than Albert. We
will return to this topic in §21.10.
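For finite relations, these three structural properties can be checked mechanically. Here is a small Python sketch (purely illustrative; the domain and the sample relations are invented) that tests reflexivity, symmetry, and transitivity, representing a relation as a set of ordered pairs:

```python
def reflexive(domain, R):
    """Every object bears R to itself."""
    return all((x, x) in R for x in domain)

def symmetric(domain, R):
    """Whenever x bears R to y, y bears R to x."""
    return all((y, x) in R for (x, y) in R)

def transitive(domain, R):
    """Whenever x bears R to y and y bears R to z, x bears R to z."""
    return all((x, z2) in R
               for (x, y) in R for (z1, z2) in R if y == z1)

people = {'Albert', 'Barbara', 'Chloe'}
identity = {(x, x) for x in people}                       # ___1 = ___2
taller = {('Albert', 'Barbara'), ('Barbara', 'Chloe'),    # ___1 is taller than ___2
          ('Albert', 'Chloe')}

print(reflexive(people, identity), symmetric(people, identity),
      transitive(people, identity))                       # True True True
print(reflexive(people, taller), symmetric(people, taller),
      transitive(people, taller))                         # False False True
```

As expected, identity has all three properties, while the sample ‘taller than’ relation is transitive but neither reflexive nor symmetric.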
A final principle about identity is LEIBNIZ’ LAW, named after the philosopher and math‐
ematician Gottfried Leibniz:

If 𝓍 = 𝓎 then for any property at all, 𝓍 has it iff 𝓎 has it too. That is:
every instance of this schematic sentence of Quantifier, for any predic‐
ate ℱ whatsoever, is true:

∀𝑥∀𝑦(𝑥 = 𝑦 → (ℱ𝑥 ↔ ℱ𝑦)).

Leibniz’ Law certainly entails that identical things are indistinguishable, sharing every
property in common. But as we have already noted, identity isn’t merely indistin‐
guishability. Two things might be indistinguishable, but if there are two, they are not
strictly identical in the logical sense we are concerned with. Yet in many cases, even
very similar things do turn out to have some distinguishing property. There is a sig‐
nificant philosophical controversy over whether there can be cases of mere numerical
difference, i.e., of nonidentity without any qualitative dissimilarity.

18.6 There are at Least …


So far an identity predicate might seem useful in a few cases, like pseudonyms, but a
bit niche. In fact, adding it to our language gives us the ability to say lots of things we
cannot hope to say without it. In particular, the identity predicate gives us the ability
to count – to quantify our quantifiers, and say how many things there are of a particular
kind. Indeed, it is tempting to argue that the concept of counting depends on the prior
concept of numerical distinctness.
For example, consider these sentences:

152. There is at least one apple


153. There are at least two apples
154. There are at least three apples

We shall use the symbolisation key:

𝐴: is an apple
1

Sentence 152 does not require identity. It can be adequately symbolised by ‘∃𝑥𝐴𝑥 ’:
There is some apple; perhaps many, but at least one.
It might be tempting to also translate sentence 153 without identity. Yet consider the
sentence ‘∃𝑥∃𝑦(𝐴𝑥 ∧ 𝐴𝑦)’. Roughly, this says that there is some apple 𝑥 in the domain
and some apple 𝑦 in the domain. Since nothing precludes these from being one and
the same apple, this would be true even if there were only one apple.4 In order to make
sure that we are dealing with different apples, we need an identity predicate. Sentence
153 needs to say that the two apples that exist are not identical, so it can be symbolised
by ‘∃𝑥∃𝑦(𝐴𝑥 ∧ 𝐴𝑦 ∧ ¬𝑥 = 𝑦)’.
Sentence 154 requires talking about three different apples. Now we need three existen‐
tial quantifiers, and we need to make sure that each will pick out something different:
‘∃𝑥∃𝑦∃𝑧(𝐴𝑥 ∧ 𝐴𝑦 ∧ 𝐴𝑧 ∧ 𝑥 ≠ 𝑦 ∧ 𝑦 ≠ 𝑧 ∧ 𝑥 ≠ 𝑧)’.
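The role identity plays here can be checked in a miniature model. This Python sketch (illustrative only; the domain is invented) evaluates both candidate symbolisations of ‘there are at least two apples’ over a domain containing exactly one apple:

```python
from itertools import product

def two_quantifiers_no_identity(domain, A):
    # ∃x∃y(Ax ∧ Ay) — two quantifiers, but no identity clause
    return any(A(x) and A(y) for x, y in product(domain, repeat=2))

def at_least_two(domain, A):
    # ∃x∃y(Ax ∧ Ay ∧ ¬x = y)
    return any(A(x) and A(y) and x != y for x, y in product(domain, repeat=2))

domain = ['apple1', 'pear1']
A = lambda x: x.startswith('apple')

print(two_quantifiers_no_identity(domain, A))  # True: x and y may both be the one apple
print(at_least_two(domain, A))                 # False: the identity clause demands two apples
```

This confirms the point in the text: different variables do not by themselves require different values, so without ‘¬𝑥 = 𝑦’ the formula is satisfied by a single apple.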

18.7 There are at Most …


Now consider these sentences:

155. There is at most one apple.


156. There are at most two apples.

Sentence 155 can be paraphrased as, ‘It is not the case that there are at least two apples’.
This is just the negation of sentence 153:

¬∃𝑥∃𝑦(𝐴𝑥 ∧ 𝐴𝑦 ∧ ¬𝑥 = 𝑦).

But sentence 155 can also be approached in another way. It means that if you pick
out an object and it’s an apple, and then you pick out an object and it’s also an apple,
you must have picked out the same object both times. With this in mind, it can be
symbolised by
∀𝑥∀𝑦((𝐴𝑥 ∧ 𝐴𝑦) → 𝑥 = 𝑦).
The two sentences will turn out to be logically equivalent.
In a similar way, sentence 156 can be approached in two equivalent ways. It can be
paraphrased as, ‘It is not the case that there are three or more distinct apples’, so we
can offer:
¬∃𝑥∃𝑦∃𝑧(𝐴𝑥 ∧ 𝐴𝑦 ∧ 𝐴𝑧 ∧ 𝑥 ≠ 𝑦 ∧ 𝑦 ≠ 𝑧 ∧ 𝑥 ≠ 𝑧).
Or, we can read it as saying that if you pick out an apple, and an apple, and an apple,
then you will have picked out (at least) one of these objects more than once. Thus:

∀𝑥∀𝑦∀𝑧((𝐴𝑥 ∧ 𝐴𝑦 ∧ 𝐴𝑧) → (𝑥 = 𝑦 ∨ 𝑥 = 𝑧 ∨ 𝑦 = 𝑧)).

4 Note that both ∃𝑥𝐴𝑥 and ∃𝑦𝐴𝑦 are true in a domain with only one apple: the use of different variables
doesn’t require that different apples are the values of those variables.
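The claim that the two symbolisations of ‘there is at most one apple’ are logically equivalent can be spot‐checked by brute force. This Python sketch (an illustration, not a proof, since it only surveys domains of up to three objects) evaluates both formulas in every such model:

```python
from itertools import product

def not_at_least_two(domain, apples):
    # ¬∃x∃y(Ax ∧ Ay ∧ ¬x = y)
    return not any(x in apples and y in apples and x != y
                   for x, y in product(domain, repeat=2))

def at_most_one(domain, apples):
    # ∀x∀y((Ax ∧ Ay) → x = y)
    return all(x == y
               for x, y in product(domain, repeat=2)
               if x in apples and y in apples)

# Compare the two formulas in every model whose domain has 0-3 objects.
agree = all(
    not_at_least_two(dom, apples) == at_most_one(dom, apples)
    for size in range(4)
    for dom in [list(range(size))]
    for bits in product([False, True], repeat=size)
    for apples in [{x for x in dom if bits[x]}]
)
print(agree)  # True: the two symbolisations agree in every model checked
```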

18.8 There are Exactly …


Now we can symbolise ‘there are at least 𝑛’ and we can symbolise ‘there are at most 𝑛’.
Using them together, we can symbolise ‘there are exactly 𝑛’:

157. There is exactly one apple.


158. There are exactly two apples.
159. There are exactly three apples.

Sentence 157 can be paraphrased as, ‘There is at least one apple and there is at most
one apple’. This is just the conjunction of sentence 152 and sentence 155. So we can
offer:
∃𝑥𝐴𝑥 ∧ ∀𝑥∀𝑦((𝐴𝑥 ∧ 𝐴𝑦) → 𝑥 = 𝑦).
But it is perhaps more straightforward to paraphrase sentence 157 as, ‘There is a thing
x which is an apple, and everything which is an apple is just x itself’. Thought of in this
way, we offer:
∃𝑥(𝐴𝑥 ∧ ∀𝑦(𝐴𝑦 → 𝑥 = 𝑦)).
Similarly, sentence 158 may be paraphrased as, ‘There are at least two apples, and there
are at most two apples’. Thus we could offer

∃𝑥∃𝑦(𝐴𝑥 ∧ 𝐴𝑦 ∧ ¬𝑥 = 𝑦) ∧ ∀𝑥∀𝑦∀𝑧((𝐴𝑥 ∧ 𝐴𝑦 ∧ 𝐴𝑧) → (𝑥 = 𝑦 ∨ 𝑥 = 𝑧 ∨ 𝑦 = 𝑧)).

More efficiently, though, we can paraphrase it as ‘There are at least two different apples,
and every apple is one of those two apples’. Then we offer:

∃𝑥∃𝑦(𝐴𝑥 ∧ 𝐴𝑦 ∧ ¬𝑥 = 𝑦 ∧ ∀𝑧(𝐴𝑧 → (𝑥 = 𝑧 ∨ 𝑦 = 𝑧))).
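We can likewise spot‐check that the long conjunction and the more efficient symbolisation of ‘there are exactly two apples’ agree with each other, and that both are true in exactly the models containing two apples. A Python sketch (illustrative; it only surveys domains of up to four objects):

```python
from itertools import product

def exactly_two_long(dom, apples):
    # (there are at least two apples) ∧ (there are at most two apples)
    at_least = any(x in apples and y in apples and x != y
                   for x, y in product(dom, repeat=2))
    at_most = all(x == y or x == z or y == z
                  for x, y, z in product(dom, repeat=3)
                  if x in apples and y in apples and z in apples)
    return at_least and at_most

def exactly_two_crisp(dom, apples):
    # ∃x∃y(Ax ∧ Ay ∧ ¬x = y ∧ ∀z(Az → (x = z ∨ y = z)))
    return any(x in apples and y in apples and x != y
               and all(z == x or z == y for z in dom if z in apples)
               for x, y in product(dom, repeat=2))

ok = all(
    exactly_two_long(dom, apples)
    == exactly_two_crisp(dom, apples)
    == (len(apples) == 2)
    for size in range(5)
    for dom in [list(range(size))]
    for bits in product([False, True], repeat=size)
    for apples in [{x for x in dom if bits[x]}]
)
print(ok)  # True: both formulas hold in exactly the two-apple models
```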

Finally, consider these sentences:

160. There are exactly two things.


161. There are exactly two objects.

It might be tempting to add a predicate to our symbolisation key, to symbolise the English
predicate ‘___1 is a thing’ or ‘___1 is an object’. But this is unnecessary. Words
like ‘thing’ and ‘object’ do not sort wheat from chaff: they apply trivially to everything,
which is to say, they apply trivially to every thing. So we can symbolise either sentence
with either of the following:

∃𝑥∃𝑦¬𝑥 = 𝑦 ∧ ¬∃𝑥∃𝑦∃𝑧(¬𝑥 = 𝑦 ∧ ¬𝑦 = 𝑧 ∧ ¬𝑥 = 𝑧); or
∃𝑥∃𝑦(¬𝑥 = 𝑦 ∧ ∀𝑧(𝑥 = 𝑧 ∨ 𝑦 = 𝑧)).

Key Ideas in §18


› Identity (‘=’) is the one logical predicate in Quantifier, with a fixed
interpretation.
› Identity satisfies a number of logical principles, the most import‐
ant of which is Leibniz’ Law, that when 𝑥 is 𝑦, then 𝑥 is ℱ iff 𝑦 is
ℱ , for any predicate ℱ .
› Identity is crucial for symbolising numerical quantification:
‘there are at least 𝑛 ℱ s’, ‘there are at most 𝑛 ℱ s’ and ‘there are
exactly 𝑛 ℱ s’, because these all involve – tacitly – the notion of
distinctness between things.

Practice exercises
A. Explain why:

› ‘∃𝑥∀𝑦(𝐴𝑦 ↔ 𝑥 = 𝑦)’ is a good symbolisation of ‘there is exactly one apple’.

› ‘∃𝑥∃𝑦(¬𝑥 = 𝑦 ∧ ∀𝑧(𝐴𝑧 ↔ (𝑥 = 𝑧 ∨ 𝑦 = 𝑧)))’ is a good symbolisation of ‘there are
exactly two apples’.

B. Using the following symbolisation key:

domain: all animals


𝐷: ___1 is a dog
𝐿: ___1 is larger than ___2
𝐹: ___1 is fierce
𝑏: Bertie
𝑒: Emerson
𝑓: Fergus

symbolise the following sentences in Quantifier:

1. Bertie is larger than all the other dogs.


2. Bertie, Emerson, and Fergus are all different dogs.
3. Emerson is smaller than at least two dogs.
4. The largest dog is not fierce.
5. One fierce dog is the same size as another fierce dog.

C. Using the following symbolisation key:



domain: cards in a standard deck


𝐵: ___1 is black.
𝐶: ___1 is a club.
𝐷: ___1 is a deuce.
𝐽: ___1 is a jack.
𝑀: ___1 is a man with an axe.
𝑂: ___1 is one‐eyed.
𝑊: ___1 is wild.

symbolise each sentence in Quantifier:

1. All clubs are black cards.


2. There are no wild cards.
3. There are at least two clubs.
4. There is more than one one‐eyed jack.
5. There are at most two one‐eyed jacks.
6. There are two black jacks.
7. There are four deuces.

D. Using the following symbolisation key:

domain: animals in the world


𝐵: ___1 is in Farmer Brown’s field.
𝐻: ___1 is a horse.
𝑃: ___1 is a Pegasus.
𝑊: ___1 has wings.

symbolise the following sentences in Quantifier:

1. There are at least three horses in the world.


2. There are at least three animals in the world.
3. There is more than one horse in Farmer Brown’s field.
4. There are three horses in Farmer Brown’s field.
5. There is a single winged creature in Farmer Brown’s field; any other creatures in
the field must be wingless.

E. Identity is a reflexive, symmetric, and transitive predicate. Can you give examples
of English predicates which are

1. Reflexive and symmetric but not transitive;


2. Reflexive and transitive but not symmetric;
3. Symmetric and transitive but not reflexive?
19 Definite Descriptions

In Quantifier, names function rather like names in English. They are simply labels for
the things they name, and may be attached arbitrarily, without any indication of the
characteristics of what they name.1
But complex noun phrases can also be used to denote particular things in English (re‐
call §15.2), and they do so not merely by acting as arbitrary labels, but often by describ‐
ing the thing they refer to. Consider sentences like:

162. Nick is the traitor.


163. The traitor went to Cambridge.
164. The traitor is the deputy.
165. The traitor is the shortest person who went to Cambridge.

These underlined noun phrases headed by ‘the’ – ‘the traitor’, ‘the deputy’, ‘the shortest
person who went to Cambridge’ – are known as DEFINITE DESCRIPTIONS. They are
meant to pick out a unique object, by using a description which applies to that object
and to no other (at least, to no other salient object). The class of possessive singular
terms, such as ‘Antony’s eldest child’ or ‘Facebook’s founder’, might be subsumed into
the class of definite descriptions. They can be paraphrased using definite descriptions:
‘the eldest child of Antony’ or ‘the founder of Facebook’.
Definite descriptions must be contrasted with INDEFINITE DESCRIPTIONS, such as
‘A traitor went to Cambridge’, where no unique traitor is implied. Definite descrip‐
tions must also be contrasted with what we might call descriptive names, such as ‘the
Pacific Ocean’. While the Pacific Ocean is an ocean, it isn’t reliably peaceful, and even
when it is, it surely isn’t the unique ocean that merits that description. These descript‐
ive name uses might also be involved in cases of GENERIC ‘the’, such as in ‘The whale is

1 This is not strictly true: consider the name ‘Fido’ which is conventionally the name of a dog. But even
here the name doesn’t carry any information in itself about what it names – the fact that we use that
as a name only for dogs allows someone who knows that to reasonably infer that Fido is a dog. But
‘Fido is a dog’ isn’t a trivial truth, as it would be if somehow ‘Fido’ carried with it the information that
it applies only to dogs.


a mammal’. Here there is no implication that some specific whale is under discussion,
but rather that the species is mammalian. (So maybe ‘the whale’ is a complex name for
the species.) In the generic use, ‘the whale is a mammal’ can be paraphrased ‘whales
are mammals’. But a genuine definite description, such as ‘the Prime Minister is a Lib‐
eral’ cannot be paraphrased as ‘Prime Ministers are Liberals’. The question we face is:
can we adequately symbolise definite descriptions in Quantifier?2

19.1 Treating Definite Descriptions as Terms


One option would be to introduce new names whenever we come across a definite
description. This is not a great idea. We know that the traitor – whoever it is – is
indeed a traitor. We want to preserve that information in our symbolisation. So the
symbolisation of ‘The traitor is a traitor’ (or ‘the traitor is traitorous’) should be a logical
truth. But if we symbolise ‘the traitor’ by a name 𝑎, the symbolisation will be 𝑇𝑎, which
is not a logical truth.
A second option would be to introduce a new definite description operator, such as ‘℩’.
The idea would be to symbolise ‘the F’ as ‘℩𝑥𝐹𝑥’. This is taken to mean something like
this ‘the unique thing such that it is F’, which obviously involves a definite description
in its semantics. Expressions of the form ℩𝓍𝒜𝓍 would then behave, grammatically
speaking, like names – they combine with predicates to form sentences. Suppose we
follow this path. Start with the following symbolisation key:

domain: people
𝑇: ___1 is a traitor
𝐷: ___1 is a deputy
𝐶: ___1 went to Cambridge
𝑆: ___1 is shorter than ___2
𝑛: Nick

We could symbolise sentence 162 with ‘℩𝑥𝑇𝑥 = 𝑛’ (‘the thing which is a traitor is
identical to Nick’), sentence 163 with ‘𝐶℩𝑥𝑇𝑥’, sentence 164 with ‘℩𝑥𝑇𝑥 = ℩𝑥𝐷𝑥 ’, and
sentence 165 with ‘℩𝑥𝑇𝑥 = ℩𝑥(𝐶𝑥 ∧ ∀𝑦((𝐶𝑦 ∧ 𝑥 ≠ 𝑦) → 𝑆𝑥𝑦))’. This last example may be
a bit tricky to parse. In semi‐formal English, it says (supposing a domain of persons):
‘the unique person such that they are a traitor is identical with the unique person such
that they went to Cambridge and they are shorter than anyone else who went to Cam‐
bridge’.
However, even adding this new symbol to our language doesn’t quite help with our
initial complaint, since it is not self‐evident that the symbolisation of ‘The traitor is

2 There is another question that I don’t address: can we come up with a good theory of the meaning of
‘the’ in English that unifies how it behaves in ‘the whale is a mammal’, ‘the Pacific Ocean is stormy’, and
‘Ortcutt is the shortest spy’? That question is very hard. Our task is to offer a symbolisation, and as
we’ve seen, a symbolisation needn’t be a translation, but only needs to capture the relevant implications
to be successful.

a traitor’ as ‘𝑇℩𝑥𝑇𝑥 ’ yields a logical truth. More seriously, the idea that all definite de‐
scriptions are to be treated as terms makes it more difficult to give a unified treatment
of descriptions in predicate position. It would be desirable to give a unified treatment
of ‘Ortcutt is a short spy’ and ‘Ortcutt is the short spy’; but while the former might be
symbolised as ‘(𝑆𝑜 ∧ 𝑃𝑜)’, using the predicative ‘is’, the latter would need to be treated as
‘𝑜 = ℩𝑥(𝑆𝑥 ∧ 𝑃𝑥)’, using the ‘is’ of identity.
More practically, it would be nice if we didn’t have to add a new symbol to Quantifier.
And indeed, we might be able to handle descriptions using what we already have.

19.2 Russell’s Paraphrase


Bertrand Russell offered an influential account of definite descriptions that might serve
our purposes. Very briefly put, he observed that, when we say ‘the ℱ ’, where that phrase
is a definite description, our aim is to pick out the one and only thing that is ℱ (in the
appropriate contextually selected domain). Our discussion of counting in §18 gives
us an idea about how Russell proposed to handle sentences expressing that there is
exactly one ℱ .
Thus Russell offered a systematic paraphrase of sentences involving definite descrip‐
tions along these lines:3

the ℱ is 𝒢 iff: there is at least one ℱ, and
there is at most one ℱ, and
every ℱ is 𝒢

Note a very important feature of this paraphrase: ‘the’ does not appear on the right‐side
of the equivalence. This approach would allow us to paraphrase every sentence of the
same form as the left hand side into a sentence of the same form as the right hand side,
and thus ‘paraphrase away’ the definite description.
It is crucial to notice that we can handle each of the conjuncts on the right hand side
of the equivalence in Quantifier, using our techniques for dealing with numerical quan‐
tification. We can deal with the three conjuncts on the right‐hand side of Russell’s
paraphrase as follows:

∃𝑥ℱ𝑥 ∧ ∀𝑥∀𝑦((ℱ𝑥 ∧ ℱ𝑦) → 𝑥 = 𝑦) ∧ ∀𝑥(ℱ𝑥 → 𝒢𝑥)

In fact, we could express the same point rather more crisply, by recognising that the
first two conjuncts just amount to the claim that there is exactly one ℱ , and that the last
conjunct tells us that that object is 𝒢. So, equivalently, we could offer this symbolisation
of ‘The ℱ is 𝒢’:
∃𝑥(ℱ𝑥 ∧ ∀𝑦(ℱ𝑦 → 𝑥 = 𝑦) ∧ 𝒢𝑥)
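Russell’s truth‐conditions can be evaluated directly in any finite model. The following Python sketch (the example domain and predicates are invented for illustration) implements the crisp symbolisation of ‘The ℱ is 𝒢’:

```python
def the_F_is_G(domain, F, G):
    # Russell: ∃x(Fx ∧ ∀y(Fy → x = y) ∧ Gx)
    return any(F(x)
               and all(not F(y) or y == x for y in domain)
               and G(x)
               for x in domain)

people = ['Nick', 'Vera', 'Sam']
is_traitor = lambda x: x == 'Nick'
went_to_cambridge = lambda x: x in {'Nick', 'Vera'}

# 'The traitor went to Cambridge': exactly one traitor, and he went there.
print(the_F_is_G(people, is_traitor, went_to_cambridge))       # True
# With an empty description, the sentence comes out false, not meaningless.
print(the_F_is_G(people, lambda x: False, went_to_cambridge))  # False
```

The second call illustrates a feature of the paraphrase discussed later in this chapter: when nothing satisfies ℱ, the whole sentence is simply false.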
Using these sorts of techniques, we can now symbolise sentences 162–164 without using
any new‐fangled fancy operator, such as ‘℩’.

3 Bertrand Russell (1905) ‘On Denoting’, Mind 14, pp. 479–93; see also Russell (1919) Introduction to Math‐
ematical Philosophy, London: Allen and Unwin, ch. 16.

› Sentence 162 is exactly like the examples we have just considered. So we would
symbolise it by ‘∃𝑥(𝑇𝑥 ∧ ∀𝑦(𝑇𝑦 → 𝑥 = 𝑦) ∧ 𝑥 = 𝑛)’.

› Sentence 163 poses no problems either: ‘∃𝑥(𝑇𝑥 ∧ ∀𝑦(𝑇𝑦 → 𝑥 = 𝑦) ∧ 𝐶𝑥)’.

› Sentence 164 is a little trickier, because it links two definite descriptions. But,
deploying Russell’s paraphrase, it can be paraphrased by ‘something is such that:
there is exactly one traitor and there is exactly one deputy and it is each of them’.
So we can symbolise it by:

∃𝑥(𝑇𝑥 ∧ ∀𝑦(𝑇𝑦 → 𝑥 = 𝑦) ∧ 𝐷𝑥 ∧ ∀𝑧(𝐷𝑧 → 𝑥 = 𝑧)).

Note that I have made sure that both uniqueness conditions are in the scope of
the initial existential quantifier.

Thus, we can adequately symbolise sentences involving definite descriptions in
Quantifier. Incidentally, Russell also offers an account of indefinite descriptions of the same
general form, differing only in that he regards indefinite descriptions as lacking any
connotation of uniqueness. (One of the exercises below, p. 169, deals with that ac‐
count.)
Let us dispel a worry. It seems that I can say ‘the table is brown’ without implying
that there is one and only one table in the universe. But doesn’t Russell’s paraphrase
literally entail that there is only one table? Indeed it does – it entails that there is only
one table in the domain under discussion. While sometimes we explicitly restrict the
domain, usually we leave it to our background conversational presuppositions to do so
(recall §15.6). If I can successfully say ‘the table is brown’ in a conversation with you, for
example, some background restriction on the domain must be in place that we both
tacitly accept. For example, it might be that the prior discussion has focussed on your
dining room, and so the implicit domain is things in that room. In that case, ‘the table
is brown’ is true just in case there is exactly one table in that domain, and it is brown.

19.3 The Structure of Definite Descriptions


Russell offers his theory of definite descriptions as part of a campaign to effect ‘a re‐
duction of all propositions in which denoting phrases occur to forms in which no such
phrases occur’ (‘On Denoting’, p. 482). (Russell thought there were paradoxes attend‐
ant to descriptions in English that could be avoided if we showed how they could be
systematically eliminated.) We do not wish to attempt anything so radical as this kind
of reductive analysis – we only wanted to show that there is a way of symbolising def‐
inite descriptions in Quantifier. That is, we only need to assume that Russell’s proposal
allows us to model definite descriptions in Quantifier, a language that lacks any native
resources for expressing them. Officially, then, we will take no stand on whether Rus‐
sell’s analysis is correct. Officially, we only suggest that Russell’s analysis is the best
approach to symbolising English sentences involving definite descriptions in Quanti‐
fier.

Yet Russell’s account has some nice features that predict and explain some otherwise
puzzling features of English ‘the’, and many logicians have followed Russell in thinking
that the Russellian account might provide an adequate semantics for English definite
descriptions. So in this section and in §19.4, I cannot resist discussing some of the
evidence for Russell’s account of the English ‘the’, and some of the major puzzles for
that account. These two sections should be regarded as optional.

Empty Definite Descriptions One of the nice features of Russell’s paraphrase is


that it allows us to handle empty definite descriptions neatly. France has no king at
present. Now, if we were to introduce a name, ‘𝑘’, to name the present King of France,
then everything would go wrong: remember from §15 that a name must always pick
out some object in the domain, and whatever actual domain we choose, it will contain
no present King of France. So we are at a loss to understand ‘the King of France is bald’:
does it even say anything, since its subject has no referent?
Russell’s paraphrase neatly avoids this problem. Russell tells us to treat definite de‐
scriptions using predicates and quantifiers, instead of names. Since predicates can be
empty (see §16), this means that no difficulty now arises when the definite description
is empty. The sentence ‘the present King of France is bald’ is paraphrased as ‘there
exists exactly one present King of France, and he is bald’, and so turns out to be
easily understood, and in fact to be false.

Scope and Descriptions Indeed, Russell’s paraphrase helpfully highlights two ways
one can go wrong with definite descriptions. To adapt an example from Stephen
Neale,4 suppose I, Antony Eagle, claim:

166. I am grandfather to the present king of France.

Using the following symbolisation key:

𝑎: Antony
𝐾: __1 is a present king of France
𝐺: __1 is a grandfather of __2

Sentence 166 would be symbolised by ‘∃𝑥(∀𝑦(𝐾𝑦 ↔ 𝑥 = 𝑦) ∧ 𝐺𝑎𝑥)’. Now, suppose you
don’t think this sentence 166 is true. You might express your rejection by saying:

167. Antony isn’t the grandfather of the present king of France.

But your denial is ambiguous. There are two available readings of 167, corresponding
to these two different sentences:

168. There is no one who is both the present King of France and such that Antony is
his grandfather.
169. There is a unique present King of France, but Antony is not his grandfather.

4 Neale (1990) Descriptions, Cambridge: MIT Press.


164 THE LANGUAGE OF QUANTIFIED LOGIC

Sentence 168 might be paraphrased by ‘It is not the case that: Antony is a grandfather
of the present King of France’. It will then be symbolised by ‘¬∃𝑥(𝐾𝑥 ∧ ∀𝑦(𝐾𝑦 → 𝑥 =
𝑦) ∧ 𝐺𝑎𝑥)’. We might call this WIDE SCOPE negation, since the negation takes scope
over the entire sentence. Note that this sentence is predicted to be true, because the
embedded sentence contains an empty definite description.
Sentence 169 can be symbolised by ‘∃𝑥(𝐾𝑥 ∧ ∀𝑦(𝐾𝑦 → 𝑥 = 𝑦) ∧ ¬𝐺𝑎𝑥)’. We might call
this NARROW SCOPE negation, since the negation occurs within the scope of the definite
description. Note that its truth would require that there be a present King of France,
albeit one who is not a grandchild of Antony; so this sentence, unlike 168, is predicted
to be false.
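We can check these two predictions by brute force. The following Python sketch is no part of Quantifier itself: the three‐object domain and the extensions stipulated for ‘𝐾’ and ‘𝐺’ are invented for illustration, with the extension of ‘𝐾’ left empty because France has no present king.

```python
# Illustrative only: a tiny stipulated interpretation. The domain and the
# extension of 'G' are invented; 'K' is empty, since France has no king.
domain = ["antony", "emmanuel", "brigitte"]
K = set()                      # extension of 'K': no present King of France
G = {("antony", "emmanuel")}   # extension of 'G' (grandfather-of pairs)

def the_king_is(cond):
    """Ex(Kx & Ay(Ky -> x=y) & cond(x)): exactly one K, and it satisfies cond."""
    return any(
        x in K
        and all(y not in K or y == x for y in domain)
        and cond(x)
        for x in domain
    )

# 168, wide scope:   not-Ex(Kx & Ay(Ky -> x=y) & Gax)
wide = not the_king_is(lambda x: ("antony", x) in G)
# 169, narrow scope: Ex(Kx & Ay(Ky -> x=y) & not-Gax)
narrow = the_king_is(lambda x: ("antony", x) not in G)

print(wide, narrow)  # → True False
```

As predicted, with ‘𝐾’ empty the wide scope reading comes out true and the narrow scope reading comes out false.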
These two disambiguations of your rejection 167 have different truth values, so don’t
mean the same thing. So there are two different reasons you could have for your rejec‐
tion of my claim. Are you accepting that the definite description refers and denying
what I said of the present king of France? Or are you denying that the definite descrip‐
tion refers, rejecting a more basic assumption of what I said?
The basic point is that the Russellian paraphrase provides two places in the symbolisa‐
tion of 167 for the negation of ‘isn’t’ to fit: either taking scope over the whole sentence,
or taking scope just over the predicate ‘𝐺𝑎𝑥’. We see evidence that there are these two
options in the ambiguity of 167, and so we should opt for a semantics, like Russell’s,
which has the resources to handle these ambiguities – in this case, by positing a quan‐
tifier scope ambiguity like those we discussed in §17.2.
The term‐forming operator approach to definite descriptions cannot handle this con‐
trast. There is just one symbolisation of the negation of 166 available in this frame‐
work: ‘¬𝐺𝑎℩𝑥𝐾𝑥 ’. The original sentence is false, so this negation must be true. Since
sentence 169 is false, this sentence does not express the inner negation of sentence 166.
But there is no way to put the negation elsewhere that will express the same claim as
169. (‘𝐺𝑎℩𝑥¬𝐾𝑥’ clearly doesn’t do the job – it says there is exactly one thing which
isn’t a present king of France, and Antony is grandfather to it!) So the sentence ‘Ant‐
ony isn’t grandfather to the present king of France’ has only one correct symbolisation,
and hence only one reading, if the operator approach to definite descriptions is correct.
Since it is ambiguous, with multiple readings, the ‘℩’‐operator approach cannot be cor‐
rect – it doesn’t provide a complex enough grammar for definite description sentences
to allow for these kinds of scope ambiguities.

19.4 The Adequacy of Russell’s Paraphrase


The evidence from scope ambiguity we just considered is a substantial point in favour
of Russell’s paraphrase, at least compared to the operator approach. But how success‐
ful is Russell’s paraphrase in general? There is a substantial philosophical literature
around this issue, but I shall content myself with two observations.

Presupposition and Negation One worry focusses on Russell’s treatment of empty
definite descriptions. If there are no ℱ s, then on Russell’s paraphrase, both ‘the ℱ is 𝒢’
and its narrow scope negation, ‘the ℱ is not‐𝒢’, are false. Strawson suggested that such

sentences should not be regarded as false, exactly.5 Rather, they both seem to assume
that ‘the ℱ ’ refers, and since this assumption is incorrect, the sentences misfire in a way
that should, Strawson thinks, make us regard them as neither true nor false, but
nevertheless still meaningful.
A SEMANTIC PRESUPPOSITION of a declarative sentence is something that must be taken
for granted by anyone asserting the sentence, triggered or forced by the words in‐
volved.6 A pretty reliable test for whether 𝒫 is a semantic presupposition of a sentence
𝒜 is whether 𝒫 is a consequence of both 𝒜 and ¬𝒜 . Strawson elevates this test to
a definition of semantic presupposition: presuppositions are entailments that persist
when a sentence is embedded under negation. So ‘John has stopped drinking’ and its
negation ‘John hasn’t stopped drinking’ both entail in English that John used to drink,
and hence ‘John used to drink’ is a semantic presupposition of ‘John has stopped drink‐
ing’. Here the presupposition is triggered by the aspectual verb ‘stopped’.
Strawson says that PRESUPPOSITION FAILURE occurs when the presupposition of a sen‐
tence is false. If John never used to drink, both ‘John has stopped drinking’ and ‘John
hasn’t stopped drinking’ misfire. Strawson, following Frege, suggests that in cases of
presupposition failure, a sentence is neither true nor false.
In the case of definite descriptions, the Frege‐Strawson view would say that ‘the present
King of France is bald’ presupposes that there is a present King of France. Since that
presupposition fails, the sentence is neither true nor false. This is contrary to Russell’s
position that the sentence is false.
With the notion of presupposition failure in hand, the Frege‐Strawson theory seems
to be able to address the scope evidence for Russell’s account. For there is now a dis‐
tinction between the denial involved in 168 and that involved in 169, even though the
logical form of the denied sentence is ‘𝐺𝑎℩𝑥𝐾𝑥 ’. The logical negation of that sentence is
‘¬𝐺𝑎℩𝑥𝐾𝑥 ’, which shares the presupposition that there is a present King of France. But
there is also another way of rejecting a sentence, one which targets not what was said
by the sentence, but its presuppositions. This is sometimes called METALINGUISTIC
NEGATION.7 One way of identifying its presence is the use of focal stress, emphasising
the word to be targeted, and accompanied by a gloss explaining the presupposition
to be rejected, as in:

170. I don’t like cricket, I love it!


171. John hasn’t stopped drinking; he never even started!
172. Sarah is not the source of the leak; there was no leak!

The effects which Russell sees as the result of scope ambiguity are re‐analysed as in‐
volving the contrast between ordinary negation in 169, and metalinguistic negation
in 168. Crucially, on the Frege‐Strawson view, the successful deployment of metalin‐
guistic negation in cases like 172 renders the sentence ‘Sarah is the source of the leak’
neither true nor false.
5 P F Strawson (1950) ‘On Referring’, Mind 59 pp. 320–34.
6 See David I Beaver, Bart Geurts, and Kristie Denlinger (2021) ‘Presupposition’, in Edward N Zalta, ed.,
The Stanford Encyclopedia of Philosophy https://plato.stanford.edu/archives/spr2021/entries/
presupposition/, esp. §2 and §6.
7 Larry Horn (1989) A Natural History of Negation, University of Chicago Press.

The phenomenon of metalinguistic negation is real. But if we agree with Frege and
Strawson here on how to model it semantically, we shall need to revise our logic. For,
in our logic, there are only two truth values (True and False), and every meaningful
sentence is assigned exactly one of these truth values. Why? Suppose there is presup‐
position failure of ‘John has stopped drinking’, because John never drank. Then ‘John
has stopped drinking’ can’t be true since it entails something false, its presupposition
‘John drank’. And ‘John hasn’t stopped drinking’ can’t be true, since it also entails that
same falsehood. So neither can be true. Since one is the negation of the other, if either
is false, the other is true. So neither can be false, either. So if there are nontrivial se‐
mantic presuppositions – semantic presuppositions that might be false – then we shall
have to admit that some meaningful sentences with a false presupposition are neither
true nor false. It remains an open question, admittedly, whether presuppositions in
the ordinary and intuitive sense are really semantic presuppositions in this sense.
But there is room to disagree with Strawson. Strawson is appealing to some linguistic
intuitions, but it is not clear that they are very robust. For example: isn’t it just false,
not ‘gappy’, that Antony is grandfather to the present King of France? (This is Neale’s
line.)

Misdescription Keith Donnellan raised a second sort of worry, which (very roughly)
can be brought out by thinking about a case of mistaken identity.8 Two men stand in
the corner: a very tall man drinking what looks like a gin martini; and a very short man
drinking what looks like a pint of water. Seeing them, Malika says:

173. The gin‐drinker is very tall!

Russell’s paraphrase will have us render Malika’s sentence as:

173′. There is exactly one gin‐drinker [in the corner], and whoever is a gin‐drinker
[in the corner] is very tall.

But now suppose that the very tall man is actually drinking water from a martini glass;
whereas the very short man is drinking a pint of (neat) gin. By Russell’s paraphrase, Ma‐
lika has said something false. But don’t we want to say that Malika has said something
true?
Again, one might wonder how clear our intuitions are on this case. We can all agree
that Malika intended to pick out a particular man, and say something true of him (that
he was tall). On Russell’s paraphrase, she actually picked out a different man (the
short one), and consequently said something false of him. But maybe advocates of
Russell’s paraphrase only need to explain why Malika’s intentions were frustrated, and
so why she said something false. This is easy enough to do: Malika said something
false because she had false beliefs about the men’s drinks; if Malika’s beliefs about the
drinks had been true, then she would have said something true.9

8 Keith Donnellan (1966) ‘Reference and Definite Descriptions’, Philosophical Review 77, pp. 281–304.
9 Interested parties should read Saul Kripke (1977) ‘Speaker Reference and Semantic Reference’, 1977 in
French et al., eds., Contemporary Perspectives in the Philosophy of Language, Minneapolis: University
of Minnesota Press, pp. 6‐27.

To say much more here would lead us into deep philosophical waters. That would be
no bad thing, but for now it would distract us from the immediate purpose of learning
formal logic. So, for now, we shall stick with Russell’s paraphrase of definite descrip‐
tions, when it comes to putting things into Quantifier. It is certainly the best that we
can offer, without significantly revising our logic. And it is quite defensible as a
paraphrase.

Key Ideas in §19


› Definite descriptions like ‘the inventor of the zipper’ are singu‐
lar terms in English, but behave rather unlike names – for ex‐
ample, while the inventor of the zipper must be an inventor, Ju‐
lius needn’t be – even if Julius is the name of the inventor of the
zipper.
› Russell’s insight was that if definite descriptions denote by uniquely
describing, then we can use the ability of Quantifier to symbolise
sentences like ‘there is exactly one ℱ and it is 𝒢’ to represent
definite descriptions.
› There is an ongoing debate in linguistics about whether Russell’s
account captures the meaning of natural language definite de‐
scriptions, but there is no question that his approach is the only
viable way to represent English descriptive noun phrases in Quan‐
tifier.

Practice exercises
A. Using the following symbolisation key:

domain: people
𝐾: __1 knows the combination to the safe.
𝑆: __1 is a spy.
𝑉: __1 is a vegetarian.
𝑇: __1 trusts __2.
ℎ: Hofthor
𝑖: Ingmar

symbolise the following sentences in Quantifier:

1. Hofthor trusts a vegetarian.
2. Everyone who trusts Ingmar trusts a vegetarian.
3. Everyone who trusts Ingmar trusts someone who trusts a vegetarian.
4. Only Ingmar knows the combination to the safe.
5. Ingmar trusts Hofthor, but no one else.
6. The person who knows the combination to the safe is a vegetarian.
7. The person who knows the combination to the safe is not a spy.

B. Using the following symbolisation key:

domain: animals
𝐶: __1 is a cat.
𝐺: __1 is grumpier than __2.
𝑓: Felix
𝑔: Sylvester

symbolise each of the following in Quantifier:

1. Some animals are grumpier than cats.
2. Some cat is grumpier than every other cat.
3. Sylvester is the grumpiest cat.
4. Of all the cats, Felix is the least grumpy.

C. Using the following symbolisation key:

domain: cards in a standard deck
𝐵: __1 is black.
𝐶: __1 is a club.
𝐷: __1 is a deuce.
𝐽: __1 is a jack.
𝑀: __1 is a man with an axe.
𝑂: __1 is one‐eyed.
𝑊: __1 is wild.

symbolise each sentence in Quantifier:

1. The deuce of clubs is a black card.
2. One‐eyed jacks and the man with the axe are wild.
3. If the deuce of clubs is wild, then there is exactly one wild card.
4. The man with the axe is not a jack.
5. The deuce of clubs is not the man with the axe.

D. Using the following symbolisation key:

domain: animals in the world
𝐵: __1 is in Farmer Brown’s field.
𝐻: __1 is a horse.
𝑃: __1 is a Pegasus.
𝑊: __1 has wings.

symbolise the following sentences in Quantifier:

1. The Pegasus is a winged horse.
2. The animal in Farmer Brown’s field is not a horse.
3. The horse in Farmer Brown’s field does not have wings.

E. In this section, I symbolised ‘Nick is the traitor’ by ‘∃𝑥(𝑇𝑥 ∧ ∀𝑦(𝑇𝑦 → 𝑥 = 𝑦) ∧ 𝑥 = 𝑛)’.


Two equally good symbolisations would be:

› 𝑇𝑛 ∧ ∀𝑦(𝑇𝑦 → 𝑛 = 𝑦)

› ∀𝑦(𝑇𝑦 ↔ 𝑦 = 𝑛)

Explain why these would be equally good symbolisations.


F. Candace returns to her parents’ home to find the family dog with a bandaged nose.
Her mother says ‘the dog got into a fight with another dog’, and this seems a perfectly
appropriate thing to say in the circumstances.
Does this example pose a problem for Russell’s approach to definite descriptions?
G. Some people have argued that the following two sentences are ambiguous. Are
they? If they are, explain how this fact might be used to provide support to Russell’s
paraphrases of definite description sentences.

1. The prime minister has always been Australian.
2. The number of planets is necessarily eight.

H. Russell’s paraphrase of an indefinite description sentence like ‘José met a man’ is:
there is at least one thing x such that x is male and human and José met x.
Note that the word ‘is’ has apparently two readings: sometimes, as in ‘Fido is heavy’,
it indicates predication; sometimes, as in ‘Fido is Rover’, it indicates identity (in this
case, the same dog is known by two names). Something interesting arises if Russell’s
account of indefinite descriptions is right, since in an example like ‘Fido is a dog of
unusual size’, we might interpret the ‘is’ in either way, roughly:

1. Fido is identical to a dog of unusual size;
2. Fido has the property of being a dog of unusual size.

Suppose that ‘𝑈’ symbolises the property of being a dog of unusual size, then our two
readings can be symbolised ‘∃𝑥(𝑈𝑥 ∧ 𝑓 = 𝑥)’ and ‘𝑈𝑓’.
Is there any significant difference in meaning between these two symbolisations?
20
Sentences of Quantifier

We know how to represent English sentences in Quantifier. The time has finally come
to properly define the notion of a sentence of Quantifier.

20.1 Expressions
There are six kinds of symbols in Quantifier:

Predicate symbols 𝐴, 𝐵, 𝐶, …, 𝑍
with subscripts, as needed 𝐴1 , 𝐵1 , 𝑍1 , 𝐴2 , 𝐴25 , 𝐽375 , …
and the identity symbol =.
Names 𝑎, 𝑏, 𝑐, …, 𝑟
with subscripts, as needed 𝑎1 , 𝑏224 , ℎ7 , 𝑚32 , …

Variables 𝑠, 𝑡, 𝑢, 𝑣, 𝑤, 𝑥, 𝑦, 𝑧
with subscripts, as needed 𝑥1 , 𝑦1 , 𝑧1 , 𝑥2 , …

Connectives, of two types


Truth‐functional Connectives ¬, ∧, ∨, →, ↔
Quantifiers ∀, ∃

Parentheses (,)

We define an EXPRESSION of Quantifier as any string of symbols of Quantifier. Take
any of the symbols of Quantifier and write them down, in any order, and you have an
expression.

20.2 Terms and Formulae


In §6, we went straight from the statement of the vocabulary of Sentential to the defin‐
ition of a sentence of Sentential. In Quantifier, we shall have to go via an intermediary


stage: via the notion of a formula. The intuitive idea is that a formula is any sentence,
or anything which can be turned into a sentence by adding quantifiers out front. But
this will take some unpacking.
We start by defining the notion of a term.

A TERM is any name or any variable.

So, here are some terms:


𝑎, 𝑏, 𝑥, 𝑥1 , 𝑥2 , 𝑦, 𝑦254 , 𝑧
We next need to define ATOMIC FORMULAE.

1. If ℛ is any predicate other than the identity predicate ‘=’, and we
have 0 or more terms 𝓉1 , 𝓉2 , …, 𝓉𝑛 (not necessarily distinct from
one another), then ℛ𝓉1 𝓉2 …𝓉𝑛 is an atomic formula.
2. If 𝓉1 and 𝓉2 are terms, then 𝓉1 = 𝓉2 is an atomic formula.
3. Nothing else is an atomic formula.

The use of script fonts here follows the conventions laid down in §7. So, ‘ℛ ’ is not
itself a predicate of Quantifier. Rather, it is a symbol of our metalanguage (augmented
English) that we use to talk about any predicate of Quantifier. Similarly, ‘𝓉1 ’ is not a
term of Quantifier, but a symbol of the metalanguage that we can use to talk about any
term of Quantifier. So here are some atomic formulae:

𝑥=𝑎
𝑎=𝑏
𝐹𝑥
𝐹𝑎
𝐺𝑥𝑎𝑦
𝐺𝑎𝑎𝑎
𝑆𝑥1 𝑥2 𝑎𝑏𝑦𝑥1
𝑆𝑏𝑦254 𝑧𝑎𝑎𝑧

Remember that we allow zero‐place predicates too, to ensure that sentence letters of
Sentential are grammatical expressions of Quantifier too. According to the definition,
any predicate symbol followed by no terms at all is also an atomic formula of Quantifier.
So ‘𝑄’ by itself is an acceptable atomic formula.
Earlier, we distinguished many‐place from one‐place predicates. We made no distinc‐
tion however in our list of acceptable symbols between predicate symbols with dif‐
ferent numbers of places. This means that ‘𝐹 ’, ‘𝐹𝑎’, ‘𝐹𝑎𝑥 ’, and ‘𝐹𝑎𝑥𝑏’ are all atomic
formulae. We will not introduce any device for explicitly indicating what number of
places a predicate has. Rather, we will assume that in every atomic formula of the

form 𝒜𝓉1 …𝓉𝑛 , 𝒜 denotes an 𝑛‐place predicate. This means there is a potential for
confusion in practice, if someone chooses to symbolise an argument using both the
one‐place predicate ‘𝐴’ and the two‐place predicate ‘𝐴’. Rather than forbid this en‐
tirely, we recommend choosing distinct symbols for different place predicates in any
symbolisation you construct.
Once we know what atomic formulae are, we can offer recursion clauses to define
arbitrary formulae. The first few clauses are exactly the same as for Sentential.

1. Every atomic formula is a formula.
2. If 𝒜 is a formula, then ¬𝒜 is a formula.
3. If 𝒜 and ℬ are formulae, then (𝒜 ∧ ℬ) is a formula.
4. If 𝒜 and ℬ are formulae, then (𝒜 ∨ ℬ) is a formula.
5. If 𝒜 and ℬ are formulae, then (𝒜 → ℬ) is a formula.
6. If 𝒜 and ℬ are formulae, then (𝒜 ↔ ℬ) is a formula.
7. If 𝒜 is a formula and 𝓍 is a variable, then ∀𝓍𝒜 is a formula.
8. If 𝒜 is a formula and 𝓍 is a variable, then ∃𝓍𝒜 is a formula.
9. Nothing else is a formula.

Here are some formulae:

𝐹𝑥
𝐺𝑎𝑦𝑧
𝑆𝑦𝑧𝑦𝑎𝑦𝑥
(𝐺𝑎𝑦𝑧 → 𝑆𝑦𝑧𝑦𝑎𝑦𝑥)
∀𝑧(𝐺𝑎𝑦𝑧 → 𝑆𝑦𝑧𝑦𝑎𝑦𝑥)
𝐹𝑥 ↔ ∀𝑧(𝐺𝑎𝑦𝑧 → 𝑆𝑦𝑧𝑦𝑎𝑦𝑥)
∃𝑦(𝐹𝑥 ↔ ∀𝑧(𝐺𝑎𝑦𝑧 → 𝑆𝑦𝑧𝑦𝑎𝑦𝑥))
∀𝑥∃𝑦(𝐹𝑥 ↔ ∀𝑧(𝐺𝑎𝑦𝑧 → 𝑆𝑦𝑧𝑦𝑎𝑦𝑥))
∀𝑥(𝐹𝑥 → ∃𝑥𝐺𝑥)
∀𝑦(𝐹𝑎 → 𝐹𝑎)
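The recursion clauses can be mirrored directly in code. The sketch below is illustrative only, not part of the official definition: formulae are encoded as nested Python tuples (an invented encoding), predicates are single uppercase letters, terms are single lowercase letters, and subscripted symbols are ignored for simplicity.

```python
# Invented tuple encoding: ('atom', 'F', 'x') for Fx, ('=', 'a', 'b') for a=b,
# ('not', A), ('and', A, B), ('or', A, B), ('->', A, B), ('<->', A, B),
# and ('all', 'x', A), ('exists', 'x', A) for the quantifiers.

VARIABLES = set("stuvwxyz")    # simplified: no subscripted variables

def is_term(e):
    # A term is any name or any variable -- here, any single lowercase letter.
    return isinstance(e, str) and len(e) == 1 and e.islower()

def is_formula(f):
    """One case per recursion clause; anything else is not a formula."""
    if not isinstance(f, tuple) or not f:
        return False
    op, rest = f[0], f[1:]
    if op == "atom":   # R t1 ... tn, with n >= 0 (zero-place predicates OK)
        return (len(rest) >= 1 and isinstance(rest[0], str)
                and rest[0].isupper() and all(is_term(t) for t in rest[1:]))
    if op == "=":      # t1 = t2
        return len(rest) == 2 and all(is_term(t) for t in rest)
    if op == "not":
        return len(rest) == 1 and is_formula(rest[0])
    if op in ("and", "or", "->", "<->"):
        return len(rest) == 2 and all(is_formula(g) for g in rest)
    if op in ("all", "exists"):   # a quantifier must attach to a variable
        return len(rest) == 2 and rest[0] in VARIABLES and is_formula(rest[1])
    return False       # final clause: nothing else is a formula

# ∀𝑥(𝐹𝑥 → ∃𝑥𝐺𝑥) is a formula; a quantifier applied to a bare term is not.
print(is_formula(("all", "x", ("->", ("atom", "F", "x"),
                               ("exists", "x", ("atom", "G", "x"))))))  # → True
print(is_formula(("all", "x", "x")))                                    # → False
```

Note that, just as in the official definition, a predicate followed by no terms at all counts as atomic, so the encoded zero‐place formula `('atom', 'Q')` is accepted.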

We can now give a formal definition of scope, which incorporates the definition of
the scope of a quantifier. Here we follow the case of Sentential, though we note that
‘connective’ now covers both the truth‐functional connectives and the quantifiers:

The MAIN CONNECTIVE in a formula is the connective that was introduced
last, when that formula was constructed using the recursion rules.

The SCOPE of a connective in a formula is the subformula for which
that connective is the main connective.

So we can describe the scope of the quantifiers in the last two examples. In
‘∀𝑥∃𝑦(𝐹𝑥 ↔ ∀𝑧(𝐺𝑎𝑦𝑧 → 𝑆𝑦𝑧𝑦𝑎𝑦𝑥))’, the scope of ‘∀𝑥’ is the whole formula; the scope
of ‘∃𝑦’ is ‘∃𝑦(𝐹𝑥 ↔ ∀𝑧(𝐺𝑎𝑦𝑧 → 𝑆𝑦𝑧𝑦𝑎𝑦𝑥))’; and the scope of ‘∀𝑧’ is
‘∀𝑧(𝐺𝑎𝑦𝑧 → 𝑆𝑦𝑧𝑦𝑎𝑦𝑥)’. In ‘∀𝑥(𝐹𝑥 → ∃𝑥𝐺𝑥)’, the scope of ‘∀𝑥’ is the whole formula,
and the scope of ‘∃𝑥’ is ‘∃𝑥𝐺𝑥’.

Note that it follows from our recursive definition that ‘∀𝑥𝐹𝑎’ is a formula. While puzz‐
ling on its face, with that quantifier governing no variable in its scope, it nevertheless
is a formula of the language. It will turn out that, because the quantifier binds no
variable in its scope, it is redundant; the formula ‘∀𝑥𝐹𝑎’ is logically equivalent to ‘𝐹𝑎’.
Eliminating such formulae from the language involves greatly increased complication
in the definition of a formula for no important gain.
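To see why such vacuous quantification is harmless, here is a quick brute‐force check. It uses an invented mini‐interpretation and is only an illustration (the official semantics arrives in the next chapter): since ‘𝐹𝑎’ contains no occurrence of ‘𝑥’, quantifying over ‘𝑥’ cannot affect its truth value.

```python
# Illustration with invented extensions: 'AxFa' and 'Fa' always agree,
# because 'Fa' ignores the value of 'x' entirely.

def holds_Fa(F, a):
    # 'Fa' is true iff the referent of 'a' is in the extension of 'F'.
    return a in F

def holds_AxFa(F, a, domain):
    # 'AxFa' is true iff 'Fa' holds however 'x' is chosen from the domain.
    return all(holds_Fa(F, a) for _ in domain)

domain = {1, 2, 3}     # domains are always nonempty (§15.6)
for F in [set(), {1}, {1, 2, 3}]:
    for a in domain:
        assert holds_AxFa(F, a, domain) == holds_Fa(F, a)
print("'AxFa' and 'Fa' agree on every interpretation tried")
```

The nonemptiness of the domain matters here: over an empty domain the universally quantified claim would hold vacuously whatever ‘𝐹𝑎’ said.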

20.3 Sentences
Recall that we are largely concerned in logic with assertoric sentences: sentences that
can be either true or false. Many formulae are not sentences. Consider the following
symbolisation key:

domain: people
𝐿: __1 loves __2
𝑏: Boris

Consider the atomic formula ‘𝐿𝑧𝑧’. All atomic formulae are formulae, so ‘𝐿𝑧𝑧’ is a for‐
mula. But can it be true or false? You might think that it will be true just in case the
person named by ‘𝑧’ loves herself, in the same way that ‘𝐿𝑏𝑏’ is true just in case Boris
(the person named by ‘𝑏’) loves himself. But ‘𝑧’ is a variable, and does not name anyone
or any thing. It is true that we can sometimes manage to make a claim by saying ‘it
loves it’, the best Engish rendering of ‘𝐿𝑧𝑧’. But we can only do so by making use of
contextual cues to supply referents for the pronouns ‘it’ – contextual cues that artifical
languages like Quantifier lack. (If you and I are both looking at a bee sucking nectar on
a flower, we might say ‘it loves it’ to express the claim that the bee [it] loves the nectar
[it]. But we don’t have such a rich environment to appeal to when trying to interpret
formulae of Quantifier.)
Of course, if we put an existential quantifier out front, obtaining ‘∃𝑧𝐿𝑧𝑧’, then this
would be true iff someone loves themself (i.e., someone [𝑧] is such that they [𝑧] love
themself [𝑧]). Equally, if we wrote ‘∀𝑧𝐿𝑧𝑧’, this would be true iff everyone loves them‐
selves. The point is that, in the absence of an explicit introduction or contextual cues,
we need a quantifier to tell us how to deal with a variable.
Let’s make this idea precise.

A BOUND VARIABLE is an occurrence of a variable 𝓍 that is within the
scope of either ∀𝓍 or ∃𝓍.
A FREE VARIABLE is any variable that is not bound.

For example, consider the formula

∀𝑥(𝐸𝑥 ∨ 𝐷𝑦) → ∃𝑧(𝐸𝑥 → 𝐿𝑧𝑥)

The scope of the universal quantifier ‘∀𝑥 ’ is ‘∀𝑥(𝐸𝑥 ∨ 𝐷𝑦)’, so the first ‘𝑥’ is bound by the
universal quantifier. However, the second and third occurrence of ‘𝑥’ are free. Equally,
the ‘𝑦’ is free. The scope of the existential quantifier ‘∃𝑧’ is ‘∃𝑧(𝐸𝑥 → 𝐿𝑧𝑥)’, so ‘𝑧’ is
bound.
In our last example from the previous section, ‘∀𝑥(𝐹𝑥 → ∃𝑥𝐺𝑥)’, the variable ‘𝑥’ in ‘𝐺𝑥 ’
is bound by the quantifier ‘∃𝑥’, and so 𝑥 is free in ‘𝐹𝑥 → ∃𝑥𝐺𝑥’ only when it appears in
‘𝐹𝑥’. So while the scope of ‘∀𝑥 ’ is the whole sentence, it nevertheless doesn’t bind every
variable in its scope – only those such that, were it absent, would be free. (So we might
say an occurrence of a variable 𝓍 is BOUND BY an occurrence of quantifier ∀𝓍/∃𝓍 just
in case it would have been free had that quantifier been omitted.)
Finally we can say the following.

The SENTENCES of Quantifier are exactly those formulae of Quantifier
that contain no free variables.

Since an atomic formula formed by a zero‐place predicate contains no terms at all, and
hence cannot contain a variable, every such expression is a sentence – they are just the
atomic sentences of Sentential. Any other formula which contains no variables, but
only names, is also a sentence, as well as all those formulae which contain only bound
variables.
Our definition of a formula allows for examples like ‘∃𝑥∀𝑥𝐹𝑥 ’. This is a sentence, since
the variable in ‘𝐹𝑥’ is in the scope of a quantifier attached to ‘𝑥’. But which one? It
could make a difference whether the sentence is to be understood as saying everything
is 𝐹 , or something is. To resolve this issue, let us stipulate that a variable is bound by
the quantifier which is the main connective of the smallest subformula in which the
variable is bound. So in ‘∃𝑥∀𝑥𝐹𝑥 ’, the variable is bound by the universal quantifier,
because it was already bound in the subformula ‘∀𝑥𝐹𝑥 ’.
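These definitions can be made computational. The sketch below uses an invented nested‐tuple encoding of formulae – e.g. `('all', 'x', ('atom', 'F', 'x'))` for ‘∀𝑥𝐹𝑥’ – and is offered only as an illustration. Removing the quantified variable from the free variables of the scope, at each quantifier, automatically respects the stipulation just made for cases like ‘∃𝑥∀𝑥𝐹𝑥’.

```python
# Invented encoding, for illustration: ('atom', P, t1, ...), ('=', t1, t2),
# ('not', A), ('and'/'or'/'->'/'<->', A, B), ('all'/'exists', x, A).

VARIABLES = set("stuvwxyz")

def free_variables(f):
    """The set of variables with free occurrences in formula f."""
    op = f[0]
    if op in ("atom", "="):
        return {t for t in f[1:] if t in VARIABLES}
    if op == "not":
        return free_variables(f[1])
    if op in ("and", "or", "->", "<->"):
        return free_variables(f[1]) | free_variables(f[2])
    if op in ("all", "exists"):
        # every occurrence of the quantified variable inside the scope is
        # bound, and by whichever quantifier on that variable is innermost
        return free_variables(f[2]) - {f[1]}
    raise ValueError("not a formula")

def is_sentence(f):
    # A sentence is a formula with no free variables.
    return free_variables(f) == set()

# ∀𝑥(𝐸𝑥 ∨ 𝐷𝑦) → ∃𝑧(𝐸𝑥 → 𝐿𝑧𝑥): 'y' and the last two 'x's are free; 'z' is bound.
f = ("->", ("all", "x", ("or", ("atom", "E", "x"), ("atom", "D", "y"))),
           ("exists", "z", ("->", ("atom", "E", "x"), ("atom", "L", "z", "x"))))
print(sorted(free_variables(f)))  # → ['x', 'y']

# ∃𝑥∀𝑥𝐹𝑥 is a sentence: the 'x' in 'Fx' is already bound by the inner quantifier.
print(is_sentence(("exists", "x", ("all", "x", ("atom", "F", "x")))))  # → True
```
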

20.4 Parenthetical Conventions


We will adopt the same notational conventions governing parentheses that we did for
Sentential (see §6 and §10.3.)
First, we may omit the outermost parentheses of a formula, and we sometimes use
variably sized parentheses to aid with readability.
Second, we may omit parentheses between each pair of conjuncts when writing long
series of conjunctions.
Third, we may omit parentheses between each pair of disjuncts when writing long
series of disjunctions.

Key Ideas in §20


› There is a precise recursive definition of the notion of a sentence
in Quantifier, describing how they are built out of basic expres‐
sions.
› The distinction between a formula and a sentence is important;
in a sentence, no variable occurs without an associated quan‐
tifier binding it. Sentences, unlike mere formulae, contain all
the information needed to understand their variables, once in‐
terpreted.

Practice exercises
A. Identify which variables are bound and which are free. Are any of these expressions
formulae of Quantifier? Are any of them sentences? Explain your answers.

1. ∀𝑥(𝐴𝑥 ∨ (𝐶𝑥 → 𝐵𝑥))
2. ∃𝑥(𝐿𝑥𝑦 ∧ ∀𝑦𝐿𝑦𝑥)
3. (∀𝑥𝐴𝑥 ∧ 𝐵𝑥)
4. (∀𝑥(𝐴𝑥 ∧ 𝐵𝑥) ∧ ∀𝑦(𝐶𝑥 ∧ 𝐷𝑦))
5. (∀𝑥(𝐴𝑥 ∧ 𝐵𝑥) ∧ ∀𝑥𝑦(𝐶𝑥 ∧ 𝐷𝑦))
6. (∀𝑥∃𝑦 𝑅𝑥𝑦 → (𝐽𝑧 ∧ 𝐾𝑥) ∨ 𝑅𝑦𝑥)
7. (∀𝑥1 (𝑀𝑥2 ↔ 𝐿𝑥2 𝑥1 ) ∧ ∃𝑥2 𝐿𝑥3 𝑥2 )

B. Identify which of the following are (a) expressions of Quantifier; (b) formulae of
Quantifier; and (c) sentences of Quantifier.

1. ∃𝑥(𝐹𝑥 → ∀𝑦(𝐹𝑥 → ∃𝑥𝐺𝑥));
2. ∃¬𝑥(𝐹𝑥 ∧ 𝐺𝑥);
3. ∃𝑥(𝐺𝑥 ∧ 𝐺𝑥𝑥);
4. (𝐹𝑥 → ∀𝑥(𝐺𝑥 ∧ 𝐹𝑥));
5. (∀𝑥(𝐺𝑥 ∧ 𝐹𝑥) → 𝐹𝑥);
6. (𝑃 ∧ (∀𝑥(𝐹𝑥 ∨ 𝑃)));
7. (𝑃 ∧ ∀𝑥(𝐹𝑥 ∨ 𝑃));
8. ∀𝑥(𝑃𝑥 ∧ ∃𝑦(𝑥 ≠ 𝑦)).
Chapter 5

Interpretations
21
Extensionality

Recall that Sentential is a truth‐functional language. Its connectives are all truth‐
functional, and all that we can do with Sentential is key sentences to particular truth
values. We can do this directly. For example, we might stipulate that the Sentential sen‐
tence ‘𝑃’ is to be true. Alternatively, we can do this indirectly, offering a symbolisation
key, e.g.:

𝑃: Big Ben is in London

But recall from §8 that this should be taken to mean:

› The Sentential sentence ‘𝑃’ is to take the same truth value as the English sentence
‘Big Ben is in London’ (whatever that truth value may be)

The point that I emphasised is that Sentential cannot handle differences in meaning
that go beyond mere differences in truth value.

21.1 Symbolising Versus Translating: Extensional Languages


Quantifier has some similar limitations. It gets beyond mere truth values, since it en‐
ables us to split up sentences into terms, predicates and quantifier expressions. This
enables us to consider what is true of some particular object, or of some or all objects.
But we can do no more than that.
When we provide a symbolisation key for some Quantifier predicates, such as:

𝐶: __1 lectures on logic in Adelaide in Semester 2, 2018

we do not carry the meaning of the English predicate across into our Quantifier predic‐
ate. We are simply stipulating something like the following:


› ‘𝐶’ and ‘__1 lectures on logic in Adelaide in Semester 2, 2018’ are to be true of
exactly the same things.

So, in particular:

› ‘𝐶 ’ is to be true of all and only those things which lecture on logic in Adelaide in
Semester 2, 2018 (whatever those things might be).

This is an indirect stipulation. Alternatively we can stipulate predicate extensions
directly. We can stipulate that ‘𝐶’ is to be true of Antony Eagle and Jon Opie, and only
them. As it happens, this direct stipulation would have the same effect as the indirect
stipulation. But note that the English predicates ‘__1 is Antony Eagle or Jon Opie’
and ‘__1 lectures on logic in Adelaide in Semester 2, 2018’ have very different mean‐
ings!
The point is that Quantifier does not give us any resources for dealing with nuances
of meaning. When we interpret Quantifier, all we are considering is what the predic‐
ates are actually true of. For this reason, I say only that Quantifier sentences symbolise
English sentences. It is doubtful that we are translating English into Quantifier, for
translations should preserve meanings.
The EXTENSION of an English expression is just the things to which it actually applies.
So the extension of a name is the thing actually named; and the extension of a predic‐
ate is just those things it actually covers. Our symbolisation keys can be understood
as stipulating that the extension of an Quantifier expression is to be the same as the
extension of some English expression.
English is not an extensional language. In English, there is a distinction between the
extension of a term – or what it denotes – and what it means. While the predicate ‘__1
is a monotreme’ and the predicate ‘__1 is either an echidna or a platypus’ have the
same extension, and apply to the same creatures, they are not synonymous. For one
thing, you can replace an expression being used in a sentence by one of its synonyms,
and the resulting sentence will mean the same thing as the original – and will have
the same truth value. But while ‘Necessarily, all monotremes are monotremes’ is true,
‘Necessarily, all monotremes are either echidnas or platypuses’ is not true. There are
extinct species of monotreme, such as Steropodon, that are neither. Since there were
monotremes that were neither platypus nor echidna, that must be possible. So despite
having the same current extension, these expressions differ in meaning. We need more
than just the extension of some English expressions to fix the truth value of an English
sentence involving them. Another example might be provided by singular terms. ‘Scott
Morrison’ and ‘the prime minister’ share their extension, but differ in meaning: ‘Always,
Scott Morrison will be prime minister’ is false, but ‘Always, the prime minister will be
prime minister’ is trivially true. This difference in meaning is particularly evident when
they are embedded in more complex constructions.

In Quantifier, by contrast, the substitution of one name for another with the same exten‐
sion will always yield a sentence with the same truth‐value; likewise with the substitu‐
tion of one predicate for another with the same extension. This is normally summed
up by saying that Quantifier is an EXTENSIONAL LANGUAGE. An extensional language is
one where non‐logical expressions with the same extension can always be swapped for
one another in a sentence without changing the truth value.1
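The point can be made concrete with a toy extensional evaluation, in which truth values are computed from extensions alone. The domain and extensions below are invented for illustration; the point is only that swapping two co‐extensional predicates, like the monotreme pair above, can never change a truth value.

```python
# Invented domain and extensions, for illustration only.
domain = {"platypus", "echidna", "wombat"}
extensions = {
    "M": {"platypus", "echidna"},  # '__1 is a monotreme' (current extension)
    "E": {"platypus", "echidna"},  # '__1 is either an echidna or a platypus'
    "W": {"wombat"},               # a third predicate, for contrast
}

def every(p, q):
    """Evaluate 'every P is Q' from the extensions over the domain."""
    return all(x in extensions[q] for x in domain if x in extensions[p])

def some(p, q):
    """Evaluate 'some P is Q' likewise."""
    return any(x in extensions[q] for x in domain if x in extensions[p])

# Since 'M' and 'E' have the same extension, replacing one by the other in
# any of these sentences leaves the truth value unchanged.
swap = {"M": "E", "E": "M"}
for p, q in [("M", "W"), ("W", "M"), ("M", "E"), ("M", "M")]:
    p2, q2 = swap.get(p, p), swap.get(q, q)
    assert every(p, q) == every(p2, q2)
    assert some(p, q) == some(p2, q2)
print("co-extensional predicates swap without change of truth value")
```

Nothing in the evaluation functions looks at anything beyond the extensions, which is exactly what it is for the semantics to be extensional.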
We noted above that a name and a definite description might have very different mean‐
ings, despite having the same extension. Our treatment of definite descriptions (§19)
allows us to preserve the extensionality of Quantifier while also preserving the key lo‐
gical features of descriptions. The definite description in English is analysed as a quan‐
tifier expression in Quantifier, so there is no such singular term as a definite description
in Quantifier to be assigned an extension at all. What are assigned extensions are pre‐
dicates and names, and our approach to definite descriptions allows the extensions
of those predicates and names to fix the truth value of any symbolisation of an Eng‐
lish sentence involving definite descriptions. The logical properties of the symbolised
sentence in Quantifier, however, are also determined by the quantifier structure, which
isn’t fixed by assigning an extension.

21.2 A Word on Domains and Extensions


We always start our symbolisation keys by specifying a domain, and with extensions in
view we can see why this is important. If we are to interpret an expression in terms of
what it applies to, then we are going to need to know what things are ‘out there’ for it to
apply to. As we said before (§15.6), the only restriction we place on our domains is that
they be nonempty collections of things. But every extension we assign will be drawn
from the domain: every name will be a member of the domain, and every one‐place
predicate will be assigned some things drawn from the domain.
We can stipulate directly which things in our domain our predicates are to be true of.
So it is worth noting that our stipulations can be as arbitrary as we like. For example,
we could stipulate that ‘𝐻’ should be true of, and only of, the following objects:

David Cameron
the number 𝜋
every top‐F key on every piano ever made

1 What this shows, in passing, is that Quantifier lacks the resources to express things like ‘___1 has always
been ___2’ or ‘___1 is necessarily ___2’, which would allow us to separate expressions with the same
present extension.
Another construction with apparently similar effects is when a true identity, such as ‘Lewis Carroll is
Charles Lutwidge Dodgson’, is embedded in a belief report such as ‘AE believes that Lewis Carroll is
Charles Lutwidge Dodgson’. The belief report appears to be false, if AE doesn’t know that ‘Lewis Carroll’
is a pen name for the Oxford mathematician. But still, this seems hard to deny: ‘AE believes that Lewis
Carroll is Lewis Carroll’. Many have used this to argue that the meaning of a name in English is not just
its extension. But this is actually a rather controversial case, unlike the example in the main text. Many
philosophers think that the meaning of a name in English just is its extension. But let us be clear: no
one generalises this to predicates. Everyone agrees that the meaning of an English predicate is not just
its extension.
180 INTERPRETATIONS

Now, the objects that we have listed have nothing particularly in common. But this
doesn’t matter. Logic doesn’t care about what strikes us mere humans as ‘natural’ or
‘similar’ (see below, §21.7). As long as the extension assigned consists of elements of
the domain, we’ve managed to come up with an acceptable interpretation, at least from
a purely logical point of view. Armed with this interpretation of ‘𝐻’, suppose I now add
to my symbolisation key:

𝑑 : David Cameron
𝑛: Julia Gillard
𝑝: the number 𝜋

Then ‘𝐻𝑑 ’ and ‘𝐻𝑝’ will both be true, on this interpretation, but ‘𝐻𝑛’ will be false, since
Julia Gillard was not among the stipulated objects.
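Although nothing in the official story requires it, we can model this kind of direct stipulation in Python, treating the extension as a set and the name assignments as a dictionary (a sketch only; the data is just the example above):

```python
# A directly stipulated extension for 'H': just some things from the
# domain, with nothing in particular in common.
domain = {"David Cameron", "Julia Gillard", "the number pi", "a top-F key"}
H = {"David Cameron", "the number pi", "a top-F key"}   # extension of 'H'

# Each name is assigned exactly one member of the domain.
names = {"d": "David Cameron", "n": "Julia Gillard", "p": "the number pi"}

def atomic_truth(pred_ext, name):
    # An atomic sentence like 'Hd' is true iff the referent of the name
    # is in the predicate's extension.
    return names[name] in pred_ext

print(atomic_truth(H, "d"))  # True:  'Hd'
print(atomic_truth(H, "n"))  # False: 'Hn'
```

The membership test is all there is to the truth of an atomic sentence on an interpretation: no classification rule is consulted.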
This process of explicit stipulation is just giving the extension of a predicate by a list
of items falling under it. A more common way of identifying the extension of a pre‐
dicate is to derive it from a classification rule that sorts everything into those items
that fall under it and those that do not. Such a classification rule is often what people
think of as the meaning of a predicate. But such a classification rule goes beyond the
extension. In effect, our earlier symbolisation keys assign an extension by relying on
our knowledge of the classification rule associated with English predicates. While the
rule might determine the extension in a domain, it is not the extension: two different
rules could come up with the same extension, as in the case of monotremes earlier.
The extension of a one‐place predicate is just some things. We can identify them by
simply listing them. A list like that isn’t anything more than the items in the list, so
the same extension can be assigned in any domain which contains those things.
But we can see a further difference between extensions and classification rules when
we consider applying the same rule in different domains. So the classification rule
associated with the English predicate ‘___1 is a student’ applies to many, many people
in the domain of all people. In the domain ‘people in this class’, it applies only to a
select few. In fact, you can even use the classification rule ‘___1 is a student’ to fix
an extension where the domain doesn’t include any students at all. In that case it will
just yield an EMPTY EXTENSION – one containing nothing. Here are some other cases
where the empty extension should be assigned to a predicate:

› ‘___1 is a round square’ (in any domain), or

› ‘___1 is a horse’ in the domain of sheep, or

› ‘___1 is distinct from, and divisible without remainder by, ___2’ in the domain of
prime numbers (where the number 1 isn’t normally thought to be a prime).
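The way one classification rule yields different extensions in different domains – including the empty extension – can be sketched in Python (the rule and the domains here are invented for illustration):

```python
# A classification rule, applied in different domains, determines
# different extensions. (The rule and domains are made up.)
def is_student(x):
    return x in {"Ali", "Bea"}   # stand-in for the real rule

all_people = {"Ali", "Bea", "Cam", "Dee"}
this_class = {"Ali", "Cam"}
retirement_home = {"Dee"}

ext_everyone = {x for x in all_people if is_student(x)}
ext_class = {x for x in this_class if is_student(x)}
ext_home = {x for x in retirement_home if is_student(x)}

print(ext_home == set())  # True: the empty extension
```

The rule is the same function throughout; only the domain it is applied to changes, and with it the extension.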

The empty collection too is a legitimate extension, because the empty set is still some‐
thing: it is still a sub‐collection of the domain. However, our names must always be

assigned some element of the domain. An ‘empty’ name would have no extension at all
– and in an extensional language it is hard to see how we can admit such expressions.
So keep clear the distinction between assigning an empty collection as an extension,
and assigning nothing at all.
There are also trivial cases of the opposite sort, where we assign the entire domain as
the extension of a predicate:

› ‘___1 is something’ (in any domain), or

› ‘___1 is a horse’ in the domain of horses, or

› ‘___1 is greater than or equal to ___2’ in the domain consisting of just the num‐
ber 1 (the extension contains just ⟨1, 1⟩, which is every ordered pair that can be
formed in this domain).

21.3 Many‐place Predicates


All of this is quite easy to understand when it comes to one‐place predicates. But it
gets much messier when we consider two‐place predicates. Consider a symbolisation
key like:

𝐿: ___1 loves ___2

Given what I said above, this symbolisation key should be read as saying:

› ‘𝐿’ and ‘___1 loves ___2’ are to be true of exactly the same things in the domain

So, in particular:

› ‘𝐿’ is to be true of a and b (in that order) iff a loves b.

It is important that we insist upon the order here, since love – famously – is not always
reciprocated. (Note that ‘a’ and ‘b’ here are symbols of English, and that they are being
used to talk about particular things in the domain.)
That is an indirect stipulation. What about a direct stipulation? This is slightly harder.
If we simply list objects that fall under ‘𝐿’, we will not know whether they are the lover
or the beloved (or both). We have to find a way to include the order in our explicit
stipulation.
To do this, we can specify that two‐place predicates are true of ORDERED PAIRS of ob‐
jects, which differ from two‐membered collections in that the order of a pair is import‐
ant. Thus we might stipulate that ‘𝐵’ is to be true of, and only of, the following pairs
of objects:

⟨Lenin, Marx⟩
⟨Heidegger, Sartre⟩
⟨Sartre, Heidegger⟩

Here the angle brackets keep us informed concerning order – ⟨a, b⟩ is a different pair
from ⟨b, a⟩ even though they correspond to the same collection of two things, a and b.
Suppose I now add the following stipulations:

𝑙: Lenin
𝑚: Marx
ℎ: Heidegger
𝑠: Sartre

Then ‘𝐵𝑙𝑚’ will be true, since ⟨Lenin, Marx⟩ was in my explicit list. But ‘𝐵𝑚𝑙 ’ will be
false, since ⟨Marx, Lenin⟩ was not in my list. However, both ‘𝐵ℎ𝑠’ and ‘𝐵𝑠ℎ’ will be true,
since both ⟨Heidegger, Sartre⟩ and ⟨Sartre, Heidegger⟩ are in my explicit list.
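We can extend the earlier modelling sketch to two‐place predicates, using Python tuples for ordered pairs, since tuples – unlike sets – care about order:

```python
# The extension of a two-place predicate is a set of ordered pairs;
# ('Lenin', 'Marx') != ('Marx', 'Lenin'), just as <Lenin, Marx> differs
# from <Marx, Lenin>.
B = {("Lenin", "Marx"), ("Heidegger", "Sartre"), ("Sartre", "Heidegger")}
names = {"l": "Lenin", "m": "Marx", "h": "Heidegger", "s": "Sartre"}

def atomic_truth(pred_ext, name1, name2):
    # 'Blm' is true iff the pair of referents, in that order, is in the
    # extension assigned to 'B'.
    return (names[name1], names[name2]) in pred_ext

print(atomic_truth(B, "l", "m"))  # True:  'Blm'
print(atomic_truth(B, "m", "l"))  # False: 'Bml'
```

Both ‘𝐵ℎ𝑠’ and ‘𝐵𝑠ℎ’ come out true on this sketch, because both orderings of the Heidegger–Sartre pair were stipulated.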
It is perhaps worth being explicit that our sequences are ordered groups of things,
not lists of words or names. (Unless our domain is the set of words or names!) The
extension of a many‐place predicate is no less a worldly collection than the extension
of a name or a one‐place predicate. The sequences that are members of that extension
have things as their constituents, not names.
To make these ideas more precise, we would need to develop some set theory, the
mathematical theory of collections. This would give you some tools for modelling
extensions and ordered pairs (and ordered triples, etc.), as well as some ideas about the
metaphysical status of collections and sets. Indeed, set theoretic tools lie at the heart
of contemporary linguistic approaches to meaning in natural languages. However, we
have neither the time nor the need to cover set theory in this book. I shall leave these
notions at an imprecise level; the general idea will be clear enough, I hope.

21.4 Zero‐place Predicates

› Consider these examples:

1. Coober Pedy is a harsh place. It has underground houses.
2. Coober Pedy is a harsh place. It seldom rains there.

› In 1, ‘it’ refers to Coober Pedy. Not so in 2: ‘It’ in that sentence is a DUMMY
PRONOUN (though ‘there’ does refer)!
› We took this example to suggest that the bare report ‘it is raining’ involves
no genuine singular term, and is modelled by a ZERO‐PLACE PREDICATE like ‘𝑅’.
› These don’t really apply to anything; they are either true or false of the whole
speech situation, rather than true or false of something in the domain.
› We treat them as having truth values as their extension – just like the
valuations of Sentential. (In fact we can consider the valuations of Sentential
as assignments of truth values as the extensions of atomic sentences of that lan‐
guage.)

21.5 Semantics for Identity


Identity is a special predicate of Quantifier. We write it a bit differently than other
two‐place predicates: ‘𝑥 = 𝑦’ instead of ‘𝐼𝑥𝑦’ (for example). More important, though,
its meaning is fixed as to be genuine identity, once and for all. Given a domain, the
extension of ‘=’ comprises just those pairs consisting of any member of the domain
and itself. If the domain is numbers, for example, the extension of ‘=’ on this domain
will be
⟨1, 1⟩, ⟨2, 2⟩, ⟨3, 3⟩, ….
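Because the extension of ‘=’ is fixed by the domain alone, it can be generated mechanically; here is a Python sketch on a small finite domain:

```python
# The extension of '=' on a domain pairs each member with itself, and
# contains nothing else.
domain = {1, 2, 3}
identity = {(x, x) for x in domain}

print(identity == {(1, 1), (2, 2), (3, 3)})  # True
print((1, 2) in identity)                    # False: distinct objects
```

Unlike the extensions of non‐logical predicates, this one is not up for stipulation: choose a domain and the extension of ‘=’ comes with it.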

If two names 𝒶 and 𝒷 are assigned to the same object in a given symbolisation key, and
thus have the same extension, 𝒶 = 𝒷 will also be true on that symbolisation key. And
since the two names have the same extension, and Quantifier is extensional, substitut‐
ing one name for another will not change the truth value of any Quantifier sentence.
So, in particular, if ‘𝑎’ and ‘𝑏’ name the same object, then all of the following will be
true:

𝐴𝑎 ↔ 𝐴𝑏
𝐵𝑎 ↔ 𝐵𝑏
𝑅𝑎𝑎 ↔ 𝑅𝑏𝑏
𝑅𝑎𝑎 ↔ 𝑅𝑎𝑏
𝑅𝑐𝑎 ↔ 𝑅𝑐𝑏
∀𝑥𝑅𝑥𝑎 ↔ ∀𝑥𝑅𝑥𝑏

This fact is sometimes called the INDISCERNIBILITY OF IDENTICALS, or Leibniz’ Law


(which we already encountered in §18.5): if two names denote the same thing, then no
claim will be true when formulated using one of those names but false when formu‐
lated using the other in its place. That is because, at the level of extensions, there really
is just one thing that those claims are about; how could that one thing be discernible
from itself?
The converse claim – the IDENTITY OF INDISCERNIBLES – is much more controversial.
It is not defensible in Quantifier. Suppose two objects differ in some feature 𝔉, but
our symbolisation key includes no predicate which denotes 𝔉. Then we might not
have any sentence true of the one object and false of the other, even though they are
distinct objects which have different properties. We can’t tell them apart, not because
they are really just one, but because we don’t have the right words to talk about how
they are different. Quantifier thus allows that exactly the same predicates might be true
of two distinct objects, and thus they are distinct but indiscernible when restricted to
the resources of our symbolisation.
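We can sketch this point in Python: given an interpretation with only a few one‐place predicates (the extensions below are invented), two distinct objects may be indiscernible relative to those predicates:

```python
# An invented interpretation with just two one-place predicates.
domain = {"a", "b", "c"}
extensions = {"F": {"a", "b"}, "G": {"a", "b", "c"}}

def discernible(x, y):
    # True iff some available predicate is true of one object but not
    # the other.
    return any((x in ext) != (y in ext) for ext in extensions.values())

print(discernible("a", "c"))  # True:  'F' separates them
print(discernible("a", "b"))  # False: distinct but indiscernible here
```

Adding a further predicate whose extension contains a but not b would make the pair discernible again; indiscernibility is always relative to the predicates on offer.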
Once we move beyond Quantifier and other extensional languages, even the indiscern‐
ibility of identicals is controversial. For example, consider cases of secret identities, like
in Spider‐Man. Suppose we consider the English predicate ‘MJ believes ___1 shoots
webs’. In the story, MJ believes that Spider‐Man shoots webs, but she does not believe
that Peter Parker shoots webs. Even though Spider‐Man is Peter Parker, substituting
one name for the other in this predicate turns a true sentence into a false one. An

open sentence which doesn’t always yield the same truth value when we substitute names
with the same referent is called OPAQUE. The word which is responsible for the opacity
here is the attitude verb ‘believes’. Intuitively, what matters for a belief ascription is not
which things are identical, but which things the subject represents as identical. Other
attitude verbs also lead to opacity: ‘knows’, ‘wants’, ‘remembers’. A language which
contains opaque constructions cannot be completely represented in an extensional
language like Quantifier.

21.6 Interpretations
I defined a valuation in Sentential as any assignment of truth and falsity to atomic sen‐
tences. In Quantifier, I am going to define an INTERPRETATION as consisting of three
things:

› the specification of a domain;

› for each name that we care to consider, an assignment of exactly one object
within the domain as its extension;

› for each nonlogical 𝑛‐place predicate 𝒜 (i.e., any predicate other than ‘=’) that
we care to consider, a specification of its extension: what things (or pairs of
things, or triples of things, etc.) the predicate is to be true of – where all those
things must be in the domain.

A zero‐place predicate (what we called ‘atomic sentences’ in Sentential) has a truth


value, either T or F, as its extension – thus the valuations of Sentential are interpretations
of a very restricted part of Quantifier – those interpretations which only care to consider
zero‐place predicates.
The symbolisation keys that I considered in chapter 4 consequently give us one very
convenient way to present an interpretation. We shall continue to use them through‐
out this chapter. It is important to note, however, that the symbolisation key tends to
associate each predicate with an already understood predicate (in English), and then
uses that predicate as a classification rule to determine the extension of the predicate.
We can, and sometimes do, offer an interpretation by simply giving the extensions
of the interpreted Quantifier expressions directly. We can simply list the items, pairs,
triples, etc., in the extension assigned to a predicate. So the following two symbolisa‐
tion keys give the same interpretation:

domain: Days of the week


𝐵: ___1 is the day before ___2

domain: Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday


𝐵: ⟨Sunday, Monday⟩, ⟨Monday, Tuesday⟩, ⟨Tuesday, Wednesday⟩,

⟨Wednesday, Thursday⟩, ⟨Thursday, Friday⟩, ⟨Friday, Saturday⟩,
⟨Saturday, Sunday⟩.
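The claim that these two keys present one and the same interpretation can be checked mechanically; here is a Python sketch:

```python
days = ["Sunday", "Monday", "Tuesday", "Wednesday",
        "Thursday", "Friday", "Saturday"]

# The extension fixed by the classification rule '___1 is the day
# before ___2' ...
by_rule = {(days[i], days[(i + 1) % 7]) for i in range(7)}

# ... and the extension given by the explicit list:
by_list = {("Sunday", "Monday"), ("Monday", "Tuesday"),
           ("Tuesday", "Wednesday"), ("Wednesday", "Thursday"),
           ("Thursday", "Friday"), ("Friday", "Saturday"),
           ("Saturday", "Sunday")}

print(by_rule == by_list)  # True: one interpretation, two presentations
```

Since the two sets of pairs are identical, the interpretations are identical; nothing over and above the extensions distinguishes them.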

So far, the domains we have used are drawn from the actual world, and English pre‐
dicates have been used to assign their actual extensions to Quantifier predicates. But a
perfectly good interpretation might assign merely possible entities to the domain, and
merely possible extensions to the predicates of Quantifier, or a mixture of the two.
Let’s therefore make the assumption that interpretations are not restricted to actuality:

Any collection of objects, actual or merely possible, can serve as the


domain of an interpretation. Any collection of objects drawn from
that domain can serve as the extension of a one‐place predicate, any
collection of pairs of such objects can serve as the extension of a two‐
place predicate, etc.

A challenging question, which we will not address, is what enables us to talk about and
apparently make use of such merely possible things in our interpretations.

21.7 Predicates, Properties, and Relations


A feature that might be shared by a number of different things is called a PROPERTY.
These can be physical features like the property being red, or the property having a mass
of 2kg; but we can also consider other properties that have a more social or mental basis,
such as being a logician, being rich, or being annoying. In many cases, a property will
determine an extension, because we can think of the property as dividing the domain
in two: into those things in the domain having a certain feature, and those lacking
it. As the extension of a one‐place predicate is some things, the things with a certain
feature can correspond to an extension of a predicate. In that case we might say that
the predicate represents the property. So on the domain of people, those people who
have the property being a logician is an extension, and we can offer an interpretation
on which this extension is assigned to some one‐place predicate.
We have to be careful about this though, for several reasons. The first is that a prop‐
erty is a way for something to be, independent of the domain in which it might be
included. An extension by contrast is simply some things drawn from a given domain.
So a property might determine an extension in a domain, but we should not take it to
be an extension. Likewise, we should not understand the symbolisation key as assign‐
ing a property as the meaning of a predicate. Recalling §21.6, we might understand a
symbolisation key as associating a Quantifier predicate with a property as expressed by
an English predicate. But this is just a means to assign an extension to the predicate,
using the property as a classification of the domain.
Another reason to hesitate in taking extensions to be properties is that, depending
on the domain, many distinct properties will have the same extension. Consider the
ratites, the bird family including ostriches, emus, cassowaries, and kiwis, among others.
All living ratites are unable to fly. So on the domain of ratites, the property being a
bird determines the same extension as the property being flightless. Suppose some
predicate ‘𝐹 ’ is assigned this extension. If an extension is a property, then we’d have to

say that extension is both being a bird and being flightless – but since those two are distinct,
the extension cannot be both, and in fact it is clear that it cannot be either.2
A final reason for caution is that many legitimate extensions don’t seem to correspond
to genuine properties. It is hard to characterise the distinction between genuine prop‐
erties and the others, but one idea is that genuine properties contribute to resemblance.
If two things are both red, then they resemble one another in appearance. But any
things from a domain can form an extension, and we can imagine just selecting things,
which can then be assigned to a predicate. In that case there is unlikely to be anything
those things have in common just because they belong to an extension. Suppose I toss
a coin ten times, and determine an extension by selecting 𝑛 from the domain of natural
numbers if the 𝑛‐th toss was heads. The extension I get is: 1, 3, 4, 5, 7. But clearly noth‐
ing in this recipe means this extension corresponds to a genuine property. Nothing
in Quantifier requires that predicates must have genuine (i.e., resemblance‐grounding)
properties determining their extensions.
Properties determine extensions for one‐place predicates. The corresponding entity
determining the extension of a many‐place predicate is a RELATION. So the relation of
loving on a domain determines an extension, a set of pairs from the domain standing
in that relation. The same caveats apply as in the case of properties: we shouldn’t
take Quantifier predicates to have relations as their meaning, we shouldn’t identify a
relation with its extension, and we shouldn’t take any extension to correspond to a
genuine relation.

21.8 Representing one‐place predicates: Euler diagrams


One way of presenting an interpretation that is often convenient is to present its ex‐
tensions diagrammatically.
The definition of an interpretation in §21.6 entailed that a one‐place predicate should
have as its extension a collection of things drawn from the domain. If we help ourselves
to the resources of set theory, the mathematical theory that treats collections and
groups, then the extension of a predicate is a SUBSET of the domain. The subsets of
a given set 𝑋 are those sets such that everything in them is also in 𝑋. (So the set of
children is a subset of the set of people – every member of the former is a member
of the latter.) We can represent subsets graphically using an EULER DIAGRAM, without
having to specify each of the items which is in the domain.
We begin by demarcating a region on the page to represent the domain – typically, a
rectangle. Then for each one‐place predicate, we can represent it by a sub‐region of
the domain, subject to the following rules:

› The interior of a region associated with predicate 𝒫 represents the members of
the domain that are in the extension of 𝒫 ; the exterior – the region within the
domain but outside the given region – represents those members of the domain
that are not in the extension of 𝒫 ;

2 These properties are coextensive on this domain, but not in general. But some properties seem to be
necessarily coextensive: take the properties being a plane figure with three sides and being a plane figure
with three vertices. At first glance these are different properties – one involves counting lines, the other
counting angles.

Figure 21.1: An interpretation represented diagrammatically, with regions for ‘Egg‐
layers’, ‘Mammals’ and ‘Birds’ inside the domain of animals.

› If two predicates have extensions that overlap, the regions that represent them
in the diagram should overlap.

› If one predicate has an extension that is entirely within the extension of another
predicate, the region associated with the first should be entirely within the re‐
gion associated with the second;

› If two predicates have nonoverlapping extensions, the regions that represent


them should not overlap.

We then label and shade each region to enable us to tell which predicates they are
associated with. We may also add labelled dots to denote individuals who may fall
into various predicate extensions.
These rules are quite abstract, but they are easy to apply in practice. Suppose we are
given the following interpretation:

domain: animal species
𝐸: ___1 lays eggs;
𝑀: ___1 is a mammal;
𝐵: ___1 is a bird;

We may represent this interpretation as in Figure 21.1. In this diagram, the egg‐layers
are a subset of the animals – not all animals reproduce by laying eggs, so they do not
coincide with the whole domain. The birds are a subset of animals too, and in fact
wholly within the egg‐layers: all birds reproduce oviparously. The mammals are a
subset of the animals, and one that overlaps with the egg‐layers (the monotremes!),
though neither is contained in the other, so the regions overlap without either being
wholly within the other. The birds and mammals do not have any common members,
so the associated regions do not overlap at all. Note that the sizes of the regions may,

but need not, represent the number of things within them. In this diagram they prob‐
ably don’t.
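Treating extensions as Python sets, the containment and overlap facts that the Euler diagram depicts become simple set comparisons (the species below are just a sample standing in for the full domain):

```python
# A sample of the domain, classified by the three predicates' rules.
egg_layers = {"emu", "echidna", "platypus", "snake"}
mammals = {"echidna", "platypus", "dog", "kangaroo"}
birds = {"emu"}

print(birds <= egg_layers)         # True: birds sit wholly inside egg-layers
print(mammals & egg_layers)        # the overlap region: the monotremes
print((birds & mammals) == set())  # True: those regions don't overlap
```

Each diagrammatic rule from above corresponds to a set‐theoretic test: ‘wholly within’ is the subset relation `<=`, ‘overlap’ is a nonempty intersection `&`, and ‘nonoverlapping’ is an empty intersection.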
Another example. Consider this interpretation, discussed earlier on page 126:

domain: people and plants
𝐶: ___1 is a cowboy
𝑆: ___1 sings a sad, sad song
𝑅: ___1 is a rose
𝑇: ___1 has a thorn

The wisdom of Bret Michaels then tells us that the Euler diagram should look like the
representation in Figure 21.2.
We can start to see how interpretations can help us evaluate arguments quite vividly
in this graphical environment. Recall this example from §15: ‘Willard (‘𝑤’) is a logician
(‘𝐿’); Every logician wears a funny hat (‘𝐹 ’) ∴ Willard wears a funny hat’. This can
be represented using an Euler diagram making use of a labelled dot to represent the
individual Willard, as in Figure 21.3. We can see from the diagram that this argument
is valid: Willard falls in the region corresponding to ‘𝐹 ’ because he falls in the region
corresponding to ‘𝐿’, which is wholly within the ‘𝐹 ’‐region.
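We can check this particular interpretation in Python (the extensions are invented, and the check only confirms that there is no counterexample on this one interpretation; it is not by itself a proof of validity):

```python
# One interpretation for: Willard is a logician; every logician wears a
# funny hat; so Willard wears a funny hat. ('Ruth' and 'Saul' are filler.)
domain = {"Willard", "Ruth", "Saul"}
L = {"Willard", "Ruth"}            # logicians
F = {"Willard", "Ruth", "Saul"}    # funny-hat wearers
w = "Willard"

premises_true = (w in L) and (L <= F)   # 'Lw' and 'every L is an F'
conclusion_true = w in F                # 'Fw'

# A counterexample would make the premises true and the conclusion false:
print(premises_true and not conclusion_true)  # False: no counterexample here
```

The diagrammatic reasoning is exactly the subset reasoning in the code: if w is in L and L is contained in F, then w must be in F.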
If you would like a real challenge, you might try to figure out the interpretation corres‐
ponding to the Euler diagram in Figure 21.4.

21.9 Representing two‐place predicates: directed graphs


Suppose we want to consider just a single two‐place predicate, ‘𝑅’. Then we can repres‐
ent it, in some cases, by depicting the individual members of the domain by little dots
(possibly with labels), and drawing a single‐headed arrow from the first to the second
of an ordered pair of items just in case that ordered pair falls within the extension of

Figure 21.2: A representation of ‘Every Rose Has Its Thorn’, with regions for cowboys,
sad song singers, thorn‐havers, and roses.



Figure 21.3: Representing the argument from §15: a dot labelled ‘𝑤’ inside the ‘𝐿’ region,
which lies wholly within the ‘𝐹 ’ region, in the domain of people.

the predicate. Such a diagram is known as a DIRECTED GRAPH. A directed graph com‐
prises a collection of NODES, and a collection of arrows (ordered links) between those
nodes; in our case, the nodes are the elements of the domain, and the arrows corres‐
pond to the extension of a two‐place predicate on that domain, given the convention
that when there is an arrow running from 𝓍 to 𝓎 in a graph, that means ⟨𝓍, 𝓎⟩ is in the
extension of the predicate.
Let’s consider some examples.

› First, consider the following interpretation, written in the more standard man‐
ner:

Figure 21.4: ‘Chat Systems’, https://xkcd.com/1810/



Figure 21.5: A simple graph on nodes 1, 2, 3, 4.

domain: 1, 2, 3, 4
𝑅: ⟨1, 2⟩, ⟨2, 3⟩, ⟨3, 4⟩, ⟨4, 1⟩, ⟨1, 3⟩.

That is, an interpretation whose domain is the first four positive whole num‐
bers, and which interprets ‘𝑅’ as being true of and only of the specified pairs of
numbers. This might be represented by the simple graph depicted in Figure 21.5.

› Consider the following interpretation:

domain: 1, 2, 3, 4
𝑅: ⟨1, 3⟩, ⟨3, 1⟩, ⟨3, 4⟩, ⟨1, 1⟩, ⟨3, 3⟩, ⟨4, 4⟩.

We might offer the graph in Figure 21.6 to represent it. The existence of pairs
like ⟨1, 1⟩ in the extension of 𝑅 is borne out in the presence of ‘loops’ from nodes
to themselves in the graph. (Because of these loops, our graph is not what graph
theorists call a ‘simple’ graph.) Notice that 2 is in the domain, and hence in‐
cluded as a node in the graph, but it has no arrows attached to it, because it is
not included in the extension assigned to 𝑅.

› A third example is depicted in Figure 21.7, corresponding to this interpretation:

Domain: Amia, Barbara, Corine, Davis


𝐹 : ⟨Amia, Amia⟩, ⟨Barbara, Barbara⟩, ⟨Davis, Davis⟩, ⟨Amia, Barbara⟩,
⟨Amia, Corine⟩, ⟨Barbara, Amia⟩, ⟨Barbara, Corine⟩, ⟨Corine, Barbara⟩

Figure 21.6: A graph with ‘loops’, on nodes 1, 2, 3, 4.



Figure 21.7: A more complicated graph, on nodes Amia, Barbara, Corine, Davis.
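A directed graph of this kind is easy to reconstruct from an extension in Python; the sketch below recovers, for each node, the arrows leaving it, and spots isolated nodes like 2 in Figure 21.6:

```python
domain = {1, 2, 3, 4}
R = {(1, 3), (3, 1), (3, 4), (1, 1), (3, 3), (4, 4)}  # Figure 21.6's extension

# For each node, the targets of its outgoing arrows.
out_arrows = {x: {y for (a, y) in R if a == x} for x in domain}

# Nodes with no arrows at all, in either direction.
isolated = {x for x in domain
            if not out_arrows[x] and all(x != y for (_, y) in R)}

print(isolated)  # {2}: in the domain, but outside the extension entirely
```

The graph and the extension carry the same information: an arrow from 𝓍 to 𝓎 just is the pair ⟨𝓍, 𝓎⟩’s membership in the extension.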

If we wanted, we can extend our graphical conventions, making our diagrams more
complex in order to depict more complex interpretations. For example, we could in‐
troduce another kind of arrow (maybe with a dashed shaft) to represent a further two‐
place predicate.3 We could add names as labels attached to particular objects. To
symbolise the extension of a one‐place predicate, we might simply adopt the approach
from §21.8, and mark a shaded region around some particular objects and stipulate
that the thus encircled objects (and only them) are to fall in the extension of some
predicate ‘𝐻’, say.4
All of these graphical innovations are used in Figure 21.8, which graphically represents
the following interpretation:

domain: 2, 3, 4, 5, 6
𝐺: ___1 ⩾ 4;
𝐷: ___1 is distinct from and exactly divisible by ___2;
𝑇: ___1 + 3 = ___2;
𝑓: 4

Here the label ‘𝑓’ represents the name attached to 4, the blue dotted arrow represents
the relation assigned to 𝑇, the black solid arrow represents the relation assigned to 𝐷,
and the grey ellipse represents the collection of things in the domain which fall in the
extension assigned to 𝐺 .

21.10 Properties of Binary Relations


The extension of a two‐place predicate is a collection of ordered pairs drawn from a
domain 𝐷. Such a collection is also known as a BINARY RELATION on 𝐷. Binary relations
have many interesting features. We have seen some of them already in §18.5, when we
noted that reflexivity, symmetry and transitivity are all features of identity. We will now
treat these notions more generally. Binary relations provide some good examples of

3 This kind of ‘directed multigraph’ is sometimes called a QUIVER.


4 We needn’t stop there. For example, we could introduce DIRECTED HYPERGRAPHS, where the edges can
join more than two nodes.

how directed graphs can help grasp an interpretation, because the properties of binary
relations often correspond to easily grasped conditions on the associated graphs.

› This section should be regarded as optional, and only for the dedicated student of
logic.

› A binary relation ℜ on 𝐷 is REFLEXIVE iff for any 𝑥 in 𝐷, ⟨𝑥, 𝑥⟩ is in ℜ.
› ℜ is IRREFLEXIVE on 𝐷 iff for no 𝑥 in 𝐷 is ⟨𝑥, 𝑥⟩ in ℜ.
› If ℜ is neither reflexive nor irreflexive, it is NONREFLEXIVE.

An example of a reflexive relation is the extension of ‘___1 is the same height as ___2’
on the domain of people. Everyone is the same height as themselves. An example of
an irreflexive relation is the extension of ‘___1 is taller than ___2’ on the domain of
people, since no one is taller than they are. An example of a nonreflexive relation might
be the extension of ‘___1 trusts the judgment of ___2’ on the domain of people: some
people trust their own judgment, while others second‐guess themselves. Pictorially, a
reflexive relation corresponds to a graph in which every node in the graph has an arrow
pointing to itself, such as in Figure 21.9. Note the resemblance between this graph and
the graph in Figure 21.6. They are almost the same, except the earlier Figure shows a
relation in which the domain contains the number 2, which is not paired with itself
in the relation. A relation being reflexive requires everything in the domain relate to
itself: in Figure 21.6, that condition is not satisfied. The graph of an irreflexive relation
has no loops from a node directly to itself, such as the relation depicted in Figure 21.5.
The graph of a nonreflexive relation will have a mixture of nodes with and without
loops, such as in Figure 21.6.
Reflexivity is a property of an extension, not a predicate (except the identity predicate).
But there is a sentence of Quantifier which expresses reflexivity. Suppose the two‐place
predicate ‘𝑅’ is assigned a relation ℜ, a set of pairs, as its extension on 𝐷: then ‘∀𝑥𝑅𝑥𝑥 ’
(‘everything bears 𝑅 to itself’) will be true under this interpretation iff ℜ is reflexive.
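The correspondence between reflexivity and ‘∀𝑥𝑅𝑥𝑥 ’ can be mimicked in Python by quantifying over a finite domain:

```python
# '∀xRxx' is true on an interpretation iff the extension of 'R' is
# reflexive on the domain.
def is_reflexive(rel, domain):
    return all((x, x) in rel for x in domain)

R = {(1, 3), (3, 1), (3, 4), (1, 1), (3, 3), (4, 4)}  # Figure 21.6's extension

print(is_reflexive(R, {1, 2, 3, 4}))  # False: the pair (2, 2) is missing
print(is_reflexive(R, {1, 3, 4}))     # True: as in Figure 21.9
```

The same set of pairs is reflexive on the smaller domain but not on the larger one, which illustrates why reflexivity is defined relative to a domain.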

Figure 21.8: Multiple techniques used to depict a complex interpretation.



Figure 21.9: A graph of a reflexive relation on {1, 3, 4}.

› A binary relation ℜ is TRANSITIVE iff for any 𝑥, 𝑦, and 𝑧, whenever
⟨𝑥, 𝑦⟩ and ⟨𝑦, 𝑧⟩ are in ℜ, then ⟨𝑥, 𝑧⟩ is also in ℜ.
› ℜ is INTRANSITIVE iff for any 𝑥, 𝑦, and 𝑧, whenever ⟨𝑥, 𝑦⟩ and ⟨𝑦, 𝑧⟩
are in ℜ, then ⟨𝑥, 𝑧⟩ is not in ℜ.
› ℜ is NONTRANSITIVE iff it is neither transitive nor intransitive.
If the predicate ‘𝑅’ is assigned ℜ as its extension in some interpretation,
then ℜ is transitive iff ‘∀𝑥∀𝑦∀𝑧((𝑅𝑥𝑦 ∧ 𝑅𝑦𝑧) → 𝑅𝑥𝑧)’ holds in this
interpretation.

An example of a transitive relation is the extension of ‘___1 is older than ___2’ on the
domain of buildings. Whenever one building is older than another, which is older than
a third, the first must be older than the third. An example of an intransitive relation is
the extension of ‘___1 is the successor of ___2’ on the domain of numbers, where 𝑥
is the successor of 𝑦 iff 𝑥 = 𝑦 + 1. While 7 is the successor of 6, and 6 is the successor
of 5, it is not the case that 7 is the successor of 5. An example of a nontransitive relation
is the extension of ‘there is a direct flight between ___1 and ___2’ on the domain of
cities in the Qantas network. There is a direct flight from Adelaide to Perth, and a direct flight
from Perth to Broome, but no direct flight from Adelaide to Broome. On the other
hand, there is a direct flight from Perth to Sydney, and there is a direct flight from
Adelaide to Sydney.
Pictorially, a transitive relation is one where there is a ‘shortcut’ between any two nodes
that can be reached from one another by travelling along arrows in the intended direc‐
tion. Consider the graph in Figure 21.10. Intuitively, one can ‘get from’ the oldest (the
Mitchell Building) to the youngest (the Napier Building) via the intermediate aged
Elder Hall and Bonython Hall. But you can also go directly, following the topmost
curve. The graph of an intransitive relation has no shortcuts of this sort. The graph
of a nontransitive relation has a mix of shortcuts, such as the graph of our Qantas
example depicted in Figure 21.11.
One interesting question: is the relation depicted in Figure 21.6 intransitive? There
are no obvious shortcuts; while you can get from 1 to 3, and from 3 to 4, you cannot go
directly from 1 to 4. But those loops actually make for some degenerate shortcuts. E.g.,
194 INTERPRETATIONS

Mitchell (1882) Elder (1900) Bonython (1936) Napier (1958)

Figure 21.10: A graph of the transitive relation ‘older than’ on some University of Ad‐
elaide buildings.

there is an edge from 1 (𝑥) to 3 (𝑦), and from 3 (𝑦) to 3 (𝑧), and there is the ‘shortcut’
from 1 (𝑥) to 3 (𝑧). This may not be how you were thinking of transitivity, but look back
at the definition, which is phrased in terms of picking any pairs from the domain. This
even includes those pairs consisting of a node and itself.
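The transitivity checks, including the degenerate shortcuts just discussed, can also be mechanised over the set-of-pairs model. A sketch in Python; the sample relation is my reconstruction of Figure 21.6 from the description above (loops on 1, 3 and 4 but not 2, plus the edges from 1 to 3 and from 3 to 4):

```python
from itertools import product

def is_transitive(relation, domain):
    """Whenever <x,y> and <y,z> are in the relation, <x,z> must be too."""
    return all((x, z) in relation
               for x, y, z in product(domain, repeat=3)
               if (x, y) in relation and (y, z) in relation)

def is_intransitive(relation, domain):
    """Whenever <x,y> and <y,z> are in the relation, <x,z> must not be."""
    return all((x, z) not in relation
               for x, y, z in product(domain, repeat=3)
               if (x, y) in relation and (y, z) in relation)

r = {(1, 1), (3, 3), (4, 4), (1, 3), (3, 4)}
d = {1, 2, 3, 4}

assert not is_transitive(r, d)    # no shortcut from 1 to 4
assert not is_intransitive(r, d)  # the loop on 3 supplies a degenerate shortcut
```

Since both checks fail, the relation is nontransitive, which matches the verdict in the text.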

› A binary relation ℜ is SYMMETRIC iff for any ⟨x, y⟩ in ℜ, ⟨y, x⟩ is


also in ℜ.
› ℜ is ASYMMETRIC iff for any ⟨x, y⟩ in ℜ, ⟨y, x⟩ is not in ℜ.
› ℜ is ANTISYMMETRIC iff for any ⟨x, y⟩ in ℜ, ⟨y, x⟩ is in ℜ only if
𝑥 = 𝑦.
› ℜ is NONSYMMETRIC iff it is neither symmetric, asymmetric, nor
antisymmetric.
If the predicate ‘𝑅’ is assigned ℜ as its extension in some interpreta‐
tion, then ℜ is symmetric iff ‘∀𝑥∀𝑦(𝑅𝑥𝑦 → 𝑅𝑦𝑥)’ holds in this inter‐
pretation.

An example of a symmetric relation is the extension of ‘___₁ lives next door to ___₂’
on the domain of people. If Alf lives next door to Beth, then Beth also lives next door
to Alf. An asymmetric relation is ‘___₁ is greater than ___₂’ on the natural numbers:
if 𝑥 > 𝑦, then it cannot also be that 𝑦 > 𝑥. If we consider not ‘greater than’, but
the weaker relation ‘greater than or equal to’, we see an example of an antisymmetric
relation: if 𝑥 ≥ 𝑦, then it can be the case that 𝑦 ≥ 𝑥 only in the special case where 𝑥 = 𝑦.
An example of a nonsymmetric relation might be the relation ‘___₁ loves ___₂’ on
the domain of people: sometimes love is requited, so that both members of a given
pair love each other; and sometimes it is unrequited.

Figure 21.11: An extract of Qantas’ route network.
Pictorially, whenever we have an arrow from one node to another, if there is also a ‘re‐
verse’ arrow back from the second to the first, then the relation depicted is symmetric.
See the depiction of the ‘next to’ relation in Figure 21.12. A relation is asymmetric if
there are never such reverse arrows. A relation is antisymmetric if the only time there
are arrows from 𝑥 to 𝑦 and back is when 𝑥 = 𝑦 and there is a loop. I depict both rela‐
tions > and ≥ in Figure 21.13; the difference is that the antisymmetric relation ≥
has an arrow from each node to itself.
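These three properties are likewise easy to test mechanically. A Python sketch, using > and ≥ on the domain {0, 1, 2} as in Figure 21.13 (the encoding is mine); note that, unlike the reflexivity test, none of these tests needs to consult the domain, only the pairs:

```python
def is_symmetric(r):
    """Whenever <x,y> is in r, <y,x> is too."""
    return all((y, x) in r for (x, y) in r)

def is_asymmetric(r):
    """Whenever <x,y> is in r, <y,x> is not."""
    return all((y, x) not in r for (x, y) in r)

def is_antisymmetric(r):
    """<x,y> and <y,x> are both in r only when x = y."""
    return all(x == y for (x, y) in r if (y, x) in r)

gt  = {(x, y) for x in range(3) for y in range(3) if x > y}   # the relation >
geq = {(x, y) for x in range(3) for y in range(3) if x >= y}  # the relation >=

assert is_asymmetric(gt) and not is_symmetric(gt)
assert is_antisymmetric(geq) and not is_asymmetric(geq)
```

That these tests never mention the domain anticipates the point made below: apart from reflexivity, these properties depend only on which pairs are in the relation.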
The observant reader will have noticed that, among these definitions of properties of
binary relations, the only definition which actually mentions the domain is the defini‐
tion of reflexivity. All the other definitions are conditional in form: they say, if certain
pairs are in the extension, then certain other pairs will (or won’t) be too. We needn’t
specify a domain to check whether these conditionals hold of any relation. But to check
whether a relation is reflexive, we need not only the ordered pairs of the relation, but
also what the domain is, so we can see if any members of the domain are missing from
the relation.5
These properties of binary relations are not completely independent of each other. For
example, a reflexive relation cannot be asymmetric: a relation ℜ is asymmetric only if
it is never the case that both ⟨x,y⟩ and ⟨y,x⟩ are in ℜ; but if the relation is reflexive, then
every pair consisting of something and itself is in ℜ, and nothing prohibits us from
picking the same value for 𝑥 and 𝑦 – any pair ⟨x,x⟩ in ℜ is its own reverse, and so
is a counterexample to asymmetry. A reflexive relation can at best be antisymmetric.
A relation which is reflexive, symmetric and transitive is known as an EQUIVALENCE RE‐
LATION. We’ve already established that identity is an equivalence relation in §18.5. But

Alf Beth Carl

Figure 21.12: ‘next to’: a symmetric but intransitive relation.

5 Reflexivity is an extrinsic property of a relation – you can’t tell just from the extension whether a relation
is reflexive, because it is relative to the domain from which the relata of the relation may be drawn. The
others are intrinsic properties of a relation; whether the relation has them is determined just by which
pairs are in the relation. The very same set of pairs might be reflexive on one domain and nonreflexive
on another, but it will be symmetric on every domain if it is symmetric on any.

0 1 2

Figure 21.13: > (black arrows) and ≥ (orange dotted arrows) on the domain {0, 1, 2}.

there are other equivalence relations too: consider ‘___₁ is the same height as ___₂’.
This relation will structure the domain into clusters of people with the same height as
each other. Such a division into groups is known as a PARTITION, and the individual
groups are known as cells. When you partition a domain, you sort the domain into
cells which are uniform with respect to a given feature – in this case, height. There will
be no connections between cells of the partition, but within each cell, each person will
be related to every other person in the cell. These cells are also known as EQUIVALENCE
CLASSES. Identity is the extreme case of an equivalence relation, because it partitions
the domain into cells containing entities that are equivalent in every respect, i.e., are
identical, and hence each cell contains just one individual member.
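Computing the partition induced by an equivalence relation can be sketched as follows (the names and heights in the toy domain are invented for illustration):

```python
def equivalence_classes(relation, domain):
    """Partition the domain into cells of mutually related elements.
    Assumes the relation really is an equivalence relation on the domain."""
    cells = []
    for x in domain:
        if not any(x in cell for cell in cells):
            # The cell of x: everything the relation pairs with x.
            cells.append({y for y in domain if (x, y) in relation})
    return cells

# 'is the same height as' on a toy domain of three people (invented heights).
height = {'alf': 180, 'beth': 165, 'carl': 180}
same_height = {(x, y) for x in height for y in height if height[x] == height[y]}

cells = equivalence_classes(same_height, list(height))
assert {'alf', 'carl'} in cells and {'beth'} in cells and len(cells) == 2
```

The result is one cell per height: within a cell everyone is related to everyone, and no pair in the relation crosses cells.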
A relation on a domain 𝐷 is TOTAL iff for any 𝑥 and 𝑦 in 𝐷, either ⟨x,y⟩ or ⟨y,x⟩ (or both) is
in the extension.6 Any two things are related, in some way, by a total relation. We can
use the notion of totality to define a kind of relation that is of particular mathematical
importance:

A relation ℜ on a domain 𝐷 is an ORDERING iff ℜ is reflexive, transitive,


and antisymmetric. Special cases:
› ℜ is a STRICT ORDER iff ℜ is irreflexive, transitive, and asymmetric.
› ℜ is a TOTAL ORDER iff ℜ is total and is an order;
› ℜ is a STRICT TOTAL ORDER iff ℜ is total and is a strict order.
An order that is not total is commonly called a PARTIAL ORDER.

An order, as the name suggests, gives a particular kind of structure to a domain. A


strict total order arranges everything in the domain in an order without any ties; an ex‐
ample would be the greater than relation > we considered above. A non‐mathematical
example might be ‘ is taller than ’, which arranges the domain of people in
1 2
a linear way from tallest to shortest. If there are ties permitted, then the relation is not

6 If the predicate ‘𝑅’ is assigned ℜ as its extension in some interpretation, then ℜ is total iff ‘∀𝑥∀𝑦(𝑅𝑥𝑦 ∨
𝑅𝑦𝑥)’ holds in this interpretation.

Figure 21.14: Black arrows indicate both ⊂ and ⊆, dotted arrows indicate ⊆ only, on the
domain ℘{𝑎, 𝑏} = {{𝑎, 𝑏}, {𝑎}, {𝑏}, ∅}.

strict: a total order with ties is ≥ on the natural numbers, or the relation ‘___₁ is no
shorter than ___₂’, which allows that whatever fills the first slot might be either taller
than, or the same height as, whatever fills the second.
If not everything in the domain is comparable, or ordered with respect to each other,
the relation is partial. Consider a railway network, in which all lines radiate from a
central station, like the Adelaide Metro network. The relation ‘___₁ is no further
along the line than ___₂’ is a partial order.7 But there are many pairs of incomparable
stations: Woodville is not on the same line as Oaklands, so neither is further along the
line than the other. (We could create a total order, perhaps by measuring distance or
counting stations along the line, and say that two stations are equally far away iff they
are the same number of stops from Adelaide. But we are focussed here on the partial
order induced by the actual railway network.) A strict partial order would be ‘___₁ is
further along the line than ___₂’, which excludes ties.

A mathematical example of a partial order is the subset relation ⊆. Consider some set
𝑋 containing just 𝑎 and 𝑏 as members. Recall from §21.8 that the subsets of 𝑋 are those
sets such that everything in them is also in 𝑋. So 𝑋 is a subset of 𝑋, as is the set just
containing 𝑎, and the set just containing 𝑏. Finally, the empty set (with no members)
obviously is a subset of any set (trivially – nothing in it is absent from any other set).
That gives the structure of subsets depicted in Figure 21.14. The corresponding strict
partial order ⊂ results from removing any loops from a set to itself in that diagram. This
is obviously a partial order, since {𝑎} is neither a subset of {𝑏} nor vice versa, though each
is a subset of the original set {𝑎, 𝑏}, and the empty set ∅ is a subset of both.

7 Every station is no further along than itself; pick any two distinct stations, such as Woodville and
Kilkenny: if Kilkenny is no further along than Woodville, then Woodville is further along than Kilkenny;
and if we pick three stations, such as Oaklands, Brighton and Seacliff, then since Oaklands is no further
than Brighton, and Brighton is no further than Seacliff, it follows that Oaklands is no further than Seacliff.

Key Ideas in §21


› An interpretation of Quantifier is given by a domain and a tempor‐
ary assignment of extensions to some of the nonlogical vocabu‐
lary of Quantifier, i.e., the names and predicates of the language.
› Any set of actual or merely possible things can serve as a domain.
› The extension of a name is a thing from some specified domain.
› The extension of an 𝑛‐place predicate is a collection of ordered
sequences of length 𝑛 of things from the domain. The identity
predicate has a privileged extension; it is always the collection of
pairs from the domain consisting of things and themselves.
› The extension of a zero‐place predicate is a truth value; these
correspond to atomic sentences of Sentential.
› Given a domain, it is often possible to represent the interpreta‐
tion of a predicate by means of an Euler diagram or a directed
graph on that domain.
› Directed graphs provide a convenient way to represent the struc‐
tural features of binary relations.

Practice exercises
A. For each of the following collections of individuals, properties and relations, con‐
struct an interpretation that includes any appropriate extensions they determine, and
an appropriate domain. Use your personal judgment if needed, and comment on any
difficulties.

1. The relation ‘___₁ is just as rich as ___₂’, Bill Gates, Elon Musk, and Warren
Buffett;

2. The relation ‘___₁ is exactly divisible by ___₂’, and the property ‘___₁ is even’;

3. The property ‘___₁ is a great novel’, the relation ‘___₁ is harder to understand
than ___₂’, Moby Dick, Finnegans Wake, Bleak House, A Wrinkle in Time, The
Left Hand of Darkness.

4. The relation ‘___₁ is part of ___₂’, your left leg, your lower body, you.

B. Since Quantifier is an extensional language, if we let the Quantifier name ‘𝑠’ denote Su‐
perman, and the name ‘𝑐 ’ denote Clark Kent, then ‘𝑠 = 𝑐 ’ will be true. But it will be
true for the same reason that ‘𝑠 = 𝑠’ is true: because ⟨Clark Kent, Clark Kent⟩ is in
the extension of ‘=’. Can you use this observation as the basis for any argument that
English is not an extensional language?

C. Using some of the methods introduced in this section (Euler diagrams and directed
graphs), give a graphical representation of the following interpretations. You will need
to rely on your general knowledge to prepare these diagrams.

1. domain: people
   𝑀: ___₁ is a musician;
   𝑊: ___₁ is a woman;
   𝐸: ___₁ is an economist.

2. domain: cards in a standard deck
   𝑆: ___₁ is a seven;
   𝐹: ___₁ is a face card;
   𝑅: ___₁ is red;
   𝑗: the Jack of Hearts.

3. domain: Australian states
   𝐿: ___₁ is larger in population than ___₂;
   𝑉: ___₁’s name ends in a vowel;
   𝑎: South Australia;
   𝑞: Queensland.

D. These questions concern the material from the optional section on binary relations
(§21.10).

1. Show that any total relation on a domain is reflexive on that domain.


2. Show that a transitive relation is asymmetric if and only if it is irreflexive.
3. What is wrong with the following argument that reflexivity is a consequence of
symmetry and transitivity?
If ⟨x,y⟩ is in ℜ, then ⟨y,x⟩ is in ℜ since we assume ℜ is symmetric. If
both ⟨x,y⟩ is in ℜ and ⟨y,x⟩ is in ℜ, then since ℜ is transitive, ⟨x,x⟩ is in
ℜ – so ℜ is reflexive.
4. A relation ℜ is SERIAL on a domain 𝐷 iff for each 𝑥 in 𝐷, there exists some 𝑦
such that ⟨x,y⟩ is in ℜ. Show that a serial, symmetric and transitive relation is
reflexive.
22
Truth in Quantifier

We now know what interpretations are. Since, among other things, they tell us which
predicates are true of which objects – and pairs, etc., of objects –, they provide us with
an account of the truth of atomic sentences. But we must show how to extend that to an
account of what it is for any Quantifier sentence to be true or false in an interpretation.
We know from §20 that there are three kinds of sentence in Quantifier:

› atomic sentences (i.e., atomic formulae of Quantifier which have no free vari‐
ables);

› sentences whose main connective is a sentential connective; and

› sentences whose main connective is a quantifier.

We need to explain truth for all three kinds of sentence.


I shall offer a completely general explanation in this section. However, to try to keep the
explanation comprehensible, I shall at several points use the following interpretation:

domain: all people born before 2000CE
𝑎: Aristotle
𝑏: George W Bush
𝑊: ___₁ is wise
𝑅: ___₁ was born before ___₂

This will be my go‐to example in what follows.

22.1 Atomic Sentences


The truth of atomic sentences should be fairly straightforward. The sentence ‘𝑊𝑎’
should be true just in case ‘𝑊 ’ is true of whatever is named by ‘𝑎’. Given our go‐to

§22. TRUTH IN Quantifier 201

interpretation, this is true iff ‘is wise’ is true of whatever is named by ‘Aristotle’, i.e., iff
Aristotle is wise. In fact (in the actual world) Aristotle is wise. So the sentence is true.
Equally, ‘𝑊𝑏’ is false on our go‐to interpretation, because George W Bush is not wise.
Likewise, on this interpretation, ‘𝑅𝑎𝑏’ is true iff the object named by ‘𝑎’ was born before
the object named by ‘𝑏’. Well, Aristotle was born before Bush. So ‘𝑅𝑎𝑏’ is true. Equally,
‘𝑅𝑎𝑎’ is false: Aristotle was not born before Aristotle. We can summarise these intuitive
ideas more generally:

› When ℱ is a one‐place predicate, and 𝒶 is a name, then ℱ𝒶 is


true in an interpretation iff:
the object assigned as the extension of 𝒶 is among the objects
assigned to the extension of ℱ .
› When ℛ is an 𝑛‐place predicate and 𝒶1 , 𝒶2 , …, 𝒶𝑛 are names (not
necessarily all different from each other), ℛ𝒶1 𝒶2 …𝒶𝑛 is true in
an interpretation iff:
the 𝑛‐tuple of objects assigned as the extensions of 𝒶1 , 𝒶2 , …, 𝒶𝑛
in that interpretation, ⟨𝒶1 , …, 𝒶𝑛 ⟩, is among the 𝑛‐tuples compris‐
ing the extension assigned to ℛ in that interpretation.

Two other kinds of atomic sentences exist: zero‐place predicates, and identity sen‐
tences.

› A zero‐place predicate is true in an interpretation iff it is assigned the extension


true in that interpretation.
Here in fact, we are using Sentential valuations as constituents of Quantifier in‐
terpretations. (A valuation is what you get when you are only interpreting zero‐
place predicates; we will want it to turn out that the part of Quantifier which
deals just with zero‐place predicates and their truth‐functional combinations is
the familiar language Sentential, so those interpretations will have to behave like
the valuations we are familiar with.)

› Identity sentences (two names flanking the identity predicate) are also easy to
handle. Where 𝒶 and 𝒷 are any names, 𝒶 = 𝒷 is true in an interpretation iff:
𝒶 and 𝒷 have the same extension (are assigned the very same object) in that
interpretation.1
So in our go‐to interpretation, ‘𝑎 = 𝑏’ is false, since Aristotle is distinct from
Bush; but ‘𝑎 = 𝑎’ is true.

1 Of course this is just the result of applying the conditions above for atomic sentences with two‐place
predicates to the special constant extension assigned to ‘=’: 𝒶 = 𝒷 is true iff the pair of the extensions
of 𝒶 and 𝒷 is in the extension of identity, which can only happen if the extensions are the same.
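The atomic clauses can be sketched as a small evaluator. Here the interpretation is cut down to a two‐object fragment of the go‐to interpretation, with strings standing in for the people themselves; the encoding is mine, not part of Quantifier:

```python
interp = {
    'names': {'a': 'Aristotle', 'b': 'Bush'},
    'predicates': {
        'W': {('Aristotle',)},         # extension of 'is wise'
        'R': {('Aristotle', 'Bush')},  # extension of 'was born before'
    },
}

def atom_true(predicate, names, interp):
    """An atomic sentence is true iff the tuple of the names' extensions
    is in the predicate's extension; '=' always means identity."""
    tup = tuple(interp['names'][n] for n in names)
    if predicate == '=':
        return tup[0] == tup[1]
    return tup in interp['predicates'][predicate]

assert atom_true('W', ['a'], interp)           # 'Wa' is true
assert not atom_true('W', ['b'], interp)       # 'Wb' is false
assert atom_true('R', ['a', 'b'], interp)      # 'Rab' is true
assert not atom_true('R', ['a', 'a'], interp)  # 'Raa' is false
assert atom_true('=', ['a', 'a'], interp) and not atom_true('=', ['a', 'b'], interp)
```

The special case for ‘=’ reflects its privileged, fixed extension: the pair of extensions is in it exactly when the two names denote the same thing.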
202 INTERPRETATIONS

22.2 Sentential Connectives


In Sentential, the truth value of a sentence in a valuation depends only on its main
connective, and the truth values of its constituent sentences (§8.3). The truth value of a
complex sentence ‘percolates up’ from those of its grammatically simpler constituents.
Things are exactly the same when it comes to Quantifier sentences that are built up from
simpler ones using the truth‐functional connectives that were familiar from Sentential.
(We introduced such sentences in §20.) The rules governing these truth‐functional
connectives are exactly the same as they were when we considered Sentential. Here
they are:

Where 𝒜 and ℬ are any sentences of Quantifier,


› 𝒜 ∧ ℬ is true in an interpretation iff:
both 𝒜 is true and ℬ is true in that interpretation
› 𝒜 ∨ ℬ is true in an interpretation iff:
either 𝒜 is true or ℬ is true in that interpretation
› ¬𝒜 is true in an interpretation iff:
𝒜 is false in that interpretation
› 𝒜 → ℬ is true in an interpretation iff:
either 𝒜 is false or ℬ is true in that interpretation
› 𝒜 ↔ ℬ is true in an interpretation iff:
𝒜 has the same truth value as ℬ in that interpretation

This presents the very same information as the schematic truth tables for the connect‐
ives; it just does it in a slightly different way. Some examples will probably help to
illustrate the idea. On our go‐to interpretation:

› ‘(𝑎 = 𝑎 ∧ 𝑊𝑎)’ is true;

› ‘(𝑅𝑎𝑏 ∧ 𝑊𝑏)’ is false because, although ‘𝑅𝑎𝑏’ is true, ‘𝑊𝑏’ is false;

› ‘(𝑎 = 𝑏 ∨ 𝑊𝑎)’ is true;

› ‘¬𝑎 = 𝑏’ is true;

› ‘(𝑊𝑎 ∧ ¬(𝑎 = 𝑏 ∧ 𝑅𝑎𝑏))’ is true, because ‘𝑊𝑎’ is true and ‘𝑎 = 𝑏’ is false.

Make sure you understand these examples.

22.3 When the Main Connective is a Quantifier


The exciting innovation in Quantifier, though, is the use of quantifiers. And in fact, ex‐
pressing the truth conditions for quantified sentences is a bit more fiddly than one
might expect. The general principle of compositionality faces a challenge when it

comes to sentences whose main connective is a quantifier. We cannot say that the
truth value of ‘∃𝑥𝐹𝑥’ depends on its syntax and the truth value of ‘𝐹𝑥’, because we have
no guidance in assigning a truth value to a formula with a free variable (though see
§22.6).
Here is a first naïve thought. We want to say that ‘∀𝑥ℱ𝑥 ’ is true iff ℱ is true of everything
in the domain. This should not be too problematic: our interpretation will specify
directly what ℱ is true of.
Unfortunately, this naïve first thought is not general enough. For example, we want to
be able to say that ‘∀𝑥∃𝑦ℒ𝑥𝑦’ is true just in case ‘∃𝑦ℒ𝑥𝑦’ is true of everything in the do‐
main. And this is problematic, since our interpretation does not directly specify what
‘∃𝑦ℒ𝑥𝑦’ is to be true of. Instead, whether or not this is true of something should follow
just from the interpretation of ℒ , the domain, and the meanings of the quantifiers.
So here is a second naïve thought. We might try to say that ‘∀𝑥∃𝑦ℒ𝑥𝑦’ is to be true
in an interpretation iff ∃𝑦ℒ𝒶𝑦 is true for every name 𝒶 that we have included in our
interpretation. And similarly, we might try to say that ∃𝑦ℒ𝒶𝑦 is true just in case ℒ𝒶𝒷
is true for some name 𝒷 that we have included in our interpretation. (This kind of
approach is known as SUBSTITUTIONAL QUANTIFICATION – our own approach below in
§22.4 will make use of substitution, but in a more sophisticated way.)
Unfortunately, this is not right either. To see this, observe that in our go‐to interpret‐
ation, we have only given interpretations for two names, ‘𝑎’ and ‘𝑏’. But the domain
– all people born before the year 2000CE – contains many more than two people. I
have no intention of trying to name all of them! In most interpretations, things in the
domain go unnamed; we can’t understand quantifiers as ranging over only actually
named things without missing out some things in such interpretations.2
So here is a third thought. (And this thought is not naïve, but correct.) Although
it is not the case that we have named everyone, each person could have been given a
name. So we should focus on this possibility of extending an interpretation, by adding
a previously uninterpreted name to our interpretation. I shall offer a few examples of
how this might work, centring on our go‐to interpretation, and I shall then present the
formal definition.

› In our go‐to interpretation, ‘∃𝑥𝑅𝑏𝑥 ’ should be true. After all, in the domain,
there is certainly someone who was born after Bush. Lady Gaga is one of those

2 Can we solve this issue by the brute force proposal to simply assign everything in the domain a name?
That is not possible, because there are not enough names. Remember (§20) that names in Quantifier
consist of the English letters 𝑎, …, 𝑟 with numerical subscripts if needed. So every name in Quantifier
consists of a letter and a finite numeral. It turns out you can arrange all these names in a single, infinitely
long list (basically, represent each name by a code number, and order the names by the size of the code
number). The items in a list like that can be enumerated. But Cantor famously showed that some
things are too many to be enumerated; any single list which attempts to include all of them would
inevitably miss some. (One example: while you can enumerate all the finite sequences of itemse from
a finite alphabet, you cannot enumerate all the infinite sequences of such items.) So some collections
are too many for them all to have names, because we’d run out of names before labelling them all. So
substitutional quantification can’t handle all examples of quantification.
Cantor’s result, and the ‘diagonal argument’ he used to establish it, is discussed in ch. 5 of Tim Button
(2021) Set Theory: An Open Introduction, st.openlogicproject.org/settheory‐screen.pdf.
204 INTERPRETATIONS

people. Indeed, if we were to extend our go‐to interpretation – temporarily, mind


– by adding the name ‘𝑐 ’ to refer to Lady Gaga, then ‘𝑅𝑏𝑐 ’ would be true on this
extended interpretation. And this, surely, should suffice to make ‘∃𝑥𝑅𝑏𝑥 ’ true
on the original go‐to interpretation.

› In our go‐to interpretation, ‘∃𝑥(𝑊𝑥 ∧ 𝑅𝑥𝑎)’ should also be true. After all, in the
domain, there is certainly someone who was both wise and born before Aristotle.
Socrates is one such person. Indeed, if we were to extend our go‐to interpretation
by letting a previously uninterpreted name, ‘𝑐 ’, denote Socrates, then ‘𝑊𝑐 ∧ 𝑅𝑐𝑎’
would be true on this extended interpretation. Again, this should surely suffice
to make ‘∃𝑥(𝑊𝑥 ∧ 𝑅𝑥𝑎)’ true on the original go‐to interpretation.

› In our go‐to interpretation, ‘∀𝑥∃𝑦𝑅𝑥𝑦’ should be false. After all, consider the
last person born in the year 1999. I don’t know who that was, but if we were to
extend our go‐to interpretation by letting a previously uninterpreted name, ‘𝑑 ’,
denote that person, then we would not be able to find anyone else in the domain
to denote with some further previously uninterpreted name, perhaps ‘𝑒’, in such
a way that ‘𝑅𝑑𝑒’ would be true. Indeed, no matter whom we named with ‘𝑒’, ‘𝑅𝑑𝑒’
would be false. And this observation is surely sufficient to make ‘∃𝑦𝑅𝑑𝑦’ false in
our extended interpretation. And this is sufficient to make ‘∀𝑥∃𝑦𝑅𝑥𝑦’ false on
the original interpretation.

Look at the multiplying extensions; the quantifiers involved are mirrored in the inter‐
pretations we are asked to consider. ‘∃𝑥𝐹𝑥’ is true if there exists an extended interpreta‐
tion where some named thing is 𝐹 , while ‘∀𝑥𝐹𝑥 ’ is true if every extended interpretation
makes the newly named thing 𝐹 . In effect, we handle quantification over possibly un‐
named objects in a domain by quantifying over potential interpretations that assign
names to those objects while keeping everything else the same.
Some readers might prefer a more visual aid. If the domain is large, we need to consider
lots of extended intepretations, because we need to consider assigning each of the
items in the domain a new name. So I will consider a very simple interpretation J, with
two objects and some binary relation between them, pictured here:

• •

Note there are no labels on these nodes. No names are assigned to objects in this
interpretation. The arrows give the extension of ‘𝑄’ in this interpretation J. Suppose we
want to consider whether ‘∃𝑥∀𝑦𝑄𝑦𝑥 ’ is true in J. Then we want to know whether there
is some way of assigning a new name ‘𝑑 ’ to entities in this domain to make ‘∀𝑦𝑄𝑦𝑑 ’
come out true. So there are two interpretations to consider, L and R, because there are
two things in the domain to which ‘𝑑 ’ might be attached:

‘𝑑 ’ ‘𝑑 ’
• • • •

L R

The original sentence is true in J iff ‘∀𝑦𝑄𝑦𝑑 ’ is true in one of these extended interpret‐
ations. But in turn that is the case iff ‘𝑄𝑓𝑑 ’ is true in each interpretation extending the
already extended interpretation by adding yet another new name ‘𝑓’. So now there are
four interpretations to consider: two extensions of L, and two extensions of R:

‘𝑑 ’ ‘𝑑 ’ ‘𝑑 ’ ‘𝑑 ’
• • • • • • • •
‘𝑓 ’ ‘𝑓 ’ ‘𝑓 ’ ‘𝑓 ’
LL LR RL RR

We now have all the interpretations we’ll need. Let’s go through it step by step:

1. ‘∃𝑥∀𝑦𝑄𝑦𝑥 ’ is true in J iff

2. There exists some extended interpretation that assigns something to a newly


chosen name ‘𝑑 ’ such that ‘∀𝑦𝑄𝑦𝑑 ’ is true in it, i.e., iff

3. Either ‘∀𝑦𝑄𝑦𝑑 ’ is true in L or ‘∀𝑦𝑄𝑦𝑑 ’ is true in R; i.e., iff either

a) ‘𝑄𝑓𝑑 ’ is true in every interpretation extending L, i.e., iff


i. ‘𝑄𝑓𝑑 ’ is true in LL (no); and
ii. ‘𝑄𝑓𝑑 ’ is true in LR (no).
OR
b) ‘𝑄𝑓𝑑 ’ is true in every interpretation extending R, i.e., iff
i. ‘𝑄𝑓𝑑 ’ is true in RL (yes); and
ii. ‘𝑄𝑓𝑑 ’ is true in RR (yes).

4. Since ‘𝑄𝑓𝑑 ’ is true in both RL and RR, then

5. ‘∀𝑦𝑄𝑦𝑑 ’ is true in R, so that

6. ‘∃𝑥∀𝑦𝑄𝑦𝑥 ’ is true in J.

All of these interpretations that spawn from our original interpretation share the same
domain, and the same extension for ‘𝑄’. They differ only in that the extended interpret‐
ations add new labels to the items in the domain, interpreting the previously unused
names.
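The whole procedure can be sketched as a recursive evaluator over a finite domain. The formula encoding is mine, and I have read the arrows of J off the worked answer above (an edge from the first object to the second, plus a loop on the second), so the evaluator should agree that ‘∃𝑥∀𝑦𝑄𝑦𝑥 ’ is true in J. Only atoms and quantifiers are handled; the connective clauses of §22.2 would slot in as extra cases.

```python
from itertools import count

def substitute(formula, var, name):
    """Replace free occurrences of var by name; parts that rebind var are left alone."""
    op = formula[0]
    if op in ('all', 'exists'):
        if formula[1] == var:
            return formula                  # var is bound below: nothing free to replace
        return (op, formula[1], substitute(formula[2], var, name))
    return tuple(name if t == var else t for t in formula)  # an atom

def truth(formula, domain, names, extensions):
    """Truth in an interpretation, via the extended-interpretation rule:
    evaluate a substitution instance under every/some way of assigning
    a previously uninterpreted name to an object of the domain."""
    op = formula[0]
    if op in ('all', 'exists'):
        fresh = next(f'c{i}' for i in count() if f'c{i}' not in names)
        instance = substitute(formula[2], formula[1], fresh)
        verdicts = (truth(instance, domain, {**names, fresh: obj}, extensions)
                    for obj in domain)
        return all(verdicts) if op == 'all' else any(verdicts)
    return tuple(names[t] for t in formula[1:]) in extensions[op]  # an atom

# J: two unnamed objects 0 and 1; Q holds from 0 to 1 and from 1 to 1.
J_domain, J_ext = {0, 1}, {'Q': {(0, 1), (1, 1)}}

assert truth(('exists', 'x', ('all', 'y', ('Q', 'y', 'x'))), J_domain, {}, J_ext)
assert not truth(('all', 'x', ('all', 'y', ('Q', 'y', 'x'))), J_domain, {}, J_ext)
```

The recursion mirrors the step‐by‐step reasoning exactly: each quantifier spawns one extended interpretation per object of the domain, differing from its parent only in what the fresh name labels.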
Let’s try another visual example. Consider this interpretation I, with a domain of three
things, an interpreted name ‘𝑎’ and two interpreted predicates ‘𝐹 ’ and ‘𝐺 ’.

• • •
‘𝑎 ’ 𝐺
𝐹

There are three extended interpretations, II–IV, corresponding to the three ways of
adding some unused name ‘𝑑 ’:

II III

•‘𝑑 ’ • • • •‘𝑑 ’ •
‘𝑎 ’ 𝐺 ‘𝑎 ’ 𝐺
𝐹 𝐹

IV

• • •‘𝑑 ’
‘𝑎 ’ 𝐺
𝐹

We can see that ‘∀𝑥𝐹𝑥 ’ is true in I, because in each of II–IV, ‘𝑑 ’ labels something in
the ‘𝐹 ’‐region. We can see that ‘∀𝑥𝐺𝑥 ’ is false in I, because in II and III, ‘𝑑 ’ labels
something not in the ‘𝐺 ’‐region. ‘∃𝑥𝐺𝑥’ is true in I, because interpretation IV attaches
‘𝑑 ’ to something in the ‘𝐺 ’‐region.

22.4 Formal Truth Conditions for Quantified Sentences


If you have understood the three examples, and the visual aids, you’ve understood
everything that matters. Strictly speaking, though, we still need to give a precise defin‐
ition of the truth conditions for quantified sentences. The result, sadly, is a bit ugly,
and requires a few new definitions. Brace yourself!
Suppose that 𝒜 is a formula, 𝓍 is a variable, and 𝒸 is a name. We shall write ‘𝒜|𝒸↷𝓍 ’
to represent the formula that results from replacing or SUBSTITUTING every free occur‐
rence of 𝓍 in 𝒜 by 𝒸. So if we began with the formula ‘𝐹𝑥𝑦’, the metalanguage expres‐
sion “‘𝐹𝑥𝑦’|𝑐↷𝑥 ” denotes the formula ‘𝐹𝑐𝑦’. If we began with ‘𝐹𝑦𝑦’, then ‘𝐹𝑦𝑦’|𝑐↷𝑥 just is
the original formula ‘𝐹𝑦𝑦’, since there are no instances of ‘𝑥’ in ‘𝐹𝑦𝑦’ to be replaced. If
we began with ‘𝐹𝑥 ∨ ∀𝑥𝐺𝑥 ’, then ‘𝐹𝑥 ∨ ∀𝑥𝐺𝑥 ’|𝑐↷𝑥 is ‘𝐹𝑐 ∨ ∀𝑥𝐺𝑥 ’, since neither occurrence
of ‘𝑥’ in ‘∀𝑥𝐺𝑥 ’ is free.
Suppose we begin with a quantified formula, ∀𝓍𝒜 or ∃𝓍𝒜 . If we strip off the quantifier,
and pick any name 𝒸, then 𝒜|𝒸↷𝓍 is known as a SUBSTITUTION INSTANCE of the original
quantified formulae, and 𝒸 may be called the INSTANTIATING NAME. So:

∃𝑥(𝑅𝑒𝑥 ↔ 𝐹𝑥)

is a substitution instance of
∀𝑦∃𝑥(𝑅𝑦𝑥 ↔ 𝐹𝑥)

with the instantiating name ‘𝑒’, because ‘∃𝑥(𝑅𝑦𝑥 ↔ 𝐹𝑥)’|𝑒↷𝑦 turns out to be ‘∃𝑥(𝑅𝑒𝑥 ↔
𝐹𝑥)’.
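Substitution restricted to free occurrences can be sketched directly, representing formulas as nested tuples (this encoding is mine, not part of Quantifier). The three examples above come out as expected:

```python
def substitute(formula, var, name):
    """Return formula|name↷var: every *free* occurrence of var becomes name."""
    op = formula[0]
    if op in ('all', 'exists'):
        if formula[1] == var:
            return formula                  # this quantifier binds var: nothing free inside
        return (op, formula[1], substitute(formula[2], var, name))
    if op == 'not':
        return (op, substitute(formula[1], var, name))
    if op in ('and', 'or', 'if', 'iff'):
        return (op, substitute(formula[1], var, name),
                    substitute(formula[2], var, name))
    return tuple(name if t == var else t for t in formula)  # an atom

# 'Fxy'|c↷x is 'Fcy':
assert substitute(('F', 'x', 'y'), 'x', 'c') == ('F', 'c', 'y')
# 'Fyy'|c↷x is just 'Fyy':
assert substitute(('F', 'y', 'y'), 'x', 'c') == ('F', 'y', 'y')
# 'Fx ∨ ∀xGx'|c↷x is 'Fc ∨ ∀xGx': the bound occurrences are untouched.
fx_or_allxgx = ('or', ('F', 'x'), ('all', 'x', ('G', 'x')))
assert substitute(fx_or_allxgx, 'x', 'c') == ('or', ('F', 'c'), ('all', 'x', ('G', 'x')))
```

The one subtlety is the first clause: when a quantifier rebinds the very variable being substituted, recursion stops, since no occurrence below it is free.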
Armed with this notation, the rough idea is as follows. The sentence ∀𝓍𝒜 will be true
iff 𝒜|𝒸↷𝓍 is true no matter what object (in the domain) we name with 𝒸. Similarly, the
sentence ∃𝓍𝒜 will be true iff there is some way to assign the name 𝒸 to an object that
makes 𝒜|𝒸↷𝓍 true. More precisely, we stipulate:

› ∀𝓍𝒜 is true in an interpretation iff:


𝒜|𝒸↷𝓍 is true in every interpretation that extends the original
interpretation by assigning an object to some previously unin‐
terpreted name 𝒸 not appearing in 𝒜 (without changing the ori‐
ginal interpretation in any other way).
› ∃𝓍𝒜 is true in an interpretation iff:
𝒜|𝒸↷𝓍 is true in some interpretation that extends the original
interpretation by assigning an object to some previously unin‐
terpreted name 𝒸 not appearing in 𝒜 (without changing the ori‐
ginal interpretation in any other way).

That is: we pick a previously uninterpreted name that doesn’t appear in 𝒜 .3 We uni‐
formly replace any free occurrences of the variable 𝓍 in 𝒜 by our previously uninter‐
preted name, which creates a substitution instance of ‘∀𝓍𝒜 ’ and ‘∃𝓍𝒜 ’. Then if this
substitution instance is true on every (respectively, some) way of adding an interpret‐
ation of the previously uninterpreted name to our existing interpretation, then ‘∀𝓍𝒜 ’
(respectively, ‘∃𝓍𝒜 ’) is true on that existing interpretation.
To be clear: all this is doing is formalising (very pedantically) the intuitive idea ex‐
pressed above. The result is a bit ugly, and the final definition might look a bit opaque.
Hopefully, though, the spirit of the idea is clear.
The trickiest part of all of this is keeping things straight when you have nested quan‐
tifiers, particularly quantifiers of different types. As above, when we considered
‘∃𝑥∀𝑦𝑄𝑦𝑥 ’, we needed first to consider interpretations that assigned some new name
‘𝑑 ’, and then, for each of those interpretations, we generated a parasitic family of new
interpretations assigning some other new name ‘𝑓’. If we apply our truth conditions,
we can summarise the basic cases of two nested quantifiers as follows:

3 There will always be such a previously uninterpreted name: any given sentence of Quantifier only con‐
tains finitely many names, but Quantifier has a potentially infinite stock of names to draw from.

› ∀𝓍∀𝓎ℱ𝓍𝓎: This will be true in interpretation ℑ iff for some new


names ‘𝒸’ and ‘𝒹’, ‘ℱ𝓍𝓎 ’|𝒸↷𝓍 |𝒹↷𝓎 – i.e., ‘ℱ𝒸𝒹 ’ – is true in every
interpretation just like ℑ except it also assigns extensions to the
new names.
› ∃𝓍∃𝓎ℱ𝓍𝓎 is true in ℑ iff for some new names ‘𝒸’ and ‘𝒹’, ‘ℱ𝒸𝒹 ’
is true in some interpretation just like ℑ except it also assigns
extensions to the new names.
› ∀𝓍∃𝓎ℱ𝓍𝓎 is true in ℑ iff for some new names ‘𝒸’ and ‘𝒹’, for each
interpretation ℑ′ (otherwise like ℑ) assigning an extension to ‘𝒸’
there exists some interpretation ℑ″ assigning an extension to ‘𝒹’
(and otherwise just like ℑ′ ) which makes ‘ℱ𝒸𝒹 ’ true.
› ∃𝓍∀𝓎ℱ𝓍𝓎 is true in ℑ iff for some new names ‘𝒸’ and ‘𝒹’, there
exists some interpretation ℑ′ (otherwise like ℑ) assigning an ex‐
tension to ‘𝒸’ which makes ‘ℱ𝒸𝒹 ’ true in each interpretation ℑ″
assigning an extension to ‘𝒹’ and otherwise just like ℑ′ .

Note that these rules are listed here for your convenience; they can be derived directly
from the stipulation above, so you don’t need to learn them separately.
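Over a finite domain, the four patterns correspond directly to nestings of ‘every’ and ‘some’. The following sketch (with a domain and a meaning for ‘𝑅’ made up purely for illustration) mirrors the four rules:

```python
# A hypothetical finite interpretation: domain {1, 2, 3}, with 'R' true of
# the pair <m, n> just in case m is less than or equal to n.
domain = {1, 2, 3}

def R(m, n):
    return m <= n

# Each line mirrors one of the four nested-quantifier rules above:
forall_forall = all(all(R(c, d) for d in domain) for c in domain)  # ∀x∀yRxy
exists_exists = any(any(R(c, d) for d in domain) for c in domain)  # ∃x∃yRxy
forall_exists = all(any(R(c, d) for d in domain) for c in domain)  # ∀x∃yRxy
exists_forall = any(all(R(c, d) for d in domain) for c in domain)  # ∃x∀yRxy

print(forall_forall)  # False: R(2, 1) fails, for instance
print(exists_exists)  # True
print(forall_exists)  # True: each object is R-related to itself
print(exists_forall)  # True: 1 is R-related to everything
```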

22.5 Pitfalls of Alternative Approaches


Our rule for evaluating a universal quantifier sentence says ∀𝑥𝒜 is true in an inter‐
pretation 𝐽 iff 𝒜|𝒸↷𝓍 is true in every interpretation just like 𝐽 except that it assigns an
extension to a previously unused name 𝒸. You might be thinking, this all seems un‐
necessarily complicated. Can’t we do things more simply? Let’s look at some simpler
alternatives; we’ll see that they go awry.

Why Restrict to New Names? Here is the first proposed simplification: do away
with the requirement for the name in the extended interpretation to be new. That
would give us this proposal:

› ∀𝑥𝒜 is true in an interpretation 𝐽 iff 𝒜|𝑐↷𝑥 is true in any interpretation extending


𝐽 which assigns an extension to ‘𝑐 ’.

This only differs from the correct proposal in cases where the name ‘𝑐 ’ is already used.
Suppose we consider ‘∀𝑥𝑐 = 𝑥 ’. This is false in any interpretation with two or more
things – they can’t both be 𝑐 . But ‘𝑐 = 𝑥’|𝑐↷𝑥 is just ‘𝑐 = 𝑐 ’. And this is true in every
interpretation which assigns anything to be 𝑐 at all. Since ‘𝑐 = 𝑐 ’ is true in every ex‐
tended interpretation, this alternative rule wrongly predicts ‘∀𝑥𝑐 = 𝑥 ’ is true in the
original interpretation. The problem arises, of course, because the name we are substi‐
tuting for the universally quantified variable interacts with the existing occurrences of
the name. The moral: always use a new, previously uninterpreted name.
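The failure can be reproduced concretely. In the sketch below (my own illustration, with a two‐object domain in which ‘𝑐 ’ already names object 1), the flawed rule and the correct rule come apart over ‘∀𝑥𝑐 = 𝑥 ’:

```python
# Two objects in the domain; the name 'c' is already interpreted as object 1.
domain = {1, 2}
names = {"c": 1}

# Correct rule: substitute a NEW name 'd' and try every way of interpreting it.
# 'c = d' must be true for each choice of referent for 'd'.
correct = all(names["c"] == d_ref for d_ref in domain)

# Flawed rule: substitute the already-used name 'c', yielding 'c = c',
# which is true however the interpretation is extended.
flawed = all(names["c"] == names["c"] for _ in domain)

print(correct)  # False: with two objects, 'forall x, c = x' should be false
print(flawed)   # True: the flawed rule wrongly counts it as true
```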
§22. TRUTH IN Quantifier 209

Why not substitute 𝑥 for 𝑐 ? The truth conditions above start with a quantified sen‐
tence, drop the quantifier, and substitute a name for the associated variable. Why do
things that way? Couldn’t we start by considering the truth values of some sentence
with a name across many interpretations, and then swap the name for a variable and
add a quantifier? Here is a second proposed alternative:

› ∀𝑥𝒜|𝑥↷𝒸 is true in an interpretation 𝐼 iff in every interpretation just like 𝐼 except


that it interprets a new name 𝒸, 𝒜 is true.

The problem here is that 𝒜 is actually not well‐defined. Consider this case, a sen‐
tence with a degenerate initial quantifier: ‘∀𝑥∃𝑥𝑅𝑥𝑥 ’. The official truth conditions say:
this sentence is true iff ‘∃𝑥𝑅𝑥𝑥 ’ is true in every extended interpretation that assigns
something to a new name. Since ‘∃𝑥𝑅𝑥𝑥 ’ has no names, it will be true on all of these
extended interpretations iff it is true in the original interpretation. So the initial ∀𝑥
quantifier is redundant.
What happens on the alternative approach? It turns out there are several candidates
for 𝒜 :

1. ∀𝑥∃𝑥𝑅𝑥𝑥 = ∀𝑥∃𝑥𝑅𝑥𝑥|𝑥↷𝒸 , so 𝒜 = ∃𝑥𝑅𝑥𝑥 ;

2. ∀𝑥∃𝑥𝑅𝑥𝑥 = ∀𝑥∃𝑥𝑅𝑥𝑐|𝑥↷𝒸 , so 𝒜 = ∃𝑥𝑅𝑥𝑐 ;

3. ∀𝑥∃𝑥𝑅𝑥𝑥 = ∀𝑥∃𝑥𝑅𝑐𝑐|𝑥↷𝒸 , so 𝒜 = ∃𝑥𝑅𝑐𝑐 .

The problem is, these different candidates for 𝒜 don’t all give the same results in a given
interpretation. (‘∃𝑥𝑅𝑥𝑥 ’ certainly doesn’t always have the same truth value as ‘∃𝑥𝑅𝑥𝑐 ’.)
This means that this doesn’t actually provide a way of assigning a truth value to a
quantified sentence. We need our definition of truth to determine an unambiguous
answer, and this proposal doesn’t manage to meet that requirement.
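The disagreement between the candidates can be seen concretely. In this illustrative interpretation (the domain, the extension of ‘𝑅’, and the referent of ‘𝑐 ’ are all invented for the example), candidate 1 and candidate 2 deliver different verdicts:

```python
domain = {1, 2}
R = {(2, 1)}   # extension of 'R': only the pair <2, 1>
c = 1          # referent of 'c'

exists_Rxx = any((a, a) in R for a in domain)  # candidate 1: '∃xRxx'
exists_Rxc = any((a, c) in R for a in domain)  # candidate 2: '∃xRxc'

print(exists_Rxx)  # False: nothing bears R to itself
print(exists_Rxc)  # True: 2 bears R to the referent of 'c'
```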

22.6 Satisfaction
The discussion of truth in §§22.1–22.4 only ever involved assigning truth values to sen‐
tences. When confronted with formulae involving free variables, we altered them by
substituting a previously uninterpreted name for the variable, converting it into a sen‐
tence, and temporarily extending the interpretation to cover that previously uninter‐
preted name. This is a significant departure from the definition of truth for Senten‐
tial sentences in §8.3. There, we showed how the truth value of a complex Sentential
sentence in a valuation depended on the truth value of its constituents in that same
valuation. By contrast, on the approach just outlined, the truth value of ‘∃𝑥𝐹𝑥’ in an
interpretation depends on the truth value of some other sentence ‘𝐹𝑐 ’, which is not a
constituent of ‘∃𝑥𝐹𝑥’, in some different (although related) interpretation!
There is another way to proceed, which allows arbitrary formulae of Quantifier, even
those with free variables, to be assigned (temporarily) a truth value. This approach can
let the truth value of ‘∃𝑥𝐹𝑥’ in an interpretation depend on the temporary truth value
of ‘𝐹𝑥’ in that same interpretation. This is conceptually neater (with no multiplying

interpretations) than the approach just introduced, and I present it briefly here as an
alternative to the substitutional approach of the preceding sections.

› This section should be regarded as optional, and only for the dedicated student of
logic.

The inspiration for the approach comes, once again, from thinking of variables in Quan‐
tifier as behaving like pronouns in English. In §15.5 we gave a gloss of ‘Someone is angry’
as ‘Some person is such that: they are angry’. Concentrate on ‘they are angry’. This sen‐
tence featuring a bare pronoun doesn’t express any specific claim intrinsically. But we
can, temporarily, fix a referent for the pronoun ‘they’ – temporarily elevating it to the
status of a name. If we do so, the sentence can be evaluated. We can introduce a ref‐
erent by pointing: ‘They [points to someone] are angry’. Or we can fix a referent by
simply assigning one: ‘Consider that person over there. They are angry’. If we can find
someone or other to temporarily be the referent of the pronoun ‘they’, then it will be
true that there is someone such that they are angry. If no matter who we fix as the
referent, they are angry, then it will be true that everyone is such that they are angry.
This is, in a nutshell, the idea we will use to handle quantification in Quantifier. Let us
introduce some terminology:

A VARIABLE ASSIGNMENT over an interpretation is an assignment of


exactly one object from the domain of that interpretation to each vari‐
able that we care to consider.

If we have an interpretation, and a variable assignment, then we can evaluate every for‐
mula of Quantifier – not only sentences. Of course, the evaluation of the open formulae
will be very fragile, since even given a single background interpretation, a formula like
‘𝐹𝑥’ might be true relative to one variable assignment and false relative to another.
Let us start, as before, by giving rules for evaluating atomic formulae of Quantifier,
given an interpretation and a variable assignment.

Where ℛ is any 𝑛‐place predicate (𝑛 ≥ 0), and 𝓉1 , …, 𝓉𝑛 are any terms


– variables or names, then:
› ℛ𝓉1 …𝓉𝑛 is true on a variable assignment over an interpretation
iff:
ℛ is true of the objects assigned to the terms 𝓉1 , …, 𝓉𝑛 by that
variable assignment and that interpretation, considered in that
order.
› 𝓉1 = 𝓉2 is true on a variable assignment over an interpretation
iff:
𝓉1 and 𝓉2 are assigned the very same object by that variable as‐
signment and that interpretation.

These are very similar clauses to those we saw for atomic sentences in §22.4. Indeed,
when we are considering atomic sentences of Quantifier, the mention of a variable as‐
signment is redundant, since no atomic sentence contains a variable. (If an atomic
formula contains a variable, the variable would be free and the formula thus not a
sentence.)
The recursion clauses that extend truth for atomic formulae to truth for arbitrary for‐
mulae are these (I omit the clauses for ∨, →, and ↔, which you can easily fill in yourself,
following the model in §22.2):

Where 𝒜 and ℬ are formulae of Quantifier, and 𝓍 is a variable:


› ¬𝒜 is true on a variable assignment over an interpretation iff:
𝒜 is false on that variable assignment over that interpretation;
› 𝒜 ∧ ℬ is true on a variable assignment over an interpretation iff:
both 𝒜 is true on that variable assignment over that interpret‐
ation and ℬ is true on that variable assignment over that inter‐
pretation;
› …
› ∀𝓍𝒜 is true on a variable assignment over an interpretation
iff:
𝒜 is true on every variable assignment differing from the original
one at most in what it assigns to 𝓍 over that interpretation;
› ∃𝓍𝒜 is true on a variable assignment over an interpretation
iff:
𝒜 is true on some variable assignment differing from the original
one at most in what it assigns to 𝓍 over that interpretation.

The last two clauses are where this approach is strongest. Rather than consider‐
ing some substitution instance of ∀𝓍𝒜 , we simply consider the direct constituent 𝒜 .
Rather than considering all variations on the original interpretation which include
some previously uninterpreted name, we simply consider all ways of varying what is as‐
signed to 𝓍 by the original variable assignment, but keeping everything else unchanged.
If you see the rationale for the clauses offered in §22.4, you can see why the clauses just
offered are appropriate.
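The quantifier clauses can be sketched directly, with variable assignments represented as dictionaries; here for the single sentence ‘∀𝑥∃𝑦𝐿𝑥𝑦’. (The domain and the extension of ‘𝐿’ are stipulated for the example.)

```python
# Truth on a variable assignment: each quantifier varies what the assignment
# gives to its own variable, leaving the interpretation itself untouched.

def Lxy(assignment, L):
    """'Lxy' is true iff the values of 'x' and 'y' fall under 'L' in that order."""
    return (assignment["x"], assignment["y"]) in L

def exists_y_Lxy(assignment, domain, L):
    """True iff 'Lxy' is true on SOME assignment differing at most on 'y'."""
    return any(Lxy({**assignment, "y": obj}, L) for obj in domain)

def forall_x_exists_y_Lxy(assignment, domain, L):
    """True iff '∃yLxy' is true on EVERY assignment differing at most on 'x'."""
    return all(exists_y_Lxy({**assignment, "x": obj}, domain, L)
               for obj in domain)

domain = {1, 2}
L = {(1, 1), (1, 2), (2, 2)}   # a stipulated extension for 'L'
initial = {"x": 1, "y": 1}     # the sentence is closed, so any start will do
print(forall_x_exists_y_Lxy(initial, domain, L))  # True
```

Because ‘∀𝑥∃𝑦𝐿𝑥𝑦’ is a sentence, the result is the same whichever initial assignment we begin from.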
We now have the idea of truth on a variable assignment over an interpretation. But what
we want – if this alternative approach is to yield the same end result – is truth in an
interpretation. Notice that, given an interpretation, varying the variable assignment
can change the truth value of a formula with free variables. But it cannot change the
truth value of a formula which is a sentence, so if a sentence is true on one variable
assignment over an interpretation, it is true on every variable assignment over that
interpretation. So we can reintroduce the notion of truth in an interpretation, like so:

𝒜 is true in an interpretation iff:


𝒜 is a sentence of Quantifier, and 𝒜 is true on any (or every) variable
assignment over that interpretation.

Here’s how this works in practice. Suppose we want to figure out whether the sentence
‘∀𝑥∃𝑦𝐿𝑥𝑦’ is true on an interpretation which associates the two‐place predicate ‘𝐿’ with
the relation ‘___1 is no heavier than ___2 ’, and has as its domain the planets in our
solar system. We might reason as follows:

For each way of picking planets to be the values of ‘𝑥’ and ‘𝑦’, either we pick
two different planets, and one is lighter than the other (no two planets have
the same mass); or we pick the same planet, and ‘they’ are identical in mass.
In either case, we can always find something no heavier than anything we
pick. So for any variable assignment to ‘𝑥 ’ over this interpretation, we can
then assign something to ‘𝑦’ so as to make ‘𝐿𝑥𝑦’ true on that joint assign‐
ment. Hence no matter what we assign to ‘𝑥’, ‘∃𝑦𝐿𝑥𝑦’ is true on that assign‐
ment. But since that is true no matter what we assign to ‘𝑥’, ‘∀𝑥∃𝑦𝐿𝑥𝑦’ is
true on every variable assignment over this interpretation. But that latter is
a sentence, so is true in this interpretation.
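The same reasoning can be rehearsed with a toy stand‐in for the planets (the masses below are invented placeholders, not real data):

```python
# '___1 is no heavier than ___2', modelled with made-up masses.
masses = {"Mercury": 1, "Venus": 2, "Earth": 3, "Mars": 4}  # hypothetical units
domain = list(masses)

def L(a, b):
    return masses[a] <= masses[b]

# 'forall x exists y Lxy': whatever 'x' is assigned, some assignment to 'y'
# (e.g. the very same planet, which is no heavier than itself) makes 'Lxy' true.
result = all(any(L(x, y) for y in domain) for x in domain)
print(result)  # True
```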

Let us say that a sequence of objects ⟨𝑎1 , …, 𝑎𝑛 ⟩ SATISFIES a formula 𝒜 in which the
variables 𝓍1 , …, 𝓍𝑛 occur freely iff there is a variable assignment over an interpretation
whose domain includes each 𝑎𝑖 , and which assigns each 𝑎𝑖 to the variable 𝓍𝑖 , and on
which 𝒜 is true over that interpretation. What we have expressed in terms of variable
assignments could have been expressed, a little more awkwardly, using the notion of
some objects satisfying a formula. Indeed, this is how Alfred Tarski, the inventor of
this approach to truth in Quantifier, first introduced the idea.4

4 A translation of his original 1933 paper is Alfred Tarski (1983) ‘The Concept of Truth in Formalized
Languages’ in his Logic, Semantics, Metamathematics, Indianapolis: Hackett, pp. 152–278. It is quite
technical in places.

Key Ideas in §22


› An atomic sentence of Quantifier is true in an interpretation iff
the extensions of the names occurring in it, taken in the appro‐
priate order, fall in the extension of the predicate they accom‐
pany.
› A compound sentence of Quantifier has truth conditions relative
to an interpretation that are the same as those for Sentential, if
the main connective is truth‐functional.
› A compound sentence of Quantifier whose main connective is a
quantifier ‘∀𝑥 ’ (resp., ‘∃𝑥’) is true in an interpretation if on every
(resp., some) interpretation extending the first by assigning an
extension to some previously uninterpreted name ‘𝑐 ’, the result
of replacing every occurrence of the variable ‘𝑥’ bound by the
quantifier with 𝑐 is true.

Practice exercises
A. Consider the following interpretation:

domain: The domain comprises only Corwin and Benedict


𝐴: is to be true of both Corwin and Benedict
𝐵: is to be true of Benedict only
𝑁: is to be true of no one
𝑐: is to refer to Corwin

Determine whether each of the following sentences is true or false in that interpreta‐
tion:

1. 𝐵𝑐
2. 𝐴𝑐 ↔ ¬𝑁𝑐
3. 𝑁𝑐 → (𝐴𝑐 ∨ 𝐵𝑐)
4. ∀𝑥𝐴𝑥
5. ∀𝑥¬𝐵𝑥
6. ∃𝑥(𝐴𝑥 ∧ 𝐵𝑥)
7. ∃𝑥(𝐴𝑥 → 𝑁𝑥)
8. ∀𝑥(𝑁𝑥 ∨ ¬𝑁𝑥)
9. ∃𝑥𝐵𝑥 → ∀𝑥𝐴𝑥

B. Consider the following interpretation:

domain: The domain comprises only Lemmy, Courtney and Eddy


𝐺: is to be true of Lemmy, Courtney and Eddy.
𝐻: is to be true of and only of Courtney
𝑀: is to be true of and only of Lemmy and Eddy
𝑐: is to refer to Courtney
𝑒: is to refer to Eddy

Determine whether each of the following sentences is true or false in that interpreta‐
tion:

1. 𝐻𝑐
2. 𝐻𝑒
3. 𝑀𝑐 ∨ 𝑀𝑒
4. 𝐺𝑐 ∨ ¬𝐺𝑐
5. 𝑀𝑐 → 𝐺𝑐
6. ∃𝑥𝐻𝑥
7. ∀𝑥𝐻𝑥
8. ∃𝑥¬𝑀𝑥
9. ∃𝑥(𝐻𝑥 ∧ 𝐺𝑥)
10. ∃𝑥(𝑀𝑥 ∧ 𝐺𝑥)
11. ∀𝑥(𝐻𝑥 ∨ 𝑀𝑥)
12. ∃𝑥𝐻𝑥 ∧ ∃𝑥𝑀𝑥
13. ∀𝑥(𝐻𝑥 ↔ ¬𝑀𝑥)
14. ∃𝑥𝐺𝑥 ∧ ∃𝑥¬𝐺𝑥
15. ∀𝑥∃𝑦(𝐺𝑥 ∧ 𝐻𝑦)

C. Following the diagram conventions introduced at the end of §21, consider the fol‐
lowing interpretation:

[Diagram: five objects, numbered 1–5, with arrows depicting the extension of the two‐place predicate ‘𝑅’; the arrows are not recoverable from this text.]

Determine whether each of the following sentences is true or false in that interpreta‐
tion:

1. ∃𝑥𝑅𝑥𝑥
2. ∀𝑥𝑅𝑥𝑥
3. ∃𝑥∀𝑦𝑅𝑥𝑦
4. ∃𝑥∀𝑦𝑅𝑦𝑥
5. ∀𝑥∀𝑦∀𝑧((𝑅𝑥𝑦 ∧ 𝑅𝑦𝑧) → 𝑅𝑥𝑧)
6. ∀𝑥∀𝑦∀𝑧((𝑅𝑥𝑦 ∧ 𝑅𝑥𝑧) → 𝑅𝑦𝑧)
7. ∃𝑥∀𝑦¬𝑅𝑥𝑦
8. ∀𝑥(∃𝑦𝑅𝑥𝑦 → ∃𝑦𝑅𝑦𝑥)
9. ∃𝑥∃𝑦(¬𝑥 = 𝑦 ∧ 𝑅𝑥𝑦 ∧ 𝑅𝑦𝑥)
10. ∃𝑥∀𝑦(𝑅𝑥𝑦 ↔ 𝑥 = 𝑦)
11. ∃𝑥∀𝑦(𝑅𝑦𝑥 ↔ 𝑥 = 𝑦)
12. ∃𝑥∃𝑦(¬𝑥 = 𝑦 ∧ 𝑅𝑥𝑦 ∧ ∀𝑧(𝑅𝑧𝑥 ↔ 𝑦 = 𝑧))

D. Why, when we are trying to figure out whether ‘∀𝑥𝑅𝑥𝑎’ is true in an interpreta‐
tion, do we need to consider whether ‘𝑅𝑐𝑎’ is true in some expanded interpretation
with a new name ‘𝑐 ’? Why can’t we make do with substituting a name we’ve already
interpreted?
E. Explain why on page 207 we did not give the truth conditions for the existential
quantifier like this:

∃𝓍𝒜|𝓍↷𝒸 is true in an interpretation 𝐼 iff in some interpretation just like 𝐼


except that it might assign something different to 𝒸, 𝒜 is true.
23
Semantic Concepts

Offering a precise definition of truth in Quantifier was more than a little fiddly. But
now that we are done, we can define various central logical notions. These will look
very similar to the definitions we offered for Sentential. However, remember that they
concern interpretations, rather than valuations.
So:

› A Quantifier sentence 𝒜 is a LOGICAL TRUTH iff 𝒜 is true in every


interpretation; this is written ⊨ 𝒜 .
› 𝒜 is a LOGICAL FALSEHOOD iff 𝒜 is false in every interpretation;
i.e., ⊨ ¬𝒜 .
› Two Quantifier sentences 𝒜 and ℬ are LOGICALLY EQUIVALENT iff
they are true in exactly the same interpretations as each other;
i.e., both 𝒜 ⊨ ℬ and ℬ ⊨ 𝒜 .
› The Quantifier sentences 𝒜1 , 𝒜2 , …, 𝒜𝑛 are JOINTLY CONSISTENT
iff there is some interpretation in which all of the sentences are
true. They are JOINTLY INCONSISTENT iff there is no such inter‐
pretation.

Some examples might be helpful to illustrate these ideas.


Consider the sentences ‘∀𝑥(𝐹𝑥 ∨ 𝐺𝑥)’ and ‘¬(𝐹𝑎 → 𝐺𝑎)’. I offer an interpretation show‐
ing them consistent:

domain: musical artists


𝐹: ___1 plays an instrument
𝐺: ___1 sings
𝑎: Keith Moon


‘𝐹𝑎’ is true in this interpretation, while ‘𝐺𝑎’ is false. So the conditional ‘(𝐹𝑎 → 𝐺𝑎)’ is
false, and so ‘¬(𝐹𝑎 → 𝐺𝑎)’ is true. Suppose we add a previously unused name 𝑏 to this
interpretation: it will denote either an instrumentalist or a singer (I am including producers
as playing a musical instrument). So (𝐹𝑏 ∨ 𝐺𝑏) will be true in each such extended
interpretation. (𝐹𝑏 ∨ 𝐺𝑏) is (𝐹𝑥 ∨ 𝐺𝑥)|𝑏↷𝑥 , so by our semantic clauses, ∀𝑥(𝐹𝑥 ∨ 𝐺𝑥)
is true in the original interpretation. Since there is an interpretation making each of
these sentences true, they are consistent.
Now look at ‘∃𝑦(𝑃𝑦∧¬𝑃𝑦)’. If this is true in an interpretation, assigning some extension
to ‘𝑃’, then some interpretation with the same extension assigned to ‘𝑃’ and some new
name ‘𝑏’ makes ‘(𝑃𝑏 ∧ ¬𝑃𝑏)’ true. But that can be the case only if the extension of ‘𝑏’ is
included in the extension of ‘𝑃’, and also isn’t included in that extension – impossible.
So whatever extension we assign to ‘𝑃’, ‘∃𝑦(𝑃𝑦 ∧ ¬𝑃𝑦)’ is false. So that sentence is
false on every interpretation, and is a logical falsehood. Its negation is therefore a
logical truth: ‘¬∃𝑦(𝑃𝑦 ∧ ¬𝑃𝑦)’. This is entirely general: the negation of a logical truth
(falsehood) is a logical falsehood (truth).
Look now at ‘∀𝑥¬𝐺𝑥 ’ and ‘¬∃𝑥𝐺𝑥’. Any interpretation which makes the first true must
be such that any extended interpretation with a new name ‘𝑏’ makes ‘¬𝐺𝑏’ true, i.e.,
any extended interpretation will make ‘𝐺𝑏’ false. Hence, there is no extended interpret‐
ation making ‘𝐺𝑏’ true; so ‘∃𝑥𝐺𝑥’ is false in the original interpretation, hence ‘¬∃𝑥𝐺𝑥’
is true. Each step in this line of argument can also be run in reverse; so ‘∀𝑥¬𝐺𝑥 ’ and
‘¬∃𝑥𝐺𝑥’ are true in exactly the same interpretations. They are logically equivalent.
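Over a fixed finite domain we can spot‐check this equivalence exhaustively, trying every possible extension for ‘𝐺 ’. (This is a finite check over one domain, of course, not a proof of the general claim.)

```python
from itertools import chain, combinations

# Try every extension of 'G' over the domain {1, 2, 3}: in each resulting
# interpretation, 'forall x not-Gx' and 'not exists x Gx' agree in truth value.
domain = [1, 2, 3]
extensions = chain.from_iterable(
    combinations(domain, k) for k in range(len(domain) + 1))

for ext in extensions:
    G = set(ext)
    forall_not = all(x not in G for x in domain)  # '∀x¬Gx'
    not_exists = not any(x in G for x in domain)  # '¬∃xGx'
    assert forall_not == not_exists
print("the two sentences agree on all 8 extensions of 'G'")
```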
Consider finally ‘∀𝑥𝐺𝑥 ’ and ‘𝐺𝑎 → ∃𝑦(𝑃𝑦 ∧ ¬𝑃𝑦)’. These are inconsistent. Any inter‐
pretation making ‘𝐺𝑎 → ∃𝑦(𝑃𝑦 ∧ ¬𝑃𝑦)’ true must make ‘𝐺𝑎’ false – if it didn’t, it would
have to make its consequent true, but that’s a logical falsehood. But could ‘∀𝑥𝐺𝑥 ’ be
true in such an interpretation? No – that would require every extended interpretation
to make ‘𝐺𝑏’ true, even on the extended interpretation where the new name ‘𝑏’ is as‐
signed the same referent as the existing name ‘𝑎’, which is not in the extension of ‘𝐺 ’
in this interpretation. So no interpretation can make both of these sentences true.
Given this, any interpretation which makes ‘𝐺𝑎 → ∃𝑦(𝑃𝑦∧¬𝑃𝑦)’ true must make ‘∀𝑥𝐺𝑥 ’
false, and must therefore make ‘¬∀𝑥𝐺𝑥 ’ true. So there is no interpretation on which
‘𝐺𝑎 → ∃𝑦(𝑃𝑦 ∧ ¬𝑃𝑦)’ is true and in which ‘¬∀𝑥𝐺𝑥 ’ is false. This is a familiar notion
from Sentential: entailment. We will use the symbol ‘⊨’ for entailment in Quantifier,
much as we did for Sentential. This introduces an ambiguity, but a harmless one, since
every valid argument in Sentential remains a valid argument in Quantifier.

𝒜1 , 𝒜2 , …, 𝒜𝑛 together ENTAIL 𝒞 (written 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊨ 𝒞 ) iff


› every interpretation on which all of 𝒜1 , 𝒜2 , …, 𝒜𝑛 are true is also
one in which 𝒞 is true; or, equivalently,
› there is no interpretation in which all of 𝒜1 , 𝒜2 , …, 𝒜𝑛 are true
and in which 𝒞 is false.
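Entailment can likewise be spot‐checked by brute force over a small domain. The sketch below tests the entailment ‘∀𝑥𝑄𝑥, ∀𝑥(𝑄𝑥 → 𝑅𝑥) ⊨ ∃𝑥𝑅𝑥 ’ over the two‐object domain {1, 2}; again, a finite check of one domain size, not a general proof.

```python
from itertools import product

domain = [1, 2]

def subsets(d):
    """All subsets of d, as candidate extensions for a one-place predicate."""
    return [{x for x, keep in zip(d, bits) if keep}
            for bits in product([False, True], repeat=len(d))]

# Check: every way of assigning extensions to 'Q' and 'R' that makes both
# premises true also makes the conclusion true.
for Q in subsets(domain):
    for R in subsets(domain):
        premise1 = all(x in Q for x in domain)                    # ∀xQx
        premise2 = all((x not in Q) or (x in R) for x in domain)  # ∀x(Qx → Rx)
        conclusion = any(x in R for x in domain)                  # ∃xRx
        if premise1 and premise2:
            assert conclusion
print("no counter-interpretation over this domain")
```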

These definitions establish a close connection between consistency and entailment,


as before: 𝒜1 , …, 𝒜𝑛 ⊨ 𝒞 iff 𝒜1 , …, 𝒜𝑛 , and ¬𝒞 are jointly inconsistent. Since joint

inconsistency is a property of some sentences collectively, it doesn’t actually matter


which of them is regarded as ‘the’ conclusion. The truth of all but one of them will force
the remaining one to be false, whichever one is chosen. So the joint inconsistency of
‘𝐺𝑎 → ∃𝑦(𝑃𝑦 ∧ ¬𝑃𝑦)’ and ‘∀𝑥𝐺𝑥 ’ establishes also that ∀𝑥𝐺𝑥 ⊨ ¬(𝐺𝑎 → ∃𝑦(𝑃𝑦 ∧ ¬𝑃𝑦)).
Also as before, there is a close connection between validity and entailment:

𝒜1 , 𝒜2 , …𝒜𝑛 ∴ 𝒞 is VALID in Quantifier iff there is no interpretation


in which all of the premises are true and the conclusion is false; i.e.,
𝒜1 , 𝒜2 , …𝒜𝑛 ⊨ 𝒞 . It is INVALID in Quantifier otherwise.

Every possible situation determines some interpretation (assuming it is not possible


for there to be nothing at all), since every possible situation will have a domain and its
properties and relations will determine extensions on that domain. So whenever there
is an entailment, there is no interpretation (and hence no possible situation) making
the premise true and the conclusion false. So the argument must be conclusive.
Quantifier entailments are conclusive in virtue of form, because the consideration of
many alternative interpretations of the nonlogical expressions involved means that no
argument can be conclusive because of some special ‘connection in meaning’ between
such expressions in the language. Entailment in Quantifier reflects only the logical
structure of the sentences involved, where this now includes the new logical resources
of Quantifier over Sentential.
Still, just as with Sentential, there are arguments that are valid in English that aren’t
symbolised as a valid argument in Quantifier. Consider

It is raining
So: It will be that it was raining.

This is conclusive: there is no possible situation in which it is raining now, but in the
future it turns out that it never was raining. But the best we can apparently do in
quantifier is to symbolise ‘it is raining’ by ‘𝑃’, and ‘it will be that it was raining’ by ‘𝑄’,
and 𝑃 ∴ 𝑄 is not valid in Quantifier. If this argument is valid, it will depend on the
logic of expressions that are treated as non‐logical expressions in Quantifier– here, the
tenses ‘is’, ‘was’ and ‘will be’. We will look at this example again briefly in §39.

23.1 Some Subtleties about Truth and Interpretation


Our interpretations allow us a precise characterisation of formal truth for Quantifier.
In §2, we introduced the idea of the structure of a sentence, and the pragmatic ap‐
proach that logicians take to that notion. In Quantifier, the logical constants are those
of Sentential plus the quantifiers and the identity predicate. But every other item of
the language is up for reinterpretation, and our interpretations allow us to reinterpret
these expressions in an arbitrary fashion.
In Sentential, we were able to characterise a logical truth as a sentence which is actually
true on every reinterpretation of nonlogical expressions (in Sentential, the nonlogical
expressions just are the atomic sentences). And we characterised a formally valid ar‐
gument as one in which each possible reinterpretation (i.e., valuation) of the premises
which makes them actually true, is also one which makes the conclusion actually true.
But this account of formal validity needs refining when it comes to Quantifier. Consider
the Quantifier sentence which says that there are at least three things:

∃𝑥∃𝑦∃𝑧(𝑥 ≠ 𝑦 ∧ 𝑥 ≠ 𝑧 ∧ 𝑦 ≠ 𝑧).

This sentence contains no nonlogical expressions. Hence the sentence has a constant
meaning, and is true on each reinterpretation just in case it is actually true. It is actually
true – there are actually at least three things. Hence our Sentential‐inspired account
of logical truth would suggest that this sentence is a logical truth. But it seems very
strange to think that it is a logical truth that there are at least three things. Isn’t it
possible that there had been fewer? And shouldn’t logic allow for that possibility?
The keen‐eyed among you will have noticed that this sentence is not, in fact, a logical
truth of Quantifier. The reason is that in defining truth for sentences of Quantifier, we
considered not just reinterpretations of the nonlogical expressions, but also we allowed
the domain of our interpretation to freely depart from actuality (§21.6). So we are, in
effect, allowing our interpretations to vary the meanings of the nonlogical vocabulary
and to vary the possible situations at which our sentences are to be evaluated. So
we can consider a possible situation in which there are just two things, and if that
situation provides the domain of an interpretation, there is no way of extending that
interpretation to three names ‘𝑎’, ‘𝑏’, and ‘𝑐 ’ such that ‘𝑎 ≠ 𝑏 ∧ 𝑎 ≠ 𝑐 ∧ 𝑏 ≠ 𝑐 ’ is true;
hence ‘∃𝑥∃𝑦∃𝑧(𝑥 ≠ 𝑦 ∧ 𝑥 ≠ 𝑧 ∧ 𝑦 ≠ 𝑧)’ isn’t true on the original interpretation.
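Because the sentence contains no nonlogical vocabulary, its truth value in an interpretation turns only on how many things the domain contains, something we can check directly:

```python
# 'exists x, y, z (x≠y and x≠z and y≠z)': true in an interpretation just in
# case the domain contains at least three things.
def at_least_three(domain):
    return any(any(any(x != y and x != z and y != z
                       for z in domain)
                   for y in domain)
               for x in domain)

print(at_least_three({1, 2}))     # False: so the sentence is not a logical truth
print(at_least_three({1, 2, 3}))  # True
```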
One sentence of this class is nevertheless a logical truth: the claim that something
exists, ‘∃𝑥𝑥 = 𝑥 ’. Since it is a constraint on Quantifier domains that they cannot be
empty (§15.6), there is for any interpretation a way of adding a previously uninterpreted
name ‘𝑐 ’ which denotes an object in the domain, and of course ‘𝑐 = 𝑐 ’ is true in that
extended interpretation, since each name must have a referent in any interpretation,
and the identity predicate is always interpreted as pairing each object in the
domain with itself.
This appeal to possible situations is very suggestive. Every possible situation seems
to have a domain of things which exist in that possibility, and the properties and re‐
lations that are instantiated in that possibility determine extensions drawn from that
domain. So you might wonder: should we think of interpretations as just being ‘pos‐
sible worlds’? Should we, that is, think that a sentence 𝒜 is consistent (i.e., there is an
interpretation on which it is true) iff 𝒜 is true in some possible situation?
We should not. In Quantifier it is not even clear how to do this, because a sentence
like ‘∃𝑥(𝐹𝑥 ∧ 𝐺𝑥)’ doesn’t even seem to mean anything without a prior interpretation.
If we assign a meaning to ‘𝐹 ’ and ‘𝐺 ’, we can then see if there are possible situations
making the sentence true. The problem then will be that there are logically consistent
sentences that are true in no possible world. For example, suppose we interpret ‘𝐹 ’ to
be ‘___1 is a cat’ and ‘𝐺 ’ to be ‘___1 is a mineral’. Obviously, given these assignments

of meaning, the sentence can be rendered in English as ‘there is a cat which is a mineral’,

and that is impossible: I would assume that it is of the essence of a cat that it be a living
animal, not an inanimate mineral. But the sentence is perfectly consistent despite
being impossible, precisely consistency involves reinterpreting the predicates so that
they can possibly hold together. There is a significant difference between possibility
and consistency:

› Possibility is about the meaning of an already interpreted sentence: holding that


meaning fixed, is there a possible situation which it describes?

› Consistency is about logical structure, according to some language: holding that


structure fixed, is there a way to assign a meaning to the non‐structural expres‐
sions to make the sentence come out true?

This distinction is not always noted. Philosophers are prone to talking of ‘logical pos‐
sibility’ where they generally mean to talk about consistency. Possibility, as I under‐
stand it, is a property of interpreted sentences, or propositions. Logical consistency
is a property of interpretable sentences, relative to some language in which they are
assigned a meaning. In this sense, the logical ‘languages’ Sentential and Quantifier are
more like proto‐languages, because only a select few of the expressions of the language
are antecedently meaningful.1

Key Ideas in §23


› Definitions of entailment, consistency, logical falsehood and lo‐
gical equivalence can be given for Quantifier that are very close
parallels to those for Sentential, though given in terms of interpret‐
ations, not valuations.
› The notion of a logical truth in Quantifier is slightly different than
the notion of a logical truth in Sentential, since we allow inter‐
pretations that vary not just in what they assign as extensions to
the nonlogical vocabulary, but also variation in the domain over
which quantifiers range. Nothing like the latter feature occurs
in Sentential valuations, but it is crucial for ensuring that contin‐
gent counting sentences, without names or nonlogical predic‐
ates, aren’t misclassified as logical truths. Consistency must be
distinguished from possibility.

Practice exercises
A. The following entailments are correct in Quantifier. In each case, explain why.

1 I owe my understanding of this point to the distinction between ‘representational’ and ‘interpretational’
semantics drawn by John Etchemendy (1999) The Concept of Logical Consequence, CSLI Publications,
chs. 2–5.

1. ∀𝑥𝑄𝑥, ∀𝑥(𝑄𝑥 → 𝑅𝑥) ⊨ ∃𝑥𝑅𝑥 ;


2. ∀𝑥∀𝑦(𝑅𝑥𝑦 → ∀𝑧(𝑅𝑦𝑧 → 𝑥 ≠ 𝑧)) ⊨ (𝑅𝑎𝑎 → ¬𝑅𝑎𝑎).
3. ∃𝑥(𝑃𝑥 ∧ 𝑄𝑥) ⊨ 𝑃𝑎;
4. ∃𝑥𝑅𝑥𝑥 ⊨ ∃𝑥∃𝑦𝑅𝑥𝑦;
5. ∃𝑥∀𝑦𝑅𝑥𝑦 ⊨ ∀𝑦∃𝑥𝑅𝑥𝑦;
6. ∀𝑥∀𝑦∀𝑧((𝑥 ≠ 𝑦 ∧ 𝑥 ≠ 𝑧) → 𝑦 = 𝑧) ⊨ ¬∃𝑥∃𝑦∃𝑧(𝑥 ≠ 𝑦 ∧ (𝑥 ≠ 𝑧 ∧ 𝑦 ≠ 𝑧)).

B. Show that, for any formula 𝒜 with at most 𝑥 free, the following two sentences are
logically equivalent: ∃𝑥𝒜 and ¬∀𝑥¬𝒜 .
C. Show that

1. 𝒜 ⊨ ℬ iff ‘𝒜→ℬ’ is a logical truth;


2. 𝒜 is logically equivalent to ℬ iff ‘𝒜↔ℬ’ is a logical truth.
24
Demonstrating Consistency and Invalidity

24.1 Logical Truths and Logical Falsehoods


Suppose we want to show that ‘∃𝑥𝐴𝑥𝑥 → 𝐵𝑑 ’ is not a logical truth. This requires show‐
ing that the sentence is not true in every interpretation; i.e., that it is false in some
interpretation. If we can provide just one interpretation in which the sentence is false,
then we will have shown that the sentence is not a logical truth.
In order for ‘∃𝑥𝐴𝑥𝑥 → 𝐵𝑑 ’ to be false, the antecedent (‘∃𝑥𝐴𝑥𝑥 ’) must be true, and
the consequent (‘𝐵𝑑 ’) must be false. To construct such an interpretation, we start by
specifying a domain. Keeping the domain small makes it easier to specify what the
predicates will be true of, so we shall start with a domain that has just one member.
For concreteness, let’s say it is the city of Paris.

domain: Paris

The name ‘𝑑 ’ must name something in the domain, so we have no option but:

𝑑 : Paris

Recall that we want ‘∃𝑥𝐴𝑥𝑥 ’ to be true, so we want all members of the domain to be
paired with themselves in the extension of ‘𝐴’. We can offer:

𝐴: ___1 is in the same place as ___2

Now ‘𝐴𝑑𝑑 ’ is true, so it is surely true that ‘∃𝑥𝐴𝑥𝑥 ’. Next, we want ‘𝐵𝑑 ’ to be false, so the
referent of ‘𝑑 ’ must not be in the extension of ‘𝐵’. We might simply offer:

𝐵: ___1 is in Germany


Now we have an interpretation where ‘∃𝑥𝐴𝑥𝑥 ’ is true, but where ‘𝐵𝑑 ’ is false. So there
is an interpretation where ‘∃𝑥𝐴𝑥𝑥 → 𝐵𝑑 ’ is false. So ‘∃𝑥𝐴𝑥𝑥 → 𝐵𝑑 ’ is not a logical truth.
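The counter‐interpretation can be modelled directly, with the extensions written out as Python sets following the interpretation just given:

```python
# The one-object counter-interpretation: domain {Paris}, 'A' as 'is in the
# same place as', 'B' as 'is in Germany' (true of nothing in this domain).
domain = {"Paris"}
A = {("Paris", "Paris")}
B = set()
d = "Paris"

antecedent = any((x, x) in A for x in domain)   # '∃xAxx': True
consequent = d in B                             # 'Bd': False
conditional = (not antecedent) or consequent    # '∃xAxx → Bd'
print(conditional)  # False: so the sentence is not a logical truth
```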
We can just as easily show that ‘∃𝑥𝐴𝑥𝑥 → 𝐵𝑑 ’ is not a logical falsehood. We need only
specify an interpretation in which ‘∃𝑥𝐴𝑥𝑥 → 𝐵𝑑 ’ is true; i.e., an interpretation in which
either ‘∃𝑥𝐴𝑥𝑥 ’ is false or ‘𝐵𝑑 ’ is true. Here is one:

domain: Paris
𝑑 : Paris
𝐴: ___1 is in the same place as ___2
𝐵: ___1 is in France

This shows that there is an interpretation where ‘∃𝑥𝐴𝑥𝑥 → 𝐵𝑑 ’ is true. So ‘∃𝑥𝐴𝑥𝑥 → 𝐵𝑑 ’


is not a logical falsehood.

24.2 Logical Equivalence


Suppose we want to show that ‘∀𝑥𝑆𝑥 ’ and ‘∃𝑥𝑆𝑥 ’ are not logically equivalent. We need
to construct an interpretation in which the two sentences have different truth values;
we want one of them to be true and the other to be false. We start by specifying a
domain. Again, we make the domain small so that we can specify extensions easily.
In this case, we shall need at least two objects. (If we chose a domain with only one
member, the two sentences would end up with the same truth value. In order to see
why, try constructing some partial interpretations with one‐member domains.) For
concreteness, let’s take:

domain: Ornette Coleman, Sarah Vaughan

We can make ‘∃𝑥𝑆𝑥 ’ true by including something in the extension of ‘𝑆’, and we can
make ‘∀𝑥𝑆𝑥 ’ false by leaving something out of the extension of ‘𝑆’. For concreteness we
shall offer:

𝑆: ___1 plays saxophone

Now ‘∃𝑥𝑆𝑥 ’ is true, because ‘𝑆’ is true of Ornette Coleman. Slightly more precisely,
extend our interpretation by allowing ‘𝑐 ’ to name Ornette Coleman. ‘𝑆𝑐 ’ is true in this
extended interpretation, so ‘∃𝑥𝑆𝑥 ’ was true in the original interpretation. Similarly,
‘∀𝑥𝑆𝑥 ’ is false, because ‘𝑆’ is false of Sarah Vaughan. Slightly more precisely, extend our
interpretation by allowing ‘𝑑 ’ to name Sarah Vaughan, and ‘𝑆𝑑 ’ is false in this extended
interpretation, so ‘∀𝑥𝑆𝑥 ’ was false in the original interpretation. We have provided a
counter‐interpretation to the claim that ‘∀𝑥𝑆𝑥 ’ and ‘∃𝑥𝑆𝑥 ’ are logically equivalent.
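The bookkeeping here can also be done mechanically. In this illustrative Python sketch (sets again stand in for extensions), the two sentences receive different truth values in the two-musician interpretation:

```python
# The two-musician interpretation, with 'S' true of Ornette Coleman only.
domain = ["Ornette Coleman", "Sarah Vaughan"]
S = {"Ornette Coleman"}

exists_Sx = any(x in S for x in domain)   # '∃xSx'
forall_Sx = all(x in S for x in domain)   # '∀xSx'

print(exists_Sx, forall_Sx)  # True False: the two sentences come apart
```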
224 INTERPRETATIONS

To show that 𝒜 is not a logical truth, it suffices to find an interpreta‐


tion where 𝒜 is false.
To show that 𝒜 is not a logical falsehood, it suffices to find an inter‐
pretation where 𝒜 is true.
To show that 𝒜 and ℬ are not logically equivalent, it suffices to find
an interpretation where one is true and the other is false.

24.3 Validity, Entailment and Consistency


To test for validity, entailment, or consistency, we typically need to produce interpret‐
ations that determine the truth value of several sentences simultaneously.
Consider the following argument in Quantifier:

∃𝑥(𝐺𝑥 → 𝐺𝑎) ∴ ∃𝑥𝐺𝑥 → 𝐺𝑎

To show that this is invalid, we must make the premise true and the conclusion false.
The conclusion is a conditional, so to make it false, the antecedent must be true and
the consequent must be false. Clearly, our domain must contain at least two objects. Let’s try:

domain: Karl Marx, Ludwig von Mises


𝐺: ___₁ hated communism
𝑎: Karl Marx

Given that Marx wrote The Communist Manifesto, ‘𝐺𝑎’ is plainly false in this interpret‐
ation. But von Mises famously hated communism. So ‘∃𝑥𝐺𝑥’ is true in this interpreta‐
tion. Hence ‘∃𝑥𝐺𝑥 → 𝐺𝑎’ is false, as required.
But does this interpretation make the premise true? Yes it does! For ‘∃𝑥(𝐺𝑥 → 𝐺𝑎)’
to be true, ‘𝐺𝑐 → 𝐺𝑎’ must be true in some extended interpretation that is almost
exactly like the interpretation just given, except that it also interprets some previously
uninterpreted name ‘𝑐 ’. Let’s extend our original interpretation by letting ‘𝑐 ’ denote
Karl Marx – the same thing as ‘𝑎’ denotes in the original interpretation. Since ‘𝑎’ and
‘𝑐 ’ denote the same thing in the extended interpretation, obviously ‘𝐺𝑐 → 𝐺𝑎’ will be
true. So ‘∃𝑥(𝐺𝑥 → 𝐺𝑎)’ is true in the original interpretation. So the premise is true,
and the conclusion is false, in this original interpretation. The argument is therefore
invalid.
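The same verdict can be computed directly over this finite interpretation. In the sketch below (an illustration only; the Python encoding is my own convention), the premise comes out true and the conclusion false:

```python
domain = ["Karl Marx", "Ludwig von Mises"]
G = {"Ludwig von Mises"}   # extension of 'G': those who hated communism
a = "Karl Marx"

Ga = a in G                                           # false: Marx is not in G
premise = any((x not in G) or Ga for x in domain)     # '∃x(Gx → Ga)'
conclusion = (not any(x in G for x in domain)) or Ga  # '∃xGx → Ga'

print(premise, conclusion)  # True False: premise true, conclusion false
```

The premise is true because the instance with 𝓍 assigned to Marx has a false antecedent, just as in the prose argument above.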
In passing, note that we have also shown that ‘∃𝑥(𝐺𝑥 → 𝐺𝑎)’ does not entail ‘∃𝑥𝐺𝑥 →
𝐺𝑎’. And equally, we have shown that the sentences ‘∃𝑥(𝐺𝑥 → 𝐺𝑎)’ and ‘¬(∃𝑥𝐺𝑥 → 𝐺𝑎)’
are jointly consistent.
Let’s consider a second example. Consider:

∀𝑥∃𝑦𝐿𝑥𝑦 ∴ ∃𝑦∀𝑥𝐿𝑥𝑦

Again, I want to show that this is invalid. To do this, we must make the premise true
and the conclusion false. Here is a suggestion:

domain: People with a living biological sibling


𝐿: ___₁ shares a parent with ___₂

The premise is clearly true on this interpretation. Anyone in the domain has a living
sibling. That sibling will also, then, be in the domain, because one cannot be someone’s
sibling without also having them as a sibling. So for everyone in the domain, there will
be at least one other person in the domain who is their sibling, and thus has a parent
in common with them. Hence ‘∀𝑥∃𝑦𝐿𝑥𝑦’ is true. But the conclusion is clearly false, for
that would require that there is some single person who shares a parent with everyone
in the domain, and there is no such person. So the argument is invalid. We observe
immediately that the sentences ‘∀𝑥∃𝑦𝐿𝑥𝑦’ and ‘¬∃𝑦∀𝑥𝐿𝑥𝑦’ are jointly consistent and
that ‘∀𝑥∃𝑦𝐿𝑥𝑦’ does not entail ‘∃𝑦∀𝑥𝐿𝑥𝑦’.
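Since we cannot enumerate everyone with a living sibling, a finite stand-in makes the point checkable. The sketch below uses a hypothetical four-member domain of my own devising, in which 1 and 2 form one sibling pair and 3 and 4 another; the same pattern – premise true, conclusion false – emerges:

```python
# A hypothetical four-member domain standing in for the siblings: 1 and 2
# are one sibling pair, 3 and 4 another (a toy extension for 'L').
domain = [1, 2, 3, 4]
L = {(1, 2), (2, 1), (3, 4), (4, 3)}

premise = all(any((x, y) in L for y in domain) for x in domain)     # '∀x∃yLxy'
conclusion = any(all((x, y) in L for x in domain) for y in domain)  # '∃y∀xLxy'

print(premise, conclusion)  # True False: the argument is shown invalid
```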
For my third example, I will mix things up a bit. In §21, I described how we can present
some interpretations using diagrams. For example:

[Arrow diagram: the numbers 1, 2 and 3, with arrows indicating the extension of ‘𝑅’.]

Using the conventions employed in §21, the domain of this interpretation is the first
three positive whole numbers, and ‘𝑅’ is true of 𝓍 and 𝓎 just in case there is an arrow
from 𝓍 to 𝓎 in our diagram. Here are some sentences that the interpretation makes
true:

› ‘∀𝑥∃𝑦𝑅𝑦𝑥 ’

› ‘∃𝑥∀𝑦𝑅𝑥𝑦’ witness 1

› ‘∃𝑥∀𝑦(𝑅𝑦𝑥 ↔ 𝑥 = 𝑦)’ witness 1

› ‘∃𝑥∃𝑦∃𝑧(¬𝑦 = 𝑧 ∧ 𝑅𝑥𝑦 ∧ 𝑅𝑧𝑥)’ witness 2

› ‘∃𝑥∀𝑦¬𝑅𝑥𝑦’ witness 3

› ‘∃𝑥(∃𝑦𝑅𝑦𝑥 ∧ ¬∃𝑦𝑅𝑥𝑦)’ witness 3

This immediately shows that all of the preceding six sentences are jointly consistent.
We can use this observation to generate invalid arguments, e.g.:

∀𝑥∃𝑦𝑅𝑦𝑥, ∃𝑥∀𝑦𝑅𝑥𝑦 ∴ ∀𝑥∃𝑦𝑅𝑥𝑦


∃𝑥∀𝑦𝑅𝑥𝑦, ∃𝑥∀𝑦¬𝑅𝑥𝑦 ∴ ¬∃𝑥∃𝑦∃𝑧(¬𝑦 = 𝑧 ∧ 𝑅𝑥𝑦 ∧ 𝑅𝑧𝑥)

and many more besides.
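All six sentences can be verified mechanically over a finite domain. The sketch below uses one particular extension for ‘𝑅’ that makes all six true; the set of pairs is reconstructed from the witnesses listed, so treat it as an illustration consistent with the sentences rather than a reproduction of the diagram itself:

```python
domain = [1, 2, 3]
# One extension for 'R', reconstructed from the witnesses listed above;
# read (x, y) in R as 'there is an arrow from x to y'.
R = {(1, 1), (1, 2), (1, 3), (2, 3)}

def r(x, y):
    return (x, y) in R

checks = [
    all(any(r(y, x) for y in domain) for x in domain),             # ∀x∃yRyx
    any(all(r(x, y) for y in domain) for x in domain),             # ∃x∀yRxy
    any(all(r(y, x) == (x == y) for y in domain) for x in domain), # ∃x∀y(Ryx ↔ x=y)
    any(r(x, y) and r(z, x) and y != z                             # ∃x∃y∃z(¬y=z ∧ Rxy ∧ Rzx)
        for x in domain for y in domain for z in domain),
    any(not any(r(x, y) for y in domain) for x in domain),         # ∃x∀y¬Rxy
    any(any(r(y, x) for y in domain) and                           # ∃x(∃yRyx ∧ ¬∃yRxy)
        not any(r(x, y) for y in domain) for x in domain),
]
print(all(checks))  # True: all six sentences are jointly true here
```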



To show that 𝒜1 , 𝒜2 , …, 𝒜𝑛 ∴ 𝒞 is invalid, it suffices to find an inter‐


pretation where all of 𝒜1 , 𝒜2 , …, 𝒜𝑛 are true and where 𝒞 is false.
That same interpretation will show that 𝒜1 , 𝒜2 , …, 𝒜𝑛 do not entail 𝒞 .
That same interpretation will show that 𝒜1 , 𝒜2 , …, 𝒜𝑛 , ¬𝒞 are jointly
consistent.

When you provide an interpretation to refute a claim – that some sentence is a logical
truth, say – this is sometimes called providing a COUNTERMODEL.

Key Ideas in §24


› Testing for consistency and invalidity involves displaying just a
single appropriate interpretation.
› It can be a subtle matter to figure out what such an interpretation
might be like.
› An interpretation which shows an argument invalid is a counter‐
example to that argument.

Practice exercises
A. Show that each of the following is neither a logical truth nor a logical falsehood:

1. 𝐷𝑎 ∧ 𝐷𝑏
2. ∃𝑥𝑇𝑥ℎ
3. 𝑃𝑚 ∧ ¬∀𝑥𝑃𝑥
4. ∀𝑧𝐽𝑧 ↔ ∃𝑦𝐽𝑦
5. ∀𝑥(𝑊𝑥𝑚𝑛 ∨ ∃𝑦𝐿𝑥𝑦)
6. ∃𝑥(𝐺𝑥 → ∀𝑦𝑀𝑦)
7. ∃𝑥(𝑥 = ℎ ∧ 𝑥 = 𝑖)

B. For each of the following, say whether the sentence is a logical truth, a logical false‐
hood, or neither:

1. ∃𝑥∃𝑦𝑥 = 𝑦;
2. (∀𝑥(𝑃𝑥 ∧ 𝑄𝑥) ↔ (∀𝑥𝑃𝑥 ∧ ∀𝑥𝑄𝑥));
3. ∃𝑥(𝑃𝑥 ∧ 𝑄𝑥) → ¬(∃𝑥𝑃𝑥 ∧ ∃𝑥𝑄𝑥);
4. ∀𝑥∀𝑦(𝑥 ≠ 𝑦 → (𝐹𝑥 ↔ ¬𝐹𝑦)).

C. Show that the following pairs of sentences are not logically equivalent.

1. 𝐽𝑎, 𝐾𝑎
2. ∃𝑥𝐽𝑥 , 𝐽𝑚
3. ∀𝑥𝑅𝑥𝑥 , ∃𝑥𝑅𝑥𝑥
4. ∃𝑥𝑃𝑥 → 𝑄𝑐 , ∃𝑥(𝑃𝑥 → 𝑄𝑐)
5. ∀𝑥(𝑃𝑥 → ¬𝑄𝑥), ∃𝑥(𝑃𝑥 ∧ ¬𝑄𝑥)
6. ∃𝑥(𝑃𝑥 ∧ 𝑄𝑥), ∃𝑥(𝑃𝑥 → 𝑄𝑥)
7. ∀𝑥(𝑃𝑥 → 𝑄𝑥), ∀𝑥(𝑃𝑥 ∧ 𝑄𝑥)
8. ∀𝑥∃𝑦𝑅𝑥𝑦, ∃𝑥∀𝑦𝑅𝑥𝑦
9. ∀𝑥∃𝑦𝑅𝑥𝑦, ∀𝑥∃𝑦𝑅𝑦𝑥

D. Show that the following sentences are jointly consistent:

1. 𝑀𝑎, ¬𝑁𝑎, 𝑃𝑎, ¬𝑄𝑎


2. 𝐿𝑒𝑒, 𝐿𝑒𝑔, ¬𝐿𝑔𝑒, ¬𝐿𝑔𝑔
3. ¬(𝑀𝑎 ∧ ∃𝑥𝐴𝑥), 𝑀𝑎 ∨ 𝐹𝑎, ∀𝑥(𝐹𝑥 → 𝐴𝑥)
4. 𝑀𝑎 ∨ 𝑀𝑏, 𝑀𝑎 → ∀𝑥¬𝑀𝑥
5. ∀𝑦𝐺𝑦, ∀𝑥(𝐺𝑥 → 𝐻𝑥), ∃𝑦¬𝐼𝑦
6. ∃𝑥(𝐵𝑥 ∨ 𝐴𝑥), ∀𝑥¬𝐶𝑥, ∀𝑥((𝐴𝑥 ∧ 𝐵𝑥) → 𝐶𝑥)
7. ∃𝑥𝑋𝑥, ∃𝑥𝑌𝑥, ∀𝑥(𝑋𝑥 ↔ ¬𝑌𝑥)
8. ∀𝑥(𝑃𝑥 ∨ 𝑄𝑥), ∃𝑥¬(𝑄𝑥 ∧ 𝑃𝑥)
9. ∃𝑧(𝑁𝑧 ∧ 𝑂𝑧𝑧), ∀𝑥∀𝑦(𝑂𝑥𝑦 → 𝑂𝑦𝑥)
10. ¬∃𝑥∀𝑦𝑅𝑥𝑦, ∀𝑥∃𝑦𝑅𝑥𝑦
11. ¬𝑅𝑎𝑎, ∀𝑥(𝑥 = 𝑎 ∨ 𝑅𝑥𝑎)
12. ∀𝑥∀𝑦∀𝑧(𝑥 = 𝑦 ∨ 𝑦 = 𝑧 ∨ 𝑥 = 𝑧), ∃𝑥∃𝑦 ¬𝑥 = 𝑦
13. ∃𝑥∃𝑦(𝑍𝑥 ∧ 𝑍𝑦 ∧ 𝑥 = 𝑦), ¬𝑍𝑑 , 𝑑 = 𝑒

E. Show each of the following non‐entailments by providing an appropriate interpret‐


ation on which the premises are true and the conclusion false:

1. 𝑃𝑎 ⊭ ∃𝑥(𝑃𝑥 ∧ 𝑄𝑥);
2. ∀𝑦(𝑃𝑦 → ∃𝑥𝑅𝑦𝑥) ⊭ ∀𝑥(𝑃𝑥 → ∃𝑦𝑅𝑦𝑦);
3. ∀𝑥𝑅𝑥𝑥 ⊭ ∀𝑥𝑅𝑎𝑥 .

F. Show that the following arguments are invalid:

1. ∀𝑥(𝐴𝑥 → 𝐵𝑥) ∴ ∃𝑥𝐵𝑥


2. ∀𝑥(𝑅𝑥 → 𝐷𝑥), ∀𝑥(𝑅𝑥 → 𝐹𝑥) ∴ ∃𝑥(𝐷𝑥 ∧ 𝐹𝑥)
3. ∃𝑥(𝑃𝑥 → 𝑄𝑥) ∴ ∃𝑥𝑃𝑥
4. 𝑁𝑎 ∧ 𝑁𝑏 ∧ 𝑁𝑐 ∴ ∀𝑥𝑁𝑥
5. 𝑅𝑑𝑒, ∃𝑥𝑅𝑥𝑑 ∴ 𝑅𝑒𝑑
6. ∃𝑥(𝐸𝑥 ∧ 𝐹𝑥), ∃𝑥𝐹𝑥 → ∃𝑥𝐺𝑥 ∴ ∃𝑥(𝐸𝑥 ∧ 𝐺𝑥)
7. ∀𝑥𝑂𝑥𝑐, ∀𝑥𝑂𝑐𝑥 ∴ ∀𝑥𝑂𝑥𝑥
8. ∃𝑥(𝐽𝑥 ∧ 𝐾𝑥), ∃𝑥¬𝐾𝑥, ∃𝑥¬𝐽𝑥 ∴ ∃𝑥(¬𝐽𝑥 ∧ ¬𝐾𝑥)
9. 𝐿𝑎𝑏 → ∀𝑥𝐿𝑥𝑏, ∃𝑥𝐿𝑥𝑏 ∴ 𝐿𝑏𝑏
10. ∀𝑥(𝐷𝑥 → ∃𝑦𝑇𝑦𝑥) ∴ ∃𝑦∃𝑧 ¬𝑦 = 𝑧
25
Reasoning about All Interpretations:
Demonstrating Inconsistency and
Validity

25.1 Logical Truths and Logical Falsehoods


We can show that a sentence is not a logical truth just by providing one carefully spe‐
cified interpretation: an interpretation in which the sentence is false. To show that
something is a logical truth, on the other hand, it would not be enough to construct
ten, one hundred, or even a thousand interpretations in which the sentence is true. A
sentence is only a logical truth if it is true in every interpretation, and there are infin‐
itely many interpretations. We need to reason about all of them, and we cannot do
this by dealing with them one by one!
Sometimes, we can reason about all interpretations fairly easily. For example, we can
offer a relatively simple argument that ‘𝑅𝑎𝑎 ↔ 𝑅𝑎𝑎’ is a logical truth:

Any relevant interpretation will give ‘𝑅𝑎𝑎’ a truth value. If ‘𝑅𝑎𝑎’ is true
in an interpretation, then ‘𝑅𝑎𝑎 ↔ 𝑅𝑎𝑎’ is true in that interpretation. If
‘𝑅𝑎𝑎’ is false in an interpretation, then ‘𝑅𝑎𝑎 ↔ 𝑅𝑎𝑎’ is true in that inter‐
pretation. These are the only alternatives. So ‘𝑅𝑎𝑎 ↔ 𝑅𝑎𝑎’ is true in every
interpretation. Therefore, it is a logical truth.

This argument is valid, of course, and its conclusion is true. However, it is not an
argument in Quantifier. Rather, it is an argument in English about Quantifier: it is an
argument in the metalanguage.
Note another feature of the argument. Since the sentence in question contained no
quantifiers, we did not need to think about how to interpret ‘𝑎’ and ‘𝑅’; the point was
just that, however we interpreted them, ‘𝑅𝑎𝑎’ would have some truth value or other.
(We could ultimately have given the same argument concerning Sentential sentences.)
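The two-case argument can itself be replayed by brute force, since only the truth value assigned to ‘𝑅𝑎𝑎’ matters. This sketch (an illustration, with ‘↔’ rendered as equality of truth values) checks both cases:

```python
# Whatever truth value an interpretation assigns to 'Raa', the
# biconditional 'Raa ↔ Raa' comes out true: check both cases.
results = [Raa == Raa for Raa in (True, False)]   # '↔' compares truth values
print(results)  # [True, True]: true in both cases, as the argument showed
```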


Here is another bit of reasoning. Consider the sentence ‘∀𝑥(𝑅𝑥𝑥 ↔ 𝑅𝑥𝑥)’. Again, it
should obviously be a logical truth. But to say precisely why is quite a challenge. We
cannot say that ‘𝑅𝑥𝑥 ↔ 𝑅𝑥𝑥 ’ is true in every interpretation, since ‘𝑅𝑥𝑥 ↔ 𝑅𝑥𝑥 ’ is not
even a sentence of Quantifier (remember that ‘𝑥’ is a variable, not a name). So we have
to be a bit cleverer.

Consider some arbitrary interpretation. Consider some arbitrary member


of the model’s domain, which, for convenience, we shall call obbie, and
suppose we extend our original interpretation by adding a previously un‐
interpreted name, ‘𝑐 ’, to name obbie. Then either ‘𝑅𝑐𝑐 ’ will be true or it
will be false. If ‘𝑅𝑐𝑐 ’ is true, then ‘𝑅𝑐𝑐 ↔ 𝑅𝑐𝑐 ’ is true. If ‘𝑅𝑐𝑐 ’ is false, then
‘𝑅𝑐𝑐 ↔ 𝑅𝑐𝑐 ’ will be true. So either way, ‘𝑅𝑐𝑐 ↔ 𝑅𝑐𝑐 ’ is true. Since there
was nothing special about obbie – we might have chosen any object – we
see that no matter how we extend our original interpretation by allowing
‘𝑐 ’ to name some new object, ‘𝑅𝑐𝑐 ↔ 𝑅𝑐𝑐 ’ will be true in the new interpret‐
ation. So ‘∀𝑥(𝑅𝑥𝑥 ↔ 𝑅𝑥𝑥)’ was true in the original interpretation. But we
chose our interpretation arbitrarily. So ‘∀𝑥(𝑅𝑥𝑥 ↔ 𝑅𝑥𝑥)’ is true in every
interpretation. It is therefore a logical truth.

This is quite longwinded, but, as things stand, there is no alternative. In order to show
that a sentence is a logical truth, we must reason about all interpretations.
But sometimes we can draw a conclusion about some way all interpretations must be,
by considering a hypothetical interpretation which isn’t that way, and showing that
hypothesis leads to trouble. Here is an example.

Suppose there were an interpretation which makes ‘𝑎 ≠ 𝑏 → ¬∃𝑥(𝑥 =


𝑎∧𝑥 = 𝑏)’ false. Then while ‘𝑎 ≠ 𝑏’ is true on that interpretation, ‘¬∃𝑥(𝑥 =
𝑎 ∧ 𝑥 = 𝑏)’ must be false. So ‘∃𝑥(𝑥 = 𝑎 ∧ 𝑥 = 𝑏)’ must be true; and hence
it has some true substitution instance with some previously uninterpreted
name ‘𝑐 ’ as the instantiating name: ‘𝑐 = 𝑎 ∧ 𝑐 = 𝑏’ is true. But then it must
also be that ‘𝑎 = 𝑏’ is true, which contradicts our original supposition that
‘𝑎 ≠ 𝑏’ is true on this interpretation. So there can be no such interpretation,
and hence the sentence cannot be false on any interpretation and must be
a logical truth.

This is an example of REDUCTIO reasoning: we assume something, and derive some


absurdity or contradiction; we then conclude that our assumption is to be rejected. It
is often easier to show that all Fs have a property by deriving an absurdity from the
assumption that some F lacks the property, than it is to show it ‘directly’ by consid‐
ering each F in turn. This is a powerful technique, used widely in mathematics and
elsewhere.

25.2 Other Cases


Similar points hold of other cases too. Thus, we must reason about all interpretations
if we want to show:

› that a sentence is a logical falsehood; for this requires that it is false in every
interpretation.

› that two sentences are logically equivalent; for this requires that they have the
same truth value in every interpretation.

› that some sentences are jointly inconsistent; for this requires that there is no
interpretation in which all of those sentences are true together; i.e., that, in every
interpretation, at least one of those sentences is false.

› that an argument is valid; for this requires that the conclusion is true in every
interpretation where the premises are true.

› that some sentences entail another sentence.

The problem is that, with the tools available to you so far, reasoning about all interpret‐
ations is a serious challenge! Let’s take just one more example. Here is an argument
which is obviously valid:
∀𝑥(𝐻𝑥 ∧ 𝐽𝑥) ∴ ∀𝑥𝐻𝑥
After all, if everything is both H and J, then everything is H. But we can only show
that the argument is valid by considering what must be true in every interpretation in
which the premise is true. And to show this, we would have to reason as follows:

Consider an arbitrary interpretation in which the premise ‘∀𝑥(𝐻𝑥 ∧ 𝐽𝑥)’ is


true. It follows that, however we expand the interpretation with a previ‐
ously uninterpreted name, for example ‘𝑐 ’, ‘𝐻𝑐 ∧ 𝐽𝑐 ’ will be true in this new
interpretation. ‘𝐻𝑐 ’ will, then, also be true in this new interpretation. But
since this held for any way of expanding the interpretation, it must be that
‘∀𝑥𝐻𝑥 ’ is true in the old interpretation. And we assumed nothing about
the interpretation except that it was one in which ‘∀𝑥(𝐻𝑥 ∧ 𝐽𝑥)’ is true. So
any interpretation in which ‘∀𝑥(𝐻𝑥 ∧ 𝐽𝑥)’ is true is one in which ‘∀𝑥𝐻𝑥 ’ is
true. The argument is valid!

Even for a simple argument like this one, the reasoning is somewhat complicated. For
longer arguments, the reasoning can be extremely torturous.
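One thing a machine can do is search for counter-interpretations over small domains. The sketch below (illustrative only) checks every choice of extensions for ‘𝐻’ and ‘𝐽’ over domains of up to three objects. Finding no counterexample is suggestive, but – since there are infinitely many interpretations – no finite search like this can by itself establish validity, which is exactly the difficulty at issue:

```python
from itertools import product

# Search for a counter-interpretation to ∀x(Hx ∧ Jx) ∴ ∀xHx over every
# domain of size 1 to 3 and every choice of extensions for 'H' and 'J'.
counterexamples = []
for n in range(1, 4):
    domain = list(range(n))
    for H_bits, J_bits in product(product([0, 1], repeat=n), repeat=2):
        H = {x for x in domain if H_bits[x]}
        J = {x for x in domain if J_bits[x]}
        premise = all(x in H and x in J for x in domain)   # ∀x(Hx ∧ Jx)
        conclusion = all(x in H for x in domain)           # ∀xHx
        if premise and not conclusion:
            counterexamples.append((n, H, J))
print(counterexamples)  # []: no counterexample found in these small domains
```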
Reductio reasoning is particularly useful in demonstrating the validity of valid argu‐
ments. In that case we will typically make the hypothetical supposition that there is
some interpretation which makes the premises true and the conclusion false, and then
derive an absurdity from that supposition. From that we can conclude there is no such
interpretation, and hence that any interpretation which makes the premises true must
also be one which makes the conclusion true.
The following table summarises whether a single (counter‐)interpretation suffices, or
whether we must reason about all interpretations (whether directly considering them
all, or indirectly making use of reductio).

                        Yes                      No
logical truth?          all interpretations      one counter‐interpretation
logical falsehood?      all interpretations      one counter‐interpretation
logically equivalent?   all interpretations      one counter‐interpretation
consistent?             one interpretation       consider all interpretations
valid?                  all interpretations      one counter‐interpretation
entailment?             all interpretations      one counter‐interpretation

This might usefully be compared with the table at the end of §13.1. The key difference
resides in the fact that Sentential concerns truth tables, whereas Quantifier concerns
interpretations. This difference is deeply important, since each truth‐table only ever
has finitely many lines, so that a complete truth table is a relatively tractable object.
By contrast, there are infinitely many interpretations for any given sentence(s), so that
reasoning about all interpretations can be a deeply tricky business.

Key Ideas in §25


› To demonstrate that something is a logical truth or a logical false‐
hood, or that an argument is an entailment, involves reasoning
about all interpretations.
› Reasoning about all interpretations of a given sentence is intrinsically
more difficult than reasoning about all valuations, because there are
only finitely many valuations to consider for any Sentential sentence,
but infinitely many interpretations for a Quantifier sentence.
› Some indirect strategies, like reductio, can be very useful in over‐
coming this obstacle.

Practice exercises
A. Show that each of the following is either a logical truth or a logical falsehood:

1. 𝐷𝑎 ∧ ¬𝐷𝑎;
2. 𝑃𝑚 ∧ ¬∃𝑥𝑃𝑥 ;
3. ∀𝑥∀𝑦(𝑥 ≠ 𝑦 ↔ 𝑦 ≠ 𝑥);
4. ∀𝑥(𝐹𝑥 ∨ ∃𝑦¬𝐹𝑦);
5. ∃𝑥((𝑥 = ℎ ∧ 𝑥 = 𝑖) ∧ (𝐹ℎ ↔ ¬𝐹𝑖)).

B. Show that the following pairs of sentences are logically equivalent.

1. ∃𝑥(𝐽𝑥 ∧ 𝑥 = 𝑚), 𝐽𝑚;


2. ∀𝑥∀𝑦(𝑥 = 𝑦 ∧ 𝑅𝑥𝑦), ∃𝑥(𝑅𝑥𝑥 ∧ ∀𝑦 𝑥 = 𝑦);
3. (∃𝑥𝑃𝑥 ∧ 𝑄𝑐), ∃𝑥(𝑃𝑥 ∧ 𝑄𝑐);
4. ∀𝑥∀𝑦𝑅𝑥𝑦, ∀𝑦∀𝑥𝑅𝑥𝑦;
5. ∀𝑥(𝑃𝑥 → ∃𝑦𝑅𝑥𝑦), ¬∃𝑥(∀𝑦¬𝑅𝑥𝑦 ∧ 𝑃𝑥).

C. Show that the following sentences are jointly inconsistent:

1. 𝑀𝑎 ∨ 𝑀𝑏, (𝑀𝑎 → ∀𝑥¬𝑀𝑥), ¬𝑀𝑏;


2. ∃𝑥(𝐵𝑥 ∨ 𝐴𝑥), ∀𝑥¬𝐶𝑥, ∀𝑥((𝐴𝑥 ∨ 𝐵𝑥) → 𝐶𝑥);
3. ∀𝑥∀𝑦(𝑅𝑥𝑦 → ¬𝑅𝑦𝑥), ∀𝑥𝑅𝑥𝑥 .
4. ∀𝑥∀𝑦𝑥 = 𝑦, ∃𝑥𝑃𝑥 , ¬∀𝑥𝑃𝑥 .

D. Show that the following arguments are valid:

1. 𝑀𝑎 ∴ (𝑄𝑎 ∨ ¬𝑄𝑎);
2. ¬(𝑀𝑏 ∧ ∃𝑥𝐴𝑥), (𝑀𝑏 ∧ 𝐹𝑏) ∴ ¬∀𝑥(𝐹𝑥 → 𝐴𝑥);
3. ∀𝑦𝐺𝑦, ∃𝑦¬𝐻𝑦 ∴ ¬∀𝑥(𝐺𝑥 → 𝐻𝑥);
4. ∃𝑥𝐾𝑥, ∀𝑥(𝐾𝑥 ↔ ¬𝐿𝑥) ∴ ∃𝑥¬𝐿𝑥 ;
5. ∀𝑥𝑄𝑥, ∀𝑥(𝑄𝑥 → 𝑅𝑥) ∴ ∃𝑥𝑅𝑥 ;
6. ∃𝑧(𝑁𝑧 ∧ 𝑂𝑧𝑧) ∴ ¬∀𝑥∀𝑦(𝑂𝑥𝑦 → ∃𝑧(¬𝑁𝑧 ∧ (𝑧 = 𝑥 ∨ 𝑧 = 𝑦)));
7. ∃𝑥∃𝑦(𝑍𝑥 ∧ 𝑍𝑦 ∧ 𝑥 ≠ 𝑦), ¬𝑍𝑑 ∴ ∃𝑦(𝑍𝑦 ∧ 𝑦 ≠ 𝑑);
8. ∀𝑥∀𝑦 (𝑅𝑥𝑦 → ∀𝑧(𝑅𝑦𝑧 → 𝑥 ≠ 𝑧)) ∴ 𝑅𝑎𝑎 → ¬𝑅𝑎𝑎.
Chapter 6

Natural Deduction for Sentential


26
Proof and Reasoning

26.1 Arguments and Reasoning Revisited


Back in §2, we said that a symbolised argument is valid iff it is not possible to make all
of the premises true in a valuation, while the conclusion is false.
In the case of Sentential, this led us to develop truth tables. Each row of a complete
truth table corresponds to a valuation. So, when faced with a Sentential argument, we
have a very direct way to assess whether it is possible to make all of the premises true
and the conclusion false: just plod through the truth table.
But truth tables do not necessarily give us much insight. Consider two arguments in
Sentential:

𝑃 ∨ 𝑄, ¬𝑃 ∴ 𝑄
𝑃 ↔ 𝑄, ¬𝑃 ∴ ¬𝑄.

Clearly, these are valid arguments. You can confirm that they are valid by constructing
four‐row truth tables. With truth tables, we only really care about the truth values
assigned to whole sentences, since that is what ultimately determines whether there
is an entailment. But we might say that these two arguments, proceeding from differ‐
ent premises with different logical connectives involved, must make use of different
principles of IMPLICATION – different principles about what follows from what. What
follows from a disjunction is not at all the same as what follows from a biconditional,
and it might be nice to keep track of these differences. While a truth table can show
that an argument is valid, it doesn’t really explain why the argument is valid. To ex‐
plain why 𝑃 ∨ 𝑄, ¬𝑃 ∴ 𝑄 is valid, we have to say something about how disjunction and
negation work and interact.
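Both arguments can be confirmed by checking all four valuations mechanically. This sketch does just that; the helper `valid` is a convenience of the sketch, not part of Sentential:

```python
from itertools import product

# A four-row validity check: an argument is valid iff no valuation makes
# all the premises true while making the conclusion false.
def valid(premises, conclusion):
    return all(conclusion(P, Q)
               for P, Q in product([True, False], repeat=2)
               if all(prem(P, Q) for prem in premises))

dis = valid([lambda P, Q: P or Q, lambda P, Q: not P],   # P ∨ Q, ¬P ∴ Q
            lambda P, Q: Q)
bic = valid([lambda P, Q: P == Q, lambda P, Q: not P],   # P ↔ Q, ¬P ∴ ¬Q
            lambda P, Q: not Q)
print(dis, bic)  # True True: both arguments are valid
```

Notice, though, that the check itself is indifferent to *why* each argument is valid; the two arguments pass for quite different structural reasons.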
Certainly human reasoning treats disjunctions very differently from biconditionals.
While logic is not really about human reasoning, which is more properly a subject mat‐
ter for psychology, nevertheless we can formally study the different forms of reasoning
involved in arguing from sentences with different structures, by asking: what would it


be acceptable to conclude from premises with a certain formal structure, supposing one
cannot give up one’s commitment to those premises?
The role of reasoning should not be overstated. Many principles of good reasoning
have no place in logic, because they concern how to make judgments on the basis of
inconclusive evidence. Our focus here is on implication, whether or not it would be
good reasoning to form beliefs in accordance with those implications. The idea here
is that ‘𝑃 ∨ 𝑄’ and ‘¬𝑃’ imply ‘𝑄’, whether or not it would be a good idea to believe ‘𝑄’
if you believed those premises. To emphasise a point we made earlier (§2.3): maybe
once you notice that these premises imply ‘𝑄’, what good reasoning demands is that
you give up one of those premises.

26.2 Formal Proof and Natural Deduction


In special cases, thinking about reasoning can help us understand logical implication.
Reasoning often occurs step‐by‐step: you accept a certain claim, and deduce some
intermediate further claim, and from there derive a conclusion. There is an approach
to logical argument that mirrors this step‐by‐step approach. The idea is to understand
argument by applying very obvious, seemingly trivial, rules to sentences of a logical
language in virtue of their structure, generating certain derived results, to which the
rules can be applied again, and then again in turn to further derived results. Some of
these rules will govern the behaviour of the sentential connectives. Others will govern
the behaviour of the quantifiers and identity. The whole system of rules will govern
the construction of a proof (or derivation) that proceeds validly from some premises
to a conclusion.
This is a very different way of thinking about arguments. Rather than thinking about the
possible meanings of the argument using valuations or interpretations, we manipulate
the sentences involved in the premises and conclusion to construct an object, the proof,
which directly demonstrates the validity of the argument.
Very abstractly, a FORMAL PROOF is a sequence of sentences, such that each sentence
is justified by some rule of the proof system, possibly given some previous sentence or
sentences. (The sentences may also be accompanied by a ‘commentary’ explaining how
that sentence is justified.) There are lots of different systems for constructing formal
proofs in Sentential and Quantifier, including among others axiomatic systems, semantic
tableaux, Gentzen‐style natural deduction, and sequent calculus. Different systems
may be more efficient, or simpler to use, or better when one is reasoning about formal
proofs rather than constructing them. All formal proof systems share two important
features:

› The rules are unambiguous; and

› The rules apply at some stage of a proof because of the syntactic structure of
the sentences involved – no consideration of valuations or interpretations is in‐
volved.

These two features make it easy to design a computer program that produces formal
proofs, as long as it can parse and analyse the syntax of expressions. No consideration

of meanings need be involved, which makes it quite unlike the earlier ways we had
of analysing arguments in Sentential, such as truth‐tables, which essentially involved
understanding the truth‐functions that characterise the meanings of the logical con‐
nectives of the language.
The proof system we adopt is called NATURAL DEDUCTION. It is an attractive system
in many ways, in part because the rules it uses are designed to be very simple and
to emulate certain obvious and natural patterns of reasoning. This makes it useful
for some purposes – it is often easy to understand that the rules are correct, and it
has the very nice feature that one can reuse one formal proof within the course of
constructing another. Don’t be misled, however: the formal proofs constructed by
using natural deduction are stylised and abstracted from ordinary reasoning. Natural
deduction may be slightly less artificial than using a truth table, but it is not in any
sense a psychologically realistic model of reasoning. (For one thing, many ‘natural’
instincts in human reasoning correspond to invalid patterns of argument.)
One specific way in which natural deduction is supposed to improve over truth table
techniques is in the insight into how an argument works that it can provide. Rather
than just discovering that one sentence cannot be true without another being true,
we see almost literally how to break down premises into their consequences, and then
build up from those intermediate consequences to the conclusion, where all the steps
of deconstruction and reconstruction involve the use of simple and obviously correct
implications. Though this doesn’t mimic human reasoning perfectly, it resembles it
sufficiently well that we often seem to better understand how an argument works once
we’ve constructed a natural deduction proof of it, even if we already knew it to be valid.
Consider this pair of valid arguments:

¬𝑃 ∨ 𝑄, 𝑃 ∴ 𝑄
𝑃 → 𝑄, 𝑃 ∴ 𝑄.

From a truth‐table perspective, these arguments are indistinguishable: their premises


are true in the same valuations, and so are their conclusions. Yet they are distinct
from an implicational point of view, since one will involve consideration of the con‐
sequences of disjunctions and negations, and the other consideration of the con‐
sequences of the conditional. The natural deduction proofs demonstrating the validity
of these arguments reflect the different connectives involved, and promise us a more
fine‐grained analysis of how valid arguments function.
Formal proofs are obviously useful in demonstrating validity. If you manage to con‐
struct a proof in a well‐designed proof system, you’ll know the corresponding argu‐
ment is valid. But the aim of a good proof system is not just that all of the formal
proofs it permits correspond to valid arguments: it is also that every valid argument
has a proof. Our system of natural deduction rules has this property, but I won’t be
able to demonstrate that with complete rigour – I return to this idea in §38.

26.3 Efficiency in Natural Deduction


The use of natural deduction can be motivated by more than the search for insight. It
might also be motivated by practicality. Consider:

𝐴1 → 𝐶1 ∴ (𝐴1 ∧ 𝐴2 ) → (𝐶1 ∨ 𝐶2 ).

To test this argument for validity, you might use a 16‐row truth table. If you do it
correctly, then you will see that there is no row on which all the premises are true and
on which the conclusion is false. So you will know that the argument is valid. (But, as
just mentioned, there is a sense in which you will not know why the argument is valid.)
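The 16-row check can itself be automated. This illustrative sketch counts the rows and confirms that none makes the premise true and the conclusion false:

```python
from itertools import product

# The 16-row truth table for A1 → C1 ∴ (A1 ∧ A2) → (C1 ∨ C2), checked
# mechanically: look for a row with the premise true and conclusion false.
rows = list(product([True, False], repeat=4))   # values for A1, A2, C1, C2
bad_rows = [row for row in rows
            if ((not row[0]) or row[2])                            # A1 → C1 true
            and not ((not (row[0] and row[1])) or (row[2] or row[3]))]
print(len(rows), bad_rows)  # 16 []: no such row, so the argument is valid
```

The same check scales badly: each extra sentence letter doubles the number of rows.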
But now consider:

𝐴1 → 𝐶1 ∴ (𝐴1 ∧ 𝐴2 ∧ 𝐴3 ∧ 𝐴4 ∧ 𝐴5 ∧ 𝐴6 ∧ 𝐴7 ∧ 𝐴8 ∧ 𝐴9 ∧ 𝐴10 ) →
(𝐶1 ∨ 𝐶2 ∨ 𝐶3 ∨ 𝐶4 ∨ 𝐶5 ∨ 𝐶6 ∨ 𝐶7 ∨ 𝐶8 ∨ 𝐶9 ∨ 𝐶10 ).

This argument is also valid – as you can probably tell – but to test it requires a truth
table with 2²⁰ = 1,048,576 rows. In principle, we can set a machine to grind through
truth tables and report back when it is finished. In practice, complicated arguments
in Sentential can become intractable if we use truth tables. But there is a very short
natural deduction proof of this argument – just 6 rows. You can see it on page 264,
though it won’t make much sense if you skip the intervening pages.
When we get to Quantifier, though, the problem gets dramatically worse. There is noth‐
ing like the truth table test for Quantifier. To assess whether or not an argument is valid,
we have to reason about all interpretations. But there are infinitely many possible in‐
terpretations. We cannot even in principle set a machine to grind through infinitely
many possible interpretations and report back when it is finished: it will never finish.
We either need to come up with some more efficient way of reasoning about all in‐
terpretations, or we need to look for something different. Since we already have some
motivation for considering the role in arguments of particular premises, rather than all
the premises collectively as in the truth‐table approach, we will opt for the ‘something
different’ path – natural deduction.

26.4 Our System of Natural Deduction and its History


The modern development of natural deduction dates from research in the 1930s by
Gerhard Gentzen and, independently, Stanisław Jaśkowski.1 However, the natural de‐
duction system that we shall consider is based on slightly later work by Frederic Fitch.2

1 Gerhard Gentzen (1935) ‘Untersuchungen über das logische Schließen’, translated as ‘Investigations into
Logical Deduction’ in M. E. Szabo, ed. (1969) The Collected Works of Gerhard Gentzen, North‐Holland.
Stanisław Jaśkowski (1934) ‘On the rules of suppositions in formal logic’, reprinted in Storrs McCall, ed.
(1967) Polish logic 1920–39, Oxford University Press.
2 Frederic Fitch (1952) Symbolic Logic: An introduction, Ronald Press Company.
In the design of the present proof system, I drew on earlier versions of forall𝓍, but also on the natural
deduction systems of Jon Barwise and John Etchemendy (1992) The Language of First‐Order Logic, CSLI;
and of Volker Halbach (2010) The Logic Manual, Oxford University Press.

Natural deduction was so‐called because the rules of implication it codifies were seen
as reflecting ‘natural’ forms of human reasoning. It must be admitted that no one
spontaneously and instinctively reasons in a way that conforms to the rules of natural
deduction. But there is one place where these forms of inference are widespread –
mathematical reasoning. And it will not surprise you to learn that these systems of in‐
ference were introduced initially to codify good practice in mathematical proofs. Don’t
worry, though: we won’t expect that you are already a fluent mathematician. Though
some of the rules might be a bit stilted and formal for everyday use, the rationale for
each of them is transparent and can be easily understood even by those without ex‐
tensive mathematical training.
One further thing about the rules we shall give is that they are extremely simple. At
every stage in a proof, it is trivial to see which rules apply, how to apply them, and
what the result of applying them is. While constructing proofs as a whole might take
some thought, the individual steps are the kind of thing that can be undertaken by a
completely automated process. The development of formal proofs in the early years of
the twentieth century emphasised this feature, as a part of a general quest to remove
the need for ‘insight’ or ‘intuition’ in mathematical reasoning. As the philosopher and
mathematician Alfred North Whitehead expressed his conception of the field, ‘the ul‐
timate goal of mathematics is to eliminate any need for intelligent thought’. You will
see, I hope, that natural deduction does require some intelligent thought. But you will
also see that, because the steps in a proof are trivial and demonstrably correct,
finding a formal proof for a claim is a royal road to mathematical knowledge.
If we have a correct natural deduction proof of an argument, we can often do more
than simply report that the argument is valid. Often the natural deduction proof mir‐
rors the intuitive line of reasoning that justifies the valid argument, or even leads to its
discovery. Because of the prevalence of natural deduction in introductory logic texts
like this one, many philosophers have internalised the rules of natural deduction in
their own thought. So a good knowledge of natural deduction can be helpful in inter‐
preting contemporary philosophy: oftentimes the prose presentation of an argument
more or less exactly corresponds to some natural deduction proof in an appropriate
symbolisation.

Key Ideas in §26


› Any formal proof is a sequence of sentences, each of which fol‐
lows by some relatively simple rules from previous sentences or
is licensed in some other way. The final sentence is the conclu‐
sion of the proof.
› The rules are unambiguous and apply only because of the syn‐
tactic structure of the sentences involved: no consideration of
meanings need be involved. This makes them ideal for com‐
puters to use.
› A formal proof system can be theoretically useful, as it might
give insight into why an argument is valid by showing how the
conclusion can be derived from the premises.
› It might also be practically useful, because it can be much faster
to produce a single proof demonstrating an entailment than it
would be to show that all valuations or interpretations making
the premises true also make the conclusion true.
› A natural deduction proof system aims to use natural and ob‐
viously correct rules, which can contribute to the project of es‐
tablishing the conclusions of proofs as certain knowledge, and
can help in understanding the informal writing of those with a
knowledge of logic.

Practice exercises
A. Is the purely syntactic nature of formal proof a virtue or a vice? Can we be sure that
any class of ‘good’ arguments that is identified on purely syntactic grounds corresponds
to an interesting category?
B. Are formal proofs always more efficient than truth table arguments? Does reasoning
about Sentential sentences using valuations never give understanding?
27
The Idea of Natural Deduction

27.1 Assumptions and their Consequences


The fundamental idea behind natural deduction is that formal proofs begin from AS‐
SUMPTIONS, and the rules for constructing a formal proof either involve introducing a
new assumption or apply to previously generated sentences to produce further claims,
which are then themselves the subject of the proof rules. The rules apply to a sentence
because of its main connective only (so they are indifferent to what other connectives
occur within a sentence). For each main connective there are two proof rules:

› an ELIMINATION rule, that applies to a sentence having that connective as the


main connective, and allows us to add some further sentence(s) to our proof;
and
› an INTRODUCTION rule, that applies to some sentence(s) in our proof, and allows
us to add some further sentence having that connective as the main connective
to our proof.

Some of these rules also have a further effect of removing a previous assumption, or
discharging it. A natural deduction proof is just a sequence of sentences constructed
by making assumptions or using these introduction and elimination rules:

Any sequence of sentences, where every sentence either (i) is an assumption, whether discharged or undischarged, or (ii) follows from earlier sentences by the natural deduction proof rules, is a formal natural deduction proof.

Henceforth, I shall simply call these ‘proofs’, but you should be aware that there are
informal proofs too.1

1 Many of the arguments we offer in our metalanguage, quasi‐mathematical arguments about our formal languages, are proofs. Sometimes people call formal proofs ‘deductions’ or ‘derivations’, to ensure that no one will confuse the metalanguage activity of proving things about our logical languages with the activity of constructing arguments within those languages. But it seems unlikely that anyone in this course will be confused on this point, since we are not offering very many proofs in the metalanguage in the first place!

Below (§27.3), I will introduce a system for representing natural deduction proofs that
will make clear which sentences are assumptions, when those assumptions are made
and discharged, and also provide a commentary explaining how the proof was con‐
structed, i.e., which rules and sentences are used to justify others. The commentary
isn’t strictly necessary for a correct formal proof, but it is essential in learning how
those proofs work.

27.2 Assumptions and Suppositions


Let’s look at these initial assumptions. In ordinary reasoning, an assumption is a claim
we might be accepting without having a full justification for it. We might believe it, or
it might be a supposition that we are making ‘for the sake of argument’. In natural
deduction, which isn’t really about belief, assumptions are understood in this second,
suppositional, sense. A natural deduction proof begins with a supposed assumption:
the rules then tell us what we can derive from this assumption. We can make additional
assumptions whenever we like in the course of the proof.
Suppose we have a natural deduction proof. But what is it a proof of? Well, the last sen‐
tence in the sequence is in some sense where we’ve ended up: the result of the proof. It
can be considered the conclusion of the proof. But even if there is a proof with conclu‐
sion 𝒞 , that doesn’t mean that we have proved 𝒞 and should therefore come to believe
it. For we have neglected the role of assumptions. Some of the proof rules discharge
previously made assumptions, but not every assumption has to be discharged in a cor‐
rectly formed proof. These remaining ‘active’ or UNDISCHARGED assumptions are the
suppositions on which the correctness of the proof conclusion depends. So a natural
deduction proof is conditional: it establishes the conclusion, conditional on the truth
of any undischarged assumptions.
This structure establishes a nice relationship between proofs and arguments. A given
proof is a PROOF OF AN ARGUMENT 𝒜1, …, 𝒜𝑛 ∴ 𝒞 if the final sentence in the proof is 𝒞, and each of the undischarged assumptions is among the premises 𝒜1, …, 𝒜𝑛 of the argument.
Note that if there is a proof of 𝒜 ∴ 𝒞 , that will also be a proof of 𝒜, ℬ ∴ 𝒞 , and any
other argument with the conclusion 𝒞 and including 𝒜 as a premise. This is related to
the fact that if you have a valid argument, adding extra premises can’t make the argu‐
ment invalid. (Likewise, if you have a correctly constructed proof, making additional
assumptions can’t make it incorrect.)
Looking above, you can see that a single assumption is already a natural deduction
proof. If we assume ‘𝑃’, and stop there, we have a correctly formed proof of the ar‐
gument 𝑃 ∴ 𝑃. The premise is the undischarged assumption; the conclusion is the
last sentence in the proof. It doesn’t matter that, in this case, the undischarged as‐
sumption is the last sentence! Since this argument is obviously valid, though trivial,
we have some assurance that our simplest proofs are correct, and don’t generate fal‐
lacious proofs of invalid arguments. Likewise, this proof would also be a proof of the
argument 𝑃, 𝑄 ∴ 𝑃.


We could make more assumptions. If we had the sequence of sentences ‘𝑃’ followed
by ‘𝑄’, they would both be undischarged assumptions, and the conclusion would be ‘𝑄’.
So this would be a proof of the argument 𝑃, 𝑄 ∴ 𝑄.
Admittedly there isn’t much we can do if the only rule we have is the one that allows
us to make an assumption whenever we want. We shall want some other rules. But
first I’ll introduce a way of depicting natural deduction proofs that makes the role of
assumptions very clear.

27.3 Representing Natural Deduction Proofs


We will use a particular graphical representation of natural deduction proofs, one
which makes use of ‘nesting’ of sentences to vividly represent which assumptions a
particular sentence in a proof is relying on at any given stage, and uses a device of ho‐
rizontal marks to distinguish assumptions from derived sentences. It will be easier to
see how this works with some examples.
A natural deduction proof is a sequence of sentences. We will write this sequence
vertically, so each successive sentence occupies its own line. We mark a sentence as
an assumption by underlining it. So let’s consider a very simple proof, the one‐line
proof from the assumption ‘𝑃’:

1 𝑃

In this proof, the horizontal line marks that the sentence above it is an assumption,
and not justified by earlier sentences in the proof. Everything written below the line
will either be something which follows (directly or indirectly) from the assumptions
we have already made, or it will be some new assumption. We don’t need a special
indication for the conclusion: it’s just the last line.
There is also a vertical line at the left, the ASSUMPTION LINE. This indicates the RANGE
of the assumption. This vertical line should be continued downwards whenever we
extend the proof by applying the natural deduction rules, unless and until a rule that
discharges the assumption is applied. Then we discontinue the vertical line. We’ll have
to wait until §29 to see real examples of rules that discharge assumptions.2
When a sentence is in the range of an assumption, that generally means the assump‐
tion will be playing some role in justifying that sentence. This isn’t always the case,
however, and we will see that not every sentence in the range of an assumption intu‐
itively depends on the assumption. Not every undischarged assumption is essential to
the derivation of a given sentence. This again mirrors a feature of valid arguments:
the truth of the conclusion of a valid argument doesn’t always require the truth of
every premise (e.g., 𝐴, 𝐵 ⊨ 𝐴, but ‘𝐵’ isn’t really playing an essential role here).
Whenever we make a new assumption, we underline it and introduce a new assump‐
tion line. So this is a proof of the argument 𝑃, 𝑄 ∴ 𝑄:

2 We’ll also see there that sometimes we can discharge all the assumptions in a proof, and we will have an
assumption line with no horizontal assumption marker, and so no undischarged assumptions attached
to it. So an assumption line really marks the range of zero or more assumptions.

1 𝑃

2 𝑄

Here you see we’ve extended the assumption line adjacent to ‘𝑃’, and introduced a new
assumption line for ‘𝑄’.
You see that we also number the sentences in the proof. These numbers are not strictly
part of the proof, but are part of the commentary, and help us remember which sen‐
tences we are referring to when we explain how subsequent sentences added to the
proof are justified.
We now have enough to describe our first natural deduction proof rule. It is the rule
that says we can extend any proof by making a new assumption. In abstract generality,
here is our NEW ASSUMPTION rule: if we have any natural deduction proof, we can
extend it by making a new assumption of any sentence. Graphically, we add the new
sentence at the bottom of the proof, indenting it in the range of a new assumption line,
and extending the range of all existing undischarged assumptions. Abstractly, the rule
looks like this:

𝑛 𝒜

𝑛+1 ℬ

That is, whenever we have a proof, whatever its contents, we may make an arbitrary
assumption to extend that proof. We will discuss new assumptions, and the special
family of rules that handles discharging assumptions, in §29.1.
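This conception of a proof as a sequence can be sketched computationally. Here is a minimal Python illustration (the representation and function names are my own, purely illustrative, not part of the book’s system): a proof is a list of sentences with justifications, and the new assumption rule simply appends.

```python
# A proof modelled as a sequence of (sentence, justification) pairs.
# The new assumption rule: any proof may be extended with any sentence.

def assume(proof, sentence):
    """Extend a proof by a new assumption -- always permitted."""
    return proof + [(sentence, "assumption")]

def conclusion(proof):
    """The conclusion of a proof is simply its last sentence."""
    return proof[-1][0]

def undischarged(proof):
    """With no discharging rules yet, every assumption stays active."""
    return [s for s, just in proof if just == "assumption"]

proof = assume(assume([], "P"), "Q")
print(conclusion(proof))    # 'Q': this is a proof of the argument P, Q ∴ Q
print(undischarged(proof))  # ['P', 'Q']: both assumptions remain active
```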
Introducing a new assumption line for each new assumption is best practice. That
allows each assumption potentially to be discharged independently of all the others.
But sometimes you know that you won’t be discharging some assumptions, and that
they will all remain active throughout the proof. (Every sentence in the proof will
be in the range of those assumptions.) In that case, you can use a single assumption
line for a number of assumptions. For example, if we know we’re going to make and
retain the assumptions ‘𝑃’ and ‘𝑄’, we can write them on successive lines, draw a single
assumption line, and a single horizontal line marking that any sentences above it are
the assumptions attached to that assumption line:

1 𝑃

2 𝑄



For any given sentence in a proof, you can easily see the undischarged assumptions
on which that sentence depends: just look at which assumptions are attached to the
assumptions lines to its left. If some line in a proof is in the range of an assumption,
we will say that it is an ACTIVE ASSUMPTION at that point in the proof. Likewise, for a
given assumption you can use the assumption line to easily see which sentences are in
the range of that assumption.
Let’s consider a couple more examples of how to set up a proof.

¬(𝐴 ∨ 𝐵) ∴ ¬𝐴 ∧ ¬𝐵.

We start a proof by writing an assumption:

1 ¬(𝐴 ∨ 𝐵)

We are hoping to conclude that ‘¬𝐴 ∧ ¬𝐵’; so we are hoping ultimately to conclude our
proof with

1 ¬(𝐴 ∨ 𝐵)

𝑛 ¬𝐴 ∧ ¬𝐵

for some number 𝑛. It doesn’t matter which line we end on, but we would obviously
prefer a short proof to a long one! We don’t have any rules yet, so we cannot fill in the
middle of this proof.
Suppose we had an argument with more than one premise:

𝐴 ∨ 𝐵, ¬(𝐴 ∧ 𝐶), ¬(𝐵 ∧ ¬𝐷) ∴ ¬𝐶 ∨ 𝐷.

If our argument has more than one premise, we can use either single or multiple as‐
sumption lines:

1 𝐴∨𝐵
2 ¬(𝐴 ∧ 𝐶)
3 ¬(𝐵 ∧ ¬𝐷)
⋮
𝑛 ¬𝐶 ∨ 𝐷

1 𝐴∨𝐵
2 ¬(𝐴 ∧ 𝐶)
3 ¬(𝐵 ∧ ¬𝐷)
⋮
𝑛 ¬𝐶 ∨ 𝐷

Again, these represent the same proof; the second is a conventional shorthand
for the official form given first.
What remains to do is to explain each of the rules that we can use along the way from
premises to conclusion. The rules are divided into two families: those rules that involve
making or getting rid of further assumptions that are made ‘for the sake of argument’,
and those that do not. The latter class of rules are simpler, so we will begin with those
in §28, turning to the others in §29. After introducing the rules, I will return in
§29.10 to the two incomplete proofs above, to see how they may be completed.

Key Ideas in §27


› A formal natural deduction proof is a graphical representation
of argument from assumptions, in accordance with a strict set
of rules for deriving further claims and keeping track of which
assumptions are active (‘undischarged’) at a given point in the
proof.
› A correctly formed natural deduction proof can be extended by
making an arbitrary new assumption at any point.

Practice exercises
A. Is the following a correctly formed proof in our natural deduction system?

1 𝐴

2 ((𝐵 ↔) ∨ 𝐴)

3 ¬𝐴

4 𝐷 → ¬𝐷

B. Which of the following could, given the right rules, be turned into a proof corres‐
ponding to the argument
¬(𝑃 ∧ 𝑄), 𝑃 ∴ ¬𝑄?

1.
1 ¬(𝑃 ∧ 𝑄)
2 𝑃
⋮
𝑛 ¬𝑄

2.
1 ¬(𝑃 ∧ 𝑄)
2 𝑃
⋮
𝑛 ¬𝑄

3.
1 ¬𝑄
⋮
𝑛 𝑃
𝑛+1 ¬(𝑃 ∧ 𝑄)

4.
1 𝑃
2 ¬(𝑃 ∧ 𝑄)
⋮
𝑛 ¬𝑄
28
Basic Rules for Sentential: Rules
without Subproofs

28.1 Conjunction Introduction


Suppose I want to show that Ludwig is reactionary and libertarian. One obvious way
to do this would be as follows: first I show that Ludwig is reactionary; then I show
that Ludwig is libertarian; then I put these two demonstrations together, to obtain the
conjunction.
Our natural deduction system will capture this thought straightforwardly. In the ex‐
ample given, I might adopt the following symbolisation key to represent the argument
in Sentential:

𝑅: Ludwig is reactionary
𝐿: Ludwig is libertarian

Perhaps I am working through a proof, and I have obtained ‘𝑅’ on line 8 and ‘𝐿’ on line
15. Then on any subsequent line I can obtain ‘(𝑅 ∧ 𝐿)’ thus:

8 𝑅

15 𝐿

16 (𝑅 ∧ 𝐿) ∧I 8, 15

Note that every line of our proof must either be an assumption, or must be justified by
some rule. We add the commentary ‘∧I 8, 15’ here to indicate that the line is obtained
by the rule of conjunction introduction (∧I) applied to lines 8 and 15. Note that the derived conjunction depends on the combined assumptions of the two conjuncts.


Since the order of conjuncts does not matter in a conjunction, I could equally well have
obtained ‘(𝐿 ∧ 𝑅)’ as ‘(𝑅 ∧ 𝐿)’. I can use the same rule with the commentary reversed,
to reflect the reversed order of the conjuncts:

8 𝑅

15 𝐿

16 (𝐿 ∧ 𝑅) ∧I 15, 8

More generally, here is our CONJUNCTION INTRODUCTION rule: if we have obtained 𝒜 and ℬ by some stage in a proof under some shared assumptions – whether by proof or assumption – that justifies us in introducing their conjunction, which inherits those same assumptions. Abstractly, the rule looks like this:

𝑚 𝒜

𝑛 ℬ

(𝒜 ∧ ℬ) ∧I 𝑚, 𝑛

To be clear, the statement of the rule is schematic. It is not itself a proof. ‘𝒜 ’ and ‘ℬ’ are
not sentences of Sentential. Rather, they are symbols in the metalanguage, which we
use when we want to talk about any sentence of Sentential (see §7). Similarly, ‘𝑚’ and
‘𝑛’ are not numerals that will appear in any actual proof. Rather, they are symbols
in the metalanguage, which we use when we want to talk about any line number of
any proof. In an actual proof, the lines are numbered ‘1’, ‘2’, ‘3’, and so forth. But when
we define the rule, we use variables to emphasise that the rule may be applied at any
point. The rule requires only that we have both conjuncts available to us somewhere
in the proof, earlier than the line that results from the application of the rule. They
can be separated from one another, and they can appear in any order. So 𝑚 might be
less than 𝑛, or greater than 𝑛. Indeed, 𝑚 might even equal 𝑛, as in this proof:

1 𝑃

2 𝑃∧𝑃 ∧I 1, 1

Note that the rule involves extending the vertical line to cover the newly introduced
sentence. This is because what has been derived depends on the same assumptions as
what it was derived from, and so it must also be in the range of those assumptions.

All of the rules in this section justify a new claim which inherits all the
assumptions of anything from which it has been derived by a natural
deduction rule.

The two starting conjuncts needn’t have the same assumptions, but the derived con‐
junction inherits their joint assumptions:

1 𝑃

2 𝑄

3 (𝑃 ∧ 𝑄) ∧I 1, 2

28.2 Conjunction Elimination


The above rule is called ‘conjunction introduction’ because it introduces a sentence
with ‘∧’ as its main connective into our proof, prior to which it may have been ab‐
sent. Correspondingly, we also have a rule that eliminates a conjunction. Not that the
earlier conjunction is somehow removed! It’s just that we use a sentence whose main
connective is a conjunction to justify further sentences in which that conjunction does
not feature.
Suppose you have shown that Ludwig is both reactionary and libertarian. You are en‐
titled to conclude that Ludwig is reactionary. Equally, you are entitled to conclude
that Ludwig is libertarian. Putting these observations together, we obtain our CON‐
JUNCTION ELIMINATION rules:

𝑚 (𝒜 ∧ ℬ)
⋮
𝒜 ∧E 𝑚

𝑚 (𝒜 ∧ ℬ)
⋮
ℬ ∧E 𝑚

The point is simply that, when you have a conjunction on some line of a proof, you
can obtain either of the conjuncts by ∧E later on. There are two rules, because each
conjunction justifies us in deriving either of its conjuncts. We could have called them
∧E‐LEFT and ∧E‐RIGHT, to distinguish them, but in the following we will mostly not
distinguish them.1
One point might be worth emphasising: you can only apply this rule when conjunction
is the main connective. So you cannot derive ‘𝐷’ just from ‘𝐶 ∨ (𝐷 ∧ 𝐸)’! Nor can you

1 Why do we have two rules at all, rather than one rule that allows us to derive either conjunct? The
answer is that we want our rules to have an unambiguous result when applied to some prior lines of
the proof. This is important if, for example, we are implementing a computer system to produce formal
proofs.

derive ‘𝐷’ directly from ‘𝐶 ∧ (𝐷 ∧ 𝐸)’, because it is not one of the conjuncts of the main
connective of this sentence. You would have to first obtain ‘(𝐷 ∧ 𝐸)’ by ∧E, and then
obtain ‘𝐷’ by a second application of that rule, as in this proof:

1 𝐶 ∧ (𝐷 ∧ 𝐸)

2 𝐷∧𝐸 ∧E 1

3 𝐷 ∧E 2
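The point that ∧E looks only at the main connective can be made vivid with a small computational sketch (mine, purely illustrative, not part of the proof system): represent sentences as nested structures, and let the rule inspect nothing but the outermost connective.

```python
# Sentences as nested tuples: ('∧', left, right) is a conjunction.
# conj_elim sketches ∧E: it checks ONLY the main connective.

def conj_elim(sentence, side):
    connective, left, right = sentence
    if connective != "∧":
        raise ValueError("∧E applies only when ∧ is the main connective")
    return left if side == "left" else right

# 'C ∧ (D ∧ E)': reaching 'D' takes two applications of ∧E.
s = ("∧", "C", ("∧", "D", "E"))
print(conj_elim(conj_elim(s, "right"), "left"))  # D

# By contrast, ('∨', 'C', ('∧', 'D', 'E')) would raise an error:
# ∧E cannot touch 'C ∨ (D ∧ E)', whose main connective is '∨'.
```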

Even with just these two rules, we can start to see some of the power of our formal
proof system. Consider this tricky‐looking argument:

[(𝐴 ∨ 𝐵) → (𝐶 ∨ 𝐷)] ∧ [(𝐸 ∨ 𝐹) → (𝐺 ∨ 𝐻)]
∴ [(𝐸 ∨ 𝐹) → (𝐺 ∨ 𝐻)] ∧ [(𝐴 ∨ 𝐵) → (𝐶 ∨ 𝐷)]

Dealing with this argument using truth‐tables would be a very tedious exercise, given
that there are 8 sentence letters in the premise and we would thus require a 2⁸ = 256‐line truth table! But we can deal with it swiftly using our natural deduction rules.
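To see what the proof spares us, here is a sketch (in Python, not part of the book’s machinery) of the brute‐force semantic check: survey all 2⁸ = 256 valuations and confirm that none makes the premise true and the conclusion false.

```python
from itertools import product

def imp(p, q):
    """Truth condition of the material conditional."""
    return (not p) or q

rows = 0
for A, B, C, D, E, F, G, H in product([True, False], repeat=8):
    premise = imp(A or B, C or D) and imp(E or F, G or H)
    conclusion = imp(E or F, G or H) and imp(A or B, C or D)
    assert not (premise and not conclusion)  # no counterexample valuation
    rows += 1
print(rows)  # 256 valuations surveyed
```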
The main connective in both the premise and conclusion of this argument is ‘∧’. In or‐
der to provide a proof, we begin by writing down the premise, which is our assumption.
We draw a line below this: everything after this line must follow from our assumptions
by (successive applications of) our rules of implication. So the beginning of the proof
looks like this:

1 [(𝐴 ∨ 𝐵) → (𝐶 ∨ 𝐷)] ∧ [(𝐸 ∨ 𝐹) → (𝐺 ∨ 𝐻)]

From the premise, we can get each of its conjuncts by ∧E. The proof now looks like
this:

1 [(𝐴 ∨ 𝐵) → (𝐶 ∨ 𝐷)] ∧ [(𝐸 ∨ 𝐹) → (𝐺 ∨ 𝐻)]

2 [(𝐴 ∨ 𝐵) → (𝐶 ∨ 𝐷)] ∧E 1

3 [(𝐸 ∨ 𝐹) → (𝐺 ∨ 𝐻)] ∧E 1

So by applying the ∧I rule to lines 3 and 2 (in that order), we arrive at the desired
conclusion. The finished proof looks like this:

1 [(𝐴 ∨ 𝐵) → (𝐶 ∨ 𝐷)] ∧ [(𝐸 ∨ 𝐹) → (𝐺 ∨ 𝐻)]

2 [(𝐴 ∨ 𝐵) → (𝐶 ∨ 𝐷)] ∧E 1

3 [(𝐸 ∨ 𝐹) → (𝐺 ∨ 𝐻)] ∧E 1

4 [(𝐸 ∨ 𝐹) → (𝐺 ∨ 𝐻)] ∧ [(𝐴 ∨ 𝐵) → (𝐶 ∨ 𝐷)] ∧I 3, 2



This is a very simple proof, but it shows how we can chain rules of proof together into
longer proofs. Our formal proof requires just four lines, a far cry from the 256 lines
that would have been required had we approached the argument using the techniques
from chapter 3.
It is worth giving another example. Way back in §10.3, we noted that this argument is
valid:
𝐴 ∧ (𝐵 ∧ 𝐶) ∴ (𝐴 ∧ 𝐵) ∧ 𝐶.
To provide a proof corresponding to this argument, we start by writing:

1 𝐴 ∧ (𝐵 ∧ 𝐶)

From the premise, we can get each of the conjuncts by applying ∧E twice. And we can
then apply ∧E twice more, so our proof looks like:

1 𝐴 ∧ (𝐵 ∧ 𝐶)

2 𝐴 ∧E 1

3 𝐵∧𝐶 ∧E 1

4 𝐵 ∧E 3

5 𝐶 ∧E 3

But now we can merrily reintroduce conjunctions in the order we want them, so that
our final proof is:

1 𝐴 ∧ (𝐵 ∧ 𝐶)

2 𝐴 ∧E 1

3 𝐵∧𝐶 ∧E 1

4 𝐵 ∧E 3

5 𝐶 ∧E 3

6 𝐴∧𝐵 ∧I 2, 4

7 (𝐴 ∧ 𝐵) ∧ 𝐶 ∧I 6, 5

Recall that our official definition of sentences in Sentential only allowed conjunctions
with two conjuncts. When we discussed semantics, we became a bit more relaxed, and
allowed ourselves to drop inner parentheses in long conjunctions, since the order of
the parentheses did not affect the truth table. The proof just given suggests that we
could also drop inner parentheses in all of our proofs. However, this is not standard,
and we shall not do this. Instead, we shall return to the more austere parenthetical
conventions. (Though we will allow ourselves to drop outermost parentheses most of the time, for legibility.)
Our conjunction rules correspond to intuitively correct patterns of implication. But
they are also demonstrably good in another sense. Each of our rules can be vindicated
by considering facts about entailment. Each of these schematic entailments is easily
demonstrated:

› 𝒜, ℬ ⊨ 𝒜 ∧ ℬ;

› 𝒜 ∧ ℬ ⊨ 𝒜;

› 𝒜 ∧ ℬ ⊨ ℬ.

For example, the first of these says that 𝒜 and ℬ jointly suffice to entail the truth of their conjunction. This justifies the proof rule of conjunction introduction, since
at a stage in the proof where we are assuming both 𝒜 and ℬ to be true, we are then
permitted to conclude that 𝒜 ∧ ℬ is true – just as conjunction introduction says we
can.
It can be recognised, then, that our proof rules correspond to valid arguments in Sen‐
tential, and so our conjunction rules will never permit us to derive a false sentence from
true sentences. There is no guarantee of course that the assumptions we make in our
formal proofs are in fact true – only that if they were true, what we derive from them
would also be true. So despite the fact that our proof rules are a syntactic procedure, relying only on recognising the main connective of a sentence and applying an appropriate rule to introduce or eliminate it, each of our rules corresponds to an acceptable
entailment.
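Those three entailments can themselves be checked mechanically. The following sketch (my own illustration, not part of the book’s system) represents sentences as truth functions of a valuation and surveys every valuation, for the instance where 𝒜 is ‘𝑃’ and ℬ is ‘𝑄’:

```python
from itertools import product

def entails(premises, conclusion, letters):
    """True iff no valuation makes all premises true and conclusion false."""
    for values in product([True, False], repeat=len(letters)):
        v = dict(zip(letters, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

P = lambda v: v["P"]
Q = lambda v: v["Q"]
conj = lambda v: v["P"] and v["Q"]   # truth condition of (P ∧ Q)

print(entails([P, Q], conj, ["P", "Q"]))  # True: P, Q ⊨ P ∧ Q
print(entails([conj], P, ["P", "Q"]))     # True: P ∧ Q ⊨ P
print(entails([conj], Q, ["P", "Q"]))     # True: P ∧ Q ⊨ Q
```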

28.3 Conditional Elimination


Consider the following argument:

If Jane is smart then she is fast. Jane is smart. So Jane is fast.

This argument is certainly valid. If you have a conditional claim, that commits you to
the consequent given the antecedent, and you also have the antecedent, then you have
sufficient material to derive the consequent.
This suggests a straightforward CONDITIONAL ELIMINATION rule (→E):

𝑚 (𝒜 → ℬ)

𝑛 𝒜

ℬ →E 𝑚, 𝑛

This rule is also sometimes called MODUS PONENS. Again, this is an elimination rule,
because it allows us to obtain a sentence that may not contain ‘→’, having started with
a sentence that did contain ‘→’. Note that the conditional, and the antecedent, can
be separated from one another, and they can appear in any order. However, in the
commentary for →E, we always cite the conditional first, followed by the antecedent.
Here is an illustration of the rules we have so far in action, applied to this intuitively
correct argument:
𝑃, (𝑃 → 𝑄) ∧ (𝑃 → 𝑅) ∴ (𝑅 ∧ 𝑄).

1 (𝑃 → 𝑄) ∧ (𝑃 → 𝑅)

2 𝑃

3 (𝑃 → 𝑄) ∧E 1

4 (𝑃 → 𝑅) ∧E 1

5 𝑄 →E 3, 2

6 𝑅 →E 4, 2

7 (𝑅 ∧ 𝑄) ∧I 6, 5

The correctness of our proof rule of conditional elimination is supported by the easily
demonstrated validity of the corresponding entailment:

› 𝒜, 𝒜 → ℬ ⊨ ℬ.

So applying this rule can never produce false conclusions if we began with true assump‐
tions.
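The worked example can be vindicated the same way: a survey of all 2³ = 8 valuations of ‘𝑃’, ‘𝑄’ and ‘𝑅’ confirms that 𝑃, (𝑃 → 𝑄) ∧ (𝑃 → 𝑅) ⊨ (𝑅 ∧ 𝑄). (A sketch of mine, not part of the proof system.)

```python
from itertools import product

def imp(p, q):
    """Truth condition of the material conditional."""
    return (not p) or q

# Check P, (P → Q) ∧ (P → R) ⊨ (R ∧ Q) over all 2**3 = 8 valuations.
for P, Q, R in product([True, False], repeat=3):
    if P and (imp(P, Q) and imp(P, R)):  # both premises true here
        assert R and Q                   # ... then so is the conclusion
print("no counterexample among the 8 valuations")
```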

28.4 Biconditional Elimination


The BICONDITIONAL ELIMINATION rule (↔E) lets you do much the same as the conditional rule. Unofficially, a biconditional is like two conditionals running in each
direction. So, thought of informally, one of our biconditional elimination rules corresponds to a left‐to‐right application of conditional elimination, and the other to a right‐to‐left application.
If we know that Alice is coming to the party iff Bob is, then if we knew that either of
them was coming, we’d know that the other was coming. If you have the left‐hand
subsentence of the biconditional, you can obtain the right‐hand subsentence. If you
have the right‐hand subsentence, you can obtain the left‐hand subsentence. So we
have these two instances of the rule:

𝑚 (𝒜 ↔ ℬ)
⋮
𝑛 𝒜
⋮
ℬ ↔E 𝑚, 𝑛

𝑚 (𝒜 ↔ ℬ)
⋮
𝑛 ℬ
⋮
𝒜 ↔E 𝑚, 𝑛

Note that the biconditional, and the right or left half, can be distant from one another
in the proof, and they can appear in any order. However, in the commentary for ↔E,
we always cite the biconditional first.
Here is an example of the biconditional rules in action, demonstrating the following
argument:
𝑃, (𝑃 ↔ 𝑄), (𝑄 → 𝑅) ∴ 𝑅.

1 (𝑃 ↔ 𝑄)

2 𝑃

3 (𝑄 → 𝑅)

4 𝑄 ↔E 1, 2

5 𝑅 →E 3, 4

Note the way that our conjunction and conditional elimination rules can be used to
parallel the biconditional elimination rules:

1 ((𝑃 → 𝑄) ∧ (𝑄 → 𝑃))
2 𝑄
3 (𝑄 → 𝑃) ∧E 1
4 𝑃 →E 3, 2

1 (𝑃 ↔ 𝑄)
2 𝑄
3 𝑃 ↔E 1, 2

The correctness of our proof rules of biconditional elimination is supported by the easily demonstrated validity of the corresponding entailments:

› 𝒜 ↔ ℬ, 𝒜 ⊨ ℬ;

› 𝒜 ↔ ℬ, ℬ ⊨ 𝒜 .
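As with the conditional, these entailments can be confirmed by surveying the four valuations of ‘𝑃’ and ‘𝑄’. In this sketch (mine, purely illustrative) Python’s `==` on truth values plays the role of ‘↔’:

```python
from itertools import product

for P, Q in product([True, False], repeat=2):
    iff = (P == Q)   # truth condition of (P ↔ Q)
    if iff and P:
        assert Q     # P ↔ Q, P ⊨ Q
    if iff and Q:
        assert P     # P ↔ Q, Q ⊨ P
print("no counterexample among the 4 valuations")
```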

28.5 Disjunction Introduction


Suppose Ludwig is reactionary. Then Ludwig is either reactionary or libertarian. After
all, to say that Ludwig is either reactionary or libertarian is to say something weaker
than to say that Ludwig is reactionary. (𝒜 is weaker than ℬ if 𝒜 follows from ℬ, but
not vice versa.)
Let me emphasise this point. Suppose Ludwig is reactionary. It follows that Ludwig is
either reactionary or a kumquat. Equally, it follows that either Ludwig is reactionary or
that kumquats are the only fruit. Equally, it follows that either Ludwig is reactionary or
that God is dead. Many of these would be strange inferences to draw. But since the truth of the assumption guarantees that the disjunction is true, there is nothing logically wrong with the implications. This is so even though drawing these implications may violate all sorts of implicit conversational norms, and even though inferring in accordance with logic in this manner would more likely be a sign of psychosis than rationality.
Armed with all this, I present the DISJUNCTION INTRODUCTION rule(s):

𝑚 𝒜
⋮
(𝒜 ∨ ℬ) ∨I 𝑚

𝑚 𝒜
⋮
(ℬ ∨ 𝒜) ∨I 𝑚

Notice that ℬ can be any sentence of Sentential whatsoever. So the following is a per‐
fectly good proof:

1 𝑀

2 𝑀 ∨ ([(𝐴 ↔ 𝐵) → (𝐶 ∧ 𝐷)] ↔ [𝐸 ∧ 𝐹]) ∨I 1

Using a truth table to show this would have taken 128 lines.
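The arithmetic is worth pausing on: line 2 contains seven sentence letters, so a full truth table for it needs 2⁷ = 128 rows. The semantic point behind ∨I, by contrast, needs no survey at all: if ‘𝑀’ is true on a valuation, then so is ‘𝑀 ∨ 𝒳’, whatever sentence 𝒳 may be. A tiny sketch:

```python
# A full truth table for line 2 would need this many rows ...
print(2 ** 7)  # 128

# ... but no survey is needed for ∨I: a disjunction with a true
# disjunct is true, whatever the other disjunct's truth value.
M = True
for X in [True, False]:
    assert M or X
```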
Here is an example, to show our rules in action:

1 (𝐴 ∧ 𝐵)

2 𝐴 ∧E 1

3 𝐵 ∧E 1

4 (𝐴 ∨ 𝐶) ∨I 2

5 (𝐵 ∨ 𝐶) ∨I 3

6 ((𝐴 ∨ 𝐶) ∧ (𝐵 ∨ 𝐶)) ∧I 4, 5

This disjunction rule is supported by the following valid Sentential argument forms:

› 𝒜 ⊨ 𝒜 ∨ ℬ; and

› ℬ ⊨ 𝒜 ∨ ℬ.

The rule of disjunction introduction is one place where ‘natural’ deduction doesn’t
seem to live up to its name. Is this implication really one we would naturally make?
Despite appearances, maybe we do sometimes reason like this. Consider this argument:
‘I won’t ever eat meat again. Well, either I won’t, or it will be an accident!’ (But perhaps
this is better thought of as a retraction of my initial over‐bold claim, rather than an
inference from it.)
Nevertheless, even if the rule is artificial, that doesn’t make it incorrect. We can see,
clearly, that it corresponds to a valid argument. So perhaps the problem with it as a
piece of reasoning is due to something other than invalidity.

› Sometimes disjunction introduction looks like you are ‘throwing away’ inform‐
ation that you already have – you know enough to treat ‘𝑃’ as a premise, but
you end up assenting only to the weaker claim ‘𝑃 ∨ 𝑄’. But can this be the full
story? Conjunction elimination seems to involve the same sort of inference from
a logically stronger to a logically weaker claim, and that doesn’t arouse nearly as
much animosity as disjunction introduction.

› An alternative explanation: maybe the introduced disjunct seems irrelevant, because the content of the sentence to which the rule is applied has in general
nothing to do with the disjunct introduced. This is not the case with conjunc‐
tion elimination, where the result of applying the rule is clearly related to the
sentence it is applied to.

These considerations of relevance or information value lie beyond logic proper. They
concern what we, as thinkers and hearers, can conclude about the speaker’s state of
mind, given that they have said something with a particular content. This is the do‐
main of that part of linguistics known as PRAGMATICS, the study of meaning in context.
Most theories of pragmatics predict that disjunction introduction is valid but often
conversationally inappropriate. So Paul Grice, for example, says that if you are in a
position to contribute the information of a claim 𝒫 to a conversation, and you are a cooperative speaker, then you will not contribute the weaker information ‘𝒫 or 𝒬 ’, even though you should still regard it as true.2

28.6 Reiteration
The last natural deduction rule in this category is REITERATION (R). This just allows
us to repeat an assumption or claim 𝒜 we have already established, so long as the
repeated sentence remains in the range of any assumption which the original was in
the range of.

𝑚 𝒜

𝒜 R𝑚

Such a rule is obviously legitimate; but one might well wonder how such a rule could
ever be useful. Here is an example of it in action:

1 𝑃

2 ((𝑃 ∧ 𝑃) → 𝑄)

3 𝑃 R1

4 (𝑃 ∧ 𝑃) ∧I 1, 3

5 𝑄 →E 2, 4

This rule is unnecessary at this point in the proof (we could have applied conjunction
introduction and cited line 1 twice in our commentary), but it can be easier in practice
to have two distinct lines to which to apply conjunction introduction. The real benefits
of reiteration come when we have multiple subproofs, as we will see in the following
section (§29.1) – particularly when it comes to the negation rules. But we will also
see later in §33.1 that, strictly speaking, we don’t need the reiteration rule – though for
convenience we will keep it. And once we are able to discharge assumptions, reiteration
carries some risks (§29.3).

2 Grice’s discussion of disjunction is at pp. 44–6 in H P Grice (1989) Studies in the Way of Words, Harvard
University Press; see also §4 of Maria Aloni (2016) ‘Disjunction’ in Edward N Zalta, ed., The Stanford
Encyclopedia of Philosophy https://plato.stanford.edu/entries/disjunction/#DisjConv.

Key Ideas in §28


› Natural deduction gives us rules that tell us how to infer from a
sentence with a certain main connective (elimination rules), and
rules that tell us how to infer to a sentence with a certain main
connective (introduction rules).
› Proofs using the rules so far retain all the assumptions made
during the course of the proof, and so correspond to an argu‐
ment with those assumptions as premises and the final line of
the proof as a conclusion.
› Reiteration is an optional rule, but useful for housekeeping in
proofs.

Practice exercises
A. The following ‘proof’ is incorrect. Explain the mistakes it makes.

1 𝐴 ∧ (𝐵 ∧ 𝐶)

2 (𝐵 ∨ 𝐶) → 𝐷

3 𝐵 ∧E 1

4 𝐵∨𝐶 ∨I 3

5 𝐷 →E 4, 2

B. Is the following purported proof correct?

1 𝐴

2 𝐵

3 𝐴 R1

C. The following proof is missing its commentary. Please supply the correct annota‐
tions on each line that needs one:

1 𝑃∧𝑆

2 𝑃

3 𝑆

4 𝑆→𝑅

5 𝑅

6 𝑅∨𝐸

D. Give natural deduction proofs for the following arguments:

1. 𝑃 ∴ ((𝑃 ∨ 𝑄) ∧ 𝑃);
2. ((𝑃 ∧ 𝑄) ∧ (𝑅 ∧ 𝑃)) ∴ ((𝑅 ∧ 𝑄) ∧ 𝑃);
3. (𝐴 → (𝐴 → 𝐵)), 𝐴 ∴ 𝐵;
4. (𝐵 ↔ (𝐴 ↔ 𝐵)), 𝐵 ∴ 𝐴.
29
Basic Rules for Sentential: Rules with
Subproofs

We’ve already seen in §27 how to start a proof by making assumptions. But the true
power of natural deduction relies on its rules governing when you can make additional
assumptions during the course of the proof, and how you can discharge those assump‐
tions when you no longer need them.

29.1 Additional Assumptions and Subproofs


In natural deduction, both making and discharging additional assumptions are
handled using SUBPROOFS. These are subsidiary proofs within the main proof, which
encapsulate that part of a larger proof that depends on an assumption that is not
among the premises. (Conversely, we can think of the premises of an argument as
those assumptions left undischarged at the conclusion of a proof.)
When we start a subproof, we draw another vertical line (to the right of any existing
assumption lines) to indicate that we are no longer in the main proof. Then we write in
the assumption upon which the subproof will be based. A subproof can be thought of
as essentially posing this question: what could we show, if we also make this additional
assumption? We’ve already seen this in action earlier (§27), when we said that we
could indicate the range of several premises in an argument by either attaching them
all to one vertical assumption line, or introducing a new vertical line for each new
assumption. In that case, we never got rid of the new assumptions: they remained as
premises.
What will be new in this section is that some rules take us back out of a subproof.
So the rules we will now consider are quite different from the rules covered in §28,
none of which have this feature of being able to escape from a previously introduced
assumption.
When we are working within a subproof, we can refer to the additional assumption
that we made in introducing the subproof, and to anything that we obtained from our


original assumptions. (After all, those original assumptions are still in effect.) But at
some point, we shall want to stop working with the additional assumption: we shall
want to return from the subproof to the main proof. To indicate that we have returned
to the main proof, the vertical line for the subproof comes to an end. At this point,
we say that the subproof is CLOSED. Having closed a subproof, we have set aside the
additional assumption, so it will be illegitimate to draw upon anything that depends
upon that additional assumption. This has been implicit in our discussion all along,
but it is good to make it very clear:

Any point in a natural deduction proof is in the range of some currently active assumptions, and the natural deduction rules can be applied only to sentences which do not rely on any other assumptions than those.
Equivalently, any rule can be applied to any earlier lines in a proof, except for those lines which occur within a closed subproof.

Closing a subproof is called DISCHARGING the assumptions of that subproof. So we can put the point this way: at no stage of a proof can you apply a rule to a sentence that
occurs only in the range of an already discharged assumption.
Subproofs, then, allow us to think about what we could show, if we made additional
assumptions. The point to take away from this is not surprising – in the course of
a proof, we have to keep very careful track of what assumptions we are making, at
any given moment. Our proof system does this very graphically, with those vertical
assumption lines that indicate the range of an assumption. (Indeed, that’s precisely
why we have chosen to use this proof system.)
When you discharge an assumption, closing a subproof, you generally introduce some
further new sentence. That sentence can be thought of as a summary of the subproof,
in the context of the other undischarged assumptions. This is particularly evident if
you think about the conditional introduction rule we are about to introduce.
When can we begin a new subproof? Whenever we want. That is the upshot of the
New Assumption rule from §27.3. At any stage in a proof it is legitimate to assume
something new, as long as we begin keeping track of what in our proof rests on this
new assumption. We don’t need any reason to justify making an assumption, but that
doesn’t mean it’s a good idea to introduce them haphazardly. Some guidelines to help
decide when it might be particularly appropriate to make a new assumption are dis‐
cussed in §32.
The idea of opening subproofs by making new assumptions can be used to illustrate a
remark I made above about when a claim depends on assumptions (p. 242). Consider
this proof:

1 (𝑃 ∧ 𝑄)

2 𝑄 ∧E 1

3 𝑅

4 𝑄 R2

In this proof, even though the occurrence of ‘𝑄’ on line 4 occurs within the range of the assumption ‘𝑅’, it does not intuitively depend on it. We used reiteration to show that ‘𝑄’ is still derivable from the active assumptions at line 4, but it does not follow that ‘𝑄’ depends in any robust way on every assumption that is active at line 4.

29.2 Conditional Introduction


To illustrate the use of subproofs, we will begin with the rule of conditional introduc‐
tion. It is fairly easy to motivate informally. The following argument in English should
be valid:

Ludwig is reactionary. Therefore if Ludwig is libertarian, then Ludwig is both reactionary and libertarian.

If someone doubted that this was valid, we might try to convince them otherwise by
explaining ourselves as follows:

Assume that Ludwig is reactionary. Now, additionally assume that Ludwig is libertarian. Then by conjunction introduction, it follows that Ludwig is
both reactionary and libertarian. Of course, that only follows conditional
on the assumption that Ludwig is libertarian. But this just means that, if
Ludwig is libertarian, then he is both reactionary and libertarian – at least,
that follows given our initial assumption that he is reactionary.

This kind of reasoning is vital for understanding conditional claims. As the Cambridge
philosopher Frank Ramsey pointed out:

If two people are arguing ‘If 𝒫 , will 𝒬 ?’ and are both in doubt as to 𝒫 , they
are adding 𝓅 hypothetically to their stock of knowledge and arguing on
that basis about 𝓆….1

Ramsey’s idea is that if we can reach the conclusion that 𝒞 on the basis of the hypo‐
thetical supposition that 𝒜 (generally together with some other assumptions) then we
would be entitled to judge, given the other assumptions alone, that if 𝒜 turns out to
be true, then 𝒞 will also turn out to be true – for short, that if 𝒜 then 𝒞 . This obser‐
vation of Ramsey’s – that conditionals embody the categorical content of hypothetical

1 F P Ramsey (1929), ‘General Propositions and Causality’, at p. 155 in F P Ramsey (1990) Philosophical
Papers, D H Mellor, ed., Cambridge University Press.

reasoning – has been important for many accounts of the English conditional, not all
of them wholly congenial to the idea that ‘if’ is to be understood as ‘→’. Yet the essence
of his idea motivates the conditional introduction rule of natural deduction.
Transferred into natural deduction format, here is the pattern of reasoning that we just
used. We started with one premise, ‘Ludwig is reactionary’, symbolised ‘𝑅’. Thus:

1 𝑅

The next thing we did was to make an additional assumption (‘Ludwig is libertarian’), for the sake of argument. To indicate that we are no longer dealing merely with our
original assumption (‘𝑅’), but with some additional assumption, we continue our proof
as follows:

1 𝑅

2 𝐿

We are not claiming, on line 2, to have proved ‘𝐿’ from line 1. We are just making an‐
other assumption. So we do not need to write in any justification for the additional
assumption on line 2. We do, however, need to mark that it is an additional assump‐
tion. We do this in the usual way, by drawing a line under it (to indicate that it is an
assumption) and by indenting it with a further assumption line (to indicate that it is
additional).
With this extra assumption in place, we are in a position to use ∧I. So we could continue
our proof:

1 𝑅

2 𝐿

3 𝑅∧𝐿 ∧I 1, 2

The two vertical lines to the left of line 3 show that ‘𝑅 ∧ 𝐿’ is in the range of both
assumptions, and indeed depends on them collectively.
So we have now shown that, on the additional assumption, ‘𝐿’, we can obtain ‘𝑅 ∧ 𝐿’.
We can therefore conclude that, if ‘𝐿’ obtains, then so does ‘𝑅 ∧ 𝐿’. Or, to put it more
briefly, we can conclude ‘𝐿 → (𝑅 ∧ 𝐿)’:

1 𝑅

2 𝐿

3 𝑅∧𝐿 ∧I 1, 2

4 𝐿 → (𝑅 ∧ 𝐿) →I 2–3

Observe that we have dropped back to using one vertical line. We are no longer re‐
lying on the additional assumption, ‘𝐿’, since the conditional itself follows just from
our original assumption, ‘𝑅’. The use of conditional introduction has discharged the
temporary assumption, so that the final line of this proof relies only on the initial as‐
sumption ‘𝑅’ – we made use of the assumption ‘𝐿’ only in the nested subproof, and the
range of that assumption is restricted to sentences in that subproof. Note that the con‐
ditional sentence ‘𝐿 → (𝑅 ∧ 𝐿)’ is a summary of what went on in the subproof, given the
undischarged assumption ‘𝑅’: if you made the additional assumption 𝐿, then you could
derive ‘(𝑅 ∧ 𝐿)’.
The general pattern at work here is the following. We first make an additional assumption, 𝒜; and from that additional assumption, we prove ℬ. In that case, we have established the following: if it does in fact turn out that 𝒜, then it also turns out that ℬ. This is wrapped up in the rule for CONDITIONAL INTRODUCTION:

𝑖 𝒜

𝑗 ℬ

(𝒜 → ℬ) →I 𝑖 –𝑗

There can be as many or as few lines as you like between lines 𝑖 and 𝑗. Notice that in our
presentation of the rule, discharging the assumption 𝒜 takes us out of the subproof in
which ℬ is derived from 𝒜 . If 𝒜 is the initial assumption of a proof, then discharging
it may well leave us with a conditional claim that depends on no undischarged assump‐
tions at all. We see an example in this proof, where the main proof, marked by the
leftmost vertical line, features no horizontal line marking an assumption:

1 𝑃∧𝑃

2 𝑃 ∧E 1

3 (𝑃 ∧ 𝑃) → 𝑃 →I 1–2

It might come as no surprise that the conclusion of this proof – being provable from
no undischarged assumptions at all – turns out to be a logical truth.
It will help to offer a further illustration of →I in action. Suppose we want to consider
the following:
𝑃 → 𝑄, 𝑄 → 𝑅 ∴ 𝑃 → 𝑅.
We start by listing both of our premises. Then, since we want to arrive at a conditional
(namely, ‘𝑃 → 𝑅’), we additionally assume the antecedent to that conditional. Thus
our main proof starts:

1 𝑃→𝑄

2 𝑄→𝑅

3 𝑃

Note that we have made ‘𝑃’ available, by treating it as an additional assumption. But
now, we can use →E on the first premise. This will yield ‘𝑄’. And we can then use →E
on the second premise. So, by assuming ‘𝑃’ we were able to prove ‘𝑅’, so we apply the
→I rule – discharging ‘𝑃’ – and finish the proof. Putting all this together, we have:

1 𝑃→𝑄

2 𝑄→𝑅

3 𝑃

4 𝑄 →E 1, 3

5 𝑅 →E 2, 4

6 𝑃→𝑅 →I 3–5
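The validity claim behind this proof can also be checked mechanically, by brute force over all eight valuations. Here is a small sketch in Python (the helper name ‘imp’ is mine, not part of Sentential – it simply encodes the truth table for ‘→’):

```python
from itertools import product

def imp(a, b):
    # Truth table for the material conditional '→': false only when
    # the antecedent is true and the consequent is false.
    return (not a) or b

# Search for a counterexample to P → Q, Q → R ∴ P → R: a valuation
# making both premises true and the conclusion false.
counterexamples = [
    (p, q, r)
    for p, q, r in product([True, False], repeat=3)
    if imp(p, q) and imp(q, r) and not imp(p, r)
]
print(counterexamples)  # → [] : no counterexample, so the argument is valid
```

Since no valuation verifies the premises while falsifying the conclusion, the truth‐table test agrees with the proof.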

Let’s consider another example, this one demonstrating why reiteration can be so use‐
ful in subproofs. We know that 𝑃 ∴ 𝑄 → 𝑃 is a valid argument, from truth‐tables. This
is a proof:

1 𝑃

2 𝑄

3 𝑃 R1

4 (𝑄 → 𝑃) →I 2–3

Note that strictly speaking we needn’t have used reiteration here: the assumption of
‘𝑃’ remains active at line 2, so technically we could apply →I to close the subproof and
introduce the conditional immediately after line 2. But the use of reiteration makes it
much clearer what is going on in the proof – even though all it does is repeat the earlier
assumption and remind us that it is still an active assumption.
We now have all the rules we need to show that the argument on page 237 is valid. Here
is the six line proof, some 175,000 times shorter than the corresponding truth table:

1 𝐴1 → 𝐶1

2 (𝐴1 ∧ 𝐴2 ∧ 𝐴3 ∧ 𝐴4 ∧ 𝐴5 ∧ 𝐴6 ∧ 𝐴7 ∧ 𝐴8 ∧ 𝐴9 ∧ 𝐴10 )

3 𝐴1 ∧E 2

4 𝐶1 →E 1, 3

5 (𝐶1 ∨ 𝐶2 ∨ 𝐶3 ∨ 𝐶4 ∨ 𝐶5 ∨ 𝐶6 ∨ 𝐶7 ∨ 𝐶8 ∨ 𝐶9 ∨ 𝐶10 ) ∨I 4

6 (𝐴1 ∧ 𝐴2 ∧ 𝐴3 ∧ 𝐴4 ∧ 𝐴5 ∧ 𝐴6 ∧ 𝐴7 ∧ 𝐴8 ∧ 𝐴9 ∧ 𝐴10 ) → (𝐶1 ∨ 𝐶2 ∨ 𝐶3 ∨ 𝐶4 ∨ 𝐶5 ∨ 𝐶6 ∨ 𝐶7 ∨ 𝐶8 ∨ 𝐶9 ∨ 𝐶10 ) →I 2–5
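The ‘175,000 times shorter’ figure is easy to verify: the argument involves twenty sentence letters (‘𝐴1’–‘𝐴10’ and ‘𝐶1’–‘𝐶10’), so its full truth table has 2²⁰ rows. Comparing rows of table with lines of proof:

```python
# Twenty sentence letters give 2**20 rows in the full truth table.
rows = 2 ** 20
print(rows)             # → 1048576
print(round(rows / 6))  # → 174763, i.e. roughly 175,000 times the
                        #   six lines of the natural deduction proof
```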

Import‐Export
Our rules so far can be used to demonstrate two important and rather controversial
principles governing the conditional. The principle of IMPORTATION is the claim that
from ‘if 𝑃 then, if also 𝑄, then 𝑅’ it follows that ‘if 𝑃 and also 𝑄, then 𝑅’. The principle
of EXPORTATION is the converse, that from ‘if 𝑃 and also 𝑄, then 𝑅’ it follows that ‘if 𝑃,
then if also 𝑄, then 𝑅’. First, we prove importation holds for our conditional:

1 (𝑃 → (𝑄 → 𝑅))

2 (𝑃 ∧ 𝑄)

3 𝑃 ∧E 2

4 (𝑄 → 𝑅) →E 1, 3

5 𝑄 ∧E 2

6 𝑅 →E 4, 5

7 ((𝑃 ∧ 𝑄) → 𝑅) →I 2–6

Second, we show exportation holds. Here, we need to open two nested subproofs:

1 ((𝑃 ∧ 𝑄) → 𝑅)

2 𝑃

3 𝑄

4 (𝑃 ∧ 𝑄) ∧I 2, 3

5 𝑅 →E 1, 4

6 (𝑄 → 𝑅) →I 3–5

7 (𝑃 → (𝑄 → 𝑅)) →I 2–6
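Taken together, these two proofs show that the two conditionals are interderivable. We can corroborate this semantically: ‘((𝑃 ∧ 𝑄) → 𝑅)’ and ‘(𝑃 → (𝑄 → 𝑅))’ receive the same truth value on every valuation. A quick Python sketch (again, the helper ‘imp’ is mine and just encodes the truth table for ‘→’):

```python
from itertools import product

def imp(a, b):
    return (not a) or b  # material conditional '→'

# '(P ∧ Q) → R' and 'P → (Q → R)' agree on all eight valuations.
agree = all(
    imp(p and q, r) == imp(p, imp(q, r))
    for p, q, r in product([True, False], repeat=3)
)
print(agree)  # → True
```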

29.3 Some Pitfalls of Subproofs


Making additional assumptions in the range of an assumption needs to be handled
with some care, as I said in §29.1. Now that we have a rule that discharges assumptions
in our repertoire, we can describe some of the potential pitfalls.
Consider this proof:

1 𝐴

2 𝐵

3 𝐵∧𝐵 ∧I 2, 2

4 𝐵 ∧E 3

5 𝐵→𝐵 →I 2–4

This is perfectly in keeping with the rules we have laid down already. And it should
not seem particularly strange. Since ‘𝐵 → 𝐵’ is a logical truth, no particular premises
should be required to prove it – note that ‘𝐴’ plays no particular role in the proof apart
from beginning it.
But suppose we now tried to continue the proof as follows:

1 𝐴

2 𝐵

3 𝐵∧𝐵 ∧I 2, 2

4 𝐵 ∧E 3

5 𝐵→𝐵 →I 2–4

6 𝐵 naughty attempt to invoke →E 5, 4

If we were allowed to do this, it would be a disaster. It would allow us to prove any atomic sentence letter from any other atomic sentence letter. But if you tell me that Anne is fast (symbolised by ‘𝐴’), I shouldn’t be able to conclude that Queen Boudica stood twenty feet tall (symbolised by ‘𝐵’)! So we must be prohibited from doing this.
The stipulation in the box on page 260 rules out the disastrous attempted proof above. The rule of →E requires that we cite two individual lines from earlier in the proof. In the
purported proof, above, one of these lines (namely, line 4) occurs within a subproof
that has (by line 6) been closed. This is illegitimate.
A similar problem arises if we forget the restrictions on the rule of reiteration. Recall
from §28.6 that we can reiterate an earlier sentence only if the same assumptions remain
undischarged. If we forget this, we can construct illegal ‘proofs’ such as the following:

1 𝑃

2 𝑄

3 𝑃∧𝑄 ∧I 1, 2

4 𝑄 → (𝑃 ∧ 𝑄) →I 2–3

5 𝑃∧𝑄 R3

6 𝑃 → (𝑃 ∧ 𝑄) →I 1–5

This is certainly not a logical truth. What’s gone wrong is that we reiterated ‘𝑃 ∧ 𝑄’
without retaining the assumptions on which it was dependent. Naturally enough, it
was dependent on both ‘𝑃’ and ‘𝑄’, but it was reiterated into a context where the as‐
sumption ‘𝑄’ had been discharged. This is illegitimate.
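To confirm that ‘𝑃 → (𝑃 ∧ 𝑄)’ is not a logical truth, it is enough to exhibit a valuation on which it is false. A brute‐force search (a Python sketch, not part of our official machinery – ‘imp’ encodes the truth table for ‘→’) finds one:

```python
from itertools import product

def imp(a, b):
    return (not a) or b  # material conditional '→'

# Find every valuation on which 'P → (P ∧ Q)' is false.
countermodels = [
    {'P': p, 'Q': q}
    for p, q in product([True, False], repeat=2)
    if not imp(p, p and q)
]
print(countermodels)  # → [{'P': True, 'Q': False}]
```

So the sentence fails when ‘𝑃’ is true and ‘𝑄’ is false, just as the informal gloss about Anne and Queen Boudica suggests.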

29.4 Subproofs within Subproofs


Once we have started thinking about what we can show by making additional assumptions, nothing stops us from posing the question: what could we show if we were to make even more assumptions? This might motivate us to introduce a subproof within
a subproof. Here is an example which only uses the rules of proof that we have con‐
sidered so far:

1 𝐴

2 𝐵

3 𝐶

4 𝐴∧𝐵 ∧I 1, 2

5 𝐶 → (𝐴 ∧ 𝐵) →I 3–4

6 𝐵 → (𝐶 → (𝐴 ∧ 𝐵)) →I 2–5

Notice that the commentary on line 4 refers back to the initial assumption (on line 1)
and an assumption of a subproof (on line 2). This is perfectly in order, since neither
assumption has been discharged at the time (i.e., by line 4).
Again, though, we need to keep careful track of what we are assuming at any given
moment. For suppose we tried to continue the proof as follows:

1 𝐴

2 𝐵

3 𝐶

4 𝐴∧𝐵 ∧I 1, 2

5 𝐶 → (𝐴 ∧ 𝐵) →I 3–4

6 𝐵 → (𝐶 → (𝐴 ∧ 𝐵)) →I 2–5

7 𝐶 → (𝐴 ∧ 𝐵) naughty attempt to invoke →I 3–4

This would be awful. If I tell you that Anne is smart, you should not be able to derive
that, if Cath is smart (symbolised by ‘𝐶’), then both Anne is smart and Queen Boudica stood twenty feet tall! But this is just what such a proof would suggest, if it were permissible.
The essential problem is that the subproof that began with the assumption ‘𝐶 ’ de‐
pended crucially on the fact that we had assumed ‘𝐵’ on line 2. By line 6, we have
discharged the assumption ‘𝐵’: we have stopped asking ourselves what we could show,
if we also assumed ‘𝐵’. So it is simply cheating, to try to help ourselves (on line 7) to the
subproof that began with the assumption ‘𝐶 ’. The attempted disastrous proof violates,
as before, the rule in the box on page 260. The subproof of lines 3–4 occurs within
a subproof that ends on line 5. Its assumptions are discharged before line 7, so they
cannot be invoked in any rule which applies to produce line 7.
It is always permissible to open a subproof with any assumption. However, there is
some strategy involved in picking a useful assumption. Starting a subproof with an
arbitrary, wacky assumption would just waste lines of the proof. In order to obtain a
conditional by →I, for instance, you must assume the antecedent of the conditional in
a subproof.
Equally, it is always permissible to close a subproof and discharge its assumptions.
However, it will not be helpful to do so, until you have reached something useful.
Recall the proof of the argument

𝑃 → 𝑄, 𝑄 → 𝑅 ∴ 𝑃 → 𝑅

from page 264. One thing to note about the proof there is that because there are two
assumptions with the same range in the main proof, it is not easily possible to discharge
just one of them using the →I rule. For that rule only applies to a one‐assumption
subproof. If we want to discharge another of our assumptions, we shall have to put the proof into the right form, with each assumption made individually as the head of its own subproof:

1 𝑃→𝑄

2 𝑄→𝑅

3 𝑃

4 𝑄 →E 1, 3

5 𝑅 →E 2, 4

6 𝑃→𝑅 →I 3–5

The conclusion is now in the range of both assumptions, as in the earlier proof – but
now it is also possible to discharge these assumptions if we wish:

1 𝑃→𝑄

2 𝑄→𝑅

3 𝑃

4 𝑄 →E 1, 3

5 𝑅 →E 2, 4

6 𝑃→𝑅 →I 3–5

7 (𝑄 → 𝑅) → (𝑃 → 𝑅) →I 2–6

8 (𝑃 → 𝑄) → ((𝑄 → 𝑅) → (𝑃 → 𝑅)) →I 1–7

While it is permissible, and often convenient, to have several assumptions with the
same range and without nesting, I recommend always trying to construct your proofs
so that each assumption begins its own subproof. That way, if you later wish to apply
rules which discharge a single assumption, you may always do so.

29.5 Proofs within Proofs


One interesting feature of a natural deduction system like ours is that because we can
make any assumption at any point, and thereafter continue in accordance with the
rules, any correctly formed proof can be re‐used as a subproof in a later proof. For
example, suppose we wanted to give a proof of this argument:

(𝑃 → 𝑄) ∧ (𝑃 → 𝑅) ∴ (𝑃 → (𝑅 ∧ 𝑄)).

We begin by opening our proof by assuming the premise. We also note that the con‐
clusion is a conditional, and so we’ll assume that it is obtained by an instance of con‐
ditional introduction. That will give us this ‘skeleton’ of a proof before we begin filling
in the details:

1 (𝑃 → 𝑄) ∧ (𝑃 → 𝑅)

2 𝑃

𝑚 (𝑅 ∧ 𝑄) ∧I 6, 5

𝑚+1 (𝑃 → (𝑅 ∧ 𝑄)) →I 2–𝑚

Then we recall – perhaps! – that we already have a proof that looks very much like
this. On page 252 we have a proof that uses the same premise as ours, but also uses
the premise ‘𝑃’ to derive ‘(𝑅 ∧ 𝑄)’ – which is what we need. So we can simply copy that
whole proof over to fill in the missing section of our proof:

1 (𝑃 → 𝑄) ∧ (𝑃 → 𝑅)

2 𝑃

3 (𝑃 → 𝑄) ∧E 1

4 (𝑃 → 𝑅) ∧E 1

5 𝑄 →E 3, 2

6 𝑅 →E 4, 2

7 (𝑅 ∧ 𝑄) ∧I 6, 5

8 (𝑃 → (𝑅 ∧ 𝑄)) →I 2–7

This is a very useful feature: for if you have proved something once, you can re‐use
that proof whenever you need to, as a subproof in some other proof.
The converse isn’t always true: a subproof may draw on assumptions made outside it, and if those assumptions are not also made in the new proof, the transplanted subproof may no longer correctly follow all the rules it uses.

29.6 Biconditional Introduction


The biconditional is like a two‐way conditional. The introduction rule for the bicondi‐
tional resembles two instances of conditional introduction, one for each direction.
In order to prove ‘𝑊 ↔ 𝑋’, for instance, you must be able to prove ‘𝑋’ on the assumption
‘𝑊 ’ and prove ‘𝑊 ’ on the assumption ‘𝑋’. The BICONDITIONAL INTRODUCTION rule (↔I)
therefore requires two subproofs to license the introduction. Schematically, the rule
works like this:

𝑖 𝒜

𝑗 ℬ

𝑘 ℬ

𝑙 𝒜

(𝒜 ↔ ℬ) ↔I 𝑖 –𝑗, 𝑘–𝑙

There can be as many lines as you like between 𝑖 and 𝑗, and as many lines as you like
between 𝑘 and 𝑙 . Moreover, the subproofs can come in any order, and the second
subproof does not need to come immediately after the first. Again, this rule permits
us to discharge assumptions, and the same restrictions on making use of claims derived
in a closed subproof outside of that subproof apply.
We can now prove that a biconditional is like two conjoined conditionals. Using the
conditional and biconditional rules, we can prove that a biconditional entails a con‐
junction of conditionals, and vice versa:

First, from the biconditional to the conjunction of conditionals:

1 (𝐴 ↔ 𝐵)

2 𝐴

3 𝐵 ↔E 1, 2

4 (𝐴 → 𝐵) →I 2–3

5 𝐵

6 𝐴 ↔E 1, 5

7 (𝐵 → 𝐴) →I 5–6

8 (𝐴 → 𝐵) ∧ (𝐵 → 𝐴) ∧I 4, 7

Second, from the conjunction of conditionals back to the biconditional:

1 (𝐴 → 𝐵) ∧ (𝐵 → 𝐴)

2 (𝐴 → 𝐵) ∧E 1

3 𝐴

4 𝐵 →E 2, 3

5 (𝐵 → 𝐴) ∧E 1

6 𝐵

7 𝐴 →E 5, 6

8 (𝐴 ↔ 𝐵) ↔I 3–4, 6–7

Another informative example demonstrates the logical equivalence of ‘((𝑃 ∧ 𝑄) → 𝑅)’ and ‘(𝑃 → (𝑄 → 𝑅))’ given importation and exportation (page 265). We will re‐use
both of our earlier proofs, stitching them together using biconditional introduction in
the final line:

1 (𝑃 → (𝑄 → 𝑅))

2 (𝑃 ∧ 𝑄)

3 𝑃 ∧E 2

4 (𝑄 → 𝑅) →E 1, 3

5 𝑄 ∧E 2

6 𝑅 →E 4, 5

7 ((𝑃 ∧ 𝑄) → 𝑅) →I 2–6

8 ((𝑃 ∧ 𝑄) → 𝑅)

9 𝑃

10 𝑄

11 (𝑃 ∧ 𝑄) ∧I 9, 10

12 𝑅 →E 8, 11

13 (𝑄 → 𝑅) →I 10–12

14 (𝑃 → (𝑄 → 𝑅)) →I 9–13

15 ((𝑃 ∧ 𝑄) → 𝑅) ↔ (𝑃 → (𝑄 → 𝑅)) ↔I 8–14, 1–7

Note the small gap between the nested vertical lines between lines 7 and 8 – that shows
we have two subproofs here, not one. (This is also indicated by the fact that the sentence on line 8 has a horizontal line under it, marking a fresh assumption – a single vertical assumption line never carries two such markers of where its assumptions cease.)
The acceptability of our proof rules is grounded in the fact that they will never lead
us from truth to falsehood. The acceptability of the biconditional introduction rule is
demonstrated by the following correct entailment:

› If 𝒞1 , …, 𝒞𝑛 , 𝒜 ⊨ ℬ and 𝒞1 , …, 𝒞𝑛 , ℬ ⊨ 𝒜 , then 𝒞1 , …, 𝒞𝑛 ⊨ 𝒜 ↔ ℬ.
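We can also spot‐check this pattern on the concrete instance just proved: on every valuation, ‘(𝐴 ↔ 𝐵)’ and ‘(𝐴 → 𝐵) ∧ (𝐵 → 𝐴)’ take the same truth value, which is what the pair of proofs above leads us to expect. A Python sketch (helper names mine):

```python
from itertools import product

def imp(a, b):
    return (not a) or b  # material conditional '→'

# 'A ↔ B' is true exactly when A and B match; compare it with the
# conjunction of the two conditionals on every valuation.
agree = all(
    (a == b) == (imp(a, b) and imp(b, a))
    for a, b in product([True, False], repeat=2)
)
print(agree)  # → True
```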

29.7 Disjunction Elimination


The disjunction elimination rule is slightly trickier than those we’ve seen so far. Sup‐
pose that either Ludwig is reactionary or he is libertarian. What can you conclude?
Not that Ludwig is reactionary; it might be that he is libertarian instead. And equally,
not that Ludwig is libertarian; for he might merely be reactionary. It can be hard to
draw a definite conclusion from a disjunction just by itself.
But suppose that we could somehow show both of the following: first, that Ludwig’s
being reactionary entails that he is an Austrian economist: second, that Ludwig’s being
libertarian also entails that he is an Austrian economist. Then if we know that Ludwig

is either reactionary or libertarian, then we know that, whichever he is, Ludwig is an Austrian economist. This we might call ‘no matter whether’ reasoning: if each of 𝒜 and
ℬ imply 𝒞 , then no matter whether 𝒜 or ℬ, still 𝒞 . Sometimes this kind of reasoning
is called proof by cases, since you start with the assumption that either of two cases
holds, and then show something follows no matter which case is actual.
This insight can be expressed in the following rule, which is our DISJUNCTION ELIMIN‐
ATION (∨E) rule:

𝑚 (𝒜 ∨ ℬ)

𝑖 𝒜

𝑗 𝒞

𝑘 ℬ

𝑙 𝒞

𝒞 ∨E 𝑚, 𝑖 –𝑗, 𝑘–𝑙

This is obviously a bit clunkier to write down than our previous rules, but the point
is fairly simple. Suppose we have some disjunction, 𝒜 ∨ ℬ. Suppose we have two
subproofs, showing us that 𝒞 follows from the assumption that 𝒜 , and that 𝒞 follows
from the assumption that ℬ. Then we can derive 𝒞 itself. As usual, there can be as
many lines as you like between 𝑖 and 𝑗, and as many lines as you like between 𝑘 and 𝑙 .
Moreover, the subproofs and the disjunction can come in any order, and do not have
to be adjacent.
Some examples might help illustrate the rule in action. Consider this argument:

(𝑃 ∧ 𝑄) ∨ (𝑃 ∧ 𝑅) ∴ 𝑃.

An example proof might run thus:



1 (𝑃 ∧ 𝑄) ∨ (𝑃 ∧ 𝑅)

2 𝑃∧𝑄

3 𝑃 ∧E 2

4 𝑃∧𝑅

5 𝑃 ∧E 4

6 𝑃 ∨E 1, 2–3, 4–5

An adaptation of the previous proof can be used to establish a proof for this argument:
(𝑃 ∧ 𝑄) ∨ (𝑃 ∧ 𝑅) ∴ (𝑃 ∧ (𝑄 ∨ 𝑅)).

We begin the cases in the same way as above, but as we continue please note the use
of the disjunction introduction rule to get the last line of each subproof in the right
format to use disjunction elimination.

1 (𝑃 ∧ 𝑄) ∨ (𝑃 ∧ 𝑅)

2 𝑃∧𝑄

3 𝑃 ∧E 2

4 𝑄 ∧E 2

5 (𝑄 ∨ 𝑅) ∨I 4

6 𝑃 ∧ (𝑄 ∨ 𝑅) ∧I 3, 5

7 𝑃∧𝑅

8 𝑃 ∧E 7

9 𝑅 ∧E 7

10 (𝑄 ∨ 𝑅) ∨I 9

11 𝑃 ∧ (𝑄 ∨ 𝑅) ∧I 8, 10

12 𝑃 ∧ (𝑄 ∨ 𝑅) ∨E 1, 2–6, 7–11

Don’t be alarmed if you think that you wouldn’t have been able to come up with this
proof yourself. The ability to come up with novel proofs will come with practice. The
key question at this stage is whether, looking at the proof, you can see that it conforms
with the rules that we have laid down. And that just involves checking every line, and
making sure that it is justified in accordance with the rules we have laid down.
Another slightly tricky example. Consider:
𝐴 ∧ (𝐵 ∨ 𝐶) ∴ (𝐴 ∧ 𝐵) ∨ (𝐴 ∧ 𝐶).

Here is a proof corresponding to this argument:



1 𝐴 ∧ (𝐵 ∨ 𝐶)

2 𝐴 ∧E 1

3 𝐵∨𝐶 ∧E 1

4 𝐵

5 𝐴∧𝐵 ∧I 2, 4

6 (𝐴 ∧ 𝐵) ∨ (𝐴 ∧ 𝐶) ∨I 5

7 𝐶

8 𝐴∧𝐶 ∧I 2, 7

9 (𝐴 ∧ 𝐵) ∨ (𝐴 ∧ 𝐶) ∨I 8

10 (𝐴 ∧ 𝐵) ∨ (𝐴 ∧ 𝐶) ∨E 3, 4–6, 7–9

This disjunction rule is supported by the following valid Sentential argument form:

› If 𝒟1 , …, 𝒟𝑛 , 𝒜 ∨ ℬ, 𝒜 ⊨ 𝒞 and 𝒟1 , …, 𝒟𝑛 , 𝒜 ∨ ℬ, ℬ ⊨ 𝒞, then 𝒟1 , …, 𝒟𝑛 , 𝒜 ∨ ℬ ⊨ 𝒞.
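The last worked example – 𝐴 ∧ (𝐵 ∨ 𝐶) ∴ (𝐴 ∧ 𝐵) ∨ (𝐴 ∧ 𝐶) – can likewise be confirmed by the truth‐table method. A mechanical check in Python (a sketch, not part of our proof system):

```python
from itertools import product

# Search for a valuation making 'A ∧ (B ∨ C)' true and
# '(A ∧ B) ∨ (A ∧ C)' false.
counterexamples = [
    (a, b, c)
    for a, b, c in product([True, False], repeat=3)
    if (a and (b or c)) and not ((a and b) or (a and c))
]
print(counterexamples)  # → [] : the entailment holds
```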

29.8 Negation Introduction


Our negation rules are inspired by the form of reasoning known as reductio (recall page
229). In reductio reasoning, we make an assumption that 𝒜 for the sake of argument,
and show that something contradictory follows from it. Then we can conclude that our
assumption was false, and that its negation must be true. There are lots of approaches
to negation in natural deduction, but all of them stem from this same basic insight:
when an assumption that 𝒜 goes awry, conclude ¬𝒜 .
We see reductio reasoning used a lot in mathematics. For example: Suppose there
is a largest number, call it 𝑛. Since 𝑛 is the largest number, 𝑛 + 1 ⩽ 𝑛. But then,
subtracting 𝑛 from both sides, 1 ⩽ 0. And that is absurd. So there is no largest number.
Here, we make an assumption, for the sake of argument. We derive from it a claim, in
this case that 1 ⩽ 0. And we note that claim is absurd, given what we already know.
So we conclude that the negation of our assumption holds, and cease to rely on the
problematic assumption.
The only claims in logic that it is safe to say are absurd are logical falsehoods. So in a
logical version of reductio reasoning, we will want to show that claims that contradict
one another will be derivable in the range of an assumption, in order to prove the
negation of that assumption. Our NEGATION INTRODUCTION rule fits this pattern very
clearly:

𝑖 𝒜

𝑗 ℬ

𝑘 ¬ℬ

¬𝒜 ¬I 𝑖 –𝑗, 𝑖 –𝑘

Here, we can prove a sentence and its negation both within the range of the assumption
that 𝒜 . So if 𝒜 were assumed, something contradictory would be derivable under
that assumption. (We could apply conjunction introduction to lines 𝑗 and 𝑘 to make
the logical falsehood explicit, but that wouldn’t be strictly necessary.) Since logical
falsehoods are fundamentally unacceptable as the termination of a chain of argument,
we must have begun with an inappropriate starting point when we assumed 𝒜 . So,
in fact, we conclude ¬𝒜 , discharging our erroneous assumption that 𝒜 . There is no
need for the line with ℬ on it to occur before the line with ¬ℬ on it.
Almost always the logical falsehood arises because of a clash between the claim we
assume and some bit of prior knowledge – typically, some claim we have established
earlier in the proof. We will thus make frequent use of the rule of reiteration in ap‐
plications of negation introduction, to get the contradictory claims in the right place
to make the rule easy to apply. Here is an example of the rule in action, showing that
this argument is provable:
𝐴, ¬𝐵 ∴ ¬(𝐴 → 𝐵).

1 𝐴

2 ¬𝐵

3 𝐴→𝐵

4 𝐵 →E 3, 1

5 ¬𝐵 R2

6 ¬(𝐴 → 𝐵) ¬I 3–4, 3–5

Another example, for practice. Let’s prove this argument:

(𝐶 → ¬𝐴)∴(𝐴 → ¬𝐶).

1 (𝐶 → ¬𝐴)

2 𝐴

3 𝐶

4 ¬𝐴 →E 1, 3

5 𝐴 R2

6 ¬𝐶 ¬I 3–5, 3–4

7 (𝐴 → ¬𝐶) →I 2–6

The correctness of the negation introduction rule is demonstrated by this valid Senten‐
tial argument form:

› If 𝒞1 , …, 𝒞𝑛 , 𝒜 ⊨ ℬ and 𝒞1 , …, 𝒞𝑛 , 𝒜 ⊨ ¬ℬ, then 𝒞1 , …, 𝒞𝑛 ⊨ ¬𝒜 .
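For the first instance proved above, the check is straightforward: no valuation makes ‘𝐴’ and ‘¬𝐵’ true while making ‘¬(𝐴 → 𝐵)’ false. In Python (with my helper ‘imp’ encoding the truth table for ‘→’):

```python
from itertools import product

def imp(a, b):
    return (not a) or b  # material conditional '→'

# Premises: A, ¬B.  The conclusion ¬(A → B) is false exactly when
# A → B is true, so a counterexample needs imp(a, b) to hold too.
counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if a and not b and imp(a, b)
]
print(counterexamples)  # → [] : A, ¬B ⊨ ¬(A → B)
```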

29.9 Negation Elimination


The rule of negation introduction is interesting, because it is almost its own elimina‐
tion rule too! Consider this schematic proof:

𝑖 ¬𝒜

𝑗 ℬ

𝑘 ¬ℬ

𝑘+1 ¬¬𝒜 ¬I 𝑖 –𝑗, 𝑖 –𝑘

This proof terminates with a sentence that is logically equivalent to 𝒜 , discharging the assumption that ¬𝒜 because it leads to contradictory conclusions. This looks
awfully close to a rule of negation elimination – if only we could find a way to replace
a doubly‐negated sentence ¬¬𝒜 by the logically equivalent sentence 𝒜 , which would
have eliminated the negation from the problematic assumption ¬𝒜 .
In our system, we approach this problem by the brute force method – we allow
ourselves to use the derivation of contradictory sentences from a negated sentence
to motivate the elimination of that negation. This leads to our rule of NEGATION ELIM‐
INATION:

𝑖 ¬𝒜

𝑗 ℬ

𝑘 ¬ℬ

𝒜 ¬E 𝑖 –𝑗, 𝑖 –𝑘

This is also reductio reasoning, though in this case from a negated assumption. But
again, if the assumption of ¬𝒜 goes awry and allows us to derive contradictory claims
(perhaps given what we’ve already shown), that licenses us to conclude 𝒜 .
With the rule of negation elimination, we can prove some claims that are hard to prove
directly. For example, suppose we wanted to prove an instance of the LAW OF EXCLUDED
MIDDLE, that (𝒜 ∨ ¬𝒜) is true for any sentence 𝒜 . Suppose we aim at proving the
specific instance ‘(𝑃 ∨ ¬𝑃)’. (It’s easy to see that the proof we give can be adapted to
any other instance of the law.) You might initially have thought: it is a disjunction, so
should be proved by disjunction introduction. But we cannot prove either the sentence
letter ‘𝑃’ or its negation from no assumptions – so we could not prove excluded middle
from no assumptions if it was by disjunction introduction from one of its disjuncts.
So we proceed indirectly: we show that supposing the negation of the law of excluded
middle leads to logical falsehood, and conclude it by negation elimination:

1 ¬(𝑃 ∨ ¬𝑃)

2 𝑃

3 (𝑃 ∨ ¬𝑃) ∨I 2

4 ¬(𝑃 ∨ ¬𝑃) R1

5 ¬𝑃 ¬I 2–3, 2–4

6 (𝑃 ∨ ¬𝑃) ∨I 5

7 ¬(𝑃 ∨ ¬𝑃) R1

8 (𝑃 ∨ ¬𝑃) ¬E 1–6, 1–7

One interesting feature of this proof is that one of the contradictory sentences is the
assumption itself. When the assumption that ¬𝒜 goes wrong, it might be because we
have the resources to prove 𝒜 ! Some interesting philosophical controversy surrounds
proofs like this: see §30.3.

To see our negation rules in action, consider:

𝑃 ∴ (𝑃 ∧ 𝐷) ∨ (𝑃 ∧ ¬𝐷).

Here is a proof corresponding with the argument:

1 𝑃

2 ¬((𝑃 ∧ 𝐷) ∨ (𝑃 ∧ ¬𝐷))

3 𝐷

4 (𝑃 ∧ 𝐷) ∧I 1, 3

5 (𝑃 ∧ 𝐷) ∨ (𝑃 ∧ ¬𝐷) ∨I 4

6 ¬𝐷 ¬I 3–5, 3–2

7 (𝑃 ∧ ¬𝐷) ∧I 1, 6

8 (𝑃 ∧ 𝐷) ∨ (𝑃 ∧ ¬𝐷) ∨I 7

9 (𝑃 ∧ 𝐷) ∨ (𝑃 ∧ ¬𝐷) ¬E 2–8

I make two comments. In line 6, the justification cites line 2 which lies outside the
subproof. That is okay, since the application of the rule lies within the range of the
assumption of line 2. In line 9, the justification only cites the subproof from 2 to 8,
rather than two ranges of line numbers. This is because this application of our rule is the special case in which one of the contradictory sentences is the assumption itself, and it would be trivial to derive the assumption from itself.
The negation elimination rule is supported by this valid Sentential argument form:

› If 𝒞1 , …, 𝒞𝑛 , ¬𝒜 ⊨ ℬ and 𝒞1 , …, 𝒞𝑛 , ¬𝒜 ⊨ ¬ℬ, then 𝒞1 , …, 𝒞𝑛 ⊨ 𝒜 .
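Readers with a little programming can check instances of this semantic fact by brute force over valuations. Here is a minimal Python sketch; the `entails` helper and the encoding of sentences as functions from valuations to truth values are my own illustrative choices, not part of the official formalism. It verifies the instance at work in the excluded middle proof above: ¬(𝑃 ∨ ¬𝑃) entails both (𝑃 ∨ ¬𝑃) and its negation, so (𝑃 ∨ ¬𝑃) is entailed by the empty set of premises.

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Brute-force check of Sentential entailment over all valuations."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

lem = lambda v: v['P'] or not v['P']   # (P v ~P)
neg_lem = lambda v: not lem(v)         # ~(P v ~P)

print(entails([neg_lem], lem, ['P']))      # True (vacuously: no valuation satisfies the premise)
print(entails([neg_lem], neg_lem, ['P']))  # True
print(entails([], lem, ['P']))             # True: excluded middle needs no premises
```

Note that the first two checks succeed vacuously, exactly because no valuation makes ¬(𝑃 ∨ ¬𝑃) true; that is what licenses the final, premise-free entailment.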

29.10 Putting it All Together


We have now explained all of the basic rules for the proof system for Sentential. Let’s
return to some of the arguments from §27 with which we began our exploration of this
system, to see how they can be proved. And I will give a third example of a complex
proof that uses many of our rules.

1. One argument we considered earlier was this:

¬(𝐴 ∨ 𝐵) ∴ (¬𝐴 ∧ ¬𝐵).

We can now see that the proof we began to construct can be completed as in Figure 29.1.

1 ¬(𝐴 ∨ 𝐵)

2 𝐴

3 (𝐴 ∨ 𝐵) ∨I 2

4 ¬(𝐴 ∨ 𝐵) R1

5 ¬𝐴 ¬I 2–3, 2–4

6 𝐵

7 (𝐴 ∨ 𝐵) ∨I 6

8 ¬(𝐴 ∨ 𝐵) R1

9 ¬𝐵 ¬I 6–7, 6–8

10 (¬𝐴 ∧ ¬𝐵) ∧I 5, 9

Figure 29.1: Proof of ¬(𝐴 ∨ 𝐵) ∴ (¬𝐴 ∧ ¬𝐵)

2. The second proof we began constructing earlier corresponded to this argument:

(𝐴 ∨ 𝐵), ¬(𝐴 ∧ 𝐶), ¬(𝐵 ∧ ¬𝐷) ∴ (¬𝐶 ∨ 𝐷).

The proof can be completed as in Figure 29.2 on page 281.

3. Finally, Figure 29.3 shows a long proof involving most of our rules in action (page
282).

These three proofs are more complex than the others we’ve considered, because they
involve multiple rules in tandem. You should make sure you understand why each
rule applies where it does, and that the proofs are correct, before you move on. You
probably won’t feel that you are able to construct a proof yourself as yet, and that is
okay. It is important now to see that these are in fact proofs. Some ideas about how
to go about constructing them yourself will be presented in §32. But you will also get
a sense about how to construct complex proofs as you practice constructing simpler
proofs and start to see how they can be slotted together to form larger proofs. There is
no substitute for practice.

1 𝐴∨𝐵

2 ¬(𝐴 ∧ 𝐶)

3 ¬(𝐵 ∧ ¬𝐷)

4 𝐴

5 𝐶

6 𝐴∧𝐶 ∧I 4, 5

7 ¬(𝐴 ∧ 𝐶) R2

8 ¬𝐶 ¬I 5–6, 5–7

9 (¬𝐶 ∨ 𝐷) ∨I 8

10 𝐵

11 ¬𝐷

12 (𝐵 ∧ ¬𝐷) ∧I 10, 11

13 ¬(𝐵 ∧ ¬𝐷) R3

14 𝐷 ¬E 11–12, 11–13

15 (¬𝐶 ∨ 𝐷) ∨I 14

16 (¬𝐶 ∨ 𝐷) ∨E 1, 4–9, 10–15

Figure 29.2: Proof that (𝐴 ∨ 𝐵), ¬(𝐴 ∧ 𝐶), ¬(𝐵 ∧ ¬𝐷) ∴ (¬𝐶 ∨ 𝐷).

Key Ideas in §29


› The rules for our system are summarised on page 371.
› It is important that we keep track of restrictions on when we can
make use of claims derived in a subproof, since those subproofs
may be making use of assumptions we are no longer accepting.
› Our proof rules match the interpretation of Sentential we have
given – they will not permit us to say that some claim is provable
from some assumptions when that claim isn’t entailed by those
assumptions.

1 ¬𝑃 ∨ 𝑄

2 ¬𝑃

3 𝑃

4 ¬𝑄

5 𝑃 ∧ ¬𝑃 ∧I 3, 2

6 𝑃 ∧E 5

7 ¬𝑃 ∧E 5

8 𝑄 ¬E 4–6, 4–7

9 𝑃→𝑄 →I 3–8

10 𝑄

11 𝑃

12 𝑄∧𝑄 ∧I 10, 10

13 𝑄 ∧E 12

14 𝑃→𝑄 →I 11–13

15 𝑃→𝑄 ∨E 1, 2–9, 10–14

16 𝑃→𝑄

17 ¬(¬𝑃 ∨ 𝑄)

18 𝑃

19 𝑄 →E 16, 18

20 ¬𝑃 ∨ 𝑄 ∨I 19

21 ¬𝑃 ¬I 18–17, 18–20

22 ¬𝑃 ∨ 𝑄 ∨I 21

23 ¬𝑃 ∨ 𝑄 ¬E 17–17, 17–22

24 ((¬𝑃 ∨ 𝑄) ↔ (𝑃 → 𝑄)) ↔I 1–15, 16–23

Figure 29.3: A complicated proof



Practice exercises
A. The following ‘proof’ is incorrect. Explain the mistakes it makes.

1 ¬𝐿 → (𝐴 ∧ 𝐿)

2 ¬𝐿

3 𝐴 →E 1, 2

4 𝐿

5 𝐿 ∧ ¬𝐿 ∧I 4, 2

6 ¬𝐴

7 𝐿 →I 5

8 ¬𝐿 →E 5

9 𝐴 ¬I 6–7, 6–8

10 𝐴 ∨E 2–3, 4–9

B. The following proofs are missing their commentaries (rule and line numbers). Add
them, to turn them into bona fide proofs. Additionally, write down the argument that
corresponds to each proof.

1 𝐴→𝐷 1 ¬𝐿 → (𝐽 ∨ 𝐿)

2 𝐴∧𝐵 2 ¬𝐿

3 𝐴 3 𝐽∨𝐿

4 𝐷 4 𝐽

5 𝐷∨𝐸 5 𝐽∧𝐽

6 (𝐴 ∧ 𝐵) → (𝐷 ∨ 𝐸) 6 𝐽

7 𝐿

8 ¬𝐽

9 𝐽

10 𝐽

C. Give a proof representing each of the following arguments:

1. 𝐽 → ¬𝐽 ∴ ¬𝐽
2. 𝑄 → (𝑄 ∧ ¬𝑄) ∴ ¬𝑄
3. 𝐴 → (𝐵 → 𝐶) ∴ (𝐴 ∧ 𝐵) → 𝐶
4. 𝐾∧𝐿 ∴𝐾 ↔𝐿
5. (𝐶 ∧ 𝐷) ∨ 𝐸 ∴ 𝐸 ∨ 𝐷
6. 𝐴 ↔ 𝐵, 𝐵 ↔ 𝐶 ∴ 𝐴 ↔ 𝐶
7. ¬𝐹 → 𝐺, 𝐹 → 𝐻 ∴ 𝐺 ∨ 𝐻
8. (𝑍 ∧ 𝐾) ∨ (𝐾 ∧ 𝑀), 𝐾 → 𝐷 ∴ 𝐷
9. 𝑃 ∧ (𝑄 ∨ 𝑅), 𝑃 → ¬𝑅 ∴ 𝑄 ∨ 𝐸
10. 𝑆 ↔ 𝑇 ∴ 𝑆 ↔ (𝑇 ∨ 𝑆)
11. ¬(𝑃 → 𝑄) ∴ ¬𝑄
12. ¬(𝑃 → 𝑄) ∴ 𝑃

D. For each of the following sentences, construct a natural deduction proof which has
the sentence as its last line, and contains no undischarged assumptions:

1. 𝐽 ↔ (𝐽 ∨ (𝐿 ∧ ¬𝐿))
2. ((𝑃 ∧ 𝑄) ↔ (𝑄 ∧ 𝑃))
3. ((𝑃 → 𝑃) → 𝑄) → 𝑄.
30
Some Philosophical Issues about
Conditionals, Meaning, and Negation

30.1 Conditional Introduction and the English Conditional


We motivated the conditional introduction rule back on page 261 by giving an English
argument, using the English word ‘if’. Now, →I is a stipulated rule for our conditional
connective →; it doesn’t really need motivation since we could simply postulate that
such a rule is part of our formal proof system. It is justified, if justification is needed,
by the Deduction Theorem (a result noted in §11.3). (Likewise, →E may be justified by
a schematic truth table demonstration that 𝒜, 𝒜 → 𝒞 ⊨ 𝒞 .)
But if we are to offer a motivation for our rule in English, then we must be relying on
the plausibility of this English analog of →I, known as CONDITIONAL PROOF:

If you can establish 𝒞 , given the assumption that 𝒜 and perhaps some
supplementary assumptions ℬ1 , …, ℬ𝑛 , then you can establish ‘if 𝒜 , 𝒞 ’
solely on the basis of the assumptions ℬ1 , …, ℬ𝑛 .

Conditional proof captures a significant aspect of the English ‘if’: the way that con‐
ditional helps us neatly summarise reasoning from assumptions, and then store that
reasoning for later use in a conditional form.
But if conditional proof is a good rule for English ‘if’, then we can argue that ‘if’ is
actually synonymous with ‘→’:

Suppose that 𝒜 → 𝒞 . Assume 𝒜 . We can now derive 𝒞 , by →E. But we


have now established 𝒞 on the basis of the assumption 𝒜 , together with
the supplementary assumption 𝒜 → 𝒞 . By conditional proof, then, we can
establish ‘if 𝒜 , 𝒞 ’ on the basis of the supplementary assumption 𝒜 → 𝒞
alone. But that of course means that we can derive (in English) an Eng‐
lish conditional sentence from a Sentential conditional sentence, with the


appropriate interpretation of the constituents 𝒜 and 𝒞 . Since the English


conditional sentence obviously suffices for the Sentential conditional sen‐
tence, we have shown them to be synonymous.

Our discussion in §11.5 seemed to indicate that the English conditional was not syn‐
onymous with ‘→’. That is hard to reconcile with the above argument. Most philo‐
sophers have concluded that, contrary to appearances, conditional proof is not always
a good way of reasoning for English ‘if’. Here is an example which seems to show this,
though it requires some background.
It is clearly valid in English, though rather pointless, to argue from a claim to itself. So
when 𝒞 is some English sentence, 𝒞 ∴ 𝒞 is a valid argument in English. And we cannot
make a valid argument invalid by adding more premises: if premises we already have
are conclusive grounds for the conclusion, adding more premises while keeping the
conclusive grounds cannot make the argument less conclusive. So 𝒞, 𝒜 ∴ 𝒞 is a valid
argument in English.
If conditional proof were a good way of arguing in English, we could convert this valid
argument into another valid argument with this form: 𝒞 ∴ if 𝒜, 𝒞 . But conditional proof would then allow us to convert this valid argument:

174. I will go skiing tomorrow;


175. I break my leg tonight;
So: I will go skiing tomorrow. (which is just repeating 174 again)

into this intuitively invalid argument:

174. I will go skiing tomorrow;


So: If I break my leg tonight, I will go skiing tomorrow.

This is invalid, because even if the premise is true, the conclusion seems to be actually
false. If conditional proof enables us to convert a valid English argument into an invalid
argument, so much the worse for conditional proof.
There is much more to be said about this example. What is beyond doubt is that →I is
a good rule for Sentential, regardless of the fortunes of conditional proof in English. It
does seem that instances of conditional proof in mathematical reasoning are all accept‐
able, which again shows the roots of natural deduction as a formalisation of existing
mathematical practice. This would suggest that → might be a good representation of
mathematical uses of ‘if’.

30.2 Inferentialism
You will have noticed that our rules come in pairs: an introduction rule that tells you
how to introduce a connective into a proof from what you have already, and an elim‐
ination rule that tells you how to remove it in favour of its consequences. These rules
can be justified by consideration of the meanings we assigned to the connectives of
Sentential in the schematic truth tables of §8.3.

But perhaps we should invert this order of justification. After all, the proof rules
already seem to capture (more or less) how ‘and’, ‘or’, ‘not’, ‘if’, and ‘iff’ work in con‐
versation. We make some claims and assumptions. The introduction and elimination
rules summarise how we might proceed in our conversation against the background
of those claims and assumptions. Many philosophers have thought that the meaning
of an expression is entirely governed by how it is or might be used in a conversation by
competent speakers – in slogan form, meaning is fixed by use. If these proof rules de‐
scribe how we might bring an expression into a conversation, and what we may do with
it once it is there, then these proof rules describe the totality of facts on which meaning
depends. The meaning of a connective, according to this INFERENTIALIST picture, is
represented by its introduction and elimination rules – and not by the truth‐function
that a schematic truth table represents. On this view, it is the correctness of the schem‐
atic proof of 𝒜 ∨ ℬ from 𝒜 which explains why the schematic truth table for ‘∨’ has a
T on every row on which at least one of its constituents gets a T.
There is a significant debate on just this issue in the philosophy of language, about the
nature of meaning. Is the meaning of a word what it represents, the view sometimes
called REPRESENTATIONALISM? Or is the meaning of a word, rather, given by some
rules for how to use it, as inferentialism says? We cannot go deeply into this issue
here, but I will say a little. The representationalist view seems to accommodate some
expressions very well: the meaning of a name, for example, seems very plausibly to be
identified with what it names; the meaning of a predicate might be thought of as the
corresponding property. But inferentialism seems more natural as an approach to the
logical connectives:

Anyone who has learnt to perform [conjunction introduction and conjunc‐


tion elimination] knows the meaning of ‘and’, for there is simply nothing
more to knowing the meaning of ‘and’ than being able to perform these
inferences.1

It seems rather unnatural, by contrast, to think that the meaning of ‘and’ is some ab‐
stract mathematical ‘thing’ represented by a truth table.
Can the inferentialist distinguish good systems of rules, such as those governing ‘and’,
from bad systems? The problem is that without appealing to truth tables or the like,
we seem to be committed to the legitimacy of rather problematic connectives. The
most famous example is Prior’s ‘tonk’ governed by these rules:

𝑚 𝒜 𝑚 𝒜 tonk ℬ

⋮ ⋮

𝑛 𝒜 tonk ℬ tonk‐I 𝑚 𝑛 ℬ tonk‐E 𝑚

You will notice that ‘tonk’ has an introduction rule like ‘∨’, and an elimination rule
like ‘∧’. Of course ‘tonk’ is a connective we would not like in a language, since pairing

1 A N Prior (1961) ‘The Runabout Inference‐Ticket’, Analysis 21, p. 38.



the introduction and elimination rules would allow us to prove any arbitrary sentence
from any assumption whatsoever:

1 𝑃

2 𝑃 tonk 𝑄 tonk‐I 1

3 𝑄 tonk‐E 2

If we are to rule out such deviant connectives as ‘tonk’, Prior argues, we have to ac‐
cept that ‘an expression must have some independently determined meaning before
we can discover whether inferences involving it are valid or invalid’ (Prior, op. cit., p.
38). We cannot, that is, accept the inferentialist position that the rules of implication
come first and the meaning comes second. Inferentialists have replied, but we must
unfortunately leave this interesting debate here for now.2
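From the representationalist side, one way to see why ‘tonk’ is hopeless is that no truth-function at all validates both of its rules. The short Python search below is my own illustration (encoding each candidate binary truth-function as a dictionary from pairs of truth values to a truth value); it checks all sixteen binary truth-functions and finds none for which both 𝒜 ⊨ 𝒜 tonk ℬ and 𝒜 tonk ℬ ⊨ ℬ hold.

```python
from itertools import product

PAIRS = list(product([True, False], repeat=2))

def validates_tonk(table):
    """table maps (a, b) to the truth value of 'a tonk b'."""
    tonk_i = all(table[(a, b)] for a, b in PAIRS if a)   # A entails (A tonk B)
    tonk_e = all(b for a, b in PAIRS if table[(a, b)])   # (A tonk B) entails B
    return tonk_i and tonk_e

# All 16 binary truth-functions, one per assignment of outputs to the 4 input pairs.
tables = [dict(zip(PAIRS, outputs))
          for outputs in product([True, False], repeat=4)]
print(any(validates_tonk(t) for t in tables))  # False: no truth-function fits both rules
```

The failure is forced: tonk-I demands that ‘A tonk B’ be true whenever A is, while tonk-E demands that B be true whenever ‘A tonk B’ is, and the case where A is true and B false cannot satisfy both.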

30.3 Constructivism
The proof of excluded middle we saw on p. 278 is an example of an INDIRECT PROOF:
even though the main connective of our conclusion is a disjunction, we don’t establish
it by disjunction introduction. This is typical in fact of reductio reasoning in general:
we show something is true, by showing that an absurdity would be true if it were false.
An influential group of mathematicians are worried by indirect proofs. There is a view
known as CONSTRUCTIVISM which regards mathematical objects as constructed not dis‐
covered. The closely related view known as INTUITIONISM agrees, while offering a par‐
ticular account of the construction as fundamentally deriving from human perception
of the passage of time. The Dutch mathematician L E J Brouwer is most famously as‐
sociated with this view. He says this about the origins of our understanding of the
natural numbers:

…intuitionistic mathematics is an essentially languageless activity of the


mind having its origin in the perception of a move of time. This percep‐
tion of a move of time may be described as the falling apart of a life mo‐
ment into two distinct things, one of which gives way to the other, but
is retained by memory. If the twoity thus born is divested of all quality,
it passes into the empty form of the common substratum of all twoities.
And it is this common substratum, this empty form, which is the basic
intuition of mathematics. (Brouwer 1981, 4–5) 3

Regardless of your view of intuitionism, the idea that mathematical objects are not
pre‐existing inhabitants of some Platonic realm has a lot of appeal.

2 The interested reader might wish to start with this reply to Prior: Nuel D Belnap, Jr (1962) ‘Tonk, Plonk
and Plink’, Analysis 22, pp. 130–34.
3 L E J Brouwer (1981) Brouwer’s Cambridge lectures on intuitionism, D van Dalen, ed., Cambridge Univer‐
sity Press, at pp. 4–5.

Constructivists of all stripes think we shouldn’t accept the law of excluded middle,
because to think that any claim of the form 𝒜 ∨ ¬𝒜 must be true is to think that
there is a pre‐existing fact of the matter as to whether or not 𝒜 – and there may not
be until we have constructed the mathematical objects in question. A mathematical
existence proof, for example showing that a number with a certain property exists,
must – according to the constructivist – consist in a construction of the specific number
in question. We cannot show that some number has a property just by showing that
the supposition that no number has that property leads to contradiction.
When you show that ¬𝒜 leads to absurdity, you have constructed a proof of ¬¬𝒜 , but
that is not the same as a proof of 𝒜 itself. Constructivists thus typically accept the
¬I rule. If you prove ¬𝒜 by showing that the assumption 𝒜 leads to a contradiction,
you have positively constructed an absurdity on the basis of that assumption, and that
proof is acceptable. However, constructivists reject the ¬E rule. A proof showing that
the assumption ¬𝒜 leads to a contradiction amounts only to a positive construction
of ¬¬𝒜 – not a construction of 𝒜 .
Constructive logic has some interesting features that our logical system does not. For
example, constructive logic is typically understood to have the DISJUNCTION PROPERTY:

If 𝒜 ∨ ℬ can be proved from no assumptions, then either 𝒜 can be


proved from no assumptions, or ℬ can be proved from no assump‐
tions.

Our logic clearly lacks the disjunction property, as the proof of excluded middle
demonstrates: ‘𝑃 ∨ ¬𝑃’ can be proved but neither of its disjuncts is provable.
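We can confirm the classical half of this observation directly with truth tables. The Python sketch below is an informal illustration (the `tautology` helper is my own, not part of our formal apparatus): it checks that ‘𝑃 ∨ ¬𝑃’ is true on every valuation, while neither ‘𝑃’ nor ‘¬𝑃’ is.

```python
from itertools import product

def tautology(sentence, atoms):
    """True iff the sentence is true on every valuation of its atoms."""
    return all(sentence(dict(zip(atoms, values)))
               for values in product([True, False], repeat=len(atoms)))

print(tautology(lambda v: v['P'] or not v['P'], ['P']))  # True: the disjunction
print(tautology(lambda v: v['P'], ['P']))                # False: the first disjunct
print(tautology(lambda v: not v['P'], ['P']))            # False: the second disjunct
```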
Constructive and intuitionistic logic embodies an alternative conception of the nature of logic to the one we have advocated. It is typically understood to involve recentering logic
around provability rather than truth. So rather than saying ‘𝑃 ∨ ¬𝑃’ is false, construct‐
ivists deny that it is provable, and then add that mathematical language should be
restricted to what is provable, rather than relying on a Platonistic notion of abstract
unworldly truth. The philosophical potential of this alternative way of thinking about
logic unfortunately takes us beyond the scope of the present course.4

4 A brief account of intuitionistic logic and the central role it gives to provability can be found in §§3.1–3.2
of Rosalie Iemhoff (2020) ‘Intuitionism in the Philosophy of Mathematics’, in Edward N. Zalta, ed., The
Stanford Encyclopedia of Philosophy https://plato.stanford.edu/entries/intuitionism/#BHKInt.

Key Ideas in §30


› The question of how to understand conditionals in natural lan‐
guage is a tricky one. The natural deduction rules we adopt are
suitable to understand the logical conditional ‘→’, but this may
only be an approximation of English ‘if’.
› The philosophical question of whether connectives are given
meaning by their truth tables or by their natural deduction rules
is an interesting one.
› Constructive (or intuitionistic) mathematics is often understood
to call for a revision of our logical proof rules, in particular ¬E.
31
Proof‐Theoretic Concepts

31.1 Provability and the Deduction Theorem


We shall introduce some new vocabulary and notation.

If there is a proof conforming to our natural deduction rules which


ends on a line containing 𝒞 , such that the undischarged assumptions
still in effect on that last line are all among 𝒜1 , 𝒜2 , …, 𝒜𝑛 then we say
that 𝒞 is PROVABLE FROM 𝒜1 , 𝒜2 , …, 𝒜𝑛 . This is abbreviated, in our
metalanguage, like this:

𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢ 𝒞.

Consider this proof:

1 𝐴

2 ¬𝐵 → ¬𝐴

3 ¬𝐵

4 ¬𝐴 →E 2, 3

5 𝐴 R1

6 𝐵 ¬E 3–4, 3–5

The undischarged assumptions are ‘𝐴’ and ‘¬𝐵 → ¬𝐴’ – the assumption ‘¬𝐵’ on line 3
is discharged by the application of negation elimination that leads to the last line, ‘𝐵’.
So this proof shows that 𝐴, ¬𝐵 → ¬𝐴 ⊢ 𝐵.
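If our two turnstiles agree in the way we hope, the corresponding entailment claim 𝐴, ¬𝐵 → ¬𝐴 ⊨ 𝐵 should also hold. The Python sketch below is my own informal cross-check, encoding sentences as functions from valuations to truth values; it confirms the entailment by brute force over all four valuations.

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Brute-force check of Sentential entailment over all valuations."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

a = lambda v: v['A']
nb_to_na = lambda v: v['B'] or not v['A']  # ~B -> ~A, i.e. B or ~A
b = lambda v: v['B']

print(entails([a, nb_to_na], b, ['A', 'B']))  # True: the semantic counterpart holds
```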
The symbol ‘⊢’ is known as the single turnstile. I want to emphasise that this is different
from the double turnstile symbol (‘⊨’) that represents entailment (§23).

› The single turnstile, ‘⊢’, concerns the existence of a certain kind of formal proof
– namely, 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢ 𝒞 claims that there is a formal proof which termin‐
ates with 𝒞 and has among its undischarged assumptions only sentences among
𝒜1 , 𝒜2 , …, 𝒜𝑛 .

› The double turnstile, ‘⊨’, concerns the non‐existence of a certain kind of inter‐
pretation (or valuation, in the special case of Sentential) – namely, that there is
no interpretation making each of 𝒜1 , 𝒜2 , …, 𝒜𝑛 true while making 𝒞 false.

These are very different notions.


However, if we’ve designed our proof system well, we shouldn’t be able to prove a
conclusion from some assumptions unless that conclusion validly follows from those
assumptions. And if we are really fortunate, we should be able to provide a proof
corresponding to any valid argument. (More on this in §38.) But even if our two turn‐
stiles agree on which sentences they relate to other sentences, they still mean different
things. Recall the discussion of coextensive predicates in §21.7 – even if the extensions
of ‘⊢’ and ‘⊨’ are the same, we apply them on quite different grounds. If they coincide
despite being defined so differently, that is some evidence that we are uncovering a
genuine and important relation between sentences, describable in a number of differ‐
ent ways.
A key result, known as the DEDUCTION THEOREM, links the notion of provability with
the conditional:

𝒜1 , …, 𝒜𝑛 , ℬ ⊢ 𝒞 iff 𝒜1 , …, 𝒜𝑛 ⊢ ℬ → 𝒞 .

We can show this result by showing how to convert a proof showing 𝒜1 , …, 𝒜𝑛 , ℬ ⊢ 𝒞


into a proof showing 𝒜1 , …, 𝒜𝑛 ⊢ ℬ → 𝒞 , and vice versa. So first, suppose we have the proof on the left; we can apply conditional introduction to discharge the occurrence of ℬ and prove a conditional with ℬ as antecedent:

1 𝒜1 1 𝒜1

⋮ ⋮

𝑛 𝒜𝑛 𝑛 𝒜𝑛

𝑛+1 ℬ 𝑛+1 ℬ

⋮ ⋮

𝑘 𝒞 𝑘 𝒞

𝑘+1 ℬ→𝒞 →I 𝑛 + 1–𝑘

The other direction is just as easy. Suppose we have the proof on the left, terminating in ℬ → 𝒞 . We can then make a new assumption of ℬ and use conditional elimination to generate a proof terminating in 𝒞 , with that new assumption remaining undischarged.

1 𝒜1 1 𝒜1

⋮ ⋮

𝑛 𝒜𝑛 𝑛 𝒜𝑛

⋮ ⋮

𝑘 ℬ→𝒞 𝑘 ℬ→𝒞

𝑘+1 ℬ

𝑘+2 𝒞 →E 𝑘 , 𝑘 + 1

31.2 Other Proof‐Theoretic Notions


We now introduce a few notions that can be defined in terms of provability. We write

⊢𝒜

to mean that there is a proof of 𝒜 which ends up having no undischarged assumptions. (You can think of it as having no claims on the left hand side of the turnstile – a
proof which has all of its undischarged assumptions among no claims must have no
undischarged assumptions!) We now define:

𝒜 is a THEOREM iff ⊢ 𝒜 .

Just as provability is analogous to entailment (and is, we hope, coextensive with it), so
theoremhood corresponds to logical truth.
To illustrate the idea, suppose I want to prove that ‘¬(𝐺 ∧ ¬𝐺)’ is a theorem. So I must
start my proof without any assumptions. However, since I want to prove a sentence
whose main connective is a negation, I shall want to immediately begin a subproof by
making the additional assumption ‘𝐺 ∧ ¬𝐺 ’ for the sake of argument, and show that
this leads to contradictory consequences. All told, then, the proof looks like this:

1 𝐺 ∧ ¬𝐺

2 𝐺 ∧E 1

3 ¬𝐺 ∧E 1

4 ¬(𝐺 ∧ ¬𝐺) ¬I 1–2, 1–3

We have therefore constructed a proof of ‘¬(𝐺 ∧ ¬𝐺)’ with no (undischarged) assump‐


tions, showing that ‘¬(𝐺 ∧ ¬𝐺)’ is a theorem. This particular theorem is an instance of
what is sometimes called the LAW OF NON‐CONTRADICTION, that for any 𝒜 , ¬(𝒜 ∧¬𝒜).
You can see how the proof above could be adapted to demonstrate the theoremhood

of any instance of the law of non‐contradiction. Simply substitute any sentence 𝒜 for
every occurrence of ‘𝐺 ’ in the above proof, and the transformed proof will remain correct (any internal sentence connectives featuring in 𝒜 aren’t addressed by the proof
rules in that proof).1
Because every proof begins with an assumption, we can only obtain a proof
of a theorem if we discharge that opening assumption with a rule which allows one to
close a subproof: conditional or biconditional introduction, or either of the negation
rules (introduction or elimination):

1 𝑄

2 𝑃

3 ¬𝑄

4 𝑄 R1

5 𝑄 ¬E 3–4, 3–3

6 𝑃→𝑄 →I 2–5

7 𝑄 → (𝑃 → 𝑄) →I 1–6

There is a connection to the deduction theorem here too. Any correct proof of 𝒞 with
one undischarged assumption 𝒜 will demonstrate 𝒜 ⊢ 𝒞 . The deduction theorem
then assures us that ⊢ 𝒜 → 𝒞 . We see just this in the last line of the above proof,
where a proof that 𝑄 ⊢ (𝑃 → 𝑄) is converted to a proof showing that ⊢ 𝑄 → (𝑃 → 𝑄).
But we cannot say that every theorem has a negation, a conditional or a biconditional as
its main connective. For one thing, we could have started with a negated disjunction or
conjunction. For another, once we have a proof of a theorem, we can apply disjunction
or conjunction introduction to its last line: e.g., we could extend the above proof by
conjunction introduction to show that ⊢ ((𝑄 → (𝑃 → 𝑄)) ∧ (𝑄 → (𝑃 → 𝑄))).
To show that something is a theorem, you just have to find a suitable proof. It is typ‐
ically much harder to show that something is not a theorem. To do this, you would
have to demonstrate, not just that certain proof strategies fail, but that no proof is
possible. Even if you fail in trying to prove a sentence in a thousand different ways,
perhaps the proof is just too long and complex for you to make out. Perhaps you just
didn’t try hard enough. Even if you come up with a systematic search strategy to show
that some sentence ℬ isn’t a theorem, there is no guarantee your strategy will yield a
result. Suppose you tried to construct all well‐formed proofs terminating with ℬ from
shortest to longest, aiming to show there is no proof in which all assumptions have
been discharged. As there is no longest proof, there is no guarantee at any stage in this
process that your failure to find such a proof shows there is no such proof. It might
just be that the shortest such proof is longer than any you’ve yet considered. On the

1 We have already seen a proof showing an instance of the law of excluded middle is a theorem in §29.9,
page 278.

other hand, if one of the proofs you construct is a proof of ℬ with no undischarged
assumptions, then you have shown conclusively that it is a theorem, and you can stop your search. Showing that something isn’t a theorem can be harder than showing that
it is, in terms of how many proofs you have to consider. (On the other hand, showing
that something is a logical truth can be harder than showing that it is not, in terms of
how many interpretations you need to consider.)
Here is another new bit of terminology:

Two sentences 𝒜 and ℬ are PROVABLY EQUIVALENT iff each can be


proved from the other; i.e., both 𝒜 ⊢ ℬ and ℬ ⊢ 𝒜 .

Here is a third new bit of terminology:

The sentences 𝒜1 , 𝒜2 , …, 𝒜𝑛 are JOINTLY CONTRARY iff a sentence and


its negation can be proved from them, i.e., for some ℬ, 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢
ℬ and 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢ ¬ℬ. (Sometimes in this case the 𝒜𝑖 s are said
to be PROVABLY INCONSISTENT.)

Equivalently, some sentences are jointly contrary if you can prove a contradiction from
them: 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢ ℬ ∧ ¬ℬ.
It is straightforward to show that some sentences 𝒜1 , 𝒜2 , …, 𝒜𝑛 are jointly contrary (if
they are): you just need to provide two proofs, one terminating in ℬ and the other in
¬ℬ , such that all of the undischarged assumptions in those proofs are among the 𝒜𝑖 s.
Showing that some sentences are not jointly contrary is much harder. It would require
more than just providing a proof or two; it would require showing that no proof of a
certain kind is possible.
Some sentences are jointly contrary iff the negation of their conjunction is a theorem.
Suppose we have these proofs showing the 𝒜𝑖 s to be jointly contrary:

1 𝒜1 1 𝒜1

⋮ ⋮

𝑛 𝒜𝑛 𝑛 𝒜𝑛

⋮ ⋮

𝑘 ℬ 𝑘′ ¬ℬ

These can be adapted to form part of a larger proof:



1 𝒜1 ∧ … ∧ 𝒜𝑛

2 𝒜1 ∧E 1

𝑛+1 𝒜𝑛 ∧E 1

𝑘+1 ℬ from 2–𝑛 + 1

𝑘′ + 𝑖 ¬ℬ from 2–𝑛 + 1

𝑘′ + 𝑖 + 1 ¬(𝒜1 ∧ … ∧ 𝒜𝑛 ) ¬I 1–𝑘 + 1, 1–𝑘 ′ + 𝑖

Conversely, you can extract proofs of 𝒜1 ∧ … ∧ 𝒜𝑛 ⊢ ℬ and 𝒜1 ∧ … ∧ 𝒜𝑛 ⊢ ¬ℬ from


the above proof. The pattern is quite general, since any theorem which is a negated
conjunction will be proved by some application of negation introduction on the original conjunction, which involves showing that the original conjunction includes jointly contrary sentences.
To establish whether these proof‐theoretic properties hold for some sentences requires
us to construct one or two proofs, and to establish that they do not hold requires us
to consider all possible proofs. Table 31.1 summarises the requirements for provability,
contrariety, etc.

31.3 Structural Rules and the Theory of Proofs


Our proofs get their main shape from the proof rules governing the connectives. But
choices we have made about how to build proofs also contribute. These principles
about how to construct proofs give rise to quite abstract and general features of the
notion of provability represented by ‘⊢’, sometimes known as the structural rules gov‐
erning the notion of provability.2

Yes No
theorem? one proof all possible proofs
equivalent? two proofs all possible proofs
jointly contrary? two proofs all possible proofs
provable? one proof all possible proofs

Table 31.1: What we need to establish proof‐theoretic features.

2 For more on structural rules, and the various logics that don’t have all the structural features of our
natural deduction system, see Greg Restall (2018) ‘Substructural Logics’ in Edward N Zalta, ed., The
Stanford Encyclopedia of Philosophy https://plato.stanford.edu/entries/logic‐substructural/.

For example: in our proof system, it does not matter in what order we make as‐
sumptions. These two proofs, distinct in their structure, nevertheless both show that
𝐴, 𝐵 ⊢ (𝐴 ∧ 𝐵).

1 𝐴 1 𝐵

2 𝐵 2 𝐴

3 (𝐴 ∧ 𝐵) ∧I 1, 2 3 (𝐴 ∧ 𝐵) ∧I 2, 1

Recall that our definition of provability says that 𝒜1 , …, 𝒜𝑛 ⊢ ℬ just in case there is a
proof whose undischarged assumptions are all among the 𝒜𝑖 s. No mention is made
of the order of those undischarged assumptions. So while both the proofs above show
that 𝐴, 𝐵 ⊢ (𝐴 ∧ 𝐵), they also both show that 𝐵, 𝐴 ⊢ (𝐴 ∧ 𝐵).
This feature, that you can permute the order of assumptions arbitrarily, is wholly gen‐
eral, and is known as the COMMUTATIVITY of assumptions. That is to say:

A notion of provability satisfies commutativity just in case


𝒜1 , … , ℬ, … , 𝒞, … , 𝒜𝑛 ⊢ 𝒟 iff 𝒜1 , … , 𝒞, … , ℬ, … , 𝒜𝑛 ⊢ 𝒟.

Commutativity and the deduction theorem seem trivial. But they can be surprisingly
powerful. Consider, for example, this trivial proof by reiteration that 𝑃 → 𝑄 ⊢ 𝑃 → 𝑄:

1 𝑃→𝑄

2 𝑃→𝑄 R1

We can then reason as follows:

1. 𝑃 → 𝑄 ⊢ 𝑃 → 𝑄;

2. 𝑃 → 𝑄, 𝑃 ⊢ 𝑄 (by the deduction theorem);

3. 𝑃, 𝑃 → 𝑄 ⊢ 𝑄 (by commutativity);

4. 𝑃 ⊢ (𝑃 → 𝑄) → 𝑄 (by the deduction theorem).

This argument doesn’t construct a formal proof; it just assures you that there will be
one. (One of the exercises asks you to construct the formal proof.)
Here’s another example of a structural feature of our proof system. We allow a given
line of a proof to be reused multiple times, as long as the assumptions on which that
line relies remain undischarged. See this proof that (𝑃 → (𝑃 → 𝑄)) ⊢ (𝑃 → 𝑄):

1 𝑃 → (𝑃 → 𝑄)

2 𝑃

3 𝑃→𝑄 →E 1, 2

4 𝑄 →E 3, 2

5 𝑃→𝑄 →I 2–4

Here we appeal to line 2 multiple times: in eliminating the conditional on line 1 and
the conditional on line 3. No rule governing any of our connectives is associated with
this behaviour: rather, it is built in to the way we allow all of our rules to appeal to any
previous line (as long as the line doesn’t appear in a closed subproof), even if that line
has been appealed to already by some other rule. It is fairly easy to see that the above
proof cannot succeed without multiple appeals to line 2.
This feature of our proof system is known as CONTRACTION: if there is a proof in which
any 𝒜 occurs as an undischarged assumption on two or more distinct lines, there is
also a proof in which one of those assumptions of 𝒜 is removed. More concisely:

A notion of provability satisfies contraction just in case:

𝒜1 , … , ℬ, … , ℬ, … , 𝒜𝑛 ⊢ 𝒞 iff 𝒜1 , … , ℬ, … , 𝒜𝑛 ⊢ 𝒞.

Contraction, or the principle that you can appeal to the same prior line multiple times,
is essentially the same as our proof rule of reiteration. In fact the rule of reiteration is
strictly dispensable (§33.1), in part because we can always appeal instead to the original
sentence multiple times.
The final structural feature I want to point to is that adding additional assumptions
doesn’t undermine provability. This property is called WEAKENING:

A notion of provability satisfies weakening just in case: if 𝒜1 , … , 𝒜𝑛 ⊢ 𝒞


then 𝒜1 , … , 𝒜𝑛 , ℬ ⊢ 𝒞 .

This is a very general feature, because any correct natural deduction proof can be em‐
bedded within an arbitrary additional assumption and still remain correct. So if a
proof with the structure illustrated schematically on the left is correct, showing that
𝒜1 , … , 𝒜𝑛 ⊢ 𝒞 , then so is the proof scheme on the right, which has all the same as‐
sumptions plus the additional assumption ℬ, and so shows 𝒜1 , … , 𝒜𝑛 , ℬ ⊢ 𝒞 :
1 𝒜1

⋮

𝑛 𝒜𝑛

⋮

 𝒞

1 ℬ

2 𝒜1

⋮

𝑛 + 1 𝒜𝑛

⋮

 𝒞
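Weakening has a semantic counterpart too: adding a premise only restricts the valuations that need to be considered, so it can never break an entailment. Here is a brute-force illustration in Python (an independent semantic check on one illustrative instance, 𝑃 ∧ 𝑄 ⊨ 𝑃, not part of the proof system):

```python
from itertools import product

# entails(premises, conclusion): check every valuation of the three sentence
# letters P, Q, R; each premise/conclusion is a function of the three values.
def entails(premises, conclusion):
    for p, q, r in product([True, False], repeat=3):
        if all(prem(p, q, r) for prem in premises) and not conclusion(p, q, r):
            return False  # a valuation satisfies the premises but not the conclusion
    return True

conj = lambda p, q, r: p and q   # P ∧ Q
extra = lambda p, q, r: r        # an unrelated extra premise R
concl = lambda p, q, r: p        # P

# P ∧ Q ⊨ P, and the weakened P ∧ Q, R ⊨ P still holds.
print(entails([conj], concl), entails([conj, extra], concl))  # True True
```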

These three structural principles involved in the construction of our natural deduction proofs
support our decision to define provability as we did. Our definition was: 𝒜1 …𝒜𝑛 ⊢ 𝒞
when there is a proof with undischarged assumptions among the 𝒜𝑖 s. We do not re‐
quire that the undischarged assumptions be exactly the 𝒜𝑖 s, nor that the undischarged
assumptions don’t contain any redundancy, nor that the order in which assumptions
are made in the proof is the same as the order of the sentences on the left side of
the turnstile. We will briefly mention some alternative logics in which some of these
structural rules are abandoned in §39.

Key Ideas in §31


› Our formal proof system allows us to introduce a notion of prov‐
ability, symbolised ‘⊢’. This is distinct from entailment ‘⊨’, but
(if we’ve done our work correctly) they will parallel one another.
› We can introduce further notions in terms of provability: prov‐
able equivalence, joint contrariness, and being a theorem.
› Some features of our notion of provability derive from structural
features of the notion of proof we have employed, such as the
ability to appeal to a proof line multiple times, or to make addi‐
tional assumptions at will.

Practice exercises
A. Give a proof showing that each of the following sentences is a theorem:

1. 𝑂 → 𝑂;
2. 𝐽 ↔ (𝐽 ∨ (𝐿 ∧ ¬𝐿));
3. ((𝐴 → 𝐵) → 𝐴) → 𝐴;
4. ((𝑃 → 𝑃) → 𝑄) → 𝑄;
5. (𝐶 ∧ 𝐷) ↔ (𝐷 ∧ 𝐶) .

B. Provide proofs to show each of the following:

1. 𝑃 ⊢ (𝑃 → 𝑄) → 𝑄
2. 𝐶 → (𝐸 ∧ 𝐺), ¬𝐶 → 𝐺 ⊢ 𝐺
3. 𝑀 ∧ (¬𝑁 → ¬𝑀) ⊢ (𝑁 ∧ 𝑀) ∨ ¬𝑀
4. (𝑍 ∧ 𝐾) ↔ (𝑌 ∧ 𝑀), 𝐷 ∧ (𝐷 → 𝑀) ⊢ 𝑌 → 𝑍
5. (𝑊 ∨ 𝑋) ∨ (𝑌 ∨ 𝑍), 𝑋 → 𝑌, ¬𝑍 ⊢ 𝑊 ∨ 𝑌

C. Show that each of the following pairs of sentences are provably equivalent:

1. 𝑅 ↔ 𝐸, 𝐸 ↔ 𝑅
2. 𝐺 , ¬¬¬¬𝐺
3. 𝑇 → 𝑆, ¬𝑆 → ¬𝑇
4. 𝑈 → 𝐼 , ¬(𝑈 ∧ ¬𝐼)
5. ¬(𝐶 → 𝐷), 𝐶 ∧ ¬𝐷
6. ¬𝐺 ↔ 𝐻, ¬(𝐺 ↔ 𝐻)

D. If you know that 𝒜 ⊢ ℬ, what can you say about (𝒜 ∧𝒞) ⊢ ℬ? What about (𝒜 ∨𝒞) ⊢
ℬ ? Explain your answers.
E. In this section, I claimed that it is just as hard to show that two sentences are not
provably equivalent, as it is to show that a sentence is not a theorem. Why did I claim
this? (Hint: think of a sentence that would be a theorem iff 𝒜 and ℬ were provably
equivalent.)
F. Show that 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢ ℬ ∧ ¬ℬ iff both 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢ ℬ and 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢
¬ℬ .
32
Proof Strategies

There is no simple recipe for proofs, and there is no substitute for practice. Here,
though, are some rules of thumb and strategies to keep in mind.

Work backwards from what you want The ultimate goal is to obtain the conclusion.
Look at the conclusion and ask what the introduction rule is for its main connective.
This gives you an idea of what should happen just before the last line of the proof. Then
you can treat this line as if it were your goal. Ask what you could do to get to this new
goal.
For example: If your conclusion is a conditional 𝒜 → ℬ, plan to use the →I rule. This
requires starting a subproof in which you assume 𝒜 . The subproof ought to end with
ℬ . So, what can you do to get ℬ ?

Work forwards from what you have When you are starting a proof, look at the
premises; later, look at the sentences that you have obtained so far. Think about the
elimination rules for the main operators of these sentences. These will tell you what
your options are.
For a short proof, you might be able to eliminate the premises and introduce the con‐
clusion. A long proof is formally just a number of short proofs linked together, so you
can fill the gap by alternately working back from the conclusion and forward from the
premises.

Try proceeding indirectly If you cannot find a way to show 𝒜 directly, try starting
by assuming ¬𝒜 . If a contradiction follows, then you will be able to obtain 𝒜 by ¬E.
This will often be a good way of proceeding when the conclusion you are aiming at has
a disjunction as its main connective.

Persist These are guidelines, not laws. Try different things. If one approach fails,
then try something else. Remember that if there is one proof, there are many – different
proofs that make use of different ideas.


For example: suppose you tried to follow the idea ‘work backwards from what you want’
in establishing ‘𝑃, 𝑃 → (𝑃 → 𝑄) ⊢ 𝑃 → 𝑄’. You would be tempted to start a subproof
from the assumption ‘𝑃’, and while that proof strategy would eventually succeed, you
would have done better to simply apply →E and terminate after one proof step.
By contrast, suppose you tried to follow the idea ‘work forwards from what you have’
in trying to establish ‘(𝑃 ∨ (𝑄 ∨ (𝑅 ∨ 𝑆))) ⊢ 𝑃 → 𝑃’. You might begin an awkward nested
series of subproofs to apply ∨E to the disjunctive premise. But beginning with the
conclusion might prompt you instead to simply open a subproof from the assumption
𝑃, and the subsequent proof will make no use of the premise at all, as the conclusion
is a theorem.
Neither of these heuristics is sacrosanct. You will get a sense of how to construct proofs
efficiently and fluently with practice. Unfortunately there is no quick substitute for
practice.

Key Ideas in §32


› The nature of natural deduction proofs means that it is some‐
times easier to make progress by applying rules to the assump‐
tions, and at other times easier to try and figure out where a
conclusion could have come from.
› One may have to try a number of different things in the course
of constructing the same proof – there is no simple algorithm to
capture logical reasoning in this system.
33
Derived Rules for Sentential

In §§28–29, we introduced the basic rules of our proof system for Sentential. In this
section, we shall consider some alternative or additional rules for our system.
None of these rules adds anything fundamentally to our system. They are all DERIVED
rules, which means that anything we can prove by using them, we could have proved
using just the rules in our original official system of natural deduction proofs. Any of
these rules is a conservative addition to our proof system, because none of them would
enable us to prove anything we could not already prove. (Adding the rules for ‘tonk’
from §30.2, by contrast, would allow us to prove many new things – any system which
includes those rules is not a conservative extension of our original system of proof
rules.)
But sometimes adding new rules can shorten proofs, or make them more readable
and user‐friendly. And some of them are of interest in their own right, as arguably
independently plausible rules of implication as they stand, or as alternative rules we
could have taken as basic instead.

33.1 Reiteration
The first derived rule is actually one of our main proof rules: reiteration. It turns out
that we need not have assumed a rule of reiteration. We can replace each application of
the reiteration rule on some line 𝑘+1 (reiterating some prior line 𝑚) with the following
combination of moves deploying just the other basic rules of §§28–29:

𝑚 𝒜

𝑘 𝒜∧𝒜 ∧I 𝑚, 𝑚

𝑘+1 𝒜 ∧E 𝑘

To be clear: this is not a proof. Rather, it is a proof scheme. After all, it uses a variable,
𝒜 , rather than a sentence of Sentential. But the point is simple. Whatever sentences


of Sentential we plugged in for 𝒜 , and whatever lines we were working on, we could
produce a legitimate proof. So you can think of this as a recipe for producing proofs.
Indeed, it is a recipe which shows us that, anything we can prove using the rule R,
we can prove (with one more line) using just the basic rules of §§28–29. So we can
describe the rule R as a derived rule, since its justification is derived from our basic
rules.
You might note that in lines 5–7 in the complicated proof in Figure 29.3, we in effect
made use of this proof scheme, introducing a conjunction from prior lines only to immediately eliminate it again, just to ensure that the relevant sentences appeared directly in the range of the assumption ‘𝑄’.
We even have an explanation here about why you can’t reiterate a line from a closed
subproof. If all applications of reiteration are in fact abbreviations of the above schema,
then that restriction on reiteration derives from the more general restriction that we
cannot appeal to a proof line that relies on an assumption that has been discharged.

33.2 Disjunctive Syllogism


Here is a very natural argument form.

Mitt is either in Massachusetts or in DC. He is not in DC. So, he is in


Massachusetts.

This inference pattern is called DISJUNCTIVE SYLLOGISM. We could add it to our proof
system:

𝑚 (𝒜 ∨ ℬ) 𝑚 (𝒜 ∨ ℬ)

𝑛 ¬𝒜 𝑛 ¬ℬ

ℬ DS 𝑚, 𝑛 𝒜 DS 𝑚, 𝑛

This is, if you like, a new rule of disjunction elimination. But there is nothing funda‐
mentally new here. We can emulate the rule of disjunctive syllogism using our basic
proof rules, as the schematic proof in Figure 33.1 indicates.
We have used the rule of reiteration in this schematic proof, but we already know that
any uses of that rule can themselves be replaced by more roundabout proofs using
conjunction introduction and elimination, if required. So adding disjunctive syllogism
would not make any new proofs possible that were not already obtainable in our ori‐
ginal system.
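The entailment that backs disjunctive syllogism – 𝒜 ∨ ℬ, ¬𝒜 ⊨ ℬ – can also be confirmed by checking every valuation, as in this short Python sketch (a semantic check, not a derivation in our system):

```python
from itertools import product

# Look for a valuation where the premises A ∨ B and ¬A are true but B is false.
ds_counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if (a or b) and (not a) and not b
]
print(ds_counterexamples)  # [] — no valuation satisfies the premises but not B
```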

33.3 Modus tollens


Another useful pattern of inference is embodied in the following argument:

𝑚 𝒜∨ℬ

𝑛 ¬𝒜

𝑘 𝒜

𝑘+1 ¬ℬ

𝑘+2 𝒜 R𝑘

𝑘+3 ¬𝒜 R𝑛

𝑘+4 ℬ ¬E 𝑘 + 1–𝑘 + 2, 𝑘 + 1–𝑘 + 3

𝑘+5 ℬ

𝑘+6 ℬ R𝑘+5

𝑘+7 ℬ ∨E 𝑚, 𝑘–𝑘 + 4, 𝑘 + 5–𝑘 + 6

Figure 33.1: Disjunctive syllogism is derivable in the standard proof system.

If Hillary won the election, then she is in the White House. She is not in
the White House. So she did not win the election.

This inference pattern is called MODUS TOLLENS. The corresponding rule is:

𝑚 (𝒜 → ℬ)

𝑛 ¬ℬ

¬𝒜 MT 𝑚, 𝑛

This is, if you like, a new rule of conditional elimination.


This rule is, again, a conservative addition to our stock of proof rules. Any applica‐
tion of it could be emulated by the form of proof using our original rules shown in
Figure 33.2. Again, the schematic proof makes a dispensable use of reiteration.
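As with disjunctive syllogism, the semantic claim behind modus tollens – 𝒜 → ℬ, ¬ℬ ⊨ ¬𝒜 – is easy to confirm mechanically (a Python check of every valuation, independent of the proof system):

```python
from itertools import product

impl = lambda a, b: (not a) or b  # material conditional

# Look for a valuation where A -> B and ¬B hold but ¬A fails (i.e. A is true).
mt_counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if impl(a, b) and (not b) and a
]
print(mt_counterexamples)  # [] — the entailment behind modus tollens holds
```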

33.4 Double Negation Elimination


In Sentential, the double negation ¬¬𝒜 is equivalent to 𝒜 . In natural languages, too,
double negations tend to cancel out – Malcolm is not unaware that his leadership is
under threat iff he is aware that it is. That said, you should be aware that context and

𝑚 𝒜→ℬ

𝑛 ¬ℬ

𝑘 𝒜

𝑘+1 ℬ →E 𝑚, 𝑘

𝑘+2 ¬ℬ R𝑛

𝑘+3 ¬𝒜 ¬I 𝑘–𝑘 + 1, 𝑘–𝑘 + 2

Figure 33.2: Modus tollens is derivable in the standard proof system.

emphasis can prevent them from doing so. Consider: ‘Jane is not not happy’. Arguably,
one cannot derive ‘Jane is happy’, since the first sentence should be understood as
meaning the same as ‘Jane is not unhappy’. This is compatible with ‘Jane is in a state
of profound indifference’. As usual, moving to Sentential forces us to sacrifice certain
nuances of English expressions – we have, in Sentential, just one resource for translating
negative expressions like ‘not’ and the suffix ‘un‐’, even if they are not synonyms in
English.
Obviously we can show that 𝒜 ⊢ ¬¬𝒜 by means of the following proof:

1 𝒜

2 ¬𝒜

3 ¬¬𝒜 ¬I 2–1, 2–2

There is a proof rule that corresponds to the other direction of this equivalence, the
rule of DOUBLE NEGATION ELIMINATION:

𝑖 ¬¬𝒜

𝒜 ¬¬E 𝑖

This rule is redundant, given the proof rules of Sentential:



1 ¬¬𝒜

2 ¬𝒜

3 ¬¬𝒜 R1

4 ¬𝒜 R2

5 𝒜 ¬E 2–4, 2–3

Anything we can prove using the ¬¬E rule can be proved almost as briefly using just
¬E.
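The semantic fact underlying both directions of this equivalence is simply that two applications of negation cancel on either truth value – checkable in one line of Python (again, a check about valuations, not a formal proof):

```python
# ¬¬A and A receive the same truth value on every valuation of A.
double_negation_ok = all((not (not a)) == a for a in (True, False))
print(double_negation_ok)  # True
```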

33.5 Tertium non datur


Suppose that we can show that if it’s sunny outside, then Bill will have brought an
umbrella (for fear of burning). Suppose we can also show that, if it’s not sunny outside,
then Bill will have brought an umbrella (for fear of rain). Well, there is no third way
for the weather to be. So, whatever the weather, Bill will have brought an umbrella.
This line of thinking motivates the following rule:

𝑖 𝒜

𝑗 ℬ

𝑘 ¬𝒜

𝑙 ℬ

ℬ TND 𝑖 –𝑗, 𝑘–𝑙

The rule is sometimes called TERTIUM NON DATUR, which means roughly ‘no third way’.
There can be as many lines as you like between 𝑖 and 𝑗, and as many lines as you like
between 𝑘 and 𝑙 . Moreover, the subproofs can come in any order, and the second
subproof does not need to come immediately after the first.
Tertium non datur can be emulated using just our original proof rules. Figure 33.3 contains a schematic proof which demonstrates this. Once again, a dispensable use of reiteration occurs in this proof, just to make it more readable.
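Semantically, tertium non datur rests on the entailment 𝒜 → ℬ, ¬𝒜 → ℬ ⊨ ℬ: there is no third truth value for 𝒜 to take. A brute-force Python check over the four valuations (an independent semantic check, separate from the formal derivation):

```python
from itertools import product

impl = lambda a, b: (not a) or b  # material conditional

# Look for a valuation where A -> B and ¬A -> B are both true but B is false.
tnd_counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if impl(a, b) and impl(not a, b) and not b
]
print(tnd_counterexamples)  # [] — whatever A's value, B follows
```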

33.6 De Morgan Rules


Our final additional rules are called De Morgan’s Laws. (These are named after the
nineteenth-century logician Augustus De Morgan.) The first two De Morgan rules show
the provable equivalence of a negated conjunction and a disjunction of negations.

𝑖 𝒜

𝑗 ℬ

𝑘 ¬𝒜

𝑙 ℬ

𝑚 𝒜→ℬ →I 𝑖 –𝑗

𝑚+1 ¬𝒜 → ℬ →I 𝑘–𝑙

𝑚+2 ¬ℬ

𝑚+3 𝒜

𝑚+4 ℬ →E 𝑚, 𝑚 + 3

𝑚+5 ¬ℬ R𝑚+2

𝑚+6 ¬𝒜 ¬I 𝑚 + 3–𝑚 + 4, 𝑚 + 3–𝑚 + 5

𝑚+7 ℬ →E 𝑚 + 1, 𝑚 + 6

𝑚+8 ℬ ¬E 𝑚 + 2–𝑚 + 7, 𝑚 + 2–𝑚 + 2

Figure 33.3: Tertium non datur is derivable in the standard proof system.

𝑚 ¬(𝒜 ∧ ℬ) 𝑚 (¬𝒜 ∨ ¬ℬ)

(¬𝒜 ∨ ¬ℬ) DeM 𝑚 ¬(𝒜 ∧ ℬ) DeM 𝑚

The second pair of De Morgan rules are dual to the first pair: they show the provable
equivalence of a negated disjunction and a conjunction of negations.

𝑚 ¬(𝒜 ∨ ℬ) 𝑚 (¬𝒜 ∧ ¬ℬ)

(¬𝒜 ∧ ¬ℬ) DeM 𝑚 ¬(𝒜 ∨ ℬ) DeM 𝑚



The De Morgan rules are no genuine addition to the power of our original natural
deduction system. Here is a demonstration of how we could derive the first De Morgan
rule:

𝑘 ¬(𝒜 ∧ ℬ)

𝑚 ¬(¬𝒜 ∨ ¬ℬ)

𝑚+1 ¬𝒜

𝑚+2 ¬𝒜 ∨ ¬ℬ ∨I 𝑚 + 1

𝑚+3 𝒜 ¬E 𝑚 + 1–𝑚 + 2, 𝑚 + 1–𝑚

𝑚+4 ¬ℬ

𝑚+5 ¬𝒜 ∨ ¬ℬ ∨I 𝑚 + 4

𝑚+6 ℬ ¬E 𝑚 + 4–𝑚 + 5, 𝑚 + 4–𝑚

𝑚+7 𝒜∧ℬ ∧I 𝑚 + 3, 𝑚 + 6

𝑚+8 ¬𝒜 ∨ ¬ℬ ¬E 𝑚–𝑚 + 7, 𝑚–𝑘

Here is a demonstration of how we could derive the second De Morgan rule:

𝑘 ¬𝒜 ∨ ¬ℬ

𝑚 ¬𝒜

𝑚+1 𝒜∧ℬ

𝑚+2 𝒜 ∧E 𝑚 + 1

𝑚+3 ¬(𝒜 ∧ ℬ) ¬I 𝑚 + 1–𝑚 + 2, 𝑚 + 1–𝑚

𝑚+4 ¬ℬ

𝑚+5 𝒜∧ℬ

𝑚+6 ℬ ∧E 𝑚 + 5

𝑚+7 ¬(𝒜 ∧ ℬ) ¬I 𝑚 + 5–𝑚 + 6, 𝑚 + 5–𝑚 + 4

𝑚+8 ¬(𝒜 ∧ ℬ) ∨E 𝑘, 𝑚–𝑚 + 3, 𝑚 + 4–𝑚 + 7

Similar demonstrations can be offered explaining how we could derive the third and
fourth De Morgan rules. These are left as exercises.
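All four De Morgan equivalences can also be confirmed semantically by running through the four valuations of the two sentence letters – a quick Python check (the formal derivations above and in the exercises are what actually license the rules):

```python
from itertools import product

# Check both De Morgan equivalences (hence all four rules, which are just the
# two directions of each) on every valuation of A and B.
for a, b in product([True, False], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))  # ¬(A ∧ B) ≡ ¬A ∨ ¬B
    assert (not (a or b)) == ((not a) and (not b))  # ¬(A ∨ B) ≡ ¬A ∧ ¬B
print("De Morgan equivalences hold on every valuation")
```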
Those mentioned above are all of the additional rules of our proof system for Sentential.

Key Ideas in §33


› Our official system of rules can be augmented by additional rules
that are strictly speaking unnecessary – nothing is provable
with them that couldn’t have been proved without them – but
that can nevertheless be used sometimes to speed up proofs.
› Only make use of derived rules when you are told you may do so.
› Some derived rules – such as the rule of double negation elimin‐
ation – can even be used in place of a rule of our original system,
given a different system but with the same things being provable.

Practice exercises
A. The following proofs are missing their commentaries (rule and line numbers). Add
them wherever they are required: you may use any of the original or derived rules, as
appropriate.

1 𝑍 → (𝐶 ∧ ¬𝑁) 1 𝑊 → ¬𝐵

2 ¬𝑍 → (𝑁 ∧ ¬𝐶) 2 𝐴∧𝑊

3 ¬(𝑁 ∨ 𝐶) 3 𝐵 ∨ (𝐽 ∧ 𝐾)

4 ¬𝑁 ∧ ¬𝐶 4 𝑊

5 ¬𝑁 5 ¬𝐵

6 ¬𝐶 6 𝐽∧𝐾

7 𝑍 7 𝐾

8 𝐶 ∧ ¬𝑁

9 𝐶 1 𝐿 ↔ ¬𝑂

10 ¬𝐶 2 𝐿 ∨ ¬𝑂

11 ¬𝑍 3 ¬𝐿

12 𝑁 ∧ ¬𝐶 4 ¬𝑂

13 𝑁 5 𝐿

14 ¬¬(𝑁 ∨ 𝐶) 6 ¬𝐿

15 𝑁∨𝐶 7 ¬¬𝐿

8 𝐿

B. Give a proof representing each of these arguments; you may use any of the original
or derived rules, as appropriate:

1. 𝐸 ∨ 𝐹 , 𝐹 ∨ 𝐺 , ¬𝐹 ∴ 𝐸 ∧ 𝐺
2. 𝑀 ∨ (𝑁 → 𝑀) ∴ ¬𝑀 → ¬𝑁
3. (𝑀 ∨ 𝑁) ∧ (𝑂 ∨ 𝑃), 𝑁 → 𝑃, ¬𝑃 ∴ 𝑀 ∧ 𝑂
4. (𝑋 ∧ 𝑌) ∨ (𝑋 ∧ 𝑍), ¬(𝑋 ∧ 𝐷), 𝐷 ∨ 𝑀 ∴𝑀

C. Provide proof schemes that justify the addition of the third and fourth De Morgan
rules as derived rules.
D. The proofs you offered in response to question A above used derived rules. Replace
the use of derived rules, in such proofs, with only basic rules. You will find some ‘repe‐
tition’ in the resulting proofs; in such cases, offer a streamlined proof using only basic
rules. (This will give you a sense, both of the power of derived rules, and of how all the
rules interact.)
34
Alternative Proof Systems for Sentential

We’ve now developed a system of proof rules, all of which we have supported by show‐
ing that they correspond to correct entailments of Sentential. We’ve also seen that
these rules allow us to introduce some derived rules, which make proofs shorter and
more convenient but do not allow us to prove anything that we could not have proved
already.
This choice of proof system is not forced on us. There are alternative proof systems
which are nevertheless equivalent to the system we have introduced, in that everything
which is provable in our system is provable in the alternative system, and vice versa.
Indeed, there are lots of alternative systems. In this section, I will discuss just a couple.
The alternative systems I will discuss here result from taking one of our derived rules
as basic, and showing that doing so allows us to derive a formerly basic rule.

34.1 Replacing Negation Elimination by Double Negation Elimination
The first alternative system results from replacing the rule of negation elimination ¬E
by double negation elimination ¬¬E. In combination with the other rules, we can
emulate the results of ¬E by an application of ¬I, followed by a single use of ¬¬E:

𝑖 ¬𝒜

𝑗 ℬ

𝑘 ¬ℬ

𝑘+1 ¬¬𝒜 ¬I 𝑖 –𝑗, 𝑖 –𝑘

𝑘+2 𝒜 ¬¬E 𝑘 + 1


This proof shows that, in a system with ¬I and ¬¬E, we do not need any separate
elimination rule for a single negation – the effect of any such rule could be perfectly
simulated by the above schematic proof. The addition of a single negation elimination
rule would not allow us to prove any more than we already can. So in a sense, the rules
of double negation elimination and negation elimination are equivalent, at least given
the other rules in our system.
Since we’ve shown ¬¬E to be a derived rule in our original system, this alternative
system proves exactly the same things as our original system. The proofs will look
different, but if there is a correct proof of 𝒞 from 𝒜1 , …, 𝒜𝑛 in one system, there will be a
corresponding proof in the other system. Any proof in the one system can be converted
into a proof in the other by replacement of the appropriate instances of the rules.

34.2 Replacing Disjunction Elimination with Disjunctive Syllogism
Another alternative system results from replacing disjunction elimination ∨E (proof
by cases) by disjunctive syllogism DS. We already know that DS is a derived rule: any
proof which uses disjunctive syllogism can be converted into a proof that uses just the
basic rules. So to show the alternative system proves the same things as our original
system, we just need to show that we can emulate the effects of ∨E by using DS.
Recall that a schematic proof that makes use of ∨E has the structure in Figure 34.1. To
transform this into a proof that instead makes use of DS, we are going to use those two
subproofs, but in the scope of an assumption that ¬𝒞 . The schematic proof looks like
that in Figure 34.2. You can see that the proof is the same as the original at lines 𝑖 –𝑗
and 𝑘–𝑙 , which mirror the original two subproofs. But they play a quite different role
now. The subproof deriving 𝒞 from 𝒜 is now used in the service of a reductio of 𝒜 ,
deriving ¬𝒜 on line 𝑗 + 1 in order to put us in a position to apply DS and derive ℬ,
and then derive 𝒞 in line with the original subproof, which conflicts with the reductio
assumption ¬𝒞 on line 2, allowing us to use ¬E to derive the same conclusion as in
the original proof. This shows us that – at least if we have our negation rules (or their
equivalents) – we can replace ∨E with DS and be able to prove all the same things.
There are pros and cons to using DS as our disjunction elimination rule. You can
already see that proof by cases is much clunkier – though perhaps you are not too con‐
cerned (maybe you suspect proof by cases is not much used in natural argumentation
anyway).

Pro Using DS as an elimination rule for ∨ has the nice feature that we eliminate
a disjunction in favour of one of its disjuncts, rather than the unprecedented 𝒞 that
appears as if from nowhere in the original ∨E rule. We also dispense with the use of
subproofs in the statement of the disjunction rules.

Con Adopting DS as a basic rule destroys the nice feature of our standard rules that
only one connective is used in any rule. DS needs both disjunction and negation. We
cannot, therefore, consider a system which lacks negation rules but has disjunction
– the rules are no longer modular. This is not especially important for Sentential, but

1 𝒜∨ℬ

𝑖 𝒜

𝑗 𝒞

𝑘 ℬ

𝑙 𝒞

 𝒞 ∨E 1, 𝑖 –𝑗, 𝑘–𝑙

Figure 34.1: Schematic ∨E proof.

1 𝒜∨ℬ

2 ¬𝒞

𝑖 𝒜

𝑗 𝒞

𝑗+1 ¬𝒜 ¬I 𝑖 –𝑗, 𝑖 –2

𝑘 ℬ DS 1, 𝑗 + 1

𝑙 𝒞

 𝒞 ¬E 2–𝑙 , 2–2

Figure 34.2: Schematic proof using DS
to emulate ∨E.

could be important if you go on to consider other logical systems which may vary the
rules for one connective independently of all the others. DS shackles negation and
disjunction together – even arguments which have, intuitively nothing to do with neg‐
ation, end up having to use negation in their proof. This can be illustrated by consider‐
ing proofs that (𝐴 ∨ 𝐵) ⊢ (𝐵 ∨ 𝐴). The proofs themselves are left for an exercise, but you
can see that the proof in a system which has DS as a basic rule makes unavoidable use
of negation rules, while the proof in our standard system uses only disjunction rules.

34.3 Doing Without Negation Introduction


Strangely enough, we don’t even need our negation introduction rule. Negation elimin‐
ation, in the presence of our conditional and conjunction rules, suffices. The negation
introduction rule discharges an assumption that 𝒜 when it can be shown to lead to
both ℬ and ¬ℬ, and allows us to conclude that ¬𝒜 . To emulate it, we need a rule
that will discharge 𝒜 , and then use a combination of rules to deduce ¬𝒜 . We have a
subproof that shows 𝒜 to lead to contrary sentences, and we don’t want to get rid of
that important information. So the natural discharging rule to appeal to is conditional
introduction: this enables us to capture the content of that subproof for later use. We
cannot introduce a negation to obtain ¬𝒜 , but we can eliminate a negation from ¬¬𝒜 ,
if we can show that ¬¬𝒜 leads to 𝒜 , which then leads to the contrary sentences again.
Here is the whole schematic proof:

𝑘 𝒜

𝑗 ℬ

𝑗+1 ¬ℬ

𝑗+2 ℬ ∧ ¬ℬ ∧I 𝑗, 𝑗 + 1

𝑗+3 𝒜 → (ℬ ∧ ¬ℬ) →I 𝑘–𝑗 + 2

𝑗+4 ¬¬𝒜

𝑗+5 ¬𝒜

𝑗+6 ¬¬𝒜 R𝑗+4

𝑗+7 𝒜 ¬E 𝑗 + 5–𝑗 + 5, 𝑗 + 5–𝑗 + 6

𝑗+8 ℬ ∧ ¬ℬ →E 𝑗 + 3, 𝑗 + 7

𝑗+9 ℬ ∧E 𝑗 + 8

𝑗 + 10 ¬ℬ ∧E 𝑗 + 8

𝑗 + 11 ¬𝒜 ¬E 𝑗 + 4–𝑗 + 9, 𝑗 + 4–𝑗 + 10

One thing to note about this schematic proof is that it is much longer and more com‐
plicated than our negation introduction rule. It also relies essentially on rules for the
conditional and conjunction, violating our desire that each of our rules be ‘pure’ in the
sense that the introduction and elimination of a connective should ideally only involve
sentences with it as the main connective. So we will not be availing ourselves of the
possible economy of getting rid of the negation introduction rule.
Another interesting thing here is that the negation elimination rule seems to be
twinned with the conditional introduction rule. This hints at quite a deep fact, namely,
that negation itself can be understood as a disguised conditional. Some alternative for‐
mulations of sentential logic include a sentential constant, ⊥. This is like a sentence
letter, but it has a constant truth value in every valuation: it is always F. Given this con‐
stant value, we can see that ¬𝒜 and 𝒜 → ⊥ are logically equivalent in such systems:

𝒜 ¬𝒜 𝒜 → ⊥
T F T F F
F T F T F

In this sort of system, we can understand the negation introduction rule as literally a
special case of conditional introduction: if we can show ℬ and ℬ → ⊥ in the scope of
the assumption 𝒜 , then conditional elimination leads to ⊥, and conditional introduc‐
tion gives us 𝒜 → ⊥ while discharging the assumption that 𝒜 . We still need some

equivalent to a negation elimination rule, because we need some way of reducing (𝒜 → ⊥) → ⊥ to the logically equivalent 𝒜 . While it is philosophically interesting
and conceptually elegant to treat negation as implication of a logical falsehood, that
is not our preferred understanding of negation, and we will leave this sort of system
alone.
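The equivalence driving this idea is easy to verify on valuations: with ⊥ constantly false, ¬𝒜 and 𝒜 → ⊥ always agree, and (𝒜 → ⊥) → ⊥ always agrees with 𝒜. In Python (a semantic check only, with ⊥ modelled as the constant False):

```python
BOTTOM = False                    # ⊥: false on every valuation
impl = lambda a, b: (not a) or b  # material conditional

for a in (True, False):
    assert (not a) == impl(a, BOTTOM)          # ¬A agrees with A -> ⊥
    assert a == impl(impl(a, BOTTOM), BOTTOM)  # A agrees with (A -> ⊥) -> ⊥
print("negation behaves exactly like the conditional with consequent ⊥")
```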

Practice exercises
A. Consider an alternative proof system which drops our negation introduction rule,
but adopts tertium non datur (§33.5) in its place. Is this alternative system equivalent
to our standard system – in particular, can you show how to emulate negation intro‐
duction using just negation elimination, tertium non datur (and structural rules like
reiteration)?
B. Construct two proofs showing that (𝐴 ∨ 𝐵) ⊢ (𝐵 ∨ 𝐴), the first using our stand‐
ard natural deduction system, the second using the system which has DS in place of
disjunction elimination. Comment on any points of interest.
Chapter 7

Natural Deduction for Quantifier


35
Basic Rules for Quantifier

35.1 Proof‐Theoretic Concepts in Quantifier


Quantifier makes use of all of the connectives of Sentential. Helpfully, our natural deduc‐
tion proof system for Quantifier will simply import all of the basic rules from chapter
6. (Obviously we will get all of the derived rules for free by doing this, but we won’t
make use of the derived rules.) We will define a correctly formed natural deduction
proof for Quantifier to be a structured sequence of sentences of Quantifier such that
each sentence is either an assumption or follows from the previous sentences by any
of the Sentential rules or by any of the new rules governing the quantifiers and identity
that we will introduce in this chapter.1
The notion of provability of 𝒜 from undischarged assumptions 𝒞1 , …, 𝒞𝑛 was intro‐
duced for Sentential in §31. The rules for Quantifier are different, but the earlier defini‐
tions go through unchanged, once we remember that the notion of proof in Quantifier
means involves a sequence of Quantifier sentences justified by the rules for Sentential
and those for Quantifier.
So in what follows I will make use of the single turnstile ‘⊢’ to mean that there is a
correctly formed proof using only the rules of Sentential and Quantifier. (And once we
introduce the identity rules in §37, I will tacitly assume that proofs can make use of
those rules too in justifying a claim using ‘⊢’.)
Likewise, the notions of theoremhood, provable equivalence, and joint contrariness all
carry over their definitions, because they are defined in terms of the single turnstile.
Some proofs in Quantifier don’t need any new rules. Consider this:

¬(∀𝑥𝑃𝑥 ∨ ∃𝑦𝑃𝑦) ∴ ¬∀𝑥𝑃𝑥.

1 Though there is a category ‘formulae which are not sentences’ in Quantifier, no member of this class
will ever appear in any correctly formed proof.


1 ¬(∀𝑥𝑃𝑥 ∨ ∃𝑦𝑃𝑦)

2 ∀𝑥𝑃𝑥

3 (∀𝑥𝑃𝑥 ∨ ∃𝑦𝑃𝑦) ∨I 2

4 ¬(∀𝑥𝑃𝑥 ∨ ∃𝑦𝑃𝑦) R1

5 ¬∀𝑥𝑃𝑥 ¬I 2–3, 2–4

The sentences on each line are Quantifier sentences that are not sentences of Senten‐
tial, but the main connectives involved are just those governed by the rules we already
introduced to handle Sentential proofs.
However, not every Quantifier sentence has a Sentential connective as its main connect‐
ive. So we will also need some new basic rules to govern the quantifiers, and to govern
the identity sign, to deal with those sentences where the main connective is a quantifier
or where the sentence is an identity predication.

35.2 Universal Elimination


Holding fixed the claim that everything is F, you can conclude that any particular thing
is F. You name it; it’s F. The same is true for many‐place predicates: if every human is
shorter than 3km tall, then Amy is shorter than 3km tall, and Bob is, and Jonquil is,
and everyone else you can name.
Accordingly, the following reasoning should be fine for the corresponding symbolisa‐
tions in Quantifier:

1 ∀𝑥𝑅𝑥𝑥𝑑

2 𝑅𝑎𝑎𝑑 ∀E 1

We obtained line 2 by dropping the universal quantifier and replacing every instance
of ‘𝑥’ with ‘𝑎’. Equally, the following should be allowed:

1 ∀𝑥𝑅𝑥𝑥𝑑

2 𝑅𝑑𝑑𝑑 ∀E 1

We obtained line 2 here by dropping the universal quantifier and replacing every in‐
stance of ‘𝑥’ with ‘𝑑 ’. We could have done the same with any other name we wanted.
This motivates the UNIVERSAL ELIMINATION rule (∀E), using the notation for uniform
substitution we introduced in §22.4:

𝑚 ∀𝓍𝒜

𝒜|𝒸↷𝓍 ∀E 𝑚

Where 𝒸 can be any name.

The intent of the rule is that you can obtain any substitution instance of a universally
quantified formula: replace every occurrence of the free variable 𝓍 in 𝒜 with any
chosen name. (If there are any – the rule is also good when 𝒜 has no free variable,
because then the quantifier ∀𝓍 is redundant.) Remember here that the expression ‘𝒸’
is a metalanguage variable over names: you are not required to replace the variable 𝓍
by the Quantifier name ‘𝑐 ’, but you can select any name you like!
I should emphasise that (as with every elimination rule) you can only apply the ∀E rule
when the universal quantifier is the main connective. Thus the following is outright
banned:

1 (∀𝑥𝐵𝑥 → 𝐵𝑘)

2 (𝐵𝑏 → 𝐵𝑘) naughtily attempting to invoke ∀E 1

This is illegitimate, since ‘∀𝑥 ’ is not the main connective in line 1. (If you need a re‐
minder as to why this sort of inference should be banned, reread §16.)
Here is an example of the rule in action. Suppose we wanted to show that ∀𝑥∀𝑦(𝑅𝑥𝑥 →
𝑅𝑥𝑦), 𝑅𝑎𝑎 ∴ 𝑅𝑎𝑏 is provable. The proof might go like this:

1 ∀𝑥∀𝑦(𝑅𝑥𝑥 → 𝑅𝑥𝑦)

2 𝑅𝑎𝑎

3 ∀𝑦(𝑅𝑎𝑎 → 𝑅𝑎𝑦) ∀E 1

4 𝑅𝑎𝑎 → 𝑅𝑎𝑏 ∀E 3

5 𝑅𝑎𝑏 →E 4, 2

Here on line 3 we substitute the previously used name ‘𝑎’ for the variable ‘𝑥’ in
‘∀𝑦(𝑅𝑥𝑥 → 𝑅𝑥𝑦)’; and then on line 4 we substitute the new name ‘𝑏’ for the variable ‘𝑦’
in ‘𝑅𝑎𝑎 → 𝑅𝑎𝑦’. The rule of universal elimination doesn’t discriminate between new
and old names.
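The substitution operation 𝒜|𝒸↷𝓍 used by this rule is entirely mechanical. Here is a toy Python sketch for the atomic case only (the representation of formulas as predicate-plus-terms tuples is an illustrative assumption, not anything from the book; quantified formulas would additionally need a check that the variable occurrence is free):

```python
# An atomic formula is modelled as a tuple: predicate letter, then terms.
# substitute(formula, name, variable) computes the instance formula|name↷variable:
# every occurrence of the variable among the terms is replaced by the name.
def substitute(formula, name, variable):
    predicate, *terms = formula
    return (predicate, *(name if t == variable else t for t in terms))

# ∀E on ∀x Rxxd licenses any instance Rxxd|c↷x, for any choice of name c:
print(substitute(('R', 'x', 'x', 'd'), 'a', 'x'))  # ('R', 'a', 'a', 'd')
print(substitute(('R', 'x', 'x', 'd'), 'd', 'x'))  # ('R', 'd', 'd', 'd')
```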

35.3 Existential Introduction


Given the assumption that some specific named thing is an F, you can conclude that
something is an F: ‘Sylvester reads, so someone reads’ seems like a conclusive argument.
So we ought to allow the inference from a claim about some particular thing being F,
to a general claim that something or other is F:

1 𝑅𝑎𝑎𝑑

2 ∃𝑥𝑅𝑎𝑎𝑥 ∃I 1

Here, we have replaced the name ‘𝑑 ’ with a variable ‘𝑥’, and then existentially quantified
over it. Equally, we would have allowed:

1 𝑅𝑎𝑎𝑑

2 ∃𝑥𝑅𝑥𝑥𝑑 ∃I 1

Here we have replaced both instances of the name ‘𝑎’ with a variable, and then exist‐
entially generalised.
There are some pitfalls with this description of what we have done. The following ar‐
gument is invalid: ‘Someone loves Alice; so someone is such that someone loves them‐
selves’. So we ought not to be able to conclude ‘∃𝑥∃𝑥𝑅𝑥𝑥 ’ from ‘∃𝑥𝑅𝑥𝑎’. Accordingly,
our rule cannot simply be: replace a name by a variable, and stick a corresponding quantifier out
the front – since that would permit a proof of the invalid argument.
We take our cue from the ∀E rule. This rule says: take a sentence ∀𝓍𝒜 , then we can
remove the quantifier and substitute an arbitrary name for some free variable in the
formula 𝒜 (assuming there is one). The ∃I rule is in some sense a mirror image of
this rule: it allows us to move from a sentence with an arbitrary name – that might be
thought of as the result of substituting a name for a free variable in some formula 𝒜
– to a quantified sentence ∃𝓍𝒜 . So here is how we formulate our rule of EXISTENTIAL
INTRODUCTION:

𝑚 𝒜|𝒸↷𝓍

∃𝓍𝒜 ∃I 𝑚

So really the proof just above should be thought of as concluding
‘∃𝑥𝑅𝑥𝑥𝑑 ’ from ‘𝑅𝑥𝑥𝑑 ’|𝑎↷𝑥 (i.e., ‘𝑅𝑎𝑎𝑑 ’).
If we have this rule, we cannot provide a proof of the invalid argument. For ‘∃𝑥𝑅𝑥𝑎’ is
not a substitution instance of ‘∃𝑥∃𝑥𝑅𝑥𝑥 ’ – both instances of ‘𝑥’ in ‘𝑅𝑥𝑥 ’ are bound by
322 NATURAL DEDUCTION FOR QUANTIFIER

the second existential quantifier, so neither is free to be substituted. So the premise is


not of the right form for the rule of ∃I to apply.
On the other hand, this proof is correct:

1 𝑅𝑎𝑎

2 ∃𝑥𝑅𝑎𝑥 ∃I 1

Why? Because the assumption ‘𝑅𝑎𝑎’ is in fact not only a substitution instance of ‘∃𝑥𝑅𝑥𝑥 ’,
but also a substitution instance of ‘∃𝑥𝑅𝑎𝑥 ’, since ‘𝑅𝑎𝑥 ’|𝑎↷𝑥 is just ‘𝑅𝑎𝑎’ too. So we can
vindicate the intuitively correct argument ‘Narcissus loves himself, so there is someone
who loves Narcissus’.
As we just saw, applying this rule requires some skill in being able to recognise substi‐
tution instances. Thus the following is allowed:

1 𝑅𝑎𝑎𝑑

2 ∃𝑥𝑅𝑥𝑎𝑑 ∃I 1

3 ∃𝑦∃𝑥𝑅𝑥𝑦𝑑 ∃I 2

This is okay, because ‘𝑅𝑎𝑎𝑑 ’ can arise from substitution of ‘𝑎’ for ‘𝑥’ in ‘𝑅𝑥𝑎𝑑 ’, and
‘∃𝑥𝑅𝑥𝑎𝑑 ’ can arise from substitution of ‘𝑎’ for ‘𝑦’ in ‘∃𝑥𝑅𝑥𝑦𝑑 ’. But this is banned:

1 𝑅𝑎𝑎𝑑

2 ∃𝑥𝑅𝑥𝑎𝑑 ∃I 1

3 ∃𝑥∃𝑥𝑅𝑥𝑥𝑑 naughtily attempting to invoke ∃I 2

This is because ‘∃𝑥𝑅𝑥𝑎𝑑 ’ is not a substitution instance of ‘∃𝑥∃𝑥𝑅𝑥𝑥𝑑 ’, since (again) both
occurrences of ‘𝑥’ in ‘𝑅𝑥𝑥𝑑 ’ are already bound and so not available for free substitution.
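The idea that only free occurrences of a variable are available for substitution can be modelled directly. Here is a minimal sketch in Python – my own illustration, not the book's official definition – representing formulas as nested tuples and leaving untouched any variable bound by a quantifier:

```python
def subst(formula, name, var):
    """Replace the free occurrences of var in formula by name."""
    if isinstance(formula, str):
        return name if formula == var else formula
    if formula[0] in ('forall', 'exists'):
        quantifier, v, body = formula
        if v == var:
            return formula  # var is bound here: leave the body untouched
        return (quantifier, v, subst(body, name, var))
    # atomic formula such as ('R', 'x', 'a', 'd'): substitute in each slot
    return tuple(subst(part, name, var) for part in formula)

print(subst(('R', 'x', 'a', 'd'), 'a', 'x'))
# the free 'x' is replaced, giving ('R', 'a', 'a', 'd')
print(subst(('exists', 'x', ('R', 'x', 'x', 'd')), 'a', 'x'))
# returned unchanged: both occurrences of 'x' are bound
```

This mirrors the point just made: substituting ‘𝑎’ for ‘𝑥’ in the already‐quantified ‘∃𝑥𝑅𝑥𝑥𝑑 ’ changes nothing, which is why ‘∃𝑥𝑅𝑥𝑎𝑑 ’ is not a substitution instance of ‘∃𝑥∃𝑥𝑅𝑥𝑥𝑑 ’.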
Here is an example which shows our two proof rules in action, a proof showing that

∀𝑥∀𝑦(𝑅𝑥𝑦 ∧ 𝑅𝑦𝑥) ⊢ ∃𝑥𝑅𝑥𝑥 :

1 ∀𝑥∀𝑦(𝑅𝑥𝑦 ∧ 𝑅𝑦𝑥)

2 ∀𝑦(𝑅𝑎𝑦 ∧ 𝑅𝑦𝑎) ∀E 1

3 (𝑅𝑎𝑎 ∧ 𝑅𝑎𝑎) ∀E 2

4 𝑅𝑎𝑎 ∧E 3

5 ∃𝑥𝑅𝑥𝑥 ∃I 4

For another example, consider this proof of ‘∃𝑥(𝑃𝑥 ∨ ¬𝑃𝑥)’ from no assumptions:

1 ¬(𝑃𝑑 ∨ ¬𝑃𝑑)

2 ¬𝑃𝑑

3 (𝑃𝑑 ∨ ¬𝑃𝑑) ∨I 2

4 ¬(𝑃𝑑 ∨ ¬𝑃𝑑) R1

5 𝑃𝑑 ¬E 2–3, 2–4

6 (𝑃𝑑 ∨ ¬𝑃𝑑) ∨I 5

7 ¬(𝑃𝑑 ∨ ¬𝑃𝑑) R1

8 (𝑃𝑑 ∨ ¬𝑃𝑑) ¬E 1–6, 1–7

9 ∃𝑥(𝑃𝑥 ∨ ¬𝑃𝑥) ∃I 8

One final example, a proof that ∀𝑥∀𝑦(𝑅𝑥𝑦 → 𝑅𝑦𝑥) ⊢ ∀𝑥(∀𝑦𝑅𝑥𝑦 → ∃𝑦𝑅𝑦𝑥):

1 ∀𝑥∀𝑦(𝑅𝑥𝑦 → 𝑅𝑦𝑥)

2 ∀𝑦(𝑅𝑎𝑦 → 𝑅𝑦𝑎) ∀E 1

3 (𝑅𝑎𝑏 → 𝑅𝑏𝑎) ∀E 2

4 ∀𝑦𝑅𝑎𝑦

5 𝑅𝑎𝑏 ∀E 4

6 𝑅𝑏𝑎 →E 3, 5

7 ∃𝑦𝑅𝑦𝑎 ∃I 6

8 (∀𝑦𝑅𝑎𝑦 → ∃𝑦𝑅𝑦𝑎) →I 4–7

9 ∀𝑥(∀𝑦𝑅𝑥𝑦 → ∃𝑦𝑅𝑦𝑥) ∀I 8

35.4 Empty Domains


The following proof combines our two new rules for quantifiers:

1 ∀𝑥𝐹𝑥

2 𝐹𝑎 ∀E 1

3 ∃𝑥𝐹𝑥 ∃I 2

Could this be a bad proof? If anything exists at all, then certainly we can infer that
something is F, from the fact that everything is F. But what if nothing exists at all?

Then it is surely vacuously true that everything is F; however, it ought not follow that
something is F, for there is nothing to be F. So if we claim that, as a matter of logic
alone, ‘∃𝑥𝐹𝑥’ follows from ‘∀𝑥𝐹𝑥 ’, then we are claiming that, as a matter of logic alone,
there is something rather than nothing. This might strike us as a bit odd.
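The oddity shows up vividly if we model ‘everything’ and ‘something’ over finite domains, as one might in Python (an illustration of mine, not part of Quantifier itself). Python's built‐in `all` and `any` agree with the classical truth conditions, and they come apart on the empty domain:

```python
def forall(domain, F):
    return all(F(x) for x in domain)   # vacuously True when domain is empty

def exists(domain, F):
    return any(F(x) for x in domain)   # needs a witness, so False when empty

F = lambda x: True                     # any predicate will do for the point
print(forall([], F))   # True: 'everything is F' holds vacuously
print(exists([], F))   # False: there is nothing to be F
print(forall([1], F) and exists([1], F))  # True: the inference is safe once the domain is nonempty
```

So the inference from ‘∀𝑥𝐹𝑥 ’ to ‘∃𝑥𝐹𝑥 ’ fails exactly when the domain is empty – the case our stipulation rules out.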
Actually, we are already committed to this oddity. In §15, we stipulated that domains in
Quantifier must have at least one member. We then defined a logical truth (of Quantifier)
as a sentence which is true in every interpretation. Since ‘∃𝑥 𝑥 = 𝑥 ’ will be true in every
interpretation, this also had the effect of stipulating that it is a matter of logic that there
is something rather than nothing.
Since it is far from clear that logic should tell us that there must be something rather
than nothing, we might well be cheating a bit here.
If we refuse to cheat, though, then we pay a high cost. Here are three things that we
want to hold on to:

› ∀𝑥𝐹𝑥 ⊢ 𝐹𝑎: after all, that was ∀E.

› 𝐹𝑎 ⊢ ∃𝑥𝐹𝑥: after all, that was ∃I.

› the ability to copy‐and‐paste proofs together: after all, reasoning works by put‐
ting lots of little steps together into rather big chains.

If we get what we want on all three counts, then we have to countenance that ∀𝑥𝐹𝑥 ⊢
∃𝑥𝐹𝑥 . So, if we get what we want on all three counts, the proof system alone tells us
that there is something rather than nothing. And if we refuse to accept that, then we
have to surrender one of the three things that we want to hold on to!
In fact the choice is even starker. Consider this proof:

1 𝐹𝑎

2 𝐹𝑎 R1

3 (𝐹𝑎 → 𝐹𝑎) →I 1–2

4 ∃𝑥(𝐹𝑥 → 𝐹𝑥) ∃I 3

This proof uses only the obvious rule of conditional introduction, and our existential
introduction rule. It terminates, with no undischarged assumptions, in a claim that a
certain thing exists: a thing that is 𝐹 if it is 𝐹 . Again the existence of something is
a theorem of our logic. The real source of the existential commitment here seems to
be the use of the name ‘𝑎’, because our rules implicitly assume that every name has a
referent, and hence as soon as you use a name you assume that there is something in
the domain for the name to latch on to.

Before we start thinking about which to surrender,2 we might want to ask how much
of a cheat this is. Granted, it may make it harder to engage in theological debates
about why there is something rather than nothing. But the rest of the time, we will get
along just fine. So maybe we should just regard our proof system (and Quantifier, more
generally) as having a very slightly limited purview. If we ever want to allow for the
possibility of nothing, then we shall have to cast around for a more complicated proof
system. But for as long as we are content to ignore that possibility, our proof system is
perfectly in order. (As, similarly, is the stipulation that every domain must contain at
least one object.)

35.5 Universal Introduction


Suppose you had shown of each particular thing that it is F (and that there are no other
things to consider). Then you would be justified in claiming that everything is F. This
would motivate the following proof rule. If you had established each and every single
substitution instance of ‘∀𝑥𝐹𝑥 ’, then you can infer ‘∀𝑥𝐹𝑥 ’.
Unfortunately, that rule would be utterly unusable. To establish each and every single
substitution instance would require proving ‘𝐹𝑎’, ‘𝐹𝑏’, …, ‘𝐹𝑗2 ’, …, ‘𝐹𝑟79002 ’, …, and so on.
Indeed, since there are infinitely many names in Quantifier, this process would never
come to an end. So we could never apply that rule. We need to be a bit more cunning
in coming up with our rule for introducing universal quantification.
Our cunning thought will be inspired by considering:

∀𝑥𝐹𝑥 ∴ ∀𝑦𝐹𝑦

This argument should obviously be valid. After all, alphabetical variation in choice of
variables ought to be a matter of taste, and of no logical consequence. But how might
our proof system reflect this? Suppose we begin a proof thus:

1 ∀𝑥𝐹𝑥

2 𝐹𝑎 ∀E 1

We have proved ‘𝐹𝑎’. And, of course, nothing stops us from using the same justification
to prove ‘𝐹𝑏’, ‘𝐹𝑐 ’, …, ‘𝐹𝑗2 ’, …, ‘𝐹𝑟79002 ’, …, and so on until we run out of space, time, or
patience. But reflecting on this, we see that this is a way to prove 𝐹𝒸, for any name
𝒸. And if we can do it for any thing, we should surely be able to say that ‘𝐹 ’ is true of
everything. This therefore justifies us in inferring ‘∀𝑦𝐹𝑦’, thus:

2 In light of the second proof, many will opt for restricting ∃I. If we permit an empty domain, we will
also need ‘empty names’ – names without a referent. When the name 𝒸 is empty, it seems problematic
to conclude from ‘𝒸 is F’ that there is something which is F. (Does ‘Santa Claus drives a flying sleigh’
entail ‘Someone drives a flying sleigh’?) But empty names are not cost‐free; understanding how a name
that doesn’t name anything can have any meaning at all has vexed many philosophers and linguists.

1 ∀𝑥𝐹𝑥

2 𝐹𝑎 ∀E 1

3 ∀𝑦𝐹𝑦 ∀I 2

The crucial thought here is that ‘𝑎’ was just some arbitrary name. There was nothing
special about it – we might have chosen any other name – and still the proof would be
fine. And this crucial thought motivates the universal introduction rule (∀I):

𝑚 𝒜|𝒸↷𝓍

∀𝓍𝒜 ∀I 𝑚

𝒸 must not occur in any undischarged assumption, or elsewhere in 𝒜

A crucial aspect of this rule, though, is bound up in the accompanying constraint. In


English, a name like ‘Sylvester’ can play two roles: it can be introduced as a name for
a specific thing (‘let me dub thee Sylvester’!), or as an arbitrary name, introduced by
this sort of stipulation: let ‘Sylvester’ name some arbitrarily chosen man. The name
doesn’t tell us, when it subsequently appears, whether it was introduced in one way
or the other. But if it was introduced as an arbitrary name, then any conclusions we
draw about this Sylvester aren’t really dependent on the particular arbitrarily chosen
referent – they all depend rather on the stipulation used in introducing the name, and
so (specifically) they will all be consequences of the only fact we know for sure about
this Sylvester, that he is male. If all men are mortal, then an arbitrarily chosen man,
whom we temporarily call ‘Sylvester’, is mortal. If Sylvester is mortal, then there is a
date he will die. But since he was selected arbitrarily, without reference to any further
particulars of his life, then for any man, there exists a date he will die. And that is
appropriate reasoning from a universal generalisation, to another generalisation, via
claims about a specific but arbitrarily chosen person.3
An informal example of this sort of reasoning from arbitrary names is this:

Consider an arbitrary somebody who travelled from London to Munich in


2016. Call them J Doe.

› If J Doe took the train, then they had to go via Paris, and that leg of
the journey alone takes 3 hours.
› If J Doe flew, then they would have spent at least an hour in airport
transfers at each end, even setting aside the flight time itself.

3 The details about how this sort of arbitrary reference works are interesting. A controversial but never‐
theless attractive view of how it might work is Wylie Breckenridge and Ofra Magidor (2012) ‘Arbitrary
Reference’, Philosophical Studies 158, pp. 377–400.

› The other options – driving, walking, etc., – are all even slower.

So J Doe’s journey took over two hours in every possible case. Therefore –
since J Doe is an arbitrary person – every traveller’s journey from London
to Munich in 2016 took over two hours.

We don’t have stipulations like the above to introduce a name as an arbitrary name in
Quantifier. But we do have a way of ensuring that the name has no prior associations
other than those linked to a prior universal generalisation, if we insist that, when the
name is about to be eliminated from the proof, no assumption about what that name
denotes is being relied on. That way, we can know that however it was introduced to
the proof, it was not done in a way that involved making specific assumptions about
whatever the name arbitrarily picks out.

If you can conclude something about a named object that doesn’t in‐
volve making any assumptions about it other than assumptions which
we are making more generally, then you can conclude that same some‐
thing about everything.

The simplest way to ensure that a name is not subject to any specific assumptions is if
the name was introduced by an application of ∀E, as an arbitrary name in the standard
sense. But there are other ways too. In general, what we need is that we are not reasoning
within the range of any undischarged assumption in which the name occurs. If the name has been introduced
without making any assumptions about what it denotes, then we are not relying on
any special features of what the name happens to denote when we conclude that if
this arbitrary thing is F, then everything is F.
Consider the following proof to see how this works in action.

1 ∀𝑥(𝐴𝑥 ∧ 𝐵𝑥)

2 𝐴𝑎 ∧ 𝐵𝑎 ∀E 1

3 𝐴𝑎 ∧E 2

4 ∀𝑥𝐴𝑥 ∀I 3

The crucial step is applying the ∀I rule to the name ‘𝑎’ on the last line. While the name
‘𝑎’ does appear on lines 2 and 3, it doesn’t occur in the assumption – it was introduced
on line 2 as an arbitrary instance of the universal assumption.
This constraint ensures that we are always reasoning at a sufficiently general level. To
see the importance of the constraint in action, consider this terrible argument:

Everyone loves Kylie Minogue; therefore everyone loves themselves.



We might symbolise this obviously invalid inference pattern as:

∀𝑥𝐿𝑥𝑘 ∴ ∀𝑥𝐿𝑥𝑥

Now, suppose we tried to offer a proof that vindicates this argument:

1 ∀𝑥𝐿𝑥𝑘

2 𝐿𝑘𝑘 ∀E 1

3 ∀𝑥𝐿𝑥𝑥 naughtily attempting to invoke ∀I 2

This is not allowed, because ‘𝑘’ occurred already in an undischarged assumption,


namely, on line 1. The crucial point is that, if we have made any assumptions about
the object we are working with (including assumptions embedded in 𝒜 itself), then
we are not reasoning generally enough to license the use of ∀I.
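We can also confirm semantically that the restriction is doing real work. The following sketch – Python, my own illustration, with ‘𝐿’ and ‘𝑘’ as in the symbolisation above – searches the two‐element interpretations for one where the premise ‘∀𝑥𝐿𝑥𝑘 ’ is true and the conclusion ‘∀𝑥𝐿𝑥𝑥 ’ is false:

```python
from itertools import product

domain = [0, 1]
found = None
for bits in product([False, True], repeat=len(domain) ** 2):
    L = dict(zip(product(domain, domain), bits))  # extension of 'L'
    for k in domain:                              # each possible referent for 'k'
        premise = all(L[(x, k)] for x in domain)      # forall x Lxk
        conclusion = all(L[(x, x)] for x in domain)   # forall x Lxx
        if premise and not conclusion:
            found = (L, k)
print(found is not None)  # True: a counter-interpretation exists
```

Since a counter‐interpretation exists, any proof system that licensed the ‘naughty’ step above would be unsound; the constraint on ∀I is what blocks it.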
Although the name may not occur in any undischarged assumption, it may occur in a
discharged assumption. That is, it may occur in a subproof that we have already closed.
For example:

1 𝐺𝑑

2 𝐺𝑑 R1

3 𝐺𝑑 → 𝐺𝑑 →I 1–2

4 ∀𝑧(𝐺𝑧 → 𝐺𝑧) ∀I 3

This tells us that ‘∀𝑧(𝐺𝑧 → 𝐺𝑧)’ is a theorem. And that is as it should be.
Here is another proof featuring an application of ∀I after discharging an assumption
about some name ‘𝑎’:

1 𝐹𝑎 ∧ ¬𝐹𝑎

2 𝐹𝑎 ∧E 1

3 ¬𝐹𝑎 ∧E 1

4 ¬(𝐹𝑎 ∧ ¬𝐹𝑎) ¬I 1–2, 1–3

5 ∀𝑥¬(𝐹𝑥 ∧ ¬𝐹𝑥) ∀I 4

Here we were able to derive that something could not be true of 𝑎, no matter what 𝑎
is. We cannot make a coherent assumption that 𝑎 is both 𝐹 and isn't 𝐹 , so it doesn't
really matter what ‘𝑎’ denotes. The open sentence ‘(𝐹𝑥 ∧ ¬𝐹𝑥)’ could not be true of
anything at all. That is why we are entitled to discharge that assumption; any subsequent
use of ‘𝑎’ in the proof depends not on particular facts about this 𝑎, but on features
shared by anything at all, including whatever it is that ‘𝑎’ happens to pick out.

You might wish to recall the proof of ‘∃𝑥(𝑃𝑥 ∨ ¬𝑃𝑥)’ from page 323. Note that, by
the second‐last line, we had already discharged any assumption which relied on the
specific name chosen (in that case, ‘𝑑 ’). The existential introduction rule has no con‐
straints on it, so that it was not necessary to discharge any assumptions using the name
before applying that rule. But we see now that, since those assumptions were in fact
discharged, we could have applied universal introduction at that second last line, to
yield a proof of ‘∀𝑥(𝑃𝑥 ∨ ¬𝑃𝑥)’.
We can also use our universal rules together to show some things about how quantifier
order doesn’t matter, when the strings of quantifiers are of the same type. For example
∀𝑥∀𝑦∀𝑧𝑆𝑦𝑥𝑧 ∴ ∀𝑧∀𝑦∀𝑥𝑆𝑦𝑥𝑧

can be proved as follows:

1 ∀𝑥∀𝑦∀𝑧𝑆𝑦𝑥𝑧

2 ∀𝑦∀𝑧𝑆𝑦𝑎𝑧 ∀E 1

3 ∀𝑧𝑆𝑏𝑎𝑧 ∀E 2

4 𝑆𝑏𝑎𝑐 ∀E 3

5 ∀𝑥𝑆𝑏𝑥𝑐 ∀I 4

6 ∀𝑦∀𝑥𝑆𝑦𝑥𝑐 ∀I 5

7 ∀𝑧∀𝑦∀𝑥𝑆𝑦𝑥𝑧 ∀I 6

Here we successively eliminate the quantifiers in favour of arbitrarily chosen names,


and then reintroduce the quantifiers (though in a different order). The only undis‐
charged assumption throughout the proof is the first line, with no names at all, so all
of the uses of universal introduction are acceptable.

35.6 Existential Elimination


Suppose we know that something is F. The problem is that simply knowing this does
not tell us which thing is F. So it would seem that from ‘∃𝑥𝐹𝑥’ we cannot immediately
conclude ‘𝐹𝑎’, ‘𝐹𝑒23 ’, or any other substitution instance of the sentence. What can we
do?
Suppose we know that something is F, and that everything which is F is G. In (almost)
natural English, we might reason thus:

Since something is F, there is some particular thing which is an F. We do


not know anything about it, other than that it’s an F, but for convenience,
let’s call it ‘Obbie’. So: Obbie is F. Since everything which is F is G, it follows
that Obbie is G. But since Obbie is G, it follows that something is G. And
nothing depended on which object, exactly, Obbie was – no matter which
F we picked for ‘Obbie’ to denote, it would have been G. So, as long as
something is F, then something is G.

This is a kind of generic proof by cases – a generalisation of the rule of disjunction


elimination. This is because an existential claim is actually kind of like a generalised
disjunction: ∃𝓍ℱ is true iff either the first thing in the domain is ℱ , or the second
thing in the domain is ℱ , or…. Of course it cannot really be a disjunction, since in large
domains there is no way to even enumerate all the individuals, let alone construct an
infinite sentence disjoining the claims that each of them is ℱ .
Rather than attempting to push the analogy with proof by cases too far, and attempting
to enumerate all possible cases of F, we can make use of the device of arbitrary names
again to reason generically about all those cases without having to enumerate them.
Just like a proof by cases, we eliminate our existential assumption by showing that
some one thing follows from each potential case, which here involves showing that
thing follows from the generic hypothesis about an arbitrary F.
We try to capture this reasoning pattern in a proof as follows:

1 ∃𝑥𝐹𝑥

2 ∀𝑥(𝐹𝑥 → 𝐺𝑥)

3 𝐹𝑜

4 𝐹𝑜 → 𝐺𝑜 ∀E 2

5 𝐺𝑜 →E 4, 3

6 ∃𝑥𝐺𝑥 ∃I 5

7 ∃𝑥𝐺𝑥 ∃E 1, 3–6

Breaking this down: we started by writing down our assumptions. At line 3, we made
an additional assumption: ‘𝐹𝑜’. This was just a substitution instance of ‘∃𝑥𝐹𝑥’. On this
assumption, we established ‘∃𝑥𝐺𝑥’. But note that we had made no special assumptions
about the object named by ‘𝑜’; we had only assumed that it satisfies ‘𝐹𝑥’. So nothing
depends upon which object it is. And line 1 told us that something satisfies ‘𝐹𝑥’. So our
reasoning pattern was perfectly general. We can discharge the specific assumption ‘𝐹𝑜’,
and simply infer ‘∃𝑥𝐺𝑥’ on its own.
Putting this together, we obtain the existential elimination rule (∃E):

𝑚 ∃𝓍𝒜

𝑖 𝒜|𝒸↷𝓍

𝑗 ℬ

ℬ ∃E 𝑚, 𝑖 –𝑗

𝒸 must not occur in any assumption undischarged before line 𝑖


𝒸 must not occur in ∃𝓍𝒜
𝒸 must not occur in ℬ

As with universal introduction, the constraints are extremely important. To see why,
consider the following terrible argument:

Tim Button is a lecturer. There is someone who is not a lecturer. So Tim


Button is both a lecturer and not a lecturer.

We might symbolise this obviously invalid inference pattern as follows:

𝐿𝑏, ∃𝑥¬𝐿𝑥 ∴ 𝐿𝑏 ∧ ¬𝐿𝑏

Now, suppose we tried to offer a proof that vindicates this argument:

1 𝐿𝑏

2 ∃𝑥¬𝐿𝑥

3 ¬𝐿𝑏

4 𝐿𝑏 ∧ ¬𝐿𝑏 ∧E 1, 3

5 𝐿𝑏 ∧ ¬𝐿𝑏 naughtily attempting to invoke ∃E 2, 3–4

The last line of the proof is not allowed. The name that we used in our substitution
instance for ‘∃𝑥¬𝐿𝑥 ’ on line 3, namely ‘𝑏’, occurs in line 4. And the following proof
would be no better:

1 𝐿𝑏

2 ∃𝑥¬𝐿𝑥

3 ¬𝐿𝑏

4 𝐿𝑏 ∧ ¬𝐿𝑏 ∧E 1, 3

5 ∃𝑥(𝐿𝑥 ∧ ¬𝐿𝑥) ∃I 4

6 ∃𝑥(𝐿𝑥 ∧ ¬𝐿𝑥) naughtily attempting to invoke ∃E 2, 3–5

The last line of the proof would still not be allowed. For the name that we used in our
substitution instance for ‘∃𝑥¬𝐿𝑥 ’, namely ‘𝑏’, occurs in an undischarged assumption,
namely line 1.
The moral of the story is this.

If you want to squeeze information out of an existentially quantified


claim ∃𝓍𝒜 , choose a new name, never before used in the proof, to
substitute for the variable in 𝒜 .

That way, you can guarantee that you meet all the constraints on the rule for ∃E. A new
name functions like an arbitrary name – it carries no prior baggage with it, apart from
what we stipulate or assume to hold of it.
Here’s an example using this newly introduced rule: ∃𝑥(𝐹𝑥 ∧ 𝐺𝑥) ⊢ (∃𝑥𝐹𝑥 ∧ ∃𝑥𝐺𝑥):

1 ∃𝑥(𝐹𝑥 ∧ 𝐺𝑥)

2 𝐹𝑎 ∧ 𝐺𝑎

3 𝐹𝑎

4 ∃𝑥𝐹𝑥 ∃I 3

5 𝐺𝑎

6 ∃𝑥𝐺𝑥 ∃I 5

7 ∃𝑥𝐹𝑥 ∧ ∃𝑥𝐺𝑥 ∧I 4, 6

8 ∃𝑥𝐹𝑥 ∧ ∃𝑥𝐺𝑥 ∃E 1, 2–7

The use of ∃E on the last line relies on the fact that we've gotten rid of any occurrence
of the new arbitrary name by the second last line. We have derived, from the instance of
the existential assumption, a conclusion that does not mention the arbitrary name, so it
is safe to conclude that this claim holds regardless of the identity of the individual
that makes the existential claim true.
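As with the earlier rules, this argument can be double‐checked against finite interpretations. The sketch below (Python, illustrative only, with ‘𝐹 ’ and ‘𝐺 ’ as in the proof) brute‐forces every two‐element interpretation and confirms that the premise never holds while the conclusion fails:

```python
from itertools import product

domain = [0, 1]
ok = True
for fbits in product([False, True], repeat=len(domain)):
    for gbits in product([False, True], repeat=len(domain)):
        F = dict(zip(domain, fbits))   # extension of 'F'
        G = dict(zip(domain, gbits))   # extension of 'G'
        premise = any(F[x] and G[x] for x in domain)      # exists x (Fx & Gx)
        conclusion = (any(F[x] for x in domain)
                      and any(G[x] for x in domain))      # exists x Fx & exists x Gx
        if premise and not conclusion:
            ok = False
print(ok)  # True: no two-element counter-interpretation
```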

An argument that makes use of both patterns of arbitrary reasoning is this example
due to Breckenridge and Magidor: ‘from the premise that there is someone who loves
everyone to the conclusion that everyone is such that someone loves them’. Here is a
proof in our system, letting ‘𝐿𝑥𝑦’ symbolise ‘___1 loves ___2 ’, letting ‘ℎ’ symbolise the
arbitrarily chosen name ‘Hiccup’ and letting ‘𝑎’ symbolise the arbitrarily chosen name
‘Astrid’:

1 ∃𝑥∀𝑦𝐿𝑥𝑦

2 ∀𝑦𝐿ℎ𝑦

3 𝐿ℎ𝑎 ∀E 2

4 ∃𝑥𝐿𝑥𝑎 ∃I 3

5 ∀𝑦∃𝑥𝐿𝑥𝑦 ∀I 4

6 ∀𝑦∃𝑥𝐿𝑥𝑦 ∃E 1, 2–5

At line 3, both our arbitrary names are in play – ℎ was newly introduced to the proof
in line 2 as the arbitrary person Hiccup who witnesses the truth of ‘∃𝑥∀𝑦𝐿𝑥𝑦’, and ‘𝑎’
at line 3 as an arbitrary person Astrid beloved by Hiccup. We can apply ∃I without
restriction at line 4, which takes the name ‘ℎ’ out of the picture – we no longer rely
on the specific instance chosen, since we are back at generalities about someone who
loves everyone, being such that they also love the arbitrarily chosen someone Astrid.
So we can safely apply ∀I at line 5, since the name ‘𝑎’ appears in no assumption nor in
‘∀𝑦∃𝑥𝐿𝑥𝑦’. But now we have at line 5 a claim that doesn’t involve the arbitrary name
‘ℎ’ either, which was newly chosen to not be in any undischarged assumption or in
∃𝑥∀𝑦𝐿𝑥𝑦. So we can safely say that the name Hiccup was just arbitrary, and nothing
in the proof of ‘∀𝑦∃𝑥𝐿𝑥𝑦’ depended on it, so we can discharge the specific assumption
about ℎ that was used in the course of that proof and nevertheless retain our
entitlement to ‘∀𝑦∃𝑥𝐿𝑥𝑦’.

35.7 The Barber


Let’s try something a little more complicated, the so‐called ‘Barber paradox’:

in a certain remote Sicilian village, approached by a long ascent up a pre‐


cipitous mountain road, the barber shaves all and only those villagers who
do not shave themselves. Who shaves the barber? If he himself does, then
he does not (since he shaves only those who do not shave themselves); if
he does not, then he indeed does (since he shaves all those who do not
shave themselves). The unacceptable supposition is that there is such a
barber – one who shaves himself if and only if he does not. The story may
have sounded acceptable: it turned our minds, agreeably enough, to the
mountains of inland Sicily. However, once we see what the consequences

are, we realize that the story cannot be true: there cannot be such a barber,
or such a village. The story is unacceptable.4

This uses some of our tricky quantifier rules, disjunction elimination (proof by cases)
and negation introduction (reductio), so it is really a showcase of many things we’ve
learned so far.
Let’s first try to symbolise the argument.

Domain: residents of a certain remote Sicilian village


𝐵: ___1 is a barber
𝑆: ___1 shaves ___2

The argument then revolves around the claim that there is a barber who shaves all and
only those who don't shave themselves. Semi‐formally paraphrased: someone x exists such
that x is a barber and for all people y: y does not shave themselves iff x shaves y. That
is:
∃𝑥(𝐵𝑥 ∧ ∀𝑦(¬𝑆𝑦𝑦 ↔ 𝑆𝑥𝑦)).
The argument takes the form of a reductio, so we will begin the proof by assuming this
claim for the sake of argument and see what happens:

1 ∃𝑥(𝐵𝑥 ∧ ∀𝑦(¬𝑆𝑦𝑦 ↔ 𝑆𝑥𝑦))

2 (𝐵𝑎 ∧ ∀𝑦(¬𝑆𝑦𝑦 ↔ 𝑆𝑎𝑦))

3 ∀𝑦(¬𝑆𝑦𝑦 ↔ 𝑆𝑎𝑦) ∧E 2

4 (¬𝑆𝑎𝑎 ↔ 𝑆𝑎𝑎) ∀E 3

5 ¬(𝑃 ∧ ¬𝑃)

6 ¬𝑆𝑎𝑎

7 𝑆𝑎𝑎 ↔E 4, 6

8 ¬𝑆𝑎𝑎 R6

9 𝑆𝑎𝑎 ¬E 6–7, 6–8

10 ¬𝑆𝑎𝑎 ↔E 4, 9

11 (𝑃 ∧ ¬𝑃) ¬E 5–9, 5–10

12 (𝑃 ∧ ¬𝑃) ∃E 1, 2–11

13 𝑃 ∧E 12

14 ¬𝑃 ∧E 12

15 ¬∃𝑥(𝐵𝑥 ∧ ∀𝑦(¬𝑆𝑦𝑦 ↔ 𝑆𝑥𝑦)) ¬I 1–13, 1–14

4 R M Sainsbury (2009) Paradoxes 3rd ed., Cambridge University Press, pages 1–2.

One trick to this proof is to be sure to instantiate the universally quantified claim at line
3 by using the same name ‘𝑎’ as was already used in line 2. This is because, intuitively,
the problem case for this supposed barber arises when you think about whether they
shave themselves or not. But the trickiest part of this proof occurs at lines
5–11. By line 4, we’ve already derived a contradictory biconditional. But if we just use it
to derive ‘𝑆𝑎𝑎’ and ‘¬𝑆𝑎𝑎’, the contradictory claims we obtain would end up involving
the name ‘𝑎’. That would mean we couldn’t apply the ∃E rule, since the final line of the
subproof would contain the chosen name, so we couldn’t get our logical falsehood out
of the subproof beginning on line 2, and hence could perform the desired reductio on
line 1 via ¬I. So our trick is to suppose the negation of an unrelated logical falsehood
on line 5, derive the logical falsehood from line 4 in the range of that assumption, and
hence use ¬E to derive the logical falsehood ‘𝑃 ∧ ¬𝑃’ on line 11. This doesn’t contain
the name ‘𝑎’, and hence can be extracted from the subproof to show that line 1 by itself
suffices to derive a logical falsehood, and that shows the supposition that there is such
a barber is a logical falsehood.
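The reductio's conclusion can be checked model‐theoretically as well. This Python sketch (mine, not the book's, using ‘𝐵’ and ‘𝑆’ as in the symbolisation key) enumerates every interpretation over a two‐element village and confirms that none contains such a barber:

```python
from itertools import product

domain = [0, 1]
sat = False
for sbits in product([False, True], repeat=len(domain) ** 2):
    S = dict(zip(product(domain, domain), sbits))   # extension of 'S' (shaves)
    for bbits in product([False, True], repeat=len(domain)):
        B = dict(zip(domain, bbits))                # extension of 'B' (barber)
        # exists x (Bx & forall y (not Syy <-> Sxy))
        if any(B[x] and all((not S[(y, y)]) == S[(x, y)] for y in domain)
               for x in domain):
            sat = True
print(sat)  # False: the barber sentence is false in every such interpretation
```

The check fails for the same reason the proof succeeds: taking y to be the barber x itself forces ‘¬𝑆𝑥𝑥 ↔ 𝑆𝑥𝑥 ’, which no interpretation can satisfy.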

35.8 Justification of these Quantifier Rules


Above, I offered informal arguments for each of our quantifier rules that seem to ex‐
emplify the pattern of argument in the rule, and to be intuitively valid. But we can also
offer justifications for our rules in terms of interpretations of the sentences involved,
and the principles governing truth of quantified sentences introduced in §22.4.
For example, consider any interpretation which makes ∀𝓍𝒜 true. In any such inter‐
pretation, there will be a nonempty domain, and every name will denote some member
of this domain. ∀𝓍𝒜 is true just in case, whichever name 𝒸 we pick, that name denotes
something of which 𝒜 is true. So in any such interpretation, for each name 𝒸 in the language,
𝒜|𝒸↷𝓍 will also be true. So the proof rule of ∀E corresponds to a valid argument form.
For ∃E, the case is only a little more involved. Suppose ∃𝓍𝒜 is true in an interpreta‐
tion. Then there is some interpretation, otherwise just like the original one, in which
some new name 𝒸 is assigned to some object in the domain, and where 𝒜|𝒸↷𝓍 is true.
Suppose that, in fact, every interpretation which makes 𝒜|𝒸↷𝓍 true also makes ℬ true,
where the new name 𝒸 does not appear in ℬ. Could ℬ be false in our original inter‐
pretation? No – for everything that appears in ℬ is already interpreted in the original
interpretation, with the same interpretation as in the interpretation which makes it
true. So it must be true in our original interpretation too. So 𝒞1 , …, 𝒞𝑛 , ∃𝓍𝒜 ⊨ ℬ
(when the name 𝒸 makes no appearance in any sentence in this argument), and the
proof rule of ∃E corresponds to a valid argument form.
You may also offer arguments from interpretations to the effect that our other quantifier
proof rules correspond to valid arguments in Quantifier:

› 𝒜|𝒸↷𝓍 ⊨ ∃𝓍𝒜 – if 𝒸 is used in the proof, it must have an interpretation as some‐


thing in the domain, and so something in the domain satisfies 𝒜 ;

› If this entailment holds:


𝒞1 , …, 𝒞𝑛 ⊨ 𝒜|𝒸↷𝓍 ,

where the name 𝒸 occurs nowhere among 𝒞𝑖 or elsewhere in 𝒜 , then this entail‐
ment also holds:
𝒞1 , …, 𝒞𝑛 ⊨ ∀𝓍𝒜.
For we could have substituted any other name for 𝒸 and the original entailment
would still have succeeded, since it could not have depended on the specific
name chosen. So it doesn’t matter what the interpretation of 𝒸 happens to be,
and if that doesn’t matter, it must be because 𝒜 is true of everything.

So we are again comforted: our proof rules can never lead us from true assumptions
to false claims, if correctly applied.

Key Ideas in §35


› We augment our natural deduction proof system for Sentential by
allowing Quantifier sentences to occur in proofs, and adding rules
governing quantifiers to go partway towards a natural deduction
system for Quantifier.
› The notation ‘⊢’ for provability carries over from its earlier use in
Sentential unchanged, once we understand that a proof can now
use the new rules for the new logical connectives in Quantifier.
› The rules for ∀E and ∃I are straightforward and can be applied
regardless of which names we deploy.
› But the other quantifier rules ∀I and ∃E contain some important
restrictions on which names we can use. These restrictions are
motivated by considerations about arbitrary reference which in‐
form us when we can introduce ‘dummy names’ in the course of
our proofs and what we can do with them.
› Our proof rules match the interpretation of Quantifier we have
given – they will not permit us to say that some claim is provable
from some assumptions when that claim isn’t entailed by those
assumptions.

Practice exercises
A. The following three ‘proofs’ are incorrect. Explain why they are incorrect. If the argu‐
ment ‘proved’ is invalid, provide an interpretation which shows that the assumptions
involved do not entail the conclusion:

1.
1 ∀𝑥𝑅𝑥𝑥

2 𝑅𝑎𝑎 ∀E 1

3 ∀𝑦𝑅𝑎𝑦 ∀I 2

4 ∀𝑥∀𝑦𝑅𝑥𝑦 ∀I 3

2.
1 ∀𝑥∃𝑦𝑅𝑥𝑦

2 ∃𝑦𝑅𝑎𝑦 ∀E 1

3 𝑅𝑎𝑎

4 ∃𝑥𝑅𝑥𝑥 ∃I 3

5 ∃𝑥𝑅𝑥𝑥 ∃E 2, 3–4

3.
1 ∃𝑦¬(𝑇𝑦 ∨ ¬𝑇𝑦)

2 ¬(𝑇𝑑 ∨ ¬𝑇𝑑)

3 𝑇𝑑

4 𝑇𝑑 ∨ ¬𝑇𝑑 ∨I 3

5 ¬(𝑇𝑑 ∨ ¬𝑇𝑑) R2

6 ¬𝑇𝑑 ¬E 3–4, 3–5

7 (𝑇𝑑 ∨ ¬𝑇𝑑) ∨I 6

8 ¬(𝑇𝑑 ∨ ¬𝑇𝑑) R2

9 ((𝑇𝑑 ∨ ¬𝑇𝑑) ∧ ¬(𝑇𝑑 ∨ ¬𝑇𝑑)) ∧I 7, 8

10 ((𝑇𝑑 ∨ ¬𝑇𝑑) ∧ ¬(𝑇𝑑 ∨ ¬𝑇𝑑)) ∃E 1, 2–9

11 ¬∃𝑦¬(𝑇𝑦 ∨ ¬𝑇𝑦) ¬I 1–10

B. The following three proofs are missing their commentaries (rule and line numbers).
Add them, to turn them into bona fide proofs.
1 ∀𝑥∃𝑦(𝑅𝑥𝑦 ∨ 𝑅𝑦𝑥)

2 ∀𝑥¬𝑅𝑚𝑥

3 ∃𝑦(𝑅𝑚𝑦 ∨ 𝑅𝑦𝑚)

4 𝑅𝑚𝑎 ∨ 𝑅𝑎𝑚

5 𝑅𝑎𝑚

6 𝑅𝑚𝑎

7 ¬𝑅𝑎𝑚

8 ¬𝑅𝑚𝑎

9 𝑅𝑎𝑚

10 𝑅𝑎𝑚

11 ∃𝑥𝑅𝑥𝑚

12 ∃𝑥𝑅𝑥𝑚

1 ∀𝑥(∃𝑦𝐿𝑥𝑦 → ∀𝑧𝐿𝑧𝑥) 1 ∀𝑥(𝐽𝑥 → 𝐾𝑥)

2 𝐿𝑎𝑏 2 ∃𝑥∀𝑦𝐿𝑥𝑦

3 ∃𝑦𝐿𝑎𝑦 → ∀𝑧𝐿𝑧𝑎 3 ∀𝑥𝐽𝑥

4 ∃𝑦𝐿𝑎𝑦 4 ∀𝑦𝐿𝑎𝑦

5 ∀𝑧𝐿𝑧𝑎 5 𝐿𝑎𝑎

6 𝐿𝑐𝑎 6 𝐽𝑎

7 ∃𝑦𝐿𝑐𝑦 → ∀𝑧𝐿𝑧𝑐 7 𝐽𝑎 → 𝐾𝑎

8 ∃𝑦𝐿𝑐𝑦 8 𝐾𝑎

9 ∀𝑧𝐿𝑧𝑐 9 𝐾𝑎 ∧ 𝐿𝑎𝑎

10 𝐿𝑐𝑐 10 ∃𝑥(𝐾𝑥 ∧ 𝐿𝑥𝑥)

11 ∀𝑥𝐿𝑥𝑥 11 ∃𝑥(𝐾𝑥 ∧ 𝐿𝑥𝑥)

C. In §16 problem part A, we considered fifteen syllogistic figures of Aristotelian logic.


Provide proofs for each of the argument forms. NB: You will find it much easier if you
symbolise (for example) ‘No F is G’ as ‘∀𝑥(𝐹𝑥 → ¬𝐺𝑥)’.
D. Aristotle and his successors identified other syllogistic forms which depended upon
‘existential import’. Symbolise each of the following argument forms in Quantifier and
offer proofs.

› Barbari. Something is H. All G are F. All H are G. So: Some H is F

› Celaront. Something is H. No G are F. All H are G. So: Some H is not F

› Cesaro. Something is H. No F are G. All H are G. So: Some H is not F.

› Camestros. Something is H. All F are G. No H are G. So: Some H is not F.

› Felapton. Something is G. No G are F. All G are H. So: Some H is not F.

› Darapti. Something is G. All G are F. All G are H. So: Some H is F.

› Calemos. Something is H. All F are G. No G are H. So: Some H is not F.

› Fesapo. Something is G. No F is G. All G are H. So: Some H is not F.

› Bamalip. Something is F. All F are G. All G are H. So: Some H are F.



E. Provide a proof of each claim.

1. ⊢ ∀𝑥𝐹𝑥 ∨ ¬∀𝑥𝐹𝑥
2. ⊢ ∀𝑧(𝑃𝑧 ∨ ¬𝑃𝑧)
3. ∀𝑥(𝐴𝑥 → 𝐵𝑥), ∃𝑥𝐴𝑥 ⊢ ∃𝑥𝐵𝑥
4. ∀𝑥(𝑀𝑥 ↔ 𝑁𝑥), 𝑀𝑎 ∧ ∃𝑥𝑅𝑥𝑎 ⊢ ∃𝑥𝑁𝑥
5. ∀𝑥∀𝑦𝐺𝑥𝑦 ⊢ ∃𝑥𝐺𝑥𝑥
6. ⊢ ∀𝑥𝑅𝑥𝑥 → ∃𝑥∃𝑦𝑅𝑥𝑦
7. ⊢ ∀𝑦∃𝑥(𝑄𝑦 → 𝑄𝑥)
8. 𝑁𝑎 → ∀𝑥(𝑀𝑥 ↔ 𝑀𝑎), 𝑀𝑎, ¬𝑀𝑏 ⊢ ¬𝑁𝑎
9. ∀𝑥∀𝑦(𝐺𝑥𝑦 → 𝐺𝑦𝑥) ⊢ ∀𝑥∀𝑦(𝐺𝑥𝑦 ↔ 𝐺𝑦𝑥)
10. ∀𝑥(¬𝑀𝑥 ∨ 𝐿𝑗𝑥), ∀𝑥(𝐵𝑥 → 𝐿𝑗𝑥), ∀𝑥(𝑀𝑥 ∨ 𝐵𝑥) ⊢ ∀𝑥𝐿𝑗𝑥

F. Write a symbolisation key for the following argument, symbolise it, and prove it:

There is someone who likes everyone who likes everyone that she likes.
Therefore, there is someone who likes herself.

G. For each of the following pairs of sentences: If they are provably equivalent, give
proofs to show this. If they are not, construct an interpretation to show that they are
not logically equivalent.

1. ∀𝑥𝑃𝑥 → 𝑄𝑐, ∀𝑥(𝑃𝑥 → 𝑄𝑐)


2. ∀𝑥∀𝑦∀𝑧𝐵𝑥𝑦𝑧, ∀𝑥𝐵𝑥𝑥𝑥
3. ∀𝑥∀𝑦𝐷𝑥𝑦, ∀𝑦∀𝑥𝐷𝑥𝑦
4. ∃𝑥∀𝑦𝐷𝑥𝑦, ∀𝑦∃𝑥𝐷𝑥𝑦
5. ∀𝑥(𝑅𝑐𝑎 ↔ 𝑅𝑥𝑎), 𝑅𝑐𝑎 ↔ ∀𝑥𝑅𝑥𝑎

H. For each of the following arguments: If it is valid in Quantifier, give a proof. If it is


invalid, construct an interpretation to show that it is invalid.

1. ∃𝑦∀𝑥𝑅𝑥𝑦 ∴ ∀𝑥∃𝑦𝑅𝑥𝑦
2. ∃𝑥(𝑃𝑥 ∧ ¬𝑄𝑥) ∴ ∀𝑥(𝑃𝑥 → ¬𝑄𝑥)
3. ∀𝑥(𝑆𝑥 → 𝑇𝑎), 𝑆𝑑 ∴ 𝑇𝑎
4. ∀𝑥(𝐴𝑥 → 𝐵𝑥), ∀𝑥(𝐵𝑥 → 𝐶𝑥) ∴ ∀𝑥(𝐴𝑥 → 𝐶𝑥)
5. ∃𝑥(𝐷𝑥 ∨ 𝐸𝑥), ∀𝑥(𝐷𝑥 → 𝐹𝑥) ∴ ∃𝑥(𝐷𝑥 ∧ 𝐹𝑥)
6. ∀𝑥∀𝑦(𝑅𝑥𝑦 ∨ 𝑅𝑦𝑥) ∴ 𝑅𝑗𝑗
7. ∃𝑥∃𝑦(𝑅𝑥𝑦 ∨ 𝑅𝑦𝑥) ∴ 𝑅𝑗𝑗
8. ∀𝑥𝑃𝑥 → ∀𝑥𝑄𝑥, ∃𝑥¬𝑃𝑥 ∴ ∃𝑥¬𝑄𝑥
36
Derived Rules for Quantifier

In this section, we shall add some additional rules to the basic rules of the previous
section. These govern the interaction of quantifiers and negation. But they are no sub‐
stantive addition to our basic rules: for each of the proposed additions, it can be shown
that their role in any proof can be wholly emulated by some suitable applications of
our basic rules from §35. (The point here is as in §33.)

36.1 Conversion of Quantifiers


In §15, we noted that ¬∃𝑥𝒜 is logically equivalent to ∀𝑥¬𝒜 . We shall add some rules
to our proof system that govern this. In particular, we add two rules, one for each
direction of the equivalence:

𝑚 ∀𝓍¬𝒜

¬∃𝓍𝒜 CQ∀/¬∃ 𝑚

𝑚 ¬∃𝓍𝒜

∀𝓍¬𝒜 CQ¬∃/∀ 𝑚

Here is a schematic proof corresponding to our first conversion of quantifiers rule,


CQ∀/¬∃ :


1 ∀𝓍¬𝒜

2 ∃𝓍𝒜

3 𝒜|𝒸↷𝓍

4 ¬(ℬ ∧ ¬ℬ)

5 ¬𝒜|𝒸↷𝓍 ∀E 1

6 ℬ ∧ ¬ℬ ¬E 4–5, 4–3

7 ℬ ∧ ¬ℬ ∃E 2, 3–6

8 ¬∃𝓍𝒜 ¬I 2–7

A few things to note about this proof.

1. I was hasty at line 8 – officially I ought to have applied ∧E to line 7, obtaining the
contradictory conjuncts in the subproof, and then applied ¬I to the assumption
opening that subproof. (But then the proof would have gone over the page.)

2. Note that we had to introduce the new name 𝒸 at line 3. Once we did so, there
was no obstacle to applying ∀E on that newly introduced name in line 5. But if
we had done things the other way around, applying ∀E first to some new name
𝒸, we would have had to open the subproof with yet another new name 𝒹 .

3. The sentence ℬ cannot contain the name 𝒸 if the application of ∃E at line 7 is


to be correct. We introduce this arbitrary logical falsehood precisely so we can
show that the contradictoriness of our initial assumptions does not depend on
the particular choice of name. The alternative would have been to show that
the assumption 𝒜|𝒸↷𝓍 leads to logical falsehood, and then applied ¬I – but that
would have left the name 𝒸 outside the scope of a subproof and would not have
allowed us to apply ∃E.

A similar schematic proof could be offered for the second conversion rule, CQ¬∃/∀ .
Equally, we might add rules corresponding to the equivalence of ∃𝓍¬𝒜 and ¬∀𝓍𝒜 :

𝑚 ∃𝓍¬𝒜

¬∀𝓍𝒜 CQ∃/¬∀ 𝑚

𝑚 ¬∀𝓍𝒜

∃𝓍¬𝒜 CQ¬∀/∃ 𝑚

Here is a schematic basic proof showing that the third conversion of quantifiers rule
just introduced, CQ∃/¬∀ , can be emulated just using the standard quantifier rules in
combination with the other rules of our system, in which some of the same issues arise
as in the earlier schematic proof:

1 ∃𝓍¬𝒜

2 ∀𝓍𝒜

3 ¬𝒜|𝒸↷𝓍

4 ¬(ℬ ∧ ¬ℬ)

5 𝒜|𝒸↷𝓍 ∀E 2

6 ¬𝒜|𝒸↷𝓍 R3

7 ℬ ∧ ¬ℬ ¬E 4–5, 4–6

8 ℬ ∧ ¬ℬ ∃E 1, 3–7

9 ¬∀𝓍𝒜 ¬I 2–8

A similar schematic proof can be offered for the final CQ rule.

36.2 Alternative Proof Systems for Quantifier


We saw in §34 that it is possible to formulate alternative proof systems for Sentential
in which exactly the same arguments are provable. The same is true
for Quantifier. The idea is to get rid of the rules for one quantifier, retaining the rules
governing the other quantifier, but then to take the conversion of quantifier rules as
basic.
So, for example, we could consider the system which has ∃I and ∃E, and also has CQ∃/¬∀
and CQ¬∀/∃ . With these rules, we can emulate ∀E and ∀I. A schematic proof showing
how to emulate ∀E using our other basic rules is this:

1 ∀𝓍𝒜

2 ¬𝒜|𝒸↷𝓍

3 ∃𝓍¬𝒜 ∃I 2

4 ¬∀𝓍𝒜 CQ∃/¬∀ 3

5 ∀𝓍𝒜 R1

6 𝒜|𝒸↷𝓍 ¬E 2–5, 2–4

A schematic proof emulating ∀I using our other basic rules is trickier. Here it is:

1 𝛤

𝑚 ¬∀𝓍𝒜

𝑚+1 ∃𝓍¬𝒜 CQ¬∀/∃ 𝑚

𝑚+2 ¬𝒜|𝒸↷𝓍

𝑚+3 ¬(ℬ ∧ ¬ℬ)

𝑛 𝒜|𝒸↷𝓍 Original proof from 1

𝑛+1 ¬𝒜|𝒸↷𝓍 R𝑚+2

𝑛+2 ℬ ∧ ¬ℬ ¬E 𝑚 + 3–𝑛, 𝑚 + 3–𝑛 + 1

𝑛+3 ℬ ∧ ¬ℬ ∃E 𝑚 + 1, 𝑚 + 2–𝑛 + 2

𝑛+4 ∀𝓍𝒜 ¬E 𝑚–𝑛 + 3

To understand this schematic proof, what we need to remember is that, in order for
the original ∀I rule to apply, we must already have a proof of 𝒜|𝒸↷𝓍 which relies on
assumptions 𝛤 that do not mention 𝒸 at all. The trick is to make use of that proof
inside an assumption about an existential witness. We don’t try to perform that proof
to derive 𝒜|𝒸↷𝓍 and then attempt to manipulate ¬∀𝓍𝒜 to generate a logical falsehood.
Rather, we first assume ¬∀𝓍𝒜 , apply quantifier conversion to obtain ∃𝓍¬𝒜 , assume
that 𝒸 witnesses that existential claim so that ¬𝒜|𝒸↷𝓍 , and then use our original proof
to derive 𝒜|𝒸↷𝓍 at line 𝑛. To avoid problems with the name appearing at the bottom of
the existential witness subproof, we perform the same trick of assuming the falsehood
of an arbitrary logical falsehood (so long as ℬ doesn’t include 𝒸), and then we manage
to derive from 𝛤 what we had hoped to: that ∀𝓍𝒜 .

Key Ideas in §36


› The derived rules for Quantifier concern the interaction of quan‐
tifiers with negation.
› Do not make use of these derived rules unless you are explicitly
told you may do so.

Practice exercises
A. Show that the following are jointly contrary:

1. 𝑆𝑎 → 𝑇𝑚, 𝑇𝑚 → 𝑆𝑎, 𝑇𝑚 ∧ ¬𝑆𝑎


2. ¬∃𝑥𝑅𝑥𝑎, ∀𝑥∀𝑦𝑅𝑦𝑥
3. ¬∃𝑥∃𝑦𝐿𝑥𝑦, 𝐿𝑎𝑎
4. ∀𝑥(𝑃𝑥 → 𝑄𝑥), ∀𝑧(𝑃𝑧 → 𝑅𝑧), ∀𝑦𝑃𝑦, ¬𝑄𝑎 ∧ ¬𝑅𝑏

B. Show that each pair of sentences is provably equivalent:

1. ∀𝑥(𝐴𝑥 → ¬𝐵𝑥), ¬∃𝑥(𝐴𝑥 ∧ 𝐵𝑥)


2. ∀𝑥(¬𝐴𝑥 → 𝐵𝑑), ∀𝑥𝐴𝑥 ∨ 𝐵𝑑

C. In §16, I considered what happens when we move quantifiers ‘across’ various con‐
nectives. Show that each pair of sentences is provably equivalent:

1. ∀𝑥(𝐹𝑥 ∧ 𝐺𝑎), ∀𝑥𝐹𝑥 ∧ 𝐺𝑎


2. ∃𝑥(𝐹𝑥 ∨ 𝐺𝑎), ∃𝑥𝐹𝑥 ∨ 𝐺𝑎
3. ∀𝑥(𝐺𝑎 → 𝐹𝑥), 𝐺𝑎 → ∀𝑥𝐹𝑥
4. ∀𝑥(𝐹𝑥 → 𝐺𝑎), ∃𝑥𝐹𝑥 → 𝐺𝑎
5. ∃𝑥(𝐺𝑎 → 𝐹𝑥), 𝐺𝑎 → ∃𝑥𝐹𝑥
6. ∃𝑥(𝐹𝑥 → 𝐺𝑎), ∀𝑥𝐹𝑥 → 𝐺𝑎

NB: the variable ‘𝑥’ does not occur in ‘𝐺𝑎’.


When all the quantifiers occur at the beginning of a sentence, that sentence is said to
be in prenex normal form. These equivalences are sometimes called prenexing rules,
since they give us a means for putting any sentence into prenex normal form.
D. Offer proofs which justify the addition of the other CQ rules as derived rules.
37
Rules for Identity

37.1 Identity Introduction


In §21, I mentioned the philosophically contentious thesis of the identity of indiscern‐
ibles. This is the claim that objects which are indiscernible in every way – which means,
for us, that exactly the same predicates are true of both objects – are, in fact, identical
to each other. I also mentioned that in Quantifier, this thesis is not true. There are
interpretations in which, for each property denoted by any predicate of the language,
two distinct objects either both have that property, or both lack that property. They
may well differ on some property ‘in reality’, but there is nothing in Quantifier which
guarantees that that property is assigned as the interpretation of any predicate of the
language. It follows that, no matter how many sentences of Quantifier about those two
objects I assume, those sentences will not entail that these distinct objects are identical
(luckily for us). Unless, of course, you tell me that the two objects are, in fact, identical.
But then the argument will hardly be very illuminating.
The consequence of this, for our proof system, is that there are no sentences that do
not already contain the identity predicate that could justify the conclusion ‘𝑎 = 𝑏’. This
means that the identity introduction rule will not justify ‘𝑎 = 𝑏’, or any other identity
claim containing two different names.
However, every object is identical to itself. No premises, then, are required in order to
conclude that something is identical to itself. So this will be the identity introduction
rule:

𝒸=𝒸 =I

Notice that this rule does not require referring to any prior lines of the proof, nor does
it rely on any assumptions. For any name 𝒸, you can write 𝒸 = 𝒸 at any point, with only
the =I rule as justification.


Recall that a relation is reflexive iff it holds between anything in the domain and itself
(§§18.5 and 21.10). Let’s see this rule in action, in a proof that identity is reflexive:

1 𝑎=𝑎 =I

2 ∀𝑥𝑥 = 𝑥 ∀I 1

This seems like magic! But note that the first line is not an assumption (there is no ho‐
rizontal line), and hence not an undischarged assumption. So the constant ‘𝑎’ appears
in no undischarged assumption or anywhere in the proof other than in ‘𝑥 = 𝑥 ’|𝑎↷𝑥 ,
so the conclusion ‘∀𝑥𝑥 = 𝑥 ’ follows by legitimate application of the ∀I rule. So we’ve
established the reflexivity of identity: ⊢ ∀𝑥𝑥 = 𝑥 .
This proof can seem equally magic:

1 𝑎=𝑎 =I

2 ∃𝑥𝑥 = 𝑥 ∃I 1

Again we’ve shown that there is something rather than nothing on the basis of no
assumptions. Again, of course, it is the implicit assumption that every name in the
proof refers that does the heavy lifting here.

37.2 Identity Elimination


Our elimination rule is more fun. If you have established ‘𝑎 = 𝑏’, then anything that
is true of the object named by ‘𝑎’ must also be true of the object named by ‘𝑏’, since it
just is the object named by ‘𝑎’.

Superman is strong. But Superman is actually Clark Kent.


So, Clark Kent is strong.

So for any sentence with ‘𝑎’ in it, given the prior claim that ‘𝑎 = 𝑏’, you can replace
some or all of the occurrences of ‘𝑎’ with ‘𝑏’ and produce an equivalent sentence. For
example, from ‘𝑅𝑎𝑎’ and ‘𝑎 = 𝑏’, you are justified in inferring ‘𝑅𝑎𝑏’, ‘𝑅𝑏𝑎’ or ‘𝑅𝑏𝑏’. More
generally:

𝑚 𝒶=𝒷

𝑛 𝒜|𝒶↷𝓍

𝒜|𝒷↷𝓍 =E 𝑚 , 𝑛

This uses our standard notion of substitution – it basically says that if you have some
sentence which arises from substituting 𝒶 for some variable in a formula, then you are
entitled to another substitution instance of the same formula using 𝒷 instead. Lines
𝑚 and 𝑛 can occur in either order, and do not need to be adjacent, but we always cite
the statement of identity first.
Note that nothing in the rule forbids the constant 𝒷 from occurring in 𝒜 . So this is a
perfectly good instance of the rule:

1 𝑎=𝑏

2 𝑅𝑎𝑏

3 𝑅𝑏𝑏 =E 1, 2

Here, ‘𝑅𝑎𝑏’ is ‘𝑅𝑥𝑏’|𝑎↷𝑥 , and the conclusion ‘𝑅𝑏𝑏’ is ‘𝑅𝑥𝑏’|𝑏↷𝑥 , which conforms to the
rule. This formulation allows us, in effect, to replace some‐but‐not‐all occurrences of a
name in a sentence by a co‐referring name. This rule is sometimes called Leibniz’s Law
– though recall §18.5, where we used that name for a claim about the interpretation of
‘=’. Here’s a slightly more complex example of the rule in action:

1 ∀𝑥𝑥 = 𝑑

2 𝐹𝑑

3 𝑗=𝑑 ∀E 1

4 𝑗=𝑗 =I

5 𝑑=𝑗 =E 3, 4

6 𝐹𝑗 =E 5, 2

7 ∀𝑥𝐹𝑥 ∀I 6

This proof has two features worth commenting on. First, the name ‘𝑗’ occurs on the
second last line, but no undischarged assumption uses it, so it is correct to apply ∀I on
the last line.
The second thing to note is the curious sequence of steps at lines 3–5. We need to
do that because the =E rule takes an identity statement of the form ‘𝒶 = 𝒷 ’, and a
sentence containing ‘𝒶’ – the name on the left of the identity – and generates a sentence
containing ‘𝒷 ’, the name on the right of the identity. But in our proof we ended up with
‘𝑗 = 𝑑 ’ and ‘𝐹𝑑 ’ – strictly speaking, the identity rule doesn’t apply to these sentences,
because that would be to substitute the name on the right of the identity into ‘𝐹𝑑 ’. The
sequence of steps at lines 3–5 allows us to ‘flip’ an identity. We start with ‘𝑗 = 𝑑 ’ and we
want to substitute one occurrence of ‘𝑗’ in ‘𝑗 = 𝑗’ for ‘𝑑 ’. That is allowed, because ‘𝑗 = 𝑗’
is the same as ‘𝑥 = 𝑗’|𝑗↷𝑥 . That yields ‘𝑥 = 𝑗’|𝑑↷𝑥 , or ‘𝑑 = 𝑗’ on line 5, which is just what
we need to yield line 6.

To see the rules in action, we shall prove some quick results. Recall that a relation is
symmetric iff whenever it holds between x and y in one direction, it holds also between
y and x in the other direction (§21.9). This condition can be expressed as a sentence of
Quantifier. So first, we shall prove that identity is symmetric, a result we already noted
on semantic grounds in §§18.5 and 21.10. That is, ⊢ ∀𝑥∀𝑦(𝑥 = 𝑦 → 𝑦 = 𝑥):

1 𝑎=𝑏

2 𝑎=𝑎 =I

3 𝑏=𝑎 =E 1, 2

4 𝑎=𝑏→𝑏=𝑎 →I 1–3

5 ∀𝑦(𝑎 = 𝑦 → 𝑦 = 𝑎) ∀I 4

6 ∀𝑥∀𝑦(𝑥 = 𝑦 → 𝑦 = 𝑥) ∀I 5

Line 2 is just ‘𝑥 = 𝑎’|𝑎↷𝑥 , as well as being of the right form for =I, and line 3 is just
‘𝑥 = 𝑎’|𝑏↷𝑥 , so the move from 2 to 3 is in conformity with the =E rule given the opening
assumption. This is the same sequence of moves we saw in the proof above, in a more
general setting.
Having noted the symmetry of identity, we can use it to establish the following
schematic proof, which allows us to use 𝒶 = 𝒷 to move from a claim about 𝒷
to a claim about 𝒶, not just vice versa as in our =E rule:

𝑚 𝒶=𝒷

𝑚+1 𝒶=𝒶 =I

𝑚+2 𝒷=𝒶 =E 𝑚, 𝑚 + 1

𝑛 𝒜|𝒷↷𝓍

𝒜|𝒶↷𝓍 =E 𝑚 + 2 , 𝑛

This schematic proof justifies this derived rule, to save time:

𝑚 𝒶=𝒷

𝑛 𝒜|𝒷↷𝓍

𝒜|𝒶↷𝓍 =ES 𝑚, 𝑛

You can use either just the original identity elimination rule, or use it in combination
with this derived rule, in your proofs.
A relation is transitive (§21.10) iff whenever it holds between x and y and between y and
z, it also holds between x and z. (In the directed graph representation of the relation
introduced in §21.9, if there is a path along arrows going from node a to node b via a
third node, there is also a direct arrow from a to b.) Second, we shall prove that identity
is transitive, that ⊢ ∀𝑥∀𝑦∀𝑧((𝑥 = 𝑦 ∧ 𝑦 = 𝑧) → 𝑥 = 𝑧).

1 𝑎 =𝑏∧𝑏 =𝑐

2 𝑏=𝑐 ∧E 1

3 𝑎=𝑏 ∧E 1

4 𝑎=𝑐 =E 2, 3

5 (𝑎 = 𝑏 ∧ 𝑏 = 𝑐) → 𝑎 = 𝑐 →I 1–4

6 ∀𝑧((𝑎 = 𝑏 ∧ 𝑏 = 𝑧) → 𝑎 = 𝑧) ∀I 5

7 ∀𝑦∀𝑧((𝑎 = 𝑦 ∧ 𝑦 = 𝑧) → 𝑎 = 𝑧) ∀I 6

8 ∀𝑥∀𝑦∀𝑧((𝑥 = 𝑦 ∧ 𝑦 = 𝑧) → 𝑥 = 𝑧) ∀I 7

We obtain line 4 by replacing ‘𝑏’ in line 3 with ‘𝑐 ’; this is justified given line 2, ‘𝑏 = 𝑐 ’.
We could alternatively have used the derived rule =ES to replace ‘𝑏’ in line 2 with ‘𝑎’,
justified by line 3, ‘𝑎 = 𝑏’.
Recall from §21.10 that a relation that is reflexive, symmetric, and transitive is an equi‐
valence relation. So we’ve formally proved that identity is an equivalence relation. We
can also give formal proofs of other features of identity, such as a proof that identity is
serial.

Definite Descriptions in Proofs


I want to close this section by giving an example of how proofs involving definite de‐
scriptions work. Remember from §19 that we symbolised a definite description ‘the F
is G’, following Russell, as
∃𝑥(ℱ𝑥 ∧ ∀𝑦(ℱ𝑦 ↔ 𝑥 = 𝑦) ∧ 𝒢𝑥).
(I omit interior parentheses to aid legibility.)
Using this kind of approach, let us consider this argument:

The F is G; the H isn’t G; so the F isn’t H.

The argument, symbolised, looks like this:


∃𝑥(𝐹𝑥 ∧ ∀𝑦(𝐹𝑦 ↔ 𝑥 = 𝑦) ∧ 𝐺𝑥),∃𝑥(𝐻𝑥 ∧ ∀𝑦(𝐻𝑦 ↔ 𝑥 = 𝑦) ∧ ¬𝐺𝑥)
⊢ ∃𝑥(𝐹𝑥 ∧ ∀𝑦(𝐹𝑦 ↔ 𝑥 = 𝑦) ∧ ¬𝐻𝑥).
Here’s the proof:

1 ∃𝑥(𝐹𝑥 ∧ ∀𝑦(𝐹𝑦 ↔ 𝑥 = 𝑦) ∧ 𝐺𝑥)

2 ∃𝑥(𝐻𝑥 ∧ ∀𝑦(𝐻𝑦 ↔ 𝑥 = 𝑦) ∧ ¬𝐺𝑥)

3 (𝐹𝑎 ∧ ∀𝑦(𝐹𝑦 ↔ 𝑎 = 𝑦) ∧ 𝐺𝑎)

4 (𝐻𝑏 ∧ ∀𝑦(𝐻𝑦 ↔ 𝑏 = 𝑦) ∧ ¬𝐺𝑏)

5 𝐻𝑎

6 ∀𝑦(𝐻𝑦 ↔ 𝑏 = 𝑦) ∧E 4

7 (𝐻𝑎 ↔ 𝑏 = 𝑎) ∀E 6

8 𝑏=𝑎 ↔E 7, 5

9 𝐺𝑎 ∧E 3

10 ¬𝐺𝑏 ∧E 4

11 ¬𝐺𝑎 =E 8, 10

12 ¬𝐻𝑎 ¬I 5–9, 5–11

13 ¬𝐻𝑎 ∃E 2, 4–12

14 (𝐹𝑎 ∧ ∀𝑦(𝐹𝑦 ↔ 𝑎 = 𝑦)) ∧E 3

15 (𝐹𝑎 ∧ ∀𝑦(𝐹𝑦 ↔ 𝑎 = 𝑦) ∧ ¬𝐻𝑎) ∧I 14, 13

16 ∃𝑥(𝐹𝑥 ∧ ∀𝑦(𝐹𝑦 ↔ 𝑥 = 𝑦) ∧ ¬𝐻𝑥) ∃I 15

17 ∃𝑥(𝐹𝑥 ∧ ∀𝑦(𝐹𝑦 ↔ 𝑥 = 𝑦) ∧ ¬𝐻𝑥) ∃E 1, 3–16

Key Ideas in §37


› Adding rules for identity completes our presentation of the nat‐
ural deduction system for Quantifier.
› The identity elimination rule is basically Leibniz’s Law. The identity
introduction rule expresses how trivial identity is when the
same name flanks the identity symbol.
› We can prove all the properties we traditionally ascribe to iden‐
tity – reflexivity, symmetry, and transitivity – so identity is an
equivalence relation.

Practice exercises
A. Provide a proof of each of the following.

1. 𝑃𝑎 ∨ 𝑄𝑏, 𝑄𝑏 → 𝑏 = 𝑐, ¬𝑃𝑎 ⊢ 𝑄𝑐
2. 𝑚 = 𝑛 ∨ 𝑛 = 𝑜, 𝐴𝑛 ⊢ 𝐴𝑚 ∨ 𝐴𝑜
3. ∀𝑥 𝑥 = 𝑚, 𝑅𝑚𝑎 ⊢ ∃𝑥𝑅𝑥𝑥
4. ∀𝑥∀𝑦(𝑅𝑥𝑦 → 𝑥 = 𝑦) ⊢ 𝑅𝑎𝑏 → 𝑅𝑏𝑎
5. ¬∃𝑥¬𝑥 = 𝑚 ⊢ ∀𝑥∀𝑦(𝑃𝑥 → 𝑃𝑦)
6. ∃𝑥𝐽𝑥, ∃𝑥¬𝐽𝑥 ⊢ ∃𝑥∃𝑦 ¬𝑥 = 𝑦
7. ∀𝑥(𝑥 = 𝑛 ↔ 𝑀𝑥), ∀𝑥(𝑂𝑥 ∨ ¬𝑀𝑥) ⊢ 𝑂𝑛
8. ∃𝑥𝐷𝑥, ∀𝑥(𝑥 = 𝑝 ↔ 𝐷𝑥) ⊢ 𝐷𝑝
9. ∃𝑥((𝐾𝑥 ∧ ∀𝑦(𝐾𝑦 → 𝑥 = 𝑦)) ∧ 𝐵𝑥), 𝐾𝑑 ⊢ 𝐵𝑑
10. ⊢ 𝑃𝑎 → ∀𝑥(𝑃𝑥 ∨ ¬𝑥 = 𝑎)

B. Show that the following are provably equivalent:

› ∃𝑥(𝐹𝑥 ∧ ∀𝑦(𝐹𝑦 → 𝑥 = 𝑦) ∧ 𝑥 = 𝑛)

› 𝐹𝑛 ∧ ∀𝑦(𝐹𝑦 → 𝑛 = 𝑦)

And hence that both have a decent claim to symbolise the English sentence ‘Nick is
the F’.
C. In §18, I claimed that the following are logically equivalent symbolisations of the
English sentence ‘there is exactly one F’:

› ∃𝑥𝐹𝑥 ∧ ∀𝑥∀𝑦((𝐹𝑥 ∧ 𝐹𝑦) → 𝑥 = 𝑦)

› ∃𝑥(𝐹𝑥 ∧ ∀𝑦(𝐹𝑦 → 𝑥 = 𝑦))

› ∃𝑥∀𝑦(𝐹𝑦 ↔ 𝑥 = 𝑦)

Show that they are all provably equivalent. (Hint: to show that three claims are prov‐
ably equivalent, it suffices to show that the first proves the second, the second proves
the third and the third proves the first; think about why.)
D. Symbolise the following argument

There is exactly one F. There is exactly one G. Nothing is both F and G. So:
there are exactly two things that are either F or G.

And offer a proof of it.


E. What condition on the directed graph of a relation corresponds to that relation
being an equivalence relation?
38
Proof‐Theoretic Concepts and
Semantic Concepts

We have used two different turnstiles in this book. This:

𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢ 𝒞

means that there is some proof which starts with assumptions 𝒜1 , 𝒜2 , …, 𝒜𝑛 and ends
with 𝒞 (and no undischarged assumptions other than 𝒜1 , 𝒜2 , …, 𝒜𝑛 ). This is a proof‐
theoretic notion.
By contrast, this:
𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊨ 𝒞
means that there is no valuation (or interpretation) which makes all of 𝒜1 , 𝒜2 , …, 𝒜𝑛
true and makes 𝒞 false. This concerns assignments of truth and falsity to sentences. It
is a semantic notion.
I cannot emphasise enough that these are different notions. But I can emphasise it a
bit more: They are different notions.
Once you have internalised this point, continue reading.

38.1 Connecting Entailment and Provability


Although our semantic and proof‐theoretic notions are different, there is a deep con‐
nection between them. To explain this connection, I shall start by considering the
relationship between logical truths and theorems.
To show that a sentence is a theorem, you need only construct a proof. Granted, it
may be hard to produce a twenty line proof, but it is not so hard to check each line
of the proof and confirm that it is legitimate; and if each line of the proof individually
is legitimate, then the whole proof is legitimate. Showing that a sentence is a logical
truth, though, requires reasoning about all possible interpretations. Given a choice


between showing that a sentence is a theorem and showing that it is a logical truth, it
would be easier to show that it is a theorem.
Contrariwise, to show that a sentence is not a theorem is hard. We would need to reason
about all (possible) proofs. That is very difficult. But to show that a sentence is not a
logical truth, you need only construct an interpretation in which the sentence is false.
Granted, it may be hard to come up with the interpretation; but once you have done so,
it is relatively straightforward to check what truth value it assigns to a sentence. Given
a choice between showing that a sentence is not a theorem and showing that it is not
a logical truth, it would be easier to show that it is not a logical truth.
Fortunately, a sentence is a theorem if and only if it is a logical truth. As a result, if
we provide a proof of 𝒜 on no assumptions, and thus show that 𝒜 is a theorem, we
can legitimately infer that 𝒜 is a logical truth; i.e., ⊨ 𝒜 . Similarly, if we construct an
interpretation in which 𝒜 is false and thus show that it is not a logical truth, it follows
that 𝒜 is not a theorem.
More generally, we have the following powerful result:

𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢ ℬ iff 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊨ ℬ.

The left‐to‐right direction of this result, that every provable argument really is a valid entail‐
ment, is known as SOUNDNESS (different from the soundness of an argument from §2.6).
The right‐to‐left direction, that every entailment has a proof, is known as COMPLETE‐
NESS.
Soundness and completeness together show that, whilst provability and entailment
are different notions, they are extensionally equivalent, holding between just the same
sentences in our languages. As such:

› An argument is valid iff the conclusion can be proved from the premises.
› Two sentences are logically equivalent iff they are provably equivalent.
› Sentences are jointly consistent iff they are not jointly contrary.

For this reason, you can pick and choose when to think in terms of proofs and when to
think in terms of valuations/interpretations, doing whichever is easier for a given task.
Table 38.1 summarises which is (usually) easier.
It is intuitive that provability and semantic entailment should agree. But – let me re‐
peat this – do not be fooled by the similarity of the symbols ‘⊨’ and ‘⊢’. These two
symbols have very different meanings. And the fact that provability and semantic en‐
tailment agree is not an easy result to come by.
We showed part of this result along the way, actually. All those little observations I
made about how our proof rules were good each took the form of an argument that
whenever 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢ ℬ, then also 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊨ ℬ. So in effect we’ve already
established the soundness of our natural deduction proof system, when we justified
those rules in terms of the existing understanding of the semantics we possess.

Is 𝒜 a logical truth?
    Yes: give a proof which shows ⊢ 𝒜
    No: give an interpretation in which 𝒜 is false

Is 𝒜 a logical falsehood?
    Yes: give a proof which shows ⊢ ¬𝒜
    No: give an interpretation in which 𝒜 is true

Are 𝒜 and ℬ equivalent?
    Yes: give two proofs, one for 𝒜 ⊢ ℬ and one for ℬ ⊢ 𝒜
    No: give an interpretation in which 𝒜 and ℬ have different truth values

Are 𝒜1 , 𝒜2 , …, 𝒜𝑛 jointly consistent?
    Yes: give an interpretation in which all of 𝒜1 , 𝒜2 , …, 𝒜𝑛 are true
    No: prove a logical falsehood from assumptions 𝒜1 , 𝒜2 , …, 𝒜𝑛

Is 𝒜1 , 𝒜2 , …, 𝒜𝑛 ∴ 𝒞 valid?
    Yes: give a proof with assumptions 𝒜1 , 𝒜2 , …, 𝒜𝑛 and concluding with 𝒞
    No: give an interpretation in which each of 𝒜1 , 𝒜2 , …, 𝒜𝑛 is true and 𝒞 is false

Table 38.1: Summary of what most efficiently demonstrates various semantic and proof‐
theoretic properties, given the coextensiveness of ‘⊢’ and ‘⊨’.

Key Ideas in §38


› The two turnstiles are distinct concepts. But they are coextens‐
ive: every provable argument corresponds to an entailment, and
vice versa.
› This means that we can pick and choose our methods to suit our
task. If we want to show an entailment, we can sometimes most
effectively proceed by providing a proof. And if we want to show
something isn’t provable, it can be more efficient to provide a
countermodel.
39
Next Steps

That brings you to the end of the material in this course. But we’ve barely scratched the
surface of the subject, and there are a number of further directions in which you could
pursue the further study of logic. I will briefly indicate some of the further things you
can do in logic, as well as some of the applications that logic has found in other fields.
I’ll also mention some further reading.

39.1 Next Steps in Logic


The soundness and completeness theorems mentioned in the last section mark the
stage at which introductory logic becomes intermediate logic. More than one of the
authors of this book have written distinct sequels in which the completeness of Sen‐
tential is established, along with other results of a ‘metalogical’ flavour. The interested
student is directed to these open access texts:

› Tim Button’s book is Metatheory; it covers just Sentential, and some of the results
there were actually covered in this book already, notably results around expressiveness
of Sentential in §14. Metatheory is available at www.homepages.ucl.ac.uk/~uctytbu/
Metatheory.pdf. It should be accessible for self‐study by students
who have successfully completed this course.

› Antony Eagle’s book Elements of Deductive Logic goes rather further than Meta‐
theory, including direct proofs of Compactness for Sentential, consideration of
alternative derivation systems, discussion of computation and decidability, and
metatheory for Quantifier. It is available at github.com/antonyeagle/edl.

› The most comprehensive open access logic texts are those belonging to the Open
Logic Project (https://openlogicproject.org). There are open texts on intermediate
logic, set theory, modal logic, and non‐classical logics. For most of the topics
I touch on below, the Open Logic Project texts are reliable and accessible sources.
There are many texts, remixing a common core of resources which together make
up the whole open logic text.


The metatheory of classical logic, the logic we’ve discussed, is well‐understood. The
subject of logic itself goes well beyond classical logic, in at least three ways.

Using Logic
The original spur to Frege and colleagues in creating modern logic was to provide a
framework in which mathematics could be formally regimented and mathematical
proof could be systematically represented and checked. So a very standard next step in
logic is to formalise various mathematical theories. This is standardly done by fixing
a symbolisation key to give a mathematical interpretation to predicates and names of
Quantifier, and adding some axioms that carry substantive information about the in‐
terpretation. Often some new expressive resources are added too, such as FUNCTION
SYMBOLS and other term‐forming operators. Some familiar binary function symbols
include ‘+’ and ‘⋅’ (multiplication): these take two terms and yield a complex term.
They operate recursively, so complex terms can themselves be given as arguments. So
‘7 + 5’, ‘7 + 𝑥’, and ‘(3 ⋅ 𝑥) + 9’ are all complex terms.
One standard formalised mathematical theory is ROBINSON ARITHMETIC 𝑸.1 This is a
theory in the language of Quantifier plus function symbols ‘+’, ‘⋅’ and the symbol ‘′ ’ for
successor (the number immediately after a given number). There is one interpreted
name, ‘𝟎’, which names zero in the intended interpretation. The axioms of the theory
– sentences that are assumed to be true, and so serve to delimit the possible inter‐
pretations under consideration – are these, listed here together with their intended
interpretation:

› ∀𝑥 𝑥 ′ ≠ 𝟎 (zero isn’t the successor of any number);

› ∀𝑥∀𝑦(𝑥 ′ = 𝑦 ′ → 𝑥 = 𝑦) (if 𝑥 and 𝑦 have the same successor, then 𝑥 is 𝑦);

› ∀𝑦(𝑦 = 𝟎 ∨ ∃𝑥𝑦 = 𝑥 ′ ) (every number is either zero or the successor of a number);

› ∀𝑥 𝑥 + 𝟎 = 𝑥 (adding zero to a number results in that number);

› ∀𝑥∀𝑦 𝑥 + 𝑦 ′ = (𝑥 + 𝑦)′ (the sum of 𝑥 and the successor of 𝑦 is the successor of


the sum of 𝑥 and 𝑦);

› ∀𝑥 𝑥 ⋅ 𝟎 = 𝟎 (multiplying a number by zero results in zero);

› ∀𝑥∀𝑦 𝑥 ⋅ 𝑦 ′ = (𝑥 ⋅ 𝑦) + 𝑥 (the product of 𝑥 and the successor of 𝑦 is the product


of 𝑥 and 𝑦 plus 𝑥).

These axioms serve to fix the interpretation of addition, multiplication, and successor.
These axioms are all true in interpretations of Quantifier in which the domain is the
natural numbers, and the function symbols have their intended interpretation. The
consequences of these axioms, those sentences that are true in every interpretation
in which all the axioms are true, are the arithmetical truths that hold in Robinson
arithmetic.2

1 Robinson arithmetic is weaker than full ordinary arithmetic, but occupies a special place in formal
mathematics because of its role in the Gödel incompleteness theorems. Robinson arithmetic in more or
less this form is discussed by George Boolos, John P Burgess and Richard C Jeffrey (2007) Computability
and Logic, 5th ed., Cambridge University Press, chapter 16.
Once you have a theory like this, a collection of axioms with an intended interpretation
symbolised in Quantifier, you can ask questions about soundness and completeness of
these theories with respect to that model. Robinson arithmetic is sound with respect
to its intended interpretation in the natural numbers. But it is incomplete: there are
arithmetical truths it does not include.
The most striking aspect of this result is that any arithmetical theory that includes
Robinson arithmetic will be incomplete, in that there are truths that are not con‐
sequences of the axioms. This is the upshot of the famous limitative results that are
the central target of most intermediate logic courses: Gödel’s incompleteness theorem
and Tarski’s theorem on the indefinability of truth. These results rely essentially on
the fact that even simple arithmetical theories include a device of SELF‐REFERENCE,
enabling them to define arithmetical formulae that ‘talk about’ sentences of the lan‐
guage of arithmetic. By a diagonal argument reminiscent of the Liar paradox (‘this
sentence is not true’), Gödel and Tarski show that, as long as arithmetic is consistent,
there will be true sentences that are not provable.3

Extending Logic
The logic we have was designed to model mathematical arguments, so the uses men‐
tioned above are unsurprising. But there are arguments on many topics outside of
mathematics, and it is not obvious that the logics we have are suitable for these argu‐
ments.4
We’ve already seen in §23 an example which we might hope is valid, but which isn’t
symbolised as a valid argument in Quantifier:

It is raining
So: It will be that it was raining.

Given our liberal attitude to structural words (§4.3), the natural idea is to take the
tense expressions – in this case, the future ‘will’ and the past ‘was’ – to be structural
words. The standard approach of TENSE LOGIC is to treat those tenses as monadic sen‐
tential connectives. Then if ‘𝑃’ means ‘it is raining’, the argument could be symbolised
like so: 𝑃 ∴ will was 𝑃.
This symbolisation isn’t especially helpful without some semantic understanding of
what these tense operators mean. In the classical logic of this text, sentences don’t

2 Another project of the same sort is the symbolisation of set theory in Quantifier (see Patrick Suppes
(1972) Axiomatic Set Theory, Dover), or the symbolisations of mereology, the formal theory of part
and whole (see Achille C Varzi (2016) ‘Mereology’ in Edward N Zalta, ed., Stanford Encyclopedia of
Philosophy, https://plato.stanford.edu/entries/mereology/).
3 See Boolos, Burgess and Jeffrey, op. cit., and Raymond M Smullyan (2001) ‘Gödel’s Incompleteness
Theorems’, pp. 72–89 in Lou Goble, ed., The Blackwell Guide to Philosophical Logic, Blackwell.
4 Good sources for the logics discussed in this section and the next – especially tense, modal, and non‐
classical logics – are John P Burgess (2008) Philosophical logic, Princeton University Press; JC Beall and
Bas C van Fraassen (2003) Possibilities and Paradox, Oxford University Press; Graham Priest (2008) An
Introduction to Non‐Classical Logic, Cambridge University Press.
358 NATURAL DEDUCTION FOR QUANTIFIER

change their truth values within a valuation. This is not the way we introduced them,
but we can think of a valuation as a snapshot, capturing a momentary assignment of
truth values at a time. A HISTORY will be an ordered sequence of valuations, represent‐
ing the way that the truth values of the atomic sentences change over time. (We may
also need to mark a special valuation, the present one, which represents how things
actually are.) Then ‘will 𝒜 ’ is true at a valuation in a history iff 𝒜 is true at some later
valuation in that history; ‘was 𝒜 ’ is true at a valuation in a history iff 𝒜 is true at some
earlier valuation in that history. An argument is valid in a history iff every valuation
at which the premises are true is also a valuation at which the conclusion is true; and an
argument is valid iff it is valid on every history. The argument above will turn out to
be valid. Suppose ‘𝑃’ is true at a valuation 𝑣 in a history. Then at any valuation later
than 𝑣, ‘was 𝑃’ is true. But then at 𝑣, there is a later valuation at which ‘was 𝑃’ is true,
so that ‘will was 𝑃’ is true at 𝑣. The history was chosen arbitrarily (though it does need
time to extend without end in both directions), so the argument is valid.
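The semantic clauses just given can be sketched computationally. The following is a minimal sketch, assuming a history is simply a list of valuations (dictionaries from atomic sentence letters to truth values); the names `will_`, `was_` and `P` are illustrative conveniences, not anything from the formal system.

```python
# A minimal sketch of the tense-logic semantics just described, representing a
# history as a list of valuations and a sentence as a function of a history
# and a position in it. All names here are illustrative choices.

def will_(history, i, sentence):
    """'will A' is true at position i iff A is true at some later position."""
    return any(sentence(history, j) for j in range(i + 1, len(history)))

def was_(history, i, sentence):
    """'was A' is true at position i iff A is true at some earlier position."""
    return any(sentence(history, j) for j in range(i))

def P(history, i):
    """The atomic sentence 'P', read off the valuation at position i."""
    return history[i]["P"]

# A short finite history in which 'P' is true only at the middle valuation.
history = [{"P": False}, {"P": True}, {"P": False}]

# At position 1, 'P' is true, and so is 'will was P', as the argument requires:
print(P(history, 1))                                  # True
print(will_(history, 1, lambda h, j: was_(h, j, P)))  # True
```

Notice that in this finite history ‘will was 𝑃’ would be false at the final valuation, which is why the parenthetical remark above about time extending matters.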
This kind of structured collection of valuations we’ve called a history is also used in
other extensions of classical logic. Here is a non‐exhaustive list of examples:

› The logic that results from taking the sentence operators ‘necessarily’ and ‘pos‐
sibly’ to be structural words, known as MODAL LOGIC, uses a collection of valu‐
ations, which modal logicians tend picturesquely to call the POSSIBLE WORLDS;
› The logic that results from taking ‘obligatorily’ and ‘permissibly’ to be structural
words, or DEONTIC LOGIC, also uses the same framework. There the collection of
valuations corresponds to ideal possible worlds; 𝒜 is obligatory iff it is true at all
ideal worlds, permissible iff it is true at some. This is an interesting logic because
the actual world is not typically thought to be among the ideal worlds.

These constructions using valuations are extensions of Sentential. There are also quan‐
tified temporal/modal/deontic logics, where each moment of history (or each possible
world) is an interpretation, not a valuation. There are some rich issues about how the
domain should be permitted to vary over time, or just the extensions of predicates, as
well as disputes over whether the temporal/modal operators should be permitted to
occur within the scope of quantifiers. For example, should ‘∃𝑥 was 𝐹𝑥’ be acceptable
– saying that there is something which has the temporal property was 𝐹 ? Or should we con‐
fine the temporal properties to tensed truth, holding that it is only sentences which have
the temporal properties of previous and subsequent truth?
A huge literature has grown up on the topic of CONDITIONAL LOGICS, logics which
add new connectives, distinct from ‘→’, hoping to better represent the behaviour of
conditionals in English. Perhaps the most celebrated are the logics associated with
Lewis‐Stalnaker conditionals. In Lewis’ version, the counterfactual conditional (§8.6)
has a certain modal force: it tells you about how things would have been under altern‐
ative possible circumstances. So the logic of conditionals he develops draws on the
framework of modal logic, with some innovations of his own. The Stalnaker condi‐
tional is similar, but Stalnaker wants to claim that the indicative conditional too has
modal force.5
5 See David Lewis (1973) Counterfactuals, Blackwell, and Robert C Stalnaker (1975) ‘Indicative condition‐
als’, Philosophia 5: 269–286, https://doi.org/10.1007/bf02379021.

Finally, another common extension of Quantifier is to allow quantification over possible
extensions (i.e., collections of individuals in the domain). This is SECOND‐ORDER LO‐
GIC, the thought being that first‐order logic quantifies over individuals, second‐order
logic quantifies over collections of individuals, third‐order logic over collections of col‐
lections of individuals, and so on. This allows us to represent arguments like this:

1. There is a property Bob has and Alice does not. ∃𝑋(𝑋𝑏 ∧ ¬𝑋𝑎)
So: Bob isn’t Alice. 𝑏 ≠ 𝑎

This sort of example can seem trivial, but in fact second‐order logic has many powerful
features that allow it to vastly outstrip the expressive capacity of Quantifier. It has so
much power that the philosopher W V O Quine once said that second‐order logic is
‘set theory in sheep’s clothing’ – it has substantial mathematical content, dressed up
as if it were nothing but pure logic.6

Modifying Logic
Another direction we could take is not adding extra expressive resources to logic, but
modifying the logic we already have in some way. We’ve already seen some attempts to
do this, when we briefly considered intuitionistic logic and its restrictions on negation
elimination in §30.3.
More recently, influential alternative logics have been explored which result from re‐
stricting the structural rules permitted in proofs (§31.3). These logics, even though they
are purely sentential and lack quantifiers, turn out to be very complex and puzzling.
One prominent alternative is LINEAR LOGIC, which results from restricting the rule
of contraction. Linear logic is sometimes said to be resource conscious; it matters, in
linear logic, how many times you need to appeal to an assumption in order to construct
a proof. Some claims which might be provable by appealing twice to an assumption,
may fail to be provable if you were permitted only to appeal once to that assumption.
So in the assumptions of a linear logic proof it is important to note how many times
an assumption appears – contraction, which says that the same things can be proved
even if you throw away duplicate assumptions, is not compatible with keeping track
of resources in this way. Linear logic has found uses in computer science, in model‐
ling the behaviour of algorithms – in an actual computation, it can be very important
to be efficient in the calls you make on resources. If appealing to an assumption in‐
volves reading that assumption from disk, for example, then the fewer appeals to the
assumption you can make, the faster your algorithm will run.
Amongst philosophers, however, the most prominent substructural logic is RELEVANCE
LOGIC (US), aka RELEVANT LOGIC (UK/Australasia).7 Relevant logicians argue that the
so‐called ‘paradoxes of material implication’ (§11.5) are symptoms of a broader failure
of classical logic to require that the premises of a valid argument must be relevant to
its conclusion. Relevant logicians are particularly incensed by the fact that 𝐴 ∧ ¬𝐴 ⊢ 𝐵;
accordingly, they need to block the standard proof:

6 W V O Quine (1970) Philosophy of Logic, Harvard University Press, at pp. 64–8.


7 See Burgess, op. cit., Priest, op. cit., and Edwin Mares ‘Relevance Logic’ in Edward N Zalta, ed., Stanford
Encyclopedia of Philosophy, https://plato.stanford.edu/entries/logic‐relevance/.

1 𝐴 ∧ ¬𝐴

2 ¬𝐵

3 𝐴 ∧E 1

4 ¬𝐴 ∧E 1

5 𝐵 ¬E 2–3, 2–4

The relevant logician doesn’t wish to abandon any of these rules, or at least, not the
intuition behind them. But they do generally want to resist our being able to introduce
a new irrelevant assumption like ‘¬𝐵’ whenever we want. So relevant logics are a class
of substructural logics which don’t satisfy weakening.
The final class of modifications to classical logic I will consider is that of MANY‐VALUED LOGICS:
logics that go beyond two truth values.8 Some many‐valued logics introduce just a
third truth value, indeterminate, to represent the truth values of unsettled matters
such as contingent future outcomes. Such logics also gained some purchase as con‐
ditional logics – not extending classical logic this time, but changing the behaviour of
the existing conditional. For example, many resist the idea that a conditional should
be true just because its antecedent is false. Perhaps we should say rather that the con‐
ditional is unsettled when the antecedent is false, because we just can’t tell whether
the consequent follows.
Another use of many valued logics appeals to not just a third truth value, but infinitely
many, sometimes called DEGREES OF TRUTH. Such logics have had some appeal to
philosophers working on vagueness, a topic we will turn to shortly.

39.2 Next Steps with Logic


There are lots of places where you might wish to apply logic: anywhere that a persuasive
case needs to be made, or clarity and precision of expression is vital. But some areas
are more active than others. One is mathematics, as already mentioned; and issues of
decidability and effective provability lead quickly into theoretical computer science.
But logic has possibly more surprising applications too:

› In law, where ambiguity needs to be resolved and the job of an advocate is
(though only partly) to persuade through rational argument.9 The skills of lo‐
gical analysis are important in deciding whether legal argument is conclusive.
Certainly the judge needs logic to see through sophistry and rhetoric.
› In linguistics, where methods from logic are foundational to formal theories of
meaning.10 Many formal semantic theories invoke a level of sub‐surface gram‐
matical structure which is the input to semantic analysis, often called LOGICAL

8 See Priest, op. cit., and Beall and van Fraassen, op. cit..
9 Henry Prakken and Giovanni Sartor (2015) ‘Law and Logic: a Review from an Argumentation Perspect‐
ive’, Artificial Intelligence 227: 214–45, https://doi.org/10.1016/j.artint.2015.06.005.
10 See Paul Portner (2005) What is Meaning?, Blackwell.

Figure 39.1: A red‐yellow spectrum.

FORM. The properties and nature of this postulated logical form are heavily
indebted to the kinds of logical resources available to the theorist. We saw a
prominent example of different logical frameworks being brought to bear on a
hypothesis about the ‘real’ meaning of a natural language expression in our brief
discussion of the analysis of definite descriptions in §19.

› In management consulting and related fields, the topic‐neutrality of logic and the
habits of rigorous thought it encourages are very useful for people who have to
offer recommendations about complex areas without necessarily having much
subject‐specific knowledge.

But I am a philosopher, and I’m interested primarily in applications of logic to philo‐
sophical puzzles. I’ll close this discussion, and this book, with a brief account of one
very prominent puzzle where logic illuminates the structure of the problem and the
space of available solutions. It does not, unfortunately, decide everything for us – we
need to make theoretical choices on substantive grounds, and logic, because topic‐
neutral, won’t generally be decisive on such matters.
The problem I’m talking about is the puzzle of vagueness. Consider the excerpt from
the visible spectrum depicted in Figure 39.1. The spectrum looks continuous, but the
discrete medium of the page suggests it is actually made up of many, many thin slivers
of apparently constant colour, laid adjacent to one another. Let there be 1000 slivers
of colour in this spectrum. The spectrum looks continuous because the colour doesn’t
seem to change between adjacent slivers of colour. There are no sharp cutoffs or dis‐
continuities in the spectrum.
The left hand end is clearly red. But because the red area shades smoothly into orange
and then yellow, it seems clear that the word ‘red’ denotes a predicate with no sharp
cutoffs. It is a VAGUE EXPRESSION. Its vagueness, according to many, arises because
‘red’ and other vague words exhibit this feature:

A predicate ‘𝐹 ’ is TOLERANT if extremely similar things are either both
𝐹 or neither 𝐹 .

In the case of ‘red’, adjacent slivers of colour in the spectrum are very similar – they are
visually indistinguishable, i.e., they look the same. And surely if two things look the
same, they cannot be different colours. How could there be two distinct colours that
are indistinguishable in appearance? Colours are appearance properties, most say.

If ‘red’ is tolerant, then any two adjacent slivers will be either both red, or both not
red. So where ‘𝑅’ is interpreted to mean ‘__1 is red’, and ‘𝐴’ means ‘__1 is adjacent
to __2’, in the domain of slivers of colour in this spectrum, this Quantifier sentence
appears to be true:
∀𝑥∀𝑦((𝑅𝑥 ∧ 𝐴𝑥𝑦) → 𝑅𝑦).
This is an instance of the principle of Tolerance, because adjacent slivers are extremely
similar. Let the slivers of colour be denoted ‘𝑎1 , …, 𝑎1000 ’, with ‘𝑎1 ’ the leftmost sliver.
Then the above tolerance principle, together with ‘𝑅𝑎1 ’ and many premises of the
form ‘𝐴𝑎𝑖 𝑎𝑖+1 ’ will entail ‘∀𝑥𝑅𝑥 ’, given that there is a case of red, and we have enough
adjacent cases: as we do in the spectrum. The formal proof is long, but basically: if
there is a case of red, and a case of non‐red, then Tolerance tells us they cannot be
linked by a sequence of adjacent cases. But in the spectrum any two slivers can be
linked by a sequence of adjacent cases.
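The informal reasoning of the last paragraph can be illustrated with a small sketch: given the premise that the first sliver is red, and a tolerance instance for each pair of adjacent slivers, redness propagates down the whole chain. The encoding here (numbering slivers 1 to 1000, and a set `red` for the extension of ‘𝑅’) is my own, not the book’s.

```python
# An illustrative sketch of the sorites reasoning: 'Ra1' plus the tolerance
# conditional, applied along the chain of 1000 adjacent slivers, forces
# redness onto every sliver.

n = 1000
red = {1}                  # premise: the leftmost sliver a1 is red
for i in range(1, n):      # premises: each sliver a_i is adjacent to a_(i+1)
    if i in red:           # tolerance instance: Ra_i and Aa_i a_(i+1) ...
        red.add(i + 1)     # ... together entail Ra_(i+1)

print(len(red) == n)       # True: every sliver comes out red
```

This is just forward chaining through 999 applications of the tolerance conditional, which is why the formal proof mentioned above is long but routine.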
There are a number of responses, some of which we’ve mentioned already. The degree‐
theory solution says that each adjacent sliver to the right is red to a slightly lesser degree
than the sliver to its left, so that it is true to a slightly lower degree that it is red. Such
views abandon tolerance for vague predicates in favour of a principle of ‘closeness’:
that extremely similar things are both 𝐹 to an extremely similar degree.11 There are
also solutions that appeal to truth‐value ‘gaps’, resembling many‐valued logics in some
ways – most prominent among these is the approach known as SUPERVALUATIONISM.12
Finally, there is a purely classical logic solution, Williamson’s theory of EPISTEMICISM,
which says that there are sharp cutoffs (so that the tolerance premise is false), but
explains the appearance that there are no sharp cutoffs as a byproduct of our essential
inability to know where the sharp boundaries lie. Logic won’t decide between these
approaches. But it does help us in formulating them precisely and in enabling us to
classify their strengths and weaknesses precisely. This is an advantage in most areas
of controversy and debate.

Key Ideas in §39


› With the basics in hand, there are many next steps in logic.
› It is natural to try to establish more metatheoretical results about
the systems we have discussed, and some resources are indicated
in the text to help you do this if you wish.
› There are various attempts to extend, modify and make use of lo‐
gic, in mathematics, philosophy, computer science, and linguist‐
ics. The field is vast and many topics are underexplored.
› Logic is useful in clarifying controversies and debates, and is a
tool you will likely use long after you finish with this book.

11 See N J J Smith (2008) Vagueness and Degrees of Truth, Oxford University Press.
12 See Kit Fine (1975) ‘Vagueness, truth and logic’, Synthese 30: 265–300, https://doi.org/10.1007/
bf00485047.
Appendices
Appendix A
Alternative Terminology and
Notation

Alternative terminology
Sentential logic The study of Sentential goes by other names. The name we have
given it – sentential logic – derives from the fact that it deals with whole sentences as
its most basic building blocks. Other features motivate different names. Sometimes
it is called truth‐functional logic, because it deals only with assignments of truth and
falsity to sentences, and its connectives are all truth‐functional. Sometimes it is called
propositional logic, which strikes me as a misleading choice. This may sometimes be
innocent, as some people use ‘proposition’ to mean ‘sentence’. However, noting that
different sentences can mean the same thing, many people use the term ‘proposition’
as a useful way of referring to the meaning of a sentence, what that sentence expresses.
In this sense of ‘proposition’, ‘propositional logic’ is not a good name for the study of
Sentential, since synonymous but not logically equivalent sentences like ‘Vixens are
bold’ and ‘Female foxes are bold’ will be logically distinguished even though they ex‐
press the same proposition.

Quantifier logic The study of Quantifier goes by other names. Sometimes it is called
predicate logic, because it allows us to apply predicates to objects. Sometimes it is
called first‐order logic, because it makes use only of quantifiers over objects, and vari‐
ables that can be substituted for constants. This is to be distinguished from higher‐
order logic, which introduces quantification over properties, and variables that can be
substituted for predicates. (This device would allow us to formalise such sentences as
‘Jane and Kane are alike in every respect’, treating the italicised phrase as a quantifier
over ‘respects’, i.e., properties. This results in something like ∀𝑃(𝑃𝑗 ↔ 𝑃𝑘), which is
not a sentence of Quantifier.)

Atomic sentences Some texts call atomic sentences sentence letters. Many texts use
lower‐case roman letters, and subscripts, to symbolise atomic sentences.


Formulas Some texts call formulas well‐formed formulas. Since ‘well‐formed for‐
mula’ is such a long and cumbersome phrase, they then abbreviate this as wff. This
is both barbarous and unnecessary (such texts do not make any important use of the
contrasting class of ‘ill‐formed formulas’). I have stuck with ‘formula’.
In §6, I defined sentences of Sentential. These are also sometimes called ‘formulas’
(or ‘well‐formed formulas’) since in Sentential, unlike Quantifier, there is no distinction
between a formula and a sentence.

Valuations Some texts call valuations truth‐assignments; others call them struc‐
tures.

Expressive adequacy Some texts describe Sentential as truth‐functionally complete,
rather than expressively adequate.

n‐place predicates I have called predicates ‘one‐place’, ‘two‐place’, ‘three‐place’, etc.
Other texts respectively call them ‘monadic’, ‘dyadic’, ‘triadic’, etc. Still other texts call
them ‘unary’, ‘binary’, ‘ternary’, etc.

Names In Quantifier, I have used ‘𝑎’, ‘𝑏’, ‘𝑐 ’, for names. Some texts call these ‘constants’,
because they have a constant referent in a given interpretation, as opposed to variables
which have variable referents. Other texts do not mark any difference between names
and variables in the syntax. Those texts focus simply on whether the symbol occurs
bound or unbound.

Domains Some texts describe a domain as a ‘domain of discourse’, or a ‘universe of
discourse’.

Interpretations Some texts call interpretations models; others call them structures.

Alternative notation
In the history of formal logic, different symbols have been used at different times and
by different authors. Often, authors were forced to use notation that their printers
could typeset.
This appendix presents some common symbols, so that you can recognise them if you
encounter them in an article or in another book. Unless you are reading a research
article in philosophical or mathematical logic, these symbols are merely different nota‐
tions for the very same underlying things. So the truth‐functional connective we refer
to with ‘∧’ is the very same one that another textbook might refer to with ‘&’. Com‐
pare: the number six can be referred to by the numeral ‘6’, the Roman numeral ‘VI’, the
English word ‘six’, the German word ‘sechs’, the kanji character ‘六’, etc.

Negation Two commonly used symbols are the not sign, ‘¬’, and the tilde operator,
‘∼’. In some more advanced formal systems it is necessary to distinguish between two
kinds of negation; the distinction is sometimes represented by using both ‘¬’ and ‘∼’.
Some texts use an overline to indicate negation, so that ‘𝒜’ with a line drawn over it
expresses the same thing as ‘¬𝒜’. This is clear enough if 𝒜 is an atomic sentence, but
quickly becomes cumbersome if we attempt to nest negations: ‘¬(𝐴 ∧ ¬(¬¬𝐵 ∧ 𝐶))’
becomes an unwieldy formula with one line over the whole conjunction, a second over
the subformula ‘¬¬𝐵 ∧ 𝐶 ’, and two more stacked over ‘𝐵’.

Disjunction The symbol ‘∨’ is typically used to symbolize inclusive disjunction. One
etymology is from the Latin word ‘vel’, meaning ‘or’.

Conjunction Conjunction is often symbolized with the ampersand, ‘&’. The am‐
persand is a decorative form of the Latin word ‘et’, which means ‘and’. (Its etymology
still lingers in certain fonts, particularly in italic fonts; thus an italic ampersand might
appear as ‘& ’.) Using this symbol is not recommended, since it is commonly used in
natural English writing (e.g., ‘Smith & Sons’). As a symbol in a formal system, the am‐
persand is not the English word ‘&’, so it is much neater to use a completely different
symbol. The most common choice now is ‘∧’, which is a counterpart to the symbol used
for disjunction. Sometimes a single dot, ‘•’, is used (you may have seen this in Argument
and Critical Thinking). In some older texts, there is no symbol for conjunction at all;
‘𝐴 and 𝐵’ is simply written ‘𝐴𝐵’. These are often texts that use the overlining notation
for negation. Such texts often involve languages in which conjunction and negation
are the only connectives, and they typically also dispense with parentheses which are
unnecessary in such austere languages, because negation scope is indicated directly.
‘¬(𝐴 ∧ 𝐵)’ can be distinguished from ‘(¬𝐴 ∧ 𝐵)’ easily: a line over the whole of ‘𝐴𝐵’ vs. a line over ‘𝐴’ alone.

Material conditional There are two common symbols for the material conditional:
the arrow, ‘→’, and the hook, ‘⊃’. Rarely you might see ‘⇒’.

Material biconditional The double‐headed arrow, ‘↔’, is used in systems that use
the arrow to represent the material conditional. Systems that use the hook for the
conditional typically use the triple bar, ‘≡’, for the biconditional.

Quantifiers The universal quantifier is typically symbolised ‘∀’ (a rotated ‘A’), and
the existential quantifier as ‘∃’ (a rotated ‘E’). In some texts, there is no separate symbol
for the universal quantifier. Instead, the variable is just written in parentheses in front
of the formula that it binds. For example, they might write ‘(𝑥)𝑃𝑥 ’ where we would
write ‘∀𝑥𝑃𝑥 ’.
The common alternative notations are summarised below:

negation ¬, ∼, overline
conjunction ∧, &, •
disjunction ∨
conditional →, ⊃, ⇒
biconditional ↔, ≡
universal quantifier ∀𝑥 , (𝑥)

Doing without parentheses: Polish notation


We have established some conventions governing when you can omit parentheses in
Sentential (in §§6.5 and 10.3). These conventions are useful in practice. It can be hard to
read sentences with many nested parentheses. Moreover, it is easy to make a mistake
when constructing sentences, if you accidentally omit a needed bracket. So there has
been some interest in ways of doing sentential logic without parentheses. Frege’s ini‐
tial formulation of sentential logic in his Begriffschrift (1879) lacked parentheses, but
involves a complicated nonlinear branching structure which is difficult to typeset and
has not been much used by anyone since.
The now‐standard approach to sentential logic without parentheses was due to the
Polish logician Jan Łukasiewicz, and for that reason it has become known as POLISH
NOTATION. It is a purely syntactic variant of our system Sentential: the truth‐functions
expressed by the connectives remain the same, but the sentences are written very dif‐
ferently.1
The basic idea is that rather than writing a connective in a logically complex sentence
between the two subsentences, it should be written before them – in a sense, it treats
all binary connectives like negation. So if 𝒜 and ℬ are two sentences, the sentence
we write in Sentential as 𝒜 → ℬ would be written as 𝐶𝒜ℬ, where the ‘𝐶 ’ is the symbol
used for the conditional connective. Because the language uses some of the standard
upper case letters for its connectives, there is potential for confusion if we use them
for atomic sentences too. So the atomic sentences of this Polish language will be ‘𝑝’, ‘𝑞’,
or ‘𝑟’, with any numerical subscripts.
The standard connectives are these, with their Sentential equivalents:

Connective Polish Sentential


negation 𝑁𝒜 ¬𝒜
conjunction 𝐾𝒜ℬ (𝒜 ∧ ℬ)
disjunction 𝐴𝒜ℬ (𝒜 ∨ ℬ)
conditional 𝐶𝒜ℬ (𝒜 → ℬ)
biconditional 𝐸𝒜ℬ (𝒜 ↔ ℬ)

We can recursively define a sentence of this Polish language:

1 In what follows I draw on Peter Simons (2017) ‘Łukasiewicz’s Parenthesis‐Free or Polish Notation’, in Ed‐
ward N Zalta, ed., The Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/archives/
spr2017/entries/lukasiewicz/polish‐notation.html.

1. Any atomic sentence (i.e., 𝑝, 𝑞, 𝑟, 𝑝1 , 𝑞1 , 𝑟1 , …) is a sentence;

2. If 𝒜 and ℬ are sentences, then so are ‘𝑁𝒜 ’, 𝐾𝒜ℬ, 𝐴𝒜ℬ , 𝐶𝒜ℬ and 𝐸𝒜ℬ;

3. Nothing else is a sentence.

It is easy to see that one can type sentences in Polish notation without the use of any
special symbols on a standard typewriter keyboard.
The notation doesn’t require parentheses. The ambiguous string ‘𝑃 → 𝑄 → 𝑅’ may
correspond to either of these two Sentential sentences: (i) ‘(𝑃 → (𝑄 → 𝑅))’ or (ii) ‘((𝑃 →
𝑄) → 𝑅)’. These are symbolised in Polish notation as, respectively, (i′) ‘𝐶𝑝𝐶𝑞𝑟’ and (ii′)
‘𝐶𝐶𝑝𝑞𝑟’. Why?

› You know, from the recursive definition, that any logically complex sentence is
formed by taking a connective and placing one or two sentences after it. So the
main connective is always the leftmost character.

› You then need only to identify a sentence that follows it. So in (i′), the main
connective is ‘𝐶 ’, which is followed by the atomic sentence ‘𝑝’ and the complex
sentence ‘𝐶𝑞𝑟’; in (ii′), with the same main connective, the sentences which fol‐
low are ‘𝐶𝑝𝑞’ and ‘𝑟’.

› No ambiguity is possible. It would be possible only if there were some sentences
𝒜 , ℬ , 𝒞 and 𝒟 such that the string 𝒜ℬ is identical to the string 𝒞𝒟, where 𝒜 ≠ 𝒞 .
That would mean, without loss of generality, that 𝒜 is an initial part of 𝒞 . But
then 𝒞 cannot be a sentence, since it is a string comprised of a sentence plus
some other symbols tacked on the end, which is not a sentence, by the recursive
definition.

The notation can be very concise. Compare the following:

(((𝑃 ∧ 𝑄) → 𝑅) → (𝑃 ∧ (¬𝑄 ∨ 𝑅)));

𝐶𝐶𝐾𝑝𝑞𝑟𝐾𝑝𝐴𝑁𝑞𝑟.

The Sentential version has 22 characters, 10 of them parentheses; the Polish version just
12.
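The parsing idea sketched above – the main connective is always the leftmost character – also makes Polish notation easy to process mechanically. Here is an illustrative Python sketch of a translator into Sentential notation; the function names, and the choice to render the Polish atomic ‘𝑝’ as the Sentential atomic ‘𝑃’, are my own conventions.

```python
# A sketch of a translator from Polish notation into Sentential notation.
# Since the main connective is always leftmost, one left-to-right recursive
# pass recovers the whole structure without needing parentheses.

BINARY = {"K": "∧", "A": "∨", "C": "→", "E": "↔"}

def parse(s, i=0):
    """Return (Sentential string, index just past the sentence starting at i)."""
    c = s[i]
    if c == "N":                                # negation takes one subsentence
        sub, j = parse(s, i + 1)
        return "¬" + sub, j
    if c in BINARY:                             # binary connectives take two
        left, j = parse(s, i + 1)
        right, k = parse(s, j)
        return f"({left} {BINARY[c]} {right})", k
    return c.upper(), i + 1                     # atomic sentence: 'p' becomes 'P'

def polish_to_sentential(s):
    result, j = parse(s)
    if j != len(s):                             # leftover symbols tacked on the end
        raise ValueError("symbols left over: not a sentence")
    return result

print(polish_to_sentential("CpCqr"))         # (P → (Q → R))
print(polish_to_sentential("CCKpqrKpANqr"))  # (((P ∧ Q) → R) → (P ∧ (¬Q ∨ R)))
```

The leftover-symbols check mirrors the unambiguity argument above: a string that parses as a sentence with symbols remaining is not itself a sentence.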
The notation never really caught on, partly because – as in the example above – it is not
always immediate to the naked eye where one constituent sentence begins and another
ends. But the main obstacle to its wider use was the lack of any easy way to indicate the
scope of a quantifier. Thus the notation has become something of a historical curiosity.
Appendix B
Quick Reference

Schematic Truth Tables

𝒜   ¬𝒜
T    F
F    T

𝒜  ℬ   𝒜∧ℬ  𝒜∨ℬ  𝒜→ℬ  𝒜↔ℬ
T  T     T     T     T     T
T  F     F     T     F     F
F  T     F     T     T     F
F  F     F     F     T     T

Symbolisation
Sentential connectives

It is not the case that P ¬𝑃
Either P, or Q (𝑃 ∨ 𝑄)
Neither P, nor Q ¬(𝑃 ∨ 𝑄) or (¬𝑃 ∧ ¬𝑄)
Both P, and Q (𝑃 ∧ 𝑄)
If P, then Q (𝑃 → 𝑄)
P only if Q (𝑃 → 𝑄)
P if and only if Q (𝑃 ↔ 𝑄)
P unless Q (𝑃 ∨ 𝑄)

Predicates

All Fs are Gs ∀𝑥(𝐹𝑥 → 𝐺𝑥)
Some Fs are Gs ∃𝑥(𝐹𝑥 ∧ 𝐺𝑥)
Not all Fs are Gs ¬∀𝑥(𝐹𝑥 → 𝐺𝑥) or ∃𝑥(𝐹𝑥 ∧ ¬𝐺𝑥)
No Fs are Gs ∀𝑥(𝐹𝑥 → ¬𝐺𝑥) or ¬∃𝑥(𝐹𝑥 ∧ 𝐺𝑥)


Identity

Only c is G ∀𝑥(𝐺𝑥 ↔ 𝑥 = 𝑐)
Everything besides c is G ∀𝑥(¬𝑥 = 𝑐 → 𝐺𝑥)
The F is G ∃𝑥(𝐹𝑥 ∧ ∀𝑦(𝐹𝑦 → 𝑥 = 𝑦) ∧ 𝐺𝑥)
It is not the case that the F is G ¬∃𝑥(𝐹𝑥 ∧ ∀𝑦(𝐹𝑦 → 𝑥 = 𝑦) ∧ 𝐺𝑥)
The F is nonG ∃𝑥(𝐹𝑥 ∧ ∀𝑦(𝐹𝑦 → 𝑥 = 𝑦) ∧ ¬𝐺𝑥)

Using identity to symbolize quantities

There are at least 𝑛 Fs.

one: ∃𝑥𝐹𝑥
two: ∃𝑥1 ∃𝑥2 (𝐹𝑥1 ∧ 𝐹𝑥2 ∧ ¬𝑥1 = 𝑥2 )
three: ∃𝑥1 ∃𝑥2 ∃𝑥3 (𝐹𝑥1 ∧ 𝐹𝑥2 ∧ 𝐹𝑥3 ∧ ¬𝑥1 = 𝑥2 ∧ ¬𝑥1 = 𝑥3 ∧ ¬𝑥2 = 𝑥3 )
𝑛: ∃𝑥1 …∃𝑥𝑛 (𝐹𝑥1 ∧ … ∧ 𝐹𝑥𝑛 ∧ ¬𝑥1 = 𝑥2 ∧ … ∧ ¬𝑥𝑛−1 = 𝑥𝑛 )
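Since the ‘at least 𝑛’ schema is completely mechanical, it can be generated by a short program. The following sketch is illustrative only: the function name `at_least` and the flat variable names `x1`, `x2`, … (standing in for subscripted variables) are conveniences of mine, not the book’s notation.

```python
# A sketch of generating the 'at least n Fs' schema mechanically: n existential
# quantifiers, n predications, and one non-identity conjunct per pair of
# distinct variables.

def at_least(n):
    """Return a Quantifier-style sentence saying there are at least n Fs."""
    quantifiers = "".join(f"∃x{i}" for i in range(1, n + 1))
    conjuncts = [f"Fx{i}" for i in range(1, n + 1)]
    # one non-identity conjunct for each pair of distinct variables
    conjuncts += [f"¬x{i} = x{j}"
                  for i in range(1, n + 1) for j in range(i + 1, n + 1)]
    return quantifiers + "(" + " ∧ ".join(conjuncts) + ")"

print(at_least(2))  # ∃x1∃x2(Fx1 ∧ Fx2 ∧ ¬x1 = x2)
print(at_least(3))  # ∃x1∃x2∃x3(Fx1 ∧ Fx2 ∧ Fx3 ∧ ¬x1 = x2 ∧ ¬x1 = x3 ∧ ¬x2 = x3)
```

Note how quickly the schema grows: ‘at least 𝑛’ requires 𝑛(𝑛−1)/2 non-identity conjuncts.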

There are at most 𝑛 Fs.

One way to say ‘there are at most 𝑛 Fs’ is to put a negation sign in front of the symbol‐
isation for ‘there are at least 𝑛 + 1 Fs’. Equivalently, we can offer:

one: ∀𝑥1 ∀𝑥2 ((𝐹𝑥1 ∧ 𝐹𝑥2 ) → 𝑥1 = 𝑥2 )
two: ∀𝑥1 ∀𝑥2 ∀𝑥3 ((𝐹𝑥1 ∧ 𝐹𝑥2 ∧ 𝐹𝑥3 ) → (𝑥1 = 𝑥2 ∨ 𝑥1 = 𝑥3 ∨ 𝑥2 = 𝑥3 ))
three: ∀𝑥1 ∀𝑥2 ∀𝑥3 ∀𝑥4 ((𝐹𝑥1 ∧ 𝐹𝑥2 ∧ 𝐹𝑥3 ∧ 𝐹𝑥4 ) →
(𝑥1 = 𝑥2 ∨ 𝑥1 = 𝑥3 ∨ 𝑥1 = 𝑥4 ∨ 𝑥2 = 𝑥3 ∨ 𝑥2 = 𝑥4 ∨ 𝑥3 = 𝑥4 ))
𝑛: ∀𝑥1 …∀𝑥𝑛+1 ((𝐹𝑥1 ∧ … ∧ 𝐹𝑥𝑛+1 ) → (𝑥1 = 𝑥2 ∨ … ∨ 𝑥𝑛 = 𝑥𝑛+1 ))

There are exactly 𝑛 Fs.

One way to say ‘there are exactly 𝑛 Fs’ is to conjoin two of the symbolizations above
and say ‘there are at least 𝑛 Fs and there are at most 𝑛 Fs.’ The following equivalent
formulae are shorter:

zero: ∀𝑥¬𝐹𝑥
one: ∃𝑥(𝐹𝑥 ∧ ∀𝑦(𝐹𝑦 → 𝑥 = 𝑦))
two: ∃𝑥1 ∃𝑥2 (𝐹𝑥1 ∧ 𝐹𝑥2 ∧ ¬𝑥1 = 𝑥2 ∧ ∀𝑦(𝐹𝑦 → (𝑦 = 𝑥1 ∨ 𝑦 = 𝑥2 )))
three: ∃𝑥1 ∃𝑥2 ∃𝑥3 (𝐹𝑥1 ∧ 𝐹𝑥2 ∧ 𝐹𝑥3 ∧ ¬𝑥1 = 𝑥2 ∧ ¬𝑥1 = 𝑥3 ∧ ¬𝑥2 = 𝑥3 ∧
∀𝑦(𝐹𝑦 → (𝑦 = 𝑥1 ∨ 𝑦 = 𝑥2 ∨ 𝑦 = 𝑥3 )))
𝑛: ∃𝑥1 …∃𝑥𝑛 (𝐹𝑥1 ∧ … ∧ 𝐹𝑥𝑛 ∧ ¬𝑥1 = 𝑥2 ∧ … ∧ ¬𝑥𝑛−1 = 𝑥𝑛 ∧
∀𝑦(𝐹𝑦 → (𝑦 = 𝑥1 ∨ … ∨ 𝑦 = 𝑥𝑛 )))

Basic deduction rules for Sentential

Conjunction Introduction, p. 246

𝑚 𝒜
𝑛 ℬ
𝒜 ∧ ℬ    ∧I 𝑚, 𝑛

Conjunction Elimination, p. 248

𝑚 𝒜 ∧ ℬ
𝒜    ∧E 𝑚

𝑚 𝒜 ∧ ℬ
ℬ    ∧E 𝑚

Conditional Introduction, p. 261

𝑖 𝒜
𝑗 ℬ
𝒜 → ℬ    →I 𝑖–𝑗

Conditional Elimination, p. 251

𝑚 𝒜 → ℬ
𝑛 𝒜
ℬ    →E 𝑚, 𝑛

Negation Introduction, p. 275

𝑖 𝒜
𝑗 ℬ
𝑘 ¬ℬ
¬𝒜    ¬I 𝑖–𝑗, 𝑖–𝑘

Negation Elimination, p. 277

𝑖 ¬𝒜
𝑗 ℬ
𝑘 ¬ℬ
𝒜    ¬E 𝑖–𝑗, 𝑖–𝑘

Disjunction Introduction, p. 254

𝑚 𝒜
𝒜 ∨ ℬ    ∨I 𝑚

𝑚 𝒜
ℬ ∨ 𝒜    ∨I 𝑚

Disjunction Elimination, p. 272

𝑚 𝒜 ∨ ℬ
𝑖 𝒜
𝑗 𝒞
𝑘 ℬ
𝑙 𝒞
𝒞    ∨E 𝑚, 𝑖–𝑗, 𝑘–𝑙

Biconditional Introduction, p. 270

𝑖 𝒜
𝑗 ℬ
𝑘 ℬ
𝑙 𝒜
𝒜 ↔ ℬ    ↔I 𝑖–𝑗, 𝑘–𝑙

Biconditional Elimination, p. 252

𝑚 𝒜 ↔ ℬ
𝑛 𝒜
ℬ    ↔E 𝑚, 𝑛

𝑚 𝒜 ↔ ℬ
𝑛 ℬ
𝒜    ↔E 𝑚, 𝑛

New Assumption, p. 243

𝑖 𝒜    (an assumption may be introduced at any point, opening a subproof)

Reiteration, p. 256

𝑚 𝒜
⋮
𝒜    R 𝑚

Derived rules for Sentential, §33

Disjunctive syllogism

𝑚 𝒜 ∨ ℬ
𝑛 ¬𝒜
ℬ    DS 𝑚, 𝑛

𝑚 𝒜 ∨ ℬ
𝑛 ¬ℬ
𝒜    DS 𝑚, 𝑛

Modus Tollens

𝑚 𝒜 → ℬ
𝑛 ¬ℬ
¬𝒜    MT 𝑚, 𝑛

Double Negation Elimination

𝑚 ¬¬𝒜
𝒜    ¬¬E 𝑚

Tertium non datur

𝑖 𝒜
𝑗 ℬ
𝑘 ¬𝒜
𝑙 ℬ
ℬ    TND 𝑖–𝑗, 𝑘–𝑙

De Morgan Rules

𝑚 ¬(𝒜 ∨ ℬ)
¬𝒜 ∧ ¬ℬ    DeM 𝑚

𝑚 ¬𝒜 ∧ ¬ℬ
¬(𝒜 ∨ ℬ)    DeM 𝑚

𝑚 ¬(𝒜 ∧ ℬ)
¬𝒜 ∨ ¬ℬ    DeM 𝑚

𝑚 ¬𝒜 ∨ ¬ℬ
¬(𝒜 ∧ ℬ)    DeM 𝑚



Basic deduction rules for Quantifier

Universal elimination, p. 319

𝑚 ∀𝓍𝒜
𝒜|𝒸↷𝓍    ∀E 𝑚

𝒸 can be any name

Universal introduction, p. 325

𝑚 𝒜|𝒸↷𝓍
∀𝓍𝒜    ∀I 𝑚

𝒸 must not occur in any undischarged assumption, or in 𝒜

Existential introduction, p. 321

𝑚 𝒜|𝒸↷𝓍
∃𝓍𝒜    ∃I 𝑚

𝒸 can be any name

Existential elimination, p. 329

𝑚 ∃𝓍𝒜
𝑖 𝒜|𝒸↷𝓍
𝑗 ℬ
ℬ    ∃E 𝑚, 𝑖–𝑗

𝒸 must not occur in any undischarged assumption, in ∃𝓍𝒜 , or in ℬ

Identity introduction, p. 345

𝒸 = 𝒸    =I

Identity elimination, p. 346

𝑚 𝒶 = 𝒷
𝑛 𝒜|𝒶↷𝓍
𝒜|𝒷↷𝓍    =E 𝑚, 𝑛

Derived rules for Quantifier, §36

𝑚 ∀𝓍¬𝒜
¬∃𝓍𝒜    CQ∀/¬∃ 𝑚

𝑚 ¬∃𝓍𝒜
∀𝓍¬𝒜    CQ¬∃/∀ 𝑚

𝑚 ∃𝓍¬𝒜
¬∀𝓍𝒜    CQ∃/¬∀ 𝑚

𝑚 ¬∀𝓍𝒜
∃𝓍¬𝒜    CQ¬∀/∃ 𝑚

Alternative identity elimination, p. 348

𝑚 𝒶 = 𝒷
𝑛 𝒜|𝒷↷𝓍
𝒜|𝒶↷𝓍    =ES 𝑚, 𝑛
Appendix C
Index of defined terms

active assumption, 244 conditional logics, 358


antecedent, 43 conditional proof, 285
antisymmetric, 194 conjunction, 39
argument, 2 conjunction elimination, 248
assumption line, 242 conjunction introduction, 247
assumptions, 240 conjuncts, 39
asymmetric, 194 consequent, 43
atomic formulae, 171 constructivism, 288
atomic sentences, 33 context sensitive, 75
contingent, 21
biconditional, 44 contraction, 298
biconditional elimination, 252 contradiction, 80
biconditional introduction, 270 conversational context, 112
binary relation, 191 coordinator, 28
bivalence, 89 counterfactuals, 71
bound by, 174 countermodel, 226
bound variable, 173
declarative sentences, 17
canonical clause, 27 deduction theorem, 292
clash of variables, 143 definite descriptions, 159
closed, 260 degrees of truth, 360
commutative, 66 deontic logic, 358
commutativity, 297 derived, 303
complete truth table, 77 determiner phrase, 26
completeness, 353 directed graph, 189
compositional, 65 directed hypergraphs, 191
compound, 28, 65 discharging, 260
conclusion, 2 disjunction, 42
conclusive, 6 disjunction elimination, 273
conditional elimination, 251 disjunction introduction, 254
conditional introduction, 263 disjunction property, 289


disjunctive normal form, 107
disjunctive syllogism, 304
disjuncts, 42
domain, 118
double negation elimination, 306
double turnstile, 85
dummy pronoun, 116
dummy pronouns, 115
elimination, 240
empty, 124
empty extension, 180
entail, 84, 217
enthymeme, 14
epistemicism, 362
epistemology, 9
equivalence classes, 196
equivalence relation, 195
Euler diagram, 186
exclusive or, 42
existential introduction, 321
existential quantifier, 117
exportation, 265
expression, 49, 170
extension, 178
extensional language, 179
False, 18
form, 10
formal language, 31
formal logic, 11
formal proof, 235
formation tree, 52
free choice ‘any’, 128
free variable, 173
function, 63
function symbols, 356
generic, 133, 159
grammatical subject, 28
history, 358
identity of indiscernibles, 183
iff, 44
immediate subsentences, 65
implication, 234
importation, 265
impossibility, 20
inclusive or, 42
indefinite descriptions, 159
indicative, 71
indirect, 101
indirect proof, 288
indiscernibility of identicals, 183
Induction, 13
Inductive logic, 13
inferentialist, 287
instantiating name, 206
interpretation, 184
intersective, 130
intransitive, 193
introduction, 240
intuitionism, 288
invalid, 11, 218
irreflexive, 192
jointly consistent, 18, 82, 216
jointly contrary, 295
jointly formally consistent, 19
jointly inconsistent, 18, 82, 216
law of excluded middle, 278
law of non‐contradiction, 293
Leibniz’ Law, 154
linear logic, 359
logical consequence, 3
logical falsehood, 80, 216
logical form, 361
logical truth, 79, 216
logically equivalent, 80, 216
main connective, 52, 172
many‐valued logics, 105, 360
mention, 56
metalanguage, 57
metalinguistic negation, 165
modal logic, 358
modus ponens, 252
modus tollens, 305
monotonic, 23
names, 113
narrow scope, 164
natural deduction, 236
natural kind terms, 113

necessary falsehood, 20
necessary truth, 20
negation elimination, 277
negation introduction, 275
negative polarity (NPI) ‘any’, 128
new assumption, 243
nodes, 189
non‐identity predicate, 153
nonsymmetric, 194
noun phrase, 26
numerical identity, 152
object language, 57
one‐place, 136
opaque, 184
open sentences, 117
ordered pairs, 181
ordering, 196
paraphrases, 36
partial order, 196
partial truth table, 97
partition, 196
phrase structure grammars, 26
Polish notation, 367
possible worlds, 358
pragmatics, 255
predicate, 28
premises, 2
presupposition failure, 165
privative, 130
proof of an argument, 241
Proper names, 112
property, 185
provable from, 291
provably equivalent, 295
provably inconsistent, 295
qualitatively identical, 152
quantifier phrase, 117
quasi‐canonical clauses, 31
quiver, 191
range, 242
recursive, 50
reductio, 229
reflexive, 153, 192
reiteration, 256
relation, 186
relevance logic, 359
relevant logic, 359
representationalism, 287
Robinson arithmetic, 356
satisfies, 212
schematic truth table, 66
scope, 52, 172
second‐order logic, 359
self‐reference, 357
semantic presupposition, 165
semantics, 2, 31
sentence, 50
sentence connectives, 28
sentences, 174
serial, 199
singular term, 111
sound, 12
soundness, 353
strict order, 196
strict total order, 196
structural words, 28
structure, 7, 25
subjunctive, 71
subproofs, 259
subsective, 132
subsentences, 27
subset, 186
substituting, 206
substitution instance, 206
substitutional quantification, 203
supervaluationism, 362
symbolisation key, 33
symmetric, 153, 194
syntactic tree, 26
syntax, 31
tautologies, 79
tense logic, 357
term, 171
tertium non datur, 307
the conditional, 43
theorem, 293
three‐place, 137
tolerant, 361
total, 196
total function, 74
total order, 196
transitive, 153, 193
True, 18
truth table, 75
truth value, 17
truth‐function, 64
truth‐functional, 65
truth‐functionally complete, 105
two‐place, 137
undischarged, 241
universal elimination, 319
universal quantifier, 117
use, 56
vague expression, 361
valid, 11, 218
valuation, 74
variable, 117
variable assignment, 210
verb phrase, 26
verb phrases, 114
weakening, 298
wide scope, 164

Acknowledgements
Antony Eagle would like to thank P.D. Magnus and Tim Button for their work from
which this text derives, and those acknowledged below who helped them. Thanks also
to Atheer Al‐Khalfa, Caitlin Bettess, Andrew Carter, Keith Dear, Jack Garland, Bowen
Jiang, Millie Lewis, Yaoying Li, Jon Opie, Matt Nestor, Jaime von Schwarzburg, and
Mike Walmer for comments on successive versions of the Adelaide text.

Tim Button would like to thank P.D. Magnus for his extraordinary act of generosity, in
making forall𝓍 available to everyone. Thanks also to Alfredo Manfredini Böhm, Sam
Brain, Felicity Davies, Emily Dyson, Phoebe Hill, Richard Jennings, Justin Niven, and
Igor Stojanovic for noticing errata in earlier versions.

P.D. Magnus would like to thank the people who made this project possible. Notable
among these are Cristyn Magnus, who read many early drafts; Aaron Schiller, who was
an early adopter and provided considerable, helpful feedback; and Bin Kang, Craig
Erb, Nathan Carter, Wes McMichael, Selva Samuel, Dave Krueger, Brandon Lee, and
the students of Introduction to Logic, who detected various errors in previous versions
of the book.

About the Authors


Antony Eagle is Associate Professor of Philosophy at the University of Adelaide. His research interests include metaphysics, philosophy of probability, philosophy of physics, and philosophy of logic and language. antonyeagle.org

Tim Button is a Lecturer in Philosophy at UCL. His first book, The Limits of Realism, was published by Oxford University Press in 2013. www.homepages.ucl.ac.uk/~uctytbu/index.html

P.D. Magnus is a professor at the University at Albany, State University of New York.
His primary research is in the philosophy of science. www.fecundity.com/job/

In the Introduction to his Symbolic Logic, Charles Lutwidge Dodgson advised:

When you come to any passage you don’t understand, read it again: if you
still don’t understand it, read it again: if you fail, even after three readings,
very likely your brain is getting a little tired. In that case, put the book
away, and take to other occupations, and next day, when you come to it
fresh, you will very likely find that it is quite easy.

The same might be said for this volume, although readers are forgiven if they take a
break for snacks after two readings.
